US20040193760A1 - Storage device - Google Patents

Storage device

Info

Publication number
US20040193760A1
US20040193760A1 (application Ser. No. US 10/775,886)
Authority
US
United States
Prior art keywords
file
storage
interface control
control device
data
Prior art date
Legal status
Abandoned
Application number
US10/775,886
Inventor
Naoto Matsunami
Koji Sonoda
Akira Yamamoto
Masafumi Nozawa
Masaaki Iwasaki
Current Assignee
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Assigned to HITACHI, LTD. reassignment HITACHI, LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: IWASAKI, MASAAKI, SONODA, KOJI, YAMAMOTO, AKIRA, MATSUNAMI, NAOTO, NOZAWA, MASAFUMI
Publication of US20040193760A1 publication Critical patent/US20040193760A1/en
Priority to US11/030,608 priority Critical patent/US7330950B2/en
Priority to US11/121,998 priority patent/US7356660B2/en
Priority to US12/155,703 priority patent/US7925851B2/en
Priority to US13/051,010 priority patent/US8230194B2/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0602 - Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0608 - Saving storage space on storage systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 - File systems; File servers
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0602 - Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061 - Improving I/O performance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0628 - Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0646 - Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/0647 - Migration mechanisms
    • G06F3/0649 - Lifecycle management
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0668 - Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 - In-line storage system
    • G06F3/0683 - Plurality of storage devices
    • G06F3/0685 - Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 - TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S - TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00 - Data processing: database and file management or data structures
    • Y10S707/99951 - File or database maintenance
    • Y10S707/99952 - Coherency, e.g. same view to multiple users
    • Y10S707/99953 - Recoverability
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 - TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S - TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00 - Data processing: database and file management or data structures
    • Y10S707/99951 - File or database maintenance
    • Y10S707/99952 - Coherency, e.g. same view to multiple users
    • Y10S707/99955 - Archiving or backup

Definitions

  • the present invention relates to a storage device used in a computer system.
  • in a configuration called hierarchical storage, a high-speed storage device and a low-speed storage device may both be connected to a computer.
  • files that are frequently used are stored in the high-speed storage device such as a magnetic disk device, while files that are not frequently used are stored in the inexpensive, low-speed storage device such as a tape device.
  • alternatively, a plurality of logical storage devices having different processing speeds and storage capacities may be configured within a single storage device that is connected to a computer.
  • Such a system may be represented by disk array subsystems.
  • the storage device manages as statistical information the frequency of accesses from the computer to data stored in the storage device, and, based on the statistical information, transfers data with high access frequency to logical storage devices with higher performance.
  • the first set of problems entailed in the prior art is that there is a high dependency on the computer connected to the storage device, that there is a limitation in the system configuration, and that it is difficult to simplify the system management.
  • the hierarchical storage control refers to a data storage control for controlling a plurality of storage regions having different processing speeds and storage capacities such that the storage regions can be changed according to the frequency of data usage.
  • in other words, the hierarchical storage control refers to selecting, based on properties of the data such as its frequency of use, an appropriate storage region from among a plurality of storage regions having different processing speeds and/or storage capacities, and storing the data in the selected storage region.
  • when the system configuration is altered, such as when an old computer is replaced by a new computer, maintaining the system can be difficult for reasons such as the new computer being unable to take over the software's control information.
  • the second problem is that optimal placement of data according to the life cycle or type of data is difficult.
  • the third problem is that the effect of the hierarchical storage control is small.
  • the present invention relates to a control method or a storage device that can execute a hierarchical storage control for file storage positions without being dependent on the OS or applications executed on a host computer.
  • the present invention also relates to a hierarchical storage control method that allows a plurality of computers to share files, and to a storage device that executes such a hierarchical storage control.
  • the present invention also relates to a control method or a storage device that can execute a hierarchical storage control according to file properties.
  • the present invention further relates to a hierarchical storage control method with a high cost reduction effect, and to a storage device that executes such a hierarchical storage control.
  • a storage device comprises a plurality of storage regions having different properties and an interface control device that accepts, from one or more computers, access requests containing file identification information and accesses the storage regions that store data of the file designated by the identification information, wherein the interface control device controls storage of file data in one of the plurality of storage regions according to the file's properties.
  • FIG. 1 is a diagram of an example of the configuration of a computer system in accordance with an embodiment of the present invention.
  • FIG. 2 is a diagram of an example of the exterior appearance of a storage device.
  • FIG. 3 is a diagram of an example of the exterior appearance of an adapter board.
  • FIG. 4 is a diagram of an example of the configuration of a NAS channel adapter.
  • FIG. 5 is a diagram of an example of programs stored in a file system control memory.
  • FIG. 6 is a diagram of an example of programs stored in a disk array control memory.
  • FIG. 7 is a diagram of an example of the relationship among disk pools, LUs and file systems.
  • FIG. 8 is a diagram of an example of a storage class management table.
  • FIG. 9 is a diagram of an example of a filename management table.
  • FIG. 10 is a diagram of an example of a file storage management table and a buffer management table.
  • FIG. 11 is a diagram of an example of a file property information management table.
  • FIG. 12 is a diagram of an example of a file storage management table.
  • FIG. 13 is a diagram of an example of the second configuration of a system in accordance with another embodiment of the present invention.
  • FIG. 14 is a diagram of an example of the third configuration of the system in accordance with another embodiment of the present invention.
  • FIG. 15 is a diagram of an example of the fourth configuration of the system in accordance with another embodiment of the present invention.
  • FIG. 16 is a diagram of a configuration example of a NAS node.
  • FIG. 17 is a diagram of a configuration example of a Fibre Channel node.
  • FIG. 18 is a diagram of a configuration example of an IP node.
  • FIG. 19 is a diagram of a configuration example of a disk array node.
  • FIG. 1 is a diagram showing an example of a computer system including a storage device 1 (also called a storage system) to which the present invention is applied.
  • x may be any integer.
  • the storage device 1 is a disk array system comprising a disk controller (hereinafter called “DKC”) 11 and a plurality of magnetic disk devices (hereinafter simply called “disks”) 170 x and 171 x .
  • the storage device 1 is provided with two types of disks 170 x and 171 x .
  • 170 x are Fibre Channel (hereinafter called “FC”) disks with FC-type interface
  • 171 x are Serial ATA (hereinafter called "SATA") disks with a serial ATA-type (SATA-type) interface.
  • a plurality of FC disks 170 x makes up an FC disk pool 0 ( 170 )
  • a plurality of SATA disks 171 x makes up a SATA disk pool 1 ( 171 ).
  • the disk pools will be described in detail later.
  • the DKC 11 comprises one or more NAS channel adapters 110 x , one or more Fibre Channel adapters 111 x , a plurality of disk adapters 12 x , a shared memory 13 (hereinafter called “SM”), a shared memory controller 15 (hereinafter called “SMC”), a cache memory 14 (hereinafter called “CM”), and a cache memory controller 16 (hereinafter called “CMC”).
  • the NAS channel adapters (hereinafter called “CHN”) 110 x are interface control devices connected by file I/O interfaces to computers 40 x (hereinafter called “NAS hosts”), which are connected to a local area network (hereinafter called “LAN”) 20 or a LAN 21 .
  • the Fibre Channel adapters (hereinafter called “CHF”) 111 x are interface control devices connected by block I/O interfaces to computers (hereinafter called “SAN hosts”) 50 x , which are connected to a storage area network (hereinafter called “SAN”) 30 .
  • CHN and CHF are collectively called channel adapters (hereinafter called “CH”).
  • the disks 17 x are connected to the disk adapters 12 x .
  • Each disk adapter (hereinafter called “DKA”) 12 x controls input and output to and from one or more disks 17 x connected to itself.
  • the SMC 15 is connected to the CHN 110 x , the CHF 111 x , the DKA 12 x and the SM 13 .
  • the SMC 15 controls data transfer among the CHN 110 x , the CHF 111 x , the DKA 12 x and the SM 13 .
  • the CMC 16 is connected to the CHN 110 x , the CHF 111 x , the DKA 12 x and the CM 14 .
  • the CMC 16 controls data transfer among the CHN 110 x , the CHF 111 x , the DKA 12 x and the CM 14 .
  • the SM 13 stores a disk pool management table 131 .
  • the disk pool management table 131 is information that is used to manage the configuration of the disk pools.
  • the LANs 20 and 21 connect the CHNs 110 x to the NAS hosts 40 x .
  • Ethernet® is used for the LANs.
  • the SAN 30 connects the CHFs 111 x to the SAN hosts 50 x .
  • Fibre Channel is used for the SAN.
  • an IP network can be used as the SAN, such that iSCSI, by which SCSI commands according to SCSI protocol are encapsulated into IP packets for sending and receiving, is used among equipment connected to the SAN.
  • the SAN 35 according to the present embodiment is a dedicated SAN for connecting the storage device 1 and no SAN hosts are connected to the SAN 35 .
  • all CHs can access the CM 14 , the SM 13 , any DKAs 12 x and any disks 17 x , via the CMC 16 or the SMC 15 .
  • the storage device 1 shown in FIG. 1 has both the SAN interfaces (CHFs 111 x ) for connecting to the SAN hosts 50 x and the NAS interfaces (CHNs 110 x ) for connecting to the NAS hosts 40 x , but the present embodiment can be implemented even if the storage device 1 has only the NAS interfaces.
  • FIG. 2 is a diagram of an example of the exterior appearance of the storage device 1 .
  • a DKC unit 19 stores the CHNs 110 x , the CHFs 111 x , the DKAs 12 x , the SM 13 and the CM 14 , which are components of the DKC 11 .
  • the SM 13 actually comprises a plurality of controller boards 13 x .
  • the CM 14 also comprises a plurality of cache boards 14 x. Users of the storage device 1 can increase or decrease the number of such boards in order to configure the storage device 1 with the CM 14 and the SM 13 having the desired storage capacity.
  • Disk units (hereinafter called “DKU”) 180 and DKUs 181 store the disk pool 170 and the disk pool 171 , respectively.
  • Adapter boards built-in with the CHNs 110 x , the CHFs 111 x , the DKAs 12 x , the controller boards 13 x and the cache boards 14 x are stored in slots 190 .
  • the shape of the slots 190 , the size of the adapter boards and the shape of connectors are made uniform regardless of the type of adapter boards or the type of interface, which maintains compatibility among various types of boards.
  • any adapter board can be mounted into any slot 190 regardless of the type of the adapter board or the type of the interface.
  • users of the storage device 1 can freely select the number of adapter boards for the CHNs 110 x and the CHFs 111 x in order to mount the number of the CHNs 110 x and the CHFs 111 x selected into the slots 190 of the DKC unit 19 .
  • Example of the Exterior Configuration of an Adapter Board with the CHN 110 x Built in (hereinafter called a "NAS board") (FIG. 3)
  • FIG. 3 is a diagram of an example of the exterior configuration of a NAS board.
  • a connector 11007 is connected to a connector of the DKC unit 19 .
  • An interface connector 2001 is Ethernet®-compatible and can be connected to Ethernet®.
  • the adapter boards with built-in CHNs 110 x and the adapter boards with built-in CHFs 111 x have connectors of the same shape.
  • the interface connector 2001 is Fibre Channel-compatible and configured to be connected to Fibre Channel.
  • FIG. 4 is a diagram of an example of the configuration of the CHN 110 x .
  • a file access control CPU 11001 is a processor for controlling file access.
  • a LAN controller 11002 is connected to the LAN 20 via the interface connector 2001 and controls sending and receiving of data to and from the LAN 20 .
  • a file access control memory 11004 is connected to the file access control CPU 11001 .
  • the file access control memory 11004 stores programs executed by the file access control CPU 11001 and control data.
  • a disk array control CPU 11008 is a processor for controlling a disk array.
  • the disk array refers to a storage device consisting of a plurality of disks. Disk arrays in which at least one of a plurality of disks stores redundant data to provide fault tolerance are called RAIDs. RAIDs are described later.
  • a disk array control memory 11009 is connected to the disk array control CPU 11008 and stores programs executed by the disk array control CPU 11008 and control data.
  • An SM I/F control circuit 11005 is a circuit for controlling access from the CHNs 110 x to the SM 13 .
  • a CM I/F control circuit 11006 is a circuit for controlling access from the CHNs 110 x to the CM 14 .
  • An inter-CPU communications circuit 11007 is a communications circuit used when the file access control CPU 11001 communicates with the disk array control CPU 11008 in order to access disks.
  • the present embodiment indicates an example of an asymmetrical multiprocessor configuration in which two processors, the file access control CPU 11001 and the disk array control CPU 11008 , are mounted on each CHN 110 x ; however, each CHN 110 x can also be configured with a single processor that executes both the file access control and the disk array control, or as a symmetrical multiprocessor configuration in which two or more equivalent processors execute the file access control and the disk array control.
  • each CHF 111 x has the configuration shown in FIG. 4, except that the components shown in the top half of FIG. 4, namely the LAN controller 11002 , the file access control CPU 11001 , the file access control memory 11004 and the inter-CPU communications circuit 11007 , are replaced by a Fibre Channel controller.
  • FIG. 5 is a diagram of an example of programs and control data stored in the file access control memory 11004 of the CHN 110 x .
  • An operating system program 110040 is used for the management of programs as a whole and for input/output control.
  • a LAN controller driver program 110041 is used for the control of the LAN controller 11002 .
  • a TCP/IP program 110042 is used for the control of TCP/IP, which is the communications protocol for LAN.
  • a file system program 110043 is used for managing files stored in the storage device 1 .
  • a network file system program 110044 is used for controlling NFS and/or CIFS, which are protocols for providing files stored in the storage device 1 to the NAS hosts 40 x .
  • a volume control program 110045 is used for controlling the configuration of each logical volume by combining a plurality of logical disk units (hereinafter called “LU”), each of which is a unit of storage region set within the disk pools 17 x .
  • An inter-CPU communications driver program 110046 is used for controlling the inter-CPU communications circuit 11007 , which is used for communication between the file access control CPU 11001 and the disk array control CPU 11008 .
  • the file system program 110043 includes the following:
  • a file storage management table 1100435 for managing addresses of storage regions on disks that store the blocks that make up each file;
  • a filename management table 1100436 for managing filenames of open files and the file handlers used to access the file storage management table 1100435 of each file;
  • a buffer management table 1100437 for managing buffer addresses indicating storage regions within buffers corresponding to the blocks that make up a file;
  • a file property information management table 1100438 for storing static file properties, such as the file type, the application that generated the file and the intent of the file generator, and dynamic file properties, such as the value of the file, which varies according to the file's life cycle stage, and the file's access properties;
  • a migration management section 110043 A used when executing processing to migrate files between LUs; and
  • a storage class management table 1100439 that registers, for each LU in the storage pool, a storage class (described later) and identification information of the storage device in which the LU resides.
  • FIG. 6 is a diagram of an example of programs stored in the disk array control memory 11009 .
  • An operating system program 110090 is used for managing programs as a whole and for controlling input/output.
  • a disk array control program 110091 is used for constructing LUs within the disk pools 17 x and for processing access requests from the file access control CPU 11001 .
  • a disk pool management program 110092 is used for managing the configuration of the disk pools 17 x by using information in the disk pool management table 131 stored in the SM 13 .
  • An inter-CPU communications driver program 110093 is used for controlling the inter-CPU communications circuit 11007 , which is used for communication between the file access control CPU 11001 and the disk array control CPU 11008 .
  • a cache control program 110094 is used for managing data stored in the CM 14 and for controlling cache hit/miss judgments.
  • a DKA communications driver program 110095 is used when accessing an LU in order to communicate with the DKAs 12 x , which control the disks 170 x and 171 x that make up the LU.
  • FIG. 7 is a diagram of an example of the configuration of the disk pools.
  • in the FC disk pool 170 , two LUs are set: LU 0 ( 50 ) and LU 1 ( 51 ).
  • the LU 0 ( 50 ) comprises two FC disks, DK 000 and DK 010 , where the DK 000 and the DK 010 make up RAID 1 .
  • the LU 1 ( 51 ) consists of five FC disks, DK 001 , DK 002 , DK 003 , DK 004 and DK 005 , where the five FC disks make up a 4D+1P configuration RAID 5 .
  • the RAID 1 and the RAID 5 refer to data placement methods in a disk array and are discussed in detail in “A Case for Redundant Arrays of Inexpensive Disks (RAID)” by D. Patterson, et al., ACM SIGMOD Conference Proceedings, 1988, pp. 109-116.
  • in the LU 0 with the RAID 1 configuration, the two FC disks DK 000 and DK 010 have a mirror relationship with each other.
  • the LU 1 having the RAID 5 configuration consists of one or more disks that store data stripes, which store data of files accessed from host computers, and one or more disks that store parity stripes, which are used to retrieve data stored in the data stripes.
  • the LU 1 has the 4D+1P configuration RAID 5 , which indicates a RAID 5 consisting of four data stripes and one parity stripe. Similar representations will be used hereinafter to indicate the number of data stripes and the number of parity stripes in LUs having the RAID 5 configuration.
  • LU 2 ( 52 ) is established in the SATA disk pool 171 .
  • the LU 2 ( 52 ) consists of nine SATA disks, DK 100 , DK 101 , DK 102 , DK 103 , DK 104 , DK 110 , DK 111 , DK 112 and DK 113 , where the nine SATA disks make up an 8D+1P configuration RAID 5 .
  • assuming each disk has a capacity of 140 GB, the LU 0 ( 50 ) has 140 GB, the LU 1 ( 51 ) has 560 GB, and the LU 2 ( 52 ) has 1120 GB in usable storage capacity (see the sketch below).
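As a quick check (not part of the patent text), the following sketch computes the usable capacities above from the RAID levels described; the helper name and the 140 GB default are illustrative.

```python
# A minimal sketch, assuming the RAID levels described above.

def usable_capacity_gb(raid_level: str, num_disks: int, disk_gb: int = 140) -> int:
    """Usable capacity of an LU, ignoring formatting overhead."""
    if raid_level == "RAID1":
        # Mirrored pair: half of the raw capacity holds the mirror copy.
        return (num_disks // 2) * disk_gb
    if raid_level == "RAID5":
        # nD+1P: one disk's worth of capacity holds parity stripes.
        return (num_disks - 1) * disk_gb
    raise ValueError(f"unsupported RAID level: {raid_level}")

print(usable_capacity_gb("RAID1", 2))  # LU0: 140 GB
print(usable_capacity_gb("RAID5", 5))  # LU1 (4D+1P): 560 GB
print(usable_capacity_gb("RAID5", 9))  # LU2 (8D+1P): 1120 GB
```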
  • local file systems LFS 0 ( 60 ), LFS 1 ( 61 ) and LFS 2 ( 62 ) are constructed on the LU 0 ( 50 ), the LU 1 ( 51 ) and the LU 2 ( 52 ), respectively.
  • FIG. 8 is an example of the configuration of a storage class management table 1100451 stored in the file access control memory 11004 of each CHN 110 x .
  • the storage class management table 1100451 is generated by the file access control CPU 11001 by executing the file system program 110043 and referring to information stored in the disk pool management table 131 of the SM 13 .
  • the disk pool management table 131 is stored in the SM 13 and contains information similar to the information in the storage class management table 1100451 for all CHs.
  • the storage class management table 1100451 stored in the file access control memory 11004 of each CHN 110 x contains information regarding LUs used by the CHN 110 x , but rearranged with the storage class as a key.
  • a storage class entry ( 1100451 a ) stores information indicating storage class.
  • a storage node # entry ( 1100451 b ) stores an identification number (called a “storage node number”) of the storage device that makes up each storage class.
  • a disk pool # entry ( 1100451 c ) stores a disk pool number that makes up each storage class.
  • An LU# entry ( 1100451 d ) stores an LU number set for each disk pool.
  • An LU type entry stores information indicating whether the corresponding LU is set internally (local) or externally (remote) to the given storage device and whether a file system is set in the LU.
  • a storage class is a hierarchical attribute provided for each storage region based on the usage of data storage; according to the present embodiment, three attributes of OnLine Storage, NearLine Storage and Archive Storage are defined. In addition, sub-attributes of Premium and Normal are defined for the OnLine Storage.
  • the OnLine Storage is an attribute set for LUs suitable for storing data of files that are frequently accessed, such as files being accessed online and files being generated. Premium indicates an attribute set for LUs suitable for storing data especially requiring fast response.
  • the NearLine Storage is an attribute set for LUs suitable for storing data of files that are not frequently used but are occasionally accessed.
  • the Archive Storage is an attribute set for LUs suitable for storing data of files that are hardly ever accessed and are maintained for long-term storage.
  • FIG. 8 indicates that there are the LU 0 ( 50 ) of the OnLine Storage (Premium) class and the LU 1 ( 51 ) of the OnLine Storage (Normal) class in the FC disk pool 170 of the storage device 1 (called “STR 0 ”). Further, in the SATA disk pool 171 of the storage device 1 (STR 0 ) is the LU 2 ( 52 ) of the NearLine Storage class. Moreover, in a different storage device (STR 1 ) is an LU 3 ( 53 ) of the Archive Storage class in a SATA disk pool. An example of constructing disk pools in different storage devices is described later.
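To make the table's structure concrete, here is a hypothetical in-memory rendering of the FIG. 8 entries just described; the class names and LU assignments mirror the example above, while the field and function names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class StorageClassEntry:
    storage_class: str  # e.g. "OnLine Storage (Premium)", "NearLine Storage"
    storage_node: str   # storage node number/name, e.g. "STR0" or "STR1"
    disk_pool: int      # disk pool number that makes up the storage class
    lu: int             # LU number set for the disk pool
    lu_type: str        # "local" or "remote", per the LU type entry

# The example of FIG. 8 as described in the text.
storage_class_table = [
    StorageClassEntry("OnLine Storage (Premium)", "STR0", 0, 0, "local"),
    StorageClassEntry("OnLine Storage (Normal)",  "STR0", 0, 1, "local"),
    StorageClassEntry("NearLine Storage",         "STR0", 1, 2, "local"),
    StorageClassEntry("Archive Storage",          "STR1", 0, 3, "remote"),
]

def lus_in_class(storage_class: str) -> list:
    """Candidate LUs registered under the given storage class."""
    return [e for e in storage_class_table if e.storage_class == storage_class]
```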
  • FIG. 9 shows an example of the filename management table 1100436 that is stored in the file access control memory 11004 .
  • the filename management table 1100436 is a table prepared for each file system, where filenames and file handlers are stored in a tree structure for easy searchability.
  • the filename of the file is included in an access request received by the CHN 110 x from the NAS host 40 x .
  • the CHN 110 x uses the filename to search the filename management table 1100436 and obtains the file handler that corresponds to the filename, which enables the CHN 110 x to refer to the file storage management table 1100435 that corresponds to the file handler.
  • Each filename management table 1100436 is stored in the LU in which the file system that corresponds to the filename management table 1100436 is constructed, and is read to the file access control memory 11004 when necessary and used by the file access control CPU 11001 .
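A simplified stand-in for this per-file-system table follows. The patent stores names in a tree for fast search; this sketch substitutes a plain dictionary, and the class name is invented.

```python
class FilenameManagementTable:
    """Maps filenames to file handlers for one file system (a dict stands
    in for the tree structure described in the text)."""

    def __init__(self):
        self._handlers = {}   # filename -> file handler
        self._next = 1

    def open(self, filename: str) -> int:
        """Return the handler for filename, registering it on first open."""
        if filename not in self._handlers:
            self._handlers[filename] = self._next
            self._next += 1
        return self._handlers[filename]

lfs0_names = FilenameManagementTable()
handler = lfs0_names.open("abc.doc")        # first open registers the file
assert lfs0_names.open("abc.doc") == handler
```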
  • FIG. 10 is a diagram of an example of the file storage management table 1100435 and the buffer management table 1100437 .
  • the file storage management table 1100435 is provided in the file access control memory 11004 for each file and is a table that manages file storage addresses.
  • the file storage management table 1100435 can be referred to by designating a file handler that represents a file.
  • a file property information management table entry stores a pointer for referring to the file property information management table 1100438 for the corresponding file.
  • a size indicates the size of the file in units of bytes.
  • a number of blocks indicates the number of logical blocks used in managing the file, which is done by dividing the file into blocks called logical blocks.
  • Each logical block that stores the file also stores a pointer to the buffer management table 1100437 that corresponds to the logical block.
  • each buffer management table 1100437 contains the following.
  • a hash link entry stores a link pointer to a hash table for quickly determining whether a buffer is valid.
  • a queue link entry stores a link pointer for forming a queue.
  • a flag entry stores a flag that indicates the status of the corresponding buffer, i.e., whether valid data is stored in the buffer, whether the buffer is in use, and whether the content of the buffer has yet to be reflected on the disk.
  • An equipment number entry stores an identifier of the storage device and an identifier of the LU in which the corresponding logical block is stored.
  • a block number entry stores a disk address number that indicates the storage position of the logical block within the storage device indicated by the equipment number.
  • a number of bytes entry stores the number of bytes of valid data stored in the logical block.
  • a buffer size entry stores the size of the buffer in units of bytes.
  • a buffer pointer entry stores a pointer to the corresponding physical buffer memory.
  • the file storage management table 1100435 is stored in the LU that stores the corresponding file and is read to the memory when necessary for use.
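The entries enumerated above can be pictured as the following record layouts; the field names track the text, but the concrete types are assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BufferManagementTable:
    hash_link: Optional[int] = None    # link into the validity hash table
    queue_link: Optional[int] = None   # link pointer forming a queue
    flags: int = 0                     # valid / in-use / dirty (unreflected) bits
    equipment_number: str = ""         # storage device and LU of the block
    block_number: int = 0              # disk address within that device
    num_bytes: int = 0                 # valid bytes stored in the logical block
    buffer_size: int = 0               # buffer size in bytes
    buffer_pointer: Optional[bytearray] = None  # the physical buffer memory

@dataclass
class FileStorageManagementTable:
    property_table: Optional[object] = None  # -> file property info table
    size_bytes: int = 0                      # file size in bytes
    num_blocks: int = 0                      # logical blocks making up the file
    blocks: List[BufferManagementTable] = field(default_factory=list)
    # Link-destination entries stay NULL until the file is migrated.
    link_node: Optional[str] = None
    link_fs: Optional[str] = None
    link_filename: Optional[str] = None
```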
  • FIG. 11 is an example of the file property information management table 1100438 stored in the file access control memory 11004 .
  • the file property information management table 1100438 stores static property information and dynamic property information.
  • the static property information is determined when a file is created and carries over thereafter. Although the static property information can be intentionally altered, it otherwise remains unaltered.
  • the dynamic property information changes over time after a file is created.
  • the static property information is divided into a file information category and a policy category.
  • the file information category includes basic information of a file.
  • a file type indicates the type of the file, such as a text file, document file, picture file, moving picture file or a voice file.
  • An application indicates the application that generated the file.
  • a date created indicates the date the file was first generated. The time at which the file was generated can be registered in addition to the date the file was created.
  • An owner indicates the name of the user who created the file.
  • An access identifier indicates a range of access authorization for the file.
  • the policy category is information that is set by the user or the application that created the file, and is information that is designated by the user or the application with regard to file storage conditions.
  • An initial storage class is information that indicates the storage class of the LU in which the file is to be stored when the file is stored in a storage device for the first time.
  • An asset value type indicates the asset value of the file.
  • a life cycle model indicates the model applicable to the file from among life cycle models defined in advance.
  • a migration plan indicates the plan applicable to the file from among plans concerning file migration (hereinafter called “migration”) defined in advance.
  • the asset value is an attribute that designates the importance or value attached to the file.
  • An attribute of “extra important,” “important” or “regular,” for example, can be designated as an asset value.
  • the asset value can be used as a supplemental standard for selecting a storage class, i.e., files with an attribute of “important” or higher are stored in LUs that belong to the OnLine Storage class with Premium attribute, or as a standard for selecting a storage class when no life cycle models are designated, for example.
  • the life cycle stages have been named by drawing analogy with life cycle stages of humans to describe how the usage status of a file changes over time, i.e., the period in which data is created is the birth, the period in which the data is updated and/or used is the growth stage, the period in which the data is rarely updated and is mainly referred to is the mature stage, and the period in which the data is no longer used and is archived is the old age.
  • a life cycle model defines the life cycle a file experiences. The most general method of defining a life cycle is to define the stages based on the amount of time that has elapsed since a file was generated.
  • One example is to define the “growth stage,” or the “update stage,” in which there are frequent updates, as one month; the “mature stage,” or the “reference stage,” in which the file is mainly referred to, as one year; and the “old age,” or the “archive stage,” as thereafter.
  • this definition is called a “model 1 ” and is used in the following description.
  • a specific life cycle model can be applied to a certain type of files, or life cycle models can be applied on a per-application basis such that a specific life cycle model is applied to files created by a certain application.
  • Names of the life cycle stages can be expressed in terms of “growth stage,” “mature stage,” and “old age” that correspond to the life of a person, or in terms of “update stage,” “reference stage,” and “archive stage” based on file behavior. In the present embodiment, the latter expressions are used in order to more clearly indicate the behavior of files.
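A minimal sketch of "model 1" under the thresholds given above (one month of update stage, one year of reference stage); the 30/365-day approximations and the function name are assumptions.

```python
from datetime import datetime, timedelta

def life_cycle_stage(date_created: datetime, now: datetime) -> str:
    """Stage under "model 1": update for ~1 month, reference until ~1 year."""
    age = now - date_created
    if age < timedelta(days=30):
        return "update stage"      # frequent updates ("growth stage")
    if age < timedelta(days=365):
        return "reference stage"   # mainly referred to ("mature stage")
    return "archive stage"         # archived ("old age")

# Example: a file created 45 days ago is in its reference stage.
now = datetime(2004, 2, 15)
print(life_cycle_stage(datetime(2004, 1, 1), now))  # reference stage
```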
  • the migration plan defines to which storage class LU a file is transferred according to the file's life cycle stage.
  • One example is a method for storing “update stage” files in OnLine Storage class LUs, “reference stage” files in the NearLine Storage class LUs, and “archive stage” files in Archive Storage class LUs.
  • this definition is called a “plan 1 ” and is used in the following description.
  • various plans can be defined, such as a plan that defines “update stage” files to be stored in OnLine Storage (Premium) class LUs, and “reference stage” files in OnLine Storage (Normal) class LUs, while “archive stage” files remain in the NearLine Storage class LUs, and one plan from among a plurality of plans can be selected for use.
  • a specific migration plan can be applied to a certain type of files, or migration plans can be applied on a per-application basis such that a migration plan is applied to files created by a certain application.
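"Plan 1" reduces to a direct stage-to-class mapping, sketched below; representing a plan as a dictionary is an illustrative choice, not the patent's encoding.

```python
# "Plan 1" as described above; other plans would simply use a different map.
PLAN_1 = {
    "update stage":    "OnLine Storage",
    "reference stage": "NearLine Storage",
    "archive stage":   "Archive Storage",
}

def target_storage_class(stage: str, plan: dict = PLAN_1) -> str:
    """Storage class a file should occupy at the given life cycle stage."""
    return plan[stage]
```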
  • the dynamic property information is divided into an access information category and a life cycle information category.
  • the access information category includes access statistical information for each file.
  • a time stamp indicates the date and time a given file was last read or written, or the date and time the file storage management table 1100435 of the file was last updated.
  • An access count indicates the total number of accesses to the file.
  • a read count and a write count indicate the number of reads and the number of writes, respectively, to and from the file.
  • a read size and a write size indicate the average value of the data transfer size when reading and writing, respectively, to and from the file.
  • a read sequential count and a write sequential count indicate the number of times there is address continuity, i.e., sequentiality, between two of multiple consecutive accesses in reading or writing.
  • the life cycle information category includes information related to the life cycle of a file.
  • a current life cycle stage indicates the current positioning of a file within its life cycle, i.e., the update stage, the reference stage, or the archive stage.
  • a current storage class indicates the storage class of a storage pool set for the LU that currently stores the file.
  • FIG. 11 indicates one example of the file property information, but various other types of property information can be defined and stored in the file property information management table 1100438 . Furthermore, an embodiment may use only a part of the property information as necessary.
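One possible shape for this table, split into its static and dynamic halves as described; every field mirrors an entry named above, though the types are assumptions.

```python
from dataclasses import dataclass

@dataclass
class StaticProperties:
    # file information category
    file_type: str               # e.g. "document"
    application: str             # application that generated the file
    date_created: str            # date (and optionally time) of creation
    owner: str                   # user who created the file
    access_identifier: str       # e.g. "-rw-rw-rw-"
    # policy category
    initial_storage_class: str   # e.g. "undesignated"
    asset_value: str             # e.g. "important"
    life_cycle_model: str        # e.g. "model 1"
    migration_plan: str          # e.g. "plan 1"

@dataclass
class DynamicProperties:
    # access information category
    time_stamp: str
    access_count: int = 0
    read_count: int = 0
    write_count: int = 0
    read_size: float = 0.0       # average read transfer size
    write_size: float = 0.0      # average write transfer size
    read_sequential_count: int = 0
    write_sequential_count: int = 0
    # life cycle information category
    current_life_cycle_stage: str = "update stage"
    current_storage_class: str = "OnLine Storage (Premium)"
```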
  • the NAS host 0 ( 400 ) issues to the CHN 0 ( 1100 ) an open request for the file abc.doc.
  • the open request includes a filename as identification information to identify the file. Since the open processing is executed to store the file for the first time, the NAS host 0 ( 400 ) sends to the CHN 0 ( 1100 ) the following information included in the file information category and the policy category as the static property information of the file property information, along with the open request.
  • the information sent includes a file type “document,” an application that generated the file “XYZ Word,” and an access identifier “-rw-rw-rw-” as information included in the file information category, as well as an initial storage class “undesignated,” an asset value type “important,” the life cycle model “model 1 ,” and the migration plan “plan 1 ” as information included in the policy category.
  • the CHN 0 ( 1100 ) receives the open request from the NAS host 0 ( 400 ) via the LAN controller 11002 , and the file access control CPU 11001 executes the file system program 110043 .
  • the open request received is specified through a control by the file access control CPU 11001 as an access request to access the local file system LFS 0 ( 60 ) based on the directory information of the filename.
  • the file open processing section 1100431 refers to the filename management table 1100436 of the LFS 0 ( 60 ) and searches for abc.doc. Since it is determined as a result that abc.doc is a file that does not yet exist in the filename management table 1100436 and is to be stored for the first time, the file open processing section 1100431 registers abc.doc in the filename management table 1100436 and assigns a file handler to abc.doc.
  • the file storage management section 1100433 creates the file storage management table 1100435 to correspond to the file handler assigned to the file abc.doc.
  • the file storage management section 1100433 generates the file property information management table 1100438 and correlates it to the file storage management table 1100435 (i.e., a pointer to the file property information management table 1100438 is stored in the file storage management table 1100435 ); the file storage management section 1100433 then stores in the file property information management table 1100438 the static property information of the file property information for the file abc.doc obtained from the NAS host 0 ( 400 ), as well as the date created and owner of the file. Next, the file storage management table 1100435 and the file property information management table 1100438 are written to the LU in which is constructed the file system the file belongs to.
  • the CHN 0 ( 1100 ) returns the file handler to the NAS host 0 ( 400 ) and the open processing is terminated.
  • the NAS host 0 ( 400 ) issues to the CHN 0 ( 1100 ) a write request to store data of the file abc.doc in the storage device 1 .
  • the file access control CPU 11001 executes the file system program 110043 and uses a method similar to the method used in the open processing to specify that the write request is an access request to access the local file system LFS 0 ( 60 ).
  • the request processing section 1100432 of the file system program 110043 interprets the access request as a write request based on the information included in the access request received, and uses the file handler designated in the write request to obtain the file storage management table 1100435 of the file that corresponds to the file handler.
  • the file storage management section 1100433 secures buffers required to store the data and determines the storage positions on disks for the file.
  • the file storage management section 1100433 refers to the static property information in the file property information management table 1100438 .
  • the file storage management section 1100433 specifies the current life cycle stage of the file abc.doc as “growth stage.” Further, since the initial storage class is “undesignated” and the asset value type is “important,” the file storage management section 1100433 selects “OnLine Storage (Premium)” as the storage class of the storage pool in which to store the file abc.doc.
  • the file storage management section 1100433 refers to the storage class management table 1100439 and decides to store the file abc.doc in an LU whose storage class is "OnLine Storage (Premium)" and that is specified by "STR 0 " (i.e., the primary storage device 1 ) as the storage node, the FC disk pool 0 ( 170 ) as the disk pool #, and "LU 0 " (i.e., the local file system LFS 0 ) as the LU #.
  • the file storage management section 1100433 divides the data of the file into one or more logical blocks based on an appropriate algorithm, determines storage addresses of the logical blocks in the LU 0 , generates buffer management tables 1100437 to register the storage addresses determined, and stores in the buffer management table entry of the file storage management table 1100435 pointers to the buffer management tables 1100437 generated. Furthermore, the file storage management section 1100433 stores information in the remaining entries of the file storage management table 1100435 . In the present embodiment, NULL is registered for all entries for link destinations in the file storage management table 1100435 .
  • the file storage management section 1100433 sets the current life cycle stage as “update stage” and the current storage class as “OnLine Storage (Premium)” in the life cycle information category of the dynamic property information of the file property information management table 1100438 .
  • the file storage management section 1100433 performs appropriate calculations for information included in the access information category of the dynamic property information before registering the results into the file property information management table 1100438 .
  • the request processing section 1100432 executes a processing according to the write request received; and the LAN controller driver program 110041 , the TCP/IP program 110042 , and the network file system program 110044 are executed by the file access control CPU 11001 ; as a result, the write data is transferred from the NAS host 0 ( 400 ) to the CHN 0 ( 1100 ) and temporarily stored in the buffer of the file access control memory 11004 .
  • the inter-CPU communications driver program 110046 is executed by the file access control CPU 11001 , and this causes the write request to be transferred to the disk array control CPU 11008 at proper timing.
  • the disk array control CPU 11008 caches the write data temporarily in the CM 14 and sends a reply of completion with regard to the write request from the NAS host 0 ( 400 ).
  • files can be initially placed in storage regions that belong to the appropriate storage class based on the static property information of the file.
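The decision just walked through for abc.doc can be condensed as below. The precedence (an explicit initial storage class wins; otherwise the asset value acts as the supplemental standard) follows the text, while the helper name is invented.

```python
def select_initial_storage_class(initial_storage_class: str, asset_value: str) -> str:
    """Pick the storage class for a file's initial placement."""
    if initial_storage_class != "undesignated":
        return initial_storage_class           # policy designates it explicitly
    if asset_value in ("important", "extra important"):
        return "OnLine Storage (Premium)"      # fast response for valuable files
    return "OnLine Storage (Normal)"

# abc.doc: initial class "undesignated", asset value "important"
assert select_initial_storage_class("undesignated", "important") == "OnLine Storage (Premium)"
```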
  • the migration management section 110043 A of the file system program 110043 is activated by the file access control CPU 11001 based on a preset timing.
  • the migration management section 110043 A refers to the file property information management table 1100438 of a file included in the local file system set in advance as the subject of the migration processing, and checks whether the file that is the subject of migration exists. The following is a detailed description of a situation in which the file abc.doc is the subject of the migration processing.
  • the migration management section 110043 A refers to the file property information management table 1100438 of the file abc.doc and compares the date created to the current date and time. If one month has elapsed since the date created, the migration management section 110043 A recognizes that the current life cycle stage has shifted from the “update stage” to the “reference stage” due to the fact that the life cycle model in the static property information indicates “model 1 ” and that one month, which is the period of the “update stage,” has already passed.
  • the migration management section 110043 A recognizes that the file must be migrated from the LU whose storage class is the “OnLine Storage (Premium)” to an LU whose storage class is the “NearLine Storage.”
  • the migration management section 110043 A refers to the storage class management table 1100439 and decides to transfer the file to an LU whose storage class is the "NearLine Storage" and that is designated by "STR 0 " (i.e., the primary storage device 1 ) as the storage node, the SATA disk pool 1 ( 171 ) as the disk pool #, and "LU 2 " (i.e., the local file system LFS 2 ) as the LU #.
  • the migration management section 110043 A changes the current life cycle stage to “reference stage” and the current storage class to “NearLine Storage” in the dynamic property information of the file property information management table 1100438 .
  • the migration management section 110043 A defines a unique filename (in this case FILE 00001 ) that is used to manage the file abc.doc within the storage device STR 0 ( 1 ).
  • the file open processing section 1100431 refers to the filename management table 1100436 of the LFS 2 ( 62 ) and checks whether the filename FILE 00001 is registered in the filename management table 1100436 ; if it is not registered, the file open processing section 1100431 registers the filename FILE 00001 in the filename management table 1100436 and assigns a file handler to the filename FILE 00001 .
  • the file storage management section 1100433 generates the file storage management table 1100435 and the file property information management table 1100438 to correspond to the file handler assigned to the filename FILE 00001 . Contents identical to the contents registered in the file property information management table of the file abc.doc are stored in the file property information management table 1100438 generated.
  • the file storage management section 1100433 writes in the LU, which stores FILE 00001 , the file storage management table 1100435 and the file property information management table 1100438 of FILE 00001 .
  • the file storage management section 1100433 secures buffer regions required to store the data of FILE 00001 and determines the storage regions (or the storage positions) within the LU 2 for storing the file. Using a method similar to the method used in the data write processing, the file storage management section 1100433 generates the buffer management tables 1100437 to register the storage positions determined, and stores in the buffer management table entry of the file storage management table 1100435 pointers to the buffer management tables 1100437 generated. NULL is registered for all entries for link destinations in the file storage management table 1100435 of the FILE 00001 stored in the LFS 2 .
  • the file storage management section 1100433 changes the link destination node name to STR 0 , the link destination FS name to LFS 2 , and the link destination filename to FILE 00001 in the file storage management table 1100435 of abc.doc in the LFS 0 .
  • the request processing section 1100432 reads data of the abc.doc from disks that make up the LU 0 to buffers in the file access control memory 11004 .
  • the file storage management section 1100433 determines the data read to the buffers in the file access control memory 11004 as data of the FILE 00001 to be written to the disks that make up the LU 2 , and the request processing section 1100432 writes the data to storage regions in the buffers registered in the buffer management tables 1100437 .
  • the file storage management section 1100433 clears all buffer management tables 1100437 that can be referred to from pointers registered in the file storage management table 1100435 of the file abc.doc in the LFS 0 , and registers NULL in entries of these buffer management tables 1100437 .
  • the data of the FILE 00001 stored in the buffers is stored at proper timing in the LU 2 via the CM 14 of the storage device 1 through a procedure similar to the procedure that took place in the data write processing of the initial placement processing. This completes the migration processing.
  • files can be migrated to storage regions of an appropriate storage class by taking into consideration the life cycle stage of the file based on the migration plan of the file.
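Putting the pieces together, the periodic check performed by the migration management section might look like the sketch below, reusing the life_cycle_stage, target_storage_class and lus_in_class helpers from the earlier sketches. The file object f is assumed to expose both its static date_created and its dynamic stage fields, and copy_between_lus stands in for the buffer read/write and link-rewriting sequence described above.

```python
from datetime import datetime

def migrate_if_due(f, copy_between_lus):
    """One migration pass over a file, run at the preset timing."""
    stage = life_cycle_stage(f.date_created, datetime.now())
    if stage == f.current_life_cycle_stage:
        return                                  # still in the same stage
    new_class = target_storage_class(stage)     # consult the migration plan
    target_lu = lus_in_class(new_class)[0]      # an LU of the required class
    copy_between_lus(f, target_lu)              # move data, rewrite links
    f.current_life_cycle_stage = stage          # update dynamic properties
    f.current_storage_class = new_class
```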
  • LUs for storing files can be selected based on a concept of storage classes, and LUs for storing files can be changed, without being dependent on host computers or applications executed on the host computers.
  • a storage device with a storage hierarchy, i.e., a plurality of storage regions with varying properties, can thus be realized with high cost effectiveness and without dependence on host computers.
  • a file-based hierarchy storage control can be executed based on static properties of the file, such as the file type, the type of application that generated the file, the intent (policy) of the file generator, and on dynamic properties of the file, such as changes in the life cycle stage, value and access property of the file.
  • a hierarchical storage control is executed between storage devices in a system in which a storage device 1 (hereinafter called “STR 0 ”) described in the first embodiment and another storage device 1 a (hereinafter called “STR 1 ”) are connected via a network.
  • the storage device STR 1 ( 1 a ) is another storage device connected to the storage device STR 0 ( 1 ) via the LAN 20 ; otherwise, the system configuration components are the same as in FIG. 1.
  • an NCTL 0 ( 1100 a ) and an NCTL 1 ( 1101 a ) are NAS controllers, and a disk pool 0 ( 170 a ) is a disk pool connected to the NCTL 0 and NCTL 1 .
  • the NAS controller NCTLx is provided with an FC controller 11010 a for connecting with the disk pool 0 ( 170 a ).
  • the NAS controller NCTLx also has a cache memory CM 14 a within the NAS controller, as well as a data transfer control circuit 11011 a , which is a control circuit for the cache memory CM 14 a .
  • the data transfer control circuit 11011 a serves to connect the NAS controller 1100 a and the NAS controller 1101 a to each other.
  • the NAS controller 1101 a has a configuration similar to that of the NAS controller 1100 a .
  • Components that are assigned the same numbers as components of the CHN 1100 in the first embodiment have the same configuration and the same function as the corresponding components of the CHN 1100 .
  • the STR 1 is a storage device that is smaller and cheaper than the STR 0 . Also, as shown in FIG. 13, a CHN 0 of the STR 0 and the NCTL 0 of the STR 1 are connected via the LAN 20 .
  • the CHN 0 ( 1100 ) of the storage device 1 recognizes that the storage device 1 a (STR 1 ) of a different type is connected to the LAN 20 .
  • the different storage device can be recognized using a method based on information designated in advance by an administrator or a method based on whether or not there is a device that reacts to a broadcast of a command for recognition to network segments of the LAN 20 .
  • the CHN 0 of the STR 0 becomes an initiator and issues to the STR 1 a command to collect information.
  • the response from the STR 1 to the command includes the type of the disk pool and the configuration of the LUs that the STR 1 has; as a result, by referring to the response, the CHN 0 can recognize that the STR 1 has the SATA disk pool 170 a and that the disk pool 170 a contains a low-cost, file-type LU with a 15D+1P configuration RAID 5 and a large capacity of 2100 GB.
  • the CHN 0 of the STR 0 decides to manage the STR 1 's LU as a remote LU, i.e., as an LU that is in the other storage device STR 1 ( 1 a ) but as one of the LUs that are managed by the primary storage device STR 0 ( 1 ).
  • the CHN 0 assigns a number LU 3 to the LU that the STR 1 has and assigns a remote file system number RFS 3 to the file system constructed within the LU. Due to the fact that the LU is in a large capacity, low-cost disk pool, the storage class of the LU is set as “Archive Storage.” Based on a control by a disk array control CPU 11008 of the CHN 0 , information regarding the LU 3 in the STR 1 , such as the type of the disk pool, the configuration of the LU, the LU number and the storage class, is stored in a disk pool management table 131 of an SM 13 of the storage device 1 (STR 0 ).
  • the CHN of the storage device 1 refers to the disk pool management table 131 by having a file access control CPU 11001 execute a file system program 110043 , and can register information regarding the LU 3 in a storage class management table 1100451 in a file access control memory 11004 by copying the information regarding the LU 3 from the disk pool management table 131 .
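Schematically, the discovery and registration just described might proceed as below, reusing StorageClassEntry and storage_class_table from the earlier sketch; the response dictionary and the function name are invented for illustration.

```python
def register_remote_lu(response: dict) -> StorageClassEntry:
    """Register a discovered remote LU as one managed by the primary STR0.

    `response` models the reply to the information-collecting command, e.g.
    {"node": "STR1", "pool": "SATA", "raid": "15D+1P RAID5",
     "capacity_gb": 2100, "lu_kind": "file"}.
    """
    entry = StorageClassEntry(
        storage_class="Archive Storage",   # large-capacity, low-cost pool
        storage_node=response["node"],
        disk_pool=0,
        lu=3,                              # LU number assigned by the CHN0
        lu_type="remote",
    )
    storage_class_table.append(entry)      # in the patent, this information is
                                           # also stored in the SM's disk pool
                                           # management table 131
    return entry
```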
  • the file abc.doc's current life cycle stage is “reference stage,” its current storage class is “NearLine Storage,” and its data section is stored under the name FILE 00001 in the LFS 2 of the LU 2 constructed in the SATA disk pool of the STR 0 , as shown in FIGS. 11 and 12.
  • the filename management table 1100436 in which the filename “abc.doc” is registered and the file storage management table 1100435 for the file abc.doc are in the LFS 0 .
  • information regarding the abc.doc is stored in the filename management table 1100436 for the LFS 0 and in the file storage management table 1100435 for the file abc.doc in the LFS 0 .
  • the file property information management table 1100438 is in both the LFS 0 and the LFS 2 .
  • the data section of the file abc.doc has already been migrated to the LU 2 in which is constructed the LFS 2 , which means that the data section of the abc.doc does not reside in the LU 0 , in which is constructed the LFS 0 .
  • the migration management section 110043A of the STR0 refers to the file property information management table 1100438 of the abc.doc and compares the date created to the current date and time. If one year has elapsed since the migration, the migration management section 110043A recognizes that the current life cycle stage has shifted from the "reference stage" to the "archive stage," because the life cycle model in the static property information for abc.doc indicates "model 1" and one year, the period of the "reference stage," has already passed. Further, because the migration plan is "plan 1," the migration management section 110043A recognizes that the file must be migrated from an LU whose storage class is "NearLine Storage" to an LU whose storage class is "Archive Storage."
  • the migration management section 110043A refers to the storage class management table 1100439, selects the LU3 that belongs to the "Archive Storage" class, and decides to transfer the file abc.doc to the LU3.
  • the LU3 has attributes of "STR1 (i.e., the other storage device 1a)" as the storage node, "SATA disk pool" as the disk pool #, and "remote file" as the LU type.
  • the migration management section 110043A changes the current life cycle stage to "archive stage" and the current storage class to "Archive Storage" in the dynamic property information of the file property information management table 1100438 for the abc.doc.
  • the migration management section 110043A defines a unique filename (in this case STR1-FILE00001) that is used to manage the file abc.doc within the storage device STR0 (1). The decision logic up to this point is sketched below.
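The following sketch traces that decision. The table layouts, the `next_storage_class` helper and the concrete dates are illustrative assumptions; only the rules themselves (life cycle "model 1" with a one-year reference stage, and "plan 1" mapping NearLine Storage to Archive Storage) are taken from the text.

```python
# Sketch of the life-cycle check performed by the migration management
# section 110043A; dates and helper names are illustrative.
from datetime import date

REFERENCE_STAGE_DAYS = 365          # "model 1": the reference stage lasts one year
MIGRATION_PLAN_1 = {"NearLine Storage": "Archive Storage"}

def next_storage_class(static_props, dynamic_props, today):
    """Return the destination storage class if the file must migrate, else None."""
    if static_props["life_cycle_model"] != "model 1":
        return None
    elapsed = (today - dynamic_props["stage_entered"]).days
    if dynamic_props["current_stage"] == "reference stage" and elapsed > REFERENCE_STAGE_DAYS:
        dynamic_props["current_stage"] = "archive stage"         # stage has shifted
        return MIGRATION_PLAN_1[dynamic_props["current_class"]]  # "plan 1"
    return None

static_props = {"life_cycle_model": "model 1", "migration_plan": "plan 1"}
dynamic_props = {"current_stage": "reference stage",
                 "current_class": "NearLine Storage",
                 "stage_entered": date(2003, 1, 15)}

print(next_storage_class(static_props, dynamic_props, date(2004, 2, 20)))
# -> Archive Storage: the migration management section then picks an LU of
#    that class from the storage class management table (here, the LU3).
```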
  • the migration management section 110043A behaves as if it were a NAS host and issues to the STR1 an open request for the file STR1-FILE00001.
  • from the perspective of the STR1, this open processing is executed in order to store the file for the first time.
  • the STR0 includes in the open request sent to the STR1 the information that the STR0 holds in the file property information management table 1100438 as the static property information of the file abc.doc.
  • the STR0 expressly designates to the STR1 that the file STR1-FILE00001 is to be stored in the Archive Storage class from the beginning.
  • the NCTL0 of the STR1 receives the open request via the LAN controller 11002a, and the file access control CPU 11001a executes the file system program 110043a.
  • the open request received is identified, in a manner similar to the first embodiment, as an access request to access the remote file system RFS3;
  • the STR1-FILE00001 is registered in the filename management table 1100436a in the file access control memory 11004a, and a file handler is assigned to the STR1-FILE00001 under the control of the file access control CPU 11001a;
  • a file storage management table 1100435a and a file property information management table 1100438a are created within the file access control memory 11004a, and the information to be registered in the tables is set.
  • the NCTL0 sends to the migration management section 110043A of the CHN0 the file handler assigned to the STR1-FILE00001, and the open processing is terminated. This exchange is sketched below.
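A sketch of the open exchange: the STR0's migration management section acts as a NAS host, and the NCTL0 of the STR1 registers the file and returns a file handler. The dictionaries and the `nctl0_open` helper are illustrative stand-ins for the actual file I/O interface messages, not the patent's implementation.

```python
def nctl0_open(request, str1_state):
    """STR1 side: register the filename, create the tables, return a handler."""
    handler = str1_state["next_handler"]
    str1_state["next_handler"] += 1
    str1_state["filename_table"][request["filename"]] = handler
    str1_state["property_tables"][handler] = {
        "static": request["static_properties"],   # copied from the STR0's table
        "dynamic": {},                            # filled in during the write
    }
    str1_state["storage_tables"][handler] = {"blocks": [], "links": None}
    return handler

str1_state = {"next_handler": 1, "filename_table": {},
              "property_tables": {}, "storage_tables": {}}

open_request = {
    "filename": "STR1-FILE00001",                 # name unique within the STR0
    "static_properties": {"life_cycle_model": "model 1",
                          "date_created": "2003-01-15"},
    "initial_storage_class": "Archive Storage",   # expressly designated by STR0
}
file_handler = nctl0_open(open_request, str1_state)
print(file_handler)   # returned to the migration management section 110043A
```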
  • the migration management section 110043A of the STR0 issues to the STR1 a write request containing the file handler obtained from the NCTL0 of the STR1 in the open processing, and requests to write the actual data of abc.doc (i.e., data that is also the actual data of FILE00001) as the actual data of the file STR1-FILE00001.
  • the file storage management section 1100433a of the STR1 secures the buffer regions required to store the write data, determines the storage positions on disks for the actual data of the file, and stores the write data received from the STR0 in the buffers.
  • the file storage management section 1100433a refers to the static property information in the file property information management table 1100438a.
  • the file storage management section 1100433a specifies the current life cycle stage of the file STR1-FILE00001 as "archive stage," because the life cycle model of the file STR1-FILE00001 is "model 1" and more than one year and one month have passed since the file was generated. Further, the file storage management section 1100433a specifies "Archive Storage" as the initial storage class, as designated by the STR0.
  • the file storage management section 1100433a sets the current life cycle stage as "archive stage" and the current storage class as "Archive Storage" in the life cycle information category of the dynamic property information of the file property information management table 1100438a.
  • the file storage management section 1100433a further performs the appropriate calculations for the access information regarding the file STR1-FILE00001 and updates the information in the access information category of the file property information management table 1100438a.
  • NULL is registered for all link destination entries in the file storage management table 1100435a of the file STR1-FILE00001.
  • the file storage management section 1100433 of the STR0 changes the link destination node name to STR1, the link destination FS name to LFS3, and the link destination filename to STR1-FILE00001 in the file storage management table 1100435 of the FILE00001 in the LFS2.
  • the file storage management section 1100433 then clears all buffer management tables 1100437 that can be referred to from pointers registered in the file storage management table 1100435 of the FILE00001 and enters NULL in all buffer management table entries of the file storage management table 1100435.
  • the CHN of the STR0 refers to the file storage management table 1100435 of the abc.doc in the LFS0, obtains its link destination node name, FS name and filename, and refers to the file storage management table 1100435 of the FILE00001 in the LFS2 based on the link destination identification information obtained (i.e., STR0, LFS2, FILE00001).
  • the CHN of the STR0 further obtains the link destination node name, FS name and filename from the file storage management table 1100435 of the FILE00001 in the LFS2 and issues to the NCTL of the STR1 an access request designating the link destination identification information obtained (i.e., STR1, LFS3, STR1-FILE00001), which allows the CHN of the STR0 to reach the STR1-FILE00001 in the RFS3 of the STR1 and to access the data section of the abc.doc via the NCTL of the STR1. This chain of link resolutions is sketched below.
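The chain amounts to following link-destination triplets until a table whose link entries are NULL is reached. A minimal sketch, with the file storage management tables reduced to a dictionary; the `resolve` helper is an illustrative assumption.

```python
# NULL links mark the table that actually holds the data blocks.
file_storage_tables = {
    ("STR0", "LFS0", "abc.doc"):        ("STR0", "LFS2", "FILE00001"),
    ("STR0", "LFS2", "FILE00001"):      ("STR1", "LFS3", "STR1-FILE00001"),
    ("STR1", "LFS3", "STR1-FILE00001"): None,
}

def resolve(node, fs, filename):
    """Follow link destinations until a table with NULL link entries is found."""
    key = (node, fs, filename)
    while file_storage_tables[key] is not None:
        key = file_storage_tables[key]    # issue an access request to this link
    return key

print(resolve("STR0", "LFS0", "abc.doc"))
# -> ('STR1', 'LFS3', 'STR1-FILE00001'): the data section is reached via the
#    NCTL of the STR1.
```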
  • the Archive Storage class, suitable for archiving, is thus selected for files in the "archive stage," i.e., the old age of their life cycle.
  • a hierarchical storage control is executed among storage devices in a system in which another storage device STR2 (1b) is connected to a storage device STR0 (1) via a network.
  • the third embodiment differs from the second embodiment in that, while in the second embodiment the network that connected the storage devices was the LAN 20 and file I/O interfaces were used between the storage devices, in the third embodiment the network that connects the storage devices is a SAN 35, a dedicated network for connection between the storage devices, and a block I/O interface is used between them.
  • the storage device STR2 (1b) is a storage device with a small-scale configuration similar to the storage device STR1 (1a) in the second embodiment, but instead of the NAS controller NCTL0 of the storage device STR1 (1a) in the second embodiment, the storage device STR2 (1b) has SAN controllers FCTLx. Each FCTLx is provided with an FC controller 11012b to connect with the SAN 35, but it does not have the file access control CPU 11001a or its peripheral circuits that the STR1 has, and does not perform file control. Otherwise, the storage device STR2 (1b) according to the present embodiment has a configuration similar to that of the storage device STR1 (1a) according to the second embodiment.
  • the SAN 35 is a dedicated network for connecting the storage device STR0 (1) to the storage device STR2 (1b), and no SAN hosts are connected to the SAN 35.
  • in the present embodiment, no SAN hosts are connected to the SAN 35, which is the network that connects the storage devices, and there is only one network that connects the storage devices.
  • alternatively, SAN hosts can be connected to the SAN 35, and a plurality of networks for connecting the storage devices can be provided to improve fault tolerance.
  • the storage device STR2 (1b) is under the control of the storage device STR0 (1), and file accesses from a NAS host 0 (400) reach the storage device STR2 (1b) via the storage device STR0 (1).
  • Such a configuration is hereinafter called a "connection of divergent storage devices."
  • a CHF1 (1111) of the storage device STR0 (1) recognizes that the storage device STR2 (1b), which is a divergent storage device, is connected to the SAN 35.
  • the CHF1 (1111) acts as an initiator, issues a command to collect information, and thereby recognizes that the STR2 (1b) is connected to the SAN 35.
  • the CHF1 (1111) treats the storage regions of the STR2 as if they were a disk pool within the primary storage device according to the first embodiment.
  • the CHN0 (1100) can use the disk pool via the CHF1 (1111). The management method of the disk pool is described later.
  • the CHN0 (1100) of the STR0 (1) becomes an initiator and issues a command to collect information via the CHF1 (1111) to the STR2.
  • the CHN0 (1100) of the STR0 (1) receives the response from the STR2 to the command via the CHF1 (1111) and recognizes from the information included in the response that the STR2 has a SATA disk pool with a low-cost, block-type LU having a 15D+1P configuration RAID 5 and a large capacity of 2100 GB; based on this, the CHN0 (1100) of the STR0 decides to manage the LU as a remote LU.
  • the CHN0 (1100) of the STR0 determines the storage class of the disk pool as "Archive Storage."
  • the CHN0 (1100) of the STR0 assigns the number LU4 to the LU inside the STR2 and stores in the disk pool management table 131 of the SM 13 the information concerning the LU, i.e., "Archive Storage" as the storage class #, "STR2" as the storage node #, "SATA pool" as the disk pool #, "LU4" as the LU #, "remote block" as the LU type, "RAID 5, 15D+1P" as the RAID Conf., and "2100 GB" as the usable capacity.
  • the disk pool management table 131 is then referred to, and the information concerning the LU is copied from the disk pool management table 131 to the storage class management table 1100451 in a file access control memory; this registration is written out below.
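For concreteness, the table entry described above can be written out as a record, together with its copy into the storage class management table; the dictionary shape is an illustrative assumption, while the values are those listed in the text.

```python
# The disk pool management table entry for the LU4; field names follow the
# columns enumerated above.
lu4_entry = {
    "storage_class": "Archive Storage",
    "storage_node": "STR2",
    "disk_pool": "SATA pool",
    "lu": "LU4",
    "lu_type": "remote block",     # block I/O: the STR2 performs no file control
    "raid_conf": "RAID 5, 15D+1P",
    "usable_capacity_gb": 2100,
}

# Copy of the entry into the storage class management table in the file
# access control memory, keyed by storage class (illustrative shape).
storage_class_management_table = {}
storage_class_management_table.setdefault(lu4_entry["storage_class"], []).append(lu4_entry)
print(storage_class_management_table["Archive Storage"][0]["lu"])   # -> LU4
```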
  • a migration management section 110043A of the CHN0 (1100) of the STR0 has decided to migrate a file abc.doc from the NearLine Storage class to the Archive Storage class; the following is a description of the migration processing executed based on this assumption.
  • the migration management section 110043A of the STR0 refers to a storage class management table 1100439, selects the LU4 that belongs to the "Archive Storage" class, and decides to transfer the file abc.doc to the LU4.
  • the LU4 has attributes of "STR2 (i.e., the other storage device 1b)" as its storage node, "SATA disk pool" as its disk pool #, and "remote block" as its LU type.
  • the file system program stored in the CHN0 (1100) constructs a local file system LFS4 in the LU4. Because the disk pool in which the LU4 is set resides in the other storage device STR2, it is, from the perspective of the STR0, a "remote" disk pool and the LU4 is a remote LU; however, since the file system LFS4 set in the LU4 is controlled by the CHN0 (1100), the file system LFS4 is managed as a local file system.
  • a file storage management table is treated differently in the present embodiment compared to its treatment in the first and the second embodiments.
  • a file storage management section 1100433 of the CHN0 (1100) of the STR0 assigns "STR2" as the link destination node name, "LFS4" as the link destination FS name, and "STR2-FILE00001" as the link destination filename, and sets these in the file storage management table for the file abc.doc.
  • the CHN0 (1100) can alternatively set the assigned link destination node name, link destination FS name and link destination filename in the file storage management table for the file FILE00001 in the LFS2. Since the STR2, in which the LU4 actually exists, does not execute file access control as described earlier, no file storage management table for the STR2-FILE00001 is created in the STR2.
  • the processing that takes place when the file system program 110043 of the CHN0 (1100) is executed is the same as the processing on the local file system according to the first embodiment in terms of the file open processing, write processing and migration processing, except that the processing is executed with the awareness that the link destination node of the file abc.doc (i.e., the storage device in which the actual data of the file abc.doc is stored) is the STR2.
  • a CHF communications driver section is realized by having the disk array control CPU 11008 execute the CHF communications driver program 110096.
  • the CHF communications driver section sends a disk input/output command (hereinafter called an "I/O command") to the SM 13.
  • address information representing the storage positions of the data is included in the I/O command.
  • the CHF1 (1111) receives the I/O command via the SM 13 and, based on the I/O command received, issues an I/O command to the storage device 1b (STR2) via the SAN 35.
  • the I/O command issued by the CHF1 (1111) includes address information representing the data storage positions within the storage device 1b (STR2).
  • the storage device 1b processes the I/O command received from the CHF1 (1111) according to the same procedure applied when a disk I/O command is received from a normal host.
  • the CHF1 of the STR0 is thus recognized as a host from the perspective of the STR2; this forwarding path is sketched below.
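The forwarding path of the preceding steps (CHN driver to the SM 13, CHF1 to the SAN 35, the STR2 handling the command as if it came from a normal host) can be sketched as below; the queue and handler names are illustrative assumptions, not the patent's implementation.

```python
from collections import deque

sm_queue = deque()    # stand-in for I/O commands passed through the SM 13

def chn_post_io(address, op):
    # CHF communications driver section on the CHN side (program 110096).
    sm_queue.append({"op": op, "address": address})

def chf1_forward(deliver_to_str2):
    # The CHF1 receives each I/O command via the SM and reissues it over the
    # SAN 35, with the address information pointing into the STR2.
    while sm_queue:
        deliver_to_str2(sm_queue.popleft())

def str2_handle(cmd):
    # The STR2 processes the command exactly as one received from a normal host.
    print("STR2 executes", cmd["op"], "at", cmd["address"])

chn_post_io({"lu": "LU4", "block": 2048}, "write")
chf1_forward(str2_handle)   # -> STR2 executes write at {'lu': 'LU4', 'block': 2048}
```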
  • the disk pool of the divergent storage device STR2 provided with the block I/O interface can thus be treated as one of the disk pools of the storage device STR0, and a file system managed by the STR0 can be constructed on the LU that is in the disk pool of the STR2. Furthermore, because files stored in an LU of the STR0 can be migrated to the LU within the STR2, a flexible storage hierarchy with superior cost effectiveness can be constructed.
  • FIG. 15 is a diagram of an example of the system configuration according to the present embodiment.
  • a storage device STR3 (1c) is provided with a DKC 70 and disk pools.
  • the SW 71 is a switch.
  • the NNODEs (72x) are NAS nodes, each provided with a file I/O control mechanism to connect with a LAN.
  • the FNODEs (73x) are FC nodes, each provided with a block I/O control mechanism to connect with a SAN.
  • the INODEs (74x) are IP nodes, each provided with an IP network control mechanism to connect with an IP network.
  • the DNODEs (75x) are disk controller nodes, each provided with a disk control mechanism to connect with a disk pool.
  • a node to control iSCSI can be connected to the switch SW 71 to form an IP SAN.
  • such a node to control iSCSI would have functions and a configuration similar to those of the FNODEs.
  • the DNODE0 and the DNODE1 are connected to and control two types of disk pools, a disk pool 0 and a disk pool 1, which are an FC disk pool 170 and a SATA disk pool 171.
  • the INODE2 and the INODE3 are connected to a NAS-type divergent storage device STR1 (1a), which is external to the storage device STR3 and is a storage device provided with the file I/O interfaces described in the second embodiment.
  • the FNODE2 and the FNODE3 are connected to a SAN-type divergent storage device STR2 (1b), which is external to the storage device STR3 and is a storage device provided with the block I/O interfaces described in the third embodiment.
  • FIG. 16 is a diagram of an example of the configuration of the NNODE.
  • the NNODE 720 is equivalent to the CHN 1100 shown in FIG. 4 with the inter-CPU communications circuit 11007 and the components below it removed and replaced by an SW node controller 7204.
  • the other components are the same as in the CHN 1100 in terms of configuration and function.
  • the SW node controller 7204 is a controller circuit for connecting with the SW 71; it forms commands, data and control information into the internal frame formats that are sent and received within the storage device STR3 (1c) and sends them as disk I/O to other nodes such as the DNODEs.
  • FIG. 17 is a diagram of an example of the configuration of the FNODE.
  • the FNODE 730 has a configuration in which an SW node controller 7302 is connected to the FC controller 11012b of the FCTL 1100b in FIG. 14, which makes the FNODE 730 capable of connecting with the SW 71 via the SW node controller 7302.
  • the FC controller 7301 operates as a target device and sends and receives frames of commands, data and control information to and from the SAN.
  • the SW node controller 7302 converts frames sent or received by the FC controller 7301 into the internal frame formats of the storage device STR3 (1c) and sends or receives the converted frames to and from other nodes, such as the DNODEs.
  • the FNODE 73x operates as an initiator device and, based on disk I/O commands received from the NNODEs or other FNODEs, can send I/O commands to other storage devices connected externally to the storage device STR3.
  • the FNODE2 and the FNODE3 in FIG. 15 can send I/O commands to the divergent storage device STR2 (1b) externally connected to the storage device STR3.
  • the FNODE2 and the FNODE3 appear to be operating as host computers from the perspective of the STR2.
  • although only the FC controller 7301 and the SW node controller 7302 are shown in FIG. 17 for the sake of simplification, a CPU can be mounted on the FNODEs in order to perform target processing, initiator processing or internal frame generation processing.
  • a node that controls iSCSI can be configured; by connecting such a node to the SW 71, an IP SAN can be configured.
  • FIG. 18 is a diagram of an example of the configuration of the INODE.
  • the INODE 740 has a configuration in which an SW node controller 7402 is connected to a LAN controller 7401, which is similar to the LAN controller 11002a of the NCTL0 (1100a) in FIG. 13; this configuration makes the INODE 740 capable of connecting with the SW 71 via the SW node controller 7402.
  • the INODEs are provided on the storage device STR3 (1c) in order to connect the external NAS-type storage device STR1 (1a) to the STR3.
  • FIG. 19 is a diagram of an example of the configuration of the DNODE.
  • the DNODE 750 is similar to the FCTL 1100b in FIG. 14, but with the FC controller 11012b removed and replaced with an SW node controller 7501.
  • the DNODE 750 goes into operation when it receives a disk I/O command from one of the NNODEs or FNODEs via the SW 71; as a result, the section 1d outlined by a broken line in FIG. 15 operates as if it were the independent storage device STR2 in FIG. 14.
  • the DNODE0 (750) and the DNODE1 (751) form a pair of redundant controllers. Having redundant DNODEs is similar to the configuration of the storage device STR2 in FIG. 14, where there are also redundant FCTLs. The frame routing among these node types is sketched below.
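The division of labor among the node types of FIG. 15 can be pictured as a toy switch that forwards internal frames; the frame format and the handler functions are illustrative assumptions, not the internal frame format of the STR3.

```python
class Switch:
    """Toy stand-in for the SW 71: forwards internal frames by destination."""
    def __init__(self):
        self.nodes = {}
    def attach(self, name, handler):
        self.nodes[name] = handler
    def send(self, frame):
        self.nodes[frame["dst"]](frame)

sw71 = Switch()
# DNODEs control the internal disk pools; FNODEs and INODEs act as gateways
# to the external STR2 (block I/O) and STR1 (file I/O), respectively.
sw71.attach("DNODE0", lambda f: print("DNODE0: disk I/O on disk pool 0:", f["cmd"]))
sw71.attach("FNODE2", lambda f: print("FNODE2: reissue on the SAN to STR2:", f["cmd"]))
sw71.attach("INODE2", lambda f: print("INODE2: forward over IP to STR1:", f["cmd"]))

def nnode_issue(dst, cmd):
    # The NNODE's SW node controller wraps the command in an internal frame.
    sw71.send({"src": "NNODE0", "dst": dst, "cmd": cmd})

nnode_issue("DNODE0", "read LU0 block 17")      # hierarchical control inside STR3
nnode_issue("FNODE2", "write LU4 block 2048")   # external block-type STR2
nnode_issue("INODE2", "write STR1-FILE00001")   # external NAS-type STR1
```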
  • the present embodiment differs from the first, second and third embodiments only in the configuration of the storage device, and its processing procedure for executing a hierarchical storage control is similar to that in the first, second and third embodiments; accordingly, only those parts of the operation that differ as a result of the differences in the configuration of the storage device are described below.
  • a hierarchical storage control inside the storage device STR3 can be executed using a procedure similar to that in the first embodiment.
  • a file system program 110043 stored in the file access control memory of the NNODE 72x is equipped with a storage class management table 1100439 for managing usable LUs, and can recognize the disk pools and LUs managed by the DNODEs 75x by referring to the storage class management table 1100439.
  • an SM node for connecting with an SM can be provided for connection with the SW 71 in the present embodiment, so that the storage class management table 1100439 can consist of information stored in the SM, as in the first embodiment.
  • the NNODE 72x specifies a usable disk pool and an LU, creates the storage class management table 1100439, and defines a storage class.
  • a processing similar to that in the first embodiment can subsequently be applied to execute a hierarchical storage control within the storage device STR3 (1c), i.e., a hierarchical storage control using the LUs set in the disk pool 0 and the disk pool 1.
  • an SW node driver program stored in the file access control memory 7203 of the NNODE is executed by the file access control CPU 7202, which causes a disk I/O command to be issued via the SW node to the DNODE 750 that manages the LU that is the subject of the access.
  • the NAS-type divergent storage device STR1 (1a) provided with file I/O interfaces can be connected externally to the storage device STR3, which results in a configuration of a storage hierarchy as in the second embodiment.
  • when the file system program 110043 stored in the file access control memory 7203 of the NNODE 72x is executed by the file access control CPU 7202, the file system program 110043 queries the INODE 74x whether there is a NAS-type divergent storage device connected to the INODE 74x; if there is, the file system program 110043 obtains from the divergent storage device the information for identifying the remote LUs and remote file systems that are in the divergent storage device.
  • a storage class is defined for each of the remote LUs and remote file systems, and the information concerning the LUs is registered and managed in the storage class management table 1100439. Subsequent steps are the same as in the processing procedure in the second embodiment.
  • the SW node driver program stored in the file access control memory 7203 of the NNODE is executed by the file access control CPU 7202, which causes a disk I/O command to be issued from the NNODE via the SW node to the INODE 740 connected to the storage device STR1 (1a) that is provided with the LU that is the subject of the access.
  • the INODE 740 issues to the storage device STR1 (1a) a disk I/O command for a file access, and also sends and receives the actual data of the file and control information to and from the STR1 (1a).
  • the INODEs 74x have no involvement whatsoever in the file control information and operate simply as gateways of an IP network. In such a case, a hierarchical storage configuration without any interference from other devices, such as NAS hosts, can be realized.
  • alternatively, the divergent storage device STR1 (1a) can be connected to the LAN 20, to which the NNODE 720 is connected, as in the second embodiment.
  • the SAN-type divergent storage device STR2 (1b), which is a storage device provided with block I/O interfaces, can be connected externally to the storage device STR3, which results in a configuration of a storage hierarchy as in the third embodiment.
  • the NNODE queries the FNODEs 73x whether there is a SAN-type divergent storage device connected to the FNODEs 73x.
  • the NNODE recognizes the remote LUs of the divergent storage device based on the contents of the response from the FNODEs 73x to the query and constructs local file systems in the remote LUs.
  • the NNODE then defines a storage class for each of the remote LUs and local file systems, and registers and manages the information concerning the LUs in the storage class management table 1100439. Subsequent steps are the same as in the third embodiment.
  • the SW node driver program is executed by the file access control CPU 7202, which causes a disk I/O command to be issued from the NNODE via the SW node to the FNODE 732 connected to the storage device STR2 (1b), which is provided with the LU that is the subject of the access.
  • the FNODE 732 issues to the storage device STR2 a disk I/O command, and also sends and receives data and control information to and from the STR2.
  • the storage device STR3 behaves as if it were a central controller for constructing a hierarchical storage system, and various types of storage devices can be connected internally and externally to the storage device STR3; consequently, an extremely flexible, scalable and large-scale hierarchical storage system can be constructed. Furthermore, because disks and other storage devices can be connected internally and externally to the storage device STR3 as nodes on the SW 71 of the storage device STR3, high-speed data transfer becomes possible.
  • files can be transferred based on standards other than the data life cycle stage, and a plurality of standards can be combined. Possible other standards include a file's access property and an LU's used capacity. In such cases, the transfer of files can be controlled by providing a migration plan based on the file's access property or the LU's used capacity.
  • examples of migration plans based on a file's access property include a plan to re-transfer a file into the storage class one class higher in the hierarchy when the access frequency of the file exceeds a certain level, or a plan that provides a storage class specialized for sequential accesses and transfers a file into that specialized storage class once the sequential access frequency to the file exceeds a certain level.
  • examples of migration plans based on an LU's used capacity include a plan that, once the used capacity of an LU exceeds a certain level, transfers a file stored in that LU to the storage class one class lower in the hierarchy, even if the file's current life cycle stage has not shifted, provided the file has low access frequency or a long time has elapsed since its date of creation.
  • the file property information management table 1100438 in the above embodiments manages the dynamic properties of each file as access information.
  • the storage class management table 1100439 manages the total capacity and used capacity of each LU. By utilizing such information, the migration plans described above can be readily realized, as sketched below.
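Such plans reduce to simple predicates over the access information and capacity figures that these tables manage. The following is a minimal sketch with illustrative thresholds and helper names; only the trigger conditions themselves are taken from the text.

```python
def plan_by_access_property(access_info):
    # Re-transfer a file one class higher once its access frequency exceeds
    # a certain level, or move it to a sequential-access-optimized class.
    if access_info["accesses_per_day"] > 100:
        return "promote one storage class"
    if access_info["sequential_ratio"] > 0.9:
        return "move to sequential-access storage class"
    return None

def plan_by_used_capacity(lu, access_info):
    # Once the LU's used capacity exceeds a certain level, demote files that
    # are cold or old, even if their life cycle stage has not shifted.
    if lu["used_gb"] / lu["usable_gb"] > 0.8:
        if access_info["accesses_per_day"] < 1 or access_info["age_days"] > 365:
            return "demote one storage class"
    return None

lu2 = {"usable_gb": 1120, "used_gb": 1000}
cold = {"accesses_per_day": 0.2, "sequential_ratio": 0.1, "age_days": 500}
print(plan_by_used_capacity(lu2, cold))   # -> demote one storage class
```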
  • a hierarchical storage control according to the properties of a file can thus be realized through processing within a storage device, without being dependent on host computers.

Abstract

A storage device is provided with a file I/O interface control device and a plurality of disk pools. The file I/O interface control device sets one of a plurality of storage hierarchies defining storage classes, respectively, for each of the LUs within the disk pools, thereby forming a file system in each of the LUs. The file I/O interface control device migrates at least one of the files from one of the LUs to another one of the LUs of an optimal storage class, based on static properties and dynamic properties of each file.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0001]
  • The present invention relates to a storage device used in a computer system. [0002]
  • 2. Related Background Art [0003]
  • In a conventional system called a hierarchical storage, a high-speed storage device and a low-speed storage device are connected to a computer. In the system, files that are frequently used are stored in the high-speed storage device, such as a magnetic disk device, while files that are not frequently used are stored in the inexpensive, low-speed storage device, such as a tape device. Which files should be placed, i.e., stored, in which storage device is determined by using a table that manages the access frequency of each file. [0004]
  • In another conventional system, a plurality of logical storage devices having different processing speeds and storage capacities are configured within a storage device that is connected to a computer and used. Such a system may be represented by disk array subsystems. In this system, the storage device manages as statistical information the frequency of accesses from the computer to data stored in the storage device, and, based on the statistical information, transfers data with high access frequency to logical storage devices with higher performance. [0005]
  • The first set of problems entailed in the prior art is that there is a high dependency on the computer connected to the storage device, that there is a limitation in the system configuration, and that it is difficult to simplify the system management. [0006]
  • In the conventional system described above, a hierarchical storage control is realized through software operating on the computer. The hierarchical storage control refers to a data storage control for controlling a plurality of storage regions having different processing speeds and storage capacities such that the storage regions can be changed according to the frequency of data usage. In other words, the hierarchical storage control refers to controlling to select, based on a property of the data such as the frequency of data usage, an appropriate storage region from among a plurality of storage regions having different properties in terms of processing speed and/or storage capacity, and to store the data in the storage region selected. However, when the system configuration is altered, such as when an old computer is replaced by a new one, maintaining the system can be difficult for such reasons as the new computer being unable to take over the software's control information. [0007]
  • Also in the conventional system described above, although a hierarchical storage control is implemented on a per-logical storage device basis, neither a technology for the storage device to recognize the data structure of the data stored in the logical storage device nor a technology for executing exclusive control is disclosed. As a result, it would be difficult for a plurality of computers to share the same logical storage devices, and integrating the storage devices used by a plurality of computers in order to reduce the management cost of the computer system would require imposing certain limitations on the configuration of the computer system, such as allocating a logical storage device for each computer. [0008]
  • The second problem is that optimal placement of data according to the life cycle or type of data is difficult. [0009]
  • According to the conventional technology, data that had high access frequency in the past is assumed to have high access frequency in the future as well, and the storage regions in which the data is stored are determined based on statistical information regarding data access frequency and on used capacity of storage regions that can be accessed at high-speed. The processing efficiency can be improved by increasing the probability with which data with high access frequency can reside in a storage device that can be accessed at high-speed. However, there are no technologies disclosed for determining storage regions in which to store data by taking into consideration differences in data properties that are dependent on the data's life cycle stage, i.e., the time elapsed since the corresponding file was generated, the type of application that generates and uses the data, and the type of data itself. [0010]
  • The third problem is that the effect of the hierarchical storage control is small. [0011]
  • Although the conventional system described above executes a hierarchical storage control by taking advantage of the difference in capacity and price between magnetic tapes and magnetic disks, that difference has been growing smaller in recent years; consequently, the effect of cost optimization and cost reduction through the use of hierarchical storage control has also been growing smaller. Furthermore, because the access speed of magnetic tapes is extremely slow compared to that of magnetic disks, it is difficult to use magnetic tapes as a storage device for online access. [0012]
  • In the conventional system described above, a hierarchical storage control is executed by taking advantage of the difference in price and performance resulting from different RAID configurations of magnetic disks; however, since the price difference results only from the difference in the degree of redundancy in RAID configurations, the only cost reduction that can be hoped for is the cost reduction equivalent only to the difference in the degree of redundancy. [0013]
  • SUMMARY OF THE INVENTION
  • The present invention relates to a control method or a storage device that can execute a hierarchical storage control for file storage positions without being dependent on the OS or applications executed on a host computer. [0014]
  • The present invention also relates to a hierarchical storage control method for a plurality of computers to share files, or to a storage device that executes such a hierarchical storage control. [0015]
  • The present invention also relates to a control method or a storage device that can execute a hierarchical storage control according to file properties. [0016]
  • The present invention further relates to a hierarchical storage control method with a high cost reduction effect, or to a storage device that executes such a hierarchical storage control. [0017]
  • In accordance with an embodiment of the present invention, a storage device comprises a plurality of storage regions having different properties, an interface control device that accepts from one or more computers access requests containing file identification information, and an interface control device for accessing storage regions that store data of the file designated by identification information, wherein the interface control device controls storage of file data in one of a plurality of storage regions according to the file property. [0018]
  • Other features and advantages of the invention will be apparent from the following detailed description, taken in conjunction with the accompanying drawings that illustrate, by way of example, various features of embodiments of the invention.[0019]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a diagram of an example of the configuration of a computer system in accordance with an embodiment of the present invention. [0020]
  • FIG. 2 is a diagram of an example of the exterior appearance of a storage device. [0021]
  • FIG. 3 is a diagram of an example of the exterior appearance of an adapter board. [0022]
  • FIG. 4 is a diagram of an example of the configuration of a NAS channel adapter. [0023]
  • FIG. 5 is a diagram of an example of programs stored in a file system control memory. [0024]
  • FIG. 6 is a diagram of an example of programs stored in a disk array control memory. [0025]
  • FIG. 7 is a diagram of an example of the relationship among disk pools, LUs and file systems. [0026]
  • FIG. 8 is a diagram of an example of a storage class management table. [0027]
  • FIG. 9 is a diagram of an example of a filename management table. [0028]
  • FIG. 10 is a diagram of an example of a file storage management table and a buffer management table. [0029]
  • FIG. 11 is a diagram of an example of a file property information management table. [0030]
  • FIG. 12 is a diagram of an example of a file storage management table. [0031]
  • FIG. 13 is a diagram of an example of the second configuration of a system in accordance with another embodiment of the present invention. [0032]
  • FIG. 14 is a diagram of an example of the third configuration of the system in accordance with another embodiment of the present invention. [0033]
  • FIG. 15 is a diagram of an example of the fourth configuration of the system in accordance with another embodiment of the present invention. [0034]
  • FIG. 16 is a diagram of a configuration example of a NAS node. [0035]
  • FIG. 17 is a diagram of a configuration example of a Fibre Channel node. [0036]
  • FIG. 18 is a diagram of a configuration example of an IP node. [0037]
  • FIG. 19 is a diagram of a configuration example of a disk array node.[0038]
  • PREFERRED EMBODIMENTS OF THE INVENTION
  • The following is a description of embodiments of the present invention. The following embodiments do not limit the present invention. [0039]
  • Embodiment 1
  • (1) Example of System Configuration (FIG. 1) [0040]
  • FIG. 1 is a diagram indicating an example of a computer system including a storage device 1 (also called a storage system), to which the present invention is applied. In the following, x may be any integer. [0041]
  • The storage device 1 is a disk array system comprising a disk controller (hereinafter called "DKC") 11 and a plurality of magnetic disk devices (hereinafter simply called "disks") 170x and 171x. In the present embodiment, the storage device 1 is provided with two types of disks, 170x and 171x. The 170x are Fibre Channel (hereinafter called "FC") disks with an FC-type interface, while the 171x are serial ATA (hereinafter called "SATA") disks with a SATA-type interface. A plurality of FC disks 170x makes up an FC disk pool 0 (170), while a plurality of SATA disks 171x makes up a SATA disk pool 1 (171). The disk pools will be described in detail later. [0042]
  • Next, the configuration of the DKC 11 of the storage device 1 will be described. The DKC 11 comprises one or more NAS channel adapters 110x, one or more Fibre Channel adapters 111x, a plurality of disk adapters 12x, a shared memory 13 (hereinafter called "SM"), a shared memory controller 15 (hereinafter called "SMC"), a cache memory 14 (hereinafter called "CM"), and a cache memory controller 16 (hereinafter called "CMC"). [0043]
  • The NAS channel adapters (hereinafter called "CHN") 110x are interface control devices connected by file I/O interfaces to computers 40x (hereinafter called "NAS hosts"), which are connected to a local area network (hereinafter called "LAN") 20 or a LAN 21. [0044]
  • The Fibre Channel adapters (hereinafter called "CHF") 111x are interface control devices connected by block I/O interfaces to computers (hereinafter called "SAN hosts") 50x, which are connected to a storage area network (hereinafter called "SAN") 30. Hereinafter, CHNs and CHFs are collectively called channel adapters (hereinafter called "CH"). [0045]
  • The disks 17x are connected to the disk adapters 12x. Each disk adapter (hereinafter called "DKA") 12x controls input and output to and from one or more disks 17x connected to itself. [0046]
  • The SMC 15 is connected to the CHNs 110x, the CHFs 111x, the DKAs 12x and the SM 13, and controls data transfer among them. The CMC 16 is connected to the CHNs 110x, the CHFs 111x, the DKAs 12x and the CM 14, and controls data transfer among them. [0047]
  • The SM 13 stores a disk pool management table 131. The disk pool management table 131 is information used to manage the configuration of the disk pools. [0048]
  • The LANs 20 and 21 connect the CHNs 110x to the NAS hosts 40x. Generally, Ethernet® is used for a LAN. The SAN 30 connects the CHFs 111x to the SAN hosts 50x. Generally, Fibre Channel is used for a SAN. However, an IP network can be used as the SAN, such that iSCSI, by which SCSI commands according to the SCSI protocol are encapsulated into IP packets for sending and receiving, is used among the equipment connected to the SAN. The SAN 35 according to the present embodiment is a dedicated SAN for connecting the storage device 1, and no SAN hosts are connected to the SAN 35. [0049]
  • In the storage device 1, all CHs can access the CM 14, the SM 13, any DKAs 12x and any disks 17x via the CMC 16 or the SMC 15. [0050]
  • The storage device 1 shown in FIG. 1 has both the SAN interfaces (CHFs 111x) for connecting to the SAN hosts 50x and the NAS interfaces (CHNs 110x) for connecting to the NAS hosts 40x, but the present embodiment can be implemented even if the storage device 1 has only the NAS interfaces. [0051]
  • (2) Example of Exterior Appearance of Storage Device (FIG. 2) [0052]
  • FIG. 2 is a diagram of an example of the exterior appearance of the storage device 1. [0053]
  • A DKC unit 19 stores the CHNs 110x, the CHFs 111x, the DKAs 12x, the SM 13 and the CM 14, which are components of the DKC 11. The SM 13 actually comprises a plurality of controller boards 13x. The CM 14 also comprises a plurality of cache boards 14x. Users of the storage device 1 can increase or decrease the number of such boards in order to configure the storage device 1 with the CM 14 and the SM 13 having the desired storage capacity. Disk units (hereinafter called "DKU") 180 and DKUs 181 store the disk pool 170 and the disk pool 171, respectively. [0054]
  • Adapter boards with the built-in CHNs 110x, CHFs 111x and DKAs 12x, as well as the controller boards 13x and the cache boards 14x, are stored in slots 190. According to the present embodiment, the shape of the slots 190, the size of the adapter boards and the shape of the connectors are made uniform regardless of the type of adapter board or the type of interface, which maintains compatibility among the various types of boards. As a result, in the DKC unit 19, any adapter board can be mounted into any slot 190 regardless of the type of the adapter board or the type of the interface. Furthermore, users of the storage device 1 can freely select the number of adapter boards for the CHNs 110x and the CHFs 111x, and mount the selected numbers of CHNs 110x and CHFs 111x into the slots 190 of the DKC unit 19. [0055]
  • (3) Example of Exterior Configuration of Adapter Board (hereinafter called "NAS board") with the CHN 110x Built-in (FIG. 3) [0056]
  • FIG. 3 is a diagram of an example of the exterior configuration of a NAS board. A connector 11007 is connected to a connector of the DKC unit 19. An interface connector 2001 is Ethernet®-compatible and can be connected to Ethernet®. [0057]
  • According to the present embodiment, because the shape of the connector on the adapter boards is uniform regardless of the type of the adapter board, as described earlier, the adapter boards with built-in CHNs 110x and the adapter boards with built-in CHFs 111x have connectors of the same shape. On the adapter boards with the built-in CHFs 111x, the interface connector 2001 is Fibre Channel-compatible and configured to be connected to Fibre Channel. [0058]
  • (4) Example of Configuration of NAS Board (or CHN) (FIG. 4) [0059]
  • FIG. 4 is a diagram of an example of the configuration of the CHN 110x. A file access control CPU 11001 is a processor for controlling file access. A LAN controller 11002 is connected to the LAN 20 via the interface connector 2001 and controls the sending and receiving of data to and from the LAN 20. A file access control memory 11004 is connected to the file access control CPU 11001. The file access control memory 11004 stores programs executed by the file access control CPU 11001 and control data. [0060]
  • A disk array control CPU 11008 is a processor for controlling a disk array. The disk array refers to a storage device consisting of a plurality of disks. Disk arrays in which at least one of a plurality of disks stores redundant data to provide fault tolerance are called RAIDs. RAIDs are described later. A disk array control memory 11009 is connected to the disk array control CPU 11008 and stores programs executed by the disk array control CPU 11008 and control data. An SM I/F control circuit 11005 is a circuit for controlling access from the CHNs 110x to the SM 13. A CM I/F control circuit 11006 is a circuit for controlling access from the CHNs 110x to the CM 14. An inter-CPU communications circuit 11007 is a communications circuit used when the file access control CPU 11001 communicates with the disk array control CPU 11008 in order to access disks. [0061]
  • The present embodiment indicates an example of an asymmetrical multiprocessor configuration in which two processors, the file access control CPU 11001 and the disk array control CPU 11008, are mounted on each CHN 110x; however, each CHN 110x can be configured by mounting a single processor that executes both the file access control and the disk array control, or as a symmetrical multiprocessor configuration in which two or more processors are mounted as equivalents to execute the file access control and the disk array control. [0062]
  • The configuration of each CHF 111x is the configuration shown in FIG. 4, except that the components shown in the top half of FIG. 4, namely the LAN controller 11002, the file access control CPU 11001, the file access control memory 11004 and the inter-CPU communications circuit 11007, are replaced by a Fibre Channel controller. [0063]
  • (5) Example of Programs Stored in File Access Control Memory (FIG. 5) [0064]
  • FIG. 5 is a diagram of an example of the programs and control data stored in the file access control memory 11004 of the CHN 110x. An operating system program 110040 is used for the management of the programs as a whole and for input/output control. A LAN controller driver program 110041 is used for the control of the LAN controller 11002. A TCP/IP program 110042 is used for the control of TCP/IP, the communications protocol for the LAN. A file system program 110043 is used for managing files stored in the storage device 1. A network file system program 110044 is used for controlling NFS and/or CIFS, which are protocols for providing files stored in the storage device 1 to the NAS hosts 40x. A volume control program 110045 is used for controlling the configuration of each logical volume by combining a plurality of logical disk units (hereinafter called "LU"), each of which is a unit of storage region set within the disk pools 17x. An inter-CPU communications driver program 110046 is used for controlling the inter-CPU communications circuit 11007, which is used for communication between the file access control CPU 11001 and the disk array control CPU 11008. [0065]
  • The file system program 110043 includes the following: [0066]
  • 1) a file open processing section 1100431 for executing a file open processing when using a file; [0067]
  • 2) a request processing section 1100432 for executing a processing according to a file access request when a file access request is received; [0068]
  • 3) a file storage management section 1100433 for dividing each file into blocks, determining the storage position on a disk for each block, and managing the storage position of each block; [0069]
  • 4) a buffer management section 1100434 for managing the correlation between each block and a buffer formed in the memory; [0070]
  • 5) a file storage management table 1100435 for managing the addresses of the storage regions on disks that store the blocks that make up each file; [0071]
  • 6) a filename management table 1100436 for managing the filenames of open files and the file handlers used to access the file storage management table 1100435 of each file; [0072]
  • 7) a buffer management table 1100437 for managing buffer addresses indicating the storage regions within buffers corresponding to the blocks that make up a file; [0073]
  • 8) a file property information management table 1100438 for storing file static properties, such as the file type, the application that generated the file and the intent of the file generator, and file dynamic properties, such as the value of the file that varies according to the file's life cycle stage and the file's access properties; [0074]
  • 9) a migration management section 110043A used when executing a processing to migrate files between LUs; and [0075]
  • 10) a storage class management table 1100439 that registers, for each LU in the storage pool, a storage class, described later, and identification information of the storage device in which the LU resides. A minimal sketch of these structures follows the list. [0076]
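The relationships among these tables can be pictured with a short sketch; the Python dataclasses below are illustrative assumptions about shape, not the patent's data layout.

```python
from dataclasses import dataclass, field

@dataclass
class BufferManagementTable:          # 1100437: one per logical block
    flags: str = "invalid"            # valid / in use / not yet on disk
    equipment_number: tuple = ("", "")  # storage device and LU holding the block
    block_number: int = 0
    number_of_bytes: int = 0
    buffer_pointer: bytes = b""

@dataclass
class FileStorageManagementTable:     # 1100435: one per file
    size_bytes: int = 0
    blocks: list = field(default_factory=list)           # -> 1100437 entries
    property_table: dict = field(default_factory=dict)   # -> 1100438

@dataclass
class FileSystem:
    filename_table: dict = field(default_factory=dict)       # 1100436: name -> handler
    storage_tables: dict = field(default_factory=dict)       # handler -> 1100435
    storage_class_table: dict = field(default_factory=dict)  # 1100439: class -> LUs

lfs0 = FileSystem()
lfs0.filename_table["abc.doc"] = 1
lfs0.storage_tables[1] = FileStorageManagementTable(size_bytes=52432)
```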
  • (6) Configuration of Disk Array Control Memory (FIG. 6) [0077]
  • FIG. 6 is a diagram of an example of the programs stored in the disk array control memory 11009. An operating system program 110090 is used for managing the programs as a whole and for controlling input/output. A disk array control program 110091 is used for constructing LUs within the disk pools 17x and for processing access requests from the file access control CPU 11001. A disk pool management program 110092 is used for managing the configuration of the disk pools 17x by using the information in the disk pool management table 131 stored in the SM 13. An inter-CPU communications driver program 110093 is used for controlling the inter-CPU communications circuit 11007, which is used for communication between the file access control CPU 11001 and the disk array control CPU 11008. A cache control program 110094 is used for managing data stored in the CM 14 and for controlling cache hit/miss judgments. A DKA communications driver program 110095 is used, when accessing an LU, to communicate with the DKAs 12x, which control the disks 170x and 171x that make up the LU. [0078]
  • (7) Configuration of Disk Pools (FIG. 7) [0079]
  • FIG. 7 is a diagram of an example of the configuration of the disk pools. [0080]
  • In the FC disk pool 170 are set two LUs, LU0 (50) and LU1 (51). The LU0 (50) comprises two FC disks, DK000 and DK010, which make up a RAID 1. The LU1 (51) consists of five FC disks, DK001, DK002, DK003, DK004 and DK005, which make up a 4D+1P configuration RAID 5. RAID 1 and RAID 5 refer to data placement methods in a disk array and are discussed in detail in "A Case for Redundant Arrays of Inexpensive Disks (RAID)" by D. Patterson, et al., ACM SIGMOD Conference Proceedings, 1988, pp. 109-116. In the LU0 with the RAID 1 configuration, the two FC disks DK000 and DK010 have a mirror relationship with each other. Meanwhile, the LU1 having the RAID 5 configuration consists of one or more disks that store data stripes, which store the data of files accessed from host computers, and one or more disks that store parity stripes, which are used to recover the data stored in the data stripes. The LU1 has the 4D+1P configuration RAID 5, which indicates a RAID 5 consisting of four data stripes and one parity stripe. Similar representations are used hereinafter to indicate the number of data stripes and parity stripes in LUs having the RAID 5 configuration. [0081]
  • The LU2 (52) is established in the SATA disk pool 171. The LU2 (52) consists of nine SATA disks, DK100, DK101, DK102, DK103, DK104, DK110, DK111, DK112 and DK113, which make up an 8D+1P configuration RAID 5. [0082]
  • When the capacity of each disk is 140 GB, the LU0 (50) has 140 GB, the LU1 (51) has 560 GB, and the LU2 (52) has 1120 GB in usable storage capacity, as the arithmetic below confirms. [0083]
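These capacities follow directly from the RAID layouts: a RAID 1 pair yields one disk's worth of usable capacity, while an nD+1P RAID 5 yields n disks' worth per parity group. A quick check:

```python
DISK_GB = 140

lu0 = 1 * DISK_GB   # RAID 1: two disks, one a mirror copy      -> 140 GB
lu1 = 4 * DISK_GB   # RAID 5, 4D+1P: five disks, four of data   -> 560 GB
lu2 = 8 * DISK_GB   # RAID 5, 8D+1P: nine disks, eight of data  -> 1120 GB
print(lu0, lu1, lu2)   # -> 140 560 1120
```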
  • Independent local file systems LFS0 (60), LFS1 (61) and LFS2 (62) are established and constructed for the LU0 (50), the LU1 (51) and the LU2 (52), respectively. [0084]
  • (8) Storage Class Management Table (FIG. 8) [0085]
  • FIG. 8 is an example of the configuration of the storage class management table 1100451 stored in the file access control memory 11004 of each CHN 110x. The storage class management table 1100451 is generated by the file access control CPU 11001 executing the file system program 110043 and referring to the information stored in the disk pool management table 131 of the SM 13. [0086]
  • Although the disk pool management table 131 is not shown, it is stored in the SM 13 and contains information similar to the information in the storage class management tables 1100451 for all CHs. In other words, of the information in the disk pool management table 131, the storage class management table 1100451 stored in the file access control memory 11004 of each CHN 110x contains the information regarding the LUs used by that CHN 110x, rearranged with the storage class as a key. [0087]
  • The following is a description of the configuration of the storage class management table 1100451. A storage class entry (1100451a) stores information indicating the storage class. A storage node # entry (1100451b) stores an identification number (called a "storage node number") of the storage device that makes up each storage class. A disk pool # entry (1100451c) stores the number of a disk pool that makes up each storage class. An LU # entry (1100451d) stores the LU number set for each disk pool. An LU type entry (1100451e) stores information indicating whether the corresponding LU is set internally (local) or externally (remote) to the given storage device and whether a file system is set in the LU. In other words, if the LU is within the storage device, "Local" is registered in the LU type entry, while "Remote" is registered if the LU is in a different storage device; if a file system is constructed in the LU, "File" is registered in the LU type entry, while "Block" is registered if no file system is constructed in the LU. A RAID Conf. entry (1100451f) stores information indicating the RAID level of the disk array that makes up each LU and the structure of the corresponding disk array, such as the number of data records and parity records within a parity group. A usable capacity entry (1100451g) and a used capacity entry (1100451h) store information indicating the total storage capacity of the given LU and information indicating the storage capacity being used, respectively. [0088]
  • A storage class is a hierarchical attribute provided for each storage region based on the usage of data storage; according to the present embodiment, three attributes of OnLine Storage, NearLine Storage and Archive Storage are defined. In addition, sub-attributes of Premium and Normal are defined for the OnLine Storage. The OnLine Storage is an attribute set for LUs suitable for storing data of files that are frequently accessed, such as files being accessed online and files being generated. Premium indicates an attribute set for LUs suitable for storing data especially requiring fast response. The NearLine Storage is an attribute set for LUs suitable for storing data of files that are not frequently used but are occasionally accessed. The Archive Storage is an attribute set for LUs suitable for storing data of files that are hardly ever accessed and are maintained for long-term storage. [0089]
  • FIG. 8 indicates that the FC disk pool 170 of the storage device 1 (called "STR0") contains the LU0 (50) of the OnLine Storage (Premium) class and the LU1 (51) of the OnLine Storage (Normal) class. Further, in the SATA disk pool 171 of the storage device 1 (STR0) is the LU2 (52) of the NearLine Storage class. Moreover, in a different storage device (STR1) is an LU3 (53) of the Archive Storage class in a SATA disk pool; these contents are transcribed below. An example of constructing disk pools in different storage devices is described later. [0090]
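The contents of FIG. 8, as described above, can be transcribed as records (the used capacity column is not given in the text and is omitted); the field names follow the entries 1100451a through 1100451g, and the dictionary shape is an illustrative assumption.

```python
storage_class_management_table = [
    {"storage_class": "OnLine Storage (Premium)", "storage_node": "STR0",
     "disk_pool": "FC disk pool 170", "lu": "LU0", "lu_type": "Local, File",
     "raid_conf": "RAID 1", "usable_gb": 140},
    {"storage_class": "OnLine Storage (Normal)", "storage_node": "STR0",
     "disk_pool": "FC disk pool 170", "lu": "LU1", "lu_type": "Local, File",
     "raid_conf": "RAID 5, 4D+1P", "usable_gb": 560},
    {"storage_class": "NearLine Storage", "storage_node": "STR0",
     "disk_pool": "SATA disk pool 171", "lu": "LU2", "lu_type": "Local, File",
     "raid_conf": "RAID 5, 8D+1P", "usable_gb": 1120},
    {"storage_class": "Archive Storage", "storage_node": "STR1",
     "disk_pool": "SATA disk pool", "lu": "LU3", "lu_type": "Remote, File",
     "raid_conf": "RAID 5, 15D+1P", "usable_gb": 2100},
]

def lus_of_class(prefix):
    # Select LUs by storage class, e.g. when choosing a migration destination.
    return [row["lu"] for row in storage_class_management_table
            if row["storage_class"].startswith(prefix)]

print(lus_of_class("Archive"))   # -> ['LU3']
```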
  • (9) Filename Management Table (FIG. 9) [0091]
  • FIG. 9 shows an example of the filename management table 1100436 stored in the file access control memory 11004. The filename management table 1100436 is a table prepared for each file system, in which filenames and file handlers are stored in a tree structure for easy searchability. When a file is accessed by one of the NAS hosts 40x, the filename of the file is included in the access request received by the CHN 110x from the NAS host 40x. The CHN 110x uses the filename to search the filename management table 1100436 and obtains the file handler that corresponds to the filename, which enables the CHN 110x to refer to the file storage management table 1100435 that corresponds to the file handler. [0092]
  • Each filename management table 1100436 is stored in the LU in which the file system that corresponds to the filename management table 1100436 is constructed, and is read into the file access control memory 11004 when necessary and used by the file access control CPU 11001. [0093]
  • (10) File Storage Management Table (FIG. 10) [0094]
• FIG. 10 is a diagram of an example of the file storage management table 1100435 and the buffer management table 1100437. The file storage management table 1100435 is provided in the file access control memory 11004 for each file and is a table that manages file storage addresses. The file storage management table 1100435 can be referred to by designating a file handler that represents a file. [0095]
• A file property information management table entry stores a pointer for referring to the file property information management table 1100438 for the corresponding file. A size indicates the size of the file in units of bytes. A number of blocks indicates the number of logical blocks used in managing the file, which is done by dividing the file into blocks called logical blocks. Each logical block that stores the file also stores a pointer to the buffer management table 1100437 that corresponds to the logical block. [0096]
• There is one buffer management table 1100437 for each logical block, and each buffer management table 1100437 contains the following. A hash link entry stores a link pointer to a hash table for quickly determining whether a buffer is valid. A queue link entry stores a link pointer for forming a queue. A flag entry stores a flag that indicates the status of the corresponding buffer, i.e., whether valid data is stored in the buffer, whether the buffer is being used, and whether the content of the buffer has not yet been reflected on the disk. An equipment number entry stores an identifier of the storage device and an identifier of the LU in which the corresponding logical block is stored. A block number entry stores a disk address number that indicates the storage position of the logical block within the storage device indicated by the equipment number. A number of bytes entry stores the number of bytes of valid data stored in the logical block. A buffer size entry stores the size of the buffer in units of bytes. A buffer pointer entry stores a pointer to the corresponding physical buffer memory. [0097]
• The file storage management table 1100435 is stored in the LU that stores the corresponding file and is read to the memory when necessary for use. [0098]
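• The relationship between the two tables can be sketched as follows (illustrative only; the field names are hypothetical and the hash and queue links are omitted):

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class BufferManagementTable:        # one per logical block (cf. 1100437)
        flags: int = 0                  # valid / in-use / not-yet-reflected-on-disk
        equipment_number: str = ""      # storage device and LU holding the block
        block_number: int = 0           # disk address within that device
        valid_bytes: int = 0            # bytes of valid data in the block
        buffer_size: int = 0            # buffer size in bytes
        buffer: Optional[bytearray] = None  # pointer to the physical buffer memory

    @dataclass
    class FileStorageManagementTable:   # one per file, found via its file handler (cf. 1100435)
        property_table: Optional[dict] = None  # pointer to the file property information table
        size: int = 0                          # file size in bytes
        blocks: List[BufferManagementTable] = field(default_factory=list)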
  • (11) File Property Information Management Table (FIG. 11) [0099]
• FIG. 11 is an example of the file property information management table 1100438 stored in the file access control memory 11004. The file property information management table 1100438 stores static property information and dynamic property information. The static property information is determined when a file is created and carries over thereafter. Although the static property information can be intentionally altered, it otherwise remains unaltered. The dynamic property information changes over time after a file is created. [0100]
  • (12) Static Property Information [0101]
  • The static property information is divided into a file information category and a policy category. [0102]
  • The file information category includes basic information of a file. In the file information category, a file type indicates the type of the file, such as a text file, document file, picture file, moving picture file or a voice file. An application indicates the application that generated the file. A date created indicates the date the file was first generated. The time at which the file was generated can be registered in addition to the date the file was created. An owner indicates the name of the user who created the file. An access identifier indicates a range of access authorization for the file. [0103]
  • The policy category is information that is set by the user or the application that created the file, and is information that is designated by the user or the application with regard to file storage conditions. An initial storage class is information that indicates the storage class of the LU in which the file is to be stored when the file is stored in a storage device for the first time. An asset value type indicates the asset value of the file. A life cycle model indicates the model applicable to the file from among life cycle models defined in advance. A migration plan indicates the plan applicable to the file from among plans concerning file migration (hereinafter called “migration”) defined in advance. [0104]
  • The asset value is an attribute that designates the importance or value attached to the file. An attribute of “extra important,” “important” or “regular,” for example, can be designated as an asset value. The asset value can be used as a supplemental standard for selecting a storage class, i.e., files with an attribute of “important” or higher are stored in LUs that belong to the OnLine Storage class with Premium attribute, or as a standard for selecting a storage class when no life cycle models are designated, for example. [0105]
  • In the description of the present embodiment, it will be assumed that files that are “important” or higher are stored in LUs that belong to the OnLine Storage (Premium) class. Needless to say, the present invention is not restricted to such an assumption and different standards may be used to select storage classes of LUs for storing files. [0106]
• The life cycle stages have been named by drawing an analogy with the life cycle stages of humans to describe how the usage status of a file changes over time, i.e., the period in which data is created is the birth, the period in which the data is updated and/or used is the growth stage, the period in which the data is rarely updated and is mainly referred to is the mature stage, and the period in which the data is no longer used and is archived is the old age. A life cycle model defines the life cycle a file experiences. The most general method of defining a life cycle is to define the stages based on the amount of time that has elapsed since a file was generated. One example is to define the “growth stage,” or the “update stage,” in which there are frequent updates, as one month; the “mature stage,” or the “reference stage,” in which the file is mainly referred to, as one year; and the “old age,” or the “archive stage,” as thereafter. Hereinafter this definition is called a “model 1” and is used in the following description. By varying the time interval of the life cycle model or by defining stages with finer resolution, various life cycle models can be defined and one life cycle model from among a plurality of life cycle models can be selected for use. Furthermore, a specific life cycle model can be applied to a certain type of files, or life cycle models can be applied on a per-application basis such that a specific life cycle model is applied to files created by a certain application. Names of the life cycle stages can be expressed in terms of “growth stage,” “mature stage,” and “old age” that correspond to the life of a person, or in terms of “update stage,” “reference stage,” and “archive stage” based on file behavior. In the present embodiment, the latter expressions are used in order to more clearly indicate the behavior of files. [0107]
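• A minimal sketch of how “model 1” could map elapsed time to a life cycle stage follows. The one-month and one-year periods come from the text; the function name, the day counts, and the cumulative 395-day threshold (one month of update stage plus one year of reference stage) are assumptions:

    from datetime import datetime, timedelta

    def life_cycle_stage(date_created: datetime, now: datetime) -> str:
        """Model 1: update stage for one month, reference stage for the
        following year, archive stage thereafter (thresholds assumed)."""
        age = now - date_created
        if age < timedelta(days=30):          # one month of "update stage"
            return "update stage"
        if age < timedelta(days=395):         # plus one year of "reference stage"
            return "reference stage"
        return "archive stage"

    print(life_cycle_stage(datetime(2003, 1, 1), datetime(2004, 3, 1)))  # archive stage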
• The migration plan defines to which storage class LU a file is transferred according to the file's life cycle stage. One example is a method for storing “update stage” files in OnLine Storage class LUs, “reference stage” files in NearLine Storage class LUs, and “archive stage” files in Archive Storage class LUs. Hereinafter, this definition is called a “plan 1” and is used in the following description. In addition to this plan, various plans can be defined, such as a plan that stores “update stage” files in OnLine Storage (Premium) class LUs and “reference stage” files in OnLine Storage (Normal) class LUs, while “archive stage” files remain in NearLine Storage class LUs, and one plan from among a plurality of plans can be selected for use. Furthermore, a specific migration plan can be applied to a certain type of files, or migration plans can be applied on a per-application basis such that a migration plan is applied to files created by a certain application. [0108]
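• “Plan 1” then reduces to a simple stage-to-class mapping. The sketch below encodes only the mapping stated in the text; the names are hypothetical:

    PLAN_1 = {
        "update stage": "OnLine Storage",
        "reference stage": "NearLine Storage",
        "archive stage": "Archive Storage",
    }

    def target_storage_class(stage: str, plan: dict = PLAN_1) -> str:
        """Look up which storage class a file in the given stage should occupy."""
        return plan[stage]

    print(target_storage_class("reference stage"))   # NearLine Storage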
  • (13) Dynamic Property Information [0109]
  • The dynamic property information is divided into an access information category and a life cycle information category. [0110]
• The access information category includes access statistical information for each file. In the access information category, a time stamp indicates the date and time a given file was last read or written, or the date and time the file storage management table 1100435 of the file was last updated. An access count indicates the total number of accesses to the file. A read count and a write count indicate the number of reads and the number of writes, respectively, to and from the file. A read size and a write size indicate the average value of the data transfer size when reading and writing, respectively, to and from the file. A read sequential count and a write sequential count indicate the number of times there is address continuity, i.e., sequentiality, between two of multiple consecutive accesses in reading or writing. [0111]
  • The life cycle information category includes information related to the life cycle of a file. In the life cycle information category, a current life cycle stage indicates the current positioning of a file within its life cycle, i.e., the update stage, the reference stage, or the archive stage. A current storage class indicates the storage class of a storage pool set for the LU that currently stores the file. [0112]
• FIG. 11 indicates one example of the file property information, but various other types of property information can be defined and stored in the file property information management table 1100438. Furthermore, an embodiment may use only a part of the property information as necessary. [0113]
• (14) Initial File Placement: A File Open Processing [0114]
  • Next, a description will be made as to a file open processing that takes place in the initial placement processing to store a file in a storage device for the first time. [0115]
• Let us assume that the NAS host 0 (400) generated a file abc.doc. [0116]
• The NAS host 0 (400) issues to the CHN0 (1100) an open request for the file abc.doc. The open request includes a filename as identification information to identify the file. Since the open processing is executed to store the file for the first time, the NAS host 0 (400) sends to the CHN0 (1100) the following information included in the file information category and the policy category as the static property information of the file property information, along with the open request. The information sent includes a file type “document,” an application that generated the file “XYZ Word,” and an access identifier “-rw-rw-rw-” as information included in the file information category, as well as an initial storage class “undesignated,” an asset value type “important,” the life cycle model “model 1,” and the migration plan “plan 1” as information included in the policy category. [0117]
• The CHN0 (1100) receives the open request from the NAS host 0 (400) via the LAN controller 11002, and the file access control CPU 11001 executes the file system program 110043. [0118]
• When the file system program 110043 is executed, the open request received is specified, through a control by the file access control CPU 11001, as an access request to access the local file system LFS0 (60) based on the directory information of the filename. The file open processing section 1100431 refers to the filename management table 1100436 of the LFS0 (60) and searches for abc.doc. Since it is determined as a result that abc.doc is a file that does not yet exist in the filename management table 1100436 and is to be stored for the first time, the file open processing section 1100431 registers abc.doc in the filename management table 1100436 and assigns a file handler to abc.doc. [0119]
• Next, the file storage management section 1100433 creates the file storage management table 1100435 to correspond to the file handler assigned to the file abc.doc. [0120]
• Next, the file storage management section 1100433 generates the file property information management table 1100438 and correlates it to the file storage management table 1100435 (i.e., a pointer to the file property information management table 1100438 is stored in the file storage management table 1100435); the file storage management section 1100433 then stores in the file property information management table 1100438 the static property information of the file property information for the file abc.doc obtained from the NAS host 0 (400), as well as the date created and the owner of the file. Next, the file storage management table 1100435 and the file property information management table 1100438 are written to the LU in which the file system that the file belongs to is constructed. [0121]
• Next, the CHN0 (1100) returns the file handler to the NAS host 0 (400) and the open processing is terminated. [0122]
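• The first-time open processing just described can be condensed into the following runnable sketch; the class and method names are hypothetical stand-ins, not the patent's actual program structure:

    import itertools
    from datetime import datetime

    class MiniFS:
        """Toy stand-in for the file system program on the CHN side."""
        def __init__(self):
            self.filename_table = {}     # filename -> file handler
            self.storage_tables = {}     # handler -> file storage management table
            self.property_tables = {}    # handler -> file property information table
            self._handlers = itertools.count(1)

        def open_first_time(self, filename, static_properties, owner):
            if filename not in self.filename_table:        # file not yet registered
                handler = next(self._handlers)             # assign a file handler
                self.filename_table[filename] = handler
                self.storage_tables[handler] = {"blocks": [], "link": None}
                self.property_tables[handler] = dict(static_properties,
                                                     date_created=datetime.now(),
                                                     owner=owner)
            return self.filename_table[filename]           # returned to the NAS host

    fs = MiniFS()
    h = fs.open_first_time("abc.doc",
                           {"file_type": "document", "application": "XYZ Word",
                            "access_identifier": "-rw-rw-rw-",
                            "initial_storage_class": None, "asset_value": "important",
                            "life_cycle_model": "model 1", "migration_plan": "plan 1"},
                           owner="user0")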
  • (15) Initial File Placement: A Data Write Processing [0123]
  • Next, a description will be made as to a data write processing executed in the initial placement processing of a file. [0124]
• Using the file handler obtained in the open processing, the NAS host 0 (400) issues to the CHN0 (1100) a write request to store data of the file abc.doc in the storage device 1. [0125]
• When the write request is received by the CHN0 (1100), the file access control CPU 11001 executes the file system program 110043 and uses a method similar to the method used in the open processing to specify that the write request is an access request to access the local file system LFS0 (60). [0126]
• The request processing section 1100432 of the file system program 110043 interprets the access request as a write request based on the information included in the access request received, and uses the file handler designated in the write request to obtain the file storage management table 1100435 of the file that corresponds to the file handler. [0127]
• Next, the file storage management section 1100433 secures buffers required to store the data and determines the storage positions on disks for the file. [0128]
• To determine the storage positions, the file storage management section 1100433 refers to the static property information in the file property information management table 1100438. In this case, due to the fact that the life cycle model of the file abc.doc, which is the subject of the write request, is “model 1,” and to the fact that the write request received is an access taking place within one month of the file generation since it is an access request occurring in an initial file placement, the file storage management section 1100433 specifies the current life cycle stage of the file abc.doc as “growth stage.” Further, since the initial storage class is “undesignated” and the asset value type is “important,” the file storage management section 1100433 selects “OnLine Storage (Premium)” as the storage class of the storage pool in which to store the file abc.doc. [0129]
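• The class-selection logic of this paragraph might look as follows. Only the rules stated in the text are encoded (an explicit initial storage class wins; otherwise files that are “important” or higher go to OnLine Storage (Premium)); the fallback to OnLine Storage (Normal) and all names are assumptions:

    def select_initial_storage_class(props: dict) -> str:
        """Choose a storage class for initial placement from static properties."""
        if props.get("initial_storage_class"):             # explicit designation wins
            return props["initial_storage_class"]
        if props.get("asset_value") in ("important", "extra important"):
            return "OnLine Storage (Premium)"              # supplemental standard from the text
        return "OnLine Storage (Normal)"                   # assumed default

    assert select_initial_storage_class(
        {"initial_storage_class": None, "asset_value": "important"}
    ) == "OnLine Storage (Premium)"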
• Next, the file storage management section 1100433 refers to the storage class management table 1100439 and decides to store the file abc.doc in an LU whose storage class is “OnLine Storage (Premium)” and that is specified by “STR0 (i.e., the primary storage device 1)” as the storage node, “FC disk pool 170” as the disk pool #, and “LU0 (i.e., the local file system LFS0)” as the LU #. The file storage management section 1100433 divides the data of the file into one or more logical blocks based on an appropriate algorithm, determines storage addresses of the logical blocks in the LU0, generates buffer management tables 1100437 to register the storage addresses determined, and stores in the buffer management table entry of the file storage management table 1100435 pointers to the buffer management tables 1100437 generated. Furthermore, the file storage management section 1100433 stores information in the remaining entries of the file storage management table 1100435. In the present embodiment, NULL is registered for all entries for link destinations in the file storage management table 1100435. [0130]
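• Dividing the write data into logical blocks and registering one buffer-management-style record per block could be sketched as below; the fixed 4096-byte block size and all names are assumptions, since the text leaves the division algorithm open:

    def divide_into_blocks(data: bytes, block_size: int = 4096):
        """Return one buffer-management-style record per logical block."""
        tables = []
        for offset in range(0, len(data), block_size):
            chunk = data[offset:offset + block_size]
            tables.append({
                "equipment_number": "STR0/LU0",      # device and LU chosen above
                "block_number": offset // block_size, # storage position of the block
                "valid_bytes": len(chunk),            # bytes of valid data
                "buffer": bytearray(chunk),           # the buffered data itself
            })
        return tables

    print(len(divide_into_blocks(b"x" * 10000)))   # 3 logical blocks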
• The file storage management section 1100433 sets the current life cycle stage as “update stage” and the current storage class as “OnLine Storage (Premium)” in the life cycle information category of the dynamic property information of the file property information management table 1100438. The file storage management section 1100433 performs appropriate calculations for information included in the access information category of the dynamic property information before registering the results into the file property information management table 1100438. [0131]
• The request processing section 1100432 executes a processing according to the write request received; the LAN controller driver program 110041, the TCP/IP program 110042, and the network file system program 110044 are executed by the file access control CPU 11001; as a result, the write data is transferred from the NAS host 0 (400) to the CHN0 (1100) and temporarily stored in the buffer of the file access control memory 11004. Next, the inter-CPU communications driver program 110046 is executed by the file access control CPU 11001, which causes the write request to be transferred to the disk array control CPU 11008 at proper timing. Upon receiving the write request, the disk array control CPU 11008 caches the write data temporarily in the CM 14 and sends a reply of completion with regard to the write request from the NAS host 0 (400). [0132]
• Next, under the control of the DKA 120 that controls the disks that make up the LU0, the write data is stored at proper timing on appropriate disks. [0133]
  • As described above, files can be initially placed in storage regions that belong to the appropriate storage class based on the static property information of the file. [0134]
  • (16) File Migration Processing (FIG. 12) [0135]
  • Next, a migration processing of a file will be described. [0136]
• The migration management section 110043A of the file system program 110043 is activated by the file access control CPU 11001 based on a preset timing. [0137]
• The migration management section 110043A refers to the file property information management table 1100438 of a file included in the local file system set in advance as the subject of the migration processing, and checks whether a file that is the subject of migration exists. The following is a detailed description of a situation in which the file abc.doc is the subject of the migration processing. [0138]
• The migration management section 110043A refers to the file property information management table 1100438 of the file abc.doc and compares the date created to the current date and time. If one month has elapsed since the date created, the migration management section 110043A recognizes that the current life cycle stage has shifted from the “update stage” to the “reference stage” due to the fact that the life cycle model in the static property information indicates “model 1” and that one month, which is the period of the “update stage,” has already passed. [0139]
• Further, due to the fact that the migration plan is “plan 1,” the migration management section 110043A recognizes that the file must be migrated from the LU whose storage class is the “OnLine Storage (Premium)” to an LU whose storage class is the “NearLine Storage.” [0140]
• The migration management section 110043A refers to the storage class management table 1100439 and decides to transfer the file to an LU whose storage class is the “NearLine Storage” and that is designated by “STR0 (i.e., the primary storage device 1)” as the storage node, “SATA disk pool 171” as the disk pool #, and “LU2 (i.e., a local file system LFS2)” as the LU #. [0141]
• Next, the migration management section 110043A changes the current life cycle stage to “reference stage” and the current storage class to “NearLine Storage” in the dynamic property information of the file property information management table 1100438. [0142]
• The migration management section 110043A defines a unique filename (in this case FILE00001) that is used to manage the file abc.doc within the storage device STR0 (1). [0143]
• The file open processing section 1100431 refers to the filename management table 1100436 of the LFS2 and checks whether the filename FILE00001 is registered in the filename management table 1100436; if it is not registered, the file open processing section 1100431 registers the filename FILE00001 in the filename management table 1100436 and assigns a file handler to the filename FILE00001. [0144]
• Next, the file storage management section 1100433 generates the file storage management table 1100435 and the file property information management table 1100438 to correspond to the file handler assigned to the filename FILE00001. Contents identical to the contents registered in the file property information management table of the file abc.doc are stored in the file property information management table 1100438 generated. The file storage management section 1100433 writes in the LU that stores FILE00001 the file storage management table 1100435 and the file property information management table 1100438 of FILE00001. [0145]
• Next, the file storage management section 1100433 secures buffer regions required to store the data of FILE00001 and determines the storage regions (or the storage positions) within the LU2 for storing the file. Using a method similar to the method used in the data write processing, the file storage management section 1100433 generates the buffer management tables 1100437 to register the storage positions determined, and stores in the buffer management table entry of the file storage management table 1100435 pointers to the buffer management tables 1100437 generated. NULL is registered for all entries for link destinations in the file storage management table 1100435 of the FILE00001 stored in the LFS2. [0146]
• As indicated in FIG. 12, the file storage management section 1100433 changes the link destination node name to STR0, the link destination FS name to LFS2, and the link destination filename to FILE00001 in the file storage management table 1100435 of abc.doc in the LFS0. [0147]
• Next, the request processing section 1100432 reads data of the abc.doc from the disks that make up the LU0 into buffers in the file access control memory 11004. The file storage management section 1100433 treats the data read into the buffers in the file access control memory 11004 as data of the FILE00001 to be written to the disks that make up the LU2, and the request processing section 1100432 writes the data to storage regions in the buffers registered in the buffer management tables 1100437. [0148]
• The file storage management section 1100433 clears all buffer management tables 1100437 that can be referred to from pointers registered in the file storage management table 1100435 of the file abc.doc in the LFS0, and registers NULL in the entries of these buffer management tables 1100437. [0149]
• The data of the FILE00001 stored in the buffers is stored at proper timing in the LU2 via the CM 14 of the storage device 1 through a procedure similar to the procedure that took place in the data write processing of the initial placement processing. This completes the migration processing. [0150]
  • As described above, according to the present embodiment, files can be migrated to storage regions of an appropriate storage class by taking into consideration the life cycle stage of the file based on the migration plan of the file. [0151]
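• The whole migration step can be condensed into the following sketch: the destination gets the data blocks, the source table keeps only a link to the destination (as FIG. 12 shows), and its buffer tables are cleared. All names below are hypothetical:

    def migrate(src_table: dict, dst_fs_tables: dict, dst_name: str,
                dst_node: str, dst_fs: str) -> None:
        """Move a file's data section and leave a link in the source table."""
        dst_fs_tables[dst_name] = {"blocks": src_table["blocks"], "link": None}
        src_table["blocks"] = []                              # buffer tables cleared (NULL)
        src_table["link"] = (dst_node, dst_fs, dst_name)      # e.g. ("STR0", "LFS2", "FILE00001")

    lfs0 = {"abc.doc": {"blocks": [{"block_number": 0}], "link": None}}
    lfs2 = {}
    migrate(lfs0["abc.doc"], lfs2, "FILE00001", "STR0", "LFS2")
    print(lfs0["abc.doc"]["link"])   # ('STR0', 'LFS2', 'FILE00001')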
  • According to the present embodiment, LUs for storing files can be selected based on a concept of storage classes, and LUs for storing files can be changed, without being dependent on host computers or applications executed on the host computers. As a result, a storage device with storage hierarchy, i.e., a plurality of storage regions with varying properties, having high cost effectiveness can be realized without being dependent on host computers. [0152]
  • Further, due to the fact that data is migrated on a per-file basis, same files can be accessed from a plurality of host computers using a file I/O interface, even after the files are migrated. [0153]
  • Moreover, a file-based hierarchy storage control can be executed based on static properties of the file, such as the file type, the type of application that generated the file, the intent (policy) of the file generator, and on dynamic properties of the file, such as changes in the life cycle stage, value and access property of the file. [0154]
  • Embodiment 2
  • (1) Example of System Configuration (FIG. 13) [0155]
• Next, referring to FIG. 13, an example of the system configuration of the second embodiment will be described. In the present embodiment, a hierarchical storage control is executed between storage devices in a system in which a storage device 1 (hereinafter called “STR0”) described in the first embodiment and another storage device 1 a (hereinafter called “STR1”) are connected via a network. [0156]
• In FIG. 13, the storage device STR1 (1 a) is the other storage device connected to the storage device STR0 (1) via a LAN 20; otherwise, the system configuration components are the same as in FIG. 1. [0157]
• In the STR1 (1 a), an NCTL0 (1100 a) and an NCTL1 (1101 a) are NAS controllers, and a disk pool 0 (170 a) is a disk pool connected to the NCTL0 and NCTL1. [0158]
• Instead of the SM I/F control circuit 11005 and the CM I/F control circuit 11006 in the configuration of the CHN 1100 according to the first embodiment shown in FIG. 4, the NAS controller NCTLx is provided with an FC controller 11010 a for connecting with the disk pool 0 (170 a). The NAS controller NCTLx also has a cache memory CM 14 a within the NAS controller, as well as a data transfer control circuit 11011 a, which is a control circuit for the cache memory CM 14 a. Further, the data transfer control circuit 11011 a serves to connect the NAS controller 1100 a and the NAS controller 1101 a to each other. Although details of the configuration of the NAS controller NCTL1 (1101 a) are not shown in FIG. 13, the NAS controller 1101 a has a configuration similar to that of the NAS controller 1100 a. Components that are assigned the same numbers as components of the CHN 1100 in the first embodiment have the same configuration and the same function as the corresponding components of the CHN 1100. [0159]
• Let us assume that the STR1 is a storage device that is smaller and cheaper than the STR0. Also, as shown in FIG. 13, a CHN0 of the STR0 and the NCTL0 of the STR1 are connected via the LAN 20. [0160]
  • (2) Migration Processing of File to the Other Storage Device [0161]
  • The following is a description of the operation according to the present embodiment. [0162]
• The CHN0 (1100) of the storage device 1 (STR0) recognizes that the storage device 1 a (STR1) of a different type is connected to the LAN 20. The different storage device can be recognized using a method based on information designated in advance by an administrator, or a method based on whether or not there is a device that reacts to a recognition command broadcast to the network segments of the LAN 20. In order to ascertain the configuration of the STR1, the CHN0 of the STR0 becomes an initiator and issues to the STR1 a command to collect information. The response from the STR1 to the command includes the type of the disk pool and the configuration of the LUs that the STR1 has; as a result, by referring to the response, the CHN0 can recognize that the STR1 has the SATA disk pool 170 a and that there is a low-cost, file type LU having a 15D+1P configuration RAID 5 and a large capacity of 2100 GB in the disk pool 170 a. The CHN0 of the STR0 decides to manage the STR1's LU as a remote LU, i.e., as an LU that is in the other storage device STR1 (1 a) but is one of the LUs managed by the primary storage device STR0 (1). [0163]
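• A sketch of this discovery exchange follows. The text does not specify the command set, so the command name, the callable interface, and the response layout below are all assumed stand-ins:

    def discover_remote_lus(issue_command):
        """issue_command is an assumed callable that sends an inquiry to the
        other storage device and returns its disk pool / LU configuration."""
        response = issue_command("COLLECT_INFO")       # hypothetical command name
        remote_lus = []
        for lu in response["lus"]:
            remote_lus.append({
                "lu_type": ("Remote", lu["type"]),     # managed as a remote LU by STR0
                "raid_conf": lu["raid_conf"],          # e.g. "RAID5, 15D+1P"
                "usable_capacity_gb": lu["capacity_gb"],
            })
        return remote_lus

    fake = lambda cmd: {"lus": [{"type": "File", "raid_conf": "RAID5, 15D+1P",
                                 "capacity_gb": 2100}]}
    print(discover_remote_lus(fake))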
• The CHN0 assigns a number LU3 to the LU that the STR1 has and assigns a remote file system number RFS3 to the file system constructed within the LU. Due to the fact that the LU is in a large capacity, low-cost disk pool, the storage class of the LU is set as “Archive Storage.” Based on a control by a disk array control CPU 11008 of the CHN0, information regarding the LU3 in the STR1, such as the type of the disk pool, the configuration of the LU, the LU number and the storage class, is stored in a disk pool management table 131 of an SM 13 of the storage device 1 (STR0). The CHN of the storage device 1 refers to the disk pool management table 131 by having a file access control CPU 11001 execute a file system program 110043, and can register information regarding the LU3 in a storage class management table 1100451 in a file access control memory 11004 by copying the information regarding the LU3 from the disk pool management table 131. [0164]
• As described in the first embodiment, let us assume that the NAS host 0 (400) stored the file abc.doc in the LU0 of the STR0 via the CHN0 and that subsequently the file abc.doc was migrated to the LU2 of the STR0 based on a control by the CHN0; in the following, only those parts that differ from the first embodiment in the processing executed to migrate the file abc.doc further to the LU3 in the other storage device STR1 are described. [0165]
• As described in the first embodiment, the file abc.doc's current life cycle stage is “reference stage,” its current storage class is “NearLine Storage,” and its data section is stored under the name FILE00001 in the LFS2 of the LU2 constructed in the SATA disk pool of the STR0, as shown in FIGS. 11 and 12. The filename management table 1100436 in which the filename “abc.doc” is registered and the file storage management table 1100435 for the file abc.doc are in the LFS0. In other words, information regarding the abc.doc is stored in the filename management table 1100436 for the LFS0 and in the file storage management table 1100435 for the file abc.doc in the LFS0. In the meantime, the file property information management table 1100438 is in both the LFS0 and the LFS2. The data section of the file abc.doc has already been migrated to the LU2 in which the LFS2 is constructed, which means that the data section of the abc.doc does not reside in the LU0, in which the LFS0 is constructed. [0166]
• The migration management section 110043A of the STR0 refers to the file property information management table 1100438 of the abc.doc and compares the date created to the current date and time. If one year has elapsed since the migration, the migration management section 110043A recognizes that the current life cycle stage has shifted from the “reference stage” to the “archive stage” due to the fact that the life cycle model in the static property information for abc.doc indicates “model 1” and that one year, which is the period of the “reference stage,” has already passed. Further, due to the fact that the migration plan is “plan 1,” the migration management section 110043A recognizes that the file must be migrated from an LU whose storage class is “NearLine Storage” to an LU whose storage class is “Archive Storage.” [0167]
• Next, the migration management section 110043A refers to the storage class management table 1100439, selects the LU3 that belongs to the “Archive Storage” class, and decides to transfer the file abc.doc to the LU3. The LU3 has attributes of “STR1 (i.e., the other storage device 1 a)” as the storage node, “SATA disk pool” as the disk pool #, and “remote file” as the LU type. [0168]
• Next, the migration management section 110043A changes the current life cycle stage to “archive stage” and the current storage class to “Archive Storage” in the dynamic property information of the file property information management table 1100438 for the abc.doc. [0169]
• Next, the migration management section 110043A defines a unique filename (in this case STR1-FILE00001) that is used to manage the file abc.doc within the storage device STR0 (1). [0170]
• The migration management section 110043A behaves as if it were a NAS host and issues to the STR1 an open request for the file STR1-FILE00001. This open processing is an open processing executed in order to store the file for the first time from the perspective of the STR1. For this reason, the STR0 includes in the open request sent to the STR1 the information that the STR0 has in the file property information management table 1100438 as the static property information of the file abc.doc. However, by changing only the initial storage class in the static property information to “Archive Storage” in the information sent, the STR0 expressly designates to the STR1 to store the file STR1-FILE00001 in the Archive Storage class from the beginning. [0171]
• The NCTL0 of the STR1 receives the open request via a LAN controller 11002 a, and a file access control CPU 11001 a executes a file system program 110043 a. [0172]
• When the file system program 110043 a is executed, the open request received is specified, in a manner similar to the first embodiment, as an access request to access the remote file system RFS3; the STR1-FILE00001 is registered in a filename management table 1100436 a in a file access control memory 11004 a and a file handler is assigned to the STR1-FILE00001 based on a control by the file access control CPU 11001 a; a file storage management table 1100435 a and a file property information management table 1100438 a are created within the file access control memory 11004 a and information to be registered in the tables is set. The NCTL0 sends to the migration management section 110043A of the CHN0 the file handler assigned to the STR1-FILE00001, and the open processing is terminated. [0173]
• Next, like the NAS host 0 (400) in the data write processing according to the first embodiment, the migration management section 110043A of the STR0 issues to the STR1 a write request containing the file handler obtained from the NCTL0 of the STR1 in the open processing, and requests to write the actual data of abc.doc (i.e., data that is also the actual data of FILE00001) as the actual data of the file STR1-FILE00001. [0174]
• The file storage management section 1100433 a of the STR1 secures buffer regions required to store the write data, determines storage positions on disks for the actual data of the file, and stores the write data received from the STR0 in the buffers. [0175]
• To determine the storage positions, the file storage management section 1100433 a refers to the static property information in the file property information management table 1100438 a. The file storage management section 1100433 a specifies the current life cycle stage of the file STR1-FILE00001 as “archive stage” due to the fact that the life cycle model of the file STR1-FILE00001 is “model 1” and to the fact that more than one year and one month have passed since the file was generated. Further, the file storage management section 1100433 a specifies “Archive Storage” as the initial storage class as designated by the STR0. [0176]
• The file storage management section 1100433 a sets the current life cycle stage as “archive stage” and the current storage class as “Archive Storage” in the life cycle information category of the dynamic property information of the file property information management table 1100438 a. The file storage management section 1100433 a further performs appropriate calculations for access information regarding the file STR1-FILE00001 and updates the information in the access information category of the file property information management table 1100438 a. NULL is registered for all entries for link destinations in the file storage management table 1100435 a of the file STR1-FILE00001. [0177]
• Next, under the control of the NCTL0, the data section of the file STR1-FILE00001 is stored at proper timing on the disks that make up the LU3. [0178]
• This concludes the write processing in the STR1, and the processing returns to the STR0. [0179]
• The file storage management section 1100433 of the STR0 changes the link destination node name to STR1, the link destination FS name to RFS3, and the link destination filename to STR1-FILE00001 in the file storage management table 1100435 of the FILE00001 in the LFS2. The file storage management section 1100433 then clears all buffer management tables 1100437 that can be referred to from pointers registered in the file storage management table 1100435 of the FILE00001 and enters NULL in all buffer management table entries of the file storage management table 1100435. [0180]
• The preceding transfers the substance of the data section of the file abc.doc from the FILE00001 in the LFS2 of the STR0 to the STR1-FILE00001 in the RFS3 of the STR1. [0181]
• After this, whenever an access request is issued by any of the NAS hosts to access the file abc.doc, the CHN of the STR0 refers to the file storage management table 1100435 of the abc.doc in the LFS0 and obtains its link destination node name, FS name and filename, and refers to the file storage management table 1100435 of the FILE00001 in the LFS2 based on the identification information of the link destination obtained (i.e., STR0, LFS2, FILE00001). The CHN of the STR0 further obtains the link destination node name, the FS name and the filename from the file storage management table 1100435 of the FILE00001 in the LFS2 and issues to the NCTL of the STR1 an access request designating the identification information of the link destination obtained (i.e., STR1, RFS3, STR1-FILE00001), which allows the CHN of the STR0 to reach the STR1-FILE00001 in the RFS3 of the STR1 and access the data section of the abc.doc via the NCTL of the STR1. [0182]
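• Following the chain of link destinations until the actual data section is reached can be sketched as below; the table layout and names are hypothetical, while the three-hop chain mirrors the example in the text:

    def resolve_data_section(tables: dict, node: str, fs: str, name: str):
        """Follow link destinations across file storage management tables.
        `tables` maps (node, fs, filename) to that file's management table."""
        table = tables[(node, fs, name)]
        while table.get("link"):               # non-NULL link destination
            node, fs, name = table["link"]     # hop to the next table
            table = tables[(node, fs, name)]
        return (node, fs, name), table         # where the data actually resides

    tables = {
        ("STR0", "LFS0", "abc.doc"):        {"link": ("STR0", "LFS2", "FILE00001")},
        ("STR0", "LFS2", "FILE00001"):      {"link": ("STR1", "RFS3", "STR1-FILE00001")},
        ("STR1", "RFS3", "STR1-FILE00001"): {"link": None, "blocks": ["..."]},
    }
    print(resolve_data_section(tables, "STR0", "LFS0", "abc.doc"))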
• Due to the fact that a plurality of file storage management tables 1100435 must be referred to in order to access a file that has been migrated to the LU3 of the STR1 according to the present embodiment, the access speed does suffer slightly. However, since files stored in the LU3 of the STR1 are files whose current life cycle stage is “archive stage” and therefore files that are rarely the subjects of access requests, this poses no problem in practical terms. Even if an access request were to be issued from a host computer for data of a file in the “archive stage,” the data can be retrieved in real time from its storage positions on disks since it is stored on magnetic disks, even though it is a file that belongs to the Archive Storage class; unlike conventional situations in which such files are stored on tapes, neither the enormous access time for tape control nor transferring the data from the tape to a disk is required according to the present embodiment. [0183]
• As described above, according to the present embodiment, due to the fact that the storage positions of a file are determined based on the life cycle stage of the file, the Archive Storage class suitable for archiving is selected for files in the “archive stage,” or the old age, of their life cycle. [0184]
  • Furthermore, other storage devices can be connected to the primary storage device, so that a storage hierarchy that takes advantage of differences in features of various storage devices can be constructed. Files can be migrated to LUs of the other storage devices, instead of migrating only within the primary storage device, according to the migration plan of each file; this further optimizes cost for storage devices compared to situations in which a hierarchical storage control is realized using only one storage device. [0185]
  • In addition, drives on disk devices that make up LUs whose storage class is “Archive Storage” can be halted to realize low power consumption and to extend the life of the disks. [0186]
• Moreover, due to the fact that even cheaper storage devices can be connected to the storage device STR1 according to the present embodiment, a storage hierarchy that is even more extensive can be established among a plurality of storage devices; by executing a hierarchical storage control using such a configuration, cost can be further optimized. [0187]
  • Embodiment 3
  • (1) Example of System Configuration (FIG. 14) [0188]
• Next, with reference to FIG. 14, an example of the system configuration according to the third embodiment will be described. In the present embodiment, as in the second embodiment, a hierarchical storage control is executed among storage devices in a system in which another storage device STR2 (1 b) is connected to a storage device STR0 (1) via a network. The third embodiment differs from the second embodiment in that while the network that connects the storage devices was the LAN 20 and file I/O interfaces were used between storage devices in the second embodiment, the network that connects the storage devices is a SAN 35, which is a dedicated network for connection between the storage devices, and a block I/O interface is used between storage devices in the third embodiment. [0189]
• In FIG. 14, the storage device STR2 (1 b) is a storage device with a small-scale configuration similar to the storage device STR1 (1 a) in the second embodiment, but instead of the NAS controller NCTL0 of the storage device STR1 (1 a) in the second embodiment, the storage device STR2 (1 b) has SAN controllers FCTLx. Each FCTLx is provided with an FC controller 11012 b to connect with the SAN 35, but it does not have the file access control CPU 11001 a or its peripheral circuits as the STR1 does, and it does not perform file control. Otherwise, the storage device STR2 (1 b) according to the present embodiment has a configuration similar to that of the storage device STR1 (1 a) according to the second embodiment. [0190]
• The SAN 35 is a dedicated network for connecting the storage device STR0 (1) to the storage device STR2 (1 b), and SAN hosts are not connected to the SAN 35. For the sake of simplification, let us assume that in the present embodiment no SAN hosts are connected to the SAN 35, which is the network to connect the storage devices, and that there is only one network that connects the storage devices. However, SAN hosts can be connected to the SAN 35, and a plurality of networks for connecting the storage devices can be provided to improve fault tolerance. [0191]
• In the present embodiment, the storage device STR2 (1 b) is under the control of the storage device STR0 (1), and file accesses from a NAS host 0 (400) reach the storage device STR2 (1 b) via the storage device STR0 (1). Such a configuration is hereinafter called a “connection of diverse storage devices.” [0192]
  • (2) Migration Processing of File to the Other Storage Device [0193]
• Next, a description will be made as to the processing for migrating a file stored in the STR0 to the STR2, with emphasis on the differences between this processing and the processing according to the second embodiment. [0194]
• A CHF1 (1111) of the storage device STR0 (1) recognizes that the storage device STR2 (1 b), which is a divergent storage device, is connected to the SAN 35. The CHF1 (1111) acts as an initiator and issues a command to collect information, and thereby recognizes that the STR2 (1 b) is connected to the SAN 35. The CHF1 (1111) treats the storage regions of the STR2 as if they were a disk pool within the primary storage device according to the first embodiment. The CHN0 (1100) can use the disk pool via the CHF1 (1111). The management method of the disk pool is described later. In order to ascertain the configuration of the STR2, the CHN0 (1100) of the STR0 (1) becomes an initiator and issues a command to collect information via the CHF1 (1111) to the STR2. The CHN0 (1100) of the STR0 (1) receives a response from the STR2 to the command via the CHF1 (1111) and recognizes from the information included in the response that the STR2 has a SATA disk pool and a low-cost, block type LU having a 15D+1P configuration RAID 5 and a large capacity of 2100 GB; based on this, the CHN0 (1100) of the STR0 decides to manage the LU as a remote LU. Furthermore, due to the fact that the disk pool that the STR2 has is a large capacity, low-cost disk pool, the CHN0 (1100) of the STR0 determines the storage class of the disk pool as “Archive Storage.” The CHN0 (1100) of the STR0 assigns the number LU4 to the LU inside the STR2 and stores in a disk pool management table 131 of an SM 13 information concerning the LU, i.e., “Archive Storage” as the storage class #, “STR2” as the storage node #, “SATA pool” as the disk pool #, “LU4” as the LU #, “remote block” as the LU type, “RAID 5, 15D+1P” as the RAID Conf., and “2100 GB” as the usable capacity. When the CHN0 (1100) of the STR0 executes a file system program, the disk pool management table 131 is referred to and the information concerning the LU is copied from the disk pool management table 131 to a storage class management table 1100451 in a file access control memory. [0195]
• As in the second embodiment, it is assumed that a migration management section 110043A of the CHN0 (1100) of the STR0 has decided to migrate a file abc.doc from the NearLine Storage class to the Archive Storage class, and the following is a description of the migration processing of the file executed based on this assumption. [0196]
• The migration management section 110043A of the STR0 refers to a storage class management table 1100439, selects the LU4 that belongs to the “Archive Storage” class, and decides to transfer the file abc.doc to the LU4. The LU4 has attributes of “STR2 (i.e., the other storage device 1 b)” as its storage node, “SATA disk pool” as its disk pool #, and “remote block” as its LU type. [0197]
• Unlike the second embodiment, since the LU type of the LU4 is “block” type, there is no file system in the LU4. For this reason, the file system program stored in the CHN0 (1100) constructs a local file system LFS4 in the LU4. Due to the fact that the disk pool in which the LU4 is set resides in the other storage device STR2 from the perspective of the STR0, it is a “remote” disk pool and the LU4 is a remote LU; however, since the file system LFS4 set in the LU4 is to be controlled by the CHN0 (1100), the file system LFS4 is managed as a local file system. [0198]
• Due to the fact that the LFS4 is to be managed as a local file system and the LU4 in which the LFS4 is constructed is an LU in the block type, divergent storage device, a file storage management table is treated differently in the present embodiment compared to its treatment in the first and the second embodiments. In other words, a file storage management section 1100433 of the CHN0 (1100) of the STR0 assigns “STR2” as the link destination node name, “LFS4” as the link destination FS name, and STR2-FILE00001 as the link destination filename, and sets these in a file storage management table for the file abc.doc. Since the file abc.doc has already been migrated to the LU2 under the filename FILE00001, the CHN0 (1100) can alternatively set the assigned link destination node name, link destination FS name and link destination filename in the file storage management table for the file FILE00001 in the LFS2. Since the STR2, in which the LU4 actually exists, does not execute the file access control as described earlier, a file storage management table for the STR2-FILE00001 is not created in the STR2. [0199]
• The processing that takes place when the file system program 110043 of the CHN0 (1100) is executed is the same as the processing that takes place on the local file system according to the first embodiment in terms of the file open processing, write processing and migration processing, except for the fact that the processing is executed with the awareness that the link destination node of the file abc.doc (i.e., the storage device in which the actual data of the file abc.doc is stored) is the STR2. [0200]
• However, unlike the first embodiment in which a file is transferred to an LU that is in the primary storage device STR0, data of a file is transferred to an LU that is in the other storage device STR2 according to the present embodiment, which results in input/output processing to and from disks that is different from the first embodiment. While the DKA 12 x of the STR0 controlled the input/output processing to and from disks in the first embodiment, the CHF1 (1111) of the STR0 controls the processing according to the configuration of the present embodiment. For this reason, a CHF communications driver program 110096 is stored in a disk array control memory 11009 of the CHN0. A CHF communications driver section is realized by having a disk array control CPU 11008 execute the CHF communications driver program 110096. The CHF communications driver section sends a disk input/output command (hereinafter called an “I/O command”) to the SM 13. Address information representing the storage positions of the data is included in the I/O command. The CHF1 (1111) receives the I/O command via the SM 13 and, based on the I/O command received, issues an I/O command to the storage device 1 b (STR2) via the SAN 35. The I/O command issued by the CHF1 (1111) includes address information representing the data storage positions within the storage device 1 b (STR2). The storage device 1 b (STR2) processes the I/O command received from the CHF1 (1111) according to the same procedure applied when a disk I/O command is received from a normal host. In other words, the CHF1 of the STR0 is recognized as a host from the perspective of the STR2. [0201]
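• The forwarding path of a disk I/O command in this configuration can be sketched as below; the text does not specify the command or frame formats, so the queue interface, field names and all other identifiers are assumptions:

    def forward_block_io(sm_queue, send_to_san):
        """CHF-side sketch: take an I/O command placed in the shared memory SM
        by the CHN's CHF communications driver and reissue it over the SAN
        to the external storage device."""
        io = sm_queue.pop(0)                      # command deposited via the SM 13
        san_command = {
            "op": io["op"],                       # read or write
            "address": io["remote_address"],      # storage position inside STR2
            "data": io.get("data"),
        }
        send_to_san(san_command)                  # STR2 sees the CHF as a normal host

    sm = [{"op": "write", "remote_address": ("LU4", 128), "data": b"..."}]
    forward_block_io(sm, lambda cmd: print("to STR2:", cmd))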
• According to the present embodiment, the disk pool of the divergent storage device STR2 provided with the block I/O interface can be treated as one of the disk pools of the storage device STR0, and a file system managed by the STR0 can be constructed on the LU that is in the disk pool of the STR2. Furthermore, due to the fact that files stored in the LU of the STR0 can be migrated to the LU within the STR2, a flexible storage hierarchy with superior cost effectiveness can be constructed. [0202]
  • Embodiment 4
  • (1) Example of System Configuration (FIG. 15) [0203]
  • The following is a description of the fourth embodiment. The present embodiment differs from preceding embodiments in its configuration. [0204]
• FIG. 15 is a diagram of an example of the system configuration according to the present embodiment. A storage device STR3 (1 c) is provided with a DKC 70 and disk pools. In the DKC 70, an SW 71 is a switch, NNODEs (72 x) are NAS nodes each provided with a file I/O control mechanism to connect with a LAN, FNODEs (73 x) are FC nodes each provided with a block I/O control mechanism to connect with a SAN, INODEs (74 x) are IP nodes each provided with an IP network control mechanism to connect with an IP network, and DNODEs (75 x) are disk controller nodes each provided with a disk control mechanism to connect with a disk pool. To the switch SW 71 are connected one or more NNODEs 72 x, one or more FNODEs 73 x, one or more INODEs 74 x and one or more DNODEs 75 x. A node to control iSCSI can be connected to the switch SW 71 to form an IP SAN. The node to control the iSCSI would have functions and a configuration similar to those of the FNODE. [0205]
• The DNODE0 and the DNODE1 are connected to and control two types of disk pools, a disk pool 0 and a disk pool 1, which are an FC disk pool 170 and a SATA disk pool 171. [0206]
• The INODE2 and the INODE3 are connected to a NAS-type divergent storage device STR1 (1 a), which is external to the storage device STR3 and is a storage device provided with the file I/O interfaces described in the second embodiment. The FNODE2 and the FNODE3 are connected to a SAN-type divergent storage device STR2 (1 b), which is external to the storage device STR3 and is a storage device provided with the block I/O interfaces described in the third embodiment. [0207]
  • (2) Example of Configuration of the NNODE (FIG. 16) [0208]
• FIG. 16 is a diagram of an example of the configuration of the NNODE. The NNODE 720 is equivalent to the CHN 1100 shown in FIG. 4 with the inter-CPU communications circuit 11007 and the components below it removed and replaced by an SW node controller 7204. The other components are the same as in the CHN 1100 in terms of configuration and function. [0209]
• The SW node controller 7204 is a controller circuit for connecting with the SW 71; it forms commands, data and controller information into internal frame formats that are sent and received within the storage device STR3 (1 c) and sent as disk I/O to other nodes such as the DNODEs. [0210]
  • (3) Example of Configuration of the FNODE (FIG. 17) [0211]
• FIG. 17 is a diagram of an example of the configuration of the FNODE. The FNODE 730 has a configuration in which an SW node controller 7302 is connected to the FC controller 11012 b of the FCTL 1100 b in FIG. 14, which makes the FNODE 730 capable of connecting with the SW 71 via the SW node controller 7302. An FC controller 7301 operates as a target device and sends and receives frames of commands, data and controller information to and from the SAN. The SW node controller 7302 converts frames sent or received by the FC controller 7301 into internal frame configurations of the storage device STR3 (1 c) and sends or receives the converted frames to and from other nodes, such as the DNODEs. [0212]
• The FNODE 73 x operates as an initiator device and, based on disk I/O commands received from the NNODEs or other FNODEs, can send I/O commands to other storage devices connected externally to the storage device STR3. For example, based on commands received from the NNODEs or other FNODEs of the storage device STR3, the FNODE2 and the FNODE3 in FIG. 15 can send I/O commands to the divergent storage device STR2 (1 b) externally connected to the storage device STR3. In this case, the FNODE2 and the FNODE3 appear to be operating as host computers from the perspective of the STR2. [0213]
• Although only the FC controller 7301 and the SW node controller 7302 are shown in FIG. 17 for the sake of simplification, a CPU can be mounted on the FNODEs in order to perform target processing, initiator processing or internal frame generation processing. [0214]
• By installing an iSCSI controller instead of the FC controller 7301, a node that controls iSCSI can be configured; by connecting such a node to the SW 71, an IP SAN can be configured. [0215]
  • (4) Example of Configuration of INODE (FIG. 18) [0216]
• FIG. 18 is a diagram of an example of the configuration of the INODE. The INODE 740 has a configuration in which an SW node controller 7402 is connected to a LAN controller 7401, which is similar to the LAN controller 11002 a of the NCTL0 (1100 a) in FIG. 13; this configuration makes the INODE 740 capable of connecting with the SW 71 via the SW node controller 7402. The INODEs are provided on the storage device STR3 (1 c) in order to connect the external NAS-type storage device STR1 (1 a) to the STR3. [0217]
  • (5) Example of Configuration of DNODE (FIG. 19) [0218]
• FIG. 19 is a diagram of an example of the configuration of the DNODE. The DNODE 750 is similar to the FCTL 1100 b in FIG. 14, but with the FC controller 11012 b removed and replaced with an SW node controller 7501. The DNODE 750 goes into operation when it receives a disk I/O command from one of the NNODEs or FNODEs via the SW 71; as a result, the section 1 d outlined by a broken line in FIG. 15 operates as if it were the independent storage device STR2 in FIG. 14. In the present embodiment, the DNODE0 (750) and the DNODE1 (751) form a pair of redundant controllers. Having redundant DNODEs is similar to the configuration of the storage device STR2 in FIG. 14, where there are also redundant FCTLs. [0219]
  • (6) Migration Processing of Files [0220]
  • The present embodiment only differs from the first, second and third embodiments in its configuration of the storage device, and its processing procedure for executing a hierarchical storage control is similar to that in the first, second and third embodiments; accordingly, only those parts that differ in the operation as a result of differences in the configuration of the storage device are described below. [0221]
  • In the present embodiment, a hierarchical storage control inside the storage device STR[0222] 3 can be executed using a procedure similar to that in the first embodiment. A file system program 110043 stored in a file access control memory of the NNODE 72 x is equipped with a storage class management table 1100439 for managing usable LUs, and can recognize disk pools and LUs managed by the DNODEs 75 x by referring to the storage class management table 1100439. However, unlike the first embodiment, there is no SM 13 for storing shared information; consequently, the NNODE 72 x must query all DNODEs 75 x in advance to specify a usable LU and register it in the storage class management table 1100439. Of course, an SM node for connecting with an SM can be provided for connection with the SW 71 in the present embodiment, so that the storage class management table 1100439 can consist of information stored in the SM, as in the first embodiment.
  • When the NNODE [0223] 72 x specifies a usable disk pool and an LU, creates the storage class management table 1100439, and defines a storage class, a processing similar to that in the first embodiment can be applied subsequently to execute a hierarchical storage control within the storage device STR3 (1 c), i.e., a hierarchical storage control using LUs set in the disk pool 0 and the disk pool 1.
  • To issue a disk I/O command, the SW node driver program stored in the file access control memory 7203 of the NNODE is executed by the file access control CPU 7202, which causes a disk I/O command to be issued via the SW node to the DNODE 750 that manages the LU that is the subject of access. [0224]
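  • The dispatch step could be sketched as follows, reusing the hypothetical StorageClassTable above: the LU that is the subject of access is resolved to the node that manages it, and the command is forwarded over the switch. The frame format and send_via_switch call are assumptions.

```python
def issue_disk_io(table, switch, lu_number, operation, payload=b""):
    """Route a disk I/O command via the SW node to the DNODE that
    manages the target LU (sketch; switch API is hypothetical)."""
    entry = table.lookup(lu_number)                  # which node owns this LU?
    frame = {"op": operation, "lu": lu_number, "data": payload}
    switch.send_via_switch(dst_node=entry.node_id, frame=frame)
```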
  • Through the configuration and processing described above, a system in which a file-based storage hierarchy is constructed within the storage device STR3, as in the first embodiment, can be realized. [0225]
  • Furthermore, the NAS-type divergent storage device STR1 (1 a), provided with file I/O interfaces, can be connected externally to the storage device STR3, resulting in a storage hierarchy configured as in the second embodiment. When the file system program 110043 stored in the file access control memory 7203 of the NNODE 72 x is executed by the file access control CPU 7202, it queries the INODE 74 x as to whether a NAS-type divergent storage device is connected to it; if one is connected, the file system program 110043 obtains from the divergent storage device the information identifying the remote LUs and remote file systems it contains. Under the control of the file access control CPU 7202, a storage class is defined for each of the remote LUs and remote file systems, and information concerning the LUs is registered and managed in the storage class management table 1100439. Subsequent steps are the same as in the processing procedure of the second embodiment. [0226]
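  • The discovery of an externally attached NAS device might look like the sketch below, again reusing the hypothetical table types from above; query_attached_nas and the reply fields are illustrative assumptions.

```python
def discover_nas_storage(table, inodes):
    """Sketch: ask each INODE whether a NAS-type divergent storage
    device is attached, and register its remote LUs / file systems."""
    for inode in inodes:
        reply = inode.query_attached_nas()           # hypothetical query
        if not reply.connected:
            continue
        for rfs in reply.remote_file_systems:
            table.register(LuEntry(
                node_id=inode.node_id,               # I/O is routed via this INODE
                lu_number=rfs.lu_number,
                disk_pool=-1,                        # external-pool marker (assumption)
                storage_class="external-nas",        # one class per remote LU/FS
                total_capacity=rfs.capacity,
            ))
```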
  • To issue a disk I/O command, the SW node driver program stored in the file access control memory 7203 of the NNODE is executed by the file access control CPU 7202, which causes a disk I/O command to be issued from the NNODE via the SW node to the INODE 740 connected to the storage device STR1 (1 a) that is provided with the LU that is the subject of access. Based on the I/O command received, the INODE 740 issues a file access disk I/O command to the storage device STR1 (1 a), and sends and receives the actual file data and control information to and from the STR1 (1 a). [0227]
  • The INODEs 74 x have no involvement whatsoever in the file control information and operate simply as gateways of an IP network. In this arrangement, a hierarchical storage configuration free from interference by other devices, such as NAS hosts, can be realized. Of course, the divergent storage device STR1 (1 a) can instead be connected to the LAN 20, to which the NNODE 720 is connected, as in the second embodiment. [0228]
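  • A pure gateway of this kind can be pictured with the sketch below: the INODE relays file I/O requests and responses between the internal switch and the external NAS device without ever interpreting file control information. The socket transport and switch API are assumptions for illustration.

```python
import socket

def inode_gateway_loop(switch, nas_addr):
    """Relay frames between the internal switch and an external NAS-type
    device; the INODE never touches the file control information."""
    nas = socket.create_connection(nas_addr)         # IP link to STR1
    try:
        while True:
            frame = switch.receive_frame()           # hypothetical switch API
            nas.sendall(frame.payload)               # forward request unchanged
            reply = nas.recv(65536)                  # relay the NAS response
            switch.send_via_switch(dst_node=frame.src_node, frame=reply)
    finally:
        nas.close()
```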
  • Through the configuration and processing described above, a file-based storage hierarchy that utilizes storage pools of an external divergent storage device, as in the second embodiment, can be realized. [0229]
  • Furthermore, the SAN-type divergent storage device STR2 (1 b), a storage device provided with block I/O interfaces, can be connected externally to the storage device STR3, resulting in a storage hierarchy configured as in the third embodiment. When the file system program 110043 stored in the file access control memory 7203 of the NNODE 72 x is executed by the file access control CPU 7202, the NNODE queries the FNODEs 73 x as to whether a SAN-type divergent storage device is connected to them. If one is connected, the NNODE recognizes the remote LUs of the divergent storage device from the contents of the FNODEs' response and constructs local file systems in the remote LUs. The NNODE then defines a storage class for each of the remote LUs and local file systems, and registers and manages information concerning the LUs in the storage class management table 1100439. Subsequent steps are the same as in the third embodiment. [0230]
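  • The corresponding flow for an external SAN device could be sketched as follows; query_external_storage, mkfs, and the reply fields are illustrative assumptions, with the remote LUs absorbed into the same hypothetical table.

```python
def discover_san_storage(table, nnode, fnodes):
    """Sketch: ask each FNODE whether a SAN-type divergent storage device
    is attached, build local file systems in its remote LUs, and register them."""
    for fnode in fnodes:
        reply = fnode.query_external_storage()       # hypothetical query
        if not reply.connected:
            continue
        for lun in reply.remote_luns:
            nnode.mkfs(via_node=fnode.node_id, lu_number=lun.number)  # local FS on remote LU
            table.register(LuEntry(
                node_id=fnode.node_id,               # block I/O routed via this FNODE
                lu_number=lun.number,
                disk_pool=-2,                        # external-pool marker (assumption)
                storage_class="external-san",
                total_capacity=lun.capacity,
            ))
```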
  • To issue a disk I/O command, the SW node driver program is executed by the file access control CPU 7202, which causes a disk I/O command to be issued from the NNODE via the SW node to the FNODE 732 connected to the storage device STR2 (1 b), which is provided with the LU that is the subject of access. The FNODE 732 issues a disk I/O command to the storage device STR2, and sends and receives data and control information to and from the STR2. Through the configuration and processing procedure described above, a file-based storage hierarchy can be realized that, as in the third embodiment, utilizes a file system constructed in the external storage device STR2 and managed by the STR3. [0231]
  • According to the present embodiment, the storage device STR3 behaves as a central controller for constructing a hierarchical storage system, and various types of storage devices can be connected internally and externally to it; consequently, an extremely flexible, scalable and large-scale hierarchical storage system can be constructed. Furthermore, because disks and other storage devices can be connected internally and externally to the storage device STR3 as nodes on its SW 71, high-speed data transfer becomes possible. [0232]
  • (7) Other Applications [0233]
  • Although the first through fourth embodiments have described file transfer methods and storage devices that execute hierarchical migration processing of files based on a file's data life cycle stage, files can also be transferred based on other criteria, and a plurality of criteria can be combined. Possible criteria other than the data life cycle stage include a file's access property and an LU's used capacity. In such cases, the transfer of files can be controlled by providing a migration plan based on the file's access property or the LU's used capacity. [0234]
  • Examples of migration plans based on a file's access property include a plan that re-transfers a file to a storage class one class higher in the hierarchy when the access frequency of the file exceeds a certain level, and a plan that provides a storage class specialized for sequential accesses and transfers a file into that class once the sequential access frequency to the file exceeds a certain level. [0235]
  • An example of a migration plan based on an LU's used capacity is a plan that, once the used capacity of the LU exceeds a certain level, transfers a file stored in the LU to a storage class one class lower in the hierarchy, even if the file's current life cycle stage has not shifted, provided the file has a low access frequency or a long time has elapsed since its date of creation. [0236]
  • The file property information management table 1100438 in the above embodiments manages dynamic properties for each file as access information, and the storage class management table 1100439 manages the total capacity and used capacity of each LU. By utilizing such information, the migration plans described above can be readily realized. [0237]
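  • To make these plans concrete, the following sketch evaluates both example plans against the per-file access information and per-LU capacity information held in the two tables. The thresholds and field names are illustrative only; the patent does not prescribe specific values.

```python
def plan_migrations(files, table,
                    hot_accesses_per_day=100.0,      # illustrative threshold
                    capacity_limit=0.9):             # demote once an LU is 90% full
    """Return (file, direction) pairs: promote files whose access frequency
    exceeds a level; demote cold or old files from nearly full LUs."""
    moves = []
    for f in files:
        if f.access_frequency > hot_accesses_per_day:
            moves.append((f, "promote"))             # one class higher in hierarchy
    for entry in table.entries:
        if entry.used_capacity > capacity_limit * entry.total_capacity:
            for f in files:
                if f.lu_number != entry.lu_number:
                    continue
                if f.access_frequency < 1.0 or f.age_days > 365:
                    moves.append((f, "demote"))      # one class lower, even if the
                                                     # life cycle stage has not shifted
    return moves
```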
  • Hierarchical storage control according to the properties of a file can thus be realized through processing within a storage device, without depending on host computers. [0238]
  • While the description above refers to particular embodiments of the present invention, it will be understood that many modifications may be made without departing from the spirit thereof. The accompanying claims are intended to cover such modifications as would fall within the true scope and spirit of the present invention. [0239]
  • The presently disclosed embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims, rather than the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. [0240]

Claims (25)

What is claimed is:
1. A storage system that is connected to at least one computer, the storage system comprising:
a first interface control device that receives from the at least one computer an access request designating identification information of a file;
a second interface control device connecting to the first interface control device; and
a plurality of disks connecting to the second interface control device, wherein the plurality of disks include at least one first disk and at least one second disk, the first disk and the second disk being of different kinds, wherein the first interface control device decides, based on the identification information received from the computer, a storage position within the plurality of disks for data of the file designated by the identification information, and
the second interface control device controls to store the data of the file designated by the identification information at the storage position decided by the first interface control device.
2. A storage system according to claim 1, wherein the at least one first disk is a Fibre Channel disk equipped with a Fibre Channel type interface, and the at least one second disk is a serial ATA disk equipped with a serial ATA type interface.
3. A storage system according to claim 1, further comprising a memory, a memory controller for controlling the memory, a plurality of first interface control devices each being connected to the memory controller, and a plurality of second interface control devices each being connected to the memory controller,
wherein one of the first interface control devices receives identification information of a file and data of the file from the computer, and stores the data of the file in the memory, and
one of the second interface control devices that is connected to one of the disks that is to store the data of the file controls to store the data of the file retained in the memory in the one of the disks according to the storage position of the data of the file decided by the first interface control device.
4. A storage system according to claim 1, wherein
a first storage region exists in the at least one first disk,
a second storage region exists in the at least one second disk, and
the first interface control device sets up a first file system in the first storage region, and a second file system in the second storage region.
5. A storage system according to claim 4, wherein the first interface control device decides, according to a static property, which is a predetermined property, and a dynamic property, which is a property that changes with the passage of time from the point when the file is created, as to which one of the first storage region and the second storage region is to store the data of the file indicated by the identification information.
6. A storage system according to claim 5, wherein the first interface control device controls to migrate data of a file stored in one of the first storage region and the second storage region to the other storage region according to a change in the dynamic property, and changes identification information that specifies the file and information indicative of correlation between the file and the storage position.
7. A storage system according to claim 6, wherein the static property includes at least one of information that specifies a kind of the file, information that specifies a time when the file is created, and information that specifies a value of the file, and the dynamic property includes at least one of information concerning an access property to the file, and information concerning passage of time elapsed since the file is created.
8. A storage system that is connected to a computer, the storage system comprising:
at least one first interface control device that connects to the computer and receives from the computer an access request containing identification information of a file;
at least one second interface control device that connects to the at least one first interface control device; and
a plurality of first disks each being connected to the at least one second interface control device,
wherein the at least one first interface control device connects to a second storage system having a plurality of second disks,
a first storage region is set in the plurality of first disks,
a second storage region is set in the plurality of second disks,
the at least one first interface control device, upon receiving an access request from the computer, decides, according to property of a file designated by identification information contained in the access request received, as to which one of the first storage region and the second storage region to store data of the file,
when the data of the file is stored in the first storage region, the at least one second interface control device stores the data of the file in one of the plurality of first disks, and
when the data of the file is stored in the second storage region, the at least one first interface control device that received the access request from the computer controls such that the data of the file is transmitted to the second storage system through the at least one first interface control device that is connected to the second storage system.
9. A storage system according to claim 8, wherein
the second storage system includes a third interface control device that receives an access request having identification information of a file, and that accesses a storage region within the second storage region correlated to the identification information received to thereby access data of the file specified by the identification information.
10. A storage system according to claim 9, wherein the third interface control device sets a file system in the second storage region, and wherein, when the first interface control device receives an access request from the computer, and when data of a file designated by identification information contained in the access request is to be stored in the second storage region, the first interface control device controls to transmit the access request having the identification information correlated to the file to the third interface control device through the first interface control device connected to the second storage system.
11. A storage system according to claim 10, wherein the first interface control device connected to the second storage system receives an access request from the computer.
12. A storage system according to claim 10, wherein the first interface control device migrates the data of the file from the first storage region to the second storage region through the third interface control device, based on property of files whose data is stored in the first storage region.
13. A storage system according to claim 12, wherein, when the first interface control device migrates data of a file stored in the first storage region to the second storage region, the first interface control device transmits to the third interface control device an access request having identification information correlated to the file, and stores the identification information of the file received from the computer correlated with the file system set in the second storage region.
14. A storage system according to claim 13, wherein
the first interface control device stores information concerning property of files and information concerning property of the first storage region and the second storage region, and decides, based on the information concerning property of files and the information concerning property of the first storage region and the second storage region, as to whether or not data of a file stored in the first storage region is to be migrated to the second storage region.
15. A storage system that is connected to a computer, the storage system comprising:
a first interface control device that receives from the computer an access request having identification information for designating a file;
a second interface control device that is connected to the first interface control device;
a plurality of first disks that are connected to the second interface control device;
a third interface control device that is connected to a second storage system having a fourth interface control device that receives an access request containing address information indicating a storage position of data and having a plurality of second disks that are connected to the fourth interface control device;
a first storage region existing in the plurality of first disks; and
a second storage region existing in the plurality of second disks; wherein
the first interface control device, upon receiving an access request for a file from the computer, decides as to which one of the first storage region and the second storage region to store data of the file according to property of the file indicated by identification information contained in the access request received,
when the data of the file is stored in the first storage region, the second interface control device stores the data of the file in one of the plurality of first disks, and
when the data of the file is stored in the second storage region, the first interface control device controls to transmit to the fourth interface control device through the third interface control device the access request containing address information for an address within the second storage region where the data of the file is to be stored.
16. A storage system according to claim 15, wherein the third interface control device is an interface control device that corresponds to a block I/O interface.
17. A storage system according to claim 16, wherein the first interface control device sets up a file system in the second storage region.
18. A storage system according to claim 17, wherein the first interface control device controls to migrate data of a file from the first storage region to the second storage region through the third interface control device based on property of the file whose data is stored in the first storage region.
19. A storage system according to claim 18, wherein, when the first interface control device migrates the data of the file stored in the first storage region to the second storage region, the first interface control device controls to transmit to the second storage system through the third interface control device an access request containing an address of a storage region in the second storage region that stores the data of the file, and changes relation between identification information of the file and information indicating the storage region that stores the data of the file.
20. A storage system that is connected to a computer, the storage system comprising:
a first node that receives from the computer an access request containing identification information of a file;
a second node that is connected to at least one first disk;
a third node that is connected to at least one second disk and a second storage system that is connected to the at least one second disk and has a file I/O interface control device that receives an access request having identification information of a file;
a fourth node that is connected to at least one third disk and a third storage system that is connected to the at least one third disk and has a block I/O interface control device that receives an access request having address information of the file indicating a storage position of the file within the at least one third disk; and
a switch device that mutually connects the first node, the second node, the third node and the fourth node,
wherein a first storage region exists in the at least one first disk, a second storage region exists in the at least one second disk, and a third storage region exists in the at least one third disk, and
the first node controls to store data of the file in one of the first storage region, the second storage region and the third storage region according to property of the file specified by identification information received from the computer.
21. A storage system according to claim 20, wherein, when the data of the file is stored in the first storage region, the second node controls to store the data of the file in a storage region in the at least one first disk.
22. A storage system according to claim 20, wherein, when the data of the file is stored in the second storage region, the first node controls to transmit to the second storage system through the third node an access request containing identification information correlated to the file.
23. A storage system according to claim 20, wherein, when the data of the file is stored in the third storage region, the first node controls to transmit to the third storage system through the fourth node an access request containing address information indicating a storage position of the data of the file.
24. A storage system according to claim 20, wherein,
when the data of the file is stored in the first storage region, the second node controls to store the data of the file in a storage region in the at least one first disk,
when the data of the file is stored in the second storage region, the first node controls to transmit to the second storage system through the third node an access request containing identification information correlated to the file, and
when the data of the file is stored in the third storage region, the first node controls to transmit to the third storage system through the fourth node an access request containing address information indicating a storage position of the data of the file.
25. A storage system comprising:
a file I/O interface control device; and
a plurality of disk pools,
wherein the file I/O interface control device sets one of a plurality of storage hierarchies, which respectively define storage classes, for each of the LUs within the disk pools, thereby forming a file system in each of the LUs, and the file I/O interface control device migrates at least one file from one of the LUs to another one of the LUs of an optimal storage class, based on static properties and dynamic properties of each file.
Citations (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US107315A (en) * 1870-09-13 Improvement in cheese-hoop
US143563A (en) * 1873-10-14 Improvement in compounds for making artificial stone
US199515A (en) * 1878-01-22 Improvement in window-frames
US260862A (en) * 1882-07-11 Oil-press envelope
US5379423A (en) * 1988-09-28 1995-01-03 Hitachi, Ltd. Information life cycle processor and information organizing method using it
US5537585A (en) * 1994-02-25 1996-07-16 Avail Systems Corporation Data storage management for network interconnected processors
US5873103A (en) * 1994-02-25 1999-02-16 Kodak Limited Data storage management for network interconnected processors using transferrable placeholders
US5941972A (en) * 1997-12-31 1999-08-24 Crossroads Systems, Inc. Storage router and method for providing virtual local storage
US5991753A (en) * 1993-06-16 1999-11-23 Lachman Technology, Inc. Method and system for computer file management, including file migration, special handling, and associating extended attributes with files
US6041381A (en) * 1998-02-05 2000-03-21 Crossroads Systems, Inc. Fibre channel to SCSI addressing method and system
US6065087A (en) * 1998-05-21 2000-05-16 Hewlett-Packard Company Architecture for a high-performance network/bus multiplexer interconnecting a network and a bus that transport data using multiple protocols
US6209023B1 (en) * 1998-04-24 2001-03-27 Compaq Computer Corporation Supporting a SCSI device on a non-SCSI transport medium of a network
US6269382B1 (en) * 1998-08-31 2001-07-31 Microsoft Corporation Systems and methods for migration and recall of data from local and remote storage
US6275898B1 (en) * 1999-05-13 2001-08-14 Lsi Logic Corporation Methods and structure for RAID level migration within a logical unit
US6327614B1 (en) * 1997-09-16 2001-12-04 Kabushiki Kaisha Toshiba Network server device and file management system using cache associated with network interface processors for redirecting requested information between connection networks
US20010054133A1 (en) * 2000-05-24 2001-12-20 Akira Murotani Data storage system and method of hierarchical control thereof
US20020059539A1 (en) * 1997-10-08 2002-05-16 David B. Anderson Hybrid data storage and reconstruction system and method for a data storage device
US20020062387A1 (en) * 2000-10-30 2002-05-23 Michael Yatziv Interface emulation for storage devices
US20020069280A1 (en) * 2000-12-15 2002-06-06 International Business Machines Corporation Method and system for scalable, high performance hierarchical storage management
US6446141B1 (en) * 1999-03-25 2002-09-03 Dell Products, L.P. Storage server system including ranking of data source
US20020161855A1 (en) * 2000-12-05 2002-10-31 Olaf Manczak Symmetric shared file storage system
US20030046270A1 (en) * 2001-08-31 2003-03-06 Arkivio, Inc. Techniques for storing data based upon storage policies
US20030061440A1 (en) * 2001-01-31 2003-03-27 Elliott Stephen J. New fibre channel upgrade path
US20030065873A1 (en) * 2001-07-31 2003-04-03 Kevin Collins Storage device manager
US20030074523A1 (en) * 2001-10-11 2003-04-17 International Business Machines Corporation System and method for migrating data
US6598174B1 (en) * 2000-04-26 2003-07-22 Dell Products L.P. Method and apparatus for storage unit replacement in non-redundant array
US20030182288A1 (en) * 2002-03-25 2003-09-25 Emc Corporation Method and system for migrating data while maintaining access to data with use of the same pathname
US20030182525A1 (en) * 2002-03-25 2003-09-25 Emc Corporation Method and system for migrating data
US6647474B2 (en) * 1993-04-23 2003-11-11 Emc Corporation Remote data mirroring system using local and remote write pending indicators
US6654830B1 (en) * 1999-03-25 2003-11-25 Dell Products L.P. Method and system for managing data migration for a storage system
US20030225801A1 (en) * 2002-05-31 2003-12-04 Devarakonda Murthy V. Method, system, and program for a policy based storage manager
US20040039891A1 (en) * 2001-08-31 2004-02-26 Arkivio, Inc. Optimizing storage capacity utilization based upon data storage costs
US20040044854A1 (en) * 2002-08-29 2004-03-04 International Business Machines Corporation Method, system, and program for moving data among storage units
US20040083202A1 (en) * 2002-08-30 2004-04-29 Arkivio, Inc. Techniques to control recalls in storage management applications
US20040098419A1 (en) * 2002-11-18 2004-05-20 International Business Machines Corporation Method and apparatus for a migration assistant
US20040098394A1 (en) * 2002-02-12 2004-05-20 Merritt Perry Wayde Localized intelligent data management for a storage system
US6757695B1 (en) * 2001-08-09 2004-06-29 Network Appliance, Inc. System and method for mounting and unmounting storage volumes in a network storage environment
US20040139167A1 (en) * 2002-12-06 2004-07-15 Andiamo Systems Inc., A Delaware Corporation Apparatus and method for a scalable network attach storage system
US20040143648A1 (en) * 2003-01-20 2004-07-22 Koning G. P. Short-cut response for distributed services
US20040143563A1 (en) * 2001-09-26 2004-07-22 Mark Saake Sharing objects between computer systems
US20040162940A1 (en) * 2003-02-17 2004-08-19 Ikuya Yagisawa Storage system
US20040199515A1 (en) * 2003-04-04 2004-10-07 Penny Brett A. Network-attached storage system, device, and method supporting multiple storage device types
US20040210724A1 (en) * 2003-01-21 2004-10-21 Equallogic Inc. Block data migration
US6810462B2 (en) * 2002-04-26 2004-10-26 Hitachi, Ltd. Storage system and method using interface control devices of different types
US20050097126A1 (en) * 2000-08-24 2005-05-05 Microsoft Corporation Partial migration of an object to another storage location in a computer system
US20050120189A1 (en) * 2000-06-27 2005-06-02 David Black Method and apparatus for moving logical entities among storage elements in a computer storage system
US20050149528A1 (en) * 2002-07-30 2005-07-07 Anderson Owen T. Uniform name space referrals with location independence
US20050149671A1 (en) * 2003-05-22 2005-07-07 Katsuyoshi Suzuki Disk array apparatus and method for controlling the same
US20050172097A1 (en) * 2004-01-30 2005-08-04 Hewlett-Packard Development Company, L.P. Storage system with capability to allocate virtual storage segments among a plurality of controllers
US6938039B1 (en) * 2000-06-30 2005-08-30 Emc Corporation Concurrent file across at a target file server during migration of file systems between file servers using a network file system access protocol
US6950920B1 (en) * 2001-11-28 2005-09-27 Hitachi, Ltd. Dual controller system for dynamically allocating control of disk units
US6973455B1 (en) * 1999-03-03 2005-12-06 Emc Corporation File server system providing direct data sharing between clients with a server acting as an arbiter and coordinator
US6983039B2 (en) * 2002-06-05 2006-01-03 Ntt Docomo, Inc. Call admission control method and communication system to which method is applied
US20060010154A1 (en) * 2003-11-13 2006-01-12 Anand Prahlad Systems and methods for performing storage operations using network attached storage

Family Cites Families (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0391041A (en) 1989-09-04 1991-04-16 Nec Corp Automatic migration system for file
US5291593A (en) * 1990-10-24 1994-03-01 International Business Machines Corp. System for persistent and delayed allocation object reference in an object oriented environment
JPH0581090A (en) 1991-09-19 1993-04-02 Nec Corp File recall control system
US5619690A (en) 1993-06-21 1997-04-08 Hitachi, Ltd. Computer system including a computer which requests an access to a logical address in a secondary storage system with specification of a local address in the secondary storage system
GB9401522D0 (en) * 1994-01-27 1994-03-23 Int Computers Ltd Hierarchic data storage system
JP3119992B2 (en) 1994-03-31 2000-12-25 株式会社東芝 Data storage device
US5504882A (en) * 1994-06-20 1996-04-02 International Business Machines Corporation Fault tolerant data storage subsystem employing hierarchically arranged controllers
US5659743A (en) * 1994-12-05 1997-08-19 Legent Corporation Method and apparatus for a pattern based spaced management system
JPH09259037A (en) 1996-03-21 1997-10-03 Toshiba Corp Information storage device
JP3641872B2 (en) 1996-04-08 2005-04-27 株式会社日立製作所 Storage system
JPH09297699A (en) 1996-04-30 1997-11-18 Hitachi Ltd Hierarchical storage and hierarchical storage file management method
JP3781212B2 (en) 1996-06-04 2006-05-31 株式会社日立製作所 sub-system
US6032224A (en) * 1996-12-03 2000-02-29 Emc Corporation Hierarchical performance system for managing a plurality of storage units with different access speeds
JPH10171690A (en) * 1996-12-06 1998-06-26 Hitachi Ltd Electronic file device
JP3671595B2 (en) 1997-04-01 2005-07-13 株式会社日立製作所 Compound computer system and compound I / O system
JPH10301720A (en) 1997-04-24 1998-11-13 Nec Ibaraki Ltd Disk array device
US6330572B1 (en) * 1998-07-15 2001-12-11 Imation Corp. Hierarchical data storage management
US7392234B2 (en) * 1999-05-18 2008-06-24 Kom, Inc. Method and system for electronic file lifecycle management
EP0981091B1 (en) 1998-08-20 2008-03-19 Hitachi, Ltd. Data copying in storage systems
JP4400895B2 (en) * 1999-01-07 2010-01-20 株式会社日立製作所 Disk array controller
IE20000203A1 (en) * 1999-03-25 2001-02-21 Converge Net Technologies Inc Storage domain management system
US6490666B1 (en) * 1999-08-20 2002-12-03 Microsoft Corporation Buffering data in a hierarchical data storage environment
US6681310B1 (en) * 1999-11-29 2004-01-20 Microsoft Corporation Storage management system having common volume manager
JP2002049511A (en) * 2000-05-24 2002-02-15 Hitachi Ltd Allocation changing method for address and external storage subsystem using the same
US6850959B1 (en) 2000-10-26 2005-02-01 Microsoft Corporation Method and system for transparently extending non-volatile storage
EP1202549A3 (en) * 2000-10-31 2004-03-17 Eastman Kodak Company A method and apparatus for long term document preservation
JP2002182859A (en) 2000-12-12 2002-06-28 Hitachi Ltd Storage system and its utilizing method
JP4041284B2 (en) * 2001-01-23 2008-01-30 株式会社日立製作所 Storage system
JP2002229740A (en) 2001-01-31 2002-08-16 Toshiba Corp Storage system with high availability control interface, host computer used therefor, and computer stored with driver program executed by computer
KR100360893B1 (en) * 2001-02-01 2002-11-13 엘지전자 주식회사 Apparatus and method for compensating video motions
US6889232B2 (en) * 2001-02-15 2005-05-03 Microsoft Corporation System and method for data migration
US20040233910A1 (en) 2001-02-23 2004-11-25 Wen-Shyen Chen Storage area network using a data communication protocol
US20020133539A1 (en) * 2001-03-14 2002-09-19 Imation Corp. Dynamic logical storage volumes
JP4039821B2 (en) * 2001-05-09 2008-01-30 株式会社日立製作所 Computer system using disk controller and its operation service
JP4632574B2 (en) 2001-05-25 2011-02-16 Hitachi Ltd Storage device, file data backup method, and file data copy method
JP2003015917A (en) * 2001-07-04 2003-01-17 Hitachi Ltd Data migration processing method and program
JP4156817B2 (en) * 2001-07-27 2008-09-24 Hitachi Ltd Storage system
EP1430399A1 (en) * 2001-08-31 2004-06-23 Arkivio, Inc. Techniques for storing data based upon storage policies
US7136883B2 (en) * 2001-09-08 2006-11-14 Siemens Medical Solutions Health Services Corporation System for managing object storage and retrieval in partitioned storage media
US6941328B2 (en) * 2002-01-22 2005-09-06 International Business Machines Corporation Copy process substituting compressible bit pattern for any unqualified data objects
US7328225B1 (en) * 2002-03-27 2008-02-05 Swsoft Holdings, Ltd. System, method and computer program product for multi-level file-sharing by concurrent users
US6925531B2 (en) * 2002-07-11 2005-08-02 Storage Technology Corporation Multi-element storage array
US20040049513A1 (en) * 2002-08-30 2004-03-11 Arkivio, Inc. Techniques for moving stub files without recalling data
US7035972B2 (en) * 2002-09-03 2006-04-25 Copan Systems, Inc. Method and apparatus for power-efficient high-capacity scalable storage system
US20040260862A1 (en) 2003-06-20 2004-12-23 Eric Anderson Adaptive migration planning and execution

Patent Citations (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5379423A (en) * 1988-09-28 1995-01-03 Hitachi, Ltd. Information life cycle processor and information organizing method using it
US6647474B2 (en) * 1993-04-23 2003-11-11 Emc Corporation Remote data mirroring system using local and remote write pending indicators
US5991753A (en) * 1993-06-16 1999-11-23 Lachman Technology, Inc. Method and system for computer file management, including file migration, special handling, and associating extended attributes with files
US5873103A (en) * 1994-02-25 1999-02-16 Kodak Limited Data storage management for network interconnected processors using transferrable placeholders
US5537585A (en) * 1994-02-25 1996-07-16 Avail Systems Corporation Data storage management for network interconnected processors
US6327614B1 (en) * 1997-09-16 2001-12-04 Kabushiki Kaisha Toshiba Network server device and file management system using cache associated with network interface processors for redirecting requested information between connection networks
US20020059539A1 (en) * 1997-10-08 2002-05-16 David B. Anderson Hybrid data storage and reconstruction system and method for a data storage device
US5941972A (en) * 1997-12-31 1999-08-24 Crossroads Systems, Inc. Storage router and method for providing virtual local storage
US6041381A (en) * 1998-02-05 2000-03-21 Crossroads Systems, Inc. Fibre channel to SCSI addressing method and system
US6209023B1 (en) * 1998-04-24 2001-03-27 Compaq Computer Corporation Supporting a SCSI device on a non-SCSI transport medium of a network
US6065087A (en) * 1998-05-21 2000-05-16 Hewlett-Packard Company Architecture for a high-performance network/bus multiplexer interconnecting a network and a bus that transport data using multiple protocols
US6269382B1 (en) * 1998-08-31 2001-07-31 Microsoft Corporation Systems and methods for migration and recall of data from local and remote storage
US6973455B1 (en) * 1999-03-03 2005-12-06 Emc Corporation File server system providing direct data sharing between clients with a server acting as an arbiter and coordinator
US6446141B1 (en) * 1999-03-25 2002-09-03 Dell Products, L.P. Storage server system including ranking of data source
US6654830B1 (en) * 1999-03-25 2003-11-25 Dell Products L.P. Method and system for managing data migration for a storage system
US6275898B1 (en) * 1999-05-13 2001-08-14 Lsi Logic Corporation Methods and structure for RAID level migration within a logical unit
US6598174B1 (en) * 2000-04-26 2003-07-22 Dell Products L.P. Method and apparatus for storage unit replacement in non-redundant array
US20010054133A1 (en) * 2000-05-24 2001-12-20 Akira Murotani Data storage system and method of hierarchical control thereof
US20050120189A1 (en) * 2000-06-27 2005-06-02 David Black Method and apparatus for moving logical entities among storage elements in a computer storage system
US6938039B1 (en) * 2000-06-30 2005-08-30 Emc Corporation Concurrent file across at a target file server during migration of file systems between file servers using a network file system access protocol
US20050097126A1 (en) * 2000-08-24 2005-05-05 Microsoft Corporation Partial migration of an object to another storage location in a computer system
US20020062387A1 (en) * 2000-10-30 2002-05-23 Michael Yatziv Interface emulation for storage devices
US20020161855A1 (en) * 2000-12-05 2002-10-31 Olaf Manczak Symmetric shared file storage system
US20020069280A1 (en) * 2000-12-15 2002-06-06 International Business Machines Corporation Method and system for scalable, high performance hierarchical storage management
US20030061440A1 (en) * 2001-01-31 2003-03-27 Elliott Stephen J. New fibre channel upgrade path
US20030065873A1 (en) * 2001-07-31 2003-04-03 Kevin Collins Storage device manager
US6757695B1 (en) * 2001-08-09 2004-06-29 Network Appliance, Inc. System and method for mounting and unmounting storage volumes in a network storage environment
US20040039891A1 (en) * 2001-08-31 2004-02-26 Arkivio, Inc. Optimizing storage capacity utilization based upon data storage costs
US20030046270A1 (en) * 2001-08-31 2003-03-06 Arkivio, Inc. Techniques for storing data based upon storage policies
US20040143563A1 (en) * 2001-09-26 2004-07-22 Mark Saake Sharing objects between computer systems
US20030074523A1 (en) * 2001-10-11 2003-04-17 International Business Machines Corporation System and method for migrating data
US6950920B1 (en) * 2001-11-28 2005-09-27 Hitachi, Ltd. Dual controller system for dynamically allocating control of disk units
US20040098394A1 (en) * 2002-02-12 2004-05-20 Merritt Perry Wayde Localized intelligent data management for a storage system
US20030182525A1 (en) * 2002-03-25 2003-09-25 Emc Corporation Method and system for migrating data
US20030182288A1 (en) * 2002-03-25 2003-09-25 Emc Corporation Method and system for migrating data while maintaining access to data with use of the same pathname
US6922761B2 (en) * 2002-03-25 2005-07-26 Emc Corporation Method and system for migrating data
US6810462B2 (en) * 2002-04-26 2004-10-26 Hitachi, Ltd. Storage system and method using interface control devices of different types
US20030225801A1 (en) * 2002-05-31 2003-12-04 Devarakonda Murthy V. Method, system, and program for a policy based storage manager
US6983039B2 (en) * 2002-06-05 2006-01-03 Ntt Docomo, Inc. Call admission control method and communication system to which method is applied
US20050149528A1 (en) * 2002-07-30 2005-07-07 Anderson Owen T. Uniform name space referrals with location independence
US6947940B2 (en) * 2002-07-30 2005-09-20 International Business Machines Corporation Uniform name space referrals with location independence
US20040044854A1 (en) * 2002-08-29 2004-03-04 International Business Machines Corporation Method, system, and program for moving data among storage units
US20040083202A1 (en) * 2002-08-30 2004-04-29 Arkivio, Inc. Techniques to control recalls in storage management applications
US20040098419A1 (en) * 2002-11-18 2004-05-20 International Business Machines Corporation Method and apparatus for a migration assistant
US20040139167A1 (en) * 2002-12-06 2004-07-15 Andiamo Systems Inc., A Delaware Corporation Apparatus and method for a scalable network attach storage system
US20040143648A1 (en) * 2003-01-20 2004-07-22 Koning G. P. Short-cut response for distributed services
US20040210724A1 (en) * 2003-01-21 2004-10-21 Equallogic Inc. Block data migration
US20040162940A1 (en) * 2003-02-17 2004-08-19 Ikuya Yagisawa Storage system
US20040199515A1 (en) * 2003-04-04 2004-10-07 Penny Brett A. Network-attached storage system, device, and method supporting multiple storage device types
US20050149671A1 (en) * 2003-05-22 2005-07-07 Katsuyoshi Suzuki Disk array apparatus and method for controlling the same
US20060010154A1 (en) * 2003-11-13 2006-01-12 Anand Prahlad Systems and methods for performing storage operations using network attached storage
US20050172097A1 (en) * 2004-01-30 2005-08-04 Hewlett-Packard Development Company, L.P. Storage system with capability to allocate virtual storage segments among a plurality of controllers

Cited By (117)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9361243B2 (en) 1998-07-31 2016-06-07 Kom Networks Inc. Method and system for providing restricted access to a storage medium
US8234477B2 (en) 1998-07-31 2012-07-31 Kom Networks, Inc. Method and system for providing restricted access to a storage medium
US8782009B2 (en) 1999-05-18 2014-07-15 Kom Networks Inc. Method and system for electronic file lifecycle management
US20060035569A1 (en) * 2001-01-05 2006-02-16 Jalal Ashjaee Integrated system for processing semiconductor wafers
US20070043919A1 (en) * 2002-06-14 2007-02-22 Hitachi, Ltd. Information processing method and system
US7143096B2 (en) 2002-06-14 2006-11-28 Hitachi, Ltd. Information processing method and system
US7185143B2 (en) * 2003-01-14 2007-02-27 Hitachi, Ltd. SAN/NAS integrated storage system
US20070168559A1 (en) * 2003-01-14 2007-07-19 Hitachi, Ltd. SAN/NAS integrated storage system
US20040139168A1 (en) * 2003-01-14 2004-07-15 Hitachi, Ltd. SAN/NAS integrated storage system
US7697312B2 (en) 2003-01-14 2010-04-13 Hitachi, Ltd. SAN/NAS integrated storage system
US7925830B2 (en) 2003-02-17 2011-04-12 Hitachi, Ltd. Storage system for holding a remaining available lifetime of a logical storage region
US7272686B2 (en) 2003-02-17 2007-09-18 Hitachi, Ltd. Storage system
US7275133B2 (en) 2003-02-17 2007-09-25 Hitachi, Ltd. Storage system
US7366839B2 (en) 2003-02-17 2008-04-29 Hitachi, Ltd. Storage system
US20050066126A1 (en) * 2003-02-17 2005-03-24 Ikuya Yagisawa Storage system
US20050065984A1 (en) * 2003-02-17 2005-03-24 Ikuya Yagisawa Storage system
US7047354B2 (en) 2003-02-17 2006-05-16 Hitachi, Ltd. Storage system
US8370572B2 (en) 2003-02-17 2013-02-05 Hitachi, Ltd. Storage system for holding a remaining available lifetime of a logical storage region
US20050050275A1 (en) * 2003-02-17 2005-03-03 Ikuya Yagisawa Storage system
US7146464B2 (en) 2003-02-17 2006-12-05 Hitachi, Ltd. Storage system
US20110167220A1 (en) * 2003-02-17 2011-07-07 Hitachi, Ltd. Storage system for holding a remaining available lifetime of a logical storage region
US8151046B2 (en) 2003-05-22 2012-04-03 Hitachi, Ltd. Disk array apparatus and method for controlling the same
US8200898B2 (en) 2003-05-22 2012-06-12 Hitachi, Ltd. Storage apparatus and method for controlling the same
US8429342B2 (en) 2003-05-22 2013-04-23 Hitachi, Ltd. Drive apparatus and method for controlling the same
US7685362B2 (en) 2003-05-22 2010-03-23 Hitachi, Ltd. Storage unit and circuit for shaping communication signal
US20050149672A1 (en) * 2003-05-22 2005-07-07 Katsuyoshi Suzuki Disk array apparatus and method for controlling the same
US20090150609A1 (en) * 2003-05-22 2009-06-11 Katsuyoshi Suzuki Disk array apparatus and method for controlling the same
US20050097132A1 (en) * 2003-10-29 2005-05-05 Hewlett-Packard Development Company, L.P. Hierarchical storage system
US20070073939A1 (en) * 2003-11-28 2007-03-29 Hitachi, Ltd. Disk array apparatus and data relay method of the disk array apparatus
US8468300B2 (en) 2003-11-28 2013-06-18 Hitachi, Ltd. Storage system having plural controllers and an expansion housing with drive units
US20050120263A1 (en) * 2003-11-28 2005-06-02 Azuma Kano Disk array system and method for controlling disk array system
US20050117462A1 (en) * 2003-11-28 2005-06-02 Azuma Kano Disk array system and method for controlling disk array system
US7401167B2 (en) * 2003-11-28 2008-07-15 Hitachi, Ltd. Disk array apparatus and data relay method of the disk array apparatus
US7865665B2 (en) 2003-11-28 2011-01-04 Hitachi, Ltd. Storage system for checking data coincidence between a cache memory and a disk drive
US20050154942A1 (en) * 2003-11-28 2005-07-14 Azuma Kano Disk array system and method for controlling disk array system
US20050120264A1 (en) * 2003-11-28 2005-06-02 Azuma Kano Disk array system and method for controlling disk array system
US7671485B2 (en) 2003-12-25 2010-03-02 Hitachi, Ltd. Storage system
US7225211B1 (en) * 2003-12-31 2007-05-29 Veritas Operating Corporation Multi-class storage mechanism
US20060095666A1 (en) * 2004-01-09 2006-05-04 Ryoji Furuhashi Information processing system and management device for managing relocation of data based on a change in the characteristics of the data over time
US8607010B2 (en) 2004-01-09 2013-12-10 Hitachi, Ltd. Information processing system and management device for managing relocation of data based on a change in the characteristics of the data over time
US7502904B2 (en) 2004-01-09 2009-03-10 Hitachi, Ltd. Information processing system and management device for managing relocation of data based on a change in the characteristics of the data over time
US7730275B2 (en) 2004-01-09 2010-06-01 Hitachi, Ltd. Information processing system and management device for managing relocation of data based on a change in the characteristics of the data over time
US7930506B2 (en) 2004-01-09 2011-04-19 Hitachi, Ltd. Information processing system and management device for managing relocation of data based on a change in the characteristics of the data over time
US20090187639A1 (en) * 2004-01-09 2009-07-23 Ryoji Furuhashi Information processing system and management device for managing relocation of data based on a change in the characteristics of the data over time
US7188213B2 (en) 2004-01-23 2007-03-06 Hitachi, Ltd. Management computer and method of managing data storage apparatus
US20070038644A1 (en) * 2004-01-23 2007-02-15 Yasunori Kaneda Management computer and method of managing data storage apparatus
US7823010B2 (en) 2004-02-04 2010-10-26 Hitachi, Ltd. Anomaly notification control in disk array
US8365013B2 (en) 2004-02-04 2013-01-29 Hitachi, Ltd. Anomaly notification control in disk array
US8015442B2 (en) 2004-02-04 2011-09-06 Hitachi, Ltd. Anomaly notification control in disk array
US7467238B2 (en) 2004-02-10 2008-12-16 Hitachi, Ltd. Disk controller and storage system
US20050177681A1 (en) * 2004-02-10 2005-08-11 Hitachi, Ltd. Storage system
US20090077272A1 (en) * 2004-02-10 2009-03-19 Mutsumi Hosoya Disk controller
US20100153961A1 (en) * 2004-02-10 2010-06-17 Hitachi, Ltd. Storage system having processor and interface adapters that can be increased or decreased based on required performance
US7917668B2 (en) 2004-02-10 2011-03-29 Hitachi, Ltd. Disk controller
US7464222B2 (en) 2004-02-16 2008-12-09 Hitachi, Ltd. Storage system with heterogeneous storage, creating and copying the file systems, with the write access attribute
US20050192980A1 (en) * 2004-02-16 2005-09-01 Naoto Matsunami Storage system
US20050182900A1 (en) * 2004-02-16 2005-08-18 Naoto Matsunami Storage system
US20050182864A1 (en) * 2004-02-16 2005-08-18 Hitachi, Ltd. Disk controller
US7231469B2 (en) 2004-02-16 2007-06-12 Hitachi, Ltd. Disk controller
US7469307B2 (en) 2004-02-16 2008-12-23 Hitachi, Ltd. Storage system with DMA controller which controls multiplex communication protocol
US8315973B1 (en) * 2004-09-28 2012-11-20 Symantec Operating Corporation Method and apparatus for data moving in multi-device file systems
US20110113194A1 (en) * 2004-11-05 2011-05-12 Data Robotics, Inc. Filesystem-Aware Block Storage System, Apparatus, and Method
US20060129537A1 (en) * 2004-11-12 2006-06-15 Nec Corporation Storage management system and method and program
US7818287B2 (en) 2004-11-12 2010-10-19 Nec Corporation Storage management system and method and program
US7502902B2 (en) 2005-03-11 2009-03-10 Hitachi, Ltd. Storage system and data movement method
US20060206675A1 (en) * 2005-03-11 2006-09-14 Yoshitaka Sato Storage system and data movement method
US20060218366A1 (en) * 2005-03-28 2006-09-28 Satoshi Fukuda Data relocation method
EP1708078A1 (en) 2005-03-28 2006-10-04 Hitachi, Ltd. Data relocation method
US7711916B2 (en) * 2005-05-11 2010-05-04 Oracle International Corporation Storing information on storage devices having different performance capabilities with a storage system
US20060259728A1 (en) * 2005-05-11 2006-11-16 Sashikanth Chandrasekaran Storing information on storage devices having different performance capabilities within a storage system
US9258364B2 (en) * 2005-07-15 2016-02-09 International Business Machines Corporation Virtualization engine and method, system, and computer program product for managing the storage of data
US20110208938A1 (en) * 2005-07-15 2011-08-25 International Business Machines Corporation Virtualization engine and method, system, and computer program product for managing the storage of data
US7590671B2 (en) 2005-09-07 2009-09-15 Hitachi, Ltd. Storage system, file migration method and computer program product
US8166270B2 (en) 2005-09-22 2012-04-24 Hitachi, Ltd. Storage control apparatus, data management system and data management method for determining storage hierarchy based on a user policy
US20110213916A1 (en) * 2005-09-22 2011-09-01 Akira Fujibayashi Storage control apparatus, data management system and data management method
US20080154993A1 (en) * 2005-10-08 2008-06-26 Unmesh Rathi Methods of provisioning a multiple quality of service file system
US20070083482A1 (en) * 2005-10-08 2007-04-12 Unmesh Rathi Multiple quality of service file system
US20080154840A1 (en) * 2005-10-08 2008-06-26 Unmesh Rathi Methods of processing files in a multiple quality of service file system
US20090228535A1 (en) * 2005-10-08 2009-09-10 Unmesh Rathi Multiple quality of service file system using performance bands of storage devices
US8438138B2 (en) * 2005-10-08 2013-05-07 Oracle International Corporation Multiple quality of service file system using performance bands of storage devices
US8930402B1 (en) * 2005-10-31 2015-01-06 Verizon Patent And Licensing Inc. Systems and methods for automatic collection of data over a network
US7716440B2 (en) 2005-11-30 2010-05-11 Hitachi, Ltd. Storage system and management method thereof
US20070124551A1 (en) * 2005-11-30 2007-05-31 Dai Taninaka Storage system and management method thereof
US20090276568A1 (en) * 2006-02-01 2009-11-05 Hitachi, Ltd. Storage system, data processing method and storage apparatus
US8037239B2 (en) * 2006-02-10 2011-10-11 Hitachi, Ltd. Storage controller
US8352678B2 (en) 2006-02-10 2013-01-08 Hitachi, Ltd. Storage controller
US20070192560A1 (en) * 2006-02-10 2007-08-16 Hitachi, Ltd. Storage controller
US20070239803A1 (en) * 2006-03-28 2007-10-11 Yasuyuki Mimatsu Remote mirroring method between tiered storage systems
US8001327B2 (en) * 2007-01-19 2011-08-16 Hitachi, Ltd. Method and apparatus for managing placement of data in a tiered storage system
US20080177948A1 (en) * 2007-01-19 2008-07-24 Hitachi, Ltd. Method and apparatus for managing placement of data in a tiered storage system
US20090150639A1 (en) * 2007-12-07 2009-06-11 Hideo Ohata Management apparatus and management method
US20100095164A1 (en) * 2008-10-15 2010-04-15 Hitachi, Ltd. File management method and hierarchy management file system
US8645645B2 (en) 2008-10-15 2014-02-04 Hitachi, Ltd. File management method and hierarchy management file system
EP2178005A2 (en) 2008-10-15 2010-04-21 Hitachi Ltd. File management method and hierarchy management file system
US8949557B2 (en) 2008-10-15 2015-02-03 Hitachi, Ltd. File management method and hierarchy management file system
US20100211547A1 (en) * 2009-02-18 2010-08-19 Hitachi, Ltd. File sharing system, file server, and method for managing files
US8015157B2 (en) 2009-02-18 2011-09-06 Hitachi, Ltd. File sharing system, file server, and method for managing files
US8433674B2 (en) 2009-04-23 2013-04-30 Hitachi, Ltd. Method for clipping migration candidate file in hierarchical storage management system
US20100274826A1 (en) * 2009-04-23 2010-10-28 Hitachi, Ltd. Method for clipping migration candidate file in hierarchical storage management system
US8229972B2 (en) 2009-08-28 2012-07-24 International Business Machines Corporation Extended data storage system
US8468176B2 (en) 2009-08-28 2013-06-18 International Business Machines Corporation Extended data storage system
US20110055272A1 (en) * 2009-08-28 2011-03-03 International Business Machines Corporation Extended data storage system
US20110078112A1 (en) * 2009-09-30 2011-03-31 Hitachi, Ltd. Method and system for transferring duplicate files in hierarchical storage management system
US8209498B2 (en) 2009-09-30 2012-06-26 Hitachi, Ltd. Method and system for transferring duplicate files in hierarchical storage management system
EP2309372A2 (en) * 2009-10-05 2011-04-13 Hitachi Ltd. Data migration control method for storage device
US20110213814A1 (en) * 2009-11-06 2011-09-01 Hitachi, Ltd. File management sub-system and file migration control method in hierarchical file system
US8554808B2 (en) 2009-11-06 2013-10-08 Hitachi, Ltd. File management sub-system and file migration control method in hierarchical file system
WO2011104741A1 (en) * 2010-02-23 2011-09-01 Hitachi, Ltd. Management system for storage system and method for managing storage system
US20110320754A1 (en) * 2010-02-23 2011-12-29 Hitachi, Ltd Management system for storage system and method for managing storage system
US8756199B2 (en) 2010-03-01 2014-06-17 Hitachi, Ltd. File level hierarchical storage management system, method, and apparatus
US20110231458A1 (en) * 2010-03-01 2011-09-22 Hitachi, Ltd. File level hierarchical storage management system, method, and apparatus
US9110909B2 (en) 2010-03-01 2015-08-18 Hitachi, Ltd. File level hierarchical storage management system, method, and apparatus
US8521978B2 (en) * 2010-04-27 2013-08-27 Hitachi, Ltd. Storage apparatus and method for controlling storage apparatus
US20110264855A1 (en) * 2010-04-27 2011-10-27 Hitachi, Ltd. Storage apparatus and method for controlling storage apparatus
US20170160964A1 (en) * 2015-12-08 2017-06-08 Kyocera Document Solutions Inc. Electronic device and non-transitory computer readable storage medium
US10437488B2 (en) * 2015-12-08 2019-10-08 Kyocera Document Solutions Inc. Electronic device and non-transitory computer readable storage medium
US20240028259A1 (en) * 2022-07-21 2024-01-25 Micron Technology, Inc. Buffer allocation for reducing block transit penalty

Also Published As

Publication number Publication date
US7925851B2 (en) 2011-04-12
US7330950B2 (en) 2008-02-12
US7356660B2 (en) 2008-04-08
CN1570842A (en) 2005-01-26
US20050119994A1 (en) 2005-06-02
US20080263277A1 (en) 2008-10-23
US20050203964A1 (en) 2005-09-15
JP2004295457A (en) 2004-10-21
CN1311328C (en) 2007-04-18
US8230194B2 (en) 2012-07-24
EP1462927A3 (en) 2008-09-24
US20110185123A1 (en) 2011-07-28
CN100520696C (en) 2009-07-29
JP4322031B2 (en) 2009-08-26
EP1462927A2 (en) 2004-09-29
CN101034340A (en) 2007-09-12

Similar Documents

Publication Publication Date Title
US7356660B2 (en) Storage device
JP5507670B2 (en) Data distribution by leveling in a striped file system
US7069380B2 (en) File access method in storage-device system, and programs for the file access
US7464222B2 (en) Storage system with heterogeneous storage, creating and copying the file systems, with the write access attribute
JP4787315B2 (en) Storage system architecture for striping the contents of data containers across multiple volumes of a cluster
US7865677B1 (en) Enhancing access to data storage
EP2411918B1 (en) Virtualized data storage system architecture
US8095577B1 (en) Managing metadata
EP1875384B1 (en) System and method for multi-tiered meta-data caching and distribution in a clustered computer environment
US8078819B2 (en) Arrangements for managing metadata of an integrated logical unit including differing types of storage media
US6973556B2 (en) Data element including metadata that includes data management information for managing the data element
US7739250B1 (en) System and method for managing file data during consistency points
US8510526B2 (en) Storage apparatus and snapshot control method of the same
US9449007B1 (en) Controlling access to XAM metadata
JP2002082775A (en) Computer system
US11822520B2 (en) Freeing pages within persistent memory
JP4409521B2 (en) Storage device
US9727588B1 (en) Applying XAM processes
US20220300429A1 (en) Low-overhead atomic writes for persistent memory
Soltis, The design and implementation of a distributed file system based on shared network storage
US7783611B1 (en) System and method for managing file metadata during consistency points

Legal Events

Date Code Title Description
AS Assignment

Owner name: HITACHI, LTD., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MATSUNAMI, NAOTO;SONODA, KOJI;YAMAMOTO, AKIRA;AND OTHERS;REEL/FRAME:014980/0586;SIGNING DATES FROM 20030902 TO 20030908

STCB Information on status: application discontinuation

Free format text: EXPRESSLY ABANDONED -- DURING EXAMINATION