US20130024734A1 - Storage control apparatus and control method of storage control apparatus - Google Patents


Info

Publication number
US20130024734A1
Authority
US
United States
Prior art keywords
specified, storage, storage apparatus, data, value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US12/866,915
Other versions
US8984352B2 (en)
Inventor
Eiju Katsuragi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Assigned to HITACHI, LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KATSURAGI, EIJU
Publication of US20130024734A1 publication Critical patent/US20130024734A1/en
Application granted granted Critical
Publication of US8984352B2 publication Critical patent/US8984352B2/en
Current legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 Improving I/O performance
    • G06F 3/0611 Improving I/O performance in relation to response time
    • G06F 3/0614 Improving the reliability of storage systems
    • G06F 3/0617 Improving the reliability of storage systems in relation to availability
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0683 Plurality of storage devices
    • G06F 3/0689 Disk arrays, e.g. RAID, JBOD
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/0703 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F 11/0706 Error or fault processing not based on redundancy, the processing taking place on a specific hardware platform or in a specific software environment
    • G06F 11/0727 Error or fault processing not based on redundancy, the processing taking place in a storage system, e.g. in a DASD or network based storage system
    • G06F 11/0751 Error or fault detection not based on redundancy
    • G06F 11/0754 Error or fault detection not based on redundancy by exceeding limits
    • G06F 11/0757 Error or fault detection not based on redundancy by exceeding a time limit, i.e. time-out, e.g. watchdogs

Definitions

  • This invention relates to a storage control apparatus and the control method of the storage control apparatus.
  • a storage control apparatus groups the physical storage areas which multiple storage apparatuses respectively comprise into redundant storage areas based on RAID (Redundant Array of Independent (or Inexpensive) Disks).
  • the storage control apparatus creates logical volumes by using the grouped storage areas, and provides them to a host computer (hereinafter referred to as the host).
  • the storage control apparatus receiving a read request from the host, instructs a hard disk to read the data.
  • the address of the data read from the hard disk is converted, stored in a cache memory, and transmitted to the host.
  • the hard disk, if unable to read data from the storage media due to the occurrence of a certain type of problem in the storage media, the magnetic head, or elsewhere, retries the read after a period of time. If the data still cannot be read from the storage media in spite of the retry processing, the storage control apparatus performs a correction copy and generates the data required by the host.
  • Correction copy is the method for restoring the data by reading the data and the parity from the other hard disks belonging to the same parity group as the hard disk in which the failure occurred (Patent Literature 1).
  • if the retry processing is performed in the hard disk, the time before the read request issued by the host is completed becomes longer. Therefore, the response performance of the storage control apparatus deteriorates, and the quality of the services provided by the application programs on the host deteriorates as well.
  • if an application program operating on the host does not care about the response time, no particular problem occurs.
  • for the application programs which must process a large number of accesses from the client machines in a short time, however, the service quality is reduced if the response time of the storage control apparatus becomes longer.
  • the purpose of this invention is to provide a storage control apparatus, and a control method of the storage control apparatus, which can inhibit the response time from the storage control apparatus to the higher-level device from becoming longer even if the response time of a storage apparatus is long.
  • further purposes of this invention are disclosed in the description of the embodiments described later.
  • the storage control apparatus complying with the Aspect 1 of this invention is a storage control apparatus which inputs/outputs data in accordance with a request from a higher-level device and comprises multiple storage apparatuses for storing data and a controller connected to the higher-level device and each storage apparatus and which makes a specified storage apparatus of the respective storage apparatuses input/output the data in accordance with the request from the higher-level device, wherein the controller, if receiving an access request from the higher-level device, sets the timeout time to a second value which is shorter than a first value in a certain case, requires the read of specified data corresponding to the access request to the specified storage apparatus of the respective storage apparatuses and, if the data cannot be acquired from the specified storage apparatus within the set timeout time, detects that a timeout error occurred and, if the timeout error is detected, makes a second management unit, which is different from a first management unit for managing failures which occur in the respective storage apparatuses, manage the occurrence of the timeout error.
  • the controller at the Aspect 1 comprises a first communication control unit for communicating with the higher-level device, a second communication control unit for communicating with the respective storage apparatuses, and a memory used by the first communication control unit and the second communication control unit, wherein the memory stores timeout time setting information for determining whether to set the timeout time to the first value or to the second value, wherein the timeout time setting information includes the number of queues whose targets are the respective storage apparatuses, a threshold for First In First Out in cases where the First In First Out mode is set as the queuing mode, and a threshold for sorting which is smaller than the threshold for First In First Out in cases where the queuing mode is set to the sorting mode in which sorting is performed in ascending order of distance of logical addresses, wherein, if the first communication control unit receives an access request from the higher-level device, the second communication control unit, in accordance with the timeout time setting information, if the number of queues whose target is the specified storage apparatus is equal to or larger than either the threshold for First In First Out or the threshold for sorting corresponding to the queuing mode set for the specified storage apparatus, selects the first value as the timeout time and, otherwise, selects the second value.
  • the first management unit at the Aspect 1 manages the number of failures which occurred in the respective storage apparatuses and a threshold for restoration for starting a specified restoration step related to the storage apparatuses in which the failures occurred, by making the same correspond to each other.
  • the second management unit manages the number of timeout errors which occurred in the respective storage apparatuses and another threshold for restoration for starting the specified restoration step related to the storage apparatuses in which the timeout errors occurred, by making the same correspond to each other.
  • the other threshold for restoration managed by the second management unit is set larger than the threshold for restoration managed by the first management unit.
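The separate counting described in the aspects above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the class name, method names, and the concrete threshold values are all assumptions made for illustration.

```python
# Illustrative sketch: per-apparatus error counters kept in two separate
# management units, each with its own restoration threshold. The timeout
# threshold is set larger than the ordinary one, so that a burst of
# timeout errors caused by a shortened timeout does not start the
# restoration step as early as ordinary media failures would.

class ErrorManagement:
    def __init__(self, ordinary_threshold=10, timeout_threshold=50):
        self.ordinary_threshold = ordinary_threshold
        self.timeout_threshold = timeout_threshold
        self.ordinary_count = {}   # first management unit (ordinary failures)
        self.timeout_count = {}    # second management unit (timeout errors)

    def record_ordinary_failure(self, disk_id):
        # Returns True when the restoration step should start.
        self.ordinary_count[disk_id] = self.ordinary_count.get(disk_id, 0) + 1
        return self.ordinary_count[disk_id] >= self.ordinary_threshold

    def record_timeout_error(self, disk_id):
        self.timeout_count[disk_id] = self.timeout_count.get(disk_id, 0) + 1
        return self.timeout_count[disk_id] >= self.timeout_threshold

mgmt = ErrorManagement()
for _ in range(10):
    ordinary_restore = mgmt.record_ordinary_failure("disk0")
for _ in range(10):
    timeout_restore = mgmt.record_timeout_error("disk0")
print(ordinary_restore)  # True: ordinary threshold (10) reached
print(timeout_restore)   # False: timeout threshold (50) not yet reached
```

The point of the two counters is isolation: the same number of events triggers restoration on the ordinary path but not on the timeout path.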
  • the controller at the Aspect 1, if the guarantee mode for guaranteeing the response within the specified time is set in the specified storage apparatus, sets the timeout time for reading the specified data from the specified storage apparatus to the second value.
  • the controller, if the queuing mode related to the specified storage apparatus is set to the First In First Out mode, sets the timeout time for reading the specified data from the specified storage apparatus to the second value.
  • the controller at the Aspect 1, if the specified storage apparatus is a storage apparatus other than the previously specified low-speed storage apparatus, sets the timeout time for reading the specified data from the specified storage apparatus to the second value.
  • the controller at the Aspect 1, if the number of queues whose target is the specified storage apparatus is smaller than the specified threshold, sets the timeout time for reading the specified data from the specified storage apparatus to the second value.
  • the controller at the Aspect 1 comprises timeout time setting information for determining whether to set the timeout time to the first value or to the second value, which includes the number of queues whose targets are the respective storage apparatuses, the threshold for First In First Out in cases where the First In First Out mode is set as the queuing mode, and the threshold for sorting which is smaller than the threshold for First In First Out in cases where the queuing mode is set to the sorting mode in which sorting is performed in ascending order of distance of logical addresses, and further, the controller, if the number of queues whose target is the specified storage apparatus is equal to or larger than either the threshold for First In First Out or the threshold for sorting corresponding to the queuing mode set for the specified storage apparatus, selects the first value as the timeout time for reading the specified data from the specified storage apparatus and, if the number of queues whose target is the specified storage apparatus is under either the threshold for First In First Out or the threshold for sorting corresponding to the queuing mode set for the specified storage apparatus, selects the second value as the timeout time for reading the specified data from the specified storage apparatus.
  • the controller at the Aspect 1, if a timeout error is detected, sets another timeout time for which the first value is selected, and requires the read of other data corresponding to the specified data to the other storage apparatuses related to the specified storage apparatus.
  • the controller at the Aspect 1, if a timeout error is detected, sets another timeout time for which the second value is selected, and requires the read of other data corresponding to the specified data to the other storage apparatuses related to the specified storage apparatus.
  • the controller at the Aspect 10, if unable to acquire the other data from the other storage apparatuses within the other timeout time, changes the timeout time to the first value, and requires the read of the specified data to the specified storage apparatus again.
  • the controller at the Aspect 10, if unable to acquire the other data from the other storage apparatuses within the other timeout time, notifies the user.
  • This invention can also be comprehended as a control method of a storage control apparatus. Furthermore, at least a part of the configuration of this invention can be configured as a computer program. This computer program can be distributed fixed in storage media or via a communication network. Furthermore, other combinations than the combinations of the above-mentioned aspects are also included in the scope of this invention.
  • FIG. 1 is an explanatory diagram showing the overall concept of the embodiment of this invention.
  • FIG. 2 is an explanatory diagram showing the overall configuration of the system including the storage control apparatus.
  • FIG. 3 is a block diagram of the storage control apparatus.
  • FIG. 4 is an explanatory diagram showing the mapping status of slots and storage apparatuses.
  • FIG. 5 is an explanatory diagram showing the differences between the queuing modes.
  • FIG. 6 is a table for managing the relationship between the storage apparatuses and virtual devices (RAID groups).
  • FIG. 7 is a table for managing virtual devices.
  • FIG. 8 is a table for managing the modes which can be set from the management terminal.
  • FIG. 9 is a table for managing jobs.
  • FIG. 10 is a flowchart showing the read processing.
  • FIG. 11 is a flowchart showing the staging processing.
  • FIG. 12 is a flowchart showing the correction read processing.
  • FIG. 13 is a flowchart showing the error count processing.
  • FIG. 14 shows a table for managing the error count.
  • FIG. 15 is an explanatory diagram showing the method for setting the timeout time shorter than the normal value.
  • FIG. 16 is a table for managing the thresholds for setting the timeout time with regard to the Embodiment 2.
  • FIG. 17 is a flowchart showing the correction read processing with regard to the Embodiment 3.
  • FIG. 18 is a table for managing the status of the staging processing with regard to the Embodiment 4.
  • FIG. 19 is a flowchart showing the staging processing.
  • FIG. 20 is a flowchart continued from FIG. 19.
  • FIG. 21 is a flowchart of the correction read processing.
  • FIG. 22 is a flowchart showing the staging processing with regard to the Embodiment 5.
  • FIG. 23 is a table for managing the response time of the respective storage apparatuses.
  • FIG. 24 is a diagram of the overall configuration of a system with regard to the Embodiment 6.
  • FIG. 25 is a flowchart of the staging processing.
  • FIG. 26 is a flowchart continued from FIG. 25 .
  • FIG. 1 is described to the extent required for the understanding and practice of this invention.
  • the scope of this invention is not limited to the configuration shown in FIG. 1.
  • the characteristics which are not shown in FIG. 1 are disclosed in the embodiments described later.
  • FIG. 1 shows the overview of the overall invention.
  • the configuration of the computer system is shown on the left side of FIG. 1 and the overview of the processing on the right side, respectively.
  • the computer system comprises a storage control apparatus 1 and a host 2 as a higher-level device.
  • the storage control apparatus 1 comprises a controller 3 and a storage apparatus 4 .
  • the controller 3 comprises a channel adapter 5 as the first communication control unit, a memory 6 , and a disk adapter 7 as the second communication control unit.
  • the channel adapter is abbreviated to the CHA
  • the disk adapter is abbreviated to the DKA.
  • the range surrounded by a dashed line in FIG. 1 indicates the contents of the processing by the DKA 7 .
  • as the storage apparatus 4, various types of devices capable of reading and writing data are available, for example, a hard disk device, a semiconductor memory device, an optical disk device, a magneto-optical disk device, a magnetic tape device, a flexible disk device, and others.
  • as a hard disk device, for example, an FC (Fibre Channel) disk, a SCSI (Small Computer System Interface) disk, a SATA (Serial Advanced Technology Attachment) disk, an ATA (AT Attachment) disk, a SAS (Serial Attached SCSI) disk, and others can be used.
  • as a semiconductor memory device, various types of memory devices are available, for example, a flash memory, an FeRAM (Ferroelectric Random Access Memory), an MRAM (Magnetoresistive Random Access Memory), a phase-change memory (Ovonic Unified Memory), an RRAM (Resistance RAM), a PRAM (Phase change RAM), and others.
  • An application program operating on the host 2 issues an access request (referred to as an “IO” in the figure) to the storage control apparatus 1 .
  • the access request is either a read request or a write request.
  • the read request requires the data to be read from the storage apparatus 4 .
  • the write request requires data write to the storage apparatus 4 . If the storage control apparatus 1 processes the write request, the existing data is frequently read at first. That is, for processing the write request, data read is performed in the storage control apparatus 1 .
  • the CHA 5, receiving an access request (e.g. a read request) from the host 2, generates a job for acquiring the required data (S 1 ).
  • the DKA 7 detecting the job created by the CHA 5 , issues a read request to the specified storage apparatus 4 storing the data required by the host 2 (S 2 ).
  • the storage apparatus 4 accepting the read request, tries to read the data from the storage media (S 3 ).
  • the DKA 7 sets the upper limit time (timeout time) required for acquiring the data from the storage apparatus 4 (S 4 ).
  • the timeout time is occasionally abbreviated to TOV (Time Out Value).
  • two TOVs are prepared in advance: the TOV 1 as the first value and the TOV 2 as the second value.
  • the TOV 1 is a normally set value.
  • the TOV 2 is a value which is set if the response performance is prioritized, and the value is set shorter than the TOV 1 . Therefore, it is possible to also refer to the TOV 1 as a normal value and the TOV 2 as a shortened value.
  • the TOV 1 is set to approximately 4 to 6 seconds.
  • the TOV 2 is set to around 1 second, for example, approximately 0.9 second.
  • the TOV 2 is set to ensure that the total value of the time required for the correction read processing and the TOV 2 falls within a specified time, for example, approximately 2 seconds.
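As a rough arithmetic check of the budget just described: the text only requires that the shortened timeout plus the correction-read time stay within a specified bound of about 2 seconds, so the correction-read figure below is an assumption chosen to illustrate that constraint.

```python
# Response-time budget sketch: the shortened timeout (TOV 2) plus the
# time needed for the correction read must fit within the specified
# response time. Concrete figures are illustrative assumptions.
TOV2 = 0.9                  # shortened timeout, seconds (per the text)
correction_read_time = 1.0  # assumed worst-case rebuild time, seconds
guaranteed_response = 2.0   # target upper bound, seconds (per the text)

worst_case = TOV2 + correction_read_time
print(worst_case <= guaranteed_response)  # True: budget holds
```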
  • the DKA 7, in accordance with the previously set condition, sets the timeout time to either the TOV 1 or the TOV 2 .
  • if the response time guarantee mode is set, the TOV 2 is selected. If the queuing mode (queue processing method) related to the storage apparatus 4 as the read target is set to the first-in first-out (FIFO: First In First Out) mode, the TOV 2 is selected. If the storage apparatus 4 as the read target is other than a low-speed storage apparatus, the TOV 2 is selected. Furthermore, with reference to the operating status (load status) of the storage apparatus 4 as the read target, either the TOV 1 or the TOV 2 can be selected.
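The selection rules above can be sketched as follows. The function name, mode labels, queue-depth thresholds, and the concrete TOV values are assumptions for illustration; the text gives only the approximate magnitudes (TOV 1 around 4 to 6 seconds, TOV 2 around 0.9 second).

```python
# Hedged sketch of the timeout-value (TOV) selection logic described
# above: prefer the shortened TOV 2 when response time is prioritized,
# but fall back to the normal TOV 1 for loaded or low-speed devices.

TOV1 = 5.0   # normal timeout, seconds (assumed within the 4-6 s range)
TOV2 = 0.9   # shortened timeout, seconds

def select_tov(guarantee_mode, queuing_mode, is_low_speed, queue_depth,
               fifo_threshold=20, sort_threshold=10):
    # FIFO queues drain predictably, so a larger backlog is tolerated
    # before reverting to the normal timeout (threshold values assumed).
    threshold = fifo_threshold if queuing_mode == "FIFO" else sort_threshold
    if queue_depth >= threshold:
        return TOV1          # heavily loaded: slow responses are expected
    if is_low_speed:
        return TOV1          # low-speed device: a short timeout would misfire
    if guarantee_mode or queuing_mode == "FIFO":
        return TOV2          # prioritize response performance
    return TOV1

print(select_tov(True, "FIFO", False, queue_depth=5))   # 0.9
print(select_tov(True, "FIFO", False, queue_depth=50))  # 5.0
```

The ordering of the checks reflects the text: load and device speed veto the shortened value before the guarantee mode or queuing mode can select it.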
  • the data read from the storage apparatus 4 is transmitted via the CHA 5 to the host 2 . Meanwhile, if a certain type of error occurs inside the storage apparatus 4 and if the response cannot be transmitted within the timeout time, the DKA 7 determines the occurrence of a timeout error (S 5 ).
  • the DKA 7 makes the management unit for managing timeout errors (the second management unit) store the occurrence of the timeout error (timeout failure).
  • An ordinary failure reported from the storage apparatus 4 is stored in the management unit for managing ordinary failures in the storage apparatus (the first management unit).
  • the DKA 7 detecting the timeout error, resets the read request issued at S 3 (S 7 ).
  • the DKA 7 starts the correction read processing (S 8 ).
  • the correction read processing is the processing of reading other data (and a parity) belonging to the same stripe string as the first read target data from the other respective storage apparatuses 4 belonging to the same parity group as the storage apparatus 4 in which the timeout error is detected, and of generating the first read target data by a logical operation.
  • the correction read processing is also referred to as the correction copy processing.
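The reconstruction underlying the correction read can be illustrated with a parity group that stores the bitwise XOR of its data blocks. This is a generic RAID-5-style sketch, not code from the patent; the function names are illustrative.

```python
# Correction-read sketch: a block on the timed-out disk is rebuilt by
# XOR-ing the remaining data blocks and the parity block that belong
# to the same stripe string.

from functools import reduce

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def correction_read(surviving_blocks):
    """Rebuild the missing block from all other blocks in the stripe."""
    return reduce(xor_blocks, surviving_blocks)

# A stripe of three data blocks plus their parity:
d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\x0f\x0f"
parity = reduce(xor_blocks, [d0, d1, d2])

# Suppose the disk holding d1 times out; rebuild d1 from the rest:
rebuilt = correction_read([d0, d2, parity])
print(rebuilt == d1)  # True
```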
  • the DKA 7 transfers the restored data to the cache memory (S 9 ). Though not shown in the figure, the CHA 5 transmits the data transferred to the cache memory to the host 2 . By this step, the processing of the read request (read command) received from the host 2 is completed.
  • the DKA 7, if satisfying a specified condition, sets a short timeout time TOV 2 for the read request transmitted to the storage apparatus 4 and, if a timeout error occurs, resets the read request and performs the correction read processing.
  • the response time of the storage control apparatus 1 becomes the value ascertained by adding the time required for the correction read processing to the TOV 2 , and it is possible to transmit the data to the host 2 within the specified response time.
  • if the response time guarantee mode is set, if the queuing mode is FIFO, if the specified storage apparatus is not a low-speed storage apparatus, or if the storage apparatus is not highly loaded, the timeout time for reading data from the storage apparatus 4 is set to the TOV 2, which is a shorter value than usual. Therefore, in this embodiment, in accordance with the circumstances, the response performance of the storage control apparatus 1 can be prevented from deteriorating.
  • timeout errors are managed in a management unit which is different from the management unit for managing ordinary failures in the storage apparatus. Therefore, in this embodiment, the start of the restoration step related to the storage apparatus 4 in which the failure occurred (e.g. the processing of copying the data in the storage apparatus 4 to a spare storage apparatus or the processing of restoring the data in the storage apparatus 4 by the correction copy processing) can be controlled separately for timeout errors and for ordinary failures.
  • the timeout time for reading the data from the storage apparatus 4 is set to the TOV 2, which is shorter than the conventional value TOV 1. Therefore, depending on the status of the storage apparatus 4, a relatively large number of timeout errors might occur. If timeout errors and ordinary failures are collectively managed, the possibility of the total failure count exceeding the threshold becomes higher, and the number of times the restoration step is performed increases. If the restoration step is performed frequently, the load on the storage control apparatus 1 increases, and the response performance of the storage control apparatus 1 might be negatively affected. Therefore, in this embodiment, timeout errors and ordinary failures in the storage apparatus are managed separately.
  • FIG. 2 shows the overall configuration of the system including the storage control apparatus 10 with regard to this embodiment.
  • This system can be configured, for example, by including at least one storage control apparatus 10 , one or more hosts 20 , and at least one management terminal 30 .
  • the storage control apparatus 10 corresponds to the storage control apparatus 1 in FIG. 1
  • the storage apparatus 210 corresponds to the storage apparatus 4 in FIG. 1
  • the host 20 corresponds to the host 2 in FIG. 1
  • the controller 100 corresponds to the controller 3 in FIG. 1
  • the channel adapter 110 corresponds to the CHA 5 in FIG. 1
  • the disk adapter 120 corresponds to the DKA 7 in FIG. 1
  • the cache memory 130 and the shared memory 140 correspond to the memory 6 in FIG. 1 respectively.
  • the host 20 and the management terminal 30 are described at first, and then the storage control apparatus 10 is described.
  • the host 20 for example, is configured as a mainframe computer or a server computer.
  • the host 20 is connected to the storage control apparatus 10 via a communication network CN 1 .
  • the communication network CN 1 can be configured as a communication network, for example, such as an FC-SAN (Fibre Channel-Storage Area Network) or an IP-SAN (Internet Protocol-SAN).
  • the management terminal 30 is connected to a service processor 160 in the storage control apparatus 10 via a communication network CN 3 .
  • the service processor 160 is connected to the CHA 110 and others via an internal network CN 4 .
  • the communication networks CN 3 and CN 4 are configured, for example, as a communication network such as LAN (Local Area Network).
  • the management terminal 30 via the service processor (hereinafter referred to as the SVP) 160 , collects various types of information in the storage control apparatus 10 . Furthermore, the management terminal 30 , via the SVP 160 , can instruct various types of setting in the storage control apparatus 10 .
  • the configuration of the storage control apparatus 10 is described below.
  • the storage control apparatus 10 can be roughly classified into the controller 100 and the storage apparatus installed unit 200 .
  • the controller 100 is configured, for example, by comprising at least one or more CHAs 110 , at least one or more DKAs 120 , at least one or more cache memories 130 , at least one or more shared memories 140 , a connection unit (“SW” in the figure) 150 , and the SVP 160 .
  • a cluster can be configured of multiple controllers 100 .
  • the CHA 110 is for controlling data communication with the host 20 and is configured, for example, as a computer apparatus comprising a microprocessor, a local memory, and others. Each CHA 110 comprises at least one or more communication ports.
  • the DKA 120 is for controlling data communication with the respective storage apparatuses 210 and is configured, like the CHA 110, as a computer apparatus comprising a microprocessor, a local memory, and others.
  • the respective DKAs 120 and the respective storage apparatuses 210 are connected, for example, via a communication path CN 2 complying with the fibre channel protocol.
  • the respective DKAs 120 and the respective storage apparatuses 210 perform data transfer in units of blocks.
  • the path through which the controller 100 accesses the respective storage apparatuses 210 is made redundant. Even if a failure occurs in one of DKAs 120 or one of the communication paths CN 2 , the controller 100 can access the storage apparatus 210 by using the other DKA 120 or the other communication path CN 2 . Similarly, the path between the host 20 and the controller 100 can also be made redundant.
  • the configuration of the CHA 110 and the DKA 120 is described later in FIG. 3 .
  • the operation of the CHA 110 and the DKA 120 is briefly described.
  • the CHA 110 receiving a read command issued by the host 20 , stores this read command in the shared memory 140 .
  • the DKA 120 refers to the shared memory 140 as needed and, if discovering an unprocessed read command, reads the data from the storage apparatus 210 and stores the same in the cache memory 130 .
  • the CHA 110 reads the data transferred to the cache memory 130 , and transmits the same to the host 20 .
  • the processing in which the DKA 120 transfers the data read from the storage apparatus 210 to the cache memory 130 is referred to as the staging processing. The details of the staging processing are described later.
  • the CHA 110 receiving a write command issued by the host 20 , stores the write command in the shared memory 140 . Furthermore, the CHA 110 stores the received write data in the cache memory 130 . The CHA 110 , after storing the write data in the cache memory 130 , reports the write completion to the host 20 . The DKA 120 , complying with the write command stored in the shared memory 140 , reads the data stored in the cache memory 130 , and stores the same in the specified storage apparatus 210 .
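The read-path handoff described above can be sketched as follows. The queue, dictionary, and function names are illustrative stand-ins for the shared memory 140, the cache memory 130, and a storage apparatus 210; this is a single-threaded simplification of what the hardware does concurrently.

```python
# Read-path sketch: the CHA stores the command in shared memory, the
# DKA polls shared memory and stages the data from the storage
# apparatus into the cache, and the CHA then returns the cached data.

shared_memory = []                 # command area visible to CHA and DKA
cache_memory = {}                  # staged data, keyed by logical address
storage = {0x100: b"user-data"}    # stand-in for a storage apparatus 210

def cha_receive_read(address):
    shared_memory.append(("READ", address))   # CHA stores the read command

def dka_poll():
    # DKA refers to shared memory and processes unprocessed commands.
    while shared_memory:
        op, address = shared_memory.pop(0)
        if op == "READ":
            cache_memory[address] = storage[address]   # staging processing

def cha_respond(address):
    return cache_memory[address]   # CHA transmits the cached data to the host

cha_receive_read(0x100)
dka_poll()
print(cha_respond(0x100))  # b'user-data'
```

Decoupling the CHA and DKA through shared memory is what lets the write path acknowledge the host as soon as the data reaches the cache, before it is destaged to a storage apparatus.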
  • the cache memory 130 is used, for example, for storing user data and others received from the host 20 .
  • the cache memory 130 is configured of, for example, a volatile memory or a non-volatile memory.
  • the shared memory 140 is configured of, for example, a non-volatile memory. In the shared memory 140 , various types of tables T's described later, management information, and others are stored.
  • the shared memory 140 and the cache memory 130 can be set together on the same memory substrate. Otherwise, it is also possible to use a part of the memory as a cache area and use another part as a control area.
  • connection unit 150 connects the respective CHAs 110 , the respective DKAs 120 , the cache memory 130 , and the shared memory 140 respectively. By this method, all the CHAs 110 and the DKAs 120 can access the cache memory 130 and the shared memory 140 respectively.
  • the connection unit 150 can be configured, for example, as a crossbar switch and others.
  • the SVP 160 is, via the internal network CN 4 , connected to the respective CHAs 110 and the respective DKAs 120 respectively. Meanwhile, the SVP 160 is connected to the management terminal 30 via the communication network CN 3 .
  • the SVP 160 collects the respective statuses inside the storage control apparatus 10 and provides the same to the management terminal 30 . Note that the SVP 160 may also be only connected to either the CHAs 110 or the DKAs 120 . This is because the SVP 160 can collect the respective types of status information via the shared memory 140 .
  • the configuration of the controller 100 is not limited to the above-mentioned configuration.
  • the configuration in which, on one or multiple control substrates, the function of performing data communication with the host 20 , the function of performing data communication with the storage apparatuses 210 , the function of temporarily storing the data, and the function of storing the respective tables as rewritable are respectively set may also be permitted.
  • the configuration of the storage apparatus installed unit 200 is described.
  • the storage apparatus installed unit 200 comprises multiple storage apparatuses 210 .
  • the respective storage apparatuses 210 are configured, for example, as hard disk devices. Not limited to the hard disk devices, in some cases, flash memory devices, magnetic-optical storage apparatuses, holographic memory devices, and others can be used.
  • a parity group 220 is configured of a specified number of storage apparatuses 210 , of which [the number] differs depending on the RAID configuration and others, for example, a pair or a group of four [storage apparatuses].
  • the parity group 220 virtualizes the physical storage areas which the respective storage apparatuses 210 in the parity group 220 comprise.
  • the parity group 220 is a virtualized physical storage area.
  • This virtualized physical storage area is also referred to as a VDEV in this embodiment.
  • in the parity group 220, one or multiple logical storage apparatuses (LDEVs) 230 can be set.
  • the logical storage apparatuses 230 are made to correspond to LUNs (Logical Unit Numbers), and are provided to the host 20 .
  • the logical storage apparatuses 230 are also referred to as logical volumes.
  • FIG. 3 is a block diagram showing the configuration of the CHA 110 and the DKA 120 .
  • the CHA 110, for example, comprises a protocol chip 111, a DMA circuit 112, and a microprocessor 113.
  • the protocol chip 111 is a circuit for performing the communication with the host 20 .
  • the microprocessor 113 controls the overall operation of the CHA 110 .
  • the DMA circuit 112 is a circuit for performing the data transfer between the protocol chip 111 and the cache memory 130 in the DMA (Direct Memory Access) method.
  • the DKA 120, like the CHA 110, comprises, for example, a protocol chip 121, a DMA circuit 122, and a microprocessor 123. Furthermore, the DKA 120 also comprises a parity generation circuit 124.
  • the protocol chip 121 is a circuit for communicating with the respective storage apparatuses 210 .
  • the microprocessor 123 controls the overall operation of the DKA 120 .
  • the parity generation circuit 124 is a circuit for generating parity data by performing a specified logical operation in accordance with the data stored in the cache memory 130 .
  • the DMA circuit 122 is a circuit for performing the data transfer between the storage apparatuses 210 and the cache memory 130 in the DMA method.
  • FIG. 4 is an explanatory diagram showing the frame format of the mapping status between the slots 300 and the storage apparatuses 210 .
  • FIG. 4( a ) shows the case of the RAID 5
  • FIG. 4( b ) shows the case of the RAID 1 .
  • FIG. 4(a) shows the case where the 3D+1P RAID 5 is configured of three data disks (#0, #1, #2) and one parity disk (#3) on the right side. Slots #0 to #7 are allocated in the data disk (#0), slots #8 to #15 are allocated in the data disk (#1), slots #16 to #23 are allocated in the data disk (#2), and parities #0 to #7 are allocated in the parity disk (#3) respectively. That is, in each data disk, eight serial slots are allocated respectively.
  • the size which is equal to eight slots (#0 to #7) is referred to as a parity cycle.
  • in the parity cycle next to the parity cycle shown in the figure, the parity is stored in the disk (#2) to the left of the disk (#3).
  • in the parity cycle after that, the parity is stored in the disk (#1).
  • the disk storing the parity data shifts in each parity cycle.
  • the number of slots included in one parity cycle can be ascertained by multiplying the number of data disks by 8 .
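The layout of FIG. 4(a) can be sketched as follows. This is an illustrative model, not code from the patent: the helper names and the left-to-right ordering of data disks within a cycle are assumptions; only the eight-slots-per-disk allocation, the parity-cycle size (data disks × 8), and the leftward shift of the parity disk per cycle come from the text.

```python
def parity_cycle_slots(num_data_disks, slots_per_disk=8):
    """Number of data slots in one parity cycle (FIG. 4(a): 3 x 8 = 24)."""
    return num_data_disks * slots_per_disk

def locate_slot(slot, num_disks=4, slots_per_disk=8):
    """Map a slot number to (data disk, parity disk) for a 3D+1P RAID 5
    layout with the parity disk shifting left every parity cycle."""
    data_disks = num_disks - 1
    cycle_len = parity_cycle_slots(data_disks, slots_per_disk)
    cycle = slot // cycle_len
    offset = slot % cycle_len
    # Parity starts on the rightmost disk (#3) and shifts left each cycle.
    parity_disk = (num_disks - 1 - cycle) % num_disks
    # Remaining disks hold data, assumed in left-to-right order.
    data_order = [d for d in range(num_disks) if d != parity_disk]
    disk = data_order[offset // slots_per_disk]
    return disk, parity_disk
```

For example, slot #16 falls on data disk #2 in the first cycle, while slot #24 opens the second cycle, whose parity has shifted to disk #2.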
  • FIG. 5 shows the frame format of the queue processing method.
  • in FIG. 5(a), seven queues numbered 1 to 7 are shown.
  • the horizontal axis in FIG. 5( a ) shows the logical address on the storage area in the storage apparatus 210 .
  • the queue number shows the order of accepting commands.
  • the distance between queues corresponds to the distance on the logical address.
  • FIG. 5( b ) shows the queue processing method (mode).
  • as the queuing modes, for example, the FIFO mode and the sorting mode are known.
  • in the FIFO mode, the first received queue is processed first. Therefore, the queues are processed in order from the first queue to the seventh queue.
  • in the sorting mode, queues are sorted so as to reduce rotation latency and seek latency as much as possible.
  • in the example of FIG. 5(b), the processing is performed in order of the first queue, the sixth queue, the third queue, the fifth queue, the fourth queue, and the second queue. Though the second queue is generated early, its processing is postponed. If the seventh queue is received before the processing of the fourth queue is completed, the seventh queue is processed immediately after the fourth queue, and the second queue is processed last.
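The two queuing modes above can be sketched minimally. The request tuples `(arrival_no, logical_address)` and the nearest-address heuristic are illustrative assumptions; the patent does not specify the exact sorting algorithm, only that the sorting mode reorders queues to reduce seek and rotation latency.

```python
def fifo_order(queue):
    """FIFO mode: process requests strictly in arrival order."""
    return list(queue)

def sorted_order(queue, head=0):
    """Sorting mode sketch: repeatedly pick the pending request whose
    logical address is nearest the current head position. A request at
    an isolated address (like queue 2 in FIG. 5) can be postponed."""
    pending = list(queue)
    order = []
    pos = head
    while pending:
        nxt = min(pending, key=lambda r: abs(r[1] - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt[1]
    return order
```

With requests at addresses 0, 100, and 40 (arrival order 1, 2, 3), the sorting mode serves the distant request 2 last even though it arrived early, which is exactly the starvation risk the shortened timeout must account for.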
  • FIG. 6 shows a table T 10 for managing the correspondence relationship between the device IDs and VDEVs.
  • This management table T 10 is stored in the shared memory 140 .
  • the CHA 110 and the DKA 120 can use at least a part of the table T 10 by copying the same in the local memories of the CHA 110 and the DKA 120 .
  • the device ID-VDEV correspondence relationship management table T 10 manages the correspondence relationship between the logical volumes 230 and VDEVs 220 as virtual intermediate storage apparatuses.
  • the management table T10, for example, manages a device ID field C11, a VDEV number field C12, a starting slot field C13, and a slot amount field C14 by making the same correspond to each other.
  • the information for identifying the logical volumes 230 is stored in the device ID field C 11 .
  • in the VDEV number field C12, the information for identifying the VDEVs 220 is stored.
  • in the starting slot field C13, the slot number indicating in which slot in the VDEV 220 the logical volume 230 starts is stored.
  • in the slot amount field C14, the number of slots configuring the logical volume 230 is stored.
  • FIG. 7 shows a table T 20 for managing VDEVs 220 .
  • This management table T 20 is stored in the shared memory 140 .
  • the CHA 110 and the DKA 120 can use at least a part of the management table T 20 by copying the same in the local memories.
  • the VDEV management table T20, for example, comprises a VDEV number field C21, a slot size field C22, a RAID level field C23, a data drive amount field C24, a parity cycle slot amount field C25, a disk type field C26, a queuing mode field C27, and a response time guarantee mode field C28 by making the same correspond to each other.
  • in the VDEV number field C21, the information for identifying the respective VDEVs 220 is stored.
  • in the slot size field C22, the number of slots made to correspond to the VDEV is stored.
  • in the RAID level field C23, the information such as RAID 1 to RAID 6 indicating the RAID type is stored.
  • in the data drive amount field C24, the number of storage apparatuses 210 storing the data is stored.
  • in the parity cycle slot amount field C25, the number of slots included in a parity cycle is stored.
  • this number of slots indicates, when allocating slots to the storage apparatuses 210, after how many slots the allocation should shift to the next storage apparatus 210.
  • in the disk type field C26, the type of the storage apparatuses 210 configuring the VDEV 220 is stored.
  • in the queuing mode field C27, the type of the queuing mode applied to the VDEV 220 is stored. “0” is set in the queuing mode field C27 in case of the FIFO mode, and “1” in case of the sorting mode.
  • in the response time guarantee mode field C28, the setting value of the response time guarantee mode is stored.
  • the response time guarantee mode is the mode which guarantees that the response time of the VDEV 220 falls within a specified length of time. The case where “1” is stored indicates that the response time guarantee mode is set.
  • FIG. 8 shows the mode setting table T 30 .
  • the mode setting table T 30 is set by the management terminal 30 via the SVP 160 .
  • the mode setting table T30 sets the queuing mode and the response time guarantee mode for the entire storage control apparatus 10.
  • the mode setting table T 30 comprises an item field C 31 and a setting value field C 32 .
  • in the item field C31, the queuing mode and the response time guarantee mode are stored.
  • in the setting value field C32, the value indicating whether to set each mode or not is stored.
  • the storage control apparatus 10 does not have to comprise both of the tables T20 and T30.
  • the queuing mode is either set in units of VDEVs (C 27 ) or is set for the entire storage control apparatus 10 (T 30 ).
  • the response time guarantee mode is also either set in units of VDEVs (C 28 ) or is set for the entire storage control apparatus 10 (T 30 ).
  • the configuration in which the VDEV management table T 20 and the mode setting table T 30 coexist may also be permitted.
  • FIG. 9 shows a table T 40 for managing jobs.
  • the job management table T 40 is also referred to as a job control block (JCB).
  • the job management table T 40 manages the status of jobs generated by the kernel.
  • the job management table T40 manages a JCB number field C41, a job status field C42, a WAIT expiration time field C43, a starting flag field C44, a failure occurrence flag field C45, and an inheritance information field C46 by making the same correspond to each other.
  • in the JCB number field C41, the number for identifying the JCB for controlling each job is stored.
  • in the job status field C42, the status of the job managed by the JCB is stored.
  • the job statuses are, for example, “RUN,” “WAIT,” and “Unused.” “RUN” indicates that the job is running. If the DKA 120 receives a message from the CHA 110 , the kernel of the DKA 120 generates a job, and assigns one unused JCB to the job. The DKA 120 changes the job status field C 42 of the JCB assigned to the job from “Unused” to “RUN.” “WAIT” indicates the status in which the completion of the job processing is being waited for. “Unused” indicates that the JCB is not assigned to any job.
  • in the WAIT expiration time field C43, the value created by adding the processing latency (timeout time) to the current time is stored.
  • in the starting flag field C44, the value of the flag for determining whether to restart the job or not is stored. If the data input/output of the storage apparatus 210 is normally terminated or abnormally terminated, the starting flag is set to “1” by the interrupt processing.
  • in the failure occurrence flag field C45, the value of the flag indicating whether a failure occurred in the storage apparatus 210 or not is stored. If a failure occurred in the storage apparatus 210, “1” is set in the failure occurrence flag field C45.
  • in the inheritance information field C46, the information required for restarting the job is stored. That type of information is, for example, the VDEV number, the slot number, and others.
  • the kernel regularly monitors whether, among the jobs in the “WAIT” status, any job whose starting flag is set to “1” or whose WAIT expiration time has passed the current time exists or not.
  • if such a job exists, the kernel of the DKA 120 restarts the job.
  • the status of the restarted job is changed from “WAIT” to “RUN.”
  • the restarted job continues the processing by referring to the inheritance information.
  • when the job is completed, the status is changed from “RUN” to “Unused.”
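The kernel's periodic scan over the JCBs can be sketched as follows. The dictionary keys are illustrative stand-ins for the T40 fields; the restart rule itself (starting flag set to 1, or WAIT expiration time passed) is as described above.

```python
def scan_wait_jobs(jcbs, now):
    """Restart every "WAIT" job whose starting flag is 1 or whose WAIT
    expiration time has passed the current time; return their JCB numbers."""
    restarted = []
    for jcb in jcbs:
        if jcb["status"] != "WAIT":
            continue
        if jcb["starting_flag"] == 1 or now >= jcb["wait_expiration"]:
            jcb["status"] = "RUN"  # the job resumes using its inheritance info
            restarted.append(jcb["jcb_no"])
    return restarted
```

A restarted job goes back to “RUN” and continues from the point recorded in its inheritance information; “Unused” entries are never touched by the scan.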
  • FIG. 10 is a flowchart showing the read processing performed by the CHA 110 .
  • the CHA 110 realizes the functions shown in FIG. 10 by the microprocessor 113 reading a specified computer program stored in the CHA 110 and executing the same.
  • the CHA 110, on receiving a read command from the host 20 (S 10 ), converts the logical address specified by the read command into a combination of a VDEV number and a slot number (S 11 ).
  • the CHA 110 determines whether there is a cache hit or not (S 12 ). If a cache area corresponding to the read target slot number is already secured and, at the same time, if the staging bit within the range of the read target logical block is set to on, a cache hit is determined.
  • in case of a cache miss (S 12 : NO), the CHA 110 transmits a read message to the DKA 120 (S 13 ).
  • in the read message, a VDEV number, a slot number, a starting block number in the slot, and a number of target blocks are included.
  • the CHA 110 after transmitting the read message to the DKA 120 , waits for the completion of the data read processing (staging processing) by the DKA 120 (S 14 ).
  • the CHA 110, on receiving the completion report from the DKA 120 (S 15 ), determines whether the data read from the storage apparatus is normally terminated or not (S 16 ).
  • if the data read is normally terminated (S 16 : YES), the CHA 110 transmits the data stored in the cache memory 130 to the host 20 (S 17 ), and completes this processing. If the data read from the storage apparatus fails (S 16 : NO), the CHA 110 notifies the host 20 of an error (S 18 ), and completes this processing.
  • FIG. 11 is a flowchart of the staging processing.
  • the staging processing is the processing of reading data from the storage apparatus and transferring the same to the cache memory, and is performed by the DKA 120 .
  • the DKA 120, on receiving the message from the CHA 110 (S 20 ), secures an area for storing the data in the cache memory, and further converts the address specified by the message into a physical address (S 21 ). That is, the DKA 120 converts the read destination address into a combination of a storage apparatus number, a logical address, and the number of logical blocks, and requests the data read from the storage apparatus 210 (S 22 ).
  • the DKA 120, when requesting the data read from the storage apparatus 210, sets a timeout time (referred to as a TOV in the figure), and shifts to the waiting status (S 23 ).
  • the DKA 120 sets either the normal value TOV 1, which is a relatively long time, or the shortened value TOV 2, which is a relatively short time, as the timeout time.
  • the selection method of the timeout time is described later in FIG. 15 .
  • the job for reading the data from the storage apparatus 210 is changed to the “WAIT” status. If the starting flag is set to “1” or if the WAIT expiration time has passed, the job processing is restarted (S 24 ).
  • the DKA 120 determines whether the data read is normally terminated or abnormally terminated (S 25 ). The case where the data can be transferred from the storage apparatus 210 to the cache memory 130 is determined to be a normal termination. In case of the normal termination, the DKA 120 sets the staging bit to on (S 26 ), and reports to the CHA 110 that the data read is normally terminated (S 27 ).
  • in case of an abnormal termination, the DKA 120 determines whether a timeout error occurred or not (S 28 ).
  • the timeout error is an error in cases where the data cannot be read from the storage apparatus 210 within the set timeout time.
  • if a timeout error occurred (S 28 : YES), the DKA 120 issues a reset command to the storage apparatus 210 (S 29 ). By the reset command, the data read request to the storage apparatus 210 is cancelled.
  • the DKA 120 after cancelling the data read request, performs the correction read processing (S 30 ).
  • the details of the correction read processing are described later in FIG. 12 . If a failure other than the timeout error occurs in the storage apparatus 210 (S 28 : NO), the DKA 120 skips S 29 , and shifts to the S 30 .
  • the DKA 120 determines whether the correction read processing is normally terminated or not (S 31 ). If the correction read processing is normally terminated (S 31 : YES), the DKA 120 reports to the CHA 110 that the read request is normally terminated (S 27 ). If the correction read processing is not terminated normally (S 31 : NO), the DKA 120 reports to the CHA 110 that the processing of the read request is terminated abnormally (S 32 ).
  • FIG. 12 is a flowchart of the correction read processing shown as S 30 in FIG. 11 .
  • the DKA 120 determines the RAID level of the VDEV 220 to which the read target storage apparatus 210 belongs (S 40 ). In this embodiment, as an example, whether [the RAID level is] the RAID 1 , the RAID 5 , or the RAID 6 is determined.
  • in case of the RAID 5 or the RAID 6, the DKA 120 identifies the numbers of the other respective slots related to the error slot (S 41 ).
  • the error slot is the slot from which no data can be read and in which a certain type of failure occurred.
  • the other respective slots related to the error slot are the other slots included in the same stripe string as the error slot.
  • the DKA 120 after securing an area for storing the data to be acquired from the other respective slots in the cache memory 130 , issues a read request to the respective storage apparatuses 210 which comprise the other respective slots identified at S 41 (S 42 ). Furthermore, the DKA 120 sets the timeout time for reading the data from the respective storage apparatuses 210 as the normal value (S 43 ). In this embodiment, for further ensuring the acquisition of the data required for restoring the data in the error slot, the timeout time is set as the normal value.
  • in case of the RAID 1, the DKA 120 issues a read request to a storage apparatus 210 which is paired with the storage apparatus 210 in which the error occurred (S 44 ), and shifts to S 43.
  • the job related to the read request is in the WAIT status. If the starting flag is set or the WAIT expiration time elapses, [the job] is restarted (S 45 ). The DKA 120 determines whether the data read is normally terminated or not (S 46 ). If [the data read is] not terminated normally, the DKA 120 terminates this processing abnormally.
  • the DKA 120 determines the RAID level (S 47 ). If [the RAID level] is either the RAID 5 or the RAID 6 , the DKA 120 , in accordance with the data and the parity read from the respective storage apparatuses 210 , restores the data, and stores the restored data in the cache area corresponding to the error slot (S 48 ). The DKA 120 sets the staging bit related to the slot to on (S 49 ). In case of the RAID 1 , the DKA 120 skips S 48 , and shifts to the S 49 .
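For RAID 5, the restoration in S 48 follows from the parity relation: the lost slot is the XOR of the data and parity read from the remaining storage apparatuses in the same stripe string. A minimal sketch, with byte strings standing in for slot contents (the function name is illustrative):

```python
def restore_slot(surviving_blocks):
    """XOR all surviving blocks of a RAID 5 stripe together; the result
    is the content of the one missing (error) slot."""
    restored = bytes(len(surviving_blocks[0]))  # start from all zeros
    for block in surviving_blocks:
        restored = bytes(a ^ b for a, b in zip(restored, block))
    return restored
```

Because XOR is its own inverse, the same function computes the parity from the data blocks and recovers any single lost block from the rest. RAID 6 additionally uses a second, independent parity, which this sketch does not cover.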
  • FIG. 13 is a flowchart of the error count processing. This processing is performed by the DKA 120 .
  • the DKA 120 monitors whether an error (failure) occurred in the storage apparatus 210 or not (S 60 ). If an error occurred (S 60 : YES), the DKA 120 determines whether [the error is] a timeout error or not (S 61 ).
  • if [the error is] a timeout error (S 61 : YES), the DKA 120 records the timeout error to a timeout failure field C 53 in the error count management table T 50 shown in FIG. 14 (S 62 ).
  • if [the error is] not a timeout error (S 61 : NO), the DKA 120 records the error to an HDD failure field C 52 in the error count management table T 50 (S 63 ).
  • the error count management table T 50 is described with reference to FIG. 14 .
  • the error count management table T 50 manages the number of errors which occurred in the storage apparatus 210 and the threshold for performing the restoration step.
  • the error count management table T 50 is stored in the shared memory 140, and the DKA 120 can use a part of the same by copying the same in the local memory.
  • the error count management table T 50 manages an HDD number field C 51 , the HDD failure field C 52 , and the timeout failure field C 53 by making the same correspond to each other.
  • the HDD number field C 51 stores the information for identifying each storage apparatus 210 .
  • the HDD failure field C 52 manages ordinary failures which occur in the storage apparatus 210 .
  • the HDD failure field C 52 comprises an error count field C 520 , a threshold field C 521 for starting the copy to the spare storage apparatus, and a threshold field C 522 for starting the correction copy.
  • the error count field C 520 stores the number of times ordinary failures occurred in the storage apparatus.
  • the threshold field C 521 stores a threshold TH 1 a for starting the “sparing processing” in which the data is copied from the storage apparatus where the error occurred to a spare storage apparatus.
  • the other threshold field C 522 stores a threshold TH 2 a for starting the correction copy processing.
  • the timeout failure field C 53 is for managing timeout errors occurring in the storage apparatus 210 , and comprises an error count field C 530 , a threshold field C 531 for starting the sparing processing, and a threshold field C 532 for starting the correction copy.
  • the thresholds for performing the sparing processing and the correction copy processing as the restoration steps are also set separately for ordinary failures and timeout errors respectively.
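The separate bookkeeping above can be sketched on one T 50 row. The dictionary keys are illustrative stand-ins for the fields C 520 to C 532; the point shown is that ordinary HDD failures and timeout failures are counted against their own thresholds, so shortening the timeout does not by itself push the apparatus over the sparing or correction-copy threshold for ordinary failures.

```python
def restoration_actions(entry):
    """Return the set of restoration steps triggered for one storage
    apparatus, checking each error counter only against its own thresholds."""
    actions = set()
    if (entry["hdd_errors"] >= entry["hdd_sparing_th"]
            or entry["timeout_errors"] >= entry["timeout_sparing_th"]):
        actions.add("sparing")
    if (entry["hdd_errors"] >= entry["hdd_correction_th"]
            or entry["timeout_errors"] >= entry["timeout_correction_th"]):
        actions.add("correction_copy")
    return actions
```

With, say, five ordinary errors against a sparing threshold of three, only the sparing processing starts; a burst of timeout errors is judged against the (typically more tolerant) timeout thresholds instead.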
  • FIG. 15 shows the method for selecting the timeout time which is set for reading data from the storage apparatuses 210 .
  • as the timeout time, multiple values TOV 1 and TOV 2 are prepared.
  • the first timeout time TOV 1 is set to a relatively long time, for example, a few seconds, and is also referred to as a normal value.
  • the second timeout time TOV 2 is set to a relatively short time, for example, one second or shorter, and is also referred to as a shortened value. If the specified conditions described below are satisfied, the DKA 120 can set the timeout time to a short value TOV 2 .
  • the storage apparatus 210 as the read target is not a low-speed storage apparatus such as a SATA disk. If the storage apparatus as the read target is low-speed (if the response performance is low) and if the timeout time is set short, a timeout error might occur even if no failure occurs.
  • in the FIFO mode, as queues are processed in order of issuance, it does not occur that the processing of a queue with a distant logical address is postponed and made to wait for an extremely long time.
  • in the sorting mode, as a queue at an isolated position might be made to wait for a long time, if the timeout time is shortened, the possibility that a timeout error might occur even if no failure occurs becomes higher.
  • the DKA 120, if the specified conditions are satisfied, sets a short timeout time TOV 2 for a read request transmitted to the storage apparatuses 210 and, if a timeout error occurs, resets the read request and performs the correction read processing.
  • if the response time guarantee mode is set, if the queuing mode is FIFO, if [the storage apparatus is] not a low-speed storage apparatus, or if the storage apparatus is not highly loaded, the timeout time for reading data from the storage apparatus 210 is set to a shorter value than usual. Therefore, in this embodiment, in accordance with the circumstances, the deterioration of the response performance of the storage control apparatus 10 can be prevented.
  • timeout errors are managed separately from ordinary failures in the storage apparatus. Therefore, even if the timeout time is set shorter than usual, the restoration step such as the sparing processing or the correction copy processing can be inhibited from being performed. Therefore, the deterioration of the response performance due to the increase of the load on the storage control apparatus 10 by performing the restoration steps can be prevented.
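The selection shown in FIG. 15 can be sketched as follows. The concrete TOV values and the reading that all specified conditions must hold simultaneously are assumptions; the patent only states that TOV 2 is chosen when the specified conditions are satisfied and TOV 1 otherwise.

```python
TOV1 = 4.0  # normal value: "a few seconds" (illustrative figure)
TOV2 = 1.0  # shortened value: "one second or shorter" (illustrative figure)

def select_timeout(response_guarantee, low_speed_disk, fifo_mode, highly_loaded):
    """Return the timeout for a read request to a storage apparatus.
    The shortened value is used only when the response time guarantee
    mode is set and a spurious timeout is unlikely."""
    if response_guarantee and not low_speed_disk and fifo_mode and not highly_loaded:
        return TOV2
    return TOV1
```

A low-speed SATA disk, a sorting-mode queue, or a heavy load each falls back to the normal value, since in those cases a short timeout could fire even though no failure occurred.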
  • the Embodiment 2 is described with reference to FIG. 16 .
  • the respective embodiments described below, including this embodiment, are equivalent to variations of the Embodiment 1. Therefore, the differences from the Embodiment 1 are mainly described.
  • in this embodiment, the timeout time is set short in accordance with the load status of the storage apparatus 210.
  • This embodiment is a variation of the (Specified Condition 5) described in the Embodiment 1.
  • FIG. 16 is a table T 70 storing thresholds for setting the timeout time.
  • the threshold table T 70, for example, manages an HDD number field C 71, a queuing command amount field C 72, a threshold field C 73 for the FIFO mode, and a threshold field C 74 for the sorting mode by making the same correspond to each other.
  • in the HDD number field C 71, the information for identifying the respective storage apparatuses 210 is stored.
  • in the queuing command amount field C 72, the number of unprocessed commands whose target is the storage apparatus 210 is stored.
  • in the threshold field C 73 for the FIFO mode, the threshold TH 3 for the cases where the queuing mode is set to the FIFO mode is stored.
  • in the threshold field C 74 for the sorting mode, the threshold TH 4 for the cases where the queuing mode is set to the sorting mode is stored.
  • if the number of unprocessed commands is equal to or larger than the threshold, the timeout time of the read request whose read target is the storage apparatus 210 is set to the normal value.
  • if many unprocessed commands are accumulated, a timeout error might occur regardless of failures.
  • the possibility that a timeout error might occur also varies depending on the method for processing the unprocessed commands.
  • therefore, in this embodiment, the timeout time is set in accordance with the number of unprocessed commands and the queuing mode.
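The rule of this embodiment can be sketched on one T 70 row. The threshold roles follow the table; the numeric defaults and the exact comparison direction (normal timeout when the command count reaches the threshold) are assumptions consistent with the description above.

```python
def select_timeout_by_load(queued_cmds, fifo_mode, th_fifo, th_sort,
                           tov_normal=4.0, tov_short=1.0):
    """Pick the timeout for a read request: when the number of unprocessed
    commands reaches the threshold for the active queuing mode (TH3 for
    FIFO, TH4 for sorting), keep the normal timeout; otherwise the
    shortened timeout may be used."""
    threshold = th_fifo if fifo_mode else th_sort
    return tov_normal if queued_cmds >= threshold else tov_short
```

Keeping separate thresholds per mode reflects that the sorting mode can legitimately delay an isolated request much longer than FIFO at the same queue depth.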
  • FIG. 17 is a flowchart of the correction read processing. This processing comprises the steps S 40 to S 42 and S 44 to S 49, which are common to the processing shown in FIG. 12, and differs from FIG. 12 in S 43 A. That is, in the correction read processing of this embodiment, the timeout time is set to a shorter value than usual, and the data and the parity are read from the respective storage apparatuses 210.
  • This embodiment which is configured as above also has the same effect as the Embodiment 1. Furthermore, in this embodiment, the timeout time for the correction read is set short, which can further prevent the deterioration of the response performance in the storage control apparatus 10 .
  • Embodiment 4 is described with reference to FIG. 18 to FIG. 21 .
  • in this embodiment, if the correction read processing fails, the data read from the storage apparatus 210 as the first read target is retried.
  • FIG. 18 is a status management table T 80 for managing the progress of the staging processing.
  • the status management table T 80 for example, comprises an item number field C 81 , a contents field C 82 , and a value field C 83 .
  • in the item number field C 81, each step in the staging processing for reading data from the storage apparatus 210 and transferring the same to the cache memory 130 is shown.
  • when each step is performed, “1” is set in the corresponding value field C 83.
  • in the Step 1, the timeout time is set to the shortened value TOV 2, and the data read is requested from the storage apparatus 210.
  • in the Step 2, a timeout error related to the first read request occurs.
  • after that, the timeout time is set to the normal value TOV 1, and the second data read is requested from the storage apparatus 210 as the read target.
  • FIG. 19 and FIG. 20 are the flowcharts of the staging processing. This processing corresponds to the staging processing shown in FIG. 11 . The differences between this processing and the processing shown in FIG. 11 are S 70 to S 76 .
  • the DKA 120, on receiving a read message from the CHA 110 (S 20 ), initializes the value field C 83 of the status management table T 80 (S 70 ).
  • the DKA 120 after performing the address conversion and others (S 21 ), issues a read request to the storage apparatus 210 (S 22 ).
  • for the first read request, the DKA 120 sets the timeout time to TOV 2, which is a shorter value than usual (S 71 ). Note that, if the data read from the same storage apparatus 210 is retried, the timeout time is set to the normal value TOV 1 (S 71 ).
  • the DKA 120, if setting the timeout time to the shortened value TOV 2, sets the value of the Step 1 in the status management table to “1” (S 72 ). By this method, it is recorded in the table T 80 that the first read is started.
  • the processing proceeds to FIG. 20. If the first data read from the storage apparatus 210 fails with a timeout (S 28 : YES), the DKA 120 issues a reset command and cancels the read request (S 29 ). The DKA 120 sets the value of the Step 2 in the status management table T 80 to “1” (S 73 ). By this method, the occurrence of a timeout error related to the first read request is recorded in the status management table T 80.
  • the DKA 120 refers to the status management table T 80, and determines whether the staging processing has reached the Step 3 or not (S 74 ). At this point, as the correction read processing is not started yet, [the processing] is determined not to have reached the Step 3 (S 74 : NO). Therefore, the DKA 120 performs the correction read processing (S 75 ).
  • if the correction read processing is normally terminated (S 31 : YES), the DKA 120 notifies the CHA 110 that the read request is normally terminated (S 27 ). If the correction read processing is not terminated normally (S 31 : NO), the DKA 120 refers to the status management table T 80 and determines whether the progress of the staging processing has reached the Step 2 or not (S 76 ).
  • as the value of the Step 2 in the status management table T 80 is set to “1,” the DKA 120 determines that [the processing] has reached the Step 2 (S 76 : YES), and returns to S 22 in FIG. 19.
  • the DKA 120 issues a read request to the storage apparatus 210 as the read target again (S 22 ).
  • the DKA 120 sets the timeout value related to the second read request to the normal value TOV 1 (S 71 ). As this is the second read request and the timeout value is not shortened, S 72 is skipped.
  • if the second data read is normally terminated, the DKA 120 sets the staging bit to on (S 26 ), and reports the normal termination to the CHA 110 (S 27 ).
  • if a timeout error occurs in the second data read, the DKA 120 resets the second read request (S 29 ). Note that, as the Step 2 in the status management table T 80 is already set to “1,” “1” is not set at S 73 again, and [the processing] shifts to S 74.
  • the DKA 120 refers to the status management table T 80, and determines whether [the processing] has reached the Step 3 or not (S 74 ). At this point, as the attempt of the correction read processing has failed, [the processing] is determined to have reached the Step 3 (S 74 : YES), and the DKA 120 notifies the CHA 110 that the processing of the read request failed (S 32 ). That is, if the second read request fails, this processing is terminated without performing the second correction read processing.
  • FIG. 21 is a flowchart of the correction read processing. This processing is different from the processing shown in FIG. 12 in S 80 and S 81 .
  • the DKA 120 sets the normal value as the timeout time for the correction read (S 80 ). If the correction read processing is terminated abnormally, the DKA 120 sets the Step 3 of the status management table T 80 to “1” and records that the correction read failed (S 81 ).
  • This embodiment which is configured as above also has the same effect as the Embodiment 1. Furthermore, in this embodiment, if the correction read fails, data read from the storage apparatus 210 is retried with the normal timeout time. Therefore, the possibility of being able to read data from the storage apparatus 210 can be increased, and the reliability in the storage control apparatus 10 can be improved.
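The Embodiment 4 flow described above (a first read with the shortened timeout, a correction read on timeout, and a retried read with the normal timeout if the correction read also fails) can be sketched as follows. This is an illustrative sketch only: the function names, the status dictionary standing in for table T 80, and the concrete timeout values are assumptions, not the patent's implementation.

```python
# Hypothetical sketch of the Embodiment 4 staging flow. The status dict
# mirrors the Step 2 / Step 3 flags of the status management table T80.

TOV_NORMAL = 5.0   # TOV1: normal timeout value (seconds, illustrative)
TOV_SHORT = 0.9    # TOV2: shortened timeout value (illustrative)

def staging(read_from_disk, correction_read, status):
    """read_from_disk/correction_read return True on success within timeout."""
    # First attempt with the shortened timeout (S71).
    if read_from_disk(timeout=TOV_SHORT):
        return "normal termination"              # S26/S27
    status["step2"] = 1                          # timeout occurred (S73)
    if status.get("step3") != 1:                 # correction not tried yet (S74: NO)
        if correction_read(timeout=TOV_NORMAL):  # S75, normal TOV (S80)
            return "normal termination"          # S27
        status["step3"] = 1                      # correction read failed (S81)
    # Retry the original read with the normal timeout (S22, S71).
    if read_from_disk(timeout=TOV_NORMAL):
        return "normal termination"
    return "read request failed"                 # S32
```

As in the text, a second correction read is never attempted: once Step 3 is recorded, failure of the retried read terminates the processing.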
  • Embodiment 5 is described with reference to FIG. 22 and FIG. 23 .
  • In this embodiment, whether the correction read processing is performed or not is controlled in accordance with the response time of the storage apparatuses 210.
  • FIG. 22 is a flowchart of the staging processing.
  • the processing in FIG. 22 is different from the processing shown in FIG. 11 in S 90 and S 91 . If a timeout error occurs (S 28 : YES), the DKA 120 refers to the response time management table T 90 (S 90 ), and determines whether the response time [values] of all the storage apparatuses 210 as the target of the correction read are longer than the standard value or not (S 91 ).
  • If the response time [values] of all the target storage apparatuses 210 are longer than the standard value (S 91: YES), the DKA 120 does not perform the correction read processing and notifies the CHA 110 that the processing of the read request failed (S 32).
  • Otherwise (S 91: NO), the DKA 120 resets the read request (S 29), and performs the correction read processing (S 30).
  • Note that the configuration in which the correction read processing is not performed may also be permitted.
  • FIG. 23 shows the table T 90 managing the response time of the respective storage apparatuses 210 .
  • The response time management table T 90, for example, manages a VDEV number field C 91, an HDD number field C 92, a response time field C 93, and a determination field C 94 by making the same correspond to each other.
  • In the response time field C 93, the latest response time of each storage apparatus 210 is recorded.
  • In the determination field C 94, the result of comparing the response time of each storage apparatus 210 with the specified standard value is recorded. If the response time is equal to or larger than the standard value, "Late" is recorded while, if the response time is under the standard value, "Normal" is recorded.
  • By using the response time management table T 90, it can be determined whether the correction read can be completed in a short time or not. Note that, instead of managing the response time directly, the number of unprocessed commands of each storage apparatus may also be managed. Furthermore, the configuration in which, in accordance with the number of unprocessed commands, the type of the storage apparatus 210 , and other information, the time required for the correction read processing is presumed may also be permitted.
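The table-driven decision of Embodiment 5 can be sketched as below. The standard value, the field names, and the rule that at least one "Normal" drive justifies the correction read are illustrative assumptions drawn from the description of fields C 93 and C 94.

```python
# Illustrative sketch of the S90/S91 decision: skip the correction read when
# every storage apparatus in the target parity group is responding slowly.

STANDARD_VALUE = 1.0  # seconds; the "specified standard value" (assumed)

def classify(response_time):
    # Determination field C94: "Late" at or above the standard, else "Normal".
    return "Late" if response_time >= STANDARD_VALUE else "Normal"

def should_correction_read(table, vdev):
    """table is a list of rows mirroring T90; vdev selects the parity group."""
    targets = [row for row in table if row["vdev"] == vdev]
    # Perform correction read only if at least one target drive is "Normal".
    return any(classify(row["response_time"]) == "Normal" for row in targets)
```

A controller could equally drive this decision from the number of unprocessed commands, as the text notes; only the predicate inside `classify` would change.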
  • Embodiment 6 is described with reference to FIG. 24 to FIG. 26 .
  • In this embodiment, if the correction read processing fails, [the failure] is notified to the user, and [the processing is] switched to the storage control apparatus 10 ( 2 ) of the standby system.
  • FIG. 24 is a system configuration diagram of this embodiment.
  • This embodiment comprises the storage control apparatus 10 ( 1 ) of the currently used system and the storage control apparatus 10 ( 2 ) of the standby system. In normal cases, the user uses the storage control apparatus 10 ( 1 ) of the currently used system.
  • FIG. 25 and FIG. 26 are the flowcharts of the staging processing.
  • the flowchart in FIG. 25 is different from the flowchart in FIG. 19 in that the connector 2 is not included.
  • the flowchart in FIG. 26 is different from the flowchart in FIG. 20 in the processing after the correction read processing fails.
  • 1 storage control apparatus
  • 2 host
  • 3 controller
  • 4 storage apparatus
  • 5 channel adapter (CHA)
  • 6 memory
  • 7 disk adapter (DKA)
  • 10 storage control apparatus
  • 20 host
  • 30 management terminal
  • 100 controller
  • 110 CHA
  • 120 DKA
  • 130 cache memory
  • 140 shared memory
  • 210 storage apparatus
  • 220 parity group (VDEV)
  • 230 logical volume (LDEV).

Abstract

[This invention] inhibits the response time of the storage control apparatus from becoming longer even if the response time of the storage apparatus is long.
The disk adapter (DKA), receiving a read message from the channel adapter (CHA), sets the timeout time in accordance with specified conditions, and tries to read data from the storage apparatus 4. As the timeout time, either the normal value or the shortened value is selected. If a timeout error occurs, the read job is reset, and correction read is started.

Description

    TECHNICAL FIELD
  • This invention relates to a storage control apparatus and the control method of the storage control apparatus.
  • BACKGROUND ART
  • Corporate users and others manage data by using storage control apparatuses. A storage control apparatus groups physical storage areas which multiple storage apparatuses comprise respectively as redundant storage areas based on RAID (Redundant Array of Independent (or Inexpensive) Disks). The storage control apparatus creates logical volumes by using grouped storage areas, and provides the same to a host computer (hereinafter referred to as the host).
  • The storage control apparatus, receiving a read request from the host, instructs a hard disk to read the data. The address of the data read from the hard disk is converted, stored in a cache memory, and transmitted to the host.
  • The hard disk, if unable to read data from the storage media due to the occurrence of a certain type of problem in the storage media, a magnetic head or others, retries the read after a period of time. If unable to read the data from the storage media in spite of performing the retry processing, the storage control apparatus performs correction copy, and generates the data required by the host. Correction copy is the method for restoring the data by reading the data and the parity from the other hard disks belonging to the same parity group as the hard disk in which the failure occurred (Patent Literature 1).
  • CITATION LIST Patent Literature
  • PTL 1: Japanese Unexamined Patent Application Publication No. 2007-213721
  • SUMMARY OF INVENTION Technical Problem
  • If the retry processing is performed in the hard disk, the time before the read request issued by the host is completed becomes longer. Therefore, the response performance of the storage control apparatus is deteriorated, and the quality of the services provided by the application programs on the host is deteriorated.
  • If an application program operating on the host does not care about the response time, no particular problem occurs. However, in the case of application programs which must process a large number of accesses from the client machines in a short time, such as a ticketing program, a reservation program, and a video distribution program, if the response time of the storage control apparatus becomes longer, the service quality is reduced.
  • Therefore, the purpose of this invention is to provide a storage control apparatus and the control method of the storage control apparatus which, even if the response time of the storage apparatus is long, can inhibit the response time from the storage control apparatus to the higher-level device from becoming longer. The further purposes of this invention are disclosed by the description of the embodiments described later.
  • Solution to Problem
  • For solving the above-mentioned problem, the storage control apparatus complying with the Aspect 1 of this invention is a storage control apparatus which inputs/outputs data in accordance with a request from a higher-level device and comprises multiple storage apparatuses for storing data and a controller connected to the higher-level device and each storage apparatus and which makes a specified storage apparatus of the respective storage apparatuses input/output the data in accordance with the request from the higher-level device, wherein the controller, if receiving an access request from the higher-level device, sets the timeout time to a second value which is shorter than a first value in a certain case, requires the read of specified data corresponding to the access request to the specified storage apparatus of the respective storage apparatuses and, if the data cannot be acquired from the specified storage apparatus within the set timeout time, detects that a timeout error occurred and, if the timeout error is detected, makes a second management unit which is different from a first management unit for managing failures which occur in the respective storage apparatuses manage the occurrence of the timeout error and, furthermore, requires the read of other data corresponding to the specified data to another storage apparatus related to the specified storage apparatus, generates the specified data in accordance with the other data acquired from another storage apparatus, and transfers the generated specified data to the higher-level device.
  • At the Aspect 2, the controller at the Aspect 1 comprises a first communication control unit for communicating with the higher-level device, a second communication control unit for communicating with the respective storage apparatuses, and a memory used by the first communication control unit and the second communication control unit, wherein the memory stores timeout time setting information for determining whether to set the timeout time to the first value or to the second value, wherein the timeout time setting information includes the number of queues whose targets are the respective storage apparatuses, a threshold for First In First Out in cases where the First In First Out mode is set as the queuing mode, and a threshold for sorting which is smaller than the threshold for First In First Out in cases where the queuing mode is set to the sorting mode in which sorting is performed in ascending order of distance of logical addresses, wherein, if the first communication control unit receives an access request from the higher-level device, the second communication control unit, in accordance with the timeout time setting information, if the number of queues whose target is the specified storage apparatus is equal to or larger than either the threshold for First In First Out or the threshold for sorting corresponding to the queuing mode set for the specified storage apparatus, selects the first value as the timeout time for reading the specified data from the specified storage apparatus and, if the number of queues whose target is the specified storage apparatus is under either the threshold for First In First Out or the threshold for sorting corresponding to the queuing mode set for the specified storage apparatus, selects the second value which is smaller than the first value as the timeout time for reading the specified data from the specified storage apparatus, wherein the second communication control unit requires the read of the specified data to the 
specified storage apparatus, wherein the second communication control unit, if unable to acquire the specified data from the specified storage apparatus within the set timeout time, detects the occurrence of a timeout error, wherein the second communication control unit, if the timeout error is detected, makes a second management unit which is different from a first management unit for managing failures which occur in the respective storage apparatuses manage the occurrence of the timeout error, wherein the value of a threshold for restoration for starting a specified restoration step related to the storage apparatus in which the failure occurred is set larger for the second management unit than for the first management unit, wherein the second communication control unit sets another timeout time for which the first value is selected, requires the read of other data corresponding to the specified data to the other storage apparatuses related to the specified storage apparatus, generates the specified data in accordance with the other data acquired from the other storage apparatuses, and transfers the generated specified data to the higher-level device, and wherein the second communication control unit, if unable to acquire the other data from the other storage apparatuses within another timeout time and if the second value is set as the timeout time, changes the timeout time to the first value, and requires the read of the specified data to the specified storage apparatus again.
  • At the Aspect 3, the first management unit at the Aspect 1 manages the number of failures which occurred in the respective storage apparatuses and a threshold for restoration for starting a specified restoration step related to the storage apparatuses in which the failures occurred by making the same correspond to each other, the second management unit manages the number of timeout errors which occurred in the respective storage apparatuses and another threshold for restoration for starting the specified restoration step related to the storage apparatuses in which the timeout errors occurred by making the same correspond to each other, and the other threshold for restoration managed by the second management unit is set larger than the threshold for restoration managed by the first management unit.
  • At the Aspect 4, the controller at the Aspect 1, if the guarantee mode for guaranteeing the response within the specified time is set in the specified storage apparatus, sets the timeout time for reading the specified data from the specified storage apparatus to the second value.
  • At the Aspect 5, the controller at the Aspect 1, if the queuing mode related to the specified storage apparatus is set to the First In First Out mode, sets the timeout time for reading the specified data from the specified storage apparatus to the second value.
  • At the Aspect 6, the controller at the Aspect 1, if the specified storage apparatus is a storage apparatus other than the previously specified low-speed storage apparatus, sets the timeout time for reading the specified data from the specified storage apparatus to the second value.
  • At the Aspect 7, the controller at the Aspect 1, if the number of queues whose target is the specified storage apparatus is smaller than the specified threshold, sets the timeout time for reading the specified data from the specified storage apparatus to the second value.
  • At the Aspect 8, the controller at the Aspect 1 comprises timeout time setting information for determining whether to set the timeout time to the first value or to the second value, which includes the number of queues whose targets are the respective storage apparatuses, the threshold for First In First Out in cases where the First In First Out mode is set as the queuing mode, and the threshold for sorting which is smaller than the threshold for First In First Out in cases where the queuing mode is set to the sorting mode in which sorting is performed in ascending order of distance of logical addresses, and further, the controller, if the number of queues whose target is the specified storage apparatus is equal to or larger than either the threshold for First In First Out or the threshold for sorting corresponding to the queuing mode set for the specified storage apparatus, selects the first value as the timeout time for reading the specified data from the specified storage apparatus and, if the number of queues whose target is the specified storage apparatus is under either the threshold for First In First Out or the threshold for sorting corresponding to the queuing mode set for the specified storage apparatus, selects the second value which is smaller than the first value as the timeout time for reading the specified data from the specified storage apparatus.
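The Aspect 8 selection rule (a larger queue-depth threshold when the queuing mode is First In First Out, a smaller one in the sorting mode, with the shortened value chosen only below the applicable threshold) can be expressed compactly. The concrete numbers are illustrative assumptions; the claim only requires that the sorting threshold be smaller than the FIFO threshold.

```python
# Sketch of the Aspect 8 timeout selection. Values are illustrative.
TOV1, TOV2 = 5.0, 0.9     # first (normal) and second (shortened) values
FIFO_THRESHOLD = 20       # threshold for First In First Out mode
SORT_THRESHOLD = 10       # smaller threshold for the sorting mode

def select_timeout(queue_depth, queuing_mode):
    threshold = FIFO_THRESHOLD if queuing_mode == "FIFO" else SORT_THRESHOLD
    # At or above the threshold the first value is selected; under it,
    # the second (shorter) value.
    return TOV1 if queue_depth >= threshold else TOV2
```

The smaller sorting threshold reflects that a sorted queue services nearby addresses first, so even a moderately deep queue can delay an individual command long enough that the shortened timeout would misfire.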
  • At the Aspect 9, the controller at the Aspect 1, if a timeout error is detected, sets another timeout time for which the first value is selected, and requires the read of other data corresponding to the specified data to the other storage apparatuses related to the specified storage apparatus.
  • At the Aspect 10, the controller at the Aspect 1, if a timeout error is detected, sets another timeout time for which the second value is selected, and requires the read of other data corresponding to the specified data to the other storage apparatuses related to the specified storage apparatus.
  • At the Aspect 11, the controller at the Aspect 10, if unable to acquire the other data from the other storage apparatuses within another timeout time, changes the timeout time to the first value, and requires the read of the specified data to the specified storage apparatus again.
  • At the Aspect 12, the controller at the Aspect 10, if unable to acquire the other data from the other storage apparatuses within another timeout time, notifies the user.
  • This invention can also be comprehended as a control method of a storage control apparatus. Furthermore, at least a part of the configuration of this invention can be configured as a computer program. This computer program can be distributed fixed in storage media or via a communication network. Furthermore, other combinations than the combinations of the above-mentioned aspects are also included in the scope of this invention.
  • BRIEF DESCRIPTION OF DRAWINGS
  • FIG. 1 is an explanatory diagram showing the overall concept of the embodiment of this invention.
  • FIG. 2 is an explanatory diagram showing the overall configuration of the system including the storage control apparatus.
  • FIG. 3 is a block diagram of the storage control apparatus.
  • FIG. 4 is an explanatory diagram showing the mapping status of slots and storage apparatuses.
  • FIG. 5 is an explanatory diagram showing the differences between the queuing modes.
  • FIG. 6 is a table for managing the relationship between the storage apparatuses and virtual devices (RAID groups).
  • FIG. 7 is a table for managing virtual devices.
  • FIG. 8 is a table for managing the modes which can be set from the management terminal.
  • FIG. 9 is a table for managing jobs.
  • FIG. 10 is a flowchart showing the read processing.
  • FIG. 11 is a flowchart showing the staging processing.
  • FIG. 12 is a flowchart showing the correction read processing.
  • FIG. 13 is a flowchart showing the error count processing.
  • FIG. 14 shows a table for managing the error count.
  • FIG. 15 is an explanatory diagram showing the method for setting the timeout time shorter than the normal value.
  • FIG. 16 is a table for managing the thresholds for setting the timeout time with regard to the Embodiment 2.
  • FIG. 17 is a flowchart showing the correction read processing with regard to the Embodiment 3.
  • FIG. 18 is a table for managing the status of the staging processing with regard to the Embodiment 4.
  • FIG. 19 is a flowchart showing the staging processing.
  • FIG. 20 is a flowchart continued from FIG. 19.
  • FIG. 21 is a flowchart of the correction read processing.
  • FIG. 22 is a flowchart showing the staging processing with regard to the Embodiment 5.
  • FIG. 23 is a table for managing the response time of the respective storage apparatuses.
  • FIG. 24 is a diagram of the overall configuration of a system with regard to the Embodiment 6.
  • FIG. 25 is a flowchart of the staging processing.
  • FIG. 26 is a flowchart continued from FIG. 25.
  • DESCRIPTION OF EMBODIMENTS
  • Hereinafter, with reference to the figures, the embodiments of this invention are described. Firstly, the overview of this invention is described with reference to FIG. 1, and then the embodiments are described with reference to FIG. 2 and the subsequent figures. FIG. 1 is stated to the extent required for the understanding and practice of this invention. The scope of this invention is not limited to the configuration stated in FIG. 1. The characteristics which are not stated in FIG. 1 are disclosed in the embodiments described later.
  • FIG. 1 shows the overview of the overall [invention]. The configuration of the computer system is stated on the left side of FIG. 1 and the overview of the processing is stated on the right respectively. The computer system comprises a storage control apparatus 1 and a host 2 as a higher-level device. The storage control apparatus 1 comprises a controller 3 and a storage apparatus 4. The controller 3 comprises a channel adapter 5 as the first communication control unit, a memory 6, and a disk adapter 7 as the second communication control unit. In the description below, the channel adapter is abbreviated to the CHA, and the disk adapter is abbreviated to the DKA. The range surrounded by a dashed line in FIG. 1 indicates the contents of the processing by the DKA 7.
  • As the storage apparatus 4, various types of devices capable of reading and writing data are available, for example, a hard disk device, a semiconductor memory device, an optical disk device, a magnetic-optical disk device, a magnetic tape device, a flexible disk device, and others.
  • If a hard disk device is to be used as a storage apparatus, for example, an FC (Fibre Channel) disk, an SCSI (Small Computer System Interface) disk, an SATA disk, an ATA (AT Attachment) disk, an SAS (Serial Attached SCSI) disk, and others can be used. If a semiconductor memory device is to be used as a storage apparatus, various types of memory devices are available, for example, a flash memory, an FeRAM (Ferroelectric Random Access Memory), an MRAM (Magnetoresistive Random Access Memory), a phase-change memory (Ovonic Unified Memory), an RRAM (Resistance RAM), a PRAM (Phase change RAM), and others.
  • An application program operating on the host 2 issues an access request (referred to as an "IO" in the figure) to the storage control apparatus 1. The access request is either a read request or a write request. The read request requires data read from the storage apparatus 4. The write request requires data write to the storage apparatus 4. If the storage control apparatus 1 processes the write request, the existing data is frequently read at first. That is, for processing the write request, data read is performed in the storage control apparatus 1.
  • The CHA 5, receiving an access request (e.g. a read request) from the host 2, generates a job for acquiring the required data (S1).
  • The DKA 7, detecting the job created by the CHA 5, issues a read request to the specified storage apparatus 4 storing the data required by the host 2 (S2). The storage apparatus 4, accepting the read request, tries to read the data from the storage media (S3).
  • The DKA 7 sets the upper limit time (timeout time) required for acquiring the data from the storage apparatus 4 (S4). Hereinafter, the timeout time is occasionally abbreviated to a TOV (Time Out Value).
  • Multiple TOVs are prepared in advance, which are a TOV 1 as the first value and a TOV 2 as a second value. The TOV 1 is a normally set value. The TOV 2 is a value which is set if the response performance is prioritized, and the value is set shorter than the TOV 1. Therefore, it is possible to also refer to the TOV 1 as a normal value and the TOV 2 as a shortened value.
  • In one example, the TOV 1 is set to approximately 4 to 6 seconds. The TOV 2 is set to around 1 second, for example, approximately 0.9 second. The TOV 2 is set to ensure that the total value of the time required for the correction read processing and the TOV 2 falls within a specified time, for example, approximately 2 seconds.
  • The DKA 7, in accordance with the previously set condition, sets the timeout time to either the TOV 1 or the TOV 2. Though the details are described later, for example, if the mode which guarantees the response time of the storage control apparatus 1 is set, the TOV 2 is selected. If the queuing mode (queue processing method) related to the storage apparatus 4 as the read target is set to the first-in first-out (FIFO: First In First Out) mode, the TOV 2 is selected. If the storage apparatus 4 as the read target is other than a low-speed storage apparatus, the TOV 2 is selected. Furthermore, with reference to the operating status (load status) of the storage apparatus 4 as the read target, either the TOV 1 or the TOV 2 can be selected.
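The conditions above can be summarized as a small predicate cascade: any one of the listed conditions is sufficient to select the shortened value TOV 2, and the normal value TOV 1 is the fallback. The parameter names and concrete values below are illustrative assumptions, not the patent's implementation.

```python
# Hedged sketch of the DKA's TOV1/TOV2 selection described in the text.
TOV1, TOV2 = 5.0, 0.9   # normal value and shortened value (seconds, assumed)

def choose_tov(guarantee_mode, queuing_mode, is_low_speed, queue_depth, threshold):
    if guarantee_mode:                  # response-time guarantee mode is set
        return TOV2
    if queuing_mode == "FIFO":          # first-in first-out queuing mode
        return TOV2
    if not is_low_speed and queue_depth < threshold:
        return TOV2                     # fast drive under light load
    return TOV1                         # otherwise, the normal value
```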
  • If there is a response from the storage apparatus 4 within the set timeout time, the data read from the storage apparatus 4 is transmitted via the CHA 5 to the host 2. Meanwhile, if a certain type of error occurs inside the storage apparatus 4 and if the response cannot be transmitted within the timeout time, the DKA 7 determines the occurrence of a timeout error (S5).
  • The DKA 7 makes the management unit for managing timeout errors (the second management unit) store the occurrence of the timeout error (timeout failure). An ordinary failure reported from the storage apparatus 4 is stored in the management unit for managing ordinary failures in the storage apparatus (the first management unit).
  • The DKA 7, detecting the timeout error, resets the read request issued at S3 (S7). The DKA 7 starts the correction read processing (S8). The correction read processing is the processing of reading other data (and a parity) belonging to the same stripe string as the first read target data from the other respective storage apparatuses 4 belonging to the same parity group as the storage apparatus 4 in which the timeout error is detected, and of generating the first read target data by a logical operation. The correction read processing is also referred to as the correction copy processing.
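The logical operation underlying the correction read is, for a RAID 5 style parity group, a bytewise XOR over the surviving blocks of the stripe. The sketch below illustrates that property only; it is not the patent's implementation, and real controllers perform this over fixed-size stripe units in hardware or firmware.

```python
# Rebuilding a lost stripe block by XOR: data ^ data ^ parity recovers the
# block held by the drive that timed out.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# A stripe of three data blocks plus their parity:
d0, d1, d2 = b"\x0f\x0f", b"\xf0\xf0", b"\x55\xaa"
parity = xor_blocks([d0, d1, d2])

# The drive holding d1 times out; rebuild d1 from the survivors and parity.
rebuilt = xor_blocks([d0, d2, parity])
assert rebuilt == d1
```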
  • The DKA 7 transfers the restored data to the cache memory (S9). Though not shown in the figure, the CHA 5 transmits the data transferred to the cache memory to the host 2. By this step, the processing of the read request (read command) received from the host 2 is completed.
  • In this embodiment which is configured as described above, the DKA 7, if satisfying a specified condition, sets a short timeout time TOV 2 for the read request transmitted to the storage apparatus 4 and, if a timeout error occurs, resets the read request and performs the correction read processing.
  • Therefore, even if the response performance of the storage apparatus 4 as the read target is deteriorated due to a high load or other reasons, the correction read processing is performed after the TOV 2 elapses, and therefore the response performance of the storage control apparatus 1 can be prevented from deterioration. The response time of the storage control apparatus 1 becomes the value ascertained by adding the time required for the correction read processing to the TOV 2, and it is possible to transmit the data to the host 2 within the specified response time.
  • In this embodiment, for example, if the response time guarantee mode is set, if the queuing mode is FIFO, if [the specified storage apparatus is] not a low-speed storage apparatus, or if the storage apparatus is not highly loaded, the timeout time for reading data from the storage apparatus 4 is set to the TOV 2 which is a shorter value than usual. Therefore, in this embodiment, in accordance with the circumstances, the response performance of the storage control apparatus 1 can be prevented from deterioration.
  • In this embodiment, timeout errors are managed in a management unit which is different from the management unit for managing ordinary failures in the storage apparatus. Therefore, in this embodiment, the start of the restoration step related to the storage apparatus 4 in which the failure occurred (e.g. the processing of copying the data in the storage apparatus 4 to a spare storage apparatus or the processing of restoring the data in the storage apparatus 4 by the correction copy processing) can be controlled separately for timeout errors and for ordinary failures.
  • That is, in this embodiment, for preventing the response performance of the storage control apparatus 1 from deterioration, under the specified condition, the timeout time for reading the data from the storage apparatus 4 is set to the TOV 2 which is shorter than the conventional value TOV 1. Therefore, depending on the status of the storage apparatus 4, a relatively large number of timeout errors might occur. If timeout errors and ordinary failures are collectively managed, the possibility of the total number of the failure counts exceeding the threshold becomes higher, and the number of times of performing the restoration step increases. If the restoration step is performed frequently, the load on the storage control apparatus 1 increases, and the response performance of the storage control apparatus 1 might be negatively affected. Therefore, in this embodiment, timeout errors and ordinary failures in the storage apparatus are managed separately.
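The separate bookkeeping just described can be sketched as two counters with different restoration thresholds, the timeout counter using the larger one so that the shortened timeout does not trigger the restoration step too eagerly. The threshold values are illustrative assumptions.

```python
# Sketch of the two failure-management units: ordinary drive failures and
# timeout errors are counted separately, with a larger threshold for timeouts.
from collections import Counter

ORDINARY_THRESHOLD = 3    # threshold for restoration (first management unit)
TIMEOUT_THRESHOLD = 10    # larger threshold (second management unit)

ordinary_errors = Counter()
timeout_errors = Counter()

def record_error(hdd, is_timeout):
    """Count the error; return True if the restoration step should start."""
    if is_timeout:
        timeout_errors[hdd] += 1
        return timeout_errors[hdd] >= TIMEOUT_THRESHOLD
    ordinary_errors[hdd] += 1
    return ordinary_errors[hdd] >= ORDINARY_THRESHOLD
```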
  • Embodiment 1
  • FIG. 2 shows the overall configuration of the system including the storage control apparatus 10 with regard to this embodiment. This system can be configured, for example, by including at least one storage control apparatus 10, one or more hosts 20, and at least one management terminal 30.
  • The correspondence relationship to the embodiment described above in FIG. 1 is described. The storage control apparatus 10 corresponds to the storage control apparatus 1 in FIG. 1, the storage apparatus 210 corresponds to the storage apparatus 4 in FIG. 1, the host 20 corresponds to the host 2 in FIG. 1, the controller 100 corresponds to the controller 3 in FIG. 1, the channel adapter 110 corresponds to the CHA 5 in FIG. 1, the disk adapter 120 corresponds to the DKA 7 in FIG. 1, and the cache memory 130 and the shared memory 140 correspond to the memory 6 in FIG. 1 respectively.
  • The host 20 and the management terminal 30 are described at first, and then the storage control apparatus 10 is described. The host 20, for example, is configured as a mainframe computer or a server computer. The host 20 is connected to the storage control apparatus 10 via a communication network CN1. The communication network CN1 can be configured as a communication network, for example, such as an FC-SAN (Fibre Channel-Storage Area Network) or an IP-SAN (Internet Protocol-SAN).
  • The management terminal 30 is connected to a service processor 160 in the storage control apparatus 10 via a communication network CN3. The service processor 160 is connected to the CHA 110 and others via an internal network CN4. The communication networks CN3 and CN4 are configured, for example, as a communication network such as LAN (Local Area Network). The management terminal 30, via the service processor (hereinafter referred to as the SVP) 160, collects various types of information in the storage control apparatus 10. Furthermore, the management terminal 30, via the SVP 160, can instruct various types of setting in the storage control apparatus 10.
  • The configuration of the storage control apparatus 10 is described below. The storage control apparatus 10 can be roughly classified into the controller 100 and the storage apparatus installed unit 200. The controller 100 is configured, for example, by comprising at least one or more CHAs 110, at least one or more DKAs 120, at least one or more cache memories 130, at least one or more shared memories 140, a connection unit (“SW” in the figure) 150, and the SVP 160. Note that the configuration in which multiple controllers 100 are connected to each other via switches may also be permitted. For example, a cluster can be configured of multiple controllers 100.
  • The CHA 110 is for controlling data communication with the host 20 and is configured, for example, as a computer apparatus comprising a microprocessor, a local memory, and others. Each CHA 110 comprises at least one or more communication ports.
  • The DKA 120 is for controlling data communication with the respective storage apparatuses 210 and is configured, as the CHA 110, as a computer apparatus comprising a microprocessor, a local memory, and others.
  • The respective DKAs 120 and the respective storage apparatuses 210 are connected, for example, via a communication path CN2 complying with the fibre channel protocol. The respective DKAs 120 and the respective storage apparatuses 210 perform data transfer in units of blocks.
  • The path through which the controller 100 accesses the respective storage apparatuses 210 is made redundant. Even if a failure occurs in one of DKAs 120 or one of the communication paths CN2, the controller 100 can access the storage apparatus 210 by using the other DKA 120 or the other communication path CN2. Similarly, the path between the host 20 and the controller 100 can also be made redundant. The configuration of the CHA 110 and the DKA 120 is described later in FIG. 3.
  • The operation of the CHA 110 and the DKA 120 is briefly described. The CHA 110, receiving a read command issued by the host 20, stores this read command in the shared memory 140. The DKA 120 refers to the shared memory 140 as needed and, if discovering an unprocessed read command, reads the data from the storage apparatus 210 and stores the same in the cache memory 130. The CHA 110 reads the data transferred to the cache memory 130, and transmits the same to the host 20. The processing in which the DKA 120 transfers the data read from the storage apparatus 210 to the cache memory 130 is referred to as the staging processing. The details of the staging processing are described later.
  • Meanwhile, the CHA 110, receiving a write command issued by the host 20, stores the write command in the shared memory 140. Furthermore, the CHA 110 stores the received write data in the cache memory 130. The CHA 110, after storing the write data in the cache memory 130, reports the write completion to the host 20. The DKA 120, complying with the write command stored in the shared memory 140, reads the data stored in the cache memory 130, and stores the same in the specified storage apparatus 210.
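The read-side exchange among the CHA, shared memory, DKA, and cache memory described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the class names, the command tuple format, and the polling loop are all assumptions.

```python
class SharedMemory:
    """Holds commands posted by the CHA for the DKA to pick up."""
    def __init__(self):
        self.commands = []

class CacheMemory:
    def __init__(self):
        self.data = {}  # slot number -> staged data

def cha_receive_read(shared, slot):
    # The CHA stores the received read command in the shared memory
    shared.commands.append(("READ", slot))

def dka_poll(shared, cache, disk):
    # The DKA refers to the shared memory and, on finding an unprocessed
    # read command, stages the data from the storage apparatus into the
    # cache memory (the staging processing)
    while shared.commands:
        op, slot = shared.commands.pop(0)
        if op == "READ":
            cache.data[slot] = disk[slot]

def cha_respond(cache, slot):
    # The CHA reads the staged data from the cache and returns it to the host
    return cache.data[slot]

disk = {7: b"user-data"}
sm, cm = SharedMemory(), CacheMemory()
cha_receive_read(sm, 7)
dka_poll(sm, cm, disk)
print(cha_respond(cm, 7))  # b'user-data'
```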
  • The cache memory 130 is, for example, for storing user data and other data received from the host 20. The cache memory 130 is configured of, for example, a volatile memory or a non-volatile memory. The shared memory 140 is configured of, for example, a non-volatile memory. In the shared memory 140, various types of tables T's described later, management information, and others are stored.
  • The shared memory 140 and the cache memory 130 can be set together on the same memory substrate. Otherwise, it is also possible to use a part of the memory as a cache area and use another part as a control area.
  • The connection unit 150 connects the respective CHAs 110, the respective DKAs 120, the cache memory 130, and the shared memory 140 respectively. By this method, all the CHAs 110 and the DKAs 120 can access the cache memory 130 and the shared memory 140 respectively. The connection unit 150 can be configured, for example, as a crossbar switch and others.
  • The SVP 160 is, via the internal network CN4, connected to the respective CHAs 110 and the respective DKAs 120 respectively. Meanwhile, the SVP 160 is connected to the management terminal 30 via the communication network CN3. The SVP 160 collects the respective statuses inside the storage control apparatus 10 and provides the same to the management terminal 30. Note that the SVP 160 may also be only connected to either the CHAs 110 or the DKAs 120. This is because the SVP 160 can collect the respective types of status information via the shared memory 140.
  • The configuration of the controller 100 is not limited to the above-mentioned configuration. For example, the configuration in which, on one or multiple control substrates, the function of performing data communication with the host 20, the function of performing data communication with the storage apparatuses 210, the function of temporarily storing the data, and the function of storing the respective tables as rewritable are respectively set may also be permitted.
  • The configuration of the storage apparatus installed unit 200 is described. The storage apparatus installed unit 200 comprises multiple storage apparatuses 210. The respective storage apparatuses 210 are configured, for example, as hard disk devices. Not limited to the hard disk devices, in some cases, flash memory devices, magnetic-optical storage apparatuses, holographic memory devices, and others can be used.
  • A parity group 220 is configured of a specified number of storage apparatuses 210, the number of which differs depending on the RAID configuration and others, for example, a pair or a group of four storage apparatuses. The parity group 220 is the virtualization of the physical storage areas which the respective storage apparatuses 210 in the parity group 220 comprise respectively.
  • Therefore, the parity group 220 is a virtualized physical storage area. This virtualized physical storage area is also referred to as a VDEV in this embodiment. In the virtualized physical storage area, one or multiple logical storage apparatuses (LDEVs) 230 can be set. The logical storage apparatuses 230 are made to correspond to LUNs (Logical Unit Numbers), and are provided to the host 20. The logical storage apparatuses 230 are also referred to as logical volumes.
  • FIG. 3 is a block diagram showing the configuration of the CHA 110 and the DKA 120. The CHA 110, for example, comprises a protocol chip 111, a DMA circuit 112, and a microprocessor 113. The protocol chip 111 is a circuit for performing the communication with the host 20. The microprocessor 113 controls the overall operation of the CHA 110. The DMA circuit 112 is a circuit for performing the data transfer between the protocol chip 111 and the cache memory 130 in the DMA (Direct Memory Access) method.
  • The DKA 120, as the CHA 110, for example, comprises a protocol chip 121, a DMA circuit 122, and a microprocessor 123. Furthermore, the DKA 120 also comprises a parity generation circuit 124.
  • The protocol chip 121 is a circuit for communicating with the respective storage apparatuses 210. The microprocessor 123 controls the overall operation of the DKA 120. The parity generation circuit 124 is a circuit for generating parity data by performing a specified logical operation in accordance with the data stored in the cache memory 130. The DMA circuit 122 is a circuit for performing the data transfer between the storage apparatuses 210 and the cache memory 130 in the DMA method.
  • FIG. 4 is an explanatory diagram showing the frame format of the mapping status between the slots 300 and the storage apparatuses 210. FIG. 4(a) shows the case of the RAID5, and FIG. 4(b) shows the case of the RAID1.
  • FIG. 4(a) shows the case where the 3D+1P RAID5 is configured of three data disks (#0, #1, #2) and one parity disk (#3). Slots #0 to #7 are allocated in the data disk (#0), slots #8 to #15 are allocated in the data disk (#1), slots #16 to #23 are allocated in the data disk (#2), and parities #0 to #7 are allocated in the parity disk (#3) on the right side respectively. That is, in each data disk, eight consecutive slots are allocated respectively.
  • The extent over which eight parities (#0 to #7) are stored on one disk is referred to as a parity cycle. In the parity cycle next to the parity cycle shown in the figure, the parity is stored in the disk (#2) to the left of the disk (#3). In the next parity cycle after that, the parity is stored in the disk (#1). As described above, the disk storing the parity data shifts in each parity cycle. As shown by FIG. 4(a), the number of slots included in one parity cycle can be ascertained by multiplying the number of data disks by 8.
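The allocation of FIG. 4(a) can be expressed as a small mapping function. The sketch below assumes exactly the stated layout (eight consecutive slots per data disk, parity rotating one disk to the left per cycle); the function names are illustrative.

```python
SLOTS_PER_DISK = 8
NUM_DISKS = 4                                # 3D+1P: 3 data + 1 parity
DATA_DISKS = NUM_DISKS - 1
CYCLE_SLOTS = DATA_DISKS * SLOTS_PER_DISK    # data disks x 8 = 24 slots/cycle

def parity_disk(cycle):
    # Parity cycle 0 stores parity on disk #3, cycle 1 on disk #2, ...
    return (NUM_DISKS - 1 - cycle) % NUM_DISKS

def slot_to_disk(slot):
    # Which disk holds a given data slot, given the rotating parity disk
    cycle, offset = divmod(slot, CYCLE_SLOTS)
    pd = parity_disk(cycle)
    data_disks = [d for d in range(NUM_DISKS) if d != pd]
    return data_disks[offset // SLOTS_PER_DISK]

print(slot_to_disk(0))    # 0  (slots #0-#7 on data disk #0)
print(slot_to_disk(16))   # 2  (slots #16-#23 on data disk #2)
print(parity_disk(1))     # 2  (next cycle's parity moves one disk left)
```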
  • FIG. 5 shows the frame format of the queue processing method. In FIG. 5(a), seven queues from number 1 to 7 are shown. The horizontal axis in FIG. 5(a) shows the logical address on the storage area in the storage apparatus 210. The queue number shows the order of accepting commands. The distance between queues corresponds to the distance on the logical address.
  • FIG. 5(b) shows the queue processing method (mode). As the queuing modes, for example, the FIFO mode and the sorting mode are known. In the FIFO mode, the first received queue is processed first. Therefore, the queues are processed in order from the first queue to the seventh queue. Meanwhile, in the sorting mode, queues are sorted for reducing as much rotation latency and seek latency as possible. In the example shown in the figure, the processing is performed in order of the first queue, the sixth queue, the third queue, the fifth queue, the fourth queue, and the second queue. Though the second queue is generated early, the processing of the same is postponed. If the seventh queue is received before the processing of the fourth queue is completed, the seventh queue is processed immediately after the fourth queue, and the second queue is processed last.
  • If, as shown in FIG. 5, a specific small area is accessed intensively and a command which accesses a distant position is occasionally accepted, the processing of that distant command is overtaken by the commands which are accepted later. It is possible that the distant command might not be processed for a long time (e.g. approximately one second). As described above, in the sorting mode, though the average response time becomes shorter than in the FIFO mode, the maximum value of the response time becomes larger.
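The two modes can be contrasted with a small simulation. The sorting policy here, nearest logical address first, is an assumption (the patent does not specify the sort algorithm), and the command addresses are chosen so that the distant command 2 is postponed to the end, as in the figure.

```python
def fifo_order(queue):
    # FIFO mode: commands are processed in order of arrival
    return [cmd for cmd, _addr in queue]

def sorted_order(queue, start=0):
    # Sorting mode (assumed policy): always pick the pending command
    # whose logical address is nearest to the current head position
    pending, order, pos = list(queue), [], start
    while pending:
        nxt = min(pending, key=lambda c: abs(c[1] - pos))
        pending.remove(nxt)
        order.append(nxt[0])
        pos = nxt[1]
    return order

# (command number, logical address); command 2 sits far from the rest
queue = [(1, 10), (2, 90), (3, 14), (4, 18), (5, 16), (6, 12)]
print(fifo_order(queue))    # [1, 2, 3, 4, 5, 6]
print(sorted_order(queue))  # [1, 6, 3, 5, 4, 2] -- command 2 last
```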
  • FIG. 6 shows a table T10 for managing the correspondence relationship between the device IDs and VDEVs. This management table T10 is stored in the shared memory 140. The CHA 110 and the DKA 120 can use at least a part of the table T10 by copying the same in the local memories of the CHA 110 and the DKA 120.
  • The device ID-VDEV correspondence relationship management table T10 manages the correspondence relationship between the logical volumes 230 and VDEVs 220 as virtual intermediate storage apparatuses. The management table T10, for example, manages a device ID field C11, a VDEV number field C12, a starting slot field C13, and a slot amount field C14 by making the same correspond to each other.
  • In the device ID field C11, the information for identifying the logical volumes 230 is stored. In the VDEV number field C12, the information for identifying the VDEVs 220 is stored. In the starting slot field C13, the slot number indicating in which slot in the VDEV 220 the logical volume 230 starts is stored. In the slot amount field C14, the number of slots configuring the logical volume 230 is stored.
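A lookup over table T10 amounts to adding the device's starting slot to the in-volume slot number. The sketch below assumes a simple record layout for the C11 to C14 fields; the sample entries are illustrative.

```python
# device ID (C11) -> (VDEV number C12, starting slot C13, slot amount C14)
T10 = {
    0: (1, 0, 1000),
    1: (1, 1000, 500),
}

def to_vdev_slot(device_id, slot_in_volume):
    # Translate a slot within a logical volume into a VDEV slot number
    vdev, start, amount = T10[device_id]
    if not 0 <= slot_in_volume < amount:
        raise ValueError("slot outside logical volume")
    return vdev, start + slot_in_volume

print(to_vdev_slot(1, 42))   # (1, 1042)
```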
  • FIG. 7 shows a table T20 for managing VDEVs 220. This management table T20 is stored in the shared memory 140. The CHA 110 and the DKA 120 can use at least a part of the management table T20 by copying the same in the local memories.
  • The VDEV management table T20, for example, comprises a VDEV number field C21, a slot size field C22, a RAID level field C23, a data drive amount field C24, a parity cycle slot amount field C25, a disk type field C26, a queuing mode field C27, and a response time guarantee mode field C28 by making the same correspond to each other.
  • In the VDEV number field C21, the information for identifying the respective VDEVs 220 is stored. In the slot size field C22, the number of slots made to correspond to VDEVs is stored. In the RAID level field C23, the information such as RAID1 to RAID6 indicating the RAID type is stored. In the data drive amount field C24, the number of storage apparatuses 210 storing the data is stored.
  • In the parity cycle slot amount field C25, the number of slots included in a parity cycle is stored. The number of slots indicates, when allocating slots in the storage apparatuses 210, with how many slots the allocation should shift to the next storage apparatus 210. In the disk type field C26, the type of the storage apparatuses 210 configuring the VDEV 220 is stored.
  • In the queuing mode field C27, the type of the queuing mode applied to the VDEV 220 is stored. “0,” in case of the FIFO mode, and “1,” for the sorting mode, are set in the queuing mode field C27. In the response time guarantee mode field C28, the setting value of the response time guarantee mode is stored. The response time guarantee mode is the mode which guarantees that the response time of the VDEV 220 falls within a specified length of time. The case where “1” is stored indicates that the response time guarantee mode is set.
  • FIG. 8 shows the mode setting table T30. The mode setting table T30 is set by the management terminal 30 via the SVP 160. The mode setting table T30, for the entire storage control apparatus 10, sets the queuing mode and the response time guarantee mode. The mode setting table T30 comprises an item field C31 and a setting value field C32. In the item field C31, the queuing mode and the response time guarantee mode are stored. In the setting value field C32, the value indicating whether to set each mode or not is stored.
  • Note that it is sufficient to set either the mode setting table T30 or the queuing mode field C27 and the response time guarantee mode field C28 in the VDEV management table T20, and the storage control apparatus 10 does not have to comprise both of the tables T20 and T30.
  • That is, the queuing mode is either set in units of VDEVs (C27) or is set for the entire storage control apparatus 10 (T30). The response time guarantee mode is also either set in units of VDEVs (C28) or is set for the entire storage control apparatus 10 (T30).
  • Note that the configuration in which the VDEV management table T20 and the mode setting table T30 coexist may also be permitted. For example, it is possible to apply the setting values of the mode setting table T30 to all the VDEVs 220, and then ensure the configuration in which the queuing mode or the response time guarantee mode can be set for each VDEV 220 separately.
  • FIG. 9 shows a table T40 for managing jobs. The job management table T40 is also referred to as a job control block (JCB). The job management table T40 manages the status of jobs generated by the kernel.
  • The job management table T40, for example, manages a JCB number field C41, a job status field C42, a WAIT expiration time field C43, a starting flag field C44, a failure occurrence flag field C45, and an inheritance information field C46 by making the same correspond to each other.
  • In the JCB number field C41, the number for identifying the JCB for controlling each job is stored. In the job status field C42, the status of the job managed by the JCB is stored.
  • The job statuses are, for example, “RUN,” “WAIT,” and “Unused.” “RUN” indicates that the job is running. If the DKA 120 receives a message from the CHA 110, the kernel of the DKA 120 generates a job, and assigns one unused JCB to the job. The DKA 120 changes the job status field C42 of the JCB assigned to the job from “Unused” to “RUN.” “WAIT” indicates the status in which the completion of the job processing is being waited for. “Unused” indicates that the JCB is not assigned to any job.
  • In the WAIT expiration time field C43, the value created by adding the processing latency (timeout time) to the current time is stored. The current time is acquired from the system timer. For example, if the current time is “0000” and “1000” is set as the timeout time, the WAIT expiration time becomes 1000 (=0000+1000).
  • In the starting flag field C44, the value of the flag for determining whether to restart the job or not is stored. If the data input/output of the storage apparatus 210 is normally terminated or abnormally terminated, the starting flag is set to "1" by the interruption processing.
  • In the failure occurrence flag field C45, the value of the flag indicating whether a failure occurred in the storage apparatus 210 or not is stored. If a failure occurred in the storage apparatus 210, “1” is set in the failure occurrence flag field C45.
  • In the inheritance information field C46, the information required for restarting the job is stored. That type of information is, for example, the VDEV number, the slot number, and others.
  • The status of the job created by the reception of the read message is changed from "RUN" to "WAIT" when the data read from the storage apparatus 210 is started. The kernel regularly monitors whether, among the jobs in the "WAIT" status, any job exists whose starting flag is set to "1" or whose WAIT expiration time has passed the current time.
  • If discovering a job whose starting flag is set to "1" or a job whose WAIT expiration time has passed, the kernel of the DKA 120 restarts the job. The status of the restarted job is changed from "WAIT" to "RUN." The restarted job continues the processing by referring to the inheritance information. When the job is completed, the status is changed from "RUN" to "Unused."
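The JCB life cycle of FIG. 9 (Unused to RUN to WAIT, then back to RUN and finally Unused) can be sketched as follows. The structure and function names are illustrative; only the fields and transitions come from the table description.

```python
import dataclasses

@dataclasses.dataclass
class JCB:
    status: str = "Unused"       # C42: job status
    wait_expiration: int = 0     # C43: current time + timeout time
    starting_flag: int = 0       # C44: set to 1 on I/O completion
    inheritance: dict = dataclasses.field(default_factory=dict)  # C46

def start_read(jcb, now, timeout, info):
    # Starting the data read puts the job into the WAIT status
    jcb.status = "WAIT"
    jcb.wait_expiration = now + timeout
    jcb.inheritance = info

def monitor(jcbs, now):
    # The kernel's regular scan: restart a waiting job when its starting
    # flag is set or its WAIT expiration time has passed the current time
    restarted = []
    for jcb in jcbs:
        if jcb.status == "WAIT" and (jcb.starting_flag == 1
                                     or now >= jcb.wait_expiration):
            jcb.status = "RUN"
            restarted.append(jcb)
    return restarted

jcb = JCB(status="RUN")
start_read(jcb, now=0, timeout=1000, info={"vdev": 1, "slot": 7})
print([j.status for j in monitor([jcb], now=1000)])  # ['RUN'] (timed out)
```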
  • With reference to the flowcharts from FIG. 10 to FIG. 13, the operation of the storage control apparatus 10 is described. Each flowchart shows the overview of each processing and might differ from the actual computer programs. A person with ordinary skill in the art may be able to alter or delete part of the steps shown in the figures or add new steps to the same.
  • FIG. 10 is a flowchart showing the read processing performed by the CHA 110. The CHA 110 realizes the functions shown in FIG. 10 by the microprocessor reading a specified computer program stored in the CHA 110 and performing the same.
  • The CHA 110, receiving a read command from the host 20 (S10), converts the logical address specified by the read command into a combination of a VDEV number and a slot number (S11).
  • The CHA 110 determines whether there is a cache hit or not (S12). If a cache area corresponding to the read target slot number is already secured and, at the same time, if the staging bit within the range of the read target logical block is set to on, a cache hit is determined.
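The hit test of S12 requires both conditions together: a secured cache area for the slot and the staging bit on for every block in the read range. A sketch, assuming the staging bits are kept as a per-slot list of booleans (the actual bitmap layout is not given):

```python
def is_cache_hit(cache_areas, slot, start_block, num_blocks):
    # A hit needs (1) a cache area already secured for the slot and
    # (2) the staging bit on for every logical block in the read range
    staging_bits = cache_areas.get(slot)      # None if no area secured
    if staging_bits is None:
        return False
    blocks = range(start_block, start_block + num_blocks)
    return all(staging_bits[b] for b in blocks)

areas = {5: [True] * 4 + [False] * 4}   # slot 5: blocks 0-3 staged
print(is_cache_hit(areas, 5, 0, 4))     # True
print(is_cache_hit(areas, 5, 2, 4))     # False (blocks 4-5 not staged)
print(is_cache_hit(areas, 9, 0, 1))     # False (no area secured)
```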
  • If no cache hit is determined (S12: NO), the CHA 110 transmits a read message to the DKA 120 (S13). In the read message, a VDEV number, a slot number, a starting block number in the slot, and a number of target blocks are included.
  • The CHA 110, after transmitting the read message to the DKA 120, waits for the completion of the data read processing (staging processing) by the DKA 120 (S14). The CHA 110, receiving the completion report from the DKA 120 (S15), determines whether the data read from the storage apparatus is normally terminated or not (S16).
  • If the data read from the storage apparatus is normally terminated (S16: YES), the CHA 110 transmits the data stored in the cache memory 130 to the host 20 (S17), and completes this processing. If the data read from the storage apparatus fails (S16: NO), the CHA 110 notifies an error to the host 20 (S18), and completes this processing.
  • FIG. 11 is a flowchart of the staging processing. The staging processing is the processing of reading data from the storage apparatus and transferring the same to the cache memory, and is performed by the DKA 120.
  • The DKA 120, receiving the message from the CHA 110 (S20), secures an area for storing the data in the cache memory, and further converts the address specified by the message into a physical address (S21). That is, the DKA 120 converts the read destination address into a combination of a storage apparatus number, a logical address, and the number of logical blocks, and requests data read from the storage apparatus 210 (S22).
  • The DKA 120, when requesting data read from the storage apparatus 210, sets a timeout time (referred to as a TOV in the figure), and shifts to the waiting status (S23). The DKA 120 sets either the normal value TOV1, which is a relatively long time, or the shortened value TOV2, which is a relatively short time, as the timeout time. The selection method of the timeout time is described later in FIG. 15.
  • As described in FIG. 9, the job for reading the data from the storage apparatus 210 is changed to the “WAIT” status. If the starting flag is set to “1” or if the WAIT expiration time elapses, the job processing is restarted (S24).
  • The DKA 120 determines whether the data read is normally terminated or abnormally terminated (S25). The case where the data can be transferred from the storage apparatus 210 to the cache memory 130 is determined to be a normal termination. In case of the normal termination, the DKA 120 sets the staging bit to on (S26), and reports to the CHA 110 that the data read is normally terminated (S27).
  • Meanwhile, if the data read from the storage apparatus 210 is terminated abnormally, the DKA 120 determines whether a timeout error occurred or not (S28). The timeout error is an error in cases where the data cannot be read from the storage apparatus 210 within the set timeout time.
  • If a timeout error occurred (S28: YES), the DKA 120 issues a reset command to the storage apparatus 210 (S29). By the reset command, the data read request to the storage apparatus 210 is cancelled.
  • The DKA 120, after cancelling the data read request, performs the correction read processing (S30). The details of the correction read processing are described later in FIG. 12. If a failure other than the timeout error occurs in the storage apparatus 210 (S28: NO), the DKA 120 skips S29, and shifts to S30.
  • Then, the DKA 120 determines whether the correction read processing is normally terminated or not (S31). If the correction read processing is normally terminated (S31: YES), the DKA 120 reports to the CHA 110 that the read request is normally terminated (S27). If the correction read processing is not terminated normally (S31: NO), the DKA 120 reports to the CHA 110 that the processing of the read request is terminated abnormally (S32).
  • FIG. 12 is a flowchart of the correction read processing shown as S30 in FIG. 11. The DKA 120 determines the RAID level of the VDEV 220 to which the read target storage apparatus 210 belongs (S40). In this embodiment, as an example, whether the RAID level is the RAID1, the RAID5, or the RAID6 is determined.
  • If the RAID level is either the RAID5 or the RAID6, the DKA 120 identifies the numbers of the other respective slots related to the error slot (S41). The error slot is the slot from which no data can be read and in which a certain type of failure occurred. The other respective slots related to the error slot are the other slots included in the same stripe string as the error slot.
  • The DKA 120, after securing an area for storing the data to be acquired from the other respective slots in the cache memory 130, issues a read request to the respective storage apparatuses 210 which comprise the other respective slots identified at S41 (S42). Furthermore, the DKA 120 sets the timeout time for reading the data from the respective storage apparatuses 210 as the normal value (S43). In this embodiment, for further ensuring the acquisition of the data required for restoring the data in the error slot, the timeout time is set as the normal value.
  • Meanwhile, if the RAID level is the RAID1, the DKA 120 issues a read request to a storage apparatus 210 which is paired with the storage apparatus 210 in which the error occurred (S44), and shifts to S43.
  • The job related to the read request is in the WAIT status. If the starting flag is set or the WAIT expiration time has passed, the job is restarted (S45). The DKA 120 determines whether the data read is normally terminated or not (S46). If the data read is not terminated normally, the DKA 120 terminates this processing abnormally.
  • If the data read is terminated normally, the DKA 120 determines the RAID level (S47). If the RAID level is either the RAID5 or the RAID6, the DKA 120, in accordance with the data and the parity read from the respective storage apparatuses 210, restores the data, and stores the restored data in the cache area corresponding to the error slot (S48). The DKA 120 sets the staging bit related to the slot to on (S49). In case of the RAID1, the DKA 120 skips S48, and shifts to S49.
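For RAID5, the restoration at S48 is the XOR of the surviving data and parity in the same stripe. A sketch of that step (RAID6, which additionally uses a second, Reed-Solomon-style syndrome, is omitted):

```python
def xor_blocks(blocks):
    # XOR equal-length blocks byte by byte
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

# A stripe of three data blocks; parity is their XOR
d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\x05\x06"
parity = xor_blocks([d0, d1, d2])

# Suppose d1 cannot be read (the error slot): restore it from the
# surviving data slots and the parity slot
restored = xor_blocks([d0, d2, parity])
print(restored == d1)   # True
```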
  • FIG. 13 is a flowchart of the error count processing. This processing is performed by the DKA 120. The DKA 120 monitors whether an error (failure) occurred in the storage apparatus 210 or not (S60). If an error occurred (S60: YES), the DKA 120 determines whether [the error is] a timeout error or not (S61).
  • If the error which occurred in the storage apparatus 210 is a timeout error (S61: YES), the DKA 120 records the timeout error to a timeout failure field C53 in the error count management table T50 shown in FIG. 14 (S62).
  • If the error which occurred in the storage apparatus 210 is a storage apparatus error other than a timeout error (S61: NO), the DKA 120 records the error to an HDD failure field C52 in the error count management table T50 (S63).
  • The error count management table T50 is described with reference to FIG. 14. The error count management table T50 manages the number of errors which occurred in the storage apparatus 210 and the threshold for performing the restoration step. The error management table T50 is stored in the shared memory 140, and the DKA 120 can use a part of the same by copying the same in the local memory.
  • The error count management table T50, for example, manages an HDD number field C51, the HDD failure field C52, and the timeout failure field C53 by making the same correspond to each other. The HDD number field C51 stores the information for identifying each storage apparatus 210.
  • The HDD failure field C52 manages ordinary failures which occur in the storage apparatus 210. The HDD failure field C52 comprises an error count field C520, a threshold field C521 for starting the copy to the spare storage apparatus, and a threshold field C522 for starting the correction copy.
  • The error count field C520 stores the number of times of ordinary failures which occurred in the storage apparatus. The threshold field C521 stores a threshold TH1 a for starting the “sparing processing” in which the data is copied from the storage apparatus where the error occurred to a spare storage apparatus. The other threshold field C522 stores a threshold TH2 a for starting the correction copy processing.
  • The timeout failure field C53 is for managing timeout errors occurring in the storage apparatus 210, and comprises an error count field C530, a threshold field C531 for starting the sparing processing, and a threshold field C532 for starting the correction copy.
  • That is, the number of times of the occurrence of ordinary failures (error count value) and the number of times of the occurrence of timeout errors are managed separately. Furthermore, the thresholds for performing the sparing processing and the correction copy processing as the restoration steps are also set separately for ordinary failures and timeout errors respectively. Furthermore, in this embodiment, the thresholds TH1b and TH2b related to timeout errors are set larger than the thresholds TH1a and TH2a related to ordinary failures (e.g. TH1b=TH1a×2, TH2b=TH2a×2).
  • Therefore, in this embodiment, even if timeout errors occur frequently as a result of setting the timeout time short for reading data from the storage apparatuses 210, the possibility of performing the restoration steps such as the sparing processing or the correction copy processing can be reduced. In this embodiment, by inhibiting the start of the restoration steps, the increase of the load on the storage control apparatus 10 is prevented.
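The counting rule of FIGS. 13 and 14 can be sketched as follows: timeout errors and ordinary HDD failures are counted in separate fields, each with its own sparing threshold, and the timeout threshold is the larger one so that a shortened timeout does not trigger the restoration steps too early. The threshold values here are illustrative.

```python
# Sparing thresholds: ordinary failures (TH1a) vs timeouts (TH1b > TH1a)
TH1A, TH1B = 10, 20

counts = {"hdd": 0, "timeout": 0}   # per-drive error counters (C520, C530)

def record_error(is_timeout):
    # Timeout errors go to the C53 side, other errors to the C52 side
    key = "timeout" if is_timeout else "hdd"
    counts[key] += 1

def sparing_needed():
    # Each counter is compared against its own threshold
    return counts["hdd"] >= TH1A or counts["timeout"] >= TH1B

for _ in range(15):
    record_error(is_timeout=True)   # 15 timeouts: still under TH1b
print(sparing_needed())             # False

for _ in range(15):
    record_error(is_timeout=False)  # 15 ordinary failures: over TH1a
print(sparing_needed())             # True
```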
  • FIG. 15 shows the method for selecting the timeout time which is set for reading data from the storage apparatuses 210. As described above, in this embodiment, multiple timeout values TOV1 and TOV2 are prepared. The first timeout time TOV1 is set to a relatively long time, for example, a few seconds, and is also referred to as the normal value. The second timeout time TOV2 is set to a relatively short time, for example, one second or shorter, and is also referred to as the shortened value. If the specified conditions described below are satisfied, the DKA 120 can set the timeout time to the shortened value TOV2.
  • (Specified Condition 1)
  • The cases where “1” is set in the response time guarantee mode field C28 of the VDEV management table T20 shown in FIG. 7. That is, in cases where the mode to respond within a specified time is selected, the shortened value is selected as the timeout time.
  • (Specified Condition 2)
  • The cases where "1" is set for the response time guarantee mode of the mode setting table T30 shown in FIG. 8. This condition is the same as the Specified Condition 1. However, while the response time guarantee mode can be set in units of VDEVs under the Specified Condition 1, the response time guarantee mode can be set for the entire storage control apparatus 10 under the Specified Condition 2.
  • (Specified Condition 3)
  • The cases where the storage apparatus 210 as the read target is not a low-speed storage apparatus such as an SATA. If the storage apparatus as the read target is low-speed (if the response performance is low) and if the timeout time is set short, a timeout error might occur even if no failure occurs.
  • (Specified Condition 4)
  • The cases where the queuing mode is set to "0" either in the queuing mode field C27 of the VDEV management table T20 or in the mode setting table T30 (queuing mode=FIFO mode). In the FIFO mode, as queues are processed in order of issuance, it does not occur that the processing of a queue with a distant logical address is postponed and is made to wait for an extremely long time. Meanwhile, in the sorting mode, as a queue at an isolated position might be made to wait for a long time, if the timeout time is shortened, the possibility that a timeout error might occur even if no failure occurs becomes higher.
  • (Specified Condition 5)
  • The cases where the load status of the storage apparatus 210 as the read target is equal to or smaller than the specified value. If the load on the storage apparatus 210 is equal to or larger than the specified value, data read takes time and a timeout error might occur even if no failure occurs. Therefore, unless the storage apparatus 210 is in the high-load status, the timeout time is set short.
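The five conditions above can be folded into a single selection function. This sketch merges Conditions 1 and 2 into one guarantee-mode flag; the timeout values, the load threshold, and the parameter names are all illustrative assumptions.

```python
TOV1, TOV2 = 4.0, 1.0   # normal and shortened timeouts (seconds, assumed)

def select_timeout(guarantee_mode, disk_type, queuing_mode, load, max_load):
    # The shortened value TOV2 is chosen only when the response time
    # guarantee mode is on (Conditions 1/2), the drive is not a low-speed
    # type such as SATA (Condition 3), the queuing mode is FIFO
    # (Condition 4), and the drive load is at or below the threshold
    # (Condition 5); otherwise the normal value TOV1 is used.
    if (guarantee_mode
            and disk_type != "SATA"
            and queuing_mode == "FIFO"
            and load <= max_load):
        return TOV2
    return TOV1

print(select_timeout(True, "FC", "FIFO", load=3, max_load=10))    # 1.0
print(select_timeout(True, "SATA", "FIFO", load=3, max_load=10))  # 4.0
print(select_timeout(True, "FC", "SORT", load=3, max_load=10))    # 4.0
```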
  • In this embodiment which is configured as above, the DKA 120, if the specified conditions are satisfied, sets a short timeout time TOV 2 for a read request transmitted to the storage apparatuses 210 and, if a timeout error occurs, resets the read request and performs the correction read processing.
  • Therefore, even if the response performance of the storage apparatus 210 as the read target is deteriorated, if the timeout time elapses, the correction read processing can be performed. Therefore, the deterioration of the response performance of the storage control apparatus 10 can be prevented.
  • In this embodiment, for example, if the response time guarantee mode is set, if the queuing mode is FIFO, if [the storage apparatus is] not a low-speed storage apparatus, or if the storage apparatus is not highly loaded, the timeout time for reading data from the storage apparatus 210 is set to a shorter value than usual. Therefore, in this embodiment, in accordance with the circumstances, the deterioration of the response performance of the storage control apparatus 10 can be prevented.
  • In this embodiment, timeout errors are managed separately from ordinary failures in the storage apparatus. Therefore, even if the timeout time is set shorter than usual, the restoration step such as the sparing processing or the correction copy processing can be inhibited from being performed. Therefore, the deterioration of the response performance due to the increase of the load on the storage control apparatus 10 by performing the restoration steps can be prevented.
  • Embodiment 2
  • The Embodiment 2 is described with reference to FIG. 16. The respective embodiments described below including this embodiment are equivalent to a variation of the Embodiment 1. Therefore, the differences from the Embodiment 1 are mainly described. In this embodiment, in accordance with the queuing mode and the load status of the storage apparatus 210, the timeout time is set short. This embodiment is a variation of the (Specified Condition 5) described in the Embodiment 1.
  • FIG. 16 is a table T70 storing thresholds for setting the timeout time. The threshold table T70, for example, manages an HDD number field C71, a queuing command amount field C72, a threshold field for the FIFO mode C73, and a threshold field for the sorting mode C74 by making the same correspond to each other.
  • In the HDD number field C71, the information for identifying the respective storage apparatuses 210 is stored. In the queuing command amount field C72, the number of unprocessed commands whose target is the storage apparatus 210 is stored. In the threshold field for the FIFO mode C73, the threshold TH3 for the cases where the queuing mode is set to the FIFO mode is stored. In the threshold field for the sorting mode C74, the threshold TH4 for the cases where the queuing mode is set to the sorting mode is stored.
  • If the number of unprocessed commands whose target is a storage apparatus 210 reaches the threshold specified by the queuing mode (either TH3 or TH4), the timeout time of a read request whose read target is that storage apparatus 210 is set to the normal value.
  • The threshold TH3 for the FIFO mode is set larger than the threshold TH4 for the sorting mode (e.g. TH3=TH4×4). If the queuing mode is set to the FIFO mode, as there is no command whose processing is extremely postponed, the threshold TH3 is set larger than the TH4 for the sorting mode. If the queuing mode is the sorting mode, as the processing might be postponed depending on the logical address as the target of the command, the threshold TH4 is set smaller than the TH3 for the FIFO mode.
  • If a large number of unprocessed commands are accumulated in the storage apparatus 210, a timeout error might occur even though no failure exists. The possibility that a timeout error might occur also varies depending on the method for processing the unprocessed commands.
  • Therefore, in this embodiment, the timeout time is set in accordance with the number of unprocessed commands and the queuing mode. By this method, the possibility that a timeout error unrelated to failures might occur can be reduced. This embodiment also has the same effect as the Embodiment 1.
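A minimal sketch of the Embodiment 2 rule, assuming illustrative threshold values (the text only fixes the relationship, e.g. TH3 = TH4 × 4); function and parameter names are assumptions:

```python
TH4 = 8         # sorting-mode threshold -- assumed value
TH3 = TH4 * 4   # FIFO-mode threshold, larger as in the example above


def timeout_for(queued_commands, queuing_mode, tov_normal, tov_short):
    """Pick the timeout time from the backlog and the queuing mode,
    mirroring table T70 (fields C72-C74)."""
    threshold = TH3 if queuing_mode == "FIFO" else TH4
    # Once the backlog reaches the threshold, a timeout is likely to be
    # caused by load rather than by a failure, so fall back to the
    # normal timeout value.
    if queued_commands >= threshold:
        return tov_normal
    return tov_short
```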
  • Embodiment 3
  • The Embodiment 3 is described with reference to FIG. 17. In this embodiment, the timeout time in the correction read is set to a short value. FIG. 17 is a flowchart of the correction read processing. This processing comprises the steps S40 to S42 and S44 to S49, which are common to the processing shown in FIG. 12, and differs from FIG. 12 in S43A. That is, in the correction read processing of this embodiment, the timeout time is set to a shorter value than usual, and the data and the parity are read from the respective storage apparatuses 210.
  • This embodiment which is configured as above also has the same effect as the Embodiment 1. Furthermore, in this embodiment, the timeout time for the correction read is set short, which can further prevent the deterioration of the response performance in the storage control apparatus 10.
  • Embodiment 4
  • The Embodiment 4 is described with reference to FIG. 18 to FIG. 21. In this embodiment, if the correction read processing fails, the data read from the storage apparatus 210 as the first read target is retried.
  • FIG. 18 is a status management table T80 for managing the progress of the staging processing. The status management table T80, for example, comprises an item number field C81, a contents field C82, and a value field C83. In the item number field C81, each step in the staging processing for reading data from the storage apparatus 210 and transferring the same to the cache memory 130 is shown. When the staging processing reaches each step, “1” is set in the [corresponding] value field C83. An example of the respective steps in the staging processing is described below.
  • (Step 1)
  • At the Step 1, the timeout time is set to the shortened value TOV 2, and data read is required to the storage apparatus 210.
  • (Step 2)
  • At the Step 2, a timeout error related to the first read request occurs.
  • (Step 3)
  • At the Step 3, the correction read processing is attempted but fails.
  • (Step 4)
  • At the Step 4, the timeout time is set to the normal value TOV 1, and the second data read is required to the storage apparatus 210 as the read target.
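The four steps above can be modeled as a small status table mirroring the value field C83 of table T80; the class and method names are assumptions for illustration:

```python
class StatusTable:
    """Progress record for the staging processing (table T80): each step
    flips its flag to 1 when the processing reaches it."""

    STEPS = {
        1: "first read issued with shortened timeout TOV 2",
        2: "timeout error on the first read request",
        3: "correction read processing attempted and failed",
        4: "second read issued with normal timeout TOV 1",
    }

    def __init__(self):
        # Value field C83: all steps start at 0.
        self.value = {step: 0 for step in self.STEPS}

    def reach(self, step):
        self.value[step] = 1

    def reached(self, step):
        return self.value[step] == 1
```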
  • FIG. 19 and FIG. 20 are the flowcharts of the staging processing. This processing corresponds to the staging processing shown in FIG. 11. The differences between this processing and the processing shown in FIG. 11 are S70 to S76.
  • As shown in FIG. 19, the DKA 120, receiving a read message from the CHA 110 (S20), initializes the value field C83 of the status management table T80 (S70). The DKA 120, after performing the address conversion and others (S21), issues a read request to the storage apparatus 210 (S22).
  • The DKA 120 sets the timeout time of the read request to the TOV 2 which is a shorter value than usual (S71). Note that, if data read from the same storage apparatus 210 is retried, the timeout time is set to the normal value TOV 1 (S71).
  • The DKA 120, if setting the timeout time to the shortened value TOV 2, sets the value of the Step 1 in the status management table to “1” (S72). By this method, it is recorded to the table T80 that the first read is started.
  • [The processing] proceeds to FIG. 20. If the first data read from the storage apparatus 210 fails with a timeout (S28: YES), the DKA 120 issues a reset command and cancels the read request (S29). The DKA 120 sets the value of the Step 2 in the status management table T80 to “1” (S73). By this method, the occurrence of a timeout error related to the first read request is recorded to the status management table T80.
  • The DKA 120 refers to the status management table T80, and determines whether the staging processing reaches the Step 3 or not (S74). At this point, as the correction read processing is not started yet, [the processing] is determined not to reach the Step 3 (S74: NO). Therefore, the DKA 120 performs the correction read processing (S75).
  • If the correction read processing is normally terminated (S31: YES), the DKA 120 notifies the CHA 110 that the read request is normally terminated (S27). If the correction read processing is not terminated normally (S31: NO), the DKA 120 refers to the status management table T80 and determines whether the progress of the staging processing reaches the Step 2 or not (S76).
  • At this point, at S72 in FIG. 19 and at S73 in FIG. 20, the Step 1 and the Step 2 of the status management table T80 are set to “1” respectively. Therefore, the DKA 120 determines that [the processing] reaches the Step 2 (S76: YES), and returns to S22 in FIG. 19. The DKA 120 issues a read request to the storage apparatus 210 as the read target again (S22). In that case, the DKA 120 sets the timeout value related to the second read request to the normal value TOV 1 (S71). As this is the second read request and the timeout value is not shortened, S72 is skipped.
  • By the second read request, if the data is normally read from the storage apparatus 210 within the timeout time, the DKA 120 sets the staging bit to on (S26), and reports the normal termination to the CHA 110 (S27).
  • If the second read request also fails and a timeout error occurs (S28: YES), the DKA 120 resets the second read request (S29). Note that, as the Step 2 in the status management table T80 is already set to "1", "1" is not set at S73 again, and [the processing] proceeds to S74.
  • The DKA 120 refers to the status management table T80, and determines whether the [processing] reaches the Step 3 or not (S74). At this point, as the attempt of the correction read processing failed (S74: YES), the DKA 120 notifies the CHA 110 that the processing of the read request failed (S32). That is, if the second read request fails, this processing is terminated without performing the second correction read processing.
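The staging flow of FIG. 19 and FIG. 20 described above — a first read with the shortened timeout, the correction read on timeout, one retry with the normal timeout, and no second correction read — can be sketched as follows. `read_fn` and `correction_read_fn` are hypothetical stand-ins for the real drive I/O and are assumed to raise `TimeoutError` on a timeout:

```python
def stage(read_fn, correction_read_fn):
    """Illustrative control flow of the Embodiment 4 staging processing.
    Returns (data, status); the status dict mirrors table T80."""
    status = {1: 0, 2: 0, 3: 0, 4: 0}
    status[1] = 1                              # Step 1: first read, TOV 2
    try:
        return read_fn("TOV2"), status
    except TimeoutError:
        status[2] = 1                          # Step 2: timeout on first read
    try:
        return correction_read_fn(), status    # correction read (S75)
    except TimeoutError:
        status[3] = 1                          # Step 3: correction read failed
    status[4] = 1                              # Step 4: retry with TOV 1
    try:
        return read_fn("TOV1"), status
    except TimeoutError:
        # A second timeout is reported to the CHA as a failed read (S32),
        # without performing a second correction read.
        raise RuntimeError("read request failed")
```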
  • FIG. 21 is a flowchart of the correction read processing. This processing is different from the processing shown in FIG. 12 in S80 and S81. The DKA 120 sets the normal value as the timeout time for the correction read (S80). If the correction read processing is terminated abnormally, the DKA 120 sets the Step 3 of the status management table T80 to “1” and records that the correction read failed (S81).
  • This embodiment which is configured as above also has the same effect as the Embodiment 1. Furthermore, in this embodiment, if the correction read fails, data read from the storage apparatus 210 is retried with the normal timeout time. Therefore, the possibility of being able to read data from the storage apparatus 210 can be increased, and the reliability in the storage control apparatus 10 can be improved.
  • Embodiment 5
  • The Embodiment 5 is described with reference to FIG. 22 and FIG. 23. In this embodiment, in accordance with the status of the respective storage apparatuses 210 as the target of the correction read, the performance of the correction read processing is controlled.
  • FIG. 22 is a flowchart of the staging processing. The processing in FIG. 22 is different from the processing shown in FIG. 11 in S90 and S91. If a timeout error occurs (S28: YES), the DKA 120 refers to the response time management table T90 (S90), and determines whether the response time [values] of all the storage apparatuses 210 as the target of the correction read are longer than the standard value or not (S91).
  • If the response time [values] of the respective storage apparatuses 210 as the correction read target are longer [than the standard value] (S91: YES), the DKA 120 does not perform the correction read processing and notifies the CHA 110 that the processing of the read request failed (S32).
  • If the response time [values] of the respective storage apparatuses 210 as the correction read target are not longer than the standard value (S91: NO), the DKA 120 resets the read request (S29), and performs the correction read processing (S30).
  • Note that [this] is not limited to the cases where the response time [values] of all the storage apparatuses 210 as the correction read target are late; the configuration in which the correction read processing is not performed may also be permitted if the response time [values] of a specified number or more of the storage apparatuses 210 as the correction read target, or of one or more of them, are larger than the standard value.
  • FIG. 23 shows the table T90 managing the response time of the respective storage apparatuses 210. The response time management table T90, for example, manages a VDEV number field C91, an HDD number field C92, a response time field C93, and a determination field C94 by making the same correspond to each other.
  • In the response time field C93, the latest response time of each storage apparatus 210 is recorded. In the determination field C94, the result of comparing the response time of each storage apparatus 210 with the specified standard value is recorded. If the response time is equal to or larger than the standard value, “Late” is recorded while, if the response time is under the standard value, “Normal” is recorded.
  • By using the response time management table T90, it can be determined whether the correction read can be completed in a short time or not. Note that, instead of managing the response time directly, the number of unprocessed commands of each storage apparatus may also be managed. Furthermore, the configuration in which the time required for the correction read processing is estimated in accordance with the number of unprocessed commands, the type of the storage apparatus 210, and other information may also be permitted.
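A sketch of the Embodiment 5 gate, assuming an illustrative standard value; the `required` parameter models the variations permitted in the text (all drives late, a specified number of drives, or one or more drives):

```python
STANDARD_MS = 100   # standard value for the response time -- assumed


def correction_read_allowed(response_times_ms, required="all"):
    """Decide whether to perform the correction read, given the latest
    response times (field C93 of table T90) of the correction-read
    target drives. Skipping mirrors the S91: YES branch."""
    late = [t >= STANDARD_MS for t in response_times_ms]   # "Late" in C94
    if required == "all":
        # Skip the correction read only when every target drive is late.
        return not all(late)
    # Skip when `required` or more target drives are late.
    return sum(late) < required
```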
  • Embodiment 6
  • The Embodiment 6 is described with reference to FIG. 24 to FIG. 26. In this embodiment, if the correction read processing fails, [the failure] is notified to the user, and [the processing is] switched to the storage control apparatus 10 (2) of the standby system.
  • FIG. 24 is a system configuration diagram of this embodiment. This embodiment comprises the storage control apparatus 10 (1) of the currently used system and the storage control apparatus 10 (2) of the standby system. In normal cases, the user uses the storage control apparatus 10 (1) of the currently used system.
  • FIG. 25 and FIG. 26 are the flowcharts of the staging processing. The flowchart in FIG. 25 is different from the flowchart in FIG. 19 in that the connector 2 is not included. The flowchart in FIG. 26 is different from the flowchart in FIG. 20 in the processing after the correction read processing fails.
  • In this embodiment, if the correction read processing fails (S31: NO, S76: YES), [the failure] is notified to the user, and this processing is terminated (S100). The notification is transmitted to the user via the management terminal 30. The user can select whether to issue a read request from the host 20 to the storage control apparatus 10 (1) of the currently used system again or to switch [the processing] from the storage control apparatus 10 (1) of the currently used system to the storage control apparatus 10 (2) of the standby system. This embodiment which is configured as above also has the same effect as the Embodiment 1.
  • Note that this invention is not limited to the above-mentioned embodiments. A person with ordinary skill in the art may make various types of addition, alteration, and others within the scope of this invention, for example by combining the above-mentioned respective embodiments appropriately.
  • REFERENCE SIGN LIST
  • 1: storage control apparatus, 2: host, 3: controller, 4: storage apparatus, 5: channel adapter (CHA), 6: memory, 7: disk adapter (DKA), 10: storage control apparatus, 20: host, 30: management terminal, 100: controller, 110: CHA, 120: DKA, 130: cache memory, 140: shared memory, 210: storage apparatus, 220: parity group (VDEV), 230: logical volume (LDEV).

Claims (13)

1. A storage control apparatus which inputs/outputs data in accordance with a request from a higher-level device, comprising:
a plurality of storage apparatuses for storing data; and
a controller which is connected to the higher-level device and each of the storage apparatuses and which makes a specified storage apparatus of the respective storage apparatuses input/output the data in accordance with the request from the higher-level device,
wherein the controller sets the timeout time to a second value which is shorter than a first value in a certain case and requires the read of specified data corresponding to the access request to the specified storage apparatus of the respective storage apparatuses in the case in which an access request is received from the higher-level device,
the controller detects that a timeout error occurred in the case in which the data cannot be acquired from the specified storage apparatus within the set timeout time,
the controller makes a second management unit which is different from a first management unit for managing failures which occur in the respective storage apparatuses manage the occurrence of the timeout error in the case in which the timeout error is detected, and
the controller requires the read of other data corresponding to the specified data to another storage apparatus related to the specified storage apparatus, generates the specified data in accordance with the other data acquired from another storage apparatus, and transfers the generated specified data to the higher-level device.
2. The storage control apparatus according to claim 1, wherein:
the controller comprises a first communication control unit for communicating with the higher-level device, a second communication control unit for communicating with the respective storage apparatuses, and a memory used by the first communication control unit and the second communication control unit,
the memory stores timeout time setting information for determining whether to set the timeout time to the first value or to the second value,
the timeout time setting information includes the number of queues whose targets are the respective storage apparatuses, a threshold for First In First Out in cases where the First In First Out mode is set as the queuing mode, and a threshold for sorting which is smaller than the threshold for First In First Out in cases where the queuing mode is set to the sorting mode in which sorting is performed in ascending order of distance of logical addresses,
in the case in which the first communication control unit receives an access request from the higher-level device,
the second communication control unit, in accordance with the timeout time setting information, if the number of queues whose target is the specified storage apparatus is equal to or larger than either the threshold for First In First Out or the threshold for sorting corresponding to the queuing mode set for the specified storage apparatus, selects the first value as the timeout time for reading the specified data from the specified storage apparatus, and
if the number of queues whose target is the specified storage apparatus is under either the threshold for First In First Out or the threshold for sorting corresponding to the queuing mode set for the specified storage apparatus, selects the second value which is smaller than the first value as the timeout time for reading the specified data from the specified storage apparatus,
the second communication control unit requires the read of the specified data to the specified storage apparatus,
the second communication control unit, if unable to acquire the specified data from the specified storage apparatus within the set timeout time, detects the occurrence of a timeout error,
the second communication control unit, if the timeout error is detected, makes a second management unit which is different from a first management unit for managing failures which occur in the respective storage apparatuses manage the occurrence of the timeout error,
the value of a threshold for restoration for starting a specified restoration step related to the storage apparatus in which the failure occurred is set larger for the second management unit than for the first management unit,
the second communication control unit sets another timeout time for which the first value is selected, requires the read of other data corresponding to the specified data to the other storage apparatuses related to the specified storage apparatus, generates the specified data in accordance with the other data acquired from the other storage apparatuses, and transfers the generated specified data to the higher-level device, and
the second communication control unit, if unable to acquire the other data from the other storage apparatuses within another timeout time and if the second value is set as the timeout time, changes the timeout time to the first value, and requires the read of the specified data to the specified storage apparatus again.
3. The storage control apparatus according to claim 1, wherein:
the first management unit manages the number of failures which occurred in the respective storage apparatuses and a threshold for restoration for starting a specified restoration step related to the storage apparatuses in which the failures occurred by making the same correspond to each other,
the second management unit manages the number of timeout errors which occurred in the respective storage apparatuses and another threshold for restoration for starting the specified restoration step related to the storage apparatuses in which the timeout errors occurred by making the same correspond to each other, and
the other threshold for restoration managed by the second management unit is set larger than the threshold for restoration managed by the first management unit.
4. The storage control apparatus according to claim 1, wherein the controller, if the guarantee mode for guaranteeing the response within the specified time is set in the specified storage apparatus, sets the timeout time for reading the specified data from the specified storage apparatus to the second value.
5. The storage control apparatus according to claim 1, wherein the controller, if the queuing mode related to the specified storage apparatus is set to the First In First Out mode, sets the timeout time for reading the specified data from the specified storage apparatus to the second value.
6. The storage control apparatus according to claim 1, wherein the controller, if the specified storage apparatus is a storage apparatus other than the previously specified low-speed storage apparatus, sets the timeout time for reading the specified data from the specified storage apparatus to the second value.
7. The storage control apparatus according to claim 1, wherein the controller, if the number of queues whose target is the specified storage apparatus is smaller than the specified threshold, sets the timeout time for reading the specified data from the specified storage apparatus to the second value.
8. The storage control apparatus according to claim 1, wherein:
the controller comprises timeout time setting information for determining whether to set the timeout time to the first value or to the second value, which includes the number of queues whose targets are the respective storage apparatuses, the threshold for First In First Out in cases where the First In First Out mode is set as the queuing mode, and the threshold for sorting which is smaller than the threshold for First In First Out in cases where the queuing mode is set to the sorting mode in which sorting is performed in ascending order of distance of logical addresses, and
the controller, if the number of queues whose target is the specified storage apparatus is equal to or larger than either the threshold for First In First Out or the threshold for sorting corresponding to the queuing mode set for the specified storage apparatus, selects the first value as the timeout time for reading the specified data from the specified storage apparatus and, if the number of queues whose target is the specified storage apparatus is under either the threshold for First In First Out or the threshold for sorting corresponding to the queuing mode set for the specified storage apparatus, selects the second value which is smaller than the first value as the timeout time for reading the specified data from the specified storage apparatus.
9. The storage control apparatus according to claim 1, wherein the controller, if a timeout error is detected, sets another timeout time for which the first value is selected, requires the read of other data corresponding to the specified data to the other storage apparatuses related to the specified storage apparatus.
10. The storage control apparatus according to claim 1, wherein the controller, if a timeout error is detected, sets another timeout time for which the second value is selected, requires the read of other data corresponding to the specified data to the other storage apparatuses related to the specified storage apparatus.
11. The storage control apparatus according to claim 1, wherein the controller, if unable to acquire the other data from the other storage apparatuses within another timeout time, changes the timeout time to the first value and requires the read of the specified data to the specified storage apparatus again.
12. The storage control apparatus according to claim 1, wherein the controller, if unable to acquire the other data from the other storage apparatuses within another timeout time, notifies the user.
13. A control method of a storage control apparatus which is connected to a higher-level device and a plurality of storage apparatuses, comprising the steps of:
setting the timeout time to a second value which is shorter than a first value in a certain case and requiring the read of specified data corresponding to the access request to the specified storage apparatus of the respective storage apparatuses in the case in which an access request from the higher-level device is received;
detecting that a timeout error occurred in the case in which the data cannot be acquired from the specified storage apparatus within the set timeout time;
making a second management unit which is different from a first management unit for managing failures which occur in the respective storage apparatuses manage the occurrence of the timeout error in the case in which the timeout error is detected;
requiring the read of other data corresponding to the specified data to another storage apparatus related to the specified storage apparatus;
generating the specified data in accordance with the other data acquired from another storage apparatus; and
transferring the generated specified data to the higher-level device.
US12/866,915 2010-04-14 2010-04-14 Storage control apparatus and control method of storage control apparatus Active US8984352B2 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2010/002687 WO2011128936A1 (en) 2010-04-14 2010-04-14 Storage control device and control method of storage control device

Publications (2)

Publication Number Publication Date
US20130024734A1 true US20130024734A1 (en) 2013-01-24
US8984352B2 US8984352B2 (en) 2015-03-17

Family

ID=44798331

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/866,915 Active US8984352B2 (en) 2010-04-14 2010-04-14 Storage control apparatus and control method of storage control apparatus

Country Status (5)

Country Link
US (1) US8984352B2 (en)
EP (1) EP2560089B1 (en)
JP (1) JP5451874B2 (en)
CN (1) CN102741801B (en)
WO (1) WO2011128936A1 (en)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103577357B (en) * 2013-11-06 2017-11-17 华为技术有限公司 A kind of processing method and controller of I/O request messages
US10452278B2 (en) * 2017-03-24 2019-10-22 Western Digital Technologies, Inc. System and method for adaptive early completion posting using controller memory buffer
TWI639921B (en) 2017-11-22 2018-11-01 大陸商深圳大心電子科技有限公司 Command processing method and storage controller using the same
US10990319B2 (en) 2018-06-18 2021-04-27 Micron Technology, Inc. Adaptive watchdog in a memory device
JP7137612B2 (en) * 2020-12-24 2022-09-14 株式会社日立製作所 DISTRIBUTED STORAGE SYSTEM, DATA RECOVERY METHOD, AND DATA PROCESSING PROGRAM

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5758057A (en) * 1995-06-21 1998-05-26 Mitsubishi Denki Kabushiki Kaisha Multi-media storage system
US20030212858A1 (en) * 2002-05-10 2003-11-13 International Business Machines Corp. Data storage array method and system
US20050240742A1 (en) * 2004-04-22 2005-10-27 Apple Computer, Inc. Method and apparatus for improving performance of data storage systems
US20050240743A1 (en) * 2004-04-22 2005-10-27 Apple Computer, Inc. Method and apparatus for accessing data storage systems
US20060026347A1 (en) * 2004-07-29 2006-02-02 Ching-Hai Hung Method for improving data reading performance and storage system for performing the same
US20090106491A1 (en) * 2007-10-18 2009-04-23 Michael Piszczek Method for reducing latency in a raid memory system while maintaining data integrity
US20110154134A1 (en) * 2008-10-15 2011-06-23 Tetsuhiro Kohada Information storage device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09258907A (en) * 1996-03-25 1997-10-03 Mitsubishi Electric Corp Highly available external storage device having plural storage disk parts
JP3284963B2 (en) * 1998-03-10 2002-05-27 日本電気株式会社 Disk array control device and control method
JP3778171B2 (en) * 2003-02-20 2006-05-24 日本電気株式会社 Disk array device
JP4851063B2 (en) * 2003-12-22 2012-01-11 ソニー株式会社 Data recording / reproducing apparatus and data recording / reproducing method
JP2007213721A (en) 2006-02-10 2007-08-23 Hitachi Ltd Storage system and control method thereof
JP2007233903A (en) 2006-03-03 2007-09-13 Hitachi Ltd Storage controller and data recovery method for storage controller
CN1997033B (en) * 2006-12-28 2010-11-24 华中科技大学 A protocol for network storage and its system


Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9256526B2 (en) * 2012-02-23 2016-02-09 National Taiwan University Flash memory storage system and access method
US20130227199A1 (en) * 2012-02-23 2013-08-29 National Taiwan University Flash memory storage system and access method
US20140317443A1 (en) * 2013-04-23 2014-10-23 International Business Machines Corporation Method and apparatus for testing a storage system
US10698758B2 (en) * 2014-03-18 2020-06-30 Toshiba Memory Corporation Data transfer device, data transfer method, and non-transitory computer readable medium
US9811410B2 (en) * 2014-03-18 2017-11-07 Toshiba Memory Corporation Data transfer device, data transfer method, and non-transitory computer readable medium
US20180113756A1 (en) * 2014-03-18 2018-04-26 Toshiba Memory Corporation Data transfer device, data transfer method, and non-transitory computer readable medium
US20150269046A1 (en) * 2014-03-18 2015-09-24 Kabushiki Kaisha Toshiba Data transfer device, data transfer method, and non-transitory computer readable medium
US10095431B2 (en) * 2015-06-18 2018-10-09 John Edward Benkert Device controller and method of enforcing time-based sector level security
US10282117B2 (en) * 2015-06-18 2019-05-07 John Edward Benkert Device controller and method of enforcing time based sector level security
US20180067792A1 (en) * 2016-09-05 2018-03-08 Kabushiki Kaisha Toshiba Information processing apparatus, information processing method, and computer program product
US10417068B2 (en) * 2016-09-05 2019-09-17 Kabushiki Kaisha Toshiba Information processing apparatus, information processing method, and computer program product
US10691519B2 (en) * 2016-09-15 2020-06-23 International Business Machines Corporation Hang detection and recovery
US20190026031A1 (en) * 2016-10-03 2019-01-24 Samsung Electronics Co., Ltd. Method for read latency bound in ssd storage systems
US10732849B2 (en) * 2016-10-03 2020-08-04 Samsung Electronics Co., Ltd. Method for read latency bound in SSD storage systems
US11262915B2 (en) 2016-10-03 2022-03-01 Samsung Electronics Co., Ltd. Method for read latency bound in SSD storage systems
US11461037B2 (en) * 2018-07-09 2022-10-04 Yokogawa Electric Corporation Data collection system and data collection method

Also Published As

Publication number Publication date
JPWO2011128936A1 (en) 2013-07-11
CN102741801A (en) 2012-10-17
JP5451874B2 (en) 2014-03-26
US8984352B2 (en) 2015-03-17
WO2011128936A1 (en) 2011-10-20
EP2560089A4 (en) 2014-01-08
EP2560089A1 (en) 2013-02-20
CN102741801B (en) 2015-03-25
EP2560089B1 (en) 2018-07-04

Similar Documents

Publication Publication Date Title
US8984352B2 (en) Storage control apparatus and control method of storage control apparatus
US8234467B2 (en) Storage management device, storage system control device, storage medium storing storage management program, and storage system
US9092142B2 (en) Storage system and method of controlling the same
US7523253B2 (en) Storage system comprising a plurality of tape media one of which corresponding to a virtual disk
US7814351B2 (en) Power management in a storage array
JP4871546B2 (en) Storage system
JP5638744B2 (en) Command queue loading
US20090300283A1 (en) Method and apparatus for dissolving hot spots in storage systems
US9262087B2 (en) Non-disruptive configuration of a virtualization controller in a data storage system
US20070266218A1 (en) Storage system and storage control method for the same
US7937553B2 (en) Controlling virtual memory in a storage controller
US8352766B2 (en) Power control of target secondary copy storage based on journal storage usage and accumulation speed rate
US20110179188A1 (en) Storage system and storage system communication path management method
GB2416415A (en) Logical unit reassignment between redundant controllers
JP2006285808A (en) Storage system
JP2016126561A (en) Storage control device, storage device, and program
US9760296B2 (en) Storage device and method for controlling storage device
US8572347B2 (en) Storage apparatus and method of controlling storage apparatus
US8285943B2 (en) Storage control apparatus and method of controlling storage control apparatus
US8041917B2 (en) Managing server, pool adding method and computer system
US8966173B1 (en) Managing accesses to storage objects
US8234419B1 (en) Storage system and method of controlling same to enable effective use of resources
US20220222015A1 (en) Storage system, storage control device, and storage control method
JP2023125009A (en) Storage system, path control method, and program
JP2021189884A (en) Storage control device and program

Legal Events

Date Code Title Description
AS Assignment

Owner name: HITACHI, LTD., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KATSURAGI, EIJU;REEL/FRAME:024813/0535

Effective date: 20100716

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8