US20070192555A1 - Remote copy system - Google Patents
Remote copy system
- Publication number
- US20070192555A1 (application Ser. No. 11/727,947)
- Authority: United States (US)
- Prior art keywords: write, storage, write data, storage systems, logical volume
- Prior art date
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction by active fault-masking where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2056—Error detection or correction by active fault-masking where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
- G06F11/2058—Error detection or correction by mirroring using more than 2 mirrored copies
- G06F11/2064—Error detection or correction by mirroring while ensuring consistency
- G06F11/2071—Error detection or correction by mirroring using a plurality of controllers
- G06F11/2074—Asynchronous techniques
- G06F11/2082—Data synchronisation
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/82—Solving problems relating to consistency
- G06F2201/835—Timestamp
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S707/00—Data processing: database and file management or data structures
- Y10S707/99951—File or database maintenance
- Y10S707/99952—Coherency, e.g. same view to multiple users
- Y10S707/99953—Recoverability
- Y10S707/99955—Archiving or backup
Definitions
- The present invention relates to a storage system that stores data employed by a computer and receives data updates from the computer, and in particular to processing for maintaining copies of data between a plurality of storage systems.
- In Laid-open European Patent Application No. 0672985, a technique is disclosed whereby the data that is employed by a computer is stored by a storage system and a copy of this data is stored in a separate storage system arranged at a remote location, while preserving the write sequence of the data.
- In this technique, the source storage system that has received write data from the primary host computer reports completion of reception of the write data to the primary host computer only after the write data has been received. After this, the primary host computer reads a copy of the write data from the source storage system.
- A write time, which is the time at which the write request for the write data was issued, is applied to the write data, and when the write data is read by the primary host computer, the write time is transferred to the primary host computer as well.
- The primary host computer then transfers the write data and the write time to the secondary host computer.
- After receiving the write data and the write time, the secondary host computer writes information including the write time to a control volume in the storage system on the secondary side and, in addition, writes the write data to the target storage system in write-time order, with reference to the write times applied to the various items of write data. By writing the write data to the target storage system in write-time order, consistent data can be maintained in the target storage system.
- U.S. Pat. No. 6,092,066 discloses a technique whereby the data that is used by a computer is stored in a storage system and, by copying the data that is stored in this storage system to a separate storage system arranged at a remote location, the data can be maintained in the separate storage system even if the first storage system becomes unusable due to, for example, a natural disaster or fire.
- U.S. Pat. No. 6,209,002 discloses a technique whereby data employed by a computer is stored in a storage system and, by copying the data that is stored in this storage system to a separate storage system arranged at a remote location, and additionally copying the data that has been received by this separate storage system to a third storage system, a high level of data redundancy can be obtained.
- The system comprises a first storage device system that is coupled to a computer and has a first logical volume in which data received from the computer is stored, and a second storage device system that is coupled to the first storage device system and has a second logical volume in which a copy of the data stored in the first logical volume is stored.
- The first storage device system applies time information to the write data received from the computer and sends the write data and this time information to the second storage device system; the second storage device system stores the write data received from the first storage device system in the second logical volume in accordance with the time information applied to this write data.
- FIG. 1 is a view showing an example of the layout of a computer system according to embodiment 1;
- FIG. 2 is a diagram showing an example of a logical volume group;
- FIG. 3 is a flow diagram showing an example of processing in the case where a write request is received by a storage device A;
- FIG. 4 is a view showing an example of group management information;
- FIG. 5 is a view showing an example of write data management information for managing write data;
- FIG. 6 is a flow diagram showing an example of transfer processing of write data from the storage device A to a storage device B;
- FIG. 7 is a view showing an example of remote logical volume information of a logical volume;
- FIG. 8 is a view showing an example of arrived write time information;
- FIG. 9 is a flow diagram showing an example of reflection processing of write data in the storage device B;
- FIG. 10 is a flow diagram showing another example of processing in the case where the storage device A has received a write request;
- FIG. 11 is a flow diagram showing another example of processing in the case where the storage device A has received a write request;
- FIG. 12 is a view showing an example of the layout of a computer system according to embodiment 2;
- FIG. 13 is a view showing an example of the layout of a computer system according to embodiment 3;
- FIG. 14 is a flow diagram showing another example of processing in the case where the storage device A in embodiment 3 has received a write request;
- FIG. 15 is a flow diagram showing an example of processing in the case where the management software A gives instructions for deferring processing of a write request in respect of the storage device A and creation of a marker;
- FIG. 16 is a view showing an example of marker number information;
- FIG. 17 is a view showing another example of write data management information;
- FIG. 18 is a flow diagram showing an example of transfer processing of write data from the storage device A in embodiment 3 to the storage device B;
- FIG. 19 is a flow diagram showing an example of reflection processing of write data in the storage device B in embodiment 3;
- FIG. 20 is a flow diagram showing another example of reflection processing of write data in the storage device B in embodiment 3;
- FIG. 21 is a view showing an example of the layout of a computer system according to embodiment 4.
- FIG. 1 is a view showing an example of the layout of a computer system according to a first embodiment.
- This system comprises a storage device A 100 (also referred to as a storage system), a mainframe host computer A (also called MFA) 600, an open system host computer A 700, a storage device B 190, a mainframe host computer B (also referred to as MFB) 690, and an open system host computer B 790.
- The storage device A 100 is connected to the MFA 600 and to the open system host A 700 by respective I/O paths 900.
- The storage device B 190 is likewise connected to the MFB 690 and to the open system host B 790 by respective I/O paths 900.
- The MFB 690 and the open system host B 790 are normally a standby system.
- The MFA 600, MFB 690, open system host A 700, and open system host B 790 are connected by a network 920.
- The MFA 600 and MFB 690 include an OS 610 and application software (APP) 620.
- The open system host A 700 and open system host B 790 likewise include an OS 710 and APP 720.
- An I/O request issued from the APP of the MFA 600, MFB 690, open system host A 700, or open system host B 790 through the OS is issued to the storage device A 100 or storage device B 190 through the I/O path 900.
- Software such as a DBMS is included in the APP 620 or APP 720.
- The storage device A 100 comprises a control section 200, control memory 300, and a cache 400.
- The control section 200 comprises a write data reception section A 210 and a write data transfer section A 220.
- The control section 200 accesses the control memory 300 and performs the processing described below utilizing the information stored in the control memory 300.
- The cache 400 comprises high-speed memory that chiefly stores read data or write data, so that the storage device A can achieve high I/O processing performance by employing the cache 400. It should be noted that, preferably, these components are duplicated and provided with back-up power sources for purposes of fault resistance and availability.
- The storage device B 190 also comprises a control section 200, control memory 300, and a cache 400.
- Its control section 200 comprises a write data reception section B 211, a write data reflection instruction section 230, and a write data reflection section 240.
- The role of the control memory 300 and cache 400 is the same as in the description of the storage device A 100 above.
- The storage device A 100 and storage device B 190 provide logical volumes 500, constituting data storage regions, to the MFA 600, open system host A 700, MFB 690, and open system host B 790. A single logical volume 500 need not correspond to a single physical device; for example, it may be constituted by a set of storage regions dispersed over a plurality of magnetic disc devices. Also, a logical volume may have, for example, a mirror construction, or a construction with redundancy such as a RAID construction in which parity data is added.
- The storage device A 100 provides logical volumes 500 as described above; however, the type of logical volume 500 provided to the MFA 600 differs from the type provided to the open system host A 700, and the logical and/or physical interfaces of the respective I/O paths 900 also differ. The same applies to the storage device B 190, the MFB 690, and the open system host B 790.
- The time at which the write request was issued is included in the write request 630 from the MFA 600 as the write time 650, but no write time is included in the write request 730 from the open system host A 700.
- The storage device A 100 and the storage device B 190 are connected by transfer paths 910.
- The storage device A 100 and the storage device B 190 can hold a copy of the content of one logical volume in another logical volume.
- In this embodiment, a copy of the content of a logical volume 500 of the storage device A 100 is held in a logical volume 500 of the storage device B 190; the content of updating performed on the logical volume 500 of the storage device A 100 is also stored in the logical volume 500 of the storage device B 190 by being sent to the storage device B 190 through the transfer path 910.
- The storage device A 100 and the storage device B 190 hold management information regarding the copy, indicating the relationship between the logical volumes, and maintenance of the copy referred to above is performed using this management information.
- The relationship between the logical volumes and the relationship of the logical volume groups, described below, are set by the user in accordance with the user's needs.
- FIG. 2 is a diagram showing an example of a group of logical volumes.
- In FIG. 2, the broken lines indicate the copy relationship between the logical volumes 500 or between the logical volume groups, i.e. the correspondence relationship between source and target.
- The sequence of write data in the storage device A 100 and its reflection in the storage device B 190 are managed in units of logical volume groups, each comprising a plurality of such logical volumes, and allocation of the resources necessary for the processing described above is also performed in units of logical volume groups.
- FIG. 3 is a view showing the processing that is performed in the case where a write request is received from the MFA 600 or open system host A 700 in respect of a logical volume 500 (logical volume 500 constituting the source) where a copy of the logical volume 500 is being created.
- First, the write data reception section A 210 receives a write request from the MFA 600 or open system host A 700 (step 1000). If the write time 650 is included in the write request that is received (step 1001), the write data reception section A 210 stores the write data in the cache 400 (step 1002) and creates (step 1003) write data management information 330 by applying (assigning) a sequential number to the write data.
- The write data reception section A 210 then records the write time 650 in the write data management information 330. Also, when the sequential number is applied, the write data reception section A 210 obtains the current sequential number from the group management information 310 of the logical volume group to which the logical volume being written belongs, records the value obtained by adding 1 thereto in the write data management information 330 as the sequential number of the write data, and records this new sequential number in the group management information 310.
- FIG. 4 is a view showing an example of group management information 310 of the various logical volume groups.
- The group ID is an ID for identifying a logical volume group in the storage device A 100.
- The sequential number is a number given continuously to write data directed at the logical volumes belonging to the logical volume group in question; numbers successively increased by 1 are applied to such write data, the initial value being, for example, 0.
- The number of logical volumes is the number of logical volumes that belong to the logical volume group in question.
- The logical volume ID is the ID, in the storage device A 100, of a logical volume belonging to the logical volume group in question.
- The remote storage device ID is an ID (e.g. a serial number) specifying the storage device (the storage device B 190) that has the logical volume group paired with the logical volume group in question.
- The remote group ID is an ID that specifies the logical volume group that is paired with the logical volume group in question in the remote storage device (the storage device B 190), i.e. the logical volume group to which the logical volumes 500 (also called remote logical volumes) that store copies of the content of the logical volumes belonging to the logical volume group in question belong.
- FIG. 5 is a view showing an example of write data management information 330 for managing the various write data.
- The logical volume ID is the ID of the logical volume in which the write data is stored.
- The write address is the write start address of the write data in question in the aforesaid logical volume.
- The write data length is the length of the write data in question.
- The write data pointer is the storage start address of the write data in question in the cache 400.
- The sequential number is a number given continuously to write data within the logical volume group to which the logical volume in which the write data is written belongs. The write time will be discussed below.
- The "transfer required" bit is a bit that indicates whether or not the write data in question needs to be transferred to the storage device B 190; it is set to ON when write data management information 330 is created upon receipt of write data by the write data reception section A 210.
- The write data management information 330 is managed in the form of a list, for example one list per logical volume group.
- In step 1004, the write data reception section A 210 records the write time 650 as the write time information 340 in the control memory 300.
- If no write time is included in the received write request, the write data reception section A 210 stores the write data in the cache 400 (step 1005), obtains a write time from the write time information 340 and applies (assigns) it to the write data, and creates write data management information 330 (step 1006) by also applying a sequential number obtained from the group management information 310.
- That is, the write data reception section A 210 records the time held in the write time information 340 as the write time in the write data management information 330, finds a sequential number by the same procedure as in step 1003 described above, and records this sequential number in the write data management information 330.
- In step 1007, completion of writing is reported to the MFA 600 or to the open system host A 700.
- The aforesaid processing does not include the time-consuming processing of physically writing the write data that is stored in the cache 400 to the recording medium of the logical volume 500, or of transferring the write data to the storage device B 190; this processing is performed subsequently, in asynchronous fashion, with an appropriate timing. Consequently, only a short time is required from reception of the write request by the write data reception section A 210 until completion of writing is reported, so a rapid response to the MFA 600 or open system host A 700 can be achieved.
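To make the flow of FIG. 3 concrete, here is a minimal illustrative sketch in Python (not part of the patent); the record types WriteDataInfo and GroupInfo and the function receive_write are hypothetical simplifications of the write data management information 330, the group management information 310, and the write time information 340.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class WriteDataInfo:
    """Illustrative stand-in for the write data management information 330."""
    logical_volume_id: int
    write_address: int
    write_data_length: int
    write_data_pointer: int      # start address of the data in the cache 400
    sequential_number: int
    write_time: int              # write time 650, or a time assigned by the storage device
    transfer_required: bool = True

@dataclass
class GroupInfo:
    """Illustrative stand-in for the group management information 310."""
    group_id: int
    last_sequential_number: int = 0
    write_data_list: List[WriteDataInfo] = field(default_factory=list)

# write time information 340: the most recent write time seen from a mainframe host
write_time_info = 0

def receive_write(group: GroupInfo, volume_id: int, address: int, data: bytes,
                  cache: dict, write_time: Optional[int] = None) -> str:
    """Sketch of steps 1000-1007: store the data in the cache, assign a write
    time and a per-group sequential number, then report completion."""
    global write_time_info
    if write_time is not None:          # request from the mainframe host (write time 650 present)
        write_time_info = write_time    # step 1004: record it as write time information 340
    else:                               # request from the open system host: reuse the recorded time
        write_time = write_time_info
    cache_slot = len(cache)             # steps 1002/1005: place the write data in the cache
    cache[cache_slot] = data
    group.last_sequential_number += 1   # steps 1003/1006: next sequential number for this group
    group.write_data_list.append(WriteDataInfo(
        logical_volume_id=volume_id,
        write_address=address,
        write_data_length=len(data),
        write_data_pointer=cache_slot,
        sequential_number=group.last_sequential_number,
        write_time=write_time,
    ))
    return "write completed"            # step 1007: completion is reported before transfer/destaging
```

The key point modeled here is that a write time is recorded when the mainframe supplies one and reused for open-system writes, and that completion is reported before any transfer or destaging takes place.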
- FIG. 6 is a view showing an example of transfer processing of write data to the storage device B 190 from the storage device A 100 .
- The write data transfer section A 220 finds (step 1100) the information relating to the write data that is to be transferred to the storage device B 190, by referring to the list of write data management information 330 to find the write data that needs to be transferred and, in addition, by referring to the write data management information 330, the group management information 310, and the remote logical volume information 320.
- This information includes the write address acquired from the write data management information 330 , the write data length, the sequential number, the write time, the remote storage device ID acquired from the remote logical volume information 320 , the remote logical volume number, and the remote group number obtained from the group management information 310 using the logical volume ID.
- FIG. 7 is a view showing an example of the remote logical volume information 320 of the various logical volumes.
- The logical volume ID is the ID of the logical volume on the source side (a logical volume 500 included in the storage device A 100 in embodiment 1).
- The remote storage device ID is an ID (for example a serial number) specifying the storage device (the storage device B 190 in embodiment 1) that has the logical volume (also called the remote logical volume) paired with the logical volume in question, in which a copy of the data stored in the logical volume in question is stored.
- The remote logical volume ID is an ID that specifies the remote logical volume (i.e. the logical volume 500 on the target side, where a copy of the data that was stored in the logical volume is stored) in the remote storage device (the storage device B 190 in embodiment 1).
- The write data transfer section A 220 transfers (step 1101), to the storage device B 190, the write data and the information found in step 1100.
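As an illustration of steps 1100 and 1101 (a hypothetical sketch rather than the patent's implementation), the following Python function assembles the information listed above for one item of write data before it is sent over the transfer path 910; the names RemoteVolumeInfo and build_transfer_record are assumptions made for this example.

```python
from dataclasses import dataclass

@dataclass
class RemoteVolumeInfo:
    """Illustrative stand-in for the remote logical volume information 320."""
    logical_volume_id: int
    remote_storage_device_id: str   # e.g. a serial number of the storage device B 190
    remote_logical_volume_id: int

def build_transfer_record(wd, remote_volumes, remote_group_ids):
    """Sketch of step 1100: for one item of write data whose 'transfer required'
    bit is still ON, gather the information that is sent to the storage device
    B 190 together with the data itself.  `wd` is assumed to be a write-data
    record with the fields of FIG. 5; `remote_volumes` maps a logical volume ID
    to its RemoteVolumeInfo; `remote_group_ids` maps a logical volume ID to the
    remote group ID held in the group management information 310."""
    rv = remote_volumes[wd.logical_volume_id]
    return {
        "write_address": wd.write_address,
        "write_data_length": wd.write_data_length,
        "sequential_number": wd.sequential_number,
        "write_time": wd.write_time,
        "remote_storage_device_id": rv.remote_storage_device_id,
        "remote_logical_volume_id": rv.remote_logical_volume_id,
        "remote_group_id": remote_group_ids[wd.logical_volume_id],
    }
```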
- The write data reception section B 211 of the storage device B 190 stores (step 1102) the received write data and information in the cache 400 and creates (step 1103) write data management information 330 from the received information.
- The items of the write data management information 330 of the storage device B 190 are the same as the items of the write data management information 330 of the storage device A 100.
- The content of the write data management information 330 of the storage device B 190 differs from that of the storage device A 100 in that the logical volume ID is the ID of the logical volume 500 on the target side where the copy is stored, the write data pointer is the storage start address of the write data in the cache 400 of the storage device B 190, and the "transfer required" bit is normally OFF; it is otherwise the same.
- The storage device B 190 also has group management information 310, the items of which are the same as in the case of the storage device A 100.
- Regarding its content, the group ID is an ID that specifies the logical volume group to which the logical volume 500 on the target side, where the copy is stored, belongs;
- the remote storage device ID is the ID of the storage device constituting the source (the storage device A 100 in the case of embodiment 1);
- and the remote group ID is an ID that specifies the logical volume group to which the remote logical volume (i.e. the logical volume 500 constituting the source) belongs in the remote storage device (the storage device A 100 in embodiment 1).
- The storage device B 190 also has remote logical volume information 320, the items of which are the same as in the case of the storage device A 100; regarding its content, the logical volume ID is an ID that specifies the logical volume 500 where the copy is stored, the remote storage device ID is an ID that specifies the storage device constituting the source (the storage device A 100), and the remote logical volume ID is an ID that specifies the remote logical volume (the logical volume 500 constituting the source) in the remote storage device (the storage device A 100).
- Next, the write data reception section B 211 updates the arrived write time information 350 (step 1104).
- FIG. 8 is a view showing an example of arrived write time information 350 of the various groups.
- The group ID is an ID that specifies the logical volume group in the storage device B 190.
- The latest write time of the arrived write data is, for each logical volume group of the storage device B 190, the latest (closest to the current time) of the write times applied to the write data received by the write data reception section B 211. However, if it appears from the sequential numbers that some of the write data has not yet arrived (i.e. some of the sequence of write data is missing), only the write data up to the item immediately preceding the missing data is considered: the latest write time within that gap-free run of sequential numbers is recorded as the arrived write time information.
- A plurality of items of write data may be transferred simultaneously in parallel.
- The write data is therefore not necessarily received by the write data reception section B 211 in the order of the sequential numbers; however, as will be described, the write data is reflected in sequential-number order for each logical volume group (i.e. it is stored in the logical volumes of the storage device B 190), so the write data is reflected to the copy in the order of updating (i.e. in the order of writing of the write data in the storage device A 100).
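A hedged sketch of this rule (illustrative only, not the patent's implementation): the arrived write time of a group can be computed as the latest write time within the gap-free run of sequential numbers, so that write data still in flight behind a gap is never counted.

```python
def arrived_write_time(received, last_reflected_seq):
    """Compute the arrived write time information 350 for one logical volume
    group.  `received` maps sequential number -> write time for write data that
    have arrived but not yet been reflected; `last_reflected_seq` is the highest
    sequential number already stored in the copy volumes.  Because data
    transferred in parallel may arrive out of order, only the run of sequential
    numbers with no gaps is considered."""
    latest = None
    seq = last_reflected_seq + 1
    while seq in received:                          # walk the gap-free prefix only
        t = received[seq]
        latest = t if latest is None else max(latest, t)
        seq += 1
    return latest                                   # None if nothing has arrived yet

# Example: items 1, 2 and 4 have arrived; item 3 is missing, so the time of
# item 4 must not be counted yet.
assert arrived_write_time({1: 10, 2: 12, 4: 15}, 0) == 12
```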
- The write data reception section B 211 then reports completion of reception of the write data to the write data transfer section A 220 (step 1105).
- The write data transfer section A 220 of the storage device A 100 that has received this report turns the "transfer required" bit of the write data management information 330 OFF for the write data corresponding to the report of completion of reception.
- The storage device A 100 may then discard from the cache the write data that was being held for transfer to the storage device B 190.
- FIG. 9 is a view showing an example of the reflection processing of write data in the storage device B 190 (i.e. the processing of storing the write data to the logical volume).
- First, the write data reflection instruction section B 230 checks the arrived write time information 350 of all the logical volume groups of the storage device B 190 and finds, of these, the earliest time (step 1200).
- The write data reflection instruction section B 230 then gives instructions (or permission) (step 1201) to the write data reflection section B 240 for reflection, to the logical volumes, of the write data whose write time is previous to the time thus found.
- When the write data reflection section B 240 receives these instructions (or permission), it refers to the write data management information 330 and group management information 310 and reflects the write data in the designated time range (i.e. stores it in the logical volumes 500 in which the copies are stored) in sequential-number order within each logical volume group (step 1202).
- The write data reflection section B 240 then reports completion of the instructed processing (step 1203) to the write data reflection instruction section B 230.
- The storage device B 190 may discard the reflected write data from the cache 400.
- By means of the above processing from step 1200 to step 1203, one cycle of reflection processing is completed.
- The write data reflection instruction section B 230 and the write data reflection section B 240 repeat the above cycle in order to continuously reflect the write data transferred from the storage device A 100.
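The cycle of steps 1200 to 1203 can be sketched as follows (an illustrative Python model, not the patent's implementation); groups are assumed to be simple dicts holding an arrived write time and the pending write data, and apply_to_volume stands in for the actual storage to the copy volume.

```python
def reflection_cycle(groups, apply_to_volume):
    """One cycle of FIG. 9: the reflection instruction section finds the
    earliest arrived write time over all logical volume groups (step 1200), and
    the reflection section then stores, in sequential-number order within each
    group, every item of write data whose write time is previous to that cutoff
    (steps 1201-1202)."""
    cutoff = min(g["arrived_write_time"] for g in groups)   # step 1200
    for g in groups:
        pending = sorted(g["pending_write_data"],
                         key=lambda wd: wd["sequential_number"])
        for wd in pending:
            if wd["write_time"] < cutoff:                   # only data known to be complete everywhere
                apply_to_volume(wd)                         # store to the logical volume holding the copy
                g["pending_write_data"].remove(wd)
    return cutoff                                           # step 1203: completion is then reported
```

Because the cutoff is the minimum over all groups, no group ever stores an update that another group has not yet fully received, which is what keeps the copies mutually consistent.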
- In this way, a copy of the updated data is stored in the storage device B 190 while maintaining the order between updates of data by the mainframe host and updates of data by the open system host.
- Regarding data consistency between the copies, mutual consistency can be maintained between the data of the mainframe host and the data of the open system host.
- That is, the storage device A 100 utilizes the write time 650 contained in the write request 630 received from the mainframe host, applies a write time also to the write data received from the open system host, and manages the received write data using both the write times and the sequential numbers.
- The target storage device B 190 designates the write data that is capable of being reflected (i.e. that is capable of storage in a logical volume on the target side) using the sequential numbers and the write times, and stores the designated write data in a logical volume on the target side.
- Since write times are applied to all of the write data received by the storage device A 100, irrespective of whether the host that employs the data is a mainframe host or an open system host, it is possible to ascertain, for example, up to which write time the write data in any desired logical volume 500 has been transferred from the storage device A 100 to the storage device B 190, has arrived at the storage device B 190, or has been reflected at the storage device B 190 (i.e. has been stored in a logical volume).
- It should be noted that the write data in the designated time range may instead be stored in the logical volume 500 that stores the copy in sequential-number order within the various logical volume groups, neglecting the write-time order.
- In that case, in order to maintain consistency between the copies (i.e. between the logical volumes of the storage device B 190 on the target side), a snapshot of the logical volume 500 in which the copy is stored may be acquired with the timing of the report of completion of processing.
- For example, the technique disclosed in U.S. Pat. No. 6,658,434 may be employed as a method of acquiring such a snapshot.
- In this technique, the storage content of a logical volume 500 (source volume), in which is stored the data whereof a snapshot is to be acquired, is copied to another logical volume 500 (target volume) of the storage device B 190, so that the updated content is also reflected to the target volume when the source volume is updated.
- The content of the target volume is then frozen and fixed by stopping reflection at that time.
- In the above description, the write data transfer section A 220 transfers the write data to the write data reception section B 211; however, it would also be possible for the write data reception section B 211 to first issue a write data transfer request to the write data transfer section A 220, and for the write data transfer section A 220 to transfer the write data to the write data reception section B 211 after having received this request.
- In this way, the pace of transfer of write data can be adjusted in accordance with, for example, the processing condition or load of the storage device B 190 or the amount of write data that has been accumulated.
- Also, in the above description, the location of storage of the write data was the cache 400; however, by preparing a separate logical volume 500 for write data storage, the write data could be stored in this logical volume 500.
- A logical volume 500 of larger capacity than the cache 400 may be prepared, making it possible for more write data to be accumulated.
- FIG. 10 shows an example of the processing that is executed when a write request for a logical volume 500 (a logical volume 500 constituting the source) of which the storage device A 100 creates a copy is received from the MFA 600 or open system host A 700.
- This processing corresponds to the processing shown in FIG. 3.
- First, the write data reception section A 210 receives (step 1300) a write request from the MFA 600 or open system host A 700.
- Next, the write data reception section A 210 stores (step 1301) the write data in the cache 400 and applies a write time to the write data by referring to the write time information 340, which is constantly updated in accordance with the clock provided in the storage device A 100, and creates (step 1302) write data management information 330 by applying a sequential number to the write data with reference to the group management information 310. Finally, completion of writing is reported to the MFA 600 or open system host A 700 (step 1303).
- FIG. 11 shows an example of the processing when the storage device A 100 has received, from the MFA 600 or open system host A 700, a write request in respect of the logical volume 500 (the logical volume 500 constituting the source) of which the copy is created, in a case where the storage device A 100 itself updates the write time information 340.
- This processing corresponds to the processing of FIG. 3 or FIG. 10.
- In this case, the initial value of the write time information 340 may for example be 0, and numbers successively incremented by 1 may be applied to the write data, as shown below, as the write times.
- First, the write data reception section A 210 receives a write request (step 1400) from the MFA 600 or open system host A 700.
- Next, the write data reception section A 210 stores the write data in the cache 400 (step 1401), reads the number from the write time information 340, and applies to the write data (step 1402), as the write time, the value obtained by incrementing this number by 1. Then the write data reception section A 210 records the value after incrementing by 1 as the write time information 340, thereby updating the write time information 340 (step 1403).
- The write data reception section A 210 also creates the write data management information 330 (step 1405) by applying a sequential number to the write data (step 1404) with reference to the group management information 310.
- Finally, the write data reception section A 210 reports completion of writing (step 1406) to the MFA 600 or open system host A 700.
- When a sequential number is employed as the write time in this manner, in the storage device B 190, instead of the write data reception section B 211 updating the arrived write time information 350 using the write time applied to the received write data, and the write data reflection instruction section B 230 designating the range of write data capable of being stored in a logical volume of the storage device B 190 by checking the arrived write time information 350 of the various logical volume groups, it may be arranged for the write data reflection section 240 to reflect (i.e. store) the write data arriving at the storage device B 190 into the logical volume 500, referring to the sequential number recorded as the write time in the write data management information 330, without skipping numbers in the number sequence.
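A minimal sketch of this counter-based variant (FIG. 11), assuming the write time information 340 is simply an integer held by the storage device A 100; the class name is illustrative and not taken from the patent.

```python
class WriteTimeCounter:
    """Illustrative model of FIG. 11: when no host supplies a write time 650,
    the storage device A can maintain the write time information 340 itself as
    a simple counter, so the 'write times' form a monotonically increasing
    sequence assigned to every item of write data."""

    def __init__(self):
        self.write_time_info = 0          # initial value of the write time information 340

    def next_write_time(self) -> int:
        # steps 1402-1403: apply the incremented value to the write data and
        # record it back as the new write time information 340
        self.write_time_info += 1
        return self.write_time_info

# Example: three successive writes receive write times 1, 2, 3.
counter = WriteTimeCounter()
assert [counter.next_write_time() for _ in range(3)] == [1, 2, 3]
```

With this variant, "write times" behave like a second sequential number, which is why the storage device B 190 can simply reflect the write data in unbroken sequential-number order.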
- FIG. 12 is a view showing an example of the layout of a computer system according to a second embodiment.
- The differences with respect to embodiment 1 lie in that the MFA 600 and open system host A 700 are connected with a storage device C 180 through an I/O path 900, and the storage device C 180 is connected with the storage device A 100 through a transfer path 910.
- A copy of the data stored in the logical volume 500 of the storage device C 180 is stored in a logical volume 500 of the storage device A 100.
- A copy of the data stored in the logical volume 500 of the storage device A 100 is stored in the logical volume 500 of the storage device B 190 by processing like that described in embodiment 1. That is, in this embodiment, a copy of the data stored in the logical volume 500 of the storage device C 180 is stored in both the storage device A 100 and the storage device B 190.
- The storage device C 180 is provided with the various items of information and a construction like those of the storage device A 100 described in embodiment 1.
- When the storage device C 180 has received a write request 630 or a write request 730 for the logical volume 500 from the MFA 600 or open system host A 700, it stores the received write data 640 or write data 740 in a logical volume in the storage device C 180 and transfers this to the write data reception section A 210 of the storage device A 100. At this point, in contrast to the processing described in embodiment 1, the storage device C 180 sends notification of completion of writing to the MFA 600 or open system host A 700 only after waiting for notification of completion of reception from the write data reception section A 210; the storage device C 180 is thereby able to guarantee that a copy of the write data 640 or write data 740 that was written to it is present in the storage device A 100.
- Consequently, the MFA 600 or open system host A 700 will not deem write data that has not been transferred to the storage device A 100 to have been written, but will only deem write data that has been received by the storage device A 100 to have actually been written; a copy as expected by the APP 620 on the MFA 600 or the APP 720 on the open system host A 700 will therefore exist on the storage device A 100.
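A sketch of this embodiment-2 ordering guarantee (illustrative only; the callables store_locally and forward_to_a are assumptions standing in for the storage device C 180's internal volume write and its synchronous transfer to the storage device A 100 over the transfer path 910):

```python
def handle_host_write_at_c(write_data, store_locally, forward_to_a):
    """Completion is reported to the host only after the storage device A 100
    has acknowledged reception, so every write acknowledged to the host is
    guaranteed to exist on the storage device A 100 as well."""
    store_locally(write_data)          # store in a logical volume of the storage device C 180
    ack = forward_to_a(write_data)     # synchronous transfer to the write data reception section A 210
    if not ack:
        raise IOError("storage device A did not acknowledge the write data")
    return "write completed"           # only now is completion reported to the MFA/open host
```

The design choice modeled here is the one described above: the acknowledgement to the host is deliberately delayed until the downstream copy exists, trading write latency for the guarantee that no acknowledged write can be lost if the storage device C 180 fails.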
- In the storage device C 180, the write time information 340 is updated with the write time 650 applied to the write data; if a write time 650 is included in the received write request 630, the write data reception section C 212 of the storage device C 180 also records the write time in the write data management information 330, and the write data transfer section C 222 also transfers this write time to the write data reception section A 210 of the storage device A 100 when performing write data transfer.
- The write data reception section A 210 processes the write data and the write time received from the storage device C 180 by the same method as the processing of the write request 630 received from the mainframe host in embodiment 1; consistency between the copies stored in the logical volumes of the storage device A 100 is thereby maintained, and consistency between the write data issued from the mainframe host and the write data issued from the open system host can thereby be maintained.
- Thus, a mainframe host or open system host that is connected with the storage device A 100 may continue the business that was being conducted by the MFA 600 or open system host A 700, using the consistent content of a logical volume 500 of the storage device A 100, in the event that a fault occurs in the MFA 600, the open system host A 700, or the storage device C 180.
- FIG. 13 is a view showing an example of the construction of a computer system according to Embodiment 3.
- The chief differences with respect to embodiment 1 lie in that there are a plurality of storage devices A 100 and a plurality of storage devices B 190, the MFA 600 and open system host A 700 are each connected through I/O paths 900 with the plurality of storage devices A 100, the MFB 690 and the open system host B 790 are each connected through I/O paths 900 with the plurality of storage devices B 190, the MFA 600 includes management software A 800, and the MFB 690 includes management software B 890.
- Other differences will be described below.
- FIG. 14 is a view showing an example of the processing when a write request in respect of the logical volume 500 (the logical volume 500 constituting the source) of which a copy is created by the storage device A 100 is received from the MFA 600 or open system host A 700.
- First, the write data reception section A 210 receives (step 1500) a write request from the MFA 600 or open system host A 700.
- The write data reception section A 210 stores the write data in the cache 400 (step 1501) and, as in embodiment 1, creates write data management information 330 (step 1502) by acquiring a sequential number with reference to the group management information 310.
- The write data reception section A 210 then reports completion of writing to the MFA 600 or open system host A 700 (step 1503).
- The group management information 310 is the same as that in the case of embodiment 1.
- The write data management information 330 of this embodiment will be described later.
- FIG. 15 is a view showing an example of the processing when the management software A 800 gives instructions for deferment of processing of write requests in respect of the storage device A 100 and creation of a marker.
- Consistency is established between the copies stored in the plurality of storage devices B 190 by subsequently synchronizing reflection to the copies at the point, in the sequence of updates of the logical volumes 500 of the storage devices A 100, at which this processing was performed.
- First, the management software A 800 gives instructions for deferment of processing of write requests to all of the storage devices A 100 (step 1600).
- In each storage device A 100, the write data reception section A 210 defers processing of write requests (step 1601) and reports to the management software A 800 that deferment has commenced (step 1602).
- After confirming that deferment has commenced in all of the designated storage devices A 100, the management software A 800 advances to the following processing (step 1603 and step 1604).
- The management software A 800 then instructs all of the storage devices A 100 to create markers (step 1605).
- This instruction includes a marker number as a parameter.
- The marker number will be described subsequently.
- The marker creation section A 250 records the received marker number in the marker number information 360 (shown in FIG. 16) stored in the control memory 300 (step 1606) and creates (step 1607), for all of the logical volume groups, special write data (hereinbelow called a marker) used for information transmission.
- A marker is write data in which a marker attribute is set in the write data management information 330.
- FIG. 17 is a view showing an example of the write data management information 330 of write data in this embodiment; a marker attribute bit and a marker number are added to the write data management information 330 of embodiment 1.
- The marker attribute bit is a bit indicating that the write data in question is a marker; it is OFF in the case of ordinary write data but is set to ON in the case of a marker.
- A marker number as described above is set in the "marker number" field.
- A sequential number in the group is acquired and applied to a marker in the same way as in the case of ordinary write data.
- That is, the marker creation section A 250 obtains the sequential number from the group management information 310 of the group in the same way as in the processing of the write data reception section A 210, records a value obtained by adding 1 thereto in the write data management information 330 as the sequential number of the marker, and records the new sequential number in the group management information 310.
- After a sequential number has been applied to the marker in this way, the marker is transferred to the storage device B 190 in the same way as ordinary write data, but the marker is not reflected to the logical volume 500.
- The marker number is a number for identifying the instruction in response to which the marker was created; when a marker creation instruction is issued by the management software A 800, for example, the initial value thereof is 0 and the marker number is incremented by 1 before being issued.
- The management software A 800 may confirm the current marker number by reading the marker number recorded in the marker number information 360.
- The marker creation section A 250 reports completion of marker creation to the management software A 800 (step 1608). After confirming that completion of marker creation has been reported from all of the designated storage devices A 100, the management software A 800 proceeds to the subsequent processing (step 1609, step 1610).
- The management software A 800 gives instructions (step 1611) for cancellation of deferment of processing of write requests to all of the storage devices A 100.
- The write data reception section A 210 cancels deferment of processing of write requests (step 1612) and reports to the management software A 800 that the deferment has been cancelled (step 1613).
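The marker protocol of FIG. 15 can be sketched as follows (an illustrative Python model, not the patent's implementation); each element of storage_devices_a is assumed to expose defer_writes, create_marker, and resume_writes operations corresponding to steps 1601, 1606-1607, and 1612.

```python
def create_consistency_marker(storage_devices_a, marker_number):
    """Defer write processing on every storage device A 100, have each of them
    create a marker (special write data carrying the marker number and a
    per-group sequential number), and then release the deferment."""
    # steps 1600-1604: defer write request processing everywhere and wait for confirmation
    for dev in storage_devices_a:
        dev.defer_writes()
    # steps 1605-1610: create a marker with the same marker number in every logical volume group
    for dev in storage_devices_a:
        dev.create_marker(marker_number)
    # steps 1611-1613: cancel the deferment once all markers have been created
    for dev in storage_devices_a:
        dev.resume_writes()
    return marker_number
```

Because no storage device A 100 accepts new writes while the markers are being created, the markers mark the same point (checkpoint) in the update stream of every source volume, which is what later allows the target side to synchronize on them.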
- FIG. 18 is a view showing an example of transfer processing of write data to a storage device B 190 from a storage device A 100 .
- This processing is substantially the same as the transfer processing described in FIG. 6 of embodiment 1, but differs in that no updating of the arrived write time information 350 is performed by the write data reception section B 211 .
- The write data management information 330 of the storage device B 190 is the same as the write data management information shown in FIG. 17, described above; in step 1703, the presence or absence of the marker attribute of the received write data and/or the marker number are also recorded in the write data management information 330.
- FIG. 19 is a view showing an example of the processing of reflection (storage) of write data to a logical volume in the storage device B 190 .
- First, the management software B 890 instructs all of the storage devices B 190 (step 1800) to reflect the write data, as far as the marker, to the logical volume 500 in which the copy is stored.
- The write data reflection section B 240 refers to the write data management information 330 and the group management information 310 and reflects (step 1801) the write data as far as the marker, in sequential-number order in each group, to the logical volume 500 in which the copy is stored.
- That is, the write data reflection section B 240 continues to store the write data in the logical volume in the order of the sequential numbers, but stops data storage processing on finding write data with the marker attribute (i.e. a marker), and then reports completion of reflection to the management software B 890 (step 1802).
- At this point, the write data reflection section B 240 checks the marker number of the marker recorded in the write data management information 330 and thereby ascertains whether the marker number is correct (whether the marker conforms to the same rules as the marker number decision rules described above, for example being a number whose initial value is 0 and that is incremented by 1 with respect to the previous marker number).
- If the marker number is not correct, the write data reflection section B 240 reports an abnormal situation to the management software B 890; if the marker number is correct, it records the marker number in the marker number information 360 and reports a normal situation.
- The management software B 890 may confirm the current marker number by reading the marker number that is recorded in the marker number information 360.
- After confirming that a "normal reflection completed" report has been obtained from all of the storage devices B 190 that had been designated, the management software B 890 proceeds to the next processing (step 1803, step 1804).
- Next, the management software B 890 gives instructions (step 1805) for updating of the snapshot of the logical volume 500 that stores the copy to all of the storage devices B 190.
- The snapshot acquisition section B 260 updates (step 1806) the snapshot of the content of the logical volume 500.
- As the method of acquiring such a snapshot, for example, the technique disclosed in U.S. Pat. No. 6,658,434 may be employed. It should be noted that, in this embodiment, just as in the case of the method described in embodiment 1, reflection of the write data to the volume that stores the snapshot data is stopped at the time of acquisition of the snapshot, and the content of the volume that stores the snapshot is frozen.
- The snapshot acquisition section B 260 reports completion of snapshot updating to the management software B 890 (step 1807). After confirming that a report of completion of snapshot updating has been obtained from all of the storage devices B 190 that were designated, the management software B 890 proceeds to the next processing (step 1808, step 1809).
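A per-group sketch of the reflect-up-to-marker step of FIG. 19 (illustrative only; write data records are assumed to be dicts with sequential_number, marker, and marker_number fields, and apply_to_volume stands in for storage to the copy volume):

```python
def reflect_up_to_marker(group, apply_to_volume, expected_marker_number):
    """Store write data to the copy volume in sequential-number order until a
    marker is found, check the marker number against the expected one, and stop
    so that the snapshot can then be updated at a point that is consistent
    across all storage devices B 190."""
    pending = sorted(group["pending_write_data"],
                     key=lambda wd: wd["sequential_number"])
    for wd in pending:
        if wd.get("marker"):                                  # marker found: stop reflecting
            if wd["marker_number"] != expected_marker_number:
                raise ValueError("unexpected marker number")  # report an abnormal situation
            group["pending_write_data"].remove(wd)            # the marker itself is never reflected
            return "reached marker %d" % expected_marker_number
        apply_to_volume(wd)                                   # ordinary write data: store to the copy
        group["pending_write_data"].remove(wd)
    return "marker not yet received"                          # keep waiting for more write data
```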
- The management software A 800 and the management software B 890 respectively repeat the processing of the aforesaid step 1600 to step 1613 and of step 1800 to step 1809. In this way, updates made to the logical volumes 500 of the storage devices A 100 are constantly reflected to the logical volumes 500 of the storage devices B 190.
- In this scheme, data updating by the MFA 600 and the open system host A 700 is stopped and a marker is created at a timing (checkpoint) at which the update condition is unified among the plurality of storage devices; reflection (i.e. storage) of the updated data to the copy data stored in the plurality of target logical volumes provided in the plurality of target storage devices B 190 can be synchronized at the time immediately preceding the writing of the marker, so mutual consistency between the various copies, covering both the data of the mainframe host and the data of the open system host, can be obtained as of the time of this marker.
- Thus, the MFB 690 or open system host B 790 can continue business using the consistent data stored in the snapshot volume, since a copy having mutual consistency is held in the snapshot volume, this snapshot being acquired by reflection of the updated data to the copy data at a time that is synchronized among the plurality of copies.
- FIG. 20 shows an example of the reflection processing of write data to the copy in the storage devices B 190 in this case.
- First, the management software B 890 gives instructions (step 1900) for reflection of the write data, as far as the marker, to the logical volume 500 that stores the copy in all of the storage devices B 190.
- The write data reflection section B 240 reflects the write data in the same way as in the processing described with reference to FIG. 19, but stops the reflection as soon as it finds a marker and notifies the snapshot acquisition section B 260 (step 1901).
- The snapshot acquisition section B 260 updates the snapshot of the content of the logical volume 500 and notifies the write data reflection section B 240 (step 1902).
- The write data reflection section B 240 then reports completion of reflection to the management software B 890 (step 1903).
- The management software B 890 confirms that a report of completion has been obtained from all of the storage devices B 190 that were designated and then proceeds to the next processing (step 1904, step 1905).
- In the above description, the storage device A 100 or storage device B 190 reported completion of processing in respect of the various types of instructions from the management software A 800 or management software B 890.
- However, it would also be possible for completion of the various types of processing by the storage device A 100 or storage device B 190 to be detected by the management software A 800 or management software B 890 periodically making inquiries of the storage device A 100 or storage device B 190 regarding their processing condition in respect of the aforesaid instructions.
- Also, in the above description, transfer processing of write data from the storage device A 100 to the storage device B 190 is performed continuously; however, it would also be possible for the storage device A 100 to create a marker and then stop transfer of write data and, in addition, for the storage device B 190, after detecting the received marker in reflection processing (i.e. after reflection of the write data previous to the marker), to stop reflection of the write data, i.e. to put the processing by the storage device A 100 and storage device B 190 into a stopped condition (also called a suspended condition).
- In this case, the storage device B 190 could perform write data reflection up to the detection of the marker without reference to instructions from the management software B 890.
- The marker creation instruction is then equivalent to an instruction to shift to the suspended condition, and mutually consistent copies are present in the logical volumes 500 of the storage devices B 190 at the time when all of the storage devices B 190 have shifted to the suspended condition.
- The copy processing is recommenced by the storage device A 100 and storage device B 190 in response to an instruction for recommencement of copy processing from the management software A 800 or management software B 890 after acquisition of the snapshot of the logical volume 500.
- Copies having mutual consistency are thus held in the data stored by the snapshots, so the MFB 690 or open system host B 790 can continue business using this consistent data.
- the various types of instructions, reports and exchange of information between the management software A 800 or management software B 890 and storage device A 100 and storage device B 190 may be executed by way of an I/O path 900 or could be executed by way of a network 920 .
- it may also be arranged that instructions for marker creation are given in the form of a write request to the storage device A 100; in this case, a logical volume 500 that is not subject to the deferment of processing of write requests is provided at the storage device A 100 and the marker creation instructions are given in respect of this logical volume 500.
- the storage device A 100 and storage device B 190 need not be connected in one-to-one relationship and it is not necessary that there should be the same number of devices, so long as the respective logical volumes 500 and logical volume groups correspond as source and copy.
- in this embodiment, the management software A 800 was present in the MFA 600 and the management software B 890 was present in the MFB 690; however, it would be possible for the management software A 800 and management software B 890 to be present in any of the MFA 600, MFB 690, open system host A 700, open system host B 790, storage device A 100 or storage device B 190. Also, they could be present in another computer, not shown, connected with the storage device A 100 or storage device B 190.
- in the above description, the write data reflection section B 240 determined the correct marker number, but it would also be possible for the correct marker number to be designated to the storage device B 190 as a parameter of the reflection instructions by the management software B 890. Also, it could be arranged that, when the management software A 800 gives instructions for deferment of processing of write requests and marker creation to the storage device A 100, a unique marker number is determined and designated to the storage device A 100 and communicated to the management software B 890, and that this management software B 890 then designates this marker number to the storage device B 190.
- the occasion on which the management software A 800 gives instructions for deferment of processing of write requests and marker creation to the storage device A 100 may be determined in a manner linked with the processing of the APP 620 or APP 720.
- synchronization of reflection to the copy may be performed at the checkpoint by giving instructions for deferment of write request processing and marker creation on the occasion of creation of a DBMS checkpoint.
- Business can therefore be continued by the MFB 690 or open system host B 790 using the data of this condition, by obtaining a snapshot in the condition in which the stored content of the source logical volume 500 at the checkpoint has been reflected to the copy in the target logical volume.
- it could also be arranged for the MFA 600 or open system host A 700 to defer issue of a write request to the storage device A 100, or to restart such issue, by linking the OS 610 or OS 710 with the management software A 800, instead of the management software A 800 giving instructions for deferment of processing of write requests and cancellation of deferment in respect of the storage device A 100.
- a logical volume for write data storage that is separate from the cache 400 could be prepared and the write data stored in this logical volume 500 for write data storage. Also, in the transfer processing of write data, it would be possible for a write data transfer request to be initially issued in respect of the write data transfer section 220 by the write data reception section B 211 and for the write data to be transferred in respect of the write data reception section B 211 by the write data transfer section A 220 after receiving this request.
- the processing described in this embodiment could also be implemented even if the write request does not contain a write time.
- FIG. 21 is a view showing an example of the layout of a computer system in embodiment 4.
- the difference with respect to embodiment 3 lies in that the MFA 600 and the open system host A 700 are respectively connected with a plurality of storage devices C 180 by way of an I/O path 900 and the plurality of storage devices C 180 are connected with a plurality of storage devices A 100 by way of a transfer path 910.
- the plurality of storage devices C 180 are connected with another computer or device by means of a network 920 .
- the storage device A 100 and the storage device B 190 of embodiment 4 have the same construction and function as the storage device A 100 and storage device B 190 in embodiment 3.
- a copy of the data stored in the logical volume 500 of the storage device C 180 is stored in the logical volume 500 of the storage device A 100 .
- the storage device C 180 comprises the same construction and various types of information as in embodiment 2, and, after receiving a write request to the logical volume 500 from the MFA 600 or open system host A 700, the storage device C 180 stores the write data that it has received and transfers this received write data to the write data reception section A 210 of the storage device A 100; it is then guaranteed that a copy of the write data 640 or write data 740 that was written to the storage device C 180 exists in the storage device A 100, since a write completion notification is sent to the MFA 600 or open system host A 700 only after waiting for a notification of completion of reception from the write data reception section A 210, in the same way as in embodiment 2.
- a copy of the data stored in the logical volume 500 of the storage device C 180 is stored in a logical volume 500 of the storage device B 190 by the same processing as the processing described in embodiment 3.
- the expected content that was recognized as having been stored in the storage device C 180 at the time when processing of the MFA 600 or open system host A 700 was interrupted can still be obtained from the storage device B 190, so the MFB 690 or open system host B 790 can continue business using this data.
- the management software A 800 gives instructions for deferment of processing of write requests or marker creation or cancellation of deferment of processing of write requests in respect of all of the storage devices C 180 in the same way as in the case of the processing performed in respect of the storage device A 100 in embodiment 3.
- the management software A 800 first of all gives instructions for deferment of processing of write requests to all of the storage devices C 180 .
- the write data reception section C 212 of the storage device C 180 defers processing of write requests in the same way as in the case of the processing performed by the storage device A 100 in step 1601 and step 1602 of embodiment 3 and reports commencement of deferment to the management software A 800 .
- the management software A 800 confirms that a report of commencement of deferment has been obtained from all of the designated storage devices C 180 before proceeding to the following processing.
- the management software A 800 gives instructions for marker creation to all of the storage devices C 180 in the same way as in the step 1605 of embodiment 3.
- the storage device C 180 transmits a marker creation instruction through the path 910 or network 920 to the storage device A 100 that stores the copy.
- the storage device A 100 creates a marker in the same way as in step 1606 , step 1607 and step 1608 of embodiment 3 and reports completion of marker creation to the storage device C 180 through the transfer path 910 or network 920 .
- the storage device C 180 reports completion of marker creation to the management software A 800 .
- the management software A 800 confirms that a report of completion of marker creation has been received from all of the designated storage devices C 180 in the same way as in step 1609 and step 1610 of embodiment 3 before proceeding to the next processing.
- the management software A 800 in the same way as in step 1611 of embodiment 3, gives instructions for cancellation of deferment of processing of write requests to all of the storage devices C 180 .
- the write data reception section C 212 of the storage device C 180 cancels the write request processing deferment in the same way as the processing that was performed by the storage device A 100 in step 1612 and step 1613 of embodiment 3 and reports this cancellation of deferment to the management software A 800 .
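- by way of illustration only, the following Python sketch outlines the sequence just described, in which the management software A 800 first defers write request processing at every storage device C 180, then has each storage device C 180 forward a marker creation instruction to the storage device A 100 that stores its copy, and only cancels deferment once all marker creation reports are in. The interfaces shown are assumptions, not the patent's own.

```python
class StorageDeviceA:
    """Stand-in for a storage device A 100 that stores the copy."""
    def create_marker(self):
        return "completion of marker creation"       # as in steps 1606 to 1608 of embodiment 3

class StorageDeviceC:
    """Stand-in for a storage device C 180 connected to the hosts."""
    def __init__(self, downstream_a):
        self.downstream_a = downstream_a             # the storage device A 100 holding its copy
        self.deferring = False

    def defer_write_processing(self):                # deferment of processing of write requests
        self.deferring = True
        return "commencement of deferment"

    def create_marker(self):                         # forwarded over the transfer path 910 or network 920
        return self.downstream_a.create_marker()

    def cancel_deferment(self):                      # cancellation of deferment
        self.deferring = False
        return "cancellation of deferment"

def checkpoint(storage_devices_c):                   # role of the management software A 800
    reports = [c.defer_write_processing() for c in storage_devices_c]   # instruct all devices C 180
    if all(r == "commencement of deferment" for r in reports):          # confirm every report first
        reports = [c.create_marker() for c in storage_devices_c]        # markers created by devices A 100
    if all(r == "completion of marker creation" for r in reports):
        reports = [c.cancel_deferment() for c in storage_devices_c]     # only then resume write processing
    return reports

print(checkpoint([StorageDeviceC(StorageDeviceA()), StorageDeviceC(StorageDeviceA())]))
```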
- in this way, deferment of processing of write requests and cancellation of deferment are performed by the storage device C 180, while marker creation is performed by the storage device A 100 upon transmission of an instruction from the storage device C 180 to the storage device A 100.
- write data in respect of which completion of writing has been notified to the MFA 600 or open system host A 700 has already been transferred to the storage device A 100 and write data management information 300 of such write data is created in the storage device A 100 , so deferment of processing of write requests by the storage device A 100 in embodiment 3 and deferment of processing of write requests by the storage device C 180 in this embodiment are equivalent.
- reflection of updating to the copies can be synchronized at the marker time by stopping data updating by the MFA 600 and open system host A 700 in the same way as in embodiment 3 and creating a marker of the updated condition with unified timing (checkpoint) between the plurality of storage devices; mutual consistency of the respective copies with the mainframe host data and the open system host data can thus be achieved at this time.
- mutually matched copies are maintained in snapshot volumes by acquiring snapshots at the time of synchronization of reflection and the MFB 690 or open system host B 790 can therefore continue business using matched data.
- in the above description, the management software A 800 gave instructions for marker creation to the storage devices C 180 and the storage devices C 180 transmitted these instructions to the storage devices A 100; however, it would also be possible for the management software A 800 to give instructions for marker creation directly to all of the storage devices A 100 and for the storage devices A 100 to report completion of marker creation to the management software A 800.
- in this case, the management software A 800 first of all gives instructions for deferment of write request processing to all of the storage devices C 180 and the management software A 800 confirms that reports of commencement of deferment have been received from all of the designated storage devices C 180 before giving instructions for marker creation to all of the storage devices A 100 in the same way as in step 1605 of embodiment 3.
- after having received these instructions, the storage device A 100 creates a marker in the same way as in step 1606, step 1607 and step 1608 of embodiment 3 and reports completion of marker creation to the management software A 800. After confirming that reports of completion of marker creation have been obtained from all of the designated storage devices A 100 in the same way as in step 1609 and step 1610 of embodiment 3, the management software A 800 may be arranged to give instructions for cancellation of deferment of write request processing to all of the storage devices C 180.
- the storage devices C 180 are provided with a marker creation section and marker number information 330 and create a marker on receipt of instructions for marker creation from the management software A 800 ; the marker, which has been created as write data, is then transferred to the storage device A 100 and completion of marker creation may be arranged to be reported to the management software A 800 when a report of receipt thereof has been received from the write data reception section 210 of the storage device A 100 .
- the storage device A 100 treats the received marker as a special type of write data, which is transferred to the storage device B 190 after processing in the same way as ordinary write data except that reflection to the copy is not performed.
- the above can be implemented irrespective of the number of storage devices C 180 that are connected with the storage devices A 100 and deposit copies on the storage devices A 100 .
- also, if a mainframe host and an open system host are connected with the storage devices A 100 by an I/O path and, for example, some fault occurs in the MFA 600 or open system host A 700 or storage devices C 180, the aforesaid mainframe host and open system host can continue business using the content of the logical volume 500 of the storage devices A 100 that is matched therewith.
Abstract
In a system in which data employed by a computer is stored in a storage system, the storage system transfers this data to another storage system and a copy of the data is maintained in the other storage system. The consistency of the copy is maintained even when data is written to the storage system by a computer without a write time applied. A source storage system, when a write time is applied to a write request, records the write time and applies this write time to the received write data and, when no write time is applied, applies the recorded write time to the received write data, and transfers the write data, with this write time applied thereto, to a target storage system. The target storage system stores the write data in a logical volume in the target storage system in accordance with the write time.
Description
- This application is a continuation application of U.S. Ser. No. 11/002,105, filed Dec. 3, 2004, which is a continuation application of U.S. Ser. No. 10/796,175, filed Mar. 10, 2004 (now U.S. Pat. No. 7,085,788).
- 1. Field of the Invention
- The present invention relates to a storage system that stores data that is employed by a computer and that receives updating of data from a computer, and in particular relates to processing for maintaining copies of data between a plurality of storage systems.
- 2. Description of the Related Art
- In Laid-open European Patent Application No. 0672985, a technique is disclosed whereby the data that is employed by a computer is stored by a storage system and a copy of this data is stored in a separate storage system arranged at a remote location, while reflecting the write sequence of the data. In the processing indicated in Laid-open European Patent Application No. 0672985, the source storage system that has received the write data from the primary host computer reports completion of reception of the write data to the primary host computer only after reception of the write data. After this, the primary host computer reads a copy of the write data from the source storage system. A write time, which is the time at which the write request in respect of the write data was issued, is applied to this write data and, when the write data is read by the primary host computer, the write time is also transferred to the primary host computer. In addition, the primary host computer transfers the write data and the write time to the secondary host computer. After receiving the write data and the write time, the secondary host computer writes information including the write time to a control volume in the storage system on the secondary side and, in addition, writes the write data in the target storage system in the write time sequence, with reference to the write times at which the various items of write data were presented. By writing the write data in the target storage system in the write time sequence, it is possible to maintain consistent data in the target storage system.
- If write data were to be reflected to the target storage system neglecting the write sequence (the operation of storing write data in the target storage system will hereinbelow be referred to as “reflecting” the data), for example in the case of a bank account database, in processing to transfer funds from an account A to an account B, it would not be possible to reproduce the debiting of the account A and the crediting of the account B as a single transaction and it would be possible for example for a period to occur in the target storage system in which the balance of the account B was credited before debiting of the balance of the account A. If, in this case, some fault occurred in the source storage system rendering it unusable prior to debiting the balance of the account A in the target storage system, mismatching data would be left in the target storage system, with the result that incorrect processing would be performed if business were to be subsequently continued using the secondary host computer. Consequently, by storing the write data in the target storage system preserving the write sequence, consistent data can be maintained, making it possible to guarantee correctness of a sequence of related operations in respect of related data.
- U.S. Pat. No. 6,092,066 discloses a technique whereby the data that is used by a computer is stored in a storage system and, by copying the data that is stored in this storage system to a separate storage system arranged at a remote location, the data can be maintained in the separate storage system even if the first storage system has become unusable due to for example a natural disaster or fire.
- U.S. Pat. No. 6,209,002 discloses a technique whereby data employed by a computer is stored in a storage system and, by copying the data that is stored in this storage system to a separate storage system arranged at a remote location, and additionally copying the data that has been received by this separate storage system to a third storage system, a high level of redundancy can be obtained in respect of data.
- In the technique that is disclosed in Laid-open European Patent Application No. 0672985, consistency of the copy of data stored in the target storage system cannot be maintained unless the host computer applies a write time to the write data, since the write sequence is maintained using the write time applied to the write data by the host computer when the write data from the host computer is reflected to the target storage system. In the case of a so-called mainframe host computer, the write time is applied to the write request, but, in the case of a so-called open system host computer, the write time is not applied to the write request. Consequently, in the technique disclosed in Laid-open European Patent Application No. 0672985, consistency of the copy of the data stored in the target storage system with I/O from an open system host computer cannot be maintained.
- Also in the case of U.S. Pat. No. 6,092,066 and U.S. Pat. No. 6,209,002, there is no disclosure concerning maintenance of consistency of a copy of data stored in a target storage system when the host computers include an open system host computer.
- Accordingly, in a computer system in which data that is employed by computer is stored in a storage system and the data that is stored in this storage system is transferred to a separate storage system so that a copy of the data is also held in this separate storage system, there is herein disclosed a technique for maintaining consistency of the copy of the data stored in the separate storage system (i.e. the target storage system) even in respect of data written to the storage system by a host computer that does not apply a write time to the write data, such as an open system host computer.
- The system comprises a first storage device system having a first logical volume coupled to a computer and in which data received from the computer is stored and a second storage device system coupled to the first storage device system and having a second logical volume in which a copy of data stored in the first logical volume is stored.
- The first storage device system applies time information to the write data received from the computer and sends the write data and this time information to the second storage device system; the second storage device system stores the write data received from the first storage device system in the second logical volume in accordance with the time information applied to this write data.
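- for illustration only, a minimal Python sketch of the behaviour summarized above might look as follows: the first storage device system applies time information to every item of received write data, reusing the most recently recorded write time for requests that carry none, and the second storage device system stores the transferred write data in accordance with that time information. All names below are assumptions; tie-breaking by sequential number, described in the embodiments, is omitted here.

```python
class FirstStorageSystem:
    def __init__(self):
        self.recorded_write_time = 0       # last write time received from a host that supplies one
        self.outbound = []                 # (time, data) pairs queued for the second storage device system

    def write(self, data, write_time=None):
        if write_time is not None:         # e.g. a request from a mainframe host computer
            self.recorded_write_time = write_time
        else:                              # e.g. a request from an open system host computer
            write_time = self.recorded_write_time
        self.outbound.append((write_time, data))
        return "write completed"

class SecondStorageSystem:
    def __init__(self):
        self.second_logical_volume = []

    def store(self, transferred):
        # store in accordance with the time information; stable sort keeps arrival order for equal times
        for write_time, data in sorted(transferred, key=lambda item: item[0]):
            self.second_logical_volume.append((write_time, data))

src, dst = FirstStorageSystem(), SecondStorageSystem()
src.write(b"mainframe record", write_time=10)
src.write(b"open system record")           # no write time applied by the host; the recorded time 10 is reused
dst.store(src.outbound)
print(dst.second_logical_volume)
```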
- In a computer system in which data that is employed by computer is stored in a storage system and the data that is stored in this storage system is transferred to a separate storage system so that a copy of the data is also held in this separate storage system, it is thereby possible to maintain consistency of the copy of the data that is stored in the separate storage system (target storage system), even in the case of data stored in the storage system by a host computer that does not apply the write time to the write data, such as an open system host computer.
- FIG. 1 is a view showing an example of the layout of a computer system according to embodiment 1;
- FIG. 2 is a diagram showing an example of a logical volume group;
- FIG. 3 is a flow diagram showing an example of processing in the case where a write request is received by a storage device A;
- FIG. 4 is a view showing an example of group management information;
- FIG. 5 is a view showing an example of write data management information for managing write data;
- FIG. 6 is a flow diagram showing an example of transfer processing of write data from the storage device A to a storage device B;
- FIG. 7 is a view showing an example of remote logical volume information of a logical volume;
- FIG. 8 is a view showing an example of arrived write time information;
- FIG. 9 is a flow diagram showing an example of reflection processing of write data in the storage device B;
- FIG. 10 is a flow diagram showing another example of processing in the case where the storage device A has received a write request;
- FIG. 11 is a flow diagram showing another example of processing in the case where the storage device A has received a write request;
- FIG. 12 is a view showing an example of the layout of a computer system according to embodiment 2;
- FIG. 13 is a view showing an example of the layout of a computer system according to embodiment 3;
- FIG. 14 is a flow diagram showing another example of processing in the case where the storage device A in embodiment 3 has received a write request;
- FIG. 15 is a flow diagram showing an example of processing in the case where the management software A gives instructions for deferring processing of a write request in respect of the storage device A and creation of a marker;
- FIG. 16 is a view showing an example of marker number information;
- FIG. 17 is a view showing another example of write data management information;
- FIG. 18 is a flow diagram showing an example of transfer processing of write data from the storage device A in embodiment 3 to the storage device B;
- FIG. 19 is a flow diagram showing an example of reflection processing of write data in the storage device B in embodiment 3;
- FIG. 20 is a flow diagram showing another example of reflection processing of write data in the storage device B in embodiment 3; and
- FIG. 21 is a view showing an example of the layout of a computer system according to embodiment 4.
- Embodiments of the present invention are described below. However, it should be noted that the present invention is not restricted to the embodiments described below.
- FIG. 1 is a view showing an example of the layout of a computer system according to a first embodiment.
- This system comprises a storage device (also referred to as a storage system) A 100, a mainframe host computer A (also called MFA) 600, an open system host computer A 700, a storage device B 190, a mainframe host computer B (also referred to as MFB) 690 and an open system host computer B 790. The storage device A 100, the MFA 600 and the open system host A 700 are respectively connected by I/O paths 900. The storage device B 190, the MFB 690 and the open system host B 790 are also respectively connected by I/O paths 900. The MFB 690 and the open system host B 790 are normally a standby system. The MFA 600, MFB 690, open system host A 700 and open system host B 790 are connected by a network 920. - The
MFA 600 andMFB 690 include anOS 610 and application software (APP) 620. Also, the opensystem host A 700 and opensystem host B 790 likewise include anOS 710 andAPP 720. An I/O request issued from the APP of theMFA 600,MFB 690, opensystem host A 700, or opensystem host B 790 through the OS is issued to thestorage device A 100 orstorage device B 190 through the I/O path 900. In this case, software such as a DBMS is included in theAPP 620 orAPP 720. - The
storage device A 100 comprises acontrol section 200,control memory 300 andcache 400. Thecontrol section 200 comprises a write datareception section A 210 and write datatransfer section A 220. Thecontrol section 200 accesses thecontrol memory 300 and performs the following processing, utilizing the information stored in thecontrol memory 300. Thecache 400 comprises high-speed memory that chiefly stores the read data or write data so that the storage device A can achieve a high I/O processing performance by employing thecache 400. It should be noted that, preferably, these components are duplicated and provided with back-up power sources, for purposes of fault resistance and availability. - The
storage device B 190 also comprises acontrol section 200,control memory 300 andcache 400. Thecontrol section 200 comprises a write datareception section B 211 and write datareflection instruction section 230 and writedata reflection section 240. The role of thecontrol memory 300 andcache 400 is the same as in the description of thestorage device A 100 above. - The
storage device A 100 andstorage device B 190 providelogical volumes 500 constituting data storage regions in respect of theMFA 600, opensystem host A 700,MFB 690 and opensystem host B 790. It is not necessary that a singlelogical volume 500 should constitute the single physical device; for example it could be constituted by a set of storage regions dispersed on a plurality of magnetic disc devices. Also, a logical volume may have for example a mirror construction or a construction that has redundancy such as for example a RAID construction, in which parity data is added. - The
storage device A 100 provides alogical volume 500 as described above; however, in the case of theMFA 600 and opensystem host A 700, the type oflogical volume 500 that is provided is different from that provided in the case of thestorage device A 100; also, the logical and/or physical interfaces of the I/O paths 900 are different. The same applies to thestorage device B 190,MFB 690 and opensystem host B 790. The time of thewrite request 630 is included in thewrite request 630 from theMFA 600 as thewrite time 650, but is not included in thewrite request 730 from the opensystem host A 700. - The
storage device A 100 and thestorage device B 190 are connected bytransfer paths 910. As will be described, thestorage device A 100 and thestorage device B 190 can hold a copy of the content of one logical volume in another logical volume. In this embodiment, a copy of the content of thelogical volume 500 of thestorage device A 100 is held in thelogical volume 500 of thestorage device B 190; the content of the updating performed on thelogical volume 500 of thestorage device A 100 is also stored in thelogical volume 500 of thestorage device B 190 by being sent to thestorage device B 190 through thetransfer path 910. As will be described, thestorage device A 100 and thestorage device B 200 hold management information regarding the copy, indicating the relationship between the logical volumes and maintenance of the copy referred to above is performed by using this management information. The relationship between the logical volumes and the relationship of the logical volume groups, to be described, is set by the user in accordance with the user's needs. - In this embodiment, the relationships between the logical volumes are grouped.
FIG. 2 is a diagram showing an example of a group of logical volumes. The broken lines indicate the copy relationship between thelogical volumes 500 or between the logical volume groups i.e. the correspondence relationship of the source and target. In this embodiment, the sequence of write data in thestorage device A 100 and reflection in thestorage device B 190 are managed in units of logical volume groups comprising a plurality of such logical volumes and allocation of the necessary resources for processing as described above is also performed in units of logical volume groups. - If these are performed for each of the individual logical volumes, the large number of items to be managed makes the management process complicated and there is also a possibility of the resources required for this processing being increased, due to the large number of items to be processed. On the other hand, if the entire
storage device A 100 is treated as a unit, detailed management can no longer be performed. In particular, since demands such as performance in regard to thelogical volumes 500 differ greatly between a mainframe host and an open system host, it is desirable to arrange for example for manual control operations from the user in regard to processing and setting such as of tuning conditions to be accepted separately, by arranging for such hosts to perform processing separately, divided into respective groups. By setting up logical volume groups in this way, flexible copy processing management can be provided in response to the requirements of users or businesses. - Next, processing of writing of data onto each
logical volume 500, transfer of data to astorage device B 190 and processing for reflection of data in thestorage device B 190 will be described for the case where thelogical volumes 500 that are used by theMFA 600 and the opensystem host A 700 are arranged to belong to different logical volume groups. By means of these processes, reflection to a copy is performed in write sequence between the various logical volumes of thestorage device A 100 and, regarding consistency between copies, it is arranged that mutual consistency can always be maintained between the mainframe host data and open system host data. -
FIG. 3 is a view showing the processing that is performed in the case where a write request is received from theMFA 600 or opensystem host A 700 in respect of a logical volume 500 (logical volume 500 constituting the source) where a copy of thelogical volume 500 is being created. The write datareception section A 210 receives a write request from theMFA 600 or open system host A 700 (step 1000). If thewrite time 650 is included in the write request that is received (step 1001), the write datareception section A 210 stores the write data in the cache 400 (step 1002) and creates (step 1003) writedata management information 330 by applying (assigning) a sequential number to the write data. The write datareception section A 210 then records thewrite time 650 in the writedata management information 330. Also, when the sequential number is applied, the write datareception section A 210 obtains the sequential number from thegroup management information 310 of the logical volume group to which the logical volume that is being written belongs and records a value obtained by adding 1 thereto in the writedata management information 330 as the sequential number of the write data, and records this new sequential number in thegroup management information 310. -
FIG. 4 is a view showing an example ofgroup management information 310 of the various logical volume groups. The group ID is the ID for identifying a logical volume group in thestorage device A 100. The sequential numbers are numbers that are continuously given to write data in respect of a logical volume belonging to the logical volume group in question. Numbers successively increased by 1 in each case are applied to such write data, the initial value being for example 0. The logical volume number is the number of the logical volume that belongs to the logical volume group in question. The logical volume number is the ID of the logical volume belonging to the logical volume group in question in thestorage device A 100. The remote storage device ID has a logical volume group that is paired with the logical volume group in question and is an ID (e.g. serial number) that specifies the storage device (in this embodiment, the storage device B 190) where a copy of the content of the logical volume belonging to the logical volume group in question is stored. The remote group ID is an ID that specifies the logical volume group that is paired with the logical volume group in question in the remote storage device (storage device B 190) i.e. the logical volume group to which the logical volume 500 (also called the remote logical volume) belongs in which a copy of the content of the logical volume belonging to the logical volume group in question is stored. -
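- as a non-limiting illustration, a plausible in-memory counterpart of the group management information 310 just described, together with the assignment of a sequential number, is sketched below in Python; the class and field names are assumptions rather than terminology used by this description.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GroupManagementInfo:                   # one entry per logical volume group
    group_id: int                            # ID identifying the group in the storage device A 100
    sequential_number: int = 0               # last number given continuously to write data of this group
    logical_volume_ids: List[int] = field(default_factory=list)  # member logical volumes 500
    remote_storage_device_id: str = ""       # e.g. serial number of the paired storage device B 190
    remote_group_id: int = -1                # paired logical volume group in that remote storage device

def next_sequential_number(group: GroupManagementInfo) -> int:
    """Apply the next sequential number: add 1 to the stored value and record it back."""
    group.sequential_number += 1
    return group.sequential_number

group = GroupManagementInfo(group_id=0, logical_volume_ids=[500])
print(next_sequential_number(group), next_sequential_number(group))   # -> 1 2
```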
FIG. 5 is a view showing an example of writedata management information 330 for managing the various write data. The logical volume ID is the ID of the logical volume in which the write data is stored. The write address is the write start address of the write data in question in the aforesaid logical volume. The write data length is the length of the write data in question. The write data pointer is the storage start address of the write data in question in thecache 400. The sequential numbers are numbers that are continuously given to write data in the logical volume group and to which the logical volume belongs in which the write data is written. The write time will be discussed below. The “transfer required” bit is a bit that indicates whether or not the write data in question needs to be transferred to the storage device B and is set to ON when writedata management information 330 is created by receipt of write data by the write datareception section A 210. The writedata management information 330 is managed in the form of a list for example for each logical volume group. - Returning to
FIG. 3 , instep 1004, the write datareception section A 210 records thewrite time 650 as thewrite time information 340 in thecontrol memory 300. - If, in
step 1001, no write time is included in the write request, the write datareception section A 210 stores the write data in the cache 400 (step 1005) and obtains from the write time information 340 a write time, which it applies (assigns) to the write data, and creates write data management information 330 (step 1006) by applying a sequential number obtained from thegroup management information 310. At this time, the write datareception section A 210 then records the time at which thewrite time information 340 was recorded, as the write time of the writedata management information 300, and finds a sequential number by the same procedure as in the case ofstep 1003 described above and records this sequential number in the writedata management information 300. - Finally, in
step 1007, completion of writing is reported to theMFA 600 or to the opensystem host A 700. The aforesaid processing does not include the time-consuming processing of physically writing the write data that is stored in thecache 400 to the recording medium of thelogical volume 500 or of transferring the write data to thestorage device B 190; this processing is performed subsequently in asynchronous fashion, with an appropriate timing. Consequently, the time required until reporting of completion of writing after receiving the write request by the write datareception section A 210 need only be a short time, so rapid response to theMFA 600 or opensystem host A 700 can be achieved. -
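- as an illustrative aid only, the following self-contained Python sketch traces the write request handling of FIG. 3 described above (steps 1000 to 1007), together with a simplified form of the write data management information 330 of FIG. 5: a write time is taken from the request when one is included and otherwise from the recorded write time information 340, and a sequential number is obtained per logical volume group. Every name below is an assumption made for readability.

```python
class WriteDataReceptionSectionA:
    """Simplified stand-in for the write data reception section A 210."""
    def __init__(self):
        self.cache = []                       # stands in for the cache 400
        self.write_time_information = 0       # write time information 340
        self.group_sequential_number = {0: 0} # group management information 310 (group ID -> last number)
        self.write_data_management = []       # write data management information 330 entries

    def receive_write(self, logical_volume_id, group_id, address, data, write_time=None):
        self.cache.append(data)                                   # steps 1002 / 1005
        if write_time is not None:                                # step 1001: write time 650 included?
            self.write_time_information = write_time              # step 1004: record it
        else:
            write_time = self.write_time_information              # step 1006: apply the recorded time
        self.group_sequential_number[group_id] += 1               # step 1003 / 1006: next sequential number
        self.write_data_management.append({                       # simplified management information 330
            "logical_volume_id": logical_volume_id,
            "write_address": address,
            "write_data_length": len(data),
            "write_data_pointer": len(self.cache) - 1,
            "sequential_number": self.group_sequential_number[group_id],
            "write_time": write_time,
            "transfer_required": True})
        return "completion of writing"                            # step 1007: reported before any transfer

a = WriteDataReceptionSectionA()
a.receive_write(500, 0, 0x100, b"mainframe data", write_time=42)  # request carrying a write time (MFA 600)
a.receive_write(501, 0, 0x200, b"open system data")               # request without one (open system host A 700)
print([w["write_time"] for w in a.write_data_management])         # -> [42, 42]
```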
FIG. 6 is a view showing an example of transfer processing of write data to thestorage device B 190 from thestorage device A 100. The write datatransfer section A 220 finds (step 1100) the information relating to the write data that is transferred to thestorage device B 190 by referring to the list of the writedata management information 330 to find the write data that needs to be transferred and, in addition, referring to the writedata management information 330,group management information 310 and remotelogical volume information 320. This information includes the write address acquired from the writedata management information 330, the write data length, the sequential number, the write time, the remote storage device ID acquired from the remotelogical volume information 320, the remote logical volume number, and the remote group number obtained from thegroup management information 310 using the logical volume ID. -
FIG. 7 is a view showing an example of the remotelogical volume information 320 of the various logical volumes. The logical volume ID is the ID of the logical volume on the source side (logical volume 500 included in thestorage device A 100 in embodiment 1). The remote storage device ID is an ID (for example a serial number) specifying the storage device (storage device B 190 in embodiment 1) having the logical volume (also called the remote logical volume) in which is stored a copy of the data stored by the logical volume in question that is paired with the logical volume in question. The remote logical volume ID is an ID that specifies the remote logical volume (i.e. thelogical volume 500 on the target side, where a copy of the data that was stored in the logical volume is stored) in the remote storage device (storage device B 190 in embodiment 1). - Next, returning to
FIG. 6 , the write datatransfer section A 220 transfers (step 1101) to thestorage device B 190 the write data and the information found instep 1100. The write datareception section B 211 of the storage device B stores (step 1102) the received write data and information in thecache 400 and creates (step 1103) writedata management information 330 from the received information. The items of the writedata management information 330 of thestorage device B 190 are the same as the items of the writedata management information 330 of thestorage device A 100. The content of the writedata management information 330 of thestorage device B 190 differs from that of the writedata management information 330 of thestorage device A 100 in that the logical volume ID is the ID of thelogical volume 500 on the target side where the copy is stored and the write data pointer is the storage start address of the write data in thecache 400 of thestorage device B 190 and the “transfer needed” bit is normally OFF, but is otherwise the same. - The
storage device B 190 also hasgroup management information 310, but the items thereof are the same as in the case of thestorage device A 100. Regarding the content of thegroup management information 310, the group ID is an ID that specifies the logical volume group to which thelogical volume 500 on the side of the target where the copy is stored belongs, the remote storage device ID is the ID of the storage device (storage device A 100 in the case of embodiment 1) constituting the source and the remote group ID is an ID that specifies the logical volume group to which the remote logical volume (i.e. thelogical volume 500 constituting the source) belongs in the remote storage device (storage device A 100 in embodiment 1). Thestorage device B 190 also has remotelogical volume information 320, but the items thereof are the same as in the case of thestorage device A 100 and, regarding its content, the logical volume ID is an ID that specifies thelogical volume 500 where the copy is stored, the remote storage device ID is an ID that specifies the ID of the storage device (storage device A 100) constituting the source and the remote logical volume ID is an ID that specifies the remote logical volume (logical volume 500 constituting the source) in the remote storage device (storage device A 100). - Returning to
FIG. 6 , next, the write datareception section B 211 updates the arrived write time information 350 (step 1104). -
FIG. 8 is a view showing an example of arrivedwrite time information 350 of the various groups. The group ID is an ID that specifies the logical volume group in thestorage device B 190. The latest write time of the arrived write data is the latest time closest to the current time, of the write times applied to the write data received by the write data reception section ofB 211, in respect of the logical volume groups of thestorage device B 190. However, if it appears, from the sequential number order, that some of the write data has not yet arrived (some of the sequence of write data is missing), the latest time of the write time applied to these items of write data is recorded as the arrived write data time information, taking the continuous time comparison range in the order of the sequential numbers as being up to the final write data (write data immediately preceding the missing data). - In transfer of the write data between the write data
transfer section A 220 and the write datareception section B 211, a plurality of items of write data may be simultaneously transferred in parallel. The write data is therefore not necessarily received in the write datareception section B 211 in the order of the sequential numbers but, as will be described, the write data is reflected in the order of the sequential numbers to each of the logical volume groups (i.e. it is stored in the logical volumes of the storage device B 190), so the write data is reflected to the copy in the order of updating (i.e. in the order of writing of the write data in the storage device A 100). - Returning once more to
FIG. 6 , finally, the write datareception section B 211 reports completion of reception of the write data to the write data transfer section A 220 (step 1105). The write datatransfer section A 220 of thestorage device A 100 that has received this write data turns the “transfer required” bit of the writedata management information 330 OFF in respect of the write data corresponding to the report of completion of reception of write data. At this time, thestorage device A 100 may discard from the cache the arrived write data that was held for transfer to thestorage device B 190. -
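- as an illustrative aid only, the following Python sketch shows one way the latest write time of the arrived write data (the arrived write time information 350 of FIG. 8) could be computed for a single logical volume group when write data arrives out of order: only an unbroken run of sequential numbers is counted, so the write time of write data received ahead of a missing number is ignored until the gap is filled. The function and argument names are assumptions.

```python
def arrived_write_time(received, last_counted_seq=0):
    """received: sequential number -> write time of write data already received by
    the write data reception section B 211 for one logical volume group.
    last_counted_seq: highest sequential number already accounted for."""
    latest = None
    seq = last_counted_seq + 1
    while seq in received:                     # stop at the first missing sequential number
        if latest is None or received[seq] > latest:
            latest = received[seq]
        seq += 1
    return latest                              # None: no contiguous write data has arrived yet

# Write data 1, 2 and 4 have arrived but 3 has not: the time of number 4 is ignored
# until number 3 arrives, so only times up to number 2 count as "arrived".
print(arrived_write_time({1: 100, 2: 105, 4: 120}))    # -> 105
```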
FIG. 9 it is a view showing an example of the reflection processing of write data in the storage device B 190 (i.e. the processing of storage of the write data to the logical volume). - The write data reflection
instruction section B 230 checks the arrivedwrite time information 350 of all the logical volume groups of thestorage device B 190 and finds, of these, the earliest time (step 1200). The write data reflectioninstruction section B 230 gives instructions (or permission) (step 1201) to the write datareflection section B 240 for reflection to these logical volumes of the write data whose write time is previous to the time that was thus found. When the writedata reflection section 240 receives these instructions (or permission), by referring to the writedata management information 330 andgroup management information 310, it reflects the write data in the designated time range (i.e. the write data whose write time is previous to the time found in step 1200), in the order of the write times, or, if these write times are the same, in the order of the sequential numbers in the various logical volume groups, in respect of thelogical volume 500 in which the copy is stored (i.e. the write data is stored in the logical volume on the target side) (step 1202). After completion of reflection of all of the write data in the range specified instep 1202, the write datareflection section B 240 reports completion of the instructed processing (step 1203) to the write datareflection instruction section 230. The storage device B may discard the reflected write data from thecache 400. - By means of the above processing from
step 1200 to step 1203, one of cycle of reflection processing is completed. The write data reflectioninstruction section B 230 and the write datareflection section B 240 repeat the above cycle in order to reflect the write data transferred from the storage device A continuously. - By means of the above processing, a copy of the updated data of the
storage device B 190 is stored maintaining the order between updating of data by the mainframe host and updating of data by the open system host. Regarding data consistency between the copies, mutual consistency can be maintained between the data of the mainframe host and the data of the open system host. - Specifically, the
storage device A 100 utilizes thewrite time 650 contained in thewrite request 630 received from the mainframe host and applies a write time also to the write data received from the open system host and, furthermore, manages the received write data using both the write times and the sequential numbers. The targetstorage device B 190 designates the write data that is capable of being reflected (i.e. that is capable of storage in a logical volume on the target side) using the sequential numbers and the write times and stores the designated write data in a logical volume on the target side. As a result, even if buffering and/or transferring are provided in parallel mid-way, write order is maintained between the data written from the mainframe host and the data written from the open system host, so copy data can be stored in a logical volume of thestorage device B 190 on the target side. - Also, even if some fault occurs in for example the
storage device A 100, so that previously updated write data does not reach thestorage device B 190, since the sequential numbers will not be continuous in respect of the write data of write times subsequent to the write time of the write data that failed to arrive, reflection thereof will not be allowed. Gaps of updating of data cannot therefore occur in the target sidestorage device B 190 and consistency between the sourcestorage device A 100 and targetstorage device B 190 is ensured. As a result, even if a fault occurs in the sourcestorage device A 100, business can be continued using the content of thelogical volume 500 of thestorage device B 190, which is matched with theMFB 690 and/or opensystem host B 790. - Also, since, in the above processing, write times are applied to all of the write data received by the
storage device A 100, irrespective of whether the host that employs the data is a mainframe host or open system host, it is possible to ascertain information such as up to which write time the write data in any desiredlogical volume 500 has been transferred from thestorage device A 100 to thestorage device B 190 or has arrived at thestorage device B 190 or has been reflected at the storage device B 190 (i.e. has been stored in a logical volume). - It should be noted that, in order to lighten the processing load in the
above step 1202, the write data in the designated time range may be stored in thelogical volume 500 that stores the copy in sequential number order in the various logical volume groups, neglecting the write time order. In this case, consistency between the copies (i.e. between the logical volumes of thestorage device B 190 on the target side) is maintained by the timing of the reports of completion of processing instep 1203. If it is desired to hold consistent data of the period between a report of completion of processing and the next report of completion of processing, a snapshot of thelogical volume 500 in which the copy is stored may be acquired with the timing of the report of completion of processing. The technique disclosed in for example U.S. Pat. No. 6,658,434 may be employed as a method of acquiring such a snapshot. In this method, the storage content of a logical volume 500 (source volume) in which is stored the data whereof a snapshot is to be acquired is copied to another logical volume 500 (target volume) of thestorage device B 190, so that the updated content is reflected also to the target volume when the source of volume is updated. However, in this embodiment, once the snapshot of the source volume has been stored in the target volume, the content of the target volume is frozen and verified by stopping reflection at that time. - Also in the transfer processing of the above write data, it was assumed that, initially, the write data
transfer section A 220 transfers the write data in respect of the write datareception section B 211; however, it would be possible for the write datareception section B 211 to initially issue a write data transfer request in respect of the writedata transfer section 220 and for the write datatransfer section A 220 to transfer the write data in respect of the write datareception section B 211 after having received this request. By employing write data transfer requests, the pace of transfer of write data can be adjusted in accordance with for example the processing condition or load of thestorage device B 190 or the amount of write data that has been accumulated. - Also, in the above processing, it was assumed that the location of storage of the write data was the
cache 400; however, by preparing a separatelogical volume 500 for write data storage, the write data could be stored in thislogical volume 500. In general, alogical volume 500 of large volume may be prepared in respect of thecache 400, so this makes it possible for more write data to be accumulated. - Also, in the above processing, it was assumed that the
write time information 340 was updated by thewrite time 650 of reception from the mainframe host; however, it may be arranged for thestorage device A 100 to possess an internal clock and to constantly update thewrite time information 340 by reference to this clock. In this case,FIG. 10 shows an example of the processing that is executed when a write request in respect of a logical volume 500 (logical volume 500 constituting the source) where thestorage device A 100 creates a copy is received from theMFA 600 or opensystem host A 700. This processing is processing corresponding to the processing shown inFIG. 3 . - The write data
reception section A 210 receives (step 1300) a write request from theMFA 600 or opensystem host A 700. The write datareception section A 210 stores (step 1301) the write data in thecache 400 and applies a write time to the write data by referring to thewrite time information 340 that is constantly updated in accordance with the clock provided in thestorage device A 100, and creates (step 1302) writedata management information 330 by applying a sequential number to the write data, by referring to thegroup management information 310. Finally, completion of writing is reported to theMFA 600 or open system host A 700 (step 1303). - Also, in the above processing, a time is used in the
write time information 340 or the write time of the writedata management information 300 or the arrivedwrite time information 350; however; the time that is employed for this purpose need not necessarily be of the form of years, months, days, hours, minutes, seconds, milliseconds, microseconds, nanoseconds or a total of an ordinary time and instead a sequential number could be employed. In particular,FIG. 11 shows an example of the processing when thestorage device A 100 has received a write request in respect of the logical volume 500 (logical volume 500 constituting the source), where the copy is created, from theMFA 600 or opensystem host A 700, in a case where thestorage device A 100 itself updates thewrite time information 340. This processing is processing corresponding toFIG. 3 orFIG. 10 . It should be noted that, inFIG. 11 , the initial value of thewrite time information 340 may for example be 0 and numbers successively incremented by 1 may be applied to the write data as shown below as the write times. - The write data
reception section A 210 receives a write request (step 1400) from theMFA 600 or opensystem host A 700. The write datareception section A 210 stores the write data in the cache 400 (step 1401), reads the number from thewrite time information 340 and applies to the write data (step 1402) as the write time the value obtained by incrementing this by 1. Then the write datareception section A 210 records the value after incrementing by 1 as thewrite time information 340, thereby updating the write time information 340 (step 1403). The write datareception section A 210 also creates the write data management information 330 (step 1405) by applying a sequential number to the write data (step 1404) by referring to thegroup management information 310. The write datareception section A 210 finally reports completion of writing (step 1406) to theMFA 600 or opensystem host A 700. - When a sequential number is employed as the write time in this manner, in the
storage device B 190, instead of the write datareception section B 211 being arranged to update the arrivedwrite time information 350 using the write time applied to the write data received and the write data reflectioninstruction section B 230 being arranged to designate the range of write data capable being stored in a logical volume of the storage device B by checking the arrivedwrite time information 350 of the various logical volume groups, it may be arranged for the writedata reflection section 240 to reflect (i.e. store) the write data arriving at the storage device B by referring to the sequential number recorded at the write time of the writedata management information 330 in thelogical volume 500 without skipping numbers in the number sequence. -
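- by way of illustration only, the two small Python functions below sketch, first, one cycle of the reflection processing of FIG. 9 described above (steps 1200 to 1203), in which the earliest of the arrived write times of all the logical volume groups bounds the write data that may be stored in the copy, applied in write time order and then sequential number order, and, second, the variant just described, in which write data carrying a sequential number as its write time is reflected strictly in number order without skipping. All names are assumptions.

```python
def reflection_cycle(arrived_times, pending, store):
    """arrived_times: group ID -> latest arrived write time (arrived write time information 350).
    pending: list of (write_time, sequential_number, group_id, data) not yet reflected.
    store: callable that writes one item into the logical volume 500 holding the copy."""
    if not arrived_times or any(t is None for t in arrived_times.values()):
        return []                                        # some group has no usable write data yet
    earliest = min(arrived_times.values())               # step 1200
    allowed = [w for w in pending if w[0] < earliest]    # step 1201: write times previous to that time
    allowed.sort(key=lambda w: (w[0], w[1]))             # step 1202: write time, then sequential number
    for item in allowed:
        store(item)
    return allowed                                       # step 1203: report completion for this cycle

def reflect_in_number_order(received, volume, next_number):
    """Variant using sequential numbers as write times: reflect only an unbroken number sequence."""
    while next_number in received:
        volume.append(received.pop(next_number))
        next_number += 1
    return next_number

copy_volume, n = [], 1
n = reflect_in_number_order({1: "w1", 3: "w3"}, copy_volume, n)   # "w3" is held back until number 2 arrives
print(copy_volume, n)                                             # -> ['w1'] 2
```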
FIG. 12 is a view showing an example of the layout of a computer system according to a second embodiment. - The differences with respect to
embodiment 1 lie in that theMFA 600 and opensystem host A 700 are connected with thestorage device C 180 through an I/O path 900 and thestorage device C 180 is connected with thestorage device A 100 through atransfer path 910. In this embodiment, a copy of the data stored in thelogical volume 500 of thestorage device C 180 is stored in alogical volume 500 of thestorage device A 100. Further, a copy of the data stored in thelogical volume 500 of the storage device A is stored in thelogical volume 500 of thestorage device B 190 in processing like the processing described inembodiment 1. That is, in this embodiment, a copy of the data stored in thelogical volume 500 of thestorage device C 180 is stored in thestorage device A 100 and thestorage device B 190. In order to implement such processing, thestorage device C 180 is provided with the various items of information and a construction like that of thestorage device A 100 described inembodiment 1. - When the
storage device C 180 has received awrite request 630 or awrite request 730 for thelogical volume 500 from theMFA 600 or opensystem host A 700, it stores the receivedwrite data 640 or writedata 740 in a logical volume in thestorage device C 180 and transfers this to the write datareception section A 210 of thestorage device A 100. At this point, in contrast to the processing described inembodiment 1, thestorage device C 180 sends notification of completion of writing to theMFA 600 or opensystem host A 700 after waiting for notification of completion of reception from the write datareception section A 210, and thestorage device C 180 is thereby able to guarantee that a copy of thewrite data 640 or writedata 740 that was written thereto is present in thestorage device A 100. In this way, if for example due to the occurrence of some fault in thestorage device C 180 or on thetransmission path 910, transfer of data to thestorage device A 100 has not succeeded, theMFA 600 or opensystem host A 700 will not deem write data that have not been transferred to thestorage device A 100 to have been written but will only deem write data that have been received by thestorage device A 100 to have actually been written; a copy as expected by theAPP 620 on theMFA 600 or theAPP 720 on the opensystem host A 700 will therefore exist on thestorage device A 100. Furthermore, after all of the write data received by thestorage device A 100 have been sent to thestorage device B 190, a copy as expected will also exist on thestorage device B 190, so, at the time where the processing executed by theMFA 600 or opensystem host A 700 was interrupted, theMFB 690 or opensystem host B 790 will be able to continue business using data as expected identical with the data that are recognized as having been written by theMFA 600 or opensystem host A 700. - As initially indicated in
embodiment 1, when the write time information 340 is updated by the write time 650 applied to the write data, the write data reception section C 212 of the storage device C 180, if a write time 650 is included in the received write request 630, records the write time also in the write data management information 330, and the write data transfer section C 222 also transfers this write time to the write data reception section A 210 of the storage device A 100 when performing write data transfer. After receiving the write data and the write time, the write data reception section A 210 processes the write data and the write time received from the storage device C 180 by the same method as the processing of the write request 630 that was received from the mainframe host in embodiment 1; consistency between the copies stored in the logical volumes in the storage device A 100 is thereby maintained, and consistency between the write data issued from the mainframe host and the write data issued from the open system host can thereby be maintained. - In this way, even if, due for example to a large-scale disaster, faults occur in both of the
storage device C 180 and the storage device A 100, business can be continued by the MFB 690 and open system host B 790 using the consistent content of the logical volume 500 of the storage device B 190 that was matched therewith. As indicated in the final part of embodiment 1, when the write time information 340 is updated from the storage device A 100 itself, transfer of the write time from the storage device C 180 is unnecessary, so that, after receiving the write data from the storage device C 180, the write data reception section A 210 may perform processing on the write data like the processing of FIG. 11 indicated in the latter part of embodiment 1. - It should be noted that there may be a plurality of
storage devices C 180 that connect to the storage device A 100. - Also, although not shown, if the mainframe host and open system host are connected by an I/O path with the
storage device A 100, then, in the event that a fault occurs in the MFA 600, open system host A 700 or storage device C 180, the mainframe host or open system host that is connected with the storage device A 100 may continue the business that was being conducted by the MFA 600 or open system host A 700, using the consistent content of a logical volume 500 of the storage device A 100 that was matched therewith. -
FIG. 13 is a view showing an example of the construction of a computer system according to Embodiment 3. - The chief differences with respect to
embodiment 1 lie in that there are a plurality of respective storage devices A 100 and storage devices B 190, the MFA 600 and open system host A 700 are connected through an I/O path 900 respectively with a plurality of storage devices A 100, the MFB 690 and the open system host B 790 are connected through an I/O path 900 respectively with a plurality of storage devices B 190, the MFA 600 includes management software A 800 and the MFB 690 includes management software B 890. Other differences will be described below. - Hereinbelow, the processing in respect of writing performed to the various
logical volumes 500, transfer of write data to the storage device B 190 and the processing of reflection of write data in the storage device B 190 (i.e. storage of the write data in the logical volume) will be described in respect of the logical volumes 500 employed by the MFA 600 and the open system host A 700. This processing ensures that, for the copies respectively stored in the plurality of logical volumes possessed by the plurality of storage devices B 190, mutual consistency is maintained between the data of the mainframe host and the data of the open system host. -
FIG. 14 is a view showing an example of the processing when a write request in respect of the logical volume 500 (the logical volume 500 constituting the source) of which a copy is created by the storage device A 100 is received from the MFA 600 or open system host A 700. -
reception section A 210 receives (step 1500) a write request from theMFA 600 or opensystem host A 700. The write datareception section A 210 stores the write data in the cache 400 (step 1501) or, as inembodiment 1, creates write data management information 330 (step 1502) by acquiring a sequential number by referring to thegroup management information 310. Finally, the write datareception section A 210 reports to theMFA 600 or opensystem host A 700 completion of writing (step 1503). Thegroup management information 310 is the same as that in the case ofembodiment 1. The writedata management information 330 of this embodiment will be described later. -
FIG. 15 is a view showing an example of the processing when the management software A 800 gives instructions for deferment of processing of write requests in respect of the storage device A 100 and creation of a marker. As will be described later, consistency is established between the copies stored in the plurality of storage devices B 190 by subsequently performing synchronization of reflection to the copies, with the timing at which this processing was performed during updating of the logical volume 500 of the storage device A 100. - First of all, the
management software A 800 gives instructions for deferment of processing of write requests to all of the storage devices A 100 (step 1600). On receipt of these instructions, the write data reception section A 210 defers processing of write requests (step 1601) and reports to the management software A 800 the fact that deferment has been commenced (step 1602). After the management software A 800 has confirmed that commencement of deferment has been reported from all of the storage devices A 100 that have been so instructed, processing advances to the following processing (step 1603 and step 1604). - Next, the
management software A 800 instructs all of the storage devices A 100 to create markers (step 1605). This instruction includes a marker number as a parameter. The marker number will be described subsequently. On receipt of this instruction, the marker creation section A 250 records the received marker number in the marker number information 360 shown in FIG. 16 stored in the control memory 300 (step 1606) and creates (step 1607) special write data (hereinbelow called a marker) for information transmission in respect of all of the logical volume groups. A marker is write data in which a marker attribute is set in the write data management information 330. -
FIG. 17 is a view showing an example of the write data management information 330 of write data in this embodiment; a marker attribute bit and a marker number are added to the write data management information 330 of embodiment 1. -
creation section A 250 obtains a sequential number from thegroup management information 310 of the group in the same way as in the processing of the write datareception section A 210 and records a value obtained by adding 1 thereto in the writedata management information 330 as the sequential number of the aforesaid marker, and records the new sequential number in thegroup management information 310. When the sequential number has been applied in this way to the marker, it is transferred to thestorage device B 190 in the same way as in the case of ordinary write data, but the marker is not reflected to thelogical volume 500. - The marker number is a number for identifying the instruction in response to which the marker was created; when a marker creation instruction is issued by the
management software A 800, for example the initial value thereof is 0 and the marker number is incremented by 1 before being issued. Themanagement software A 800 may confirm the current marker number by reading the marker number recorded in themarker number information 360. - Returning to
FIG. 15, after the marker creation section A 250 has created a marker in respect of all of the logical volume groups, the marker creation section A 250 reports completion of marker creation to the management software A 800 (step 1608). After confirming that completion of marker creation has been reported from all of the designated storage devices A 100, the management software A 800 proceeds to the subsequent processing (step 1609, step 1610). - The
management software A 800 gives instructions (step 1611) for cancellation of deferment of processing of write requests to all of the storage devices A 100. On receipt of these instructions, the write data reception section A 210 cancels deferment of processing of write requests (step 1612) and reports to the management software A 800 (step 1613) the fact that such deferment has been cancelled.
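The overall coordination of FIG. 15 amounts to a simple three-phase loop, sketched below under an assumed device interface (defer, create marker, resume); the names are illustrative and the handling of the individual reports is omitted.

```python
# Sketch of the FIG. 15 coordination: defer writes on every storage device A,
# create a marker (with one common marker number) on each, then cancel deferment.
# The StorageA protocol is an assumption introduced only for this illustration.
from typing import List, Protocol

class StorageA(Protocol):
    def defer_writes(self) -> None: ...
    def create_marker(self, marker_no: int) -> None: ...
    def resume_writes(self) -> None: ...

def create_consistency_point(devices: List[StorageA], marker_no: int) -> None:
    for dev in devices:            # steps 1600-1604: all devices defer write processing first
        dev.defer_writes()
    for dev in devices:            # steps 1605-1610: markers created with the same marker number
        dev.create_marker(marker_no)
    for dev in devices:            # steps 1611-1613: write processing restarts only afterwards
        dev.resume_writes()
```
-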
FIG. 18 is a view showing an example of transfer processing of write data to a storage device B 190 from a storage device A 100. This processing is substantially the same as the transfer processing described in FIG. 6 of embodiment 1, but differs in that no updating of the arrived write time information 350 is performed by the write data reception section B 211. It should be noted that the write data management information 330 of the storage device B 190 is the same as the write data management information shown in FIG. 17, described above; in step 1703, the presence or absence of the marker attribute of the write data and/or the marker number is also recorded in the write data management information 330. -
FIG. 19 is a view showing an example of the processing of reflection (storage) of write data to a logical volume in the storage device B 190. First of all, the management software B 890 gives instructions for reflection of the write data, as far as the marker, to the logical volume 500 in which a copy is stored (step 1800) to all of the storage devices B 190. After receiving such an instruction, the write data reflection section B 240 refers to the write data management information 330 and group management information 310 and reflects (step 1801) the write data as far as the marker, in the sequential number order in each group, to the logical volume 500 in which the copy is stored. Specifically, the write data reflection section B 240 continues to store the write data in the logical volume in the order of the sequential numbers, but stops data storage processing on finding write data with the marker attribute (i.e. a marker) and then reports completion of reflection to the management software B 890 (step 1802). In the aforementioned processing, the write data reflection section B 240 checks the marker numbers of the markers that are recorded in the write data management information 330 and thereby ascertains whether the marker number is correct (i.e. whether the marker number conforms to the marker number decision rules described above, for example being a number whose initial value is 0 and that is incremented by 1 with respect to the previous marker number). If the marker number is not correct, the write data reflection section B 240 reports an abnormal situation to the management software B 890; if the marker number is correct, the write data reflection section B 240 records the marker number in the marker number information 360 and reports a normal situation. The management software B 890 may confirm the current marker number by reading the marker number that is recorded in the marker number information 360. - After confirming that a “normal reflection completed” report has been obtained from all of the
storage devices B 190 that had been designated, the management software B 890 proceeds to the next processing (step 1803, step 1804).
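Steps 1800 to 1804 can be condensed into the following sketch, which assumes hypothetical record types: write data are stored in sequential-number order until a marker is found, and the marker number must be exactly one greater than the marker number recorded previously.

```python
# Sketch of the reflection step of FIG. 19 under assumed types: reflect write
# data in sequential-number order until the marker, then validate its number.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class BufferedWrite:
    seq_no: int
    payload: bytes
    is_marker: bool = False
    marker_no: Optional[int] = None

def reflect_until_marker(writes: List[BufferedWrite], volume: list,
                         last_marker_no: int) -> int:
    for wd in sorted(writes, key=lambda w: w.seq_no):     # sequential-number order
        if wd.is_marker:
            if wd.marker_no != last_marker_no + 1:        # marker number decision rule
                raise RuntimeError("abnormal situation: unexpected marker number")
            return wd.marker_no                           # record in marker number information
        volume.append(wd.payload)                         # step 1801: reflect the write data
    return last_marker_no                                 # no marker reached yet
```
- Next, the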
management software B 890 gives instructions (step 1805) for updating of the snapshot of the logical volume 500 that stores the copy to all of the storage devices B 190. After receiving this instruction, the snapshot acquisition section B 260 updates (step 1806) the snapshot of the content of the logical volume 500. As the method of acquiring such a snapshot, for example the technique disclosed in U.S. Pat. No. 6,658,434 may be employed. It should be noted that, in this embodiment, just as in the case of the method described in embodiment 1, reflection of the write data to the volume that stores the snapshot data is stopped at the time of acquisition of the snapshot, and the content of the volume that stores the snapshot is frozen. After updating the snapshot, the snapshot acquisition section B 260 reports completion of snapshot updating to the management software B 890 (step 1807). After confirming that a report of completion of snapshot updating has been obtained from all of the storage devices B 190 that were designated, the management software B 890 proceeds to the next processing (step 1808, step 1809). - The
management software A 800 and the management software B 890 respectively repeat the processing of the aforesaid step 1600 to step 1613 and of step 1800 to step 1809. In this way, updating of the logical volume 500 of the storage device A 100 is constantly reflected to the logical volume 500 of the storage device B 190. - By processing as described above, the data updating by the
MFA 600 and the open system host A 700 is stopped and a marker is created with the timing (checkpoint) at which the updating condition is unified between the plurality of storage devices; reflection (i.e. storage) of the updated data to the stored copy data in the plurality of target logical volumes provided in the plurality of target storage devices B 190 can be synchronized at the time immediately preceding the writing of the marker, so mutual consistency between the various copies can be obtained with the data of the mainframe host and the data of the open system host at the time of this marker. In addition, the MFB 690 or open system host B 790 can continue business using the matched data stored in the snapshot volume, since a copy having mutual consistency is held in the snapshot volume, this snapshot being acquired by reflection of the updated data to the copy data at a time that is synchronized between the plurality of copy data. - In the above processing, the snapshot was assumed to be updated by the
storage device B 190 in response to an instruction from the management software B 890, but it would be possible to update the snapshot with the timing of synchronization of reflection of the updated data between the copy data of a plurality of storage devices B 190. FIG. 20 shows an example of the reflection processing of write data to the copy in the storage devices B 190 in this case. - The
management software B 890 gives instructions (step 1900) for reflection of the write data as far as the marker to the logical volume 500 that stores the copy in all of the storage devices B 190. After receiving such an instruction, the write data reflection section B 240 reflects the write data in the same way as in the processing described with reference to FIG. 19, but stops the reflection as soon as it finds a marker and notifies the snapshot acquisition section B 260 (step 1901). After receiving such notification, the snapshot acquisition section B 260 updates the snapshot of the content of the logical volume 500 and notifies the write data reflection section B 240 (step 1902). After receiving this notification, the write data reflection section B 240 reports completion of reflection to the management software B 890 (step 1903). The management software B 890 confirms that a report of completion of snapshot updating has been obtained from all of the storage devices B 190 that were designated and then proceeds to the next processing (step 1904, step 1905).
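A compact sketch of this variant is given below; the function and parameter names are assumptions. It differs from the earlier reflection sketch only in that the snapshot is updated immediately upon finding the marker, before completion of reflection is reported.

```python
# Sketch of the FIG. 20 variant (assumed names): reflection stops at the marker
# and the snapshot is updated at once, before reporting completion (step 1903).
from typing import Callable, List, Tuple

def reflect_then_snapshot(ordered_writes: List[Tuple[int, bytes, bool]],
                          volume: list,
                          update_snapshot: Callable[[list], None]) -> str:
    # each tuple is (sequential number, payload, is_marker), already in sequence order
    for _seq_no, payload, is_marker in ordered_writes:
        if is_marker:
            update_snapshot(list(volume))     # step 1902: freeze the copy content
            break
        volume.append(payload)                # step 1901: reflect the write data
    return "reflection completed"             # step 1903
```
- Also, in the aforesaid processing, it was assumed that the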
storage device A 100 or storage device B 190 reported completion of processing in respect of the various types of instructions from the management software A 800 or management software B 890. However, it would also be possible for completion of the various types of processes by the storage device A 100 or storage device B 190 to be detected by the management software A 800 or management software B 890 periodically making inquiries of the storage device A 100 or storage device B 190 regarding their processing condition in respect of the aforesaid instructions. - Also, in the above processing, transfer processing of write data from the
storage device A 100 to the storage device B 190 is performed continuously, but it would be possible for the storage device A 100 to create a marker and then stop transfer of write data and, in addition, for the storage device B 190, after detecting and reflecting the received marker (i.e. after reflection of the write data previous to the marker), to stop reflection of the write data, that is, to put the processing by the storage device A 100 and storage device B 190 in a stopped condition (also called a suspended condition). However, the storage device B 190 could perform write data reflection up to the detection of the marker without reference to instructions from the management software B 890. In this case, the marker creation instruction is equivalent to an instruction to shift to the suspended condition, and mutually matched copies are created in the logical volume 500 of the storage device B 190 at the time when all of the storage devices B 190 have shifted to the suspended condition. When restarting the copy processing, the copy processing is recommenced by the storage device A 100 and storage device B 190 in response to an instruction for recommencement of copy processing from the management software A 800 or management software B 890 after acquisition of the snapshot of the logical volume 500. As a result, copies having mutual consistency can be held in the data stored by the snapshots, so the MFB 690 or open system host B 790 can continue business using the matched data. - Also, in the processing described above, the various types of instructions, reports and exchange of information between the
management software A 800 or management software B 890 and storage device A 100 and storage device B 190 may be executed by way of an I/O path 900 or could be executed by way of a network 920. In the case where instructions for marker creation are given in the form of a write request to the storage device A 100, a logical volume 500 that is not subject to the deferment of processing of write requests is provided at the storage device A 100 and the marker creation instructions are given in respect of this logical volume 500. - In the above processing, the
storage device A 100 and storage device B 190 need not be connected in a one-to-one relationship and it is not necessary that there should be the same number of devices, so long as the respective logical volumes 500 and logical volume groups correspond as source and copy. - Also, in the above construction, it was assumed that the
management software A 800 was present in the MFA 600 and the management software B 890 was present in the MFB 690; however, it would be possible for the management software A 800 and management software B 890 to be present in any of the MFA 600, MFB 690, open system host A 700, open system host B 790, storage device A 100 or storage device B 190. Also, they could be present in another computer, not shown, connected with the storage device A 100 or storage device B 190. - In the above processing, it was assumed that the write data
reflection section B 240 determined the correct marker number, but it would also be possible for the correct marker number to be designated to the storage device B 190 as a parameter of the reflection instructions by the management software B 890. Also, it could be arranged that, when the management software A 800 gives instructions for deferment of processing of write requests and marker creation to the storage device A 100, a unique marker number is determined and designated to the storage device A 100 and communicated to the management software B 890, and that this management software B 890 then designates this marker number to the storage device B 190. - In the above processing, the occasion at which the
management software A 800 gives instructions for deferment of processing of write requests and marker creation to the storage device A 100 may be determined in a manner linked with the processing of the APP 620 or APP 720. For example, synchronization of reflection to the copy may be performed at the checkpoint by giving instructions for deferment of write request processing and marker creation on the occasion of creation of a DBMS checkpoint. Business can therefore be continued by the MFB 690 or open system host B 790 using the data of this condition, by obtaining a snapshot in the condition in which the stored content of the source logical volume 500 at the checkpoint has been reflected to the copy in the target logical volume. - It could also be arranged for the
MFA 600 or open system host A 700 to defer or restart the issue of write requests to the storage device A 100, by linking the OS 610 or OS 710 with the management software A 800, instead of the management software A 800 giving instructions for deferment of processing of write requests and cancellation of deferment in respect of the storage device A 100. -
embodiment 1, a logical volume for write data storage that is separate from thecache 400 could be prepared and the write data stored in thislogical volume 500 for write data storage. Also, in the transfer processing of write data, it would be possible for a write data transfer request to be initially issued in respect of the writedata transfer section 220 by the write datareception section B 211 and for the write data to be transferred in respect of the write datareception section B 211 by the write datatransfer section A 220 after receiving this request. - The processing described in this embodiment could also be implemented even if the write request does not contain a write time.
-
FIG. 21 is a view showing an example of the layout of a computer system in embodiment 4. - The difference with respect to Embodiment 3 lies in that the
MFA 600 and the open system host A 700 are respectively connected with a plurality of storage devices C 180 by way of an I/O path 900 and the plurality of storage devices C 180 are connected with a plurality of storage devices A 100 by way of a transfer path 910. In addition, the plurality of storage devices C 180 are connected with another computer or device by means of a network 920. The storage device A 100 and the storage device B 190 of embodiment 4 have the same construction and function as the storage device A 100 and storage device B 190 in embodiment 3. - In this embodiment, just as in the case of embodiment 2, a copy of the data stored in the
logical volume 500 of the storage device C 180 is stored in the logical volume 500 of the storage device A 100. Specifically, the storage device C 180 comprises the same construction and various types of information as in embodiment 2 and, after receiving a write request to the logical volume 500 from the MFA 600 or open system host A 700, the storage device C 180 stores the write data that it has received and transfers this received write data to the write data reception section A 210 of the storage device A 100; it is then guaranteed that a copy of the write data 640 or write data 740 that was written to the storage device C 180 exists in the storage device A 100, by sending a write completion notification to the MFA 600 or open system host A 700 after waiting for a notification of completion of reception from the write data reception section A 210, in the same way as in embodiment 2. -
logical volume 500 of the storage device C 180 is stored in a logical volume 500 of the storage device B 190 by the same processing as the processing described in embodiment 3. By processing as described above, as described in embodiment 2, even if for example some fault occurs in the storage device C 180 or in the transfer path 910, causing transfer of data to the storage device A 100 to become impossible, the expected content that was recognized as having been stored in the storage device C 180 when processing of the MFA 600 or open system host A 700 was interrupted can still be obtained from the storage device B 190, so the MFB 690 or open system host B 790 can continue business using this data. -
management software A 800 gives instructions for deferment of processing of write requests, marker creation, or cancellation of deferment of processing of write requests in respect of all of the storage devices C 180, in the same way as in the case of the processing performed in respect of the storage device A 100 in embodiment 3. Just as in the case of step 1600 of embodiment 3, the management software A 800 first of all gives instructions for deferment of processing of write requests to all of the storage devices C 180. After receiving these instructions, the write data reception section C 212 of the storage device C 180 defers processing of write requests in the same way as in the case of the processing performed by the storage device A 100 in step 1601 and step 1602 of embodiment 3 and reports commencement of deferment to the management software A 800. As described above, at this time, write data in respect of which a write completion notification has been given in respect of the MFA 600 or open system host A 700 has already been transferred to the storage device A 100 and the storage device A 100 creates write data management information 330 of this write data. In the same way as in the case of step 1603 and step 1604 of embodiment 3, the management software A 800 confirms that a report of commencement of deferment has been obtained from all of the designated storage devices C 180 before proceeding to the following processing. - Next, the
management software A 800 gives instructions for marker creation to all of the storage devices C 180 in the same way as in step 1605 of embodiment 3. After receiving such an instruction, the storage device C 180 transmits a marker creation instruction through the transfer path 910 or network 920 to the storage device A 100 that stores the copy. After receiving the marker creation instruction, the storage device A 100 creates a marker in the same way as in step 1606, step 1607 and step 1608 of embodiment 3 and reports completion of marker creation to the storage device C 180 through the transfer path 910 or network 920. After receiving the report, the storage device C 180 reports completion of marker creation to the management software A 800. The management software A 800 confirms that a report of completion of marker creation has been received from all of the designated storage devices C 180 in the same way as in step 1609 and step 1610 of embodiment 3 before proceeding to the next processing. - Next, the
management software A 800, in the same way as in step 1611 of embodiment 3, gives instructions for cancellation of deferment of processing of write requests to all of the storage devices C 180. After receiving these instructions, the write data reception section C 212 of the storage device C 180 cancels the write request processing deferment in the same way as the processing that was performed by the storage device A 100 in step 1612 and step 1613 of embodiment 3 and reports this cancellation of deferment to the management software A 800. -
storage device C 180 and marker creation meanwhile is performed by thestorage device A 100 on transmission to thestorage device A 100 of an instruction by thestorage device C 180. As described above, write data in respect of which completion of writing has been notified to theMFA 600 or opensystem host A 700 has already been transferred to thestorage device A 100 and writedata management information 300 of such write data is created in thestorage device A 100, so deferment of processing of write requests by thestorage device A 100 in embodiment 3 and deferment of processing of write requests by thestorage device C 180 in this embodiment are equivalent. Consequently, by performing processing as described above and by performing other processing as described in embodiment 3, in the construction of this embodiment, reflection of updating to the copies can be synchronized at the marker time by stopping data updating by theMFA 600 and opensystem host A 700 in the same way as in embodiment 3 and creating a marker of the updated condition with unified timing (checkpoint) between the plurality of storage devices; mutual consistency of the respective copies with the mainframe host data and the open system host data can thus be achieved at this time. Furthermore, mutually matched copies are maintained in snapshot volumes by acquiring snapshots at the time of synchronization of reflection and theMFB 690 or opensystem host B 790 can therefore continue business using matched data. - In the above processing, it was assumed that the
management software A 800 gave instructions for marker creation to the storage devices C 180 and the storage devices C 180 transmitted these instructions to the storage devices A 100; however, it would also be possible for the management software A 800 to give instructions for marker creation directly to all of the storage devices A 100 and for the storage devices A 100 to report completion of marker creation to the management software A 800. Specifically, the management software A 800 first of all gives instructions for deferment of write request processing to all of the storage devices C 180 and the management software A 800 confirms that reports of commencement of deferment have been received from all of the designated storage devices C 180 before giving instructions for marker creation to all of the storage devices A 100 in the same way as in step 1605 of embodiment 3. After having received these instructions, the storage device A 100 creates a marker in the same way as in step 1606, step 1607 and step 1608 of embodiment 3 and reports completion of marker creation to the management software A 800. After confirming that reports of completion of marker creation have been obtained from all of the designated storage devices A 100 in the same way as in step 1609 and step 1610 of embodiment 3, the management software A 800 may be arranged to give instructions for the cancellation of deferment of write request processing to all of the storage devices C 180. -
storage devices C 180 are provided with a marker creation section and marker number information 360 and create a marker on receipt of instructions for marker creation from the management software A 800; the marker, which has been created as write data, is then transferred to the storage device A 100, and completion of marker creation may be arranged to be reported to the management software A 800 when a report of receipt thereof has been received from the write data reception section A 210 of the storage device A 100. In this case, the storage device A 100 treats the received marker as a special type of write data, which is transferred to the storage device B 190 after processing in the same way as ordinary write data except that reflection to the copy is not performed. -
storage devices C 180 that are connected with the storage devices A 100 and deposit copies on the storage devices A 100. - Also, although not shown, if a mainframe host and open system host are connected with the storage devices A 100 by an I/O path, then if for example some fault occurs in the
MFA 600 or open system host A 700 or storage devices C 180, the aforesaid mainframe host and open system host can continue business using the content of the logical volume 500 of the storage device A 100 that is matched therewith.
Claims (20)
1. A remote copy system comprising:
a plurality of first storage systems, each comprising a first control section to be coupled to a computer, a first logical volume, and a first storage area for storing data to be transmitted to a second storage system; and
a plurality of second storage systems, each coupled to a first storage system and comprising a second control section, a second logical volume, and a second storage area for storing data received from the first storage system,
wherein a plurality of pairs, each including a first logical volume for storing data received from the computer and a second logical volume for storing replicate data of data in the first logical volume, are configured between the plurality of first storage systems and the plurality of second storage systems,
wherein first control sections of the plurality of first storage systems receive a plurality of write requests from the computer, the first control sections store write data, which is received according to the plurality of write requests and to be stored in first logical volumes of the plurality of first storage systems, and write order information assigned to the write data in first storage areas of the first storage systems, and transmit write completion reports to the computer,
wherein when one of the plurality of first storage systems instructs the rest of the plurality of first storage systems to defer operations for executing write requests, the operations for executing write requests are deferred in the plurality of first storage systems, the one of the plurality of first storage systems sends information related to a marker to the rest of the plurality of first storage systems, and the first control sections of the plurality of first storage systems store markers and the write order information assigned to the markers in the first storage areas of the plurality of first storage systems and restart the operations for executing write requests, and
wherein second control sections of the plurality of second storage systems read the write data with the write order information and the markers with the write order information from the plurality of first storage systems, and control operation such that the write data received from the plurality of first storage systems is stored in second logical volumes of the plurality of second storage systems according to the write order information and the markers.
2. A remote copy system according to claim 1 ,
wherein when the second control sections read the write data with the write order information and the markers with the write order information from the plurality of first storage systems, the second control sections send a plurality of transmission requests to the plurality of first storage systems, receive the write data with the write order information and the markers with the write order information from the plurality of first storage systems in response to the plurality of transmission requests, and store the received write data with the write order information and the markers with the write order information in second storage areas of the plurality of second storage systems, and
wherein one of the plurality of second storage systems instructs the rest of the plurality of second storage systems to store write data to second logical volumes, and the second control sections control to store write data from the second storage areas to the second logical volumes according to the write order information assigned to the write data, until reaching the markers.
3. A remote copy system according to claim 2 , wherein the write order information assigned to the write data or the markers is a sequential number that indicates a receiving order of the write data from the computer.
4. A remote copy system according to claim 2 ,
wherein the information related to the marker includes a number, and the first control sections store the markers each including the number in the first storage areas.
5. A remote copy system according to claim 4 ,
wherein the one of the plurality of first storage systems instructs the rest of the plurality of first storage systems to defer operations for executing write requests and send the information related to the marker to the rest of the plurality of first storage systems repeatedly, and
wherein the one of the plurality of first storage systems increases a value of the number included in the information related to the marker upon sending the information related to the marker to the rest of the plurality of first storage systems.
6. A remote copy system according to claim 4 ,
wherein when the one of the plurality of second storage systems instructs the rest of the plurality of second storage systems to store write data to the second logical volumes, the one of the plurality of second storage systems designates the number included in the markers, the second control sections control to store the write data from the second storage areas to the second logical volumes according to the write order information assigned to the write data, until reaching the marker including the number designated by the one of the plurality of second storage systems.
7. A remote copy system according to claim 2 ,
wherein a pace of transmission of write data from a first storage system to a second storage system is controlled by a transmission request issued from a second control section of the second storage system to the first storage system.
8. A remote copy system according to claim 7 ,
wherein the pace of the transmission of write data is controlled based on the amount of write data received by the second storage system.
9. A remote copy system according to claim 7 ,
wherein the pace of the transmission of write data is controlled based on a load of the second storage system.
10. A remote copy system according to claim 2 ,
wherein each first storage area includes a storage area in a logical volume configured by at least one physical storage device.
11. A remote copy system comprising:
a plurality of first storage systems, each comprising a first control section to be coupled to a computer, a first logical volume, and a first storage area for storing data to be transmitted to a second storage system; and
a plurality of second storage systems, each coupled to a first storage system and comprising a second control section, a second logical volume, and a second storage area for storing data received from the first storage system,
wherein a plurality of pairs, each including a first logical volume for storing data received from the computer and a second logical volume for storing replicate data of data in the first logical volume, are configured between the plurality of first storage systems and the plurality of second storage systems,
wherein when a first control section of a first storage system receives a write request from the computer, the first control section stores write data, which is to be stored in a first logical volume included in a pair according to the write request, and write order information assigned to the write data in a first storage area of the first storage system, and transmits a write completion report to the computer,
wherein when one of the plurality of first storage systems instructs the rest of the plurality of first storage systems to defer operations for executing write requests, the operations for executing write requests are deferred in the plurality of first storage systems, the one of the plurality of first storage systems sends information related to a marker to the rest of the plurality of first storage systems, and each first control section stores a marker and the write order information assigned to the marker in a first storage area of a first storage system including the first control section, and the operation for executing write requests are restarted in the plurality of first storage systems,
wherein a second control section of a second storage system, which includes a second logical volume included in a pair, issues a transmission request to a first storage system, which includes a first logical volume included in the same pair as the second logical volume repeatedly, so that in response to one or more transmission requests issued from the second control section, both the write data with the write order information and the marker with the write order information are transmitted from the first storage system to the second storage system, and the second control section stores the write data with the write order information and the marker with the write order information in a second storage area of the second storage system, and
wherein when one of the plurality of second storage systems instructs the rest of the plurality of second storage systems to store write data to second logical volumes, each second control section controls to store write data from a second storage area of a second storage system including the second control section to a second logical volume of the second storage system, according to the write order information assigned to the write data, until reaching the marker.
12. A remote copy system according to claim 11 ,
wherein the information related to the marker includes a number, and each first control section stores the marker including the number in the first storage area.
13. A remote copy system according to claim 12 ,
wherein the one of the plurality of first storage systems instructs the rest of the plurality of first storage systems to defer operations for executing write requests and sends the information related to the marker to the rest of the plurality of first storage systems repeatedly, and
wherein the one of the plurality of first storage systems increases a value of the number included in the information related to the marker upon sending the information related to the marker to the rest of the plurality of first storage systems.
14. A remote copy system according to claim 12 ,
wherein when the one of the plurality of second storage systems instructs the rest of the plurality of second storage systems to store write data to the second logical volumes, the one of the plurality of second storage systems designates the number included in the marker, and each second control section controls to store the write data from the second storage area to the second logical volume according to the write order information assigned to the write data, until reaching the marker including the number designated by the one of the plurality of second storage systems.
15. A remote copy system according to claim 11 ,
wherein a pace of transmission of write data from a first storage system to a second storage system is controlled by using the transmission request issued from a second control section of the second storage system to the first storage system.
16. A remote copy system according to claim 15 ,
wherein the pace of the transmission of write data is controlled based on the amount of write data received by the second storage system.
17. A remote copy system according to claim 15 ,
wherein the pace of the transmission of write data is controlled based on a load of the second storage system.
18. A remote copy system according to claim 11 ,
wherein each first storage area includes a storage area in a logical volume configured by at least one physical storage device.
19. A remote copy system according to claim 11 ,
wherein each second storage area includes a storage area in a logical volume configured by at least one physical storage device.
20. A remote copy system according to claim 11 ,
wherein the write order information assigned to the write data or the marker in a first storage system is a sequential number, which indicates a receiving order of the write data from the computer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/727,947 US20070192555A1 (en) | 2003-12-03 | 2007-03-29 | Remote copy system |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003403970 | 2003-12-03 | ||
JP2003-403970 | 2003-12-03 | ||
US10/796,175 US7085788B2 (en) | 2003-12-03 | 2004-03-10 | Remote copy system configured to receive both a write request including a write time and a write request not including a write time. |
US11/002,105 US7293050B2 (en) | 2003-12-03 | 2004-12-03 | Remote copy system |
US11/727,947 US20070192555A1 (en) | 2003-12-03 | 2007-03-29 | Remote copy system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/002,105 Continuation US7293050B2 (en) | 2003-12-03 | 2004-12-03 | Remote copy system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20070192555A1 true US20070192555A1 (en) | 2007-08-16 |
Family
ID=34463972
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/796,175 Active 2024-05-11 US7085788B2 (en) | 2003-12-03 | 2004-03-10 | Remote copy system configured to receive both a write request including a write time and a write request not including a write time. |
US11/002,105 Expired - Fee Related US7293050B2 (en) | 2003-12-03 | 2004-12-03 | Remote copy system |
US11/727,947 Abandoned US20070192555A1 (en) | 2003-12-03 | 2007-03-29 | Remote copy system |
US11/727,918 Active 2024-08-22 US8176010B2 (en) | 2003-12-03 | 2007-03-29 | Remote copy system |
US13/438,440 Expired - Lifetime US8375000B2 (en) | 2003-12-03 | 2012-04-03 | Remote copy system |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/796,175 Active 2024-05-11 US7085788B2 (en) | 2003-12-03 | 2004-03-10 | Remote copy system configured to receive both a write request including a write time and a write request not including a write time. |
US11/002,105 Expired - Fee Related US7293050B2 (en) | 2003-12-03 | 2004-12-03 | Remote copy system |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/727,918 Active 2024-08-22 US8176010B2 (en) | 2003-12-03 | 2007-03-29 | Remote copy system |
US13/438,440 Expired - Lifetime US8375000B2 (en) | 2003-12-03 | 2012-04-03 | Remote copy system |
Country Status (2)
Country | Link |
---|---|
US (5) | US7085788B2 (en) |
EP (2) | EP1837768A3 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060074957A1 (en) * | 2004-09-29 | 2006-04-06 | Hitachi, Ltd. | Method of configuration management of a computer system |
US7702871B1 (en) * | 2007-08-31 | 2010-04-20 | Emc Corporation | Write pacing |
US20110087850A1 (en) * | 2009-10-13 | 2011-04-14 | Fujitsu Limited | Storage apparatus and method for storage apparatus |
US20120216073A1 (en) * | 2011-02-18 | 2012-08-23 | Ab Initio Technology Llc | Restarting Processes |
Families Citing this family (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7085788B2 (en) | 2003-12-03 | 2006-08-01 | Hitachi, Ltd. | Remote copy system configured to receive both a write request including a write time and a write request not including a write time. |
US8032726B2 (en) | 2003-12-03 | 2011-10-04 | Hitachi, Ltd | Remote copy system |
US7724599B2 (en) | 2003-12-03 | 2010-05-25 | Hitachi, Ltd. | Remote copy system |
US7437389B2 (en) | 2004-03-10 | 2008-10-14 | Hitachi, Ltd. | Remote copy system |
JP4434857B2 (en) | 2003-12-04 | 2010-03-17 | 株式会社日立製作所 | Remote copy system and system |
JP4477370B2 (en) * | 2004-01-30 | 2010-06-09 | 株式会社日立製作所 | Data processing system |
JP2005258850A (en) * | 2004-03-12 | 2005-09-22 | Hitachi Ltd | Computer system |
JP2005346610A (en) * | 2004-06-07 | 2005-12-15 | Hitachi Ltd | Storage system and method for acquisition and use of snapshot |
JP4477950B2 (en) | 2004-07-07 | 2010-06-09 | 株式会社日立製作所 | Remote copy system and storage device system |
JP4412722B2 (en) | 2004-07-28 | 2010-02-10 | 株式会社日立製作所 | Remote copy system |
JP4915775B2 (en) * | 2006-03-28 | 2012-04-11 | 株式会社日立製作所 | Storage system and storage system remote copy control method |
US7330861B2 (en) | 2004-09-10 | 2008-02-12 | Hitachi, Ltd. | Remote copying system and method of controlling remote copying |
US20060080362A1 (en) * | 2004-10-12 | 2006-04-13 | Lefthand Networks, Inc. | Data Synchronization Over a Computer Network |
US7657578B1 (en) * | 2004-12-20 | 2010-02-02 | Symantec Operating Corporation | System and method for volume replication in a storage environment employing distributed block virtualization |
JP4249719B2 (en) * | 2005-03-29 | 2009-04-08 | 株式会社日立製作所 | Backup system, program, and backup method |
US8401997B1 (en) * | 2005-06-30 | 2013-03-19 | Symantec Operating Corporation | System and method for replication using consistency interval markers in a distributed storage environment |
US7165158B1 (en) * | 2005-08-17 | 2007-01-16 | Hitachi, Ltd. | System and method for migrating a replication system |
US7702851B2 (en) | 2005-09-20 | 2010-04-20 | Hitachi, Ltd. | Logical volume transfer method and storage network system |
JP4693589B2 (en) | 2005-10-26 | 2011-06-01 | 株式会社日立製作所 | Computer system, storage area allocation method, and management computer |
JP4856955B2 (en) | 2006-01-17 | 2012-01-18 | 株式会社日立製作所 | NAS system and remote copy method |
JP2007323218A (en) * | 2006-05-31 | 2007-12-13 | Hitachi Ltd | Backup system |
US10587688B2 (en) * | 2014-09-19 | 2020-03-10 | Netapp, Inc. | Techniques for coordinating parallel performance and cancellation of commands in a storage cluster system |
US10303782B1 (en) | 2014-12-29 | 2019-05-28 | Veritas Technologies Llc | Method to allow multi-read access for exclusive access of virtual disks by using a virtualized copy of the disk |
US9959069B2 (en) * | 2015-02-12 | 2018-05-01 | Microsoft Technology Licensing, Llc | Externalized execution of input method editor |
US10303360B2 (en) * | 2015-09-30 | 2019-05-28 | International Business Machines Corporation | Replicating data in a data storage system |
US20180143766A1 (en) * | 2016-11-18 | 2018-05-24 | International Business Machines Corporation | Failure protection copy management |
US10180802B2 (en) | 2017-05-18 | 2019-01-15 | International Business Machines Corporation | Collision detection at multi-node storage sites |
WO2019126395A1 (en) | 2017-12-19 | 2019-06-27 | Chase Therapeutics Corporation | Methods for developing pharmaceuticals for treating neurodegenerative conditions |
US10452502B2 (en) | 2018-01-23 | 2019-10-22 | International Business Machines Corporation | Handling node failure in multi-node data storage systems |
EP3963047A4 (en) | 2019-04-30 | 2023-06-21 | Chase Therapeutics Corporation | Alpha-synuclein assays |
Citations (50)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5404548A (en) * | 1990-04-19 | 1995-04-04 | Kabushiki Kaisha Toshiba | Data transfer control system transferring command data and status with identifying code for the type of information |
US5577240A (en) * | 1994-12-07 | 1996-11-19 | Xerox Corporation | Identification of stable writes in weakly consistent replicated databases while providing access to all writes in such a database |
US5603003A (en) * | 1992-03-04 | 1997-02-11 | Hitachi, Ltd. | High speed file access control method and computer system including a plurality of storage subsystems connected on a bus |
US5623599A (en) * | 1994-07-29 | 1997-04-22 | International Business Machines Corporation | Method and apparatus for processing a synchronizing marker for an asynchronous remote data copy |
US5926816A (en) * | 1996-10-09 | 1999-07-20 | Oracle Corporation | Database Synchronizer |
US5996054A (en) * | 1996-09-12 | 1999-11-30 | Veritas Software Corp. | Efficient virtualized mapping space for log device data storage system |
US6092066A (en) * | 1996-05-31 | 2000-07-18 | Emc Corporation | Method and apparatus for independent operation of a remote data facility |
US6157991A (en) * | 1998-04-01 | 2000-12-05 | Emc Corporation | Method and apparatus for asynchronously updating a mirror of a source device |
US6209002B1 (en) * | 1999-02-17 | 2001-03-27 | Emc Corporation | Method and apparatus for cascading data through redundant data storage units |
US6260124B1 (en) * | 1998-08-13 | 2001-07-10 | International Business Machines Corporation | System and method for dynamically resynchronizing backup data |
US6353878B1 (en) * | 1998-08-13 | 2002-03-05 | Emc Corporation | Remote control of backup media in a secondary storage subsystem through access to a primary storage subsystem |
US6366987B1 (en) * | 1998-08-13 | 2002-04-02 | Emc Corporation | Computer data storage physical backup and logical restore |
US6408370B2 (en) * | 1997-09-12 | 2002-06-18 | Hitachi, Ltd. | Storage system assuring data integrity and a synchronous remote data duplexing |
US20020078296A1 (en) * | 2000-12-20 | 2002-06-20 | Yasuaki Nakamura | Method and apparatus for resynchronizing paired volumes via communication line |
US6449622B1 (en) * | 1999-03-08 | 2002-09-10 | Starfish Software, Inc. | System and methods for synchronizing datasets when dataset changes may be received out of order |
US6463501B1 (en) * | 1999-10-21 | 2002-10-08 | International Business Machines Corporation | Method, system and program for maintaining data consistency among updates across groups of storage areas using update times |
US6516327B1 (en) * | 1998-12-24 | 2003-02-04 | International Business Machines Corporation | System and method for synchronizing data in multiple databases |
US20030051111A1 (en) * | 2001-08-08 | 2003-03-13 | Hitachi, Ltd. | Remote copy control method, storage sub-system with the method, and large area data storage system using them |
US20030050930A1 (en) * | 2001-09-12 | 2003-03-13 | Malcolm Mosher | Method and apparatus for lockstep data replication |
US20030078903A1 (en) * | 1998-10-29 | 2003-04-24 | Takahisa Kimura | Information storage system |
US6581143B2 (en) * | 1999-12-23 | 2003-06-17 | Emc Corporation | Data processing method and apparatus for enabling independent access to replicated data |
US20030177321A1 (en) * | 2002-01-03 | 2003-09-18 | Hitachi, Ltd. | Data synchronization of multiple remote storage after remote copy suspension |
US20030188116A1 (en) * | 2002-03-29 | 2003-10-02 | Hitachi, Ltd. | Method and apparatus for storage system |
US6647474B2 (en) * | 1993-04-23 | 2003-11-11 | Emc Corporation | Remote data mirroring system using local and remote write pending indicators |
US6658542B2 (en) * | 1999-03-03 | 2003-12-02 | International Business Machines Corporation | Method and system for caching data in a storage system |
US6658434B1 (en) * | 2000-02-02 | 2003-12-02 | Hitachi, Ltd. | Method of and a system for recovering data in an information processing system |
US6665781B2 (en) * | 2000-10-17 | 2003-12-16 | Hitachi, Ltd. | Method and apparatus for data duplexing in storage unit system |
US20040024975A1 (en) * | 2002-07-30 | 2004-02-05 | Noboru Morishita | Storage system for multi-site remote copy |
US20040078399A1 (en) * | 2000-03-31 | 2004-04-22 | Hitachi, Ltd. | Method for duplicating data of storage subsystem and data duplicating system |
US20040128442A1 (en) * | 2002-09-18 | 2004-07-01 | Netezza Corporation | Disk mirror architecture for database appliance |
US20040148477A1 (en) * | 2001-06-28 | 2004-07-29 | Cochran Robert A | Method and system for providing logically consistent logical unit backup snapshots within one or more data storage devices |
US20040193816A1 (en) * | 2003-03-25 | 2004-09-30 | Emc Corporation | Reading virtual ordered writes at a remote storage device |
US20040193802A1 (en) * | 2003-03-25 | 2004-09-30 | Emc Corporation | Reading virtual ordered writes at local storage device |
US6816951B2 (en) * | 2001-11-30 | 2004-11-09 | Hitachi, Ltd. | Remote mirroring with write ordering sequence generators |
US6823347B2 (en) * | 2003-04-23 | 2004-11-23 | Oracle International Corporation | Propagating commit times |
US20040250031A1 (en) * | 2003-06-06 | 2004-12-09 | Minwen Ji | Batched, asynchronous data redundancy technique |
US20040250030A1 (en) * | 2003-06-06 | 2004-12-09 | Minwen Ji | Data redundancy using portal and host computer |
US20040260972A1 (en) * | 2003-06-06 | 2004-12-23 | Minwen Ji | Adaptive batch sizing for asynchronous data redundancy |
US20040268177A1 (en) * | 2003-06-06 | 2004-12-30 | Minwen Ji | Distributed data redundancy operations |
US20040267829A1 (en) * | 2003-06-27 | 2004-12-30 | Hitachi, Ltd. | Storage system |
US20050033828A1 (en) * | 2003-08-04 | 2005-02-10 | Naoki Watanabe | Remote copy system |
US20050066122A1 (en) * | 2003-03-25 | 2005-03-24 | Vadim Longinov | Virtual ordered writes |
US20050091415A1 (en) * | 2003-09-30 | 2005-04-28 | Robert Armitano | Technique for identification of information based on protocol markers |
US20050102554A1 (en) * | 2003-11-05 | 2005-05-12 | Ofir Zohar | Parallel asynchronous order-preserving transaction processing |
US6898685B2 (en) * | 2003-03-25 | 2005-05-24 | Emc Corporation | Ordering data writes from a local storage device to a remote storage device |
US20050120056A1 (en) * | 2003-12-01 | 2005-06-02 | Emc Corporation | Virtual ordered writes for multiple storage devices |
US20050125617A1 (en) * | 2003-12-04 | 2005-06-09 | Kenta Ninose | Storage control apparatus and storage control method |
US20050132248A1 (en) * | 2003-12-01 | 2005-06-16 | Emc Corporation | Data recovery for virtual ordered writes for multiple storage devices |
US20050149817A1 (en) * | 2003-12-11 | 2005-07-07 | International Business Machines Corporation | Data transfer error checking |
US20050198454A1 (en) * | 2004-03-08 | 2005-09-08 | Emc Corporation | Switching between virtual ordered writes mode and synchronous or semi-synchronous RDF transfer mode |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06149485A (en) | 1992-11-06 | 1994-05-27 | Fujitsu Ltd | Data completion guarantee processing method |
KR0128271B1 (en) | 1994-02-22 | 1998-04-15 | William T. Ellis | Remote data duplexing
JP2894676B2 (en) | 1994-03-21 | 1999-05-24 | インターナショナル・ビジネス・マシーンズ・コーポレイション | Asynchronous remote copy system and asynchronous remote copy method |
US6101497A (en) * | 1996-05-31 | 2000-08-08 | Emc Corporation | Method and apparatus for independent and simultaneous access to a common data set |
US5937414A (en) * | 1997-02-28 | 1999-08-10 | Oracle Corporation | Method and apparatus for providing database system replication in a mixed propagation environment |
US6493796B1 (en) | 1999-09-01 | 2002-12-10 | Emc Corporation | Method and apparatus for maintaining consistency of data stored in a group of mirroring devices |
JP2001150210A (en) | 1999-11-25 | 2001-06-05 | Matsushita Electric Works Ltd | Fastening implement |
US6622152B1 (en) * | 2000-05-09 | 2003-09-16 | International Business Machines Corporation | Remote log based replication solution |
US7133986B2 (en) | 2003-09-29 | 2006-11-07 | International Business Machines Corporation | Method, system, and program for forming a consistency group |
US7085788B2 (en) | 2003-12-03 | 2006-08-01 | Hitachi, Ltd. | Remote copy system configured to receive both a write request including a write time and a write request not including a write time
- 2004
- 2004-03-10 US US10/796,175 patent/US7085788B2/en active Active
- 2004-06-07 EP EP07010832A patent/EP1837768A3/en not_active Ceased
- 2004-06-07 EP EP04013357A patent/EP1538527A3/en not_active Withdrawn
- 2004-12-03 US US11/002,105 patent/US7293050B2/en not_active Expired - Fee Related
- 2007
- 2007-03-29 US US11/727,947 patent/US20070192555A1/en not_active Abandoned
- 2007-03-29 US US11/727,918 patent/US8176010B2/en active Active
- 2012
- 2012-04-03 US US13/438,440 patent/US8375000B2/en not_active Expired - Lifetime
Patent Citations (53)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5404548A (en) * | 1990-04-19 | 1995-04-04 | Kabushiki Kaisha Toshiba | Data transfer control system transferring command data and status with identifying code for the type of information |
US5603003A (en) * | 1992-03-04 | 1997-02-11 | Hitachi, Ltd. | High speed file access control method and computer system including a plurality of storage subsystems connected on a bus |
US6647474B2 (en) * | 1993-04-23 | 2003-11-11 | Emc Corporation | Remote data mirroring system using local and remote write pending indicators |
US5623599A (en) * | 1994-07-29 | 1997-04-22 | International Business Machines Corporation | Method and apparatus for processing a synchronizing marker for an asynchronous remote data copy |
US5577240A (en) * | 1994-12-07 | 1996-11-19 | Xerox Corporation | Identification of stable writes in weakly consistent replicated databases while providing access to all writes in such a database |
US6092066A (en) * | 1996-05-31 | 2000-07-18 | Emc Corporation | Method and apparatus for independent operation of a remote data facility |
US5996054A (en) * | 1996-09-12 | 1999-11-30 | Veritas Software Corp. | Efficient virtualized mapping space for log device data storage system |
US5926816A (en) * | 1996-10-09 | 1999-07-20 | Oracle Corporation | Database synchronizer
US6408370B2 (en) * | 1997-09-12 | 2002-06-18 | Hitachi, Ltd. | Storage system assuring data integrity and asynchronous remote data duplexing
US6157991A (en) * | 1998-04-01 | 2000-12-05 | Emc Corporation | Method and apparatus for asynchronously updating a mirror of a source device |
US20010010070A1 (en) * | 1998-08-13 | 2001-07-26 | Crockett Robert Nelson | System and method for dynamically resynchronizing backup data |
US6353878B1 (en) * | 1998-08-13 | 2002-03-05 | Emc Corporation | Remote control of backup media in a secondary storage subsystem through access to a primary storage subsystem |
US6366987B1 (en) * | 1998-08-13 | 2002-04-02 | Emc Corporation | Computer data storage physical backup and logical restore |
US6260124B1 (en) * | 1998-08-13 | 2001-07-10 | International Business Machines Corporation | System and method for dynamically resynchronizing backup data |
US20050120092A1 (en) * | 1998-09-09 | 2005-06-02 | Hitachi, Ltd. | Remote copy for a storage controller with reduced data size |
US20030078903A1 (en) * | 1998-10-29 | 2003-04-24 | Takahisa Kimura | Information storage system |
US6516327B1 (en) * | 1998-12-24 | 2003-02-04 | International Business Machines Corporation | System and method for synchronizing data in multiple databases |
US6209002B1 (en) * | 1999-02-17 | 2001-03-27 | Emc Corporation | Method and apparatus for cascading data through redundant data storage units |
US6658542B2 (en) * | 1999-03-03 | 2003-12-02 | International Business Machines Corporation | Method and system for caching data in a storage system |
US6449622B1 (en) * | 1999-03-08 | 2002-09-10 | Starfish Software, Inc. | System and methods for synchronizing datasets when dataset changes may be received out of order |
US6463501B1 (en) * | 1999-10-21 | 2002-10-08 | International Business Machines Corporation | Method, system and program for maintaining data consistency among updates across groups of storage areas using update times |
US6581143B2 (en) * | 1999-12-23 | 2003-06-17 | Emc Corporation | Data processing method and apparatus for enabling independent access to replicated data |
US6658434B1 (en) * | 2000-02-02 | 2003-12-02 | Hitachi, Ltd. | Method of and a system for recovering data in an information processing system |
US20040078399A1 (en) * | 2000-03-31 | 2004-04-22 | Hitachi, Ltd. | Method for duplicating data of storage subsystem and data duplicating system |
US6665781B2 (en) * | 2000-10-17 | 2003-12-16 | Hitachi, Ltd. | Method and apparatus for data duplexing in storage unit system |
US20020078296A1 (en) * | 2000-12-20 | 2002-06-20 | Yasuaki Nakamura | Method and apparatus for resynchronizing paired volumes via communication line |
US20040148477A1 (en) * | 2001-06-28 | 2004-07-29 | Cochran Robert A | Method and system for providing logically consistent logical unit backup snapshots within one or more data storage devices |
US20030051111A1 (en) * | 2001-08-08 | 2003-03-13 | Hitachi, Ltd. | Remote copy control method, storage sub-system with the method, and large area data storage system using them |
US20030050930A1 (en) * | 2001-09-12 | 2003-03-13 | Malcolm Mosher | Method and apparatus for lockstep data replication |
US6816951B2 (en) * | 2001-11-30 | 2004-11-09 | Hitachi, Ltd. | Remote mirroring with write ordering sequence generators |
US20030177321A1 (en) * | 2002-01-03 | 2003-09-18 | Hitachi, Ltd. | Data synchronization of multiple remote storage after remote copy suspension |
US20030188116A1 (en) * | 2002-03-29 | 2003-10-02 | Hitachi, Ltd. | Method and apparatus for storage system |
US20040024975A1 (en) * | 2002-07-30 | 2004-02-05 | Noboru Morishita | Storage system for multi-site remote copy |
US20040128442A1 (en) * | 2002-09-18 | 2004-07-01 | Netezza Corporation | Disk mirror architecture for database appliance |
US20040193802A1 (en) * | 2003-03-25 | 2004-09-30 | Emc Corporation | Reading virtual ordered writes at local storage device |
US20050149666A1 (en) * | 2003-03-25 | 2005-07-07 | David Meiri | Virtual ordered writes |
US20040193816A1 (en) * | 2003-03-25 | 2004-09-30 | Emc Corporation | Reading virtual ordered writes at a remote storage device |
US20050066122A1 (en) * | 2003-03-25 | 2005-03-24 | Vadim Longinov | Virtual ordered writes |
US6898685B2 (en) * | 2003-03-25 | 2005-05-24 | Emc Corporation | Ordering data writes from a local storage device to a remote storage device |
US6823347B2 (en) * | 2003-04-23 | 2004-11-23 | Oracle International Corporation | Propagating commit times |
US20040250030A1 (en) * | 2003-06-06 | 2004-12-09 | Minwen Ji | Data redundancy using portal and host computer |
US20040268177A1 (en) * | 2003-06-06 | 2004-12-30 | Minwen Ji | Distributed data redundancy operations |
US20040260972A1 (en) * | 2003-06-06 | 2004-12-23 | Minwen Ji | Adaptive batch sizing for asynchronous data redundancy |
US20040250031A1 (en) * | 2003-06-06 | 2004-12-09 | Minwen Ji | Batched, asynchronous data redundancy technique |
US20040267829A1 (en) * | 2003-06-27 | 2004-12-30 | Hitachi, Ltd. | Storage system |
US20050033828A1 (en) * | 2003-08-04 | 2005-02-10 | Naoki Watanabe | Remote copy system |
US20050091415A1 (en) * | 2003-09-30 | 2005-04-28 | Robert Armitano | Technique for identification of information based on protocol markers |
US20050102554A1 (en) * | 2003-11-05 | 2005-05-12 | Ofir Zohar | Parallel asynchronous order-preserving transaction processing |
US20050120056A1 (en) * | 2003-12-01 | 2005-06-02 | Emc Corporation | Virtual ordered writes for multiple storage devices |
US20050132248A1 (en) * | 2003-12-01 | 2005-06-16 | Emc Corporation | Data recovery for virtual ordered writes for multiple storage devices |
US20050125617A1 (en) * | 2003-12-04 | 2005-06-09 | Kenta Ninose | Storage control apparatus and storage control method |
US20050149817A1 (en) * | 2003-12-11 | 2005-07-07 | International Business Machines Corporation | Data transfer error checking |
US20050198454A1 (en) * | 2004-03-08 | 2005-09-08 | Emc Corporation | Switching between virtual ordered writes mode and synchronous or semi-synchronous RDF transfer mode |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060074957A1 (en) * | 2004-09-29 | 2006-04-06 | Hitachi, Ltd. | Method of configuration management of a computer system |
US7702871B1 (en) * | 2007-08-31 | 2010-04-20 | Emc Corporation | Write pacing |
US20110087850A1 (en) * | 2009-10-13 | 2011-04-14 | Fujitsu Limited | Storage apparatus and method for storage apparatus |
US8589646B2 (en) * | 2009-10-13 | 2013-11-19 | Fujitsu Limited | Storage apparatus and method for storage apparatus |
US20120216073A1 (en) * | 2011-02-18 | 2012-08-23 | Ab Initio Technology Llc | Restarting Processes |
US9021299B2 (en) * | 2011-02-18 | 2015-04-28 | Ab Initio Technology Llc | Restarting processes |
Also Published As
Publication number | Publication date |
---|---|
EP1538527A3 (en) | 2006-04-26 |
US20050125465A1 (en) | 2005-06-09 |
US7293050B2 (en) | 2007-11-06 |
EP1837768A2 (en) | 2007-09-26 |
US7085788B2 (en) | 2006-08-01 |
US20050125618A1 (en) | 2005-06-09 |
US8176010B2 (en) | 2012-05-08 |
EP1538527A2 (en) | 2005-06-08 |
US20070174352A1 (en) | 2007-07-26 |
EP1837768A3 (en) | 2012-02-15 |
US20120191652A1 (en) | 2012-07-26 |
US8375000B2 (en) | 2013-02-12 |
Similar Documents
Publication | Title |
---|---|
US8176010B2 (en) | Remote copy system |
US7330861B2 (en) | Remote copying system and method of controlling remote copying |
US7536444B2 (en) | Remote copying system and remote copying method |
JP4044717B2 (en) | Data duplication method and data duplication system for storage subsystem |
US8397041B2 (en) | Remote copy system |
US7225307B2 (en) | Apparatus, system, and method for synchronizing an asynchronous mirror volume using a synchronous mirror volume |
EP0671686B1 (en) | Synchronous remote data duplexing |
JP4791051B2 (en) | Method, system, and computer program for system architecture for any number of backup components |
JP2003507791A (en) | Remote mirroring system, apparatus and method |
US8250240B2 (en) | Message conversion method and message conversion system |
US7437389B2 (en) | Remote copy system |
US8032726B2 (en) | Remote copy system |
JP2005190456A (en) | Remote copy system |
EP1691291B1 (en) | System and method to maintain consistency on the secondary storage of a mirrored system |
EP1840747A1 (en) | Remote copying system and method of controlling remote copying |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |