CA2020268A1 - Digital data management system - Google Patents

Digital data management system

Info

Publication number
CA2020268A1
Authority
CA
Canada
Prior art keywords
storage media
data
operations
host
write
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002020268A
Other languages
French (fr)
Inventor
Scott H. Davis
William L. Goleman
David W. Thiel
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Digital Equipment Corp
Original Assignee
Scott H. Davis
William L. Goleman
David W. Thiel
Digital Equipment Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Scott H. Davis, William L. Goleman, David W. Thiel, Digital Equipment Corporation filed Critical Scott H. Davis
Publication of CA2020268A1 publication Critical patent/CA2020268A1/en
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/1608Error detection by comparing the output signals of redundant hardware
    • G06F11/1612Error detection by comparing the output signals of redundant hardware where the redundant component is persistent storage
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2056Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F11/2082Data synchronisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C29/00Checking stores for correct operation ; Subsequent repair; Testing stores during standby or offline operation
    • G11C29/70Masking faults in memories by using spares or by reconfiguring
    • G11C29/74Masking faults in memories by using spares or by reconfiguring using duplex memories, i.e. using dual copies
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0673Single storage device

Abstract

ABSTRACT OF THE DISCLOSURE
A method of performing a management operation on a data storage device, the preferred embodiment describing an improved method and apparatus for merging two storage media (e.g., hard disks) in a shadow set of storage media. The preferred method of managing a shadow set of storage media accessible by one or more data sources (e.g., host processors) for I/O operations comprises the steps of: A. carrying out successive comparisons of data stored in corresponding locations in a plurality of shadow set storage media, respectively, while maintaining access to the storage media by the data sources for I/O operations; and B. performing a management operation on at least one of the shadow set storage media, the management operation comprising: (a) interrupting I/O operations to at least the shadow set storage medium on which the management operation is performed; (b) modifying data on the shadow set storage medium whose I/O is interrupted, based on the results of the comparisons performed in step A; and (c) resuming the availability of the modified storage medium for I/O operations by the data sources.


Description

BACKGROUND OF THE INVENTION
This invention relates to a device for storing digital data. The preferred embodiment is described in connection with a system for establishing and maintaining one or more duplicate or "shadow" copies of stored data to thereby improve the availability of the stored data.
A typical digital computer system includes one or more mass storage subsystems for storing data (which may include program instructions) to be processed. In typical mass storage subsystems, the data is actually stored on disks. Disks are divided into a plurality of tracks, at selected radial distances from the center, and sectors, defining particular angular regions across each track, with each track and set of one or more sectors comprising a block, in which data is stored.
Since stored data may be unintentionally corrupted or destroyed, systems have been developed that create multiple copies of stored data, usually on separate storage devices, so that if the data on one of the devices or disks is damaged, it can be recovered from one or more of the remaining copies. Such multiple copies are known as the shadow set. In a shadow set, typically data that is stored in particular blocks on one member of the shadow set is the same as data stored in corresponding blocks on the other members of the shadow set. It is usually desirable to permit multiple host processors to simultaneously access (i.e., in parallel) the shadow set for read and write type requests ("I/O" requests).
It is sometimes necessary to "merge" two (or more) storage devices to reassemble a complete shadow set, where the devices were previously members of the same shadow set, but currently contain data that is valid, although possibly inconsistent. Data in a particular block is valid if it is not erroneous, that is, if it is correct, as determined by an error correction technique, or, if it is incorrect but correctable with use of the error correction technique. Shadow set members have data that is inconsistent if they have corresponding blocks whose data contents are different. For example, if one of the hosts malfunctions (e.g., fails), it may have had outstanding writes that completed to some shadow set members but not to others, resulting in data that is inconsistent. A merge operation ensures that the data stored on corresponding blocks of the shadow set members is consistent but does not determine the integrity (i.e., accuracy) of the data stored in the blocks which may be inconsistent. The integrity of the data is verified by higher level techniques (e.g., by an applications program).

The invention generally features a method of performing a management operation on a data storage device, the preferred embodiment describing an improved method and apparatus for merging two storage media (e.g., hard disks) in a shadow set of storage media. The preferred method of managing a shadow set of storage media accessible by one or more data sources (e.g., host processors) for I/O operations, comprises the steps of:
A. carrying out successive comparisons of data stored in corresponding locations in a plurality of shadow set storage media, respectively, while maintaining access to the storage media by the data sources for I/O operations;
and B. performing a management operation on at least one of the shadow set storage media, the management operation comprising: (a) interrupting I/O operations to at least the shadow set storage medium on which the management operation is performed; (b) modifying data on the shadow set storage medium whose I/O is interrupted, based on the results of the comparisons performed in step A; and (c) resuming the availability of the modified storage medium for I/O operations by the data sources.
In the preferred embodiment, the step of modifying comprises making data on the modified storage medium consistent with data on another of the shadow set storage media, by reading data from one of the storage media and writing the read data to the modified storage medium. The shadow set storage media are accessible by a plurality of data sources.
The invention allows inconsistencies in duplicate copies of stored data to be corrected while maximizing the availability of the data in the various copies comprising the shadow set during the correction process.
Maximum availability is achieved since I/O operations, initiated by the hosts, are interrupted only when an inconsistency is found, and only for a period long enough to correct the inconsistency.
Other advantages and features of the invention will be apparent from the following detailed description of the invention and the appended claims.

DESCRIPTION OF PREFERRED EMBODIMENTS
Drawings
We first briefly describe the drawings.
Fig. 1 is a system according to the present invention using a shadow set.
Figs. 2A, 2B, 3A, 3B, 4, 5A, 5B, 6A and 6B illustrate data structures used with the invention.
Figs. 7A, B and C together form a flow chart illustrating a method of merging two members of a shadow set according to a preferred embodiment of the invention.
Structure and operation
Referring to Fig. 1, a computer system including the invention includes a plurality of hosts 9, each of which includes a processor 10, memory 12 (including buffer storage) and a communications interface 14. The hosts 9 are each directly connected through a communications medium 16 (e.g., by a virtual circuit) to two or more storage subsystems, illustrated generally and identified by reference numeral 17 (two are shown).
Each storage subsystem 17 includes a disk controller 18 that controls one or more disks 20, which form the members of the shadow set. Disk controller 18 includes a buffer 22, a processor 24 and memory 26 (e.g., volatile memory). Processor 24 receives I/O requests from hosts 9 and controls reads from and writes to disk 20. Buffer 22 temporarily stores data received in connection with a write command before the data is written to a disk 20. Buffer 22 also stores data read from a disk 20 before the data is transmitted to the host in response to a read command. Processor 24 stores various types of information in memory 26, described more fully below.
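The component relationships just described can be summarized in a small data-model sketch. This is purely illustrative: the struct names, field types, and limits below are assumptions made for the sketch, not anything defined by the patent.

```c
#include <stdint.h>

#define MAX_UNITS_PER_CTRL 8   /* assumed limit, for the sketch only */
#define MAX_SUBSYSTEMS     4

/* Disk controller 18: buffer 22 and memory 26 (where the write history log lives). */
struct disk_controller {
    void    *staging_buffer;                   /* buffer 22              */
    void    *log_memory;                       /* volatile memory 26     */
    uint32_t unit_numbers[MAX_UNITS_PER_CTRL]; /* disks 20 it serves     */
    int      n_units;
};

/* Host 9: memory 12 holds the system table (merge status, members, etc.). */
struct host {
    void                   *memory;                     /* memory 12     */
    struct disk_controller *subsystems[MAX_SUBSYSTEMS]; /* via medium 16 */
    int                     n_subsystems;
};
```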
Each host 9 will store, in its memory 12, a table that includes information about the system that the hosts 9 need to perform many operations. For example, hosts 9 will perform I/O operations to storage subsystems 17 and must know which storage subsystems are available for use, what disks are stored in the subsystems, etc. As will be described in greater detail below, the hosts 9 will slightly alter the procedure for I/O operations if a merge operation is being carried out in the system by a particular host 9. Therefore, the table will store status information regarding any ongoing merge operation (as well as other operations). The table also contains other standard information.
While each storage subsystem may include multiple disks 20, the members of the shadow set are chosen to include disks in different storage subsystems 17.
Therefore, a host may directly access each member of the shadow set, through its interface 14 and over communication medium 16, without requiring it to access two shadow set members through the same disk controller 18. This will avoid a "single point of failure" in the event of a failure of one of the disk controllers 18. In other words, if members of a shadow set have a common disk controller 18, and if that controller 18 malfunctions, the hosts will not be able to successfully perform any I/O operations. In the preferred system, the shadow set members are "distributed", so that the failure of one device (e.g., one disk controller 18) will not inhibit I/O operations because they can be performed using another shadow set member accessed through another disk controller.
In some cases a host 9 initiates a merge operation in which it makes data on two disks 20 comprising members of a shadow set consistent. In a merge operation, the host 9 initiates, by means of write and read commands, read and write operations. Before proceeding further, it will be helpful to describe these commands in greater detail.
When a host 9 wishes to write data to a disk 20, which may comprise a member of a shadow set, the host issues a command whose format is illustrated in Fig. 2A.
The command includes a "command reference number" field that uniquely identifies the command, and a "unit number" field that identifies the unit (e.g., the disk 20) to which data is to be written. To accomplish write operations for each disk 20 comprising a member of the shadow set, the host issues a separate write command to each disk 20 with the proper unit number that identifies the disk 20. The "opcode" field identifies that the operation is a write. The "byte count" field contains a value that identifies the total number of bytes comprising the data to be written and the "logical block number" identifies the starting storage location on the disk at which the data is to be written. The "buffer descriptor" identifies the location in host memory 12 that contains the data to be written.
The "host reference number," "entry locator" and "entry id" are used in connection with a "write log"
feature, described in detail below.
The format of a read command is illustrated in Fig. 2B, and includes fields that are similar to the write command fields. For a read command, the buffer descriptor contains the location in host memory 12 where the data read from the disk is to be stored. The read command does not include the fields in the write command that are associated with the write log feature, namely, the host reference number field, entry locator field, and entry ID field (Fig. 2A).
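Taken together, the write and read command layouts described above can be sketched as C structures. The field names follow the text; the widths, ordering, and types are assumptions made only for illustration (the patent does not fix them here).

```c
#include <stdint.h>

/* Write command message (Fig. 2A), assumed layout. */
struct write_command {
    uint32_t command_reference_number; /* uniquely identifies the command  */
    uint32_t unit_number;              /* disk 20 the data is written to   */
    uint16_t opcode;                   /* identifies the operation (write) */
    uint16_t modifiers;
    uint32_t byte_count;               /* total bytes to be written        */
    uint32_t logical_block_number;     /* starting block on the disk       */
    uint64_t buffer_descriptor;        /* host memory 12 holding the data  */
    /* write-log fields (absent from the read command): */
    uint32_t host_reference_number;
    uint32_t entry_locator;
    uint32_t entry_id;
};

/* Read command message (Fig. 2B), assumed layout. */
struct read_command {
    uint32_t command_reference_number;
    uint32_t unit_number;
    uint16_t opcode;                   /* identifies the operation (read)  */
    uint16_t modifiers;
    uint32_t byte_count;
    uint32_t logical_block_number;
    uint64_t buffer_descriptor;        /* where the read data is to land   */
};
```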
Once a host transmits a read or write command, it is received by the disk controller 18 that serves the disk 20 identified in the "unit number" field. For a write command, the disk controller 18 will perform the write operation in connection with the identified disk 20 and return an "end message" to the originating host 9.
The format of the write command end message is illustrated in Fig. 3A. The end message includes a number of fields, including a command reference number field whose contents correspond to the contents of the command reference number field of the write command which initiated the storage operation, and a status field that informs the host whether or not the command was completed successfully. If the disk 20 was unable to complete the write operation, the status field can include an error code identifying the nature of the failure.
In response to a read command, the disk controller 18 will read the requested data from its disk 20 and transmit the data to memory 12 of the originating host.
After the data has been transmitted, an end message is generated by the disk controller and sent to the originating host, the format of the read command end message being illustrated in Fig. 3B. The read command end message is similar to the end message for the write command, with the exception that the read command end message does not include an entry locator or entry ID field associated with the write history log feature, described below.
The disk controller 18 also maintains a write history log that includes a number of "write history entries" (Fig. 4), each of which stores information regarding a recent write operation. As described above, each of the storage subsystems that form each shadow set member includes a processor 24 and an associated memory 26. When a write operation is performed to a shadow set member, its disk controller 18 stores, in a write history log in its memory 26, information in a write history entry indicating the data blocks of the shadow set member to which data has been written. The write history entry also stores information identifying the source of the write command message (e.g., the originating host) which initiated the write operation.
Thereafter, if a merge operation becomes necessary, a host 9 managing the merge operation accesses the write history log for each shadow set member engaged in the merge operation and determines from the log entries which data blocks may be inconsistent. For example, if one of hosts 9 in Fig. 1 should fail while initiating a write operation to members of the shadow set, one shadow set member may have completed the write operation but not another shadow set member, leaving the data on the shadow set inconsistent to the extent of the one write operation. Therefore, a merge operation may be performed by a properly functioning host, but it need only be performed for the data blocks that the failed host has recently enabled to be written, because other data blocks will be consistent. Therefore, the host performing the merge operation will access the write history log associated with each member of the shadow set, determine which blocks have been written by the host that failed, and perform a merge on corresponding blocks in the shadow set. Since only the blocks that were written in the shadow set are merged, the operation is completed much more quickly than if a host managing a merge operation enabled the contents of an entire shadow set member to be copied to another shadow set member.
We will first describe how the write history log is created and maintained and then will describe how this information is utilized in a merge operation. When a write command is received by one of the disk controllers 18, the disk controller prepares and stores a write history entry in its memory 26. The format of a write history entry is shown in Fig. 4. The "entry flags" indicate the state of a write history entry. An "Allocated" flag is set if the write history entry is currently allocated (i.e., being used to store information about a write operation). The contents of a "unit number" field identify the specific disk to which the write operation associated with the write history entry was addressed. A "command identifier/status" field is used to identify and give the status of a current command (examples of the information stored in this field are described below).
The "starting logical block number" gives the address (position) on the disk volume at which the associated write operation begins and the "transfer length" specifies the number of bytes of data written to the member's disk. I.e., these fields specify what part of the shadow set member has been potentiall~ modified.
The "host reference number" field identifies the host from which the write operation originated.
The "entry id" field contains a value assigned by a host to uniquely identify a write history entry.
The "entry locator" field contains a value assigned by the shadow set member that uniquely identifies the internal location of thQ write history entry, i.e., the location within memory 26.
When a shadow set disk controller 18 receives a write command from a host, the controller performs the following operations. First, the controller validates the command message fields and checks the state of the disk 20 to perform the write operation in a standard manner for the protocol being used. If the field validation or state checks fail, the shadow set member rejects the command, issuing a write end message (Fig. 3A) whose status field contains the appropriate status.
The controller then checks a flag in the write command that indicates whether a new write history entry for the command is to be allocated or whether a previously allocated write history entry will be reused. If a new write history entry is to be allocated, the "Host Reference Number/entry locator" field will contain the host reference number. The controller will search the set of write history entries in memory 26 for an entry that is not currently allocated -- i.e., a write history entry with a clear "allocated entry" flag.
If an unallocated write history entry cannot be found, the controller 18 completes the command and sends an end message to the host whose "status" field indicates that the write history log is invalid with respect to that host. The controller will also invalidate all write history entries for the host that issued the write command in order to prevent another host from relying on these entries in a merge operation. The entries can be invalidated by using a "history log" modifier contained in each entry that indicates whether that entry is valid or invalid.
If an unallocated write history entry is found, the controller 18 performs the following operations in connection with the write history entry:
a. Sets the "allocated entry flag".
b. Copies the contents of the command message's "unit number" field to the entry's "unit number" field.
c. Copies the contents of the command message's "opcode" and "modifiers" fields to the entry's "command id/status" field.
d. Copies the contents of the command message's "byte count" (or "logical block count") field to the entry's "transfer length" field.
e. Copies the contents of the command message's "logical block number" (or "destination lbn") field to the entry's "starting logical block number" field.
f. Copies the contents of the command message's "host reference number" field to the entry's "host reference number" field.
g. Copies the contents of the command message's "entry id" field to the entry's "entry id" field.
h. Continues normal processing of the command.
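A compact sketch of this new-entry path follows, reusing the write_command and write_history_entry structures sketched earlier. The wh_log array, its length, and the packing of opcode/modifiers into command_id_status are assumptions; steps a through h map onto the comments.

```c
#include <stddef.h>

/* Find an unallocated write history entry and fill it from the write command.
 * Returns NULL if the log is full (the command would then be completed with an
 * "invalid write history log" status and the host's entries invalidated). */
struct write_history_entry *
allocate_entry(struct write_history_entry *wh_log, int nlog,
               const struct write_command *cmd)
{
    for (int i = 0; i < nlog; i++) {
        struct write_history_entry *e = &wh_log[i];
        if (e->entry_flags & WHE_ALLOCATED)
            continue;                                            /* slot in use */
        e->entry_flags |= WHE_ALLOCATED;                         /* step a */
        e->unit_number            = cmd->unit_number;            /* step b */
        e->command_id_status      = ((uint32_t)cmd->opcode << 16)
                                    | cmd->modifiers;            /* step c (assumed packing) */
        e->transfer_length        = cmd->byte_count;             /* step d */
        e->starting_logical_block = cmd->logical_block_number;   /* step e */
        e->host_reference_number  = cmd->host_reference_number;  /* step f */
        e->entry_id               = cmd->entry_id;               /* step g */
        return e;                                                /* step h: continue normally */
    }
    return NULL;
}
```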
If a previously used write history entry is to be reused, then the "Host Reference Number/entry locator" field will contain the "entry locator" that defines the location of the write history entry to be used. This may occur if, for example, a host 9 is re-transmitting a previous write command message, which will have the same value in the "host reference number" field. To accomplish this, the controller 18 will first determine if the value contained in the "entry locator" field identifies one of the set of write history entries in its write log. If the contents of the "entry locator" field do not identify an entry in the set, the controller 18 rejects the command as an invalid command. If the contents of the "entry locator" identify one of the entries in the set of write history entries, the disk controller 18 uses the value in the "entry locator" field as an index into the set of write history entries to find the write history entry to be reused. The controller then checks the setting of the allocated entry flag of the found write history entry. If that flag is clear, indicating that the entry was not actually already allocated, the controller 18 rejects the command with a status of Write History Entry Access Error.
The controller then checks the "command identifier/status" field of the identified write history entry to see if the entry is currently associated with an in-progress command, such as a write command that is being carried out -- i.e., the controller determines if an "encode" flag within the "opcode" field is clear. The encode flag is set when an operation begins and is cleared when the operation completes. If the entry is associated with an in-progress command, the controller rejects the command and sends an end message to the host whose status field identifies a Write History Entry Access Error.
Finally, if the entry is not associated with an in-progress command, the controller performs the following operations:
a. Copies the contents of the command's command message "unit number" field to the entry's "unit number" field.
b. Copies the contents of the command's command message "opcode" and "modifiers" fields to the entry's "command id/status" field.
c. Copies the contents of the command's command message "byte count" (or "logical block count") field to the entry's "transfer length" field.
d. Copies the contents of the command's command message "logical block number" (or "destination lbn") field to the entry's "starting logical block number" field.
e. Continues normal processing of the command.
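The reuse path can be sketched the same way: validate the entry locator, confirm the entry really was allocated and is not tied to an in-progress command (the "encode" check), then copy the command fields (steps a through e above). The ENCODE_FLAG bit position and the return codes are invented for this sketch; the structures are the ones sketched earlier.

```c
#define ENCODE_FLAG 0x80000000u   /* assumed position of the opcode "encode" bit */

enum whe_status { WHE_OK, WHE_INVALID_COMMAND, WHE_ACCESS_ERROR };

enum whe_status
reuse_entry(struct write_history_entry *wh_log, int nlog,
            const struct write_command *cmd)
{
    if (cmd->entry_locator >= (uint32_t)nlog)
        return WHE_INVALID_COMMAND;            /* locator outside the write log */

    struct write_history_entry *e = &wh_log[cmd->entry_locator];
    if (!(e->entry_flags & WHE_ALLOCATED))
        return WHE_ACCESS_ERROR;               /* entry was not actually allocated */
    if (e->command_id_status & ENCODE_FLAG)
        return WHE_ACCESS_ERROR;               /* entry tied to an in-progress command */

    e->unit_number            = cmd->unit_number;                  /* step a */
    e->command_id_status      = ((uint32_t)cmd->opcode << 16)
                                | cmd->modifiers;                  /* step b (assumed packing) */
    e->transfer_length        = cmd->byte_count;                   /* step c */
    e->starting_logical_block = cmd->logical_block_number;         /* step d */
    return WHE_OK;                             /* step e: continue normal processing */
}
```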
After a write command has aborted, terminated, or completed, the disk controller copies the "encode," "flags," and "status" end message fields into the appropriate fields of the write history entry associated with the command and then continues standard processing.
In addition, prior to returning the end message of a write command, the disk controller sets the "host reference number," "entry id," and "entry locator" end message fields equal to the values contained in the corresponding fields of the write history entry associated with the command. Note that with one exception the requirement just described can also be met by copying those fields directly from the command message to the end message. The only exception is that when a new write history entry has been allocated, the controller must set the "entry locator" end message field equal to the value contained in the "entry locator" field of the associated write history entry.
As will be explained below, the system utilizes a "Compare Host" operation in performing a merge of two shadow set members. The command message format for the Compare Host operation is shown in Fig. 6A. The Compare Host operation instructs the disk controller supporting the disk identified in the "unit number" field to compare the data stored in a section of host memory identified in the "buffer descriptor" field, to the data stored on the disk in the location identified by the "logical block number" and "byte count" fields.
The disk controller receiving the Compare Host command will execute the requested operation by reading the identified data from host memory, reading the data from the identified section of the disk, and comparing the data read from the host to the data read from the disk. The disk controller then issues an end message, the format of which is shown in Fig. 6B, to the host that issued the Compare Host command. The status field of the end message will indicate whether the compared data was found to be identical.
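As a rough sketch, the controller-side execution of Compare Host might look like the following. The read_host_memory() and read_disk() helpers are placeholders for the host-transfer and disk I/O paths the patent does not spell out; the return value would be reflected in the end message status field.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Placeholders for transport and disk I/O; both are assumed to return 0 on success. */
extern int read_host_memory(uint64_t buffer_descriptor, uint8_t *dst, uint32_t len);
extern int read_disk(uint32_t unit, uint32_t lbn, uint8_t *dst, uint32_t len);

bool compare_host(uint32_t unit, uint32_t lbn, uint32_t byte_count,
                  uint64_t buffer_descriptor)
{
    uint8_t *from_host = malloc(byte_count);
    uint8_t *from_disk = malloc(byte_count);
    bool identical = false;

    if (from_host && from_disk &&
        read_host_memory(buffer_descriptor, from_host, byte_count) == 0 &&
        read_disk(unit, lbn, from_disk, byte_count) == 0)
        identical = (memcmp(from_host, from_disk, byte_count) == 0);

    free(from_host);
    free(from_disk);
    return identical;   /* reported in the end message status field */
}
```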
One of the system's hosts will control operations for merging two storage devices. The system can select any host to execute the operation. For example, the host with the best transmission path (e.g., the shortest path) to the shadow set may be chosen.
During a merge operation, the host controlling the merge will utilize a "Write History Management" command to perform a number of operations. The command message format (i.e., the message sent by the host to the shadow set) of a Write History Management command is shown in Fig. 5A. Fig. 5B illustrates the end message format (i.e., the message returned to the host by a shadow set member's disk controller). The host selects a particular operation as the operations are needed during the merge, and specifies the operation in the "operation" field of the command message. The other fields contain other information, explained more fully below as each operation is explained.
The "DEALLOCATE ALL" operation is used to deallocate all of the write history entries for the disk identified in the "unit number" field. The DEALLOCATE
ALL operation makes all of the write log spaces available ~; - ' - ':

2~2~8 for new entries. For example, after a merge operation is performed, the entries stored in the write history log are no longer needed because the members are assumed to be consistent immediately following a merge. The DEALLOCATE ALL operation will deallocate (i.e. free up) all of the write history entries to make them available for new information as new writes are made to the shadow set.
A host can deallocate only those write log entries associated with a particular host using a "DEALLOCATE BY HOST REFERENCE NUMBER" operation. This may be desirable if a merge was performed as a result of a particular host having failed. Such a merge, as described more fully below, would involve merging only those blocks that were written with information from the failed host. Once that merge is completed, the write history entries associated with that host will no longer be needed and may be deallocated. All write history entries are not deallocated because if a different host fails, its entries would be needed to perform a similar merge. To execute this operation, each disk controller deallocates all of the write history entries that are associated with the host identified in the "host reference number" field for the disk identified in the "unit number" field.
A specific write history entry can be deallocated using a "DEALLOCATE BY ENTRY LOCATOR" operation. The disk controller deallocates the specific write history entry that is located within the write log at the location specified in the "entry locator" field. If the contents of the entry locator field do not specify an entry within the limits of the write history log (i.e., if there is no write history entry at the identified location), the command is rejected and an end message is returned as an invalid command. If the value contained in the "host reference number" field of the write history entry identified by the entry locator does not equal the value contained in the command message "host reference number" field (i.e., if the host identified in the command is not the same as the host identified in the write history entry), the command is rejected as an invalid command. Similarly, if the value contained in the "entry id" field of the write history entry located via the "entry locator" does not equal the value contained in the command message "entry id" field, the controller rejects the command as an invalid command.
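A sketch of these checks follows, reusing the write history entry structure and status codes from the earlier sketches. The wh_mgmt_command structure is an assumed subset of the Fig. 5A command message, introduced only for illustration.

```c
/* Assumed subset of the Write History Management command message (Fig. 5A). */
struct wh_mgmt_command {
    uint32_t unit_number;
    uint32_t host_reference_number;
    uint32_t entry_locator;
    uint32_t entry_id;
    uint32_t count;
};

enum whe_status
deallocate_by_entry_locator(struct write_history_entry *wh_log, int nlog,
                            const struct wh_mgmt_command *cmd)
{
    if (cmd->entry_locator >= (uint32_t)nlog)
        return WHE_INVALID_COMMAND;             /* no entry at that location      */

    struct write_history_entry *e = &wh_log[cmd->entry_locator];
    if (e->host_reference_number != cmd->host_reference_number)
        return WHE_INVALID_COMMAND;             /* entry belongs to another host  */
    if (e->entry_id != cmd->entry_id)
        return WHE_INVALID_COMMAND;             /* entry id mismatch              */

    e->entry_flags &= (uint16_t)~WHE_ALLOCATED; /* deallocate the entry           */
    return WHE_OK;
}
```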
The "READ ALL" operation is used to read information from the write log of a shadow set member supporting the disk identified in the unit number field.
If a host wishes to determine the total number of write history entries stored in the write log, the "count" field is set to zero, and the disk controller sets the command's end message "count" field (see Fig. 5B) equal to the number of write history entries that are associated with the identified unit.
The READ ALL operation is also used to read all of the write log entries from an identified shadow set member. In this case, the "count" field is nonzero, and the disk controller transfers the number of write history entries specified in the "count" field to the host memory 12 (specified in the "write history buffer descriptor" field), beginning with the first write history entry. Note that only those write history entries that are associated with the unit identified in the "unit number" field are included in the transfer.
The "READ BY HOST REFERENCE NUMBER" operation is used to read information from the write logs associated with a specific host. If the 'Icount'' field message is zero, the disk controller sets the command's end message "count" field equal to the number of write history entries that are associated with both the host identified :', ~.' '', ~

~2~2~

in the "host reference number" field and the unit identified in the "unit" field. Therefore, the controller counts only those entries that resulted from writes to the identified unit from the identified host.
I~ the "count" ~ield in a "READ BY HOST REFERENCE
NUMBER" operation is nonzero, the controller transfers the number of write history entries specified in the "count" field to the location in host memory 12 specified in the "write history buffer descriptor" field, beginning with the first write history entry that is associated with both the host identified in the "host reference number" field and the unit identified in the "unit number" field.
As noted above, the end message format for the Write History Management command is shown in Fig. 5B.
The "unit alloc" field will contain the total number of write history entries curr~ntly allocated and associated with the unit identified in the command message "unit number" field. The "server alloc" field will contain the total number of write history entries currently allocated across every disk served by the particular disk controller. The "server unalloc" ~ield contains the total number of write history entries currently available.
Now that the Write History Management command, and its associated operations, have been described in detail, a merge operation utilizing the write log feature will be described with reference to the flowchart of Figs. 7A-C.
In step 1, the host issues one of the Write History Management commands described above to each shadow set member in order to obtain information from the write logs stored in memory 26 of each disk controller 18. The specific commands used will depend on the circumstances that created the need for the merge operation. For example, if a host fails, and a merge is performed by one of the properly functioning hosts, the host performing the merge will obtain, from the write log associated with each shadow set member, a complete list of all data blocks on that member disk to which the failed host had performed a write operation. As discussed above, the only data blocks that can possibly be inconsistent are those to which data was written by the failed host.
To obtain this information, the host performing the merge will first issue a Write History Management command to perform a "Read by Host Reference Number" operation to each disk controller that supports a shadow set member, with the "count" field set to zero. The command will identify the failed host in the "host reference number" field. As described above, each disk controller will receive its command and will send an end message to the host with the count field set to the total number of write history entries in its write log that were created due to writes from the failed host (step 2).
The host will then determine whether it needs to issue more Write History Management commands (step 3).
In this example, the host has received end messages specifying the number of write history entries contained in each write history log for the failed host. The host controlling the merge will therefore need to issue another command to each shadow set member's disk controller that indicated it had write history entries for the failed host. A disk controller that returned an end message with the "count" field set to zero has no write history entries for the failed host and the host does not send a second command to these disk controllers. (Note that in the rare case where every shadow set controller returns a valid end message with the "count" field set to zero, indicating that no shadow set member has been written with data from the failed host, no merge operation is necessary and the process terminates.) Therefore, a second Write History Management command is sent to each disk controller having needed write history entries (step 1), the command again specifying the "Read by Host Reference Number" operation and identifying the failed host, but this time setting the "count" field to the number of write history entries that the particular write history log has for the failed host. This time each disk controller reads the write history entries from memory 26, sends them to the controlling host's memory 12, and issues an end message (step 2). The host receives the end messages and will determine, in this case, that it does not need to issue another Read by Host Reference Number command since it will now have all of the needed write history information.
The host (step 4) proceeds to establish a Logical Cluster Number table in its memory 12. The table contains numbers that identify the "clusters" (i.e., groups of data blocks) in the shadow set that are to be merged.
The controlling host sets up the table by listing logical cluster numbers that identify all of the clusters that are identified by the write history entries transmitted by the shadow set members.
As each write history entry is received, the host prepares a new entry, each of which is identified by a logical cluster counter (which starts at 1 and increases sequentially). The entry contains a number identifying the cluster to which the data had been written, and a disk controller ID that identifies the disk controller that sent the write history entry. By sequentially going through the logical cluster number table, the host will be able to identify every cluster to which data has been written by the failed host.
The host may discount logical cluster number table entries if they are the result of write history entries where corresponding entries were received from every disk controller supporting a shadow set member, because if a write has been performed in every shadow set member, the shadow set will not be inconsistent due to that write.
In other words, since shadow set inconsistencies occur due to a write succeeding on some members but failing on other members, if a write history entry identifying a specific write command can be found in the write log associated with every shadow set member, then we know that the write operation succeeded on every member, and logical cluster numbers formed from these write history entries need not be used.
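The bookkeeping just described might be sketched as follows. The lcn_entry record, the cluster size, and the members_logging test are all assumptions; the point is only that a cluster needs merging when the corresponding write is missing from at least one member's log.

```c
#define BLOCKS_PER_CLUSTER 4      /* assumed cluster size, for the sketch only */

/* One row of the logical cluster number table kept in host memory 12. */
struct lcn_entry {
    uint32_t logical_cluster;     /* cluster written by the failed host        */
    uint32_t controller_id;       /* controller that reported the entry        */
    int      members_logging;     /* members whose log recorded this write     */
};

/* Map a write history entry onto the first cluster it touches. */
static uint32_t entry_to_cluster(const struct write_history_entry *e)
{
    return e->starting_logical_block / BLOCKS_PER_CLUSTER;
}

/* A cluster can be discounted when every shadow set member logged the write. */
static int needs_merge(const struct lcn_entry *row, int n_members)
{
    return row->members_logging < n_members;
}
```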
Next, in step 5, the host sets a logical cluster counter equal to the first logical cluster number in the logical cluster number table. The logical cluster counter is used to access a logical cluster number. When initialized to one in step 5, the first entry in the logical cluster number table is identified.
The host then selects one of the members as a "source" and the other member as the "target" (step 6).
The host issues a read command to the disk controller serving the source to read the data stored in the section of the disk identified by the current logical cluster counter (step 7). The read command issued by the host is described above and shown in Fig. 2B. The "unit number" is set to describe the source disk 20, with the "Logical Block Number" and "Byte Count" being set according to the cluster currently identified by the logical cluster counter.
The source receives the read command and, after reading the identified data from the disk 20 to buffer 22, will transmit the data to the host memory 12 in the section identified by the "Buffer Descriptor" field in the read command. The source will then transmit an end message of the type illustrated in Fig. 3B, which informs the host that the read operation has been performed (step 8).
After the host receives the end message (step 9), the host will issue a Compare Host command to the target controller to compare the data read from the source to the data in the corresponding cluster in the target to determine if the data is identical (step 10).
If a "yes" result is obtained, the host will check to see if the logical cluster counter identifies the last logical cluster number in the logical cluster number table (step 13). If the result of step 13 is "yes", the merge operation is finished. Otherwise, the logical cluster counter is incremented (step 14) and the method returns to step 7 to process the next cluster.
If a "no" result was returned, indicating that the data read from corresponding clusters on the two merge members are not identical, the host will implement the following steps to make the data consistent.
The host will first establish cross-system synchronization by transmitting a message over communications medium 16 to all other hosts in the system (step 15). Each host will receive the transmitted message, will complete all outstanding I/O requests to the shadow set and will stall new I/O requests. Once this has been accomplished, each host sends an "acknowledge" message to the host controlling the merge indicating that it has complied with the synchronization request (step 16).
After receiving all of the acknowledge messages, the host will issue another read command to the source for the same cluster read in step 7, using an identical command message (step 17). The data is read again because another host may have modified the data since it was read in step 7. The source receives the read command message and executes it in the same manner as described in connection with step 8 above (step 18). The host will once again receive the end message from the source as in step 9 above (step 19).
After the host receives the end message indicating the write has been completed (step 22), the host transmits a message to all other hosts to end the cross-system synchronization, the message informing all other hosts that normal I/0 operations may resume (step 23).
The host will then return to step 13 and continue as described above.
Therefore, during a merge operation, the host selects a shadow set member as the source and sequentially compares the data stored in each data block (a cluster at a time) to data in the corresponding data blocks in the target shadow set member. If the h~st `~

2~2~

finds an inconsistency, the system is temporarily synchronized while the host resolves the inconsistency by reading the inconsistent data from the source and writing the data to the corresponding cluster in the target.
Because the shadow set ~embers being merged normally have only a very small amount of data that is inconsistent, cross-system syn~hronization will be nacessary for only a short time, resulting in only minimal disruption to normal I/O operations.
Because the data on both shadow set membexs is equally valid, if a cluster on the shadow set member selected as the source is corrupted (e.g., cannot be read), the corresponding cluster on the target is utilized. I.e., the target acts as the source for that cluster. Therefore, data transfer in a merge operation can occur in both directions.
If a host in the system needs to perform a read operation while another host is performing a merge as described above, the host must first ensure that the specific location to which the read is directed is consistent. To accomplish this, the host will effectively perform a merge operation on that specific section by first issuing the read to the source and then issuing a Compare Host command to tha target to determine if the data is consistent. If it is consistent, then the host continues with its processing of the read data. If it is not consistent, then the host will synchronize the system, reread the data from the source and write the data to the target as described above.
Since their is limited space in each disk controller memory 26 for the write history log, the hosts will try to keep as many write history entries as possible available for use. Therefore, if a host issues a write command to each member of the shadow set, and receives end messages from each disk controller : , ' ' ~2~2~8 - ~3 -indicating that the write request was successfully performed, the host will reuse the write history entries that were allocated for the write requests sent to each disk controller. These entries may be reused because, if a write has succeeded to all members of the shadow set, then the shadow set will not be inconsistent due to that write. As discussed above, a host that performs a merge operation will deallocate those entries that were used in th~ merge operation once the operation is completed.
The illustrative embodiment describes a merge operation performed on a single target, but several targets may be used to thereby merge three or more storage media. Similarly, the system need not use the same member disk as the source throughout the merge operation, but may use other disks as the source.
While the illustrative embodiment describes the use of the write history log when performing a merge operation, it should be clearly understood that the write history log need not be used. The advantages of the invention can be achieved without using the write log feature. The merge operation would be carri~d out for each cluster in the shadow set, or for clusters selected by some means other than the disclosed write history log.
Accordingly, the invention shall not be limitad by the specific illustrative embodiment described above, but shall be limited only by the scope of the appended claims.

Claims (39)

1. A method of managing a shadow set of storage media accessible by one or more data sources for I/O
operations, comprising the steps of:
A. carrying out successive comparisons of data stored in corresponding locations in a plurality of said storage media, respectively, while maintaining access to said storage media by said sources for I/O operations;
and B. performing a management operation on at least one of said storage media, said management operation comprising:
a. interrupting I/O operations to at least said one of said storage media;
b. modifying data on said one of said storage media based on the results of said comparisons; and c. resuming the availability of said one of said storage media for I/O operations.
2. The method of claim 1 wherein said step of modifying comprises making data on said modified storage medium consistent with data on another of said storage media.
3. The method of claim 1 wherein said step of modifying comprises reading data from one of said storage media and writing said read data to said modified storage medium.
4. The method of claim 1 wherein said storage media are accessible by a plurality of data sources.
5. A method of ensuring that data stored on first and second storage media is consistent, each of said storage media being divided into corresponding data blocks, said storage media being available to one or more host processors for I/O operations, said method comprising the steps of:
(a) reading data stored in a first data block in one of said storage media, the first data block initially constituting a current data block;
(b) comparing the data read in said current data block to data stored in a corresponding data block in the other of said storage media;
(c) if the data compared in step b are identical, reading the data stored in a different data block in one of said storage media, said different data block becoming the current data block, and returning to step b;
(d) if the data compared in step b are not identical, temporarily prohibiting the modification of data in said current data block, and modifying the data stored in one of said storage media such that the data in said current data block is identical to the corresponding data in the other of said storage media; and (e) reading the data stored in a different data block in one of said storage media, said different data block becoming the current data block, and returning to step b.
6. The method of claim 5 wherein said step of modifying comprises rereading the data in said current data block and modifying the data in the corresponding data block in said other storage media.
7. The method of claim 5 wherein said step of modifying comprises writing said data read from the current data block to the corresponding data block in the other of said storage media.
8. The method of claim 5 wherein said different data block is a data block adjacent to said current data block.
9. The method of claim 5 wherein each data block in said first storage medium is compared to a corresponding data block in said second storage medium.
10. The method of claim 5 wherein each of said storage media are members of a shadow set of storage media.
11. The method of claim 10 wherein each of said storage media may be directly accessed by a host processor.
12. The method of claim 10 wherein each of said storage media may be directly accessed by each of a plurality of host processors.
13. The method of claim 5 wherein said storage media are disk storage devices.
14. The method of claim 5 wherein steps a and g comprise reading only those data blocks which have been written with data during a predetermined period of time.
15. The method of claim 14 wherein one of said storage media comprises information indicating which of its blocks have been written with data during said predetermined period of time.
16. A method for ensuring that data stored on first and second storage media are consistent, said storage media being divided into corresponding data blocks, said method comprising the steps of A. comparing corresponding data blocks on said storage media to identify those blocks that have inconsistent data, said storage media being available to one or more host processors during said comparing process, such that said host processors can implement I/O
operations to said storage media; and B. temporarily making said storage media unavailable to said host processors for I/O operations while modifying the data in at least one data block identified as having inconsistent data in step A, such that said data blocks have consistent data.
17. An apparatus for managing a shadow set of storage media accessible by one or more data sources for I/O operations, comprising:
means for carrying out successive comparisons of data stored in corresponding locations in a plurality of said storage media, respectively, while maintaining access to said storage media by said sources for I/O
operations; and means for performing a management operation on at least one of said storage media, said means for performing said management operation comprising:
means for interrupting I/O operations to at least said one of said storage media;
means for modifying data on said one of said storage media based on the results of said comparisons;
and means for resuming the availability of said one of said storage media for I/O operations.
18. A program for controlling one or more processors in a digital computer, the digital computer processing at least one process which enables said processors to manage a shadow set of storage media accessible by one or more data sources for I/O
operations, comprising:
a comparison module for enabling one of said processors to carry out successive comparisons of data stored in corresponding locations in a plurality of said storage media, respectively, while maintaining access to said storage media by said sources for I/O operations;
and a management module for enabling one of said processors to perform a management operation on at least one of said storage media, said management operation comprising:
a. interrupting I/O operations to at least said one of said storage media;
b. modifying data on said one of said storage media based on the results of said comparisons; and c. resuming the availability of said one of said storage media for I/O operations.
19. The method of claim 1 wherein said step of carrying out successive comparisons comprises comparing only those locations of said storage media which have been written with data during a predetermined period of time.
20. The method of claim 19 wherein said storage media are divided into data blocks, and wherein at least one of said storage media comprises information indicating which of its blocks have been written with data during said predetermined period of time.
21. The method of claim 1 wherein said shadow set of storage media are each directly accessible by a plurality of said data sources.
22. The method of claim 1 wherein said management operation is performed only on portions of said one of said storage media, said portions selected based on said successive comparisons.
23. The method of claim 22 wherein said storage media are divided into corresponding data blocks, said step of carrying out successive comparisons comprising comparing corresponding data blocks in at least two of said storage media.
24. The method of claim 22 wherein said portions of said one of said storage media contain data that is not identical to data stored in corresponding portions in another of said storage media.
25. The apparatus of claim 17 wherein said means for modifying modifies data on said one of said storage media to be consistent with data on another of said storage media.
26. The apparatus of claim 17 wherein said means for modifying reads data from a second said storage media and writes said read data to said one of said storage media.
27. The apparatus of claim 17 wherein said shadow set of storage media are each directly accessible by a plurality of data sources.
28. The apparatus of claim 17 wherein said means for carrying out successive comparisons compares only those locations of said storage media which have been written with data during a predetermined period of time.
29. The apparatus of claim 28 wherein said storage media are divided into data blocks, and wherein at least one of said storage media comprises information indicating which of its blocks have been written with data during said predetermined period of time.
30. The apparatus of claim 17 wherein said means for performing a management operation performs said operation only on portions of said one of said storage media, said portions selected based on said successive comparisons.
31. The apparatus of claim 30 wherein said storage media are divided into corresponding data blocks, said means for carrying out successive comparisons comparing corresponding data blocks in at least two of said storage media.
32. The apparatus of claim 30 wherein said portions of said one of said storage media contain data that is not identical to data stored in corresponding portions in another of said storage media.
33. An apparatus for managing a storage medium accessible by one or more data sources for I/O operations, comprising:
means for receiving one or more commands specifying a management operation to be performed on said storage medium;
means for interrupting I/O operations to said storage medium;
means for executing said received commands;
and means for resuming the availability of said storage medium for I/O operations after said commands are executed.
34. The apparatus of claim 33 wherein said received commands comprise write commands.
35. The apparatus of claim 33 wherein said received commands comprise a plurality of commands transmitted from a processor supporting one of said data sources.
36. The apparatus of claim 33 wherein said storage medium is one member in a shadow set of storage media.
37. The apparatus of claim 36 wherein said received commands comprise write commands that contain data read from a second storage medium in said shadow set of storage media.
38. The apparatus of claim 33 wherein said storage medium is directly accessible by a plurality of data sources.
39. The apparatus of claim 33 wherein said storage medium is divided into data blocks, and wherein said apparatus further comprises means for storing information indicating which of its blocks have been written with data during a predetermined period of time.
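Claims 33 through 39 describe the behaviour from the storage medium's side: it accepts a batch of management commands (for example, write commands carrying data read from another member of the shadow set), fences off host I/O while they execute, resumes normal service, and may remember which blocks were written. A sketch of such a controller, with hypothetical names (MediumController, execute_management), is:

class MediumController:
    """Controller for one storage medium: it can suspend host I/O, execute a
    batch of received management commands, and then resume host I/O."""

    def __init__(self, n_blocks, fill=b"\x00" * 512):
        self.blocks = [fill] * n_blocks
        self.accepting_host_io = True
        self.written = set()           # claim 39: blocks written during a period

    def execute_management(self, commands):
        """commands is a sequence of (block index, data) write commands, for
        example data that a host read from another shadow set member."""
        self.accepting_host_io = False            # interrupt host I/O
        try:
            for index, data in commands:          # execute the received commands
                self.blocks[index] = data
                self.written.add(index)
        finally:
            self.accepting_host_io = True         # resume availability

# Example: apply two write commands copied from another shadow set member.
controller = MediumController(n_blocks=4)
controller.execute_management([(0, b"\x01" * 512), (2, b"\x02" * 512)])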
CA002020268A 1989-06-30 1990-06-29 Digital data management system Abandoned CA2020268A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US07/374,490 US5239637A (en) 1989-06-30 1989-06-30 Digital data management system for maintaining consistency of data in a shadow set
US374,490 1989-06-30

Publications (1)

Publication Number Publication Date
CA2020268A1 (en) 1990-12-31

Family

ID=23477069

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002020268A Abandoned CA2020268A1 (en) 1989-06-30 1990-06-29 Digital data management system

Country Status (7)

Country Link
US (1) US5239637A (en)
EP (1) EP0405925B1 (en)
JP (1) JP2766890B2 (en)
AT (1) ATE127253T1 (en)
AU (1) AU624966B2 (en)
CA (1) CA2020268A1 (en)
DE (1) DE69021957T2 (en)

Families Citing this family (51)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ATE145998T1 (en) * 1989-06-30 1996-12-15 Digital Equipment Corp METHOD AND ARRANGEMENT FOR CONTROLLING SHADOW STORAGE
EP0455922B1 (en) * 1990-05-11 1996-09-11 International Business Machines Corporation Method and apparatus for deriving mirrored unit state when re-initializing a system
JPH0452743A (en) * 1990-06-14 1992-02-20 Fujitsu Ltd Control system for duplex external storage
US5544347A (en) 1990-09-24 1996-08-06 Emc Corporation Data storage system controlled remote data mirroring with respectively maintained data indices
EP0584257B1 (en) * 1991-05-17 2004-08-04 Packard Bell NEC, Inc. Power management capability for a microprocessor having backward compatibility
JP2693292B2 (en) * 1991-09-30 1997-12-24 Mita Industrial Co., Ltd. Image forming apparatus having self-repair system
US5826075A (en) * 1991-10-16 1998-10-20 International Business Machines Corporation Automated programmable firmware store for a personal computer system
DE69231873T2 (en) * 1992-01-08 2002-04-04 Emc Corp Method for synchronizing reserved areas in a redundant memory arrangement
JP2855019B2 (en) * 1992-02-10 1999-02-10 Fujitsu Ltd External storage device data guarantee method and external storage device
US5463767A (en) * 1992-02-24 1995-10-31 Nec Corporation Data transfer control unit with memory unit fault detection capability
JPH06119253A (en) * 1992-10-02 1994-04-28 Toshiba Corp Dual memory controller
WO1995001599A1 (en) * 1993-07-01 1995-01-12 Legent Corporation System and method for distributed storage management on networked computer systems
US5909541A (en) * 1993-07-14 1999-06-01 Honeywell Inc. Error detection and correction for data stored across multiple byte-wide memory devices
KR0128271B1 (en) * 1994-02-22 1998-04-15 William T. Ellis Remote data duplexing
WO1995034860A1 (en) * 1994-06-10 1995-12-21 Sequoia Systems, Inc. Main memory system and checkpointing protocol for fault-tolerant computer system
US5592618A (en) * 1994-10-03 1997-01-07 International Business Machines Corporation Remote copy secondary data copy validation-audit function
US5619642A (en) * 1994-12-23 1997-04-08 Emc Corporation Fault tolerant memory system which utilizes data from a shadow memory device upon the detection of erroneous data in a main memory device
US6269458B1 (en) * 1995-02-21 2001-07-31 Nortel Networks Limited Computer system and method for diagnosing and isolating faults
US5737344A (en) * 1995-05-25 1998-04-07 International Business Machines Corporation Digital data storage with increased robustness against data loss
JP3086779B2 (en) * 1995-06-19 2000-09-11 Toshiba Corp Memory state restoration device
US5619644A (en) * 1995-09-18 1997-04-08 International Business Machines Corporation Software directed microcode state save for distributed storage controller
US5745672A (en) * 1995-11-29 1998-04-28 Texas Micro, Inc. Main memory system and checkpointing protocol for a fault-tolerant computer system using a read buffer
US5751939A (en) * 1995-11-29 1998-05-12 Texas Micro, Inc. Main memory system and checkpointing protocol for fault-tolerant computer system using an exclusive-or memory
US5737514A (en) * 1995-11-29 1998-04-07 Texas Micro, Inc. Remote checkpoint memory system and protocol for fault-tolerant computer system
US5864657A (en) * 1995-11-29 1999-01-26 Texas Micro, Inc. Main memory system and checkpointing protocol for fault-tolerant computer system
US5917998A (en) * 1996-07-26 1999-06-29 International Business Machines Corporation Method and apparatus for establishing and maintaining the status of membership sets used in mirrored read and write input/output without logging
TW379298B (en) * 1996-09-30 2000-01-11 Toshiba Corp Memory updating history saving device and memory updating history saving method
US5794254A (en) * 1996-12-03 1998-08-11 Fairbanks Systems Group Incremental computer file backup using a two-step comparison of first two characters in the block and a signature with pre-stored character and signature sets
US6038665A (en) * 1996-12-03 2000-03-14 Fairbanks Systems Group System and method for backing up computer files over a wide area computer network
US6192460B1 (en) 1997-12-16 2001-02-20 Compaq Computer Corporation Method and apparatus for accessing data in a shadow set after a failed data operation
US6073221A (en) * 1998-01-05 2000-06-06 International Business Machines Corporation Synchronization of shared data stores through use of non-empty track copy procedure
US6308284B1 (en) * 1998-08-28 2001-10-23 Emc Corporation Method and apparatus for maintaining data coherency
US6490596B1 (en) 1999-11-09 2002-12-03 International Business Machines Corporation Method of transmitting streamlined data updates by selectively omitting unchanged data parts
US6826711B2 (en) 2000-02-18 2004-11-30 Avamar Technologies, Inc. System and method for data protection with multidimensional parity
US7509420B2 (en) 2000-02-18 2009-03-24 Emc Corporation System and method for intelligent, globally distributed network storage
US7194504B2 (en) * 2000-02-18 2007-03-20 Avamar Technologies, Inc. System and method for representing and maintaining redundant data sets utilizing DNA transmission and transcription techniques
US7062648B2 (en) * 2000-02-18 2006-06-13 Avamar Technologies, Inc. System and method for redundant array network storage
US6704730B2 (en) 2000-02-18 2004-03-09 Avamar Technologies, Inc. Hash file system and method for use in a commonality factoring system
US6810398B2 (en) * 2000-11-06 2004-10-26 Avamar Technologies, Inc. System and method for unorchestrated determination of data sequences using sticky byte factoring to determine breakpoints in digital sequences
US6910098B2 (en) * 2001-10-16 2005-06-21 Emc Corporation Method and apparatus for maintaining data coherency
US7127475B2 (en) 2002-08-15 2006-10-24 Sap Aktiengesellschaft Managing data integrity
US7464097B2 (en) * 2002-08-16 2008-12-09 Sap Ag Managing data integrity using a filter condition
US20050174753A1 (en) * 2004-02-06 2005-08-11 Densen Cao Mining light
US7318134B1 (en) * 2004-03-16 2008-01-08 Emc Corporation Continuous data backup using distributed journaling
US20060168410A1 (en) * 2005-01-24 2006-07-27 Andruszkiewicz John J Systems and methods of merge operations of a storage subsystem
US8293810B2 (en) * 2005-08-29 2012-10-23 Cmet Inc. Rapid prototyping resin compositions
KR101381551B1 (en) 2006-05-05 2014-04-11 Hiber Inc. Group based complete and incremental computer file backup system, process and apparatus
US9443114B1 (en) * 2007-02-14 2016-09-13 Marvell International Ltd. Auto-logging of read/write commands in a storage network
US20100300436A1 (en) * 2007-07-23 2010-12-02 Mckeown John S Device for locating person in emergency environment
US8756391B2 (en) * 2009-05-22 2014-06-17 Raytheon Company Multi-level security computing system
US8989904B2 (en) * 2012-02-27 2015-03-24 Fanuc Robotics America Corporation Robotic process logger

Family Cites Families (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3544777A (en) * 1967-11-06 1970-12-01 Trw Inc Two memory self-correcting system
US3668644A (en) * 1970-02-09 1972-06-06 Burroughs Corp Failsafe memory system
US4199810A (en) * 1977-01-07 1980-04-22 Rockwell International Corporation Radiation hardened register file
JPS5533321A (en) * 1978-08-30 1980-03-08 Hitachi Ltd Data transmission system
JPS5637883A (en) * 1979-09-04 1981-04-11 Fanuc Ltd Information rewrite system
US4467421A (en) * 1979-10-18 1984-08-21 Storage Technology Corporation Virtual storage system and method
US4476526A (en) * 1981-11-27 1984-10-09 Storage Technology Corporation Cache buffered memory subsystem
US4432057A (en) * 1981-11-27 1984-02-14 International Business Machines Corporation Method for the dynamic replication of data under distributed system control to control utilization of resources in a multiprocessing, distributed data base system
US4636946A (en) * 1982-02-24 1987-01-13 International Business Machines Corporation Method and apparatus for grouping asynchronous recording operations
DE3208573C2 (en) * 1982-03-10 1985-06-27 Standard Elektrik Lorenz Ag, 7000 Stuttgart 2 out of 3 selection device for a 3 computer system
JPS58163052A (en) * 1982-03-20 1983-09-27 Nippon Telegr & Teleph Corp <Ntt> Fault processing system for decentralized data base system
US4503534A (en) * 1982-06-30 1985-03-05 Intel Corporation Apparatus for redundant operation of modules in a multiprocessing system
EP0128945B1 (en) * 1982-12-09 1991-01-30 Sequoia Systems, Inc. Memory backup system
US4819154A (en) * 1982-12-09 1989-04-04 Sequoia Systems, Inc. Memory back up system with one cache memory and two physically separated main memories
JPS59142799A (en) * 1983-02-04 1984-08-16 Hitachi Ltd Doubled storage device with electricity storage device for backup
JPS59165162A (en) * 1983-03-11 1984-09-18 International Business Machines Corporation Volume restoration system
US4602368A (en) * 1983-04-15 1986-07-22 Honeywell Information Systems Inc. Dual validity bit arrays
US4600990A (en) * 1983-05-16 1986-07-15 Data General Corporation Apparatus for suspending a reserve operation in a disk drive
US4584681A (en) * 1983-09-02 1986-04-22 International Business Machines Corporation Memory correction scheme using spare arrays
US4608687A (en) * 1983-09-13 1986-08-26 International Business Machines Corporation Bit steering apparatus and method for correcting errors in stored data, storing the address of the corrected data and using the address to maintain a correct data condition
US4608688A (en) * 1983-12-27 1986-08-26 At&T Bell Laboratories Processing system tolerant of loss of access to secondary storage
US4638424A (en) * 1984-01-12 1987-01-20 International Business Machines Corporation Managing data storage devices connected to a digital computer
US4755928A (en) * 1984-03-05 1988-07-05 Storage Technology Corporation Outboard back-up and recovery system with transfer of randomly accessible data sets between cache and host and cache and tape simultaneously
JPS60191357A (en) * 1984-03-12 1985-09-28 Fujitsu Ltd Simultaneous replacement system of shared data
US4916605A (en) * 1984-03-27 1990-04-10 International Business Machines Corporation Fast write operations
US4617475A (en) * 1984-03-30 1986-10-14 Trilogy Computer Development Partners, Ltd. Wired logic voting circuit
US4959774A (en) * 1984-07-06 1990-09-25 Ampex Corporation Shadow memory system for storing variable backup blocks in consecutive time periods
US4686620A (en) * 1984-07-26 1987-08-11 American Telephone And Telegraph Company, At&T Bell Laboratories Database backup method
JPS6150293A (en) * 1984-08-17 1986-03-12 Fujitsu Ltd Semiconductor memory
US4747038A (en) * 1984-10-04 1988-05-24 Honeywell Bull Inc. Disk controller memory address register
JPS61264599A (en) * 1985-05-16 1986-11-22 Fujitsu Ltd Semiconductor memory device
US4751639A (en) * 1985-06-24 1988-06-14 Ncr Corporation Virtual command rollback in a fault tolerant data processing system
US4710870A (en) * 1985-07-10 1987-12-01 Bell Communications Research, Inc. Central computer backup system utilizing localized data bases
US4814971A (en) * 1985-09-11 1989-03-21 Texas Instruments Incorporated Virtual memory recovery system using persistent roots for selective garbage collection and sibling page timestamping for defining checkpoint state
JPS62145349A (en) * 1985-12-20 1987-06-29 Hitachi Ltd Intersystem data base sharing system
US4805095A (en) * 1985-12-23 1989-02-14 Ncr Corporation Circuit and a method for the selection of original data from a register log containing original and modified data
JPH0628042B2 (en) * 1986-01-31 1994-04-13 Fujitsu Ltd How to update the journal version
JPS62197858A (en) * 1986-02-26 1987-09-01 Hitachi Ltd Inter-system data base sharing system
JPS63140352A (en) * 1986-12-02 1988-06-11 Nippon Steel Corp Memory device for data base of on-line system
EP0303856B1 (en) * 1987-08-20 1995-04-05 International Business Machines Corporation Method and apparatus for maintaining duplex-paired devices by means of a dual copy function
JPH01128142A (en) * 1987-11-13 1989-05-19 Nec Corp System for copying duplexing file
JP2924905B2 (en) * 1988-03-25 1999-07-26 NCR International Inc. File backup system
US5089958A (en) * 1989-01-23 1992-02-18 Vortex Systems, Inc. Fault tolerant computer backup system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113708780A (en) * 2021-08-13 2021-11-26 Chang'an University Partial repetition code construction method based on shadow
CN113708780B (en) * 2021-08-13 2024-02-02 Shanghai Yingsheng Network Technology Co., Ltd. Partial repetition code construction method based on shadow

Also Published As

Publication number Publication date
DE69021957D1 (en) 1995-10-05
AU5806690A (en) 1991-01-03
EP0405925A3 (en) 1991-12-11
EP0405925B1 (en) 1995-08-30
DE69021957T2 (en) 1996-05-02
JPH03135639A (en) 1991-06-10
US5239637A (en) 1993-08-24
EP0405925A2 (en) 1991-01-02
AU624966B2 (en) 1992-06-25
ATE127253T1 (en) 1995-09-15
JP2766890B2 (en) 1998-06-18

Similar Documents

Publication Publication Date Title
EP0405925B1 (en) Method and apparatus for managing a shadow set of storage media
EP0405859B1 (en) Method and apparatus for managing a shadow set of storage media
EP0405926B1 (en) Method and apparatus for managing a shadow set of storage media
US6950915B2 (en) Data storage subsystem
EP1131715B1 (en) Distributed transactional processing system and method
JP2576847B2 (en) Storage control device and related method
US7028216B2 (en) Disk array system and a method of avoiding failure of the disk array system
US5555371A (en) Data backup copying with delayed directory updating and reduced numbers of DASD accesses at a back up site using a log structured array data storage
US7240116B2 (en) Dynamic RDF groups
US6732171B2 (en) Distributed network storage system with virtualization
US6038639A (en) Data file storage management system for snapshot copy operations
US6898669B2 (en) Disk array apparatus and data backup method used therein
US6336172B1 (en) Storing and tracking multiple copies of data in a data storage library system
US5210865A (en) Transferring data between storage media while maintaining host processor access for I/O operations
US6701455B1 (en) Remote copy system with data integrity
US7373470B2 (en) Remote copy control in a storage system
US7277997B2 (en) Data consistency for mirroring updatable source data storage
US20070073985A1 (en) System for and method of retrieval-based data redundancy
US6249849B1 (en) “Fail-Safe” updating of redundant data in multiple data storage libraries
JP2002288014A (en) File control system and file data writing method
EP0405861A2 (en) Transferring data in a digital data processing system
JP2924786B2 (en) Exclusive control system, exclusive control method, and medium for storing exclusive control program for shared file in loosely coupled multiple computer system
JPH11338843A (en) Recovery system for system linking device and recording medium recording recovery program

Legal Events

Date Code Title Description
EEER Examination request
FZDE Discontinued