US20110258461A1 - System and method for resource sharing across multi-cloud arrays - Google Patents
- Publication number
- US20110258461A1 (U.S. application Ser. No. 13/086,794)
- Authority
- US
- United States
- Prior art keywords
- storage
- data
- resource
- cloud
- location
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/065—Replication mechanisms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1415—Saving, restoring, recovering or retrying at system level
- G06F11/1435—Saving, restoring, recovering or retrying at system level using file system or storage system metadata
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/14—Error detection or correction of the data by redundancy in operation
- G06F11/1402—Saving, restoring, recovering or retrying
- G06F11/1446—Point-in-time backing up or restoration of persistent data
- G06F11/1458—Management of the backup or restore process
- G06F11/1464—Management of the backup or restore process for networked environments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0619—Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
- G06F3/0689—Disk arrays, e.g. RAID, JBOD
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/04—Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
- H04L63/0428—Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
- H04L63/0457—Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload wherein the sending and receiving network entities apply dynamic encryption, e.g. stream encryption
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/84—Using snapshots, i.e. a logical point-in-time copy of the data
Definitions
- the present invention relates to a system and a method for resource sharing across multi-cloud arrays, and more particularly to resource sharing across multi-cloud arrays that provides secure and reliable data replication and “compute anywhere” capability.
- Cloud storage refers to providing online data storage services including database-like services, web-based storage services, network attached storage services, and synchronization services.
- database storage services include Amazon SimpleDB, Google App Engine and BigTable datastore, among others.
- web-based storage services include Amazon Simple Storage Service (Amazon S3) and Nirvanix SDN, among others.
- network attached storage services include MobileMe, iDisk and Nirvanix NAS, among others.
- synchronization services include Live Mesh, MobileMe push functions and Live Desktop component, among others.
- Cloud storage provides flexibility of storage capacity planning and reduces the storage management overhead by centralizing and outsourcing data storage administrative and infrastructure costs.
- cloud storage does come with some significant drawbacks.
- Business data are extremely critical to the operations of any business and need to be reliable, secure and available on demand. Even a minor security breach or blackout in data availability can have drastic consequences.
- Current Internet-based cloud storage implementations do not usually deploy security measures that are adequate to protect against even minor security breaches.
- Availability and reliability have also not been up to the standards of even small-to-medium size enterprises.
- cloud storage is not standards-based, and businesses usually need to invest in application development in order to be able to use it.
- different cloud storage systems provide different interfaces and have different requirements for the data presentation and transfer.
- Amazon S3 allows reading objects containing from 1 to 5 gigabytes of data each (extents), storing each object in a file, and uploading (sending data from a local system to a remote system) only the entire file.
- Nirvanix SDN allows writing to any extent but only downloading (receiving data to a local system from a remote system) the entire file. Continuous data replication between data stored in these two different cloud storage systems is currently unavailable.
- the invention provides a multi-cloud data replication system that utilizes shared storage resources for providing secure and reliable data replication and “compute anywhere” capability.
- Each storage resource is associated with a unique object identifier that identifies the location and structure of the corresponding storage resource at a given point-in-time within a specific cloud. Data contained in the storage resources are accessed by accessing the location and volume identified by the unique object identifier.
- the shared resources may be storage volumes, snapshots, among others.
- the shared storage resources may be located in any of the clouds included in the multi-cloud array.
- a cloud array storage (CAS) application manages the storage sharing processes.
- a “shared snapshot” is utilized to provide data replication.
- In the “shared snapshot” data replication model, a “snapshot” of the original volume is taken and then copies of the “snapshot” are distributed to the various clouds in the multi-cloud array.
- This multi-cloud data replication system provides cloud storage having enterprise-level functionality, security, reliability and increased operational performance without latency.
- the “shared-snapshot” data replication model is also used to provide an accelerated distributed computing environment.
- the invention features a system for resource sharing across multi-cloud storage arrays including a plurality of storage resources and a cloud array storage (CAS) application.
- the plurality of storage resources are distributed in one or more cloud storage arrays, and each storage resource comprises a unique object identifier that identifies location and structure of the corresponding storage resource at a given point-in-time.
- the cloud array storage (CAS) application manages the resource sharing process by first taking an instantaneous copy of initial data stored in a first location of a first storage resource at a given point-in-time and then distributing copies of the instantaneous copy to other storage resources in the one or more cloud storage arrays.
- the instantaneous copy comprises a first unique object identifier pointing to the first storage location of the initial data in the first storage resource and when the instantaneous copy is distributed to a second storage resource, the first unique object identifier is copied into a second storage location within the second storage resource and the second storage location of the second storage resource comprises a second unique object identifier.
- Implementations of this aspect of the invention may include one or more of the following features.
- When a user tries to “write” new data into the first storage location of the first storage resource, the new data are written into a second storage location of the first storage resource and then the second storage location of the first storage resource is assigned to the first unique object identifier.
- the second storage location of the first storage resource is backfilled with unchanged data from the first storage location of the first storage resource, and subsequently data in the first storage location of the first storage resource are removed.
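The two bullets above describe a redirect-on-write scheme: new data go to a fresh location, the unique object identifier is repointed at that location, and the old location survives so a snapshot can still reach the original data. A minimal sketch of that behavior, with class and method names that are illustrative assumptions rather than identifiers from the patent:

```python
class StorageResource:
    """Toy storage resource whose object identifiers point at locations."""

    def __init__(self):
        self.locations = {}       # location -> data bytes
        self.id_to_location = {}  # unique object identifier -> location
        self._next_loc = 0

    def _allocate_location(self):
        loc = self._next_loc
        self._next_loc += 1
        return loc

    def put(self, object_id, data):
        loc = self._allocate_location()
        self.locations[loc] = data
        self.id_to_location[object_id] = loc
        return loc

    def redirect_write(self, object_id, new_data):
        """Write new data to a fresh location, then reassign the identifier
        to that location, leaving the old data in place for the snapshot."""
        old_loc = self.id_to_location[object_id]
        new_loc = self._allocate_location()
        self.locations[new_loc] = new_data
        self.id_to_location[object_id] = new_loc  # identifier now points at new data
        return old_loc, new_loc

res = StorageResource()
res.put("oid-1", b"initial")
old_loc, new_loc = res.redirect_write("oid-1", b"updated")
assert res.locations[old_loc] == b"initial"   # snapshot still sees the old data
assert res.locations[res.id_to_location["oid-1"]] == b"updated"
```

The backfill-and-remove step described above would then migrate any unchanged data out of the old location before releasing it; that step is omitted here for brevity.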
- the first unique object identifier is encrypted prior to the instantaneous copy being distributed.
- the data in the first location of the first storage resource are compressed and encrypted after the instantaneous copy is taken and prior to the instantaneous copy being distributed.
- Each unique object identifier comprises one or more metadata identifying specific storage location within a specific storage resource, specific storage resource location, structure of the specific resource, type of the contained data, access control data, security data, encryption data, object descriptor, cloud storage provider, cloud storage access node, cloud storage user, cloud storage secret/token, indicator whether data are encrypted or not, indicator whether data are compressed or not, structural encryption key, data encryption key, epoch/generation number, cloud array volume identifier, user-provided description, sender identifier, recipient identifier, algorithm identifier, signature or timestamp.
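As an illustration only, the kinds of metadata enumerated above could be modeled as a record. The field names below are assumptions chosen to mirror a subset of that list, not an actual CAS data structure:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class UniqueObjectIdentifier:
    # Hypothetical fields mirroring a subset of the metadata listed above.
    storage_location: str              # specific storage location within the resource
    resource_location: str             # where the storage resource itself lives
    resource_structure: str            # structure of the resource, e.g. "volume", "snapshot"
    data_type: str                     # type of the contained data
    access_control: dict = field(default_factory=dict)
    cloud_provider: Optional[str] = None
    encrypted: bool = False            # indicator whether data are encrypted
    compressed: bool = False           # indicator whether data are compressed
    data_encryption_key: Optional[bytes] = None
    epoch: int = 0                     # epoch/generation number
```

Such a record would let the CAS locate, decode and authorize access to an object from the identifier alone.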
- the system may further include a local computing system comprising at least one computing host device, the CAS application, at least one local storage resource and at least one local cache. The local computing system connects to the one or more cloud storage arrays via the Internet.
- the storage resources may be storage volumes or snapshots.
- the invention features a method for resource sharing across multi-cloud storage arrays including providing a plurality of storage resources distributed in one or more cloud storage arrays and providing a cloud array storage (CAS) application for managing the resource sharing process.
- Each storage resource comprises a unique object identifier that identifies location and structure of the corresponding storage resource at a given point-in-time.
- the CAS application manages the resource sharing process by first taking an instantaneous copy of initial data stored in a first location of a first storage resource at a given point-in-time and then distributing copies of the instantaneous copy to other storage resources in the one or more cloud storage arrays.
- the instantaneous copy comprises a first unique object identifier pointing to the first storage location of the initial data in the first storage resource and when the instantaneous copy is distributed to a second storage resource, the first unique object identifier is copied into a second storage location within the second storage resource and the second storage location of the second storage resource comprises a second unique object identifier.
- Implementations of this aspect of the invention may include one or more of the following features.
- the method may further include the following steps. First, receiving a write command from a computing host device for writing new data into a first storage resource.
- the first storage resource comprises a first unique object identifier and the first unique object identifier comprises metadata identifying a first storage location within the first storage resource, structure of the first storage resource, type of contained data, access control data and security data.
- verifying authorization of the computing host device to write the new data into the first storage location of the first storage resource based on the access control metadata.
- the method may further include the following steps. First, analyzing the local cache block where the new data were written to determine if the local cache block has been written before or not. If the local cache block has been written before, backfilling the local cache block with unchanged data from the first storage location and then flushing the local cache block data to a second storage location in the first storage resource. If the local cache block has not been written before, flushing the local cache block data to the second storage location in the first storage resource.
- the method may further include the following steps. First, requesting authorization to perform cache flush of the data in the local cache block to one or more cloud storage arrays. Upon receiving authorization to perform cache flush, creating a copy of the local cache block data and compressing the data in the local cache block. Next, encrypting the data in the local cache block. Next, assigning a unique object identifier and a logical time stamp to the local cache block. Next, encrypting the unique object identifier of the local cache block, and then transmitting the encrypted cache block to one or more cloud storage arrays.
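The flush sequence above (copy, compress, encrypt, assign an identifier with a logical timestamp, encrypt the identifier, transmit) can be sketched as follows. The XOR stream stands in for a real symmetric cipher such as AES, cloud arrays are modeled as plain lists, and all names are illustrative:

```python
import itertools
import json
import time
import uuid
import zlib

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Stand-in for a real symmetric cipher; XOR keeps the sketch
    # dependency-free and is trivially reversible (apply twice to decrypt).
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

def flush_cache_block(block, data_key, meta_key, clouds):
    buffer_copy = bytes(block)                         # 1. create a copy of the buffer
    compressed = zlib.compress(buffer_copy)            # 2. compress the data
    ciphertext = xor_stream(compressed, data_key)      # 3. encrypt with the data private key
    identifier = {"id": str(uuid.uuid4()),             # 4. assign a unique identifier
                  "logical_ts": time.monotonic_ns()}   #    including a logical timestamp
    sealed_id = xor_stream(json.dumps(identifier).encode(), meta_key)  # 5. encrypt the identifier
    for cloud in clouds:                               # 6. transmit to each cloud storage array
        cloud.append((sealed_id, ciphertext))
    return identifier

cloud_a, cloud_b = [], []
flush_cache_block(b"dirty cache block contents", b"data-key", b"meta-key", [cloud_a, cloud_b])
assert cloud_a == cloud_b and len(cloud_a) == 1
# Reading back reverses the pipeline: decrypt, then decompress.
sealed_id, ciphertext = cloud_a[0]
assert zlib.decompress(xor_stream(ciphertext, b"data-key")) == b"dirty cache block contents"
```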
- FIG. 1 is a schematic diagram of a copy-on-write snapshot implementation
- FIG. 2A is a schematic overview diagram of a single node to two-cloud array data replication system
- FIG. 2B is a schematic overview diagram of a two node to two-cloud array data replication system
- FIG. 3A-FIG. 3C are flow diagrams of the data I/O requests in a multi-cloud data replication system
- FIG. 4 is a block diagram of a data volume
- FIG. 5 is a block diagram of the basic envelope information
- FIG. 6 is a block diagram of the basic envelope encryption structure
- FIG. 7 is a block diagram of the addressed envelope information
- FIG. 8 depicts a block diagram of the “shared snapshot” process in a cloud array replication system.
- data are usually written in computer files and stored in some kind of durable storage medium such as hard disks, compact discs (CD), zip drives, USB flash drives or magnetic media, among others.
- the stored data may be numbers, text characters, or image pixels.
- Most computers organize files in folders, directories and catalogs. The way a computer organizes, names, stores and manipulates files is globally referred to as its file system.
- An extent is a contiguous area of storage in a computer file system reserved for a file.
- In addition to the data stored in the files, file systems include other bookkeeping information (or metadata) that is typically associated with each file within the file system. This bookkeeping information (metadata) includes the length of the data contained in a file, the time the file was last modified, the file creation time, the time last accessed, the file's device type, owner user ID and access permission settings, among others.
- Computer files are protected against accidental or deliberate damage by implementing access control to the files and by backing up the content of the files.
- Access control refers to restricting access and implementing permissions as to who may or may not read, write, modify, delete or create files and folders.
- Backing up files refers to making copies of the files in a separate location so that they can be restored if something happens to the main computer, or if they are deleted accidentally.
- Files are often copied to removable media such as writable CDs or cartridge tapes. Copying files to another hard disk in the same computer protects against failure of one disk.
- a complete data back up of a large set of data usually takes a long time. During the time the data are being backed up the users of the system may continue to write to the data files that are being backed up. This results in the backed-up data not being the same across all users and may lead to data and/or file corruption.
- One way to avoid this problem is to require all users to stop writing data to the data files while the backup occurs. However, this is impractical and undesirable for a multi-user group data system.
- a “snapshot” is defined as an instantaneous copy of a set of files and directories stored in a storage device as they are at a particular point in time.
- a snapshot creates a point-in-time copy of the data.
- a snapshot may or may not involve the actual physical copying of data bits from one storage location to another. The time and I/O needed to create a snapshot does not increase with the size of the data set, whereas the time and I/O needed for a direct backup is proportional to the size of the data set.
- a snapshot contains indicators pointing to where the initial data and changed data can be found.
- Snapshots are used for data protection, data analysis, data replication and data distribution. In cases of data loss due to either data or file corruption, the data can be recovered from the snapshot, i.e., from a previous version of the volume.
- Program developers may test programs or run data mining utilities on snapshots. Administrators may take a snapshot of a master volume (i.e., take instant copies of a master volume) and share it with a large number of users in the system.
- Snapshots usually have an operational overhead associated with whatever copy implementation is used. Increasing the number of snapshots increases the latency of the system and therefore some implementations restrict how the snapshots can be used. In some cases snapshots are read-only. Implementations that allow read-write snapshots may restrict the number of copies produced. Read-write snapshots are sometimes called branching snapshots, because they implicitly create diverging versions of their data.
- a snapshot 60 of a storage volume 56 stored in storage device 54 is created via the snapshot mechanism 55 .
- the snapshot mechanism 55 creates a logical copy 60 of the data in storage volume 56 .
- When the snapshot 60 is first created, only the metadata indicating where the original data of volume 56 are stored are copied into the snapshot 60 . No physical copy of the original data 56 is taken at the time of the creation of snapshot 60 .
- a storage region 68 is set aside in the storage device 54 for future writes in the snapshot 60 . Accordingly, the creation of the snapshot 60 via the snapshot mechanism 55 is instantaneous.
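The copy-on-write snapshot mechanism of FIG. 1 can be sketched as follows, with illustrative names and the set-aside region modeled as a per-snapshot dictionary. Creation copies no data, so it is instantaneous; an original block is preserved only when it is first overwritten:

```python
class Volume:
    def __init__(self, blocks):
        self.blocks = dict(enumerate(blocks))  # block number -> data

class Snapshot:
    """Metadata-only point-in-time copy: records where original data live."""

    def __init__(self, volume):
        self.volume = volume
        self.preserved = {}  # set-aside region: blocks saved before being overwritten

    def read(self, n):
        # Read the preserved copy if the block changed, else read through.
        return self.preserved.get(n, self.volume.blocks[n])

def write_block(volume, snapshots, n, data):
    # Copy-on-write: preserve the old block for each snapshot before overwriting.
    for snap in snapshots:
        if n not in snap.preserved:
            snap.preserved[n] = volume.blocks[n]
    volume.blocks[n] = data

vol = Volume([b"A", b"B"])
snap = Snapshot(vol)             # instantaneous: no data copied yet
write_block(vol, [snap], 0, b"A2")
assert snap.read(0) == b"A" and vol.blocks[0] == b"A2"
assert snap.read(1) == b"B"      # unchanged blocks still read through to the volume
```

This is why the cost of creating a snapshot does not grow with the size of the data set: only subsequent writes pay the preservation cost.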
- the present invention provides a data back-up system based on sharing a snapshot of the initial data over an array of online cloud storage systems.
- This data back-up system utilizes the “shared snapshot” solution to provide data distribution, data analyses, data test and development, bulk loading, workflow management and disaster recovery.
- a multi-cloud replication system 100 includes a local computing system 60 connecting to one or more online cloud storage systems 104 , 106 via Internet connection 90 .
- the local computing system 60 includes a computing host 103 , accessing a local storage device 116 via a node 102 , a cloud array software (CAS) application 200 and a local cache 80 .
- Host 103 issues read or write commands to the cache 80 and local storage device 116 via a standard block-based iSCSI (Internet Small Computer System Interface) interface of the CAS 200 .
- a SCSI interface is a set of standards for physically connecting and transferring data between computer hard disks and peripheral storage devices. The SCSI standards define commands, protocols and electrical and optical interfaces.
- the iSCSI protocols allow client computing devices to send SCSI commands to SCSI storage devices on remote servers via wide area IP (Internet Protocol) network connections, thereby creating a storage area network (SAN).
- iSCSI protocols are used by systems administrators to allow server computers to access disk volumes on storage arrays for storage consolidation to a central location and disaster recovery applications.
- the iSCSI protocols allow block level data input/output (I/O).
- a block is a sequence of bytes having a nominal length.
- a kernel-level interface is used for writing into block-structured storage resources.
- the cloud replication system 100 may include more than one cluster node.
- cloud replication system 101 includes nodes 102 and 108 .
- Host 103 accesses local cache 80 a and local storage device 110 in node 102 via the iSCSI interface 112 and host 105 accesses local cache 80 b and local storage device 130 in node 108 also via the iSCSI interface 112 .
- Hosts 103 and 105 also access a shared storage device 120 via the iSCSI interface 112 .
- cloud array software application (CAS) 200 provides a secure and reliable replication of data between cloud storage resources 104 and 106 and the local storage devices 110 , 120 , 130 .
- Each storage resource 110 , 120 , 130 , 140 , 150 is associated with a unique object identifier that points to a specific location in the multi-cloud array.
- the unique object identifier also includes information (i.e., metadata) about the structure of the specific resource, the type of data contained, access control information and security requirements, among others.
- the unique object identifier includes a global unique identifier index (GUID), a local unique identifier (LUID) index, and security information.
- In the case of a catastrophic failure of the CAS 200 , only a small amount of metadata is necessary to recover the entire contents of each volume. In the simplest case, all that is needed is the object identifier of the volume object 301 , shown in FIG. 4 .
- Usually, though, additional information needs to be provided in order to locate and address the volume object 301 within the appropriate cloud provider.
- This representation of a volume structure lends itself to using the snapshot replication model in order to perform point-in-time copies of a volume in a cloud array. Furthermore, this representation allows sharing entire volumes and the datasets contained therein across multiple disparate systems performing a wide variety of tasks, in such a way as to eliminate operational overhead between the systems. Essentially, volumes can be transferred or shared between CAS instances without either copying the data or managing access to physical or network components.
- an input/output (I/O) that is received from attached hosts 103 , 105 via the iSCSI interface 112 is processed in several stages, passing from the host's random access memory (RAM) to specific blocks in specific storage volumes in the local disk storage devices 110 , 120 , 130 and in the cloud storage devices 140 , 150 .
- the processing 160 of an I/O request includes the following steps.
- the I/O is an iSCSI “write” request.
- a “write” request directed to a storage volume is received from a host 103 via the iSCSI 112 ( 161 ).
- the Cloud Array Software (CAS) application 200 identifies the internal structure representing that storage volume by mapping the host and the storage volume identifier, and initiates processing of the host “write” request ( 162 ).
- CAS ( 200 ) verifies that the node 102 is authorized to perform this “write” request to the identified storage volume ( 163 ). If the authorization fails ( 165 ) an error is indicated ( 166 ).
- Authorization may fail for a number of reasons: the node may not currently be a member of the storage cluster or the cluster may be currently partitioned, some resource necessary to the processing may be currently offline or blocked by some other node, or some other circumstance may have led to revocation of this node's authorization. If the authorization is approved ( 164 ), the “write” is passed to a caching subsystem. Next, the caching subsystem checks the node's authorization to write in the specific region of the storage volume to which the “write” is to be directed ( 167 ). In the single-node system 100 of FIG. 2A , the specific region authorization is unlikely to fail. However, in a system with multiple nodes, such as in FIG. 2B , the storage volume is partitioned into sections, with each node having direct responsibility for some subset of those sections.
- For different caching configuration options, such as shared versus mirrored cache, the meaning of this responsibility may differ, as will be described below.
- the caching subsystem proceeds to determine if the precise extent of the “write” request is already contained within the cache 80 . It performs a lookup on existing cache blocks to determine if the extent is within them ( 170 ). If the extent does not match any existing cache blocks ( 171 ), the caching subsystem attempts to allocate cache resources for the extent, either by allocating new resources or by freeing existing ones, if no more capacity is available ( 173 ).
- Cache blocks are allocated in very large segments, ranging from 64 kilobytes to a full megabyte, depending upon configuration choices. Once the new cache block is allocated, the write is stored in the appropriate locations in the cache block buffer. In a mirrored cache configuration, some form of consensus via a distributed algorithm such as a replicated state machine must be achieved before the write is stored in the buffer. If the extent matches an existing cache block ( 172 ), the “write” request is stored in the appropriate locations in the cache block buffer ( 174 ).
- Whether the write is also immediately stored on disk in the local storage 116 is configuration dependent.
- a “dirty mask” structure indicating the location of the valid data in the cache buffer is simultaneously updated.
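A toy model of this dirty-mask bookkeeping, assuming a per-byte mask for clarity (a real implementation would use a compact bitmap over cache-block extents); the class name is illustrative:

```python
class CacheBlock:
    """Cache block whose dirty mask marks which byte ranges hold valid new data."""

    def __init__(self, size):
        self.buffer = bytearray(size)
        self.dirty = [False] * size  # per-byte dirty mask

    def write(self, offset, data):
        self.buffer[offset:offset + len(data)] = data
        for i in range(offset, offset + len(data)):
            self.dirty[i] = True     # mask updated simultaneously with the buffer

    def dirty_extents(self):
        # Yield (start, end) ranges of valid data that a flush must preserve.
        start = None
        for i, d in enumerate(self.dirty + [False]):
            if d and start is None:
                start = i
            elif not d and start is not None:
                yield (start, i)
                start = None

blk = CacheBlock(16)
blk.write(4, b"abcd")
assert list(blk.dirty_extents()) == [(4, 8)]
```

During backfill, everything outside the dirty extents is filled from the underlying storage, so the mask tells the flush exactly which bytes came from the host.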
- initial processing of the “write” request is almost completed.
- a flow control analysis ( 191 ) is performed to determine the amount of host I/O processing being performed, and if the rest of the system is in danger of lagging too far behind the host processing, a small amount of additional latency may be introduced.
- flow control is done, if necessary, simply by pausing the response to the host for a very short period of time, and identifying and amortizing the overhead of remote transmissions over as many of the incoming requests as possible to avoid any single slowdown that could potentially cause failure or other noticeable problems.
- Flow control reduces and eliminates the possibility of catastrophic I/O errors on the host due to unacceptably long periods of slowdown within CAS ( 200 ).
- the caching subsystem is reactivated to analyze the cache block ( 176 ). If the cache block represents a block on the storage volume that has never been written to before (or which has been deleted), then the cache buffer is “zero-filled” ( 177 ). If the storage volume block has been previously written, i.e., is “dirty”, the cache block must be backfilled by reading its data from an underlying cloud storage device and then the entire cache block is flushed to the local storage device ( 180 ). Assuming the cache buffer is zero-filled ( 177 ), except for the extent matching the dirty mask and containing the data from the previous disk, the entire cache block is then flushed to the local storage device 110 ( 180 ).
- a cache flush from node 102 to one or more clouds 104 , 106 is scheduled.
- the node 102 requests and receives authorization to begin a flush of the cached storage volume data to the cloud.
- Each “dirty” cache block (cache blocks containing non-zero dirty masks) passes through the following series of processing steps.
- a copy of the buffer is created ( 183 ), and then the data within the buffer are compressed ( 184 ) and encrypted using a data private key (symmetric) ( 185 ).
- the cache block is assigned a unique identifier, including a logical timestamp ( 186 ), and then the cache block's unique identifier is encrypted using a metadata private key (symmetric) ( 187 ).
- the resulting buffer is transmitted to one or more cloud storage providers 104 , 106 , according to a RAID-1 replication algorithm ( 188 ).
- a volume 300 that is presented to a host 103 or set of hosts 103 , 105 by a single CAS instance 200 is stored in the cloud in a tree structure.
- the nodes in the tree structure are represented by named cloud objects. Each cloud object contains either data or metadata.
- Leaf nodes contain strictly the volume data, while internal nodes contain lists of references to other cloud objects in the tree, as well as some extra metadata used for such things as versioning.
- FIG. 4 illustrates the cloud storage I/O request process and format used by CAS ( 200 ) to store volume data in the cloud.
- the volume object 301 and the region object 311 are the internal nodes, while the page object 321 is the leaf node where the data 323 are stored.
- the primary volume object 301 contains a set of three regions identified by region identifiers 303 , 304 , 306 (i.e., metadata) pointing to regions of the volume.
- region identifier 303 points to region 311 .
- Region 311 includes three pages identified by page identifiers 313 , 314 , 315 .
- Page identifier 313 points to page 321 of the volume, which contains data 323 .
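The volume/region/page tree can be traversed with two metadata lookups followed by a leaf read. The sketch below reuses the reference numerals of FIG. 4 as object names; the dictionary layout standing in for named cloud objects is an assumption:

```python
# Named cloud objects: internal nodes hold identifier lists, leaves hold data.
cloud = {
    "vol-301":  {"regions": ["reg-311", "reg-312", "reg-313"]},   # volume object
    "reg-311":  {"pages": ["page-321", "page-322", "page-323"]},  # region object
    "page-321": {"data": b"volume data block"},                   # page (leaf) object
}

def read_page(cloud, volume_id, region_index, page_index):
    # Follow volume -> region -> page, one identifier lookup per level.
    region_id = cloud[volume_id]["regions"][region_index]
    page_id = cloud[region_id]["pages"][page_index]
    return cloud[page_id]["data"]

assert read_page(cloud, "vol-301", 0, 0) == b"volume data block"
```

Because every node is addressed by an object identifier, knowing the root volume object's identifier is enough to reach all data and metadata beneath it, which is what makes the small-metadata recovery described above possible.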
- the CAS 200 maintains local copies only for as long as is necessary to construct or read the data and metadata contained therein.
- the volume object 301 is maintained for the life of the CAS volume, but always as a shadow of the corresponding cloud object, i.e. any changes which are made by the CAS are immediately transmitted to the cloud.
- In order to encapsulate the volume information, we developed a packaging mechanism, which we call a volume envelope 350 , shown in FIG. 5 .
- This envelope 350 contains all of the information necessary to retrieve the original volume object, all authentication information necessary to validate both the sender and recipient of the envelope, and all authorization information needed to dictate the “Terms of Use”. Finally, the envelope is securely sealed so that none of the potentially secret information can be accessed intentionally or inadvertently by any third-party individuals who gain access to that envelope, as shown in FIG. 6 .
- The basic envelope information 350 includes an envelope descriptor 351 , cloud provider 352 , cloud access code 353 , cloud user 354 , cloud secret/token 355 , cloud object identifier 356 , an indicator of whether the volume is encrypted (yes/no) 357 , an indicator of whether the volume is compressed (yes/no) 358 , an optional structural encryption key 359 , an optional data encryption key 360 , epoch/generation number 361 , cloud array volume identifier 362 , most recent cloud array identifier 363 and user-provided description 364 .
- Items marked with an asterisk in FIG. 5 are fields that may vary from cloud provider to cloud provider. If the data contained within the cloud are encrypted or compressed by the CAS software 200 , the envelope needs to retain that information, as well as the particular encryption keys that are used to encode and decode the data and metadata.
- The envelope descriptor 351 is a tag which identifies the nature and intended purpose of the envelope, e.g. shared snapshot, volume migration, or volume retention. The recipient's behavior and expectations about volume status differ for each of the different envelope descriptor types. Every envelope includes the epoch number 361 of the volume object at the time of envelope creation, where the epoch number is a logical timestamp which is monotonically updated when the volume is written to.
- The epoch number 361 may be used to invalidate the entire envelope if there are concerns about stale data (or stale envelopes).
- The cloud array volume identifier 362 provides a label for the volume, the most recent cloud array identifier 363 establishes the claimed identity of the sender, and the user-provided description 364 allows the user to embed additional arbitrary information in the envelope.
- The envelope structure is composed in a self-describing manner, e.g. XML, so that an actual structure may omit optional fields, and the cloud provider access methods may be varied without modifying the contents or usage of the rest of the structure. Additionally, the ordering and size of the fields within the envelope structure can be changed.
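As an illustration of this self-describing composition, the base envelope could be rendered as XML with optional fields simply omitted; the element names below are assumptions for the sketch, not a schema defined by the specification.

```python
import xml.etree.ElementTree as ET

# Illustrative XML rendering of part of the base envelope of FIG. 5;
# field names mirror the description but the exact schema is assumed.
fields = {
    "descriptor": "shared-snapshot",        # 351
    "cloud_provider": "example-provider",   # 352
    "cloud_object_identifier": "vol-301",   # 356
    "volume_encrypted": "yes",              # 357
    "epoch": "42",                          # 361
    # optional keys (359, 360) omitted: a self-describing format allows this
}

envelope = ET.Element("envelope")
for name, value in fields.items():
    ET.SubElement(envelope, name).text = value

text = ET.tostring(envelope, encoding="unicode")

# A recipient parses whatever fields are present and tolerates absences.
parsed = ET.fromstring(text)
assert parsed.findtext("epoch") == "42"
assert parsed.find("data_encryption_key") is None  # optional field absent
```

Because each field is named, fields can be reordered or resized and provider-specific fields added without breaking readers of the rest of the structure.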
- A number of fields within the envelope structure are considered to be secret, e.g. the data encryption keys 360 , structural encryption keys 359 , and secret tokens 355 from the cloud providers. Therefore, base envelopes are not stored or transmitted as clear text; instead, base envelopes are structured and encrypted, as shown in FIG. 6 .
- A base envelope structure 350 is encrypted using a user-provided pass phrase, encoded as a base-64 structure 366 , and then a user-provided description or identifier 367 is included.
- The user-provided description 367 may or may not be the same as the user description contained within the base envelope 365 .
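The sealing of FIG. 6 can be sketched as follows; the PBKDF2 key derivation, the fixed salt and the toy XOR keystream are illustrative stand-ins for whatever passphrase-based cipher an implementation would actually use.

```python
import base64
import hashlib
import itertools


def keystream_crypt(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' standing in for real passphrase encryption."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))


def seal(base_envelope: bytes, passphrase: str, description: str) -> dict:
    """Sketch of FIG. 6: encrypt the base envelope with a key derived
    from a user pass phrase, base-64 encode it, and attach a cleartext
    user-provided description."""
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), b"salt", 1000)
    return {
        "description": description,   # stays readable without the pass phrase
        "payload": base64.b64encode(
            keystream_crypt(base_envelope, key)).decode(),
    }


def unseal(sealed: dict, passphrase: str) -> bytes:
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), b"salt", 1000)
    return keystream_crypt(base64.b64decode(sealed["payload"]), key)


sealed = seal(b"<envelope>...</envelope>", "pass phrase", "quarterly snapshot")
assert unseal(sealed, "pass phrase") == b"<envelope>...</envelope>"
```

A third party who obtains the sealed structure sees only the description and an opaque base-64 payload; the secret fields inside the base envelope remain inaccessible without the pass phrase.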
- A CAS system may encode a volume or a snapshot for transmission to another CAS system, or for archival purposes.
- The recipient of the envelope is restricted in its use of the encoded data: for example, a snapshot envelope requires that a certain number of steps be taken in order to safely access that snapshot, and the sender must honor certain commitments regarding the lifespan of that snapshot. Additional algorithmic data may be encoded in the base snapshot to account for those commitments, e.g. an expiration date.
- A volume transfer/migration may require a coordination point which will record when certain phases of the migration have been achieved. The identity and specific transaction process for the migration will also need to be encoded in the base envelope.
- The data replication in the cloud array includes the following.
- CAS 200 takes a snapshot 400 of volume 300 ( 500 ) by making a copy 401 of the primary volume object 301 .
- Copy 401 includes copies 403 , 404 , 405 of the region identifiers 303 , 304 , 305 , respectively.
- CAS 200 copies snapshot 400 and distributes copies 420 , 430 directly to cloud storage arrays 421 , 431 , respectively.
- Copies of the snapshots 400 may also be distributed from one cloud storage array to another.
- Copy 420 of snapshot 400 is distributed from cloud array 421 to cloud array 441 .
- The copies 420 , 430 , 440 of the snapshot 400 include the volume region identifiers, which are used to access the original volume 300 and perform read/write operations.
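A minimal sketch of this distribution model: only the small volume object (the list of region identifiers) is copied, while the region and page objects stay where they are. All container names here are illustrative.

```python
import copy

# Sketch of the shared snapshot of FIG. 8: the snapshot is a copy of the
# primary volume object only, i.e. its list of region identifiers.
primary_volume = {"id": "vol-301", "regions": ["reg-303", "reg-304", "reg-305"]}

snapshot = copy.deepcopy(primary_volume)   # snapshot 400 via copy 401
snapshot["id"] = "snap-400"

# direct distribution to a cloud storage array...
cloud_array_421 = {"snap-420": copy.deepcopy(snapshot)}
# ...and array-to-array distribution from that copy
cloud_array_441 = {"snap-440": copy.deepcopy(cloud_array_421["snap-420"])}

# every copy still references the original regions of volume 300
assert cloud_array_441["snap-440"]["regions"] == primary_volume["regions"]
```

Since each copy is a few identifiers rather than the volume data, distribution is cheap and the original volume serves reads through the shared region identifiers.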
- The advantage of this “shared snapshot” method is that the copying of the original snapshot 400 and the distribution of the snapshot copies from one cloud storage array to another does not incur any overhead on the original system, and therefore the latency of the original system is not affected. Overhead is incurred only during garbage collection. As described above, garbage collection, as a normal CAS 200 operation, involves creating a new page for every set of writes. CAS 200 checks whether any snapshots are still using a page before removing the page object from the cloud array.
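The garbage-collection check can be sketched as a simple reachability test over the reference lists; the data structures below are illustrative assumptions, not the CAS bookkeeping format.

```python
# A page object is removed from the cloud only when no volume or
# snapshot still references it.
cloud_pages = {"page-321": b"old", "page-322": b"new"}
volume_refs = {"vol-301": ["page-322"]}       # the volume rewrote its page
snapshot_refs = {"snap-400": ["page-321"]}    # a snapshot still needs the old one


def collect_garbage():
    live = {p for refs in (*volume_refs.values(), *snapshot_refs.values())
            for p in refs}
    for page in list(cloud_pages):
        if page not in live:
            del cloud_pages[page]


collect_garbage()
assert "page-321" in cloud_pages        # kept: a snapshot still uses it

del snapshot_refs["snap-400"]           # the snapshot is deleted
collect_garbage()
assert "page-321" not in cloud_pages    # now safely reclaimed
```

This is why the overhead of shared snapshots appears only at collection time: writes always create new pages, and old pages linger until the last referencing snapshot disappears.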
- CAS 200 can easily use the volume object of the snapshot as a basis for future read/write (I/O) operations.
- The volume object of the snapshot and the underlying region/data pages are stored in the cloud, and CAS 200 allows multiple users/systems to access the same data, i.e., CAS provides a “shared snapshot” operation.
- This “shared snapshot” system can be used in distributed analysis of data. In distributed analysis of data, a large amount of data are stored in a single volume and can be shared and simultaneously analyzed by multiple users/systems. Local caches in each system can ensure uncompromised performance, since each cache operates independently.
- A company has a large set of data on which it wishes to do some processing, e.g. customer trend analysis.
- Traditionally, a copy of that dataset would be made and one or more on-premise servers would be dedicated to performing the extensive computational cycles, including a substantial amount of I/O processing as data are retrieved from the disks, operated upon by the processor, and the results written back out to disk.
- Using the CAS envelope scheme allows for a faster, cheaper model. If the large dataset is stored on a CAS system, either partially cached locally or fully replicated to a remote site, then a snapshot of that entire dataset can be easily created. An envelope containing that snapshot is created and distributed to a number of virtual CASs, which may reside in a remote compute infrastructure.
- Each of those CASs instantiates the enveloped snapshot and exposes it to an associated virtual server, created expressly for the purpose of performing the desired analysis.
- Each virtual server is assigned a chunk of the large dataset to be analyzed, and automatically loads its chunk into its associated CAS instance. There is no contention in the underlying IO system, there are no spurious copies of the data, and the virtual resources can be simply removed when the computation is complete.
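The chunk assignment can be sketched as a disjoint partition of the dataset's blocks across the virtual servers; the helper below is illustrative only.

```python
def assign_chunks(num_blocks: int, num_servers: int):
    """Split a dataset of num_blocks into contiguous, non-overlapping
    chunks, one per virtual server / CAS pair."""
    base, extra = divmod(num_blocks, num_servers)
    chunks, start = [], 0
    for s in range(num_servers):
        size = base + (1 if s < extra else 0)   # spread the remainder
        chunks.append(range(start, start + size))
        start += size
    return chunks


chunks = assign_chunks(10, 3)
assert [list(c) for c in chunks] == [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
assert sum(len(c) for c in chunks) == 10   # every block assigned exactly once
```

Because the chunks are disjoint, the servers never contend for the same blocks, matching the claim that there is no contention in the underlying I/O system.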
- CAS can be used to create an entire replica of the production environment's data set, based on the most recent versions of production data. Envelopes containing snapshots of the volumes can be distributed to virtual CASs in the test environment, which then expose the volumes to the virtualized test environment. Rather than building out an entire permanent infrastructure to support temporary tasks, this virtualized environment can be loaded and created only when the developers require it.
- A user could create a CAS volume on a local system, encapsulate it in an envelope, and transfer it to a virtual CAS within the provider's local infrastructure. Bulk loading can then be done on the virtual CAS. Once completed, the volume can be enveloped again and transferred back to the user's local CAS system.
- Envelopes can facilitate this kind of architecture, by allowing volumes to be transferred quickly to the server which is best fitted to performing the current stage's task.
- Envelopes are also applicable in disaster recovery applications.
- Large configurations of massive datasets are easily, compactly, and securely stored in multiple locations using the described envelope methodology.
- In the case of an emergency, the datasets are re-instantiated with the help of the envelope information.
Abstract
A system for resource sharing across multi-cloud storage arrays includes a plurality of storage resources and a cloud array storage (CAS) application. The plurality of storage resources are distributed in one or more cloud storage arrays, and each storage resource comprises a unique object identifier that identifies the location and structure of the corresponding storage resource at a given point-in-time. The cloud array storage (CAS) application manages the resource sharing process by first taking an instantaneous copy of initial data stored in a first location of a first storage resource at a given point-in-time and then distributing copies of the instantaneous copy to other storage resources in the one or more cloud storage arrays. The instantaneous copy comprises a first unique object identifier pointing to the first storage location of the initial data in the first storage resource, and when the instantaneous copy is distributed to a second storage resource, the first unique object identifier is copied into a second storage location within the second storage resource and the second storage location of the second storage resource is assigned a second unique object identifier.
Description
- This application claims the benefit of U.S. provisional application Ser. No. 61/324,819 filed on Apr. 16, 2010 and entitled SYSTEM AND METHOD FOR RESOURCE SHARING ACROSS MULTI-CLOUD ARRAYS which is commonly assigned and the contents of which are expressly incorporated herein by reference.
- The present invention relates to a system and a method for resource sharing across multi-cloud arrays, and more particularly to resource sharing across multi-cloud arrays that provides secure and reliable data replication and “compute anywhere” capability.
- Cloud storage refers to providing online data storage services including database-like services, web-based storage services, network attached storage services, and synchronization services. Examples of database storage services include Amazon SimpleDB, Google App Engine and BigTable datastore, among others. Examples of web-based storage services include Amazon Simple Storage Service (Amazon S3) and Nirvanix SDN, among others. Examples of network attached storage services include MobileMe, iDisk and Nirvanix NAS, among others. Examples of synchronization services include Live Mesh, MobileMe push functions and Live Desktop component, among others.
- Customers usually rent data capacity on demand over the Internet, or use local pools of inexpensive storage as a private utility, anywhere within their business. Cloud storage services are usually billed on a utility computing basis, e.g., per gigabyte per month. Cloud storage provides flexibility of storage capacity planning and reduces the storage management overhead by centralizing and outsourcing data storage administrative and infrastructure costs.
- However, the benefits of cloud storage do come with some significant drawbacks. Business data are extremely critical to the operations of any business and need to be reliable, secure and available on demand. Even a minor security breach or blackout in the data availability can have drastic consequences. Current Internet-based cloud storage implementations do not usually deploy security measures that are adequate to protect against even minor security breaches. Availability and reliability have also not been up to the standards of even small-to-medium size enterprises. Furthermore, cloud storage is not standards-based and businesses usually need to invest in application development in order to be able to use it. In particular, different cloud storage systems provide different interfaces and have different requirements for the data presentation and transfer. For example, Amazon S3 allows reading objects containing from 1 to 5 gigabytes of data each (extents), storing each object in a file and uploading (sending data from a local system to a remote system) only the entire file, whereas Nirvanix SDN allows writing to any extent but only downloading (receiving data to a local system from a remote system) the entire file. Continuous data replication between data stored in these two different cloud storage systems is currently unavailable.
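One way to bridge such interface differences is a common adapter interface under which every provider exposes whole-object reads and writes; this sketch is purely illustrative and is not part of the claimed system or any provider's real API.

```python
from abc import ABC, abstractmethod


class CloudStore(ABC):
    """Illustrative common interface hiding per-provider transfer rules."""

    @abstractmethod
    def put(self, name: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, name: str) -> bytes: ...


class WholeObjectStore(CloudStore):
    """Models a provider that only accepts whole-object uploads and
    serves whole-object downloads (in-memory stand-in)."""

    def __init__(self):
        self._objects = {}

    def put(self, name, data):
        self._objects[name] = bytes(data)   # no partial writes allowed

    def get(self, name):
        return self._objects[name]


def replicate(src: CloudStore, dst: CloudStore, names):
    """Replication becomes possible once both ends speak the same
    interface: read whole objects from one, write them to the other."""
    for name in names:
        dst.put(name, src.get(name))


a, b = WholeObjectStore(), WholeObjectStore()
a.put("extent-1", b"payload")
replicate(a, b, ["extent-1"])
assert b.get("extent-1") == b"payload"
```

Each real provider would need its own adapter translating these calls into its upload/download rules; the point is that the replication loop itself stays provider-agnostic.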
- A one-time data migration process from Amazon S3 to Nirvanix SDN is described in http://www.nirvanix.com/s3migrationtool.aspx. It requires downloading and installing specialized software, is cumbersome, inefficient for continuous data replication, and not reliable or secure, and therefore it is currently not used, at least for business storage applications.
- Accordingly, there is a need for a multi-cloud data replication solution that is reliable, secure, inexpensive, easy to use and scalable without compromising performance.
- The invention provides a multi-cloud data replication system that utilizes shared storage resources for providing secure and reliable data replication and “compute anywhere” capability. Each storage resource is associated with a unique object identifier that identifies the location and structure of the corresponding storage resource at a given point-in-time within a specific cloud. Data contained in the storage resources are accessed by accessing the location and volume identified by the unique object identifier. The shared resources may be storage volumes or snapshots, among others. The shared storage resources may be located in any of the clouds included in the multi-cloud arrays. A cloud array storage (CAS) application manages the storage sharing processes.
- In one embodiment, a “shared snapshot” is utilized to provide data replication. In the “shared snapshot” data replication model a “snapshot” of the original volume is taken and then copies of the “snapshot” are distributed to the various clouds in the multi-cloud array. This multi-cloud data replication system provides cloud storage having enterprise-level functionality, security, reliability and increased operational performance without latency. The “shared snapshot” data replication model is also used to provide an accelerated distributed computing environment.
- In general, in one aspect, the invention features a system for resource sharing across multi-cloud storage arrays including a plurality of storage resources and a cloud array storage (CAS) application. The plurality of storage resources are distributed in one or more cloud storage arrays, and each storage resource comprises a unique object identifier that identifies location and structure of the corresponding storage resource at a given point-in-time. The cloud array storage (CAS) application manages the resource sharing process by first taking an instantaneous copy of initial data stored in a first location of a first storage resource at a given point-in-time and then distributing copies of the instantaneous copy to other storage resources in the one or more cloud storage arrays. The instantaneous copy comprises a first unique object identifier pointing to the first storage location of the initial data in the first storage resource and when the instantaneous copy is distributed to a second storage resource, the first unique object identifier is copied into a second storage location within the second storage resource and the second storage location of the second storage resource comprises a second unique object identifier.
- Implementations of this aspect of the invention may include one or more of the following features. When a user tries to “write” new data into the first storage location of the first storage resource, the new data are written into a second storage location of the first storage resource and then the second storage location of the first storage resource is assigned to the first unique object identifier. The second storage location of the first storage resource is backfilled with unchanged data from the first storage location of the first storage resource, and subsequently data in the first storage location of the first storage resource are removed. The first unique object identifier is encrypted prior to the instantaneous copy being distributed. The data in the first location of the first storage resource are compressed and encrypted after the instantaneous copy is taken and prior to the instantaneous copy being distributed. Each unique object identifier comprises one or more metadata identifying specific storage location within a specific storage resource, specific storage resource location, structure of the specific resource, type of the contained data, access control data, security data, encryption data, object descriptor, cloud storage provider, cloud storage access node, cloud storage user, cloud storage secret/token, indicator whether data are encrypted or not, indicator whether data are compressed or not, structural encryption key, data encryption key, epoch/generation number, cloud array volume identifier, user-provided description, sender identifier, recipient identifier, algorithm identifier, signature or timestamp. The system may further include a local computing system comprising at least one computing host device, the CAS application, at least one local storage resource and at least one local cache. The local computing system connects to the one or more cloud storage arrays via the Internet. The storage resources may be storage volumes or snapshots.
- In general, in another aspect, the invention features a method for resource sharing across multi-cloud storage arrays including providing a plurality of storage resources distributed in one or more cloud storage arrays and providing a cloud array storage (CAS) application for managing the resource sharing process. Each storage resource comprises a unique object identifier that identifies location and structure of the corresponding storage resource at a given point-in-time. The CAS application manages the resource sharing process by first taking an instantaneous copy of initial data stored in a first location of a first storage resource at a given point-in-time and then distributing copies of the instantaneous copy to other storage resources in the one or more cloud storage arrays. The instantaneous copy comprises a first unique object identifier pointing to the first storage location of the initial data in the first storage resource and when the instantaneous copy is distributed to a second storage resource, the first unique object identifier is copied into a second storage location within the second storage resource and the second storage location of the second storage resource comprises a second unique object identifier.
- Implementations of this aspect of the invention may include one or more of the following features. The method may further include the following steps. First, receiving a write command from a computing host device for writing new data into a first storage resource. The first storage resource comprises a first unique object identifier and the first unique object identifier comprises metadata identifying a first storage location within the first storage resource, structure of the first storage resource, type of contained data, access control data and security data. Next, identifying structure of the first storage resource based on the structure metadata. Next, verifying authorization of the computing host device to write the new data into the first storage resource based on the access control metadata. Next, verifying authorization of the computing host device to write the new data into the first storage location of the first storage resource based on the access control metadata. Next, determining if a precise block for writing the new data already exists in a local cache of the first storage resource. Next, storing the new data in the precise block of the local cache, if a precise block already exists. Next, allocating a new block and storing the new data in the new block of the local cache, if a precise block does not already exist. Finally, acknowledging processing of the write command to the computing host device.
- The method may further include the following steps. First, analyzing the local cache block where the new data were written to determine if the local cache block has been written before or not. If the local cache block has been written before, backfilling the local cache block with unchanged data from the first storage location and then flushing the local cache block data to a second storage location in the first storage resource. If the local cache block has not been written before, flushing the local cache block data to the second storage location in the first storage resource.
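The write and backfill steps above can be sketched as follows; the per-byte dirty mask, the tiny block size and the dictionary standing in for the storage resource are illustrative simplifications of the cache bookkeeping.

```python
class WriteCache:
    """Sketch of the cache write path: reuse the precise cached block when
    present, otherwise allocate one; on flush, a block that was written
    before is first backfilled with unchanged data from its old location."""

    BLOCK = 8  # illustrative block size

    def __init__(self, backing: dict):
        self.backing = backing     # storage resource: block number -> bytes
        self.cache = {}            # block number -> (buffer, dirty mask)
        self.flushed = set()       # blocks that have been written before

    def write(self, offset: int, data: bytes) -> str:
        blk, start = divmod(offset, self.BLOCK)
        if blk not in self.cache:  # no precise block exists: allocate one
            self.cache[blk] = (bytearray(self.BLOCK), [False] * self.BLOCK)
        buf, dirty = self.cache[blk]
        buf[start:start + len(data)] = data
        for i in range(start, start + len(data)):
            dirty[i] = True
        return "ack"               # acknowledge the write to the host

    def flush(self, blk: int) -> None:
        buf, dirty = self.cache[blk]
        if blk in self.flushed:    # written before: backfill unchanged bytes
            old = self.backing[blk]
            for i in range(self.BLOCK):
                if not dirty[i]:
                    buf[i] = old[i]
        self.backing[blk] = bytes(buf)
        self.flushed.add(blk)


store = {}
wc = WriteCache(store)
wc.write(0, b"AAAAAAAA")
wc.flush(0)
wc.cache.pop(0)                # cache block evicted after the flush
wc.write(2, b"BB")             # partial rewrite of a previously written block
wc.flush(0)
assert store[0] == b"AABBAAAA" # unchanged bytes were backfilled
```

The backfill step is what lets a partial write of a previously flushed block produce a complete, consistent block at the new location.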
- The method may further include the following steps. First, requesting authorization to perform cache flush of the data in the local cache block to one or more cloud storage arrays. Upon receiving authorization to perform cache flush, creating a copy of the local cache block data and compressing the data in the local cache block. Next, encrypting the data in the local cache block. Next, assigning a unique object identifier and a logical time stamp to the local cache block. Next, encrypting the unique object identifier of the local cache block, and then transmitting the encrypted cache block to one or more cloud storage arrays.
- The details of one or more embodiments of the invention are set forth in the accompanying drawings and description below. Other features, objects and advantages of the invention will be apparent from the following description of the preferred embodiments, the drawings and from the claims.
- Referring to the figures, wherein like numerals represent like parts throughout the several views:
- FIG. 1 is a schematic diagram of a copy-on-write snapshot implementation;
- FIG. 2A is a schematic overview diagram of a single node to two-cloud array data replication system;
- FIG. 2B is a schematic overview diagram of a two node to two-cloud array data replication system;
- FIG. 3A-FIG. 3C are flow diagrams of the data I/O requests in a multi-cloud data replication system;
- FIG. 4 is a block diagram of a data volume;
- FIG. 5 is a block diagram of the basic envelope information;
- FIG. 6 is a block diagram of the basic envelope encryption structure;
- FIG. 7 is a block diagram of the addressed envelope information;
- FIG. 8 depicts a block diagram of the “shared snapshot” process in a cloud array replication system.
- In computing systems data are usually written in computer files and stored in some kind of durable storage medium such as hard disks, compact discs (CD), zip drives, USB flash drives or magnetic media, among others. The stored data may be numbers, text characters, or image pixels. Most computers organize files in folders, directories and catalogs. The way a computer organizes, names, stores and manipulates files is globally referred to as its file system. An extent is a contiguous area of storage in a computer file system reserved for a file. File systems include, in addition to the data stored in the files, other bookkeeping information (or metadata) that is typically associated with each file within the file system. This bookkeeping information (metadata) includes the length of the data contained in a file, the time the file was last modified, the file creation time, the time the file was last accessed, the file's device type, the owner user ID and access permission settings, among others.
- Computer files are protected against accidental or deliberate damage by implementing access control to the files and by backing up the content of the files. Access control refers to restricting access and implementing permissions as to who may or may not read, write, modify, delete or create files and folders. When computer files contain information that is extremely important, a back-up process is used to protect against disasters that might destroy the files. Backing up files refers to making copies of the files in a separate location so that they can be restored if something happens to the main computer, or if they are deleted accidentally. There are many ways to back up files. Files are often copied to removable media such as writable CDs or cartridge tapes. Copying files to another hard disk in the same computer protects against failure of one disk. However, if it is necessary to protect against failure or destruction of the entire computer, then copies of the files must be made on other media that can be taken away from the computer and stored in a safe, distant location. Most computer systems provide utility programs to assist in the back-up process. However, the back up process can become very time-consuming if there are many files to safeguard.
- A complete data back up of a large set of data usually takes a long time. During the time the data are being backed up, the users of the system may continue to write to the data files that are being backed up. This results in the backed-up data not being the same across all users and may lead to data and/or file corruption. One way to avoid this problem is to require all users to stop writing data in the data files while the back up occurs. However, this is neither practical nor desirable for a multi-user group data system.
- One type of data back up that can be used in cases where the writing of data cannot be interrupted is a “snapshot”. A “snapshot” is defined as an instantaneous copy of a set of files and directories stored in a storage device as they are at a particular point in time. A snapshot creates a point-in-time copy of the data. A snapshot may or may not involve the actual physical copying of data bits from one storage location to another. The time and I/O needed to create a snapshot does not increase with the size of the data set, whereas the time and I/O needed for a direct backup is proportional to the size of the data set. In some systems once the initial snapshot is taken of a data set, subsequent snapshots copy the changed data only, and use a system of pointers to reference the initial snapshot. This method of pointer-based snapshots consumes less disk capacity than if the data set was repeatedly copied. In summary, a snapshot contains indicators pointing to where the initial data and changed data can be found.
- Snapshots are used for data protection, data analysis, data replication and data distribution. In cases of data loss due to either data or file corruption, the data can be recovered from the snapshot, i.e., from a previous version of the volume. Program developers may test programs or run data mining utilities on snapshots. Administrators may take a snapshot of a master volume (i.e., take instant copies of a master volume) and share it with a large number of users in the system.
- Snapshots usually have an operational overhead associated with whatever copy implementation is used. Increasing the number of snapshots increases the latency of the system and therefore some implementations restrict how the snapshots can be used. In some cases snapshots are read-only. Implementations that allow read-write snapshots may restrict the number of copies produced. Read-write snapshots are sometimes called branching snapshots, because they implicitly create diverging versions of their data.
- Referring to FIG. 1 , in a copy-on-write snapshot implementation 50 , a snapshot 60 of a storage volume 56 stored in storage device 54 is created via the snapshot mechanism 55 . The snapshot mechanism 55 creates a logical copy 60 of the data in storage volume 56 . When the snapshot 60 is first created, only the metadata indicating where the original data of volume 56 are stored are copied into the snapshot 60 . No physical copy of the original data 56 is taken at the time of the creation of snapshot 60 . A storage region 68 is set aside in the storage device 54 for future writes in the snapshot 60 . Accordingly, the creation of the snapshot 60 via the snapshot mechanism 55 is instantaneous. When a user 52 tries to “write” new data ( 51 ) into block 58 of the original data 56 , the original data in block 58 are first copied ( 53 ) into block 64 contained in the snapshot storage region 68 and then the new data are written in block 58 ( 51 ). Read requests to the snapshot volume for the unchanged data blocks ( 63 ) are redirected to the original storage volume 56 ( 66 ). Read requests to the snapshot volume for the changed data block 62 ( 61 ) are redirected to the copied blocks 64 in the snapshot storage region ( 65 ). Recently, Internet-based cloud storage services became available that allow data storage to online cloud storage systems. The present invention provides a data back-up system based on sharing a snapshot of the initial data over an array of online cloud storage systems. This data back-up system utilizes the “shared snapshot” solution to provide data distribution, data analysis, data test and development, bulk loading, workflow management and disaster recovery.
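The copy-on-write behavior of FIG. 1 can be sketched as follows; the dictionaries standing in for storage volume 56 and snapshot storage region 68 are illustrative.

```python
# Sketch of FIG. 1: on a write, the original block is first copied into
# the snapshot region; snapshot reads are redirected to the original
# volume for unchanged blocks and to the copied blocks for changed ones.
volume = {58: b"old-58", 59: b"old-59"}   # original storage volume 56
snapshot_region = {}                      # storage region 68, set aside


def host_write(block: int, data: bytes) -> None:
    if block in volume and block not in snapshot_region:
        snapshot_region[block] = volume[block]   # copy original first (53)
    volume[block] = data                         # then write new data (51)


def snapshot_read(block: int) -> bytes:
    # changed blocks come from the snapshot region (65);
    # unchanged blocks are redirected to the original volume (66)
    return snapshot_region.get(block, volume[block])


host_write(58, b"new-58")
assert snapshot_read(58) == b"old-58"   # point-in-time view is preserved
assert snapshot_read(59) == b"old-59"   # unchanged: served from the volume
assert volume[58] == b"new-58"
```

Creating the snapshot costs nothing up front; the copy happens lazily, one block at a time, only when a block is first overwritten.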
- Referring to FIG. 2A , a multi-cloud replication system 100 includes a local computing system 60 connecting to one or more online cloud storage systems via an Internet connection 90 . The local computing system 60 includes a computing host 103 , accessing a local storage device 116 via a node 102 , a cloud array software (CAS) application 200 and a local cache 80 . Host 103 issues read or write commands to the cache 80 and local storage device 116 via a standard block-based iSCSI (Internet Small Computer System Interface) interface of the CAS 200 . A SCSI interface is a set of standards for physically connecting and transferring data between computer hard disks and peripheral storage devices. The SCSI standards define commands, protocols and electrical and optical interfaces. The iSCSI protocols allow client computing devices to send SCSI commands to SCSI storage devices on remote servers via wide area IP (Internet Protocol) network connections, thereby creating a storage area network (SAN). Currently, iSCSI protocols are used by systems administrators to allow server computers to access disk volumes on storage arrays for storage consolidation to a central location and disaster recovery applications. The iSCSI protocols allow block-level data input/output (I/O). A block is a sequence of bytes having a nominal length. In other embodiments, a kernel-level interface is used for writing into block-structured storage resources.
cloud replication system 100 may include more than one cluster nodes. Referring toFIG. 2B ,cloud replication system 101 includesnodes local cache 80 a andlocal storage device 110 innode 102 via theiSCSI interface 112 and host 105 accesseslocal cache 80 b andlocal storage device 130 innode 108 also via theiSCSI interface 112.Hosts storage device 120 via theiSCSI interface 112. In bothsystems cloud storage resources local storage devices storage resource CAS 200, only a small amount of metadata is necessary to recover the entire contents of each volume. In the simplest case, all that is needed is the object identifier of thevolume object 301, shown inFIG. 4 . Usually, though, additional information needs to be provided in order to locate and address thevolume object 301 within the appropriate cloud provider. This representation of a volume structure lends itself to using the snapshot replication model in order to perform point-in-time copies of a volume in a cloud array. Furthermore, this representation allows sharing entire volumes and the datasets contained therein across multiple disparate systems performing a wide variety of tasks, in such a way as to eliminate operational overhead between the systems. Essentially, volumes can be transferred or shared between CAS instances without either copying the data or managing access to physical or network components. - In operation, an input/output (I/O) that is received from attached
hosts 103 and 105 via the iSCSI interface 112 is processed in several stages, passing from the host's random access memory (RAM) to specific blocks in specific storage volumes in the local disk storage devices and in the cloud storage devices. - Referring to
FIG. 3A, FIG. 3B and FIG. 3C, the processing 160 of an I/O request includes the following steps. In this example, the I/O request is an iSCSI "write" request. In the first step, a "write" request directed to a storage volume is received from a host 103 via the iSCSI interface 112 (161). The cloud array software (CAS) application 200 identifies the internal structure representing that storage volume by mapping the host and the storage volume identifier, and initiates processing of the host "write" request (162). Next, CAS 200 verifies that the node 102 is authorized to perform this "write" request to the identified storage volume (163). If the authorization fails (165), an error is indicated (166). Authorization may fail for a number of reasons: the node may not currently be a member of the storage cluster, the cluster may be currently partitioned, some resource necessary to the processing may be currently offline or blocked by some other node, or some other circumstance may have led to revocation of the node's authorization. If the authorization is approved (164), the "write" request is passed to a caching subsystem. Next, the caching subsystem checks the node's authorization to write in the specific region of the storage volume to which the "write" is directed (167). In the single-node system 100 of FIG. 2A, the specific-region authorization is unlikely to fail. However, in a system with multiple nodes, such as in FIG. 2B, the storage volume is partitioned into sections, with each node having direct responsibility for some subset of those sections. For different caching configuration options, such as shared versus mirrored cache, the meaning of this responsibility may differ, as will be described below. Assuming that node 102 is authorized to write to the specific region of the storage volume (168), the caching subsystem proceeds to determine whether the precise extent of the "write" request is already contained within the cache 80.
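The first-stage write path described here and in the following paragraphs (volume-level and region-level authorization, then the cache-extent lookup and dirty-mask update) might be sketched roughly as below. All class names, sizes, and return values are illustrative assumptions, not details from the patent; for simplicity, one region corresponds to one cache block.

```python
# Illustrative sketch of the first-stage "write" path (steps 163-175).

CACHE_BLOCK = 64 * 1024            # patent: 64 KB up to 1 MB, per configuration

class WriteError(Exception):
    pass

class Node:
    def __init__(self, node_id, cluster, region_owner):
        self.node_id = node_id
        self.cluster = cluster            # current cluster membership
        self.region_owner = region_owner  # region index -> owning node id
        self.cache = {}                   # region index -> (buffer, dirty mask)

    def write(self, offset, data):
        # Step 163: the node must currently be an authorized cluster member.
        if self.node_id not in self.cluster:
            raise WriteError("node not authorized for this volume")
        # Step 167: the node must own the region being written.
        region = offset // CACHE_BLOCK
        if self.region_owner.get(region) != self.node_id:
            raise WriteError("node not responsible for this region")
        # Steps 170-174: find or allocate the cache block, store the data,
        # and update the dirty mask marking which bytes are valid.
        if region not in self.cache:
            self.cache[region] = (bytearray(CACHE_BLOCK), bytearray(CACHE_BLOCK))
        buf, mask = self.cache[region]
        start = offset % CACHE_BLOCK
        buf[start:start + len(data)] = data
        mask[start:start + len(data)] = b"\x01" * len(data)
        return "acknowledged"             # step 175

node = Node("node-102", {"node-102"}, {0: "node-102"})
assert node.write(128, b"hello") == "acknowledged"
```

An unauthorized node (one missing from the cluster set, or writing outside its owned regions) raises an error instead, mirroring steps 165-166.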
It performs a lookup on existing cache blocks to determine whether the extent is within them (170). If the extent does not match any existing cache block (171), the caching subsystem attempts to allocate cache resources for the extent, either by allocating new resources or, if no more capacity is available, by freeing existing ones (173). Cache blocks are allocated in very large segments, ranging from 64 kilobytes to a full megabyte, depending upon configuration choices. Once the new cache block is allocated, the write data are stored in the appropriate locations in the cache block buffer. In a mirrored-cache configuration, some form of consensus via a distributed algorithm, such as a replicated state machine, must be achieved before the write is stored in the buffer. If the extent matches an existing cache block (172), the "write" request is stored in the appropriate locations in the cache block buffer (174). - Whether the write is also immediately stored on disk in the
local storage 116 is configuration dependent. A "dirty mask" structure indicating the location of the valid data in the cache buffer is simultaneously updated. Upon completion of the cache buffer updates, initial processing of the "write" request is almost complete. At this point, a flow-control analysis (191) is performed to determine the amount of host I/O processing being performed; if the rest of the system is in danger of lagging too far behind the host processing, a small amount of additional latency may be introduced. Flow control is then applied, if necessary, simply by pausing the response to the host for a very short period of time, and by identifying and amortizing the overhead of remote transmissions over as many of the incoming requests as possible, to avoid any single slowdown that could cause failure or other noticeable problems. Flow control reduces or eliminates the possibility of catastrophic I/O errors on the host due to unacceptably long periods of slowdown within CAS 200. - At this point, the first stage of the CAS 200 processing of the "write" request has been completed, and success is returned to the host (175). In the next stage (shown in
FIG. 3B), after acknowledging to the host 103, the caching subsystem is reactivated to analyze the cache block (176). If the cache block represents a block on the storage volume that has never been written to before (or which has been deleted), the cache buffer is "zero-filled" (177). If the storage volume block has been previously written, i.e., is "dirty," the cache block must be backfilled by reading its data from an underlying cloud storage device, and then the entire cache block is flushed to the local storage device (180). Assuming the cache buffer is zero-filled (177), except for the extent matching the dirty mask and containing the data from the previous disk, the entire cache block is then flushed to the local storage device 110 (180). - At some point during the process, a cache flush from
node 102 to one or more clouds is scheduled. Node 102 requests and receives authorization to begin a flush of the cached storage volume data to the cloud. Each "dirty" cache block (a cache block containing a non-zero dirty mask) passes through the following series of processing steps. - First, a copy of the buffer is created (183); then the data within the buffer are compressed (184) and encrypted using a data private key (symmetric) (185). Next, the cache block is assigned a unique identifier, including a logical timestamp (186), and then the cache block's unique identifier is encrypted using a metadata private key (symmetric) (187). After these steps are performed, the resulting buffer is transmitted to one or more
cloud storage providers. - The above-described cloud storage I/O request process and the format used by CAS 200 to store volume data in the cloud are compatible with a "snapshot"-based data backup. Referring to
FIG. 4, a volume 300 that is presented to a host 103 or set of hosts by a single CAS instance 200 is stored in the cloud in a tree structure. The nodes in the tree structure are represented by named cloud objects. Each cloud object contains either data or metadata. Leaf nodes contain strictly the volume data, while internal nodes contain lists of references to other cloud objects in the tree, as well as some extra metadata used for such things as versioning. In the example of FIG. 4, the volume object 301 and the region object 311 are the internal nodes, while the page object 321 is the leaf node where the data 323 are stored. The primary volume object, having a volume ID, contains a set of three regions identified by region identifiers. In the example of FIG. 4, region identifier 303 points to region 311. Region 311 includes three pages identified by page identifiers. Page identifier 313 points to page 321 of the volume, which contains data 323. There is always a single volume object 301 for each CAS volume, but there are many regions and pages for each volume object. All of the region objects and page objects persist only in the cloud. The CAS 200 maintains local copies only for as long as is necessary to construct or read the data and metadata contained therein. The volume object 301 is maintained for the life of the CAS volume, but always as a shadow of the corresponding cloud object, i.e., any changes made by the CAS are immediately transmitted to the cloud. - Based on this method, all of the data that describe the internal structure of a volume representation are always present in the cloud. In the case of a catastrophic failure of the
CAS 200, only a small amount of metadata is necessary to recover the entire contents of the volume. In the simplest case, all that is needed is the object identifier of the volume object 301. Usually, though, additional information needs to be provided in order to locate and address the volume object 301 within the appropriate cloud provider. This representation of a volume structure lends itself to using the snapshot replication model in order to perform point-in-time copies of a volume in a cloud array. Furthermore, this representation allows sharing entire volumes, and the datasets contained therein, across multiple disparate systems performing a wide variety of tasks, in such a way as to eliminate operational overhead between the systems. Essentially, volumes can be transferred or shared between CAS instances without either copying the data or managing access to physical or network components. - In order to encapsulate the volume information, we developed a packaging mechanism, which we call a
volume envelope 350, shown in FIG. 5. This envelope 350 contains all of the information necessary to retrieve the original volume object, all authentication information necessary to validate both the sender and the recipient of the envelope, and all authorization information needed to dictate the "Terms of Use". Finally, the envelope is securely sealed so that none of the potentially secret information can be accessed, intentionally or inadvertently, by any third-party individuals who gain access to that envelope, as shown in FIG. 6. - At a minimum, the envelope contains the information necessary to access the volume object. The specifics may vary depending upon the cloud provider, but the basic framework is similar across most cloud providers. Referring to
FIG. 5, the basic envelope information 350 includes an envelope descriptor 351, cloud provider 352, cloud access code 353, cloud user 354, cloud secret/token 355, cloud object identifier 356, an indicator of whether the volume is encrypted (yes/no) 357, an indicator of whether the volume is compressed (yes/no) 358, an optional structural encryption key 359, an optional data encryption key 360, an epoch/generation number 361, a cloud array volume identifier 362, a most recent cloud array identifier 363, and a user-provided description 364. Items marked with an asterisk are fields that may vary from cloud provider to cloud provider. If the data contained within the cloud are encrypted or compressed by the CAS software 200, the envelope needs to retain that information, as well as the particular encryption keys that are used to encode and decode the data and metadata. The envelope descriptor 351 is a tag which identifies the nature and intended purpose of the envelope, e.g. shared snapshot, volume migration, or volume retention. The recipient's behavior and expectations about volume status differ for each of the different envelope descriptor types. Every envelope includes the epoch number 361 of the volume object at the time of envelope creation, where the epoch number is a logical timestamp which is monotonically updated when the volume is written to. In some cases, the epoch number 361 may be used to invalidate the entire envelope if there are concerns about stale data (or stale envelopes). The cloud array volume identifier 362 provides a label for the volume, the most recent cloud array identifier 363 establishes the claimed identity of the sender, and the user-provided description 364 allows the user to embed additional arbitrary information in the envelope. The envelope structure is composed in a self-describing manner, e.g. XML, so that an actual structure may omit optional fields, and the cloud provider access methods may be varied without modifying the contents or usage of the rest of the structure.
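Because the envelope is self-describing, optional fields can simply be omitted from the serialized structure. A minimal sketch of such an XML envelope follows; the tag names are illustrative inventions, with only the field list taken from the description of FIG. 5.

```python
# Build a self-describing envelope in which omitted optional fields
# leave no trace in the serialized structure (illustrative sketch).

import xml.etree.ElementTree as ET

def make_envelope(fields):
    env = ET.Element("volume_envelope")
    for name, value in fields.items():
        if value is None:            # optional field: omit it entirely
            continue
        ET.SubElement(env, name).text = str(value)
    return ET.tostring(env, encoding="unicode")

xml_text = make_envelope({
    "envelope_descriptor": "shared_snapshot",
    "cloud_provider": "example-provider",   # hypothetical provider name
    "cloud_object_identifier": "vol-301",
    "volume_encrypted": "yes",
    "epoch_number": 7,
    "data_encryption_key": None,            # optional field, omitted
})
assert "data_encryption_key" not in xml_text
```

A recipient can then parse whatever fields are present without depending on field ordering or on any particular cloud provider's access-method fields.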
Additionally, the ordering and size of the fields within the envelope structure can be changed. - A number of fields within the envelope structure are considered to be secret, e.g. the data encryption keys 360, structural encryption keys 359, and
secret tokens 355 from the cloud providers. Therefore, base envelopes are not stored or transmitted as clear text; instead, base envelopes are structured and encrypted, as shown in FIG. 6. A base envelope structure 350 is encrypted using a user-provided pass phrase, encoded as a base-64 structure, and then a user-provided description or identifier 367 is included. The user-provided description 367 may or may not be the same as the user description contained within the base envelope 350. - The above described base envelope structure is only minimally secure. There are quite a few additional security concerns to be raised when transferring whole-volume access between disparate systems. These concerns are centered around several questions, including:
- Is the recipient authorized to view the contents of the envelope?
- Is the sender known to the recipient?
- Is this envelope the same as the one that the sender sent?
- How long is the envelope valid?
- What sort of operations by the recipient are permitted upon the volume?
To answer these questions, some additional information must be available to both the recipient and the sender via side channels, e.g. the public keys of both must be available. Therefore, the envelope structure is extended to include additional fields, resulting in the Addressed Envelope structure 370, shown in FIG. 7. Using an addressed envelope 370, a system may encrypt the base envelope 350 using the recipient's public key 371, the sender's private key 372, and/or an optional passphrase. In order to decode the contained envelope, the recipient must be able to access its own private key, the sender's public key, and the passphrase. Any of these fields may be optional. Minimally, the envelope must be signed, although the signature need not be secured. Any user-provided description on the addressed envelope is inherently insecure and is to be used for bookkeeping purposes only. - Using this pair of structures, a CAS system may encode a volume or a snapshot for transmission to another CAS system, or for archival purposes. The recipient of the envelope is restricted in its use of the encoded data: for example, a snapshot envelope requires that a certain number of steps be taken in order to safely access that snapshot, and the sender must honor certain commitments regarding the lifespan of that snapshot. Additional algorithmic data may be encoded in the base snapshot to account for those commitments, e.g. an expiration date. As another example, a volume transfer/migration may require a coordination point which records when certain phases of the migration have been achieved. The identity and specific transaction process for the migration also need to be encoded in the base envelope.
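The seal-then-sign flow described above can be sketched as follows. The patent names no particular algorithms, so PBKDF2 plus an HMAC-based keystream stand in for the pass-phrase encryption, and an HMAC over a shared key stands in for the public/private-key signature; this is a toy construction for illustration only, not a secure implementation.

```python
# Sealing a base envelope under a pass phrase, then signing the result so
# the recipient can verify origin and integrity (illustrative stand-ins).

import base64, hashlib, hmac

def _keystream(key, length):
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return bytes(out[:length])

def seal(base_envelope, passphrase):
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), b"salt", 10_000)
    cipher = bytes(a ^ b for a, b in
                   zip(base_envelope, _keystream(key, len(base_envelope))))
    return base64.b64encode(cipher).decode()

def unseal(sealed, passphrase):
    cipher = base64.b64decode(sealed)
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), b"salt", 10_000)
    return bytes(a ^ b for a, b in zip(cipher, _keystream(key, len(cipher))))

def address(sealed, sender_id, signing_key, description=""):
    # Minimally the envelope must be signed; the description stays in
    # cleartext and is for bookkeeping only.
    sig = hmac.new(signing_key, sealed.encode(), hashlib.sha256).hexdigest()
    return {"sender": sender_id, "description": description,
            "signature": sig, "envelope": sealed}

def verify(addressed, signing_key):
    expect = hmac.new(signing_key, addressed["envelope"].encode(),
                      hashlib.sha256).hexdigest()
    return hmac.compare_digest(expect, addressed["signature"])

sealed = seal(b"<volume_envelope/>", "pass phrase")
addressed = address(sealed, "cas-A", b"shared-key", "Q3 snapshot")
assert verify(addressed, b"shared-key")
assert unseal(addressed["envelope"], "pass phrase") == b"<volume_envelope/>"
```

A recipient lacking the signing key (or presented with a tampered envelope) fails verification, which models the "is the sender known" and "is this envelope the same" questions above.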
- Referring to
FIG. 8, the data replication in the cloud array includes the following. CAS 200 takes a snapshot 400 of volume 300 (500) by making a copy 401 of the primary volume object 301. Copy 401 includes copies of the region identifiers. Next, CAS 200 copies the snapshot 400 and distributes the copies to the cloud storage arrays. Snapshots 400 may also be distributed from one cloud storage array to another. In the example of FIG. 8, copy 420 of snapshot 400 is distributed from cloud array 421 to cloud array 441. In each cloud array, the copies of snapshot 400 include the volume region identifiers, which are used to access the original volume 300 and perform read/write operations. The advantage of this "shared snapshot" method is that the copying of the original snapshot 400 and the distribution of the snapshot copies from one cloud storage array to another do not incur any overhead on the original system, and therefore the latency of the original system is not affected. Overhead is incurred only during garbage collection. As was described above, garbage collection, as a normal CAS 200 operation, involves creating a new page for every set of writes. CAS 200 checks to see if any snapshots are still using a page before removing the page object from the cloud array. This is essentially a log-structured snapshot, in which data that would normally be marked invalid are simply retained. However, since the system uses a tree-structured volume instead of a flat volume, CAS 200 can easily use the volume object of the snapshot as a basis for future read/write (I/O) operations. In this way, the volume object of the snapshot and the underlying region/data pages are stored in the cloud, and CAS 200 allows multiple users/systems to access the same data, i.e., CAS provides a "shared snapshot" operation. One example of where this "shared snapshot" system can be used is distributed analysis of data.
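Over the tree-of-objects layout of FIG. 4, the shared-snapshot and garbage-collection behavior described above can be sketched as follows: a snapshot is a copy of the volume (root) object only, region and page objects are shared, and a page object is removed only when no volume or snapshot root still reaches it. Object names and the dictionary "cloud" are illustrative assumptions.

```python
# Shared snapshot over a tree-structured volume, with reachability-based
# garbage collection of page objects (illustrative sketch).

cloud = {
    "vol-301":  {"type": "root", "regions": ["reg-311"]},
    "reg-311":  {"type": "region", "pages": ["page-321"]},
    "page-321": {"type": "page", "data": b"data-323"},
}

def take_snapshot(cloud, volume_id, snap_id):
    # Copy only the root object; regions and pages are shared, so taking
    # the snapshot adds no I/O load on the original volume's data.
    cloud[snap_id] = dict(cloud[volume_id])
    return snap_id

def collect_page(cloud, page_id):
    # Remove the page object only if no root (volume or snapshot) still
    # reaches it through its regions.
    in_use = any(page_id in cloud[r]["pages"]
                 for o in cloud.values() if o.get("type") == "root"
                 for r in o["regions"])
    if not in_use:
        del cloud[page_id]
    return not in_use

take_snapshot(cloud, "vol-301", "snap-400")
assert collect_page(cloud, "page-321") is False   # snapshot still uses it
del cloud["vol-301"], cloud["snap-400"]
assert collect_page(cloud, "page-321") is True    # no root reaches it now
```

This mirrors the log-structured behavior described above: pages that would otherwise be invalidated are simply retained until no snapshot references them.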
In distributed analysis of data, a large amount of data are stored in a single volume and can be shared and simultaneously analyzed by multiple users/systems. Local caches in each system can ensure uncompromised performance, since each cache operates independently. - Applications of shared resources across multi-cloud arrays include the following examples.
- In one common scenario, a company has a large set of data on which they wish to do some processing, e.g. customer trend analysis. Traditionally, a copy of that dataset would be made and one or more on-premise servers would be dedicated to do the extensive computational cycles, including a substantial amount of IO processing as data is retrieved from the disks, operated upon by the processor, and then the results written back out to disk. Using the CAS envelope scheme allows for a faster, cheaper model. If the large dataset is stored on a CAS system, either partially cached locally or fully replicated to a remote site, then a snapshot of that entire dataset can be easily created. An envelope containing that snapshot is created and distributed to a number of virtual CASs, which may reside in a remote compute infrastructure. Each of those CASs instantiates the enveloped snapshot and exposes it to an associated virtual server, created expressly for the purpose of performing the desired analysis. With this infrastructure in place, the problem can be solved in an entirely distributed way. Each virtual server is assigned a chunk of the large dataset to be analyzed, and automatically loads its chunk into its associated CAS instance. There is no contention in the underlying IO system, there are no spurious copies of the data, and the virtual resources can be simply removed when the computation is complete.
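The chunk assignment in this scenario can be sketched in a few lines. The patent prescribes no particular chunking scheme, so the byte-range partition and the per-chunk "work" function below are purely illustrative.

```python
# Partition a shared snapshot volume into byte ranges, one per virtual
# server, and merge the per-chunk results (illustrative sketch).

def partition(volume_size, n_servers):
    chunk = (volume_size + n_servers - 1) // n_servers
    return [(i * chunk, min((i + 1) * chunk, volume_size))
            for i in range(n_servers) if i * chunk < volume_size]

def analyze(volume, ranges, work):
    # Each range would be handled by a separate virtual server against its
    # own CAS instance; here the chunks are processed sequentially.
    return sum(work(volume[lo:hi]) for lo, hi in ranges)

volume = bytes(range(256)) * 4          # stand-in for the shared dataset
ranges = partition(len(volume), 3)
assert ranges == [(0, 342), (342, 684), (684, 1024)]
assert analyze(volume, ranges, len) == len(volume)
```

Because every chunk is read through an independent CAS instance and local cache, there is no contention in the underlying I/O system, matching the scenario above.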
- Within any large IT department, there is a continual push for the development and deployment of new applications and infrastructure pieces to aid in the operations of the business. This development is often expensive to develop and difficult to deploy, owing to the difficulty of testing alpha and beta code against production data and under realistic working environments. Companies may spend millions building test laboratories for their development teams and devising complex data-sharing schemes. In much the same way as a virtual analytics infrastructure is constructed, CAS can be used to create an entire replica of the production environment's data set, based on the most recent versions of production data. Envelopes containing snapshots of the volumes can be distributed to virtual CASs in the test environment, which then expose the volumes to the virtualized test environment. Rather than building out an entire permanent infrastructure to support temporary tasks, this virtualized environment can be loaded and created only when the developers require it.
- There is a significant performance cost involved in copying a large amount of data over a wide area network. While a wide area network may be sufficient to support the ongoing transfer of working data, especially when backed by intelligent caching and flow control algorithms, the amount of data that is accumulated in a typical data set may take a prohibitive amount of time to move. That situation causes problems for a customer who wishes to use CAS with an existing data set. Envelopes provide an elegant solution. Most cloud storage services offer bulk loading services in which a physical disk is loaded with the data set, sent via overnight courier to a service location, and loaded via the provider's local network infrastructure. In this scenario, a user could create a CAS volume on a local system, encapsulate it in an envelope, and transfer it to a virtual CAS within the provider's local infrastructure. Bulk loading can then be done on the virtual CAS. Once completed, the volume can be enveloped again and transferred back to the user's local CAS system.
- In a number of different industries, large datasets have a well-defined lifecycle in which different stages of processing are performed most naturally on different servers. Envelopes can facilitate this kind of architecture, by allowing volumes to be transferred quickly to the server which is best fitted to performing the current stage's task.
Envelopes are also applicable in disaster recovery applications, where large configurations of massive datasets are easily, compactly, and securely stored in multiple locations using the described envelope methodology. In the case of an emergency, the datasets are re-instantiated with the help of the envelope information.
- Several embodiments of the present invention have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the invention. Accordingly, other embodiments are within the scope of the following claims.
Claims (21)
1. A system for resource sharing across multi-cloud storage arrays comprising:
a plurality of storage resources distributed in one or more cloud storage arrays, wherein each storage resource comprises a unique object identifier that identifies location and structure of the corresponding storage resource at a given point-in-time;
a cloud array storage (CAS) application managing the resource sharing process, wherein said CAS application manages the resource sharing process by first taking an instantaneous copy of initial data stored in a first location of a first storage resource at a given point-in-time and then distributing copies of the instantaneous copy to other storage resources in the one or more cloud storage arrays; and
wherein said instantaneous copy comprises a first unique object identifier pointing to the first storage location of the initial data in the first storage resource and when said instantaneous copy is distributed to a second storage resource, the first unique object identifier is copied into a second storage location within the second storage resource and wherein said second storage location of the second storage resource comprises a second unique object identifier.
2. The system of claim 1 , wherein when a user tries to “write” new data into the first storage location of the first storage resource, the new data are written into a second storage location of the first storage resource and then the second storage location of the first storage resource is assigned to the first unique object identifier.
3. The system of claim 2 wherein the second storage location of the first storage resource is backfilled with unchanged data from the first storage location of the first storage resource.
4. The system of claim 3 wherein subsequently data in the first storage location of the first storage resource are removed.
5. The system of claim 1 wherein said first unique object identifier is encrypted prior to the instantaneous copy being distributed.
6. The system of claim 1 wherein said data in said first location of said first storage resource are compressed and encrypted after said instantaneous copy is taken and prior to said instantaneous copy being distributed.
7. The system of claim 1 , wherein each unique object identifier comprises one or more metadata identifying specific storage location within a specific storage resource, specific storage resource location, structure of the specific resource, type of the contained data, access control data, security data, encryption data, object descriptor, cloud storage provider, cloud storage access node, cloud storage user, cloud storage secret/token, indicator whether data are encrypted or not, indicator whether data are compressed or not, structural encryption key, data encryption key, epoch/generation number, cloud array volume identifier, user-provided description, sender identifier, recipient identifier, algorithm identifier, signature or timestamp.
8. The system of claim 1 further comprising a local computing system comprising at least one computing host device, said CAS application, at least one local storage resource and at least one local cache, and wherein the local computing system connects to the one or more cloud storage arrays via the Internet.
9. The system of claim 1 wherein said storage resources comprise one of storage volumes or snapshots.
10. A method for resource sharing across multi-cloud storage arrays comprising:
providing a plurality of storage resources distributed in one or more cloud storage arrays, wherein each storage resource comprises a unique object identifier that identifies location and structure of the corresponding storage resource at a given point-in-time;
providing a cloud array storage (CAS) application managing the resource sharing process, wherein said CAS application manages the resource sharing process by first taking an instantaneous copy of initial data stored in a first location of a first storage resource at a given point-in-time and then distributing copies of the instantaneous copy to other storage resources in the one or more cloud storage arrays; and
wherein said instantaneous copy comprises a first unique object identifier pointing to the first storage location of the initial data in the first storage resource and when said instantaneous copy is distributed to a second storage resource, the first unique object identifier is copied into a second storage location within the second storage resource and wherein said second storage location of the second storage resource comprises a second unique object identifier.
11. The method of claim 10 , wherein when a user tries to “write” new data into the first storage location of the first storage resource, the new data are written into a second storage location of the first storage resource and then the second storage location of the first storage resource is assigned to the first unique object identifier.
12. The method of claim 11 wherein the second storage location of the first storage resource is backfilled with unchanged data from the first storage location of the first storage resource.
13. The method of claim 12 wherein subsequently data in the first storage location of the first storage resource are removed.
14. The method of claim 10 wherein said first unique object identifier is encrypted prior to the instantaneous copy being distributed.
15. The method of claim 10 wherein said data in said first location of said first storage resource are compressed and encrypted after said instantaneous copy is taken and prior to said instantaneous copy being distributed.
16. The method of claim 10 , wherein each unique object identifier comprises one or more metadata identifying specific storage location within a specific storage resource, specific storage resource location, structure of the specific resource, type of the contained data, access control data, security data, encryption data, object descriptor, cloud storage provider, cloud storage access node, cloud storage user, cloud storage secret/token, indicator whether data are encrypted or not, indicator whether data are compressed or not, structural encryption key, data encryption key, epoch/generation number, cloud array volume identifier, user-provided description, sender identifier, recipient identifier, algorithm identifier, signature or timestamp.
17. The method of claim 10 further comprising providing a local computing system comprising at least one computing host device, said CAS application, at least one local storage resource and at least one local cache, and wherein the local computing system connects to the one or more cloud storage arrays via the Internet.
18. The method of claim 10 wherein said storage resources comprise one of storage volumes or snapshots.
19. The method of claim 10 further comprising:
receiving a write command from a computing host device for writing new data into a first storage resource, wherein said first storage resource comprises a first unique object identifier and wherein said first unique object identifier comprises metadata identifying a first storage location within the first storage resource, structure of the first storage resource, type of contained data, access control data and security data;
identifying structure of the first storage resource based on said structure metadata;
verifying authorization of said computing host device to write said new data into said first storage resource based on said access control metadata;
verifying authorization of said computing host device to write said new data into said first storage location of said first storage resource based on said access control metadata;
determining if a precise block for writing the new data already exists in a local cache of the first storage resource;
storing the new data in the precise block of the local cache, if a precise block already exists;
allocating a new block and storing the new data in the new block of the local cache, if a precise block does not already exist; and
acknowledging processing of the write command to said computing host device.
20. The method of claim 19 further comprising:
analyzing the local cache block where the new data were written to determine if the local cache block has been written before or not;
if the local cache block has been written before, backfilling the local cache block with unchanged data from the first storage location and then flushing the local cache block data to a second storage location in the first storage resource;
if the local cache block has not been written before, flushing the local cache block data to the second storage location in the first storage resource.
21. The method of claim 20 further comprising:
requesting authorization to perform cache flush of the data in the local cache block to one or more cloud storage arrays;
upon receiving authorization to perform cache flush, creating a copy of the local cache block data and compressing the data in the local cache block;
encrypting the data in the local cache block;
assigning a unique object identifier and a logical time stamp to the local cache block;
encrypting the unique object identifier of the local cache block; and
transmitting the encrypted cache block to one or more cloud storage arrays.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/086,794 US20110258461A1 (en) | 2010-04-16 | 2011-04-14 | System and method for resource sharing across multi-cloud arrays |
PCT/US2011/032615 WO2011130588A2 (en) | 2010-04-16 | 2011-04-15 | System and method for resource sharing across multi-cloud arrays |
US14/269,758 US9836244B2 (en) | 2010-01-28 | 2014-05-05 | System and method for resource sharing across multi-cloud arrays |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US32481910P | 2010-04-16 | 2010-04-16 | |
US13/086,794 US20110258461A1 (en) | 2010-04-16 | 2011-04-14 | System and method for resource sharing across multi-cloud arrays |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/695,250 Continuation-In-Part US8762642B2 (en) | 2009-01-30 | 2010-01-28 | System and method for secure and reliable multi-cloud data replication |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/269,758 Continuation US9836244B2 (en) | 2010-01-28 | 2014-05-05 | System and method for resource sharing across multi-cloud arrays |
Publications (1)
Publication Number | Publication Date |
---|---|
US20110258461A1 true US20110258461A1 (en) | 2011-10-20 |
Family
ID=44789112
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/086,794 Abandoned US20110258461A1 (en) | 2010-01-28 | 2011-04-14 | System and method for resource sharing across multi-cloud arrays |
US14/269,758 Active 2030-03-05 US9836244B2 (en) | 2010-01-28 | 2014-05-05 | System and method for resource sharing across multi-cloud arrays |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/269,758 Active 2030-03-05 US9836244B2 (en) | 2010-01-28 | 2014-05-05 | System and method for resource sharing across multi-cloud arrays |
Country Status (2)
Country | Link |
---|---|
US (2) | US20110258461A1 (en) |
WO (1) | WO2011130588A2 (en) |
Cited By (127)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8321487B1 (en) * | 2010-06-30 | 2012-11-27 | Emc Corporation | Recovery of directory information |
US20130145430A1 (en) * | 2011-06-05 | 2013-06-06 | Apple Inc. | Asset streaming |
US20130238890A1 (en) * | 2010-10-29 | 2013-09-12 | Proximic, Inc. | Method for transmitting information from a first information provider to a second information provider via an information intermediary |
US8577842B1 (en) * | 2011-09-19 | 2013-11-05 | Amazon Technologies, Inc. | Distributed computer system snapshots and instantiation thereof |
US20130318125A1 (en) * | 2012-05-23 | 2013-11-28 | Box, Inc. | Metadata enabled third-party application access of content at a cloud-based platform via a native client to the cloud-based platform |
US8613108B1 (en) * | 2009-03-26 | 2013-12-17 | Adobe Systems Incorporated | Method and apparatus for location-based digital rights management |
CN103533023A (en) * | 2013-07-25 | 2014-01-22 | 上海和辰信息技术有限公司 | Cloud service application cluster synchronization system and synchronization method based on cloud service characteristics |
CN103607418A (en) * | 2013-07-25 | 2014-02-26 | 上海和辰信息技术有限公司 | Large-scale data partitioning system and partitioning method based on cloud service data characteristics |
US8688935B1 (en) * | 2010-01-19 | 2014-04-01 | Infinidat Ltd | Storage system and method for snapshot space management |
US20140208399A1 (en) * | 2012-06-22 | 2014-07-24 | Frank J. Ponzio, Jr. | Method and system for accessing a computing resource |
US8812612B1 (en) * | 2012-04-20 | 2014-08-19 | Emc Corporation | Versioned coalescer |
US8868574B2 (en) | 2012-07-30 | 2014-10-21 | Box, Inc. | System and method for advanced search and filtering mechanisms for enterprise administrators in a cloud-based environment |
US8892679B1 (en) | 2013-09-13 | 2014-11-18 | Box, Inc. | Mobile device, methods and user interfaces thereof in a mobile device platform featuring multifunctional access and engagement in a collaborative environment provided by a cloud-based platform |
US8935764B2 (en) | 2012-08-31 | 2015-01-13 | Hewlett-Packard Development Company, L.P. | Network system for implementing a cloud platform |
US20150033135A1 (en) * | 2012-02-23 | 2015-01-29 | Ajay JADHAV | Persistent node framework |
US8990307B2 (en) | 2011-11-16 | 2015-03-24 | Box, Inc. | Resource effective incremental updating of a remote client with events which occurred via a cloud-enabled platform |
US8990151B2 (en) | 2011-10-14 | 2015-03-24 | Box, Inc. | Automatic and semi-automatic tagging features of work items in a shared workspace for metadata tracking in a cloud-based content management system with selective or optional user contribution |
US9015601B2 (en) | 2011-06-21 | 2015-04-21 | Box, Inc. | Batch uploading of content to a web-based collaboration environment |
US9015465B2 (en) * | 2013-02-08 | 2015-04-21 | Automatic Data Capture Technologies Group, Inc. | Systems and methods for metadata-driven command processor and structured program transfer protocol |
US9021099B2 (en) | 2012-07-03 | 2015-04-28 | Box, Inc. | Load balancing secure FTP connections among multiple FTP servers |
US9019123B2 (en) | 2011-12-22 | 2015-04-28 | Box, Inc. | Health check services for web-based collaboration environments |
US9027108B2 (en) | 2012-05-23 | 2015-05-05 | Box, Inc. | Systems and methods for secure file portability between mobile applications on a mobile device |
US20150135004A1 (en) * | 2013-11-11 | 2015-05-14 | Fujitsu Limited | Data allocation method and information processing system |
US9038059B2 (en) | 2012-04-18 | 2015-05-19 | International Business Machines Corporation | Automatically targeting application modules to individual machines and application framework runtimes instances |
US9054919B2 (en) | 2012-04-05 | 2015-06-09 | Box, Inc. | Device pinning capability for enterprise cloud service and storage accounts |
US9063912B2 (en) | 2011-06-22 | 2015-06-23 | Box, Inc. | Multimedia content preview rendering in a cloud content management system |
US9098474B2 (en) | 2011-10-26 | 2015-08-04 | Box, Inc. | Preview pre-generation based on heuristics and algorithmic prediction/assessment of predicted user behavior for enhancement of user experience |
US9117087B2 (en) | 2012-09-06 | 2015-08-25 | Box, Inc. | System and method for creating a secure channel for inter-application communication based on intents |
US9135462B2 (en) | 2012-08-29 | 2015-09-15 | Box, Inc. | Upload and download streaming encryption to/from a cloud-based platform |
US20150263979A1 (en) * | 2014-03-14 | 2015-09-17 | Avni Networks Inc. | Method and apparatus for a highly scalable, multi-cloud service deployment, orchestration and delivery |
US20150263980A1 (en) * | 2014-03-14 | 2015-09-17 | Rohini Kumar KASTURI | Method and apparatus for rapid instance deployment on a cloud using a multi-cloud controller |
US9141637B2 (en) | 2012-09-26 | 2015-09-22 | International Business Machines Corporation | Predictive data management in a networked computing environment |
US9189495B1 (en) | 2012-06-28 | 2015-11-17 | Emc Corporation | Replication and restoration |
US9197718B2 (en) | 2011-09-23 | 2015-11-24 | Box, Inc. | Central management and control of user-contributed content in a web-based collaboration environment and management console thereof |
US9195636B2 (en) | 2012-03-07 | 2015-11-24 | Box, Inc. | Universal file type preview for mobile devices |
US9195519B2 (en) | 2012-09-06 | 2015-11-24 | Box, Inc. | Disabling the self-referential appearance of a mobile application in an intent via a background registration |
US9213684B2 (en) | 2013-09-13 | 2015-12-15 | Box, Inc. | System and method for rendering document in web browser or mobile device regardless of third-party plug-in software |
US9223500B1 (en) | 2012-06-29 | 2015-12-29 | Emc Corporation | File clones in a distributed file system |
US9237170B2 (en) | 2012-07-19 | 2016-01-12 | Box, Inc. | Data loss prevention (DLP) methods and architectures by a cloud service |
US9292833B2 (en) | 2012-09-14 | 2016-03-22 | Box, Inc. | Batching notifications of activities that occur in a web-based collaboration environment |
US9311071B2 (en) | 2012-09-06 | 2016-04-12 | Box, Inc. | Force upgrade of a mobile application via a server side configuration file |
CN105659563A (en) * | 2013-10-18 | 2016-06-08 | 思科技术公司 | System and method for software defined network aware data replication |
US9369520B2 (en) | 2012-08-19 | 2016-06-14 | Box, Inc. | Enhancement of upload and/or download performance based on client and/or server feedback information |
US9396245B2 (en) | 2013-01-02 | 2016-07-19 | Box, Inc. | Race condition handling in a system which incrementally updates clients with events that occurred in a cloud-based collaboration platform |
US9396216B2 (en) | 2012-05-04 | 2016-07-19 | Box, Inc. | Repository redundancy implementation of a system which incrementally updates clients with events that occurred via a cloud-enabled platform |
US20160212207A1 (en) * | 2014-09-03 | 2016-07-21 | Huizhou Tcl Mobile Communication Co., Ltd. | Method for cloud data backup and recovery |
US9413587B2 (en) | 2012-05-02 | 2016-08-09 | Box, Inc. | System and method for a third-party application to access content within a cloud-based platform |
US9436693B1 (en) | 2013-08-01 | 2016-09-06 | Emc Corporation | Dynamic network access of snapshotted versions of a clustered file system |
US9483473B2 (en) | 2013-09-13 | 2016-11-01 | Box, Inc. | High availability architecture for a cloud-based concurrent-access collaboration platform |
US9495364B2 (en) | 2012-10-04 | 2016-11-15 | Box, Inc. | Enhanced quick search features, low-barrier commenting/interactive features in a collaboration platform |
US9507795B2 (en) | 2013-01-11 | 2016-11-29 | Box, Inc. | Functionalities, features, and user interface of a synchronization client to a cloud-based environment |
US9519886B2 (en) | 2013-09-13 | 2016-12-13 | Box, Inc. | Simultaneous editing/accessing of content by collaborator invitation through a web-based or mobile application to a cloud-based collaboration platform |
US9519526B2 (en) | 2007-12-05 | 2016-12-13 | Box, Inc. | File management system and collaboration service and integration capabilities with third party applications |
US9535924B2 (en) | 2013-07-30 | 2017-01-03 | Box, Inc. | Scalability improvement in a system which incrementally updates clients with events that occurred in a cloud-based collaboration platform |
US9535909B2 (en) | 2013-09-13 | 2017-01-03 | Box, Inc. | Configurable event-based automation architecture for cloud-based collaboration platforms |
US9553758B2 (en) | 2012-09-18 | 2017-01-24 | Box, Inc. | Sandboxing individual applications to specific user folders in a cloud-based service |
US9558202B2 (en) | 2012-08-27 | 2017-01-31 | Box, Inc. | Server side techniques for reducing database workload in implementing selective subfolder synchronization in a cloud-based environment |
US9571564B2 (en) | 2012-08-31 | 2017-02-14 | Hewlett Packard Enterprise Development Lp | Network system for implementing a cloud platform |
US9575981B2 (en) | 2012-04-11 | 2017-02-21 | Box, Inc. | Cloud service enabled to handle a set of files depicted to a user as a single file in a native operating system |
US9602514B2 (en) | 2014-06-16 | 2017-03-21 | Box, Inc. | Enterprise mobility management and verification of a managed application by a content provider |
US9628268B2 (en) | 2012-10-17 | 2017-04-18 | Box, Inc. | Remote key management in a cloud-based environment |
US9633037B2 (en) | 2013-06-13 | 2017-04-25 | Box, Inc. | Systems and methods for synchronization event building and/or collapsing by a synchronization component of a cloud-based platform |
US9652181B2 (en) | 2014-01-07 | 2017-05-16 | International Business Machines Corporation | Library apparatus including a cartridge memory (CM) database stored on a storage cloud |
US9652741B2 (en) | 2011-07-08 | 2017-05-16 | Box, Inc. | Desktop application for access and interaction with workspaces in a cloud-based content management system and synchronization mechanisms thereof |
US9665349B2 (en) | 2012-10-05 | 2017-05-30 | Box, Inc. | System and method for generating embeddable widgets which enable access to a cloud-based collaboration platform |
US9691051B2 (en) | 2012-05-21 | 2017-06-27 | Box, Inc. | Security enhancement through application access control |
US9705967B2 (en) | 2012-10-04 | 2017-07-11 | Box, Inc. | Corporate user discovery and identification of recommended collaborators in a cloud platform |
US9712510B2 (en) | 2012-07-06 | 2017-07-18 | Box, Inc. | Systems and methods for securely submitting comments among users via external messaging applications in a cloud-based platform |
US9729675B2 (en) | 2012-08-19 | 2017-08-08 | Box, Inc. | Enhancement of upload and/or download performance based on client and/or server feedback information |
US9756022B2 (en) | 2014-08-29 | 2017-09-05 | Box, Inc. | Enhanced remote key management for an enterprise in a cloud-based environment |
US9773051B2 (en) | 2011-11-29 | 2017-09-26 | Box, Inc. | Mobile platform file and folder selection functionalities for offline access and synchronization |
US9794256B2 (en) | 2012-07-30 | 2017-10-17 | Box, Inc. | System and method for advanced control tools for administrators in a cloud-based service |
US9792320B2 (en) | 2012-07-06 | 2017-10-17 | Box, Inc. | System and method for performing shard migration to support functions of a cloud-based service |
US9794331B1 (en) * | 2014-09-29 | 2017-10-17 | Amazon Technologies, Inc. | Block allocation based on server utilization |
US9805050B2 (en) | 2013-06-21 | 2017-10-31 | Box, Inc. | Maintaining and updating file system shadows on a local device by a synchronization client of a cloud-based platform |
US20170337382A1 (en) * | 2016-05-18 | 2017-11-23 | International Business Machines Corporation | Privacy enabled runtime |
US9894119B2 (en) | 2014-08-29 | 2018-02-13 | Box, Inc. | Configurable metadata-based automation and content classification architecture for cloud-based collaboration platforms |
US9898477B1 (en) | 2014-12-05 | 2018-02-20 | EMC IP Holding Company LLC | Writing to a site cache in a distributed file system |
US9904435B2 (en) | 2012-01-06 | 2018-02-27 | Box, Inc. | System and method for actionable event generation for task delegation and management via a discussion forum in a web-based collaboration environment |
US9916202B1 (en) * | 2015-03-11 | 2018-03-13 | EMC IP Holding Company LLC | Redirecting host IO's at destination during replication |
US9953036B2 (en) | 2013-01-09 | 2018-04-24 | Box, Inc. | File system monitoring in a system which incrementally updates clients with events that occurred in a cloud-based collaboration platform |
US9959420B2 (en) | 2012-10-02 | 2018-05-01 | Box, Inc. | System and method for enhanced security and management mechanisms for enterprise administrators in a cloud-based environment |
US9965745B2 (en) | 2012-02-24 | 2018-05-08 | Box, Inc. | System and method for promoting enterprise adoption of a web-based collaboration environment |
US9978040B2 (en) | 2011-07-08 | 2018-05-22 | Box, Inc. | Collaboration sessions in a workspace on a cloud-based content management system |
US10021212B1 (en) | 2014-12-05 | 2018-07-10 | EMC IP Holding Company LLC | Distributed file systems on content delivery networks |
US10027757B1 (en) * | 2015-05-26 | 2018-07-17 | Pure Storage, Inc. | Locally providing cloud storage array services |
US10038731B2 (en) | 2014-08-29 | 2018-07-31 | Box, Inc. | Managing flow-based interactions with cloud-based shared content |
US10110656B2 (en) | 2013-06-25 | 2018-10-23 | Box, Inc. | Systems and methods for providing shell communication in a cloud-based platform |
US10200256B2 (en) | 2012-09-17 | 2019-02-05 | Box, Inc. | System and method of a manipulative handle in an interactive mobile user interface |
US10229134B2 (en) | 2013-06-25 | 2019-03-12 | Box, Inc. | Systems and methods for managing upgrades, migration of user data and improving performance of a cloud-based platform |
US10235383B2 (en) | 2012-12-19 | 2019-03-19 | Box, Inc. | Method and apparatus for synchronization of items with read-only permissions in a cloud-based environment |
CN110199277A (en) * | 2017-01-18 | 2019-09-03 | 微软技术许可有限责任公司 | It include metadata in data resource |
US10423507B1 (en) | 2014-12-05 | 2019-09-24 | EMC IP Holding Company LLC | Repairing a site cache in a distributed file system |
US10430385B1 (en) | 2014-12-05 | 2019-10-01 | EMC IP Holding Company LLC | Limited deduplication scope for distributed file systems |
US10445296B1 (en) | 2014-12-05 | 2019-10-15 | EMC IP Holding Company LLC | Reading from a site cache in a distributed file system |
US10452619B1 (en) | 2014-12-05 | 2019-10-22 | EMC IP Holding Company LLC | Decreasing a site cache capacity in a distributed file system |
US10452667B2 (en) | 2012-07-06 | 2019-10-22 | Box, Inc. | Identification of people as search results from key-word based searches of content in a cloud-based environment |
US10474323B2 (en) | 2016-10-25 | 2019-11-12 | Microsoft Technology Licensing Llc | Organizational external sharing of electronic data |
WO2019220173A1 (en) * | 2018-05-16 | 2019-11-21 | Pratik Sharma | Distributed snapshot of rack |
US10509527B2 (en) | 2013-09-13 | 2019-12-17 | Box, Inc. | Systems and methods for configuring event-based automation in cloud-based collaboration platforms |
US10530854B2 (en) | 2014-05-30 | 2020-01-07 | Box, Inc. | Synchronization of permissioned content in cloud-based environments |
US10547621B2 (en) | 2016-11-28 | 2020-01-28 | Microsoft Technology Licensing, LLC | Persistent mutable sharing of electronic content |
US10554426B2 (en) | 2011-01-20 | 2020-02-04 | Box, Inc. | Real time notification of activities that occur in a web-based collaboration environment |
US10574442B2 (en) | 2014-08-29 | 2020-02-25 | Box, Inc. | Enhanced remote key management for an enterprise in a cloud-based environment |
US10599671B2 (en) | 2013-01-17 | 2020-03-24 | Box, Inc. | Conflict resolution, retry condition management, and handling of problem files for the synchronization client to a cloud-based platform |
US10725968B2 (en) | 2013-05-10 | 2020-07-28 | Box, Inc. | Top down delete or unsynchronization on delete of and depiction of item synchronization with a synchronization client to a cloud-based platform |
US10733324B2 (en) | 2016-05-18 | 2020-08-04 | International Business Machines Corporation | Privacy enabled runtime |
US10846074B2 (en) | 2013-05-10 | 2020-11-24 | Box, Inc. | Identification and handling of items to be ignored for synchronization with a cloud-based platform by a synchronization client |
US10866931B2 (en) | 2013-10-22 | 2020-12-15 | Box, Inc. | Desktop application for accessing a cloud collaboration platform |
US10911540B1 (en) * | 2020-03-10 | 2021-02-02 | EMC IP Holding Company LLC | Recovering snapshots from a cloud snapshot lineage on cloud storage to a storage system |
US10915492B2 (en) | 2012-09-19 | 2021-02-09 | Box, Inc. | Cloud-based platform enabled with media content indexed for text-based searches and/or metadata extraction |
US10936494B1 (en) | 2014-12-05 | 2021-03-02 | EMC IP Holding Company LLC | Site cache manager for a distributed file system |
US10951705B1 (en) | 2014-12-05 | 2021-03-16 | EMC IP Holding Company LLC | Write leases for distributed file systems |
US11016943B2 (en) | 2019-03-08 | 2021-05-25 | Netapp, Inc. | Garbage collection for objects within object store |
US11023487B2 (en) | 2013-03-04 | 2021-06-01 | Sap Se | Data replication for cloud based in-memory databases |
US11048590B1 (en) * | 2018-03-15 | 2021-06-29 | Pure Storage, Inc. | Data consistency during recovery in a cloud-based storage system |
US11102298B1 (en) | 2015-05-26 | 2021-08-24 | Pure Storage, Inc. | Locally providing cloud storage services for fleet management |
US11144498B2 (en) | 2019-03-08 | 2021-10-12 | Netapp Inc. | Defragmentation for objects within object store |
US11188500B2 (en) * | 2016-10-28 | 2021-11-30 | Netapp Inc. | Reducing stable data eviction with synthetic baseline snapshot and eviction state refresh |
US11210610B2 (en) | 2011-10-26 | 2021-12-28 | Box, Inc. | Enhanced multimedia content preview rendering in a cloud content management system |
US11232481B2 (en) | 2012-01-30 | 2022-01-25 | Box, Inc. | Extended applications of multimedia content previews in the cloud-based content management system |
US11275603B2 (en) * | 2017-07-01 | 2022-03-15 | Intel Corporation | Technologies for memory replay prevention using compressive encryption |
US11283799B2 (en) | 2018-12-28 | 2022-03-22 | Microsoft Technology Licensing, Llc | Trackable sharable links |
US11416459B2 (en) | 2014-04-11 | 2022-08-16 | Douglas T. Migliori | No-code, event-driven edge computing platform |
US11768803B2 (en) | 2016-10-28 | 2023-09-26 | Netapp, Inc. | Snapshot metadata arrangement for efficient cloud integrated data management |
US11899620B2 (en) | 2019-03-08 | 2024-02-13 | Netapp, Inc. | Metadata attachment to storage objects within object store |
US11940999B2 (en) | 2013-02-08 | 2024-03-26 | Douglas T. Migliori | Metadata-driven computing system |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8775823B2 (en) | 2006-12-29 | 2014-07-08 | Commvault Systems, Inc. | System and method for encrypting secondary copies of data |
US9201610B2 (en) * | 2011-10-14 | 2015-12-01 | Verizon Patent And Licensing Inc. | Cloud-based storage deprovisioning |
GB2496840A (en) * | 2011-11-15 | 2013-05-29 | Ibm | Controlling access to a shared storage system |
US20140019755A1 (en) * | 2012-07-12 | 2014-01-16 | Unisys Corporation | Data storage in cloud computing |
US20140281516A1 (en) | 2013-03-12 | 2014-09-18 | Commvault Systems, Inc. | Automatic file decryption |
US9152578B1 (en) * | 2013-03-12 | 2015-10-06 | Emc Corporation | Securing data replication, backup and mobility in cloud storage |
US9558087B2 (en) * | 2014-06-23 | 2017-01-31 | International Business Machines Corporation | Test virtual volumes for test environments |
US9405928B2 (en) | 2014-09-17 | 2016-08-02 | Commvault Systems, Inc. | Deriving encryption rules based on file content |
US20160275295A1 (en) * | 2015-03-19 | 2016-09-22 | Emc Corporation | Object encryption |
WO2016195714A1 (en) * | 2015-06-05 | 2016-12-08 | Hitachi, Ltd. | Method and apparatus of shared storage between multiple cloud environments |
US10353592B2 (en) | 2015-09-15 | 2019-07-16 | Hitachi, Ltd. | Storage system, computer system, and control method for storage system |
US10572347B2 (en) | 2015-09-23 | 2020-02-25 | International Business Machines Corporation | Efficient management of point in time copies of data in object storage by sending the point in time copies, and a directive for manipulating the point in time copies, to the object storage |
US10423584B2 (en) * | 2015-11-23 | 2019-09-24 | Netapp Inc. | Synchronous replication for file access protocol storage |
US10346248B2 (en) | 2016-06-23 | 2019-07-09 | Red Hat Israel, Ltd. | Failure resistant volume creation in a shared storage environment |
US10277528B2 (en) * | 2016-08-11 | 2019-04-30 | Fujitsu Limited | Resource management for distributed software defined network controller |
US10884984B2 (en) | 2017-01-06 | 2021-01-05 | Oracle International Corporation | Low-latency direct cloud access with file system hierarchies and semantics |
US10366014B1 (en) * | 2017-04-20 | 2019-07-30 | EMC IP Holding Company LLC | Fast snap copy |
TWI735585B (en) * | 2017-05-26 | 2021-08-11 | 瑞昱半導體股份有限公司 | Data management circuit with network function and network-based data management method |
US10635642B1 (en) * | 2019-05-09 | 2020-04-28 | Capital One Services, Llc | Multi-cloud bi-directional storage replication system and techniques |
US11630736B2 (en) * | 2020-03-10 | 2023-04-18 | EMC IP Holding Company LLC | Recovering a storage volume associated with a snapshot lineage from cloud storage |
US11669494B2 (en) * | 2020-05-22 | 2023-06-06 | EMC IP Holding Company LLC | Scaling out data protection infrastructure |
CN112765684B (en) * | 2021-04-12 | 2021-07-30 | 腾讯科技(深圳)有限公司 | Block chain node terminal management method, device, equipment and storage medium |
US11907163B1 (en) | 2023-01-05 | 2024-02-20 | Dell Products L.P. | Cloud snapshot lineage mobility between virtualization software running on different storage systems |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100199042A1 (en) * | 2009-01-30 | 2010-08-05 | Twinstrata, Inc | System and method for secure and reliable multi-cloud data replication |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6532479B2 (en) | 1998-05-28 | 2003-03-11 | Oracle Corp. | Data replication for front office automation |
JP3868708B2 (en) | 2000-04-19 | 2007-01-17 | 株式会社日立製作所 | Snapshot management method and computer system |
US7043637B2 (en) * | 2001-03-21 | 2006-05-09 | Microsoft Corporation | On-disk file format for a serverless distributed file system |
US7475098B2 (en) | 2002-03-19 | 2009-01-06 | Network Appliance, Inc. | System and method for managing a plurality of snapshots |
US7467167B2 (en) * | 2002-03-19 | 2008-12-16 | Network Appliance, Inc. | System and method for coalescing a plurality of snapshots |
US6993539B2 (en) | 2002-03-19 | 2006-01-31 | Network Appliance, Inc. | System and method for determining changes in two snapshots and for transmitting changes to destination snapshot |
US8010498B2 (en) * | 2005-04-08 | 2011-08-30 | Microsoft Corporation | Virtually infinite reliable storage across multiple storage devices and storage services |
ATE504878T1 (en) * | 2005-10-12 | 2011-04-15 | Datacastle Corp | DATA BACKUP METHOD AND SYSTEM |
US8117164B2 (en) | 2007-12-19 | 2012-02-14 | Microsoft Corporation | Creating and utilizing network restore points |
US8370302B2 (en) * | 2009-06-02 | 2013-02-05 | Hitachi, Ltd. | Method and apparatus for block based volume backup |
US8321688B2 (en) * | 2009-06-12 | 2012-11-27 | Microsoft Corporation | Secure and private backup storage and processing for trusted computing and data services |
US8849955B2 (en) * | 2009-06-30 | 2014-09-30 | Commvault Systems, Inc. | Cloud storage and networking agents, including agents for utilizing multiple, different cloud storage sites |
US8452932B2 (en) * | 2010-01-06 | 2013-05-28 | Storsimple, Inc. | System and method for efficiently creating off-site data volume back-ups |
- 2011
  - 2011-04-14 US US13/086,794 patent/US20110258461A1/en not_active Abandoned
  - 2011-04-15 WO PCT/US2011/032615 patent/WO2011130588A2/en active Application Filing
- 2014
  - 2014-05-05 US US14/269,758 patent/US9836244B2/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100199042A1 (en) * | 2009-01-30 | 2010-08-05 | Twinstrata, Inc | System and method for secure and reliable multi-cloud data replication |
Cited By (175)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9519526B2 (en) | 2007-12-05 | 2016-12-13 | Box, Inc. | File management system and collaboration service and integration capabilities with third party applications |
US8613108B1 (en) * | 2009-03-26 | 2013-12-17 | Adobe Systems Incorporated | Method and apparatus for location-based digital rights management |
US8688935B1 (en) * | 2010-01-19 | 2014-04-01 | Infinidat Ltd | Storage system and method for snapshot space management |
US9367569B1 (en) * | 2010-06-30 | 2016-06-14 | Emc Corporation | Recovery of directory information |
US8321487B1 (en) * | 2010-06-30 | 2012-11-27 | Emc Corporation | Recovery of directory information |
US10917391B2 (en) | 2010-10-29 | 2021-02-09 | Proximic, LLC | Method for transmitting information from a first information provider to a second information provider via an information intermediary |
US20130238890A1 (en) * | 2010-10-29 | 2013-09-12 | Proximic, Inc. | Method for transmitting information from a first information provider to a second information provider via an information intermediary |
US9998429B2 (en) * | 2010-10-29 | 2018-06-12 | Proximic, Llc. | Method for transmitting information from a first information provider to a second information provider via an information intermediary |
US10341308B2 (en) | 2010-10-29 | 2019-07-02 | Proximic, Llc. | Method for transmitting information from a first information provider to a second information provider via an information intermediary |
US10554426B2 (en) | 2011-01-20 | 2020-02-04 | Box, Inc. | Real time notification of activities that occur in a web-based collaboration environment |
US9141683B1 (en) | 2011-03-24 | 2015-09-22 | Amazon Technologies, Inc. | Distributed computer system snapshot instantiation with variable depth |
US20130145430A1 (en) * | 2011-06-05 | 2013-06-06 | Apple Inc. | Asset streaming |
US8943555B2 (en) * | 2011-06-05 | 2015-01-27 | Apple Inc. | Asset streaming |
US9118642B2 (en) | 2011-06-05 | 2015-08-25 | Apple Inc. | Asset streaming |
US9015601B2 (en) | 2011-06-21 | 2015-04-21 | Box, Inc. | Batch uploading of content to a web-based collaboration environment |
US9063912B2 (en) | 2011-06-22 | 2015-06-23 | Box, Inc. | Multimedia content preview rendering in a cloud content management system |
US9652741B2 (en) | 2011-07-08 | 2017-05-16 | Box, Inc. | Desktop application for access and interaction with workspaces in a cloud-based content management system and synchronization mechanisms thereof |
US9978040B2 (en) | 2011-07-08 | 2018-05-22 | Box, Inc. | Collaboration sessions in a workspace on a cloud-based content management system |
US8577842B1 (en) * | 2011-09-19 | 2013-11-05 | Amazon Technologies, Inc. | Distributed computer system snapshots and instantiation thereof |
US9197718B2 (en) | 2011-09-23 | 2015-11-24 | Box, Inc. | Central management and control of user-contributed content in a web-based collaboration environment and management console thereof |
US8990151B2 (en) | 2011-10-14 | 2015-03-24 | Box, Inc. | Automatic and semi-automatic tagging features of work items in a shared workspace for metadata tracking in a cloud-based content management system with selective or optional user contribution |
US11210610B2 (en) | 2011-10-26 | 2021-12-28 | Box, Inc. | Enhanced multimedia content preview rendering in a cloud content management system |
US9098474B2 (en) | 2011-10-26 | 2015-08-04 | Box, Inc. | Preview pre-generation based on heuristics and algorithmic prediction/assessment of predicted user behavior for enhancement of user experience |
US8990307B2 (en) | 2011-11-16 | 2015-03-24 | Box, Inc. | Resource effective incremental updating of a remote client with events which occurred via a cloud-enabled platform |
US9015248B2 (en) | 2011-11-16 | 2015-04-21 | Box, Inc. | Managing updates at clients used by a user to access a cloud-based collaboration service |
US11853320B2 (en) | 2011-11-29 | 2023-12-26 | Box, Inc. | Mobile platform file and folder selection functionalities for offline access and synchronization |
US11537630B2 (en) | 2011-11-29 | 2022-12-27 | Box, Inc. | Mobile platform file and folder selection functionalities for offline access and synchronization |
US9773051B2 (en) | 2011-11-29 | 2017-09-26 | Box, Inc. | Mobile platform file and folder selection functionalities for offline access and synchronization |
US10909141B2 (en) | 2011-11-29 | 2021-02-02 | Box, Inc. | Mobile platform file and folder selection functionalities for offline access and synchronization |
US9019123B2 (en) | 2011-12-22 | 2015-04-28 | Box, Inc. | Health check services for web-based collaboration environments |
US9904435B2 (en) | 2012-01-06 | 2018-02-27 | Box, Inc. | System and method for actionable event generation for task delegation and management via a discussion forum in a web-based collaboration environment |
US11232481B2 (en) | 2012-01-30 | 2022-01-25 | Box, Inc. | Extended applications of multimedia content previews in the cloud-based content management system |
US10382287B2 (en) * | 2012-02-23 | 2019-08-13 | Ajay JADHAV | Persistent node framework |
US20150033135A1 (en) * | 2012-02-23 | 2015-01-29 | Ajay JADHAV | Persistent node framework |
US10713624B2 (en) | 2012-02-24 | 2020-07-14 | Box, Inc. | System and method for promoting enterprise adoption of a web-based collaboration environment |
US9965745B2 (en) | 2012-02-24 | 2018-05-08 | Box, Inc. | System and method for promoting enterprise adoption of a web-based collaboration environment |
US9195636B2 (en) | 2012-03-07 | 2015-11-24 | Box, Inc. | Universal file type preview for mobile devices |
US9054919B2 (en) | 2012-04-05 | 2015-06-09 | Box, Inc. | Device pinning capability for enterprise cloud service and storage accounts |
US9575981B2 (en) | 2012-04-11 | 2017-02-21 | Box, Inc. | Cloud service enabled to handle a set of files depicted to a user as a single file in a native operating system |
US9038059B2 (en) | 2012-04-18 | 2015-05-19 | International Business Machines Corporation | Automatically targeting application modules to individual machines and application framework runtimes instances |
US8812612B1 (en) * | 2012-04-20 | 2014-08-19 | Emc Corporation | Versioned coalescer |
US9413587B2 (en) | 2012-05-02 | 2016-08-09 | Box, Inc. | System and method for a third-party application to access content within a cloud-based platform |
US9396216B2 (en) | 2012-05-04 | 2016-07-19 | Box, Inc. | Repository redundancy implementation of a system which incrementally updates clients with events that occurred via a cloud-enabled platform |
US9691051B2 (en) | 2012-05-21 | 2017-06-27 | Box, Inc. | Security enhancement through application access control |
US9280613B2 (en) * | 2012-05-23 | 2016-03-08 | Box, Inc. | Metadata enabled third-party application access of content at a cloud-based platform via a native client to the cloud-based platform |
US9027108B2 (en) | 2012-05-23 | 2015-05-05 | Box, Inc. | Systems and methods for secure file portability between mobile applications on a mobile device |
US20130318125A1 (en) * | 2012-05-23 | 2013-11-28 | Box, Inc. | Metadata enabled third-party application access of content at a cloud-based platform via a native client to the cloud-based platform |
US9552444B2 (en) | 2012-05-23 | 2017-01-24 | Box, Inc. | Identification verification mechanisms for a third-party application to access content in a cloud-based platform |
US8914900B2 (en) | 2012-05-23 | 2014-12-16 | Box, Inc. | Methods, architectures and security mechanisms for a third-party application to access content in a cloud-based platform |
US20140208399A1 (en) * | 2012-06-22 | 2014-07-24 | Frank J. Ponzio, Jr. | Method and system for accessing a computing resource |
US9189495B1 (en) | 2012-06-28 | 2015-11-17 | Emc Corporation | Replication and restoration |
US9223500B1 (en) | 2012-06-29 | 2015-12-29 | Emc Corporation | File clones in a distributed file system |
US9021099B2 (en) | 2012-07-03 | 2015-04-28 | Box, Inc. | Load balancing secure FTP connections among multiple FTP servers |
US9712510B2 (en) | 2012-07-06 | 2017-07-18 | Box, Inc. | Systems and methods for securely submitting comments among users via external messaging applications in a cloud-based platform |
US9792320B2 (en) | 2012-07-06 | 2017-10-17 | Box, Inc. | System and method for performing shard migration to support functions of a cloud-based service |
US10452667B2 (en) | 2012-07-06 | 2019-10-22 | Box, Inc. | Identification of people as search results from key-word based searches of content in a cloud-based environment |
US9237170B2 (en) | 2012-07-19 | 2016-01-12 | Box, Inc. | Data loss prevention (DLP) methods and architectures by a cloud service |
US9473532B2 (en) | 2012-07-19 | 2016-10-18 | Box, Inc. | Data loss prevention (DLP) methods by a cloud service including third party integration architectures |
US9794256B2 (en) | 2012-07-30 | 2017-10-17 | Box, Inc. | System and method for advanced control tools for administrators in a cloud-based service |
US8868574B2 (en) | 2012-07-30 | 2014-10-21 | Box, Inc. | System and method for advanced search and filtering mechanisms for enterprise administrators in a cloud-based environment |
US9729675B2 (en) | 2012-08-19 | 2017-08-08 | Box, Inc. | Enhancement of upload and/or download performance based on client and/or server feedback information |
US9369520B2 (en) | 2012-08-19 | 2016-06-14 | Box, Inc. | Enhancement of upload and/or download performance based on client and/or server feedback information |
US9558202B2 (en) | 2012-08-27 | 2017-01-31 | Box, Inc. | Server side techniques for reducing database workload in implementing selective subfolder synchronization in a cloud-based environment |
US9450926B2 (en) | 2012-08-29 | 2016-09-20 | Box, Inc. | Upload and download streaming encryption to/from a cloud-based platform |
US9135462B2 (en) | 2012-08-29 | 2015-09-15 | Box, Inc. | Upload and download streaming encryption to/from a cloud-based platform |
US9571564B2 (en) | 2012-08-31 | 2017-02-14 | Hewlett Packard Enterprise Development Lp | Network system for implementing a cloud platform |
US8935764B2 (en) | 2012-08-31 | 2015-01-13 | Hewlett-Packard Development Company, L.P. | Network system for implementing a cloud platform |
US9117087B2 (en) | 2012-09-06 | 2015-08-25 | Box, Inc. | System and method for creating a secure channel for inter-application communication based on intents |
US9311071B2 (en) | 2012-09-06 | 2016-04-12 | Box, Inc. | Force upgrade of a mobile application via a server side configuration file |
US9195519B2 (en) | 2012-09-06 | 2015-11-24 | Box, Inc. | Disabling the self-referential appearance of a mobile application in an intent via a background registration |
US9292833B2 (en) | 2012-09-14 | 2016-03-22 | Box, Inc. | Batching notifications of activities that occur in a web-based collaboration environment |
US10200256B2 (en) | 2012-09-17 | 2019-02-05 | Box, Inc. | System and method of a manipulative handle in an interactive mobile user interface |
US9553758B2 (en) | 2012-09-18 | 2017-01-24 | Box, Inc. | Sandboxing individual applications to specific user folders in a cloud-based service |
US10915492B2 (en) | 2012-09-19 | 2021-02-09 | Box, Inc. | Cloud-based platform enabled with media content indexed for text-based searches and/or metadata extraction |
US9141637B2 (en) | 2012-09-26 | 2015-09-22 | International Business Machines Corporation | Predictive data management in a networked computing environment |
US9959420B2 (en) | 2012-10-02 | 2018-05-01 | Box, Inc. | System and method for enhanced security and management mechanisms for enterprise administrators in a cloud-based environment |
US9495364B2 (en) | 2012-10-04 | 2016-11-15 | Box, Inc. | Enhanced quick search features, low-barrier commenting/interactive features in a collaboration platform |
US9705967B2 (en) | 2012-10-04 | 2017-07-11 | Box, Inc. | Corporate user discovery and identification of recommended collaborators in a cloud platform |
US9665349B2 (en) | 2012-10-05 | 2017-05-30 | Box, Inc. | System and method for generating embeddable widgets which enable access to a cloud-based collaboration platform |
US9628268B2 (en) | 2012-10-17 | 2017-04-18 | Box, Inc. | Remote key management in a cloud-based environment |
US10235383B2 (en) | 2012-12-19 | 2019-03-19 | Box, Inc. | Method and apparatus for synchronization of items with read-only permissions in a cloud-based environment |
US9396245B2 (en) | 2013-01-02 | 2016-07-19 | Box, Inc. | Race condition handling in a system which incrementally updates clients with events that occurred in a cloud-based collaboration platform |
US9953036B2 (en) | 2013-01-09 | 2018-04-24 | Box, Inc. | File system monitoring in a system which incrementally updates clients with events that occurred in a cloud-based collaboration platform |
US9507795B2 (en) | 2013-01-11 | 2016-11-29 | Box, Inc. | Functionalities, features, and user interface of a synchronization client to a cloud-based environment |
US10599671B2 (en) | 2013-01-17 | 2020-03-24 | Box, Inc. | Conflict resolution, retry condition management, and handling of problem files for the synchronization client to a cloud-based platform |
US11940999B2 (en) | 2013-02-08 | 2024-03-26 | Douglas T. Migliori | Metadata-driven computing system |
US10838955B2 (en) | 2013-02-08 | 2020-11-17 | Douglas T. Migliori | Systems and methods for metadata-driven command processor and structured program transfer protocol |
US10223412B2 (en) | 2013-02-08 | 2019-03-05 | Douglas T. Migliori | Systems and methods for metadata-driven command processor and structured program transfer protocol |
US9336013B2 (en) | 2013-02-08 | 2016-05-10 | Automatic Data Capture Technologies Group, Inc. | Systems and methods for metadata-driven command processor and structured program transfer protocol |
US9015465B2 (en) * | 2013-02-08 | 2015-04-21 | Automatic Data Capture Technologies Group, Inc. | Systems and methods for metadata-driven command processor and structured program transfer protocol |
US11023487B2 (en) | 2013-03-04 | 2021-06-01 | Sap Se | Data replication for cloud based in-memory databases |
US10725968B2 (en) | 2013-05-10 | 2020-07-28 | Box, Inc. | Top down delete or unsynchronization on delete of and depiction of item synchronization with a synchronization client to a cloud-based platform |
US10846074B2 (en) | 2013-05-10 | 2020-11-24 | Box, Inc. | Identification and handling of items to be ignored for synchronization with a cloud-based platform by a synchronization client |
US10877937B2 (en) | 2013-06-13 | 2020-12-29 | Box, Inc. | Systems and methods for synchronization event building and/or collapsing by a synchronization component of a cloud-based platform |
US9633037B2 (en) | 2013-06-13 | 2017-04-25 | Box, Inc. | Systems and methods for synchronization event building and/or collapsing by a synchronization component of a cloud-based platform |
US9805050B2 (en) | 2013-06-21 | 2017-10-31 | Box, Inc. | Maintaining and updating file system shadows on a local device by a synchronization client of a cloud-based platform |
US11531648B2 (en) | 2013-06-21 | 2022-12-20 | Box, Inc. | Maintaining and updating file system shadows on a local device by a synchronization client of a cloud-based platform |
US10229134B2 (en) | 2013-06-25 | 2019-03-12 | Box, Inc. | Systems and methods for managing upgrades, migration of user data and improving performance of a cloud-based platform |
US10110656B2 (en) | 2013-06-25 | 2018-10-23 | Box, Inc. | Systems and methods for providing shell communication in a cloud-based platform |
CN103533023A (en) * | 2013-07-25 | 2014-01-22 | 上海和辰信息技术有限公司 | Cloud service application cluster synchronization system and synchronization method based on cloud service characteristics |
CN103607418A (en) * | 2013-07-25 | 2014-02-26 | 上海和辰信息技术有限公司 | Large-scale data partitioning system and partitioning method based on cloud service data characteristics |
US9535924B2 (en) | 2013-07-30 | 2017-01-03 | Box, Inc. | Scalability improvement in a system which incrementally updates clients with events that occurred in a cloud-based collaboration platform |
US9436693B1 (en) | 2013-08-01 | 2016-09-06 | Emc Corporation | Dynamic network access of snapshotted versions of a clustered file system |
US11822759B2 (en) | 2013-09-13 | 2023-11-21 | Box, Inc. | System and methods for configuring event-based automation in cloud-based collaboration platforms |
US10044773B2 (en) | 2013-09-13 | 2018-08-07 | Box, Inc. | System and method of a multi-functional managing user interface for accessing a cloud-based platform via mobile devices |
US8892679B1 (en) | 2013-09-13 | 2014-11-18 | Box, Inc. | Mobile device, methods and user interfaces thereof in a mobile device platform featuring multifunctional access and engagement in a collaborative environment provided by a cloud-based platform |
US9519886B2 (en) | 2013-09-13 | 2016-12-13 | Box, Inc. | Simultaneous editing/accessing of content by collaborator invitation through a web-based or mobile application to a cloud-based collaboration platform |
US9535909B2 (en) | 2013-09-13 | 2017-01-03 | Box, Inc. | Configurable event-based automation architecture for cloud-based collaboration platforms |
US10509527B2 (en) | 2013-09-13 | 2019-12-17 | Box, Inc. | Systems and methods for configuring event-based automation in cloud-based collaboration platforms |
US9483473B2 (en) | 2013-09-13 | 2016-11-01 | Box, Inc. | High availability architecture for a cloud-based concurrent-access collaboration platform |
US9704137B2 (en) | 2013-09-13 | 2017-07-11 | Box, Inc. | Simultaneous editing/accessing of content by collaborator invitation through a web-based or mobile application to a cloud-based collaboration platform |
US11435865B2 (en) | 2013-09-13 | 2022-09-06 | Box, Inc. | System and methods for configuring event-based automation in cloud-based collaboration platforms |
US9213684B2 (en) | 2013-09-13 | 2015-12-15 | Box, Inc. | System and method for rendering document in web browser or mobile device regardless of third-party plug-in software |
CN105659563A (en) * | 2013-10-18 | 2016-06-08 | 思科技术公司 | System and method for software defined network aware data replication |
US10866931B2 (en) | 2013-10-22 | 2020-12-15 | Box, Inc. | Desktop application for accessing a cloud collaboration platform |
US20150135004A1 (en) * | 2013-11-11 | 2015-05-14 | Fujitsu Limited | Data allocation method and information processing system |
US9652181B2 (en) | 2014-01-07 | 2017-05-16 | International Business Machines Corporation | Library apparatus including a cartridge memory (CM) database stored on a storage cloud |
US10101949B2 (en) | 2014-01-07 | 2018-10-16 | International Business Machines Corporation | Library apparatus including a cartridge memory (CM) database stored on a storage cloud |
US20150263980A1 (en) * | 2014-03-14 | 2015-09-17 | Rohini Kumar KASTURI | Method and apparatus for rapid instance deployment on a cloud using a multi-cloud controller |
US10291476B1 (en) | 2014-03-14 | 2019-05-14 | Veritas Technologies Llc | Method and apparatus for automatically deploying applications in a multi-cloud networking system |
US20150263979A1 (en) * | 2014-03-14 | 2015-09-17 | Avni Networks Inc. | Method and apparatus for a highly scalable, multi-cloud service deployment, orchestration and delivery |
US9680708B2 (en) * | 2014-03-14 | 2017-06-13 | Veritas Technologies | Method and apparatus for cloud resource delivery |
US11416459B2 (en) | 2014-04-11 | 2022-08-16 | Douglas T. Migliori | No-code, event-driven edge computing platform |
US10530854B2 (en) | 2014-05-30 | 2020-01-07 | Box, Inc. | Synchronization of permissioned content in cloud-based environments |
US9602514B2 (en) | 2014-06-16 | 2017-03-21 | Box, Inc. | Enterprise mobility management and verification of a managed application by a content provider |
US10574442B2 (en) | 2014-08-29 | 2020-02-25 | Box, Inc. | Enhanced remote key management for an enterprise in a cloud-based environment |
US11146600B2 (en) | 2014-08-29 | 2021-10-12 | Box, Inc. | Configurable metadata-based automation and content classification architecture for cloud-based collaboration platforms |
US9756022B2 (en) | 2014-08-29 | 2017-09-05 | Box, Inc. | Enhanced remote key management for an enterprise in a cloud-based environment |
US10708321B2 (en) | 2014-08-29 | 2020-07-07 | Box, Inc. | Configurable metadata-based automation and content classification architecture for cloud-based collaboration platforms |
US10708323B2 (en) | 2014-08-29 | 2020-07-07 | Box, Inc. | Managing flow-based interactions with cloud-based shared content |
US9894119B2 (en) | 2014-08-29 | 2018-02-13 | Box, Inc. | Configurable metadata-based automation and content classification architecture for cloud-based collaboration platforms |
US10038731B2 (en) | 2014-08-29 | 2018-07-31 | Box, Inc. | Managing flow-based interactions with cloud-based shared content |
US11876845B2 (en) | 2014-08-29 | 2024-01-16 | Box, Inc. | Configurable metadata-based automation and content classification architecture for cloud-based collaboration platforms |
US9979784B2 (en) * | 2014-09-03 | 2018-05-22 | Huizhou Tcl Mobile Communication Co., Ltd. | Method for cloud data backup and recovery |
US20160212207A1 (en) * | 2014-09-03 | 2016-07-21 | Huizhou Tcl Mobile Communication Co., Ltd. | Method for cloud data backup and recovery |
US9794331B1 (en) * | 2014-09-29 | 2017-10-17 | Amazon Technologies, Inc. | Block allocation based on server utilization |
US10469571B2 (en) | 2014-09-29 | 2019-11-05 | Amazon Technologies, Inc. | Block allocation based on server utilization |
US10423507B1 (en) | 2014-12-05 | 2019-09-24 | EMC IP Holding Company LLC | Repairing a site cache in a distributed file system |
US10430385B1 (en) | 2014-12-05 | 2019-10-01 | EMC IP Holding Company LLC | Limited deduplication scope for distributed file systems |
US10353873B2 (en) | 2014-12-05 | 2019-07-16 | EMC IP Holding Company LLC | Distributed file systems on content delivery networks |
US10417194B1 (en) * | 2014-12-05 | 2019-09-17 | EMC IP Holding Company LLC | Site cache for a distributed file system |
US10795866B2 (en) | 2014-12-05 | 2020-10-06 | EMC IP Holding Company LLC | Distributed file systems on content delivery networks |
US11221993B2 (en) | 2014-12-05 | 2022-01-11 | EMC IP Holding Company LLC | Limited deduplication scope for distributed file systems |
US10445296B1 (en) | 2014-12-05 | 2019-10-15 | EMC IP Holding Company LLC | Reading from a site cache in a distributed file system |
US9898477B1 (en) | 2014-12-05 | 2018-02-20 | EMC IP Holding Company LLC | Writing to a site cache in a distributed file system |
US10936494B1 (en) | 2014-12-05 | 2021-03-02 | EMC IP Holding Company LLC | Site cache manager for a distributed file system |
US10951705B1 (en) | 2014-12-05 | 2021-03-16 | EMC IP Holding Company LLC | Write leases for distributed file systems |
US10021212B1 (en) | 2014-12-05 | 2018-07-10 | EMC IP Holding Company LLC | Distributed file systems on content delivery networks |
US10452619B1 (en) | 2014-12-05 | 2019-10-22 | EMC IP Holding Company LLC | Decreasing a site cache capacity in a distributed file system |
US9916202B1 (en) * | 2015-03-11 | 2018-03-13 | EMC IP Holding Company LLC | Redirecting host IO's at destination during replication |
US11102298B1 (en) | 2015-05-26 | 2021-08-24 | Pure Storage, Inc. | Locally providing cloud storage services for fleet management |
US11711426B2 (en) | 2015-05-26 | 2023-07-25 | Pure Storage, Inc. | Providing storage resources from a storage pool |
US10027757B1 (en) * | 2015-05-26 | 2018-07-17 | Pure Storage, Inc. | Locally providing cloud storage array services |
US10652331B1 (en) | 2015-05-26 | 2020-05-12 | Pure Storage, Inc. | Locally providing highly available cloud-based storage system services |
US10733324B2 (en) | 2016-05-18 | 2020-08-04 | International Business Machines Corporation | Privacy enabled runtime |
US20170337382A1 (en) * | 2016-05-18 | 2017-11-23 | International Business Machines Corporation | Privacy enabled runtime |
US10769285B2 (en) * | 2016-05-18 | 2020-09-08 | International Business Machines Corporation | Privacy enabled runtime |
US10474323B2 (en) | 2016-10-25 | 2019-11-12 | Microsoft Technology Licensing Llc | Organizational external sharing of electronic data |
US11188500B2 (en) * | 2016-10-28 | 2021-11-30 | Netapp Inc. | Reducing stable data eviction with synthetic baseline snapshot and eviction state refresh |
US11768803B2 (en) | 2016-10-28 | 2023-09-26 | Netapp, Inc. | Snapshot metadata arrangement for efficient cloud integrated data management |
US10547621B2 (en) | 2016-11-28 | 2020-01-28 | Microsoft Technology Licensing, LLC | Persistent mutable sharing of electronic content |
CN110199277A (en) * | 2017-01-18 | 2019-09-03 | Microsoft Technology Licensing, LLC | Including metadata in data resources |
US11675666B2 (en) | 2017-01-18 | 2023-06-13 | Microsoft Technology Licensing, Llc | Including metadata in data resources |
US11275603B2 (en) * | 2017-07-01 | 2022-03-15 | Intel Corporation | Technologies for memory replay prevention using compressive encryption |
US11775332B2 (en) | 2017-07-01 | 2023-10-03 | Intel Corporation | Technologies for memory replay prevention using compressive encryption |
US11698837B2 (en) | 2018-03-15 | 2023-07-11 | Pure Storage, Inc. | Consistent recovery of a dataset |
US11048590B1 (en) * | 2018-03-15 | 2021-06-29 | Pure Storage, Inc. | Data consistency during recovery in a cloud-based storage system |
WO2019220173A1 (en) * | 2018-05-16 | 2019-11-21 | Pratik Sharma | Distributed snapshot of rack |
US11283799B2 (en) | 2018-12-28 | 2022-03-22 | Microsoft Technology Licensing, Llc | Trackable sharable links |
US11630807B2 (en) | 2019-03-08 | 2023-04-18 | Netapp, Inc. | Garbage collection for objects within object store |
US11144498B2 (en) | 2019-03-08 | 2021-10-12 | Netapp Inc. | Defragmentation for objects within object store |
US11797477B2 (en) | 2019-03-08 | 2023-10-24 | Netapp, Inc. | Defragmentation for objects within object store |
US11016943B2 (en) | 2019-03-08 | 2021-05-25 | Netapp, Inc. | Garbage collection for objects within object store |
US11899620B2 (en) | 2019-03-08 | 2024-02-13 | Netapp, Inc. | Metadata attachment to storage objects within object store |
US10911540B1 (en) * | 2020-03-10 | 2021-02-02 | EMC IP Holding Company LLC | Recovering snapshots from a cloud snapshot lineage on cloud storage to a storage system |
Also Published As
Publication number | Publication date |
---|---|
US20140245026A1 (en) | 2014-08-28 |
WO2011130588A2 (en) | 2011-10-20 |
WO2011130588A3 (en) | 2012-02-16 |
US9836244B2 (en) | 2017-12-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9836244B2 (en) | System and method for resource sharing across multi-cloud arrays | |
US10587692B2 (en) | Service and APIs for remote volume-based block storage | |
US20230281088A1 (en) | Orchestrator for orchestrating operations between a computing environment hosting virtual machines and a storage environment | |
US8438136B2 (en) | Backup catalog recovery from replicated data | |
US11868213B2 (en) | Incremental backup to object store | |
US7139808B2 (en) | Method and apparatus for bandwidth-efficient and storage-efficient backups | |
US8762642B2 (en) | System and method for secure and reliable multi-cloud data replication | |
US7752492B1 (en) | Responding to a failure of a storage system | |
US8229897B2 (en) | Restoring a file to its proper storage tier in an information lifecycle management environment | |
US10289694B1 (en) | Method and system for restoring encrypted files from a virtual machine image | |
US8433863B1 (en) | Hybrid method for incremental backup of structured and unstructured files | |
US9582213B2 (en) | Object store architecture for distributed data processing system | |
US8205049B1 (en) | Transmitting file system access requests to multiple file systems | |
US8010543B1 (en) | Protecting a file system on an object addressable storage system | |
US20230252042A1 (en) | Search and analytics for storage systems | |
US11675503B1 (en) | Role-based data access | |
US20220179985A1 (en) | User entitlement management system | |
Kulkarni et al. | Cloud computing-storage as service | |
US8095804B1 (en) | Storing deleted data in a file system snapshot | |
US8850126B2 (en) | Exclusive access during a critical sub-operation to enable simultaneous operations | |
JP2017531892A (en) | Improved apparatus and method for performing a snapshot of a block level storage device | |
US11334456B1 (en) | Space efficient data protection | |
US9183208B1 (en) | Fileshot management | |
US11029855B1 (en) | Containerized storage stream microservice | |
US11068354B1 (en) | Snapshot backups of cluster databases |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
AS | Assignment |
Owner name: EMC CORPORATION, MASSACHUSETTS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TWINSTRATA, INC.;REEL/FRAME:033412/0797 Effective date: 20140707 |