WO2016122603A1 - Dynamically inheritable attribute - Google Patents

Dynamically inheritable attribute

Info

Publication number
WO2016122603A1
Authority
WO
WIPO (PCT)
Prior art keywords
root
dynamically inheritable
inheritable attribute
dynamically
inheritance
Prior art date
Application number
PCT/US2015/013797
Other languages
French (fr)
Inventor
Boris Zuckerman
Oskar Y. BATUNER
Manny Ye
Original Assignee
Hewlett Packard Enterprise Development Lp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Enterprise Development Lp filed Critical Hewlett Packard Enterprise Development Lp
Priority to PCT/US2015/013797 priority Critical patent/WO2016122603A1/en
Publication of WO2016122603A1 publication Critical patent/WO2016122603A1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/11 File system administration, e.g. details of archiving or snapshots
    • G06F16/122 File system administration, e.g. details of archiving or snapshots, using management policies
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/11 File system administration, e.g. details of archiving or snapshots
    • G06F16/119 Details of migration of file systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/18 File system types
    • G06F16/185 Hierarchical storage management [HSM] systems, e.g. file migration or policies thereof

Definitions

  • Data stored in a storage system can be organized into files and directories of a file system.
  • a large storage system typically has a large number of computer nodes.
  • information associated with the file system can be distributed across the computer nodes.
  • Performing certain operations in a distributed file system can be complex and can result in inefficiency if not performed properly.
  • Fig. 1 is a schematic of an example of a highly distributed segmented parallel file system;
  • Fig. 2 is a schematic of an example of propagating an inheritable snapshot time mark (STM);
  • Fig. 3 is an example of a method for handling dynamically inheritable attributes
  • Fig. 4 is an example of a method for propagating dynamically inheritable attributes
  • Fig. 5 is an example of a method for finding and validating a root of inheritance for dynamically inheritable attributes in a name space tree
  • Fig. 6 is a schematic of an example of propagation of roots of inheritance to different levels of a name space tree
  • Fig. 7 is an example of a server that may allow roots of inheritance to be moved up or down a hierarchical name space tree.
  • Dynamic inheritance allows for efficient setting of various properties in the environment of a highly scalable distributed name space. Dynamic inheritance is widely used for a variety of purposes, including propagating snapshot marks, snapshot restore information, anti-virus checking policies, and placement and tiering rules, among others.
  • the mechanism generally assumes a low rate of such modifications, and often uses a single defined root of inheritance, such as the root of a file system, as a coordinating node.
  • as the number of changes in properties, file location, and the like increases, the number of updates to the root of inheritance also increases. This can create undesirable contention, e.g., loading on the server controlling the root of inheritance, and reduce productivity due to the number of systems trying to access the root directory substantially simultaneously.
  • the overlapping access attempts may be especially problematic for very large multi-tenant name spaces with a large number of independent modifications of the dynamically inheritable attributes.
  • Fig. 1 is a schematic 100 of an example of a highly distributed segmented parallel file system.
  • the core component of the distributed segmented parallel file system (FS) 102 is a storage segment 104.
  • the FS 102 may include a plurality, such as thousands, of storage segments 104, many of which may be in the terabyte or petabyte size.
  • the individual storage segments 104 are controlled by corresponding storage servers 106. However, for load balancing purposes or due to component failures or maintenance reasons, the control over storage segments 104 can migrate from one server 106 to another.
  • Storage servers 106 can be connected to the storage segments 104 directly, for example, through a direct attached storage (DAS) model or through various interconnect technologies 108 such as Fibre Channel (FC), serial attached SCSI (SAS), internet protocol SCSI (iSCSI), and the like.
  • the FS 102 may also include client servers 110 that may not control storage segments 104.
  • the client servers 110 can be used to run applications or provide access to the FS 102 through other protocols 112 such as network file system (NFS), server message block (SMB), HTTP, FTP, and the like.
  • participating nodes such as storage servers 106 and client servers 1 10, exchange messages over Ethernet or other networks.
  • a higher degree of parallelism may be achieved by widely distributing individual elements throughout the storage segments 104.
  • individual elements are controlled by different storage servers 106.
  • Fig. 1 illustrates how individual elements 114 of a file path 116, e.g., /Dir1/Dir2/Dir3/My_file, may be placed on five different storage segments 104 and controlled by three different storage servers 106.
  • servers 106 or 110, which perform an FS service for applications or for SMB, NFS, FTP, HTTP and other servers, may be termed entry servers (ES).
  • the storage servers 106 which control segments can play both roles. They could be an ES for FS-level requests originated locally and a DS for requests coming from other computers.
  • to execute an operation, the server 106 or 110 may have to request services of the storage servers 106 that control segments associated with the objects involved in the operation.
  • the storage servers 106 may be termed destination servers (DS).
  • An ES that creates a new file, e.g., My_file, may place it on disk segment 3 and may have to link it into a directory Dir3 on segment 5. Therefore, the ES may have to request the services of the storage server 106 controlling segment 3 to create the file and the services of the storage server 106 controlling segment 5 to link the file into directory Dir3.
  • The system described with respect to Fig. 1 is designed to support a large number of objects and to be able to scale up linearly by adding storage and processing capacities as needed.
  • in a hierarchically organized FS, such as shown in Fig. 1, many properties may be passed from a parent node to a child node, e.g., from "/" down the intermediate directories to "My_file". This process may be termed "inheritance". Such inheriting can be done statically or dynamically.
  • Statically inherited attributes are usually calculated during creation of an object and are recorded as part of the meta-data of the newly recorded object. In one example, the FS does not have to change them automatically.
  • Dynamically inheritable attributes may be calculated or revalidated when they are needed.
  • an object, such as a directory or a file
  • statically inheritable attributes may be persistently recorded as part of on-disk inodes, e.g., data structures that hold metadata about files.
  • statically inherited attributes are not changed. Most of the time the static attributes are set during a create or a mkdir FS operation. Thus, dynamically and statically inheritable attributes are semantically different.
  • Dynamic inheritance is processed at run time by the ES at appropriate points. Dynamically inherited attributes are associated only with in-core inode objects. There are many different examples or cases in which an FS uses the dynamic inheritance mechanism. These include setting various policies that describe the number of replicas, placement rules for new objects, tracking changes for various notification purposes, defining security rules, and auditing and write-once, read-many (WORM) policies, and the like.
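The split between on-disk (static) and in-core (dynamic) attributes described above can be pictured with a minimal sketch. All class and field names here are hypothetical illustrations, not structures from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class OnDiskInode:
    """Persistent metadata: statically inherited attributes are recorded
    here once, typically at create/mkdir time, and survive renames."""
    name: str
    static_attrs: dict = field(default_factory=dict)

@dataclass
class InCoreInode:
    """In-memory view of an object: dynamically inheritable attributes are
    kept only here and are resolved on demand from the parent chain."""
    disk: OnDiskInode
    parent: "InCoreInode" = None
    dynamic_attrs: dict = field(default_factory=dict)

    def effective(self, key):
        """Walk toward the root until a node that defines `key` is found."""
        node = self
        while node is not None:
            if key in node.dynamic_attrs:
                return node.dynamic_attrs[key]
            node = node.parent
        return None
```

A child that defines no policy of its own thus picks up, at lookup time, whatever its nearest ancestor defines, which is the essence of dynamic inheritance.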
  • dynamic inheritance operations may engage more objects and correspondingly depend on correct actions and coordination of a number of destination servers 106.
  • Dynamic inheritance may provide an efficient mechanism to set various policies at any position in the name space. Such policies should affect all the nodes below. In other words, policy-like attributes should be dynamically inherited by all descendants of the nodes in the name space.
  • the root of inheritance (RI) may coincide with the root directory "/" of the FS, the FS Root.
  • as the number of instances of dynamic inheritance increases, the number of update requests to the root of the file system may become large.
  • the efficiency of the system may be impacted if all intermediate nodes are to be delegated and revalidated when delegations are revoked.
  • the traffic to the DS hosting the FS Root associated with revocation of delegations and revalidation of the root grows, and the system may become unbalanced. Therefore, methods described herein can be used to track contention and take action in the form of redistributing RI activities to the next level of the name space tree.
  • the methods assume that if delegations are revoked, only the node of interest and the node that is used as the RI have to be delegated for a given check and have to be revalidated. Avoiding the validation of all intermediate path nodes when validating dynamically inheritable attributes may substantially improve the operation of the system.
  • Fig. 2 is a schematic 200 of an example of propagating an inheritable snapshot time mark (STM).
  • the time when the snapshot was requested may serve as an identifying characteristic.
  • this direct form of identification by time may be termed a Snapshot Time Mark (STM).
  • STM can be recorded on any object at any place of the FS name space and is treated and propagated as a dynamically inheritable attribute.
  • the various file system entities are managed by respective destination servers, S1 216, S2 218, and S3 220.
  • a dashed line 222 between a destination server 216, 218, and 220 and a respective file system entity 202-212 indicates that the file system entity is being managed or controlled by the destination server.
  • the destination server S2 218 manages file system entities File3 206 and Dir2 210.
  • Two entry point servers, ES1 224 and ES2 226, are also shown in this example.
  • File system operations, including snapshot operations, can be initiated at the entry point servers 224 and 226.
  • a snapshot of the root is basically a snapshot of the entire file system under the root. All file system entities under the root (/), such as those shown in Fig. 2, inherit the STM value stm_1 from the root. If a file system entity was created after the time indicated by stm_1 and is subsequently deleted, prior to another snapshot being taken, then such entity would not be preserved by the file system.
  • the entry point server ES1 may issue a snapshot request 228 to take a snapshot of Dir1 208.
  • the snapshot of Dir1 is a request to take a snapshot of Dir1 and all of the file system entities that are under Dir1.
  • the root (/) can have other sub-directories, and thus, the snapshot of Dir1 would be a snapshot of a subset of the data of the entire file system.
  • the snapshot of Dir1 is associated with STM value stm_2, which is larger than stm_1. Because stm_2 is larger than stm_1, the new value of stm_2 should be inherited by all file system objects under Dir1 208. As a result, file system entities that were created before stm_1, as well as file system entities created after stm_1 but before stm_2, should be preserved in the snapshot at stm_2.
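The preservation rule above can be illustrated with a small hypothetical helper (not from the patent): an entity is kept by the snapshot at a given STM only if it already existed at that time mark and was deleted, if at all, only afterwards.

```python
def must_preserve(create_time: float, delete_time: float, stm: float) -> bool:
    """An entity is preserved by the snapshot at `stm` if it was created at
    or before the time mark and deleted only after it. An entity that was
    both created and deleted between two snapshots is not preserved."""
    return create_time <= stm and delete_time > stm
```

For an object that is never deleted, `delete_time` can be taken as infinity, so it is preserved by every snapshot taken after its creation.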
  • ES1 224 may also request a change of the value for dm_gen.stm on the FS Root 214.
  • when "snap /Dir1" is executed, a new value of STM, stm_2, is recorded on Dir1 208 by S3 220, and dm_gen.stm is incremented on the FS Root 214 by S1 216. Invalidation requests may then be sent by S3 220 and S1 216 to ES2 226 to indicate that ES2 226 cannot trust a locally stored copy of the attributes of Dir1 208 or the FS Root 214.
  • the system performs an inheritance checking process, for example, as described with respect to Fig. 3. It detects that the root "is not cached", i.e., does not have a caching delegation associated with it, and re-reads it from S1 216. It may be determined that File3 206 is cached and can be trusted. However, the value of dm_gen.stm for File3 206 is different from the value of dm_gen.stm for the FS Root 214. Consequently, a new list of the nodes hierarchically linking File3 206 to the FS Root 214 is built.
  • ES2 226 has performed two network requests to refresh the "non-cached" nodes and updated four in-core objects.
  • the next file to be deleted is File2 204.
  • the system performs the same checking inheritance process.
  • both File2 204 and the FS Root 214 are cached, but the values for dm_gen.stm do not match.
  • the list of nodes hierarchically linking File2 204 to the FS Root 214 is rebuilt.
  • the building of the list is stopped after placing a single node, Dir3 212, on the list because the value of dm_gen.stm for Dir3 212 matches the value of dm_gen.stm for the FS Root 214.
  • no network requests were made, and only one node has been updated. The same result is seen for every other node in the /Dir1/Dir2 sub-tree, minimizing requests made to S1 216.
  • Fig. 3 is an example of a method 300 for handling dynamically inheritable attributes.
  • the method may be implemented on the systems discussed with respect to Figs. 1 and 2.
  • entities of a hierarchically arranged file system are stored in the distributed storage system.
  • an operation is performed at block 304 that sets a value of a dynamically inheritable attribute of a particular one of the file system entities.
  • the dynamically inheritable attribute can be an STM, as discussed above.
  • other types of dynamically inheritable attributes include a replication policy, a placement rule, information relating to tracked changes, a security rule, an audit policy, and so forth.
  • the determination that a dynamically inheritable attribute of a file system entity is to be refreshed can be part of a validation procedure, in which the value of the dynamically inheritable attribute for a given file system entity is validated. For example, a validation procedure can be performed of all file system entities along a particular path from a particular file system entity.
  • techniques or mechanisms according to some implementations are provided to move a root of inheritance down a chain of nodes when contention is detected to be a problem, or to return the root of inheritance back to the root directory if all the directories below are invalidated. Further, the method may intelligently determine that certain file system entities along the path do not have to be re-validated, provided certain conditions are satisfied, as discussed further below. In one example, these techniques may help avoid traversing the entire chain of nodes, corresponding to a sub-tree of file system entities, during a validation procedure.
  • a dynamically inherited generation field, such as dm_gen.stm, among others, in an in-core (in-memory) inode representing a file system entity is used during a validation procedure to determine when traversal of a chain of nodes can be stopped.
  • the dm_gen.stm field is maintained by entry point servers in in-core inodes and is copied from the parent of the inode during the process of propagation of a dynamically inheritable attribute, such as an STM.
  • the dm_gen.stm field is updated at the root of the file system whenever a dynamically inheritable attribute is updated, such as in response to the taking of a new snapshot.
  • the dm_gen.stm field is changed monotonically, for example, incremented.
  • the dm_gen.stm field is propagated from the root to other nodes during lookups or during a validation procedure to validate the dynamically inheritable attribute, such as the STM, as discussed further with respect to Fig. 4.
  • Fig. 4 is an example of a method 400 for propagating dynamically inheritable attributes.
  • the validation procedure of Fig. 4 is performed by an entry point server and is used to validate a dynamically inheritable attribute (e.g. STM) at a given file system entity, referred to as "my object".
  • the processing of a dynamically inheritable attribute is based on a dynamically inherited generation (dm_gen) field in the in-core inode structure representing a file object.
  • while dm_gen may be a single value, in one example it may be a vector describing various distinct inheritable properties, such as snapshot IDs, the generation of anti-virus checking tools, the generation of placement rules, etc.
  • individual components of the vector may be referred to as dm_gen.xxx, for example, dm_gen.stm.
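A minimal sketch of the vector form of dm_gen described above; the component names are illustrative, matching the examples in the text rather than any structure defined by the patent:

```python
from dataclasses import dataclass

@dataclass
class DmGen:
    """One generation counter per distinct dynamically inheritable
    property; components are addressed as dm_gen.xxx, e.g. dm_gen.stm."""
    stm: int = 0         # snapshot time mark generation
    antivirus: int = 0   # generation of anti-virus checking tools
    placement: int = 0   # generation of placement rules

# A node whose stm generation lags the root's must refresh its STM:
root_gen, node_gen = DmGen(stm=3), DmGen(stm=2)
stale = node_gen.stm != root_gen.stm
```

Keeping a separate counter per property lets one property (e.g. a new snapshot) be invalidated and re-propagated without disturbing the cached state of the others.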
  • This field is maintained by an ES and is copied from the parent of the inode during the process of propagation of an inheritable attribute.
  • the method starts at block 402, with the location and revalidation of the RI. This may be performed by the method discussed with respect to Fig. 5.
  • the dm_gen field is checked to confirm that it is the same for all nodes on the name path up to the root of inheritance.
  • the dm_gen field is monotonically incremented only on the root of inheritance (RI) and then propagated to other nodes by the process of validation of dynamically inheritable attributes.
  • the above process does not check the values of inheritable attributes for all the nodes of the hierarchy. The processing stops as soon as the dm_gen attribute of any node in the path matches the dm_gen of the RI.
  • the entry point server builds a list (L) of all nodes in the hierarchy from my object to the root. As part of the process of building the list (L), the entry point server retrieves the root from the corresponding destination server, unless such information is already cached at the entry point server. Further, information pertaining to my object is retrieved from the corresponding destination server, unless such information is already cached at the entry point server. Moreover, the entry point server further retrieves information pertaining to any intermediate file system entities between my object and the root, unless any such information associated with a given intermediate object is already cached at the entry point server.
  • nodes associated with file system entities in the hierarchy are iteratively added to the list, so long as the dm_gen field of the corresponding file system entity does not match the dm_gen field of the root.
  • the process of adding nodes to the list stops when the dm_gen field of a file system entity matches the dm_gen field of the root.
  • the value of the dynamically inheritable attribute is propagated from the first node in the list, where the first node is typically the root of inheritance, to other nodes in the list.
  • propagation of a dynamically inheritable attribute is made to the file system entities associated with nodes in the list. These are the file system entities having dm_gen values that do not match that of the root of inheritance. This may help to reduce traffic and resource consumption associated with propagation of dynamically inheritable attributes, which can grow rapidly in a large distributed storage system.
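The list-building and propagation steps above can be sketched as follows. The class and function names are illustrative, not taken from the patent, and the single attribute propagated here is an STM:

```python
class FsNode:
    """Minimal in-core inode stand-in with a parent link, a dm_gen
    generation counter, and one dynamically inheritable attribute (an STM)."""
    def __init__(self, name, parent=None, dm_gen=0, stm=0):
        self.name, self.parent = name, parent
        self.dm_gen, self.stm = dm_gen, stm

def validate_dynamic_attr(node, root):
    """Build the list of nodes from `node` toward `root`, stopping at the
    first node whose dm_gen already matches the root's, then propagate the
    root's attribute value down to just those stale nodes."""
    stale = []
    while node is not None and node.dm_gen != root.dm_gen:
        stale.append(node)
        node = node.parent
    for n in stale:            # only mismatched nodes are refreshed
        n.stm = root.stm
        n.dm_gen = root.dm_gen
    return len(stale)          # number of nodes that needed updating
```

On a second validation of the same path the loop stops immediately at the first node, which is the short-circuit behavior that keeps traffic to the server hosting the root low.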
  • Fig. 5 is an example of a method 500 for finding and validating a root of inheritance for dynamically inheritable attributes in a name space tree.
  • the method can also track contention and set the root of inheritance to a lower level to decrease the loading on a server, or reset the RI back to the primary root if needed.
  • the system keeps track of the relative cost of updates and queries associated with the RI. This is done using two ratios: the ratio of root-related operations to maximum productivity, N/M (Eqn. 1), and the ratio of the actual load to maximum productivity, T = L/M (Eqn. 2); the terms are defined below.
  • M represents the maximum productivity of the host in the number of input-output transactions per second (IO/s). Generally, M depends on the speed of the network adapter, the productivity of the host's CPU, and other factors. To start the tracking, it may be pre-set with a reasonably conservative default, but is continuously tracked and increased until it matches the actual maximum for the measured total load value.
  • L is the total load value for the server in IO/s for the last time interval, e.g., a second.
  • N is the root load, the actual number of invalidation, revalidation, and dm_gen update IO requests per second associated with the RI.
  • R is the target ratio of root-related operations (N) to the maximum productivity (M). Typically it is set between 0.2 and 0.3.
  • T is the threshold ratio of the actual load (L) to the maximum productivity (M). In this calculation, an inheritance root overloading event triggers when T exceeds a preset level, such as 0.5.
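As an illustration only, the overload test built from R and T might be combined as below. The exact way the two ratios are combined is not fully spelled out in the text, so this is one plausible reading, with hypothetical names and default values taken from the typical ranges given above:

```python
def ri_overloaded(n_root_ops: float, total_load: float, max_productivity: float,
                  r_target: float = 0.25, t_threshold: float = 0.5) -> bool:
    """Trigger an RI overloading event when the share of root-related
    traffic (N/M) exceeds the target ratio R and the overall load ratio
    T = L/M exceeds the preset threshold."""
    n_ratio = n_root_ops / max_productivity      # Eqn. 1: N/M vs. target R
    t = total_load / max_productivity            # Eqn. 2: T = L/M
    return n_ratio > r_target and t > t_threshold
```

When this returns True, dm_gen.shift_down would be set to TRUE and RI responsibilities would begin moving to the next level of the name hierarchy, as described below.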
  • An RI overloading event, as determined by the values for R and T, sets the value of dm_gen.shift_down to TRUE and starts the reassigning of inheritance root responsibilities to the nodes on the next level of the name hierarchy.
  • the process of offloading RI responsibilities to the next level is performed during the revalidation of an RI, wherein the initial RI responsibilities belong to the FS Root.
  • the RI responsibilities are propagated down the tree until one of the resetting events returns the RI back to the FS Root. Accordingly, it may be viewed as an oscillating process that is controlled by two fields, the dm_gen.shift_down flag and the dm_gen.shift_generation.
  • the value of dm_gen.shift_generation indicates that RI responsibilities were assigned to this node during a specific shift_generation, and the dm_gen.shift_down flag set to FALSE indicates that the current RI is operational. TRUE indicates that an overloading event was detected and the RI must be shifted down.
  • the method 500 begins at block 502 by determining if the FS Root and the current RI are cached and if their dm_gen.shift_generations are the same. If not, the RI is reset to the FS Root at block 504, and process flow returns to block 502 to restart the process. If so, process flow proceeds to block 506.
  • at block 506, the ES determines if dm_gen.shift_down is set to TRUE and dm_gen.shift_down_gen matches that of the FS Root. If so, process flow proceeds to block 508, at which the RI is set to the next object down the path to the target object. If at block 506 either dm_gen.shift_down is FALSE or dm_gen.shift_down_gen does not match that of the FS Root, process flow proceeds to block 510, at which the method 500 exits back to block 404 of method 400.
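Blocks 502 through 510 can be paraphrased in code roughly as follows. This is a hypothetical sketch of the described flow, not the patent's implementation; the field names mirror dm_gen.shift_down and dm_gen.shift_generation:

```python
class RINode:
    """Minimal stand-in for an in-core directory node carrying the
    shift-down flag and shift generation used by method 500."""
    def __init__(self, name, shift_down=False, shift_gen=0):
        self.name, self.shift_down, self.shift_gen = name, shift_down, shift_gen

def find_ri(fs_root, path):
    """Starting at the FS Root, shift the RI one level down the path toward
    the target object for as long as the current RI has shift_down set for
    the FS Root's current shift generation; otherwise stop and use the
    current RI."""
    ri = fs_root
    for node in path:                  # nodes below the root, in path order
        if ri.shift_gen != fs_root.shift_gen:
            ri = fs_root               # blocks 502/504: stale generation, reset
        if ri.shift_down and ri.shift_gen == fs_root.shift_gen:
            ri = node                  # blocks 506/508: shift the RI down
        else:
            break                      # block 510: current RI is validated
    return ri
```

Because the loop stops at the first node whose shift_down flag is FALSE, an unloaded sub-tree keeps its RI shallow, while a contended one migrates the RI deeper, which is the oscillating behavior illustrated in Fig. 6.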
  • Fig. 6 is a schematic of an example of propagation of roots of inheritance to different levels of a name space tree 600.
  • newly chosen RIs may be on different levels of the name space tree 600.
  • some name space sub-trees with relatively low activity, e.g., sub-tree 602, may keep their RIs at shallower levels.
  • Some dormant sub-trees may not appoint RIs at all, e.g., sub-tree 604.
  • some active sub-trees may move their RIs deeper into the name space tree 600 and closer to the areas of activity.
  • Fig. 6 illustrates how RIs were propagated to different levels of the name space tree 600.
  • RI nodes are represented as shaded areas. More active portions, such as sub-trees 606 and 608, of the name space had their RIs propagated deeper, for example, to the Dir_11 610 and Dir_12 612 levels. Less active portions, e.g., sub-tree 602, of the name space had their RIs propagated shallower, to Dir_2 614, and dormant areas of the name space below Dir_3 616 do not have an RI established.
  • a number of events may trigger an RI reset, for example, a reboot of the DS controlling the FS Root node or a transfer of control of the FS Root segment.
  • the FS Root generation is initialized with content randomized by the reboot time and the ID of the handling host.
  • Another event that may trigger an RI reset to the FS Root 603 is a renaming of a portion of the name space tree 600, with concomitant changes of inheritance. For example, some parts of the name space may be moved out of their scope of inheritance. In such a case, the elements that are moved may either retain some inheritable properties from a previous path or have to re-inherit them from a new path. To assure that this process affects all objects, the RI is reset, either to the most common ancestor or, as a simplification, to the FS Root 603.
  • Changing inheritable properties between the FS Root 603 and the RI may also trigger a reset of the RI, for example, to the place of change or, as a simplification, to the FS Root 603.
  • Fig. 7 is an example of a server 700 that may allow roots of inheritance to be moved up or down a hierarchical name space tree.
  • the server may be an ES as described herein.
  • the server 700 has one or more processors 702, which may be single core processors, multi-core processors, virtual processors, cloud processors, or any combinations thereof.
  • the processors 702 may be coupled to a network interface controller (NIC) 704 through a bus 706.
  • the bus 706 may include any number of technologies, such as PCIe, Fibre Channel, or any number of other commercial or proprietary bus technologies.
  • the processors 702 may be coupled to a storage system 708 through the bus 706.
  • the storage system 708 may include a RAM, a ROM, a flash drive, a hard drive, a SAN, an optical drive or any combinations thereof.
  • the storage system 708 forms a tangible machine-readable medium that includes instruction code to direct the processors 702 to perform the methods described herein.
  • a root validation module 710 could include code to direct the processor to validate a root of inheritance, for example, as discussed with respect to Fig. 5.
  • a contention monitor 712 may direct the processor to monitor the amount of propagation accesses to a root directory controller, or other directory controllers, to determine when the RI should be moved down a name space tree.
  • a node list 714 may be used to keep track of the attributes for the nodes, including, for example, STMs and RIs.
  • An attribute propagator 716 may be used to direct the processors 702 to confirm the RI and propagate attributes in a name space tree.

Abstract

In one example, disclosed is a method for controlling contention in hierarchical storage. The method includes monitoring loading on a destination server (DS) controlling a directory that is a root of inheritance (RI) for a dynamically inheritable attribute. If the loading on the DS is higher than a predetermined level, the RI for the dynamically inheritable attribute is moved to a lower level in a name space tree.

Description

DYNAMICALLY INHERITABLE ATTRIBUTE
BACKGROUND
[0001] Data stored in a storage system can be organized into files and directories of a file system. A large storage system typically has a large number of computer nodes. As a result, information associated with the file system can be distributed across the computer nodes. Performing certain operations in a distributed file system can be complex and can result in inefficiency if not performed properly.
DESCRIPTION OF THE DRAWINGS
[0002] Certain exemplary embodiments are described in the following detailed description and in reference to the drawings, in which:
[0003] Fig. 1 is a schematic of an example of a highly distributed segmented parallel file system;
[0004] Fig. 2 is a schematic of an example of propagating an inheritable snapshot time mark (STM);
[0005] Fig. 3 is an example of a method for handling dynamically inheritable attributes;
[0006] Fig. 4 is an example of a method for propagating dynamically inheritable attributes;
[0007] Fig. 5 is an example of a method for finding and validating a root of inheritance for dynamically inheritable attributes in a name space tree;
[0008] Fig. 6 is a schematic of an example of propagation of roots of inheritance to different levels of a name space tree; and
[0009] Fig. 7 is an example of a server that may allow roots of inheritance to be moved up or down a hierarchical name space tree.
DETAILED DESCRIPTION
[0010] Dynamic inheritance allows for efficient setting of various properties in the environment of a highly scalable distributed name space. Dynamic inheritance is widely used for a variety of purposes, including propagating snapshot marks, snapshot restore information, anti-virus checking policies, and placement and tiering rules, among others. However, the mechanism generally assumes a low rate of such modifications, and often uses a single defined root of inheritance, such as the root of a file system, as a coordinating node. As the number of changes in properties, file location, and the like increases, the number of updates to the root of inheritance also increases. This can create undesirable contention, e.g., loading on the server controlling the root of inheritance, and reduce productivity due to the number of systems trying to access the root directory substantially simultaneously. The overlapping access attempts may be especially problematic for very large multi-tenant name spaces with a large number of independent modifications of the dynamically inheritable attributes.
[0011] In examples described herein, techniques are employed to measure contention and control it by selecting multiple inheritance roots dynamically. The use of different roots of inheritance (RI) may help lower the number of access attempts on the main RI, which may increase the efficiency of the file system.
[0012] Fig. 1 is a schematic 100 of an example of a highly distributed segmented parallel file system. The core component of the distributed segmented parallel file system (FS) 102 is a storage segment 104. The FS 102 may include a plurality, such as thousands, of storage segments 104, many of which may be in the terabyte or petabyte size.
[0013] The individual storage segments 104 are controlled by corresponding storage servers 106. However, for load balancing purposes or due to component failures or maintenance reasons, the control over storage segments 104 can migrate from one server 106 to another. Storage servers 106 can be connected to the storage segments 104 directly, for example, through a direct attached storage (DAS) model or through various interconnect technologies 108 such as Fibre Channel (FC), serial attached SCSI (SAS), internet protocol SCSI (iSCSI), and the like. The FS 102 may also include client servers 110 that may not control storage segments 104. The client servers 110 can be used to run applications or provide access to the FS 102 through other protocols 112 such as network file system (NFS), server message block (SMB), HTTP, FTP, and the like.
[0014] To provide file system services, participating nodes, such as storage servers 106 and client servers 110, exchange messages over Ethernet or other networks. In comparison to keeping all of the elements of the hierarchical name space in a single storage segment 104, a higher degree of parallelism may be achieved by widely distributing individual elements throughout the storage segments 104. In this arrangement, individual elements are controlled by different storage servers 106. As an example, Fig. 1 illustrates how individual elements 114 of a file path 116, e.g., /Dir1/Dir2/Dir3/My_file, may be placed on five different storage segments 104 and controlled by three different storage servers 106.
[0015] For purposes of this example, servers 106 or 110, which perform an FS service for applications or for SMB, NFS, FTP, HTTP, and other servers, may be termed entry servers (ES). In one example, the storage servers 106 that control segments can play both roles: they can be an ES for FS-level requests originated locally and a DS for requests coming from other computers.
[0016] To execute an operation, the server 106 or 110 may have to request services of the storage servers 106 that control the segments associated with the objects involved in the operation. These storage servers 106 may be termed destination servers (DS). An ES that creates a new file, e.g., My_file, may place it on disk segment 3 and may have to link it into the directory Dir3 on segment 5. Therefore, the ES may have to request the services of the storage server controlling segment 3 to create the file, and the services of the storage server controlling segment 5 to link the file into the directory Dir3.
[0017] The system described with respect to Fig. 1 is designed to support a large number of objects and to scale up linearly by adding storage and processing capacity as needed. In a hierarchically organized FS, such as shown in Fig. 1, many properties may be passed from a parent node to a child node, e.g., from "/" down through the intermediate directories to "My_file". This process may be termed "inheritance". Such inheriting can be done statically or dynamically.
Statically inherited attributes are usually calculated during creation of an object and are recorded as part of the metadata of the newly recorded object. In one example, the FS does not have to change them automatically.
[0018] Dynamically inheritable attributes may be calculated or revalidated when they are needed. When an object, such as a directory or file, is moved from one parent to another, and the parents have different sets of dynamically inheritable attributes, the moved object and all its descendants have to inherit a new set of such attributes from the new parent. In contrast, statically inheritable attributes may be persistently recorded as part of on-disk inodes, e.g., data structures that hold metadata about files. When such inodes are renamed from one parent to another, the statically inherited attributes are not changed. Most of the time, the static attributes are set during a create or a mkdir FS operation. Thus, dynamically and statically inheritable attributes are semantically different. Dynamic inheritance is processed at run-time by the ES at appropriate points, and dynamically inherited attributes are associated only with in-core inode objects. There are many different cases in which a FS uses a dynamic inheritance mechanism. These include setting policies that describe a number of replicas, placement rules for new objects, tracking changes for various notification purposes, defining security rules, auditing, write-once, read-many (WORM) policies, and the like.
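The contrast above can be sketched in a few lines of Python. This is a hypothetical illustration only; names such as Node, static_policy, and dynamic_policy are invented for the sketch and are not taken from the described file system:

```python
# Sketch: a static attribute is copied once at create time and survives a
# rename; a dynamic attribute is re-derived from the current parent chain
# whenever it is needed.

class Node:
    def __init__(self, name, parent=None, static_policy=None):
        self.name = name
        self.parent = parent
        # Static attribute: fixed at creation, copied from the parent.
        self.static_policy = static_policy if static_policy is not None else (
            parent.static_policy if parent else "default")
        # Dynamic attribute: stored only on nodes where it was explicitly set.
        self.dynamic_policy = None

    def effective_dynamic_policy(self):
        # Revalidated at use time by walking toward the root of inheritance.
        node = self
        while node is not None:
            if node.dynamic_policy is not None:
                return node.dynamic_policy
            node = node.parent
        return "default"

root = Node("/")
root.dynamic_policy = "replicate-2"
d1 = Node("Dir1", parent=root)
f = Node("My_file", parent=d1)

# Move f under a new parent that carries different policies:
new_parent = Node("Dir9", parent=root, static_policy="tier-archive")
new_parent.dynamic_policy = "replicate-3"
f.parent = new_parent

# The static attribute did not change on the rename...
assert f.static_policy == "default"
# ...but the dynamic attribute now reflects the new parent chain.
assert f.effective_dynamic_policy() == "replicate-3"
```

The walk toward the root in `effective_dynamic_policy` is what later sections optimize with the dm_gen generation field.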
[0019] Further, dynamic inheritance operations may engage more objects and correspondingly depend on the correct actions and coordination of a number of destination servers 106. For example, properties, such as policies, for the newly created My_file can be dynamically inherited from the file system root directory "/", which is under the control of storage server S3 106. Dynamic inheritance may provide an efficient mechanism to set various policies at any position in the name space. Such policies should affect all the nodes below. In other words, policy-like attributes should be dynamically inherited by all descendants of the nodes in the name space.
[0020] In one example, the RI may coincide with the root directory "/" of the FS, the FS Root. However, as the number of instances of dynamic inheritance increases, the number of update requests to the root of the file system may become large. The efficiency of the system may be impacted if all intermediate nodes are to be delegated and revalidated when delegations are revoked. Further, the traffic to the DS hosting the FS Root, associated with the revocation of delegations and the revalidation of the root, grows, and the system may become unbalanced. Therefore, the methods described herein can be used to track contention and take action in the form of redistributing RI activities to the next level of the name space tree. The methods assume that if delegations are revoked, only the node of interest and the node that is used as the RI have to be delegated for a given check and have to be revalidated. Avoiding the validation of all intermediate path nodes when validating dynamically inheritable attributes may substantially improve the operation of the system.
[0021] Fig. 2 is a schematic 200 of an example of propagating an inheritable snapshot time mark (STM). There are many different techniques to identify snapshots. For example, the time when the snapshot was requested may be a characteristic. Accordingly, a direct form of identification may be made by time and termed a snapshot time mark (STM). Such an STM can be recorded on any object at any place in the FS name space and is treated and propagated as a dynamically inheritable attribute.
[0022] In this example, to illustrate operation, it may be assumed that there are three files, File1 202, File2 204, and File3 206, and three directories, Dir1 208, Dir2 210, and Dir3 212. The file system (FS) also has a file system root "/", termed the FS Root 214. The various file system entities are managed by respective destination servers, S1 216, S2 218, and S3 220. A dashed line 222 between a destination server 216, 218, or 220 and a respective file system entity 202-212 indicates that the file system entity is being managed or controlled by that destination server. Thus, for example, the destination server S2 218 manages the file system entities File3 206 and Dir2 210.
[0023] Two entry point servers, ES1 224 and ES2 226, are also shown in this example. File system operations, including snapshot operations, can be initiated at the entry point servers 224 and 226.
[0024] For purposes of this example, it may be assumed that the entry point server ES2 226 has worked with entities under /Dir2 210 for some amount of time, and thus entities under /Dir2 210 are stored in the cache of ES2 226. It may also be assumed that a previous snapshot request was applied to the FS Root 214 and is associated with the STM value stm_1. A snapshot of the root (/) is basically a snapshot of the entire file system under the root. All file system entities under the root (/), such as those shown in Fig. 2, inherit the STM value stm_1 from the root. If a file system entity was created after the time indicated by stm_1 and is subsequently deleted, prior to another snapshot being taken, then such an entity would not be preserved by the file system.
[0025] At a later point in time, the entry point server ES1 may issue a snapshot request 228 to take a snapshot of Dir1 208. The snapshot of Dir1 is a request to take a snapshot of Dir1 and all of the file system entities that are under Dir1. In one example, the root (/) can have other sub-directories, and thus the snapshot of Dir1 would be a snapshot of a subset of the data of the entire file system. The snapshot of Dir1 is associated with the STM value stm_2, which is larger than stm_1. Because stm_2 is larger than stm_1, the new value of stm_2 should be inherited by all file system objects under Dir1 208. As a result, file system entities that were created before stm_1, as well as file system entities created after stm_1 but before stm_2, should be preserved in the snapshot at stm_2.
[0026] In addition to changing the STM value on Dir1 208, ES1 224 may also request a change of the value of dm_gen.stm on the FS Root 214. When "snap /Dir1" is executed, a new value of STM, stm_2, is recorded on Dir1 208 by S3 220, and dm_gen.stm is incremented on the FS Root 214 by S1 216. Invalidation requests may then be sent by S3 220 and S1 216 to ES2 226 to indicate that ES2 226 cannot trust a locally stored copy of the attributes of Dir1 208 or the FS Root 214. The rest of the cached objects, Dir2 210, Dir3 212, File1 202, File2 204, and File3 206, are not affected by these invalidations. However, the inherited STM should then be stm_2 and not the value of stm_1 that was recorded previously.
[0027] If a user on ES2 226 performs a command 230, "rm -rf /Dir1/Dir2/*", that is supposed to delete all objects below Dir2 210, then, because of the snap that was requested at stm_2 on /Dir1 208, all the files should be preserved. The first object that may be removed is File3 206. The system performs an inheritance checking process, for example, as described with respect to Fig. 3. It detects that the root "is not cached", i.e., does not have a caching delegation associated with it, and re-reads it from S1 216. It may be determined that File3 206 is cached and can be trusted. However, the value of dm_gen.stm for File3 206 is different from the value of dm_gen.stm for the FS Root 214. Consequently, a new list of the nodes hierarchically linking File3 206 to the FS Root 214 is rebuilt. [0028] Since all of the nodes have values of dm_gen.stm that are different from the current value of dm_gen.stm for the FS Root 214, all of them are placed on the list. While building this list, it may be detected that Dir1 208 is not cached as well, and the directory is reread. The check-inheritance process then propagates the new inherited STM by setting the value of dm_gen.stm for all of the elements of the list to the new value. Further, it detects that the STM value of stm_2 for Dir1 208 is larger than the values for the FS Root 214 (0) and Dir2 210 (stm_1) and proceeds to use the value stm_2 for all nodes below Dir1 208. At this point, ES2 226 has performed two network requests to refresh two "non-cached" nodes and has updated four in-core objects.
[0029] The next file to be deleted is File2 204. The system performs the same inheritance checking process. At this point, both File2 204 and the FS Root 214 are cached, but the values of dm_gen.stm do not match. Again, the list of nodes hierarchically linking File2 204 to the FS Root 214 is rebuilt. However, the building of the list is stopped after placing a single node, Dir3 212, on the list, because the value of dm_gen.stm for Dir3 212 matches the value of dm_gen.stm for the FS Root 214. As a result, no network requests were made, and only one node has been updated. The same result is seen for every other node in the /Dir1/Dir2 sub-tree, minimizing the requests made to S1 216.
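The short-circuiting behavior in paragraphs [0027]-[0029] can be sketched as follows. This is an interpretation for illustration, not the patented implementation; Node and check_inheritance are invented names, and the exact update counts depend on whether the target object itself is counted:

```python
# Sketch of a check-inheritance pass: nodes go on the list only while their
# dm_gen differs from the FS Root's, so later checks touch very few nodes.

class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.dm_gen = 0   # generation last copied from the FS Root
        self.stm = 0      # inherited snapshot time mark

def check_inheritance(obj, root):
    # Build the list of nodes linking obj to the root, stopping early at the
    # first node that already carries the root's current dm_gen value.
    chain, node = [], obj
    while node is not root and node.dm_gen != root.dm_gen:
        chain.append(node)
        node = node.parent
    # Propagate top-down: each node takes the root's dm_gen and the largest
    # STM recorded on the path above it (STMs grow monotonically).
    for n in reversed(chain):
        n.dm_gen = root.dm_gen
        n.stm = max(n.stm, n.parent.stm)
    return len(chain)  # number of in-core objects refreshed by this check

# Mirror the example topology: / -> Dir1 -> Dir2 -> {Dir3 -> File2, File3}
fs_root = Node("/")
dir1 = Node("Dir1", fs_root)
dir2 = Node("Dir2", dir1)
dir3 = Node("Dir3", dir2)
file2 = Node("File2", dir3)
file3 = Node("File3", dir2)

fs_root.dm_gen = 1   # "snap /Dir1" incremented dm_gen.stm on the FS Root
dir1.stm = 2         # stm_2 was recorded on Dir1

print(check_inheritance(file3, fs_root))  # 3: Dir1, Dir2, File3 all refreshed
print(check_inheritance(file2, fs_root))  # 2: the walk stops at Dir2, already current
assert file2.stm == 2 and file3.stm == 2  # both protected by the stm_2 snapshot
```

The second call never reaches the root: the walk stops as soon as it meets a node whose dm_gen already matches, which is the cost saving the example describes.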
[0030] Fig. 3 is an example of a method 300 for handling dynamically inheritable attributes. The method may be implemented on the systems discussed with respect to Figs. 1 and 2. At block 302, entities of a hierarchically arranged file system are stored in the distributed storage system. At run-time of the file system, an operation is performed at block 304 that sets a value of a dynamically inheritable attribute of a particular one of the file system entities. For example, the dynamically inheritable attribute can be an STM, as discussed above. In other examples, other types of dynamically inheritable attributes include a replication policy, a placement rule, information relating to tracked changes, a security rule, an audit policy, and so forth.
[0031] At block 306, a determination is made as to whether the dynamically inheritable attribute of at least a second one of the file system entities, related to the particular file system entity, is to be refreshed. If so, process flow proceeds to block 308, where the value of the dynamically inheritable attribute is propagated to at least the second file system entity. [0032] The determination that a dynamically inheritable attribute of a file system entity is to be refreshed can be part of a validation procedure, in which the value of the dynamically inheritable attribute for a given file system entity is validated. For example, a validation procedure can be performed on all file system entities along a particular path from a particular file system entity. For performance reasons, techniques or mechanisms according to some implementations are provided to move a root of inheritance down a chain of nodes when contention is detected to be a problem, or to return the root of inheritance back to a root directory if all the directories below are invalidated. Further, the method may intelligently determine that certain file system entities along the path do not have to be re-validated, provided certain conditions are satisfied, as discussed further below. In one example, the techniques according to some implementations may help avoid traversing the entire chain of nodes, corresponding to a sub-tree of file system entities, during a validation procedure.
[0033] In some examples, a dynamically inherited generation field, such as dm_gen.stm, in an in-core (in-memory) inode representing a file system entity is used during a validation procedure to determine when the traversal of a chain of nodes can be stopped. The dm_gen.stm field is maintained by entry point servers in in-core inodes and is copied from the parent of the inode during the process of propagation of a dynamically inheritable attribute, such as an STM. The dm_gen.stm field is updated at the root of the file system whenever a dynamically inheritable attribute is updated, such as in response to the taking of a new snapshot.
[0034] The dm_gen.stm field is changed, for example, monotonically incremented, at the root of the file system with respective changes of the corresponding dynamically inheritable attribute. The dm_gen.stm field is propagated from the root to other nodes during lookups or during a validation procedure to validate the dynamically inheritable attribute, such as the STM, as discussed further with respect to Fig. 4.
[0035] Fig. 4 is an example of a method 400 for propagating dynamically inheritable attributes. The validation procedure of Fig. 4 is performed by an entry point server and is used to validate a dynamically inheritable attribute (e.g., an STM) at a given file system entity, referred to as "my object". The processing of dynamically inheritable attributes is based on a dynamically inherited generation (dm_gen) field in the in-core inode structure representing a file object. Though the description above suggests that dm_gen is a single value, in one example it may be a vector describing various distinct inheritable properties, such as snapshot IDs, the generation of antivirus checking tools, the generation of placement rules, etc. Below, reference is made to a specific dm_gen slot as dm_gen.xxx, for example, dm_gen.stm. This field is maintained by an ES and is copied from the parent of the inode during the process of propagation of an inheritable attribute.
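Since dm_gen may be a vector of independent slots, it can be pictured as a small per-inode record. This is a sketch only; the slot names beyond stm are examples the text mentions, and the field layout is assumed:

```python
# Sketch: dm_gen as a vector of per-property generations in an in-core inode.
from dataclasses import dataclass, field

@dataclass
class DmGen:
    stm: int = 0     # snapshot time mark generation
    av: int = 0      # antivirus tool generation (illustrative slot)
    place: int = 0   # placement rule generation (illustrative slot)

@dataclass
class InCoreInode:
    name: str
    dm_gen: DmGen = field(default_factory=DmGen)

def propagate_slot(parent, child, slot):
    # An ES copies a single dm_gen slot from the parent during validation,
    # leaving the other inheritable properties untouched.
    setattr(child.dm_gen, slot, getattr(parent.dm_gen, slot))

root = InCoreInode("/", DmGen(stm=3, place=1))
inode = InCoreInode("My_file")
propagate_slot(root, inode, "stm")
print(inode.dm_gen)  # only the stm slot was refreshed
```

Keeping the slots independent means that, for instance, a new snapshot bumps only dm_gen.stm and does not force revalidation of placement rules.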
[0036] The method can be further explained using the example of snapshot time marks (STMs). The rules of STM propagation are based on the fact that time moves in one direction and STMs for snapshots grow monotonically. A later request to preserve an affected object simply overwrites the previous request. Therefore, the effective STM is always the largest requested STM on the chain of all predecessors of the object.
[0037] The method starts at block 402, with the location and revalidation of the RI. This may be performed by the method discussed with respect to Fig. 5.
[0038] At block 404, the dm_gen field is checked to confirm that it is the same for all nodes on the name path up to the root of inheritance. Ultimately, the dm_gen field is monotonically incremented only on the root of inheritance (RI) and is then propagated to other nodes by the process of validation of dynamically inheritable attributes. When the value of an inheritable attribute is changed, the corresponding dm_gen for the root inode must be updated. In one example, the above process does not check the values of inheritable attributes for all the nodes of the hierarchy. The processing stops as soon as the dm_gen attribute of any node in the path matches the dm_gen of the RI.
[0039] If the dm_gen values checked at block 404 do not match, the process proceeds to block 406. If they do match, the process ends at block 412.
[0040] In one example, if the root is not cached or if my object is not cached, then the corresponding dm gen field is not locally accessible at the entry point server. At block 406, the entry point server builds a list (L) of all nodes in the hierarchy from my object to the root. As part of the process of building the list (L), the entry point server retrieves the root from the corresponding destination server, unless such information is already cached at the entry point server. Further, information pertaining to my object is retrieved from the corresponding destination server, unless such information is already cached at the entry point server. Moreover, the entry point server further retrieves information pertaining to any intermediate file system entities between my object and the root, unless any such information associated with a given intermediate object is already cached at the entry point server.
[0041] As indicated by loop 408, nodes associated with file system entities in the hierarchy are iteratively added to the list, so long as the dm_gen field of the corresponding file system entity does not match the dm_gen field of the root. The process of adding nodes to the list stops when the dm_gen field of a corresponding file system entity matches the root's dm_gen field.
[0042] After the list has been built, at block 410, the value of the dynamically inheritable attribute, such as the STM, is propagated from the first node in the list, where the first node is typically the root of inheritance, to the other nodes in the list. In the process according to Fig. 4, propagation of a dynamically inheritable attribute is made to the file system entities associated with the nodes in the list; these are the file system entities having dm_gen values that do not match that of the root of inheritance. This may help to reduce the traffic and resource consumption associated with the propagation of dynamically inheritable attributes, which can grow rapidly in a large distributed storage system.
[0043] After propagation of the value of the dynamically inheritable attribute to the file system entities associated with nodes in the list, the process of Fig. 4 exits at block 412.
[0044] Fig. 5 is an example of a method 500 for finding and validating a root of inheritance for dynamically inheritable attributes in a name space tree. The method can also track contention and set the root of inheritance to a lower level to decrease the loading on a server, or reset the RI back to the primary root if needed.
[0045] The system keeps track of the relative cost of updates and queries associated with the RI. This is done using the formulas in Eqns. 1 and 2.
N/M > R Eqn. 1
L/M > T Eqn. 2
In Eqns. 1 and 2, M represents the maximum productivity of the host in the number of input-output transactions per second (IO/s). Generally, M depends on the speed of the network adapter, the productivity of the host's CPU, and other factors. To start the tracking, M may be pre-set with a reasonably conservative default, but it is continuously tracked and increased until it matches the actual maximum for the measured total load value. L is the total load value for the server, in IO/s, for the last time interval, e.g., a second. N is the root load: the actual number of invalidation, revalidation, and dm_gen update IO requests per second associated with the RI.
[0046] In one example, R is the target ratio of root-related operations (N) to the maximum productivity (M). Typically, it is set between 0.2 and 0.3. T is the threshold ratio of the actual load (L) to the maximum productivity (M). In this calculation, the inheritance root overloading event triggers when T exceeds a preset level, such as 0.5. An RI overloading event, as determined by the values for R and T, sets the value of dm_gen.shift_down to TRUE and starts the reassignment of inheritance root responsibilities to the nodes on the next level of the name hierarchy.
[0047] The process of offloading RI responsibilities to the next level is performed during the revalidation of an RI, wherein the initial RI responsibilities belong to the FS Root. The RI responsibilities are propagated down the tree until one of the resetting events returns the RI back to the FS Root. Accordingly, it may be viewed as an oscillating process that is controlled by two fields, the dm_gen.shift_down flag and dm_gen.shift_generation. The value of dm_gen.shift_generation indicates that RI responsibilities were assigned to this node during a specific shift generation. The dm_gen.shift_down flag set to FALSE indicates that the current RI is operational; TRUE indicates that an overloading event was detected and the RI must be shifted down.
[0048] The process of selecting and appointing the RI at the next level is driven by the ES, as in block 402 of the method of Fig. 4 for propagating inheritable attributes. At this point, the RI itself is an inheritable property. For example, the RI is calculated and recorded in the in-core representations of all ES objects as part of method 400 in Fig. 4.
[0049] The method 500 begins at block 502 by determining if the FS Root and the current RI are cached and if their dm_gen.shift_generation values are the same. If not, the RI is reset to the FS Root at block 504, and process flow returns to block 502 to restart the process. If so, process flow proceeds to block 506.
[0050] At block 506, the ES determines if dm_gen.shift_down is set to TRUE and dm_gen.shift_down_gen matches that of the FS Root. If so, process flow proceeds to block 508, at which the RI is set to the next object down the path to the object. If at block 506 either dm_gen.shift_down is FALSE or dm_gen.shift_down_gen does not match that of the FS Root, process flow proceeds to block 510, at which the method 500 exits back to block 404 of method 400.
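One way to read the control flow of blocks 502-510 is the following sketch. It is an interpretation for illustration only: locate_ri and the dictionary fields mirror the dm_gen.shift_down and dm_gen.shift_generation fields described above, but the patent does not give this exact code:

```python
# Sketch of method 500: reset a stale RI to the FS Root, then step the RI
# down the path toward the object while overload shift-down flags are set.

def node(name, shift_down=False, gen=1):
    # Minimal stand-in for an in-core inode's RI-related fields.
    return {"name": name, "shift_down": shift_down, "shift_generation": gen}

def locate_ri(fs_root, current_ri, path):
    """path: nodes from the FS Root down toward the target object."""
    # Blocks 502/504: reset to the FS Root when the cached RI's shift
    # generation no longer matches the FS Root's.
    if current_ri is None or \
       current_ri["shift_generation"] != fs_root["shift_generation"]:
        current_ri = fs_root
    # Blocks 506/508: while the current RI is flagged as overloaded, hand RI
    # responsibilities to the next object down the path.
    ri = current_ri
    while (ri["shift_down"]
           and ri["shift_generation"] == fs_root["shift_generation"]):
        idx = path.index(ri)
        if idx + 1 >= len(path):
            break  # nothing below this node: the RI stays where it is
        ri = path[idx + 1]
    return ri

fs_root = node("/", shift_down=True)   # an overload event was detected here
dir1 = node("Dir1")
dir2 = node("Dir2")
path = [fs_root, dir1, dir2]

print(locate_ri(fs_root, fs_root, path)["name"])  # Dir1

# A stale RI from an older shift generation is first reset to the FS Root,
# then shifted down the same way:
stale = node("Old_RI", gen=0)
print(locate_ri(fs_root, stale, path)["name"])    # Dir1
```

Bumping dm_gen.shift_generation on the FS Root is what makes every cached RI stale at once, giving the single reset knob described in paragraph [0053].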
[0051] Fig. 6 is a schematic of an example of the propagation of roots of inheritance to different levels of a name space tree 600. In one example, in the method 500 described with respect to Fig. 5, newly chosen RIs may be on different levels of the name space tree 600. For example, some name space sub-trees with relatively low activity, e.g., sub-tree 602, may set their RIs closer to the root 603 of the FS. Some dormant sub-trees may not appoint RIs at all, e.g., sub-tree 604. Further, some active sub-trees may move their RIs deeper into the name space tree 600 and closer to the areas of activity.
[0052] Fig. 6 illustrates how RIs were propagated to different levels of the name space tree 600. RI nodes are represented as shaded areas. More active portions of the name space, such as sub-trees 606 and 608, had their RIs propagated deeper, for example, to the Dir_11 610 and Dir_12 612 levels. Less active portions of the name space, e.g., sub-tree 602, had their RIs propagated shallower, to Dir_2 614, and dormant areas of the name space below Dir_3 616 do not have an RI established.
[0053] There are several events in the system upon which the shifting of RI responsibilities down the name space tree 600 is stopped and the RI is returned to the FS Root 603. For example, if the dm_gen.shift_generation value of the FS Root 603 is changed, then all RIs are reset to the FS Root 603. As described with respect to the method 500 of Fig. 5, this event is analyzed by every ES-side revalidation of dynamically inheritable attributes, and if the dm_gen.shift_generation values of the current RI and the FS Root do not match, the RI is reset to the FS Root 603.
[0054] A number of events may trigger an RI reset, for example, a reboot of the DS controlling the FS Root node or a transfer of control of the FS Root segment. In this case, the FS Root generation is initialized with content randomized by the reboot time and the ID of the handling host.
[0055] Another event that may trigger an RI reset to the FS Root 603 is a renaming of a portion of the name space tree 600, with concomitant changes of inheritance. For example, some parts of the name space may be moved out of their scope of inheritance. In such a case, the elements that are moved may either retain some inheritable properties from the previous path or have to re-inherit them from the new path. To assure that this process affects all objects, the RI is reset, either to the most common ancestor or, as a simplification, to the FS Root 603.
[0056] Changing inheritable properties between the FS Root 603 and the RI may also trigger a reset of the RI, for example, to the place of the change or, as a simplification, to the FS Root 603.
[0057] Fig. 7 is an example of a server 700 that may allow roots of inheritance to be moved up or down a hierarchical name space tree. The server may be an ES as described herein. The server 700 has one or more processors 702, which may be single-core processors, multi-core processors, virtual processors, cloud processors, or any combinations thereof. The processors 702 may be coupled to a network interface controller (NIC) 704 through a bus 706. The bus 706 may use any number of technologies, such as PCIe, Fibre Channel, or any number of other commercial or proprietary bus technologies.
[0058] The processors 702 may be coupled to a storage system 708 through the bus 706. The storage system 708 may include RAM, ROM, a flash drive, a hard drive, a SAN, an optical drive, or any combinations thereof. The storage system 708 forms a tangible machine readable medium that includes code to direct the processors 702 to perform the methods described herein. For example, a root validation module 710 could include code to direct the processor to validate a root of inheritance, for example, as discussed with respect to Fig. 5. A contention monitor 712 may direct the processor to monitor the number of propagation accesses to a root directory controller, or other directory controllers, to determine when the RI should be moved down a name space tree. A node list 714 may be used to keep track of the attributes for the nodes, including, for example, STMs and RIs. An attribute propagator 716 may be used to direct the processors 702 to confirm the RI and propagate attributes in a name space tree.
[0059] While the present techniques may be susceptible to various modifications and alternative forms, the examples discussed above have been shown only by way of illustration. It is to be understood that the techniques are not intended to be limited to the particular examples disclosed herein. Indeed, the present techniques include all alternatives, modifications, and equivalents falling within the scope of the present techniques.

Claims

What is claimed is:
1. A method for controlling contention in hierarchical storage, the method comprising:
monitoring loading on a destination server (DS) controlling a directory that is a root of inheritance (RI) for a dynamically inheritable attribute; and if the loading on the DS is higher than a predetermined level, then moving the RI for the dynamically inheritable attribute to a lower level in a name space tree.
2. The method of claim 1, further comprising:
finding the RI for the dynamically inheritable attribute;
revalidating the RI for the dynamically inheritable attribute;
building a list of entities from the RI to the dynamically inheritable attribute; and
propagating the dynamically inheritable attribute to each of the entities in the list.
3. The method of claim 1, further comprising resetting the root of inheritance to a file system root directory (FS Root) if the FS Root is not cached, the RI is not cached, or the generations between the FS Root and the RI do not match.
4. The method of claim 1, further comprising determining the loading by determining a ratio (T) of the number of input-output transactions per time unit to the maximum loading for a DS over the same time unit.
5. The method of claim 4, further comprising using T to determine the loading, and wherein the predetermined level is set to about 0.5.
6. The method of claim 1, further comprising determining the loading by determining a ratio (R) of the number of transactions for a hosted root of inheritance per time unit to the maximum loading for a DS over the same time unit.
7. The method of claim 6, further comprising using R to determine the loading, and wherein the predetermined level is set to between about 0.2 and 0.3.
8. The method of claim 1 , further comprising:
at run-time of a file system, performing an operation that sets a value of the dynamically inheritable attribute of a particular entity, wherein the dynamically inheritable attribute relates to a snapshot;
determining whether the dynamically inheritable attribute of at least a second entity is to be refreshed; and in response to determining that the dynamically inheritable attribute of at least the second entity is to be refreshed, propagating the value of the dynamically inheritable attribute to at least the second entity.
9. The method of claim 8, wherein propagating the value to the second entity comprises propagating the value to the second entity that is a descendant of the particular entity in a hierarchy of the hierarchical storage system.
10. The method of claim 9, wherein propagating the value of the dynamically inheritable attribute comprises propagating a time property of the snapshot.
11. An entry server (ES) for a hierarchical storage system comprising: a processor; and
a storage system, wherein the storage system comprises: a list of file system entities in a name space tree, wherein each entry in the list comprises a root of inheritance and a dynamically inheritable attribute;
instructions to direct the processor to:
monitor loading on a destination server (DS) controlling a segment that is the root of inheritance (RI) for the dynamically inheritable attribute; and if the loading on the DS is higher than a predetermined level, move the RI for the dynamically inheritable attribute to a lower level in the name space tree.
12. The entry server of claim 11, comprising:
a network interface; and
the storage system, wherein the storage system comprises instructions to direct the processor to access the DS over a network.
13. The entry server of claim 11, comprising instructions to direct the processor to:
find the RI for the dynamically inheritable attribute;
revalidate the RI for the dynamically inheritable attribute;
build a list of entities from the RI to the dynamically inheritable attribute; and propagate the dynamically inheritable attribute to each of the entities in the list.
14. The entry server of claim 11, comprising instructions to determine the loading on the DS by determining a ratio (T) of the number of input-output transactions per time unit to the maximum loading for a DS over the same time unit.
15. A non-transitory machine readable medium comprising instructions to instruct a processor to:
monitor contention on a destination server and move a root of inheritance for an entity to a lower level of a name space tree if the contention is beyond preset limits;
validate the root of inheritance (RI) for the entity;
build a node list of entities in the name space tree from the RI to the entity; and
propagate a dynamically inheritable attribute to each of the entities in the node list.
PCT/US2015/013797 2015-01-30 2015-01-30 Dynamically inheritable attribute WO2016122603A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2015/013797 WO2016122603A1 (en) 2015-01-30 2015-01-30 Dynamically inheritable attribute


Publications (1)

Publication Number Publication Date
WO2016122603A1 true WO2016122603A1 (en) 2016-08-04

Family

ID=56544022


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6834301B1 (en) * 2000-11-08 2004-12-21 Networks Associates Technology, Inc. System and method for configuration, management, and monitoring of a computer network using inheritance
US7383286B2 (en) * 2001-07-06 2008-06-03 Fujitsu Limited File management system with parent directory search functions
US7761432B2 (en) * 2005-11-04 2010-07-20 Oracle America, Inc. Inheritable file system properties
US20120303585A1 (en) * 2011-05-23 2012-11-29 Boris Zuckerman Propagating a snapshot attribute in a distributed file system
US20140214773A1 (en) * 2013-01-30 2014-07-31 Hewlett-Packard Development Company, L.P. Reconstructing a state of a file system using a preserved snapshot

Similar Documents

Publication Publication Date Title
JP7393334B2 (en) Allocation and reassignment of unique identifiers for content item synchronization
US11880581B2 (en) Integrated hierarchical storage management
US10853339B2 (en) Peer to peer ownership negotiation
US10360261B2 (en) Propagating a snapshot attribute in a distributed file system
Xiong et al. Metadata distribution and consistency techniques for large-scale cluster file systems
US20190392047A1 (en) Multi-table partitions in a key-value database
JP2020525906A (en) Database tenant migration system and method
US10521401B2 (en) Data object lockdown
US9317525B2 (en) Reconstructing a state of a file system using a preserved snapshot
US20170220586A1 (en) Assign placement policy to segment set
Dev et al. Dr. Hadoop: an infinite scalable metadata management for Hadoop—How the baby elephant becomes immortal
US20180210950A1 (en) Distributed file system with tenant file system entity
US11822806B2 (en) Using a secondary storage system to implement a hierarchical storage management plan
Avilés-González et al. Scalable metadata management through OSD+ devices
Deochake et al. Bigbird: Big data storage and analytics at scale in hybrid cloud
US11556589B1 (en) Adaptive tiering for database data of a replica group
Liu et al. AngleCut: A ring-based hashing scheme for distributed metadata management
WO2016122603A1 (en) Dynamically inheritable attribute
Luo et al. D2-tree: A distributed double-layer namespace tree partition scheme for metadata management in large-scale storage systems
Cha et al. Effective metadata management in exascale file system
Dewan et al. Julunga: A new large-scale distributed read-write file storage system for cloud computing environments
Duan et al. A multi‐channel architecture for metadata management in cloud storage systems by binding CPU‐cores to disks
Mouratidis Optimizing the recovery of data consistency gossip algorithms on distributed object-store systems (CEPH)
Neto et al. Managing Object Versioning in Geo-Distributed Object Storage Systems
Deochake et al. Bigbird: Exabyte-Scale Big Data Storage and Analytics Framework in Hybrid Cloud

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 15880494; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 15880494; Country of ref document: EP; Kind code of ref document: A1)