US9027017B2 - Methods and apparatus for movement of virtual resources within a data center environment - Google Patents
- Publication number
- US9027017B2 (U.S. application Ser. No. 12/709,943)
- Authority
- US
- United States
- Prior art keywords
- data center
- resources
- virtual
- resource
- cluster
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
- G06F9/5077—Logical partitioning of resources; Management or configuration of virtualized resources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/485—Task life-cycle, e.g. stopping, restarting, resuming execution
- G06F9/4856—Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration
- G06F9/4862—Task life-cycle, e.g. stopping, restarting, resuming execution resumption being on a different machine, e.g. task migration, virtual machine migration the task being a mobile agent, i.e. specifically designed to migrate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
- G06F9/5088—Techniques for rebalancing the load in a distributed system involving task migration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/455—Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
- G06F9/45533—Hypervisors; Virtual machine monitors
- G06F9/45558—Hypervisor-specific management and integration aspects
- G06F2009/4557—Distribution of virtual machine instances; Migration and load balancing
Definitions
- Embodiments described herein relate generally to virtual resources within a data center, and, in particular, to methods and apparatus for movement of virtual resources within a data center environment.
- Known methods for managing the operation of virtual resources within a data center can be complicated and inefficient.
- Known methods for handling movement of virtual resources within a data center can involve labor-intensive manual intervention due to various incompatible systems that control and/or manage resources (e.g., hardware resources, software resources) of a data center in a disparate and inefficient fashion.
- Management of virtual resource movement within a data center environment, if not handled appropriately, can adversely affect the operation of other virtual resources within the data center environment.
- An apparatus can include a monitoring module configured to send an indicator representing that performance of a virtual resource satisfies a threshold condition.
- The apparatus can also include a management module configured to move a set of virtual resources including the virtual resource from a first portion of data center hardware resources to a second portion of data center hardware resources mutually exclusive from the first portion of data center hardware resources in response to the indicator.
- The management module can be configured to define the set of virtual resources based on an operational relationship between the virtual resource and the remaining virtual resources included in the set of virtual resources.
- FIG. 1 is a schematic diagram that illustrates a set of virtual resources identified for movement from one portion of a data center to another portion of the data center, according to an embodiment.
- FIG. 2 is a schematic diagram that illustrates a set of virtual resources identified for movement from one data center cluster to another data center cluster, according to an embodiment.
- FIGS. 3A through 3C are graphs that illustrate performance metric values associated with virtual resources, according to an embodiment.
- FIG. 4A illustrates a database that includes representations of operational relationships between virtual resources, according to an embodiment.
- FIG. 4B illustrates a database that includes available capacity values of data center units, according to an embodiment.
- FIG. 5 is a flowchart that illustrates a method for identifying a set of virtual resources for movement within a data center, according to an embodiment.
- A management module can be configured to move (or trigger movement of) a virtual resource from a first portion of a data center (which can be referred to as a source portion of the data center) to a second portion of the data center (which can be referred to as a destination portion of the data center).
- The hardware resources (e.g., host devices, access switches, aggregation devices, core switching elements) and/or software resources (e.g., operating systems, hypervisors such as a VMware hypervisor) of the data center can be collectively referred to as data center resources.
- The virtual resource(s) can be configured to, for example, emulate the functionality of a physical source device and/or its associated software.
- The movement of one or more virtual resources can be triggered in response to a threshold condition being satisfied based on a value of a performance metric.
- The performance metric of the virtual resource can be, for example, a utilization rate, a failure rate, and/or so forth, of the virtual resource when operating within at least a portion (e.g., a hardware and/or software resource) of a data center.
- The management module can be configured to move additional virtual resources that have an operational relationship (e.g., an operational dependency) with the virtual resource to the destination portion of the data center, as defined within a mapping of operational relationships between the virtual resource and the additional virtual resources.
- The management module can be configured to move the virtual resource (and/or related virtual resources) when the destination portion of the data center is available to operate the virtual resource (and/or related virtual resources).
- The movement of one or more virtual resources can be triggered in response to a combination of factors including (1) performance of one or more virtual resources, (2) operational relationships between the virtual resource(s), (3) the availability of destination resources (e.g., data center resources) to operate the virtual resource(s), and/or so forth.
- The movement of one or more virtual resources from a source portion of a data center to a destination portion of the data center can be based on a user preference of a user (e.g., a data center administrator, a client, a customer), a rules-based algorithm, and/or so forth. For example, the factors can be balanced based on a user preference and/or a rules-based algorithm to identify one or more virtual resources for movement within the data center.
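The balancing of these factors can be sketched as a simple rules-based decision. This is an illustrative Python sketch, not the patent's algorithm; all names, thresholds, and capacity units are assumptions:

```python
# Hypothetical rules-based sketch combining the three factors named
# above: (1) performance with respect to a threshold condition,
# (2) operational relationships, and (3) destination availability.

def should_move(perf_value, threshold, co_located_ok, destination_capacity,
                required_capacity):
    performance_poor = perf_value < threshold                     # factor (1)
    destination_fits = required_capacity <= destination_capacity  # factor (3)
    # Move only when performance is poor, the operationally related
    # resources can move together (factor (2)), and the destination
    # has capacity for the whole set.
    return performance_poor and co_located_ok and destination_fits

# A resource performing below threshold, movable with its related
# resources, and fitting at the destination is flagged for movement.
print(should_move(perf_value=10, threshold=20, co_located_ok=True,
                  destination_capacity=8, required_capacity=5))  # True
```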
- The resources of a source portion of a data center can be referred to as source data center resources, and the resources of a destination portion of the data center can be referred to as destination data center resources.
- The source and/or destination portions of the data center can be portions of the data center that are managed as data center units.
- The source and/or destination portions of the data center can be associated with data center clusters.
- FIG. 1 is a schematic diagram that illustrates a set of virtual resources 50 identified for movement from one portion 102 of a data center 100 to another portion 104 of the data center 100 , according to an embodiment.
- Portion 102 of the data center 100 can be referred to as a source portion 102 of the data center 100, and the portion 104 of the data center 100 can be referred to as a destination portion 104 of the data center 100.
- The portions (i.e., the source portion 102, the destination portion 104) of the data center 100 can be resources (e.g., hardware resources, software resources, data center resources) of the data center 100.
- The set of virtual resources 50, which is identified for movement from the source portion 102 of the data center 100 to the destination portion 104 of the data center 100, includes virtual resources VR2 and VR4.
- The source portion 102 of the data center 100 is configured to operate virtual resources VR1 through VR4, and the destination portion 104 of the data center 100 is configured to operate virtual resources VR5 through VR7.
- The virtual resources VR1 through VR7 can be collectively referred to as virtual resources 55.
- The virtual resources can be configured to emulate an application from a source device (outside of the data center 100) that has been migrated to the data center 100. More details related to migration of a source device are described in co-pending patent application Ser. No. 12/709,954, filed on Feb. 22, 2010, entitled "Methods and Apparatus Related to Migration of Customer Resources to Virtual Resources within a Data Center Environment," which is incorporated herein by reference in its entirety.
- The movement of the virtual resources 50 from the source portion 102 of the data center 100 (e.g., source data center resources) to the destination portion 104 of the data center 100 (e.g., destination data center resources) can be triggered by a management module 120.
- The management module 120 can be executed within a processor 140 of a processing device 150, and the management module 120 can include a monitoring module 124.
- The management module 120 (and the monitoring module 124) can have access to a memory 130 of the processing device 150.
- The processing device 150 can be, for example, a computing device such as a server within the data center 100.
- The processor 140 can include a memory (e.g., a level-1 (L1) cache) (not shown).
- The virtual resources are selected (e.g., identified) from the virtual resources 55 for inclusion in the set of virtual resources 50 (which are to be moved) based on a combination of several factors.
- The factors can include, for example, (1) the values of performance metrics of one or more of the virtual resources 55 with respect to one or more threshold conditions, (2) the operational relationships (e.g., operational dependencies) between the virtual resources 55, and (3) the availability of target resources (e.g., hardware resources, software resources) to which one or more of the virtual resources 55 can be moved (or triggered to move).
- Although FIG. 1 illustrates an example of virtual resources 50 identified for movement between portions (e.g., data center resources) of the data center 100, the factors described above can also be used to trigger movement of one or more of the virtual resources 55 within a data center portion.
- The values of the performance metrics of one or more of the virtual resources 55 can be monitored with respect to the one or more threshold conditions 132 by the monitoring module 124.
- The performance metrics of one or more of the virtual resources 55 can include, for example, a utilization rate, a failure rate, a processing speed, and/or so forth, of the virtual resource(s) 55 when operating within the source portion 102 of the data center 100.
- A threshold condition database 132 including information representing one or more threshold conditions can be stored in the memory 130.
- The virtual resource VR2 is identified for movement from the source portion 102 of the data center 100 to the destination portion 104 of the data center 100 by the monitoring module 124 of the management module 120 in response to a value of a performance metric (or a set of values of the performance metric) satisfying a threshold condition included in the threshold condition database 132.
- The virtual resource VR2 can be identified as a virtual resource that should be moved (or triggered to move) because the performance of the virtual resource VR2 (as indicated by values of performance metrics) falls below a specified threshold level (as indicated by a threshold condition) represented within the threshold condition database 132.
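The threshold check described above can be sketched in Python. The metric values and the shape of the threshold condition here are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch of the monitoring module's threshold check: a
# virtual resource is flagged for movement when all of its recent
# performance metric values satisfy the threshold condition.

def satisfies_threshold(values, threshold, below=True):
    """Return True if every recent value is below (or above) the threshold."""
    if below:
        return all(v < threshold for v in values)
    return all(v > threshold for v in values)

def identify_candidates(metrics, conditions):
    """metrics: {vr: [recent metric values]}; conditions: {vr: (threshold, below)}."""
    candidates = []
    for vr, values in metrics.items():
        threshold, below = conditions.get(vr, (None, True))
        if threshold is not None and satisfies_threshold(values, threshold, below):
            candidates.append(vr)
    return candidates

# VR2's recent utilization values all fall below the 20% threshold,
# so it is identified for movement; VR1 is not.
metrics = {"VR1": [55, 60, 58], "VR2": [12, 9, 15]}
conditions = {"VR1": (20, True), "VR2": (20, True)}
print(identify_candidates(metrics, conditions))  # ['VR2']
```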
- The monitoring module 124 can be configured to receive (e.g., collect, access) values of performance metrics related to one or more of the virtual resources 55 periodically, randomly, at specified intervals, in response to an instruction from a user, based on a user preference (which can be stored in the memory 130), and/or so forth.
- The types of performance metric values collected by the monitoring module 124 and/or statistics calculated by the monitoring module 124 based on the performance metric values can be preselected, selected randomly, based on a preference of a user, and/or so forth.
- The user preference can identify the performance metric values to be used by the monitoring module 124 to trigger movement of one or more of the virtual resources 55.
- The monitoring module 124 can be configured to request and/or receive one or more performance metric values (or raw data that can be used to calculate a performance metric value) from one or more resources (e.g., hardware resources, software resources, virtual resources) of the data center 100.
- Values of performance metrics can be pushed from one or more resources of the data center 100 to the monitoring module 124.
- Values of performance metrics can be stored in the memory 130.
- The historical values of the performance metrics can be used by the monitoring module 124 to determine whether or not one or more virtual resources from the virtual resources 55 should be moved from a portion of the data center 100 to another portion of the data center 100. Examples of performance metric values associated with virtual resources are shown in FIGS. 3A through 3C.
- An operational relationship database 134 including information representing operational relationships between the virtual resources 55 can be stored in the memory 130.
- The information included in the operational relationship database 134 can represent operational dependencies between the virtual resources 55, such as a requirement that two of the virtual resources 55 operate at the same host device. This physical proximity may be needed so that the virtual resources 55 can operate in a desirable fashion.
- The information included in the operational relationship database 134 can be referred to as a mapping of operational relationships.
- The operational relationships represented within the operational relationship database 134 can be tiered operational relationships. An example of an operational relationship database that includes tiered operational relationships is described in connection with FIG. 4A.
- The virtual resource VR4 is identified for movement from the source portion 102 of the data center 100 to the destination portion 104 of the data center 100 by the management module 120 based on an operational relationship stored in the operational relationship database 134.
- The virtual resource VR4 can be identified as a virtual resource that should be moved with the virtual resource VR2 to the destination portion 104 of the data center 100 because VR2 may not function in a desirable fashion unless VR4 is also operating within the same destination portion 104 of the data center 100 as VR2.
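The expansion of a flagged virtual resource into the full set to move, using the mapping of operational relationships, can be sketched as a transitive closure. The dictionary layout is an assumption for illustration:

```python
# Hypothetical sketch: expand a flagged virtual resource (e.g., VR2)
# into the full set to move by transitively following operational
# relationships, such as VR2 requiring VR4 at the same host device.

def expand_move_set(flagged, relationships):
    """relationships: {vr: set of resources it must be co-located with}."""
    to_move = set(flagged)
    stack = list(flagged)
    while stack:
        vr = stack.pop()
        for related in relationships.get(vr, set()):
            if related not in to_move:
                to_move.add(related)
                stack.append(related)  # follow dependencies transitively
    return to_move

relationships = {"VR2": {"VR4"}}
print(sorted(expand_move_set({"VR2"}, relationships)))  # ['VR2', 'VR4']
```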
- A virtual resource from the virtual resources 55 may not be moved from, for example, the source portion 102 of the data center 100 to the destination portion 104 of the data center 100 (or moved within a data center portion or within a set of data center resources) based on an operational relationship represented within the operational relationship database 134.
- A virtual resource from the virtual resources 55 can be identified for movement from one portion of the data center 100 to another portion of the data center 100 in response to a value of a performance metric satisfying a threshold condition.
- The virtual resource may nonetheless not be moved (e.g., may be prevented from moving) based on an operational relationship indicating that movement of the virtual resource would disrupt the operation of other virtual resources in an undesirable fashion.
- The benefits associated with movement of the virtual resource can be outweighed by disruptions that could be caused by movement of the virtual resource away from other virtual resources that have an operational relationship with the virtual resource.
- An availability database 136 including information representing the availability of the resources of the data center 100 can also be stored in the memory 130.
- The information stored in the availability database 136 can represent the capacity available in one or more portions (e.g., portions of data center resources) of the data center 100.
- An example of an availability database is shown in FIG. 4B .
- The virtual resource VR2 and the virtual resource VR4 can be moved as a set of virtual resources 50 to the destination portion 104 of the data center 100 because the destination portion 104 of the data center 100 has resources available, as indicated in the availability database 136, to operate the set of virtual resources 50.
- The management module 120 can be configured to determine, based on information represented within the availability database 136, whether or not one or more of the virtual resources 55 can be moved from one portion of the data center 100 (e.g., one portion of data center resources) to another portion of the data center 100 (e.g., another portion of data center resources).
- One or more virtual resources from the virtual resources 55 may not be moved from, for example, the source portion 102 of the data center 100 to the destination portion 104 of the data center 100 (or moved within a data center portion) based on a lack of availability of hardware resources to operate the virtual resource(s).
- A set of virtual resources from the virtual resources 55 can be identified for movement from one portion of the data center 100 to another portion of the data center 100 in response to a value of a performance metric satisfying a threshold condition and based on an operational relationship between the virtual resources in the set of virtual resources.
- One or more of the virtual resources from the set of virtual resources may not be moved (e.g., may be prevented from moving) based on a lack of capacity to operate one or more of the virtual resources from the set of virtual resources at a destination portion of the data center 100 .
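The availability check that gates the move can be sketched as follows; representing each resource's footprint as a single scalar capacity value is a simplifying assumption:

```python
# Hypothetical sketch: the set of virtual resources is moved only when
# the destination portion of the data center has enough available
# capacity, as would be recorded in an availability database.

def can_move(move_set, required_capacity, available_capacity):
    """required_capacity: {vr: capacity units needed};
    available_capacity: capacity units free at the destination."""
    needed = sum(required_capacity[vr] for vr in move_set)
    return needed <= available_capacity

required = {"VR2": 3, "VR4": 2}
print(can_move({"VR2", "VR4"}, required, 6))  # True: 5 units fit in 6
print(can_move({"VR2", "VR4"}, required, 4))  # False: 5 units exceed 4
```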
- A rules-based algorithm, a set of threshold conditions, and/or user preferences can be used by the management module 120 to automatically resolve conflicts between these factors and determine whether or not a virtual resource from the virtual resources 55 should be moved within the data center 100.
- The monitoring module 124 can be configured to send a notification to, for example, a user (e.g., a data center administrator, a client, a customer) via a user interface (not shown) indicating that one or more of the virtual resources 55 should be moved.
- The management module 120 can be configured to move (or trigger movement of) the virtual resource(s) 55 only when authorized to do so by the user.
- The monitoring module 124 can be configured to solicit authorization from, for example, a user via the user interface for movement of one or more of the virtual resources 55. When authorization is received from the user via the user interface, the monitoring module 124 can be configured to move (or trigger movement of) the virtual resources 55 within the data center 100.
- The management module 120 can be configured to trigger movement of one or more of the virtual resources 55 based on a schedule (e.g., a schedule stored at the memory 130). In some embodiments, the management module 120 can be configured to trigger movement of one or more of the virtual resources 55 so that they are operating at a first set of specified locations within the data center 100 (e.g., within the first portion 102 of the data center 100) when in a first configuration (which can be referred to as a first mode) and operating at a second set of specified locations within the data center 100 (e.g., within the second portion 104 of the data center 100) when in a second configuration (which can be referred to as a second mode).
- Movement between the first configuration and the second configuration can be triggered based on a schedule.
- The movement between the modes can be referred to as a mode switch.
- The movement between the modes can be based on, for example, temporal considerations, performance thresholds, and/or so forth.
- The virtual resources 55 can be managed by the management module 120 so that they are in a first mode (or configuration) during typical day-time operations, in a second mode (or configuration) during evening batch operations, in a third mode (or configuration) during end-of-month closing operations, and/or so forth.
- Mode switching can be triggered based on utilization rates of portions of the virtual resources.
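Schedule-driven mode switching can be sketched as below; the mode boundaries, hours, and placements are invented for illustration and are not from the patent:

```python
# Hypothetical sketch of mode switching: the management module selects
# an operating mode from a schedule and looks up where each virtual
# resource should operate in that mode.

def select_mode(hour, day_of_month):
    if day_of_month >= 28:
        return "month_end"      # end-of-month closing operations
    if 8 <= hour < 18:
        return "daytime"        # typical day-time operations
    return "evening_batch"      # evening batch operations

# Placement of virtual resources per mode (portion 102 vs. portion 104).
placements = {
    "daytime": {"VR2": "portion_102", "VR4": "portion_102"},
    "evening_batch": {"VR2": "portion_104", "VR4": "portion_104"},
    "month_end": {"VR2": "portion_104", "VR4": "portion_104"},
}

mode = select_mode(hour=21, day_of_month=12)
print(mode)              # evening_batch
print(placements[mode])  # both resources placed at portion 104
```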
- The hardware resources and/or software resources of the data center 100 can include one or more levels of infrastructure.
- The hardware resources of the data center 100 can include storage devices, host devices, access switches, aggregation devices, routers, interface components, cables, and/or so forth.
- The hardware resources of the data center 100 can include one or more processing devices (e.g., host devices, storage devices).
- The data center 100 can be configured so that the processing devices can be in communication with (e.g., coupled to) a layer of access switches that are in communication with (e.g., coupled to) a layer of aggregation devices.
- The aggregation devices can function as gateway devices into a set of routers/switches that function as core switching elements of the data center 100.
- The processing devices can be configured to communicate with one another via at least a portion of the infrastructure of the data center 100.
- The data center 100 infrastructure can have a live side and a redundant side (which can function as a back-up of the live side).
- The data center 100 can also include software resources, for example, management modules (such as the management module 120), operating systems, hypervisors 110 (e.g., VMware hypervisor, Xen hypervisor, Hyper-V hypervisor), and/or so forth.
- The software resources can be configured to enable use of the hardware resources of the data center 100 in a particular fashion.
- The hypervisors can be configured to facilitate (or enable) virtualization of hardware resources of the processing device(s).
- The operating systems can be installed at hardware resources such as routers, aggregation devices, core switching elements, and/or so forth so that other software resources can function at these hardware resources in a desirable fashion.
- The data center 100 can be a cloud computing environment where the hardware resources and/or software resources are shared by multiple virtual resources associated with one or more users (e.g., clients, customers).
- The hardware resources (e.g., host devices, access switches, aggregation devices, core switching elements) and/or software resources (e.g., operating systems, hypervisors) of the data center 100 can collectively define a virtualized environment.
- The virtualized environment defined by the data center 100 can be referred to as a data center virtualized environment.
- One or more portions of the management module 120 can be (or can include) a hardware-based module (e.g., an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA)) and/or a software-based module (e.g., a module of computer code, a set of processor-readable instructions that can be executed at a processor).
- The management module 120 can include one or more memory portions (e.g., a random-access memory (RAM) portion, a shift register, a cache) that can be used during operation of one or more functions of the management module 120.
- One or more of the functions associated with the management module 120 can be included in different modules (not shown) and/or combined into one or more modules (not shown).
- The management module 120 can be a centralized management module configured to handle data center management for the entire data center 100, or can be a de-centralized management module configured to handle management of only a portion of the data center 100.
- The management module 120 can be configured to perform various functions in addition to management of movement of the virtual resources 55.
- The management module 120 can be configured to manage disaster recovery of the data center, virtual resource provisioning, event reporting, data center security, and/or so forth (which can be collectively referred to as management functions) via interactions with various potentially incompatible hypervisors executing within a data center environment.
- The management module 120 can be configured to perform various management functions associated with the operation of virtual resources at host devices, each of which can be operating a hypervisor that has an incompatible hypervisor platform.
- A virtual resource, when operating with a hypervisor that has a hypervisor platform, can be referred to as operating within the hypervisor environment.
- Hypervisors can be incompatible. For example, function calls and/or signaling protocols used by a hypervisor based on a first hypervisor platform may not be compatible with another hypervisor based on a second hypervisor platform.
- The management module 120 can be configured to, for example, handle signaling so that the management module 120 can manage one or more virtual resources of a data center via a hypervisor independent of the platform of the hypervisor.
- The platform of a hypervisor can be defined, for example, by a particular runtime library, a functionality, an architecture, a communication protocol, an operating system, a programming language, a hypervisor version, and/or so forth.
- The platform of a hypervisor can be, for example, based on a hosted software application architecture executing within an operating-system environment, or a native software application architecture that executes directly on the hardware of one or more host devices (not shown).
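The platform-independent management described above can be sketched with an adapter layer. The class and method names below are illustrative assumptions, not vendor APIs:

```python
# Hypothetical sketch: incompatible hypervisor platforms are hidden
# behind a common interface so the management module can trigger
# virtual resource movement the same way on any platform.

from abc import ABC, abstractmethod

class HypervisorAdapter(ABC):
    @abstractmethod
    def move(self, vr, destination):
        """Move a virtual resource to a destination portion."""

class PlatformAAdapter(HypervisorAdapter):
    def move(self, vr, destination):
        # Would translate to platform A's function calls / protocol.
        return f"platform-A migrate {vr} -> {destination}"

class PlatformBAdapter(HypervisorAdapter):
    def move(self, vr, destination):
        # Would translate to platform B's (incompatible) equivalents.
        return f"platform-B relocate {vr} -> {destination}"

def move_virtual_resource(adapter, vr, destination):
    # The management module depends only on the common interface.
    return adapter.move(vr, destination)

print(move_virtual_resource(PlatformAAdapter(), "VR2", "portion_104"))
```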
- The processing device 150 is included in the infrastructure of the data center 100.
- Alternatively, the processing device 150 can be disposed outside of the infrastructure of the data center 100.
- The management module 120 can be accessed via a user interface, which can be configured so that a user (e.g., a data center administrator, a network administrator, a customer, a source owner) can send signals (e.g., control signals, input signals, signals related to instructions) to the management module 120 and/or receive signals (e.g., output signals) from the management module 120.
- The user interface can be included in the processing device 150 and/or can be included in a client device (e.g., a remote client device) outside of the infrastructure of the data center 100.
- The memory 130 can be, for example, a random-access memory (RAM) (e.g., a dynamic RAM, a static RAM), a flash memory, a removable memory, and/or so forth.
- The information stored in the memory 130 can define a database that can be implemented as, for example, a relational database, an indexed database, a table, and/or so forth.
- Although the memory 130 is shown as being local to the management module 120, in some embodiments, one or more portions of the databases 132, 134, and/or 136, which are stored in the memory 130, can be stored in a remote memory that can be accessed by the management module 120.
- Portions of the databases 132, 134, and/or 136 can be stored in a separate (e.g., remote) storage device (e.g., storage facility) that can be accessed by the management module 120 via a network (e.g., a local area network (LAN), a wide area network (WAN), a mobile network such as a 3G network) (not shown).
- FIG. 2 is a schematic diagram that illustrates a set of virtual resources 80 identified for movement from one data center cluster to another data center cluster, according to an embodiment.
- FIG. 2 illustrates a set of virtual resources 80 being moved from data center cluster A to data center cluster B.
- The data center cluster A can be referred to as a source data center cluster, and the data center cluster B can be referred to as a destination data center cluster.
- The set of virtual resources 80 moved from data center cluster A to data center cluster B includes virtual resources VM2 and VM7.
- The data center cluster A is configured to operate virtual resources VM1 through VM7, and the data center cluster B is configured to operate virtual resources VM8 through VM12.
- The virtual resources VM1 through VM12 can be collectively referred to as virtual resources 85.
- The movement of the virtual resources 80 from data center cluster A to data center cluster B can be triggered by a management module 220 using a threshold condition database 232, an operational relationship database 234, and/or an availability database 236 stored in a memory 230.
- The management module 220 can be executed within a processor 240 of a processing device 250, and the management module 220 can include a monitoring module 224.
- Data center clusters can be defined by groups of host devices (e.g., a group of more than 2 host devices, a group of 8 host devices) that function, from the perspective of hypervisors installed within host devices of each of the data center clusters, as isolated virtual resource movement regions.
- hypervisors installed within host devices of data center cluster A may be configured so that movement of virtual resources handled by the hypervisors can only occur between host devices that define the data center cluster A.
- the management module 220 , which can manage virtual resources across data center clusters, can be configured to store information in the databases 232 , 234 , and/or 236 about multiple data center clusters (such as both data center cluster A and data center cluster B) so that the management module 220 can identify one or more of the virtual resources 85 for movement between data center cluster A and data center cluster B.
- movement of one or more of the virtual resources 85 between data center cluster A and data center cluster B can be performed automatically (e.g., triggered so that it occurs automatically), or performed manually (with the virtual resource(s) temporarily deactivated during movement).
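The trigger described above can be sketched as a check of monitored metric values against a threshold-condition database. This is an illustrative sketch only, not the patent's implementation; the function name, the metric samples, and the 90% CPU condition are all assumptions.

```python
# Hypothetical sketch: a management module flags virtual resources whose
# latest metric values satisfy a registered threshold condition.
def check_thresholds(metrics, threshold_conditions):
    """Return IDs of virtual resources whose latest metric value
    satisfies the threshold condition registered for that metric."""
    flagged = []
    for vm_id, samples in metrics.items():
        for metric_name, value in samples.items():
            condition = threshold_conditions.get(metric_name)
            if condition is not None and condition(value):
                flagged.append(vm_id)
                break  # one satisfied condition is enough to flag the VM
    return flagged

# Example: flag any VM whose CPU utilization exceeds 90%.
conditions = {"cpu_utilization": lambda v: v > 0.90}
metrics = {
    "VM2": {"cpu_utilization": 0.95},
    "VM7": {"cpu_utilization": 0.97},
    "VM1": {"cpu_utilization": 0.40},
}
print(sorted(check_thresholds(metrics, conditions)))  # ['VM2', 'VM7']
```

In this sketch a flagged resource (like VM 2 or VM 7 above) would then become a candidate for movement to another data center cluster.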
- each of the data center clusters A, B has resources that are managed as data center units. Specifically, data center cluster A has resources that are managed as data center units A 1 through A 4 , and data center cluster B has resources that are managed as data center units B 1 through B 4 . As shown in FIG. 2 , the set of resources 80 is moved from the resources managed as data center unit A 4 (in data center cluster A) to the resources managed as data center unit B 2 (in data center cluster B).
- the data center units can be collectively referred to as data center units 260 .
- the data center units 260 can each be managed as a specified portion of resources (e.g., hardware resources, software resources, data center resources) of the data center 200 .
- resources of the data center 200 can be divided into (e.g., partitioned into) data center units 260 that can be used, for example, to handle processing associated with one or more virtual resources.
- the data center units 260 can be assigned for use by a specific user (e.g., assigned for operation of virtual resources of a user).
- the resources managed as one or more of the data center units 260 can be used by a user, for example, to operate one or more virtual resources (such as virtual resource VM 3 ) of the user.
- the user can be a computing element (e.g., a server, a personal computer, a personal digital assistant (PDA)), a data center administrator, a customer, a client, a company, and/or so forth.
- At least a portion of the information included in the availability database 236 can be based on the availability of the data center units 260 (or portions thereof). Accordingly, a virtual resource can be moved so that it operates within the resources of a data center unit 260 if those resources are sufficient to support operation of the virtual resource.
- the management module 220 can be configured to move (or trigger movement of) virtual resources of a user only to one or more of the data center units 260 assigned to the user. Accordingly, the management module 220 can be configured to identify virtual resources for movement based on assignment of the data center units 260 to one or more users.
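The user-assignment constraint above can be expressed as a filter over candidate units: only units assigned to the same user, with enough spare capacity, qualify as destinations. The data layout and names below are illustrative assumptions, not from the patent.

```python
def candidate_units(units, user_id, required_capacity):
    """Return names of data center units assigned to `user_id` that have
    enough available capacity to receive a moved virtual resource."""
    return [u["name"] for u in units
            if u["user"] == user_id and u["available"] >= required_capacity]

# Hypothetical pool: capacities expressed as fractions of a unit.
units = [
    {"name": "A4", "user": "user-1", "available": 0.10},
    {"name": "B2", "user": "user-1", "available": 0.60},
    {"name": "B3", "user": "user-2", "available": 0.90},
]
print(candidate_units(units, "user-1", 0.50))  # ['B2']
```

Note that B 3 is excluded despite having the most capacity, because it is assigned to a different user.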
- the management module 220 can be configured to modify (or request authorization to modify) a number of data center units (which can be an integer number) assigned to a user in response to identification of one or more virtual resources for movement within the data center 200 .
- the management module 220 can be configured to identify a set of virtual resources of a user for movement based on, for example, a set of performance metrics and a set of operational relationships.
- the management module 220 can be configured to modify (or request authorization to modify) a number of data center units (such as data center units 260 ) assigned to the user when the capacity of the data center units assigned to the user would be insufficient to support the movement of the set of virtual resources.
- the number of data center units assigned to the user at the first portion of the data center 200 can be reduced.
- the hardware resources (and the associated software resources to support the hardware resources) of one or more of the data center units 260 can be managed so that they perform at (or are capable of performing at), for example, predefined resource limit values (e.g., predefined hardware resource limit values).
- the hardware resources of one or more of the data center units 260 can be managed so that they perform at, for example, a specified level of network bandwidth (e.g., 10 megabits/second (Mb/s) of network bandwidth, a specified level of network bandwidth of more than 1 Mb/s), a specified level of processing speed (e.g., a processor speed of 300 megahertz (MHz), a processor speed of 600 MHz, a specified processor speed of more than 200 MHz), a specified input/output (I/O) speed of a storage device (e.g., a disk I/O speed of 40 I/O operations per second (IOPS), a specified disk I/O speed of more than 10 IOPS), and/or a specified storage device bandwidth (e.g., a disk bandwidth of 10 Mb/s, a specified level of disk bandwidth of more than 10 Mb/s).
- a specified portion of hardware resources can also be reserved as part of one or more of the data center unit(s) 260 .
- the data center unit(s) 260 can also have a specified level of a storage device (e.g., a disk size of 30 gigabytes (GB), a specified disk size of more than 1 GB) and/or a specified memory space (e.g., a memory storage capacity of 768 megabytes (MB), a specified memory storage capacity of more than 64 MB) allocated to the data center unit(s) 260 .
- the hardware resources (and accompanying software) of the data center 200 can be partitioned so that the hardware (and/or software) resources of the data center units 260 are guaranteed to perform at the predefined resource limit values.
- the resources of the data center units 260 can be managed so that they provide guaranteed levels of service that correspond with each (or every) predefined resource limit value from a set of predefined resource limit values. More details related to management of resources related to data units are set forth in co-pending patent application Ser. No. 12/709,962, filed on Feb. 22, 2010, entitled, “Methods and Apparatus Related to Unit-Based Virtual Resources within a Data Center Environment,” which is incorporated herein by reference in its entirety.
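A data center unit defined by predefined resource limit values can be modeled as a fixed bundle of guarantees, with a check that a host can back every limit before the unit is placed there. The class and field names below are illustrative assumptions; the example limit values (10 Mb/s, 300 MHz, 40 IOPS, 30 GB, 768 MB) mirror the examples given in the text.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataCenterUnitLimits:
    """Predefined resource limit values that define one data center unit."""
    network_mbps: float          # e.g., 10 Mb/s of network bandwidth
    cpu_mhz: float               # e.g., a 300 MHz processing allocation
    disk_iops: float             # e.g., 40 disk I/O operations per second
    disk_bandwidth_mbps: float   # e.g., 10 Mb/s of disk bandwidth
    disk_gb: float               # e.g., a 30 GB disk allocation
    memory_mb: float             # e.g., 768 MB of memory

UNIT_LIMITS = DataCenterUnitLimits(10.0, 300.0, 40.0, 10.0, 30.0, 768.0)

def can_guarantee(host_free, limits):
    """A host can back a unit only if it can guarantee every limit value."""
    return all(host_free[k] >= getattr(limits, k) for k in host_free)

host = {"network_mbps": 100.0, "cpu_mhz": 2400.0, "disk_iops": 500.0,
        "disk_bandwidth_mbps": 80.0, "disk_gb": 500.0, "memory_mb": 8192.0}
print(can_guarantee(host, UNIT_LIMITS))  # True
```

Requiring every limit to be satisfied (rather than some aggregate score) matches the guaranteed-level-of-service requirement: a unit that meets five of six limits would still fail its guarantee.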
- a data center unit assigned to a user can be moved from a first portion of the data center 200 (such as data center cluster A) to a second portion of the data center 200 (such as data center cluster B).
- data center resources at the first portion of the data center 200 that are managed as the data center unit and assigned to the user can be replaced with data center resources at the second portion of the data center 200 so that they can be managed at the second portion of the data center 200 as the data center unit assigned to the user.
- Virtual resources of the user that were previously operated at the first portion of the data center 200 (and were previously managed as the data center unit) can be operated at the second portion of the data center 200 (and are newly managed as the data center unit) with the movement of the data center unit.
- data center unit assignments can be used to account for data center resources (e.g., data center hardware resources, data center software resources) used to operate a virtual resource of a user.
- FIGS. 3A through 3C are graphs that illustrate performance metric values associated with virtual resources, according to an embodiment.
- the values of performance metric X (shown on the y-axis) are plotted versus time (shown on the x-axis) for virtual resources V 1 , V 2 , and V 3 , respectively.
- the values of the performance metric X for virtual resource V 1 are above a threshold limit value P 1 before time T 1
- the values of the performance metric X are below threshold limit value P 1 after time T 1 .
- a monitoring module (such as the monitoring module 134 shown in FIG. 1 ) can be configured to identify the virtual resource V 1 for movement within the data center in response to the values of the performance metric X satisfying a threshold condition associated with the threshold limit value P 1 .
- the virtual resource V 2 can also be identified for movement within the data center based on an operational relationship between the virtual resource V 1 and the virtual resource V 2 .
- the virtual resource V 2 can be identified by a management module for movement with the virtual resource V 1 to a common destination portion (e.g., a common host device, a common data center cluster) of the data center based on an operational relationship between the virtual resource V 2 and the virtual resource V 1 .
- the operational relationship can be determined based on operation relationship information related to the virtual resources V 1 , V 2 , and V 3 stored in a database.
- the management module may determine that virtual resource V 3 should not be moved with virtual resources V 1 and V 2 to a destination portion of a data center because a capacity of the destination portion of the data center is insufficient to operate all of the virtual resources V 1 , V 2 , and V 3 .
- the identification of virtual resource V 2 for movement with V 1 rather than virtual resource V 3 can be based on a rank ordering of the respective operational relationships of virtual resources V 2 and V 3 with virtual resource V 1 , a user preference, and/or so forth.
- the virtual resource V 3 may not be identified for movement with virtual resource V 1 based on a value of a different performance metric (different than performance metric X) satisfying a threshold condition.
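The selection among V 2 and V 3 described above can be sketched as a greedy pass over operationally related resources, taken in rank order, subject to the destination's capacity. This is a sketch under assumptions: the rank values, demand figures, and capacity are illustrative, and the patent does not prescribe a greedy strategy.

```python
def select_companions(trigger_vm, relationships, demands, capacity):
    """Starting from the VM that satisfied the threshold condition, add
    operationally related VMs in rank order (lower rank = stronger
    relationship) while the destination capacity allows it."""
    chosen = [trigger_vm]
    used = demands[trigger_vm]
    for vm, rank in sorted(relationships.get(trigger_vm, {}).items(),
                           key=lambda kv: kv[1]):
        if used + demands[vm] <= capacity:
            chosen.append(vm)
            used += demands[vm]
    return chosen

# V2 is ranked above V3 in its relationship with V1.
relationships = {"V1": {"V2": 1, "V3": 2}}
demands = {"V1": 30, "V2": 20, "V3": 25}
print(select_companions("V1", relationships, demands, capacity=55))  # ['V1', 'V2']
```

With a larger destination capacity (e.g., 80), the same pass would also admit V 3 , matching the idea that V 3 is excluded only because the destination cannot operate all three.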
- FIG. 4A illustrates a database 400 that includes representations of operational relationships between virtual resources, according to an embodiment.
- virtual resources W 1 through W 6 (shown in column 410 ) are operating at data center unit UC (shown in column 440 ).
- the portion of capacity consumption of the hardware resources managed as the data center unit UC is shown in column 450 .
- the virtual resources 410 are associated with one or both of the operational relationships—tier 1 operational relationship (shown in column 420 ) and tier 2 operational relationship (shown in column 430 ).
- the “Y” values in the tier 1 operational relationship 420 can represent that the virtual resources 410 are associated with a particular user, and the “N” values can represent that the virtual resources 410 are not associated with the user.
- the “Y” values in the tier 2 operational relationship 430 can represent that the virtual resource 410 has an operational dependency with other virtual resources 410 also designated with a “Y” value in the operational relationship.
- the virtual resources in the tier 2 operational relationship may be needed to operate at, for example, the same host device or have a specified topological proximity based on the operational dependency.
- the “N” values in the tier 2 operational relationship 430 can represent that the virtual resource 410 is not associated with other virtual resources 410 associated with the tier 2 operational relationship 430 .
- the operational relationships can represent different operational relationships than those described above.
- the tier 1 operational relationship 420 can represent an optional operational dependency.
- a database such as database 400 can include more or fewer than two operational relationships.
- the operational relationships can overlap and/or can be hierarchically related.
- the operational relationships can be rank ordered (e.g., associated with a priority) (not shown) so that a management module gives the operational relationships precedence in accordance with the rank order.
- the operational relationships can be defined by a data center administrator and/or defined by a user associated with the virtual resources 410 .
- the management module can be configured to use information such as that shown in database 400 to identify additional virtual resources that should be moved with the virtual resource.
- virtual resource W 1 can be identified by a management module (e.g., a monitoring module of the management module) for movement based on a value of a performance metric associated with virtual resource W 1 satisfying a threshold condition.
- the management module can also be configured to identify virtual resource W 4 as a virtual resource that is also to be moved with virtual resource W 1 because these two virtual resources have an operational dependency as indicated in the database 400 .
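The co-movement lookup just described can be sketched against rows shaped like database 400: if the triggering virtual resource carries a tier 2 dependency, every resource sharing that designation moves with it. The row values below are illustrative assumptions, not read from the figure.

```python
# Rows mirror the columns of database 400: virtual resource (column 410),
# tier 1 flag (column 420), tier 2 flag (column 430). Values are hypothetical.
DB_400 = [
    {"vm": "W1", "tier1": "Y", "tier2": "Y"},
    {"vm": "W2", "tier1": "Y", "tier2": "N"},
    {"vm": "W3", "tier1": "N", "tier2": "N"},
    {"vm": "W4", "tier1": "Y", "tier2": "Y"},
]

def co_move_set(trigger_vm, rows):
    """If the triggering VM has a tier 2 operational dependency, include
    every VM sharing the tier 2 designation in the set to be moved."""
    by_vm = {r["vm"]: r for r in rows}
    if by_vm[trigger_vm]["tier2"] == "Y":
        return sorted(r["vm"] for r in rows if r["tier2"] == "Y")
    return [trigger_vm]

print(co_move_set("W1", DB_400))  # ['W1', 'W4']
```

This reflects the requirement that tier 2 resources may need to operate at the same host device or within a specified topological proximity, so they travel as one set.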
- FIG. 4B illustrates a database 470 that includes available capacity values of data center units, according to an embodiment.
- the data center units UD, UE, and UF (shown in column 480 ) have available capacities (shown in column 490 ) of 10%, 60%, and 90%, respectively.
- the resources associated with the data center units 480 can be referred to as a pool of resources (e.g., data center resources).
- the operational relationship information included in database 400 can be used in conjunction with database 470 (shown in FIG. 4B ) to identify a destination data center unit for operation of one or more of the virtual resources if the virtual resource(s) are identified for movement to one of the data center units 480 .
- virtual resource W 1 and virtual resource W 4 can be identified for movement as a set of virtual resources based on their tier 2 operational relationship shown in FIG. 4A . Assuming that the resources of the data center unit UC (shown in column 440 of FIG. 4A ) can no longer desirably operate these virtual resources, a destination for the set can be selected from the data center units 480 based on the available capacity values shown in FIG. 4B .
- database 400 and database 470 can include information related to the association of the data center units and virtual resources to particular users represented by user identifiers. Accordingly, the movement of virtual resources and/or identification of destination data center resources (e.g., data center units) can also be determined by a management module based on the user identifiers. Specifically, the management module can be configured to only identify virtual resources associated with a particular user for movement to data center resources also associated with the same user.
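Combining the availability values of database 470 with user identifiers, destination selection can be sketched as a best-fit choice among units owned by the same user. The available capacities (10%, 60%, 90%) come from FIG. 4B; the user identifiers, ownership map, and best-fit policy are illustrative assumptions.

```python
AVAILABILITY_470 = {"UD": 0.10, "UE": 0.60, "UF": 0.90}          # per FIG. 4B
UNIT_OWNERS = {"UD": "user-1", "UE": "user-2", "UF": "user-1"}   # hypothetical

def pick_destination(required, user_id, availability, owners):
    """Choose the unit owned by `user_id` with the smallest sufficient
    available capacity (a best-fit choice; the patent mandates no policy)."""
    candidates = [(cap, unit) for unit, cap in availability.items()
                  if owners[unit] == user_id and cap >= required]
    return min(candidates)[1] if candidates else None

print(pick_destination(0.50, "user-1", AVAILABILITY_470, UNIT_OWNERS))  # 'UF'
```

The owner filter enforces the rule that virtual resources of a particular user move only to data center resources associated with that same user; returning `None` models the case where the user's units lack capacity and an assignment change would be needed.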
- FIG. 5 is a flowchart that illustrates a method for identifying a set of virtual resources for movement within a data center, according to an embodiment.
- A multi-tiered representation (e.g., a multi-tiered mapping) of operational relationships between virtual resources from a group of virtual resources is received. The multi-tiered representation can be similar to the representations shown in FIG. 4A .
- Availability information related to a pool of data center resources is received, at 510 .
- the available capacity values can be similar to those shown in FIG. 4B .
- the pool of data center resources can be data center resources that are not assigned to a user.
- An indicator that performance of a virtual resource from the group of virtual resources, when operating within a data center resource, has satisfied a threshold condition is received, at 520 .
- the performance can be related to a failure rate of the virtual resource when operating within the data center resource.
- (1) a set of virtual resources from the group of virtual resources and (2) a portion of the data center resources to operate the set of virtual resources are identified based on the multi-tiered representation of operational relationships and the availability information.
- only a subset of the information associated with the multi-tiered representation of operational relationships and/or the availability information may be used by a management module to identify the set of virtual resources and the portion of the hardware resources to operate the set of virtual resources.
- the portion of the hardware resources can be managed as data center units based on a set of predefined hardware resource limit values.
- An instruction configured to trigger movement of the set of virtual resources to the portion of the data center resources is sent, at 540 .
- the instruction can be defined at and sent from a management module to another module (which can be in a different processing device than the management module).
- the instruction can be sent to a module, separate from the management module, that is configured to move (or trigger movement of) the set of virtual resources to the portion of the data center resources.
- the instruction can be defined at a management module and sent to a module within the management module configured to move (or trigger movement of) the set of virtual resources to the portion of the data center resources.
- the instruction can be sent to, for example, a user via a user interface.
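The steps above (receive the operational relationships and availability information, receive the threshold indicator, identify the set and a destination, send the instruction) can be sketched end to end. Every name and value here is an illustrative assumption; the instruction format in particular is invented for the sketch.

```python
def move_virtual_resources(relationships, availability, indicator,
                           demands, send_instruction):
    """End-to-end sketch of the method of FIG. 5: given an indicator that
    a VM's performance satisfied a threshold condition, identify the set
    of VMs to move and a destination unit with enough capacity, then send
    an instruction configured to trigger the movement."""
    trigger_vm = indicator["vm"]
    move_set = sorted({trigger_vm, *relationships.get(trigger_vm, ())})
    needed = sum(demands[vm] for vm in move_set)
    # Best-fit scan: smallest unit that can hold the whole set.
    for unit, capacity in sorted(availability.items(), key=lambda kv: kv[1]):
        if capacity >= needed:
            send_instruction({"vms": move_set, "destination": unit})
            return unit
    return None  # no destination has sufficient capacity

sent = []
dest = move_virtual_resources(
    relationships={"W1": ["W4"]},
    availability={"UD": 10, "UE": 60, "UF": 90},
    indicator={"vm": "W1", "metric": "X"},
    demands={"W1": 30, "W4": 25},
    send_instruction=sent.append,
)
print(dest, sent[0]["vms"])  # UE ['W1', 'W4']
```

The `send_instruction` callback stands in for the module (inside or outside the management module, or a user interface) that actually performs or triggers the movement.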
- one or more portions of the data center resources can be reconfigured so that the portion(s) of the data center resources can operate the set of virtual resources.
- a hardware component of the data center can be configured (or reconfigured) so that the hardware component can operate at least a portion of the set of virtual resources in a desirable fashion.
- a software resource (e.g., a hypervisor platform) of the data center can be configured (or reconfigured) so that the software resource can be used to operate at least a portion of the set of virtual resources in a desirable fashion.
- Some embodiments described herein relate to a computer storage product with a computer-readable medium (also can be referred to as a processor-readable medium) having instructions or computer code thereon for performing various computer-implemented operations.
- Examples of computer-readable media include, but are not limited to: magnetic storage media such as hard disks, floppy disks, and magnetic tape; optical storage media such as Compact Disc/Digital Video Discs (CD/DVDs), Compact Disc-Read Only Memories (CD-ROMs), and holographic devices; magneto-optical storage media such as optical disks; carrier wave signal processing modules; and hardware devices that are specially configured to store and execute program code, such as Application-Specific Integrated Circuits (ASICs), Programmable Logic Devices (PLDs), and Read-Only Memory (ROM) and Random-Access Memory (RAM) devices.
- Examples of computer code include, but are not limited to, micro-code or micro-instructions, machine instructions, such as produced by a compiler, code used to produce a web service, and files containing higher-level instructions that are executed by a computer using an interpreter.
- embodiments may be implemented using, for example, a run-time environment and/or an application framework such as a Microsoft .NET framework, and/or Java, C++, or other programming languages (e.g., object-oriented programming languages) and/or development tools.
- Additional examples of computer code include, but are not limited to, control signals, encrypted code, and compressed code.
Abstract
Description
Claims (25)
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/709,943 US9027017B2 (en) | 2010-02-22 | 2010-02-22 | Methods and apparatus for movement of virtual resources within a data center environment |
PCT/US2011/025390 WO2011103390A1 (en) | 2010-02-22 | 2011-02-18 | Methods and apparatus for movement of virtual resources within a data center environment |
CN201180020127.4A CN102947796B (en) | 2010-02-22 | 2011-02-18 | For the method and apparatus of mobile virtual resource in thimble border in the data |
EP20110745297 EP2539817A4 (en) | 2010-02-22 | 2011-02-18 | Methods and apparatus for movement of virtual resources within a data center environment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/709,943 US9027017B2 (en) | 2010-02-22 | 2010-02-22 | Methods and apparatus for movement of virtual resources within a data center environment |
Publications (2)
Publication Number | Publication Date |
---|---|
US20110209146A1 US20110209146A1 (en) | 2011-08-25 |
US9027017B2 true US9027017B2 (en) | 2015-05-05 |
Family
ID=44477561
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/709,943 Active US9027017B2 (en) | 2010-02-22 | 2010-02-22 | Methods and apparatus for movement of virtual resources within a data center environment |
Country Status (4)
Country | Link |
---|---|
US (1) | US9027017B2 (en) |
EP (1) | EP2539817A4 (en) |
CN (1) | CN102947796B (en) |
WO (1) | WO2011103390A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9866450B2 (en) | 2010-02-22 | 2018-01-09 | Virtustream Ip Holding Company Llc | Methods and apparatus related to management of unit-based virtual resources within a data center environment |
US11216312B2 (en) | 2018-08-03 | 2022-01-04 | Virtustream Ip Holding Company Llc | Management of unit-based virtual accelerator resources |
Families Citing this family (46)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6901582B1 (en) | 1999-11-24 | 2005-05-31 | Quest Software, Inc. | Monitoring system for monitoring the performance of an application |
US7979245B1 (en) | 2006-05-17 | 2011-07-12 | Quest Software, Inc. | Model-based systems and methods for monitoring computing resource performance |
US8175863B1 (en) * | 2008-02-13 | 2012-05-08 | Quest Software, Inc. | Systems and methods for analyzing performance of virtual environments |
WO2011162744A1 (en) | 2010-06-22 | 2011-12-29 | Hewlett-Packard Development Company, L.P. | Methods and systems for planning application deployment |
US9729464B1 (en) | 2010-06-23 | 2017-08-08 | Brocade Communications Systems, Inc. | Method and apparatus for provisioning of resources to support applications and their varying demands |
US9317314B2 (en) * | 2010-06-29 | 2016-04-19 | Microsoft Techology Licensing, Llc | Techniques for migrating a virtual machine using shared storage |
US9183028B1 (en) | 2010-09-30 | 2015-11-10 | Amazon Technologies, Inc. | Managing virtual computing nodes |
US9384029B1 (en) | 2010-09-30 | 2016-07-05 | Amazon Technologies, Inc. | Managing virtual computing nodes |
US9104458B1 (en) * | 2010-09-30 | 2015-08-11 | Amazon Technologies, Inc. | Managing virtual computing nodes using isolation and migration techniques |
US20150106813A1 (en) * | 2010-10-21 | 2015-04-16 | Brocade Communications Systems, Inc. | Method and apparatus for provisioning of resources to support applications and their varying demands |
US8850430B2 (en) * | 2011-01-25 | 2014-09-30 | International Business Machines Corporation | Migration of virtual machines |
US9454406B2 (en) * | 2011-02-28 | 2016-09-27 | Novell, Inc. | Techniques for cloud bursting |
EP2673704B1 (en) * | 2011-04-07 | 2019-10-23 | Ent. Services Development Corporation LP | Method and apparatus for moving a software object |
US9215142B1 (en) | 2011-04-20 | 2015-12-15 | Dell Software Inc. | Community analysis of computing performance |
US8856784B2 (en) | 2011-06-14 | 2014-10-07 | Vmware, Inc. | Decentralized management of virtualized hosts |
US9026630B2 (en) | 2011-06-14 | 2015-05-05 | Vmware, Inc. | Managing resources in a distributed system using dynamic clusters |
US8701107B2 (en) | 2011-06-14 | 2014-04-15 | Vmware, Inc. | Decentralized management of virtualized hosts |
DE102012217202B4 (en) | 2011-10-12 | 2020-06-18 | International Business Machines Corporation | Method and system for optimizing the placement of virtual machines in cloud computing environments |
US9680716B2 (en) | 2011-12-12 | 2017-06-13 | Avocent Huntsville, Llc | System and method for monitoring and managing data center resources in real time incorporating manageability subsystem |
US20130238785A1 (en) * | 2012-03-06 | 2013-09-12 | Rackspace Us, Inc. | System and Method for Metadata Discovery and Metadata-Aware Scheduling |
US9557879B1 (en) | 2012-10-23 | 2017-01-31 | Dell Software Inc. | System for inferring dependencies among computing systems |
US10333820B1 (en) | 2012-10-23 | 2019-06-25 | Quest Software Inc. | System for inferring dependencies among computing systems |
US9608933B2 (en) * | 2013-01-24 | 2017-03-28 | Hitachi, Ltd. | Method and system for managing cloud computing environment |
US8972780B2 (en) * | 2013-01-31 | 2015-03-03 | Red Hat Israel, Ltd. | Low-latency fault-tolerant virtual machines |
US9203700B2 (en) * | 2013-05-21 | 2015-12-01 | International Business Machines Corporation | Monitoring client information in a shared environment |
US9912570B2 (en) | 2013-10-25 | 2018-03-06 | Brocade Communications Systems LLC | Dynamic cloning of application infrastructures |
US9948493B2 (en) * | 2014-04-03 | 2018-04-17 | Centurylink Intellectual Property Llc | Network functions virtualization interconnection gateway |
US11005738B1 (en) | 2014-04-09 | 2021-05-11 | Quest Software Inc. | System and method for end-to-end response-time analysis |
CN104137482B (en) * | 2014-04-14 | 2018-02-02 | 华为技术有限公司 | A kind of disaster tolerance data center configuration method and device under cloud computing framework |
US9479414B1 (en) | 2014-05-30 | 2016-10-25 | Dell Software Inc. | System and method for analyzing computing performance |
US10129168B2 (en) * | 2014-06-17 | 2018-11-13 | Analitiqa Corporation | Methods and systems providing a scalable process for anomaly identification and information technology infrastructure resource optimization |
US9692654B2 (en) * | 2014-08-19 | 2017-06-27 | Benefitfocus.Com, Inc. | Systems and methods for correlating derived metrics for system activity |
US9606826B2 (en) * | 2014-08-21 | 2017-03-28 | International Business Machines Corporation | Selecting virtual machines to be migrated to public cloud during cloud bursting based on resource usage and scaling policies |
US10110445B2 (en) * | 2014-09-27 | 2018-10-23 | At&T Global Network Services France, Sas | Closed control loops for data centers |
US9886296B2 (en) * | 2014-12-01 | 2018-02-06 | International Business Machines Corporation | Managing hypervisor weights in a virtual environment |
US10291493B1 (en) | 2014-12-05 | 2019-05-14 | Quest Software Inc. | System and method for determining relevant computer performance events |
EP3234796A4 (en) * | 2014-12-16 | 2018-07-04 | Telefonaktiebolaget LM Ericsson (publ) | Computer servers for datacenter management |
CN104601448B (en) | 2015-01-12 | 2017-11-28 | 腾讯科技(深圳)有限公司 | A kind of method and apparatus handled virtual card |
US9274758B1 (en) | 2015-01-28 | 2016-03-01 | Dell Software Inc. | System and method for creating customized performance-monitoring applications |
US9996577B1 (en) | 2015-02-11 | 2018-06-12 | Quest Software Inc. | Systems and methods for graphically filtering code call trees |
US10187260B1 (en) | 2015-05-29 | 2019-01-22 | Quest Software Inc. | Systems and methods for multilayer monitoring of network function virtualization architectures |
US10031768B2 (en) * | 2015-06-30 | 2018-07-24 | Vmware, Inc. | Host-gateway-facilitated aggregation of host-computer clusters |
US10200252B1 (en) | 2015-09-18 | 2019-02-05 | Quest Software Inc. | Systems and methods for integrated modeling of monitored virtual desktop infrastructure systems |
US10230601B1 (en) | 2016-07-05 | 2019-03-12 | Quest Software Inc. | Systems and methods for integrated modeling and performance measurements of monitored virtual desktop infrastructure systems |
JP7035858B2 (en) * | 2018-07-03 | 2022-03-15 | 富士通株式会社 | Migration management program, migration method and migration system |
US10776158B2 (en) * | 2019-01-31 | 2020-09-15 | Lockheed Martin Corporation | Management of application deployment across multiple provisioning layers |
Citations (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020059427A1 (en) | 2000-07-07 | 2002-05-16 | Hitachi, Ltd. | Apparatus and method for dynamically allocating computer resources based on service contract with user |
US20020184363A1 (en) | 2001-04-20 | 2002-12-05 | Steven Viavant | Techniques for server-controlled measurement of client-side performance |
US20030028642A1 (en) | 2001-08-03 | 2003-02-06 | International Business Machines Corporation | Managing server resources for hosted applications |
US20040111509A1 (en) | 2002-12-10 | 2004-06-10 | International Business Machines Corporation | Methods and apparatus for dynamic allocation of servers to a plurality of customers to maximize the revenue of a server farm |
US20040267897A1 (en) | 2003-06-24 | 2004-12-30 | Sychron Inc. | Distributed System Providing Scalable Methodology for Real-Time Control of Server Pools and Data Centers |
US20050039183A1 (en) | 2000-01-28 | 2005-02-17 | Francisco Romero | System and method for allocating a plurality of resources between a plurality of computing domains |
US20050102674A1 (en) | 2003-11-10 | 2005-05-12 | Takashi Tameshige | Computer resource distribution method based on prediction |
US20050108712A1 (en) | 2003-11-14 | 2005-05-19 | Pawan Goyal | System and method for providing a scalable on demand hosting system |
US20050120160A1 (en) | 2003-08-20 | 2005-06-02 | Jerry Plouffe | System and method for managing virtual servers |
US20050235286A1 (en) | 2004-04-15 | 2005-10-20 | Raytheon Company | System and method for topology-aware job scheduling and backfilling in an HPC environment |
US20060056618A1 (en) | 2004-09-16 | 2006-03-16 | International Business Machines Corporation | Enabling user control over automated provisioning environment |
US20060069594A1 (en) | 2004-07-01 | 2006-03-30 | Yasushi Yamasaki | Method and computer program product for resource planning |
US20060143617A1 (en) | 2004-12-29 | 2006-06-29 | Knauerhase Robert C | Method, apparatus and system for dynamic allocation of virtual platform resources |
US20060161988A1 (en) | 2005-01-14 | 2006-07-20 | Microsoft Corporation | Privacy friendly malware quarantines |
US20060190606A1 (en) | 2005-02-22 | 2006-08-24 | Kidaro Inc. | Data transfer security |
US20060259818A1 (en) | 2004-12-22 | 2006-11-16 | Microsoft Corporation | Deterministic multiprocessor computer system |
US7194616B2 (en) * | 2001-10-27 | 2007-03-20 | International Business Machines Corporation | Flexible temporary capacity upgrade/downgrade in a computer system without involvement of the operating system |
US20070106796A1 (en) | 2005-11-09 | 2007-05-10 | Yutaka Kudo | Arbitration apparatus for allocating computer resource and arbitration method therefor |
US20070118567A1 (en) | 2005-10-26 | 2007-05-24 | Hiromi Isokawa | Method for device quarantine and quarantine network system |
US20070115924A1 (en) | 2005-10-19 | 2007-05-24 | Marco Schneider | Methods and apparatus for authorizing and allocating outdial communication services |
US20070250929A1 (en) | 2006-04-21 | 2007-10-25 | Herington Daniel E | Automatic isolation of misbehaving processes on a computer system |
US20070266433A1 (en) | 2006-03-03 | 2007-11-15 | Hezi Moore | System and Method for Securing Information in a Virtual Computing Environment |
US20070271560A1 (en) | 2006-05-18 | 2007-11-22 | Microsoft Corporation | Deploying virtual machine to host based on workload characterizations |
US20080082977A1 (en) | 2006-09-29 | 2008-04-03 | Microsoft Corporation | Automatic load and balancing for virtual machines to meet resource requirements |
US20080109549A1 (en) | 2004-07-21 | 2008-05-08 | Kazushi Nakagawa | Rental Server System |
US20080163239A1 (en) | 2006-12-29 | 2008-07-03 | Suresh Sugumar | Method for dynamic load balancing on partitioned systems |
US20080183544A1 (en) | 2005-11-10 | 2008-07-31 | International Business Machines Corporation | Method for provisioning resources |
US20080263258A1 (en) * | 2007-04-19 | 2008-10-23 | Claus Allwell | Method and System for Migrating Virtual Machines Between Hypervisors |
US20080295096A1 (en) | 2007-05-21 | 2008-11-27 | International Business Machines Corporation | DYNAMIC PLACEMENT OF VIRTUAL MACHINES FOR MANAGING VIOLATIONS OF SERVICE LEVEL AGREEMENTS (SLAs) |
EP2040176A1 (en) | 2004-09-09 | 2009-03-25 | Solarflare Communications Incorporated | Dynamic Resource Allocation |
US20090138887A1 (en) | 2007-11-28 | 2009-05-28 | Hitachi, Ltd. | Virtual machine monitor and multiprocessor system |
WO2009072186A1 (en) | 2007-12-04 | 2009-06-11 | Fujitsu Limited | Resource lending controlling device, resource lending method and resource lending program |
US20090199198A1 (en) | 2008-02-04 | 2009-08-06 | Hiroshi Horii | Multinode server system, load distribution method, resource management server, and program product |
US20090254572A1 (en) | 2007-01-05 | 2009-10-08 | Redlich Ron M | Digital information infrastructure and method |
US20090276771A1 (en) | 2005-09-15 | 2009-11-05 | 3Tera, Inc. | Globally Distributed Utility Computing Cloud |
US20090293022A1 (en) | 2008-05-22 | 2009-11-26 | Microsoft Corporation | Virtual Machine Placement Based on Power Calculations |
US7664110B1 (en) | 2004-02-07 | 2010-02-16 | Habanero Holdings, Inc. | Input/output controller for coupling the processor-memory complex to the fabric in fabric-backplane enterprise servers |
US20100107172A1 (en) | 2003-12-31 | 2010-04-29 | Sychron Advanced Technologies, Inc. | System providing methodology for policy-based resource allocation |
US20100242045A1 (en) | 2009-03-20 | 2010-09-23 | Sun Microsystems, Inc. | Method and system for allocating a distributed resource |
US7908605B1 (en) | 2005-01-28 | 2011-03-15 | Hewlett-Packard Development Company, L.P. | Hierarchical control system for controlling the allocation of computer resources |
US20110093852A1 (en) | 2009-10-21 | 2011-04-21 | Sap Ag | Calibration of resource allocation during parallel processing |
US7941804B1 (en) * | 2005-10-31 | 2011-05-10 | Hewlett-Packard Development Company, L.P. | Allocating resources among tiered partitions of different types |
US20110131589A1 (en) | 2009-12-02 | 2011-06-02 | International Business Machines Corporation | System and method for transforming legacy desktop environments to a virtualized desktop model |
US20110131335A1 (en) | 2009-05-08 | 2011-06-02 | Cloudkick, Inc. | Methods and systems for cloud computing management |
US20110185064A1 (en) | 2010-01-26 | 2011-07-28 | International Business Machines Corporation | System and method for fair and economical resource partitioning using virtual hypervisor |
US20110239215A1 (en) | 2010-03-24 | 2011-09-29 | Fujitsu Limited | Virtual machine management apparatus |
US20120110328A1 (en) | 2010-10-27 | 2012-05-03 | High Cloud Security, Inc. | System and Method For Secure Storage of Virtual Machines |
US20120174097A1 (en) | 2011-01-04 | 2012-07-05 | Host Dynamics Ltd. | Methods and systems of managing resources allocated to guest virtual machines |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8209680B1 (en) * | 2003-04-11 | 2012-06-26 | Vmware, Inc. | System and method for disk imaging on diverse computers |
US20090024713A1 (en) * | 2007-07-18 | 2009-01-22 | Metrosource Corp. | Maintaining availability of a data center |
US8127296B2 (en) * | 2007-09-06 | 2012-02-28 | Dell Products L.P. | Virtual machine migration between processors having VM migration registers controlled by firmware to modify the reporting of common processor feature sets to support the migration |
2010
- 2010-02-22 US US12/709,943 patent/US9027017B2/en active Active

2011
- 2011-02-18 EP EP20110745297 patent/EP2539817A4/en not_active Ceased
- 2011-02-18 CN CN201180020127.4A patent/CN102947796B/en active Active
- 2011-02-18 WO PCT/US2011/025390 patent/WO2011103390A1/en active Application Filing
Patent Citations (49)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050039183A1 (en) | 2000-01-28 | 2005-02-17 | Francisco Romero | System and method for allocating a plurality of resources between a plurality of computing domains |
US20020059427A1 (en) | 2000-07-07 | 2002-05-16 | Hitachi, Ltd. | Apparatus and method for dynamically allocating computer resources based on service contract with user |
US20020184363A1 (en) | 2001-04-20 | 2002-12-05 | Steven Viavant | Techniques for server-controlled measurement of client-side performance |
US20030028642A1 (en) | 2001-08-03 | 2003-02-06 | International Business Machines Corporation | Managing server resources for hosted applications |
US7194616B2 (en) * | 2001-10-27 | 2007-03-20 | International Business Machines Corporation | Flexible temporary capacity upgrade/downgrade in a computer system without involvement of the operating system |
US20040111509A1 (en) | 2002-12-10 | 2004-06-10 | International Business Machines Corporation | Methods and apparatus for dynamic allocation of servers to a plurality of customers to maximize the revenue of a server farm |
US20040267897A1 (en) | 2003-06-24 | 2004-12-30 | Sychron Inc. | Distributed System Providing Scalable Methodology for Real-Time Control of Server Pools and Data Centers |
US20050120160A1 (en) | 2003-08-20 | 2005-06-02 | Jerry Plouffe | System and method for managing virtual servers |
US20050102674A1 (en) | 2003-11-10 | 2005-05-12 | Takashi Tameshige | Computer resource distribution method based on prediction |
US20050108712A1 (en) | 2003-11-14 | 2005-05-19 | Pawan Goyal | System and method for providing a scalable on demand hosting system |
US20100107172A1 (en) | 2003-12-31 | 2010-04-29 | Sychron Advanced Technologies, Inc. | System providing methodology for policy-based resource allocation |
US7664110B1 (en) | 2004-02-07 | 2010-02-16 | Habanero Holdings, Inc. | Input/output controller for coupling the processor-memory complex to the fabric in fabric-backplane enterprise servers |
US20050235286A1 (en) | 2004-04-15 | 2005-10-20 | Raytheon Company | System and method for topology-aware job scheduling and backfilling in an HPC environment |
US20060069594A1 (en) | 2004-07-01 | 2006-03-30 | Yasushi Yamasaki | Method and computer program product for resource planning |
US20080109549A1 (en) | 2004-07-21 | 2008-05-08 | Kazushi Nakagawa | Rental Server System |
EP2040176A1 (en) | 2004-09-09 | 2009-03-25 | Solarflare Communications Incorporated | Dynamic Resource Allocation |
US20060056618A1 (en) | 2004-09-16 | 2006-03-16 | International Business Machines Corporation | Enabling user control over automated provisioning environment |
US20060259818A1 (en) | 2004-12-22 | 2006-11-16 | Microsoft Corporation | Deterministic multiprocessor computer system |
US20060143617A1 (en) | 2004-12-29 | 2006-06-29 | Knauerhase Robert C | Method, apparatus and system for dynamic allocation of virtual platform resources |
US20060161988A1 (en) | 2005-01-14 | 2006-07-20 | Microsoft Corporation | Privacy friendly malware quarantines |
US7908605B1 (en) | 2005-01-28 | 2011-03-15 | Hewlett-Packard Development Company, L.P. | Hierarchical control system for controlling the allocation of computer resources |
US20060190606A1 (en) | 2005-02-22 | 2006-08-24 | Kidaro Inc. | Data transfer security |
US20090276771A1 (en) | 2005-09-15 | 2009-11-05 | 3Tera, Inc. | Globally Distributed Utility Computing Cloud |
US20070115924A1 (en) | 2005-10-19 | 2007-05-24 | Marco Schneider | Methods and apparatus for authorizing and allocating outdial communication services |
US20070118567A1 (en) | 2005-10-26 | 2007-05-24 | Hiromi Isokawa | Method for device quarantine and quarantine network system |
US7941804B1 (en) * | 2005-10-31 | 2011-05-10 | Hewlett-Packard Development Company, L.P. | Allocating resources among tiered partitions of different types |
US20070106796A1 (en) | 2005-11-09 | 2007-05-10 | Yutaka Kudo | Arbitration apparatus for allocating computer resource and arbitration method therefor |
US20080183544A1 (en) | 2005-11-10 | 2008-07-31 | International Business Machines Corporation | Method for provisioning resources |
US20070266433A1 (en) | 2006-03-03 | 2007-11-15 | Hezi Moore | System and Method for Securing Information in a Virtual Computing Environment |
US20070250929A1 (en) | 2006-04-21 | 2007-10-25 | Herington Daniel E | Automatic isolation of misbehaving processes on a computer system |
US20070271560A1 (en) | 2006-05-18 | 2007-11-22 | Microsoft Corporation | Deploying virtual machine to host based on workload characterizations |
US20080082977A1 (en) | 2006-09-29 | 2008-04-03 | Microsoft Corporation | Automatic load and balancing for virtual machines to meet resource requirements |
US20080163239A1 (en) | 2006-12-29 | 2008-07-03 | Suresh Sugumar | Method for dynamic load balancing on partitioned systems |
US20090254572A1 (en) | 2007-01-05 | 2009-10-08 | Redlich Ron M | Digital information infrastructure and method |
US20080263258A1 (en) * | 2007-04-19 | 2008-10-23 | Claus Allwell | Method and System for Migrating Virtual Machines Between Hypervisors |
US20080295096A1 (en) | 2007-05-21 | 2008-11-27 | International Business Machines Corporation | DYNAMIC PLACEMENT OF VIRTUAL MACHINES FOR MANAGING VIOLATIONS OF SERVICE LEVEL AGREEMENTS (SLAs) |
US20090138887A1 (en) | 2007-11-28 | 2009-05-28 | Hitachi, Ltd. | Virtual machine monitor and multiprocessor system |
WO2009072186A1 (en) | 2007-12-04 | 2009-06-11 | Fujitsu Limited | Resource lending controlling device, resource lending method and resource lending program |
US20100241751A1 (en) | 2007-12-04 | 2010-09-23 | Fujitsu Limited | Resource lending control apparatus and resource lending method |
US20090199198A1 (en) | 2008-02-04 | 2009-08-06 | Hiroshi Horii | Multinode server system, load distribution method, resource management server, and program product |
US20090293022A1 (en) | 2008-05-22 | 2009-11-26 | Microsoft Corporation | Virtual Machine Placement Based on Power Calculations |
US20100242045A1 (en) | 2009-03-20 | 2010-09-23 | Sun Microsystems, Inc. | Method and system for allocating a distributed resource |
US20110131335A1 (en) | 2009-05-08 | 2011-06-02 | Cloudkick, Inc. | Methods and systems for cloud computing management |
US20110093852A1 (en) | 2009-10-21 | 2011-04-21 | Sap Ag | Calibration of resource allocation during parallel processing |
US20110131589A1 (en) | 2009-12-02 | 2011-06-02 | International Business Machines Corporation | System and method for transforming legacy desktop environments to a virtualized desktop model |
US20110185064A1 (en) | 2010-01-26 | 2011-07-28 | International Business Machines Corporation | System and method for fair and economical resource partitioning using virtual hypervisor |
US20110239215A1 (en) | 2010-03-24 | 2011-09-29 | Fujitsu Limited | Virtual machine management apparatus |
US20120110328A1 (en) | 2010-10-27 | 2012-05-03 | High Cloud Security, Inc. | System and Method For Secure Storage of Virtual Machines |
US20120174097A1 (en) | 2011-01-04 | 2012-07-05 | Host Dynamics Ltd. | Methods and systems of managing resources allocated to guest virtual machines |
Non-Patent Citations (25)
Title |
---|
Chinese Office Action issued in CN 201180020127.4 dated Feb. 15, 2015. |
Chinese Office Action issued in CN 201180020260 dated Sep. 2, 2014. |
Chinese Office Action issued in CN 201180020269.0 dated Oct. 20, 2014. |
English Language Translation of Chinese Office Action issued in CN 201180020127.4 dated Feb. 15, 2015. |
English Language Translation of Chinese Office Action issued in CN 201180020260 dated Sep. 2, 2014. |
English Language Translation of Chinese Office Action issued in CN 201180020269.0 dated Oct. 20, 2014. |
International Preliminary Report on Patentability issued in PCT/US2011/025393 on Aug. 28, 2012. |
International Preliminary Report on Patentability and Written Opinion issued in PCT/US2011/025390 on Aug. 28, 2012. |
International Search Report and Written Opinion issued in PCT/US2011/025392 on Jun. 2, 2011. |
International Search Report issued in PCT/US2011/025393 on Jun. 2, 2011. |
International Search Report issued in PCT/US2012/052561 dated Feb. 7, 2013. |
Related U.S. Appl. No. 12/709,954 electronically captured Jan. 2, 2013. |
Related U.S. Appl. No. 12/709,954 electronically captured Mar. 20, 2013. |
Related U.S. Appl. No. 12/709,954 electronically captured Sep. 30, 2013. |
Related U.S. Appl. No. 12/709,962 electronically captured Jan. 2, 2013. |
Related U.S. Appl. No. 12/709,962 electronically captured Mar. 20, 2013. |
Related U.S. Appl. No. 12/709,962 electronically captured on Jan. 7, 2014. |
Related U.S. Appl. No. 12/709,962 electronically captured Sep. 30, 2013. |
Related U.S. Appl. No. 13/595,955 electronically captured Jan. 2, 2013. |
Related U.S. Appl. No. 13/595,955 electronically captured Oct. 30, 2014. |
Related U.S. Appl. No. 13/595,955 electronically captured on Jan. 7, 2014. |
Related U.S. Appl. No. 13/595,955 electronically captured on Jul. 10, 2014. |
Virtustream, Inc. PCT/US11/25390 filed Feb. 18, 2011. International Search Report and Written Opinion (Jun. 1, 2011). |
Written Opinion issued in PCT/US2012/052561 dated Feb. 7, 2013. |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9866450B2 (en) | 2010-02-22 | 2018-01-09 | Virtustream Ip Holding Company Llc | Methods and apparatus related to management of unit-based virtual resources within a data center environment |
US10659318B2 (en) | 2010-02-22 | 2020-05-19 | Virtustream Ip Holding Company Llc | Methods and apparatus related to management of unit-based virtual resources within a data center environment |
US11216312B2 (en) | 2018-08-03 | 2022-01-04 | Virtustream Ip Holding Company Llc | Management of unit-based virtual accelerator resources |
Also Published As
Publication number | Publication date |
---|---|
EP2539817A4 (en) | 2015-04-29 |
WO2011103390A1 (en) | 2011-08-25 |
US20110209146A1 (en) | 2011-08-25 |
CN102947796A (en) | 2013-02-27 |
EP2539817A1 (en) | 2013-01-02 |
CN102947796B (en) | 2016-06-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9027017B2 (en) | Methods and apparatus for movement of virtual resources within a data center environment | |
US10659318B2 (en) | Methods and apparatus related to management of unit-based virtual resources within a data center environment | |
US11080081B2 (en) | Virtual machine and volume allocation in hyperconverged infrastructure environment and storage system | |
US9569245B2 (en) | System and method for controlling virtual-machine migrations based on processor usage rates and traffic amounts | |
US11936731B2 (en) | Traffic priority based creation of a storage volume within a cluster of storage nodes | |
US10474488B2 (en) | Configuration of a cluster of hosts in virtualized computing environments | |
US20210405902A1 (en) | Rule-based provisioning for heterogeneous distributed systems | |
US10356150B1 (en) | Automated repartitioning of streaming data | |
US10152343B2 (en) | Method and apparatus for managing IT infrastructure in cloud environments by migrating pairs of virtual machines | |
US10649811B2 (en) | Sunder management for a cluster of disperse nodes | |
KR102016238B1 (en) | System and method for supervising doker container, computer readable medium for performing the method | |
US20150363238A1 (en) | Resource management in a virtualized computing environment | |
US11023128B2 (en) | On-demand elastic storage infrastructure | |
US20220057947A1 (en) | Application aware provisioning for distributed systems | |
US11588884B2 (en) | Utilizing network analytics for service provisioning | |
US11726684B1 (en) | Cluster rebalance using user defined rules | |
US11609831B2 (en) | Virtual machine configuration update technique in a disaster recovery environment | |
US11722560B2 (en) | Reconciling host cluster membership during recovery | |
US11507431B2 (en) | Resource allocation for virtual machines | |
US20230297402A1 (en) | Virtual machine migration based on power utilization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: VIRTUSTREAM, INC., MARYLAND
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BOX, JULIAN J.;LUBSEY, VINCENT G.;REID, KEVIN D.;AND OTHERS;SIGNING DATES FROM 20100218 TO 20100219;REEL/FRAME:024000/0294 |
|
AS | Assignment |
Owner name: SILICON VALLEY BANK, CALIFORNIA
Free format text: SECURITY AGREEMENT;ASSIGNOR:VIRTUSTREAM, INC.;REEL/FRAME:026243/0023
Effective date: 20110506 |
|
AS | Assignment |
Owner name: ORIX VENTURES, LLC, TEXAS
Free format text: SECURITY INTEREST;ASSIGNORS:VIRTUSTREAM, INC.;VIRTUSTREAM DCS, LLC;VIRTUSTREAM LIMITED;AND OTHERS;REEL/FRAME:033453/0702
Effective date: 20140723 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FEPP | Fee payment procedure |
Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: VIRTUSTREAM IP HOLDING COMPANY LLC, MARYLAND
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VIRTUSTREAM, INC.;REEL/FRAME:039694/0886
Effective date: 20160906 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
AS | Assignment |
Owner name: VIRTUSTREAM LIMITED, VIRGINIA
Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:ORIX VENTURES, LLC;REEL/FRAME:051255/0178
Effective date: 20150709

Owner name: VIRTUSTREAM CANADA HOLDINGS, INC., VIRGINIA
Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:ORIX VENTURES, LLC;REEL/FRAME:051255/0178
Effective date: 20150709

Owner name: NETWORK I LIMITED, VIRGINIA
Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:ORIX VENTURES, LLC;REEL/FRAME:051255/0178
Effective date: 20150709

Owner name: VIRTUSTREAM UK LIMITED, VIRGINIA
Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:ORIX VENTURES, LLC;REEL/FRAME:051255/0178
Effective date: 20150709

Owner name: VIRTUSTREAM DCS, LLC, VIRGINIA
Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:ORIX VENTURES, LLC;REEL/FRAME:051255/0178
Effective date: 20150709

Owner name: VIRTUSTREAM, INC., VIRGINIA
Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:ORIX VENTURES, LLC;REEL/FRAME:051255/0178
Effective date: 20150709

Owner name: VIRTUSTREAM SWITZERLAND SARL, VIRGINIA
Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:ORIX VENTURES, LLC;REEL/FRAME:051255/0178
Effective date: 20150709

Owner name: VIRTUSTREAM SECURITY SOLUTIONS, LLC, VIRGINIA
Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:ORIX VENTURES, LLC;REEL/FRAME:051255/0178
Effective date: 20150709

Owner name: VIRTUSTREAM GROUP HOLDINGS, INC., VIRGINIA
Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:ORIX VENTURES, LLC;REEL/FRAME:051255/0178
Effective date: 20150709 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |