US20060112388A1 - Method for dynamic scheduling in a distributed environment - Google Patents
- Publication number
- US20060112388A1 (application US10/994,852)
- Authority
- US
- United States
- Prior art keywords
- program
- node
- priority
- execution
- assigning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5038—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/5021—Priority
Definitions
- Each of the programs or program groups is assigned to one or more logical nodes (156).
- The assignment to the logical nodes is stored (158) in the workflow database (64) of the global scheduler (60) and is utilized for scheduling execution of associated programs on actual nodes.
- FIG. 7 is a flow chart (250) illustrating the process of program scheduling. The first step involves waiting for a next event (252), wherein the event may be a new request arrival event or a node status change event.
- Step (254) includes providing priority parameters to each newly executable program, including a node-match parameter mi.
- The parameter mi has the highest priority when the node to be assigned and the actual node mapped from the logical node for the program are matched. The next highest priority is when the logical node is not assigned to an actual node, and the lowest priority is when the node to be assigned to the program(s) is different from the mapped assignment.
- The entries in the wait queue are sorted based upon the priority parameters with the following precedence: mi → di → bi, i.e. after the sorting based on mi is complete, the entries are sorted based on di, followed by sorting based on bi. Following step (254), a node capable of executing a program or a set of programs is selected (256). The node selection process is based upon prior calculated costs, priority, and availability.
- A test is then conducted (258) to determine whether the node selected at step (256) exists.
- A negative response to the test at step (258) will result in a return to step (252).
- A positive response to the test at step (258) will result in selection of a program or a set of programs for transfer from the logical node assignment to the physical node (260).
- A test is then conducted to determine if the program(s) exist (262). If the response to the test at step (262) is negative, the scheduling returns to step (256). However, if the response to the test at step (262) is positive, a new map is created and the program is assigned to the actual node (264).
- The process of scheduling and executing a program includes mapping the program to an actual node for execution.
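The wait-queue sorting described in FIG. 7 can be sketched as follows. This is an illustrative sketch only: the excerpt defines the node-match parameter mi (matched assignment ranks highest, unassigned next, mismatched lowest) but never defines di or bi, so they are treated here as opaque secondary and tertiary sort keys; all names are invented for the example.

```python
from dataclasses import dataclass

# Node-match ranks for the parameter mi: lower value = higher priority.
MATCHED, UNASSIGNED, MISMATCHED = 0, 1, 2

@dataclass
class WaitEntry:
    program: str
    m: int    # node-match rank: matched < unassigned < mismatched
    d: float  # secondary key (not defined in the excerpt)
    b: float  # tertiary key (not defined in the excerpt)

def sort_wait_queue(entries):
    """Sort with precedence m, then d, then b, as in step (254)."""
    return sorted(entries, key=lambda e: (e.m, e.d, e.b))

queue = [
    WaitEntry("p1", MISMATCHED, 0.1, 0.0),
    WaitEntry("p2", MATCHED, 0.9, 0.0),
    WaitEntry("p3", MATCHED, 0.9, -1.0),
]
ranked = sort_wait_queue(queue)
```

Tuple comparison gives the m → d → b precedence directly: entries tie on m, then on d, before b decides.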
- FIG. 8 is a flow chart ( 300 ) illustrating a process for executing a program after it has been assigned to a physical node for execution.
- The first step involves waiting for a next event (302), wherein the event may be either a program assignment or data transmission completion. Thereafter, an executable program is selected from the local queue of the physical node (304).
- A test is conducted to determine if the program exists (306). A negative response to the test at step (306) will return to step (302) for another event. However, a positive response to the test at step (306) will result in executing the selected program within an assigned period (308).
- A subsequent test is conducted to determine if the program execution has concluded within the assigned time period (310).
- A negative response to the test at step (310) will return to step (304) to select another program from the queue.
- A positive response to the test at step (310) will remove the executed program from the local queue (312).
- The performance data generated from the program execution is stored in the performance database (66) of the global scheduler (60).
- Another test is conducted to determine if the destination of the data generated from the program execution has been decided (316).
- A positive response to the test at step (316) will allow the generated data to be transmitted (318).
- A node status change event is generated (320).
- The process returns to step (304) for selection of a subsequent program from the local queue. Accordingly, the actual node assigned to execute the program stores performance data within the global scheduler.
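The node-local execution loop of FIG. 8 can be sketched as below. This is a simplified, hypothetical rendering: the `run` callback, the list-based local queue, and the dict standing in for the performance database (66) are all stand-ins, and the loop stops after one overrun instead of waiting for a new event.

```python
from collections import deque

def run_node(local_queue, performance_db, run):
    """Sketch of steps (304)-(312): select a program, execute it within
    its assigned period, remove it on completion, record performance.

    run(program) -> (finished_in_period: bool, elapsed: float)
    """
    completed = []
    pending = deque(local_queue)
    while pending:
        program = pending.popleft()        # step (304): select a program
        finished, elapsed = run(program)   # step (308): execute in period
        if finished:
            completed.append(program)      # step (312): remove from queue
            performance_db[program] = elapsed  # store performance data
        else:
            pending.append(program)        # overran: retry on a later pass
            break  # in this sketch, stop rather than wait for an event
    return completed
```

A real node would loop back to waiting for the next event (302) rather than returning.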
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multi Processors (AREA)
Abstract
Description
- 1. Technical Field
- This invention relates to a method and system for dynamically scheduling programs for execution on one or more nodes.
- 2. Description of the Prior Art
- A directed acyclic graph (DAG) includes a set of nodes connected by a set of edges. Each node represents a task, and the weight of the node is the execution time of the task. Each edge represents a message transferred from one node to another node, with its weight being the transmission time of the message. Scheduling programs for execution onto processors is a crucial component of a parallel processing system. There are generally two categories of prior art schedulers using DAGs: centralized and decentralized (not shown). An example of a centralized scheduler (10) is shown in
FIG. 1 to include a scheduler (30) and a plurality of program execution nodes (12), (14), (16), (18), and (20). The nodes (12), (14), (16), (18), and (20) communicate with each other and the scheduler (30) across a network. In the centralized scheduler (10), an execution request for a program is made to the scheduler (30), which assigns the program to one of the nodes (12), (14), (16), (18) or (20) in accordance with a state of each node. An example of a routine implemented with a centralized scheduler is a first-in-first-out (FIFO) routine, in which programs are assigned to processors in the order in which they are placed in the queue. Problems with FIFO arise when a program in the queue is subject to a dependency upon execution of another program. The FIFO routine does not support scheduling a dependent program based upon execution of a prior program. For example, two programs are provided with an execution dependency such that the first program requires a first data input and generates a second data output, the second program is dependent upon the second data output from the first program execution, and the second program generates a third data output. If the scheduler assigning the programs to one or more processors is running a FIFO routine and the two programs are assigned to execute on two different nodes, the second data output from the first program execution will be on a different node than the second program execution. The second data output will need to be transferred from the node that executed the first program and produced the second data output to the node to which the second program has been assigned for execution. The process of transferring data between nodes consumes resources of both nodes associated with data encryption and decryption. Accordingly, the centralized scheduler results in a decreased utilization of both the first and second processors respectively executing the first and second programs. 
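The weighted-DAG model described above can be sketched as a small data structure. This is an illustrative sketch, not the patent's implementation; the names `TaskDAG`, `add_task`, and `add_message` are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class TaskDAG:
    """Weighted task DAG: node weight = execution time of the task,
    edge weight = transmission time of the message between tasks."""
    exec_time: dict = field(default_factory=dict)   # task -> execution time
    trans_time: dict = field(default_factory=dict)  # (src, dst) -> transmission time
    edges: dict = field(default_factory=dict)       # task -> list of successors

    def add_task(self, name, exec_time):
        self.exec_time[name] = exec_time
        self.edges.setdefault(name, [])

    def add_message(self, src, dst, trans_time):
        self.edges[src].append(dst)
        self.trans_time[(src, dst)] = trans_time

# The two-program dependency example from the text: the first program's
# output is the second program's input, so scheduling them on different
# nodes incurs the transfer cost carried on the edge.
dag = TaskDAG()
dag.add_task("first", exec_time=5.0)
dag.add_task("second", exec_time=3.0)
dag.add_message("first", "second", trans_time=2.0)
```

A DAG-aware scheduler would weigh the 2.0-unit edge cost against placing both tasks on one node; a FIFO scheduler, as the text notes, ignores it.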
- In the decentralized scheduler, a plurality of independent schedulers are provided. The benefit associated with the decentralized scheduler is its scalability in a multinode system. However, the negative aspect of the decentralized scheduler is the complexity of the control and communication required among the schedulers to efficiently allocate resources in a sequential manner and to reduce the operation and transmission costs associated with transferring data across nodes for execution of dependent programs. Accordingly, there is an increased communication cost associated with a decentralized scheduler.
- There is therefore a need for a method and system to efficiently assign resources based upon a plurality of execution requests for a set of programs having execution dependency with costs associated with data transfer and processing accounted for in a dynamic manner.
- This invention comprises a method and system for dynamically scheduling execution of a program among two or more processor nodes.
- In one aspect of the invention a method is provided for assigning resources to a plurality of processing nodes. Priority of execution dependency of a program is decided. In response to the decision, the program is dynamically assigned to a node based upon the priority and in accordance with a state of each node in a multinode system. Preemptive execution of the program is determined, and the program is executed at a designated node non-preemptively in response to a positive determination.
- In another aspect of the invention, a system is provided with a plurality of operating nodes, and a scheduling manager to decide priority of execution dependency of a program. A global scheduler is also provided to dynamically assign the program to one of the nodes based upon the priority and a state of each node in the system. In addition, a program manager is provided to determine applicability of preemptive execution of the program, and to non-preemptively execute the program at a designated node in response to a positive determination.
- In a further aspect of the invention, an article is provided with a computer-readable signal-bearing medium with a plurality of operating nodes in the medium. Means in the medium are provided for deciding priority of execution dependency of a program. In addition, means in the medium are provided for dynamically assigning the program to one of the nodes based upon the priority and a state of each node in the system. Means in the medium are provided for determining applicability of preemptive execution of the program, and to non-preemptively execute the program at a designated node in response to a positive determination.
- Other features and advantages of this invention will become apparent from the following detailed description of the presently preferred embodiment of the invention, taken in conjunction with the accompanying drawings.
-
FIG. 1 is a block diagram of a prior art centralized scheduler. -
FIG. 2 is a block diagram of a global scheduler according to the preferred embodiment of this invention, and is suggested for printing on the first page of the issued patent. -
FIG. 3 is a flow chart illustrating a high level operation of processing flow. -
FIG. 4 is a flow chart illustrating workflow analysis. -
FIG. 5 is a flow chart illustrating assignment of priority to programs in a workflow. -
FIG. 6 is a flow chart illustrating logical node assignment. -
FIG. 7 is a flow chart illustrating scheduling a program at a node. -
FIG. 8 is a flow chart illustrating execution of a program at a node. - A grid environment (50) is shown in
FIG. 2 and is composed of a global scheduler (60) and a plurality of program execution units (70) and (80), known as nodes. Although only two nodes are shown, more nodes may be added to the system. Each node has a program execution unit (72) and (82), respectively, and a local scheduler (74) and (84) that has a local program execution queue (not shown) to manage execution of programs assigned to the respective node. The nodes (70) and (80) communicate with each other and the global scheduler (60) across a local or wide area network (90). An execution request for a program is made to the global scheduler (60), which assigns the program to one of the nodes (70, 80) in accordance with a state of each node to execute the program. The global scheduler (60) includes a wait queue (62), a workflow database (64), a performance database (66), and an assignment database (68). Each of the nodes (70) and (80) provides processing power and outputs the results of program execution to the global scheduler (60). A web server (not shown) in communication with the global scheduler (60) and each of the nodes (70) and (80) dynamically generates transactions to obtain execution requests and process data. The global scheduler (60) controls processing of a requested program to one or more of the nodes. -
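The global scheduler's components named above can be sketched as a small class. This is a minimal stand-in, not the patent's implementation: plain dicts and a heap represent the wait queue (62) and the workflow (64), performance (66), and assignment (68) databases, and the method names are invented.

```python
import heapq

class GlobalScheduler:
    """Sketch of the global scheduler (60) of FIG. 2."""

    def __init__(self):
        self.wait_queue = []     # heap of (priority, program); lower runs first
        self.workflow_db = {}    # program -> workflow metadata (64)
        self.performance_db = {} # program -> measured execution data (66)
        self.assignment_db = {}  # program -> assigned node (68)

    def submit(self, program, priority):
        """Add an execution request to the wait queue, ordered by priority."""
        heapq.heappush(self.wait_queue, (priority, program))

    def next_program(self):
        """Pop the highest-priority (lowest-value) waiting program, if any."""
        return heapq.heappop(self.wait_queue)[1] if self.wait_queue else None

sched = GlobalScheduler()
sched.submit("p_low", 0.9)
sched.submit("p_high", 0.1)
```

The heap keeps the queue sorted by priority without re-sorting on every insertion, matching the "entries in the wait queue are sorted based on the priority" behavior described below.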
FIG. 3 is a flow chart (100) showing a high level processing of program assignments. A workflow submission request, i.e. an execution request, is received from a user (110). The workflow is analyzed (112) and executed (114) prior to scheduling an execution of an associated program in the workflow (116). Following execution at step (116), the results are provided to the user (118). There are essentially two procedures to the high level processing. The first procedure is the workflow analysis (112) conducted subsequent to receipt of a workflow submission, and is detailed in FIG. 4. The second procedure involves three components: an execution request for a workflow from the user (114), scheduling and executing programs in the workflow (116), and providing results to the user (118). - As mentioned above, the workflow analysis (112) of
FIG. 3 is shown in detail in FIG. 4 (150). The first step of the workflow analysis is assigning priority (152). The program execution priority is decided based on the execution dependency relation of a given program before actual program execution. There are two optional methods for determining the priority of assignment of a program. One method is known as topological sorting, and the second method is based upon the distance from the start program. The topological sorting method involves sorting a directed acyclic graph (DAG) and deciding the priority of the program by incrementing by a factor of 1/(i−1) in sequence, where i indicates the number of programs included in the DAG. The second method, known as the shortest path length, involves computing the distance from the start program, and then deciding the priority as the value normalized by the maximum distance. When there is more than one group of program sets to be executed, the decision on priority of execution is applied to all the program sets to be executed. In either method of assigning priority to a program, the program execution request is added to the global wait queue (62). The entries in the wait queue are sorted based on the priority assigned to the program. When any node is waiting for program execution and the wait queue is not empty, calculation costs, i.e. the cost of assignment to a target node, for program execution are computed for each program in the queue in order of priority. After execution of the program is completed at the assigned node, an execution request for a subsequent dependent program is added to the wait queue. The entries in the wait queue are rearranged in accordance with the priority assigned to the program. This procedure is repeated until the wait queue is empty. -
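The two priority-assignment methods just described can be sketched as follows. This is an illustrative sketch under stated assumptions: the graph is a plain dict mapping each program to its dependents, the 1/(i−1) increment is read as stepping priorities evenly along the topological order of the i programs, and the distance method uses breadth-first shortest-path distance from the start program normalized by the maximum distance. Function names are invented.

```python
from collections import deque

def topo_order(graph):
    """Kahn's algorithm; returns programs in dependency order."""
    indeg = {n: 0 for n in graph}
    for succs in graph.values():
        for s in succs:
            indeg[s] += 1
    queue = deque(n for n, d in indeg.items() if d == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for s in graph[n]:
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    return order

def priority_by_topo_sort(graph):
    """Method 1: priorities increment by 1/(i-1) along the topological
    sequence, where i is the number of programs in the DAG."""
    order = topo_order(graph)
    n = len(order)
    step = 1.0 / (n - 1) if n > 1 else 0.0
    return {prog: k * step for k, prog in enumerate(order)}

def priority_by_distance(graph, start):
    """Method 2: priority is the shortest-path distance from the start
    program, normalized by the maximum distance."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        n = queue.popleft()
        for s in graph[n]:
            if s not in dist:
                dist[s] = dist[n] + 1
                queue.append(s)
    longest = max(dist.values()) or 1
    return {prog: d / longest for prog, d in dist.items()}

# A diamond-shaped workflow: A feeds B and C, which both feed D.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
topo_p = priority_by_topo_sort(graph)
dist_p = priority_by_distance(graph, "A")
```

In both methods the start program gets priority 0.0 and the final program 1.0; the distance method additionally gives B and C equal priority, reflecting their equal depth.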
FIG. 5 is a flow chart (170) illustrating assignment of priority to each program in a group of programs. The first step is a test to determine if there is only one program in the group (172). A positive response to the test at step (172) will result in storing the priority of this one program (174) in the workflow database (64) on the global scheduler (60). However, a negative response to the test at step (172) is an indication that there are two or more programs in the group that need to be prioritized. Programs making up a strongly connected component are detected and grouped together (176). The programs grouped in this manner are identified as a strongly connected component group. Other programs that are not part of a strongly connected component are each grouped individually into groups of one program each, with the number of programs in each group set as an integer of one. Each of the groups is sequenced by topological sorting (178), with the priority Pi of the i-th group Gi being decided in the range 0.0 < Pi < 1.0, such that Pi−1 < Pi, assuming that the priority of a start group is 0.0 and the priority of an end group is 1.0. - Following the sorting process at step (178), priority is assigned to each group (180). The process of assigning priority to each group is applied recursively for each program constituting the strongly connected component group (182) by returning to step (172). The priority Pi is given to group Gi, and the priority Pi,j is given to the j-th subgroup Gi,j in the range Pi < Pi,j < Pi+1, such that Pi,j < Pi,j+1, in the sequence acquired by topologically sorting the DAG created by excluding the input into Gi,s as the root. The purpose of normalizing the priority of each program is to enable programs in different program sets to be executed with the same presence. 
That is, when nodes are available for computing and two program sets have an equal total computation time and request execution at the same time, the computation for both sets can end at the same time. However, in a case where a program set includes a preferential request, the request includes a weight value. The priority assigned to each program is then multiplied by the weight value and applied to the scheduling method described above. Accordingly, the programs within the groups are recursively split into strongly connected components to decide the priority.
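The detection at step (176) is not spelled out in the text; one standard technique for finding strongly connected components is Tarjan's algorithm, sketched below as an illustration. The graph representation (program → successors) is an assumption, and singleton components correspond to the groups of one program described above.

```python
def strongly_connected_components(graph):
    """Tarjan's algorithm (recursive; suitable for small workflow graphs).
    graph maps each program to an iterable of its successor programs."""
    index, low, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a component
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(frozenset(comp))

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs
```

Mutually dependent programs (a cycle) come out as one group; all other programs come out as singleton groups, matching the grouping performed before the topological sort at step (178).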
Following the assignment of priority to a group of programs, as well as to each program within a group (152), a test is conducted to determine if the program or set of programs can be assigned to a logical node so as to minimize the transfer of data between programs when analyzing execution dependency (154). The determination at step (154) is based upon whether the computation and/or transmission costs can be estimated.
FIG. 6 is a flow chart (200) illustrating the details of the process of assigning one or more programs to a logical node, i.e. a temporary node. Initially, workflow data is received (202). Following the receipt at step (202), the costs of program computation and transmission are estimated (204). From the results of measuring past executions of programs having execution dependency, the relationship between input data size and output data size for the programs in the execution dependency graph, and the relationship between input data size and processing costs at a node, are estimated. This step focuses on assigning the program with the greater amount of computation to the node of higher performance when the required data transfer overhead is minimal and a plurality of nodes are available. The estimation modeling parameters are made by a regression analysis. The costs can be computed based on program assignment cost, such as data transfer costs and whether the program and required data are cached, and program execution cost, such as the computation amount and the predicted end time. When the estimation at step (204) is complete, the maximum cost, including the computation cost of the dependent programs, is calculated (206). The programs in the workflow are then sorted (208) in the order of the calculated transmission cost. The program(s) are sorted in a hierarchy starting with the program having the highest transmission cost among the programs in consideration (210). If there is a tie between two or more programs having the same transmission cost, the tie is broken based upon the maximum cost of program execution including all dependent programs. Each of the programs is assigned to one or more logical nodes (212) based upon the hierarchical arrangement of the programs from step (210). Accordingly, the logical node assignment is based upon the transmission and/or communication cost of the programs in the queue.
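The regression at step (204) and the sort at steps (208)-(210) might be sketched as follows. The patent specifies a regression analysis without naming a model, so a simple ordinary least-squares fit of cost against input data size is assumed here, and the cost dictionaries are illustrative placeholders.

```python
def fit_linear(sizes, costs):
    """Ordinary least-squares fit cost ~ slope * size + intercept,
    from past (input size, measured cost) pairs."""
    n = len(sizes)
    mx, my = sum(sizes) / n, sum(costs) / n
    var = sum((x - mx) ** 2 for x in sizes)
    slope = sum((x - mx) * (y - my) for x, y in zip(sizes, costs)) / var
    return slope, my - slope * mx

def estimate(size, model):
    """Predict the cost for a new input size from the fitted model."""
    slope, intercept = model
    return slope * size + intercept

def sort_by_transmission_cost(programs, xmit_cost, exec_cost):
    """Steps (208)-(210): highest transmission cost first; ties broken
    by the maximum execution cost including all dependent programs."""
    return sorted(programs, key=lambda p: (xmit_cost[p], exec_cost[p]),
                  reverse=True)
```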
Following the process of calculating the costs associated with execution of a program or group of programs, each of the programs or program groups is assigned to one or more logical nodes (156). The assignment to the logical nodes is stored (158) in the workflow database (64) of the global scheduler (60) and is utilized for scheduling execution of the associated programs on actual nodes.
FIG. 7 is a flow chart (250) illustrating the process of program scheduling. The first step involves waiting for a next event (252), wherein the event may be a new request arrival event or a node status change event. Thereafter, the execution condition of the next program is checked and the program is submitted to the queue (254). Step (254) includes providing a priority parameter to each newly executable program. The priority parameter is defined as pi={bi, di, mi}, where bi is the priority given to the entire program, di is the priority based on the dependency relation of each program in the execution dependency, and mi is the priority based on the correspondence relation between the logical node assignment and the actual node assignment. The priority mi is highest when the node to be assigned and the actual node mapped from the logical node for the program are matched. The next highest priority is when the logical node is not assigned to an actual node, and the lowest priority is when the node to be assigned to the program(s) is different from the mapped assignment. The entries in the wait queue are sorted based upon the priority parameters. The sorting is made based upon the following precedence: mi, then di, then bi, i.e. after the sorting based on mi is complete, the sorting is then based on di, followed by sorting based on bi. Following step (254), a node capable of executing a program or a set of programs is selected (256). The node selection process is based upon prior calculated costs, priority, and availability. A test is then conducted (258) to determine if the node selected at step (256) exists. A negative response to the test at step (258) will result in a return to step (252). However, a positive response to the test at step (258) will result in selection of a program or a set of programs for transfer from the logical node assignment to the physical node (260). A test is then conducted to determine if such program(s) exist (262).
If the response to the test at step (262) is negative, the scheduling returns to step (256). However, if the response to the test at step (262) is positive, a new map is created and the program is assigned to the actual node (264). Thereafter, required data transmission is requested for the program input (266), the program is submitted to the physical node's local queue (268), and a program assignment event is generated (270) followed by a return to step (260). Accordingly, the process of scheduling and executing a program includes mapping the program to an actual node for execution.
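The composite priority pi={bi, di, mi} and its sorting precedence can be illustrated with a small sketch. The numeric encoding of mi (0 for a match, 1 for no logical assignment, 2 for a mismatch, with lower values dequeued first) and the dictionary entry format are assumptions for illustration.

```python
def match_rank(program, candidate_node, logical_map):
    """m_i: 0 if the program's logical node maps to the candidate node
    (highest priority), 1 if the program has no logical-node assignment,
    2 if it maps to a different node (lowest priority)."""
    if program not in logical_map:
        return 1
    return 0 if logical_map[program] == candidate_node else 2

def sort_wait_queue(entries):
    """Sort wait-queue entries by precedence m_i, then d_i, then b_i;
    Python's tuple keys give exactly this lexicographic ordering."""
    return sorted(entries, key=lambda e: (e["m"], e["d"], e["b"]))
```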
FIG. 8 is a flow chart (300) illustrating a process for executing a program after it has been assigned to a physical node for execution. The first step involves waiting for a next event (302), wherein the event may be either a program assignment or a data transmission completion. Thereafter, an executable program is selected from the local queue of the physical node (304). A test is conducted to determine if such a program exists (306). A negative response to the test at step (306) will result in a return to step (302) for another event. However, a positive response to the test at step (306) will result in executing the selected program within an assigned period (308). A subsequent test is conducted to determine if the program execution has concluded within the assigned time period (310). A negative response to the test at step (310) will result in a return to step (304) to select another program from the queue. However, a positive response to the test at step (310) will result in removal of the executed program from the local queue (312). The performance data generated from the program execution is stored in the performance database (66) of the global scheduler (60). Thereafter, another test is conducted to determine if the destination of the data generated from the program execution has been decided (316). A positive response to the test at step (316) will allow the generated data to be transmitted (318). Thereafter, or following a negative response to the test at step (316), a node status change event is generated (320). Following step (320), the process returns to step (304) for selection of a subsequent program from the local queue. Accordingly, the actual node assigned to execute the program stores performance data within the global scheduler. The global scheduler dynamically assigns resources while optimizing overhead.
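The select-execute-requeue loop of FIG. 8 (steps 304 through 312) can be reduced to a toy simulation. The notion of measurable "remaining work" units and the deque representation of the local queue are hypothetical simplifications; the real process is event-driven.

```python
from collections import deque

def run_local_queue(programs, period):
    """Toy simulation of steps (304)-(312): select a program, run it for
    at most the assigned period; if it does not finish, re-queue it with
    the remaining work and select another. 'Work' units are hypothetical."""
    completed, q = [], deque(programs)
    while q:
        name, work = q.popleft()             # step (304): select a program
        if work <= period:
            completed.append(name)           # step (312): finished, remove from queue
        else:
            q.append((name, work - period))  # step (310): not finished, retry later
    return completed
```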
Assignment of a workflow to a logical node is employed to mitigate the communication and transmission costs associated with execution of a plurality of programs in the workflow by a plurality of nodes in the system. The priority of each program is normalized, and the programs are sorted in order of priority. Accordingly, the use of the global scheduler in conjunction with logical node assignments supports cost effective assignment of programs in a workflow to an optimal node.
It will be appreciated that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without departing from the spirit and scope of the invention. In particular, the assignment of programs in a workflow to a logical node to determine communication and transmission costs may be removed to allow the programs to be forwarded directly to a node having a local queue. Accordingly, the scope of protection of this invention is limited only by the following claims and their equivalents.
Claims (20)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/994,852 US20060112388A1 (en) | 2004-11-22 | 2004-11-22 | Method for dynamic scheduling in a distributed environment |
US12/173,387 US8185908B2 (en) | 2004-11-22 | 2008-07-15 | Dynamic scheduling in a distributed environment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/994,852 US20060112388A1 (en) | 2004-11-22 | 2004-11-22 | Method for dynamic scheduling in a distributed environment |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/173,387 Continuation US8185908B2 (en) | 2004-11-22 | 2008-07-15 | Dynamic scheduling in a distributed environment |
Publications (1)
Publication Number | Publication Date |
---|---|
US20060112388A1 true US20060112388A1 (en) | 2006-05-25 |
Family
ID=36462332
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/994,852 Abandoned US20060112388A1 (en) | 2004-11-22 | 2004-11-22 | Method for dynamic scheduling in a distributed environment |
US12/173,387 Active 2027-07-30 US8185908B2 (en) | 2004-11-22 | 2008-07-15 | Dynamic scheduling in a distributed environment |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/173,387 Active 2027-07-30 US8185908B2 (en) | 2004-11-22 | 2008-07-15 | Dynamic scheduling in a distributed environment |
Country Status (1)
Country | Link |
---|---|
US (2) | US20060112388A1 (en) |
Cited By (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060129660A1 (en) * | 2004-11-12 | 2006-06-15 | Mueller Wolfgang G | Method and computer system for queue processing |
US20070083735A1 (en) * | 2005-08-29 | 2007-04-12 | Glew Andrew F | Hierarchical processor |
US20070083739A1 (en) * | 2005-08-29 | 2007-04-12 | Glew Andrew F | Processor with branch predictor |
US20070282462A1 (en) * | 2006-05-31 | 2007-12-06 | Microsoft Corporation | Displaying interrelated changes in a grid |
US20080133893A1 (en) * | 2005-08-29 | 2008-06-05 | Centaurus Data Llc | Hierarchical register file |
US20080133868A1 (en) * | 2005-08-29 | 2008-06-05 | Centaurus Data Llc | Method and apparatus for segmented sequential storage |
US20080133889A1 (en) * | 2005-08-29 | 2008-06-05 | Centaurus Data Llc | Hierarchical instruction scheduler |
US20080215642A1 (en) * | 2007-03-02 | 2008-09-04 | Kwai Hing Man | System, Method, And Service For Migrating An Item Within A Workflow Process |
WO2008144239A2 (en) * | 2007-05-18 | 2008-11-27 | Network Automation | Agent workflow system and method |
US20090019259A1 (en) * | 2006-03-23 | 2009-01-15 | Fujitsu Limited | Multiprocessing method and multiprocessor system |
US20090216783A1 (en) * | 2008-02-25 | 2009-08-27 | Alexander Gebhart | Hierarchical system operation in an adaptive computing environment |
US20090241117A1 (en) * | 2008-03-20 | 2009-09-24 | International Business Machines Corporation | Method for integrating flow orchestration and scheduling for a batch of workflows |
US20090276761A1 (en) * | 2008-05-01 | 2009-11-05 | Intuit Inc. | Weighted performance metrics for financial software |
US20100082762A1 (en) * | 2008-09-29 | 2010-04-01 | Fujitsu Limited | Message tying processing method and apparatus |
US20100199281A1 (en) * | 2009-02-05 | 2010-08-05 | International Business Machines Corporation | Managing the Processing of Processing Requests in a Data Processing System Comprising a Plurality of Processing Environments |
US20110276977A1 (en) * | 2010-05-07 | 2011-11-10 | Microsoft Corporation | Distributed workflow execution |
US20120266023A1 (en) * | 2011-04-12 | 2012-10-18 | Brown Julian M | Prioritization and assignment manager for an integrated testing platform |
CN103135741A (en) * | 2011-12-01 | 2013-06-05 | 施乐公司 | Multi-device power saving |
US20130254772A1 (en) * | 2012-03-21 | 2013-09-26 | Phillip Morris International | Verification of complex workflows through internal assessment or community based assessment |
US20140379619A1 (en) * | 2013-06-24 | 2014-12-25 | Cylance Inc. | Automated System For Generative Multimodel Multiclass Classification And Similarity Analysis Using Machine Learning |
US9037961B1 (en) * | 2006-09-18 | 2015-05-19 | Credit Suisse Securities (Usa) Llc | System and method for storing a series of calculations as a function for implementation in a spreadsheet application |
US9235808B2 (en) | 2013-03-14 | 2016-01-12 | International Business Machines Corporation | Evaluation of predictions in the absence of a known ground truth |
US9262296B1 (en) | 2014-01-31 | 2016-02-16 | Cylance Inc. | Static feature extraction from structured files |
WO2016063482A1 (en) * | 2014-10-23 | 2016-04-28 | 日本電気株式会社 | Accelerator control device, accelerator control method, and program storage medium |
US9378012B2 (en) | 2014-01-31 | 2016-06-28 | Cylance Inc. | Generation of API call graphs from static disassembly |
US9465940B1 (en) | 2015-03-30 | 2016-10-11 | Cylance Inc. | Wavelet decomposition of software entropy to identify malware |
US9495633B2 (en) | 2015-04-16 | 2016-11-15 | Cylance, Inc. | Recurrent neural networks for malware analysis |
WO2016206564A1 (en) * | 2015-06-26 | 2016-12-29 | 阿里巴巴集团控股有限公司 | Operation scheduling method, device and distribution system |
WO2017131187A1 (en) * | 2016-01-29 | 2017-08-03 | 日本電気株式会社 | Accelerator control device, accelerator control method and program |
WO2017167105A1 (en) * | 2016-03-31 | 2017-10-05 | 阿里巴巴集团控股有限公司 | Task-resource scheduling method and device |
US10235518B2 (en) | 2014-02-07 | 2019-03-19 | Cylance Inc. | Application execution control utilizing ensemble machine learning for discernment |
US20200004580A1 (en) * | 2018-06-29 | 2020-01-02 | International Business Machines Corporation | Resource management for parent child workload |
US10812407B2 (en) | 2017-11-21 | 2020-10-20 | International Business Machines Corporation | Automatic diagonal scaling of workloads in a distributed computing environment |
US10887250B2 (en) | 2017-11-21 | 2021-01-05 | International Business Machines Corporation | Reducing resource allocations and application instances in diagonal scaling in a distributed computing environment |
US10893000B2 (en) | 2017-11-21 | 2021-01-12 | International Business Machines Corporation | Diagonal scaling of resource allocations and application instances in a distributed computing environment |
US20230076061A1 (en) * | 2021-09-07 | 2023-03-09 | Hewlett Packard Enterprise Development Lp | Cascaded priority mapping |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5347648B2 (en) * | 2009-03-30 | 2013-11-20 | 富士通株式会社 | Program, information processing apparatus, and status output method |
US8549536B2 (en) * | 2009-11-30 | 2013-10-01 | Autonomy, Inc. | Performing a workflow having a set of dependancy-related predefined activities on a plurality of task servers |
KR101689736B1 (en) * | 2010-08-18 | 2016-12-27 | 삼성전자주식회사 | Work processing unit having a function of work scheduling, control unit for scheduling activation and work scheduling method over the symetric multi-processing environment |
JP6201530B2 (en) * | 2013-08-30 | 2017-09-27 | 富士通株式会社 | Information processing system, job management apparatus, control program for job management apparatus, and control method for information processing system |
US20150081400A1 (en) * | 2013-09-19 | 2015-03-19 | Infosys Limited | Watching ARM |
JP6221588B2 (en) * | 2013-09-30 | 2017-11-01 | 富士通株式会社 | Information processing system, management apparatus control program, and information processing system control method |
US9576072B2 (en) | 2014-02-13 | 2017-02-21 | Sap Se | Database calculation using parallel-computation in a directed acyclic graph |
US9826011B2 (en) | 2014-07-31 | 2017-11-21 | Istreamplanet Co. | Method and system for coordinating stream processing at a video streaming platform |
US9417921B2 (en) * | 2014-07-31 | 2016-08-16 | Istreamplanet Co. | Method and system for a graph based video streaming platform |
US9912707B2 (en) | 2014-07-31 | 2018-03-06 | Istreamplanet Co. | Method and system for ensuring reliability of unicast video streaming at a video streaming platform |
CN105740249B (en) * | 2014-12-08 | 2020-05-22 | Tcl科技集团股份有限公司 | Processing method and system in parallel scheduling process of big data job |
US10394682B2 (en) | 2015-02-27 | 2019-08-27 | Vmware, Inc. | Graphical lock analysis |
US9552235B2 (en) * | 2015-02-27 | 2017-01-24 | Vmware Inc. | Using pagerank algorithm-based lock analysis to identify key processes for improving computing system efficiency |
US9898382B2 (en) | 2015-02-27 | 2018-02-20 | Vmware, Inc. | Hyperlink-induced topic search algorithm lock analysis |
KR102339779B1 (en) * | 2015-04-06 | 2021-12-15 | 삼성전자주식회사 | Data storage device, data processing system having same, and method thereof |
US9686576B2 (en) | 2015-05-08 | 2017-06-20 | Istreamplanet Co. | Coordination of video stream timing in cloud-based video streaming system |
US9407944B1 (en) | 2015-05-08 | 2016-08-02 | Istreamplanet Co. | Resource allocation optimization for cloud-based video processing |
US10164853B2 (en) | 2015-05-29 | 2018-12-25 | Istreamplanet Co., Llc | Real-time anomaly mitigation in a cloud-based video streaming system |
CN109213594B (en) * | 2017-07-06 | 2022-05-17 | 阿里巴巴集团控股有限公司 | Resource preemption method, device, equipment and computer storage medium |
US10771365B2 (en) * | 2017-12-26 | 2020-09-08 | Paypal, Inc. | Optimizing timeout settings for nodes in a workflow |
WO2019200402A1 (en) | 2018-04-13 | 2019-10-17 | Plaid Inc. | Secure permissioning of access to user accounts, including secure distribution of aggregated user account data |
WO2021055618A1 (en) | 2019-09-17 | 2021-03-25 | Plaid Inc. | System and method linking to accounts using credential-less authentication |
CN111597040B (en) * | 2020-04-30 | 2022-09-16 | 中国科学院深圳先进技术研究院 | Resource allocation method, device, storage medium and electronic equipment |
CA3189855A1 (en) | 2020-08-18 | 2022-02-24 | William Frederick Kiefer | System and method for managing user interaction flows within third party applications |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4769772A (en) * | 1985-02-28 | 1988-09-06 | Honeywell Bull, Inc. | Automated query optimization method using both global and parallel local optimizations for materialization access planning for distributed databases |
US5526521A (en) * | 1993-02-24 | 1996-06-11 | International Business Machines Corporation | Method and system for process scheduling from within a current context and switching contexts only when the next scheduled context is different |
US6185569B1 (en) * | 1998-06-29 | 2001-02-06 | Microsoft Corporation | Linked data structure integrity verification system which verifies actual node information with expected node information stored in a table |
US6415259B1 (en) * | 1999-07-15 | 2002-07-02 | American Management Systems, Inc. | Automatic work progress tracking and optimizing engine for a telecommunications customer care and billing system |
US20030037089A1 (en) * | 2001-08-15 | 2003-02-20 | Erik Cota-Robles | Tracking operating system process and thread execution and virtual machine execution in hardware or in a virtual machine monitor |
US20030120709A1 (en) * | 2001-12-20 | 2003-06-26 | Darren Pulsipher | Mechanism for managing execution of interdependent aggregated processes |
US20040078105A1 (en) * | 2002-09-03 | 2004-04-22 | Charles Moon | System and method for workflow process management |
US20040133622A1 (en) * | 2002-10-10 | 2004-07-08 | Convergys Information Management Group, Inc. | System and method for revenue and authorization management |
US6894991B2 (en) * | 2000-11-30 | 2005-05-17 | Verizon Laboratories Inc. | Integrated method for performing scheduling, routing and access control in a computer network |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5774668A (en) * | 1995-06-07 | 1998-06-30 | Microsoft Corporation | System for on-line service in which gateway computer uses service map which includes loading condition of servers broadcasted by application servers for load balancing |
US6571215B1 (en) * | 1997-01-21 | 2003-05-27 | Microsoft Corporation | System and method for generating a schedule based on resource assignments |
US7024669B1 (en) * | 1999-02-26 | 2006-04-04 | International Business Machines Corporation | Managing workload within workflow-management-systems |
US6728961B1 (en) * | 1999-03-31 | 2004-04-27 | International Business Machines Corporation | Method and system for dynamically load balancing a process over a plurality of peer machines |
US7296056B2 (en) * | 2001-07-30 | 2007-11-13 | International Business Machines Corporation | Method, system, and program for selecting one user to assign a work item in a workflow |
US7568199B2 (en) * | 2003-07-28 | 2009-07-28 | Sap Ag. | System for matching resource request that freeing the reserved first resource and forwarding the request to second resource if predetermined time period expired |
2004
- 2004-11-22 US US10/994,852 patent/US20060112388A1/en not_active Abandoned
2008
- 2008-07-15 US US12/173,387 patent/US8185908B2/en active Active
Cited By (76)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060129660A1 (en) * | 2004-11-12 | 2006-06-15 | Mueller Wolfgang G | Method and computer system for queue processing |
US20080133868A1 (en) * | 2005-08-29 | 2008-06-05 | Centaurus Data Llc | Method and apparatus for segmented sequential storage |
US8296550B2 (en) | 2005-08-29 | 2012-10-23 | The Invention Science Fund I, Llc | Hierarchical register file with operand capture ports |
US8275976B2 (en) | 2005-08-29 | 2012-09-25 | The Invention Science Fund I, Llc | Hierarchical instruction scheduler facilitating instruction replay |
US20080133883A1 (en) * | 2005-08-29 | 2008-06-05 | Centaurus Data Llc | Hierarchical store buffer |
US20080133893A1 (en) * | 2005-08-29 | 2008-06-05 | Centaurus Data Llc | Hierarchical register file |
US7644258B2 (en) | 2005-08-29 | 2010-01-05 | Searete, Llc | Hybrid branch predictor using component predictors each having confidence and override signals |
US20080133889A1 (en) * | 2005-08-29 | 2008-06-05 | Centaurus Data Llc | Hierarchical instruction scheduler |
US20070083739A1 (en) * | 2005-08-29 | 2007-04-12 | Glew Andrew F | Processor with branch predictor |
US8037288B2 (en) | 2005-08-29 | 2011-10-11 | The Invention Science Fund I, Llc | Hybrid branch predictor having negative ovedrride signals |
US20070083735A1 (en) * | 2005-08-29 | 2007-04-12 | Glew Andrew F | Hierarchical processor |
US8028152B2 (en) | 2005-08-29 | 2011-09-27 | The Invention Science Fund I, Llc | Hierarchical multi-threading processor for executing virtual threads in a time-multiplexed fashion |
US8266412B2 (en) | 2005-08-29 | 2012-09-11 | The Invention Science Fund I, Llc | Hierarchical store buffer having segmented partitions |
US9176741B2 (en) * | 2005-08-29 | 2015-11-03 | Invention Science Fund I, Llc | Method and apparatus for segmented sequential storage |
US20090019259A1 (en) * | 2006-03-23 | 2009-01-15 | Fujitsu Limited | Multiprocessing method and multiprocessor system |
US7831902B2 (en) * | 2006-05-31 | 2010-11-09 | Microsoft Corporation | Displaying interrelated changes in a grid |
US20070282462A1 (en) * | 2006-05-31 | 2007-12-06 | Microsoft Corporation | Displaying interrelated changes in a grid |
US9037961B1 (en) * | 2006-09-18 | 2015-05-19 | Credit Suisse Securities (Usa) Llc | System and method for storing a series of calculations as a function for implementation in a spreadsheet application |
US7958058B2 (en) * | 2007-03-02 | 2011-06-07 | International Business Machines Corporation | System, method, and service for migrating an item within a workflow process |
US20080215642A1 (en) * | 2007-03-02 | 2008-09-04 | Kwai Hing Man | System, Method, And Service For Migrating An Item Within A Workflow Process |
WO2008144239A3 (en) * | 2007-05-18 | 2009-06-04 | Network Automation | Agent workflow system and method |
WO2008144239A2 (en) * | 2007-05-18 | 2008-11-27 | Network Automation | Agent workflow system and method |
US8935371B2 (en) * | 2008-02-25 | 2015-01-13 | Sap Se | Hierarchical system operation in an adaptive computing environment |
US20090216783A1 (en) * | 2008-02-25 | 2009-08-27 | Alexander Gebhart | Hierarchical system operation in an adaptive computing environment |
US8869165B2 (en) | 2008-03-20 | 2014-10-21 | International Business Machines Corporation | Integrating flow orchestration and scheduling of jobs and data activities for a batch of workflows over multiple domains subject to constraints |
US20090241117A1 (en) * | 2008-03-20 | 2009-09-24 | International Business Machines Corporation | Method for integrating flow orchestration and scheduling for a batch of workflows |
US20090276761A1 (en) * | 2008-05-01 | 2009-11-05 | Intuit Inc. | Weighted performance metrics for financial software |
US8621437B2 (en) * | 2008-05-01 | 2013-12-31 | Intuit Inc. | Weighted performance metrics for financial software |
US8539035B2 (en) * | 2008-09-29 | 2013-09-17 | Fujitsu Limited | Message tying processing method and apparatus |
US20100082762A1 (en) * | 2008-09-29 | 2010-04-01 | Fujitsu Limited | Message tying processing method and apparatus |
US20100199281A1 (en) * | 2009-02-05 | 2010-08-05 | International Business Machines Corporation | Managing the Processing of Processing Requests in a Data Processing System Comprising a Plurality of Processing Environments |
US20120167096A1 (en) * | 2009-02-05 | 2012-06-28 | International Business Machines Corporation | Managing the Processing of Processing Requests in a Data Processing System Comprising a Plurality of Processing Environments |
US8850440B2 (en) * | 2009-02-05 | 2014-09-30 | International Business Machines Corporation | Managing the processing of processing requests in a data processing system comprising a plurality of processing environments |
US8850438B2 (en) | 2009-02-05 | 2014-09-30 | International Business Machines Corporation | Managing the processing of processing requests in a data processing system comprising a plurality of processing environments |
US9946576B2 (en) | 2010-05-07 | 2018-04-17 | Microsoft Technology Licensing, Llc | Distributed workflow execution |
US20110276977A1 (en) * | 2010-05-07 | 2011-11-10 | Microsoft Corporation | Distributed workflow execution |
US9524192B2 (en) * | 2010-05-07 | 2016-12-20 | Microsoft Technology Licensing, Llc | Distributed workflow execution |
US9286193B2 (en) * | 2011-04-12 | 2016-03-15 | Accenture Global Services Limited | Prioritization and assignment manager for an integrated testing platform |
US20120266023A1 (en) * | 2011-04-12 | 2012-10-18 | Brown Julian M | Prioritization and assignment manager for an integrated testing platform |
CN102789414A (en) * | 2011-04-12 | 2012-11-21 | 埃森哲环球服务有限公司 | Prioritization and assignment manager for an integrated testing platform |
CN103135741A (en) * | 2011-12-01 | 2013-06-05 | 施乐公司 | Multi-device power saving |
EP2600237A3 (en) * | 2011-12-01 | 2014-03-05 | Xerox Corporation | Multi-device power saving |
US9026825B2 (en) * | 2011-12-01 | 2015-05-05 | Xerox Corporation | Multi-device powersaving |
US20130145187A1 (en) * | 2011-12-01 | 2013-06-06 | Xerox Corporation | Multi-device powersaving |
US20130254772A1 (en) * | 2012-03-21 | 2013-09-26 | Phillip Morris International | Verification of complex workflows through internal assessment or community based assessment |
US9009675B2 (en) * | 2012-03-21 | 2015-04-14 | International Business Machines Corporation | Verification of complex workflows through internal assessment or community based assessment |
US10915826B2 (en) | 2013-03-14 | 2021-02-09 | International Business Machines Corporation | Evaluation of predictions in the absence of a known ground truth |
US9235808B2 (en) | 2013-03-14 | 2016-01-12 | International Business Machines Corporation | Evaluation of predictions in the absence of a known ground truth |
US9582760B2 (en) | 2013-03-14 | 2017-02-28 | International Business Machines Corporation | Evaluation of predictions in the absence of a known ground truth |
US11657317B2 (en) | 2013-06-24 | 2023-05-23 | Cylance Inc. | Automated systems and methods for generative multimodel multiclass classification and similarity analysis using machine learning |
US20140379619A1 (en) * | 2013-06-24 | 2014-12-25 | Cylance Inc. | Automated System For Generative Multimodel Multiclass Classification And Similarity Analysis Using Machine Learning |
US9262296B1 (en) | 2014-01-31 | 2016-02-16 | Cylance Inc. | Static feature extraction from structured files |
US9921830B2 (en) | 2014-01-31 | 2018-03-20 | Cylance Inc. | Generation of API call graphs from static disassembly |
US9378012B2 (en) | 2014-01-31 | 2016-06-28 | Cylance Inc. | Generation of API call graphs from static disassembly |
US9959276B2 (en) | 2014-01-31 | 2018-05-01 | Cylance Inc. | Static feature extraction from structured files |
US10235518B2 (en) | 2014-02-07 | 2019-03-19 | Cylance Inc. | Application execution control utilizing ensemble machine learning for discernment |
JPWO2016063482A1 (en) * | 2014-10-23 | 2017-08-17 | 日本電気株式会社 | Accelerator control device, accelerator control method, and computer program |
WO2016063482A1 (en) * | 2014-10-23 | 2016-04-28 | 日本電気株式会社 | Accelerator control device, accelerator control method, and program storage medium |
US9465940B1 (en) | 2015-03-30 | 2016-10-11 | Cylance Inc. | Wavelet decomposition of software entropy to identify malware |
US9946876B2 (en) | 2015-03-30 | 2018-04-17 | Cylance Inc. | Wavelet decomposition of software entropy to identify malware |
US10691799B2 (en) | 2015-04-16 | 2020-06-23 | Cylance Inc. | Recurrent neural networks for malware analysis |
US9495633B2 (en) | 2015-04-16 | 2016-11-15 | Cylance, Inc. | Recurrent neural networks for malware analysis |
US10558804B2 (en) | 2015-04-16 | 2020-02-11 | Cylance Inc. | Recurrent neural networks for malware analysis |
US10521268B2 (en) | 2015-06-26 | 2019-12-31 | Alibaba Group Holding Limited | Job scheduling method, device, and distributed system |
WO2016206564A1 (en) * | 2015-06-26 | 2016-12-29 | 阿里巴巴集团控股有限公司 | Operation scheduling method, device and distribution system |
JPWO2017131187A1 (en) * | 2016-01-29 | 2018-11-15 | 日本電気株式会社 | Accelerator control device, accelerator control method and program |
WO2017131187A1 (en) * | 2016-01-29 | 2017-08-03 | 日本電気株式会社 | Accelerator control device, accelerator control method and program |
US10831547B2 (en) | 2016-01-29 | 2020-11-10 | Nec Corporation | Accelerator control apparatus for analyzing big data, accelerator control method, and program |
WO2017167105A1 (en) * | 2016-03-31 | 2017-10-05 | 阿里巴巴集团控股有限公司 | Task-resource scheduling method and device |
US10936359B2 (en) | 2016-03-31 | 2021-03-02 | Alibaba Group Holding Limited | Task resource scheduling method and apparatus |
US10893000B2 (en) | 2017-11-21 | 2021-01-12 | International Business Machines Corporation | Diagonal scaling of resource allocations and application instances in a distributed computing environment |
US10887250B2 (en) | 2017-11-21 | 2021-01-05 | International Business Machines Corporation | Reducing resource allocations and application instances in diagonal scaling in a distributed computing environment |
US10812407B2 (en) | 2017-11-21 | 2020-10-20 | International Business Machines Corporation | Automatic diagonal scaling of workloads in a distributed computing environment |
US11360804B2 (en) * | 2018-06-29 | 2022-06-14 | International Business Machines Corporation | Resource management for parent child workload |
US20200004580A1 (en) * | 2018-06-29 | 2020-01-02 | International Business Machines Corporation | Resource management for parent child workload |
US20230076061A1 (en) * | 2021-09-07 | 2023-03-09 | Hewlett Packard Enterprise Development Lp | Cascaded priority mapping |
Also Published As
Publication number | Publication date |
---|---|
US8185908B2 (en) | 2012-05-22 |
US20080276242A1 (en) | 2008-11-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8185908B2 (en) | | Dynamic scheduling in a distributed environment
US9201690B2 (en) | | Resource aware scheduling in a distributed computing environment
Harchol-Balter et al. | | Exploiting process lifetime distributions for dynamic load balancing
CN107992359B (en) | | Task scheduling method for cost perception in cloud environment
US8612987B2 (en) | | Prediction-based resource matching for grid environments
JP5845809B2 (en) | | Efficient parallelization of software analysis in distributed computing environment by intelligent and dynamic load balancing
US8332862B2 (en) | | Scheduling ready tasks by generating network flow graph using information receive from root task having affinities between ready task and computers for execution
JP4781089B2 (en) | | Task assignment method and task assignment device
US20170004009A1 (en) | | Job distribution within a grid environment
US8843929B1 (en) | | Scheduling in computer clusters
CN103701886A (en) | | Hierarchic scheduling method for service and resources in cloud computation environment
JP5845813B2 (en) | | A node computation initialization method for efficient parallel analysis of software in distributed computing environments
CN109857535B (en) | | Spark JDBC-oriented task priority control implementation method and device
CN112148468B (en) | | Resource scheduling method and device, electronic equipment and storage medium
JP5845811B2 (en) | | Dynamic and intelligent partial computation management for efficient parallelization of software analysis in distributed computing environments
JP2012099110A (en) | | Scheduling policy for efficient parallelization of software analysis in distributed computing environment
CN112130966A (en) | | Task scheduling method and system
JP5845810B2 (en) | | Efficient partial computation for parallel analysis of software in distributed computing environments
Qureshi et al. | | Grid resource allocation for real-time data-intensive tasks
Ghazali et al. | | A classification of Hadoop job schedulers based on performance optimization approaches
Kanemitsu et al. | | Prior node selection for scheduling workflows in a heterogeneous system
Quan | | Mapping heavy communication workflows onto grid resources within an SLA context
US11836532B2 (en) | | OS optimized workflow allocation
Nzanywayingoma et al. | | Task scheduling and virtual resource optimising in Hadoop YARN-based cloud computing environment
Massa et al. | | Heterogeneous quasi-partitioned scheduling
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TANIGUCHI, MASAAKI;KUBO, HARUNOBO;REEL/FRAME:016275/0306;SIGNING DATES FROM 20041105 TO 20041110
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE
| AS | Assignment | Owner name: MIDWAY TECHNOLOGY COMPANY LLC, MICHIGAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:037704/0257. Effective date: 20151231
| AS | Assignment | Owner name: SERVICENOW, INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MIDWAY TECHNOLOGY COMPANY LLC;REEL/FRAME:038324/0816. Effective date: 20160324