EP0697654A1 - Load distribution method and system - Google Patents

Load distribution method and system

Info

Publication number
EP0697654A1
Authority
EP
European Patent Office
Prior art keywords
information processing
processing apparatus
threads
distributed task
load
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP95304960A
Other languages
German (de)
French (fr)
Other versions
EP0697654B1 (en)
Inventor
Yoshiaki C/O Canon K.K. Sudo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Publication of EP0697654A1 publication Critical patent/EP0697654A1/en
Application granted granted Critical
Publication of EP0697654B1 publication Critical patent/EP0697654B1/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 Techniques for rebalancing the load in a distributed system
    • G06F 9/5088 Techniques for rebalancing the load in a distributed system involving task migration

Definitions

  • FIG. 1 is a diagram illustrating distributed information processing apparatuses in the first embodiment.
  • Each of the information processing apparatuses can operate as an ordinary information processing apparatus by itself. These information processing apparatuses are connected to one another with a network and can communicate with one another.
  • Each of the information processing apparatuses does not always include a complete set of input and output devices, and does not always have the same processing capability. For example, the number of processors possessed by each of the information processing apparatuses may differ, or the calculation capability of each processor may differ.
  • FIG. 2 is a schematic diagram illustrating a load distribution method according to the present embodiment.
  • Respective information processing apparatuses (hereinafter termed "nodes") 201 are connected to one another with a network.
  • a microkernel 202 of an operating system controls tasks, main storage within the corresponding node, kernel-level threads and the like.
  • a load distribution server 203 executes the load distribution method of the present embodiment.
  • the servers 203 in the respective nodes 201 perform load distribution by communicating and cooperating with one another.
  • Reference numeral 204 represents a distributed task distributed in a plurality of nodes. Threads 205 operate in each distributed task 204.
  • Distributed virtual shared memory servers 206 realize distributed virtual shared memory of a distributed task.
  • FIG. 3 is a flowchart illustrating the load distribution method of the present embodiment.
  • In step S1, load information of each information processing apparatus is collected as in the conventional load distribution method.
  • In step S2, it is determined whether the load information of each information processing apparatus collected in step S1 is equal. If the result of the determination is affirmative, the process returns to step S1. If the result of the determination is negative, i.e., if heavily loaded nodes and lightly loaded nodes are present, the process proceeds to step S3.
  • In step S3, the degree of distribution of a distributed task in operation (the number of nodes where the distributed task operates) is controlled.
  • In step S4, threads within the distributed task in operation are transferred from a heavily loaded node to a lightly loaded node.
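The loop of steps S1 through S4 can be sketched as follows. The load metric (runnable-thread count per node), the node names, and the helper functions are illustrative assumptions, not part of the patent:

```python
# Illustrative sketch of the S1-S4 load-distribution loop.
# The data model (node -> list of threads) is an assumption.

def collect_load(nodes):
    """Step S1: gather a load figure per node (here, its thread count)."""
    return {name: len(threads) for name, threads in nodes.items()}

def is_balanced(load, tolerance=1):
    """Step S2: treat loads as equal when max and min differ by at most `tolerance`."""
    return max(load.values()) - min(load.values()) <= tolerance

def rebalance(nodes):
    """Steps S3-S4: move threads from the heaviest to the lightest node
    until the loads are balanced."""
    while not is_balanced(collect_load(nodes)):
        load = collect_load(nodes)
        heavy = max(load, key=load.get)
        light = min(load, key=load.get)
        nodes[light].append(nodes[heavy].pop())
    return nodes

nodes = {"A": ["t1", "t2"], "B": ["t3", "t4", "t5", "t6", "t7", "t8"]}
rebalance(nodes)   # ends with four threads on each node
```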
  • The degree of distribution of a distributed task is controlled according to two methods, i.e., a method of increasing the degree of distribution and a method of decreasing the degree of distribution.
  • the distributed virtual shared memory space within the distributed task is expanded to another node, so that threads within the distributed task can operate therein.
  • FIG. 4 is a diagram illustrating the concept of expansion of a distributed task.
  • FIG. 5 is a diagram illustrating the concept of compression of a distributed task.
  • In the upper portion of FIG. 5, four threads are present in node B in addition to two threads shared by the distributed virtual shared memory, i.e., six threads are present in total in node B. On the other hand, two threads are present in node A. In order to distribute the load of the heavily loaded node B, sharing of threads in the distributed virtual shared memory is cancelled to decrease the degree of distribution and to compress the distributed task. As a result, as shown in the lower portion of FIG. 5, four threads are present in each of nodes A and B, so that load distribution is realized.
  • the degree of distribution of the distributed task is controlled, and load distribution is performed by expanding a distributed task operating in a heavily loaded node to a lightly loaded node, or compressing the distributed task into the lightly loaded node (from a heavily loaded node), and transferring threads within the distributed task.
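The expand/compress control above can be modelled minimally as follows. The class, its fields, and the rule that a node must be emptied of threads before sharing is cancelled are all illustrative assumptions:

```python
# Minimal model of controlling a distributed task's degree of distribution.
# The class and its invariants are assumptions for illustration.

class DistributedTask:
    def __init__(self, threads_by_node):
        self.threads = dict(threads_by_node)   # node name -> threads running there

    @property
    def degree(self):
        """Degree of distribution: the number of nodes the task spans."""
        return len(self.threads)

    def expand(self, node):
        """Extend the task's shared virtual space to `node` so threads may run there."""
        self.threads.setdefault(node, [])

    def compress(self, node):
        """Cancel sharing on `node`; threads must be transferred off it first."""
        if self.threads.get(node):
            raise RuntimeError("transfer threads before compressing")
        self.threads.pop(node, None)

task = DistributedTask({"B": ["u1", "u2", "u3", "u4"]})
task.expand("A")                                     # degree rises from 1 to 2
task.threads["A"].append(task.threads["B"].pop())    # transfer one thread to node A
```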
  • Only one or the other of expansion and compression of a distributed task may be performed, or both may be performed, depending on the distribution system. For example, in a system that generates a distributed task while expanding it to a number of nodes equal to the degree of parallel operations required for the task, or in a system that expands a distributed task to another node immediately when the degree of parallel operations required for the task increases while the distributed task operates, the corresponding load distribution server must compress the distributed task in order to equalize the load of each node.
  • a load distribution server performs expansion of a distributed task in order to distribute the load of each node.
  • the effect of load distribution may be obtained merely by suppressing expansion of the distributed task.
  • the effect of load distribution will be improved by also performing compression of the distributed task.
  • In step S6, the load information of each information processing apparatus collected in step S1 is recorded, and load distribution threads are notified of the load information.
  • In step S7, the process waits for a predetermined time period, and then returns to the processing of step S1.
  • In step S5, the collected load information is checked.
  • Then, the processing from step S2 to step S4 shown in FIG. 3 is executed. That is, the collected load information is stored in a storage device which can be referred to from both a thread for performing load distribution and a thread for performing collection of load information, or the collected load information is transferred in the form of a message or the like.
  • FIG. 7 is a flowchart illustrating a load distribution method according to the second embodiment.
  • In step S71, each load distribution server monitors information relating to the load of the corresponding node. If the load of the corresponding node decreases or increases, the process proceeds to the processing of step S72, where it is intended to perform load distribution. If it is determined in step S71 that the load does not change, the process returns to the processing of step S71.
  • In step S72, load information about another information processing apparatus is collected. If it is determined in step S73 that the load of the concerned information processing apparatus is heavier than that of the other information processing apparatus whose load information was collected in step S72, the process proceeds to the processing of step S75. If it is determined in step S75 that there is not a distributed task in operation, the process proceeds to step S76, where a distributed task is expanded to a lightly loaded node.
  • If it is determined in step S74 that the load of the concerned information processing apparatus is lighter than that of the other information processing apparatus, the process proceeds to the processing of step S77, where it is determined whether there is a distributed task in operation. If the result of the determination is negative, the process proceeds to step S78, where a distributed task is expanded from a heavily loaded node. After executing the processing of steps S76 and S78, the process proceeds to step S79. In step S79, threads within the distributed task are transferred to a lightly loaded node.
  • each load distribution server basically monitors information relating to the corresponding node, and intends to perform load distribution when the load of the node decreases or increases.
  • the number of such thresholds is not limited to one, but two thresholds may also be provided.
  • upper and lower thresholds are compared with the current load. If the load exceeds the upper threshold, the server intends to reduce the load of the node. If the load is less than the lower threshold, the server intends to increase the load of the node.
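The two-threshold rule above can be sketched as a simple hysteresis check. The threshold values and the function name are illustrative assumptions:

```python
# Sketch of the two-threshold load check: act only when the current load
# leaves the [lower, upper] band. Threshold values are illustrative.

def load_action(load, lower=2, upper=6):
    if load > upper:
        return "reduce"    # the server intends to shed load from this node
    if load < lower:
        return "increase"  # the server intends to attract load to this node
    return "none"          # within the band: leave the load alone
```

Keeping a band between the two thresholds, rather than a single cut-off, avoids oscillation when the load hovers near one value.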
  • load distribution is realized.
  • a node having a lighter load than the load of the concerned node is searched for by collecting load information about other nodes. If a distributed task in operation is present in the concerned node and the distributed task is expanded to a more lightly loaded node than the concerned node, threads within the distributed task are transferred from the concerned node to the node having the lighter load.
  • a node having a load higher than the load of the concerned node is searched for by collecting load information of other nodes.
  • load distribution can be realized by expanding a distributed task from a heavily loaded node or compressing a distributed task to a lightly loaded node, and transferring threads from the heavily loaded node.
  • threads are transferred from a heavily loaded node to a lightly loaded node using a thread transfer mechanism provided in an operating system.
  • threads can be transferred using a user-level thread control mechanism in a distributed task.
  • In the user-level thread control mechanism, frames called user-level threads are provided, and a program is thereby operated.
  • The user-level thread operates in a thread provided by the operating system (called a kernel thread) under the user-level thread control mechanism (principally provided by a library and executed in the user space of an application), which performs operations such as stopping, resumption of execution, and the like.
  • the state of the user-level thread is recorded in the user space of the application.
  • a certain kernel thread reads a recorded state, and that state is set as the state of the concerned thread.
  • the user-level thread control mechanism within the distributed task shown in FIG. 8 stores the state of the user-level thread in the virtual storage space in order to stop the thread which is currently operating within the node C.
  • one kernel thread in node C can be stopped, whereby the load of node C is reduced.
  • the corresponding load distribution server provides the user-level thread control mechanism with a thread formation request.
  • the user-level thread control mechanism generates a kernel thread within node A, and causes execution of the user-level thread stopped in node C to resume.
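Because the user-level thread's context lives in the distributed virtual shared memory, "transferring" it reduces to stopping a kernel thread on one node and resuming from the saved state on another. A minimal sketch, with the shared space modelled as a plain dictionary and all names assumed:

```python
# The shared space is modelled as a dictionary visible to every node of
# the distributed task; the context contents and node roles are illustrative.

shared_space = {"runnable": []}      # stand-in for distributed virtual shared memory

def stop_user_thread(context):
    """Node C side: record the user-level thread's state in the shared
    space; the kernel thread that carried it can then be stopped."""
    shared_space["runnable"].append(context)

def resume_user_thread():
    """Node A side: a newly created kernel thread reads a recorded state
    and adopts it, resuming the user-level thread."""
    return shared_space["runnable"].pop(0)

stop_user_thread({"pc": 64, "stack": "lives in shared space"})
ctx = resume_user_thread()
```

No thread image is copied explicitly: the state was already reachable from every node through the shared virtual storage.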
  • load distribution is realized.
  • the load distribution server need not issue a request to transfer threads.
  • load distribution can be performed by controlling the number of kernel threads allocated to the distributed task for each node.
  • FIG. 9 is a flowchart illustrating a load distribution method in this case.
  • Processing from step S71 to step S78 shown in FIG. 9 is the same as the processing from step S71 to step S78 shown in FIG. 7. Hence, description thereof will be omitted.
  • When it has been determined that the load of the concerned node is lighter than the load of another node, the process proceeds to the processing of step S80, where the number of kernel threads in the concerned node is increased.
  • When the load of the concerned node is heavier than the load of another node, the number of kernel threads in the concerned node is reduced.
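Steps S80 and S81 can be sketched as adjusting a per-node kernel-thread allocation rather than moving threads. The allocation table, node names, and load comparison are illustrative assumptions:

```python
# Sketch: balancing by changing how many kernel threads each node
# allocates to the distributed task; runnable user-level threads are then
# picked up by whichever kernel threads exist. Names are illustrative.

allocation = {"A": 1, "B": 3}   # kernel threads serving one distributed task

def adjust(node, my_load, other_load):
    """A lightly loaded node adds a kernel thread; a heavily loaded node
    removes one (never dropping below one)."""
    if my_load < other_load:
        allocation[node] += 1
    elif my_load > other_load and allocation[node] > 1:
        allocation[node] -= 1
    return allocation[node]

adjust("A", my_load=1, other_load=3)   # node A grows to 2 kernel threads
adjust("B", my_load=3, other_load=1)   # node B shrinks to 2
```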
  • the load in the information processing apparatuses can be distributed without actually transferring threads within the distributed task.
  • By periodically circulating load information of a concerned apparatus as a message, load information can be collected efficiently.
  • By collecting load information of other information processing apparatuses only when the load of a concerned apparatus increases or decreases and expansion of a distributed task and transfer of threads must be performed, load information can be collected efficiently.

Abstract

In a load distribution method, the load in the entire distributed system is uniformly distributed. In a system in which a plurality of information processing apparatuses (nodes) are connected with a network, the degree of distribution of each distributed task is controlled by expanding or compressing the task. Load distribution is performed by expanding a distributed task operating in a heavily loaded node to a lightly loaded node, or compressing the distributed task from the heavily loaded node, and transferring threads within the distributed task. Load distribution servers execute the load distribution method.

Description

    BACKGROUND OF THE INVENTION Field of the Invention
  • This invention relates to a load distribution method and system for controlling a task operating in a plurality of information processing apparatuses.
  • Description of the Related Art
  • In multiprocessor-type information processing apparatuses each including a plurality of processors, a program form called a task/thread model capable of effectively utilizing these processors has been proposed. In this model, a program is divided into a plurality of execution modules called threads, and units called tasks to which resources are allocated. The threads are units to which processor resources are allocated. Other resources, such as a storage space resource and the like, are allocated to tasks, and are released for all threads within each task. The task/thread model is a model for programs for efficiently using processor resources in a multiprocessor-type information processing apparatus.
  • User-level threads, for example, capable of switching contexts and generating threads within a user space in an ordinary task/thread-model program have also been proposed. This type of thread has been proposed in order to improve such disadvantages of an ordinary task/thread model as that, for example, generation of threads and switching of contexts require a system call to an operating system (OS) kernel, resulting in a low speed of processing. The user-level thread has such advantages as that a plurality of contexts can be provided, and generation of threads, switching of contexts, and the like can be performed within a user space, permitting a high speed of processing. In contrast to such a user-level thread, a thread controlled by a conventional OS kernel is called a kernel-level thread.
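The distinction can be illustrated with a toy user-level scheduler: the "threads" are Python generators and every context switch happens in user space, with no kernel involvement. This is an illustrative analogy under assumed names, not the mechanism of the patent:

```python
# Toy user-level threading: generators as threads, a round-robin scheduler
# standing in for the user-level thread control mechanism. Purely illustrative.

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"      # yielding here is a user-space context switch

def run(threads):
    out = []
    while threads:
        t = threads.pop(0)
        try:
            out.append(next(t))  # run the thread for one step
            threads.append(t)    # requeue it (round-robin)
        except StopIteration:
            pass                 # thread finished
    return out

print(run([worker("a", 2), worker("b", 1)]))   # ['a:0', 'b:0', 'a:1']
```

Each switch is an ordinary function return inside the process, which is why user-level switching avoids the system-call cost described above.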
  • A distributed task/thread model has also been proposed in which, using a distributed virtual shared memory method for realizing virtual shared storage between tasks in a plurality of information processing apparatuses by controlling a conventional memory management unit and a network between the information processing apparatuses without providing a particular apparatus, the entire virtual storage space within each task is shared and a plurality of threads are operated in the shared state. In this model, the entire main storage in respective tasks in a plurality of information processing apparatuses is made distributed virtual shared memory, and those tasks are considered together as one distributed task having at least one thread. In the distributed task/thread model, multiprocessor-type information processing apparatuses in the above-described task/thread model are replaced by connection of a plurality of information processing apparatuses with a network so as to use these distributed resources efficiently. In distributed virtual shared memory, a network is mainly used for transfer of page data having a fixed length, so that a high-speed network can be efficiently used.
  • In a system comprising distributed information processing apparatuses, load is not uniformly distributed among the information processing apparatuses. Hence, various kinds of load distribution methods have been proposed in order to prevent concentration of load. Conventional load distribution methods comprise, for example, a method called task migration, in which a task in operation is transferred from a heavily loaded information processing apparatus to a lightly loaded information processing apparatus, and a method called remote task execution, in which a task to be executed in a heavily loaded information processing apparatus is instead executed in a lightly loaded information processing apparatus. In the task migration method, in which a task in operation is transferred, the entire task, in its present state of execution, must be transferred to an entirely different information processing apparatus. In the remote task execution method, such transfer is unnecessary, and it is only necessary to transfer a small amount of information, such as the name of the task in execution, the environment for execution, arguments and the like. However, since it is impossible to transfer a task which has once been started, the time of execution of load distribution is limited.
  • In both the above-described task migration method and the remote task execution method, a task is completely transferred to and operates in another information processing apparatus, although the time of transfer differs. This causes no problem when distributed information processing apparatuses have the same processing capability, the load of the overall distributed system is high, and a task in a particularly heavily loaded information processing apparatus can be transferred to a lightly loaded information processing apparatus. Suppose, however, a case in which the load of the entire distributed system is light and the number of tasks in operation is less than the number of information processing apparatuses. In such a case, even if there is a task in which a large number of threads are generated, so that the load is heavy in the information processing apparatus in which the task operates, transferring the task to a lightly loaded information processing apparatus cannot equalize the load; only the load of the first information processing apparatus is reduced.
  • That is, when the number of tasks in operation is small, in the above-described load distribution methods in which a task is completely transferred and operates, information processing apparatuses to which no task is allocated are present, thereby wasting the processing capability of these apparatuses.
  • Consider a case in which respective information processing apparatuses have different processing capabilities (including a case in which the number of processors possessed by each information processing apparatus differs). In such a case, when the load of each information processing apparatus is not equal and therefore it is intended to transfer a task, no problem arises in the transfer of the task from an information processing apparatus having a low processing capability to an information processing apparatus having a high processing capability. However, the transfer of the task from an information processing apparatus having a high processing capability to an information processing apparatus having a low processing capability causes the following problems.
  • That is, when load is concentrated and several tasks operate in an information processing apparatus A having a high processing capability, and a task in operation has been terminated and no load is present in an information processing apparatus B having a low processing capability, one of the tasks in the apparatus A is transferred to the apparatus B. Thereafter, the apparatus A completes execution of remaining tasks because it has a high processing capability, but the task transferred to the apparatus B continues to operate because the apparatus B has a low processing capability. Accordingly, an inversion phenomenon occurs, i.e., the execution of the task would already have been completed if it had not been transferred to the apparatus B.
  • In order to prevent such a phenomenon, the task transferred to the apparatus B may be retransferred to the apparatus A. However, in the remote task execution method, a task which has already operated cannot be transferred. Also, in the case of the task migration method, retransfer of a task causes a decrease in processing efficiency. It is conceivable to select a lightly loaded task having a short processing time period from among the tasks operating in the apparatus A, so that it is meaningful to execute that task in the apparatus B. However, it is difficult to select such a task during operation with a conventional technique. In consideration of the above-described problems, in general, it has not been actively considered to transfer a task to an information processing apparatus having a low processing capability.
  • SUMMARY OF THE INVENTION
  • According to the present invention, in a system having a mechanism of connecting a plurality of distributed information processing apparatuses with a network, and executing tasks by distributing threads within a distributed task sharing virtual storage space, when it is determined that load is not equal as a result of collection of load information from (or at least relating to) each of the information processing apparatuses, by controlling the degree of distribution of a distributed task in operation and transferring threads operating within the distributed task, the load in the information processing apparatuses is distributed, and the processing capabilities of the information processing apparatuses of the entire system can be sufficiently utilized.
  • By providing a user-level thread control mechanism in a distributed task and using context switching in a user distributed virtual shared memory space, the load in the information processing apparatuses can be distributed without actually transferring threads within the distributed task.
  • By periodically circulating load information of a concerned apparatus as a message, efficient load information can be collected.
  • By collecting load information from (or relating to) other information processing apparatuses only when the state of the load of a concerned apparatus increases or decreases, and expansion of a distributed task and transfer of threads must be performed, load information can be collected efficiently.
  • According to one aspect, the present invention relates to a load distribution method having a mechanism of connecting a plurality of information processing apparatuses with a network, and executing a distributed task whose main storage is shared by a distributed virtual shared memory method, present in the plurality of information processing apparatuses by distributing threads in the respective information processing apparatuses, comprising the steps of collecting load information about the plurality of information processing apparatuses, controlling the degree of distribution of a distributed task in operation in accordance with the collected load information, and transferring threads operating in a heavily loaded information processing apparatus within the distributed task to a lightly loaded information processing apparatus.
  • According to another aspect, the present invention relates to a load distribution system having a mechanism of connecting a plurality of information processing apparatuses with a network, and executing a distributed task, whose main storage is shared by a distributed virtual shared memory method, present in the plurality of information processing apparatuses by distributing threads in the respective information processing apparatuses, comprising collection means for collecting load information of the plurality of information processing apparatuses, control means for controlling the degree of distribution of a distributed task in operation in accordance with the collected load information, and transfer means for transferring threads operating in a high-load information processing apparatus within the distributed task to a low-load information processing apparatus.
  • The foregoing and other objects, advantages and features of the present invention will become more apparent from the following description of the preferred embodiments taken in conjunction with the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
    • FIG. 1 is a diagram illustrating the configuration of distributed information processing apparatuses using a load distribution method according to a first embodiment of the present invention;
    • FIG. 2 is a schematic diagram illustrating the load distribution method according to the first embodiment;
    • FIG. 3 is a flowchart illustrating processing procedures in the load distribution method of the first embodiment;
    • FIG. 4 is a diagram illustrating expansion of a distributed task;
    • FIG. 5 is a diagram illustrating compression of a distributed task;
    • FIG. 6 is a flowchart when collection of load information is performed in another thread;
    • FIG. 7 is a flowchart when load distribution is autonomously performed on distributed nodes according to a second embodiment of the present invention;
    • FIG. 8 is a diagram illustrating the relationship between kernel-level threads and user-level threads in a third embodiment of the present invention; and
    • FIG. 9 is a flowchart of a load distribution method when using movement of user-level threads in the third embodiment.
    DESCRIPTION OF THE PREFERRED EMBODIMENTS First Embodiment
  • A first embodiment of the present invention will now be described in detail with reference to the drawings.
  • FIG. 1 is a diagram illustrating distributed information processing apparatuses in the first embodiment. Each of the information processing apparatuses can operate as an ordinary information processing apparatus by itself. These information processing apparatuses are connected to one another with a network and can communicate with one another. Each of the information processing apparatuses does not always include a complete set of input and output devices, and does not always have the same processing capability. For example, the number of processors possessed by each of the information processing apparatuses may differ, or the calculation capability of each processor may differ.
  • FIG. 2 is a schematic diagram illustrating a load distribution method according to the present embodiment. Respective information processing apparatuses (hereinafter termed "nodes") 201 are connected to one another with a network. A microkernel 202 of an operating system controls tasks, main storage within the corresponding node, kernel-level threads and the like. A load distribution server 203 executes the load distribution method of the present embodiment. The servers 203 in the respective nodes 201 perform load distribution by communicating and cooperating with one another. Reference numeral 204 represents a distributed task distributed in a plurality of nodes. Threads 205 operate in each distributed task 204. Distributed virtual shared memory servers 206 realize distributed virtual shared memory of a distributed task.
  • FIG. 3 is a flowchart illustrating the load distribution method of the present embodiment.
  • In step S1, load information of each information processing apparatus is collected, as in the conventional load distribution method. In step S2, it is determined whether the load information of each information processing apparatus collected in step S1 is equal. If the result of the determination is affirmative, the process returns to step S1. If the result of the determination is negative, i.e., if heavily loaded nodes and lightly loaded nodes are present, the process proceeds to step S3. In step S3, the degree of distribution of a distributed task in operation (the number of nodes where the distributed task operates) is controlled. In step S4, threads within the distributed task in operation are transferred from a heavily loaded node to a lightly loaded node.
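As a concrete illustration, the loop of steps S1-S4 can be sketched as follows. The node names, the thread-count load metric, and the `expand_task` placeholder are illustrative assumptions for this sketch, not the mechanism of the embodiment itself.

```python
# Load per node is approximated here by the number of runnable threads.
threads = {"A": ["t1", "t2"], "B": ["t3", "t4", "t5", "t6", "t7", "t8"]}

def expand_task(src, dst):
    """S3: expand the distributed task's shared space to dst (a no-op in
    this toy model, which keeps every node's thread list in one dict)."""
    pass

def balance_once(nodes, tolerance=1):
    """One pass of FIG. 3: collect (S1), compare (S2), rebalance (S3-S4)."""
    loads = {n: len(threads[n]) for n in nodes}           # S1: collect load info
    heavy = max(nodes, key=loads.get)
    light = min(nodes, key=loads.get)
    if loads[heavy] - loads[light] <= tolerance:          # S2: loads equal enough?
        return False
    expand_task(heavy, light)                             # S3: degree of distribution
    while len(threads[heavy]) - len(threads[light]) > 1:  # S4: transfer threads
        threads[light].append(threads[heavy].pop())
    return True

balance_once(["A", "B"])
# threads now hold four entries on each of node A and node B
```

One such pass equalizes the two-versus-six split used in the FIG. 5 example below into four threads per node.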
  • The degree of distribution of a distributed task is controlled according to two methods, i.e., a method of increasing the degree of distribution, and a method of decreasing the degree of distribution. In order to increase the degree of distribution, the distributed virtual shared memory space within the distributed task is expanded to another node, so that threads within the distributed task can operate therein. FIG. 4 is a diagram illustrating the concept of expansion of a distributed task.
  • In order to decrease the degree of distribution, all threads are transferred from a node where the distributed task operates as a result of expansion to another node where the distributed task is present, and sharing in the distributed virtual shared memory is cancelled for the node from where all the threads have been transferred. FIG. 5 is a diagram illustrating the concept of compression of a distributed task.
  • In the upper portion of FIG. 5, four threads are present in node B in addition to two threads shared by the distributed virtual shared memory, i.e., six threads are present in total in node B. On the other hand, two threads are present in node A. In order to distribute the load of the heavily loaded node B, sharing of threads in the distributed virtual shared memory is cancelled to decrease the degree of distribution and to compress the distributed task. As a result, as shown in the lower portion of FIG. 5, four threads are present in each of nodes A and B, so that load distribution is realized.
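A minimal model of expansion (FIG. 4) and compression (FIG. 5) might treat the degree of distribution as simply the set of nodes sharing the task's address space; the class and method names below are invented for illustration and are not drawn from the embodiment.

```python
class DistributedTask:
    """Toy model: the degree of distribution is the set of sharing nodes."""

    def __init__(self, nodes):
        self.nodes = set(nodes)                 # nodes sharing the address space
        self.threads = {n: [] for n in self.nodes}

    def expand(self, node):
        """FIG. 4: extend the shared space so threads may run on `node`."""
        self.nodes.add(node)
        self.threads.setdefault(node, [])

    def compress(self, node, target):
        """FIG. 5: move every thread off `node`, then cancel its sharing."""
        self.threads[target].extend(self.threads.pop(node))
        self.nodes.discard(node)

task = DistributedTask(["A", "B"])
task.threads["B"] = ["t1", "t2"]
task.compress("B", "A")
# task.nodes == {"A"}; both threads now run on node A
```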
  • In the load distribution method of the present embodiment, by performing expansion and compression of a distributed task in the above-described manner, the degree of distribution of the distributed task is controlled, and load distribution is performed by expanding a distributed task operating in a heavily loaded node to a lightly loaded node, or compressing the distributed task into the lightly loaded node (from a heavily loaded node), and transferring threads within the distributed task.
  • Only one or the other of expansion and compression of a distributed task may be performed, or both expansion and compression of a distributed task may be performed, depending on the distribution system. For example, in a system of generating a distributed task while expanding it to nodes whose number equals the degree of parallel operations required for the task, or in a system of expanding a distributed task to another node immediately when the degree of parallel operations required for the task has increased while the distributed task operates, the corresponding load distribution server must compress the distributed task in order to equalize the load of each node.
  • On the other hand, in a system in which a task does not automatically become a distributed task and is not distributed to another node, and a distributed task is not expanded to another node even if the degree of parallel operations increases, a load distribution server performs expansion of a distributed task in order to distribute the load of each node. In such a case, when a distributed task is not compressed and the load of each node uniformly increases, the effect of load distribution may be obtained merely by suppressing expansion of the distributed task. However, the effect of load distribution will be improved by also performing compression of the distributed task.
  • Although in the flowchart shown in FIG. 3, collection of load information and load distribution are performed within the same flow, these two operations may be performed in different flows, as shown in FIG. 6. Processing from step S1 to step S4 shown in FIG. 6 is the same as the processing from step S1 to step S4 shown in FIG. 3. Hence, description thereof will be omitted, and a description will be provided only of processing from step S5 to step S7. In step S6, the load information of each information processing apparatus collected in step S1 is recorded, and load distribution threads are notified of the load information. In step S7, the process waits for a predetermined time period, and then returns to the processing of step S1. In step S5, the collected load information is checked. In the following processing from step S2 to step S4, the processing from step S2 to step S4 shown in FIG. 3 is executed. That is, the collected load information is stored in a storage device which can be referred to from both a thread for performing load distribution and a thread for performing collection of load information, and is transferred in the form of a message or the like.
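The split between a collection flow (S1, S6, S7) and a distribution flow (S5, S2-S4) can be sketched with two threads sharing a queue. The queue-as-message-channel, the sample data, and all names are assumptions of this sketch; the periodic wait of step S7 is omitted.

```python
import queue
import threading

load_reports = queue.Queue()  # shared store / message channel between flows

def collector(samples):
    """S1/S6: collect a load snapshot per pass and notify the distributor."""
    for snapshot in samples:
        load_reports.put(snapshot)
        # S7: a real server would sleep here before the next sample

def distributor(rounds, log):
    """S5/S2: check each snapshot; record where S3-S4 would rebalance."""
    for _ in range(rounds):
        snapshot = load_reports.get()
        heavy = max(snapshot, key=snapshot.get)
        light = min(snapshot, key=snapshot.get)
        if snapshot[heavy] - snapshot[light] > 1:
            log.append((heavy, light))  # S3-S4 would expand/transfer here

samples = [{"A": 2, "B": 6}, {"A": 4, "B": 4}]
log = []
t1 = threading.Thread(target=collector, args=(samples,))
t2 = threading.Thread(target=distributor, args=(len(samples), log))
t1.start(); t2.start()
t1.join(); t2.join()
# log == [("B", "A")]: only the unbalanced snapshot calls for rebalancing
```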
  • Second Embodiment
  • In the first embodiment, a description has been provided of a case in which a server for performing load distribution can perform decision making in a centralized manner. In a second embodiment of the present invention, however, a description will be provided of a case in which respective servers autonomously perform load distribution in a distributed state.
  • FIG. 7 is a flowchart illustrating a load distribution method according to the second embodiment.
  • In step S71, each load distribution server monitors information relating to the load of the corresponding node. If the load of the corresponding node decreases or increases, the process proceeds to the processing of step S72, where the server attempts to perform load distribution. If it is determined in step S71 that the load does not change, the process returns to the processing of step S71. In step S72, load information about another information processing apparatus is collected. If it is determined in step S73 that the load of the concerned information processing apparatus is heavier than that of the other information processing apparatus collected in step S72, the process proceeds to the processing of step S75. If it is determined in step S75 that there is no distributed task in operation, the process proceeds to step S76, where a distributed task is expanded to a lightly loaded node. If it is determined in step S74 that the load of the concerned information processing apparatus is lower than that of the other information processing apparatus, the process proceeds to the processing of step S77, where it is determined whether there is a distributed task in operation. If the result of the determination is negative, the process proceeds to step S78, where a distributed task is expanded from a heavily loaded node. After execution of the processing of step S76 or S78, the process proceeds to step S79. In step S79, threads within the distributed task are transferred to a lightly loaded node.
  • In the second embodiment, each load distribution server basically monitors information relating to the corresponding node, and intends to perform load distribution when the load of the node decreases or increases. There is a method of providing a threshold for the load of the node in order to determine whether or not load distribution is to be performed. The number of such thresholds is not limited to one, but two thresholds may also be provided. In this case, upper and lower thresholds are compared with the current load. If the load exceeds the upper threshold, the server intends to reduce the load of the node. If the load is less than the lower threshold, the server intends to increase the load of the node. Thus, load distribution is realized.
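The two-threshold policy described above can be sketched as a small decision function. The threshold values and the action names are illustrative assumptions, not values specified by the embodiment.

```python
def decide(load, lower=2, upper=8):
    """Return the action the local load distribution server should attempt."""
    if load > upper:
        return "shed"     # above upper threshold: expand/transfer work away
    if load < lower:
        return "attract"  # below lower threshold: pull work toward this node
    return "none"         # within the band: no load distribution attempted

actions = [decide(x) for x in (10, 1, 5)]
# actions == ["shed", "attract", "none"]
```

With a single threshold, the middle "none" band collapses; two thresholds keep the server from reacting to every small fluctuation.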
  • In order to reduce the load of the concerned node, a node having a lighter load than the load of the concerned node is searched for by collecting load information about other nodes. If a distributed task in operation is present in the concerned node and the distributed task is expanded to a more lightly loaded node than the concerned node, threads within the distributed task are transferred from the concerned node to the node having the lighter load.
  • When a distributed task does not operate in the concerned node, or when a distributed task is present in the concerned node and is expanded to a more heavily loaded node than the concerned node, threads are transferred to a lightly loaded node after expanding a task or a distributed task operating in the concerned node to the lightly loaded node. Thus, the load of the concerned node can be distributed to another node.
  • When distributing the load of the concerned node to another node, not only may a task in the concerned node be expanded, but a distributed task in operation that has been expanded to a lightly loaded node may also be compressed from the concerned node into that lightly loaded node. In such a case, when the load of the entire system is high, the distributed task is not only expanded but also compressed. Hence, the load of processing for maintaining the distributed virtual shared memory is reduced, and message transfer via the network is reduced, whereby the processing capability of the entire system increases.
  • On the other hand, when the load of the concerned node decreases and it is intended to increase the load of the concerned node, a node having a load higher than the load of the concerned node is searched for by collecting load information of other nodes. As in the case of reducing the load of the concerned node, load distribution can be realized by expanding a distributed task from a heavily loaded node or compressing a distributed task to a lightly loaded node, and transferring threads from the heavily loaded node.
  • Although in the second embodiment, a description has been provided of a method of collecting load information of other nodes whenever necessary when trying to perform load distribution, a method of collecting load information by periodically transmitting load information of the concerned node to other nodes can also be considered.
  • Third Embodiment
  • In the above-described embodiments, threads are transferred from a heavily loaded node to a lightly loaded node using a thread transfer mechanism provided by the operating system. However, even if such a mechanism is not provided, threads can be transferred using a user-level thread control mechanism in a distributed task. In the user-level thread control mechanism, frames called user-level threads are provided, within which a program operates. A user-level thread operates in a thread provided by the operating system (called a kernel thread) under the user-level thread control mechanism (principally provided by a library and executed in the user space of an application), which performs operations such as stopping and resuming execution. When a user-level thread is stopped, its state is recorded in the user space of the application. When the user-level thread is resumed, a certain kernel thread reads the recorded state, and that state is set as the state of the concerned thread.
  • When such a user-level thread frame is used, one consistent virtual storage space is provided between distributed nodes for a distributed task. Hence, by writing and storing the state of a user-level thread in the virtual storage space in the node in which the user-level thread has been operating, reading the stored state with a kernel thread in another node, and setting the read state as the state of the concerned thread, a thread appears, from the point of view of an application program described by the user, to have been transferred between nodes. In such transfer of a user-level thread, as shown in FIG. 8, when another task is generated in node C and the load of node C thereby increases, the corresponding load distribution server requests the task operating in node C to transfer the thread which is operating. Accordingly, the user-level thread control mechanism within the distributed task shown in FIG. 8 stores the state of the user-level thread in the virtual storage space in order to stop the thread which is currently operating within node C. Thus, one kernel thread in node C can be stopped, whereby the load of node C is reduced. When the load of node A decreases (for example, when a task operating in node A has been completed), the corresponding load distribution server provides the user-level thread control mechanism with a thread formation request. The user-level thread control mechanism generates a kernel thread within node A, and causes execution of the user-level thread stopped in node C to resume. Thus, load distribution is realized.
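A toy analogy of this migration through distributed shared memory: the shared space is modeled as a plain dict, and a thread's saved context as a small record of its progress. All names are invented for illustration; this models the idea of saving a context on one node and resuming it on another, not the kernel mechanism of the embodiment.

```python
shared_space = {}  # stands in for the distributed virtual shared memory

def suspend(thread_id, context):
    """Node C side: store the stopping user-level thread's context."""
    shared_space[thread_id] = context

def resume(thread_id):
    """Node A side: read the stored context to continue the same thread."""
    return shared_space.pop(thread_id)

# A thread summing 0..4 that is stopped after processing 0, 1 and 2.
suspend("ult-1", {"next": 3, "acc": 0 + 1 + 2})  # node C stops the thread

ctx = resume("ult-1")                            # node A picks it up
for i in range(ctx["next"], 5):
    ctx["acc"] += i                              # finishes 3 and 4
# ctx["acc"] == 10, and the shared slot has been vacated
```

Because the context lives in storage both nodes can read, no thread-transfer facility of the operating system is involved.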
  • In a method in which the user-level thread control mechanism automatically searches for an idle kernel thread and allocates that kernel thread to a ready user-level thread, the load distribution server need not issue a request to transfer threads. In this case, load distribution can be performed by controlling the number of kernel threads allocated to the distributed task for each node.
  • That is, when the number of kernel-level threads decreases, the state of the user-level thread in a kernel thread to be stopped is written and stored in the virtual storage space, and a kernel thread which can operate is automatically searched for. When the number of kernel-level threads increases, the state of a user-level thread which has been stopped and stored in the virtual storage space is read, and the operation of that thread is resumed. When the number of kernel threads allocated to one node of a certain distributed task is reduced to zero, the distributed task may be compressed. FIG. 9 is a flowchart illustrating a load distribution method in this case.
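Rebalancing by varying the kernel-thread count per node can be sketched as follows: ready user-level threads live in shared storage, and each node runs as many of them as it has kernel threads. The names and the simple in-order assignment are assumptions of this sketch.

```python
ready_ults = ["u1", "u2", "u3", "u4", "u5"]  # stored in the shared space
kernel_threads = {"A": 1, "B": 2}            # kernel threads per node

def schedule():
    """Assign ready user-level threads to the nodes' kernel threads."""
    plan, i = {}, 0
    for node, count in kernel_threads.items():
        plan[node] = ready_ults[i:i + count]
        i += count
    return plan

plan = schedule()          # {"A": ["u1"], "B": ["u2", "u3"]}
kernel_threads["A"] += 2   # node A's load dropped: allocate it more
plan = schedule()          # {"A": ["u1", "u2", "u3"], "B": ["u4", "u5"]}
# A node whose count reaches 0 receives an empty assignment, and the
# distributed task could then be compressed away from that node.
```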
  • Processing from step S71 to step S78 shown in FIG. 9 is the same as the processing from step S71 to step S78 shown in FIG. 7. Hence, description thereof will be omitted. When it has been determined that the load of the concerned node is lighter than the load of another node, the process proceeds to the processing of step S80, where the number of kernel threads in the concerned node is increased. When it has been determined that the load of the concerned node is heavier than the load of another node, the number of kernel threads in the concerned node is reduced.
  • As described above, in a system having a mechanism of connecting a plurality of distributed information processing apparatuses with a network, and executing tasks by distributing threads within a distributed task sharing virtual storage space, when it is determined that load is not equal as a result of collection of load information about each of the information processing apparatuses, by controlling the degree of distribution of a distributed task in operation and transferring threads operating within the distributed task, the load in the information processing apparatuses is distributed, and the processing capabilities of the information processing apparatuses of the entire system can be sufficiently utilized.
  • By providing a user-level thread control mechanism in a distributed task and using context switching in a user distributed virtual shared memory space, the load in the information processing apparatuses can be distributed without actually transferring threads within the distributed task.
  • By periodically circulating load information of a concerned apparatus as a message, load information can be collected efficiently.
  • By collecting load information of other information processing apparatuses only when the state of the load of a concerned apparatus increases or decreases, and expansion of a distributed task and transfer of threads must be performed, load information can be collected efficiently.
  • The individual components shown in outline in the drawings are all well known in the load distribution method and system arts and their specific construction and operation are not critical to the operation or the best mode for carrying out the invention.
  • While the present invention has been described with respect to what is presently considered to be the preferred embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. To the contrary, the present invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

Claims (17)

  1. A load distribution method having a mechanism of connecting a plurality of information processing apparatuses with a network, and executing a distributed task, whose main storage is shared by a distributed virtual shared memory method, present in the plurality of information processing apparatuses by distributing threads in the respective information processing apparatuses, said method comprising the steps of:
       collecting load information about the plurality of information processing apparatuses;
       controlling the degree of distribution of a distributed task in operation in accordance with the collected load information; and
       transferring threads operating in a heavily loaded information processing apparatus within the distributed task to a lightly loaded information processing apparatus.
  2. A method according to Claim 1, wherein, in said controlling step, the heavily loaded information processing apparatus expands the distributed task operating therein to the lightly loaded information processing apparatus in accordance with the collected load information, and then the threads are transferred.
  3. A method according to Claim 1, wherein, in said controlling step, the lightly loaded information processing apparatus expands the distributed task operating in the heavily loaded information processing apparatus to the lightly loaded information processing apparatus in accordance with the collected load information, and then the threads are transferred.
  4. A method according to Claim 1, wherein, in said controlling step, the heavily loaded information processing apparatus expands the distributed task operating therein to the lightly loaded information processing apparatus, and the lightly loaded information processing apparatus expands the distributed task operating in the heavily loaded information processing apparatus to the lightly loaded information processing apparatus, in accordance with the collected load information, and then the respective apparatuses transfer the threads.
  5. A method according to Claim 1, wherein, in said controlling step, the heavily loaded information processing apparatus expands the distributed task operating therein to the lightly loaded information processing apparatus and compresses the distributed task expanded to another lightly loaded information processing apparatus to the concerned information processing apparatus, and the lightly loaded information processing apparatus expands the distributed task operating in the heavily loaded information processing apparatus to the lightly loaded information processing apparatus and compresses the distributed task expanded to another heavily loaded information processing apparatus from that information processing apparatus to the lightly loaded information processing apparatus, in accordance with the collected load information, and then the respective apparatuses transfer the threads.
  6. A method according to Claim 1, wherein, in said transferring step, user-level threads are transferred by stopping threads in the heavily loaded information processing apparatus and generating threads in the lightly loaded information processing apparatus by providing a user-level thread control mechanism in the distributed task and using switching of contexts in a user distributed virtual shared memory space.
  7. A method according to Claim 1, wherein, in said collecting step, load information is collected by periodically circulating load information about a concerned apparatus as a message.
  8. A method according to Claim 1, wherein, in said collecting step, load information about other information processing apparatuses is collected only when expansion of the distributed task and transfer of threads must be performed because load information about a concerned apparatus decreases or increases.
  9. A load distribution system having a mechanism of connecting a plurality of information processing apparatuses with a network, and executing a distributed task, whose main storage is shared by a distributed virtual shared memory method, present in the plurality of information processing apparatuses by distributing threads in the respective information processing apparatuses, said system comprising:
       collection means for collecting load information about the plurality of information processing apparatuses;
       control means for controlling the degree of distribution of a distributed task in operation in accordance with the collected load information; and
       transfer means for transferring threads operating in a heavily loaded information processing apparatus within the distributed task to a lightly loaded information processing apparatus.
  10. A system according to Claim 9, wherein said control means performs control so that the heavily loaded information processing apparatus expands the distributed task operating therein to the lightly loaded information processing apparatus in accordance with the collected load information, and then said transfer means transfers the threads.
  11. A system according to Claim 9, wherein said control means performs control so that the lightly loaded information processing apparatus expands the distributed task operating in the heavily loaded information processing apparatus to the lightly loaded information processing apparatus in accordance with the collected load information, and then said transfer means transfers the threads.
  12. A system according to Claim 9, wherein said control means performs control so that the heavily loaded information processing apparatus expands the distributed task operating therein to the lightly loaded information processing apparatus, and the lightly loaded information processing apparatus expands the distributed task operating in the heavily loaded information processing apparatus to the lightly loaded information processing apparatus, in accordance with the collected load information, and the respective apparatuses transfer the threads.
  13. A system according to Claim 9, wherein said control means performs control so that the heavily loaded information processing apparatus expands the distributed task operating therein to the lightly loaded information processing apparatus and compresses the distributed task expanded to another lightly loaded information processing apparatus to the concerned information processing apparatus, and the lightly loaded information processing apparatus expands the distributed task operating in the heavily loaded information processing apparatus to the lightly loaded information processing apparatus and compresses the distributed task expanded to another heavily loaded information processing apparatus from that information processing apparatus to the lightly loaded information processing apparatus, in accordance with the collected load information, and the respective apparatuses transfer the threads.
  14. A system according to Claim 9, wherein said transfer means transfers user-level threads by stopping threads in the heavily loaded information processing apparatus and generating threads in the lightly loaded information processing apparatus by providing a user-level thread control mechanism in the distributed task and using switching of contexts in a user distributed virtual shared memory space.
  15. A system according to Claim 9, wherein said collection means collects load information by periodically circulating load information about a concerned apparatus as a message.
  16. A system according to Claim 9, wherein said collection means collects load information about other information processing apparatuses only when expansion of the distributed task and transfer of threads must be performed because load information about a concerned apparatus decreases or increases.
  17. A load distribution method or system wherein tasks are executed by distributing threads within a distributed task sharing virtual storage space by determining load information concerning a plurality of information and/or data processing apparatuses sharing the virtual storage space and controlling the distribution in accordance with the loading of the information processing apparatuses.
EP95304960A 1994-07-19 1995-07-17 Load distribution method and system Expired - Lifetime EP0697654B1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP16682494 1994-07-19
JP166824/94 1994-07-19
JP16682494A JP3696901B2 (en) 1994-07-19 1994-07-19 Load balancing method

Publications (2)

Publication Number Publication Date
EP0697654A1 true EP0697654A1 (en) 1996-02-21
EP0697654B1 EP0697654B1 (en) 2001-05-23

Family

ID=15838352

Family Applications (1)

Application Number Title Priority Date Filing Date
EP95304960A Expired - Lifetime EP0697654B1 (en) 1994-07-19 1995-07-17 Load distribution method and system

Country Status (4)

Country Link
US (1) US5692192A (en)
EP (1) EP0697654B1 (en)
JP (1) JP3696901B2 (en)
DE (1) DE69520988T2 (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000031636A1 (en) * 1998-11-24 2000-06-02 Sun Microsystems, Inc. Distributed monitor concurrency control
WO2007034232A2 (en) * 2005-09-26 2007-03-29 Imagination Technologies Limited Scalable multi-threaded media processing architecture
EP1788481A1 (en) * 2005-11-21 2007-05-23 Sap Ag Hierarchical, multi-tiered mapping and monitoring architecture for service-to-device re-mapping smart items
US7860968B2 (en) 2005-11-21 2010-12-28 Sap Ag Hierarchical, multi-tiered mapping and monitoring architecture for smart items
US7890568B2 (en) 2006-04-28 2011-02-15 Sap Ag Service-to-device mapping for smart items using a genetic algorithm
US8005879B2 (en) 2005-11-21 2011-08-23 Sap Ag Service-to-device re-mapping for smart items
US8065411B2 (en) 2006-05-31 2011-11-22 Sap Ag System monitor for networks of nodes
US8108863B2 (en) 2005-12-30 2012-01-31 Intel Corporation Load balancing for multi-threaded applications via asymmetric power throttling
US8131838B2 (en) 2006-05-31 2012-03-06 Sap Ag Modular monitor service for smart item monitoring
US8296408B2 (en) 2006-05-12 2012-10-23 Sap Ag Distributing relocatable services in middleware for smart items
US8296413B2 (en) 2006-05-31 2012-10-23 Sap Ag Device registration in a hierarchical monitor service
US8396788B2 (en) 2006-07-31 2013-03-12 Sap Ag Cost-based deployment of components in smart item environments
US8522341B2 (en) 2006-03-31 2013-08-27 Sap Ag Active intervention in service-to-device mapping for smart items
US8527622B2 (en) 2007-10-12 2013-09-03 Sap Ag Fault tolerance framework for networks of nodes

Families Citing this family (70)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4429469A1 (en) * 1994-08-19 1996-02-22 Licentia Gmbh Method for routing control
US5884077A (en) * 1994-08-31 1999-03-16 Canon Kabushiki Kaisha Information processing system and method in which computer with high load borrows processor of computer with low load to execute process
JP3862293B2 (en) * 1994-08-31 2006-12-27 キヤノン株式会社 Information processing method and apparatus
JP3585956B2 (en) * 1994-09-12 2004-11-10 キヤノン株式会社 Information processing apparatus and method
JP3591883B2 (en) * 1994-09-01 2004-11-24 キヤノン株式会社 Computer, its system and its control method
US5765157A (en) * 1996-06-05 1998-06-09 Sun Microsystems, Inc. Computer system and method for executing threads of execution with reduced run-time memory space requirements
US5859898A (en) * 1996-09-17 1999-01-12 Nynex Science & Technology Messaging architecture supporting digital and analog media
US6119145A (en) * 1997-02-28 2000-09-12 Oracle Corporation Multithreaded client application storing a separate context for each transaction thus allowing threads to resume transactions started by other client threads
US6535878B1 (en) * 1997-05-02 2003-03-18 Roxio, Inc. Method and system for providing on-line interactivity over a server-client network
JP3883647B2 (en) * 1997-06-10 2007-02-21 インターナショナル・ビジネス・マシーンズ・コーポレーション Message processing method, message processing apparatus, and storage medium for storing program for controlling message processing
US6675195B1 (en) * 1997-06-11 2004-01-06 Oracle International Corporation Method and apparatus for reducing inefficiencies caused by sending multiple commands to a server
US6003066A (en) * 1997-08-14 1999-12-14 International Business Machines Corporation System for distributing a plurality of threads associated with a process initiating by one data processing station among data processing stations
JP4000223B2 (en) * 1997-09-24 2007-10-31 富士通株式会社 Information search method, information search system, and search management apparatus for the system
US6185662B1 (en) * 1997-12-22 2001-02-06 Nortel Networks Corporation High availability asynchronous computer system
US6243107B1 (en) 1998-08-10 2001-06-05 3D Labs Inc., Ltd. Optimization of a graphics processor system when rendering images
US7216348B1 (en) * 1999-01-05 2007-05-08 Net2Phone, Inc. Method and apparatus for dynamically balancing call flow workloads in a telecommunications system
JP3250729B2 (en) * 1999-01-22 2002-01-28 日本電気株式会社 Program execution device, process movement method thereof, and storage medium storing process movement control program
JP2000242609A (en) * 1999-02-23 2000-09-08 Nippon Telegr & Teleph Corp <Ntt> Distributed object dynamic arrangement control method and device
US6986137B1 (en) * 1999-09-28 2006-01-10 International Business Machines Corporation Method, system and program products for managing logical processors of a computing environment
US6842899B2 (en) 1999-12-21 2005-01-11 Lockheed Martin Corporation Apparatus and method for resource negotiations among autonomous agents
US6957237B1 (en) 2000-06-02 2005-10-18 Sun Microsystems, Inc. Database store for a virtual heap
US6934755B1 (en) * 2000-06-02 2005-08-23 Sun Microsystems, Inc. System and method for migrating processes on a network
US6854115B1 (en) 2000-06-02 2005-02-08 Sun Microsystems, Inc. Process persistence in a virtual machine
US6941410B1 (en) 2000-06-02 2005-09-06 Sun Microsystems, Inc. Virtual heap for a virtual machine
US20030014507A1 (en) * 2001-03-13 2003-01-16 International Business Machines Corporation Method and system for providing performance analysis for clusters
US20030033345A1 (en) * 2002-06-27 2003-02-13 Keefer Christopher E. Thread-based methods and systems for using the idle processing power of one or more networked computers to solve complex scientific problems
US7594233B2 (en) * 2002-06-28 2009-09-22 Hewlett-Packard Development Company, L.P. Processing thread launching using volunteer information
US7389506B1 (en) * 2002-07-30 2008-06-17 Unisys Corporation Selecting processor configuration based on thread usage in a multiprocessor system
US7093258B1 (en) * 2002-07-30 2006-08-15 Unisys Corporation Method and system for managing distribution of computer-executable program threads between central processing units in a multi-central processing unit computer system
US7043729B2 (en) * 2002-08-08 2006-05-09 Phoenix Technologies Ltd. Reducing interrupt latency while polling
US20040055001A1 (en) * 2002-09-16 2004-03-18 Islam Farhad Fuad Method and apparatus for computational load sharing in a multiprocessor architecture
US7096470B2 (en) * 2002-09-19 2006-08-22 International Business Machines Corporation Method and apparatus for implementing thread replacement for optimal performance in a two-tiered multithreading structure
US7181744B2 (en) * 2002-10-24 2007-02-20 International Business Machines Corporation System and method for transferring data between virtual machines or other computer entities
US7231638B2 (en) 2002-12-03 2007-06-12 International Business Machines Corporation Memory sharing in a distributed data processing system using modified address space to create extended address space for copying data
US7299468B2 (en) * 2003-04-29 2007-11-20 International Business Machines Corporation Management of virtual machines to utilize shared resources
US7251815B2 (en) * 2003-04-29 2007-07-31 International Business Machines Corporation Multiple virtual machines sharing processor and work queue in memory having program/dispatch functions for assigning and accessing work items while the virtual machine was not idle
JP4012517B2 (en) 2003-04-29 2007-11-21 インターナショナル・ビジネス・マシーンズ・コーポレーション Managing locks in a virtual machine environment
US8104043B2 (en) * 2003-11-24 2012-01-24 Microsoft Corporation System and method for dynamic cooperative distributed execution of computer tasks without a centralized controller
US7380039B2 (en) * 2003-12-30 2008-05-27 3Tera, Inc. Apparatus, method and system for aggregrating computing resources
US20050240380A1 (en) * 2004-03-31 2005-10-27 Jones Kenneth D Reducing context memory requirements in a multi-tasking system
JP4086813B2 (en) * 2004-06-09 2008-05-14 キヤノン株式会社 Network print system and grid network construction method in network print system
US20060048133A1 (en) * 2004-08-31 2006-03-02 Patzachke Till I Dynamically programmable embedded agents
US20060045019A1 (en) * 2004-09-01 2006-03-02 Patzschke Till I Network testing agent with integrated microkernel operating system
US7437581B2 (en) * 2004-09-28 2008-10-14 Intel Corporation Method and apparatus for varying energy per instruction according to the amount of available parallelism
US8589944B2 (en) * 2005-03-16 2013-11-19 Ricoh Production Print Solutions Method and system for task mapping to iteratively improve task assignment in a heterogeneous computing system
US8429630B2 (en) * 2005-09-15 2013-04-23 Ca, Inc. Globally distributed utility computing cloud
US8489700B2 (en) * 2005-11-30 2013-07-16 International Business Machines Corporation Analysis of nodal affinity behavior
US7496667B2 (en) * 2006-01-31 2009-02-24 International Business Machines Corporation Decentralized application placement for web application middleware
US8212805B1 (en) 2007-01-05 2012-07-03 Kenneth Banschick System and method for parametric display of modular aesthetic designs
WO2008136075A1 (en) * 2007-04-20 2008-11-13 Fujitsu Limited Storage management program, storage management device, and storage management method
US10268741B2 (en) * 2007-08-03 2019-04-23 International Business Machines Corporation Multi-nodal compression techniques for an in-memory database
US7844620B2 (en) * 2007-11-16 2010-11-30 International Business Machines Corporation Real time data replication for query execution in a massively parallel computer
US8095512B2 (en) * 2007-11-19 2012-01-10 International Business Machines Corporation Managing database resources used for optimizing query execution on a parallel computer system
US8688767B2 (en) * 2008-09-26 2014-04-01 Nec Corporation Distributed processing system, distributed operation method and computer program
WO2011155047A1 (en) * 2010-06-10 2011-12-15 富士通株式会社 Multi-core processor system, method of power control, and power control program
JP5527425B2 (en) 2010-11-16 2014-06-18 富士通株式会社 COMMUNICATION DEVICE, LOAD DISTRIBUTION METHOD, AND RECORDING MEDIUM
US8789065B2 (en) 2012-06-08 2014-07-22 Throughputer, Inc. System and method for input data load adaptive parallel processing
WO2012124077A1 (en) * 2011-03-16 2012-09-20 富士通株式会社 Multi-core processor system and scheduling method
US9448847B2 (en) 2011-07-15 2016-09-20 Throughputer, Inc. Concurrent program execution optimization
JP2013090072A (en) 2011-10-17 2013-05-13 Hitachi Ltd Service provision system
US9417907B1 (en) * 2012-05-23 2016-08-16 Emc Corporation Impact management of system tasks
US8930956B2 (en) * 2012-08-08 2015-01-06 International Business Machines Corporation Utilizing a kernel administration hardware thread of a multi-threaded, multi-core compute node of a parallel computer
JP2014102691A (en) * 2012-11-20 2014-06-05 Toshiba Corp Information processing device, camera with communication function, and information processing method
CN103274266A (en) * 2013-01-11 2013-09-04 株洲时代新材料科技股份有限公司 Numerically-controlled fiber winding machine and application thereof
CN103440165B (en) * 2013-08-30 2016-04-27 西安电子科技大学 A kind of task assignment towards individual and disposal route
US11200058B2 (en) 2014-05-07 2021-12-14 Qualcomm Incorporated Dynamic load balancing of hardware threads in clustered processor cores using shared hardware resources, and related circuits, methods, and computer-readable media
JP5872669B2 (en) * 2014-12-01 2016-03-01 株式会社日立製作所 Server device group and network system
US9524193B1 (en) * 2015-09-09 2016-12-20 Ca, Inc. Transparent virtualized operating system
US10386904B2 (en) * 2016-03-31 2019-08-20 Qualcomm Incorporated Hardware managed power collapse and clock wake-up for memory management units and distributed virtual memory networks
US11443186B2 (en) 2019-10-03 2022-09-13 Wipro Limited Method and system for executing processes in operating systems

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5031089A (en) * 1988-12-30 1991-07-09 United States Of America As Represented By The Administrator, National Aeronautics And Space Administration Dynamic resource allocation scheme for distributed heterogeneous computer systems
US5287508A (en) * 1992-04-07 1994-02-15 Sun Microsystems, Inc. Method and apparatus for efficient scheduling in a multiprocessor system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5396614A (en) * 1992-06-25 1995-03-07 Sun Microsystems, Inc. Method and apparatus for a secure protocol for virtual memory managers that use memory objects
US5452447A (en) * 1992-12-21 1995-09-19 Sun Microsystems, Inc. Method and apparatus for a caching file server

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
EVANS D J ET AL: "Load balancing with network partitioning using host groups", PARALLEL COMPUTING, MARCH 1994, NETHERLANDS, vol. 20, no. 3, ISSN 0167-8191, pages 325 - 345, XP000433507, DOI: doi:10.1016/0167-8191(94)90090-6 *
NIKHIL R S ET AL: "*T: a multithreaded massively parallel architecture", 19TH ANNUAL INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE, GOLD COAST, QLD., AUSTRALIA, 19-21 MAY 1992, vol. 20, no. 2, ISSN 0163-5964, COMPUTER ARCHITECTURE NEWS, MAY 1992, USA, pages 156 - 167, XP000277763, DOI: doi:10.1145/146628.139715 *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000031636A1 (en) * 1998-11-24 2000-06-02 Sun Microsystems, Inc. Distributed monitor concurrency control
US6622155B1 (en) 1998-11-24 2003-09-16 Sun Microsystems, Inc. Distributed monitor concurrency control
WO2007034232A2 (en) * 2005-09-26 2007-03-29 Imagination Technologies Limited Scalable multi-threaded media processing architecture
WO2007034232A3 (en) * 2005-09-26 2008-01-24 Imagination Tech Ltd Scalable multi-threaded media processing architecture
EP3367237A1 (en) * 2005-09-26 2018-08-29 Imagination Technologies Limited Scalable multi-threaded media processing architecture
US8046761B2 (en) 2005-09-26 2011-10-25 Imagination Technologies Limited Scalable multi-threaded media processing architecture
EP1788481A1 (en) * 2005-11-21 2007-05-23 Sap Ag Hierarchical, multi-tiered mapping and monitoring architecture for service-to-device re-mapping smart items
US7860968B2 (en) 2005-11-21 2010-12-28 Sap Ag Hierarchical, multi-tiered mapping and monitoring architecture for smart items
US8005879B2 (en) 2005-11-21 2011-08-23 Sap Ag Service-to-device re-mapping for smart items
US8156208B2 (en) 2005-11-21 2012-04-10 Sap Ag Hierarchical, multi-tiered mapping and monitoring architecture for service-to-device re-mapping for smart items
US8108863B2 (en) 2005-12-30 2012-01-31 Intel Corporation Load balancing for multi-threaded applications via asymmetric power throttling
US8839258B2 (en) 2005-12-30 2014-09-16 Intel Corporation Load balancing for multi-threaded applications via asymmetric power throttling
US8522341B2 (en) 2006-03-31 2013-08-27 Sap Ag Active intervention in service-to-device mapping for smart items
US7890568B2 (en) 2006-04-28 2011-02-15 Sap Ag Service-to-device mapping for smart items using a genetic algorithm
US8296408B2 (en) 2006-05-12 2012-10-23 Sap Ag Distributing relocatable services in middleware for smart items
US8131838B2 (en) 2006-05-31 2012-03-06 Sap Ag Modular monitor service for smart item monitoring
US8065411B2 (en) 2006-05-31 2011-11-22 Sap Ag System monitor for networks of nodes
US8296413B2 (en) 2006-05-31 2012-10-23 Sap Ag Device registration in a hierarchical monitor service
US8751644B2 (en) 2006-05-31 2014-06-10 Sap Ag Modular monitor service for smart item monitoring
US8396788B2 (en) 2006-07-31 2013-03-12 Sap Ag Cost-based deployment of components in smart item environments
US8527622B2 (en) 2007-10-12 2013-09-03 Sap Ag Fault tolerance framework for networks of nodes

Also Published As

Publication number Publication date
US5692192A (en) 1997-11-25
JPH0830472A (en) 1996-02-02
DE69520988T2 (en) 2001-10-25
JP3696901B2 (en) 2005-09-21
EP0697654B1 (en) 2001-05-23
DE69520988D1 (en) 2001-06-28

Similar Documents

Publication Publication Date Title
US5692192A (en) Load distribution method and system for distributed threaded task operation in network information processing apparatuses with virtual shared memory
TWI289766B (en) Information processor, information processing method and program
US7441240B2 (en) Process scheduling apparatus, process scheduling method, program for process scheduling, and storage medium recording a program for process scheduling
Hui et al. Improved strategies for dynamic load balancing
US7689996B2 (en) Method to distribute programs using remote Java objects
US5884077A (en) Information processing system and method in which computer with high load borrows processor of computer with low load to execute process
Hac A distributed algorithm for performance improvement through file replication, file migration, and process migration
US20010010052A1 (en) Method for controlling multithreading
US20110107344A1 (en) Multi-core apparatus and load balancing method thereof
JPH06250853A (en) Management method and system for process scheduling
US7032099B1 (en) Parallel processor, parallel processing method, and storing medium
Setia et al. Processor scheduling on multiprogrammed, distributed memory parallel computers
CN109739634A (en) A kind of atomic task execution method and device
WO2023165484A1 (en) Distributed task processing method, distributed system, and first device
JP3429582B2 (en) Multiprocessor system
CN115328564A (en) Asynchronous input output thread processor resource allocation method and device
Schnor et al. Scheduling of parallel applications on heterogeneous workstation clusters
CN116541160A (en) Function deployment method and device, server and cloud computing platform
Denning Equipment configuration in balanced computer systems
Heath et al. Development, analysis, and verification of a parallel hybrid dataflow computer architectural framework and associated load-balancing strategies and algorithms via parallel simulation
JPS59188749A (en) System for controlling data transfer
CN112860396A (en) GPU (graphics processing Unit) scheduling method and system based on distributed deep learning
JPH11353291A (en) Multiprocessor system and medium recording task exchange program
JPH06187309A (en) Processor allocation control system
JPH09179834A (en) Scheduling method of parallel system for process

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): DE FR GB IT NL

17P Request for examination filed

Effective date: 19960708

17Q First examination report despatched

Effective date: 19980803

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAG Despatch of communication of intention to grant

Free format text: ORIGINAL CODE: EPIDOS AGRA

GRAH Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOS IGRA

GRAH Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOS IGRA

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE FR GB IT NL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20010523

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT;WARNING: LAPSES OF ITALIAN PATENTS WITH EFFECTIVE DATE BEFORE 2007 MAY HAVE OCCURRED AT ANY TIME BEFORE 2007. THE CORRECT EFFECTIVE DATE MAY BE DIFFERENT FROM THE ONE RECORDED.

Effective date: 20010523

REF Corresponds to:

Ref document number: 69520988

Country of ref document: DE

Date of ref document: 20010628

ET Fr: translation filed
NLV1 Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act
REG Reference to a national code

Ref country code: GB

Ref legal event code: IF02

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed
PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20040705

Year of fee payment: 10

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20040716

Year of fee payment: 10

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20040922

Year of fee payment: 10

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20050717

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20060201

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20050717

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20060331

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20060331