WO2000026781A1 - Capacity reservation providing different priority levels to different tasks - Google Patents

Info

Publication number: WO2000026781A1
Authority: WIPO (PCT)
Application number: PCT/SE1999/001558
Other languages: French (fr)
Inventor: Birgitta Olin
Original assignee: Telefonaktiebolaget LM Ericsson
Priority applications: CA2349271A1; AU6016199A; EP1125198A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 — Multiprogramming arrangements
    • G06F 9/50 — Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 — Partitioning or combining of resources
    • G06F 9/5066 — Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs

Abstract

The present invention relates to load control in a processing system (40), and the invention employs a priority-level-based hierarchy of buffers (66) in which tasks are stored temporarily before being collected for further processing. The idea according to the invention is to define a number of reservation classes such that a predetermined set of the buffers of said hierarchy is assigned to each reservation class, and to determine a capacity distribution array in which values that identify the reservation classes are put in proportion to the capacity given to the corresponding reservation class. By collecting the tasks from the buffers in accordance with the capacity distribution array, the capacity given to each reservation class is guaranteed during a so-called reservation interval.

Description

Capacity reservation providing different priority levels to different tasks
TECHNICAL FIELD OF THE INVENTION
The present invention generally relates to the field of load control, and more particularly to capacity reservation in a processing system.
BACKGROUND OF THE INVENTION
Fig. 1 illustrates an example of a conventional unregulated processing system. The system 10 consists of a central processor (CP) 20 and one or more external units, such as a regional processor (RP) 30. The central processor 20 includes a CPU 21, an input/output system (IOS) 22 and a plurality of job buffers 23 (JBA to JBN), all of which are connected through a communication bus. The regional processor 30 is connected to the IOS system 22. The IOS system 22 can receive information from and send information to the regional processor 30. In general, the IOS system 22 forwards a signal from the regional processor 30 to the CPU 21 by first writing it into one of the job buffers JBA to JBN. Normally, the job buffers JBA to JBN are organized as first-in-first-out (FIFO) queues, and the CPU 21 serves the job buffers 23 in order of priority. This means that the CPU 21 does not read out any jobs from buffer JBB as long as buffer JBA contains a job. However, all jobs stored in the job buffers will, sooner or later, be accepted for processing by the CPU 21.
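The strict priority service just described can be sketched as follows; this is a minimal illustrative model (the buffer representation and function name are assumptions), not code from the system itself:

```python
from collections import deque

# Minimal sketch of strict-priority service: the CPU always takes the
# next job from the highest-priority non-empty FIFO buffer, so JBB is
# never served while JBA still contains a job.

def next_job(job_buffers):
    """Return the next job, scanning buffers in priority order."""
    for buf in job_buffers:          # job_buffers[0] is highest priority
        if buf:
            return buf.popleft()     # FIFO within each buffer
    return None                      # all buffers are empty

jba, jbb = deque(["a1", "a2"]), deque(["b1"])
buffers = [jba, jbb]
print(next_job(buffers))  # a1 -- JBA served first
print(next_job(buffers))  # a2
print(next_job(buffers))  # b1 -- JBB only once JBA is empty
```

As the sketch shows, a steady inflow to the highest-priority buffer would starve the lower buffers indefinitely, which is the blocking problem the reservation mechanism of the invention addresses.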
Under normal conditions, the system throughput of a processing system, such as that shown in Fig. 1, increases as the offered workload to the processor increases. However, if the offered workload exceeds the capacity of the processor, the system experiences an overload condition. During overload, the throughput of an unregulated system normally decreases drastically, and therefore a load control strategy must be implemented in the processing system to prevent such a degradation of the system throughput. The main purpose of load control is to secure a high and stable throughput of work when a processing system works under overload conditions. The load control is basically performed by controlling the intensity of accepted jobs and rejecting jobs that cannot be handled.
During overload, there are often requirements on how to divide the system capacity among different tasks. In some situations, it is desirable to be able to guarantee that a certain proportion of the system capacity is given to a specific task or groups of tasks.
Certain situations might also require the possibility to reserve capacity. Just after a system restart, it is sometimes crucial that a number of tasks are performed in parallel to quickly bring the system back to normal operational mode.
RELATED ART
U.S. Patent 4,692,860 issued to Andersen September 8, 1987 discloses an apparatus in a computer-controlled telecommunication system for performing load regulation such that all program levels in the central processor of the computer-controlled system are guaranteed a positive share of the processor capacity. This is accomplished by a queuing system which makes use of the equivalents of tickets, coupons and baskets, and which is implemented by a plurality of regulator members that cooperate with a number of job buffers. Each regulator member is utilized on a given program level and cooperates with a job buffer that is associated with that program level.
U.S. Patent 5,381,546 relates to a process for scheduling a processor. The process is stochastic and utilizes preassigned probability parameters to schedule heterogeneous tasks for the processor in such a way that the processor continually cycles through all types of tasks.
SUMMARY OF THE INVENTION
It is a general object of the present invention to provide a flexible distribution of processing capacity among different tasks with different priority levels.
It is another object of the invention to prevent tasks with a low priority level from being blocked from the processor, and instead ensure at least a minimum share of the system capacity to such low-priority tasks.
These and other objects are met by the invention as defined by the accompanying patent claims.
The present invention makes use of a priority-level-based hierarchy of buffers in which tasks are stored temporarily before being collected for further processing. Briefly, the general idea according to the invention is to define a number of reservation classes such that a predetermined set of the buffers of said hierarchy is assigned to each reservation class, and to determine a capacity distribution array in which values that identify the reservation classes are put in proportion to the capacity given to the corresponding reservation class. By collecting the tasks from the buffers in accordance with the capacity distribution array, the capacity given to each reservation class is guaranteed for a predetermined period of time.
It is also possible to determine, in advance or during operation, a number of different capacity distribution arrays, each of which is adapted for a specific system situation. In operation, an appropriate one of these predetermined distribution arrays is selected, in dependence on the prevailing system conditions, for use in distributing the processor capacity.
The invention offers the following advantages: Flexible distribution of system capacity; Possibility to reserve capacity to low-priority tasks; and Capacity distribution flexibly adapted to the prevailing system conditions.
Other advantages offered by the present invention will be appreciated upon reading of the below description of the embodiments of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention, together with further objects and advantages thereof, will be best understood by reference to the following description taken together with the accompanying drawings, in which:
Fig. 1 illustrates an example of a conventional unregulated processing system;
Fig. 2 is a schematic block diagram of an example of a regulated processing system according to a first preferred embodiment of the invention;
Fig. 3 is a schematic timing diagram of an example of a reservation interval;
Fig. 4 is a schematic block diagram of an alternative implementation of a processing system according to the invention;
Fig. 5 is a schematic flow diagram of a method for dividing capacity among a number of different tasks with different priority levels according to a first preferred embodiment of the invention; and
Fig. 6 is a schematic flow diagram of a method for dividing capacity among a number of different tasks with different priority levels according to a second preferred embodiment of the invention.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
System overview
Fig. 2 is a schematic diagram of an example of a regulated processing system according to a first preferred embodiment of the invention. The processing system 40 basically comprises a processor 50 and a load controller 60. The processor 50 comprises a CPU 51, an input/output system (IOS) 52 and a plurality of job buffers 53 (JBA to JBN), all of which are connected through a communication bus. The load controller 60 comprises a priority analysis unit 65, a plurality of load control buffers 66 and a gate control unit 67. The CPU 51 incorporates a number of software modules 71 to 75, all of which will be explained later on.
The load control function of the load controller 60 is to regulate the intensity of tasks, also referred to as jobs, forwarded to the processor 50 such that a high and stable flow of tasks through the overall processing system 40 is maintained. The load control function employs a priority-level-based hierarchy of buffers 66, the load control buffers, where jobs accepted by the load controller 60 are temporarily stored until they are handled by the processor 50. Each one of the load control buffers has a limited number of storage positions. By using several load control buffers, instead of just one, it is possible to handle priorities; jobs with different priority levels are stored in different load control buffers.
The load control buffers 66, as well as the job buffers 53, are arranged by priority in such a way that the jobs of buffer i have priority over the jobs of buffer j if and only if i < j.
When a new job arrives at the processing system 40 it is forwarded to the load controller 60. The job arriving at the load controller 60 is first analyzed by the priority analysis unit 65. The analysis either results in an immediate rejection, and the job is refused entry into the system, or the load control buffer into which the job should be placed is identified. If all storage positions of the selected buffer are occupied, the job is rejected. Otherwise the job is accepted and placed in the buffer, where it awaits forwarding to the processor 50. The gate control unit 67 is connected to the input/output system 52 of the processor 50, and collects jobs from the hierarchy of load control buffers 66 to the input/output system 52 according to instructions from the CPU 51. The gate control unit 67 communicates with the CPU 51 through the IOS system 52. The collected jobs are stored in the job buffers 53, and the CPU 51 normally serves the job buffers 53 in order of priority. The CPU 51 collects jobs from the job buffers 53 and performs the actual processing in one or more predefined job processing modules 71.
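The admission step described above can be sketched as follows; this is a hedged illustration in which the buffer capacity, names and data structures are assumptions rather than details taken from the embodiment:

```python
from collections import deque

# Illustrative admission control: the priority analysis has already
# mapped the job to a load control buffer; the job is rejected if all
# storage positions of that buffer are occupied.

BUFFER_CAPACITY = 4  # assumed number of storage positions per buffer

def try_accept(job, buffer_index, load_control_buffers):
    buf = load_control_buffers[buffer_index]
    if len(buf) >= BUFFER_CAPACITY:
        return False          # selected buffer full: reject the job
    buf.append(job)           # accepted: job awaits collection
    return True

buffers = [deque(["x"] * 4), deque()]
print(try_accept("job-a", 0, buffers))  # False -- buffer 0 is full
print(try_accept("job-a", 1, buffers))  # True
```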
The load control function, also referred to as the overload control function when the processor 50 is heavily loaded, imposes a limit on the flow of work that is transferred from the load control buffers 66 for further processing by the processor 50. In a basic version of the overload control function, the limit is expressed in terms of jobs per time unit, and given by a parameter value. The parameter value is normally updated at regular so-called update intervals to adjust the flow of work to a level close to the system capacity. This control of the parameter value is preferably performed in a software module 72 in the CPU 51.
In the load controller 60, jobs are collected from the load control buffers 66 by the gate control unit 67 at regular intervals, referred to as fetch intervals, and forwarded to the processor 50. The amount of work, i.e. the number of jobs, transferred each time from the load control buffers 66 to the processor 50 is controlled by the current parameter value and the length of the fetch intervals.
When the processor 50 is heavily loaded, there are often requirements on how to divide the processor capacity among different jobs or tasks. It is sometimes desirable to be able to reserve a certain proportion of the capacity for a specific task or groups of tasks. For example, at system restart capacity should be reserved for those tasks that quickly bring the system back to normal operation. In the specific field of telecommunications, an example of a system that might require reservation of capacity is the central processor of a telecommunication network node where:
- service control functions are combined with call handling functions; or
- mobile telephone location update and mobile telephone paging must be executed in parallel.
Reservation mechanism

According to a preferred embodiment of the invention, a number M of reservation classes (C1, C2, ..., CM) are introduced. The reservation classes are defined by assigning to each reservation class a predetermined set of the load control buffers 66. Assume that there are N load control buffers (B1, B2, ..., BN) in total. Reservation class Ci (where i = 1, ..., M) comprises all tasks that are stored in the buffer or buffers assigned to the reservation class. Capacity is given to the different reservation classes, and if the total capacity reserved is less than 100% of the total available capacity, the remaining capacity is distributed by priority. This is administered by introducing a further reservation class, reservation class zero C0, to which all load control buffers are assigned, and giving the remaining capacity to this reservation class. The reservation classes are preferably defined in a software-implemented class definition module 73 in the CPU 51.
To handle the distribution of the available processor capacity among the reservation classes, a capacity distribution array is determined. In general, the capacity distribution array is a vector with a number K of elements, and the vector is determined by putting values, between 1 and M, that identify the reservation classes (C1, C2, ..., CM) into the elements of the distribution array in proportion to the capacity given to the corresponding reservation class. The capacity distribution array A is preferably determined by a software-implemented array determination module 74 in the CPU 51, either by manually entering, for example via a keyboard interface, the class identifying values into the distribution array according to the rule given above, or by letting the software module 74 automatically determine the distribution array according to the above rule based on the capacities given by the operator to the different reservation classes. Information about the capacity distribution array determined by the module 74 is communicated via the communication bus to the gate control unit 67 in the load controller 60, and the gate control unit 67 collects the tasks from the load control buffers 66 to the input/output system 52 in accordance with the determined capacity distribution array.
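One possible way to derive such an array automatically from operator-given capacity shares is sketched below; the function name and interface are illustrative assumptions, not the module 74 itself:

```python
# Sketch: convert per-class capacity shares into a K-element capacity
# distribution array. Capacity is granted in multiples of 1/K, so each
# share maps to a whole number of array elements.

def build_distribution_array(shares, k):
    """shares: {class_identifier: fraction of capacity}; k: number of
    elements (one per fetch interval of the reservation interval)."""
    array = []
    for class_id, fraction in sorted(shares.items()):
        array.extend([class_id] * round(fraction * k))
    assert len(array) == k, "shares must sum to 1 in multiples of 1/k"
    return array

print(build_distribution_array({1: 0.6, 2: 0.2, 3: 0.1, 4: 0.1}, 10))
# [1, 1, 1, 1, 1, 1, 2, 2, 3, 4]
```

A refinement, matching the preference expressed later in the description, would interleave identical class identifiers uniformly over the array instead of grouping them.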
By collecting the tasks from the load control buffers according to the determined capacity distribution array A, the capacity given to each reservation class is guaranteed, given that the flow of reserved jobs is large enough, during a so-called reservation interval.
Fig. 3 is a schematic timing diagram of an example of a reservation interval. In this particular example, the duration of the reservation interval is 1 second. The reservation interval comprises a predetermined number, such as for example 10 or 25, of fetch intervals. In Fig. 3, the reservation interval comprises 10 fetch intervals. Typically, the reservation interval corresponds to the update interval, so that the parameter value is updated at the beginning of each reservation interval. If the parameter value is updated more often, the reservation process is affected accordingly. In the reservation interval of Fig. 3, the maximum number of jobs collected in each fetch interval is 12, since the current parameter value is 120 jobs/s. At the beginning of the next reservation interval, the parameter value is 110 jobs/s, and the maximum number of jobs collected in each fetch interval will then be 11.
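The arithmetic of this example can be stated compactly, under the assumption that the per-fetch limit is simply the parameter value spread evenly over the fetch intervals:

```python
# Worked numbers from the example above: 120 jobs/s over a 1-second
# reservation interval with 10 fetch intervals gives 12 jobs per fetch.

def jobs_per_fetch(parameter_value, reservation_interval_s, n_fetch):
    fetch_interval_s = reservation_interval_s / n_fetch
    return round(parameter_value * fetch_interval_s)

print(jobs_per_fetch(120, 1.0, 10))  # 12
print(jobs_per_fetch(110, 1.0, 10))  # 11
```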
The capacity distribution array A governs the order in which the load control buffers 66 are emptied during a transfer of jobs from the load control buffers to the processor 50. In general, the fetch intervals of a reservation interval correspond to the elements of the distribution array A, and the capacity distribution array can be expressed as A = (a1, a2, ..., aK), where K is equal to the number of fetch intervals of the current reservation interval. If the current fetch interval is the j:th fetch interval of the reservation interval in process, the load control buffers belonging to reservation class aj, i.e. the buffers specified by Caj, are emptied by priority. In other words, for each fetch interval in the reservation interval in process, a reservation class is identified by the class identifying value of the array element that corresponds to the current fetch interval. Subsequently, jobs stored in the set of load control buffers that is assigned to the identified reservation class are collected from the buffers of that set in order of the priority of the buffers. If all load control buffers of a reservation class are emptied before the limit on the number of collected jobs for the current fetch interval is reached, jobs are collected by priority from non-empty load control buffers until the limit is reached or all load control buffers are empty.
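The collection rule of this paragraph can be sketched as follows; the data structures are assumptions (buffers is a priority-ordered list of FIFO queues, and classes maps a class identifier to the indices of its buffers, listed in priority order):

```python
from collections import deque

# Sketch of the per-fetch-interval collection rule: collect up to
# `limit` jobs, first from the buffers assigned to the identified
# reservation class, then, if those run empty before the limit is
# reached, from any remaining non-empty buffer in priority order.

def collect(buffers, classes, class_id, limit):
    preferred = classes[class_id]
    scan_order = preferred + [i for i in range(len(buffers))
                              if i not in preferred]
    jobs = []
    for i in scan_order:
        while buffers[i] and len(jobs) < limit:
            jobs.append(buffers[i].popleft())
    return jobs

buffers = [deque(["x"]), deque(), deque(["y", "z"])]
classes = {1: [0, 1], 2: [2]}
print(collect(buffers, classes, 2, 3))  # ['y', 'z', 'x']
```

In the usage line, class 2's buffer is emptied first; since the limit of 3 jobs is not yet reached, collection falls back to the remaining buffers by priority.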
When a reservation interval ends, a new one starts immediately, and the distribution array A or possibly a new distribution array is scanned from the beginning again.
Example

For a better understanding of the invention and for illustrative purposes, an example of a possible relation between a number of reservation classes and a number of load control buffers is given below in Table I. Assume that we have four reservation classes C1 to C4 and seven load control buffers B1 to B7.
Table I
Reservation class    Load control buffers
C1                   B1, B2, B4
C2                   B3, B6
C3                   B5
C4                   B7
An example of a relation between reserved capacity, reservation classes, class identifying values and load control buffers on the one hand, and a possible capacity distribution array A on the other hand, is given in Table II below. Assume that we want to give 60% of the capacity to class C1, 20% of the capacity to class C2, 10% of the capacity to class C3 and 10% of the capacity to class C4. Capacity can be given to the different reservation classes in multiples of (1/K)·100%, where K is the number of fetch intervals of the reservation interval. Furthermore, assume that each capacity distribution array has ten elements (a1, a2, ..., a10), and that the values 1, 2, 3 and 4 are used as class identifiers.
Table II
Reserved capacity    Reservation class    Class identifier    Load control buffers
60%                  C1                   1                   B1, B2, B4
20%                  C2                   2                   B3, B6
10%                  C3                   3                   B5
10%                  C4                   4                   B7

Capacity distribution array: A = (1, 2, 1, 3, 1, 2, 1, 4, 1, 1)
Based on the distribution array given in Table II, in the first, third, fifth, seventh, ninth and tenth fetch intervals the buffers are emptied in the order B1, B2, B4, B3, B5, B6, B7; in the second and sixth fetch intervals in the order B3, B6, B1, B2, B4, B5, B7; in the fourth fetch interval in the order B5, B1, B2, B3, B4, B6, B7; and in the eighth fetch interval in the order B7, B1, B2, B3, B4, B5, B6. As can be seen, the available processor capacity is flexibly distributed among different tasks with different priority levels. Furthermore, tasks stored in low-priority buffers, such as B5 and B7 in the fourth and eighth fetch intervals, are prevented from being totally blocked from the processor. For each situation, there are of course many possible distribution arrays. In the example specified by Table II, the arrays A=(1, 2, 3, 4, 1, 2, 1, 1, 1, 1) and A=(1, 1, 1, 1, 1, 1, 2, 2, 3, 4) are also possible to use for distributing the capacity. However, by distributing the class identifying values that point to the same reservation class uniformly among the elements of the distribution array, such as for example (1, 2, 1, 3, 1, 2, 1, 4, 1, 1), a time-wise uniform distribution of the processor capacity among the different reservation classes is obtained. Preferably, the array determination module 74 operates to distribute identical class identifying values uniformly among the elements of the capacity distribution array.
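The emptying orders listed above follow directly from the class-to-buffer assignments of Table I and the array of Table II; as a check, a small sketch (names assumed) that derives them:

```python
# Derive the buffer emptying order for a fetch interval: the buffers of
# the identified reservation class first (in priority order), then the
# remaining buffers by priority. Assignments as in Table I; A as in
# Table II.

CLASSES = {1: [1, 2, 4], 2: [3, 6], 3: [5], 4: [7]}
A = [1, 2, 1, 3, 1, 2, 1, 4, 1, 1]

def emptying_order(class_id):
    preferred = CLASSES[class_id]
    rest = [b for b in range(1, 8) if b not in preferred]
    return preferred + rest

print(emptying_order(A[0]))  # [1, 2, 4, 3, 5, 6, 7]  (1st fetch interval)
print(emptying_order(A[3]))  # [5, 1, 2, 3, 4, 6, 7]  (4th fetch interval)
print(emptying_order(A[7]))  # [7, 1, 2, 3, 4, 5, 6]  (8th fetch interval)
```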
Alternative implementation of the processing system
Fig. 4 is a schematic block diagram of an alternative implementation of a processing system according to the invention. The processing system 80 is basically a processor, such as for example the APZ processor in the AXE system from Telefonaktiebolaget LM Ericsson, and in this alternative implementation the load control functionality is incorporated into the processor. The processing system 80 comprises a CPU 81, an input/output system (IOS) 82, a hierarchy of job buffers (JBA to JBN) 83 and a hierarchy of load control buffers 84, all of which are connected through a communication bus BUS. The CPU 81 incorporates a number of software modules 91 to 97. The software modules 91 to 95 are similar to the modules 71 to 75 of Fig. 2.
Job requests from external units such as regional processors are stored in a job buffer (JBB) in the hierarchy of job buffers 83 via the IOS 82. The CPU 81 analyzes the requests in a priority analysis software module 96. If the jobs are accepted, they are stored in appropriately selected load control buffers in the hierarchy of load control buffers 84. The gate control functionality is now incorporated into the CPU 81 as a gate control software module 97, and the jobs are collected from the load control buffers according to the instructions from the CPU 81. The CPU instructions to the gate control module 97 are based on the determined capacity distribution array. Further general information on load control can be found in for example the documentation of the AXE 10 system of Telefonaktiebolaget LM Ericsson.
Flow diagrams

Fig. 5 is a schematic flow diagram of a method for dividing capacity among a number of different tasks with different priority levels according to the first preferred embodiment of the invention. A priority-based hierarchy of buffers is used. In step 101, a number of reservation classes are defined by assigning a predetermined set of the buffers to each reservation class. In step 102, a capacity distribution array is determined by putting reservation class identifying values into the elements of the capacity distribution array in proportion to the capacity given to the corresponding reservation class. In step 103, the tasks are temporarily stored in the priority-level-based hierarchy of buffers. In step 104, tasks are collected from the buffers at regular fetch intervals. For each fetch interval, a reservation class is identified by the class identifying value of the array element that corresponds to the current fetch interval, and tasks stored in the set of buffers assigned to the identified reservation class are collected from the buffers of the set in order of the priority of the buffers.
Alternatively, the step 103 of storing the tasks in the buffers is performed prior to the steps 101 and 102.
It is important to understand that the steps 103, 104 of storing tasks in the buffers and collecting tasks from the buffers are part of a continuous process, active as long as new jobs arrive to the processing system.
In normal operation, a particular distribution of processor capacity among the different load control buffers may be desirable. In other circumstances, at system restart or a change of operation mode for example, another distribution of processor capacity may be more appropriate. Therefore, in accordance with a second preferred embodiment of the invention, a number of different capacity distribution arrays, each of which is adapted for a specific system situation, are determined by the array determination module 74. It should be noted that the reservation classes may be redefined for each distribution array. In operation, a software-implemented array selection module 75 in the CPU 51 selects an appropriate one of these distribution arrays in accordance with the current system situation. The selected distribution array is communicated to the gate control unit 67 and used in distributing the processor capacity.
Fig. 6 is a schematic flow diagram of a method for dividing capacity among a number of different tasks with different priority levels according to the second preferred embodiment of the invention, using at least two different capacity distribution arrays. A priority-based hierarchy of buffers is used. In step 201, a number of reservation classes are defined by assigning a predetermined set of the buffers to each reservation class. In step 202, a first capacity distribution array is determined by putting reservation class identifying values into the elements of the first distribution array in proportion to the capacity given to the corresponding reservation class. In step 203, a second capacity distribution array is determined by putting reservation class identifying values into the elements of the second distribution array in proportion to the capacity given to the corresponding reservation class. The capacities given to the reservation classes in step 202 generally differ from the capacities given to the reservation classes in step 203. In step 204, tasks are temporarily stored in the priority-level-based hierarchy of buffers. Step 205 consists of selecting one of the first capacity distribution array and the second capacity distribution array. In step 206, the tasks are collected from the buffers in accordance with the selected distribution array. New distribution arrays can be determined and used selectively by repeating the steps 201 to 206.
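The selection step (205) might be sketched as a simple lookup keyed by system situation; the situation names and arrays below are invented for illustration only:

```python
# Predetermined capacity distribution arrays, one per system situation.
# In operation, the array selection module picks the one matching the
# prevailing conditions (all names here are assumed examples).

ARRAYS = {
    "normal":  [1, 2, 1, 3, 1, 2, 1, 4, 1, 1],
    "restart": [1, 1, 2, 1, 1, 2, 1, 1, 2, 3],
}

def select_array(system_state):
    # fall back to the normal-operation array for unrecognized states
    return ARRAYS.get(system_state, ARRAYS["normal"])

print(select_array("restart") == ARRAYS["restart"])  # True
```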
The steps of storing tasks in the buffers, selecting an appropriate distribution array and collecting tasks from the buffers are continuously repeated as indicated in Fig. 6. Although only two different capacity distribution arrays are explicitly determined in the above example, it should be understood that further distribution arrays, adapted to various system conditions, may be determined. The selection procedure is then based on all available distribution arrays.
A further application - scheduling
It should be noted that the above reservation mechanism is also applicable to the job buffers 53 in the processor 50. However, in this case, the main objective is not load control but rather scheduling of the processor capacity. Consequently, a number of further reservation classes are defined by assigning a predetermined set of the job buffers 53 (JBA to JBN) to each one of the reservation classes. Next, a further capacity distribution array is determined by putting values that identify these reservation classes into the further capacity distribution array in proportion to the capacity given to the corresponding reservation class. In operation, the tasks are being collected by the CPU 51 from the job buffers 53 according to this further capacity distribution array. This application is possible if the jobs in the job buffers are such that they can be executed in an arbitrary order.
It should furthermore be understood that although the invention is particularly advantageous for use under overload conditions, the invention may be used under normal load conditions as well.
If the execution times for the different tasks handled by the processing system are known, the reservation scheme described above may be enhanced to manage "processor load reservations". By processor load reservation we mean a reservation that guarantees (given that the flow of reserved jobs is large enough) that the processor spends at least a reserved proportion of time on tasks belonging to a certain reservation class.
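A speculative sketch of such a processor load reservation, assuming per-job execution times are known: jobs are collected from a class's priority-ordered buffers until the class's reserved processing-time budget for the interval would be exceeded (all names are illustrative assumptions):

```python
from collections import deque

# Collect jobs from the priority-ordered FIFO buffers of one
# reservation class until their accumulated (known) execution time
# would exceed the class's reserved time budget. Purely illustrative.

def collect_by_time(buffers, exec_time, time_budget):
    jobs, used = [], 0.0
    for buf in buffers:
        while buf and used + exec_time[buf[0]] <= time_budget:
            used += exec_time[buf[0]]
            jobs.append(buf.popleft())
    return jobs, used

bufs = [deque(["a", "b"]), deque(["c"])]
cost = {"a": 0.3, "b": 0.5, "c": 0.1}
print(collect_by_time(bufs, cost, 0.6)[0])  # ['a', 'c']
```

Note that job "b" is skipped once it no longer fits the budget, while a cheaper job from a lower-priority buffer can still be collected.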
The embodiments described above are merely given as examples, and it should be understood that the present invention is not limited thereto. Further modifications, changes and improvements which retain the basic underlying principles disclosed and claimed herein are within the scope of the invention.

Claims

CLAIMS:
1. A method for dividing the capacity of a processing system (40; 50; 80) among a number of different tasks with different priority levels by temporarily storing said tasks in a priority-level-based hierarchy of buffers (B1, B2, ..., BN), and collecting said tasks from said buffers for further processing by the system, characterized in that said method further comprises the steps of: defining a number M of reservation classes (C1, C2, ..., CM) by assigning to each one of said reservation classes a predetermined set of said buffers; and determining a capacity distribution array having a number K of elements (a1, a2, ..., aK), by putting values (1, 2, ..., M) that identify said reservation classes (C1, C2, ..., CM) into the elements (a1, a2, ..., aK) of said capacity distribution array in proportion to the capacity given to the corresponding reservation class; and said step of collecting said tasks from said buffers is performed according to said capacity distribution array.
2. The method according to claim 1, characterized in that the class identifying values in said capacity distribution array that point to the same reservation class are uniformly distributed among the elements (a1, a2, ..., aK) of said distribution array.
3. The method according to claim 1, characterized in that said step of collecting said tasks from said buffers is performed at regular fetch intervals, a predetermined number of tasks being collected during each one of said fetch intervals; the capacity given to each one of said reservation classes is guaranteed during a so-called reservation interval which comprises a predetermined number, equal to the number K of elements in said distribution array, of said fetch intervals; the fetch intervals of said reservation interval correspond, in order, to the elements of said distribution array; and for each fetch interval in said reservation interval, a reservation class is identified by the class identifying value of the element that corresponds to the current fetch interval and tasks stored in the set of buffers assigned to the identified reservation class are collected from the buffers of the set in order of the priority of these buffers.
4. The method according to claim 3, characterized in that if all the buffers in the set of buffers that is assigned to the current reservation class are emptied before said predetermined number of tasks to be collected during the fetch interval is reached, tasks are collected from non-empty buffers in said hierarchy of buffers in order of the priority of the buffers.
5. The method according to claim 1, characterized in that said further processing includes the steps of: temporarily storing the collected tasks in a further priority-level-based hierarchy of buffers; and collecting said tasks from the buffers of said further hierarchy of buffers for processing said tasks in a processor.
6. The method according to claim 5, characterized in that said step of collecting said tasks from the buffers of said further hierarchy of buffers is performed in the order of the priority of the buffers of said further hierarchy of buffers.
7. The method according to claim 5, characterized in that said step of collecting said tasks from the buffers of said further hierarchy of buffers is performed by: defining a number J of further reservation classes (D1, D2, ..., DJ) by assigning to each one of said further reservation classes a predetermined set of the buffers in said further hierarchy of buffers; and determining a further capacity distribution array having a number L of elements (b1, b2, ..., bL), by putting values (1, 2, ..., J) that identify said reservation classes (D1, D2, ..., DJ) into the elements (b1, b2, ..., bL) of said further capacity distribution array in proportion to the capacity given to the corresponding further reservation class; and emptying said tasks from the buffers of said further hierarchy of buffers according to said further capacity distribution array.
8. The method according to claim 1, characterized in that each predetermined set of said buffers includes at least one buffer.
9. The method according to claim 1, characterized in that at least one of said predetermined sets of said buffers includes more than one buffer.
10. The method according to claim 1, characterized in that said method is performed under overload conditions.
11. A method for dividing the capacity of a processing system (40; 50; 80) among a number of different tasks with different priority levels by temporarily storing said tasks in a priority-level-based hierarchy of buffers (B1, B2, ..., BN), and collecting said tasks from said buffers for further processing by the system, characterized in that said method further comprises the steps of: defining a number M of reservation classes (C1, C2, ..., CM) by assigning to each one of said reservation classes a predetermined set of said buffers; and determining a first capacity distribution array having a number of elements, the values of which identify said reservation classes (C1, C2, ..., CM), said class identifying values being put into said first capacity distribution array in proportion to the capacity given to the corresponding reservation class; determining a second capacity distribution array having a number of elements, the values of which identify said reservation classes (C1, C2, ..., CM), said class identifying values being put into said second capacity distribution array in proportion to the capacity given to the corresponding reservation class; and selecting one of said first capacity distribution array and said second capacity distribution array; and said step of collecting said tasks from said buffers is performed according to said selected capacity distribution array.
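Claim 11 leaves the selection criterion between the two precomputed arrays open. Purely as an illustrative sketch, the choice might be assumed to depend on the system's load state (cf. claim 10); the criterion and the names below are hypothetical:

```python
def select_distribution_array(overloaded, normal_array, overload_array):
    """Return the capacity distribution array to apply.

    Two arrays are precomputed, here assumed to encode a normal-load
    policy and an overload policy; one is selected according to the
    current load state and then governs the collection of tasks.
    """
    return overload_array if overloaded else normal_array
```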
12. A system for dividing the capacity of a processing system (40; 50; 80) among a number of different tasks with different priority levels, having a priority-level-based hierarchy of buffers (66; 53; 84) for temporarily storing said tasks, and means (67; 51; 97) for collecting said tasks from said buffers for further processing by the processing system, characterized in that said system further comprises: means (73; 93) for defining a number M of reservation classes (C1, C2, ..., CM) by assigning to each one of said reservation classes a predetermined set of said buffers; and means (74; 94) for determining a capacity distribution array having a number K of elements (a1, a2, ..., aK), the values (1, 2, ..., M) of which identify said reservation classes (C1, C2, ..., CM), said class identifying values being put into the elements (a1, a2, ..., aK) of said capacity distribution array in proportion to the capacity given to the corresponding reservation class; and said means (67; 51; 97) for collecting said tasks from said buffers (66; 53; 84) operates in accordance with said capacity distribution array.
13. The system according to claim 12, characterized in that said means (74; 94) for determining a capacity distribution array operates to distribute the class identifying values in said capacity distribution array that point to the same reservation class uniformly among the elements (a1, a2, ..., aK) of said distribution array.
14. The system according to claim 12, characterized in that said means (67; 51; 97) for collecting said tasks from said buffers (66; 53; 84) performs collection of tasks at regular fetch intervals, a predetermined number of tasks being collected during each one of said fetch intervals; the capacity given to each one of said reservation classes is guaranteed during a so-called reservation interval which comprises a predetermined number, equal to the number K of elements in said distribution array, of said fetch intervals; the fetch intervals of a reservation interval correspond, in order, to the elements of said distribution array; and said means (67; 51; 97) for collecting said tasks from said buffers (66; 53; 84) operates to identify, for each fetch interval in said reservation interval, a reservation class by means of the class identifying value of the element that corresponds to the current fetch interval and to collect tasks stored in the set of buffers assigned to the identified reservation class from the buffers of the set in order of the priority of these buffers.
15. The system according to claim 14, characterized in that if all the buffers in the set of buffers that is assigned to the current reservation class are emptied before said predetermined number of tasks to be collected during the fetch interval is reached, said means (67; 51; 97) for collecting said tasks from said buffers operates to collect tasks from non-empty buffers in said hierarchy of buffers in order of the priority of these non-empty buffers.
16. The system according to claim 12, characterized in that each predetermined set of said buffers includes at least one buffer.
17. The system according to claim 12, characterized in that at least one of said predetermined sets of said buffers includes more than one buffer.
PCT/SE1999/001558 1998-11-02 1999-09-08 Capacity reservation providing different priority levels to different tasks WO2000026781A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CA002349271A CA2349271A1 (en) 1998-11-02 1999-09-08 Capacity reservation providing different priority levels to different tasks
AU60161/99A AU6016199A (en) 1998-11-02 1999-09-08 Capacity reservation providing different priority levels to different tasks
EP99971537A EP1125198A1 (en) 1998-11-02 1999-09-08 Capacity reservation providing different priority levels to different tasks

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SE9803739A SE512843C2 (en) 1998-11-02 1998-11-02 Load control in a data processing system m.a. priority level based hierarchy of buffers
SE9803739-3 1998-11-02

Publications (1)

Publication Number Publication Date
WO2000026781A1 true WO2000026781A1 (en) 2000-05-11

Family

ID=20413149

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE1999/001558 WO2000026781A1 (en) 1998-11-02 1999-09-08 Capacity reservation providing different priority levels to different tasks

Country Status (5)

Country Link
EP (1) EP1125198A1 (en)
AU (1) AU6016199A (en)
CA (1) CA2349271A1 (en)
SE (1) SE512843C2 (en)
WO (1) WO2000026781A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100422939C (en) * 2003-04-30 2008-10-01 International Business Machines Corporation Method and system of configuring elements of a distributed computing system for optimized value

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4314335A (en) * 1980-02-06 1982-02-02 The Perkin-Elmer Corporation Multilevel priority arbiter
US4692860A (en) * 1983-03-18 1987-09-08 Telefonaktiebolaget Lm Ericsson Apparatus for load regulation in computer systems
US5128860A (en) * 1989-04-25 1992-07-07 Motorola, Inc. Manufacturing or service system allocating resources to associated demands by comparing time ordered arrays of data
US5577221A (en) * 1994-04-14 1996-11-19 Industrial Technology Research Institute Method and device for expanding ROM capacity

Also Published As

Publication number Publication date
SE9803739D0 (en) 1998-11-02
SE9803739L (en) 2000-05-03
CA2349271A1 (en) 2000-05-11
EP1125198A1 (en) 2001-08-22
SE512843C2 (en) 2000-05-22
AU6016199A (en) 2000-05-22

Similar Documents

Publication Publication Date Title
JP3585755B2 (en) Load sharing based on priority among non-communication processes in time sharing system
US6909691B1 (en) Fairly partitioning resources while limiting the maximum fair share
US6675190B1 (en) Method for cooperative multitasking in a communications network, and a network element for carrying out the method
US5596576A (en) Systems and methods for sharing of resources
US5274644A (en) Efficient, rate-base multiclass access control
US8387052B2 (en) Adaptive partitioning for operating system
US6687257B1 (en) Distributed real-time operating system providing dynamic guaranteed mixed priority scheduling for communications and processing
JP3716753B2 (en) Transaction load balancing method, method and program between computers of multiprocessor configuration
US6477144B1 (en) Time linked scheduling of cell-based traffic
US7809876B2 (en) Distributed real-time operating system
US6810043B1 (en) Scheduling circuitry and methods
US5155851A (en) Routing an incoming data stream to parallel processing stations
US20030076834A1 (en) Adaptive service weight assingnments for ATM scheduling
US6229813B1 (en) Pointer system for queue size control in a multi-task processing application
KR20000028737A (en) Predictive bursty real-time traffic control for telecommunications switching systems
CN110100235B (en) Heterogeneous event queue
KR100479306B1 (en) Reserving resources for anticipated work items via simulated work items
US20140245311A1 (en) Adaptive partitioning for operating system
EP0863680B1 (en) Method and apparatus for improved call control scheduling in a distributed system with dissimilar call processors
WO2000026781A1 (en) Capacity reservation providing different priority levels to different tasks
CN111858019B (en) Task scheduling method and device and computer readable storage medium
Dandamudi The effect of scheduling discipline on sender-initiated and receiver-initiated adaptive load sharing in homogeneous distributed systems
JP2938630B2 (en) Load control method in a system with multiple processing methods
US7843913B2 (en) Method of operating a scheduler of a crossbar switch and scheduler
JPH06338902A (en) Call reception controller

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SL SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 EP: the EPO has been informed by WIPO that EP was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 1999971537

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2349271

Country of ref document: CA

Ref country code: CA

Ref document number: 2349271

Kind code of ref document: A

Format of ref document f/p: F

WWP Wipo information: published in national office

Ref document number: 1999971537

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642