CA2419340A1 - Software barrier synchronization - Google Patents

Software barrier synchronization

Info

Publication number
CA2419340A1
Authority
CA
Canada
Prior art keywords
array
elements
concurrently running
state
release state
Prior art date
Legal status
Abandoned
Application number
CA002419340A
Other languages
French (fr)
Inventor
Guansong Zhang
Robert J. Blainey
Current Assignee
IBM Canada Ltd
Original Assignee
IBM Canada Ltd
Priority date
Filing date
Publication date
Application filed by IBM Canada Ltd filed Critical IBM Canada Ltd
Priority to CA002419340A priority Critical patent/CA2419340A1/en
Priority to US10/718,293 priority patent/US7581222B2/en
Publication of CA2419340A1 publication Critical patent/CA2419340A1/en
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/52Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F9/522Barrier synchronisation

Abstract

The present invention provides an approach for barrier synchronization. The barrier has a first array of elements with each element of the first array having an associated process, and a second array of elements with each element of the second array having an associated process.
Prior to use, the values or states of the elements in each array may be initialized. As each process finishes its phase and arrives at the barrier, it may update the value or state of its associated element in the first array. Each process may then proceed to spin at its associated element in the second array, waiting for that element to switch. When the values or states of the elements of the first array reach a predetermined value or state, an instruction is sent to all of the elements in the second array to switch their values or states, allowing all processes to leave.

Description

SOFTWARE BARRIER SYNCHRONIZATION
BACKGROUND OF THE INVENTION
The present invention relates to synchronization of concurrently running processes in a data processing system.
Parallel computing allows a computational problem to be decomposed into multiple tasks. These multiple tasks are then carried out by a plurality of processes which may operate concurrently. Parallel computing may allow a computational problem to be solved in a shorter amount of time by utilizing the computational resources of a plurality of processors. Parallel computing may also allow large computational problems to be solved that may not be practical to solve using conventional computing systems and methods. With currently available parallel computing systems, it is possible to harness the computational resources of hundreds or thousands of computer processors to run hundreds or thousands of concurrent processes.
Typically, there are interdependencies between at least some of the concurrent processes.
In order to avoid a condition where one process races too far ahead of another interdependent process (which may cause an indeterminate computational result), it is often necessary to incorporate a process synchronization mechanism, such as a barrier synchronization point.
Multiple threads or processes may then come to the barrier synchronization point, and wait until all of the other concurrent processes have arrived. Once synchronized in this manner, the processes may then continue with their execution separately. Thus, barrier synchronization is considered to be one of the most important mechanisms in parallel processing.
This is reflected in published shared memory parallel programming standards, such as OpenMP™, in which combined parallel work-sharing constructs have implicit barrier synchronization (although these implicit barriers may be turned off if necessary by a nowait clause).
Different designs for barrier synchronization have been proposed. For example, an IBM
Technical Disclosure Bulletin entitled "Barrier Synchronization Using Fetch-and-Add and Broadcast", 34(8):33-34, 1992, describes a fetch-and-add operation to decrease an established counter, and then have waiting threads monitor the counter to determine when they may proceed.
As another example, U.S. Patent No. 6,330,619 issued to Kreuzberg describes use of different memory words for different synchronization states without using special hardware instructions.
While these methods provide possible barrier synchronization solutions, it is desirable to develop a system and method for barrier synchronization having increased performance.
SUMMARY OF THE INVENTION
The present invention provides an approach for barrier synchronization. The barrier has a first array of elements with each element of the first array having an associated process, and a second array of elements with each element of the second array having an associated process.
In an exemplary embodiment, prior to use, the values or states of the elements in each array may be initialized. As each process finishes its phase and arrives at the barrier, it may update the value or state of its associated element in the first array. Each process may then proceed to spin at its associated element in the second array, waiting for that element to switch.
One of the concurrent processes may be designated to monitor the first array of elements, checking for all of the values or states of the elements of the first array to reach a predetermined value or state. Upon occurrence of this event, the designated process may instruct all of the elements in the second array to switch their values or states, allowing all processes to leave.
In an embodiment, upon all of the values of the elements of the first array reaching the predetermined value, and before the instruction is issued to switch the elements of the second array, the elements of the first array may be reinitialized.
In an alternative embodiment, the monitoring function may be conducted by an independent process that is not one of the concurrent processes having an associated element in each of the first and second arrays.
Accordingly, in a first aspect of the invention, there is provided a method of synchronizing at least two concurrently running processes in a data processing system, comprising:
(a) providing a first array of elements with initialized states, each element of said first array having a concurrently running process associated therewith, each element of said first array being configured to have its state updated by its associated concurrently running process upon completion of a phase by said associated concurrently running process;
(b) providing a second array of elements with initialized hold states, each element of said second array having a concurrently running process associated therewith, each element of said second array being configured to switch, upon receiving an instruction, to a release state;
(c) arranging for monitoring said first array of elements and, upon each element of said first array having had its state updated, arranging for generating said instruction for switching said elements of said second array to said release state.
In an embodiment of the first aspect of the invention, the method further comprises:
(d) for each process of said at least two concurrently running processes, configuring said each process such that, upon completion of said phase and upon updating of its associated element of said first array, said each process then waits at its associated element of said second array for said release state.
In a second aspect of the invention, there is provided a system for synchronizing at least two concurrently running processes in a data processing system, comprising:
(a) a first array of elements, each element of said first array having a concurrently running process associated therewith, said each element of said first array being configured to have an initial state that may be updated by its associated concurrently running process, upon completion of a phase by said associated concurrently running process;
(b) a second array of elements, each element of said second array having a concurrently running process associated therewith, said each element of said second array being configured to have an initial hold state that may be switched, upon receiving an instruction, to a release state;
(c) a monitoring process for monitoring said first array of elements, said monitoring process being configured to generate said instruction for switching said elements of said second array to said release state, upon each element of said first array having had its state updated.
In a third aspect of the invention, there is provided a processor for executing a process in order to synchronize said process with at least one other concurrently running process, said processor being operable to:
access an element of a first array of elements, said element of said first array being associated with said process, said element of said first array having an initial state;
update said element of said first array of elements upon completion of a phase by said process;
after said updating, access an element of a second array of elements, said element of said second array being associated with said process, said element of said second array having an initial hold state and being configured to switch, upon receiving an instruction, to a release state, and check said element of said second array for said switch to said release state until detecting said release state.
In a fourth aspect of the invention, there is provided a processor for executing a process in order to synchronize at least two concurrently running processes, said processor being operable to:
access a first array of elements, each element of said first array of elements being associated with one of said at least two concurrently running processes and having an initial state;
monitor all elements of said first array of elements until detecting that each of said elements of said first array has been updated by its associated process; and thereafter generate an instruction to switch all elements of a second array of elements from an initial hold state to a release state, each element of said second array of elements being associated with one of said at least two concurrently running processes.
In a fifth aspect of the invention, there is provided a computer program product for synchronizing at least two concurrently running processes in a data processing system, the computer program product comprising:
a computer useable medium having computer readable program code means embodied in the medium for synchronizing at least two concurrently running processes, the computer program code means including:
computer readable program code means for providing a first array of elements with initialized states, each element of said first array having a concurrently running process associated therewith, each element of said first array being configured to have its state updated by its associated concurrently running process upon completion of a phase by said associated concurrently running process;
computer readable program code means for providing a second array of elements with initialized hold states, each element of said second array having a concurrently running process associated therewith, each element of said second array being configured to switch, upon receiving an instruction, to a release state;
computer readable program code means for monitoring said first array of elements and, upon each element of said first array having had its state updated, generating said instruction for switching said elements of said second array to said release state.

In an embodiment of the fifth aspect of the invention, the computer program product further comprises:
computer readable program code means for configuring each process of said at least two concurrently running processes, upon completion of said phase and upon updating of its associated element of said first array, to wait at its associated element of said second array for said release state.
In a sixth aspect of the invention, there is provided a system for synchronizing at least two concurrently running processes in a data processing system, comprising:
(a) means for providing a first array of elements with initialized states, each element of said first array having a concurrently running process associated therewith, each element of said first array being configured to have its state updated by its associated concurrently running process upon completion of a phase by said associated concurrently running process;
(b) means for providing a second array of elements with initialized hold states, each element of said second array having a concurrently running process associated therewith, each element of said second array being configured to switch, upon receiving an instruction, to a release state;
(c) means for monitoring said first array of elements and, upon each element of said first array having had its state updated, generating said instruction for switching said elements of said second array to said release state.
In an embodiment of the sixth aspect of the invention, the system further comprises:
(d) means for each process of said at least two concurrently running processes, configuring said each process such that, upon completion of said phase and upon updating of its associated element of said first array, said each process then waits at its associated element of said second array for said release state.
Other aspects of the invention will become apparent through the illustrative figures and accompanying description provided below.
BRIEF DESCRIPTION OF THE DRAWINGS
In the figures which illustrate example embodiments of this invention:
FIG. 1 is a schematic block diagram of a parallel computing system in which the present invention may be embodied.
FIG. 2a is a schematic diagram of a number of concurrent processes running in the parallel computing system of FIG. 1.
FIG. 2b is a detailed view of a parallel region of FIG. 2a.
FIG. 3 is a schematic diagram of a barrier implemented with a fetch-and-add operation.
FIG. 4 is a schematic barrier implemented with a distributed counter.
FIG. 5 is a schematic barrier implemented with a padded distributed counter.
FIG. 6 is a schematic barrier with a distributed counter and local sensor in accordance with an embodiment of the invention.
FIG. 7 is a graphical representation of test results for the barrier designs of FIG. 3 to FIG. 6.

DETAILED DESCRIPTION
Referring to FIG. 1, a parallel computing system 10 is shown in which a plurality of processors 11 are interconnected, for example, by means of a network 12. This configuration is intended to be illustrative and not limiting in any way in terms of a parallel computing system that may embody the present invention.
As shown, a plurality of processors 11 may be provided within a single computer 13 or across multiple computers 13, 13' (collectively 13) connected across the network 12. (When a single computer 13 is provided with a plurality of processors 11, and only those processors 11 are used, the single computer 13 may be characterized as a parallel computing system in and of itself.) The plurality of processors 11 may have a logical shared memory 14, represented in dotted outline, which may physically reside on one or more of the computers 13 and map onto a plurality of physical memory modules 15, the mapping being represented by blocks 15' shown in dotted outline. Each of the processors 11 may have access to one or more physical memory modules 15 and/or the logical shared memory 14 on which data and/or executable software may reside. Thus, in this context, an "access" is any hardware, software, network components, network protocols, or any combination of these which enables a processor 11 to access data and/or executable software located anywhere within the parallel computing system 10 of FIG. 1.
For the remainder of the description, the term "access" will have this meaning.
Each processor 11 may have a cache 16 for efficiently transferring data between the processor 11 and its associated physical memory 15 (or the logical shared memory 14). There may be a defined minimum size of data or minimum number of words that may be transferred by the cache 16. This defined minimum size of data or minimum number of words is often referred to as a "cache line" or "cache block". For the remainder of the description, the term "cache line" will have this meaning.
Referring to FIG. 2a, shown schematically is an example of a software program 20 which may execute within the parallel computing system 10 of FIG. 1. The software program 20 may have a defined master thread or process 22 which runs from the beginning 24 of execution of the software program 20 to its end 25. For example, the master thread or process 22 may execute on a processor 11 in one of the computers 13 of FIG. 1. The software program 20 (which may be initially executed by the master thread or process 22) may also spawn one or more concurrent threads or processes 26, 26' (collectively 26) which may run in parallel with the master thread or process 22 for a portion of the program 20. These concurrent processes 22, 26 may run concurrently on one or more of the processors 11 of FIG. 1. In an embodiment, the software program 20 may assign which processors 11 execute which of the concurrent processes 22, 26 and how the accesses are configured to allow the processors 11 to have access to necessary data and resources within the parallel computing system 10. For the remainder of this description, the terms "thread" and "process" will be used interchangeably and should be considered to have the same meaning, as defined in this paragraph.
The number of concurrent processes 26 spawned by the software program 20 may change over the course of execution of the software program 20. One instance of concurrent processes 26 spawned by the software program 20 may be referred to as a parallel region.
In the example shown in FIG. 2a, there are two parallel regions R1, R2 formed during execution of the software program 20. The first parallel region R1 may have associated with it a certain set or subset of processors 11 and logical shared memory 14 required to run the concurrent processes 26 within the region R1. Similarly, the second parallel region R2 may have associated with it another set or subset of processors 11 and logical shared memory 14 required to run the concurrent processes 26' within the region R2. For the remainder of this description, the term "parallel region" will have the meaning as defined in this paragraph.
Referring to FIG. 2b, within the first parallel region R1 (or analogously within the second parallel region R2), there may be a number of barriers B1, B2 which separate phases PH1, PH2, PH3 of the software program 20 within that parallel region R1. One phase PH1 may include, for example, a software loop after which interdependencies between one or more of the processes 22, 26 require that synchronization occur at barrier B1. Upon synchronization at barrier B1, the processes 22, 26 may proceed to a second phase PH2. Similar interdependencies between the processes 22, 26 may then require synchronization at a second barrier B2, followed by execution of a third phase PH3. For the remainder of this description, the term "phase" will have the meaning as defined in this paragraph.
Referring to FIG. 3, shown is a known basic barrier design 30 utilizing a fetch-and-add operation. (In this figure, and subsequent figures, square blocks are used to represent a data structure, and arrow-headed lines with associated comments are used to represent actions that a process may apply to the data structure.) In the basic barrier design 30 shown, a global counter 30C may reside in a shared memory (e.g. memory 14 of FIG. 1) of the parallel computing system (e.g. system 10 of FIG. 1). Concurrent processes P1 - Pn are shown in a simplified manner interacting with the counter 30C.
Before the start of execution of a phase by the concurrent processes P1 - Pn (or at least before the fastest of the concurrent processes P1 - Pn comes to the barrier), the value of counter 30C is initialized to the number of concurrent processes participating in the parallel region (e.g. parallel region R1 of FIG. 2a). As the processes P1 - Pn come to a barrier point (e.g. barrier B1 of FIG. 2b), each process P1 - Pn will decrease the counter 30C with a fetch-and-add operation, as indicated by the "decrease" lines D1 - Dn. After that, each process P1 - Pn will spin, repeatedly checking the value of the counter 30C, as indicated by "spin" lines S1 - Sn, until the value of counter 30C is zero. When the counter 30C reaches a value of zero, the processes P1 - Pn are free to leave for their next phase, if any, as indicated by "leave" lines L1 - Ln.
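The FIG. 3 design can be sketched in Python. This is an illustrative sketch only: Python threads stand in for the concurrent processes, a lock-protected decrement stands in for the hardware fetch-and-add instruction, and the class and attribute names are not taken from the patent.

```python
import threading

class CentralCounterBarrier:
    """Sketch of the FIG. 3 fetch-and-add barrier.

    Every arriving process decrements the single shared counter
    (here via a lock, standing in for fetch-and-add), then busy-waits
    until the counter reads zero. Note that all processes spin on the
    same shared word, which is the source of the access conflicts
    discussed in the text."""

    def __init__(self, num_processes):
        self.count = num_processes      # initialized to the process count
        self.lock = threading.Lock()

    def wait(self):
        with self.lock:                 # "fetch-and-add" of -1
            self.count -= 1
        while self.count > 0:           # spin until the counter is zero
            pass                        # then leave for the next phase
```

Each worker calls wait() at the barrier point; none of a worker's post-barrier code runs until every worker has arrived.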
The barrier implementation in FIG. 3 is similar to that disclosed in the IBM Technical Disclosure Bulletin entitled "Barrier Synchronization Using Fetch-and-Add and Broadcast", referred to above. One difference, however, is that instead of scheduling periodic checks of the counter 30C by each process P1 - Pn, a busy-wait method is enforced by letting each process P1 - Pn spin and constantly read the shared counter 30C. When processor resources are not needed for other tasks, busy-wait synchronization is preferred over scheduler-based blocking ("blocking" being an operating system scheduling task for removing a thread or process from a processor and putting it on a wait queue). This barrier is relatively simple to implement in terms of coding in a computer program. However, the fetch-and-add operation may make multiple concurrent processes focus on one memory word, possibly resulting in access conflicts.
Referring to FIG. 4, another known barrier design 40 having a distributed counter or array 40C is shown.
In the present description, an "array" refers broadly to a logical grouping or arrangement of elements having a common feature or characteristic. This logical grouping or arrangement is not meant to place any restriction or limitation on the physical embodiment.
Thus, the term "array" may apply equally to a logical grouping of elements that are provided across a network in a distributed arrangement, and to a logical grouping of elements provided within a local, defined physical space (such as a contiguous area of physical memory 15).
Here, instead of setting up one counter (e.g. counter 30C of FIG. 3) in the shared memory 14 (FIG. 1), multiple elements or counters 40C1 - 40Cn may be set up as a byte array 40C. The size (i.e. the number of elements) of the array 40C is chosen to equal the number of processes P1 - Pn in the parallel region.
Before execution, the value of each element or counter 40C1 - 40Cn may be initialized, for example, to a value of one. As in the case of the fetch-and-add barrier (FIG. 3), each process P1 - Pn coming to the barrier point will decrease the distributed counter or array 40C. However, unlike the previous case (FIG. 3), each process P1 - Pn decreases its own local element or counter 40C1 - 40Cn, as indicated by "decrease" lines D1 - Dn. In this way the need for a fetch-and-add operation is removed, since for any given element 40C1 - 40Cn there is only one associated process P1 - Pn writing to it, with a byte operation.
Still referring to FIG. 4, each process P1 - Pn spins, as indicated by "spin" lines S1 - Sn, at the distributed counter or array 40C until all of the values of each element 40C1 - 40Cn are zero. For this purpose, each process P1 - Pn requires access to all of the elements 40C1 - 40Cn of the distributed counter 40C to check that all of the processes P1 - Pn have decreased their own associated elements 40C1 - 40Cn to zero. Once all elements 40C1 - 40Cn in array 40C reach a value of zero, the processes P1 - Pn are free to leave for a subsequent phase, if any, as indicated by "leave" lines L1 - Ln.
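The distributed counter of FIG. 4 can be sketched the same way. Again an illustrative sketch, with names of my own choosing: each process writes zero only to its own element, then scans the whole array.

```python
import threading

class DistributedCounterBarrier:
    """Sketch of the FIG. 4 distributed counter barrier.

    Each process owns one element of a byte-per-process array, so
    arrival is a plain write of zero to that element and no
    fetch-and-add is needed. Every process then scans the whole array
    until all elements are zero."""

    def __init__(self, num_processes):
        self.counter = [1] * num_processes   # one element per process

    def wait(self, my_id):
        self.counter[my_id] = 0              # decrease my own element only
        while any(self.counter):             # spin over all n elements
            pass                             # free to leave when all zero
```

Note that the "decrease" step no longer contends on a single word, but the spin step still has every process reading all n elements.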
The distributed counter design of FIG. 4 generally outperforms the fetch-and-add design of FIG. 3, as demonstrated by test results presented further below. However, access conflicts may still arise as the processes P1 - Pn try to check the values of the elements 40C1 - 40Cn of array 40C at the same time.
Now referring to FIG. 5, to further lessen any potential access conflicts between the processes P1 - Pn, a barrier design 50 having a padded distributed counter or array 50C may be utilized. The concept of padding is known to those skilled in the art. In the present illustrative embodiment, each of the counters or elements 50C1 - 50Cn of the array 50C is placed into a different cache line (i.e. each counter or element 50C1 - 50Cn is aligned with a portion of logical/physical memory so that the counters or elements 50C1 - 50Cn avoid sharing the same cache line) and padded to correspond to the size of a cache line. The operation of the barrier 50 with padded distributed counter 50C is substantially as described for the barrier 40 above. However, padding the data structure to correspond to the size of a cache line results in a significant reduction of potential memory access conflicts by the concurrent processes P1 - Pn, since no two counters will share the same cache line. This improvement is demonstrated by test results presented hereinafter.
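The padding idea can be sketched as follows. This is a layout illustration only: Python does not control real memory alignment, so the 64-byte cache-line size and the stride arithmetic merely model the byte layout a C implementation would use; all names are my own.

```python
CACHE_LINE = 64   # assumed cache-line size in bytes

class PaddedDistributedCounterBarrier:
    """Layout sketch of the FIG. 5 padded distributed counter.

    Each one-byte counter is padded out to a full (assumed) 64-byte
    cache line, so no two counters share a line and a process writing
    its own counter does not invalidate another process's line."""

    def __init__(self, num_processes):
        self.n = num_processes
        # one line per process: the counter byte followed by 63 pad bytes
        self.mem = bytearray([1] + [0] * (CACHE_LINE - 1)) * num_processes

    def _slot(self, i):
        return i * CACHE_LINE                # first byte of process i's line

    def wait(self, my_id):
        self.mem[self._slot(my_id)] = 0      # decrease my own element
        while any(self.mem[self._slot(i)] for i in range(self.n)):
            pass                             # spin until all counters zero
```

The barrier logic is unchanged from FIG. 4; only the placement of the counter elements differs.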
A potential drawback of the barrier design 50 shown in FIG. 5 is that the memory requirements may be significant, considering the cache line sizes that may be used in modern processor architectures. This potential memory resource issue may be alleviated by limiting the number of cache lines used for the barriers. For example, if a parallel region has two barriers, two counters may be set up in each parallel region, one as a current counter and the other as a next counter. For a given process, the two associated counter elements may occupy one cache line. This will reduce memory consumption, while taking full advantage of the padded distributed counter or array 50C. A sample algorithm for implementing two counters, one as a current counter and the other as a next counter, is shown below.
Algorithm 1: Distributed counter barrier with reused counters

Data: Array of two distributed counters, with elements initialized to one
Data: Barrier index, for specifying which counter is the current counter

Begin
    Increase the barrier index;
    Get the current and the next position in the counter array;
    Set my next distributed counter element to one;
    Set my current distributed counter element to zero;
    Repeat
        For each element in the current distributed counter Do
            Check if it is zero
        End
    Until all distributed counter elements are zero;
End

To reuse the same counter or array, it is necessary to reinitialize the counter elements 50C1 - 50Cn back to one after a barrier synchronization upon completion of a phase. In case a program encounters multiple barriers in a short period of time, it is necessary to ensure that when the counter is reinitialized for the second barrier, the previous barrier is not contaminated.
Suppose, for example, that the concurrent processes P1 - Pn include a very fast process Pf and a very slow process Ps. In a possible scenario, if both the very fast process Pf and the very slow process Ps are free to leave the current barrier, and the very fast process Pf completes its next phase and encounters the next barrier right away, the very slow process may not yet have been able to check all of the elements of counter 50C to realize that it is also free to leave. If the next barrier uses the same counter 50C as the one used for the current barrier, and if the element associated with the very fast process Pf is reinitialized to one in the meantime, the very slow process Ps may not leave the current barrier as expected. In order to avoid this possible problem, two separate counters may be used.
By having at least two counters within a parallel region, the processes P1 - Pn can initialize the next counter while leaving the current counter, knowing that no processes are spinning on the next counter. This must be the case as, otherwise, the current counter elements cannot all be zero and the processes would not be free to leave.
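The reuse discipline of Algorithm 1 can be sketched in Python. This is a sketch only: the class name and the split into an arrive step and a released check are my own structuring, used here so the alternation between the two counters can be stepped through deterministically.

```python
class ReusedCounterBarrier:
    """Sketch of Algorithm 1: two distributed counters reused alternately.

    On arrival, a process re-initializes its element of the *next*
    counter to one before zeroing its element of the current counter,
    so the next counter is already re-initialized by the time any
    process reaches the following barrier."""

    def __init__(self, num_processes):
        self.counters = [[1] * num_processes, [1] * num_processes]

    def arrive(self, my_id, barrier_index):
        current = barrier_index % 2            # which counter is current
        self.counters[1 - current][my_id] = 1  # set my next element to one
        self.counters[current][my_id] = 0      # set my current element to zero

    def released(self, barrier_index):
        # True once every element of the current counter is zero
        return not any(self.counters[barrier_index % 2])
```

A spinning wait is then simply `arrive(...)` followed by looping on `released(...)`; the deterministic split above just makes the counter states easy to inspect between steps.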
Referring to FIG. 6, shown is an illustrative example of a barrier 60 having a padded distributed counter or array 50C, as used with the barrier of FIG. 5. As before, each element 50C1 - 50Cn of the array 50C is associated with one of the processes P1 - Pn. In addition, there is provided a plurality of local sensors or elements LS1 - LSn. Each sensor or element LS1 - LSn is also associated with one of the processes P1 - Pn.
While the local sensors or elements LS1 - LSn may be located anywhere across the parallel computing system, in an embodiment, each element LS1 - LSn may be local to its associated process P1 - Pn. In such a case, although the elements LS1 - LSn may be logically grouped as an array, they may be characterized as individual sensors having a distributed arrangement. Alternatively, the local sensors or elements LS1 - LSn may be formed into a logical group within a defined physical space. In an embodiment, each of the distributed counter or array 50C and the local sensor or array LS may be implemented by an array of cache lines such that each corresponding counter element and local sensor element resides on the same cache line.
Before operation of the barrier 60, each element LS1 - LSn corresponding to a local sensor for an associated process is initialized to a value of, for example, one, which may represent a "hold" state. All of the elements in the counter 50C may also be initialized to a value of one.
In an embodiment, one process in the group, for instance a master process P1, may be designated to act as if it is the last {slowest) process. Thus, after the master process P1 decreases its associated element 50C1 of the distributed counter 50C, it spins at the counter or array 50C
(as indicated by "spin" line S 1' ) to check whether all of the elements 50C 1-50Cn are zero. The rest of the processes P2 - Pn will decrease their associated counter elements 50C2 - 50Cn, and then immediately proceed to their own local sensors LS2 - LSn to spin there awaiting a switch of the sensor to a leave state (as indicated by "spin" line Sn') The last process Pl, upon detecting that all elements 50C1 - 50Cn in the counter 50C are zero, may cause an instruction to be sent to all of the local sensors LS 1 -LSn to change their bits to zero, thus changing the state of those local sensors to a "leave" state.
Upon such a change in state, all of the processes, including the last process P1, may leave for the next phase, if any, as indicated by "leave" lines L1 - Ln. Advantageously, only the last process P1 (or another independent process) is required to check all of the elements of the padded distributed counter 50C during the barrier synchronization operation.
In an embodiment, the last process P1, upon detecting that all elements 50C1 - 50Cn in the counter 50C are zero, may immediately reinitialize the value of the elements 50C1 - 50Cn, prior to instructing the local sensors LS1 - LSn to change states. This will allow the same counter 50C to be used for a subsequent barrier operation, as explained further below.
A sample algorithm for implementing the barrier 60 of FIG. 6 is provided below:

Algorithm 2: Barrier with distributed counter and local sensor

    Data: Distributed counter with each element initialized to one
    Data: Local sensor with each element initialized to one
    Begin
        Decrease my own distributed counter element;
        If I am the designated process then
            Repeat
                For each element in the distributed counter Do
                    Check if it is zero
                End
            Until all distributed counter elements are zero;
            For each element in the distributed counter Do
                Set it back to one
            End
            For each element in the local sensor Do
                Set it to zero
            End
        Else
            Repeat
                Check my local sensor element;
            Until it is zero;
        End
        Set my own local sensor element back to one;
    End

In an alternative embodiment, rather than designating one of the concurrent processes as the last process, an entirely independent process may perform the monitoring function at the counter 50C. In this case, process P1 may also immediately proceed to its associated local sensor LS1 to spin there. When the independent process finds that the counter values are all zero, it may reinitialize all of the counter elements in counter 50C back to one, and also flip all of the elements in the local sensor to a leave state. Finally, when all of the processes P1 - Pn leave the barrier, the local sensors LS1 - LSn may then be reinitialized back to a hold state.
Again, in order to reduce memory consumption, all barriers within a parallel region may be allowed to share the same pair of counter 50C and sensor LS. Unlike the previous situation (FIG. 5), it is not necessary to have two groups of a counter and a sensor for adjacent barriers, as explained below.
Again, suppose that the concurrent processes P1 - Pn include a very fast process Pf and a very slow process Ps. In the current case (FIG. 6), even if the very fast process Pf is already spinning on checking a sensor for a second barrier, its counter element will not affect the very slow process Ps. This is because, by the time the very fast process Pf can decrease its counter element, the very slow process Ps must already have passed re-initialization of the first array, since that re-initialization is performed by the slowest process before it frees the very fast process from the first barrier. Thus, in the worst case, the slow process may still be spinning on its local sensor to leave the first barrier at this time.
To further reduce memory consumption, it is also possible, for each process, to merge the associated counter element and the sensor element of FIG. 6 into the same cache line, using different byte positions. However, this may increase barrier overhead as the counter and the sensor may be accessed at essentially the same time in the same synchronization.
EXAMPLE
The inventors conducted a series of tests to compare the performance of various barrier designs. In one test, standard EPCC micro-benchmarks were used. As introduced by J.M. Bull in "Measuring synchronization and scheduling overheads in OpenMP," First European Workshop on OpenMP, October 1999, the "overhead" is considered as the difference between the parallel execution time and the ideal time, given perfect scaling of the sequential program.
The parallel execution time may be taken from the following FORTRAN code:

      dl = delaylength
      do k=0, outerreps
         start = getclock()
!$OMP PARALLEL PRIVATE(J)
         do j=1, innerreps
            call delay(dl)
!$OMP BARRIER
         end do
!$OMP END PARALLEL
         time(k) = (getclock() - start)
     &            * 1.0e6 / dble(innerreps)
      end do

The sequential reference time may be measured through the following FORTRAN code:

      dl = delaylength
      do k=0, outerreps
         start = getclock()
         do j=1, innerreps
            call delay(dl)
         end do
         time(k) = (getclock() - start)
     &            * 1.0e6 / dble(innerreps)
      end do

In the test program used for the tests by the inventors, the value of outerreps was set to 50. The array variable time was then used to compute the mean and standard deviation of the 50 measurements.

The hardware system used by the inventors for the tests comprised a 1.1 GHz 32-way POWER4, with AIX 5.1, VisualAge C/C++ 6.0 and XL FORTRAN 8.1 installed.
Referring to FIG. 7, the performance results for the tests are shown. The bottom curve 72 represents the overhead for a barrier combining a distributed counter and a local sensor, in accordance with an embodiment of the invention (FIG. 6). In comparison to the performance results for a barrier with a fetch-and-add design (FIG. 3), represented by the top curve 80, the barrier combining the padded distributed counter and local sensor (FIG. 6) showed a significant improvement in performance. The test results also show that the barrier design of the present invention (curve 72) provides an improvement over the use of a local sensor alone (curve 78), a distributed counter alone (curve 76), and a padded distributed counter alone (curve 74).
While illustrative embodiments of the present invention have been shown and described, various modifications will be apparent to those skilled in the art.
For example, while each element of the first array has been generally referred to as a counter, it will be appreciated that each element of the first array may be a gate, a flag, a switch, or, generically, a state machine having two or more states. Similarly, each element of the second array may be one of a counter, a gate, a flag, a switch, or, generically, a state machine having two or more states (e.g. a hold state and a release state). Various other implementations may be possible.
Therefore, the invention is defined in the following claims.

Claims (31)

1. A method of synchronizing at least two concurrently running processes in a data processing system, comprising:

(a) providing a first array of elements with initialized states, each element of said first array having a concurrently running process associated therewith, each element of said first array being configured to have its state updated by its associated concurrently running process upon completion of a phase by said associated concurrently running process;

(b) providing a second array of elements with initialized hold states, each element of said second array having a concurrently running process associated therewith, each element of said second array being configured to switch, upon receiving an instruction, to a release state;
(c) arranging for monitoring said first array of elements and, upon each element of said first array having had its state updated, arranging for generating said instruction for switching said elements of said second array to said release state.
2. The method recited in claim 1, further comprising:

(d) for each process of said at least two concurrently running processes, configuring said each process such that, upon completion of said phase and upon updating of its associated element of said first array, said each process then waits at its associated element of said second array for said release state.
3. The method recited in claim 2, wherein each element of said first array has a byte size corresponding to the size of a cache line used in said data processing system.
4. The method recited in claim 3, wherein each element of said second array has a byte size corresponding to the size of said cache line used in said data processing system.
5. The method recited in claim 4, further comprising providing each element of said second array locally in relation to its respective, associated process.
6. The method recited in claim 2, further comprising, upon said each element of said first array having had its state updated, and prior to generating said instruction for switching said elements of said second array to said release state, arranging for reinitializing each element of said first array.
7. The method recited in claim 1, wherein in (c), said monitoring of said first array of elements is performed by one of said concurrently running processes.
8. The method recited in claim 1, wherein in (c), said monitoring of said first array of elements is performed by an independent process.
9. The method recited in claim 1, wherein in (a), said initialized state of said each element of said first array is a value.
10. The method recited in claim 1, wherein in (a), said each element of said first array comprises a state machine.
11. The method recited in claim 10, wherein said state machine is one of a counter, a gate, a flag and a sensor.
12. The method recited in claim 1, wherein in (b), said each element of said second array comprises a state machine.
13. The method recited in claim 12, wherein said state machine is one of a counter, a gate, a flag and a sensor.
14. A system for synchronizing at least two concurrently running processes in a data processing system, comprising:
(a) a first array of elements, each element of said first array having a concurrently running process associated therewith, said each element of said first array being configured to have an initial state that may be updated by its associated concurrently running process, upon completion of a phase by said associated concurrently running process;
(b) a second array of elements, each element of said second array having a concurrently running process associated therewith, said each element of said second array being configured to have an initial hold state that may be switched, upon receiving an instruction, to a release state;
(c) a monitoring process for monitoring said first array of elements, said monitoring process being configured to generate said instruction for switching said elements of said second array to said release state, upon each element of said first array having had its state updated.
15. The system recited in claim 14, wherein each element of said first array has a byte size corresponding to the size of a cache line used in said data processing system.
16. The system recited in claim 15, wherein each element of said second array has a byte size corresponding to the size of said cache line used in said data processing system.
17. The system recited in claim 14, wherein each element of said second array is provided locally in relation to its respective, associated process.
18. The system recited in claim 14, wherein said each element of said first array is a state machine.
19. The system recited in claim 14, wherein said each element is one of a counter, a gate, a flag and a switch.
20. The system recited in claim 14, wherein said each element of said second array is a state machine.
21. The system recited in claim 20, wherein said state machine is one of a counter, a gate, a flag, a switch, and a sensor.
22. The system recited in claim 14, wherein said at least two concurrently running processes execute on multiple processors embodied within a single computer.
23. The system recited in claim 14, wherein said at least two concurrently running processes execute on multiple processors distributed across multiple computers connected across a network.
24. A processor for executing a process in order to synchronize said process with at least one other concurrently running process, said processor being operable to:
access an element of a first array of elements, said element of said first array being associated with said process, said element of said first array having an initial state;
update said element of said first array of elements upon completion of a phase by said process;
after said updating, access an element of a second array of elements, said element of said second array being associated with said process, said element of said second array having an initial hold state and being configured to switch, upon receiving an instruction, to a release state, and check said element of said second array for said switch to said release state until detecting said release state.
25. A method for executing a process in order to synchronize said process with at least one other concurrently running process, comprising:
accessing an element of a first array of elements, said element of said first array being associated with said process, said element of said first array having an initial state;
updating said element of said first array of elements upon completion of a phase by said process;
after said updating, accessing an element of a second array of elements, said element of said second array being associated with said process, said element of said second array having an initial hold state and being configured to switch, upon receiving an instruction, to a release state, and checking said element of said second array for said switch to said release state until detecting said release state.
26. A processor for executing a process in order to synchronize at least two concurrently running processes, said processor being operable to:
access a first array of elements, each element of said first array of elements being associated with one of said at least two concurrently running processes and having an initial state;
monitor all elements of said first array of elements until detecting that each of said elements of said first array has been updated by its associated process; and thereafter generate an instruction to switch all elements of a second array of elements from an initial hold state to a release state, each element of said second array of elements being associated with one of said at least two concurrently running processes.
27. The processor recited in claim 26, wherein said process executed thereon is one of said concurrent processes.
28. A computer program product for synchronizing at least two concurrently running processes in a data processing system, the computer program product comprising:
a computer useable medium having computer readable program code means embodied in the medium for synchronizing at least two concurrently running processes, the computer program code means including:
computer readable program code means for providing a first array of elements with initialized states, each element of said first array having a concurrently running process associated therewith, each element of said first array being configured to have its state updated by its associated concurrently running process upon completion of a phase by said associated concurrently running process;
computer readable program code means for providing a second array of elements with initialized hold states, each element of said second array having a concurrently running process associated therewith, each element of said second array being configured to switch, upon receiving an instruction, to a release state;
computer readable program code means for monitoring said first array of elements and, upon each element of said first array having had its state updated, generating said instruction for switching said elements of said second array to said release state.
29. The computer program product recited in claim 28, further comprising:
computer readable program code means for configuring each process of said at least two concurrently running processes, upon completion of said phase and upon updating of its associated element of said first array, to wait at its associated element of said second array for said release state.
30. A system for synchronizing at least two concurrently running processes in a data processing system, comprising:
(a) means for providing a first array of elements with initialized states, each element of said first array having a concurrently running process associated therewith, each element of said first array being configured to have its state updated by its associated concurrently running process upon completion of a phase by said associated concurrently running process;
(b) means for providing a second array of elements with initialized hold states, each element of said second array having a concurrently running process associated therewith, each element of said second array being configured to switch, upon receiving an instruction, to a release state;
(c) means for monitoring said first array of elements and, upon each element of said first array having had its state updated, generating said instruction for switching said elements of said second array to said release state.
31. The system recited in claim 30, further comprising:
(d) means for each process of said at least two concurrently running processes, configuring said each process such that, upon completion of said phase and upon updating of its associated element of said first array, said each process then waits at its associated element of said second array for said release state.
CA002419340A 2003-02-20 2003-02-20 Software barrier synchronization Abandoned CA2419340A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CA002419340A CA2419340A1 (en) 2003-02-20 2003-02-20 Software barrier synchronization
US10/718,293 US7581222B2 (en) 2003-02-20 2003-11-20 Software barrier synchronization

Publications (1)

Publication Number Publication Date
CA2419340A1 true CA2419340A1 (en) 2004-08-20

Family

ID=32855067

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002419340A Abandoned CA2419340A1 (en) 2003-02-20 2003-02-20 Software barrier synchronization

Country Status (2)

Country Link
US (1) US7581222B2 (en)
CA (1) CA2419340A1 (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0640324B2 (en) * 1989-10-26 1994-05-25 インターナショナル・ビジネス・マシーンズ・コーポレーション Multiprocessor system and process synchronization method thereof
US5278975A (en) * 1990-06-14 1994-01-11 Fujitsu Limited Synchronization control system in a parallel computer
US5887167A (en) 1995-11-03 1999-03-23 Apple Computer, Inc. Synchronization mechanism for providing multiple readers and writers access to performance information of an extensible computer system
JP3636871B2 (en) 1997-09-16 2005-04-06 株式会社日立製作所 Parallel processor system

Also Published As

Publication number Publication date
US7581222B2 (en) 2009-08-25
US20040187118A1 (en) 2004-09-23


Legal Events

Date Code Title Description
EEER Examination request
FZDE Dead