US20080155140A1 - System and program for buffering work requests - Google Patents
- Publication number
- US20080155140A1 (application Ser. No. 12/047,238)
- Authority
- US
- United States
- Prior art keywords
- work request
- work
- memory structure
- storing
- business process
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
Abstract
Disclosed is a technique for buffering work requests. It is determined that a work request is about to be placed into an in-memory structure. When the in-memory structure is not capable of storing the work request, a work request ordering identifier for the work request is stored into an overflow structure. When the in-memory structure is capable of storing the work request, a recovery stub is generated for the work request ordering identifier, and the recovery stub is stored into the in-memory structure.
Description
- This application is a continuation application of and claims the benefit of “METHOD, SYSTEM, AND PROGRAM FOR BUFFERING WORK REQUESTS”, having application Ser. No. 10/768,581, filed Jan. 30, 2004, the entire contents of which are incorporated herein by reference.
- 1. Field of the Invention
- The present invention is related to buffering work requests.
- 2. Description of the Related Art
- The term “workflow” may be used to describe tasks and data for business processes. The data, for example, may relate to organizations or people involved in a business process and required input and output information for the business process. A workflow automation product allows creation of a workflow model to manage business processes. A workflow engine is a component in a workflow automation program that understands the tasks of each business process in the workflow and determines whether the business process is ready to move to the next task.
- A publish-subscribe pattern is common in distributed applications: a publisher (e.g., an application program) generates work requests to be processed by one or more subscribers (e.g., business processes), for example, as part of a workflow. The subscribers that receive the work requests are those that are interested in the work requests and that have registered with the publisher to receive the work requests of interest.
- A work request may be described as a business object request because the work request is processed by a business process. For example, a work request may provide data (e.g., employee name and social security number) and a description of what is to be done (e.g., creating, deleting, or updating an entry in a data store).
- The publisher may dispatch work requests to an intermediary application program that stores the work requests in queues, one for each subscriber, and each subscriber retrieves work requests from its associated queue. Because the intermediary application program holds work requests in each queue until they are retrieved, a slow subscriber may leave many work requests pending in its queue. This may lead to the queue running out of entries for storing new work requests for that subscriber.
- That is, one problem with the publisher-subscriber pattern is that the delivery of work requests from the publisher may cause a queue to overflow when a subscriber is slow to retrieve work requests from the queue.
- Thus, there is a need in the art for an improved technique for processing work requests for a system using a publish-subscribe pattern.
- Provided are a method, system, and program for buffering work requests. It is determined that a work request is about to be placed into an in-memory structure. When the in-memory structure is not capable of storing the work request, a work request ordering identifier for the work request is stored into an overflow structure. When the in-memory structure is capable of storing the work request, a recovery stub is generated for the work request ordering identifier, and the recovery stub is stored into the in-memory structure.
- Referring now to the drawings in which like reference numbers represent corresponding parts throughout:
- FIG. 1A illustrates, in a block diagram, a computing environment in accordance with certain implementations of the invention.
- FIG. 1B illustrates, in a block diagram, further details of a computing environment in accordance with certain implementations of the invention.
- FIG. 1C illustrates, in a block diagram, yet further details of a computing environment in accordance with certain implementations of the invention.
- FIG. 2A illustrates logic implemented in a business process in accordance with certain implementations of the invention.
- FIG. 2B illustrates logic implemented for moving work requests in accordance with certain implementations of the invention.
- FIG. 3A illustrates logic implemented when a work request is to be stored in an in-memory structure in accordance with certain implementations of the invention.
- FIG. 3B illustrates logic implemented when a work request is to be stored in an in-memory structure in accordance with certain alternative implementations of the invention.
- FIG. 4A illustrates logic implemented when a work request is removed from an in-memory structure in accordance with certain implementations of the invention.
- FIGS. 4B, 4C, and 4D illustrate structures in accordance with certain implementations of the invention.
- FIG. 5 illustrates logic implemented in a flow control component in accordance with certain implementations of the invention.
- FIG. 6 illustrates logic implemented in a work request reader in accordance with certain implementations of the invention.
- FIG. 7 illustrates logic implemented in a work request reader for processing recovery stubs and work requests in accordance with certain implementations of the invention.
- FIG. 8 illustrates an architecture of a computer system that may be used in accordance with certain implementations of the invention.
- In the following description, reference is made to the accompanying drawings which form a part hereof and which illustrate several implementations of the present invention. It is understood that other implementations may be utilized and that structural and operational changes may be made without departing from the scope of the present invention.
- Implementations of the invention buffer work requests for one or more subscribers that are slow to retrieve work requests from the in-memory structures (e.g., queues) that hold those requests. When an in-memory structure becomes full and work requests continue to be sent to the subscriber, the subscriber is said to be in an overflow state (i.e., the in-memory structure for the subscriber may overflow). Thus, in cases in which it is not possible to send a communication to the publisher to stop sending work requests, or in which some subscribers wish to receive work requests while other subscribers are in an overflow state, each subscriber may be configured such that, even if one subscriber reaches an overflow state, work requests are still delivered without interruption to the subscribers that are not in overflow states. The work requests for the subscribers in the overflow state are buffered and sent to those subscribers when they are able to process more work requests.
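The buffering behavior described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the names (`SubscriberBuffer`, `deliver`, `drain_one`) are hypothetical, and it shows only the key property that one slow subscriber overflows into its own buffer without impacting other subscribers.

```python
from collections import deque

class SubscriberBuffer:
    """Per-subscriber buffering: a bounded in-memory queue plus an
    unbounded overflow structure for requests that arrive while full."""

    def __init__(self, max_limit):
        self.max_limit = max_limit
        self.in_memory = deque()   # bounded in-memory structure
        self.overflow = deque()    # work request overflow structure

    def deliver(self, work_request):
        # Requests go to overflow when the in-memory structure is full
        # OR earlier requests still wait in overflow (preserves ordering).
        if len(self.in_memory) >= self.max_limit or self.overflow:
            self.overflow.append(work_request)
        else:
            self.in_memory.append(work_request)

    def drain_one(self):
        # The subscriber retrieves one request; a buffered request, if
        # any, is moved from overflow into the freed in-memory slot.
        item = self.in_memory.popleft()
        if self.overflow:
            self.in_memory.append(self.overflow.popleft())
        return item

# A slow subscriber overflows without affecting a fast one.
slow, fast = SubscriberBuffer(max_limit=2), SubscriberBuffer(max_limit=2)
for i in range(4):
    slow.deliver(i)
    fast.deliver(i)
    fast.drain_one()   # the fast subscriber keeps up; it never overflows
```

Each subscriber owns its own pair of structures, which is why one subscriber reaching its maximum limit leaves the others untouched.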
- FIG. 1A illustrates, in a block diagram, a computing environment in accordance with certain implementations of the invention. One or more client computers 100a . . . 100n are connected via a network 190 to a server computer 120. For ease of reference, the designations “a” and “n” after reference numbers (e.g., 100a . . . 100n) are used to indicate one or more elements (e.g., client computers). The client computers 100a . . . 100n may comprise any computing device known in the art, such as a server, mainframe, workstation, personal computer, hand-held computer, laptop, telephony device, network appliance, etc. The network 190 may comprise any type of network, such as, for example, a Storage Area Network (SAN), a Local Area Network (LAN), a Wide Area Network (WAN), the Internet, an intranet, etc.
- Each client computer 100a . . . 100n includes system memory 104a . . . 104n, respectively, which may be implemented in volatile and/or non-volatile devices. One or more client applications 110a . . . 110n and client admin applications 112a . . . 112n may execute in the system memory 104a . . . 104n, respectively. The client applications 110a . . . 110n may generate and submit work requests in the form of messages to the server computer 120 for execution. The client admin applications 112a . . . 112n perform administrative functions.
- The server computer 120 includes system memory 122, which may be implemented in volatile and/or non-volatile devices. A data store engine 160 is connected to the server computer 120 and to data store 170.
- One or more work request readers 130, one or more business processes 132, a recovery system 134, one or more structure processors 136, and one or more flow control components 138 execute in the system memory 122. Additionally, one or more server applications 150 execute in system memory 122. One or more in-memory structures 140 (e.g., in-memory queues) may be stored in system memory 122. In certain implementations of the invention, there is one in-memory structure 140 for each business process 132, and one structure processor 136 for each in-memory structure 140. One or more work request overflow structures (“overflow structures”) 184 may also be stored in system memory 122 for each business process 132.
- One or more transport structures 182 (e.g., queues) may be stored in a data store 180 connected to network 190. In certain implementations of the invention, there is one transport structure 182 associated with each business process 132. The transport structure 182 may be, for example, a Message Queue (“MQ”) available from International Business Machines Corporation, a Common Object Request Broker Architecture (CORBA) structure, or a JAVA® Message Service (JMS) structure. In certain implementations of the invention, the transport structure 182 may be persistent.
- In certain implementations of the invention, such as in workflow systems, the client applications 110a . . . 110n may be described as “publishers”, while the business processes 132 may be described as “subscribers”.
- The work requests may be stored in both in-memory structures 140 and in transport structures 182 corresponding to the business processes 132 that are to process the work requests. The work request reader 130 retrieves a work request from a transport structure 182 associated with a business process 132 that is to execute the work request, and forwards the work request to the appropriate business process 132.
- During recovery, recovery stubs 142 are generated in system memory 122 by retrieving some data from log 172. In certain implementations of the invention, the term “recovery stub” 142 may be used to represent a portion of a work request. In certain implementations of the invention, a recovery stub includes a work request key that links together work requests (e.g., a social security number for data about an individual), a work request ordering identifier that indicates the order in which the work request corresponding to the recovery stub was received by the work request reader 130, and a structure identifier that provides access to the complete work request stored in one or more transport structures 182. In certain implementations, the work request ordering identifier is a sequence number assigned to the work request. The log 172 provides information about work requests (e.g., a work request key, a work request ordering identifier, and a structure identifier) and the state of the work requests (e.g., whether a work request was in progress when a system (e.g., server computer 120) failure occurred).
- Although a single data store 170 is illustrated for ease of understanding, data in data store 170 may be stored in multiple data stores at server computer 120 and/or other computers connected to server computer 120.
- The data store 170 may comprise an array of storage devices, such as Direct Access Storage Devices (DASDs), Just a Bunch of Disks (JBOD), Redundant Array of Independent Disks (RAID), a virtualization device, etc.
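The contents of a work request and of a recovery stub, as described above, might be modeled as follows. The field names mirror the description (work request key, ordering identifier, structure identifier, data); the class and function names are hypothetical, chosen for illustration only.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class WorkRequest:
    key: str            # work request key linking related requests (e.g., an SSN)
    ordering_id: int    # sequence number: order received by the work request reader
    structure_id: str   # identifies the transport structure holding the full request
    data: Any           # payload (e.g., employee data and the operation to perform)

@dataclass
class RecoveryStub:
    """A portion of a work request: everything except the data payload."""
    key: str
    ordering_id: int
    structure_id: str

def make_stub(req: WorkRequest) -> RecoveryStub:
    # A stub carries just enough to locate the complete work request
    # in its transport structure later.
    return RecoveryStub(req.key, req.ordering_id, req.structure_id)

req = WorkRequest(key="123-45-6789", ordering_id=7,
                  structure_id="transport-queue-1", data={"op": "update"})
stub = make_stub(req)
```

The stub deliberately omits the data field, which is why it is small enough to hold in the in-memory structure as a placeholder.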
- FIG. 1B illustrates, in a block diagram, further details of a computing environment in accordance with certain implementations of the invention. In certain implementations, one client application 110 (“publisher”), one transport structure 182, one work request reader 130, one in-memory structure 140, one structure processor 136, and one business process 132 (“subscriber”) are associated with each other. In certain alternative implementations, a business process 132 may receive work requests from multiple client applications 110.
- In the illustration of FIG. 1B, the client application 110a produces work requests that are destined for the business process 132. The client application 110a may also communicate with the work request reader 130, for example, for administrative functions. In particular, the client application 110a sends work requests to the server computer 120 by storing the work requests in transport structures 182, where one transport structure 182 corresponds to one business process 132. The work request reader 130 retrieves work requests from the transport structure 182 and stores them in the in-memory structure 140 for the business process 132. If the in-memory structure is full, the work request reader 130 stores the work request in a work request overflow structure 184. The structure processor 136 retrieves work requests from the in-memory structure 140 and forwards the work requests to the business process 132 for processing. Also, as work requests are retrieved from the in-memory structure 140, the flow control component 138 moves work requests from the work request overflow structure 184 into the in-memory structure 140. After completing a work request, a business process 132 removes the work request from the appropriate transport structure 182 and performs other processing to clean up the transport structure 182. Additionally, a flow control component 138 monitors work requests being transferred by the work request reader 130 into the in-memory structure 140 and work requests removed from the in-memory structure 140. The flow control component 138 may assist in controlling the flow of work requests.
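The path a work request takes in this arrangement (publisher → transport structure → work request reader → in-memory structure → structure processor → business process) can be sketched as a simple pipeline. This sketch uses plain Python queues in place of a persistent transport (e.g., MQ or JMS), omits overflow handling and transport cleanup, and its function names are assumptions for illustration.

```python
from collections import deque

transport_structure = deque()   # stands in for a persistent queue (e.g., MQ/JMS)
in_memory_structure = deque()   # in-memory queue for one business process
processed = []                  # stands in for the business process

def publish(work_request):
    """Client application stores the work request in the transport structure."""
    transport_structure.append(work_request)

def work_request_reader():
    """Moves work requests from the transport structure into the in-memory structure."""
    while transport_structure:
        in_memory_structure.append(transport_structure.popleft())

def structure_processor():
    """Forwards work requests from the in-memory structure to the business process."""
    while in_memory_structure:
        processed.append(in_memory_structure.popleft())

for r in ("create-entry", "update-entry", "delete-entry"):
    publish(r)
work_request_reader()
structure_processor()
```

In the patent's design the transport structure retains the complete work request until the business process finishes it; here the transport is drained immediately purely to keep the sketch short.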
- FIG. 1C illustrates, in a block diagram, yet further details of a computing environment in accordance with certain implementations of the invention. In particular, in FIG. 1C, a single client application 110a may send work requests that are processed by a single work request reader 130 for multiple business processes.
- FIG. 2A illustrates logic implemented in a business process 132 in accordance with certain implementations of the invention. Control begins at block 200 with the business process 132 registering with one or more client applications 110a . . . 110n for certain types of work requests. In certain implementations, each work request includes a type field. Then, when a work request is generated by a client application 110a . . . 110n, the type of the work request is determined, the business processes 132 that registered for that type of work request are determined, and the work request is sent, by the client application 110a . . . 110n, to the transport structures 182 for the determined business processes 132. In alternative implementations, work requests and business processes 132 may be associated using other techniques (e.g., all business processes 132 receive all work requests and process the desired ones).
- In block 210, the business process 132 is configured for a maximum number of work requests that may be stored for the business process at any given time; this maximum number is also referred to as a “maximum limit.” In certain implementations, a user, such as a system administrator, sets the maximum limit. In certain implementations, the maximum limit is equivalent to the number of work requests that may be stored in an in-memory structure 140 for the business process 132. In block 220, a blocking type is specified for the in-memory structure 140 for the business process 132. In block 230, other processing may occur.
- In certain implementations, a blocking type may be associated with an in-memory structure 140 for a business process 132. The blocking type is set to a first value (e.g., “blocking”) to indicate that a client application 110a . . . 110n should be blocked from sending additional work requests when a maximum limit is reached for a business process. The blocking type is set to a second value (e.g., “non-blocking”) to indicate that work requests are to be stored in a work request overflow structure 184 for a business process when the maximum limit is reached for that business process.
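Type-based registration and the two blocking-type values might be sketched as follows. The registry keyed by work request type and the routing function are assumptions for illustration; the two enum values simply mirror the “blocking” and “non-blocking” settings described above.

```python
from enum import Enum
from collections import defaultdict

class BlockingType(Enum):
    BLOCKING = "blocking"          # publisher is told to stop when the limit is hit
    NON_BLOCKING = "non-blocking"  # excess requests go to an overflow structure

# Business processes register for certain types of work requests.
registrations = defaultdict(list)   # work request type -> business processes

def register(business_process, work_request_type):
    registrations[work_request_type].append(business_process)

def route(work_request_type, payload, transport_structures):
    # The publisher sends the request to the transport structure of every
    # business process registered for this type of work request.
    for bp in registrations[work_request_type]:
        transport_structures[bp].append((work_request_type, payload))

transports = {"payroll": [], "audit": []}
register("payroll", "employee-update")
register("audit", "employee-update")
register("payroll", "salary-change")
route("employee-update", {"ssn": "123-45-6789"}, transports)
route("salary-change", {"amount": 100}, transports)
```

A type registered by two business processes is duplicated into both of their transport structures, matching the one-transport-structure-per-business-process layout of FIG. 1B.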
- FIG. 2B illustrates logic implemented for moving work requests in accordance with certain implementations of the invention. Control begins in block 250 with a client application (e.g., 110a) generating a work request. In block 260, the client application 110a . . . 110n stores the work request in a transport structure 182 for the associated business process 132. If more than one business process 132 is to process the same work request, then the client application 110a . . . 110n stores the work request in the transport structure 182 for each appropriate business process 132. In block 270, the work request reader 130 retrieves the work request from the transport structure 182 for the associated business process. In block 280, the work request reader 130 stores the work request in an in-memory structure 140 for the associated business process 132.
- FIG. 3A illustrates logic implemented when a work request is to be stored in an in-memory structure 140 in accordance with certain implementations of the invention. Control begins in block 300 with the flow control component 138 “intercepting” a work request transferred by the work request reader 130 to the in-memory structure 140. The term “intercepting” describes the flow control component 138 monitoring and detecting that a work request is being transferred into or out of an in-memory structure 140. The processing of block 300 may occur periodically. In certain implementations, the work request reader 130 registers with the flow control component 138 so that the flow control component 138 can monitor work requests being transferred by the work request reader 130.
- In block 310, the flow control component 138 compares the maximum limit against the number of work requests in the in-memory structure 140. In block 320, if the maximum limit has been reached or work requests are stored in the work request overflow structure 184, processing continues to block 330; otherwise, processing continues to block 340. Thus, a work request is stored in the overflow structure 184 when the in-memory structure 140 is not capable of storing the work request. The in-memory structure 140 is not capable of storing work requests when the maximum limit has been reached or work requests remain in the overflow structure 184. That is, in certain implementations, work requests are not stored in the in-memory structure 140 until all work requests in the work request overflow structure 184 have been moved into the in-memory structure 140.
- In block 330, the flow control component 138 stores a work request ordering identifier into a work request overflow structure 184 for the business process for which the work request was intercepted. In block 340, the work request reader 130 stores the work request in the in-memory structure 140.
- For example, in certain implementations, if the maximum limit is 10 work requests, when the 11th work request is intercepted by the flow control component 138, the flow control component 138 stores the 11th work request in a work request overflow structure 184.
- Thus, in certain implementations, as work requests beyond the maximum limit are sent by one or more client applications 110a . . . 110n to a business process 132, work requests for the business process 132 are stored in a work request overflow structure 184. Thus, if one business process 132 reaches its maximum limit, the other business processes 132 are not impacted.
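The decision of blocks 310-340 can be sketched directly. The block numbers in the comments refer to FIG. 3A as described above; the function name and the choice to store only ordering identifiers in the overflow structure follow that description, while everything else is an illustrative assumption.

```python
from collections import deque

MAX_LIMIT = 3

in_memory = deque()   # holds complete work requests
overflow = deque()    # holds only work request ordering identifiers

def intercept_store(work_request, ordering_id):
    """Blocks 310-340: decide where an intercepted work request goes."""
    # Block 320: overflow if the maximum limit is reached OR the overflow
    # structure is non-empty (nothing enters in-memory until overflow drains,
    # which preserves work request ordering).
    if len(in_memory) >= MAX_LIMIT or overflow:
        overflow.append(ordering_id)      # block 330: store the ordering identifier
    else:
        in_memory.append(work_request)    # block 340: store the work request

for n in range(1, 6):
    intercept_store(f"request-{n}", n)
```

Note the second half of the condition: even if the in-memory structure has a free slot, a new request is diverted to overflow whenever earlier requests are still buffered there, so requests can never overtake one another.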
- FIG. 3B illustrates logic implemented when a work request is to be stored in an in-memory structure 140 in accordance with certain alternative implementations of the invention. Control begins in block 350 with the flow control component 138 “intercepting” a work request transferred by the work request reader 130 to the in-memory structure 140. In block 355, the flow control component 138 compares the maximum limit against the number of work requests in the in-memory structure 140. In block 360, if the maximum limit has been reached or work requests are stored in the work request overflow structure 184, processing continues to block 365; otherwise, processing continues to block 385.
- In block 365, the flow control component determines whether a blocking type (e.g., a flag) is set to non-blocking. If so, processing continues to block 370; otherwise, processing continues to block 375. In block 370, the flow control component 138 stores a work request ordering identifier into a work request overflow structure 184 for the business process for which the work request was intercepted. In block 375, the flow control component 138 notifies the work request reader 130 to notify the client application 110a . . . 110n that sent the intercepted work request to stop sending work requests. From block 375, processing loops back to block 350. In certain implementations, a notification indicator (e.g., a flag) may be set for the business processes. In this case, in block 375, the notification is sent only if the notification indicator is set to indicate that a notification is to be sent.
- In block 385, the work request reader 130 stores the work request in the in-memory structure 140. In block 390, if the flow control component 138 determines that the client application 110a . . . 110n was previously notified to stop delivering work requests, processing continues to block 395; otherwise, processing loops back to block 350. In block 395, the flow control component 138 notifies the work request reader 130 to notify one or more client applications 110a . . . 110n that were previously notified to stop sending work requests to resume sending work requests. Then, processing loops back to block 350.
- Thus, in certain implementations, as work requests beyond the maximum limit set for a business process 132 are received for that business process 132, if the blocking type for the in-memory structure 140 associated with the business process is set to “non-blocking,” work requests are stored in work request overflow structures 184.
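The alternative logic of FIG. 3B, where the blocking type selects between buffering and telling the publisher to stop, might be sketched as follows. The class is hypothetical; `notifications` stands in for the messages the flow control component asks the work request reader to relay, and the block numbers in comments refer to FIG. 3B.

```python
from collections import deque

class BufferedProcess:
    def __init__(self, max_limit, blocking_type):
        self.max_limit = max_limit
        self.blocking_type = blocking_type   # "blocking" or "non-blocking"
        self.in_memory = deque()
        self.overflow = deque()
        self.publisher_stopped = False       # was the client told to stop?
        self.notifications = []              # messages relayed via the reader

    def intercept_store(self, work_request, ordering_id):
        if len(self.in_memory) >= self.max_limit or self.overflow:
            if self.blocking_type == "non-blocking":
                self.overflow.append(ordering_id)     # block 370
            else:
                self.notifications.append("stop")     # block 375
                self.publisher_stopped = True
            return
        self.in_memory.append(work_request)           # block 385
        if self.publisher_stopped:                    # blocks 390-395
            self.notifications.append("start")
            self.publisher_stopped = False

b = BufferedProcess(max_limit=1, blocking_type="blocking")
b.intercept_store("r1", 1)   # stored
b.intercept_store("r2", 2)   # limit reached -> publisher told to stop
b.in_memory.popleft()        # the subscriber catches up
b.intercept_store("r3", 3)   # stored again -> publisher told to resume
```

With `blocking_type="non-blocking"` the same class would instead buffer `r2`'s ordering identifier in the overflow structure and never notify the publisher.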
- FIG. 4A illustrates logic implemented when a work request is removed from an in-memory structure 140 in accordance with certain implementations of the invention. Control begins at block 400 with the flow control component 138 intercepting a work request being removed from an in-memory structure 140. In block 410, if the flow control component 138 determines that there are one or more work request ordering identifiers in a work request overflow structure 184, processing continues to block 420; otherwise, processing loops back to block 400. In block 420, the flow control component 138 creates a recovery stub for a work request ordering identifier in the work request overflow structure 184. In block 430, the flow control component 138 stores the recovery stub 142 in the in-memory structure 140. In block 440, the flow control component 138 removes the work request ordering identifier from the work request overflow structure 184.
- FIGS. 4B, 4C, and 4D illustrate structures in accordance with certain implementations of the invention. FIG. 4B illustrates an in-memory structure 450 for a business process 132. The in-memory structure 450 contains four work requests. Each work request includes a work request key that links together work requests (e.g., a social security number for data about an individual), a work request ordering identifier that indicates the order in which the work request was received by the work request reader 130, a structure identifier that provides access to the work request stored in one or more transport structures 182, and data. In this example, in-memory structure 450 is full. When a fifth work request is received, a work request ordering identifier is stored for the work request in a work request overflow structure 460, illustrated in FIG. 4C.
- FIG. 4D illustrates in-memory structure 450 for the business process 132 after it includes a recovery stub. After a work request has been removed from the in-memory structure 450, a recovery stub 142, generated from the work request ordering identifier in work request overflow structure 460, is stored in the in-memory structure 450. The recovery stub includes a work request key, a work request ordering identifier, and a structure identifier. In certain implementations, the recovery stubs 142 do not include data, while work requests do include data.
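The backfill of FIG. 4A can be sketched as follows. For brevity this sketch assumes each overflow entry already carries the three fields a recovery stub needs (in the patent these are available from the overflow entry and the log); the function name and data shapes are illustrative assumptions, and the comments cite FIG. 4A block numbers.

```python
from collections import deque

in_memory = deque(["req-A", "req-B"])   # complete work requests
# Assumption: each overflow entry carries the fields a recovery stub needs.
overflow = deque([{"key": "123-45-6789", "ordering_id": 3, "structure_id": "tq-1"}])

def intercept_remove():
    """Blocks 400-440: on removal, backfill a stub from the overflow structure."""
    removed = in_memory.popleft()                      # block 400
    if overflow:                                       # block 410
        e = overflow[0]
        stub = ("stub", e["key"], e["ordering_id"], e["structure_id"])
        in_memory.append(stub)                         # blocks 420-430
        overflow.popleft()                             # block 440
    return removed

first = intercept_remove()
```

Freeing one in-memory slot thus pulls exactly one buffered entry forward, so the in-memory structure stays at its limit until the overflow structure is empty.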
- FIG. 5 illustrates logic implemented in a flow control component 138 in accordance with certain implementations of the invention. Control begins in block 500 with the flow control component 138 comparing the maximum limit of each business process 132 against the number of work requests in the respective in-memory structure 140. The processing of block 500 may occur periodically. In block 510, if the maximum limit has been reached for a predetermined number of business processes 132, processing continues to block 520; otherwise, processing continues to block 530. In certain implementations, the predetermined number is equivalent to all of the business processes 132.
- In block 520, the flow control component 138 notifies the work request reader 130 to notify one or more client applications 110a . . . 110n to stop sending work requests. From block 520, processing loops back to block 500. In certain implementations, the work request reader 130 is associated with one or more client applications 110a . . . 110n, and the notification is sent to these client applications 110a . . . 110n. In certain implementations, a notification indicator may be set for the business processes. In this case, in block 520, the notification is sent only if the notification indicator is set to indicate that a notification is to be sent.
- In block 530, if the flow control component 138 determines that any client application 110a . . . 110n was previously notified to stop delivering work requests, processing continues to block 550; otherwise, processing loops back to block 500. In block 550, the flow control component 138 notifies the work request reader 130 to notify one or more client applications 110a . . . 110n that were previously notified to stop sending work requests to resume sending work requests. Then, processing loops back to block 500.
- Thus, in certain implementations, if a maximum limit is reached for each of a predetermined number of business processes 132, one or more client applications 110a . . . 110n are notified to stop sending work requests.
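One periodic pass of the FIG. 5 logic might be sketched as a pure function over the current queue depths. The function name and dictionary shapes are assumptions for illustration; as in the text, the “predetermined number” is taken to be all of the business processes.

```python
def flow_control_check(counts, limits, previously_stopped):
    """Blocks 500-550: decide what, if anything, to tell publishers.

    counts:  business process -> current number of work requests in-memory
    limits:  business process -> maximum limit
    previously_stopped: True if publishers were already told to stop
    Returns the notification to relay ("stop", "start", or None).
    Assumes the predetermined number is ALL business processes.
    """
    all_at_limit = all(counts[bp] >= limits[bp] for bp in counts)  # blocks 500-510
    if all_at_limit:
        return "stop"          # block 520
    if previously_stopped:
        return "start"         # blocks 530-550
    return None

limits = {"bp1": 2, "bp2": 2}
```

A caller would invoke this periodically and hand any non-`None` result to the work request reader to forward to the client admin applications.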
- FIG. 6 illustrates logic implemented in a work request reader 130 in accordance with certain implementations of the invention. Control begins at block 600 with the work request reader 130 receiving a notification from the flow control component 138. In block 610, if the notification is to notify a client application 110a . . . 110n to stop delivering work requests, processing continues to block 620; otherwise, processing continues to block 630. In block 620, the work request reader 130 notifies the client admin application 112a . . . 112n of the client application 110a . . . 110n to stop delivering work requests.
- In block 630, if the notification is to notify a client application 110a . . . 110n to start delivering work requests, processing continues to block 640; otherwise, processing continues to block 650. In block 640, the work request reader 130 notifies the client admin application 112a . . . 112n of the client application 110a . . . 110n to start delivering work requests. In block 650, other processing may occur. For example, if a notification is received that the work request reader 130 is not able to process, error processing may occur.
- Thus, in cases in which a client application 110a . . . 110n has been designed such that the client application 110a . . . 110n cannot be controlled (e.g., throttled) or cannot receive communications from, for example, business processes 132, it is still desirable to control the in-memory structures 140 so that they do not overflow and work requests are not discarded in an overflow state. Therefore, implementations of the invention prevent the in-memory structures 140 from overflowing, and avoid discarding work requests, by allowing work requests received for a full in-memory structure to be stored in a separate work request overflow structure 184. The work requests in the work request overflow structure 184 may be redelivered in proper order back to the in-memory structure 140 to be retrieved by the associated business process.
- FIG. 7 illustrates logic implemented in a work request reader for processing recovery stubs and work requests in accordance with certain implementations of the invention. Control begins at block 700 with the structure processor 136 retrieving a next item from an in-memory structure 140, starting with a first item. In block 710, the structure processor 136 determines whether the item is a recovery stub. If so, processing continues to block 720; otherwise, processing continues to block 730. In block 720, the structure processor 136 converts the recovery stub into a complete work request by retrieving, from a transport structure 182, the complete work request for which the recovery stub was created. In certain implementations, the work request ordering identifier may be used to locate the complete work request in the transport structure 182. In block 730, the structure processor 136 forwards the complete work request to a business process 132. In certain alternative implementations, the structure processor 136 is called by the business process 132 to retrieve a work request.
- IBM, DB2, OS/390, UDB, and Informix are registered trademarks or common law marks of International Business Machines Corporation in the United States and/or other countries. JAVA® is a registered trademark or common law mark of Sun Microsystems.
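The stub-resolution step of FIG. 7 might be sketched as follows. Modeling the transport structure as a dictionary keyed by ordering identifier is an assumption for this sketch (the text notes only that the ordering identifier may be used to locate the complete request); the function name and item shapes are likewise illustrative, and the comments cite FIG. 7 block numbers.

```python
# Assumption: the transport structure retains complete work requests and
# can be looked up by work request ordering identifier.
transport = {3: {"ordering_id": 3, "data": {"op": "update"}}}

in_memory = [
    {"ordering_id": 1, "data": {"op": "create"}},     # complete work request
    {"ordering_id": 3, "data": None, "stub": True},   # recovery stub: no data
]

def process_items(items):
    """Blocks 700-730: resolve stubs, then forward complete work requests."""
    forwarded = []
    for item in items:
        if item.get("stub"):                           # block 710
            item = transport[item["ordering_id"]]      # block 720: fetch full request
        forwarded.append(item)                         # block 730: forward
    return forwarded

result = process_items(in_memory)
```

Because stubs sit in the in-memory structure in arrival order, resolving each stub at retrieval time delivers the complete work requests to the business process in the original order.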
- The described techniques for buffering work requests may be implemented as a method, apparatus or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof. The term “article of manufacture” as used herein refers to code or logic implemented in hardware logic (e.g., an integrated circuit chip, Programmable Gate Array (PGA), Application Specific Integrated Circuit (ASIC), hardware component, etc.) or a computer readable medium, such as magnetic storage medium (e.g., hard disk drives, floppy disks, tape, etc.), optical storage (CD-ROMs, optical disks, etc.), volatile and non-volatile memory devices (e.g., EEPROMs, ROMs, PROMs, RAMs, DRAMs, SRAMs, firmware, programmable logic, etc.). Code in the computer readable medium is accessed and executed by a processor. The code in which preferred embodiments are implemented may further be accessible through a transmission media or from a file server over a network. In such cases, the article of manufacture in which the code is implemented may comprise a transmission media, such as a network transmission line, wireless transmission media, signals propagating through space, radio waves, infrared signals, etc. Thus, the “article of manufacture” may comprise the medium in which the code is embodied. Additionally, the “article of manufacture” may comprise a combination of hardware and software components in which the code is embodied, processed, and executed. Of course, those skilled in the art will recognize that many modifications may be made to this configuration without departing from the scope of the present invention, and that the article of manufacture may comprise any information bearing medium known in the art.
- The logic of FIGS. 2A, 2B, 3A, 3B, 4A, and 5-7 describes specific operations occurring in a particular order. In alternative implementations, certain of the logic operations may be performed in a different order, modified, or removed. Moreover, operations may be added to the above-described logic and still conform to the described implementations. Further, operations described herein may occur sequentially, certain operations may be processed in parallel, or operations described as performed by a single process may be performed by distributed processes.
- The illustrated logic of FIGS. 2A, 2B, 3A, 3B, 4A, and 5-7 may be implemented in software, hardware, programmable and non-programmable gate array logic, or in some combination of hardware, software, or gate array logic.
-
FIG. 8 illustrates an architecture 800 of a computer system that may be used in accordance with certain implementations of the invention. Client computer 100 and/or server computer 120 may implement computer architecture 800. The computer architecture 800 may implement a processor 802 (e.g., a microprocessor), a memory 804 (e.g., a volatile memory device), and storage 810 (e.g., a non-volatile storage area, such as magnetic disk drives, optical disk drives, a tape drive, etc.). An operating system 805 may execute in memory 804. The storage 810 may comprise an internal storage device or an attached or network accessible storage. Computer programs 806 in storage 810 may be loaded into the memory 804 and executed by the processor 802 in a manner known in the art. The architecture further includes a network card 808 to enable communication with a network. An input device 812 is used to provide user input to the processor 802 and may include a keyboard, mouse, pen-stylus, microphone, touch sensitive display screen, or any other activation or input mechanism known in the art. An output device 814 is capable of rendering information from the processor 802 or another component, such as a display monitor, printer, storage, etc. The computer architecture 800 of the computer systems may include fewer components than illustrated, additional components not illustrated herein, or some combination of the components illustrated and additional components.
- The computer architecture 800 may comprise any computing device known in the art, such as a mainframe, server, personal computer, workstation, laptop, handheld computer, telephony device, network appliance, virtualization device, storage controller, etc. Any processor 802 and operating system 805 known in the art may be used.
- The foregoing description of implementations of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto. The above specification, examples, and data provide a complete description of the manufacture and use of the composition of the invention. Since many implementations of the invention can be made without departing from the spirit and scope of the invention, the invention resides in the claims hereinafter appended.
Claims (14)
1. An article of manufacture comprising a computer-readable storage medium including a program for buffering work requests, wherein the program when executed by a computer causes operations to be performed, the operations comprising:
determining that a work request is about to be placed into an in-memory structure for a business process, wherein the work request includes a work request ordering identifier that indicates an order in which the work request was received, a structure identifier that provides access to the work request stored in one or more transport structures, and data;
determining whether the in-memory structure is capable of storing the work request and whether one or more work request ordering identifiers are stored in an overflow structure for the business process;
in response to determining that either the in-memory structure is not capable of storing the work request or one or more work request ordering identifiers are stored in the overflow structure for the business process, storing the work request ordering identifier for the work request into the overflow structure for the business process, wherein work requests for at least one other business process that is not in an overflow state and does not have any work request ordering identifiers stored in another overflow structure for that business process are capable of being stored in an in-memory structure for that business process without interruption; and
in response to determining that the in-memory structure is subsequently capable of storing the work request having the work request ordering identifier that was stored in the overflow structure, storing the work request into the in-memory structure for the business process based on the work request ordering identifier stored in the overflow structure by:
determining that a different work request has been removed from the in-memory structure;
generating a recovery stub for the work request ordering identifier for the work request, wherein the recovery stub includes the work request ordering identifier and the structure identifier that provides access to the work request including data stored in the one or more transport structures; and
storing the recovery stub into the in-memory structure.
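The buffering flow recited in claim 1 can be sketched roughly as follows. This is a minimal sketch under stated assumptions: the `WorkRequestBuffer` class, its dict-shaped work requests, and the fixed `capacity` test are illustrative names standing in for the claimed in-memory structure, overflow structure, and the decision of whether the in-memory structure "is capable of storing the work request".

```python
from collections import deque

class WorkRequestBuffer:
    """Per-business-process buffer with a bounded in-memory structure
    and an overflow structure of work request ordering identifiers."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.in_memory = deque()   # complete work requests or recovery stubs
        self.overflow = deque()    # (ordering_id, structure_id) pairs

    def enqueue(self, request):
        # Overflow when the in-memory structure is full OR when earlier
        # identifiers already wait in the overflow structure, so that
        # work requests keep their original order.
        if len(self.in_memory) >= self.capacity or self.overflow:
            self.overflow.append((request["ordering_id"],
                                  request["structure_id"]))
        else:
            self.in_memory.append(request)

    def on_removal(self):
        # A different work request was removed; if room now exists,
        # re-admit the oldest overflowed request as a small recovery
        # stub that points back at the complete work request held in
        # the transport structure, rather than copying the full data.
        if self.overflow and len(self.in_memory) < self.capacity:
            ordering_id, structure_id = self.overflow.popleft()
            self.in_memory.append({"recovery_stub": True,
                                   "ordering_id": ordering_id,
                                   "structure_id": structure_id})
```

Because one buffer instance exists per business process, an overflow in one buffer does not interrupt storing work requests for other business processes, matching the claim's per-process isolation.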
2. The article of manufacture of claim 1, wherein the in-memory structure is not capable of storing the work request when a maximum limit of work requests has been reached.
3. The article of manufacture of claim 2, wherein there are multiple in-memory structures and wherein the operations further comprise:
determining that the maximum limit has been reached for a predetermined number of the multiple in-memory structures; and
sending one or more notifications to one or more client applications that additional work requests are not to be sent.
4. The article of manufacture of claim 1, wherein the in-memory structure is not capable of storing the work request when one or more work request ordering identifiers reside in the overflow structure.
5. The article of manufacture of claim 1, wherein a blocking type is associated with the in-memory structure and wherein the operations further comprise:
when the in-memory structure is not capable of storing the work request,
if the blocking type is set to non-blocking, storing the work request ordering identifier into the overflow structure; and
if the blocking type is set to blocking, sending a notification that additional work requests are not to be sent.
6. The article of manufacture of claim 1, wherein the work request is sent from a publisher to a subscriber.
7. The article of manufacture of claim 6, wherein the subscriber retrieves the work request from the in-memory structure.
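The flow control of claims 3 and 5 can be sketched as a single placement routine. The `place` function, its return labels, and the `notify` callback are hypothetical names introduced for illustration; the capacity comparison stands in for the "maximum limit" of claim 2.

```python
def place(request, in_memory, overflow, capacity, blocking, notify):
    """Store the work request if the in-memory structure can hold it;
    otherwise either overflow its ordering identifier (non-blocking
    type, claim 5) or notify clients to stop sending (blocking type,
    claims 3 and 5)."""
    if len(in_memory) < capacity and not overflow:
        in_memory.append(request)
        return "stored"
    if blocking:
        notify("additional work requests are not to be sent")
        return "blocked"
    overflow.append(request["ordering_id"])
    return "overflowed"
```

In this sketch, a non-blocking structure silently absorbs bursts via the overflow structure, while a blocking structure pushes back on the sending client applications.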
8. A computer system having logic for buffering work requests, wherein the logic is executed by the computer system, the logic comprising:
determining that a work request is about to be placed into an in-memory structure for a business process, wherein the work request includes a work request ordering identifier that indicates an order in which the work request was received, a structure identifier that provides access to the work request stored in one or more transport structures, and data;
determining whether the in-memory structure is capable of storing the work request and whether one or more work request ordering identifiers are stored in an overflow structure for the business process;
in response to determining that either the in-memory structure is not capable of storing the work request or one or more work request ordering identifiers are stored in the overflow structure for the business process, storing the work request ordering identifier for the work request into the overflow structure for the business process, wherein work requests for at least one other business process that is not in an overflow state and does not have any work request ordering identifiers stored in another overflow structure for that business process are capable of being stored in an in-memory structure for that business process without interruption; and
in response to determining that the in-memory structure is subsequently capable of storing the work request having the work request ordering identifier that was stored in the overflow structure, storing the work request into the in-memory structure for the business process based on the work request ordering identifier stored in the overflow structure by:
determining that a different work request has been removed from the in-memory structure;
generating a recovery stub for the work request ordering identifier for the work request, wherein the recovery stub includes the work request ordering identifier and the structure identifier that provides access to the work request including data stored in the one or more transport structures; and
storing the recovery stub into the in-memory structure.
9. The computer system of claim 8, wherein the in-memory structure is not capable of storing the work request when a maximum limit of work requests has been reached.
10. The computer system of claim 9, wherein there are multiple in-memory structures and further comprising:
determining that the maximum limit has been reached for a predetermined number of the multiple in-memory structures; and
sending one or more notifications to one or more client applications that additional work requests are not to be sent.
11. The computer system of claim 8, wherein the in-memory structure is not capable of storing the work request when one or more work request ordering identifiers reside in the overflow structure.
12. The computer system of claim 8, wherein a blocking type is associated with the in-memory structure and further comprising:
when the in-memory structure is not capable of storing the work request,
if the blocking type is set to non-blocking, storing the work request ordering identifier into the overflow structure; and
if the blocking type is set to blocking, sending a notification that additional work requests are not to be sent.
13. The computer system of claim 8, wherein the work request is sent from a publisher to a subscriber.
14. The computer system of claim 13, wherein the subscriber retrieves the work request from the in-memory structure.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/047,238 US20080155140A1 (en) | 2004-01-30 | 2008-03-12 | System and program for buffering work requests |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/768,581 US7366801B2 (en) | 2004-01-30 | 2004-01-30 | Method for buffering work requests |
US12/047,238 US20080155140A1 (en) | 2004-01-30 | 2008-03-12 | System and program for buffering work requests |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/768,581 Continuation US7366801B2 (en) | 2004-01-30 | 2004-01-30 | Method for buffering work requests |
Publications (1)
Publication Number | Publication Date |
---|---|
US20080155140A1 true US20080155140A1 (en) | 2008-06-26 |
Family
ID=34807912
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/768,581 Expired - Fee Related US7366801B2 (en) | 2004-01-30 | 2004-01-30 | Method for buffering work requests |
US12/047,238 Abandoned US20080155140A1 (en) | 2004-01-30 | 2008-03-12 | System and program for buffering work requests |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/768,581 Expired - Fee Related US7366801B2 (en) | 2004-01-30 | 2004-01-30 | Method for buffering work requests |
Country Status (1)
Country | Link |
---|---|
US (2) | US7366801B2 (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
BRPI0510079A (en) * | 2004-04-23 | 2007-10-16 | Amgen Inc | isolated antibody or antigen binding region thereof, hybridoma cell, transfectoma cell, monoclonal antibody or antigen binding region thereof, transgenic non-human animal, method for producing an antibody or antigen binding region thereof, composition use of antibody or antigen binding region, isolated polypeptide, immunoassay, and method for identifying a compound |
US8122201B1 (en) * | 2004-09-21 | 2012-02-21 | Emc Corporation | Backup work request processing by accessing a work request of a data record stored in global memory |
JP4818820B2 (en) * | 2006-06-07 | 2011-11-16 | ルネサスエレクトロニクス株式会社 | Bus system, bus slave and bus control method |
US9244741B2 (en) * | 2011-04-02 | 2016-01-26 | Open Invention Network, Llc | System and method for service mobility |
US9753818B2 (en) | 2014-09-19 | 2017-09-05 | Splunk Inc. | Data forwarding using multiple data pipelines |
US9660930B2 (en) | 2014-03-17 | 2017-05-23 | Splunk Inc. | Dynamic data server nodes |
US9838346B2 (en) * | 2014-03-17 | 2017-12-05 | Splunk Inc. | Alerting on dual-queue systems |
US9836358B2 (en) * | 2014-03-17 | 2017-12-05 | Splunk Inc. | Ephemeral remote data store for dual-queue systems |
US9838467B2 (en) * | 2014-03-17 | 2017-12-05 | Splunk Inc. | Dynamically instantiating dual-queue systems |
WO2019030698A1 (en) * | 2017-08-08 | 2019-02-14 | Perry + Currier Inc. | Method, system and apparatus for processing database updates |
Citations (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4648031A (en) * | 1982-06-21 | 1987-03-03 | International Business Machines Corporation | Method and apparatus for restarting a computing system |
US4703481A (en) * | 1985-08-16 | 1987-10-27 | Hewlett-Packard Company | Method and apparatus for fault recovery within a computing system |
US4878167A (en) * | 1986-06-30 | 1989-10-31 | International Business Machines Corporation | Method for managing reuse of hard log space by mapping log data during state changes and discarding the log data |
US5410672A (en) * | 1990-08-31 | 1995-04-25 | Texas Instruments Incorporated | Apparatus and method for the handling of banded frame buffer overflows |
US5440691A (en) * | 1992-02-27 | 1995-08-08 | Digital Equipment Corporation, Pat. Law Group | System for minimizing underflowing transmit buffer and overflowing receive buffer by giving highest priority for storage device access |
US5566337A (en) * | 1994-05-13 | 1996-10-15 | Apple Computer, Inc. | Method and apparatus for distributing events in an operating system |
US5692156A (en) * | 1995-07-28 | 1997-11-25 | International Business Machines Corp. | Computer program product for overflow queue processing |
US5712971A (en) * | 1995-12-11 | 1998-01-27 | Ab Initio Software Corporation | Methods and systems for reconstructing the state of a computation |
US5870605A (en) * | 1996-01-18 | 1999-02-09 | Sun Microsystems, Inc. | Middleware for enterprise information distribution |
US5938775A (en) * | 1997-05-23 | 1999-08-17 | At & T Corp. | Distributed recovery with κ-optimistic logging |
US6014673A (en) * | 1996-12-05 | 2000-01-11 | Hewlett-Packard Company | Simultaneous use of database and durable store in work flow and process flow systems |
US6044419A (en) * | 1997-09-30 | 2000-03-28 | Intel Corporation | Memory handling system that backfills dual-port buffer from overflow buffer when dual-port buffer is no longer full |
US6070202A (en) * | 1998-05-11 | 2000-05-30 | Motorola, Inc. | Reallocation of pools of fixed size buffers based on metrics collected for maximum number of concurrent requests for each distinct memory size |
US6182086B1 (en) * | 1998-03-02 | 2001-01-30 | Microsoft Corporation | Client-server computer system with application recovery of server applications and client applications |
US6285601B1 (en) * | 2000-06-28 | 2001-09-04 | Advanced Micro Devices, Inc. | Method and apparatus for multi-level buffer thresholds for higher efficiency data transfers |
US6292856B1 (en) * | 1999-01-29 | 2001-09-18 | International Business Machines Corporation | System and method for application influence of I/O service order post I/O request |
US6308237B1 (en) * | 1998-10-19 | 2001-10-23 | Advanced Micro Devices, Inc. | Method and system for improved data transmission in accelerated graphics port systems |
US6321234B1 (en) * | 1996-09-18 | 2001-11-20 | Sybase, Inc. | Database server system with improved methods for logging transactions |
US6336119B1 (en) * | 1997-11-20 | 2002-01-01 | International Business Machines Corporation | Method and system for applying cluster-based group multicast to content-based publish-subscribe system |
US6351780B1 (en) * | 1994-11-21 | 2002-02-26 | Cirrus Logic, Inc. | Network controller using held data frame monitor and decision logic for automatically engaging DMA data transfer when buffer overflow is anticipated |
US20020161859A1 (en) * | 2001-02-20 | 2002-10-31 | Willcox William J. | Workflow engine and system |
US6493826B1 (en) * | 1993-09-02 | 2002-12-10 | International Business Machines Corporation | Method and system for fault tolerant transaction-oriented data processing system |
US20020194244A1 (en) * | 2001-06-01 | 2002-12-19 | Joan Raventos | System and method for enabling transaction-based service utilizing non-transactional resources |
US20040215998A1 (en) * | 2003-04-10 | 2004-10-28 | International Business Machines Corporation | Recovery from failures within data processing systems |
US6839817B2 (en) * | 2002-04-24 | 2005-01-04 | International Business Machines Corporation | Priority management of a disk array |
US6898609B2 (en) * | 2002-05-10 | 2005-05-24 | Douglas W. Kerwin | Database scattering system |
US20050172288A1 (en) * | 2004-01-30 | 2005-08-04 | Pratima Ahuja | Method, system, and program for system recovery |
US20050171789A1 (en) * | 2004-01-30 | 2005-08-04 | Ramani Mathrubutham | Method, system, and program for facilitating flow control |
US6970921B1 (en) * | 2001-07-27 | 2005-11-29 | 3Com Corporation | Network interface supporting virtual paths for quality of service |
US20060004649A1 (en) * | 2004-04-16 | 2006-01-05 | Narinder Singh | Method and system for a failure recovery framework for interfacing with network-based auctions |
US7017020B2 (en) * | 1999-07-16 | 2006-03-21 | Broadcom Corporation | Apparatus and method for optimizing access to memory |
US7065537B2 (en) * | 2000-06-07 | 2006-06-20 | Transact In Memory, Inc. | Method and system for highly-parallel logging and recovery operation in main-memory transaction processing systems |
US7130957B2 (en) * | 2004-02-10 | 2006-10-31 | Sun Microsystems, Inc. | Storage system structure for storing relational cache metadata |
US7210001B2 (en) * | 1999-03-03 | 2007-04-24 | Adaptec, Inc. | Methods of and apparatus for efficient buffer cache utilization |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000332792A (en) | 1999-05-24 | 2000-11-30 | Nec Corp | Packet discard avoidance system |
GB2354913B (en) | 1999-09-28 | 2003-10-08 | Ibm | Publish/subscribe data processing with publication points for customised message processing |
- 2004-01-30: US US10/768,581 patent/US7366801B2/en not_active Expired - Fee Related
- 2008-03-12: US US12/047,238 patent/US20080155140A1/en not_active Abandoned
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060277319A1 (en) * | 2005-06-03 | 2006-12-07 | Microsoft Corporation | Optimizing message transmission and delivery in a publisher-subscriber model |
US8028085B2 (en) * | 2005-06-03 | 2011-09-27 | Microsoft Corporation | Optimizing message transmission and delivery in a publisher-subscriber model |
Also Published As
Publication number | Publication date |
---|---|
US20050172054A1 (en) | 2005-08-04 |
US7366801B2 (en) | 2008-04-29 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MATHRUBUTHAM, RAMANI;SATHYE, ADWAIT B.;ZOU, CHENDONG;REEL/FRAME:021089/0334;SIGNING DATES FROM 20040107 TO 20040120 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE |