US20100106697A1 - System for accessing shared data using multiple application servers - Google Patents
- Publication number
- US20100106697A1 (application US 12/571,496)
- Authority: US (United States)
- Prior art keywords
- mode
- centralized
- application server
- distributed
- control unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
- G06F15/161—Computing infrastructure, e.g. computer clusters, blade chassis or hardware partitioning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
- G06F15/163—Interprocessor communication
- G06F15/167—Interprocessor communication using a common memory, e.g. mailbox
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/17—Details of further file system functions
- G06F16/176—Support for shared access to files; File sharing support
- G06F16/1767—Concurrency control, e.g. optimistic or pessimistic approaches
- G06F16/1774—Locking methods, e.g. locking methods for file systems allowing shared and concurrent access to files
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/23—Updating
- G06F16/2308—Concurrency control
- G06F16/2336—Pessimistic concurrency control approaches, e.g. locking or multiple versions without time stamps
- G06F16/2343—Locking methods, e.g. distributed locking or locking implementation details
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/52—Program synchronisation; Mutual exclusion, e.g. by means of semaphores
Definitions
- This invention relates generally to a system for accessing shared data using multiple application servers. More particularly, this invention relates to a method, system, and computer program for use in accessing shared data using multiple application servers.
- In a system in which each application server caches the result of a read of the database, lock control must be performed among the multiple application servers in order to prevent a read of cached data that is inconsistent with the database.
- Examples of a lock control method include the distributed-lock method in which each application server controls a lock individually, and the centralized-lock method in which a lock server or the like centrally controls a lock.
- Hereafter, lock control performed using the distributed-lock method will be referred to as "cache mode," and lock control performed using the centralized-lock method will be referred to as "database mode."
- When an application server reads the database in a system using cache mode, the application server acquires a locally controlled read lock before reading the database.
- When an application server updates the database in a system using cache mode, the application server acquires an exclusive lock controlled by each of the other application servers before updating the database.
- In a system using database mode, the application server acquires a read lock or an exclusive lock controlled by a lock server before reading or updating the database.
- In cache mode, the latency incurred when acquiring a read lock is short.
- To update the database, however, an exclusive lock must be acquired from each of the multiple application servers, which complicates the process.
- In database mode, it is sufficient to acquire a single exclusive lock from the lock server, which simplifies the process.
- However, the latency incurred when acquiring a read lock is long. Therefore, cache mode is preferably used in a system realizing an application where reads of the database occur frequently, and database mode is preferably used in a system realizing an application where updates of the database occur frequently.
- In many systems, a read of the database occurs more frequently than an update during daytime hours.
- A batch update of the database is then performed during the nighttime, when the database is used less frequently.
- If cache mode is used in such a system, operation efficiency is increased during the daytime, when reads of the database predominate.
- However, operation efficiency is reduced during the nighttime, when the batch update is performed.
- Conversely, if database mode is used in such a system, operation efficiency is increased during the nighttime batch update but reduced during the daytime, when reads of the database predominate. Therefore, it is difficult to maintain high operation efficiency throughout the day in a system realizing an application where updates of the database are concentrated in a particular time period.
- One aspect of the present invention provides a system including a plurality of application servers for accessing shared data and a centralized control unit for centrally controlling a lock applied to the shared data by each of the application servers.
- Each of the application servers includes a distributed control unit for controlling a lock applied to the shared data by the application server, and a selection unit for selecting either a distributed mode, in which a lock is acquired from the distributed control unit, or a centralized mode, in which a lock is acquired from the centralized control unit.
- Another aspect of the present invention provides an application server for accessing shared data in a system, wherein the system includes a plurality of application servers and a centralized control unit for centrally controlling a lock applied to the shared data by each of the application servers.
- The application server includes a distributed control unit for controlling a lock applied to the shared data by the application server, and a selection unit for selecting either a distributed mode, in which a lock is acquired from the distributed control unit, or a centralized mode, in which a lock is acquired from the centralized control unit.
- A further aspect of the present invention provides a method for causing a computer to function as an application server for accessing shared data in a system that includes a plurality of application servers and a centralized control unit for centrally controlling a lock applied to the shared data by each of the application servers.
- The method includes the steps of: causing the computer to function as a distributed control unit for controlling a lock applied to the shared data by a corresponding application server; and causing the computer to function as a selection unit for selecting either distributed mode, in which a lock is acquired from the distributed control unit, or centralized mode, in which a lock is acquired from the centralized control unit.
- Yet another aspect of the present invention provides an article of manufacture tangibly embodying computer-readable instructions which, when implemented, cause a computer to carry out the steps of the above method.
- FIG. 1 shows a configuration of an information processing system 10 according to an embodiment of the present invention.
- FIG. 2 shows a configuration of each of multiple application servers 30.
- FIG. 3 shows an example of a schema defining the data structure of shared data (ITEM table).
- FIG. 4 shows an example of a read query for reading a value from the ITEM table shown in FIG. 3.
- FIG. 5 shows an example of data cached by a cache unit 56.
- FIG. 6 shows an example of modes selected by a selection unit 60.
- FIG. 7 shows an example of change conditions for changing the mode from one to another.
- FIG. 8 shows an example of a table indicating whether a read of the cache is permitted or prohibited in each mode, whether a read of the database is permitted or prohibited in each mode, and whether an update of the database is permitted or prohibited in each mode.
- FIG. 9 shows an example of the flows of processes performed by one application server 30 (A1) and processes performed by the other multiple application servers 30 (A2 to An) in the information processing system 10.
- FIG. 10 shows the flow of processes performed when determining the mode of an application server 30 newly added to the information processing system 10.
- FIG. 11 shows an example hardware configuration of a computer 1900 according to this embodiment.
- The information processing system 10 includes a database server 20, multiple application servers 30, and a centralized control unit 40.
- The database server 20 stores the shared data.
- The shared data is a table included in a database.
- The multiple application servers 30 each execute an application program. Each application server 30 processes information written in an application program. Also, each application server 30 accesses the shared data stored in the database server 20 via a network in accordance with the description of the application program. In other words, each application server 30 reads and updates the shared data.
- The centralized control unit 40 centrally controls locks applied to the shared data by the application servers 30.
- The centralized control unit 40 controls a lock with respect to each record of the shared data.
- If the centralized control unit 40 receives a request from one application server 30 to acquire a read lock on a record, it permits the application server 30 to acquire the read lock, provided that none of the other application servers 30 has acquired an exclusive lock on that record. Also, if the centralized control unit 40 receives a request from one application server 30 to acquire an exclusive lock on a record, it permits the application server 30 to acquire the exclusive lock, provided that none of the other application servers 30 has acquired a read lock or an exclusive lock on that record. Thus, the application servers 30 are allowed to read and update the shared data without causing data inconsistency among one another.
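The grant rules just described amount to per-record reader-writer locking. A minimal sketch follows; the class and method names are illustrative, not taken from the patent.

```python
class CentralizedControlUnit:
    """Per-record lock rules: a read lock is granted unless another server
    holds the exclusive lock; an exclusive lock is granted only if no other
    server holds a read or exclusive lock on the record."""
    def __init__(self):
        self.readers = {}  # record id -> set of servers holding a read lock
        self.writer = {}   # record id -> server holding the exclusive lock

    def acquire_read(self, record, server):
        holder = self.writer.get(record)
        if holder is not None and holder != server:
            return False
        self.readers.setdefault(record, set()).add(server)
        return True

    def acquire_exclusive(self, record, server):
        others_read = self.readers.get(record, set()) - {server}
        holder = self.writer.get(record)
        if others_read or (holder is not None and holder != server):
            return False
        self.writer[record] = server
        return True


ccu = CentralizedControlUnit()
print(ccu.acquire_read("item-1", "A1"))       # True
print(ccu.acquire_exclusive("item-1", "A2"))  # False: A1 holds a read lock
print(ccu.acquire_exclusive("item-2", "A2"))  # True
print(ccu.acquire_read("item-2", "A1"))       # False: A2 holds the exclusive lock
```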
- The database server 20 and the centralized control unit 40 may be controlled by an identical system.
- FIG. 2 shows a configuration of each application server 30.
- Each application server 30 includes an execution unit 52, an access control unit 54, a cache unit 56, and a distributed control unit 58.
- Each application server 30 thus configured is realized when a computer executes a program.
- The execution unit 52 processes information written in an application program.
- The execution unit 52 performs a process corresponding to, for example, a given request and sends back the result of the process as a response. Also, if the execution unit 52 performs a process for reading or updating the shared data, it issues a read request or an update request to the database server 20 via the access control unit 54.
- The execution unit 52 may issue a read request or an update request written in, for example, SQL (Structured Query Language).
- The access control unit 54 sends the request for reading or updating the shared data issued by the execution unit 52 to the database server 20 via a network. Subsequently, the access control unit 54 acquires the result of a process corresponding to the sent request from the database server 20 and sends the process result back to the execution unit 52.
- When the access control unit 54 starts a transaction to access the database server 20, it acquires a lock from the centralized control unit 40 or the distributed control unit 58 via a selection unit 60. More specifically, if the access control unit 54 starts a transaction including an update request (hereafter referred to as an "update transaction"), it acquires an exclusive lock. If the access control unit 54 starts a transaction not including an update request (hereafter referred to as a "read transaction"), it acquires a read lock. If the access control unit 54 cannot acquire a lock, it does not access the shared data.
- A transaction here refers to, for example, a set of inseparable multiple processes performed between the application server 30 and the database server 20. If the database server 20 is, for example, an SQL server, a transaction refers to a series of processes from "begin" to "commit" or "rollback."
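As a concrete illustration of such a begin-to-commit/rollback unit, here is a hedged sketch using Python's sqlite3 module in place of the unspecified SQL server; the table and values are invented for the example.

```python
import sqlite3

# isolation_level=None puts sqlite3 in autocommit mode, so we can issue
# BEGIN/COMMIT/ROLLBACK explicitly, mirroring the description above.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE ITEM (ID INTEGER PRIMARY KEY, QUANTITY INTEGER)")
conn.execute("INSERT INTO ITEM VALUES (1, 10)")

# An update transaction: everything between BEGIN and COMMIT is one
# inseparable unit; on error, ROLLBACK restores the prior state.
try:
    conn.execute("BEGIN")
    conn.execute("UPDATE ITEM SET QUANTITY = QUANTITY - 3 WHERE ID = 1")
    conn.execute("COMMIT")
except sqlite3.Error:
    conn.execute("ROLLBACK")

print(conn.execute("SELECT QUANTITY FROM ITEM WHERE ID = 1").fetchone()[0])  # 7
```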
- The cache unit 56 caches the shared data read by the access control unit 54. After a transaction is completed, the cache unit 56 may invalidate the cached shared data.
- The distributed control unit 58 controls a lock applied to the shared data by the application server 30.
- The distributed control unit 58 controls a lock with respect to each record of the shared data.
- If the distributed control unit 58 receives a request from the access control unit 54 to acquire a read lock on a record, it permits the access control unit 54 to acquire the read lock, provided that none of the access control units 54 of the other application servers 30 has acquired an exclusive lock on that record. Also, if the distributed control unit 58 receives a request from the access control unit 54 to acquire an exclusive lock on a record, it makes inquiries to the other application servers 30. If none of the other application servers 30 has acquired a read lock or an exclusive lock on that record, the distributed control unit 58 permits the access control unit 54 to acquire the exclusive lock. In this way, the distributed control unit 58 allows the application server 30 to read and update the shared data without causing data inconsistency between the application server 30 and the other application servers 30.
- The selection unit 60 selects any one of distributed mode, in which a lock is acquired from the distributed control unit 58, and centralized mode, in which a lock is acquired from the centralized control unit 40. If the selection unit 60 receives a request for acquiring a lock from the access control unit 54 in distributed mode, it provides the request to the distributed control unit 58 so that the access control unit 54 acquires the lock from the distributed control unit 58. Also, if the selection unit 60 receives a request for acquiring a lock from the access control unit 54 in centralized mode, it provides the request to the centralized control unit 40 via a network so that the access control unit 54 acquires the lock from the centralized control unit 40.
- The selection unit 60 communicates with the respective selection units 60 of the other application servers 30. If at least one of the other application servers 30 intends to update the shared data, the selection unit 60 changes the mode to centralized mode. If none of the other application servers 30 is updating the shared data, the selection unit 60 changes the mode to distributed mode. In distributed mode, the access control unit 54 permits a read of the shared data and prohibits an update thereof. In centralized mode, the access control unit 54 permits both a read of the shared data and an update thereof.
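The routing role of the selection unit can be sketched as follows. This is an assumed shape, not the patent's implementation; the two control units are stubs that merely record which one received the request.

```python
from enum import Enum


class Mode(Enum):
    DISTRIBUTED = "distributed"
    CENTRALIZED = "centralized"


class RecordingControlUnit:
    """Stub control unit that records lock requests and names itself."""
    def __init__(self, name):
        self.name = name
        self.requests = []

    def acquire(self, record, kind):
        self.requests.append((record, kind))
        return self.name


class SelectionUnit:
    """Routes lock requests to the local distributed control unit in
    distributed mode and to the centralized control unit otherwise."""
    def __init__(self, distributed_cu, centralized_cu):
        self.mode = Mode.DISTRIBUTED
        self.distributed_cu = distributed_cu
        self.centralized_cu = centralized_cu

    def acquire(self, record, kind):
        target = (self.distributed_cu if self.mode is Mode.DISTRIBUTED
                  else self.centralized_cu)
        return target.acquire(record, kind)


local = RecordingControlUnit("distributed")
central = RecordingControlUnit("centralized")
sel = SelectionUnit(local, central)
print(sel.acquire("item-1", "read"))       # "distributed"
sel.mode = Mode.CENTRALIZED
print(sel.acquire("item-1", "exclusive"))  # "centralized"
```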
- When the application server 30 updates the shared data, it acquires an exclusive lock from the centralized control unit 40. This prevents the other application servers 30 from interacting with the database server 20 during the update. Also, when the application server 30 reads the shared data, it acquires a read lock from the distributed control unit 58, so the latency incurred when acquiring the read lock is reduced. That is, the application servers 30 according to this embodiment can perform distributed lock control efficiently.
- FIG. 3 shows an example of a schema defining the data structure of the shared data (ITEM table).
- FIG. 4 shows an example of a read query for reading a value from the ITEM table shown in FIG. 3.
- FIG. 5 shows an example of data cached by the cache unit 56.
- The cache unit 56 stores a result obtained by reading the shared data stored in the database server 20, for example using a read query written in SQL.
- For example, assume that the database server 20 is storing, as the shared data, an ITEM table as defined by the schema of FIG. 3.
- When the access control unit 54 issues the read query shown in FIG. 4 to the database server 20, it acquires a query result as shown in FIG. 5 from the database server 20. The cache unit 56 then caches the query result acquired by the access control unit 54.
- If the access control unit 54 again receives a request from the execution unit 52 for reading all or part of the data shown in FIG. 5, it acquires the above-mentioned query result as the shared data from the cache unit 56, rather than issuing a read query to the database server 20, and sends the query result back to the execution unit 52.
- Thus, the access control unit 54 reduces the load imposed on the database server 20 and reduces the latency incurred when reading the shared data.
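A minimal sketch of this behavior, keying the cache on the query text (an assumption; the patent does not specify the cache key):

```python
class CacheUnit:
    """Caches query results so that repeated reads avoid the database."""
    def __init__(self, database):
        self.database = database   # callable: query string -> rows
        self.cache = {}

    def read(self, query):
        if query not in self.cache:
            self.cache[query] = self.database(query)  # one round trip to the DB
        return self.cache[query]   # later identical reads are served locally

    def invalidate(self):
        self.cache.clear()         # e.g. after a transaction completes


calls = []
def fake_db(query):
    calls.append(query)            # count round trips to the "database server"
    return [("item1", 10)]

cache = CacheUnit(fake_db)
q = "SELECT NAME, QUANTITY FROM ITEM WHERE ID = 1"
cache.read(q)
cache.read(q)
print(len(calls))  # 1: the second read hit the cache
```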
- FIG. 6 shows an example of modes selected by the selection unit 60.
- FIG. 7 shows an example of change conditions for changing from one mode to another. As shown in FIGS. 6 and 7, the selection unit 60 selects any one of: distributed mode; pre-centralized mode, used when changing from distributed mode to centralized mode; centralized read mode, which is one type of centralized mode; centralized update mode, which is another type of centralized mode; and pre-distributed mode, used when changing from centralized mode to distributed mode.
- If the application server 30 updates the shared data in distributed mode, the selection unit 60 changes the mode from distributed mode toward centralized mode. For example, if the application server 30 starts an update transaction, the selection unit 60 changes the mode from distributed mode to pre-centralized mode. Also, if at least one of the other application servers 30 is placed in pre-centralized mode while the application server 30 is placed in distributed mode, the selection unit 60 changes the mode from distributed mode to pre-centralized mode. That is, if any one of the application servers 30 updates the shared data (for example, if any one of the application servers 30 starts an update transaction), the modes of all the application servers 30 are changed from distributed mode to pre-centralized mode.
- If, while the application server 30 is placed in pre-centralized mode, each of the other application servers 30 is placed in any one of pre-centralized mode, centralized read mode, and centralized update mode, the selection unit 60 changes the mode from pre-centralized mode to centralized read mode, which is a type of centralized mode. That is, each application server 30 may change the mode to centralized mode (centralized read mode) once the modes of the other application servers 30 have been changed from distributed mode to pre-centralized mode.
- The application servers 30 may change the modes from pre-centralized mode to centralized read mode in synchronization with one another.
- If the application server 30 updates the shared data in centralized read mode, the selection unit 60 changes the mode from centralized read mode to centralized update mode. For example, if the application server 30 performs an update transaction, the selection unit 60 changes the mode from centralized read mode to centralized update mode.
- When the application server 30 finishes updating the shared data, the selection unit 60 changes the mode from centralized update mode to centralized read mode. For example, if the application server 30 finishes all update transactions, the selection unit 60 changes the mode from centralized update mode to centralized read mode.
- If none of the application servers 30 updates the shared data in centralized read mode, the selection unit 60 changes the mode from centralized read mode to pre-distributed mode. For example, if a given period has elapsed after the change of the mode to centralized read mode, the selection unit 60 may change the mode from centralized read mode to pre-distributed mode. That is, if none of the application servers 30 updates the shared data in centralized mode, the application servers 30 may change their modes to pre-distributed mode.
- If each of the other application servers 30 is placed in either pre-distributed mode or distributed mode, the selection unit 60 changes the mode from pre-distributed mode to distributed mode. That is, each application server 30 may change the mode to distributed mode once the other application servers 30 have changed their modes from centralized mode to pre-distributed mode.
- the application servers 30 may change the modes from pre-distributed mode to distributed mode in synchronization with one another.
- The application servers 30 may be configured so that, if one application server 30 updates the shared data in pre-distributed mode, the selection unit 60 thereof changes the mode from pre-distributed mode to centralized read mode, provided that each of the other application servers 30 is placed in any one of pre-distributed mode, centralized read mode, and centralized update mode. In this case, if the application server 30 updates the shared data in pre-distributed mode (for example, if the application server 30 starts an update transaction), the application server 30 is allowed to change the mode from pre-distributed mode to centralized update mode via centralized read mode.
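The transitions of FIGS. 6 and 7 described above form a small state machine. The sketch below encodes them as a transition table; the event names are invented labels for the prose conditions, not terms from the patent.

```python
# (current mode, event) -> next mode, per the transitions described above.
TRANSITIONS = {
    ("distributed", "local_update_starts"): "pre-centralized",
    ("distributed", "peer_in_pre-centralized"): "pre-centralized",
    ("pre-centralized", "all_peers_left_distributed"): "centralized-read",
    ("centralized-read", "local_update_starts"): "centralized-update",
    ("centralized-update", "all_updates_finished"): "centralized-read",
    ("centralized-read", "idle_period_elapsed"): "pre-distributed",
    ("pre-distributed", "all_peers_left_centralized"): "distributed",
    ("pre-distributed", "local_update_starts"): "centralized-read",
}


def next_mode(mode, event):
    # Events with no matching transition leave the mode unchanged.
    return TRANSITIONS.get((mode, event), mode)


# Walk the full update cycle of one application server.
mode = "distributed"
for event in ("local_update_starts", "all_peers_left_distributed",
              "local_update_starts", "all_updates_finished",
              "idle_period_elapsed", "all_peers_left_centralized"):
    mode = next_mode(mode, event)
print(mode)  # "distributed": back where it started after the full cycle
```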
- FIG. 8 shows an example of a table indicating whether a read of the cache is permitted or prohibited in each mode, whether a read of the database is permitted or prohibited in each mode, and whether an update of the database is permitted or prohibited in each mode.
- In distributed mode, the selection unit 60 acquires a lock from the distributed control unit 58.
- In centralized mode, the selection unit 60 acquires a lock from the centralized control unit 40.
- If the selection unit 60, which has acquired a lock from the distributed control unit 58 in distributed mode, changes the mode from distributed mode to pre-centralized mode, it acquires a lock from the centralized control unit 40 and releases the lock acquired from the distributed control unit 58. Also, if the selection unit 60, which has acquired a lock from the centralized control unit 40 in pre-distributed mode, changes the mode from pre-distributed mode to distributed mode, it acquires a lock from the distributed control unit 58 and releases the lock acquired from the centralized control unit 40. Thus, when the selection unit 60 switches the source from which it acquires locks, data inconsistency is prevented.
- In distributed mode, the access control unit 54 permits both a read of the shared data cached in the cache unit 56 and a read of the shared data stored in the database server 20. That is, the access control unit 54 reads the shared data using the cache unit 56 in distributed mode.
- Thus, the access control unit 54 reduces the load imposed on the database server 20 and accesses the shared data at higher speed.
- In distributed mode, however, the access control unit 54 prohibits an update of the shared data stored in the database server 20. Therefore, the access control unit 54 does not need to acquire an exclusive lock from each of the other application servers 30 in distributed mode. This simplifies distributed lock control.
- In centralized update mode, the access control unit 54 prohibits a read of the shared data cached in the cache unit 56 and permits a read of the shared data stored in the database server 20. That is, in centralized update mode, the access control unit 54 reads the shared data without using the cache unit 56. Also, in centralized update mode, the access control unit 54 permits an update of the shared data stored in the database server 20. In other words, the access control unit 54 prohibits access to the cache in centralized update mode. This prevents occurrence of data inconsistency.
- In pre-centralized mode, centralized read mode, and pre-distributed mode, the access control unit 54 prohibits a read of the shared data cached in the cache unit 56 and permits a read of the shared data stored in the database server 20. That is, in these modes, the access control unit 54 reads the shared data without using the cache unit 56. Also, in pre-centralized mode, centralized read mode, and pre-distributed mode, the access control unit 54 prohibits an update of the shared data stored in the database server 20. That is, both when the mode is changed from distributed mode to centralized update mode and when the mode is changed from centralized update mode to distributed mode, the access control unit 54 prohibits access to the cache. This prevents occurrence of data inconsistency.
- When changing the mode to distributed mode, the selection unit 60 may invalidate the shared data cached in the cache unit 56. Alternatively, if, when changing the mode from pre-distributed mode to distributed mode, the selection unit 60 is notified that data included in the shared data cached in the cache unit 56 has been updated by any one of the application servers 30, the selection unit 60 may selectively invalidate that data. Thus, the selection unit 60 prevents an inconsistency between the shared data cached in the cache unit 56 and the shared data stored in the database server 20.
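The two invalidation strategies above (drop everything, or drop only the entries reported as updated) can be sketched as follows; the structure is illustrative, not the patent's.

```python
class SelectiveCache:
    """Cache supporting both whole-cache and per-record invalidation."""
    def __init__(self):
        self.entries = {}  # record id -> cached row

    def invalidate_all(self):
        # Conservative strategy: drop the whole cache on the mode change.
        self.entries.clear()

    def invalidate_updated(self, updated_ids):
        # Selective strategy: drop only entries another server reported
        # as updated, keeping the rest of the cache warm.
        for record_id in updated_ids:
            self.entries.pop(record_id, None)


cache = SelectiveCache()
cache.entries = {1: ("item1", 10), 2: ("item2", 5)}
cache.invalidate_updated({2})  # a peer reported record 2 as updated
print(sorted(cache.entries))   # [1]: only the stale entry was dropped
```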
- FIG. 9 shows an example of the flows of processes performed by one application server 30 (A1) and by the other application servers 30 (A2 to An) in the information processing system 10. If the application server 30A1 starts an update transaction when all the application servers 30A1 to 30An are placed in distributed mode, the application server 30A1 and the other application servers 30A2 to 30An perform operations in accordance with the corresponding flows shown in FIG. 9.
- First, when the application server 30A1 starts an update transaction, it changes the mode to pre-centralized mode (S101, S102, S103).
- The other application servers 30A2 to 30An each receive a notification (S103A) from the application server 30A1, recognize that the application server 30A1 is placed in pre-centralized mode, and then each change the mode to pre-centralized mode (S201, S204, S205). As a result, all the application servers 30A1 to 30An are placed in pre-centralized mode.
- The application servers 30A1 to 30An receive notifications (S205A) from one another, recognize that all the application servers 30A1 to 30An are placed in pre-centralized mode (S106, S206), and then change the modes to centralized read mode (S107, S207). In this case, the application servers 30A1 to 30An may change the modes from pre-centralized mode to centralized read mode in synchronization with one another.
- Next, the application server 30A1 changes the mode from centralized read mode to centralized update mode (S108) and updates the shared data (S109). When the application server 30A1 finishes all update transactions (S110), it changes the mode from centralized update mode to centralized read mode (S111). When a given period has elapsed after the change to centralized read mode (S112), the application server 30A1 changes the mode to pre-distributed mode (S113). As a result, all the application servers 30A1 to 30An are placed in pre-distributed mode.
- The application servers 30A1 to 30An receive notifications (S113A, S213A) from one another, recognize that all the application servers 30A1 to 30An are placed in pre-distributed mode (S114, S214), and then change the modes to distributed mode (S115, S215). In this case, the application servers 30A1 to 30An may change the modes from pre-distributed mode to distributed mode in synchronization with one another.
- In this way, when one application server 30 starts an update, the application servers 30 change the modes from distributed mode to centralized read mode via pre-centralized mode. The one application server 30 then changes the mode from centralized read mode to centralized update mode and performs the update. When the one application server 30 finishes the update, the application servers 30 change the modes from centralized read mode to distributed mode via pre-distributed mode.
- FIG. 10 shows the flow of processes performed when determining the mode of an application server 30 newly added to the information processing system 10.
- Assume that the information processing system 10 includes a newly added application server 30.
- The selection unit 60 of the newly added application server 30 makes determinations as shown in FIG. 10 and selects the mode in accordance with those determinations.
- the selection unit 60 determines whether at least one of the other application servers 30 is placed in pre-centralized mode (S 301 ). If at least one of the other application servers 30 is placed in pre-centralized mode (YES in S 301 ), the selection unit 60 changes the mode to pre-centralized mode (S 302 ).
- the selection unit 60 determines whether at least one of the other application servers 30 is placed in distributed mode (S 303 ). If none of the other application servers 30 is placed in pre-centralized mode and if at least one of the other application servers 30 is placed in distributed mode (YES in S 303 ), the selection unit 60 changes the mode to distributed mode (S 304 ).
- the selection unit 60 determines whether at least one of the other application servers 30 is placed in pre-distributed mode (S 305 ). If each of the other application servers 30 is placed in none of pre-centralized mode and distributed mode and if at least one of the other application servers 30 is placed in pre-distributed mode (YES in S 305 ), the selection unit 60 changes the mode to pre-distributed mode (S 306 ).
- Otherwise, the selection unit 60 changes the mode to centralized read mode (S 307 ).
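For illustration only, the determination flow of steps S301 to S307 can be sketched as follows; the function name and the mode strings are hypothetical and not part of the embodiment:

```python
def select_mode_for_new_server(other_server_modes):
    """Choose the initial mode of a newly added application server 30
    from the modes of the application servers already in the system."""
    if "pre-centralized" in other_server_modes:    # S301 -> S302
        return "pre-centralized"
    if "distributed" in other_server_modes:        # S303 -> S304
        return "distributed"
    if "pre-distributed" in other_server_modes:    # S305 -> S306
        return "pre-distributed"
    return "centralized read"                      # S307
```

For example, a server joining a system whose members are all in distributed mode starts in distributed mode, while one joining while every member is in centralized read or centralized update mode starts in centralized read mode.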
- the application server 30 newly added to the information processing system 10 is also allowed to access the shared data while maintaining data integrity with the other application servers 30 .
- FIG. 11 shows an example hardware configuration of a computer 1900 according to this embodiment.
- the computer 1900 according to this embodiment includes a CPU peripheral unit, an input/output unit, and a legacy input/output unit.
- the CPU peripheral unit includes a CPU 2000 , a RAM 2020 , a graphic controller 2075 , and a display 2080 , which are coupled to one another via a host controller 2082 .
- the input/output unit includes a communication interface 2030 , a hard disk drive 2040 , and a CD-ROM drive 2060 , which are coupled to the host controller 2082 via an input/output controller 2084 .
- the legacy input/output unit includes a ROM 2010 , a flexible disk drive 2050 , and an input/output chip 2070 , which are coupled to the input/output controller 2084 .
- the host controller 2082 couples the RAM 2020 to the CPU 2000, which accesses the RAM 2020 at a high transfer rate, and to the graphic controller 2075.
- the CPU 2000 operates on the basis of programs stored in the ROM 2010 and RAM 2020 so as to control each component.
- the graphic controller 2075 acquires image data generated by the CPU 2000 or the like on a frame buffer provided in the RAM 2020 and displays the acquired image data on the display 2080.
- the graphic controller 2075 may include a frame buffer for storing image data generated by the CPU 2000 or the like.
- the input/output controller 2084 couples the host controller 2082 to relatively high-speed input/output devices, namely the communication interface 2030, the hard disk drive 2040, and the CD-ROM drive 2060.
- the communication interface 2030 is coupled to other apparatuses via a network.
- the hard disk drive 2040 stores a program and data to be used by the CPU 2000 of the computer 1900 .
- the CD-ROM drive 2060 reads out a program or data from the CD-ROM 2095 and provides the read-out program or data to the hard disk drive 2040 via the RAM 2020 .
- the ROM 2010 stores a boot program to be executed at a boot of the computer 1900 , a program dependent on the hardware of the computer 1900 , and the like.
- the flexible disk drive 2050 reads out a program or data from the flexible disk 2090 and provides the read-out program or data to the hard disk drive 2040 via the RAM 2020 .
- the input/output chip 2070 couples the flexible disk drive 2050 to the input/output controller 2084, and also couples various input/output devices to the input/output controller 2084 via, for example, a parallel port, a serial port, a keyboard port, a mouse port, and the like.
- a program stored in a recording medium such as the flexible disk 2090, the CD-ROM 2095, or an integrated circuit (IC) card is installed into the hard disk drive 2040 via the RAM 2020 by the user and then executed by the CPU 2000.
- a program installed into the computer 1900 and intended to cause the computer 1900 to function as one application server 30 includes an execution module, an access control module, a cache module, a distributed control module, and a selection module.
- This program or these modules operate the CPU 2000 and the like in order to cause the computer 1900 to function as the execution unit 52, access control unit 54, cache unit 56, distributed control unit 58, and selection unit 60.
- the execution unit 52 , access control unit 54 , cache unit 56 , distributed control unit 58 , and selection unit 60 are realized as specific means in which software and the above-mentioned various hardware resources collaborate with each other. Also, by performing operations on information or processing information using these specific means in accordance with the use objective of the computer 1900 according to this embodiment, a unique application server 30 according to the use objective is constructed.
- the CPU 2000 executes a communication program loaded in the RAM 2020 and, on the basis of a process written in the communication program, instructs the communication interface 2030 to perform a communication process.
- the communication interface 2030 reads out transmission data stored in a transmission buffer area or the like provided in a storage device such as the RAM 2020 , hard disk drive 2040 , flexible disk 2090 , or CD-ROM 2095 and transmits the read-out transmission data to a network, or writes reception data received via a network into a reception buffer area or the like provided in a storage device.
- the communication interface 2030 may transmit transmission data to a storage device or receive reception data from a storage device using the DMA (direct memory access) method.
- the CPU 2000 may read out data from a storage device or the communication interface 2030, which is the transmission source, and may write the read-out data into the communication interface 2030 or a storage device, which is the transmission destination, so as to transfer transmission data or reception data.
- the CPU 2000 loads all or the necessary files, databases, and the like stored in an external storage device such as the hard disk drive 2040 , CD-ROM drive 2060 (CD-ROM 2095 ), or flexible disk drive 2050 (flexible disk 2090 ) into the RAM 2020 using DMA transfer or the like and performs various processes on the data loaded in the RAM 2020 . Then, the CPU 2000 writes the resultant data back into the external storage device using DMA transfer or the like.
- the RAM 2020 is considered as an apparatus for temporarily retaining the data stored in the external storage device. Therefore, in this embodiment, the RAM 2020 , external storage devices, and the like are each referred to as a “memory,” a “storage unit,” a “storage device,” or the like.
- various programs and various types of information such as data, tables, and databases are stored in such storage devices and are subjected to information processing.
- the CPU 2000 may read or write data from or into a cache memory holding a part of the RAM 2020 .
- the cache memory also plays a part of the function of the RAM 2020 . Therefore, in this embodiment, the cache memory is also referred to as the “RAM 2020 ,” a “memory,” or a “storage device” except for a case where the cache memory and RAM 2020 or the like are shown independently.
- the CPU 2000 performs various processes specified by a command string of a program and including various operations, information processing, condition judgment, and retrieval or replacement of information described in this embodiment, on data read out from the RAM 2020 and then writes the resultant data back into the RAM 2020 .
- In a condition judgment, the CPU 2000 judges whether the variables shown in this embodiment meet corresponding conditions, such as a condition that a variable be larger than, smaller than, equal to or larger than, equal to or smaller than, or equal to another variable or constant. If such a condition is met (or unmet), the process branches to a different command string or a sub-routine is called.
- the CPU 2000 is allowed to retrieve information included in a file, a database, or the like stored in a storage device. For example, if multiple entries in which the attribute value of a first attribute is associated with the attribute value of a second attribute are stored in a storage device, the CPU 2000 retrieves an entry in which the attribute value of the first attribute meets a specified condition, from among the multiple entries and reads out the attribute value of the second attribute stored in the entry. Thus, the CPU 2000 obtains the attribute value of the second attribute associated with the first attribute meeting the specified condition.
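As a minimal sketch of this retrieval (the entry layout and the function name are assumptions for illustration):

```python
# Entries associating a first-attribute value with a second-attribute value.
entries = [
    {"first": "A", "second": 10},
    {"first": "B", "second": 20},
    {"first": "C", "second": 30},
]

def second_attribute_for(entries, condition):
    """Return the second-attribute value of the first entry whose
    first-attribute value meets the specified condition."""
    for entry in entries:
        if condition(entry["first"]):
            return entry["second"]
    return None
```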
- the above-mentioned program or modules may be stored in an external recording medium.
- recording media include the flexible disk 2090 and CD-ROM 2095 as well as optical recording media such as a digital versatile disc (DVD) and a compact disc (CD), magneto-optical recording media such as a magneto-optical (MO) disk, tape media, and semiconductor memories such as an IC card.
- a storage device such as a hard disk or a random access memory (RAM), provided in a server system connected to a dedicated communication network or the Internet may be used as a recording medium and the above-mentioned program stored in such a storage device may be provided to the computer 1900 via a network.
Abstract
Description
- This application claims priority under 35 U.S.C. §119 from Japanese Patent Application No. 2008-259926 filed Oct. 6, 2008, the entire contents of which are incorporated herein by reference.
- 1. Field of the Invention
- This invention relates generally to a system for accessing shared data using multiple application servers. More particularly, this invention relates to a method, system, and computer program for use in accessing shared data using multiple application servers.
- 2. Description of the Related Art
- There is known a system which includes a database server storing a database and multiple application servers for accessing the database. Such a system causes each application server to cache the result of a read of the database, thereby reducing the load imposed on the database server.
- In a system causing each application server to cache the result of a read of the database, a lock control must be performed among the multiple application servers in order to prevent a read of cached data inconsistent with the database. Examples of a lock control method include the distributed-lock method in which each application server controls a lock individually, and the centralized-lock method in which a lock server or the like centrally controls a lock. Hereafter, a lock control performed using the distributed-lock method will be referred to as “cache mode” and a lock control performed using the centralized-lock method will be referred to as “database mode.”
- If an application server reads a database in a system that is using cache mode, the application server acquires a locally controlled read lock before reading the database. Also, if the application server updates the database in the system that is using cache mode, the application server acquires an exclusive lock controlled by each of the other application servers before updating the database. On the other hand, if an application server reads or updates a database in a system which is using database mode, the application server acquires a read lock or an exclusive lock controlled by a lock server before reading or updating the database.
- As for cache mode, the latency caused when acquiring a read lock is short. However, an exclusive lock must be acquired from each of multiple application servers. This complicates the process. On the other hand, as for database mode, it is sufficient to only acquire one exclusive lock from a lock server. This simplifies the process. However, the latency caused when acquiring a read lock is long. Therefore, cache mode is preferably used in a system for realizing an application where a read of a database frequently occurs, and database mode is preferably used in a system for realizing an application where an update of a database frequently occurs.
- In a system for realizing bank operations or the like, a read of a database occurs more frequently than a database update during daytime hours. A batch update of the database is performed during the nighttime when the database is used by users less frequently. If cache mode is used in such a system, the operation efficiency is increased during the daytime when a read of the database occurs more frequently. However, the operation efficiency is reduced during the nighttime when a batch update is performed. In contrast, if database mode is used in such a system, the operation efficiency is increased during the nighttime when a batch update is performed; however, the operation efficiency is reduced during the daytime when a read of the database occurs more frequently. Therefore, it is difficult to increase the operation efficiency regardless of the time of day in a system for realizing an application where an update of a database occurs more frequently during a particular time period.
- One aspect of the present invention provides a system including a plurality of application servers for accessing shared data and a centralized control unit for centrally controlling a lock applied to the shared data by each of the application servers. Each of the application servers includes a distributed control unit for controlling a lock applied to the shared data by the application server and a selection unit for selecting any one of a distributed mode in which a lock is acquired from the distributed control unit or a centralized mode in which a lock is acquired from the centralized control unit.
- In another aspect, the present invention provides an application server for accessing shared data in a system, wherein the system includes a plurality of application servers and a centralized control unit for centrally controlling a lock applied to shared data by each of the application servers. The application server includes: a distributed control unit for controlling a lock applied to the shared data by the application server; and a selection unit for selecting any one of a distributed mode in which a lock is acquired from the distributed control unit or a centralized mode in which a lock is acquired from the centralized control unit.
- In still another aspect, the present invention provides a method for causing a computer to function as an application server for accessing shared data in a system including a plurality of application servers and a centralized control unit for centrally controlling a lock applied to the shared data by each of the application servers. The method includes the steps of: causing the computer to function as a distributed control unit for controlling a lock applied to the shared data by a corresponding application server; and causing the computer to function as a selection unit for selecting any one of distributed mode, in which a lock is acquired from the distributed control unit, and centralized mode, in which a lock is acquired from the centralized control unit.
- In still another aspect, the present invention provides an article of manufacture tangibly embodying computer readable instructions which, when implemented, cause a computer to carry out the steps of the above method.
- FIG. 1 shows a configuration of an information processing system 10 according to an embodiment of the present invention.
- FIG. 2 shows a configuration of each of multiple application servers 30.
- FIG. 3 shows an example of a schema defining the data structure of shared data (ITEM table).
- FIG. 4 shows an example of a read query for reading a value from the ITEM table shown in FIG. 3.
- FIG. 5 shows an example of data cached by a cache unit 56.
- FIG. 6 shows an example of modes selected by a selection unit 60.
- FIG. 7 shows an example of change conditions for changing the mode from one to another.
- FIG. 8 shows an example of a table indicating whether a read of the cache is permitted or prohibited in each mode, whether a read of the database is permitted or prohibited in each mode, and whether an update of the database is permitted or prohibited in each mode.
- FIG. 9 shows an example of the flows of processes performed by one application server 30 (A1) and processes performed by the other multiple application servers 30 (A2 to An) in the information processing system 10.
- FIG. 10 shows the flow of processes performed when determining the mode of an application server 30 newly added to the information processing system 10.
- FIG. 11 shows an example hardware configuration of a computer 1900 according to this embodiment.
- The present invention is described using embodiments thereof. However, the embodiments do not limit the invention as set forth in the appended claims. Also, not all combinations of the features described in the embodiments are essential as means of solving the above-mentioned problem.
- 10: information processing system
- 20: database server
- 30: application server
- 40: centralized control unit
- 52: execution unit
- 54: access control unit
- 56: cache unit
- 58: distributed control unit
- 60: selection unit
- 1900: computer
- 2000: CPU
- 2010: ROM
- 2020: RAM
- 2030: communication interface
- 2040: hard disk drive
- 2050: flexible disk drive
- 2060: CD-ROM drive
- 2070: input/output chip
- 2075: graphic controller
- 2080: display
- 2082: host controller
- 2084: input/output controller
- 2090: flexible disk
- 2095: CD-ROM
- Referring to FIG. 1, a configuration of an information processing system 10 according to one embodiment of the invention is shown. The information processing system 10 includes a database server 20, multiple application servers 30, and a centralized control unit 40.
- The database server 20 stores shared data. In this embodiment, the shared data is a table included in a database.
- The multiple application servers 30 each execute an application program. Each application server 30 processes information written in an application program. Also, each application server 30 accesses the shared data stored in the database server 20 via a network in accordance with the description of the application program. In other words, each application server 30 reads and updates the shared data.
- The centralized control unit 40 centrally controls locks applied to the shared data by the application servers 30. In this embodiment, the centralized control unit 40 controls a lock with respect to each record of the shared data.
- More specifically, if the centralized control unit 40 receives a request from one application server 30 to acquire a read lock on one record, it permits the application server 30 to acquire the read lock, provided that none of the other application servers 30 has acquired an exclusive lock. Also, if the centralized control unit 40 receives a request from one application server 30 to acquire an exclusive lock on one record, it permits the application server 30 to acquire the exclusive lock, provided that none of the other application servers 30 has acquired either a read lock or an exclusive lock. Thus, the application servers 30 are allowed to read and update the shared data without causing data inconsistency among one another. The database server 20 and the centralized control unit 40 may be controlled by an identical system. -
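The granting rules above can be sketched as a per-record lock table. This is an illustrative model only, not the claimed implementation; the class and method names are hypothetical:

```python
class CentralizedLockTable:
    """Per-record locks: any number of readers, or one exclusive holder."""

    def __init__(self):
        self.readers = {}  # record -> set of server ids holding read locks
        self.writer = {}   # record -> server id holding the exclusive lock

    def acquire_read(self, record, server):
        # Granted unless another server holds an exclusive lock on the record.
        if record in self.writer and self.writer[record] != server:
            return False
        self.readers.setdefault(record, set()).add(server)
        return True

    def acquire_exclusive(self, record, server):
        # Granted only if no other server holds any lock on the record.
        others_reading = self.readers.get(record, set()) - {server}
        other_writer = self.writer.get(record) not in (None, server)
        if others_reading or other_writer:
            return False
        self.writer[record] = server
        return True
```

In this sketch a read lock is refused only while another server holds the exclusive lock, mirroring the two "provided that" conditions stated above.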
FIG. 2 shows a configuration of each application server 30. Each application server 30 includes an execution unit 52, an access control unit 54, a cache unit 56, and a distributed control unit 58. Each application server 30 thus configured is realized when a computer executes a program.
- The execution unit 52 processes information written in an application program. The execution unit 52 performs a process corresponding to, for example, a given request and sends back the result of the process as a response. Also, if the execution unit 52 performs a process for reading or updating the shared data, it issues a read request or an update request to the database server 20 via the access control unit 54. The execution unit 52 may issue a read request or an update request written in, for example, SQL (Structured Query Language).
- The access control unit 54 sends the request for reading or updating the shared data issued by the execution unit 52 to the database server 20 via a network. Subsequently, the access control unit 54 acquires the result of a process corresponding to the sent request from the database server 20 and sends the process result back to the execution unit 52.
- If the access control unit 54 starts a transaction to access the database server 20, it acquires a lock from the centralized control unit 40 or the distributed control unit 58 via a selection unit 60. More specifically, if the access control unit 54 starts a transaction including an update request (hereafter referred to as an "update transaction"), it acquires an exclusive lock. Also, if the access control unit 54 starts a transaction not including an update request (hereafter referred to as a "read transaction"), it acquires a read lock. If the access control unit 54 cannot acquire a lock, it does not access the shared data.
- A transaction here refers to, for example, a set of inseparable multiple processes performed between the application server 30 and the database server 20. If the database server 20 is, for example, an SQL server, a transaction refers to a series of processes from "begin" to "commit" or "rollback."
- The cache unit 56 caches the shared data read by the access control unit 54. After a transaction is completed, the cache unit 56 may invalidate the cached shared data.
- The distributed control unit 58 controls a lock applied to the shared data by the application server 30. In this embodiment, the distributed control unit 58 controls a lock with respect to each record of the shared data.
- More specifically, if the distributed control unit 58 receives a request from the access control unit 54 to acquire a read lock on one record, it permits the access control unit 54 to acquire the read lock, provided that none of the access control units 54 of the other application servers 30 has acquired an exclusive lock. Also, if the distributed control unit 58 receives a request from the access control unit 54 to acquire an exclusive lock on one record, it makes inquiries to the other application servers 30. If none of the other application servers 30 has acquired either a read lock or an exclusive lock, the distributed control unit 58 permits the access control unit 54 to acquire the exclusive lock. In this way, the distributed control unit 58 allows the application server 30 to read and update the shared data without causing data inconsistency between itself and the other application servers 30. - The
selection unit 60 selects any one of distributed mode, in which a lock is acquired from the distributed control unit 58, and centralized mode, in which a lock is acquired from the centralized control unit 40. If the selection unit 60 receives a request for acquiring a lock from the access control unit 54 in distributed mode, it provides the request to the distributed control unit 58 so that the access control unit 54 acquires the lock from the distributed control unit 58. Also, if the selection unit 60 receives a request for acquiring a lock from the access control unit 54 in centralized mode, it provides the request to the centralized control unit 40 via a network so that the access control unit 54 acquires the lock from the centralized control unit 40.
- Incidentally, the selection unit 60 communicates with the respective selection units 60 of the other application servers 30. If at least one of the other application servers 30 intends to update the shared data, the selection unit 60 changes the mode to centralized mode. If none of the other application servers 30 is updating the shared data, the selection unit 60 changes the mode to distributed mode. In distributed mode, the access control unit 54 permits a read of the shared data and prohibits an update thereof. In centralized mode, the access control unit 54 permits both a read of the shared data and an update thereof.
- As described above, if one application server 30 updates the shared data, the application server 30 acquires an exclusive lock from the centralized control unit 40. This prevents interactions between the other application servers 30 and the database server 20. Also, if the application server 30 reads the shared data, it acquires a read lock from the distributed control unit 58. Thus, the latency caused when acquiring the read lock is reduced. That is, the application servers 30 according to this embodiment are allowed to efficiently perform distributed lock control. -
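A rough sketch of the distributed case follows; each server grants read locks locally but must inquire at every peer before granting an exclusive lock. The class is an illustrative model, not the claimed implementation:

```python
class DistributedControlUnitSketch:
    def __init__(self):
        self.peers = []          # the other application servers' units
        self.read_locks = set()  # records read-locked by this server
        self.excl_locks = set()  # records exclusively locked by this server

    def holds_lock(self, record):
        return record in self.read_locks or record in self.excl_locks

    def acquire_read(self, record):
        # Fast local grant, unless a peer holds an exclusive lock.
        if any(record in peer.excl_locks for peer in self.peers):
            return False
        self.read_locks.add(record)
        return True

    def acquire_exclusive(self, record):
        # The costly step: an inquiry to every other application server.
        if any(peer.holds_lock(record) for peer in self.peers):
            return False
        self.excl_locks.add(record)
        return True
```

The asymmetry is the point of the embodiment: read locks need no network round trip, while an exclusive lock requires consulting every peer, which is why updates are instead routed to the centralized control unit 40.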
FIG. 3 shows an example of a schema defining the data structure of the shared data (ITEM table).
- FIG. 4 shows an example of a read query for reading a value from the ITEM table shown in FIG. 3.
- FIG. 5 shows an example of data cached by the cache unit 56.
- The cache unit 56 stores a result obtained by reading the shared data stored in the database server 20, for example, using a read query written in SQL. For example, assume that the database server 20 stores, as the shared data, an ITEM table as shown in the schema of FIG. 3. In this case, if the access control unit 54 issues the read query shown in FIG. 4 to the database server 20, it acquires a query result as shown in FIG. 5 from the database server 20. Then, the cache unit 56 caches the query result acquired by the access control unit 54.
- Subsequently, if the access control unit 54 again receives a request from the execution unit 52 for reading all or a part of the data shown in FIG. 5, it acquires the above-mentioned query result as the shared data from the cache unit 56, rather than issuing a read query to the database server 20, and sends the query result back to the execution unit 52. Thus, the access control unit 54 reduces the load imposed on the database server 20, as well as the latency caused when reading the shared data. -
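The caching behavior can be sketched as follows; the callback-based interface and the names are assumptions for illustration:

```python
class QueryResultCache:
    """Serve a repeated read query from the cached result instead of
    issuing it to the database server again."""

    def __init__(self):
        self._results = {}

    def read(self, query, issue_to_database):
        if query not in self._results:
            self._results[query] = issue_to_database(query)
        return self._results[query]

    def invalidate(self):
        # e.g. after a transaction completes, or on a mode change
        self._results.clear()


issued = []

def fake_database(query):
    issued.append(query)
    return [("ITEM1", 100)]

cache = QueryResultCache()
cache.read("SELECT PRICE FROM ITEM", fake_database)
cache.read("SELECT PRICE FROM ITEM", fake_database)
assert len(issued) == 1  # the second read never reached the database server
```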
FIG. 6 shows an example of modes selected by the selection unit 60. FIG. 7 shows an example of change conditions for changing from one mode to another. As shown in FIGS. 6 and 7, the selection unit 60 selects any one of distributed mode; pre-centralized mode, used when changing from distributed mode to centralized mode; centralized read mode, which is one type of centralized mode; centralized update mode, which is another type of centralized mode; and pre-distributed mode, used when changing from centralized mode to distributed mode.
- If one application server 30 updates the shared data in distributed mode, the selection unit 60 thereof changes the mode from distributed mode to centralized mode. For example, if the application server 30 starts an update transaction, the selection unit 60 changes the mode from distributed mode to pre-centralized mode. Also, if at least one of the other application servers 30 is placed in pre-centralized mode when the application server 30 is placed in distributed mode, the selection unit 60 changes the mode from distributed mode to pre-centralized mode. That is, if any one of the application servers 30 updates the shared data (for example, if any one of the application servers 30 starts an update transaction), the modes of all the application servers 30 are changed from distributed mode to pre-centralized mode.
- If, when the application server 30 is placed in pre-centralized mode, each of the other application servers 30 is placed in any one of pre-centralized mode, centralized read mode, and centralized update mode, the selection unit 60 changes the mode from pre-centralized mode to centralized read mode, which is a type of centralized mode. That is, each application server 30 may change the mode to centralized read mode once the modes of the other application servers 30 have been changed from distributed mode to pre-centralized mode. The application servers 30 may change the modes from pre-centralized mode to centralized read mode in synchronization with one another.
- If the application server 30 updates the shared data in centralized read mode, the selection unit 60 changes the mode from centralized read mode to centralized update mode. For example, if the application server 30 performs an update transaction, the selection unit 60 changes the mode from centralized read mode to centralized update mode.
- If the application server 30 finishes updating the shared data in centralized update mode, the selection unit 60 changes the mode from centralized update mode to centralized read mode. For example, if the application server 30 finishes all update transactions, the selection unit 60 changes the mode from centralized update mode to centralized read mode.
- Also, if each of the other application servers 30 is placed in either centralized read mode or centralized update mode when the application server 30 is placed in centralized read mode, or if at least one of the other application servers 30 is placed in pre-distributed mode when the application server 30 is placed in centralized read mode, the selection unit 60 changes the mode from centralized read mode to pre-distributed mode. Also, if a given period has elapsed after the change of the mode to centralized read mode, the selection unit 60 may change the mode from centralized read mode to pre-distributed mode. That is, if none of the application servers 30 is updating the shared data in centralized mode, the application servers 30 may change the modes to pre-distributed mode.
- If each of the other application servers 30 is placed in any one of pre-distributed mode, distributed mode, and pre-centralized mode when the application server 30 is placed in pre-distributed mode, the selection unit 60 changes the mode from pre-distributed mode to distributed mode. That is, each application server 30 may change the mode to distributed mode once the other application servers 30 have changed the modes from centralized mode to pre-distributed mode. The application servers 30 may change the modes from pre-distributed mode to distributed mode in synchronization with one another.
- Also, the application servers 30 may be configured so that if one application server 30 updates the shared data in pre-distributed mode, the selection unit 60 thereof changes the mode from pre-distributed mode to centralized read mode, provided that each of the other application servers 30 is placed in any one of pre-distributed mode, centralized read mode, and centralized update mode. In this case, if the application server 30 updates the shared data in pre-distributed mode (for example, if the application server 30 starts an update transaction), the application server 30 is allowed to change the mode from pre-distributed mode to centralized update mode via centralized read mode. -
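The transitions above can be condensed into a table; the event labels are informal summaries of the change conditions of FIGS. 6 and 7, not terms from the embodiment:

```python
TRANSITIONS = {
    ("distributed", "starts update transaction"):        "pre-centralized",
    ("distributed", "a peer entered pre-centralized"):   "pre-centralized",
    ("pre-centralized", "all peers pre-centralized or centralized"): "centralized read",
    ("centralized read", "performs update transaction"): "centralized update",
    ("centralized update", "all update transactions finished"): "centralized read",
    ("centralized read", "no server updating (or period elapsed)"): "pre-distributed",
    ("pre-distributed", "all peers pre-distributed or distributed"): "distributed",
}

def next_mode(mode, event):
    # Stay in the current mode when no change condition matches.
    return TRANSITIONS.get((mode, event), mode)
```

The two intermediate modes (pre-centralized and pre-distributed) exist so that all servers pass through a common barrier state before the lock source actually changes.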
FIG. 8 shows an example of a table indicating whether a read of the cache is permitted or prohibited in each mode, whether a read of the database is permitted or prohibited in each mode, and whether an update of the database is permitted or prohibited in each mode. In distributed mode, the selection unit 60 acquires a lock from the distributed control unit 58. In pre-centralized mode, centralized read mode, centralized update mode, and pre-distributed mode, the selection unit 60 acquires a lock from the centralized control unit 40.
- If the selection unit 60, which has acquired a lock from the distributed control unit 58 in distributed mode, changes the mode from distributed mode to pre-centralized mode, it acquires a lock from the centralized control unit 40 and then releases the lock acquired from the distributed control unit 58. Also, if the selection unit 60, which has acquired a lock from the centralized control unit 40 in pre-distributed mode, changes the mode from pre-distributed mode to distributed mode, it acquires a lock from the distributed control unit 58 and then releases the lock acquired from the centralized control unit 40. Thus, when the selection unit 60 changes the source from which it acquires a lock, occurrence of data inconsistency is prevented. - As shown in
FIG. 8 , in distributed mode, theaccess control unit 54 permits both a read of the shared data cached in thecache unit 56 and a read of the shared data stored in thedatabase server 20. That is, theaccess control unit 54 reads the shared data using thecache unit 56 in distributed mode. Thus, in distributed mode, theaccess control unit 54 reduces the load imposed on thedatabase server 20, as well as accesses the shared data at higher speed. Also, in distributed mode, theaccess control unit 54 prohibits an update of the shared data stored in thedatabase server 20. Therefore, theaccess control unit 54 does not need to acquire an exclusive lock from each of theother application servers 30 in distributed mode. This simplifies distributed lock control. - As shown in
FIG. 8, in centralized update mode, the access control unit 54 prohibits a read of the shared data cached in the cache unit 56 and permits a read of the shared data stored in the database server 20. That is, in centralized update mode, the access control unit 54 reads the shared data without using the cache unit 56. Also, in centralized update mode, the access control unit 54 permits an update of the shared data stored in the database server 20. In other words, the access control unit 54 prohibits access to the cache in centralized update mode. This prevents occurrence of data inconsistency. - As shown in
FIG. 8, in pre-centralized mode, centralized read mode, and pre-distributed mode, the access control unit 54 prohibits a read of the shared data cached in the cache unit 56 and permits a read of the shared data stored in the database server 20. That is, in these three modes, the access control unit 54 reads the shared data without using the cache unit 56. Also, in these three modes, the access control unit 54 prohibits an update of the shared data stored in the database server 20. That is, while the mode is being changed from distributed mode to centralized update mode, and while it is being changed from centralized update mode to distributed mode, the access control unit 54 prohibits access to the cache. This prevents occurrence of data inconsistency. - Also, if the
selection unit 60 changes the mode from pre-distributed mode to distributed mode, it may invalidate the shared data cached in the cache unit 56. Also, if, when changing the mode from pre-distributed mode to distributed mode, the selection unit 60 is notified that data included in the shared data cached in the cache unit 56 has been updated by any one of the application servers 30, the selection unit 60 may selectively invalidate that data. Thus, the selection unit 60 prevents an inconsistency between the shared data cached in the cache unit 56 and the shared data stored in the database server 20.
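The FIG. 8 table described above can be restated directly in code. The sketch below is illustrative, not from the patent: the mode names and the `check_access` helper are hypothetical, and the boolean triples simply transcribe the permissions given in the text (cache read, database read, database update).

```python
# Restatement of the FIG. 8 permission table described in the text:
# per-mode permissions for cache reads, database reads, and database updates.
# Mode names and this table layout are hypothetical, for illustration only.
PERMISSIONS = {
    "distributed":        {"cache_read": True,  "db_read": True,  "db_update": False},
    "pre-centralized":    {"cache_read": False, "db_read": True,  "db_update": False},
    "centralized-read":   {"cache_read": False, "db_read": True,  "db_update": False},
    "centralized-update": {"cache_read": False, "db_read": True,  "db_update": True},
    "pre-distributed":    {"cache_read": False, "db_read": True,  "db_update": False},
}

def check_access(mode, operation):
    """Return True if `operation` is permitted in `mode` (hypothetical helper)."""
    return PERMISSIONS[mode][operation]
```

Note that only distributed mode permits cache reads, and only centralized update mode permits database updates; the three transitional modes funnel all reads to the database while forbidding updates.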
FIG. 9 shows an example of the flows of processes performed by one application server (30A1) and by the other application servers (30A2 to 30An) in the information processing system 10. If the application server 30A1 starts an update transaction when all the application servers 30A1 to 30An are placed in distributed mode, the application server 30A1 and the other application servers 30A2 to 30An perform operations in accordance with the corresponding flows shown in FIG. 9. - First, when the application server 30A1 starts an update transaction, it changes the mode to pre-centralized mode (S101, S102, S103). The other application servers 30A2 to 30An each receive notification (S103A) from the application server 30A1, recognize that the application server 30A1 is placed in pre-centralized mode, and then each change the mode to pre-centralized mode (S201, S204, S205). As a result, all the application servers 30A1 to 30An are placed in pre-centralized mode.
- The application servers 30A1 to 30An receive notification (S205A) from one another and recognize that all the application servers 30A1 to 30An are placed in pre-centralized mode (S106, S206) and then change the modes to centralized read mode (S107, S207). In this case, the application servers 30A1 to 30An may change the modes from pre-centralized mode to centralized read mode in synchronization with one another.
- Subsequently, when a given period has elapsed after the other application servers 30A2 to 30An change the modes to centralized read mode (S212), the application servers 30A2 to 30An change the modes to pre-distributed mode (S213).
- On the other hand, the application server 30A1 changes the mode from centralized read mode to centralized update mode (S108). Subsequently, the application server 30A1 updates the shared data (S109). Subsequently, when the application server 30A1 finishes all update transactions (S110), it changes the mode from centralized update mode to centralized read mode (S111). When a given period has elapsed after the application server 30A1 changes the mode to centralized read mode (S112), the application server 30A1 changes the mode to pre-distributed mode (S113). As a result, all the application servers 30A1 to 30An are placed in pre-distributed mode.
- The application servers 30A1 to 30An receive notification (S113A, S213A) from one another and recognize that all the application servers 30A1 to 30An are placed in pre-distributed mode (S114, S214) and then change the modes to distributed mode (S115, S215). In this case, the application servers 30A1 to 30An may change the modes from pre-distributed mode to distributed mode in synchronization with one another.
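The handshake above, in which every server reports reaching the intermediate mode and all servers then switch together, behaves like a barrier. A minimal sketch, assuming a fixed set of servers modeled as threads; all names here are hypothetical, not from the patent:

```python
import threading

N = 3  # number of application servers in this sketch

# Each server switches to the next mode only after every server is known
# to have reached the intermediate mode (cf. S114/S214 -> S115/S215).
barrier = threading.Barrier(N)
modes = ["pre-distributed"] * N

def server(i):
    barrier.wait()           # all N servers are now in pre-distributed mode
    modes[i] = "distributed" # switch together

threads = [threading.Thread(target=server, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

In the patent's scheme the same effect is achieved by the notifications (S113A, S213A) exchanged among the servers rather than by a shared in-process barrier.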
- As described above, if any one of the
application servers 30 starts an update transaction when the application servers 30 are all placed in distributed mode, the application servers 30 change the modes from distributed mode to centralized read mode via pre-centralized mode. Subsequently, the one application server 30 changes the mode from centralized read mode to centralized update mode so as to perform an update. Subsequently, when the one application server 30 finishes the update, the application servers 30 change the modes from centralized read mode to distributed mode via pre-distributed mode.
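The full cycle just summarized can be sketched as a small state machine, together with the lock source used in each mode. The mode names, the `TRANSITIONS` table, and `change_mode` are illustrative, not from the patent:

```python
# Allowed mode transitions for the update protocol described above:
# distributed -> pre-centralized -> centralized read
#   (-> centralized update -> centralized read, for the updating server)
# -> pre-distributed -> distributed.  Names are hypothetical.
TRANSITIONS = {
    "distributed":        {"pre-centralized"},
    "pre-centralized":    {"centralized-read"},
    "centralized-read":   {"centralized-update", "pre-distributed"},
    "centralized-update": {"centralized-read"},
    "pre-distributed":    {"distributed"},
}

# In distributed mode locks come from the distributed control unit 58;
# in every other mode they come from the centralized control unit 40.
LOCK_SOURCE = {
    mode: ("distributed control unit 58" if mode == "distributed"
           else "centralized control unit 40")
    for mode in TRANSITIONS
}

def change_mode(current, target):
    """Validate a mode change against the protocol; raise on an illegal jump."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {target}")
    return target

# The updating server walks the full cycle:
mode = "distributed"
for step in ("pre-centralized", "centralized-read", "centralized-update",
             "centralized-read", "pre-distributed", "distributed"):
    mode = change_mode(mode, step)
```

The point of the two transitional modes is visible in the table: there is no direct edge between distributed mode and centralized update mode, so cache use and database updates can never overlap.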
FIG. 10 shows the flow of processes performed when determining the mode of an application server 30 newly added to the information processing system 10. The information processing system 10 includes a newly added application server 30. The selection unit 60 of the application server 30 newly added to the information processing system 10 makes determinations as shown in FIG. 10 and selects the mode in accordance with the determinations. - First, the
selection unit 60 determines whether at least one of the other application servers 30 is placed in pre-centralized mode (S301). If at least one of the other application servers 30 is placed in pre-centralized mode (YES in S301), the selection unit 60 changes the mode to pre-centralized mode (S302). - If none of the
other application servers 30 is placed in pre-centralized mode (NO in S301), the selection unit 60 determines whether at least one of the other application servers 30 is placed in distributed mode (S303). If none of the other application servers 30 is placed in pre-centralized mode and at least one of the other application servers 30 is placed in distributed mode (YES in S303), the selection unit 60 changes the mode to distributed mode (S304). - Subsequently, if each of the
other application servers 30 is placed in neither pre-centralized mode nor distributed mode (NO in S303), the selection unit 60 determines whether at least one of the other application servers 30 is placed in pre-distributed mode (S305). If each of the other application servers 30 is placed in neither pre-centralized mode nor distributed mode and at least one of the other application servers 30 is placed in pre-distributed mode (YES in S305), the selection unit 60 changes the mode to pre-distributed mode (S306). - Subsequently, if all the
other application servers 30 are placed in none of pre-centralized mode, distributed mode, or pre-distributed mode (NO in S305), the selection unit 60 changes the mode to centralized read mode (S307). By determining the mode in the above-mentioned way, the application server 30 newly added to the information processing system 10 is also allowed to access the shared data while maintaining data integrity with the other application servers 30.
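The S301-S307 decision sequence amounts to a fixed priority order over the modes of the existing servers. A direct transcription; the function name and mode strings are hypothetical:

```python
def mode_for_new_server(other_modes):
    """Select the mode of a newly added application server from the modes
    of the existing servers, following the S301-S307 decision order."""
    if "pre-centralized" in other_modes:   # S301 -> S302
        return "pre-centralized"
    if "distributed" in other_modes:       # S303 -> S304
        return "distributed"
    if "pre-distributed" in other_modes:   # S305 -> S306
        return "pre-distributed"
    return "centralized-read"              # S307
```

For example, if any existing server is mid-handover into centralized operation (pre-centralized mode), the newcomer joins that handover; only when the remaining servers are all in the centralized modes does it default to centralized read mode.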
FIG. 11 shows an example hardware configuration of a computer 1900 according to this embodiment. The computer 1900 according to this embodiment includes a CPU peripheral unit, an input/output unit, and a legacy input/output unit. The CPU peripheral unit includes a CPU 2000, a RAM 2020, a graphic controller 2075, and a display 2080, which are coupled to one another via a host controller 2082. The input/output unit includes a communication interface 2030, a hard disk drive 2040, and a CD-ROM drive 2060, which are coupled to the host controller 2082 via an input/output controller 2084. The legacy input/output unit includes a ROM 2010, a flexible disk drive 2050, and an input/output chip 2070, which are coupled to the input/output controller 2084. - The
host controller 2082 couples the RAM 2020 to the CPU 2000 and the graphic controller 2075, which access the RAM 2020 at a high transfer rate. The CPU 2000 operates on the basis of programs stored in the ROM 2010 and RAM 2020 so as to control each component. The graphic controller 2075 acquires image data generated by the CPU 2000 or the like on a frame buffer provided in the RAM 2020 and displays the acquired image data on the display 2080. Alternatively, the graphic controller 2075 may include a frame buffer for storing image data generated by the CPU 2000 or the like. - The input/
output controller 2084 couples the host controller 2082 to relatively high-speed input/output devices: the communication interface 2030, the hard disk drive 2040, and the CD-ROM drive 2060. The communication interface 2030 is coupled to other apparatuses via a network. The hard disk drive 2040 stores programs and data to be used by the CPU 2000 of the computer 1900. The CD-ROM drive 2060 reads out a program or data from the CD-ROM 2095 and provides the read-out program or data to the hard disk drive 2040 via the RAM 2020. - Also coupled to the input/
output controller 2084 are the ROM 2010 and relatively low-speed input/output devices, such as the flexible disk drive 2050 and the input/output chip 2070. The ROM 2010 stores a boot program to be executed at a boot of the computer 1900, a program dependent on the hardware of the computer 1900, and the like. The flexible disk drive 2050 reads out a program or data from the flexible disk 2090 and provides the read-out program or data to the hard disk drive 2040 via the RAM 2020. The input/output chip 2070 couples the flexible disk drive 2050 to the input/output controller 2084, as well as couples various input/output devices to the input/output controller 2084, for example, via a parallel port, a serial port, a keyboard port, a mouse port, and the like. - A program stored in a recording medium such as the
flexible disk 2090, the CD-ROM 2095, or an integrated circuit (IC) card is installed into the hard disk drive 2040 via the RAM 2020 by the user and then executed by the CPU 2000. - A program installed into the
computer 1900 and intended to cause the computer 1900 to function as one application server 30 includes an execution module, an access control module, a cache module, a distributed control module, and a selection module. This program or these modules operate the CPU 2000 and the like in order to cause the computer 1900 to function as the execution unit 52, access control unit 54, cache unit 56, distributed control unit 58, and selection unit 60. - In other words, when such a program is read by the
computer 1900, the execution unit 52, access control unit 54, cache unit 56, distributed control unit 58, and selection unit 60 are realized as specific means in which software and the above-mentioned various hardware resources collaborate with each other. Also, by performing operations on information or processing information using these specific means in accordance with the use objective of the computer 1900 according to this embodiment, a unique application server 30 according to the use objective is constructed. - For example, if communications are performed between the
computer 1900 and an external apparatus or the like, the CPU 2000 executes a communication program loaded in the RAM 2020 and, on the basis of a process written in the communication program, instructs the communication interface 2030 to perform a communication process. Under the control of the CPU 2000, the communication interface 2030 reads out transmission data stored in a transmission buffer area or the like provided in a storage device such as the RAM 2020, hard disk drive 2040, flexible disk 2090, or CD-ROM 2095 and transmits the read-out transmission data to a network, or writes reception data received via a network into a reception buffer area or the like provided in a storage device. As described above, the communication interface 2030 may transmit transmission data to a storage device or receive reception data from a storage device using the DMA (direct memory access) method. Alternatively, the CPU 2000 may read out data from a storage device or the communication interface 2030, which is the transmission source, and may write the read-out data into the communication interface 2030 or a storage device, which is the transmission destination, so as to transfer transmission data or reception data. - Also, the
CPU 2000 loads all or the necessary files, databases, and the like stored in an external storage device such as the hard disk drive 2040, CD-ROM drive 2060 (CD-ROM 2095), or flexible disk drive 2050 (flexible disk 2090) into the RAM 2020 using DMA transfer or the like and performs various processes on the data loaded in the RAM 2020. Then, the CPU 2000 writes the resultant data back into the external storage device using DMA transfer or the like. In such a process, the RAM 2020 is considered as an apparatus for temporarily retaining the data stored in the external storage device. Therefore, in this embodiment, the RAM 2020, external storage devices, and the like are each referred to as a “memory,” a “storage unit,” a “storage device,” or the like. In this embodiment, various programs and various types of information such as data, tables, and databases are stored in such storage devices and are subjected to information processing. Incidentally, the CPU 2000 may read or write data from or into a cache memory holding a part of the RAM 2020. In this case, the cache memory also plays a part of the function of the RAM 2020. Therefore, in this embodiment, the cache memory is also referred to as the “RAM 2020,” a “memory,” or a “storage device” except for a case where the cache memory and the RAM 2020 or the like are shown independently. - Also, the
CPU 2000 performs various processes specified by a command string of a program, including the various operations, information processing, condition judgments, and retrieval or replacement of information described in this embodiment, on data read out from the RAM 2020 and then writes the resultant data back into the RAM 2020. For example, if the CPU 2000 performs a condition judgment, it judges whether a variable shown in this embodiment meets a corresponding condition, such as a condition that the variable be larger than, smaller than, equal to or larger than, equal to or smaller than, or equal to another variable or a constant. If such a condition is met (or unmet), execution branches to a different command string or calls a sub-routine. - Also, the
CPU 2000 is allowed to retrieve information included in a file, a database, or the like stored in a storage device. For example, if multiple entries in which the attribute value of a first attribute is associated with the attribute value of a second attribute are stored in a storage device, the CPU 2000 retrieves an entry in which the attribute value of the first attribute meets a specified condition from among the multiple entries and reads out the attribute value of the second attribute stored in that entry. Thus, the CPU 2000 obtains the attribute value of the second attribute associated with the first attribute meeting the specified condition. - The above-mentioned program or modules may be stored in an external recording medium. Among such recording media are the
flexible disk 2090 and CD-ROM 2095, as well as optical recording media such as a digital versatile disc (DVD) and a compact disc (CD), magneto-optical recording media such as a magneto-optical (MO) disk, tape media, and semiconductor memories such as an IC card. Also, a storage device, such as a hard disk or a random access memory (RAM), provided in a server system connected to a dedicated communication network or the Internet may be used as a recording medium, and the above-mentioned program stored in such a storage device may be provided to the computer 1900 via a network. - Note that the above-mentioned description of the present invention does not cover all features essential to the invention. The order of performance of the processes, such as operations, steps, and stages, of the apparatus(es), system(s), program(s), and/or method(s) described in the appended claims, specification, and accompanying drawings is not specifically restricted by expressions such as “perform an operation before performing another operation,” and these processes may be performed in an arbitrary order unless an output produced in a preceding process is used in a subsequent process. While the flow of the operations is described using terms such as “first,” “then,” and the like in the claims, specification, and drawings for convenience’ sake, such terms do not mean that the operations must always be performed in that order.
- Subcombinations of the features are also included in the invention.
- While the present invention has been described using the embodiment thereof, the technical scope of the invention is not limited to the description of the embodiment. It will be apparent for those skilled in the art that various changes and modifications can be made to the above-mentioned embodiment. Also, it will be apparent from the description of the appended claims that such changed or modified embodiments can also fall within the technical scope of the invention.
Claims (15)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/082,371 US9031923B2 (en) | 2008-10-06 | 2013-11-18 | System for accessing shared data using multiple application servers |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008259926 | 2008-10-06 | ||
JP2008-259926 | 2008-10-06 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/082,371 Continuation US9031923B2 (en) | 2008-10-06 | 2013-11-18 | System for accessing shared data using multiple application servers |
Publications (2)
Publication Number | Publication Date |
---|---|
US20100106697A1 true US20100106697A1 (en) | 2010-04-29 |
US8589438B2 US8589438B2 (en) | 2013-11-19 |
Family
ID=42100471
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/571,496 Expired - Fee Related US8589438B2 (en) | 2008-10-06 | 2009-10-01 | System for accessing shared data using multiple application servers |
US14/082,371 Active US9031923B2 (en) | 2008-10-06 | 2013-11-18 | System for accessing shared data using multiple application servers |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/082,371 Active US9031923B2 (en) | 2008-10-06 | 2013-11-18 | System for accessing shared data using multiple application servers |
Country Status (6)
Country | Link |
---|---|
US (2) | US8589438B2 (en) |
EP (1) | EP2352090B1 (en) |
JP (1) | JP5213077B2 (en) |
KR (1) | KR20110066940A (en) |
CN (1) | CN102165420B (en) |
WO (1) | WO2010041515A1 (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120066287A1 (en) * | 2010-09-11 | 2012-03-15 | Hajost Brian H | Mobile application deployment for distributed computing environments |
US20130042244A1 (en) * | 2010-04-23 | 2013-02-14 | Zte Corporation | Method and system for implementing internet of things service |
US8484649B2 (en) | 2011-01-05 | 2013-07-09 | International Business Machines Corporation | Amortizing costs of shared scans |
US20140068734A1 (en) * | 2011-05-12 | 2014-03-06 | International Business Machines Corporation | Managing Access to a Shared Resource Using Client Access Credentials |
US20140280347A1 (en) * | 2013-03-14 | 2014-09-18 | Konica Minolta Laboratory U.S.A., Inc. | Managing Digital Files with Shared Locks |
US8930323B2 (en) | 2011-09-30 | 2015-01-06 | International Business Machines Corporation | Transaction processing system, method, and program |
US11032361B1 (en) * | 2020-07-14 | 2021-06-08 | Coupang Corp. | Systems and methods of balancing network load for ultra high server availability |
US11176121B2 (en) * | 2019-05-28 | 2021-11-16 | International Business Machines Corporation | Global transaction serialization |
US20230004545A1 (en) * | 2018-03-13 | 2023-01-05 | Google Llc | Including Transactional Commit Timestamps In The Primary Keys Of Relational Databases |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010041515A1 (en) | 2008-10-06 | 2010-04-15 | インターナショナル・ビジネス・マシーンズ・コーポレーション | System accessing shared data by a plurality of application servers |
GB2503266A (en) * | 2012-06-21 | 2013-12-25 | Ibm | Sharing aggregated cache hit and miss data in a storage area network |
KR101645163B1 (en) * | 2014-11-14 | 2016-08-03 | 주식회사 인프라웨어 | Method for synchronizing database in distributed system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030225884A1 (en) * | 2002-05-31 | 2003-12-04 | Hayden Mark G. | Distributed network storage system with virtualization |
US20050289143A1 (en) * | 2004-06-23 | 2005-12-29 | Exanet Ltd. | Method for managing lock resources in a distributed storage system |
US20070088762A1 (en) * | 2005-05-25 | 2007-04-19 | Harris Steven T | Clustering server providing virtual machine data sharing |
US20070185872A1 (en) * | 2006-02-03 | 2007-08-09 | Eugene Ho | Adaptive region locking |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07120333B2 (en) * | 1985-12-16 | 1995-12-20 | 株式会社日立製作所 | Shared data management method |
JPH08202567A (en) * | 1995-01-25 | 1996-08-09 | Hitachi Ltd | Inter-system lock processing method |
US6516351B2 (en) * | 1997-12-05 | 2003-02-04 | Network Appliance, Inc. | Enforcing uniform file-locking for diverse file-locking protocols |
US7200623B2 (en) * | 1998-11-24 | 2007-04-03 | Oracle International Corp. | Methods to perform disk writes in a distributed shared disk system needing consistency across failures |
AU2002341784A1 (en) * | 2001-09-21 | 2003-04-01 | Polyserve, Inc. | A system and method for efficient lock recovery |
US7406473B1 (en) * | 2002-01-30 | 2008-07-29 | Red Hat, Inc. | Distributed file system using disk servers, lock servers and file servers |
US7240058B2 (en) * | 2002-03-01 | 2007-07-03 | Sun Microsystems, Inc. | Lock mechanism for a distributed data system |
US20080243847A1 (en) * | 2007-04-02 | 2008-10-02 | Microsoft Corporation | Separating central locking services from distributed data fulfillment services in a storage system |
WO2010041515A1 (en) | 2008-10-06 | 2010-04-15 | インターナショナル・ビジネス・マシーンズ・コーポレーション | System accessing shared data by a plurality of application servers |
2009
- 2009-08-13 WO PCT/JP2009/064316 patent/WO2010041515A1/en active Application Filing
- 2009-08-13 KR KR20117008669 patent/KR20110066940A/en not_active Application Discontinuation
- 2009-08-13 JP JP2010532855 patent/JP5213077B2/en active Active
- 2009-08-13 EP EP09819051.5 patent/EP2352090B1/en active Active
- 2009-08-13 CN CN200980138187.9 patent/CN102165420B/en active Active
- 2009-10-01 US US12/571,496 patent/US8589438B2/en not_active Expired - Fee Related

2013
- 2013-11-18 US US14/082,371 patent/US9031923B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030225884A1 (en) * | 2002-05-31 | 2003-12-04 | Hayden Mark G. | Distributed network storage system with virtualization |
US20050289143A1 (en) * | 2004-06-23 | 2005-12-29 | Exanet Ltd. | Method for managing lock resources in a distributed storage system |
US20090094243A1 (en) * | 2004-06-23 | 2009-04-09 | Exanet Ltd. | Method for managing lock resources in a distributed storage system |
US20070088762A1 (en) * | 2005-05-25 | 2007-04-19 | Harris Steven T | Clustering server providing virtual machine data sharing |
US20070185872A1 (en) * | 2006-02-03 | 2007-08-09 | Eugene Ho | Adaptive region locking |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9071657B2 (en) * | 2010-04-23 | 2015-06-30 | Zte Corporation | Method and system for implementing internet of things service |
US20130042244A1 (en) * | 2010-04-23 | 2013-02-14 | Zte Corporation | Method and system for implementing internet of things service |
US8620998B2 (en) * | 2010-09-11 | 2013-12-31 | Steelcloud, Inc. | Mobile application deployment for distributed computing environments |
US20120066287A1 (en) * | 2010-09-11 | 2012-03-15 | Hajost Brian H | Mobile application deployment for distributed computing environments |
US8484649B2 (en) | 2011-01-05 | 2013-07-09 | International Business Machines Corporation | Amortizing costs of shared scans |
US20140068734A1 (en) * | 2011-05-12 | 2014-03-06 | International Business Machines Corporation | Managing Access to a Shared Resource Using Client Access Credentials |
US9088569B2 (en) * | 2011-05-12 | 2015-07-21 | International Business Machines Corporation | Managing access to a shared resource using client access credentials |
US8930323B2 (en) | 2011-09-30 | 2015-01-06 | International Business Machines Corporation | Transaction processing system, method, and program |
US20140280347A1 (en) * | 2013-03-14 | 2014-09-18 | Konica Minolta Laboratory U.S.A., Inc. | Managing Digital Files with Shared Locks |
US20230004545A1 (en) * | 2018-03-13 | 2023-01-05 | Google Llc | Including Transactional Commit Timestamps In The Primary Keys Of Relational Databases |
US11899649B2 (en) * | 2018-03-13 | 2024-02-13 | Google Llc | Including transactional commit timestamps in the primary keys of relational databases |
US11176121B2 (en) * | 2019-05-28 | 2021-11-16 | International Business Machines Corporation | Global transaction serialization |
US11032361B1 (en) * | 2020-07-14 | 2021-06-08 | Coupang Corp. | Systems and methods of balancing network load for ultra high server availability |
US20220021730A1 (en) * | 2020-07-14 | 2022-01-20 | Coupang Corp. | Systems and methods of balancing network load for ultra high server availability |
US11627181B2 (en) * | 2020-07-14 | 2023-04-11 | Coupang Corp. | Systems and methods of balancing network load for ultra high server availability |
Also Published As
Publication number | Publication date |
---|---|
EP2352090B1 (en) | 2019-09-25 |
US20140082127A1 (en) | 2014-03-20 |
EP2352090A1 (en) | 2011-08-03 |
US8589438B2 (en) | 2013-11-19 |
EP2352090A4 (en) | 2015-05-06 |
CN102165420A (en) | 2011-08-24 |
JP5213077B2 (en) | 2013-06-19 |
WO2010041515A1 (en) | 2010-04-15 |
JPWO2010041515A1 (en) | 2012-03-08 |
KR20110066940A (en) | 2011-06-17 |
CN102165420B (en) | 2014-07-16 |
US9031923B2 (en) | 2015-05-12 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION,NEW YO Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ENOKI, MIKI;HORII, HIROSHI;ONODERA, TAMIYA;AND OTHERS;SIGNING DATES FROM 20090925 TO 20090929;REEL/FRAME:023763/0578 Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ENOKI, MIKI;HORII, HIROSHI;ONODERA, TAMIYA;AND OTHERS;SIGNING DATES FROM 20090925 TO 20090929;REEL/FRAME:023763/0578 |
|
REMI | Maintenance fee reminder mailed | ||
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.) |
|
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20171119 |