CA2197324A1 - Scalable distributed computing environment - Google Patents
Scalable distributed computing environment
- Publication number
- CA2197324A1
- Authority
- CA
- Canada
- Prior art keywords
- node
- nodes
- parent
- child
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/02—Topology update or discovery
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/26—Route discovery packet
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/40—Network security protocols
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/50—Indexing scheme relating to G06F9/50
- G06F2209/503—Resource availability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/24—Negotiation of communication capabilities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/30—Definitions, standards or architectural aspects of layered protocol stacks
- H04L69/32—Architecture of open systems interconnection [OSI] 7-layer type protocol stacks, e.g. the interfaces between the data link level and the physical level
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04Q—SELECTING
- H04Q2213/00—Indexing scheme relating to selecting arrangements in general and for multiplex systems
- H04Q2213/1305—Software aspects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04Q—SELECTING
- H04Q2213/00—Indexing scheme relating to selecting arrangements in general and for multiplex systems
- H04Q2213/13107—Control equipment for a part of the connection, distributed control, co-processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04Q—SELECTING
- H04Q2213/00—Indexing scheme relating to selecting arrangements in general and for multiplex systems
- H04Q2213/13141—Hunting for free outlet, circuit or channel
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04Q—SELECTING
- H04Q2213/00—Indexing scheme relating to selecting arrangements in general and for multiplex systems
- H04Q2213/13204—Protocols
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04Q—SELECTING
- H04Q2213/00—Indexing scheme relating to selecting arrangements in general and for multiplex systems
- H04Q2213/13353—Routing table, map memory
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04Q—SELECTING
- H04Q2213/00—Indexing scheme relating to selecting arrangements in general and for multiplex systems
- H04Q2213/13383—Hierarchy of switches, main and subexchange, e.g. satellite exchange
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04Q—SELECTING
- H04Q2213/00—Indexing scheme relating to selecting arrangements in general and for multiplex systems
- H04Q2213/13388—Saturation signaling systems
Abstract
The present invention relates to distributed computing systems and is more particularly directed to an architecture and implementation of a scalable distributed computing environment which facilitates communication between independently operating nodes on a single network or on interconnected networks, which may be either homogeneous or heterogeneous. The present invention is a dynamic, symmetrical, distributed, real-time, peer-to-peer system comprised of an arbitrary number of identical (semantically equivalent) instances, i.e., kernels, that together form a logical tree. The kernels exhibit unified and consistent behavior at run time through a self-configuring and self-maintaining logical view of the network. Each kernel resides at a network node that has one or more resources associated with it. The kernels dynamically locate one another in real-time to form and maintain a hierarchical structure that supports a virtually unlimited number of independently running kernels. The system maintains its logical view of the network and user-developed programmatic resources regardless of the number and combinations of transport protocols and underlying mix of physical topologies.
The system's communications services utilize a dynamic context bridge to communicate between end nodes that may not share a common transport protocol stack, thereby allowing applications residing on different stacks to communicate with one another automatically and transparently.
Description
WO 96/07257 PCT/US95/10605

Scalable Distributed Computing Environment
Background of the Invention

The present invention relates to distributed computing systems and is more particularly directed to an architecture and implementation of a scalable distributed computing environment which facilitates communication between independently operating nodes on a single network or on interconnected networks, which may be either homogeneous or heterogeneous.

In today's business environment, corporate structures are being continually reshaped due to the dynamics of mergers and acquisitions and the need for real-time communication with customers, suppliers and financial institutions. In addition, immediate access to information and the need to manipulate that information quickly have become critical in maintaining and increasing competitive advantage. This requires that corporate data and the computer programs which manipulate that data be deployed in a fundamentally new way; in a distributed rather than a centralized, monolithic manner.

With distributed computing, programs and data are logically positioned so that they can be processed as near as possible to the users that interact with them. In theory, this allows the corporation to operate more reliably and efficiently by reducing communications overhead and exploiting the combined processing power of personal, group, and departmental computing resources. By distributing workload over many computers, information processing resources can be optimized for a given individual, work group or purpose. This approach allows data and processes to be distributed and replicated so that performance and reliability can be more easily maintained as the demands on the system increase. The characteristics of increased granularity and scalability also provide important benefits relating to software reusability, i.e., the same component may be used in several different applications, thus reducing both development and maintenance time and costs.

Because of these demands, there is a movement toward enterprise-wide virtual computing in which the entire resources of the network appear to the user to be locally resident at his or her desktop computer or terminal. The traditional monolithic centralized corporate information processing model is yielding to a distributed, fine-grained approach. This transition to virtual, dynamic enterprise computing requires that mission critical core systems be re-engineered using a distributed architecture in which localized computing resources (program elements and data) are seamlessly interlinked by virtual networks.

However, in today's corporate information systems, individual networks typically exist in heterogeneous environments that do not interoperate. Businesses are faced with the task of connecting incompatible systems while supporting an ever increasing number of disparate operating systems and networking protocols over a wide geographic area. Corporate mergers and acquisitions are again on the rise, and the need to integrate installed heterogeneous networks into a single enterprise-wide network, not once but multiple times, is needed. Further, enterprises have become global entities and their information systems must now function over multiple time zones, requiring those systems to be continuously available. Moreover, as corporations themselves are reorganized, so are the information systems that support their business operations. Thus, the corporate computing environment must be "open," i.e., it must be flexible enough to easily migrate to new standards while maintaining the integrity of and access to its existing "legacy" systems and data. Legacy systems typically rely on the use of static tables to keep track of networked resources. Such systems do not support dynamic recovery and are not easily scalable to enterprise-wide deployments
because of the extremely high overhead that would be required to maintain these tables in a constantly changing environment.

In existing systems, in order for one resource connected to the network to discover the existence of another resource, both must be "alive." As the total number of resources connected to the network expands, it becomes vitally important to have a mechanism for dynamic resource discovery whereby the network automatically is made aware of new resources as they become available.
Existing systems are also limited by the availability of a fixed number of roles, or hierarchical levels, that can be assumed by any node, e.g., machine, area, group, domain, network, etc. This limitation presents significant problems when merging or integrating two or more existing networks having different hierarchical structures. In addition, in prior art systems, if a node assumes multiple roles, the relationship between those roles is prescribed. That is, in order to function at level 1 (e.g., machine) and level 3 (e.g., group manager), the node must also assume the level 2 function (e.g., area manager). This limitation can severely degrade system performance and recovery.
Prior attempts to address the problems associated with establishing robust, efficient enterprise-wide computing environments, such as real-time messaging, message queuing, remote procedure calls, and publish and subscribe, represent partial solutions at best. Because true distributed computing requires peer-to-peer communications (since master process failure necessarily leads to failure of slave processes), client/server based approaches to realizing the goal of enterprise computing represent suboptimal solutions. Existing peer-to-peer systems utilizing static tables do not allow dynamic recovery and present serious problems of scalability and performance.
Summary of the Invention

The present invention is a dynamic, symmetrical, distributed, real-time, peer-to-peer system comprised of an arbitrary number of identical (semantically equivalent) instances, i.e., kernels, that together form a logical tree. The kernels exhibit unified and consistent behavior at run time through a self-configuring and self-maintaining logical view of the network. Each kernel resides at a network node that has one or more resources associated with it. The kernels dynamically locate one another in real-time to form and maintain a hierarchical structure that supports a virtually unlimited number of independently running kernels. The system maintains its logical view of the network and user-developed programmatic resources regardless of the number and combinations of transport protocols and underlying mix of physical topologies. The system's communications services utilize a dynamic context bridge to communicate between end nodes that may not share a common transport protocol stack, thereby allowing applications residing on different stacks to communicate with one another automatically and transparently.

The system is designed to support all forms of digitized communication, including voice, sound, still and moving images, mass file transfer, traditional transaction processing and any-to-any communication such as "groupware" applications would require. The system is also designed to operate over any type of networking protocol and medium, including ISDN, X.25, TCP/IP, SNA, APPC, ATM, etc. In all cases, the system delivers a high percentage, typically 60-95%, of the theoretical transmission capacity, i.e., bandwidth, of the underlying medium.
As new resources join (or rejoin) the network, the kernel residing at each node, and thus each resource connected to that node, automatically and transparently becomes accessible to all applications using the system. The role(s) assumed by any node within the managerial hierarchy employed (e.g., area manager, domain manager, network manager, etc.) is arbitrary, i.e., any node can assume one or multiple roles within the hierarchy, and assuming one role neither requires nor precludes assumption of any other role. Further, the roles dynamically change based on the configuration of the network, i.e., as one or more nodes enter or leave the network. Thus, the individual kernels dynamically locate one another and negotiate the roles played by the associated nodes in managing the network hierarchy without regard to their physical location. In addition, the number of possible roles or levels that may be assumed by any node is not limited and may be selected based on the particular requirements of the networking environment.
Brief Description of the Drawings

These and other features and advantages of the present invention will be better and more completely understood by referring to the following detailed description of preferred embodiments in conjunction with the appended sheets of drawings, of which:
Fig. 1 is a drawing showing a distributed computing system in accordance with the present invention.
Fig. 2 is a detailed block diagram of one of the nodes in the system of Fig. 1.
Fig. 3 is a block diagram showing the structure of a kernel in accordance with the present invention.
Fig. 4 is a flow chart of the PIPES logical network (PLN) of the present invention.
Fig. 5 is a flow chart of a child login procedure in accordance with the present invention.
Fig. 6 is a flow chart of a parent login procedure in accordance with the present invention.
Fig. 7 is a diagram showing the login process between different nodes in accordance with the present invention.
Fig. 8 is a flow chart of a roll call procedure in accordance with the present invention.
Fig. 9 is a diagram showing the roll call between different nodes in accordance with the present invention.
Fig. 10 is a flow chart of a child monitor procedure in accordance with the present invention.
Fig. 11 is a flow chart of a parent monitor procedure in accordance with the present invention.
Fig. 12 is a diagram showing the "heartbeat" monitor between different nodes in accordance with the present invention.
Fig. 13 is a flow chart of an election process in accordance with the present invention.
Fig. 14 is a diagram showing the election between different nodes in accordance with the present invention.
Fig. 15 is a flow chart of a logout process in accordance with the present invention.
Fig. 16 is a diagram showing the logout process between different nodes in accordance with the present invention.
Fig. 17 is a diagram showing activities relating to a resource of the present invention.
Fig. 18 is a flow chart of an "Add Resource" process in accordance with the present invention.
Fig. 19 is a flow chart of a "Find Resource" process in accordance with the present invention.
Fig. 20 is a flow chart of a "Find Resource" process at an area manager node of the present invention.
Fig. 21 is a flow chart of a "Find Resource" process in accordance with the present invention at a level above area manager.
Fig. 22 is a flow chart of a "Persistent Find" process at an area manager node of the present invention.
Fig. 23 is a flow chart of a "Persistent Find" process in accordance with the present invention at a level above area manager.
Fig. 24 is a flow chart of a "Clean Persistent Find" process at an area manager node of the present invention.
Fig. 25 is a flow chart of a "Clean Persistent Find" process in accordance with the present invention at a level above area manager.
Fig. 26 is a flow chart of a "Resource Recovery" process in accordance with the present invention when an area manager goes down.
Fig. 27 is a flow chart of a "Resource Recovery" process in accordance with the present invention when another managerial node goes down.
Fig. 28 is a flow chart of a "Remove Resource" process in accordance with the present invention.
Fig. 29A shows the architecture of a context bridge of the present invention.
Fig. 29B is an example illustrating the use of context bridges for communication between different protocols.
Fig. 30 is a flow chart showing a context bridge routing process in accordance with the present invention.
Fig. 31 is a flow chart of a "Route Discovery" process in accordance with the present invention.
Fig. 32 is a flow chart of a "Route Validation" process in accordance with the present invention.
Fig. 33 is a flow chart of a "Route Advertisement" process in accordance with the present invention.
Fig. 34 is a flow chart showing the steps performed in changing the number of levels in the PIPES logical network of the present invention.
Detailed Description of the Invention

FIG. 1 shows a distributed computing system 100 in accordance with the present invention. The implementation of system 100 by the assignee of the present application is referred to as the PIPES Platform ("PIPES"). In system 100, two nodes, Node 1 (shown as block 1) and Node 2 (shown as block 14), communicate through a physical network connection (shown as line 27). It should be obvious to a person skilled in the art that the number of nodes connected to network 27 is not limited to two.
The structures of the nodes are substantially the same. Accordingly, only one of the nodes, such as Node 1, is described in detail. Three applications, App. A (shown as block 2), App. B (shown as block 3), and App. C (shown as block 4), run on Node 1. These applications are typically written by application developers to run on PIPES. The PIPES software includes a PIPES Application Programming Interface ("PAPI") (shown as block 6) for communicating with Apps. A-C. PAPI 6 sends messages to a single PIPES Kernel (shown as block 9) executing at Node 1 through Inter-Process Communication ("IPC") function calls (shown as block 7). Kernel 9 sends and receives messages over network 27 through transport device drivers TD1 (shown as block 11), TD2 (shown as block 12), and TD3 (shown as block 13).
Similarly, Node 2 has three applications running on it, App. X (shown as block 15), App. Y (shown as block 16), and App. Z (shown as block 17), which communicate with a single PIPES Kernel (shown as block 21) running at Node 2 through PAPI (shown as block 18) and IPC (shown as block 19). Node 2 supports three different network protocols, and thus contains three transport drivers TD3 (shown as block 24), TD4 (shown as block 25), and TD5 (shown as block 26).
For example, if App. A at Node 1 needs to communicate with App. Z at Node 2, a message travels from App. A through PAPI 6, IPC 7, and kernel 9. Kernel 9 uses its transport driver TD3 to send the message over network 27 to transport driver TD3 at Node 2. The message is then passed to kernel 21 at Node 2, IPC 19, PAPI 18, and finally to App. Z.
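The hop-by-hop path just described (application to PAPI to IPC to kernel to transport driver, across the network, and back up the stack on the peer node) can be sketched in a few lines. Everything below is an illustrative model, not code from the patent: the `Node` class, the per-application inbound queues, and the rule of picking the lowest-numbered transport driver shared by both nodes are all assumptions made for the example.

```python
# Minimal sketch of the layered message path: the kernel must pick a
# transport driver that both endpoints support (here, TD3).
class Node:
    def __init__(self, name, transports, apps):
        self.name = name
        self.transports = set(transports)      # e.g. {"TD1", "TD2", "TD3"}
        self.apps = {app: [] for app in apps}  # per-application inbound queues

def send(src: Node, dst: Node, src_app: str, dst_app: str, payload: str):
    """Deliver payload and record every layer the message traverses."""
    common = src.transports & dst.transports
    if not common:
        raise RuntimeError("no shared transport; a context bridge would be needed")
    td = sorted(common)[0]  # assumed rule: lowest-numbered shared driver
    hops = [f"{src_app}@{src.name}", "PAPI", "IPC", "kernel", td,
            "network", td, "kernel", "IPC", "PAPI", f"{dst_app}@{dst.name}"]
    dst.apps[dst_app].append(payload)
    return hops

node1 = Node("N1", {"TD1", "TD2", "TD3"}, ["A", "B", "C"])
node2 = Node("N2", {"TD3", "TD4", "TD5"}, ["X", "Y", "Z"])
path = send(node1, node2, "A", "Z", "hello")
```

When the two nodes share no transport at all, the sketch raises instead of delivering; the context bridge described later in this document is the mechanism the patent proposes for exactly that case.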
PIPES also provides generic services used by all of its component parts.
Network Management Services (shown as blocks 10 and 20) provides access for a PIPES Network Management Agent (not shown) to monitor the kernels' network and application counters, attributes, and statistics. Generic Services (shown as blocks 8 and 22) provide a common interface for kernels 9 and 21 to operating system services, including hashing, btrees, address manipulation, buffer management, queue management, logging, timers, and task scheduling. System Dependent Services (shown as blocks 5 and 23) provides services specific to the operating system, platform, and transports on the nodes. These services are used by Generic Services (shown as blocks 8 and 22) to realize a generic service within a given operating system or platform environment.

FIG. 2 shows a more detailed block diagram of the PIPES internal architecture
within Node 1 of system 100. The PIPES architecture is divided into three different layers: the Interface Layer (shown as block 28), the Kernel Layer (shown as block 29), and the Transport Layer (shown as block 30). Interface Layer 28 handles queries from and responses to the applications that are accessing the PIPES environment through PAPI 6. Interface Layer 28 is embodied in a library which is linked to each application (e.g., Apps. A-C) which accesses kernel 9. Kernel Layer 29 provides communications, resource and session management services to applications that are accessing PIPES, allowing communication between end-nodes that may not share a transport protocol stack. Transport Layer 30 consists of the transport device drivers 11, 12, and 13 for the network protocols supported by Node 1. Each transport driver provides access from kernel 9 to a network transport protocol provided by other vendors, such as TCP/IP, SNA, IPX, or DLC. Transport Layer 30 handles all transport-specific API issues on a given platform for a given transport discipline.
FIG. 3 illustrates the internal architecture of kernel 9. Kernel 9 contains an API Interface (shown as block 31) which is the interface to PAPI 6 of Fig. 2. API Interface 31 handles requests from Interface Layer 28 and returns responses to those requests. It recognizes an application's priority and queues an application's messages based on this priority. API Interface 31 also handles responses from the Resource Layer (shown as block 32) and Session Services (shown as block 35), and routes those responses to the appropriate application.

Resource Layer 32 registers an application's resources within a PIPES Logical Network ("PLN") layer (shown as block 33), provides the ability to find other PAPI
resources within PIPES, and handles the deregistration of resources within the network. In addition, Resource Layer 32 implements a "Persistent Find" capability which enables the locating of resources that have not yet been registered in PLN 33.
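The register/find/deregister behavior attributed to Resource Layer 32, including a "Persistent Find" that matches a query against a resource registered later, can be sketched as a small directory. The class name, dictionary layout, and waiter-list mechanics below are invented for illustration and are not the patented implementation.

```python
# Toy resource directory: ordinary finds answer from the registry, while a
# persistent find records the requester so a later registration can reply.
class ResourceLayer:
    def __init__(self):
        self.registry = {}       # resource name -> owning node
        self.pending_finds = {}  # resource name -> nodes awaiting it

    def add_resource(self, name, node):
        """Register a resource; return requesters whose persistent finds
        this registration satisfies."""
        self.registry[name] = node
        return self.pending_finds.pop(name, [])

    def find(self, name, requester, persistent=False):
        """Return the owning node, or None; optionally wait persistently."""
        if name in self.registry:
            return self.registry[name]
        if persistent:
            self.pending_finds.setdefault(name, []).append(requester)
        return None

rl = ResourceLayer()
rl.find("printer", "N7", persistent=True)   # not yet registered; recorded
waiters = rl.add_resource("printer", "N2")  # registration satisfies the find
```

The "Clean Persistent Find" process of Figs. 24 and 25 would correspond, in this sketch, to removing stale entries from `pending_finds`.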
PLN 33 maintains knowledge of the logical, hierarchical organization of the nodes within PIPES to enforce a dynamic management framework. PLN 33 handles the election of managers and the transparent reconstruction of management hierarchies as a result of physical network faults. PLN 33 employs a system of "heartbeat" messages which is used to monitor the status of nodes within the network and identify network failures. This layer also handles requests and returns responses to Resource Layer 32 and an Acknowledged Datagram Service ("AKDG", shown as block 34).
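A heartbeat scheme of the kind PLN 33 uses to detect failed nodes can be modeled as a table of last-heard timestamps. The interval and missed-beat threshold below are invented values; the patent does not specify them.

```python
# Illustrative heartbeat monitor: a node that has been silent for more than
# MISSED_LIMIT intervals is reported as failed. Both constants are assumed.
HEARTBEAT_INTERVAL = 5.0  # seconds between expected beats (assumption)
MISSED_LIMIT = 3          # missed beats tolerated before failure (assumption)

class HeartbeatMonitor:
    def __init__(self):
        self.last_heard = {}  # node name -> timestamp of last heartbeat

    def beat(self, node: str, now: float):
        self.last_heard[node] = now

    def failed_nodes(self, now: float):
        limit = HEARTBEAT_INTERVAL * MISSED_LIMIT
        return sorted(n for n, t in self.last_heard.items() if now - t > limit)

mon = HeartbeatMonitor()
mon.beat("N1", 0.0)
mon.beat("N2", 0.0)
mon.beat("N1", 14.0)  # N2 falls silent after its first beat
```

In the document's scheme a detected failure would then trigger the election and resource-recovery processes of Figs. 13, 26, and 27; this sketch only covers detection.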
AKDG 34 provides best-effort datagram service with notification on failures for users. AKDG 34 handles the sending and receiving of messages through Connectionless Messaging Service (CLMS) 36 and Session Services 35.
Session Services 35 allocates, manages, and deallocates sessions for users. Session management includes sending and receiving data sent by the user in sequence, ensuring secure use of the session, and maintaining the message semantics over the Connection Oriented Messaging Service (COMS) stream protocol. Session Services 35 also multicasts PAPI application messages over sessions owned by the PAPI application. Session Services 35 interacts with COMS 37 to satisfy requests from AKDG 34 and API Interface 31.
CLMS 36 transfers data without a guarantee of delivery. It also interacts with Context Bridge layer 38 to satisfy the requests from AKDG 34.
COMS 37 manages connections opened by Session Services 35. COMS 37 provides connection-oriented data transfer, including the segmentation and reassembly of messages for users. COMS 37 modifies message size based on maximum message sizes of hops between connection endpoints.
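COMS-style segmentation and reassembly, where the sender honors the smallest maximum message size among the hops between connection endpoints, reduces to two functions. The function names and byte-slicing representation are illustrative only.

```python
# Sketch: split a message to fit the tightest hop limit on the path, then
# reassemble on the receiving side.
def segment(payload: bytes, hop_max_sizes):
    """Split payload into chunks no larger than the smallest hop limit."""
    mtu = min(hop_max_sizes)  # the path limit is the tightest hop limit
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)]

def reassemble(segments):
    """Recover the original payload from its in-order segments."""
    return b"".join(segments)

segs = segment(b"0123456789", [4, 8, 6])  # hop limits along the path
```

A real protocol would also carry sequence numbers so out-of-order segments can be reordered; the sketch assumes in-order delivery for brevity.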
Context Bridge layer 38 insulates PAPI applications from the underlying networks by performing dynamic transport protocol mapping over multiple network transports, thus enabling data transfer even if the end-to-end protocols are different.
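The context bridge idea, reaching an end node with which no transport protocol is shared by relaying through nodes that speak more than one protocol, is essentially a path search over protocol overlap. The breadth-first sketch below, with invented node and protocol assignments, shows one way such a route could be discovered; it is not the patent's route-discovery algorithm.

```python
from collections import deque

# Find a chain of nodes where each adjacent pair shares at least one
# transport protocol, so a message can be re-mapped hop by hop.
def find_route(stacks: dict, src: str, dst: str):
    """stacks maps node -> set of protocols; returns a node path or None."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt, protos in stacks.items():
            if nxt not in seen and stacks[node] & protos:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

stacks = {"A": {"TCP/IP"}, "B": {"TCP/IP", "SNA"}, "C": {"SNA"}}
route = find_route(stacks, "A", "C")  # A and C share no protocol stack
```

Here node B, which runs both TCP/IP and SNA, plays the bridging role: it can receive on one stack and forward on the other, which is the behavior Figs. 29A-30 attribute to a context bridge node.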
The Transport Driver Interface (shown as block 39) handles communication between transport-specific drivers and the CLMS 36 and COMS 37 layers. This interface contains generic common code for all transport drivers.
PLN Layer

PLN 33 is a hierarchical structure imposed by the system administrator on a set of machines executing kernels. These kernels unify at run time to form a logical network with dynamically elected managers that manage a given level of the hierarchy. The PLN name space is divided into five different levels: normal, area, group, domain, and network. All kernels at startup have normal privileges. They assume a managerial role depending on their position in the network and such real-time factors as the number of roles already assumed. Thus, management functions will be distributed evenly among the member kernels, leading to better performance and faster recovery. It should be appreciated that the number of levels is not limited to five, and any number of levels can be implemented in the system, as explained below.
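The load-spreading rule described above, where a kernel's willingness to take a managerial role depends in part on how many roles it already holds, can be illustrated with a toy selector that hands a new role to the eligible kernel currently holding the fewest roles. The tie-breaking rule and data layout are assumptions for the example, not the patented election algorithm.

```python
# The five default levels named in the text; managerial roles are drawn
# from the levels above "normal".
LEVELS = ["normal", "area", "group", "domain", "network"]

def assign_role(kernels: dict, level: str):
    """kernels maps kernel name -> set of roles already assumed.
    Pick the least-loaded kernel (alphabetical tie-break, assumed) and
    give it the new role."""
    candidate = min(sorted(kernels), key=lambda k: len(kernels[k]))
    kernels[candidate].add(level)
    return candidate

kernels = {"k1": {"area"}, "k2": set(), "k3": {"area", "group"}}
chosen = assign_role(kernels, "domain")
```

Because any kernel may hold several roles and no role presupposes another, repeated assignments simply keep picking whichever kernel is least loaded at that moment, which is the even-distribution property the text claims.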
In PLN 33, the primary roles played by the various managers between the Network Manager and Area Manager (e.g., Domain Manager and Group Manager) are essentially the same: to maintain communication with its parent and children, and to route Resource Layer 32 traffic. In addition to these functions, any manager between the Network Manager and Area Manager (e.g., Domain or Group) also provides persistent find caching services as described below in connection with Figs. 22 and 23. The Area Manager, in addition to the functions described above, provides caching services for resources advertised by its children, including all of the kernels in the Area Manager's name space. Therefore, the Area Manager is crucial to the orderly function of PLN 33, which is built from the ground up by filling the Area Manager role before any other role in the hierarchy. By default, any kernel can become an Area Manager.
As shown in FIG. 4, the PLN building and maintenance algorithm comprises five main processes: Login (shown as block 100), Role Call (shown as block 200), Monitor (shown as block 300), Election (shown as block 400), and Logout (shown as block 500). In this description, the following terms are used in order to allow for the appropriate abstraction. The number of levels in PLN 33 is defined by MinLevel and MaxLevel. The kernels that have normal privileges are configured at MinLevel and are not managers. On the other hand, a kernel that is the Network Manager is configured at MaxLevel and has the potential to become the Network Root. A further configuration parameter imposes a ceiling on the highest level at which the kernel can be a manager. A kernel at level n is termed to be a child of its parent kernel at level n+1 provided that the two kernels have the same name above level n.
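The parent/child naming rule just stated (a kernel at level n is a child of a kernel at level n+1 when their names agree above level n) can be modeled by treating each kernel's logical name as a path from the network root downward. The list-of-components representation is an assumption made for the example.

```python
# A kernel's logical name as a path from the network root down, e.g.
# ["net1", "dom1", "grp1", "area1", "n7"]. Under this encoding, "same name
# above level n" means the child's path extends the parent's path by one.
def is_child(child_path, parent_path):
    """True when child sits exactly one level below parent with a
    matching name at every higher level."""
    return (len(child_path) == len(parent_path) + 1
            and child_path[:len(parent_path)] == parent_path)
```

The encoding also makes the level arithmetic concrete: a path of length one is the kernel at MaxLevel, and each added component descends one level toward MinLevel.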
19 Login 21 E71GS. 5 and 6 depict the Login proccdure executed at the cbild and parent nodes 22 in PLN 33. Login is a process by which a child kemel locates and registers with a 23 parent kemel. FIG. 7 illustrates the messages passed between kemels during a 24 hJ~lh~al execution of the Login process by which a kemel in node N7 (shown as circle 37 and referred to as Icemel N7) runs the Login process to enter the network.
27 A kemel enters the network by rumling the Login process to locate its parent 28 kemel. The child kemel first enters a wait period (step 101) during which the child P~T
~ 219732~ ~p~ al N~V ~96 listens for other login broadcasts on the network (step 102). If a login broadcast is 2 received during the wait period (step 103), the child kernel reads the message. The 3 information in the message is sufficient for the child to ascertain the identity of its 4 parent and siblings. If the originator of the message is a sibling (step 104)~ the child 5 kernel modifies its Login wait period interval (step 105) in order to prevent login 6 broadcasts from inundating the network. If the originator of the message is a parent 7 (step 106). the child kernel sends a login request to the parent (step 107) and waits for 8 an acknowledgment. If a login broadcast is not received, the child kernel continues to 9 listen for a login broadcast until the end of the wait period (step 108). At the end of 10 the wait period~ the child kernel sends a login broadcast on the network (step 109).
ll 12 In FIG. 7, kernel N7 is attempting to login to the PIPES network by sending a 13 login broadcast message (~ a~llt~d by dotted line a) to a kernel in a node Nl14 (represented by circle 41 and referred to as kernel Nl), a kernel in node N2 15 (represented by circle 42 and referred to as kernel N2), a kernel in node N3 16 (represented by circle 43 and referred to as kernel N3), a kernel in node 4 (Icl~lc~c~ld 17 by circle 44 and referred to as kernel N4), a kernel in node N5 (ICI./lC:~Cll.~,.i by circle 18 45 and referred to as kemel N5), and a kernel in node N6 (Ic~ lt~,d by circle 46 and 19 referred to as kernel N6). The child kernel waits for a specified time to receive a login acknowledgment (step 11 0).
22 All kemels listen for login broadcast messages on the network (step 116). If a 23 login broadcast is received (step 117), the parent kernel determines whether the kernel 24 that sent the message is its child (step 118). If the originating kernel is not its child, 2S the parent continues listening for login broadcasts (step 116). However, if the 26 originating kernel is its child, the parent checks if this kernel is a duplicate child (step 27 119). If this is a duplicate child, the parent informs its duplicate children of a role f~N!~t~J S
Wo 96/072s7 P~ llu.. ,~10605 ~
219732~
confiict (step 120). If not, the parent sends a logiD a~.LIu..; ~, to i~s child kernel 2 (step 121).
4 In FIG. 7, pareDt kernel N4 receives kemel N7's login broadcast message a,and sends a login a~,h~u.. .- .~g message represented by iine b to kernel N7.
7 If a login a~.iulu.... ' Ig is received (step 110), the chiid kemel sends a iogir, 8 . to the first parent kemel tbat seDds a login a~,iulu.. - 1~, (step 114).
9 The cbild kemel ignores any other login ~-,h-u.. l~d~,~.~.~ it may receive. After 10 sending the login ~ to its parent, the chiid kemel begins the Monitor processIl with its new parent (step 115). If the parent kernel receives the login ~
12 (step 122), the parent kemel registers t!ae child (step 123) and begins the Morator 13 process with its new chiid (step 124). If the parent kemel does not receive the login 14 from the cbild (step 122), the parent kemel contmues to listen for login broadcasts (step 116).
17 In FIG. 7, after receiving parent kemel N4's login a~h-u.. l~lc_a.~ b, cbiid 18 kemel N7 sends a login ~ - message represeated by iine c to kemel N4 and 19 begiDs the monitor process with its parent kemel N4.
21 If no parent kernel seads a login ~u .. I~l~ ,.n to the child, the child kernel 22 begins the Login process again (step 101) uniess the retry threshold has been exceeded 23 (st~p 111). If the retry threshold has been exceeded, the child checks its MaxSratus 24 setting (step 112). If the child's M~xStanLr is greater tiran MinLevel, the chiid begins the Role Call process to assume the role of its own parent. Othcrwise, the ci~iid kernel 26 wiil enter the Login wait period ag~un (step 101).
wo s6/072s7 2~ ~ 7 32 9 R~le Ctzll 3 Role Call is a procedure by which a kemel queries the nelwor~ to find out 4 vacancies in the name space bierarchy. The procedure is executed by all kemels who S bave been configured with Ma~ uus greater than MinLevel. The Role Call procedure 6 is invoked by a kernel upon startup and ~ l.Y when there is a managerial 7 vacancy in its namespace. The Role Call algorithm is designed to minitnize the number 8 of kernels ' l~, ~ ; ., in the Role Call process, reducing network-wide 9 broadcasts as well as possible collisions between potential contenders for the same vacancy.
Il 12 The roll call procedure is shown in Fig. 8. A kemel wishing to participau in 13 Role Call goes through a forced wait period (step 201). The wait period is a function 14 of the number of roles the kemel has alleady assumed, whether the kemel is an active context bridge, and the current state of the kemel. A random wait interval is also 16 added to the equation.
18 During the wait period, the kemel listens for role call broadcasts fmm other 19 kemels (step 202). If a role call broadcast is received for the same level of the hierarchy (step 203), the kemel abandons the Role Call procedure (step 204). If a role 21 call broadcast is not received, the kemel continues to listen for role caU broadcasts ~ (step 202) until the end of the wait period (step 205). At the end of the wait period, 23 the kemel sends its own role call broadcast on the network ~step 206). The broadcast 24 message contains the level of the hierarchy for which the role call is being requested.
After sending the role call broadcast, the kemel starts a timer (step 207) and listens for 26 role call messages on the network (step 208). A kemel that is a manager of the 27 namespace for which role call is requested wiU respond with a point-to-point role call 28 ~hlu'.g message. If the kemel initiating thc role call receives the 2~9732 PC~/U~ 95/ i ?~
~ P~. ~ O~ 9'3 acknowledgment (step 209), the kernel will abandon the Role Call procedure (step 204).
2 If the kemel initiating the role call instead receives another role call broadcast for the 3 same level of the hierarchy (step 210), the kernel reads the message. If the originator of 4 the message has higher credentials (step 211), the kernel will abandon the Role Call S procedure (step 204). The credentials of a particular kernel are a function of the number 6 of roles the kernel has already assumed, whether the kernel is an active context bridge, 7 and the current state of the kernel. At the end of the timeout period (step 212), the kernel 8 assumes the ~acant managerial role for which it requested role call (step 213).
FIG. 9 depicts an example of the Role Call procedure. Kemel N4, represented by 11 circle 54, becomes isolated from the network due to physical connection problems.
Existing systems are also limited by the availability of a fixed number of roles, or hierarchical levels, that can be assumed by any node, e.g., machine, area, group, domain, network, etc. This limitation presents significant problems when merging or integrating two or more existing networks having different hierarchical structures. In addition, in prior art systems, if a node assumes multiple roles, the relationship between those roles is prescribed. That is, in order to function at level 1 (e.g., machine) and level 3 (e.g., group manager), the node must also assume the level 2 function (e.g., area manager). This limitation can severely degrade system performance and recovery.
Prior attempts to address the problems associated with establishing robust, efficient enterprise-wide computing environments, such as real-time messaging, message queuing, remote procedure calls, and broadcast/publish-and-subscribe, represent partial solutions at best. Because true distributed computing presupposes peer-to-peer communication (since master process failure necessarily leads to failure of slave processes), client-server based approaches to realizing the goal of enterprise computing represent suboptimal solutions. Existing peer-to-peer systems utilizing static tables do not allow dynamic recovery and present serious problems of scalability and maintainability.
Summary of the Invention

The present invention is a dynamic, scalable, distributed, real-time, peer-to-peer system comprised of an arbitrary number of identical (functionally equivalent) instances, i.e., kernels, that together form a logical tree. The kernels exhibit unified and consistent behavior at run time through a self-configuring and self-maintaining logical view of the network. Each kernel resides at a network node that has one or more resources associated with it. The kernels dynamically locate one another in real-time to form and maintain a hierarchical structure that supports a virtually unlimited number of simultaneously running kernels. The system maintains its logical view of the network and user-developed application resources regardless of the number and combination of transport protocols and underlying mix of physical topologies. The system's services utilize a dynamic context bridge to communicate between end nodes that may not share a common transport protocol stack, thereby allowing applications residing on different stacks to communicate with one another transparently and reliably.

The system is designed to support all forms of digitized communication, including voice, sound, still and moving images, mass file transfer, traditional transaction processing, and any-to-any communication such as "groupware" applications would require. The system is also designed to operate over any type of networking protocol and medium, including ISDN, X.25, TCP/IP, SNA, APPC, ATM, etc. In all cases, the system delivers a high percentage, typically 60-95%, of the theoretical capacity, i.e., bandwidth, of the underlying medium.
As new resources join (or rejoin) the network, the kernel residing at each node, and thus each resource connected to that node, automatically and dynamically becomes accessible to all applications using the system. The role(s) assumed by any node within the managerial hierarchy employed (e.g., area manager, domain manager, network manager, etc.) is arbitrary, i.e., any node can assume one or multiple roles within the hierarchy, and assuming one role neither requires nor precludes assumption of any other role. Further, the roles dynamically change based on the configuration of the network, i.e., as one or more nodes enter or leave the network. Thus, the individual kernels dynamically locate one another and negotiate the roles played by the associated nodes in managing the network hierarchy without regard to their physical location. In addition, the number of possible roles or levels that may be assumed by any node is not limited and may be selected based on the particular requirements of the networking environment.
Brief Description of the Drawings

These and other features and advantages of the present invention will be better and more completely understood by referring to the following detailed description of preferred embodiments in conjunction with the appended sheets of drawings, of which:
Fig. 1 is a drawing showing a distributed computing system in accordance with the present invention.

Fig. 2 is a detailed block diagram of one of the nodes in the system of Fig. 1.

Fig. 3 is a block diagram showing the structure of a kernel in accordance with the present invention.
Fig. 4 is a flow chart of the PIPES logical network (PLN) of the present invention.

Fig. 5 is a flow chart of a child login procedure in accordance with the present invention.

Fig. 6 is a flow chart of a parent login procedure in accordance with the present invention.

Fig. 7 is a diagram showing the login communications between different nodes in accordance with the present invention.

Fig. 8 is a flow chart of a role call procedure in accordance with the present invention.

Fig. 9 is a diagram showing the role call communications between different nodes in accordance with the present invention.

Fig. 10 is a flow chart of a child monitor procedure in accordance with the present invention.

Fig. 11 is a flow chart of a parent monitor procedure in accordance with the present invention.

Fig. 12 is a diagram showing the "heartbeat" monitor communications between different nodes in accordance with the present invention.
Fig. 13 is a flow chart of an election process in accordance with the present invention.

Fig. 14 is a diagram showing the election communications between different nodes in accordance with the present invention.

Fig. 15 is a flow chart of a logout process in accordance with the present invention.

Fig. 16 is a diagram showing the logout communications between different nodes in accordance with the present invention.

Fig. 17 is a diagram showing activities relating to a resource of the present invention.

Fig. 18 is a flow chart of an "Add Resource" process in accordance with the present invention.

Fig. 19 is a flow chart of a "Find Resource" process in accordance with the present invention.

Fig. 20 is a flow chart of a "Find Resource" process at an area manager node of the present invention.

Fig. 21 is a flow chart of a "Find Resource" process in accordance with the present invention at a level above area manager.

Fig. 22 is a flow chart of a "Persistent Find" process at an area manager node of the present invention.
Fig. 23 is a flow chart of a "Persistent Find" process in accordance with the present invention at a level above area manager.

Fig. 24 is a flow chart of a "Clean Persistent Find" process at an area manager node of the present invention.

Fig. 25 is a flow chart of a "Clean Persistent Find" process in accordance with the present invention at a level above area manager.

Fig. 26 is a flow chart of a "Resource Recovery" process in accordance with the present invention when an area manager goes down.

Fig. 27 is a flow chart of a "Resource Recovery" process in accordance with the present invention when another managerial node goes down.

Fig. 28 is a flow chart of a "Remove Resource" process in accordance with the present invention.

Fig. 29A shows the architecture of a context bridge of the present invention.

Fig. 29B is an example illustrating the use of context bridges for communication between different protocols.

Fig. 30 is a flow chart showing a context bridge routing process in accordance with the present invention.

Fig. 31 is a flow chart of a "Route Discovery" process in accordance with the present invention.
Fig. 32 is a flow chart of a "Route Validation" process in accordance with the present invention.
Fig. 33 is a flow chart of a "Route Advertisement" process in accordance with the present invention.
Fig. 34 is a flow chart showing the steps performed in changing the number of levels in the PIPES logical network of the present invention.
Detailed Description of the Invention

FIG. 1 shows a distributed computing system 100 in accordance with the present invention. The implementation of system 100 by the assignee of the present application is referred to as the PIPES Platform ("PIPES"). In system 100, two nodes, Node 1 (shown as block 1) and Node 2 (shown as block 14), communicate through a physical network connection (shown as line 27). It should be obvious to a person skilled in the art that the number of nodes connected to network 27 is not limited to two.
The structures of the nodes are fundamentally the same. Accordingly, only one of the nodes, such as Node 1, is described in detail. Three applications, App. A (shown as block 2), App. B (shown as block 3), and App. C (shown as block 4), run on Node 1. These applications are typically written by application developers to run on PIPES. The PIPES software includes a PIPES Application Programming Interface ("PAPI") (shown as block 6) for communicating with Apps. A-C. PAPI 6 sends messages to a single PIPES Kernel (shown as block 9) executing at Node 1 through interprocess communication ("IPC") function calls (shown as block 7). Kernel 9 sends and receives messages over network 27 through transport device drivers TD1 (shown as block 11), TD2 (shown as block 12), and TD3 (shown as block 13).
Similarly, Node 2 has three applications running on it, App. X (shown as block 15), App. Y (shown as block 16), and App. Z (shown as block 17), which communicate with a single PIPES Kernel (shown as block 21) running at Node 2 through PAPI (shown as block 18) and IPC (shown as block 19). Node 2 supports three different network protocols, and thus contains three transport drivers TD3 (shown as block 24), TD4 (shown as block 25), and TD5 (shown as block 26).
For example, if App. A at Node 1 needs to communicate with App. Z at Node 2, a message travels from App. A through PAPI 6, IPC 7, and kernel 9. Kernel 9 uses its transport driver TD3 to send the message over network 27 to transport driver TD3 at Node 2. The message is then passed to kernel 21 at Node 2, IPC 19, PAPI 18, and finally to App. Z.
PIPES also provides generic services used by all of its component parts. Network Management Services (shown as blocks 10 and 20) provides access for a PIPES Network Management Agent (not shown) to monitor the kernels' network and application counters, attributes, and statistics. Generic Services (shown as blocks 8 and 22) provide a common interface for kernels 9 and 21 to operating system services, including hashing, btrees, address manipulation, buffer management, queue management, logging, timers, and task scheduling. System Dependent Services (shown as blocks 5 and 23) provides services specific to the operating system, platform, environment, and transports on the nodes. These services are used by Generic Services (shown as blocks 8 and 22) to realize a generic service within a given operating system or platform environment.

FIG. 2 shows a more detailed block diagram of the PIPES internal architecture
within Node 1 of system 100. The PIPES architecture is divided into three different layers: the Interface Layer (shown as block 28), the Kernel Layer (shown as block 29), and the Transport Layer (shown as block 30). Interface Layer 28 handles queries from and responses to the applications that are accessing the PIPES environment through PAPI 6. Interface Layer 28 is embodied in a library which is linked to each application (e.g., Apps. A-C) which accesses kernel 9. Kernel Layer 29 provides communication, resource, and management services to applications that are accessing PIPES, allowing communication between end-nodes that may not share a transport protocol stack. Transport Layer 30 consists of the transport device drivers 11, 12, and 13 for the network protocols supported by Node 1. Each transport driver provides access from kernel 9 to a network transport protocol provided by other vendors, such as TCP/IP, SNA, IPX, or DLC. Transport Layer 30 handles all transport-specific API issues on a given platform for a given transport discipline.
FIG. 3 illustrates the internal architecture of kernel 9. Kernel 9 contains an API Interface (shown as block 31) which is the interface to PAPI 6 of Fig. 2. API Interface 31 handles requests from Interface Layer 28 and returns responses to those requests. It recognizes an application's priority and queues an application's messages based on this priority. API Interface 31 also handles responses from the Resource Layer (shown as block 32) and Session Services (shown as block 35), and routes those responses to the appropriate application.

Resource Layer 32 registers an application's resources within a PIPES Logical Network ("PLN") layer (shown as block 33), provides the ability to find other PAPI resources within PIPES, and handles the deregistration of resources within the network. In addition, Resource Layer 32 implements a "Persistent Find" capability which enables the locating of resources that have not yet been registered in PLN 33.
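The "Persistent Find" idea can be illustrated with a short sketch. This is not the patented implementation; the class, method names, and data layout below are assumptions made for illustration. The essential behavior is that a find which misses is parked, and is answered as soon as a matching resource registers.

```python
# Illustrative sketch only: a directory that parks unmatched finds
# ("persistent finds") and answers them when the resource appears.
class ResourceDirectory:
    def __init__(self):
        self.resources = {}   # resource name -> owner node
        self.pending = {}     # resource name -> requesters awaiting it

    def find(self, name, requester):
        """Return the owner if known; otherwise park the request."""
        if name in self.resources:
            return self.resources[name]
        self.pending.setdefault(name, []).append(requester)
        return None

    def add(self, name, owner):
        """Register a resource and answer any parked persistent finds."""
        self.resources[name] = owner
        return [(waiter, owner) for waiter in self.pending.pop(name, [])]

d = ResourceDirectory()
assert d.find("printer", "N7") is None           # parked: not yet registered
assert d.add("printer", "N2") == [("N7", "N2")]  # parked find now answered
assert d.find("printer", "N9") == "N2"           # later finds hit the cache
```

Figs. 22 and 23, discussed below, describe the actual Persistent Find flows at the area manager and at the levels above it.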
PLN 33 maintains knowledge of the logical, hierarchical organization of the nodes within PIPES to enforce a dynamic managerial framework. PLN 33 handles the election of managers and the transparent reestablishment of managerial hierarchies as a result of physical network faults. PLN 33 employs a system of "heartbeat" messages which is used to monitor the status of nodes within the network and identify network failures. This layer also handles requests and returns responses to Resource Layer 32 and an Acknowledged Datagram Service ("AKDG", shown as block 34).
AKDG 34 provides best-effort datagram service, with notification on failures, for users. AKDG 34 handles the sending and receiving of messages through the Connectionless Messaging Service (CLMS) 36 and Session Services 35.
Session Services 35 allocates, manages, and deallocates sessions for users. Session management includes sending and receiving data sent by the user in sequence, ensuring secure use of the session, and maintaining the message semantics over the Connection Oriented Messaging Service (COMS) stream protocol. Session Services 35 also multicasts PAPI application messages over sessions owned by the PAPI application. Session Services 35 interacts with COMS 37 to satisfy requests from AKDG 34 and API Interface 31.
CLMS 36 transfers data without a guarantee of delivery. It also interacts with Context Bridge layer 38 to satisfy the requests from AKDG 34.
COMS 37 manages connections opened by Session Services 35. COMS 37 provides connection-oriented data transfer, including the segmentation and reassembly of messages for users. COMS 37 modifies message size based on the maximum message sizes of hops between connection endpoints.
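The segmentation behavior described above can be sketched as follows; the function names and hop limits are illustrative assumptions, not PIPES APIs. The key point is that the segment size is bounded by the smallest maximum message size along the connection's hops.

```python
# Hedged sketch: segments are sized to the tightest per-hop message limit,
# so each segment can traverse every hop of the connection unmodified.
def segment(payload: bytes, hop_max_sizes):
    """Split payload into segments no larger than the smallest hop limit."""
    limit = min(hop_max_sizes)
    return [payload[i:i + limit] for i in range(0, len(payload), limit)]

def reassemble(segments):
    """Rebuild the original message from its segments, in order."""
    return b"".join(segments)

data = b"x" * 1000
parts = segment(data, [1500, 576, 4096])   # hop limits are illustrative
assert all(len(p) <= 576 for p in parts)   # bounded by the tightest hop
assert reassemble(parts) == data
```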
Context Bridge layer 38 insulates PAPI applications from the underlying networks by performing dynamic transport protocol mapping over multiple network transports, thus enabling data transfer even if the end-to-end protocols are different.
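The context-bridge idea can be sketched as a path search: a message can be relayed between end nodes that share no common protocol, as long as each adjacent pair of nodes along the path shares at least one transport. The node names and protocol sets below are hypothetical, and the search itself is a simplification of the routing processes of Figs. 30-33.

```python
# Illustrative sketch only: find a chain of nodes in which every hop
# shares at least one transport protocol (a "context bridge" path).
def find_bridge_path(stacks, src, dst, path=None):
    """Depth-first search over nodes; stacks maps node -> set of protocols."""
    path = path or [src]
    if src == dst:
        return path
    for node, protos in stacks.items():
        # extend the path only where the current node shares a protocol
        if node not in path and stacks[src] & protos:
            found = find_bridge_path(stacks, node, dst, path + [node])
            if found:
                return found
    return None

# N1 (TCP/IP only) reaches N9 (SNA only) through the dual-stack node N4.
stacks = {"N1": {"TCP/IP"}, "N4": {"TCP/IP", "SNA"}, "N9": {"SNA"}}
assert find_bridge_path(stacks, "N1", "N9") == ["N1", "N4", "N9"]
```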
The Transport Driver Interface (shown as block 39) handles communications between transport-specific drivers and the CLMS 36 and COMS 37 layers. This interface contains generic common code for all transport drivers.
PLN Layer

PLN 33 is a hierarchical structure imposed by the system administrator on a set of machines executing kernels. These kernels unify at run time to form a logical network with dynamically elected managers that manage a given level of the hierarchy.
The PLN name space is divided into five different levels: normal, area, group, domain, and network. All kernels at startup have normal privileges. They assume a managerial role depending on their position in the network and such real-time factors as the number of roles already assumed. Thus, managerial functions will be distributed evenly among the member kernels, leading to better performance and faster recovery. It should be appreciated that the number of levels is not limited to five, and any number of levels can be implemented in the system, as explained below.
In PLN 33, the primary roles played by the various managers between the Network Manager and Area Manager (e.g., Domain Manager and Group Manager) are essentially the same: to maintain communication with its parent and children, and to route Resource Layer 32 traffic. In addition to these functions, any manager between the Network Manager and Area Manager (e.g., Domain or Group) also provides persistent find caching services as described below in connection with Figs. 22 and 23. The Area Manager, in addition to the functions described above, provides caching services for resources advertised by its children, including all of the kernels in the Area Manager's name space. Therefore, the Area Manager is crucial to the orderly function of PLN 33, which is built from the ground up by filling the Area Manager role before any other role in the hierarchy. By default, any kernel can become an Area Manager.
As shown in FIG. 4, the PLN building and maintenance algorithm comprises five main processes: Login (shown as block 100), Role Call (shown as block 200), Monitor (shown as block 300), Election (shown as block 400), and Logout (shown as block 500). In this description, the following terms are used in order to allow for the appropriate abstraction. The number of levels in PLN 33 is defined by MinLevel and MaxLevel. The kernels that have normal privileges are configured at MinLevel and are not managers. On the other hand, a kernel that is the Network Manager is configured at MaxLevel and has the potential to become the Network Root. The configuration parameter MaxStatus imposes a ceiling on the highest level of which the kernel can be a manager. A kernel at level n is termed to be a child of its parent kernel at level n+1 provided that the two kernels have the same name above level n.
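The parent-child naming rule can be sketched as follows. The level constants and the tuple encoding of hierarchical names are assumptions made for this sketch; the source states only that a kernel at level n is a child of a kernel at level n+1 when the two names agree at every level above n.

```python
# Illustrative sketch: names are tuples ordered from the network level down.
# The numeric level assignments are assumptions for this example.
MIN_LEVEL = 0   # normal kernels
MAX_LEVEL = 4   # network manager

def is_child(child_name, parent_name, n):
    """True if a kernel at level n is a child of a kernel at level n+1,
    i.e., the two names agree at every level above n."""
    if not (MIN_LEVEL <= n < MAX_LEVEL):
        return False
    # name[i] holds the component for level MAX_LEVEL - i, so the levels
    # "above n" are the first MAX_LEVEL - n components.
    prefix = MAX_LEVEL - n
    return child_name[:prefix] == parent_name[:prefix]

# A normal kernel under area "A1", group "G1", domain "D1", network "NET"
# is a child of the area manager kernel with the same name above level 0.
kernel   = ("NET", "D1", "G1", "A1", "N7")
area_mgr = ("NET", "D1", "G1", "A1", "N4")
assert is_child(kernel, area_mgr, MIN_LEVEL)
```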
Login

FIGS. 5 and 6 depict the Login procedure executed at the child and parent nodes in PLN 33. Login is a process by which a child kernel locates and registers with a parent kernel. FIG. 7 illustrates the messages passed between kernels during a hypothetical execution of the Login process, in which a kernel in node N7 (shown as circle 37 and referred to as kernel N7) runs the Login process to enter the network.
A kernel enters the network by running the Login process to locate its parent kernel. The child kernel first enters a wait period (step 101) during which the child listens for other login broadcasts on the network (step 102). If a login broadcast is received during the wait period (step 103), the child kernel reads the message. The information in the message is sufficient for the child to ascertain the identity of its parent and siblings. If the originator of the message is a sibling (step 104), the child kernel modifies its Login wait period interval (step 105) in order to prevent login broadcasts from inundating the network. If the originator of the message is a parent (step 106), the child kernel sends a login request to the parent (step 107) and waits for an acknowledgment. If a login broadcast is not received, the child kernel continues to listen for a login broadcast until the end of the wait period (step 108). At the end of the wait period, the child kernel sends a login broadcast on the network (step 109).
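Steps 101-109 of the child-side procedure can be sketched as a scan of the broadcasts heard during the wait period. The message fields and return values below are assumptions made for illustration, not the patented message formats.

```python
# Illustrative sketch of the child-side Login wait loop (steps 101-109).
def child_login_wait(inbox, my_parent_name):
    """Classify the broadcasts heard during one wait period.
    Returns ('request', parent) if the parent announced itself (steps 106-107),
    ('backoff', None) if a sibling was heard (steps 104-105), or
    ('broadcast', None) if the wait period expired silently (step 109)."""
    saw_sibling = False
    for msg in inbox:                                    # step 102: listen
        if msg["kind"] != "login_broadcast":
            continue
        if msg["sender_parent"] == my_parent_name and msg["sender"] != my_parent_name:
            saw_sibling = True                           # step 104: a sibling
        elif msg["sender"] == my_parent_name:
            return ("request", msg["sender"])            # steps 106-107
    if saw_sibling:
        return ("backoff", None)                         # step 105: stretch wait
    return ("broadcast", None)                           # step 109: announce

assert child_login_wait([], "N4") == ("broadcast", None)
```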
In FIG. 7, kernel N7 is attempting to log in to the PIPES network by sending a login broadcast message (represented by dotted line a) to a kernel in node N1 (represented by circle 41 and referred to as kernel N1), a kernel in node N2 (represented by circle 42 and referred to as kernel N2), a kernel in node N3 (represented by circle 43 and referred to as kernel N3), a kernel in node N4 (represented by circle 44 and referred to as kernel N4), a kernel in node N5 (represented by circle 45 and referred to as kernel N5), and a kernel in node N6 (represented by circle 46 and referred to as kernel N6). The child kernel waits for a specified time to receive a login acknowledgment (step 110).
All kernels listen for login broadcast messages on the network (step 116). If a login broadcast is received (step 117), the parent kernel determines whether the kernel that sent the message is its child (step 118). If the originating kernel is not its child, the parent continues listening for login broadcasts (step 116). However, if the originating kernel is its child, the parent checks if this kernel is a duplicate child (step 119). If this is a duplicate child, the parent informs its duplicate children of a role conflict (step 120). If not, the parent sends a login acknowledgment to its child kernel (step 121).
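The parent-side decision (steps 116-121) reduces to a small classification of each incoming broadcast. The message and reply formats below are illustrative assumptions, not the patented wire format.

```python
# Illustrative sketch of the parent-side login handling (steps 116-121).
def parent_handle_broadcast(msg, my_name, children):
    """Return the reply a parent would send for one login broadcast,
    or None if the broadcast is not from one of its children."""
    if msg.get("kind") != "login_broadcast":
        return None
    if msg["sender_parent"] != my_name:
        return None                                   # step 118: not our child
    if msg["sender"] in children:
        # steps 119-120: duplicate child names imply a role conflict
        return {"kind": "role_conflict", "to": msg["sender"]}
    return {"kind": "login_ack", "to": msg["sender"]}  # step 121

bcast = {"kind": "login_broadcast", "sender": "N7", "sender_parent": "N4"}
assert parent_handle_broadcast(bcast, "N4", set()) == {"kind": "login_ack", "to": "N7"}
```

Note that the sketch registers nothing yet; as described below, the child is only registered once its login confirmation arrives (steps 122-123).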
In FIG. 7, parent kernel N4 receives kernel N7's login broadcast message a, and sends a login acknowledgment message, represented by line b, to kernel N7.
If a login acknowledgment is received (step 110), the child kernel sends a login confirmation to the first parent kernel that sends a login acknowledgment (step 114). The child kernel ignores any other login acknowledgments it may receive. After sending the login confirmation to its parent, the child kernel begins the Monitor process with its new parent (step 115). If the parent kernel receives the login confirmation (step 122), the parent kernel registers the child (step 123) and begins the Monitor process with its new child (step 124). If the parent kernel does not receive the login confirmation from the child (step 122), the parent kernel continues to listen for login broadcasts (step 116).
In FIG. 7, after receiving parent kernel N4's login acknowledgment b, child kernel N7 sends a login confirmation message, represented by line c, to kernel N4 and begins the Monitor process with its parent kernel N4.
If no parent kernel sends a login acknowledgment to the child, the child kernel begins the Login process again (step 101) unless the retry threshold has been exceeded (step 111). If the retry threshold has been exceeded, the child checks its MaxStatus setting (step 112). If the child's MaxStatus is greater than MinLevel, the child begins the Role Call process to assume the role of its own parent. Otherwise, the child kernel will enter the Login wait period again (step 101).
Role Call

Role Call is a procedure by which a kernel queries the network to find vacancies in the name space hierarchy. The procedure is executed by all kernels that have been configured with MaxStatus greater than MinLevel. The Role Call procedure is invoked by a kernel upon startup and subsequently whenever there is a managerial vacancy in its namespace. The Role Call algorithm is designed to minimize the number of kernels participating in the Role Call process, reducing network-wide broadcasts as well as possible collisions between potential contenders for the same vacancy.
The Role Call procedure is shown in Fig. 8. A kernel wishing to participate in Role Call goes through a forced wait period (step 201). The wait period is a function of the number of roles the kernel has already assumed, whether the kernel is an active context bridge, and the current state of the kernel. A random wait interval is also added to the equation.
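The forced-wait computation (step 201) can be sketched with invented weights; the source states only that the wait depends on the number of roles held, context-bridge activity, and kernel state, plus a random component that keeps contenders from colliding.

```python
import random

# Illustrative sketch of the role call wait (step 201). The base interval
# and the 0.5 weights are assumptions made for this example.
def role_call_wait(roles_held, is_context_bridge, busy, base=1.0):
    wait = base
    wait += 0.5 * roles_held                 # loaded kernels wait longer
    wait += 0.5 if is_context_bridge else 0.0
    wait += 0.5 if busy else 0.0
    wait += random.uniform(0.0, base)        # jitter to avoid collisions
    return wait

w = role_call_wait(roles_held=2, is_context_bridge=True, busy=False)
assert 2.5 <= w <= 3.5   # deterministic part plus jitter in [0, 1]
```

Heavier-loaded kernels waiting longer means a lightly loaded kernel tends to broadcast first and win the vacancy, which matches the load-spreading goal described above.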
During the wait period, the kernel listens for role call broadcasts from other kernels (step 202). If a role call broadcast is received for the same level of the hierarchy (step 203), the kernel abandons the Role Call procedure (step 204). If a role call broadcast is not received, the kernel continues to listen for role call broadcasts (step 202) until the end of the wait period (step 205). At the end of the wait period, the kernel sends its own role call broadcast on the network (step 206). The broadcast message contains the level of the hierarchy for which the role call is being requested. After sending the role call broadcast, the kernel starts a timer (step 207) and listens for role call messages on the network (step 208). A kernel that is a manager of the namespace for which role call is requested will respond with a point-to-point role call acknowledgment message. If the kernel initiating the role call receives the acknowledgment (step 209), the kernel will abandon the Role Call procedure (step 204). If the kernel initiating the role call instead receives another role call broadcast for the same level of the hierarchy (step 210), the kernel reads the message. If the originator of the message has higher credentials (step 211), the kernel will abandon the Role Call procedure (step 204). The credentials of a particular kernel are a function of the number of roles the kernel has already assumed, whether the kernel is an active context bridge, and the current state of the kernel. At the end of the timeout period (step 212), the kernel assumes the vacant managerial role for which it requested role call (step 213).
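The contention rule of steps 210-213 can be sketched as a credential comparison. The ordering below (fewer roles and an idle state rank higher) is an assumption; the source names the inputs to the credential function but not its exact form.

```python
# Illustrative sketch of the credential comparison (step 211).
def credentials(roles_held, is_context_bridge, busy):
    # Assumed ordering: fewer roles, no bridging duty, and an idle
    # state make a kernel a better candidate for a vacant role.
    return (-roles_held, not is_context_bridge, not busy)

def should_abandon(mine, rival):
    """True if the rival's credentials outrank ours, so we abandon Role Call."""
    return credentials(*rival) > credentials(*mine)

# A kernel holding two roles and bridging yields to an idle kernel.
assert should_abandon((2, True, False), (0, False, False))
assert not should_abandon((0, False, False), (2, True, False))
```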
FIG. 9 depicts an example of the Role Call procedure. Kernel N4, represented by circle 54, becomes isolated from the network due to physical connection problems. Kernel N7, represented by circle 57, detects the absence of kernel N4 as a result of its Monitor process (described in detail below) with its parent kernel N4. Kernel N7 goes into the forced wait period and listens for role call broadcast traffic on the network. If kernel N5, represented by circle 55, had started its Role Call process before kernel N7, kernel N7 would abort its Role Call after receiving kernel N5's role call broadcast message, represented by dotted line i. However, assuming that kernel N7 started its Role Call first, kernel N7 sends out its broadcast message, represented by dotted line h, at the end of the role call wait period.
If kernel N5 sends its own role call broadcast message after kernel N7 has already done so, kernel N7 compares its credentials with those of kernel N5. If kernel N5's credentials are higher, kernel N7 abandons Role Call and kernel N5 assumes the managerial role left vacant by the disappearance of kernel N4. If kernel N7's credentials are higher, kernel N5 abandons Role Call and kernel N7 assumes kernel N4's vacant managerial role at the end of the timeout period.
If kernel N4 has reappeared on the network and has received kernel N5's broadcast message i or kernel N7's broadcast message h, kernel N4 responds by sending an acknowledgment message to kernel N5, represented by line j, or to kernel N7, represented by line k. If kernel N4 has not reappeared on the network, kernel N5 and kernel N7 continue their Role Call processes.
Monitor

FIGS. 10 and 11 depict the child and parent Monitor processes, which the child and parent use to keep track of one another.
The parent has its own "heartbeat" timer set to the slowest heartbeat interval of all of its children. The parent initially resets its heartbeat timer at the beginning of the Monitor process (step 312) and listens for heartbeat messages from its children (step 313). A child participating in the Monitor process with its parent first sends a heartbeat message to its parent (step 301) and waits for an acknowledgment. If a heartbeat message is received by the parent (step 314), the parent will send a heartbeat acknowledgment to the child (step 315) and check off the child in its list of children (step 316). The acknowledgment message contains a heartbeat offset value to scatter the heartbeat intervals among its children. If the child receives the heartbeat acknowledgment (step 302), the child modifies its heartbeat interval (step 306) and enters a wait period (step 307). If the child does not receive a heartbeat acknowledgment, it sends another heartbeat message to its parent (step 303). If a heartbeat acknowledgment is received (step 304) at this time, the child then modifies its heartbeat interval (step 306) and enters the wait period (step 307). If the child still does not receive a heartbeat acknowledgment, the child assumes that it has become orphaned and begins the Login process (step 305).
When the parent's heartbeat timer expires (step 317), the parent checks its list of children for missing heartbeat messages (step 318). If the parent detects a missing heartbeat, the parent sends a heartbeat message to the missing child (step 319). If the parent does not receive a heartbeat acknowledgment from the missing child (step 320), the parent de-registers the child (step 321).
During its wait period (step 307), the child listens for a heartbeat message from its parent (step 308). If a heartbeat message is received by the child (step 309), the child sends a heartbeat acknowledgment to its parent (step 310), modifies its heartbeat interval (step 306), and enters the wait period again (step 307). At the end of the wait period (step 311), the child begins the Monitor process once again (step 301).
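The parent's side of this bookkeeping (steps 312 through 321) can be sketched as a small class. The offset policy below, which spreads children evenly across the base interval, is an assumption; the text says only that the acknowledgment carries an offset value to scatter the children's heartbeats.

```python
class ParentMonitor:
    """Sketch of the parent side of the Monitor process (steps 312-321).

    Timer and message plumbing are left out; only the check-off list and
    the (assumed) offset policy are modeled.
    """
    def __init__(self, children, base_interval=10.0):
        self.base_interval = base_interval
        self.checked_in = {child: False for child in children}

    def on_heartbeat(self, child):
        # Steps 314-316: acknowledge the child and check it off the list.
        if child not in self.checked_in:
            return None
        self.checked_in[child] = True
        # Assumed policy: spread children evenly across the interval.
        offset = (list(self.checked_in).index(child) /
                  max(len(self.checked_in), 1)) * self.base_interval
        return {"type": "HEARTBEAT_ACK", "offset": offset}

    def on_timer_expired(self):
        # Steps 317-318: report children whose heartbeats are missing,
        # then reset the check-off list for the next interval.
        missing = [c for c, seen in self.checked_in.items() if not seen]
        for c in self.checked_in:
            self.checked_in[c] = False
        return missing
```

A real kernel would follow a missing entry with the probe heartbeat of step 319 before de-registering the child at step 321.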
FIG. 12 shows the periodic check-in messages, or "heartbeats," passed between the parent and child during the Monitor process. In FIG. 12, kernels N3 and N4 (represented by circles 63 and 64, respectively) are the children of kernel N2 (represented by circle 62). Kernel N2 is in turn the child of kernel N1 (represented by circle 61). Messages d1 through d3 represent heartbeat messages from child to parent, while messages e1 through e3 represent heartbeat acknowledgments from parent to child. Messages f1 through f3 represent heartbeat messages from parent to child, while messages g1 through g3 represent heartbeat acknowledgments from child to parent.
Election

PIPES kernels engage in a distributed Election (FIG. 13) to determine the winner when role conflicts arise. Two or more managers may claim managerial responsibility over the same namespace when there are problems in the underlying physical connections that cause fragmentation of the network. Collisions in the namespace are primarily detected through either role call or login broadcasts, described above. When a kernel detects a namespace collision, it will inform the principals, which in turn execute the Election process. New participants may join an Election that is already in progress. Because the Election is fully distributed, each kernel separately conducts the Election and arrives at the result.
When a kernel detects a role conflict or is informed of one, the kernel begins the Election process by starting an election timer and opening an election database (step 401). The kernel stores the election participants known so far, and sends an election request to each one (step 402). This message consists of all known kernels that are participating in the election. The kernel then listens for any election traffic on the network (step 403). If the kernel receives an election response (step 404), which contains a list of known participants, the kernel stores any new election participants in the database and sends each one an election request (step 402). If another election request is received (step 405), the kernel sends an election response to the originator (step 406), updates the election database, and sends election requests to the new participants (step 402). When the election timer expires (step 407), the kernel queries its election database to determine the winner (step 408). The winner of an election depends on the number of roles each participating kernel has already assumed, whether the participating kernels are active context bridges, and the current state of each kernel. If the kernel is the winner of the election (step 409), the kernel sends an election result message to all election participants (step 410). If the kernel loses the election, the kernel will resign its post as manager (step 411), informing all of its children of their new parent. All participants in the election verify the election result and finally close their election databases (step 412).
FIG. 14 illustrates an example of the Election process. Suppose that kernels A and B (represented by circles 71 and 72, respectively) have detected role conflicts independently. Kernel A will send an election request message (arrow l) to kernel B.
This message will consist of participants known to kernel A, at this point being just kernels A and B. When kernel B receives this message, kernel B will send kernel A an election response message (arrow m). Later, kernel C detects a role conflict with kernel B. Kernel C will then send an election request message (arrow n) to kernel B. Kernel B will update its election database with the new entrant kernel C and will send an election response message (arrow o) back to kernel C. This message will contain the election participants known to kernel B at this point, namely, kernels A, B, and C. When kernel C receives this message, it will detect the new contestant kernel A, update its election database, and send an election request message (arrow p) to kernel A. At this point, kernel A will become aware of the new contestant (from its perspective), update its database with kernel C's credentials, and respond to kernel C's request (arrow q). In the same fashion, when kernel D enters the election only aware of kernel A, it will soon be aware of kernels B and C.
Logout

Logout (FIGS. 15 & 16) is a procedure by which a kernel de-registers from its parent. Logout may be initiated as part of the kernel shutdown logic, or as a result of resigning as a manager of a particular level of the hierarchy. A child kernel (shown as kernel N2 in FIG. 16) sends a logout request (represented by arrow x) to its parent, shown as kernel N1 in FIG. 16 (step 501). When the parent receives the logout request from its child (step 506), it sends a logout acknowledgment (shown as arrow y in FIG. 16) to the child (step 507) and de-registers the child (step 508). If the child is a manager (step 503), the child will send messages (represented by messages z1 through z5 in FIG. 16) to inform all of its children (i.e., kernels N3, N4, and N5 in FIG. 16) that it is no longer their parent (step 504). In addition, the parent kernel will nominate a successor from among its children by nominating the winner of an election process which it performs on its children (step 505).
Resource Layer

The Resource Layer (block 32 in FIG. 3) is responsible for managing all of the resources distributed throughout the PIPES network hierarchy. A resource is a functional subset of a PIPES application that is made available to other PIPES applications executing at other nodes on the network. A PIPES resource can be thought of as a well-defined service element, where one or more elements, when considered as a whole, combine to form a complete service.
FIG. 17 describes the life cycle of a resource in PIPES. A resource enters the network through the Add Resource process (block 600). In order to utilize the services provided by a resource, an application must execute the Find Resource process (block 700) to determine its location within the PIPES address space. For example, after executing a Find Query and obtaining the address of an available resource, an application might attempt to establish a session with the resource through Session Services 35.

If a resource is not available at the time an application executes a Find Query, the application might instead execute a Persistent Find Query, which will notify the application of a resource's availability as soon as a resource meeting the search criteria enters the network through the Add Resource process. In this case, Area Managers in PIPES maintain caches of pending Persistent Find Queries to facilitate an immediate response to such a query. If an Area Manager were to become disconnected from the rest of the PIPES hierarchy through a physical network failure, a recovery mechanism (block 800) is employed to recreate the persistent find cache at the new Area Manager that takes over the disconnected manager's responsibilities.
During its lifetime on the network, a resource is available to provide services to applications on the network. If the application that owns the resource removes the resource from the network, the Resource Layer executes the Remove Resource process (block 900).

Add Resource Process

FIG. 18 illustrates the Add Resource process which is used to introduce an application's resource into PLN 33. The node at which the resource originates first checks its local resource database to determine whether a resource with the same name already exists (step 601). If such a resource does exist, the originating node returns an ERROR to the user's application (step 602). If the resource does not exist, the originating node adds an entry for the resource in its local database (step 603). The node then checks its persistent find query cache to determine whether an application executing at the node is waiting for a resource (step 604). If the new resource matches any of the search criteria in the persistent find cache, then the originating node sends the new resource's attributes to the originating user's application that initiated the Persistent Find Query (step 605). The originating node then removes from the cache the Persistent Find Query for which the new resource matched the search criteria (step 606). If the scope of the newly removed persistent find query is greater than machine level (step 607), then the originating node sends a Clean Persistent Find Query to its parent node (step 608). At the end of the Persistent Find processing, or if no Persistent Find Query was matched by the new resource, the originating node sends an add resource request to its parent Area Manager (step 609).
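The originating node's steps 601 through 609 can be sketched as one function. Representing the persistent find cache as (predicate, requester) pairs is an assumption made for illustration, and the Clean Persistent Find Query forwarding of steps 607 and 608 is omitted for brevity.

```python
def add_resource(name, attrs, local_db, persistent_find_cache):
    """Sketch of the originating node's Add Resource steps 601-609.

    `persistent_find_cache` is a list of (criteria, requester) pairs, where
    `criteria` is a predicate over resource attributes (an assumed shape).
    Returns (status, notifications, forward_to_area_manager).
    """
    if name in local_db:                                  # step 601
        return "ERROR", [], False                         # step 602
    local_db[name] = attrs                                # step 603
    notifications, remaining = [], []
    for criteria, requester in persistent_find_cache:     # step 604
        if criteria(attrs):
            notifications.append((requester, name, attrs))  # step 605
        else:
            remaining.append((criteria, requester))
    persistent_find_cache[:] = remaining                  # step 606: drop matches
    return "OK", notifications, True                      # step 609: forward up
```

A matching Persistent Find Query is both answered and removed from the cache, exactly the pairing of steps 605 and 606 above.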
If an Area Manager receives an add resource request from one of its children (step 610), the Area Manager adds the resource to its own separate resource cache (step 611). The Area Manager then checks its own persistent find cache to determine whether the new resource matches any of the criteria of a query in the cache (step 612). If so, the Area Manager sends the resource's attributes to the node that originated the Persistent Find Query (step 613) and removes the Query from its persistent find cache (step 614). If the scope of that Query is greater than area level (step 615), then the Area Manager sends a Clean Persistent Find Query to its parent Group Manager (step 616).
Find Resource Process

An application searching for a resource within the PLN 33 may specify one of three different options for the Find Query which it sends to the PIPES Kernel: Find, Find Next, or Persistent Find. A Find Query will begin searching for resources at the local machine, moving to the area level if no resources are found at the machine level. If no resources are found at the area level, the search continues at the group level, and so on up the PIPES network hierarchy. If a resource is found at a particular level, that resource's attributes are sent to the application requesting the resource. If the application later issues a Find Next Query, the search will continue where the previous search had left off within the PIPES hierarchy.

If the user issues a Persistent Find Query, the originating node first converts it into a regular Find Query, which travels the network just like any other Find Query. If any resource is returned to the user, the Find Query will not persist within the network; however, if no resource is found within the PIPES hierarchy, the Persistent Find Query is stored within the PIPES hierarchy in the Area Managers' persistent find caches.
FIG. 19 depicts the Find Resource process as it executes at the originating node. If a Find or Persistent Find Query is initiated, the originating node clears a resource cache which is used as a buffer to store the resource attributes satisfying the query's search criteria (step 701). Because a Find Query is completely coordinated by the originator of the query, and no state is maintained at any of the intermediate nodes, each query data packet must carry sufficient information to enable the intermediate nodes to conduct their searches. Some of the most important pieces of information are the originating node's location within the network, the maximum number of matches that is desired by the originating node (MaxMatches), the current number of matches that have been returned to the originating node (CurrMatches), the scope of the search (Scope), the level at which the search was last conducted (CurrLevel), and the status of the last search at that level (Level Status). When the search begins with a Find Query or a Persistent Find Query, the originating node initializes some of these variables to begin the search at the machine level (step 702). Because a Find Next Query is designed to begin the next search where the previous search left off, a Find Next Query causes the originating node to skip these initialization steps.
The originating node compares CurrMatches to MaxMatches to determine whether the user has already received the maximum number of matches for which it asked (step 703). If CurrMatches is not equal to MaxMatches (CurrMatches can never exceed MaxMatches), then the originating node checks its resource cache to see if any more resources are available to return to the user (step 704). Resources may be left over in the local cache because although a distributed Find Query may return more than one resource to the originating node, the originating node returns resources to the user one at a time. If there are resources left in the local cache, the originating node returns the first resource to the user (step 705). If the resource cache is empty, the originating node checks the Level Status to determine where the last search left off (step 707). Level Status is set to EOF (i.e., end of find) if there are no resources available at that level. If the Level Status is EOF, the originating node increments CurrLevel to continue the search at the next level of the hierarchy (step 710). If the Level Status is not EOF, the originating node checks CurrLevel to determine whether to begin the search at the local machine before beginning a distributed search (step 708). If CurrLevel is set to Machine, the originating node searches its local resource database to see if a local resource may match the search criteria (step 709). If a local resource is available, the originating node copies up to MaxMatches resources' attributes to the query's resource cache, and sets CurrMatches to the number of matches found and copied to the cache (step 706). The originating node then returns the first resource from the cache to the user that requested the resource (step 705). If no local resources are found, the originating node sets the Level Status to EOF (step 711), and then increments CurrLevel to continue the search at the next level (step 707).
If CurrLevel exceeds MaxLevel (step 712) or Scope (step 716), then the search has either worked its way through the complete PIPES hierarchy or exceeded the scope of the original query. Thus, if either of these conditions has been met, the search is complete. If not, the originating node sends the Find Query to its parent, the Area Manager, to begin the distributed search (step 713). If resources' attributes are returned in response (step 714), the originating node copies the resources' attributes to the query's resource cache (step 718) and returns the first to the user (step 717). If the search completes unsuccessfully, the originating node checks CurrMatches to see if any resources have been returned to the user (step 715). If CurrMatches is greater than zero, then the user has received all of its resources, and the originating node returns an EOF to the user (step 723). If CurrMatches is zero, and no resources were found on the network, the originating node distributes a Persistent Find Query if the user has so specified (step 719). This entails adding the query to a listing of Persistent Find Queries pending at the node in order to keep track of the sources of the Persistent Find Queries (step 720). If a resource existing at the local machine could possibly match the search criteria of the Query (step 721), the originating node adds the query to its persistent find cache (step 722), which is used to keep track of the search criteria so that resources that meet those criteria may be returned as soon as they are added to PIPES. If the scope of the query is greater than machine level (step 724), then the Persistent Find Query is sent to the Area Manager (step 725).
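The per-query state described above can be sketched as a small record plus the level-advance and termination checks. The level names above Group are assumptions; the text lists machine, area, and group and says only that the search continues "and so on up the PIPES network hierarchy."

```python
from dataclasses import dataclass, field

# Machine, Area, and Group come from the text; "Network" as the top-level
# name is an assumption for illustration.
LEVELS = ["Machine", "Area", "Group", "Network"]

@dataclass
class FindQuery:
    """State carried in every Find Query data packet (FIG. 19)."""
    max_matches: int
    scope: str = "Network"        # highest level the search may reach
    curr_matches: int = 0
    curr_level: str = "Machine"   # step 702 starts at the machine level
    level_status: str = "EOF"     # EOF = no more resources at this level
    cache: list = field(default_factory=list)

def advance_level(q):
    """Move the search one level up the hierarchy (steps 707/710)."""
    q.curr_level = LEVELS[LEVELS.index(q.curr_level) + 1]
    q.level_status = "EOF"

def search_done(q):
    """Steps 712/716: the search ends once CurrLevel passes either the top
    of the hierarchy or the Scope of the original query."""
    return LEVELS.index(q.curr_level) > LEVELS.index(q.scope)
```

Because all of this state travels inside the packet, the intermediate managers can stay stateless, exactly as the text requires.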
FIGS. 20 and 21 illustrate how the Resource Layer routes a Find Query throughout PLN 33. FIG. 20 shows the process which is executed at the Area Manager level. When the Area Manager receives a Find Query (step 726), the Area Manager checks CurrLevel to determine the level at which a search is requested (step 727). If CurrLevel is less than Area (step 728), then the Area Manager returns an error to the node that sent the Find Query because the Area Manager received the query by mistake (step 729). If CurrLevel is greater than Area (step 728), the Area Manager will forward the Find Query to its parent (step 732) if the Area Manager received the Find Query from one of its children (step 731). Thus, the Area Manager is just passing on the Find Query because the search should continue at a higher level of the hierarchy.

If the search should continue at this level, the Area Manager analyzes the search criteria to determine whether a resource in this area could satisfy the criteria (step 730). If not, the Area Manager returns the Find Query to the sender (step 738). In addition, if CurrMatches is already equal to MaxMatches (step 733), the Area Manager also returns the Find Query to the sender (step 738). Otherwise, the Area Manager searches its resource database looking for a match that is visible to the originating node (step 734). The user that adds a resource to PIPES can specify which applications can utilize its services, or its "visibility" within PIPES. If visible matches are found, a maximum of MaxMatches resources' attributes are copied to the Find Query (step 735). If more than MaxMatches resources are found (step 737), the Area Manager sets the Level Status to OK (step 739) so that the search will continue at this level the next time a Find Next Query is issued. Otherwise, the Area Manager sets the Level Status to EOF to notify the originating node that no more resources are available at this level (step 736). Finally, the Area Manager returns the Find Query to the sender (step 738).
The Find Query process at managerial levels higher than Area Manager in the PLN hierarchy (FIG. 21) is similar to that at the Area Manager level, except that no searching occurs because only machines and Area Managers possess resource databases. Steps 740 through 747 in FIG. 21 are the same as steps 726 through 733 in FIG. 20. In each case, the node determines whether the search should continue at this level or at a higher level. In this case, a search at this level consists of forwarding the Find Query to each of the manager's children in turn. If any more children have not yet seen the Find Query (step 748), the manager sends the Find Query to the next child (step 749). When no more children are left, the manager sets the Level Status to EOF (step 751) and returns the Find Query to the sender (step 750).

FIGS. 22 and 23 illustrate the process of adding a Persistent Find Query throughout the network, and FIGS. 24 and 25 depict a similar "clean-up" process used to remove a Persistent Find Query from the network. In FIG. 22, an Area Manager node processes a Persistent Find Query received over PLN 33 (step 752). First, if the Area Manager received the Query from one of its children (step 753), the Area Manager adds the query to its source list of pending persistent finds (step 754). If a resource in this area could satisfy the Persistent Find Query's search criteria (step 755), then the Area Manager adds the query to its persistent find cache. If the Scope of the Query is greater than Area level (step 757), the Area Manager sends the Persistent Find Query to its parent (step 758). Similarly, in FIG. 23, a manager at a level higher than Area receives a Persistent Find Query (step 759). If the sender is one of the manager's children (step 760), the manager adds the Query to its source list of pending persistent finds (step 761). If this level is within the search criteria specified in the Query (step 762), the manager forwards the Query to its children (except possibly the child that sent the Query) (step 763). If the Scope of the Query is greater than this level (step 764), then the manager sends the Persistent Find Query to its parent (step 765).
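The Area Manager's decisions on an incoming Persistent Find Query (FIG. 22, steps 752 through 758) can be sketched as a list of actions. The level names above Group and the `could_match_in_area` predicate, which stands in for the criteria analysis of step 755, are assumptions.

```python
LEVELS = ["Machine", "Area", "Group", "Network"]  # top-level name assumed

def area_persistent_find(query, from_child, could_match_in_area):
    """Actions an Area Manager takes on a Persistent Find Query (FIG. 22).

    `query` holds the search criteria and a Scope drawn from LEVELS;
    `could_match_in_area` is an assumed predicate for step 755.
    """
    actions = []
    if from_child:                                       # steps 753-754
        actions.append("ADD_TO_SOURCE_LIST")
    if could_match_in_area(query["criteria"]):           # step 755: cache here
        actions.append("CACHE_QUERY")
    if LEVELS.index(query["Scope"]) > LEVELS.index("Area"):  # step 757
        actions.append("FORWARD_TO_PARENT")              # step 758
    return actions
```

The source list and the cache serve different roles: the source list remembers who asked, for later clean-up and recovery, while the cache holds the criteria that new resources are matched against.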
Similar processes are illustrated in FIGS. 24 and 25 that "clean up" Persistent Find Queries by removing them from nodes' source lists of pending persistent finds (steps 768 and 775) and removing them from Area Managers' persistent find caches (step 770).
Persistent Find Recovery Process

Because important information about distributed Persistent Find Queries is kept at the Area Manager nodes, and to a lesser extent at the other managerial nodes, a recovery process must be used when one of these nodes crashes or becomes disconnected from the rest of the PLN hierarchy. FIGS. 26 and 27 represent the processes used to provide recovery when the Area Manager (FIG. 26) or another managerial node (FIG. 27) goes down.
When a machine logs in to its new parent Area Manager, selected by the Election process, the child machine sends its source list of pending persistent finds to its new parent (step 800). The new Area Manager receives this list (step 801) and updates its own source list of pending persistent finds using the information received from its children (step 802). The new Area Manager then sends a replenish cache request to its parent (step 803). The other managers receive the request (step 805) and send it to all of the children in the manager's source list of pending persistent finds (step 806). If the sender is the manager's child (step 807), the manager sends the request up the PLN hierarchy to its parent (step 808). Eventually, the other Area Managers in PLN 33 receive the replenish cache request (step 809), and if the new Area Manager has a Query in its persistent find cache (step 810), the receiving Area Manager replies to the new Area Manager with matching queries from its persistent find cache (step 811). The new Area Manager then updates its own persistent find cache with the replies from other Area Managers in PLN 33 (step 804).
FIG. 27 describes the situation that exists when a manager other than an Area Manager goes down. The new manager's children send their source lists of pending persistent finds to the new manager (step 812). The new manager receives these lists (step 813) and updates its list of pending persistent finds with the information sent from its children (step 814). If any of the queries are scoped higher than this level (step 815), then the queries are sent up the PLN hierarchy to the new manager's parent (step 816). The new manager's parent verifies its source list of pending persistent finds with the information obtained from its new child (step 817).
Remove Resource Process

When an application withdraws its resources from the PLN hierarchy, Resource Layer 33 executes the Remove Resource process illustrated in FIG. 28. The node at which the resource originated first checks to see if the resource exists in its resource database (step 901). If the resource exists, the originating node removes the resource from the database (step 903) and sends the remove resource request to its parent Area Manager (step 904). If not, the originating node returns an error to the user (step 902). The Area Manager receives the remove resource request (step 905) and removes the resource from its area manager resource cache (step 906).
Context Bridge Layer

FIG. 29A illustrates the operation of Context Bridge Layer 38. The main function of the Context Bridge Layer is the Routing Process (block 1000), which routes a Protocol Data Unit ("PDU") from a source node to a destination node. The source node and the destination node may share a routable protocol. A routable protocol is defined as a protocol that allows a decision about where a PDU must be sent in order to reach its destination to be made solely from the destination address. The source node merely transfers the PDU to the routable protocol, and the routable protocol itself determines how to get the PDU to its destination by parsing the destination address. Thus, no knowledge of the intermediate nodes used to forward a PDU from the source to the destination is necessary. Within PIPES, TCP/IP and SNA are routable protocols, whereas IPX, NetBIOS, and DLC are non-routable protocols.
If the source node and the destination node share a non-routable protocol, or if the source and destination do not share any protocol at all, intermediate nodes must be used to "bridge" the source and destination nodes. In this case, the Routing Process uses the Routing Information Database ("RIDB", shown as block 1400) to determine how to route a PDU from source to destination. The RIDB contains the information necessary to route a PDU to a non-routable protocol or to a protocol that the source node does not support. The RIDB contains two caches: a source routing cache (block 1401) is used for non-routable protocols, and a next-hop routing cache (block 1402) is used for dissimilar protocol bridging. The source routing cache is populated through the Route Discovery Process (block 1100) and is validated through the Route Validation Process (block 1200). The next-hop routing cache is populated through the Route Advertisement Process (block 1300).
FIG. 29B illustrates a system 1600 in which the context bridge of the present invention can be advantageously used. The context bridges can be used to route packets generated by nodes using protocols of different levels, as defined in the International Organization for Standardization ("ISO") Reference Model. For example, system 1600 contains two nodes 1610 and 1630 which use the SNA (APPC) and DLC protocols, respectively. These two protocols are at different ISO levels: the SNA protocol is at a higher level while the DLC is at the data link level. In order to route packets from node 1610 to node 1630 through a network 1640, it is necessary to use a node 1620 containing a context bridge which can bridge the SNA (APPC) and DLC protocols. Thus, the packet generated by node 1610 is first routed to node 1620 via path 1642, which then routes the packet to node 1630 via path 1643.
Similarly, if it is desirable to route a message generated by node 1610 to a node 1650 which uses the UDP protocol (at the ISO transport level), it is necessary to use a node 1660 containing a context bridge which can bridge the SNA and UDP protocols. Thus, the packet generated by node 1610 is first routed to node 1660 via path 1645, which then routes the packet to node 1650 via path 1646.
Routing Process

FIG. 30 depicts a flowchart of the Context Bridge Routing Process. When the source node's Context Bridge Layer receives a PDU to be sent to a given destination node, the source node looks at the destination address to determine whether the destination has a routable protocol (step 1001).

If the destination has a routable protocol, the source node determines whether or not it supports the same routable protocol as the destination (step 1002). If the source and destination share the same routable protocol, the source sends the PDU to the destination using the transport driver for the shared routable protocol (step 1003). If the source and destination do not share the same routable protocol, the source searches its RIDB next-hop routing cache for a route to the destination (step 1004). The source node then checks to see whether a route exists in the RIDB (step 1006). If a route is found, the source sends the PDU to the intermediate node specified by the route found in the RIDB (step 1007). If a route is not found, the source returns an error stating that the destination is not reachable (step 1009).
If the destination has a non-routable protocol, the source searches its RIDB source routing cache for a route to the destination (step 1005). The source node then checks to see whether a route exists in the RIDB (step 1008). If a route is found, the source sends the PDU to the intermediate node specified by the route found in the RIDB (step 1007). If a route is not found, the source executes the Route Discovery Process to find a route to the destination (step 1011). The source node then ascertains whether a route was found by the Route Discovery Process (step 1012). If a route was found by Route Discovery, the source node updates its RIDB source routing cache (step 1010), and sends the PDU to the intermediate node specified by the route (step 1007). If a route was not found, the source node returns an error that the destination is not reachable (step 1009).
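The routing decision of FIG. 30 can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the `Node` class, its attribute names, and the `(action, target)` return convention are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Minimal stand-in for a PIPES node and its RIDB caches (assumed layout)."""
    address: str
    protocols: set
    routable: bool = True
    ridb_next_hop: dict = field(default_factory=dict)       # dest address -> next-hop address
    ridb_source_routes: dict = field(default_factory=dict)  # dest address -> list of hop addresses

    def discover_route(self, dest):
        # Placeholder for the Route Discovery Process of FIG. 31.
        return None

def route_pdu(source, dest):
    """Return (action, target) for a PDU, following steps 1001-1012 of FIG. 30."""
    if dest.routable:                                        # step 1001
        if source.protocols & dest.protocols:                # step 1002
            return ("send-direct", dest.address)             # step 1003
        hop = source.ridb_next_hop.get(dest.address)         # steps 1004, 1006
        if hop is not None:
            return ("send-via", hop)                         # step 1007
        return ("error", "destination not reachable")        # step 1009
    # non-routable protocol: consult the RIDB source routing cache
    route = source.ridb_source_routes.get(dest.address)      # steps 1005, 1008
    if route is None:
        route = source.discover_route(dest)                  # step 1011
        if route is None:                                    # step 1012
            return ("error", "destination not reachable")    # step 1009
        source.ridb_source_routes[dest.address] = route      # step 1010
    return ("send-via", route[0])                            # step 1007
```

Using the example of FIG. 29, a node 1610 that knows bridge 1620 as the next hop for node 1630 would get `("send-via", "1620")` back from `route_pdu`.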
Route Discovery Process

FIG. 31 describes the Route Discovery Process, which is used to update the RIDB source routing cache with source routes to individual destinations. A source node initiates the Route Discovery Process when a route to a destination with a non-routable protocol needs to be found. First, a source node sends a Route Discovery Packet to all of the active context bridges about which it has information (step 1101). A node is an active context bridge if it supports more than one protocol; the node acts as a bridge between the protocols found at that node. All of the nodes in the network find out about active context bridges through the Route Advertisement Process.
A context bridge that receives the source node's Route Discovery Packet first determines whether it is a reply packet (step 1107). If it is a reply packet, the receiving node forwards the packet back to the source node using the route specified in the reply packet (step 1112). If it is not a reply packet, the node receiving the Route Discovery Packet inserts its own address into the packet (step 1108). The node then checks to see if it is the intended destination of the packet (step 1109). If the node is the intended destination of the packet, said node changes the type of the packet to REPLY (step 1111), and forwards the packet back to the source using the route specified in the Route Discovery Packet (step 1112). If the receiving node is not the destination, the receiving node forwards the packet to all context bridges to which it is connected except the context bridge from which it originally received the packet (step 1110).
The source node waits to see if a reply is received (step 1102). If no reply is received within a specified time period, the source returns an error that the destination is unreachable (step 1103). If a reply is received, the source node checks if there is already a valid route to the destination (step 1104). If there is already a valid route, the source discards the reply packet (step 1105). Otherwise, the source node updates its RIDB source routing cache with the route specified in the reply packet (step 1106).
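The bridge-side handling of a Route Discovery Packet (steps 1107-1112) might look like the following sketch. The packet layout (a dict with `type`, `dest`, `route`, and `from` fields) and the return conventions are assumptions for illustration, not the patent's wire format.

```python
def handle_discovery_packet(node_address, node_bridges, packet):
    """Context-bridge handling of a Route Discovery Packet (FIG. 31).
    Mutates the packet in place and returns (action, detail)."""
    if packet["type"] == "REPLY":                            # step 1107
        # forward the reply toward the source along the recorded route
        return ("forward-to-source", packet["route"])        # step 1112
    packet["route"].append(node_address)                     # step 1108
    if packet["dest"] == node_address:                       # step 1109
        packet["type"] = "REPLY"                             # step 1111
        return ("forward-to-source", packet["route"])        # step 1112
    # step 1110: flood to every attached context bridge except the sender
    return ("flood", [b for b in node_bridges if b != packet["from"]])
```

Because each hop appends its own address before flooding, the packet that finally reaches the destination carries the complete source route that the reply then retraces.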
Route Validation Process

FIG. 32 illustrates the Route Validation Process, which is used to check the validity of the routes contained in the RIDB source routing cache. The source node sends a Route Validation Packet to all of the destination nodes in its RIDB source routing cache that have not been marked as valid (step 1201). The source then sets a timer (step 1202) and listens for validation replies (step 1203).
The end nodes also listen for Route Validation Packets (step 1209) and check to see if a Validation Packet is received (step 1210). If a Validation Packet is not received within a specified time period, the end nodes continue listening for Route Validation Packets (step 1209). If a Validation Packet is received, the end nodes validate the route specified in the Route Validation Packet (step 1211) and return the Packet to the sender (step 1212).
The source node checks to see whether a validation reply has been received (step 1204). If a validation reply is received, the source node marks the source route to the destination as valid in the RIDB source routing cache (step 1205). If a validation reply is not received, the source node checks the timer (step 1206). If the timer has not expired, the source node continues to listen for validation replies (step 1203). If the timer has expired, the source node will reset the timer (step 1202) if the retry threshold has not been exceeded (step 1207). If the retry threshold has been exceeded, the source node removes the invalid source route from the RIDB source routing cache (step 1208).
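One expiry of the source node's validation timer (steps 1204-1208) could be modeled as below. The cache and retry-counter structures are assumptions; the patent does not specify data layouts, only the decision flow.

```python
def on_validation_timer(cache, replies, retries, retry_threshold):
    """One timer cycle of the source side of the Route Validation Process
    (FIG. 32). cache maps destination -> {'valid': bool}; replies is the
    set of destinations whose validation reply arrived before the timer
    expired; retries counts timer cycles with no reply per destination."""
    for dest in list(cache):
        if cache[dest]["valid"]:
            continue                                 # already validated
        if dest in replies:                          # step 1204
            cache[dest]["valid"] = True              # step 1205
        else:                                        # steps 1206-1207
            retries[dest] = retries.get(dest, 0) + 1
            if retries[dest] > retry_threshold:
                del cache[dest]                      # step 1208: drop invalid route
    return cache
```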
Route Advertisement Process

FIG. 33 represents the Route Advertisement Process, a process which is executed continuously at every active context bridge and end node. Each context bridge periodically sends a broadcast message known as a Routing Advertisement Packet ("RAP") (step 1301), and each end node listens for RAP broadcasts (step 1305). The RAP preferably contains the following information: the protocols that can be handled by the context bridge and the number of hops required. All context bridges and end nodes then wait until a RAP broadcast is received (steps 1302 and 1306). If a RAP broadcast is received, the node receiving the broadcast determines if there is any change in routing information by comparing the RAP broadcast with its RIDB next-hop routing cache (steps 1303 and 1307). If changes are necessary, the receiving node updates its RIDB next-hop routing cache (steps 1304 and 1308).
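A RAP receiver's cache update (steps 1303-1308) could be sketched as below. The patent only says the node checks for "any change" against its RIDB next-hop cache; preferring the advertisement with the fewer hops is an assumed policy for this example, as is the dictionary layout.

```python
def on_rap_received(next_hop_cache, rap):
    """Update an RIDB next-hop routing cache from a Routing Advertisement
    Packet. rap: {'bridge': address, 'protocols': set, 'hops': int}.
    Returns True if the cache changed (steps 1304/1308)."""
    changed = False
    for proto in rap["protocols"]:
        entry = next_hop_cache.get(proto)
        # steps 1303/1307: compare the advertisement against the cache;
        # here a new protocol, or a shorter hop count, counts as a change
        if entry is None or rap["hops"] < entry["hops"]:
            next_hop_cache[proto] = {"bridge": rap["bridge"],
                                     "hops": rap["hops"]}
            changed = True
    return changed
```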
Unlimited Levels

In the preferred embodiment of the present invention, the number of levels in the PLN hierarchy is not limited. FIG. 34 illustrates the steps that are preferably taken by the developer of system 100 (the system developer), the application developer, and the end user to implement a larger number of levels than the default number of levels (e.g., five). The maximum number of levels of a given implementation is set when the PIPES kernel and PAPI library code is compiled. If it is desirable to have greater flexibility in their PIPES and a greater number of levels in the hierarchy, the PIPES kernel and PAPI library need to be recompiled.

The system developer changes the MinLevel and MaxLevel parameters that are hard-coded in a header file of the software (step 1501). The PAPI library (step 1502) and PIPES kernel (step 1503) will be recompiled, and the new PAPI library and PIPES kernel are distributed to the application developer (step 1504).

The application developer receives these components from the system developer (step 1505) and makes any necessary modifications to its own PIPES application (step 1506). The application developer then recompiles its own PIPES application with the new PAPI library (step 1507) and distributes the new PIPES application and PIPES kernel to the end user (step 1508).

The end user receives these components from the application developer (step 1509) and installs them on all of the nodes in the PLN (step 1510). After making any necessary modifications to its PIPES configuration (step 1511), the end user finally restarts the system by loading the PIPES kernel (step 1512) and the PIPES application (step 1513). At this point, the end user can realize the number of levels desired in the PLN hierarchy.
While the present invention has been described with what is presently considered to be the preferred embodiment, it is to be understood that the appended claims are not to be limited to the disclosed embodiment but, on the contrary, are intended to cover modifications, variations, and equivalent arrangements which retain any of the novel features and advantages of the invention.
If kernel N5 sends its own role call broadcast message after kernel N7 has already done so, kernel N7 compares its credentials with those of kernel N5. If kernel N5's credentials are higher, kernel N7 abandons Role Call and kernel N5 assumes the managerial role left vacant by the disappearance of kernel N4. If kernel N7's credentials are higher, kernel N5 abandons Role Call and kernel N7 assumes kernel N4's vacant managerial role at the end of the timeout period.
FA965040.021/19431 701 AMENDED SHEET
If kernel N4 has reappeared on the network and has received kernel N5's broadcast message i or kernel N7's broadcast message h, kernel N4 responds by sending an acknowledgment message to kernel N5, represented by line j, or to kernel N7, represented by line k. If kernel N4 has not reappeared on the network, kernel N5 and kernel N7 continue their Role Call processes.
Monitor

FIGS. 10 and 11 depict the child and parent Monitor processes, which are used by a parent and its children to keep track of one another.
The parent has its own "heartbeat" timer set to the slowest heartbeat interval of all of its children. The parent initially resets its heartbeat timer at the beginning of the Monitor process (step 312) and listens for heartbeat messages from its children (step 313). A child participating in the Monitor process with its parent first sends a heartbeat message to its parent (step 301) and waits for an acknowledgment. If a heartbeat message is received by the parent (step 314), the parent will send a heartbeat acknowledgment to the child (step 315) and check off the child in its list of children (step 316). The acknowledgment message contains a heartbeat offset value to scatter the heartbeat intervals among its children. If the child receives the heartbeat acknowledgment (step 302), the child modifies its heartbeat interval (step 306) and enters a wait period (step 307). If the child does not receive a heartbeat acknowledgment, it sends another heartbeat message to its parent (step 303). If a heartbeat acknowledgment is received (step 304) at this time, the child then modifies its heartbeat interval (step 306) and enters the wait period (step 307). If the child still does not receive a heartbeat acknowledgment, the child assumes that it has become orphaned and begins the Login process (step 305).
When the parent's heartbeat timer expires (step 317), the parent checks its list of children for missing heartbeat messages (step 318). If the parent detects a missing heartbeat, the parent sends a heartbeat message to the missing child (step 319). If the parent does not receive a heartbeat acknowledgment from the missing child (step 320), the parent de-registers the child (step 321).
During its wait period (step 307), the child listens for a heartbeat message from its parent (step 308). If a heartbeat message is received by the child (step 309), the child sends a heartbeat acknowledgment to its parent (step 310), modifies its heartbeat interval (step 306), and enters the wait period again (step 307). At the end of the wait period (step 311), the child begins the Monitor process once again (step 301).
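The child's side of one Monitor cycle (steps 301-307) can be sketched as follows. The callable parameters stand in for the real kernel messaging, and the acknowledgment format (a dict carrying the heartbeat offset) is an assumption.

```python
def child_monitor_step(send_heartbeat, recv_ack, interval):
    """Child side of one Monitor cycle (FIGS. 10-11).
    recv_ack() returns an acknowledgment dict ({'offset': n}) or None on
    timeout; returns ('wait', next_interval) or ('login', None)."""
    send_heartbeat()                      # step 301
    ack = recv_ack()                      # step 302
    if ack is None:
        send_heartbeat()                  # step 303: retry once
        ack = recv_ack()                  # step 304
    if ack is None:
        # step 305: still no acknowledgment; the child assumes it is
        # orphaned and begins the Login process
        return ("login", None)
    # step 306: the parent's offset scatters heartbeat intervals among
    # the children, then the child enters its wait period (step 307)
    return ("wait", interval + ack["offset"])
```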
FIG. 12 shows the periodic check-in messages, or "heartbeats," passed between the parent and child during the Monitor process. In FIG. 12, kernels N3 and N4 (represented by circles 63 and 64, respectively) are the children of kernel N2 (represented by circle 62). Kernel N2 is in turn the child of kernel N1 (represented by circle 61). Messages d1 through d3 represent heartbeat messages from child to parent, while messages e1 through e3 represent heartbeat acknowledgments from parent to child. Messages f1 through f3 represent heartbeat messages from parent to child, while messages g1 through g3 represent heartbeat acknowledgments from child to parent.
Election

PIPES kernels engage in a distributed Election (FIG. 13) to determine the winner when role conflicts arise. Two or more managers may claim managerial responsibility over the same namespace when there are problems in the underlying physical connections that cause fragmentation of the network. Collisions in the namespace are primarily detected through either role call or login broadcasts, described above. When a kernel detects a namespace collision, it informs the principals, which in turn execute the Election process. New participants may join an Election that is already in progress. Because the Election is fully distributed, each kernel separately conducts the Election and arrives at the result.
When a kernel detects a role conflict or is informed of one, the kernel begins the Election process by starting an election timer and opening an election database (step 401). The kernel stores the election participants known so far and sends an election request to each one (step 402). This message consists of all known kernels that are participating in the election. The kernel then listens for any election traffic on the network (step 403). If the kernel receives an election response (step 404), which contains a list of known participants, the kernel stores any new election participants in the database and sends each one an election request (step 402). If another election request is received (step 405), the kernel sends an election response to the originator (step 406), updates the election database, and sends election requests to the new participants (step 402). When the election timer expires (step 407), the kernel queries its election database to determine the winner (step 408). The winner of an election depends on the number of roles each participating kernel has already assumed, whether the participating kernels are active context bridges, and the current state of each kernel. If the kernel is the winner of the election (step 409), the kernel sends an election result message to all election participants (step 410). If the kernel loses the election, the kernel will resign its post as manager (step 411), informing all of its children of their new parent. All participants in the election verify the election result and finally close their election databases (step 412).
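Two pieces of the Election can be sketched compactly: merging newly learned participants into the election database (steps 402, 404-406) and querying the database for a winner when the timer expires (step 408). The patent names the winner criteria but not their exact ordering, so the ordering below (fewer roles wins, context bridges preferred, name as tie-break) is an assumption, as are the data structures.

```python
def merge_participants(db, participants):
    """Steps 404-406: store any new election participants and return the
    set of kernel names that must now be sent an election request
    (step 402). db and participants map kernel name -> credentials."""
    new = {k: v for k, v in participants.items() if k not in db}
    db.update(new)
    return set(new)

def election_winner(db):
    """Step 408: determine the winner from the election database.
    Credentials are assumed to be {'roles': int, 'bridge': bool};
    the comparison order is illustrative only."""
    return min(db, key=lambda k: (db[k]["roles"],       # fewer roles first
                                  not db[k]["bridge"],  # bridges preferred
                                  k))                   # deterministic tie-break
```

Because every kernel runs the same deterministic comparison over the same converged database, each arrives at the same winner independently, matching the fully distributed character of the Election.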
FIG. 14 illustrates an example of the Election process. Suppose that kernels A and B (represented by circles 71 and 72, respectively) have detected role conflicts independently. Kernel A will send an election request message (arrow l) to kernel B.
This message will consist of participants known to kernel A, at this point being just kernels A and B. When kernel B receives this message, kernel B will send kernel A an election response message (arrow m). Later, kernel C detects a role conflict with kernel B. Kernel C will then send an election request message (arrow n) to kernel B. Kernel B will update its election database with the new entrant kernel C and will send an election response message (arrow o) back to kernel C. This message will contain the election participants known to kernel B at this point, namely, kernels A, B, and C. When kernel C receives this message, it will detect the new contestant kernel A, update its election database, and send an election request message (arrow p) to kernel A. At this point, kernel A will become aware of the new contestant (from its perspective), update its database with kernel C's credentials, and respond to kernel C's request (arrow q). In the same fashion, when kernel D enters the election only aware of kernel A, it will soon be aware of kernels B and C.
Logout

Logout (FIGS. 15 & 16) is a procedure by which a kernel de-registers from its parent. Logout may be initiated as part of the kernel shutdown logic, or as a result of resigning as a manager of a particular level of the hierarchy. A child kernel (shown as kernel N2 in FIG. 16) sends a logout request (represented by arrow x) to its parent, shown as kernel N1 in FIG. 16 (step 501). When the parent receives the logout request from its child (step 506), it sends a logout acknowledgment (shown as arrow y in FIG. 16) to the child (step 507) and de-registers the child (step 508). If the child is a manager (step 503), the child will send messages (represented by messages z1 through z5 in FIG. 16) informing all of its children (i.e., kernels N3, N4, and N5 in FIG. 16) that it is no longer their parent (step 504). In addition, the parent kernel will nominate a successor from among its children by nominating the winner of an election process which it performs on its children (step 505).
Resource Layer

The Resource Layer (block 32 in FIG. 3) is responsible for managing all of the resources distributed throughout the PIPES network hierarchy. A resource is a functional subset of a PIPES application that is made available to other PIPES applications executing at other nodes on the network. A PIPES resource can be thought of as a well-defined service element, where one or more elements, when considered as a whole, combine to form a complete service.
FIG. 17 describes the life cycle of a resource in PIPES. A resource enters the network through the Add Resource process (block 600). In order to utilize the services provided by a resource, an application must execute the Find Resource Process (block 700) to determine its location within the PIPES address space. For example, after executing a Find Query and obtaining the address of an available resource, an application might attempt to establish a session with the resource through Session Services 35.
If a resource is not available at the time an application executes a Find Query, the application might alternatively execute a Persistent Find Query, which will notify the application of a resource's availability as soon as a resource meeting the search criteria enters the network through the Add Resource Process. In this case, Area Managers in PIPES maintain caches of pending Persistent Find Queries to facilitate an immediate response to such a query. If an Area Manager were to become disconnected from the rest of the PIPES hierarchy through a physical network failure, a recovery mechanism (block 800) is employed to recreate the persistent find cache at the new Area Manager that takes over the disconnected manager's responsibilities.
During its lifetime on the network, a resource is available to provide services to applications on the network. If the application that owns the resource removes the resource from the network, the Resource Layer executes the Remove Resource process (block 900).

Add Resource Process

FIG. 18 illustrates the Add Resource process, which is used to introduce an application's resource into PLN 33. The node at which the resource originates first checks its local resource database to determine whether a resource with the same name already exists (step 601). If such a resource does exist, the originating node returns an ERROR to the user's application (step 602). If the resource does not exist, the originating node adds an entry for the resource in its local database (step 603). The originating node then checks its persistent find query cache to determine whether an application executing at the node is waiting for a resource (step 604). If the new resource matches any of the search criteria in the persistent find cache, then the originating node sends the new resource's attributes to the originating user's application that initiated the Persistent Find Query (step 605). The originating node then removes from the cache the Persistent Find Query for which the new resource matched the search criteria (step 606). If the scope of the newly removed persistent find query is greater than machine level (step 607), then the originating node sends a Clean Persistent Find Query to its parent node (step 608). At the end of the Persistent Find processing, or if no Persistent Find Query was matched by the new resource, the originating node sends an add resource request to its parent Area Manager (step 609).
If an Area Manager receives an add resource request from one of its children (step 610), the Area Manager adds the resource to its own separate resource cache (step 611). The Area Manager then checks its own persistent find cache to determine whether the new resource matches any of the criteria of a query in the cache (step 612). If so, the Area Manager sends the resource's attributes to the node that originated the Persistent Find Query (step 613) and removes the Query from its persistent find cache (step 614). If the scope of that Query is greater than area level (step 615), then the Area Manager sends a Clean Persistent Find Query to its parent Group Manager (step 616).
Find Resource Process

An application searching for a resource within the PLN 33 may specify one of three different options for the Find Query which it sends to the PIPES Kernel: Find, Find Next, or Persistent Find. A Find Query will begin searching for resources at the local machine, moving to the area level if no resources are found at the machine level. If no resources are found at the area level, the search continues at the group level, and so on up the PIPES network hierarchy. If a resource is found at a particular level, that resource's attributes are sent to the application requesting the resource. If the application later issues a Find Next Query, the search will continue where the previous search had left off within the PIPES hierarchy.

If the user issues a Persistent Find Query, the originating node first converts it into a regular Find Query, which travels the network just like any other Find Query. If any resource is returned to the user, the Find Query will not persist within the network; however, if no resource is found within the PIPES hierarchy, the Persistent Find Query is stored within the PIPES hierarchy in the Area Managers' persistent find caches.
FIG. 19 depicts the Find Resource process as it executes at the originating node. If a Find or Persistent Find Query is initiated, the originating node clears a resource cache which is used as a buffer to store the resource attributes satisfying the query's search criteria (step 701). Because a Find Query is completely coordinated by the originator of the query, and no state is maintained at any of the intermediate nodes, each query data packet must carry sufficient information to enable the intermediate nodes to conduct their searches. Among the most important pieces of information are the originating node's location within the network, the maximum number of matches that is desired by the originating node (MaxMatches), the current number of matches that have been returned to the originating node (CurrMatches), the scope of the search (Scope), the level at which the search was last conducted (CurrLevel), and the status of the last search at that level (Level Status). When the search begins with a Find Query or a Persistent Find Query, the originating node initializes some of these variables to begin the search at the machine level (step 702). Because a Find Next Query is designed to begin the next search where the previous search left off, a Find Next Query causes the originating node to skip these initialization steps.
The originating node compares CurrMatches to MaxMatches to determine whether the user has already received the maximum number of matches for which it asked (step 703). If CurrMatches is not equal to MaxMatches (CurrMatches can never exceed MaxMatches), then the originating node checks its resource cache to see if any more resources are available to return to the user (step 704). Resources may be left over in the local cache because, although a distributed Find Query may return more than one resource to the originating node, the originating node returns resources to the user one at a time. If there are resources left in the local cache, the originating node returns the first resource to the user (step 705). If the resource cache is empty, the originating node checks the Level Status to determine where the last search left off (step 707). Level Status is set to EOF (i.e., end of find) if there are no resources available at that level. If the Level Status is EOF, the originating node increments CurrLevel to continue the search at the next level of the hierarchy (step 710). If the Level Status is not EOF, the originating node checks CurrLevel to determine whether to begin the search at the local machine before beginning a distributed search (step 708). If CurrLevel is set to Machine, the originating node searches its local resource database to see if local resources may match the search criteria (step 709). If a local resource is available, the originating node copies up to MaxMatches resources' attributes to the query's resource cache, and sets CurrMatches to the number of matches found and copied to the cache (step 706). The originating node then returns the first resource from the cache to the user that requested the resource (step 705). If no local resources are found, the originating node sets the Level Status to EOF (step 711), and then increments CurrLevel to continue the search at the next level (step 707).

If CurrLevel exceeds MaxLevel (step 712) or Scope (step 716), the search has either worked its way through the complete PIPES hierarchy or exceeded the scope of the original query. Thus, if either of these conditions has been met, the search is complete. If not, the originating node sends the Find Query to its parent, the Area Manager, to begin the distributed search (step 713). If resources' attributes are returned in response (step 714), the originating node copies the resources' attributes to the query's resource cache (step 718) and returns the first to the user (step 717). If the search completes unsuccessfully, the originating node checks CurrMatches to see if any resources have been returned to the user (step 715). If CurrMatches is greater than zero, then the user has received all of its resources, and the originating node returns an EOF to the user (step 723). If CurrMatches is zero, and no resources were found on the network, the originating node distributes a Persistent Find Query if the user has so specified (step 719). This entails adding the query to a listing of Persistent Find Queries pending at the node in order to keep track of the sources of the Persistent Find Queries (step 720). If a resource existing at the local machine could possibly match the search criteria of the Query (step 721), the originating node adds the query to its persistent find cache (step 722), which is used to keep track of the search criteria so that resources that meet those criteria may be returned as soon as they are added to PIPES. If the scope of the query is greater than machine level (step 724), then the Persistent Find Query is sent to the Area Manager (step 725).
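The level-escalation core of this process, using the query variables the text names, could be sketched as follows. The numeric level constants and the action strings are assumptions for illustration; the real kernel carries these variables inside the query data packet.

```python
MACHINE, AREA, GROUP, MAX_LEVEL = 1, 2, 3, 5   # assumed level ordering

def next_find_action(q):
    """One decision step of the Find Resource process at the originating
    node (FIG. 19). q carries CurrMatches, MaxMatches, CurrLevel, Scope,
    LevelStatus, and the local resource cache."""
    if q["CurrMatches"] == q["MaxMatches"]:          # step 703
        return "done"
    if q["cache"]:                                   # step 704
        return "return-first-resource"               # step 705
    if q["LevelStatus"] == "EOF":                    # step 707
        q["CurrLevel"] += 1                          # step 710: escalate
    if q["CurrLevel"] > MAX_LEVEL or q["CurrLevel"] > q["Scope"]:
        return "search-complete"                     # steps 712/716
    if q["CurrLevel"] == MACHINE:                    # step 708
        return "search-local-database"               # step 709
    return "send-query-to-area-manager"              # step 713
```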
FIGS. 20 and 21 illustrate how the Resource Layer routes a Find Query throughout PLN 33. FIG. 20 shows the process which is executed at the Area Manager level. When the Area Manager receives a Find Query (step 726), the Area Manager checks CurrLevel to determine the level at which a search is requested (step 727). If CurrLevel is less than Area (step 728), then the Area Manager returns an error to the node that sent the Find Query because the Area Manager received the query by mistake (step 729). If CurrLevel is greater than Area (step 728), the Area Manager will forward the Find Query to its parent (step 732) if the Area Manager received the Find Query from one of its children (step 731). Thus, the Area Manager is just passing on the Find Query because the search should continue at a higher level of the hierarchy. If the search should continue at this level, the Area Manager analyzes the search criteria to determine whether a resource in this area could satisfy the criteria (step 730). If not, the Area Manager returns the Find Query to the sender (step 738). In addition, if CurrMatches is already equal to MaxMatches (step 733), the Area Manager also returns the Find Query to the sender (step 738). Otherwise, the Area Manager searches its resource database looking for a match that is visible to the originating node (step 734). The user that adds a resource to PIPES can specify which applications can utilize its services, or its "visibility" within PIPES. If visible matches are found, a maximum of MaxMatches resources' attributes are copied to the Find Query (step 735). If more than MaxMatches resources are found (step 737), the Area Manager sets the Level Status to OK (step 739) so that the search will continue at this level the next time a Find Next Query is issued. Otherwise, the Area Manager sets the Level Status to EOF to notify the originating node that no more resources are available at this level (step 736). Finally, the Area Manager returns the Find Query to the sender (step 738).
The Find Query Process at managerial levels higher than Area Manager in the PLN hierarchy (FIG. 21) is similar to that at the Area Manager level, except that no searching occurs because only machines and Area Managers possess resource databases. Steps 740 through 747 in FIG. 21 are the same as steps 726 through 733 in FIG. 20. In each case, the node determines whether the search should continue at this level or at a higher level. In this case, a search at this level consists of forwarding the Find Query to each of the manager's children in turn. If any more children have not yet seen the Find Query (step 748), the manager sends the Find Query to the next child (step 749). When no more children are left, the manager sets the Level Status to EOF (step 751) and returns the Find Query to the sender (step 750).
FIGS. 22 and 23 illustrate the process of adding a Persistent Find Query throughout the network, and FIGS. 24 and 25 depict a similar "clean-up" process used to remove a Persistent Find Query from the network. In FIG. 22, an Area Manager node processes a Persistent Find Query received over PLN 33 (step 752). First, if the Area Manager received the Query from one of its children (step 753), the Area Manager adds the query to its source list of pending persistent finds (step 754). If a resource in this area could satisfy the Persistent Find Query's search criteria (step 755), then the Area Manager adds the query to its persistent find cache. If the Scope of the Query is greater than Area level (step 757), the Area Manager sends the Persistent Find Query to its parent (step 758). Similarly, in FIG. 23, a manager at a level higher than Area receives a Persistent Find Query (step 759). If the sender is one of the manager's children (step 760), the manager adds the Query to its source list of pending persistent finds (step 761). If this level is within the search criteria specified in the Query (step 762), the manager forwards the Query to its children (except possibly the child that sent the Query) (step 763). If the Scope of the Query is greater than this level (step 764), then the manager sends the Persistent Find Query to its parent (step 765).
Similar processes are illustrated in FIGS. 24 and 25 that "clean up" Persistent Find Queries by removing them from nodes' source lists of pending persistent finds (steps 768 and 775) and removing them from Area Managers' persistent find caches (step 770).
Persistent Find Recovery Process

Because important information about distributed Persistent Find Queries is kept at the Area Manager nodes, and to a lesser extent at the other managerial nodes, a recovery process must be used when one of these nodes crashes or becomes disconnected from the rest of the PLN hierarchy. FIGS. 26 and 27 represent the processes used to provide recovery when the Area Manager (FIG. 26) or another managerial node (FIG. 27) goes down.
When a machine logs in to its new parent Area Manager, selected by the Election Process, the child machine sends its source list of pending persistent finds to its new parent (step 800). The new Area Manager receives this list (step 801) and updates its own source list of pending persistent finds using the information received from its children (step 802). The new Area Manager then sends a replenish cache request to its parent (step 803). The other managers receive the request (step 805) and send it to all of the children in the manager's source list of pending persistent finds (step 806). If the sender is the manager's child (step 807), the manager sends the request up the PLN hierarchy to its parent (step 808). Eventually, the other Area Managers in PLN 33 receive the replenish cache request (step 809), and if the new Area Manager has a Query in its persistent find cache (step 810), the receiving Area Manager replies to the new Area Manager with matching queries from its persistent find cache (step 811). The new Area Manager then updates its own Persistent Find Cache with the replies from other Area Managers in PLN 33 (step 804).
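The state rebuilt by the FIG. 26 recovery can be sketched as below. This is an illustrative reduction: the `Node` holder and the collapsing of the replenish-cache round trip (steps 803-811) into a single pass over peer Area Managers are assumptions made for brevity.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Illustrative holder for a node's persistent-find state."""
    name: str
    pending_finds: list = field(default_factory=list)
    find_cache: list = field(default_factory=list)

def recover_area_manager(new_mgr, children, peer_area_managers):
    # Steps 800-802: merge the children's source lists of pending finds.
    for child in children:
        for query in child.pending_finds:
            if query not in new_mgr.pending_finds:
                new_mgr.pending_finds.append(query)
    # Steps 803-811, collapsed: the replenish cache request reaches the peer
    # Area Managers, which reply with matching cached queries; step 804 folds
    # the replies into the new manager's own persistent find cache.
    for peer in peer_area_managers:
        for query in peer.find_cache:
            if query in new_mgr.pending_finds and query not in new_mgr.find_cache:
                new_mgr.find_cache.append(query)
```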
FIG. 27 describes the situation that exists when a manager other than an Area Manager goes down. The new manager's children send their source lists of pending persistent finds to the new manager (step 812). The new manager receives these lists (step 813) and updates its list of pending persistent finds with the information sent from its children (step 814). If any of the queries are scoped higher than this level (step 815), then the queries are sent up the PLN hierarchy to the new manager's parent (step 816). The new manager's parent verifies its source list of pending persistent finds with the information obtained from its new child (step 817).
Remove Resource Process

When an application withdraws its resources from the PLN hierarchy, Resource Layer 33 executes the Remove Resource Process illustrated in FIG. 28. The node at which the resource originated first checks to see if the resource exists in its resource database (step 901). If the resource exists, the originating node removes the resource from the database (step 903) and sends the remove resource request to its parent Area Manager (step 904). If not, the originating node returns an error to the user (step 902). The Area Manager receives the remove resource request (step 905) and removes the resource from its area manager resource cache (step 906).
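The Remove Resource Process of FIG. 28 reduces to a short check-then-delete sequence. The sketch below is illustrative only; representing the resource database and area manager resource cache as Python sets is an assumption.

```python
def remove_resource(origin_db, area_cache, resource):
    """Illustrative sketch of FIG. 28; the set-based stores are assumptions."""
    if resource not in origin_db:       # step 901: does the resource exist?
        return "error"                  # step 902: report failure to the user
    origin_db.discard(resource)         # step 903: drop it from the database
    area_cache.discard(resource)        # steps 904-906: the parent Area
                                        # Manager forgets the resource too
    return "ok"
```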
Context Bridge Layer

FIG. 29A illustrates the components of Context Bridge Layer 38. The main function of Context Bridge Layer is the Routing Process (block 1000), which routes a Protocol Data Unit ("PDU") from a source node to a destination node. The source node and the destination node may share a routable protocol. A routable protocol is defined as a protocol that allows a decision about where a PDU must be sent in order to reach its destination to be made solely from the destination address. The source node merely transfers the PDU to the routable protocol, and the routable protocol itself determines how to get the PDU to its destination by parsing the destination address.
Thus, no knowledge of the intermediate nodes used to forward a PDU from the source to the destination is necessary. Within PIPES, TCP/IP and SNA are routable protocols, whereas IPX, NetBIOS and DLC are non-routable protocols.
If the source node and the destination node share a non-routable protocol, or if the source and destination do not share any protocol at all, intermediate nodes must be used to "bridge" the source and destination nodes. In this case, the Routing Process uses the Routing Information Database ("RIDB", shown as block 1400) to determine how to route a PDU from source to destination. The RIDB contains the information necessary to route a PDU to a non-routable protocol or to a protocol that the source node does not support. The RIDB contains two caches: a source routing cache (block 1401) is used for non-routable protocols, and a next-hop routing cache (block 1402) is used for dissimilar protocol bridging. The source routing cache is populated through the Route Discovery Process (block 1100) and is validated through the Route Validation Process (block 1200). The next-hop routing cache is populated through the Route Advertisement Process (block 1300).
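The two-cache RIDB and the routable/non-routable split described above can be sketched as follows. The protocol names come from the text (TCP/IP and SNA routable; IPX, NetBIOS, DLC non-routable), but the class layout and the `needs_bridge` helper are assumptions, not the patent's structures.

```python
# Protocols named in the text as routable within PIPES.
ROUTABLE = {"TCP/IP", "SNA"}

class RIDB:
    """Illustrative Routing Information Database with its two caches."""
    def __init__(self):
        self.source_routing = {}   # block 1401: destination -> source route
                                   # (used for non-routable protocols)
        self.next_hop = {}         # block 1402: protocol -> bridging node
                                   # (used for dissimilar protocol bridging)

    def needs_bridge(self, src_protocols, dest_protocol):
        # Bridging is required for a non-routable protocol, or for a
        # routable protocol the source node itself does not support.
        return (dest_protocol not in ROUTABLE
                or dest_protocol not in src_protocols)
```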
FIG. 29B illustrates a system 1600 in which the context bridge of the present invention can be advantageously used. The context bridges can be used to route packets generated by nodes using protocols of different levels, as defined in the International Organization for Standardization ("ISO") Reference Model. For example, system 1600 contains two nodes 1610 and 1630 which use the SNA (APPC) and DLC protocols, respectively. These two protocols are at different ISO levels: the SNA is at the application level while the DLC is at the data link level. In order to route packets from node 1610 to node 1630 through a network 1640, it is necessary to use a node 1620 containing a context bridge which can bridge the SNA (APPC) and DLC protocols. Thus, the packet generated by node 1610 is first routed to node 1620 via path 1642, which then routes the packet to node 1630 via path 1643.
Similarly, if it is desirable to route a message generated by node 1610 to a node 1650 which uses the UDP protocol (at ISO transport level), it is necessary to use a node 1660 containing a context bridge which can bridge the SNA and UDP protocols. Thus, the packet generated by node 1610 is first routed to node 1660 via path 1645, which then routes the packet to node 1650 via path 1646.
Routing Process

FIG. 30 depicts a flowchart of the Context Bridge Routing Process. When the source node's Context Bridge Layer receives a PDU to be sent to a given destination node, the source node looks at the destination address to determine whether the destination has a routable protocol (step 1001).
If the destination has a routable protocol, the source node determines whether or not it supports the same routable protocol as the destination (step 1002). If the source and destination share the same routable protocol, the source sends the PDU to the destination using the transport driver for the shared routable protocol (step 1003). If the source and destination do not share the same routable protocol, the source searches its RIDB next-hop routing cache for a route to the destination (step 1004). The source node then checks to see whether a route exists in the RIDB (step 1006). If a route is found, the source sends the PDU to the intermediate node specified by the route found in the RIDB (step 1007). If a route is not found, the source returns an error stating that the destination is not reachable (step 1009).
If the destination has a non-routable protocol, the source searches its RIDB source routing cache for a route to the destination (step 1005). The source node then checks to see whether a route exists in the RIDB (step 1008). If a route is found, the source sends the PDU to the intermediate node specified by the route found in the RIDB (step 1007). If a route is not found, the source executes the Route Discovery Process to find a route to the destination (step 1011). The source node then ascertains whether a route was found by the Route Discovery Process (step 1012). If a route was found by Route Discovery, the source node updates its RIDB source routing cache (step 1010), and sends the PDU to the intermediate node specified by the route (step 1007). If a route was not found, the source node returns an error that the destination is not reachable (step 1009).
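The FIG. 30 decision tree can be condensed into a single dispatch function. This is an illustrative sketch; the function signature, the returned tags, and the dictionary shapes of the two RIDB caches are assumptions.

```python
def route_pdu(src_protocols, dest, dest_protocol, routable, next_hop, source_routes):
    """Illustrative dispatch for the FIG. 30 flowchart; names are assumptions."""
    if dest_protocol in routable:                   # step 1001
        if dest_protocol in src_protocols:          # step 1002
            return ("send-direct", dest)            # step 1003
        bridge = next_hop.get(dest_protocol)        # steps 1004/1006
        if bridge is not None:
            return ("send-via", bridge)             # step 1007
        return ("error", "unreachable")             # step 1009
    route = source_routes.get(dest)                 # steps 1005/1008
    if route:
        return ("send-via", route[0])               # step 1007
    return ("route-discovery", dest)                # step 1011: fall back to
                                                    # the Route Discovery Process
```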
Route Discovery Process

FIG. 31 describes the Route Discovery Process, which is used to update the RIDB source routing cache with source routes to individual destinations. A source node initiates the Route Discovery Process when a route to a destination with a non-routable protocol needs to be found. First, a source node sends a Route Discovery Packet to all of the active context bridges about which it has information (step 1101).
A node is an active context bridge if it supports more than one protocol; the node acts as a bridge between the protocols found at that node. All of the nodes in the network find out about active context bridges through the Route Advertisement Process.
A context bridge that receives the source node's Route Discovery Packet first determines whether it is a reply packet (step 1107). If it is a reply packet, the intermediate node forwards the packet back to the source node using the route specified in the reply packet (step 1112). If it is not a reply packet, the node receiving the Route Discovery Packet inserts its own address into the packet (step 1108). The node then checks to see if it is the intended destination of the packet (step 1109). If the node is the intended destination of the packet, the destination node changes the type of the packet to REPLY (step 1111), and forwards the packet back to the source using the route specified in the Route Discovery Packet (step 1112). If the receiving node is not the destination, the receiving node forwards the packet to all context bridges to which it is connected except the context bridge from which it originally received the packet (step 1110).
The source node is waiting to see if a reply is received (step 1102). If no reply is received within a specified time period, the source returns an error that the destination is unreachable (step 1103). If a reply is received, the source node checks if there is already a valid route to the destination (step 1104). If there is already a valid route, the source discards the reply packet (step 1105). Otherwise, the source node updates its RIDB source routing cache with the route specified in the reply packet (step 1106).
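The flooding behavior of FIGS. 31 (each hop appends its address, the destination replies with the accumulated route) can be sketched as a breadth-first walk over the bridge connectivity. This is an illustrative reconstruction; the `bridge_links` adjacency encoding is an assumption, and a real implementation would use timers and per-packet state rather than a synchronous search.

```python
from collections import deque

def discover_route(source, dest, bridge_links):
    """Illustrative flood for FIG. 31: the Route Discovery Packet records
    each node it crosses (step 1108); the destination flips it to REPLY and
    the recorded route retraces back to the source (steps 1111-1112)."""
    seen = {source}
    queue = deque([[source]])
    while queue:
        route = queue.popleft()
        node = route[-1]
        if node == dest:                        # step 1109: destination reached
            return route                        # the REPLY carries this route
        for nxt in bridge_links.get(node, ()):  # step 1110: forward to the
            if nxt not in seen:                 # other connected bridges
                seen.add(nxt)
                queue.append(route + [nxt])
    return None   # step 1103: the source reports the destination unreachable
```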
Route Validation Process

FIG. 32 illustrates the Route Validation Process, which is used to check the validity of the routes contained in the RIDB source routing cache. The source node sends a Route Validation Packet to all of the destination nodes in its RIDB source routing cache that have not been marked as valid (step 1201). The source then sets a timer (step 1202) and listens for validation replies (step 1203).
The end nodes also listen for Route Validation Packets (step 1209) and check to see if a Validation Packet is received (step 1210). If a Validation Packet is not received within a specified time period, the end nodes continue listening for Route Validation Packets (step 1209). If a Validation Packet is received, the end nodes validate the route specified in the Route Validation Packet (step 1211) and return the Packet to the sender (step 1212).
The source node checks to see whether a validation reply has been received (step 1204). If a validation reply is received, the source node marks the source route to the destination as valid in the RIDB source routing cache (step 1205). If a validation reply is not received, the source node checks the timer (step 1206). If the timer has not expired, the source node continues to listen for validation replies (step 1203). If the timer has expired, the source node will reset the timer (step 1202) if the retry threshold has not been exceeded (step 1207). If the retry threshold has been exceeded, the source node removes the invalid source route from the RIDB source routing cache (step 1208).
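The validate-or-purge loop of FIG. 32 can be sketched as below. This is illustrative only: the `replies` set stands in for the asynchronous arrival of validation replies, and the timer is collapsed into a bounded retry loop.

```python
def validate_routes(source_routes, replies, retry_threshold=3):
    """Illustrative pass over FIG. 32; `replies` is the set of destinations
    whose Route Validation Packet came back (an assumed encoding)."""
    for dest in list(source_routes):
        if source_routes[dest]["valid"]:
            continue                               # only unvalidated routes
        for _attempt in range(retry_threshold):    # steps 1202/1206/1207
            if dest in replies:                    # steps 1203-1204: reply
                source_routes[dest]["valid"] = True  # step 1205
                break
        else:
            # Step 1208: retry threshold exceeded, purge the invalid route.
            del source_routes[dest]
```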
Route Advertisement Process

FIG. 33 represents the Route Advertisement Process, a process which is executed continuously at every active context bridge and end node. Each context bridge periodically sends a broadcast message known as a Routing Advertisement Packet ("RAP") (step 1301), and each end node listens for RAP broadcasts (step 1305).
The RAP preferably contains the following information: the protocols that can be handled by the context bridge and the number of hops required. All context bridges and end nodes then wait until a RAP broadcast is received (steps 1302 and 1306). If a RAP broadcast is received, the node receiving the broadcast determines if there is any change in routing information by comparing the RAP broadcast with its RIDB next-hop routing cache (steps 1303 and 1307). If changes are necessary, the receiving node updates its RIDB next-hop routing cache (steps 1304 and 1308).
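The receive side of the Route Advertisement Process can be sketched as follows. This is an illustrative reconstruction; the RAP dictionary layout and the keep-the-cheapest-bridge policy are assumptions (the text says only that the RAP carries the bridge's protocols and hop counts, and that the cache is updated when the comparison shows a change).

```python
def apply_rap(next_hop_cache, rap):
    """Illustrative handling of a received RAP (steps 1302-1304/1306-1308):
    compare the advertisement against the RIDB next-hop routing cache and
    keep, per protocol, the bridge advertising the fewest hops."""
    changed = False
    for protocol, hops in rap["protocols"].items():
        best = next_hop_cache.get(protocol)
        if best is None or hops < best[1]:         # step 1303/1307: a change?
            next_hop_cache[protocol] = (rap["bridge"], hops)  # step 1304/1308
            changed = True
    return changed
```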
Unlimited Levels

In the preferred embodiment of the present invention, the number of levels in the PLN hierarchy is not limited. FIG. 34 illustrates the steps that are preferably taken by the developer of system 100 (the system developer), the application developer, and the end user to implement a larger number of levels than the default number of levels (e.g., five). The maximum number of levels of a certain implementation is set when the PIPES kernel and PAPI library code is compiled. If it is desirable to have greater flexibility in their PIPES and a greater number of levels in the hierarchy, the PIPES kernel and PAPI library need to be recompiled.

The system developer changes the MinLevel and MaxLevel parameters that are hard-coded in a header file of the software (step 1501). The PAPI library (step 1502) and PIPES kernel (step 1503) will be recompiled, and the new PAPI library and PIPES kernel are distributed to the application developer (step 1504).
The application developer receives these components from the system developer (step 1505) and makes any necessary modifications to their own PIPES application (step 1506). The application developer then recompiles its own PIPES application with the new PAPI library (step 1507) and distributes the new PIPES application and PIPES kernel to the end user (step 1508).
The end user receives these components from the application developer (step 1509) and installs them on all of the nodes in the PLN (step 1510). After making any necessary modifications to its PIPES configuration (step 1511), the end user finally restarts the system by loading the PIPES kernel (step 1512) and the PIPES application (step 1513). At this point, the end user can realize the number of levels desired in the PLN hierarchy.
While the present invention has been described with what is presently considered to be the preferred embodiment, it is to be understood that the appended claims are not to be limited to the disclosed embodiment, but on the contrary, are intended to cover modifications, variations, and equivalent arrangements which retain any of the novel features and advantages of the invention.
Claims (23)
1. A method for independently executing software components in a node of a network containing a plurality of nodes, the method comprising the steps of:
generating a logical hierarchy of the roles of the nodes in the network wherein: any node can assume one or multiple roles, the assumption of which neither requires nor precludes the assumption of any other role; and the hierarchy has three or more levels; and
negotiating the role of the nodes when there is a change in the configuration of the network, wherein the node at the lowest level of the hierarchy can assume the role of the highest level of the hierarchy.
2. The method of Claim 1 wherein a node having a managerial role leaves the network and at least one of the remaining nodes participates in a negotiation process of determining which node assumes the managerial role, the negotiating step further comprising the steps, performed by each participating node, of:
broadcasting a message indicating the participating node's interest in assuming the managerial role;
listening, subsequent to the broadcasting step, for messages on the network; and
assuming the managerial role if there is no message on the network which indicates that another node is better qualified to assume the managerial role.
3. The method of Claim 1 further comprising the steps performed by each participating node:
listening, prior to the said broadcasting step, for a specified period of time for messages sent by other participating nodes; and
withdrawing from the process when the messages indicate that there is at least one participating node which is more qualified to assume the managerial role.
4. The method of Claim 1 wherein at least two conflicting nodes claim the same managerial role and at least one of the conflicting nodes participates in a process of determining a node which assumes the managerial role, the negotiating step further comprising the steps, performed by each participating node, of:
setting up a database containing the names of all known nodes participating in the process;
transmitting election messages to nodes included in the database, the election messages containing information relating to the participating nodes known to the sending node;
receiving election messages from other participating nodes;
updating the database using the information contained in the received election messages; and
determining, based on the information contained in the updated database, which one of the participating nodes assumes the managerial role.
5. The method of Claim 1 wherein one of the nodes is a parent node, the method further comprising the step of searching for a parent node when a node enters the network.
6. The method of Claim 5 wherein the searching step further comprises the steps, performed by the entering node, of:
listening to messages for a specified period of time;
determining, if a message is received, the entering node's parent based on the received message;
broadcasting, if no parent is found upon expiration of the specified period of time, a message for searching for its parent;
listening for responses to the broadcasted message and determining whether any one of the responses originates from its parent; and
assuming the role as its own parent when no response is received.
7. The method of Claim 1 wherein one of the nodes is a parent node, the method further comprising the step of registering a child upon its entering the network.
8. The method of Claim 7 wherein said registering step further comprises:
listening to messages sent by entering nodes;
determining whether one of the messages is sent by a child node or a duplicate child node;
if a duplicate child node is detected, informing the duplicate child node of a role conflict; and
if a child node is detected, sending an acknowledge message to the child node.
9. The method of Claim 1 wherein one of the nodes is a parent and one of the remaining nodes is a child, said method further comprising the step of monitoring the status of the parent and the child.
10. The method of Claim 9 wherein the monitoring step further comprises the steps of:
exchanging status messages between the parent and the child at specified time intervals;
searching for a new parent node when the child does not receive status messages from the parent within a predetermined period of time.
11. The method of Claim 9 further comprising the step of de-registering the child when the parent does not receive status message from the child within a predetermined period of time.
12. The method of Claim 1 wherein the logical hierarchy consists of an arbitrary number of levels.
13. The method of Claim 12 wherein the number of levels is changeable.
14. The method of Claim 1 wherein the roles of the nodes in the network are changeable contingent on the requirements of the network.
15. A method for satisfying a request for a resource, the request made by a node in a scalable system interconnecting a plurality of nodes on a digital network, at least one of the plurality of nodes being associated with one or more resources, each resource having an active state in which the resource is available to other nodes and an inactive state in which the resource is not available, the method comprising the steps of:
storing the request if the requested resource is not available;
automatically identifying a resource that becomes available by switching from the inactive to the active state; and automatically informing the node making the request that the requested resource has become available if the resource that has become available is the requested resource.
16. The method of Claim 15 wherein:
at least one of said plurality of nodes contains a cache for storing requests for resources, and the step of storing the request comprises the step of storing the request in the cache.
17. The method of Claim 16 further comprising the step of removing the request from the cache when the requested resource becomes available and satisfies the request.
18. The method of Claim 15 wherein said plurality of nodes are arranged in at least two levels, and wherein nodes in a first level contain information relating to the resources present in nodes in a second level.
19. The method of Claim 18 wherein:
at least one node in the first level contains a cache for storing requests for resources, and the step of storing the request comprises the step of storing the request in the cache.
20. The method of Claim 19 further comprising the step of removing the request from the cache when the requested resource becomes available and satisfies the request.
21. The method of Claim 18 wherein the first level is a parent level and the second level is a child level, each node in the parent level being associated with one or more nodes in the child level and each node in the child level being associated with one node in the parent level.
22. A method for determining routing paths in a context bridge which is able to route packets between nodes having different communication protocols at different levels, the context bridge being one of many context bridges in a heterogeneous network containing a plurality of nodes, the method comprising the steps of:
setting up a list of context bridges and the communication protocols handled by each context bridge in the list;
listening for routing information packets which are periodically broadcast by other context bridges informing recipients of the communication protocols handled by the broadcasting context bridge; and updating the list using the information contained in the received routing information packets.
23. A method for routing packets from a source node to a destination node using at least one context bridge, each node having a routing protocol, the context bridge being one of many context bridges in a heterogeneous network containing aplurality of nodes, the method comprising the steps of:
receiving a packet at the source node;
determining whether the destination node of the packet has a routable protocol;
if the destination has no routable protocol, sending a route discovery packet to discover one or more context bridges so as to route the packet from the source to the destination node, and routing the packet to the destination through the one or more discovered context bridges; and if the destination has a routable protocol, routing the packet from the source to the destination using the routable protocol.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US293,073 | 1994-08-19 | ||
US08/293,073 US5526358A (en) | 1994-08-19 | 1994-08-19 | Node management in scalable distributed computing enviroment |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2197324A1 true CA2197324A1 (en) | 1996-03-07 |
Family
ID=23127543
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002197324A Abandoned CA2197324A1 (en) | 1994-08-19 | 1995-08-18 | Scalable distributed computing environment |
Country Status (8)
Country | Link |
---|---|
US (5) | US5526358A (en) |
EP (1) | EP0776502A2 (en) |
JP (1) | JPH09511115A (en) |
CN (1) | CN1159858A (en) |
AU (1) | AU3944995A (en) |
BR (1) | BR9508731A (en) |
CA (1) | CA2197324A1 (en) |
WO (1) | WO1996007257A2 (en) |
Families Citing this family (192)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0863441A (en) * | 1994-08-26 | 1996-03-08 | Hitachi Ltd | Operating method for parallel systems |
US6282712B1 (en) * | 1995-03-10 | 2001-08-28 | Microsoft Corporation | Automatic software installation on heterogeneous networked computer systems |
US5784612A (en) * | 1995-05-03 | 1998-07-21 | International Business Machines Corporation | Configuration and unconfiguration of distributed computing environment components |
US5771353A (en) * | 1995-11-13 | 1998-06-23 | Motorola Inc. | System having virtual session manager used sessionless-oriented protocol to communicate with user device via wireless channel and session-oriented protocol to communicate with host server |
JPH09245007A (en) * | 1996-03-11 | 1997-09-19 | Toshiba Corp | Processor and method for information processing |
US5668857A (en) * | 1996-03-29 | 1997-09-16 | Netspeed, Inc. | Communication server apparatus and method |
US6385203B2 (en) | 1996-03-29 | 2002-05-07 | Cisco Technology, Inc. | Communication server apparatus and method |
US5754790A (en) * | 1996-05-01 | 1998-05-19 | 3Com Corporation | Apparatus and method for selecting improved routing paths in an autonomous system of computer networks |
US6400681B1 (en) * | 1996-06-20 | 2002-06-04 | Cisco Technology, Inc. | Method and system for minimizing the connection set up time in high speed packet switching networks |
US7146230B2 (en) * | 1996-08-23 | 2006-12-05 | Fieldbus Foundation | Integrated fieldbus data server architecture |
US20040194101A1 (en) * | 1997-08-21 | 2004-09-30 | Glanzer David A. | Flexible function blocks |
US6424872B1 (en) * | 1996-08-23 | 2002-07-23 | Fieldbus Foundation | Block oriented control system |
US5805594A (en) * | 1996-08-23 | 1998-09-08 | International Business Machines Corporation | Activation sequence for a network router |
US5778058A (en) * | 1996-10-07 | 1998-07-07 | Timeplex, Inc. | Method of adding a new PBX and new PBX port to an existing PBX network |
US5909550A (en) * | 1996-10-16 | 1999-06-01 | Cisco Technology, Inc. | Correlation technique for use in managing application-specific and protocol-specific resources of heterogeneous integrated computer network |
US6044080A (en) * | 1996-11-19 | 2000-03-28 | Pluris, Inc. | Scalable parallel packet router |
US20050180095A1 (en) * | 1996-11-29 | 2005-08-18 | Ellis Frampton E. | Global network computers |
US7506020B2 (en) | 1996-11-29 | 2009-03-17 | Frampton E Ellis | Global network computers |
US8225003B2 (en) | 1996-11-29 | 2012-07-17 | Ellis Iii Frampton E | Computers and microchips with a portion protected by an internal hardware firewall |
US6732141B2 (en) | 1996-11-29 | 2004-05-04 | Frampton Erroll Ellis | Commercial distributed processing by personal computers over the internet |
US7634529B2 (en) * | 1996-11-29 | 2009-12-15 | Ellis Iii Frampton E | Personal and server computers having microchips with multiple processing units and internal firewalls |
US7926097B2 (en) * | 1996-11-29 | 2011-04-12 | Ellis Iii Frampton E | Computer or microchip protected from the internet by internal hardware |
US7035906B1 (en) | 1996-11-29 | 2006-04-25 | Ellis Iii Frampton E | Global network computers |
US7805756B2 (en) | 1996-11-29 | 2010-09-28 | Frampton E Ellis | Microchips with inner firewalls, faraday cages, and/or photovoltaic cells |
US8312529B2 (en) | 1996-11-29 | 2012-11-13 | Ellis Frampton E | Global network computers |
US7024449B1 (en) | 1996-11-29 | 2006-04-04 | Ellis Iii Frampton E | Global network computers |
US6167428A (en) * | 1996-11-29 | 2000-12-26 | Ellis; Frampton E. | Personal computer microprocessor firewalls for internet distributed processing |
US6725250B1 (en) * | 1996-11-29 | 2004-04-20 | Ellis, Iii Frampton E. | Global network computers |
US5848145A (en) * | 1996-12-20 | 1998-12-08 | Lucent Technologies Inc. | Automatic learning of network routing using random routes |
US6058423A (en) * | 1996-12-23 | 2000-05-02 | International Business Machines Corporation | System and method for locating resources in a distributed network |
US5930264A (en) * | 1997-02-06 | 1999-07-27 | Telefonaktiebolaget L M Ericsson (Publ) | Inter-node signaling for protocol initialization within a communications network |
US6163599A (en) * | 1997-03-20 | 2000-12-19 | Cisco Technology, Inc. | Communication server apparatus and method |
US6151325A (en) * | 1997-03-31 | 2000-11-21 | Cisco Technology, Inc. | Method and apparatus for high-capacity circuit switching with an ATM second stage switch |
US6934249B1 (en) | 1997-04-01 | 2005-08-23 | Cisco Technology, Inc. | Method and system for minimizing the connection set up time in high speed packet switching networks |
WO1998053578A1 (en) * | 1997-05-23 | 1998-11-26 | The Trustees Of Columbia University In The City Of New York | Method and system for providing multimedia service in an atm communications network |
US6122276A (en) * | 1997-06-30 | 2000-09-19 | Cisco Technology, Inc. | Communications gateway mapping internet address to logical-unit name |
US6311228B1 (en) * | 1997-08-06 | 2001-10-30 | Microsoft Corporation | Method and architecture for simplified communications with HID devices |
US6999824B2 (en) * | 1997-08-21 | 2006-02-14 | Fieldbus Foundation | System and method for implementing safety instrumented systems in a fieldbus architecture |
US6128662A (en) * | 1997-08-29 | 2000-10-03 | Cisco Technology, Inc. | Display-model mapping for TN3270 client |
US6049833A (en) * | 1997-08-29 | 2000-04-11 | Cisco Technology, Inc. | Mapping SNA session flow control to TCP flow control |
US6366644B1 (en) | 1997-09-15 | 2002-04-02 | Cisco Technology, Inc. | Loop integrity test device and method for digital subscriber line (XDSL) communication |
US6252878B1 (en) | 1997-10-30 | 2001-06-26 | Cisco Technology, Inc. | Switched architecture access server |
US6381682B2 (en) | 1998-06-10 | 2002-04-30 | Compaq Information Technologies Group, L.P. | Method and apparatus for dynamically sharing memory in a multiprocessor system |
US6332180B1 (en) | 1998-06-10 | 2001-12-18 | Compaq Information Technologies Group, L.P. | Method and apparatus for communication in a multi-processor computer system |
US6633916B2 (en) | 1998-06-10 | 2003-10-14 | Hewlett-Packard Development Company, L.P. | Method and apparatus for virtual resource handling in a multi-processor computer system |
US6260068B1 (en) | 1998-06-10 | 2001-07-10 | Compaq Computer Corporation | Method and apparatus for migrating resources in a multi-processor computer system |
US6542926B2 (en) | 1998-06-10 | 2003-04-01 | Compaq Information Technologies Group, L.P. | Software partitioned multi-processor system with flexible resource sharing levels |
US6647508B2 (en) * | 1997-11-04 | 2003-11-11 | Hewlett-Packard Development Company, L.P. | Multiprocessor computer architecture with multiple operating system instances and software controlled resource allocation |
US6199179B1 (en) | 1998-06-10 | 2001-03-06 | Compaq Computer Corporation | Method and apparatus for failure recovery in a multi-processor computer system |
US6216132B1 (en) | 1997-11-20 | 2001-04-10 | International Business Machines Corporation | Method and system for matching consumers to events |
US5918910A (en) * | 1997-12-19 | 1999-07-06 | Ncr Corporation | Product tracking system and method |
ES2546173T3 (en) | 1998-03-13 | 2015-09-21 | Canon Kabushiki Kaisha | Apparatus and procedure for information processing |
US6278728B1 (en) | 1998-03-18 | 2001-08-21 | Cisco Technology, Inc. | Remote XDSL transceiver unit and method of operation |
US6205498B1 (en) | 1998-04-01 | 2001-03-20 | Microsoft Corporation | Method and system for message transfer session management |
US6446206B1 (en) | 1998-04-01 | 2002-09-03 | Microsoft Corporation | Method and system for access control of a message queue |
US6529932B1 (en) | 1998-04-01 | 2003-03-04 | Microsoft Corporation | Method and system for distributed transaction processing with asynchronous message delivery |
US6678726B1 (en) * | 1998-04-02 | 2004-01-13 | Microsoft Corporation | Method and apparatus for automatically determining topology information for a computer within a message queuing network |
US6639897B1 (en) * | 1998-04-22 | 2003-10-28 | Nippon Telegraph And Telephone Corporation | Communication network of linked nodes for selecting the shortest available route |
US6049834A (en) * | 1998-05-08 | 2000-04-11 | Cisco Technology, Inc. | Layer 3 switch unicast protocol |
US6772350B1 (en) | 1998-05-15 | 2004-08-03 | E.Piphany, Inc. | System and method for controlling access to resources in a distributed environment |
US6247109B1 (en) | 1998-06-10 | 2001-06-12 | Compaq Computer Corp. | Dynamically assigning CPUs to different partitions each having an operation system instance in a shared memory space |
US6275912B1 (en) | 1998-06-30 | 2001-08-14 | Microsoft Corporation | Method and system for storing data items to a storage device |
US6256634B1 (en) | 1998-06-30 | 2001-07-03 | Microsoft Corporation | Method and system for purging tombstones for deleted data items in a replicated database |
US6848108B1 (en) * | 1998-06-30 | 2005-01-25 | Microsoft Corporation | Method and apparatus for creating, sending, and using self-descriptive objects as messages over a message queuing network |
US6202089B1 (en) | 1998-06-30 | 2001-03-13 | Microsoft Corporation | Method for configuring at runtime, identifying and using a plurality of remote procedure call endpoints on a single server process |
US6269096B1 (en) | 1998-08-14 | 2001-07-31 | Cisco Technology, Inc. | Receive and transmit blocks for asynchronous transfer mode (ATM) cell delineation |
US6535520B1 (en) | 1998-08-14 | 2003-03-18 | Cisco Technology, Inc. | System and method of operation for managing data communication between physical layer devices and ATM layer devices |
US6381245B1 (en) | 1998-09-04 | 2002-04-30 | Cisco Technology, Inc. | Method and apparatus for generating parity for communication between a physical layer device and an ATM layer device |
US6078957A (en) * | 1998-11-20 | 2000-06-20 | Network Alchemy, Inc. | Method and apparatus for a TCP/IP load balancing and failover process in an internet protocol (IP) network clustering system |
US6700872B1 (en) | 1998-12-11 | 2004-03-02 | Cisco Technology, Inc. | Method and system for testing a utopia network element |
US6834302B1 (en) * | 1998-12-31 | 2004-12-21 | Nortel Networks Limited | Dynamic topology notification extensions for the domain name system |
US6535511B1 (en) | 1999-01-07 | 2003-03-18 | Cisco Technology, Inc. | Method and system for identifying embedded addressing information in a packet for translation between disparate addressing systems |
US6453357B1 (en) | 1999-01-07 | 2002-09-17 | Cisco Technology, Inc. | Method and system for processing fragments and their out-of-order delivery during address translation |
US6449655B1 (en) | 1999-01-08 | 2002-09-10 | Cisco Technology, Inc. | Method and apparatus for communication between network devices operating at different frequencies |
US6633574B1 (en) * | 1999-03-17 | 2003-10-14 | Loytec Electronics Gmbh | Dynamic wait acknowledge for network protocol |
US6889254B1 (en) | 1999-03-30 | 2005-05-03 | International Business Machines Corporation | Scalable merge technique for information retrieval across a distributed network |
US6393423B1 (en) | 1999-04-08 | 2002-05-21 | James Francis Goedken | Apparatus and methods for electronic information exchange |
US7085763B2 (en) * | 1999-04-27 | 2006-08-01 | Canon Kabushiki Kaisha | Device search system |
US6983291B1 (en) * | 1999-05-21 | 2006-01-03 | International Business Machines Corporation | Incremental maintenance of aggregated and join summary tables |
JP3740320B2 (en) * | 1999-05-31 | 2006-02-01 | キヤノン株式会社 | Device search system and device search method |
US6460082B1 (en) * | 1999-06-17 | 2002-10-01 | International Business Machines Corporation | Management of service-oriented resources across heterogeneous media servers using homogenous service units and service signatures to configure the media servers |
US6957254B1 (en) | 1999-10-21 | 2005-10-18 | Sun Microsystems, Inc | Method and apparatus for reaching agreement between nodes in a distributed system |
US6412002B1 (en) * | 1999-11-15 | 2002-06-25 | Ncr Corporation | Method and apparatus for selecting nodes in configuring massively parallel systems |
US6751200B1 (en) * | 1999-12-06 | 2004-06-15 | Telefonaktiebolaget Lm Ericsson (Publ) | Route discovery based piconet forming |
US6775781B1 (en) * | 1999-12-13 | 2004-08-10 | Microsoft Corporation | Administrative security systems and methods |
US6366907B1 (en) | 1999-12-15 | 2002-04-02 | Napster, Inc. | Real-time search engine |
US7310629B1 (en) | 1999-12-15 | 2007-12-18 | Napster, Inc. | Method and apparatus for controlling file sharing of multimedia files over a fluid, de-centralized network |
US6742023B1 (en) | 2000-04-28 | 2004-05-25 | Roxio, Inc. | Use-sensitive distribution of data files between users |
US6834208B2 (en) * | 1999-12-30 | 2004-12-21 | Microsoft Corporation | Method and apparatus for providing distributed control of a home automation and control system |
US6823223B2 (en) * | 1999-12-30 | 2004-11-23 | Microsoft Corporation | Method and apparatus for providing distributed scene programming of a home automation and control system |
US6990379B2 (en) * | 1999-12-30 | 2006-01-24 | Microsoft Corporation | Method and apparatus for providing a dynamic resource role model for subscriber-requester based protocols in a home automation and control system |
US7000007B1 (en) | 2000-01-13 | 2006-02-14 | Valenti Mark E | System and method for internet broadcast searching |
CN100418674C (en) | 2000-02-10 | 2008-09-17 | 特乔尼科斯有限公司 | Plasma arc reactor for the production of fine powders |
GB0004845D0 (en) | 2000-02-29 | 2000-04-19 | Tetronics Ltd | A method and apparatus for packaging ultra fine powders into containers |
US6728715B1 (en) | 2000-03-30 | 2004-04-27 | International Business Machines Corporation | Method and system for matching consumers to events employing content-based multicast routing using approximate groups |
AU9335001A (en) | 2000-04-10 | 2001-10-23 | Tetronics Limited | Twin plasma torch apparatus |
AU2001264944A1 (en) * | 2000-05-25 | 2001-12-03 | Transacttools, Inc. | A method, system and apparatus for establishing, monitoring, and managing connectivity for communication among heterogeneous systems |
CN100520720C (en) * | 2000-06-19 | 2009-07-29 | P·C·克劳斯及合伙人公司 | Distributed simulation |
US20050240286A1 (en) * | 2000-06-21 | 2005-10-27 | Glanzer David A | Block-oriented control system on high speed ethernet |
WO2002003296A1 (en) * | 2000-06-29 | 2002-01-10 | Dynamic Networks, Inc. | Method and system for producing an electronic business network |
US7089301B1 (en) | 2000-08-11 | 2006-08-08 | Napster, Inc. | System and method for searching peer-to-peer computer networks by selecting a computer based on at least a number of files shared by the computer |
FR2814883A1 (en) * | 2000-10-03 | 2002-04-05 | Canon Kk | METHOD AND DEVICE FOR DECLARING AND MODIFYING THE FUNCTIONALITY OF A NODE IN A COMMUNICATION NETWORK |
FR2815213B1 (en) * | 2000-10-05 | 2004-09-24 | Cit Alcatel | TELECOMMUNICATION EQUIPMENT FOR MIGRATION OF CALL CONTROL |
US7203741B2 (en) * | 2000-10-12 | 2007-04-10 | Peerapp Ltd. | Method and system for accelerating receipt of data in a client-to-client network |
US6990528B1 (en) | 2000-10-19 | 2006-01-24 | International Business Machines Corporation | System area network of end-to-end context via reliable datagram domains |
US7113995B1 (en) | 2000-10-19 | 2006-09-26 | International Business Machines Corporation | Method and apparatus for reporting unauthorized attempts to access nodes in a network computing system |
US7099955B1 (en) | 2000-10-19 | 2006-08-29 | International Business Machines Corporation | End node partitioning using LMC for a system area network |
US6978300B1 (en) | 2000-10-19 | 2005-12-20 | International Business Machines Corporation | Method and apparatus to perform fabric management |
US6941350B1 (en) * | 2000-10-19 | 2005-09-06 | International Business Machines Corporation | Method and apparatus for reliably choosing a master network manager during initialization of a network computing system |
US7636772B1 (en) | 2000-10-19 | 2009-12-22 | International Business Machines Corporation | Method and apparatus for dynamic retention of system area network management information in non-volatile store |
US7003559B1 (en) * | 2000-10-23 | 2006-02-21 | Hewlett-Packard Development Company, L.P. | System and method for determining probable network paths between nodes in a network topology |
US20020073257A1 (en) * | 2000-12-07 | 2002-06-13 | Ibm Corporation | Transferring foreign protocols across a system area network |
US7337473B2 (en) * | 2000-12-15 | 2008-02-26 | International Business Machines Corporation | Method and system for network management with adaptive monitoring and discovery of computer systems based on user login |
US7188145B2 (en) * | 2001-01-12 | 2007-03-06 | Epicrealm Licensing Llc | Method and system for dynamic distributed data caching |
US7275100B2 (en) * | 2001-01-12 | 2007-09-25 | Hitachi, Ltd. | Failure notification method and system using remote mirroring for clustering systems |
US7035911B2 (en) | 2001-01-12 | 2006-04-25 | Epicrealm, Licensing Llc | Method and system for community data caching |
US6891805B2 (en) * | 2001-02-06 | 2005-05-10 | Telephonics Corporation | Communications system |
EP1233318A1 (en) * | 2001-02-16 | 2002-08-21 | Abb Research Ltd. | Software coumpounds for a distributed control system |
US7142527B2 (en) * | 2001-02-28 | 2006-11-28 | Nokia Inc. | System and method for transmission scheduling using network membership information and neighborhood information |
US20020184361A1 (en) * | 2001-05-16 | 2002-12-05 | Guy Eden | System and method for discovering available network components |
US6590868B2 (en) * | 2001-06-02 | 2003-07-08 | Redback Networks Inc. | Method and apparatus for restart communication between network elements |
US6954817B2 (en) * | 2001-10-01 | 2005-10-11 | International Business Machines Corporation | Providing at least one peer connection between a plurality of coupling facilities to couple the plurality of coupling facilities |
US6920494B2 (en) | 2001-10-05 | 2005-07-19 | International Business Machines Corporation | Storage area network methods and apparatus with virtual SAN recognition |
US6785760B2 (en) | 2001-10-19 | 2004-08-31 | International Business Machines Corporation | Performance of a PCI-X to infiniband bridge |
US6964047B2 (en) * | 2001-12-20 | 2005-11-08 | Lucent Technologies Inc. | Method and apparatus for a fast process monitor suitable for a high availability system |
ATE345015T1 (en) * | 2002-02-12 | 2006-11-15 | Cit Alcatel | METHOD FOR DETERMINING AN ACTIVE OR PASSIVE ROLE ALLOCATION FOR A NETWORK ELEMENT CONTROL MEANS |
US20030158941A1 (en) * | 2002-02-15 | 2003-08-21 | Exanet, Inc. | Apparatus, method and computer software for real-time network configuration |
DE10206903A1 (en) * | 2002-02-19 | 2003-09-04 | Siemens Ag | Software application, software architecture and method for creating software applications, especially for MES systems |
DE10206902A1 (en) * | 2002-02-19 | 2003-09-11 | Siemens Ag | Engineering process and engineering system for industrial automation systems |
US20030185369A1 (en) * | 2002-03-29 | 2003-10-02 | Oliver Neal C. | Telephone conference bridge provided via a plurality of computer telephony resource algorithms |
US7116643B2 (en) * | 2002-04-30 | 2006-10-03 | Motorola, Inc. | Method and system for data in a collection and route discovery communication network |
US7702786B2 (en) * | 2002-08-09 | 2010-04-20 | International Business Machines Corporation | Taking a resource offline in a storage network |
JP3865668B2 (en) * | 2002-08-29 | 2007-01-10 | 富士通株式会社 | Mobile communication network system |
US20060031439A1 (en) * | 2002-10-29 | 2006-02-09 | Saffre Fabrice T | Method and apparatus for network management |
US20040088361A1 (en) * | 2002-11-06 | 2004-05-06 | Stuart Statman | Method and system for distributing information to services via a node hierarchy |
US8799366B2 (en) * | 2002-12-11 | 2014-08-05 | Broadcom Corporation | Migration of stored media through a media exchange network |
US8028093B2 (en) | 2002-12-11 | 2011-09-27 | Broadcom Corporation | Media processing system supporting adaptive digital media parameters based on end-user viewing capabilities |
WO2004075582A1 (en) * | 2003-02-21 | 2004-09-02 | Nortel Networks Limited | Data communication apparatus and method for establishing a codec-bypass connection |
GB2400200A (en) | 2003-04-05 | 2004-10-06 | Hewlett Packard Development Co | Use of nodes to monitor or manage peer to peer network |
GB0308708D0 (en) * | 2003-04-15 | 2003-05-21 | British Telecomm | A computer system |
WO2004095716A2 (en) | 2003-04-17 | 2004-11-04 | Fieldbus Foundation | System and method for implementing safety instrumented systems in a fieldbus architecture |
PL1627316T3 (en) * | 2003-05-27 | 2018-10-31 | Vringo Infrastructure Inc. | Data collection in a computer cluster |
FI20030796A0 (en) | 2003-05-27 | 2003-05-27 | Nokia Corp | Data collection in a computer cluster |
US7546313B1 (en) * | 2003-06-17 | 2009-06-09 | Novell, Inc. | Method and framework for using XML files to modify network resource configurations |
US20050010386A1 (en) * | 2003-06-30 | 2005-01-13 | Mvalent, Inc. | Method and system for dynamically modeling resources |
US7463612B2 (en) * | 2003-10-30 | 2008-12-09 | Motorola, Inc. | Method and apparatus for route discovery within a communication system |
GB2408355B (en) * | 2003-11-18 | 2007-02-14 | Ibm | A system for verifying a state of an environment |
US7502745B1 (en) * | 2004-07-21 | 2009-03-10 | The Mathworks, Inc. | Interfaces to a job manager in distributed computing environments |
US8726278B1 (en) | 2004-07-21 | 2014-05-13 | The Mathworks, Inc. | Methods and system for registering callbacks and distributing tasks to technical computing works |
US7990865B2 (en) * | 2004-03-19 | 2011-08-02 | Genband Us Llc | Communicating processing capabilities along a communications path |
US8027265B2 (en) * | 2004-03-19 | 2011-09-27 | Genband Us Llc | Providing a capability list of a predefined format in a communications network |
JP4390104B2 (en) * | 2004-04-16 | 2009-12-24 | 株式会社デンソー | Internal combustion engine knock determination device |
US7908313B2 (en) * | 2004-07-21 | 2011-03-15 | The Mathworks, Inc. | Instrument-based distributed computing systems |
JP2008512759A (en) * | 2004-09-13 | 2008-04-24 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | How to manage a distributed storage system |
US20060146829A1 (en) * | 2005-01-04 | 2006-07-06 | Omron Corporation | Network system, nodes connected thereto and data communication method using same |
US7616590B2 (en) * | 2005-07-28 | 2009-11-10 | The Boeing Company | Distributed tempero-spatial query service |
US8731542B2 (en) | 2005-08-11 | 2014-05-20 | Seven Networks International Oy | Dynamic adjustment of keep-alive message intervals in a mobile network |
KR100718097B1 (en) * | 2005-10-25 | 2007-05-16 | 삼성전자주식회사 | Address management and routing method for Wireless Personal Area Network |
US8635348B2 (en) * | 2005-12-07 | 2014-01-21 | Intel Corporation | Switch fabric service hosting |
US7489977B2 (en) * | 2005-12-20 | 2009-02-10 | Fieldbus Foundation | System and method for implementing time synchronization monitoring and detection in a safety instrumented system |
US8676357B2 (en) * | 2005-12-20 | 2014-03-18 | Fieldbus Foundation | System and method for implementing an extended safety instrumented system |
CN100386986C (en) * | 2006-03-10 | 2008-05-07 | 清华大学 | Hybrid positioning method for data duplicate in data network system |
KR101396661B1 (en) * | 2006-07-09 | 2014-05-16 | 마이크로소프트 아말가매티드 컴퍼니 Iii | Systems and methods for managing networks |
US8346239B2 (en) | 2006-12-28 | 2013-01-01 | Genband Us Llc | Methods, systems, and computer program products for silence insertion descriptor (SID) conversion |
US20080195430A1 (en) * | 2007-02-12 | 2008-08-14 | Yahoo! Inc. | Data quality measurement for etl processes |
US20080222634A1 (en) * | 2007-03-06 | 2008-09-11 | Yahoo! Inc. | Parallel processing for etl processes |
WO2008115221A2 (en) | 2007-03-20 | 2008-09-25 | Thomson Licensing | Hierarchically clustered p2p streaming system |
US7689648B2 (en) | 2007-06-27 | 2010-03-30 | Microsoft Corporation | Dynamic peer network extension bridge |
US8125796B2 (en) | 2007-11-21 | 2012-02-28 | Frampton E. Ellis | Devices with faraday cages and internal flexibility sipes |
US20090302588A1 (en) * | 2008-06-05 | 2009-12-10 | Autoliv Asp, Inc. | Systems and methods for airbag tether release |
JP5293426B2 (en) | 2009-06-09 | 2013-09-18 | ソニー株式会社 | COMMUNICATION METHOD, INFORMATION PROCESSING DEVICE, AND PROGRAM |
US8908541B2 (en) | 2009-08-04 | 2014-12-09 | Genband Us Llc | Methods, systems, and computer readable media for intelligent optimization of digital signal processor (DSP) resource utilization in a media gateway |
US10354302B2 (en) * | 2009-08-23 | 2019-07-16 | Joreida Eugenia Torres | Methods and devices for providing fashion advice |
US9300522B2 (en) * | 2009-12-23 | 2016-03-29 | International Business Machines Corporation | Information technology asset management |
US8429735B2 (en) | 2010-01-26 | 2013-04-23 | Frampton E. Ellis | Method of using one or more secure private networks to actively configure the hardware of a computer or microchip |
US20110225297A1 (en) | 2010-03-11 | 2011-09-15 | International Business Machines Corporation | Controlling Access To A Resource In A Distributed Computing System With A Distributed Access Request Queue |
US9448850B2 (en) * | 2010-03-11 | 2016-09-20 | International Business Machines Corporation | Discovering a resource in a distributed computing system |
US9348661B2 (en) * | 2010-03-11 | 2016-05-24 | International Business Machines Corporation | Assigning a unique identifier to a communicator |
US8621446B2 (en) * | 2010-04-29 | 2013-12-31 | International Business Machines Corporation | Compiling software for a hierarchical distributed processing system |
US20110320381A1 (en) * | 2010-06-24 | 2011-12-29 | International Business Machines Corporation | Business driven combination of service oriented architecture implementations |
US8521109B2 (en) | 2010-07-29 | 2013-08-27 | Intel Mobile Communications GmbH | Radio communication devices, information providers, methods for controlling a radio communication device and methods for controlling an information provider |
US8712353B2 (en) | 2010-07-29 | 2014-04-29 | Intel Mobile Communications Technology GmbH | Radio communication devices, information providers, methods for controlling a radio communication device and methods for controlling an information provider |
US8645543B2 (en) | 2010-10-13 | 2014-02-04 | International Business Machines Corporation | Managing and reconciling information technology assets in a configuration database |
CN103262064A (en) * | 2010-12-16 | 2013-08-21 | Et国际有限公司 | Distributed computing architecture |
EP3518504B1 (en) | 2010-12-30 | 2020-09-16 | Peerapp, Ltd. | Methods and systems for transmission of data over computer networks |
EP2659401B1 (en) | 2010-12-30 | 2019-06-26 | Peerapp, Ltd. | Methods and systems for caching data communications over computer networks |
CN102833289B (en) * | 2011-06-16 | 2016-02-17 | 浙江速腾电子有限公司 | A kind of distributed cloud computing resources tissue and method for allocating tasks |
JP6028728B2 (en) * | 2011-07-01 | 2016-11-16 | 日本電気株式会社 | Object placement device, object placement method, and program |
US8577610B2 (en) | 2011-12-21 | 2013-11-05 | Telenav Inc. | Navigation system with point of interest harvesting mechanism and method of operation thereof |
CN112039822B (en) * | 2019-06-03 | 2022-08-02 | 本无链科技(深圳)有限公司 | Method and system for constructing real-time block chain network based on WebRTC |
CN112214694B (en) * | 2019-07-10 | 2023-03-14 | 浙江宇视科技有限公司 | Visible node query method and device, terminal equipment and readable storage medium |
JP7328907B2 (en) * | 2020-01-31 | 2023-08-17 | 株式会社日立製作所 | control system, control method |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4800488A (en) * | 1985-11-12 | 1989-01-24 | American Telephone And Telegraph Company, At&T Bell Laboratories | Method of propagating resource information in a computer network |
US5088032A (en) * | 1988-01-29 | 1992-02-11 | Cisco Systems, Inc. | Method and apparatus for routing communications among computer networks |
US5214646A (en) * | 1990-01-31 | 1993-05-25 | Amnon Yacoby | System and method for interconnecting local area networks |
US5301303A (en) * | 1990-04-23 | 1994-04-05 | Chipcom Corporation | Communication system concentrator configurable to different access methods |
US5222242A (en) * | 1990-09-28 | 1993-06-22 | International Business Machines Corp. | System for locating a node containing a requested resource and for selectively verifying the presence of the resource at the node |
US5280480A (en) * | 1991-02-21 | 1994-01-18 | International Business Machines Corporation | Source routing transparent bridge |
JPH0540151A (en) * | 1991-08-06 | 1993-02-19 | Hokuriku Nippon Denki Software Kk | Scan path failure diagnosis method |
IL99923A0 (en) * | 1991-10-31 | 1992-08-18 | Ibm Israel | Method of operating a computer in a network |
US5323394A (en) * | 1992-04-07 | 1994-06-21 | Digital Equipment Corporation | Selecting optimal routes in source routing bridging without exponential flooding of explorer packets |
US5251213A (en) * | 1992-05-12 | 1993-10-05 | Microcom Systems, Inc. | Multiport source routing token ring bridge apparatus |
US5425028A (en) * | 1992-07-16 | 1995-06-13 | International Business Machines Corporation | Protocol selection and address resolution for programs running in heterogeneous networks |
US5289460A (en) * | 1992-07-31 | 1994-02-22 | International Business Machines Corp. | Maintenance of message distribution trees in a communications network |
US5365523A (en) * | 1992-11-16 | 1994-11-15 | International Business Machines Corporation | Forming and maintaining access groups at the lan/wan interface |
EP0598969B1 (en) * | 1992-11-27 | 1999-02-10 | International Business Machines Corporation | Inter-domain multicast routing |
US5430728A (en) * | 1992-12-10 | 1995-07-04 | Northern Telecom Limited | Single-route broadcast for LAN interconnection |
US5426637A (en) * | 1992-12-14 | 1995-06-20 | International Business Machines Corporation | Methods and apparatus for interconnecting local area networks with wide area backbone networks |
JPH07118717B2 (en) * | 1993-01-05 | 1995-12-18 | 日本電気株式会社 | Multi-protocol packet network configuration method |
JP2576762B2 (en) * | 1993-06-30 | 1997-01-29 | 日本電気株式会社 | Information collection method between nodes in ring network |
US5432789A (en) * | 1994-05-03 | 1995-07-11 | Synoptics Communications, Inc. | Use of a single central transmit and receive mechanism for automatic topology determination of multiple networks |
- 1994
  - 1994-08-19 US US08/293,073 patent/US5526358A/en not_active Expired - Lifetime
- 1995
  - 1995-08-18 WO PCT/US1995/010605 patent/WO1996007257A2/en not_active Application Discontinuation
  - 1995-08-18 BR BR9508731A patent/BR9508731A/en not_active Application Discontinuation
  - 1995-08-18 JP JP8508817A patent/JPH09511115A/en active Pending
  - 1995-08-18 CA CA002197324A patent/CA2197324A1/en not_active Abandoned
  - 1995-08-18 AU AU39449/95A patent/AU3944995A/en not_active Abandoned
  - 1995-08-18 EP EP95937302A patent/EP0776502A2/en not_active Withdrawn
  - 1995-08-18 CN CN95195435A patent/CN1159858A/en active Pending
  - 1995-08-25 US US08/519,634 patent/US5612957A/en not_active Expired - Lifetime
- 1996
  - 1996-03-28 US US08/624,973 patent/US5699351A/en not_active Expired - Lifetime
- 1997
  - 1997-03-10 US US08/813,276 patent/US5793968A/en not_active Expired - Lifetime
  - 1997-07-21 US US08/897,861 patent/US5778185A/en not_active Expired - Lifetime
Also Published As
Publication number | Publication date |
---|---|
WO1996007257A2 (en) | 1996-03-07 |
US5612957A (en) | 1997-03-18 |
WO1996007257A3 (en) | 1996-09-19 |
US5699351A (en) | 1997-12-16 |
US5526358A (en) | 1996-06-11 |
US5778185A (en) | 1998-07-07 |
EP0776502A2 (en) | 1997-06-04 |
US5793968A (en) | 1998-08-11 |
BR9508731A (en) | 1998-12-15 |
CN1159858A (en) | 1997-09-17 |
AU3944995A (en) | 1996-03-22 |
JPH09511115A (en) | 1997-11-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA2197324A1 (en) | Scalable distributed computing environment | |
US11272044B2 (en) | Concurrent process execution | |
US9870413B1 (en) | Direct connections to a plurality of storage object replicas in a computer network | |
JP5798644B2 (en) | Consistency within the federation infrastructure | |
US7899934B2 (en) | Handling un-partitioning of a computer network | |
US7292585B1 (en) | System and method for storing and utilizing routing information in a computer network | |
US7404006B1 (en) | Publishing a network address in a computer network | |
US20140280398A1 (en) | Distributed database management | |
US20070058648A1 (en) | Identifying nodes in a ring network | |
US20030018717A1 (en) | Extensible information distribution mechanism for session management | |
JPH1040226A (en) | Method for recovering group leader in distribution computing environment | |
US7555527B1 (en) | Efficiently linking storage object replicas in a computer network | |
JP3589378B2 (en) | System for Group Leader Recovery in Distributed Computing Environment | |
US8345576B2 (en) | Methods and systems for dynamic subring definition within a multi-ring | |
WO2005006133A2 (en) | Interprocessor communication protocol | |
US7653059B1 (en) | Communication sessions for a computer network | |
US7467194B1 (en) | Re-mapping a location-independent address in a computer network | |
Fan et al. | The raincore distributed session service for networking elements | |
KR100787850B1 (en) | Interprocessor communication protocol with high level service composition | |
Hanssen et al. | RTnet: a real-time protocol for broadcast-capable networks | |
Jia et al. | Group Communications | |
Daniel | The impact of network characteristics on the selection of a deadlock detection algorithm for distributed databases |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| EEER | Examination request | |
| FZDE | Discontinued | |