US20080080438A1 - Methods and systems for centralized cluster management in wireless switch architecture

Info

Publication number
US20080080438A1
Authority
US
United States
Prior art keywords
nodes
cluster
command
response
instruction
Legal status: Granted
Application number
US11/529,988
Other versions
US7760695B2
Inventor
Kamatchi Soundaram Gopalakrishnan
Ajay Malik
Current Assignee
Extreme Networks Inc
Original Assignee
Symbol Technologies LLC
Application filed by Symbol Technologies LLC
Priority to US11/529,988 (granted as US7760695B2)
Assigned to SYMBOL TECHNOLOGIES, INC. (assignment of assignors' interest; assignors: GOPALAKRISHNAN, KAMATCHI; MALIK, AJAY)
Priority to PCT/US2007/079823 (WO2008042741A2)
Publication of US20080080438A1
Application granted
Publication of US7760695B2
Assigned to MORGAN STANLEY SENIOR FUNDING, INC., as the collateral agent (security agreement; assignors: LASER BAND, LLC; SYMBOL TECHNOLOGIES, INC.; ZEBRA ENTERPRISE SOLUTIONS CORP.; ZIH CORP.)
Assigned to SYMBOL TECHNOLOGIES, LLC (change of name; assignor: SYMBOL TECHNOLOGIES, INC.)
Assigned to SYMBOL TECHNOLOGIES, INC. (release by secured party; assignor: MORGAN STANLEY SENIOR FUNDING, INC.)
Assigned to SILICON VALLEY BANK (amended and restated patent and trademark security agreement; assignor: EXTREME NETWORKS, INC.)
Assigned to EXTREME NETWORKS, INC. (assignment of assignors' interest; assignor: SYMBOL TECHNOLOGIES, LLC)
Assigned to SILICON VALLEY BANK (second amended and restated patent and trademark security agreement; assignor: EXTREME NETWORKS, INC.)
Assigned to SILICON VALLEY BANK (third amended and restated patent and trademark security agreement; assignor: EXTREME NETWORKS, INC.)
Assigned to BANK OF MONTREAL (security interest; assignor: EXTREME NETWORKS, INC.)
Assigned to EXTREME NETWORKS, INC. (release by secured party; assignor: SILICON VALLEY BANK)
Assigned to BANK OF MONTREAL (amended security agreement; assignors: Aerohive Networks, Inc.; EXTREME NETWORKS, INC.)
Legal status: Active
Expiration: Adjusted

Classifications

    • H04L 41/0869: Validating the configuration within one network element (configuration management of networks or network elements)
    • H04L 41/082: Configuration setting where the change of settings is triggered by updates or upgrades of network functionality
    • H04L 12/1886: Arrangements for broadcast or conference, e.g. multicast, with traffic restrictions for efficiency improvement, e.g. involving subnets or subdomains
    • H04W 24/04: Arrangements for maintaining operational condition
    • H04W 88/14: Backbone network devices


Abstract

Wireless switches are monitored or configured on a cluster basis rather than being limited to configuration on individual switches. A switch cluster is made up of two or more wireless switches that share a cluster number or other identifier. A command is received from a user interface module at a first node in the cluster, and an instruction related to the command is transmitted from the first node to the other nodes in the cluster. After receiving responses from at least some of the other nodes in the cluster as to the effect of the instruction, the first node provides an updated response to the administrator. The administrator is therefore able to configure or monitor each of the nodes in the cluster from a single administrative node.

Description

    TECHNICAL FIELD
  • The present invention relates generally to wireless local area networks (WLANs) and, more particularly, to cluster management of wireless switches in a WLAN.
  • BACKGROUND
  • In recent years, there has been a dramatic increase in demand for mobile connectivity solutions utilizing various wireless components and wireless local area networks (WLANs). This generally involves the use of wireless access points that communicate with mobile devices using one or more RF channels.
  • In one class of wireless networking systems, relatively unintelligent access ports act as RF conduits for information that is passed to the network through a centralized intelligent switch, or “wireless switch,” that controls wireless network functions. In a typical WLAN setting, one or more wireless switches communicate via conventional networks with multiple access points that provide wireless links to mobile units operated by end users. The wireless switch, then, typically acts as a logical “central point” for most wireless functionality. Consolidation of WLAN intelligence and functionality within a wireless switch provides many benefits, including centralized administration and simplified configuration of switches and access points.
  • One disadvantage, however, is that malfunctions at the wireless switch can affect a significant portion of the wireless network. That is, if the wireless switch were to become unavailable due to power failure, malfunction, or some other reason, then the access points logically connected to that switch would typically also become unavailable. To reduce the effects of wireless switch unavailability, wireless switches commonly incorporate “warm standby” features whereby a backup switch is configured to take over if a primary switch becomes incapacitated. More recently, switches have been deployed in groups (e.g. so-called “clusters”) that allow multiple switches within the group to share connection licenses and other information with each other. An example of one clustering technique is described in U.S. patent application Ser. No. 11/364,815 filed on Feb. 28, 2006 and entitled “METHODS AND APPARATUS FOR CLUSTER LICENSING IN WIRELESS SWITCH ARCHITECTURE”. While clusters are useful in providing standby ability and backup features, they have in the past been cumbersome to configure and administer due to the frequent need to execute certain configurations on multiple machines within the cluster.
  • Accordingly, it is desirable to provide a configuration scheme that can allow for a centralized management feature for switches and other network devices operating within a group or cluster. Other desirable features and characteristics will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and the foregoing technical field and background.
  • BRIEF SUMMARY
  • According to various exemplary embodiments, wireless switches are monitored or configured on a cluster basis rather than being limited to configuration on individual switches. A switch cluster is made up of two or more wireless switches that share a cluster number or other identifier. A command is received from a user interface module at a first node in the cluster, and an instruction related to the command is transmitted from the first node to the other nodes in the cluster. After receiving responses from at least some of the other nodes in the cluster as to the effect of the instruction, the first node provides an updated response to the administrator. The administrator is therefore able to configure or monitor each of the nodes in the cluster from a single administrative node. In various further embodiments, no single node in the cluster is selected as a master node for all commands, but rather multiple nodes are each capable of acting as the “master node” for processing particular commands at different times. This allows multiple nodes within the cluster to provide administrative updates or to gather monitoring data from each of the other members of the cluster, thereby improving efficiency and convenience to the administrator.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • A more complete understanding of the present invention may be derived by referring to the detailed description and claims when considered in conjunction with the following figures, wherein like reference numbers refer to similar elements throughout the figures.
  • FIG. 1 is a conceptual overview of an exemplary wireless network with a three-switch cluster;
  • FIG. 2 is a block diagram of an exemplary cluster manager module used to originate a cluster command; and
  • FIG. 3 is a block diagram of an exemplary cluster manager module used to receive a cluster command.
  • DETAILED DESCRIPTION
  • The following detailed description is merely illustrative in nature and is not intended to limit the invention or the application and uses of the invention. Furthermore, there is no intention to be bound by any express or implied theory presented in the preceding technical field, background, brief summary or the following detailed description.
  • Various aspects of the exemplary embodiments may be described herein in terms of functional and/or logical block components and various processing steps. It should be appreciated that such block components may be realized by any number of hardware, software, and/or firmware components configured to perform the specified functions. For example, an embodiment of the invention may employ various integrated circuit components, e.g., radio-frequency (RF) devices, memory elements, digital signal processing elements, logic elements and/or the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. In addition, the present invention may be practiced in conjunction with any number of data transmission protocols, and the system described herein is merely one exemplary application for the invention.
  • For the sake of brevity, conventional techniques related to signal processing, data transmission, signaling, network control, the IEEE 802.11 family of specifications, and other functional aspects of the system (and the individual operating components of the system) may not be described in detail herein. Furthermore, the connecting lines shown in the various figures contained herein are intended to represent example functional relationships and/or physical couplings between the various elements. It should be noted that many alternative or additional functional relationships or physical connections may be present in a practical embodiment.
  • Without loss of generality, in the illustrated embodiment, many of the functions usually provided by a traditional wireless access point (e.g., network management, wireless configuration, and the like) can be concentrated in a corresponding wireless switch. It will be appreciated that the present invention is not so limited, and that the methods and systems described herein may be used in the context of other network environments, including any architecture that makes use of client-server principles or structures.
  • Referring now to FIG. 1, one or more switching devices 110 (alternatively referred to as “wireless switches,” “WS,” or simply “switches”) are coupled via one or more networks 104 (e.g., an ETHERNET or other local area network coupled to one or more other networks or devices, indicated by network cloud 102). One or more wireless access ports 120 (alternatively referred to as “access ports” or “APs”) are configured to wirelessly connect switches 110 to one or more mobile units 130 (or “MUs”) after a suitable AP adoption process. APs 120 are suitably connected to corresponding switches 110 via communication lines 106 (e.g., conventional Ethernet lines). Any number of additional and/or intervening switches, routers, servers and other networks or components may also be present in the system. Similarly, APs 120 may have one or more built-in radio components. Various wireless switches and access ports are available from SYMBOL TECHNOLOGIES of San Jose, Calif., although the concepts described herein may be implemented with products and services provided by any other supplier.
  • A particular AP 120 may have a number of associated MUs 130. For example, in the illustrated topology, MUs 130(a), 130(b) and 130(c) are logically associated with AP 120(a), while MU 130(d) is associated with AP 120(b). Furthermore, one or more APs 120 may be logically connected to a single switch 110. Thus, as illustrated, AP 120(a) and AP 120(b) are connected to WS 110(a), and AP 120(c) is connected to WS 110(b). Again, the logical connections shown in the figure are merely exemplary, and other embodiments may include widely varying components arranged in any topology.
  • Each AP 120 establishes a logical connection to at least one WS 110 through a suitable adoption process. In a typical adoption process, each AP 120 responds to a “parent” message transmitted by one or more WSs 110. The parent messages may be transmitted in response to a request message broadcast by the AP 120 in some embodiments; alternatively, one or more WSs 110 may be configured to transmit parent broadcasts on any periodic or aperiodic basis. When the AP 120 has decided upon a suitable “parent” WS 110, AP 120 transmits an “adopt” message to the parent WS 110.
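As an illustration only, the adoption exchange above can be sketched in Python. The message names (PARENT, ADOPT), the data structures, and the first-responder selection policy below are assumptions, since the description does not specify a wire format or a selection rule.

```python
# Illustrative sketch of the AP adoption handshake described above.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class WirelessSwitch:
    name: str
    adopted_aps: List[str] = field(default_factory=list)

    def parent_message(self) -> dict:
        # A WS advertises itself as a potential parent, either on a periodic
        # basis or in response to a request broadcast by an AP.
        return {"type": "PARENT", "switch": self.name}

    def handle_adopt(self, msg: dict) -> None:
        # The WS records the AP that has chosen it as parent.
        self.adopted_aps.append(msg["ap"])

@dataclass
class AccessPort:
    name: str
    parent: Optional[str] = None

    def choose_parent(self, parent_msgs: List[dict]) -> dict:
        # The selection policy is not specified in the text; the first
        # responding switch is chosen here purely for illustration.
        self.parent = parent_msgs[0]["switch"]
        return {"type": "ADOPT", "ap": self.name, "switch": self.parent}

ws_a, ws_b = WirelessSwitch("WS-110a"), WirelessSwitch("WS-110b")
ap = AccessPort("AP-120a")
adopt_msg = ap.choose_parent([ws_a.parent_message(), ws_b.parent_message()])
ws_a.handle_adopt(adopt_msg)
print(ap.parent, ws_a.adopted_aps)   # WS-110a ['AP-120a']
```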
  • Following the adoption process, each WS 110 determines the destination of packets it receives over network 104 and routes each packet to the appropriate AP 120 if the destination is an MU 130 with which the AP is associated. Each WS 110 therefore maintains a routing list of MUs 130 and their associated APs 120. These lists are generated using a suitable packet handling process as is known in the art. Thus, each AP 120 acts primarily as a conduit, sending/receiving RF transmissions via MUs 130, and sending/receiving packets via a network protocol with WS 110. Equivalent embodiments may provide additional or different functions as appropriate.
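A minimal sketch of such a routing list follows, assuming a simple mapping from MU address to associated AP; the concrete data structure is not specified in the description.

```python
# Sketch of the per-switch routing list: MU address -> associated AP.
from typing import Dict, Optional

routing_list: Dict[str, str] = {
    "MU-130a": "AP-120a",
    "MU-130b": "AP-120a",
    "MU-130c": "AP-120a",
    "MU-130d": "AP-120b",
}

def route_packet(destination_mu: str) -> Optional[str]:
    """Return the AP to forward to, or None if this WS does not serve the MU."""
    return routing_list.get(destination_mu)

assert route_packet("MU-130d") == "AP-120b"
assert route_packet("MU-130x") is None   # unknown MU: packet is not routed here
```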
  • Wireless switches 110A-C are shown in FIG. 1 as being combined into a single set, group or “cluster” 109 to provide redundancy and convenience as appropriate. In various embodiments, if one or more switches 110A-C were to become unavailable for any reason, for example, then one or more other switches 110 in the cluster 109 could automatically absorb some or all of the functions previously carried out by the unavailable switch 110, thereby continuing service to mobile users 130 in a relatively smooth manner. In practice, clusters could be formed from any grouping of two or more wireless switches 110 that share a group identifier or other identification. A simple cluster could be made up of a primary switch 110 and a dedicated backup, for example, or of any number of primary switches with one or more backups. Alternatively, any number of active switches could provide redundancy for each other, provided that they are able to intercommunicate through networks 104 and/or 102. The cluster 109 made up of switches 110A-C, then, would allow any switch 110 in the cluster to absorb functions carried out by any other switch 110 if the other switch 110 were to become unavailable, and/or to otherwise provide for convenience of operation in any manner.
  • Redundancy is provided as appropriate. In various embodiments, switches 110A-C making up a cluster 109 suitably exchange adoption information (e.g. number of adopted ports, number of licenses available, etc.) as appropriate. This data exchange may take place on any periodic, aperiodic or other basis over any transport mechanism. The transmission control protocol (TCP) commonly used in the TCP/IP protocol suite, for example, could be used to establish any number of “virtual” connections 105A-C between switches operating within a cluster. In the event that wireless switch 110A in FIG. 1, for example, were to become unavailable, switches 110B and 110C may have ready access to a relatively current routing list that would include information about APs 120A-B and/or MUs 130A-D previously associated with switch 110A. In such embodiments, either switch 110B or 110C may therefore quickly contact APs 120A-B following unavailability of switch 110A to take over subsequent routing tasks. Similarly, if switch 110B or 110C should become unavailable, switch 110A would be able to quickly assume the tasks of either or both of the other switches 110B-C. In other embodiments, the remaining switches 110 do not directly contact the APs 120 following the disappearance of another switch in the cluster, but rather adopt the disconnected APs 120 using conventional adoption techniques. Nodes 110A-C in a common cluster 109 may be logically addressed or otherwise represented with a common cluster identification code or the like.
  • Clusters may be established in any manner. Typically, clusters are initially configured manually on each participating WS 110 so that each switch 110 is able to identify the other members of the cluster 109 by name, network address or some other identifier. When switches 110A-C are active, they further establish the cluster by sharing current load information (e.g. the current number of adopted ports) and/or other data as appropriate. Switches 110A-C may also share information about their numbers of available licenses so that other switches 110 in cluster 109 can determine the number of cluster licenses available, and/or other information as appropriate.
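The sketch below illustrates this establishment step under assumed field names: a manually configured member list plus a periodically shared load report from which cluster-wide license availability can be computed.

```python
# Hypothetical cluster configuration and load-exchange payload; the field
# names are illustrative, as the description leaves the format open.
from typing import Dict, List

cluster_config = {
    "cluster_id": 109,                             # common cluster identifier
    "members": ["WS-110a", "WS-110b", "WS-110c"],  # configured on each WS
}

def load_report(switch: str, adopted_ports: int, licenses_free: int) -> Dict:
    # Exchanged while the switches are active so that every member can see
    # current load and compute cluster-wide license availability.
    return {
        "cluster_id": cluster_config["cluster_id"],
        "switch": switch,
        "adopted_ports": adopted_ports,
        "licenses_free": licenses_free,
    }

reports: List[Dict] = [load_report("WS-110a", 2, 10), load_report("WS-110b", 1, 5)]
print(sum(r["licenses_free"] for r in reports))    # 15 cluster licenses available
```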
  • In various embodiments, the time period between switch data exchanges is manually configured by the operator to allow flexibility in managing traffic on network 104. Alternatively, the timing and/or formatting of such messages may be defined by a protocol or standard. Establishing cluster 109 could therefore refer to either configuration of the cluster, and/or the establishment of cluster communication while the various nodes in cluster 109 are active.
  • During operation of the cluster 109, each switch 110A-C suitably verifies the continued availability of the other switches 110. Verification can take place through any appropriate technique, such as through transmission of regular “heartbeat” messages between switches. In various embodiments, the heartbeat messages contain an identifier of the particular sending switch 110. This identifier is any token, certificate, or other data capable of uniquely identifying the particular switch 110 sending the heartbeat message. In various embodiments, the identifier is simply the media access control (MAC) address of the sending switch 110. MAC addresses are uniquely assigned to hardware components, and therefore are readily available for use as identifiers. Other embodiments may provide digital signatures, certificates or other digital credentials as appropriate, or may simply use the device serial number or any other identifier of the sending switch 110. The heartbeat messages may be sent between switches 110 on any periodic, aperiodic or other temporal basis. In an exemplary embodiment, heartbeat messages are exchanged with each other switch 110 operating within cluster 109 every second or so, although particular time periods may vary significantly in other embodiments. In many embodiments, if a heartbeat message from any switch 110 fails to appear within an appropriate time window, another switch 110 operating within cluster 109 adopts the access ports 120 previously connected with the non-responding switch 110 for subsequent operation.
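A rough sketch of this availability check follows. The one-second interval and MAC-address identifier come from the description; the three-interval failure window and the data structures are assumptions.

```python
# Sketch of heartbeat-based availability checking within a cluster.
import time
from typing import Dict, List

HEARTBEAT_INTERVAL = 1.0                  # "every second or so" per the text
FAILURE_WINDOW = 3 * HEARTBEAT_INTERVAL   # assumed tolerance before failover

last_seen: Dict[str, float] = {}          # sender MAC address -> last heartbeat time

def on_heartbeat(sender_mac: str) -> None:
    # The MAC address (or another unique credential) identifies the sender.
    last_seen[sender_mac] = time.monotonic()

def failed_peers() -> List[str]:
    """Peers whose heartbeat has not appeared within the failure window."""
    now = time.monotonic()
    return [mac for mac, seen in last_seen.items() if now - seen > FAILURE_WINDOW]

# A surviving switch would then adopt the APs previously connected to any
# switch returned by failed_peers(), using the normal adoption process.
```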
  • Clustering techniques can also be used to aid in the administration, configuration and/or monitoring of multiple switches 110 from one or more monitoring nodes. With reference now to FIG. 2, an exemplary cluster management module 200 suitably includes a transmit side 202 and a receive side 204 corresponding to transmitting and receiving messages for cluster management and/or monitoring. In various embodiments, each switch 110 (FIG. 1) within a cluster 109 can execute one or more copies of the cluster management module, thereby allowing any switch 110 to become the “master” node for executing a particular command or request on other “slave” nodes, as described more fully below.
  • Transmit side 202 is any hardware, firmware, software and/or other logic capable of effecting the transmission of messages on network 104 (FIG. 1) to other switches 110. In various embodiments, transmit side 202 includes command handler logic 206 for receiving a command from an administrator, session validation logic 208, command classifier logic 210, command handler modules 212, 214 and/or 216, as well as message transmit logic 218. Similarly, receive side 204 is any logic capable of processing messages received on network 104 (FIG. 1), such as any sort of receive logic 224, validation logic 222 and/or response collector logic 220. Each of the various “logic” modules may be implemented with any type of hardware, software and/or firmware logic, including any sort of interpreted or compiled software instructions stored on any type of digital storage medium. Frequently, software instructions are stored within flash memory and/or a disk drive or other storage medium associated with one or more switches 110A-C.
  • In operation, configuration and/or monitoring instructions are received from an administrative user via any sort of interface module 205. Interface module 205 may be any type of command line interface (CLI), graphical user interface (GUI), simple network management protocol (SNMP) node, and/or the like. Interface module 205 may logically reside on the same switch 110 as cluster manager 200, or may be located at any other processing platform. In various embodiments, the interface module executes as a JAVA applet, ACTIVE-X control or the like within a client web browser that communicates with a hypertext transport protocol (HTTP) server executing on a switch 110 for receiving configuration instructions.
  • Commands provided from user interface module 205 are appropriately received at command handler logic 206. The command is validated (e.g. by session validation module 208) to ensure that the command emanated from an approved source (e.g. by an administrator operating in an approved configuration session established with a userid/password or other credential) and/or to otherwise ensure that the command is valid for execution within environment 100. The command is then provided to classification logic 210, which appropriately determines whether the command is a regular command that can be processed locally on the host switch 110, or whether the command requests access to other switches 110 via virtual connections 105A-C (FIG. 1). In various embodiments, handler modules 212, 214 are provided for each of the various commands recognized by manager module 200. That is, a “request status” command may be implemented in separate logic from a “change parameter” command, although this is certainly not necessary in all embodiments.
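As an illustrative sketch only, the validate-then-classify path might look like the following; the session store, the set of cluster-wide command names, and the return values are assumptions.

```python
# Sketch of the transmit-side command path: session validation, then
# classification as local vs. cluster-wide.
VALID_SESSIONS = {"admin-session-1"}   # e.g. established with a userid/password
CLUSTER_COMMANDS = {"request status", "change parameter"}   # illustrative set

def handle_command(session_id: str, command: str) -> str:
    # Session validation (module 208): reject commands from unapproved sources.
    if session_id not in VALID_SESSIONS:
        raise PermissionError("command did not originate from an approved session")
    # Classification (logic 210): cluster-wide commands are duplicated and sent
    # across the virtual connections; others run on the host switch only.
    return "cluster" if command in CLUSTER_COMMANDS else "local"

print(handle_command("admin-session-1", "request status"))   # -> cluster
```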
  • In various embodiments, the user places the management module 200 into a “cluster management mode” by activating cluster commands at a CLI or GUI interface or the like. For example, an administrator may enter a “cluster enable” or similar command at user interface 205 that indicates to module 214 that commands that make use of virtual connections 105 (FIG. 1) may be forthcoming. In other embodiments, however, this modal aspect need not be present, and cluster commands may be readily intermixed with commands intended solely for the host node.
  • Commands intended to be executed on multiple nodes 110 within cluster 109 may be duplicated (e.g. using logic 216) so that corresponding instructions are transmitted across virtual connections 105A-C (FIG. 1) or the like using transmit logic 218. As briefly noted above, the nodes operating within cluster 109 may be inter-connected by TCP “virtual circuits” or the like. Alternatively, user datagram protocol (UDP) packets or the like could be used, albeit with a decline in reliability. Cluster commands are nevertheless transmitted across network 104 to the other nodes 110A-C in cluster 109 as appropriate.
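A minimal sketch of this duplication and fan-out step is shown below, assuming JSON payloads with a simple length prefix over established TCP connections; the actual framing is left open by the description.

```python
# Sketch of duplicating a cluster command into per-node instructions and
# fanning it out over the cluster's virtual connections (105A-C).
import json
import socket
from typing import Dict

def fan_out(command: str, peers: Dict[str, socket.socket]) -> None:
    """Send one copy of the instruction to each peer node in the cluster."""
    payload = json.dumps({"type": "INSTRUCTION", "command": command}).encode()
    frame = len(payload).to_bytes(4, "big") + payload   # assumed length-prefix framing
    for conn in peers.values():
        # Each peer holds one established TCP "virtual connection"; TCP
        # provides the reliability that UDP would sacrifice.
        conn.sendall(frame)
```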
  • After the receiving nodes 110A-C process the instruction transmitted by the sending node, typically a response is sent. The responses from the various nodes 110A-C are received at cluster manager 200 via logic 224, which appropriately validates the session 222, collects responses 220, and provides the collected response to user interface module 205. In various embodiments, a timeout feature is provided so that the response to user interface 205 is provided after a period of time even though one or more responses from receiving nodes 110A-C might not have been received. That is, responses received prior to the timeout are processed differently from any responses received after the timeout period. This timeout feature allows for further response or analysis by the administrator in the event that one or more receiving nodes 110A-C become unavailable or inaccessible during operation.
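The timeout behavior can be sketched as follows, assuming responses are handed to the collector through a queue; the five-second default and the queue-based delivery are assumptions.

```python
# Sketch of response collection with a timeout, so the administrator gets a
# consolidated answer even if some nodes never reply.
import queue
import time
from typing import Any, List, Tuple

def collect_responses(inbox: "queue.Queue[Any]", expected: int,
                      timeout_s: float = 5.0) -> Tuple[List[Any], int]:
    """Gather up to `expected` responses, returning early at the deadline."""
    deadline = time.monotonic() + timeout_s
    responses: List[Any] = []
    while len(responses) < expected:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break                      # timed out: report what arrived in time
        try:
            responses.append(inbox.get(timeout=remaining))
        except queue.Empty:
            break
    # Responses arriving after this point are handled differently (e.g.
    # logged or discarded), per the timeout behavior described above.
    return responses, expected - len(responses)
```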
  • FIG. 3 shows a block logic diagram of processing executing at a node 110A-C that is receiving a cluster instruction from another node 110A-C. With reference to FIG. 3, cluster manager application 200B suitably includes a transmit side 202 and a receive side 204, as described in conjunction with the cluster management application 200A described in FIG. 2. In various embodiments, multiple nodes within cluster 109 include identical (or at least similar) application logic 200 so that each node 110 is capable of acting as the “master” node for certain commands while acting as a “slave” or “client” to commands provided by other nodes acting as “master” nodes. When each node 110A-C in the cluster 109 is capable of acting as the configuration “master”, the network manager enjoys improved flexibility. Moreover, this avoids the need to provide custom software on particular “management” nodes 110 that is different from that residing on non-management nodes within the cluster.
  • This concept can be further expanded in that “slave” or “client” nodes need not be part of the same logical cluster 109 as the “master” node 110 in all embodiments. That is, commands can be issued on network 104 (or even network 102) to any client node that is reachable by any sort of addressing, broadcast and/or routing scheme. To that end, any node 110A-C can act as a “master” and/or “client” to any other node 110A-C within system 100 so long as security is maintained in an appropriate manner. Security may be verified through access control lists, userid/password combinations or other credentials, routing lists, and/or the like. In still other embodiments, configuration or monitoring commands can be broadcast to all nodes 110 listening on a particular network 104/105, with results provided to the user interface module 205 based upon the nodes 110 that respond to the broadcast instruction. To that end, many different control and/or monitoring schemes can be formulated within the ambit of the concepts set forth herein.
  • To respond to cluster instructions received from another node 110, the instruction is received at receiving logic 224 via network 104 and/or virtual connection 105. The session is again validated to ensure that the message was transmitted by a valid node 110 using any appropriate credential at validation logic 222. In the event that the instruction requests a new management session (logic 302), the new session can be created (logic 308) and a response is sent back to the originating node 110 as appropriate. If the instruction is sent to an existing session operating on the receiving node 110 (logic 304), then the command can be executed (logic 306) by an appropriate executor 305. Command executors 305 may provide data in response to a query, may adjust one or more operating parameters of the node, and/or may take other actions as appropriate.
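An illustrative sketch of this receive-side dispatch appears below; the credential check, message fields, and session store are assumptions standing in for validation logic 222 and session logic 302-308 described above.

```python
# Sketch of the receive-side dispatch at a node acting as "slave": validate
# the session, then create a new session or execute within an existing one.
from typing import Any, Dict

sessions: Dict[str, Dict[str, Any]] = {}   # session id -> state on this node

def execute(command: str) -> str:
    # An executor (305) may return monitoring data or adjust node parameters.
    return f"executed {command!r}"

def handle_instruction(msg: Dict[str, Any]) -> Dict[str, Any]:
    # Validation (logic 222): an assumed credential check stands in for
    # whatever token or certificate the deployment actually uses.
    if msg.get("credential") != "trusted-node":
        return {"status": "rejected"}
    sid = msg["session"]
    if msg["type"] == "NEW_SESSION":                   # logic 302 / 308
        sessions[sid] = {}
        return {"status": "session created", "session": sid}
    if msg["type"] == "COMMAND" and sid in sessions:   # logic 304 / 306
        return {"status": "ok", "result": execute(msg["command"])}
    return {"status": "no such session"}

print(handle_instruction({"credential": "trusted-node",
                          "session": "s1", "type": "NEW_SESSION"}))
```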
  • Responses are received from the command executor 305 at logic 310, and a response is prepared for transmission to the node originally requesting the command. The session is again validated (logic 314) to ensure that the response is provided in a secure manner, and the validated message is transmitted back to the requesting node 110 via logic 218.
  • The particular aspects and features described herein may be implemented in any manner. In various embodiments, the processes described above are implemented in software that executes within one or more wireless switches 110. This software may be in source or object code form, and may reside in any medium or media, including random access, read only, flash or other memory, as well as any magnetic, optical or other storage media. In other embodiments, the features described herein may be implemented in hardware, firmware and/or any other suitable logic.
  • It should be appreciated that the example embodiment or embodiments described herein are not intended to limit the scope, applicability, or configuration of the invention in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing the described embodiment or embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope of the invention as set forth in the appended claims and the legal equivalents thereof.

Claims (12)

1. A method of executing a command on a plurality of networked nodes, the method comprising the steps of:
receiving the command from a user interface module at a first one of the plurality of networked nodes;
transmitting an instruction related to the command from the first one of the plurality of networked nodes to the other nodes;
receiving responses from at least some of the other nodes indicating an effect produced by the instruction; and
providing a response to the user interface module from the first one of the plurality of networked nodes.
2. The method of claim 1 wherein each of the plurality of networked nodes is configured to act as the first one of the plurality of networked nodes.
3. The method of claim 1 further comprising the step of validating the command at the first one of the plurality of networked nodes prior to the transmitting step.
4. The method of claim 1 further comprising the step of validating the responses from the at least some of the other nodes prior to the providing step.
5. The method of claim 1 wherein the receiving step comprises a timeout wherein responses received prior to a predetermined period of time are processed differently from responses received after the predetermined period of time.
6. The method of claim 1 further comprising the steps of:
receiving the instruction from the first one of the plurality of networked nodes at a second one of the plurality of networked nodes;
executing the command in response to the instruction at the second one of the plurality of networked nodes;
collecting the response to the command; and
forwarding the response to the first one of the plurality of networked nodes.
7. The method of claim 6 further comprising the steps of:
receiving the instruction from the first one of the plurality of networked nodes at a third one of the plurality of networked nodes;
executing the command in response to the instruction at the third one of the plurality of networked nodes;
collecting the response to the command; and
forwarding the response to the first one of the plurality of networked nodes.
8. The method of claim 1 wherein at least some of the plurality of nodes form a cluster of nodes.
9. The method of claim 1 wherein the plurality of nodes form a cluster of nodes identifiable by a cluster identification code.
10. A wireless switch configured to execute the method of claim 1.
11. A wireless switch operating as one of a plurality of wireless switches receiving a command from a user interface module, the wireless switch comprising:
means for receiving the command;
means for transmitting an instruction related to the command to the other nodes;
means for receiving responses from at least some of the other nodes indicating an effect produced by the instruction; and
means for providing a response to the user interface module from the first one of the plurality of networked nodes.
12. The wireless switch of claim 11 further comprising:
means for receiving a second instruction from a second one of the plurality of networked nodes;
means for executing a second command in response to the second instruction;
means for collecting the response to the second command; and
means for forwarding the response to the second one of the plurality of networked nodes.
US11/529,988, filed 2006-09-29 (priority 2006-09-29): Methods and systems for centralized cluster management in wireless switch architecture. Granted as US7760695B2; status Active; adjusted expiration 2028-08-30.

Priority Applications (2)

US11/529,988 (filed 2006-09-29, priority 2006-09-29): Methods and systems for centralized cluster management in wireless switch architecture
PCT/US2007/079823 (filed 2007-09-28, priority 2006-09-29): Methods and systems for centralized cluster management in wireless switch architecture (WO2008042741A2)

Applications Claiming Priority (1)

US11/529,988 (filed 2006-09-29, priority 2006-09-29): Methods and systems for centralized cluster management in wireless switch architecture

Publications (2)

US20080080438A1 (published 2008-04-03)
US7760695B2 (published 2010-07-20)

Family

Family ID: 39256139

Family Applications (1)

US11/529,988 (filed 2006-09-29; granted as US7760695B2; status Active, adjusted expiration 2028-08-30): Methods and systems for centralized cluster management in wireless switch architecture

Country Status (2)

US: US7760695B2
WO: WO2008042741A2

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090327819A1 (en) * 2008-06-27 2009-12-31 Lemko, Corporation Fault Tolerant Distributed Mobile Architecture
CN102057638A * 2008-06-26 2011-05-11 Lemko Corporation System and method to control wireless communications
US8676197B2 (en) 2006-12-13 2014-03-18 Lemko Corporation System, method, and device to control wireless communications
US8688111B2 (en) 2006-03-30 2014-04-01 Lemko Corporation System, method, and device for providing communications using a distributed mobile architecture
US8744435B2 (en) 2008-09-25 2014-06-03 Lemko Corporation Multiple IMSI numbers
US8780804B2 (en) 2004-11-08 2014-07-15 Lemko Corporation Providing communications using a distributed mobile architecture
US9191980B2 (en) 2008-04-23 2015-11-17 Lemko Corporation System and method to control wireless communications
US9198020B2 (en) 2008-07-11 2015-11-24 Lemko Corporation OAMP for distributed mobile architecture
US9253622B2 (en) 2006-06-12 2016-02-02 Lemko Corporation Roaming mobile subscriber registration in a distributed mobile architecture
US9332478B2 (en) 2008-07-14 2016-05-03 Lemko Corporation System, method, and device for routing calls using a distributed mobile architecture
US10361918B2 (en) * 2013-03-19 2019-07-23 Yale University Managing network forwarding configurations using algorithmic policies
EP3709571A1 (en) * 2019-03-14 2020-09-16 Nokia Solutions and Networks Oy Device management clustering
US20200403970A1 (en) * 2015-11-24 2020-12-24 At&T Intellectual Property I, L.P. Providing Network Address Translation in a Software Defined Networking Environment
US10896196B2 (en) 2019-03-14 2021-01-19 Nokia Solutions And Networks Oy Data retrieval flexibility
US11579998B2 (en) 2019-03-14 2023-02-14 Nokia Solutions And Networks Oy Device telemetry control
US11579949B2 (en) 2019-03-14 2023-02-14 Nokia Solutions And Networks Oy Device application support

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9210034B2 (en) * 2007-03-01 2015-12-08 Cisco Technology, Inc. Client addressing and roaming in a wireless network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1266882C (en) 2002-12-04 2006-07-26 华为技术有限公司 A management method of network device

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6088346A (en) * 1996-09-27 2000-07-11 U.S. Philips Corporation Local area network with transceiver
US6611860B1 (en) * 1999-11-17 2003-08-26 I/O Controls Corporation Control network with matrix architecture
US6636499B1 (en) * 1999-12-02 2003-10-21 Cisco Technology, Inc. Apparatus and method for cluster network device discovery
US6785552B2 (en) * 1999-12-28 2004-08-31 Ntt Docomo, Inc. Location managing method for managing location of mobile station in mobile wireless packet communication system and mobile wireless packet communication system
US6886038B1 (en) * 2000-10-24 2005-04-26 Microsoft Corporation System and method for restricting data transfers and managing software components of distributed computers
US20030212777A1 (en) * 2002-05-10 2003-11-13 International Business Machines Corporation Network attached storage SNMP single system image
US20050271044A1 (en) * 2004-03-06 2005-12-08 Hon Hai Precision Industry Co., Ltd. Method for managing a stack of switches
US20050289228A1 (en) * 2004-06-25 2005-12-29 Nokia Inc. System and method for managing a change to a cluster configuration

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8780804B2 (en) 2004-11-08 2014-07-15 Lemko Corporation Providing communications using a distributed mobile architecture
US8688111B2 (en) 2006-03-30 2014-04-01 Lemko Corporation System, method, and device for providing communications using a distributed mobile architecture
US9253622B2 (en) 2006-06-12 2016-02-02 Lemko Corporation Roaming mobile subscriber registration in a distributed mobile architecture
US9515770B2 (en) 2006-12-13 2016-12-06 Lemko Corporation System, method, and device to control wireless communications
US8676197B2 (en) 2006-12-13 2014-03-18 Lemko Corporation System, method, and device to control wireless communications
US9191980B2 (en) 2008-04-23 2015-11-17 Lemko Corporation System and method to control wireless communications
US9215098B2 (en) 2008-06-26 2015-12-15 Lemko Corporation System and method to control wireless communications
CN102057638A (en) * 2008-06-26 2011-05-11 莱姆克公司 System and method to control wireless communications
CN103795620A (en) * 2008-06-26 2014-05-14 莱姆克公司 System and method to control wireless communications
CN103795619A (en) * 2008-06-26 2014-05-14 莱姆克公司 System and method to control wireless communications
CN103812767A (en) * 2008-06-26 2014-05-21 莱姆克公司 System and method to control wireless communications
US10547530B2 (en) 2008-06-27 2020-01-28 Lemko Corporation Fault tolerant distributed mobile architecture
US20090327819A1 (en) * 2008-06-27 2009-12-31 Lemko, Corporation Fault Tolerant Distributed Mobile Architecture
US8706105B2 (en) 2008-06-27 2014-04-22 Lemko Corporation Fault tolerant distributed mobile architecture
US9755931B2 (en) 2008-06-27 2017-09-05 Lemko Corporation Fault tolerant distributed mobile architecture
US9198020B2 (en) 2008-07-11 2015-11-24 Lemko Corporation OAMP for distributed mobile architecture
US9332478B2 (en) 2008-07-14 2016-05-03 Lemko Corporation System, method, and device for routing calls using a distributed mobile architecture
US8744435B2 (en) 2008-09-25 2014-06-03 Lemko Corporation Multiple IMSI numbers
US10361918B2 (en) * 2013-03-19 2019-07-23 Yale University Managing network forwarding configurations using algorithmic policies
US20200403970A1 (en) * 2015-11-24 2020-12-24 At&T Intellectual Property I, L.P. Providing Network Address Translation in a Software Defined Networking Environment
EP3709571A1 (en) * 2019-03-14 2020-09-16 Nokia Solutions and Networks Oy Device management clustering
US10896196B2 (en) 2019-03-14 2021-01-19 Nokia Solutions And Networks Oy Data retrieval flexibility
US11579998B2 (en) 2019-03-14 2023-02-14 Nokia Solutions And Networks Oy Device telemetry control
US11579949B2 (en) 2019-03-14 2023-02-14 Nokia Solutions And Networks Oy Device application support

Also Published As

Publication number Publication date
WO2008042741A3 (en) 2008-05-29
WO2008042741A2 (en) 2008-04-10
US7760695B2 (en) 2010-07-20

Similar Documents

Publication Publication Date Title
US7760695B2 (en) Methods and systems for centralized cluster management in wireless switch architecture
US11706102B2 (en) Dynamically deployable self configuring distributed network management system
US6856591B1 (en) Method and system for high reliability cluster management
US20070230415A1 (en) Methods and apparatus for cluster management using a common configuration file
US7639605B2 (en) System and method for detecting and recovering from virtual switch link failures
US5781726A (en) Management of polling traffic in connection oriented protocol sessions
US8005013B2 (en) Managing connectivity in a virtual network
US10050824B2 (en) Managing a cluster of switches using multiple controllers
US20120311670A1 (en) System and method for providing source id spoof protection in an infiniband (ib) network
US7561587B2 (en) Method and system for providing layer-4 switching technologies
TW200947969A (en) Open network connections
US20130201875A1 (en) Distributed fabric management protocol
US10523547B2 (en) Methods, systems, and computer readable media for multiple bidirectional forwarding detection (BFD) session optimization
CN104901825B (en) A kind of method and apparatus for realizing zero configuration starting
JP2021191007A (en) Network topology discovery method, network topology discovery device, and network topology discovery system
EP1989838B1 (en) Methods and apparatus for license management in a wireless switch cluster
CN109120520A (en) A kind of fault handling method and equipment
KR102092015B1 (en) Method, apparatus and computer program for recognizing network equipment in a software defined network
WO2015106506A1 (en) Methods for setting control information and establishing communication, management controller and controller
TWI836734B (en) Software-defined network controller-based automatic management system, method, and computer-readable medium
US8121102B2 (en) Methods and apparatus for recovering from misconfiguration in a WLAN
JP2006178836A (en) Authentication transmitting system
CN115136634A (en) Apparatus and method for zero configuration deployment in a communication network
WO2008138867A2 (en) Method for managing network components, and a network component

Legal Events

Date Code Title Description
AS Assignment

Owner name: SYMBOL TECHNOLOGIES, INC., NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GOPALAKRISHNAN, KAMATCHI;MALIK, AJAY;REEL/FRAME:018371/0190

Effective date: 20060929

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: MORGAN STANLEY SENIOR FUNDING, INC. AS THE COLLATERAL AGENT, MARYLAND

Free format text: SECURITY AGREEMENT;ASSIGNORS:ZIH CORP.;LASER BAND, LLC;ZEBRA ENTERPRISE SOLUTIONS CORP.;AND OTHERS;REEL/FRAME:034114/0270

Effective date: 20141027


AS Assignment

Owner name: SYMBOL TECHNOLOGIES, LLC, NEW YORK

Free format text: CHANGE OF NAME;ASSIGNOR:SYMBOL TECHNOLOGIES, INC.;REEL/FRAME:036083/0640

Effective date: 20150410

AS Assignment

Owner name: SYMBOL TECHNOLOGIES, INC., NEW YORK

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC.;REEL/FRAME:036371/0738

Effective date: 20150721

AS Assignment

Owner name: SILICON VALLEY BANK, CALIFORNIA

Free format text: AMENDED AND RESTATED PATENT AND TRADEMARK SECURITY AGREEMENT;ASSIGNOR:EXTREME NETWORKS, INC.;REEL/FRAME:040521/0762

Effective date: 20161028

AS Assignment

Owner name: EXTREME NETWORKS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SYMBOL TECHNOLOGIES, LLC;REEL/FRAME:040579/0410

Effective date: 20161028

AS Assignment

Owner name: SILICON VALLEY BANK, CALIFORNIA

Free format text: SECOND AMENDED AND RESTATED PATENT AND TRADEMARK SECURITY AGREEMENT;ASSIGNOR:EXTREME NETWORKS, INC.;REEL/FRAME:043200/0614

Effective date: 20170714

AS Assignment

Owner name: SILICON VALLEY BANK, CALIFORNIA

Free format text: THIRD AMENDED AND RESTATED PATENT AND TRADEMARK SECURITY AGREEMENT;ASSIGNOR:EXTREME NETWORKS, INC.;REEL/FRAME:044639/0300

Effective date: 20171027

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552)

Year of fee payment: 8

AS Assignment

Owner name: BANK OF MONTREAL, NEW YORK

Free format text: SECURITY INTEREST;ASSIGNOR:EXTREME NETWORKS, INC.;REEL/FRAME:046050/0546

Effective date: 20180501

Owner name: EXTREME NETWORKS, INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:SILICON VALLEY BANK;REEL/FRAME:046051/0775

Effective date: 20180501

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 12

AS Assignment

Owner name: BANK OF MONTREAL, NEW YORK

Free format text: AMENDED SECURITY AGREEMENT;ASSIGNORS:EXTREME NETWORKS, INC.;AEROHIVE NETWORKS, INC.;REEL/FRAME:064782/0971

Effective date: 20230818