US20140105025A1 - Dynamic Assignment of Traffic Classes to a Priority Queue in a Packet Forwarding Device - Google Patents

Dynamic Assignment of Traffic Classes to a Priority Queue in a Packet Forwarding Device

Info

Publication number
US20140105025A1
Authority
US
United States
Prior art keywords
packet flow
class
packet
instructions executable
flow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/134,230
Inventor
Tal Lavian
Stephen Lau
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
RPX Clearinghouse LLC
Original Assignee
Rockstar Consortium US LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rockstar Consortium US LP filed Critical Rockstar Consortium US LP
Priority to US14/134,230 priority Critical patent/US20140105025A1/en
Publication of US20140105025A1 publication Critical patent/US20140105025A1/en
Assigned to RPX CLEARINGHOUSE LLC reassignment RPX CLEARINGHOUSE LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BOCKSTAR TECHNOLOGIES LLC, CONSTELLATION TECHNOLOGIES LLC, MOBILESTAR TECHNOLOGIES LLC, NETSTAR TECHNOLOGIES LLC, ROCKSTAR CONSORTIUM LLC, ROCKSTAR CONSORTIUM US LP
Abandoned legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/24: Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 47/2441: relying on flow classification, e.g. using integrated services [IntServ]
    • H04L 47/2425: for supporting services specification, e.g. SLA
    • H04L 47/2433: Allocation of priorities to traffic types
    • H04L 47/2458: Modification of priorities while in transit
    • H04L 47/29: Flow control; Congestion control using a combination of thresholds
    • H04L 47/50: Queue scheduling
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/6215: Individual queue per QoS, rate or priority
    • H04L 47/625: Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/6255: queue load conditions, e.g. longest queue first
    • H04L 47/6275: based on priority

Definitions

  • the present invention relates to the field of telecommunications, and more particularly to dynamic assignment of traffic classes to queues having different priority levels.
  • a typical switch or router includes a number of input/output (I/O) modules connected to a switching fabric, such as a crossbar or shared memory switch.
  • the switching fabric is operated at a higher frequency than the transmission frequency of the I/O modules so that the switching fabric may deliver packets to an I/O module faster than the I/O module can output them to the network transmission medium.
  • packets are usually queued in the I/O module to await transmission.
  • a method and apparatus for dynamic assignment of classes of traffic to a priority queue are disclosed.
  • Bandwidth consumption by one or more types of packet traffic received in a packet forwarding device is monitored.
  • the queue assignment of at least one type of packet traffic is automatically changed from a queue having a first priority to a queue having a second priority if the bandwidth consumption exceeds a threshold.
  • FIG. 1 illustrates a packet forwarding device that can be used to implement embodiments of the present invention
  • FIG. 2A illustrates queue fill logic implemented by a queue manager in a quad interface device
  • FIG. 2B illustrates queue drain logic according to one embodiment
  • FIG. 3 illustrates the flow of a packet within the switch of FIG. 1;
  • FIG. 4 illustrates storage of an entry in an address resolution table managed by an address resolution unit
  • FIG. 5 is a diagram of the software architecture of the switch of FIG. 1 according to one embodiment.
  • FIG. 6 illustrates an example of dynamic assignment of traffic classes to a priority queue.
  • a packet forwarding device in which selected classes of network traffic may be dynamically assigned for priority queuing includes a Java virtual machine for executing user-coded Java applets received from a network management server (NMS).
  • a Java-to-native interface (JNI) is provided to allow the Java applets to obtain error information and traffic statistics from the device hardware and to allow the Java applets to write configuration information to the device hardware, including information that indicates which classes of traffic should be queued in priority queues.
  • the Java applets implement user-specified traffic management policies based on real-time evaluation of the error information and traffic statistics to provide dynamic control of the priority queuing assignments.
  • although Java provides a number of advantages when used to implement the present invention, e.g., dynamic on-demand use, other programming languages such as C may be used in its place.
  • FIG. 1 illustrates a packet forwarding device 17 that can be used to implement embodiments of the present invention.
  • the packet forwarding device 17 is assumed to be a switch that switches packets between ingress and egress ports based on media access control (MAC) addresses within the packets.
  • the packet forwarding device 17 may be a router that routes packets according to destination internet protocol (IP) addresses or a routing switch that performs both MAC address switching and IP address routing.
  • the techniques and structures disclosed herein are applicable generally to a device that forwards packets in a packet switching network.
  • the term packet is used broadly herein to refer to a fixed-length cell, a variable-length frame or any other information structure that is self-contained as to its destination address.
  • the switch 17 includes a switching fabric 12 coupled to a plurality of I/O units (only I/O units 1 and 16 are depicted) and to a processing unit 10.
  • the processing unit includes at least a processor 31 (which may be a microprocessor, digital signal processor or microcontroller) coupled to a memory 32 via a bus 33.
  • each I/O unit 1, 16 includes four physical ports P1-P4 coupled to a quad media access controller (QMAC) 14A, 14B via respective transceiver interface units 21A-24A, 21B-24B.
  • Each I/O unit 1, 16 also includes a quad interface device (QID) 16A, 16B, an address resolution unit (ARU) 15A, 15B and a memory 18A, 18B, interconnected as shown in FIG. 1.
  • the switch 17 is modular, with at least the I/O units 1, 16 being implemented on port cards (not shown) that can be installed in a backplane (not shown) of the switch 17.
  • each port card includes a given number of I/O units and therefore supports a corresponding number of physical ports.
  • the switch backplane includes slots for a given number of port cards, so that the switch 17 can be scaled according to customer needs to support a number of physical ports as controlled by the number of port cards.
  • in alternate embodiments, each I/O unit 1, 16 may support more or fewer physical ports, each port card may support more or fewer I/O units 1, 16, and the switch 17 may support more or fewer port cards.
  • the I/O unit 1 shown in FIG. 1 may be used to support 10baseT transmission lines (i.e., 10 Mbps (megabit per second), twisted-pair) or 100baseF transmission lines (100 Mbps, fiber optic), while a different I/O unit (not shown) may be used to support a 1000baseF transmission line (1000 Mbps, fiber optic).
  • when a packet 25 is received on physical port P1, it is supplied to the corresponding physical transceiver 21A, which performs any necessary signal conditioning (e.g., optical to electrical signal conversion) and then forwards the packet 25 to the QMAC 14A.
  • the QMAC 14A buffers packets received from the physical transceivers 21A-24A as necessary, forwarding one packet at a time to the QID 16A.
  • Receive logic within the QID 16A notifies the ARU 15A that the packet 25 has been received.
  • the ARU computes a table index based on the destination MAC address within the packet 25 and uses the index to identify an entry in a forwarding table that corresponds to the destination MAC address.
  • a forwarding table may be indexed based on other destination information contained within the packet.
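  • For illustration only, the index computation might be sketched as follows in Java; the hash function and table size are assumptions, since the text specifies neither:

        // Illustrative sketch: hash the destination MAC address into an index
        // for the forwarding (address resolution) table. The multiplier and
        // table size are assumed, not taken from the patent text.
        final class ForwardingTableIndex {
            private static final int TABLE_SIZE = 4096;  // assumed power-of-two table size

            static int indexFor(long destMac) {
                long h = destMac * 0x9E3779B97F4A7C15L;  // Fibonacci hashing (illustrative)
                return (int) ((h >>> 52) & (TABLE_SIZE - 1));
            }
        }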
  • the forwarding table entry identified based on the destination MAC address indicates the switch egress port to which the packet 25 is destined and also whether the packet is part of a MAC-address based virtual local area network (VLAN), or a port-based VLAN.
  • the forwarding table entry further indicates whether the packet 25 is to be queued in a priority queue in the I/O unit that contains the destination port. As discussed below, priority queuing may be specified based on a number of conditions, including, but not limited to, whether the packet is part of a particular IP flow, or whether the packet is destined for a particular port, VLAN or MAC address.
  • the QID 16A, 16B segments the packet 25 into a plurality of fixed-length cells 26 for transmission through the switching fabric 12.
  • Each cell includes a header 28 that identifies it as a constituent of the packet 25 and that identifies the destination port for the cell (and therefore for the packet 25).
  • the header 28 of each cell also includes a bit 29 indicating whether the cell is the beginning cell of a packet and also a bit 30 indicating whether the packet 25 to which the cell belongs is to be queued in a priority queue or a best effort queue on the destined I/O unit.
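  • A minimal Java sketch of such a cell header follows; the field widths and bit positions are assumptions, and "element 29/30" refers to the reference numerals of FIG. 1, not to bit indices:

        // Hypothetical cell header carrying the two flags described above.
        final class CellHeader {
            private static final int BOP_BIT      = 1 << 0;  // beginning-of-packet flag (element 29)
            private static final int PRIORITY_BIT = 1 << 1;  // priority-queue flag (element 30)

            private final int destPort;   // destination I/O unit/port for every cell of the packet
            private final int packetId;   // marks the cell as a constituent of one particular packet
            private final int flags;

            CellHeader(int destPort, int packetId, boolean beginning, boolean priority) {
                this.destPort = destPort;
                this.packetId = packetId;
                this.flags = (beginning ? BOP_BIT : 0) | (priority ? PRIORITY_BIT : 0);
            }

            boolean isBeginningCell() { return (flags & BOP_BIT) != 0; }
            boolean isPriority()      { return (flags & PRIORITY_BIT) != 0; }
            int destPort()            { return destPort; }
            int packetId()            { return packetId; }
        }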
  • the switching fabric 12 forwards each cell to the I/O unit indicated by the cell header 28.
  • the constituent cells 26 of the packet 25 are assumed to be forwarded to I/O unit 16, where they are delivered to transmit logic within the QID 16B.
  • the transmit logic in the QID 16B includes a queue manager (not shown) that maintains a priority queue and a best effort queue in the memory 18B.
  • the memory 18B is resolved into a pool of buffers, each large enough to hold a complete packet.
  • the queue manager obtains a buffer from the pool and appends the buffer to either the priority queue or the best effort queue according to whether the priority bit 30 is set in the beginning cell.
  • the priority queue and the best effort queue are each implemented by a linked list, with the queue manager maintaining respective pointers to the head and tail of each linked list. Entries are added to the tail of the queue list by advancing the tail pointer to point to a newly allocated buffer that has been appended to the linked list, and entries are popped off the head of the queue by advancing the head pointer to point to the next buffer in the linked list and returning the spent buffer to the pool.
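  • The following Java sketch models the buffer pool entries and a head/tail linked-list queue; the buffer size and the reassembly bookkeeping fields are illustrative assumptions:

        // One buffer from the pool, large enough to hold a complete packet.
        final class PacketBuffer {
            byte[] data = new byte[2048];  // size assumed for the sketch
            int bytesReceived;             // reassembly progress
            int expectedLength;            // total packet length, learned from the beginning cell
            boolean readyToSend;           // set once the packet is fully reassembled
            PacketBuffer next;             // link to the next buffer in the list
        }

        // Linked-list queue with head and tail pointers, as described above.
        final class BufferQueue {
            private PacketBuffer head, tail;

            // Add an entry at the tail by advancing the tail pointer.
            void enqueue(PacketBuffer buf) {
                buf.next = null;
                if (tail == null) { head = tail = buf; } else { tail.next = buf; tail = buf; }
            }

            PacketBuffer peekHead() { return head; }

            // Pop the head entry by advancing the head pointer; the caller
            // returns the spent buffer to the pool.
            PacketBuffer dequeue() {
                PacketBuffer buf = head;
                if (buf != null) { head = buf.next; if (head == null) tail = null; }
                return buf;
            }
        }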
  • the beginning cell and subsequent cells are used to reassemble the packet 25 within the buffer.
  • the packet 25 is eventually popped off the head of the queue and delivered to an egress port via the QMAC 14B and the physical transceiver (e.g., 23B) in an egress operation. This is shown by way of example in FIG. 1 by the egress of packet 25 from physical port P3 of I/O unit 16.
  • FIG. 2A illustrates queue fill logic implemented by the queue manager in the QID.
  • a cell is received in the QID from the switching fabric.
  • the beginning cell bit in the cell header is inspected at decision block 53 to determine if the cell is the beginning cell of a packet. If so, the priority bit in the cell header is inspected at decision block 55 to determine whether to allocate an entry in the priority queue or the best effort queue for packet reassembly. If the priority bit is set, an entry in the priority queue is allocated at block 57 and the priority queue entry is associated with the portion of the cell header that identifies the cell as a constituent of a particular packet at block 59. If the priority bit in the cell header is not set, then an entry in the best effort queue is allocated at block 61 and the best effort queue entry is associated with the portion of the cell header that identifies the cell as a constituent of a particular packet at block 63.
  • if the beginning cell bit in the cell header is not set, then the queue entry associated with the cell header is identified at block 65.
  • the association between the cell header and the queue entry identified at block 65 was established earlier in either block 59 or block 63.
  • identification of the queue entry in block 65 may include inspection of the priority bit in the cell to narrow the identification effort to either the priority queue or the best effort queue.
  • at block 67, the cell is combined with the preceding cell in the queue entry in a packet reassembly operation. If the reassembly operation in block 67 results in a completed packet (decision block 69), then the packet is marked as ready for transmission in block 71. In one embodiment, the packet is marked by setting a flag associated with the queue entry in which the packet has been reassembled. Other techniques for indicating that a packet is ready for transmission may be used in alternate embodiments.
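  • The fill logic of FIG. 2A might be sketched as follows, reusing the CellHeader and BufferQueue types above; the map from packet identifier to queue entry and the recovery of the packet length from the beginning cell are assumptions made for illustration:

        final class QueueFillLogic {
            private final BufferQueue priorityQueue   = new BufferQueue();
            private final BufferQueue bestEffortQueue = new BufferQueue();
            private final java.util.Map<Integer, PacketBuffer> reassembly = new java.util.HashMap<>();

            void onCellReceived(CellHeader hdr, byte[] payload) {
                PacketBuffer entry;
                if (hdr.isBeginningCell()) {                              // decision block 53
                    entry = new PacketBuffer();                           // obtain a buffer from the pool
                    entry.expectedLength = packetLengthFrom(payload);
                    if (hdr.isPriority()) priorityQueue.enqueue(entry);   // blocks 55, 57, 59
                    else bestEffortQueue.enqueue(entry);                  // blocks 61, 63
                    reassembly.put(hdr.packetId(), entry);                // associate header with entry
                } else {
                    entry = reassembly.get(hdr.packetId());               // block 65
                }
                // Block 67: combine the cell with the preceding cells in the entry.
                System.arraycopy(payload, 0, entry.data, entry.bytesReceived, payload.length);
                entry.bytesReceived += payload.length;
                if (entry.bytesReceived >= entry.expectedLength) {        // decision block 69
                    entry.readyToSend = true;                             // block 71
                    reassembly.remove(hdr.packetId());
                }
            }

            // Assumed: the packet length is recoverable from the first cell's payload.
            private int packetLengthFrom(byte[] firstCellPayload) {
                return ((firstCellPayload[0] & 0xFF) << 8) | (firstCellPayload[1] & 0xFF);
            }
        }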
  • FIG. 2B illustrates queue drain logic according to one embodiment.
  • at decision block 75, the entry at the head of the priority queue is inspected to determine if it contains a packet ready for transmission. If so, the packet is transmitted at block 77 and the corresponding priority queue entry is popped off the head of the priority queue and deallocated at block 79. If a ready packet is not present at the head of the priority queue, then the entry at the head of the best effort queue is inspected at decision block 81. If a packet is ready at the head of the best effort queue, it is transmitted at block 83 and the corresponding best effort queue entry is popped off the head of the best effort queue and deallocated in block 85. Note that, in the embodiment illustrated in FIG. 2B, packets are drained from the best effort queue only after the priority queue has been emptied.
  • in alternate embodiments, a timer, counter or similar logic element may be used to ensure that the best effort queue 105 is serviced at least every so often, or at least after every N packets are transmitted from the priority queue, thereby ensuring at least a threshold level of service to the best effort queue, as sketched below.
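  • A Java sketch of the FIG. 2B drain logic, including the optional anti-starvation counter; the constant N and its value are assumed:

        final class QueueDrainLogic {
            private static final int N = 16;  // serve best effort at least once per N priority packets (assumed)
            private int priorityCount;

            private final BufferQueue priorityQueue;
            private final BufferQueue bestEffortQueue;

            QueueDrainLogic(BufferQueue pq, BufferQueue beq) {
                priorityQueue = pq;
                bestEffortQueue = beq;
            }

            void drainOne() {
                PacketBuffer head = priorityQueue.peekHead();
                boolean priorityReady = head != null && head.readyToSend;
                // Fall through to the best effort queue if no priority packet is
                // ready (decision block 81) or if the counter has expired.
                if (!priorityReady || priorityCount >= N) {
                    PacketBuffer be = bestEffortQueue.peekHead();
                    if (be != null && be.readyToSend) {
                        transmit(bestEffortQueue.dequeue());  // blocks 83, 85
                        priorityCount = 0;
                        return;
                    }
                }
                if (priorityReady) {
                    transmit(priorityQueue.dequeue());        // blocks 77, 79
                    priorityCount++;
                }
            }

            private void transmit(PacketBuffer buf) { /* hand off to the QMAC; buffer returns to the pool */ }
        }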
  • FIG. 3 illustrates the flow of a packet within the switch 17 of FIG. 1 .
  • a packet is received in the switch at block 91 and used to identify an entry in a forwarding table called the address resolution (AR) table at block 93.
  • at decision block 95, a priority bit in the AR table entry is inspected to determine whether the packet belongs to a class of traffic that has been selected for priority queuing. If the priority bit is set, the packet is segmented into cells having respective priority bits set in their headers in block 97. If the priority bit is not set, the packet is segmented into cells having respective priority bits cleared in their cell headers in block 99.
  • the constituent cells of each packet are forwarded to an egress I/O unit by the switching fabric. In the egress I/O unit, the priority bit of each cell is inspected (decision block 101) and used to direct the cell to an entry in either the priority queue 103 or the best effort queue 105, where it is combined with other cells to reassemble the packet.
  • FIG. 4 illustrates storage of an entry in the address resolution (AR) table managed by the ARU.
  • the AR table is maintained in a high speed static random access memory (SRAM) coupled to the ARU.
  • the AR table may be included in a memory within an application-specific integrated circuit (ASIC) that includes the ARU.
  • the ARU stores an entry in the AR table in response to packet forwarding information from the processing unit.
  • the processing unit supplies packet forwarding information to be stored in each AR table in the switch whenever a new association between a destination address and a switch egress port is learned.
  • an address-to-port association is learned by transmitting a packet that has an unknown egress port assignment on each of the egress ports of the switch and associating the destination address of the packet with the egress port at which an acknowledgment is received.
  • the processing unit issues forwarding information that includes, for example, an identifier of the newly associated egress port, the destination MAC address, an identifier of the VLAN associated with the MAC address (if any), an identifier of the VLAN associated with the egress port (if any), the destination IP address, the destination IP port (e.g., transmission control protocol (TCP), user datagram protocol (UDP) or other IP port) and the IP protocol (e.g., HTTP, FTP or other IP protocol).
  • the source IP address, source IP port and source IP protocol may also be supplied to fully identify an end-to-end IP flow.
  • forwarding information 110 is received from the processing unit at block 115.
  • the ARU stores the forwarding information in an AR table entry.
  • the physical egress port identifier stored in the AR table entry is compared against priority configuration information to determine if packets destined for the egress port have been selected for priority egress queuing. If so, the priority bit is set in the AR table entry in block 127. Thereafter, incoming packets that index the newly stored table entry will be queued in the priority queue to await transmission.
  • the MAC address stored in the AR table entry is compared against the priority configuration information to determine if packets destined for the MAC address have been selected for priority egress queuing. If so, the priority bit is set in the AR table entry in block 127. If packets destined for the MAC address have not been selected for priority egress queuing, then at decision block 123 the VLAN identifier stored in the AR table entry (if present) is compared against the priority configuration information to determine if packets destined for the VLAN have been selected for priority egress queuing. If so, the priority bit is set in the AR table entry in block 127.
  • if packets destined for the VLAN have not been selected for priority egress queuing, then the IP flow identified by the IP address, IP port and IP protocol in the AR table entry is compared against the priority configuration information to determine if packets that form part of the IP flow have been selected for priority egress queuing. If so, the priority bit is set in the AR table entry; otherwise the priority bit is not set. Yet other criteria may be considered in assigning priority queuing in alternate embodiments. For example, priority queuing may be specified for a particular IP protocol (e.g., FTP, HTTP). Also, the ingress port, source MAC address or source VLAN of a packet may be used to determine whether to queue the packet in the priority egress queue.
  • priority or best effort queuing of unicast traffic is determined based on destination parameters (e.g., egress port, destination MAC address or destination IP address), while priority or best effort queuing of multicast traffic is determined based on source parameters (e.g., ingress port, source MAC address or source IP address). The decision cascade of FIG. 4 is sketched below.
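  • A compact Java sketch of the FIG. 4 cascade; the entry fields, the configuration sets and the flow-key encoding are assumptions:

        final class ArTableEntry {
            int egressPort;    // physical egress port identifier
            long destMac;      // destination MAC address
            int vlanId;        // VLAN identifier, or -1 if absent
            long ipFlowKey;    // assumed packed encoding of IP address, IP port and IP protocol
            boolean priority;  // the priority bit stored with the entry
        }

        final class PriorityConfig {
            java.util.Set<Integer> priorityPorts = new java.util.HashSet<>();
            java.util.Set<Long>    priorityMacs  = new java.util.HashSet<>();
            java.util.Set<Integer> priorityVlans = new java.util.HashSet<>();
            java.util.Set<Long>    priorityFlows = new java.util.HashSet<>();

            // Set the priority bit if any of the checks of decision blocks
            // 119, 121 and 123 or block 125 succeeds.
            void applyPriority(ArTableEntry e) {
                e.priority = priorityPorts.contains(e.egressPort)
                          || priorityMacs.contains(e.destMac)
                          || priorityVlans.contains(e.vlanId)
                          || priorityFlows.contains(e.ipFlowKey);
            }
        }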
  • FIG. 5 is a diagram of the software architecture of the switch 17 of FIG. 1 according to one embodiment.
  • An operating system 143 and device drivers 145 are provided to interface with the device hardware 141 .
  • device drivers are provided to write configuration information and AR storage entries to the ARUs in respective I/O units.
  • the operating system 143 performs memory management functions and other system services in response to requests from higher level software.
  • the device drivers 145 extend the services provided by the operating system and are invoked in response to requests for operating system service that involve device-specific operations.
  • the device management code 147 is executed by the processing unit (e.g., element 10 of FIG. 1) to perform system level functions, including management of forwarding entries in the distributed AR tables and management of forwarding entries in a master forwarding table maintained in the memory of the processing unit.
  • the device management code 147 also includes routines for invoking device driver services, for example, to query the ARU for traffic statistics and error information, or to write updated configuration information to the ARUs, including priority queuing information. Further, the device management code 147 includes routines for writing updated configuration information to the ARUs, as discussed below in reference to FIG. 6.
  • the device management code 147 is native code, meaning that the device management code 147 is a compiled set of instructions that can be executed directly by a processor in the processing unit to carry out the device management functions.
  • the device management code 147 supports the operation of a Java client 160 that includes a number of Java applets, including a monitor applet 157, a policy enforcement applet 159 and a configuration applet 161.
  • a Java applet is an instantiation of a Java class that includes one or more methods for self-initialization (e.g., a constructor method called "Applet()") and one or more methods for communicating with a controlling application.
  • the controlling application for a Java applet is typically a web browser executed on a general purpose computer.
  • here, however, a Java application called Data Communication Interface (DCI) 153 is the controlling application for the monitor, policy enforcement and configuration applets 157, 159, 161.
  • the DCI application 153 is executed by a Java virtual machine 149 to manage the download of Java applets from a network management server (NMS) 170.
  • a library of Java objects 155 is provided for use by the Java applets 157, 159, 161 and the DCI application 153.
  • Java is not essential to the present invention and is used for purposes of illustration and explanation. Other programming languages may be used in its place.
  • the NMS 170 supplies Java applets to the switch 17 in a hyper-text transfer protocol (HTTP) data stream.
  • Other protocols may also be used.
  • the constituent packets of the HTTP data stream are addressed to the IP address of the switch and are directed to the processing unit after being received by the I/O unit coupled to the NMS 170 .
  • after authenticating the HTTP data stream, the DCI application 153 stores the Java applets provided in the data stream in the memory of the processing unit and executes a method to invoke each applet.
  • An applet is invoked by supplying the Java virtual machine 149 with the address of the constructor method of the applet and causing the Java virtual machine 149 to begin execution of the applet code.
  • Program code defining the Java virtual machine 149 is executed to interpret the platform-independent byte codes of the Java applets 157, 159, 161 into native instructions that can be executed by a processor within the processing unit.
  • the monitor applet 157, policy enforcement applet 159 and configuration applet 161 communicate with the device management code 147 through a Java-native interface (JNI) 151.
  • the JNI 151 is essentially an application programming interface (API) and provides a set of methods that can be invoked by the Java applets 157, 159, 161 to send messages and receive responses from the device management code 147.
  • the JNI 151 includes methods by which the monitor applet 157 can request the device management code 147 to gather error information and traffic statistics from the device hardware 141.
  • the JNI 151 also includes methods by which the configuration applet 161 can request the device management code 147 to write configuration information to the device hardware 141.
  • the JNI 151 includes a method by which the configuration applet 161 can indicate that priority queuing should be performed for specified classes of traffic, including, but not limited to, the classes of traffic discussed above in reference to FIG. 4.
  • a user-coded configuration applet 161 may be executed by the Java virtual machine 149 within the switch 17 to invoke a method in the JNI 151 to request the device management code 147 to write information that assigns selected classes of traffic to be queued in the priority egress queue.
  • in effect, the configuration applet 161 assigns virtual queues defined by the selected classes of traffic to feed into the priority egress queue.
  • although a Java virtual machine 149 and Java applets 157, 159, 161 have been described, other virtual machines, interpreters and scripting languages may be used in alternate embodiments. Also, as discussed below, more or fewer Java applets may be used to perform the monitoring, policy enforcement and configuration functions in alternate embodiments. A hypothetical sketch of such a JNI follows.
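  • The following is a hypothetical Java sketch of such an interface; none of these method names or signatures appear in the text, and the traffic-class encoding is an assumption:

        // Hypothetical JNI surface between the applets and the device management code.
        interface DeviceJni {
            // Monitor side: statistics and error information from the hardware.
            long getByteCount(int port, long destMac);  // cumulative bytes forwarded for this class
            long getErrorCount(int port);
            long getDroppedPacketCount(int port);

            // Configuration side: which classes of traffic feed the priority egress queue.
            void setQueueAssignment(TrafficClass trafficClass, Queue queue);

            enum Queue { PRIORITY, BEST_EFFORT }
        }

        // A traffic class is any of the filter criteria discussed in reference
        // to FIG. 4: an egress port, a destination MAC address, a VLAN or an IP flow.
        final class TrafficClass {
            enum Kind { EGRESS_PORT, DEST_MAC, VLAN, IP_FLOW }

            final Kind kind;
            final long value;

            TrafficClass(Kind kind, long value) { this.kind = kind; this.value = value; }
        }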
  • FIG. 6 illustrates an example of dynamic assignment of traffic classes to a priority queue.
  • An exemplary network includes switches A and B coupled together at physical ports 32 and 1, respectively.
  • a network administrator or other user determines that an important server 175 on port 2 of switch A requires a relatively high quality of service (QoS), and that, at least in switch B, the required QoS can be provided by ensuring that at least 20% of the egress capacity of switch B, port 1 is reserved for traffic destined to the MAC address of the server 175.
  • One way to ensure that 20% of the egress capacity is reserved for traffic destined for the server 175 is to assign priority queuing for packets destined to the MAC address of the server 175, but not for other traffic.
  • there may be other traffic, however, e.g., traffic destined for MAC address A and MAC address B, to which the user desires to assign priority queuing, so long as the egress capacity required by the server-destined traffic is available.
  • FIG. 6 includes exemplary pseudocode listings of monitor, policy enforcement and configuration applets 178, 179, 180 that can be used to ensure that at least 20% of the egress capacity of switch B, port 1 is reserved for traffic destined to the server 175, but without unnecessarily denying priority queuing assignment to traffic destined for MAC addresses A and B.
  • the monitor applet 178 repeatedly measures the port 1 line utilization from the device hardware.
  • the ARU in the I/O unit that manages port 1 keeps a count of the number of packets destined for particular egress ports, packets destined for particular MAC addresses, packets destined for particular VLANs, packets that form part of a particular IP flow, packets having a particular IP protocol, and so forth.
  • the ARU also tracks the number of errors associated with these different classes of traffic, the number of packets from each class of traffic that are dropped, and other statistics. By determining the change in these different statistics per unit time, a utilization factor may be generated that represents the percent utilization of the capacity of an egress port, an I/O unit or the overall switch. Error rates and packet drop rates may also be generated.
  • the monitor applet 178 measures line utilization by invoking methods in the JNI to read the port 1 line utilization resulting from traffic destined for MAC address A and for MAC address B on a periodic basis, e.g., every 10 milliseconds.
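  • A Java sketch of the monitor applet's polling loop, deriving a utilization factor from counter deltas per unit time as described above; the MAC address constants, the line rate and the getByteCount method are assumptions:

        final class MonitorApplet implements Runnable {
            private static final long MAC_A = 0x0000AAAAAAAAL;  // placeholder addresses
            private static final long MAC_B = 0x0000BBBBBBBBL;
            private static final double LINE_RATE = 100e6;      // port 1 line rate in bits/s (assumed)
            private static final long PERIOD_MS = 10;           // per the text: every 10 milliseconds

            private final DeviceJni jni;
            private long lastA, lastB;
            private volatile double pctA, pctB;

            MonitorApplet(DeviceJni jni) { this.jni = jni; }

            double percentA() { return pctA; }
            double percentB() { return pctB; }

            public void run() {
                while (true) {
                    long a = jni.getByteCount(1, MAC_A);
                    long b = jni.getByteCount(1, MAC_B);
                    // Utilization factor: change in the byte counter per unit time,
                    // expressed as a percentage of the line rate.
                    pctA = 100.0 * (a - lastA) * 8 / (PERIOD_MS / 1000.0) / LINE_RATE;
                    pctB = 100.0 * (b - lastB) * 8 / (PERIOD_MS / 1000.0) / LINE_RATE;
                    lastA = a;
                    lastB = b;
                    try { Thread.sleep(PERIOD_MS); } catch (InterruptedException e) { return; }
                }
            }
        }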
  • the policy enforcement applet 179 includes variables to hold the line utilization percentage of traffic destined for MAC address A (A%), the line utilization percentage of traffic destined for MAC address B (B%), the queue assignment (i.e., priority or best effort) of traffic destined for the server MAC address (QA_S), the queue assignment of traffic destined for MAC address A (QA_A) and the queue assignment of traffic destined for MAC address B (QA_B). Also, a constant, DELTA, is defined to be 5%, and the queue assignments for the MAC address A, MAC address B and server MAC address traffic are initially set to the priority queue.
  • the policy enforcement applet 179 also includes a forever loop in which the line utilization percentages A% and B% are obtained from the monitor applet 178 and used to determine whether to change the queue assignments QA_A and QA_B. If the MAC address A traffic and the MAC address B traffic are both assigned to the priority queue (the initial configuration) and the sum of the line utilization percentages A% and B% exceeds 80%, then less than 20% line utilization remains for the server-destined traffic. In that event, the MAC address A traffic is reassigned from the priority queue to the best effort queue (code statement 181).
  • if the MAC address A traffic is assigned to the best effort queue and the sum of the line utilization percentages A% and B% drops below 80% less DELTA, the MAC address A traffic is reassigned to the priority queue (code statement 183).
  • the DELTA parameter provides a deadband to prevent rapid changing of priority queue assignment.
  • likewise, if the MAC address B traffic is assigned to the priority queue and the line utilization percentage B% exceeds 80%, the MAC address B traffic is reassigned from the priority queue to the best effort queue (code statement 185). If the MAC address B traffic is assigned to the best effort queue and the line utilization percentage B% drops below 80% less DELTA, then the MAC address B traffic is reassigned to the priority queue (code statement 187).
  • in an alternate embodiment, the policy enforcement applet 179 may treat the traffic destined for the MAC A and MAC B addresses more symmetrically, e.g., by including additional statements to conditionally assign traffic destined for MAC address A, but not traffic destined for MAC address B, to the priority queue.
  • the policy enforcement applet 179 delays for 5 milliseconds at the end of each pass through the forever loop before repeating.
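  • A Java sketch of this policy loop follows. The 80% reservation threshold, the 5% DELTA deadband and the 5 ms delay come from the text; the stand-alone condition for code statement 185 (B% alone exceeding 80%) is the assumption noted above:

        final class PolicyApplet implements Runnable {
            static final double THRESHOLD = 80.0;  // leaves 20% for server-destined traffic
            static final double DELTA = 5.0;       // deadband against rapid reassignment

            private final MonitorApplet monitor;
            volatile DeviceJni.Queue qaA = DeviceJni.Queue.PRIORITY;  // QA_A
            volatile DeviceJni.Queue qaB = DeviceJni.Queue.PRIORITY;  // QA_B

            PolicyApplet(MonitorApplet monitor) { this.monitor = monitor; }

            public void run() {
                while (true) {
                    double a = monitor.percentA(), b = monitor.percentB();
                    if (qaA == DeviceJni.Queue.PRIORITY && qaB == DeviceJni.Queue.PRIORITY
                            && a + b > THRESHOLD) {
                        qaA = DeviceJni.Queue.BEST_EFFORT;            // code statement 181
                    } else if (qaA == DeviceJni.Queue.BEST_EFFORT && a + b < THRESHOLD - DELTA) {
                        qaA = DeviceJni.Queue.PRIORITY;               // code statement 183
                    }
                    if (qaB == DeviceJni.Queue.PRIORITY && b > THRESHOLD) {
                        qaB = DeviceJni.Queue.BEST_EFFORT;            // code statement 185 (condition assumed)
                    } else if (qaB == DeviceJni.Queue.BEST_EFFORT && b < THRESHOLD - DELTA) {
                        qaB = DeviceJni.Queue.PRIORITY;               // code statement 187
                    }
                    try { Thread.sleep(5); } catch (InterruptedException e) { return; }  // 5 ms delay
                }
            }
        }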
  • the configuration applet 180 includes variables, QA_A and QA_B, to hold the queue assignments of the traffic destined for the MAC addresses A and B, respectively.
  • Variables LAST_QA_A and LAST_QA_B are also provided to record the history (i.e., most recent values) of the QA_A and QA_B values.
  • the LAST_QA_A and LAST_QA_B variables are initialized to indicate that traffic destined for the MAC addresses A and B is assigned to the priority queue.
  • the configuration applet 180 includes a forever loop in which a code sequence is executed followed by a delay. In the exemplary listing of FIG. 6, the first operation performed by the configuration applet 180 within the forever loop is to obtain the queue assignments QA_A and QA_B from the policy enforcement applet 179. If the queue assignment indicated by QA_A is different from the queue assignment indicated by LAST_QA_A, then a JNI method is invoked to request the device code to reconfigure the queue assignment of the traffic destined for MAC address A according to the new QA_A value. The new QA_A value is then copied into the LAST_QA_A variable so that subsequent queue assignment changes are detected.
  • similarly, if the queue assignment indicated by QA_B is different from the queue assignment indicated by LAST_QA_B, a JNI method is invoked to request the device code to reconfigure the queue assignment of the traffic destined for MAC address B according to the new QA_B value.
  • the new QA_B value is then copied into the LAST_QA_B variable so that subsequent queue assignment changes are detected.
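  • A Java sketch of the configuration loop, reusing the hypothetical DeviceJni and TrafficClass types sketched earlier; reconfiguration is requested only when a queue assignment differs from its recorded previous value:

        final class ConfigApplet implements Runnable {
            private final PolicyApplet policy;
            private final DeviceJni jni;
            private DeviceJni.Queue lastQaA = DeviceJni.Queue.PRIORITY;  // LAST_QA_A
            private DeviceJni.Queue lastQaB = DeviceJni.Queue.PRIORITY;  // LAST_QA_B

            ConfigApplet(PolicyApplet policy, DeviceJni jni) {
                this.policy = policy;
                this.jni = jni;
            }

            public void run() {
                TrafficClass macA = new TrafficClass(TrafficClass.Kind.DEST_MAC, 0x0000AAAAAAAAL);
                TrafficClass macB = new TrafficClass(TrafficClass.Kind.DEST_MAC, 0x0000BBBBBBBBL);
                while (true) {
                    DeviceJni.Queue qaA = policy.qaA, qaB = policy.qaB;
                    if (qaA != lastQaA) {                  // queue assignment for MAC A changed
                        jni.setQueueAssignment(macA, qaA);
                        lastQaA = qaA;                     // record history for change detection
                    }
                    if (qaB != lastQaB) {                  // queue assignment for MAC B changed
                        jni.setQueueAssignment(macB, qaB);
                        lastQaB = qaB;
                    }
                    try { Thread.sleep(5); } catch (InterruptedException e) { return; }
                }
            }
        }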
  • although a three-applet implementation is illustrated in FIG. 6, more or fewer applets may be used in an alternate embodiment.
  • for example, the functions of the monitor, policy enforcement and configuration applets 178, 179, 180 may be implemented in a single applet.
  • multiple applets may be provided to perform policy enforcement or other functions using different queue assignment criteria.
  • one policy enforcement applet may make priority queue assignments based on destination MAC addresses, while another policy enforcement applet makes priority queue assignments based on error rates or line utilization of higher level protocols.
  • Multiple monitor applets or configuration applets may similarly be provided.
  • although a queue assignment policy based on destination MAC address is illustrated in FIG. 6, queue assignments may be updated based on other traffic patterns, including traffic to specified destination ports, traffic from specified source ports, traffic from specified source MAC addresses, traffic that forms part of a specified IP flow, traffic that is transmitted using a specified protocol (e.g., HTTP, FTP or other protocols) and so forth.
  • queue assignments may be updated based on environmental conditions such as time of day, changes in network configuration (e.g., due to failure or congestion at other network nodes), error rates, packet drop rates and so forth.
  • Monitoring, policy enforcement and configuration applets that combine many or all of the above-described criteria may be implemented to provide sophisticated traffic handling capability in a packet forwarding device.
  • the methods and apparatuses described herein may alternatively be used to assign traffic classes to a hierarchical set of queues anywhere in a packet forwarding device including, but not limited to, ingress queues and queues associated with delivering and receiving packets from the switching fabric.
  • although the queue assignment of traffic classes has been described in terms of a pair of queues (priority and best effort), additional queues in a prioritization hierarchy may be used without departing from the spirit and scope of the present invention.
  • traffic can be filtered based on its type: source (e.g., source MAC address or source VLAN), ingress port, destination (e.g., destination MAC address or destination IP address), egress port, protocol (e.g., FTP, HTTP) or other hardware-supported filters.
  • in one embodiment, filtering of unicast traffic is determined based on destination parameters such as egress port, destination MAC address or destination IP address, while filtering of multicast traffic is determined based on source parameters such as ingress port, source MAC address or source IP address.
  • Filtering may be based on environmental conditions, such as time of day, changes in network configuration (e.g., due to failure or congestion at other network nodes), error rates, packet drop rates or line utilization of higher-level protocols. It may be based on traffic patterns such as traffic from specified source ports, traffic to specified destination ports, traffic from specified source MAC addresses or traffic that forms part of a specified IP flow. Various other hardware counters, monitors and dynamic values can be read from the hardware.
  • dynamic filtering decisions may be made on how to process packets beyond choosing whether they should go to a priority or best effort queue; for example, packets may be dropped or copied, or traffic of a specific type as described above may be diverted. Packet headers may be modified, and use of differentiated services (DS), quality of service (QoS), type of service (TOS), time to live (TTL), destination and the like is possible as long as it is supported by the hardware.
  • such traffic might be sent as three or more separate streams defined by, e.g., virtual port number.
  • This traffic could be filtered and processed to dynamically add or drop specific streams.
  • active network applications on nodes between the source and destination can negotiate and dynamically set different adaptation mechanisms.
  • the invention is of course not limited to this example, and in fact is intended to cover such filtering and processing using future hardware platforms which provide new capabilities and which afford new ways of using and controlling such functionality.

Abstract

Responsive to detecting a predetermined time of day, packet forwarding treatment is changed in accordance with at least one class of packet flow from a first packet forwarding treatment to a second packet forwarding treatment.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • The present application claims priority under 35 U.S.C. §119(e) from provisional application Ser. No. 60/226,787, and is related to U.S. patent application Ser. No. 09/227,389, both applications being incorporated herein by reference.
  • FIELD OF THE INVENTION
  • The present invention relates to the field of telecommunications, and more particularly to dynamic assignment of traffic classes to queues having different priority levels.
  • BACKGROUND OF THE INVENTION
  • The flow of packets through packet-switched networks is controlled by switches and routers that forward packets based on destination information included in the packets themselves. A typical switch or router includes a number of input/output (I/O) modules connected to a switching fabric, such as a crossbar or shared memory switch. In some switches and routers, the switching fabric is operated at a higher frequency than the transmission frequency of the I/O modules so that the switching fabric may deliver packets to an I/O module faster than the I/O module can output them to the network transmission medium. In these devices, packets are usually queued in the I/O module to await transmission.
  • One problem that may occur when packets are queued in the I/O module or elsewhere in a switch or router is that the queuing delay per packet varies depending on the amount of traffic being handled by the switch. Variable queuing delays tend to degrade data streams produced by real-time sampling (e.g., audio and video) because the original time delays between successive packets in the stream convey the sampling interval and are therefore needed to faithfully reproduce the source information. Another problem that results from queuing packets in a switch or router is that data from a relatively important source, such as a shared server, may be impeded by data from less important sources, resulting in bottlenecks.
  • SUMMARY OF THE INVENTION
  • A method and apparatus for dynamic assignment of classes of traffic to a priority queue are disclosed. Bandwidth consumption by one or more types of packet traffic received in a packet forwarding device is monitored. The queue assignment of at least one type of packet traffic is automatically changed from a queue having a first priority to a queue having a second priority if the bandwidth consumption exceeds a threshold.
  • Other features and advantages of the invention will be apparent from the accompanying drawings and from the detailed description that follows below.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings in which like references indicate similar elements and in which:
  • FIG. 1 illustrates a packet forwarding device that can be used to implement embodiments of the present invention;
  • FIG. 2A illustrates queue fill logic implemented by a queue manager in a quad interface device, and FIG. 2B illustrates queue drain logic according to one embodiment;
  • FIG. 3 illustrates the flow of a packet within the switch of FIG. 1;
  • FIG. 4 illustrates storage of an entry in an address resolution table managed by an address resolution unit;
  • FIG. 5 is a diagram of the software architecture of the switch of FIG. 1 according to one embodiment; and
  • FIG. 6 illustrates an example of dynamic assignment of traffic classes to a priority queue.
  • DETAILED DESCRIPTION OF THE PRESENTLY PREFERRED EXEMPLARY EMBODIMENTS
  • A packet forwarding device in which selected classes of network traffic may be dynamically assigned for priority queuing is disclosed. In one embodiment, the packet forwarding device includes a Java virtual machine for executing user-coded Java applets received from a network management server (NMS). A Java-to-native interface (JNI) is provided to allow the Java applets to obtain error information and traffic statistics from the device hardware and to allow the Java applets to write configuration information to the device hardware, including information that indicates which classes of traffic should be queued in priority queues. The Java applets implement user-specified traffic management policies based on real-time evaluation of the error information and traffic statistics to provide dynamic control of the priority queuing assignments. These and other aspects and advantages of the present invention are described below.
  • It should be noted that the use of the Java language is not a requirement for practicing the present invention. Although Java provides a number of advantages when used to implement the present invention, e.g., dynamic on-demand use, other programming languages such as C may be used in its place.
  • FIG. 1 illustrates a packet forwarding device 17 that can be used to implement embodiments of the present invention. For the purposes of the present description, the packet forwarding device 17 is assumed to be a switch that switches packets between ingress and egress ports based on media access control (MAC) addresses within the packets. In an alternate embodiment, the packet forwarding device 17 may be a router that routes packets according to destination internet protocol (IP) addresses or a routing switch that performs both MAC address switching and IP address routing. The techniques and structures disclosed herein are applicable generally to a device that forwards packets in a packet switching network. Also, the term packet is used broadly herein to refer to a fixed-length cell, a variable length frame or any other information structure that is self-contained as to its destination address.
  • The switch 17 includes a switching fabric 12 coupled to a plurality of I/O units (only I/O units 1 and 16 are depicted) and to a processing unit 10. The processing unit includes at least a processor 31 (which may be a microprocessor, digital signal processor or microcontroller) coupled to a memory 32 via a bus 33. In one embodiment, each I/O unit 1, 16 includes four physical ports P1-P4 coupled to a quad media access controller (QMAC) 14A, 14B via respective transceiver interface units 21A-24A, 21B-24B. Each I/O unit 1, 16 also includes a quad interface device (QID) 16A, 16B, an address resolution unit (ARU) 15A, 15B and a memory 18A, 18B, interconnected as shown in FIG. 1. Preferably, the switch 17 is modular with at least the I/O units 1, 16 being implemented on port cards (not shown) that can be installed in a backplane (not shown) of the switch 17. In one implementation, each port card includes a given number of I/O units and therefore supports a corresponding number of physical ports. The switch backplane includes slots for a given number of port cards, so that the switch 17 can be scaled according to customer needs to support a number of physical ports as controlled by the number of port cards. In alternate embodiments, each I/O unit 1, 16 may support more or fewer physical ports, each port card may support more or fewer I/O units 1, 16, and the switch 17 may support more or fewer port cards. For example, the I/O unit 1 shown in FIG. 1 may be used to support 10baseT transmission lines (i.e., 10 Mbps (megabit per second), twisted-pair) or 100baseF transmission lines (100 Mbps, fiber optic), while a different I/O unit (not shown) may be used to support a 1000baseF transmission line (1000 Mbps, fiber optic). Nothing disclosed herein should be construed as limiting embodiments of the present invention to use with a particular transmission medium, I/O unit, port card or chassis configuration.
  • Still referring to FIG. 1, when a packet 25 is received on physical port P1, it is supplied to the corresponding physical transceiver 21A which performs any necessary signal conditioning (e.g. optical to electrical signal conversion) and then forwards the packet 25 to the QMAC 14A. The QMAC 14A buffers packets received from the physical transceivers 21A-24A as necessary, forwarding one packet at a time to the QID 16A. Receive logic within the QID 16A notifies the ARU 15A that the packet 25 has been received. The ARU computes a table index based on the destination MAC address within the packet 25 and uses the index to identify an entry in a forwarding table that corresponds to the destination MAC address. In packet forwarding devices that operate on different protocol layers of the packet (e.g., routers), a forwarding table may be indexed based on other destination information contained within the packet.
  • According to one embodiment, the forwarding table entry identified based on the destination MAC address indicates the switch egress port to which the packet 25 is destined and also whether the packet is part of a MAC-address based virtual local area network (VLAN), or a port-based VLAN. (As an aside, a VLAN is a logical grouping of MAC addresses (a MAC-address-based VLAN) or a logical grouping of physical ports (a port-based VLAN).) The forwarding table entry further indicates whether the packet 25 is to be queued in a priority queue in the I/O unit that contains the destination port. As discussed below, priority queuing may be specified based on a number of conditions, including, but not limited to, whether the packet is part of a particular IP flow, or whether the packet is destined for a particular port, VLAN or MAC address.
  • According to one embodiment, the QID 16A, 16B segments the packet 25 into a plurality of fixed-length cells 26 for transmission through the switching fabric 12. Each cell includes a header 28 that identifies it as a constituent of the packet 25 and that identifies the destination port for the cell (and therefore for the packet 25). The header 28 of each cell also includes a bit 29 indicating whether the cell is the beginning cell of a packet and also a bit 30 indicating whether the packet 25 to which the cell belongs is to be queued in a priority queue or a best effort queue on the destined I/O unit.
  • The switching fabric 12 forwards each cell to the I/O unit indicated by the cell header 28. In the exemplary data flow shown in FIG. 1, the constituent cells 26 of the packet 25 are assumed to be forwarded to I/O unit 16 where they are delivered to transmit logic within the QID 16B. The transmit logic in the QID 16B includes a queue manager (not shown) that maintains a priority queue and a best effort queue in the memory 18B. In one embodiment, the memory 18B is resolved into a pool of buffers, each large enough to hold a complete packet. When the beginning cell of the packet 25 is delivered to the QID 16B, the queue manager obtains a buffer from the pool and appends the buffer to either the priority queue or the best effort queue according to whether the priority bit 30 is set in the beginning cell. In one embodiment, the priority queue and the best effort queue are each implemented by a linked list, with the queue manager maintaining respective pointers to the head and tail of each linked list. Entries are added to the tail of the queue list by advancing the tail pointer to point to a newly allocated buffer that has been appended to the linked list, and entries are popped off the head of the queue by advancing the head pointer to point to the next buffer in the linked list and returning the spent buffer to the pool.
  • After a buffer is appended to either the priority queue or the best effort queue, the beginning cell and subsequent cells are used to reassemble the packet 25 within the buffer. Eventually the packet 25 is popped off the head of the queue and delivered to an egress port via the QMAC 14B and the physical transceiver (e.g., 23B) in an egress operation. This is shown by way of example in FIG. 1 by the egress of packet 25 from physical port P3 of I/O unit 16.
  • FIG. 2A illustrates queue fill logic implemented by the queue manager in the QID. Starting at block 51, a cell is received in the QID from the switching fabric. The beginning cell bit in the cell header is inspected at decision block 53 to determine if the cell is the beginning cell of a packet. If so, the priority bit in the cell header is inspected at decision block 55 to determine whether to allocate an entry in the priority queue or the best effort queue for packet reassembly. If the priority bit is set, an entry in the priority queue is allocated at block 57 and the priority queue entry is associated with the portion of the cell header that identifies the cell as a constituent of a particular packet at block 59. If the priority bit in the cell header is not set, then an entry in the best effort queue is allocated at block 61 and the best effort queue entry is associated with the portion of the cell header that identifies the cell as a constituent of a particular packet at block 63.
  • Returning to decision block 53, if the beginning cell bit in the cell header is not set, then the queue entry associated with the cell header is identified at block 65. The association between the cell header and the queue entry identified at block 65 was established earlier in either block 59 or block 63. Also, identification of the queue entry in block 65 may include inspection of the priority bit in the cell to narrow the identification effort to either the priority queue or the best effort queue. In block 67, the cell is combined with the preceding cell in the queue entry in a packet reassembly operation. If the reassembly operation in block 67 results in a completed packet (decision block 69), then the packet is marked as ready for transmission in block 71. In one embodiment, the packet is marked by setting a flag associated with the queue entry in which the packet has been reassembled. Other techniques for indicating that a packet is ready for transmission may be used in alternate embodiments.
  • FIG. 2B illustrates queue drain logic according to one embodiment. At decision block 75, the entry at the head of the priority queue is inspected to determine if it contains a packet ready for transmission. If so, the packet is transmitted at block 77 and the corresponding priority queue entry is popped off the head of the priority queue and deallocated at block 79. If a ready packet is not present at the head of the priority queue, then the entry at the head of the best effort queue is inspected at decision block 81. If a packet is ready at the head of the best effort queue, it is transmitted at block 83 and the corresponding best effort queue entry is popped off the head of the best effort queue and deallocated in block 85. Note that, in the embodiment illustrated in FIG. 2B, packets are drained from the best effort queue only after the priority queue has been emptied. In alternate embodiments, a timer, counter or similar logic element may be used to ensure that the best effort queue 105 is serviced at least every so often or at least after every N packets are transmitted from the priority queue, thereby ensuring at least a threshold level of service to the best effort queue.
  • FIG. 3 illustrates the flow of a packet within the switch 17 of FIG. 1. A packet is received in the switch at block 91 and used to identify an entry in a forwarding table called the address resolution (AR) table at block 93. At decision block 95, a priority bit in the AR table entry is inspected to determine whether the packet belongs to a class of traffic that has been selected for priority queuing. If the priority bit is set, the packet is segmented into cells having respective priority bits set in their headers in block 97. If the priority bit is not set, the packet is segmented into cells having respective priority bits cleared in their cell headers in block 99. The constituent cells of each packet are forwarded to an egress I/O unit by the switching fabric. In the egress I/O unit, the priority bit of each cell is inspected (decision block 101) and used to direct the cell to an entry in either the priority queue 103 or the best effort queue 105 where it is combined with other cells to reassemble the packet.
  • FIG. 4 illustrates storage of an entry in the address resolution (AR) table managed by the ARU. In one embodiment, the AR table is maintained in a high speed static random access memory (SRAM) coupled to the ARU. Alternatively, the AR table may be included in a memory within an application-specific integrated circuit (ASIC) that includes the ARU. Generally, the ARU stores an entry in the AR table in response to packet forwarding information from the processing unit. The processing unit supplies packet forwarding information to be stored in each AR table in the switch whenever a new association between a destination address and a switch egress port is learned. In one embodiment, an address-to-port association is learned by transmitting a packet that has an unknown egress port assignment on each of the egress ports of the switch and associating the destination address of the packet with the egress port at which an acknowledgment is received. Upon learning the association between the egress port and the destination address, the processing unit issues forwarding information that includes, for example, an identifier of the newly associated egress port, the destination MAC address, an identifier of the VLAN associated with the MAC address (if any), an identifier of the VLAN associated with the egress port (if any), the destination IP address, the destination IP port (e.g., transmission control protocol (TCP), user datagram protocol (UDP) or other IP port) and the IP protocol (e.g., HTTP, FTP or other IP protocol). The source IP address, source IP port and source IP protocol may also be supplied to fully identify an end-to-end IP flow.
  • Referring to FIG. 4, forwarding information 110 is received from the processing unit at block 115. At block 117, the ARU stores the forwarding information in an AR table entry. At decision block 119, the physical egress port identifier stored in the AR table entry is compared against priority configuration information to determine if packets destined for the egress port have been selected for priority egress queuing. If so, the priority bit is set in the AR table entry in block 127. Thereafter, incoming packets that index the newly stored table entry will be queued in the priority queue to await transmission. If packets destined for the egress port have not been selected for priority queuing, then at decision block 121 the MAC address stored in the AR table entry is compared against the priority configuration information to determine if packets destined for the MAC address have been selected for priority egress queuing. If so, the priority bit is set in the AR table entry in block 127. If packets destined for the MAC address have not been selected for priority egress queuing, then at decision block 123 the VLAN identifier stored in the AR table entry (if present) is compared against the priority configuration information to determine if packets destined for the VLAN have been selected for priority egress queuing. If so, the priority bit is set in the AR table entry in block 127. If packets destined for the VLAN have not been selected for priority egress queuing, then at block 125 the IP flow identified by the IP address, IP port and IP protocol in the AR table is compared against the priority configuration information to determine if packets that form part of the IP flow have been selected for priority egress queuing. If so, the priority bit is set in the AR table entry; otherwise, the priority bit is not set. Yet other criteria may be considered in assigning priority queuing in alternate embodiments. For example, priority queuing may be specified for a particular IP protocol (e.g., FTP, HTTP). Also, the ingress port, source MAC address or source VLAN of a packet may be used to determine whether to queue the packet in the priority egress queue. More specifically, in one embodiment, priority or best effort queuing of unicast traffic is determined based on destination parameters (e.g., egress port, destination MAC address or destination IP address), while priority or best effort queuing of multicast traffic is determined based on source parameters (e.g., ingress port, source MAC address or source IP address).
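  • The cascade of decision blocks 119, 121, 123 and 125 amounts to a first-match test against the priority configuration information. The sketch below illustrates it using the hypothetical ArEntry sketched above; the PriorityConfig interface is likewise an assumption, not the actual configuration format.

      // First matching criterion sets the priority bit (FIG. 4 cascade).
      interface PriorityConfig {
          boolean egressPortSelected(int port);
          boolean macSelected(long destMac);
          boolean vlanSelected(int vlanId);
          boolean ipFlowSelected(int ipAddr, int ipPort, String ipProtocol);
      }

      final class PriorityAssigner {
          static void assignPriorityBit(ArEntry e, PriorityConfig cfg) {
              e.priorityBit =
                     cfg.egressPortSelected(e.egressPort)                             // block 119
                  || cfg.macSelected(e.destMac)                                       // block 121
                  || (e.destMacVlan != null && cfg.vlanSelected(e.destMacVlan))       // block 123
                  || cfg.ipFlowSelected(e.destIpAddress, e.destIpPort, e.ipProtocol); // block 125
          }
      }
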
  • FIG. 5 is a diagram of the software architecture of the switch 17 of FIG. 1 according to one embodiment. An operating system 143 and device drivers 145 are provided to interface with the device hardware 141. For example, device drivers are provided to write configuration information and AR storage entries to the ARUs in respective I/O units. Also, the operating system 143 performs memory management functions and other system services in response to requests from higher level software. Generally, the device drivers 145 extend the services provided by the operating system and are invoked in response to requests for operating system service that involve device-specific operations.
  • The device management code 147 is executed by the processing unit (e.g., element 10 of FIG. 1) to perform system level functions, including management of forwarding entries in the distributed AR tables and management of forwarding entries in a master forwarding table maintained in the memory of the processing unit. The device management code 147 also includes routines for invoking device driver services, for example, to query the ARU for traffic statistics and error information, or to write updated configuration information to the ARUs, including priority queuing information, as discussed below in reference to FIG. 6. In one implementation, the device management code 147 is native code, meaning that it is a compiled set of instructions that can be executed directly by a processor in the processing unit to carry out the device management functions.
  • In one embodiment, the device management code 147 supports the operation of a Java client 160 that includes a number of Java applets, including a monitor applet 157, a policy enforcement applet 159 and a configuration applet 161. A Java applet is an instantiation of a Java class that includes one or more methods for self-initialization (e.g., a constructor method called "Applet( )"), and one or more methods for communicating with a controlling application. Typically, the controlling application for a Java applet is a web browser executed on a general purpose computer. In the software architecture shown in FIG. 5, however, a Java application called Data Communication Interface (DCI) 153 is the controlling application for the monitor, policy enforcement and configuration applets 157, 159, 161. The DCI application 153 is executed by a Java virtual machine 149 to manage the download of Java applets from a network management server (NMS) 170. A library of Java objects 155 is provided for use by the Java applets 157, 159, 161 and the DCI application 153.
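  • In outline, such an applet might look like the following skeleton. The SwitchApplet and Dci names are illustrative only; the actual applet classes and the DCI interface are not specified here.

      // Hypothetical skeleton of an applet hosted by the DCI application.
      abstract class SwitchApplet {
          protected Dci controller;                 // the controlling application

          SwitchApplet() { /* self-initialization, per the constructor method */ }

          void attach(Dci controller) {             // called by the DCI after download
              this.controller = controller;
          }

          abstract void run() throws InterruptedException;  // applet body

          interface Dci {
              void send(String message);            // communicate with the controlling application
          }
      }
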
  • As above, it should be noted that the use of Java is not essential to the present invention and is used for purposes of illustration and explanation. Other programming languages may be used in its place.
  • In one implementation, the NMS 170 supplies Java applets to the switch 17 in a hypertext transfer protocol (HTTP) data stream. Other protocols may also be used. The constituent packets of the HTTP data stream are addressed to the IP address of the switch and are directed to the processing unit after being received by the I/O unit coupled to the NMS 170. After authenticating the HTTP data stream, the DCI application 153 stores the Java applets provided in the data stream in the memory of the processing unit and executes a method to invoke each applet. An applet is invoked by supplying the Java virtual machine 149 with the address of the constructor method of the applet and causing the Java virtual machine 149 to begin execution of the applet code. Program code defining the Java virtual machine 149 is executed to interpret the platform independent byte codes of the Java applets 157, 159, 161 into native instructions that can be executed by a processor within the processing unit.
  • According to one embodiment, the monitor applet 157, policy enforcement applet 159 and configuration applet 161 communicate with the device management code 147 through a Java-native interface (JNI) 151. The JNI 151 is essentially an application programming interface (API) and provides a set of methods that can be invoked by the Java applets 157, 159, 161 to send messages and receive responses from the device management code 147. In one implementation, the JNI 151 includes methods by which the monitor applet 157 can request the device management code 147 to gather error information and traffic statistics from the device hardware 141. The JNI 151 also includes methods by which the configuration applet 161 can request the device management code 147 to write configuration information to the device hardware 141. More specifically, the JNI 151 includes a method by which the configuration applet 161 can indicate that priority queuing should be performed for specified classes of traffic, including, but not limited to, the classes of traffic discussed above in reference to FIG. 4. In this way, a user-coded configuration applet 161 may be executed by the Java virtual machine 149 within the switch 17 to invoke a method in the JNI 151 to request the device management code 147 to write information that assigns selected classes of traffic to be queued in the priority egress queue. In effect, the configuration applet 161 assigns virtual queues defined by the selected classes of traffic to feed into the priority egress queue.
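  • A sketch of what such a JNI surface might look like follows. The method names, signatures and native library name are assumptions made for illustration; they are not the actual interface.

      // Hypothetical JNI surface: native methods that forward applet requests
      // to the device management code 147.
      final class Jni {
          static final int PRIORITY_QUEUE = 1;
          static final int BEST_EFFORT_QUEUE = 0;

          // Monitor side: gather traffic statistics and error information.
          native double getLineUtilization(int port, long destMac);  // percent of port capacity
          native long getErrorCount(int port);

          // Configuration side: assign a class of traffic (here, a destination
          // MAC address) to the priority or best effort egress queue.
          native void setQueueAssignment(long destMac, int queue);

          static { System.loadLibrary("devmgmt"); }  // assumed native library name
      }
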
  • As noted above, although a Java virtual machine 149 and Java applets 157, 159, 161 have been described, other virtual machines, interpreters and scripting languages may be used in alternate embodiments. Also, as discussed below, more or fewer Java applets may be used to perform the monitoring, policy enforcement and configuration functions in alternate embodiments.
  • FIG. 6 illustrates an example of dynamic assignment of traffic classes to a priority queue. An exemplary network includes switches A and B coupled together at physical ports 32 and 1, respectively. Suppose that a network administrator or other user determines that an important server 175 on port 2 of switch A requires a relatively high quality of service (QoS), and that, at least in switch B, the required QoS can be provided by ensuring that at least 20% of the egress capacity of switch B, port 1 is reserved for traffic destined to the MAC address of the server 175. One way to ensure that 20% egress capacity is reserved to traffic destined for the server 175 is to assign priority queuing for packets destined to the MAC address of the server 175, but not for other traffic. While such an assignment would ensure priority egress to the server traffic, it also may result in unnecessarily high bandwidth allocation to the server 175, potentially starving other important traffic or causing other important traffic to become bottlenecked behind less important traffic in the best effort queue. For example, suppose that there are at least two other MAC address destinations, MAC address A and MAC address B, to which the user desires to assign priority queuing, so long as the egress capacity required by the server-destined traffic is available. In that case, it would be desirable to dynamically configure the MAC address A and MAC address B traffic to be queued in either the priority queue or the best effort queue according to existing traffic conditions. In at least one embodiment, this is accomplished using monitor, policy enforcement and configuration applets that have been downloaded to switch B and which are executed in a Java client in switch B as described above in reference to FIG. 5.
  • FIG. 6 includes exemplary pseudocode listings of monitor, policy enforcement and configuration applets 178, 179, 180 that can be used to ensure that at least 20% of the egress capacity of switch B, port 1 is reserved for traffic destined to the server 175, but without unnecessarily denying priority queuing assignment to traffic destined for MAC addresses A and B. After initialization, the monitor applet 178 repeatedly measures the port 1 line utilization reported by the device hardware. In one embodiment, the ARU in the I/O unit that manages port 1 keeps a count of the number of packets destined for particular egress ports, packets destined for particular MAC addresses, packets destined for particular VLANs, packets that form part of a particular IP flow, packets having a particular IP protocol, and so forth. The ARU also tracks the number of errors associated with these different classes of traffic, the number of packets from each class of traffic that are dropped, and other statistics. By determining the change in these different statistics per unit time, a utilization factor may be generated that represents the percent utilization of the capacity of an egress port, an I/O unit or the overall switch. Error rates and packet drop rates may also be generated.
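  • The utilization factor can thus be computed as the change in an ARU counter per unit time, scaled against the port capacity. A minimal sketch, assuming a hypothetical counter source and a capacity expressed in packets per second:

      // Illustrative utilization meter: percent utilization from counter deltas.
      final class UtilizationMeter {
          private long lastCount;
          private long lastTimeNanos = System.nanoTime();

          // currentCount is the ARU packet count for one traffic class.
          double samplePercent(long currentCount, double portCapacityPacketsPerSec) {
              long now = System.nanoTime();
              double seconds = (now - lastTimeNanos) / 1e9;
              double packetsPerSec = (currentCount - lastCount) / seconds;
              lastCount = currentCount;
              lastTimeNanos = now;
              return 100.0 * packetsPerSec / portCapacityPacketsPerSec;
          }
      }
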
  • In one embodiment, the monitor applet 178 measures line utilization by invoking methods in the JNI to read the port 1 line utilization resulting from traffic destined for MAC address A and for MAC address B on a periodic basis, e.g., every 10 milliseconds.
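  • Rendered against the hypothetical Jni surface sketched earlier, the monitor applet reduces to a simple polling loop; the class and field names below are assumptions, not the FIG. 6 listing itself.

      // Sketch of monitor applet 178: poll port 1 utilization every 10 ms.
      final class MonitorApplet {
          private final Jni jni = new Jni();
          volatile double aPercent;   // utilization from traffic to MAC address A
          volatile double bPercent;   // utilization from traffic to MAC address B

          void run(int port, long macA, long macB) throws InterruptedException {
              for (;;) {                                    // the "forever loop"
                  aPercent = jni.getLineUtilization(port, macA);
                  bPercent = jni.getLineUtilization(port, macB);
                  Thread.sleep(10);                         // every 10 milliseconds
              }
          }
      }
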
  • The policy enforcement applet 179 includes variables to hold the line utilization percentage of traffic destined for MAC address A (A %), the line utilization percentage of traffic destined for MAC address B (B %), the queue assignment (i.e., priority or best effort) of traffic destined for the server MAC address (QA_S), the queue assignment of traffic destined for MAC address A (QA_A) and the queue assignment of traffic destined for MAC address B (QA_B). Also, a constant, DELTA, is defined to be 5% and the queue assignments for the MAC address A, MAC address B and server MAC address traffic are initially set to the priority queue.
  • The policy enforcement applet 179 also includes a forever loop in which the line utilization percentages A % and B % are obtained from the monitor applet 178 and used to determine whether to change the queue assignments QA_A and QA_B. If the MAC address A traffic and the MAC address B traffic are both assigned to the priority queue (the initial configuration) and the sum of the line utilization percentages A % and B % exceeds 80%, then less than 20% line utilization remains for the server-destined traffic. In that event, the MAC address A traffic is reassigned from the priority queue to the best effort queue (code statement 181). If the MAC address A traffic is assigned to the best effort queue and the MAC address B traffic is assigned to the priority queue, then the MAC address A traffic is reassigned to the priority queue if the sum of the line utilization percentages A % and B % drops below 80% less DELTA (code statement 183). The DELTA parameter provides a deadband to prevent rapid changing of priority queue assignment.
  • If the MAC address A traffic is assigned to the best effort queue and the MAC address B traffic is assigned to the priority queue and the line utilization percentage B % exceeds 80%, then less than 20% line utilization remains for the server-destined traffic. Consequently, the MAC address B traffic is reassigned from the priority queue to the best effort queue (code statement 185). If the MAC address B traffic is assigned to the best effort queue and the line utilization percentage B % drops below 80% less DELTA, then the MAC address B traffic is reassigned to the priority queue (code statement 187). Although not specifically provided for in the exemplary pseudocode listing of FIG. 6, the policy enforcement applet 179 may treat the traffic destined for the MAC A and MAC B addresses more symmetrically by including additional statements to conditionally assign traffic destined for MAC address A to the priority queue, but not traffic destined for MAC address B. In the exemplary pseudocode listing of FIG. 6, the policy enforcement applet 179 delays for 5 milliseconds at the end of each pass through the forever loop before repeating.
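  • One possible Java rendering of policy enforcement applet 179 follows; it is a sketch under the assumptions of the MonitorApplet above, not a transcription of the FIG. 6 listing.

      // Sketch of policy enforcement applet 179 (code statements 181-187).
      final class PolicyApplet {
          static final int PRIORITY = 1, BEST_EFFORT = 0;
          static final double DELTA = 5.0;                  // deadband, in percent
          volatile int qaS = PRIORITY;                      // QA_S, remains priority
          volatile int qaA = PRIORITY;                      // QA_A, initially priority
          volatile int qaB = PRIORITY;                      // QA_B, initially priority

          void run(MonitorApplet monitor) throws InterruptedException {
              for (;;) {
                  double a = monitor.aPercent, b = monitor.bPercent;
                  if (qaA == PRIORITY && qaB == PRIORITY && a + b > 80.0) {
                      qaA = BEST_EFFORT;                    // statement 181
                  } else if (qaA == BEST_EFFORT && qaB == PRIORITY && a + b < 80.0 - DELTA) {
                      qaA = PRIORITY;                       // statement 183
                  } else if (qaA == BEST_EFFORT && qaB == PRIORITY && b > 80.0) {
                      qaB = BEST_EFFORT;                    // statement 185
                  } else if (qaB == BEST_EFFORT && b < 80.0 - DELTA) {
                      qaB = PRIORITY;                       // statement 187
                  }
                  Thread.sleep(5);                          // 5 ms delay per pass
              }
          }
      }
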
  • The configuration applet 180 includes variables, QA_A and QA_B, to hold the queue assignments of the traffic destined for the MAC addresses A and B, respectively. Variables LAST_QA_A and LAST_QA_B are also provided to record the history (i.e., most recent values) of the QA_A and QA_B values. The LAST_QA_A and LAST_QA_B variables are initialized to indicate that traffic destined for the MAC addresses A and B is assigned to the priority queue.
  • Like the monitor and policy enforcement applets 178, 179, the configuration applet 180 includes a forever loop in which a code sequence is executed followed by a delay. In the exemplary listing of FIG. 6, the first operation performed by the configuration applet 180 within the forever loop is to obtain the queue assignments QA_A and QA_B from the policy enforcement applet 179. If the queue assignment indicated by QA_A is different from the queue assignment indicated by LAST_QA_A, then a JNI method is invoked to request the device code to reconfigure the queue assignment of the traffic destined for MAC address A according to the new QA_A value. The new QA_A value is then copied into the LAST_QA_A variable so that subsequent queue assignment changes are detected. If the queue assignment indicated by QA_B is different from the queue assignment indicated by LAST_QA_B, then a JNI method is invoked to request the device code to reconfigure the queue assignment of the traffic destined for MAC address B according to the new QA_B value. The new QA_B value is then copied into the LAST_QA_B variable so that subsequent queue assignment changes are detected. By this operation, and the operation of the monitor and policy enforcement applets 178, 179, traffic destined for the MAC addresses A and B is dynamically assigned to the priority queue according to real-time evaluations of the traffic conditions in the switch.
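  • Completing the picture, a sketch of configuration applet 180 under the same assumptions: only changed assignments are pushed through the hypothetical Jni surface, with the LAST_QA_A and LAST_QA_B history variables gating the updates.

      // Sketch of configuration applet 180: push only changed assignments.
      final class ConfigApplet {
          void run(PolicyApplet policy, Jni jni, long macA, long macB)
                  throws InterruptedException {
              int lastQaA = PolicyApplet.PRIORITY;          // LAST_QA_A
              int lastQaB = PolicyApplet.PRIORITY;          // LAST_QA_B
              for (;;) {
                  int qaA = policy.qaA, qaB = policy.qaB;   // obtain assignments
                  if (qaA != lastQaA) {
                      jni.setQueueAssignment(macA, qaA);    // reconfigure MAC A traffic
                      lastQaA = qaA;                        // record history
                  }
                  if (qaB != lastQaB) {
                      jni.setQueueAssignment(macB, qaB);    // reconfigure MAC B traffic
                      lastQaB = qaB;
                  }
                  Thread.sleep(5);                          // assumed delay per pass
              }
          }
      }
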
  • Although a three-applet implementation is illustrated in FIG. 6, more or fewer applets may be used in an alternate embodiment. For example, the functions of the monitor, policy enforcement and configuration applets 178, 179, 180 may be implemented in a single applet. Alternatively, multiple applets may be provided to perform policy enforcement or other functions using different queue assignment criteria. For example, one policy enforcement applet may make priority queue assignments based on destination MAC addresses, while another policy enforcement applet makes priority queue assignments based on error rates or line utilization of higher level protocols. Multiple monitor applets or configuration applets may similarly be provided.
  • Although queue assignment policy based on destination MAC address is illustrated in FIG. 6, myriad different queue assignment criteria may be used in other embodiments. For example, instead of monitoring and updating queue assignment based on traffic to destination MAC addresses, queue assignments may be updated based on other traffic patterns, including traffic to specified destination ports, traffic from specified source ports, traffic from specified source MAC addresses, traffic that forms part of a specified IP flow, traffic that is transmitted using a specified protocol (e.g., HTTP, FTP or other protocols) and so forth. Also, queue assignments may be updated based on environmental conditions such as time of day, changes in network configuration (e.g., due to failure or congestion at other network nodes), error rates, packet drop rates and so forth. Monitoring, policy enforcement and configuration applets that combine many or all of the above-described criteria may be implemented to provide sophisticated traffic handling capability in a packet forwarding device.
  • Although dynamic assignment of traffic classes to a priority egress queue has been emphasized, the methods and apparatuses described herein may alternatively be used to assign traffic classes to a hierarchical set of queues anywhere in a packet forwarding device including, but not limited to, ingress queues and queues associated with delivering and receiving packets from the switching fabric. Further, although the queue assignment of traffic classes has been described in terms of a pair of queues (priority and best effort), additional queues in a prioritization hierarchy may be used without departing from the spirit and scope of the present invention.
  • Further, although the modification of various queues in this way has been described herein, the invention is not so limited and other embodiments also exist. For example, traffic can be filtered based on its type: source (e.g., source MAC address or source VLAN), ingress port, destination (e.g., destination MAC address or destination IP address), egress port, protocol (e.g., FTP, HTTP) or other hardware-supported filters. In one embodiment, filtering of unicast traffic is determined based on destination parameters such as egress port, destination MAC address or IP address, while filtering of multicast traffic is determined based on source parameters such as ingress port, source MAC address or source IP address.
  • Filtering may be based on environmental conditions, such as time of day, changes in network configuration (e.g., due to failure or congestion at other network nodes), error rates, packet drop rates or line utilization of higher-level protocols. It may be based on traffic patterns such as traffic from specified source ports, traffic to specified destination ports, traffic from specified source MAC addresses or traffic that forms part of a specified IP flow. Various other hardware counters, monitors and dynamic values can also be read from the hardware.
  • Still further, dynamic filtering decisions may determine how to process packets in ways other than choosing whether they should go to a priority or best effort queue; for example, packets may be dropped or copied, or traffic of a specific type as described above may be diverted. Packet headers may be modified, and use of differentiated services (DS), quality of service (QoS), type-of-service (TOS) and time-to-live (TTL) fields, destination and the like is possible as long as it is supported by the hardware. The configurability of filtering and subsequent processing in the invention is, in fact, limited only by the hardware, and numerous possibilities for filtering and subsequent processing of traffic other than those described herein will be readily apparent to those skilled in the art after reading and understanding this application.
  • As an example, consider the routing of multimedia traffic. Such traffic might be sent as three or more separate streams defined by, e.g., virtual port number. This traffic could be filtered and processed to dynamically add or drop specific streams. Based on such dynamic adaptation, active network applications on nodes between the source and destination can negotiate and dynamically set different adaptation mechanisms. As described above, the invention is of course not limited to this example, and in fact is intended to cover such filtering and processing using future hardware platforms which provide new capabilities and which afford new ways of using and controlling such functionality.
  • In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made to the specific exemplary embodiments without departing from the broader spirit and scope of the invention as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.

Claims (42)

1. In a packet forwarding device in a network in which packet flows are assigned respective classes and each respective class is accorded a respective packet forwarding treatment, a method comprising:
detecting a predetermined time of day; and
responsive to detecting the predetermined time of day, changing the respective packet forwarding treatment accorded to at least one class of packet flow from a first packet forwarding treatment to a second packet forwarding treatment.
2. The method of claim 1, wherein changing the respective packet flow treatment accorded to the at least one class of packet flow comprises changing assignment of the at least one class of packet flow from a queue having a first priority to a queue having a second priority.
3. The method of claim 1, wherein changing the respective packet flow treatment accorded to the at least one class of packet flow comprises dropping packets of the at least one class of packet flow.
4. The method of claim 1, wherein changing the respective packet flow treatment accorded to the at least one class of packet flow comprises copying packets of the at least one class of packet flow.
5. The method of claim 1, wherein changing the respective packet flow treatment accorded to the at least one class of packet flow comprises diverting packets of the at least one class of packet flow.
6. The method of claim 1, wherein a packet flow is assigned a respective class based on at least one IP flow parameter of the packet flow.
7. The method of claim 1, wherein a packet flow is assigned a respective class based on a respective source of the packet flow.
8. The method of claim 7, wherein the packet flow is assigned the respective class based on a MAC address of the source of the packet flow.
9. The method of claim 7, wherein the packet flow is assigned the respective class based on a VLAN associated with the source of the packet flow.
10. The method of claim 1, wherein a packet flow is assigned a respective class based on a destination of the packet flow.
11. The method of claim 10, wherein the packet flow is assigned a respective class based on a MAC address of the destination of the packet flow.
12. The method of claim 10, wherein the packet flow is assigned the respective class based on a VLAN associated with the destination of the packet flow.
13. The method of claim 1, wherein the packet flow is assigned a respective class based on an ingress port associated with the packet flow.
14. The method of claim 1, wherein the packet flow is assigned a respective class based on an egress port associated with the packet flow.
15. The method of claim 1, wherein the packet flow is assigned a respective class based on a virtual port associated with the packet flow.
16. The method of claim 1, wherein a packet flow is assigned a respective class based on a protocol of the packet flow.
17. The method of claim 16, wherein a packet flow is assigned a respective class based on whether it is associated with a specified high level protocol.
18. The method of claim 17, wherein a packet flow is assigned a respective class if it is associated with an FTP flow.
19. The method of claim 17, wherein the packet flow is assigned a respective class if it is associated with an HTTP flow.
20. The method of claim 1, wherein a packet flow is assigned a respective class based on a traffic type of the packet flow.
21. The method of claim 1, wherein a packet flow is assigned a respective class based on whether it is associated with a specified IP flow.
22. A non-transitory, processor-readable medium carrying instructions for execution by at least one processor, the instructions comprising instructions executable in a packet forwarding device in a network in which packet flows are assigned respective classes and each respective class is accorded a respective packet forwarding treatment, the instructions comprising:
instructions executable to detect a predetermined time of day; and
instructions executable responsive to detecting the predetermined time of day, to change the respective packet forwarding treatment accorded to at least one class of packet flow from a first packet forwarding treatment to a second packet forwarding treatment.
23. The medium of claim 22, wherein the instructions executable to change the respective packet flow treatment accorded to the at least one class of packet flow comprise instructions executable to change assignment of the at least one class of packet flow from a queue having a first priority to a queue having a second priority.
24. The medium of claim 22, wherein the instructions executable to change the respective packet flow treatment accorded to the at least one class of packet flow comprise instructions executable to drop packets of the at least one class of packet flow.
25. The medium of claim 22, wherein the instructions executable to change the respective packet flow treatment accorded to the at least one class of packet flow comprise instructions executable to copy packets of the at least one class of packet flow.
26. The medium of claim 22, wherein the instructions executable to change the respective packet flow treatment accorded to the at least one class of packet flow comprise instructions executable to divert packets of the at least one class of packet flow.
27. The medium of claim 22, wherein the instructions comprise instructions executable to assign a packet flow a respective class based on at least one IP flow parameter of the packet flow.
28. The medium of claim 22, wherein the instructions comprise instructions executable to assign a respective class to a packet flow based on a respective source of the packet flow.
29. The medium of claim 28, wherein the instructions comprise instructions executable to assign a respective class to a packet flow based on a MAC address of the source of the packet flow.
30. The medium of claim 28, wherein the instructions comprise instructions executable to assign a respective class to a packet flow based on a VLAN associated with the source of the packet flow.
31. The medium of claim 22, wherein the instructions comprise instructions executable to assign a respective class to a packet flow based on a destination of the packet flow.
32. The medium of claim 31, wherein the instructions comprise instructions executable to assign a respective class to a packet flow based on a MAC address of the destination of the packet flow.
33. The medium of claim 31, wherein the instructions comprise instructions executable to assign a respective class to a packet flow based on a VLAN associated with the destination of the packet flow.
34. The medium of claim 22, wherein the instructions comprise instructions executable to assign a respective class to a packet flow based on an ingress port associated with the packet flow.
35. The medium of claim 22, wherein the instructions comprise instructions executable to assign a respective class to a packet flow based on an egress port associated with the packet flow.
36. The medium of claim 22, wherein the instructions comprise instructions executable to assign a respective class to a packet flow based on a virtual port associated with the packet flow.
37. The medium of claim 22, wherein the instructions comprise instructions executable to assign a respective class to a packet flow based on a protocol of the packet flow.
38. The medium of claim 37, wherein a packet flow is assigned a respective class based on whether it is associated with a specified high level protocol.
39. The medium of claim 38, wherein the instructions executable to assign a respective class to the packet flow based on the protocol of the packet flow comprise instructions executable to assign the packet flow the respective class if it is associated with an FTP flow.
40. The medium of claim 38, wherein the instructions executable to assign a respective class to the packet flow based on the protocol of the packet flow comprise instructions executable to assign the packet flow the respective class if it is associated with an HTTP flow.
41. The medium of claim 22, comprising instructions executable to assign a packet flow a respective class based on a traffic type of the packet flow.
42. The medium of claim 22, wherein a packet flow is assigned a respective class based on whether it is associated with a specified IP flow.
US14/134,230 2000-08-21 2013-12-19 Dynamic Assignment of Traffic Classes to a Priority Queue in a Packet Forwarding Device Abandoned US20140105025A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/134,230 US20140105025A1 (en) 2000-08-21 2013-12-19 Dynamic Assignment of Traffic Classes to a Priority Queue in a Packet Forwarding Device

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US22678700P 2000-08-21 2000-08-21
US09/747,296 US8619793B2 (en) 2000-08-21 2000-12-22 Dynamic assignment of traffic classes to a priority queue in a packet forwarding device
US14/134,230 US20140105025A1 (en) 2000-08-21 2013-12-19 Dynamic Assignment of Traffic Classes to a Priority Queue in a Packet Forwarding Device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/747,296 Continuation US8619793B2 (en) 2000-08-21 2000-12-22 Dynamic assignment of traffic classes to a priority queue in a packet forwarding device

Publications (1)

Publication Number Publication Date
US20140105025A1 true US20140105025A1 (en) 2014-04-17

Family

ID=26920883

Family Applications (3)

Application Number Title Priority Date Filing Date
US09/747,296 Expired - Fee Related US8619793B2 (en) 2000-08-21 2000-12-22 Dynamic assignment of traffic classes to a priority queue in a packet forwarding device
US14/134,230 Abandoned US20140105025A1 (en) 2000-08-21 2013-12-19 Dynamic Assignment of Traffic Classes to a Priority Queue in a Packet Forwarding Device
US14/133,936 Abandoned US20140105012A1 (en) 2000-08-21 2013-12-19 Dynamic Assignment of Traffic Classes to a Priority Queue in a Packet Forwarding Device

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/747,296 Expired - Fee Related US8619793B2 (en) 2000-08-21 2000-12-22 Dynamic assignment of traffic classes to a priority queue in a packet forwarding device

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/133,936 Abandoned US20140105012A1 (en) 2000-08-21 2013-12-19 Dynamic Assignment of Traffic Classes to a Priority Queue in a Packet Forwarding Device

Country Status (1)

Country Link
US (3) US8619793B2 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10277518B1 (en) 2017-01-16 2019-04-30 Innovium, Inc. Intelligent packet queues with delay-based actions
US10313255B1 (en) * 2017-01-16 2019-06-04 Innovium, Inc. Intelligent packet queues with enqueue drop visibility and forensics
US10735339B1 (en) 2017-01-16 2020-08-04 Innovium, Inc. Intelligent packet queues with efficient delay tracking
US11057307B1 (en) 2016-03-02 2021-07-06 Innovium, Inc. Load balancing path assignments techniques
US11075847B1 (en) 2017-01-16 2021-07-27 Innovium, Inc. Visibility sampling
US11621904B1 (en) 2020-11-06 2023-04-04 Innovium, Inc. Path telemetry data collection
US11784932B2 (en) 2020-11-06 2023-10-10 Innovium, Inc. Delay-based automatic queue management and tail drop
US11863458B1 (en) 2016-01-30 2024-01-02 Innovium, Inc. Reflected packets

Families Citing this family (139)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7219132B2 (en) * 2001-03-30 2007-05-15 Space Systems/Loral Dynamic resource allocation architecture for differentiated services over broadband communication networks
US6993613B2 (en) * 2001-09-17 2006-01-31 Intel Corporation Methods and apparatus for reducing receive interrupts via paced ingress indication
US7839785B2 (en) * 2001-09-27 2010-11-23 Broadcom Corporation System and method for dropping lower priority packets that are slated for transmission
US7881202B2 (en) * 2002-09-25 2011-02-01 Broadcom Corporation System and method for dropping lower priority packets that are slated for wireless transmission
US8688853B2 (en) * 2001-12-21 2014-04-01 Agere Systems Llc Method and apparatus for maintaining multicast lists in a data network
US7412701B1 (en) * 2002-04-22 2008-08-12 Cisco Technology, Inc. Method for network management using a virtual machine in a network device
US7519066B1 (en) 2002-05-09 2009-04-14 Silicon Image, Inc. Method for switching data in a crossbar switch
US7505422B1 (en) 2002-11-22 2009-03-17 Silicon Image, Inc. Preference programmable first-one detector and quadrature based random grant generator
US7461167B1 (en) * 2002-11-22 2008-12-02 Silicon Image, Inc. Method for multicast service in a crossbar switch
US20080028157A1 (en) * 2003-01-13 2008-01-31 Steinmetz Joseph H Global shared memory switch
US6947409B2 (en) * 2003-03-17 2005-09-20 Sony Corporation Bandwidth management of virtual networks on a shared network
US20040252711A1 (en) * 2003-06-11 2004-12-16 David Romano Protocol data unit queues
US20050144290A1 (en) * 2003-08-01 2005-06-30 Rizwan Mallal Arbitrary java logic deployed transparently in a network
US9065741B1 (en) * 2003-09-25 2015-06-23 Cisco Technology, Inc. Methods and apparatuses for identifying and alleviating internal bottlenecks prior to processing packets in internal feature modules
US20050220096A1 (en) 2004-04-06 2005-10-06 Robert Friskney Traffic engineering in frame-based carrier networks
US8923292B2 (en) 2004-04-06 2014-12-30 Rockstar Consortium Us Lp Differential forwarding in address-based carrier networks
US8254310B2 (en) * 2007-06-19 2012-08-28 Fleetwood Group, Inc. Audience response system and method with multiple base unit capability
US20060153185A1 (en) * 2004-12-28 2006-07-13 Intel Corporation Method and apparatus for dynamically changing ring size in network processing
DE112006000618T5 (en) 2005-03-15 2008-02-07 Trapeze Networks, Inc., Pleasanton System and method for distributing keys in a wireless network
US7623457B2 (en) * 2005-03-31 2009-11-24 At&T Intellectual Property I, L.P. Method and apparatus for managing end-to-end quality of service policies in a communication system
CN100459577C (en) * 2005-09-28 2009-02-04 华为技术有限公司 Band-width or buffer-storage distribution processing method in communication network
US8638762B2 (en) * 2005-10-13 2014-01-28 Trapeze Networks, Inc. System and method for network integrity
US7573859B2 (en) * 2005-10-13 2009-08-11 Trapeze Networks, Inc. System and method for remote monitoring in a wireless network
WO2007044986A2 (en) 2005-10-13 2007-04-19 Trapeze Networks, Inc. System and method for remote monitoring in a wireless network
US7724703B2 (en) 2005-10-13 2010-05-25 Belden, Inc. System and method for wireless network monitoring
US7551619B2 (en) * 2005-10-13 2009-06-23 Trapeze Networks, Inc. Identity-based networking
GB0606367D0 (en) * 2006-03-30 2006-05-10 Vodafone Plc Telecommunications networks
US20070248111A1 (en) * 2006-04-24 2007-10-25 Shaw Mark E System and method for clearing information in a stalled output queue of a crossbar
US7558266B2 (en) 2006-05-03 2009-07-07 Trapeze Networks, Inc. System and method for restricting network access using forwarding databases
US8966018B2 (en) * 2006-05-19 2015-02-24 Trapeze Networks, Inc. Automated network device configuration and network deployment
US20070268903A1 (en) * 2006-05-22 2007-11-22 Fujitsu Limited System and Method for Assigning Packets to Output Queues
US8665892B2 (en) * 2006-05-30 2014-03-04 Broadcom Corporation Method and system for adaptive queue and buffer control based on monitoring in a packet network switch
US9191799B2 (en) 2006-06-09 2015-11-17 Juniper Networks, Inc. Sharing data between wireless switches system and method
US9258702B2 (en) 2006-06-09 2016-02-09 Trapeze Networks, Inc. AP-local dynamic switching
US8818322B2 (en) 2006-06-09 2014-08-26 Trapeze Networks, Inc. Untethered access point mesh system and method
US7822594B2 (en) * 2006-08-07 2010-10-26 Voltaire Ltd. Service-oriented infrastructure management
US8340110B2 (en) * 2006-09-15 2012-12-25 Trapeze Networks, Inc. Quality of service provisioning for wireless networks
US7873061B2 (en) 2006-12-28 2011-01-18 Trapeze Networks, Inc. System and method for aggregation and queuing in a wireless network
US20080226075A1 (en) * 2007-03-14 2008-09-18 Trapeze Networks, Inc. Restricted services for wireless stations
US20080276303A1 (en) * 2007-05-03 2008-11-06 Trapeze Networks, Inc. Network Type Advertising
US8902904B2 (en) 2007-09-07 2014-12-02 Trapeze Networks, Inc. Network assignment based on priority
US8509128B2 (en) * 2007-09-18 2013-08-13 Trapeze Networks, Inc. High level instruction convergence function
DE102007052673A1 (en) * 2007-11-05 2009-05-07 Kuka Roboter Gmbh A computing system and method for managing available resources of a particular provided for a control of an industrial robot computing system
US8238942B2 (en) * 2007-11-21 2012-08-07 Trapeze Networks, Inc. Wireless station location detection
US8150357B2 (en) 2008-03-28 2012-04-03 Trapeze Networks, Inc. Smoothing filter for irregular update intervals
JP4983712B2 (en) * 2008-04-21 2012-07-25 富士通株式会社 Transmission information transfer apparatus and method
US20090287816A1 (en) * 2008-05-14 2009-11-19 Trapeze Networks, Inc. Link layer throughput testing
US8978105B2 (en) 2008-07-25 2015-03-10 Trapeze Networks, Inc. Affirming network relationships and resource access via related networks
US8238298B2 (en) * 2008-08-29 2012-08-07 Trapeze Networks, Inc. Picking an optimal channel for an access point in a wireless network
US8201168B2 (en) * 2008-12-25 2012-06-12 Voltaire Ltd. Virtual input-output connections for machine virtualization
US8732339B2 (en) * 2009-03-24 2014-05-20 Hewlett-Packard Development Company, L.P. NPIV at storage devices
US8665886B2 (en) 2009-03-26 2014-03-04 Brocade Communications Systems, Inc. Redundant host connection in a routed network
US8638799B2 (en) * 2009-07-10 2014-01-28 Hewlett-Packard Development Company, L.P. Establishing network quality of service for a virtual machine
JP5501052B2 (en) * 2010-03-24 2014-05-21 キヤノン株式会社 COMMUNICATION DEVICE, COMMUNICATION DEVICE CONTROL METHOD, PROGRAM
US8369335B2 (en) 2010-03-24 2013-02-05 Brocade Communications Systems, Inc. Method and system for extending routing domain to non-routing end stations
US8989186B2 (en) 2010-06-08 2015-03-24 Brocade Communication Systems, Inc. Virtual port grouping for virtual cluster switching
US9270486B2 (en) 2010-06-07 2016-02-23 Brocade Communications Systems, Inc. Name services for virtual cluster switching
US9716672B2 (en) 2010-05-28 2017-07-25 Brocade Communications Systems, Inc. Distributed configuration management for virtual cluster switching
US8867552B2 (en) 2010-05-03 2014-10-21 Brocade Communications Systems, Inc. Virtual cluster switching
US9769016B2 (en) 2010-06-07 2017-09-19 Brocade Communications Systems, Inc. Advanced link tracking for virtual cluster switching
US8625616B2 (en) 2010-05-11 2014-01-07 Brocade Communications Systems, Inc. Converged network extension
US9001824B2 (en) 2010-05-18 2015-04-07 Brocade Communication Systems, Inc. Fabric formation for virtual cluster switching
US9231890B2 (en) 2010-06-08 2016-01-05 Brocade Communications Systems, Inc. Traffic management for virtual cluster switching
US9461840B2 (en) 2010-06-02 2016-10-04 Brocade Communications Systems, Inc. Port profile management for virtual cluster switching
US8885488B2 (en) 2010-06-02 2014-11-11 Brocade Communication Systems, Inc. Reachability detection in trill networks
US8634308B2 (en) 2010-06-02 2014-01-21 Brocade Communications Systems, Inc. Path detection in trill networks
US9628293B2 (en) 2010-06-08 2017-04-18 Brocade Communications Systems, Inc. Network layer multicasting in trill networks
US8446914B2 (en) 2010-06-08 2013-05-21 Brocade Communications Systems, Inc. Method and system for link aggregation across multiple switches
US9608833B2 (en) 2010-06-08 2017-03-28 Brocade Communications Systems, Inc. Supporting multiple multicast trees in trill networks
US9246703B2 (en) 2010-06-08 2016-01-26 Brocade Communications Systems, Inc. Remote port mirroring
US9806906B2 (en) 2010-06-08 2017-10-31 Brocade Communications Systems, Inc. Flooding packets on a per-virtual-network basis
US9807031B2 (en) 2010-07-16 2017-10-31 Brocade Communications Systems, Inc. System and method for network configuration
US9264369B2 (en) * 2010-12-06 2016-02-16 Qualcomm Incorporated Technique for managing traffic at a router
US8612649B2 (en) 2010-12-17 2013-12-17 At&T Intellectual Property I, L.P. Validation of priority queue processing
US9008113B2 (en) * 2010-12-20 2015-04-14 Solarflare Communications, Inc. Mapped FIFO buffering
US9270572B2 (en) 2011-05-02 2016-02-23 Brocade Communications Systems Inc. Layer-3 support in TRILL networks
CN102833145A (en) * 2011-06-16 2012-12-19 中兴通讯股份有限公司 Self-adaptive dynamic bandwidth adjusting device and method
US8879549B2 (en) 2011-06-28 2014-11-04 Brocade Communications Systems, Inc. Clearing forwarding entries dynamically and ensuring consistency of tables across ethernet fabric switch
US9401861B2 (en) 2011-06-28 2016-07-26 Brocade Communications Systems, Inc. Scalable MAC address distribution in an Ethernet fabric switch
US8948056B2 (en) 2011-06-28 2015-02-03 Brocade Communication Systems, Inc. Spanning-tree based loop detection for an ethernet fabric switch
US9407533B2 (en) 2011-06-28 2016-08-02 Brocade Communications Systems, Inc. Multicast in a trill network
US9007958B2 (en) 2011-06-29 2015-04-14 Brocade Communication Systems, Inc. External loop detection for an ethernet fabric switch
US8885641B2 (en) 2011-06-30 2014-11-11 Brocade Communication Systems, Inc. Efficient trill forwarding
US9736085B2 (en) 2011-08-29 2017-08-15 Brocade Communications Systems, Inc. End-to end lossless Ethernet in Ethernet fabric
US20130166774A1 (en) * 2011-09-13 2013-06-27 Niksun, Inc. Dynamic network provisioning systems and methods
US9699117B2 (en) 2011-11-08 2017-07-04 Brocade Communications Systems, Inc. Integrated fibre channel support in an ethernet fabric switch
US9450870B2 (en) 2011-11-10 2016-09-20 Brocade Communications Systems, Inc. System and method for flow management in software-defined networks
US8995272B2 (en) 2012-01-26 2015-03-31 Brocade Communication Systems, Inc. Link aggregation in software-defined networks
US9742693B2 (en) 2012-02-27 2017-08-22 Brocade Communications Systems, Inc. Dynamic service insertion in a fabric switch
US9154416B2 (en) 2012-03-22 2015-10-06 Brocade Communications Systems, Inc. Overlay tunnel in a fabric switch
US9374301B2 (en) 2012-05-18 2016-06-21 Brocade Communications Systems, Inc. Network feedback in software-defined networks
US10277464B2 (en) 2012-05-22 2019-04-30 Arris Enterprises Llc Client auto-configuration in a multi-switch link aggregation
EP2853066B1 (en) 2012-05-23 2017-02-22 Brocade Communications Systems, Inc. Layer-3 overlay gateways
US9438527B2 (en) * 2012-05-24 2016-09-06 Marvell World Trade Ltd. Flexible queues in a network switch
US9602430B2 (en) 2012-08-21 2017-03-21 Brocade Communications Systems, Inc. Global VLANs for fabric switches
US9401872B2 (en) 2012-11-16 2016-07-26 Brocade Communications Systems, Inc. Virtual link aggregations across multiple fabric switches
US9350680B2 (en) 2013-01-11 2016-05-24 Brocade Communications Systems, Inc. Protection switching over a virtual link aggregation
US9413691B2 (en) 2013-01-11 2016-08-09 Brocade Communications Systems, Inc. MAC address synchronization in a fabric switch
US9548926B2 (en) 2013-01-11 2017-01-17 Brocade Communications Systems, Inc. Multicast traffic load balancing over virtual link aggregation
US9565113B2 (en) 2013-01-15 2017-02-07 Brocade Communications Systems, Inc. Adaptive link aggregation and virtual link aggregation
US9565099B2 (en) 2013-03-01 2017-02-07 Brocade Communications Systems, Inc. Spanning tree in fabric switches
US9401818B2 (en) 2013-03-15 2016-07-26 Brocade Communications Systems, Inc. Scalable gateways for a fabric switch
US10361918B2 (en) 2013-03-19 2019-07-23 Yale University Managing network forwarding configurations using algorithmic policies
US9565028B2 (en) 2013-06-10 2017-02-07 Brocade Communications Systems, Inc. Ingress switch multicast distribution in a fabric switch
US9699001B2 (en) 2013-06-10 2017-07-04 Brocade Communications Systems, Inc. Scalable and segregated network virtualization
US9806949B2 (en) 2013-09-06 2017-10-31 Brocade Communications Systems, Inc. Transparent interconnection of Ethernet fabric switches
US9912612B2 (en) 2013-10-28 2018-03-06 Brocade Communications Systems LLC Extended ethernet fabric switches
US9548873B2 (en) 2014-02-10 2017-01-17 Brocade Communications Systems, Inc. Virtual extensible LAN tunnel keepalives
US10581758B2 (en) 2014-03-19 2020-03-03 Avago Technologies International Sales Pte. Limited Distributed hot standby links for vLAG
US10476698B2 (en) 2014-03-20 2019-11-12 Avago Technologies International Sales Pte. Limited Redundent virtual link aggregation group
US10063473B2 (en) 2014-04-30 2018-08-28 Brocade Communications Systems LLC Method and system for facilitating switch virtualization in a network of interconnected switches
US9800471B2 (en) 2014-05-13 2017-10-24 Brocade Communications Systems, Inc. Network extension groups of global VLANs in a fabric switch
US10382228B2 (en) * 2014-06-26 2019-08-13 Avago Technologies International Sales Pte. Limited Protecting customer virtual local area network (VLAN) tag in carrier ethernet services
US10616108B2 (en) 2014-07-29 2020-04-07 Avago Technologies International Sales Pte. Limited Scalable MAC address virtualization
US9544219B2 (en) 2014-07-31 2017-01-10 Brocade Communications Systems, Inc. Global VLAN services
US9807007B2 (en) 2014-08-11 2017-10-31 Brocade Communications Systems, Inc. Progressive MAC address learning
US9524173B2 (en) 2014-10-09 2016-12-20 Brocade Communications Systems, Inc. Fast reboot for a switch
US9699029B2 (en) 2014-10-10 2017-07-04 Brocade Communications Systems, Inc. Distributed configuration management in a switch group
US9774540B2 (en) 2014-10-29 2017-09-26 Red Hat Israel, Ltd. Packet drop based dynamic receive priority for network devices
US9628407B2 (en) 2014-12-31 2017-04-18 Brocade Communications Systems, Inc. Multiple software versions in a switch group
US9626255B2 (en) 2014-12-31 2017-04-18 Brocade Communications Systems, Inc. Online restoration of a switch snapshot
US9942097B2 (en) 2015-01-05 2018-04-10 Brocade Communications Systems LLC Power management in a network of interconnected switches
US10003552B2 (en) 2015-01-05 2018-06-19 Brocade Communications Systems, Llc. Distributed bidirectional forwarding detection protocol (D-BFD) for cluster of interconnected switches
US10038592B2 (en) 2015-03-17 2018-07-31 Brocade Communications Systems LLC Identifier assignment to a new switch in a switch group
US9807005B2 (en) 2015-03-17 2017-10-31 Brocade Communications Systems, Inc. Multi-fabric manager
US10579406B2 (en) 2015-04-08 2020-03-03 Avago Technologies International Sales Pte. Limited Dynamic orchestration of overlay tunnels
US10439929B2 (en) 2015-07-31 2019-10-08 Avago Technologies International Sales Pte. Limited Graceful recovery of a multicast-enabled switch
US10171303B2 (en) 2015-09-16 2019-01-01 Avago Technologies International Sales Pte. Limited IP-based interconnection of switches with a logical chassis
US9912614B2 (en) 2015-12-07 2018-03-06 Brocade Communications Systems LLC Interconnection of switches based on hierarchical overlay tunneling
US10009291B1 (en) * 2015-12-21 2018-06-26 Amazon Technologies, Inc. Programmable switching fabric for dynamic path selection
US10075965B2 (en) * 2016-04-06 2018-09-11 P2 Solutions Limited Apparatus and method for detecting and alleviating unfairness in wireless network
US10341259B1 (en) 2016-05-31 2019-07-02 Amazon Technologies, Inc. Packet forwarding using programmable feature prioritization
US10520110B2 (en) * 2016-10-10 2019-12-31 Citrix Systems, Inc. Systems and methods for executing cryptographic operations across different types of processing hardware
US10237090B2 (en) 2016-10-28 2019-03-19 Avago Technologies International Sales Pte. Limited Rule-based network identifier mapping
US10700955B2 (en) * 2018-09-14 2020-06-30 The Nielsen Company (Us), Llc Methods apparatus and medium to exclude network communication traffic from media monitoring records
US11277351B2 (en) * 2018-11-08 2022-03-15 Arris Enterprises Llc Priority-based queueing for scalable device communication
US10887122B1 (en) * 2018-11-19 2021-01-05 Juniper Networks, Inc. Dynamically providing traffic via various packet forwarding techniques
WO2020236275A1 (en) 2019-05-23 2020-11-26 Cray Inc. System and method for facilitating dynamic command management in a network interface controller (nic)
US11513848B2 (en) * 2020-10-05 2022-11-29 Apple Inc. Critical agent identification to modify bandwidth allocation in a virtual channel

Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5499238A (en) * 1993-11-06 1996-03-12 Electronics And Telecommunications Research Institute Asynchronous transfer mode (ATM) multiplexing process device and method of the broadband integrated service digital network subscriber access apparatus
US5570360A (en) * 1995-03-20 1996-10-29 Stratacom, Inc. Method and apparatus for implementing communication service contract using cell arrival information
US5822319A (en) * 1995-05-18 1998-10-13 Kabushiki Kaisha Toshiba Router device and datagram transfer method for data communication network system
US5983278A (en) * 1996-04-19 1999-11-09 Lucent Technologies Inc. Low-loss, fair bandwidth allocation flow control in a packet switch
US6046980A (en) * 1996-12-09 2000-04-04 Packeteer, Inc. System for managing flow bandwidth utilization at network, transport and application layers in store and forward network
US6081507A (en) * 1998-11-04 2000-06-27 Polytechnic University Methods and apparatus for handling time stamp aging
US6088356A (en) * 1997-06-30 2000-07-11 Sun Microsystems, Inc. System and method for a multi-layer network element
US6094435A (en) * 1997-06-30 2000-07-25 Sun Microsystems, Inc. System and method for a quality of service in a multi-layer network element
US6104700A (en) * 1997-08-29 2000-08-15 Extreme Networks Policy based quality of service
US6185215B1 (en) * 1996-10-15 2001-02-06 International Business Machines Corporation Combined router, ATM, WAN and/or LAN switch (CRAWLS) cut through and method of use
US6252848B1 (en) * 1999-03-22 2001-06-26 Pluris, Inc. System performance in a data network through queue management based on ingress rate monitoring
US6408006B1 (en) * 1997-12-01 2002-06-18 Alcatel Canada Inc. Adaptive buffering allocation under multiple quality of service
US6463067B1 (en) * 1999-12-13 2002-10-08 Ascend Communications, Inc. Submission and response architecture for route lookup and packet classification requests
US6493318B1 (en) * 1998-05-04 2002-12-10 Hewlett-Packard Company Cost propagation switch protocols
US6614790B1 (en) * 1998-06-12 2003-09-02 Telefonaktiebolaget Lm Ericsson (Publ) Architecture for integrated services packet-switched networks
US6654374B1 (en) * 1998-11-10 2003-11-25 Extreme Networks Method and apparatus to reduce Jitter in packet switched networks
US6680906B1 (en) * 1999-03-31 2004-01-20 Cisco Technology, Inc. Regulating packet traffic in an integrated services network
US6735198B1 (en) * 1999-12-21 2004-05-11 Cisco Technology, Inc. Method and apparatus for updating and synchronizing forwarding tables in a distributed network switch
US6985431B1 (en) * 1999-08-27 2006-01-10 International Business Machines Corporation Network switch and components and method of operation
US6996099B1 (en) * 1999-03-17 2006-02-07 Broadcom Corporation Network switch having a programmable counter
US7046665B1 (en) * 1999-10-26 2006-05-16 Extreme Networks, Inc. Provisional IP-aware virtual paths over networks

Family Cites Families (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2775280A (en) * 1955-03-01 1956-12-25 Majestic Metal Specialties Inc Lady's handbag
US5515376A (en) * 1993-07-19 1996-05-07 Alantec, Inc. Communication apparatus and methods
JP3419627B2 (en) * 1996-06-11 2003-06-23 株式会社日立製作所 Router device
US5898687A (en) * 1996-07-24 1999-04-27 Cisco Systems, Inc. Arbitration mechanism for a multicast logic engine of a switching fabric circuit
US5953335A (en) * 1997-02-14 1999-09-14 Advanced Micro Devices, Inc. Method and apparatus for selectively discarding packets for blocked output queues in the network switch
US5991305A (en) * 1997-02-14 1999-11-23 Advanced Micro Devices, Inc. Integrated multiport switch having independently resettable management information base (MIB)
US6246680B1 (en) * 1997-06-30 2001-06-12 Sun Microsystems, Inc. Highly integrated multi-layer switch element architecture
US5926463A (en) * 1997-10-06 1999-07-20 3Com Corporation Method and apparatus for viewing and managing a configuration of a computer network
EP1005779B1 (en) * 1998-06-19 2008-03-12 Juniper Networks, Inc. Device for performing ip forwarding and atm switching
WO2000003256A1 (en) * 1998-07-08 2000-01-20 Broadcom Corporation Network switch utilizing packet based per head-of-line blocking prevention
US6430188B1 (en) * 1998-07-08 2002-08-06 Broadcom Corporation Unified table for L2, L3, L4, switching and filtering
US6876653B2 (en) * 1998-07-08 2005-04-05 Broadcom Corporation Fast flexible filter processor based architecture for a network device
US6912232B1 (en) * 1998-10-19 2005-06-28 At&T Corp. Virtual private network
US6430616B1 (en) * 1998-12-04 2002-08-06 Sun Microsystems, Inc. Scalable system method for efficiently logging management information associated with a network
US6643260B1 (en) * 1998-12-18 2003-11-04 Cisco Technology, Inc. Method and apparatus for implementing a quality of service policy in a data communications network
US7116679B1 (en) * 1999-02-23 2006-10-03 Alcatel Multi-service network switch with a generic forwarding interface
US6980515B1 (en) * 1999-02-23 2005-12-27 Alcatel Multi-service network switch with quality of access
US6295532B1 (en) * 1999-03-02 2001-09-25 Nms Communications Corporation Apparatus and method for classifying information received by a communications system
US6788681B1 (en) * 1999-03-16 2004-09-07 Nortel Networks Limited Virtual private networks and methods for their operation
US6952401B1 (en) * 1999-03-17 2005-10-04 Broadcom Corporation Method for load balancing in a network switch
US6707818B1 (en) * 1999-03-17 2004-03-16 Broadcom Corporation Network switch memory interface configuration
US6405258B1 (en) * 1999-05-05 2002-06-11 Advanced Micro Devices Inc. Method and apparatus for controlling the flow of data frames through a network switch on a port-by-port basis
US6577636B1 (en) * 1999-05-21 2003-06-10 Advanced Micro Devices, Inc. Decision making engine receiving and storing a portion of a data frame in order to perform a frame forwarding decision
US6515993B1 (en) * 1999-05-28 2003-02-04 Advanced Micro Devices, Inc. Method and apparatus for manipulating VLAN tags
US6611867B1 (en) * 1999-08-31 2003-08-26 Accenture Llp System, method and article of manufacture for implementing a hybrid network
US6427132B1 (en) * 1999-08-31 2002-07-30 Accenture Llp System, method and article of manufacture for demonstrating E-commerce capabilities via a simulation on a network
US6731601B1 (en) * 1999-09-09 2004-05-04 Advanced Micro Devices, Inc. Apparatus and method for resetting a retry counter in a network switch port in response to exerting backpressure
US7020697B1 (en) * 1999-10-01 2006-03-28 Accenture Llp Architectures for netcentric computing systems
US6687247B1 (en) * 1999-10-27 2004-02-03 Cisco Technology, Inc. Architecture for high speed class of service enabled linecard
US6798788B1 (en) * 1999-11-24 2004-09-28 Advanced Micro Devices, Inc. Arrangement determining policies for layer 3 frame fragments in a network switch
US7016365B1 (en) * 2000-03-31 2006-03-21 Intel Corporation Switching fabric including a plurality of crossbar sections
US6760776B1 (en) * 2000-04-10 2004-07-06 International Business Machines Corporation Method and apparatus for processing network frames in a network processor by embedding network control information such as routing and filtering information in each received frame
US7099317B2 (en) * 2000-06-09 2006-08-29 Broadcom Corporation Gigabit switch with multicast handling
US6904054B1 (en) * 2000-08-10 2005-06-07 Verizon Communications Inc. Support for quality of service and vertical services in digital subscriber line domain
US6870840B1 (en) * 2000-08-16 2005-03-22 Alcatel Distributed source learning for data communication switch
US6804234B1 (en) * 2001-03-16 2004-10-12 Advanced Micro Devices, Inc. External CPU assist when performing a network address lookup
US6728213B1 (en) * 2001-03-23 2004-04-27 Advanced Micro Devices, Inc. Selective admission control in a network device
US6963566B1 (en) * 2001-05-10 2005-11-08 Advanced Micro Devices, Inc. Multiple address lookup engines running in parallel in a switch for a packet-switched network
US7464180B1 (en) * 2001-10-16 2008-12-09 Cisco Technology, Inc. Prioritization and preemption of data frames over a switching fabric
US6633835B1 (en) * 2002-01-10 2003-10-14 Networks Associates Technology, Inc. Prioritized data capture, classification and filtering in a network monitoring environment
US8040901B1 (en) * 2008-02-06 2011-10-18 Juniper Networks, Inc. Packet queueing within ring networks

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5499238A (en) * 1993-11-06 1996-03-12 Electronics And Telecommunications Research Institute Asynchronous transfer mode (ATM) multiplexing process device and method of the broadband integrated service digital network subscriber access apparatus
US5570360A (en) * 1995-03-20 1996-10-29 Stratacom, Inc. Method and apparatus for implementing communication service contract using cell arrival information
US5822319A (en) * 1995-05-18 1998-10-13 Kabushiki Kaisha Toshiba Router device and datagram transfer method for data communication network system
US5983278A (en) * 1996-04-19 1999-11-09 Lucent Technologies Inc. Low-loss, fair bandwidth allocation flow control in a packet switch
US6185215B1 (en) * 1996-10-15 2001-02-06 International Business Machines Corporation Combined router, ATM, WAN and/or LAN switch (CRAWLS) cut through and method of use
US6046980A (en) * 1996-12-09 2000-04-04 Packeteer, Inc. System for managing flow bandwidth utilization at network, transport and application layers in store and forward network
US6088356A (en) * 1997-06-30 2000-07-11 Sun Microsystems, Inc. System and method for a multi-layer network element
US6094435A (en) * 1997-06-30 2000-07-25 Sun Microsystems, Inc. System and method for a quality of service in a multi-layer network element
US6104700A (en) * 1997-08-29 2000-08-15 Extreme Networks Policy based quality of service
US6408006B1 (en) * 1997-12-01 2002-06-18 Alcatel Canada Inc. Adaptive buffering allocation under multiple quality of service
US6493318B1 (en) * 1998-05-04 2002-12-10 Hewlett-Packard Company Cost propagation switch protocols
US6614790B1 (en) * 1998-06-12 2003-09-02 Telefonaktiebolaget Lm Ericsson (Publ) Architecture for integrated services packet-switched networks
US6081507A (en) * 1998-11-04 2000-06-27 Polytechnic University Methods and apparatus for handling time stamp aging
US6654374B1 (en) * 1998-11-10 2003-11-25 Extreme Networks Method and apparatus to reduce jitter in packet switched networks
US6996099B1 (en) * 1999-03-17 2006-02-07 Broadcom Corporation Network switch having a programmable counter
US6252848B1 (en) * 1999-03-22 2001-06-26 Pluris, Inc. System performance in a data network through queue management based on ingress rate monitoring
US6680906B1 (en) * 1999-03-31 2004-01-20 Cisco Technology, Inc. Regulating packet traffic in an integrated services network
US6985431B1 (en) * 1999-08-27 2006-01-10 International Business Machines Corporation Network switch and components and method of operation
US7046665B1 (en) * 1999-10-26 2006-05-16 Extreme Networks, Inc. Provisional IP-aware virtual paths over networks
US6463067B1 (en) * 1999-12-13 2002-10-08 Ascend Communications, Inc. Submission and response architecture for route lookup and packet classification requests
US6735198B1 (en) * 1999-12-21 2004-05-11 Cisco Technology, Inc. Method and apparatus for updating and synchronizing forwarding tables in a distributed network switch

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11863458B1 (en) 2016-01-30 2024-01-02 Innovium, Inc. Reflected packets
US11057307B1 (en) 2016-03-02 2021-07-06 Innovium, Inc. Load balancing path assignments techniques
US11736388B1 (en) 2016-03-02 2023-08-22 Innovium, Inc. Load balancing path assignments techniques
US11855901B1 (en) 2017-01-16 2023-12-26 Innovium, Inc. Visibility sampling
US10313255B1 (en) * 2017-01-16 2019-06-04 Innovium, Inc. Intelligent packet queues with enqueue drop visibility and forensics
US10673770B1 (en) 2017-01-16 2020-06-02 Innovium, Inc. Intelligent packet queues with delay-based actions
US10735339B1 (en) 2017-01-16 2020-08-04 Innovium, Inc. Intelligent packet queues with efficient delay tracking
US11075847B1 (en) 2017-01-16 2021-07-27 Innovium, Inc. Visibility sampling
US10277518B1 (en) 2017-01-16 2019-04-30 Innovium, Inc. Intelligent packet queues with delay-based actions
US11665104B1 (en) 2017-01-16 2023-05-30 Innovium, Inc. Delay-based tagging in a network switch
US11621904B1 (en) 2020-11-06 2023-04-04 Innovium, Inc. Path telemetry data collection
US11784932B2 (en) 2020-11-06 2023-10-10 Innovium, Inc. Delay-based automatic queue management and tail drop
US11943128B1 (en) 2020-11-06 2024-03-26 Innovium, Inc. Path telemetry data collection

Also Published As

Publication number Publication date
US20140105012A1 (en) 2014-04-17
US8619793B2 (en) 2013-12-31
US20020021701A1 (en) 2002-02-21

Similar Documents

Publication Publication Date Title
US8619793B2 (en) Dynamic assignment of traffic classes to a priority queue in a packet forwarding device
EP1142213B1 (en) Dynamic assignment of traffic classes to a priority queue in a packet forwarding device
KR100735408B1 (en) Method and apparatus for controlling a traffic switching operation based on a service class in an Ethernet-based network
US7099275B2 (en) Programmable multi-service queue scheduler
US6822940B1 (en) Method and apparatus for adapting enforcement of network quality of service policies based on feedback about network conditions
US7936770B1 (en) Method and apparatus of virtual class of service and logical queue representation through network traffic distribution over multiple port interfaces
US7539143B2 (en) Network switching device ingress memory system
KR100977651B1 (en) Method and apparatus for network congestion control
US6094435A (en) System and method for a quality of service in a multi-layer network element
Varadarajan et al. EtheReal: A host-transparent real-time Fast Ethernet switch
US5787086A (en) Method and apparatus for emulating a circuit connection in a cell based communications network
US7855960B2 (en) Traffic shaping method and device
US7573821B2 (en) Data packet rate control
US20020181484A1 (en) Packet switch and switching method for switching variable length packets
US6392996B1 (en) Method and apparatus for frame peeking
US20060045009A1 (en) Device and method for managing oversubscription in a network
US7652988B2 (en) Hardware-based rate control for bursty traffic
US8743685B2 (en) Limiting transmission rate of data
US7397762B1 (en) System, device and method for scheduling information processing with load-balancing
US7016302B1 (en) Apparatus and method for controlling queuing of data at a node on a network
KR100836947B1 (en) Tag generation based on priority or differentiated services information
CA2521600A1 (en) Method for the priority classification of frames
JP3601078B2 (en) Router, frame relay exchange, and frame relay priority communication method
US7009973B2 (en) Switch using a segmented ring
Minami et al. Class-based QoS control scheme by flow management in the Internet router
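
The similar documents above share a common theme: classifying packet flows and mapping the resulting traffic classes onto priority queues, with the mapping adjustable while traffic is in flight. As a purely illustrative sketch of that general idea, not the method of this patent or of any document listed, the following hypothetical Python fragment models a queue whose per-flow class assignments can be changed at runtime; all names (DynamicPriorityQueue, assign_class, the CLASSES table) are invented for this example.

```python
import heapq
import itertools
from dataclasses import dataclass

# Hypothetical traffic classes; a lower rank means higher priority.
CLASSES = {"voice": 0, "video": 1, "best_effort": 2}

@dataclass
class Packet:
    flow_id: str
    payload: bytes

class DynamicPriorityQueue:
    """Toy scheduler: each flow carries a mutable class assignment,
    and dequeue order follows the class in effect at enqueue time."""

    def __init__(self):
        self._heap = []                 # entries: (class rank, seq, packet)
        self._seq = itertools.count()   # FIFO tie-break within a class
        self._flow_class = {}           # flow_id -> class name

    def assign_class(self, flow_id: str, cls: str) -> None:
        # Dynamic reassignment: affects packets enqueued after this call;
        # packets already queued keep the priority they were given.
        self._flow_class[flow_id] = cls

    def enqueue(self, pkt: Packet) -> None:
        cls = self._flow_class.get(pkt.flow_id, "best_effort")
        heapq.heappush(self._heap, (CLASSES[cls], next(self._seq), pkt))

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = DynamicPriorityQueue()
q.assign_class("flow-a", "voice")       # e.g. a flow identified as VoIP
q.enqueue(Packet("flow-b", b"bulk"))    # unclassified, defaults to best effort
q.enqueue(Packet("flow-a", b"rtp"))
assert q.dequeue().flow_id == "flow-a"  # the voice-class flow drains first
```

A real forwarding device would implement this in hardware with multiple physical queues per port and a scheduler (strict priority, weighted round robin, or similar) rather than a single software heap; the sketch only shows the reassignable flow-to-class mapping that the listed documents have in common.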

Legal Events

Date Code Title Description
AS Assignment

Owner name: RPX CLEARINGHOUSE LLC, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ROCKSTAR CONSORTIUM US LP;ROCKSTAR CONSORTIUM LLC;BOCKSTAR TECHNOLOGIES LLC;AND OTHERS;REEL/FRAME:034924/0779

Effective date: 20150128

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION