WO2001067694A1 - Broadband mid-network server - Google Patents

Broadband mid-network server

Info

Publication number
WO2001067694A1
Authority
WO
WIPO (PCT)
Prior art keywords
packet
processing
server
module
protocol
Prior art date
Application number
PCT/US2001/001003
Other languages
French (fr)
Other versions
WO2001067694A9 (en)
Inventor
Jean Pierre Bordes
Otto Andreas Schmid
Curtis Davis
Monier Maher
Manju Hegde
Original Assignee
Celox Networks, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Celox Networks, Inc. filed Critical Celox Networks, Inc.
Priority to EP01908601A priority Critical patent/EP1260067A1/en
Priority to AU2001236450A priority patent/AU2001236450A1/en
Publication of WO2001067694A1 publication Critical patent/WO2001067694A1/en
Publication of WO2001067694A9 publication Critical patent/WO2001067694A9/en

Classifications

    • H — ELECTRICITY » H04 — ELECTRIC COMMUNICATION TECHNIQUE » H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION; H04Q — SELECTING
    • H04L 49/00 Packet switching elements
        • H04L 49/20 Support for services
            • H04L 49/205 Quality of Service based
        • H04L 49/25 Routing or path finding in a switch fabric
            • H04L 49/253 Routing or path finding using establishment or release of connections between ports
            • H04L 49/255 Control mechanisms for ATM switching fabrics
        • H04L 49/30 Peripheral units, e.g. input or output ports
            • H04L 49/3009 Header conversion, routing tables or routing tags
            • H04L 49/3081 ATM peripheral units, e.g. policing, insertion or extraction
        • H04L 49/35 Switches specially adapted for specific applications
            • H04L 49/351 Switches for local area networks [LAN], e.g. Ethernet switches
        • H04L 49/40 Constructional details, e.g. power supply, mechanical construction or backplane
        • H04L 49/50 Overload detection or protection within a single switching element
            • H04L 49/501 Overload detection
            • H04L 49/503 Policing
        • H04L 49/60 Software-defined switches
            • H04L 49/602 Multilayer or multiprotocol switching, e.g. IP switching
            • H04L 49/608 ATM switches adapted to switch variable length packets, e.g. IP packets
    • H04Q 11/00 Selecting arrangements for multiplex systems
        • H04Q 11/04 Selecting arrangements for time-division multiplexing
            • H04Q 11/0428 Integrated services digital network, i.e. systems for transmission of different types of digitised signals, e.g. speech, data, telecentral, television signals
                • H04Q 11/0478 Provisions for broadband connections
    • H04L 12/00 Data switching networks
        • H04L 12/54 Store-and-forward switching systems
            • H04L 12/56 Packet switching systems
                • H04L 12/5601 Transfer mode dependent, e.g. ATM
                    • H04L 2012/5638 Services, e.g. multimedia, GOS, QOS
                    • H04L 2012/5665 Interaction of ATM with other protocols
                        • H04L 2012/5667 IP over ATM

Definitions

  • the present invention relates to internetworked communication systems, and especially (but not exclusively) to a highly scalable broadband mid-network server for performing mid-network processing functions, including routing functions, per user processing, encryption, bandwidth distribution and traffic shaping.
  • VoIP (Voice over IP) and Real Time Video are envisioned to be two significant applications for propelling Internet growth to the next level. VoIP can be defined as the ability to make telephone calls and send faxes over IP networks.
  • Real Time Video is a "direct-to-user" technique in which a video signal is transmitted to the client device and presentation of the video begins after a short delay for data buffering, eliminating the need for significant client-side storage capacity. It is also expected to become popular with businesses for webconferencing, which requires high bandwidth since it involves a continuous transfer of image information together with voice. Webconferencing also requires real time traffic handling because it is usually implemented as an interactive application.
  • Virtual Private Network services allow a private network to be configured within a public network. This is one of the drivers for Internet access amongst businesses. To allow Virtual Private Networks to coexist on the public Internet, and to encourage business use of the Internet, great care must be taken with respect to security and authentication issues, and tunneling protocols such as L2TP and IPSec must be efficiently supported.
  • The number of subscribers handled by one system and the different qualities of service provided will make service provider administration more complex. To make provisioning of broadband access more attractive to service providers, subscriber management and usage accounting must be simplified, and differentiated services must be provided.
  • Broadband makes it possible to provide different amounts of bandwidth to users and to smaller Internet Service Providers. To make wholesaling of IP connectivity possible, and to simplify service and repair functions, the ability to support multiple service providers with one mid-network server must be provided.
  • a large number of connections are serviced by a broadband mid-network server.
  • In order to ensure that service is not interrupted, the broadband server must have very high availability. Such availability is also required for mission-critical business applications.
  • the inventors hereof have succeeded at designing and developing a broadband mid-network server that, in the most preferred embodiment, satisfies all of the requirements described above.
  • This inventive server provides reliable, secure, fast, flexible, high-bandwidth, and easily managed access to the Internet so as to accommodate all current Internet services including email, file transfer, web surfing and e-commerce, as well as the new value added services such as VoIP and Real Time Video.
  • the broadband mid-network server of the present invention has been designed to scale not only in bandwidth, but also in processing power and state space.
  • the architecture allows a service provider to configure the cards chosen for use in the available chassis space to suit his particular application.
  • a service provider could increase the number of IPE cards at the expense of fewer line cards, down to as few as one line card. In the case of one line card, the maximum amount of processing power would be available to a service provider. In the preferred embodiment described in detail below, this configuration would provide 240 processors and 39 gigabytes of memory. This would allow for a greater number and complexity of value added services, which require more processing power. Alternatively, a greater number of line cards could be selected for use in a chassis, which would be desirable for handling greater traffic and throughput at the expense of fewer value added services.
  • the broadband mid-network server of the present invention includes the ability to distribute traffic across a number of Internet processing engines and, more specifically, across a number of protocol processing units provided in each engine (the bandwidth to which can be coordinated), to provide the compute power and state space required for performing per user processing for a large number of users.
  • One important feature of the present invention is a unique architectural philosophy, which provides that processing be performed as close to the physical layer as warranted by considerations of flexibility, cost and complexity.
  • This architectural philosophy maintains balance between two kinds of processing which are important to scaling bandwidth with value-added services in broadband networks: time-consuming, repetitive processing; and flexible processing which must be easy to program by third parties.
  • time-consuming, repetitive processing, which has proved to create a bottleneck in the processor-based servers of the prior art, is addressed by the inventive architecture through specialized hardware, resulting in dramatic increases in speed and decreases in delay.
  • the broadband mid-network server of the present invention provides a system that is currently unrivalled in performance and which can become the prime mover of Internet services such as managed, secure VPNs, Voice over IP and Real Time Video.
  • Fig. 1 illustrates a single shelf broadband mid-network server according to one embodiment of the present invention;
  • Fig. 2 is a functional block diagram of the preferred server shown in Fig. 1;
  • Fig. 3 is a functional block diagram of an exemplary line card shown in Figs. 1 and 2;
  • Fig. 4 is a functional block diagram of an exemplary IPE card shown in Figs. 1 and 2;
  • Fig. 5 illustrates routed distribution to an IPE card;
  • Fig. 6 illustrates the processing flow on an IPE card;
  • Fig. 7 illustrates a protocol processing platform according to the present invention;
  • Fig. 8 is a functional block diagram of an exemplary buffer access controller;
  • Fig. 9 illustrates the format of a cell received at an input to a BAC from a PIC;
  • Fig. 10 is a functional block diagram of a preferred packet manager;
  • Fig. 11 is an illustration of the deployment of a broadband mid-network server at a Service Provider POP;
  • Fig. 12 is an illustration of the different kinds of links an ISP may want on a secure segment;
  • Fig. 13 is an illustration of the system-wide bandwidth distribution functions;
  • Fig. 14 is an illustration of the multi-level policing and multi-level shaping that occurs in the system;
  • Fig. 15 is an illustration of routed distribution, two-level policing, routing and two-level shaping;
  • Fig. 16 is a functional block diagram of a preferred packet inspector;
  • Fig. 17 is an illustration of the preferred Distributor Flow Unit; and
  • Fig. 18 is a summary of the highlights of the DFU.

Detailed Description of the Preferred Embodiments
  • the mid-network processor of the present invention is preferably implemented in a single shelf system as shown generally in Fig. 1, and is indicated generally by reference character 300.
  • the mid-network processor 300 is provided with a number of physical connection ("PHY") cards 302-316 through which packets may enter and exit the mid-network processor 300 according to a particular communication protocol, as is known in the art.
  • the mid-network processor 300 supports the POS, ATM, and Gigabit Ethernet layer two protocols, although the mid-network processor may readily be configured to support additional protocols, as will be apparent.
  • the PHY cards 302-316 are each associated with line cards 322-336, respectively, as shown in Fig. 1.
  • each PHY card is media specific.
  • each PHY card is provided with connectors and other components necessary to interface with the communication media connected thereto, and over which packets enter and exit the PHY card.
  • Each line card is configured to process packets of the type received from its associated PHY card, as explained more fully below.
  • the preferred mid-network processor 300 shown in Fig. 1 is also provided with a number of Internet Processing Engine ("IPE") cards 340-354, as well as two flash memory modules 360, 362 and four switch fabric modules 364-368. As appreciated by those skilled in the art, the number of switch fabric cards required is a function of the switch fabric card design as well as the desired redundancy and overall performance.
  • Fig. 1 also illustrates a midplane 370 that is provided for interconnecting the various cards described above.
  • the preferred mid-network processor 300 utilizes a card-based approach to facilitate maintenance and expansion of the mid-network processor 300, as necessary, but this is clearly not a limitation of the present invention.
  • Fig. 2 is a functional block diagram of the preferred mid-network processor 300 shown in Fig. 1 (although, to simplify the illustration, Fig. 2 does not show the PHY cards 310-316, the line cards 330-336 and the IPE cards 346-354 shown in Fig. 1).
  • Packets enter the mid-network processor 300 via the PHY cards, as is known in the art.
  • Each PHY card then delivers its packets to its associated line card through the midplane 370.
  • After performing initial processing of the packet, the line card delivers the packet, again through the midplane, to the switch fabric which, in turn, delivers the packet to one of the IPE cards for performing certain mid-network processing functions, such as routing functions, per user processing, encryption, and bandwidth distribution.
  • After performing mid-network processing for the packet delivered thereto, the IPE card sends the packet back into the switch fabric, typically for delivery to one of the line cards for some additional processing before allowing the packet to exit the mid-network processor 300 through one of the PHY cards.
  • In some cases, a single IPE card may be insufficient to complete the necessary mid-network processing functions for a packet delivered thereto.
  • In such cases, upon performing some processing, the IPE card will deliver the packet to another IPE card (rather than to one of the line cards) via the switch fabric for further processing.
  • Although a packet will typically be processed by only one IPE card, it is possible to process a packet in multiple IPE cards, if necessary.
  • all of the line cards contain identical hardware, but are independently programmable.
  • all of the IPE cards contain identical hardware, but are independently programmable. This contributes to the scalability and elegantly simple design of the preferred mid-network processor 300. Additional processing power can be provided to the mid- network processor by simply adding additional IPE cards.
  • additional users can be supported by the mid-network processor 300 by adding additional line cards and PHY cards, and perhaps additional IPE cards to provide additional processing for the newly added users, if necessary.
  • the flash memory cards are provided for storing configuration data used by the IPE cards during system initialization.
  • the term "packet," as used herein, refers to any type of packet that enters or exits the mid-network processor 300, including packets input to the mid-network processor 300 in the form of cells (such as ATM cells) via an interleaved or non-interleaved cell stream.
  • each line card used in the preferred mid-network processor 300 performs a number of functions. Initially, the line card converts packets (possibly of varying lengths) delivered thereto into fixed length cells. In this preferred embodiment, each line card converts input packets (including packets represented by individual cells) into 64 byte cells, as sketched below. The line card then examines the stream of fixed length cells "on the fly" to obtain important control information, including the protocol encapsulation sequence for each packet and those portions of the packet which should be captured for processing. This control information is then used on the line card to reassemble the packet, and to format the reassembled packet into one of a limited number of protocol types that are supported by the IPE cards.
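For illustration, the cell conversion described above can be modeled as follows. This is a minimal sketch, not part of the disclosure: the 64-byte cell size comes from the preferred embodiment, while the zero-padding rule and the helper name are assumptions.

```python
CELL_SIZE = 64  # bytes, per the preferred embodiment

def segment_packet(packet: bytes) -> list[bytes]:
    """Split a variable-length packet into fixed-length 64-byte cells,
    zero-padding the final cell (padding behavior is an assumption)."""
    cells = []
    for offset in range(0, len(packet), CELL_SIZE):
        cell = packet[offset:offset + CELL_SIZE]
        if len(cell) < CELL_SIZE:
            cell = cell.ljust(CELL_SIZE, b"\x00")
        cells.append(cell)
    return cells

# A 150-byte packet becomes three 64-byte cells (the last one padded).
assert [len(c) for c in segment_packet(b"\xAA" * 150)] == [64, 64, 64]
```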
  • while any given line card can be configured to support packets having a number of protocol layers and protocol encapsulation sequences, the line card is configured to convert these packets into generally non-encapsulated packets (or, stated another way, into packets having an encapsulation sequence of one) of a type that is supported by each of the IPE cards.
  • the line card then sends the reassembled and formatted packet into the switch fabric (in the form of contiguous fixed length cells) for delivery to the IPE card that was designated by the line card for further processing of that particular packet.
  • although the fixed length cells which comprise a packet are arranged back to back when the packet is delivered to the switch fabric by a line card, the cells may become interleaved with other cells destined for the same IPE card during the course of traversing the switch fabric.
  • Thus, the cell stream provided by the switch fabric to any given IPE card may be an interleaved cell stream.
  • the IPE card will first examine this cell stream "on the fly" (much like the cell stream examination conducted by the line cards, explained above) to ascertain important control information. The IPE card then processes this control information to perform routing look-ups and other mid-network processing functions for each packet delivered thereto.
  • the control information is also used by the IPE card to reassemble each packet, and to format each packet according to the packet's destination interface.
  • the IPE card then sends the reassembled and formatted packet back into the switch fabric in the form of contiguous fixed length cells for delivery to one of the line cards (or for delivery to another IPE card, in the case where additional mid-network processing functions must be performed for the packet in question).
  • although the cells of any given packet may enter the switch fabric in a back to back arrangement, these cells may become interleaved with other cells during the course of traversing the switch fabric.
  • the stream of cells provided by the switch fabric to any given line card may be an interleaved cell stream. Accordingly, a line card will first examine this cell stream "on the fly" to ascertain important control information that will be used primarily to reassemble packets, and to format the reassembled packets for their destination interfaces. Additional processing of outbound packets is also conducted on the line card for PHY scheduling and bandwidth distribution purposes.
  • Fig. 3 illustrates an exemplary line card 380 used in the preferred mid-network processor 300 of the present invention.
  • the line card 380 preferably includes an ingress side (i.e., the left half of Fig. 3) and an egress side (i.e., the right half of Fig. 3).
  • the packets are first provided to a packet inspector chip ("PIC") 400 which converts the packets (which may already be represented by individual cells such as ATM cells) into fixed length cells.
  • the fixed length cells are 64 byte cells that are 8 bytes wide and 8 bytes long.
  • a "cell time,” in the context of cells propagating within the preferred mid-network processor 300, corresponds to 8 clock cycles, as appreciated by those skilled in the art.
  • the PIC 400 then examines the stream of fixed length cells "on the fly" to identify the "classification" (that is, the protocol encapsulation sequence), capture matrix, and other control information for each packet (as described more fully in copending Application No. 09/494,235 filed January 30, 2000 entitled "Device and Method for Packet Inspection," the disclosure of which is incorporated herein by reference). More specifically, the preferred PIC 400 generates a control cell for each examined cell of a packet, and each control cell represents the control information that has been determined thus far for the corresponding packet. Thus, the PIC 400 outputs both the stream of fixed length cells that was produced before this stream was examined "on the fly" therein, as well as corresponding control cells. As shown in Fig. 3, these control and data cells are then provided by the PIC 400 to four preferably identical buffer access controllers ("BACs") 402-408.
  • Each BAC stores a different quarter (i.e., 25%) of the data cells received from the PIC 400 in its corresponding cell buffer ("CB").
  • Each control cell output by the PIC 400 also includes a protocol processing unit ("PPU") identifier which identifies a PPU associated with a particular BAC for processing that control cell.
  • each PPU, in this preferred embodiment, preferably comprises two general purpose central processing units ("CPUs"), as shown in Fig. 3.
  • a PPU could comprise one or more network processors, digital signal processors, or any programmable processors.
  • the BACs 402-408 each examine the PPU identifiers contained in the control cells delivered thereto over a bus by the PIC 400.
  • each control cell output by the PIC 400 is acted on by only one BAC and its associated PPU.
  • the size of the control cell is much smaller than the typical size of a packet. This can significantly increase processor utilization by reducing the required I/O bandwidth, which is the typical limiting factor in processor use.
  • all control cells corresponding to a specific packet are processed by the same BAC PPU on the line card 380.
  • the assignment of a PPU by the PIC 400 for any given packet is performed according to configuration and control information received by the PIC 400 from a master PPU ("MPPU") 410, and can be changed by the MPPU 410 over time as necessary for PPU load balancing on the line card 380.
  • the PIC 400 also keeps track of the available memory addresses in the cell buffers associated with the BACs using a free buffer ("FB") list 412, and also keeps track of where each data cell is stored in the cell buffers with respect to other cells of the same packet using a link list 414.
  • When a control cell is processed within a particular BAC PPU, the PPU produces a new control cell to be provided to a packet manager ("PM") 420 which is in communication with the PIC 400 and the BACs 402-408. Included in this control cell provided to the PM 420 is a dequeue pointer which designates the location of the first cell of a packet that is to be dequeued and sent to the PM 420 along with the second and subsequent cells of that packet (if applicable). The packet manager 420 then forwards this dequeue pointer back to the PIC 400, which, in turn, provides instructions to the BACs 402-408 to dequeue each quarter cell of the designated packet in sequence using the information previously stored by the PIC 400 in the link list 414. Thus, the designated packet is reassembled as it is dequeued and delivered to the packet manager 420.
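The free buffer list / link list bookkeeping and the dequeue-pointer walk described above can be sketched as follows. The class and method names are invented for illustration; only the roles (free list, per-cell link, dequeue from a head pointer) come from the text.

```python
class CellStore:
    """Toy model of the FB list 412 / link list 414 bookkeeping."""

    def __init__(self, num_buffers: int):
        self.free_list = list(range(num_buffers))  # available buffer addresses
        self.next_cell = {}                        # addr -> next addr in packet
        self.cells = {}                            # addr -> stored cell

    def enqueue(self, prev_addr, cell):
        """Store one cell and link it after the packet's previous cell."""
        addr = self.free_list.pop()
        self.cells[addr] = cell
        self.next_cell[addr] = None
        if prev_addr is not None:
            self.next_cell[prev_addr] = addr
        return addr

    def dequeue_packet(self, head_addr) -> bytes:
        """Follow the chain from the dequeue pointer, reassembling the packet
        and returning each buffer to the free list."""
        cells, addr = [], head_addr
        while addr is not None:
            cells.append(self.cells.pop(addr))
            nxt = self.next_cell.pop(addr)
            self.free_list.append(addr)
            addr = nxt
        return b"".join(cells)

store = CellStore(8)
head = store.enqueue(None, b"first-cell|")
store.enqueue(head, b"second-cell")
assert store.dequeue_packet(head) == b"first-cell|second-cell"
```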
  • the packet manager 420 stores the cells of the reassembled packet in its own cell buffer 422 (using a free buffer list 424 and link list 426).
  • the packet manager 420 processes the control information it received for that packet from one of the BAC PPUs and then formats the packet according to this control information by modifying or augmenting the packet header as the cells of the packet are dequeued from the cell buffer 422.
  • This process and additional details of the preferred packet manager 420 are described more fully in copending Application No. 09/494,236 filed January 30, 2000 entitled “Device and Method for Packet Formatting," the disclosure of which is incorporated herein by reference.
  • the packet manager 420 also appends a header to each of the 64 byte cells that constitute the reassembled and formatted packet, and these headers will be used by the switch fabric for routing the cells therethrough.
  • the packet manager 420 then forwards the cells of the packet in sequence to a UDASL 430, which is provided for managing cell traffic into and out of the switch fabric for the line card 380.
  • the UDASL 430 then forwards the packet cells into the switch fabric for delivery to an IPE card that will perform mid-network processing functions for the packet in question.
  • This IPE card is preferably designated by the BAC PPU that prepared and forwarded control information to the packet manager 420.
  • Also provided on the line card 380 is a 9-port Ethernet switch 450, which provides for interprocessor communications between the eight PPUs on the line card 380 (i.e., 4 PPUs on the ingress side and 4 PPUs on the egress side) and the MPPU 410 for purposes of load balancing, hardware monitoring and bandwidth distribution, and for sharing user and configuration information.
  • the bandwidth distribution process and the preferred hardware are described more fully in copending Application No. 09/515,028 filed February 29, 2000 entitled "Method and Device for Distributing Bandwidth," the disclosure of which is incorporated herein by reference.
  • Fig. 4 illustrates an exemplary IPE card 500 used in the preferred mid-network processor 300 of the present invention.
  • the hardware layout of the IPE card 500 is similar to the hardware layout on the ingress side (and the egress side) of the line card 380 shown in Fig. 3. That is, the IPE card 500 is also provided with a UDASL 501 that delivers a typically interleaved cell stream received from the switch fabric to a PIC 502.
  • the present invention provides, amongst other things, an inventive hardware module that can be programmed to perform requisite processing either on the ingress side or the egress side of a line card, or on an IPE card. This contributes to the configurability and scalability of the preferred mid-network processor 300, which can be reconfigured as necessary (both through programming and/or by adding additional line cards and/or IPE cards) to accommodate additional users and/or to provide additional processing power.
  • the PIC 502 provided on the preferred IPE card 500 is used to inspect the stream of fixed length cells provided thereto by the switch fabric "on the fly" to ascertain control information for each packet to be processed on the IPE card. In most cases, this control information was added to the packet by the PM 420 on the ingress side of the line card that forwarded the packet to this particular IPE card.
  • the PIC 502 outputs the stream of data cells to the four BACs 504-510, each of which is configured to store a different quarter of each data cell in its corresponding cell buffer (note that each BAC on the preferred IPE card 500 has two PPUs associated therewith, whereas only one PPU is associated with each BAC on the preferred line card 380).
  • the PIC 502 also outputs control cells to the BACs 504-510, where each control cell contains a PPU identifier that designates one of the two PPUs associated with a particular BAC for processing that control cell on the IPE card to perform mid-network processing functions for the corresponding packet.
  • all control cells corresponding to a specific packet are processed by the same BAC PPU on the IPE card 500.
  • the PPU that processed control information for that packet on the ingress side of the line card is also responsible for determining to which IPE card and, more specifically, to which PPU on a particular IPE card, the packet should be sent for further processing.
  • After a BAC PPU on the IPE card processes the control information for a particular packet, the PPU sends a control cell back to the PM 512, which then cooperates with the PIC 502 to dequeue the quarter cells of that packet in sequence from the cell buffers associated with the BACs 504-510.
  • Upon receiving the constituent cells of a reassembled packet and storing these cells in its own cell buffer 514 (using a link list 516 and a free buffer list 518), the PM 512 processes the control cell received from the BAC PPU to format the reassembled packet according to its destination interface before forwarding the reassembled formatted packet back into the switch fabric for delivery to its destination line card (or another IPE card, in the case where additional processing of the packet is required).
  • Also provided on the IPE card 500 is a 9-port Ethernet switch 550 which, like the Ethernet switch provided on the preferred line card 380, provides for interprocessor communications between the eight PPUs and an MPPU 530 on the IPE card 500 for purposes of load balancing, hardware monitoring and bandwidth distribution, and for sharing user and configuration information.
  • the egress side of the exemplary line card 380 is also provided with a PIC 600, four BACs 602-608, and a PM 610.
  • Upon receiving a possibly interleaved stream of fixed length cells from the switch fabric via the UDASL 430, the PIC 600 examines this cell stream "on the fly" to ascertain control information (including control information that may have been added to the packet header by the PM 512 on an exemplary IPE card 500).
  • the PIC 600 then forwards the data cells to the BACs 602-608 for storage in their corresponding cell buffers, and forwards corresponding control cells for each packet to one of the BAC PPUs (typically assigned by an IPE card BAC PPU that previously processed control information for the same packet) for further processing.
  • the assigned BAC PPU then performs additional packet processing, primarily for traffic shaping, PHY card scheduling and bandwidth distribution on that PHY card.
  • Upon processing the control information received from the PIC 600, this BAC PPU produces and forwards a control cell to the packet manager 610, which, in turn, dequeues the quarter cells of the corresponding packet in sequence from the cell buffers associated with the BACs 602-608 in cooperation with the PIC 600.
  • the PM 610 then stores the constituent cells of the reassembled packet in its own cell buffer 612 (using a link list 614 and a free buffer list 616), and formats the packet for its intended destination before forwarding the reassembled formatted packet to the PHY card associated with this line card for outputting the packet from the mid-network processor 300.
  • CardId: An 8-bit number that uniquely identifies an IPE or Line Card in the system.
  • FlowId: A 10-bit number whose lower (least significant) 8 bits contain a CardId, and whose upper (most significant) 2 bits identify the priority (class) of the traffic sent through the switch fabric to this card using this FlowId. (In the switch fabric, this field is 12 bits, but our implementation only uses the least significant 10 bits.)
  • User: A datalink (layer 2) interface; examples include ATM virtual circuits, PPP sessions (over SONET, Ethernet, or ATM), and MPLS label switched paths.
  • UserId: A 32-bit value that can be used as a system-wide pointer to user configuration and state information. Since multiple cards (one or more IPEs and one Line Card) can store information about a user, it is possible to have multiple UserIds that refer to a single user.
  • The upper (most significant) 8 bits of the value represent the CardId of the card which contains the user information being identified.
  • The next 4 bits represent the PPUID of the PPU on the card where the information is stored, and the lower (least significant) 20 bits represent the CID assigned by that card to the user.
  • The CID is used as an index into the PPU's table of user information.
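The identifier layouts just defined can be expressed directly as bit operations. The field widths below come from the definitions themselves; the helper names are illustrative only.

```python
def make_flow_id(priority: int, card_id: int) -> int:
    """10-bit FlowId: 2-bit priority (class) over an 8-bit CardId."""
    assert 0 <= priority < 4 and 0 <= card_id < 256
    return (priority << 8) | card_id

def make_user_id(card_id: int, ppu_id: int, cid: int) -> int:
    """32-bit UserId: 8-bit CardId, 4-bit PPUID, 20-bit CID."""
    assert 0 <= card_id < 256 and 0 <= ppu_id < 16 and 0 <= cid < (1 << 20)
    return (card_id << 24) | (ppu_id << 20) | cid

def split_user_id(user_id: int) -> tuple[int, int, int]:
    """Recover (CardId, PPUID, CID) from a UserId."""
    return user_id >> 24, (user_id >> 20) & 0xF, user_id & 0xFFFFF

assert split_user_id(make_user_id(0x12, 0x3, 0x45678)) == (0x12, 0x3, 0x45678)
```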
  • LCUserId: A UserId in which the CardId identifies a Line Card.
  • Primary UserId: A UserId in which the CardId and PPUID identify the PPU on an IPE which has the primary responsibility for managing a user.
  • Small User: A user whose ingress packet stream is processed entirely by a single IPE PPU. Small users do not have Secondary UserIds.
  • Large User: A user whose configured bandwidth is too high for his ingress packet stream to be processed by a single IPE PPU. All large users have one or more Secondary UserIds.
  • Logical Link: A group of users of the same type (e.g., a group of ATM Virtual Circuits). If the Logical Link is a group of PPPoE sessions over ATM, the Logical Link must be an ATM Virtual Circuit.
  • CSIX Header: The header of a CSIX (i.e., Common Switch Interface) cell. The CSIX Header is separate from the 64 byte cell payload.
  • Cell Header: The first two bytes of the 64 byte payload of a CSIX cell.
  • PIE Header: The 6 bytes immediately following the Cell Header of the first cell of a packet.

Overview:
  • the server system preferably comprises one or more rack mountable system units (i.e., shelves).
  • the system also contains at least one line card, exactly as many PHY cards as line cards, and at least as many IPE cards as line cards.
  • each shelf of the system contains preferably three switch fabric cards and two flash disk cards.
  • Each line card is uniquely associated with a particular PHY card.
  • Each IPE card can be thought of as an independent router, with one or more IP addresses associated with it.
  • Each Layer 2 (datalink) interface (referred to as a "user") provided by a line card is associated with exactly one IPE card (more specifically, exactly one PPU on one IPE card) . Different users from the same line card can be associated with different PPUs on different IPE cards, and a particular PPU can have users from multiple line cards.
  • In the case of multiple levels of encapsulation (e.g., PPP/Ethernet/ATM), the inner-most levels of encapsulation, each of which is a layer 2 interface (user) in its own right, can be associated with different PPUs within an IPE card, or even with PPUs on different IPE cards, thus causing traffic from the outer levels of encapsulation to be split among multiple PPUs or IPE cards.
  • the outer layers can carry encapsulated layer 3 traffic as well as layer 2 traffic (for example, an Ethernet/ATM virtual circuit can carry IP as well as PPPoE packets). In this case, all the layer 3 traffic will be associated with a single PPU (a user), but the encapsulated layer 2 datalinks (users) can each be associated with a different IPE card.
  • the set of all users on the system is preferably distributed as evenly as possible across all the IPE cards in the system.
  • the MPPU stores the per-user information for the users assigned to that IPE and distributes those users across its PPUs.
  • Each PPU stores a copy of the per-user information assigned to it.
  • each user is associated with one and only one IPE card and one and only one PPU on that IPE.
  • This PPU's copy of the user's configuration and state information can be uniquely identified on a system-wide basis by the Primary UserId.
  • the architecture of this preferred implementation is based on line cards, PHY cards, a switching fabric, internet processing engines (IPE) and flash memory modules, as was described generally above.
  • the line cards terminate the link protocol and distribute the received packets based on user, tunnel or logical link information to a particular IPE through the switching fabric.
  • the procedure of forwarding a packet to a particular IPE and PPU will be denoted as "routed distribution.”
  • a midplane is also used to connect the different cards.
  • the preferred line card and the preferred IPE card were described above with reference to Figs. 3 and 4.
  • the system is comprised of a set of hardware components, as described, which can be used to cost-effectively configure a system for a wide variety of applications and throughput requirements.
  • the preferred switch fabric and scheduler support cell switching at OC-192 speeds, and the switch fabric is both fully redundant and highly scalable.
  • the preferred IPE cards have the following attributes: high performance protocol processing engine; management of users, tunnels and secure segment groups; support for policing and traffic shaping; highly sophisticated QoS with additional support for differentiated services; support for distributed bandwidth management processing; support for distributed logical link management; and the ability to perform NAT, packet filtering and firewall functions.
  • the preferred line cards have the following attributes: packet lookup processing; protocol identification; scheduling; support for distributed bandwidth management processing; multi-I/F support (ATM, GE, POS); and AAL-5 processing (CRC check and generation).
  • the preferred PHY cards have the following attributes: line termination for rates up to OC-192c; ATM layer processing; ATM-SONET mapping; and POS-SONET mapping.
  • the overall system preferably has the following attributes: high availability; 1+1 switch fabric and scheduler redundancy; 1+1 control system unit redundancy; all field replaceable units hot-swappable; N+1 AC power supply redundancy; and N+1 fan redundancy.
  • the purpose of routed distribution is to forward a packet to a particular PPU within an IPE.
  • the key benefits of this approach are: incremental provisioning of compute power per packet; load distribution based on the packet computation needs of a particular user or tunnel; maintenance of user and tunnel configuration information by a single processor, thus minimizing inter-processor communication; and portability of single-processor application software onto the system.
  • Fig. 5 illustrates the distribution of packets to a particular IPE.
  • a packet is received from a line card.
  • the line card examines the packet and forwards the packet based on the IP source or destination address, the user session ID, or the tunnel ID.
  • the IPE receives the packet and hands it over to the PPU specified by the line card, as sketched below.
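A minimal sketch of this routed distribution step is given below. The text states only that forwarding is keyed on the IP source or destination address, user session ID, or tunnel ID; the table contents and key encoding here are invented for illustration.

```python
# distribution key -> (CardId of IPE, PPUID on that IPE); entries invented
distribution_table = {
    ("session", 0x0042): (5, 2),
    ("tunnel", 0x0007): (6, 0),
    ("ip-dst", "192.0.2.9"): (5, 1),
}

def route_packet(key) -> tuple[int, int]:
    """Return the (CardId, PPUID) the line card forwards this packet to."""
    return distribution_table[key]

assert route_packet(("tunnel", 0x0007)) == (6, 0)
```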
  • the line cards and the IPE host the flexible protocol- processing platform.
  • This platform is comprised of a data path processing engine and the already mentioned protocol-processing unit.
  • the separation of data path processing from protocol processing leads to the separation of memory and compute intensive applications from the flexible protocol processing requirements.
  • a clearly defined interface, in the form of dual-port memory modules and data structures containing protocol specific information, allows the deployment of general-purpose CPU modules for supporting the ever changing requirements of packet forwarding based on multiple protocol layers.
  • the protocol-processing platform can be configured for multiple purposes and environments. That is, it supports a variable number of general purpose CPUs which are used in the context of this architecture as Protocol Processing Units (PPU) .
  • One of these CPUs is denoted as the Master Protocol Processing Unit (MPPU) .
  • the data path processing unit extracts, in the packet inspector, all necessary information from the received packets or cells and passes this information on to a selected PPU via one of the buffer access controller devices.
  • the cells themselves are stored in the cell buffer and linked together as linked lists of cells, which form a packet.
  • the packet is then forwarded either as a whole or segmented based on the configured interface.
  • Each PPU is associated with one dual-ported memory, where one port is controlled by the data-path processing unit and the other by the corresponding PPU.
  • Each dual-ported memory contains two ring buffers, where one ring buffer is used to forward protocol specific information from the data path to the PPU and the other is used for the other direction.
  • the ring buffer for passing on protocol specific information to the PPU is called the receive buffer.
  • the other buffer is called the send buffer.
  • Two pointers are maintained for each ring buffer.
  • the write pointer for the receive buffer is maintained by the data path processing unit while the read pointer is set by the PPU.
  • the send buffer's write pointer is controlled by the PPU and the read pointer by the data path processing unit.
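The pointer discipline just described (each side advances only its own pointer) is what makes the ring buffers safe to share across the dual-port memory. A generic sketch, with capacity and payload type as assumptions:

```python
class RingBuffer:
    """One ring buffer: the producer owns write_ptr, the consumer owns
    read_ptr; the buffer is empty when the two pointers are equal."""

    def __init__(self, capacity: int):
        self.slots = [None] * capacity
        self.write_ptr = 0
        self.read_ptr = 0

    def put(self, entry) -> bool:
        nxt = (self.write_ptr + 1) % len(self.slots)
        if nxt == self.read_ptr:
            return False            # full (one slot is sacrificed)
        self.slots[self.write_ptr] = entry
        self.write_ptr = nxt
        return True

    def get(self):
        if self.read_ptr == self.write_ptr:
            return None             # empty
        entry = self.slots[self.read_ptr]
        self.read_ptr = (self.read_ptr + 1) % len(self.slots)
        return entry

# Receive buffer: data path writes, PPU reads (the send buffer is the reverse).
rx = RingBuffer(4)
rx.put("protocol info for packet 1")
assert rx.get() == "protocol info for packet 1"
```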
  • the PHY card terminates the incoming transmission line. It also performs clock recovery and clock synthesis. Optical signals are converted into a parallel electrical signal which is then an input to a physical framer device which maps the incoming bit stream into the transmitted physical frame. Finally the physical layer of the corresponding link protocol processes the physical frames. In addition, link layer protocol processing is performed in order to provide a common packet interface to the line card. On the transmission side, the packets or cells are mapped into physical frames. These frames are then encoded into the corresponding physical layer format and sent over the optical fiber to the receiving peer.
  • the physical layer format is preferably either SONET or Gigabit Ethernet.
  • the link layer format is preferably GE, ATM or PPP for POS.
  • the line card performs packet forwarding for the egress and ingress path. Full duplex 10 Gbit/s throughput is provided.
  • the line card interfaces to the PHY cards and the switch fabric card.
  • the Line Card is preferably configured for either POS-PHY or UTOPIA III interface to the PHY card.
  • the Line Card preferably hosts two Protocol Internet Engine (PIE) chip sets. On the ingress side, one PIE chip set supports four protocol-processing units (PPU) and one MPPU.
  • The four PPUs perform routed distribution to the various IPEs in the system. They also provide traffic shaping and scheduling of flows to the switching fabric. The MPPU is used for overall control and supports the distributed bandwidth allocation protocol of the switching fabric.
  • the Packet Inspector first examines incoming cells or packets, and protocol information is extracted based on matched patterns in the data flow. This information is then made available to the PPU which is responsible for processing the incoming packet.
  • Cells or packets from a PHY card are processed by a particular PPU based on a chosen configuration. This configuration depends upon the configuration of the PHY card itself and upon the protocol supported by the PHY card.
  • the other PIE chip set, processing the egress flow, is preferably responsible for cell assembly from the switch fabric and packet scheduling for multiple physical ports. Additional support for AAL5 processing is provided for ATM flows.
  • the MPPU from the ingress path is shared for configuration, maintenance and cell extraction of the egress flow.
  • the communication channel provides signaling and connection setup control for the ATM PHY card.
  • the PHY card informs the Line Card about the physical layer status and reports alarm and error conditions.
  • the ingress packet processing preferably involves: packet assembly for ATM traffic (AAL5 processing); protocol identification; packet data inspection; routed distribution; scheduling of traffic flows through the switching fabric; buffer management for ingress cell buffers; and cell scheduling for the switch fabric.
  • the egress packet processing preferably involves: traffic shaping; packet assembly for switch fabric flow; MPHY buffering; cell scheduling for ATM with multiple physical interfaces with AAL5 processing (CPCS, SAR); and packet scheduling for POS with multiple physical interfaces.
  • the Internet Processing Engine provides the functionality for protocol processing, user management, tunnel management and secure segmentation. It receives the packets from the switching fabric, enforces the service level agreements (SLAs), performs packet classification, filtering and forwarding, and finally schedules the packet for transmission to the requested interface.
  • the PI is part of the Packet Internet Engine (PIE) chip set, which consists of the Packet Inspector, the Buffer Access Controller, and the Packet Manager. Together with the sixteen PPUs and the MPPU, the PIE chip set provides a powerful Protocol Processing unit.
  • the PIE chip set extracts pertinent protocol information and forwards it to the PPUs and the MPPU based on the routed distribution decision made in the Line Cards.
  • the chosen PPU processes this information and performs all necessary packet processing. This includes, besides forwarding and filtering, policing, and packet formatting.
  • the MPPU controls the IPE and negotiates the bandwidth allocation of the switch fabric with the other units in the system. It also provides bandwidth management for the configured logical links.
  • the MPPU manages its connections by assigning users and tunnels to individual PPUs for forwarding processing from the Line Card to a particular IPE. Once a connection between the MPPU and Line Card is set up, all packets belonging to such a connection are forwarded from the Line Card to the chosen PPU.
  • a PPU is chosen based on the already assigned connections, their bandwidth, and the bandwidth and QoS required for the new connection (a sketch of one such selection policy follows). Connectionless traffic (Internet to Internet) is mapped onto an internal connection. If more bandwidth is needed than one PPU can manage, the packets will be distributed over multiple PPUs.
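One plausible selection policy consistent with the description is sketched below: pick the PPU with the most unallocated bandwidth that still fits the new connection's rate. The capacities, units and tie-breaking are assumptions; the text specifies only the inputs to the decision.

```python
def choose_ppu(allocated: dict[int, float], capacity: float,
               new_rate: float):
    """Return the PPUID with the most headroom that fits new_rate,
    or None if no PPU can accept the connection."""
    candidates = [(capacity - used, ppuid)
                  for ppuid, used in allocated.items()
                  if capacity - used >= new_rate]
    return max(candidates)[1] if candidates else None

allocated = {0: 400.0, 1: 900.0, 2: 100.0}      # Mbit/s in use per PPU
assert choose_ppu(allocated, capacity=1000.0, new_rate=250.0) == 2
```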
  • the functionality of the IPEs includes: User Management; Tunnel Management; Logical Link Management; Support for Secure Segmentation; Policing; QoS Control with DiffServ Support; Buffer Management; IPv4 and IPv6 Forwarding; Packet Classification; and Packet Filtering with support for user programmable filters.
  • Protocol Internet Engine Chip Set (PIE):
  • the Protocol Internet Engine provides the data path processing capabilities for the server system at OC-192c rates.
  • the PIE chip set comprises three chips. These chips, together with an interface controller and multiple general purpose CPUs, result in a very high performance packet processing system.
  • Each cell is preferably transferred into the buffer through four buffer access controllers ("BACs") in order to increase the bandwidth to the PPUs and to increase the bandwidth to the external cell buffers.
  • Different portions of the same cell are written to the cell buffers attached to the different BACs, as sketched below. However, the captured portion of the data is sent to just one of the PPUs.
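This striping of each cell across the four cell buffers can be sketched as follows; the 16-byte quarter size follows from a 64-byte cell divided over 4 BACs, while the contiguous (as opposed to interleaved) split is an assumption.

```python
NUM_BACS = 4
QUARTER = 64 // NUM_BACS  # 16 bytes stored per BAC

def stripe_cell(cell: bytes) -> list[bytes]:
    """Split one 64-byte cell into four quarters, one per BAC."""
    assert len(cell) == 64
    return [cell[i * QUARTER:(i + 1) * QUARTER] for i in range(NUM_BACS)]

def unstripe(quarters: list[bytes]) -> bytes:
    """Reassemble the cell from the four BAC cell buffers."""
    return b"".join(quarters)

cell = bytes(range(64))
assert unstripe(stripe_cell(cell)) == cell
```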
  • the preferred BAC unit is shown in Fig. 8.
  • the RSU receives incoming data, reformats the data to an internal format, performs a parity check for incoming data, and also performs synchronization control.
  • the preferred format of a cell received by the BAC from the packet inspector is shown in Fig. 9.
  • the Cell Filter unit extracts control information from the cell and sends the cell data to the BAU along with the indication of which portion of the cell has to be stored in this cell buffer.
  • the CFU also sends the cell data stream to the PTU, which translates the PPUID to the appropriate PPU, and thence to the CCU where, based on the PPUID and the capture matrix, the control cell is extracted from the data cell and stored in the CBU.
  • the CMU then transmits the control cell to the appropriate PPUs through a dual port RAM interface.
  • the control cell corresponding to the packet is sent by the PPU which processed that user to the PM along with the dequeue pointer. This is received by the BEU of the PM, as shown in Fig. 8.
  • the control cell data stream (shown as the narrow arrow in Fig. 8) then goes to the ICU, where it is stored while the DSU does deficit round robin scheduling of the data packets corresponding to the control packets in order to distribute bandwidth equitably to the BACs for sending out packets (a generic sketch of this discipline follows).
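Deficit round robin itself is a standard discipline; a generic sketch (queue contents and quantum invented) shows the behavior the DSU relies on: each backlogged queue earns a quantum of credit per round and may send while its credit covers the next packet's length.

```python
from collections import deque

def drr_round(queues: list[deque], deficits: list[int], quantum: int):
    """Run one DRR round; each queue holds (packet, length) tuples."""
    sent = []
    for i, q in enumerate(queues):
        if not q:
            deficits[i] = 0              # idle queues accrue no credit
            continue
        deficits[i] += quantum
        while q and q[0][1] <= deficits[i]:
            pkt, length = q.popleft()
            deficits[i] -= length
            sent.append(pkt)
    return sent

queues = [deque([("a", 60), ("b", 60)]), deque([("c", 100)])]
deficits = [0, 0]
assert drr_round(queues, deficits, quantum=100) == ["a", "c"]
```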
  • the dequeue pointer corresponding to the packet to be dequeued is sent to the PIU, from where it is transmitted to the PI, where it is received at the PIU and passed on to the BMU.
  • the dequeue pointers are stored in a FIFO while the previous packets are being dequeued.
  • the dequeue pointer information is passed on to the BACs, and the BAU in the BACs dequeues the packet and passes it through the PMU to the packet manager.
  • a packet is dequeued by dequeuing all the cells comprising the packet which are held in the form of a linked list.
  • Data packets from the data packet stream (shown as the thick arrow in Fig. 8) undergo AAL5 processing (should they need it) in the APU, and are stored in the IDU buffer.
  • the FAU reformats packets into 64 bit slices and controls dequeuing from both the IDU and the DSU's DPRAM in accordance with the PFU.
  • a sequence number is used at the beginning of both the data and the control cells.
  • Both the control and data streams enter the PFU where they are formatted and sent to the TIU to be sent to the PHY cards or the switch fabric.
  • the PIE chip set can be configured for multiple purposes and environments. That is, it supports a variable number of general purpose CPUs which are used, in the context of the PIE chip set, as Protocol Processing Units (PPUs). One of these CPUs is reserved for maintenance and control purposes and is denoted as the MPPU.
  • the PIE chip set implements all necessary functions in order to hide all data path processing from the actual protocol processing functionality.
  • the PIE chip set extracts all necessary information from the received packets or cells and passes this information on to a selected PPU.
  • the cells are then stored in the cell buffer and linked together as linked lists of cells, which form a packet.
  • the packet is then forwarded to the MPHY scheduler as a whole or segmented based on the configured interface.
  • Each PIE chip set is differently configured.
  • the PIE chip set on the IPE supports as many as 8 PPUs and 1 MPPU. The PIE chip set on the ingress side of the Line Card supports 4 PPUs and 1 MPPU, and the chip set on the egress side of the Line Card supports an equal number.
  • the characteristics of the preferred PIE are as follows: Three-Chip Chip Set; Full Data-path Processing in hardware; Support for distributed protocol processing by general purpose CPU modules; Highly scalable compute power per packet (up to 64 PPUs can be supported); Flexible interface support with MPHY scheduling; AAL-5 Processing; SAR Sublayer: Assembly and Segmentation for up to 256K connections; CPCS Sublayer: CRC-32 generation and check, padding control, and length field control; Internal Packet Processing; Checksum computation and check; Length field control; Padding control; Micro-programmable Packet Inspection Engine; Supports any layer packet inspection; Supports byte matched pattern processing; Supports bit matched pattern processing; Results are made available to protocol processing units; Supports extraction of any portion of a packet for protocol processing; IPv4/IPv6 Header Checksum; Congestion Avoidance Support; EPD; PPD; Internal Back-pressure control; Linked List Control; Supports up to 8 million 64 byte cells (initially a million); Links cells together to form a packet.
  • the preferred PIE supports: Packet Classification: based on Layer 3, 4, ... information (any layer); Packet Filtering; User programmable filters; Group filters; Firewall processing; Packet Forwarding; IPv4 Lookup Processing; IPv6 Lookup Processing; Tunnel Forwarding; Buffer Management; Dynamic thresholding on a per user and assigned rate basis; Support for up to 8 million cell buffers (initially a million); Congestion avoidance with Early Packet Discard (EPD), Partial Packet Discard (PPD), Selective Packet Discard; Policing; Per User and Logical Link; Enforcing traffic contracts based on SLAs; Traffic Shaping; Per User and Logical Link; Support for traffic contracts based on SLAs; Support for real-time traffic (low delay traffic); QoS Control; Support for differentiated services; Multiple priorities per user; Flow based queuing (not initially supported); Bandwidth Management; Distributed processing for allocation of bandwidth on ...
  • Traffic Management for an Internet access system is complex due to the involvement of various system interfaces.
  • a system might be connected to users, the Internet backbone, a Local Area Network with file and Web servers, and a Metropolitan Area Network (MAN) which gives access to local TV and media servers as shown in Fig. 11.
  • Each link has different link properties with respect to available bandwidth and cost (dollars per megabyte). This means that a user's share of bandwidth on a particular link has to be based on the properties of this link.
  • a user might get a larger bandwidth share on the MAN link than on the backbone link because more bandwidth is available, at a lower price, on the MAN link than on the backbone link. The same is true for bandwidth wholesaling of the preferred system to multiple ISPs who would like to resell bandwidth to their customers.
  • the enabling technology for this model is Secure Segmentation.
  • a logical link group can be assigned to a secure segment based on the bandwidth needs of the considered secure segment for a particular link, as shown in Fig. 12. This means that not only user allocation but also logical link bandwidth needs to be considered. Therefore, bandwidth is distributed based on traffic class, user, and logical link group. This supports the wholesaling model and takes into account over-subscription requirements in order to support QoS including differentiated services.
  • the preferred system represents a highly distributed system.
  • resources have to be allocated based on the requirements of the traffic of each component. That means in general that each component has to take part in a distributed computation method in order to allocate the resources.
  • the traffic management requirements for bandwidth allocation within the preferred system will have to include bandwidth negotiation for the various flows through the switching fabric.
  • Buffer management and QoS control is an integral part of the overall traffic management scheme implemented in the preferred server system. Due to the large buffers, the system has to maintain, at various places in the distributed system, a sophisticated buffer management scheme, which has to be implemented and supported by QoS control in order to support differentiated services and other traffic flow specific requirements.

Policing - Traffic Shaping:
  • Policing and Traffic Shaping have closely related functionality. Policing ensures that the incoming stream conforms to the negotiated link parameters for a logical link group as well as for the user of the incoming link. Traffic shaping enforces the link parameters for the outgoing traffic stream based on the outgoing user, the logical link group and the link itself. Fig. 14 is intended to illustrate the need for policing as well as traffic shaping. An incoming traffic stream is shaped (policed) in order to enforce the traffic contracts of a user for the considered link and logical link. Before the traffic is forwarded to another link, the traffic contracts for this particular link have to be enforced. This traffic contract might be much different from the traffic contract of the incoming stream. Consider the case where a user requests information over the Internet backbone link.
  • the bandwidth allocated on this link for this user might be 500 Kbit/s.
• the logical link bandwidth for the corresponding secure segment might be set to 10 Mbit/s. If the user's access link to the system uses an ATM connection with an assigned rate of 1 Mbit/s and no policing is enforced, the user could use the full 1 Mbit/s. This is possible because the traffic shaped onto the user's ATM link allows the user to transmit at the higher rate. Therefore, two mechanisms are necessary: one to police the incoming traffic and another to shape the traffic for a particular outgoing link.
• Fig. 15 shows the schematic implementation of the policer and traffic shaper in an IPE within the preferred server system. A received cell is assigned to a particular user data structure associated with the incoming link for the considered user.
• the policing information can be directly obtained from the user who is sending a packet based on the connection identifier, the corresponding session ID, or the IP source address. However, if the packet on the incoming connection cannot be directly associated with a user or logical link group, then the packet is classified to determine the user and/or logical link group for which it is destined. Based on the obtained user and logical link information, the incoming traffic stream is policed by queuing up the packets and enforcing the negotiated traffic contract.
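The text does not name a specific policing algorithm; a token bucket is one conventional way to enforce a negotiated traffic contract (sustained rate plus burst tolerance) per user or logical link. The sketch below is illustrative only, with hypothetical structure and parameter names:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-user/per-logical-link policer state. A token bucket
 * enforces a negotiated contract: a sustained rate plus a burst allowance. */
typedef struct {
    double rate_bytes_per_sec;  /* negotiated sustained rate (SLA) */
    double bucket_depth;        /* maximum burst size in bytes     */
    double tokens;              /* current token count             */
    double last_update;         /* timestamp of last refill (sec)  */
} policer_t;

/* Returns true if the packet conforms to the contract and may proceed to
 * shaping; false if it should be queued or discarded. */
static bool police_packet(policer_t *p, uint32_t pkt_bytes, double now)
{
    /* Refill tokens for the elapsed interval, capped at the bucket depth. */
    p->tokens += (now - p->last_update) * p->rate_bytes_per_sec;
    if (p->tokens > p->bucket_depth)
        p->tokens = p->bucket_depth;
    p->last_update = now;

    if (p->tokens >= pkt_bytes) {   /* conforming packet */
        p->tokens -= pkt_bytes;
        return true;
    }
    return false;                   /* non-conforming */
}

int main(void)
{
    policer_t p = { 500e3 / 8, 1500.0, 1500.0, 0.0 }; /* 500 kbit/s, one-MTU burst */
    return police_packet(&p, 1000, 0.001) ? 0 : 1;
}
```

Shaping can reuse the same bucket state; instead of discarding a non-conforming packet, the shaper delays it until enough tokens accumulate.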
• If the packet conforms to the incoming link requirements, the packet is shaped based on the user parameters and logical link parameters for the outgoing link. These parameters are obtained from the user connection itself if a session ID can be associated with it. If the packet comes from a user and is forwarded across the Internet to a remote terminal, then the shaping parameters are obtained from the sending user for the corresponding link and the associated logical link group. For connectionless traffic, which cannot be directly associated with users, a logical link group can be assigned based on the IP destination address and/or source address. This allows traffic flows between networks to be managed.
Switch Fabric Bandwidth Management and Scheduling:
• Attached hereto as Exhibit A are details of the manner in which the preferred server system is programmed so as to minimize inter-IPE card communications.
• Line Cards do not perform any traffic policing. Policing is performed, in distributed fashion, by all the IPE cards in the system. If, during testing, it is determined that the Line Cards have enough processor and I/O bandwidth to perform policing, this function might be moved to the Line Cards in a future version of the software. Also, Line Cards do not perform any routing table lookups.
  • One operation that must be performed is determining if the destination IP address is one of the IP addresses of our system. This can be done using a simple hash table. A full CIDR routing search is not necessary, since we are only looking for an exact match.
• the result of the lookup (if successful) is the Cardld of the IPE that the address belongs to. If a match is found and the Cardld is equal to the Cardld of the IPE that the packet is about to be forwarded to, the packet must be forwarded with the Destination PPU bit set. This is so that when the packet is received, the PI can select the packet to be captured in its entirety (as long as it is not part of a non-encrypted tunnel).
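As a rough illustration of why an exact-match lookup suffices here, the sketch below checks whether a destination address is one of the system's own addresses using a simple hash table and returns the owning Cardld; no longest-prefix CIDR search is involved. Table size, hash function, and names are hypothetical:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical hash table mapping the system's own IPv4 addresses to the
 * Cardld of the IPE owning each address. Exact match only; open addressing
 * with linear probing, with 0 marking an empty slot. */
#define TABLE_SIZE 1024           /* power of two for cheap masking */

typedef struct { uint32_t addr; int card_id; } entry_t;
static entry_t table[TABLE_SIZE];

static uint32_t hash_addr(uint32_t a)   /* simple multiplicative hash */
{
    return (a * 2654435761u) & (TABLE_SIZE - 1);
}

static void insert(uint32_t addr, int card_id)
{
    uint32_t i = hash_addr(addr);
    while (table[i].addr != 0)
        i = (i + 1) & (TABLE_SIZE - 1);
    table[i].addr = addr;
    table[i].card_id = card_id;
}

/* Returns the owning Cardld, or -1 if the address is not local. */
static int lookup(uint32_t addr)
{
    uint32_t i = hash_addr(addr);
    while (table[i].addr != 0) {
        if (table[i].addr == addr)
            return table[i].card_id;
        i = (i + 1) & (TABLE_SIZE - 1);
    }
    return -1;
}

int main(void)
{
    insert(0x0A000001, 3);                  /* 10.0.0.1 owned by IPE card 3 */
    printf("card=%d\n", lookup(0x0A000001)); /* hit: 3  */
    printf("card=%d\n", lookup(0x0A000002)); /* miss: -1 */
    return 0;
}
```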
• the Userld should be determined based on the IPsec Security Parameter Index (SPI) rather than on the hash of the source and destination IP addresses in the IP header of the packet.
  • the following information is sent to the IPE PPU along with the packet payload:
• Checksums: IP checksum, AAL5 CRC, internal parity.
• Initial PID: This 3 bit field tells the IPE the type of encapsulation this packet has. The choices are: IPC (for inter-processor communications), IP, PPP, Ethernet, ATM, or MPLS.
• Initial Stage: This 4 bit field can be used to give additional information to the IPE about the encapsulation of this packet. It specifies which stage in the IPE PI will be the first to inspect the packet.
• Destination PPU bit: This bit is set for IP packets whose destination address is equal to one of the IP addresses of the IPE card that the packet is being sent to.
• Destination PPUID: the PPU identifier of the IPE PPU that the packet is being sent to.
• the PI uses the VPI/VCI and PHYID to calculate the LC CID, which is used by the PI as an index into the hardware connection table.
• the PI reads (amongst other things) a LC PPUID which selects the LC PPU that the control information for the packet should be sent to.
  • the LC CID is also used by the LC PPU as an index into a software connection table.
• this connection table is used to determine the Userld (which consists of a Destination Cardld, Destination PPUID, and IPE CID) that is sent to the IPE in the PIE Header of the packet.
• a determination of priority is made based on the protocols found in the packet. Alternatively, the priority could be read from the connection table. This priority is used to determine the two most significant bits of the Destination Flowld when the packet is forwarded to an IPE.
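A compact sketch of the lookup chain just described: VPI/VCI plus PHYID yields the LC CID, the hardware connection table yields the LC PPUID, the software connection table yields the Userld and priority, and the priority lands in the two most significant bits of the Destination Flowld. All bit widths and packings below are illustrative; the patent does not specify them:

```c
#include <stdint.h>

/* All widths and packings are illustrative; the patent describes the lookup
 * chain but not the exact encodings. */
typedef struct { uint8_t card_id; uint8_t ppu_id; uint32_t ipe_cid; } userid_t;

typedef struct { uint8_t lc_ppuid; } hw_entry_t;                /* hardware connection table */
typedef struct { userid_t user; uint8_t priority; } sw_entry_t; /* software connection table */

#define MAX_CONN 4096
static hw_entry_t hw_table[MAX_CONN];
static sw_entry_t sw_table[MAX_CONN];

/* Derive the LC CID from the receiving PHY and the cell's VPI/VCI. */
static uint32_t lc_cid(uint8_t phy_id, uint8_t vpi, uint16_t vci)
{
    return ((uint32_t)phy_id << 24 | (uint32_t)vpi << 16 | vci) % MAX_CONN;
}

/* Destination Flowld, assuming a 10-bit field whose two most significant
 * bits carry the priority class and whose low bits name the IPE card. */
static uint16_t dest_flowid(uint8_t priority, uint8_t ipe_card)
{
    return (uint16_t)((priority & 0x3) << 8 | ipe_card);
}

int main(void)
{
    uint32_t cid = lc_cid(2, 0, 42);
    hw_table[cid].lc_ppuid = 3;          /* PI: which LC PPU gets the control info */
    sw_table[cid] = (sw_entry_t){ { 5, 1, 7 }, 2 }; /* Userld + priority class 2 */
    uint16_t flow = dest_flowid(sw_table[cid].priority,
                                sw_table[cid].user.card_id);
    return flow == ((2u << 8) | 5) ? 0 : 1;
}
```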
  • the ATM cell headers and the AAL5 trailer and padding are removed (by the PM) before forwarding the packet.
  • the IP packet is forwarded to the IPE.
  • the IPE CID is determined by reading the software connection table.
  • Each ATM LC must have a standard globally unique Ethernet MAC address permanently assigned to it.
  • Each Ethernet/ATM VC should be configurable as to whether or not it is in "promiscuous" mode - that is, whether or not it should discard unicast packets not sent to its MAC address.
• PPPoE Session packets are identified by Ethernet protocol type 0x8864.
  • the IPE CID is determined by reading the software connection table, and the Initial PID is set to indicate an Ethernet packet.
  • the PPPoE header is removed, and the PPPoE Session ID (from the PPPoE header) is used to index into a PPPoE session table, from which the IPE CID can be retrieved.
  • the Initial PID is set to indicate a PPP packet.
• If the PPP/PPPoE protocol type is IP, the PPP header is also removed before the packet is forwarded to the IPE, and the Initial PID is set to indicate an IP packet.
• If the PPP protocol type is IP, the PPP header is removed and the Initial PID is set to indicate IP; otherwise, the PPP header is kept and the Initial PID is set to indicate PPP.
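A sketch of the PPPoE Session decapsulation described above. The Ethernet, PPPoE, and PPP header offsets and the EtherType 0x8864 and PPP protocol value 0x0021 (IPv4) are standard; the session table and return convention are hypothetical:

```c
#include <stdint.h>
#include <stddef.h>

/* Real header layouts: Ethernet (14 bytes, EtherType 0x8864 = PPPoE Session),
 * PPPoE (6 bytes, Session ID at offset 2), PPP protocol field (2 bytes,
 * 0x0021 = IPv4). The IPE CID table is hypothetical; the text keeps one
 * session table per PHY in the real design. */
#define ETH_HDR   14
#define PPPOE_HDR 6
#define MAX_SESSIONS 65536

static uint32_t session_to_ipe_cid[MAX_SESSIONS];

/* Strips Ethernet+PPPoE (and PPP, when it carries IP) headers. Returns the
 * new payload offset, or -1 if this is not a PPPoE Session frame. */
static int decap_pppoe(const uint8_t *pkt, size_t len, uint32_t *ipe_cid, int *is_ip)
{
    if (len < ETH_HDR + PPPOE_HDR + 2)
        return -1;
    uint16_t ethertype = (uint16_t)(pkt[12] << 8 | pkt[13]);
    if (ethertype != 0x8864)                   /* PPPoE Session stage */
        return -1;
    uint16_t session_id = (uint16_t)(pkt[ETH_HDR + 2] << 8 | pkt[ETH_HDR + 3]);
    *ipe_cid = session_to_ipe_cid[session_id]; /* session table lookup */
    uint16_t ppp_proto = (uint16_t)(pkt[ETH_HDR + PPPOE_HDR] << 8 |
                                    pkt[ETH_HDR + PPPOE_HDR + 1]);
    *is_ip = (ppp_proto == 0x0021);            /* IPv4 over PPP */
    /* If IP, the PPP protocol field is removed too (Initial PID = IP);
     * otherwise it is kept (Initial PID = PPP). */
    return ETH_HDR + PPPOE_HDR + (*is_ip ? 2 : 0);
}

int main(void)
{
    uint8_t frame[64] = {0};
    frame[12] = 0x88; frame[13] = 0x64;               /* PPPoE Session */
    frame[ETH_HDR + 3] = 7;                           /* Session ID 7  */
    frame[ETH_HDR + PPPOE_HDR + 1] = 0x21;            /* PPP proto IP  */
    session_to_ipe_cid[7] = 1234;
    uint32_t cid; int is_ip;
    return decap_pppoe(frame, sizeof frame, &cid, &is_ip) > 0 ? 0 : 1;
}
```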
• the IPE CID is determined by reading the software connection table.
• the top of stack shim label (in the AAL5 PDU) is replaced with the VPI/VCI of the virtual circuit.
• the VPI/VCI can be deduced from the LC CID.
  • the IPE CID is determined by reading the software connection table.
• the PI DFU control registers can be programmed (by the MPPU) with the LC CIDs of up to 4 large virtual circuits. For these circuits, if the packet contains an IP header, the PI DFU will replace the LC PPUID read from the hardware connection table with a LC PPUID read from a hash table which is indexed by a hash of the source and destination IP addresses of the packet (calculated by the PI DFU).
• any of the entries (circuits) in the software VC connection table can be marked for distribution across multiple IPE PPUs. These are known as large users, and need not be the same virtual circuits that are distributed by the DFU as explained above. For these circuits, if the packet contains an IP header, a new hash is calculated over the source and destination IP addresses of the packet and used to select one of several Userlds (Destination Cardld, Destination PPUID, and IPE CID) that are sent to the IPE in the PIE Header of the packet.
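A minimal sketch of the large-user distribution just described: hashing the source and destination IP addresses to pick one of several preconfigured Userlds. The hash function itself is illustrative; the patent does not specify one:

```c
#include <stdint.h>

/* For "large users", one of several Userlds is selected by hashing the
 * source and destination IP addresses, spreading the user's aggregate load
 * across IPE PPUs. The mixing constants below are illustrative. */
typedef struct { uint8_t card_id; uint8_t ppu_id; uint32_t ipe_cid; } userid_t;

static uint32_t flow_hash(uint32_t src_ip, uint32_t dst_ip)
{
    uint32_t h = src_ip ^ dst_ip;
    h ^= h >> 16;
    h *= 0x45d9f3b;     /* illustrative mixing constant */
    h ^= h >> 16;
    return h;
}

static const userid_t *select_userid(const userid_t *ids, unsigned n,
                                     uint32_t src_ip, uint32_t dst_ip)
{
    return &ids[flow_hash(src_ip, dst_ip) % n];
}

int main(void)
{
    userid_t ids[4] = { {1,0,10}, {1,1,11}, {2,0,12}, {2,1,13} };
    const userid_t *u = select_userid(ids, 4, 0x0A000001, 0xC0A80001);
    return u->card_id ? 0 : 1;
}
```

Because the same address pair always hashes to the same Userld, each flow stays pinned to a single PPU, so per-flow ordering and state remain local while the aggregate is distributed.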
  • the Userld is selected using a different means, as described in the IPsec protocol processing section below.
• the following information is sent to the IPE PPU along with the packet payload:
• In the CSIX Header:
• Destination Flowld: Sent in the CSIX Header of every cell of the packet to identify where the switch fabric should send it as well as the priority (class) of the packet.
• Checksums: IP checksum, internal parity.
• Initial PID: This 3 bit field tells the IPE the type of encapsulation this packet has. The choices are: IPC, IP, PPP, or MPLS.
• Initial Stage: This 4 bit field can be used to give additional information to the IPE about the encapsulation of this packet. It specifies which stage in the IPE PI will be the first to inspect the packet.
• Destination PPU bit: This bit is set for IP packets whose destination address is equal to one of the IP addresses of the IPE card that the packet is being sent to.
  • each PPP/SONET PHY comprises a single user.
  • each MPLS Label Switched Path represents an additional user.
  • the LC CID is simply the PHYID.
• the PI DFU control registers can be programmed (by the MPPU) with the LC CIDs of up to 4 PHYs. For these PHYs, if the packet contains an IP header, the PI DFU will replace the LC PPUID read from the hardware connection table with a LC PPUID read from a hash table which is indexed by a hash of the source and destination IP addresses of the packet (calculated by the PI DFU). This capability of the PI DFU must be used for OC-192c and OC-48c PHYs in order to distribute the load over multiple LC PPUs. For OC-12c and smaller PHYs, the PI DFU need not be used. Instead, the PI uses the LC CID as an index into the hardware connection table. The PI reads (amongst other things) a LC PPUID which selects the LC PPU that the control information for the packet should be sent to.
• a determination of priority (one of four classes) is made. This priority is used to determine the two most significant bits of the Destination Flowld when the packet is forwarded to an IPE.
  • the LC PPU uses the LC CID (which is really just the PHYID) as an index into a software PHY table.
• This table provides the Primary Userld, which determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet.
  • the Initial PID is set to indicate a PPP packet.
• the LC PPU uses the LC CID (which is really just the PHYID) to index into and read from the software PHY table. From this the LC determines whether this is a small user or a large user. For small users, the Primary Userld is also read from the PHY table. This determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet.
  • a hash is calculated over the source and destination IP addresses of the packet and used to select either the Primary UserlD or one of several Secondary UserlDs.
• the selected UserlD determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet.
• the PPP header is removed before the packet is forwarded to the IPE, and the Initial PID is set to indicate an IP packet.
• the LC CID only identifies the PHYID. Therefore, when the LC PI identifies an MPLS packet, the top of stack label must be captured in order to identify the user. For each POS PHY, the LC PPU must maintain a table of MPLS LSPs. The LC CID selects which table, and the top of stack label is used to index into the table. For small users, the Primary UserlD that corresponds to the LSP can then be read from the table. For large users, however, a similar process to the one described above for IP is used. A hash is calculated over the source and destination IP addresses of the packet and used to select either the Primary UserlD or one of several Secondary UserlDs. The selected UserlD determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet.
• the PPP header is removed before the packet is forwarded to the IPE, and the Initial PID is set to indicate an MPLS packet.
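A sketch of the per-PHY LSP table lookup described above. The 20-bit label position in the MPLS shim word is standard; the table layout and userid_t fields are hypothetical (a real table over a 20-bit label space would be hashed or otherwise sparse):

```c
#include <stdint.h>

/* The MPLS shim word carries the label in its top 20 bits. One LSP table is
 * kept per PHY; the LC CID (the PHYID) selects the table and the label
 * indexes into it. Sizes and fields are illustrative. */
typedef struct { uint8_t card_id; uint8_t ppu_id; uint32_t ipe_cid; } userid_t;

#define LSP_TABLE_SIZE 1024

typedef struct { userid_t primary; int large_user; } lsp_entry_t;
static lsp_entry_t lsp_table[8][LSP_TABLE_SIZE];   /* one table per PHY */

static uint32_t mpls_label(const uint8_t *shim)    /* top-of-stack shim word */
{
    uint32_t w = (uint32_t)shim[0] << 24 | (uint32_t)shim[1] << 16 |
                 (uint32_t)shim[2] << 8  | shim[3];
    return w >> 12;                                /* 20-bit label */
}

static const lsp_entry_t *lookup_lsp(uint8_t phy_id, const uint8_t *shim)
{
    return &lsp_table[phy_id][mpls_label(shim) % LSP_TABLE_SIZE];
}

int main(void)
{
    uint8_t shim[4] = { 0x00, 0x01, 0x21, 0x40 };  /* label = 0x12 (18) */
    lsp_table[0][18].primary = (userid_t){ 2, 1, 77 };
    return lookup_lsp(0, shim)->primary.ipe_cid == 77 ? 0 : 1;
}
```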
• Initial PID: This 3 bit field tells the IPE the type of encapsulation this packet has. The choices are: IPC, Ethernet, PPP, IP, or MPLS.
• Initial Stage: This 4 bit field can be used to give additional information to the IPE about the encapsulation of this packet. It specifies which stage in the IPE PI will be the first to inspect the packet.
• Destination PPU bit: This bit is set for IP packets whose destination address is equal to one of the IP addresses of the IPE card that the packet is being sent to.
• Destination PPUID: the PPU identifier of the IPE PPU that the packet is being sent to.
  • each PHY comprises a single user.
  • each MPLS Label Switched Path (LSP) or PPPoE session represents an additional user.
• the LC CID is simply the PHYID.
• the PI DFU control registers can be programmed (by the MPPU) with the LC CIDs of up to 4 PHYs. For these PHYs, if the packet contains an IP header, the PI DFU will replace the LC PPUID read from the hardware connection table with a LC PPUID read from a hash table which is indexed by a hash of the source and destination IP addresses of the packet (calculated by the PI DFU). This capability of the PI DFU must be used for 10 Gigabit Ethernet Cards in order to distribute the load over multiple LC PPUs. For 1 Gigabit and smaller PHYs, the PI DFU need not be used.
• the PI uses the LC CID as an index into the hardware connection table.
• the PI reads (amongst other things) a LC PPUID which selects the LC PPU that the control information for the packet should be sent to.
• a determination of priority (one of four classes) is made. This priority is used to determine the two most significant bits of the Destination Flowld when the packet is forwarded to an IPE.
• the LC PPU uses the LC CID (which is really just the PHYID) to index into and read from the software PHY table. From this the LC determines whether this is a small user or a large user. For small users, the Primary Userld is also read from the PHY table. This determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet.
• a hash is calculated over the source and destination IP addresses of the packet and used to select either the Primary UserlD or one of several Secondary UserlDs.
• the selected UserlD determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet.
• the packet is forwarded to the IPE with the Ethernet MAC header intact, and the Initial PID is set to indicate an Ethernet packet.
1.4.2 PPPoE Session
• For PPPoE Session packets, the Ethernet and PPPoE headers are removed, and the PPPoE Session ID (from the PPPoE header) is used to index into a PPPoE session table, from which the Userld (IPE Cardld, IPE PPUID and IPE CID) can be retrieved.
  • a unique PPPoE Session table can be maintained for each PHY, and the LC CID can be used to select which session table to use.
• If the PPP protocol type is IP, the PPP header is also removed and the Initial PID is set to indicate IP; otherwise, the PPP header is kept and the Initial PID is set to indicate PPP.
• the LC CID only identifies the PHYID that the packet was received on. Therefore, when the LC PI identifies an MPLS packet, the top of stack label must be captured in order to identify the user. For each Ethernet PHY, the LC PPU must maintain a table of MPLS LSPs. The LC CID selects which table, and the top of stack label is used to index into the table. For small MPLS users, the Primary UserlD that corresponds to the LSP can then be read from the table. For large users, however, a similar process to the one described above for IP is used. A hash is calculated over the source and destination IP addresses of the packet and used to select either the Primary UserlD or one of several Secondary UserlDs. The selected UserlD determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet.
• the Ethernet header is removed before the packet is forwarded to the IPE, and the Initial PID is set to indicate an MPLS packet.
• For Ethernet protocols other than IP, MPLS, and PPPoE Session:
• the LC PPU uses the LC CID (which is really just the PHYID) as an index into a software PHY table.
  • This table provides the Primary Userld, which determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet.
• these packets are sent to the IPE PPU identified by the Primary Userld. No distribution is performed for these packets.
• the Initial PID is set to indicate an Ethernet packet.
  • Line cards perform all the traffic shaping for the system.
• Source Flowld: Sent in the Cell Header to allow the LC to reassemble the packet. This is simply the identification of the IPE card (in the least significant 8 bits) and the priority (class) in the most significant two bits.
  • the priority MUST be the same as is specified in the Destination Flowld of this packet.
• Initial PID: This 3 bit field tells the LC the type of encapsulation this packet has. The choices are: IPC (for inter-processor communications), IP, Ethernet, PPP, or MPLS.
• Destination PPUID: the PPU identifier of the LC PPU that the packet is being sent to.
• the Destination PPUID selects the LC PPU that will process the packet.
• the LC CID is used by the LC PPU as an index into a software connection table. This connection table provides the shaping parameters, any additional encapsulation that must be added by the LC, the PHYID, and the ATM VPI/VCI for the packet.
  • the priority (one of four classes) is based on the two most significant bits of the Source Flowld in the Cell Header. The priority is used by the Traffic Shaper and the Scheduler to determine when to forward the packet to the PHY.
  • the ATM cell headers and the AAL5 trailer and padding are always added (by the PM) before forwarding the packet to the PHY card.
• the desired encapsulation for the packet can be either IP/PPP/PPPoE/Ethernet/ATM, IP/PPP/ATM, or IP/ATM.
• the PPU can determine which it is from the connection table. If the encapsulation should be IP/PPP/PPPoE/Ethernet/ATM, the connection table will provide the necessary information to add the missing headers. If the encapsulation should be IP/PPP/ATM, a PPP header identifying the protocol as IP is added. Also, the entry in the connection table may specify that an LLC header should also be added.
• If the encapsulation should be IP/ATM, the connection table may specify that an LLC header should be added to the beginning of the packet. Otherwise the packet is sent as is.
  • the desired encapsulation may be either PPP/PPPoE/Ethernet/ATM or PPP/ATM.
  • the PPU can determine which it is from the connection table. If it is PPP/ATM, the packet is sent as is, otherwise, the connection table will provide the necessary information to add a PPPoE header and an Ethernet Header.
• the VPI/VCI is obtained from the connection table.
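As a rough sketch of the egress encapsulation choices above, the function below computes how many header bytes the LC must prepend for each case, using common header sizes (2-byte PPP protocol field, 6-byte PPPoE header, 14-byte Ethernet header, 8-byte LLC/SNAP). The enum and structure are hypothetical:

```c
#include <stdint.h>

/* The connection table entry names the header stack that must surround the
 * payload before ATM segmentation; the fields kept here mirror what the
 * text says the table provides (encapsulation type, optional LLC flag,
 * destination MAC, PPPoE session ID). All names are illustrative. */
enum encap { ENC_IP_ATM, ENC_IP_PPP_ATM, ENC_IP_PPP_PPPOE_ETH_ATM };

typedef struct {
    enum encap encap;
    int add_llc;                /* table may request an LLC header */
    uint8_t dst_mac[6];         /* needed only for the Ethernet variant */
    uint16_t pppoe_session_id;
} conn_entry_t;

/* Returns the number of header bytes that will precede the IP payload. */
static int egress_header_bytes(const conn_entry_t *c)
{
    switch (c->encap) {
    case ENC_IP_ATM:
        return c->add_llc ? 8 : 0;        /* LLC/SNAP is 8 bytes when present */
    case ENC_IP_PPP_ATM:
        return 2 + (c->add_llc ? 8 : 0);  /* PPP protocol field marks IP */
    case ENC_IP_PPP_PPPOE_ETH_ATM:
        return 2 + 6 + 14;                /* PPP + PPPoE + Ethernet */
    }
    return 0;
}

int main(void)
{
    conn_entry_t c = { ENC_IP_PPP_PPPOE_ETH_ATM, 0, {0}, 7 };
    return egress_header_bytes(&c) == 22 ? 0 : 1;
}
```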
• The following information is received from the IPE PPU along with the packet payload:
• In the CSIX Header:
• Destination Flowld:
• Initial PID: This 3 bit field tells the LC the type of encapsulation this packet has. The choices are: IPC (for inter-processor communications), IP, PPP, or MPLS.
• Destination PPUID: the PPU identifier of the LC PPU that the packet is being sent to.
• the Destination PPUID selects the LC PPU that will process the packet.
• the LC CID is used by the LC PPU as an index into a software connection table. This connection table provides the shaping parameters, and the PHYID for the packet.
  • a PPP header identifying the packet as an IP packet is added.
• a PPP header identifying the packet as an MPLS packet is added.
• Initial PID: This 3 bit field tells the LC the type of encapsulation this packet has. The choices are: IPC (for inter-processor communications), Ethernet, PPP, IP, or MPLS.
• Destination PPUID: the PPU identifier of the LC PPU that the packet is being sent to.
  • the Destination PPUID selects the LC PPU that will process the packet.
• the LC CID is used by the LC PPU as an index into a software connection table. This connection table provides the shaping parameters, and the PHYID for the packet.
• IP/Ethernet packets are sent using this type because the IPE, not the LC, implements ARP, and therefore adds the Ethernet header to all IP packets before sending them to the LC.
• If the desired encapsulation is PPP/PPPoE/Ethernet, the connection table provides the necessary information to add a PPPoE header and an Ethernet header.
• If the desired encapsulation is IP/PPP/PPPoE/Ethernet, a PPP header indicating an IP packet is added; the connection table then provides the necessary information to add a PPPoE header and an Ethernet header.
• the connection table provides the information needed to add an Ethernet header (the destination MAC address is all that is required from the connection table).
• All packets received by an IPE card from the Line Cards (or from other IPEs) will be of one of the following types.
• the Initial PID field in the PIE Header will identify which one of these types each packet corresponds to. If there are more than 8 such types, the Initial Stage field in the PIE Header can be used to select a different stage to begin inspection, each of which allows 8 additional protocols to be identified by the Initial PID field.
• the IPE CID and PPUID in the PIE Header of the received packet combine with the Flowld to give the Userld. Only the least significant 18 of the 20 bits of the IPE CID are used.
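A small sketch of the Userld recovery just described, masking the 20-bit IPE CID down to its least significant 18 bits and combining it with the PPUID. The packing of the combined value (and how the Flowld folds in) is illustrative; the text does not spell it out:

```c
#include <stdint.h>

/* Field widths follow the text: a 20-bit IPE CID (only the low 18 bits
 * used), a 4-bit PPUID, and the Flowld from the CSIX cell header. The
 * packing of the final Userld value is illustrative only. */
typedef struct {
    uint32_t ipe_cid;   /* 20-bit field in the PIE Header */
    uint8_t  ppuid;     /* 4-bit PPU identifier */
    uint16_t flowid;    /* from the CSIX cell header (identifies the card) */
} pie_info_t;

static uint32_t make_userid(const pie_info_t *p)
{
    uint32_t cid18 = p->ipe_cid & 0x3FFFF;           /* low 18 of 20 bits */
    return (uint32_t)(p->ppuid & 0xF) << 18 | cid18; /* illustrative packing */
}

int main(void)
{
    pie_info_t p = { .ipe_cid = 0xFFFFF, .ppuid = 0x5, .flowid = 0 };
    return make_userid(&p) == ((0x5u << 18) | 0x3FFFF) ? 0 : 1;
}
```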
1.3.1.1 IPC
• the PI should be programmed to capture these packets to a PPU (as specified in the Destination PPU field in the PIE Header) in their entirety.
• The possible encapsulations are: IP/ATM; IP/PPP/ATM; IP/PPP/SONET; IP/PPP/PPPoE/Ethernet; IP/PPP/PPPoE/Ethernet/ATM.
• the IPE CID uniquely identifies the PPPoE Session ID, or the ATM Virtual Circuit that the packet was received on, as well as the PHY/LC that it was received on.
• In the case of IP/PPP/SONET, the IPE CID will identify only the PHY/LC that the packet was received on, that is, it will be constant for all IP/PPP/SONET packets received from a particular PHY/LC.
• This category consists of all PPP packets received whose PPP protocol type was not IP or MPLS. These packets can come from a POS LC, an ATM LC, or an Ethernet LC. For those PPP sessions that will be tunneled using L2TP, the IPE must add a new PPP header to the IP/PPP and MPLS/PPP packets, since for those protocols, the PPP header will have been removed by the Line Card.
• The possible encapsulations are: PPP/SONET; PPP/ATM; PPP/PPPoE/Ethernet; PPP/PPPoE/Ethernet/ATM.
• the IPE CID uniquely identifies the PPPoE Session ID, or the ATM Virtual Circuit that the packet was received on, as well as the PHY/LC that it was received on. In the case of PPP/SONET, the IPE CID will identify only the PHY/LC that the packet was received on, that is, it will be constant for all PPP/SONET packets received from a particular PHY/LC.
• The possible encapsulations are: ARP/Ethernet; IP/Ethernet; PPPoE Discovery/Ethernet; ARP/Ethernet/ATM; IP/Ethernet/ATM; PPPoE Discovery/Ethernet/ATM.
• For Ethernet/ATM, the IPE CID uniquely identifies the ATM Virtual Circuit that the packet was received on as well as the PHY/LC that it was received on. In the case of Native Ethernet, the IPE CID will identify only the PHY/LC that the packet was received on, that is, it will be constant for all packets received from a particular PHY/LC.
  • This category consists of packets which begin with an MPLS label stack. These can come from a POS LC, an ATM LC or an Ethernet LC.
• The possible encapsulations are: MPLS/PPP/SONET; MPLS/Ethernet; MPLS/ATM; MPLS/PPP/PPPoE/Ethernet; MPLS/PPP/ATM; MPLS/PPP/PPPoE/Ethernet/ATM; MPLS/Ethernet/ATM.
• In the case of MPLS/ATM, the Line Card will have replaced the top of stack shim label with the real label, because the real label was encoded as the ATM VPI/VCI in the packet received from the network.
• the IPE CID uniquely identifies the incoming top of stack MPLS label, as well as the PHY/LC that it was received on.
  • the top of stack label has a one to one correspondence with the ATM Virtual Circuit that the packet was received on.
• the following table shows the first two layers of protocols that must be identified by the PI on the IPE for each packet that passes through it.
• All IP packets received by the IPE will fall into one of two categories: those for which the destination IP address is equal to one of the addresses of the IPE, and those for which it isn't. In the case of the latter, the packet must be forwarded or discarded by the PPU. But for the former, it must be determined whether or not the packet can be processed entirely by the PPU, or whether it must be sent to the MPPU for further processing. If it must be sent to the MPPU, it must be captured in its entirety.
• All IP packets received, regardless of their encapsulation, must have their destination IP address captured and examined. All routing table searches are performed by the IPE cards. If the destination address is one of the system's IP addresses, but not one of the IPE card's addresses, the packet must be forwarded with the Destination PPU bit set.
• Each L2TP tunnel is handled entirely by a particular IPE card.
  • Each session within the tunnel must be handled entirely by a particular PPU. This requirement comes primarily from the need to support sequence numbers on the data sessions:
  • RFC-2661 "Each peer maintains separate sequence numbers for the control connection and each individual data session within a tunnel. "
• LAC: L2TP Access Concentrator.
• Any PPP user can be selected for L2TP tunneling by the IPE MPPU. If a user is selected for tunneling, then the PPU receiving PPP packets from that user must encapsulate those packets, first with an L2TP header, then a UDP header, and finally an IP header.
• the IP header's destination address will be that of the configured LNS, and the source address will be one of the IP addresses of the IPE.
  • the resulting IP packet can then be forwarded using the standard IP forwarding procedure to the appropriate Line Card for transmission. It should be evident that tunneled PPP users on different IPE cards will be placed in separate tunnels even if being tunneled to the same destination LNS.
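A sketch of the encapsulation order described above: the PPP frame is wrapped first in an L2TP header, then UDP (port 1701 is the registered L2TP port), then an IP header addressed to the LNS. Only the fields needed to show the layering are filled in; checksums, lengths, optional L2TP fields, and network byte-order handling of the addresses are elided, and the function name is hypothetical:

```c
#include <stdint.h>
#include <string.h>

/* Encapsulation order for tunneled PPP: L2TP, then UDP, then IP. */
#define L2TP_HDR 6   /* flags/version + tunnel ID + session ID (no options) */
#define UDP_HDR  8
#define IP_HDR   20

static int encapsulate_l2tp(uint8_t *out, size_t out_cap,
                            const uint8_t *ppp, size_t ppp_len,
                            uint16_t tunnel_id, uint16_t session_id,
                            uint32_t lns_ip, uint32_t ipe_ip)
{
    size_t total = IP_HDR + UDP_HDR + L2TP_HDR + ppp_len;
    if (total > out_cap) return -1;
    uint8_t *ip = out, *udp = ip + IP_HDR, *l2tp = udp + UDP_HDR;

    memset(out, 0, IP_HDR + UDP_HDR + L2TP_HDR);
    ip[0] = 0x45;                                   /* IPv4, 20-byte header */
    ip[9] = 17;                                     /* protocol = UDP */
    memcpy(ip + 12, &ipe_ip, 4);                    /* source: IPE address */
    memcpy(ip + 16, &lns_ip, 4);                    /* destination: LNS */
    udp[0] = udp[2] = 1701 >> 8;                    /* src/dst port 1701 */
    udp[1] = udp[3] = 1701 & 0xFF;
    l2tp[0] = 0x00; l2tp[1] = 0x02;                 /* data message, version 2 */
    l2tp[2] = tunnel_id >> 8;  l2tp[3] = tunnel_id & 0xFF;
    l2tp[4] = session_id >> 8; l2tp[5] = session_id & 0xFF;
    memcpy(l2tp + L2TP_HDR, ppp, ppp_len);          /* PPP frame last */
    return (int)total;
}

int main(void)
{
    uint8_t out[128];
    const uint8_t ppp[4] = { 0xFF, 0x03, 0x00, 0x21 };
    return encapsulate_l2tp(out, sizeof out, ppp, 4, 9, 0x1007, 0, 0) < 0;
}
```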
• IP packets received from the LNS will be sent by the receiving Line Card to the IPE PPU associated with the ingress interface (user).
• This PPU may well be on a different IPE card than the one handling the tunnel (this is easily determined from the destination IP address of the packet). In this case, the PPU receiving the packet from the Line Card must forward the packet to the IPE card handling the tunnel.
• the L2TP Session ID can be used to identify which PPU on that IPE card should receive the packet (this PPUID must be sent in the PIE Header so that the receiving PI will know which PPU should receive the packet). This is done by always encoding the PPUID of the PPU handling a particular session in the most significant four bits of the L2TP Session ID.
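This convention reduces PPU steering to a shift. A minimal sketch (names hypothetical); the same idea is applied to the IPsec SPI later in this document:

```c
#include <stdint.h>

/* The PPU handling an L2TP session encodes its 4-bit PPUID in the most
 * significant bits of the 16-bit Session ID it assigns, so the receiving
 * card can steer tunnel packets to the right PPU without a table lookup. */
static uint16_t make_session_id(uint8_t ppuid, uint16_t local_index)
{
    return (uint16_t)((ppuid & 0xF) << 12) | (local_index & 0x0FFF);
}

static uint8_t ppu_from_session_id(uint16_t session_id)
{
    return (uint8_t)(session_id >> 12);
}

int main(void)
{
    uint16_t sid = make_session_id(0x1, 0x007);   /* -> 0x1007 */
    return ppu_from_session_id(sid) == 0x1 ? 0 : 1;
}
```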
• the PPU to which the packet is sent can in turn de-encapsulate the PPP packet and forward it to the PPP user identified by the L2TP Session ID.
• LNS: L2TP Network Server.
• L2TP packets received from the LAC will be forwarded, either by a Line Card or another IPE, to the IPE handling the tunnel. This is because the destination IP address of the packet will be equal to one of the IP addresses of the IPE handling the tunnel.
• the PPU that should process the L2TP session is identified using the most significant four bits of the L2TP Session ID.
• the PPU will de-encapsulate the PPP packet, then process the PPP packet as if it was received from a PPP user. From this point on, the processing is the same as for a "real" PPP user.
• packets which, when their destination IP address is looked up in the routing table, yield a destination PPP user that is associated with an L2TP tunnel instead of with a Line Card, must be sent to the IPE PPU handling the PPP user. This is because of the sequence number requirement of L2TP mentioned above.
• Once received by this PPU, the packet must have a PPP header added, as is the case with a normal PPP user.
• an L2TP header is added, followed by a UDP header and an IP header.
• the IP destination address is that of the LAC at the other end of the tunnel.
• the resulting IP packet can then be forwarded using the standard IP forwarding procedure to the appropriate Line Card for transmission.
• SA: IPsec Security Association.
  • a Security Association is a unidirectional, "simplex" connection that provides security services to the traffic carried by it.
  • Every PPU must have a copy of the SPD for every user from which it receives packets.
• For every UserlD (Primary or Secondary) that points to a particular IPE PPU, the PPU must have a pointer to an SPD. If a user's traffic is split among multiple PPUs (i.e.: a large user), then they should have identical SPDs configured for the user, and each will create its own set of Security Associations for its share of the user's traffic. Every packet received must be processed using the SPD of the user the packet is received from.
• Tunneled packets:
• the SPI is the field in the IPsec header that, along with the destination IP address, identifies the SA. Traffic from a small user will always be directed by the receiving Line Card to a particular PPU.
  • This PPU uses the SPI to identify the SA, and thus has access to the information it needs to decapsulate the packet.
• the Line Card must detect IPsec packets whose IP destination address is one of the addresses that belongs to the IPE card identified by the user's Primary Userld. Rather than select a Userld (primary or secondary) based on the hash of the source and destination IP addresses of the packet, the LC must use the SPI in the IPsec header to select the Userld, and thus the IPE PPU, to send the packet to.
  • the most significant 4 bits of an SPI always contain the PPUID identifying the PPU that is handling the SA identified by that SPI.
  • the difficulty with outbound processing is that, as discussed earlier, the configuration information (and thus the SPD) associated with the egress user is not readily available.
  • the information must be requested from the PPU identified by the Primary Userld and stored in a cache. Each PPU sending to a user will thus create its own set of Security Associations.
• the IPE card PPUs perform routing table searches for all packets that need forwarding.
• the global Forwarding Information Base (FIB) is distributed to every PPU in the system, and contains IP unicast and multicast routing tables in a form that facilitates longest matching prefix searches (i.e.: Patricia tries), as well as tables required for MPLS label based forwarding.
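The text names Patricia tries; the uncompressed binary trie below is a stand-in that performs the same longest-matching-prefix search one address bit at a time, which is enough to illustrate how the FIB answers a lookup. Next-hop values and the node layout are hypothetical:

```c
#include <stdint.h>
#include <stdlib.h>

/* Minimal binary trie for IPv4 longest-prefix match. A Patricia trie (per
 * the text) compresses single-child chains; the search semantics are the
 * same. */
typedef struct node {
    struct node *child[2];
    int next_hop;               /* -1 when no route terminates here */
} node_t;

static node_t *new_node(void)
{
    node_t *n = calloc(1, sizeof *n);
    n->next_hop = -1;
    return n;
}

static void insert_route(node_t *root, uint32_t prefix, int len, int next_hop)
{
    node_t *n = root;
    for (int i = 0; i < len; i++) {
        int bit = (prefix >> (31 - i)) & 1;
        if (!n->child[bit])
            n->child[bit] = new_node();
        n = n->child[bit];
    }
    n->next_hop = next_hop;
}

/* Walks toward the address, remembering the last route seen: that route is
 * the longest matching prefix. */
static int lookup_route(const node_t *root, uint32_t addr)
{
    int best = -1;
    const node_t *n = root;
    for (int i = 0; i < 32 && n; i++) {
        if (n->next_hop >= 0) best = n->next_hop;
        n = n->child[(addr >> (31 - i)) & 1];
    }
    if (n && n->next_hop >= 0) best = n->next_hop;
    return best;
}

int main(void)
{
    node_t *root = new_node();
    insert_route(root, 0x0A000000, 8, 1);    /* 10/8    -> next hop 1 */
    insert_route(root, 0x0A010000, 16, 2);   /* 10.1/16 -> next hop 2 */
    return lookup_route(root, 0x0A010203) == 2 ? 0 : 1;  /* 10.1.2.3 */
}
```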
• the Primary Userld identifies the IPE PPU that maintains the configuration and state information for the user.
• the LCUserld contains the Cardld of the Line Card that the packet must be forwarded to, as well as the PPUID and CID that should be sent in the PIE header of the packet to that Line Card.

Abstract

A broadband mid-network server provides high-speed, reliable, secure, flexible, high-bandwidth, and easily managed access to the Internet to accommodate all current Internet services including email, file transfer, web surfing and e-commerce, as well as new value added services such as VoIP and Real Time Video. The preferred server is scalable in terms of both bandwidth and processing power. The server includes the ability to distribute traffic across a number of Internet processing engines and, more specifically, across a number of protocol processing units provided in each engine (the bandwidth to which can be coordinated), to provide compute power and state space required for performing per user processing for a large number of users.

Description

BROADBAND MID-NETWORK SERVER
The present invention relates to internetworked communication systems, and especially (but not exclusively) to a highly scalable broadband mid-network server for performing mid-network processing functions including routing functions, per user processing, encryption, bandwidth distribution and traffic shaping.
Background and Summary of the Invention
As bandwidths within the core of the Internet increase, there is an increasing trend towards using the Internet Protocol ("IP") as the core network layer protocol for all kinds of traffic, including voice, video and data.
Historically, quality of service on the Internet has been what is called "best effort." That is, the network attempts to transport as much traffic as possible, but if there is insufficient capacity to handle the traffic, all connections are equally likely to be influenced by congestion. Thus, "best effort" implies that the Internet provides only one class of service to any connection, and that all connections are handled equally with no priority. In the case of traditional Internet applications, this approach was often sufficient. However, the intrinsic potential of the Internet is considerably greater, and includes new multimedia and interactive applications. Voice over IP ("VoIP") and Real Time Video are envisioned to be two significant applications for propelling Internet growth to the next level. VoIP can be defined as the ability to make telephone calls and send faxes over IP networks. The benefits of this technology are cost reduction, simplification, consolidation and advanced applications such as shared screens or whiteboarding which combine voice and data. Real Time Video is a "direct-to-user" technique in which a video signal is transmitted to the client device and presentation of the video begins after a short delay for data buffering, and eliminates the need for significant client-site storage capacity. It is also expected to become popular with businesses. Related to this is webconferencing, which requires high bandwidth since it is a continuous transfer of image information together with voice transfer. Webconferencing also requires real time traffic handling because it is usually implemented as an interactive application.
All of these new applications will generally require significant bandwidth and/or reduced latencies. Bandwidth is the critical factor when large amounts of information must be transferred within a reasonable time period. Latency is the minimum time elapsed between requesting and receiving data and is important in real-time and interactive applications such as webconferencing and telecommuting. Presently, most telecommuters depend upon analog modems with limited bandwidth and significant latency for dial-up connectivity. Even for today's applications, dialup connectivity is often inadequate. There are competing "last mile" technologies today which provide transport services to the user for delivering packets to the "edge" of the Internet. To complete the communication, the packets need to be formatted to allow them to enter the Internet cloud and find their way to their respective destinations. The emergence of supporting protocols for new applications and the growth spurt in the number of users and the required bandwidth per user results in a very dynamic access environment.
The following is a summary of observations that pertain to an ideal mid-network point within the Internet:
• In order to accommodate a variety of source packets, all the requisite protocols must be efficiently supported.
• Virtual Private Network services allow a private network to be configured within a public network. This is one of the drivers for Internet access amongst businesses. To allow Virtual Private Networks to coexist on the public Internet, and to encourage business use of the Internet, great care must be taken with respect to security and authentication issues, and tunneling protocols such as L2TP and IPSec must be efficiently supported.
• The number of subscribers handled by one system and the different qualities of service provided will make service provider administration more complex. To make provisioning of broadband access more attractive to service providers, subscriber management and usage accounting must be simplified, and differentiated services must be provided.
• Broadband makes it possible to provide different amounts of bandwidth to users and to smaller Internet Service Providers. To make wholesaling of IP connectivity possible, and to simplify service and repair functions, the ability to support multiple service providers with one mid-network server must be provided.
• A large number of connections are serviced with a broadband mid-network server. In order to ensure that service is not interrupted, the broadband server must have very high availability. Such availability is also required for mission-critical business applications.
• Central office co-location space is limited. To conserve this space, large connection densities must be provided.
• When subscribers are allowed access at high speeds, it is possible for a limited number of users demanding disproportionate amounts of bandwidth to disrupt service for other customers. To ensure that large traffic bursts do not overload small client buffers, and to ensure that service providers and customers are treated fairly, traffic shaping must be provided.
• To enable new value-added services, large bandwidths and low latencies are critical.
In order to solve these and other needs in the art, the inventors hereof have succeeded at designing and developing a broadband mid-network server that, in the most preferred embodiment, satisfies all of the requirements described above. This inventive server provides reliable, secure, fast, flexible, high-bandwidth, and easily managed access to the Internet so as to accommodate all current Internet services including email, file transfer, web surfing and e-commerce, as well as the new value added services such as VoIP and Real Time Video. To meet these requirements, the broadband mid-network server of the present invention has been designed to scale not only in bandwidth, but also in processing power and state space. In the preferred embodiment, the architecture allows a service provider to configure the cards chosen for use in the available chassis space to suit his particular application. For example, to maximize processing power, a service provider could increase the number of IPE cards at the expense of a smaller number of line cards; as few as one line card may be used. In the case of one line card, the maximum amount of processing power would be available to a service provider. In the preferred embodiment described in detail below, this configuration would provide 240 processors and 39 gigabytes of memory. This would allow for a greater number and complexity of value added services which require more processing power. Alternatively, a greater number of line cards could be selected for use in a chassis, which would be desirable to handle greater traffic and throughput at the expense of fewer value added services.
The high bandwidth core routers that are currently under development by third parties are optimized for performing large numbers of fast routing lookups, but are not expected to provide generalized and flexible computing power for supporting the substantial amount of processing needed for, among other things, per user and per packet processing. In contrast, the broadband mid-network server of the present invention includes the ability to distribute traffic across a number of Internet processing engines and, more specifically, across a number of protocol processing units provided in each engine (the bandwidth to which can be coordinated) , to provide compute power and state space required for performing per user processing for a large number of users.
One important feature of the present invention is a unique architectural philosophy, which provides that processing be performed as close to the physical layer as warranted by considerations of flexibility, cost and complexity. This architectural philosophy maintains balance between two kinds of processing which are important to scaling bandwidth with value-added services in broadband networks: time-consuming, repetitive processing; and flexible processing which must be easy to program by third parties. The need for considerable time-consuming repetitive processing, which has proved to create a bottleneck in the processor-based servers of the prior art, is addressed by the inventive architecture through specialized hardware, and results in dramatic increases in speed and decreases in delay. The need for flexible, easy to use, computing power to enable service providers to scale with value-added services is addressed by the inventive architecture preferably through the provision of high-performance general purpose processors which are paralleled and which can be scaled to a virtually limitless degree. Alternatively, network processors or digital signal processors or any other programmable processor could be utilized as well. Accordingly, the broadband mid-network server of the present invention provides a system that is currently unrivalled in performance and which can become the prime mover of Internet services such as managed, secure VPNs, Voice over IP and Real Time Video.
While some of the principal features and advantages of the present invention have been described above, a greater and more thorough appreciation of the invention may be attained by referring to the drawings and the detailed description of the preferred embodiments which follow.
Brief Description of the Drawings
Fig. 1 illustrates a single shelf broadband mid-network server according to one embodiment of the present invention;
Fig. 2 is a functional block diagram of the preferred server shown in Fig. 1;
Fig. 3 is a functional block diagram of an exemplary line card shown in Figs. 1 and 2;
Fig. 4 is a functional block diagram of an exemplary IPE card shown in Figs. 1 and 2; Fig. 5 illustrates routed distribution to an IPE card;
Fig. 6 illustrates the processing flow on an IPE card; Fig. 7 illustrates a protocol processing platform according to the present invention;
Fig. 8 is a functional block diagram of an exemplary buffer access controller; Fig. 9 illustrates the format of a cell received at an input to a BAC from a PIC;
Fig. 10 is a functional block diagram of a preferred packet manager;
Fig. 11 is an illustration of the deployment of a broadband mid-network server at a Service Provider POP; Fig. 12 is an illustration of the different kinds of links an ISP may want on a secure segment;
Fig. 13 is an illustration of the system wide bandwidth distribution functions; Fig. 14 is an illustration of the multi-level policing and multi-level shaping that occurs in the system;
Fig. 15 is an illustration of router distribution, two level policing, routing and two level shaping; Fig. 16 is a functional block diagram of a preferred packet inspector;
Fig. 17 is an illustration of the preferred Distributor Flow Unit; and
Fig. 18 is a summary of the highlights of the DFU.
Detailed Description of the Preferred Embodiments
The mid-network processor of the present invention is preferably implemented in a single shelf system as shown generally in Fig. 1, and is indicated generally by reference character 300. As shown in Fig. 1, the mid-network processor 300 is provided with a number of physical connection ("PHY") cards 302-316 through which packets may enter and exit the mid-network processor 300 according to a particular communication protocol, as is known in the art. For the preferred embodiment illustrated in Fig. 1, the mid-network processor 300 supports the POS, ATM, and Gigabit Ethernet layer two protocols, although the mid-network processor may readily be configured to support additional protocols, as will be apparent. The PHY cards 302-316 are each associated with line cards 322-336, respectively, as shown in Fig. 1. As is well known in the art, each PHY card is media specific. In other words, each PHY card is provided with connectors and other components necessary to interface with the communication media connected thereto, and over which packets enter and exit the PHY card. Each line card is configured to process packets of the type received from its associated PHY card, as explained more fully below.
The preferred mid-network processor 300 shown in Fig. 1 is also provided with a number of Internet Processing Engine ("IPE") cards 340-354, as well as two flash memory modules 360, 362 and four switch fabric modules 364-368. As appreciated by those skilled in the art, the number of switch fabric cards required is a function of the switch fabric card design as well as the desired redundancy and overall performance. Fig. 1 also illustrates a midplane 370 that is provided for interconnecting the various cards described above. The preferred mid-network processor 300 utilizes a card-based approach to facilitate maintenance and expansion of the mid-network processor 300, as necessary, but this is clearly not a limitation of the present invention. The manner in which packets are processed by the preferred mid-network processor 300 will now be described with reference to Fig. 2, which is a functional block diagram of the preferred mid-network processor 300 shown in Fig. 1 (although, to simplify the illustration, Fig. 2 does not show the PHY cards 310-316, the line cards 330-336 and the IPE cards 346-354 shown in Fig. 1). Packets enter the mid-network processor 300 via the PHY cards, as is known in the art. Each PHY card then delivers its packets to its associated line card through the midplane 370. After performing initial processing of the packet, the line card delivers the packet again through the midplane to the switch fabric which, in turn, delivers the packet to one of the IPE cards for performing certain mid-network processing functions, such as routing functions, per user processing, encryption, and bandwidth distribution. After performing mid-network processing for the packet delivered thereto, the IPE card sends the packet back into the switch fabric, typically for delivery to one of the line cards for some additional processing before allowing the packet to exit the mid-network processor 300 through one of the PHY cards. In some cases, depending upon how the mid-network processor of the present invention is implemented, a single IPE card may be insufficient to complete the necessary mid-network processing functions for a packet delivered thereto. In this case, upon performing some processing, the IPE card will deliver the packet to another IPE card (rather than to one of the line cards) via the switch fabric for further processing. Thus, although a packet will typically be processed by only one IPE card, it is possible to process a packet in multiple IPE cards, if necessary. In this preferred embodiment, all of the line cards contain identical hardware, but are independently programmable. Likewise, all of the IPE cards contain identical hardware, but are independently programmable. This contributes to the scalability and elegantly simple design of the preferred mid-network processor 300. Additional processing power can be provided to the mid-network processor by simply adding additional IPE cards. Similarly, additional users can be supported by the mid-network processor 300 by adding additional line cards and PHY cards, and perhaps additional IPE cards to provide additional processing for the newly added users, if necessary. The flash memory cards are provided for storing configuration data used by the IPE cards during system initialization.
Note that, as used herein, the term "packet" refers to any type of packet that enters or exits the mid-network processor 300, including packets input to the mid-network processor 300 in the form of cells (such as ATM cells) via an interleaved or non-interleaved cell stream.
In general, each line card used in the preferred mid-network processor 300 performs a number of functions. Initially, the line card converts packets (possibly of varying lengths) delivered thereto into fixed length cells. In this preferred embodiment, each line card converts input packets (including packets represented by individual cells) into 64 byte cells. The line card then examines the stream of fixed length cells "on the fly" to obtain important control information, including the protocol encapsulation sequence for each packet and those portions of the packet which should be captured for processing. This control information is then used on the line card to reassemble the packet, and to format the reassembled packet into one of a limited number of protocol types that are supported by the IPE cards. Thus, while any given line card can be configured to support packets having a number of protocol layers and protocol encapsulation sequences, the line card is configured to convert these packets into generally non-encapsulated packets (or, stated another way, into packets having an encapsulation sequence of one) of a type that is supported by each of the IPE cards. The line card then sends the reassembled and formatted packet into the switch fabric (in the form of contiguous fixed length cells) for delivery to one of the IPE cards that was designated by the line card to further process that particular packet.
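As an illustration of the first step described above, the sketch below slices a variable-length packet into fixed 64-byte cells. The 4-byte per-cell header used here (flow identifier plus end-of-packet flag) is purely hypothetical; in the preferred system the switch-fabric routing headers are appended later by the packet manager:

```c
#include <stdint.h>
#include <string.h>

/* Segmenting a variable-length packet into fixed 64-byte cells. */
#define CELL_SIZE    64
#define CELL_HDR     4                     /* hypothetical per-cell header */
#define CELL_PAYLOAD (CELL_SIZE - CELL_HDR)

/* Writes ceil(len / 60) cells into 'cells'; returns the cell count, or -1
 * if the packet does not fit. */
static int segment_packet(const uint8_t *pkt, size_t len, uint16_t flow_id,
                          uint8_t cells[][CELL_SIZE], int max_cells)
{
    int n = 0;
    for (size_t off = 0; off < len; off += CELL_PAYLOAD) {
        if (n >= max_cells) return -1;
        size_t chunk = len - off < CELL_PAYLOAD ? len - off : CELL_PAYLOAD;
        memset(cells[n], 0, CELL_SIZE);              /* pad the final cell */
        cells[n][0] = flow_id >> 8;
        cells[n][1] = flow_id & 0xFF;
        cells[n][2] = (off + chunk == len);          /* end-of-packet flag */
        memcpy(cells[n] + CELL_HDR, pkt + off, chunk);
        n++;
    }
    return n;
}

int main(void)
{
    uint8_t pkt[150], cells[4][CELL_SIZE];
    memset(pkt, 0xAB, sizeof pkt);
    return segment_packet(pkt, sizeof pkt, 7, cells, 4) == 3 ? 0 : 1;
}
```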
Although the fixed length cells which comprise a packet are arranged back to back when the packet is delivered to the switch fabric by a line card, the cells may become interleaved with other cells destined for the same IPE card during the course of traversing the switch fabric. As a result, the cell stream provided by the switch fabric to any given IPE card may be an interleaved cell stream. Thus, the IPE card will first examine this cell stream "on the fly" (much like the cell stream examination conducted by the line cards, explained above) to ascertain important control information. The IPE card then processes this control information to perform routing look-ups and other mid-network processing functions for each packet delivered thereto. The control information is also used by the IPE card to reassemble each packet, and to format each packet according to the packet's destination interface. The IPE card then sends the reassembled and formatted packet back into the switch fabric in the form of contiguous fixed length cells for delivery to one of the line cards (or for delivery to another IPE card, in the case where additional mid-network processing functions must be performed for the packet in question).
As noted above, although the cells of any given packet may enter the switch fabric in a back to back arrangement, these cells may become interleaved with other cells during the course of traversing the switch fabric. Thus, the stream of cells provided by the switch fabric to any given line card may be an interleaved cell stream. Accordingly, a line card will first examine this cell stream "on the fly" to ascertain important control information that will be used primarily to reassemble packets, and to format the reassembled packets for their destination interfaces. Additional processing of outbound packets is also conducted on the line card for PHY scheduling and bandwidth distribution purposes.
While the preferred mid-network processor 300 of the present invention has been described as delivering packets from a line card to an IPE card and then back to a line card (or to one or more additional IPE cards), the mid-network processor 300 can also be configured to route cells arriving over an ATM interface on one line card through the switch fabric and directly to another line card ATM interface, and can therefore function as an ATM switch. Fig. 3 illustrates an exemplary line card 380 used in the preferred mid-network processor 300 of the present invention. As shown therein, the line card 380 preferably includes an ingress side (i.e., the left half of Fig. 3) and an egress side (i.e., the right half of Fig. 3). When packets are provided to the ingress side of the line card from the line card's associated PHY card, the packets are first provided to a packet inspector chip ("PIC") 400 which converts the packets (which may already be represented by individual cells such as ATM cells) into fixed length cells. In this preferred embodiment, the fixed length cells are 64 byte cells that are 8 bytes wide and 8 bytes long. Thus, a "cell time," in the context of cells propagating within the preferred mid-network processor 300, corresponds to 8 clock cycles, as appreciated by those skilled in the art. The PIC 400 then examines the stream of fixed length cells "on the fly" to identify the "classification" (that is, the protocol encapsulation sequence), capture matrix, and other control information for each packet (as described more fully in copending Application No. 09/494,235 filed January 30, 2000 entitled "Device and Method for Packet Inspection," the disclosure of which is incorporated herein by reference). More specifically, the preferred PIC 400 generates a control cell for each examined cell of a packet, and each control cell represents the control information that has been determined thus far for the corresponding packet. Thus, the PIC 400 outputs both the stream of fixed length cells that was produced before this stream was examined "on the fly" therein, as well as corresponding control cells. As shown in Fig. 3, these control and data cells are then provided by the PIC 400 to four preferably identical buffer access controllers ("BACs") 402-408. Each BAC stores a different quarter (i.e., 25%) of the data cells received from the PIC 400 in its corresponding cell buffer ("CB").
Each control cell output by the PIC 400 also includes a protocol processing unit ("PPU") identifier which identifies a PPU associated with a particular BAC for processing that control cell. Note that each PPU, in this preferred embodiment, preferably comprises two general purpose central processing units ("CPUs"), as shown in Fig. 3. Alternatively, a PPU could comprise one or more network processors, digital signal processors, or any programmable processors. The BACs 402-408 each examine the PPU identifiers contained in the control cells delivered thereto over a bus by the PIC 400. When a BAC determines that the PPU identifier in a particular control cell is identifying the PPU associated with that BAC, the BAC will forward the control cell to its associated PPU for processing, as described more fully below. Thus, while every BAC 402-408 in this preferred embodiment stores a quarter of every data cell in their associated cell buffers, each control cell output by the PIC 400 is acted on by only one BAC and its associated PPU. Because the control cell is much smaller than the typical packet, processing control cells in this manner can significantly increase the utilization of the processor by reducing the I/O bandwidth demand, which is the typical limiting factor in processor use. In this preferred embodiment, all control cells corresponding to a specific packet (and, more generally, to a specific user) are processed by the same BAC PPU on the line card 380.
Note that the PPU assignment by the PIC 400 for any given packet is performed according to configuration and control information received by the PIC 400 from a master PPU ("MPPU") 410, and can be changed by the MPPU 410 over time as necessary for PPU load balancing on the line card 380. The PIC 400 also keeps track of the available memory addresses in the cell buffers associated with the BACs using a free buffer ("FB") list 412, and also keeps track of where each data cell is stored in the cell buffers with respect to other cells of the same packet using a link list 414.
When a control cell is processed within a particular BAC PPU, the PPU produces a new control cell to be provided to a packet manager ("PM") 420 which is in communication with the PIC 400 and the BACs 402-408. Included in this control cell provided to the PM 420 is a dequeue pointer which designates the location of the first cell of a packet that is to be dequeued and sent to the PM 420 along with the second and subsequent cells of that packet (if applicable). The packet manager 420 then forwards this dequeue pointer back to the PIC 400, which, in turn, provides instructions to the BACs 402-408 to dequeue each quarter cell of the designated packet in sequence using the information previously stored by the PIC 400 in the link list 414. Thus, the designated packet is reassembled as it is dequeued and delivered to the packet manager 420.
At this point in the processing, the packet manager 420 stores the cells of the reassembled packet in its own cell buffer 422 (using a free buffer list 424 and link list 426). The packet manager 420 processes the control information it received for that packet from one of the BAC PPUs and then formats the packet according to this control information by modifying or augmenting the packet header as the cells of the packet are dequeued from the cell buffer 422. This process and additional details of the preferred packet manager 420 are described more fully in copending Application No. 09/494,236 filed January 30, 2000 entitled "Device and Method for Packet Formatting," the disclosure of which is incorporated herein by reference. The packet manager 420 also appends a header to each of the 64 byte cells that constitute the reassembled and formatted packet, and these headers will be used by the switch fabric for routing the cells therethrough. The packet manager 420 then forwards the cells of the packet in sequence to a UDASL 430, which is provided for managing cell traffic into and out of the switch fabric for the line card 380. Typically, the UDASL 430 then forwards the packet cells into the switch fabric for delivery to an IPE card that will perform mid-network processing functions for the packet in question. This IPE card is preferably designated by the BAC PPU that prepared and forwarded control information to the packet manager 420.
Also shown in Fig. 3 is a 9-port Ethernet switch 450 which provides for interprocessor communications between the eight PPUs on the line card 380 (i.e., 4 PPUs on the ingress side and 4 PPUs on the egress side) and the MPPU 410 for purposes of load balancing, hardware monitoring and bandwidth distribution, and for sharing user and configuration information. The bandwidth distribution process and the preferred hardware are described more fully in copending Application No. 09/515,028 filed February 29, 2000 entitled "Method and Device for Distributing Bandwidth," the disclosure of which is incorporated herein by reference. Fig. 4 illustrates an exemplary IPE card 500 used in the preferred mid-network processor 300 of the present invention. The hardware layout of the IPE card 500 is similar to the hardware layout on the ingress side (and the egress side) of the line card 380 shown in Fig. 3. That is, the IPE card 500 is also provided with a UDASL
501 that delivers a typically interleaved cell stream received from the switch fabric to a PIC 502. The PIC
502 is in communication with four BACs 504-510 that are in communication with a PM 512. Thus, the primary difference between the preferred IPE card 500 and either side of the preferred line card 380 is the processing that is performed therein, even though this processing is performed with similar hardware. It should thus be apparent that the present invention provides, amongst other things, an inventive hardware module that can be programmed to perform requisite processing either on the ingress side or the egress side of a line card, or on an IPE card. This contributes to the configurability and scalability of the preferred mid-network processor 300, which can be reconfigured as necessary (through programming and/or by adding additional line cards and/or IPE cards) to accommodate additional users and/or to provide additional processing power.
Much like the PIC 400 resident on the ingress side of the preferred line card 380, the PIC 502 provided on the preferred IPE card 500 is used to inspect the stream of fixed length cells provided thereto by the switch fabric "on the fly" to ascertain control information for each packet to be processed on the IPE card. In most cases, this control information was added to the packet by the PM 420 on the ingress side of the line card that forwarded the packet to this particular IPE card. The PIC 502 outputs the stream of data cells to the four BACs 504-510, each of which is configured to store a different quarter of each data cell in its corresponding cell buffer (note that each BAC on the preferred IPE card 500 has two PPUs associated therewith, whereas only one PPU is associated with each BAC on the preferred line card
380) . The PIC 502 also outputs control cells to the BACs 504-510, where each control cell contains a PPU identifier that designates one of the two PPUs associated with a particular BAC for processing that control cell on the IPE card to perform mid-network processing functions for the corresponding packet. In this preferred embodiment, all control cells corresponding to a specific packet (and, more generally, to a specific user) are processed by the same BAC PPU on the IPE card 500.
For any given packet, the PPU that processed control information for that packet on the ingress side of the line card is also responsible for determining to which IPE card and, more specifically, to which PPU on a particular IPE card, the packet should be sent for further processing.
After a BAC PPU on the IPE card processes the control information for a particular packet, the PPU sends a control cell back to the PM 512, which then cooperates with the PIC 502 to dequeue the quarter cells of that packet in sequence from the cell buffers associated with the BACs 504-510. Upon receiving the constituent cells of a reassembled packet and storing these cells in its own cell buffer 514 (using a link list 516 and a free buffer list 518), the PM 512 processes the control cell received from the BAC PPU to format the reassembled packet according to its destination interface before forwarding the reassembled formatted packet back into the switch fabric for delivery to its destination line card (or another IPE card, in the case where additional processing of the packet is required).
Also shown in Fig. 4 is a 9-port Ethernet switch 550 which, like the Ethernet switch provided on the preferred line card 380, provides for interprocessor communications between the eight PPUs and an MPPU 530 on the IPE card 500 for purposes of load balancing, hardware monitoring and bandwidth distribution, and for sharing user and configuration information.
Referring again to Fig. 3, it can be seen that the egress side of the exemplary line card 380 is also provided with a PIC 600, four BACs 602-608, and a PM 610. Upon receiving a possibly interleaved stream of fixed length cells from the switch fabric via the UDASL 430, the PIC 600 examines this cell stream "on the fly" to ascertain control information (including control information that may have been added to the packet header by the PM 512 on an exemplary IPE card 500). The PIC 600 then forwards the data cells to the BACs 602-608 for storage in their corresponding cell buffers, and forwards corresponding control cells for each packet to one of the BAC PPUs (typically assigned by an IPE card BAC PPU that previously processed control information for the same packet) for further processing. The assigned BAC PPU then performs additional packet processing, primarily for traffic shaping, PHY card scheduling and bandwidth distribution on that PHY card. This process and the preferred hardware are described more fully in copending Application No. 09/511,059 filed February 23, 2000 entitled "Method and Device for Data Traffic Shaping," the disclosure of which is incorporated herein by reference. Upon processing the control information received from the PIC 600, this BAC PPU produces and forwards a control cell to the packet manager 610, which, in turn, dequeues the quarter cells of the corresponding packet in sequence from the cell buffers associated with the BACs 602-608 in cooperation with the PIC 600. The PM 610 then stores the constituent cells of the reassembled packet in its own cell buffer 612 (using a link list 614 and a free buffer list 616), and formats the packet for its intended destination before forwarding the reassembled formatted packet to the PHY card associated with this line card for outputting the packet from the mid-network processor 300. A description of one preferred implementation of the broadband mid-network server described above will now be provided, wherein the following terms have the following meanings:
Cardld: An 8 bit number that uniquely identifies an IPE or Line Card in the system.
Flowld: A 10 bit number whose lower (least significant) 8 bits contain a Cardld, and whose upper (most significant) 2 bits identify the priority (class) of the traffic sent through the switch fabric to this card using this Flowld. (In the switch fabric, this field is 12 bits, but our implementation only uses the least significant 10 bits.)
User: A datalink (layer 2) interface. Examples include ATM virtual circuits, PPP sessions (over SONET, Ethernet, or ATM) , and MPLS label switched paths .
Userld: A 32-bit value that can be used as a system-wide pointer to user configuration and state information. Since multiple cards (one or more IPEs and one Line Card) can store information about a user, it is possible to have multiple Userlds that refer to a single user. The upper (most significant) 8 bits of the value represent the Cardld of the card which contains the user information being identified. The next 4 bits represent the PPUID of the PPU on the card where the information is stored, and the lower (least significant) 20 bits represent the CID assigned by that card to the user. The CID is used as an index into the PPU's table of user information.
LCUserld: A Userld in which the Cardld identifies a Line Card.
Primary Userld: A Userld in which the Cardld and PPUID identify the PPU on an IPE which has the primary responsibility for managing a user.
Secondary Userld: A Userld in which the Cardld and PPUID identify an IPE PPU other than the one identified by the Primary Userld.
Small User: A user whose ingress packet stream is processed entirely by a single IPE PPU. Small users do not have Secondary Userlds.
Large User: A user whose configured bandwidth is too high for his ingress packet stream to be processed by a single IPE PPU. All large users have one or more Secondary Userlds.
Logical Link: A group of users of the same type (e.g., a group of ATM Virtual Circuits). If the Logical Link is a group of PPPoE sessions over ATM, the Logical Link must be an ATM Virtual Circuit.
CSIX Header: The header of a CSIX (i.e., Common Switch Interface) cell. The CSIX Header is separate from the 64 byte cell payload.
Cell Header: The first two bytes of the 64 byte payload of a CSIX cell.
PIE Header: The 6 bytes immediately following the Cell Header of the first cell of a packet.
Overview:
In this particular implementation, the server system preferably comprises one or more rack mountable system units (i.e., shelves). The system also contains at least one line card, exactly as many PHY cards as line cards, and at least as many IPE cards as line cards. Also, each shelf of the system contains preferably three switch fabric cards and two flash disk cards. Each line card is uniquely associated with a particular PHY card. However, there is no particular association between line cards and IPE cards. Each IPE card can be thought of as an independent router, with one or more IP addresses associated with it. Each Layer 2 (datalink) interface (referred to as a "user") provided by a line card is associated with exactly one IPE card (more specifically, exactly one PPU on one IPE card) . Different users from the same line card can be associated with different PPUs on different IPE cards, and a particular PPU can have users from multiple line cards.
Since it is possible for multiple Layer 2 protocols to be encapsulated within each other (for example,
PPP/Ethernet/ATM), there is an exception to the "one user, one PPU" rule. In this case, the inner-most levels of encapsulation, each of which is a layer 2 interface (user) in its own right, can be associated with different PPUs within an IPE card, or even PPUs on different IPE cards, thus causing traffic from the outer levels of encapsulation to be split among multiple PPUs or IPE cards. It is also possible for outer layers to carry encapsulated layer 3 traffic as well as layer 2 traffic (for example, an Ethernet/ATM virtual circuit can carry IP as well as PPPoE packets). In this case, all the layer 3 traffic will be associated with a single PPU (a user), but the encapsulated layer 2 datalinks (users) can each be associated with a different IPE card.
The set of all users on the system is preferably distributed as evenly as possible across all the IPE cards in the system. Within an IPE, the MPPU stores the per-user information for the users assigned to that IPE and distributes those users across its PPUs. Each PPU stores a copy of the per-user information assigned to it. Thus each user is associated with one and only one IPE card and one and only one PPU on that IPE. This PPU's copy of the user's configuration and state information can be uniquely identified on a system-wide basis by the Primary Userld.
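The bit layouts defined above for the Flowld and Userld can be expressed directly as packing helpers. This sketch follows the field widths given in the definitions (an 8 bit Cardld and 2 priority bits for the Flowld; an 8 bit Cardld, 4 bit PPUID and 20 bit CID for the Userld); only the function names are illustrative.

    #include <stdint.h>

    static uint16_t make_flowld(uint8_t cardld, unsigned priority /* 0..3 */)
    {
        return (uint16_t)(((priority & 0x3u) << 8) | cardld);   /* 10-bit Flowld */
    }

    static uint32_t make_userld(uint8_t cardld, unsigned ppuid /* 4 bits */,
                                uint32_t cid /* 20 bits */)
    {
        return ((uint32_t)cardld << 24) | ((ppuid & 0xFu) << 20) | (cid & 0xFFFFFu);
    }

    static uint8_t  userld_cardld(uint32_t u) { return (uint8_t)(u >> 24); }
    static unsigned userld_ppuid(uint32_t u)  { return (u >> 20) & 0xFu; }
    static uint32_t userld_cid(uint32_t u)    { return u & 0xFFFFFu; }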
The architecture of this preferred implementation is based on line cards, PHY cards, a switching fabric, internet processing engines (IPE) and flash memory modules, as was described generally above. The line cards terminate the link protocol and distribute the received packets based on user, tunnel or logical link information to a particular IPE through the switching fabric. The procedure of forwarding a packet to a particular IPE and PPU will be denoted as "routed distribution." A midplane is also used to connect the different cards. The preferred line card and the preferred IPE card were described above with reference to Figs. 3 and 4. The system comprises a set of hardware components, as described, which can be used to cost-effectively configure a system for a wide variety of applications as well as throughput requirements. The preferred switch fabric and scheduler support cell switching at OC-192 speeds, and the switch fabric is both fully redundant and highly scalable. The preferred IPE cards have the following attributes: high performance protocol processing engine; manages users, tunnels and secure segment groups; supports policing and traffic shaping; implements highly sophisticated QoS with additional support for differentiated services; supports distributed bandwidth management processing; supports distributed logical link management; and supports NAT, packet filtering and firewalls.
The preferred line cards have the following attributes: packet lookup processing; protocol identification; scheduling; supports distributed bandwidth management processing; multi-I/F support (ATM, GE, POS); and AAL-5 Processing (CRC check and generation).
The preferred PHY cards have the following attributes: line termination for rates up to OC-192c; ATM - Layer Processing; ATM - SONET Mapping; POS - SONET Mapping (including FEC checksum computation); GE - MAC and PHY Processing; and support for the following line cards: ATM: 4x OC-48, 8x OC-12, 16x OC-3; POS: 1x OC-192, 4x OC-48, 16x OC-12; and GE: 8/10x GE. Additionally, the overall system preferably has the following attributes: high availability; 1+1 switch fabric and scheduler redundancy; 1+1 control system unit redundancy; all field replaceable units are hot-swappable; N+1 AC power supply redundancy; and N+1 fan redundancy.
One purpose of routed distribution is to forward a packet to a particular PPU within an IPE. The key benefits of this approach are: incremental provisioning of compute power per packet; load distribution based on the packet computation needs for a particular user or tunnel; maintenance of user and tunnel configuration information by a single processor, thus minimizing inter-processor communication needs; and portability of single-processor application software onto the system.
Fig. 5 illustrates the distribution of packets to a particular IPE. A packet is received from a line card. The line card examines the packet and forwards the packet based on the IP source or destination address, the user session ID, or the tunnel ID. The IPE receives the packets and hands them over to the PPU specified by the line card.
The line cards and the IPE host the flexible protocol-processing platform. This platform comprises a data path processing engine and the already mentioned protocol-processing unit. The separation of data path processing from protocol processing leads to the separation of memory and compute intensive applications from the flexible protocol processing requirements. A clearly defined interface in the form of dual-port memory modules and data structures containing protocol specific information allows the deployment of general-purpose CPU modules for supporting the ever-changing requirements of packet forwarding based on multiple protocol layers. The protocol-processing platform can be configured for multiple purposes and environments. That is, it supports a variable number of general purpose CPUs which are used in the context of this architecture as Protocol Processing Units (PPU). One of these CPUs is denoted as the Master Protocol Processing Unit (MPPU). The data path processing unit extracts, in the packet inspector, all necessary information from the received packets or cells and passes this information on to a selected PPU via one of the buffer access controller devices. The cells themselves are stored in the cell buffer and linked together as linked lists of cells, which form a packet. Once a PPU has selected a packet for transmission, it passes the pointer to the packet and the necessary formatting and routing information to the data path processing unit. This enables the formatting and the segmenting of the packet. The packet is then forwarded either as a whole or segmented, based on the configured interface.
Each PPU is associated with one dual-ported memory, where one port is controlled by the data-path processing unit and the other by the corresponding PPU. Each dual-ported memory contains two ring buffers, where one ring buffer is used to forward protocol specific information from the data path to the PPU and the other is used for the other direction. The ring buffer for passing protocol specific information to the PPU is called the receive buffer. The other buffer is called the send buffer. Two pointers are maintained for each ring buffer. The write pointer for the receive buffer is maintained by the data path processing unit while the read pointer is set by the PPU. The send buffer's write pointer is controlled by the PPU and its read pointer by the data path processing unit.
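One way to picture the two ring buffers and their ownership rules is the following sketch, in which the producer side owns the write pointer and the consumer side owns the read pointer, exactly as divided above between the data path processing unit and the PPU. The entry type and size are illustrative.

    #include <stdint.h>

    #define RING_ENTRIES 256   /* illustrative size */

    struct ring {
        volatile uint32_t wr;              /* owned by the producer side */
        volatile uint32_t rd;              /* owned by the consumer side */
        uint64_t entry[RING_ENTRIES];      /* control words being exchanged */
    };

    static int ring_put(struct ring *r, uint64_t v)    /* producer side */
    {
        uint32_t next = (r->wr + 1) % RING_ENTRIES;
        if (next == r->rd) return 0;       /* full: back-pressure the producer */
        r->entry[r->wr] = v;
        r->wr = next;                      /* publish after the data is written */
        return 1;
    }

    static int ring_get(struct ring *r, uint64_t *v)   /* consumer side */
    {
        if (r->rd == r->wr) return 0;      /* empty */
        *v = r->entry[r->rd];
        r->rd = (r->rd + 1) % RING_ENTRIES;
        return 1;
    }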
The PHY Card:
The PHY card terminates the incoming transmission line. It also performs clock recovery and clock synthesis. Optical signals are converted into a parallel electrical signal which is then an input to a physical framer device which maps the incoming bit stream into the transmitted physical frame. Finally, the physical layer of the corresponding link protocol processes the physical frames. In addition, link layer protocol processing is performed in order to provide a common packet interface to the line card. On the transmission side, the packets or cells are mapped into physical frames. These frames are then encoded into the corresponding physical layer format and sent over the optical fiber to the receiving peer. The physical layer format is preferably either SONET or Gigabit Ethernet. The link layer format is preferably GE, ATM or PPP for POS.
The Line Card:
The line card performs packet forwarding for the egress and ingress path. Full duplex 10 Gbit/s throughput is provided. The line card interfaces to the PHY cards and the switch fabric card. The Line Card is preferably configured for either POS-PHY or UTOPIA III interface to the PHY card. The Line Card preferably hosts two Protocol Internet Engine (PIE) chip sets. On the ingress side, one PIE chip set supports four protocol-processing units (PPU) and one MPPU.
The four PPUs perform routed distribution to the various IPEs in the system. They also provide traffic shaping and scheduling of flows to the switching fabric. The remaining MPPU is used for overall control and supports the distributed bandwidth allocation protocol of the switching fabric.
The Packet Inspector (PI) first examines incoming cells or packets and the protocol information is extracted based on matched patterns in the data flow. This information is then made available to the PPU which is responsible for processing the incoming packet. Cells or packets from a PHY card are processed by a particular PPU based on a chosen configuration. This configuration depends upon the configuration of the PHY card itself and upon the protocol supported by the PHY card. The other PIE chip set, processing the egress flow, is preferably responsible for cell assembly from the switch fabric and packet scheduling for multiple physical ports. Additional support for AAL5 processing is provided for ATM flows. The MPPU from the ingress path is shared for configuration, maintenance and cell extraction of the egress flow. The communication channel provides signaling and connection setup control for the ATM PHY card. The PHY card informs the Line Card about the physical layer status and reports alarm and error conditions. The ingress packet processing preferably involves:
Packet Assembly for ATM traffic (AAL5 processing); Protocol Identification (Packet Data Inspection); Routed Distribution; Scheduling of traffic flows through the switching fabric; Buffer management for ingress cell buffers; and cell scheduling for the switch fabric.
The egress packet processing preferably involves: Traffic Shaping; Packet Assembly for switch fabric flow; MPHY Buffering; Cell Scheduling for ATM with multiple physical interfaces with AAL5 processing (CPCS, SAR); and Packet Scheduling for POS with multiple physical interfaces.
The Internet Processing Engine (IPE) provides the functionality for protocol processing, user management, tunnel management and secure segmentation. It receives the packets from the switching fabric, enforces the service level agreements (SLAs), performs packet classification, filtering and forwarding, and finally schedules the packet for transmission to the requested interface.
The PI is part of the Packet Internet Engine (PIE) chip set, which consists of the Packet Inspector, the Buffer Access Controller, and the Packet Manager. Together with the eight PPUs and the MPPU, the PIE chip set provides a powerful protocol processing unit. The PIE chip set extracts pertinent protocol information and forwards it to the PPUs and the MPPU based on the routed distribution decision made in the Line Cards. The chosen PPU processes this information and performs all necessary packet processing. This includes, besides forwarding and filtering, policing and packet formatting. The MPPU controls the IPE and negotiates the bandwidth allocation of the switch fabric with the other units in the system. It also provides bandwidth management for the configured logical links. The MPPU manages its connections by assigning users and tunnels to individual PPUs for forwarding processing from the Line Card to a particular IPE. Once a connection between the MPPU and Line Card is set up, all packets belonging to such a connection are forwarded from the Line Card to the chosen PPU. A PPU is chosen based on the already assigned connections, their bandwidth, and the bandwidth and QoS required for the new connection. Connectionless traffic (Internet to Internet) is mapped onto an internal connection. If more bandwidth is needed than one PPU can manage, the packets will be distributed over multiple PPUs.
The functionality of the IPEs includes: User Management; Tunnel Management; Logical Link Management; Support for Secure Segmentation; Policing; QoS Control with Differentiated Services Support; Buffer Management; IPv4, IPv6 Forwarding; Packet Classification; Packet Filtering with support for user Filters; Celox Management Database Support; Packet Formatting; and NAT.
The Protocol Internet Engine Chip Set (PIE) :
The Protocol Internet Engine (PIE) provides the data path processing capabilities for the server system at OC-192c rates. The PIE chip set comprises three chips. Together with an interface controller and multiple general purpose CPUs, these chips result in a very high performance packet processing system. Each cell is preferably transferred into the buffer through four buffer access controllers ("BACs") in order to increase the bandwidth to the PPUs and to increase the bandwidth to the external cell buffers. Different portions of the same cell are written to the cell buffers attached to the different BACs. However, the captured portion of the data is sent to just one of the PPUs.
The preferred BAC unit is shown in Fig. 8. The RSU receives incoming data, reformats the data to an internal format, performs a parity check for incoming data, and also performs synchronization control. The preferred format of a cell received by the BAC from the packet inspector is shown in Fig. 9. Referring again to Fig. 8, the Cell Filter unit extracts control information from the cell and sends the cell data to the BAU along with the indication of which portion of the cell has to be stored in this cell buffer. The CFU also sends the cell data stream to the PTU, which translates the PPUID to the appropriate PPU, and thence to the CCU where, based on the PPUID and the capture matrix, the control cell is extracted from the data cell and stored in the CBU. The CMU then transmits the control cell to the appropriate PPUs through a dual port RAM interface.
When the packet has to be dequeued, the control cell corresponding to the packet is sent by the PPU which processed that user to the PM along with the dequeue pointer. This is received by the BEU of the PM, as shown in Fig. 8. The control cell data stream (shown as the narrow arrow in Fig. 8) then goes to the ICU where it is stored while the DSU performs deficit round robin scheduling of the data packets corresponding to the control packets in order to distribute bandwidth equitably to the BACs for sending out packets. In addition, the dequeue pointer corresponding to the packet to be dequeued is sent to the PIU, from where it is transmitted to the PI, where it is received at the PIU and passed on to the BMU. In the BMU, the dequeue pointers are stored in a FIFO while the previous packets are being dequeued. The dequeue pointer information is passed on to the BACs, and the BAU in the BACs dequeues the packet and passes it through the PMU to the packet manager. A packet is dequeued by dequeuing all the cells comprising the packet, which are held in the form of a linked list. Data packets from the data packet stream (shown as the thick arrow in Fig. 8) undergo AAL5 processing (should they need it) in the APU, and are stored in the IDU buffer. The FAU reformats packets into 64 bit slices and controls dequeuing from both the IDU and the DSU's DPRAM in accordance with the PFU. In order to ensure matching of the control packet with the data packet, a sequence number is used at the beginning of both the data and the control cells. Both the control and data streams enter the PFU where they are formatted and sent to the TIU to be sent to the PHY cards or the switch fabric.
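Deficit round robin, as performed by the DSU to distribute bandwidth equitably across the BACs, works roughly as in the following sketch. The queue model is a deliberately simplified stand-in for the hardware; only the deficit counter arithmetic is the point here.

    #include <stdio.h>

    #define NUM_BACS 4
    #define QMAX     16

    /* Toy stand-in for the per-BAC queues: a FIFO of packet lengths. */
    static int qlen[NUM_BACS][QMAX];
    static int qhead[NUM_BACS], qtail[NUM_BACS];
    static int deficit[NUM_BACS];
    static const int quantum = 1500;           /* bytes earned per round */

    static int head_len(int q)                 /* 0 means the queue is empty */
    {
        return (qhead[q] == qtail[q]) ? 0 : qlen[q][qhead[q]];
    }

    /* One DRR pass: each backlogged queue earns a quantum and may send
       packets while its deficit covers the head-of-line packet length. */
    static void drr_round(void)
    {
        for (int q = 0; q < NUM_BACS; q++) {
            int len = head_len(q);
            if (len == 0) continue;
            deficit[q] += quantum;
            while (len > 0 && deficit[q] >= len) {
                deficit[q] -= len;
                printf("BAC %d sends a %d byte packet\n", q, len);
                qhead[q]++;
                len = head_len(q);
            }
            if (len == 0) deficit[q] = 0;      /* idle queues keep no credit */
        }
    }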
The PIE chip set can be configured for multiple purposes and environments. That is, it supports a variable number of general purpose CPUs which are used, in the context of the PIE chip set, as Protocol Processing Units (PPU). One of these CPUs is reserved for maintenance and control purposes and is denoted as the MPPU. The PIE chip set implements all necessary functions in order to hide all data path processing from the actual protocol processing functionality. The PIE chip set extracts all necessary information from the received packets or cells and passes this information on to a selected PPU. The cells are then stored in the cell buffer and linked together as linked lists of cells, which form a packet. Once the PPU has selected a packet for transmission, it passes the pointer to the packet and the necessary formatting and routing information to the PIE chip set. This allows formatting and segmenting of the packet. The packet is then forwarded to the MPHY scheduler as a whole or segmented, based on the configured interface.
Each PIE chip set is differently configured. The PIE chip set on the IPE supports as many as 8 PPUs and 1 MPPU. The PIE chip set on the ingress side of the Line Card is supported by 4 PPUs and 1 MPPU, and an equal number supports the egress side of the Line Card.
The characteristics of the preferred PIE are as follows: Three-Chip Chip Set; Full Data-path processing in hardware; Support for distributed protocol processing by general purpose CPU modules; Highly scalable compute power per packet (up to 64 PPUs can be supported); Flexible interface support with MPHY scheduling; AAL-5 Processing; SAR Sublayer: Assembly and Segmentation for up to 256K connections; CPCS Sublayer: CRC 32 generation and check, padding control, and length field control; Internal Packet Processing; Checksum computation and check; Length field control; Padding control; Micro-programmable Packet Inspection Engine; Supports any layer packet inspection; Supports byte matched pattern processing; Supports bit matched pattern processing; Results are made available to protocol processing units; Supports extraction of any portion of packet for protocol processing; IPv4/IPv6 Header Checksum; Congestion Avoidance Support; EPD; PPD; Internal Back-pressure control; Linked List Control; Supports up to 8 million 64 byte cells (initially a million); Links cells together to form a packet; Garbage Collection; Assembly aging control; Buffer Access; Parity generation and check for signal integrity; Cyclic access for data rates up to 12 Gbit/s; PPU Access; Dual-Port access control for up to 8 dual-port RAMs each with 512/256 KByte memory; Support for dual-port RAM data synchronization; Dual-ring buffer control for each dual-port RAM for data exchange; Threshold-based access control for writes to ring buffer; Support for up to 24 Gb/s throughput (bi-directional); Back-pressuring in case of buffer overflow; Cyclic Packet Scheduling; Packet Scheduling for cyclic access control with support for data rates up to 12 Gb/s; Micro-programmable packet formatter; Supports insertion, removal and overwriting for any byte in a packet at OC-192 speeds; Supports IPv4/IPv6 Header Checksum generation; Supports UDP/TCP checksum generation; Cell Scheduling, Buffering and Linked List Management; Supports cell buffering for up to 512K cells; and supports scheduling for up to 1024 queues.
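Of the functions listed above, the IPv4 header checksum is a well defined, standard computation (RFC 791), sketched below for reference. This is the textbook algorithm, not a description of the PIE's internal checksum circuit; it assumes the checksum field has been zeroed before the sum is taken.

    #include <stdint.h>
    #include <stddef.h>

    /* One's-complement sum of the header as 16-bit words, carries folded
       back in, final result inverted. 'hdr_len' is a multiple of 4 bytes. */
    static uint16_t ipv4_checksum(const uint8_t *hdr, size_t hdr_len)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i + 1 < hdr_len; i += 2)
            sum += (uint32_t)((hdr[i] << 8) | hdr[i + 1]);
        while (sum >> 16)
            sum = (sum & 0xFFFF) + (sum >> 16);   /* fold the carries */
        return (uint16_t)~sum;
    }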
Together with the PPUs, the preferred PIE supports: Packet Classification: Based on Layer 3, 4, ... Information (any layer); Packet Filtering; User programmable filters; Group filters; Firewall processing; Packet Forwarding; IPv4 Lookup Processing; IPv6 Lookup Processing; Tunnel Forwarding; Buffer Management; Dynamic Thresholding on a per user and assigned rate basis; Support for up to 8 million Cell Buffers (initially a million); Congestion avoidance with Early Packet Discard (EPD), Partial Packet Discard (PPD), Selective Packet Discard; Policing; Per User and Logical Link; Enforcing traffic contracts based on SLA; Traffic Shaping; Per User and Logical Link; Support for traffic contracts based on SLAs; Support for Real-time traffic (low delay traffic); QoS Control; Support for differentiated services; Multiple priorities per user; Flow based queuing (not initially supported); Bandwidth Management; Distributed processing for allocation of bandwidth on switch-fabric links including MPHY links; Distributed processing for allocation of bandwidth for logical link management; User Management for up to 512K users; Tunnel Management for up to 128K users; L2TP; IPSec; Multi Protocol Processing; and Support for any protocol.
Traffic Management:
Traffic Management for an Internet access system is complex due to the involvement of various system interfaces.
A system might be connected to users, the Internet backbone, a Local Area Network with file and Web servers, and a Metropolitan Area Network (MAN) which gives access to local TV and media servers, as shown in Fig. 11. Each link has different link properties with respect to available bandwidth and cost per megabyte. This means that a user's share of bandwidth on a particular link has to be based on the properties of this link. A user might get a greater bandwidth share on the MAN link than on the backbone link due to the fact that more bandwidth at a cheaper price is available on the MAN link than on the backbone link. The same is true for bandwidth wholesaling of the preferred system to multiple ISPs who would like to resell bandwidth to their customers. The enabling technology for this model is Secure Segmentation. This model has also led to the introduction of logical link groups. A logical link group can be assigned to a secure segment based on the bandwidth needs of the considered secure segment for a particular link, as shown in Fig. 12. This means that not only user allocation has to be considered, but logical link bandwidth needs must also be included. Therefore, bandwidth is distributed based on traffic class, user, and logical link group. This supports the wholesaling model and takes into account over-subscription requirements in order to support QoS including differentiated services.
The preferred system represents a highly distributed system. In such a system, resources have to be allocated based on the requirements of the traffic of each component. That means, in general, that each component has to take part in a distributed computation method in order to allocate the resources. The traffic management requirements for bandwidth allocation within the preferred system will have to include bandwidth negotiation for the various flows through the switching fabric. One also has to consider the specific requirements to support the above-introduced concept of logical link groups. Since logical link groups are managed in a distributed manner, bandwidth information has to be exchanged between the entities managing one logical link, as shown in Fig. 13. Buffer management and QoS control are an integral part of the overall traffic management scheme implemented in the preferred server system. Due to the large buffers the system has to maintain at various places in the distributed system, a sophisticated buffer management scheme has to be implemented and supported by QoS control in order to support differentiated services and other traffic flow specific requirements.
Policing - Traffic Shaping:
Policing and Traffic Shaping have closely related functionality. Policing ensures that the incoming stream conforms to the negotiated link parameters for a logical link group as well as for the user of the incoming link. Traffic shaping enforces the link parameters for the outgoing traffic stream based on the outgoing user, the logical link group and the link itself. Fig. 14 is intended to illustrate the need for policing as well as traffic shaping. An incoming traffic stream is policed in order to enforce the traffic contracts of a user for the considered link and logical link. Before the traffic is forwarded to another link, the traffic contracts for this particular link have to be enforced. This traffic contract might be much different from the traffic contract of the incoming stream. Consider the case where a user requests information over the Internet backbone link. The bandwidth allocated on this link for this user might be 500 Kbit/s. The logical link bandwidth for the corresponding secure segment might be set to 10 Mbit/s. If the user's access link to the system uses an ATM connection with an assigned rate of 1 Mbit/s and no policing is enforced, the user could use the full 1 Mbit/s. This is possible since the traffic shaped onto the user ATM link allows the user to transmit at the higher rate. Therefore, one mechanism is necessary to police the incoming traffic and another to shape the traffic for a particular link. Fig. 15 shows the schematic implementation of the policer and traffic shaper in an IPE within the preferred server system. A received cell is assigned to a particular user data structure assigned to the incoming link for the considered user. As discussed earlier, the policing information can be directly obtained from the user who is sending a packet based on the connection identifier, the corresponding session ID, or the IP source address. However, if the packet on the incoming connection cannot be directly associated with a user or logical link group, then the packet is classified according to the user and/or logical link group for whom it is destined. Based on the obtained user and logical link information, the incoming traffic stream is policed by queuing up the packets and enforcing the negotiated traffic contract.
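The document does not specify the policer's internal algorithm, but a token bucket is one common way to enforce a negotiated rate such as the 500 Kbit/s contract in the example above, and is sketched below under that assumption. All names and the time base are illustrative.

    #include <stdint.h>

    struct policer {
        double   rate_bytes_per_s;   /* e.g. 500000.0 / 8 for 500 Kbit/s */
        double   burst_bytes;        /* bucket depth */
        double   tokens;
        uint64_t last_ns;            /* timestamp of the last update */
    };

    /* Returns 1 if the packet conforms (forward it), 0 if it violates the
       contract (queue or discard, per the configured policy). */
    static int police(struct policer *p, int pkt_bytes, uint64_t now_ns)
    {
        double elapsed = (double)(now_ns - p->last_ns) / 1e9;
        p->last_ns = now_ns;
        p->tokens += elapsed * p->rate_bytes_per_s;       /* earn tokens */
        if (p->tokens > p->burst_bytes) p->tokens = p->burst_bytes;
        if (p->tokens >= (double)pkt_bytes) {
            p->tokens -= (double)pkt_bytes;
            return 1;
        }
        return 0;
    }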
Once the packet conforms to the incoming link requirements, the packet is shaped based on the user parameters and logical link parameters for the outgoing link. These parameters are obtained from the user connection itself if a session ID can be associated with it. If the packet comes from a user and is forwarded across the Internet to a remote terminal, then the shaping parameters are obtained from the sending user for the corresponding link and the associated logical link group. For connectionless traffic, which cannot be directly associated with users, a logical link group can be assigned based on the IP destination address and/or source address. This allows managing traffic flows between networks.
Switch Fabric Bandwidth Management and Scheduling:
In order to meet the QoS requirements of individual traffic flows and to ensure that delay requirements of certain flows can be met, sophisticated scheduling must be conducted across the entire switch fabric. This scheduling takes into account the allocated user bandwidth, logical link share, buffer occupancy for output queues, available sub-port bandwidth, priority of class of traffic, and expected delay. All this is accomplished while maintaining high throughput across the switch fabric. Attached hereto as Exhibit A are details of the manner in which the preferred server system is programmed so as to minimize inter-IPE card communications.
There are various changes and modifications which may be made to the invention, as apparent to those skilled in the art. However, such changes and modifications are suggested by the present disclosure, and the invention should therefore be limited only by the scope of the claims appended hereto, and their legal equivalents.
EXHIBIT A
1.1.1 Line Cards - Ingress Side
Line Cards do not perform any traffic policing. Policing is performed, in distributed fashion, by all the IPE cards in the system. If, during testing, it is determined that the Line Cards have enough processor and I/O bandwidth to perform policing, this function might be moved to the Line Cards in a future version of the software. Also, Line Cards do not perform any routing table lookups.
All IP packets received, regardless of their encapsulation, must have their destination IP address captured and examined by a LC PPU. One operation that must be performed is determining if the destination IP address is one of the IP addresses of our system. This can be done using a simple hash table. A full CIDR routing search is not necessary, since we are only looking for an exact match. The result of the lookup (if successful) is the Cardld of the IPE that the address belongs to. If a match is found and the Cardld is equal to the Cardld of the IPE that the packet is about to be forwarded to, the packet must be forwarded with the Destination PPU bit set. This is so that when the packet is received, the PI can select the packet to be captured in its entirety (as long as it is not part of a non-encrypted tunnel).
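A minimal sketch of such an exact match hash table follows. The table layout, hash function and probing strategy are all assumptions made for illustration; the only requirement stated above is an exact match from destination IP address to the owning Cardld.

    #include <stdint.h>

    #define HASH_BUCKETS 256   /* power of two, illustrative */

    struct own_ip_entry { uint32_t ip; uint8_t cardld; uint8_t valid; };
    static struct own_ip_entry own_ip[HASH_BUCKETS];

    static unsigned ip_hash(uint32_t ip) { return (ip ^ (ip >> 16)) & (HASH_BUCKETS - 1); }

    /* Returns 1 and writes the Cardld if 'dst' is one of the system's own
       addresses (linear probing on collision), else returns 0. */
    static int lookup_own_ip(uint32_t dst, uint8_t *cardld)
    {
        for (unsigned i = 0, h = ip_hash(dst); i < HASH_BUCKETS;
             i++, h = (h + 1) & (HASH_BUCKETS - 1)) {
            if (!own_ip[h].valid) return 0;       /* empty slot: not present */
            if (own_ip[h].ip == dst) { *cardld = own_ip[h].cardld; return 1; }
        }
        return 0;
    }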
Additionally, if the packet is an IPsec packet that has been received from a large user, and the destination IP address is one of the addresses of the IPE to which the packet is about to be forwarded, the Userld should be determined based on the IPsec Security Parameter Index (SPI) rather than on the hash of the source and destination IP addresses in the IP header of the packet. These operations will be discussed in greater detail in the sections that follow.
PIE Header
[Figures: PIE Header field layout]
1.1.2 ATM Line Card
1.1.2.1 Small Virtual Circuits
The following information is sent to the IPE PPU along with the packet payload:
• In the CSIX Header:
• Destination Flowld:
Sent in the CSIX Header of every cell of the packet to identify where the switch fabric should send it, as well as the priority (class) of the packet.
• In the Cell Header:
• Source Flowld:
Sent in the Cell Header to allow the IPE to reassemble the packet. This is simply the identification of the line card (in the least significant 8 bits) and the priority (class) in the most significant two bits. The priority MUST be the same as is specified in the Destination Flowld of this packet.
• Discard Bit:
Set by the LC PM to inform the IPE that an error (IP checksum, AAL5 CRC, internal parity) was detected in the packet.
• EOP Bit:
Set by the LC PM to indicate the last cell of the packet.
• In the PIE Header:
• Initial PID:
This 3 bit field tells the IPE the type of encapsulation this packet has. The choices are: IPC (for inter-processor communications), IP, PPP, Ethernet, ATM, or MPLS.
• Initial Stage:
This 4 bit field can be used to give additional information to the IPE about the encapsulation of this packet. It specifies which stage in the IPE PI will be the first to inspect the packet.
• Destination PPU (1 bit):
This bit is set for IP packets whose destination address is equal to one of the IP addresses of the IPE card that the packet is being sent to.
• Destination PPUID:
The PPU identifier of the IPE PPU that the packet is being sent to.
• IPE CID:
An index into the connection table of the IPE PPU that the packet is being sent to.
The PI uses the VPI/VCI and PHYID to calculate the LC CID, which is used by the PI as an index into the hardware connection table. The PI reads (amongst other things) a LC PPUID which selects the LC PPU that the control information for the packet should be sent to.
The LC CID is also used by the LC PPU as an index into a software connection table. Typically (though not always), this connection table is used to determine the Userld (which consists of a Destination Cardld, Destination PPUID, and IPE CID) that is sent to the IPE in the PIE Header of the packet. As packets are inspected by the LC, a determination of priority (one of four classes) is made based on the protocols found in the packet. Alternatively, the priority could be read from the connection table. This priority is used to determine the two most significant bits of the Destination Flowld when the packet is forwarded to an IPE.
The ATM cell headers and the AAL5 trailer and padding are removed (by the PM) before forwarding the packet.
The following is a list of all the different types of top level protocol encapsulation that can be received by the ATM line card, along with an explanation of the processing that must take place on the line card for each type of protocol: 1.1.2.1.1 IP (RFC-1483)
The IP packet is forwarded to the IPE. The IPE CID is determined by reading the software connection table.
1.1.2.1.2 Ethernet (RFC-1483)
Each ATM LC must have a standard globally unique Ethernet MAC address permanently assigned to it. Each Ethernet/ATM VC should be configurable as to whether or not it is in "promiscuous" mode - that is, whether or not it should discard unicast packets not sent to its MAC address.
All Ethernet packets are forwarded to the IPE with their MAC headers intact, except for PPPoE session packets (ethertype = 0x8864). For non-PPPoE session packets, the IPE CID is determined by reading the software connection table, and the Initial PID is set to indicate an Ethernet packet.
For PPPoE session packets, the PPPoE header is removed, and the PPPoE Session ID (from the PPPoE header) is used to index into a PPPoE session table, from which the IPE CID can be retrieved. In this case the Initial PID is set to indicate a PPP packet. Additionally, if the PPP/PPPoE protocol type is IP, the PPP header is also removed before the packet is forwarded to the IPE, and the Initial PID is set to indicate an IP packet.
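The PPPoE session handling described above might look roughly as follows, assuming the standard RFC 2516 session header (6 bytes, with the Session ID in bytes 2-3) followed by an uncompressed 2 byte PPP protocol field. The table layout and PID codes are illustrative.

    #include <stdint.h>

    #define PPP_PROTO_IP 0x0021

    enum pid { PID_ETHERNET, PID_PPP, PID_IP };   /* illustrative PID codes */

    /* Session ID -> IPE CID; a flat table here purely for illustration. */
    static uint32_t pppoe_session_table[65536];

    /* 'pkt' points at the PPPoE session header (the MAC header has
       already been consumed). Sets the payload start and IPE CID, and
       returns the Initial PID to place in the PIE Header. */
    static enum pid handle_pppoe_session(const uint8_t *pkt,
                                         const uint8_t **payload,
                                         uint32_t *ipe_cid)
    {
        uint16_t session_id = (uint16_t)((pkt[2] << 8) | pkt[3]);
        uint16_t ppp_proto  = (uint16_t)((pkt[6] << 8) | pkt[7]);

        *ipe_cid = pppoe_session_table[session_id];
        if (ppp_proto == PPP_PROTO_IP) {
            *payload = pkt + 8;     /* strip PPPoE and PPP headers */
            return PID_IP;
        }
        *payload = pkt + 6;         /* strip the PPPoE header only */
        return PID_PPP;
    }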
1.1.2.1.3 PPP
If the PPP protocol type is IP, the PPP header is removed, and the Initial PID is set to indicate IP; otherwise, the PPP header is kept, and the Initial PID is set to indicate PPP. The IPE CID is determined by reading the software connection table.
1.1.2.1.4 MPLS
The top of stack shim label (in the AAL5 PDU) is replaced with the VPI/VCI of the virtual circuit. The VPI/VCI can be deduced from the LC CID. The IPE CID is determined by reading the software connection table.
1.1.2.2 Large Virtual Circuits
With the exception of the following changes, large virtual circuits are handled in the same way as small virtual circuits.
The PI DFU control registers can be programmed (by the MPPU) with the LC CID's of up to 4 large virtual circuits. For these circuits, if the packet contains an IP header, the PI DFU will replace the LC PPUID read from the hardware connection table with a LC PPUID read from a hash table which is indexed by a hash of the source and destination IP addresses of the packet (calculated by the PI DFU).
Any of the entries (circuits) in the software LC connection table can be marked for distribution across multiple IPE PPUs. These are known as large users, and need not be the same virtual circuits that are distributed by the DFU as explained above. For these circuits, if the packet contains an IP header, a new hash is calculated over the source and destination IP addresses of the packet and used to select one of several Userlds (Destination Cardld, Destination PPUID, and IPE CID) that are sent to the IPE in the PIE Header of the packet. The most significant bits of the Destination Flowld are still used to select the priority (class) of the packet. However, in the case of an IPsec packet addressed to the IPE card that the packet will be forwarded to, the Userld is selected using a different means, as described in the IPsec protocol processing section below.
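The per-packet selection among a large user's Userlds might be sketched as follows. The document states only that a hash is calculated over the source and destination IP addresses; the particular hash mixing below, and the table holding the candidate Userlds, are illustrative assumptions.

    #include <stdint.h>

    struct large_user {
        int      n_userlds;       /* primary plus secondaries */
        uint32_t userld[8];       /* Userlds available for this circuit */
    };

    /* Spread one circuit's IP traffic over multiple IPE PPUs by hashing
       the address pair to one of the configured Userlds. */
    static uint32_t select_userld(const struct large_user *u,
                                  uint32_t src_ip, uint32_t dst_ip)
    {
        uint32_t h = src_ip ^ dst_ip;
        h ^= h >> 16;
        h ^= h >> 8;                          /* mix the address bits */
        return u->userld[h % (uint32_t)u->n_userlds];
    }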
1.1.3 POS Line Card
The following information is sent to the IPE PPU along with the packet payload:
• In the CSIX Header:
• Destination Flowld:
Sent in the CSIX Header of every cell of the packet to identify where the switch fabric should send it, as well as the priority (class) of the packet.
• In the Cell Header:
• Source Flowld:
Sent in the Cell Header to allow the IPE to reassemble the packet. This is simply the identification of the line card (in the least significant 8 bits) and the priority (class) in the most significant two bits. The priority MUST be the same as is specified in the Destination Flowld of this packet.
• Discard Bit:
Set by the LC PM to inform the IPE that an error (IP checksum, internal parity) was detected in the packet.
• EOP Bit:
Set by the LC PM to indicate the last cell of the packet.
• In the PIE Header:
• Initial PID:
This 3 bit field tells the IPE the type of encapsulation this packet has. The choices are: IPC, IP, PPP, or MPLS.
• Initial Stage:
This 4 bit field can be used to give additional information to the IPE about the encapsulation of this packet. It specifies which stage in the IPE PI will be the first to inspect the packet.
• Destination PPU (1 bit):
This bit is set for IP packets whose destination address is equal to one of the IP addresses of the IPE card that the packet is being sent to.
• Destination PPUID:
The PPU identifier of the IPE PPU that the packet is being sent to.
• IPE CID:
An index into the connection table of the IPE PPU that the packet is being sent to.
Unless MPLS/PPP/SONET is being used, each PPP/SONET PHY comprises a single user. When MPLS is in use, however, each MPLS Label Switched Path (LSP) represents an additional user.
For POS Line Cards, the LC CID is simply the PHYID. The PI DFU control registers can be programmed (by the MPPU) with the LC CID's of up to 4 PHYs. For these PHYs, if the packet contains an IP header, the PI DFU will replace the LC PPUID read from the hardware connection table with a LC PPUID read from a hash table which is indexed by a hash of the source and destination IP addresses of the packet (calculated by the PI DFU). This capability of the PI DFU must be used for OC-192c and OC-48c PHYs in order to distribute the load over multiple LC PPUs. For OC-12c and smaller PHYs, the PI DFU need not be used. Instead, the PI uses the LC CID as an index into the hardware connection table. The PI reads (amongst other things) a LC PPUID which selects the LC PPU that the control information for the packet should be sent to.
As packets are inspected by the LC, a determination of priority (one of four classes) is made. This priority is used to determine the two most significant bits of the Destination Flowld when the packet is forwarded to an IPE.
The following is a list of all the different types of top level protocol encapsulation that can be received by the POS line card, along with an explanation of the processing that must take place on the line card for each type of protocol:
1.1.3.1 PPP Control Protocol (LCP, PAP, CHAP, IPCP, MPLSCP, etc.)
This category includes not only PPP Control Protocols, but also any Network Protocol other than IP or MPLS. The LC PPU uses the LC CID (which is really just the PHYID) as an index into a software PHY table. This table provides the Primary Userld, which determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet. For large and small PPP/SONET users, all non-IP and non-MPLS packets are sent to the IPE PPU identified by the Primary Userld. No distribution is performed for these packets. Also, the Initial PID is set to indicate a PPP packet.
1.1.3.2 IP
The LC PPU uses the LC CID (which is really just the PHYID) to index into and read from the software PHY table. From this the LC determines whether this is a small user or a large user. For small users, the Primary Userld is also read from the PHY table. This determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet.
For large users, however, a hash is calculated over the source and destination IP addresses of the packet and used to select either the Primary Userld or one of several Secondary Userlds. The selected Userld determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet.
The PPP header is removed before the packet is forwarded to the IPE, and the Initial PID is set to indicate an IP packet.
1.1.3.3 MPLS
As is the case with IP/PPP/SONET, the LC CID only identifies the PHYID. Therefore, when the LC PI identifies an MPLS packet, the top of stack label must be captured in order to identify the user. For each POS PHY, the LC PPU must maintain a table of MPLS LSPs. The LC CID selects which table, and the top of stack label is used to index into the table. For small users, the Primary Userld that corresponds to the LSP can then be read from the table. For large users, however, a similar process to the one described above for IP is used. A hash is calculated over the source and destination IP addresses of the packet and used to select either the Primary Userld or one of several Secondary Userlds. The selected Userld determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet.
The PPP header is removed before the packet is forwarded to the IPE, and the Initial PID is set to indicate an MPLS packet.
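The per-PHY LSP table lookup described above might be sketched as follows. The direct-indexed table is an illustrative simplification (a real table would not reserve a slot for every possible 20 bit label); only the extraction of the top of stack label from the MPLS shim follows the standard encoding.

    #include <stdint.h>

    #define NUM_PHYS  16          /* illustrative */
    #define LSP_SLOTS 4096        /* illustrative bound on labels per PHY */

    /* One LSP table per PHY; the LC CID (the PHYID) selects the table. */
    static uint32_t lsp_userld[NUM_PHYS][LSP_SLOTS];

    /* The label is the top 20 bits of the 4 byte MPLS shim header. */
    static uint32_t mpls_label(const uint8_t *shim)
    {
        return ((uint32_t)shim[0] << 12) | ((uint32_t)shim[1] << 4)
             | ((uint32_t)shim[2] >> 4);
    }

    static uint32_t lookup_lsp_userld(unsigned lc_cid, const uint8_t *shim)
    {
        return lsp_userld[lc_cid % NUM_PHYS][mpls_label(shim) % LSP_SLOTS];
    }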
1.1.4 Ethernet Line Card
The following information is sent to the IPE PPU along with the packet payload:
• In the CSIX Header:
• Destination Flowld:
Sent in the CSIX Header of every cell of the packet to identify where the switch fabric should send it, as well as the priority (class) of the packet.
• In the Cell Header:
• Source Flowld:
Sent in the Cell Header to allow the IPE to reassemble the packet. This is simply the identification of the line card (in the least significant 8 bits) and the priority (class) in the most significant two bits. The priority MUST be the same as is specified in the Destination Flowld of this packet.
• Discard Bit:
Set by the LC PM to inform the IPE that an error (IP checksum, internal parity) was detected in the packet.
• EOP Bit:
Set by the LC PM to indicate the last cell of the packet.
• In the PIE Header:
• Initial PID: This 3 bit field tells the IPE the type of encapsulation this packet has. The choices are: IPC, Ethernet, PPP, IP, or MPLS.
• Initial Stage:
This 4 bit field can be used to give additional information to the IPE about the encapsulation of this packet. It specifies which stage in the IPE PI will be the first to inspect the packet.
• Destination PPU (1 bit):
This bit is set for IP packets whose destination address is equal to one of the IP addresses of the IPE card that the packet is being sent to.
• Destination PPUID:
The PPU identifier of the IPE PPU that the packet is being sent to.
• IPE CID:
An index into the connection table of the IPE PPU that the packet is being sent to.
Unless MPLS or PPPoE is being used over the Ethernet, each PHY comprises a single user. When MPLS or PPPoE is in use, however, each MPLS Label Switched Path (LSP) or PPPoE session represents an additional user.
For Ethernet Line Cards, the LC CID is simply the PHYID. The PI DFU control registers can be programmed (by the MPPU) with the LC CID's of up to 4 PHYs. For these PHYs, if the packet contains an IP header, the PI DFU will replace the LC PPUID read from the hardware connection table with a LC PPUID read from a hash table which is indexed by a hash of the source and destination IP addresses of the packet (calculated by the PI DFU). This capability of the PI DFU must be used for 10 Gigabit Ethernet Cards in order to distribute the load over multiple LC PPUs. For 1 Gigabit and smaller PHYs, the PI DFU need not be used. Instead, the PI uses the LC CID as an index into the hardware connection table. The PI reads (amongst other things) a LC PPUID which selects the LC PPU that the control information for the packet should be sent to.
Each Ethernet PHY must have a globally unique Ethernet MAC address permanently assigned to it. All Ethernet packets are forwarded to the IPE with their MAC headers intact, and with the Initial PID set to indicate Ethernet, except for MPLS and PPPoE session packets (ethertype = 0x8864).
As packets are inspected by the LC, a determination of priority (one of four classes) is made. This priority is used to determine the two most significant bits of the Destination Flowld when the packet is forwarded to an IPE.
The following is a list of all the different types of top level protocol encapsulation that can be received by the Ethernet line card, along with an explanation of the processing that must take place on the line card for each type of protocol:
1.1.4.1 IP
The LC PPU uses the LC CID (which is really just the PHYID) to index into and read from the software PHY table. From this the LC determines whether this is a small user or a large user. For small users, the Primary Userld is also read from the PHY table. This determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet.
For large users, however, a hash is calculated over the source and destination IP addresses of the packet and used to select either the Primary Userld or one of several Secondary Userlds. The selected Userld determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet.
The packet is forwarded to the IPE with the Ethernet MAC header intact, and the Initial PID is set to indicate an Ethernet packet.
1.1.4.2 PPPoE Session
For PPPoE Session packets, the Ethernet and PPPoE headers are removed, and the PPPoE Session ID (from the PPPoE header) is used to index into a PPPoE session table, from which the Userld (IPE Cardld, IPE PPUID and IPE CID) can be retrieved. A unique PPPoE Session table can be maintained for each PHY, and the LC CID can be used to select which session table to use.
If the PPP protocol type is IP, the PPP header is also removed, and the Initial PID is set to indicate IP; otherwise, the PPP header is kept, and the Initial PID is set to indicate PPP.
1.1.4.3 MPLS
The LC CID only identifies the PHYED that the packet was received on. Therefore, when the LC PI identifies an MPLS packet, the top of stack label must be captured in order to identify the user. For each Ethernet PHY, the LC PPU must maintain a table of MPLS LSPs. The LC CED selects which table, and the top of stack label is used to index into the table. For small MPLS users, the Primary UserlD that corresponds to the LSP can then be read the table. For large users, however, a similar process to the one described above for IP is used. A hash is calculated over the source and destination IP addresses of the packet and used to select either the Primary UserlD or one of several Secondary UserlDs. The selected UserlD determines where the packet is sent as well as flie EPE CID that is sent in the PIE Header ofthe packet.
The Ethernet header is removed before the packet is forwarded to the IPE, and the Initial PID is set to indicate an MPLS packet.
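For reference, the following C sketch extracts the top of stack label from the 32-bit MPLS shim entry (label in the upper 20 bits, followed by EXP, S, and TTL); the helper name is an assumption.

    #include <stdint.h>

    /* An MPLS shim entry is 32 bits: label (20) | EXP (3) | S (1) | TTL (8). */
    static inline uint32_t mpls_top_label(const uint8_t *shim)
    {
        uint32_t entry = ((uint32_t)shim[0] << 24) | ((uint32_t)shim[1] << 16) |
                         ((uint32_t)shim[2] << 8)  |  (uint32_t)shim[3];
        return entry >> 12;            /* upper 20 bits are the label */
    }

The returned label would then index the per-PHY LSP table selected by the LC CID, yielding the Primary UserID directly for small users or feeding the hash-based selection for large users.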
1.1.4.4 Other Ethernet Protocols (ARP, PPPoE Discovery, etc.)
This category includes all Ethernet protocols (ethertypes) other than IP, MPLS, and PPPoE Session.
The LC PPU uses the LC CID (which is really just the PHYID) as an index into a software PHY table. This table provides the Primary UserID, which determines where the packet is sent as well as the IPE CID that is sent in the PIE Header of the packet. For large and small Ethernet users, these packets are sent to the IPE PPU identified by the Primary UserID. No distribution is performed for these packets. Also, the Initial PID is set to indicate an Ethernet packet.
1.2 Line Cards - Egress Side
Line cards perform all the traffic shaping for the system.
1.2.1 ATM Line Card
The following information is received from the IPE PPU along with the packet payload:
• In the CSIX Header:
• Destination FlowID:
Sent in the CSIX Header of every cell of the packet to identify where the switch fabric should send it as well as the priority (class) of the packet.
• In the Cell Header:
• Source FlowID:
Sent in the Cell Header to allow the LC to reassemble the packet. This is simply the identification of the IPE card (in the least significant 8 bits) and the priority (class) in the most significant two bits. The priority MUST be the same as is specified in the Destination FlowID of this packet.
• Discard Bit:
Set by the IPE PM to inform the LC that an error (internal parity) was detected in the packet.
• EOP Bit:
Set by the IPE PM to indicate the last cell of the packet.
• In the PIE Header:
• Initial PID:
This 3-bit field tells the LC the type of encapsulation this packet has. The choices are: IPC (for inter-processor communications), IP, Ethernet, PPP, or MPLS.
• Initial Stage:
Always 0.
• Destination PPU (1 bit):
Always 0.
• Destination PPUID:
The PPU identifier of the LC PPU that the packet is being sent to.
• LC CID:
This is an index into the connection table of the LC.
The Destination PPUID selects the LC PPU that will process the packet. The LC CID is used by the LC PPU as an index into a software connection table. This connection table provides the shaping parameters, any additional encapsulation that must be added by the LC, the PHYID, and the ATM VPI/VCI for the packet.
The priority (one of four classes) is based on the two most significant bits of the Source FlowID in the Cell Header (a decoding sketch follows below). The priority is used by the Traffic Shaper and the Scheduler to determine when to forward the packet to the PHY.
The ATM cell headers and the AAL5 trailer and padding are always added (by the PM) before forwarding the packet to the PHY card.
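The following C sketch decodes the Cell Header fields listed above. The width of the FlowID is not stated in this section, so a 16-bit field is assumed purely for illustration; the placement of the card identifier and priority bits follows the text.

    #include <stdint.h>

    struct cell_info {
        uint8_t ipe_cardid;   /* least significant 8 bits of the Source FlowID */
        uint8_t priority;     /* class 0-3, most significant two bits */
    };

    /* Assumes a 16-bit FlowID; the field width is an assumption. */
    static void decode_source_flowid(uint16_t source_flowid, struct cell_info *ci)
    {
        ci->ipe_cardid = (uint8_t)(source_flowid & 0xFFu);
        ci->priority   = (uint8_t)(source_flowid >> 14);
    }

Reassembly state would be keyed by the Source FlowID, and the decoded priority must match the class bits carried in the packet's Destination FlowID.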
The following is a list of all the different types of top level protocol encapsulation that can be received from an IPE by the ATM line card, along with an explanation of the processing that must take place on the line card for each type of protocol:
1.2.1.1.1 IP
The desired encapsulation for the packet can be either IP/PPP/PPPoE/Ethernet/ATM, IP/PPP/ATM or IP/ATM. The PPU can determine which it is from the connection table. If the encapsulation should be IP/PPP/PPPoE/Ethernet/ATM, the connection table will provide the necessary information to add the missing headers. If the encapsulation should be IP/PPP/ATM, a PPP header identifying the protocol as IP is added. Also, the entry in the connection table may specify that an LLC header should also be added.
1.2.1.1.2 Ethernet
The connection table may specify that an LLC header should be added to the beginning of the packet. Otherwise the packet is sent as is.
1.2.1.1.3 PPP
The desired encapsulation may be either PPP/PPPoE/Ethernet/ATM or PPP/ATM. The PPU can determine which it is from the connection table. If it is PPP/ATM, the packet is sent as is; otherwise, the connection table will provide the necessary information to add a PPPoE header and an Ethernet header.
1.2.1.1.4 MPLS
As with all other encapsulations, the proper VPI/VCI is obtained from the connection table.
1.2.2 POS Line Card
The following information is received from the IPE PPU along with the packet payload:
• In the CSIX Header:
• Destination FlowID:
Sent in the CSIX Header of every cell of the packet to identify where the switch fabric should send it as well as the priority (class) of the packet.
• In the Cell Header:
• Source FlowID:
Sent in the Cell Header to allow the LC to reassemble the packet. This is simply the identification of the IPE card (in the least significant 8 bits) and the priority (class) in the most significant two bits. The priority MUST be the same as is specified in the Destination FlowID of this packet.
• Discard Bit:
Set by the IPE PM to inform the LC that an error (internal parity) was detected in the packet.
• EOP Bit:
Set by the IPE PM to indicate the last cell of the packet.
• In the PIE Header:
• Initial PID:
This 3-bit field tells the LC the type of encapsulation this packet has. The choices are: IPC (for inter-processor communications), IP, PPP, or MPLS.
• Initial Stage:
Always 0.
• Destination PPU (1 bit):
Always 0.
• Destination PPUID:
The PPU identifier of the LC PPU that the packet is being sent to.
• LC CID:
This is an index into the connection table of the LC.
The Destination PPUID selects the LC PPU that will process the packet. The LC CID is used by the LC PPU as an index into a software connection table. This connection table provides the shaping parameters and the PHYID for the packet.
The following is a list of all the different types of top level protocol encapsulation that can be received by the POS line card, along with an explanation of the processing that must take place on the line card for each type of protocol:
1.2.2.1 PPP
No additional processing is needed.
1.2.2.2 IP
A PPP header identifying the packet as an IP packet is added.
1.2.2.3 MPLS
A PPP header identifying the packet as an MPLS packet is added.
1.2.3 Ethernet Line Card
The following information is received from the IPE PPU along with the packet payload:
• In the CSIX Header:
• Destination FlowID:
Sent in the CSIX Header of every cell of the packet to identify where the switch fabric should send it as well as the priority (class) of the packet.
• In the Cell Header:
• Source FlowID:
Sent in the Cell Header to allow the LC to reassemble the packet. This is simply the identification of the IPE card (in the least significant 8 bits) and the priority (class) in the most significant two bits. The priority MUST be the same as is specified in the Destination FlowID of this packet.
• Discard Bit:
Set by the IPE PM to inform the LC that an error (internal parity) was detected in the packet.
• EOP Bit:
Set by the IPE PM to indicate the last cell of the packet.
• In the PIE Header:
• Initial PID:
This 3-bit field tells the LC the type of encapsulation this packet has. The choices are: IPC (for inter-processor communications), Ethernet, PPP, IP, or MPLS.
• Initial Stage:
Always 0.
• Destination PPU (1 bit):
Always 0.
• Destination PPUID:
The PPU identifier of the LC PPU that the packet is being sent to.
• LC CID:
This is an index into the connection table of the LC.
The Destination PPUID selects the LC PPU that will process the packet. The LC CID is used by the LC PPU as an index into a software connection table. This connection table provides the shaping parameters and the PHYID for the packet.
The following is a list of all the different types of top level protocol encapsulation that can be received by the Ethernet line card, along with an explanation of the processing that must take place on the line card for each type of protocol:
1.2.3.1 Ethernet
No additional processing is needed. All IP/Ethernet packets are sent using this type because the IPE, not the LC, implements ARP, and therefore adds the Ethernet header to all IP packets before sending them to the LC.
1.2.3.2 PPP
The desired encapsulation is PPP/PPPoE/Ethernet. The connection table provides the necessary information to add a PPPoE header and an Ethernet header.
1.2.3.3 IP
The desired encapsulation is IP/PPP/PPPoE/Ethernet. A PPP header indicating an IP packet is added. The connection table then provides the necessary information to add a PPPoE header and an Ethernet header.
1.2.3.4 MPLS
The connection table provides the information needed to add an Ethernet header (the destination MAC address is all that is required from the connection table).
1.3 IPE Card
1.3.1 IPE Ingress Protocols
All packets received by an IPE card from the Line Cards (or from other IPEs) will be of one of the following types. The Initial PID field in the PIE Header will identify which one of these types each packet corresponds to. If there are more than 8 such types, the Initial Stage field in the PIE Header can be used to select a different stage to begin inspection, each of which allows 8 additional protocols to be identified by the Initial PID field.
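A C sketch of this two-level dispatch follows. The enumeration values and the number of stages are assumptions; the text specifies only that the Initial PID is a 3-bit selector and that the Initial Stage selects among additional groups of 8 protocols.

    #include <stdint.h>

    #define NUM_STAGES 4              /* assumption: inspection stages */

    /* Assumed encoding; the text defines only the field widths. */
    enum initial_pid { PID_IPC = 0, PID_IP, PID_PPP, PID_ETHERNET, PID_MPLS };

    typedef void (*ingress_handler)(const uint8_t *pkt, unsigned len);

    /* Eight 3-bit PID slots per Initial Stage value. */
    static ingress_handler dispatch_table[NUM_STAGES][8];

    void ipe_ingress(unsigned initial_stage, unsigned initial_pid,
                     const uint8_t *pkt, unsigned len)
    {
        ingress_handler h =
            dispatch_table[initial_stage % NUM_STAGES][initial_pid & 0x7u];
        if (h)
            h(pkt, len);
    }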
The IPE CID and PPUID in the PIE Header of the received packet combine with the FlowID to give the UserID. Only the least significant 18 of the 20 bits of the IPE CID are used.
1.3.1.1 IPC
These packets are used for inter-processor communication within the system. The PI should be programmed to capture these packets to a PPU (as specified in the Destination PPU field in the PIE Header) in their entirety.
1.3.1.2 IP
These packets consist of only an IP packet. That is, an IP header, followed by an IP payload (which might include TCP, UDP, ICMP, etc.). This category does not include IP packets received over MPLS or over Ethernet.
These packets can come from either a POS LC, an ATM LC, or an Ethernet LC. The possible encapsulations that could result in such a packet are: IP/ATM, IP/PPP/ATM, IP/PPP/SONET, IP/PPP/PPPoE/Ethernet, and IP/PPP/PPPoE/Ethernet/ATM.
The IPE CID uniquely identifies the PPPoE Session ID, or the ATM Virtual Circuit that the packet was received on, as well as the PHY/LC that it was received on. In the case of IP/PPP/SONET, the IPE CID will identify only the PHY/LC that the packet was received on; that is, it will be constant for all IP/PPP/SONET packets received from a particular PHY/LC.
1.3.1.3 PPP
This category consists of all PPP packets received whose PPP protocol type was not IP or MPLS. These packets can come from a POS LC, an ATM LC, or an Ethernet LC. For those PPP sessions that will be tunneled using L2TP, the IPE must add a new PPP header to the IP/PPP and MPLS/PPP packets, since for those protocols, the PPP header will have been removed by the Line Card.
The possible encapsulations that could result in such a packet are: PPP/SONET, PPP/PPPoE/Ethernet, PPP/ATM, and PPP/PPPoE/Ethernet/ATM.
The IPE CID uniquely identifies the PPPoE Session ID, or the ATM Virtual Circuit that the packet was received on, as well as the PHY/LC that it was received on. In the case of PPP/SONET, the IPE CID will identify only the PHY/LC that the packet was received on; that is, it will be constant for all PPP/SONET packets received from a particular PHY/LC.
1.3.1.4 Ethernet
This category consists of all Native Ethernet or Ethernet/ATM packets received except for MPLS (ethertype = 0x????) and PPPoE data packets (ethertype = 0x8864). This category also includes packets whose destination MAC address is not equal to the MAC address of the PHY on which the packet was received (broadcast and multicast packets, and unicast packets if in promiscuous mode).
The possible encapsulations that could result in such a packet are: ARP/Ethernet, IP/Ethernet, PPPoE Discovery/Ethernet, ARP/Ethernet/ATM, IP/Ethernet/ATM, and PPPoE Discovery/Ethernet/ATM.
For Ethernet/ATM, the IPE CID uniquely identifies the ATM Virtual Circuit that the packet was received on as well as the PHY/LC that it was received on. In the case of Native Ethernet, the IPE CID will identify only the PHY/LC that the packet was received on; that is, it will be constant for all packets received from a particular PHY/LC.
1.3.1.5 MPLS
This category consists of packets which begin with an MPLS label stack. These can come from a POS LC, an ATM LC or an Ethernet LC.
The possible encapsulations that could result in such a packet are: MPLS/PPP/SONET, MPLS/Ethernet, and MPLS/ATM. In the case of MPLS/ATM, the Line Card will have replaced the top of stack shim label with the real label because the real label was encoded as the ATM VPI/VCI in the packet received from the network.
The following encapsulations are NOT supported: MPLS/PPP/PPPoE/Ethernet, MPLS/PPP/ATM, MPLS/PPP/PPPoE/Ethernet/ATM, and MPLS/Ethernet/ATM.
The IPE CID uniquely identifies the incoming top of stack MPLS label, as well as the PHY/LC that it was received on. In the case of MPLS/ATM, the top of stack label has a one to one correspondence with the ATM Virtual Circuit that the packet was received on.
1.3.2 IPE Ingress Protocol Decoding
The following table shows the first two layers of protocols that must be identified by the PI on the IPE for each packet that passes through it.
(Table not reproducible here: image imgf000049_0001 in the original filing.)
1.3.2.1 IP Packets
All IP packets received by the IPE, whether still encapsulated with Ethernet, with MPLS, or without encapsulation, will fall into one of two categories: those for which the destination IP address is equal to one of the addresses of the IPE, and those for which it isn't. In the case of the latter, the packet must be forwarded or discarded by the PPU. But for the former, it must be determined whether or not the packet can be processed entirely by the PPU, or whether it must be sent to the MPPU for further processing. If it must be sent to the MPPU, it must be captured in its entirety.
All IP packets received, regardless of their encapsulation, must have their destination IP address captured and examined. All routing table searches are performed by the IPE cards. If the destination address is one of the system's IP addresses, but not one of the IPE card's addresses, the packet must be forwarded with the Destination PPU bit set.
1.3.2.2 L2TP Tunnels
Each L2TP tunnel is handled entirely by a particular IPE card. Each session within the tunnel must be handled entirely by a particular PPU. This requirement comes primarily from the need to support sequence numbers on the data sessions:
RFC-2661: "Each peer maintains separate sequence numbers for the control connection and each individual data session within a tunnel. "
Therefore, large PPP users cannot be tunneled. All L2TP control packets are forwarded to and processed by the MPPU of the IPE card.
1.3.2.2.1 L2TP Access Concentrator (LAC)
Any PPP user can be selected for L2TP tunneling by the IPE MPPU. If a user is selected for tunneling, then the PPU receiving PPP packets from that user must encapsulate those packets, first with an L2TP header, then a UDP header, and finally an IP header. The IP header's destination address will be that of the configured LNS, and the source address will be one of the IP addresses of the IPE. The resulting IP packet can then be forwarded using the standard IP forwarding procedure to the appropriate Line Card for transmission. It should be evident that tunneled PPP users on different IPE cards will be placed in separate tunnels even if being tunneled to the same destination LNS.
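The following C sketch illustrates the LAC-side header stacking in the order just described (L2TP, then UDP on port 1701, then IP). Checksums and optional L2TP fields are omitted, the buffer is assumed to have sufficient headroom, and all names are assumptions for illustration.

    #include <stdint.h>
    #include <string.h>
    #include <stddef.h>

    struct buf { uint8_t *data; size_t len; };   /* headroom precedes data */

    static uint8_t *prepend(struct buf *b, size_t n)
    {
        b->data -= n;
        b->len  += n;
        return b->data;
    }

    static void put16(uint8_t *p, uint16_t v) { p[0] = (uint8_t)(v >> 8); p[1] = (uint8_t)v; }
    static void put32(uint8_t *p, uint32_t v) { put16(p, (uint16_t)(v >> 16)); put16(p + 2, (uint16_t)v); }

    void lac_encapsulate(struct buf *pkt, uint16_t tunnel_id, uint16_t session_id,
                         uint32_t ipe_src_ip, uint32_t lns_dst_ip)
    {
        /* 1. Minimal L2TPv2 data header: flags/version, Tunnel ID, Session ID. */
        uint8_t *l2tp = prepend(pkt, 6);
        put16(l2tp + 0, 0x0002);
        put16(l2tp + 2, tunnel_id);
        put16(l2tp + 4, session_id);

        /* 2. UDP header, source and destination port 1701 (L2TP). */
        uint8_t *udp = prepend(pkt, 8);
        put16(udp + 0, 1701);
        put16(udp + 2, 1701);
        put16(udp + 4, (uint16_t)pkt->len);  /* UDP length includes this header */
        put16(udp + 6, 0);                   /* checksum omitted for brevity */

        /* 3. IPv4 header: source = one of the IPE's addresses,
              destination = the configured LNS; checksum omitted. */
        uint8_t *ip = prepend(pkt, 20);
        memset(ip, 0, 20);
        ip[0] = 0x45;                        /* version 4, IHL 5 */
        put16(ip + 2, (uint16_t)pkt->len);   /* total length */
        ip[8] = 64;                          /* TTL */
        ip[9] = 17;                          /* protocol = UDP */
        put32(ip + 12, ipe_src_ip);
        put32(ip + 16, lns_dst_ip);
    }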
IP packets received from the LNS will be sent by the receiving Line Card to the IPE PPU associated with the ingress interface (user). This PPU may well be on a different IPE card than the one handling the tunnel, which is easily determined from the destination IP address of the packet. In this case, the PPU receiving the packet from the Line Card must forward the packet to the IPE card handling the tunnel. In addition, the L2TP Session ID can be used to identify which PPU on that IPE card should receive the packet (this PPUID must be sent in the PIE Header so that the receiving PI will know which PPU should receive the packet). This is done by always encoding the PPUID of the PPU handling a particular session in the most significant four bits of the L2TP Session ID.
RFC-2661: "Since L2TP sessions are named by identifiers that have local significance only. That is, the same session will be given different Session Ids by each end of the session. Session ID in each message is that of the intended recipient, not the sender. "
The PPU to which the packet is sent can in turn de-encapsulate the PPP packet and forward it to the PPP user identified by the L2TP Session ID.
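A minimal C sketch of this Session ID convention, assuming the standard 16-bit L2TP Session ID and a 4-bit PPUID (the helper names are assumptions):

    #include <stdint.h>

    /* PPUID in the most significant 4 bits of the 16-bit L2TP Session ID. */
    static uint16_t make_session_id(uint8_t ppuid, uint16_t local_index)
    {
        return (uint16_t)(((ppuid & 0xFu) << 12) | (local_index & 0x0FFFu));
    }

    static uint8_t session_id_to_ppuid(uint16_t session_id)
    {
        return (uint8_t)(session_id >> 12);
    }

With this convention, any card receiving a tunneled packet can route it to the owning PPU without consulting a session table, because L2TP guarantees that peers echo back the Session ID assigned by this end.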
1.3.2.2.2 L2TP Network Server (LNS)
When functioning as an LNS, L2TP packets received from the LAC will be forwarded, either by a Line Card or another IPE, to the IPE handling the tunnel. This is because the destination IP address of the packet will be equal to one of the IP addresses of the IPE handling the tunnel. Within that IPE, the PPU that should process the L2TP session is identified using the most significant four bits of the L2TP Session ID. The PPU will de-encapsulate the PPP packet, then process the PPP packet as if it was received from a PPP user. From this point on, the processing is the same as for a "real" PPP user.
In the other direction, packets which, when their destination IP address is looked up in the routing table, yield a destination PPP user that is associated with an L2TP tunnel instead of with a Line Card, must be sent to the IPE PPU handling the PPP user. This is because of the sequence number requirement of L2TP mentioned above. Once received by this PPU, the packet must have a PPP header added, as is the case with a normal PPP user. Then, instead of forwarding the packet to a Line Card, an L2TP header is added, followed by a UDP header and an IP header. The IP destination address is that of the LAC at the other end of the tunnel. The resulting IP packet can then be forwarded using the standard IP forwarding procedure to the appropriate Line Card for transmission.
1.3.2.3 IPSec Tunnels
Each IPSec Security Association (SA) is handled entirely by a particular IPE PPU. As defined in RFC-2401, a Security Association is a unidirectional, "simplex" connection that provides security services to the traffic carried by it.
1.3.2.3.1 Inbound IPsec processing
• Plain packets
Every PPU must have a copy of the SPD for every user from which it receives packets. In other words, for every UserID (Primary or Secondary) that points to a particular IPE PPU, the PPU must have a pointer to an SPD. If a user's traffic is split among multiple PPUs (i.e.: a large user), then they should have identical SPDs configured for the user, and each will create its own set of Security Associations for its share of the user's traffic. Every packet received must be processed using the SPD of the user the packet is received from.
• Tunneled packets
The SPI is the field in the IPSec header that, along with the destination IP address, identifies the SA. Traffic from a small user will always be directed by the receiving Line Card to a particular PPU. This PPU uses the SPI to identify the SA, and thus has access to the information it needs to decapsulate the packet. For large users, however, the Line Card must detect IPsec packets whose IP destination address is one of the addresses that belongs to the IPE card identified by the user's Primary UserID. Rather than select a UserID (primary or secondary) based on the hash of the source and destination IP addresses of the packet, the LC must use the SPI in the IPsec header to select the UserID, and thus the IPE PPU, to send the packet to. In order to accomplish this, the most significant 4 bits of an SPI always contain the PPUID identifying the PPU that is handling the SA identified by that SPI (a sketch of this extraction follows this list).
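The following C sketch shows the corresponding LC-side extraction. It assumes ESP, where the SPI is the first 32-bit word of the IPsec header; the function name is an assumption.

    #include <stdint.h>

    /* For ESP the SPI is the first 32-bit word of the IPsec header. */
    static uint8_t ipsec_ppuid_from_spi(const uint8_t *esp_hdr)
    {
        uint32_t spi = ((uint32_t)esp_hdr[0] << 24) | ((uint32_t)esp_hdr[1] << 16) |
                       ((uint32_t)esp_hdr[2] << 8)  |  (uint32_t)esp_hdr[3];
        return (uint8_t)(spi >> 28);   /* most significant 4 bits = PPUID */
    }

Because the IPE chooses the SPI when the SA is negotiated, encoding the PPUID there costs nothing on the wire and lets the Line Card steer every packet of an SA to the single PPU that owns it.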
1.3.2.3.2 Outbound IPsec processing
Since IPSec performs tunneling at Layer 3, entire users don't get tunneled. Rather, each packet about to be sent to a user is individually examined using the Security Policy Database (SPD) associated with that user, from which a pointer to an SA (in the SAD also associated with the user) is obtained.
The difficulty with outbound processing is that, as discussed earlier, the configuration information (and thus the SPD) associated with the egress user is not readily available. The information must be requested from the PPU identified by the Primary UserID and stored in a cache. Each PPU sending to a user will thus create its own set of Security Associations.
1.3.3 IPE Packet Forwarding and Egress Processing
The IPE card PPUs perform routing table searches for all packets that need forwarding. The global Forwarding Information Base (FIB) is distributed to every PPU in the system, and contains IP unicast and multicast routing tables in a form that facilitates longest matching prefix searches (i.e.: Patricia tries), as well as tables required for MPLS label based forwarding.
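As an illustration only, the following C sketch performs a longest-matching-prefix lookup over IPv4 destinations using a plain binary trie rather than the compressed Patricia trie mentioned above; the node layout and names are assumptions.

    #include <stdint.h>
    #include <stddef.h>

    struct fib_node {
        struct fib_node *child[2];
        uint32_t primary_userid;     /* valid only when has_route is set */
        int      has_route;
    };

    /* Returns the Primary UserID of the longest prefix covering dst_ip,
       or 0 if there is no matching route. */
    uint32_t fib_lookup(const struct fib_node *root, uint32_t dst_ip)
    {
        uint32_t best = 0;
        const struct fib_node *n = root;
        for (int bit = 31; n != NULL; bit--) {
            if (n->has_route)
                best = n->primary_userid;   /* deepest match so far wins */
            if (bit < 0)
                break;
            n = n->child[(dst_ip >> bit) & 1u];
        }
        return best;
    }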
One of the results of every routing table lookup is the Primary UserID identifying the layer 2 interface by which the packet should be transmitted. It is important to note that the Primary UserID is not the same as the LCUserID, and does not directly give the CardID of the Line Card where the packet should be forwarded. Rather, the Primary UserID identifies the IPE PPU that maintains the configuration and state information for the user.
This presents a complication because the IPE that is trying to forward the packet needs the information stored on the IPE PPU identified by the Primary UserID. Rather than simply forward the packet to the other IPE for egress processing, which would result in additional latency and switch fabric bandwidth utilization for every packet, it sends a message to the PPU identified by the Primary UserID, requesting a copy of the user's configuration. This information is kept in a user configuration cache and is used for all subsequent packets directed to the same user. All counters and statistics that need to be maintained for each user must also be maintained for each cached user, and must also be periodically sent to the PPU identified by the Primary UserID.
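A C sketch of this configuration cache follows. The cache organization, the messaging stub, and all names are assumptions; on a miss the packet would be held until the owning PPU replies.

    #include <stdint.h>
    #include <stddef.h>

    struct user_config {
        uint32_t primary_userid;
        uint32_t lc_userid;               /* Line Card CardID + PPUID + CID */
        uint64_t tx_packets, tx_bytes;    /* reported back to the owning PPU */
        int      valid;
    };

    #define CACHE_SLOTS 1024
    static struct user_config cache[CACHE_SLOTS];

    /* Stub for the inter-PPU configuration request described in the text. */
    static void request_config_from_owner(uint32_t primary_userid)
    {
        (void)primary_userid;
    }

    /* Returns the cached configuration, or NULL after issuing a request. */
    struct user_config *get_user_config(uint32_t primary_userid)
    {
        struct user_config *c = &cache[primary_userid % CACHE_SLOTS];
        if (c->valid && c->primary_userid == primary_userid)
            return c;                     /* hit: use the cached copy */
        request_config_from_owner(primary_userid);
        return NULL;                      /* miss: packet waits for reply */
    }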
This process makes it difficult to implement such functionality as per-user traffic shaping in the IPE PPU, because the processing would need to be distributed among a potentially large number of processors. Therefore, traffic shaping is to be implemented strictly on the Line Card using the egress PPUs.
One of the fields that is acquired and cached as part of the user configuration information is the LCUserID. This field contains the CardID of the Line Card that the packet must be forwarded to, as well as the PPUID and CID that should be sent in the PIE header of the packet to that Line Card.

Claims

What is claimed is:
1. A packet processing circuit comprising: a packet inspector for examining a stream of cells to determine control information for packets represented thereby; at least one buffer access controller connected to said packet inspector for storing at least a portion of data cells received from said packet inspector, and for processing control information received from said packet inspector to produce additional control information; and a packet manager connected to said buffer access controller for receiving control information therefrom for use in formatting packets corresponding to said control information.
2. The circuit of claim 1 wherein said packet manager is configured for using the control information received from said buffer access controller to reassemble said corresponding packets.
3. The circuit of claim 2 wherein said packet manager is connected to said packet inspector for coordinating the dequeuing of data cells representing said corresponding packets from said buffer access controller.
4. The circuit of claim 1 further comprising a cell buffer associated with said buffer access controller for storing said data cells.
5. The circuit of claim 4 further comprising at least one protocol processing unit associated with said buffer access controller for processing said control information received from said packet inspector.
6. The circuit of claim 5 wherein said protocol processing unit comprises at least one general purpose processor unit.
7. The circuit of claim 5 further comprising an additional buffer access controller connected to said packet inspector, wherein said buffer access controllers are configured for storing different portions of data cells received from said packet inspector.
8. The circuit of claim 7 further comprising a protocol processing unit associated with said additional buffer access controller, and wherein said buffer access controllers are each configured for determining whether to forward certain control information received from said packet inspector to its associated protocol processing unit for processing.
9. The circuit of claim 8 further comprising a master processing unit connected to said protocol processing units for providing said protocol processing units with configuration data.
10. The circuit of claim 9 further comprising a switch, wherein said master processing unit and said protocol processing units are interconnected to one another through said switch.
11. The circuit of claim 7 wherein each buffer access controller has at least two protocol processing units associated therewith.
12. A mid-network server comprising: an input for receiving a packet delivered thereto; a line module connected to said input for receiving said packet; a plurality of processing modules for performing mid-network processing functions; and a switch fabric connected to said line module and said processing modules for delivering packets therebetween, wherein said processing modules are at least substantially identical to one another and independently programmable.
13. The server of claim 12 further comprising an additional line module, wherein said line modules are at least substantially identical to one another and independently programmable.
14. The server of claim 12 wherein said processing modules are each configured to support a plurality of packet types, and each line module is configured for formatting a packet into one of said types prior to sending said packet through said switch fabric to one of said processing modules.
15. The server of claim 12 wherein said line module and said processing modules each comprise a packet inspector, a packet manager, and at least one buffer access controller.
16. The server of claim 15 wherein said line module and said processing modules each comprise a plurality of buffer access controllers interconnected with said packet inspector and said packet manager.
17. The server of claim 16 wherein each of said buffer access controllers has at least one protocol processing unit associated therewith.
18. The server of claim 17 wherein each protocol processing unit is in communication with at least one other protocol processing unit on the same module.
19. The server of claim 18 wherein said line module and said processing modules each comprise a master protocol processing unit for controlling the protocol processing units on that module.
20. The server of claim 19 wherein said line module and said processing modules each comprise an Ethernet switch for interconnecting the master protocol processing unit with said other protocol processing units on that module.
21. A packet server comprising: an input for receiving a packet delivered thereto; a line module connected to said input for receiving said packet; a plurality of processing modules for performing packet routing functions; and a switch fabric connected to said line module and said processing modules for delivering packets therebetween, wherein said line module is configurable to send said packet to any one of said processing modules through said switch fabric, and said processing modules are each configurable to perform said routing functions for said packet if said packet is sent thereto by said line module.
22. The server of claim 21 wherein said line module supports a plurality of user interfaces and is configured to send said packet to one of said processing modules according to the user interface through which said packet arrives at said server.
23. The server of claim 22 wherein each processing module includes a plurality of processing units, and said line module is configured to send said packet to one of said processing units of one of said processing modules according to the user interface through which said packet arrives at said server.
24. The server of claim 23 wherein said processing modules are each configured to support a plurality of packet types, and said line module is configured for formatting said packet into one of said types prior to sending said packet through said switch fabric to one of said processing modules.
25. The server of claim 21 wherein said line module is connected to said input through a phy module.
26. The server of claim 21 wherein said line module and said processing modules each include a plurality of general purpose processing units.
27. The server of claim 21 wherein said line module and said processing modules can be programmed to support any type of transmission protocol.
28. A packet server comprising: a plurality of line modules for receiving packets delivered to said server over a physical connection; at least one processing module for performing packet routing functions; and a switch fabric connected to said line modules and said processing module for delivering packets therebetween, wherein each line module is configured to format a packet into one of a plurality of types prior to sending said packet through said switch fabric to said processing module, and said processing module is configured to support each of said packet types.
29. The server of claim 28 wherein said processing module includes a plurality of processing units.
30. The server of claim 29 wherein each line card supports a plurality of users and is configured to assign each user to one of the processing units of said processing module.
31. The server of claim 30 wherein at least one of the processing units of said processing module is assigned to a first user supported by a first one of said line modules and a second user supported by a second one of said line modules.
32. A method for processing packets within a server, said method comprising the steps of: converting a packet input to said server into a stream of fixed length cells; processing said stream of fixed length cells using a line module to format said packet into one of a plurality of protocol types; and sending said formatted packet to a processing module configured to support each of said plurality of protocol types.
33. The method of claim 32 wherein said processing step includes reassembling said packet.
34. The method of claim 33 wherein said processing step further includes examining said cell stream to obtain control information for said packet.
35. The method of claim 34 wherein said control information includes information identifying a particular processing module for further processing said packet.
36. The method of claim 34 wherein said processing step further includes processing said control information to produce additional control information for use in reassembling and formatting said packet.
37. The method of claim 36 wherein said processing step further includes identifying a particular processing module to which said packet should be sent.
38. The method of claim 37 wherein the sending step includes sending said packet to said particular processing module identified in said processing step.
39. The method of claim 37 wherein said identifying step includes identifying a particular protocol processing unit on said particular processing module for processing control information corresponding to said packet.
40. The method of claim 33 further comprising the step of performing mid-network processing functions on said sent packet using said processing module.
41. The method of claim 40 wherein the step of performing mid-network processing functions includes formatting said packet for its destination interface.
42. The method of claim 41 further comprising the step of sending the packet formatted by said processing module to a line module corresponding to said destination interface.
43. The method of claim 32 wherein said sending step includes sending said packet through a switch fabric.
44. The method of claim 32 wherein said input packet is a packet represented by a plurality of fixed length cells.
45. The method of claim 44 wherein the converting step includes modifying the length of said input cells.
46. A method for processing packets within a server, said method comprising the steps of: converting a packet input to said server into a stream of fixed length cells; processing said stream of fixed length cells using a line module to format said packet into one of a plurality of protocol types; and sending said formatted packet to another line module configured to support each of said plurality of protocol types.
47. A method for processing packets within a server, said method comprising the steps of: converting a packet input to said server into a stream of fixed length cells; processing said stream of fixed length cells using a line module to format said packet into one of a plurality of protocol types; sending said formatted packet to a processing module configured to support each of said plurality of protocol types; processing said stream of fixed length cells in said processing module; reformatting said formatted packet into one of a plurality of protocol types; and sending said reformatted packet to another processing module.
PCT/US2001/001003 2000-03-03 2001-01-11 Broadband mid-network server WO2001067694A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP01908601A EP1260067A1 (en) 2000-03-03 2001-01-11 Broadband mid-network server
AU2001236450A AU2001236450A1 (en) 2000-03-03 2001-01-11 Broadband mid-network server

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US51857500A 2000-03-03 2000-03-03
US09/518,575 2000-03-03
US51852600A 2000-03-04 2000-03-04
US09/518,526 2000-03-04

Publications (2)

Publication Number Publication Date
WO2001067694A1 true WO2001067694A1 (en) 2001-09-13
WO2001067694A9 WO2001067694A9 (en) 2002-01-10

Family

ID=27059497

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/001003 WO2001067694A1 (en) 2000-03-03 2001-01-11 Broadband mid-network server

Country Status (3)

Country Link
EP (1) EP1260067A1 (en)
AU (1) AU2001236450A1 (en)
WO (1) WO2001067694A1 (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0531599A1 (en) * 1991-09-13 1993-03-17 International Business Machines Corporation Configurable gigabit/s switch adapter
EP0944288A2 (en) * 1998-03-20 1999-09-22 Nec Corporation An ATM exchange having packet processing trunks
WO2000010297A1 (en) * 1998-08-17 2000-02-24 Vitesse Semiconductor Corporation Packet processing architecture and methods


Cited By (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10245330B4 (en) * 2001-09-27 2008-04-17 Samsung Electronics Co., Ltd., Suwon Software switch of distributed firewalls used for load sharing of Internet telephony traffic in an IP network
WO2003030582A2 (en) * 2001-09-27 2003-04-10 Siemens Aktiengesellschaft Device and method for transmitting a plurality of signals by means of multi-stage protocol processing
WO2003030582A3 (en) * 2001-09-27 2003-10-02 Siemens Ag Device and method for transmitting a plurality of signals by means of multi-stage protocol processing
EP1298940A2 (en) 2001-09-27 2003-04-02 Alcatel Canada Inc. System and method for configuring a network element
EP1298940A3 (en) * 2001-09-27 2007-06-13 Alcatel Canada Inc. System and method for configuring a network element
KR100850382B1 (en) 2001-09-27 2008-08-04 노키아 지멘스 네트웍스 게엠베하 운트 코. 카게 Device and method for transmitting a plurality of signals by means of multi-stage protocol processing
CN100508495C (en) * 2002-05-30 2009-07-01 株式会社日立制作所 Packet communication equipment
US8010405B1 (en) 2002-07-26 2011-08-30 Visa Usa Inc. Multi-application smart card device software solution for smart cardholder reward selection and redemption
WO2004016034A1 (en) * 2002-08-13 2004-02-19 Starent Networks Corporation Communicating in voice and data communications systems
US8023507B2 (en) 2002-08-13 2011-09-20 Starent Networks Llc Card to card communications in voice and data communications systems
US8599846B2 (en) 2002-08-13 2013-12-03 Cisco Technology, Inc. Communicating in voice and data communications systems
US8660427B2 (en) 2002-09-13 2014-02-25 Intel Corporation Method and apparatus of the architecture and operation of control processing unit in wavelenght-division-multiplexed photonic burst-switched networks
US10460338B2 (en) 2002-09-13 2019-10-29 Visa U.S.A. Inc. Network centric loyalty system
US8015060B2 (en) 2002-09-13 2011-09-06 Visa Usa, Inc. Method and system for managing limited use coupon and coupon prioritization
US9852437B2 (en) 2002-09-13 2017-12-26 Visa U.S.A. Inc. Opt-in/opt-out in loyalty system
US8626577B2 (en) 2002-09-13 2014-01-07 Visa U.S.A Network centric loyalty system
US8239261B2 (en) 2002-09-13 2012-08-07 Liane Redford Method and system for managing limited use coupon and coupon prioritization
US7848649B2 (en) 2003-02-28 2010-12-07 Intel Corporation Method and system to frame and format optical control and data bursts in WDM-based photonic burst switched networks
WO2004095874A2 (en) * 2003-04-16 2004-11-04 Intel Corporation Architecture, method and system of multiple high-speed servers for wdm based photonic burst-switched networks
WO2004095874A3 (en) * 2003-04-16 2004-12-29 Intel Corp Architecture, method and system of multiple high-speed servers for wdm based photonic burst-switched networks
KR100812833B1 (en) 2003-04-16 2008-03-11 인텔 코포레이션 Architecture, method and system of multiple high-speed servers to network in wdm based photonic burst-switched networks
US7266295B2 (en) 2003-04-17 2007-09-04 Intel Corporation Modular reconfigurable multi-server system and method for high-speed networking within photonic burst-switched network
US7987120B2 (en) 2003-05-02 2011-07-26 Visa U.S.A. Inc. Method and portable device for management of electronic receipts
US7827077B2 (en) 2003-05-02 2010-11-02 Visa U.S.A. Inc. Method and apparatus for management of electronic receipts on portable devices
US7725369B2 (en) 2003-05-02 2010-05-25 Visa U.S.A. Inc. Method and server for management of electronic receipts
US9087426B2 (en) 2003-05-02 2015-07-21 Visa U.S.A. Inc. Method and administration system for management of electronic receipts
US8386343B2 (en) 2003-05-02 2013-02-26 Visa U.S.A. Inc. Method and user device for management of electronic receipts
US7272310B2 (en) 2003-06-24 2007-09-18 Intel Corporation Generic multi-protocol label switching (GMPLS)-based label space architecture for optical switched networks
US8554610B1 (en) 2003-08-29 2013-10-08 Visa U.S.A. Inc. Method and system for providing reward status
US8793156B2 (en) 2003-08-29 2014-07-29 Visa U.S.A. Inc. Method and system for providing reward status
US7857215B2 (en) 2003-09-12 2010-12-28 Visa U.S.A. Inc. Method and system including phone with rewards image
US7857216B2 (en) 2003-09-12 2010-12-28 Visa U.S.A. Inc. Method and system for providing interactive cardholder rewards image replacement
US8005763B2 (en) 2003-09-30 2011-08-23 Visa U.S.A. Inc. Method and system for providing a distributed adaptive rules based dynamic pricing system
US9141967B2 (en) 2003-09-30 2015-09-22 Visa U.S.A. Inc. Method and system for managing reward reversal after posting
US8244648B2 (en) 2003-09-30 2012-08-14 Visa U.S.A. Inc. Method and system for providing a distributed adaptive rules based dynamic pricing system
US8407083B2 (en) 2003-09-30 2013-03-26 Visa U.S.A., Inc. Method and system for managing reward reversal after posting
US9710811B2 (en) 2003-11-06 2017-07-18 Visa U.S.A. Inc. Centralized electronic commerce card transactions
US7653602B2 (en) 2003-11-06 2010-01-26 Visa U.S.A. Inc. Centralized electronic commerce card transactions
WO2005083946A1 (en) 2004-02-13 2005-09-09 Intel Corporation Apparatus and method for a dynamically extensible virtual switch
JP2007522583A (en) * 2004-02-13 2007-08-09 インテル・コーポレーション Apparatus and method for dynamically expandable virtual switch
JP4912893B2 (en) * 2004-02-13 2012-04-11 インテル・コーポレーション Apparatus and method for dynamically expandable virtual switch
US8838743B2 (en) 2004-02-13 2014-09-16 Intel Corporation Apparatus and method for a dynamically extensible virtual switch
EP1950916A3 (en) * 2007-01-23 2013-10-02 Lantiq Deutschland GmbH Method for transmitting data in a voice communication line card, voice communication line card and signal processor for a voice communication line card
DE102007003258B4 (en) * 2007-01-23 2008-08-28 Infineon Technologies Ag A method for data transmission in a voice communication linecard, voice communication linecard and signal processing processor for a voice communication linecard
DE102007003258A1 (en) * 2007-01-23 2008-07-24 Infineon Technologies Ag Method for data transfer in voice communication line card, involves coupling components of voice communication line card with one another over ethernet connection, where coupling of components take place over ethernet switch
EP1950916A2 (en) * 2007-01-23 2008-07-30 Infineon Technologies AG Method for transmitting data in a voice communication line card, voice communication line card and signal processor for a voice communication line card
WO2008101041A1 (en) 2007-02-15 2008-08-21 Harris Corporation An apparatus and method for soft media processing within a routing switcher
US7920557B2 (en) 2007-02-15 2011-04-05 Harris Corporation Apparatus and method for soft media processing within a routing switcher
CN101675628B (en) * 2007-02-15 2013-03-20 哈里公司 An apparatus and method for soft media processing within a routing switcher
US11132691B2 (en) 2009-12-16 2021-09-28 Visa International Service Association Merchant alerts incorporating receipt data
US8650124B2 (en) 2009-12-28 2014-02-11 Visa International Service Association System and method for processing payment transaction receipts
US8429048B2 (en) 2009-12-28 2013-04-23 Visa International Service Association System and method for processing payment transaction receipts
WO2017009461A1 (en) * 2015-07-15 2017-01-19 Lantiq Beteiligungs-GmbH & Co.KG Method and device for packet processing

Also Published As

Publication number Publication date
EP1260067A1 (en) 2002-11-27
WO2001067694A9 (en) 2002-01-10
AU2001236450A1 (en) 2001-09-17

Similar Documents

Publication Publication Date Title
WO2001067694A1 (en) Broadband mid-network server
US6611522B1 (en) Quality of service facility in a device for performing IP forwarding and ATM switching
US7151744B2 (en) Multi-service queuing method and apparatus that provides exhaustive arbitration, load balancing, and support for rapid port failover
US6195355B1 (en) Packet-Transmission control method and packet-transmission control apparatus
US7369568B2 (en) ATM-port with integrated ethernet switch interface
Aweya IP router architectures: an overview
US5467349A (en) Address handler for an asynchronous transfer mode switch
US5809024A (en) Memory architecture for a local area network module in an ATM switch
EP1063818A2 (en) System for multi-layer provisioning in computer networks
WO1998036608A2 (en) Method and apparatus for multiplexing of multiple users on the same virtual circuit
WO2000056113A1 (en) Internet protocol switch and method
Byrne et al. Evolution of metropolitan area networks to broadband ISDN
US20020159391A1 (en) Packet-transmission control method and packet-transmission control apparatus
US6952420B1 (en) System and method for polling devices in a network system
Tomonaga IP router for next-generation network
EP0905994A2 (en) Packet-transmission control method and packet-transmission control apparatus
KR20020069578A (en) Transmission system for supplying quality of service in network using internet protocol and method thereof
JPH09181726A (en) Method and system for linking connection in atm network
Aoki et al. Next generation carriers Internet backbone node architecture (MSN Type-X)
Ojesanmi Asynchronous Transfer Mode (ATM) Network.
Durresi et al. Asynchronous Transfer Mode (ATM)
Gebali et al. Switches and Routers
Subramanian Frame Relay Networks-a survey
Chorafas et al. Appreciating the Implementation of Asynchronous Transfer Mode (ATM)

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: C2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: C2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

COP Corrected version of pamphlet

Free format text: PAGES 1/15-11/15 AND 13/15-15/15, DRAWINGS, REPLACED BY NEW PAGES 1/22-17/22 AND 20/22-22/22; PAGE 12/15 RENUMBERED AS 19/22; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

WWE Wipo information: entry into national phase

Ref document number: 2001908601

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2001908601

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Ref document number: 2001908601

Country of ref document: EP