US20080291829A1 - System for content based message processing


Info

Publication number
US20080291829A1
Authority
US
United States
Prior art keywords
message
priority
processing priority
processing
recited
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/180,630
Inventor
Fredrik Hammar
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telefonaktiebolaget LM Ericsson AB
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US12/180,630
Assigned to TELEFONAKTIEBOLAGET LM ERICSSON (PUBL); assignors: AYRES, LAWRENCE (assignment of assignors interest; see document for details)
Publication of US20080291829A1
Legal status: Abandoned

Classifications

    (All under H — Electricity; H04 — Electric communication technique; H04L — Transmission of digital information, e.g. telegraphic communication.)
    • H04L47/6225 — Queue scheduling with fixed service order, e.g. Round Robin
    • H04L47/10 — Flow control; Congestion control
    • H04L47/2408 — Traffic characterised by specific attributes, e.g. priority or QoS, for supporting different services, e.g. a differentiated services [DiffServ] type of service
    • H04L47/2441 — Traffic characterised by specific attributes relying on flow classification, e.g. using integrated services [IntServ]
    • H04L47/2458 — Modification of priorities while in transit
    • H04L47/50 — Queue scheduling
    • H04L47/621 — Individual queue per connection or flow, e.g. per VC
    • H04L47/6215 — Individual queue per QoS, rate or priority
    • H04L49/90 — Buffering arrangements
    • H04L49/9047 — Buffering arrangements including multiple buffers, e.g. buffer pools
    • H04L51/226 — User-to-user messaging in packet-switching networks: delivery according to priorities

Definitions

  • the present invention relates generally to the field of communications and, more particularly, to a method and apparatus for content based message processing.
  • Integrated packet networks (typically fast packet networks) are generally used to carry at least two (2) classes of traffic, which may include, for example, continuous bit-rate (“CBR”), speech (“Packet Voice”), data (“Framed Data”), image, and so forth. Packet networks source, sink and/or forward protocol packets.
  • Congestion and Quality of Service (“QoS”) problems inside these networks have not been solved satisfactorily and remain as outstanding issues to be resolved.
  • While message scheduling helps alleviate these problems, the efficient scheduling of work with thousands of entities (instances) is not a simple matter.
  • FIFO queuing techniques do not address QoS parameters. This technique can also allow overload periods for digitized speech packets and for Framed Data packets, which results in a greater share of bandwidth being provided to one at the expense of the other; an undesirable result.
  • With head-of-line-priority (“HOLP”) queuing, speech fast packets may affect the QoS of lower priority queues.
  • queuing schemes designed only for data do not solve the problems of integrating other traffic types, such as speech and CBR data.
  • In a Real Time Operating System (“RTOS”), a particular function has a priority and all packets processed by the function inherit that priority for the duration of their processing by the function. If the next processing step to which the packet is subjected is set at a different priority, then the packet inherits a different priority for the duration of this processing step. Priority is associated with the function applied to the packet rather than the packet itself. If all packets traverse the same set of functions, they have equal access to the central processing unit (“CPU”) and receive equal priority treatment.
  • the present invention provides a system for assigning different priorities to a packet, and varying resource allocation (especially processing time) and forwarding treatment on a per-packet basis.
  • the present invention is adaptable to accommodate new message types, multimedia applications and multi-service applications. It is flexible, with the ability to cater to a wide range of configurations and environments and improves the QoS of VoIP calls.
  • the present invention provides a packet having a message and a processing priority associated with the message.
  • the processing priority is dynamically changeable by a function operating on the message.
  • the processing priority can be associated with the message by attaching the processing priority to the start of the message, appending the processing priority to the end of the message or linking the processing priority to the message using pointers.
  • the system for associating a processing priority to a message involves receiving the message, determining the processing priority for the message and associating the processing priority with the message such that the processing priority is dynamically changeable by a function operating on the message. This process can be implemented using a computer program embodied on a computer readable medium wherein each step is performed using one or more code segments.
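By way of illustration only, the receive, determine and associate steps, with a priority that a later function can change dynamically, might be sketched as follows. The `Message` class, the content-based rule and the `transcode` function are hypothetical names, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Message:
    payload: bytes
    priority: int  # processing priority; traverses the system with the message

def receive(raw: bytes) -> Message:
    # Determine the processing priority from the message content
    # (hypothetical rule: control traffic outranks ordinary data).
    priority = 0 if raw.startswith(b"CTRL") else 2
    return Message(payload=raw, priority=priority)

def transcode(msg: Message) -> None:
    # A function operating on the message may change its priority
    # dynamically; the new value governs later work allocation.
    msg.priority = 1

msg = receive(b"DATA:hello")
assert msg.priority == 2
transcode(msg)
assert msg.priority == 1
```

Because the priority travels with the message object itself, any function holding a reference to the message can reprioritize it mid-traversal.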
  • the present invention also provides a method for scheduling one or more messages.
  • the one or more messages are received and then each message is stored in a multidimensional processing queue based on a processing priority and an attribute associated with the message.
  • Each queued message from the multidimensional processing queue is scheduled for processing based on an algorithm.
  • the attribute can be a virtual private network classification, destination software function, function index, a functionality type or other message attribute.
  • the algorithm can be one or more algorithms, such as an exponentially weighted, non-starving, nested-round-robin, message-priority-based scheme or a weighted, non-starving, nested-round-robin, class-based scheme.
  • This process can be implemented using a computer program embodied on a computer readable medium wherein each step is performed using one or more code segments.
  • the present invention provides a communications switch having one or more ingress cards, one or more signal processing cards, one or more control cards containing one or more processors and one or more egress cards.
  • Each signal-processing card contains an array of digital signal processors.
  • the switch also includes a switch fabric communicably coupling the ingress cards, the signal processing cards, the control cards and the egress cards, a TDM bus communicably coupling the ingress cards, the signal processing cards, the control cards and the egress cards, a multidimensional processing queue, and a scheduler communicably coupled to each processor and the multidimensional processing queue.
  • the scheduler receives one or more messages, stores each message in the multidimensional processing queue based on a priority and an attribute of the message, and schedules each queued message from the multidimensional processing queue for processing based on an algorithm.
  • FIG. 1 is a block diagram of a representative integrated network in accordance with the PRIOR ART.
  • FIG. 2 is a schematic diagram illustrating a message scheduling system in accordance with the PRIOR ART.
  • FIG. 3 is a schematic diagram illustrating another message scheduling system in accordance with the PRIOR ART.
  • FIGS. 4A, 4B and 4C are block diagrams illustrating a packet having an associated processing priority in accordance with various embodiments of the present invention.
  • FIG. 5 is a flowchart illustrating a method for associating a priority with a message in accordance with one embodiment of the present invention.
  • FIG. 6 is a diagram of a packet network switch in accordance with the present invention.
  • FIG. 7 is a schematic diagram illustrating a packet operating system in accordance with the present invention.
  • FIG. 8 is a schematic diagram illustrating a message scheduling system in accordance with the present invention.
  • FIG. 9 is a flowchart illustrating a method for scheduling messages into queues in accordance with one embodiment of the present invention.
  • the present invention provides a system for assigning different priorities to a packet, and varying resource allocation (especially processing time) and forwarding treatment on a per-packet basis.
  • the present invention is adaptable to accommodate new message types, multimedia applications and multi-service applications. It is flexible, with the ability to cater to a wide range of configurations and environments and improves the QoS of VoIP calls.
  • FIG. 1 depicts a representative integrated network 100 in which phones 102 and faxes 104 are communicably coupled to a public switched telephone network (“PSTN”) 106 .
  • a switch 108 is communicably coupled to the PSTN 106 and an Internet Protocol (“IP”) network 110 to convert time division multiplexing (“TDM”) based communications 112 to IP-based communications 114 .
  • the switch 108 creates IP packets containing the necessary destination information so that the packets 114 can be properly routed to their destinations, which may include computers 116 or other devices communicably coupled to the IP network 110 .
  • a network controller 118 is communicably coupled to the PSTN 106 and the switch 108 , and provides control signals to the switch 108 for proper processing of the TDM based communications 112 .
  • the network controller 118 may also be communicably connected to the IP network 110 .
  • Network controller 118 can function as a Media Gateway Control (“MGC”).
  • the MGC protocol is one of a few proposed control and signal standards to compete with the older H.323 standard for the conversion of audio signals carried on telephone circuits, such as PSTN 106 to data packets carried over the Internet or other packet networks, such as IP network 110 .
  • this example is not limited to the conversion of TDM based communications to IP-based communications; instead, the present invention may be applied to any conversion of a multiplexed communication to a packet-based communication.
  • IP specifies the format of packets, also called datagrams, and the addressing scheme. Most networks combine IP with a higher-level protocol called Transmission Control Protocol (“TCP”), which establishes a virtual connection between a destination and a source. IP allows a packet to be addressed and dropped in a system, but there is no direct link between the sender and the recipient. TCP/IP, on the other hand, establishes a connection between two hosts so that they can send messages back and forth for a period of time. IP network 110 receives and sends messages through switch 108 , ultimately to phone 102 and/or fax 104 . PCs 116 receive and send messages through IP network 110 in a packet-compatible format. Voice over IP (“VoIP”) is the ability to make telephone calls and send faxes over IP-based data networks, such as IP network 110 . An integrated voice/data network 100 allows more standardization and reduces total equipment needs. VoIP can support multimedia and multi-service applications.
  • FIGS. 2 and 3 are schematic diagrams illustrating two message scheduling systems 200 and 300 in accordance with the prior art.
  • messages 202 are received and stored in first-in-first-out (“FIFO”) queue 204 .
  • Messages 202 are then sent to processor 206 in the order in which they were received. No processing prioritization other than arrival time is applied in queue 204 .
  • messages 302 enter data type sorter 304 where messages 302 are separated by data type.
  • a FIFO queue 306 a , 306 b , . . . 306 n exists for each individual data type.
  • Data type sorter 304 sends messages 302 to FIFO queues 306 a, 306 b, . . . 306 n.
  • Scheduler 308 then pulls messages 302 from FIFO queues 306 a, 306 b, . . . 306 n and sends messages 302 to processor 310 .
  • the primary prioritization is again based on arrival time in queues 306 a , 306 b , . . . 306 n .
  • Scheduler 308 only coordinates the pulling of messages 302 for processing.
  • Referring now to FIGS. 4A, 4B and 4C, block diagrams illustrating packets 400 , 410 and 420 having an associated processing priority 402 in accordance with various embodiments of the present invention are shown.
  • the present invention associates a processing priority or priority criteria 402 within or attached to a packet or message 404 in such a way that the priority or priority criteria 402 traverses the system along with the packet or message 404 .
  • the priority or priority criteria 402 may be one or more parameters that are evaluated to produce a priority for the message 404 .
  • the priority or priority criteria 402 may be modified dynamically during the traversal as decisions are made regarding the priority/criteria 402 .
  • the priority/priority criteria 402 are associated with the packet/message 404 in such a way that reference to one allows reference to the other, they traverse the system together, and that functions operating upon the packet/message 404 have the ability to change the priority/priority criteria 402 .
  • Changing the priority/priority criteria 402 dynamically, when used in conjunction with the other aspects of this invention, creates a processing environment where the priority/priority criteria 402 of the message/packet 404 governs the work allocation or dispatching.
  • the present invention provides at least three ways to associate the priority or parameters that may be evaluated to produce a priority 402 to a message 404 .
  • the priority/priority criteria 402 may be attached to the start of the message 404 ( FIG. 4A ), appended to the end of the message 404 ( FIG. 4B ) or linked to the message 404 ( FIG. 4C ).
  • the message header and the message itself are stored in non-adjacent memory locations, and linked together by a memory pointer or some other means which enables a reference to one portion to be used to locate and reference the other portion.
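The three layouts can be mocked up as follows; the one-byte priority field and the dict standing in for a memory pointer are illustrative assumptions, not the patent's encoding:

```python
import struct

PRIO_FMT = "!B"  # hypothetical one-byte priority field

def attach_to_start(priority: int, message: bytes) -> bytes:
    # FIG. 4A: priority attached to the start of the message.
    return struct.pack(PRIO_FMT, priority) + message

def append_to_end(message: bytes, priority: int) -> bytes:
    # FIG. 4B: priority appended to the end of the message.
    return message + struct.pack(PRIO_FMT, priority)

def link(priority: int, message: bytes) -> dict:
    # FIG. 4C: priority and message in non-adjacent storage, joined by
    # a reference so that locating one portion locates the other.
    return {"priority": priority, "message": message}

assert attach_to_start(3, b"hello") == b"\x03hello"
assert append_to_end(b"hello", 3) == b"hello\x03"
assert link(3, b"hello")["message"] == b"hello"
```

In all three layouts a function given the packet can both read and rewrite the priority, which is what makes the priority dynamically changeable in transit.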
  • Referring now to FIG. 5, a flowchart illustrating a method 500 for associating a priority with a message in accordance with one embodiment of the present invention is shown.
  • the process begins in block 502 and a message is received in block 504 .
  • the processing priority for the message is determined in block 506 .
  • the processing priority is then associated with the message such that the processing priority is dynamically changeable by a function operating on the message in block 508 .
  • the process 500 then repeats for each newly received message.
  • this method 500 can be implemented as a computer program embodied on a computer readable medium wherein each block is performed by one or more code segments.
  • the packet network switch 600 can be used to process VoIP, voice over Frame Relay (“VoFR”) and other types of calls. Moreover, the packet network switch 600 is similar to an asynchronous transfer mode (“ATM”) switch.
  • ATM is a connection-oriented technology used in both local area network (“LAN”) and wide area network (“WAN”) environments. It is a fast-packet switching technology that allows free allocation of capacity to each channel.
  • Packet network switch 600 includes one or more ingress cards 602 a and 602 b , one or more signal processing cards 604 , one or more control cards 606 , one or more egress cards 608 a and 608 b , a switch fabric 610 and a TDM bus 612 .
  • Each signal processing card 604 contains an array of digital signal processors (“DSP”) (not shown) and each control card 606 contains one or more processors (not shown).
  • the switch fabric 610 communicably couples the ingress cards 602 , the signal processing cards 604 , the control cards 606 and the egress cards 608 together.
  • the TDM bus 612 also communicably couples the ingress cards 602 , the signal processing cards 604 , the control cards 606 and the egress cards 608 together.
  • cards 602 , 604 , 606 and 608 can be inserted in any order within packet network switch 600 .
  • the packet network switch 600 should include sufficient numbers of redundant cards to serve as backup cards in the event a card 602 , 604 , 606 or 608 fails.
  • the main function of a packet network switch 600 is to relay user data cells from input ports to the appropriate output ports.
  • A network controller 118 ( FIG. 1 ) provides call setup information to the switch 600 .
  • Control card 606 uses this call setup information to assign a port in ingress cards 602 a or 602 b to receive the call from the PSTN 106 ( FIG. 1 ), a DSP within processing card 604 to process the call, and a port in egress cards 608 a or 608 b to send the call to IP network 110 ( FIG. 1 ).
  • the TDM-based communications or messages 112 enter through ingress cards 602 a or 602 b and are routed to the appropriate processing card 604 through TDM Bus 612 .
  • the DSPs in processing card 604 convert messages between analog and digital information formats, and provide digital compression and switching functions. In one embodiment, each processing card 604 is capable of processing 1024 simultaneous sessions.
  • the processing card 604 then sends the messages from the DSP to cell switch fabric 610 , which is primarily responsible for the routing and transferring of messages or data cells, the basic transmission unit, between switch elements.
  • the switch fabric 610 may also provide cell buffering, traffic concentration and multiplexing, redundancy for fault tolerance, multicasting or broadcasting, and cell scheduling based on delay priorities and congestion monitoring.
  • Switch fabric 610 ultimately routes the messages to egress cards 608 a or 608 b .
  • each egress card 608 is capable of handling at least 8000 calls.
  • Egress cards 608 a and 608 b typically send the messages to a gigabit Ethernet (not shown). As its name indicates, the gigabit Ethernet supports data rates of one (1) gigabit (1,000 megabits) per second.
  • Referring now to FIG. 7, a schematic diagram illustrating a packet operating system 700 with redundant control cards 702 a and 702 b is shown.
  • Control cards 702 a and 702 b are housed within a single chassis, such as switch 600 ( FIG. 6 ).
  • Messages 704 enter packet operating system 700 through interface 706 on control card 702 a .
  • Messages 704 travel from interface 706 onto protocol stack 708 and then to peripheral component interconnect (“PCI”) bus 710 .
  • PCI bus 710 sends messages 704 to either input/output (“I/O”) cards 712 or DSP cards 714 .
  • Control card 702 b mirrors either a portion or all of the data of control card 702 a .
  • Each control card 702 a and 702 b of packet operating system 700 has its own memory, and thus avoids the typical problems associated with shared memory, such as recursive calls, synchronization issues and corruption.
  • FIG. 8 is a schematic diagram illustrating a message scheduling system 800 in accordance with the present invention.
  • the scheduling system 800 of the present invention includes a scheduler 802 communicably coupled to a multidimensional queue 804 .
  • Scheduler 802 may comprise a receiver function and a dispatcher function.
  • the multidimensional queue 804 may be described as a “set” of queues wherein the first square along the X-axis and Y-axis, such as square 804 A, represents the head of a queue. Note that the multidimensional queue 804 is not limited to a three-dimensional queue as depicted in FIG. 8 .
  • Each queue within the multidimensional queue 804 is designated to receive messages based on a processing priority or criteria and an attribute associated with the message.
  • the message attributes may include a virtual private network (“VPN”) classification, a destination software function, a functionality type or another attribute that distinguishes one message from another, or combinations thereof.
  • the processing priority can be based on QoS parameters or the type of message, such as data, fax, image, multimedia, voice, etc.
  • VPN classification can be individual VPNs or groups of VPNs.
  • multidimensional queue 804 could be based on VPN classification in the X-direction, processing priority in the Y-direction, and first-in-first-out (“FIFO”) in the Z-direction.
  • each function can have a slot comprised of multiple dimensions.
  • a fourth dimension can also be added to the multidimensional queue 804 by making it an array of three-dimensional queues, where each one is handled by one type of functionality.
  • a function index and a jump table can be used.
  • the multidimensional queue 804 can be characterized as an advanced queue structure that consists of multiple sub-queues bundled in a single receive queue wherein each sub-queue serves a set of messages 806 .
  • the messages 806 can be classified by their priority (first dimension) and message classification or service classes (second dimension).
  • Priority sub-queues will be serviced according to one or more algorithms, such as an exponentially weighted round robin scheme.
  • Within each priority there will be multiple sub-queues representing multiple VPN service classes. VPNs will be mapped into these service classes. Service classes themselves will have a weighting scheme among themselves so that different qualities of service can be provided.
  • the multidimensional queue 804 is a two-dimensional queue, consisting of p*c monolithic sub-queues, where p is the number of message priorities and c is the number of VPN service classes.
  • the multidimensional queue 804 itself is three-dimensional since the messages within each of the p*c sub-queues represent the third dimension (the depth of the sub-queues). The messages within each one-dimensional sub-queue are serviced in FIFO order.
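Assuming small fixed sizes for p and c, the p*c grid of FIFO sub-queues might be modelled like this (the names and sizes are illustrative, not the patent's):

```python
from collections import deque

P, C = 4, 3  # p message priorities, c VPN service classes (example sizes)

# p*c monolithic sub-queues; the depth of each sub-queue is the
# third dimension of the multidimensional queue.
subqueues = [[deque() for _ in range(C)] for _ in range(P)]

def enqueue(msg, priority: int, service_class: int) -> None:
    # Receiver side: priority and class together select the sub-queue.
    subqueues[priority][service_class].append(msg)

def dequeue(priority: int, service_class: int):
    # Messages within each one-dimensional sub-queue leave in FIFO order.
    return subqueues[priority][service_class].popleft()

enqueue("first", 0, 1)
enqueue("second", 0, 1)
assert dequeue(0, 1) == "first"
assert dequeue(0, 1) == "second"
```

A fourth dimension, as the description notes, would simply make this an array of such grids indexed by functionality type.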
  • the receiver function of scheduler 802 stores messages 806 in the multidimensional queue 804 (indicated by arrow 808 ) based on a processing priority or priority criteria and an attribute associated with the message 806 . Note that multiple attributes may be used to determine where the message 806 is stored in the multidimensional queue 804 .
  • a special function can be used to insert the message 806 into the multidimensional queue 804 . For example, this special function can use the function index, the VPN, the priority, and/or any other important criteria to insert the message 806 into the multidimensional queue 804 .
  • the dispatcher function of scheduler 802 pulls or schedules queued messages from the multidimensional queue 804 (indicated by arrow 810 ) for processing by the one or more processors 812 based on an algorithm.
  • the algorithm may take into account operating criteria, such as historical operating data, current operating data, anti-starvation criteria, one or more of the message attributes as described above, or combinations thereof.
  • the algorithm may be an exponentially weighted, non-starving, nested-round-robin, message-priority-based scheme, or a weighted, non-starving, nested-round-robin, class-based scheme, or any combination thereof.
  • Other suitable algorithms depending upon the specific application, may be used in accordance with the present invention.
  • the algorithm may also provide no more than a pre-determined number of consecutive messages to a function or processing entity within a time period. Once the scheduler 802 pulls or schedules a queued message, the scheduler 802 sends the message to the processor 812 .
  • Referring now to FIG. 9, a flowchart illustrating a method 900 for scheduling one or more messages for processing in accordance with one embodiment of the present invention is shown.
  • the process 900 begins in block 902 and one or more messages are received in block 904 .
  • Each message is stored in a multidimensional processing queue based on a processing priority and an attribute associated with the message in block 906 .
  • each queued message from the multidimensional processing queue is scheduled for processing based on an algorithm in block 908 .
  • the process 900 then repeats for each newly received message and until all messages are scheduled from the multidimensional queue.
  • this method 900 can be implemented as a computer program embodied on a computer readable medium wherein each block is performed by one or more code segments.
  • priority levels are set at compile time, while service levels are set by a network administrator.
  • the present invention provides a communications switch having one or more ingress cards, one or more signal processing cards, one or more control cards containing one or more processors and one or more egress cards.
  • Each signal processing card contains an array of digital signal processors.
  • the switch also includes a switch fabric communicably coupling the ingress cards, the signal processing cards, the control cards and the egress cards, a TDM bus communicably coupling the ingress cards, the signal processing cards, the control cards and the egress cards, a multidimensional processing queue, and a scheduler communicably coupled to each processor and the multidimensional processing queue.
  • the scheduler receives one or more messages, stores each message in the multidimensional processing queue based on a priority and an attribute of the message, and schedules each queued message from the multidimensional processing queue for processing based on an algorithm.
  • the algorithm used by the present invention can be a single algorithm or multiple algorithms that are selectively used depending on various operating criteria. For example, an exponentially weighted, non-starving, nested-round-robin, message-priority-based scheme could be used. Weighted means higher priority messages are served more frequently than the lower priority messages. There is an exponential service ratio between successive priority levels. Non-starving means lower priority messages will eventually get served. Round-robin means the servicing mechanism moves from one priority level to another in a round-robin fashion. The nesting gives the exponential service weighting. For example, assume that there are three message priorities: High, Medium and Low. Also assume that the queues have messages in them at any given time.
  • the order and amount of servicing would be H-M-H-L-H-M-H and repeating in the same order. So, four High messages, two Medium messages and one Low message would have been serviced. Also, after one high priority message is serviced, it will take at most one more lower priority message service before another high priority message is serviced. The following illustration may serve better to explain the order of servicing.
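One way to generate exactly this H-M-H-L-H-M-H order is a "ruler sequence" counter rule; this is a sketch of the weighting scheme, not necessarily the patent's implementation:

```python
def nested_round_robin(levels: int):
    # One full cycle of an exponentially weighted, non-starving,
    # nested-round-robin order: step i serves the priority level equal
    # to the number of trailing zero bits in i, so level k is served
    # twice as often as level k+1 and no level is ever starved.
    for i in range(1, 2 ** levels):
        yield (i & -i).bit_length() - 1  # trailing zeros of i

order = ["HML"[level] for level in nested_round_robin(3)]
assert order == ["H", "M", "H", "L", "H", "M", "H"]
assert order.count("H") == 4 and order.count("M") == 2 and order.count("L") == 1
```

With four levels the same rule yields 1-2-1-3-1-2-1-4-1-2-1-3-1-2-1 (numbering levels from one), matching the 8-4-2-1 priority interleaving described later in this section.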
  • Another algorithm that can be used is a weighted, non-starving, round-robin, VPN class-based scheme.
  • Within each priority there are multiple classes of service. For each class there is a maximum number of messages that can be serviced before the next class is serviced. The maximum number of serviceable messages assigned to each class defines the relative priority among those classes.
  • the algorithm may also provide a maximum number of messages that can be serviced during each scheduling period. Regardless of priority or service class a function is not given more than a pre-determined number of consecutive messages to be serviced. When it reaches the maximum, the dispatcher starts dequeuing messages for another function.
  • Service ratios for priorities are 8-4-2-1, i.e. for every eight priority-one messages serviced the task will serve one priority-four message. However, it will do this in a round-robin fashion so that priorities are interleaved. For example, assume that there are enough messages at each priority level and these numbers represent the priority level of each successive message being dequeued: 1-2-1-3-1-2-1-4-1-2-1-3-1-2-1, and so on repeating the same sequence.
  • Service ratios for classes are 10-6-3, i.e. for every 10 class-one messages serviced, the task will service 6 class-two messages and 3 class-three messages. These ratios are kept on a per-priority basis so as to avoid starvation and imbalance among different classes. Within each class the higher priority messages will be serviced more frequently than the lower priority messages based on the service ratios for priorities. The maximum number of messages to serve consecutively for this function is 15.
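A minimal sketch of the class-quota dispatch for one function's turn, assuming the 10-6-3 ratios and the 15-message cap above (the queue contents are made up for the example, and the per-priority interleaving within each class is omitted for brevity):

```python
from collections import deque

CLASS_QUOTAS = [10, 6, 3]   # maximum messages per class per period
MAX_CONSECUTIVE = 15        # cap on one function's consecutive messages

def dispatch_period(class_queues):
    # Serve each class up to its quota; end the whole turn once the
    # consecutive-message cap is reached, so the dispatcher can move
    # on to another function's queue.
    served, total = [], 0
    for quota, q in zip(CLASS_QUOTAS, class_queues):
        while q and quota > 0 and total < MAX_CONSECUTIVE:
            served.append(q.popleft())
            quota -= 1
            total += 1
    return served

queues = [deque(f"a{i}" for i in range(12)),  # class one: 12 waiting
          deque(f"b{i}" for i in range(4)),   # class two: 4 waiting
          deque(f"c{i}" for i in range(5))]   # class three: 5 waiting
out = dispatch_period(queues)
# Class one hits its quota of 10, class two empties at 4, and class
# three gets only 1 before the 15-message cap ends the turn.
assert len(out) == 15 and out[-1] == "c0"
```

When a class quota or the consecutive cap is exhausted mid-class, the leftover messages simply wait in their sub-queues for the next scheduling period, which is what keeps the scheme non-starving.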
  • Each cell represents the depth of the sub-queue. For example, there are three messages in the queue represented by priority four and class three. For simplicity of this illustration, assume that no new message is inserted into these queues during servicing.
  • the following table illustrates the dequeuing from the subqueues at each iteration.
  • the first column indicates the cell being serviced.
  • the second column indicates the depth of the sub-queue after the service.
  • the third column indicates the next message priority that needs to be serviced within this class.
  • the fourth column indicates the next calls that needs to be serviced.
  • the fifth column indicates remaining service quota for current class. Note that when remaining class quota reaches 0 or there are no more messages left in the current class, we move on to the next class.
  • the seventh column indicates total number of messages that were served during this scheduling period. The first iteration would be:
  • K: no service; move on to the next class.
  • L: queues in this class are depleted; move on to the next class.
  • M: queues in this class are depleted; move on to the next class. The quota for this class is also depleted. At this point there is no message left in any of the sub-queues, so the dispatcher would move on to the next function's queue.
  • The exponentially weighted priority servicing mechanism is not reset back to priority-one, because the next message to service may be the lowest priority message in that class. This ensures that there is no starvation of low priority messages if the process happens to move out of a class whenever the class quota is exhausted.

Abstract

The present invention provides a packet having a message and a processing priority associated with the message. The processing priority is dynamically changeable by a function operating on the message. The present invention also provides a method for associating a processing priority to a message by receiving the message, determining the processing priority for the message and associating the processing priority with the message such that the processing priority is dynamically changeable by a function operating on the message. In addition, the present invention provides a method for scheduling messages by receiving one or more messages and storing each message in a multidimensional processing queue based on a processing priority and an attribute associated with the message. Each queued message from the multidimensional processing queue is scheduled for processing based on an algorithm.

Description

    FIELD OF THE INVENTION
  • The present invention relates generally to the field of communications and, more particularly, to a method and apparatus for content based message processing.
  • BACKGROUND OF THE INVENTION
  • The increasing demand for data communications has fostered the development of techniques that provide more cost-effective and efficient means of using communication networks to handle more information and new types of information. One such technique is to segment the information, which may be a voice or data communication, into packets. A packet is typically a group of binary digits, including at least data and control information. Integrated packet networks (typically fast packet networks) are generally used to carry at least two (2) classes of traffic, which may include, for example, continuous bit-rate (“CBR”), speech (“Packet Voice”), data (“Framed Data”), image, and so forth. Packet networks source, sink and/or forward protocol packets.
  • Congestion and Quality of Service (“QoS”) problems inside these networks have not been solved satisfactorily and remain outstanding issues to be resolved. Although message scheduling helps alleviate these problems, the efficient scheduling of work with thousands of entities (instances) is not a simple matter. At present, most message scheduling is based on the simplest technique for queuing packets for transmission on an internodal trunk of a fast-packet network: a first-in-first-out (“FIFO”) queue. However, FIFO queuing techniques do not address QoS parameters. This technique can also allow overload periods for digitized speech packets and for Framed Data packets, which results in a greater share of bandwidth being provided to one at the expense of the other; an undesirable result.
  • Another technique, head-of-line-priority (“HOLP”), may give data priority over speech, but does not solve the problem of data and speech queues affecting the QoS of each other and of CBR data fast packets under high traffic conditions. In HOLP, where speech fast packets are given a high priority, speech fast packets may affect the QoS of lower priority queues. Likewise, queuing schemes designed only for data do not solve the problems of integrating other traffic types, such as speech and CBR data.
  • Traditional packet data routers are constructed in software using a scheduler or Real Time Operating System (“RTOS”), which associates the processing priority of functions (protocols or other operations performed upon a packet including forwarding) with the task or process that the function operates under. Thus, a particular function has a priority and all packets processed by the function inherit that priority for the duration of their processing by the function. If the next processing step to which the packet is subjected is set at a different priority, then the packet inherits a different priority for the duration of this processing step. Priority is associated with the function applied to the packet rather than the packet itself. If all packets traverse the same set of functions, they have equal access to the central processing unit (“CPU”) and receive equal priority treatment.
  • If all packets had equal priority, this might be sufficient. However, due to the need to sell different QoS, and the needs resulting from multimedia (voice, video and data) carried upon the same network infrastructure, there is a need to assign different priorities to a packet, and vary resource allocation (especially processing time) and forwarding treatment on a per-packet basis.
  • SUMMARY OF THE INVENTION
  • The present invention provides a system for assigning different priorities to a packet, and varying resource allocation (especially processing time) and forwarding treatment on a per-packet basis. The present invention is adaptable to accommodate new message types, multimedia applications and multi-service applications. It is flexible, with the ability to cater to a wide range of configurations and environments and improves the QoS of VoIP calls.
  • The present invention provides a packet having a message and a processing priority associated with the message. The processing priority is dynamically changeable by a function operating on the message. The processing priority can be associated with the message by attaching the processing priority to the start of the message, appending the processing priority to the end of the message or linking the processing priority to the message using pointers. The system for associating a processing priority to a message involves receiving the message, determining the processing priority for the message and associating the processing priority with the message such that the processing priority is dynamically changeable by a function operating on the message. This process can be implemented using a computer program embodied on a computer readable medium wherein each step is performed using one or more code segments.
  • The present invention also provides a method for scheduling one or more messages. The one or more messages are received and then each message is stored in a multidimensional processing queue based on a processing priority and an attribute associated with the message. Each queued message from the multidimensional processing queue is scheduled for processing based on an algorithm. The attribute can be a virtual private network classification, destination software function, function index, a functionality type or other message attribute. The algorithm can be one or more algorithms, such as an exponentially weighted, non-starving, nested-round-robin, message-priority-based scheme or a weighted, non-starving, nested-round-robin, class-based scheme. This process can be implemented using a computer program embodied on a computer readable medium wherein each step is performed using one or more code segments.
  • In addition, the present invention provides a communications switch having one or more ingress cards, one or more signal processing cards, one or more control cards containing one or more processors and one or more egress cards. Each signal-processing card contains an array of digital signal processors. The switch also includes a switch fabric communicably coupling the ingress cards, the signal processing cards, the control cards and the egress cards, a TDM bus communicably coupling the ingress cards, the signal processing cards, the control cards and the egress cards, a multidimensional processing queue, and a scheduler communicably coupled to each processor and the multidimensional processing queue. The scheduler receives one or more messages, stores each message in the multidimensional processing queue based on a priority and an attribute of the message, and schedules each queued message from the multidimensional processing queue for processing based on an algorithm.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The above and further advantages of the invention may be better understood by referring to the following description in conjunction with the accompanying drawings, in which:
  • FIG. 1 is a block diagram of a representative integrated network in accordance with the PRIOR ART;
  • FIG. 2 is a schematic diagram illustrating a message scheduling system in accordance with the PRIOR ART;
  • FIG. 3 is a schematic diagram illustrating another message scheduling system in accordance with the PRIOR ART;
  • FIGS. 4A, 4B and 4C are block diagrams illustrating a packet having an associated processing priority in accordance with various embodiments of the present invention;
  • FIG. 5 is a flowchart illustrating a method for associating a priority with a message in accordance with one embodiment of the present invention;
  • FIG. 6 is a diagram of a packet network switch in accordance with the present invention;
  • FIG. 7 is a schematic diagram illustrating a packet operating system in accordance with the present invention;
  • FIG. 8 is a schematic diagram illustrating a message scheduling system in accordance with the present invention; and
  • FIG. 9 is a flowchart illustrating a method for scheduling messages into queues in accordance with one embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • While the making and using of various embodiments of the present invention are discussed in detail below, it should be appreciated that the present invention provides many applicable inventive concepts that can be embodied in a wide variety of specific contexts. The specific embodiments discussed herein are merely illustrative of specific ways to make and use the invention and do not delimit the scope of the invention. The discussion herein relates to communications systems, and more particularly, to processing messages within a communications switch. It will be understood that, although the description herein refers to a communications environment, the concepts of the present invention are applicable to other environments, such as general data processing.
  • The present invention provides a system for assigning different priorities to a packet, and varying resource allocation (especially processing time) and forwarding treatment on a per-packet basis. The present invention is adaptable to accommodate new message types, multimedia applications and multi-service applications. It is flexible, with the ability to cater to a wide range of configurations and environments and improves the QoS of VoIP calls.
  • Now briefly referring to FIGS. 1-3, a representative network (FIG. 1) and various message scheduling systems (FIGS. 2 and 3) will be described in accordance with the prior art. FIG. 1 depicts a representative integrated network 100 in which phones 102 and faxes 104 are communicably coupled to a public switched telephone network (“PSTN”) 106. A switch 108 is communicably coupled to the PSTN 106 and an Internet Protocol (“IP”) network 110 to convert time division multiplexing (“TDM”) based communications 112 to IP-based communications 114. The switch 108 creates IP packets containing the necessary destination information so that the packets 114 can be properly routed to their destinations, which may include computers 116 or other devices communicably coupled to the IP network 110. A network controller 118 is communicably coupled to the PSTN 106 and the switch 108, and provides control signals to the switch 108 for proper processing of the TDM based communications 112. The network controller 118 may also be communicably connected to the IP network 110. Network controller 118 can function as a Media Gateway Controller (“MGC”). The MGC protocol is one of a few proposed control and signaling standards to compete with the older H.323 standard for the conversion of audio signals carried on telephone circuits, such as PSTN 106, to data packets carried over the Internet or other packet networks, such as IP network 110. As will be appreciated by those skilled in the art, this example is not limited to the conversion of TDM based communications to IP-based communications; instead, the present invention may be applied to any conversion of a multiplexed communication to a packet-based communication.
  • IP specifies the format of packets, also called datagrams, and the addressing scheme. Most networks combine IP with a higher-level protocol called Transport Control Protocol (“TCP”), which establishes a virtual connection between a destination and a source. IP allows a packet to be addressed and dropped in a system, but there is no direct link between the sender and the recipient. TCP/IP, on the other hand, establishes a connection between two hosts so that they can send messages back and forth for a period of time. IP network 110 receives and sends messages through switch 108, ultimately to phone 102 and/or fax 104. PCs 116 receive and send messages through IP network 110 in a packet-compatible format. Voice over IP (“VoIP”) is the ability to make telephone calls and send faxes over IP-based data networks, such as IP network 110. An integrated voice/data network 100 allows more standardization and reduces total equipment needs. VoIP can support multimedia and multi-service applications.
  • FIGS. 2 and 3 are schematic diagrams illustrating two message scheduling systems 200 and 300 in accordance with the prior art. In FIG. 2, messages 202 are received and stored in first-in-first-out (“FIFO”) queue 204. Messages 202 are then sent to processor 206 in the order in which they were received. No processing prioritization other than arrival time is applied in queue 204. In FIG. 3, messages 302 enter data type sorter 304 where messages 302 are separated by data type. A FIFO queue 306 a, 306 b, . . . 306 n exists for each individual data type. Data type sorter 304 sends messages 302 to FIFO queues 306 a, 306 b, . . . 306 n based on matching data types. Scheduler 308 then pulls messages 302 from FIFO queues 306 a, 306 b, . . . 306 n and sends messages 302 to processor 310. The primary prioritization is again based on arrival time in queues 306 a, 306 b, . . . 306 n. Scheduler 308 only coordinates the pulling of messages 302 for processing.
  • Now referring to the present invention and to FIGS. 4A, 4B and 4C, block diagrams illustrating a packet 400, 410 and 420 having an associated processing priority 402 in accordance with various embodiments of the present invention are shown. The present invention associates a processing priority or priority criteria 402 within or attached to a packet or message 404 in such a way that the priority or priority criteria 402 traverses the system along with the packet or message 404. The priority or priority criteria 402 may be one or more parameters that are evaluated to produce a priority for the message 404. Moreover, the priority or priority criteria 402 may be modified dynamically during the traversal as decisions are made regarding the priority/criteria 402.
  • The priority/priority criteria 402 are associated with the packet/message 404 in such a way that reference to one allows reference to the other, they traverse the system together, and that functions operating upon the packet/message 404 have the ability to change the priority/priority criteria 402. Changing the priority/priority criteria 402 dynamically, when used in conjunction with the other aspects of this invention, creates a processing environment where the priority/priority criteria 402 of the message/packet 404 governs the work allocation or dispatching.
  • Since system hardware and software for carrying messages around a system vary, the present invention provides at least three ways to associate the priority or parameters that may be evaluated to produce a priority 402 to a message 404. The priority/priority criteria 402 may be attached to the start of the message 404 (FIG. 4A), appended to the end of the message 404 (FIG. 4B) or linked to the message 404 (FIG. 4C). With respect to FIG. 4C, the message header and the message itself are stored in non-adjacent memory locations, and linked together by a memory pointer or some other means which enables a reference to one portion to be used to locate and reference the other portion.
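  • As a purely illustrative sketch (not taken from the patent), the three association strategies of FIGS. 4A, 4B and 4C can be modeled in software; the one-byte priority field below is an assumption made for brevity:

```python
# Hypothetical sketch of the three priority-association layouts of
# FIGS. 4A-4C; the one-byte priority field is an assumption.

def attach_to_start(priority: int, message: bytes) -> bytes:
    # FIG. 4A: the priority travels as a header prepended to the message.
    return bytes([priority]) + message

def append_to_end(priority: int, message: bytes) -> bytes:
    # FIG. 4B: the priority travels as a trailer appended to the message.
    return message + bytes([priority])

class LinkedMessage:
    # FIG. 4C: priority and message live in non-adjacent storage, joined
    # by a reference (the software analogue of a memory pointer).
    def __init__(self, priority: int, message: bytes):
        self.priority = priority  # mutable: functions may change it
        self.message = message

pkt = attach_to_start(3, b"payload")
```

In the linked form, a function holding either half can reach the other through the reference, which is what allows the priority to be changed in place while the message traverses the system.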
  • Referring now to FIG. 5, a flowchart illustrating a method 500 for associating a priority with a message in accordance with one embodiment of the present invention is shown. The process begins in block 502 and a message is received in block 504. The processing priority for the message is determined in block 506. The processing priority is then associated with the message such that the processing priority is dynamically changeable by a function operating on the message in block 508. The process 500 then repeats for each newly received message. Note that this method 500 can be implemented as a computer program embodied on a computer readable medium wherein each block is performed by one or more code segments.
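  • Method 500 can be sketched in a few lines of Python; the message-type-to-priority table is a placeholder assumption, not the patent's classification rule:

```python
# Sketch of method 500: receive a message (block 504), determine its
# processing priority (block 506), and associate the two so the priority
# remains dynamically changeable (block 508). The type-to-priority
# mapping below is an illustrative assumption.

PRIORITY_BY_TYPE = {"voice": 1, "video": 2, "data": 3}  # 1 = highest

class PrioritizedMessage:
    def __init__(self, msg_type: str, payload: bytes):
        self.payload = payload
        # Block 506: determine the processing priority from the content.
        self.priority = PRIORITY_BY_TYPE.get(msg_type, 3)

def reprioritize(msg: PrioritizedMessage, new_priority: int) -> None:
    # Block 508: any function operating on the message may change its
    # priority dynamically; the new value travels with the message.
    msg.priority = new_priority

m = PrioritizedMessage("data", b"...")
reprioritize(m, 1)  # e.g. promoted after deeper inspection
```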
  • Referring now to FIG. 6, a packet network switch 600 will be described. The packet network switch 600 can be used to process VoIP, voice over Frame Relay (“VoFR”) and other types of calls. Moreover, the packet network switch 600 is similar to an asynchronous transfer mode (“ATM”) switch. ATM is a connection-oriented technology used in both local area network (“LAN”) and wide area network (“WAN”) environments. It is a fast-packet switching technology that allows free allocation of capacity to each channel. Packet network switch 600 includes one or more ingress cards 602 a and 602 b, one or more signal processing cards 604, one or more control cards 606, one or more egress cards 608 a and 608 b, a switch fabric 610 and a TDM bus 612. Each signal processing card 604 contains an array of digital signal processors (“DSP”) (not shown) and each control card 606 contains one or more processors (not shown). The switch fabric 610 communicably couples the ingress cards 602, the signal processing cards 604, the control cards 606 and the egress cards 608 together. The TDM bus 612 also communicably couples the ingress cards 602, the signal processing cards 604, the control cards 606 and the egress cards 608 together. Preferably, cards 602, 604, 606 and 608 can be inserted in any order within packet network switch 600. Moreover, the packet network switch 600 should include sufficient numbers of redundant cards to serve as backup cards in the event a card 602, 604, 606 or 608 fails.
  • The main function of a packet network switch 600 is to relay user data cells from input ports to the appropriate output ports. When a call or communication is to be handled by the packet network switch 600, a network controller 118 (FIG. 1) provides the control card 606 with the necessary call setup information. Control card 606 uses this call setup information to assign a port in ingress cards 602 a or 602 b to receive the call from the PSTN 106 (FIG. 1), a DSP within processing card 604 to process the call, and a port in egress cards 608 a or 608 b to send the call to IP network 110 (FIG. 1). The TDM-based communications or messages 112 enter through ingress cards 602 a or 602 b and are routed to the appropriate processing card 604 through TDM Bus 612. The DSPs in processing card 604 convert messages between analog and digital information formats, and provide digital compression and switching functions. In one embodiment, each processing card 604 is capable of processing 1024 simultaneous sessions. The processing card 604 then sends the messages from the DSP to cell switch fabric 610, which is primarily responsible for the routing and transferring of messages or data cells, the basic transmission unit, between switch elements. The switch fabric 610 may also provide cell buffering, traffic concentration and multiplexing, redundancy for fault tolerance, multicasting or broadcasting, and cell scheduling based on delay priorities and congestion monitoring. Switch fabric 610 ultimately routes the messages to egress cards 608 a or 608 b. In one embodiment, each egress card 608 is capable of handling at least 8000 calls. Egress cards 608 a and 608 b typically send the messages to a gigabit Ethernet (not shown). As its name indicates, the gigabit Ethernet supports data rates of one (1) gigabit (1,000 megabits) per second.
  • Turning now to FIG. 7, a schematic diagram illustrating a packet operating system 700 with redundant control cards 702 a and 702 b is shown. Control cards 702 a and 702 b are housed within a single chassis, such as switch 600 (FIG. 6). Messages 704 enter packet operating system 700 through interface 706 on control card 702 a. Messages 704 travel from interface 706 onto protocol stack 708 and then to peripheral component interconnect (“PCI”) bus 710. PCI bus 710 sends messages 704 to either input/output (“I/O”) cards 712 or DSP cards 714. Control card 702 b mirrors either a portion or all of the data of control card 702 a. Each control card 702 a and 702 b of packet operating system 700 has its own memory and thus avoids the typical problems associated with shared memory, such as recursive calls, synchronization problems and data corruption.
  • FIG. 8 is a schematic diagram illustrating a message scheduling system 800 in accordance with the present invention. The scheduling system 800 of the present invention includes a scheduler 802 communicably coupled to a multidimensional queue 804. Scheduler 802 may comprise a receiver function and a dispatcher function. The multidimensional queue 804 may be described as a “set” of queues wherein the first square along the X-axis and Y-axis, such as square 804A, represents the head of a queue. Note that the multidimensional queue 804 is not limited to a three-dimensional queue as depicted in FIG. 8. Each queue within the multidimensional queue 804 is designated to receive messages based on a processing priority or criteria and an attribute associated with the message. The message attributes may include a virtual private network (“VPN”) classification, a destination software function, a functionality type, other attributes that distinguish one message from another, or combinations thereof. The processing priority can be based on QoS parameters or the type of message, such as data, fax, image, multimedia, voice, etc. VPN classification can be individual VPNs or groups of VPNs.
  • For example, one possible configuration of the multidimensional queue 804 could be based on VPN classification in the X-direction, processing priority in the Y-direction, and first-in-first-out (“FIFO”) in the Z-direction. Moreover, each function can have a slot comprised of multiple dimensions. A fourth dimension can also be added to the multidimensional queue 804 by making it an array of three-dimensional queues, where each one is handled by one type of functionality. In order for the scheduler or dispatcher 802 of the multidimensional queue 804 to call the right functionality, a function index and a jump table can be used.
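  • The function index and jump table mentioned above can be sketched as a table of callables keyed by index; the handler names and indices below are invented for illustration:

```python
# Sketch of dispatch by function index through a jump table.
# Handler names and indices are illustrative assumptions.

def handle_forwarding(msg):
    return ("forwarded", msg)

def handle_label_lookup(msg):
    return ("looked-up", msg)

JUMP_TABLE = {0: handle_forwarding, 1: handle_label_lookup}

def dispatch(function_index: int, msg):
    # The dispatcher uses the message's function index to call the
    # right functionality without a chain of conditionals.
    return JUMP_TABLE[function_index](msg)
```

The same pattern extends naturally to the array-of-three-dimensional-queues case: the function index selects both the three-dimensional queue and the handler that services it.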
  • The multidimensional queue 804 can be characterized as an advanced queue structure that consists of multiple sub-queues bundled in a single receive queue wherein each sub-queue serves a set of messages 806. The messages 806 can be classified by their priority (first dimension) and message classification or service classes (second dimension). Priority sub-queues will be serviced according to one or more algorithms, such as an exponentially weighted round-robin scheme. Within each priority there will be multiple sub-queues representing multiple VPN service classes. VPNs will be mapped into these service classes. Service classes will themselves be weighted relative to one another so that different qualities of service can be provided. In this example, the multidimensional queue 804 is a two-dimensional queue, consisting of p*c monolithic sub-queues, where p is the number of message priorities and c is the number of VPN service classes. The multidimensional queue 804 itself is three-dimensional since the messages within each of the p*c sub-queues represent the third dimension (the depth of the sub-queues). The messages within each one-dimensional sub-queue are serviced in FIFO order.
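  • A minimal sketch of the p*c sub-queue structure, assuming illustrative values for p and c, with FIFO order within each sub-queue:

```python
from collections import deque

# Sketch of the p*c multidimensional receive queue: one FIFO sub-queue
# per (priority, service class) pair; sub-queue depth is the third
# dimension. The structure itself carries no scheduling policy.

class MultidimensionalQueue:
    def __init__(self, priorities: int, classes: int):
        self.subq = [[deque() for _ in range(classes)]
                     for _ in range(priorities)]

    def enqueue(self, priority: int, svc_class: int, msg) -> None:
        # Receiver side: place the message by its priority and class.
        self.subq[priority][svc_class].append(msg)

    def dequeue(self, priority: int, svc_class: int):
        # Dispatcher side: FIFO within the selected sub-queue.
        return self.subq[priority][svc_class].popleft()

    def depth(self, priority: int, svc_class: int) -> int:
        return len(self.subq[priority][svc_class])
```

The dispatcher's servicing algorithm then decides which (priority, class) cell to dequeue from next; the structure only guarantees FIFO order inside each cell.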
  • The receiver function of scheduler 802 stores messages 806 in the multidimensional queue 804 (indicated by arrow 808) based on a processing priority or priority criteria and an attribute associated with the message 806. Note that multiple attributes may be used to determine where the message 806 is stored in the multidimensional queue 804. A special function can be used to insert the message 806 into the multidimensional queue 804. For example, this special function can use the function index, the VPN, the priority, and/or any other important criteria to insert the message 806 into the multidimensional queue 804. The dispatcher function of scheduler 802 pulls or schedules queued messages from the multidimensional queue 804 (indicated by arrow 810) for processing by the one or more processors 812 based on an algorithm. The algorithm may take into account operating criteria, such as historical operating data, current operating data, anti-starvation criteria, one or more of the message attributes as described above, or combinations thereof. For example, the algorithm may be an exponentially weighted, non-starving, nested-round-robin, message-priority-based scheme, or a weighted, non-starving, nested-round-robin, class-based scheme, or any combination thereof. Other suitable algorithms, depending upon the specific application, may be used in accordance with the present invention. The algorithm may also provide no more than a pre-determined number of consecutive messages to a function or processing entity within a time period. Once the scheduler 802 pulls or schedules a queued message, the scheduler 802 sends the message to the processor 812.
  • Now referring to FIG. 9, a flowchart illustrating a method 900 for scheduling one or more messages for processing in accordance with one embodiment of the present invention is shown. The process 900 begins in block 902 and one or more messages are received in block 904. Each message is stored in a multidimensional processing queue based on a processing priority and an attribute associated with the message in block 906. Thereafter, each queued message from the multidimensional processing queue is scheduled for processing based on an algorithm in block 908. The process 900 then repeats for each newly received message and until all messages are scheduled from the multidimensional queue. Note that this method 900 can be implemented as a computer program embodied on a computer readable medium wherein each block is performed by one or more code segments. Also note that it may be desirable to give system messages the highest priority (label lookups, etc.). Typically priority levels are set at compile time, while service levels are set by a network administrator.
  • In addition, the present invention provides a communications switch having one or more ingress cards, one or more signal processing cards, one or more control cards containing one or more processors and one or more egress cards. Each signal processing card contains an array of digital signal processors. The switch also includes a switch fabric communicably coupling the ingress cards, the signal processing cards, the control cards and the egress cards, a TDM bus communicably coupling the ingress cards, the signal processing cards, the control cards and the egress cards, a multidimensional processing queue, and a scheduler communicably coupled to each processor and the multidimensional processing queue. The scheduler receives one or more messages, stores each message in the multidimensional processing queue based on a priority and an attribute of the message, and schedules each queued message from the multidimensional processing queue for processing based on an algorithm.
  • The algorithm used by the present invention can be a single algorithm or multiple algorithms that are selectively used depending on various operating criteria. For example, an exponentially weighted, non-starving, nested-round-robin, message-priority-based scheme could be used. Weighted means higher priority messages are served more frequently than lower priority messages. There is an exponential service ratio between successive priority levels. Non-starving means lower priority messages will eventually get served. Round-robin means the servicing mechanism moves from one priority level to the next in a round-robin fashion. The nesting gives the exponential service weighting. For example, assume that there are three message priorities: High, Medium and Low. Also assume that the queues have messages in them at any given time. Then, the order and amount of servicing would be H-M-H-L-H-M-H, repeating in the same order. So, four High messages, two Medium messages and one Low message would have been serviced. Also, after one high priority message is serviced, it will take at most one more lower priority message service before another high priority message is serviced. The following illustration may better explain the order of servicing.
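  • The nested interleave has a compact characterization: within a period of 2^n-1 service slots for n priority levels, slot i is served from the priority level equal to the number of trailing zero bits of i, plus one. The following sketch reproduces the H-M-H-L-H-M-H order; it is offered as a reconstruction for illustration, not necessarily the patented implementation:

```python
def service_order(num_priorities: int) -> list[int]:
    # One period of the exponentially weighted, nested round-robin
    # order: slot i is served from priority (trailing zeros of i) + 1,
    # giving the 2:1 exponential ratio between successive levels.
    order = []
    for i in range(1, 2 ** num_priorities):
        tz = (i & -i).bit_length() - 1  # number of trailing zero bits
        order.append(tz + 1)
    return order

# Three priorities (1=High, 2=Medium, 3=Low): H-M-H-L-H-M-H
assert service_order(3) == [1, 2, 1, 3, 1, 2, 1]
```

With four priority levels the same rule yields the 1-2-1-3-1-2-1-4-1-2-1-3-1-2-1 sequence described later in the service example.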
  • Another algorithm that can be used is a weighted, non-starving, round-robin, VPN class-based scheme. Within each priority, there are multiple classes of service. For each class there is a maximum number of messages that can be serviced before the next class is serviced. The maximum number of serviceable messages assigned to each class defines the relative priority among those classes.
  • The algorithm may also provide a maximum number of messages that can be serviced during each scheduling period. Regardless of priority or service class a function is not given more than a pre-determined number of consecutive messages to be serviced. When it reaches the maximum, the dispatcher starts dequeuing messages for another function.
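  • The consecutive-service cap can be sketched as a dispatcher that hands each function at most a fixed number of messages before moving to the next function's queue; the queue contents and cap value below are illustrative:

```python
from collections import deque

# Sketch of the consecutive-service cap: regardless of priority or
# class, a function receives at most max_consecutive messages before
# the dispatcher moves on to the next function's queue.

def dispatch_round(function_queues: list, max_consecutive: int) -> list:
    served = []
    for fn_index, q in enumerate(function_queues):
        for _ in range(max_consecutive):
            if not q:
                break  # this function's queue is empty; move on
            served.append((fn_index, q.popleft()))
    return served

qs = [deque(["a1", "a2", "a3"]), deque(["b1"])]
out = dispatch_round(qs, 2)
# function 0 is capped at 2 messages before function 1 is served
```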
  • A service example with four priority levels and three classes will now be described. Service ratios for priorities: 8-4-2-1, i.e. for every eight priority-one messages serviced, the task will serve one priority-four message. However, it will do this in a round-robin fashion so that priorities are interleaved. For example, assume that there are enough messages at each priority level and that these numbers represent the priority level of each successive message being dequeued: 1-2-1-3-1-2-1-4-1-2-1-3-1-2-1, and so on, repeating the same sequence.
  • The service ratios for the classes are 10-6-3, i.e. for every 10 class-one messages serviced, the task will service six class-two messages and three class-three messages. These ratios are maintained on a per-priority basis to avoid starvation and imbalance among the different classes. Within each class, higher priority messages are serviced more frequently than lower priority messages according to the priority service ratios. The maximum number of messages this function may service consecutively is 15.
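The per-class quotas can be sketched in isolation as follows. This is illustrative only: priority ordering within a class is deliberately omitted, and the names and the small 2-1-1 quotas in the demo are hypothetical rather than taken from the patent's 10-6-3 example:

```python
def serve_by_class(class_queues, quotas):
    """Visit the classes in order; each visit services at most that
    class's quota of messages before moving on to the next class.
    quotas[i] is the per-visit service quota of class i."""
    served = []
    while any(class_queues):
        for cls, queue in enumerate(class_queues):
            for _ in range(quotas[cls]):
                if not queue:
                    break
                served.append((cls, queue.pop(0)))
    return served

# Three classes with small queues and quotas 2-1-1 for illustration:
out = serve_by_class([["a1", "a2", "a3"], ["b1"], ["c1", "c2"]], [2, 1, 1])
# Class 0 gets two messages per visit; classes 1 and 2 get one each.
```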
  • Assume the following initial queue status. The rows represent the different priorities and the columns the different classes; each cell gives the depth of the corresponding sub-queue. For example, there are three messages in the queue for priority four, class three. For simplicity of this illustration, assume that no new messages are inserted into these queues during servicing.
                 Class 1   Class 2   Class 3
    Priority 1      2         0         1
    Priority 2      3         5         2
    Priority 3      4         2         0
    Priority 4      0         1         3

    If this queue were serviced for the first time, the message at the head of the priority-one, class-one sub-queue would be dequeued. After this first service, the queue depths would be as follows (only the priority-one, class-one depth has changed):
                 Class 1   Class 2   Class 3
    Priority 1      1         0         1
    Priority 2      3         5         2
    Priority 3      4         2         0
    Priority 4      0         1         3
  • The following table illustrates the dequeuing from the sub-queues at each iteration. The first column indicates the cell being serviced. The second column indicates the depth of the sub-queue after the service. The third column indicates the next message priority to be serviced within this class. The fourth column indicates the next class to be serviced. The fifth column indicates the remaining service quota for the current class; when the remaining class quota reaches 0, or there are no more messages left in the current class, servicing moves on to the next class. The sixth column indicates the highest priority remaining in the current class, and the seventh column indicates the total number of messages served during this scheduling period. The first iteration would be:
    Service  Messages  Next      Next   Quota  Highest   Total     Comments
             Left      Priority  Class  Left   Priority  Messages
    P1-C1    1         2         1      9      1         1
    P2-C1    2         1         1      8      1         2
    P1-C1    0         3         1      7      2         3
    P3-C1    3         1         1      6      2         4
    P1-C1    0         2         1      6      2         4         A
    P2-C1    1         1         1      5      2         5         B
    P1-C1    0         4         1      5      2         5         A
    P4-C1    0         1         1      5      2         5         A
    P1-C1    0         2         1      5      2         5         A
    P2-C1    0         1         1      4      3         6
    P1-C1    0         3         1      4      3         6         A
    P3-C1    2         1         1      3      3         7         C
    P3-C1    1         1         1      2      3         8         D
    P3-C1    0         1         1      1      3         9         E
    P1-C2    0         2         2      6      2         9         F
    P2-C2    4         1         2      5      2         10        G
    P2-C2    3         1         2      4      2         11
    P3-C2    1         1         2      3      2         12
    P2-C2    2         1         2      2      2         13
    P4-C2    0         1         2      1      2         14
    P2-C2    1         1         3      0      2         15        H
    Comments:
    A: Bypassed (no message).
    B: Priorities 1, 4 and 1 bypassed since there are no messages.
    C: The 1, 2, 1 cycle repeats; 1, 2, 1 will be bypassed since there are no messages.
    D: 1, 2, 1, 4, 1, 2, 1 will be bypassed since there are no messages.
    E: There are no more messages in this class, so the class quota is reset back to 10 and servicing moves on to the next class. The priority serviced for the next class in this case will be 1. For compactness, only the scheduled order is present in the remaining rows. The next-priority column still contains what would have been scheduled if there were messages; the priority in the row indicates the actual message that was selected.
    F: No messages at this priority level.
    G: The priority cycle repeats here.
    H: Class quota exhausted; move on to the next class and set the quota back to 6. The function maximum has been reached, so move on to the next function.
  • At this point the maximum number of messages that can be serviced consecutively has been exhausted. The queue depths after this first consecutive run are:
                 Class 1   Class 2   Class 3
    Priority 1      0         0         1
    Priority 2      0         1         2
    Priority 3      0         1         0
    Priority 4      0         0         3

    The second iteration would be:
    Service  Messages  Next      Next   Quota  Highest   Total     Comments
             Left      Priority  Class  Left   Priority  Messages
    P1-C3    0         3         3      2      2         1
    P2-C3    1         1         3      1      2         2         I
    P2-C3    0         1         1      3      2         3         J
    C1       0         -1        2      10     -1        3         K
    P3-C2    0         1         2      5      2         4
    P2-C2    0         1         2      4      2         5
    C2       0         -1        3      6      -1        5         L
    P4-C3    2         4         3      2      4         6
    P4-C3    1         4         3      1      4         7
    P4-C3    0         -1        1      3      -1        8         M
    Comments:
    I: The cycle repeats after this.
    J: Class quota exhausted; move on to the next class and set the quota back to 3.
    K: No service; move on to the next class.
    L: The queues in this class are depleted; move on to the next class.
    M: The queues in this class are depleted; move on to the next class. The quota for this class is also depleted.

    At this point there are no messages left in any of the sub-queues, so the dispatcher would move on to the next function's queue. The exponentially weighted priority servicing mechanism is not reset back to priority one, because the next message to be serviced may be the lowest priority message in that class. This ensures that low priority messages are not starved if the process happens to leave a class whenever the class quota is exhausted.
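The effect of not resetting the priority position can be seen in a small sketch (illustrative only, not the patent's code): the nested round-robin cursor survives quota exhaustion, so the lowest priority level is the next to be served when the class is revisited.

```python
class PriorityCursor:
    """Position in the nested round-robin priority cycle. The cursor
    deliberately survives class-quota exhaustion, so servicing
    resumes mid-cycle instead of restarting at priority one."""
    def __init__(self, levels):
        self.period = 2 ** levels - 1   # cycle length, e.g. 7 for 3 levels
        self.i = 0

    def next_level(self):
        self.i = self.i % self.period + 1
        # the trailing-zero count of the position selects the level
        return (self.i & -self.i).bit_length() - 1

cur = PriorityCursor(levels=3)
first_visit = [cur.next_level() for _ in range(3)]    # levels 0, 1, 0
# ...class quota exhausted here; the cursor is NOT reset...
second_visit = [cur.next_level() for _ in range(2)]
# The lowest level (2) comes up immediately on the next visit,
# so it is not starved by repeated quota interruptions.
```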
  • Although preferred embodiments of the present invention have been described in detail, it will be understood by those skilled in the art that various modifications can be made therein without departing from the spirit and scope of the invention as set forth in the appended claims.

Claims (31)

1. A packet comprising:
a message;
a processing priority associated with the message, the processing priority being dynamically changeable on a per packet basis by a function operating on the message, wherein the processing priority comprises one or more parameters per packet that are evaluated to produce a priority for the packet.
2. The packet as recited in claim 1, wherein the processing priority comprises one or more parameters that are evaluated to produce a priority for the message.
3. The packet as recited in claim 1, wherein the processing priority is associated with the message by attaching the processing priority to the start of the message.
4. The packet as recited in claim 1, wherein the processing priority is associated with the message by appending the processing priority to the end of the message.
5. The packet as recited in claim 1, wherein the processing priority is associated with the message by linking the processing priority to the message using pointers.
6. A method for associating a processing priority to a message comprising the steps of:
receiving the message;
determining the processing priority for the message on a per packet basis;
associating the processing priority with the message such that the processing priority is dynamically changeable by a function operating on the message, wherein the processing priority comprises one or more parameters that are evaluated per packet to produce a priority for the packet.
7. The method as recited in claim 6, wherein the processing priority comprises one or more parameters that are evaluated to produce a priority for the message.
8. The method as recited in claim 6, wherein the processing priority is associated with the message by attaching the processing priority to the start of the message.
9. The method as recited in claim 6, wherein the processing priority is associated with the message by appending the processing priority to the end of the message.
10. The method as recited in claim 6, wherein the processing priority is associated with the message by linking the processing priority to the message using pointers.
11. A computer program loadable into a memory to be read and executed by a processor, comprising:
code segments adapted to associate a processing priority to a message, said code segments further comprising:
a code segment for receiving the message;
a code segment for determining the processing priority for the message;
a code segment for associating the processing priority with the message such that the processing priority is dynamically changeable by a function operating on the message, wherein the processing priority comprises one or more parameters that are evaluated per packet to produce a priority for the packet.
12. The computer program as recited in claim 11, wherein the processing priority comprises one or more parameters that are evaluated to produce a priority for the message.
13. The computer program as recited in claim 11, wherein the processing priority is associated with the message by attaching the processing priority to the start of the message.
14. The computer program as recited in claim 11, wherein the processing priority is associated with the message by appending the processing priority to the end of the message.
15. The computer program as recited in claim 11, wherein the processing priority is associated with the message by linking the processing priority to the message using pointers.
16. A method for scheduling one or more messages comprising the steps of:
receiving the one or more messages on a per packet basis;
storing each message in a multidimensional processing queue based on a processing priority and an attribute associated with the message, wherein the processing priority comprises one or more parameters that are evaluated per packet to produce a priority for the packet; and
scheduling each queued message from the multidimensional processing queue for processing based on an algorithm.
17. The method as recited in claim 16, wherein the attribute is a virtual private network classification.
18. The method as recited in claim 16, wherein the attribute is a destination software function.
19. The method as recited in claim 16, wherein the attribute is a function index.
20. The method as recited in claim 16, wherein the attribute is a functionality type.
21. The method as recited in claim 16, wherein the algorithm provides no more than a pre-determined number of consecutive messages to a function within a time period.
22. The method as recited in claim 16, wherein the processing priority comprises one or more parameters that are evaluated to produce a priority for the message.
23. The method as recited in claim 16, wherein the processing priority is dynamically changeable by a function operating on the message.
24. A computer program embodied on a computer readable medium loadable into a computer memory to be read and executed by a processor, comprising:
code segments adapted to schedule one or more messages on a per packet basis, further comprising:
a code segment for receiving the one or more messages;
a code segment for storing each message in a multidimensional processing queue based on a processing priority and an attribute associated with the message, wherein the processing priority comprises one or more parameters that are evaluated per packet to produce a priority for the packet; and
a code segment for scheduling each queued message from the multidimensional processing queue for processing based on an algorithm.
25. The computer program as recited in claim 24, wherein the attribute is a virtual private network classification.
26. The computer program as recited in claim 24, wherein the attribute is a destination software function.
27. The computer program as recited in claim 24, wherein the attribute is a function index.
28. The computer program as recited in claim 24, wherein the attribute is a functionality type.
29. The computer program as recited in claim 24, wherein the algorithm provides no more than a pre-determined number of consecutive messages to a function within a time period.
30. The computer program as recited in claim 24, wherein the processing priority comprises one or more parameters that are evaluated to produce a priority for the message.
31. The computer program as recited in claim 24, wherein the processing priority is dynamically changeable by a function operating on the message.
US12/180,630 2002-12-13 2008-07-28 System for content based message processing Abandoned US20080291829A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/180,630 US20080291829A1 (en) 2002-12-13 2008-07-28 System for content based message processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/318,742 US7426209B2 (en) 2002-12-13 2002-12-13 System for content based message processing
US12/180,630 US20080291829A1 (en) 2002-12-13 2008-07-28 System for content based message processing

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/318,742 Continuation US7426209B2 (en) 2002-12-13 2002-12-13 System for content based message processing

Publications (1)

Publication Number Publication Date
US20080291829A1 true US20080291829A1 (en) 2008-11-27

Family

ID=32592881

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/318,742 Active 2026-03-11 US7426209B2 (en) 2002-12-13 2002-12-13 System for content based message processing
US12/180,630 Abandoned US20080291829A1 (en) 2002-12-13 2008-07-28 System for content based message processing

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/318,742 Active 2026-03-11 US7426209B2 (en) 2002-12-13 2002-12-13 System for content based message processing

Country Status (5)

Country Link
US (2) US7426209B2 (en)
EP (1) EP1570613A2 (en)
CN (2) CN102158418A (en)
AU (1) AU2003297057A1 (en)
WO (1) WO2004056070A2 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080117474A1 (en) * 2006-11-22 2008-05-22 Canon Kabushiki Kaisha Data communication apparatus and data communication method
US20120163396A1 (en) * 2010-12-22 2012-06-28 Brocade Communications Systems, Inc. Queue speed-up by using multiple linked lists
US20120243557A1 (en) * 2011-03-24 2012-09-27 Stanton Kevin B Reducing latency of at least one stream that is associated with at least one bandwidth reservation
US20120290789A1 (en) * 2011-05-12 2012-11-15 Lsi Corporation Preferentially accelerating applications in a multi-tenant storage system via utility driven data caching
US8761201B2 (en) 2010-10-22 2014-06-24 Intel Corporation Reducing the maximum latency of reserved streams
US20140289320A1 (en) * 2007-09-21 2014-09-25 Huawei Technologies Co., Ltd. Method and apparatus for sending a push content
US9350659B1 (en) * 2009-06-26 2016-05-24 Marvell International Ltd. Congestion avoidance for network traffic

Families Citing this family (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7426209B2 (en) * 2002-12-13 2008-09-16 Telefonaktiebolaget L M Ericsson (Publ) System for content based message processing
US20040243979A1 (en) * 2003-02-27 2004-12-02 Bea Systems, Inc. Systems utilizing a debugging proxy
US20050249144A1 (en) * 2003-10-17 2005-11-10 Abheek Saha Method of implementing scheduling discipline based on radio resource allocation for variable bandwidth satellite channels
US7899828B2 (en) 2003-12-10 2011-03-01 Mcafee, Inc. Tag data structure for maintaining relational data over captured objects
US7814327B2 (en) * 2003-12-10 2010-10-12 Mcafee, Inc. Document registration
US8548170B2 (en) 2003-12-10 2013-10-01 Mcafee, Inc. Document de-registration
US8656039B2 (en) 2003-12-10 2014-02-18 Mcafee, Inc. Rule parser
US20050131876A1 (en) * 2003-12-10 2005-06-16 Ahuja Ratinder Paul S. Graphical user interface for capture system
US7984175B2 (en) 2003-12-10 2011-07-19 Mcafee, Inc. Method and apparatus for data capture and analysis system
US7774604B2 (en) 2003-12-10 2010-08-10 Mcafee, Inc. Verifying captured objects before presentation
US7930540B2 (en) * 2004-01-22 2011-04-19 Mcafee, Inc. Cryptographic policy enforcement
GB0413482D0 (en) * 2004-06-16 2004-07-21 Nokia Corp Packet queuing system and method
US7962591B2 (en) * 2004-06-23 2011-06-14 Mcafee, Inc. Object classification in a capture system
US8560534B2 (en) 2004-08-23 2013-10-15 Mcafee, Inc. Database for a capture system
US7949849B2 (en) * 2004-08-24 2011-05-24 Mcafee, Inc. File system for a capture system
CN100421428C (en) * 2004-10-28 2008-09-24 华为技术有限公司 Method for scheduling forward direction public control channel message
US7907608B2 (en) * 2005-08-12 2011-03-15 Mcafee, Inc. High speed packet capture
US7818326B2 (en) 2005-08-31 2010-10-19 Mcafee, Inc. System and method for word indexing in a capture system and querying thereof
US7730011B1 (en) 2005-10-19 2010-06-01 Mcafee, Inc. Attributes of captured objects in a capture system
US7657104B2 (en) 2005-11-21 2010-02-02 Mcafee, Inc. Identifying image type in a capture system
CN100463451C (en) * 2005-12-29 2009-02-18 中山大学 Multidimensional queue dispatching and managing system for network data stream
US7724754B2 (en) * 2006-02-24 2010-05-25 Texas Instruments Incorporated Device, system and/or method for managing packet congestion in a packet switching network
US20070226504A1 (en) * 2006-03-24 2007-09-27 Reconnex Corporation Signature match processing in a document registration system
US8504537B2 (en) 2006-03-24 2013-08-06 Mcafee, Inc. Signature distribution in a document registration system
US7689614B2 (en) * 2006-05-22 2010-03-30 Mcafee, Inc. Query generation for a capture system
US8010689B2 (en) * 2006-05-22 2011-08-30 Mcafee, Inc. Locational tagging in a capture system
US7958227B2 (en) 2006-05-22 2011-06-07 Mcafee, Inc. Attributes of captured objects in a capture system
US20080019382A1 (en) * 2006-07-20 2008-01-24 British Telecommunications Public Limited Company Telecommunications switching
WO2008021182A2 (en) * 2006-08-09 2008-02-21 Interdigital Technology Corporation Method and apparatus for providing differentiated quality of service for packets in a particular flow
CN101163175A (en) * 2006-10-11 2008-04-16 鸿富锦精密工业(深圳)有限公司 Network voice device and service switch method thereof
US20080112399A1 (en) * 2006-11-13 2008-05-15 British Telecommunications Public Limited Company Telecommunications system
US20080186854A1 (en) * 2007-02-06 2008-08-07 British Telecommunications Public Limited Company Network monitoring system
US20080188191A1 (en) * 2007-02-06 2008-08-07 British Telecommunications Public Limited Company Network monitoring system
US7627618B2 (en) * 2007-02-21 2009-12-01 At&T Knowledge Ventures, L.P. System for managing data collection processes
US8185899B2 (en) * 2007-03-07 2012-05-22 International Business Machines Corporation Prediction based priority scheduling
US8205242B2 (en) 2008-07-10 2012-06-19 Mcafee, Inc. System and method for data mining and security policy management
US9253154B2 (en) 2008-08-12 2016-02-02 Mcafee, Inc. Configuration management for a capture/registration system
US8850591B2 (en) 2009-01-13 2014-09-30 Mcafee, Inc. System and method for concept building
US8706709B2 (en) 2009-01-15 2014-04-22 Mcafee, Inc. System and method for intelligent term grouping
US8473442B1 (en) 2009-02-25 2013-06-25 Mcafee, Inc. System and method for intelligent state management
CN101505273B (en) * 2009-03-04 2011-07-13 中兴通讯股份有限公司 Switch and scheduling method for implementing private network packet thereof
US8447722B1 (en) 2009-03-25 2013-05-21 Mcafee, Inc. System and method for data mining and security policy management
US8667121B2 (en) 2009-03-25 2014-03-04 Mcafee, Inc. System and method for managing data and policies
KR20100107801A (en) * 2009-03-26 2010-10-06 삼성전자주식회사 Apparatus and method for antenna selection in wireless communication system
US8806615B2 (en) 2010-11-04 2014-08-12 Mcafee, Inc. System and method for protecting specified data combinations
CN102546388B (en) * 2010-12-16 2014-08-13 国际商业机器公司 Method and system for grouping qos horizontal attribution
US20130246431A1 (en) 2011-12-27 2013-09-19 Mcafee, Inc. System and method for providing data protection workflows in a network environment
CN103795648A (en) * 2012-10-30 2014-05-14 中兴通讯股份有限公司 Method, device and system for scheduling queue
US9385974B2 (en) * 2014-02-14 2016-07-05 Sprint Communications Company L.P. Data message queue management to identify message sets for delivery metric modification
CN105848082B (en) * 2015-01-12 2019-03-15 中国移动通信集团湖南有限公司 A kind of processing method and processing device of up-on command
CN112311694B (en) * 2019-07-31 2022-08-26 华为技术有限公司 Priority adjustment method and device
CN111475312B (en) * 2019-09-12 2021-05-18 北京东土科技股份有限公司 Message driving method and device based on real-time operating system
CN113438153B (en) * 2021-06-25 2022-05-10 北京理工大学 Vehicle-mounted gateway, intelligent automobile and control method

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5506966A (en) * 1991-12-17 1996-04-09 Nec Corporation System for message traffic control utilizing prioritized message chaining for queueing control ensuring transmission/reception of high priority messages
US20020019853A1 (en) * 2000-04-17 2002-02-14 Mark Vange Conductor gateway prioritization parameters
US20020067296A1 (en) * 2000-03-23 2002-06-06 Mosaid Technologies, Inc. Multi-stage lookup for translating between signals of different bit lengths
US20020097675A1 (en) * 1997-10-03 2002-07-25 David G. Fowler Classes of service in an mpoa network
US20030123386A1 (en) * 2001-12-27 2003-07-03 Institute For Information Industry Flexible and high-speed network packet classifying method
US20040052262A1 (en) * 2002-09-16 2004-03-18 Vikram Visweswaraiah Versatile system for message scheduling within a packet operating system
US6798789B1 (en) * 1999-01-27 2004-09-28 Motorola, Inc. Priority enhanced messaging and method therefor
US7058051B2 (en) * 2000-12-08 2006-06-06 Fujitsu Limited Packet processing device
US7095740B1 (en) * 1998-06-30 2006-08-22 Nortel Networks Limited Method and apparatus for virtual overlay networks
US20070081456A1 (en) * 2002-04-08 2007-04-12 Gorti Brahmanand K Priority based bandwidth allocation within real-time and non-real time traffic streams
US7426209B2 (en) * 2002-12-13 2008-09-16 Telefonaktiebolaget L M Ericsson (Publ) System for content based message processing
US20100220742A1 (en) * 2000-12-19 2010-09-02 Foundry Networks, Inc. System and method for router queue and congestion management

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6188698B1 (en) 1997-12-31 2001-02-13 Cisco Technology, Inc. Multiple-criteria queueing and transmission scheduling system for multimedia networks
US6003101A (en) * 1998-07-15 1999-12-14 International Business Machines Corp. Efficient priority queue
US6185221B1 (en) 1998-11-09 2001-02-06 Cabletron Systems, Inc. Method and apparatus for fair and efficient scheduling of variable-size data packets in an input-buffered multipoint switch
GB2362776B (en) * 2000-05-23 2002-07-31 3Com Corp Allocation of asymmetric priority to traffic flow in network switches
US6731631B1 (en) * 2000-08-11 2004-05-04 Paion Company, Limited System, method and article of manufacture for updating a switching table in a switch fabric chipset system
JP3606188B2 (en) * 2000-10-18 2005-01-05 日本電気株式会社 Communication packet priority class setting control method and system, apparatus used therefor, and recording medium
US7170900B2 (en) * 2001-07-13 2007-01-30 Telefonaktiebolaget Lm Ericsson (Publ) Method and apparatus for scheduling message processing
US7042888B2 (en) * 2001-09-24 2006-05-09 Ericsson Inc. System and method for processing packets
US7088739B2 (en) * 2001-11-09 2006-08-08 Ericsson Inc. Method and apparatus for creating a packet using a digital signal processor
US7852865B2 (en) * 2002-11-26 2010-12-14 Broadcom Corporation System and method for preferred service flow of high priority messages


Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080117474A1 (en) * 2006-11-22 2008-05-22 Canon Kabushiki Kaisha Data communication apparatus and data communication method
US11856072B2 (en) 2007-09-21 2023-12-26 Huawei Technologies Co., Ltd. Method and apparatus for sending a push content
US11528337B2 (en) 2007-09-21 2022-12-13 Huawei Technologies Co., Ltd. Method and apparatus for sending a push content
US10757211B2 (en) 2007-09-21 2020-08-25 Huawei Technologies Co., Ltd. Method and apparatus for sending a push content
US9794363B2 (en) 2007-09-21 2017-10-17 Huawei Technologies Co., Ltd. Method and apparatus for sending a push content
US9444901B2 (en) * 2007-09-21 2016-09-13 Huawei Technologies Co., Ltd. Method and apparatus for sending a push content
US20140289320A1 (en) * 2007-09-21 2014-09-25 Huawei Technologies Co., Ltd. Method and apparatus for sending a push content
US9350659B1 (en) * 2009-06-26 2016-05-24 Marvell International Ltd. Congestion avoidance for network traffic
US8761201B2 (en) 2010-10-22 2014-06-24 Intel Corporation Reducing the maximum latency of reserved streams
US20140294014A1 (en) * 2010-12-22 2014-10-02 Brocade Communications Systems, Inc. Queue speed-up by using multiple linked lists
US9143459B2 (en) * 2010-12-22 2015-09-22 Brocade Communications Systems, Inc. Queue speed-up by using multiple linked lists
US8737418B2 (en) * 2010-12-22 2014-05-27 Brocade Communications Systems, Inc. Queue speed-up by using multiple linked lists
US20120163396A1 (en) * 2010-12-22 2012-06-28 Brocade Communications Systems, Inc. Queue speed-up by using multiple linked lists
US9083617B2 (en) 2011-03-24 2015-07-14 Intel Corporation Reducing latency of at least one stream that is associated with at least one bandwidth reservation
US8705391B2 (en) * 2011-03-24 2014-04-22 Intel Corporation Reducing latency of at least one stream that is associated with at least one bandwidth reservation
US20120243557A1 (en) * 2011-03-24 2012-09-27 Stanton Kevin B Reducing latency of at least one stream that is associated with at least one bandwidth reservation
US20120290789A1 (en) * 2011-05-12 2012-11-15 Lsi Corporation Preferentially accelerating applications in a multi-tenant storage system via utility driven data caching

Also Published As

Publication number Publication date
AU2003297057A1 (en) 2004-07-09
US20040120325A1 (en) 2004-06-24
CN1745549A (en) 2006-03-08
WO2004056070A3 (en) 2004-10-28
WO2004056070A2 (en) 2004-07-01
AU2003297057A8 (en) 2004-07-09
EP1570613A2 (en) 2005-09-07
US7426209B2 (en) 2008-09-16
CN102158418A (en) 2011-08-17
CN1745549B (en) 2011-07-06

Similar Documents

Publication Publication Date Title
US7426209B2 (en) System for content based message processing
US7170900B2 (en) Method and apparatus for scheduling message processing
US7099275B2 (en) Programmable multi-service queue scheduler
US7701849B1 (en) Flow-based queuing of network traffic
JP4616535B2 (en) Network switching method using packet scheduling
US6049546A (en) System and method for performing switching in multipoint-to-multipoint multicasting
US7016366B2 (en) Packet switch that converts variable length packets to fixed length packets and uses fewer QOS categories in the input queues that in the outout queues
KR100922654B1 (en) System and method for processing packets
US6438135B1 (en) Dynamic weighted round robin queuing
EP1264430B1 (en) Non-consecutive data readout scheduler
US8184540B1 (en) Packet lifetime-based memory allocation
US20030048792A1 (en) Forwarding device for communication networks
US6795870B1 (en) Method and system for network processor scheduler
JPH05502776A (en) A method for prioritizing, selectively discarding, and multiplexing high-speed packets of different traffic types
US20020085548A1 (en) Quality of service technique for a data communication network
US20030118044A1 (en) Queue scheduling mechanism in a data packet transmission system
CA2338778A1 (en) A link-level flow control method for an atm server
WO2002054183A2 (en) Address learning technique in a data communication network
NZ531355A (en) Distributed transmission of traffic flows in communication networks
US7382792B2 (en) Queue scheduling mechanism in a data packet transmission system
CA2347592C (en) Improvements in or relating to packet switches
Teruhi et al. An adaptive MPEG2-TS packet scheduling discipline for multimedia broadcasting
Karlsson et al. Discrete-time analysis of a finite capacity queue with an’all or nothing policy’to reduce burst loss
Dastangoo Performance consideration for building the next generation multi-service optical communications platforms

Legal Events

Date Code Title Description
AS Assignment

Owner name: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL), SWEDEN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AYRES, LAWRENCE;REEL/FRAME:021392/0138

Effective date: 20021209

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION