US20140185629A1 - Queue processing method - Google Patents

Queue processing method

Info

Publication number
US20140185629A1
US20140185629A1 US14/165,373
Authority
US
United States
Prior art keywords
state parameter
packet
value
port
parameter counter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/165,373
Inventor
Finbar Naven
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
US Bank NA
Original Assignee
Micron Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Micron Technology Inc
Priority to US14/165,373
Publication of US20140185629A1
Assigned to U.S. BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT reassignment U.S. BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MICRON TECHNOLOGY, INC.
Assigned to MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT reassignment MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT PATENT SECURITY AGREEMENT Assignors: MICRON TECHNOLOGY, INC.
Assigned to U.S. BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT reassignment U.S. BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT CORRECTIVE ASSIGNMENT TO CORRECT THE REPLACE ERRONEOUSLY FILED PATENT #7358718 WITH THE CORRECT PATENT #7358178 PREVIOUSLY RECORDED ON REEL 038669 FRAME 0001. ASSIGNOR(S) HEREBY CONFIRMS THE SECURITY INTEREST. Assignors: MICRON TECHNOLOGY, INC.
Assigned to JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT reassignment JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MICRON SEMICONDUCTOR PRODUCTS, INC., MICRON TECHNOLOGY, INC.
Assigned to MICRON TECHNOLOGY, INC. reassignment MICRON TECHNOLOGY, INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: U.S. BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT
Assigned to MICRON TECHNOLOGY, INC. reassignment MICRON TECHNOLOGY, INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT
Assigned to MICRON TECHNOLOGY, INC., MICRON SEMICONDUCTOR PRODUCTS, INC. reassignment MICRON TECHNOLOGY, INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/32 Flow control; Congestion control by discarding or delaying data units, e.g. packets or frames
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/90 Buffering arrangements

Definitions

  • the present invention relates to a method of processing a queue of data packets.
  • it is often necessary to connect a processing device to a plurality of input and output devices.
  • Appropriate data communication is achieved by connecting the devices in such a way as to allow them to send data to each other over a physical link, which may be a wired link or a wireless link.
  • Switches may therefore contain a buffer to store incoming data packets as they are waiting to be switched to one or more appropriate output ports. It is known to store data in such a buffer in the form of one or more queues which temporarily store data received from a device until that data can be sent to a receiving device.
  • each computer has its own dedicated I/O devices. It is, however advantageous to allow the sharing of I/O devices such that a plurality of computers can access one or more shared I/O devices. This allows an I/O device to appear to a computer system to be dedicated (i.e. local) to that computer system, while in reality it is shared between a plurality of computers.
  • I/O Virtualization allows physical resources (e.g. memory) associated with a particular I/O device to be shared by a plurality of computers.
  • One advantage of I/O virtualization is that it allows an I/O device to appear to function as multiple devices, each of the multiple devices being associated with a particular computer.
  • I/O virtualization allows I/O devices on a single computer to be shared by multiple operating systems running concurrently on that computer.
  • Another application of I/O virtualization known as multi-root I/O virtualization, allows multiple independent servers to share a set of I/O devices. Such servers may be connected together by way of a computer network.
  • a switch having a plurality of ports connects multiple I/O devices to multiple independent servers.
  • the switch provides queues allowing received data to be stored until onward transmission of the data to a destination is possible. This allows efficient utilization of link bandwidth, maximizing throughput and minimizing congestion.
  • These queues often comprise memory arranged as FIFO (first in, first out) queues.
  • Shared queues allow for more efficient use of resources and the design of more scalable and cost-efficient systems. Shared queues allow packets received at the switch from a plurality of inputs and destined for a plurality of outputs to be stored in the same queue.
  • shared queues create problems in applications where it is a requirement to allow individual servers to perform system resets independently of other servers sharing the same I/O devices.
  • a shared queue can contain data packets interleaved from multiple sources. If a server is reset, it is desirable that only those data packets stored within the shared queue associated with that server are discarded. This requirement can be difficult to achieve in practice, as in standard systems, a reset causes data packets from all active servers within the queue to be discarded.
  • a method of processing data packets comprising: storing a data packet associated with a respective one of a plurality of entities in a buffer; storing state parameter data associated with the stored data packet, the state parameter data being based upon a value of a state parameter associated with the respective one of the plurality of entities; and processing a data packet in the buffer based upon the associated state parameter data.
  • embodiments of the invention allow server independence to be achieved. That is, by storing a state parameter associated with a particular entity, changes in a state of that entity can be monitored. In some embodiments it can be determined which data packets were sent before and after a change of the state parameter associated with a corresponding entity and consequently which data packets were sent before and after a change of state of the entity. Where the state parameter is updated to reflect events at the entity with which it is associated, processing of a data packet can be based upon events at the entity. State parameter values may be stored with data packets in the buffer. As such a data packet need only be examined when it is processed.
  • the state parameter data may be based upon a value of a state parameter associated with said respective one of said plurality of entities when the stored data packet is received or processed in some predetermined way.
  • the processing may comprise selecting a data packet for processing and processing the state parameter data associated with the selected data packet with reference to a current value of the state parameter associated with the respective entity. If the processing indicates a first relationship between the state parameter data associated with the selected data packet and the current value of the state parameter associated with the respective entity the method may further comprise transmitting the selected data packet to at least one destination associated with the selected data packet. If the processing indicates a second relationship between the state parameter data associated with the selected data packet and the current value of the state parameter associated with the respective entity the method may further comprise discarding the selected data packet.
  • the first relationship could be equality, i.e. the state parameter data associated with the selected data packet and the current value of the state parameter associated with the respective entity match.
  • the state parameter may be a counter. In such a case the state parameter may be updated by incrementing its value.
  • the state parameter data may be stored in the buffer alongside the data packet, or alternatively may be stored in other appropriate storage, preferably storage which is local to the buffer.
  • the buffer may be implemented as a queue, preferably a first-in, first-out queue.
  • the state parameter associated with an entity may be updated in response to at least one event associated with the entity.
  • the state parameter may update each time the entity is reset.
  • the entity may be a source of the data packet or a destination of the data packet.
  • the entity may be a computing device.
  • the entity may be a computer program running on a computing device. That is, a plurality of entities may be a plurality of different computer programs (e.g. different operating system instances) running on a common computer. Alternatively, a plurality of entities may comprise a plurality of computing devices.
  • a method of storing data packets in a buffer comprising: receiving a data packet associated with a respective one of the plurality of entities; determining a value of a state parameter associated with the respective one of the plurality of entities; storing the data packet in the buffer; and storing state parameter data based upon the determined value.
  • a computer apparatus for processing data packets.
  • the apparatus comprises a memory storing processor readable instructions and a processor configured to read and execute instructions stored in the memory.
  • the processor readable instructions comprise instructions controlling the processor to carry out a method as described above.
  • aspects of the present invention can be implemented in any convenient way including by way of suitable hardware and/or software.
  • a switching device arranged to implement the invention may be created using appropriate hardware components.
  • a programmable device may be programmed to implement embodiments of the invention.
  • the invention therefore also provides suitable computer programs for implementing aspects of the invention. Such computer programs can be carried on suitable carrier media including tangible carrier media (e.g. hard disks, CD ROMs and so on) and intangible carrier media such as communications signals.
  • FIG. 1 is a schematic illustration of a plurality of servers connected to a plurality of input/output (I/O) devices by way of a switch;
  • FIG. 2 is a schematic illustration of a plurality of servers connected to an I/O device by way of a switch having a shared queue;
  • FIG. 3 is a schematic illustration of a plurality of servers connected to a plurality of I/O devices by way of two switches, each switch having a shared queue;
  • FIG. 4 is a schematic illustration of a plurality of servers connected to an I/O device by way of a switch having a shared queue adapted according to an embodiment of the present invention
  • FIG. 5A is a timing diagram showing times of arrival of a stream of data packets arriving at the switch of FIG. 4 ;
  • FIG. 5B is a timing diagram showing how values of state parameters change during receipt of the stream of data packets shown in FIG. 5A ;
  • FIGS. 6A to 6E are schematic illustrations showing the processing of the shared queue in the embodiment of the present invention shown in FIG. 4 ;
  • FIG. 7 is a schematic illustration showing comparisons performed during the processing described with reference to FIGS. 6A to 6E and the result of each comparison.
  • Referring first to FIG. 1 , three servers, server A, server B and server C, are connected to a switch 1 .
  • the switch 1 has three ports 2 , 3 , 4 and the server A is connected to the port 2 , the server B is connected to the port 3 and the server C is connected to the port 4 .
  • Three I/O devices (sometimes referred to as I/O endpoints) 5 a , 5 b and 5 c are also connected to the switch 1 .
  • the I/O device 5 a is connected to a port 6 of the switch 1
  • the I/O device 5 b is connected to a port 7 of the switch 1 while the I/O device 5 c is connected to a port 8 of the switch 1 .
  • the servers A, B, C communicate with the I/O devices 5 a , 5 b , 5 c by sending and receiving data packets through the switch 1 .
  • Each of the Servers A, B, C may transmit data packets to and receive data packets from some or all of the I/O devices 5 a , 5 b , 5 c.
  • Each of the shared I/O devices 5 a , 5 b , 5 c may have a plurality of independent functions. That is, for example, the shared I/O device 5 a may appear to the servers A, B, C as a plurality of separate devices. The servers A, B, C may be given access to some or all of the functions of the I/O devices 5 a , 5 b , 5 c .
  • the I/O devices 5 a , 5 b , 5 c can take any suitable form, and can be, for example, network interface cards, storage devices, or graphics rendering devices.
  • FIG. 2 shows part of the arrangement shown in FIG. 1 .
  • the switch 1 has a shared input queue 9 .
  • Data packets received from the three servers A, B, C enter the switch 1 via the respective ports 2 , 3 , 4 and are queued in the shared input queue 9 .
  • the shared input queue 9 is processed by removing packets from the head of the queue, and transmitting removed packets to the I/O device 5 a via the port 6 of the switch 1 .
  • the server C sends a data packet to the I/O device 5 a
  • the data packet is transmitted from the server C and received by the switch 1 via port 4 .
  • the data packet is then stored in the shared input queue 9 until bandwidth is available to transmit the packet to the I/O device 5 a through the port 6 of the switch 1 .
  • Bandwidth might be unavailable for a variety of reasons, for example if the I/O device 5 a is busy, or if a link between the switch 1 and the I/O device 5 a is busy.
  • the shared input queue 9 can be implemented in any suitable way.
  • the shared input queue may be implemented as a first in first out (FIFO) queue.
  • packets are received from the three servers A, B, C and are queued as they are received. Packets are then transmitted to the I/O device 5 a in the order in which they were received, regardless of the server from which they were received.
  • the shared input queue 9 can be stored within appropriate storage provided by the switch 1 , such as appropriate RAM.
  • the servers A, B, C send data packets to the shared I/O device 5 d via both of the switches 1 , 10 .
  • a data packet transmitted by, for example the server A is first received at the switch 1 through the port 2 , and is queued in the shared input queue 9 until it can be sent on towards its destination.
  • the data packet is transmitted to the switch 10 through the port 11 of the switch 1 and the port 14 b of the switch 10 .
  • the data packet is queued in the shared input queue 12 until it can be forwarded onto the intended destination, which is the I/O device 5 d.
  • Server D sends data packets to the I/O device 5 d via the switch 10 .
  • Data packets transmitted by the server D are received at the switch 10 through the port 14 a , and are queued in the shared input queue 12 .
  • the shared input queue 12 contains data packets transmitted by each of the servers A, B, C and D.
  • Each of the servers A, B, C are connected to the switch 1 , and as such the shared input queue 9 provided by the switch 1 contains data packets transmitted by each of the servers A, B, C.
  • FIG. 3 shows one way in which the arrangement of FIG. 2 can be modified to allow a larger number of devices to be connected together. It will be readily apparent that other configurations are possible.
  • FIG. 4 shows the general arrangement of FIG. 2 adapted to implement the invention. It will be appreciated that in alternative embodiments of the invention alternative arrangements may be employed including, for example, an arrangement based upon that shown in FIG. 3 and described above, as is discussed further below.
  • the switch 1 connects three servers A, B, C and the shared I/O device 5 a as described with reference to FIG. 2 .
  • the switch 1 again implements a shared input queue 9 which is arranged to store packets received from the servers A, B, C as they are received by the switch 1 and before they are transmitted onwards to the shared I/O device 5 a .
  • the switch 1 stores a state parameter for each of the servers A, B, C which is connected to the switch 1 .
  • a state parameter 15 is associated with the server A
  • a state parameter 16 is associated with the server B
  • a state parameter 17 is associated with the server C. It will be appreciated that if further servers were connected to the switch 1 further state parameters would be stored. Indeed, the switch 1 stores as many state parameters as is necessary to cater for the number of data sources which may be connected to the switch 1 .
  • the state parameters 15 , 16 , 17 are stored in RAM provided by the switch 1 .
  • Each data packet is stored together with a value of the state parameter associated with the port through which the data packet was received, the value of the state parameter being determined as the data packet is received by the switch 1 .
  • the value of the state parameter 15 associated with the port 2 (and therefore associated with the server A) when the data packet is received is stored alongside the received data packet in the shared input queue 9 . That is, if the state parameter 15 has a value of ‘1’ when a particular data packet is received from Server A by the switch 1 the shared input queue 9 will store the received data packet together with the value ‘1’ in the shared input queue 9 .
  • each received data packet is stored in the shared input queue 9 together with an appropriate value of an appropriate state parameter.
  • this is detected by the switch 1 and the corresponding state parameter 15 , 16 , 17 is updated in response to the reset.
  • a reset can be detected by the switch 1 in any convenient way. For example a signal may be received at the switch which is indicative of a reset of one of the servers. Such a signal may be provided in the form of a control data packet. Alternatively, the switch 1 may detect a failure of the link between the switch 1 and one of the servers. Regardless of how a reset is detected by the switch 1 , in response to detection of a reset, the corresponding state parameter is updated.
  • the update of the corresponding state parameter comprises incrementing the corresponding state parameter. For example, if state parameter 16 has a value of ‘1’ and the server B is reset, this is detected and the state parameter 16 is incremented such that it has the value of ‘2’.
  • each of the servers A, B and C sends data packets to the I/O device 5 a .
  • the data packets are sent via the switch 1 and are stored together with corresponding state parameters values in the shared input queue 9 in order of receipt.
  • FIG. 5A illustrates the timing relationship between the receipt of data packets at the switch 1 .
  • Each data packet is identified by reference to a server from which it was received and a counter indicating how many packets have been received from that server. That is, a data packet denoted A 0 is the first data packet received from the server A.
  • FIG. 5B shows changes in the values of the state parameters 15 , 16 , 17 . It can be seen that between times t 0 and t 1 the state parameter 15 has a value of ‘0’, the state parameter 16 has a value of ‘1’ and the state parameter 17 has a value of ‘2’. At time t 1 the value of the state parameter 15 is changed from ‘0’ to ‘1’, while the values of the state parameters 16 , 17 remain respectively ‘1’ and ‘2’. No further changes in state parameter values occur between times t 1 and tx. From FIG. 5B it can be seen that the server A was reset at time t 1 resulting in the state parameter 15 being incremented.
  • FIG. 6A shows the shared input queue 9 after receipt of all data packets shown in FIG. 5A .
  • the data packet at the head of the shared input queue 9 is data packet B 0 given that this was the first data packet received at the switch 1 .
  • the data packet B 0 is stored in the shared input queue 9 together with the value of the state parameter 16 when the data packet B 0 is received, i.e. a value of ‘1’.
  • a next data packet stored in the shared input queue 9 is the data packet A 0 received from the server A.
  • the data packet A 0 is stored together with a value ‘0’, which is the value of the state parameter 15 when the data packet A 0 is received at the switch 1 .
  • the next data packet stored in the shared input queue 9 is the data packet C 0 received from the server C.
  • the data packet C 0 is stored in the shared input queue 9 together with a value ‘2’, which is the value of the state parameter 17 when the packet C 0 is received at the switch 1 .
  • Next in the shared input queue 9 are stored two data packets received from the server B, the data packets B 1 and B 2 , both of which are stored together with a value of ‘1’ which is the value of state parameter 16 when these packets are received at the switch 1 .
  • the data packet B 2 is followed in the shared input queue 9 by a data packet A 1 received from the server A and stored together with a value of ‘0’, that being the value of the state parameter 15 when the data packet A 1 is received at the switch 1 .
  • the server A is reset, resulting in the state parameter 15 being incremented as described above.
  • the data packet A 2 is received from the server A and is added to the shared input queue 9 .
  • the data packet A 2 is stored in the shared input queue 9 with a value of ‘1’ following update of the state parameter 15 .
  • FIG. 6A also shows the values of the state parameters 15 , 16 , 17 after receipt of all packets shown in the shared input queue 9 in FIG. 6A . It is assumed, for the purposes of example only, that all data packets shown in FIG. 5A are received at the switch 1 and are added to the shared input queue 9 before any of the received packets leave the shared input queue 9 . That is, following receipt of the data packets shown in FIG. 5A , the shared input queue 9 and the state parameters 15 , 16 , 17 have the state shown in FIG. 6A .
  • the shared input queue 9 is processed as a FIFO queue. That is, a data packet at the head of the queue is the data packet which is considered for transmission. To determine whether the data packet at the head of the shared input queue 9 should be transmitted, the port through which the data packet was received (and consequently a server from which the data packet was received) is determined from information contained within the header of the stored packet. The value of the state parameter stored alongside the processed data packet is then compared with the current value of the appropriate state parameter corresponding to the port on which the data packet was received. So, when the queue illustrated in FIG. 6A is processed, the data packet B 0 is considered for transmission.
  • the relevant state parameter is the state parameter 16 , which has a value of ‘1’ when the shared input queue shown in FIG. 6A is processed. It can be seen that the data packet B 0 is stored alongside a value of ‘1’. Given that the value stored alongside the data packet B 0 (‘1’) is equal to the current value of the relevant state parameter it can be determined that the server B has not been reset since receipt of the data packet B 0 , and as such the data packet B 0 is transmitted.
  • FIG. 6B shows the state of the shared input queue 9 after data packet B 0 has been transmitted.
  • Data packet A 0 is now at the head of shared input queue 9 and is therefore processed.
  • the data packet A 0 was received from server A.
  • the state parameter value (‘0’) stored alongside the data packet A 0 does not match the current value (‘1’) of the corresponding state parameter 15 .
  • server A has been reset since sending data packet A 0 and data packet A 0 should therefore not be transmitted from the switch 1 to the I/O device 5 a .
  • the data packet A 0 is therefore discarded without transmission.
  • FIG. 6D shows the state of the shared input queue 9 after data packets C 0 , B 1 and B 2 have been transmitted as described with reference to FIG. 6C .
  • the data packet at the head of the shared input queue 9 is the data packet A 1 .
  • the state parameter value stored alongside the data packet A 1 is ‘0’ which does not match the current value, ‘1’, of relevant state parameter 15 . This indicates that a reset of server A has occurred subsequent to the sending of data packet A 1 .
  • the data packet A 1 is therefore invalid and is discarded.
  • the data packet A 2 is at the head of the shared input queue 9 .
  • the data packet A 2 was sent by the server A after the server A was reset.
  • the state parameter value (‘1’) stored alongside data packet A 2 matches the current value of the relevant state parameter 15 indicating that server A has not been reset subsequent to sending data packet A 2 .
  • Data packet A 2 is therefore transmitted to the I/O device 5 a.
  • FIG. 7 illustrates the comparisons performed during the processing described with reference to FIGS. 6A to 6E and the result of each comparison. It can be seen that the data packets A 0 and A 1 are discarded as the value of the state parameter stored alongside the data packets A 0 and A 1 does not match the current value of state parameter 15 . All other data packets are transmitted.
  • data packet traffic may equally flow in the other direction, i.e. the shared I/O device 5 a may send data packets to one or more of the servers A, B, C utilizing the same queue processing method.
  • data packets may be stored in a queue together with a value of the state parameter associated with a server to which the data packets are to be transmitted. That is, the method described above may be applied using state parameters associated with destinations (rather than sources) of data packets. It will similarly be appreciated that the state parameters used in the queuing method may be associated with the I/O devices rather than with the servers.
  • each switch port stores one state parameter for each unique ingress flow passing through the port. That is, the port 14 b of the switch 10 to which the switch 1 is connected has three state parameters, one for each of the servers A, B, C. Given that the shared queue 12 is shared by each of the servers A, B, C, D, four state parameters will be maintained in total, one for each of the servers.
  • the I/O device 5 d has four returning data flows, one to each of the servers A, B, C, D. As such four state parameters will need to be associated with the port 13 of the switch 10 to which the I/O device 5 d is connected.
  • a reset is initiated by the server, and will propagate downwards to the I/O devices.
  • the server can reset part of its hierarchy, for example from port 13 downwards, without resetting ports 14 b , 11 or 2 , so in this situation the state parameters at the various ports are updated differently. In this case only the relevant state parameter associated with the port 13 is updated.
  • the state parameters may be implemented in any suitable way.
  • the state parameter may be implemented as a counter where the counter size is determined by the minimum reset frequency and the maximum latency for data packets within the shared queue.
  • a 2-bit counter will provide a suitable range of values for many applications.
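  • purely as an illustrative sketch (Python, with assumed names such as COUNTER_MODULUS and increment), a 2-bit state parameter is simply a counter that wraps modulo 4; its range only has to exceed the number of resets that can occur while any tagged packet is still resident in the shared queue:

        COUNTER_BITS = 2
        COUNTER_MODULUS = 1 << COUNTER_BITS   # 4 distinct values: 0..3

        def increment(value):
            """Advance the state parameter, wrapping within its 2-bit range."""
            return (value + 1) % COUNTER_MODULUS

        def matches(stored_value, current_value):
            """A queued packet remains valid only if its stored tag equals the current value."""
            return stored_value == current_value

        value = increment(3)                         # wraps from 3 back to 0
        assert value == 0 and not matches(3, value)  # packets tagged '3' would now be discarded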
  • data packets are transmitted from a plurality of different servers.
  • data packets may be transmitted from a plurality of different entities of any kind.
  • the entities may take the form of operating system instances operating on a single server.
  • the term ‘server’ is intended broadly and covers any computing device.

Abstract

A method of processing data packets, each data packet being associated with one of a plurality of entities. The method comprises storing a data packet associated with a respective one of said plurality of entities in a buffer, storing state parameter data associated with said stored data packet, the state parameter data being based upon a value of a state parameter associated with said respective one of said plurality of entities, and processing a data packet in said buffer based upon said associated state parameter data.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of pending U.S. patent application Ser. No. 12/410,704, filed Mar. 25, 2009, which application claims priority, under 35 U.S.C. § 119(a), to British Patent Application No. 0806145.9, filed Apr. 4, 2008, and claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Patent Application No. 61/042,321, filed Apr. 4, 2008. These applications and patents are incorporated herein by reference, in their entirety, for any purpose.
  • TECHNICAL FIELD
  • The present invention relates to a method of processing a queue of data packets.
  • It is often necessary to send data between devices in a computer system; for example, it is often necessary to connect a processing device to a plurality of input and output devices. Appropriate data communication is achieved by connecting the devices in such a way as to allow them to send data to each other over a physical link, which may be a wired link or a wireless link.
  • It is known in the art to use a switch to route data packets from an output of one device to inputs of one or more other devices. Such a switch comprises one or more input ports arranged to allow the data packets to be received by the switch, and a plurality of output ports arranged to allow the data to be transmitted from the switch. Circumstances may arise where there is insufficient bandwidth on a connecting link or where a receiving device is busy processing previously received data such that data received at a switch cannot be sent from the switch to a receiving device through the appropriate output port. Switches may therefore contain a buffer to store incoming data packets as they are waiting to be switched to one or more appropriate output ports. It is known to store data in such a buffer in the form of one or more queues which temporarily store data received from a device until that data can be sent to a receiving device.
  • Many conventional computer systems do not share input/output (I/O) devices. That is, each computer has its own dedicated I/O devices. It is, however, advantageous to allow the sharing of I/O devices such that a plurality of computers can access one or more shared I/O devices. This allows an I/O device to appear to a computer system to be dedicated (i.e. local) to that computer system, while in reality it is shared between a plurality of computers.
  • Sharing of I/O devices can be implemented using what is known as I/O virtualization. I/O Virtualization allows physical resources (e.g. memory) associated with a particular I/O device to be shared by a plurality of computers. One advantage of I/O virtualization is that it allows an I/O device to appear to function as multiple devices, each of the multiple devices being associated with a particular computer.
  • Sharing of I/O devices can lead to better resource utilization, scalability, ease of upgrade, and improved reliability. One application of I/O virtualization allows I/O devices on a single computer to be shared by multiple operating systems running concurrently on that computer. Another application of I/O virtualization, known as multi-root I/O virtualization, allows multiple independent servers to share a set of I/O devices. Such servers may be connected together by way of a computer network.
  • In order to ensure ease of integration, flexibility and compatibility with existing system components it is necessary to be able to provide I/O virtualization transparently, without requiring changes to the applications or operating systems running on the servers. Each server should be able to operate independently and be unaware that it is sharing I/O resources with other servers. It is desirable to be able to reset a server and its I/O resources without impacting other running servers that are sharing the I/O resources.
  • In typical multi-root I/O virtualization (IOV) implementations, a switch having a plurality of ports connects multiple I/O devices to multiple independent servers. The switch provides queues allowing received data to be stored until onward transmission of the data to a destination is possible. This allows efficient utilization of link bandwidth, maximizing throughput and minimizing congestion. These queues often comprise memory arranged as FIFO (first in, first out) queues. When a packet is received at the switch, it is stored in a queue until it can be sent to its intended destination. Since the queues operate on a first in, first out basis, a data packet that cannot be forwarded to its next destination prevents subsequent data packets from making forward progress, causing the queues to fill and resulting in congestion.
  • It is known in the art to use shared queues. Shared queues allow for more efficient use of resources and the design of more scalable and cost-efficient systems. Shared queues allow packets received at the switch from a plurality of inputs and destined for a plurality of outputs to be stored in the same queue.
  • However, shared queues create problems in applications where it is a requirement to allow individual servers to perform system resets independently of other servers sharing the same I/O devices. A shared queue can contain data packets interleaved from multiple sources. If a server is reset, it is desirable that only those data packets stored within the shared queue associated with that server are discarded. This requirement can be difficult to achieve in practice, as in standard systems, a reset causes data packets from all active servers within the queue to be discarded.
  • It is an object of an embodiment of the present invention to obviate or mitigate one or more of the problems outlined above.
  • According to a first aspect of the present invention there is provided a method of processing data packets, each data packet being associated with one of a plurality of entities, the method comprising: storing a data packet associated with a respective one of the plurality of entities in a buffer; storing state parameter data associated with the stored data packet, the state parameter data being based upon a value of a state parameter associated with the respective one of the plurality of entities; and processing a data packet in the buffer based upon the associated state parameter data.
  • By keeping track of a state parameter associated with an entity and storing corresponding state parameter data for each data packet stored in the queue, embodiments of the invention allow server independence to be achieved. That is, by storing a state parameter associated with a particular entity, changes in a state of that entity can be monitored. In some embodiments it can be determined which data packets were sent before and after a change of the state parameter associated with a corresponding entity and consequently which data packets were sent before and after a change of state of the entity. Where the state parameter is updated to reflect events at the entity with which it is associated, processing of a data packet can be based upon events at the entity. State parameter values may be stored with data packets in the buffer. As such a data packet need only be examined when it is processed. As such, the processing of the queue need not change following an event at the entity with which a stored data packet is associated. Further, as only the state parameter needs updating, an event is reflected almost instantaneously allowing the buffer to respond to multiple events in sequence or in parallel. The state parameter data may be based upon a value of a state parameter associated with said respective one of said plurality of entities when the stored data packet is received or processed in some predetermined way.
  • The processing may comprise selecting a data packet for processing and processing the state parameter data associated with the selected data packet with reference to a current value of the state parameter associated with the respective entity. If the processing indicates a first relationship between the state parameter data associated with the selected data packet and the current value of the state parameter associated with the respective entity the method may further comprise transmitting the selected data packet to at least one destination associated with the selected data packet. If the processing indicates a second relationship between the state parameter data associated with the selected data packet and the current value of the state parameter associated with the respective entity the method may further comprise discarding the selected data packet.
  • For example, the first relationship could be equality, i.e. the state parameter data associated with the selected data packet and the current value of the state parameter associated with the respective entity match.
  • The state parameter may be a counter. In such a case the state parameter may be updated by incrementing its value. The state parameter data may be stored in the buffer alongside the data packet, or alternatively may be stored in other appropriate storage, preferably storage which is local to the buffer.
  • The buffer may be implemented as a queue, preferably a first-in, first-out queue.
  • The state parameter associated with an entity may be updated in response to at least one event associated with the entity. For example, the state parameter may update each time the entity is reset.
  • The entity may be a source of the data packet or a destination of the data packet. The entity may be a computing device. Alternatively, the entity may be a computer program running on a computing device. That is, a plurality of entities may be a plurality of different computer programs (e.g. different operating system instances) running on a common computer. Alternatively, a plurality of entities may comprise a plurality of computing devices.
  • According to a second aspect of the present invention, there is provided a method of storing data packets in a buffer, each data packet being associated with one of a plurality of entities, the method comprising: receiving a data packet associated with a respective one of the plurality of entities; determining a value of a state parameter associated with the respective one of the plurality of entities; storing the data packet in the buffer; and storing state parameter data based upon the determined value.
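  • By way of a non-limiting sketch only (Python; the class StateTaggedBuffer and its method names are assumptions, not terminology from this disclosure), the two aspects above can be combined as a buffer that records state parameter data at storage time and compares it with the current value at processing time:

        from collections import deque

        class StateTaggedBuffer:
            """Sketch of a buffer whose entries carry state parameter data."""

            def __init__(self):
                self.entries = deque()   # (packet, entity, state parameter data)
                self.state = {}          # current state parameter value per entity

            def store(self, packet, entity):
                """Second aspect: determine the entity's current value and store it
                alongside the packet."""
                value = self.state.setdefault(entity, 0)
                self.entries.append((packet, entity, value))

            def on_event(self, entity):
                """Update the state parameter, e.g. each time the entity is reset."""
                self.state[entity] = self.state.get(entity, 0) + 1

            def process(self):
                """First aspect: transmit on the first relationship (equality),
                discard on the second relationship (inequality)."""
                packet, entity, stored = self.entries.popleft()
                if stored == self.state.get(entity, 0):
                    return ('transmit', packet)
                return ('discard', packet)

        buffer = StateTaggedBuffer()
        buffer.store('p0', 'entity 1')
        buffer.on_event('entity 1')                      # e.g. the entity is reset
        assert buffer.process() == ('discard', 'p0')     # p0 predates the event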
  • It will be appreciated that many features described in connection with the first aspect of the present invention can similarly be applied to the second aspect of the present invention.
  • According to a third aspect of the present invention, there is provided a computer apparatus for processing data packets. The apparatus comprises a memory storing processor readable instructions and a processor configured to read and execute instructions stored in the memory. The processor readable instructions comprise instructions controlling the processor to carry out a method as described above.
  • It will be appreciated that aspects of the present invention can be implemented in any convenient way including by way of suitable hardware and/or software. For example, a switching device arranged to implement the invention may be created using appropriate hardware components. Alternatively, a programmable device may be programmed to implement embodiments of the invention. The invention therefore also provides suitable computer programs for implementing aspects of the invention. Such computer programs can be carried on suitable carrier media including tangible carrier media (e.g. hard disks, CD ROMs and so on) and intangible carrier media such as communications signals.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Preferred embodiments of the present invention will now be described, by way of example, with reference to the accompanying drawings in which:
  • FIG. 1 is a schematic illustration of a plurality of servers connected to a plurality of input/output (I/O) devices by way of a switch;
  • FIG. 2 is a schematic illustration of a plurality of servers connected to an I/O device by way of a switch having a shared queue;
  • FIG. 3 is a schematic illustration of a plurality of servers connected to a plurality of I/O devices by way of two switches, each switch having a shared queue;
  • FIG. 4 is a schematic illustration of a plurality of servers connected to an I/O device by way of a switch having a shared queue adapted according to an embodiment of the present invention;
  • FIG. 5A is a timing diagram showing times of arrival of a stream of data packets arriving at the switch of FIG. 4;
  • FIG. 5B is a timing diagram showing how values of state parameters change during receipt of the stream of data packets shown in FIG. 5A;
  • FIGS. 6A to 6E are schematic illustrations showing the processing of the shared queue in the embodiment of the present invention shown in FIG. 4; and
  • FIG. 7 is a schematic illustration showing comparisons performed during the processing described with reference to FIGS. 6A to 6E and the result of each comparison.
  • DETAILED DESCRIPTION
  • Referring first to FIG. 1, three servers, server A, server B and server C, are connected to a switch 1. The switch 1 has three ports 2, 3, 4 and the server A is connected to the port 2, the server B is connected to the port 3 and the server C is connected to the port 4. Three I/O devices (sometimes referred to as I/O endpoints) 5 a, 5 b and 5 c are also connected to the switch 1. The I/O device 5 a is connected to a port 6 of the switch 1, the I/O device 5 b is connected to a port 7 of the switch 1 while the I/O device 5 c is connected to a port 8 of the switch 1.
  • The servers A, B, C communicate with the I/O devices 5 a, 5 b, 5 c by sending and receiving data packets through the switch 1. Each of the servers A, B, C may transmit data packets to and receive data packets from some or all of the I/O devices 5 a, 5 b, 5 c.
  • Each of the shared I/O devices 5 a, 5 b, 5 c may have a plurality of independent functions. That is, for example, the shared I/O device 5 a may appear to the servers A, B, C as a plurality of separate devices. The servers A, B, C may be given access to some or all of the functions of the I/O devices 5 a, 5 b, 5 c. The I/O devices 5 a, 5 b, 5 c can take any suitable form, and can be, for example, network interface cards, storage devices, or graphics rendering devices.
  • FIG. 2 shows part of the arrangement shown in FIG. 1. Here, only the I/O device 5 a is shown connected to the switch 1. Data packets are sent from each of the servers A, B, C to the I/O device 5 a via the switch 1. The switch 1 has a shared input queue 9. Data packets received from the three servers A, B, C enter the switch 1 via the respective ports 2, 3, 4 and are queued in the shared input queue 9. The shared input queue 9 is processed by removing packets from the head of the queue, and transmitting removed packets to the I/O device 5 a via the port 6 of the switch 1. For example, if the server C sends a data packet to the I/O device 5 a, the data packet is transmitted from the server C and received by the switch 1 via port 4. The data packet is then stored in the shared input queue 9 until bandwidth is available to transmit the packet to the I/O device 5 a through the port 6 of the switch 1. Bandwidth might be unavailable for a variety of reasons, for example if the I/O device 5 a is busy, or if a link between the switch 1 and the I/O device 5 a is busy.
  • The shared input queue 9 can be implemented in any suitable way. For example, the shared input queue may be implemented as a first in first out (FIFO) queue. Where the shared input queue 9 is implemented as a FIFO queue packets are received from the three servers A, B, C and are queued as they are received. Packets are then transmitted to the I/O device 5 a in the order in which they were received, regardless of the server from which they were received. The shared input queue 9 can be stored within appropriate storage provided by the switch 1, such as appropriate RAM.
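  • Purely for illustration, such a shared FIFO can be sketched as below (Python, with assumed names such as shared_input_queue); packets from different servers are interleaved in strict arrival order, which is why a reset cannot selectively remove one server's packets without the state parameters introduced in the embodiment described later:

        from collections import deque

        # Sketch of the baseline shared input queue: a single FIFO shared by all
        # ingress ports, held in RAM provided by the switch.
        shared_input_queue = deque()

        def receive(packet, ingress_port):
            """Queue the packet in arrival order, whichever port it came from."""
            shared_input_queue.append((ingress_port, packet))

        def forward_one(transmit):
            """Remove the packet at the head of the queue and send it onwards."""
            ingress_port, packet = shared_input_queue.popleft()
            transmit(packet)

        receive('A0', 2)     # from server A via port 2
        receive('B0', 3)     # from server B via port 3
        forward_one(print)   # packets leave strictly in the order they arrived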
  • It is known to use a plurality of switches in combination to extend the number of devices (i.e. I/O devices and servers) which can be connected together. In such arrangements, it may be advantageous to provide a shared queue at a port where two switches are interconnected. An example of a multiple switch configuration is shown in FIG. 3.
  • FIG. 3 shows the switch 1 shown in FIG. 2. Additionally, in the arrangement of FIG. 3 a switch 10 is connected to the switch 1 via a port 11. The switch 10 has a shared input queue 12. An I/O device 5 d is connected to the switch 10 through a port 13 provided by the switch 10. A further server D is also connected to the switch 10 via a port 14 a.
  • The servers A, B, C send data packets to the shared I/O device 5 d via both of the switches 1, 10. A data packet transmitted by, for example, the server A, is first received at the switch 1 through the port 2, and is queued in the shared input queue 9 until it can be sent on towards its destination. Upon leaving the shared input queue 9, the data packet is transmitted to the switch 10 through the port 11 of the switch 1 and the port 14 b of the switch 10. At the switch 10 the data packet is queued in the shared input queue 12 until it can be forwarded on to the intended destination, which is the I/O device 5 d.
  • Server D sends data packets to the I/O device 5 d via the switch 10. Data packets transmitted by the server D are received at the switch 10 through the port 14 a, and are queued in the shared input queue 12. As each of servers A, B, C and D may send data packets to the shared I/O endpoint 5 d, the shared input queue 12 contains data packets transmitted by each of the servers A, B, C and D. Each of the servers A, B, C is connected to the switch 1, and as such the shared input queue 9 provided by the switch 1 contains data packets transmitted by each of the servers A, B, C.
  • FIG. 3 shows one way in which the arrangement of FIG. 2 can be modified to allow a larger number of devices to be connected together. It will be readily apparent that other configurations are possible.
  • An embodiment of the invention is now described in further detail. The embodiment is described with reference to FIG. 4 which shows the general arrangement of FIG. 2 adapted to implement the invention. It will be appreciated that in alternative embodiments of the invention alternative arrangements may be employed including, for example, an arrangement based upon that shown in FIG. 3 and described above, as is discussed further below.
  • In the arrangement shown in FIG. 4 it can be seen that the switch 1 connects three servers A, B, C and the shared I/O device 5 a as described with reference to FIG. 2. The switch 1 again implements a shared input queue 9 which is arranged to store packets received from the servers A, B, C as they are received by the switch 1 and before they are transmitted onwards to the shared I/O device 5 a. The switch 1 stores a state parameter for each of the servers A, B, C connected to the switch 1. A state parameter 15 is associated with the server A, a state parameter 16 is associated with the server B and a state parameter 17 is associated with the server C. It will be appreciated that if further servers were connected to the switch 1 further state parameters would be stored. Indeed, the switch 1 stores as many state parameters as are necessary to cater for the number of data sources which may be connected to the switch 1. The state parameters 15, 16, 17 are stored in RAM provided by the switch 1.
  • As data packets are received from the servers A, B, C they are stored in the shared input queue 9 of switch 1. Each data packet is stored together with a value of the state parameter associated with the port through which the data packet was received, the value of the state parameter being determined as the data packet is received by the switch 1. For example, if a data packet is received from the server A through port 2, the value of the state parameter 15 associated with the port 2 (and therefore associated with the server A) when the data packet is received is stored alongside the received data packet in the shared input queue 9. That is, if the state parameter 15 has a value of ‘1’ when a particular data packet is received from the server A by the switch 1, the shared input queue 9 will store the received data packet together with the value ‘1’.
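  • As an illustrative sketch only (Python; the dictionary state_parameter and the tuple layout are assumptions, with ports 2, 3 and 4 and initial values chosen to mirror FIGS. 4 and 5B), the receive path stores each packet alongside the current value for its ingress port:

        from collections import deque

        # Current state parameter value per ingress port of switch 1; the initial
        # values 0, 1 and 2 mirror state parameters 15, 16 and 17 in FIG. 5B.
        state_parameter = {2: 0, 3: 1, 4: 2}

        shared_input_queue = deque()   # entries: (ingress port, packet, stored value)

        def receive(packet, ingress_port):
            """Tag the packet with the port's state parameter value at receipt."""
            value = state_parameter[ingress_port]
            shared_input_queue.append((ingress_port, packet, value))

        receive('A0', 2)   # stored together with the current value of state parameter 15
        receive('B0', 3)   # stored together with the current value of state parameter 16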
  • From the preceding description it can be seen that each received data packet is stored in the shared input queue 9 together with an appropriate value of an appropriate state parameter. When one of the servers A, B, C is reset, this is detected by the switch 1 and the corresponding state parameter 15, 16, 17 is updated in response to the reset.
  • A reset can be detected by the switch 1 in any convenient way. For example a signal may be received at the switch which is indicative of a reset of one of the servers. Such a signal may be provided in the form of a control data packet. Alternatively, the switch 1 may detect a failure of the link between the switch 1 and one of the servers. Regardless of how a reset is detected by the switch 1, in response to detection of a reset, the corresponding state parameter is updated.
  • In the described embodiment the update of the corresponding state parameter comprises incrementing the corresponding state parameter. For example, if state parameter 16 has a value of ‘1’ and the server B is reset, this is detected and the state parameter 16 is incremented such that it has the value of ‘2’.
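  • However the reset is detected, the update itself is only an increment of the affected port's state parameter; a minimal sketch (Python, continuing the assumed per-port mapping from the fragment above) is:

        # Assumed per-port state parameter values, as in the sketch above.
        state_parameter = {2: 0, 3: 1, 4: 2}

        def on_server_reset(ingress_port):
            """Increment the state parameter of the port whose server was reset."""
            state_parameter[ingress_port] += 1
            return state_parameter[ingress_port]

        assert on_server_reset(3) == 2   # server B reset: state parameter 16 goes from 1 to 2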
  • The processing of the shared input queue 9 is described in more detail with reference to FIGS. 5A and 5B and FIGS. 6A to 6E. In the described example, each of the servers A, B and C sends data packets to the I/O device 5 a. The data packets are sent via the switch 1 and are stored together with corresponding state parameters values in the shared input queue 9 in order of receipt.
  • FIG. 5A illustrates the timing relationship between the receipt of data packets at the switch 1. Each data packet is identified by reference to a server from which it was received and a counter indicating how many packets have been received from that server. That is, a data packet denoted A0 is the first data packet received from the server A.
  • FIG. 5B shows changes in the values of the state parameters 15, 16, 17. It can be seen that between times t0 and t1 the state parameter 15 has a value of ‘0’, the state parameter 16 has a value of ‘1’ and the state parameter 17 has a value of ‘2’. At time t1 the value of the state parameter 15 is changed from ‘0’ to ‘1’, while the values of the state parameters 16, 17 remain respectively ‘1’ and ‘2’. No further changes in state parameter values occur between times t1 and tx. From FIG. 5B it can be seen that the server A was reset at time t1 resulting in the state parameter 15 being incremented.
  • FIG. 6A shows the shared input queue 9 after receipt of all data packets shown in FIG. 5A. It can be seen that the data packet at the head of the shared input queue 9 is data packet B0 given that this was the first data packet received at the switch 1. The data packet B0 is stored in the shared input queue 9 together with the value of the state parameter 16 when the data packet B0 is received, i.e. a value of ‘1’. A next data packet stored in the shared input queue 9 is the data packet A0 received from the server A. The data packet A0 is stored together with a value ‘0’, which is the value of the state parameter 15 when the data packet A0 is received at the switch 1. The next data packet stored in the shared input queue 9 is the data packet C0 received from the server C. The data packet C0 is stored in the shared input queue 9 together with a value ‘2’, which is the value of the state parameter 17 when the packet C0 is received at the switch 1.
  • Next in the shared input queue 9 are stored two data packets received from the server B, the data packets B1 and B2, both of which are stored together with a value of ‘1’ which is the value of state parameter 16 when these packets are received at the switch 1. The data packet B2 is followed in the shared input queue 9 by a data packet A1 received from the server A and stored together with a value of ‘0’, that being the value of the state parameter 15 when the data packet A1 is received at the switch 1.
  • Between times t1 and t2 the server A is reset, resulting in the state parameter 15 being incremented as described above. Following this reset of the server A, the data packet A2 is received from the server A and is added to the shared input queue 9. Now, the data packet A2 is stored in the shared input queue 9 with a value of ‘1’ following update of the state parameter 15.
  • FIG. 6A also shows the values of the state parameters 15, 16, 17 after receipt of all packets shown in the shared input queue 9 in FIG. 6A. It is assumed, for the purposes of example only, that all data packets shown in FIG. 5A are received at the switch 1 and are added to the shared input queue 9 before any of the received packets leave the shared input queue 9. That is, following receipt of the data packets shown in FIG. 5A, the shared input queue 9 and the state parameters 15, 16, 17 have the state shown in FIG. 6A.
  • Processing of the shared input queue 9 is now described with reference to FIGS. 6A to 6E.
  • The shared input queue 9 is processed as a FIFO queue. That is, a data packet at the head of the queue is the data packet which is considered for transmission. To determine whether the data packet at the head of the shared input queue 9 should be transmitted, the port through which the data packet was received (and consequently a server from which the data packet was received) is determined from information contained within the header of the stored packet. The value of the state parameter stored alongside the processed data packet is then compared with the current value of the appropriate state parameter corresponding to the port on which the data packet was received. So, when the queue illustrated in FIG. 6A is processed, the data packet B0 is considered for transmission. Given that this data packet was received from the server B the relevant state parameter is the state parameter 16, which has a value of ‘1’ when the shared input queue shown in FIG. 6A is processed. It can be seen that the data packet B0 is stored alongside a value of ‘1’. Given that the value stored alongside the data packet B0 (‘1’) is equal to the current value of the relevant state parameter it can be determined that the server B has not been reset since receipt of the data packet B0, and as such the data packet B0 is transmitted.
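  • The head-of-queue test just described can be sketched as follows (illustrative Python only; process_head and the transmit and discard callbacks are assumed names):

        from collections import deque

        def process_head(queue, state_parameter, transmit, discard):
            """Compare the stored value with the port's current value and act on it."""
            ingress_port, packet, stored_value = queue.popleft()
            if stored_value == state_parameter[ingress_port]:
                transmit(packet)   # no reset since receipt: forward to the I/O device
            else:
                discard(packet)    # the source was reset after sending: drop the packet

        # Example matching the processing of data packet B0 at the head of FIG. 6A.
        queue = deque([(3, 'B0', 1)])
        state_parameter = {2: 1, 3: 1, 4: 2}
        process_head(queue, state_parameter, transmit=print, discard=lambda p: None)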
  • FIG. 6B shows the state of the shared input queue 9 after data packet B0 has been transmitted. Data packet A0 is now at the head of the shared input queue 9 and is therefore processed. The data packet A0 was received from server A. The state parameter value (‘0’) stored alongside the data packet A0 does not match the current value (‘1’) of the corresponding state parameter 15. This indicates that server A has been reset since sending data packet A0, and data packet A0 should therefore not be transmitted from the switch 1 to the I/O device 5a. The data packet A0 is therefore discarded without transmission.
  • FIG. 6C shows the state of the shared input queue 9 after the data packet A0 has been processed. The data packet at the head of the shared input queue 9 is now the data packet C0. The state parameter value stored alongside the data packet C0 (‘2’) is equal to the current value of state parameter 17, indicating that server C has not been reset since sending data packet C0. The data packet C0 is therefore transmitted from the switch 1 to the shared I/O device 5a. Data packets B1 and B2 are similarly processed. That is, the state parameter values stored alongside each of the data packets B1 and B2 match the current value of the state parameter 16, and so the data packets B1 and B2 are transmitted from the switch 1 to the I/O device 5a.
  • FIG. 6D shows the state of the shared input queue 9 after data packets C0, B1 and B2 have been transmitted as described with reference to FIG. 6C. The data packet at the head of the shared input queue 9 is the data packet A1. The state parameter value stored alongside the data packet A1 is ‘0’, which does not match the current value, ‘1’, of the relevant state parameter 15. This indicates that a reset of server A has occurred subsequent to the sending of data packet A1. The data packet A1 is therefore invalid and is discarded.
  • In FIG. 6E the data packet A2 is at the head of the shared input queue 9. The data packet A2 was sent by the server A after the server A was reset. The state parameter value (‘1’) stored alongside data packet A2 matches the current value of the relevant state parameter 15, indicating that server A has not been reset subsequent to sending data packet A2. Data packet A2 is therefore transmitted to the I/O device 5a.
  • FIG. 7 illustrates the comparisons performed during the processing described with reference to FIGS. 6A to 6E and the result of each comparison. It can be seen that the data packets A0 and A1 are discarded as the value of the state parameter stored alongside the data packets A0 and A1 does not match the current value of state parameter 15. All other data packets are transmitted.
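  • For the purposes of example only, the sequence of FIGS. 5A to 7 may be reproduced with the short, self-contained simulation below. The variable and function names are assumptions chosen for readability; running the sketch prints the same decisions as summarised in FIG. 7, with data packets A0 and A1 discarded and all other data packets transmitted.

    from collections import deque

    state_params = {"A": 0, "B": 1, "C": 2}   # values at time t1 in FIG. 5A
    queue = deque()

    def receive(source, packet):
        # Store the packet with a snapshot of the source's state parameter.
        queue.append((packet, source, state_params[source]))

    # Packets received in the order shown in FIG. 5A.
    for source, packet in [("B", "B0"), ("A", "A0"), ("C", "C0"),
                           ("B", "B1"), ("B", "B2"), ("A", "A1")]:
        receive(source, packet)

    state_params["A"] += 1    # server A is reset between times t1 and t2
    receive("A", "A2")        # A2 arrives after the reset

    while queue:
        packet, source, stored = queue.popleft()
        if stored == state_params[source]:
            print(packet, "transmitted")
        else:
            print(packet, "discarded")    # A0 and A1 take this branch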
  • It will be appreciated that although in this example the servers A, B, C are acting as data sources sending data packets to the shared I/O device 5a, data packet traffic may equally flow in the other direction, i.e. the shared I/O device 5a may send data packets to one or more of the servers A, B, C utilizing the same queue processing method. In such a case data packets may be stored in a queue together with a value of the state parameter associated with a server to which the data packets are to be transmitted. That is, the method described above may be applied using state parameters associated with destinations (rather than sources) of data packets. It will similarly be appreciated that the state parameters used in the queuing method may be associated with the I/O devices rather than with the servers.
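  • Again for illustration only, the reverse-direction case may be sketched by keying the stored snapshot on the packet's destination rather than its source; the names below are assumed and do not appear in the described embodiments.

    state_params = {"A": 0, "B": 0, "C": 0}   # one state parameter per destination server

    def enqueue_for_destination(queue, destination, packet):
        # Snapshot the state parameter of the server the packet is destined for.
        queue.append((packet, destination, state_params[destination]))

    queue = []
    enqueue_for_destination(queue, "A", "reply-0")
    state_params["A"] += 1                    # server A is reset before delivery
    packet, dest, stored = queue.pop(0)
    print("deliver" if stored == state_params[dest] else "discard")   # prints: discard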
  • Where an embodiment of the invention is based upon an arrangement similar to that of FIG. 3, each switch port stores one state parameter for each unique ingress flow passing through the port. That is, the port 14b of the switch 10 to which the switch 1 is connected has three state parameters, one for each of the servers A, B, C. Given that the shared queue 12 is shared by each of the servers A, B, C, D, four state parameters will be maintained in total, one for each of the servers.
  • The I/O device 5d has four returning data flows, one to each of the servers A, B, C, D. As such, four state parameters will need to be associated with the port 13 of the switch 10 to which the I/O device 5d is connected. The state parameters associated with the server A at port 13 (ingress, I/O to server) and port 14b (ingress, server to I/O) do not have to be kept synchronous, or even be initialized to a common value. What matters is the local value at each port, so that a reset at that port can be detected as described above.
  • Usually, a reset is initiated by the server and will propagate downwards to the I/O devices. However, the server can reset part of its hierarchy, for example from port 13 downwards, without resetting ports 14b, 11 or 2, so in this situation the state parameters at the various ports are updated differently. In this case only the relevant state parameter associated with the port 13 is updated.
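  • The independence of the state parameters held at different ports can be illustrated, under the same assumptions as the earlier sketches, as a simple two-level lookup: one table per switch port, with one state parameter per flow (here, per server). The table layout and the reset_at_port helper below are illustrative assumptions. A reset propagated only as far as port 13 updates the value held at port 13 and leaves the value held at port 14b untouched.

    # One table per port; within each table, one state parameter per server.
    port_counters = {
        "port_13":  {"A": 0, "B": 0, "C": 0, "D": 0},   # I/O-to-server flows
        "port_14b": {"A": 0, "B": 0, "C": 0},           # server-to-I/O flows
    }

    def reset_at_port(port, server):
        # Only the state parameter local to the port at which the reset is
        # seen changes; the counters need not be synchronised or share an
        # initial value.
        port_counters[port][server] += 1

    reset_at_port("port_13", "A")
    print(port_counters["port_13"]["A"])    # 1
    print(port_counters["port_14b"]["A"])   # still 0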
  • The state parameters may be implemented in any suitable way. For example, the state parameter may be implemented as a counter where the counter size is determined by the minimum reset frequency and the maximum latency for data packets within the shared queue. A 2-bit counter will provide a suitable range of values for many applications.
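  • Where the state parameter is realised as a small wrap-around counter, the update on each reset is simply an increment taken modulo the counter range. The sketch below is one possible illustration rather than a mandated implementation; it assumes a 2-bit counter, and the equality comparison remains valid provided a packet cannot remain queued across enough resets for the counter to wrap back to its stored value, which is why the counter size is chosen from the minimum reset frequency and the maximum latency of packets in the shared queue.

    COUNTER_BITS = 2
    COUNTER_RANGE = 1 << COUNTER_BITS      # a 2-bit counter holds values 0..3

    def increment(counter_value):
        # Wrap around within the counter range on each reset event.
        return (counter_value + 1) % COUNTER_RANGE

    value = 3
    value = increment(value)
    print(value)    # 0 -- wrapped around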
  • In the preceding description it has been described that data packets are stored in a queue, and particularly a FIFO queue. It will be appreciated that embodiments of the invention can use any suitable queue. Furthermore, it will be appreciated that embodiments of the invention are not restricted to the use of queues, but can instead use any appropriate storage buffer which can be implemented in any convenient way.
  • The preceding description has described embodiments of the invention where data packets are transmitted from a plurality of different servers. It will be appreciated that data packets may be transmitted from a plurality of different entities of any kind. For example, the entities may take the form of operating system instances operating on a single server. It will further be appreciated that the term server is intended broadly and is intended to cover any computing device.
  • While it is the case that embodiments of the present invention as described above have particular relevance to shared I/O applications, the method is generally applicable to any shared queuing arrangement in which devices may be reset and in which the resetting of devices should affect the processing of queued data packets. Indeed, shared queuing technology has widespread application in the areas of networking, data communication and other data processing systems for improving cost and efficiency.
  • Further modifications and applications of the present invention will be readily apparent to the appropriately skilled person from the teaching herein, without departing from the scope of the appended claims.

Claims (20)

What is claimed is:
1. An apparatus, comprising:
a plurality of output ports, each output port of the plurality of output ports having an associated state parameter counter, wherein the state parameter counter is configured to be indicative of an operational state of a device connected to that port; and
a shared queue configured to store packets received on a plurality of input ports, with an entry in the shared queue including both the packet received and a value of the state parameter counter associated with the output port to which the packet is to be directed at the time the packet was received;
wherein a state parameter counter associated with a port is updated based on detection of an event associated with the device connected to that port; and
wherein a packet at the head of the shared queue is processed based on a comparison of the state parameter value stored in the shared queue with the packet and a current value of the state parameter counter of the output port on which the packet is to be transmitted.
2. The apparatus of claim 1, wherein the packet at the head of the shared queue is forwarded to an output port of the plurality of output ports based on the state parameter value stored in the shared queue with the packet matching the current value of the state parameter counter of the output port on which the packet is to be transmitted.
3. The apparatus of claim 1, wherein the packet at the head of the shared queue is discarded based on the state parameter value stored in the shared queue with the packet not matching the current value of the state parameter counter of the output port on which the packet is to be transmitted.
4. The apparatus of claim 1, wherein the state parameter counter associated with a port is updated based on the event indicating the device connected to that port has been reset.
5. The apparatus of claim 4, wherein the state parameter counter associated with a port is incremented based on the event indicating the device connected to that port has been reset.
6. The apparatus of claim 1, wherein the state parameter counter associated with a port remains at a current value until a message is received from the device connected to that port.
7. The apparatus of claim 1, wherein each of the plurality of input ports has an associated state parameter counter that may also be updated based on a message received from a peripheral device connected to an input port of the plurality of input ports.
8. A method, comprising:
receiving a packet at an input port of a switch, the packet designating an output port;
storing the packet in a shared queue along with a value of a state parameter counter associated with the output port;
detecting an event at a device connected to the output port;
updating the state parameter counter associated with the output port based on the event; and
processing the packet based on the stored value of the state parameter counter.
9. The method of claim 8, wherein the event causes the state parameter counter to increment.
10. The method of claim 8, wherein processing the packet based on the stored value of the state parameter counter comprises:
comparing the stored value of the state parameter counter with a current value of the state parameter counter; and
based on the stored value of the state parameter counter equaling the current value of the state parameter counter, forwarding the packet to the output port.
11. The method of claim 8, wherein processing the packet based on the stored value of the state parameter counter comprises:
comparing the stored value of the state parameter counter with a current value of the state parameter counter; and
based on the stored value of the state parameter counter being unequal to the current value of the state parameter counter, discarding the packet.
12. The method of claim 8, further comprising:
receiving an additional packet at the input port of the switch, the additional packet designating a second output port;
storing the additional packet in the shared queue along with a value of a second state parameter counter associated with the second output port; and
processing the additional packet based on the stored value of the second state parameter counter.
13. The method of claim 8, wherein the detected event indicates the device connected to the output port has been reset.
14. An apparatus, comprising:
a plurality of input/output ports with each input/output port of the plurality of input/output ports having an associated state parameter counter;
a buffer to store packets received on the plurality of input/output ports and configured to:
store a value of the state parameter counter with a received packet, the value representing the value of the state parameter counter at the time the packet arrived; and
selectively forward a packet at the head of the buffer to a destination based on a comparison of the stored value and a current value of the state parameter counter; and
wherein the state parameter value may be changed in response to a message received from a device connected to an input/output port of the plurality of input/output ports.
15. The apparatus of claim 14, wherein the state parameter value is indicative of a state of operation of the device connected to that port.
16. The apparatus of claim 14, wherein the packet is selectively forwarded based on the stored value and the current value of the state parameter counter being equal.
17. The apparatus of claim 14, wherein the packet is discarded based on the stored value and the current value of the state parameter counter not being equal.
18. The apparatus of claim 17, wherein the current value of the state parameter counter is greater than the stored value.
19. The apparatus of claim 14, wherein the message indicates, at least, that the device connected to that port has been reset.
20. The apparatus of claim 14, wherein the state parameter counter associated with an input/output port of the plurality of input/output ports is incremented in response to receiving a reset message from the device connected to that input/output port of the plurality of input/output ports.
US14/165,373 2008-04-04 2014-01-27 Queue processing method Abandoned US20140185629A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/165,373 US20140185629A1 (en) 2008-04-04 2014-01-27 Queue processing method

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US4232108P 2008-04-04 2008-04-04
GB0806145.9 2008-04-04
GB0806145.9A GB2458952B (en) 2008-04-04 2008-04-04 Queue processing method
US12/410,704 US8644326B2 (en) 2008-04-04 2009-03-25 Queue processing method
US14/165,373 US20140185629A1 (en) 2008-04-04 2014-01-27 Queue processing method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/410,704 Continuation US8644326B2 (en) 2008-04-04 2009-03-25 Queue processing method

Publications (1)

Publication Number Publication Date
US20140185629A1 true US20140185629A1 (en) 2014-07-03

Family

ID=39433133

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/410,704 Active 2030-04-15 US8644326B2 (en) 2008-04-04 2009-03-25 Queue processing method
US14/165,373 Abandoned US20140185629A1 (en) 2008-04-04 2014-01-27 Queue processing method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/410,704 Active 2030-04-15 US8644326B2 (en) 2008-04-04 2009-03-25 Queue processing method

Country Status (3)

Country Link
US (2) US8644326B2 (en)
GB (1) GB2458952B (en)
WO (1) WO2009122122A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8346975B2 (en) * 2009-03-30 2013-01-01 International Business Machines Corporation Serialized access to an I/O adapter through atomic operation
US9170976B2 (en) * 2013-01-03 2015-10-27 International Business Machines Corporation Network efficiency and power savings
SG11201509253WA (en) 2013-05-31 2015-12-30 Nasdaq Technology Ab Apparatus, system, and method of elastically processing message information from multiple sources
US9817450B2 (en) * 2015-07-22 2017-11-14 Celestica Technology Consultancy (Shanghai) Co. Ltd Electronic apparatus
US20200059437A1 (en) * 2018-08-20 2020-02-20 Advanced Micro Devices, Inc. Link layer data packing and packet flow control scheme

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4945548A (en) * 1988-04-28 1990-07-31 Digital Equipment Corporation Method and apparatus for detecting impending overflow and/or underrun of elasticity buffer
US20010048660A1 (en) * 1999-01-15 2001-12-06 Saleh Ali N. Virtual path restoration scheme using fast dynamic mesh restoration in an optical network.
US6473432B1 (en) * 1997-07-14 2002-10-29 Fujitsu Limited Buffer control apparatus and method
US6493347B2 (en) * 1996-12-16 2002-12-10 Juniper Networks, Inc. Memory organization in a switching device
US20030185223A1 (en) * 2002-03-28 2003-10-02 Michael Tate Signaling methods for a telecommunication system and devices for implementing such methods
US20040202192A9 (en) * 1999-08-17 2004-10-14 Galbi Duane E. Method for automatic resource reservation and communication that facilitates using multiple processing events for a single processing task
US20050147117A1 (en) * 2003-01-21 2005-07-07 Nextio Inc. Apparatus and method for port polarity initialization in a shared I/O device
US7190667B2 (en) * 2001-04-26 2007-03-13 Intel Corporation Link level packet flow control mechanism
US7324428B1 (en) * 2001-10-19 2008-01-29 Legend Silicon Corporation Frame identifier

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5809522A (en) 1995-12-18 1998-09-15 Advanced Micro Devices, Inc. Microprocessor system with process identification tag entries to reduce cache flushing after a context switch
US6201807B1 (en) 1996-02-27 2001-03-13 Lucent Technologies Real-time hardware method and apparatus for reducing queue processing
US6678271B1 (en) * 1999-07-12 2004-01-13 Nortel Networks Limited High performance system and method having a local bus and a global bus
JP3687501B2 (en) * 2000-07-05 2005-08-24 日本電気株式会社 Transmission queue management system and management method for packet switch
US6967926B1 (en) * 2000-12-31 2005-11-22 Cisco Technology, Inc. Method and apparatus for using barrier phases to limit packet disorder in a packet switching system
WO2002084957A2 (en) * 2001-04-13 2002-10-24 Motorola, Inc., A Corporation Of The State Of Delaware Manipulating data streams in data stream processors
US7088731B2 (en) 2001-06-01 2006-08-08 Dune Networks Memory management for packet switching device
US7260104B2 (en) * 2001-12-19 2007-08-21 Computer Network Technology Corporation Deferred queuing in a buffered switch
US7447197B2 (en) * 2001-10-18 2008-11-04 Qlogic, Corporation System and method of providing network node services
US7145914B2 (en) * 2001-12-31 2006-12-05 Maxxan Systems, Incorporated System and method for controlling data paths of a network processor subsystem
JP3914771B2 (en) * 2002-01-09 2007-05-16 株式会社日立製作所 Packet communication apparatus and packet data transfer control method
US7483631B2 (en) * 2002-12-24 2009-01-27 Intel Corporation Method and apparatus of data and control scheduling in wavelength-division-multiplexed photonic burst-switched networks
US7324438B1 (en) * 2003-02-13 2008-01-29 Cisco Technology, Inc. Technique for nondisruptively recovering from a processor failure in a multi-processor flow device
US7660239B2 (en) * 2003-04-25 2010-02-09 Alcatel-Lucent Usa Inc. Network data re-routing
US7447224B2 (en) * 2003-07-21 2008-11-04 Qlogic, Corporation Method and system for routing fibre channel frames
US7596086B2 (en) * 2003-11-05 2009-09-29 Xiaolin Wang Method of and apparatus for variable length data packet transmission with configurable adaptive output scheduling enabling transmission on the same transmission link(s) of differentiated services for various traffic types
US20050147032A1 (en) * 2003-12-22 2005-07-07 Lyon Norman A. Apportionment of traffic management functions between devices in packet-based communication networks
US7453810B2 (en) * 2004-07-27 2008-11-18 Alcatel Lucent Method and apparatus for closed loop, out-of-band backpressure mechanism
US7443878B2 (en) * 2005-04-04 2008-10-28 Sun Microsystems, Inc. System for scaling by parallelizing network workload
US20070153683A1 (en) * 2005-12-30 2007-07-05 Mcalpine Gary L Traffic rate control in a network
US7539133B2 (en) * 2006-03-23 2009-05-26 Alcatel-Lucent Usa Inc. Method and apparatus for preventing congestion in load-balancing networks
US7742408B2 (en) * 2006-08-04 2010-06-22 Fujitsu Limited System and method for filtering packets in a switching environment

Also Published As

Publication number Publication date
GB0806145D0 (en) 2008-05-14
GB2458952A (en) 2009-10-07
US20090252167A1 (en) 2009-10-08
US8644326B2 (en) 2014-02-04
WO2009122122A1 (en) 2009-10-08
GB2458952B (en) 2012-06-13

Legal Events

Date Code Title Description
AS Assignment

Owner name: U.S. BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT, CALIFORNIA

Free format text: SECURITY INTEREST;ASSIGNOR:MICRON TECHNOLOGY, INC.;REEL/FRAME:038669/0001

Effective date: 20160426

AS Assignment

Owner name: MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT, MARYLAND

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:MICRON TECHNOLOGY, INC.;REEL/FRAME:038954/0001

Effective date: 20160426

AS Assignment

Owner name: U.S. BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT, CALIFORNIA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE REPLACE ERRONEOUSLY FILED PATENT #7358718 WITH THE CORRECT PATENT #7358178 PREVIOUSLY RECORDED ON REEL 038669 FRAME 0001. ASSIGNOR(S) HEREBY CONFIRMS THE SECURITY INTEREST;ASSIGNOR:MICRON TECHNOLOGY, INC.;REEL/FRAME:043079/0001

Effective date: 20160426

AS Assignment

Owner name: JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT, ILLINOIS

Free format text: SECURITY INTEREST;ASSIGNORS:MICRON TECHNOLOGY, INC.;MICRON SEMICONDUCTOR PRODUCTS, INC.;REEL/FRAME:047540/0001

Effective date: 20180703

AS Assignment

Owner name: MICRON TECHNOLOGY, INC., IDAHO

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:U.S. BANK NATIONAL ASSOCIATION, AS COLLATERAL AGENT;REEL/FRAME:047243/0001

Effective date: 20180629

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: MICRON TECHNOLOGY, INC., IDAHO

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:MORGAN STANLEY SENIOR FUNDING, INC., AS COLLATERAL AGENT;REEL/FRAME:050937/0001

Effective date: 20190731

AS Assignment

Owner name: MICRON TECHNOLOGY, INC., IDAHO

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT;REEL/FRAME:051028/0001

Effective date: 20190731

Owner name: MICRON SEMICONDUCTOR PRODUCTS, INC., IDAHO

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS COLLATERAL AGENT;REEL/FRAME:051028/0001

Effective date: 20190731