WO2002014998A2 - Method and apparatus for transferring data in a data processing system - Google Patents


Info

Publication number
WO2002014998A2
Authority
WO
WIPO (PCT)
Prior art keywords
data
packet
packets
command
receiving
Prior art date
Application number
PCT/US2001/024641
Other languages
French (fr)
Other versions
WO2002014998A9 (en)
WO2002014998A3 (en)
Inventor
Christophe Carret
Charles A. Milligan
Original Assignee
Storage Technology Corporation
Priority date
Filing date
Publication date
Application filed by Storage Technology Corporation filed Critical Storage Technology Corporation
Priority to AU2001284730A1
Publication of WO2002014998A2
Publication of WO2002014998A3
Publication of WO2002014998A9

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/90 Buffering arrangements
    • H04L49/901 Buffering arrangements using storage descriptor, e.g. read or write pointers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/19 Flow control; Congestion control at layers above the network layer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/90 Buffering arrangements

Definitions

  • the present invention relates generally to an improved data processing system and in particular to a method and apparatus for transferring data. Still more particularly, the present invention relates to a method and apparatus for transferring data to and from a storage device using data packets.
  • Data is often transferred from one application to another application or to a storage device. Data transfers also may involve transferring the data from one computer to another computer or remote device. This type of transfer is facilitated through the use of a protocol. For example, if the transfer of data is over a network, the protocol TCP/IP may be used. If the transfer of data is over a device channel, the protocol SCSI may be used. Also, one protocol may be embedded within another protocol, for example sending data via the SCSI protocol over networks using the TCP/IP protocol. Applications typically send large numbers of identical commands when data is being read or written. Applications usually send a small amount of data with each command, compared with the destination device or application capability. Data sizes of 32k bytes and 64k bytes are typical sizes for such transfers.
  • When a data packet is received, a protocol engine is employed to process the packet. Currently, the protocol engine will identify the command in the data packet and allocate a buffer to process the data in the data packet. This process is performed each time a data packet is received.
  • the present invention recognizes that with the large number of identical commands and the individual processing of each data packet, performance is degraded. The degradation is caused by having to process each of the data packets as potentially unrelated events and allocate resources for each data packet. Therefore, it would be advantageous to have an improved method and apparatus for transferring data in which performance degradation associated with data packet processing is avoided.
  • the present invention provides a method and apparatus for transferring data.
  • a plurality of packets is received, wherein each of the plurality of packets includes a command and data. Packets within the plurality of packets having identical commands are identified to form a set of selected packets.
  • a buffer is allocated to process the set of selected packets. Packets not having identical commands to those in the set of selected packets are allocated to other buffers for processing.
  • Figure 1 depicts a pictorial representation of a distributed data processing system in which the present invention may be implemented
  • FIG. 2 is a block diagram illustrating a data processing system in which the present invention may be implemented
  • Figure 3 is a block diagram illustrating components used to process packets in accordance with a preferred embodiment of the present invention
  • Figures 4 and 5 are diagrams illustrating read and write command protocol phases in accordance with a preferred embodiment of the present invention.
  • Figures 6 and 7 are diagrams illustrating data flow in read command processing and write command processing in accordance with a preferred embodiment of the present invention
  • Figure 8 is a flowchart of a process for grouping commands in data transfers in accordance with a preferred embodiment of the present invention.
  • Figure 9 is an illustration of a data transfer through protocol stacks in accordance with a preferred embodiment of the present invention.
  • Figure 10 is a diagram illustrating data structures used in decoding and receiving packets in accordance with a preferred embodiment of the present invention
  • Figure 11 is a flowchart of a process in an application layer for generating a packet set and sending data using the packet set in accordance with a preferred embodiment of the present invention
  • Figure 12 is a flowchart of a process in a physical layer used to generate a packet set in accordance with a preferred embodiment of the present invention
  • Figure 13 is a flowchart of a process in a physical layer for sending packets from a packet set across a data channel in accordance with a preferred embodiment of the present invention
  • Figure 14 is a flowchart of a process in a physical layer used to receive a packet in accordance with a preferred embodiment of the present invention
  • Figure 15 is a flowchart of a process for handling packets in an application layer in accordance with a preferred embodiment of the present invention.
  • Figure 16 is a flowchart for identifying buffer space in accordance with a preferred embodiment of the present invention.
  • FIG. 1 depicts a pictorial representation of a distributed data processing system in which the present invention may be implemented.
  • Distributed data processing system 100 is a network of computers in which the present invention may be implemented.
  • Distributed data processing system 100 contains a network 102, which is the medium used to provide communications links between various devices and computers connected together within distributed data processing system 100.
  • Network 102 may include permanent connections, such as wire or fiber optic cables, or temporary connections made through telephone connections.
  • a server 104 is connected to network 102 along with storage unit 106.
  • clients 108, 110, and 112 also are connected to network 102.
  • These clients 108, 110, and 112 may be, for example, personal computers or network computers.
  • a network computer is any computer, coupled to a network, which receives a program or other application from another computer coupled to the network.
  • Clients 108, 110, and 112 are clients to server 104.
  • Distributed data processing system 100 may include additional servers, clients, and other devices not shown.
  • distributed data processing system 100 is the Internet with network 102 representing a worldwide collection of networks and gateways that use the TCP/IP suite of protocols to communicate with one another.
  • distributed data processing system 100 also may be implemented using a number of different types of networks.
  • the different computers or devices may be connected using physical links.
  • the networks may be, for example, an intranet, a local area network (LAN), or a wide area network (WAN).
  • the links found in the network or those making up physical links may be, for example, fiber optic links, packet switched communication links, Enterprise Systems Connection (ESCON) fibers, SCSI cable links, wireless communication links, and the like.
  • Figure 1 is intended as an example, and not as an architectural limitation for the present invention.
  • the present invention provides a method, apparatus, and computer implemented instructions for transferring data from one device to another device.
  • This transfer of data may take place between two computers, such as server 104 and client 110.
  • the transfer may be between a computer and a storage device, such as client 112 and storage unit 106.
  • These transfers take place through network 102, which may be a traditional network or a direct connection between the two devices. Further, these transfers may take place between a host and a device located in the same data processing system, such as client 112.
  • the mechanism of the present invention involves identifying a new packet or set of packets containing commands identical to those received in previous packets or sets of packets, and processing those newly received packets without incurring the additional overhead or allocation of resources generally required for receiving such packets. It is possible that a command and related data may be received in a single packet. However, the general case is for these to be received in a series of packets (not necessarily contiguous). The text of this invention will refer to the single packet or to the series of related packets containing the command and data in the singular as 'the packet'. For example, when a read command and data are received in a packet by a target device, resources, such as buffer space and processing time to decode the command, are used to direct the data to the appropriate location in the target device.
  • the system then remembers that a read command has been processed and also remembers the location of the buffer containing a series of data spaces for later data buffering. If another packet containing a new read command and data is received by the target device, the system first checks to see if this is a 'remembered' command type. This is done by exclusive-ORing or masking the command with its appropriate parameters. A value of zero indicates an exact match, meaning the command is one that has been remembered. If it is, the resources allocated to processing such commands are used to process the data for this packet. The system will presume that the decode of the new command is predetermined and the data will go to the preassigned buffer. In this manner, additional processing time and resources are not required to process the new command.
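The exclusive-OR/mask check described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the opcode value, mask, and single-byte command layout are all assumptions made for the example:

```python
REMEMBERED_OPCODE = 0x28  # assumed example: opcode of a previously processed read command
OPCODE_MASK = 0xFF        # assumed mask selecting the bits that must match exactly

def is_remembered(opcode: int) -> bool:
    # XOR the incoming opcode against the remembered one, under the mask;
    # a zero result indicates an exact match, so the command can reuse the
    # resources already allocated for its predecessor.
    return ((opcode ^ REMEMBERED_OPCODE) & OPCODE_MASK) == 0
```

A matching command would then be routed to the preassigned buffer, while a nonzero result falls back to the full decode-and-allocate path.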
  • FIG. 2 is a block diagram illustrating a data processing system in which the present invention may be implemented.
  • Server 104 and clients 108-112 in Figure 1 may be implemented using data processing system 200.
  • Data processing system 200 employs a peripheral component interconnect (PCI) local bus architecture.
  • Processor 202 and main memory 204 are connected to PCI local bus 206 through PCI bridge 208.
  • PCI bridge 208 also may include an integrated memory controller and cache memory for processor 202.
  • Connections to PCI local bus 206 may be made through direct component interconnection or through add-in boards.
  • communications adapter 210, SCSI host bus adapter 212, and expansion bus interface 214 are connected to PCI local bus 206 by direct component connection.
  • audio adapter 216, graphics adapter 218, and audio/video adapter 219 are connected to PCI local bus 206 by add-in boards inserted into expansion slots.
  • Expansion bus interface 214 provides a connection for a keyboard and mouse adapter 220, modem 222, and additional memory 224.
  • Small computer system interface (SCSI) host bus adapter 212 provides a connection for hard disk drive 226, tape drive 228, and CD-ROM drive 230.
  • An operating system runs on processor 202 and is used to coordinate and provide control of various components within data processing system 200 in Figure 2.
  • the operating system may be a commercially available operating system, such as Windows NT, which is available from Microsoft Corporation. Instructions for the operating system, the object-oriented operating system, and applications or programs are located on storage devices, such as hard disk drive 226, and may be loaded into main memory 204 for execution by processor 202.
  • the mechanism of the present invention may be implemented as instructions executed by processor 202 to identify commands in packets as they arrive.
  • the mechanism of the present invention also may be implemented as part of the host adapters 210 or 212, in a form that may be either software or hardware. Further, the mechanism of the present invention may be implemented in a protocol stack in a protocol engine used to process packets.
  • the mechanism of the present invention also may be implemented in a manner that reduces the amount of decoding and processing within the protocol stack. Once a command type has been decoded, the parameters and resources used for processing that packet may be used as an example or template for another packet containing the same command type . As a result, the resources used by the protocol stack to process the next packet containing the same command type are reduced.
  • the hardware in Figure 2 may vary depending on the implementation. Other internal hardware or peripheral devices, such as flash ROM (or equivalent nonvolatile memory) or optical disk drives and the like, may be used in addition to or in place of the hardware depicted in Figure 2. Also, the processes of the present invention may be applied to a multiprocessor data processing system.
  • data processing system 200 also may be a notebook computer or hand held computer in addition to taking the form of a PDA.
  • data processing system 200 also may be a kiosk or a Web appliance.
  • accelerator 300 works with protocol engine 302 to process packets received from a communications channel at an adapter, such as communications adapter 210 or SCSI adapter 212 in Figure 2.
  • Processing overhead is reduced by recognizing and grouping a series of identical commands. Such a process avoids the need for the processor to be utilized to process each subsequent command after it is received.
  • processing would include command decode to determine the exact command operation code, interpreting each accompanying parameter, and doing a buffer allocation operation. Therefore, the latency time induced by the time the protocol engine 302 spends processing each command also is reduced.
  • a buffer, such as buffer 306, is allocated to hold the command and data from packet 304. Initially, several sets of data space are allocated and concatenated together in a series to form buffer 306. The processing required for allocating a serial set is much more efficient than that required for allocating each amount of space needed individually.
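The serial allocation described above might be sketched as reserving a series of fixed-size data spaces in one operation. The space size and count below are illustrative assumptions, not values from the patent:

```python
DATA_SPACE_SIZE = 64 * 1024  # assumed size of one data space (64k bytes)
INITIAL_SPACES = 8           # assumed number of spaces concatenated up front

def allocate_buffer(spaces: int = INITIAL_SPACES) -> list[bytearray]:
    # One allocation call reserves a whole series of data spaces, which is
    # cheaper than allocating each space individually as commands arrive.
    return [bytearray(DATA_SPACE_SIZE) for _ in range(spaces)]
```

Each arriving command of the grouped type would then claim the next free data space instead of triggering its own allocation.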
  • the information in the packet 304 is passed to protocol engine 302, which decodes the information to transfer the data to the appropriate destination.
  • accelerator 300 identifies the command in the packet. Normally, another buffer, such as buffer 310, is allocated for processing packet 308. Instead, if the command is of the same type as one for a packet already being processed, the command and data in the packet are placed into the next data space of the same buffer, such as buffer 306. As the data for several commands are placed in buffer 306, only one copy of the original command is placed in this buffer, which is used as a prototype for comparison with subsequent commands; buffer 306 also contains the actual number of commands stored in the buffer in this example.
  • 'command' here means the operation code passed to the device and the associated parameters required for such an operation code.
  • If buffer 306 has been allocated for read commands of a specific format, all packets containing such read commands for the target are discovered via simple compare or mask logic and have their data placed into buffer 306. Further, the data does not have to be decoded by protocol engine 302. Once the destination has been identified and data is being transferred to the destination, additional data may be placed in the buffer and transferred to the destination without requiring additional resources and processing time from protocol engine 302. If packet 308 contains a different command type, a new buffer, such as buffer 310, is allocated to process packet 308. A new allocation of data space is required and must be added to the buffer if the command type is the same but the current buffer being used for that command type is full or unable to accept additional data for processing.
  • a buffer is selected to be used by several identical commands.
  • the identical commands are for a data transfer to or from a specific device.
  • As new commands are received, they are compared to the current command being processed.
  • a different command may result in the allocation of a new buffer, such as buffer 310.
  • the processes of the present invention are implemented in accelerator 300. These processes, however, also may be implemented in other locations, such as in protocol engine 302.
  • In Figures 4 and 5, diagrams illustrating read and write command protocol phases are depicted in accordance with a preferred embodiment of the present invention. As these phases can be interpreted as logical steps in the processing of a command, some of them may not be included in all the protocols that may be used in the implementation of the present invention. For instance, the SCSI protocol includes all of these phases, while the ESCON protocol and the TCP/IP protocol do not implement the device ready phase 506, which represents a flow control phase in the processing of the commands. Also, many applications may send the data with the command when they have to transfer data.
  • phases for read commands between host 400 and device 402 are illustrated. Host 400 and device 402 may be in the same computer or located on different computers.
  • read commands involve a command phase 404, a data phase 406, and a status phase 408.
  • Read commands are sent to device 402 from host 400 during command phase 404.
  • Data is returned from device 402 to host 400 during data phase 406.
  • a status phase occurs in which the status of the command is returned during status phase 408.
  • the optimization occurs between command phase 404 and data phase 406. The optimizations in read operations allow the data for several read commands to be acquired in one operation at the side of device 402, and allow device 402 to send the data related to each of several subsequent read commands to host 400 without additional command processing or buffer allocation overhead.
  • write commands are sent from host 500 to device 502.
  • Write commands involve a command phase 504, a device ready phase 506, a data phase 508, and a status phase 510.
  • the phases involved in write operations are similar to those described above for read operations.
  • Device ready phase 506 is an additional phase used to indicate that the device is ready or available for data transfer.
  • the optimizations provided by the present invention occur between command phase 504 and device ready phase 506. Further, optimizations occur between data phase 508 and status phase 510.
  • the first optimization comes from the fact that, after a first write command has been received from host 500 by device 502, a buffer able to store the data for several of these commands has been allocated by device 502, and no additional processing is required before device 502 accepts the command and notifies host 500 by the way of device ready phase 506.
  • the second optimization comes from the fact that device 502 does not try to move the data received from previous write commands before a full buffer has been filled. Instead, device 502 returns a continuation status as soon as the last message of data has been received and this allows host 500 to issue a new command as soon as it can.
  • read operations involve an adapter 600, an accelerator 602, and a protocol engine 604.
  • Adapter 600 is the hardware used to receive and send data. Commands are received by accelerator 602 from a requestor through adapter 600. Memory allocation occurs to allocate a buffer for transferring data. The buffer allocated is large enough to hold data for several read commands. Read commands are sent to protocol engine 604 with data being read from the media. If additional read commands are received, the data for these read commands also are placed in the buffer. When the buffer is filled, the data is returned to accelerator 602 and a decision is made whether or not to add data space to the buffer.
  • the data is transferred to adapter 600 for transfer to the requestor asynchronously to the transfer of data between the accelerator 602 and the protocol engine 604. These additional transfers of data occur without requiring additional overhead for setting up buffers and without spending the time to decode and process the parameters for each read command received.
  • adapter 700, accelerator 702, and protocol engine 704 are components involved in write operations to a device.
  • a write operation involves allocating memory, such as buffer space to receive data.
  • data is received from a host or requestor of the write operation with the device being the target of the data.
  • the buffer is allocated such that data for multiple write commands may be stored in the buffer.
  • As additional write commands are received, the data for these commands is stored in the buffer.
  • When the buffer is full, the data is then written to the device through protocol engine 704.
  • additional write operations may occur without requiring the overhead involved in setting up additional buffers and without spending the time to decode and process the parameters.
  • the data is written to the device or sent to the requestor after the buffer has been filled.
  • data may be transferred continuously from the buffer to the device.
  • FIG. 8 is a flowchart of a process for grouping commands in data transfers in accordance with a preferred embodiment of the present invention.
  • the processes illustrated in Figure 8 are implemented in an accelerator in these examples.
  • the process begins by receiving a command and data (step 800). Only information about the data, such as its length, may be received at this point, since the application can assist in delivering data to the protocol engine without an additional copy.
  • If a command or list of commands is currently being processed (step 802), the received command is compared to the current command or, in the case of a list, to each command in the list (step 804).
  • the order of the list can be varied (e.g., most recently used, most frequently used, etc.) as the processing continues, such that the most probable match is found early in the compare process.
  • a determination is then made as to whether the commands are the same or identical (step 806). This determination involves identifying whether the commands are of the same type. For example, the determination may be whether both commands are read commands. Further, the determination also may involve identifying whether the sources of the commands are the same. The grouping of commands, in these examples, may be performed by the source application sending the command.
  • If the commands are identical, a determination is then made as to whether buffer space is available in the buffer allocated for these commands (step 808). If buffer space is available, the command and the data are placed in the buffer (step 810). A determination is then made as to whether the buffer is full (step 812). If the buffer is full, the data is then transferred to the protocol engine (step 814), with the process terminating thereafter. If continuous data transfer is used rather than buffer-full transfer, the decision at step 808 when there is insufficient buffer space available will branch to a function that makes a further decision about allocating more data space to the buffer. If more space is allocated, the logic will return and re-ask the question at step 808.
  • If more space is not allocated, then the logic will flow on to step 816 as shown in Figure 8. With reference again to step 812, if the buffer is not full, the process terminates. With reference back to step 808, if buffer space is unavailable, a new buffer is allocated for this command type (step 816). The process then proceeds to step 810. Returning to step 806, if the command is not the same command, the process also proceeds to step 816 as described above. The process also proceeds to step 816 from step 802 if a command is not currently being processed.
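The steps of Figure 8 might be sketched as follows, with a dictionary grouping commands by type. The string command representation, the buffer capacity, and the transfer hook are illustrative assumptions, not details from the patent:

```python
from collections import defaultdict

BUFFER_CAPACITY = 4            # assumed number of commands one buffer can hold

buffers = defaultdict(list)    # command type -> buffer of (command, data) entries
transferred = []               # stand-in for handing full buffers to the protocol engine

def receive(command: str, data: bytes) -> None:
    # Steps 802-806: group the command with others of the identical type
    # (a missing type plays the role of step 816, allocating a new buffer).
    buf = buffers[command]
    # Step 810: place the command and data in the buffer for that type.
    buf.append((command, data))
    # Step 812: if the buffer is full, transfer it to the protocol engine (step 814).
    if len(buf) == BUFFER_CAPACITY:
        transferred.append(list(buf))
        buf.clear()            # a fresh buffer serves the next command of this type
```

Identical commands thus share one buffer and one transfer, rather than each triggering its own decode and allocation.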
  • the commands also may be grouped at the protocol engine associated with the application when data transfers occur using packets transferred across a network or other communications channel .
  • an application located in one computer may request data transfer from a storage device using a network communications protocol, such as TCP/IP.
  • protocol stack 900 includes an application layer 902, a presentation layer 904, a transport layer 906, a network layer 908, a data link layer 910, and a physical layer 912.
  • protocol stack 900 is located in a client with an application that performs data transfers.
  • Protocol stack 914 includes an application layer 916, a presentation layer 918, a transport layer 920, a network layer 922, a data link layer 924, and a physical layer 926.
  • protocol stack 914 is located in a system containing the storage device or application that is involved in the data transfer.
  • Protocol stack 900 and protocol stack 914 may be found in a protocol engine, such as protocol engine 302 in Figure 3.
  • application layer 902 sends data directly to physical layer 912 for transfer to protocol stack 914 across a communications channel.
  • Pseudo block 928 is a packet generated by the application in application layer 902, which will be transformed into the appropriate format for transfer over a communications channel . This transformation typically includes placing the data from pseudo block 928 into a number of packets, as well as generating the header information needed to send the packets to the target.
  • Pseudo block 928 includes a flag 930 and data 932.
  • Data 932 is dummy data, which is processed by the different layers in protocol stack 900. This processing is used to encode the data in the pseudo block into the appropriate format and packets for transfer over a communications channel.
  • a packet set is generated by physical layer 912.
  • Physical layer 912 is configured to return the packet set to application layer 902.
  • the application in application layer 902 that is to receive the packet set may be identified by flag 930.
  • packet set 934 is returned to application layer 902 in buffer space 936.
  • Application layer 902 will replicate or make copies of packet set 934, such as packet sets 938 and 940.
  • Packet 942 is an example of a packet found in packet sets 934, 938, and 940.
  • Packet 942 includes a header 944, which was generated by physical layer 912 to transport packet 942 to the target. Additionally, packet 942 includes a flag 946 and data 948, which forms a payload section for packet 942.
  • flag 946 may be located in header 944. Flag 946 is used to indicate that packets are preprocessed and ready for transfer across the communications channel. Flag 946 also may be unique to a particular transfer by a particular application, such that all packets containing the flag can be associated with a particular application.
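The preformatted packet and the replication of packet sets described above might be modeled as follows. The field layout, flag value, and set size are illustrative assumptions, not the patent's actual format:

```python
import copy
from dataclasses import dataclass

@dataclass
class Packet:
    header: bytes  # transport header generated by the physical layer
    flag: int      # marks the packet as preprocessed and ties it to one transfer
    data: bytes    # payload area, filled in later with command and customer data

def make_packet_set(flag: int, count: int) -> list[Packet]:
    # The physical layer returns a set of preformatted, empty packets
    # to the application layer.
    return [Packet(header=bytes(8), flag=flag, data=b"") for _ in range(count)]

template = make_packet_set(flag=0x46, count=3)
# The application layer replicates the set, then fills the payload areas.
replica = copy.deepcopy(template)
replica[0].data = b"read command + customer data"
```

Because the headers are already built, each replica can be handed straight back to the physical layer for transmission.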
  • Application layer 902 will place data into packets in the packet set.
  • this data includes a command and the data that is to be processed in response to the command.
  • the data that is to be processed or transferred to another application or device is referred to as "customer data".
  • the command and the customer data are placed into the data or payload areas in the packets for a packet set, such as packet set 934.
  • Packet 942 is an illustration of a packet transferred by physical layer 912.
  • physical layer 926 will decode a packet and send the packet up through protocol stack 914 to application layer 916 if a flag is absent from the packet.
  • Application layer 916 may then process the data or place it in storage. If a flag is present, the packet may be sent directly to application layer 916 for processing.
  • Application layer 916 will take packets with flags and recreate packet sets to extract data. When an entire packet set has been recreated, the block of data sent from the source may be extracted and processed according to the command associated with the packet set. Alternatively, the data may be extracted as packets in a packet set are received by application layer 916.
  • the packet is decoded and the information to decode the packet is stored to build an inventory of preprocessed decode examples.
  • decode examples may include, for example, the parameters, registers, variables, and buffers required to decode the data into a form used by an application in application layer 916. These examples may be built on a per packet or per packet set basis.
  • the flag may be used to identify the appropriate decode example for use in processing the packet. In this manner, the overhead required to decode and process the packet is reduced.
  • the decode process may be performed vertically on a subset of the layers in stacks 900 and 914.
  • physical layer 912 may group commands and data for sending after application layer 902 has sent a local set-up message defining a packet set type.
  • Physical layer 926 may group received commands after application layer 916 has sent a local set-up message describing a packet set type.
  • Physical layers 912 or 926, application layers 902 or 916, or any intermediate layer may determine packet set boundaries based on statistical elements and without any external intervention. The processes described with respect to the physical layers in Figure 9 may be implemented as part of the application or by another program in the application layer.
  • FIG. 10 is a diagram illustrating data structures used in decoding and receiving packets in accordance with a preferred embodiment of the present invention.
  • a comparator stack 1000, an example decode matrix 1002, and an example data structure 1004 are used to process packets received by a physical layer, such as physical layer 926 in Figure 9.
  • a flag within the packet or other identification information is used to identify the packet type. More specifically, the packet type, in these examples, is associated with a command or other instructions used to perform an operation on the data in the packet. This identification information is compared to identification information stored in comparator stack 1000.
  • Each of these packet types are categories by the type of command or operation that is to be carried out on the data in a packet. In these examples, the packet types are "A", "B", and "C" .
  • the packet identification information is placed into comparator stack 1000, and the packet is decoded.
  • the data structures, the parameters, the variables, as well as any other information or setting required to decode and place the data into a format for use by an application in the application layer is stored as a data structure, such as example data structure 1004.
  • this data structure contains command information 1008, parameter information 1010, and data 1012. All of this information is used to place data from a packet into a format for use by the application.
  • Example data structure 1004 in these examples, is replicated a number of different times. Pointers to these data structures are placed in example decode matrix 1002. In this example, pointer 1014 points to example decode data structure 1004.
  • a pointer from example decode matrix 1002 to an example decode data structure for that packet type is used to select a data structure to process the packet. In this manner, the resources and time used in decoding a packet may be reduced. This mechanism may be applied to entire packet sets in addition to individual packets. In this example, a packet set corresponds to a block of data handled by an application in the application layer.
  • a larger data structure may be used for a number of packet or packet sets.
  • This larger allocation of memory or buffer space may be selected to be large enough to handle predicted numbers of packets or packet sets. Additionally, the memory allocation or buffer space may be dynamically varied to take into account increasing or decreasing needs in processing data.
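The comparator-stack and decode-matrix lookup described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the names (`ExampleDecode`, `decode_matrix`, `process_packet`) are hypothetical, and a Python dictionary stands in for the matrix of pointers 1002.

```python
from dataclasses import dataclass, field

@dataclass
class ExampleDecode:
    """Mirrors example data structure 1004: command information 1008,
    parameter information 1010, and a place for data 1012."""
    command: str
    parameters: dict
    data: list = field(default_factory=list)

# Example decode matrix 1002: packet type -> reference ("pointer") to a
# preprocessed example decode structure. Types and parameters are illustrative.
decode_matrix = {
    "A": ExampleDecode("read", {"block_size": 512}),
    "B": ExampleDecode("write", {"block_size": 512}),
    "C": ExampleDecode("verify", {"block_size": 512}),
}

# Comparator stack 1000: identification information for known packet types.
comparator_stack = ["A", "B", "C"]

def process_packet(packet_type, payload):
    """Compare a packet's identification against the comparator stack and,
    on a match, reuse the example decode instead of a full decode."""
    if packet_type in comparator_stack:
        example = decode_matrix[packet_type]  # follow the pointer (e.g., 1014)
        example.data.append(payload)          # place data without re-decoding
        return example
    return None  # unknown type: the full decode path would run here
```

On a match, the data is placed using an already-built structure, which is the resource saving the passage describes.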
  • With reference now to Figures 11-16, flowcharts illustrating processes used to group commands in packets are depicted in accordance with a preferred embodiment of the present invention.
  • The processes illustrated in Figures 11-13 are those used to send packets, while the processes depicted in Figures 14-16 are those used to receive packets.
  • The processes, in these examples, are implemented in a protocol stack, such as protocol stack 900 or protocol stack 914 in Figure 9. Of course, the processes may be located elsewhere depending on the implementation.
  • With reference now to Figure 11, a flowchart of a process in an application layer for generating a packet set and sending data using the packet set is depicted in accordance with a preferred embodiment of the present invention. This process may be implemented in an application layer, such as application layer 902 in Figure 9.
  • The process begins by generating a pseudo block (step 1100).
  • Pseudo block 928 in Figure 9 is an example of a pseudo block that is generated in step 1100.
  • This pseudo block includes a flag used to identify the application that is transferring data. Further, this flag is used by other layers, such as the physical layer, as an indication to return a set of packets to the application layer.
  • The pseudo block, in these examples, takes the form of a packet generated by an application, which is typically placed into smaller sized packets for transport across a communications channel.
  • The pseudo block is then passed to the next layer (step 1102).
  • The next layer in an OSI model is a presentation layer.
  • The process then waits to receive a packet set (step 1104).
  • The packet set is a set of data structures, which are ready for transport over the communications channel by the physical layer.
  • The data to actually be transported is placed within the appropriate places in these packet sets. These places are typically the payload portions of the packet.
  • The packet set is placed repetitively in a buffer space (step 1106). This replication of the packet set allows for multiple blocks of data to be filled by the application layer and passed to the physical layer for transfer.
  • The data space of a packet in a packet set is filled, and the packet is sent to the physical layer (step 1108).
  • This data space in the packet is also referred to as the "payload".
  • The data space is filled with the customer data and the command for the operation to be performed on the customer data. Further, a flag is placed in the payload if a flag is not already present elsewhere in the packet.
  • With reference now to Figure 12, a flowchart of a process in a physical layer used to generate a packet set is depicted in accordance with a preferred embodiment of the present invention.
  • The processes illustrated in Figure 12 may be implemented in a physical layer, such as physical layer 912 in Figure 9.
  • The process begins by receiving a packet from the previous layer (step 1200). With an OSI model, this layer would be a data link layer. A determination is made as to whether the packet includes a flag (step 1202). If the packet does not include a flag, the packet is sent using normal processing within the physical layer (step 1204), with the process terminating thereafter. With reference again to step 1202, if a flag is present, the packet is broken into a set of physical packets for transfer on a communications channel or link (step 1206). This set of packets is sent back to the application associated with the flag (step 1208), with the process terminating thereafter. Further, these packets include a flag to identify the packets as being part of the same set of packets or to identify the set of packets as being part of a data transfer for the application.
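The flag test and packet-set construction of Figure 12 might be sketched as follows. The function names, the fixed physical payload size, and the use of plain dictionaries as packets are assumptions for illustration; the patent does not prescribe a packet representation.

```python
PHYS_PAYLOAD = 1024  # assumed maximum payload of one physical packet

def send_normally(packet):
    """Stand-in for the normal physical-layer path (step 1204)."""
    return [packet]

def build_packet_set(packet):
    """Sketch of Figure 12: a flagged packet is broken into a set of
    physical packets (step 1206) that is returned to the application
    (step 1208); unflagged packets are sent normally (step 1204)."""
    if not packet.get("flag"):
        return send_normally(packet)
    payload = packet["payload"]
    packet_set = []
    for i in range(0, len(payload), PHYS_PAYLOAD):
        packet_set.append({
            "flag": packet["flag"],  # marks members of the same set
            "payload": payload[i:i + PHYS_PAYLOAD],
        })
    return packet_set
```

The application layer can then fill the returned set repetitively, as step 1106 of Figure 11 describes.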
  • With reference now to Figure 13, a flowchart of a process in a physical layer for sending packets from a packet set across a data channel is depicted in accordance with a preferred embodiment of the present invention.
  • The process in Figure 13 is implemented in a physical layer in these examples.
  • The process begins by receiving a packet from the application layer (step 1300).
  • A determination is made as to whether a flag is present in the packet (step 1302). If a flag is present, the packet is sent onto the communications channel for transfer to the target (step 1304), with the process terminating thereafter.
  • If a flag is absent in the packet, an error message is generated for return to the application (step 1306), with the process terminating thereafter.
  • Each packet received directly from the application layer should include a flag in these examples. If a flag is absent, then some error in processing in the application layer is assumed.
  • With reference now to Figure 14, a flowchart of a process in a physical layer used to receive a packet is depicted in accordance with a preferred embodiment of the present invention.
  • The process in Figure 14 may be implemented in a physical layer, such as physical layer 926 in Figure 9.
  • The process begins by receiving a packet from the physical media (step 1400).
  • This physical media is a communications channel in this example.
  • A determination is made as to whether a flag is present in the packet (step 1402). If a flag is present, the packet is sent directly to the application (step 1404), with the process terminating thereafter. If a flag is absent in the packet, then the packet is sent to the next layer above (step 1406), with the process terminating thereafter.
  • The next layer is a data link layer if an OSI model is used.
  • The flag indicates that the mechanism of the present invention is being used to process these packets. If the physical layer does not recognize the flag or is not configured to use the mechanism of the present invention, the flag is ignored and the packet is sent to the next layer.
  • With reference now to Figure 15, a flowchart of a process for handling packets in an application layer is depicted in accordance with a preferred embodiment of the present invention.
  • The process illustrated in Figure 15 may be implemented in an application layer, such as application layer 916 in Figure 9, when an OSI model is used.
  • The process begins by receiving a packet from the physical layer (step 1500).
  • A determination is made as to whether this packet is a first packet of a set of packets (step 1502). If the packet is a first packet in a set of packets, the packet is associated with a set (step 1504). This step is used to begin a new set in which the packet will be placed.
  • A determination is made as to whether this packet is the first time a packet has been received from an entity originating the packet (step 1506). The entity is identified by a device address or file address in these examples. If this packet is the first time a packet has been received from the entity, the process begins building the set and extracting data (step 1508).
  • A decode is performed against the data (step 1510).
  • This step involves performing the necessary actions to place the data into a form for processing for a command or for use by an application.
  • Information is placed in a comparator stack (step 1512). This information may be for the data or a packet set.
  • An inventory of preprocessed examples of decode is created (step 1514), with the process terminating thereafter.
  • These preprocessed examples of decode are also referred to as example decodes. The examples include the information necessary to decode or place the data in a form for use by an application, which is a target of the data transfer.
  • A decode example, such as example decode data structure 1004 in Figure 10, includes information, such as, for example, registers in which data is to be placed and pointers to allocated buffer space.
  • a decode example is a template used to process data for a particular command. With a decode example, processing of data for the command does not require identifying where the data should be placed or what buffer space should be used. In this manner, the mechanism of the present invention reduces the resources and processing time needed to handle data transfers or other data operations.
  • With reference again to step 1506, if the packet is not the first packet for a particular entity, the packet is added to a set and data extraction continues (step 1516). Thereafter, the data or the set is compared against information in the comparator stack (step 1518). A determination is made as to whether a match is present between the data or set and the information in the comparator stack (step 1520). If a match is not present, the process returns to step 1510 as described above. When a match is present, an example decode associated with the match is obtained (step 1522).
  • The example decode may be obtained from a data structure, such as example decode matrix 1002 in Figure 10. This matrix is a matrix of pointers to different example decode structures, which may be used to process the data for a particular type of command. The data is then placed using the example decode (step 1524), with the process terminating thereafter.
  • With reference again to step 1502, if the packet is not a first packet in a set of packets, the process then proceeds to step 1508 as described above.
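The first-sighting versus repeat branches of Figure 15 can be sketched as below. The `handle_packet` name, the `(entity, command)` key, and the stand-in `decode` function are assumptions chosen only to show the flow: a full decode the first time (steps 1510-1514), example reuse on a comparator-stack match afterwards (steps 1518-1524).

```python
comparator_stack = {}  # (entity, command) -> example decode built earlier

def decode(data):
    """Stand-in for the full decode of step 1510."""
    return {"placed": [data]}

def handle_packet(entity, command, data):
    """Sketch of Figure 15: decode fully on first sight of a command
    from an entity; reuse the example decode on later packets."""
    key = (entity, command)
    if key in comparator_stack:          # match found at step 1520
        example = comparator_stack[key]  # step 1522: obtain example decode
        example["placed"].append(data)   # step 1524: place data using it
        return example, "reused"
    example = decode(data)               # step 1510: full decode
    comparator_stack[key] = example      # steps 1512-1514: build inventory
    return example, "decoded"
```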
  • Buffer space is allocated for these examples.
  • The amount of space that is needed may be determined in a number of ways.
  • With reference now to Figure 16, a flowchart for identifying buffer space is depicted in accordance with a preferred embodiment of the present invention.
  • The process illustrated in Figure 16 is used to identify and allocate buffer space for write operations.
  • The process begins by identifying a need for write buffer space (step 1600).
  • A determination is made as to whether the size for the write buffer space is provided (step 1602). If a size for the write buffer space is provided, this size is used in building examples (step 1604), with the process terminating thereafter.
  • If a size is not provided, a default size is selected (step 1606).
  • The history of the appropriateness of the default size is monitored (step 1608).
  • This step includes determining whether the default size provides the correct amount of space, too much space, or too little space for the examples.
  • The size of the write buffer space is adjusted based on the history (step 1610), with the process terminating thereafter.
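One way to sketch the sizing logic of Figure 16: use a provided size when given (step 1604), otherwise start from a default and adjust it from a history of how well past allocations fit (steps 1606-1610). The default value, the averaging adjustment policy, and the names are assumptions; the patent leaves the adjustment strategy open.

```python
DEFAULT_SIZE = 64 * 1024  # assumed default write-buffer size (bytes)

def choose_buffer_size(requested, history):
    """Sketch of Figure 16. `history` holds the space each past example
    actually needed; with no history, fall back to the default."""
    if requested is not None:
        return requested          # step 1604: provided size wins
    if not history:
        return DEFAULT_SIZE       # step 1606: default selected
    # Steps 1608-1610: one simple policy is to move halfway from the
    # default toward the observed average need, growing or shrinking it.
    observed = sum(history) / len(history)
    return int((DEFAULT_SIZE + observed) / 2)
```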
  • The examples illustrated above identify commands in packets on a per packet basis.
  • The processes of the present invention also may be applied to recognizing patterns of commands being received in successive packets. Further, packet processing also may be based on a number of strategies, such as, for example, first-in-first-out (FIFO), frequency of packet types, and ordered set list processing.
  • In one methodology, decode examples are set up for different sequences of command types in received packets. For example, a decode example may be set up for a command sequence of read, read, and write. Another decode example in this methodology may be set up for a command sequence of read, write, and verify. Different lengths may be selected for these sequences.
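Sequence-based matching of this kind might look like the following sketch. The `sequence_examples` table, the `match_sequence` helper, and the policy of matching each pattern against the tail of the recent command history are all illustrative assumptions.

```python
# Hypothetical decode examples keyed by command sequences of differing lengths,
# mirroring the read/read/write and read/write/verify examples in the text.
sequence_examples = {
    ("read", "read", "write"): "example-RRW",
    ("read", "write", "verify"): "example-RWV",
}

def match_sequence(recent_commands):
    """Return the decode example whose command pattern matches the most
    recent commands received in successive packets, if any."""
    for pattern, example in sequence_examples.items():
        if tuple(recent_commands[-len(pattern):]) == pattern:
            return example
    return None
```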
  • The present invention provides an improved method, apparatus, and computer implemented instructions for reducing command processing in data transfers. This advantage is provided through the different mechanisms for grouping a series of identical commands or identical command patterns.
  • the mechanism of the present invention reduces the number of buffer allocation operations by avoiding such an operation for every command.
  • the present invention reduces the latency time in use of resources in fully processing each individual command. In this way, bottlenecks or congestion occurring at the protocol engine in high bandwidth data transfers are reduced or eliminated.
  • the mechanism of the present invention may be applied to existing mechanisms, such as in an application layer and a physical layer in an OSI stack within a protocol engine.
  • The processes of the present invention may be applied to data transfers involving many types of readable and/or writable media devices, such as, for example, a floppy disk drive, a hard disk drive, a CD-ROM drive, a digital versatile disk (DVD) drive, and a magnetic tape drive. Further, this mechanism may be applied to data transfers between two applications in addition to data transfers to and from a storage device. Additionally, although the depicted examples illustrate the processes implemented in an OSI model, the processes of the present invention may be applied to other types of protocol models and may be located in other layers depending on the implementation.

Abstract

A method and apparatus for transferring data. A plurality of packets is received, wherein several of the plurality of packets include a command and data. Packets within the plurality of packets having identical commands are identified to form a set of selected packets. A buffer is allocated to process the set of selected packets. A selectable subset of the packets not having identical commands to those in the set of selected packets is allocated to other buffers for processing.

Description

METHOD AND APPARATUS FOR TRANSFERRING DATA IN A DATA PROCESSING SYSTEM
BACKGROUND OF THE INVENTION
1. Technical Field:
The present invention relates generally to an improved data processing system and in particular to a method and apparatus for transferring data. Still more particularly, the present invention relates to a method and apparatus for transferring data to and from a storage device using data packets.
2. Description of Related Art:
Data is often transferred from one application to another application or to a storage device. Data transfers also may involve transferring the data from one computer to another computer or remote device. This type of transfer is facilitated through the use of a protocol. For example, if the transfer of data is over a network, the protocol TCP/IP may be used. If the transfer of data is over a device channel, the protocol SCSI may be used. Also, one protocol may be executed embedded within another protocol, for example sending data via the SCSI protocol over networks using the TCP/IP protocol. Applications typically send large numbers of identical commands when data is being read or written. Applications usually send a small amount of data with each command, compared with the destination device or application capability. Data sizes of 32k bytes and 64k bytes are typical sizes for such transfers.
When a data packet is received, a protocol engine is employed to process the packet. Currently, the protocol engine will identify the command in the data packet and allocate a buffer to process the data in the data packet. This process is performed each time a data packet is received. The present invention recognizes that with the large number of identical commands and the individual processing of each data packet, performance is degraded. The degradation is caused by having to process each of the data packets as potentially unrelated events and allocate resources for each data packet. Therefore, it would be advantageous to have an improved method and apparatus for transferring data in which performance degradation associated with data packet processing is avoided.
SUMMARY OF THE INVENTION
The present invention provides a method and apparatus for transferring data. A plurality of packets is received, wherein each of the plurality of packets includes a command and data. Packets within the plurality of packets having identical commands are identified to form a set of selected packets. A buffer is allocated to process the set of selected packets. Packets not having identical commands to those in the set of selected packets are allocated to other buffers for processing.
BRIEF DESCRIPTION OF THE DRAWINGS
The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, further objectives and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:
Figure 1 depicts a pictorial representation of a distributed data processing system in which the present invention may be implemented;
Figure 2 is a block diagram illustrating a data processing system in which the present invention may be implemented;
Figure 3 is a block diagram illustrating components used to process packets in accordance with a preferred embodiment of the present invention;
Figures 4 and 5 are diagrams illustrating read and write command protocol phases in accordance with a preferred embodiment of the present invention;
Figures 6 and 7 are diagrams illustrating data flow in read command processing and write command processing in accordance with a preferred embodiment of the present invention;
Figure 8 is a flowchart of a process for grouping commands in data transfers in accordance with a preferred embodiment of the present invention;
Figure 9 is an illustration of a data transfer through protocol stacks in accordance with a preferred embodiment of the present invention;
Figure 10 is a diagram illustrating data structures used in decoding and receiving packets in accordance with a preferred embodiment of the present invention;
Figure 11 is a flowchart of a process in an application layer for generating a packet set and sending data using the packet set in accordance with a preferred embodiment of the present invention;
Figure 12 is a flowchart of a process in a physical layer used to generate a packet set in accordance with a preferred embodiment of the present invention;
Figure 13 is a flowchart of a process in a physical layer for sending packets from a packet set across a data channel in accordance with a preferred embodiment of the present invention;
Figure 14 is a flowchart of a process in a physical layer used to receive a packet in accordance with a preferred embodiment of the present invention;
Figure 15 is a flowchart of a process for handling packets in an application layer in accordance with a preferred embodiment of the present invention; and
Figure 16 is a flowchart for identifying buffer space in accordance with a preferred embodiment of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
With reference now to the figures, Figure 1 depicts a pictorial representation of a distributed data processing system in which the present invention may be implemented. Distributed data processing system 100 is a network of computers in which the present invention may be implemented. Distributed data processing system 100 contains a network 102, which is the medium used to provide communications links between various devices and computers connected together within distributed data processing system 100. Network 102 may include permanent connections, such as wire or fiber optic cables, or temporary connections made through telephone connections.
In the depicted example, a server 104 is connected to network 102 along with storage unit 106. In addition, clients 108, 110, and 112 also are connected to network 102. These clients 108, 110, and 112 may be, for example, personal computers or network computers. For purposes of this application, a network computer is any computer, coupled to a network, which receives a program or other application from another computer coupled to the network. Clients 108, 110, and 112 are clients to server 104. Distributed data processing system 100 may include additional servers, clients, and other devices not shown.
In the depicted example, distributed data processing system 100 is the Internet with network 102 representing a worldwide collection of networks and gateways that use the TCP/IP suite of protocols to communicate with one another. Of course, distributed data processing system 100 also may be implemented using a number of different types of networks. Alternatively, the different computers or devices may be connected using physical links. The networks may be, for example, an intranet, a local area network (LAN), or a wide area network (WAN). The links found in the network or those making up physical links may be, for example, fiber optic links, packet switched communication links, Enterprise Systems Connection (ESCON) fibers, SCSI cable links, wireless communication links, and the like. Figure 1 is intended as an example, and not as an architectural limitation for the present invention.
The present invention provides a method, apparatus, and computer implemented instructions for transferring data from one device to another device. This transfer of data may take place between two computers, such as server 104 and client 110. Alternatively, the transfer may be between a computer and a storage device, such as client 112 and storage unit 106. These transfers take place through network 102, which may be a traditional network or a direct connection between the two devices. Further, these transfers may take place between a host and a device located in the same data processing system, such as client 112.
The mechanism of the present invention involves identifying a new packet or set of packets containing commands identical to those received in previous packets or sets of packets and processing those newly received packets without incurring the additional overhead or allocation of resources generally required for receiving such packets. It is possible that a command and related data may be received in a single packet. However, the general case is for such to be received in a series of packets (not necessarily contiguous). The text of this invention will refer to the single packet or to the series of related packets containing the command and data in the singular as 'the packet'. For example, when a read command and data are received in a packet by a target device, resources, such as buffer space and processing time to decode the command, are used to direct the data to the appropriate location in the target device. The system then remembers that a read command has been processed and also remembers the location of the buffer containing a series of data spaces for later data buffering. If another packet containing a new read command and data is received by the target device, the system first checks to see if this is a 'remembered' command type. This is done via an exclusive OR or masking of the command with its appropriate parameters. A value of zero indicates an exact match, and the command is one that has been remembered. If it is, the resources allocated to processing such commands are used to process the data for this packet. The system will presume that the decode of the new command is predetermined and the data will go to the preassigned buffer. In this manner, additional processing time and resources are not required to be used to process the new command.
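The exclusive-OR/mask comparison can be illustrated as follows. The two-byte command layout and the mask values here are hypothetical, chosen only to show how an all-zero result under the mask signals a remembered command; real command descriptor blocks would be longer and protocol-specific.

```python
def is_remembered(command, remembered, mask):
    """XOR the incoming command bytes (opcode plus parameters) against a
    remembered command, masking out parameter bytes that may vary (such
    as a block address). A result of zero everywhere means a match."""
    if len(command) != len(remembered):
        return False
    return all((a ^ b) & m == 0 for a, b, m in zip(command, remembered, mask))
```

For example, with a mask that checks the opcode byte but ignores an address byte, two read commands to different addresses still match, so the preassigned buffer and predetermined decode can be reused.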
With reference now to Figure 2, a block diagram illustrating a data processing system in which the present invention may be implemented is depicted. Server 104 and clients 108-112 in Figure 1 may be implemented using data processing system 200. Data processing system 200 employs a peripheral component interconnect (PCI) local bus architecture. Although the depicted example employs a PCI bus, other bus architectures such as Accelerated Graphics Port (AGP) and Industry Standard Architecture (ISA) may be used. Processor 202 and main memory 204 are connected to PCI local bus 206 through PCI bridge 208. PCI bridge 208 also may include an integrated memory controller and cache memory for processor 202.
Additional connections to PCI local bus 206 may be made through direct component interconnection or through add-in boards. In the depicted example, communications adapter 210, SCSI host bus adapter 212, and expansion bus interface 214 are connected to PCI local bus 206 by direct component connection. In contrast, audio adapter 216, graphics adapter 218, and audio/video adapter 219 are connected to PCI local bus 206 by add-in boards inserted into expansion slots. Expansion bus interface 214 provides a connection for a keyboard and mouse adapter 220, modem 222, and additional memory 224. Small computer system interface (SCSI) host bus adapter 212 provides a connection for hard disk drive 226, tape drive 228, and CD-ROM drive 230.
An operating system runs on processor 202 and is used to coordinate and provide control of various components within data processing system 200 in Figure 2. The operating system may be a commercially available operating system, such as Windows NT, which is available from Microsoft Corporation. Instructions for the operating system, the object-oriented operating system, and applications or programs are located on storage devices, such as hard disk drive 226, and may be loaded into main memory 204 for execution by processor 202. The mechanism of the present invention may be implemented as instructions executed by processor 202 to identify commands in packets as they arrive. The mechanism of the present invention also may be implemented as part of the host adapters 210 or 212, in a form that may be either software or hardware. Further, the mechanism of the present invention may be implemented in a protocol stack in a protocol engine used to process packets. The mechanism of the present invention also may be implemented in a manner that reduces the amount of decoding and processing within the protocol stack. Once a command type has been decoded, the parameters and resources used for processing that packet may be used as an example or template for another packet containing the same command type. As a result, the resources used by the protocol stack to process the next packet containing the same command type are reduced. Those of ordinary skill in the art will appreciate that the hardware in Figure 2 may vary depending on the implementation. Other internal hardware or peripheral devices, such as flash ROM (or equivalent nonvolatile memory) or optical disk drives and the like, may be used in addition to or in place of the hardware depicted in Figure 2. Also, the processes of the present invention may be applied to a multiprocessor data processing system.
The depicted example in Figure 2 and above-described examples are not meant to imply architectural limitations. For example, data processing system 200 also may be a notebook computer or hand held computer in addition to taking the form of a PDA. Data processing system 200 also may be a kiosk or a Web appliance.
With reference next to Figure 3, a block diagram illustrating components used to process packets is depicted in accordance with a preferred embodiment of the present invention. In this example, accelerator 300 works with protocol engine 302 to process packets received from a communications channel at an adapter, such as communications adapter 210 or SCSI adapter 212 in Figure 2. Processing overhead is reduced by recognizing and grouping a series of identical commands. Such a process avoids the need for the processor to be utilized to process each subsequent command after it is received. Such processing would include command decode to determine the exact command operation code, interpreting each accompanying parameter, and doing a buffer allocation operation. Therefore, the latency time induced by the time the protocol engine 302 spends processing each command also is reduced.
When a packet, such as packet 304, is first received, normal processing occurs using protocol engine 302. A buffer, such as buffer 306, is allocated to hold the command and data from packet 304. Initially several sets of data space are allocated and concatenated together in a series to form the buffer 306. The processing required for allocating a serial set is much more efficient than that required for allocating each amount of space needed individually. The information in the packet 304 is passed to protocol engine 302, which decodes the information to transfer the data to the appropriate destination.
When a second packet, such as packet 308, is received, accelerator 300 identifies the command in the packet. Normally, another buffer, such as buffer 310, is allocated for processing packet 308. Instead, if the command is of the same type for a packet already being processed, the command and data in the packet are placed into the next data space of the same buffer, such as buffer 306. As the data for several commands are placed in buffer 306, only one copy of the original command is placed in this buffer, which is used as a prototype for comparison with subsequent commands, but buffer 306 also contains the actual number of commands stored in the buffer in this example. The term 'command' here means the operation code passed to the device and associated parameters required for such an operation code. Thus, if buffer 306 has been allocated for read commands of a specific format, all packets containing such read commands for the target are discovered via a simple compare or mask logic and have their data placed into buffer 306. Further, the data does not have to be decoded by protocol engine 302. Once the destination has been identified and data is being transferred to the destination, additional data may be placed in the buffer and transferred to the destination without requiring additional resources and processing time from protocol engine 302. If packet 308 contains a different command type, a new buffer, such as buffer 310, is allocated to process packet 308. A new allocation of data space is required and must be added to the buffer if the command type is the same, but the current buffer being used for that command type is full or unable to accept additional data for processing.
In this manner, optimization in transferring data is increased by associating specific commands. These are typically read or write operations. A buffer is selected to be used by several identical commands. In these examples, the identical commands are for a data transfer to or from a specific device. As new commands are received, they are compared to the current command being processed. As mentioned before, a different command may result in the allocation of a new buffer, such as buffer 310. In this example, the processes of the present invention are implemented in accelerator 300. These processes, however, also may be implemented in other locations, such as in protocol engine 302.
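The buffer-grouping behaviour described for accelerator 300 might be sketched as below. `CommandBuffer`, `place`, and the capacity limit are illustrative assumptions; the point is that a buffer holds one prototype command plus a series of data spaces, and data for identical commands goes into the same buffer while a new command type gets a new buffer.

```python
class CommandBuffer:
    """Sketch of buffer 306: one prototype command, a series of data
    spaces, and a count of the commands stored."""
    def __init__(self, prototype, capacity=8):
        self.prototype = prototype  # single copy of the original command
        self.data_spaces = []
        self.capacity = capacity

    def accepts(self, command):
        # Simple compare against the prototype; a full buffer forces a
        # new allocation even for the same command type.
        return command == self.prototype and len(self.data_spaces) < self.capacity

    def add(self, data):
        self.data_spaces.append(data)

    @property
    def count(self):
        return len(self.data_spaces)

buffers = []  # buffers 306, 310, ... as allocated over time

def place(command, data):
    """Place data into an existing buffer for an identical command, or
    allocate a new buffer (the Figure 3 behaviour)."""
    for buf in buffers:
        if buf.accepts(command):
            buf.add(data)
            return buf
    buf = CommandBuffer(command)  # different command type: new buffer
    buf.add(data)
    buffers.append(buf)
    return buf
```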
With reference now to Figures 4 and 5, diagrams illustrating read and write command protocol phases are depicted in accordance with a preferred embodiment of the present invention. As these phases can be interpreted as logical steps in the processing of a command, some of them may not be included in all the protocols that may be used in the implementation of the present invention. For instance, the SCSI protocol includes all of these phases, while the ESCON protocol and the TCP/IP protocol do not implement any device ready phase 506, which represents a flow control phase in the processing of the commands. Also, many applications may send the data with the command when they have to transfer data. In Figure 4, phases for read commands between host 400 and device 402 are illustrated. Host 400 and device 402 may be in the same computer or located on different computers. In this example, read commands involve a command phase 404, a data phase 406, and a status phase 408. Read commands are sent to device 402 from host 400 during command phase 404. Data is returned from device 402 to host 400 during data phase 406. The status of the command is returned during status phase 408. With read operations, the optimization occurs between command phase 404 and data phase 406. The optimization in read operations allows the data for several read commands to be acquired in one operation on the device 402 side and allows device 402 to send the data related to each of several subsequent read commands to host 400 without additional command processing or buffer allocation overhead.
In Figure 5, write commands are sent from host 500 to device 502. Write commands involve a command phase 504, a device ready phase 506, a data phase 508, and a status phase 510. The phases involved in write operations are similar to those described above for read operations. Device ready phase 506 is an additional phase used to indicate that the device is ready or available for data transfer. The optimizations provided by the present invention occur between command phase 504 and device ready phase 506. Further, optimizations occur between data phase 508 and status phase 510. The first optimization comes from the fact that, after a first write command has been received from host 500 by device 502, a buffer able to store the data for several of these commands has been allocated by device 502, and no additional processing is required before device 502 accepts the command and notifies host 500 by way of device ready phase 506. The second optimization comes from the fact that device 502 does not try to move the data received from previous write commands before the buffer has been completely filled. Instead, device 502 returns a continuation status as soon as the last message of data has been received, which allows host 500 to issue a new command as soon as it can.
With reference now to Figures 6 and 7, diagrams illustrating data flow in read command processing and write command processing are depicted in accordance with a preferred embodiment of the present invention. In Figure 6, read operations involve an adapter 600, an accelerator 602, and a protocol engine 604. Adapter 600 is the hardware used to receive and send data. Commands are received by accelerator 602 from a requestor through adapter 600. Memory allocation occurs to allocate a buffer for transferring data. The buffer allocated is large enough to hold data for several read commands. Read commands are sent to protocol engine 604 with data being read from the media. If additional read commands are received, the data for these read commands also is placed in the buffer. When the buffer is filled, the data is returned to accelerator 602 and a decision is made whether or not to add data space to the buffer. The data is transferred to adapter 600 for transfer to the requestor asynchronously to the transfer of data between accelerator 602 and protocol engine 604. These additional transfers of data occur without requiring additional overhead for setting up buffers and without spending the time to decode and process the parameters for each read command received.
In Figure 7, adapter 700, accelerator 702, and protocol engine 704 are components involved in write operations to a device. As with read operations, a write operation involves allocating memory, such as buffer space, to receive data. With write operations, data is received from a host or requestor of the write operation with the device being the target of the data. The buffer is allocated such that data for multiple write commands may be stored in the buffer. When additional write commands are received, the data for these commands is stored in the buffer. When the buffer is full, the data is then written to the device through protocol engine 704. Thus, once the buffer has been set up for a write command by accelerator 702, additional write operations may occur without requiring the overhead involved in setting up additional buffers and without spending the time to decode and process the parameters.
In the examples illustrated in Figures 6 and 7, the data is written to the device or sent to the requestor after the buffer has been filled. Alternatively, data may be transferred continuously from the buffer to the device.
With reference now to Figure 8, a flowchart of a process for grouping commands in data transfers is depicted in accordance with a preferred embodiment of the present invention. The processes illustrated in Figure 8 are implemented in an accelerator in these examples.
The process begins by receiving a command and data (step 800). Only information about the data, such as the length, may be received at this point, since the application can assist in delivering the data to the protocol engine without an additional copy.
A determination is made as to whether a command or, alternatively, a list of commands is currently being processed (step 802). If not, the command is passed to the protocol engine, which allocates a buffer whose size depends on the command type. For instance, a read or write command will be processed in a way that allows the storage of several identical commands. As an alternative at this point, only a selectable subset of the range of possible commands may be processed in the accelerated manner. Commands not included for such processing are passed on to the system for normal processing. For commands selected for accelerator processing, after the buffer has been allocated, if the command is a write command, the data can be received prior to being transferred to the protocol engine (step 814). As a transition from step 802, if a command or list of commands is currently being processed, the received command is compared to the current command or, in the case of a list, is compared to each command in the list (step 804). In the case of a list of commands, the order of the list can be varied (e.g., most recently used, most frequently used, etc.) as the processing continues, such that the most probable match is found early in the compare process. A determination is then made as to whether the commands are the same or identical (step 806). This determination involves identifying whether the commands are of the same type. For example, the determination may be whether both commands are read commands. Further, the determination also may involve identifying whether the sources of the commands are the same. The grouping of commands, in these examples, may be performed by the source application sending the commands.
If the commands are identical, a determination is then made as to whether buffer space is available in the buffer allocated for these commands (step 808). If buffer space is available, the command and the data are placed in the buffer (step 810). A determination is then made as to whether the buffer is full (step 812). If the buffer is full, the data is then transferred to the protocol engine (step 814) with the process terminating thereafter. If continuous data transfer is used in the process rather than buffer-full transfer, the decision at step 808, when there is insufficient buffer space available, will branch to a function that makes a further decision about allocating more data space to the buffer. If more space is allocated, the logic will return and re-ask the question at step 808. If more space is not allocated, then the logic will flow on to step 816 as shown in Figure 8. With reference again to step 812, if the buffer is not full, the process terminates. With reference back to step 808, if buffer space is unavailable, a new buffer is allocated for this command type (step 816). The process then proceeds to step 810. Returning to step 806, if the command is not the same command, the process also proceeds to step 816 as described above. The process also proceeds to step 816 from step 802 if a command is not currently being processed.
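The decision flow of Figure 8 can be summarized in the following Python sketch. The dictionary-based state and the function name `handle_command` are stand-ins chosen for this example; they do not reflect an actual implementation from the specification.

```python
# Hypothetical sketch of the Figure 8 decision flow (steps 800-816).
# The buffer is a plain dict; the protocol-engine hand-off is modeled
# by appending the finished buffer to a "transferred" list.

def handle_command(state, command, data):
    buf = state.get("current")                       # step 802: command in progress?
    if buf is None or buf["prototype"] != command:   # steps 804/806: compare commands
        buf = {"prototype": command, "entries": [],
               "capacity": state["capacity"]}
        state["current"] = buf                       # step 816: allocate new buffer
    elif len(buf["entries"]) >= buf["capacity"]:     # step 808: space available?
        buf = {"prototype": command, "entries": [],
               "capacity": state["capacity"]}
        state["current"] = buf                       # step 816: new buffer, same type
    buf["entries"].append(data)                      # step 810: place command and data
    if len(buf["entries"]) >= buf["capacity"]:       # step 812: buffer full?
        state["transferred"].append(buf)             # step 814: hand off to engine
        state["current"] = None
```

In the continuous-transfer variant described above, the step 808 branch would instead consult a policy function that may grow the current buffer before falling through to step 816.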
In addition to grouping commands at the device, the commands also may be grouped at the protocol engine associated with the application when data transfers occur using packets transferred across a network or other communications channel . For example, an application located in one computer may request data transfer from a storage device using a network communications protocol, such as TCP/IP.
With reference to Figure 9, an illustration of a data transfer through protocol stacks is depicted in accordance with a preferred embodiment of the present invention. In this example, protocol stack 900 includes an application layer 902, a presentation layer 904, a transport layer 906, a network layer 908, a data link layer 910, and a physical layer 912. In the depicted example, protocol stack 900 is located in a client with an application that performs data transfers.
Protocol stack 914 includes an application layer 916, a presentation layer 918, a transport layer 920, a network layer 922, a data link layer 924, and a physical layer 926. In this example, protocol stack 914 is located in a system containing the storage device or application that is involved in the data transfer.
These layers follow the Open System Interconnection (OSI) standard defining a framework for implementing protocols in seven layers. Control is passed from one layer to the next, starting at the application layer in one station or device, proceeding to the bottom layer, moving over a communications channel to the next station or device, and proceeding back up the hierarchy. Protocol stack 900 and protocol stack 914 may be found in a protocol engine, such as protocol engine 302 in Figure 3.
In accordance with a preferred embodiment of the present invention, once one type of command has been decoded and processed through protocol stack 900, application layer 902 sends data directly to physical layer 912 for transfer to protocol stack 914 across a communications channel.
This mechanism avoids having to perform the encoding processes in the other layers. More specifically, application layer 902 will send a pseudo block 928 to presentation layer 904 for processing in response to a request to transfer data. Pseudo block 928 is a packet generated by the application in application layer 902, which will be transformed into the appropriate format for transfer over a communications channel . This transformation typically includes placing the data from pseudo block 928 into a number of packets, as well as generating the header information needed to send the packets to the target. Pseudo block 928 includes a flag 930 and data 932. Data 932 is dummy data, which is processed by the different layers in protocol stack 900. This processing is used to encode the data in the pseudo block into the appropriate format and packets for transfer over a communications channel.
A packet set is generated by physical layer 912. Physical layer 912 is configured to return the packet set to application layer 902. The application in application layer 902 that is to receive the packet set may be identified by flag 930. In this example, packet set 934 is returned to application layer 902 in buffer space 936. Application layer 902 will replicate or make copies of packet set 934, such as packet sets 938 and 940. Packet 942 is an example of a packet found in packet sets 934, 938, and 940. Packet 942 includes a header 944, which was generated by physical layer 912 to transport packet 942 to the target. Additionally, packet 942 includes a flag 946 and data 948, which forms a payload section for packet 942.
Alternatively, flag 946 may be located in header 944. Flag 946 is used to indicate that the packets are preprocessed and ready for transfer across the communications channel. Flag 946 also may be unique to a particular transfer by a particular application, such that all packets containing the flag can be associated with that application.
Application layer 902 will place data into packets in the packet set. In these examples, this data includes a command and the data that is to be processed in response to the command. The data that is to be processed or transferred to another application or device is referred to as "customer data" . The command and the customer data are placed into the data or payload areas in the packets for a packet set, such as packet set 934.
Individual packets are passed back to physical layer 912 for transfer as they are filled from a packet set. Alternatively, an entire packet set is sent to physical layer 912 after the packet set has been filled by application layer 902. These packets or packet sets are passed directly to physical layer 912 in these examples. Alternatively, the packet or packet sets could be sent through the other layers in the protocol stack. If the packet or packet sets are passed through the other layers, no additional processing occurs. The other layers and physical layer 912 are able to recognize packets that do not need processing or encoding based on a flag located in each of the packets. Upon receiving packets containing a flag, physical layer 912 will send the packets across a communications channel to protocol stack 914. Packet 942 is an illustration of a packet transferred by physical layer 912.
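The flag-based routing just described, at both the sending and receiving physical layers, can be sketched as follows. The `Packet` type, the flag value, and the callback parameters are assumptions introduced for this illustration; the specification does not prescribe a concrete packet layout.

```python
# Illustrative sketch of flag-based fast-path routing at the physical
# layer: a flagged packet bypasses per-layer encoding (send side) and
# per-layer decoding (receive side).

from dataclasses import dataclass
from typing import Optional

PREPROCESSED = 0xA5  # hypothetical flag value marking a preprocessed packet


@dataclass
class Packet:
    header: bytes
    flag: Optional[int]
    payload: bytes


def physical_send(packet, channel, full_stack_encode):
    """Flagged packets go straight onto the channel; others are encoded."""
    if packet.flag == PREPROCESSED:
        channel.append(packet)                  # already formatted for transfer
    else:
        channel.append(full_stack_encode(packet))


def physical_receive(packet, application_deliver, next_layer_up):
    """Flagged packets skip the intermediate layers on the receive side."""
    if packet.flag == PREPROCESSED:
        application_deliver(packet)             # directly to the application layer
    else:
        next_layer_up(packet)                   # normal decode through the stack
```

A physical layer not configured for this mechanism would simply take the `next_layer_up` path for every packet, which matches the fallback behavior described for Figure 14.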
At protocol stack 914, physical layer 926 will decode a packet and send the packet up through protocol stack 914 to application layer 916 if a flag is absent from the packet. Application layer 916 may then process the data or place it in storage. If a flag is present, the packet may be sent directly to application layer 916 for processing. Application layer 916 will take packets with flags and recreate packet sets to extract data. When an entire packet set has been recreated, the block of data sent from the source may be extracted and processed according to the command associated with the packet set. Alternatively, the data may be extracted as packets in a packet set are received by application layer 916. When a first packet in a set is received for the first time at application layer 916, the packet is decoded and the information to decode the packet is stored to build an inventory of preprocessed decode examples. These decode examples may include, for example, the parameters, registers, variables, and buffers required to decode the data into a form used by an application in application layer 916. These examples may be built on a per packet or per packet set basis. When subsequent packets are received, the flag may be used to identify the appropriate decode example for use in processing the packet. In this manner, the overhead required to decode and process the packet is reduced.
In the absence of flag 946 as part of the communication protocol between stack 900 and stack 914, the decode process may be performed vertically on a subset of the layers in stacks 900 and 914. For instance, physical layer 912 may group commands and data for sending after application layer 902 has sent a local set-up message defining a packet set type. Transport layer 920 may group received commands after application layer 916 has sent a local set-up message describing a packet set type. Physical layers 912 or 926, application layers 902 or 916, or any intermediate layer, may determine packet set boundaries based on statistical elements and without any external intervention. The processes described with respect to the physical layers in Figure 9 may be implemented as part of the application or by another program in the application layer.
Turning next to Figure 10, a diagram illustrating data structures used in decoding and receiving packets is depicted in accordance with a preferred embodiment of the present invention. In this example, a comparator stack 1000, an example decode matrix 1002, and an example data structure 1004 are used to process packets received by a physical layer, such as physical layer 926 in Figure 9.
When a packet, such as packet 1006, is received, a flag within the packet or other identification information is used to identify the packet type. More specifically, the packet type, in these examples, is associated with a command or other instructions used to perform an operation on the data in the packet. This identification information is compared to identification information stored in comparator stack 1000. Each of these packet types is categorized by the type of command or operation that is to be carried out on the data in a packet. In these examples, the packet types are "A", "B", and "C".
If the packet type does not correspond to the information in comparator stack 1000, the packet identification information is placed into comparator stack 1000, and the packet is decoded. The data structures, the parameters, the variables, as well as any other information or settings required to decode and place the data into a format for use by an application in the application layer, are stored as a data structure, such as example data structure 1004. In this example, this data structure contains command information 1008, parameter information 1010, and data 1012. All of this information is used to place data from a packet into a format for use by the application.
Example data structure 1004, in these examples, is replicated a number of different times. Pointers to these data structures are placed in example decode matrix 1002. In this example, pointer 1014 points to example decode data structure 1004. When a packet is received having an identification corresponding to a packet type in comparator stack 1000, a pointer from example decode matrix 1002 to an example decode data structure for that packet type is used to select a data structure to process the packet. In this manner, the resources and time used in decoding a packet may be reduced. This mechanism may be applied to entire packet sets in addition to individual packets. In this example, a packet set corresponds to a block of data handled by an application in the application layer.
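The interplay of comparator stack 1000 and example decode matrix 1002 can be sketched as follows. Dictionary lookups here stand in for the compare logic and the pointer matrix; the function names and the shape of a decode example are assumptions made for this illustration.

```python
# Sketch of reusing example decode structures (Figure 10): the first
# packet of a type is decoded fully and its decode set-up is cached;
# later packets of the same type reuse the cached example.

comparator_stack = {}   # packet type -> seen marker (stand-in for the stack)
decode_matrix = {}      # packet type -> pointer to an example decode structure


def decode_packet(packet_type, payload, build_decode_example):
    """Decode a packet, building and caching a decode example on first sight."""
    if packet_type not in comparator_stack:
        # First packet of this type: perform the full decode set-up and
        # store the resulting example (command info, parameters, buffers).
        example = build_decode_example(packet_type)
        comparator_stack[packet_type] = True
        decode_matrix[packet_type] = example
    example = decode_matrix[packet_type]
    return example["decode"](payload)   # reuse the cached decode template
```

The expensive set-up runs once per packet type; every subsequent packet of that type is processed through the stored example, which is the resource saving the paragraph above describes.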
Further, rather than using an example data structure for each packet or packet set, a larger data structure may be used for a number of packets or packet sets. This larger allocation of memory or buffer space may be selected to be large enough to handle predicted numbers of packets or packet sets. Additionally, the memory allocation or buffer space may be dynamically varied to take into account increasing or decreasing needs in processing data.
With reference now to Figures 11-16, flowcharts illustrating processes used to group commands in packets are depicted in accordance with a preferred embodiment of the present invention. The processes illustrated in Figures 11-13 are those used to send packets, while the processes depicted in Figures 14-16 are those used to receive packets. The processes, in these examples, are implemented in a protocol stack, such as protocol stack 900 or protocol stack 914 in Figure 9. Of course, the processes may be located elsewhere depending on the implementation.
Turning now to Figure 11, a flowchart of a process in an application layer for generating a packet set and sending data using the packet set is depicted in accordance with a preferred embodiment of the present invention. This process may be implemented in an application layer, such as application layer 902 in Figure 9.
The process begins by generating a pseudo block (step 1100) . Pseudo block 928 in Figure 9 is an example of a pseudo block that is generated in step 1100. This pseudo block includes a flag used to identify the application that is transferring data. Further, this flag is used by other layers, such as the physical layer, as an indication to return a set of packets to the application layer. The pseudo block, in these examples, takes the form of a packet generated by an application, which is typically placed into smaller sized packets for transport across a communications channel.
The pseudo block is then passed to the next layer (step 1102) . With an application layer, the next layer in an OSI model is a presentation layer. The process then waits to receive a packet set (step 1104) . The packet set is a set of data structures, which are ready for transport over the communications channel by the physical layer. The data to actually be transported is placed within the appropriate places in these packet sets. These places are typically the payload portions of the packet. When a packet set is received, the packet set is placed repetitively in a buffer space (step 1106) . This replication of the packet set allows for multiple blocks of data to be filled by the application layer and passed to the physical layer for transfer.
The data space of a packet in a packet set is filled, and the packet is sent to the physical layer (step 1108) . This data space in the packet is also referred to as the "payload" . The data space is filled with the customer data and the command for the operation to be performed on the customer data. Further, a flag is placed in the payload if a flag is not already present elsewhere in the packet.
A determination is then made as to whether more data is present to be placed in packets in the packet set (step 1110) . This determination includes identifying whether the packet set has been completed or whether the amount of data sent does not require the entire packet set. If more data is present, then the process returns to step 1108. Otherwise, a determination is made as to whether more data is present for transfer (step 1112) . If more data is present, another packet set is selected for use (step 1114) with the process then returning to step 1108. Otherwise, the process terminates.
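The send loop of Figure 11 (steps 1106 through 1114) may be sketched as follows. The `send_data` function, the dict-based packet representation, and the per-packet hand-off callback are all illustrative assumptions, not elements of the specification.

```python
# Sketch of the application-layer send loop: replicated copies of a
# preprocessed packet set are filled with customer data and each packet
# is passed to the physical layer as it is filled.

def send_data(data_chunks, packet_set_template, physical_send):
    """Fill copies of a packet set and send each packet as it is filled."""
    packet_sets = []
    current, index = None, 0
    for chunk in data_chunks:
        if current is None or index == len(current):
            # Steps 1106/1114: replicate the template as the next packet set.
            current = [dict(p) for p in packet_set_template]
            packet_sets.append(current)
            index = 0
        current[index]["payload"] = chunk    # step 1108: fill the data space
        physical_send(current[index])        # hand the packet to the physical layer
        index += 1                           # steps 1110/1112: more data?
    return packet_sets
```

Because the template packets already carry their headers and flag, no per-packet encoding work is repeated while the loop runs.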
With reference now to Figure 12 , a flowchart of a process in a physical layer used to generate a packet set is depicted in accordance with a preferred embodiment of the present invention. The processes illustrated in Figure 12 may be implemented in a physical layer, such as physical layer 912 in Figure 9.
The process begins by receiving a packet from the previous layer (step 1200) . With an OSI model, this layer would be a data link layer. A determination is made as to whether the packet includes a flag (step 1202) . If the packet does not include a flag, the packet is sent using normal processing within the physical layer (step 1204) with the process terminating thereafter. With reference again to step 1202, if a flag is present, the packet is broken into a set of physical packets for transfer on a communications channel or link (step 1206) . This set of packets is sent back to the application associated with the flag (step 1208) with the process terminating thereafter. Further, these packets include a flag to identify the packets as being part of the same set of packets or to identify the set of packets to be part of a data transfer for the application.
With reference next to Figure 13, a flowchart of a process in a physical layer for sending packets from a packet set across a data channel is depicted in accordance with a preferred embodiment of the present invention. The process in Figure 13 is implemented in a physical layer in these examples .
The process begins by receiving a packet from the application layer (step 1300) . A determination is made as to whether a flag is present in the packet (step 1302) . If a flag is present, the packet is sent onto the communications channel for transfer to the target (step 1304) with the process terminating thereafter.
If a flag is absent in the packet, an error message is generated for return to the application (step 1306) with the process terminating thereafter. Each packet received directly from the application layer should include a flag in these examples. If a flag is absent, then some error in processing in the application layer is assumed.
With reference now to Figure 14, a flowchart of a process in a physical layer used to receive a packet is depicted in accordance with a preferred embodiment of the present invention. The process in Figure 14 may be implemented in a physical layer, such as physical layer 926 in Figure 9. The process begins by receiving a packet from the physical media (step 1400). This physical media is a communications channel in this example. A determination is made as to whether a flag is present in the packet (step 1402). If a flag is present, the packet is sent directly to the application (step 1404) with the process terminating thereafter. If a flag is absent in the packet, then the packet is sent to the next layer above (step 1406) with the process terminating thereafter. In this example, the next layer is a data link layer if an OSI model is used. The flag indicates that the mechanism of the present invention is being used to process these packets. If the physical layer does not recognize the flag or is not configured to use the mechanism of the present invention, the flag is ignored and the packet is sent to the next layer.
Turning next to Figure 15, a flowchart of a process for handling packets in an application layer is depicted in accordance with a preferred embodiment of the present invention. The process illustrated in Figure 15 may be implemented in an application layer, such as application layer 916 in Figure 9, when an OSI model is used.
The process begins by receiving a packet from the physical layer (step 1500) . A determination is made as to whether this packet is a first packet of a set of packets (step 1502) . If the packet is a first packet in a set of packets, the packet is associated with a set (step 1504) . This step is used to begin a new set in which the packet will be placed. A determination is made as to whether this packet is the first time a packet has been received from an entity originating the packet (step 1506) . The entity is identified by a device address or file address in these examples. If this packet is the first time a packet has been received from the entity, the process begins building the set and extracting data (step 1508) .
Next, decode is performed against the data (step 1510) . This step involves performing the necessary actions to place the data into a form for processing for a command or for use by an application. Next, information is placed in a comparator stack (step 1512) . This information may be for the data or a packet set. Thereafter, an inventory of preprocessed examples of decode is created (step 1514) with the process terminating thereafter. These examples of decode are also referred to as example decodes. The examples include the information necessary to decode or place the data in a form for use by an application, which is a target of the data transfer. A decode example, such as example decode data structure 1004 in Figure 10, includes information, such as, for example, registers in which data is to be placed and pointers to allocated buffer space. Basically, a decode example is a template used to process data for a particular command. With a decode example, processing of data for the command does not require identifying where the data should be placed or what buffer space should be used. In this manner, the mechanism of the present invention reduces the resources and processing time needed to handle data transfers or other data operations.
With reference again to step 1506, if the packet is not the first packet for a particular entity, the packet is added to a set and data extraction continues (step 1516) . Thereafter, the data or the set is compared against information in the comparator stack (step 1518) . A determination is made as to whether a match is present between the data or set and the information in the comparator stack (step 1520) . If a match is not present, the process returns to step 1510 as described above. When a match is present, an example decode associated with the match is obtained (step 1522) . The example decode may be obtained from a data structure, such as example decode matrix 1002 in Figure 10. This matrix is a matrix of pointers to different example decode structures, which may be used to process the data for a particular type of command. The data is then placed using the example decode (step 1524) with the process terminating thereafter.
With reference again to step 1502, if the packet is not a first packet in a set of packets, the process then proceeds to step 1508 as described above.
In generating decode examples, buffer space is allocated for these examples. The amount of space that is needed may be determined in a number of ways. Turning next to Figure 16, a flowchart for identifying buffer space is depicted in accordance with a preferred embodiment of the present invention. The process illustrated in Figure 16 is used to identify and allocate buffer space for write operations. The process begins by identifying a need for write buffer space (step 1600) . A determination is made as to whether the size for the write buffer space is provided (step 1602) . If a size for the write buffer space is provided, this size is used in building examples (step 1604) with the process terminating thereafter.
If a size is not provided, a default size is selected (step 1606). The history of the appropriateness of the default size is monitored (step 1608). This step includes determining whether the default size provides the correct amount of space, too much space, or too little space for the examples. The size of the write buffer space is adjusted based on the history (step 1610) with the process terminating thereafter. The examples illustrated above identify commands in packets on a per packet basis. The processes of the present invention also may be applied to recognizing patterns of commands being received in successive packets. Further, packet processing also may be based on a number of strategies, such as, for example, first-in-first-out (FIFO), frequency of packet types, and ordered set list processing. With frequency, packets containing a command type that has a higher frequency are selected for processing before packets having a command type with a lower frequency. With ordered set list processing, decode examples are set up for different sequences of command types in received packets. For example, a decode example may be set up for a command sequence of read, read, and write. Another decode example in this methodology may be set up for a command sequence of read, write, and verify. Different lengths may be selected for these sequences.
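The buffer-sizing heuristic of Figure 16 may be sketched as follows. The class name, the default value, and the sliding-window averaging policy are assumptions chosen for this example; the specification only requires that the default be monitored and adjusted based on history.

```python
# Sketch of the Figure 16 write-buffer sizing process: use a provided
# size when one is given (step 1602/1604); otherwise fall back to a
# default that is adjusted from observed usage (steps 1606-1610).

class WriteBufferSizer:
    def __init__(self, default_size=4096):
        self.default_size = default_size
        self.history = []                # observed space actually used

    def size_for(self, requested_size=None):
        """Use the provided size when available, else the current default."""
        if requested_size is not None:
            return requested_size
        return self.default_size

    def record_usage(self, bytes_actually_used):
        """Monitor appropriateness of the default and adjust it (assumed
        policy: average over a sliding window of recent transfers)."""
        self.history.append(bytes_actually_used)
        recent = self.history[-8:]
        self.default_size = max(1, sum(recent) // len(recent))
```

Any adjustment policy that shrinks an over-generous default and grows an insufficient one would satisfy the process described; the moving average here is merely one plausible choice.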
Thus, the present invention provides an improved method, apparatus, and computer implemented instructions for reducing command processing in data transfers. This advantage is provided through the different mechanisms for grouping a series of identical commands or identical command patterns. The mechanism of the present invention reduces the number of buffer allocation operations by avoiding such an operation for every command. In addition, the present invention reduces the latency time in use of resources in fully processing each individual command. In this way, bottlenecks or congestion occurring at the protocol engine in high bandwidth data transfers are reduced or eliminated. Further, the mechanism of the present invention may be applied to existing mechanisms, such as in an application layer and a physical layer in an OSI stack within a protocol engine.
It is important to note that while the present invention has been described in the context of a fully functioning data processing system, those of ordinary skill in the art will appreciate that the processes of the present invention are capable of being distributed in the form of a computer readable medium of instructions and a variety of forms and that the present invention applies equally regardless of the particular type of signal bearing media actually used to carry out the distribution. Examples of computer readable media include recordable-type media, such as a floppy disk, a hard disk drive, a RAM, CD-ROMs, DVD-ROMs, and transmission-type media, such as digital and analog communications links, wired or wireless communications links using transmission forms, such as, for example, radio frequency and light wave transmissions. The computer readable media may take the form of coded formats that are decoded for actual use in a particular data processing system.
The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. For example, the processes of the present invention may be applied to data transfers involving many types of readable and/or writable media devices, such as, for example, a floppy disk drive, a hard disk drive, a CD-ROM drive, a digital versatile disk (DVD) drive, and a magnetic tape drive. Further, this mechanism may be applied to data transfers between two applications in addition to data transfers to and from a storage device. Additionally, although the depicted examples illustrate the processes implemented in an OSI model, the processes of the present invention may be applied to other types of protocol models and may be located in other layers depending on the implementation.
The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.

Claims

What is claimed is:
1. A method in a data processing system for transferring data, the method comprising: receiving a plurality of packets, wherein several of the plurality of packets include a command and data; allocating a buffer; identifying packets within the plurality of packets having a particular command; and placing packets identified as having the particular command in the buffer.
2. The method of claim 1 further comprising: allocating a new buffer for each packet within the plurality of packets having a command from a selectable subset of all the possible commands which is a different command from the particular command and from other packets within the plurality of packets having been allocated a buffer.
3. The method of claim 1, wherein the receiving, allocating, identifying, and placing steps are performed in a protocol engine.
4. The method of claim 1, wherein the commands include at least one of a read command and a write command.
5. A method in a data processing system for processing data, the method comprising: receiving an initial packet, wherein the initial packet includes data and a command; allocating a buffer to the initial packet, wherein the data and the command are stored in the buffer; responsive to receiving a subsequent packet, determining whether a command in the subsequent packet is identical to the command in the initial packet; and responsive to the command in the subsequent packet being identical to the command in the initial packet, placing the data for the subsequent packet in the buffer.
6. The method of claim 5, wherein the placing step places packets having the particular command in the buffer as they are identified.
7. A method for processing data in a data processing system, the method comprising: sending a block of pseudo data to a protocol layer below an application layer, wherein the block of pseudo data includes an indicator; receiving a set of packets from a physical layer below the protocol layer, wherein the set of packets include portions for real data; placing real data in the set of packets; and sending the set of packets to the physical layer for transfer.
8. The method of claim 7, wherein the sending, the receiving, and the placing steps are performed in the application layer.
9. The method of claim 8, wherein the application layer, the protocol layer, and the physical layer are located in an Open Systems Interconnect protocol stack.
10. The method of claim 7, wherein the block of pseudo data is a packet containing the pseudo data and a flag.
11. A method in a data processing system for processing data, the method comprising: receiving a packet from a protocol layer, wherein the packet originated from an application layer located prior to the protocol layer; responsive to the packet including a selected indicator, generating a set of packets from the packet, wherein the set of packets are in a format for transmission to a target over a communications channel; and returning the set of packets to the protocol layer.
12. The method of claim 11 further comprising: receiving the packet from the application layer; and sending the packet to a target over the communications channel.
13. The method of claim 12, wherein the packet includes the selected indicator.
14. A method for processing packets in a protocol stack within a data processing system, the method comprising: receiving a packet at a physical layer in the protocol stack; and responsive to a presence of an indicator in the packet, sending the packet to an application layer in the protocol stack, wherein other layers in the protocol stack are bypassed; and responsive to an absence of the indicator in the packet, sending the packet to a next layer in the protocol stack.
15. The method of claim 14, wherein the protocol stack is an Open Systems Interconnect protocol stack.
16. A method in a data processing system for processing packets, the method comprising: receiving a packet from a physical layer in a protocol stack, wherein the packet includes data; determining whether the packet is a first packet in a set of packets; responsive to the packet being the first packet in the set of packets, associating the packet with the set of packets; extracting data from the packet; processing the data; and creating a template for processing the data.
17. The method of claim 16 further comprising: receiving additional packets, wherein the additional packets contain data; determining whether the template is present for processing the data; and responsive to the template being present, processing the data using the template, wherein processing operations required to process the data are reduced.
18. The method of claim 16, wherein the template includes at least one of registers used in processing the data and pointers to allocated buffer space.
19. The method of claim 16, wherein the step of processing the data includes allocating a buffer space to process the data and adjusting the buffer space based on usage of the buffer space.
20. A protocol stack for use in a data processing system, the protocol stack comprising: an application layer; at least one intermediate layer located after the application layer; and a physical layer located after the at least one intermediate layer, wherein the application layer sends a block of data including a selected indicator for transmission, the physical layer generates a set of packets and returns the set of packets to the application layer in response to detecting the selected indicator in the data, the application layer places data in the set of packets and sends the packets to the physical layer for transfer, and the physical layer sends a packet to a target in response to receiving the packet.
21. A data processing system comprising: a storage device; an application, wherein the application sends a plurality of packets to the storage device, wherein each packet includes data and a command for an operation on the data; and a protocol engine, wherein the protocol engine receives an initial packet, allocates a buffer to process the data in the initial packet, places additional packets within the buffer in response to the additional packets having a same type of command as the initial packet.
22. The data processing system of claim 21, wherein the storage device and the application are located in a computer.
23. The data processing system of claim 21, wherein the storage device is located on a first computer and the application is located on a second computer.
24. A data processing system for transferring data, the data processing system comprising: receiving means for receiving a plurality of packets, wherein each of the plurality of packets includes a command and data; allocating means for allocating a buffer; identifying means for identifying packets within the plurality of packets having a particular command; and placing means for placing two packets identified as having the particular command in the buffer.
25. The data processing system of claim 24, wherein the allocation means is a first allocation means, further comprising: second allocation means for allocating a new buffer for each packet within the plurality of packets having a different command from the particular command and from other packets within the plurality of packets having been allocated a buffer.
26. The data processing system of claim 24, wherein the receiving means, the identifying means, and the allocation means are located in a protocol engine.
27. The data processing system of claim 24, wherein the commands include at least one of a read command and a write command.
28. A data processing system for processing data, the data processing system comprising: receiving means for receiving an initial packet, wherein the initial packet includes data and a command; allocation means for allocating a buffer to the initial packet, wherein the data and the command are stored in the buffer; determining means responsive to receiving a subsequent packet, for determining whether a command in the subsequent packet is identical to the command in the initial packet; and placing means responsive to the command in the subsequent packet being identical to the command in the initial packet, for placing the data for the subsequent packet in the buffer.
29. A data processing system, the data processing system comprising: first sending means for sending a block of pseudo data to a protocol layer below an application layer, wherein the block of pseudo data includes an indicator; receiving means for receiving a set of packets from a physical layer below the protocol layer, wherein the set of packets include portions for real data; placing means for placing real data in the set of packets; and second sending means for sending the set of packets to the physical layer for transfer.
30. The data processing system of claim 29, wherein the first sending means, the second sending means, the receiving means, and the placing means are located in the application layer.
31. The data processing system of claim 30, wherein the application layer, the protocol layer, and the physical layer are located in an Open Systems Interconnect protocol stack.
32. The data processing system of claim 29, wherein the block of pseudo data is a packet containing the pseudo data and a flag.
33. A data processing system for processing data, the data processing system comprising: receiving means for receiving a packet from a protocol layer, wherein the packet originated from an application layer located prior to the protocol layer; generating means responsive to the packet including a selected indicator, for generating a set of packets from the packet, wherein the set of packets are in a format for transmission to a target over a communications channel; and returning means for returning the set of packets to the protocol layer.
34. The data processing system of claim 33, wherein the receiving means is a first receiving means, further comprising: second receiving means for receiving the packet from the application layer; and sending means for sending the packet to a target over the communications channel.
35. The data processing system of claim 34, wherein the packet includes the selected indicator.
36. A data processing system for processing packets in a protocol stack, the data processing system comprising: receiving means for receiving a packet at a physical layer in the protocol stack; and first sending means responsive to a presence of an indicator in the packet, for sending the packet to an application layer in the protocol stack, wherein other layers in the protocol stack are bypassed; and second sending means responsive to an absence of the indicator in the packet, for sending the packet to a next layer in the protocol stack.
37. The data processing system of claim 36, wherein the protocol stack is an Open Systems Interconnect protocol stack.
38. A data processing system for processing packets, the data processing system comprising: receiving means for receiving a packet from a physical layer in a protocol stack, wherein the packet includes data; determining means for determining whether the packet is a first packet in a set of packets; associating means responsive to the packet being the first packet in the set of packets, for associating the packet with the set of packets; extracting means for extracting data from the packet; processing means for processing the data; and creating means for creating a template for processing the data.
39. The data processing system of claim 38, wherein the receiving means is a first receiving means, the determining means is a first determining means, and the processing means is a first processing means, further comprising: second receiving means for receiving additional packets, wherein the additional packets contain data; second determining means for determining whether the template is present for processing the data; and second processing means responsive to the template being present, for processing the data using the template, wherein processing operations required to process the data are reduced.
40. The data processing system of claim 38, wherein the template includes at least one of registers used in processing the data and pointers to allocated buffer space.
41. The data processing system of claim 38, wherein the step of processing the data includes allocating a buffer space to process the data and adjusting the buffer space based on usage of the buffer space.
42. A computer program product in a computer readable medium for transferring data, the computer program product comprising: first instructions for receiving a plurality of packets, wherein each of the plurality of packets includes a command and data; second instructions for allocating a buffer; third instructions for identifying packets within the plurality of packets having a particular command; and fourth instructions for placing the packets identified as having the particular command in the buffer.
43. A computer program product in a computer readable medium for processing data, the computer program product comprising: first instructions for receiving an initial packet, wherein the initial packet includes data and a command; second instructions for allocating a buffer to the initial packet, wherein the data and the command are stored in the buffer; third instructions responsive to receiving a subsequent packet, for determining whether a command in the subsequent packet is identical to the command in the initial packet; and fourth instructions responsive to the command in the subsequent packet being identical to the command in the initial packet, for placing the data for the subsequent packet in the buffer.
44. A computer program product in a computer readable medium for processing data, the computer program product comprising: first instructions for sending a block of pseudo data to a protocol layer below an application layer, wherein the block of pseudo data includes an indicator; second instructions for receiving a set of packets from a physical layer below the protocol layer, wherein the set of packets include portions for real data; third instructions for placing real data in the set of packets; and fourth instructions for sending the set of packets to the physical layer for transfer.
45. A computer program product in a computer readable medium for processing data, the computer program product comprising: first instructions for receiving a packet from a protocol layer, wherein the packet originated from an application layer located prior to the protocol layer; second instructions responsive to the packet including a selected indicator, for generating a set of packets from the packet, wherein the set of packets are in a format for transmission to a target over a communications channel; and third instructions for returning the set of packets to the protocol layer.
46. A computer program product in a computer readable medium for processing packets in a protocol stack within a data processing system, the computer program product comprising: first instructions for receiving a packet at a physical layer in the protocol stack; and second instructions responsive to a presence of an indicator in the packet, for sending the packet to an application layer in the protocol stack, wherein other layers in the protocol stack are bypassed; and third instructions responsive to an absence of the indicator in the packet, for sending the packet to a next layer in the protocol stack.
47. A computer program product in a computer readable medium for processing packets, the computer program product comprising: first instructions for receiving a packet from a physical layer in a protocol stack, wherein the packet includes data; second instructions for determining whether the packet is a first packet in a set of packets; third instructions responsive to the packet being the first packet in the set of packets, for associating the packet with the set of packets; fourth instructions for extracting data from the packet; fifth instructions for processing the data; and sixth instructions for creating a template for processing the data.
PCT/US2001/024641 2000-08-11 2001-08-06 Method and apparatus for transferring data in a data processing system WO2002014998A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001284730A AU2001284730A1 (en) 2000-08-11 2001-08-06 Method and apparatus for transferring data in a data processing system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US63817300A 2000-08-11 2000-08-11
US09/638,173 2000-08-11

Publications (3)

Publication Number Publication Date
WO2002014998A2 true WO2002014998A2 (en) 2002-02-21
WO2002014998A3 WO2002014998A3 (en) 2003-04-03
WO2002014998A9 WO2002014998A9 (en) 2004-04-22

Family

ID=24558936

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/024641 WO2002014998A2 (en) 2000-08-11 2001-08-06 Method and apparatus for transferring data in a data processing system

Country Status (2)

Country Link
AU (1) AU2001284730A1 (en)
WO (1) WO2002014998A2 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5414702A (en) * 1992-10-20 1995-05-09 Kabushiki Kaisha Toshiba Packet disassembler for use in a control unit of an asynchronous switching system
JPH08130555A (en) * 1994-11-02 1996-05-21 Nec Corp Communication resource management type packet exchange
US5802278A (en) * 1995-05-10 1998-09-01 3Com Corporation Bridge/router architecture for high performance scalable networking
US5870394A (en) * 1996-07-23 1999-02-09 Northern Telecom Limited Method and apparatus for reassembly of data packets into messages in an asynchronous transfer mode communications system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BLACKWELL T: "Fast decoding of tagged message formats" PROCEEDINGS OF IEEE INFOCOM 1996. CONFERENCE ON COMPUTER COMMUNICATIONS. FIFTEENTH ANNUAL JOINT CONFERENCE OF THE IEEE COMPUTER AND COMMUNICATIONS SOCIETIES. NETWORKING THE NEXT GENERATION. SAN FRANCISCO, MAR. 24 - 28, 1996, PROCEEDINGS OF INFOCOM, L, vol. 2 CONF. 15, 24 March 1996 (1996-03-24), pages 224-231, XP010158074 ISBN: 0-8186-7293-5 *
PATENT ABSTRACTS OF JAPAN vol. 1996, no. 09, 30 September 1996 (1996-09-30) & JP 08 130555 A (NEC CORP), 21 May 1996 (1996-05-21) *

Also Published As

Publication number Publication date
WO2002014998A9 (en) 2004-04-22
WO2002014998A3 (en) 2003-04-03
AU2001284730A1 (en) 2002-02-25


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
COP Corrected version of pamphlet

Free format text: PAGE 1, DESCRIPTION, REPLACED BY A NEW PAGE 1; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

NENP Non-entry into the national phase

Ref country code: JP