US20040119886A1 - Method and apparatus for implementing 4:2:0 to 4:2:2 and 4:2:2 to 4:2:0 color space conversion


Info

Publication number
US20040119886A1
Authority
US
United States
Prior art keywords
yuv
component
components
format
conversion
Prior art date
Legal status
Abandoned
Application number
US10/732,561
Inventor
Val Cook
Kam Leung
Wing Wong
Current Assignee
Intel Corp
Original Assignee
Intel Corp
Priority date
Filing date
Publication date
Application filed by Intel Corp filed Critical Intel Corp
Priority to US10/732,561
Publication of US20040119886A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00: Details of colour television systems
    • H04N9/64: Circuits for processing colour signals
    • H04N9/641: Multi-purpose receivers, e.g. for auxiliary information
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a colour or a chrominance component
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/59: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution

Definitions

  • the present invention relates to the field of YUV color space conversion from planar YUV 4:2:0 format to packed 4:2:2 YUV format and packed 4:2:2 YUV format to planar YUV 4:2:0 format.
  • 4:2:0 color space data is stored in a planar format, that is, in three contiguous locations, or surfaces, of memory. Therefore, in order to convert YUV color space from 4:2:0 to 4:2:2, it has been necessary to read all three surfaces, for the Y, U and V components respectively, convert the data and write the data in the converted format. Such processing incurs high overhead in terms of address generation capabilities, buffering capacities, and data paths/streams to and from memory. Furthermore, for the conversion of YUV 4:2:2 packed format data to the YUV 4:2:0 planar format, the converted data must be written to three separate memory locations, which also requires additional buffering capabilities and data paths/streams to and from the memory.
  • a method and circuit are provided for color space conversion of YUV (luminance and chrominance) components, including the steps of reading source data, sampling said UV samples in a vertical direction, performing a pass for each of said Y, U and V components, and writing said YUV components in the converted format.
  • YUV: luminance and chrominance
  • FIG. 1 is an example block diagram illustrating an example of a DVD data stream processing pipeline
  • FIG. 2 is an example block diagram illustrating an example of a personal computer (PC) system according to an example embodiment of the present invention
  • FIG. 3A shows an example sampling grid of 4:2:0 color space data
  • FIG. 3B shows an example of planar YUV 4:2:0 format color space data as it is stored in a planar format
  • FIG. 4 shows an example sampling grid of 4:2:2 YUV color space data
  • FIG. 5 shows an example flowchart of color space conversion from planar YUV 4:2:0 format to packed YUV 4:2:2 format according to an example embodiment of the present invention
  • FIG. 6A shows example sampling grids relating to the example color space conversion from planar YUV 4:2:0 format to packed YUV 4:2:2 format shown in the example embodiment of the invention of FIG. 5;
  • FIG. 6B shows example YUV data, respectively, as it is written to memory, after the respective passes, in accordance with the example of FIG. 6A;
  • FIG. 7 shows an example flowchart of color space conversion from packed YUV 4:2:2 format to planar YUV 4:2:0 format according to an example embodiment of the present invention
  • FIG. 8 shows an example conversion of packed YUV 4:2:2 data to planar YUV 4:2:0 format data, in accordance with an example embodiment of the present invention.
  • DVD: Digital Versatile Disk
  • PC: personal computer
  • a DVD data stream can contain several types of packetized streams, including video, audio, subpicture, presentation and control information, and data search information (DSI).
  • DVD supports up to 32 subpicture streams that overlay the video to provide subtitles, captions, karaoke, lyrics, menus, simple animation and other graphical overlays.
  • the subpictures are intended to be blended with the video for a translucent overlay in the final digital video signal.
  • FIG. 1 is a block diagram illustrating a typical DVD data stream processing pipeline 10 .
  • the video and audio streams are compressed according to the Moving Pictures Experts Group MPEG-2 standard. Additional information regarding the DVD processing can be found in the DVD Specification, Version 1.0, August, 1996; and additional information regarding the MPEG-2 standard can be found, for example, in MPEG Video Standard: ISO/IEC 13818-2: Information Technology—Generic Coding of Moving Pictures and Associated Audio Information: Video (1996) (a.k.a. ITU-T Rec. H-262 (1996)). A discussion of a typical DVD data stream processing is also provided in published PCT application No. WO 99/23831.
  • an incoming DVD data stream is parsed or split (i.e., demultiplexed) into multiple independent streams, including a subpicture stream 13 , a MPEG-2 video stream 15 and a MPEG-2 audio stream 17 .
  • the MPEG-2 video stream 15 and the subpicture stream 13 are provided to a video processing stage 14 .
  • the MPEG-2 audio stream is provided to an audio processing stage 16 .
  • Video processing stage 14 may include three sub-stages (sub-stages 18 , 20 and 21 ).
  • the first sub-stage is a DVD subpicture decode stage 18 in which the subpicture stream is decoded into a two-dimensional array of subpicture values.
  • Each subpicture value includes an index into a subpicture palette and a corresponding alpha value.
  • the indices identify Y, U and V values of the subpicture pixels.
  • the alpha values are used for blending or image compositing the subpicture signal and the video signal.
  • the subpicture data may be considered as being provided in a YUV 4:4:4 format (the palette contains the YUV 4:4:4 values or color codes for the subpicture signal).
  • YUV is a color-difference video signal containing one luminance value (Y) or component and two chrominance values (U, V) or components, and is also commonly referred to as YCrCb (where Cb and Cr are chrominance values corresponding to U and V, respectively).
  • YUV and YCrCb can be used interchangeably.
  • YUV 4:4:4 is a component digital video format in which each of the luminance and chrominance values are sampled equally (e.g., one Y value, one U value and one V value per pixel).
  • the second sub-stage of video processing stage 14 is an MPEG-2 video decode sub-stage 20 in which the MPEG-2 video stream is decoded, decompressed and converted to a YUV 4:2:2 digital video signal.
  • the incoming DVD video signals in the DVD data stream are decoded into a planar YUV 4:2:0 format.
  • MPEG-2 decode sub-stage 20 then conducts a variable length decode (VLD) 22 , an inverse quantization (IQUANT) 24 , an Inverse Discrete Cosine Transform (IDCT) 26 and motion compensation 28 .
  • VLD: variable length decode
  • IQUANT: inverse quantization
  • IDCT: Inverse Discrete Cosine Transform
  • planar YUV 4:2:0 format is the digital component format used to perform the MPEG-2 motion compensation, stage 28 .
  • a subsequent alpha-blending stage 32 is typically performed in packed YUV 4:2:2 format. Therefore, after motion compensation 28 , a conversion stage 30 is used to convert the digital video data from a planar YUV 4:2:0 format to an interleaved (or packed) YUV 4:2:2 format.
  • the interleaved (or packed) format is where the Y, U and V samples are provided or stored in an interleaved arrangement (e.g., YUVYUVYUV . . . ).
  • the planar format is where a group of Y samples (e.g., for a frame) are provided or stored together (typically contiguously) in a surface or set of buffers, a group of U samples are provided or stored together (typically contiguously) in a second surface or a second set of memory buffers, and the V samples are stored together (typically contiguously) in a third surface or set of buffers.
  • the sets of Y, U and V samples are stored in separate surfaces (or separate sets of buffers or separate regions in memory).
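For illustration only (not from the patent; names and sample values are hypothetical), the two arrangements just described can be sketched as:

```python
def pack_interleaved(ys, us, vs):
    """Packed layout: Y, U and V samples interleaved as YUVYUV..."""
    out = []
    for y, u, v in zip(ys, us, vs):
        out.extend([y, u, v])
    return out

def pack_planar(ys, us, vs):
    """Planar layout: each component stored contiguously in its own surface."""
    return list(ys) + list(us) + list(vs)

ys, us, vs = [16, 32], [100, 101], [200, 201]
print(pack_interleaved(ys, us, vs))  # [16, 100, 200, 32, 101, 201]
print(pack_planar(ys, us, vs))       # [16, 32, 100, 101, 200, 201]
```

The same samples end up in memory either pixel-by-pixel (packed) or surface-by-surface (planar), which is why converting between the two requires the addressing and buffering machinery discussed above.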
  • a video frame can be compressed without a significant perceived loss in quality by compressing only the color or chrominance information (e.g., resulting in a packed YUV 4:2:2 format, or even a planar YUV 4:2:0 format).
  • compression can be achieved by downsampling the chrominance samples horizontally (for a packed YUV 4:2:2 format) or by downsampling the chrominance samples both horizontally and vertically (for the planar YUV 4:2:0 format).
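The horizontal and horizontal-plus-vertical chroma downsampling just described can be sketched as follows; simple pair averaging is assumed here as a stand-in for whatever filter a real implementation uses:

```python
def downsample_h(plane):
    """Average horizontal pairs: one chroma sample per two pixels (as in 4:2:2)."""
    return [[(row[i] + row[i + 1]) // 2 for i in range(0, len(row), 2)]
            for row in plane]

def downsample_hv(plane):
    """Downsample horizontally, then average vertical row pairs (as in 4:2:0)."""
    h = downsample_h(plane)
    return [[(h[r][c] + h[r + 1][c]) // 2 for c in range(len(h[0]))]
            for r in range(0, len(h), 2)]

u = [[10, 20, 30, 40],
     [50, 60, 70, 80]]  # hypothetical chroma plane
print(downsample_h(u))   # [[15, 35], [55, 75]]
print(downsample_hv(u))  # [[35, 55]]
```

Halving the chroma resolution in one direction (4:2:2) or both (4:2:0) shrinks the two chrominance surfaces by a factor of two or four while leaving the luminance surface untouched.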
  • the resulting YUV 4:2:2 decoded video signals are provided to a third sub-stage 21 where the YUV 4:2:2 video signals and the subpicture signals are blended together in an alpha blend process 32 (or image compositing process) to produce a video signal having a translucent overlay.
  • the blended video signal is converted from YUV 4:2:2 to YUV 4:4:4 (not shown), and then provided to a YUV-to-RGB conversion process 34 , in which the blended digital video signal is converted from a YUV 4:4:4 format to a (red-green-blue) RGB format, which is compatible with a cathode ray tube (CRT) display or other display.
  • An image scaling process 36 may then be performed to scale the image to a particular size for display.
  • the RGB signal may be converted to an analog signal if required by the display or receiving device.
  • the scaled RGB signal is then provided to a display or provided to other devices for recording, etc.
  • MPEG-2 motion compensation sub-stage 28 will be briefly discussed.
  • MPEG-2 video performs image compression using motion compensation and motion estimation. Since motion video is a sequence of still pictures or frames, many of which are very similar, each picture can be compared to the pictures adjacent in time.
  • the MPEG encoding process breaks each picture into regions, called macroblocks, then hunts around in neighboring pictures for similar blocks. Then instead of storing the entire block, the system stores a much smaller pointer called a motion vector describing how far the block has moved (or didn't move) between the pictures. In this manner, one block or even a large group of blocks that move together can be efficiently compressed.
  • I frames: Intra pictures
  • P frames: Predicted pictures
  • a third type of frame is a bidirectional picture (B frame), where the system looks forward and backward to match blocks to the closest I frame and/or P frame. B frames do not function as reference frames.
  • the processing stages/substages associated with DVD processing pipeline 10 tend to be extremely compute intensive.
  • the MPEG-2 decode stages, including the motion compensation 28 tend to be the most compute intensive stages.
  • An important consideration for PC manufacturers in providing DVD capabilities is cost.
  • the processor typically executes software to perform some if not all of the DVD processing. While this may be relatively inexpensive because no specialized DVD hardware is necessary, such a solution can overburden the processor and result in a “jerky” frame rate or dropped frames, which are very noticeable and generally considered unacceptable.
  • one or more functions in the DVD pipeline can be performed in hardware to provide increased performance. As described below in detail, several new techniques are used to decrease hardware complexity and cost while maintaining adequate DVD quality and performance.
  • FIG. 2 is a block diagram illustrating an example personal computer (PC) system. Included within such system may be a processor 112 (e.g., an Intel® Celeron® processor) connected to a system bus 114 . A chipset 110 is also connected to system bus 114 . Although only one processor 112 is shown, multiple processors may be connected to system bus 114 . In an example embodiment, the chipset 110 may be a highly-integrated three-chip solution including a graphics and memory controller hub (GMCH) 120 , an input/output (I/O) controller hub (ICH) 130 and a firmware hub (FWH) 140 .
  • GMCH: graphics and memory controller hub
  • ICH: input/output (I/O) controller hub
  • FWH: firmware hub
  • the GMCH 120 provides graphics and video functions and interfaces one or more memory devices to the system bus 114 .
  • the GMCH 120 may include a memory controller as well as a graphics controller (which in turn may include a 3-dimensional (3D) engine, a 2-dimensional (2D) engine, and a video engine).
  • GMCH 120 may be interconnected to any of a system memory 150 , a local display memory 160 , a display 170 (e.g., a computer monitor) and to a television (TV) via an encoder and a digital video output signal.
  • GMCH 120 may be, for example, an Intel® 82810 or 82810-DC100 chip.
  • the GMCH 120 also operates as a bridge or interface for communications or signals sent between the processor 112 and one or more I/O devices which may be connected to ICH 140 . As shown in FIG. 2, the GMCH 120 includes an integrated graphics controller and memory controller. However, the graphics controller and memory controller may be provided as separate components.
  • ICH 130 interfaces one or more I/O devices to GMCH 120 .
  • FWH 140 is connected to the ICH 130 and provides firmware for additional system control.
  • the ICH 130 may be for example an Intel® 82801 chip and the FWH 140 may be for example an Intel® 82802 chip.
  • the ICH 130 may be connected to a variety of I/O devices and the like, such as: a Peripheral Component Interconnect (PCI) bus 180 (PCI Local Bus Specification Revision 2.2) which may have one or more I/O devices connected to PCI slots 192 , an Industry Standard Architecture (ISA) bus option 194 and a local area network (LAN) option 196 ; a Super I/O chip 190 for connection to a mouse, keyboard and other peripheral devices (not shown); an audio coder/decoder (Codec) and modem Codec; a plurality of Universal Serial Bus (USB) ports (USB Specification, Revision 1.0); and a plurality of Ultra/66 AT Attachment (ATA) 2 ports (X3T9.2 948D specification; commonly also known as Integrated Drive Electronics (IDE) ports) for receiving one or more magnetic hard disk drives or other I/O devices.
  • PCI: Peripheral Component Interconnect
  • ISA: Industry Standard Architecture
  • LAN: local area network
  • One or more speakers are typically connected to the computer system for outputting sounds or audio information (speech, music, etc.).
  • a compact disc (CD) player or preferably a Digital Video Disc (DVD) player is connected to the ICH 130 via one of the I/O ports (e.g., IDE ports, USB ports, PCI slots).
  • the DVD player uses information encoded on a DVD disc to provide digital audio and video data streams and other information to allow the computer system to display and output a movie or other multimedia (e.g., audio and video) presentation.
  • the discussion of the present invention will also include the conversion of digital video data from the packed YUV 4:2:2 format to the planar YUV 4:2:0 format, which has application, for example, in DVD playback. It should be noted that for the packed YUV 4:2:2 format, to which the following discussion pertains (although the present invention is not limited thereto), data is written in double words which contain two pixels. A double word is 32 bits, since a word is 16 bits.
  • An embodiment of the present invention utilizes multiple passes, one pass for all values of each of the respective YUV components, to thereby provide a low-cost conversion method for planar format (e.g., 4:2:0) to packed format (e.g., 4:2:2) having reduced overhead requirements. That is, by utilizing multiple passes, an embodiment of the present invention requires less extensive hardware than a single pass approach which would require that three separate streams of data (one for each of the YUV components, respectively) be input and processed in parallel. The single pass approach would require three separate circuits for addressing memory buffers, three separate circuits for routing or inputting the respective data streams, and three sets of temporary buffers for buffering the data during processing.
  • Planar 4:2:0 YUV color space data is stored in a planar format, that is, in three locations, or surfaces, of memory, as shown in FIG. 3B. Therefore, in order to convert YUV color space from planar YUV 4:2:0 format to packed YUV 4:2:2 format, it has been necessary to read all three surfaces, for the Y, U and V components respectively, convert the data and write the data to a double word of the converted packed YUV 4:2:2 format. As set forth above, such processing incurs high overhead, in terms of address generation capabilities, buffering capacities, and memory streams. Furthermore, for the conversion of packed YUV 4:2:2 format color space data, shown in FIG. 4, to the planar YUV 4:2:0 format color space data, the converted data is written to the three separate memory locations utilizing multiple passes, thereby requiring additional buffering capabilities and memory streams.
  • planar YUV 4:2:0 format source data may be read as texture stream data, and then three separate passes, including one pass for all values of each of the respective Y, U and V components, and byte masking, or logic packing, are executed in order to convert the planar YUV 4:2:0 format source data into packed YUV 4:2:2 format destination data, thus providing color space conversion at a lower cost with reduced overhead.
  • Byte masking enables respective ones of the YUV components to be selectively written to memory, in converted form, without corrupting previously written component values.
  • the individual YUV values of the planar YUV 4:2:0 format source data are scaled to the resolution of the destination surface.
  • Such scaling can be accomplished either by upsampling or downsampling the source data with a bilinear filter in both the vertical and horizontal directions, and the UV data is further adjusted by a half-pixel location to align with the sampling point of the packed YUV 4:2:2 format.
  • Y data can be scaled without such a half-pixel adjustment.
  • This process utilizes, for example, the example interpolation formula or conversion formula χ1·α + (1 − α)·χ0 to determine the interpolated upsampled values of UV.
  • the adjustments of the sampling points may be, but are not limited to, one half-pixel position.
  • the value “α” indicates the offset of the sample in relation to the converted format, χ1 is the value of the lower component and χ0 is the value of the higher component.
  • For example, to interpolate the new values of UV, represented by U′, reference is made to the example of FIG. 6A. To arrive at the new value of UV at U′10 in the 4:2:2 sampling grid 900 , an interpolation utilizing the example interpolation or conversion formula χ1·α + (1 − α)·χ0 must be made with χ1 representing the lower UV value U20 in the 4:2:0 sampling grid 800 and χ0 representing the upper UV value U00 at bit address Y00 in the 4:2:0 sampling grid 800 .
  • the value “α” in this example is 0.25, since the sampling point for the UV value represented by U′10 in the 4:2:2 sampling grid 900 is offset downward by 0.25 pixel from the upper UV value represented by U00 at bit address Y00 in the 4:2:0 sampling grid 800 .
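A minimal sketch of this interpolation, using the FIG. 6A case with hypothetical sample values (the patent gives no numeric sample values, so u20 and u00 below are made up for illustration):

```python
def interpolate(chi1, chi0, alpha):
    """chi1*alpha + (1 - alpha)*chi0: chi1 is the lower component value,
    chi0 the upper, alpha the offset of the new sampling point from the
    upper sample."""
    return chi1 * alpha + (1 - alpha) * chi0

# U'10 lies 0.25 pixel below U00, so alpha = 0.25:
# the result is weighted 3:1 toward the nearer (upper) sample U00.
u20, u00 = 40, 80  # hypothetical lower and upper chroma values
print(interpolate(u20, u00, 0.25))  # 70.0
```

With alpha = 0 the result is exactly the upper sample, and with alpha = 1 exactly the lower one, which is the behavior the half-pixel adjustment relies on.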
  • an effect of scaling the YUV values by the bilinear filter is to produce data in the resolution of the destination surface whereby each 32-bit (four-byte) double word is assigned two luminance (Y) samples and two chrominance samples (one U and one V), as four bytes in a YUYV configuration.
  • Y: luminance
  • UV: chrominance samples
  • byte masks are provided to the memory to perform the logic packing of a double word by which the components are written to memory.
  • Logic packing utilizes byte masks which are logic configurations for selectively writing the respective YUV component values to the correspondingly appropriate bytes of the memory to thereby avoid corrupting the other component values.
  • Y components are written to memory using a “1010” byte mask, and therefore Y values will be written into the first and third bytes of the 4:2:2 double word while protecting the contents of the second and fourth bytes from overwriting; then the U components are written to memory using, for example, a “0100” byte mask so as not to corrupt the written Y components, and therefore U values will be written into the second byte of the 4:2:2 double word while protecting the contents of the first, third and fourth bytes from overwriting; and lastly, the V components are written to the destination using, for example, a “0001” byte mask so as not to corrupt either of the written Y or U components, and therefore V values will be written into the fourth byte of the 4:2:2 double word while protecting the contents of the first, second and third bytes from overwriting. It should be noted that the individual passes for all of the values of the respective YUV components may be in any order, provided the byte masks are applied as described above so that the previously written components are not corrupted.
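The three masked passes can be simulated with a short sketch (illustrative only; in the patent the masks are applied by the memory interface when a double word is written, and the sample values below are hypothetical):

```python
def masked_write(dword, values, mask):
    """Write values into the bytes of a 4-byte double word selected by
    the '1' bits of mask, leaving the '0' (protected) bytes untouched."""
    out = list(dword)
    it = iter(values)
    for i, bit in enumerate(mask):
        if bit == "1":
            out[i] = next(it)
    return out

dword = [0, 0, 0, 0]
dword = masked_write(dword, [16, 17], "1010")  # Y pass: first and third bytes
dword = masked_write(dword, [100], "0100")     # U pass: second byte only
dword = masked_write(dword, [200], "0001")     # V pass: fourth byte only
print(dword)  # [16, 100, 17, 200] -- a packed YUYV double word
```

Because each pass only touches its own masked bytes, the three passes commute: running them in any order yields the same YUYV double word.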
  • FIG. 5 shows a flow chart for converting planar YUV 4:2:0 format data to packed YUV 4:2:2 format data in accordance with the sample 4:2:0 and 4:2:2 sampling grids of FIG. 6A ( 800 and 900 , respectively).
  • An example of the byte masks used for such conversion is shown in FIG. 6C, with the masks being, for example, “0100” for the U pass, “0001” for the V pass, and “1010” for the Y pass.
  • a Y pass is executed as a bilinear filter reads the incoming Y surface data ( 510 ). Then, the read Y data is interpolated ( 525 ) by utilizing the interpolation formula or conversion formula χ1·α + (1 − α)·χ0 described above to perform a combination of functions including upsampling or downsampling the data to the resolution of the destination surface ( 520 ), and sampling the Y data to the packed YUV 4:2:2 format resolution ( 530 ).
  • the interpolation formula or conversion formula χ1·α + (1 − α)·χ0 described above is utilized to determine the interpolated Y value appropriate for the packed YUV 4:2:2 format ( 525 ).
  • In the present example, to which the present invention is not limited, no such sampling of the Y is necessary, since the Y values, in this simple example, are the same for both the planar YUV 4:2:0 format data and the packed YUV 4:2:2 format data.
  • Y values are written to an internal buffer in the form of a packed YUV 4:2:2 format grid ( 900 ), with the even vertical columns thereof being written into the third byte of a double word in the internal buffer ( 550 ) and the odd vertical columns thereof being written into the first byte of the double word of the internal buffer ( 560 ).
  • the internal buffer is determined to be full ( 570 )
  • the Y data is written from the internal buffer to memory utilizing the mask “1010” ( 580 ), shown as “byte mask Y” in FIG. 6C.
  • the writing of Y data to the memory is shown in block 910 of FIG. 6B. If the internal buffer is determined not to be full, any further 4:2:0 Y samples undergo the same processing described above ( 590 ).
  • a U pass is executed as a bilinear filter reads the incoming U surface data ( 600 ). Then, the read U data is interpolated ( 615 ) by utilizing the interpolation formula or conversion formula χ1·α + (1 − α)·χ0 to perform the combination of upsampling or downsampling the data to the resolution of the destination surface ( 610 ), and sampling the U data to the packed YUV 4:2:2 format resolution ( 620 ).
  • the interpolation formula or conversion formula χ1·α + (1 − α)·χ0 described above is utilized to determine the interpolated U value appropriate for the packed YUV 4:2:2 format ( 615 ).
  • U values are written to the second byte of the double word in the internal buffer ( 630 ).
  • the U data is written from the internal buffer to memory utilizing the mask “0100” ( 640 ), shown as “byte mask U” in FIG. 6C.
  • the writing of U data to the memory is shown in block 920 of FIG. 6B. Any further 4:2:0 U samples undergo the same processing described above (650).
  • a V pass is executed as a bilinear filter reads the incoming V surface data ( 660 ). Then, the read V data is interpolated ( 675 ) by utilizing the interpolation formula or conversion formula χ1·α + (1 − α)·χ0 to perform the combination of upsampling or downsampling the data to the resolution of the destination surface ( 670 ), and sampling the V data to the packed YUV 4:2:2 format resolution ( 680 ).
  • the interpolation formula or conversion formula χ1·α + (1 − α)·χ0 is utilized to determine the interpolated V value appropriate for the packed YUV 4:2:2 format ( 675 ).
  • V values are written to the fourth byte of the double word in the internal buffer ( 690 ).
  • the V data is written from the internal buffer to memory utilizing the mask “0001” ( 700 ), shown as “byte mask V” in FIG. 6C.
  • the writing of V data to the memory is shown in block 930 of FIG. 6B. Any further 4:2:0 V samples undergo the same processing described above (710). Otherwise, the processing ends ( 720 ).
  • the result of the YUV passes is shown in the conversion from the 4:2:0 sampling grid 800 to the 4:2:2 sampling grid 900 shown in FIG. 6A, which is a memory composite of Y-grid 910 , U-grid 920 and V-grid 930 .
  • Such result may be achieved using multiple passes through a single data stream, thus reducing costs and overhead, relative to previous conversion methods and circuits.
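Under simplifying assumptions (nearest-neighbor vertical chroma replication in place of the half-pixel bilinear adjustment, and the byte-mask writes modeled as direct indexed stores), the three-pass planar-to-packed conversion described above might be sketched as:

```python
def planar420_to_packed422(y, u, v):
    """Pack planar 4:2:0 surfaces into rows of 4-byte YUYV double words.
    Each chroma sample covers a 2x2 block of luma samples."""
    h, w = len(y), len(y[0])
    dst = [[0] * (w * 2) for _ in range(h)]  # 2 bytes per pixel

    # Y pass: even columns -> byte 0, odd columns -> byte 2 of each dword
    # (a common YUYV byte order; the patent's figure may order bytes differently)
    for r in range(h):
        for c in range(w):
            dst[r][(c // 2) * 4 + (c % 2) * 2] = y[r][c]

    # U pass (byte 1) and V pass (byte 3): chroma rows reused vertically
    for r in range(h):
        for c in range(w // 2):
            dst[r][c * 4 + 1] = u[r // 2][c]
            dst[r][c * 4 + 3] = v[r // 2][c]
    return dst

y = [[1, 2], [3, 4]]  # 2x2 luma surface (hypothetical values)
u, v = [[9]], [[8]]   # 1x1 chroma surfaces
print(planar420_to_packed422(y, u, v))  # [[1, 9, 2, 8], [3, 9, 4, 8]]
```

Note that, as in the patent, the Y, U and V surfaces are each read in a separate pass and each pass writes only its own bytes of the destination double words.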
  • the packed YUV 4:2:2 format data shown in FIG. 8 may be read as texture stream data, and then three separate passes, including one pass for all values of each of the respective Y, U and V components, are executed in order to convert the packed YUV 4:2:2 source data into planar YUV 4:2:0 format data, thus providing color space conversion at a lower cost with reduced overhead.
  • FIG. 7 is a flow chart for the conversion of packed YUV 4:2:2 format data to planar YUV 4:2:0 format data.
  • the YUV 4:2:2 format data is read ( 1010 ), and the read Y data is interpolated ( 1025 ) by utilizing the same interpolation formula or conversion formula χ1·α + (1 − α)·χ0 described above to perform a combination of functions including upsampling or downsampling the data to the resolution of the destination surface ( 1020 ), and sampling the Y data to the planar YUV 4:2:0 format resolution ( 1030 ).
  • the interpolation formula or conversion formula χ1·α + (1 − α)·χ0 is utilized to determine the interpolated Y value appropriate for the planar YUV 4:2:0 format ( 1025 ).
  • In the present example, to which the present invention is not limited, no such sampling of the Y is necessary, since the Y values are the same for both the packed YUV 4:2:2 format data and the planar YUV 4:2:0 format data.
  • Y values are written to a double word in an internal buffer in the form of a planar YUV 4:2:0 format. When all Y values have been written to the internal buffer, the buffer is then written to memory in planar YUV 4:2:0 format. Otherwise, the remaining Y values undergo the same processing described above ( 1070 ).
  • a U pass is executed as a bilinear filter reads the incoming U surface data ( 1080 ). Then, the read U data is interpolated ( 1095 ) by utilizing the interpolation formula or conversion formula χ1·α + (1 − α)·χ0 to perform the combination of upsampling or downsampling the data to the resolution of the destination surface ( 1090 ), and sampling the U data to the planar YUV 4:2:0 format resolution ( 2000 ).
  • the interpolation formula or conversion formula χ1·α + (1 − α)·χ0 is utilized to determine the interpolated U value appropriate for the planar YUV 4:2:0 format ( 1095 ). Then, U values are written to the double word in the internal buffer ( 2010 ). When the internal buffer is determined to be full ( 2020 ), the U data is written from the internal buffer to memory ( 2030 ). The writing of YUV data to the memory is shown in FIG. 8. Any further U samples undergo the same processing described above ( 2040 ).
  • a V pass is executed as a bilinear filter reads the incoming V surface data ( 2050 ). Then, the read V data is interpolated ( 2065 ) by utilizing the interpolation formula or conversion formula χ1·α + (1 − α)·χ0 to perform the combination of upsampling or downsampling the data to the resolution of the destination surface ( 2060 ), and sampling the V data to the planar YUV 4:2:0 format resolution ( 2070 ).
  • the interpolation formula or conversion formula χ1·α + (1 − α)·χ0 is utilized to determine the interpolated V value appropriate for the planar YUV 4:2:0 format ( 2065 ). Then, V values are written to the double word in the internal buffer ( 2080 ). The writing of YUV data to the memory is shown in FIG. 8. Any further V samples undergo the same processing described above ( 3010 ). Otherwise, the processing ends ( 3020 ).
  • FIG. 8 illustrates three separate passes, including a Y-pass, a U-pass and a V-pass.
  • the bilinear filter performs the sampling interpolations according to the interpolation formula or conversion formula χ1·α + (1 − α)·χ0 for each of the YUV components, and then performs a pass for all values of each of the respective YUV components.
  • the converted planar YUV 4:2:0 format data is written in three separate surfaces, for the YUV components, respectively.
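A corresponding sketch of this reverse direction, again with plain row-pair averaging standing in for the patent's bilinear filter and with the three output surfaces returned separately:

```python
def packed422_to_planar420(rows):
    """Unpack rows of YUYV double words into planar Y, U and V surfaces;
    chroma is averaged over vertical row pairs (a stand-in for the
    bilinear filter described above)."""
    h = len(rows)
    w = len(rows[0]) // 2  # pixels per row (2 bytes per pixel)
    y = [[row[c * 2] for c in range(w)] for row in rows]
    u = [[(rows[r][c * 4 + 1] + rows[r + 1][c * 4 + 1]) // 2
          for c in range(w // 2)] for r in range(0, h, 2)]
    v = [[(rows[r][c * 4 + 3] + rows[r + 1][c * 4 + 3]) // 2
          for c in range(w // 2)] for r in range(0, h, 2)]
    return y, u, v

packed = [[1, 9, 2, 8], [3, 11, 4, 10]]  # hypothetical YUYV rows
print(packed422_to_planar420(packed))
# ([[1, 2], [3, 4]], [[10]], [[9]])
```

As in the patent, the Y, U and V results land in three separate surfaces, one per pass, rather than in a single interleaved stream.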

Abstract

A method and circuit are provided for color space conversion of Y (luminance) and UV (chrominance) components from a planar YUV 4:2:0 format to an interleaved, or packed, YUV 4:2:2 format, and from an interleaved, or packed, YUV 4:2:2 format to a planar YUV 4:2:0 format. The method for both conversions includes reading source data, interpolating the sampled YUV component values, and writing the converted YUV component values in three passes, one pass for all values of each of the respective YUV components.

Description

    FIELD
  • The present invention relates to the field of YUV color space conversion from planar YUV 4:2:0 format to packed YUV 4:2:2 format and from packed YUV 4:2:2 format to planar YUV 4:2:0 format. [0001]
  • BACKGROUND
  • Conversion of digital data from planar YUV 4:2:0 format to packed YUV 4:2:2 format has been performed for graphics generation and digital video processing since packed YUV 4:2:2 format provides a more detailed, richer display. Conversely, digital playback has typically been performed in a planar YUV 4:2:0 format which is a more compact format requiring less bandwidth. [0002]
  • 4:2:0 color space data is stored in a planar format, that is, in three contiguous locations, or surfaces, of memory. Therefore, in order to convert YUV color space from 4:2:0 to 4:2:2, it has been necessary to read all three surfaces, for the Y, U and V components respectively, convert the data and write the data in the converted format. Such processing incurs high overhead, in terms of address generation capabilities, buffering capacities, and data paths/streams to and from a memory. Furthermore, for the conversion of YUV 4:2:2 packed format data to the YUV 4:2:0 planar format, the converted data is required to be written to three separate memory locations, which also requires additional buffering capabilities and data paths/streams to and from the memory. [0003]
  • Therefore, there exists a need for a simpler and less expensive technique for performing color conversions. [0004]
  • SUMMARY
  • According to an embodiment of the present invention, a method and circuit are provided for color space conversion of YUV (luminance and chrominance) components, including the steps of reading source data, sampling said UV samples in a vertical direction, performing a pass for each of said Y, U and V components, and writing said YUV components in the converted format. [0005]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The foregoing and a better understanding of the present invention will become apparent from the following detailed description of example embodiments and the claims when read in connection with the accompanying drawings, all forming a part of the disclosure of this invention. While the foregoing and following written disclosure focuses on disclosing example embodiments of this invention, it should be clearly understood that the same is by way of illustration and example only and the invention is not limited thereto. The spirit and scope of the present invention are limited only by the terms of the appended claims. [0006]
  • The following represents brief descriptions of the drawings, wherein: [0007]
  • FIG. 1 is an example block diagram illustrating an example of a DVD data stream processing pipeline; [0008]
  • FIG. 2 is an example block diagram illustrating an example of a personal computer (PC) system according to an example embodiment of the present invention; [0009]
  • FIG. 3A shows an example sampling grid of 4:2:0 color space data; [0010]
  • FIG. 3B shows an example of planar YUV 4:2:0 format color space data as it is stored in a planar format; [0011]
  • FIG. 4 shows an example sampling grid of 4:2:2 YUV color space data; [0012]
  • FIG. 5 shows an example flowchart of color space conversion from planar YUV 4:2:0 format to packed YUV 4:2:2 format according to an example embodiment of the present invention; [0013]
  • FIG. 6A shows example sampling grids relating to the example color space conversion from planar YUV 4:2:0 format to packed YUV 4:2:2 format shown in the example embodiment of the invention of FIG. 5; [0014]
  • FIG. 6B shows example YUV data, respectively, as it is written to memory, after the respective passes, in accordance with the example of FIG. 6A; [0015]
  • FIG. 7 shows an example flowchart of color space conversion from packed YUV 4:2:2 format to planar YUV 4:2:0 format according to an example embodiment of the present invention; and [0016]
  • FIG. 8 shows an example conversion of packed YUV 4:2:2 data to planar YUV 4:2:0 format data, in accordance with an example embodiment of the present invention. [0017]
  • DETAILED DESCRIPTION
  • Before beginning a detailed description of the invention, it should be noted that, when appropriate, like reference numerals and characters may be used to designate identical, corresponding or similar components in differing figure drawings. Further, in the detailed description to follow, example embodiments and values may be given, although the present invention is not limited thereto. [0018]
  • The emergence of Digital Versatile Disk (DVD) (also known as Digital Video Disk) has allowed personal computer (PC) manufacturers to provide a more effective multimedia PC for delivering video and audio information to users. It also presents a significant technical challenge in the highly price-competitive PC market to provide PCs capable of providing high performance video and audio while maintaining a low cost. [0019]
  • A DVD data stream can contain several types of packetized streams, including video, audio, subpicture, presentation and control information, and data search information (DSI). DVD supports up to 32 subpicture streams that overlay the video to provide subtitles, captions, karaoke, lyrics, menus, simple animation and other graphical overlays. According to the DVD specification, the subpictures are intended to be blended with the video for a translucent overlay in the final digital video signal. [0020]
  • FIG. 1 is a block diagram illustrating a typical DVD data stream processing pipeline 10. The video and audio streams are compressed according to the Moving Pictures Experts Group MPEG-2 standard. Additional information regarding the DVD processing can be found in the DVD Specification, Version 1.0, August, 1996; and additional information regarding the MPEG-2 standard can be found, for example, in MPEG Video Standard: ISO/IEC 13818-2: Information Technology—Generic Coding of Moving Pictures and Associated Audio Information: Video (1996) (a.k.a. ITU-T Rec. H-262 (1996)). A discussion of a typical DVD data stream processing is also provided in published PCT application No. WO 99/23831. [0021]
  • Referring to FIG. 1 again, in data stream parsing stage 12, an incoming DVD data stream is parsed or split (i.e., demultiplexed) into multiple independent streams, including a subpicture stream 13, an MPEG-2 video stream 15 and an MPEG-2 audio stream 17. The MPEG-2 video stream 15 and the subpicture stream 13 are provided to a video processing stage 14. Similarly, the MPEG-2 audio stream is provided to an audio processing stage 16. [0022]
  • Video processing stage 14, as depicted in FIG. 1, may include three sub-stages (sub-stages 18, 20 and 21). The first sub-stage is a DVD subpicture decode stage 18 in which the subpicture stream is decoded into a two-dimensional array of subpicture values. Each subpicture value includes an index into a subpicture palette and a corresponding alpha value. The indices identify Y, U and V values of the subpicture pixels. The alpha values are used for blending or image compositing the subpicture signal and the video signal. As a result, the subpicture data may be considered as being provided in a YUV 4:4:4 format (the palette contains the YUV 4:4:4 values or color codes for the subpicture signal). YUV is a color-difference video signal containing one luminance value (Y) or component and two chrominance values (U, V) or components, and is also commonly referred to as YCrCb (where Cr and Cb are chrominance values corresponding to U and V). The terms YUV and YCrCb can be used interchangeably. YUV 4:4:4 is a component digital video format in which each of the luminance and chrominance values are sampled equally (e.g., one Y value, one U value and one V value per pixel). [0023]
  • The second sub-stage of video processing stage 14 is an MPEG-2 video decode sub-stage 20 in which the MPEG-2 video stream is decoded, decompressed and converted to a YUV 4:2:2 digital video signal. The incoming DVD video signals in the DVD data stream are decoded into a planar YUV 4:2:0 format. In accordance with the MPEG-2 specification, MPEG-2 decode sub-stage 20 then conducts a variable length decode (VLD) 22, an inverse quantization (IQUANT) 24, an Inverse Discrete Cosine Transform (IDCT) 26 and motion compensation 28. [0024]
  • As noted, the incoming DVD video signals in the DVD data stream are decoded into a planar YUV 4:2:0 format. Also, planar YUV 4:2:0 format is the digital component format used to perform the MPEG-2 motion compensation stage 28. However, a subsequent alpha-blending stage 32 is typically performed in packed YUV 4:2:2 format. Therefore, after motion compensation 28, a conversion stage 30 is used to convert the digital video data from a planar YUV 4:2:0 format to an interleaved (or packed) YUV 4:2:2 format. [0025]
  • The interleaved (or packed) format is where the Y, U and V samples are provided or stored in an interleaved arrangement (e.g., YUVYUVYUV . . . ). The planar format is where a group of Y samples (e.g., for a frame) are provided or stored together (typically contiguously) in a surface or set of buffers, a group of U samples are provided or stored together (typically contiguously) in a second surface or a second set of memory buffers, and the V samples are stored together (typically contiguously) in a third surface or set of buffers. Thus, in the planar format, as shown in FIG. 3B, the sets of Y, U and V samples are stored in separate surfaces (or separate sets of buffers or separate regions in memory). [0026]
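The planar-versus-packed distinction above can be illustrated with a minimal sketch. Python is used purely for illustration; the function name, byte values and surface sizes are hypothetical, while the YUYV byte order follows the packed 4:2:2 description in this disclosure:

```python
# Hypothetical illustration of planar vs. packed (interleaved) storage.
# In packed YUV 4:2:2, each 32-bit double word holds two pixels as
# Y0 U Y1 V, with the U/V pair shared by both pixels.

def pack_yuv422(y_plane, u_plane, v_plane):
    """Interleave separate Y, U and V surfaces into a packed YUYV stream."""
    packed = []
    for y0, y1, u, v in zip(y_plane[0::2], y_plane[1::2], u_plane, v_plane):
        packed.extend([y0, u, y1, v])  # one 4-byte double word
    return packed

# Planar surfaces for four pixels (one UV pair per pixel pair):
y_surface = [16, 32, 48, 64]
u_surface = [100, 110]
v_surface = [200, 210]

print(pack_yuv422(y_surface, u_surface, v_surface))
# -> [16, 100, 32, 200, 48, 110, 64, 210]
```

Note that the planar source keeps each component contiguous (three surfaces), while the packed result interleaves them into double words.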
  • As shown in the YUV 4:2:2 sampling grid of FIG. 4, in the packed YUV 4:2:2 format, there is one pair of chrominance samples (UV, represented by O in the figure) for two luminance samples (Y, represented by X in the figure), that is, chrominance samples U, V are shared across two pixels. This is done by a 2:1 horizontal downsampling of the YUV 4:4:4 chrominance samples. In YUV 4:2:0, there is both a horizontal 2:1 downsampling and a vertical 2:1 downsampling of the chrominance samples (UV). Thus, as shown in the YUV 4:2:0 sampling grid of FIG. 3A, in the planar YUV 4:2:0 format, one pair of chrominance samples (UV) are shared for four pixels (while each pixel includes its own luminance sample, Y). [0027]
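The sampling ratios described above determine how much data each format carries per pixel. Assuming 8 bits (one byte) per sample, the average byte count can be worked out as follows; this is a sketch and the helper name is hypothetical:

```python
# Average bytes of YUV data per pixel under each chroma-subsampling scheme,
# assuming 8-bit samples. Each pixel carries its own Y sample, while one
# UV pair (2 samples) is shared across n_pixels pixels.

def bytes_per_pixel(y_per_pixel, uv_pairs, n_pixels):
    return y_per_pixel + 2 * uv_pairs / n_pixels

print(bytes_per_pixel(1, 1, 1))  # 4:4:4 -> 3.0 bytes/pixel (no sharing)
print(bytes_per_pixel(1, 1, 2))  # 4:2:2 -> 2.0 (UV shared by 2 pixels)
print(bytes_per_pixel(1, 1, 4))  # 4:2:0 -> 1.5 (UV shared by 4 pixels)
```

The halving from 3.0 to 1.5 bytes per pixel is why the planar YUV 4:2:0 format is described as the more compact, lower-bandwidth format.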
  • Since the human eye is more sensitive to brightness than color, rather than sampling the Y, U and V samples equally (as in YUV 4:4:4), a video frame can be compressed without a significant perceived loss in quality by compressing only the color or chrominance information (e.g., resulting in a packed YUV 4:2:2 format, or even a planar YUV 4:2:0 format). As a result, compression can be achieved by downsampling the chrominance samples horizontally (for a packed YUV 4:2:2 format) or by downsampling the chrominance samples both horizontally and vertically (for the planar YUV 4:2:0 format). Referring to FIG. 1 again, the resulting YUV 4:2:2 decoded video signals are provided to a third sub-stage 21 where the YUV 4:2:2 video signals and the subpicture signals are blended together in an alpha blend process 32 (or image compositing process) to produce a video signal having a translucent overlay. Next, the blended video signal is converted from YUV 4:2:2 to YUV 4:4:4 (not shown), and then provided to a YUV-to-RGB conversion process 34, in which the blended digital video signal is converted from a YUV 4:4:4 format to a red-green-blue (RGB) format, which is compatible with a cathode ray tube (CRT) display or other display. An image scaling process 36 may then be performed to scale the image to a particular size for display. The RGB signal may be converted to an analog signal if required by the display or receiving device. The scaled RGB signal is then provided to a display or provided to other devices for recording, etc. [0028]
  • The MPEG-2 motion compensation sub-stage 28 will be briefly discussed. MPEG-2 video performs image compression using motion compensation and motion estimation. Since motion video is a sequence of still pictures or frames, many of which are very similar, each picture can be compared to the pictures adjacent in time. The MPEG encoding process breaks each picture into regions, called macroblocks, then hunts around in neighboring pictures for similar blocks. Then, instead of storing the entire block, the system stores a much smaller pointer called a motion vector describing how far the block has moved (or didn't move) between the pictures. In this manner, one block or even a large group of blocks that move together can be efficiently compressed. [0029]
  • MPEG-2 uses three kinds of pictures. Intra pictures (I frames) are pictures in which the entire picture is compressed and stored with DCT quantization. This I frame creates a reference frame from which successive pictures are built. Predicted pictures (P frames) contain motion vectors describing the difference from the closest I frame or P frame. If the frame has changed slightly in intensity (luminance) or color (chrominance), then this difference is also encoded. If something new appears which does not match previous blocks, a new block is stored in the same way an I frame is stored. Thus, P frames also operate as reference frames for building additional frames. A third type of frame is a bidirectional picture (B frame), where the system looks forward and backward to match blocks to the closest I frame and/or P frame. B frames do not function as reference frames. [0030]
  • The processing stages/substages associated with DVD processing pipeline 10 tend to be extremely compute intensive. In particular, the MPEG-2 decode stages, including the motion compensation 28, tend to be the most compute intensive stages. An important consideration for PC manufacturers in providing DVD capabilities is cost. Because the DVD processes are compute intensive, there is a need to provide cost-effective solutions that reduce the costs associated with the various stages and substages of the DVD processing pipeline. In a computer system, the processor typically executes software to perform some if not all of the DVD processing. While this may be relatively inexpensive because no specialized DVD hardware is necessary, such a solution can overburden the processor and result in a “jerky” frame rate or dropped frames which are very noticeable and generally considered unacceptable. As described below, according to an embodiment of the invention, one or more functions in the DVD pipeline can be performed in hardware to provide increased performance. As described below in detail, several new techniques are used to decrease hardware complexity and cost while maintaining adequate DVD quality and performance. [0031]
  • Although example embodiments of the present invention will be described using an example system block diagram in an example personal computer (PC) system or environment, practice of the invention is not limited thereto, i.e., the invention may be practiced with other types of systems, and in other types of environments. [0032]
  • Referring to the figures in which like numerals indicate like elements, FIG. 2 is a block diagram illustrating an example personal computer (PC) system. Included within such system may be a processor 112 (e.g., an Intel® Celeron® processor) connected to a system bus 114. A chipset 110 is also connected to system bus 114. Although only one processor 112 is shown, multiple processors may be connected to system bus 114. In an example embodiment, the chipset 110 may be a highly-integrated three-chip solution including a graphics and memory controller hub (GMCH) 120, an input/output (I/O) controller hub (ICH) 130 and a firmware hub (FWH) 140. [0033]
  • The GMCH 120 provides graphics and video functions and interfaces one or more memory devices to the system bus 114. The GMCH 120 may include a memory controller as well as a graphics controller (which in turn may include a 3-dimensional (3D) engine, a 2-dimensional (2D) engine, and a video engine). GMCH 120 may be interconnected to any of a system memory 150, a local display memory 160, a display 170 (e.g., a computer monitor) and to a television (TV) via an encoder and a digital video output signal. GMCH 120 may be, for example, an Intel® 82810 or 82810-DC100 chip. The GMCH 120 also operates as a bridge or interface for communications or signals sent between the processor 112 and one or more I/O devices which may be connected to ICH 130. As shown in FIG. 2, the GMCH 120 includes an integrated graphics controller and memory controller. However, the graphics controller and memory controller may be provided as separate components. [0034]
  • ICH 130 interfaces one or more I/O devices to GMCH 120. FWH 140 is connected to the ICH 130 and provides firmware for additional system control. The ICH 130 may be, for example, an Intel® 82801 chip and the FWH 140 may be, for example, an Intel® 82802 chip. [0035]
  • The ICH 130 may be connected to a variety of I/O devices and the like, such as: a Peripheral Component Interconnect (PCI) bus 180 (PCI Local Bus Specification Revision 2.2) which may have one or more I/O devices connected to PCI slots 192, an Industry Standard Architecture (ISA) bus option 194 and a local area network (LAN) option 196; a Super I/O chip 190 for connection to a mouse, keyboard and other peripheral devices (not shown); an audio coder/decoder (Codec) and modem Codec; a plurality of Universal Serial Bus (USB) ports (USB Specification, Revision 1.0); and a plurality of Ultra/66 AT Attachment (ATA) 2 ports (X3T9.2 948D specification; commonly also known as Integrated Drive Electronics (IDE) ports) for receiving one or more magnetic hard disk drives or other I/O devices. [0036]
  • One or more speakers are typically connected to the computer system for outputting sounds or audio information (speech, music, etc.). According to an embodiment, a compact disc (CD) player or preferably a Digital Video Disc (DVD) player is connected to the ICH 130 via one of the I/O ports (e.g., IDE ports, USB ports, PCI slots). The DVD player uses information encoded on a DVD disc to provide digital audio and video data streams and other information to allow the computer system to display and output a movie or other multimedia (e.g., audio and video) presentation. [0037]
  • Discussion now turns more specifically to the second sub-stage of video processing stage 14 in which the MPEG-2 video stream is decoded, decompressed and converted to a YUV 4:2:2 digital video signal, particularly conversion stage 30 (see FIG. 1) in which the digital video data is converted from a planar YUV 4:2:0 format to a packed YUV 4:2:2 format. Such conversion has application, for example, for video-conferencing display or the display of other network data which is usually transmitted in planar YUV 4:2:0 format but is displayed using a packed YUV 4:2:2 format. Further, the discussion of the present invention will also include the conversion of digital video data from the packed YUV 4:2:2 format to the planar YUV 4:2:0 format, which has application, for example, in DVD playback. It should be noted that, for the packed YUV 4:2:2 format, to which the following discussion pertains (although the present invention is not limited thereto), data is written in double words, each of which contains two pixels. A double word is 32 bits, since a word is 16 bits. [0038]
  • An embodiment of the present invention utilizes multiple passes, one pass for all values of each of the respective YUV components, to thereby provide a low-cost conversion method for planar format (e.g., 4:2:0) to packed format (e.g., 4:2:2) having reduced overhead requirements. That is, by utilizing multiple passes, an embodiment of the present invention requires less extensive hardware than a single pass approach which would require that three separate streams of data (one for each of the YUV components, respectively) be input and processed in parallel. The single pass approach would require three separate circuits for addressing memory buffers, three separate circuits for routing or inputting the respective data streams, and three sets of temporary buffers for buffering the data during processing. [0039]
  • Planar 4:2:0 YUV color space data is stored in a planar format, that is, in three locations, or surfaces, of memory, as shown in FIG. 3B. Therefore, in order to convert YUV color space from planar YUV 4:2:0 format to packed YUV 4:2:2 format, it has been necessary to read all three surfaces, for the Y, U and V components respectively, convert the data and write the data to a double word of the converted packed YUV 4:2:2 format. As set forth above, such processing incurs high overhead, in terms of address generation capabilities, buffering capacities, and memory streams. Furthermore, for the conversion of packed YUV 4:2:2 format color space data, shown in FIG. 4, to the planar YUV 4:2:0 format color space data, the converted data is written to the three separate memory locations utilizing multiple passes, thereby requiring additional buffering capabilities and memory streams. [0040]
  • According to an embodiment of the present invention, for conversion of planar YUV 4:2:0 format data to packed YUV 4:2:2 format data, planar YUV 4:2:0 format source data may be read as texture stream data, and then three separate passes, including one pass for all values of each of the respective Y, U and V components, and byte masking, or logic packing, are executed in order to convert the planar YUV 4:2:0 format source data into packed YUV 4:2:2 format destination data, thus providing color space conversion at a lower cost with reduced overhead. Byte masking enables respective ones of the YUV components to be selectively written to memory, in converted form, without corrupting previously written component values. [0041]
  • In particular, the individual YUV values of the planar YUV 4:2:0 format source data are scaled to the resolution of the destination surface. Such scaling can be accomplished either by upsampling or by downsampling the source data with a bilinear filter in both the vertical and horizontal directions, and the UV data is further adjusted by a half-pixel location to align with the sampling point of the packed YUV 4:2:2 format. Y data can be scaled without such a half-pixel adjustment. This process utilizes, for example, the example interpolation formula or conversion formula {αμ1+(1−α)μ0} to determine the interpolated upsampled values of UV. The adjustments of the sampling points may be, but are not at all limited to, one half pixel position. The value “α” indicates the offset of the sample in relation to the converted format, μ1 is the value of the lower component and μ0 is the value of the upper component. For example, to interpolate the new values of UV, represented by U′, reference is made to the example of FIG. 6A. To arrive at the new value of UV at U′10 in the 4:2:2 sampling grid 900, an interpolation utilizing the example interpolation or conversion formula {αμ1+(1−α)μ0} must be made with μ1 representing the lower UV value U20 in the 4:2:0 sampling grid 800 and μ0 representing the upper UV value U00 at bit address Y00 in the 4:2:0 sampling grid 800. The value “α” in this example is 0.25, since the sampling point for the UV value represented by U′10 in the 4:2:2 sampling grid 900 is offset downward by 0.25 pixel from the upper UV value represented by U00 at bit address Y00 in the 4:2:0 sampling grid 800. Thus, utilizing the interpolation or conversion formula {αμ1+(1−α)μ0}, the new chrominance value UV at U′10 in the converted 4:2:2 sampling grid is interpolated as U′10=(0.25)U20+(1−0.25)U00. [0042]
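The worked U′10 computation above can be checked with a short sketch of the interpolation formula, written as the weighted sum αμ1 + (1 − α)μ0 that the U′10 example implies. The function name and the 8-bit sample values are illustrative assumptions:

```python
# Sketch of the interpolation/conversion formula from the text:
# U'10 = (0.25)U20 + (1 - 0.25)U00.

def interpolate(alpha, mu1, mu0):
    """alpha*mu1 + (1 - alpha)*mu0, where alpha is the offset of the new
    sampling point from the upper sample mu0 toward the lower sample mu1."""
    return alpha * mu1 + (1 - alpha) * mu0

U00 = 120  # upper 4:2:0 chrominance sample (illustrative value)
U20 = 80   # lower 4:2:0 chrominance sample (illustrative value)

# The new 4:2:2 sample lies 0.25 pixel below U00, so it is weighted
# toward the nearer sample U00:
print(interpolate(0.25, U20, U00))  # -> 110.0
```

With α = 0.25 the result sits three quarters of the way toward U00, matching the intent that the new sampling point is closest to the upper sample.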
  • It should be noted that, in the present example of converting planar YUV 4:2:0 format data to packed YUV 4:2:2 format data, no interpolating or upsampling is necessary for the Y component values since the Y components are the same for both the planar YUV 4:2:0 and packed YUV 4:2:2 formats. However, practice of the present invention is not limited to embodiments where Y components are not interpolated, i.e., Y interpolation may be appropriate in some embodiments, e.g., for other types of format conversions. [0043]
  • Thus, an effect of scaling the YUV values by the bilinear filter is to produce data in the resolution of the destination surface whereby each double word pixel of 4 bytes and 32 bits is assigned one luminance (Y) and two chrominance samples (UV), in the manner of four bytes including a YUYV configuration. To thereby write the data into a double word of the converted packed YUV 4:2:2 format, byte masks are provided to the memory to perform the logic packing of a double word by which the components are written to memory. Logic packing utilizes byte masks which are logic configurations for selectively writing the respective YUV component values to the correspondingly appropriate bytes of the memory to thereby avoid corrupting the other component values. For example, Y components are written to memory using a “1010” byte mask, and therefore Y values will be written into the first and third bytes of the 4:2:2 double word while protecting the contents of the second and fourth bytes from overwriting; then the U components are written to memory using, for example, a “0100” byte mask so as not to corrupt the written Y components, and therefore U values will be written into the second byte of the 4:2:2 double word while protecting contents of the first, third and fourth bytes from overwriting; and lastly, the V components are written to the destination using, for example, a “0001” byte mask so as not to corrupt either of the written Y or U components, and therefore V values will be written into the fourth byte of the 4:2:2 double word while protecting the contents of the first, second and third bytes from overwriting. It should be noted that the individual passes for all of the values of the respective YUV components may be in any order, although the byte masks are provided similar to those described above so that the previously written components are not corrupted. [0044]
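The logic packing described above can be sketched as follows, using the byte masks “1010”, “0100” and “0001” from the text. The helper name and the sample byte values are hypothetical:

```python
# Sketch of logic packing with byte masks. Each pass writes only its own
# component's bytes of the 4-byte (double word) YUYV group; a '0' in the
# mask protects a previously written byte from being overwritten.

def masked_write(dword, new_bytes, mask):
    """Overwrite only the byte positions where the mask character is '1'."""
    return [n if m == '1' else d for d, n, m in zip(dword, new_bytes, mask)]

dword = [0, 0, 0, 0]                                 # empty double word
dword = masked_write(dword, [16, 0, 32, 0], '1010')  # Y pass: bytes 1 and 3
dword = masked_write(dword, [0, 100, 0, 0], '0100')  # U pass: byte 2
dword = masked_write(dword, [0, 0, 0, 200], '0001')  # V pass: byte 4
print(dword)  # -> [16, 100, 32, 200], i.e. a Y0 U Y1 V double word
```

As the text notes, the passes could run in any order; the masks guarantee that each pass leaves the other components' bytes intact.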
  • Specifically, FIG. 5 shows a flow chart for converting planar YUV 4:2:0 format data to packed YUV 4:2:2 format data in accordance with the sample 4:2:0 and 4:2:2 sampling grids of FIG. 6A (800 and 900, respectively). An example of the byte masks used for such conversion is shown in FIG. 6C, with the masks being, for example, “0100” for the U pass, “0001” for the V pass, and “1010” for the Y pass. [0045]
  • Referring again to FIG. 5, after START 500, a Y pass is executed as a bilinear filter reads the incoming Y surface data (510). Then, the read Y data is interpolated (525) by utilizing the interpolation formula or conversion formula {αμ1+(1−α)μ0} described above to perform a combination of functions including upsampling or downsampling the data to the resolution of the destination surface (520), and sampling the Y data to the packed YUV 4:2:2 format resolution (530). That is, to scale the planar YUV 4:2:0 format data to the packed YUV 4:2:2 format data, the interpolation formula or conversion formula {αμ1+(1−α)μ0} described above is utilized to determine the interpolated Y value appropriate for the packed YUV 4:2:2 format (525). However, in the present example, to which the present invention is not limited, no such sampling of the Y is necessary, since the Y values, in this simple example, are the same for both the planar YUV 4:2:0 format data and the packed YUV 4:2:2 format data. Thus, Y values are written to an internal buffer in the form of a packed YUV 4:2:2 format grid (900), with the even vertical columns thereof being written into the third byte of a double word in the internal buffer (550) and the odd vertical columns thereof being written into the first byte of the double word of the internal buffer (560). When the internal buffer is determined to be full (570), the Y data is written from the internal buffer to memory utilizing the mask “1010” (580), shown as “byte mask Y” in FIG. 6C. The writing of Y data to the memory is shown in block 910 of FIG. 6B. If the internal buffer is determined not to be full, any further 4:2:0 Y samples undergo the same processing described above (590). [0046]
  • A U pass is executed as a bilinear filter reads the incoming U surface data (600). Then, the read U data is interpolated (615) by utilizing the interpolation formula or conversion formula {αμ1+(1−α)μ0} to perform the combination of upsampling or downsampling the data to the resolution of the destination surface (610), and sampling the U data to the packed YUV 4:2:2 format resolution (620). That is, as set forth above, to scale the planar YUV 4:2:0 format data to the packed YUV 4:2:2 format data, the interpolation formula or conversion formula {αμ1+(1−α)μ0} described above is utilized to determine the interpolated U value appropriate for the packed YUV 4:2:2 format (615). Then, U values are written to the second byte of the double word in the internal buffer (630). The U data is written from the internal buffer to memory utilizing the mask “0100” (640), shown as “byte mask U” in FIG. 6C. The writing of U data to the memory is shown in block 920 of FIG. 6B. Any further 4:2:0 U samples undergo the same processing described above (650). [0047]
  • A V pass is executed as a bilinear filter reads the incoming V surface data (660). Then, the read V data is interpolated (675) by utilizing the interpolation formula or conversion formula {αμ1+(1−α)μ0} to perform the combination of upsampling or downsampling the data to the resolution of the destination surface (670), and sampling the V data to the packed YUV 4:2:2 format resolution (680). That is, to scale the planar YUV 4:2:0 format data to the packed YUV 4:2:2 format data, the interpolation formula or conversion formula {αμ1+(1−α)μ0} is utilized to determine the interpolated V value appropriate for the packed YUV 4:2:2 format (675). Then, V values are written to the fourth byte of the double word in the internal buffer (690). The V data is written from the internal buffer to memory utilizing the mask “0001” (700), shown as “byte mask V” in FIG. 6C. The writing of V data to the memory is shown in block 930 of FIG. 6B. Any further 4:2:0 V samples undergo the same processing described above (710). Otherwise, the processing ends (720). [0048]
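Each chroma pass above amounts to a vertical 1:2 upsampling of a U or V column by the interpolation formula. A minimal sketch of one such column pass, assuming quarter-pixel offsets (α = 0.25 and 0.75) between adjacent 4:2:0 chroma samples as in the FIG. 6A example; the function names and sample values are hypothetical:

```python
# Minimal sketch of one vertical chroma pass (U or V) of the
# 4:2:0 -> 4:2:2 conversion: two 4:2:2 samples are produced for
# every adjacent pair of 4:2:0 chroma samples in a column.

def lerp(alpha, mu1, mu0):
    # alpha is the offset from the upper sample mu0 toward the lower mu1.
    return alpha * mu1 + (1 - alpha) * mu0

def upsample_chroma_column(samples):
    out = []
    for upper, lower in zip(samples, samples[1:]):
        out.append(lerp(0.25, lower, upper))  # point 0.25 pixel below upper
        out.append(lerp(0.75, lower, upper))  # point 0.75 pixel below upper
    return out

print(upsample_chroma_column([100, 140, 120]))
# -> [110.0, 130.0, 135.0, 125.0]
```

A hardware bilinear filter would additionally handle surface edges and horizontal filtering; this sketch shows only the vertical interpolation step.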
  • The result of the YUV passes is shown in the conversion from the 4:2:0 sampling grid 800 to the 4:2:2 sampling grid 900 shown in FIG. 6A, which is a memory composite of Y-grid 910, U-grid 920 and V-grid 930. Such result may be achieved using multiple passes through a single data stream, thus reducing costs and overhead, relative to previous conversion methods and circuits. [0049]
  • For conversion from packed YUV 4:2:2 format data to planar YUV 4:2:0 format data, once again three passes are required of the bilinear filter, one pass for all values of each of the respective YUV components. That is, in this conversion, the packed YUV 4:2:2 format data shown in FIG. 8, for example, may be read as texture stream data, and then three separate passes, including one pass for all values of each of the respective Y, U and V components, are executed in order to convert the packed YUV 4:2:2 source data into planar YUV 4:2:0 format data, thus providing color space conversion at a lower cost with reduced overhead. [0050]
  • As shown in FIG. 7, which is a flow chart for the conversion of packed YUV 4:2:2 format data to planar YUV 4:2:0 format data, the YUV 4:2:2 format data is read (1010), and the read Y data is interpolated (1025) by utilizing the same interpolation formula or conversion formula {αμ1+(1−α)μ0} described above to perform a combination of functions including upsampling or downsampling the data to the resolution of the destination surface (1020), and sampling the Y data to the planar YUV 4:2:0 format resolution (1030). That is, to scale the packed YUV 4:2:2 format data to the planar YUV 4:2:0 format data, the interpolation formula or conversion formula {αμ1+(1−α)μ0} is utilized to determine the interpolated Y value appropriate for the planar YUV 4:2:0 format (1025). However, in the present example, to which the present invention is not limited, no such sampling of the Y is necessary, since the Y values are the same for both the packed YUV 4:2:2 format data and the planar YUV 4:2:0 format data. Thus, Y values are written to a double word in an internal buffer in the form of a planar YUV 4:2:0 format. When all Y values have been written to the internal buffer, the buffer is then written to memory in planar YUV 4:2:0 format. Otherwise, the remaining Y values undergo the same processing described above (1070). [0051]
  • A U pass is executed as a bilinear filter reads the incoming U surface data ([0052] 1080). Then, the read U data is interpolated (1095) by utilizing the interpolation formula or conversion formula {αμ1−(1−α)μ0} to perform the combination of upsampling or downsampling the data to the resolution of the destination surface (1090), and sampling the U data to the planar YUV 4:2:0 format resolution (2000). That is, as set forth above, to scale the packed YUV 4:2:2 format data to the planar YUV 4:2:0 format data, the interpolation formula or conversion formula {αμ1−(1−α)μ0} is utilized to determine the interpolated U value appropriate for the planar YUV 4:2:0 format (1095). Then, U values are written to the double word in the internal buffer (2010). When the internal buffer is determined to be full (2020), the U data is written from the internal buffer to memory (2030). The writing of YUV data to the memory is shown in FIG. 8. Any further U samples undergo the same processing described above (2040).
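Because 4:2:2 carries chroma at full vertical resolution while 4:2:0 carries it at half, the chroma pass with α = 0.5 reduces to averaging each pair of vertically adjacent samples. The following sketch of a U-plane pass is an illustration under that assumption, not the patented filter:

```python
def downsample_chroma_rows(plane, width, height):
    """Halve the vertical resolution of a chroma plane (4:2:2 -> 4:2:0).

    Each output sample is the rounded average of two vertically
    adjacent input samples, i.e. the alpha = 0.5 case of the
    interpolation formula.  `height` must be even.
    """
    out = bytearray()
    for row in range(0, height, 2):
        for col in range(width):
            a = plane[row * width + col]
            b = plane[(row + 1) * width + col]
            out.append((a + b + 1) // 2)   # rounded average
    return bytes(out)
```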
  • A V pass is executed as a bilinear filter reads the incoming V surface data ([0053] 2050). Then, the read V data is interpolated (2065) by utilizing the interpolation formula or conversion formula {αμ1−(1−α)μ0} to perform the combination of upsampling or downsampling the data to the resolution of the destination surface (2060), and sampling the V data to the planar YUV 4:2:0 format resolution (2070). That is, to scale the packed YUV 4:2:2 format data to the planar YUV 4:2:0 format data, the interpolation formula or conversion formula {αμ1−(1−α)μ0} is utilized to determine the interpolated V value appropriate for the planar YUV 4:2:0 format (2065). Then, V values are written to the double word in the internal buffer (2080). The writing of YUV data to the memory is shown in FIG. 8. Any further V samples undergo the same processing described above (3010). Otherwise, the processing ends (3020).
  • Thus, similar to the conversion of planar YUV 4:2:0 format data to packed YUV 4:2:2 format data described above, an effect of scaling the YUV values by bilinear filtering is to produce data in the resolution of the destination surface, whereby each pixel is assigned one luminance (Y) and two chrominance samples (UV), although the respective values of YUV have been interpolated or converted as set forth above. Further, since the output is in planar YUV 4:2:0 format, only the values of one of the YUV components are selected, packed and written to the memory buffer, per pass. FIG. 8 illustrates three separate passes, including a Y-pass, a U-pass and a V-pass. That is, the bilinear filter performs the sampling interpolations according to the interpolation formula or conversion formula {αμ1−(1−α)μ0} for each of the YUV components, and then performs a pass for all values of each of the respective YUV components. Thus, the converted planar YUV 4:2:0 format data is written in three separate surfaces, for the YUV components, respectively. [0054]
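Taken together, the three per-component passes can be modeled in software as follows. This is an illustrative sketch, not the claimed apparatus: it assumes YUYV byte order for the packed 4:2:2 source and a rounded two-row average in place of the bilinear filter, with each pass reading the source stream once and writing one destination surface.

```python
def yuyv_to_yuv420_planar(src, width, height):
    """Three-pass conversion of packed YUYV 4:2:2 to planar YUV 4:2:0.

    Each pass walks the packed stream and emits one surface, mirroring
    the separate Y, U and V passes of the multi-pass scheme.  width and
    height are assumed even.
    """
    stride = width * 2                      # bytes per packed YUYV row

    # Y pass: every even byte is a luma sample; resolution is unchanged.
    y = bytes(src[row * stride + 2 * col]
              for row in range(height) for col in range(width))

    def chroma_pass(offset):
        # U sits at byte offset 1, V at byte offset 3 of each YUYV group;
        # average two adjacent rows to halve the vertical resolution.
        out = bytearray()
        for row in range(0, height, 2):
            for group in range(width // 2):
                a = src[row * stride + group * 4 + offset]
                b = src[(row + 1) * stride + group * 4 + offset]
                out.append((a + b + 1) // 2)
        return bytes(out)

    return y, chroma_pass(1), chroma_pass(3)
```

The three returned surfaces correspond to the separately written Y, U and V planes of the 4:2:0 destination.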
  • This concludes the description of the example embodiments. Although the present invention has been described with reference to illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the scope and spirit of the principles of the invention. More particularly, reasonable variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the foregoing disclosure, the drawings and the appended claims without departing from the spirit of the invention. In addition to variations and modifications in the component parts and/or arrangements, alternative uses will also be apparent to those skilled in the art. [0055]

Claims (23)

What is claimed is:
1. A method comprising:
performing conversion of YUV (luminance and chrominance) components by use of a number of successive passes for each of the YUV components, wherein a successive pass of the number of successive passes for one component of the YUV components is separate from successive passes of the number of successive passes for other components of the YUV components, wherein a successive pass of the one component of the YUV components comprises:
scaling the one component of the YUV components to a destination resolution;
aligning the one component with a sampling point of a destination format; and
writing the one component in the destination format.
2. The method of claim 1, wherein performing conversion of YUV (luminance and chrominance) components by use of successive passes for each of the YUV components comprises performing conversion of YUV components from source planar YUV 4:2:0 format to destination packed YUV 4:2:2 format with the same or different surface resolutions.
3. The method of claim 1, wherein performing conversion of YUV (luminance and chrominance) components by use of successive passes for each of the YUV components comprises performing conversion of YUV components from source packed YUV 4:2:2 format to destination planar YUV 4:2:0 format with the same or different surface resolutions.
4. The method of claim 1, wherein the scaling, the aligning and the writing occur subsequent to decoding of source data that includes the YUV components.
5. The method of claim 1, wherein writing the one component in the destination format comprises writing the one component in the destination format based on a byte masking operation.
6. A method comprising:
performing color space conversion of color components, which comprises performing color space conversion of a first component of the color components prior to performing color space conversion of a second component of the color components,
wherein performing color space conversion of the first component comprises:
scaling a first component of the color components to a destination resolution to generate a first scaled component;
aligning the first scaled component with a sampling point of a destination format; and
writing the first scaled component in the destination format;
wherein performing color space conversion of the second component comprises:
scaling a second component of color components to the destination resolution to generate a second scaled component;
aligning the second scaled component with a sampling point of the destination format; and
writing said second scaled component in the destination format.
7. The method of claim 6, wherein scaling the first component comprises scaling and sampling the first component based on an interpolation/conversion operation.
8. The method of claim 6, wherein scaling the second component comprises scaling and sampling the second component based on an interpolation/conversion operation.
9. An apparatus comprising:
a memory; and
a filter to perform a conversion operation for color components, wherein the conversion operation for one component of the color components is separate from the conversion operation for other components of the color components, wherein the conversion operation for the one component of the color components comprises a scale operation, an align operation and a write operation, wherein the filter is to generate a scaled color component based on the scale operation of the one component from a source resolution to a destination resolution, the filter to align the scaled color component with a sampling point of a destination format based on the align operation, the filter to write the scaled color component in the destination format into the memory.
10. The apparatus of claim 9, wherein the color components are YUV (luminance and chrominance) components, wherein the filter is to perform the conversion operation from a planar YUV 4:2:0 format to a packed YUV 4:2:2 format.
11. The apparatus of claim 9, wherein the color components are YUV (luminance and chrominance) components, wherein the filter is to perform the conversion operation from a packed YUV 4:2:2 format to planar YUV 4:2:0 format.
12. A system comprising:
a universal serial bus port to receive source data that includes components of a source format within a color space;
a memory to store the components of the source format; and
a filter to convert the components of the source format to components of a destination format, wherein conversion for one component is separate from conversion of the other components, the conversion comprising a scale function, an align function and a write function.
13. The system of claim 12, wherein the filter is to generate a scaled color component based on the scale function of the one component from the source format to the destination format.
14. The system of claim 13, wherein the filter is to align the scaled color component with a sampling point of the destination format based on the align function.
15. The system of claim 14, wherein the filter is to write the scaled color component in the destination format into the memory.
16. A machine-readable medium that provides instructions, which when executed by a machine, cause said machine to perform operations comprising:
performing conversion of YUV (luminance and chrominance) components by use of a number of successive passes for each of the YUV components, wherein a successive pass of the number of successive passes for one component of the YUV components is separate from successive passes of the number of successive passes for other components of the YUV components, wherein a successive pass of the one component of the YUV components comprises:
scaling the one component of the YUV components to a destination resolution;
aligning the one component with a sampling point of a destination format; and
writing the one component in the destination format.
17. The machine-readable medium of claim 16, wherein performing conversion of YUV (luminance and chrominance) components by use of successive passes for each of the YUV components comprises performing conversion of YUV components from source planar YUV 4:2:0 format to destination packed YUV 4:2:2 format with the same or different surface resolutions.
18. The machine-readable medium of claim 16, wherein performing conversion of YUV (luminance and chrominance) components by use of successive passes for each of the YUV components comprises performing conversion of YUV components from source packed YUV 4:2:2 format to destination planar YUV 4:2:0 format with the same or different surface resolutions.
19. The machine-readable medium of claim 16, wherein the scaling, the aligning and the writing occur subsequent to decoding of source data that includes the YUV components.
20. The machine-readable medium of claim 16, wherein writing the one component in the destination format comprises writing the one component in the destination format based on a byte masking operation.
21. A machine-readable medium that provides instructions, which when executed by a machine, cause said machine to perform operations comprising:
performing color space conversion of color components, which comprises performing color space conversion of a first component of the color components prior to performing color space conversion of a second component of the color components,
wherein performing color space conversion of the first component comprises:
scaling a first component of the color components to a destination resolution to generate a first scaled component;
aligning the first scaled component with a sampling point of a destination format; and
writing the first scaled component in the destination format;
wherein performing color space conversion of the second component comprises:
scaling a second component of color components to the destination resolution to generate a second scaled component;
aligning the second scaled component with a sampling point of the destination format; and
writing said second scaled component in the destination format.
22. The machine-readable medium of claim 21, wherein scaling the first component comprises scaling and sampling the first component based on an interpolation/conversion operation.
23. The machine-readable medium of claim 21, wherein scaling the second component comprises scaling and sampling the second component based on an interpolation/conversion operation.
US10/732,561 2000-01-07 2003-12-10 Method and apparatus for implementing 4:2:0 to 4:2:2 and 4:2:2 to 4:2:0 color space conversion Abandoned US20040119886A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/732,561 US20040119886A1 (en) 2000-01-07 2003-12-10 Method and apparatus for implementing 4:2:0 to 4:2:2 and 4:2:2 to 4:2:0 color space conversion

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/478,793 US6674479B2 (en) 2000-01-07 2000-01-07 Method and apparatus for implementing 4:2:0 to 4:2:2 and 4:2:2 to 4:2:0 color space conversion
US10/732,561 US20040119886A1 (en) 2000-01-07 2003-12-10 Method and apparatus for implementing 4:2:0 to 4:2:2 and 4:2:2 to 4:2:0 color space conversion

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/478,793 Continuation US6674479B2 (en) 2000-01-07 2000-01-07 Method and apparatus for implementing 4:2:0 to 4:2:2 and 4:2:2 to 4:2:0 color space conversion

Publications (1)

Publication Number Publication Date
US20040119886A1 true US20040119886A1 (en) 2004-06-24

Family

ID=23901366

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/478,793 Expired - Fee Related US6674479B2 (en) 2000-01-07 2000-01-07 Method and apparatus for implementing 4:2:0 to 4:2:2 and 4:2:2 to 4:2:0 color space conversion
US10/732,561 Abandoned US20040119886A1 (en) 2000-01-07 2003-12-10 Method and apparatus for implementing 4:2:0 to 4:2:2 and 4:2:2 to 4:2:0 color space conversion

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/478,793 Expired - Fee Related US6674479B2 (en) 2000-01-07 2000-01-07 Method and apparatus for implementing 4:2:0 to 4:2:2 and 4:2:2 to 4:2:0 color space conversion

Country Status (1)

Country Link
US (2) US6674479B2 (en)


Families Citing this family (51)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3918497B2 (en) * 2000-11-02 2007-05-23 株式会社村田製作所 Edge reflection type surface acoustic wave device
US7006147B2 (en) * 2000-12-22 2006-02-28 Thomson Lincensing Method and system for MPEG chroma de-interlacing
US20030025698A1 (en) * 2001-08-01 2003-02-06 Riemens Abraham Karel Programmed stall cycles slow-down video processor
US6741263B1 (en) * 2001-09-21 2004-05-25 Lsi Logic Corporation Video sampling structure conversion in BMME
US7136417B2 (en) 2002-07-15 2006-11-14 Scientific-Atlanta, Inc. Chroma conversion optimization
US7239754B2 (en) * 2002-07-16 2007-07-03 Hiroshi Akimoto Method, apparatus and system for compressing still images in multipurpose compression systems
SG111087A1 (en) * 2002-10-03 2005-05-30 St Microelectronics Asia Cache memory system
US7002595B2 (en) * 2002-10-04 2006-02-21 Broadcom Corporation Processing of color graphics data
US6989837B2 (en) * 2002-12-16 2006-01-24 S3 Graphics Co., Ltd. System and method for processing memory with YCbCr 4:2:0 planar video data format
US7554608B2 (en) * 2003-04-01 2009-06-30 Panasonic Corporation Video composition circuit for performing vertical filtering to α-blended video data and successively input video data
US7450831B2 (en) * 2003-04-16 2008-11-11 Lsi Corporation Method for DVD-subpicture compositing in 420 chroma format
TWI229562B (en) * 2003-04-17 2005-03-11 Mediatek Inc Apparatus and method for signal processing of format conversion and combination of video signals
US8572280B2 (en) * 2004-05-06 2013-10-29 Valve Corporation Method and system for serialization of hierarchically defined objects
WO2006113057A2 (en) * 2005-04-15 2006-10-26 Thomson Licensing Down-sampling and up-sampling processes for chroma samples
US8212838B2 (en) * 2005-05-27 2012-07-03 Ati Technologies, Inc. Antialiasing system and method
US7688334B2 (en) * 2006-02-14 2010-03-30 Broadcom Corporation Method and system for video format transformation in a mobile terminal having a video display
US8130317B2 (en) * 2006-02-14 2012-03-06 Broadcom Corporation Method and system for performing interleaved to planar transformation operations in a mobile terminal having a video display
US8243340B2 (en) 2006-02-23 2012-08-14 Microsoft Corporation Pre-processing of image data for enhanced compression
US7724305B2 (en) * 2006-03-21 2010-05-25 Mediatek Inc. Video data conversion method and system for multiple receivers
US8401071B2 (en) * 2007-12-19 2013-03-19 Sony Corporation Virtually lossless video data compression
US9161063B2 (en) 2008-02-27 2015-10-13 Ncomputing, Inc. System and method for low bandwidth display information transport
US8570441B2 (en) * 2008-06-11 2013-10-29 Microsoft Corporation One pass video processing and composition for high-definition video
US20090310947A1 (en) * 2008-06-17 2009-12-17 Scaleo Chip Apparatus and Method for Processing and Blending Multiple Heterogeneous Video Sources for Video Output
TW201021578A (en) * 2008-11-26 2010-06-01 Sunplus Mmedia Inc Decompression system and method for DCT-based compressed graphic data with transparent attribute
KR101546022B1 (en) * 2008-12-09 2015-08-20 삼성전자주식회사 Apparatus and method for data management
US8723891B2 (en) * 2009-02-27 2014-05-13 Ncomputing Inc. System and method for efficiently processing digital video
US8830300B2 (en) * 2010-03-11 2014-09-09 Dolby Laboratories Licensing Corporation Multiscalar stereo video format conversion
US8625666B2 (en) * 2010-07-07 2014-01-07 Netzyn, Inc. 4:4:4 color space video with 4:2:0 color space video encoders and decoders systems and methods
US8947449B1 (en) * 2012-02-21 2015-02-03 Google Inc. Color space conversion between semi-planar YUV and planar YUV formats
US9123278B2 (en) 2012-02-24 2015-09-01 Apple Inc. Performing inline chroma downsampling with reduced power consumption
US9979960B2 (en) * 2012-10-01 2018-05-22 Microsoft Technology Licensing, Llc Frame packing and unpacking between frames of chroma sampling formats with different chroma resolutions
US9661340B2 (en) 2012-10-22 2017-05-23 Microsoft Technology Licensing, Llc Band separation filtering / inverse filtering for frame packing / unpacking higher resolution chroma sampling formats
US20140198855A1 (en) * 2013-01-14 2014-07-17 Qualcomm Incorporated Square block prediction
US9998750B2 (en) 2013-03-15 2018-06-12 Cisco Technology, Inc. Systems and methods for guided conversion of video from a first to a second compression format
US10291827B2 (en) * 2013-11-22 2019-05-14 Futurewei Technologies, Inc. Advanced screen content coding solution
CN103747162B (en) * 2013-12-26 2016-05-04 南京洛菲特数码科技有限公司 A kind of image sampling method for external splicer or hybrid matrix
US9438910B1 (en) 2014-03-11 2016-09-06 Google Inc. Affine motion prediction in video coding
WO2015143351A1 (en) 2014-03-21 2015-09-24 Futurewei Technologies, Inc. Advanced screen content coding with improved color table and index map coding methods
US10091512B2 (en) 2014-05-23 2018-10-02 Futurewei Technologies, Inc. Advanced screen content coding with improved palette table and index map coding methods
US9749646B2 (en) 2015-01-16 2017-08-29 Microsoft Technology Licensing, Llc Encoding/decoding of high chroma resolution details
US9854201B2 (en) 2015-01-16 2017-12-26 Microsoft Technology Licensing, Llc Dynamically updating quality to higher chroma sampling rate
US9805662B2 (en) * 2015-03-23 2017-10-31 Intel Corporation Content adaptive backlight power saving technology
US10070098B2 (en) * 2016-10-06 2018-09-04 Intel Corporation Method and system of adjusting video quality based on viewer distance to a display
US10368080B2 (en) 2016-10-21 2019-07-30 Microsoft Technology Licensing, Llc Selective upsampling or refresh of chroma sample values
CN107105185A (en) * 2017-04-18 2017-08-29 深圳创维-Rgb电子有限公司 The transmission method and device of vision signal
US10819965B2 (en) 2018-01-26 2020-10-27 Samsung Electronics Co., Ltd. Image processing device and method for operating image processing device
US11570473B2 (en) * 2018-08-03 2023-01-31 V-Nova International Limited Entropy coding for signal enhancement coding
CN110262529B (en) * 2019-06-13 2022-06-03 桂林电子科技大学 Unmanned aerial vehicle monitoring method and system based on convolutional neural network
CN111147857B (en) * 2019-12-06 2023-01-20 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN115348432A (en) * 2022-08-15 2022-11-15 上海壁仞智能科技有限公司 Data processing method and device, image processing method, electronic device and medium
CN117750025A (en) * 2024-02-20 2024-03-22 上海励驰半导体有限公司 Image data processing method, device, chip, equipment and medium

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5412766A (en) * 1992-10-21 1995-05-02 International Business Machines Corporation Data processing method and apparatus for converting color image data to non-linear palette
US5550597A (en) * 1994-07-05 1996-08-27 Mitsubishi Denki Kabushiki Kaisha Signal processing method and signal processing device
US5568204A (en) * 1994-02-23 1996-10-22 Sony Corporation Digital video switching apparatus
US5650824A (en) * 1994-07-15 1997-07-22 Matsushita Electric Industrial Co., Ltd. Method for MPEG-2 4:2:2 and 4:2:0 chroma format conversion
US5684544A (en) * 1995-05-12 1997-11-04 Intel Corporation Apparatus and method for upsampling chroma pixels
US5745186A (en) * 1995-05-19 1998-04-28 Sanyo Electric Co., Ltd. Video signal processing circuit for reducing a video signal
US5790197A (en) * 1994-01-12 1998-08-04 Thomson Consumer Electronics,. Inc. Multimode interpolation filter as for a TV receiver
US5812204A (en) * 1994-11-10 1998-09-22 Brooktree Corporation System and method for generating NTSC and PAL formatted video in a computer system
US5832120A (en) * 1995-12-22 1998-11-03 Cirrus Logic, Inc. Universal MPEG decoder with scalable picture size
US5982432A (en) * 1997-02-27 1999-11-09 Matsushita Electric Industrial Co., Ltd. Method and apparatus for converting color component type of picture signals, method and apparatus for converting compression format of picture signals and system for providing picture signals of a required compression format
US6005546A (en) * 1996-03-21 1999-12-21 S3 Incorporated Hardware assist for YUV data format conversion to software MPEG decoder
US6205181B1 (en) * 1998-03-10 2001-03-20 Chips & Technologies, Llc Interleaved strip data storage system for video processing
US6208350B1 (en) * 1997-11-04 2001-03-27 Philips Electronics North America Corporation Methods and apparatus for processing DVD video
US6363440B1 (en) * 1998-11-13 2002-03-26 Gateway, Inc. Method and apparatus for buffering an incoming information signal for subsequent recording
US6445386B1 (en) * 1999-01-15 2002-09-03 Intel Corporation Method and apparatus for stretch blitting using a 3D pipeline
US6452601B1 (en) * 1999-05-20 2002-09-17 International Business Machines Corporation Pixel component packing, unpacking, and modification
US6526244B1 (en) * 2001-11-21 2003-02-25 Xerox Corporation Hybrid electrophotographic apparatus for custom color printing
US6822694B2 (en) * 1997-12-04 2004-11-23 Sony Corporation Signal processing apparatus

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6529244B1 (en) * 1999-12-22 2003-03-04 International Business Machines Corporation Digital video decode system with OSD processor for converting graphics data in 4:4:4 format to 4:2:2 format by mathematically combining chrominance values


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080122860A1 (en) * 2003-11-10 2008-05-29 Nvidia Corporation Video format conversion using 3D graphics pipeline of a GPU
US7511714B1 (en) * 2003-11-10 2009-03-31 Nvidia Corporation Video format conversion using 3D graphics pipeline of a GPU
US7760209B2 (en) 2003-11-10 2010-07-20 Nvidia Corporation Video format conversion using 3D graphics pipeline of a GPU
US8139081B1 (en) * 2007-09-07 2012-03-20 Zenverge, Inc. Method for conversion between YUV 4:4:4 and YUV 4:2:0
US9218792B2 (en) 2008-12-11 2015-12-22 Nvidia Corporation Variable scaling of image data for aspect ratio conversion
US9723216B2 (en) 2014-02-13 2017-08-01 Nvidia Corporation Method and system for generating an image including optically zoomed and digitally zoomed regions
WO2015133712A1 (en) * 2014-03-06 2015-09-11 삼성전자 주식회사 Image decoding method and device therefor, and image encoding method and device therefor
US10506243B2 (en) 2014-03-06 2019-12-10 Samsung Electronics Co., Ltd. Image decoding method and device therefor, and image encoding method and device therefor
US9865035B2 (en) 2014-09-02 2018-01-09 Nvidia Corporation Image scaling techniques
US10462475B2 (en) 2014-12-19 2019-10-29 Hfi Innovation Inc. Methods of palette based prediction for non-444 color format in video and image coding
WO2016100424A1 (en) * 2014-12-19 2016-06-23 Mediatek Inc. Methods of palette based prediction for non-444 color format in video and image coding
US10798392B2 (en) 2014-12-19 2020-10-06 Hfi Innovation Inc. Methods of palette based prediction for non-444 color format in video and image coding
CN107135382A (en) * 2017-03-02 2017-09-05 广东美电贝尔科技集团股份有限公司 A kind of quick Zoom method of image based on YUV signal processing

Also Published As

Publication number Publication date
US6674479B2 (en) 2004-01-06
US20020101536A1 (en) 2002-08-01

Similar Documents

Publication Publication Date Title
US6674479B2 (en) Method and apparatus for implementing 4:2:0 to 4:2:2 and 4:2:2 to 4:2:0 color space conversion
US6538658B1 (en) Methods and apparatus for processing DVD video
US6947485B2 (en) System, method and apparatus for an instruction driven digital video processor
EP1715696B1 (en) System, method and apparatus for a variable output video decoder
US6493005B1 (en) On screen display
US6961063B1 (en) Method and apparatus for improved memory management of video images
US5912710A (en) System and method for controlling a display of graphics data pixels on a video monitor having a different display aspect ratio than the pixel aspect ratio
US6828987B2 (en) Method and apparatus for processing video and graphics data
US6437787B1 (en) Display master control
US20090016438A1 (en) Method and apparatus for a motion compensation instruction generator
US7880808B2 (en) Video signal processing apparatus to generate both progressive and interlace video signals
KR19980068686A (en) Letter Box Processing Method of MPEG Decoder
US7414632B1 (en) Multi-pass 4:2:0 subpicture blending
US20060209086A1 (en) Video and graphics system with square graphics pixels
US7203236B2 (en) Moving picture reproducing device and method of reproducing a moving picture
EP1024668B1 (en) Method and apparatus for a motion compensation instruction generator
US7483037B2 (en) Resampling chroma video using a programmable graphics processing unit to provide improved color rendering
EP1147671B1 (en) Method and apparatus for performing motion compensation in a texture mapping engine
US6707853B1 (en) Interface for performing motion compensation
US6509932B1 (en) Method and apparatus for providing audio in a digital video system
US6959043B2 (en) Video decoding method, video decoding apparatus, and video decoding program storage medium
WO1999000985A1 (en) Image encoding method and image decoding method
JP2001223989A (en) Image reproducing device

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION