WO2005038603A2 - Overlapped block motion compensation for variable size blocks in the context of mctf scalable video coders - Google Patents
Overlapped block motion compensation for variable size blocks in the context of MCTF scalable video coders
- Publication number
- WO2005038603A2 (PCT/US2004/033876, application US2004033876W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- block
- self
- neighbor
- motion
- frame
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/57—Motion estimation characterised by a search window with variable size or shape
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
- H04N19/615—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding using motion compensated temporal filtering [MCTF]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/107—Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/14—Coding unit complexity, e.g. amount of activity or edge presence estimation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/583—Motion compensation with overlapping blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/96—Tree coding, e.g. quad-tree coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/63—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
Definitions
- the present invention relates generally to a method, computer program product, and computer system for processing video frames, and more specifically to a method, system, computer program product, and computer system for performing overlapped block motion compensation (OBMC) for variable size blocks in the context of motion compensated temporal filtering (MCTF) scalable video coders.
- OBMC overlapped block motion compensation
- MCTF motion compensated temporal filtering
- variable size block matching VSBM
- VSBM variable size block matching
- the present invention provides a method for processing video frames, said method comprising the steps of: providing a current frame divided into blocks that include at least two differently sized blocks; and performing overlapped block motion compensation (OBMC) on each block, said block on which said OBMC is being performed being denoted as a self block, said performing OBMC comprising performing OBMC on the self block with respect to neighbor blocks of the self block, said neighbor blocks consisting of nearest neighbor blocks of the self block, said neighbor blocks comprising a first neighbor block, said performing OBMC on the self block comprising generating a weighting window for the self block and for each of its neighbor blocks.
- the present invention provides a computer program product, comprising a computer usable medium having a computer readable program code embodied therein, said computer readable program code comprising an algorithm adapted to implement a method for processing video frames, said method comprising the steps of: providing a current frame divided into blocks that include at least two differently sized blocks; and performing overlapped block motion compensation (OBMC) on each block, said block on which said OBMC is being performed being denoted as a self block, said performing OBMC comprising performing OBMC on the self block with respect to neighbor blocks of the self block, said neighbor blocks consisting of nearest neighbor blocks of the self block, said neighbor blocks comprising a first neighbor block, said performing OBMC on the self block comprising generating a weighting window for the self block and for each of its neighbor blocks.
- the present invention provides a computer system comprising a processor and a computer readable memory unit coupled to the processor, said memory unit containing instructions that when executed by the processor implement a method for processing video frames, said method comprising the computer implemented steps of: providing a current frame divided into blocks that include at least two differently sized blocks; and performing overlapped block motion compensation (OBMC) on each block, said block on which said OBMC is being performed being denoted as a self block, said performing OBMC comprising performing OBMC on the self block with respect to neighbor blocks of the self block, said neighbor blocks consisting of nearest neighbor blocks of the self block, said neighbor blocks comprising a first neighbor block, said performing OBMC on the self block comprising generating a weighting window for the self block and for each of its neighbor blocks.
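The following is a rough C sketch of the general OBMC idea recited in the claim language above: the prediction at each pixel of a self block is a window-weighted sum of motion-compensated samples obtained with the self block's motion vector and with the motion vectors of its nearest neighbor blocks. It is not the patent's own weighting-window construction (which is detailed later with the figures); the helper names, integer-pel motion, and the assumption that the weights sum to 1 are illustrative choices.

```c
/* Clamp an integer to [lo, hi]. */
static int clamp_int(int v, int lo, int hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Fetch the reference-frame sample that motion vector (mvx, mvy) maps pixel
 * (x, y) of the current frame onto. Integer-pel motion is assumed here;
 * the coder described in this patent uses sub-pixel accurate motion. */
static float mc_sample(const float *ref, int width, int height,
                       int x, int y, int mvx, int mvy)
{
    int rx = clamp_int(x + mvx, 0, width - 1);
    int ry = clamp_int(y + mvy, 0, height - 1);
    return ref[ry * width + rx];
}

/* OBMC prediction for one pixel: mv index 0 is the self block's motion vector,
 * indices 1..n-1 belong to the nearest neighbor blocks, and w[k] is the
 * weighting-window value of block k at this pixel (assumed to sum to 1). */
static float obmc_predict_pixel(const float *ref, int width, int height,
                                int x, int y,
                                const int mvx[], const int mvy[],
                                const float w[], int n)
{
    float pred = 0.0f;
    for (int k = 0; k < n; k++)
        pred += w[k] * mc_sample(ref, width, height, x, y, mvx[k], mvy[k]);
    return pred;
}
```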
- FIG. 1 depicts a video coding system comprising a Motion Compensated Temporal Filtering (MCTF) processor, in accordance with embodiments of the present invention.
- FIG. 2 depicts the MCTF process implemented by the MCTF processor of FIG. 1, in accordance with embodiments of the present invention.
- FIG. 3 is a flow chart depicting utilizing I-BLOCKs in temporal high frames generated by the MCTF process of FIG. 2, in accordance with embodiments of the present invention.
- FIG. 4 depicts connections between pixels of successive frames, in accordance with embodiments of the present invention.
- FIG. 5 depicts a frame comprising I-BLOCKs and P-BLOCKs, in accordance with embodiments of the present invention.
- FIG. 6 illustrates notation used for spatial interpolation of an I-BLOCK, in accordance with embodiments of the present invention.
- FIGS. 7A-7C illustrate spatial interpolation of an I-BLOCK for a case in which only one neighbor block is available, in accordance with embodiments of the present invention.
- FIG. 8 illustrates a variable block size of I-BLOCKs in a frame, in accordance with embodiments of the present invention.
- FIGS. 9A-9F illustrate directional spatial interpolation of an I-BLOCK, in accordance with embodiments of the present invention.
- FIG. 10 illustrates hybrid spatial interpolation of an I-BLOCK, in accordance with embodiments of the present invention.
- FIG. 11 depicts a current frame that has been configured into variable-size blocks, in accordance with embodiments of the present invention.
- FIG. 12 depicts the current frame of FIG. 11 and its reference frame together with motion vectors that link blocks in the current frame with corresponding blocks in the reference frame, in accordance with embodiments of the present invention.
- FIG. 13A is a flow chart for utilizing variable block-size OBMC in the MCTF temporal high frames of FIG. 2, in accordance with embodiments of the present invention.
- FIG. 13B is a flow chart describing the variable block-size OBMC processing step of FIG. 13 A, in accordance with embodiments of the present invention.
- FIG. 14 is a block diagram of frame processing associated with the flow charts of FIGS. 13A-13B, in accordance with embodiments of the present invention.
- FIG. 15 depicts two successive input frames to be transformed by the MCTF into a high temporal frame and a low temporal frame, in accordance with embodiments of the present invention.
- FIG. 16 depicts a self block and associated nearest neighboring blocks used by OBMC, in accordance with embodiments of the present invention.
- FIG. 17A illustrates 4x4 weighting windows, wherein a self block is a motion block, in accordance with embodiments of the present invention.
- FIG. 17B illustrates 4x4 weighting windows, wherein a self block is an I-BLOCK, in accordance with embodiments of the present invention.
- FIG. 18A illustrates 8x8 weighting windows, wherein a self block is a motion block, in accordance with embodiments of the present invention.
- FIG. 18B illustrates 8x8 weighting windows, wherein a self block is an I-BLOCK, in accordance with embodiments of the present invention.
- FIG. 19 shows the frame of FIG. 11 such that portions of selected nearest neighbor blocks are depicted, said selected nearest neighbor blocks being larger than their associated self block, in accordance with embodiments of the present invention.
- FIG. 20 shows the frame of FIG. 11, depicting portions of a self block that is larger than associated nearest neighbor blocks, in accordance with embodiments of the present invention.
- FIGS. 21A-21C depict weighting windows for a self block and an associated smaller nearest neighboring block used by OBMC in conjunction with a shrinking scheme wherein the nearest neighboring block is a motion block, in accordance with embodiments of the present invention.
- FIG. 23 is a flow chart for calculating weighting windows for variable block-size OBMC, in accordance with embodiments of the present invention.
- FIG. 24 is a flow chart for calculating successively improved motion vectors for the self blocks of a current frame processed according to variable block size OBMC using the probability weighting windows calculated according to the methodology described by the flow charts of FIGS. 13A, 13B, and 23, in accordance with embodiments of the present invention.
- FIG. 25 illustrates a computer system for processing I-BLOCKs used with MCTF and/or for performing overlapped block motion compensation (OBMC) for variable size blocks in the context of MCTF scalable video coders, in accordance with embodiments of the present invention.
- Video compression schemes remove redundant information from input video signals before their transmission, by encoding frames of the input video signals into compressed information that represents an approximation of the images comprised by the frames of the input video signals. Following the transmission of the compressed information to its destination, the video signals are reconstructed by decoding the approximation of the images from the compressed information.
- temporal redundancy pixel values are not independent but are correlated with their neighbors across successive frames of the input video signals.
- MPEG Moving Pictures Experts Group
- MCP motion-compensated prediction
- a video signal is typically divided into a series of groups of pictures (GOP), where each GOP begins with an intra-coded frame (I) followed by an arrangement of forward predictive-coded frames (P) and bidirectional predicted frames (B). Both P-frames and B-frames are interframes.
- a target macroblock in a P-frame can be predicted from one or more past reference frames (forward prediction).
- Bidirectional prediction also called motion-compensated (MC) interpolation, is an important feature of MPEG video.
- B-frames coded with bidirectional prediction use two reference frames, one in the past and one in the future.
- a target macroblock in a B-frame can be predicted from past reference frames (forward prediction) or from future reference frames (backward prediction), or by an average of these two predictions (interpolation).
- the target macroblock in either a P-frame or a B-frame can also be intra coded as an I-BLOCK or a P-BLOCK as defined infra.
- Forward or backward prediction encodes data in a current input frame (i.e., picture) based upon the contents of a preceding or succeeding reference frame, respectively, in consideration of luminance and/or chrominance values at the pixels in both the current input frame and one or more reference frames.
- the reference frames used for the predictive encoding are either preceding reference frames or succeeding reference frames. For a given input block of pixels (e.g., a 16x16 block),
- the predictive encoding utilizes motion compensated prediction (MCP) to successively shift blocks in the reference frames, within a predetermined search range, to determine whether there is a 16x16 array of pixels found within a reference frame which has at least a given minimum degree of correlation with the input block. If the given minimum degree of correlation is determined to exist, then the amount and direction of displacement between the found 16x16 pixel array in the reference frame and the input block is obtained in the form of a motion vector (MV), with horizontal and vertical components.
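As a hedged illustration of the motion vector search just described, the C sketch below performs an exhaustive (full) search using a sum-of-squared-differences criterion; the 16x16 block size, the SSD measure, and the exclusion of candidates that fall outside the reference frame are assumptions made for the example, not requirements stated by the patent.

```c
#include <limits.h>

/* Exhaustive block matching for the 16x16 input block at (bx, by) in the
 * current frame: test every displacement within +/-range in the reference
 * frame and return the motion vector (best_dx, best_dy) that minimizes the
 * sum of squared differences (SSD). */
static void full_search_16x16(const unsigned char *cur, const unsigned char *ref,
                              int width, int height, int bx, int by, int range,
                              int *best_dx, int *best_dy)
{
    long best = LONG_MAX;
    *best_dx = 0;
    *best_dy = 0;
    for (int dy = -range; dy <= range; dy++) {
        for (int dx = -range; dx <= range; dx++) {
            /* skip candidates that fall outside the reference frame */
            if (bx + dx < 0 || by + dy < 0 ||
                bx + dx + 16 > width || by + dy + 16 > height)
                continue;
            long ssd = 0;
            for (int y = 0; y < 16; y++) {
                for (int x = 0; x < 16; x++) {
                    int d = (int)cur[(by + y) * width + (bx + x)]
                          - (int)ref[(by + dy + y) * width + (bx + dx + x)];
                    ssd += (long)d * d;
                }
            }
            if (ssd < best) {
                best = ssd;
                *best_dx = dx;
                *best_dy = dy;
            }
        }
    }
}
```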
- MCP motion compensated prediction
- the differences between the luminance values alone, or the luminance and chrominance values, of the input block and the corresponding pixels within the found 16x16 array of pixels in the reference frame are the motion compensated prediction error values, sometimes called prediction residuals or simply residuals.
- prediction from a preceding reference frame is referred to as forward prediction, and from a succeeding reference frame is referred to as backward prediction.
- the input block may be intra-coded within the input frame, and is referred to as an I-BLOCK.
- values for the input block may be predicted based on 16x16 blocks of pixels within both preceding and succeeding reference frames, respectively.
- an unconnected block within a current input frame is classified to be either an I-BLOCK or a P-BLOCK.
- An I-BLOCK is defined as an input block in the current input frame that does not have sufficient correlation (e.g., a given minimum degree of correlation) with a corresponding block of pixels in the reference frame that is being used for forward or backward prediction in relation to the frame. Due to the lack of sufficient correlation, an I-BLOCK is encoded entirely within the given frame independent of a reference frame.
- a P-BLOCK is encoded: by forward prediction from the reference frame under the assumption that the reference frame precedes the given frame; by backward prediction from the reference frame under the assumption that the reference frame succeeds the given frame; or by bidirectional prediction using both a preceding and succeeding reference frame.
- An example of an I-BLOCK is a block of newly uncovered pixels in the current input frame having no corresponding pixels in the preceding frame.
- Other examples of I-BLOCKs include poorly matched motion blocks such as, inter alia, a block of partially covered or partially occluded pixels in the current input frame wherein said block does not have sufficient correlation with a corresponding block of pixels in the reference frame.
- the present invention provides a method of determining and encoding I-BLOCKs.
- the present invention is directed to the quality of the motion, since Motion Compensated Temporal Filtering (MCTF) is rather sensitive to said quality.
- MCTF Motion Compensated Temporal Filtering
- the conventionally used block based motion in MPEG video standards is not of sufficiently high quality to avoid the creation of artifacts in the lower frame rate videos output by the MCTF and resultant scalable video coder.
- the present invention provides a method of determining and coding a more smooth and consistent motion for MCTF use.
- Video coding system is a system that encodes video data.
- Video coder is an algorithm which reduces the number of bits necessary to store a video clip by removing redundancy and introducing controlled distortion
- Subband/wavelet coder is a video coder that uses the subband/wavelet transformation in the process of redundancy reduction
- Temporal correlation is a correlation between pixels in adjacent or nearby frames
- spatial correlation is a correlation between pixels in the same frame
- Motion estimation is estimation of a motion or displacement vector that locates a matching block in another frame
- Motion Compensation (MC) is the process of aligning a block in the present frame with a matching block in a different frame.
- Motion Compensated Temporal filtering is a process of filtering a block or array of pixels along the time axis (i.e., motion trajectory) in a manner to be described infra in conjunction with FIG. 2.
- Temporal low frame is a frame containing the spatial low frequencies that are common in a pair (or larger set) of frames.
- Temporal high frame is a frame containing the spatial high frequencies that constitute the MC difference in a pair (or larger set) of frames
- Temporal redundancy denotes a dependency between pixels in adjacent or nearby frames
- Block matching is a method that assigns one motion to a block of pixels.
- VSBM Variable size block matching
- Block sizes may range, inter alia, from 4x4 to 64x64.
- Hierarchical VSBM HVSBM
- OBMC overlapped block motion compensation
- Global motion vector is a motion vector that is used for the entire frame, wherein the pertinent block size is equal to the frame size.
- Unconnected area is an area of the image frame that does not have a corresponding region in the reference frames or a region where the motion is too complicated for the motion estimator to track properly.
- MCP Motion compensated prediction
- DFD Displaced frame difference
- Hybrid coder is a video coder such as MPEG2 that makes use of MC prediction inside a feedback loop to temporally compress the data, and then a spatial transform coder to code the resulting prediction error.
- Motion compensated temporal filtering (MCTF) scalable video coding is an exploration activity in the Moving Picture Experts Group (MPEG), which is one of the subcommittees of the International Organization for Standardization (ISO).
- MPEG Moving Picture Experts Group
- ISO International Organization for Standardization
- a key element in this is the compression of these audiovisual signals due to their large uncompressed size.
- a scalable video coder provides an embedded bit stream containing a whole range of bitrates, lower resolutions, and lower frame rates, in addition to the full frame rate and full resolution input to the scalable coder. With said embedding, the lower bitrate result is embedded in each of the higher bitrate streams.
- Input video 51 is received by a MCTF processor 52 and comprises a group of pictures (GOP) such as 16 input frames. Each frame has pixels, and each pixel has a pixel value for the pixel characteristics of luminance and chrominance. For each block of data processed by the MCTF processor 52, the MCTF processor 52 needs motion information in the form of a motion vector. Accordingly, the Input Video 51 data is sent from the MCTF processor 52 to a Motion Estimation 56 block which determines the motion vectors and sends the determined motion vectors back up to the MCTF processor 52 to perform the motion compensated temporal filtering.
- a Motion Estimation 56 block determines the motion vectors and sends the determined motion vectors back up to the MCTF processor 52 to perform the motion compensated temporal filtering.
- the motion information is coded in the Motion Field Coding processor 57, and then transmitted to the Packetizer 55.
- the MCTF processor 52 generates output frames comprising one temporal low frame and multiple temporal high frames of transformed pixel values, derived from the input frames of the Input Video 51 as will be described infra in conjunction with FIG. 2.
- the generated output frames are processed by Spatial Analysis 53 by being analyzed spatially with a subband wavelet coder, namely a discrete wavelet transform.
- the video coding system 50 does not suffer the drift problem exhibited by hybrid coders that have feedback loops.
- the Spatial Analysis 53 decomposes the generated output frames (i.e., one temporal low frame and multiple temporal high frames) into one low frequency band and bands having increasing scales of higher and higher frequency.
- the Spatial Analysis 53 performs a spatial pixel transformation to derive spatial subbands in a manner that is analogous to pixel transformation performed by the MCTF processor 52 in the time domain.
- the output of Spatial Analysis 53 is uncompressed floating point data and many of the subbands may comprise mostly near zero values.
- EZBC embedded Zero Block Coder
- the EZBC 54 algorithm provides the basic scalability properties by individually coding each spatial resolution and temporal high subband.
- the EZBC 54 includes a compression block that quantizes the subband coefficients and assigns bits to them. Said quantizing converts the floating point output of Spatial Analysis 53 to a binary bit representation, followed by truncating the binary bit representation to discard relatively insignificant bits such that no more than negligible distortion is generated from said truncation.
- the EZBC 54 is an adaptive arithmetic coder which converts the fixed bit strings into variable length strings, thereby achieving further compression.
- the EZBC 54 is both a quantizer and a variable length coder called a Conditional Adaptive Arithmetic Coder. Whereas the quantizer is throwing away bits, the variable length coder compresses output from the quantizer losslessly.
- the bit streams generated by the EZBC 54 are interleaved and sent to the Packetizer 55.
- the EZBC coder can be substituted by another suitable embedded or layered coder, e.g. JPEG 2000 and others.
- the Packetizer 55 combines the bits of the streams generated by the EZBC 54 with the bits of motion vectors (needed for doing decoding later) transmitted from the Motion Field Coding 57 and breaks the combination of bits up into packets of desired sizes (e.g., internet packets of 500 kilobytes or less).
- the Packetizer 55 subsequently sends the packets over a communication channel to a destination (e.g., a storage area for storing the encoded video information).
- FIG. 2 depicts the MCTF process implemented by the MCTF processor 52 of FIG. 1 for an example GOP size of 16 frames, in accordance with embodiments of the present invention.
- Level 5 contains the 16 input frames of the Input Video 51 of FIG. 1, namely input frames F1, F2, ..., F16 ordered in the direction of increasing time from left to right.
- MC temporal filtering is performed on pairs of frames to produce temporal low (t-L) and high (t-H) subband frames at the next lower temporal scale or frame rate.
- solid lines indicate the temporal low frames and dashed lines indicate the temporal high frames.
- curved lines indicate the corresponding motion vectors.
- the MC temporal filtering is performed four times in FIG. 2 to generate 5 temporal scales or frame rates, the original frame rate and four lower frame rates.
- the frame rates generated are full rate, 1/2 full rate, 1/4 full rate, 1/8 full rate, and 1/16 full rate at levels 5, 4, 3, 2, and 1, respectively.
- if the input frame rate were 32 frames per second (fps), the lowest frame rate out is 2 fps at level 1.
- the lowest frame rate is denoted (1)
- the next higher frame rate is denoted as (2), etc.
- the MCTF processor 52 of FIG. 1 performs: motion estimation from: F1 to F2, F3 to F4, F5 to F6, F7 to F8, F9 to F10, F11 to F12, F13 to F14, and F15 to F16 and determines the associated motion vectors M1, M2, M3, M4, M5, M6, M7, and M8, respectively.
- temporal filtering on frames F1 and F2 to generate temporal low frame L1 and temporal high frame H1
- temporal filtering on frames F3 and F4 to generate temporal low frame L2 and temporal high frame H2
- temporal filtering on frames F5 and F6 to generate temporal low frame L3 and temporal high frame H3
- temporal filtering on frames F7 and F8 to generate temporal low frame L4 and temporal high frame H4
- temporal filtering on frames F9 and F10 to generate temporal low frame L5 and temporal high frame H5
- temporal filtering on frames F11 and F12 to generate temporal low frame L6 and temporal high frame H6
- temporal filtering on frames F13 and F14 to generate temporal low frame L7 and temporal high frame H7
- temporal filtering on frames F15 and F16 to generate temporal low frame L8 and temporal high frame H8.
- the frames being temporally filtered into temporal low and temporal high frames are called "child frames".
- the F1 and F2 frames are child frames of the L1 and H1 frames.
- the corresponding pixel values in the temporal low and temporal high frames are proportional to V_A + V_B and V_A - V_B, respectively (where V_A and V_B denote corresponding pixel values in the child frames), in the special case where Haar filters are used for temporal filtering.
- pixel values in temporal low frames are proportional to the average of the corresponding pixel values in the child frames.
- pixel values in temporal high frames are proportional to the difference between corresponding pixel values in the child frames.
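A minimal sketch of the Haar temporal filtering step described above, ignoring the motion alignment of the two samples (which in the actual coder is supplied by the motion vectors): the low value is proportional to the sum and the high value to the difference of corresponding child-frame samples. The orthonormal 1/sqrt(2) scaling is one conventional choice, not necessarily the normalization used in the patent.

```c
#include <math.h>

/* Haar temporal filtering of two corresponding (motion-aligned) pixel values:
 * va from child frame A and vb from child frame B become a temporal low value
 * *lo (proportional to va + vb) and a temporal high value *hi (proportional
 * to va - vb). */
static void haar_temporal_analysis(float va, float vb, float *lo, float *hi)
{
    const float s = 1.0f / sqrtf(2.0f); /* orthonormal Haar scaling (illustrative) */
    *lo = s * (va + vb);
    *hi = s * (va - vb);
}
```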
- the Motion Estimation 56 of FIG. 1 further performs: motion estimation from: L1 to L2, L3 to L4, L5 to L6, and L7 to L8 and determines the associated motion vectors M9, M10, M11, and M12, respectively.
- the Motion Estimation 56 of FIG. 1 further performs: motion estimation from: L9 to L10 and L11 to L12 and determines the associated motion vectors M13 and M14, respectively.
- the Motion Estimation 56 of FIG. 1 further performs: motion estimation from: L13 to L14 and determines the associated motion vector M15.
- the MCTF processor 52 of FIG. 1 further performs: temporal filtering on frames L13 and L14 to generate temporal low frame L15 and temporal high frame H15.
- the 16 frames in this 5 level example, consisting of the temporal low frame L15 and the temporal high frames H1, H2, ..., H15, are transmitted as output from the MCTF processor 52 to the Spatial Analysis 53 of FIG. 1. Since the temporal high frames H1, H2, ..., H15 may comprise a large number of near zero values, as explained supra, the temporal high frames H1, H2, ..., H15 are amenable to being highly compressed. Given frames L15, H1, H2, ..., H15, the frames in Levels 2, 3, 4, and 5 may be regenerated by sequentially reversing the process that generated frames L15, H1, H2, ..., H15.
- frames L15 and H15 of Level 1 may be mathematically combined to regenerate frames L13 and L14 of Level 2.
- frames L13 and H13 of Level 2 may be mathematically combined to regenerate frames L9 and L10 of Level 3
- frames L14 and H14 of Level 2 may be mathematically combined to regenerate frames L11 and L12 of Level 3. This process may be sequentially continued until frames F1, F2, ..., F16 of Level 5 are regenerated. Since the compression performed by the EZBC 54 of FIG. 1 is lossy, the regenerated frames in Levels 2-5 will be approximately, but not exactly, the same as the original frames in Levels 2-5 before being temporally filtered.
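The regeneration of child frames mentioned above can be illustrated by inverting the Haar step of the earlier sketch; with the same illustrative 1/sqrt(2) scaling, the synthesis is symmetric to the analysis.

```c
#include <math.h>

/* Invert the Haar temporal filtering: recover the child-frame values va and vb
 * from a temporal low value lo and a temporal high value hi. */
static void haar_temporal_synthesis(float lo, float hi, float *va, float *vb)
{
    const float s = 1.0f / sqrtf(2.0f);
    *va = s * (lo + hi); /* lo = s*(va+vb) and hi = s*(va-vb) imply va = s*(lo+hi) */
    *vb = s * (lo - hi);
}
```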
- the representative two frames of this pair of successive frames are denoted as frames A and B, wherein forward estimation is performed from frame A to frame B, so that frame A is earlier in time than frame B.
- Newly uncovered pixels in frame B have no corresponding pixels in frame A.
- occluded pixels in frame A have no corresponding pixel in frame B.
- the present invention utilizes I-BLOCKs to deal locally with poorly matched motion blocks resulting from the newly uncovered pixels in frame B.
- MC temporal filtering is omitted and spatial interpolation is used instead to determine pixel values in the I-BLOCK.
- the resulting spatial interpolation error block for the I-BLOCK (also called the residual error block of the interpolated I-BLOCK) is subsequently overlaid on (i.e., inserted into) the corresponding block within the associated MCTF temporal high frame.
- the present invention discloses a method of compressing video that involves a spatiotemporal or space-time transformation utilizing motion compensated blocks in pairs of input frames, such as the representative pair having input frames A and B.
- This block based motion field is used to control the spatiotemporal transformation so that it filters along approximate motion trajectories.
- the output of such a transformation is compressed for transmission or storage.
- Some of the blocks may be unconnected with neighbors in the next frame (timewise) because of covering or uncovering of regions in the frame due to motion, e.g. a ball moving in front of a background object that is stationary.
- regions i.e., I-BLOCKs
- FIG. 3 is a flow chart depicting steps 31-38 for utilizing I-BLOCKs in the MCTF temporal high frames, in accordance with embodiments of the present invention.
- Step 31 utilizes two successive frames, A and B, in a MCTF filtering level, wherein forward estimation is performed from frame A to frame B.
- frames A and B could represent frames F1 and F2 in level 5 of FIG. 2.
- Steps 32 and 33 determine the connection state of pixels in frames A and B, respectively, as illustrated in FIG. 4 in accordance with embodiments of the present invention.
- Each pixel in frames A and B will be classified as having a connection state of "connected” or "unconnected” as follows.
- FIG. 4 shows pixels A1, A2, ..., A12 in frame A and pixels B1, B2, ..., B12 in frame B.
- Pixels A1, A2, A3, and A4 are in block 1 of frame A.
- Pixels A5, A6, A7, and A8 are in block 2 of frame A.
- Pixels A9, A10, A11, and A12 are in block 3 of frame A.
- Pixels B1, B2, B3, and B4 are in block 1 of frame B. Pixels B5, B6, B7, and B8 are in block 2 of frame B. Pixels B9, B10, B11, and B12 are in block 3 of frame B. Pixels in frame A are used as references for pixels in frame B in relation to the forward motion estimation from frame A to frame B. Note that the blocks in frames A and B are 4x4 pixel blocks, and FIG. 4 shows only one column of each 4-column block. In FIG. 4, a pixel P_A in frame A that is pointed to by an arrow from a pixel P_B in frame B is being used as a reference for pixel P_B. For example, pixel A1 in frame A is being used as a reference for pixel B1 in frame B.
- a pixel in frame A is labeled as unconnected if not used as a reference by any pixel in frame B. Accordingly, pixels A7 and A8 are unconnected.
- a pixel in frame A is connected if used as a reference for a pixel in frame B. Accordingly, pixels A1-A6 and A9-A12 are connected.
- Pixels A3 and A4 require special treatment, however, since pixels A3 and A4 are each being used as a reference by more than one pixel in frame B.
- pixel A3 is being used as a reference by pixels B3 and B5 of frame B, and the present invention uses an algorithm based on minimum mean-squared displaced frame difference (DFD) (to be defined infra) calculations to retain pixel A3 as a reference for pixel B3 or for pixel B5 but not for both pixels B3 and B5.
- DFD minimum mean-squared displaced frame difference
- the algorithm calculates DFD11 which is the mean-squared DFD between block 1 of frame A and block 1 of frame B.
- the algorithm calculates DFD12 which is the mean-squared DFD between block 1 of frame A and block 2 of frame B. If DFD11 is less than DFD12 then pixel A3 is retained as a reference for pixel B3 and pixel A3 is dropped as a reference for pixel B5. If DFD12 is less than DFD11 then pixel A3 is retained as a reference for pixel B5 and is dropped as a reference for pixel B3. If DFD11 is equal to DFD12 then any tiebreaker may be used.
- a first example of a tie-breaker is "scan order" which means that pixel A3 is retained as a reference for whichever of pixels B3 and B5 is first determined to use pixel A3 as a reference.
- a second example of a tie-breaker is to pick a random number R from a uniform distribution between 0 and 1, and to retain pixel A3: as a reference for pixel B3 if R is less than 0.5; or as a reference for pixel B5 if R is not less than 0.5.
- DFD11 is less than DFD12 so that pixel A3 is retained as a reference for pixel B3 and dropped as a reference for pixel B5.
- pixels B4 and B6 each use pixel A4 as a reference and the previously-described DFD-based algorithm may be used to retain pixel A4 as a reference for either pixel B4 or pixel B6 but not for both pixels B4 and B6.
- pixel A4 is retained as a reference for pixel B4 and dropped as a reference for pixel B6 based on the previously-described DFD-based algorithm.
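The tie-breaking rule just described reduces to a comparison of two block-level mean-squared DFD values; a hedged C sketch using the scan-order tie-break (the first of the two example tie-breakers given above) follows. Which frame-B block counts as "first" in scan order is left to the caller.

```c
/* A frame-A pixel (e.g., A3) is referenced by pixels in two different frame-B
 * blocks. Keep the reference for the frame-B block whose mean-squared DFD with
 * its matched frame-A block is smaller; on a tie, keep the block that comes
 * first in scan order. Returns 0 to keep the first candidate, 1 to keep the
 * second. */
static int resolve_shared_reference(double dfd_first, double dfd_second)
{
    return (dfd_second < dfd_first) ? 1 : 0;
}
```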
- a pixel in frame B is labeled as unconnected if not using a reference pixel in frame A after the DFD-based algorithm has been applied to resolve those cases in which a pixel in frame A is used as a reference by more than one pixel in frame B.
- pixels A3 and A4 were dropped as a reference for pixels B5 and B6, respectively, after application of the DFD-based algorithm, as explained supra. Accordingly, pixels B5 and B6 are unconnected. Otherwise pixels in frame B are connected. Accordingly, pixels B1-B4 and B7-B12 are connected. Note that if the previously-described DFD-based algorithm has been executed (i.e., when the connection states of the pixels in frame A were determined) then the arrow pointing from pixel B5 to pixel A3 and the arrow pointing from pixel B6 to pixel A4 in FIG. 4 are irrelevant since pixels A3 and A4 have already been dropped as a reference for pixels B5 and B6, respectively.
- While FIG. 3 shows step 32 being executed before step 33, step 33 may alternatively be executed before step 32.
- the previously-described DFD-based algorithm for resolving cases in which a pixel in frame A is used as a reference for more than one pixel in frame B may be executed at any time before, during, or after execution of steps 32 and 33.
- step 32 is executed prior to step 33
- the previously-described DFD-based algorithm may be executed before step 32, between steps 32 and 33, or after step 33.
- step 33 is executed prior to step 32
- the previously-described DFD-based algorithm may be executed before step 33, between steps 33 and 32, or after step 32.
- connection state i.e., connected or unconnected
- step 32 may alternatively be omitted, since the connection state of each pixel in frame B requires knowledge of the reference pixels in frame A for each pixel in frame B but does not require knowledge of the connection state of each pixel in frame A.
- the mean-squared DFD between a block in frame A and a block in frame B is defined as follows. Let n denote the number of pixels in each of said blocks. Let V_A1, V_A2, ..., V_An denote the values (e.g., luminance or chrominance) of the pixels in the block in frame A. Let V_B1, V_B2, ..., V_Bn denote the values of the corresponding pixels in the block in frame B.
- the mean-squared DFD between the block in frame A and the block in frame B is:
- Mean-squared DFD = [(V_A1 - V_B1)^2 + (V_A2 - V_B2)^2 + ... + (V_An - V_Bn)^2] / n     (1)
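The following is a direct C transcription of Eq. (1), added for clarity; the pixel values are assumed to be passed as parallel arrays va[] and vb[] holding the n corresponding samples of the frame-A and frame-B blocks.

```c
/* Mean-squared displaced frame difference (DFD) between an n-pixel block in
 * frame A and the corresponding block in frame B, per Eq. (1). */
static double mean_squared_dfd(const double *va, const double *vb, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        double d = va[i] - vb[i];
        sum += d * d;
    }
    return sum / (double)n;
}
```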
- the previously-described DFD-based algorithm is applicable to motion vectors with sub- pixel accuracy in relation to connections between subpixels, as utilized in high performance video coders.
- a subpixel is a location between adjacent pixels.
- the interpolated subpixel is used to calculate the DFD.
- no other changes to the MCTF algorithm are necessary except the use of a prescribed form of spatial interpolation when the reference position is not at an integer pixel location.
- a separable 9-tap FIR interpolation filter may be utilized for this purpose.
- Step 34 classifies the blocks in frame B as being “uni-connected” or “unconnected”, in accordance with embodiments of the present invention. If at least a fraction F of the pixels in a block of a frame are unconnected, then the block is an "unconnected” block; otherwise the block is a "uni-connected” block.
- the fraction F has a value reflective of a tradeoff between image quality and processing time, since I-BLOCKs require extra processing time.
- the fraction F may have a value, inter alia, of at least 0.50 (e.g., in a range of 0.50 to 0.60, 0.50 to 0.75, 0.60 to 0.80, 0.50 to 1.00, 0.30 to 1.00, 0.50 to less than 1.00, etc.).
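Step 34's block classification can be sketched as a simple count of unconnected pixels, as shown below; the per-pixel connection flags and the particular threshold value chosen for F are illustrative inputs supplied by the caller.

```c
/* Classify a frame-B block: "unconnected" when at least a fraction F of its
 * pixels are unconnected, "uni-connected" otherwise. pixel_unconnected[i] is
 * nonzero for each unconnected pixel of the block. Returns 1 for unconnected. */
static int block_is_unconnected(const int *pixel_unconnected, int n_pixels, double F)
{
    int count = 0;
    for (int i = 0; i < n_pixels; i++) {
        if (pixel_unconnected[i])
            count++;
    }
    return ((double)count / (double)n_pixels) >= F;
}
```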
- a matched block in frame A (called a uni-connected block of frame A) may be determined for each uni-connected block in frame B.
- the resultant uni-connected blocks in frames A and B form a set of matched pairs of uni-connected blocks, wherein each matched pair consists of a uni-connected block in frame B and a matched uni-connected block in frame A.
- Step 35 reclassifies the first and second uni-connected blocks of the matched pair of uni-connected blocks as being unconnected if the following reclassification criteria is satisfied, in accordance with embodiments of the present invention.
- V1 and V2 denote the pixel variance of the first and second uni-connected blocks, respectively.
- the pixel variance of a block is the mean-squared deviation between the pixel values in the block and the mean pixel value for the block.
- V_MIN denotes the minimum of V1 and V2.
- the first and second uni-connected blocks are reclassified as being unconnected blocks if the mean-squared DFD between the first and second blocks exceeds f*V_MIN, wherein f is a real number in a range of 0 to 1.
- f may be in a range of, inter alia, 0.4 to 0.6, 0.5 to 0.7, 0.4 to 0.75, 0.5 to 0.9, 0.4 to 1.00, etc.
- after Step 35, the classification of each block in frame B as "unconnected" or "uni-connected" is complete.
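The step 35 test combines the block variance defined above with the mean-squared DFD between the matched pair of blocks; a hedged C sketch follows, in which the variance helper and the caller-supplied threshold f are illustrative.

```c
/* Pixel variance of a block: mean-squared deviation of the pixel values from
 * the block mean. */
static double block_variance(const double *v, int n)
{
    double mean = 0.0, var = 0.0;
    for (int i = 0; i < n; i++)
        mean += v[i];
    mean /= (double)n;
    for (int i = 0; i < n; i++)
        var += (v[i] - mean) * (v[i] - mean);
    return var / (double)n;
}

/* Step 35 reclassification: a matched pair of uni-connected blocks is
 * reclassified as unconnected when the mean-squared DFD between them exceeds
 * f * min(V1, V2), with f a real number between 0 and 1. Returns 1 to reclassify. */
static int reclassify_as_unconnected(double ms_dfd, double v1, double v2, double f)
{
    double vmin = (v1 < v2) ? v1 : v2;
    return ms_dfd > f * vmin;
}
```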
- Step 36 categorizes each unconnected block in frame B as a P-BLOCK or an I-BLOCK, in accordance with embodiments of the present invention.
- An I-BLOCK will subsequently have its initial pixel values replaced by spatially interpolated values derived from neighboring pixels outside of the I-BLOCK, as will be described infra.
- the difference between an initial pixel value and a spatially interpolated pixel value of an I-BLOCK pixel is the residual error of the interpolated I-BLOCK pixel.
- the block of residual errors at all pixels in the I-BLOCK is called a residual error block of, or associated with, the I-BLOCK.
- after the interpolated I-BLOCK is formed, its residual error block is computed, and the absolute value of the sum of the residual errors (S_RES) in the residual error block is also computed.
- S_RES is called the "residual interpolation error" of the unconnected block.
- the residual errors are the errors at the pixels of the residual error block.
- forward and backward motion is performed on the unconnected block.
- the sums of the absolute DFDs of the forward and backward motion compensated prediction errors are computed.
- the minimum of the sums of the absolute DFDs for the forward and backward motion compensated prediction errors (S_MC-MIN) is determined.
- S_MC-MIN is called the "minimum motion compensated error" of the unconnected block.
- the unconnected block is classified as an I-BLOCK if S_RES is less than S_MC-MIN.
- the unconnected block is classified as a P-BLOCK if S_RES is not less than S_MC-MIN.
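Step 36 then reduces to comparing the two error measures just defined; the following small C sketch simply makes the decision rule explicit.

```c
/* Step 36: categorize an unconnected block. It becomes an I-BLOCK when its
 * residual interpolation error S_RES is smaller than its minimum motion
 * compensated error S_MC_MIN; otherwise it is a P-BLOCK. */
enum unconnected_block_type { P_BLOCK_TYPE, I_BLOCK_TYPE };

static enum unconnected_block_type
categorize_unconnected_block(double s_res, double s_mc_min)
{
    return (s_res < s_mc_min) ? I_BLOCK_TYPE : P_BLOCK_TYPE;
}
```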
- the I-BLOCKs determined in step 36 are processed by spatial interpolation from available neighboring pixels and the residual error block associated with the interpolated I-BLOCK is generated, in accordance with embodiments of the present invention.
- the blocks in a frame may have a fixed size or a variable size.
- in step 38, the residual error block associated with the interpolated I-BLOCK is overlaid on (i.e., placed within) the pertinent temporal high frame associated with the frame pair A and B being analyzed, for subsequent compression of the pertinent temporal high frame by the EZBC 54 after execution of the Spatial Analysis 53 of FIG. 1.
- FIG. 7C shows that the residual error block contains numerous near zero values and is thus suitable for being efficiently compressed.
- FIG. 5 shows a frame comprising I-BLOCKs, P-BLOCKs, and uni-connected blocks.
- the I-BLOCKs comprise blocks 1-3
- the P-BLOCKs and uni-connected blocks comprise the remaining blocks which include blocks 4-10.
- Each I-BLOCK has four possible neighbors: an upper neighbor, a lower neighbor, a left neighbor, and a right neighbor.
- the blocks of a frame are processed in accordance with a scan order and only "available" blocks (i.e., previously processed I-BLOCKs having established pixel values therein, P-BLOCKs having original data therein, and/or uni-connected blocks) can be used for the spatial interpolation.
- FIGS. 7A-7C illustrate a case in which only one neighbor block is available.
- the 4x4 pixel I-BLOCK 40 in FIG. 7A is defined by row segments 41-44, and it is assumed that the only available neighbors are in row segment 45 in a neighboring upper block above block 40.
- the example pixel values shown for I-BLOCK 40 in FIG. 7A are the initial values prior to the spatial interpolation.
- the pixel values in row segment 45 are used for spatial interpolation.
- the C-code in Table 1 may be used to effectuate the spatial interpolation.
- FIG. 7B shows the resultant interpolated values in the I-BLOCK 40 resulting from execution of the C-code of Table 1.
- FIG. 7C shows the residual error block determined by subtracting the interpolated pixel values of FIG. 7B from the initial pixel values of FIG. 7A.
- the residual error block depicted in FIG. 7C is overlaid within (i.e., placed within) the pertinent temporal high frame associated with the frame pair A and B being analyzed, for subsequent compression of the pertinent temporal high frame by the EZBC 54 of FIG. 1.
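The Table 1 C-code itself is not reproduced in this text. As a hedged stand-in for the single-available-neighbor case of FIGS. 7A-7C, one simple possibility is to replicate the available upper neighbor row (row segment 45) down through the 4x4 I-BLOCK, as sketched below; this is an assumed illustration, not the patent's actual Table 1 interpolation.

```c
/* Illustrative single-neighbor interpolation: only the row of neighbor pixels
 * directly above the 4x4 I-BLOCK is available, so its values are replicated
 * down each column of the block. */
static void interpolate_from_upper_row(const unsigned char upper[4],
                                       unsigned char block[4][4])
{
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            block[r][c] = upper[c];
}
```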
- Tables 2 and 3 illustrate interpolation algorithms in which two neighboring blocks are available.
- Table 2 specifies formulas for calculating the interpolated pixel values in[0] ... in[15] (see FIG. 6) in the 4x4 I-BLOCK using available neighboring pixels in the upper and left positions in accordance with the notation of FIG. 6.
- Table 3 specifies C-code for calculating the interpolated pixel values in[0] ... in[15] in the 4x4 I-BLOCK using neighboring pixels in the upper and lower positions in accordance with the notation of FIG. 6.
- Table 4 illustrates interpolation algorithms in which three neighboring blocks are available.
- Table 4 specifies C-code for calculating the interpolated pixel values in[0] ... in[15] (see FIG. 6) in the 4x4 I-BLOCK using neighboring pixels in the upper, left, and right positions in accordance with the notation of FIG. 6.
- Table 5 illustrates interpolation algorithms in which four neighboring blocks are available.
- Table 5 specifies C-code for calculating the interpolated pixel values in[0] ... in[15] (see FIG. 6) in the 4x4 I-BLOCK using neighboring pixels in the upper, lower, left, and right positions in accordance with the notation of FIG. 6.
- FIG. 8 illustrates the variable block size case, which arises from 5-level hierarchical variable size block matching where block sizes range from 4x4 to 64x64.
- I-BLOCKs 11 and 12 are shown.
- Block 11 has a pixel size of 8x8 and block 12 has a pixel size of 4x4. If I-BLOCKs 11 and 12 are processed in the previously mentioned left-to-right and then top-to-bottom scanning order (i.e., block 11 is interpolated before block 12 is interpolated) then block 12 will not be available for block 11's interpolation.
- FIGS. 9A-9F (collectively, "FIG. 9") illustrate a directional spatial interpolation scheme for determining pixel values for I-BLOCKs, in accordance with embodiments of the present invention.
- I-BLOCK 61 contains pixels P22, P23, P24, P25, P32, P33, P34, P35, P42, P43, P44, P45, P52, P53, P54, and P55.
- all pixels not in I-BLOCK 61 are neighbors of the pixels in I-BLOCK 61.
- the interpolation for the pixels in I-BLOCK 61 is along parallel lines making a fixed angle θ with the X axis, as illustrated by one of the parallel lines, namely line 66, shown in FIG. 9A.
- Each Figure of FIGS. 9B-9F represents an embodiment with a different value of θ.
- each pixel is a square.
- θ = 45 degrees for the line 66 in FIG. 9A, which passes through diagonally opposite vertices of pixels P25, P34, P43, and P52.
- θ will differ from 45 degrees for line 66 in FIG. 9A if the pixels have a rectangular, non-square shape.
- θ and θ + 180 degrees represent the same set of parallel lines.
- the interpolations along each such line utilize pixel values of the nearest available neighbors on the line, wherein an available neighbor is a neighbor whose pixel value has been previously established.
- Lines 63, 64, ..., 69 are called "directional lines." Since line 63 passes through pixel P22, line 63 is used to determine the value of pixel P22 based on interpolation using: neighbors P13 and P31 if both P13 and P31 are available; only neighbor P13 if P13 is available and P31 is not available; or only neighbor P31 if P31 is available and P13 is not available.
- Since line 64 passes through pixels P23 and P32, line 64 is used to determine the value of pixels P23 and P32 based on interpolation using: neighbors P14 and P41 if both P14 and P41 are available; only neighbor P14 if P14 is available and P41 is not available; or only neighbor P41 if P41 is available and P14 is not available.
- interpolations along lines 65, 66, 67, 68, and 69 are used to determine pixel values at (P24, P33, P42), (P25, P34, P43, P52), (P35, P44, P53), (P45, P54), and (P55), respectively.
- Lines 67-69 present alternative possibilities for nearest neighbors.
- line 68 has neighbors (P36, P27, and P18) and (P63, P72, and P81) at opposite borders of the I-BLOCK 61.
- the directional interpolation will use pixel P36 if available, since pixel P36 is the nearest neighbor of the neighbors (P36, P27, and P18). If pixel P36 is unavailable then the directional interpolation will use pixel P27 if available, since pixel P27 is the nearest neighbor of the neighbors (P27 and P18). If pixel P27 is unavailable then the directional interpolation will use the remaining neighbor pixel P18 if available.
- if pixel P18 is also unavailable, the directional interpolation will not use any of pixels (P36, P27, and P18). Similarly, the directional interpolation will choose one pixel of the neighbor pixels (P63, P72, and P81) based on the nearest available neighbor criteria for making this choice.
- the directional interpolation along line 68 for determining the values of pixels P45 and P54 will utilize one of the following neighbor combinations: P63 alone, P72 alone, P81 alone, P63 and P36, P63 and P27, P63 and P18, P72 and P36, P72 and P27, P72 and P18, P81 and P36, P81 and P27, P81 and P18, P36 alone, P27 alone, and P18 alone.
- the directional interpolation using linear interpolation along line 68 is next illustrated for determining pixel values for pixels P45 and P54, assuming that neighbor pixels P36 and P63 are both available.
- Points Q0, Q1, Q2, Q3, and Q4 along line 68 are as shown in FIG. 9B.
- Each of points Q0, Q1, Q2, Q3, and Q4 is at the midpoint of the portion of line 68 that spans pixel P27, P36, P45, P54, and P63, respectively.
- D12, D13, and D14 respectively denote the distance between point Q1 and points Q2, Q3, and Q4.
- Let F1214 and F1314 respectively denote D12/D14 and D13/D14.
- V36 and V63 respectively denote the pixel value at pixel P36 and P63.
- the pixel value at pixel P45 and P54 is (1-F1214)*V36 + F1214*V63 and (1-F1314)*V36 + F1314*V63, respectively.
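The linear interpolation along a directional line reduces to a single fractional weight; a hedged C sketch of the computation just illustrated for pixels P45 and P54 follows, with v_near playing the role of V36 (at point Q1), v_far the role of V63 (at point Q4), and the distances measured along the line.

```c
/* Linear interpolation of an I-BLOCK pixel value along a directional line.
 * v_near is the nearest available neighbor value at one end (e.g., V36 at Q1),
 * v_far the available neighbor value at the other end (e.g., V63 at Q4),
 * d_pixel the distance from Q1 to the pixel's midpoint on the line (e.g., D12
 * for P45), and d_total the distance from Q1 to Q4 (D14). The weight F is
 * then F1214 = D12/D14, and the result is (1-F)*v_near + F*v_far. */
static double directional_interpolate(double v_near, double v_far,
                                      double d_pixel, double d_total)
{
    double F = d_pixel / d_total;
    return (1.0 - F) * v_near + F * v_far;
}
```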
- the directional interpolation for linear interpolation along line 68 raises the question of how to do the interpolation if neighbor pixel P36 is not available and neighbor pixel P27 is available. If V27 denotes the pixel value at pixel P27 then V27 will substitute for V36 wherever V36 appears in the interpolation formula.
- the scope of the present invention includes three options for treating the distances along line 68.
- a first option is to retain the parameters F1214 and F1314 in the interpolation formulas, which is conceptually equivalent to utilizing point Q1 as a reference for measuring distances even though pixel P36 has been replaced by pixel P27 as the nearest available neighbor.
- the pixel value at pixel P45 and P54 is (1-F1214)*V27 + F1214*V63 and (1-F1314)*V27 + F1314*V63, respectively.
- a second option is to utilize distances from point Q0 where line 68 begins at neighbor pixel P27.
- D02, D03, and D04 respectively denote the distance between point Q0 and point Q2, Q3, and Q4.
- F0204 and F0304 respectively denote D02/D04 and D03/D04. Then the pixel value at pixel P45 and P54 is (1-F0204)*V27 + F0204*V63 and (1-F0304)*V27 + F0304*V63, respectively.
- a third option is to use a compromise between the first and second options.
- the parameters (F1214,F0204)_AVE and (F1314,F0304)_AVE are used, wherein (F1214,F0204)_AVE is a weighted or unweighted average of F1214 and F0204, and (F1314,F0304)_AVE is a weighted or unweighted average of F1314 and F0304.
- the pixel value at pixel P45 and P54 is (1-(F1214,F0204)_AVE)*V27 + (F1214,F0204)_AVE*V63 and (1-(F1314,F0304)_AVE)*V27 + (F1314,F0304)_AVE*V63, respectively.
- Values at pixels P22, P32, P42, and P52 are determined from inte ⁇ olation along line 71, using a subset of neighbor pixels P12, P62, P72, P82, and P92.
- Values at pixels P23, P33, P43, and P53 are determined from inte ⁇ olation along line 72, using a subset of neighbor pixels P13, P63, P73, P83, and P93.
- Values at pixels P24, P34, P44, and P54 are determined from inte ⁇ olation along line 73, using a subset of neighbor pixels P14, P64, P74, P84, and P94.
- Values at pixels P25, P35, P45, and P55 are determined from inte ⁇ olation along line 74, using a subset of neighbor pixels PI 5, P65, P75, P85, and P95.
- ⁇ 135 degrees for directional lines 81-87.
- the value at pixel P52 is determined from interpolation along line 81, using a subset of neighbor pixels P41, P63, P74, P85, and P96.
- Values at pixels P42 and P53 are determined from interpolation along line 82, using a subset of neighbor pixels P31, P64, P75, P86, and P97.
- Values at pixels P32, P43, and P54 are determined from interpolation along line 83, using a subset of neighbor pixels P21, P65, P76, P87, and P98.
- Values at pixels P22, P33, P44, and P55 are determined from interpolation along line 84, using a subset of neighbor pixels P11, P66, P77, P88, and P99.
- Values at pixels P23, P34, and P45 are determined from interpolation along line 85, using a subset of neighbor pixels P12, P56, P67, P78, and P89.
- Values at pixels P24 and P35 are determined from interpolation along line 86, using a subset of neighbor pixels P13, P46, P57, P68, and P79.
- the value at pixel P25 is determined from interpolation along line 87, using a subset of neighbor pixels P14, P36, P47, P58, and P69.
- θ = 0 degrees (or 180 degrees) for directional lines 76-79.
- Values at pixels P22, P23, P24, and P25 are determined from interpolation along line 76, using a subset of neighbor pixels P21, P26, P27, P28, and P29.
- Values at pixels P32, P33, P34, and P35 are determined from interpolation along line 77, using a subset of neighbor pixels P31, P36, P37, P38, and P39.
- Values at pixels P42, P43, P44, and P45 are determined from interpolation along line 78, using a subset of neighbor pixels P41, P46, P47, P48, and P49.
- Values at pixels P52, P53, P54, and P55 are determined from interpolation along line 79, using a subset of neighbor pixels P51, P56, P57, P58, and P59.
- θ = 26.56 degrees (i.e., θ is the inverse tangent of 2/4) for directional lines 101-105.
- Values at pixels P22 and P23 are determined from interpolation along line 101, using a subset of neighbor pixels P31 and P14.
- Values at pixels P32, P33, P24, and P25 are determined from interpolation along line 102, using a subset of neighbor pixels P41 and P16.
- Values at pixels P42, P43, P34, and P35 are determined from interpolation along line 103, using a subset of neighbor pixels P51, P26, P27, P18, and P19.
- Values at pixels P52, P53, P44, and P45 are determined from interpolation along line 104, using a subset of neighbor pixels P61, P36, P37, P28, and P29. Values at pixels P54 and P55 are determined from interpolation along line 105, using a subset of neighbor pixels P71, P46, P47, P38, and P39.
- FIGS. 9A-9F illustrate directional spatial interpolation characterized by all pixel values in the I-BLOCK being determined by spatial interpolation along parallel directional lines. In contrast, FIGS. 7A-7C and Tables 1-5 illustrate nondirectional spatial interpolation characterized by all pixel values in the I-BLOCK being determined by nearest available neighbor spatial interpolation in which no directional line passing through the I-BLOCK is utilized in the spatial interpolations.
- Another spatial interpolation method for an I-BLOCK is hybrid spatial interpolation, which comprises a combination of directional spatial interpolation and nondirectional spatial interpolation. With hybrid spatial interpolation, at least one directional line is used for some spatial interpolations in the I-BLOCK, and some pixel values in the I-BLOCK are determined by nearest available neighbor spatial interpolation in which no directional line passing through the I-BLOCK is utilized.
- FIG. 10 illustrates hybrid spatial interpolation, in accordance with embodiments of the present invention.
- FIG. 10 includes directional lines 121-124 which are used in the spatial interpolations for determining values at pixels P25, P34, P43, and P52 (along line 121), pixels P35, P44, and P53 (along line 122), pixels P45 and P54 (along line 123), and pixel P55 (along line 124).
- values at pixels P22, P23, P24, P32, P33, and P42 are determined by nondirectional spatial interpolation using nearest neighbor upper pixels P12, P13, P14 and nearest neighbor left pixels P21, P31, and P41.
- the values for the pixels of each I-BLOCK in a given frame are calculated by spatial interpolation based on values of nearest available neighbor pixels relative to each said I-BLOCK in the given frame.
- a given pixel outside of a specified I-BLOCK of the given frame is said to be a neighbor pixel relative to the I-BLOCK if said given pixel is sufficiently close to the I-BLOCK to potentially contribute to the value of a pixel in the I-BLOCK by said spatial interpolation.
- the present invention discloses embodiments relating to a processing of video frames, wherein each frame processed is divided into M blocks that include at least two differently sized blocks, and wherein M is at least 9.
- the current frame being processed is divided into blocks of pixels, wherein each such block BCUR0 of pixels in the current frame is predicted from a block BREF0 of the same size in the reference frame.
- the block BCUR0 of pixels in the current frame is called a "current block" or a "self block".
- the self block BCUR0 in the current frame is spatially shifted from the block BREF0 in the reference frame by a motion vector V0.
- a pixel value ICUR0(PCUR0) at a pixel location P0 (identified by vector PCUR0) in the self block BCUR0 in the current frame is predicted to equal the pixel value IREF0(PCUR0 - V0) at a pixel location identified by vector (PCUR0 - V0) in the block BREF0 in the reference frame.
- the dependent variable "I” denotes a pixel value of luminance and/or chrominance.
- the discontinuities may have the form of sharp horizontal and vertical edges which may be highly visible to the human eye and may also produce ringing effects (i.e., big coefficients in high frequency sub-bands) in the Fourier-related transform used for transform coding of the residual frames.
- With overlapped block motion compensation (OBMC), nearest neighboring blocks of the self block BCUR may be utilized for predicting the pixel values in the self block BCUR.
- the nearest neighboring blocks may consist of the four nearest neighboring blocks immediately to the right, bottom, left, and top of the self block BCUR, respectively denoted as B1, B2, B3, and B4, which are displaced from corresponding blocks BREF1, BREF2, BREF3, and BREF4 in the reference frame by the motion vectors V1, V2, V3, and V4, respectively.
- the blocks BREF1, BREF2, BREF3, and BREF4 in the reference frame are most likely not nearest neighbor blocks of the block BREF0 in the reference frame.
- a weight W(PCUR0) is associated with a pixel location P0 (identified by vector PCUR0) in the self block BCUR0.
- Weights W(P1), W(P2), W(P3), and W(P4) are associated with the pixel locations P1, P2, P3, and P4 in the nearest neighboring blocks B1, B2, B3, and B4, respectively, such that the pixel locations P1, P2, P3, and P4 correspond to the pixel location P0. With OBMC, the pixel value ICUR0(PCUR0) at the pixel location P0 is predicted to equal W(PCUR0)*IREF0(PCUR0 - V0) + [W(P1)*I(P1 - V1) + W(P2)*I(P2 - V2) + W(P3)*I(P3 - V3) + W(P4)*I(P4 - V4)].
- the predicted pixel value at the pixel location identified by vector PCUR0 in the self block BCUR0 mitigates the discontinuities introduced at block borders by taking into account pixel value contributions from nearest neighbor blocks in their displaced locations in the reference frame.
- An array of weights that includes W(PCUR0) for all pixels in the self block BCUR0 constitutes a "weighting window" for the self block BCUR0.
- an array of weights that includes W(P1), W(P2), W(P3), and W(P4) for all pixels in the nearest neighboring blocks B1, B2, B3, and B4 constitutes a weighting window for the nearest neighboring blocks B1, B2, B3, and B4, respectively.
- Examples of weighting windows and their generation according to the present invention are presented in FIGS. 17, 18, 21, and 22, discussed infra.
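- As a concrete illustration of the weighted prediction described above, the following Python sketch (the function name, the clamping helper, and the toy data are assumptions for illustration) forms the OBMC prediction of one pixel of the self block from the self motion vector, the four neighbor motion vectors, and the corresponding weighting-window entries, using integer weights that sum to N at each pixel position.

```python
import numpy as np

def obmc_predict_pixel(ref, p, v_self, neighbor_vs, w_self, neighbor_ws, n=4):
    """Predict one pixel at location p (row, col) in the current frame.

    ref         : reference frame as a 2-D numpy array
    v_self      : (dy, dx) motion vector of the self block
    neighbor_vs : (dy, dx) motion vectors of the right/bottom/left/top neighbors
    w_self      : integer weight of the self block at this pixel position
    neighbor_ws : corresponding integer weights of the four neighbors
    n           : normalization constant (the weights at each position sum to n)
    """
    def sample(frame, loc, mv):
        # integer-pel sample of the reference frame, clamped to the frame border
        r = int(np.clip(loc[0] - mv[0], 0, frame.shape[0] - 1))
        c = int(np.clip(loc[1] - mv[1], 0, frame.shape[1] - 1))
        return float(frame[r, c])

    acc = w_self * sample(ref, p, v_self)
    for w, v in zip(neighbor_ws, neighbor_vs):
        acc += w * sample(ref, p, v)
    return acc / n   # divide by n because the integer weights sum to n, not 1

# toy usage with a flat 16x16 reference frame
ref = np.full((16, 16), 128.0)
print(obmc_predict_pixel(ref, (5, 5), (1, 0), [(0, 1), (0, 0), (-1, 0), (1, 1)], 2, [1, 0, 0, 1]))
```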
- the present invention discloses variable block size OBMC.
- the blocks in the current frame and their relationship to blocks in the reference frame are illustrated next in FIGS. 11 and 12.
- FIG. 11 depicts a current frame 240 that has been configured into variable size blocks (e.g., by a quad tree algorithm), in accordance with embodiments of the present invention.
- the current frame 240 comprises 22 blocks as shown.
- Each block of FIG. 11 is processed as a self block in consideration of its nearest neighbor blocks. For example, consider block 241 being processed as a self block.
- Self block 241 has nearest neighbor blocks 242-246.
- the self block may have a size that is equal to, larger than or smaller than the nearest neighbor block.
- the size of the self block 241 is: equal to the size of its nearest neighbor block 242, larger than the size of its nearest neighbor blocks 245 and 246, and smaller than the size of its nearest neighbor blocks 243 and 244.
- FIG. 12 depicts the current frame 240 of FIG. 11 and a reference frame 260 together with vectors 251-256 that respectively link blocks 241-246 in the current frame 240 with corresponding blocks 261-266 in the reference frame 260, in accordance with embodiments of the present invention.
- the normal projections of the vectors 251-256 onto the reference frame 260 are the motion vectors denoting the vector displacements of the blocks 241-246 from the blocks 261-266, respectively.
- although the blocks 261-266 appear for simplicity as having the same size in the reference frame 260, the blocks 261-266 in reality have the same size as their corresponding blocks 241-246, respectively, in the current frame 240.
- the reference frame 260 represents one or more reference frames, each such reference frame having its own motion vectors and blocks associated with the blocks of the current frame, since each pixel in the current frame may be predicted from corresponding pixels in a single reference frame or in a plurality of reference frames.
- the present invention discloses a method of compressing video that involves a spatiotemporal or space-time transformation utilizing motion compensated blocks in pairs of input frames, such as the representative pair having input frames A and B discussed supra in conjunction with FIG. 3.
- These blocks are of various sizes and are chosen to match the local motion vector field, so there are small blocks where the motion has a high spatial gradient and large blocks in flatter regions where the spatial gradient of the motion is small. Nevertheless, as explained supra, the motion vectors of the different blocks are not continuous across the block edges. As a result, artifacts can be created in the prediction of one frame from the other frame.
- OBMC of the present invention addresses this problem by making the prediction from a weighted combination of estimates using the current block's motion vector and the motion vectors of its nearest neighbor blocks.
- the OBMC of the present invention is further improved by iterative adjustments to the block motion vectors to arrive at improved motion vectors, which increase the accuracy of the resulting frame prediction and therefore increase coding efficiency. With the present invention, this iteration may be optionally omitted.
- the output of the MCTF obtained using the OBMC is then compressed for transmission or storage. Additionally, the motion vectors are sent to the receiver as overhead, and may constitute about 10-15% of the total bit rate.
- FIG. 13A is a flow chart depicting steps 211-214 for utilizing variable block size OBMC in the MCTF temporal high frames of FIG. 2, in accordance with embodiments of the present invention.
- Prior to step 211, the current frame has been configured into M blocks that include at least two differently sized blocks, wherein M is at least 9.
- Step 211 performs the variable size block matching (VSBM) to obtain the initial vectors for the motion blocks as is known in the art (e.g., see Ostermann, and Zhang, "Video Processing and Communications", Prentice-Hall, pp. 182-187 (2002)).
- Step 212 classifies the blocks in the current frame as being either I-BLOCKs or motion blocks.
- a "motion block” is defined to be a non I-BLOCK. Detection and classification of unconnected blocks (i.e., I-BLOCKs and P-BLOCKs) and uniconnected blocks was described supra in conjunction with steps 31-36 ofFIG.
- FIG. 15 describes infra fas various categories of motion blocks including P-BLOCKs, DEFAULT blocks, and REVERSE blocks.
- Step 213 performs variable block size OBMC to provide an overlap smoothing for the motion blocks and the I-BLOCKs.
- the resulting motion field (i.e., the smoothed motion blocks and/or I-BLOCKs) is then used by the MCTF in step 214.
- FIG. 13B is a flow chart depicting steps 221-223 which describe the variable block size OBMC processing step 213 of FIG. 13A, in accordance with embodiments of the present invention.
- step 221 executes a shrinking scheme that generates a weighting window for the self block and its associated nearest neighbor block which takes into account whether the self block is a motion block or an I-BLOCK, and also takes into account whether the nearest neighbor block is a motion block or an I-BLOCK. If the nearest neighbor block is an I-BLOCK, the shrinking scheme of step 221 invokes step 222 which executes a reflecting scheme that impacts the generation of the weighting windows in a manner that accounts for the intrinsic inability of the nearest neighbor I-BLOCK to communicate with the reference frame.
- the shrinking scheme execution step 221 is performed for all nearest neighbor blocks of the given self block, and then for all self blocks of the current frame in a sequence dictated by a predetermined scan order.
- Step 223 is then executed. Up to this point, an initial motion vector for each self block in the current frame has been utilized.
- although said initial motion vectors for the self blocks were used to generate the weighting windows, said initial motion vectors may not be optimum inasmuch as a perturbed set of motion vectors may result in more accurate predictions of pixel values in the current frame when the generated weighting windows are taken into account.
- step 223 performs an iterative process such that each iteration perturbs the motion vectors in a manner that improves the accuracy of pixel values in the current frame in light of the weighting windows generated in step 222.
- Steps 221-223 in FIG. 13B reflect a simplified description of the variable block size OBMC processing step 213 of FIG. 13A.
- FIG. 23 will present infra a flow chart that describes in detail embodiments of the variable block size OBMC processing of steps 221-222 of FIG. 13B.
- FIG. 24 will present infra a flow chart that describes in detail the iterative process for improving the motion vectors in step 223 of FIG. 13B.
- FIG. 14 is a block diagram of frame processing associated with the flow charts of FIGS. 13A-13B, in accordance with embodiments of the present invention.
- FIG. 14 illustrates the following sequentially ordered processing: the component (YUV) hierarchical variable-size block matching (HVSBM) motion estimation 231 (which corresponds to step 211 of FIG. 13A); I-BLOCK detection 232 (which corresponds to step 212 of FIG. 13A); variable block size OBMC execution 233 (which is performed in step 213 of FIG. 13A); MCTF processing 234 (which corresponds to step 214 of FIG. 13A); and MC-EZBC coder processing 235.
- the HVSBM motion estimation 231 generates motion vectors which are processed by an arithmetic coder 236.
- a bit stream 237 is formed from coded output generated by the EZBC coder 235 and from coded output generated by the arithmetic coder 236.
- the MCTF processing 234 sequentially comprises the processing of I-BLOCKs, P-BLOCKs, REVERSE blocks, and DEFAULT blocks.
- the REVERSE block prediction comprises a prediction of those blocks best predicted from the previous B frame.
- the pixels of the DEFAULT block include those pixels actually taking part in the MC filtering, both for the 'predict' step for parent frame H and the 'update' step for parent frame L.
- FIG. 15 depicts two successive input frames A and B to be transformed by the MCTF into a high temporal frame H and a low temporal frame L, in accordance with embodiments of the present invention. See FIG. 2 (and a discussion thereof supra) for a derivation of the H and L frames from frames A and B by the MCTF processing. In FIG. 2, however, a different choice is made for the temporal location of the H and L frames.
- In FIG. 2, the L frame is time referenced to that of the input frame A and the H frame is time referenced to that of the input frame B, whereas in FIG. 15 the H frame is time referenced to that of the input frame A and the L frame is time referenced to that of the input frame B.
- FIG. 16 depicts a self block and four nearest neighbor blocks in the current frame, in accordance with embodiments of the present invention.
- the four nearest neighbor blocks of the self block 270 in FIG. 16 are a right nearest neighbor 271, a lower nearest neighbor 272, a left nearest neighbor 273, and an upper nearest neighbor 274.
- the self block and its nearest neighbor blocks are assumed to have the same size.
- FIGS. 17A and 17B illustrate 4x4 weighting windows, in accordance with embodiments of the present invention. In FIG. 17A the self block is a motion block, and in FIG. 17B the self block is an I-BLOCK.
- FIG. 17A shows a weighting window 270A for the self block and its associated nearest neighbor weighting windows 271A, 272A, 273A, and 274A for the right nearest neighbor block, lower nearest neighbor block, left nearest neighbor block, and upper nearest neighbor block, respectively.
- FIG. 17B shows a weighting window 270B for the self block and its associated nearest neighbor weighting windows 271B, 272B, 273B, and 274B for the right nearest neighbor block, lower nearest neighbor block, left nearest neighbor block, and upper nearest neighbor block, respectively.
- the following convention is used to represent the weighting windows in the examples of FIGS. 17-18 and 21-22, using FIG. 17A for illustrative purposes.
- For the self block, the pixel weights shown in FIG. 17A are in the same relative pixel positions as are the corresponding pixel values at the physical pixel locations. For the nearest neighbor blocks, however, the pixel weights shown in FIG. 17A are for pixels of the self block, as the following examples illustrate.
- the uppermost row 276A of weights (1 1 1 1) of the upper neighbor weighting window 274A is for pixels in the top row 277A of the self block (in terms of physical pixel locations) weighting window 270A.
- the rightmost column 278A of weights (1 1 1 1) of the right neighbor weighting window 271A is for pixels in the rightmost column 279A of the self block 270A.
- the preceding convention has the visual advantage that if a nearest neighbor weighting window is superimposed over the self block weighting window 270A, the indicated weight at a given matrix position in the nearest neighbor-block weighting window and the weight directly underneath this matrix position in the self-block weighting window are being used as weights for the same physical pixel in the OBMC calculation of the present invention.
- the sum of the weights at each matrix position is 4, or generally N for NxN blocks. For example, in FIG. 17A the weights at a given matrix position for the self block weighting window 270A, right neighbor weighting window 271A, lower neighbor weighting window 272A, left neighbor weighting window 273A, and upper neighbor weighting window 274A are 2, 1, 0, 0, and 1, respectively, which sum to 4.
- Alternatively, the sum of the weights at each matrix position could be made equal to 1, which would result in the weights having a fractional or decimal value less than or equal to 1.
- the self block is a motion block and two-dimensional (2-D) bilinear (i.e., straight-line) interpolation is used to determine the weights in the self block weighting window 270A and the nearest neighbor weighting windows 271A, 272A, 273A, and 274A.
- the 2-D bilinearly interpolated pixel values correspond to linear interpolation along the straight line between the center of the self block and the center of the neighboring block. Since the 2-D bilinear interpolation is two-dimensional, a bilinearly interpolated weight is the product of such interpolated values in two mutually orthogonal directions. The weights resulting from the 2-D bilinear interpolation have been rounded to the nearest integer, subject to the constraint that the normalization condition (i.e., the sum of the weights associated with each pixel is equal to N) is satisfied.
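- A generic construction of such bilinearly interpolated weighting windows can be sketched in Python as follows. This is a separable linear-falloff construction under stated assumptions; it is not claimed to reproduce the exact integer windows of FIG. 17A or FIG. 18A.

```python
import numpy as np

def bilinear_obmc_windows(n=4):
    """Build a self-block window and four neighbor windows by separable
    (bilinear) interpolation, then normalize and round so the five weights at
    every pixel position sum approximately to n (the convention used above)."""
    centers = np.arange(n) + 0.5                              # pixel centers
    ramp_self = 1.0 - np.abs(centers - n / 2.0) / n           # falloff from self-block center
    ramp_prev = np.clip((n / 2.0 - centers) / n, 0, None)     # toward the top/left neighbor
    ramp_next = np.clip((centers - n / 2.0) / n, 0, None)     # toward the bottom/right neighbor

    w_self  = np.outer(ramp_self, ramp_self)
    w_top   = np.outer(ramp_prev, ramp_self)
    w_bot   = np.outer(ramp_next, ramp_self)
    w_left  = np.outer(ramp_self, ramp_prev)
    w_right = np.outer(ramp_self, ramp_next)

    stack = np.stack([w_self, w_right, w_bot, w_left, w_top])
    stack = stack / stack.sum(axis=0, keepdims=True) * n      # per-pixel sum = n before rounding
    return np.rint(stack).astype(int)                         # integer weights (sum ~ n after rounding)

windows = bilinear_obmc_windows(4)
print(windows[0])            # self-block window
print(windows.sum(axis=0))   # per-pixel sums, approximately n
```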
- In FIG. 17B, the self block is an I-BLOCK.
- the weights in the weighting windows in FIG. 17B are derived from the weights in the weighting windows in FIG. 17A by extracting portions of the weights in the self-block weighting window and adding said portions to selected weights in the nearest neighbor weighting windows, subject to the constraint that the normalization condition is satisfied.
- the "portion of the weights" adjustment was experimentally determined in terms of giving a good visual e ⁇ or performance and was not optimized in any way. Other "portion of the weights" adjustments may be utilized if validated or substantiated by experimental and/or analytical methodology. Said selected weights are weights which are near the block boundaries. The preceding modification of the weights of FIG.
- the weight distribution method used for FIG. 17B is a "radiation scheme" that radiates weight components outward from the self cell to its neighbor cells.
- the nearest neighbor weighting window for the left nearest neighbor may be determined by exploiting the reflective symmetry shown in FIG. 16, or may be calculated via bilinear interpolation.
- the nearest neighbor weighting window for the top nearest neighbor may be determined by exploiting the reflective symmetry shown in FIG. 16, or may be calculated via bilinear interpolation.
- FIG. 17 illustrates an embodiment wherein the self block is an I-BLOCK, and wherein the generated window of the self block consists of first pixel weights and second pixel weights. The first pixel weights are less than what the first pixel weights would have been if the self block had been a motion block, and the second pixel weights are equal to what the second pixel weights would have been if the self block had been the motion block.
- the first pixel weights of "2" in selected matrix positions of the self I-BLOCK 270B of FIG. 17B are less than the weights of "3" in the corresponding selected matrix positions of the motion block 270A of FIG. 17A.
- the second pixel weights are the remaining pixel weights which are the same weights (i.e., same weight magnitudes) in the self I-BLOCK 270B of FIG. 17B and the motion block 270A of FIG. 17A.
- generating the weighting window for the self block may comprise: generating a first weighting window for the self block as if the self block is the motion block (e.g., generating the weighting window 270A of FIG. 17A), and then modifying the first weighting window (e.g., via the radiation scheme described supra) to form the weighting window for the self I-BLOCK.
- the generated window of each neighbor block of the self block may consist of third pixel weights and fourth pixel weights, wherein the third pixel weights are greater than what the third pixel weights would have been if the self block had been the motion block, and wherein the fourth pixel weights are equal to what the fourth pixel weights would have been if the self block had been the motion block.
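- A toy version of the radiation scheme is sketched below in Python. The border positions chosen and the amount of weight moved are assumptions made for illustration; the patent's own adjustment was tuned experimentally and is read off FIGS. 17B and 18B.

```python
import numpy as np

def radiate(self_w, neighbor_ws, amount=1):
    """Move `amount` of weight from border positions of an I-BLOCK self window
    to whichever neighbor window is already active there, keeping the
    per-pixel weight sum unchanged (a toy stand-in for the radiation scheme)."""
    self_w = self_w.copy()
    neighbor_ws = [w.copy() for w in neighbor_ws]
    n = self_w.shape[0]
    for r in range(n):
        for c in range(n):
            on_border = r in (0, n - 1) or c in (0, n - 1)
            if not on_border or self_w[r, c] < amount:
                continue
            for w in neighbor_ws:          # hand the weight to the first neighbor
                if w[r, c] > 0:            # window that is nonzero at this position
                    w[r, c] += amount
                    self_w[r, c] -= amount
                    break
    return self_w, neighbor_ws

# toy 4x4 windows: a flat self window and one "upper neighbor" active in the top row
self_w = np.full((4, 4), 3)
top_w = np.zeros((4, 4), dtype=int)
top_w[0, :] = 1
new_self, (new_top,) = radiate(self_w, [top_w])
print(new_self)
print(new_top)
```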
- FIGS. 18A and 18B illustrate 8x8 weighting windows wherein the associated self block is a motion block and an I-BLOCK, respectively, in accordance with embodiments of the present invention.
- FIG. 18A shows a weighting window 270C for the self block and its associated nearest neighbor weighting windows 271C, 272C, 273C, and 274C for the right nearest neighbor block, lower nearest neighbor block, left nearest neighbor block, and upper nearest neighbor block, respectively.
- FIG. 18B shows a weighting window 270D for the self block and its associated nearest neighbor weighting windows 271D, 272D, 273D, and 274D for the right nearest neighbor block, lower nearest neighbor block, left nearest neighbor block, and upper nearest neighbor block, respectively.
- the methods for generating the weighting windows in FIG. 18 are the same as the methods used to generate the weighting windows in FIG. 17 as described supra.
- the self block and its associated nearest neighbor blocks may all have the same size, or the size of the self block may differ from the size of at least one of its associated nearest neighbor blocks.
- there are three embodiments for a nearest neighbor block associated with a given self block:
- the spatial nearest neighbor block size is the same as that of the self block (e.g., self block 241 and its neighbor block 242 in FIG. 11), which is the "standard same block-size" case analyzed supra in conjunction with FIGS. 17-18.
- the weighting windows for this embodiment are generated as described supra in conjunction with FIGS. 17-18.
- the spatial nearest neighbor block size is larger than that of the self block (e.g., self block 241 and its neighbor blocks 243 and 244 in FIG. 11), which is treated by utilizing the portion of the larger nearest neighbor block that is the same size as the self block, wherein said portion occupies the same space within the current frame as does the nearest neighbor block of the standard same block-size case (a).
- FIG. 19 depicts frame 240 of FIG. 11, wherein portions 243A and 244A of nearest neighbor blocks 243 and 244 are shown, in accordance with embodiments of the present invention. Portions 243A and 244A have the same size as self block 241 and occupy the space within the frame 240 appropriate to the standard same block-size case.
- the portions 243A and 244A of nearest neighbor blocks 243 and 244, respectively, are utilized as effective nearest neighbor blocks to the self block 241.
- the portion 243A of the neighbor block 243 is the only portion of the neighbor block 243 whose weighting window impacts a predicting of pixel values in the self block 241 during the performing of OBMC on the self block 241.
- the portion 244A of the neighbor block 244 is the only portion of the neighbor block 244 whose weighting window impacts a predicting of pixel values in the self block 241 during the performing of OBMC on the self block 241.
- the present invention "shrinks" the blocks 243 and 244 to the respective portions 243A and 244A.
- the self block 241 and the neighbor block 243 (or 244) may each be a motion block, wherein the generated weighting window of the portion 243A (or 244A) of the neighbor block 243 (or 244) may consist of bilinearly interpolated weights.
- the weighting windows for this embodiment are generated as described supra in conjunction with FIGS. 17-18 for the standard same block- size case.
- the motion vector associated with the larger nearest neighbor block is used to provide the weighted neighbor estimate.
- the motion vector 253 would be used to locate the block 263 in the reference frame 260 in conjunction with utilizing the portion 243A (see FIG. 19) of the larger neighbor block 243 for processing the self block 241.
- the spatial nearest neighbor block size is smaller than that of the self block (e.g., self block 241 and its neighbor blocks 245 and 246 in FIG. 11), which necessitates choosing a portion of the self block to be of the same size as the smaller nearest neighbor block and adjacently located with respect to the smaller nearest neighbor block.
- FIG. 20 depicts frame 240 of FIG. 11, wherein portions 1A and 1B of self block 241 are shown, in accordance with embodiments of the present invention. Portions 1A and 1B have the same size as (and are located adjacent to) the nearest neighbor blocks 245 and 246, respectively.
- the portions 1A and 1B of the self block 241 are utilized as effective self block portions with respect to the nearest neighbor blocks 245 and 246, respectively.
- the portion 1A of the self block 241 is the only portion of the self block 241 at which a predicting of pixel values is impacted by the weighting window of the neighbor block 245 during the performing of OBMC on the portion 1A of the self block 241.
- the portion 1B of the self block 241 is the only portion of the self block 241 at which a predicting of pixel values is impacted by the weighting window of the neighbor block 246 during the performing of OBMC on the portion 1B of the self block 241.
- the present invention "shrinks" the self block 241 so as to utilize only the portions 1A and 1B.
- the weighting windows are generated as described infra in conjunction with the shrinking scheme examples of FIGS. 21-22, wherein each utilized portion of the self block comprises an affected area and an unaffected area such that the affected area is affected by the smaller nearest neighbor block and the unaffected area is not affected by the smaller nearest neighbor block.
- the affected area may comprise half of the smaller nearest neighbor block size, both horizontally and vertically.
- the weights in the weighting window of the affected area of the utilized portion of the self block and the corresponding portion of the weighting window of the smaller nearest neighbor block are the same as is derived from the standard same block-size case.
- the weights in the portion of the weighting window of the smaller nearest neighbor block that corresponds to the unaffected area of the utilized portion of the self block are "removed" and then set equal to zero.
- the weights in the unaffected area of the utilized portion of the self block are incremented (relative to the standard same block-size case) by said removed weights from the corresponding portion of the weighting window of the smaller nearest neighbor block, as will be illustrated infra in conjunction with the example of FIG. 22.
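- The shrinking bookkeeping just described can be sketched in Python for the case of a smaller right-hand neighbor. The block sizes, the orientation, and the half/half split of the utilized portion are assumptions of this sketch, chosen to mirror the FIG. 21 example discussed infra.

```python
import numpy as np

def shrink_right_neighbor(self_w, nb_w):
    """Toy shrinking scheme for a smaller right-hand neighbor.
    self_w : (2m x 2m) self-block window; nb_w : (m x m) neighbor window.
    Only the co-located m x m portion of the self window is used; the neighbor
    weights over the 'unaffected' half (columns farther from the shared edge)
    are removed, set to zero, and added to the self portion."""
    m = nb_w.shape[0]
    self_w, nb_w = self_w.copy(), nb_w.copy()
    rows = slice(0, m)                              # assume the neighbor abuts the top-right
    cols = slice(self_w.shape[1] - m, self_w.shape[1])
    portion = self_w[rows, cols]                    # "shrunken" self-block portion (a view)

    unaffected_cols = slice(0, m // 2)              # half farther from the shared edge
    portion[:, unaffected_cols] += nb_w[:, unaffected_cols]   # transfer the weights
    nb_w[:, unaffected_cols] = 0                               # then zero them
    return self_w, nb_w

# toy usage: an 8x8 self window of 4s and a 4x4 neighbor window of 1s
new_self, new_nb = shrink_right_neighbor(np.full((8, 8), 4), np.full((4, 4), 1))
print(new_self[:4, 4:])
print(new_nb)
```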
- FIGS. 21A-21C depict weighting windows for a self block and an associated smaller nearest neighboring block used by OBMC in conjunction with a shrinking scheme, wherein the nearest neighboring block is a motion block, in accordance with embodiments of the present invention.
- the self block 280 is an 8x8 block, and its right neighbor block 281 is a 4x4 block.
- FIG. 21B provides a self block weighting window 280A and its right neighbor weighting window 281A, respectively associated with the self block 280 and the right neighbor block 281 of FIG. 21A.
- the self block weighting window 280A includes a utilized portion 282A that is utilized in the OBMC procedure in conjunction with the right neighbor weighting window 281A.
- the utilized portion 282A is a "shrinked" form of the self block weighting window 280A and has the same size as the right neighbor weighting window 281A.
- the numerical weights shown are not the final weights but rather are the standard initial weights used for computing the final weights.
- the final weights are shown in FIG. 21C.
- the initial weights in FIG. 21B are the weights pertinent to the standard same block-size case.
- the weights in the self block weighting window 280A of FIG. 21B are the same bilinear weights that appear in the weighting window 270C of FIG. 18A.
- the weights in the right neighbor weighting window 281A of FIG. 21B are the same bilinear weights that appear in the upper-right quadrant of the weighting window 271C of FIG. 18A.
- the utilized portion 282A consists of an affected area 283A and an unaffected area 284A.
- the pixels of the self block that relate to the affected area 283A are affected in the OBMC procedure by an affecting area 285A of the right neighbor weighting window 281A.
- the pixels of the self block that relate to the unaffected area 284A are unaffected in the OBMC procedure by an unaffecting area 286A of the right neighbor weighting window 281A.
- the weights in FIG. 21C are derived from the weights in FIG. 21B as follows.
- the weights in the affecting area 285A and the affected area 283A in FIG. 21C are the same as in FIG. 21B. The weights in the unaffecting area 286A are removed (i.e., set to zero in FIG. 21C) and added to the corresponding positions of the unaffected area 284A, in accordance with the shrinking scheme described supra.
- the preceding shrinking scheme illustrated in FIG. 21 avoids over-smoothing at motion discontinuities. Since a large block may have a different motion vector from its small nearest neighbors, the shrinking scheme can reduce over-smoothing at a motion discontinuity.
- the shrinking scheme can be applied to rectangular as well as the square block sizes discussed supra, and rectangular block sizes are thus within the scope of the present invention.
- a simple quadtree decomposition may be used to generate an array of square blocks only.
- An array of rectangular blocks may be effectuated by a horizontal and/or vertical splitting algorithm (e.g., splitting an 8x8 block into two 8x4 blocks or two 4x8 blocks).
- the nearest neighbor block is a motion block characterized by a motion vector.
- a nearest neighbor block that is an I-BLOCK has no associated motion vector.
- the reflecting scheme of the present invention incorporates a nearest neighbor I-BLOCK into the framework of OBMC as discussed infra. The reflecting scheme is used if a nearest neighbor is an I-BLOCK. The reflecting scheme reflects the nearest neighbor I-BLOCK weighting back onto the self block. This effectively means that the self block's motion vector is used in place of the missing motion vector of the I-BLOCK.
- FIGS. 22A-22C depict weighting windows for a self block and an associated smaller nearest neighboring block used by OBMC in conjunction with a shrinking scheme, wherein the nearest neighboring block is an I-BLOCK, in accordance with embodiments of the present invention.
- the self block 290 is an 8x8 block, and its right neighbor block 291 is a 4x4 block.
- FIG. 22B provides a self block weighting window 290A and its right neighbor weighting window 291A, respectively associated with the self block 290 and the right neighbor block 291 of FIG. 22A.
- the self block weighting window 290A includes a utilized portion 292A that is utilized in the OBMC procedure in conjunction with the right neighbor weighting window 291A.
- the utilized portion 292A has the same size as the right neighbor weighting window 291A.
- the numerical weights shown are not the final weights but rather are the standard initial weights used for computing the final weights.
- the final weights are shown in FIG. 22C.
- the initial weights in FIG. 22B are the weights pertinent to the standard same block-size case.
- the weights in the self block weighting window 290A of FIG. 22B are the same bilinear weights that appear in the weighting window 270C of FIG. 18A.
- the weights in the right neighbor weighting window 291A of FIG. 22B are the same bilinear weights that appear in the upper-right quadrant of the weighting window 271C of FIG. 18A.
- the weights in FIG. 22C are derived from the weights in FIG. 22B as follows.
- the weights in the right neighbor weighting window 291A in FIG. 22B are added to the utilized portion 292A in FIG. 22B to form the weights in the utilized portion 292A in FIG. 22C, and the weights in the right neighbor weighting window 291A in FIG. 22C are set to zero.
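- That transfer is simple enough to state in a few lines of Python; array shapes and the exact hand-off below are illustrative assumptions, but the operation matches the description above: add the neighbor I-BLOCK weights to the co-located self-block portion and zero the neighbor window, so the self block's motion vector stands in for the missing one.

```python
import numpy as np

def reflect_into_self(portion_w, nb_w):
    """Toy reflecting scheme: the neighbor I-BLOCK has no motion vector, so its
    window weights are reflected back onto the co-located portion of the self
    block and the neighbor window is zeroed."""
    portion_w = portion_w + nb_w        # add the neighbor weights to the self portion
    nb_w = np.zeros_like(nb_w)          # neighbor I-BLOCK contributes no prediction
    return portion_w, nb_w

# minimal usage with made-up 4x4 integer windows
portion = np.full((4, 4), 3)
neighbor = np.array([[1, 1, 1, 1]] * 4)
print(reflect_into_self(portion, neighbor)[0])
```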
- FIG. 23 is a flow chart depicting steps 311-319 of an algorithm for calculating weighting windows for variable block size OBMC, in accordance with embodiments of the present invention.
- the flow chart of FIG. 23 includes details of steps 221-222 in the flow chart of FIG. 13B discussed supra.
- the flow chart of FIG. 23 sequentially processes all self blocks in the current frame according to a predetermined scan order.
- Step 311 steps to the next self block to process which initially is the first block to be scanned according to the predetermined scan order.
- the next self block is processed with respect to its neighbor blocks consisting of the nearest neighbor blocks of the next self block.
- the neighbor blocks of the self block comprise a first neighbor block.
- the self block and the first neighbor block may each be a motion block.
- the self block and the first neighbor block may each be an I-BLOCK.
- the self block may be a motion block and the first neighbor block may be an I-BLOCK.
- the self block may be an I-BLOCK and the first neighbor block may be a motion block.
- step 312 steps to the next neighbor block which initially is a first neighbor block of a sequence of nearest neighbor blocks around the self block established in step 311.
- Step 313 determines whether the neighbor block is a motion block or an I-BLOCK.
- If step 313 determines that the neighbor block is a motion block then step 314 performs the shrinking scheme, followed by execution of step 316.
- the shrinking scheme generates the weighting window for the self block and the neighbor block, based on whether the size of the neighbor block is equal to, larger than, or smaller than the size of the self block as discussed supra. If step 313 determines that the neighbor block is an I-BLOCK then step 315 performs the reflecting scheme, followed by execution of step 316.
- the reflecting scheme generates the weighting window for the self block and the neighbor block in accordance with the procedure described supra in conjunction with FIG. 22. Step 316 determines whether there are more neighbor blocks of the self block to process.
- If step 316 determines that there are more neighbor blocks of the self block to process then the algorithm loops back to step 312 to step to the next neighbor block to process. If step 316 determines that there are no more neighbor blocks of the self block to process then step 317 is executed. Step 317 determines whether the self block is an I-BLOCK. If step 317 determines that the self block is not an I-BLOCK, then step 319 is executed. If step 317 determines that the self block is an I-BLOCK, then step 318 performs the radiation scheme described supra in conjunction with FIG. 17B to modify the weighting window of the self block and its neighbor blocks, followed by execution of step 319. Step 319 determines whether there are more self blocks in the frame to process. If step 319 determines that there are more self blocks in the frame to process, then the algorithm loops back to step 311 to step to the next self block to process. If step 319 determines that there are no more self blocks in the frame to process, then the algorithm ends.
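- The control flow of FIG. 23 can be summarized by a short driver loop. In the Python sketch below, the block and window representations are abstract, and the three scheme functions are supplied by the caller (e.g., sketches like those above), so this is an illustrative skeleton rather than the patent's implementation.

```python
def build_weighting_windows(blocks, neighbors_of, is_iblock,
                            shrinking_scheme, reflecting_scheme, radiation_scheme):
    """Driver loop mirroring steps 311-319 of FIG. 23."""
    windows = {}
    for self_blk in blocks:                                   # steps 311, 319
        for nb in neighbors_of(self_blk):                     # steps 312, 316
            if is_iblock(nb):                                 # step 313
                windows[(self_blk, nb)] = reflecting_scheme(self_blk, nb)   # step 315
            else:
                windows[(self_blk, nb)] = shrinking_scheme(self_blk, nb)    # step 314
        if is_iblock(self_blk):                               # step 317
            radiation_scheme(self_blk, windows)               # step 318
    return windows

# toy usage: two blocks, block "B" is an I-BLOCK, schemes are stand-ins
w = build_weighting_windows(
    ["A", "B"],
    neighbors_of=lambda b: [x for x in ["A", "B"] if x != b],
    is_iblock=lambda b: b == "B",
    shrinking_scheme=lambda s, n: "shrink",
    reflecting_scheme=lambda s, n: "reflect",
    radiation_scheme=lambda s, windows: None)
print(w)
```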
- OBMC allows the nearest neighboring motion vectors to affect the prediction error in the self block, and that makes such a decoupled estimation suboptimal.
- OBMC specifies a non-causal nearest neighborhood, so there is no block scanning order such that, for every block, all its nearest neighbor blocks are scanned before it.
- the present invention uses an iterative estimation or search procedure for optimized motion estimation and spatial interpolation/prediction mode selection, which ensures that the mean absolute distortion (MAD) converges to a local minimum.
- h1(i) or h2(i) are the weighting window coefficients (i.e., h1(i) is used as the weighting window when the self block b is a motion block, and h2(i) is used as the weighting window when the self block b is an I-BLOCK).
- In Equation (2), vs(i) is a motion vector for the neighbor block i at pixel location s, I(s) is the true pixel value at pixel location s, and Ĩ denotes an interpolated value (needed because of sub-pixel accuracy) in the reference frame for the neighbor block pixel.
- the residual error image r(·) is the motion compensation error that remains when the weighted contributions of the nearest neighbor blocks in Equation (2) are accounted for. Given the weighting windows, the present invention further optimizes vb for motion blocks, or further optimizes mb, the spatial interpolation/prediction mode, for I-BLOCKs:
- v̂b = argmin over vb of Σ_{j ∈ Wb} | r(s(j)) − h1(j)·Ĩ(s(j) − vb) |    (3)
- m̂b = argmin over mb of Σ_{j ∈ Wb} | r(s(j)) − h2(j)·Ik(s(j)) |    (4)
- v̂b and m̂b are the conditional best motion vector and the conditional best spatial interpolation/prediction mode, respectively, for the self block b; Wb denotes the set of pixel positions of the self block b; and Ik(s(j)) is the spatial interpolation/prediction value from self block b's nearest neighbors.
- the OBMC iterations are controlled by the design parameters a and δ: i.e., a is the maximum number of iterations, and δ determines the perturbations applied to each block's motion vector.
- Equations (3) and (4) determine which of the 25 motion vector perturbations is the best choice for the motion vector at each self block.
- said best motion vector is perturbed in accordance with δ in the next iteration to determine a further improved value of the motion vector at each self block.
- the convergence speed is very fast, but the iteration can be switched off to reduce computational complexity, resulting in a modest suboptimality, depending on the video clip. Since bi-directional color HVSBM runs on both luminance and chrominance data, it follows naturally that the OBMC iterations may be applied to YUV simultaneously.
- U and V are sub-sampled frame data after some transform from RGB data.
- the weighting windows used for U and V are also sub-sampled versions of those used for Y.
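- A minimal sketch of that sub-sampling, assuming 4:2:0 chroma (an assumption; the patent does not state the chroma format here), is:

```python
import numpy as np

def chroma_window(y_window):
    """Sub-sample a luma weighting window by 2 in each direction for use with
    the U and V components (assuming 4:2:0 chroma sub-sampling)."""
    return y_window[::2, ::2]

print(chroma_window(np.arange(16).reshape(4, 4)))   # 4x4 luma window -> 2x2 chroma window
```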
- the iterative estimation approach of the present invention for OBMC ("OBMC iterations") computes successively improved sets of motion vectors for each self block of the current frame for a fixed number (a) of iterations or until a convergence criterion is satisfied.
- FIG. 24 is a flow chart depicting steps 321-326 of an algorithm for calculating successively improved motion vectors for the self blocks of a current frame processed according to variable block size OBMC using the weighting windows calculated according to the methodology described by the flow charts of FIGS. 13A, 13B, and 23, in accordance with embodiments of the present invention.
- Step 321 provides input, namely a, δ, and the weighting windows.
- Step 322 steps to the next iteration to execute which initially is the first iteration.
- step 323 steps to the next self block to process which initially is the first block to be scanned according to a predetermined scan order.
- Step 324 determines the best motion vector for the self block selected from the perturbed δ-based motion vectors, using Equations (3) or (4) in conjunction with Equation (2).
- Step 325 determines whether there are more self blocks in the frame to process.
- If step 325 determines that there are more self blocks in the frame to process, then the algorithm loops back to step 323 to step to the next self block to process. If step 325 determines that there are no more self blocks in the frame to process, then step 326 is next executed. Step 326 determines whether there are more iterations to perform. If step 326 determines that there are more iterations to perform then the algorithm loops back to step 322 to step to the next iteration. If step 326 determines that there are no more iterations to perform then the algorithm ends. There may be no more iterations to perform because the number of iterations performed is equal to a. There may also be no more iterations to perform because a predetermined convergence criterion for the updated motion vectors has been satisfied.
- a convergence criterion may be, inter alia, that the mean square fractional change in the motion vectors (individually at each self block, or summed over all self blocks) from the immediately previous iteration to the present iteration is less than a predetermined tolerance.
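- One way to code such a stopping test is sketched below in Python; the tolerance value and the exact normalization are assumptions, and other formulations of "mean square fractional change" are equally possible.

```python
import numpy as np

def converged(prev_mvs, new_mvs, tol=1e-3):
    """Mean square fractional change of the motion vectors, pooled over all
    self blocks, compared against a tolerance (one possible criterion)."""
    prev = np.asarray(prev_mvs, dtype=float)
    new = np.asarray(new_mvs, dtype=float)
    denom = np.maximum(np.abs(prev), 1e-12)         # avoid division by zero
    frac_change = (new - prev) / denom
    return float(np.mean(frac_change ** 2)) < tol

print(converged([(2, 3), (0, 1)], [(2, 3), (0, 1)]))   # True: no change between iterations
```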
- In first embodiments, only a is used in step 326 to determine whether there are more iterations to perform.
- In second embodiments, only a convergence criterion is used in step 326 to determine whether there are more iterations to perform.
- In third embodiments, both a and a convergence criterion are used in step 326 to determine whether there are more iterations to perform.
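- A minimal per-block search loop in the spirit of Equation (3) and FIG. 24 is sketched below in Python. It works at integer-pel accuracy, takes the residual image r(·) and a single weighting window as inputs, and treats delta as the perturbation radius (delta = 2 giving the 25 candidates mentioned above); all of these are simplifying assumptions rather than the patent's exact procedure.

```python
import numpy as np

def refine_motion_vector(ref, residual, block_pos, block_size, v_init, window,
                         delta=2, iterations=2):
    """Try the (2*delta+1)^2 integer perturbations around the current vector
    and keep the one minimizing the weighted absolute error, repeating for a
    fixed number of passes (cf. parameter a) or until no change occurs."""
    r0, c0 = block_pos
    tgt = residual[r0:r0 + block_size, c0:c0 + block_size]
    h, w = ref.shape
    v = v_init
    for _ in range(iterations):
        best_cost, best_v = None, v
        for dy in range(-delta, delta + 1):
            for dx in range(-delta, delta + 1):
                cand = (v[0] + dy, v[1] + dx)
                rr, cc = r0 - cand[0], c0 - cand[1]
                if not (0 <= rr <= h - block_size and 0 <= cc <= w - block_size):
                    continue
                pred = window * ref[rr:rr + block_size, cc:cc + block_size]
                cost = np.abs(tgt - pred).sum()     # absolute-distortion criterion
                if best_cost is None or cost < best_cost:
                    best_cost, best_v = cost, cand
        if best_v == v:                             # simple convergence shortcut
            break
        v = best_v
    return v

# toy usage: 16x16 frames, a 4x4 block at (4, 4) whose content moved by (1, -1)
ref = np.arange(256, dtype=float).reshape(16, 16)
res = np.zeros((16, 16))
res[4:8, 4:8] = ref[3:7, 5:9]
print(refine_motion_vector(ref, res, (4, 4), 4, (0, 0), np.ones((4, 4))))   # -> (1, -1)
```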
- the present invention does OBMC in a lifting implementation for DEFAULT blocks, i.e., with the prediction and update steps as normal, in order to reduce the noise in the area of good motion.
- the specific equations for OBMC in the lifting implementation are as follows.
- OBMC regards the motion vector field (dm, dn) as a random process.
- In Equations (5)-(6), (d̄m, d̄n) is the nearest integer to (dm, dn).
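- Because Equations (5)-(6) themselves are not reproduced in this excerpt, the Python sketch below shows only the generic predict/update lifting structure referred to above, with the OBMC prediction and the update abstracted into callables; the scaling and the exact update term used by the patent are not asserted here.

```python
import numpy as np

def lifting_step(frame_a, frame_b, obmc_predict, obmc_update):
    """Generic predict/update lifting pair (a sketch, not Equations (5)-(6)).
    obmc_predict(frame_a) returns a motion-compensated prediction of frame_b,
    and obmc_update(high) returns the motion-aligned update term for frame_a."""
    high = frame_b - obmc_predict(frame_a)          # 'predict' step -> H frame
    low = frame_a + obmc_update(high)               # 'update' step  -> L frame
    return high, low

# toy usage with identity "motion" (no real OBMC)
a = np.ones((4, 4)) * 100.0
b = np.ones((4, 4)) * 104.0
h, l = lifting_step(a, b, lambda x: x, lambda x: 0.5 * x)
print(h.mean(), l.mean())
```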
- FIG. 25 illustrates a computer system 90 for processing I-blocks used with MCTF and/or for performing overlapped block motion compensation (OBMC) for variable size blocks in the context of motion MCTF scalable video coders, in accordance with embodiments of the present invention.
- the computer system 90 comprises a processor 91, an input device 92 coupled to the processor 91, an output device 93 coupled to the processor 91, and memory devices 94 and 95 each coupled to the processor 91.
- the input device 92 may be, inter alia, a keyboard, a mouse, etc.
- the output device 93 may be, inter alia, a printer, a plotter, a computer screen, a magnetic tape, an internal hard disk or disk array, a removable hard disk, a floppy disk, an information network, etc.
- the memory devices 94 and 95 may be, inter alia, a hard disk, a floppy disk, a magnetic tape, an optical storage such as a compact disc (CD) or a digital video disc (DVD), a dynamic random access memory (DRAM), a read-only memory (ROM), etc.
- the memory device 95 includes a computer code 97.
- the computer code 97 includes an algorithm or algorithms for processing I-blocks used with MCTF and/or for performing overlapped block motion compensation (OBMC) for variable size blocks in the context of motion MCTF scalable video coders.
- the processor 91 executes the computer code 97.
- the memory device 94 includes input data 96.
- the input data 96 includes input required by the computer code 97.
- the output device 93 displays output from the computer code 97. Either or both memory devices 94 and 95 (or one or more additional memory devices not shown in FIG. 25) may be used as a computer usable medium (or a computer readable medium or a program storage device) having a computer readable program code embodied therein and/or having other data stored therein, wherein the computer readable program code comprises the computer code 97.
- a computer program product (or, alternatively, an article of manufacture) of the computer system 90 may comprise said computer usable medium (or said program storage device). While FIG. 25 shows the computer system 90 as a particular configuration of hardware and software, any configuration of hardware and software, as would be known to a person of ordinary skill in the art, may be utilized for the purposes stated supra in conjunction with the particular computer system 90 of FIG. 25.
- the memory devices 94 and 95 may be portions of a single memory device rather than separate memory devices. While embodiments of the present invention have been described herein for purposes of illustration, many modifications and changes will become apparent to those skilled in the art. Accordingly, the appended claims are intended to encompass all such modifications and changes as fall within the true spirit and scope of this invention.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP04795085A EP1685716A2 (en) | 2003-10-17 | 2004-10-15 | Overlapped block motion compensation for variable size blocks in the context of mctf scalable video coders |
JP2006535646A JP5014793B2 (en) | 2003-10-17 | 2004-10-15 | Overlapping block motion compensation of variable size blocks in MCTF scalable video coder |
CN2004800367918A CN1926868B (en) | 2003-10-17 | 2004-10-15 | Overlapped block motion compensation for variable size blocks in the context of mctf scalable video coders |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US51212003P | 2003-10-17 | 2003-10-17 | |
US60/512,120 | 2003-10-17 | ||
US10/864,833 | 2004-06-09 | ||
US10/864,833 US7627040B2 (en) | 2003-06-10 | 2004-06-09 | Method for processing I-blocks used with motion compensated temporal filtering |
US10/965,237 | 2004-10-14 | ||
US10/965,237 US7653133B2 (en) | 2003-06-10 | 2004-10-14 | Overlapped block motion compression for variable size blocks in the context of MCTF scalable video coders |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2005038603A2 true WO2005038603A2 (en) | 2005-04-28 |
WO2005038603A3 WO2005038603A3 (en) | 2005-07-21 |
Family
ID=34468379
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2004/033876 WO2005038603A2 (en) | 2003-10-17 | 2004-10-15 | Overlapped block motion compensation for variable size blocks in the context of mctf scalable video coders |
Country Status (6)
Country | Link |
---|---|
US (1) | US7653133B2 (en) |
EP (1) | EP1685716A2 (en) |
JP (1) | JP5014793B2 (en) |
KR (1) | KR100788707B1 (en) |
CN (1) | CN1926868B (en) |
WO (1) | WO2005038603A2 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007000657A1 (en) * | 2005-06-29 | 2007-01-04 | Nokia Corporation | Method and apparatus for update step in video coding using motion compensated temporal filtering |
WO2007020516A1 (en) * | 2005-08-15 | 2007-02-22 | Nokia Corporation | Method and apparatus for sub-pixel interpolation for updating operation in video coding |
WO2010017166A3 (en) * | 2008-08-04 | 2010-04-15 | Dolby Laboratories Licensing Corporation | Overlapped block disparity estimation and compensation architecture |
JP2012235520A (en) * | 2006-09-22 | 2012-11-29 | Thomson Licensing | Method and apparatus for multiple pass video coding and decoding |
EP2347591B1 (en) | 2008-10-03 | 2020-04-08 | Velos Media International Limited | Video coding with large macroblocks |
Families Citing this family (65)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1515561B1 (en) * | 2003-09-09 | 2007-11-21 | Mitsubishi Electric Information Technology Centre Europe B.V. | Method and apparatus for 3-D sub-band video coding |
US8340177B2 (en) * | 2004-07-12 | 2012-12-25 | Microsoft Corporation | Embedded base layer codec for 3D sub-band coding |
US8442108B2 (en) * | 2004-07-12 | 2013-05-14 | Microsoft Corporation | Adaptive updates in motion-compensated temporal filtering |
US8374238B2 (en) | 2004-07-13 | 2013-02-12 | Microsoft Corporation | Spatial scalability in 3D sub-band decoding of SDMCTF-encoded video |
KR100668345B1 (en) * | 2004-10-05 | 2007-01-12 | 삼성전자주식회사 | Apparatus and method for motion compensated temporal |
WO2006055882A2 (en) * | 2004-11-17 | 2006-05-26 | Howard Robert S | System and method for mapping mathematical finite floating-point numbers |
CN100366045C (en) * | 2005-01-11 | 2008-01-30 | 北京中星微电子有限公司 | Image conversion method capable of realizing zooming |
KR100732961B1 (en) * | 2005-04-01 | 2007-06-27 | 경희대학교 산학협력단 | Multiview scalable image encoding, decoding method and its apparatus |
US20060285590A1 (en) * | 2005-06-21 | 2006-12-21 | Docomo Communications Laboratories Usa, Inc. | Nonlinear, prediction filter for hybrid video compression |
KR100703200B1 (en) * | 2005-06-29 | 2007-04-06 | 한국산업기술대학교산학협력단 | Intra-coding apparatus and method |
US8005308B2 (en) * | 2005-09-16 | 2011-08-23 | Sony Corporation | Adaptive motion estimation for temporal prediction filter over irregular motion vector samples |
US7894522B2 (en) * | 2005-09-16 | 2011-02-22 | Sony Corporation | Classified filtering for temporal prediction |
US8233535B2 (en) * | 2005-11-18 | 2012-07-31 | Apple Inc. | Region-based processing of predicted pixels |
US7956930B2 (en) * | 2006-01-06 | 2011-06-07 | Microsoft Corporation | Resampling and picture resizing operations for multi-resolution video coding and decoding |
KR100772390B1 (en) * | 2006-01-23 | 2007-11-01 | 삼성전자주식회사 | Directional interpolation method and apparatus thereof and method for encoding and decoding based on the directional interpolation method |
JP5004150B2 (en) * | 2006-02-24 | 2012-08-22 | Kddi株式会社 | Image encoding device |
US8023732B2 (en) * | 2006-07-26 | 2011-09-20 | Siemens Aktiengesellschaft | Accelerated image registration by means of parallel processors |
KR100827093B1 (en) * | 2006-10-13 | 2008-05-02 | 삼성전자주식회사 | Method for video encoding and apparatus for the same |
US8144997B1 (en) * | 2006-12-21 | 2012-03-27 | Marvell International Ltd. | Method for enhanced image decoding |
US8041137B2 (en) * | 2007-03-06 | 2011-10-18 | Broadcom Corporation | Tiled output mode for image sensors |
JP4468404B2 (en) * | 2007-05-02 | 2010-05-26 | キヤノン株式会社 | Information processing apparatus control method, information processing apparatus, and program |
US9058668B2 (en) * | 2007-05-24 | 2015-06-16 | Broadcom Corporation | Method and system for inserting software processing in a hardware image sensor pipeline |
US20080292216A1 (en) * | 2007-05-24 | 2008-11-27 | Clive Walker | Method and system for processing images using variable size tiles |
US20080292219A1 (en) * | 2007-05-24 | 2008-11-27 | Gary Keall | Method And System For An Image Sensor Pipeline On A Mobile Imaging Device |
US8953673B2 (en) * | 2008-02-29 | 2015-02-10 | Microsoft Corporation | Scalable video coding and decoding with sample bit depth and chroma high-pass residual layers |
US8711948B2 (en) | 2008-03-21 | 2014-04-29 | Microsoft Corporation | Motion-compensated prediction of inter-layer residuals |
CN101552918B (en) * | 2008-03-31 | 2011-05-11 | 联咏科技股份有限公司 | Generation method of block type information with high-pass coefficient and generation circuit thereof |
US20100080286A1 (en) * | 2008-07-22 | 2010-04-01 | Sunghoon Hong | Compression-aware, video pre-processor working with standard video decompressors |
US9538176B2 (en) | 2008-08-08 | 2017-01-03 | Dolby Laboratories Licensing Corporation | Pre-processing for bitdepth and color format scalable video coding |
US9571856B2 (en) | 2008-08-25 | 2017-02-14 | Microsoft Technology Licensing, Llc | Conversion operations in scalable video encoding and decoding |
US8213503B2 (en) * | 2008-09-05 | 2012-07-03 | Microsoft Corporation | Skip modes for inter-layer residual video coding and decoding |
KR101553850B1 (en) * | 2008-10-21 | 2015-09-17 | 에스케이 텔레콤주식회사 | / Video encoding/decoding apparatus and method and apparatus of adaptive overlapped block motion compensation using adaptive weights |
RU2520425C2 (en) * | 2009-07-03 | 2014-06-27 | Франс Телеком | Predicting motion vector of current image section indicating reference zone overlapping multiple sections of reference image, encoding and decoding using said prediction |
AU2015202119B2 (en) * | 2009-08-17 | 2015-06-04 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding video, and method and apparatus for decoding video |
KR101510108B1 (en) * | 2009-08-17 | 2015-04-10 | 삼성전자주식회사 | Method and apparatus for encoding video, and method and apparatus for decoding video |
CA2714932A1 (en) * | 2009-09-23 | 2011-03-23 | Her Majesty The Queen In The Right Of Canada As Represented By The Minister Of Industry | Image interpolation for motion/disparity compensation |
CN102939751B (en) * | 2010-03-31 | 2016-03-16 | 法国电信 | The method and apparatus for carrying out Code And Decode to image sequence of prediction is implemented by forward motion compensation, corresponding stream and computer program |
KR101529992B1 (en) * | 2010-04-05 | 2015-06-18 | 삼성전자주식회사 | Method and apparatus for video encoding for compensating pixel value of pixel group, method and apparatus for video decoding for the same |
KR101456499B1 (en) * | 2010-07-09 | 2014-11-03 | 삼성전자주식회사 | Method and apparatus for encoding and decoding motion vector |
EP2424243B1 (en) * | 2010-08-31 | 2017-04-05 | OCT Circuit Technologies International Limited | Motion estimation using integral projection |
US8755437B2 (en) | 2011-03-17 | 2014-06-17 | Mediatek Inc. | Method and apparatus for derivation of spatial motion vector candidate and motion vector prediction candidate |
CN104054338B (en) * | 2011-03-10 | 2019-04-05 | 杜比实验室特许公司 | Locating depth and color scalable video |
US9307239B2 (en) | 2011-03-14 | 2016-04-05 | Mediatek Inc. | Method and apparatus for derivation of motion vector candidate and motion vector prediction candidate |
WO2012134046A2 (en) | 2011-04-01 | 2012-10-04 | 주식회사 아이벡스피티홀딩스 | Method for encoding video |
TWI580264B (en) * | 2011-11-10 | 2017-04-21 | Sony Corp | Image processing apparatus and method |
US9883203B2 (en) | 2011-11-18 | 2018-01-30 | Qualcomm Incorporated | Adaptive overlapped block motion compensation |
CN104620583A (en) * | 2012-05-14 | 2015-05-13 | 卢卡·罗萨托 | Encoding and reconstruction of residual data based on support information |
US9948916B2 (en) | 2013-10-14 | 2018-04-17 | Qualcomm Incorporated | Three-dimensional lookup table based color gamut scalability in multi-layer video coding |
US9756337B2 (en) * | 2013-12-17 | 2017-09-05 | Qualcomm Incorporated | Signaling color values for 3D lookup table for color gamut scalability in multi-layer video coding |
US10531105B2 (en) | 2013-12-17 | 2020-01-07 | Qualcomm Incorporated | Signaling partition information for 3D lookup table for color gamut scalability in multi-layer video coding |
US10230980B2 (en) * | 2015-01-26 | 2019-03-12 | Qualcomm Incorporated | Overlapped motion compensation for video coding |
KR101754527B1 (en) * | 2015-03-09 | 2017-07-06 | Korea Aerospace Research Institute | Apparatus and method for coding packet |
CN106612440B (en) * | 2015-10-26 | 2019-04-30 | Spreadtrum Communications (Shanghai) Co., Ltd. | Image generation method and device |
EP3220642B1 (en) * | 2016-03-15 | 2018-03-07 | Axis AB | Method, apparatus and system for encoding a video stream by defining areas within a second image frame with image data common to a first image frame |
US10390033B2 (en) * | 2016-06-06 | 2019-08-20 | Google Llc | Adaptive overlapped block prediction in variable block size video coding |
US10567793B2 (en) * | 2016-06-06 | 2020-02-18 | Google Llc | Adaptive overlapped block prediction in variable block size video coding |
CN106454378B (en) * | 2016-09-07 | 2019-01-29 | Sun Yat-sen University | Frame rate up-conversion video coding method and system based on an amoeboid movement model |
GB2558868A (en) * | 2016-09-29 | 2018-07-25 | British Broadcasting Corp | Video search system & method |
US10419777B2 (en) * | 2016-12-22 | 2019-09-17 | Google Llc | Non-causal overlapped block prediction in variable block size video coding |
CN116437104A (en) * | 2017-05-19 | 2023-07-14 | Panasonic Intellectual Property Corporation of America | Decoding method and encoding method |
WO2019004283A1 (en) * | 2017-06-28 | 2019-01-03 | Sharp Kabushiki Kaisha | Video encoding device and video decoding device |
CN117221526A (en) * | 2017-08-29 | 2023-12-12 | KT Corporation | Video decoding method, video encoding method and device |
US20220201282A1 (en) * | 2020-12-22 | 2022-06-23 | Qualcomm Incorporated | Overlapped block motion compensation |
CN113259662B (en) * | 2021-04-16 | 2022-07-05 | Xi'an University of Posts and Telecommunications | Rate control method based on three-dimensional wavelet video coding |
CN113596474A (en) * | 2021-06-23 | 2021-11-02 | Zhejiang Dahua Technology Co., Ltd. | Image/video encoding method, apparatus, system, and computer-readable storage medium |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2549479B2 (en) * | 1991-12-06 | 1996-10-30 | Nippon Telegraph And Telephone Corporation | Motion-compensated inter-frame band-division coding method |
JP2000506686A (en) * | 1995-10-25 | 2000-05-30 | Sarnoff Corporation | Low bit rate video encoder using overlapping block motion compensation and zero-tree wavelet coding |
JP2000299864A (en) * | 1999-04-12 | 2000-10-24 | Canon Inc | Method for processing dynamic image |
KR20010105361A (en) | 1999-12-28 | 2001-11-28 | J.G.A. Rolfes | SNR scalable video encoding method and corresponding decoding method |
EP1159830A1 (en) | 1999-12-28 | 2001-12-05 | Koninklijke Philips Electronics N.V. | Video encoding method based on the matching pursuit algorithm |
WO2001063839A2 (en) * | 2000-02-23 | 2001-08-30 | Tantivy Communications, Inc. | Access probe acknowledgment with collision detection |
US20020037046A1 (en) | 2000-09-22 | 2002-03-28 | Philips Electronics North America Corporation | Totally embedded FGS video coding with motion compensation |
KR100783396B1 (en) | 2001-04-19 | 2007-12-10 | LG Electronics Inc. | Spatio-temporal hybrid scalable video coding using subband decomposition |
EP2458865A3 (en) * | 2001-06-29 | 2014-10-01 | NTT DoCoMo, Inc. | Apparatuses for image coding and decoding |
US20050084010A1 (en) * | 2001-12-28 | 2005-04-21 | Koninklijke Philips Electronics N.V. | Video encoding method |
US7042946B2 (en) * | 2002-04-29 | 2006-05-09 | Koninklijke Philips Electronics N.V. | Wavelet based coding using motion compensated filtering based on both single and multiple reference frames |
US20030202599A1 (en) * | 2002-04-29 | 2003-10-30 | Koninklijke Philips Electronics N.V. | Scalable wavelet based coding using motion compensated temporal filtering based on multiple reference frames |
US7023923B2 (en) * | 2002-04-29 | 2006-04-04 | Koninklijke Philips Electronics N.V. | Motion compensated temporal filtering based on multiple reference frames for wavelet based coding |
MXPA06002210A (en) * | 2003-08-26 | 2006-05-19 | Thomson Licensing | Method and apparatus for decoding hybrid intra-inter coded blocks. |
- 2004
- 2004-10-14 US US10/965,237 patent/US7653133B2/en not_active Expired - Fee Related
- 2004-10-15 WO PCT/US2004/033876 patent/WO2005038603A2/en active Application Filing
- 2004-10-15 EP EP04795085A patent/EP1685716A2/en not_active Withdrawn
- 2004-10-15 JP JP2006535646A patent/JP5014793B2/en not_active Expired - Fee Related
- 2004-10-15 KR KR1020067007040A patent/KR100788707B1/en not_active IP Right Cessation
- 2004-10-15 CN CN2004800367918A patent/CN1926868B/en not_active Expired - Fee Related
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4849810A (en) * | 1987-06-02 | 1989-07-18 | Picturetel Corporation | Hierarchial encoding method and apparatus for efficiently communicating image sequences |
US5408274A (en) * | 1993-03-11 | 1995-04-18 | The Regents Of The University Of California | Method and apparatus for compositing compressed video data |
US5757969A (en) * | 1995-02-28 | 1998-05-26 | Daewoo Electronics, Co., Ltd. | Method for removing a blocking effect for use in a video signal decoding apparatus |
US6108448A (en) * | 1997-06-12 | 2000-08-22 | International Business Machines Corporation | System and method for extracting spatially reduced image sequences in a motion compensated compressed format |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007000657A1 (en) * | 2005-06-29 | 2007-01-04 | Nokia Corporation | Method and apparatus for update step in video coding using motion compensated temporal filtering |
WO2007020516A1 (en) * | 2005-08-15 | 2007-02-22 | Nokia Corporation | Method and apparatus for sub-pixel interpolation for updating operation in video coding |
JP2012235520A (en) * | 2006-09-22 | 2012-11-29 | Thomson Licensing | Method and apparatus for multiple pass video coding and decoding |
US10321134B2 (en) | 2008-08-04 | 2019-06-11 | Dolby Laboratories Licensing Corporation | Predictive motion vector coding |
US9667993B2 (en) | 2008-08-04 | 2017-05-30 | Dolby Laboratories Licensing Corporation | Predictive motion vector coding |
US9843807B2 (en) | 2008-08-04 | 2017-12-12 | Dolby Laboratories Licensing Corporation | Predictive motion vector coding |
WO2010017166A3 (en) * | 2008-08-04 | 2010-04-15 | Dolby Laboratories Licensing Corporation | Overlapped block disparity estimation and compensation architecture |
US10574994B2 (en) | 2008-08-04 | 2020-02-25 | Dolby Laboratories Licensing Corporation | Predictive motion vector coding |
US10645392B2 (en) | 2008-08-04 | 2020-05-05 | Dolby Laboratories Licensing Corporation | Predictive motion vector coding |
US11025912B2 (en) | 2008-08-04 | 2021-06-01 | Dolby Laboratories Licensing Corporation | Predictive motion vector coding |
US11539959B2 (en) | 2008-08-04 | 2022-12-27 | Dolby Laboratories Licensing Corporation | Predictive motion vector coding |
US11843783B2 (en) | 2008-08-04 | 2023-12-12 | Dolby Laboratories Licensing Corporation | Predictive motion vector coding |
EP2347591B1 (en) | 2008-10-03 | 2020-04-08 | Velos Media International Limited | Video coding with large macroblocks |
EP2347591B2 (en) † | 2008-10-03 | 2023-04-05 | Qualcomm Incorporated | Video coding with large macroblocks |
Also Published As
Publication number | Publication date |
---|---|
CN1926868B (en) | 2011-04-13 |
KR20060096016A (en) | 2006-09-05 |
US7653133B2 (en) | 2010-01-26 |
EP1685716A2 (en) | 2006-08-02 |
JP2007509542A (en) | 2007-04-12 |
KR100788707B1 (en) | 2007-12-26 |
WO2005038603A3 (en) | 2005-07-21 |
CN1926868A (en) | 2007-03-07 |
JP5014793B2 (en) | 2012-08-29 |
US20050078755A1 (en) | 2005-04-14 |
Similar Documents
Publication | Title |
---|---|
WO2005038603A2 (en) | Overlapped block motion compensation for variable size blocks in the context of mctf scalable video coders |
US8107535B2 (en) | Method and apparatus for scalable motion vector coding |
KR100782829B1 (en) | A method for processing i-blocks used with motion compensated temporal filtering | |
Martucci et al. | A zerotree wavelet video coder | |
US20110038421A1 (en) | Apparatus and Method for Generating a Coded Video Sequence by Using an Intermediate Layer Motion Data Prediction |
CA2703775A1 (en) | Method and apparatus for selecting a coding mode |
Luo et al. | Advanced motion threading for 3D wavelet video coding | |
JP2005086834A (en) | Method for encoding frame sequence, method for decoding frame sequence, apparatus for implementing the method, computer program for implementing the method and recording medium for storing the computer program | |
Xiong et al. | Barbell lifting wavelet transform for highly scalable video coding | |
EP1461955A2 (en) | Video encoding method |
Rusert et al. | Transition filtering and optimized quantization in interframe wavelet video coding | |
EP1817911A1 (en) | Method and apparatus for multi-layered video encoding and decoding |
Bhojani et al. | Hybrid video compression standard | |
Rusert et al. | Enhanced interframe wavelet video coding considering the interrelation of spatio-temporal transform and motion compensation | |
Bhojani et al. | Introduction to video compression | |
Yin et al. | Directional lifting-based wavelet transform for multiple description image coding | |
CUI et al. | Research on the temporal model in video compression | |
Pau et al. | Optimized prediction of uncovered areas in subband video coding | |
Rüfenacht et al. | Scalable Image and Video Compression | |
Seran et al. | Quality variation control for three-dimensional wavelet-based video coders | |
Rusert | Interframe wavelet video coding with operating point adaptation | |
Nakachi et al. | A study on multiresolution lossless video coding using inter/intra frame adaptive prediction | |
KR20070028720A (en) | Motion image encoding system based on wavelet packet transform and the method thereof | |
Chen et al. | Implementation of Multiple Macroblock Mode Overlapped Block Motion Compensation for Wavelet Video Coding | |
Lee et al. | An integrated application of multiple description transform coding and error concealment for error-resilient video streaming |
Legal Events
Code | Title | Description |
---|---|---|
WWE | Wipo information: entry into national phase | Ref document number: 200480036791.8; Country of ref document: CN |
AK | Designated states | Kind code of ref document: A2; Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
AL | Designated countries for regional patents | Kind code of ref document: A2; Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
DPEN | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed from 20040101) | |
REEP | Request for entry into the european phase | Ref document number: 2004795085; Country of ref document: EP |
WWE | Wipo information: entry into national phase | Ref document number: 2004795085; Country of ref document: EP; Ref document number: 1020067007040; Country of ref document: KR |
WWE | Wipo information: entry into national phase | Ref document number: 2006535646; Country of ref document: JP |
WWP | Wipo information: published in national office | Ref document number: 2004795085; Country of ref document: EP |
WWP | Wipo information: published in national office | Ref document number: 1020067007040; Country of ref document: KR |