WO2002051158A2 - Method of filling exposed areas in digital images - Google Patents

Method of filling exposed areas in digital images

Info

Publication number
WO2002051158A2
WO2002051158A2 PCT/US2001/050282
Authority
WO
WIPO (PCT)
Prior art keywords
values
fill
segments
segment
Prior art date
Application number
PCT/US2001/050282
Other languages
French (fr)
Other versions
WO2002051158A3 (en)
Inventor
Adityo Prakash
David Kita
Edward Ratner
Oliver W. Shih
Hitoshi Watanabe
Original Assignee
Pulsent Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pulsent Corporation filed Critical Pulsent Corporation
Priority to AU2002239703A priority Critical patent/AU2002239703A1/en
Publication of WO2002051158A2 publication Critical patent/WO2002051158A2/en
Publication of WO2002051158A3 publication Critical patent/WO2002051158A3/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction

Definitions

  • Such block-based motion compensation suffers from the limitation that the objects within most images are not built up of blocks. Such an attempt leads to poor matching and motion compensation. In particular, blocks that traverse the boundary of two objects tend to have even poorer matches than others. This situation is illustrated in Fig. 1B, where block 102 is matched with block 102' of a previous frame, but block 104 at the boundary of two objects cannot be matched with block 104'.
  • it becomes desirable to be able to directly manipulate the inherent constituent components in any given video frame, which are the objects or parts of objects: segments of arbitrary shapes (as allowed, for instance, in MPEG4 or as disclosed in "Method and Apparatus for Digital Image Segmentation," International Publication Number WO 00/77735, applicant Pulsent Corporation, inventors A.
  • a method of filling exposed areas of a video image involves approximating the color values of the pixels in the exposed area from the color values of a subset of the neighboring segments.
  • the multipart method is focused on identifying the neighboring segments that most closely resemble the local color values of the exposed area. Selection and transmission of the identities of such segments to an exemplary decoder allows the decoder to efficiently reconstruct the exposed area.
  • the segments that are adjacent to the said exposed area are identified and designated as boundary segments.
  • a reference filling routine is performed next, which involves comparing the estimated value of each pixel with the actual value of the pixel and discarding the estimated value if it is not a close approximation.
  • the next step involves determining which boundary segments are to be used for filling the exposed area and designating them as fill segments.
  • a predictive filling routine is then performed utilizing only the segments designated as fill segments. The results are compared with those of the reference filling routine to ensure that the predictive filling generates a close approximation of the reference filling, and hence of the actual color values of the exposed area pixels. If the difference between the reference fill and the predictive fill results is greater than a certain threshold value, the filling routine is recalculated using a smaller set of boundary segments. A final set of fill segments is then determined. In one embodiment the fill segment information is transmitted to an exemplary decoder.
  • FIG. 1A-B illustrates an MPEG compression scheme
  • FIG. 2A illustrates four segments of a digital image frame
  • FIG. 2B illustrates a second image frame where the segments have moved relative to one another.
  • FIG. 2C illustrates a second image frame where the segments have moved relative to one another and an exposed area is visible.
  • FIG. 2D illustrates the results of filling which generates a poor reconstruction of the exposed area
  • FIG. 2E illustrates the results of filling which generates a good reconstruction of the exposed area
  • FIG. 3 illustrates the steps of a reference filling routine
  • FIG. 4A-L illustrate reference filling
  • FIG. 5 illustrates the method of changing the threshold value for the difference between the reference fill value and the actual value of an exposed area pixel
  • FIG. 6 illustrates the steps of determining a tentative set of fill segments
  • FIG. 7A illustrates the steps of determining a set of fill segments from a tentative set of fill segments.
  • FIG. 7B illustrates the pixels of a fill segment adjacent to the boundary of an exposed area.
  • FIG. 7C illustrates the final results of a reference fill routine.
  • FIG. 8 illustrates the steps of a predictive filling routine
  • FIG. 9A-I illustrate the steps of predictive filling using a set of fill segments that does not produce an accurate reconstruction of the exposed area
  • FIG. 10A-I illustrate the steps of predictive filling using a set of fill segments that produces a better reconstruction of the exposed area.
  • FIG. 11 illustrates the steps of recalculation of fill segments
  • FIG. 12 illustrates the steps of determining the final set of fill segments
  • FIG. 13 A-B illustrate exposed areas with multiple subregions.
  • the invention herein pertains to a highly effective method for approximating the information in exposed areas.
  • the application of the invention results in significantly lowering the cost of transmission and storage of multidimensional signals such as digital video and images compared to that incurred by existing technologies.
  • the invention also improves the quality and cost of digital video and image restoration.
  • Introduction: When an object within a sequence of digital image frames changes position, a newly uncovered area is exposed. In order to encode this newly uncovered area efficiently, it would be desirable to approximate the color of the newly uncovered region by extending the characteristics of its neighbors. This method of approximation would be particularly valuable in situations where large amounts of video data need to be transmitted through narrow transmission channels such as telephone lines. This method can also be useful for storage of digital video data when storage disk space is limited.
  • the preferred embodiment is to use the current invention as a front end to a residue encoder because it forms a better approximation of the image.
  • the invention disclosed herein utilizes an image segmentation scheme disclosed in the Segmentation patent application.
  • This image segmentation method subdivides a digital image into segments based upon the natural boundaries of the objects within the image frame.
  • FIG 2A illustrates such an image frame where the image is subdivided into 3 segments 202, 204 and 206.
  • FIG. 2B when the objects move relative to one another, a previously occluded area is exposed.
  • FIG. 2C this area is designated 208. Since the image frame is segmented along the natural boundaries of the objects therein the color of the exposed area 208 is likely to be similar to the color of one or more of the surrounding segments.
  • an 'intelligent' encoding method can determine the likely color or colors of region 208 by receiving only a small fraction of the information required by existing video compression schemes.
  • FIG 2D illustrates a poor approximation of the exposed area that can be produced if it is filled using information from surrounding segments without determining which of these segments most closely resemble the color values of the exposed area.
  • FIG 2E illustrates a better approximation of the exposed area when it is filled by using information from a subset of the boundary segments, in this case segment 202.
  • the present invention will be used for exposed areas that are relatively small compared to the segments surrounding them, either in terms of area or in terms of aspect ratio, or when there is low contrast between the actual color values of the exposed area and at least one of the boundary segments.
  • This invention uses a multi-step process to determine the color information of the newly exposed image regions.
  • the steps are as follows:
  • Determining newly uncovered image regions in the second frame.
  • Determining which segments are adjacent to the newly uncovered image region and designating them as boundary segments.
  • Determining the final set of fill segments.
  • Efficient transmission of the fill segment information to an exemplary decoder.
  • a segment is a portion of the image where the color is uniform or nearly uniform. It is not germane to this disclosure how the segments were obtained.
  • the motion of the segments from Frame 1 to Frame 2 is encoded as motion vectors for use in reconstructing the second frame. For purposes of this disclosure, it does not matter how the motion vectors were determined.
  • surrounding segments are determined.
  • nearest neighbor points are examined and any segment containing such a point is considered a surrounding segment.
  • a segment is included as a boundary segment only if it has sufficient contact with the exposed area.
  • segments overlapping an annular region around the exposed area may be used to determine the boundary segments.
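The boundary-segment step above can be sketched in code. This is a minimal illustration, not the patent's implementation: the 4-neighbour adjacency test stands in for the "nearest neighbor points", the `min_contact` parameter stands in for "sufficient contact", and marking the exposed area with the label -1 is an assumption of this sketch.

```python
# Identify boundary segments around an exposed area in a segment-label map.
# Assumptions: labels is a 2-D list of segment ids, exposed pixels are -1.
from collections import Counter

EXPOSED = -1

def boundary_segments(labels, min_contact=1):
    """Return segment ids touching the exposed area at least min_contact times."""
    h, w = len(labels), len(labels[0])
    contact = Counter()
    for y in range(h):
        for x in range(w):
            if labels[y][x] != EXPOSED:
                continue
            # Examine the 4 nearest neighbours of each exposed pixel.
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] != EXPOSED:
                    contact[labels[ny][nx]] += 1
    return {s for s, n in contact.items() if n >= min_contact}

labels = [
    [1, 1, 2, 2],
    [1, -1, -1, 2],
    [3, -1, -1, 2],
    [3, 3, 3, 2],
]
print(sorted(boundary_segments(labels)))  # -> [1, 2, 3]
```

Raising `min_contact` mimics the embodiment in which a segment qualifies only if it has sufficient contact with the exposed area.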
  • Reference Filling Following the identification of the boundary segments, a routine is carried out that fills the exposed area using the color values of all of the boundary segments and subsequently compares the results of this filling routine with the actual color values of the pixels within the exposed area. The actual color values for comparison are obtained from the actual second frame and the filling is carried out on the second frame being reconstructed from the previous frame.
  • An image frame with an exposed area is an intermediate step in the process of reconstructing an image frame from the previous frame. This situation occurs when an image segment has moved but the information regarding the color of the exposed area is not available. It is important to note that such a situation occurs commonly in video compression where an exemplary encoder has transmitted the information regarding the motion of a segment to a decoder, but not the color values of the exposed area. Unlike a decoder, an exemplary encoder will have available the actual color values of the exposed area. Hence an encoder can fill in the exposed area by using the color information available from the boundary segments and compare the results with the actual value of the pixels within the exposed area. In contrast a decoder will be able to perform this filling based upon geometry. This will be elaborated in a later section.
  • FIG. 3 illustrates the method of reference filling.
  • the routine begins with setting a threshold value for color difference, also called color transition (302). The routine then takes each pixel that is most proximal to the edge of the exposed area and considers an adaptively sized kernel around it (304, 306). The size and shape of the kernel are determined adaptively and may be based upon the geometry of the exposed area, the color values of the boundary segment, or both.
  • FIG 4C illustrates such an exposed area where pixel 499 of the exposed area 410 is being filled and 404, 406 and 408 are the boundary segments surrounding exposed area 410.
  • FIG 4D illustrates a kernel 412 (area shaded in gray) around pixel 499.
  • the routine determines a histogram of colors of all pixels within the kernel for each boundary segment (308). Referring again to FIG 4D, the kernel overlaps with two boundary segments, 404 and 406; hence there will be two sets of histograms of color distributions for pixel 499. For each segment there will be a histogram of color distribution for each of the color components Y, U and V. In other embodiments, there can be a different set of histograms for other spectral components of the video image frame. The routine then determines which boundary segment is adjacent to the selected pixel and calculates a statistical parameter of the color values for each adjacent segment (310).
  • the routine calculates a statistical parameter for all of the pixels of segment 404 that fall within the kernel 412.
  • the statistical parameter calculated is the median, however in other embodiments, measures such as but not restricted to mean, mode, the weighted mean or any other statistical moment of the distribution may also be used.
  • the routine then calculates the difference between the actual color value of that pixel and the value of the calculated statistical parameter (312). If the selected pixel is adjacent to more than one boundary segment, the segment whose value is closest to the actual color value of the selected pixel will be chosen. If this difference is less than the threshold, the pixel is then assigned the color value of the calculated statistical parameter and also assigned the segment identifier of the adjacent segment (314, 316).
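One reference-filling step (FIG. 3, steps 304-316) can be sketched as follows. This is a simplified illustration under stated assumptions: a fixed 3x3 kernel in place of the patent's adaptively sized kernel, a single grayscale channel in place of Y/U/V histograms, and the median as the statistical parameter (one of the embodiments mentioned above).

```python
# One reference-filling step for a single exposed pixel.
# Assumptions: 3x3 kernel, one channel, median as statistical parameter,
# exposed pixels labeled -1 in the segment-label map.
from statistics import median

EXPOSED = -1

def reference_fill_pixel(labels, values, actual, y, x, threshold):
    """Try to fill exposed pixel (y, x); return (filled, estimate, segment id)."""
    h, w = len(labels), len(labels[0])
    per_segment = {}
    # Gather kernel pixels belonging to each adjacent boundary segment (308).
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] != EXPOSED:
                per_segment.setdefault(labels[ny][nx], []).append(values[ny][nx])
    best = None
    for seg, vals in per_segment.items():
        est = median(vals)                  # statistical parameter (310)
        diff = abs(actual[y][x] - est)      # compare with the actual value (312)
        if best is None or diff < best[0]:  # keep the closest adjacent segment
            best = (diff, est, seg)
    if best is not None and best[0] < threshold:
        return True, best[1], best[2]       # assign value and segment id (314, 316)
    return False, None, None                # leave unfilled for a later ring

lab = [[1, 1, 1], [1, EXPOSED, 1], [1, 1, 1]]
val = [[10, 10, 10], [10, 0, 10], [10, 10, 10]]
act = [[10, 10, 10], [10, 12, 10], [10, 10, 10]]
print(reference_fill_pixel(lab, val, act, 1, 1, threshold=5))  # (True, 10.0, 1)
```

With a tighter threshold the same pixel would be left unfilled, which is exactly the mechanism that pushes unfilled pixels into the next ring.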
  • Fig. 4 illustrates the process of reference filling.
  • Fig 4 A is the first frame of an image containing 3 segments 404, 406 and 408.
  • Fig 4B illustrates a second image frame where the segment 408 has moved and exposed a previously occluded region, where most pixels share the color values of segment 404, and one pixel shares the color value of segment 406.
  • FIG. 4C illustrates the image frame of Fig 4B, without any color values filling the exposed area 410.
  • Fig. 4D illustrates the domain of an exemplary kernel 412 centered around the pixel 499 at the edge of the exposed area.
  • Fig 4E illustrates the first ring of pixels at the edge of exposed area 410 labeled 1. These are the first ring of pixels to be filled using the method described in the previous paragraph and delineated in Fig. 3.
  • Fig 4F illustrates the results of filling the first ring.
  • the pixels that are colored gray or filled with a striped pattern and labeled 1 are filled.
  • the pixels that are labeled 1 and colored white are left unfilled because they were adjacent to a segment that was of significantly different color than their actual values.
  • FIG. 4G illustrates the second ring of pixels at the boundary of the exposed area. It is worth noting that the unfilled pixels from ring 1 illustrated in Fig. 4F, labeled 1 and colored white, are now part of the second ring. The pixels belonging to the second ring are labeled 2.
  • Fig. 4H shows the results of filling of the second ring. As before, some pixels belonging to ring two were left unfilled because the color transition was greater than the accepted threshold value.
  • Fig 4I shows the pixels of the third ring labeled 3.
  • Fig 4J shows the results of filling the third ring.
  • Fig 4K illustrates the fourth ring.
  • Fig. 4L illustrates the final result of reference filling. It is worth noting that reference filling was able to reconstruct the actual color values of the pixels in the exposed area 410 with a high degree of accuracy, since this method compared the calculated value with the actual value of each pixel and accepted only those within a threshold.
  • FIG 5 illustrates the method of changing the threshold value for the difference between the actual and the calculated value of the pixel.
  • an entire ring of pixel color values may be calculated and none filled because the difference between the actual value of each pixel and its calculated color value is greater than the preset threshold. In this event the threshold value is increased. In other embodiments this rule may be relaxed, and increasing the threshold can be allowed if a certain proportion of the boundary pixels remain unfilled.
  • FIG. 6A illustrates the method of selecting a tentative set of fill segments.
  • a parameter that represents the geometric shape of the regions filled by each of the boundary segments is calculated (606).
  • the geometric parameter may be the perimeter of the exposed area contributed by each of the boundary segments.
  • the parameter may be the percentage of the area contributed by each of the boundary segments.
  • the geometric parameter may be any function of both the area and the perimeter contributed by each of the boundary segments.
  • FIG. 7 delineates the steps of choosing a set of fill segments from within the set of tentative fill segments determined above.
  • the routine calculates the normalized ratio of the squares of the boundary lengths that each of the tentative fill segments shares with the exposed area.
  • the boundary lengths would be the total lengths of the pixel sides that border the exposed area. For example, for segment 704, the total length of the sides of the pixels 704a, 704b, 704c, 704d, 704e, 704f, 704g, 704h, 704i, 704j that are adjacent to the exposed area 710 is the boundary length that segment 704 shares with 710.
  • if the boundary lengths of each one of the n tentative fill segments are denoted by (L_1, L_2, L_3, ..., L_n), with their corresponding ratios denoted by lowercase 'l', the normalized ratio of squares would be (l_1^2 : l_2^2 : l_3^2 : ... : l_n^2), where l_i^2 = L_i^2 / (L_1^2 + L_2^2 + ... + L_n^2).
  • the routine then calculates the normalized ratio of the areas of the filled regions contributed by each of the boundary segments.
  • the region of exposed area 710 filled by segment 704 is shaded in gray and marked with the numbers 1, 2, 3 or 4.
  • the regions filled by segment 706 are shaded in stripes and marked with the number 1.
  • if the areas contributed by each one of the n tentative fill segments are denoted by (S_1, S_2, S_3, ..., S_n), with their corresponding ratios denoted by lowercase 's', the normalized ratio of areas would be (s_1 : s_2 : s_3 : ... : s_n), where s_i = S_i / (S_1 + S_2 + ... + S_n).
  • the routine rejects the segment for which the value obtained by subtracting the normalized area ratio from the normalized squared-length ratio, i.e., (l_i^2 - s_i), is the greatest. Having removed the segment, the remaining ratios are renormalized and the score is recalculated. If this recalculated score is less than the previous score, the routine again removes the segment where (l_i^2 - s_i) is the greatest and recalculates. This process is repeated until the recalculated score is greater than the previous score. The segments used to calculate the lowest score are then selected as the fill segments.
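The selection loop can be sketched as below. The patent does not spell out the "score" at this point; this sketch assumes it is the largest (l_i^2 - s_i) value over the remaining segments, with l_i^2 = L_i^2 / sum(L_j^2) the normalized squared boundary length and s_i = S_i / sum(S_j) the normalized filled area. That assumed score is a placeholder, not the patent's definitive criterion.

```python
# Iterative rejection of tentative fill segments using the (l^2 - s) criterion.
# lengths / areas: dicts mapping segment id -> boundary length / filled area.

def criterion(lengths, areas):
    """Return {segment: l_i^2 - s_i} for the given (renormalized) segments."""
    lsq = sum(L * L for L in lengths.values())
    asum = sum(areas.values())
    return {k: lengths[k] ** 2 / lsq - areas[k] / asum for k in lengths}

def select_fill_segments(lengths, areas):
    remaining = set(lengths)
    best_set, best_score, prev_score = None, None, None
    while remaining:
        crit = criterion({k: lengths[k] for k in remaining},
                         {k: areas[k] for k in remaining})
        score = max(crit.values())          # assumed score: worst (l^2 - s)
        if best_score is None or score < best_score:
            best_set, best_score = set(remaining), score
        if prev_score is not None and score > prev_score:
            break                           # score went back up: stop rejecting
        prev_score = score
        if len(remaining) == 1:
            break
        remaining.discard(max(crit, key=crit.get))  # reject where l^2 - s is greatest
    return best_set

# Segment 1 has a long, thin contact: it gets rejected.
print(select_fill_segments({1: 10, 2: 1}, {1: 50, 2: 50}))  # {2}
```

Intuitively, a segment that contributes a long boundary but fills little area is a poor predictor of the exposed region's colors, which is what this criterion penalizes.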
  • a different function of the boundary lengths and area contributed by each of the segments may be used to determine the fill segments.
  • a predictive filling routine is carried out next.
  • Predictive filling carries out the filling procedure in the same way that a decoder would carry out filling.
  • One important difference between predictive filling and reference filling is that in predictive filling, the predicted pixel values are not compared with the actual values of the corresponding pixels.
  • an image decoder will have the final task of filling in the exposed area and it will not have the information about the actual values of the pixels within this area and thus must perform the final task of filling in the exposed area based upon information readily available or information transmitted by the encoder.
  • FIG. 8 delineates the steps of predictive filling.
  • the routine begins with taking each pixel that is most proximal to the edge of the exposed area (802) and considering a kernel of adaptive size around it (804).
  • the domain of such a kernel 912 can be seen in Fig. 9C around the pixel 999 at the edge of the exposed area.
  • the routine creates a histogram of color values of all pixels within the kernel for each fill segment (806).
  • For each segment there will be a histogram of color distribution for each of the color components Y, U and V. In other embodiments, a different set of histograms for other spectral components of the video image frame is also envisioned.
  • the routine identifies the segment adjacent to the pixel (808). If there is more than one segment adjacent to the pixel then the routine determines which segment contributes more pixels to the kernel (810). Then the routine calculates a statistical parameter of the color values for the adjacent fill segment (812). In one embodiment the statistical parameter calculated is the median, however in other embodiments, measures such as but not restricted to mean, mode, the weighted mean or any other statistical moment of the distribution may also be used.
  • pixel 999 will be filled with the color value calculated from the pixels of segment 904 that overlap with kernel 912.
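The predictive-filling step (FIG. 8) can be sketched in the same style as the reference fill above. The key difference is visible in the code: there is no actual frame to compare against, and when more than one fill segment is adjacent, the segment contributing the most pixels to the kernel supplies the value (808-812). The 3x3 kernel, single channel, and median parameter are simplifying assumptions, as before.

```python
# One predictive-filling step: fill an exposed pixel from adjacent fill
# segments only, with no reference to the actual frame.
from statistics import median

EXPOSED = -1

def predictive_fill_pixel(labels, values, fill_segments, y, x):
    """Return (estimate, segment id), or (None, None) if no fill segment is adjacent."""
    h, w = len(labels), len(labels[0])
    per_segment = {}
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w
                    and labels[ny][nx] in fill_segments):
                per_segment.setdefault(labels[ny][nx], []).append(values[ny][nx])
    if not per_segment:
        return None, None       # no adjacent fill segment: pixel joins the next ring
    # Pick the fill segment contributing the most pixels to the kernel (810).
    seg = max(per_segment, key=lambda s: len(per_segment[s]))
    return median(per_segment[seg]), seg   # statistical parameter (812)

lab = [[1, 1, 2], [1, EXPOSED, 2], [1, 1, 2]]
val = [[10, 10, 40], [10, 0, 40], [10, 10, 40]]
print(predictive_fill_pixel(lab, val, {1}, 1, 1))  # (10, 1)
```

Note how restricting `fill_segments` reproduces the behavior shown in FIG. 10: pixels adjacent only to non-fill segments stay unfilled and are picked up by a later ring.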
  • FIG 9 A-I and Fig 10 A-I illustrate the steps of predictive filling.
  • FIG.9 shows the results of filling using segments 904 and 906 as fill segments.
  • FIG 10, in contrast, illustrates the steps of filling from segment 1004 only (904 in Fig. 9).
  • filling from segment 904 alone creates a better approximation of the actual color scheme of the exposed area, than the results obtained by filling from both segments 904 and 906.
  • the results demonstrate the importance of selecting the correct segments for filling an exposed area.
  • Fig 9A illustrates the first image frame, which contains three segments 904, 906 and 908.
  • Fig 9B illustrates frame 2 after the motion of the segment 908 where an exposed area 910 is visible. It is important to note here that this frame is not the actual image frame. This is an intermediate step in the process of reconstruction of the image frame after the motion of the segment 908. In the subsequent steps the routine will attempt to reconstruct the actual image frame by predicting the color values of the pixels in the exposed area.
  • Fig 9C shows the kernel 912 around the pixel 999.
  • Figure 9D illustrates the first ring of pixels within the exposed area to be filled; therein the first ring of pixels at the boundary of the exposed area is denoted by 1.
  • Figure 9E illustrates the results of the filling of the first ring using the method described in Fig. 8 for each pixel at the boundary. It is important to note here that since segments 904 and 906 have been chosen as fill segments, the pixels adjacent to segment 908 are left unfilled. As can be seen in Fig. 9F, during the filling of the next ring, these unfilled pixels will become parts of ring 2. Fig 9F illustrates the pixels that belong to the second ring as marked by the number 2 within the exposed area 910. Fig. 9G illustrates the results of filling of ring 2. Once again the two unfilled white pixels marked 2 will become part of the third ring as can be seen in Fig. 9H.
  • Fig 9H illustrates the pixels of the third ring labeled 3 within the exposed area 910.
  • Fig 9I illustrates the final results of the predictive filling. When this result is compared to the actual image frame as illustrated in Fig. 4B, one can see that filling from segments 904 and 906 produced a poor approximation of the actual color values of the exposed area in Fig 9I.
  • the routine would reject segment 906 as a fill segment based upon the criteria described in the section titled 'determining fill segments' and illustrated in Fig. 6 and Fig. 7. Briefly, this segment would be rejected since it does not fill a minimum percentage of pixels of the exposed area.
  • the determination criteria in the current example would leave only one segment 904 as the fill segment.
  • Figure 10 A-I illustrate the steps of filling from segment 1004 only (904 in Fig 9).
  • Fig. 10A illustrates the segments 1004, 1006, 1008 and the exposed area 1010 (identical to Fig. 9A).
  • Fig 10B illustrates the first ring of the exposed area where the pixels of the first ring are labeled 1.
  • Fig. 10C illustrates the results of filling the first ring using only segment 1004 as the fill segment. It is worth noting that the three pixels labeled 1 adjacent to segment 1006 have remained unfilled because 1004 is the only fill segment and the routine only allows filling from adjacent fill segments.
  • Fig. 10D and Fig. 10E illustrate the second ring and the results of filling the second ring respectively.
  • Fig. 10F and Fig. 10G illustrate the third ring and the results of filling the third ring respectively.
  • Fig. 10H illustrates the fourth ring.
  • Fig. 10I illustrates the final result of filling the exposed area 1010 from the segment 1004.
  • the set of fill segments may be recalculated.
  • the set of fill segments chosen may not produce optimal results and using a smaller set may better approximate the actual color values of the exposed area.
  • the method involves comparing the predictive filling results with the reference filling results.
  • a function of the predictive fill values, the reference fill values and the actual values is used as a measure of the closeness of the predictive fill results to the actual values of the exposed area pixels. If this function is above a threshold for the pixels filled by any fill segment or segments, the segment or segments are rejected and the reference fill and predictive fill values are recalculated without the rejected fill segments.
  • FIG 11 illustrates a method, the routine utilizes in its current embodiment, to determine the quality of the results achieved by predictive filling.
  • the routine determines the average of the absolute value of the difference between the actual value and the reference fill value for the region filled by each of the fill segments (1102, 1104). Represented mathematically, if the number of pixels contributed by a fill segment is N, the actual value of any pixel i is denoted by a_i, the predictive fill value of the pixel is denoted by p_i, and the reference fill value of the pixel is denoted by r_i, then for each region contributed by a fill segment this value is (1/N) * sum over i of |a_i - r_i|.
  • the routine also determines the average of the absolute value of the difference between the actual value and the predictive fill value for the region filled by each of the fill segments (1106). This quantity is (1/N) * sum over i of |a_i - p_i|.
  • the routine then subtracts the average of the absolute value of the difference between the actual values and the reference fill values from the average of the absolute values of the difference between the actual values and the predictive fill values (1108). This process is repeated for the areas filled by each of the fill segments (1110, 1112). If for any segment this value is above a threshold, the routine decides to do a recalculation (1114, 1118); otherwise the current set of fill segments is accepted as the final set (1116). Represented mathematically, if for any region contributed by a fill segment the quantity (1/N) * sum over i of |a_i - p_i| minus (1/N) * sum over i of |a_i - r_i| is greater than a threshold value, then a recalculation is initiated.
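The recalculation test of FIG. 11 can be sketched directly from the two averages defined above. This is an illustrative sketch; the per-segment pixel lists and the return convention are assumptions of the example.

```python
# Decide whether predictive filling is acceptably close to reference filling.
# regions: {segment id: list of (actual, reference, predictive) pixel triples}.

def needs_recalculation(regions, threshold):
    """Return (True, offending segment) if any segment's predictive fill
    is worse than its reference fill by more than the threshold."""
    for seg, pixels in regions.items():
        n = len(pixels)
        ref_err = sum(abs(a - r) for a, r, p in pixels) / n   # (1/N) sum |a_i - r_i|
        pred_err = sum(abs(a - p) for a, r, p in pixels) / n  # (1/N) sum |a_i - p_i|
        if pred_err - ref_err > threshold:
            return True, seg        # this segment triggers a recalculation (1114, 1118)
    return False, None              # current fill segments accepted (1116)

regions = {1: [(10, 10, 10), (12, 11, 20)]}
print(needs_recalculation(regions, 2))  # (True, 1)
```

A segment that trips the test would then be excluded from the boundary-segment set and the reference and predictive fills recomputed, as described above.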
  • any other function of the actual values, reference fill values and the predictive fill values can be used to determine whether a recalculation should be done.
  • any fill segment or segments that fail to meet the above criteria are excluded from the set of possible boundary segments.
  • the routine then carries out reference filling, determines a set of fill segments and carries out predictive filling (1120).
  • one recalculation is performed if necessary, however, in other embodiments, multiple recalculations may be done.
  • the routine determines whether the set of fill segments determined after the first calculation or the set determined after recalculation is superior (1202, 1204).
  • the routine first calculates the average of the absolute values of the differences between the first predictive fill values and the actual values of all of the pixels in the exposed area (1206, 1208). It is important to note that the above difference is not calculated for the areas contributed by each of the fill segments; rather, this is a global difference calculated over all of the pixels within an exposed area.
  • the routine calculates the average of the absolute values of the differences between the recalculated predictive fill values and the actual values of all of the pixels in the exposed area (1210, 1212).
  • the routine selects the set of fill segments determined during the recalculation as the final fill segments (1214, 1216). Represented mathematically, denoting the first predictive fill value of pixel i by p_i and the recalculated predictive fill value by p'_i, this set is selected if (1/N) * sum over i of |a_i - p'_i| is less than (1/N) * sum over i of |a_i - p_i|.
  • the routine selects the set of fill segments determined during the first calculation as the final set of fill segments (1218). Represented mathematically, this set is selected if (1/N) * sum over i of |a_i - p_i| is less than or equal to (1/N) * sum over i of |a_i - p'_i|.
  • in other words, if the first predictive fill values are the closer approximation, the routine selects the set of fill segments determined during the first calculation as the final fill segments. If the average of the absolute values of the differences between the recalculated predictive fill values and the actual values is less than the average of the absolute values of the differences between the first predictive fill values and the actual values, the routine selects the set of fill segments determined during the recalculation as the final set of fill segments.
  • fill begins with the most exterior subdivisions and progresses into the more interior subdivisions.
  • the most exterior subdivisions are 1302, 1304, 1306, 1308, 1310, 1312, 1314, and 1316 (only 1318 is an interior subdivision)
  • in Fig. 13B all the subdivisions are already exterior subdivisions.
  • a set of fill segments needs to be determined in the same manner as described above. After completing the selection of final fill segments, another predictive filling is performed for each of the exterior subregions.
  • a global error of predictive fill against the actual image is calculated for all the exterior subdivisions.
  • the error is measured by calculating the average of the absolute value of the difference between the predictive fill value and the actual pixel value for each subregion. Similar to the reference fill, only the subdivision(s) for which the error is smaller than the pre-determined threshold are finally allowed to be filled. After the subdivisions for which the error is smaller than the pre-determined threshold are filled, a new set of exterior subdivisions is identified. The exterior subdivisions which were not filled in the previous attempt are once again identified as exterior subdivisions. Then, a set of subdivision(s) among the newly identified exterior subdivisions which are allowed to be filled during the current attempt is determined in the same manner just described. Here, the subdivisions which had already been filled during the previous attempts are also allowed to fill the current exterior subdivisions. This process is then repeated until all the subdivisions in the exposed area are filled.
  • Transmission:
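The outside-in fill order for multi-subregion exposed areas (FIG. 13) can be sketched as a pass-by-pass loop. This is an illustrative abstraction: subdivisions are modeled by an adjacency map, the boundary segments are modeled as a pre-filled pseudo-node "outside", and the per-subdivision `error` values are assumed to have been computed by the predictive-fill comparison described above. The guard that breaks the loop when nothing is fillable is an addition of the sketch, not part of the patent's description.

```python
# Determine the order in which subdivisions of an exposed area are filled,
# working from the exterior subdivisions inward.

def fill_order(neighbors, error, threshold):
    """neighbors: {subdivision: set of adjacent subdivisions}; 'outside'
    represents the boundary segments. Returns one list of subdivisions per pass."""
    filled = {"outside"}            # boundary segments are available from the start
    order = []
    pending = set(neighbors) - filled
    while pending:
        # Exterior subdivisions: those touching something already filled.
        exterior = {s for s in pending if neighbors[s] & filled}
        # Only subdivisions whose error beats the threshold fill this pass.
        ok = {s for s in exterior if error[s] < threshold}
        if not ok:
            break                   # sketch-only guard against an endless loop
        order.append(sorted(ok))
        filled |= ok                # filled subdivisions may feed later passes
        pending -= ok
    return order

nb = {"outside": {"a", "b"}, "a": {"outside", "c"},
      "b": {"outside", "c"}, "c": {"a", "b"}}
print(fill_order(nb, {"a": 1, "b": 2, "c": 1}, threshold=3))  # [['a', 'b'], ['c']]
```

The second pass shows the rule that previously filled subdivisions ("a" and "b") are allowed to fill the current exterior subdivision ("c").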
  • fill segment information regarding which segments are to be used as fill segments is transmitted to an exemplary decoder for efficient reconstruction of the exposed area.
  • the fill segment information may also be coupled to a residue encoder to improve the local image quality.
  • the information regarding which subregion is filled in which order and from which fill segments is sent to the exemplary decoder.
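The multi-pass subdivision filling described in the bullets above can be sketched in a few lines. This is an illustrative simplification, not the patented routine itself: `predict` and `error` are hypothetical callables standing in for a subdivision's predictive fill (which may draw on already-filled subdivisions) and its error against the actual image, and relaxing the threshold when a pass fills nothing is an assumption modeled on the threshold increase of Fig. 5.

```python
def fill_subdivisions(subdivisions, predict, error, threshold):
    """Fill subdivisions in repeated passes: a subdivision is accepted
    once its predictive-fill error falls below the threshold; later
    passes may predict from subdivisions filled in earlier passes."""
    unfilled = set(subdivisions)
    filled = {}
    while unfilled:
        progressed = False
        for sub in sorted(unfilled):
            value = predict(sub, filled)        # may use already-filled subdivisions
            if error(sub, value) < threshold:
                filled[sub] = value
                progressed = True
        unfilled -= set(filled)
        if unfilled and not progressed:
            threshold *= 1.5                    # relax when a pass fills nothing
    return filled
```

The returned mapping records which subdivisions were filled and with what values; an encoder would additionally record the order of filling for transmission to the decoder.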

Abstract

A method of reconstructing a previously occluded area of a video image frame exposed by the motion of a segment within said image frame. In one embodiment the method involves approximating the color values of the pixels in the newly exposed area from the color values of the neighboring image segments. The process is refined by identifying a set of neighboring segments to the exposed area, called fill segments, that most closely resemble the color values of the pixels within the exposed area. These fill segments are then used to reconstruct the color values of the exposed area. In one embodiment, the identities of the fill segments are transmitted to an exemplary decoder.

Description

METHOD OF FILLING EXPOSED AREAS IN DIGITAL IMAGES
CROSS-REFERENCES TO RELATED APPLICATIONS This application derives priority from provisional application titled "Filling by coding" No. 60/257,844 filed on Dec 20, 2000.
BACKGROUND OF THE INVENTION The world's technologies for communication, information and entertainment are steadily becoming digital. As part of this trend, still pictures are being stored and transmitted digitally, and video is being transmitted and stored in digital form. Digital images and video together constitute an extremely important aspect of the modern communication and information infrastructure. Efficient methods for processing multi-dimensional signals such as digital video and images are of deep technological significance. Examples of common application areas where such sophisticated processing is absolutely necessary include image and video compression for efficient video storage and delivery, manipulation of digital images and video frames for effective generation of artificial scenes, and image or video restoration.
In the context of video compression, since a number of the same objects move around in a scene spanning several video frames, one attempts to create shortcuts for describing a current video frame being encoded or compressed in relation to other video frames that have already been transmitted or stored in the bit stream, through a process of identifying portions of the current frame with other portions of previously sent frames. This process is known as motion compensation. As illustrated in Fig. 1A, in technologies such as MPEG 1, 2 and 4, the image frame is subdivided into square blocks that are then matched to a previously encoded frame, and a displacement vector, also called a motion vector, is placed in the bit stream indicating that the block in question should be replaced by the corresponding block in a previously encoded frame.
Such block-based motion compensation suffers from the limitation that the objects within most images are not built up of blocks, which leads to poor matching and motion compensation. In particular, blocks that traverse the boundary of two objects tend to have even poorer matches than others. This situation is illustrated in Fig. 1B, where block 102 is matched with block 102' of a previous frame, but block 104 at the boundary of two objects cannot be matched with block 104'. Hence it becomes desirable to be able to directly manipulate the inherent constituent components of any given video frame, namely the objects or parts of objects, segments of arbitrary shapes (as allowed for instance in MPEG-4 or as disclosed in "Method and Apparatus for Digital Image Segmentation," International Publication Number WO 00/77735, applicant Pulsent Corporation, inventors A. Prakash, E. Ratner, J.S. Chen, and D.L. Cook, published December 21, 2000), as the fundamental entities for use in motion compensation. Any form of motion compensation based on blocks or objects suffers from an additional drawback: as objects move in a video scene, previously occluded regions that constitute new information appear in the image frame. Such new information in regions that were previously occluded and are now visible, hereafter referred to as exposed areas or exposed regions, constitutes a very large proportion of the encoded information or bit stream in existing video compression technologies. Furthermore, applications such as artificial scene generation based on the manipulation of objects or segments that are moved and placed in new locations also result in such exposed areas. In applications such as image or video restoration where certain parts of an image may be unavailable, lost or corrupted, such regions may also be considered as unknown regions or exposed areas.
SUMMARY OF THE INVENTION A method of filling exposed areas of a video image is disclosed herein. In its current embodiment, the method involves approximating the color values of the pixels in the exposed area from the color values of a subset of the neighboring segments. The multipart method is focused on identifying the neighboring segments which most closely resemble the local color values of the exposed area. Selection and transmission of the identities of such segments to an exemplary decoder allows the decoder to efficiently reconstruct the exposed area. Following the identification of an exposed area, the segments that are adjacent to said exposed area are identified and designated as boundary segments. A reference filling routine is performed next, which involves comparing the estimated value of each pixel with the actual value of the pixel and discarding the estimated value if it is not a close approximation. This method produces a good reconstruction of the actual color values of the exposed area pixels. The next step involves determining which boundary segments are to be used for filling the exposed area and designating them as fill segments. A predictive filling routine is then performed utilizing only the segments designated as fill segments. The results are compared with those of the reference filling routine to ensure that the predictive filling generates a close approximation of the reference filling, and hence of the actual color values of the exposed area pixels. If the difference between the reference fill and the predictive fill results is greater than a certain threshold value, the filling routine is recalculated using a smaller set of boundary segments. A final set of fill segments is then determined. In one embodiment the fill segment information is transmitted to an exemplary decoder.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1A-B illustrate the MPEG compression scheme.
FIG. 2A illustrates four segments of a digital image frame.
FIG. 2B illustrates a second image frame where the segments have moved relative to one another.
FIG. 2C illustrates a second image frame where the segments have moved relative to one another and an exposed area is visible.
FIG. 2D illustrates the results of filling which generates a poor reconstruction of the exposed area.
FIG. 2E illustrates the results of filling which generates a good reconstruction of the exposed area.
FIG. 3 illustrates the steps of a reference filling routine.
FIG. 4A-L illustrate reference filling.
FIG. 5 illustrates the method of changing the threshold value for the difference between the reference fill value and the actual value of an exposed area pixel.
FIG. 6 illustrates the steps of determining tentative fill segments.
FIG. 7A illustrates the steps of determining a set of fill segments from a tentative set of fill segments.
FIG. 7B illustrates the pixels of a fill segment adjacent to the boundary of an exposed area.
FIG. 7C illustrates the final results of a reference fill routine.
FIG. 8 illustrates the steps of a predictive filling routine.
FIG. 9A-I illustrate the steps of predictive filling using a set of fill segments that does not produce an accurate reconstruction of the exposed area.
FIG. 10A-I illustrate the steps of predictive filling using a set of fill segments that produces a better reconstruction of the exposed area.
FIG. 11 illustrates the steps of recalculation of fill segments.
FIG. 12 illustrates the steps of determining the final set of fill segments.
FIG. 13A-B illustrate exposed areas with multiple subregions.
DESCRIPTION OF THE SPECIFIC EMBODIMENTS The invention herein pertains to a highly effective method for approximating the information in exposed areas. The application of the invention significantly lowers the cost of transmission and storage of multidimensional signals such as digital video and images compared to that incurred by existing technologies. The invention also improves the quality and lowers the cost of digital video and image restoration. Introduction When an object within a sequence of digital image frames changes position, a newly uncovered area is exposed. In order to efficiently encode this newly uncovered area, it would be desirable to approximate the color of the newly uncovered region by extending the characteristics of its neighbors. This method of approximation would be particularly valuable in situations where large amounts of video data need to be transmitted through narrow transmission channels such as telephone lines. This method can also be useful for storage of digital video data when storage disk space is limited. The preferred embodiment is to use the current invention as a front end to a residue encoder because it forms a better approximation of the image.
The invention disclosed herein utilizes an image segmentation scheme disclosed in the Segmentation patent application. This image segmentation method subdivides a digital image into segments based upon the natural boundaries of the objects within the image frame. FIG. 2A illustrates such an image frame where the image is subdivided into three segments 202, 204 and 206. As illustrated in FIG. 2B, when the objects move relative to one another, a previously occluded area is exposed. In FIG. 2C, this area is designated 208. Since the image frame is segmented along the natural boundaries of the objects therein, the color of the exposed area 208 is likely to be similar to the color of one or more of the surrounding segments. Therefore, using the surrounding segments, an 'intelligent' encoding method can determine the likely color or colors of region 208 while receiving only a small fraction of the information required by existing video compression schemes. FIG. 2D illustrates the poor approximation of the exposed area that can be produced if it is filled using information from surrounding segments without determining which of these segments most closely resemble the color values of the exposed area. FIG. 2E illustrates the better approximation of the exposed area obtained when it is filled using information from a subset of the boundary segments, in this case segment 202. It is envisioned that the present invention will be used for exposed areas that are relatively small compared to the segments surrounding them, either in terms of area or in terms of aspect ratio, or when there is low contrast between the actual color values of the exposed area and at least one of the boundary segments. Brief Overview
This invention uses a multi-step process to determine the color information of the newly exposed image regions. The steps are as follows:
Obtaining a first image frame and a second image frame.
Determining the newly uncovered image areas in the second frame.
Determining which segments are adjacent to the newly uncovered image area and designating them as boundary segments.
Carrying out a reference filling routine.
Determining which boundary segments are to be used for filling the exposed area and designating them as fill segments.
Carrying out a predictive filling routine and comparing the results with those of the reference filling routine.
Recalculating the filling routine when the difference between the reference fill and the predictive fill results is greater than a certain threshold value.
Determining the final set of fill segments.
Efficiently transmitting the fill segment information to an exemplary decoder.
Obtain segments
For purposes of this disclosure, a segment is a portion of the image where the color is uniform or nearly uniform. It is not germane to this disclosure how the segments were obtained.
Segment Motion
In one embodiment, the motion of the segments from Frame 1 to Frame 2 is encoded as motion vectors for use in reconstructing the second frame. For purposes of this disclosure, it does not matter how the motion vectors were determined.
Determine newly discovered areas
In the process of reconstructing a second frame from the previous frame and information about the segment motion, there may be areas in the second frame that are devoid of video information, as exemplified by region 208 in FIG. 2C.
Determine boundary segments
For purposes of this disclosure, it does not matter how the surrounding segments are determined. In one embodiment, at each point on the perimeter, the nearest neighbor points are examined and any segment containing such a point is considered a surrounding segment. In another embodiment, a segment is included as a boundary segment only if it has sufficient contact with the exposed area. In a third embodiment, segments overlapping an annular region around the exposed area may be used to determine the boundary segments.
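The first two embodiments above can be sketched as a perimeter scan. This is a sketch under simplifying assumptions, not the patented method: 4-connected neighbours, a `seg_map` dictionary from pixel coordinates to segment ids, and a hypothetical `min_contact` count standing in for "sufficient contact".

```python
def boundary_segments(seg_map, exposed, min_contact=1):
    """Return ids of segments touching the exposed area.
    seg_map: dict (x, y) -> segment id for known pixels;
    exposed: set of (x, y) pixels with unknown values."""
    contact = {}
    for (x, y) in exposed:
        for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if q in exposed:
                continue                       # still inside the exposed area
            sid = seg_map.get(q)
            if sid is not None:
                contact[sid] = contact.get(sid, 0) + 1
    # second embodiment: require sufficient contact with the exposed area
    return {sid for sid, n in contact.items() if n >= min_contact}
```

With `min_contact=1` this reduces to the nearest-neighbour embodiment; raising it filters out segments that barely graze the exposed area.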
Reference Filling: Following the identification of the boundary segments, a routine is carried out that fills the exposed area using the color values of all of the boundary segments and subsequently compares the results of this filling routine with the actual color values of the pixels within the exposed area. The actual color values for comparison are obtained from the actual second frame and the filling is carried out on the second frame being reconstructed from the previous frame.
An image frame with an exposed area is an intermediate step in the process of reconstructing an image frame from the previous frame. This situation occurs when an image segment has moved but the information regarding the color of the exposed area is not available. It is important to note that such a situation occurs commonly in video compression, where an exemplary encoder has transmitted the information regarding the motion of a segment to a decoder, but not the color values of the exposed area. Unlike a decoder, an exemplary encoder has available the actual color values of the exposed area. Hence an encoder can fill in the exposed area using the color information available from the boundary segments and compare the results with the actual values of the pixels within the exposed area. In contrast, a decoder will perform this filling based upon geometry. This will be elaborated in a later section.
FIG. 3 illustrates the method of reference filling. The routine begins by setting a threshold value for color difference, also called color transition (302). Then the routine takes each pixel that is most proximal to the edge of the exposed area and considers an adaptively sized kernel around it (304, 306). The size and shape of the kernel are determined adaptively and may be based upon the geometry of the exposed area, the color values of the boundary segments, or both. FIG. 4C illustrates such an exposed area where pixel 499 of the exposed area 410 is being filled and 404, 406 and 408 are the boundary segments surrounding exposed area 410. FIG. 4D illustrates a kernel 412 (area shaded in gray) around pixel 499. It is important to note that although the kernel 412 is shown as a square, in practice the kernel may be of any shape. The routine then determines a histogram of the colors of all pixels within the kernel for each boundary segment (308). Referring again to FIG. 4D, the kernel overlaps with two boundary segments, 404 and 406; hence there will be two sets of histograms of color distributions for pixel 499. For each segment there will be a histogram of color distribution for each of the color components Y, U and V. In other embodiments, there can be a different set of histograms for other spectral components of the video image frame. The routine then determines which boundary segment is adjacent to the selected pixel and calculates a statistical parameter of the color values for each adjacent segment (310). In FIG. 4C the boundary segment 404 is adjacent to pixel 499; hence the routine calculates a statistical parameter for all of the pixels of segment 404 that fall within the kernel 412. In one embodiment the statistical parameter calculated is the median; however, in other embodiments, measures such as but not restricted to the mean, mode, weighted mean or any other statistical moment of the distribution may also be used.
The routine then calculates the difference between the actual color value of that pixel and the value of the calculated statistical parameter (312). If the selected pixel is adjacent to more than one boundary segment, the segment whose statistical parameter is closest to the actual color value of the selected pixel is chosen. If this difference is less than the threshold, the pixel is assigned the color value of the calculated statistical parameter and also assigned the segment identifier of the adjacent segment (314, 316). If the difference is greater than the threshold, the pixel is left unfilled (318). This routine is repeated for every exposed area pixel at the boundary of the exposed area. Once all of the pixels in a ring have either been assigned a color and segment identity or left unfilled, the next set of pixels at the boundary of the exposed area is filled using the method described above. Fig. 4 illustrates the process of reference filling. Fig. 4A is the first frame of an image containing three segments 404, 406 and 408. Fig. 4B illustrates a second image frame where segment 408 has moved and exposed a previously occluded region, where most pixels share the color values of segment 404 and one pixel shares the color value of segment 406. Fig. 4C illustrates the image frame of Fig. 4B without any color values filling the exposed area 410. As described earlier, Fig. 4D illustrates the domain of an exemplary kernel 412 centered around the pixel 499 at the edge of the exposed area. Fig. 4E illustrates the first ring of pixels at the edge of exposed area 410, labeled 1. These are the first pixels to be filled using the method described in the previous paragraph and delineated in Fig. 3. Fig. 4F illustrates the results of filling the first ring. The pixels that are colored gray or filled with a striped pattern and labeled 1 are filled.
The pixels that are labeled 1 and colored white are left unfilled because they were adjacent to a segment of significantly different color than their actual values; that is, their estimated color transitions were greater than the accepted threshold, so these pixels were left unfilled. Fig. 4G illustrates the second ring of pixels at the boundary of the exposed area. It is worth noting that the unfilled pixels from ring 1 illustrated in Fig. 4F, labeled 1 and colored white, are now part of the second ring. The pixels belonging to the second ring are labeled 2. Fig. 4H shows the results of filling the second ring. As before, some pixels belonging to ring two were left unfilled because the color transition was greater than the accepted threshold value. Fig. 4I shows the pixels of the third ring, labeled 3. Fig. 4J shows the results of filling the third ring. Fig. 4K illustrates the fourth ring. Fig. 4L illustrates the final result of reference filling. It is worth noting that reference filling was able to reconstruct the actual color values of the pixels in the exposed area 410 with a high degree of accuracy, since this method compared the calculated value with the actual value of each pixel and accepted only those within a threshold.
FIG. 5 illustrates the method of changing the threshold value for the difference between the actual and the calculated value of a pixel. During the process of filling, the color values of an entire ring of pixels may be calculated and yet none filled, because the difference between the actual value of each pixel and its calculated color value is greater than the preset threshold. In this event the threshold value is increased. In other embodiments this rule may be relaxed, and the threshold may be increased whenever more than a certain proportion of the ring's pixels remain unfilled.
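The ring-by-ring reference fill of Fig. 3, together with the threshold increase of Fig. 5, might be sketched as follows for greyscale values. Several simplifications are assumed and are not part of the patent: a fixed 4-neighbour kernel in place of the adaptive kernel, the median as the statistical parameter, no tracking of segment identifiers, and a unit threshold increment.

```python
from statistics import median

def neighbours(p):
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def reference_fill(exposed, actual, known, threshold):
    """Ring-by-ring reference fill. `known` maps pixels outside the
    exposed area to greyscale values; `actual` holds the true values of
    the exposed pixels, used only for the accept/reject test."""
    unfilled = set(exposed)
    values = dict(known)
    while unfilled:
        # the current ring: unfilled pixels adjacent to a known/filled pixel
        ring = [p for p in unfilled
                if any(q in values for q in neighbours(p))]
        accepted = []
        for p in ring:
            kernel = [values[q] for q in neighbours(p) if q in values]
            estimate = median(kernel)                  # kernel statistic
            if abs(estimate - actual[p]) <= threshold:
                values[p] = estimate
                accepted.append(p)
        if not accepted:
            threshold += 1   # whole ring rejected: raise the threshold (Fig. 5)
        unfilled.difference_update(accepted)
    return {p: values[p] for p in exposed}
```

Pixels rejected in one ring are automatically retried as part of the next ring, exactly as in Figs. 4F-4H.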
Determining fill segments:
FIG. 6 illustrates the method of selecting a tentative set of fill segments. First the percentage of pixels that each of the boundary segments contributes is calculated (602, 604). Next a parameter that represents the geometric shape of the regions filled by each of the boundary segments is calculated (606). In one embodiment the geometric parameter may be the portion of the perimeter of the exposed area contributed by each of the boundary segments. In another embodiment, the parameter may be the percentage of the area contributed by each of the boundary segments. In a third embodiment the geometric parameter may be any function of both the area and the perimeter contributed by each of the boundary segments. After determining the appropriate geometric parameter, the routine uses the following criteria to determine which segments will be tentative fill segments:
Firstly, if a boundary segment's contribution to the filled region is greater than the average contribution of all of the boundary segments, that segment is accepted as a tentative fill segment (608, 610). Secondly, if its contribution is less than the average contribution but greater than the pre-determined threshold value (612), the routine determines whether the aspect ratio of its filled region is sufficiently close to 1, within a pre-determined threshold (614); if so, the segment is accepted as a tentative fill segment (618), otherwise it is rejected (620). Thirdly, if its contribution to the filled region is less than the pre-determined threshold value, the segment is rejected as a fill segment (614). In another embodiment a different statistical parameter may be used as the criterion for choosing the tentative fill segments.
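The three selection rules can be summarized in a short routine. The inputs are assumptions for illustration only: `contrib` maps each boundary segment to its fraction of the reference-filled pixels, and `aspect` maps it to the aspect ratio of the region it filled.

```python
def tentative_fill_segments(contrib, aspect, min_share, aspect_tol):
    """Apply the three selection rules.
    contrib: segment id -> fraction of reference-filled pixels it supplied;
    aspect:  segment id -> aspect ratio of the region it filled."""
    avg = sum(contrib.values()) / len(contrib)
    keep = set()
    for sid, share in contrib.items():
        if share > avg:
            keep.add(sid)               # rule 1: above-average contribution
        elif share > min_share and abs(aspect[sid] - 1.0) <= aspect_tol:
            keep.add(sid)               # rule 2: modest share, compact shape
        # rule 3: contribution below min_share -> rejected
    return keep
```

In the Fig. 9/10 example, a segment like 906 that fills only a sliver of the exposed area would fall under rule 3 and be rejected.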
FIG. 7 delineates the steps of choosing a set of fill segments from within the set of tentative fill segments determined above. In one embodiment of the current invention, the routine calculates the normalized ratio of the squares of the boundary lengths that each of the tentative fill segments shares with the exposed area. Referring to Fig. 7B, the boundary lengths are the lengths of the pixel sides that border the exposed area. For example, for segment 704, the total length of the sides of the pixels 704a, 704b, 704c, 704d, 704e, 704f, 704g, 704h, 704i, 704j that are adjacent to the exposed area 710 is the boundary length that segment 704 shares with 710. Suppose the boundary lengths of the n tentative fill segments are denoted by (L1, L2, L3, ..., Ln), with their corresponding normalized values denoted by lowercase 'l'; the normalized ratio of squares would be
(l1² : l2² : l3² : ... : ln²), where
l1² + l2² + l3² + ... + ln² = 1
The routine then calculates the normalized ratio of the areas of the filled regions contributed by each of the boundary segments. Referring to Fig. 7C, the region of exposed area 710 filled by segment 704 is shaded in gray and marked with the number 1, 2, 3 or 4. The region filled by segment 706 is shaded in stripes and marked with the number 1. Suppose the areas contributed by the n tentative fill segments are denoted by (S1, S2, S3, ..., Sn), with their corresponding normalized values denoted by lowercase 's'; the normalized ratio of areas would be (s1 : s2 : s3 : ... : sn), where
s1 + s2 + s3 + ... + sn = 1
The routine then calculates a score, defined as the sum of the absolute values of the differences between the normalized ratios: Score = Σn |sn − ln²|
In the next step, the routine rejects the segment for which the value obtained by subtracting the normalized area from the normalized length squared, i.e., (ln² − sn), is the greatest. Having removed the segment, the remaining ratios are renormalized and the score is recalculated. If this recalculated score is less than the previous score, the routine again removes the segment for which (ln² − sn) is the greatest and recalculates. This process is repeated until the recalculated score is greater than the previous score. The segments used to calculate the lowest score are then selected as the fill segments.
In other embodiments a different function of the boundary lengths and area contributed by each of the segments may be used to determine the fill segments.
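A sketch of this score-driven elimination, under the assumption that the segment with the largest (ln² − sn) is removed greedily while the score keeps improving:

```python
def select_fill_segments(lengths, areas):
    """Greedy elimination by the boundary-length / area score.
    lengths: segment id -> boundary length shared with the exposed area;
    areas:   segment id -> area of the region the segment filled."""
    def score(segs):
        tot_l2 = sum(lengths[s] ** 2 for s in segs)
        tot_a = sum(areas[s] for s in segs)
        ln2 = {s: lengths[s] ** 2 / tot_l2 for s in segs}   # normalized l^2
        sn = {s: areas[s] / tot_a for s in segs}            # normalized areas
        return sum(abs(sn[s] - ln2[s]) for s in segs), ln2, sn

    segs = list(lengths)
    best, ln2, sn = score(segs)
    while len(segs) > 1:
        worst = max(segs, key=lambda s: ln2[s] - sn[s])     # largest (l^2 - s)
        trial = [s for s in segs if s != worst]
        new, new_ln2, new_sn = score(trial)                 # renormalize, rescore
        if new >= best:
            break                                           # score stopped improving
        segs, best, ln2, sn = trial, new, new_ln2, new_sn
    return set(segs)
```

Intuitively, a segment with a long shared boundary but a small filled area (large ln², small sn) is the first candidate for removal.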
Predictive Filling:
Using the set of fill segments, a predictive filling routine is carried out next. Predictive filling carries out the filling procedure in the same way that a decoder would. One important difference from reference filling is that the predicted pixel values are not compared with the actual values of the corresponding pixels. In use, an image decoder will have the final task of filling in the exposed area; it will not have information about the actual values of the pixels within this area and thus must perform the filling based upon information readily available or information transmitted by the encoder. Hence the purpose of carrying out this routine is for an encoder to be able to estimate the result that an exemplary decoder would produce when filling in the exposed area using information from the fill segments alone. The ultimate purpose of the predictive filling routine is to determine whether the segments chosen as fill segments can reasonably approximate the actual color values of the exposed area pixels. FIG. 8 delineates the steps of predictive filling. The routine begins by taking each pixel that is most proximal to the edge of the exposed area (802) and considering a kernel of adaptive size around it (804). The domain of such a kernel 912 can be seen in Fig. 9C around the pixel 999 at the edge of the exposed area. Next the routine creates a histogram of the color values of all pixels within the kernel for each fill segment (806). For each segment there will be a histogram of color distribution for each of the color components Y, U and V. In other embodiments, a different set of histograms for other spectral components of the video image frame is also envisioned. The routine identifies the segment adjacent to the pixel (808).
If there is more than one segment adjacent to the pixel, the routine determines which segment contributes more pixels to the kernel (810). Then the routine calculates a statistical parameter of the color values for the adjacent fill segment (812). In one embodiment the statistical parameter calculated is the median; however, in other embodiments, measures such as but not restricted to the mean, mode, weighted mean or any other statistical moment of the distribution may also be used. The pixel is then filled with the value of the calculated statistical parameter and assigned the segment identifier of the adjacent segment (814). For example, in Fig. 9C, pixel 999 will be filled with the color value calculated from the pixels of segment 904 that overlap with kernel 912.
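The decoder-style predictive fill of Fig. 8 might be sketched as follows for greyscale values, again assuming a fixed 4-neighbour kernel and the median statistic in place of the adaptive kernel and histograms. Unlike reference filling, no actual values are consulted, and only fill segments (and pixels already filled from them) may supply values.

```python
from statistics import median

def predictive_fill(exposed, known, seg_of, fill_segments):
    """Ring-by-ring predictive fill from fill segments only.
    known:  pixel -> value for pixels outside the exposed area;
    seg_of: pixel -> segment id for those pixels. Filled pixels inherit
    the id of the segment they were filled from, so later rings may
    fill from them."""
    def nbrs(p):
        x, y = p
        return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

    unfilled = set(exposed)
    values, ids = dict(known), dict(seg_of)
    while unfilled:
        ring = {}
        for p in unfilled:
            src = [q for q in nbrs(p)
                   if q in values and ids.get(q) in fill_segments]
            if src:                 # adjacent to a fill segment: predict a value
                ring[p] = (median([values[q] for q in src]), ids[src[0]])
        if not ring:
            break                   # no pixel borders a fill segment
        for p, (v, sid) in ring.items():
            values[p], ids[p] = v, sid
        unfilled -= set(ring)
    return {p: values[p] for p in exposed if p in values}
```

This mirrors Figs. 10B-10I: pixels adjacent only to non-fill segments stay unfilled for a ring and are picked up once a filled neighbour becomes available.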
FIG. 9A-I and FIG. 10A-I illustrate the steps of predictive filling. In particular, FIG. 9 shows the results of filling using segments 904 and 906 as fill segments. FIG. 10, in contrast, illustrates the steps of filling from segment 1004 only (904 in Fig. 9). As these examples show, filling from segment 904 alone creates a better approximation of the actual color scheme of the exposed area than the results obtained by filling from both segments 904 and 906. The results demonstrate the importance of selecting the correct segments for filling an exposed area.
Fig. 9A illustrates the first image frame, which contains three segments 904, 906 and 908. Fig. 9B illustrates frame 2 after the motion of segment 908, where an exposed area 910 is visible. It is important to note that this frame is not the actual image frame; it is an intermediate step in the process of reconstructing the image frame after the motion of segment 908. In the subsequent steps the routine will attempt to reconstruct the actual image frame by predicting the color values of the pixels in the exposed area. As mentioned earlier, Fig. 9C shows the kernel 912 around the pixel 999. Fig. 9D illustrates the first ring of pixels within the exposed area to be filled; therein the first ring of pixels at the boundary of the exposed area is denoted by 1. Fig. 9E illustrates the results of filling the first ring using the method described in Fig. 8 for each pixel at the boundary. It is important to note that since segments 904 and 906 have been chosen as fill segments, the pixels adjacent to segment 908 are left unfilled. As can be seen in Fig. 9F, during the filling of the next ring, these unfilled pixels become part of ring 2. Fig. 9F illustrates the pixels that belong to the second ring, marked by the number 2 within the exposed area 910. Fig. 9G illustrates the results of filling ring 2. Once again the two unfilled white pixels marked 2 become part of the third ring, as can be seen in Fig. 9H. Fig. 9H illustrates the pixels of the third ring, labeled 3, within the exposed area 910. Fig. 9I illustrates the final results of the predictive filling. When this result is compared to the actual image frame as illustrated in Fig. 4B, one can see that filling from segments 904 and 906 produced a poor approximation of the actual color values of the exposed area in Fig. 9I.
In practice, the steps of selecting fill segments would reject segment 906 as a fill segment based upon the criteria described in the section titled 'Determining fill segments' and illustrated in Fig. 6 and Fig. 7. Briefly, this segment would be rejected since it does not fill a minimum percentage of pixels of the exposed area. The determination criteria in the current example would leave only one segment, 904, as the fill segment.
Figure 10A-I illustrate the steps of filling from segment 1004 only (904 in Fig. 9). Fig. 10A illustrates the segments 1004, 1006 and 1008 and the exposed area 1010 (identical to Fig. 9A). Fig. 10B illustrates the first ring of the exposed area, where the pixels of the first ring are labeled 1. Fig. 10C illustrates the results of filling the first ring using only segment 1004 as the fill segment. It is worth noting that the three pixels labeled 1 adjacent to segment 1006 remain unfilled because 1004 is the only fill segment and the routine only allows filling from adjacent fill segments. Fig. 10D and Fig. 10E illustrate the second ring and the results of filling the second ring, respectively; therein the pixels belonging to the second ring are labeled 2. Fig. 10F and Fig. 10G illustrate the third ring and the results of filling the third ring, respectively. Fig. 10H illustrates the fourth ring. Fig. 10I illustrates the final result of filling the exposed area 1010 from segment 1004. When the result is compared to the actual values of the exposed area as illustrated in Fig. 4B, one can see that Fig. 10I is a much better reconstruction of the exposed area than was Fig. 9I. Thus the method of selecting a set of fill segments as described above produced a superior reconstruction of the exposed area.
Recalculation:
In one embodiment of the present invention the set of fill segments may be recalculated. In certain situations the set of fill segments chosen may not produce optimal results and using a smaller set may better approximate the actual color values of the exposed area. The method involves comparing the predictive filling results with the reference filling results. In this case, a function of the predictive fill values, the reference fill values and the actual values is used as a measure of the closeness of the predictive fill results to the actual values of the exposed area pixels. If this function is above a threshold for the pixels filled by any fill segment or segments, the segment or segments are rejected and the reference fill and predictive fill values are recalculated without the rejected fill segments.
Fig. 11 illustrates the method that the routine, in the current embodiment, uses to determine the quality of the results achieved by predictive filling. Following predictive filling, the routine determines the average of the absolute value of the difference between the actual value and the reference fill value for the region filled by each of the fill segments (1102, 1104). Represented mathematically, if the number of pixels contributed by a fill segment is N, the actual value of any pixel i is denoted by a_i, the predictive fill value of the pixel is denoted by p_i, and the reference fill value of the pixel is denoted by r_i, then for each region contributed by a fill segment this value is

    (1/N) Σ_i |a_i − r_i|
The routine also determines the average of the absolute value of the difference between the actual value and the predictive fill value for the region filled by each of the fill segments (1106). This quantity is

    (1/N) Σ_i |a_i − p_i|

The routine then subtracts the average of the absolute value of the difference between the actual values and the reference fill values from the average of the absolute values of the difference between the actual values and the predictive fill values (1108). This process is repeated for the areas filled by each of the fill segments (1110, 1112). If, for any segment, this value is above a threshold, the routine decides to do a recalculation (1114, 1118); otherwise the current set of fill segments is accepted as the final set (1116). Represented mathematically, if for any region contributed by a fill segment the quantity
    (1/N) Σ_i |a_i − p_i| − (1/N) Σ_i |a_i − r_i|

is greater than a threshold value, then a recalculation is initiated. In other embodiments, any other function of the actual values, reference fill values and predictive fill values can be used to determine whether a recalculation should be done.
Prior to recalculation of the fill segments, any fill segment or segments that fail to meet the above criterion are excluded from the set of possible boundary segments. Using this reduced set of boundary segments, the routine then carries out reference filling, determines a set of fill segments, and carries out predictive filling (1120).
In the current embodiment, one recalculation is performed if necessary; in other embodiments, multiple recalculations may be done.
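The per-segment recalculation test above can be sketched as follows. The function name and data layout are illustrative, not from the patent: for each fill segment, the mean absolute reference-fill error is subtracted from the mean absolute predictive-fill error, and segments whose excess exceeds the threshold are flagged for rejection before refilling.

```python
def needs_recalculation(pixels_by_segment, actual, predictive, reference,
                        threshold):
    """Return the fill segments that trigger a recalculation.

    pixels_by_segment: dict segment id -> list of pixels it filled.
    actual / predictive / reference: dicts pixel -> value.
    threshold: maximum allowed excess of predictive error over
    reference error, per segment.
    """
    rejected = []
    for seg, pixels in pixels_by_segment.items():
        n = len(pixels)
        # Mean absolute error of the predictive fill for this segment.
        pred_err = sum(abs(actual[p] - predictive[p]) for p in pixels) / n
        # Mean absolute error of the reference fill for this segment.
        ref_err = sum(abs(actual[p] - reference[p]) for p in pixels) / n
        if pred_err - ref_err > threshold:
            rejected.append(seg)
    return rejected
```

An empty result means the current set of fill segments is accepted as the final set; a non-empty result means those segments are excluded and the reference and predictive fills are recomputed.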
Selection of the final fill segments:
As delineated in Fig. 12, if a recalculation is performed, the routine then determines whether the set of fill segments determined after the first calculation or the set determined after the recalculation is superior (1202, 1204). The routine first calculates the average of the absolute values of the differences between the first predictive fill values and the actual values of all of the pixels in the exposed area (1206, 1208). It is important to note that this difference is not calculated separately for the areas contributed by each of the fill segments; rather, it is a global difference calculated over all of the pixels within the exposed area. Next the routine calculates the average of the absolute values of the differences between the recalculated predictive fill values and the actual values of all of the pixels in the exposed area (1210, 1212). If the actual value of a pixel n is denoted by a_n, the first predictive fill value is denoted by p_f,n, and the recalculated predictive fill value is denoted by p_r,n, then, represented mathematically, the two differences above are

    (1/M) Σ_n |a_n − p_f,n|

and

    (1/M) Σ_n |a_n − p_r,n|
where the total number of pixels in the exposed area is denoted by M. If the average of the absolute values of the differences between the first predictive fill values and the actual values is greater than the average of the absolute values of the differences between the recalculated predictive fill values and the actual values, the routine selects the set of fill segments determined during the recalculation as the final fill segments (1214, 1216). Represented mathematically, if
    (1/M) Σ_n |a_n − p_f,n| > (1/M) Σ_n |a_n − p_r,n|

then the recalculated set of fill segments is chosen.
If, however, the average of the absolute values of the differences between the first predictive fill values and the actual values is less than the average of the absolute values of the differences between the recalculated predictive fill values and the actual values, the routine selects the set of fill segments determined during the first calculation as the final set of fill segments (1218). Represented mathematically, if
    (1/M) Σ_n |a_n − p_f,n| < (1/M) Σ_n |a_n − p_r,n|

then the first set of fill segments is chosen.
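The global comparison described above can be sketched as follows; `choose_final_fill` and its arguments are illustrative names, and the error measure is the mean absolute difference over all M pixels described in the text.

```python
def choose_final_fill(actual, first_fill, recalculated_fill):
    """Pick the predictive fill whose global mean absolute error
    against the actual exposed-area values is lower.

    Each argument is a dict pixel -> value over all M pixels of the
    exposed area.  Returns 'recalculated' or 'first'; on a tie the
    first fill is kept (the patent leaves the tie unspecified).
    """
    m = len(actual)
    err_first = sum(abs(actual[p] - first_fill[p]) for p in actual) / m
    err_recalc = sum(abs(actual[p] - recalculated_fill[p]) for p in actual) / m
    return 'recalculated' if err_first > err_recalc else 'first'
```

Note that, unlike the per-segment recalculation test, this is a single global decision over the whole exposed area.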
Filling of exposed areas consisting of multiple subdivisions:
When the exposed area is very large or has a very complex geometrical shape, it generally becomes more difficult to reconstruct it accurately by predictive fill. In such cases, it may be more beneficial to divide a large exposed area into smaller subregions. This situation is illustrated in Fig. 13A. In another embodiment of the present invention, such a method of subdividing is also applied to exposed areas with relatively complex geometric shapes, such as the exposed area illustrated in Fig. 13B. The purpose of the subdivision in Fig. 13B is to decompose the exposed area into several exposed areas with simpler geometries. For the purpose of this disclosure, it does not matter how the exposed area is divided into subdivisions; what matters is that there may be situations where the filling of exposed areas becomes more accurate if the exposed areas are subdivided.
The above scheme of filling a single exposed area can easily be extended to fill exposed areas consisting of multiple subdivided regions. In this case, filling begins with the most exterior subdivisions and progresses into the more interior subdivisions. For example, in Fig. 13A, the most exterior subdivisions are 1302, 1304, 1306, 1308, 1310, 1312, 1314, and 1316 (only 1318 is an interior subdivision), and in Fig. 13B, all the subdivisions are already exterior subdivisions. For each subregion identified as an exterior subregion, a set of fill segments is determined in the same manner as described above. After the selection of final fill segments is complete, another predictive filling is performed for each of the exterior subregions. Next, a global error of the predictive fill against the actual image is calculated for all the exterior subdivisions. In one embodiment the error is measured by calculating the average of the absolute value of the difference between the predictive fill value and the actual pixel value for each subregion. Similarly to the reference fill, only the subdivisions for which the error is smaller than the pre-determined threshold are allowed to be filled. After the subdivisions for which the error is smaller than the pre-determined threshold are filled, a new set of exterior subdivisions is identified. The exterior subdivisions which were not filled in the previous attempt are once again identified as exterior subdivisions. Then, the set of subdivisions among the newly identified exterior subdivisions which are allowed to be filled during the current attempt is determined in the same manner just described. Here, the subdivisions which were already filled during previous attempts are also allowed to fill the current exterior subdivisions. This process is repeated until all the subdivisions in the exposed area are filled.
Transmission:
Once the set of fill segments has been finalized in the manner described above, the information regarding which segments are to be used as fill segments is transmitted to an exemplary decoder for efficient reconstruction of the exposed area. The fill segment information may also be coupled to a residue encoder to improve the local image quality.
For exposed areas that were filled after being subdivided into multiple subregions, after all the subdivisions are filled and the set of fill segments as well as the order of filling the subregions have been finalized, the information regarding which subregion is filled in which order and from which fill segments is sent to the exemplary decoder.
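The subdivision-filling scheme above might be sketched as follows. This is an assumption-laden outline: `is_exterior` and `fill_error` stand in for the routines the patent describes, and the fallback when no subdivision passes the threshold is a made-up policy, since the text does not specify one.

```python
def fill_subdivided(subdivisions, is_exterior, fill_error, threshold):
    """Fill a subdivided exposed area, exterior subdivisions first.

    subdivisions: iterable of subdivision ids.
    is_exterior(sub, filled): True when `sub` currently borders known
        segments or already-filled subdivisions.
    fill_error(sub, filled): predictive-fill error for `sub`, given
        which subdivisions are already filled (filled ones may serve
        as fill segments).
    threshold: maximum error allowed for a subdivision to be filled
        on the current pass.

    Returns the set of subdivisions that were filled.
    """
    filled = set()
    unfilled = set(subdivisions)
    while unfilled:
        exterior = {s for s in unfilled if is_exterior(s, filled)}
        if not exterior:
            break                      # nothing reachable remains
        passed = {s for s in exterior if fill_error(s, filled) < threshold}
        if not passed:
            # Fallback (assumption, not in the patent): fill the
            # lowest-error exterior subdivision so the loop progresses.
            passed = {min(exterior, key=lambda s: fill_error(s, filled))}
        filled |= passed
        unfilled -= passed             # next pass sees a new exterior set
    return filled
```

On Fig. 13A this would fill 1302-1316 on the first pass and the interior subdivision 1318 on a later pass, once its neighbours are filled and can act as fill segments.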
The above description is illustrative and not restrictive. The scope of the invention should, therefore, be determined not with reference to the above description, but instead with reference to the appended claims along with their full scope of equivalents.

Claims

WHAT IS CLAIMED IS:
1. A method of efficiently encoding a portion of a digital image frame comprised of image segments having arbitrary shapes, wherein each image segment has a boundary and each image segment is comprised of pixels having values, the method comprising: obtaining a first image frame and a second image frame; determining newly uncovered image segments in the second frame; determining which segments are adjacent to the newly uncovered image segment in the first image frame and designating them as boundary segments; carrying out a reference filling routine; determining which boundary segments are to be used for filling the exposed area and designating them as fill segments; carrying out a predictive filling routine; and recalculating if necessary and determining the final set of fill segments.
2. The method of carrying out a reference filling routine, the steps comprising: considering a region of adaptive dimensions around each unfilled pixel at the boundary of the exposed area; and determining the statistical distribution of colors for all pixels within the region belonging to each boundary segment.
3. The method of claim 2 further comprising determining to which segment each pixel is adjacent.
4. The method of claim 2 further comprising calculating a statistical parameter of the color values for each adjacent segment determined from the said statistical distribution of colors.
5. The method of claim 2 further comprising calculating the difference between the actual color values of the pixel and the value of the calculated statistical parameter for each boundary segment, and identifying the smallest difference value and the segment which provides the smallest difference value.
6. The method of claim 2 further comprising filling the pixel with the value of the calculated statistical parameter of the segment adjacent to the selected pixel that has the smallest difference, if this difference is less than a threshold value.
7. The method of claim 2 further comprising assigning to the pixel the segment identifier of the segment adjacent to the selected pixel that has the smallest difference between the actual color value of that pixel and the calculated statistical parameter, if this difference is less than a threshold value.
8. The method of claim 2 further comprising leaving the pixel unfilled if the difference between the actual color value of that pixel and the calculated statistical parameter is greater than a threshold value.
9. The method of claim 2 further comprising increasing the threshold value if a certain percentage or less of the pixels can be filled with the current threshold.
10. The method of claim 8 where the percentage of pixels is zero.
11. The method of claim 2 further comprising repeating the steps in claims 2 through 7 for each unfilled pixel at the boundary of the exposed area until the entire exposed area is filled.
12. The method of claim 2 where statistical distributions of three color components Y, U, V are determined.
13. The method of claim 2 where statistical distributions of each component of a multi-spectral image are determined.
14. The method of claim 4 where the statistical parameter is the median color value of the statistical distribution.
15. The method of claim 4 where the statistical parameter is any statistical moment of the distribution of color values.
16. The method of determining the segments to be used for filling, the steps comprising: calculating the percentage of pixels within the exposed area that are filled by each of the boundary segments; and calculating a parameter that represents the geometric shape of the area filled by each of the boundary segments.
17. The method of claim 16 where the parameter representing the geometric shape is a function of the perimeter length, or a function of the area.
18. The method of claim 16 where the parameter representing the geometric shape is a function of both the perimeter length and the area.
19. The method of claim 16 further comprising selecting a set of tentative fill segments, the steps comprising: selecting a segment as a tentative fill segment if its contribution to the filled region is greater than a predetermined contribution of all of the boundary segments; selecting a segment as a tentative fill segment if its contribution is less than the predetermined contribution but greater than a certain threshold value, and the geometric parameter of the region filled by this segment is within a threshold range; and rejecting a segment as a tentative fill segment if neither of the above two criteria is met.
20. The method of claim 19 where the predetermined contribution is the average contribution of all of the segments.
21. The method of claim 19 where the predetermined contribution is any statistical parameter of all of the segments.
22. The method of claim 15 further comprising calculating a function of the perimeter lengths that each of the tentative fill segments contributes to the perimeter of the exposed area.
23. The method of claim 22 where the function of the perimeter lengths is the normalized ratio of the squares of the perimeter lengths.
24. The method of claim 15 further comprising calculating a function of the areas of the regions contributed by each of the tentative fill segments.
25. The method of claim 24 where the function of the areas is the normalized ratio of the areas.
26. The method of claim 15 further comprising calculating a function of the difference between the ratios.
27. The method of claim 26 where the function of the difference between the ratios is the sum of the absolute values of the differences between the ratios.
28. The method of claim 15 further comprising: repeating the steps in claims 15 through 26 after excluding the segment where the value obtained by subtracting the normalized area from the normalized length squared is the greatest; and determining if the sum of the absolute values of the differences recalculated in this step is smaller than that obtained in the previous calculation of the same parameter.
29. The method of claim 15 further comprising repeating the steps in claim 15 until the sum of the absolute values of the differences is greater than that calculated during the previous recalculation.
30. The method of claim 15 further comprising selecting the segments used to calculate the lowest sum of absolute values of the differences as the fill segments.
31. The method of carrying out a predictive fill routine, the steps comprising: considering a region of adaptive dimensions around each unfilled pixel at the boundary of the exposed area; and determining the statistical distribution of colors for all pixels within the region belonging to each boundary segment.
32. The method of claim 20 further comprising determining to which boundary segment each pixel is adjacent.
33. The method of claim 20 further comprising calculating a statistical parameter of the color values for each adjacent segment determined from the said statistical distribution of colors.
34. The method of claim 20 further comprising assigning to the pixel the segment identifier of the segment that contributes the greatest number of pixels to the said region of adaptive dimensions around the pixel.
35. The method of claim 20 further comprising filling the pixel with the value of the statistical parameter of the segment that contributes the greatest number of pixels to the said region of adaptive dimensions around the pixel.
36. The method of claim 20 further comprising repeating the steps in claims 18 through 22 for each unfilled pixel at the boundary of the exposed area until the entire exposed area is filled.
37. The method of claim 20 where statistical distributions of three color components Y, U, V are determined.
38. The method of claim 20 where statistical distributions of each component of a multi-spectral image are determined.
39. The method of claim 22 where the statistical parameter is the median color value of the statistical distribution.
40. The method of claim 22 where the statistical parameter is any statistical moment of the distribution of color values.
41. The method of determining whether the tentative fill segments chosen above sufficiently approximate the actual value of the exposed area, the steps comprising: calculating the value of a function of the actual values and the reference fill values for the regions filled by each of the fill segments; calculating the value of a function of the actual values and the predictive fill values for the regions filled by each of the fill segments; and determining whether, for any segment, the value obtained by subtracting the function of the actual values and the reference fill values from the function of the actual values and the predictive fill values is above a threshold.
42. The method of claim 30 where the function of the actual values and the reference fill values is the average of the absolute values of the color differences between the actual values and the reference fill values.
43. The method of claim 30 where the function of the actual values and the predictive fill values is the average of the absolute values of the color differences between the actual values and the predictive fill values.
44. The method of determining to undertake a recalculation if, for any segment or segments, the value obtained by subtracting the function of the actual values and the reference fill values from the function of the actual values and the predictive fill values is above a threshold.
45. The method of claim 33 where the function of the actual values and the reference fill values is the average of the absolute values of the color differences between the actual values and the reference fill values.
46. The method of claim 33 where the function of the actual values and the predictive fill values is the average of the absolute values of the color differences between the actual values and the predictive fill values.
47. The method of recalculating the fill segments and refilling, the steps comprising: rejecting the segment or segments if, for any segment, the value obtained by subtracting the function of the actual values and the reference fill values from the function of the actual values and the predictive fill values is above a threshold.
Repeating the steps described in claims 2 through 29 after excluding the rejected segment or segments.
48. The method of claim 36 where the function of the actual values and the reference fill values is the average of the absolute values of the color differences between the actual values and the reference fill values.
49. The method of claim 36 where the function of the actual values and the predictive fill values is the average of the absolute values of the color differences between the actual values and the predictive fill values.
50. The method of claim 29 where recalculation is carried out multiple times.
51. The method of determining the final set of fill segments, if no recalculation is done, the steps comprising designating the set of fill segments used for the predictive fill as the final fill segments.
52. The method of determining the final set of fill segments, if a recalculation is performed, the steps comprising: calculating the value of a function of the first predictive fill values and the actual values of all of the pixels in the exposed area; calculating the value of a function of the recalculated predictive fill values and the actual values of all of the pixels in the exposed area; designating the set of fill segments used for the first predictive fill as the final fill segments, if the function of the recalculated predictive fill values and the actual values is greater than the function of the first predictive fill values and the actual values; and designating the set of fill segments used for the recalculated predictive fill as the final fill segments, if the function of the recalculated predictive fill values and the actual values is less than the function of the first predictive fill values and the actual values.
53. The method of claim 41 where the function of the first predictive fill values and the actual values is the average of the absolute values of the color differences between the first predictive fill values and the actual values.
54. The method of claim 41 where the function of the recalculated predictive fill values and the actual values is the average of the absolute values of the color differences between the recalculated predictive fill values and the actual values.
55. The method of efficiently encoding exposed areas with multiple subregions, the steps comprising:
determining final fill segments for each of the exterior subregions;
calculating the value of a function that represents the error in the predictive filling for each exterior subregion and, if the function representing the error is less than a threshold value, filling the exterior subregion;
determining again the identities of the exterior subregions, which include the subregions that were not filled in the first pass as well as regions that were previously in the interior of the exposed area and previously had no boundary segments; and
filling the exterior subregions using fill segments, which may include the subregions that were filled in the previous step.
56. The method of claim 44 further comprising repeating the steps in claim 44 until all subregions have been filled.
57. The method of claim 44 where the function that represents the error in predictive filling is the average of the absolute value of the difference between the predictive fill values and the actual pixel color values.
58. Transmission of the identities of the fill segments to a decoder.
59. Transmitting the order of filling and the fill segment identities to the decoder if the exposed area is divided into multiple subregions.
60. Coupling the information of the fill segments with a residue encoder to improve local quality.
PCT/US2001/050282 2000-12-20 2001-12-20 Method of filling exposed areas in digital images WO2002051158A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2002239703A AU2002239703A1 (en) 2000-12-20 2001-12-20 Method of filling exposed areas in digital images

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US25784400P 2000-12-20 2000-12-20
US60/257,844 2000-12-20

Publications (2)

Publication Number Publication Date
WO2002051158A2 true WO2002051158A2 (en) 2002-06-27
WO2002051158A3 WO2002051158A3 (en) 2003-04-24

Family

ID=22978005

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/050282 WO2002051158A2 (en) 2000-12-20 2001-12-20 Method of filling exposed areas in digital images

Country Status (4)

Country Link
US (1) US7133566B2 (en)
AU (1) AU2002239703A1 (en)
TW (1) TWI233751B (en)
WO (1) WO2002051158A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1306824C (en) * 2004-07-29 2007-03-21 联合信源数字音视频技术(北京)有限公司 Image boundarg pixel extending system and its realizing method

Families Citing this family (14)

Publication number Priority date Publication date Assignee Title
US7119924B2 (en) * 2001-09-28 2006-10-10 Xerox Corporation Detection and segmentation of sweeps in color graphics images
US6983068B2 (en) * 2001-09-28 2006-01-03 Xerox Corporation Picture/graphics classification system and method
US6985628B2 (en) * 2002-01-07 2006-01-10 Xerox Corporation Image type classification using edge features
US6996277B2 (en) * 2002-01-07 2006-02-07 Xerox Corporation Image type classification using color discreteness features
US7362374B2 (en) * 2002-08-30 2008-04-22 Altera Corporation Video interlacing using object motion estimation
US7639741B1 (en) * 2002-12-06 2009-12-29 Altera Corporation Temporal filtering using object motion estimation
US7088870B2 (en) * 2003-02-24 2006-08-08 Microsoft Corporation Image region filling by example-based tiling
US6987520B2 (en) * 2003-02-24 2006-01-17 Microsoft Corporation Image region filling by exemplar-based inpainting
US20050110801A1 (en) * 2003-11-20 2005-05-26 I-Jong Lin Methods and systems for processing displayed images
EP1826723B1 (en) * 2006-02-28 2015-03-25 Microsoft Corporation Object-level image editing
US7830428B2 (en) * 2007-04-12 2010-11-09 Aptina Imaging Corporation Method, apparatus and system providing green-green imbalance compensation
JP4952627B2 (en) * 2008-03-21 2012-06-13 富士通株式会社 Image processing apparatus, image processing method, and image processing program
CN105844583A (en) * 2016-03-17 2016-08-10 西安建筑科技大学 Portrait stone crack intelligence extraction and virtual restoration method
CN111242874B (en) * 2020-02-11 2023-08-29 北京百度网讯科技有限公司 Image restoration method, device, electronic equipment and storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
WO1999051029A2 (en) * 1998-04-01 1999-10-07 Koninklijke Philips Electronics N.V. A method and device for generating display frames from a sequence of source frames through synthesizing one or more intermediate frames exclusively from an immediately preceding source frame
WO2000064148A1 (en) * 1999-04-17 2000-10-26 Pulsent Corporation Method and apparatus for efficient video processing

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US4189711A (en) * 1977-11-08 1980-02-19 Bell Telephone Laboratories, Incorporated Multilevel processing of image signals
US4628532A (en) * 1983-07-14 1986-12-09 Scan Optics, Inc. Alphanumeric handprint recognition
US4876728A (en) * 1985-06-04 1989-10-24 Adept Technology, Inc. Vision system for distinguishing touching parts
US4771469A (en) * 1986-06-30 1988-09-13 Honeywell Inc. Means and method of representing an object shape by hierarchical boundary decomposition
JP2625612B2 (en) * 1992-07-20 1997-07-02 インターナショナル・ビジネス・マシーンズ・コーポレイション Image processing method and image processing apparatus
US5748789A (en) * 1996-10-31 1998-05-05 Microsoft Corporation Transparent block skipping in object-based video coding systems
DE60003032T2 (en) 1999-06-11 2004-04-01 Pulsent Corp., Milpitas METHOD FOR IMAGE SEGMENTATION

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
WO1999051029A2 (en) * 1998-04-01 1999-10-07 Koninklijke Philips Electronics N.V. A method and device for generating display frames from a sequence of source frames through synthesizing one or more intermediate frames exclusively from an immediately preceding source frame
WO2000064148A1 (en) * 1999-04-17 2000-10-26 Pulsent Corporation Method and apparatus for efficient video processing

Non-Patent Citations (4)

Title
DELOPOULOS A N ET AL: "Object oriented motion and deformation estimation using composite segmentation" PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING. (ICIP). WASHINGTON, OCT. 23 - 26, 1995, LOS ALAMITOS, IEEE COMP. SOC. PRESS, US, vol. 3, 23 October 1995 (1995-10-23), pages 217-220, XP010197063 ISBN: 0-7803-3122-2 *
KAUP A ET AACH T: "A New Approach Towards Description Of Arbitrarily Shaped Image Segments" WORKSHOP NOTES. 1992 IEEE INTERNATIONAL WORKSHOP ON INTELLIGENT SIGNAL PROCESSING AND COMMUNICATION SYSTEMS, TAIPEI, TAIWAN, 19-21 MARCH 1992, pages 543-553, XP010322109 Taipei, Taiwan, Nat. Taiwan Univ, Taiwan *
KAUP A ET AL: "Efficient prediction of uncovered background in interframe coding using spatial extrapolation" ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 1994. ICASSP-94., 1994 IEEE INTERNATIONAL CONFERENCE ON ADELAIDE, SA, AUSTRALIA 19-22 APRIL 1994, NEW YORK, NY, USA,IEEE, 19 April 1994 (1994-04-19), pages V-501-V-504, XP010133695 ISBN: 0-7803-1775-0 *
YOKOYAMA Y ET AL: "VERY LOW BIT RATE VIDEO CODING USING ARBITRARILY SHAPED REGION-BASED MOTION COMPENSATION" IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, IEEE INC. NEW YORK, US, vol. 5, no. 6, 1 December 1995 (1995-12-01), pages 500-507, XP000545956 ISSN: 1051-8215 *


Also Published As

Publication number Publication date
WO2002051158A3 (en) 2003-04-24
AU2002239703A1 (en) 2002-07-01
TWI233751B (en) 2005-06-01
US20020131495A1 (en) 2002-09-19
US7133566B2 (en) 2006-11-07


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP