WO2007042853A1 - Method and system for vignetting elimination in digital image - Google Patents

Method and system for vignetting elimination in digital image

Info

Publication number
WO2007042853A1
Authority
WO
WIPO (PCT)
Prior art keywords
lens
image
values
look
sampling
Prior art date
Application number
PCT/IB2005/003057
Other languages
French (fr)
Inventor
Jarmo Nikkanen
Ossi Kalevo
Original Assignee
Nokia Corporation
Nokia Inc.
Priority date
Filing date
Publication date
Application filed by Nokia Corporation, Nokia Inc. filed Critical Nokia Corporation
Priority to PCT/IB2005/003057 priority Critical patent/WO2007042853A1/en
Priority to CN2005800518289A priority patent/CN101292519B/en
Priority to EP05819749A priority patent/EP1946542A4/en
Priority to JP2008535109A priority patent/JP2009512303A/en
Publication of WO2007042853A1 publication Critical patent/WO2007042853A1/en

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 - Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/60 - Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N 25/61 - Noise processing, e.g. detecting, correcting, reducing or removing noise, the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80 - Camera processing pipelines; Components thereof
    • H04N 23/84 - Camera processing pipelines; Components thereof for processing colour signals

Abstract

An imaging system having one or more look-up tables to compensate for the vignetting effect of a lens on an image sensor. An image of a test target formed by the lens is captured by the image sensor, and sampling is carried out at discrete locations distributed on a sparse grid throughout the image area. The look-up table is generated according to the gain values at the sampling grid locations. Two or more look-up tables may be generated for different color temperatures and different lens positions. The image area is effectively divided into a plurality of image zones, each zone defined by four grid locations. The correction gain for each pixel within a zone is calculated at run time by interpolating a gain value on the basis of the correction gain values at those four grid locations, taking into account the distance of the pixel from each of those grid locations.

Description

METHOD AND SYSTEM FOR VIGNETTING ELIMINATION IN DIGITAL IMAGE
Field of the Invention
The present invention relates generally to digital camera imaging and, more particularly, to processing of raw imaging data, such as Bayer data.
Background of the Invention
It is known in the art that images formed behind a camera lens always suffer from an uneven-brightness problem known as light falling-off or vignetting. Vignetting means the darkening of the corners and the borders relative to the center of the image. When a single lens is used for image formation, the amount of light reaching the focal plane of the lens away from the optical axis is reduced according to an optical law known as the cosine fourth law. With a compound lens system, the falling-off can be significantly reduced or substantially eliminated. However, because the camera lens in a small portable device such as a mobile phone is usually not a complex lens system, the image brightness in the border areas of the image is likely to fall off significantly. Part of the vignetting effect is due to light distribution through the lens geometry, and part of the vignetting effect is due to the image sensor. In particular, in a camera having an image sensor that uses micro-lenses and non-optical fill factor, vignetting is also strongly color dependent in that the vignetting effect differs from one color to another.
Summary of the Invention
In order to determine the vignetting effect on an image sensor through an imaging lens, the present invention uses a light source having substantially uniform brightness throughout a test area as an imaging target. A test image of the imaging target formed by the imaging lens is captured by the image sensor, and sampling of the vignetting effect on the test image is carried out at a plurality of locations distributed on a sparse grid throughout the image area. At each sampling location, a sampling window is used to compute the average correction gain value at the sampling grid location. A look-up table is generated according to the gain values at the sampling grid locations. Because the look-up table contains only the gain values at the sampling grid locations rather than the gain value for each pixel of the entire image area, the size of the look-up table can be significantly reduced, depending on the sparsity of the grid. It is possible that two or more look-up tables are generated, each table for a different color temperature of the light source. Furthermore, in a camera system where the lens can be adjusted for automatic focusing or for optical zoom purposes, it is also possible to generate LUTs for different lens positions.
According to the present invention, the image area is effectively divided into a plurality of image zones, each zone defined by four grid locations. As such, the correction gain for each pixel within a zone is calculated at run time by interpolating a gain value on the basis of the correction gain values at those four grid locations, taking into account the distance of the pixel from each of those grid locations.
Brief Description of the Drawings
Figure 1 illustrates a system for generating a look-up table for vignetting correction purposes.
Figure 2 illustrates a system for correcting a vignetting effect in an image using a look-up table.
Figure 3A is a schematic representation of a square sampling grid for use in generating a look-up table.
Figure 3B is a schematic representation of a rectangular sampling grid for use in generating a look-up table.
Figure 3C is a schematic representation of a sampling grid where more zones are located near the borders of an image area.
Figure 4 is a schematic representation of a zone.
Figure 5 is a schematic representation of different sampling windows for use in generating a look-up table.
Figure 6 is a schematic representation showing how the correction gain value at a pixel within a zone is calculated.
Figure 7 is a schematic representation of an electronic device capable of vignetting correction, according to the present invention.
Detailed Description of the Invention
The present invention uses one or more look-up tables for vignetting correction. Instead of generating a look-up table containing the correction factor for each of the pixels in an image area, the present invention samples the vignetting effect on a test image at a plurality of locations distributed on a sparse grid throughout the image area and generates a look-up table according to the gain values at the sampling grid locations. Effectively, the image area is divided into a plurality of image zones, each zone bounded by four grid locations. As such, the correction gain for each pixel within a zone is calculated at run time by interpolating a gain value on the basis of the correction gain values of those four grid locations, taking into account the distance of the pixel from each of those grid locations. The present invention involves two general phases:
1) Generate a reference look-up table (LUT) on the basis of suitable test images that are captured with the type of image sensor that needs vignetting elimination.
2) Eliminate the vignetting effect from the raw image data captured by the image sensor.
In the first phase, a light source having substantially uniform brightness throughout a test area is used as an imaging target, and a test image of the imaging target through the imaging lens is captured by the image sensor. Sampling of the vignetting effect on the test image is carried out at a plurality of locations distributed on a sparse grid throughout the image area. At each sampling location, a sampling window is used to compute the average gain at the sampling grid location. A look-up table is generated according to the gain values at the sampling grid locations. In order to eliminate noise and brightness fluctuation in the image data, two or more test images may be used to compute the gain at the sampling grid locations.
Because the look-up table contains only the gain values at the sampling grid locations rather than the gain value at each pixel of the entire image area, the size of the look-up table can be significantly reduced, depending on the sparsity of the grid. Thus, in a camera system where the lens can be adjusted for automatic focusing or for optical zoom purposes, it is also possible to generate LUTs for different lens positions. Moreover, two or more look-up tables can be generated, each look-up table for a different color temperature of the light source. For example, two LUTs can be generated according to two extreme temperatures relevant to color imaging, one at 2500 K and one at 6500 K. In phase two, the color temperature of a color in the image data can be estimated at run time according to the automatic white balance (AWB) result, and the final correction gain value can be calculated by weighting the correction gain values from the two LUTs according to the distance of the estimated color temperature from the reference color temperatures. It is also possible to obtain the color temperature from manual white balance information, if such information is used. It is preferable that the color temperature and the exposure time are chosen so that all the color components of the image sensor are exposed to approximately 80% of the dynamic range of the sensor for the pixels in the middle area of the sensor, for example.
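The following is a minimal sketch of the run-time blending just described, assuming two reference LUTs calibrated at 2500 K and 6500 K and a simple linear weighting by color temperature distance; the function and parameter names are illustrative, not part of the described embodiment.

```python
import numpy as np

def blend_luts(lut_low, lut_high, estimated_cct, ref_low=2500.0, ref_high=6500.0):
    """Weight the correction gains of two reference LUTs by the distance of the
    estimated color temperature from the reference color temperatures."""
    # Clamp the estimate to the calibrated range, then blend linearly.
    cct = min(max(estimated_cct, ref_low), ref_high)
    w_high = (cct - ref_low) / (ref_high - ref_low)
    return (1.0 - w_high) * np.asarray(lut_low) + w_high * np.asarray(lut_high)
```

For instance, an AWB estimate of 4500 K would yield gains roughly halfway between the two reference tables.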
At each sampling location, a correction gain value is calculated for each color component in that location based on the pixel value at the sampling location. The pixel values at a plurality of neighboring pixels within a sampling window are also used in the calculation in order to alleviate the noise effects. Thus, the gain value of a color component at a sampling location is calculated based on the average pixel value of the color component within the sampling window. Let the average pixel value of a color component (i) at the sampling location (x,y) be avg(i,x,y) and the average pixel value of the same color component at the sampling location in the center of the image be avg_ref(i); then the gain value of the color component, g(i,x,y), at the sampling location is equal to avg_ref(i)/avg(i,x,y). The gain value for the pixel in the center of the image is 1.

Figure 1 illustrates a system for generating a look-up table for vignetting correction carried out in the first phase of the present invention. As shown in Figure 1, a planar light source with substantially uniform brightness within a test area is used as an imaging target. A color temperature control module is used to control the color temperature of the light source. An imaging lens and an image sensor that need vignetting elimination are used to capture the image of the test area. A sampling control module is used to specify the sampling grid location at which a gain value is calculated. An LUT generator is used to calculate the average gain value over the sampling window surrounding the sampling grid location. The size of the window can be established based on the sparsity of the grid.
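As a concrete illustration of this calibration step, the sketch below builds a sparse gain LUT for one CFA color plane of a flat-field test image, using a (2*DSF) x (2*DSF) averaging window around each grid location and the center average as the reference. It is a simplified sketch; the clipping of windows at the image borders and the function names are assumptions rather than part of the described system.

```python
import numpy as np

def build_gain_lut(plane, dsf):
    """Build a sparse vignetting-gain LUT for one color plane of a
    flat-field test image.  Grid locations are placed every `dsf` pixels
    (including the image borders), and each gain is the ratio of the
    average pixel value at the image center to the local window average."""
    h, w = plane.shape
    ys = np.arange(0, h + 1, dsf)      # grid rows, 0 .. h
    xs = np.arange(0, w + 1, dsf)      # grid columns, 0 .. w

    def window_mean(cy, cx):
        # Average over a (2*dsf) x (2*dsf) window, clipped at the borders.
        y0, y1 = max(cy - dsf, 0), min(cy + dsf, h)
        x0, x1 = max(cx - dsf, 0), min(cx + dsf, w)
        return plane[y0:y1, x0:x1].mean()

    avg = np.array([[window_mean(cy, cx) for cx in xs] for cy in ys])
    avg_ref = window_mean(h // 2, w // 2)   # reference average at the center
    return avg_ref / avg                    # g = avg_ref / avg, center gain ~ 1
```

Averaging two or more test images before calling such a routine reduces noise and brightness fluctuation, as noted above.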
In the LUT generation phase, the image processing software is tuned for a specific imaging lens and a specific image sensor model. Alternatively, the LUT is generated for each sensor and lens in production testing. As such, image quality can be further improved because the variation between sensor samples is also taken into account. In a camera system where the lens can be adjusted for automatic focusing or for optical zoom purposes, two or more LUTs can be generated according to the lens locations. As shown in Figure 1, a lens control is used to adjust the lens locations.

Figure 2 shows a system for correcting a vignetting effect in an image using a look-up table, carried out in the second phase of the present invention. As shown in Figure 2, the image formed by the imaging lens on the image sensor is read out as image data. This image data is conveyed to a vignetting correction module for run-time correction, where the pixel value is multiplied by the correction gain value at that pixel in order to yield a corrected pixel value. The correction gain value at the pixel is calculated by a gain value computation module by interpolating a gain value based on the correction gains, as provided by the LUT, of the four surrounding sampling locations, weighting each gain value on the basis of the distance of the pixel from that sampling location. A lens position detector is used to select the LUTs for that position. A color temperature information module may be used to select the LUTs according to the color temperature of the image. Preferably, the gain value computation module comprises a software application product having program codes embedded in a computer readable medium for carrying out gain value correction.

According to the present invention, the number of correction gain values, or the number of entries in the LUT, is reduced by sampling the vignetting effect on the test image at a plurality of locations on a sparse grid, taking into account the different color filter array (CFA) components. The number of correction gains is reduced by a down-scale factor (DSF). In one embodiment of the present invention, the same DSF is applied to both the x-direction and the y-direction of the pixel array in the image sensor. The DSF is defined according to the image size. For example, the DSF can be set equal to the array width divided by a number, truncated to the closest power of two. In an image of 1152x864 (=995,328) pixels, the DSF can be set equal to 1152/36, or 32, for example. As such, the image area is effectively divided into 36x27 (=972) square zones, each zone having 32x32 pixels. The square zone division is schematically shown in Figure 3A. In another embodiment, the DSF in the x direction is different from the DSF in the y direction. As such, the image area is effectively divided into a plurality of rectangular zones, as shown in Figure 3B. In a different embodiment, there are more zones in the border area than in the middle section of the image, as shown in Figure 3C.
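For the numerical example above, the DSF and zone grid can be derived as in the following sketch; the helper name and the reading of "truncated to the closest power of two" as rounding down to a power of two are assumptions made for illustration.

```python
def down_scale_factor(array_width, zones_across=36):
    """Array width divided by the target zone count, truncated down to a
    power of two (illustrative reading of the example in the text)."""
    raw = array_width // zones_across    # 1152 // 36 = 32
    dsf = 1
    while dsf * 2 <= raw:
        dsf *= 2
    return dsf

width, height = 1152, 864
dsf = down_scale_factor(width)           # 32
zones = (width // dsf, height // dsf)    # (36, 27) square zones of 32x32 pixels
```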
Each of the zones as shown in Figures 3A-3C is an image area bounded by four surrounding sampling locations. As shown in Figure 4, zone Z1 is defined by four sampling locations A, B, C and D. However, the sampling locations C and D are also used to define zone Zm+1, and the sampling location D is used to define zone Zm+2. The sampling locations B and D are further used to define zone Z2.
If the correction gain of a pixel within a zone is computed according to the gain values of the four corner sampling locations of the zone, and adjacent zones share their corner sampling locations (see Figure 4), the number of entries in a LUT is equal to the number of zones plus the additional sampling locations along the borders of the image area. In an image area that is divided into 36x27 zones, the number of sampling locations is (36x27 + 36 + 27 + 1) = 1036. If each gain value can be fitted in one byte, then the LUT is 1036 bytes in size. Moreover, the magnitude of the correction gain can be limited, if so desired, in order to fit the correction in one byte, for example, when a fixed-point number is used to represent the correction gain and the fractional part cannot be reduced without compromising the accuracy. This approach also limits the increase in noise that results from high gain values in vignetting elimination.
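The entry count and a possible one-byte fixed-point gain encoding can be sketched as follows; the Q3.5 layout (three integer bits, five fractional bits, limiting gains to below 8.0) is an assumption used only to illustrate the one-byte constraint discussed above.

```python
zones_x, zones_y = 36, 27
entries = (zones_x + 1) * (zones_y + 1)   # 37 * 28 = 1036 sampling locations

FRAC_BITS = 5                             # assumed Q3.5 fixed-point layout

def encode_gain(gain):
    """Store a correction gain in one byte, clamping at the representable maximum."""
    return min(int(round(gain * (1 << FRAC_BITS))), 255)

def decode_gain(code):
    return code / float(1 << FRAC_BITS)
```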
Because of the relatively small size of the LUT, it is possible to have two or more LUTs for each color component in each lens position.
The DSF can also be used to establish the size of a sampling window. For example, the sampling window size can be 2*DSF by 2*DSF.
Figure 6 shows the relationship between a pixel within a zone and the four corners of the zone and how the gain value is computed for that pixel for vignetting correction. At run time, the vignetting effect is eliminated from each pixel by multiplying the pixel value by the gain value g(i,x,y) for the color component i. For simplicity, the notation i for the color component is omitted below. Let A, B, C, D be the sampling locations at the top left corner, top right corner, lower left corner, and lower right corner, respectively, and let g(A), g(B), g(C) and g(D) be the gain values at those corner sampling locations for a color component. Then, the gain value for the pixel (x,y) is
g(x,y) = [X2*Y2*g(A) + X2*Y1*g(B) + X!*Y2*g(C) +X1*Yi*g(D)]/(Xmax*Ymax)
where
X1 and X2 are the horizontal distances from the pixel (x,y) to the left-hand and right-hand corners, respectively, Y1 and Y2 are the vertical distances from the pixel (x,y) to the top and bottom corners, respectively, Xmax = X1+X2, and Ymax = Y1+Y2.
It should be noted that each sampling location corresponds to multiple color components, and each color component in a sampling location corresponds to multiple pixels in the neighborhood of that sampling location.
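A minimal run-time sketch of this interpolation for one color plane is given below, assuming a gain LUT laid out as in the earlier calibration sketch (one gain per grid location, grid spacing DSF); the names are illustrative.

```python
def correct_pixel(value, x, y, lut, dsf):
    """Apply the bilinearly interpolated vignetting gain to one pixel of one
    color plane.  `lut[row, col]` holds the gains at the sampling grid
    locations, spaced `dsf` pixels apart in both directions."""
    zx, zy = x // dsf, y // dsf            # zone containing the pixel
    x1, y1 = x - zx * dsf, y - zy * dsf    # distances to the left / top corners
    x2, y2 = dsf - x1, dsf - y1            # distances to the right / bottom corners
    gA, gB = lut[zy, zx],     lut[zy, zx + 1]      # top-left, top-right
    gC, gD = lut[zy + 1, zx], lut[zy + 1, zx + 1]  # lower-left, lower-right
    gain = (x2 * y2 * gA + x1 * y2 * gB +
            x2 * y1 * gC + x1 * y1 * gD) / float(dsf * dsf)
    return value * gain
```

In a full Bayer pipeline this lookup would be repeated for each CFA component, each with its own gains, as noted above.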
In a camera system where the lens or lenses are adjustable for automatic focusing or for optical zoom purposes, the gain value at each pixel can also be adjusted according to the detected lens positions in relation to a reference lens position. The vignetting correction method and system of the present invention can be used in a digital camera or an electronic device having digital imaging capability.

As shown in Figure 7, the electronic device 1 has a housing 10 to accommodate a plurality of electronic components. The electronic components include an image display module 20, an imaging lens 30, and an image sensor 32 disposed at the image plane of the imaging lens 30. An image formed by the imaging lens 30 on the image sensor 32 can be read out as raw image data 42. The raw image data is processed by an image processing module 43, and the processed image data is conveyed to a vignetting correction module 44 so that the pixel values in the image data are adjusted by corresponding gain values for vignetting reduction. The corrected image data is further processed by another image processing module 47 so that the image data can be displayed on the display module 20 or transmitted through an antenna 52 via a transceiver 50. The vignetting correction process is controlled by an ASIC 60. The electronic device 1 may have a lens control 34 for adjusting the focus of the imaging lens 30 or for adjusting the lens position for optical zoom purposes. The vignetting correction module 44 comprises one or more LUTs, each LUT having a plurality of reference values, wherein the reference values are representative of gain values at a plurality of discrete sampling locations distributed over the image sensor, including one or more locations in a center area of the image sensor, and the gain values compensate for brightness falling-off by the lens at the sampling locations as compared to said one or more locations in the center area.
The electronic device 1 can be a mobile terminal, a PDA or the like.
The major advantages of the present invention include the following:
- Simple spatial relationship between the LUT and the image;
- Relatively low computation power requirement at run time, substantially independent of the complexity of the vignetting effect that needs to be corrected;
- Reasonable memory requirement;
- Memory requirement does not increase significantly as the image size increases, because a larger down-scale factor may be used for a larger image.
It should be noted that the generation of an LUT is typically carried out using raw test images, but it can also be carried out using an image that has been suitably processed. Furthermore, an LUT can be indexed for a viewfinder image, for example, so that the sampling locations (x, y) do not correspond to actual pixel positions. Rather, the locations (x, y) are relative locations; for example, they can be indexed between [-1.0, 1.0] with a midpoint at (0, 0). Moreover, it is possible that only a portion of the sensor area is used to produce an image, such that pixels in some rows and columns near the sensor borders are cropped out. In that case, the indexing of the LUT must take image cropping into consideration.
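A small sketch of the cropping-aware relative indexing described above, mapping a pixel of a cropped output image to [-1.0, 1.0] coordinates on the full sensor; the parameter names are hypothetical.

```python
def to_relative_coords(x, y, crop_x0, crop_y0, full_w, full_h):
    """Map pixel (x, y) of a cropped image, whose top-left corner sits at
    (crop_x0, crop_y0) on the full sensor array, to relative coordinates in
    [-1.0, 1.0] with the sensor center at (0, 0), for indexing a relative LUT."""
    sensor_x = crop_x0 + x
    sensor_y = crop_y0 + y
    rel_x = 2.0 * sensor_x / (full_w - 1) - 1.0
    rel_y = 2.0 * sensor_y / (full_h - 1) - 1.0
    return rel_x, rel_y
```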
Thus, although the invention has been described with respect to one or more embodiments thereof, it will be understood by those skilled in the art that the foregoing and various other changes, omissions and deviations in the form and detail thereof may be made without departing from the scope of this invention.

Claims

What is claimed is:
1. A method for correcting brightness non-uniformity in an image formed on an image sensor through a lens, the image sensor having a number of pixels, each having a pixel value of the formed image, said method characterized by: forming at least one image of a target having substantially uniform brightness; obtaining pixel values at a plurality of discrete sampling locations distributed over said at least one image of the target, the sampling locations including one or more locations in a center area of said at least one image; computing gain values at said plurality of sampling locations based on the pixel values at said plurality of sampling locations as compared to the pixel values at said one or more locations in the center area; and generating a look-up table having a plurality of reference values based on the gain values so as to adjust the pixel values of the formed image according to the reference values in the look-up table.
2. The method of claim 1, wherein the lens is operable in a plurality of lens positions for image forming, the lens positions including at least a first lens position and a second lens position, and wherein said forming step is carried out when the lens is operated in the first lens position, said method further characterized by: forming at least another image of the target when the lens is operated in the second lens position, obtaining pixel values at said plurality of discrete sampling locations on said at least another image of the target, the sampling locations including one or more locations in a center area of said at least another image; computing further gain values at said plurality of sampling locations based on the pixel values at said plurality of sampling locations on said at least another image as compared to the pixel values at said one or more locations in the center area of said at least another image; and generating a further look-up table having a plurality of further reference values based on the further gain values so as to adjust the pixel values of the formed image at the second lens position according to the further reference values in the further look-up table.
3. The method of claim 1, wherein the brightness of the target is adjustable in a plurality of color temperatures including at least a first color temperature and a second color temperature, and wherein said forming step is carried out when the target is adjusted in the first color temperature, said method further characterized by: forming at least another image of the target when the target is adjusted in the second color temperature; obtaining pixel values at said plurality of discrete sampling locations on said at least another image of the target, the sampling locations including one or more locations in a center area of said at least another image; computing further gain values at said plurality of sampling locations based on the pixel values at said plurality of sampling locations on said at least another image as compared to the pixel values at said one or more locations in the center area of said at least another image; and generating a further look-up table having a plurality of further reference values based on the further gain values so that the pixel values of the formed image can be adjusted according to the reference values in the look-up table or the further reference values in the further look-up table substantially based on the color temperature of the formed image.
4. The method of claim 1, characterized in that the discrete sampling locations are distributed over a grid on said at least one image such that each sampling location is separated from a neighboring sampling location by a plurality of pixels.
5. The method of claim 4, characterized in that the grid substantially divides said at least one image into a plurality of sampling zones, each zone having a plurality of pixels and a plurality of sampling locations, and that each sampling location has a reference value in the look-up table so as to allow the pixel values of the pixels in said zone to be adjusted according to the reference values associated with the sampling locations in said zone.
6. The method of claim 4, characterized in that the grid substantially divides said at least one image into a plurality of rectangular sampling zones, each zone having a plurality of pixels and at least four sampling locations substantially located at four corners of said zone, and that each sampling location has a reference value in the look-up table so as to allow the pixel values of the pixels in said zone to be adjusted according to the reference values associated with the sampling locations in said zone.
7. The method of claim 6, characterized in that the rectangular sampling zones are substantially square in shape.
8. The method of claim 6, characterized in that the rectangular sampling zones are substantially equal in size.
9. The method of claim 1, characterized in that said at least one image has a plurality of bordering areas surrounding the center area, and that the sampling locations in the center area are distributed in a first density and the sampling locations in the bordering areas are distributed at more widely spaced intervals than the first density.
10. A brightness correction system for use with an imaging system, the imaging system comprising a lens and an image sensor for forming an image thereon through the lens, the image sensor having a plurality of pixels, each pixel having a pixel value of the formed image, said brightness correction system characterized by: a look-up table having a plurality of reference values, wherein the reference values are representative of gain values at a plurality of discrete sampling locations distributed over the image sensor including one or more locations in a center area of the image sensor, and the gain values compensate for brightness falling-off by the lens at the sampling locations as compared to said one or more locations in the center area; and a gain value computation module operatively connected to the look-up table for adjusting the pixel values based on the reference values in the look-up table.
11. The brightness correction system of claim 10, characterized in that the lens is operable in a plurality of lens positions for image forming, the lens positions including at least a first lens position and a second lens position, and that the reference values are representative of the gain values to compensate for the brightness falling-off by the lens operated in the first lens position, said brightness correction system further characterized by: a further look-up table having a plurality of further reference values, the further reference values representative of the gain values to compensate for the brightness falling-off by the lens operated in the second lens position, wherein the further look-up table is operatively connected to the gain value computation module for adjusting the pixel values based on the further reference values in the further look-up table when the lens is operated in the second lens position.
12. The brightness correction system of claim 10, characterized in that the brightness of the target is adjustable in a plurality of color temperatures including at least a first color temperature and a second color temperature, and that the reference values are representative of the gain values to compensate for the brightness falling-off by the lens in the first color temperature, said brightness correction system further characterized by: a further look-up table having a plurality of further reference values operatively connected to the gain value computation module, wherein the further reference values are representative of gain values to compensate for the brightness falling-off by the lens in the second color temperature.
13. The brightness correction system of claim 10, characterized in that the discrete sampling locations are distributed over a grid on said at least one image such that each sampling location is separated from a neighboring sampling location by a plurality of pixels.
14. The brightness correction system of claim 13, characterized in that the grid substantially divides said at least one image into a plurality of sampling zones, each zone having a plurality of pixels and a plurality of sampling locations, and that each sampling location has a reference value in the look-up table so as to allow the pixel values of the pixels in said zone to be adjusted according to the reference values associated with the sampling locations in said zone.
15. The brightness correction system of claim 11, further characterized by a lens position detection device operatively connected to the imaging system for detecting the lens positions, wherein the lens position detection device provides information indicative of the detected lens positions to the gain value computation module so as to allow the gain value computation module to adjust the pixel values according to the reference values in the look-up table or the further reference values in the further look-up table based on the information.
16. The brightness correction system of claim 10, characterized in that the gain value computation module comprises a software application product having a plurality of program codes embedded in a computer readable medium, said program codes carrying out the steps of: reading a pixel value at a pixel location of the formed image; computing a gain value based on a plurality of reference values and on the distances between said pixel location and the sampling locations associated with said reference values; and adjusting the pixel value at said pixel location based on said computed gain value.
17. The brightness correction system of claim 13, characterized in that the grid also divides the formed image into a plurality of image zones, each image zone having a plurality of reference values, and that the gain value computation module comprises a software application product having a plurality of program codes embedded in a computer readable medium, said program codes carrying out the steps of: reading a pixel value at a pixel location in an image zone of the formed image; computing a gain value based on a plurality of reference values in said image zone and on the distances between said pixel location and the sampling locations associated with said reference values in said image zone; and adjusting the pixel value at said pixel location based on said computed gain value.
18. An imaging system characterized by: a lens; an imaging sensor for forming an image thereon through the lens, the image sensor having a plurality of pixels, each pixel having a pixel value of the formed image; a look-up table having a plurality of reference values, wherein the reference values are representative of gain values at a plurality of discrete sampling locations distributed over the image sensor including one or more locations in a center area of the image sensor, and the gain values compensate for brightness falling-off by the lens at the sampling locations as compared to said one or more locations in the center area; and a gain value computation module operatively connected to the look-up table for adjusting the pixel values based on the reference values in the look-up table.
19. The imaging system of claim 18, further characterized by: a lens position adjusting mechanism for adjusting the lens in a plurality of lens positions for image forming, the lens positions including at least a first lens position and a second lens position, wherein the reference values are representative of the gain values to compensate for the brightness falling-off by the lens operated in the first lens position; and a further look-up table having a plurality of further reference values, the further reference values representative of the gain values to compensate for the brightness falling-off by the lens operated in the second lens position, wherein the further look-up table is operatively connected to the gain value computation module for adjusting the pixel values based on the further reference values in the further look-up table when the lens is operated in the second lens position.
20. The imaging system of claim 18, characterized in that the brightness of the target is adjustable in a plurality of color temperatures including at least a first color temperature and a second color temperature, and that the reference values are representative of the gain values to compensate for the brightness falling-off by the lens in the first color temperature, said imaging system further characterized by: a further look-up table having a plurality of further reference values operatively connected to the gain value computation module, wherein the further reference values are representative of gain values to compensate for the brightness falling-off by the lens in the second color temperature.
21. The imaging system of claim 18, characterized in that the discrete sampling locations are distributed over a grid on said at least one image such that each sampling location is separated from a neighboring sampling location by a plurality of pixels.
22. The imaging system of claim 21, characterized in that the grid substantially divides said at least one image into a plurality of sampling zones, each zone having a plurality of pixels and a plurality of sampling locations, and that each sampling location has a reference value in the look-up table so as to allow the pixel values of the pixels in said zone to be adjusted according to the reference values associated with the sampling locations in said zone.
23. The imaging system of claim 19, further characterized by a lens position detection device operatively connected to the imaging system for detecting the lens positions, wherein the lens position detection device provides information indicative of the detected lens positions to the gain value computation module so as to allow the gain value computation module to adjust the pixel values according to the reference values in the look-up table or the further reference values in the further look-up table based on the information.
24. An electronic device characterized by: an imaging system; and a display module operatively connected to the imaging system, wherein the imaging system comprises: a lens; an imaging sensor for forming an image thereon through the lens, the image sensor having a plurality of pixels, each pixel having a pixel value of the formed image; a look-up table having a plurality of reference values, wherein the reference values are representative of gain values at a plurality of discrete sampling locations distributed over the image sensor including one or more locations in a center area of the image sensor, and the gain values compensate for brightness falling-off by the lens at the sampling locations as compared to said one or more locations in the center area; and a gain value computation module operatively connected to the look-up table for adjusting the pixel values based on the reference values in the look-up table.
25. The electronic device of claim 24, further characterized by: a lens position adjusting mechanism for adjusting the lens in a plurality of lens positions for image forming, the lens positions including at least a first lens position and a second lens position, wherein the reference values are representative of the gain values to compensate for the brightness falling-off by the lens operated in the first lens position; and a further look-up table having a plurality of further reference values, the further reference values representative of the gain values to compensate for the brightness falling-off by the lens operated in the second lens position, wherein the further look-up table is operatively connected to the gain value computation module for adjusting the pixel values based on the further reference values in the further look-up table when the lens is operated in the second lens position.
26. The electronic device of claim 24, characterized in that the brightness of the target is adjustable in a plurality of color temperatures including at least a first color temperature and a second color temperature, and that the reference values are representative of the gain values to compensate for the brightness falling-off by the lens in the first color temperature, said imaging system further comprising: a further look-up table having a plurality of further reference values operatively connected to the gain value computation module, wherein the further reference values are representative of gain values to compensate for the brightness falling-off by the lens in the second color temperature.
27. The electronic device of claim 24, characterized in that the discrete sampling locations are distributed over a grid on said at least one image such that each sampling location is separated from a neighboring sampling location by a plurality of pixels.
28. The electronic device of claim 27, characterized in that the grid substantially divides said at least one image into a plurality of sampling zones, each zone having a plurality of pixels and a plurality of sampling locations, and that each sampling location has a reference value in the look-up table so as to allow the pixel values of the pixels in said zone to be adjusted according to the reference values associated with the sampling locations in said zone.
29. The electronic device of claim 24, wherein the electronic device comprises a mobile terminal.
30. An imaging apparatus for generating brightness correction images using the look-up table generated according to claim 1.
PCT/IB2005/003057 2005-10-13 2005-10-13 Method and system for vignetting elimination in digital image WO2007042853A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/IB2005/003057 WO2007042853A1 (en) 2005-10-13 2005-10-13 Method and system for vignetting elimination in digital image
CN2005800518289A CN101292519B (en) 2005-10-13 2005-10-13 Dark corner eliminating method and system in digital image
EP05819749A EP1946542A4 (en) 2005-10-13 2005-10-13 Method and system for vignetting elimination in digital image
JP2008535109A JP2009512303A (en) 2005-10-13 2005-10-13 Method and apparatus for removing vignetting in digital images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2005/003057 WO2007042853A1 (en) 2005-10-13 2005-10-13 Method and system for vignetting elimination in digital image

Publications (1)

Publication Number Publication Date
WO2007042853A1 true WO2007042853A1 (en) 2007-04-19

Family

ID=37942348

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2005/003057 WO2007042853A1 (en) 2005-10-13 2005-10-13 Method and system for vignetting elimination in digital image

Country Status (4)

Country Link
EP (1) EP1946542A4 (en)
JP (1) JP2009512303A (en)
CN (1) CN101292519B (en)
WO (1) WO2007042853A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010123499A1 (en) 2009-04-22 2010-10-28 Hewlett-Packard Development Company, L.P. Spatially-varying spectral response calibration data
US8085391B2 (en) 2007-08-02 2011-12-27 Aptina Imaging Corporation Integrated optical characteristic measurements in a CMOS image sensor
US8130292B2 (en) 2008-12-31 2012-03-06 Aptina Imaging Corporation Scene illumination adaptive lens shading correction for imaging devices
CN101534450B (en) * 2008-03-11 2012-03-28 鸿富锦精密工业(深圳)有限公司 Image retrieving device, and unit and method thereof for establishing gain form
WO2012164161A1 (en) * 2011-06-01 2012-12-06 Nokia Corporation Method and apparatus for image signal processing
EP2395739A3 (en) * 2010-06-11 2012-12-26 Samsung Electronics Co., Ltd. Apparatus and method for creating lens shading compensation table suitable for photography environment
CN102905077A (en) * 2012-10-09 2013-01-30 深圳市掌网立体时代视讯技术有限公司 Image vignetting brightness regulating method and device
EP3096512A1 (en) * 2015-05-18 2016-11-23 Axis AB Method and camera for producing an image stabilized video

Families Citing this family (14)

Publication number Priority date Publication date Assignee Title
US8823841B2 (en) * 2012-06-20 2014-09-02 Omnivision Technologies, Inc. Method and apparatus for correcting for vignetting in an imaging system
TWI549511B (en) * 2014-03-19 2016-09-11 智原科技股份有限公司 Image sensing apparatus and color-correction matrix correcting method and look-up table establishing method
CN105100550A (en) * 2014-04-21 2015-11-25 展讯通信(上海)有限公司 Shadow correction method and device and imaging system
CN105282401B (en) * 2014-06-13 2018-09-14 聚晶半导体股份有限公司 Video capturing device and its vignetting compensation method
TWI562636B (en) * 2014-06-16 2016-12-11 Altek Semiconductor Corp Image capture apparatus and image compensating method thereof
CN105306913B (en) * 2014-06-16 2017-05-24 聚晶半导体股份有限公司 image obtaining device and image compensation method thereof
CN104363389A (en) * 2014-11-11 2015-02-18 广东中星电子有限公司 Lens vignetting compensation method and system
CN104581098B (en) * 2014-12-01 2016-09-28 北京思比科微电子技术股份有限公司 A kind of adaptive processing method of lens shading
US10708526B2 (en) * 2015-04-22 2020-07-07 Motorola Mobility Llc Method and apparatus for determining lens shading correction for a multiple camera device with various fields of view
WO2017063045A1 (en) * 2015-10-15 2017-04-20 Planet Intellectual Property Enterprises Pty Ltd Device for reading an ivd assay
CN106101588B (en) * 2016-07-08 2019-05-14 成都易瞳科技有限公司 The compensation method of panoramic picture gradual halation phenomena
CN107222681B (en) * 2017-06-30 2018-11-30 维沃移动通信有限公司 A kind of processing method and mobile terminal of image data
GB2578329B (en) 2018-10-24 2022-11-09 Advanced Risc Mach Ltd Retaining dynamic range using vignetting correction and intensity compression curves
CN111953953A (en) * 2019-05-17 2020-11-17 北京地平线机器人技术研发有限公司 Method and device for adjusting pixel brightness and electronic equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001275029A (en) * 2000-03-28 2001-10-05 Minolta Co Ltd Digital camera, its image signal processing method and recording medium
US6940546B2 (en) * 2001-04-04 2005-09-06 Eastman Kodak Company Method for compensating a digital image for light falloff while minimizing light balance change
US7391450B2 (en) * 2002-08-16 2008-06-24 Zoran Corporation Techniques for modifying image field data

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6833862B1 (en) * 1999-06-30 2004-12-21 Logitech, Inc. Image sensor based vignetting correction
US20030194143A1 (en) * 2002-03-27 2003-10-16 Seiji Iida Image correction according to transmission characteristics of image sensing optical system
US20030234864A1 (en) * 2002-06-20 2003-12-25 Matherson Kevin J. Method and apparatus for producing calibration data for a digital camera
US20040041919A1 (en) * 2002-08-27 2004-03-04 Mutsuhiro Yamanaka Digital camera
EP1447977A1 (en) * 2003-02-12 2004-08-18 Dialog Semiconductor GmbH Vignetting compensation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
See also references of EP1946542A4 *
YU W ET AL: "Vignetting Distortion Correction Method for High Quality Digital Imaging.", PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, 2004, IEEE COMPUT SOC., vol. 3, 2004, pages 666 - 669, XP010724749 *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8085391B2 (en) 2007-08-02 2011-12-27 Aptina Imaging Corporation Integrated optical characteristic measurements in a CMOS image sensor
CN101534450B (en) * 2008-03-11 2012-03-28 鸿富锦精密工业(深圳)有限公司 Image retrieving device, and unit and method thereof for establishing gain form
US8130292B2 (en) 2008-12-31 2012-03-06 Aptina Imaging Corporation Scene illumination adaptive lens shading correction for imaging devices
US8976240B2 (en) 2009-04-22 2015-03-10 Hewlett-Packard Development Company, L.P. Spatially-varying spectral response calibration data
EP2422524A1 (en) * 2009-04-22 2012-02-29 Hewlett-Packard Development Company, L.P. Spatially-varying spectral response calibration data
US20120133765A1 (en) * 2009-04-22 2012-05-31 Kevin Matherson Spatially-varying spectral response calibration data
WO2010123499A1 (en) 2009-04-22 2010-10-28 Hewlett-Packard Development Company, L.P. Spatially-varying spectral response calibration data
EP2422524A4 (en) * 2009-04-22 2014-05-07 Hewlett Packard Development Co Spatially-varying spectral response calibration data
EP2395739A3 (en) * 2010-06-11 2012-12-26 Samsung Electronics Co., Ltd. Apparatus and method for creating lens shading compensation table suitable for photography environment
WO2012164161A1 (en) * 2011-06-01 2012-12-06 Nokia Corporation Method and apparatus for image signal processing
US8588523B2 (en) 2011-06-01 2013-11-19 Nokia Corporation Method and apparatus for image signal processing
CN102905077A (en) * 2012-10-09 2013-01-30 深圳市掌网立体时代视讯技术有限公司 Image vignetting brightness regulating method and device
EP3096512A1 (en) * 2015-05-18 2016-11-23 Axis AB Method and camera for producing an image stabilized video
US9712747B2 (en) 2015-05-18 2017-07-18 Axis Ab Method and camera for producing an image stabilized video
KR101755608B1 (en) 2015-05-18 2017-07-19 엑시스 에이비 Method and camera for producing an image stabilized video
TWI699119B (en) * 2015-05-18 2020-07-11 瑞典商安訊士有限公司 Method and camera for producing an image stabilized video

Also Published As

Publication number Publication date
CN101292519B (en) 2010-05-12
CN101292519A (en) 2008-10-22
JP2009512303A (en) 2009-03-19
EP1946542A4 (en) 2010-08-11
EP1946542A1 (en) 2008-07-23

Similar Documents

Publication Publication Date Title
EP1946542A1 (en) Method and system for vignetting elimination in digital image
US8106976B2 (en) Peripheral light amount correction apparatus, peripheral light amount correction method, electronic information device, control program and readable recording medium
EP1711880B1 (en) Techniques of modifying image field data by extrapolation
EP3611916B1 (en) Imaging control method and electronic device
EP1700268B1 (en) Techniques for modifying image field data
US8619183B2 (en) Image pickup apparatus and optical-axis control method
CN110473159B (en) Image processing method and device, electronic equipment and computer readable storage medium
US20100295956A1 (en) Imaging apparatus and shake correcting method
US20170332000A1 (en) High dynamic range light-field imaging
CN110365894A (en) The method and relevant apparatus of image co-registration in camera system
US20080278613A1 (en) Methods, apparatuses and systems providing pixel value adjustment for images produced with varying focal length lenses
US8885075B2 (en) Image processing device, image processing method, and solid-state imaging device
CN104641276A (en) Imaging device and signal processing method
CN105306913A (en) Image obtaining device and image compensation method thereof
US20090190006A1 (en) Methods, systems and apparatuses for pixel signal correction using elliptical hyperbolic cosines
CN113747066A (en) Image correction method, image correction device, electronic equipment and computer-readable storage medium
EP4117282A1 (en) Image sensor, imaging apparatus, electronic device, image processing system and signal processing method
JP4993670B2 (en) Imaging apparatus and control method thereof
KR101854355B1 (en) Image correction apparatus selectively using multi-sensor
CN112866547B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN112866554B (en) Focusing method and device, electronic equipment and computer readable storage medium
CN113676659B (en) Image processing method and device, terminal and computer readable storage medium
KR101065223B1 (en) Image sensor and method for correcting lense shading
CN116962855A (en) Image compensation method, device, electronic equipment and readable storage medium
CN117835054A (en) Phase focusing method, device, electronic equipment, storage medium and product

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase (Ref document number: 200580051828.9; Country of ref document: CN)
121 EP: The EPO has been informed by WIPO that EP was designated in this application
WWE Wipo information: entry into national phase (Ref document number: 2005819749; Country of ref document: EP)
WWE Wipo information: entry into national phase (Ref document number: 2008535109; Country of ref document: JP)
NENP Non-entry into the national phase (Ref country code: DE)
WWP Wipo information: published in national office (Ref document number: 2005819749; Country of ref document: EP)