US9143762B2 - Camera module and image recording method - Google Patents

Camera module and image recording method

Info

Publication number
US9143762B2
US9143762B2
Authority
US
United States
Prior art keywords
camera module
sub
image
optical system
correction coefficient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US13/049,455
Other versions
US20110292182A1 (en)
Inventor
Takayuki Ogasahara
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Corp
Original Assignee
Toshiba Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Corp filed Critical Toshiba Corp
Assigned to KABUSHIKI KAISHA TOSHIBA reassignment KABUSHIKI KAISHA TOSHIBA ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: OGASAHARA, TAKAYUKI
Publication of US20110292182A1 publication Critical patent/US20110292182A1/en
Application granted granted Critical
Publication of US9143762B2 publication Critical patent/US9143762B2/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20: Image signal generators
    • H04N13/204: Image signal generators using stereoscopic image cameras
    • H04N13/239: Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H04N13/0239, H04N13/0022, H04N13/0296 (legacy classification codes)
    • H04N13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106: Processing image signals
    • H04N13/128: Adjusting depth or disparity
    • H04N13/296: Synchronisation thereof; Control thereof


Abstract

According to one embodiment, a camera module includes a first sub-camera module, a second sub-camera module, and an imaging circuit serving as a position correcting unit. The position correcting unit corrects a position of an image at a cross point of imaging lenses. The position correcting unit corrects the position of the image acquired through the image-capture by the second sub-camera module.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
This application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2010-121758, filed on May 27, 2010; the entire contents of which are incorporated herein by reference.
FIELD
Embodiments described herein relate generally to a camera module and an image recording method.
BACKGROUND
Conventionally, 3D stereovision images have been captured by using either a special 3D camera module or plural ordinary camera modules. However, a special 3D camera module has a problem of increased development cost and production cost. In the method using plural camera modules, variation among the camera modules during production greatly affects image quality. Therefore, in order to acquire a high-definition 3D stereovision image by using plural camera modules, there arise problems of a reduced camera-module yield and a significant increase in the cost of suppressing variation during production.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram illustrating a configuration of a camera module according to a first embodiment;
FIG. 2 is a flowchart for describing a procedure of a signal process by a signal processing unit;
FIGS. 3 to 7 are explanatory views for realizing a 3D stereovision image;
FIG. 8 is a flowchart for describing a procedure of setting a correction coefficient;
FIG. 9 is a view illustrating an example of an adjustment chart;
FIG. 10 is a block diagram illustrating a configuration of a camera module according to a second embodiment;
FIG. 11 is a flowchart illustrating a setting procedure in order to allow images of respective sub-camera modules to agree with one another at a cross point; and
FIG. 12 is a view illustrating an example of a relationship between an extended amount of a lens and a distance to a subject.
DETAILED DESCRIPTION
In general, according to one embodiment, a camera module includes a first sub-camera module, a second sub-camera module, and a position correcting unit. The first sub-camera module includes a first image sensor and a first imaging optical system. The first imaging optical system takes light into the first image sensor. The second sub-camera module includes a second image sensor and a second imaging optical system. The second imaging optical system takes light into the second image sensor. The position correcting unit corrects a position of an image at a cross point formed by the first imaging optical system and the second imaging optical system. When the first sub-camera module, of the first sub-camera module and the second sub-camera module, is defined as a reference, the position correcting unit corrects the position of the image captured by the second sub-camera module.
Exemplary embodiments of a camera module and an image recording method will be explained below in detail with reference to the accompanying drawings. The present invention is not limited to the following embodiments.
FIG. 1 is a block diagram illustrating a configuration of a camera module according to a first embodiment. A camera module includes a first sub-camera module 1, a second sub-camera module 2, and an ISP (image signal processor) 3.
The first sub-camera module 1 includes an imaging lens 11, an IR cut filter 12, a sensor 13, and an imaging circuit 14. The imaging lens 11 functions as the first imaging optical system that takes light from a subject into the sensor 13. The IR cut filter 12 removes infrared light from the light taken by the imaging lens 11. The sensor 13 functions as the first image sensor that converts the light taken by the imaging lens 11 into a signal charge.
The imaging circuit 14 takes sensitivity levels of red (R), green (G), and blue (B) in the order corresponding to a Bayer pattern so as to generate an analog image signal. The imaging circuit 14 also sequentially amplifies the analog image signal with a gain according to an imaging condition, and then, converts the obtained image signal into a digital form from an analog form.
The second sub-camera module 2 includes an imaging lens 21, an IR cut filter 22, a sensor 23, an imaging circuit 24, and an OTP (one time programmable memory) 25. The imaging lens 21 functions as the second imaging optical system that takes light from a subject into the sensor 23. The IR cut filter 22 removes infrared light from the light taken by the imaging lens 21. The sensor 23 functions as the second image sensor that converts the light taken by the imaging lens 21 into a signal charge.
The imaging circuit 24 takes sensitivity levels of R, G, and B in the order corresponding to a Bayer pattern so as to generate an analog image signal. The imaging circuit 24 also sequentially amplifies the analog image signal with a gain according to an imaging condition, and then, converts the obtained image signal into a digital form from an analog form. The OTP 25 stores a correction coefficient for a coordinate conversion of an image.
The ISP 3 includes a first sub-camera module I/F 31, a second sub-camera module I/F 32, an image taking unit 33, a signal processing unit 34, and a driver I/F 35. A RAW image captured by the first sub-camera module 1 is taken into the image taking unit 33 from the first sub-camera module I/F 31. A RAW image captured by the second sub-camera module 2 is taken into the image taking unit 33 from the second sub-camera module I/F 32.
The signal processing unit 34 performs a signal process to the RAW image taken into the image taking unit 33. The driver I/F 35 outputs the image signal, which has been subject to the signal process at the signal processing unit 34, to an unillustrated display driver. The display driver displays the image acquired by the first sub-camera module 1 and the image acquired by the second sub-camera module 2.
FIG. 2 is a flowchart for describing a procedure of the signal process by the signal processing unit. The signal processing unit 34 performs a shading correction on each of the RAW image captured by the first sub-camera module 1 and the RAW image captured by the second sub-camera module 2 (step S1). In the shading correction, luminance unevenness caused by the difference in light quantity between the central part and the peripheral part of the imaging lenses 11 and 21 is corrected.
The signal processing unit 34 performs a noise reduction (step S2) for removing noises such as fixed-pattern noise, dark-current noise, and shot noise, a resolution reconstruction process (step S3), and a contour enhancement process (step S4). Then, the signal processing unit 34 performs a pixel interpolation process (de-mosaicing) on the digital image signal transmitted in the order of the Bayer pattern (step S5). In the de-mosaicing, the photosensitivity level of each missing color component is generated by an interpolation process applied to the image signal obtained by capturing the subject image. The signal processing unit 34 synthesizes the color images of the first sub-camera module 1 and the second sub-camera module 2 by the de-mosaicing.
The signal processing unit 34 performs an automatic white balance control (AWB) on the color image (step S6). The signal processing unit 34 also performs a linear color matrix process (step S7) for obtaining color reproduction, and a gamma correction (step S8) for correcting the color saturation and brightness of the image displayed on a display, etc. The procedure of the signal process described in the present embodiment is only illustrative; other processes may be added, skippable processes may be skipped, the order may be changed, and so on, as necessary. The signal processes of the respective steps may be executed by the first sub-camera module 1 and the second sub-camera module 2, or by the ISP 3, and may be shared among them.
FIGS. 3 to 7 are explanatory views for realizing a 3D stereovision image. For example, it is supposed that a circle α in FIG. 3 is viewed at the same position on a surface A from both the left eye and the right eye of an observer. The circle α is at the position (cross point) where there is no parallax between the left and right eyes. When it is supposed that a rectangle β is present in front of the circle α as seen from the observer, an image β1 of the rectangle β is formed on the surface A on the gaze passing through the rectangle β from the left eye, and an image β2 of the rectangle β is formed on the surface A on the gaze passing through the rectangle β from the right eye, as illustrated in FIG. 4.
When it is supposed that a triangle γ is present behind the circle α as seen from the observer, an image γ1 of the triangle γ is formed on the surface A on the gaze reaching the triangle γ from the left eye, and an image γ2 is formed on the surface A on the gaze reaching the triangle γ from the right eye, as illustrated in FIG. 5. When the respective images β1, β2, γ1, and γ2 are displayed together with the circle α on the surface A as illustrated in FIG. 6, a stereoscopic effect, or a feeling of depth, in which the rectangle β appears in front of the circle α and the triangle γ appears behind the circle α, can be obtained as illustrated in FIG. 3.
As is schematically illustrated in FIG. 7, the camera module captures an image for a right eye and an image for a left eye by the first sub-camera module 1 and the second sub-camera module 2, which are arranged side by side, in order to form a stereovision image. It is desirable that the camera module allows the image obtained from the first sub-camera module 1 and the image obtained from the second sub-camera module 2 to completely agree with each other at the cross point where there is no parallax by the imaging lens 11 and the imaging lens 21.
On the other hand, when there is a great deviation or distortion between the image for the left eye and the image for the right eye at the cross point, due to manufacturing variation in the imaging lenses 11 and 21, mounting errors of the first sub-camera module 1 and the second sub-camera module 2, and the like, it is difficult for the camera module to obtain a high-definition 3D stereovision image. The camera module according to the present embodiment therefore corrects the position of the image at the cross point by performing a coordinate conversion using a correction coefficient that is set beforehand.
FIG. 8 is a flowchart for describing a procedure of setting the correction coefficient. The correction coefficient is set after the first sub-camera module 1 and the second sub-camera module 2 are mounted during the production process of the camera module. In step S11, a distance to a subject for obtaining the cross point is determined. In step S12, an adjustment chart is arranged at the cross point determined in step S11, and the adjustment chart is photographed by the first sub-camera module 1 and the second sub-camera module 2.
FIG. 9 is a view illustrating an example of the adjustment chart. The adjustment chart 40 has plural adjustment markers 41. Each of the adjustment markers 41 is a mark including two mutually perpendicular straight lines, wherein the intersection of the two lines defines a coordinate. The adjustment markers 41 are arranged in a matrix of, for example, five markers in the longitudinal direction by five markers in the lateral direction. It is desirable that, for example, 25 or more adjustment markers 41 are present, but the number of markers is not limited, so long as there are plural markers. Any adjustment markers 41 can be used, so long as they can specify a position on the adjustment chart 40. The shape of the adjustment markers 41 is not particularly limited. The arrangement of the adjustment markers 41 may also be changed as appropriate. For example, when there is a range where particularly high-definition photography is desired, more adjustment markers 41 may be arranged within that range.
In step S13, an image composed of the G signal (hereinafter appropriately referred to as the "G image") among R, G, and B is generated from each of the RAW images acquired from the first sub-camera module 1 and the second sub-camera module 2. For the R and B pixels, a G signal value is generated by interpolating the signal values of the surrounding G pixels. The G image is thus composed of the values detected at the G pixels and the values interpolated at the R and B pixels. When an image is photographed under low illumination, or when the sensitivity of the sensors 13 and 23 is low, the G image may be generated after a noise reduction is executed.
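As a concrete illustration of this step, the sketch below builds a G image from a single Bayer RAW frame with numpy. It assumes an RGGB mosaic layout, and the function name is illustrative rather than taken from the patent.

```python
import numpy as np

def green_image_from_bayer(raw):
    """Build a full-resolution G image from a single-channel Bayer RAW frame.

    Assumes an RGGB mosaic (an assumption; the actual layout depends on the
    sensor). At R and B sites the G value is interpolated as the mean of the
    four neighbouring G pixels; at G sites the detected value is kept.
    """
    h, w = raw.shape
    g = raw.astype(np.float64)
    g_sites = np.zeros((h, w), dtype=bool)
    g_sites[0::2, 1::2] = True  # G pixels on the R rows
    g_sites[1::2, 0::2] = True  # G pixels on the B rows
    # In a Bayer mosaic, the four N/S/E/W neighbours of an R or B site are G.
    padded = np.pad(g, 1, mode='edge')
    neighbour_mean = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                      padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    g[~g_sites] = neighbour_mean[~g_sites]
    return g
```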
In step S14, the coordinates of the respective adjustment markers 41 are calculated from the G images generated in step S13. In step S15, a correction coefficient is calculated from the coordinates of the adjustment markers 41 calculated in step S14. The correction coefficient is calculated with the first sub-camera module 1, of the first sub-camera module 1 and the second sub-camera module 2, being set as the reference. In step S16, the correction coefficient calculated in step S15 is written in the OTP 25 of the second sub-camera module 2. The correction coefficient may instead be written in the ISP 3 at the subsequent stage.
The correction coefficient is assumed to be the coefficient of a matrix operation. It is obtained from the equations described below by the least-square method, for example.
$$Y = kX$$
$$k = Y X^{t} \left[ X X^{t} \right]^{-1}$$
It is supposed that k is the correction coefficient, and Y and X are the coordinate matrices of the adjustment markers 41, wherein Y holds the coordinates calculated from the image by the first sub-camera module 1, and X holds the coordinates calculated from the image by the second sub-camera module 2. $X^{t}$ is the transposed matrix of X, and $\left[ X X^{t} \right]^{-1}$ is the inverse matrix of $X X^{t}$. The correction coefficient may also be obtained by another algorithm, such as a nonlinear optimization method, instead of the least-square method.
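A minimal numpy sketch of this least-square fit is given below. It assumes the marker coordinates are stacked as homogeneous 3×N matrices with a bottom row of ones, so that the translation terms of the affine conversion are captured; the function name is illustrative. With the 5×5 chart of FIG. 9, each input would hold 25 marker coordinates.

```python
import numpy as np

def estimate_correction_matrix(ref_markers, tgt_markers):
    """Least-square fit of Y = kX for the adjustment-marker coordinates.

    ref_markers, tgt_markers: (N, 2) arrays of marker coordinates measured
    in the reference (first) and corrected (second) sub-camera images.
    Returns the 3x3 correction matrix k = Y X^t [X X^t]^-1.
    """
    n = ref_markers.shape[0]
    # Homogeneous 3xN coordinate matrices: rows are x, y, 1.
    Y = np.vstack([np.asarray(ref_markers, dtype=float).T, np.ones(n)])
    X = np.vstack([np.asarray(tgt_markers, dtype=float).T, np.ones(n)])
    return Y @ X.T @ np.linalg.inv(X @ X.T)
```

Because the bottom row of Y is all ones, the fitted bottom row of k comes out as [0, 0, 1], matching the affine form of the conversion formula below.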
The imaging circuit 24 (see FIG. 1) of the second sub-camera module 2 reads the correction coefficient, which is set beforehand according to the above-mentioned procedure, from the OTP 25 when a subject is photographed by the camera module. The imaging circuit 24 performs the coordinate conversion using the correction coefficient, which is read from the OTP 25, to the RAW image obtained from the second sub-camera module 2. The imaging circuit 24 functions as the position correcting unit for correcting the position, at the cross point, of the image acquired by the second sub-camera module 2.
The imaging circuit 24 performs the coordinate conversion by the operation described below, for example.
$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} k_{11} & k_{12} & k_{13} \\ k_{21} & k_{22} & k_{23} \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
Here, $k_{ij}$ is the correction coefficient, (x, y) is the coordinate before the correction, and (x′, y′) is the coordinate after the correction. The camera module performs the coordinate conversion using the correction coefficient on the image obtained from the second sub-camera module 2, thereby allowing the image obtained by the first sub-camera module 1 and the image obtained by the second sub-camera module 2 to agree with each other at the cross point. Thus, according to the present embodiment, the camera module can obtain a high-definition 3D stereovision image without a high-precision adjustment of the structural variation among the plural sub-camera modules.
The image obtained by the second sub-camera module 2 may have an uneven density in resolution due to the coordinate conversion. For the "coarse" portions caused by the coordinate conversion, i.e., the portions where the pixel spacing has become wide, the imaging circuit 24 may produce new pixels by interpolating the signal values of the peripheral pixels, for example. With this process, a uniform, high-quality image can be obtained.
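One common way to realize the conversion without leaving such coarse spots is inverse mapping: for every output pixel, the source coordinate is computed with the inverse of the correction matrix, and the value is sampled by bilinear interpolation, which combines the coordinate conversion and the peripheral-pixel interpolation in a single pass. The sketch below, for a single-channel image, is an illustration under these assumptions rather than the patent's exact circuit behaviour.

```python
import numpy as np

def apply_correction(image, k):
    """Warp a single-channel image with the 3x3 correction matrix k.

    For every output pixel (x', y'), the source coordinate (x, y) is found
    with the inverse of k, and the output value is sampled by bilinear
    interpolation of the four surrounding input pixels.
    """
    h, w = image.shape
    k_inv = np.linalg.inv(k)
    ys, xs = np.mgrid[0:h, 0:w]
    src = k_inv @ np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx = np.clip(src[0], 0, w - 1.001)  # keep x0 + 1 inside the image
    sy = np.clip(src[1], 0, h - 1.001)
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    fx, fy = sx - x0, sy - y0
    img = image.astype(np.float64)
    out = (img[y0, x0] * (1 - fx) * (1 - fy) +
           img[y0, x0 + 1] * fx * (1 - fy) +
           img[y0 + 1, x0] * (1 - fx) * fy +
           img[y0 + 1, x0 + 1] * fx * fy)
    return out.reshape(h, w).astype(image.dtype)
```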
The position correcting unit may perform a local coordinate conversion using a correction coefficient that is changed appropriately according to the image height, instead of executing the coordinate conversion at one time by the matrix operation. The position correcting unit may also perform the coordinate conversion by referring to a look-up table, for example, instead of the matrix operation.
The camera module is not limited to generating the G image from the RAW image captured by each sub-camera module in order to calculate the correction coefficient. For example, the camera module may extract the G image from images that have already been subjected to the de-mosaicing at the signal processing unit 34.
In the present embodiment, in the case of a camera module including sub-camera modules having the same resolution, an arbitrary one of the plural sub-camera modules is defined as the reference, and the coordinate conversion is performed for the other sub-camera modules, whereby the adjustment for obtaining a high-definition stereovision image can be made.
The present embodiment is not limited to a camera module having sub-camera modules of the same resolution, but may also be adapted to a camera module including sub-camera modules of different resolutions. In this case, the image of the sub-camera module having the larger pixel count may be scaled down to match the pixel count of the other sub-camera module, so that the coordinate conversion can be performed as in the present embodiment.
Since a moving image generally has a lower resolution than a still image, the image may be scaled down for use when a moving image is photographed. In this case, the portion of the image corresponding to the center of the lens may be cropped (cut) and used. Since the image at the portion corresponding to the center of the lens is used, the effect of distortion and shading from the peripheral portion of the lens is reduced, whereby the image quality can be further enhanced.
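Such a center crop can be as simple as the following sketch (the function name and argument layout are illustrative; a numpy-style array is assumed):

```python
def center_crop(image, out_h, out_w):
    """Return the out_h x out_w region around the image centre, which
    corresponds to the centre of the lens, where distortion and shading
    are smallest. Assumes out_h and out_w do not exceed the image size.
    """
    h, w = image.shape[:2]
    top, left = (h - out_h) // 2, (w - out_w) // 2
    return image[top:top + out_h, left:left + out_w]
```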
For example, when a wafer-level camera such as a CSCM (chip scale camera module) is used, only the cameras at both ends of a camera array diced as a 3×1 or 4×1 array, for example, may function as the sub-camera modules. Even in this case, a 3D stereovision image can be photographed, and production is easier than when wafer-level cameras diced one by one are assembled afterward.
Each of the first sub-camera module 1 and the second sub-camera module 2 illustrated in FIG. 1 may have the signal processing unit 34. It is desirable that the image taking unit 33 compares the output image of the first sub-camera module 1 and the output image of the second sub-camera module 2, and if there is a difference, the output image of the second sub-camera module 2 is corrected by a feedback circuit.
FIG. 10 is a block diagram illustrating a configuration of a camera module according to a second embodiment. The camera module according to the present embodiment has an autofocus function. It is characterized in that the lens extended amount for the focus adjustment is adjusted so that the images of the respective sub-camera modules agree with one another at the cross point. The components that are the same as those in the first embodiment are identified by the same numerals, and overlapping description is skipped.
A first sub-camera module 51 includes an imaging lens 11, an IR cut filter 12, a sensor 13, an imaging circuit 14, and a focus adjusting unit 52. The focus adjusting unit 52 adjusts the focus of the imaging lens 11 and the imaging lens 21. The camera module according to the present embodiment makes an adjustment by adding or subtracting an offset value, which is set beforehand according to the distance to the subject, to or from the lens extended amount of the imaging lens 21, thereby allowing the focal point of the imaging lens 21 to agree with the cross point.
FIG. 11 is a flowchart illustrating a setting procedure in order to allow images of respective sub-camera modules to agree with one another at a cross point. The offset value and the correction coefficient are set in the process after the first sub-camera module 51 and the second sub-camera module 2 are mounted during the production process of the camera module.
In step S21, a sub-camera module that is to be defined as a reference is determined. The first sub-camera module 51 is set as a reference, of the first sub-camera module 51 and the second sub-camera module 2. In step S22, the relationship between the lens extended amount and the distance to a subject is obtained for the imaging lens 11 of the first sub-camera module 51 and the imaging lens 21 of the second sub-camera module 2.
FIG. 12 is a view illustrating an example of the relationship between the lens extended amount and the distance to a subject. The relationship is obtained by adjusting the focus of each of the first sub-camera module 51 and the second sub-camera module 2 at several different subject distances and plotting the lens extended amount at each distance. Alternatively, a relationship obtained beforehand by photographing a chart at the stage where the respective sub-camera modules are produced may be used.
In step S23, an offset value Ov of the lens extended amount of the imaging lens 21 of the second sub-camera module 2 with respect to the lens extended amount of the imaging lens 11 of the first sub-camera module 51 is calculated. The offset value Ov is the difference between the extended amount of the imaging lens 11 and the extended amount of the imaging lens 21, and it is changed according to the distance to a subject, for example. The focus adjusting unit 52 holds the calculated offset value Ov.
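As a hedged illustration of steps S22 and S23, the snippet below derives Ov per calibration distance from two measured extension curves; all numeric values are invented for the example and do not come from the patent.

```python
# Illustrative focus curves (lens extension in mm vs. subject distance in
# cm); the numbers are invented for this sketch, not taken from the patent.
distances_cm = [30, 50, 100, 300]
ext_lens11 = [0.42, 0.30, 0.21, 0.12]  # reference module (imaging lens 11)
ext_lens21 = [0.45, 0.31, 0.23, 0.12]  # second module (imaging lens 21)

# Step S23: offset Ov of lens 21 relative to lens 11, per distance.
offset_ov = {d: e21 - e11
             for d, e11, e21 in zip(distances_cm, ext_lens11, ext_lens21)}

# At capture time, the extension of lens 21 is derived from that of lens 11:
#   extension_21 = extension_11 + offset_ov[distance]
```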
In step S24, a correction coefficient according to the distance to a subject is calculated for the image acquired by the second sub-camera module 2. The correction coefficient is calculated with the distance to the subject set to, for example, 30 cm, 50 cm, 100 cm, 300 cm, and infinity. The distances to the subject for calculating the correction coefficients can appropriately be set according to the performance of the imaging lenses 11 and 21. The OTP 25 holds the calculated correction coefficients. The correction coefficients may instead be held in the ISP 3 at the subsequent stage.
When the camera module photographs a subject, the focus adjusting unit 52 determines the lens extended amount for the focus adjustment of the first sub-camera module 51. The focus adjusting unit 52 then adds or subtracts the offset value Ov, which is set beforehand, to or from the lens extended amount determined for the first sub-camera module 51, thereby calculating the lens extended amount of the imaging lens 21 of the second sub-camera module 2. By adjusting the extended amount of the imaging lens 21 of the second sub-camera module 2 according to the extended amount of the imaging lens 11 of the first sub-camera module 51, the focus adjusting unit 52 allows the focal point of the imaging lens 21 to agree with the cross point.
The camera module can uniquely determine the lens extended amount of the other sub-camera module once the lens extended amount of the reference sub-camera module is determined. Since an individual focus adjustment for every sub-camera module is unnecessary, it is sufficient to provide the focus adjusting unit only to the reference sub-camera module. Accordingly, the configuration is simplified compared to the case in which an individual focus adjusting unit is provided to every sub-camera module, and the deviation of the focal points of the respective sub-camera modules can be suppressed.
The imaging circuit 24 of the second sub-camera module 2 performs the coordinate conversion, using the correction coefficient according to the distance to the subject, on the RAW image obtained by the second sub-camera module 2. When the distance to the subject during the photographing agrees with one of the distances used for calculating the correction coefficients, the imaging circuit 24 uses the correction coefficient read from the OTP 25 for the coordinate conversion. When the distance to the subject during the photographing differs from those distances, the imaging circuit 24 performs, for example, a linear interpolation on the correction coefficients read from the OTP 25 so as to calculate the correction coefficient corresponding to the distance to the subject during the photographing, and uses the calculated correction coefficient for the coordinate conversion.
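A minimal sketch of this distance-dependent selection is shown below, assuming the OTP contents are available as a mapping from calibration distance to a 3×3 matrix (the storage format and names are assumptions):

```python
import numpy as np

def correction_matrix_for_distance(distance_cm, calib):
    """Return the 3x3 correction matrix for the given subject distance.

    calib maps calibration distances (cm) to 3x3 numpy matrices read from
    the OTP. Between calibration points the matrices are linearly
    interpolated element by element; outside the calibrated range the
    nearest calibrated matrix is used. Assumes finite calibration
    distances (an 'infinity' entry can be stored as a large sentinel).
    """
    ds = sorted(calib)
    if distance_cm <= ds[0]:
        return calib[ds[0]]
    if distance_cm >= ds[-1]:
        return calib[ds[-1]]
    hi = next(d for d in ds if d >= distance_cm)
    lo = ds[ds.index(hi) - 1]
    t = (distance_cm - lo) / (hi - lo)
    return (1 - t) * calib[lo] + t * calib[hi]

# Example: two calibration points, identity at 50 cm and a slight shift at
# 100 cm (illustrative values); a shot at 75 cm blends them with t = 0.5.
k50 = np.eye(3)
k100 = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, -1.0], [0.0, 0.0, 1.0]])
k75 = correction_matrix_for_distance(75, {50: k50, 100: k100})
```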
Through the adjustment of the lens extended amounts of the first sub-camera module 51 and the second sub-camera module 2, the camera module allows the focal points of the sub-camera modules to agree with each other, and allows the images to agree with each other at the cross point. Thus, the camera module can acquire a high-definition 3D stereovision image.
The focus adjusting unit adjusts the lens extended amount by adding or subtracting the offset value; alternatively, it may employ a numerical conversion referring to a look-up table, for example. It is desirable that the camera module makes the brightness of the images acquired by the sub-camera modules agree with each other by the shading correction at the signal processing unit 34, and makes their color tones agree with each other by the linear color matrix process.
The present embodiment is not limited to a camera module having sub-camera modules of the same resolution, but may also be adapted to a camera module including sub-camera modules of different resolutions. In this case, the image of the sub-camera module having the larger pixel count may be scaled down to match the pixel count of the other sub-camera module, so that the coordinate conversion can be performed as in the present embodiment.
For the sub-camera module having the larger pixel count, the portion of the image corresponding to the center of the lens may be cropped (cut). Since the image at the portion corresponding to the center of the lens is used, the effect of distortion and shading from the peripheral portion of the lens is reduced, whereby the image quality can be further enhanced.
The first sub-camera module 51 and the second sub-camera module 2 illustrated in FIG. 10 may respectively have the signal processing unit 34. It is desirable that the image taking unit 33 compares the output image of the first sub-camera module 51 and the output image of the second sub-camera module 2, and if there is a difference, the output image of the second sub-camera module 2 is corrected by a feedback circuit.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.

Claims (7)

What is claimed is:
1. An image recording method, comprising:
determining a distance to a subject that is a cross point of a first sub-camera module and a second sub-camera module;
imaging an adjustment chart arranged on the cross point by the first sub-camera module and the second sub-camera module;
generating a green image composed from a RAW image of the adjustment chart acquired from the first sub-camera module;
generating a green image composed from a RAW image of the adjustment chart acquired from the second sub-camera module;
calculating a correction coefficient used for a coordinate conversion of an image from the green image obtained from the second sub-camera module, wherein the green image acquired by the first sub-camera module is defined as a reference;
executing a focus adjustment of a first imaging optical system provided to the first sub-camera module and a second imaging optical system provided to the second sub-camera module; and
performing the coordinate conversion, using the correction coefficient, to the image acquired by the second sub-camera module,
wherein a lens extended amount of the second imaging optical system is uniquely determined by the determination of a lens extended amount of the first imaging optical system.
2. The image recording method according to claim 1, wherein the position of the image at the cross point is corrected by the coordinate conversion.
3. The image recording method according to claim 1, wherein
the lens extended amount of the second imaging optical system is adjusted according to the lens extended amount of the first imaging optical system, so as to allow the focal point of the second imaging optical system to agree with the cross point, and
the coordinate conversion using the correction coefficient according to the distance to a subject is executed.
4. The image recording method according to claim 3, wherein an offset value, set beforehand according to the distance to a subject, is added to or subtracted from the lens extended amount of the second imaging optical system.
5. The image recording method according to claim 3, wherein the coordinate conversion, using the correction coefficient set beforehand for each distance to a subject, is performed on the image acquired by the second sub-camera module.
6. The image recording method according to claim 1, wherein an interpolation process of a signal value is performed for each pixel to compensate for the uneven resolution density caused by the coordinate conversion.
7. The image recording method according to claim 1, wherein a part of at least one of images acquired by the first sub-camera module and the second sub-camera module is cropped to be used.
US13/049,455 2010-05-27 2011-03-16 Camera module and image recording method Expired - Fee Related US9143762B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010121758A JP2011250177A (en) 2010-05-27 2010-05-27 Camera module and image recording method
JP2010-121758 2010-05-27

Publications (2)

Publication Number Publication Date
US20110292182A1 US20110292182A1 (en) 2011-12-01
US9143762B2 US9143762B2 (en) 2015-09-22

Family

ID=45021788

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/049,455 Expired - Fee Related US9143762B2 (en) 2010-05-27 2011-03-16 Camera module and image recording method

Country Status (2)

Country Link
US (1) US9143762B2 (en)
JP (1) JP2011250177A (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5398750B2 (en) 2011-01-26 2014-01-29 Toshiba Corp Camera module
US10523854B2 (en) * 2015-06-25 2019-12-31 Intel Corporation Array imaging system having discrete camera modules and method for manufacturing the same
JP7057097B2 (en) * 2017-10-27 2022-04-19 キヤノン株式会社 Control methods and programs for distance measuring devices, distance measuring systems, imaging devices, moving objects, and distance measuring devices

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002352241A (en) * 2001-05-25 2002-12-06 Nippon Hoso Kyokai <Nhk> Block matching device
JP4024581B2 (en) * 2002-04-18 2007-12-19 Olympus Corp Imaging device

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0193984A (en) 1987-10-05 1989-04-12 Sharp Corp Automatic adjusting mechanism for stereoscopic camera convergence angle
EP0472015B1 (en) * 1990-08-10 1995-12-27 Kanji Murakami Stereoscopic video pickup device
JPH06133339A (en) 1992-10-21 1994-05-13 Kanji Murakami Stereoscopic image pickup aid device
JP3117303B2 (en) 1992-11-09 2000-12-11 Canon Inc Optical device
JPH10271534A (en) 1997-03-26 1998-10-09 Nippon Hoso Kyokai <Nhk> Stereoscopic image photographing device
US6385334B1 (en) * 1998-03-12 2002-05-07 Fuji Jukogyo Kabushiki Kaisha System and method for adjusting stereo camera
US6373518B1 (en) * 1998-05-14 2002-04-16 Fuji Jukogyo Kabushiki Kaisha Image correction apparatus for stereo camera
US6809765B1 (en) * 1999-10-05 2004-10-26 Sony Corporation Demosaicing for digital imaging device using perceptually uniform color space
JP2002077947A (en) 2000-09-05 2002-03-15 Sanyo Electric Co Ltd Method for correcting stereoscopic image and stereoscopic image apparatus using the same
US20030210343A1 (en) * 2002-05-13 2003-11-13 Minolta Co., Ltd. Image shift correcting device, image capturing device, and digital camera using same
US7403634B2 (en) * 2002-05-23 2008-07-22 Kabushiki Kaisha Toshiba Object tracking apparatus and method
US7479982B2 (en) * 2002-07-03 2009-01-20 Topcon Corporation Device and method of measuring data for calibration, program for measuring data for calibration, program recording medium readable with computer, and image data processing device
US20060115177A1 (en) * 2002-12-27 2006-06-01 Nikon Corporation Image processing device and image processing program
US20060256231A1 (en) * 2005-05-13 2006-11-16 Casio Computer Co., Ltd. Image pick-up apparatus having function of detecting shake direction
US20070229697A1 (en) * 2006-03-31 2007-10-04 Samsung Electronics Co., Ltd. Apparatus and method for out-of-focus shooting using portable terminal
US20080239107A1 (en) * 2007-03-27 2008-10-02 Fujifilm Corporation Imaging apparatus
US20080239064A1 (en) * 2007-03-29 2008-10-02 Fujifilm Corporation Stereoscopic image pickup apparatus and method of adjusting optical axis
US20080266386A1 (en) * 2007-04-25 2008-10-30 Canon Kabushiki Kaisha System
US20090179824A1 (en) * 2008-01-10 2009-07-16 Canon Kabushiki Kaisha Image processing method, image processing apparatus, and system
US20090201384A1 (en) * 2008-02-13 2009-08-13 Samsung Electronics Co., Ltd. Method and apparatus for matching color image and depth image
US20090324135A1 (en) * 2008-06-27 2009-12-31 Sony Corporation Image processing apparatus, image processing method, program and recording medium

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Japanese Office Action issued Aug. 6, 2013 in Patent Application No. 2010-121758 with English Translation.
Japanese Office Action issued May 14, 2013 in Patent Application No. 2010-121758 (with English translation).
U.S. Appl. No. 12/023,825, filed Feb. 9, 2011, Hidaka.
U.S. Appl. No. 13/353,537, filed Jan. 19, 2012, Ogasahara.
Wikipedia Article on RAW image format. Published on Dec. 30, 2008. http://web.archive.org/web/20081230013634/http://en.wikipedia.org/wiki/RAW-image-format. *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150103143A1 (en) * 2013-10-14 2015-04-16 Etron Technology, Inc. Calibration system of a stereo camera and calibration method of a stereo camera
US9628778B2 (en) * 2013-10-14 2017-04-18 Eys3D Microelectronics, Co. Calibration system of a stereo camera and calibration method of a stereo camera

Also Published As

Publication number Publication date
US20110292182A1 (en) 2011-12-01
JP2011250177A (en) 2011-12-08

Similar Documents

Publication Publication Date Title
US11856291B2 (en) Thin multi-aperture imaging system with auto-focus and methods for using same
US9055181B2 (en) Solid-state imaging device, image processing apparatus, and a camera module having an image synthesizer configured to synthesize color information
JP5701785B2 (en) Camera module
US20150281542A1 (en) Image generation apparatus and method for generating plurality of images with different resolution and/or brightness from single image
US9369693B2 (en) Stereoscopic imaging device and shading correction method
US10630920B2 (en) Image processing apparatus
JP6004221B2 (en) Image processing device
US9008412B2 (en) Image processing device, image processing method and recording medium for combining image data using depth and color information
JP6312487B2 (en) Image processing apparatus, control method therefor, and program
JP2014006388A (en) Imaging apparatus, and its control method and program
US9143762B2 (en) Camera module and image recording method
JP2014026051A (en) Image capturing device and image processing device
JP2008141658A (en) Electronic camera and image processor
CN103460702B (en) Color image capturing element and image capturing device
JP6762767B2 (en) Image sensor, image sensor, and image signal processing method
JP5786355B2 (en) Defocus amount detection device and electronic camera
JP5173664B2 (en) Image processing apparatus and image processing method
JP2021097347A (en) Imaging apparatus, control method of the same, and program
JP2015163915A (en) Image processor, imaging device, image processing method, program, and storage medium
JP2015076754A (en) Image processing apparatus, image processing method, and image processing program
JP2014026050A (en) Image capturing device and image processing device
JP2012124650A (en) Imaging apparatus, and imaging method
US9596402B2 (en) Microlens array for solid-state image sensing device, solid-state image sensing device, imaging device, and lens unit
JP6652294B2 (en) Image processing device, imaging device, image processing method, program, and storage medium
US10334161B2 (en) Image processing apparatus, image processing method, computer program and imaging apparatus

Legal Events

Date Code Title Description
AS Assignment

Owner name: KABUSHIKI KAISHA TOSHIBA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:OGASAHARA, TAKAYUKI;REEL/FRAME:026334/0795

Effective date: 20110311

ZAAA Notice of allowance and fees due

Free format text: ORIGINAL CODE: NOA

ZAAB Notice of allowance mailed

Free format text: ORIGINAL CODE: MN/=.

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: SURCHARGE FOR LATE PAYMENT, LARGE ENTITY (ORIGINAL EVENT CODE: M1554); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20230922