US20120075432A1 - Image capture using three-dimensional reconstruction - Google Patents

Image capture using three-dimensional reconstruction

Info

Publication number
US20120075432A1
Authority
US
United States
Prior art keywords
image
polarized
sensor
luminance
pixel
Prior art date
Legal status
Abandoned
Application number
US13/246,821
Inventor
Brett Bilbrey
Michael F. Culbert
David I. Simon
Rich DeVaul
Mushtaq Sarwar
David S. Gere
Current Assignee
Apple Inc
Original Assignee
Apple Inc
Priority date
Filing date
Publication date
Application filed by Apple Inc filed Critical Apple Inc
Priority to US13/246,821
Assigned to APPLE INC. Assignors: DEVAUL, RICH; CULBERT, MICHAEL F.; SARWAR, MUSHTAQ; BILBREY, BRETT; GERE, DAVID S.; SIMON, DAVID I.
Publication of US20120075432A1

Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00 Individual registration on entry or exit
    • G07C9/30 Individual registration on entry or exit not involving the use of a pass
    • G07C9/32 Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C9/37 Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J4/00 Measuring polarisation of light
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00 Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B30/20 Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
    • G02B30/22 Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the stereoscopic type
    • G02B30/25 Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the stereoscopic type using polarisation techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/557 Depth or shape recovery from multiple images from light fields, e.g. from plenoptic cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G06T7/596 Depth or shape recovery from multiple images from stereo images from three or more stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/243 Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/25 Image signal generators using stereoscopic image cameras using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; using image signals from one sensor to control the characteristics of another sensor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/271 Image signal generators wherein the generated image signals comprise depth maps or disparity maps

Definitions

  • the disclosed embodiments relate generally to image sensing devices and, more particularly, to image sensing devices that utilize three-dimensional reconstruction to form a three-dimensional image.
  • Existing three-dimensional image capture devices can derive limited three-dimensional visual information for objects located within a captured area.
  • some imaging devices can extract approximate depth information relating to objects located within the captured area, but are incapable of obtaining detailed geometric information relating to the surfaces of these objects.
  • Such sensors may be able to approximate the distances of objects within the captured area, but cannot accurately reproduce the three-dimensional shape of the objects.
  • other imaging devices can obtain and reproduce surface detail information for objects within the captured area, but are incapable of extracting depth information. Accordingly, these sensors may be incapable of differentiating between a small object positioned close to the sensor and a large object positioned far away from the sensor.
  • Embodiments described herein relate to systems, apparatuses and methods for capturing a three-dimensional image using one or more dedicated imaging devices.
  • One embodiment may take the form of a three-dimensional imaging apparatus configured to capture at least one image including one or more objects, comprising: a first sensor for capturing a polarized image, the first sensor including a first imaging device and a polarized filter associated with the first imaging device; a second sensor for capturing a first non-polarized image; a third sensor for capturing a second non-polarized image; and at least one processing module for deriving depth information for the one or more objects utilizing at least the first non-polarized image and the second non-polarized image, the processing module further operative to combine the polarized image, the first non-polarized image, and the second non-polarized image to form a composite three-dimensional image.
  • Another embodiment may take the form of a three-dimensional imaging apparatus configured to capture at least one image including one or more objects, comprising: a first sensor for capturing a polarized chrominance image and determining surface information for the one or more objects, the first sensor including a color imaging device and a polarized filter associated with the color imaging device; a second sensor for capturing a first luminance image; a third sensor for capturing a second luminance image; and at least one processing module for deriving depth information for the one or more objects utilizing at least the first luminance image and the second luminance image and combining the polarized chrominance image, the first luminance image, and the second luminance image to form a composite three-dimensional image utilizing the surface information and the depth information.
  • Still another embodiment may take the form of a method for capturing at least one image of an object, comprising: capturing a polarized image of the object; capturing a first non-polarized image of the object; capturing a second non-polarized image of the object; deriving depth information for the object from at least the first non-polarized image and the second non-polarized image; determining a plurality of surface normals for the object, the plurality of surface normals derived from the polarized image; and creating a three-dimensional image from the depth information and the plurality of surface normals.
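  • As a rough illustration of how those claimed steps fit together (a sketch only, not the patent's implementation), the following Python fragment strings them into one flow; the estimator callables and the returned structure are hypothetical placeholders.

        # Hypothetical sketch of the claimed flow; depth_fn and normals_fn are
        # placeholder callables, not APIs defined by the patent.
        def capture_three_dimensional_image(polarized_img, img_a, img_b,
                                            depth_fn, normals_fn):
            depth_map = depth_fn(img_a, img_b)           # depth from the two non-polarized images
            surface_normals = normals_fn(polarized_img)  # per-pixel normals from the polarized image
            # The composite three-dimensional image pairs each pixel with its depth and normal.
            return {"color": img_a, "depth": depth_map, "normals": surface_normals}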
  • FIG. 1A is a functional block diagram that illustrates certain components of one embodiment of a three-dimensional imaging apparatus;
  • FIG. 1B is a close-up view of one embodiment of the second imaging device shown in FIG. 1A;
  • FIG. 1C is a close-up view of another embodiment of the second imaging device shown in FIG. 1A;
  • FIG. 1D is a close-up view of another embodiment of the second imaging device shown in FIG. 1A;
  • FIG. 2 is a functional block diagram that illustrates certain components of another embodiment of a three-dimensional imaging apparatus;
  • FIG. 3 is a functional block diagram that illustrates certain components of another embodiment of a three-dimensional imaging apparatus;
  • FIG. 4 depicts a sample polarization filter that may be used in accordance with embodiments discussed herein, including the imaging apparatuses of FIGS. 1A-3;
  • FIG. 5 depicts a second sample polarization filter that may be used in accordance with embodiments discussed herein, including the imaging apparatuses of FIGS. 1A-3.
  • One embodiment may take the form of a three-dimensional imaging apparatus, including a first and second imaging device.
  • the first imaging device may have two unique imaging devices that may be used in concert to derive depth data for objects within the field of detection of the sensors.
  • the first imaging device may have a single imaging device that provides depth data.
  • the second imaging device may be at least partially overlaid with a polarizing filter in order to obtain polarization data of light impacting the device, and thus the surface orientation of any objects reflecting such light.
  • the first imaging device may derive approximate depth information relating to objects within its field of detection and supply the depth information to an image processing device.
  • the second imaging device may capture surface detail information relating to objects within its field of detection and supply the surface detail information to the image processing device.
  • the image processing device may combine the depth information with the surface detail information in order to create a three-dimensional image that includes both surface detail and accurate depth information for objects in the image.
  • the term “image sensing device” includes, without limitation, any electronic device that can capture still or moving images.
  • the image sensing device may utilize analog or digital sensors, or a combination thereof, for capturing the image.
  • the image sensing device may be configured to convert or facilitate converting the captured image into digital image data.
  • the image sensing device may be hosted in various electronic devices including, but not limited to, digital cameras, personal computers, personal digital assistants (PDAs), mobile telephones, or any other devices that can be configured to process image data.
  • Sample image sensing devices include charge-coupled device (CCD) sensors, complementary metal-oxide-semiconductor sensors, infrared sensors, light detection and ranging sensors, and the like. Further, the image sensing devices may be sensitive to a range of colors and/or luminances, and may employ various color separation mechanisms such as Bayer arrays, Foveon X3 configurations, multiple CCD devices, dichroic prisms and the like.
  • FIG. 1A is a functional block diagram of one embodiment of a three-dimensional imaging apparatus for capturing and storing image data.
  • the three-dimensional imaging apparatus may be a component within an electronic device.
  • the three-dimensional imaging apparatus may be employed in a standalone digital camera, a laptop computer, a media player, a mobile phone, and so on and so forth.
  • the three-dimensional imaging apparatus 100 may include a first imaging device 102 , a second imaging device 104 , and an image processing module 106 .
  • the first imaging device 102 may include a first imaging device and the second imaging device 104 may include a second imaging device and a polarizing filter 108 associated with the second imaging device.
  • the first imaging device 102 may be configured to derive approximate depth information relating to objects in the image
  • the second imaging device 104 may be configured to derive surface orientation information relating to objects in the image.
  • the fields of view 112, 114 of the first and second imaging devices may be offset so that the received images are slightly different.
  • the field of view 112 of the first imaging device 102 may be vertically, diagonally, or horizontally offset from the field of view 114 of the second imaging device 104, or may be closer to or further away from a reference plane or point.
  • offsetting the fields of view 112, 114 of the first and second imaging devices may provide data useful for generating stereo disparity maps, as well as for extracting depth information.
  • the fields of view 112, 114 of the first and second imaging devices may be substantially the same.
  • the first and second imaging devices 102, 104 may each be formed from an array of light-sensitive pixels. That is, each pixel of the imaging devices may detect at least one of the various wavelengths that make up visible light. The signal generated by each such pixel may vary depending on the wavelength of light impacting it, so that the array may reproduce a composite image of the object.
  • the first and second imaging devices 102 , 104 may have substantially identical pixel array configurations.
  • the first and second imaging devices may have the same number of pixels, the same pixel aspect ratio, the same arrangement of pixels, and/or the same size of pixels.
  • the first and second imaging devices may have different numbers of pixels, pixel sizes, and/or layouts.
  • the first imaging device 102 may have a smaller number of pixels than the second imaging device 104 , or vice versa, or the arrangement of pixels may be different between the sensors.
  • the first imaging device 102 may be configured to capture a first image and process the image to detect depth or distance information relating to objects in the image.
  • the first imaging device 102 may be configured to derive an approximate relative distance of an object 110 by measuring properties of electromagnetic waves as they are reflected off or scattered by the object and captured by the first imaging device.
  • the first imaging device may be a Light Detection And Ranging (LIDAR) sensor.
  • the LIDAR sensor may emit laser pulses that are reflected off of the surfaces of objects in the image and detect the reflected signal.
  • the LIDAR sensor may then calculate the distance of an object from the sensor by measuring the time delay between transmission of a laser pulse and the detection of the reflected signal.
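  • As a minimal sketch of that time-of-flight relation (distance is half the round-trip delay multiplied by the speed of light), assuming the delay is measured in seconds:

        # Time-of-flight range: the pulse travels out and back, so one-way distance is c * delay / 2.
        SPEED_OF_LIGHT = 299_792_458.0  # meters per second

        def lidar_distance(delay_seconds):
            return SPEED_OF_LIGHT * delay_seconds / 2.0

        # Example: a round-trip delay of about 66.7 nanoseconds corresponds to roughly 10 meters.
        print(lidar_distance(66.7e-9))  # ~10.0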
  • Other embodiments may utilize other types of depth-detection techniques, such as infrared reflection, RADAR, laser detection and ranging, and the like.
  • a stereo disparity map may be generated to derive depth or distance information relating to objects present in the image.
  • a stereo disparity map may be formed from the first image captured by the first imaging device and a second image captured by the second imaging device.
  • the stereo disparity map is a depth map in which depth information for objects shown in the images is derived from the offset first and second images.
  • the second image may include some or all of the objects captured in the first image, but with the position of the objects being shifted in one direction (typically, although not necessarily, horizontally). This shift may be measured and used to calculate the distance of the objects from the first and second imaging devices.
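  • A minimal sketch of how that measured shift (disparity) maps to distance for a rectified stereo pair; the focal length (in pixels) and the baseline between the two imaging devices are assumed inputs, not values given in the patent:

        import numpy as np

        def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
            # Classic pinhole-stereo relation: depth = focal_length * baseline / disparity.
            disparity_px = np.asarray(disparity_px, dtype=float)
            depth = np.full_like(disparity_px, np.inf)   # zero disparity -> effectively at infinity
            valid = disparity_px > 0
            depth[valid] = focal_length_px * baseline_m / disparity_px[valid]
            return depth

        # Example: a 20-pixel shift with a 1000-pixel focal length and a 5 cm baseline -> 2.5 m.
        print(depth_from_disparity([20.0], 1000.0, 0.05))  # [2.5]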
  • the second imaging device 104 may be configured to capture a second image and derive detailed surface information for objects in the image.
  • a polarizing filter 108 may be positioned between the second imaging device and an object 110 , such that light reflected off the object passes through the polarizing filter to produce polarized light. The polarized light is then transmitted by the filter 108 to the second imaging device 104 .
  • the second imaging device 104 may be any electronic sensor capable of detecting various wavelengths of light, such as those commonly used in digital cameras, digital video cameras, mobile telephones and personal digital assistants, web cameras, and so on and so forth.
  • the second imaging device 104 may be, but is not limited to, a charge-coupled device (CCD) imaging device or a complementary metal-oxide-semiconductor (CMOS) sensor.
  • a polarizing filter may overlay the second imaging device.
  • the polarizing filter 108 may include an array of polarizing subfilters 120 .
  • Each of the polarizing subfilters 122 within the array may overlay one or more pixels 124 of the second imaging device 104 .
  • the polarizing filter 108 may be overlaid over the second imaging device 104 so that each polarizing subfilter 122 in the array 120 is aligned with a corresponding pixel 124 .
  • the polarizing subfilters 122 may have different types of polarizations.
  • a first polarizing subfilter may have a horizontal polarization
  • a second subfilter may have a vertical polarization
  • a third may have +45 degree polarization
  • a fourth may have a -45 degree polarization, and so on and so forth.
  • left and right-hand circular polarizations may be used. Accordingly, the polarized light that is transmitted from the polarizing filter 108 to the second imaging device 104 may be polarized differently for some of the pixels than for others.
  • Another embodiment, shown in FIG. 1C, may include a microlens array 130 overlaying the polarization filter 108.
  • Each of the microlenses 132 in the microlens array 130 may overlay one or more polarizing subfilters 122 to focus polarized light onto a corresponding pixel 124 of the second imaging device.
  • the microlenses 132 in the array 130 may each be configured to refract light impacting on the second imaging device, as well as transmit light to an underlying polarizing subfilter 122 . Accordingly, each microlens 132 may correspond to one of the pixels 124 of the second imaging device 104 .
  • the microlenses 132 can be formed from any suitable material for transmitting and diffusing light through the light guide, including plastic, acrylic, silica, glass, and so on and so forth. Additionally, the light guide may include combinations of reflective material, highly transparent material, light absorbing material, opaque material, metallic material, optic material, and/or any other functional material to provide extra modification of optical performance.
  • the microlenses 134 of the microlens array 136 may be polarized. In this embodiment, the polarized microlens array 136 may overlay the pixels 122 of the second imaging device 104 so that polarized light is focused onto the pixels 122 of the second imaging device.
  • the microlenses 136 may be convex and have a substantially rounded configuration. Other embodiments may have different configurations. For example, in one embodiment, the microlenses 136 may have a conical configuration, in which the top end of each microlens is pointed. In other embodiments, the microlenses 136 may define truncated cones, in which the tops of the microlenses form a substantially flat surface. Additionally, in some embodiments, the microlenses 136 may be concave surfaces, rather than convex. As is known, the microlenses may be formed using a variety of techniques, including laser-cutting techniques, and/or micro-machining techniques, such as diamond turning.
  • an electrochemical finishing technique may be used to coat and/or finish the microlenses to increase their longevity and/or enhance or add any desired optical properties.
  • Other methods for forming the microlenses may entail the use of other techniques and/or machinery, as is known.
  • Unpolarized light that is reflected off of the surfaces of objects in the image may be fully or partially polarized according to Fresnel's laws.
  • the polarization may be correlated to the plane angle of incidence on the surface, as well as to the physical properties of the material. For example, light reflecting off highly reflective materials, such as polished metal, may be less polarized than light reflecting off of a dull surface.
  • light that is reflected off the surfaces of objects in the image may be passed through the array of polarization filters. The resulting polarized light may be captured by the pixels of the second imaging device so that each such pixel of the second imaging device receives light only if that light is polarized according to the polarization scheme of its corresponding filter.
  • the second imaging device 104 may then measure the polarization of the light impacting on each pixel and derive the surface geometry of the object. For example, the second imaging device 104 may determine the orientation and/or curvature of the surface of an object. In one embodiment, the orientation and/or curvature of the surface may be determined for each pixel of the second imaging device 104 and combined to obtain the surface geometry for all of the surfaces of the object.
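  • One common way to turn such per-pixel polarization measurements into surface orientation, offered as a sketch of the general shape-from-polarization idea rather than the patent's specific algorithm, is to combine the intensities seen through 0, 45, 90, and -45 degree subfilters into Stokes parameters; the resulting angle of linear polarization constrains the azimuth of the surface normal, and the degree of polarization constrains its zenith angle.

        import numpy as np

        def linear_polarization(i0, i45, i90, i135):
            # Angle and degree of linear polarization from four differently polarized intensities.
            s0 = 0.5 * (i0 + i45 + i90 + i135)                    # total intensity
            s1 = i0 - i90                                         # horizontal minus vertical component
            s2 = i45 - i135                                       # +45 minus -45 component
            aolp = 0.5 * np.arctan2(s2, s1)                       # angle of linear polarization (radians)
            dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)  # degree of linear polarization
            return aolp, dolp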
  • the first and second images may be transmitted to the image processing module 106 , which may combine the first image captured by and transmitted from the first imaging device 102 with the second image captured by and transmitted from the second imaging device 104 , to output a composite three-dimensional image. This may be accomplished by aligning the first and second images and overlaying one of the images on top of the other using a variety of techniques, including warping the first and second images, selectively cropping at least one of these images, using calibration data for the image processing module, and so on and so forth.
  • the first image may supply depth information relating to the objects in the image
  • the second image may supply surface geometry information for the objects in the image.
  • the combined three-dimensional image may include accurate depth information for each object, while also providing accurate object surface detail.
  • the first image supplying the depth information may have a lower or coarser resolution (e.g., lower pixel count per unit area), than the second image supplying the surface geometry information.
  • the composite three-dimensional image may include high resolution surface detail for objects in the image, but the amount of overall processing by the image processing module may be reduced due to the lower resolution of the first image.
  • other embodiments may produce first and second images having substantially the same resolution, or the first image supplying the depth information may have a higher resolution than the second image.
  • the imaging apparatus 200 generally includes a first chrominance sensor 202 , a luminance sensor 204 , a second chrominance sensor 206 , and an image processing module 208 .
  • the luminance sensor 204 may be configured to capture a luminance component of incoming light.
  • each of the chrominance sensors 202, 206 may be configured to capture color components of incoming light.
  • the chrominance sensors 202 , 206 may sense the R (Red), G (Green), and B (Blue) components of an image and process these components to derive chrominance information.
  • Other embodiments may be configured to sense other color components, such as yellow, cyan, magenta, and so on.
  • two luminance sensors and a single chrominance sensor may be used. That is, certain embodiments may employ a first luminance sensor, a first chrominance sensor and a second luminance sensor, such that a stereo disparity (e.g., stereo depth) map may be generated based on the offsets of the two luminance images.
  • Each luminance sensor captures one of the two luminance images in this embodiment.
  • the chrominance sensor may be used to capture color information for a picture, while one or both luminance sensors capture luminance information.
  • both of the luminance sensors may be overlaid, fitted, or otherwise associated with one or more polarizing filters to receive and capture surface normal information for a surface, as described in more detail herein.
  • Multiple luminance sensors with polarizing filters may be used, for example, in low light conditions where chrominance information may be lost or muted.
  • Chrominance sensors 202, 206 may be implemented in a variety of fashions and may sense/capture more than just chrominance.
  • the chrominance sensor(s) 202 , 206 may be implemented as a Bayer array, an RGB sensor, a CMOS sensor, and so on and so forth. Accordingly, it should be appreciated that a chrominance sensor may also capture luminance information; chrominance is typically derived from the RGB sensor data.
  • the first chrominance sensor 202 may take the form of a first color imaging device.
  • the luminance sensor may take the form of a luminance imaging device that is overlaid by a polarizing filter.
  • the second chrominance sensor 206 may take the form of a second color imaging device.
  • the luminance sensor 204 and two chrominance sensors 202 , 206 may be separate integrated circuits.
  • the luminance and chrominance sensors may be formed on the same circuit and/or formed on a single board or other element.
  • the polarizing filter 210 may be placed over either of the chrominance sensors instead of (or in addition to) the luminance sensor.
  • the polarizing filter 210 may be positioned between the luminance sensor 204 and an object 211 , such that light reflected off the object passes through the polarizing filter and impacts the corresponding luminance sensor.
  • the luminance sensor 204 may be any electronic sensor capable of detecting various wavelengths of light, such as those commonly used in digital cameras, digital video cameras, mobile telephones and personal digital assistants, web cameras, and so on and so forth.
  • the luminance and chrominance sensors 202 , 204 , 206 may be formed from an array of color-sensitive pixels.
  • the pixel arrangement may vary between sensors or may be identical, in a manner similar to that previously discussed.
  • respective color filters may overlay the first and second color sensors and allow the sensors to capture the color portions of a sensed image as chrominance images.
  • an additional filter may overlay the luminance sensors and allow the imaging device to capture the luminance portion of a sensed image as a luminance image.
  • the luminance image, along with the chrominance images, may be transmitted to the image processing module 208 .
  • the image processing module 208 may combine the luminance image captured by and transmitted from the luminance sensor 204 with the chrominance images captured by and transmitted from the chrominance sensors, to output a composite image.
  • luminance of an image may be expressed as a weighted sum of red, green and blue wavelengths of the image, in the following manner:
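  • The weighting itself is not reproduced in this text; one conventional choice, offered here purely as an illustrative assumption rather than as the patent's own values, is the ITU-R BT.601 weighting:

        Y = 0.299 R + 0.587 G + 0.114 B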
  • the chrominance portion of an image may be the difference between the full color image and the luminance image. Accordingly, the full color image may be the chrominance portion of the image combined with the luminance portion of the image.
  • the chrominance portion may be derived by mathematically processing the R, G, and B components of an image, and may be expressed as two signals or a two dimensional vector for each pixel of an imaging device. For example, the chrominance portion may be defined by two separate components Cr and Cb, where Cr may be proportional to detected red light less detected luminance, and where Cb may be proportional to detected blue light less detected luminance.
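  • A small sketch of that decomposition, reusing the assumed BT.601-style weights noted above (the constants are illustrative, not taken from the patent):

        def rgb_to_y_cb_cr(r, g, b):
            # Split an RGB sample into luminance (Y) and two chrominance signals (Cb, Cr).
            y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance as a weighted sum of R, G, and B
            cb = b - y                              # proportional to detected blue light less luminance
            cr = r - y                              # proportional to detected red light less luminance
            return y, cb, cr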
  • the first and second chrominance sensors 202 , 206 may be configured to detect red and blue light and not green light, for example, by covering pixel elements of the color imaging devices with a red and blue filter array. This may be done in a checkerboard pattern of red and blue filter portions.
  • the filters may include a Bayer-pattern filter array, which includes red, blue, and green filters.
  • the filter may be a CYGM (cyan, yellow, green, magenta) or RGBE (red, green, blue, emerald) filter.
  • the luminance portion of a color image may have a greater influence on the overall image resolution than the chrominance portions of a color image.
  • the luminance sensor 204 may be an imaging device that has a higher pixel count than that of the chrominance sensors 202 , 206 . Accordingly, the luminance image generated by the luminance sensor 204 may be a higher resolution image than the chrominance images generated by the chrominance sensors 202 , 206 . In other embodiments, the luminance image may be stored at a higher resolution or transmitted at higher bandwidth than the chrominance images.
  • the fields of view of any two of the luminance and chrominance sensors may be offset so that the produced images are slightly different.
  • the image processing module may combine the high resolution luminance image captured by and transmitted from the luminance sensor 204 with the first and second chrominance images captured by and transmitted from the first and second chrominance sensors 202, 206 to output a composite three-dimensional image.
  • the image processing module 208 may use a variety of techniques to account for differences between the high-resolution luminance image and the first and second chrominance images to form the composite three-dimensional image.
  • Depth information for the composite image may be derived from the two chrominance images.
  • the fields of view 212 , 216 of the first and second chrominance sensors 202 , 206 may be offset from one another and the image processing module 208 may be configured to compute depth information for objects in the image by comparing the first chrominance image with the second chrominance image.
  • the pixel offsets may be used to form a stereo disparity map between the two chrominance images.
  • the stereo disparity map may be a depth map in which depth information for objects in the images is derived from the offset first and second chrominance images.
  • depth information for the composite image may be derived from the two chrominance images in conjunction with the luminance image.
  • the image processing module may further compare the luminance image with one or both of the chrominance images to form further stereo disparity maps between the luminance image and the chrominance images.
  • the image processing module may be configured to refine the accuracy of the stereo disparity map generated initially using only the two chrominance sensors 202 , 206 .
  • the luminance sensor 204 may include a luminance imaging device and an associated polarizing filter 210 .
  • the polarizing filter 210 may include an array of polarizing subfilters, with each of the polarizing subfilters within the array corresponding to a pixel of the second imaging device.
  • the polarizing filter may be overlaid over the luminance imaging device so that each polarizing subfilter in the array is aligned with a corresponding pixel.
  • the polarizing filters in the array may have different types of polarizations. However, in other embodiments, the polarizing subfilters in the array may have the same type of polarization.
  • Light reflected off the surfaces of objects in the image may be passed through the array of polarization subfilters.
  • the resulting polarized light may be captured by the pixels of the luminance imaging device so that each pixel of the luminance imaging device may receive light that is polarized according to the polarization scheme of its corresponding subfilter.
  • the luminance imaging device may then measure the polarization of the light impacting on the pixels and derive the surface geometry of the object.
  • the orientation and/or curvature of the surface may be determined for each pixel of the luminance imaging device and combined to obtain the surface geometry for all of the surfaces of the object 211 .
  • the polarizing filter 210 may be overlaid by a corresponding microlens array 130 to focus the light onto the pixels of the luminance imaging device.
  • the microlens array 134 may be polarized, so that a separate polarizing filter overlaying the luminance imaging device is not needed.
  • the luminance and first and second chrominance images may then be transmitted to the image processing module.
  • the image processing module 208 may combine the luminance image captured by and transmitted from the luminance imaging device 205 , with the first and second chrominance images captured by and transmitted from the chrominance imaging devices 203 , 207 , to output a composite three-dimensional image. In one embodiment, this may be accomplished by warping the luminance and two chrominance images, such as to compensate for depth of field effects or stereo effects, and substantially aligning the images to form the composite three-dimensional image.
  • Other techniques for aligning the luminance and chrominance images include selectively cropping at least one of these images by identifying fiducials in the fields of view of the first and second chrominance images and/or luminance images, or by using calibration data for the image processing module 208 .
  • the stereo disparity map generated between the two chrominance images may supply depth information relating to the objects in the image, while the luminance image supplies surface geometry information for the objects in the image.
  • the combined three-dimensional image may include accurate depth information for each object, while also providing accurate object surface detail.
  • the first and second chrominance images supplying the depth information may each have a lower pixel count than the luminance image supplying the surface geometry information. As discussed above, this may result in a composite three-dimensional image that has high resolution surface detail of objects in the image and approximated depth information. Accordingly, the amount of overall processing by the image processing module may be reduced due to the lower resolution of the chrominance images.
  • Each of the luminance and chrominance sensors can have a blind region due to a near field object 211 that may partially or fully obstruct the fields of view of the sensors.
  • the near field object may block the field of view of the sensors to prevent the sensors from detecting part or all of a background or a far field object that is positioned further from the sensors than the near-field object 211 .
  • the chrominance sensors 202 , 206 may be positioned such that the blind regions of the chrominance sensors do not overlap. Accordingly, chrominance information that is missing from one of the chrominance sensors due to a near field object may, in many cases, be captured by the other chrominance sensor of the three-dimensional imaging apparatus.
  • the captured color information may then be combined with the luminance information from the luminance sensor and incorporated into the final image, as previously described. Due to the offset blind regions of the chrominance sensors 202 , 206 , stereo imaging artifacts may be reduced in the final image by ensuring that color information is supplied by at least one of the chrominance sensors where needed. In other words, color information for each of the pixels of the luminance sensor 204 may be supplied by at least one of the chrominance sensors 202 , 206 .
  • the luminance sensor 204 may be positioned between the chrominance sensors 202 , 206 so that the blind region of the luminance sensor may be between the blind regions of the first and second chrominance sensors. This configuration may prevent or reduce overlap between the blind regions of the first and second chrominance sensors, while also allowing for a more compact arrangement of sensors within the three-dimensional imaging apparatus.
  • the chrominance sensors may be positioned directly adjacent one another and the luminance sensor may be positioned on either side of the chrominance sensors, rather than in-between the sensors.
  • other embodiments may utilize two luminance sensors and a single chrominance sensor positioned between the luminance sensors.
  • the chrominance sensor may include a polarizing filter and a color imaging device associated with the polarizing filter.
  • the chrominance sensor may be configured to derive surface geometry information for objects in the image and the luminance images generated by the luminance sensors may be processed to extract depth information for objects in the image.
  • Another embodiment of a three-dimensional imaging apparatus 300 may include four or more sensors. As shown in FIG. 3, one embodiment may include two chrominance sensors 302, 306, two luminance sensors 304, 310, and an image processing module 308. Similar to the embodiment shown in FIG. 2, this embodiment may include two chrominance sensors 302, 306 positioned on either side of a first luminance sensor 304. In contrast to the embodiment shown in FIG. 2, however, the surface detail information may be supplied by a second luminance sensor 310 that includes a polarizing filter 312 and a luminance imaging device associated with the polarizing filter. In one embodiment, the second luminance sensor may be positioned on top of or below the first luminance sensor.
  • the second luminance sensor 310 may be horizontally or diagonally offset from the first luminance sensor 304 . Additionally, the second luminance sensor 310 may be positioned in front of or behind the first luminance sensor 304 . Each of the luminance and chrominance sensors generally (although not necessarily) interacts directly with the image processing module 308 .
  • depth information may be obtained by forming a stereo disparity map between the first luminance sensor 304 and the two chrominance sensors 302 , 306
  • the surface detail information may be obtained by the second luminance sensor 310 .
  • This embodiment may allow for generating a more accurate stereo disparity map, since the map may be formed from the luminance image generated by the first luminance sensor 304 and the two chrominance images generated by the chrominance sensors 302 , 306 , and is not limited to the data provided by the two chrominance images. This may allow for more accurate depth calculation, as well as for better alignment of the produced images to form the composite three-dimensional image.
  • the stereo disparity map may be generated from the two luminance images generated by the first and second luminance sensors 304 , 310 , as well as the two chrominance images generated by the chrominance sensors 302 , 306 .
  • embodiments discussed herein may employ a polarized filter 312 that is placed atop, above, or otherwise between a light source and an imaging sensor.
  • polarized filters are shown in FIG. 1C as a separate layer beneath microlenses and in FIG. 1D as integrated with microlenses.
  • the polarized filter 312 may be patterned in the manner shown in FIG. 4. That is, the polarization filter 312 may take the form of a set of individually polarized elements 314, each of which passes a different polarization of light.
  • As shown in FIG. 4, the individually polarized elements 314 may be vertically polarized 316, horizontally polarized 318, +45 degree polarized 320, and -45 degree polarized 322. These four individually polarized elements may be arranged in a two-by-two array in certain embodiments.
  • each individually polarized element may overlay a single pixel (or, in some embodiments, a group of pixels).
  • the pixel beneath the individually polarized element thus receives and senses light having a single polarization.
  • the surface orientation of an object reflecting light onto a pixel may be determined through the polarization of the light impacting that pixel. Accordingly, surface orientation may be determined for each group of pixels (here, each group of four pixels) underlying a set of differently polarized elements.
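  • Assuming the two-by-two polarizer mosaic repeats across the sensor, the four polarization channels can be separated by simple strided slicing; the channel-to-offset assignment below is an assumption for illustration only.

        import numpy as np

        def split_polarization_mosaic(raw):
            # Deinterleave a repeating 2x2 polarizer mosaic into four quarter-resolution images.
            # Assumed cell layout (illustrative only): [[0 deg, 45 deg], [90 deg, -45 deg]].
            raw = np.asarray(raw)
            return {
                "0":   raw[0::2, 0::2],
                "45":  raw[0::2, 1::2],
                "90":  raw[1::2, 0::2],
                "-45": raw[1::2, 1::2],
            }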
  • FIG. 5 is a top-down view of an alternative arrangement of polarized subfilters 330 (e.g., individually polarized elements) that may overlay the pixels of a digital image sensor.
  • each element of the filter array marked with an “X” indicates a group of polarized subfilters 330 arranged, for example, in the two-by-two grid previously mentioned.
  • Each element of the filter that is unmarked has no polarized subfilters. Rather, light impinges upon pixels beneath these portions of the filter without any filtering at all.
  • the filter shown in FIG. 5 is a partial filter; certain groups of pixels sense polarized light while others are not so filtered and sense unpolarized light.
  • the groups of polarized subfilters 330 and the non-filtered sections of the filter generally alternate.
  • Other patterns of polarized and nonpolarized areas may be formed.
  • a filter may have half as many polarized areas as nonpolarized or twice as many. Accordingly, the pattern shown in FIG. 5 is illustrative only.
  • the polarized filter may be configured to provide three different light levels, designated A, B, and C for purposes of this discussion.
  • Each pixel or pixel group may be placed beneath or otherwise adjacent to a portion of a filter that is polarized to permit light of levels A, B, or C therethrough.
  • the image sensor when taken as a whole, could be considered to simultaneously capture three images that are not physically offset (or are very minimally physically offset, such as by the width of a few pixels) but have different luminance levels.
  • the varying light levels may be achieved by changing the polarization patterns and/or degree of polarization.
  • any number of different, distinct levels of light may be created by the filter, and thus any number of images having varying light levels may be captured by the associated sensor.
  • embodiments may create multiple images that may be employed to create a single high dynamic range image, as described in more detail below.
  • each group of pixels beneath a group of polarized subfilters 330 receives less light than a group of pixels beneath an unpolarized area of the filter.
  • the unpolarized groups of pixels may record an image having a first light level while the polarized pixel groups record the same image but at a darker light level.
  • the images are recorded from the same vantage and at the same time since the pixels are interlaced with one another.
  • the embodiment trades resolution for the ability to capture an image at two different light levels simultaneously and without displacement.
  • the image having a higher light level (e.g., the “first image”) and the image having a lower light level (e.g., the “second image”) may be used to create high dynamic range (HDR) images.
  • HDR images are generally created by overlaying and merging images that are captured at near-identical or identical locations, temporally close to one another, and at different lighting levels. The variance in lighting level is typically achieved by changing the exposure time of the image capture device. Since the present embodiment effectively captures two images at different exposure levels, it should be appreciated that these images may be combined to create HDR images.
  • the polarization pattern of the filter may be varied to effectively create three or more images at three or more total exposures (for example, by suitable varying and/or interleaving various groups of polarized subfilters and unpolarized areas). This may permit an even wider range of images to be combined to create a HDR image. “Total exposure” may be achieved by varying any, all, or a combination of f-stop, exposure time and filtering.
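  • A minimal sketch of merging the brighter (unpolarized) and darker (polarized) interlaced captures into one high dynamic range result; the attenuation factor of the polarized path and the blending weights are assumptions for illustration, not values from the patent.

        import numpy as np

        def merge_hdr(bright, dark, attenuation=0.5):
            # bright: image from unpolarized pixels (higher effective exposure), values in [0, 1]
            # dark:   image from polarized pixels, scaled up by the assumed attenuation factor
            bright = np.asarray(bright, dtype=float)
            dark = np.asarray(dark, dtype=float) / attenuation   # compensate the exposure difference
            # Trust the darker capture where the bright one approaches saturation, and vice versa.
            w = np.clip((bright - 0.8) / 0.2, 0.0, 1.0)
            return (1.0 - w) * bright + w * dark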
  • these images may be combined to create a variety of effects.
  • the variance in light intensity between images may cause certain objects in the first image to be more detailed than those objects are in the second image, and vice versa.
  • If the images show a person standing in front of a sunlit window, the person may be dark and details of the person may be lost in the first image, since the light from the window will overpower the person.
  • In the second image, the person may be more visible and detailed, but the window may appear dull and/or details of the space through the window may be lost.
  • the two images may easily be combined, such that the portion of the second image showing the person is overlaid into the identical space in the first image.
  • a single image with both the person and background in detail may be created. Because there is little or no pixel offset, these types of substitutions may be relatively easily accomplished.
  • HDR images may be further enhanced.
  • Many HDR images suffer from “halos” or bleeding effects around objects. This is typically caused by blending together the multiple images into the HDR image and attempting to normalize for abrupt changes in color or luminance caused by the boundary between an object and the background, or between two objects. Because traditional images lack depth and surface orientation data, they cannot distinguish between objects. Thus, the HDR process creates a visual artifact around high contrast boundaries.
  • embodiments disclosed herein can effectively map objects in space and obtain surface information about these objects, boundaries of the objects may be determined prior to the HDR process.
  • the HDR process may be performed not on a per-image basis, but on a per-object basis within the image.
  • backgrounds may be treated and processed separately from any objects in the image.
  • the halo/bleeding effects may be reduced or removed entirely, since the HDR process is not attempting to blend color and/or luminance across two discrete elements in an image.
  • certain embodiments may employ a polarizing filter having non-checkered and/or asymmetric patterns.
  • a polarizing filter may be associated with an image sensor such that the majority of pixels receive polarized light. This permits the image sensor to gather additional surface normal data, but at a cost of luminance, and potentially chrominance, information.
  • Non-polarized sections of the filter may overlay a certain number of pixels. For example, every fifth, 10th, or 20th pixel, and so on, may receive unpolarized light. In this fashion, the data captured by the pixels receiving unpolarized light may be used to estimate and enhance the luminance/chrominance information captured by the pixels underlying polarized portions of the filter.
  • the unpolarized image data may be used to correct the polarized image data.
  • An embodiment may, for example, create a curve fitted to the image data captured by the unpolarized pixels. This curve may be applied to data captured by pixels underlying the polarized sections of the filter, and the corresponding polarized data may be fitted to the curve. This may improve the luminance and/or chrominance of the overall image.
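  • One way to realize that curve-fitting idea, sketched under the assumption that the correction can be modeled as a low-order polynomial mapping learned from nearby unpolarized pixels:

        import numpy as np

        def fit_luminance_correction(polarized_samples, unpolarized_samples, degree=2):
            # Fit a polynomial that maps values recorded by polarized pixels onto the
            # levels recorded by nearby unpolarized pixels.
            coeffs = np.polyfit(polarized_samples, unpolarized_samples, degree)
            return np.poly1d(coeffs)

        # correction = fit_luminance_correction(pol_vals, unpol_vals)
        # corrected = correction(polarized_image)   # apply the fitted curve to the polarized data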
  • the polarization filter may vary in any fashion that facilitates processing image data captured by the pixels of the sensor array.
  • the following array shows a sample polarization filter, where the first letter indicates if the pixel receives polarized or clear/unpolarized light (designated by a “P” or a “C,” respectively) while the second letter indicates the wavelength to which the particular pixel is sensitive/filtered (“R” for red, “G” for green and “B” for blue):
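  • The array itself does not survive in this text; the following four-by-four layout is one illustrative possibility consistent with the described notation, not necessarily the pattern shown in the original document.

        PR  CG  PB  CG
        CG  PR  CG  PB
        PB  CG  PR  CG
        CG  PB  CG  PR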
  • the exact pattern may be varied to enhance, facilitate, and/or speed up image processing.
  • the polarization pattern may vary according to the purpose of the image sensor, the software with which it is used, and the like. It should be appreciated that the foregoing array is an example only. Checkerboard or other repeating patterns need not be used; certain embodiments may use differing patterns depending on the end application or result desired.
  • the ability to extract both surface detail and depth information for objects in an image can extend the performance of the three-dimensional imaging apparatus in a number of ways.
  • the depth information may be used to derive size information for the objects in the image.
  • the three-dimensional imaging apparatus may be capable of differentiating a large object positioned far away from the camera from a small object positioned close to the camera and having the same shape as the large object.
  • the three-dimensional imaging apparatus may be used for gaze detection or eye tracking.
  • Existing gaze detection techniques require transmitting infrared (IR) waves to an individual's retina and sensing the reflected infrared waves with a camera to determine the location of the individual's pupil and lens.
  • Such techniques can be inaccurate, since an individual's retina is located behind the lens and cornea and may not be readily detectable.
  • This technique can be improved by utilizing surface detail and depth information to measure the flatness of an individual's lens and determine the location of the lens with respect to the individual's eye.
  • the three-dimensional imaging apparatus may be used to enhance imaging of partially obscured scenes.
  • the three-dimensional sensing device may determine the surface detail associated with an unobscured object, and save this information, for example, to a memory device.
  • the saved surface information can be retrieved and superimposed into images in which the object is otherwise obscured.
  • One example of such a situation is when an object is partially obscured by fog. In this situation, the image can be artificially enhanced by superimposing the object into the scene using the saved surface detail information.
  • the three-dimensional imaging apparatus may be used to artificially reposition the light source within an image.
  • the intensity of the light source may be adjusted to brighten or darken objects in the image.
  • the surface detail information, which can include the curvature and orientation of the surfaces of an object, may be used to calculate the position and/or intensity of the light source in the image. Accordingly, the light source may be virtually moved to a different position, or the intensity of the light source can be changed. Additionally, the surface detail information may further be used to calculate shadow positions that would naturally appear due to repositioning or changing the intensity of a light source.
  • the three-dimensional imaging apparatus may be used to alter the balance of light within the image from different light sources.
  • the image may include light from two different light sources, including, but not limited to, natural light, fluorescent light, incandescent light, and so on and so forth.
  • the different types of light may have different polarization signatures as they are reflected off of surfaces of objects. This may allow for calculating both the position of the light sources, as discussed above, as well as for identifying the light sources that are impacting on surfaces of objects in an image. Accordingly, the intensity of each light source, as well as the positions of the light sources, may be manipulated by a user to balance the light sources in the image according to the user's preference.
  • the three-dimensional imaging apparatus may be configured to remove unwanted visual effects caused by light sources within the image, such as, but not limited to, glare and gloss.
  • Glare and gloss may be caused by the presence of a large luminance ratio between the surface of a captured object and the glare source, which may be sunlight or an artificial light source.
  • the three-dimensional imaging apparatus may use the surface detail information to determine the position and/or intensity of the light source in the image, and alter the parameters of the light source to remove or reduce the amount of glare and gloss caused by the light source.
  • the light source may be virtually repositioned or dimmed so that the amount of light impacting on a surface is reduced.
  • the three-dimensional imaging apparatus may further be used to artificially reposition and/or remove objects within an image.
  • the surface detail information may be used to calculate the position and/or intensity of the light source in the image. Once the position of the light source is obtained, the imaging apparatus may calculate shadow positions of the objects after they have been moved based on the calculated position of the light source, as well as the surface detail information.
  • the depth information may be used to calculate the size of objects within the image, so that the objects may be appropriately sized as they are virtually positioned further or closer to the camera.
  • the three-dimensional imaging apparatus may be used to artificially modify the shape of objects within the image.
  • the surface detail information may be calculated and modified according to various parameters input by a user.
  • the surface detail information may be used to calculate the position and/or intensity of the light source within the image, and corresponding shadow positions may be calculated according to the modified surface orientations.
  • the three-dimensional imaging apparatus may be used for recognizing facial gestures.
  • Facial gestures may include, but are not limited to, smiling, grimacing, frowning, winking, and so on and so forth. In one embodiment, this may be accomplished by detecting the orientation of various facial muscles using surface geometry data, such as the mouth, eyes, nose, forehead, cheeks, and so on, and correlating the detected orientations with various gestures. The gestures may then be correlated to various emotions associated with the gestures to determine the emotion of an individual in an image.
  • the three-dimensional imaging apparatus may be used to scan an object, for example, to create a three-dimensional model of the object. This may be accomplished by taking multiple photographs or video of the object while rotating the object. As the object is rotated, the image sensing device may capture more of the surface geometry and use the geometry to create a three-dimensional model of the object.
  • multiple photographs or video may be taken while the image sensing device is moved relative to the object, and used to construct a three-dimensional model of the objects within the captured image(s). For example, a user may take video of a home while walking through the home and the image sensing device could use the calculated depth and surface detail information to create a three-dimensional model of the home. The depth and surface detail information of multiple photographs or video stills may then be matched to construct a seamless composite three-dimensional model that combines the surface detail and depth from each of the photos or video.
  • the three-dimensional imaging apparatus may be used to correct geometric distortions in a two or three-dimensional image. For example, if an image is taken using a fish-eye lens, the image processing module may warp the image so that it appears undistorted. In one embodiment, this may be accomplished by using the surface detail information to recognize various objects in the distorted image and warping the image based on saved surface detail information of the undistorted object. Accordingly, the three-dimensional imaging apparatus may recognize an object in the image, such as a table, using image recognition techniques, and calculate the distance of the table in the captured scene from the imaging apparatus. The three-dimensional imaging apparatus may then substitute a saved image of the object from another image, position the image of the object into the image of the scene at the calculated depth, and modify the image of the object so that it is properly scaled within the image of the scene.
  • the image processing module may warp the image so that it appears undistorted. In one embodiment, this may be accomplished by using the surface detail information to recognize various objects in the distorted image and warping the image based on saved surface detail information of
  • the surface normal data may be used to construct a stereo disparity map.
  • Most stereo disparity mapping systems look for repeating patterns of pixels and, based on the offset of the repeating pattern between images, assign the pattern a particular distance from the sensor. This may be crude since objects may be differently aligned with respect to one another when the image is captured by a first sensor and a second offset sensor. That is, the offset of the first and second sensors may cause objects to appear to have a different spatial relationship to one another, and thus the pixels representing those objects may vary between images.
  • the surface normals of each object should appear the same to each image sensor, so long at the sensors are coplanar.
  • the stereo mapping may be enhanced and refined.
  • the surface normals may be compared. If the surface normals differ, then the pixels may represent different objects in each image and depth information may be difficult or impossible to assign. If the surface normals match, then the pixels represent the same object(s) in each image and a depth may be more definitively determined and assigned.

Abstract

Embodiments may take the form of three-dimensional image sensing devices configured to capture an image including one or more objects. In one embodiment, the three-dimensional image sensing device includes a first imaging device configured to capture a first image and extract depth information for the one or more objects. Additionally, the image sensing device includes a second imaging device configured to capture a second image and determine an orientation of a surface of the one or more objects.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit under 35 U.S.C. §119(e) of U.S. provisional patent application No. 61/386,865, filed Sep. 27, 2010 and titled “Image Capture Using Three-Dimensional Reconstruction,” the disclosure of which is hereby incorporated herein in its entirety.
  • BACKGROUND
  • I. Technical Field
  • The disclosed embodiments relate generally to image sensing devices and, more particularly, to image sensing devices that utilize three-dimensional reconstruction to form a three-dimensional image.
  • II. Background Discussion
  • Existing three-dimensional image capture devices, such as digital cameras and video recorders, can derive limited three-dimensional visual information for objects located within a captured area. For example, some imaging devices can extract approximate depth information relating to objects located within the captured area, but are incapable of obtaining detailed geometric information relating to the surfaces of these objects. Such sensors may be able to approximate the distances of objects within the captured area, but cannot accurately reproduce the three-dimensional shape of the objects. Alternatively, other imaging devices can obtain and reproduce surface detail information for objects within the captured area, but are incapable of extracting depth information. Accordingly, these sensors may be incapable of differentiating between a small object positioned close to the sensor and a large object positioned far away from the sensor.
  • SUMMARY
  • Embodiments described herein relate to systems, apparatuses and methods for capturing a three-dimensional image using one or more dedicated imaging devices. One embodiment may take the form of a three-dimensional imaging apparatus configured to capture at least one image including one or more objects, comprising: a first sensor for capturing a polarized image, the first sensor including a first imaging device and a polarized filter associated with the first imaging device; a second sensor for capturing a first non-polarized image; a third sensor for capturing a second non-polarized image; and at least one processing module for deriving depth information for the one or more objects utilizing at least the first non-polarized image and the second non-polarized image, the processing module further operative to combine the polarized image, the first non-polarized image, and the second non-polarized image to form a composite three-dimensional image.
  • Another embodiment may take the form of a three-dimensional imaging apparatus configured to capture at least one image including one or more objects, comprising: a first sensor for capturing a polarized chrominance image and determining surface information for the one or more objects, the first sensor including a color imaging device and a polarized filter associated with the color imaging device; a second sensor for capturing a first luminance image; a third sensor for capturing a second luminance image; and at least one processing module for deriving depth information for the one or more objects utilizing at least the first luminance image and the second luminance image and combining the polarized chrominance image, the first luminance image, and the second luminance image to form a composite three-dimensional image utilizing the surface information and the depth information.
  • Still another embodiment may take the form of a method for capturing at least one image of an object, comprising: capturing a polarized image of the object; capturing a first non-polarized image of the object; capturing a second non-polarized image of the object; deriving depth information for the object from at least the first non-polarized image and the second non-polarized image; determining a plurality of surface normals for the object, the plurality of surface normals derived from the polarized image; and creating a three-dimensional image from the depth information and the plurality of surface normals.
  • This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Other features, details, utilities, and advantages will be apparent from the following more particular written description of various embodiments, as further illustrated in the accompanying drawings and defined in the appended claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1A is a functional block diagram that illustrates certain components of one embodiment of a three-dimensional imaging apparatus;
  • FIG. 1B is a close-up view of one embodiment of the second imaging device shown in FIG. 1A;
  • FIG. 1C is a close-up view of another embodiment of the second imaging device shown in FIG. 1A;
  • FIG. 1D is a close-up view of another embodiment of the second imaging device shown in FIG. 1A;
  • FIG. 2 is a functional block diagram that illustrates certain components of another embodiment of a three-dimensional imaging apparatus;
  • FIG. 3 is a functional block diagram that illustrates certain components of another embodiment of a three-dimensional imaging apparatus;
  • FIG. 4 depicts a sample polarization filter that may be used in accordance with embodiments discussed herein, including the imaging apparatuses of FIGS. 1A-3; and
  • FIG. 5 depicts a second sample polarization filter that may be used in accordance with embodiments discussed herein, including the imaging apparatuses of FIGS. 1A-3.
  • DETAILED DESCRIPTION
  • One embodiment may take the form of a three-dimensional imaging apparatus, including a first and second imaging device. The first imaging device may have two unique imaging devices that may be used in concert to derive depth data for objects within the field of detection of the sensors. Alternatively, the first imaging device may have a single imaging device that provides depth data. The second imaging device may be at least partially overlaid with a polarizing filter in order to obtain polarization data of light impacting the device, and thus the surface orientation of any objects reflecting such light.
  • The first imaging device may derive approximate depth information relating to objects within its field of detection and supply the depth information to an image processing device. The second imaging device may capture surface detail information relating to objects within its field of detection and supply the surface detail information to the image processing device. The image processing device may combine the depth information with the surface detail information in order to create a three-dimensional image that includes both surface detail and accurate depth information for objects in the image.
  • In the following discussion of illustrative embodiments, the term “image sensing device” includes, without limitation, any electronic device that can capture still or moving images. The image sensing device may utilize analog or digital sensors, or a combination thereof, for capturing the image. In some embodiments, the image sensing device may be configured to convert or facilitate converting the captured image into digital image data. The image sensing device may be hosted in various electronic devices including, but not limited to, digital cameras, personal computers, personal digital assistants (PDAs), mobile telephones, or any other devices that can be configured to process image data. Sample image sensing devices include charge-coupled device (CCD) sensors, complementary metal-oxide-semiconductor sensors, infrared sensors, light detection and ranging sensors, and the like. Further, the image sensing devices may be sensitive to a range of colors and/or luminances, and may employ various color separation mechanisms such as Bayer arrays, Foveon X3 configurations, multiple CCD devices, dichroic prisms and the like.
  • FIG. 1A is a functional block diagram of one embodiment of a three-dimensional imaging apparatus for capturing and storing image data. In one embodiment, the three-dimensional imaging apparatus may be a component within an electronic device. For example, the three-dimensional imaging apparatus may be employed in a standalone digital camera, a laptop computer, a media player, a mobile phone, and so on and so forth.
  • As shown in FIG. 1A, the three-dimensional imaging apparatus 100 may include a first imaging device 102, a second imaging device 104, and an image processing module 106. The first imaging device 102 may include a first imaging device and the second imaging device 104 may include a second imaging device and a polarizing filter 108 associated with the second imaging device. As will be further discussed below, the first imaging device 102 may be configured to derive approximate depth information relating to objects in the image, and the second imaging device 104 may be configured to derive surface orientation information relating to objects in the image.
  • In one embodiment, the fields of view 112, 114 of the first and second imaging devices 102, 104 may be offset so that the received images are slightly different. For example, the field of view 112 of the first imaging device 102 may be vertically, diagonally, or horizontally offset from the field of view 114 of the second imaging device 104, or may be closer to or further away from a reference plane or point. As will be further discussed below, offsetting the fields of view 112, 114 of the first and second imaging devices may provide data useful for generating stereo disparity maps, as well as extracting depth information. However, in other embodiments, the fields of view 112, 114 of the first and second imaging devices may be substantially the same.
  • The first and second imaging devices 102, 104 may each be formed from an array of light-sensitive pixels. That is, each pixel of the imaging devices may detect at least one of the various wavelengths that make up visible light. The signal generated by each such pixel may vary depending on the wavelength of light impacting it so that the array may thus reproduce a composite image of the object. In one embodiment, the first and second imaging devices 102, 104 may have substantially identical pixel array configurations. For example, the first and second imaging devices may have the same number of pixels, the same pixel aspect ratio, the same arrangement of pixels, and/or the same size of pixels. However, in other embodiments, the first and second imaging devices may have different numbers of pixels, pixel sizes, and/or layouts. For example, in one embodiment, the first imaging device 102 may have a smaller number of pixels than the second imaging device 104, or vice versa, or the arrangement of pixels may be different between the sensors.
  • The first imaging device 102 may be configured to capture a first image and process the image to detect depth or distance information relating to objects in the image. For example, the first imaging device 102 may be configured to derive an approximate relative distance of an object 110 by measuring properties of electromagnetic waves as they are reflected off or scattered by the object and captured by the first imaging device. In one embodiment, the first imaging device may be a Light Detection And Ranging (LIDAR) sensor. The LIDAR sensor may emit laser pulses that are reflected off of the surfaces of objects in the image and detect the reflected signal. The LIDAR sensor may then calculate the distance of an object from the sensor by measuring the time delay between transmission of a laser pulse and the detection of the reflected signal. Other embodiments may utilize other types of depth-detection techniques, such as infrared reflection, RADAR, laser detection and ranging, and the like.
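  • The time-of-flight calculation described above reduces to halving the round-trip path length. The following sketch is offered only as an illustration of that calculation and is not part of the disclosed embodiments; the example delay value is assumed.

```python
# Illustrative sketch (not part of the disclosure): estimating distance from a
# time-of-flight measurement, assuming the round-trip delay of a laser pulse
# has already been measured in seconds.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(delay_s: float) -> float:
    """Return the approximate one-way distance to a reflecting surface.

    The pulse travels to the object and back, so the one-way distance is
    half of the total path length c * delay.
    """
    return SPEED_OF_LIGHT * delay_s / 2.0

# Example: a 66.7 ns round-trip delay corresponds to roughly 10 m.
print(distance_from_round_trip(66.7e-9))  # ~10.0
```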
  • Alternatively, a stereo disparity map may be generated to derive depth or distance information relating to objects present in the image. In one embodiment, a stereo disparity map may be formed from the first image captured by the first imaging device and a second image captured by the second imaging device. Various methods and processes for creating stereo disparity maps from two offset images are known to those skilled in the art and thus are not discussed further herein. Generally, the stereo disparity map is a depth map in which depth information for objects shown in the images is derived from the offset first and second images. For example, the second image may include some or all of the objects captured in the first image, but with the position of the objects being shifted in one direction (typically, although not necessarily, horizontally). This shift may be measured and used to calculate the distance of the objects from the first and second imaging devices.
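  • As an illustration only (not part of the disclosure), the sketch below shows one simple way a stereo disparity map could be computed from two horizontally offset grayscale images, and how the measured shift converts to distance. The block size, search range, focal length, and baseline named here are assumed parameters.

```python
# Illustrative sketch (not part of the disclosure): a naive block-matching
# disparity search on two horizontally offset grayscale images, followed by
# conversion of disparity to depth.

import numpy as np

def disparity_map(left: np.ndarray, right: np.ndarray,
                  block: int = 5, max_disp: int = 32) -> np.ndarray:
    """Assign each block in `left` the horizontal offset of the best-matching
    block in `right`, using sum of absolute differences as the match cost."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

def depth_from_disparity(disp: np.ndarray, focal_px: float,
                         baseline_m: float) -> np.ndarray:
    """Depth Z = f * B / d; zero-disparity pixels are left at infinity."""
    with np.errstate(divide="ignore"):
        return np.where(disp > 0, focal_px * baseline_m / disp, np.inf)
```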
  • The second imaging device 104 may be configured to capture a second image and derive detailed surface information for objects in the image. As shown in FIG. 1A, in one embodiment, a polarizing filter 108 may be positioned between the second imaging device and an object 110, such that light reflected off the object passes through the polarizing filter to produce polarized light. The polarized light is then transmitted by the filter 108 to the second imaging device 104. The second imaging device 104 may be any electronic sensor capable of detecting various wavelengths of light, such as those commonly used in digital cameras, digital video cameras, mobile telephones and personal digital assistants, web cameras, and so on and so forth. For example, the second imaging device 104 may be, but is not limited to, a charge-coupled device (CCD) imaging device or a complementary metal-oxide-semiconductor (CMOS) sensor.
  • In one embodiment, a polarizing filter may overlay the second imaging device. As shown in FIG. 1B, the polarizing filter 108 may include an array of polarizing subfilters 120. Each of the polarizing subfilters 122 within the array may overlay one or more pixels 124 of the second imaging device 104. In one embodiment, the polarizing filter 108 may be overlaid over the second imaging device 104 so that each polarizing subfilter 122 in the array 120 is aligned with a corresponding pixel 124. The polarizing subfilters 122 may have different types of polarizations. For example, a first polarizing subfilter may have a horizontal polarization, a second subfilter may have a vertical polarization, a third may have +45 degree polarization, a fourth may have a −45 degree polarization, and so on and so forth. In some embodiments, left and right-hand circular polarizations may be used. Accordingly, the polarized light that is transmitted from the polarizing filter 108 to the second imaging device 104 may be polarized differently for some of the pixels than for others.
  • Another embodiment, shown in FIG. 1C, may include a microlens array 130 overlaying the polarization filter 108. Each of the microlenses 132 in the microlens array 130 may overlay one or more polarizing subfilters 122 to focus polarized light onto a corresponding pixel 124 of the second imaging device. The microlenses 132 in the array 130 may each be configured to refract light impacting on the second imaging device, as well as transmit light to an underlying polarizing subfilter 122. Accordingly, each microlens 132 may correspond to one of the pixels 124 of the second imaging device 104. The microlenses 132 can be formed from any suitable material for transmitting and diffusing light through the light guide, including plastic, acrylic, silica, glass, and so on and so forth. Additionally, the light guide may include combinations of reflective material, highly transparent material, light absorbing material, opaque material, metallic material, optic material, and/or any other functional material to provide extra modification of optical performance. In another embodiment, shown in FIG. 1D, the microlenses 134 of the microlens array 136 may be polarized. In this embodiment, the polarized microlens array 136 may overlay the pixels 124 of the second imaging device 104 so that polarized light is focused onto the pixels 124 of the second imaging device.
  • In one embodiment, the microlenses 134 may be convex and have a substantially rounded configuration. Other embodiments may have different configurations. For example, in one embodiment, the microlenses 134 may have a conical configuration, in which the top end of each microlens is pointed. In other embodiments, the microlenses 134 may define truncated cones, in which the tops of the microlenses form a substantially flat surface. Additionally, in some embodiments, the microlenses 134 may be concave surfaces, rather than convex. As is known, the microlenses may be formed using a variety of techniques, including laser-cutting techniques, and/or micro-machining techniques, such as diamond turning. After the microlenses 134 are formed, an electrochemical finishing technique may be used to coat and/or finish the microlenses to increase their longevity and/or enhance or add any desired optical properties. Other methods for forming the microlenses may entail the use of other techniques and/or machinery, as is known.
  • Unpolarized light that is reflected off of the surfaces of objects in the image may be fully or partially polarized according to Fresnel's laws. Generally, the polarization may be correlated to the plane angle of incidence on the surface, as well as to the physical properties of the material. For example, light reflecting off highly reflective materials, such as polished metal, may be less polarized than light reflecting off of a dull surface. In one embodiment, light that is reflected off the surfaces of objects in the image may be passed through the array of polarization filters. The resulting polarized light may be captured by the pixels of the second imaging device so that each such pixel of the second imaging device receives light only if that light is polarized according to the polarization scheme of its corresponding filter. The second imaging device 104 may then measure the polarization of the light impacting on each pixel and derive the surface geometry of the object. For example, the second imaging device 104 may determine the orientation and/or curvature of the surface of an object. In one embodiment, the orientation and/or curvature of the surface may be determined for each pixel of the second imaging device 104 and combined to obtain the surface geometry for all of the surfaces of the object.
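  • Purely by way of illustration (not part of the disclosure), the sketch below shows how the intensities recorded behind four differently polarized subfilters can be reduced to a degree and angle of linear polarization using the linear Stokes parameters. Translating those two quantities into a full surface normal additionally requires a Fresnel reflection model and an assumed refractive index, which is omitted here.

```python
# Illustrative sketch (not part of the disclosure): recovering the angle and
# degree of linear polarization from four pixel intensities measured behind
# horizontal (0 deg), vertical (90 deg), +45 deg and -45 deg subfilters.

import math

def polarization_from_quad(i_0: float, i_90: float,
                           i_45: float, i_m45: float):
    """Return (degree_of_linear_polarization, angle_of_linear_polarization).

    Uses the linear Stokes parameters:
        S0 = (I0 + I90 + I45 + I-45) / 2   (total intensity)
        S1 = I0 - I90
        S2 = I45 - I-45
    """
    s0 = (i_0 + i_90 + i_45 + i_m45) / 2.0
    s1 = i_0 - i_90
    s2 = i_45 - i_m45
    dolp = math.hypot(s1, s2) / s0 if s0 > 0 else 0.0
    aolp = 0.5 * math.atan2(s2, s1)   # radians; constrains the normal's azimuth
    return dolp, aolp                 # dolp constrains the zenith angle
```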
  • The first and second images may be transmitted to the image processing module 106, which may combine the first image captured by and transmitted from the first imaging device 102 with the second image captured by and transmitted from the second imaging device 104, to output a composite three-dimensional image. This may be accomplished by aligning the first and second images and overlaying one of the images on top of the other using a variety of techniques, including warping the first and second images, selectively cropping at least one of these images, using calibration data for the image processing module, and so on and so forth. As discussed above, the first image may supply depth information relating to the objects in the image, while the second image may supply surface geometry information for the objects in the image. Accordingly, the combined three-dimensional image may include accurate depth information for each object, while also providing accurate object surface detail.
  • In one embodiment, the first image supplying the depth information may have a lower or coarser resolution (e.g., lower pixel count per unit area), than the second image supplying the surface geometry information. In this embodiment, the composite three-dimensional image may include high resolution surface detail for objects in the image, but the amount of overall processing by the image processing module may be reduced due to the lower resolution of the first image. As discussed above, other embodiments may produce first and second images having substantially the same resolution, or the first image supplying the depth information may have a higher resolution than the second image.
  • Another embodiment of a three-dimensional imaging apparatus 200 is shown in FIG. 2. The imaging apparatus 200 generally includes a first chrominance sensor 202, a luminance sensor 204, a second chrominance sensor 206, and an image processing module 208. The luminance sensor 204 may be configured to capture a luminance component of incoming light. Additionally, each of the chrominance sensors 202, 206 may be configured to capture color components of incoming light. In one embodiment, the chrominance sensors 202, 206 may sense the R (Red), G (Green), and B (Blue) components of an image and process these components to derive chrominance information. Other embodiments may be configured to sense other color components, such as yellow, cyan, magenta, and so on. Further, in some embodiments, two luminance sensors and a single chrominance sensor may be used. That is, certain embodiments may employ a first luminance sensor, a first chrominance sensor and a second luminance sensor, such that a stereo disparity (e.g., stereo depth) map may be generated based on the offsets of the two luminance images. Each luminance sensor captures one of the two luminance images in this embodiment. Further, in such an embodiment, the chrominance sensor may be used to capture color information for a picture, while one or both luminance sensors capture luminance information. In this embodiment, both of the luminance sensors may be overlaid, fitted, or otherwise associated with one or more polarizing filters to receive and capture surface normal information for a surface, as described in more detail herein. Multiple luminance sensors with polarizing filters may be used, for example, in low light conditions where chrominance information may be lost or muted.
  • “Chrominance sensors” 202, 206 may be implemented in a variety of fashions and may sense/capture more than just chrominance. For example, the chrominance sensor(s) 202, 206 may be implemented as a Bayer array, an RGB sensor, a CMOS sensor, and so on and so forth. Accordingly, it should be appreciated that a chrominance sensor may also capture luminance information; chrominance is typically derived from the RGB sensor data.
  • Returning to an embodiment having two chrominance sensors 202, 206 and a single luminance sensor 204, the first chrominance sensor 202 may take the form of a first color imaging device. The luminance sensor may take the form of a luminance imaging device that is overlaid by a polarizing filter. The second chrominance sensor 206 may take the form of a second color imaging device. In one embodiment, the luminance sensor 204 and two chrominance sensors 202, 206 may be separate integrated circuits. However, in other embodiments, the luminance and chrominance sensors may be formed on the same circuit and/or formed on a single board or other element. In alternative embodiments, the polarizing filter 210 may be placed over either of the chrominance sensors instead of (or in addition to) the luminance sensor.
  • As shown in FIG. 2, the polarizing filter 210 may be positioned between the luminance sensor 204 and an object 211, such that light reflected off the object passes through the polarizing filter and impacts the corresponding luminance sensor. The luminance sensor 204 may be any electronic sensor capable of detecting various wavelengths of light, such as those commonly used in digital cameras, digital video cameras, mobile telephones and personal digital assistants, web cameras, and so on and so forth.
  • As discussed above with respect to FIGS. 1A and 1B, the luminance and chrominance sensors 202, 204, 206 may be formed from an array of color-sensitive pixels. The pixel arrangement may vary between sensors or may be identical, in a manner similar to that previously discussed.
  • In one embodiment, respective color filters may overlay the first and second color sensors and allow the sensors to capture the color portions of a sensed image as chrominance images. Similarly, an additional filter may overlay the luminance sensors and allow the imaging device to capture the luminance portion of a sensed image as a luminance image. The luminance image, along with the chrominance images, may be transmitted to the image processing module 208. As will be further described below, the image processing module 208 may combine the luminance image captured by and transmitted from the luminance sensor 204 with the chrominance images captured by and transmitted from the chrominance sensors, to output a composite image.
  • It should be appreciated that the luminance of an image may be expressed as a weighted sum of red, green and blue wavelengths of the image, in the following manner:

  • L = 0.59G + 0.3R + 0.11B
  • Where L is luminance, G is detected green light, R is detected red light, and B is detected blue light. The chrominance portion of an image may be the difference between the full color image and the luminance image. Accordingly, the full color image may be the chrominance portion of the image combined with the luminance portion of the image. The chrominance portion may be derived by mathematically processing the R, G, and B components of an image, and may be expressed as two signals or a two dimensional vector for each pixel of an imaging device. For example, the chrominance portion may be defined by two separate components Cr and Cb, where Cr may be proportional to detected red light less detected luminance, and where Cb may be proportional to detected blue light less detected luminance. In some embodiments, the first and second chrominance sensors 202, 206 may be configured to detect red and blue light and not green light, for example, by covering pixel elements of the color imaging devices with a red and blue filter array. This may be done in a checkerboard pattern of red and blue filter portions. In other embodiments, the filters may include a Bayer-pattern filter array, which includes red, blue, and green filters. Alternatively, the filter may be a CYGM (cyan, yellow, green, magenta) or RGBE (red, green, blue, emerald) filter.
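  • As an illustration only (not part of the disclosure), the following sketch applies the weighted sum above to split an RGB image into a luminance component and two chrominance components, and to recombine them. The 0.5 scale factors on Cr and Cb are assumed values chosen purely to keep the components in a convenient range.

```python
# Illustrative sketch (not part of the disclosure): splitting an RGB image into
# the luminance and chrominance components described above, using the weights
# given in the text (0.3, 0.59, 0.11).

import numpy as np

def split_luma_chroma(rgb: np.ndarray):
    """rgb: float array of shape (H, W, 3) with channels R, G, B in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    luma = 0.3 * r + 0.59 * g + 0.11 * b   # L = 0.59G + 0.3R + 0.11B
    cr = 0.5 * (r - luma)                  # proportional to R minus luminance
    cb = 0.5 * (b - luma)                  # proportional to B minus luminance
    return luma, cr, cb

def recombine(luma, cr, cb):
    """Invert the split: recover R and B, then solve for G from the luma sum."""
    r = luma + cr / 0.5
    b = luma + cb / 0.5
    g = (luma - 0.3 * r - 0.11 * b) / 0.59
    return np.stack([r, g, b], axis=-1)
```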
  • As discussed above, the luminance portion of a color image may have a greater influence on the overall image resolution than the chrominance portions of a color image. In some embodiments, the luminance sensor 204 may be an imaging device that has a higher pixel count than that of the chrominance sensors 202, 206. Accordingly, the luminance image generated by the luminance sensor 204 may be a higher resolution image than the chrominance images generated by the chrominance sensors 202, 206. In other embodiments, the luminance image may be stored at a higher resolution or transmitted at higher bandwidth than the chrominance images.
  • In some embodiments, the fields of view of any two of the luminance and chrominance sensors may be offset so that the produced images are slightly different. As discussed above, the image processing module may combine the high resolution luminance image captured by and transmitted from luminance sensor 204 with the first and second chrominance images captured by and transmitted from the first and second chrominance sensors 202, 206 to output a composite three-dimensional image. As will be further discussed below, the image processing module 208 may use a variety of techniques to account for differences between the high-resolution luminance image and first and second chrominance images to form the composite three-dimensional image.
  • Depth information for the composite image may be derived from the two chrominance images. In this embodiment, the fields of view 212, 216 of the first and second chrominance sensors 202, 206 may be offset from one another and the image processing module 208 may be configured to compute depth information for objects in the image by comparing the first chrominance image with the second chrominance image. The pixel offsets may be used to form a stereo disparity map between the two chrominance images. As discussed above, the stereo disparity map may be a depth map in which depth information for objects in the images is derived from the offset first and second chrominance images.
  • In some embodiments, depth information for the composite image may be derived from the two chrominance images in conjunction with the luminance image. In this embodiment, the image processing module may further compare the luminance image with one or both of the chrominance images to form further stereo disparity maps between the luminance image and the chrominance images. Alternatively, the image processing module may be configured to refine the accuracy of the stereo disparity map generated initially using only the two chrominance sensors 202, 206.
  • Surface detail information may be derived from the luminance sensor 204. As previously mentioned, the luminance sensor 204 may include a luminance imaging device and an associated polarizing filter 210. The polarizing filter 210 may include an array of polarizing subfilters, with each of the polarizing subfilters within the array corresponding to a pixel of the luminance imaging device. In one embodiment, the polarizing filter may be overlaid over the luminance imaging device so that each polarizing subfilter in the array is aligned with a corresponding pixel. In some embodiments, the polarizing subfilters in the array may have different types of polarizations. However, in other embodiments, the polarizing subfilters in the array may have the same type of polarization.
  • Light reflected off the surfaces of objects in the image may be passed through the array of polarization subfilters. The resulting polarized light may be captured by the pixels of the luminance imaging device so that each pixel of the luminance imaging device may receive light that is polarized according to the polarization scheme of its corresponding subfilter. The luminance imaging device may then measure the polarization of the light impacting on the pixels and derive the surface geometry of the object. In one embodiment, the orientation and/or curvature of the surface may be determined for each pixel of the luminance imaging device and combined to obtain the surface geometry for all of the surfaces of the object 211.
  • As discussed above with respect to FIG. 1C, in some embodiments, the polarizing filter 210 may be overlaid by a corresponding microlens array 130 to focus the light onto the pixels of the luminance imaging device. In other embodiments, such as that shown in FIG. 1D, the microlens array 136 may be polarized, so that a separate polarizing filter overlaying the luminance imaging device is not needed.
  • The luminance and first and second chrominance images may then be transmitted to the image processing module. The image processing module 208 may combine the luminance image captured by and transmitted from the luminance imaging device 205, with the first and second chrominance images captured by and transmitted from the chrominance imaging devices 203, 207, to output a composite three-dimensional image. In one embodiment, this may be accomplished by warping the luminance and two chrominance images, such as to compensate for depth of field effects or stereo effects, and substantially aligning the images to form the composite three-dimensional image. Other techniques for aligning the luminance and chrominance images include selectively cropping at least one of these images by identifying fiducials in the fields of view of the first and second chrominance images and/or luminance images, or by using calibration data for the image processing module 208. As discussed above, the stereo disparity map generated between the two chrominance images may supply depth information relating to the objects in the image, while the luminance image supplies surface geometry information for the objects in the image. Accordingly, the combined three-dimensional image may include accurate depth information for each object, while also providing accurate object surface detail.
  • In one embodiment, the first and second chrominance images supplying the depth information may each have a lower pixel count than the luminance image supplying the surface geometry information. As discussed above, this may result in a composite three-dimensional image that has high resolution surface detail of objects in the image and approximated depth information. Accordingly, the amount of overall processing by the image processing module may be reduced due to the lower resolution of the chrominance images.
  • Each of the luminance and chrominance sensors can have a blind region due to a near field object 211 that may partially or fully obstruct the fields of view of the sensors. For example, the near field object may block the field of view of the sensors to prevent the sensors from detecting part or all of a background or a far field object that is positioned further from the sensors than the near-field object 211. In one embodiment, the chrominance sensors 202, 206 may be positioned such that the blind regions of the chrominance sensors do not overlap. Accordingly, chrominance information that is missing from one of the chrominance sensors due to a near field object may, in many cases, be captured by the other chrominance sensor of the three-dimensional imaging apparatus. The captured color information may then be combined with the luminance information from the luminance sensor and incorporated into the final image, as previously described. Due to the offset blind regions of the chrominance sensors 202, 206, stereo imaging artifacts may be reduced in the final image by ensuring that color information is supplied by at least one of the chrominance sensors where needed. In other words, color information for each of the pixels of the luminance sensor 204 may be supplied by at least one of the chrominance sensors 202, 206.
  • Still with respect to FIG. 2, the luminance sensor 204 may be positioned between the chrominance sensors 202, 206 so that the blind region of the luminance sensor may be between the blind regions of the first and second chrominance sensors. This configuration may prevent or reduce overlap between the blind regions of the first and second chrominance sensors, while also allowing for a more compact arrangement of sensors within the three-dimensional imaging apparatus. However, in other embodiments, the chrominance sensors may be positioned directly adjacent one another and the luminance sensor may be positioned on either side of the chrominance sensors, rather than in-between the sensors.
  • As alluded to above, other embodiments may utilize two luminance sensors and a single chrominance sensor positioned between the luminance sensors. The chrominance sensor may include a polarizing filter and a color imaging device associated with the polarizing filter. In this embodiment, the chrominance sensor may be configured to derive surface geometry information for objects in the image and the luminance images generated by the luminance sensors may be processed to extract depth information for objects in the image.
  • Another embodiment of a three-dimensional imaging apparatus 300 may include four or more sensors. As shown in FIG. 3, one embodiment may include two chrominance sensors 302, 306, two luminance sensors 304, 310, and an image processing module 308. Similar to the embodiment shown in FIG. 2, this embodiment may include two chrominance sensors 302, 306 positioned on either side of a first luminance sensor 304. In contrast to the embodiment shown in FIG. 2, however, the surface detail information may be supplied by a second luminance sensor 310 that includes a polarizing filter 312 and a luminance imaging device associated with the polarizing filter. In one embodiment, the second luminance sensor may be positioned on top of or below the first luminance sensor. In other embodiments, the second luminance sensor 310 may be horizontally or diagonally offset from the first luminance sensor 304. Additionally, the second luminance sensor 310 may be positioned in front of or behind the first luminance sensor 304. Each of the luminance and chrominance sensors generally (although not necessarily) interacts directly with the image processing module 308.
  • In this embodiment, depth information may be obtained by forming a stereo disparity map between the first luminance sensor 304 and the two chrominance sensors 302, 306, and the surface detail information may be obtained by the second luminance sensor 310. This embodiment may allow for generating a more accurate stereo disparity map, since the map may be formed from the luminance image generated by the first luminance sensor 304 and the two chrominance images generated by the chrominance sensors 302, 306, and is not limited to the data provided by the two chrominance images. This may allow for more accurate depth calculation, as well as for better alignment of the produced images to form the composite three-dimensional image. In another embodiment, the stereo disparity map may be generated from the two luminance images generated by the first and second luminance sensors 304, 310, as well as the two chrominance images generated by the chrominance sensors 302, 306.
  • As previously mentioned, embodiments discussed herein may employ a polarized filter 312 that is placed atop, above, or otherwise between a light source and an imaging sensor. For example, polarized filters are shown in FIG. 1C as a separate layer beneath microlenses and in FIG. 1D as integrated with microlenses. In any of the embodiments disclosed herein, the polarized filter 312 may be patterned in the manner shown in FIG. 4. That is, the polarization filter 312 may take the form of a set of individually polarized elements 314, each of which passes through a different polarization of light. As shown in FIG. 4, the individually polarized elements 314 may be vertically polarized 316, horizontally polarized 318, +45 degree polarized 320 and −45 degree polarized 322. These four individually polarized elements may be arranged in a two-by-two array in certain embodiments.
  • In such embodiments, each individually polarized element may overlay a single pixel (or, in some embodiments, a group of pixels). The pixel beneath the individually polarized element thus receives and senses light having a single polarity. As discussed above, the surface orientation of an object reflecting light onto a pixel may be determined through the polarization of the light impacting that pixel. Thus, each group of pixels (here, each group of four pixels) may cooperate to determine the surface orientation (e.g., surface normal) of a portion of an object reflecting light onto the group of pixels. Accordingly, resolution of an image may be traded in exchange for the ability to detect surface detail and curvature of objects.
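  • To illustrate how each two-by-two group of differently polarized pixels can be gathered for processing (an illustration only, not part of the disclosure), the sketch below separates a raw frame into four quarter-resolution planes. The particular tile layout used here is an assumption; the actual arrangement of subfilters may differ.

```python
# Illustrative sketch (not part of the disclosure): splitting a raw sensor
# frame behind a two-by-two polarizer mosaic into four quarter-resolution
# planes, one per polarizer orientation. The assumed tile layout is
# 0 deg / +45 deg on the top row and -45 deg / 90 deg on the bottom row.

import numpy as np

def split_polarizer_mosaic(raw: np.ndarray):
    """raw: (H, W) array with H and W even. Returns a dict of four planes."""
    return {
        0:   raw[0::2, 0::2],   # horizontally polarized pixels
        45:  raw[0::2, 1::2],   # +45 degree polarized pixels
        -45: raw[1::2, 0::2],   # -45 degree polarized pixels
        90:  raw[1::2, 1::2],   # vertically polarized pixels
    }

# Each aligned quadruple (planes[0][y, x], planes[90][y, x], planes[45][y, x],
# planes[-45][y, x]) can then be fed to a Stokes-parameter computation such as
# the one sketched earlier to obtain one orientation sample per 2x2 group.
```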
  • FIG. 5 is a top-down view of an alternative arrangement of polarized subfilters 330 (e.g., individually polarized elements) that may overlay the pixels of a digital image sensor. In FIG. 5, each element of the filter array marked with an “X” indicates a group of polarized subfilters 330 arranged, for example, in the two-by-two grid previously mentioned. Alternative arrangements are possible. Each element of the filter that is unmarked has no polarized subfilters. Rather, light impinges upon pixels beneath these portions of the filter without any filtering at all. Thus, the filter shown in FIG. 5 is a partial filter; certain groups of pixels sense polarized light while others are not so filtered and sense unpolarized light.
  • It can be seen from FIG. 5 that the groups of polarized subfilters 330 and the non-filtered sections of the filter generally alternate. Other patterns of polarized and nonpolarized areas may be formed. For example, a filter may have half as many polarized areas as nonpolarized or twice as many. Accordingly, the pattern shown in FIG. 5 is illustrative only.
  • As one example of another polarizing filter pattern, the polarized filter may be configured to provide three different light levels, designated A, B, and C for purposes of this discussion. Each pixel (or pixel group) may be placed beneath or otherwise adjacent to a portion of a filter that is polarized to permit light of levels A, B, or C therethrough. Thus, the image sensor, when taken as a whole, could be considered to simultaneously capture three images that are not physically offset (or are very minimally physically offset, such as by the width of a few pixels) but have different luminance levels. It should be appreciated that the varying light levels may be achieved by changing the polarization patterns and/or degree of polarization. Further, it should be appreciated that any number of different, distinct levels of light may be created by the filter, and thus any number of images having varying light levels may be captured by the associated sensor. Thus, embodiments may create multiple images that may be employed to create a single high dynamic range image, as described in more detail below.
  • It should be appreciated that each group of pixels beneath a group of polarized subfilters 330 receives less light than a group of pixels beneath an unpolarized area of the filter. Essentially, the unpolarized groups of pixels may record an image having a first light level while the polarized pixel groups record the same image but at a darker light level. The images are recorded from the same vantage and at the same time since the pixels are interlaced with one another. Thus, there is practically no offset for the images captured by the polarized and unpolarized pixel groups. Essentially, this replicates capturing two images simultaneously from the exact same vantage point, but at two different exposure times. The embodiment trades resolution for the ability to capture an image at two different light levels simultaneously and without displacement.
  • Given the foregoing, it should be appreciated that the image having a higher light level (e.g., the “first image”) and the image having a lower light level (e.g., the “second image”) may be combined to achieve a number of effects. As but one example, the two images may be used to create high dynamic range (HDR) images. As known in the art, HDR images are generally created by overlaying and merging images that are captured at near-identical or identical locations, temporally close to one another, and at different lighting levels. The variance in lighting level is typically achieved by changing the exposure time of the image capture device. Since the present embodiment effectively captures two images at different exposure times, it should be appreciated that these images may be combined to create HDR images.
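  • As one possible illustration of such a combination (not part of the disclosure), the sketch below merges the brighter image from the unpolarized pixel groups with the darker image from the polarized groups. The filter attenuation factor and saturation threshold are assumed values.

```python
# Illustrative sketch (not part of the disclosure): merging co-sited bright and
# dark images into a single high-dynamic-range estimate. The polarizer's
# attenuation factor (here 0.5, i.e. roughly one stop) is an assumption.

import numpy as np

def merge_hdr(bright: np.ndarray, dark: np.ndarray,
              attenuation: float = 0.5, saturation: float = 0.95):
    """bright, dark: co-sited images scaled to [0, 1].

    The dark image is divided by the filter attenuation so both images share a
    common radiometric scale; a weight favors the bright image except where it
    is saturated, in which case the rescaled dark image takes over.
    """
    dark_lin = dark / attenuation                  # undo the filter attenuation
    w = np.clip((saturation - bright) / saturation, 0.0, 1.0)
    return w * bright + (1.0 - w) * dark_lin       # per-pixel blended radiance
```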
  • Further, the polarization pattern of the filter may be varied to effectively create three or more images at three or more total exposures (for example, by suitably varying and/or interleaving various groups of polarized subfilters and unpolarized areas). This may permit an even wider range of images to be combined to create an HDR image. “Total exposure” may be achieved by varying any, all, or a combination of f-stop, exposure time and filtering.
  • Likewise, these images may be combined to create a variety of effects. As one example, the variance in light intensity between images may cause certain objects in the first image to be more detailed than those objects are in the second image, and vice versa. As one example, if the images show a person standing in front of a sunlit window, the person may be dark and details of the person may be lost in the first image, since the light from the window will overpower the person. However, in the second image, the person may be more visible and detailed but the window may appear dull and/or details of the space through the window may be lost. The two images may easily be combined, such that the portion of the second image showing the person is overlaid into the identical space in the first image. Thus, a single image with both the person and background in detail may be created. Because there is little or no pixel offset, these types of substitutions may be relatively easily accomplished.
  • Further, since the embodiment may determine image depths and surface orientations, HDR images may be further enhanced. Many HDR images suffer from “halos” or bleeding effects around objects. This is typically caused by blending together the multiple images into the HDR image and attempting to normalize for abrupt changes in color or luminance caused by the boundary between an object and the background, or between two objects. Because traditional images lack depth and surface orientation data, they cannot distinguish between objects. Thus, the HDR process creates a visual artifact around high contrast boundaries.
  • Since embodiments disclosed herein can effectively map objects in space and obtain surface information about these objects, boundaries of the objects may be determined prior to the HDR process. Thus, the HDR process may be performed not on a per-image basis, but on a per-object basis within the image. Likewise, backgrounds may be treated and processed separately from any objects in the image. Thus, the halo/bleeding effects may be reduced or removed entirely, since the HDR process is not attempting to blend color and/or luminance across two discrete elements in an image.
  • Further, it should be appreciated that certain embodiments may employ a polarizing filter having non-checkered and/or asymmetric patterns. As one example, a polarizing filter may be associated with an image sensor such that the majority of pixels receive polarized light. This permits the image sensor to gather additional surface normal data, but at a cost of luminance, and potentially chrominance, information. Non-polarized sections of the filter may overlay a certain number of pixels. For example, every fifth, 10th, or 20th pixel, and so on, may receive unpolarized light. In this fashion, the data captured by the pixels receiving unpolarized light may be used to estimate and enhance the luminance/chrominance information captured by the pixels underlying polarized portions of the filter. Essentially, the unpolarized image data may be used to correct the polarized image data. An embodiment may, for example, create a curve fitted to the image data captured by the unpolarized pixels. This curve may be applied to data captured by pixels underlying the polarized sections of the filter and the corresponding polarized data may be fitted to the curve. This may improve the luminance and/or chrominance of the overall image.
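  • One way such a correction curve might be fitted and applied is sketched below. This is an illustration only (not part of the disclosure); the polynomial degree is an assumed parameter, and the pairing of each polarized sample with a nearby unpolarized sample is left to the caller.

```python
# Illustrative sketch (not part of the disclosure): fitting a low-order
# polynomial to luminance samples from the sparse unpolarized pixels and using
# it to rescale values recorded behind the polarized portions of the filter.

import numpy as np

def fit_correction_curve(polarized_samples: np.ndarray,
                         unpolarized_samples: np.ndarray, degree: int = 2):
    """Fit unpolarized ~ poly(polarized) from co-located sample pairs."""
    coeffs = np.polyfit(polarized_samples, unpolarized_samples, degree)
    return np.poly1d(coeffs)

def correct_polarized_image(polarized_img: np.ndarray, curve) -> np.ndarray:
    """Apply the fitted curve to every polarized pixel value."""
    return curve(polarized_img)
```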
  • Likewise, the polarization filter may vary in any fashion that facilitates processing image data captured by the pixels of the sensor array. For example, the following array shows a sample polarization filter, where the first letter indicates if the pixel receives polarized or clear/unpolarized light (designated by a “P” or a “C,” respectively) while the second letter indicates the wavelength to which the particular pixel is sensitive/filtered (“R” for red, “G” for green and “B” for blue):
  • PR PG PR PG CR CG PR PG PR PG
    PG PB PG PB CG CB PG PB PG PB
    CR CG CR CG CR CG CR CG CR CG
    CG CB CG CB CG CB CG CB CG CB
    PR PG PR PG CR CG PR PG PR PG
    PG PB PG PB CG CB PG PB PG PB
  • The exact pattern may be varied to enhance, facilitate, and/or speed up image processing. Thus, the polarization pattern may vary according to the purpose of the image sensor, the software with which it is used, and the like. It should be appreciated that the foregoing array is an example only. Checkerboard or other repeating patterns need not be used; certain embodiments may use differing patterns depending on the end application or result desired.
  • The ability to extract both surface detail and depth information for objects in an image can extend the performance of the three-dimensional imaging apparatus in a number of ways. For example, the depth information may be used to derive size information for the objects in the image. Accordingly, the three-dimensional imaging apparatus may be capable of differentiating a large object positioned far away from the camera from a small object positioned close to the camera and having the same shape as the large object.
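  • As a simple illustration of how depth disambiguates object size (an illustration only, not part of the disclosure), the pinhole relation below scales an object's extent in pixels by its depth; the focal length and example numbers are assumed.

```python
# Illustrative sketch (not part of the disclosure): using depth to distinguish
# a large, distant object from a small, nearby one of the same shape, via the
# pinhole relation: physical size = pixel extent * depth / focal length (px).

def physical_size(pixel_extent: float, depth_m: float, focal_px: float) -> float:
    """Approximate real-world extent of an object spanning `pixel_extent` pixels."""
    return pixel_extent * depth_m / focal_px

# Two objects spanning the same 200 pixels with a 1000-pixel focal length:
print(physical_size(200, 1.0, 1000))   # 0.2 m -> small object close to the camera
print(physical_size(200, 10.0, 1000))  # 2.0 m -> large object far from the camera
```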
  • In another embodiment, the three-dimensional imaging apparatus may be used for gaze detection or eye tracking. Existing gaze detection techniques require transmitting infrared (IR) waves to an individual's retina and sensing the reflected infrared waves with a camera to determine the location of the individual's pupil and lens. Such techniques can be inaccurate, since an individual's retina is located behind the lens and cornea and may not be readily detectable. This technique can be improved by utilizing surface detail and depth information to measure the flatness of an individual's lens and determine the location of the lens with respect to the individual's eye.
  • In another embodiment, the three-dimensional imaging apparatus may be used to enhance imaging of partially obscured scenes. For example, the three-dimensional sensing device may determine the surface detail associated with an unobscured object, and save this information, for example, to a memory device. The saved surface information can be retrieved and superimposed into images in which the object is otherwise obscured. One example of such a situation is when an object is partially obscured by fog. In this situation, the image can be artificially enhanced by superimposing the object into the scene using the saved surface detail information.
  • In another embodiment, the three-dimensional imaging apparatus may be used to artificially reposition the light source within an image. Alternatively or in addition, the intensity of the light source may be adjusted to brighten or darken objects in the image. In this embodiment, the surface detail information, which can include the curvature and orientation of the surfaces of an object, may be used to calculate the position and/or intensity of the light source in the image. Accordingly, the light source may be virtually moved to a different position, or the intensity of the light source can be changed. Additionally, the surface detail information may further be used to calculate shadow positions that would naturally appear due to repositioning or changing the intensity of a light source.
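  • By way of illustration only (not part of the disclosure), the sketch below re-shades a surface under a virtually repositioned point light using a simple Lambertian model, once per-pixel surface normals, positions, and an albedo estimate are available. The point-light model and the inputs shown are assumptions.

```python
# Illustrative sketch (not part of the disclosure): re-shading a surface under
# a virtually repositioned point light with a Lambertian reflectance model.

import numpy as np

def relight(normals: np.ndarray, points: np.ndarray, albedo: np.ndarray,
            light_pos: np.ndarray, light_intensity: float = 1.0) -> np.ndarray:
    """normals, points: (H, W, 3); albedo: (H, W). Returns new shading (H, W)."""
    to_light = light_pos - points                          # vectors toward the light
    to_light /= np.linalg.norm(to_light, axis=-1, keepdims=True)
    n_dot_l = np.clip(np.sum(normals * to_light, axis=-1), 0.0, None)
    return light_intensity * albedo * n_dot_l              # Lambertian shading
```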
  • In related embodiments, the three-dimensional imaging apparatus may be used to alter the balance of light within the image from different light sources. For example, the image may include light from two or more different light sources, including, but not limited to, natural light, fluorescent light, incandescent light, and so on and so forth. The different types of light may have different polarization signatures as they are reflected off of the surfaces of objects. This may allow both for calculating the positions of the light sources, as discussed above, and for identifying the light sources that are impacting on surfaces of objects in an image. Accordingly, the intensity of each light source, as well as the positions of the light sources, may be manipulated by a user to balance the light sources in the image according to the user's preference.
  • Similarly, the three-dimensional imaging apparatus may be configured to remove unwanted visual effects caused by light sources within the image, such as, but not limited to, glare and gloss. Glare and gloss may be caused by the presence of a large luminance ratio between the surface of a captured object and the glare source, which may be sunlight or an artificial light source. To remove areas of glare and gloss in the image, the three-dimensional imaging apparatus may use the surface detail information to determine the position and/or intensity of the light source in the image, and alter the parameters of the light source to remove or reduce the amount of glare and gloss caused by the light source. For example, the light source may be virtually repositioned or dimmed so that the amount of light impacting on a surface is reduced.
  • In another related embodiment, the three-dimensional imaging apparatus may further be used to artificially reposition and/or remove objects within an image. In this embodiment, the surface detail information may be used to calculate the position and/or intensity of the light source in the image. Once the position of the light source is obtained, the imaging apparatus may calculate shadow positions of the objects after they have been moved based on the calculated position of the light source, as well as the surface detail information. The depth information may be used to calculate the size of objects within the image, so that the objects may be appropriately sized as they are virtually positioned further or closer to the camera.
  • In a further embodiment, the three-dimensional imaging apparatus may be used to artificially modify the shape of objects within the image. For example, the surface detail information may be calculated and modified according to various parameters input by a user. As discussed above, the surface detail information may be used to calculate the position and/or intensity of the light source within the image, and corresponding shadow positions may be calculated according to the modified surface orientations.
  • In another embodiment, the three-dimensional imaging apparatus may be used for recognizing facial gestures. Facial gestures may include, but are not limited to, smiling, grimacing, frowning, winking, and so on and so forth. In one embodiment, this may be accomplished by detecting the orientation of various facial features, such as the mouth, eyes, nose, forehead, cheeks, and so on, using surface geometry data, and correlating the detected orientations with various gestures. The gestures may then be correlated to various emotions associated with the gestures to determine the emotion of an individual in an image.
  • In another embodiment, the three-dimensional imaging apparatus may be used to scan an object, for example, to create a three-dimensional model of the object. This may be accomplished by taking multiple photographs or video of the object while rotating it. As the object is rotated, the image sensing device may capture more of the surface geometry and use that geometry to create a three-dimensional model of the object. In another related embodiment, multiple photographs or video may be taken while the image sensing device is moved relative to the object, and used to construct a three-dimensional model of the objects within the captured image(s). For example, a user may take video of a home while walking through it, and the image sensing device could use the calculated depth and surface detail information to create a three-dimensional model of the home. The depth and surface detail information of multiple photographs or video stills may then be matched to construct a seamless composite three-dimensional model that combines the surface detail and depth from each of the photographs or video frames (a view-merging sketch follows this list).
  • In another embodiment, the three-dimensional imaging apparatus may be used to correct geometric distortions in a two- or three-dimensional image. For example, if an image is taken using a fish-eye lens, the image processing module may warp the image so that it appears undistorted (a simplified undistortion sketch follows this list). In one embodiment, this may be accomplished by using the surface detail information to recognize various objects in the distorted image and warping the image based on saved surface detail information of the undistorted object. Accordingly, the three-dimensional imaging apparatus may recognize an object in the image, such as a table, using image recognition techniques, and calculate the distance of the table in the captured scene from the imaging apparatus. The three-dimensional imaging apparatus may then substitute a saved image of the object from another image, position that image of the object into the image of the scene at the calculated depth, and modify it so that it is properly scaled within the image of the scene.
  • In still another embodiment, the surface normal data may be used to construct a stereo disparity map. Most stereo disparity mapping systems look for repeating patterns of pixels and, based on the offset of the repeating pattern between images, assign the pattern a particular distance from the sensor. This approach may be imprecise, since objects may be aligned differently with respect to one another when the image is captured by a first sensor and a second, offset sensor. That is, the offset of the first and second sensors may cause objects to appear to have a different spatial relationship to one another, and thus the pixels representing those objects may vary between images.
  • However, the surface normals of each object should appear the same to each image sensor, so long as the sensors are coplanar. Thus, by comparing the surface normals (as received and recorded through the polarized filters) detected by each image sensor against one another, the stereo mapping may be enhanced and refined (a normal-comparison sketch follows this list). Once a pixel match or near-match is determined, the surface normals may be compared. If the surface normals differ, then the pixels may represent different objects in each image and depth information may be difficult or impossible to assign. If the surface normals match, then the pixels represent the same object(s) in each image and a depth may be more definitively determined and assigned.
  • The foregoing represents certain embodiments, systems, and methods, and is intended to provide examples only. Accordingly, the proper scope of protection should not be limited by any of the foregoing examples.
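The rebalancing sketch referenced above is a minimal illustration in Python with NumPy. It assumes that per-pixel contribution weights for two light sources (for example, daylight and fluorescent light) have already been estimated from their polarization signatures; the function name, arguments, and gains are illustrative and not part of the disclosed apparatus.

    import numpy as np

    def rebalance_light_sources(image, weight_a, weight_b, gain_a=1.0, gain_b=1.0):
        # image:    H x W x 3 array of linear RGB values in [0, 1]
        # weight_a: H x W array, estimated fraction of each pixel's light
        #           contributed by source A (derived from its polarization signature)
        # weight_b: H x W array, fraction contributed by source B
        # gain_a, gain_b: user-selected gains for the two sources
        scale = gain_a * weight_a + gain_b * weight_b
        return np.clip(image * scale[..., None], 0.0, 1.0)

    # Example: leave daylight unchanged and dim the fluorescent source to 60%.
    # rebalanced = rebalance_light_sources(img, w_daylight, w_fluorescent, 1.0, 0.6)
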
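The glare-suppression sketch takes one hypothetical approach: treat strongly polarized pixels as specular glare and pull them back toward the surrounding diffuse level. The availability of a per-pixel degree-of-polarization map, the threshold, and the strength parameter are assumptions of this sketch.

    import numpy as np

    def suppress_glare(luminance, degree_of_polarization, threshold=0.6, strength=0.8):
        # luminance:              H x W array
        # degree_of_polarization: H x W array in [0, 1]; specular glare tends to be
        #                         much more strongly polarized than diffuse reflection
        glare = degree_of_polarization > threshold
        non_glare = ~glare
        if not glare.any() or not non_glare.any():
            return luminance.copy()
        # Use the median of the non-glare pixels as a stand-in diffuse level.
        diffuse_level = np.median(luminance[non_glare])
        corrected = luminance.copy()
        excess = np.clip(corrected[glare] - diffuse_level, 0.0, None)
        corrected[glare] -= strength * excess
        return corrected
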
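The depth-scaling sketch reduces the resizing step to the pinhole relation that apparent size varies inversely with distance; it assumes the object's depth before and after the virtual move is known.

    def rescale_for_new_depth(width_px, height_px, original_depth, new_depth):
        # Under a pinhole camera model, the on-sensor size of an object is
        # inversely proportional to its distance from the camera, so moving it
        # from original_depth to new_depth scales its footprint by the ratio.
        scale = original_depth / new_depth
        return int(round(width_px * scale)), int(round(height_px * scale))

    # An object spanning 200 x 120 pixels at 2.0 m, virtually moved back to 3.0 m:
    # rescale_for_new_depth(200, 120, 2.0, 3.0) -> (133, 80)
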
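The relighting sketch re-shades an edited surface with a simple Lambertian model once the user has modified the surface normals; the Lambertian assumption, the ambient term, and the helper name are illustrative only.

    import numpy as np

    def relight_lambertian(albedo, normals, light_direction, ambient=0.1):
        # albedo:          H x W x 3 reflectance
        # normals:         H x W x 3 unit surface normals (possibly user-modified)
        # light_direction: 3-vector from the surface toward the light source
        l = np.asarray(light_direction, dtype=float)
        l /= np.linalg.norm(l)
        # Lambert's cosine law: shading is proportional to max(n . l, 0).
        n_dot_l = np.clip(np.einsum('ijk,k->ij', normals, l), 0.0, 1.0)
        shading = ambient + (1.0 - ambient) * n_dot_l
        return np.clip(albedo * shading[..., None], 0.0, 1.0)
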
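The gesture-classification sketch compares measured regional surface orientations against per-gesture templates; the regions, angles, and templates are hypothetical placeholders rather than data from the specification.

    # Hypothetical mean surface tilts (in degrees) for a few facial regions.
    GESTURE_TEMPLATES = {
        "smile":   {"mouth_corner": 25.0, "cheek": 10.0, "brow": 0.0},
        "frown":   {"mouth_corner": -20.0, "cheek": -5.0, "brow": -10.0},
        "neutral": {"mouth_corner": 0.0, "cheek": 0.0, "brow": 0.0},
    }

    def classify_gesture(measured):
        # Return the template with the smallest squared distance to the
        # measured regional orientations.
        def distance(template):
            return sum((measured[region] - template[region]) ** 2 for region in template)
        return min(GESTURE_TEMPLATES, key=lambda name: distance(GESTURE_TEMPLATES[name]))

    # classify_gesture({"mouth_corner": 22.0, "cheek": 8.0, "brow": 1.0}) -> "smile"
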
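The view-merging sketch transforms each view's points into a common frame and concatenates them; estimating the per-view poses (for example, by aligning shared surface detail between views) is assumed to happen elsewhere.

    import numpy as np

    def merge_scans(point_clouds, poses):
        # point_clouds: list of N_i x 3 arrays in each view's camera frame
        # poses:        list of (R, t) pairs mapping camera coordinates into a
        #               common world frame; R is 3 x 3, t is a 3-vector
        merged = []
        for points, (rotation, translation) in zip(point_clouds, poses):
            world_points = points @ np.asarray(rotation).T + np.asarray(translation)
            merged.append(world_points)
        return np.vstack(merged)
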
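The undistortion sketch uses a heavily simplified single-coefficient radial model to give the flavor of the warping step; a real fish-eye correction would rely on a calibrated lens model rather than the approximation assumed here.

    import numpy as np

    def undistort_points(points, k1, principal_point, focal_length):
        # points:          N x 2 array of distorted pixel coordinates
        # k1:              first radial distortion coefficient (assumed calibrated)
        # principal_point: (cx, cy) in pixels
        # focal_length:    focal length in pixels
        center = np.asarray(principal_point, dtype=float)
        normalized = (points - center) / focal_length
        r2 = np.sum(normalized ** 2, axis=1, keepdims=True)
        # One-term approximate inverse of the radial model x_d = x_u * (1 + k1 * r^2).
        corrected = normalized / (1.0 + k1 * r2)
        return corrected * focal_length + center
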
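The normal-comparison sketch expresses the refinement test as an angular comparison at a candidate match; the angle threshold and array layout are assumptions.

    import numpy as np

    def normals_agree(normals_left, normals_right, x, y, disparity, max_angle_deg=10.0):
        # normals_left, normals_right: H x W x 3 unit surface normals recovered
        # from the polarized pixels of the left and right sensors.
        # (x, y) is a pixel in the left image; disparity is the candidate
        # horizontal offset of its match in the right image.
        n_left = normals_left[y, x]
        n_right = normals_right[y, x - disparity]
        cos_angle = np.clip(np.dot(n_left, n_right), -1.0, 1.0)
        angle = np.degrees(np.arccos(cos_angle))
        # With coplanar sensors, the same surface patch should present nearly the
        # same normal to both sensors; a large angle suggests the candidate pixels
        # belong to different objects.
        return angle <= max_angle_deg
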

Claims (21)

1. A three-dimensional imaging apparatus configured to capture at least one image including one or more objects, comprising:
a first sensor for capturing a polarized image, the first sensor including a first imaging device and a polarized filter associated with the first imaging device;
a second sensor for capturing a first non-polarized image;
a third sensor for capturing a second non-polarized image; and
at least one processing module for deriving depth information for the one or more objects utilizing at least the first non-polarized image and the second non-polarized image, the processing module further operative to combine the polarized image, the first non-polarized image, and the second non-polarized image to form a composite three-dimensional image.
2. The three-dimensional imaging apparatus of claim 1, wherein the first sensor is positioned between the second and third sensors such that a blind region of the first sensor is between blind regions of the second and third sensors.
3. The three-dimensional imaging apparatus of claim 1, wherein:
the first sensor is a luminance sensor;
the second sensor is a first chrominance sensor; and
the third sensor is a second chrominance sensor.
4. The three-dimensional imaging apparatus of claim 3, wherein a field of view of the first sensor is offset from both a field of view of the second sensor and a field of view of the third sensor.
5. The three-dimensional imaging apparatus of claim 3, wherein:
the polarized image is a polarized luminance image;
the first non-polarized image is a first chrominance image;
the second non-polarized image is a second chrominance image;
the at least one processing module is configured to generate a stereo disparity map from at least the first and second chrominance images; and
the at least one processing module is configured to derive depth information at least partially from the stereo disparity map.
6. The three-dimensional imaging apparatus of claim 5, further comprising a fourth sensor configured to capture a second luminance image; wherein
the at least one processing module is further configured to refine the stereo disparity map based on the second luminance image.
7. The three-dimensional imaging apparatus of claim 1, wherein the polarized filter comprises an array of polarizing subfilters.
8. The three-dimensional imaging apparatus of claim 7, wherein:
the first sensor comprises at least one pixel; and
a first polarized subfilter of the array of polarizing subfilters overlays the at least one pixel.
9. The three-dimensional imaging apparatus of claim 8, wherein the first polarized subfilter of the array of polarizing subfilters has a different type of polarization than a second polarized subfilter of the array of polarizing subfilters.
10. The three-dimensional imaging apparatus of claim 8, wherein:
the at least one pixel receives polarized light reflected from an imaged object, the polarized light corresponding to a polarization type of the first polarized subfilter; and
the first sensor determines a surface normal of the imaged object by measuring a polarization of the light received by the at least one pixel.
11. The three-dimensional imaging apparatus of claim 10, further comprising:
at least a second pixel adjacent to the at least one pixel; wherein
the second polarized subfilter overlays the at least a second pixel; and
the first sensor determines a surface normal of the imaged object by measuring a polarization of the light received by the at least a second pixel and comparing it to the polarization of the light received by the at least one pixel.
12. The three-dimensional imaging apparatus of claim 8, wherein the first imaging device includes at least one additional pixel that corresponds to an unpolarized area of the polarized filter.
13. The three-dimensional imaging apparatus of claim 12, wherein the polarized image is a high dynamic range image created from a first luminance image recorded at least by the at least one pixel and a second luminance image recorded at least by the at least one additional pixel.
14. The three-dimensional imaging apparatus of claim 8, wherein:
the first sensor includes a microlens array; and
at least one microlens of the microlens array corresponds to the at least one pixel and is configured to focus light onto the at least one pixel.
15. The three-dimensional imaging apparatus of claim 1, wherein the at least one processing module is further configured to identify at least one face in the composite three-dimensional image utilizing at least one of the surface information or the depth information.
16. A three-dimensional imaging apparatus configured to capture at least one image including one or more objects, comprising:
a first sensor for capturing a polarized chrominance image and determining surface information for the one or more objects, the first sensor including a color imaging device and a polarized filter associated with the color imaging device;
a second sensor for capturing a first luminance image;
a third sensor for capturing a second luminance image; and
at least one processing module for deriving depth information for the one or more objects utilizing at least the first luminance image and the second luminance image and combining the polarized chrominance image, the first luminance image, and the second luminance image to form a composite three-dimensional image utilizing the surface information and the depth information.
17. A method for capturing at least one image of an object, comprising:
capturing a polarized image of the object;
capturing a first non-polarized image of the object;
capturing a second non-polarized image of the object;
deriving depth information for the object from at least the first non-polarized image and the second non-polarized image;
determining a plurality of surface normals for the object, the plurality of surface normals derived from the polarized image; and
creating a three-dimensional image from the depth information and the plurality of surface normals.
18. The method of claim 17, wherein the operation of deriving depth information for the object comprises creating a stereo disparity map from the first non-polarized image and the second non-polarized image.
19. The method of claim 17, further comprising:
determining, based on the surface normals, a simulated lighting of the object; and
altering the three-dimensional image to insert the simulated lighting of the object.
20. The method of claim 17, wherein the operation of determining a plurality of surface normals for the object comprises:
grouping each pixel of a pixel array into a subarray with at least one other pixel;
evaluating the polarized light received by the subarray; and
based on the evaluation, assigning a surface normal to a portion of the image recorded by the subarray.
21. The method of claim 17, wherein the polarized image and first non-polarized image are captured by a single sensor.
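Claims 8 through 11 and claim 20 recite assigning a surface normal from the polarized light received by a small group of pixels. A minimal sketch of one such calculation, assuming linear polarizing subfilters at 0, 45, 90, and 135 degrees (an assumption of the sketch, not a limitation of the claims), computes the angle and degree of linear polarization that constrain the normal:

    import numpy as np

    def polarization_cues(i_0, i_45, i_90, i_135):
        # Intensities measured by adjacent pixels behind linear polarizing
        # subfilters oriented at 0, 45, 90 and 135 degrees.
        s0 = 0.5 * (i_0 + i_45 + i_90 + i_135)   # total intensity
        s1 = i_0 - i_90                           # 0/90 Stokes component
        s2 = i_45 - i_135                         # 45/135 Stokes component
        angle_of_polarization = 0.5 * np.arctan2(s2, s1)
        degree_of_polarization = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-9)
        # The polarization angle constrains the azimuth of the surface normal,
        # and the degree of polarization constrains its zenith angle (via the
        # Fresnel equations), which is how a normal can be assigned to the
        # subarray of pixels.
        return angle_of_polarization, degree_of_polarization
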
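For orientation, the following sketch strings together the operations recited in claim 17 as a single flow; the callables passed in stand for the depth-from-stereo, normals-from-polarization, and composition steps and are placeholders, not the claimed implementation:

    def reconstruct_three_dimensional_image(polarized_image,
                                            first_unpolarized_image,
                                            second_unpolarized_image,
                                            derive_depth,
                                            derive_surface_normals,
                                            compose):
        # derive_depth:           callable, e.g. builds a stereo disparity map
        #                         from the two non-polarized images (claim 18)
        # derive_surface_normals: callable, recovers surface normals from the
        #                         polarized image
        # compose:                callable, combines depth and normals into the
        #                         final three-dimensional image
        depth_information = derive_depth(first_unpolarized_image,
                                         second_unpolarized_image)
        surface_normals = derive_surface_normals(polarized_image)
        return compose(depth_information, surface_normals)
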
US13/246,821 2010-09-27 2011-09-27 Image capture using three-dimensional reconstruction Abandoned US20120075432A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/246,821 US20120075432A1 (en) 2010-09-27 2011-09-27 Image capture using three-dimensional reconstruction

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US38686510P 2010-09-27 2010-09-27
US13/246,821 US20120075432A1 (en) 2010-09-27 2011-09-27 Image capture using three-dimensional reconstruction

Publications (1)

Publication Number Publication Date
US20120075432A1 true US20120075432A1 (en) 2012-03-29

Family

ID=44947179

Family Applications (3)

Application Number Title Priority Date Filing Date
US13/088,286 Active 2031-12-26 US8760517B2 (en) 2010-09-27 2011-04-15 Polarized images for security
US13/246,821 Abandoned US20120075432A1 (en) 2010-09-27 2011-09-27 Image capture using three-dimensional reconstruction
US14/279,648 Active 2032-05-29 US9536362B2 (en) 2010-09-27 2014-05-16 Polarized images for security

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/088,286 Active 2031-12-26 US8760517B2 (en) 2010-09-27 2011-04-15 Polarized images for security

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/279,648 Active 2032-05-29 US9536362B2 (en) 2010-09-27 2014-05-16 Polarized images for security

Country Status (3)

Country Link
US (3) US8760517B2 (en)
TW (1) TWI489858B (en)
WO (1) WO2012044619A1 (en)

Cited By (62)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110316983A1 (en) * 2010-01-05 2011-12-29 Panasonic Corporation Three-dimensional image capture device
US20120257815A1 (en) * 2011-04-08 2012-10-11 Markus Schlosser Method and apparatus for analyzing stereoscopic or multi-view images
US20130027548A1 (en) * 2011-07-28 2013-01-31 Apple Inc. Depth perception device and system
US20130335787A1 (en) * 2012-06-13 2013-12-19 Pfu Limited Overhead image reading apparatus
US20140139642A1 (en) * 2012-11-21 2014-05-22 Omnivision Technologies, Inc. Camera Array Systems Including At Least One Bayer Type Camera And Associated Methods
US8760517B2 (en) 2010-09-27 2014-06-24 Apple Inc. Polarized images for security
US20140177942A1 (en) * 2012-12-26 2014-06-26 Industrial Technology Research Institute Three dimensional sensing method and three dimensional sensing apparatus
WO2014168678A1 (en) * 2013-04-09 2014-10-16 Elc Management Llc Skin diagnostic and image processing systems, apparatus and articles
US8937591B2 (en) 2012-04-06 2015-01-20 Apple Inc. Systems and methods for counteracting a perceptual fading of a movable indicator
US20150023589A1 (en) * 2012-01-16 2015-01-22 Panasonic Corporation Image recording device, three-dimensional image reproducing device, image recording method, and three-dimensional image reproducing method
US20150049169A1 (en) * 2013-08-15 2015-02-19 Scott Krig Hybrid depth sensing pipeline
US8994780B2 (en) 2012-10-04 2015-03-31 Mcci Corporation Video conferencing enhanced with 3-D perspective control
US20150103200A1 (en) * 2013-10-16 2015-04-16 Broadcom Corporation Heterogeneous mix of sensors and calibration thereof
WO2015087323A1 (en) * 2013-12-09 2015-06-18 Mantisvision Ltd Emotion based 3d visual effects
US20150199008A1 (en) * 2014-01-16 2015-07-16 Samsung Electronics Co., Ltd. Display apparatus and method of controlling the same
US9101320B2 (en) 2013-04-09 2015-08-11 Elc Management Llc Skin diagnostic and image processing methods
US9128188B1 (en) * 2012-07-13 2015-09-08 The United States Of America As Represented By The Secretary Of The Navy Object instance identification using template textured 3-D model matching
US20150348323A1 (en) * 2014-06-02 2015-12-03 Lenovo (Singapore) Pte. Ltd. Augmenting a digital image with distance data derived based on actuation of at least one laser
WO2015200490A1 (en) * 2014-06-24 2015-12-30 Photon-X Visual cognition system
US9324136B2 (en) * 2014-06-12 2016-04-26 Htc Corporation Method, electronic apparatus, and computer readable medium for processing reflection in image
US9435888B1 (en) * 2012-03-26 2016-09-06 The United States Of America As Represented By The Secretary Of The Navy Shape matching automatic recognition methods, systems and articles of manufacturing
TWI571827B (en) * 2012-11-13 2017-02-21 財團法人資訊工業策進會 Electronic device and method for determining depth of 3d object image in 3d environment image
US20170084044A1 (en) * 2015-09-22 2017-03-23 Samsung Electronics Co., Ltd Method for performing image process and electronic device thereof
JP2017072499A (en) * 2015-10-08 2017-04-13 キヤノン株式会社 Processor, processing system, imaging apparatus, processing method, program, and recording medium
EP3147823A3 (en) * 2015-09-22 2017-06-14 Jenetric GmbH Device and method for direct optical recording of documents and/or living areas of skin without imaging optical elements
US20170337692A1 (en) * 2015-01-27 2017-11-23 Apical Ltd Method, system and computer program product for automatically altering a video stream
JPWO2016136086A1 (en) * 2015-02-27 2017-12-07 ソニー株式会社 Imaging apparatus, image processing apparatus, and image processing method
CN107635129A (en) * 2017-09-29 2018-01-26 周艇 Three-dimensional three mesh camera devices and depth integration method
CN107680156A (en) * 2017-09-08 2018-02-09 西安电子科技大学 Three-dimensional rebuilding method based on polarization information
EP3228977A4 (en) * 2014-12-01 2018-07-04 Sony Corporation Image-processing device and image-processing method
DE102017205619A1 (en) * 2017-04-03 2018-10-04 Robert Bosch Gmbh LiDAR system and method for operating a LiDAR system
WO2019010214A1 (en) * 2017-07-05 2019-01-10 Facebook Technologies, Llc Eye tracking based on light polarization
CN109191560A (en) * 2018-06-29 2019-01-11 西安电子科技大学 Monocular based on scattered information correction polarizes three-dimensional rebuilding method
WO2019021569A1 (en) * 2017-07-26 2019-01-31 ソニー株式会社 Information processing device, information processing method, and program
US20190041562A1 (en) * 2018-06-29 2019-02-07 Intel Corporation Pixel level polarizer for flexible display panels
US20190064337A1 (en) * 2017-08-28 2019-02-28 Samsung Electronics Co., Ltd. Method and apparatus to identify object
JP2019045299A (en) * 2017-09-01 2019-03-22 学校法人東京電機大学 Three-dimensional information acquisition device
US10311584B1 (en) 2017-11-09 2019-06-04 Facebook Technologies, Llc Estimation of absolute depth from polarization measurements
WO2019125427A1 (en) * 2017-12-20 2019-06-27 Olympus Corporation System and method for hybrid depth estimation
US10372191B2 (en) 2011-05-12 2019-08-06 Apple Inc. Presence sensing
US20190278458A1 (en) * 2018-03-08 2019-09-12 Ebay Inc. Online pluggable 3d platform for 3d representations of items
US20190320103A1 (en) * 2015-04-06 2019-10-17 The Texas A&M University System Fusion of inertial and depth sensors for movement measurements and recognition
US10538326B1 (en) * 2016-08-31 2020-01-21 Amazon Technologies, Inc. Flare detection and avoidance in stereo vision systems
US10630925B1 (en) * 2018-12-03 2020-04-21 Facebook Technologies, Llc Depth determination using polarization of light and camera assembly with augmented pixels
US10641897B1 (en) 2019-04-24 2020-05-05 Aeye, Inc. Ladar system and method with adaptive pulse duration
US10791286B2 (en) 2018-12-13 2020-09-29 Facebook Technologies, Llc Differentiated imaging using camera assembly with augmented pixels
US10791282B2 (en) 2018-12-13 2020-09-29 Fenwick & West LLP High dynamic range camera assembly with augmented pixels
US10855896B1 (en) 2018-12-13 2020-12-01 Facebook Technologies, Llc Depth determination using time-of-flight and camera assembly with augmented pixels
US10902623B1 (en) 2019-11-19 2021-01-26 Facebook Technologies, Llc Three-dimensional imaging with spatial and temporal coding for depth camera assembly
US10918287B2 (en) 2012-12-31 2021-02-16 Omni Medsci, Inc. System for non-invasive measurement using cameras and time of flight detection
US10928374B2 (en) 2012-12-31 2021-02-23 Omni Medsci, Inc. Non-invasive measurement of blood within the skin using array of laser diodes with Bragg reflectors and a camera system
US10925465B2 (en) 2019-04-08 2021-02-23 Activ Surgical, Inc. Systems and methods for medical imaging
CN112528942A (en) * 2020-12-23 2021-03-19 深圳市汇顶科技股份有限公司 Fingerprint identification device, display screen and electronic equipment
WO2021055406A1 (en) * 2019-09-17 2021-03-25 Facebook Technologies, Llc Polarization capture device, system, and method
US11160455B2 (en) 2012-12-31 2021-11-02 Omni Medsci, Inc. Multi-wavelength wearable device for non-invasive blood measurements in tissue
US11179218B2 (en) 2018-07-19 2021-11-23 Activ Surgical, Inc. Systems and methods for multi-modal sensing of depth in vision systems for automated surgical robots
US11194160B1 (en) 2020-01-21 2021-12-07 Facebook Technologies, Llc High frame rate reconstruction with N-tap camera sensor
US11488354B2 (en) * 2017-05-22 2022-11-01 Sony Corporation Information processing apparatus and information processing method
US11567203B2 (en) * 2019-03-12 2023-01-31 Sick Ag Light line triangulation apparatus
US11741584B2 (en) * 2018-11-13 2023-08-29 Genesys Logic, Inc. Method for correcting an image and device thereof
US11783444B1 (en) * 2019-11-14 2023-10-10 Apple Inc. Warping an input image based on depth and offset information
CN117523112A (en) * 2024-01-05 2024-02-06 深圳市宗匠科技有限公司 Three-dimensional model building method and system, control equipment and storage medium thereof

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100004518A1 (en) 2008-07-03 2010-01-07 Masimo Laboratories, Inc. Heat sink for noninvasive medical sensor
US8630691B2 (en) 2008-08-04 2014-01-14 Cercacor Laboratories, Inc. Multi-stream sensor front ends for noninvasive measurement of blood constituents
KR100933175B1 (en) * 2009-02-05 2009-12-21 이영범 System and method for monitoring restricted documents
US20110082711A1 (en) 2009-10-06 2011-04-07 Masimo Laboratories, Inc. Personal digital assistant or organizer for monitoring glucose levels
US9408542B1 (en) 2010-07-22 2016-08-09 Masimo Corporation Non-invasive blood pressure measurement system
US20120287031A1 (en) 2011-05-12 2012-11-15 Apple Inc. Presence sensing
US9667883B2 (en) 2013-01-07 2017-05-30 Eminent Electronic Technology Corp. Ltd. Three-dimensional image sensing device and method of sensing three-dimensional images
CA2937497A1 (en) * 2014-01-22 2015-10-08 Polaris Sensor Technologies, Inc. Polarization imaging for facial recognition enhancement system and method
US9589195B2 (en) 2014-01-22 2017-03-07 Polaris Sensor Technologies, Inc. Polarization-based mapping and perception method and system
US10395113B2 (en) * 2014-01-22 2019-08-27 Polaris Sensor Technologies, Inc. Polarization-based detection and mapping method and system
US9293023B2 (en) 2014-03-18 2016-03-22 Jack Ke Zhang Techniques for emergency detection and emergency alert messaging
US8952818B1 (en) 2014-03-18 2015-02-10 Jack Ke Zhang Fall detection apparatus with floor and surface elevation learning capabilites
US9883118B2 (en) 2014-03-24 2018-01-30 Htc Corporation Method of image correction and image capturing device thereof
CN105025193B (en) 2014-04-29 2020-02-07 钰立微电子股份有限公司 Portable stereo scanner and method for generating stereo scanning result of corresponding object
TWI589149B (en) * 2014-04-29 2017-06-21 鈺立微電子股份有限公司 Portable three-dimensional scanner and method of generating a three-dimensional scan result corresponding to an object
US9197082B1 (en) 2014-12-09 2015-11-24 Jack Ke Zhang Techniques for power source management using a wrist-worn device
CN104599365A (en) * 2014-12-31 2015-05-06 苏州福丰科技有限公司 Flat open induction door control system and method based on three-dimensional facial image recognition
US9300925B1 (en) * 2015-05-04 2016-03-29 Jack Ke Zhang Managing multi-user access to controlled locations in a facility
US10448871B2 (en) 2015-07-02 2019-10-22 Masimo Corporation Advanced pulse oximetry sensor
CN105069883A (en) * 2015-08-19 2015-11-18 湖州高鼎智能科技有限公司 Human body comparison intelligent lock system
US9830506B2 (en) 2015-11-09 2017-11-28 The United States Of America As Represented By The Secretary Of The Army Method of apparatus for cross-modal face matching using polarimetric image data
CN105959667A (en) * 2016-07-06 2016-09-21 南京捷特普智能科技有限公司 Three-dimensional image collection device and system
CN106127908A (en) * 2016-07-27 2016-11-16 北京集创北方科技股份有限公司 Living things feature recognition lock and control method thereof
TWI624170B (en) * 2016-10-19 2018-05-11 財團法人工業技術研究院 Image scanning system and method thereof
US10462444B2 (en) * 2017-04-17 2019-10-29 Faro Technologies, Inc. Three-dimensional inspection
US10817594B2 (en) 2017-09-28 2020-10-27 Apple Inc. Wearable electronic device having a light field camera usable to perform bioauthentication from a dorsal side of a forearm near a wrist
WO2019071155A1 (en) 2017-10-05 2019-04-11 University Of Utah Research Foundation Translucent imaging system and related methods
EP3578126B1 (en) * 2018-06-08 2023-02-22 Stryker European Operations Holdings LLC Surgical navigation system
CN109886166A (en) * 2019-01-31 2019-06-14 杭州创匠信息科技有限公司 Method for anti-counterfeit and device based on polarization characteristic
CN110708353A (en) * 2019-09-03 2020-01-17 上海派拉软件技术有限公司 Database risk control method based on Mysql agent
CN115718307A (en) * 2021-02-02 2023-02-28 华为技术有限公司 Detection device, control method, fusion detection system and terminal
US11927769B2 (en) 2022-03-31 2024-03-12 Metalenz, Inc. Polarization sorting metasurface microlens array device

Citations (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5028138A (en) * 1989-05-23 1991-07-02 Wolff Lawrence B Method of and apparatus for obtaining object data by machine vision form polarization information
US5745163A (en) * 1994-12-16 1998-04-28 Terumo Kabushiki Kaisha Ocular fundus camera
US5793374A (en) * 1995-07-28 1998-08-11 Microsoft Corporation Specialized shaders for shading objects in computer generated images
US6028608A (en) * 1997-05-09 2000-02-22 Jenkins; Barry System and method of perception-based image generation and encoding
US20030086590A1 (en) * 2001-11-05 2003-05-08 Koninklijke Philips Electronics N.V. Method for computing optical flow under the epipolar constraint
US20050012742A1 (en) * 2003-03-07 2005-01-20 Jerome Royan Process for managing the representation of at least one 3D model of a scene
US20050195190A1 (en) * 2004-03-04 2005-09-08 Williams James P. Visualization of volume-rendered data with occluding contour multi-planar-reformats
US20060038817A1 (en) * 2004-08-20 2006-02-23 Diehl Avionik Systeme Gmbh Method and apparatus for representing a three-dimensional topography
US20060061569A1 (en) * 2004-09-21 2006-03-23 Kunio Yamada Pseudo 3D image creation device, pseudo 3D image creation method, and pseudo 3D image display system
US20060125936A1 (en) * 2004-12-15 2006-06-15 Gruhike Russell W Multi-lens imaging systems and methods
US20060178570A1 (en) * 2005-02-09 2006-08-10 Robinson M R Methods and apparatuses for noninvasive determinations of analytes
US20060209231A1 (en) * 2003-03-12 2006-09-21 Sadao Ioki Image display unit and game machine
US20060274302A1 (en) * 2005-06-01 2006-12-07 Shylanski Mark S Machine Vision Vehicle Wheel Alignment Image Processing Methods
US20070024614A1 (en) * 2005-07-26 2007-02-01 Tam Wa J Generating a depth map from a two-dimensional source image for stereoscopic and multiview imaging
US20070241267A1 (en) * 2006-04-18 2007-10-18 The Trustees Of The University Of Pennsylvania Sensor and polarimetric filters for real-time extraction of polarimetric information at the focal plane, and method of making same
US20080186390A1 (en) * 2006-05-29 2008-08-07 Matsushita Electric Industrial Co., Ltd. Super-resolution device, super-resolution method, super-resolution program, and super-resolution system
US20080243383A1 (en) * 2006-12-12 2008-10-02 Ching-Fang Lin Integrated collision avoidance enhanced GN&C system for air vehicle
US20080252846A1 (en) * 2005-09-29 2008-10-16 Essilor International (Compagnie Generale D'optique) Polarizing Ophthalmic Lens Adapted to a Wearer's Eye/Head Behavior
US20080266655A1 (en) * 2005-10-07 2008-10-30 Levoy Marc S Microscopy Arrangements and Approaches
US20080278571A1 (en) * 2007-05-10 2008-11-13 Mora Assad F Stereoscopic three dimensional visualization system and method of use
US20090040295A1 (en) * 2007-08-06 2009-02-12 Samsung Electronics Co., Ltd. Method and apparatus for reproducing stereoscopic image using depth control
US20090051793A1 (en) * 2007-08-21 2009-02-26 Micron Technology, Inc. Multi-array sensor with integrated sub-array for parallax detection and photometer functionality
US20090072996A1 (en) * 2007-08-08 2009-03-19 Harman Becker Automotive Systems Gmbh Vehicle illumination system
US20090279807A1 (en) * 2007-02-13 2009-11-12 Katsuhiro Kanamorl System, method and apparatus for image processing and image format
US20100013965A1 (en) * 2006-07-18 2010-01-21 The Trustees Of The University Of Pennsylvania Separation and contrast enhancement of overlapping cast shadow components and target detection in shadow using polarization
US20100134492A1 (en) * 2008-12-02 2010-06-03 Disney Enterprises, Inc. Tertiary lighting system
US20100208060A1 (en) * 2009-02-16 2010-08-19 Masanori Kobayashi Liquid droplet recognition apparatus, raindrop recognition apparatus, and on-vehicle monitoring apparatus
US20100277412A1 (en) * 1999-07-08 2010-11-04 Pryor Timothy R Camera Based Sensing in Handheld, Mobile, Gaming, or Other Devices
US20100283885A1 (en) * 2009-05-11 2010-11-11 Shih-Schon Lin Method for aligning pixilated micro-grid polarizer to an image sensor
US20100283883A1 (en) * 2008-06-26 2010-11-11 Panasonic Corporation Image processing apparatus, image division program and image synthesising method
US20100302235A1 (en) * 2009-06-02 2010-12-02 Horizon Semiconductors Ltd. efficient composition of a stereoscopic image for a 3-D TV
US20100315419A1 (en) * 2008-06-10 2010-12-16 Brandon Baker Systems and Methods for Estimating a Parameter for a 3D model
US7964840B2 (en) * 2008-06-19 2011-06-21 Omnivision Technologies, Inc. High dynamic range image sensor including polarizer and microlens
US20110169997A1 (en) * 2008-11-27 2011-07-14 Canon Kabushiki Kaisha Solid-state image sensing element and image sensing apparatus
US20120002018A1 (en) * 2010-01-05 2012-01-05 Panasonic Corporation Three-dimensional image capture device
US20120195583A1 (en) * 2010-10-08 2012-08-02 Vincent Pace Integrated 2D/3D Camera with Fixed Imaging Parameters
US20130127823A1 (en) * 2008-09-16 2013-05-23 Stephen J. DiVerdi Generating a Depth Map Based on a Single Image

Family Cites Families (62)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2485773A1 (en) 1980-06-24 1981-12-31 Promocab SYSTEM FOR PROTECTING A ZONE AGAINST HUMAN AGGRESSION
US5537144A (en) * 1990-06-11 1996-07-16 Revfo, Inc. Electro-optical display system for visually displaying polarized spatially multiplexed images of 3-D objects for use in stereoscopically viewing the same with high image quality and resolution
US5384588A (en) * 1991-05-13 1995-01-24 Telerobotics International, Inc. System for omindirectional image viewing at a remote location without the transmission of control signals to select viewing parameters
CN1183780C (en) * 1996-12-04 2005-01-05 松下电器产业株式会社 Optical disc for high resolution and three-D image recording and reproducing device and recording device thereof
US5903350A (en) 1997-02-03 1999-05-11 Optiphase, Inc. Demodulator and method useful for multiplexed optical sensors
US6072894A (en) * 1997-10-17 2000-06-06 Payne; John H. Biometric face recognition for applicant screening
US6146332A (en) * 1998-07-29 2000-11-14 3416704 Canada Inc. Movement detector
US7006236B2 (en) 2002-05-22 2006-02-28 Canesta, Inc. Method and apparatus for approximating depth of an object's placement onto a monitored region with applications to virtual interface devices
US6775397B1 (en) 2000-02-24 2004-08-10 Nokia Corporation Method and apparatus for user recognition using CCD cameras
US6802016B2 (en) 2001-02-08 2004-10-05 Twinhead International Corp. User proximity sensor and signal processing circuitry for determining whether to power a computer on or off
US20020158750A1 (en) 2001-04-30 2002-10-31 Almalik Mansour Saleh System, method and portable device for biometric identification
JP2002350555A (en) 2001-05-28 2002-12-04 Yamaha Motor Co Ltd Human presence detector
US6832006B2 (en) 2001-07-23 2004-12-14 Eastman Kodak Company System and method for controlling image compression based on image emphasis
US7136513B2 (en) * 2001-11-08 2006-11-14 Pelco Security identification system
JP4210137B2 (en) * 2002-03-04 2009-01-14 三星電子株式会社 Face recognition method and apparatus using secondary ICA
US7369685B2 (en) 2002-04-05 2008-05-06 Identix Corporation Vision-based operating method and system
JP4251358B2 (en) * 2002-04-10 2009-04-08 サーモ フィッシャー サイエンティフィック(アッシュビル)エルエルシー Automated protein crystallization imaging
US7151530B2 (en) 2002-08-20 2006-12-19 Canesta, Inc. System and method for determining an input selected by a user through a virtual interface
US7436569B2 (en) * 2003-03-12 2008-10-14 General Photonics Corporation Polarization measurement and self-calibration based on multiple tunable optical polarization rotators
JP2004348349A (en) 2003-05-21 2004-12-09 Masayuki Watabiki Household appliance power controller
US7009378B2 (en) 2003-09-05 2006-03-07 Nxtphase T & D Corporation Time division multiplexed optical measuring system
US7117380B2 (en) 2003-09-30 2006-10-03 International Business Machines Corporation Apparatus, system, and method for autonomic power adjustment in an electronic device
US7324824B2 (en) 2003-12-09 2008-01-29 Awarepoint Corporation Wireless network monitoring system
KR100580630B1 (en) 2003-11-19 2006-05-16 삼성전자주식회사 Apparatus and method for discriminating person using infrared rays
US20050221791A1 (en) 2004-04-05 2005-10-06 Sony Ericsson Mobile Communications Ab Sensor screen saver
US8509472B2 (en) * 2004-06-24 2013-08-13 Digimarc Corporation Digital watermarking methods, programs and apparatus
DE602004024322D1 (en) 2004-12-15 2010-01-07 St Microelectronics Res & Dev Device for the detection of computer users
US9760214B2 (en) 2005-02-23 2017-09-12 Zienon, Llc Method and apparatus for data entry input
JP2006318364A (en) 2005-05-16 2006-11-24 Funai Electric Co Ltd Image processing device
US7806604B2 (en) * 2005-10-20 2010-10-05 Honeywell International Inc. Face detection and tracking in a wide field of view
KR100745981B1 (en) * 2006-01-13 2007-08-06 삼성전자주식회사 Method and apparatus scalable face recognition based on complementary features
US8462989B2 (en) * 2006-06-06 2013-06-11 Tp Vision Holding B.V. Scaling an image based on a motion vector
US7840031B2 (en) * 2007-01-12 2010-11-23 International Business Machines Corporation Tracking a range of body movement based on 3D captured image streams of a user
US7903166B2 (en) 2007-02-21 2011-03-08 Sharp Laboratories Of America, Inc. Methods and systems for display viewer motion compensation based on user image data
US7929729B2 (en) 2007-04-02 2011-04-19 Industrial Technology Research Institute Image processing methods
GB2453163B (en) 2007-09-26 2011-06-29 Christopher Douglas Blair Three-dimensional imaging system
EP2216999B1 (en) 2007-12-07 2015-01-07 Panasonic Corporation Image processing device, image processing method, and imaging device
US8698753B2 (en) 2008-02-28 2014-04-15 Lg Electronics Inc. Virtual optical input device with feedback and method of controlling the same
US8107721B2 (en) * 2008-05-29 2012-01-31 Mitsubishi Electric Research Laboratories, Inc. Method and system for determining poses of semi-specular objects
US20100008900A1 (en) 2008-07-14 2010-01-14 The University Of Hong Kong Annexin ii compositions for treating or monitoring inflammation or immune-mediated disorders
CN101639800A (en) 2008-08-01 2010-02-03 华为技术有限公司 Display method of screen and terminal
WO2010021375A1 (en) 2008-08-22 2010-02-25 国立大学法人大阪大学 Process for production of protease, protease solution, and solution of pro-form of protease
CN102119530B (en) 2008-08-22 2013-08-21 索尼公司 Image display device and control method
US20110148858A1 (en) * 2008-08-29 2011-06-23 Zefeng Ni View synthesis with heuristic view merging
JP4636149B2 (en) * 2008-09-09 2011-02-23 ソニー株式会社 Image data analysis apparatus, image data analysis method, and program
DE102008047413A1 (en) 2008-09-16 2010-04-15 Hella Kgaa Hueck & Co. Motor vehicle object detection system has a stereo camera with two gray image sensors and a mono camera with a color image sensor
KR20100073191A (en) 2008-12-22 2010-07-01 한국전자통신연구원 Method and apparatus for face liveness using range data
WO2010073547A1 (en) 2008-12-25 2010-07-01 パナソニック株式会社 Image processing device and pseudo-3d image creation device
RU2534073C2 (en) 2009-02-20 2014-11-27 Конинклейке Филипс Электроникс Н.В. System, method and apparatus for causing device to enter active mode
US9159208B2 (en) 2009-03-31 2015-10-13 Koninklijke Philips N.V. Energy efficient cascade of sensors for automatic presence detection
US20100253782A1 (en) * 2009-04-07 2010-10-07 Latent Image Technology Ltd. Device and method for automated verification of polarization-variant images
WO2010127488A1 (en) 2009-05-07 2010-11-11 Joemel Caballero System for controlling light emission of television
US8154615B2 (en) 2009-06-30 2012-04-10 Eastman Kodak Company Method and apparatus for image display control according to viewer factors and responses
US20100328074A1 (en) 2009-06-30 2010-12-30 Johnson Erik J Human presence detection techniques
US8264536B2 (en) * 2009-08-25 2012-09-11 Microsoft Corporation Depth-sensitive imaging via polarization-state mapping
KR101648353B1 (en) 2009-09-25 2016-08-17 삼성전자 주식회사 Image sensor having depth sensor
TWI413896B (en) 2010-04-21 2013-11-01 Elitegroup Computer Sys Co Ltd Energy saving methods for electronic devices
US9357024B2 (en) 2010-08-05 2016-05-31 Qualcomm Incorporated Communication management utilizing destination device user presence probability
US8760517B2 (en) 2010-09-27 2014-06-24 Apple Inc. Polarized images for security
US8912877B2 (en) 2011-02-18 2014-12-16 Blackberry Limited System and method for activating an electronic device using two or more sensors
WO2012155105A1 (en) 2011-05-12 2012-11-15 Apple Inc. Presence sensing
US20120287031A1 (en) 2011-05-12 2012-11-15 Apple Inc. Presence sensing

Patent Citations (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5028138A (en) * 1989-05-23 1991-07-02 Wolff Lawrence B Method of and apparatus for obtaining object data by machine vision form polarization information
US5745163A (en) * 1994-12-16 1998-04-28 Terumo Kabushiki Kaisha Ocular fundus camera
US5793374A (en) * 1995-07-28 1998-08-11 Microsoft Corporation Specialized shaders for shading objects in computer generated images
US6028608A (en) * 1997-05-09 2000-02-22 Jenkins; Barry System and method of perception-based image generation and encoding
US20100277412A1 (en) * 1999-07-08 2010-11-04 Pryor Timothy R Camera Based Sensing in Handheld, Mobile, Gaming, or Other Devices
US20030086590A1 (en) * 2001-11-05 2003-05-08 Koninklijke Philips Electronics N.V. Method for computing optical flow under the epipolar constraint
US20050012742A1 (en) * 2003-03-07 2005-01-20 Jerome Royan Process for managing the representation of at least one 3D model of a scene
US20060209231A1 (en) * 2003-03-12 2006-09-21 Sadao Ioki Image display unit and game machine
US20050195190A1 (en) * 2004-03-04 2005-09-08 Williams James P. Visualization of volume-rendered data with occluding contour multi-planar-reformats
US20060038817A1 (en) * 2004-08-20 2006-02-23 Diehl Avionik Systeme Gmbh Method and apparatus for representing a three-dimensional topography
US20060061569A1 (en) * 2004-09-21 2006-03-23 Kunio Yamada Pseudo 3D image creation device, pseudo 3D image creation method, and pseudo 3D image display system
US20060125936A1 (en) * 2004-12-15 2006-06-15 Gruhike Russell W Multi-lens imaging systems and methods
US20060178570A1 (en) * 2005-02-09 2006-08-10 Robinson M R Methods and apparatuses for noninvasive determinations of analytes
US20060274302A1 (en) * 2005-06-01 2006-12-07 Shylanski Mark S Machine Vision Vehicle Wheel Alignment Image Processing Methods
US20070024614A1 (en) * 2005-07-26 2007-02-01 Tam Wa J Generating a depth map from a two-dimensional source image for stereoscopic and multiview imaging
US20080252846A1 (en) * 2005-09-29 2008-10-16 Essilor International (Compagnie Generale D'optique) Polarizing Ophthalmic Lens Adapted to a Wearer's Eye/Head Behavior
US20080266655A1 (en) * 2005-10-07 2008-10-30 Levoy Marc S Microscopy Arrangements and Approaches
US20070241267A1 (en) * 2006-04-18 2007-10-18 The Trustees Of The University Of Pennsylvania Sensor and polarimetric filters for real-time extraction of polarimetric information at the focal plane, and method of making same
US20080186390A1 (en) * 2006-05-29 2008-08-07 Matsushita Electric Industrial Co., Ltd. Super-resolution device, super-resolution method, super-resolution program, and super-resolution system
US20100013965A1 (en) * 2006-07-18 2010-01-21 The Trustees Of The University Of Pennsylvania Separation and contrast enhancement of overlapping cast shadow components and target detection in shadow using polarization
US20080243383A1 (en) * 2006-12-12 2008-10-02 Ching-Fang Lin Integrated collision avoidance enhanced GN&C system for air vehicle
US20090279807A1 (en) * 2007-02-13 2009-11-12 Katsuhiro Kanamorl System, method and apparatus for image processing and image format
US20100290713A1 (en) * 2007-02-13 2010-11-18 Panasonic Corporation System, method and apparatus for image processing and image format
US20080278571A1 (en) * 2007-05-10 2008-11-13 Mora Assad F Stereoscopic three dimensional visualization system and method of use
US20090040295A1 (en) * 2007-08-06 2009-02-12 Samsung Electronics Co., Ltd. Method and apparatus for reproducing stereoscopic image using depth control
US20090072996A1 (en) * 2007-08-08 2009-03-19 Harman Becker Automotive Systems Gmbh Vehicle illumination system
US20090051793A1 (en) * 2007-08-21 2009-02-26 Micron Technology, Inc. Multi-array sensor with integrated sub-array for parallax detection and photometer functionality
US20100315419A1 (en) * 2008-06-10 2010-12-16 Brandon Baker Systems and Methods for Estimating a Parameter for a 3D model
US7964840B2 (en) * 2008-06-19 2011-06-21 Omnivision Technologies, Inc. High dynamic range image sensor including polarizer and microlens
US20100283883A1 (en) * 2008-06-26 2010-11-11 Panasonic Corporation Image processing apparatus, image division program and image synthesising method
US20130127823A1 (en) * 2008-09-16 2013-05-23 Stephen J. DiVerdi Generating a Depth Map Based on a Single Image
US20110169997A1 (en) * 2008-11-27 2011-07-14 Canon Kabushiki Kaisha Solid-state image sensing element and image sensing apparatus
US20100134492A1 (en) * 2008-12-02 2010-06-03 Disney Enterprises, Inc. Tertiary lighting system
US20100208060A1 (en) * 2009-02-16 2010-08-19 Masanori Kobayashi Liquid droplet recognition apparatus, raindrop recognition apparatus, and on-vehicle monitoring apparatus
US20100283885A1 (en) * 2009-05-11 2010-11-11 Shih-Schon Lin Method for aligning pixilated micro-grid polarizer to an image sensor
US20100302235A1 (en) * 2009-06-02 2010-12-02 Horizon Semiconductors Ltd. efficient composition of a stereoscopic image for a 3-D TV
US20120002018A1 (en) * 2010-01-05 2012-01-05 Panasonic Corporation Three-dimensional image capture device
US20120195583A1 (en) * 2010-10-08 2012-08-02 Vincent Pace Integrated 2D/3D Camera with Fixed Imaging Parameters

Cited By (107)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9041770B2 (en) * 2010-01-05 2015-05-26 Panasonic Intellectual Property Management Co., Ltd. Three-dimensional image capture device
US20120002018A1 (en) * 2010-01-05 2012-01-05 Panasonic Corporation Three-dimensional image capture device
US9036004B2 (en) * 2010-01-05 2015-05-19 Panasonic Intellectual Property Management Co., Ltd. Three-dimensional image capture device
US20110316983A1 (en) * 2010-01-05 2011-12-29 Panasonic Corporation Three-dimensional image capture device
US8760517B2 (en) 2010-09-27 2014-06-24 Apple Inc. Polarized images for security
US9536362B2 (en) 2010-09-27 2017-01-03 Apple Inc. Polarized images for security
US20120257815A1 (en) * 2011-04-08 2012-10-11 Markus Schlosser Method and apparatus for analyzing stereoscopic or multi-view images
US10372191B2 (en) 2011-05-12 2019-08-06 Apple Inc. Presence sensing
US20130027548A1 (en) * 2011-07-28 2013-01-31 Apple Inc. Depth perception device and system
US20150023589A1 (en) * 2012-01-16 2015-01-22 Panasonic Corporation Image recording device, three-dimensional image reproducing device, image recording method, and three-dimensional image reproducing method
US9495795B2 (en) * 2012-01-16 2016-11-15 Panasonic Intellectual Property Management Co., Ltd. Image recording device, three-dimensional image reproducing device, image recording method, and three-dimensional image reproducing method
US9435888B1 (en) * 2012-03-26 2016-09-06 The United States Of America As Represented By The Secretary Of The Navy Shape matching automatic recognition methods, systems and articles of manufacturing
US8937591B2 (en) 2012-04-06 2015-01-20 Apple Inc. Systems and methods for counteracting a perceptual fading of a movable indicator
US20130335787A1 (en) * 2012-06-13 2013-12-19 Pfu Limited Overhead image reading apparatus
US9219838B2 (en) * 2012-06-13 2015-12-22 Pfu Limited Overhead image reading apparatus
CN103491274A (en) * 2012-06-13 2014-01-01 株式会社Pfu Overhead image reading apparatus
US9128188B1 (en) * 2012-07-13 2015-09-08 The United States Of America As Represented By The Secretary Of The Navy Object instance identification using template textured 3-D model matching
US8994780B2 (en) 2012-10-04 2015-03-31 Mcci Corporation Video conferencing enhanced with 3-D perspective control
TWI571827B (en) * 2012-11-13 2017-02-21 財團法人資訊工業策進會 Electronic device and method for determining depth of 3d object image in 3d environment image
US9924142B2 (en) * 2012-11-21 2018-03-20 Omnivision Technologies, Inc. Camera array systems including at least one bayer type camera and associated methods
US20140139642A1 (en) * 2012-11-21 2014-05-22 Omnivision Technologies, Inc. Camera Array Systems Including At Least One Bayer Type Camera And Associated Methods
CN103903222A (en) * 2012-12-26 2014-07-02 财团法人工业技术研究院 Three-dimensional sensing method and three-dimensional sensing device
US20140177942A1 (en) * 2012-12-26 2014-06-26 Industrial Technology Research Institute Three dimensional sensing method and three dimensional sensing apparatus
US9230330B2 (en) * 2012-12-26 2016-01-05 Industrial Technology Research Institute Three dimensional sensing method and three dimensional sensing apparatus
US11353440B2 (en) 2012-12-31 2022-06-07 Omni Medsci, Inc. Time-of-flight physiological measurements and cloud services
US11241156B2 (en) 2012-12-31 2022-02-08 Omni Medsci, Inc. Time-of-flight imaging and physiological measurements
US11160455B2 (en) 2012-12-31 2021-11-02 Omni Medsci, Inc. Multi-wavelength wearable device for non-invasive blood measurements in tissue
US10928374B2 (en) 2012-12-31 2021-02-23 Omni Medsci, Inc. Non-invasive measurement of blood within the skin using array of laser diodes with Bragg reflectors and a camera system
US10918287B2 (en) 2012-12-31 2021-02-16 Omni Medsci, Inc. System for non-invasive measurement using cameras and time of flight detection
US9256963B2 (en) 2013-04-09 2016-02-09 Elc Management Llc Skin diagnostic and image processing systems, apparatus and articles
US9101320B2 (en) 2013-04-09 2015-08-11 Elc Management Llc Skin diagnostic and image processing methods
WO2014168678A1 (en) * 2013-04-09 2014-10-16 Elc Management Llc Skin diagnostic and image processing systems, apparatus and articles
AU2014251372B2 (en) * 2013-04-09 2017-05-18 Elc Management Llc Skin diagnostic and image processing systems, apparatus and articles
US20150049169A1 (en) * 2013-08-15 2015-02-19 Scott Krig Hybrid depth sensing pipeline
US10497140B2 (en) * 2013-08-15 2019-12-03 Intel Corporation Hybrid depth sensing pipeline
US20150103200A1 (en) * 2013-10-16 2015-04-16 Broadcom Corporation Heterogeneous mix of sensors and calibration thereof
WO2015087323A1 (en) * 2013-12-09 2015-06-18 Mantisvision Ltd Emotion based 3d visual effects
US9804670B2 (en) * 2014-01-16 2017-10-31 Samsung Electronics Co., Ltd. Display apparatus and method of controlling the same
US10133349B2 (en) 2014-01-16 2018-11-20 Samsung Electronics Co., Ltd. Display apparatus and method of controlling the same
US20150199008A1 (en) * 2014-01-16 2015-07-16 Samsung Electronics Co., Ltd. Display apparatus and method of controlling the same
US20150348323A1 (en) * 2014-06-02 2015-12-03 Lenovo (Singapore) Pte. Ltd. Augmenting a digital image with distance data derived based on actuation of at least one laser
US9324136B2 (en) * 2014-06-12 2016-04-26 Htc Corporation Method, electronic apparatus, and computer readable medium for processing reflection in image
WO2015200490A1 (en) * 2014-06-24 2015-12-30 Photon-X Visual cognition system
US11206388B2 (en) 2014-12-01 2021-12-21 Sony Corporation Image processing apparatus and image processing method for aligning polarized images based on a depth map and acquiring a polarization characteristic using the aligned polarized images
EP3228977A4 (en) * 2014-12-01 2018-07-04 Sony Corporation Image-processing device and image-processing method
US10861159B2 (en) * 2015-01-27 2020-12-08 Apical Limited Method, system and computer program product for automatically altering a video stream
US20170337692A1 (en) * 2015-01-27 2017-11-23 Apical Ltd Method, system and computer program product for automatically altering a video stream
US11405603B2 (en) 2015-02-27 2022-08-02 Sony Corporation Imaging device, image processing device and image processing method
JPWO2016136086A1 (en) * 2015-02-27 2017-12-07 ソニー株式会社 Imaging apparatus, image processing apparatus, and image processing method
US10798367B2 (en) 2015-02-27 2020-10-06 Sony Corporation Imaging device, image processing device and image processing method
US20190320103A1 (en) * 2015-04-06 2019-10-17 The Texas A&M University System Fusion of inertial and depth sensors for movement measurements and recognition
US10116886B2 (en) 2015-09-22 2018-10-30 JENETRIC GmbH Device and method for direct optical image capture of documents and/or live skin areas without optical imaging elements
EP3147823A3 (en) * 2015-09-22 2017-06-14 Jenetric GmbH Device and method for direct optical recording of documents and/or living areas of skin without imaging optical elements
US20170084044A1 (en) * 2015-09-22 2017-03-23 Samsung Electronics Co., Ltd Method for performing image process and electronic device thereof
US10341641B2 (en) * 2015-09-22 2019-07-02 Samsung Electronics Co., Ltd. Method for performing image process and electronic device thereof
JP2017072499A (en) * 2015-10-08 2017-04-13 キヤノン株式会社 Processor, processing system, imaging apparatus, processing method, program, and recording medium
US20170103280A1 (en) * 2015-10-08 2017-04-13 Canon Kabushiki Kaisha Processing apparatus, processing system, image pickup apparatus, processing method, and non-transitory computer-readable storage medium
US10538326B1 (en) * 2016-08-31 2020-01-21 Amazon Technologies, Inc. Flare detection and avoidance in stereo vision systems
DE102017205619A1 (en) * 2017-04-03 2018-10-04 Robert Bosch Gmbh LiDAR system and method for operating a LiDAR system
US11435478B2 (en) 2017-04-03 2022-09-06 Robert Bosch Gmbh LIDAR system and method for operating a LIDAR system
US11488354B2 (en) * 2017-05-22 2022-11-01 Sony Corporation Information processing apparatus and information processing method
WO2019010214A1 (en) * 2017-07-05 2019-01-10 Facebook Technologies, Llc Eye tracking based on light polarization
JP7103357B2 (en) 2017-07-26 2022-07-20 ソニーグループ株式会社 Information processing equipment, information processing methods, and programs
WO2019021569A1 (en) * 2017-07-26 2019-01-31 ソニー株式会社 Information processing device, information processing method, and program
US11189042B2 (en) * 2017-07-26 2021-11-30 Sony Corporation Information processing device, information processing method, and computer program
JPWO2019021569A1 (en) * 2017-07-26 2020-06-11 ソニー株式会社 Information processing apparatus, information processing method, and program
US20190064337A1 (en) * 2017-08-28 2019-02-28 Samsung Electronics Co., Ltd. Method and apparatus to identify object
US10838055B2 (en) * 2017-08-28 2020-11-17 Samsung Electronics Co., Ltd. Method and apparatus to identify object
JP2019045299A (en) * 2017-09-01 2019-03-22 学校法人東京電機大学 Three-dimensional information acquisition device
CN107680156A (en) * 2017-09-08 2018-02-09 西安电子科技大学 Three-dimensional rebuilding method based on polarization information
CN107635129A (en) * 2017-09-29 2018-01-26 周艇 Three-dimensional three mesh camera devices and depth integration method
US10692224B1 (en) 2017-11-09 2020-06-23 Facebook Technologies, Llc Estimation of absolute depth from polarization measurements
US10311584B1 (en) 2017-11-09 2019-06-04 Facebook Technologies, Llc Estimation of absolute depth from polarization measurements
US11238598B1 (en) 2017-11-09 2022-02-01 Facebook Technologies, Llc Estimation of absolute depth from polarization measurements
WO2019125427A1 (en) * 2017-12-20 2019-06-27 Olympus Corporation System and method for hybrid depth estimation
US11048374B2 (en) * 2018-03-08 2021-06-29 Ebay Inc. Online pluggable 3D platform for 3D representations of items
CN111837100A (en) * 2018-03-08 2020-10-27 电子湾有限公司 Online pluggable three-dimensional platform for three-dimensional representation of an item
US20190278458A1 (en) * 2018-03-08 2019-09-12 Ebay Inc. Online pluggable 3d platform for 3d representations of items
CN109191560A (en) * 2018-06-29 2019-01-11 西安电子科技大学 Monocular based on scattered information correction polarizes three-dimensional rebuilding method
US10598838B2 (en) * 2018-06-29 2020-03-24 Intel Corporation Pixel level polarizer for flexible display panels
CN109191560B (en) * 2018-06-29 2021-06-15 西安电子科技大学 Monocular polarization three-dimensional reconstruction method based on scattering information correction
US20190041562A1 (en) * 2018-06-29 2019-02-07 Intel Corporation Pixel level polarizer for flexible display panels
US11857153B2 (en) 2018-07-19 2024-01-02 Activ Surgical, Inc. Systems and methods for multi-modal sensing of depth in vision systems for automated surgical robots
US11179218B2 (en) 2018-07-19 2021-11-23 Activ Surgical, Inc. Systems and methods for multi-modal sensing of depth in vision systems for automated surgical robots
US11741584B2 (en) * 2018-11-13 2023-08-29 Genesys Logic, Inc. Method for correcting an image and device thereof
US10630925B1 (en) * 2018-12-03 2020-04-21 Facebook Technologies, Llc Depth determination using polarization of light and camera assembly with augmented pixels
US10791282B2 (en) 2018-12-13 2020-09-29 Fenwick & West LLP High dynamic range camera assembly with augmented pixels
US11509803B1 (en) 2018-12-13 2022-11-22 Meta Platforms Technologies, Llc Depth determination using time-of-flight and camera assembly with augmented pixels
US10791286B2 (en) 2018-12-13 2020-09-29 Facebook Technologies, Llc Differentiated imaging using camera assembly with augmented pixels
US10855896B1 (en) 2018-12-13 2020-12-01 Facebook Technologies, Llc Depth determination using time-of-flight and camera assembly with augmented pixels
US11399139B2 (en) 2018-12-13 2022-07-26 Meta Platforms Technologies, Llc High dynamic range camera assembly with augmented pixels
US11567203B2 (en) * 2019-03-12 2023-01-31 Sick Ag Light line triangulation apparatus
US11754828B2 (en) 2019-04-08 2023-09-12 Activ Surgical, Inc. Systems and methods for medical imaging
US11389051B2 (en) 2019-04-08 2022-07-19 Activ Surgical, Inc. Systems and methods for medical imaging
US10925465B2 (en) 2019-04-08 2021-02-23 Activ Surgical, Inc. Systems and methods for medical imaging
US11513223B2 (en) 2019-04-24 2022-11-29 Aeye, Inc. Ladar system and method with cross-receiver
US10921450B2 (en) 2019-04-24 2021-02-16 Aeye, Inc. Ladar system and method with frequency domain shuttering
US10656272B1 (en) 2019-04-24 2020-05-19 Aeye, Inc. Ladar system and method with polarized receivers
US10641897B1 (en) 2019-04-24 2020-05-05 Aeye, Inc. Ladar system and method with adaptive pulse duration
US11405601B2 (en) 2019-09-17 2022-08-02 Meta Platforms Technologies, Llc Polarization capture device, system, and method
WO2021055406A1 (en) * 2019-09-17 2021-03-25 Facebook Technologies, Llc Polarization capture device, system, and method
US11783444B1 (en) * 2019-11-14 2023-10-10 Apple Inc. Warping an input image based on depth and offset information
US10902623B1 (en) 2019-11-19 2021-01-26 Facebook Technologies, Llc Three-dimensional imaging with spatial and temporal coding for depth camera assembly
US11348262B1 (en) 2019-11-19 2022-05-31 Facebook Technologies, Llc Three-dimensional imaging with spatial and temporal coding for depth camera assembly
US11194160B1 (en) 2020-01-21 2021-12-07 Facebook Technologies, Llc High frame rate reconstruction with N-tap camera sensor
CN112528942A (en) * 2020-12-23 2021-03-19 深圳市汇顶科技股份有限公司 Fingerprint identification device, display screen and electronic equipment
CN117523112A (en) * 2024-01-05 2024-02-06 深圳市宗匠科技有限公司 Three-dimensional model building method and system, control equipment and storage medium thereof

Also Published As

Publication number Publication date
US20140247361A1 (en) 2014-09-04
US20120075473A1 (en) 2012-03-29
US8760517B2 (en) 2014-06-24
TWI489858B (en) 2015-06-21
TW201230773A (en) 2012-07-16
US9536362B2 (en) 2017-01-03
WO2012044619A1 (en) 2012-04-05

Similar Documents

Publication Publication Date Title
US20120075432A1 (en) Image capture using three-dimensional reconstruction
US8497897B2 (en) Image capture using luminance and chrominance sensors
US10349037B2 (en) Structured-stereo imaging assembly including separate imagers for different wavelengths
CN101971072B (en) Image sensor and focus detection apparatus
CN204697179U (en) There is the imageing sensor of pel array
US20110063421A1 (en) Stereoscopic image display apparatus
WO2015180645A1 (en) Projection processor and associated method
US8902289B2 (en) Method for capturing three dimensional image
US11438568B2 (en) Imaging apparatus
CN105306786A (en) Image processing methods for image sensors with phase detection pixels
US11252315B2 (en) Imaging apparatus and information processing method
WO2019082820A1 (en) Camera system
CN107534747A (en) Filming apparatus
CN104756493A (en) Image capture device, image processing device, image capture device control program, and image processing device control program
CN103621078A (en) Image processing apparatus and image processing program
KR20220072834A (en) Electronics
JP2006267767A (en) Image display device
CN103503447B (en) The control method of filming apparatus and filming apparatus
JP2002218510A (en) Three-dimensional image acquisition device
TW201322754A (en) High dynamic range image sensing device and image sensing method and manufacturing method thereof
JP2005117316A (en) Apparatus and method for photographing and program
JP2014216783A (en) Image processing apparatus, method and program

Legal Events

Date Code Title Description
AS Assignment

Owner name: APPLE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BILBREY, BRETT;CULBERT, MICHAEL F.;SIMON, DAVID I.;AND OTHERS;SIGNING DATES FROM 20110923 TO 20110927;REEL/FRAME:026977/0705

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION