US20120133798A1 - Electronic camera and object scene image reproducing apparatus - Google Patents


Info

Publication number
US20120133798A1
Authority
US
United States
Prior art keywords
image
region
position information
face
scene image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/298,509
Inventor
Ryo SAKAJI
Nobuhiko ICHII
Yurie SAKAI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sanyo Electric Co Ltd
Original Assignee
Sanyo Electric Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sanyo Electric Co Ltd filed Critical Sanyo Electric Co Ltd
Assigned to SANYO ELECTRIC CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ICHII, NOBUHIKO; SAKAI, YURIE; SAKAJI, RYO
Publication of US20120133798A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/166 - Detection; Localisation; Normalisation using acquisition arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/10 - Image acquisition
    • G06V10/12 - Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 - Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/147 - Details of sensors, e.g. sensor lenses
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/61 - Control of cameras or camera modules based on recognised objects
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/61 - Control of cameras or camera modules based on recognised objects
    • H04N23/611 - Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/63 - Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633 - Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N23/635 - Region indicators; Field of view indicators
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/69 - Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 - Circuitry for compensating brightness variation in the scene
    • H04N23/72 - Combination of two or more compensation controls
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/67 - Focus control based on electronic image sensor signals
    • H04N23/673 - Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method

Definitions

  • the present invention relates to an electronic camera and an object-scene-image reproducing apparatus. More specifically, the present invention relates to an electronic camera and an object-scene-image reproducing apparatus for reproducing an object scene image with attention to a specific position within the object scene image, for example.
  • an image reproducing apparatus is known which, upon reproduction of image data obtained by photographing with a digital still camera, which is an electronic camera, detects a face image included in a subject image represented by the image data and displays the detected face image in an enlarged manner.
  • a face image, which is probably noticed upon photographing, is prioritized over other portions and displayed in an enlarged manner, thereby making it easy to confirm whether or not the face image portion is in focus.
  • in such an apparatus, the face image included in the subject image represented by the image data is determined, upon reproduction, as the portion to be detected and displayed in an enlarged manner.
  • therefore, the portion displayed in an enlarged manner does not match a portion designated when photographing, i.e., a portion to be focused.
  • moreover, a configuration of a detector, provided in the reproducing apparatus, for detecting the portion becomes complicated.
  • An electronic camera comprises: an imager, having an imaging surface for capturing an object scene, for generating an object scene image; a designator for designating a specific position within the object scene image generated by the imager; a recorder for recording, together with position information of the specific position designated by the designator, the object scene image generated by the imager; and a reproducer for reproducing the object scene image recorded by the recorder, using the position information recorded by the recorder.
  • a searcher for searching a feature image included in the object scene image generated by the imager is further provided, wherein the designator designates the specific position based on a position of the feature image detected by the searcher.
  • an adjustor for adjusting a photographing condition of the imager based on the object scene image at the specific position designated by the designator is further provided, wherein the recorder records an object scene image created in accordance with the photographing condition adjusted by the adjustor.
  • the photographing condition is a focal distance of the imager.
  • the reproducer enlarges and reproduces the object scene image about a position specified by using the position information recorded by the recorder.
  • An object-scene-image reproducing apparatus is an object-scene-image reproducing apparatus for reproducing an object scene image from a recording medium recorded thereon with position information indicating a specific position within the object scene image, together with the object scene image, and the object-scene-image reproducing apparatus comprises a reproducer for reproducing the object scene image using the position information.
  • the reproducer enlarges and reproduces the object scene image about a position specified by using the position information.
  • FIG. 1 is a block diagram showing a digital camera which is a first embodiment of the present invention
  • FIG. 2 is a descriptive diagram for describing an operation of the first embodiment of the present invention
  • FIG. 3(A) is a descriptive diagram for describing the operation of the first embodiment of the present invention.
  • FIG. 3(B) is a descriptive diagram for describing the operation of the first embodiment of the present invention.
  • FIG. 3(C) is a descriptive diagram for describing the operation of the first embodiment of the present invention.
  • FIG. 4 is a descriptive diagram for describing the operation of the first embodiment of the present invention.
  • FIG. 5 is a descriptive diagram for describing the operation of the first embodiment of the present invention.
  • FIG. 6 is a descriptive diagram for describing the operation of the first embodiment of the present invention.
  • FIG. 7 is a descriptive diagram for describing the operation of the first embodiment of the present invention.
  • FIG. 8 is a descriptive diagram for describing the operation of the first embodiment of the present invention.
  • FIG. 9 is a descriptive diagram for describing the operation of the first embodiment of the present invention.
  • FIG. 10 is a descriptive diagram for describing the operation of the first embodiment of the present invention.
  • FIG. 11 is a descriptive diagram for describing the operation of the first embodiment of the present invention.
  • FIG. 12(A) is a descriptive diagram used for a comparison with the operation of the embodiment of the present invention.
  • FIG. 12(B) is a descriptive diagram used for a comparison with the operation of the embodiment of the present invention.
  • FIG. 12(C) is a descriptive diagram used for a comparison with the operation of the embodiment of the present invention.
  • FIG. 13(A) is a descriptive diagram for describing the operation of the embodiment of the present invention.
  • FIG. 13(B) is a descriptive diagram for describing the operation of the embodiment of the present invention.
  • FIG. 13(C) is a descriptive diagram for describing the operation of the embodiment of the present invention.
  • FIG. 14A is a flowchart for describing an operation of the first embodiment of the present invention.
  • FIG. 14B is a flowchart for describing the operation of the first embodiment of the present invention.
  • FIG. 15 is a flowchart for describing the operation of the first embodiment of the present invention.
  • FIG. 16 is a flowchart for describing the operation of the first embodiment of the present invention.
  • FIG. 17A is a flowchart for describing the operation of the first embodiment of the present invention.
  • FIG. 17B is a flowchart for describing the operation of the first embodiment of the present invention.
  • FIG. 18 is a descriptive diagram for describing the operation of the first embodiment of the present invention.
  • FIG. 19 is a descriptive diagram for describing the operation of the first embodiment of the present invention.
  • FIG. 20A is a flowchart for describing the operation of the first embodiment of the present invention.
  • FIG. 20B is a flowchart for describing the operation of the first embodiment of the present invention.
  • FIG. 21 is a flowchart for describing the operation of the first embodiment of the present invention.
  • FIG. 22 is a descriptive diagram for describing the operation of the first embodiment of the present invention.
  • FIG. 23 is a descriptive diagram used for a comparison with the operation of the first embodiment of the present invention.
  • FIG. 24 is a block diagram showing an image reproducing apparatus which is a second embodiment of the present invention.
  • FIG. 25A is a flowchart for describing an operation of the second embodiment of the present invention.
  • FIG. 25B is a flowchart for describing the operation of the second embodiment of the present invention.
  • FIG. 26 is a descriptive diagram for describing the embodiment of the present invention.
  • a digital camera 10 which is a first embodiment of the present invention includes an optical lens 12 .
  • An optical image of the object scene is projected onto an imaging surface 14 f of an image sensor 14 through the optical lens 12, and then photoelectrically converted. Thereby, an electric charge representing the object scene, i.e., a raw image signal, is generated.
  • a CPU 42 instructs a TG/SG 18 to repeatedly perform a pre-exposure and a thinning-out reading in order to execute a through-image process.
  • the TG/SG 18 applies a plurality of timing signals to the image sensor 14 in order to execute a pre-exposure of the imaging surface 14 f of the image sensor 14 and a thinning-out reading of the electric charge thus obtained.
  • the raw image signal generated on the imaging surface 14 f is read out according to an order of raster scanning in response to a vertical synchronization signal Vsync generated once every 1/30 seconds.
  • the raw image signal outputted from the image sensor 14 is subjected to a series of processes, such as correlated double sampling, automatic gain adjustment, and A/D conversion, by a CDS/AGC/AD circuit 16.
  • a signal-processing circuit 20 applies processes such as a white balance adjustment, a color separation, and a YUV conversion to the raw image data outputted from the CDS/AGC/AD circuit 16 and writes YUV-formatted image data to a display image region 28 a of an SDRAM 28 through a memory control circuit 26 .
  • a video encoder 30 reads out the image data accommodated in the display image region 28 a through the memory control circuit 26 at every 1/30 seconds, and converts the read image data into a composite video signal. Thus, a real-time moving image (through image) representing the object scene is displayed on an LCD monitor 32 .
  • An AE/AF evaluation circuit 24 creates a luminance evaluation value indicating a brightness of the object scene and a focus evaluation value indicating a degree of focus of the object scene, based on the image data outputted from the signal processing circuit 20 .
  • the created luminance evaluation value and focus evaluation value are applied to the CPU 42 .
  • AE is an abbreviation of “Auto Exposure”
  • AF is an abbreviation of “Auto Focus”.
  • the CPU 42 executes an AE process for a through image and an AF process.
  • a pre-exposure time period set to the TG/SG 18 is controlled based on the luminance evaluation value from the AE/AF evaluation circuit 24 .
  • the brightness of the through image is moderately adjusted.
  • the optical lens 12 is driven by a driver 44 .
  • the display image region 28 a is made up of image data having 240 pixels vertically and 320 pixels horizontally, and set as a search region in which a face detection is performed. Then, a maximum-sized face determining region shown in FIG. 3(A) is arranged at an upper left of the search region. Coordinates at the upper left of the face determining region match those at the upper left of the search region.
  • a feature amount of a partial image belonging to the face determining region is checked against that of a dictionary stored in a flash memory 48 .
  • face information in which a size of the face determining region at this point, a central position of the face determining region, and a degree of reliability are described is created, and accommodated in a face information region 28 d of the SDRAM 28 .
  • the degree of reliability indicates a matching ratio therebetween in the checking process in which a feature amount of the partial image belonging to the face determining region is checked against that of the dictionary stored in the flash memory 48 .
  • the higher the matching ratio, the greater the degree of reliability with which the image is determined as a face.
  • the face determining region moves over the search region in a manner shown in FIG. 4 .
  • the degree of reliability is dependent on the dictionary stored in the flash memory 48 , and a face facing a front can generally be detected with a higher degree of reliability than a face facing obliquely or looking down.
  • a middle-sized face determining region shown in FIG. 3(B) is arranged at an upper left of the search region in place of the face determining region shown in FIG. 3(A) to thereby execute the processes as described above again.
  • a minimum-sized face determining region shown in FIG. 3(C) is arranged at the upper left of the search region to thereby repeat the processes as described above.
  • the checking process of the feature amounts and the moving process of the face determining region are executed three times by utilizing in turn the three face determining regions in descending order by size, i.e., the maximum size, the middle size, and the minimum size.
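The three-pass search described above can be summarized in code form. The sketch below is a minimal illustration, assuming a hypothetical `matches_dictionary` function that checks the feature amount of the partial image at a given position and size against the stored dictionary and returns a degree of reliability (or None on no match); the region sizes and step width are illustrative choices, not values taken from the text.

```python
# Minimal sketch of the three-pass face search: three face determining
# regions are used in descending order of size, each swept over the
# 320 x 240 search region in raster order. `matches_dictionary` is a
# hypothetical matcher returning a degree of reliability or None.
SEARCH_W, SEARCH_H = 320, 240      # search region = display image region
REGION_SIZES = [120, 80, 40]       # maximum, middle, minimum (illustrative)
STEP = 8                           # movement per iteration (illustrative)

def search_faces(image, matches_dictionary):
    face_info = []
    for size in REGION_SIZES:                          # larger faces first
        for y in range(0, SEARCH_H - size + 1, STEP):
            for x in range(0, SEARCH_W - size + 1, STEP):
                reliability = matches_dictionary(image, x, y, size)
                if reliability is not None:
                    face_info.append({                 # central position, size,
                        "center": (x + size // 2,      # and degree of reliability,
                                   y + size // 2),     # as in the face information
                        "size": size,
                        "reliability": reliability})
    return face_info
```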
  • the face information in which the central position, the size, and the degree of reliability of the face determining region at this point are described is created, and thereby, the face information accommodated in the face information region 28 d is updated.
  • the CPU 42 instructs a character generator 34 to perform an OSD display of a character C 1 defined by the face information.
  • the character generator 34 applies character data to the LCD monitor 32 in order to display the character C 1 having the size written in the face information at the position written in the face information.
  • each character C 1 is displayed so as to be overlapped with the through image in a manner shown in FIG. 6.
  • a region for obtaining the focus evaluation value is set to a position of the face where the face is detected, and when a plurality of faces are detected, the region for obtaining the focus evaluation value is set to a position of the face nearest the center position of an angle of view.
  • the character data is applied to the LCD monitor 32 .
  • the position of the face nearest the center position of the angle of view is the position of the face of the person P 3 .
  • a character C 2 is displayed to be overlapped with the through image according to a manner shown in FIG. 7 .
  • the CPU 42 executes the AF process and the AE process in a different mode depending on the detection result of the face information.
  • the CPU 42 executes the AE process and the AF process, using the central region of the imaging surface as a reference.
  • the central region of the imaging surface is provided at the center of the imaging surface as a region having a high possibility of including a subject to be photographed. However, the detailed description is omitted.
  • the CPU 42 uses the face information to determine a designated region to be designated on the imaging surface, and applies the character data to the LCD monitor 32 in order to display the designated region.
  • a character C 3 is displayed to be overlapped with the through image in a manner shown in FIG. 8 at a time when a setting of the focal position of the optical lens 12 is completed by the AF process described later.
  • a user becomes able to know that the AF process is completed.
  • the designated region is set to a position of the face determining region when the face is detected in the face determining process, and when a plurality of faces are detected, the designated region is set to a position of the face determining region when a face nearest the center position of the angle of view is detected in the face determining process.
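As a concrete illustration of this selection rule, the following sketch (reusing the hypothetical `face_info` records from the search sketch above) picks the face whose center lies nearest the center of the angle of view, and returns None when no face was detected so that the caller can fall back to the central region of the imaging surface.

```python
import math

def choose_designated_region(face_info, frame_w=320, frame_h=240):
    # No face detected: the camera uses the central region instead.
    if not face_info:
        return None
    cx, cy = frame_w / 2, frame_h / 2
    # One face: it is chosen; several faces: the one nearest the center wins.
    return min(face_info,
               key=lambda f: math.hypot(f["center"][0] - cx,
                                        f["center"][1] - cy))
```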
  • the AE process is executed by giving importance to the designated region, and the AF process is executed using the designated region as a reference, i.e., using the image signal obtained from the designated region.
  • the exposure time period set to the TG/SG 18 is set to an optimum value.
  • the optical lens 12 is set to a focal position by the driver 44 .
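The AF process evaluates the focus evaluation value obtained from the designated region while the driver 44 moves the optical lens 12. A common contrast-AF strategy for this, the hill-climbing method named in the classification above, can be sketched as follows; `evaluate_at` is a hypothetical stand-in for "move the lens to this position and return the focus evaluation value of the designated region".

```python
def hill_climb_af(evaluate_at, lens_positions):
    """Step the lens through candidate focal positions and stop once the
    focus evaluation value (high-frequency energy of the designated
    region) starts to fall, i.e., just past the contrast peak."""
    best_pos = lens_positions[0]
    best_val = evaluate_at(best_pos)
    for pos in lens_positions[1:]:
        val = evaluate_at(pos)
        if val < best_val:      # passed the peak: earlier position was in focus
            break
        best_pos, best_val = pos, val
    return best_pos             # focal position then set via the driver
```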
  • the face information is detected in four face determining regions as shown in FIG. 6 .
  • the position of the face determining region where the face nearest the center position of the angle of view is detected is the determination region where the face of the person P 3 is detected; therefore, as shown in FIG. , when the region equivalent to the determination region where the face of the person P 1 is detected is a region E 1, the region equivalent to the determination region where the face of the person P 2 is detected is a region E 2, the region equivalent to the determination region where the face of the person P 3 is detected is a region E 3, and the region equivalent to the determination region where the face of the person P 4 is detected is a region E 4, the designated region is the region E 3 equivalent to the determination region where the face of the person P 3 is detected.
  • the AE process is performed, importance is given to the luminance evaluation value obtained from the region E 3 which becomes the designated region while the luminance evaluation values obtained from the regions E 1 , E 2 , E 4 which are other regions are also used.
  • the AE process is performed using a luminance evaluation value calculated in such a manner that a degree of contribution of the luminance evaluation value obtained from the region E 3 is 50%, and a whole degree of contribution of the luminance evaluation values obtained from the regions E 1, E 2, and E 4 is 50%.
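Numerically, the weighting works out as in this small sketch; the per-region luminance values are made-up illustrations.

```python
def weighted_luminance(e3, others):
    """50% contribution from the designated region E3, with the remaining
    50% shared equally by the other regions (E1, E2, E4)."""
    return 0.5 * e3 + 0.5 * (sum(others) / len(others))

# If E3 reads 180 and E1, E2, E4 read 90, 100, 110 (illustrative values),
# the AE process uses 0.5*180 + 0.5*100 = 140 for exposure control.
print(weighted_luminance(180, [90, 100, 110]))   # -> 140.0
```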
  • the CPU 42 instructs the TG/SG 18 to perform a main exposure and all-pixel reading, and instructs a JPEG encoder 36 to perform a JPEG compression in order to execute a recording process.
  • the positions and the sizes of the regions E 1, E 2, E 3, and E 4 are set based on the positions and sizes of the determination regions where the faces of the persons P 1, P 2, P 3, and P 4 are detected; however, the positions and the sizes thereof may not strictly be the same.
  • the position and the size of each of the regions E 1, E 2, E 3, and E 4 are set by combining a total of 256 partial regions, i.e., 16 vertical regions × 16 horizontal regions, set to the imaging surface 14 f, for example.
  • the TG/SG 18 applies a plurality of timing signals to the image sensor 14 in order to execute a main exposure of the imaging surface 14 f of the image sensor 14 and reading out of all the electric charges thus obtained.
  • the raw image signal generated on the imaging surface 14 f is read out according to an order of raster scanning.
  • the raw image signal outputted from the image sensor 14 is subjected to a series of processes, such as correlated double sampling, automatic gain adjustment, and A/D conversion, by the CDS/AGC/AD circuit 16.
  • the signal processing circuit 20 applies processes such as a white balance adjustment, a color separation, and a YUV conversion to the raw image data outputted from the CDS/AGC/AD circuit 16, so that the raw image data is converted into image data in a YUV format with a resolution higher than that of the image data accommodated in the display image region 28 a, i.e., image data configured by all pixels of the image sensor 14, of which the total number of pixels is about 5 million, i.e., having 1944 pixels vertically and 2592 pixels horizontally.
  • the converted image data is written to an uncompressed image region 28 b of the SDRAM 28 through the memory control circuit 26 .
  • the JPEG encoder 36 reads out the image data accommodated in the uncompressed image region 28 b through the memory control circuit 26 , compresses the read image data in a JPEG format, and writes the compressed image data, i.e., JPEG data, to a compressed image region 28 c through the memory control circuit 26 .
  • the JPEG data thus obtained is thereafter read out by the CPU 42 , and is recorded together with the position information in the recording medium 40 in a file format through the I/F 38 when there is position information indicating a position of the designated region determined by the detection of the face information.
  • the recording medium 40 is capable of recording a plurality of image files.
  • One of the files recorded in the recording medium 40 in a file format via the I/F 38 is selected to read out the JPEG data therefrom, and the read JPEG data is written to the compressed image region 28 c of the SDRAM 28 .
  • a JPEG decoder 37 reads out the JPEG data accommodated in the compressed image region 28 c through the memory control circuit 26 , decompresses the read JPEG data, and writes the obtained image data to the uncompressed image region 28 b through the memory control circuit 26 .
  • the image data written to the uncompressed image region 28 b is read out through the memory control circuit 26 , and from the read image data, image data for display, having a resolution lower than that of the image data is created and written to the display image region 28 a of the SDRAM 28 .
  • the video encoder 30 reads out the image data accommodated in the display image region 28 a through the memory control circuit 26 at every 1/30 seconds, and converts the read image data into a composite video signal. As a result, a reproduced image is displayed on the LCD monitor 32 .
  • a zoom display is so performed that a central position of a reproduction zoom process is set based on the position information.
  • the zoom display is so performed that the center of the image is set to the central position of the reproduction zoom process.
  • the zoom display is so performed that image data obtained by performing a zoom process on the image data written to the uncompressed image region 28 b based on a zoom magnification and a zoom center position is accommodated in the display image region 28 a.
  • the position information recorded in the recording medium 40 is represented by the number of pixels on the image data accommodated in the display image region 28 a, and therefore, in reproducing, it is converted into position information represented by the number of pixels on the image data written to the uncompressed image region 28 b of the SDRAM 28, and the converted position information is used for the reproduction zoom process.
  • the display image region 28 a is made up of the image data having 240 pixels vertically and 320 pixels horizontally.
  • since the image data written to the uncompressed image region 28 b of the SDRAM 28 by reproducing the JPEG data is made up of image data having 1944 pixels vertically and 2592 pixels horizontally, a value of “8.1” obtained by dividing 1944 by 240 is multiplied by a value representing a vertical position of the image data written to the display image region 28 a, and a value of “8.1” obtained by dividing 2592 by 320 is multiplied by the value representing the horizontal position of the image data written to the display image region 28 a.
  • the position information recorded in the recording medium 40 is converted into the position information indicating the position on the image data written to the uncompressed image region 28 b of the SDRAM 28 by reproducing the JPEG data, and the converted position information is used for the reproduction zoom process.
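The conversion amounts to scaling the recorded display-resolution coordinates by 8.1 in each direction; a minimal worked example:

```python
DISPLAY_W, DISPLAY_H = 320, 240     # display image region 28a
FULL_W, FULL_H = 2592, 1944         # decoded JPEG in uncompressed region 28b

def to_full_resolution(x_disp, y_disp):
    # 2592 / 320 = 8.1 horizontally, 1944 / 240 = 8.1 vertically
    return x_disp * FULL_W / DISPLAY_W, y_disp * FULL_H / DISPLAY_H

# A position recorded as (160, 120) at display resolution corresponds
# to (1296.0, 972.0) on the decoded image used for the zoom process.
print(to_full_resolution(160, 120))
```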
  • the character data is applied to the LCD monitor 32 in order to display the central position.
  • a character C 4 is displayed to be overlapped with the reproduced image in a manner shown in FIG. 10 .
  • the character C 4 serves to indicate the set central position.
  • the character data indicating that the central position of the reproduction zoom process is set is applied to the LCD monitor 32 based on the position information corresponding to the JPEG data, and in this state, a character C 5 may be displayed to be overlapped with the reproduced image in a manner shown in FIG. 11 .
  • the character C 4 and the character C 5 like these may not be displayed.
  • when the position information is not used, the center of the image is the central position of the zoom process, and the image is displayed in an enlarged manner, as shown in FIG. 12(A) to FIG. 12(C).
  • in this case, the central position needs to be changed in order to view the noticed portion.
  • when the position information is used, a position corresponding to the position information is the central position, and the image is displayed in an enlarged manner, as shown in FIG. 13(A) to FIG. 13(C).
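One way to realize the reproduction zoom about the recorded position is to take a window of 1/magnification of the image around the converted position and scale it up for display. The sketch below clamps the window so it stays inside the image, which is an implementation choice not spelled out in the text.

```python
def zoom_window(img_w, img_h, magnification, center):
    """Source rectangle for the reproduction zoom process: a window of
    1/magnification of the image, centered on the converted position
    information and clamped to the image bounds."""
    w, h = img_w / magnification, img_h / magnification
    x = min(max(center[0] - w / 2, 0), img_w - w)
    y = min(max(center[1] - h / 2, 0), img_h - h)
    return x, y, w, h   # this region is then scaled to the display size

# Zooming 2x about the converted position (1296, 972) of a 2592 x 1944 image:
print(zoom_window(2592, 1944, 2.0, (1296, 972)))  # -> (648.0, 486.0, 1296.0, 972.0)
```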
  • the CPU 42 executes in parallel a plurality of tasks including a photograph main task shown in FIG. 14A , FIG. 14B , and FIG. 15 and a face detecting task shown in FIG. 16 , FIG. 17A , and FIG. 17B . It is noted that a control program corresponding to these tasks is stored in a flash memory 48 .
  • the face detecting task is activated in a step S 1 , and the through-image process is executed in a step S 3 .
  • by the process in the step S 1, a process of the face detecting task shown in FIG. 16, FIG. 17A, and FIG. 17B is started.
  • the through image is displayed on the LCD monitor 32 .
  • a key state signal is fetched from the key input device 46 .
  • in a step S 7, it is determined whether or not the shutter button 46 S is half-depressed, and when NO is determined, the AE/AF process for a through image is executed in a step S 9, and the process returns to the step S 5.
  • the AE/AF process for a through image shown in the step S 9 is performed according to a flowchart shown in FIG. 15 .
  • in a step S 911, it is determined whether or not a value of a face detection flag, indicating that the face is detected by a face searching process to be described later, is “1”, and when YES is determined, the face information is used to determine the designated region in a step S 913.
  • when the detected face information is one, the designated region is set to the central position of the face determining region when the face is detected in the face determining process, and when a plurality of faces are detected, the designated region is set to the central position of the face determining region when the face nearest the center position of the angle of view is detected in the face determining process.
  • a character display (display of the character C 2 ) indicating the designated region is performed in a step S 915
  • an AE process giving importance to the designated region is performed in a step S 917
  • an AF process using the designated region as a reference is performed in a step S 919 , and then, the process is restored to a routine at a hierarchical upper level.
  • the AE process is performed by giving importance to the luminance evaluation value obtained from the designated region, and by also using the luminance evaluation values obtained from the regions equivalent to the other face determining regions.
  • when NO is determined in the step S 911, an AE process giving importance to the central region of the object scene image is performed in a step S 923, an AF process using the central region of the object scene image as a reference is performed in a step S 925, and then, the process is restored to a routine at a hierarchical upper level.
  • in the AE/AF process for a through image shown in the step S 9, irrespective of whether or not the face is detected by the face searching process, the AE process giving importance to the central region of the object scene image and the AF process using the central region of the object scene image as a reference may be performed as a simple AE/AF process.
  • when YES is determined in the step S 7, it is determined whether or not the value of the face detection flag indicating that the face is detected by the face searching process is “1” in a step S 11, and when YES is determined, the face information is used to determine the designated region in a step S 13.
  • when the detected face information is one, the designated region is set to a position of the face determining region when the face is detected in the face determining process, and when a plurality of faces are detected, the designated region is set to a position of the face determining region when a face nearest the center position of the angle of view is detected in the face determining process.
  • a character display (display of the character C 3 ) indicating the designated region is performed in a step S 15
  • the AE process giving importance to the designated region is performed in a step S 17
  • the AF process using the designated region as a reference is performed in a step S 19 , and then, the process proceeds to a step S 21 .
  • the AE process is performed by giving importance to the luminance evaluation value obtained from the face determining region as the designated region while using, together therewith, the luminance evaluation values obtained from other face determining regions.
  • the face information is detected in the four face determining regions as shown in FIG. 6 .
  • the position of the face determining region where the face nearest the center position of the angle of view is detected is the determination region where the face of the person P 3 is detected; therefore, as shown in FIG. , when the region equivalent to the determination region where the face of the person P 1 is detected is a region E 1, the region equivalent to the determination region where the face of the person P 2 is detected is a region E 2, the region equivalent to the determination region where the face of the person P 3 is detected is a region E 3, and the region equivalent to the determination region where the face of the person P 4 is detected is a region E 4, the designated region is the region E 3 equivalent to the determination region where the face of the person P 3 is detected.
  • the AE process is performed using a luminance evaluation value calculated in such a manner that a degree of contribution of the luminance evaluation value obtained from the region E 3 is 50%, and a whole degree of contribution of the luminance evaluation values obtained from the regions E 1, E 2, and E 4 is 50%.
  • when NO is determined in the step S 11, the AE process giving importance to the central region of the object scene image is performed in a step S 23, the AF process using the central region of the object scene image as a reference is performed in a step S 25, and then, the process proceeds to the step S 21.
  • in the step S 21, similar to the step S 5, the key state signal is fetched from the key input device 46.
  • in a step S 27, it is determined whether or not the shutter button 46 S is in a half-depressed state, and when YES is determined, the process returns to the step S 21.
  • while the half-depressed state of the shutter button 46 S is held, the character display in the step S 15 and the adjusted values of the photographing condition in the steps S 17 and S 19, or the steps S 23 and S 25, are fixed.
  • when NO is determined in the step S 27, it is determined whether or not the shutter button 46 S is completely depressed in a step S 29, and when YES is determined, a recording process is executed in a step S 31, and the task is ended.
  • when NO is determined in the step S 29, it is determined that the half-depressed state is canceled without the shutter button 46 S being completely depressed; thus, a process in a step S 33 for deleting the character indicating the designated region displayed in the step S 15 is executed, and the process proceeds to the step S 9.
  • the JPEG data representing the object scene image at a time when the shutter button 46 S is operated is recorded in the recording medium 40 in a file format. The detail is described later.
  • in a step S 41, the face information is initialized to a state in which no face information is obtained.
  • when the vertical synchronization signal Vsync is generated, YES is determined in a step S 43, the face searching process is executed in a step S 45, and it is determined whether or not the value of the face detection flag indicating that the face is detected by the face searching process is “1” in a step S 47.
  • when YES is determined, the character C 1 is displayed according to the face information, and when NO is determined, the character C 1 is non-displayed, and then, the process returns to the step S 43.
  • the character C 1 is displayed to be overlapped with the through image in a manner shown in FIG. 6 .
  • the face searching process shown as the step S 45 is executed according to a subroutine shown in FIG. 17A and FIG. 17B .
  • the setting of the face determining region is initialized.
  • the maximum-sized face determining region is arranged at the upper left of the search region set to the display image region 28 a.
  • the face determining region is set on the display image region 28 a shown in FIG. 2 so that the coordinates at the upper left of the face determining region match the coordinates at the upper left of the search region.
  • the value of the face detection flag for indicating that the face is detected is initialized to “0” which means that the face is not detected.
  • in a step S 65, the feature amount of the set face determining region is detected, and in a step S 67, the detected feature amount is checked against the feature amount of the dictionary. In a step S 69, it is determined whether or not the partial image belonging to the face determining region is a face image based on the checking result in the step S 67.
  • when YES is determined, the face information is updated in a step S 71.
  • the face information includes: the central position and the size of the face determining region when it is determined to be the face image; and the degree of reliability, as shown in FIG. 18 .
  • in a step S 73, the value of the face detection flag is set to “1”, and then, the process proceeds to a step S 75.
  • the degree of reliability indicates, in the checking process in which the feature amount of the partial image belonging to the face determining region is checked against that of the dictionary stored in the flash memory 48 , a ratio of being coincident therebetween. The higher the matching ratio, the greater the degree of reliability in which the image is determined as a face.
  • when NO is determined in the step S 69, the process proceeds to the step S 75 without performing the steps S 71 and S 73.
  • in the step S 75, it is determined whether or not the coordinates at the lower right of the face determining region are coincident with the coordinates at the lower right of the search region.
  • when NO is determined, the face determining region is moved by a predetermined amount in a raster direction in a step S 77, and the process returns to the step S 65.
  • when YES is determined in the step S 75, it is determined whether or not the size of the face determining region is “minimum” in a step S 79.
  • when YES is determined, the process is restored to a routine at a hierarchical upper level, assuming that the search of the face image from the search region is ended.
  • when the size of the face determining region is one of “maximum” and “middle”, the size of the face determining region is reduced by one step in a step S 81, the face determining region is arranged at the upper left of the search region in a step S 83, and then, the process returns to the step S 65.
  • the process in the step S 31 is described.
  • the JPEG data representing the object scene image at a time when the shutter button 46 S is operated is recorded in the recording medium 40 in a file format shown in FIG. 19. That is, the JPEG data, with its number of pixels as header data, is recorded in the recording medium 40 as one file (however, when there is the position information indicating the position of the designated region set in the step S 13, the number of pixels of the JPEG data is recorded together with this position information).
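The text specifies only that the header carries the pixel count of the JPEG data, plus the position information when a designated region was set. A hypothetical serialization (the field names and JSON encoding are assumptions made for illustration, not the actual file layout) might look like:

```python
import json

def make_header(width, height, position=None):
    """Header data recorded with the JPEG data: the number of pixels,
    and the position information only when a designated region was set."""
    header = {"width": width, "height": height}
    if position is not None:
        header["position"] = {"x": position[0], "y": position[1]}
    return json.dumps(header).encode()

def read_position(header_bytes):
    """On reproduction: None means no position information was recorded,
    so the zoom center falls back to the center of the image."""
    pos = json.loads(header_bytes).get("position")
    return (pos["x"], pos["y"]) if pos else None

hdr = make_header(320, 240, position=(160, 96))
print(read_position(hdr))   # -> (160, 96)
```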
  • the characters C 1 , C 2 , and C 3 shown in FIG. 6 and FIG. 7 are merely one example, and the color, pattern, thickness, strict size, transmittance within the frame, etc., of each of the characters can arbitrarily be set. Furthermore, the display of the character C 3 may be substituted by changing any one of the color, pattern, thickness, transmittance within the frame of the character C 2 .
  • the CPU 42 executes a reproducing task shown in FIG. 20A and FIG. 20B at a reproducing operation time. It is noted that the control program corresponding to the reproducing task is stored in the flash memory 48, similar to the control programs corresponding to the tasks executed in the photographing operation.
  • a file to be reproduced is selected.
  • the JPEG data within the selected file is used to create a display image, and the created display image is displayed on the LCD monitor 32 . More specifically, one JPEG data recorded in the recording medium 40 in a file format via the I/F 38 is selected and read out, and written to the compressed image region 28 c of the SDRAM 28 .
  • the JPEG decoder 37 reads out the JPEG data accommodated in the compressed image region 28 c through the memory control circuit 26 , decompresses the read JPEG data, and writes the obtained image data to the uncompressed image region 28 b through the memory control circuit 26 .
  • the image data written to the uncompressed image region 28 b is read out through the memory control circuit 26 , and from the read image data, the image data used for display, having a resolution lower than that of the image data, is created and written to the display image region 28 a of the SDRAM 28 .
  • the video encoder 30 reads out the image data accommodated in the display image region 28 a through the memory control circuit 26 at every 1/30 seconds, and converts the read image data into a composite video signal. As a result, a reproduced image is displayed on the LCD monitor 32 .
  • in a step S 104, the CPU 42 sets the value of the zoom magnification held by the CPU 42 to “1” as an initial value.
  • when YES is determined in a step S 105, i.e., when the position information exists, the zoom center of the zoom process to be performed later is set in a step S 107 by utilizing the position information, a character indicating the position set as the zoom center is displayed in a step S 109, and the process proceeds to a step S 113.
  • the position information recorded in the recording medium 40 is represented by the number of pixels on the image data accommodated in the display image region 28 a, and therefore, in reproducing, it is converted into position information represented by the number of pixels on the image data written to the uncompressed image region 28 b of the SDRAM 28, and the converted position information is used for the reproduction zoom process.
  • the display image region 28 a is made up of the image data having 240 pixels vertically and 320 pixels horizontally.
  • since the image data written to the uncompressed image region 28 b of the SDRAM 28 by reproducing the JPEG data is made up of image data having 1944 pixels vertically and 2592 pixels horizontally, a value of “8.1” obtained by dividing 1944 by 240 is multiplied by a value representing a vertical position of the image data written to the display image region 28 a, and a value of “8.1” obtained by dividing 2592 by 320 is multiplied by the value representing the horizontal position of the image data written to the display image region 28 a.
  • the position information recorded in the recording medium 40 is converted into the position information representing the position on the image data written to the uncompressed image region 28 b of the SDRAM 28 by reproducing the JPEG data, and the converted position information is used for the reproduction zoom process.
  • the character display by the step S 109 may be omitted, or the displayed character may be non-displayed after the display is continued for a predetermined time or at a time when any operation is thereafter performed.
  • when NO is determined in the step S 105, the zoom center in the zoom process to be performed later is set, in a step S 111, to the center of the image data written to the uncompressed image region 28 b, and then, the process proceeds to the step S 113.
  • in the step S 113, the key state signal is fetched from the key input device 46, and it is determined whether or not a tele-button 46 T is depressed to perform an enlargement operation in a step S 115, it is determined whether or not a wide button 46 W is depressed to perform a reduction operation in a step S 117, it is determined whether or not a position change button 46 S is depressed to perform a change operation of the zoom center position in a step S 119, and it is determined whether or not a forward button 46 F or a back button 46 B is depressed to perform a selection operation of a file in a step S 121.
  • when YES is determined in the step S 115, it is detected whether or not the value of the zoom magnification is a maximum value in a step S 123.
  • when YES is determined, the process returns to the step S 113 as it is.
  • when NO is determined, the value of the zoom magnification is increased by a predetermined amount in a step S 125.
  • in a step S 127, an enlargement process is performed on the image data written to the uncompressed image region 28 b based on the updated zoom magnification and the zoom center position, and by updating the image data accommodated in the display image region 28 a, an image to be displayed on the LCD monitor 32 is enlarged, and then, the process returns to the step S 113.
  • when YES is determined in the step S 117, it is detected whether or not the value of the zoom magnification is “1” as an initial value in a step S 129.
  • when YES is determined, a multi-screen display is performed in a step S 135, and the process returns to the step S 113.
  • when NO is determined in the step S 129, the value of the zoom magnification is reduced by a predetermined amount in a step S 131.
  • then, a reduction process is performed on the image data written to the uncompressed image region 28 b based on the updated zoom magnification and the zoom center position, and by updating the image data accommodated in the display image region 28 a, an image to be displayed on the LCD monitor 32 is reduced, and then, the process returns to the step S 113.
  • the multi-screen display shown in the step S 135 is performed according to a flowchart shown in FIG. 21 .
  • when YES is determined in a step S 1351, i.e., when the position information exists, image data obtained by performing a trimming process and the reduction process on the image data written to the uncompressed image region 28 b according to the position information is displayed as one screen of the multi-screen in a step S 1353; when NO is determined in the step S 1351, image data obtained by performing the reduction process on the entire image data written to the uncompressed image region 28 b is displayed as one screen of the multi-screen in a step S 1355. Then, the process is restored to a routine at a hierarchical upper level.
  • the multi-screen display obtained as a result of the execution of the step S 1353 is as shown in FIG. 22, and the multi-screen display obtained as a result of the execution of the step S 1355 is as shown in FIG. 23.
  • in the multi-screen display obtained as a result of the execution of the step S 1353, only a portion of the image including an important portion is displayed on each screen. Thus, it becomes easy to select the image.
  • the number of divisions of the multi-screen display is not restricted to 4.
  • a relative position between the image displayed before being changed to the multi-screen display in the step S 135 and the images to be displayed in the other regions can be arbitrarily set for the digital camera.
  • the images to be displayed in the other regions are obtained from another file recorded in the recording medium 40 .
  • the file includes, in addition to the JPEG data as a main image, thumbnail image data smaller in resolution (the number of pixels) than the JPEG data.
  • the thumbnail image data may be regarded as image data to be used for the multi-screen display.
  • the position information used in the step S 1353 is converted as needed and used depending on the number of pixels of the thumbnail image data.
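A sketch of how the 4-division multi-screen might be assembled, using Pillow for the trimming and reduction; the entry format, trim-window size, and tile size are illustrative assumptions.

```python
from PIL import Image

def multi_screen_tiles(entries, tile=(160, 120)):
    """Each entry is (path, position-or-None). With position information,
    trim about the position before reducing (step S1353); without it,
    reduce the entire image (step S1355)."""
    tiles = []
    for path, position in entries[:4]:               # 4-division multi-screen
        img = Image.open(path)
        if position is not None:
            w, h = img.width // 2, img.height // 2   # illustrative trim window
            x = max(0, min(position[0] - w // 2, img.width - w))
            y = max(0, min(position[1] - h // 2, img.height - h))
            img = img.crop((x, y, x + w, y + h))
        tiles.append(img.resize(tile))
    return tiles
```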
  • when YES is determined in the step S 119, the image data written to the uncompressed image region 28 b is processed in a step S 137, and the image data accommodated in the display image region 28 a is updated to image data in which the zoom center position is changed, whereby the central position of the enlarged image to be displayed on the LCD monitor 32 is updated, and then, the process returns to the step S 113.
  • when YES is determined in the step S 121, the process returns to the step S 101 to change the file which is a target to be reproduced. When NO is determined, the process returns to the step S 113.
  • when the position information is not used, the center of the image is the central position of the zoom process, and the image is displayed in an enlarged manner, as shown in FIG. 12(A) to FIG. 12(C).
  • in this case, the central position needs to be changed in order to view the noticed portion.
  • when the position information is used, a position corresponding to the position information is the central position, and the image is displayed in an enlarged manner, as shown in FIG. 13(A) to FIG. 13(C).
  • the designated region in the first embodiment is set to a central position of the face determining region when the face is detected in the face determining process, and when a plurality of faces are detected, the designated region is set to a central position of the face determining region when the face nearest the central position of the angle of view is detected in the face determining process.
  • the designation of the designated region, i.e., a method of designating a specific position within the object scene image generated by the imager, is not restricted thereto.
  • the designated region may be set to a central position of the face determining region when a largest face is detected, or be set to a central position of the face determining region when a face is detected with a highest degree of reliability, for example.
  • as shown in FIG. 5, the face detection according to the first embodiment enables a plurality of faces to be detected. It may be so configured that, when even one face image is discovered in the course of the detection process, the face detection process is ended to determine the designated region based on the detection result.
  • the checking process of the feature amounts and the moving process of the face determining region are executed by using the three face determining regions in descending order of size, i.e., the maximum size, the middle size, and the minimum size; therefore, a larger face in the object scene is preferentially detected.
  • An image reproducing apparatus 100 according to a second embodiment of the present invention is a reproducing apparatus for reproducing an object scene image from a recording medium on which position information indicating a specific position within the object scene image, such as that obtained by the digital camera 10 according to the first embodiment, is recorded together with the object scene image.
  • JPEG data recorded in a recording medium 140 in a file format via an I/F 138 is selected and read out, and the resultant data is written to a compressed image region 128 c of an SDRAM 128.
  • a JPEG decoder 137 reads out the JPEG data accommodated in the compressed image region 128 c through the memory control circuit 126 , decompresses the read JPEG data, and writes the obtained image data to the uncompressed image region 128 b through the memory control circuit 126 .
  • the image data written to the uncompressed image region 128 b is read out through the memory control circuit 126 , and from the read image data, image data for display having a resolution lower than that of the image data is created and written to a display image region 128 a of the SDRAM 128 .
  • a video encoder 130 reads out the image data accommodated in the display image region 128 a through the memory control circuit 126 at every 1/30 seconds, and converts the read image data into a composite video signal. As a result, a reproduced image is displayed on an LCD monitor 132 .
  • when position information indicating a position designated at the time of photographing is recorded with the JPEG data and the position information can be read out, the central position of the reproduction zoom process is set based on the position information, and in this state, a zoom display is performed.
  • otherwise, the center of the image is set to the central position of the reproduction zoom process, and in this state, the zoom display is performed.
  • the position information has a value corresponding to the number of pixels of the JPEG data, and thus, there is no need to convert the value as in the first embodiment.
  • the zoom display is performed by accommodating, in the display image region 128 a, image data obtained by applying a zoom process to the image data written to the uncompressed image region 128 b based on the zoom magnification and the zoom center position.
  • a character generator 134 applies character data to the LCD monitor 132 in order to display the designated region. Such a character display may be omitted.
  • a CPU 142 executes a reproducing operation shown in FIG. 25A and FIG. 25B at a reproducing operation time. It is noted that a control program for executing the reproducing operation is stored in a flash memory 148 .
  • a file to be reproduced is selected.
  • the JPEG data within the selected file is used to create a display image, and the created image is displayed on the LCD monitor 132. More specifically, any one JPEG data recorded in the recording medium 140 in a file format via the I/F 138 is selected and read out, and written to the compressed image region 128 c of the SDRAM 128.
  • a JPEG decoder 137 reads out the JPEG data accommodated in the compressed image region 128 c through the memory control circuit 126 , decompresses the read JPEG data, and writes the obtained image data to the uncompressed image region 128 b through the memory control circuit 126 .
  • the image data written to the uncompressed image region 128 b is read out through the memory control circuit 126, and from the read image data, image data used for display, having a resolution lower than that of the image data, is created and written to the display image region 128 a of the SDRAM 128.
  • a video encoder 130 reads out the image data accommodated in the display image region 128 a through the memory control circuit 126 at every 1/30 seconds, and converts the read image data into a composite video signal. As a result, a reproduced image is displayed on an LCD monitor 132 .
  • in a step S 204, the CPU 142 sets a held value of the zoom magnification to “1” as an initial value.
  • when the position information exists, the zoom center in the zoom process to be performed later is set by using the position information in a step S 207, a character indicating the position set as the center of the zoom is displayed in a step S 209, and the process proceeds to a step S 213.
  • the character display in the step S 209 may be omitted, or the displayed character may be non-displayed after the display is continued for a predetermined time or at a time when any operation is thereafter performed.
  • when the position information does not exist, the zoom center in the zoom process to be performed later is set, in a step S 211, to the center of the image data written to the uncompressed image region 128 b, and the process proceeds to the step S 213.
  • in the step S 213, a key state signal is fetched from the key input device 146, and it is determined whether or not a tele-button 146 T is depressed to perform an enlargement operation in a step S 215, it is determined whether or not a wide button 146 W is depressed to perform a reduction operation in a step S 217, it is determined whether or not a position change button 146 S is depressed to perform a change operation of the zoom center position in a step S 219, and it is determined whether or not a forward button 146 F or a back button 146 B is depressed to perform a selection operation of a file in a step S 221.
  • step S 215 When YES is determined in the step S 215 , it is detected whether or not the value of the zoom magnification is a maximum value in a step S 223 . When YES is determined, the process returns to the S 213 as it is. However, when NO is determined, the value of the zoom magnification is increased by a predetermined amount in a step S 225 . In a step S 227 , an enlargement process is performed on the image data written to the uncompressed image region 128 b based on the updated zoom magnification and the zoom center position, and by updating the image data accommodated in the display image region 128 a, an image to be displayed on the LCD monitor 132 is enlarged, and then, the process returns to the step S 213 .
  • step S 217 When YES is determined in the step S 217 , it is detected whether or not the value of the zoom magnification is “1” as an initial value in a step S 229 . When YES is determined, the process returns to the S 213 as it is. However, when NO is determined in the step S 229 , the value of the zoom magnification is reduced by a predetermined amount in a step S 231 , a reduction process is performed on the image data written to the uncompressed image region 128 b based on the updated zoom magnification and zoom center position in the step S 231 , and by updating the image data accommodated in the display image region 128 a, an image to be displayed on the LCD monitor 132 is reduced, and then, the process returns to the step S 213 .
  • When YES is determined in the step S 219 , the image data written to the uncompressed image region 128 b is processed in a step S 237 , and the image data accommodated in the display image region 128 a is updated to image data in which the zoom center position is changed, whereby the central position of the enlarged image to be displayed on the LCD monitor 132 is updated, and then, the process returns to the step S 213 .
  • When YES is determined in the step S 221 , the process returns to the step S 201 to change the file which is a target to be reproduced. When NO is determined, the process returns to the step S 213 .
  • When the center of the reproduction zoom process is not set by the position information accompanying the JPEG data read out from the recording medium 140 , the center of the image is the central position of the zoom process and is displayed in an enlarged manner, as shown in FIG. 12(A) to FIG. 12(C) . Thus, after an enlarged display operation, the central position needs to be changed. However, even with respect to the same reproduced image, when the center of the reproduction zoom process is set by the position information, a position corresponding to the position information is the central position and is displayed in an enlarged manner, as shown in FIG. 13(A) to FIG. 13(C) .
  • An electronic camera may be so configured that, for one object scene image, the positions, sizes, and degrees of reliability of a plurality of pieces of face information are recorded in the recording medium for use, as shown in FIG. 26 . Then, in reproducing, a selection may be made as to which position information is to be used. In selecting, an order and a priority for selection may be determined depending on the value of the size and the magnitude of the degree of reliability. Furthermore, an initial value of a zoom magnification for the enlarged display may be determined by using the value of the size.
  • the specified position need not be a position designated by utilizing an image recognition process such as face detection; it may instead be a position of the nearest subject, a position of the farthest subject, or a position of the subject nearest the center of the angle of view, as detected by an AF function, or a position directly pointed to by the user with a pointing device such as a touch panel when photographing.
  • the reproduction using the position information is not restricted to the enlarged reproduction and the trimming reproduction; an object scene image may be reproduced as if a hole expands outward from the position indicated by the position information, or may be reproduced while being rotated about the position indicated by the position information.
  • the object scene image need not be recorded by compression, and may be recorded in an uncompressed state.
  • as the position information, not the number of pixels but a ratio on the monitor (a position at X % in the vertical direction and Y % in the horizontal direction) may be used for the specification, as in the sketch following this list.
  • the object scene image may be not only a still image but also a moving image or a part of a moving image, such as an I picture (Intra-Coded Picture) within MPEG image data.
  • a plurality of pieces of position information for one object scene image may be recorded in the recording medium for use. Then, in reproducing, a selection may be made as to which position information is to be used for reproduction.
  • the position information used in reproducing is not restricted to one piece; reproduction using a plurality of pieces of position information, such as enlarged reproduction or trimming reproduction of a region enclosed by the plurality of positions, may be performed.
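  • The following minimal Python sketch is an editorial illustration of the ratio-based position information mentioned above, not part of the original disclosure: a position stored as monitor percentages resolves correctly at any decode resolution. All names are illustrative.

        # Hypothetical sketch: a specific position stored as a monitor ratio
        # (X % vertical, Y % horizontal) rather than as a pixel count.
        def ratio_to_pixels(x_pct, y_pct, height, width):
            """Resolve an (X %, Y %) position record against a concrete image size."""
            return round(height * x_pct / 100.0), round(width * y_pct / 100.0)

        # The same stored record works for the 320x240 display image and the
        # 2592x1944 full-resolution image alike:
        print(ratio_to_pixels(50.0, 25.0, 240, 320))    # -> (120, 80)
        print(ratio_to_pixels(50.0, 25.0, 1944, 2592))  # -> (972, 648)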

Abstract

An electronic camera is provided with: an imager, having an imaging surface for capturing an object scene, for generating an object scene image; a designator for designating a specific position within the object scene image generated by the imager; a recorder for recording, together with position information of the specific position designated by the designator, the object scene image generated by the imager; and a reproducer for reproducing the object scene image recorded by the recorder, using the position information recorded by the recorder.

Description

    CROSS REFERENCE OF RELATED APPLICATION
  • The disclosure of Japanese Patent Application No. 2007-207281 filed on Aug. 8, 2007 is incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates to an electronic camera and an object-scene-image reproducing apparatus. More specifically, the present invention relates to an electronic camera and an object-scene-image reproducing apparatus for reproducing by noticing a specific position within an object scene image, for example.
  • 2. Description of the Related Art
  • There is known an image reproducing apparatus which, upon reproduction of image data obtained by photographing using a digital still camera which is an electronic camera, detects a face image included in a subject image represented by the image data and displays the detected face image in an enlarged manner. In one example of such an image reproducing apparatus, a face image which is probably noticed upon photographing is prioritized over other portions and displayed in an enlarged manner, thereby facilitating confirming whether or not a face image portion is in focus.
  • However, in the above-described example, the face image included in the subject image represented by the image data is a portion which is detected upon reproduction and displayed in an enlarged manner. Thus, it is probable that the portion displayed in an enlarged manner does not match a portion designated when photographing, i.e., a portion to be focused. Furthermore, when the technology disclosed in the above-described example is extended so that not only a face but also a building or a background, for example, can be used as the portion to be focused when photographing, the reproducing apparatus must detect, upon reproduction, the portion to be displayed in an enlarged manner, and a configuration of a detector provided in the reproducing apparatus for detecting the portion becomes complicated.
  • SUMMARY OF THE INVENTION
  • An electronic camera according to the present invention comprises: an imager, having an imaging surface for capturing an object scene, for generating an object scene image; a designator for designating a specific position within the object scene image generated by the imager; a recorder for recording, together with position information of the specific position designated by the designator, the object scene image generated by the imager; and a reproducer for reproducing the object scene image recorded by the recorder, using the position information recorded by the recorder.
  • Preferably, a searcher for searching a feature image included in the object scene image generated by the imager is further provided, wherein the designator designates the specific position based on a position of the feature image detected by the searcher.
  • Preferably, an adjustor for adjusting a photographing condition of the imager based on the object scene image at the specific position designated by the designator is further provided, wherein the recorder records an object scene image created in accordance with the photographing condition adjusted by the adjustor.
  • Further preferably, the photographing condition is a focal distance of the imager.
  • Preferably, the reproducer enlarges and reproduces the object scene image about a position specified by using the position information recorded by the recorder.
  • An object-scene-image reproducing apparatus according to the present invention is an object-scene-image reproducing apparatus for reproducing an object scene image from a recording medium recorded thereon with position information indicating a specific position within the object scene image, together with the object scene image, and the object-scene-image reproducing apparatus comprises a reproducer for reproducing the object scene image using the position information.
  • Preferably, the reproducer enlarges and reproduces the object scene image about a position specified by using the position information.
  • The above described features and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram showing a digital camera which is a first embodiment of the present invention;
  • FIG. 2 is a descriptive diagram for describing an operation of the first embodiment of the present invention;
  • FIG. 3(A) is a descriptive diagram for describing the operation of the first embodiment of the present invention;
  • FIG. 3(B) is a descriptive diagram for describing the operation of the first embodiment of the present invention;
  • FIG. 3(C) is a descriptive diagram for describing the operation of the first embodiment of the present invention;
  • FIG. 4 is a descriptive diagram for describing the operation of the first embodiment of the present invention;
  • FIG. 5 is a descriptive diagram for describing the operation of the first embodiment of the present invention;
  • FIG. 6 is a descriptive diagram for describing the operation of the first embodiment of the present invention;
  • FIG. 7 is a descriptive diagram for describing the operation of the first embodiment of the present invention;
  • FIG. 8 is a descriptive diagram for describing the operation of the first embodiment of the present invention;
  • FIG. 9 is a descriptive diagram for describing the operation of the first embodiment of the present invention;
  • FIG. 10 is a descriptive diagram for describing the operation of the first embodiment of the present invention;
  • FIG. 11 is a descriptive diagram for describing the operation of the first embodiment of the present invention;
  • FIG. 12(A) is a descriptive diagram used for a comparison with the operation of the embodiment of the present invention;
  • FIG. 12(B) is a descriptive diagram used for a comparison with the operation of the embodiment of the present invention;
  • FIG. 12(C) is a descriptive diagram used for a comparison with the operation of the embodiment of the present invention;
  • FIG. 13(A) is a descriptive diagram for describing the operation of the embodiment of the present invention;
  • FIG. 13(B) is a descriptive diagram for describing the operation of the embodiment of the present invention;
  • FIG. 13(C) is a descriptive diagram for describing the operation of the embodiment of the present invention;
  • FIG. 14A is a flowchart for describing an operation of the first embodiment of the present invention;
  • FIG. 14B is a flowchart for describing the operation of the first embodiment of the present invention;
  • FIG. 15 is a flowchart for describing the operation of the first embodiment of the present invention;
  • FIG. 16 is a flowchart for describing the operation of the first embodiment of the present invention;
  • FIG. 17A is a flowchart for describing the operation of the first embodiment of the present invention;
  • FIG. 17B is a flowchart for describing the operation of the first embodiment of the present invention;
  • FIG. 18 is a descriptive diagram for describing the operation of the first embodiment of the present invention;
  • FIG. 19 is a descriptive diagram for describing the operation of the first embodiment of the present invention;
  • FIG. 20A is a flowchart for describing the operation of the first embodiment of the present invention;
  • FIG. 20B is a flowchart for describing the operation of the first embodiment of the present invention;
  • FIG. 21 is a flowchart for describing the operation of the first embodiment of the present invention;
  • FIG. 22 is a descriptive diagram for describing the operation of the first embodiment of the present invention;
  • FIG. 23 is a descriptive diagram used for a comparison with the operation of the first embodiment of the present invention;
  • FIG. 24 is a block diagram showing an image reproducing apparatus which is a second embodiment of the present invention;
  • FIG. 25A is a flowchart for describing an operation of the second embodiment of the present invention;
  • FIG. 25B is a flowchart for describing the operation of the second embodiment of the present invention; and
  • FIG. 26 is a descriptive diagram for describing the embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • With reference to FIG. 1, a digital camera 10 which is a first embodiment of the present invention includes an optical lens 12. An optical image of an object scene is irradiated onto an imaging surface 14 f of an image sensor 14 through the optical lens 12, and then, photoelectrically converted. Thereby, an electric charge representing the object scene, i.e., a raw image signal is generated.
  • When a power source is turned on, a CPU 42 instructs a TG/SG 18 to repeatedly perform a pre-exposure and a thinning-out reading in order to execute a through-image process. The TG/SG 18 applies a plurality of timing signals to the image sensor 14 in order to execute a pre-exposure of the imaging surface 14 f of the image sensor 14 and a thinning-out reading of the electric charge thus obtained. The raw image signal generated on the imaging surface 14 f is read out according to an order of raster scanning in response to a vertical synchronization signal Vsync generated once every 1/30 seconds.
  • The raw image signal outputted from the image sensor 14 is applied to a series of processes, such as a correlative double sampling, an automatic gain adjustment, and an A/D conversion by a CDS/AGC/AD circuit 16. A signal-processing circuit 20 applies processes such as a white balance adjustment, a color separation, and a YUV conversion to the raw image data outputted from the CDS/AGC/AD circuit 16 and writes YUV-formatted image data to a display image region 28 a of an SDRAM 28 through a memory control circuit 26.
  • A video encoder 30 reads out the image data accommodated in the display image region 28 a through the memory control circuit 26 at every 1/30 seconds, and converts the read image data into a composite video signal. Thus, a real-time moving image (through image) representing the object scene is displayed on an LCD monitor 32.
  • An AE/AF evaluation circuit 24 creates a luminance evaluation value indicating a brightness of the object scene and a focus evaluation value indicating a degree of focus of the object scene, based on the image data outputted from the signal processing circuit 20. The created luminance evaluation value and focus evaluation value are applied to the CPU 42.
  • It is noted that "AE" is an abbreviation of "Auto Exposure" and "AF" is an abbreviation of "Auto Focus".
  • When a shutter button 46S provided on a key input device 46 is not operated, the CPU 42 executes an AE process for a through image and an AF process. A pre-exposure time period set to the TG/SG 18 is controlled based on the luminance evaluation value from the AE/AF evaluation circuit 24. Thereby, the brightness of the through image is moderately adjusted. In the AF process based on the focus evaluation value from the AE/AF evaluation circuit 24, i.e., a so-called hill-climbing autofocus process in which the optical lens 12 is set such that a high-frequency component of the image signal is maximized, the optical lens 12 is driven by a driver 44.
  • With reference to FIG. 2, the display image region 28 a is made up of image data having 240 pixels vertically and 320 pixels horizontally, and set as a search region in which a face detection is performed. Then, a maximum-sized face determining region shown in FIG. 3(A) is arranged at an upper left of the search region. Coordinates at the upper left of the face determining region match those at the upper left of the search region.
  • A feature amount of a partial image belonging to the face determining region is checked against that of a dictionary stored in a flash memory 48. As a result of the checking process, when the partial image to be noticed is determined as a face image, face information in which a size of the face determining region at this point, a central position of the face determining region, and a degree of reliability are described is created, and accommodated in a face information region 28 d of the SDRAM 28. The degree of reliability indicates the matching ratio in the checking process in which the feature amount of the partial image belonging to the face determining region is checked against that of the dictionary stored in the flash memory 48; the higher the matching ratio, the greater the degree of reliability with which the image is determined to be a face. The face determining region is moved by a predetermined amount (=one pixel) in a raster direction. The face determining region moves over the search region in a manner shown in FIG. 4.
  • It is noted that the degree of reliability is dependent on the dictionary stored in the flash memory 48, and a face facing a front can generally be detected with a higher degree of reliability than a face facing obliquely or looking down.
  • When the face determining region reaches a lower right of the search region, i.e., when the coordinates at the lower right of the face determining region match the coordinates at the lower right of the search region, a middle-sized face determining region shown in FIG. 3(B) is arranged at an upper left of the search region in place of the face determining region shown in FIG. 3(A) to thereby execute the processes as described above again. When the middle-sized face determining region reaches the lower right of the search region, a minimum-sized face determining region shown in FIG. 3(C) is arranged at the upper left of the search region to thereby repeat the processes as described above.
  • Thus, the checking process of the feature amounts and the moving process of the face determining region are executed three times by utilizing in turn the three face determining regions in descending order by size, i.e., the maximum size, the middle size, and the minimum size.
  • When the face image is discovered in the course of the process, the face information in which the central position, the size, and the degree of reliability of the face determining region at this point are described is created, and thereby, the face information accommodated in the face information region 28 d is updated.
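  • As an editorial illustration of the search just described, the following Python sketch reproduces the three-size raster scan; the dictionary comparison is abstracted into a placeholder predicate, and the region sizes and threshold are assumed values, not taken from the disclosure.

        # Minimal sketch of the multi-scale raster-scan face search.
        SEARCH_H, SEARCH_W = 240, 320      # search region (display image)
        REGION_SIZES = [96, 64, 32]        # maximum, middle, minimum (assumed)

        def matches_dictionary(image, top, left, size):
            """Placeholder for the feature-amount check against the dictionary;
            returns a matching ratio (degree of reliability) in [0, 1]."""
            return 0.0                     # no real detector in this sketch

        def search_faces(image, threshold=0.5):
            faces = []                     # (center, size, reliability) records
            for size in REGION_SIZES:      # three sizes, in descending order
                for top in range(0, SEARCH_H - size + 1):      # raster scan,
                    for left in range(0, SEARCH_W - size + 1): # one-pixel step
                        ratio = matches_dictionary(image, top, left, size)
                        if ratio >= threshold:
                            center = (top + size // 2, left + size // 2)
                            faces.append((center, size, ratio))
            return faces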
  • When the face information is obtained, the CPU 42 instructs a character generator 34 to perform an OSD display of a character C1 defined by the face information. The character generator 34 applies character data to the LCD monitor 32 in order to display the character C1 having the size written in the face information at the position written in the face information. In a case of an object scene image including four persons P1, P2, P3, and P4 as shown in FIG. 5, as a result of the face detection, a character C1 is displayed for each detected face so as to be overlapped with the through image in a manner shown in FIG. 6.
  • When only one piece of face information is obtained, a region for obtaining the focus evaluation value is set to the position of the detected face, and when a plurality of faces are detected, the region for obtaining the focus evaluation value is set to the position of the face nearest the center position of an angle of view. In order to indicate that the setting is performed, the character data is applied to the LCD monitor 32. In the case of the object scene image including the four persons P1, P2, P3, and P4 as shown in FIG. 5, the position of the face nearest the center position of the angle of view is the position of the face of the person P3. Thus, a character C2 is displayed to be overlapped with the through image in a manner shown in FIG. 7.
  • When the shutter button 46S is half-depressed, the CPU 42 executes the AF process and the AE process in a different mode depending on the detection result of the face information. When the face information is not detected, the CPU 42 executes the AE process and the AF process using the central region of the imaging surface as a reference; the central region of the imaging surface is provided at the center of the imaging surface as a region having a high possibility of including a subject to be photographed, and the detailed description thereof is omitted. Contrary thereto, when the face information is detected, the CPU 42 uses the face information to determine a designated region to be designated on the imaging surface, and applies the character data to the LCD monitor 32 in order to display the designated region. In the case of the object scene image including the four persons P1, P2, P3, and P4 as shown in FIG. 5, a character C3 is displayed to be overlapped with the through image in a manner shown in FIG. 8 at a time when a setting of the focal position of the optical lens 12 is completed by the AF process described later. As a result of the character C3 being displayed, a user is able to know that the AF process is completed. When only one piece of face information is detected, the designated region is set to the position of the face determining region when the face is detected in the face determining process, and when a plurality of faces are detected, the designated region is set to the position of the face determining region when the face nearest the center position of the angle of view is detected in the face determining process. Then, the AE process is executed by giving importance to the designated region, and the AF process is executed using the designated region as a reference, i.e., using the image signal obtained from the designated region. As a result of the AE process, the exposure time period set to the TG/SG 18 is set to an optimum value. Furthermore, as a result of the AF process, the optical lens 12 is set to a focal position by the driver 44.
  • In the case of the object scene image including the four persons P1, P2, P3, and P4 as shown in FIG. 5, the face information is detected in four face determining regions as shown in FIG. 6. The position of the face determining region where the face nearest the center position of the angle of view is detected is the determination region where the face of the person P3 is detected. Therefore, as shown in FIG. 9, assuming that the region equivalent to the determination region where the face of the person P1 is detected is a region E1, the region equivalent to the determination region where the face of the person P2 is detected is a region E2, the region equivalent to the determination region where the face of the person P3 is detected is a region E3, and the region equivalent to the determination region where the face of the person P4 is detected is a region E4, the designated region is the region E3. In the AE process, importance is given to the luminance evaluation value obtained from the region E3, which is the designated region, while the luminance evaluation values obtained from the other regions E1, E2, and E4 are also used. In this embodiment, the AE process is performed using a luminance evaluation value calculated in a manner that a degree of contribution of the luminance evaluation value obtained from the region E3 is 50%, and a whole degree of contribution of the luminance evaluation values obtained from the regions E1, E2, and E4 is 50%.
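  • A minimal Python sketch of this weighting follows as an editorial illustration; the patent fixes only the 50/50 split, so averaging the other regions here is an assumption.

        def weighted_luminance(designated, others):
            """designated: luminance value of the designated region;
            others: luminance values of the remaining face regions."""
            if not others:
                return designated
            return 0.5 * designated + 0.5 * (sum(others) / len(others))

        # Region E3 is designated; E1, E2, and E4 share the remaining half:
        print(weighted_luminance(140.0, [90.0, 110.0, 70.0]))  # -> 115.0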
  • When the shutter button 46S is completely depressed, the CPU 42 instructs the TG/SG 18 to perform a main exposure and all-pixel reading, and instructs a JPEG encoder 36 to perform a JPEG compression in order to execute a recording process.
  • It is noted that the positions and the sizes of the regions E1, E2, E3, and E4 are set based on the positions and sizes of the determination regions where the faces of the persons P1, P2, P3, and P4 are detected; however, the positions and the sizes thereof may not strictly be the same. The position and the size of each of the regions E1, E2, E3, and E4 are set by combining a total of 256 partial regions, i.e., 16 vertical regions×16 horizontal regions, set to the imaging surface 14 f, for example.
  • The TG/SG 18 applies a plurality of timing signals to the image sensor 14 in order to execute a main exposure of the imaging surface 14 f of the image sensor 14 and reading out of all the electric charges thus obtained. The raw image signal generated on the imaging surface 14 f is read out according to an order of raster scanning. The raw image signal outputted from the image sensor 14 is applied to a series of processes, such as a correlative double sampling, an automatic gain adjustment, and an A/D conversion, by the CDS/AGC/AD circuit 16. The signal processing circuit 20 applies processes such as a white balance adjustment, a color separation, and a YUV conversion to the raw image data outputted from the CDS/AGC/AD circuit 16, so that the raw image data is converted into image data in a YUV format with a resolution higher than that of the image data accommodated in the display image region 28 a, i.e., image data configured by all pixels of the image sensor 14, of which the total number of pixels is about 5 million, i.e., having 1944 pixels vertically and 2592 pixels horizontally. The converted image data is written to an uncompressed image region 28 b of the SDRAM 28 through the memory control circuit 26.
  • The JPEG encoder 36 reads out the image data accommodated in the uncompressed image region 28 b through the memory control circuit 26, compresses the read image data in a JPEG format, and writes the compressed image data, i.e., JPEG data, to a compressed image region 28 c through the memory control circuit 26. The JPEG data thus obtained is thereafter read out by the CPU 42, and is recorded together with the position information in the recording medium 40 in a file format through the I/F 38 when there is position information indicating a position of the designated region determined by the detection of the face information. The recording medium 40 is capable of recording a plurality of image files.
  • Next, a reproducing operation is described. One of the files recorded in the recording medium 40 in a file format via the I/F 38 is selected to read out the JPEG data therefrom, and the read JPEG data is written to the compressed image region 28 c of the SDRAM 28. A JPEG decoder 37 reads out the JPEG data accommodated in the compressed image region 28 c through the memory control circuit 26, decompresses the read JPEG data, and writes the obtained image data to the uncompressed image region 28 b through the memory control circuit 26. The image data written to the uncompressed image region 28 b is read out through the memory control circuit 26, and from the read image data, image data for display, having a resolution lower than that of the image data is created and written to the display image region 28 a of the SDRAM 28.
  • The video encoder 30 reads out the image data accommodated in the display image region 28 a through the memory control circuit 26 at every 1/30 seconds, and converts the read image data into a composite video signal. As a result, a reproduced image is displayed on the LCD monitor 32.
  • When the above-described position information is recorded in the recording medium 40 together with the JPEG data in a readable state, a zoom display is so performed that a central position of a reproduction zoom process is set based on the position information. In a case of JPEG data for which the position information is not obtained, the zoom display is so performed that the center of the image is set to the central position of the reproduction zoom process.
  • The zoom display is so performed that image data obtained by performing a zoom process on the image data written to the uncompressed image region 28 b based on a zoom magnification and a zoom center position is accommodated in the display image region 28 a.
  • It is noted that the position information recorded in the recording medium 40 is position information represented by the number of pixels on the image data accommodated in the display image region 28 a, and therefore, in reproducing, it is thus converted into the position information represented by the number of pixels on the image data written to the uncompressed image region 28 b of the SDRAM 28, and the converted position information is used for the reproduction zoom process. The display image region 28 a is made up of the image data having 240 pixels vertically and 320 pixels horizontally. When the image data written to the uncompressed image region 28 b of the SDRAM 28 by reproducing the JPEG data is made up of image data having 1944 pixels vertically and 2592 pixels horizontally, a value of “8.1” obtained by dividing 1944 by 240 is multiplied by a value representing a vertical position of the image data written to the display image region 28 a, and a value of “8.1” obtained by dividing 2592 by 320 is multiplied by the value representing the horizontal position of the image data written to the display image region 28 a. In this manner, the position information recorded in the recording medium 40 is converted into the position information indicating the position on the image data written to the uncompressed image region 28 b of the SDRAM 28 by reproducing the JPEG data, and the converted position information is used for the reproduction zoom process.
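  • As an editorial worked example of the conversion just described, not part of the original disclosure, the following Python sketch rescales a position recorded against the 320×240 display image to the 2592×1944 full-resolution image:

        DISPLAY_H, DISPLAY_W = 240, 320
        FULL_H, FULL_W = 1944, 2592

        def display_to_full(pos):
            """pos: (vertical, horizontal) in display-image pixels."""
            v, h = pos
            return (v * FULL_H / DISPLAY_H,   # 1944 / 240 = 8.1
                    h * FULL_W / DISPLAY_W)   # 2592 / 320 = 8.1

        # A position recorded at (120, 200) on the display image maps to:
        print(display_to_full((120, 200)))    # -> (972.0, 1620.0)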
  • It is noted that when the position information is set to the central position of the reproduction zoom process, the character data is applied to the LCD monitor 32 in order to display the central position. A character C4 is displayed to be overlapped with the reproduced image in a manner shown in FIG. 10; the character C4 serves to indicate the set central position. In addition, character data indicating that the central position of the reproduction zoom process is set based on the position information corresponding to the JPEG data may be applied to the LCD monitor 32, and in this state, a character C5 may be displayed to be overlapped with the reproduced image in a manner shown in FIG. 11. Furthermore, the character C4 and the character C5 may be omitted altogether.
  • When the center of the reproduction zoom process is not set by the position information accompanying the JPEG data read out from the recording medium 40, the center of the image is the central position of the zoom process and is displayed in an enlarged manner, as shown in FIG. 12(A) to FIG. 12(C). Thus, after an enlarged display operation, the central position needs to be changed. However, even with respect to the same reproduced image, when the center of the reproduction zoom process is set by the position information, a position corresponding to the position information is the central position and is displayed in an enlarged manner, as shown in FIG. 13(A) to FIG. 13(C). Thus, it is possible to reproduce the image more easily in an enlarged manner corresponding to a position noticed by the digital camera 10 when photographing.
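  • The following Python sketch is an editorial illustration of one way the reproduction zoom can be realized; the crop-window arithmetic and clamping are assumptions, not taken from the disclosure. A window of (height/magnification) × (width/magnification) is cut around the zoom center, which defaults to the image center when no position information is available.

        def zoom_window(height, width, magnification, center=None):
            if center is None:                    # no position information:
                center = (height / 2, width / 2)  # zoom about the image center
            win_h = height / magnification
            win_w = width / magnification
            top = min(max(center[0] - win_h / 2, 0), height - win_h)
            left = min(max(center[1] - win_w / 2, 0), width - win_w)
            return top, left, win_h, win_w

        # 2x zoom about a recorded face position on the full-resolution image:
        print(zoom_window(1944, 2592, 2.0, center=(972, 1620)))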
  • In a photographing operation, the CPU 42 executes in parallel a plurality of tasks including a photograph main task shown in FIG. 14A, FIG. 14B, and FIG. 15 and a face detecting task shown in FIG. 16, FIG. 17A, and FIG. 17B. It is noted that a control program corresponding to these tasks is stored in a flash memory 48.
  • At first, with reference to FIG. 14A and FIG. 14B, the face detecting task is activated in a step S1, and the through-image process is executed in a step S3. By the process in the step S1, a process of the face detecting task shown in FIG. 16, FIG. 17A, and FIG. 17B is started. By the process in the step S3, the through image is displayed on the LCD monitor 32.
  • In a step S5, a key state signal is fetched from the key input device 46. In a step S7, it is determined whether or not the shutter button 46S is half-depressed, and when NO is determined, the AE/AF process for a through image is executed in a step S9, and the process returns to the step S5.
  • The AE/AF process for a through image shown in the step S9 is performed according to a flowchart shown in FIG. 15. In a step S911, it is determined whether or not a value of a face detection flag indicating that the face is detected by a face searching process to be described later is "1", and when YES is determined, the face information is used to determine the designated region in a step S913. When only one piece of face information is detected, the designated region is set to the central position of the face determining region when the face is detected in the face determining process, and when a plurality of faces are detected, the designated region is set to the central position of the face determining region when the face nearest the center position of the angle of view is detected in the face determining process.
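  • A minimal Python sketch of this choice follows as an editorial illustration; the names and the angle-of-view center value are assumptions.

        import math

        VIEW_CENTER = (120, 160)  # center of the 240x320 angle of view (assumed)

        def designated_region(face_centers):
            """face_centers: (vertical, horizontal) centers of detected faces."""
            if not face_centers:
                return None
            if len(face_centers) == 1:      # one face: use it directly
                return face_centers[0]
            return min(face_centers,        # several: nearest the view center
                       key=lambda c: math.hypot(c[0] - VIEW_CENTER[0],
                                                c[1] - VIEW_CENTER[1]))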
  • Then, a character display (display of the character C2) indicating the designated region is performed in a step S915, an AE process giving importance to the designated region is performed in a step S917, and an AF process using the designated region as a reference is performed in a step S919, and then, the process is restored to a routine at a hierarchical upper level.
  • The AE process is performed by giving importance to the luminance evaluation value obtained from the designated region, and by also using the luminance evaluation values obtained from the regions equivalent to the other face determining regions.
  • On the other hand, when NO is determined in the step S911, an AE process giving importance to the central region of the object scene image is performed in a step S923, an AF process using the central region of the object scene image as a reference is performed in a step S925, and then, the process is restored to a routine at a hierarchical upper level.
  • It is noted that as the AE/AF process for a through image shown in the step S9, irrespective of whether or not the face is detected by the face searching process, the AE process giving importance to the central region of the object scene image and the AF process using the central region of the object scene image as a reference may be performed as a simple AE/AF process.
  • Now, returning to FIG. 14A and FIG. 14B, the description is continued. When YES is determined in the step S7, it is determined whether or not the value of the face detection flag for indicating that the face is detected by the face searching process is "1" in a step S11, and when YES is determined, the face information is used to determine the designated region in a step S13. When only one piece of face information is detected, the designated region is set to the position of the face determining region when the face is detected in the face determining process, and when a plurality of faces are detected, the designated region is set to the position of the face determining region when the face nearest the center position of the angle of view is detected in the face determining process.
  • Then, a character display (display of the character C3) indicating the designated region is performed in a step S15, the AE process giving importance to the designated region is performed in a step S17, and the AF process using the designated region as a reference is performed in a step S19, and then, the process proceeds to a step S21.
  • The AE process is performed by giving importance to the luminance evaluation value obtained from the face determining region as the designated region while using, together therewith, the luminance evaluation values obtained from the other face determining regions. In the case of the object scene image including the four persons P1, P2, P3, and P4 as shown in FIG. 5, the face information is detected in the four face determining regions as shown in FIG. 6. The position of the face determining region where the face nearest the center position of the angle of view is detected is the determination region where the face of the person P3 is detected. Therefore, as shown in FIG. 9, assuming that the region equivalent to the determination region where the face of the person P1 is detected is a region E1, the region equivalent to the determination region where the face of the person P2 is detected is a region E2, the region equivalent to the determination region where the face of the person P3 is detected is a region E3, and the region equivalent to the determination region where the face of the person P4 is detected is a region E4, the designated region is the region E3. Then, an AE process is performed using the luminance evaluation value calculated in a manner that a degree of contribution of the luminance evaluation value obtained from the region E3 is 50%, and a whole degree of contribution of the luminance evaluation values obtained from the regions E1, E2, and E4 is 50%.
  • On the other hand, when NO is determined in the step S11, the AE process giving importance to the central region of the object scene image is performed in a step S23, and the AF process using the central region of the object scene image as a reference is performed in a step S25, and then, the process proceeds to the step S21.
  • In the step S21, similar to the step S5, the key state signal is fetched from the key input device 46.
  • In a step S27, it is determined whether or not the shutter button 46S is in a half-depressed state, and when YES is determined, the process returns to the step S21. Thus, when the half-depressed state of the shutter button 46S is held, the character display in the step S15 and adjusted values of a photographing condition in the steps S17 and S19, or steps S23 and S25 are fixed.
  • When NO is determined in the step S27, it is determined whether or not the shutter button 46S is completely depressed in a step S29, and when YES is determined, a recording process is executed in a step S31 and ended. When NO is determined in the step S29, it is determined that the half-depressed state is canceled without the shutter button 46S being completely depressed, and thus, a process in a step S33 for deleting the character indicating the designated region displayed in the step S15 is executed, and the process proceeds to the step S9.
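  • Condensed into a short Python sketch as an editorial illustration, the shutter handling of the steps S21 to S33 behaves as follows; `poll_shutter` and the callbacks are hypothetical stand-ins for the key state signal and the steps named above.

        def shutter_loop(poll_shutter, record, delete_character, through_ae_af):
            while True:
                state = poll_shutter()      # "half", "full", or "released"
                if state == "half":
                    continue                # hold character and adjusted values
                if state == "full":
                    record()                # step S31: record, then end
                    return
                delete_character()          # step S33: clear the marker
                through_ae_af()             # resume the step S9 processing
                return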
  • By the process in the step S31, the JPEG data representing the object scene image at a time when the shutter button 46S is operated is recorded in the recording medium 40 in a file format. The detail is described later.
  • Next, the face detecting task is described. With reference to FIG. 16, in a step S41, the face information is initialized to a state in which no face information is obtained. When the vertical synchronization signal Vsync is generated, YES is determined in a step S43, the face searching process is executed in a step S45, and it is determined in a step S47 whether or not the value of the face detection flag for indicating that the face is detected by the face searching process is "1". When YES is determined, i.e., when the value of the face detection flag indicates that the face is detected by the face searching process, the character C1 is displayed according to the face information; when NO is determined, the character C1 is non-displayed; then, the process returns to the step S43. In the case of the object scene image in which the four persons P1, P2, P3, and P4 are photographed as shown in FIG. 5, the character C1 is displayed to be overlapped with the through image in a manner shown in FIG. 6.
  • The face searching process shown as the step S45 is executed according to a subroutine shown in FIG. 17A and FIG. 17B. At first, in a step S61, the setting of the face determining region is initialized. Thereby, the maximum-sized face determining region is arranged at the upper left of the search region set to the display image region 28 a. The face determining region is set on the display image region 28 a shown in FIG. 2 so that the coordinates at the upper left of the face determining region match the coordinates at the upper left of the search region. In a step S63, in the face searching process, the value of the face detection flag for indicating that the face is detected is initialized to “0” which means that the face is not detected.
  • In a step S65, the feature amount of the set face determining region is detected, and in a step S67, the detected feature amount is compared with the feature amount of the dictionary. In a step S69, it is determined whether or not the partial image belonging to the face determining region is a face image based on the checking result in the step S67.
  • When YES is determined in the step S69, the face information is updated in a step S71. The face information includes the central position and the size of the face determining region when it is determined to be the face image, and the degree of reliability, as shown in FIG. 18. Then, in a step S73, the value of the face detection flag is set to "1", and then, the process proceeds to a step S75. The degree of reliability indicates the matching ratio in the checking process in which the feature amount of the partial image belonging to the face determining region is checked against that of the dictionary stored in the flash memory 48; the higher the matching ratio, the greater the degree of reliability with which the image is determined to be a face.
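  • The face information of FIG. 18 can be pictured as a small record; the following Python sketch is an editorial illustration with assumed field names, mirroring its three items:

        from dataclasses import dataclass

        @dataclass
        class FaceInfo:
            center: tuple        # (vertical, horizontal) center of the region
            size: int            # size of the face determining region
            reliability: float   # dictionary matching ratio, 0..1

        info = FaceInfo(center=(120, 84), size=64, reliability=0.87)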
  • When NO is determined in the step S69, the process proceeds to the step S75 without performing the steps S71 and S73. In the step S75, it is determined whether or not the coordinates at the lower right of the face determining region are coincident with the coordinates at the lower right of the search region. When NO is determined in this step, the face determining region is moved by a predetermined amount in a raster direction in a step S77, and the process returns to the step S65.
  • When YES is determined in the step S75, it is determined whether or not the size of the face determining region is “minimum” in a step S79. When the size of the face determining region is “minimum”, the process is restored to a routine at a hierarchical upper level, assuming that the search of the face image from the search region is ended. When the size of the face determining region is one of “maximum” and “middle”, the size of the face determining region is reduced by one step in a step S81, the face determining region is arranged at the upper left of the search region in a step S83, and then, the process returns to the step S65.
  • Now, returning to FIG. 14A and FIG. 14B, the process in the step S31 is described. By the process in the step S31, the JPEG data representing the object scene image at a time when the shutter button 46S is operated is recorded in the recording medium 40 in the file format shown in FIG. 19. That is, the JPEG data is recorded in the recording medium 40 as one file with the number of pixels of the JPEG data as header data (when there is position information indicating the position of the designated region set in the step S13, this position information is recorded together with the number of pixels).
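  • The byte layout below is invented for illustration; the patent specifies only that the header carries the number of pixels and, when available, the position information. The Python sketch packs and unpacks such a header ahead of the JPEG data:

        import struct

        def pack_file(jpeg_bytes, width, height, position=None):
            has_pos = position is not None
            header = struct.pack(">IIB", width, height, int(has_pos))
            if has_pos:                      # optional designated-region position
                header += struct.pack(">II", position[0], position[1])
            return header + jpeg_bytes

        def unpack_header(blob):
            width, height, has_pos = struct.unpack_from(">IIB", blob, 0)
            position = struct.unpack_from(">II", blob, 9) if has_pos else None
            return width, height, position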
  • It is noted that the characters C1, C2, and C3 shown in FIG. 6 and FIG. 7 are merely one example, and the color, pattern, thickness, exact size, transmittance within the frame, etc., of each of the characters can arbitrarily be set. Furthermore, the display of the character C3 may be substituted by changing any one of the color, pattern, thickness, and transmittance within the frame of the character C2.
  • The CPU 42 executes a reproducing task shown in FIG. 20A and FIG. 20B at a reproducing operation time. It is noted that the control program corresponding to the reproducing task is stored in the flash memory 48, similar to the control program corresponding to the tasks executed in the photographing operation.
  • In a step S101, a file to be reproduced is selected. In a step S103, the JPEG data within the selected file is used to create a display image, and the created display image is displayed on the LCD monitor 32. More specifically, one JPEG data recorded in the recording medium 40 in a file format via the I/F 38 is selected and read out, and written to the compressed image region 28 c of the SDRAM 28. The JPEG decoder 37 reads out the JPEG data accommodated in the compressed image region 28 c through the memory control circuit 26, decompresses the read JPEG data, and writes the obtained image data to the uncompressed image region 28 b through the memory control circuit 26. The image data written to the uncompressed image region 28 b is read out through the memory control circuit 26, and from the read image data, the image data used for display, having a resolution lower than that of the image data, is created and written to the display image region 28 a of the SDRAM 28. The video encoder 30 reads out the image data accommodated in the display image region 28 a through the memory control circuit 26 at every 1/30 seconds, and converts the read image data into a composite video signal. As a result, a reproduced image is displayed on the LCD monitor 32.
  • In a step S104, the CPU 42 sets the value of the zoom magnification, held by the CPU 42, to “1” as an initial value.
  • Upon detection in a step S105 that the position information, together with the JPEG data, is recorded in the recording medium 40, a zoom center of the zoom process to be performed later is set in a step S107 by utilizing the position information, a character indicating the position set as the zoom center is displayed in a step S109, and the process proceeds to a step S113.
  • It is noted that the position information recorded in the recording medium 40 is position information represented by the number of pixels on the image data accommodated in the display image region 28 a, and therefore, in reproducing, it is converted into the position information represented by the number of pixels on the image data written to the uncompressed image region 28 b of the SDRAM 28, and the converted position information is used for the reproduction zoom process. The display image region 28 a is made up of the image data having 240 pixels vertically and 320 pixels horizontally. When the image data written to the uncompressed image region 28 b of the SDRAM 28 by reproducing the JPEG data is made up of image data having 1944 pixels vertically and 2592 pixels horizontally, a value of 8.1 obtained by dividing 1944 by 240 is multiplied by a value representing a vertical position of the image data written to the display image region 28 a, and a value of 8.1 obtained by dividing 2592 by 320 is multiplied by the value representing the horizontal position of the image data written to the display image region 28 a. In this manner, the position information recorded in the recording medium 40 is converted into the position information representing the position on the image data written to the uncompressed image region 28 b of the SDRAM 28 by reproducing the JPEG data, and the converted position information is used for the reproduction zoom process.
  • Furthermore, the character display by the step S109 may be omitted, or the displayed character may be non-displayed after the display is continued for a predetermined time or at a time when any operation is thereafter performed.
  • On the other hand, when NO is determined in the step S105, the zoom center in the zoom process to be performed later in a step S111 is set to the center of the image data written to the uncompressed image region 28 b, and then, the process proceeds to the step S113.
  • In the step S113, the key state signal is fetched from the key input device 46, and it is determined whether or not a tele-button 46 T is depressed to perform an enlargement operation in a step S115, whether or not a wide button 46 W is depressed to perform a reduction operation in a step S117, whether or not a position change button 46 S is depressed to perform a change operation of the zoom center position in a step S119, and whether or not a forward button 46 F or a back button 46 B is depressed to perform a selection operation of a file in a step S121.
  • When YES is determined in the step S115, it is detected whether or not the value of the zoom magnification is a maximum value in a step S123. When YES is determined in this step, the process returns to the step S113 as it is. However, when NO is determined, the value of the zoom magnification is increased by a predetermined amount in a step S125. In a step S127, an enlargement process is performed on the image data written to the uncompressed image region 28 b based on the updated zoom magnification and the zoom center position, and by updating the image data accommodated in the display image region 28 a, an image to be displayed on the LCD monitor 32 is enlarged, and then, the process returns to the step S113.
  • When YES is determined in the step S117, it is detected whether or not the value of the zoom magnification is "1" as an initial value in a step S129. When YES is determined, a multi-screen display is performed in a step S135, and the process returns to the step S113. When NO is determined in the step S129, the value of the zoom magnification is reduced by a predetermined amount in a step S131. In a step S133, a reduction process is performed on the image data written to the uncompressed image region 28 b based on the updated zoom magnification and the zoom center position, and by updating the image data accommodated in the display image region 28 a, an image to be displayed on the LCD monitor 32 is reduced, and then, the process returns to the step S113.
  • The multi-screen display shown in the step S135 is performed according to a flowchart shown in FIG. 21. Upon detection in a step S1351 that the position information, together with the JPEG data, is recorded in the recording medium 40, image data obtained by performing a trimming process and the reduction process on the image data written to the uncompressed image region 28 b according to the position information is displayed as one screen of the multi-screen display in a step S1353. When NO is determined in the step S1351, image data obtained by performing the reduction process on the entire image data written to the uncompressed image region 28 b is displayed as one screen of the multi-screen display in a step S1355. Then, the process is restored to a routine at a hierarchical upper level.
  • For example, in the case of the image in which the four persons P1, P2, P3, and P4 are photographed as shown in FIG. 5, the multi-screen display obtained as a result of the execution of the step S1353 is as shown in FIG. 22, and the multi-screen display obtained as a result of the execution of the step S1355 is as shown in FIG. 23. In the multi-screen display obtained as a result of the execution of the step S1353, only a portion of the image including the important portion is displayed in each screen. Thus, it becomes easy to select the image.
  • It is noted that the number of divisions of the multi-screen display is not restricted to 4. The relative position between the image displayed before being changed to the multi-screen display in the step S135 and the images to be displayed in the other regions is arbitrarily set for a digital camera. The images to be displayed in the other regions are obtained from other files recorded in the recording medium 40. It is noted that although the detailed description is omitted, a file includes, in addition to the JPEG data as a main image, thumbnail image data smaller in resolution (the number of pixels) than the JPEG data; the thumbnail image data may be used as the image data for the multi-screen display. At this time, the position information used in the step S1353 is converted as needed, depending on the number of pixels of the thumbnail image data.
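  • As an editorial sketch of the branch in the steps S1351 to S1355, the source rectangle of each multi-screen tile can be computed as follows; the crop size here is an assumed parameter. With position information, the tile is trimmed around that position before reduction; without it, the entire image is used.

        def tile_source_rect(height, width, position=None, crop_h=486, crop_w=648):
            if position is None:                       # step S1355: whole image
                return 0, 0, height, width
            top = min(max(position[0] - crop_h // 2, 0), height - crop_h)
            left = min(max(position[1] - crop_w // 2, 0), width - crop_w)
            return top, left, crop_h, crop_w           # step S1353: around position

        # A tile for an image whose position information marks (972, 1620):
        print(tile_source_rect(1944, 2592, position=(972, 1620)))  # (729, 1296, 486, 648)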
  • Returning to FIG. 20A and FIG. 20B, the description continues. When YES is determined in the step S119, in a step S137, the image data written to the uncompressed image region 28 b is processed, and the image data accommodated in the display image region 28 a is updated to the image data in which the zoom center position is changed, whereby the central position of the enlarged image to be displayed on the LCD monitor 32 is updated, and then, the process returns to the step S113.
  • When YES is determined in the step S121, the process returns to the step S101 to change a file which is a target to be reproduced. When NO is determined, the process returns to the step S113.
  • According to the first embodiment, when the center of the reproduction zoom process is not set by the position information accompanying the JPEG data read out from the recording medium 40, the center of the image is the central position of the zoom process and is displayed in an enlarged manner, as shown in FIG. 12(A) to FIG. 12(C). Thus, after an enlarged display operation, the central position needs to be changed. However, even with respect to the same reproduced image, when the center of the reproduction zoom process is set by the position information, a position corresponding to the position information is the central position and is displayed in an enlarged manner, as shown in FIG. 13(A) to FIG. 13(C). Thus, it is possible to reproduce the image more easily in an enlarged manner corresponding to a position noticed by the digital camera 10 when photographing.
  • Furthermore, when only one piece of face information is detected, the designated region in the first embodiment is set to a central position of the face determining region when the face is detected in the face determining process, and when a plurality of faces are detected, the designated region is set to a central position of the face determining region when the face nearest the central position of the angle of view is detected in the face determining process. However, the designation of the designated region, i.e., a designating method of a specified position within the object scene image generated by the imager, is not restricted thereto. When a plurality of faces are detected, the designated region may be set to a central position of the face determining region when the largest face is detected, or to a central position of the face determining region when a face is detected with the highest degree of reliability, for example. In the case of the object scene image including the four persons P1, P2, P3, and P4 as shown in FIG. 5, for which the face detection result shown in FIG. 6 is obtained, setting the designated region to the central position of the face determining region of the largest face selects the position of the face of the person P4, and setting it to the central position of the face determining region of the face detected with the highest degree of reliability selects the position of the face of the person P1, who faces the front.
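  • The two alternative rules just mentioned reduce to simple selections over the face information records. In the following editorial Python sketch, each record is a (center, size, reliability) tuple; the values are illustrative, chosen to match the FIG. 6 example, and are not taken from the disclosure.

        def pick_largest(faces):
            return max(faces, key=lambda f: f[1])       # largest face wins

        def pick_most_reliable(faces):
            return max(faces, key=lambda f: f[2])       # highest reliability wins

        faces = [((60, 40), 32, 0.95),    # P1: front-facing, highest reliability
                 ((80, 240), 32, 0.70),   # P2
                 ((120, 150), 48, 0.80),  # P3
                 ((150, 280), 64, 0.75)]  # P4: largest face
        print(pick_largest(faces)[0])        # -> (150, 280)
        print(pick_most_reliable(faces)[0])  # -> (60, 40)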
  • Additionally, the face detection according to the first embodiment enables a plurality of faces to be detected. It may be so configured that when even one face image is discovered in the course of the detection process, the face detection process is ended to determine the designated region based on the detection result. In this case, the checking process of the feature amounts and the moving process of the face determining region are executed by using the three face determining regions in descending order of size, i.e., the maximum size, the middle size, and the minimum size, and therefore, a larger face in the object scene is preferentially detected.
  • With reference to FIG. 24, a second embodiment of the present invention is described. An image reproducing apparatus 100 according to the second embodiment of the present invention is a reproducing apparatus for reproducing an object scene image from a recording medium on which position information indicating a specific position within the object scene image, such as that obtained in the digital camera 10 according to the first embodiment of the present invention, is recorded together with the object scene image.
  • Any one of the JPEG data recorded in a recording medium 140 in a file format via an I/F 138 is selected and read out, and the resultant data is written to a compressed image region 128 c of an SDRAM 128. A JPEG decoder 137 reads out the JPEG data accommodated in the compressed image region 128 c through a memory control circuit 126, decompresses the read JPEG data, and writes the obtained image data to an uncompressed image region 128 b through the memory control circuit 126. The image data written to the uncompressed image region 128 b is read out through the memory control circuit 126, and from the read image data, image data for display having a resolution lower than that of the image data is created and written to a display image region 128 a of the SDRAM 128.
  • A video encoder 130 reads out the image data accommodated in the display image region 128 a through the memory control circuit 126 every 1/30 second, and converts the read image data into a composite video signal. As a result, a reproduced image is displayed on an LCD monitor 132.
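This data flow can be summarized with the following illustrative model, in which the three SDRAM regions are plain buffers; decode_jpeg() and downscale() are hypothetical stand-ins for the JPEG decoder 137 and the display-image creation, not APIs named in the patent.

```python
# Illustrative model of the reproduction data flow through the SDRAM regions.
def reproduce(jpeg_bytes, decode_jpeg, downscale, display_size, sdram):
    sdram["compressed_128c"] = jpeg_bytes                       # read out via the I/F
    sdram["uncompressed_128b"] = decode_jpeg(sdram["compressed_128c"])
    # The display image has a lower resolution than the decoded image data.
    sdram["display_128a"] = downscale(sdram["uncompressed_128b"], display_size)
    return sdram["display_128a"]    # read every 1/30 second by the video encoder 130
```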
  • In the recording medium 140, position information indicating the position designated at the time of photographing is recorded with the JPEG data. When the position information can be read out, the central position of the reproduction zoom process is set based on the position information, and the zoom display is performed in this state. When JPEG data for which no position information is obtained is read out, the center of the image is set as the central position of the reproduction zoom process, and the zoom display is performed in this state.
  • It is noted that unlike in the first embodiment, the position information here has a value corresponding to the number of pixels of the JPEG data, and thus there is no need to convert the value as in the first embodiment.
  • The zoom display is performed by accommodating, in the display image region 128 a, image data obtained by applying a zoom process to the image data written in the uncompressed image region 128 b, based on the zoom magnification and the zoom center position.
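As an editorial sketch of such a zoom process, the displayed window is a crop of the uncompressed image around the zoom center, sized by the zoom magnification; the clamping behavior at the image border is an assumption.

```python
# Editorial sketch of the reproduction zoom process.
def zoom_window(img_w, img_h, center_x, center_y, magnification):
    win_w, win_h = img_w / magnification, img_h / magnification
    x0 = min(max(center_x - win_w / 2, 0), img_w - win_w)
    y0 = min(max(center_y - win_h / 2, 0), img_h - win_h)
    return (x0, y0, win_w, win_h)    # region scaled up into the display image

# A 2x zoom of a 640x480 image centered on a recorded position (400, 160):
print(zoom_window(640, 480, 400, 160, 2.0))   # -> (240.0, 40.0, 320.0, 240.0)
```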
  • It is noted that when the position information is set as the central position of the reproduction zoom process, a character generator 134 applies character data to the LCD monitor 132 in order to display the designated region. Such a character display may be omitted.
  • A CPU 142 executes a reproducing operation shown in FIG. 25A and FIG. 25B at the time of a reproducing operation. It is noted that a control program for executing the reproducing operation is stored in a flash memory 148.
  • In a step S201, a file to be reproduced is selected. In a step S203, the JPEG data within the selected file is used to create a display image, and the created image is displayed on the LCD monitor 132. More specifically, any one of the JPEG data recorded in a file format in the recording medium 140 is selected and read out via the I/F 138, and written to the compressed image region 128 c of the SDRAM 128. The JPEG decoder 137 reads out the JPEG data accommodated in the compressed image region 128 c through the memory control circuit 126, decompresses the read JPEG data, and writes the obtained image data to the uncompressed image region 128 b through the memory control circuit 126. The image data written to the uncompressed image region 128 b is read out through the memory control circuit 126, and from the read image data, image data for display having a resolution lower than that of the read image data is created and written to the display image region 128 a of the SDRAM 128. The video encoder 130 reads out the image data accommodated in the display image region 128 a through the memory control circuit 126 every 1/30 second, and converts the read image data into a composite video signal. As a result, a reproduced image is displayed on the LCD monitor 132.
  • In a step S204, the CPU 142 sets a held value of the zoom magnification to “1” as an initial value.
  • Upon detection in a step S205 that the position information is recorded in the recording medium 140 together with the JPEG data, the zoom center for the zoom process to be performed later is set by using the position information in a step S207, a character indicating the position set as the zoom center is displayed in a step S209, and the process proceeds to a step S213.
  • It is noted that the character display in the step S209 may be omitted, or the displayed character may be hidden after the display has continued for a predetermined time or when any operation is thereafter performed.
  • On the other hand, when NO is determined in the step S205, the zoom center for the zoom process to be performed later is set, in a step S211, to the center of the image data written to the uncompressed image region 128 b, and the process proceeds to the step S213.
  • In the step S213, a key state signal is fetched from the key input device 146. It is then determined in a step S215 whether or not a tele-button 146T is depressed to perform an enlargement operation, in a step S217 whether or not a wide-button 146W is depressed to perform a reduction operation, in a step S219 whether or not a position change button 146S is depressed to perform a change operation of the zoom center position, and in a step S221 whether or not a forward button 146F or a back button 146B is depressed to perform a file selection operation.
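The branching in the steps S213 to S221 can be pictured with the following hypothetical event loop; the key names, the fetch function, and the handler table are editorial assumptions.

```python
# Hypothetical event loop mirroring the steps S213 to S221.
def reproducing_loop(fetch_key_state, handlers):
    while True:
        key = fetch_key_state()                # step S213: fetch the key state signal
        if key == "TELE":                      # step S215: enlargement operation
            handlers["enlarge"]()
        elif key == "WIDE":                    # step S217: reduction operation
            handlers["reduce"]()
        elif key == "POSITION":                # step S219: change of the zoom center
            handlers["move_center"]()
        elif key in ("FORWARD", "BACK"):       # step S221: file selection operation
            return key                         # leave the loop to reselect a file (step S201)
```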
  • When YES is determined in the step S215, it is determined in a step S223 whether or not the value of the zoom magnification is at a maximum value. When YES is determined, the process returns to the step S213 as it is. When NO is determined, the value of the zoom magnification is increased by a predetermined amount in a step S225. In a step S227, an enlargement process is performed on the image data written to the uncompressed image region 128 b based on the updated zoom magnification and the zoom center position, and by updating the image data accommodated in the display image region 128 a, the image displayed on the LCD monitor 132 is enlarged. Then, the process returns to the step S213.
  • When YES is determined in the step S217, it is determined in a step S229 whether or not the value of the zoom magnification is at “1”, the initial value. When YES is determined, the process returns to the step S213 as it is. When NO is determined in the step S229, the value of the zoom magnification is reduced by a predetermined amount in a step S231, and a reduction process is performed on the image data written to the uncompressed image region 128 b based on the updated zoom magnification and zoom center position. By updating the image data accommodated in the display image region 128 a, the image displayed on the LCD monitor 132 is reduced. Then, the process returns to the step S213.
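The magnification updates in the steps S223 to S231 amount to a step-and-clamp rule, sketched below; ZOOM_MAX and ZOOM_STEP are assumed values, not taken from the disclosure.

```python
# Editorial sketch of the held zoom-magnification updates.
ZOOM_MAX, ZOOM_STEP = 8.0, 0.5   # assumed limit and increment

def update_magnification(current, operation):
    if operation == "enlarge" and current < ZOOM_MAX:    # steps S223/S225
        return min(current + ZOOM_STEP, ZOOM_MAX)
    if operation == "reduce" and current > 1.0:          # steps S229/S231
        return max(current - ZOOM_STEP, 1.0)
    return current           # already at a limit: return to the step S213 as-is
```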
  • When YES is determined in the step S219, in a step S237, the image data written to the uncompressed image region 128 b is processed and the image data accommodated in the display image region 128 a is updated to image data in which the zoom center position is changed, whereby the central position of the enlarged image displayed on the LCD monitor 132 is updated. Then, the process returns to the step S213.
  • When YES is determined in the step S221, the process returns to the step S201 to change the file to be reproduced. When NO is determined, the process returns to the step S213.
  • According to this embodiment, when the center of the reproduction zoom process is not set by the position information accompanying the JPEG data read out from the recording medium 140, the center of the image is used as the central position of the zoom process, and the image is displayed in an enlarged manner centered there, as shown in FIG. 12(A) to FIG. 12(C). Thus, after an enlargement display operation, the central position needs to be changed. However, for the same reproduced image, when the center of the reproduction zoom process is set by the position information, the position corresponding to the position information is used as the central position, and the image is displayed in an enlarged manner centered there, as shown in FIG. 13(A) to FIG. 13(C). Thus, it is possible to more easily reproduce the image in an enlarged manner corresponding to a position noticed when photographing.
  • Although the embodiments of the present invention are described in the foregoing, the present invention is not restricted to the above-described embodiments.
  • An electronic camera may be configured such that, for one object scene image, the positions, sizes, and degrees of reliability of a plurality of items of face information are recorded in the recording medium and used, as shown in FIG. 26. Then, at reproduction, a selection may be made as to which position information is to be used. In selecting, an order and a priority for selection may be determined depending on the value of the size and the magnitude of the degree of reliability. Furthermore, the value of the size may be used to determine an initial value of the zoom magnification for the enlarged display.
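One illustrative way to order such entries and to derive an initial magnification from the size is sketched below; the weighting of size over reliability is an assumption, not taken from the disclosure.

```python
# Editorial sketch: ordering recorded face information and deriving a zoom value.
def order_face_info(entries):
    # entries: dicts with "position" (x, y), "size", and "reliability" keys
    return sorted(entries, key=lambda e: (e["size"], e["reliability"]), reverse=True)

def initial_zoom_from_size(face_size, image_width, margin=2.0):
    # A larger recorded face needs less magnification to fill the display.
    return max(1.0, image_width / (face_size * margin))

faces = [
    {"position": (400, 160), "size": 90, "reliability": 0.95},
    {"position": (900, 300), "size": 140, "reliability": 0.70},
]
best = order_face_info(faces)[0]       # entry selected for reproduction
print(best["position"], initial_zoom_from_size(best["size"], 1600))
```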
  • As the feature image, not a face image but images of, e.g., a soccer ball or small animals may be searched for, to thereby designate a specific position within the object scene image. Moreover, the specific position need not be a position designated by an image recognition process such as face detection; it may be the position of the nearest subject, the position of the farthest subject, or the position of the subject nearest the center of the angle of view, as detected by an AF function, or a position directly pointed to by the user with a pointing device such as a touch panel when photographing.
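A hypothetical dispatcher over these alternative designation sources might look as follows; every name in it is an editorial assumption.

```python
# Editorial sketch of alternative sources for the specific position.
def designate_position(af_subjects=None, touch=None, mode="touch"):
    if mode == "touch" and touch is not None:
        return touch                                  # (x, y) pointed by the user
    if af_subjects:                                   # each subject: (x, y, distance)
        if mode == "nearest":
            return min(af_subjects, key=lambda s: s[2])[:2]
        if mode == "farthest":
            return max(af_subjects, key=lambda s: s[2])[:2]
    return None
```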
  • In the electronic camera or the object-scene-image reproducing apparatus, the reproduction using the position information is not restricted to the enlarged reproduction and the trimming reproduction. An object scene image may be reproduced from the position indicated by the position information as if a hole were expanding, or may be reproduced while being rotated about the position indicated by the position information.
  • The object scene image need not be recorded in a compressed state, and may be recorded in an uncompressed state. As the position information, not the number of pixels but a ratio on the monitor (a position of X % in a longitudinal direction and Y % in a horizontal direction) may be used to specify the position.
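The ratio form of the position information is resolution independent, as the following small sketch illustrates: the same record specifies the position for any pixel count.

```python
# Converting ratio-form position information to pixel coordinates.
def ratio_to_pixels(x_pct, y_pct, img_w, img_h):
    return (img_w * x_pct / 100.0, img_h * y_pct / 100.0)

print(ratio_to_pixels(25.0, 50.0, 1600, 1200))   # -> (400.0, 600.0)
```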
  • The object scene image may be not only a still image but also a moving image or a part of a moving image, such as an I picture (Intra-Coded Picture) within MPEG image data. As shown in FIG. 26, a plurality of items of position information for one object scene image may be recorded in the recording medium and used. Then, at reproduction, a selection may be made as to which position information is to be used. The position information used at reproduction is not restricted to one item; reproduction using a plurality of items of position information, such as enlarged reproduction or trimming reproduction of the region enclosed by the plurality of positions, may be performed.
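Taking the region enclosed by a plurality of position information as the bounding box of the recorded points (an assumption), trimming reproduction could compute the region as follows.

```python
# Editorial sketch: trimming region enclosed by several recorded positions.
def enclosing_region(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))

# Trimming the region around two recorded face positions:
print(enclosing_region([(400, 160), (900, 300)]))   # -> (400, 160, 500, 140)
```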
  • Although the present invention has been described and illustrated in detail, it is clearly understood that the same is by way of illustration and example only and is not to be taken by way of limitation, the spirit and scope of the present invention being limited only by the terms of the appended claims.

Claims (9)

1.-7. (canceled)
8. An electronic camera, comprising:
an imager, having an imaging surface capturing an object scene, which generates a scene image;
a searcher which searches for a feature image included in the scene image generated by said imager;
a designator which designates a specific position based on a position of the feature image detected by said searcher;
a recorder which records, together with position information of the specific position designated by said designator, the scene image generated by said imager; and
a reproducer which reproduces the scene image recorded by said recorder, using the position information recorded by said recorder, wherein said reproducer is capable of selecting any of the position information to be used for reproducing when there is a plurality of position information.
9. An electronic camera according to claim 8, further comprising an adjustor which adjusts a photographing condition of said imager based on the scene image at the specific position designated by said designator, wherein said recorder records a scene image created in accordance with the imaging condition adjusted by said adjustor.
10. An electronic camera according to claim 9, wherein said photographing condition is a focal distance of said imager.
11. An electronic camera according to claim 8, wherein said reproducer enlarges and reproduces the scene image centering around a position specified by using the position information recorded by said recorder.
12. A scene-image reproducing apparatus which reproduces a scene image from a recording medium recorded thereon with position information of a specific position designated based on a position of a feature image included in the scene image, together with the scene image, said scene-image reproducing apparatus, comprising a reproducer which reproduces the scene image by using the position information, wherein said reproducer is capable of selecting any of the position information to be used for reproducing when there is a plurality of position information.
13. A scene-image reproducing apparatus according to claim 12, wherein said reproducer enlarges and reproduces the scene image centering around a position specified by using the position information.
14. An electronic camera, comprising:
an imager, having an imaging surface capturing an object scene, which generates a scene image;
a designator which designates a specific position within the scene image generated by said imager;
a recorder which records, together with position information of the specific position designated by said designator, the scene image generated by said imager; and
a reproducer which reproduces the scene image recorded by said recorder, using the position information recorded by said recorder, wherein said reproducer reproduces an image trimmed corresponding to the position information when the scene image is reproduced by a multi-screen display.
15. A scene-image reproducing apparatus which reproduces a scene image from a recording medium recorded thereon with position information indicating a specific position within the scene image, together with the scene image, said scene-image reproducing apparatus, comprising a reproducer which reproduces the scene image by using the position information, wherein said reproducer reproduces an image trimmed corresponding to the position information when the scene image is reproduced by a multi-screen display.
US13/298,509 2007-08-08 2011-11-17 Electronic camera and object scene image reproducing apparatus Abandoned US20120133798A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2007207281A JP2009044463A (en) 2007-08-08 2007-08-08 Electronic camera and field image reproduction device
JP2007-207281 2007-08-08

Publications (1)

Publication Number Publication Date
US20120133798A1 true US20120133798A1 (en) 2012-05-31

Family

ID=40346607

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/186,818 Expired - Fee Related US8081804B2 (en) 2007-08-08 2008-08-06 Electronic camera and object scene image reproducing apparatus
US13/298,509 Abandoned US20120133798A1 (en) 2007-08-08 2011-11-17 Electronic camera and object scene image reproducing apparatus

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US12/186,818 Expired - Fee Related US8081804B2 (en) 2007-08-08 2008-08-06 Electronic camera and object scene image reproducing apparatus

Country Status (3)

Country Link
US (2) US8081804B2 (en)
JP (1) JP2009044463A (en)
CN (1) CN101365063B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5187139B2 (en) * 2008-10-30 2013-04-24 セイコーエプソン株式会社 Image processing apparatus and program
US8681236B2 (en) * 2009-06-18 2014-03-25 Samsung Electronics Co., Ltd. Apparatus and method for reducing shutter lag of a digital camera
JP2012095236A (en) * 2010-10-28 2012-05-17 Sanyo Electric Co Ltd Imaging device
JP5782813B2 (en) * 2011-04-27 2015-09-24 株式会社リコー Imaging apparatus and image display method
EP2727078B1 (en) * 2011-06-29 2022-09-07 Koninklijke Philips N.V. Zooming of medical images
JP6295534B2 (en) * 2013-07-29 2018-03-20 オムロン株式会社 Programmable display, control method, and program
JP6512810B2 (en) * 2014-12-11 2019-05-15 キヤノン株式会社 Image pickup apparatus, control method and program
US10049273B2 (en) * 2015-02-24 2018-08-14 Kabushiki Kaisha Toshiba Image recognition apparatus, image recognition system, and image recognition method
KR20190094677A (en) * 2018-02-05 2019-08-14 삼성전자주식회사 Apparatus and method for recognizing voice and face based on changing of camara mode


Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3649468B2 (en) 1995-05-31 2005-05-18 株式会社日立製作所 Electronic album system with shooting function
JP2000278578A (en) 1999-03-24 2000-10-06 Matsushita Electric Ind Co Ltd Electronic still camera
KR100547992B1 (en) * 2003-01-16 2006-02-01 삼성테크윈 주식회사 Digital camera and control method thereof
JP4307301B2 (en) * 2003-07-31 2009-08-05 キヤノン株式会社 Image processing apparatus and method
JP4489608B2 (en) * 2004-03-31 2010-06-23 富士フイルム株式会社 DIGITAL STILL CAMERA, IMAGE REPRODUCTION DEVICE, FACE IMAGE DISPLAY DEVICE, AND CONTROL METHOD THEREOF
JP4379191B2 (en) * 2004-04-26 2009-12-09 株式会社ニコン Electronic camera and image processing program
JP2005354333A (en) 2004-06-10 2005-12-22 Casio Comput Co Ltd Image reproducer and program
JP2006109005A (en) * 2004-10-04 2006-04-20 Canon Inc Image output device and imaging device
JP4487872B2 (en) * 2005-07-11 2010-06-23 ソニー株式会社 Image processing apparatus and method, program, and recording medium
JP2007041866A (en) * 2005-08-03 2007-02-15 Canon Inc Information processing device, information processing method, and program
JP2007067559A (en) * 2005-08-29 2007-03-15 Canon Inc Image processing method, image processing apparatus, and control method of imaging apparatus
CN101300826A (en) * 2005-11-02 2008-11-05 奥林巴斯株式会社 Electric camera
JP4559964B2 (en) * 2005-12-26 2010-10-13 株式会社日立国際電気 Image processing program
JP2007195099A (en) * 2006-01-23 2007-08-02 Fujifilm Corp Photographing apparatus
JP4943769B2 (en) * 2006-08-15 2012-05-30 富士フイルム株式会社 Imaging apparatus and in-focus position search method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060210264A1 (en) * 2005-03-17 2006-09-21 Canon Kabushiki Kaisha Imaging apparatus and method for controlling display device
US20070291104A1 (en) * 2006-06-07 2007-12-20 Wavetronex, Inc. Systems and methods of capturing high-resolution images of objects

Also Published As

Publication number Publication date
JP2009044463A (en) 2009-02-26
US20090041355A1 (en) 2009-02-12
CN101365063B (en) 2012-10-24
CN101365063A (en) 2009-02-11
US8081804B2 (en) 2011-12-20

Similar Documents

Publication Publication Date Title
US8081804B2 (en) Electronic camera and object scene image reproducing apparatus
JP5056061B2 (en) Imaging device
JP4626493B2 (en) Image processing apparatus, image processing method, program for image processing method, and recording medium recording program for image processing method
JP4457358B2 (en) Display method of face detection frame, display method of character information, and imaging apparatus
US8031228B2 (en) Electronic camera and method which adjust the size or position of a feature search area of an imaging surface in response to panning or tilting of the imaging surface
US8421874B2 (en) Image processing apparatus
JP2011160044A (en) Imaging device
JP2009053448A (en) Electronic camera
US8237802B2 (en) Method and apparatus for determining shaken image by using auto focusing
US20120229678A1 (en) Image reproducing control apparatus
KR20130031176A (en) Display apparatus and method
JP4842919B2 (en) Display device, photographing device, and display method
US9137448B2 (en) Multi-recording image capturing apparatus and control method for multi-recording image capturing apparatus for enabling the capture of two image areas having two different angles of view
JP2007265149A (en) Image processor, image processing method and imaging device
US8181121B2 (en) Display unit and display method
JP2001211418A (en) Electronic camera
JP4948014B2 (en) Electronic camera
JP5217843B2 (en) Composition selection apparatus, composition selection method and program
US8437552B2 (en) Information processing apparatus and method, and a recording medium storing a program for implementing the method
JP5157528B2 (en) Imaging device
JP4769693B2 (en) Imaging apparatus, method, and program
JP2019047436A (en) Image processing device, image processing method, image processing program, and imaging device
JP2009077175A (en) Electronic camera
JP2009021893A (en) Imaging device and method
JP2009081502A (en) Photographing device and image reproducing device

Legal Events

Date Code Title Description
AS Assignment

Owner name: SANYO ELECTRIC CO., LTD., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SAKAJI, RYO;ICHII, NOBUHIKO;SAKAI, YURIE;REEL/FRAME:027244/0714

Effective date: 20080728

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION