WO2006126141A1 - Images identification method and apparatus - Google Patents

Images identification method and apparatus

Info

Publication number
WO2006126141A1
Authority
WO
WIPO (PCT)
Prior art keywords
images
feature database
group
feature
temporary
Application number
PCT/IB2006/051553
Other languages
French (fr)
Inventor
Declan Patrick Kelly
Original Assignee
Koninklijke Philips Electronics N.V.
Application filed by Koninklijke Philips Electronics N.V.
Publication of WO2006126141A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content

Definitions

  • If object identifying unit 220 cannot match the objects in the image, the user may select the object in the image through feature selecting unit 214, and annotate the image through image annotating unit 250.
  • User feature database updating unit 260 writes the identified object into the user feature database or updates the original features in the user feature database.
  • When there are possible matches, object identifying unit 220 recommends an object to the user according to the frequency of occurrence of the possibly matching objects in the user feature database or the relationship between the objects; the user confirms the recommended object or selects an object himself.
  • the present invention may also be realized by an appropriately programmed computer.
  • the program of the computer can identify objects in a group of images having a theme.
  • the computer program product comprises: codes for establishing a temporary feature database containing the features of the objects related to the theme; and codes for searching for corresponding objects in the group of images according to the features of the temporary feature database.
  • the computer program product can be stored in a storage carrier.
  • the code parts of the program can be provided to a processor to form a machine, so that executing the codes on the processor generates means for performing the corresponding functions.
  • Fig. 3 is a schematic view of the user interface 300 for identifying objects in images according to one embodiment of the present invention, in which an image 310 is displayed in the middle of the screen. In the center of the image is Peter, with his classmates Tom and Mary on his left and right respectively. The theme of the image is classmates.
  • around image 310 there are a plurality of feature areas 320 - 360 for displaying the thumb pictures of the user's classmates extracted from the user feature database, such as thumb picture 330, as well as the thumb pictures of the features of each object acquired from the image, such as the thumb picture of Tom displayed in position 320, the thumb pictures of Peter displayed in positions 340 and 360, and the thumb picture of Mary displayed in position 350.
  • the user may select, with the mouse, Tom's face in the image; for example, the user draws a selected area, in a rectangular, round, or any other arbitrary form, around the face.
  • the selected part is dragged and dropped to feature area 320 to acquire Tom's thumb picture, and the thumb picture is stored in the temporary feature database.
  • the user may input explanation information to indicate that the thumb picture is one feature of Tom.
  • Mary's thumb picture 350 may be acquired by performing the same operation.
  • the user may select Peter's face, and insert this part into feature area 340, acquiring Peter's thumb picture and indicating through the explanation information that the thumb picture is one feature of Peter.
  • thumb picture 330 in the user feature database (e.g. one feature of Peter) may be automatically linked to thumb picture 340, and, as a result, the user input is reduced.
  • Peter's tie may also be selected, put into feature area 360, and stored in the temporary feature database.
  • Thumb picture 360 is used for assisting in identifying Peter in the other images of the group of images, for example together with thumb pictures 330 and 340.
  • the identifying means identifies, according to the features of the identified objects (thumb pictures 320, 330, 340, 350 and 360), the objects in the other images having the same theme, so as to simplify identification of those images.
  • alternatively, the identifying means may display a grid on the image and automatically divide the image into a plurality of areas; an area selected by clicking is stored as a feature in the temporary feature database.
  • the objects identified may be people or other things in the images, such as a person's pet (a cat or dog, etc.).
  • the present invention may be used not only for identification of objects in still images, but also for identification of objects in video images. As long as a group of images has one theme, the theme information may be used to simplify the object identification algorithm.

Abstract

The present invention relates to the field of image identification. The present invention provides a method and apparatus for identifying objects in a group of images, wherein the group of images has a theme. The method comprises the steps of: establishing a temporary feature database, the temporary feature database containing features of the objects related to the theme; and searching for corresponding objects in the group of images according to the features in the temporary feature database. The image identification method of the present invention reduces the amount of computation and improves the accuracy of image identification.

Description

IMAGES IDENTIFICATION METHOD AND APPARATUS
FIELD OF THE INVENTION
The present invention relates to the field of image identification, specifically to a method and apparatus capable of identifying images rapidly and accurately.
BACKGROUND OF THE INVENTION
The storage capacity available for a user's personal content is practically unlimited. In a normal situation, a user never deletes the personal images or video recordings captured with a video camera; when the storage is full, the user simply buys more to keep all of the personal content. As this content grows, arranging and searching it becomes a key matter. Indexing the images with the names of the people identified in them therefore makes the management of personal content convenient. However, it is very difficult for most users to input the names of all the people for each image. It would be convenient if face recognition software were used to search for images automatically; but people are rich in facial expressions, and the faces in images do not always face the front, which renders face recognition software inaccurate, so it is very unreliable to index images with face recognition software alone.
US Patent Application No. US2004/0008906 (invented by Steven L. Webb, assigned to the Hewlett-Packard Company, and filed on July 10, 2002) discloses a method for arranging digital images according to the faces of the people in the images. The method first locates a face in the image and then compares the face with a database of known faces. If the face is in the database, the name of the person is added to the metadata of the image. If it is not, the user is prompted to input the name of the person, and the face is added to the database. That application has two drawbacks. First, the solution requires a very good face recognition function, otherwise it cannot locate faces. Second, as the number of identified images increases, the database of known faces grows as well. As a result, the amount of computation needed to compare located faces with the database of known faces becomes tremendous, and the comparison therefore requires a much longer time or a higher processing capability.
SUMMARY OF THE INVENTION
One object of the present invention is to reduce the amount of computation and to improve the accuracy of identification.
According to one aspect of the present invention, a method for identifying objects in a group of images is provided, wherein the group of images has a theme, the method comprising the steps of: establishing a temporary feature database, the temporary feature database containing features of the objects related to the theme; and searching for corresponding objects in the group of images according to the features in the temporary feature database.
According to one embodiment of the present invention, the temporary feature database contains at least two features directed to the same object, and the object is searched for in the group of images according to the at least two features directed to the same object.
According to one embodiment of the present invention, the method further comprises: receiving an input from a user for selecting at least one part of an image in the group of images as the features of the temporary feature database.
According to one embodiment of the present invention, the method further comprises the step of: selecting, according to a predetermined rule, at least one part of an image in the group of images as the features of the temporary feature database.
According to one embodiment of the present invention, the method further comprises the step of: extracting, according to a predetermined rule, a part of the user feature database containing a plurality of features to establish the temporary feature database.
According to one embodiment of the present invention, the method further comprises the step of: receiving an input from a user for extracting a part of the user feature database containing a plurality of features to establish the temporary feature database.
According to another aspect of the present invention, an apparatus for identifying objects in a group of images is provided, wherein the group of images has a theme, the apparatus comprising: temporary feature database establishing means for establishing a temporary feature database, the temporary feature database containing features of the objects related to the theme; and object identifying means for searching for corresponding objects in the group of images according to the features in the temporary feature database.
According to one embodiment of the present invention, the object identifying means searches for objects in the group of images according to at least two features directed to the same object.
According to one embodiment of the present invention, the temporary feature database establishing means comprises: feature selecting means for receiving an input from a user for selecting at least one part of an image in the group of images as a feature of the temporary feature database.
According to one embodiment of the present invention, the temporary feature database establishing means comprises: feature selecting means for selecting at least one part of at least one image in the group of images as features of the temporary feature database according to a predetermined rule.
According to one embodiment of the present invention, the temporary feature database establishing means comprises: feature extracting means for extracting a part of a user feature database containing a plurality of features to establish the temporary feature database according to a predetermined rule.
According to one embodiment of the present invention, the temporary feature database establishing means comprises: feature extracting means for receiving an input from a user for extracting a part of the user feature database containing a plurality of features to establish the temporary feature database.
According to another aspect of the present invention, a computer program product for identifying objects in a group of images is provided, wherein the group of images has a theme, the computer program product comprising: codes for establishing a temporary feature database, the temporary feature database containing features of the objects related to the theme; and codes for searching for corresponding objects in the group of images according to the features in the temporary feature database.
The present invention also relates to a storage carrier containing such computer program product.
In the present invention, a temporary feature database is established according to the theme information of the images; it contains features that are relatively few in number and highly relevant. As a result, searching for objects in the images according to the features of the temporary feature database may simplify the image identification algorithms and improve the efficiency of identification. In the present invention, one object generally has a plurality of features. When an object is being identified, even if the match degree of each individual feature is lower, the overall matching of the object is improved thanks to the use of the correlative information between the features. Additionally, in a group of images having a theme, some features of an object normally remain unchanged, and matching based on these unchanged features improves the accuracy of the identification.
BRIEF DESCRIPTION OF THE DRAWINGS
Following is a detailed description of embodiments of the present invention in conjunction with the accompanying drawings, in which:
Fig. 1 is a flow chart of the method for identification of a group of images according to one embodiment of the present invention;
Fig. 2 is a block schematic diagram of the apparatus for identification of a group of images according to one embodiment of the present invention; and
Fig. 3 is a schematic view of the user interface for identifying objects in images according to one embodiment of the present invention.
In the drawings, identical reference numbers indicate identical or similar features and functions.
DETAILED DESCRIPTION OF THE INVENTION
OVERVIEW
As a rule, a group of images taken by people (including photographs, video recordings, video clips, etc.) has a theme. For example, the theme of a group of images may be family: the images contain only the user's family members, or, even though there are many other people in the images, the user only wants to take photographs of his family members, the other people being strangers who got into the images by accident. If a group of images is captured with his colleagues or classmates, its theme would be colleagues or classmates. In this way, themes are classified, according to the people in the images, into oneself, family, relatives, colleagues, secondary school classmates, college classmates, etc.
The themes of a group of images may also be classified according to events. For example, for a group of images captured while one's family visits the Imperial Palace Museum, the theme of the group is the family visiting the Imperial Palace Museum. Another group captured of one's son at his birthday party with his classmates would have the theme of the son at the birthday party.
When a group of images is transmitted to one's PC after being captured with a digital camera, the group of images usually has a theme. Furthermore, a group of images captured with a digital camera within a period of time is likely not only to have a theme, but the people in the images may also share some features, for example the clothes they wear. In several images captured on the same day, a person is likely to have the same hairstyle and wear the same tie, coat, or hat. All these features can provide very reliable auxiliary information for identifying the people in the images.
To improve the identification rate, some existing image identification devices seek, as much as possible, to optimize their algorithms for identifying people's facial features. Another category of existing image identification devices tries to match the objects in the images by constantly enlarging the user's feature database. These identification devices all require a great amount of computation, yet still have low identification accuracy.
In the present invention, a user extracts, according to the theme of a group of images, the features relevant to that theme from a user feature database containing a plurality of features to establish a temporary feature database; or the user selects, according to the theme, the features of individual objects in one or more images, together with the relevant auxiliary features, to establish the temporary feature database. It is then possible to identify the objects in a group of images having a theme according to the features of the temporary feature database. Since the features that a user provides or selects are usually accurate and few in number, the image identification algorithm can be simplified and the identification efficiency improved.
Following is the detailed description of the preferred embodiments of the method and apparatus of the present invention.
Fig. 1 is a flow chart of the method for identification of objects in a group of images according to one embodiment of the present invention. The group of images has a theme. For example, a user and his family capture a group of images with a digital camera on a tour; when the images are transmitted from the camera to a PC, the theme of the group of images is the family on a tour.
In another situation, a user may have accumulated an album of images, which includes hundreds of images captured from time to time. The images in the album have a plurality of themes. In a case like this, before image identification, the album is divided into groups, with the images in each group relevant to each other. When identifying the objects in the album, the user may select a part of the relevant images as a group of images having a theme, for example the 50 photographs taken on a tour to the Huangshan Mountain.
In step S110, a temporary feature database is first established, containing the features of the objects relevant to the theme of the group of images. The temporary feature database may include information such as the feature vector and thumb picture of each object, and the names of the features.
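As an illustration only, the record layout just described (feature vector, thumb picture, feature name) might be modelled as in the following sketch. The class and field names are hypothetical choices made for this example, not terms defined by the patent.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class FeatureRecord:
        """One entry of the temporary feature database (hypothetical layout)."""
        object_name: str                       # e.g. "Tom": the person or object the feature belongs to
        feature_name: str                      # e.g. "face" or "tie": what the feature describes
        feature_vector: List[float]            # numeric descriptor used for matching
        thumb_picture: Optional[bytes] = None  # small image crop shown to the user

    @dataclass
    class TemporaryFeatureDatabase:
        """Holds only the features relevant to the theme of the current group of images."""
        theme: str
        records: List[FeatureRecord] = field(default_factory=list)

        def features_for(self, object_name: str) -> List[FeatureRecord]:
            # An object may be represented by several features (face, tie, hairstyle, ...).
            return [r for r in self.records if r.object_name == object_name]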
It is possible to establish, before identifying objects, a temporary feature database containing the features that the objects in the images may possibly have. The temporary feature database may be established in a plurality of modes, described in detail below.
Mode 1: Extracting a part of features from an existing user feature database to establish the temporary feature database.
The image object identifying means has a user feature database, including features of objects the user has identified before. For example, the face identification means in US Patent Application No. 2004/0008906A1 contains a database of known faces for storing identified facial feature information. The user feature database thus contains many people's features, such as the user's own features and the facial features of his family, relatives, classmates and colleagues, together with other relevant features. For that matter, the features in the user feature database may be divided into many subclasses, each of which contains a different group of people's features; for example, "family" is a subclass containing only the facial features of his family members. The identification means extracts, according to the predetermined rule, only the features in the user feature database that are relevant to the theme of the images to be identified, so as to establish a temporary feature database.
For example, a user captures a group of images with a digital camera when he is on a tour with his family. When the images in the digital camera are transmitted to a PC, he knows that the group of images shows his family, not his relatives, his classmates or his colleagues; the theme of the group of images is family members. In a case like this, the user may, before identifying the objects, input the key words "family members" to automatically extract the features of his family from the user feature database and establish a temporary feature database. It is thus unnecessary, when performing image identification, to match against the irrelevant features in the feature database (e.g. those of his relatives, classmates, colleagues, etc.); hence the amount of computation for identification is reduced. Such extraction from the user feature database according to a predetermined rule may also be the extraction of one's own recent features, or one's features from last year, etc.
Additionally, the temporary feature database may also be established according to user input. For example, all the features in the user feature database are displayed on a screen, and the user selects a part of them by clicking. The features the user clicked are extracted from the user feature database, so as to establish a temporary feature database for identifying the objects in the group.
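A rough sketch of Mode 1, reusing the FeatureRecord and TemporaryFeatureDatabase classes from the previous sketch: the temporary database is built either by a rule (a theme keyword naming a subclass of the user feature database) or from the features the user clicks. The subclass layout and the function names are assumptions made for this example.

    from typing import Dict, Iterable, List

    # user_db is assumed to map a subclass label (e.g. "family members")
    # to the FeatureRecord entries of that subclass, as sketched above.
    def build_temp_db_by_rule(user_db: Dict[str, List[FeatureRecord]],
                              theme_keyword: str) -> TemporaryFeatureDatabase:
        """Mode 1, rule-based: copy only the subclass that matches the theme keyword."""
        temp_db = TemporaryFeatureDatabase(theme=theme_keyword)
        temp_db.records.extend(user_db.get(theme_keyword, []))
        return temp_db

    def build_temp_db_by_selection(theme: str,
                                   clicked: Iterable[FeatureRecord]) -> TemporaryFeatureDatabase:
        """Mode 1, user-driven: keep only the features the user clicked on screen."""
        temp_db = TemporaryFeatureDatabase(theme=theme)
        temp_db.records.extend(clicked)
        return temp_db

    # e.g. build_temp_db_by_rule(user_db, "family members") before identifying a family tour album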
Mode 2: Selecting the feature of an object from one or more images to establish a temporary feature database
When the images in a digital camera are transmitted to a PC, the identifying means displays the first image. The identifying means receives the user's mouse operations selecting various parts of it, such as the face of each person in the image and other appearance features relevant to them (such as ties, coats, pants, shoes, socks, wrist watches, etc.), and the names of the people to whom the parts correspond are annotated respectively. The thumb pictures of the parts the user has selected, the feature vectors of the individual parts, and the names are stored in the database to establish a temporary feature database. Of course, the temporary feature database may include at least two features directed to one object; for example, it may contain both the face feature of a person and his other relevant features, e.g. his tie, hairstyle, shoes, etc. These features are likely to occur in other images with the same theme and are easier to identify than people's faces, so that highly relevant auxiliary features are provided for identifying the same object in other images. The temporary feature database established in this way contains fewer, more relevant features. When the same object in other images having the same theme is being identified, the amount of computation for identification is reduced and the identification efficiency is improved.
The identifying means may also display, for the user, the image containing the most features in the group of images, and the user selects the features of the objects on the basis of that image. In this way, more features are available for the identification of the other images.
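A minimal sketch of Mode 2, again building on the classes above: every part the user draws around (a face, a tie, a hat) becomes one record of the temporary feature database together with the annotated name. The descriptor computed here is a trivial placeholder; the patent does not prescribe any particular feature vector.

    from typing import List

    def region_descriptor(crop_pixels: List[List[float]]) -> List[float]:
        # Placeholder: a real system would compute a face or texture descriptor here.
        flat = [value for row in crop_pixels for value in row]
        total = sum(flat) or 1.0
        return [value / total for value in flat]

    def add_selected_region(temp_db: TemporaryFeatureDatabase,
                            crop_pixels: List[List[float]],
                            object_name: str,
                            feature_name: str) -> None:
        """Store one user-selected image part, e.g. ("Tom", "face") or ("Tom", "tie")."""
        temp_db.records.append(FeatureRecord(
            object_name=object_name,
            feature_name=feature_name,
            feature_vector=region_descriptor(crop_pixels),
        ))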
Additionally, at least one part of an image in the group of images may also be selected according to a predetermined rule as a feature of the temporary feature database. For example, the predetermined rule may be to select the face in the first, the first and the last, the first two, or a random image of the group of images; any other combination of rules is also possible.
Mode 3: Mixed Mode
When the images in a digital camera are transmitted to a PC, the identifying means displays the first image. Meanwhile, a group or groups of features relevant to the theme of the images to be identified are extracted from the user feature database and displayed around the image in the form of thumb pictures. The user selects, with the mouse, the features in the displayed image and draws a connection line between each selected feature and the corresponding thumb picture. The part for which the user has drawn a connection line is linked with the annotation information of the feature stored in the feature database, so as to optimize the temporary feature database. Since the temporary feature database generated this way stores the features of the current objects, while the temporary feature database in the first mode stores previous features of the objects, this mode is more accurate than the first mode; and since it is unnecessary for the user to input the annotation information, it is more convenient than the second mode.
Mode 4: Automatic Mode
At least one representative image is selected from the group of images for identification of objects. The representative image may be the first, the first and the last, the first two or a random image of the group of images. If the objects identified in the representative image are family members, then it is possible to guess that the theme of the group of images is family members. Thus, the features of the family members are automatically extracted from the user feature database to establish a temporary feature database for identification of the other images in the group of images.
Next, in step S120, objects in the image are searched for according to the features (including facial features and other auxiliary features) in the temporary feature database. There are many technical solutions in the prior art for searching for objects in images according to such features, e.g. US Patent No. 5,164,992 (invented by Matthew Turk and Alex P. Pentland, both of Cambridge, MA; assigned to the Massachusetts Institute of Technology, Cambridge, MA; filed on November 1, 1990), the disclosure of which is incorporated herein by reference. That patent discloses a method for comparing a face in digital images with a group of reference faces to determine whether the reference faces appear in the digital images. It should be noted that the object identification in the present invention is not limited to face identification. When people in the images are identified, the other relevant features that the user designates in the temporary feature database may also be used. For example, in a group of photographs having a theme, a person's facial expressions may vary, but his tie stays the same; the object remains identifiable by matching the relevant features of his tie and the distance between his tie and his face, even if the facial features do not completely match. Moreover, the relevant features that can be made use of may vary, and in particular the unchanging parts of the group of images may be selected by the user for auxiliary identification. In this way, when identifying an object, the match degree of the face features need not be high: since the auxiliary identification can be made in combination with other relevant features, the total identification rate remains relatively high.
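The fusion of a face match with the auxiliary features (the tie in the example above) can be pictured as a weighted combination of per-feature similarities. The weights, the cosine similarity measure and the acceptance threshold below are illustrative assumptions, not values taken from the patent; the face matching itself would be delegated to a method such as the one of US Patent No. 5,164,992.

    import math
    from typing import Dict, List, Optional

    def cosine_similarity(a: List[float], b: List[float]) -> float:
        """One possible similarity measure between two feature vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a)) or 1.0
        norm_b = math.sqrt(sum(x * x for x in b)) or 1.0
        return dot / (norm_a * norm_b)

    def matches_object(candidate_regions: Dict[str, List[float]],
                       temp_db: TemporaryFeatureDatabase,
                       object_name: str,
                       weights: Optional[Dict[str, float]] = None,
                       threshold: float = 0.7) -> bool:
        """Accept the object when the weighted combination of feature matches is high
        enough, so a mediocre face match can still succeed if, say, the tie matches well."""
        weights = weights or {"face": 0.6, "tie": 0.4}
        score = 0.0
        weight_sum = 0.0
        for record in temp_db.features_for(object_name):
            region = candidate_regions.get(record.feature_name)
            if region is None:
                continue  # this feature was not found in the candidate image
            w = weights.get(record.feature_name, 0.2)
            score += w * cosine_similarity(record.feature_vector, region)
            weight_sum += w
        return weight_sum > 0 and score / weight_sum >= threshold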
Due to the uncertainty of object identification, especially the uncertainty of face identification, the search performed in step S120 is likely to have one of three results. The first is that an object is uniquely determined in the image according to the features of the temporary feature database. The second is that there are possible objects matching the features in the image. The third is that no matching object is determined in the image. The processing of these three results is described separately below.
If an object is uniquely determined in the image according to the features of the temporary feature database, then in step S130 an annotation is generated for the image; for example, the name of the object is added to the metadata of the image. Other ways to annotate the image are to add the name of the object to the file name of the image, or to generate an index for the image.
Then, in step S140, the identified feature of the object is stored in the user feature database for future identification, so as to enlarge the set of features stored there. If the feature is already in the user feature database, it may be updated to make the features in the user feature database more accurate and closer to the present state.
It is determined in step S145 whether there are still images to be identified. If there are, the method goes back to step S120 and proceeds to identify the objects in the next image. If there are not, the identification ends.
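The unique-match branch (steps S130, S140 and S145) reduces to annotating the image and refreshing the user feature database. A minimal sketch, assuming the image metadata is a simple dictionary and the user feature database is keyed by subclass as in the earlier sketches; the metadata key is an assumption:

    from typing import Dict, List

    def annotate_image(metadata: Dict[str, list], object_name: str) -> None:
        """Step S130: record the identified name in the image metadata
        (adding it to the file name or to an index would work the same way)."""
        metadata.setdefault("people", []).append(object_name)

    def update_user_db(user_db: Dict[str, List[FeatureRecord]],
                       subclass: str,
                       record: FeatureRecord) -> None:
        """Step S140: store the identified feature for future identifications,
        replacing an older feature of the same kind for the same person if present."""
        entries = user_db.setdefault(subclass, [])
        for i, old in enumerate(entries):
            if old.object_name == record.object_name and old.feature_name == record.feature_name:
                entries[i] = record
                return
        entries.append(record)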
The result of identification in step S120 may also be that there are possible matches in the image, which includes the case that a feature possibly matches an object in the image and the case that a plurality of objects possibly match the feature. In a case like this, in step S150, an object is recommended to the user according to other information in the user feature database. For example, in step S120 the identifying means determines, according to the features of the temporary feature database, that the person in the image is likely to be Tom, or that the person is likely to be either Tom or Peter. In the same image, a person who has already been identified is Mary. According to the records of the user feature database, Mary often has pictures taken with Tom but seldom with Peter; the person whom the identifying means recommends to the user is therefore Tom.
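The Mary/Tom/Peter example amounts to ranking the candidates by how often they co-occur, in the records of the user feature database, with the people already identified in the image. One possible sketch, assuming such a co-occurrence count is available (the count itself is not specified by the patent):

    from typing import Dict, List, Tuple

    def recommend(candidates: List[str],
                  already_identified: List[str],
                  co_occurrence: Dict[Tuple[str, str], int]) -> str:
        """Step S150: recommend the candidate who most often appears in pictures
        together with the people already identified in this image."""
        def score(name: str) -> int:
            return sum(co_occurrence.get(tuple(sorted((name, other))), 0)
                       for other in already_identified)
        return max(candidates, key=score)

    # recommend(["Tom", "Peter"], ["Mary"], {("Mary", "Tom"): 12, ("Mary", "Peter"): 1})
    # returns "Tom", matching the example above.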
After that, in step S155, it is determined whether the user accepts the recommendation. If he does, the identifying means performs steps S130 - S150 to complete the annotation and updating.
If the user does not, step S160 is performed to display the image, and the user selects an object in step S170; see the description below for details.
The search result of step S120 may also be that there is no matching object. In this case, in step S160, the identifying means displays the image, and in step S170 the user selects an object in the image. The identifying means then performs steps S130 - S150 to complete the annotation and updating.
Additionally, a person in the user feature database may have a plurality of features; for example, features of the person from different periods of time are stored in the user feature database. In that case, when features are extracted from the user feature database to establish a temporary feature database, the feature of the person that is closest in time to when the image was captured may be extracted.
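Choosing the stored feature version closest in time to the capture date could look like the following sketch, assuming each stored feature carries a date stamp (an assumption made for this example):

    from datetime import date
    from typing import List, Tuple

    def closest_in_time(dated_features: List[Tuple[date, FeatureRecord]],
                        captured_on: date) -> FeatureRecord:
        """Pick the version of a person's feature recorded nearest to the capture date."""
        _, record = min(dated_features,
                        key=lambda item: abs((item[0] - captured_on).days))
        return record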
Fig. 2 is a block diagram of the apparatus for identification of a group of images according to one embodiment of the present invention. Image identifying means 200 is part of a computer, for identifying images stored therein or transmitted thereto. Image identifying means 200 may also be part of a digital camera or digital video camera, for identifying the images or video clips it captures.
Image identifying means 200 comprises a temporary feature database establishing unit 210 and object identifying unit 220. Temporary feature database establishing unit 210 is used for establishing a temporary feature database, containing features relevant to the theme of a group of images to be processed. Object identifying unit 220 matches the features in the temporary feature database with the objects in the images to identify an object in the image. Furthermore, temporary feature database establishing unit 210 may also include a feature extracting unit 212 and/or a feature selecting unit 214.
Image identifying means 200 may also further comprise a volatile memory 230, a non-volatile memory 240, an image annotating unit 250, and a user feature database updating unit 260.
Volatile memory 230 is used for storing the temporary feature database to enable faster access. Non-volatile memory 240 is used for storing the user feature database containing the features of previously identified objects.
Feature extracting unit 212 is used for extracting, according to a predetermined rule, the features relevant to the theme from the user feature database to establish a temporary feature database. The predetermined rule may select, for example, a person's recent features, that person's features of two years ago, or the features of his colleagues. The rule may also select a subclass of the user feature database, for example "family".
Feature extracting unit 212 may also extract the features from the user feature database according to user input. For example, all the features in the user feature database are displayed on the screen and the user clicks on some of them to select them. The features clicked by the user are extracted from the user feature database to establish a temporary feature database for identifying the objects in the group.
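A possible shape for such extraction is sketched below as an assumption, not as the actual implementation of feature extracting unit 212; the record layout ('name', 'groups', 'vector') and the helper names are hypothetical.

    # Hypothetical sketch of feature extraction: the temporary feature database
    # is the subset of user-feature-database records selected either by a
    # predetermined rule or by the names the user clicked on screen.
    def extract_by_rule(user_feature_db, rule):
        """rule: a predicate over one feature record, e.g. membership of 'family'."""
        return [record for record in user_feature_db if rule(record)]

    def extract_by_selection(user_feature_db, selected_names):
        """selected_names: the names the user clicked to select."""
        return [r for r in user_feature_db if r["name"] in selected_names]

    user_db = [
        {"name": "Tom",   "groups": ("classmates",),          "vector": [0.1, 0.9]},
        {"name": "Peter", "groups": ("classmates", "family"), "vector": [0.7, 0.2]},
    ]
    family_only = extract_by_rule(user_db, lambda r: "family" in r["groups"])
    print([r["name"] for r in family_only])   # -> ['Peter']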
Feature selecting unit 214 is for selecting, according to user input, at least one part of at least one image as a feature of the temporary feature database. For example, the user selects with the mouse the respective parts of one or more displayed images, e.g. the face of each person and other appearance features relevant to each individual in the images. The user then annotates the name and the appearance features of each person. The thumb pictures of the parts the user has selected, the feature vector of each part and the names are stored in the database to establish the temporary feature database.
Additionally, feature selecting unit 214 may also select, according to a predetermined rule, at least one image from the group of images and select at least one part thereof as a feature of the temporary feature database. For example, the predetermined rule may be to select the faces in the first image, the first and last images, the first two images or a random image of the group, or any other combination of such rules. A possible implementation of such image-picking rules is sketched below.
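The sketch below only illustrates the image-picking rules named above; the rule identifiers ('first', 'first_last', 'first_two', 'random') are assumptions introduced for this example.

    # Illustrative sketch: choose which images of the group are presented to the
    # user for feature selection, according to a predetermined rule.
    import random

    def pick_images(group, rule="first"):
        if not group:
            return []
        if rule == "first":
            return [group[0]]
        if rule == "first_last":
            return [group[0], group[-1]] if len(group) > 1 else [group[0]]
        if rule == "first_two":
            return group[:2]
        if rule == "random":
            return [random.choice(group)]
        raise ValueError(f"unknown rule: {rule}")

    print(pick_images(["img01.jpg", "img02.jpg", "img03.jpg"], "first_last"))
    # -> ['img01.jpg', 'img03.jpg']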
Alternatively, after feature extracting unit 212 extracts the features of the objects related to the theme from the user feature database, the objects are displayed around the image in the form of thumb pictures. Feature selecting unit 214 selects an object in the image, and the user draws a connection line between the selected object and the corresponding thumb picture. According to the user's selection, temporary feature database establishing unit 210 links the selected object with the corresponding feature in the temporary feature database, so as to further refine the temporary feature database.
Image annotating unit 250 is used for adding annotation to the image, according to user input, after object identifying unit 220 identifies an object, for example adding the name of the identified person to the metadata, naming the image, or establishing an index.
After image annotating unit 250 adds the annotation, if the identified object is new to the user feature database, user feature database updating unit 260 adds the new object and its feature to the user feature database. If what has been identified is a new feature of an object already in the user feature database, user feature database updating unit 260 updates the features of that object.
If object identifying unit 220 cannot match the objects in the image, the user may select the object in the image through feature selecting unit 214 and annotate the image through image annotating unit 250.
User feature database updating unit 260 then writes the identified object into the user feature database or updates the original features in the user feature database.
If object identifying unit 220 cannot uniquely match an object in the image, it recommends an object to the user according to the frequency of occurrence of the possibly matching objects in the user feature database or the relationship between the objects. The user then confirms the recommended object or selects an object himself.
It should be understood that the identifying means and some or all of its various units may also be realized using software.
The present invention may also be realized by an appropriately programmed computer. The computer program can identify objects in a group of images having a theme. The computer program product comprises: codes for establishing a temporary feature database containing the features of the objects related to the theme; and codes for searching for corresponding objects in the group of images according to the features in the temporary feature database. The computer program product can be stored on a storage carrier. The code parts of the program can be provided to a processor to form a machine, so that executing the codes on the processor generates means to perform the functions.
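A highly simplified end-to-end sketch of such a program is given below under stated assumptions: detect_objects() and similarity() stand in for whatever object detection and feature comparison the actual implementation uses, and the record layout and threshold are illustrative only.

    # Assumed sketch of the claimed program: (1) establish a temporary feature
    # database relevant to the theme, then (2) search every image of the group
    # for objects matching those features.
    def establish_temporary_db(user_feature_db, theme):
        return [r for r in user_feature_db if theme in r.get("themes", ())]

    def identify_group(images, temporary_db, detect_objects, similarity, threshold=0.8):
        results = {}
        for image in images:
            matches = []
            for region in detect_objects(image):        # candidate regions in the image
                scored = [(similarity(region, rec["vector"]), rec["name"])
                          for rec in temporary_db]
                if scored:
                    best_score, best_name = max(scored)
                    if best_score >= threshold:
                        matches.append(best_name)
            results[image] = matches
        return results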
Fig. 3 is a schematic view of the user interface 300 for identifying objects in images according to one embodiment of the present invention, in which an image 310 is displayed in the middle of the screen. In the center of the image is Peter, with his classmates Tom and Mary on his left and right respectively. The theme of the image is classmates.
Around image 310 there are a plurality of feature areas 320-326 for displaying the thumb pictures of the user's classmates extracted from a user feature database, such as thumb picture 330, and the thumb pictures of the features of each object acquired from the image, such as the thumb picture of Tom displayed in position 320, the thumb pictures of Peter displayed in positions 340 and 360, and the thumb picture of Mary displayed in position 350.
The user may select Tom's face in the image with the mouse, for example by drawing a selection area around the face in a rectangular, round, or any other arbitrary form. The selected part is dragged and dropped into feature area 320 to acquire Tom's thumb picture, and the thumb picture is stored in the temporary feature database. Meanwhile, the user may input explanation information to indicate that the thumb picture is one feature of Tom. Mary's thumb picture 350 may be acquired by performing the same operation.
The user may select Peter's face and insert this part into feature area 340, acquiring Peter's thumb picture and indicating through the explanation information that the thumb picture is one feature of Peter.
Another way to acquire the explanation information is for the user to select Peter's face and draw a connection line between the selected part and Peter's original thumb picture (feature area 330). The part is thus linked to the corresponding thumb picture and stored in the user feature database. The feature explanation information about thumb picture 330 in the user feature database, e.g. that it is one feature of Peter, can then be automatically linked to thumb picture 340, and, as a result, the user input is reduced.
Additionally, Peter's tie may also be selected, put into feature area 360, and stored in the temporary feature database. Thumb picture 360 is used to assist in identifying Peter in the other images of the group, for example together with thumb pictures 330 and 340. The identifying means identifies, according to the features of the identified objects (thumb pictures 320, 330, 340, 350 and 360), the objects in the other images having the same theme, so as to simplify identification of the other images. A sketch of how several features pointing to the same person might be combined is given below.
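The following is a minimal sketch, under the assumption that each stored feature of a person (face, tie, etc.) is scored against all detected regions of an image and the per-feature best scores are averaged; the weighting scheme is an illustrative assumption, not the disclosed method.

    # Illustrative sketch of combining several features that point to the same
    # person (e.g. a face thumb picture and a tie thumb picture).
    def combined_score(detected_regions, person_features, similarity):
        """person_features: list of feature vectors stored for one person."""
        best_per_feature = [
            max((similarity(region, feature) for region in detected_regions), default=0.0)
            for feature in person_features
        ]
        # average the best match of each stored feature; any region may satisfy it
        return sum(best_per_feature) / len(best_per_feature) if best_per_feature else 0.0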
In addition, when a part of an image is to be selected as a feature of an object, the identifying means may display a grid on the image, automatically dividing it into a plurality of areas. The area the user clicks is then stored as a feature in the temporary feature database, as sketched below.
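A possible mapping from a mouse click to a grid cell is sketched here purely as an assumption; the grid size and the returned bounding-box format are illustrative.

    # Assumed sketch of grid-assisted selection: the image is divided into an
    # n_rows x n_cols grid, a click is mapped to the cell it falls in, and that
    # cell's bounding box is stored as the selected feature region.
    def cell_for_click(width, height, n_rows, n_cols, x, y):
        cell_w, cell_h = width / n_cols, height / n_rows
        col = min(int(x // cell_w), n_cols - 1)
        row = min(int(y // cell_h), n_rows - 1)
        left, top = int(col * cell_w), int(row * cell_h)
        return (left, top, int(cell_w), int(cell_h))   # x, y, w, h of selected area

    print(cell_for_click(640, 480, 4, 4, 200, 100))    # -> (160, 0, 160, 120)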
It should be understood that the identified object may be a person or another thing in the images, such as a person's pet (a cat or dog, etc.). Furthermore, the present invention may be used not only for identification of objects in still images, but also for identification of objects in video images. As long as a group of images has one theme, the theme information may be used to simplify the object identification algorithm.
It should be understood that those skilled in the art could make all sorts of alternatives, modifications and variations on the basis of what has been described above. Any such alternatives, modifications and variations that fall within the spirit and scope of the claims are covered by the present invention.

Claims

CLAIMS:
1. A method for identifying objects in a group of images, wherein the group of images has a theme, comprising the steps of: a. establishing a temporary feature database, the temporary feature database containing features of the objects related to the theme; and b. searching for corresponding objects in the group of images according to the features in the temporary feature database.
2. A method as claimed in claim 1, wherein the group of images is part of an image collection.
3. A method as claimed in claim 1, wherein the temporary feature database contains at least two features directed to the same object.
4. A method as claimed in claim 3, wherein step b comprises the step of: searching for the object in the group of images according to the at least two features directed to the same object.
5. A method as claimed in claim 1, wherein step a comprises the step of: receiving an input from a user for selecting at least one part of an image in the group of images as the feature of the temporary feature database.
6. A method as claimed in claim 5, wherein the input is based on an image containing the most features in the group of images.
7. A method as claimed in claim 1, wherein step a comprises the step of: selecting at least one part of an image in the group of images as the feature of the temporary feature database according to a predetermined rule.
8. A method as claimed in claim 1, wherein step a comprises the step of: extracting a part of the user feature database containing a plurality of features to establish the temporary feature database according to a predetermined rule.
9. A method as claimed in claim 1, wherein step a comprises the step of: receiving an input from a user for extracting part of a user feature database containing a plurality of features to establish the temporary feature database.
10. A method as claimed in claim 1, further comprising the step of: c. updating a user feature database according to the temporary feature database.
11. A method as claimed in claim 1, further comprising the step of: d. annotating images according to the search result.
12. An apparatus for identifying objects in a group of images, wherein the group of images has a theme, comprising: temporary feature database establishing means for establishing a temporary feature database, the temporary feature database containing features of the objects related to the theme; and object identifying means for searching for corresponding objects in the group of images according to the features in the temporary feature database.
13. An apparatus as claimed in claim 12, wherein the temporary feature database contains at least two features directed to the same object.
14. An apparatus as claimed in claim 13, wherein the object identifying means searches for objects in the group of images according to the at least two features directed to the same object.
15. An apparatus as claimed in claim 12, wherein the temporary feature database establishing means comprises: feature selecting means for receiving an input from a user for selecting at least one part of an image in the group of images as a feature of the temporary feature database.
16. An apparatus as claimed in claim 12, wherein the temporary feature database establishing means comprises: feature selecting means for selecting at least one part of at least one image in the group of images as the features of the temporary feature database according to a predetermined rule.
17. An apparatus as claimed in claim 12, wherein the temporary feature database establishing means comprises: feature extracting means for extracting part of a user feature database containing a plurality of features to establish the temporary feature database according to a predetermined rule.
18. An apparatus as claimed in claim 12, wherein the temporary feature database establishing means comprises: feature extracting means for receiving an input from a user for extracting a part of the user feature database containing a plurality of features to establish the temporary feature database.
19. An apparatus as claimed in claim 12, further comprising: user feature database updating means for updating a user feature database according to the temporary feature database.
20. An apparatus as claimed in claim 12, further comprising: image annotating means for annotating images according to the search result.
21. A computer program product for identifying objects in a group of images, wherein the group of images has a theme, comprising: codes for establishing a temporary feature database, the temporary feature database containing features of the objects related to the theme; and codes for searching for corresponding objects in the group of images according to the features in the temporary feature database.
22. A storage carrier containing the computer program product as claimed in claim 21.
PCT/IB2006/051553 2005-05-27 2006-05-17 Images identification method and apparatus WO2006126141A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN200510073947 2005-05-27
CN200510073947.6 2005-05-27

Publications (1)

Publication Number Publication Date
WO2006126141A1 true WO2006126141A1 (en) 2006-11-30

Family

ID=36845418

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2006/051553 WO2006126141A1 (en) 2005-05-27 2006-05-17 Images identification method and apparatus

Country Status (1)

Country Link
WO (1) WO2006126141A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050055344A1 (en) * 2000-10-30 2005-03-10 Microsoft Corporation Image retrieval systems and methods with semantic and feature based relevance feedback
US20030128877A1 (en) * 2002-01-09 2003-07-10 Eastman Kodak Company Method and system for processing images for themed imaging services
US20040008906A1 (en) * 2002-07-10 2004-01-15 Webb Steven L. File management of digital images using the names of people identified in the images
WO2004081814A1 (en) * 2003-03-14 2004-09-23 Eastman Kodak Company Method for the automatic identification of entities in a digital image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "Similarity-based object retrieval", 4 January 2004 (2004-01-04), pages 1 - 8, XP007901033, Retrieved from the Internet <URL:http://www.llnl.gov/CASC/sapphire/sbor/sbor.html> *
VIOLA P ET AL: "Robust Real-Time Object Detection", WORKSHOP ON STATISTICAL AND COMPUTATIONAL THEORIES OF VISION SCTV, XX, XX, 13 July 2001 (2001-07-13), pages 1 - 26, XP002391053 *
ZHANG L. ET AL: "Face Annotation for Family Photo Management", INTERNATIONAL JOURNAL OF IMAGE AND GRAPHICS, WORLD SCIENTIFIC, vol. 3, no. 1, January 2003 (2003-01-01), pages 1 - 14, XP002258648 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008152437A1 (en) * 2007-06-15 2008-12-18 Sony Ericsson Mobile Communications Ab Digital camera and method of storing image data with person related metadata
JP2013239004A (en) * 2012-05-15 2013-11-28 Sony Corp Information processing apparatus, information processing method, computer program, and image display apparatus
WO2019129444A1 (en) * 2017-12-25 2019-07-04 Arcelik Anonim Sirketi A system and method for face recognition

Similar Documents

Publication Publication Date Title
US11636149B1 (en) Method and apparatus for managing digital files
CN103412951B (en) Human connection association analysis based on portrait photographs management System and method for
CN101425133B (en) Human image retrieval system
US7734654B2 (en) Method and system for linking digital pictures to electronic documents
KR101384931B1 (en) Method, apparatus or system for image processing
CN103902656B (en) Media object metadata association and ranking
KR100641791B1 (en) Tagging Method and System for Digital Data
JP4367355B2 (en) PHOTO IMAGE SEARCH DEVICE, PHOTO IMAGE SEARCH METHOD, RECORDING MEDIUM, AND PROGRAM
US20060044635A1 (en) Image file processing method and related technique thereof
EP2549390A1 (en) Data processing device and data processing method
JP2019508826A (en) Masking limited access control system
US8725718B2 (en) Content management apparatus, content management method, content management program, and integrated circuit
JP4643735B1 (en) Electronic device and video processing method
WO2008014408A1 (en) Method and system for displaying multimedia content
JP2012504806A (en) Interactive image selection method
CN105243084A (en) Photographed image file storage method and system and photographed image file search method and system
WO2017067485A1 (en) Picture management method and device, and terminal
Li et al. Automatic summarization for personal digital photos
Adams et al. Extraction of social context and application to personal multimedia exploration
JP2015141530A (en) information processing apparatus, score calculation method, program, and system
JP5924114B2 (en) Information processing apparatus, information processing method, computer program, and image display apparatus
CN108268644A (en) Video searching method, server and video searching system
JP2003099434A (en) Electronic album device
WO2006126141A1 (en) Images identification method and apparatus
Monaghan et al. Automating photo annotation using services and ontologies

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase (Ref country code: DE)
WWW Wipo information: withdrawn in national office (Country of ref document: DE)
NENP Non-entry into the national phase (Ref country code: RU)
WWW Wipo information: withdrawn in national office (Country of ref document: RU)
122 Ep: pct application non-entry in european phase (Ref document number: 06744958; Country of ref document: EP; Kind code of ref document: A1)