WO2007119186A2 - Object recognition - Google Patents


Info

Publication number
WO2007119186A2
WO2007119186A2 (PCT/IB2007/051170)
Authority
WO
WIPO (PCT)
Prior art keywords
determining
positions
features
positional relation
feature
Prior art date
Application number
PCT/IB2007/051170
Other languages
French (fr)
Other versions
WO2007119186A3 (en)
Inventor
Richard P. Kleihorst
Renaud S. Deysine
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V.
Publication of WO2007119186A2
Publication of WO2007119186A3

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features

Definitions

  • the invention relates to recognition of objects from an image of at least the object, and in particular to object recognition of objects such as walls, doors or other housing objects.
  • Object recognition as described herein can be used for visual Simultaneous Localization and Mapping (SLAM).
  • US 2004168148 describes methods and apparatus that use a visual sensor and dead reckoning sensors to process Simultaneous Localization and Mapping (SLAM). These techniques can be used in robot navigation.
  • Such visual techniques can be used to autonomously generate and update a map.
  • An object in an image can be recognized by finding local object features that typically belong to the object.
  • all corners of an object can be detected from the image.
  • the corners can each be associated with additional information, characteristic or description such as a type of the corner, a gradient around the corner or intensity around or within the corner. All characteristics generally describe a type and/or a sub-type of the object feature.
  • a method of recognizing an object from an image of at least the object, the method comprising detecting a plurality of object features, determining a plurality of positions within the image of the object features relative to at least one axis of the image, and comparing the positions with reference positions of a reference structure of a reference object.
  • a method is provided for recognizing the object, e.g. when a similarity between the determined positions and the reference positions is beyond a predetermined threshold.
  • a possible advantage that may be obtained by determining a plurality of positions within the image of the object features relative to at least one axis is that structural information at least partly describing a structure of the object is provided and used for the comparison and recognition of the object.
  • object recognition with a relatively low ambiguity may be provided.
  • when the first and second axes are substantially orthogonally arranged and substantially extend in one plane, or when first and second directions of the first and second axes are arranged along a substantially horizontal and a substantially vertical edge associated with the image, a possible advantage is that methods of providing a position of the object features relative to the image and/or each other are provided.
  • determining a plurality of positions within the image of the object features comprises projecting each of the object features on the at least one axis of the first and second axis, a possible advantage is that a method found to be suitable for determining the positions of the features or a positional relation between object features is provided.
  • the method further comprises characterizing each object feature with at least one type characteristic associated with the object feature, the type characteristic preferably at least indicating a sub-characteristic of the object such as a sub-shape of the object, and determining a positional relation between the object features from the plurality of positions and from the type characteristic associated with the plurality of object features, the positional relation at least partly providing structural information of the object, and the method step of comparing the positions comprises comparing the positional relation with a reference positional relation of a reference structure of a reference object, a possible advantage is that structural information at least partly describing a structure of the object is provided which can be used for the comparison and recognition of the object.
  • An object may be recognized as the same object as a reference object, e.g. when a similarity between the determined positional relation and the reference positional relation is beyond a predetermined threshold.
  • the type characteristic may be a description of the feature such as a type of the feature, such as 'corner', and/or a subtype of the type of feature such as type of corner, e.g. 'upper left corner', gradient around corner, intensity around and in the corner, such as e.g. '100'.
  • the object features may be any other suitable local object features belonging to the object.
  • a possible advantage is that a method suitable for providing a positional relation is provided.
  • a possible advantage is that a method is provided which requires a relatively small amount of data for description of the object yet is effective for recognition.
  • when determining a positional relation between the object features comprises determining a path provided by and comprising the feature relationship and the coordinate relationship, a possible advantage is that an effective recognition method is provided, which only requires a relatively small amount of data for description of the object and/or for description of a reference object.
  • the type characteristic preferably at least indicating a sub-characteristic of the object such as a sub-shape of the object
  • the method step of determining the plurality of positions comprises determining at least one coordinate of each of the object features, and determining a positional relation between the object features from the plurality of positions and from the type characteristic associated with the plurality of object features by determining an order of appearance of the object features on the first and/or second axis, the positional relation at least partly providing structural information of the object, and comparing the order of appearance with a reference order of appearance of a reference object
  • determining the plurality of positions comprises determining at least one coordinate of each of the object features
  • comparing the positions comprises determining a ratio between at least two coordinates and comparing the ratio with a reference ratio of a reference structure of a reference object
  • a possible advantage is that objects which are shaped so similarly that they are otherwise not recognizable as distinct can, by means of the ratio, be recognized as distinct.
  • a further possible advantage is that a ratio may be provided without knowing the actual position within the image of the object features but e.g. only a relative position between the object features, such as first, second, third object feature on an axis. This may require less processing power than using the actual positions for the recognition, hereby providing a better solution.
  • a possible advantage is that a precise method of distinguishing similarly shaped objects is provided.
  • FIG. 1 shows a conventional method of recognizing an object.
  • FIG. 2 shows a method of recognizing an object in accordance with an embodiment of the present invention, e.g. using an order of appearance of object features.
  • FIG. 3 shows a method of recognizing an object in accordance with an embodiment of the present invention, e.g. using a relative position of object features.
  • FIG. 4 shows a method of recognizing an object in accordance with an embodiment of the present invention, e.g. using a relative ratio between object features.
  • FIG. 5 shows a method of recognizing an object in accordance with an embodiment of the present invention, e.g. using the positions of object features.
  • FIG. 6 shows an apparatus in accordance with an embodiment of the present invention.
  • In FIG. 1 a first image 102 with a first object 104 and a second object 106 is shown.
  • the first object 104 is a rectangle that has four corners. Each of the four corners is a detectable object feature 108, 110, 112 and 114 of the first object 104 in the first image 102.
  • the second object 106 in the first image 102 has four object features 116, 118, 120 and 122.
  • the third object 136 has detectable object features 126, 128, 130 and 132 and the fourth object 146 in the second image 134 has four object features 138, 140, 142 and 144.
  • the detected features 114 and 122 in the first image are both recognized as the same object feature, e.g. with a first type characteristic. The type characteristics of the first object 104 and the second object 106 include a 'lower left corner' and a 'lower right corner', symbolized in the figure below the first and second image as corners 148 and 150.
  • furthermore the first object 104 and the second object 106 are both recognized as the same objects, each comprising an 'upper left corner' 108, 116, an 'upper right corner' 110, 118, a 'lower left corner' 114, 122 and a 'lower right corner' 112, 120.
  • since the third object 136 and the fourth object 146 are also recognized as two objects, each with an 'upper left corner' 138, 126, an 'upper right corner' 140, 128, a 'lower left corner' 144, 132 and a 'lower right corner' 142, 130, the images 102 and 134 are recognized as the same images.
  • a description 124 of both the first image 102 and the second image 134 is therefore provided.
  • the description 124 may be described as two 'upper left corners', two 'upper right corners', two 'lower left corners' and two 'lower right corners'.
  • the images in the above and the following examples are images of parts of a house.
  • the objects may be objects such as walls, doors or the like objects in the house.
  • the object features are described and shown as corners, but the object features may be all types of detectable local features that typically belong to the object.
  • the referral to a 'corner' or a referral to e.g. the 'upper left corner' of an object is a referral to a type characteristic of the object.
  • the 'corner' or the 'upper left corner' is a sub-characteristic of the object and both characteristics refer to a sub-shape, 'a corner', of the object.
  • the first image 102 and the second image 134 are shown, each image being associated with a first axis 204, an X-axis, and a second axis 202, a Y-axis.
  • the first and second axes are orthogonally arranged and are shown to extend in the plane provided by the image 102.
  • the first and second axes are arranged along a substantially horizontal and a substantially vertical edge of the image 102.
  • the object features in the image 102 are provided with a position relative to the first and second axis 204 and 202.
  • the lower left corner object feature 114 of the first object 104 is projected, shown with a dashed line 208, on the second axis 202 and provided with a coordinate Y1 in the direction of the second axis and projected, shown with a dashed line 210, on the first axis and provided with a coordinate X1 on the first axis.
  • the position provided by the two coordinates (X1, Y1) of the object feature 114 may be an actual position of the object feature 114 within the image 102.
  • the denotation Y1 is used because the coordinate is one of the two features 114, 112 which have a lowest appearance on the second axis 202 relative to the other object features detected in the image 102.
  • a positional relation 206 (an order of appearance), relative to the Y-axis, of the feature 114 is that the feature is one of the two features 114, 112 which appear first on the second axis 202, relative to the other features of the image or of the object.
  • An object feature that has a lowest coordinate on an axis may in accordance with the invention be referred to as an object feature which has a first appearance on the axis.
  • An order of appearance of the object features of the objects 104 and 106, relative to the second axis 202, is then 112 and 114, then 120 and 122, then 116 and 118, and finally 108 and 110.
  • the order of appearance of the object features may alternatively or additionally be provided relative to the projections of the object features on the first axis 204.
  • the object feature 122 is provided with a position (X3, Y2) within the first image 102.
  • the first object 104 and the second object 106 are detected and recognized as being different objects due to the object features of each object having different positions within the image 102.
  • the first image 102 can now be recognized as being different from the second image 134.
  • a description of the image in accordance with the present invention may be stored in a database as a reference image, possibly with a number of other reference images of one or more other objects.
  • when the image 102 is seen or detected again, it is recognized to be the same image by comparing the detected image with the reference images.
  • a description of a reference structure of a reference object using positions or a positional relation as described herein will, in addition to providing a lower ambiguity, also require a relatively shorter descriptor compared to conventional techniques.
  • an alternative solution of describing the position in accordance with the present invention, may be provided.
  • the alternative solution may use a position of an object feature described e.g. as a vector, having a certain length and a certain angle, the length being measured from an origin of a coordinate system and the angle being relative to e.g. the first or second axis of the coordinate system.
  • when a depth of the features, which is e.g. equal to a distance from an object detection means to an object feature, is available, one of the first and second axes may extend in the direction of the depth or, alternatively, the method and system according to the invention may be provided for three dimensions.
  • FIG. 3 illustrates a method in accordance with an embodiment of the present invention.
  • Three different objects 302, 304, 306 are shown. Each object may be provided within different images (not shown) or within the same image (not shown).
  • the objects 302, 304, 306 are characterized with a type characteristic 308, in this example a number which indicates an inside value. In this example the type characteristic is 100 within the objects and 0 outside the objects.
  • the description of object 302, 304 and 306 is shown as a positional relation 324, 326 and 328, respectively.
  • a feature relationship 310 is shown with a circle.
  • the circle means that the coordinates on the first and second axis of this feature belong to the 'same object feature', just projected on a different axis.
  • This feature relationship 310 may be provided by determining that these two coordinates are coordinates of an object feature with the same type description.
  • a coordinate relationship 312 is shown with a '+'.
  • the sign '+' means that these coordinates are close to or 'closest to' each other on the axis among the object features.
  • the object features 314 and 322 are not only 'close to beyond a predetermined threshold' or 'closest to' each other; for simplicity, in the example the object features as projected on the axis have an equal projection on the axis.
  • a positional relation of the object features may then be described as: 'Upper left corner with additional type description 100' + 'lower left corner with additional type description' as shown at 324 in FIG. 3. Still on the X-axis the object feature 315 is closest to the object feature 316 and finally the object feature 318 is closest to the object feature 320. A description is similarly provided for the projection of the object features on the Y-axis.
  • the objects 302, 304 and 306 can then be described by a path. Such a path 330 is shown for the object 306. The path is provided by following the 'closest to' and 'same object feature' path.
  • the path 330 runs along the object feature 'upper left corner' 332 which on the X-axis is 'closest to' 340 the 'lower left corner' 338.
  • the 'lower left corner' is the same 342 object feature which is closest to the 'lower right corner' 336.
  • the 'lower right corner' is the same object feature as the one which, on the X-axis, is closest to the 'upper right corner' 334.
  • the path is completed until it constitutes at least one closed loop path, such as the path 330. In this manner a description of an object by a path as described may be stored for different objects.
  • the detected positional relation of object features comprised in the object, which positional relation is e.g. described by a path 330, is compared to a reference structure of a reference object or compared to a structure of another object within the image; hereby an effect may be that an object is recognized with an increased precision compared to that of the known methods.
  • the method may be size invariant.
  • the method may be invariant to a zoom factor of an object feature detection means such as a camera.
  • In FIG. 4 two similarly shaped objects 402 and 404 are shown in an image 406.
  • the image 406 is associated with first and second axis as described for FIG. 2 and FIG. 3.
  • the method as described in FIG. 3 may not be able to distinguish between the objects 402 and 404 only by the relative position of the object features.
  • a search for a more precise match may be provided by using a relative ratio between object features.
  • the relative ratio between object features may then be compared to the reference objects and the reference structure that matches the object is recognized as the object.
  • the reference object with reference ratio(s) which matches one of the objects 402 e.g. beyond a certain similarity threshold is chosen.
  • the ratio for the object 402 may be described as the ratio between relative distances between the positions of object features as projected on the first and second axis.
  • an increased number of ratios may be provided and used for recognition of the object.
  • In FIG. 5, positions of the object features of the objects 502 and 504 on one axis are used to recognize the objects as different objects.
  • the positions of the object features of object 502 as projected on the second axis are shown as positions 506, 508 and 510 on the second axis.
  • positions of the object features of object 504 as projected on the second axis are shown as positions 516, 514 and 512 on the second axis.
  • the object 502 can be recognized as being different from the reference object in that the position 508 of the object 502 is different from the position 514 of the reference object.
  • a ratio between the positions of object features as projected on the second axis can be used to recognize that the object 502 is not the same as the object 504.
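The ratio comparison of FIGS. 4 and 5 can be sketched as follows; the normalization by the first gap and the coordinate values are illustrative assumptions, not taken from the patent:

```python
def projection_ratios(coords):
    """Ratios between consecutive gaps of the sorted projected coordinates."""
    cs = sorted(coords)
    gaps = [b - a for a, b in zip(cs, cs[1:])]
    return [g / gaps[0] for g in gaps]  # normalized by the first gap

# Features of two similarly shaped objects project to different positions
# on the second axis (compare positions 506, 508, 510 with 512, 514, 516):
obj_a = [1, 2, 6]   # gaps 1 and 4
obj_b = [1, 3, 6]   # gaps 2 and 3
assert projection_ratios(obj_a) != projection_ratios(obj_b)
# The ratios are unchanged under uniform scaling of the coordinates, so a
# zoom factor of the camera does not affect the comparison:
assert projection_ratios([2, 4, 12]) == projection_ratios(obj_a)
```

Because only relative gaps enter the ratios, the comparison needs no knowledge of the absolute positions of the features within the image.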
  • FIG. 6 shows an example of an apparatus 602 in accordance with an embodiment of the present invention.
  • the apparatus in this example is a robot, such as a robot used for home robotics; the invention may also be used for a mobile phone, for an intelligent camera or similar intelligent applications used for recognizing objects.
  • the apparatus 602 comprises detecting means 606 for detecting a plurality of object features.
  • the detecting means may be one or more cameras.
  • the apparatus furthermore comprises axis associating means 604 for associating the image with at least one axis of a first and a second axis.
  • the axis associating means 604 may be a control logic adapted to associate each edge of the image with a virtual axis.
  • the apparatus furthermore comprises first determining means 610.
  • the first determining means 610 may be a control logic adapted to determine the positions. The positions may be determined by projecting the detected features on the at least one axis of the first and the second axis.
  • the apparatus comprises first comparing means 614 for comparing the positions with reference positions of a reference structure of a reference object.
  • the comparing means may also be provided as the control logic.
  • the apparatus may furthermore comprise characterizing means 608, shown with a dashed line box indicating these optional means, for characterizing each object feature with at least one type characteristic associated with the object feature.
  • the characterizing means 608 may be a control logic adapted to characterize the detected object features with the sub-characteristic of the object such as the sub-shape of the object.
  • the apparatus may furthermore comprise second determining means 612, shown with a dashed line box.
  • These determining means 612 may be a control logic to determine the positional relation of the object features.
  • the apparatus may comprise second comparing means 618, shown with a dashed line box, for comparing the positional relation with a reference positional relation of a reference structure of a reference object.
  • the second comparing means may also be provided in the control logic.
  • the first and second means may be provided by one processor unit or control logic 616.
  • the apparatus 602 may furthermore comprise data storage 620 with e.g. a database for storing a description of reference objects.
  • the control logic is adapted to be in connection, e.g. a wireless connection, with the data storage.
  • the reference objects stored in the database are preferably described in accordance with the present invention.
  • the description of reference objects in accordance with the present invention comprises a description of a plurality of reference positions within an image of the reference object features relative to at least one axis, or the description comprises a description of the reference positional relation between the reference object features from the plurality of reference positions, or the description comprises a description of the reference positional relation between the reference object features from the plurality of reference positions and from the type characteristic associated with the plurality of reference object features.
  • the database is searched in order to recognize an object in an image of the at least one object by comparing the object in the image with the reference objects stored in the database. Recognizing the object by comparing the detected object with the reference object may be provided by the determination means and comparing means and an object is for example recognized as the reference object when a similarity is beyond a predetermined level.
  • the invention may be used to recognize objects such as walls, doors or the like house objects.
  • the recognition of the objects may be used for navigation in an environment with the objects.
  • the invention is implemented on a general purpose computer with a processor and a working memory.
  • Software containing various modules for executing the various functions described above, is loaded into the working memory and executed by the processor.
  • the computer has a local storage, e.g. a hard disk, for storing the software and storing the description of the reference objects.
  • alternatively, the reference objects are stored at a remote location and the computer is provided with network access to the reference objects.
  • the computer normally includes a display and one or more input devices, such as a keyboard for interaction with a user.
  • the computer normally also comprises one or more output devices, such as a graphical display or a loudspeaker for interaction with the user of the apparatus in accordance with the invention.
  • Part of the software may be realized in dedicated hardware.
  • a device can be made completely containing dedicated hardware for realizing the various functions.
  • the computer is at least connected to detection means such as a camera.
  • the computer or control logic and detection means as described are provided in connection with e.g. a robot.

Abstract

The invention relates to recognition of objects from an image of at least the object, and in particular to object recognition of objects such as walls, doors or other housing objects. The invention discloses recognizing an object 104, 106, 136, 146, 302, 304, 306, 404, 402, 502, 504 from an image 102, 134, 406 of at least the object, comprising detecting a plurality of object features, such as 108, 110, 112, 114, 116, 118, 120, 122, 314, 315, 316, 318, 320, 322, determining a plurality of positions within the image of the object features relative to at least one axis, and comparing the positions with reference positions of a reference structure of a reference object, e.g. in order to decrease an ambiguity with which objects can be recognized.

Description

Object recognition
FIELD OF THE INVENTION
The invention relates to recognition of objects from an image of at least the object, and in particular to object recognition of objects such as walls, doors or other housing objects.
BACKGROUND OF THE INVENTION
Object recognition as described herein can be used for visual Simultaneous Localization and Mapping (SLAM).
US 2004168148 describes methods and apparatus that use a visual sensor and dead reckoning sensors to process Simultaneous Localization and Mapping (SLAM). These techniques can be used in robot navigation. Advantageously, such visual techniques can be used to autonomously generate and update a map. Unlike with laser rangefinders, the visual techniques are economically practical in a wide range of applications and can be used in relatively dynamic environments, such as environments in which people move. An object in an image can be recognized by finding local object features that typically belong to the object. As an example, all corners of an object can be detected from the image. The corners can each be associated with additional information, characteristic or description such as a type of the corner, a gradient around the corner or intensity around or within the corner. All characteristics generally describe a type and/or a sub-type of the object feature. Although having a number of advantages, recognition based on such characteristics may lead to ambiguity and therefore reduce the precision with which the objects are recognized.
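The ambiguity of such purely characteristic-based descriptions can be illustrated with a minimal sketch; the descriptor fields and values here are illustrative, not taken from the patent:

```python
# Hypothetical sketch of the conventional approach described above: each
# detected corner carries only a type characteristic (corner type and an
# intensity value), with no positional information.
from collections import Counter

def describe_conventional(corners):
    """Summarize an image as a multiset of corner descriptors."""
    return Counter((c["type"], c["intensity"]) for c in corners)

# Two differently arranged objects can produce identical corner multisets,
# so this description cannot tell the images apart:
image_a = [
    {"type": "upper_left",  "intensity": 100},
    {"type": "upper_right", "intensity": 100},
    {"type": "lower_left",  "intensity": 100},
    {"type": "lower_right", "intensity": 100},
]
image_b = list(reversed(image_a))  # same corners, different arrangement
assert describe_conventional(image_a) == describe_conventional(image_b)
```

Because the multisets are equal, a database lookup on descriptors alone matches both arrangements, which is exactly the ambiguity the positional methods of the invention are meant to reduce.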
When the same corner is seen or detected in another image, the same corners with the same description are found and, by searching through a database with corners, the object can be recognized and identified as being in the image. Although such known conventional object recognition techniques have a number of advantages, the inventor of the present invention has appreciated that an improved recognition of an object from an image is of benefit, and has in consequence devised the present invention.
SUMMARY OF THE INVENTION
The present invention seeks to provide an improved recognition of an object from an image. The invention is defined by the independent claims. The dependent claims relate to advantageous embodiments. Accordingly there is provided, in a first aspect, a method of recognizing an object from an image of at least the object, the method comprising detecting a plurality of object features, determining a plurality of positions within the image of the object features relative to at least one axis of the image, and comparing the positions with reference positions of a reference structure of a reference object. Thus a method is provided for recognizing the object, e.g. when a similarity between the determined positions and the reference positions is beyond a predetermined threshold. By determining a plurality of positions within the image of the object features a possible advantage is that positions may be relative to each other.
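Such a threshold-based comparison can be sketched as follows; the particular similarity measure and the feature correspondence by list index are illustrative assumptions, not prescribed by the claims:

```python
import math

def position_similarity(positions, reference):
    """Similarity between corresponding feature positions, in (0, 1].

    `positions` and `reference` are equal-length lists of (x, y) tuples;
    correspondence by list index is assumed for simplicity.
    """
    dists = [math.dist(p, r) for p, r in zip(positions, reference)]
    return 1.0 / (1.0 + sum(dists) / len(dists))

def recognize(positions, reference, threshold=0.9):
    """Recognize the object when the similarity is beyond the threshold."""
    return position_similarity(positions, reference) > threshold

ref = [(1, 1), (1, 4), (5, 4), (5, 1)]
assert recognize(ref, ref)  # identical positions are recognized
assert not recognize([(0, 0), (0, 9), (9, 9), (9, 0)], ref)
```

The threshold value trades recall against precision and would be tuned for the application at hand.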
A possible advantage that may be obtained by determining a plurality of positions within the image of the object features relative to at least one axis is that structural information at least partly describing a structure of the object is provided and used for the comparison and recognition of the object. By providing the structural information a possible advantage may be that object recognition with a relatively low ambiguity may be provided. In an example where the first and second axes are substantially orthogonally arranged and substantially extend in one plane, or when first and second directions of the first and second axes are arranged along a substantially horizontal and a substantially vertical edge associated with the image, a possible advantage is that methods of providing a position of the object features relative to the image and/or each other are provided.
When determining a plurality of positions within the image of the object features comprises projecting each of the object features on the at least one axis of the first and second axis, a possible advantage is that a method found to be suitable for determining the positions of the features or a positional relation between object features is provided.
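A minimal sketch of projecting object features on the image axes and reading off their order of appearance (the feature names and coordinates are illustrative):

```python
def project(features, axis):
    """Coordinates of the features projected on one image axis (0 = X, 1 = Y)."""
    return [f["pos"][axis] for f in features]

def order_of_appearance(features, axis):
    """Feature names sorted by their projected coordinate on the axis."""
    return [f["name"] for f in sorted(features, key=lambda f: f["pos"][axis])]

rect = [
    {"name": "lower_left",  "pos": (1, 1)},
    {"name": "lower_right", "pos": (5, 1)},
    {"name": "upper_left",  "pos": (1, 4)},
    {"name": "upper_right", "pos": (5, 4)},
]
# On the Y axis the two lower corners appear first, then the two upper ones.
assert order_of_appearance(rect, axis=1) == [
    "lower_left", "lower_right", "upper_left", "upper_right"
]
```

Comparing such an order of appearance with the stored order of a reference object is one way to realize the positional relation described in the text.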
When the method further comprises characterizing each object feature with at least one type characteristic associated with the object feature, the type characteristic preferably at least indicating a sub-characteristic of the object such as a sub-shape of the object, and determining a positional relation between the object features from the plurality of positions and from the type characteristic associated with the plurality of object features, the positional relation at least partly providing structural information of the object, and the method step of comparing the positions comprises comparing the positional relation with a reference positional relation of a reference structure of a reference object, a possible advantage is that structural information at least partly describing a structure of the object is provided which can be used for the comparison and recognition of the object.
By providing the structural information a possible advantage may be that object recognition with a relatively low ambiguity may be provided. An object may be recognized as the same object as a reference object, e.g. when a similarity between the determined positional relation and the reference positional relation is beyond a predetermined threshold.
The type characteristic may be a description of the feature such as a type of the feature, such as 'corner', and/or a subtype of the type of feature such as type of corner, e.g. 'upper left corner', gradient around corner, intensity around and in the corner, such as e.g. '100'. The object features may be any other suitable object local object features belonging to the object.
When determining two coordinates of each of the object features, and characterizing each object feature with at least one type characteristic associated with the object feature, and determining a positional relation between the object features from the plurality of positions and from the type characteristic by determining a feature relationship, i.e. by determining which of the coordinates on the first and second axes belong to the same object feature, and comparing the positional relation with a reference positional relation of a reference structure of a reference object, a possible advantage is that a method suitable for providing a positional relation is provided.
When determining two coordinates of each of the object features, and characterizing each object feature with at least one type characteristic associated with the object feature, and determining a positional relation between the object features from the plurality of positions and from the type characteristic by determining a coordinate relationship, i.e. by determining which of the coordinates in the first direction are close to each other beyond a predetermined threshold and which of the coordinates in the second direction are close to each other beyond a predetermined threshold, the positional relation at least partly providing structural information of the object, and comparing the positional relation with a reference positional relation of a reference structure of a reference object, a possible advantage is that a method is provided which requires a relatively low amount of data for the description of the object yet is effective for recognition.
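As a minimal sketch of such a coordinate relationship, coordinates on one axis can be grouped by a closeness threshold (the function name, coordinate values and threshold below are illustrative assumptions, not part of the disclosure):

```python
def close_pairs(coords, threshold):
    """Return index pairs (i, j) whose coordinates lie within `threshold`
    of each other on the axis, i.e. are 'close to each other'."""
    return [(i, j)
            for i in range(len(coords))
            for j in range(i + 1, len(coords))
            if abs(coords[i] - coords[j]) <= threshold]

# Hypothetical projections of four object features on the first axis.
x_coords = [0.0, 0.4, 10.0, 10.3]
assert close_pairs(x_coords, 0.5) == [(0, 1), (2, 3)]
```

Only the resulting pairs, not the absolute coordinates, need to be stored, which is one way the description of an object can remain compact.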
When determining a positional relation between the object features comprises determining a path provided by and comprising the feature relationship and the coordinate relationship, a possible advantage is that an effective recognition method is provided, which only requires a relatively low amount of data for the description of the object and/or of a reference object.
When characterizing each object feature with at least one type characteristic associated with the object feature, the type characteristic preferably at least indicating a sub-characteristic of the object such as a sub-shape of the object, and wherein the method step of determining the plurality of positions comprises determining at least one coordinate of each of the object features, and determining a positional relation between the object features from the plurality of positions and from the type characteristic associated with the plurality of object features by determining an order of appearance of the object features on the first and/or second axis, the positional relation at least partly providing structural information of the object, and comparing the order of appearance with a reference order of appearance of a reference object, a possible advantage is that an effective recognition method is provided, which only requires a relatively low amount of data for the description of the object.
When determining the plurality of positions comprises determining at least one coordinate of each of the object features, and when comparing the positions comprises determining a ratio between at least two coordinates and comparing the ratio with a reference ratio of a reference structure of a reference object, a possible advantage is that objects which are similarly shaped, and otherwise not recognizable as being distinct, can with the ratio be recognized as distinct. A further possible advantage is that a ratio may be provided without knowing the actual positions within the image of the object features but e.g. only a relative position between the object features, such as first, second and third object feature on an axis. This may require less processing power than using the actual positions for the recognition, hereby providing a better solution, e.g. for being able to simultaneously localize and map a position of a robot within an area based on the recognition of the objects. In an example of determining two coordinates of each of the object features, determining a ratio between at least two object feature coordinates and comparing the ratio with a reference ratio of a reference structure of a reference object, a possible advantage is that a precise method of distinguishing similarly shaped objects is provided.
In general the various aspects and advantages of the invention may be combined and coupled in any way possible within the scope of the invention.
These and other aspects, features and/or advantages of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention will be described, by way of example only, with reference to the drawings, in which:
FIG. 1 shows a conventional method of recognizing an object.
FIG. 2 shows a method of recognizing an object in accordance with an embodiment of the present invention, e.g. using an order of appearance of object features.
FIG. 3 shows a method of recognizing an object in accordance with an embodiment of the present invention, e.g. using a relative position of object features.
FIG. 4 shows a method of recognizing an object in accordance with an embodiment of the present invention, e.g. using a relative ratio between object features.
FIG. 5 shows a method of recognizing an object in accordance with an embodiment of the present invention, e.g. using the positions of object features.
FIG. 6 shows an apparatus in accordance with an embodiment of the present invention.
DESCRIPTION OF EMBODIMENTS
In FIG. 1 a first image 102 with a first object 104 and a second object 106 is shown. The first object 104 is a rectangle that has four corners. Each of the four corners is a detectable object feature 108, 110, 112 and 114 of the first object 104 in the first image 102.
Similarly the second object 106 in the first image 102 has four object features 116, 118, 120 and 122.
In the second image 134 a third object 136 and a fourth object 146 are shown. Both objects are rectangles. The third object 136 has detectable object features 126, 128, 130 and 132, and the fourth object 146 in the second image 134 has four object features 138, 140, 142 and 144.
Using conventional techniques, the detected features 114 and 122 in the first image are both recognized as the same object feature, e.g. with a first type characteristic such as a 'lower left corner' and a 'lower right corner', symbolized in the figure below the first and second image as corners 148 and 150. Furthermore, the first object 104 and the second object 106 are both recognized as the same object, each comprising an 'upper left corner' 108, 116, an 'upper right corner' 110, 118, a 'lower left corner' 114, 122 and a 'lower right corner' 112, 120. Still further, because in the second image 134 the third object 136 and the fourth object 146 are also recognized as two objects, each with an 'upper left corner' 138, 126, an 'upper right corner' 140, 128, a 'lower left corner' 144, 132 and a 'lower right corner' 142, 130, the images 102 and 134 are recognized as the same images. A description of both the first image 102 and the second image 134 is therefore provided as shown at 124. The description 124 may be described as two 'upper left corners', two 'upper right corners', two 'lower left corners' and two 'lower right corners'. In order to more specifically identify object features and thereby decrease the ambiguity of the recognition, conventional techniques may need a descriptor of up to 128 points. The images in the above and the following examples are images of parts of a house. The objects may be objects such as walls, doors or the like objects in the house. In the examples the object features are described and shown as corners, but the object features may be all types of detectable local features that typically belong to the object.
The referral to a 'corner' or to e.g. the 'upper left corner' of an object is a referral to a type characteristic of the object. In this example the 'corner' or the 'upper left corner' is a sub-characteristic of the object, and both characteristics refer to a sub-shape, 'a corner', of the object.
In FIG. 2 the first image 102 and the second image 134 are shown, each image being associated with a first axis 204, an X-axis, and a second axis 202, a Y-axis. The first and second axes are orthogonally arranged and are shown to extend in the plane provided by the image 102. The first and second axes are arranged along a substantially horizontal and a substantially vertical edge of the image 102.
In accordance with the present invention, the object features in the image 102 are provided with a position relative to the first and second axes 204 and 202. As an example, the lower left corner object feature 114 of the first object 104 is projected, shown with a dashed line 208, on the second axis 202 and provided with a coordinate Y1 in the direction of the second axis, and projected, shown with a dashed line 210, on the first axis and provided with a coordinate X1 on the first axis. The position provided by the two coordinates (X1, Y1) of the object feature 114 may be an actual position of the object feature 114 within the image 102. In this example the coordinate is denoted Y1 because the feature 114 is one of the two features 114, 112 which have the lowest appearance on the second axis 202 relative to the other object features detected in the image 102. In other words, a positional relation 206 (an order of appearance), relative to the Y-axis, of the feature 114 is that the feature is one of the two features 114, 112 which appear first on the second axis 202, relative to the other features of the image or of the object.
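The projection step can be illustrated with a small sketch (the feature numbers follow the figure; the pixel coordinates and the dictionary layout are assumptions for illustration only):

```python
def project_feature(feature_type, x, y):
    """Describe a detected feature by its type characteristic and its
    projections (coordinates) on the first (X) and second (Y) axis."""
    return {"type": feature_type, "x": x, "y": y}

# Hypothetical positions of the lower left corners 114 and 122.
f114 = project_feature("lower left corner", 10, 5)   # (X1, Y1)
f122 = project_feature("lower left corner", 40, 12)  # (X3, Y2)

# Same type characteristic, but distinct positions within the image,
# so the two features can be recognized as being different.
assert f114["type"] == f122["type"]
assert (f114["x"], f114["y"]) != (f122["x"], f122["y"])
```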
An object feature that has a lowest coordinate on an axis may in accordance with the invention be referred to as an object feature which has a first appearance on the axis. An order of appearance of the object features of the objects 104 and 106, relative to the second axis 202, is then 112 and 114, then 120 and 122, then 116 and 118, and finally 108 and 110.
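The order of appearance just described can be obtained by sorting the features on their projections; a sketch, with hypothetical coordinates chosen so that the resulting order matches the one above:

```python
# Feature number -> hypothetical (x, y) position within the image.
features = {
    108: (10, 30), 110: (25, 30),  # upper corners of the first object 104
    112: (25, 5),  114: (10, 5),   # lower corners of the first object 104
    116: (40, 20), 118: (55, 20),  # upper corners of the second object 106
    120: (55, 12), 122: (40, 12),  # lower corners of the second object 106
}

# Sort on the Y coordinate: the lowest coordinate appears first on the axis.
order = sorted(features, key=lambda f: features[f][1])
assert set(order[0:2]) == {112, 114}
assert set(order[2:4]) == {120, 122}
assert set(order[4:6]) == {116, 118}
assert set(order[6:8]) == {108, 110}
```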
The order of appearance of the object features may alternatively or additionally be provided relative to the projections of the object features on the first axis 204. Similarly, the object feature 122 is provided with a position (X3, Y2) within the first image 102. By the provided different positions within the image of the object features 114 and 122, the object features 114 and 122 can be detected and recognized as being different.
Further, the first object 104 and the second object 106 are detected and recognized as being different objects due to the object features of each object having different positions within the image 102.
Still further, e.g. due to the provided positions, the provided positional relation between the object features or the order of appearance of the object features within each image as described, the first image 102 can now be recognized as being different from the second image 134.
When an image such as the image 102, or an image with only one object, is detected for the first time, a description of the image in accordance with the present invention may be stored in a database as a reference image, possibly with a number of other reference images of one or more other objects. When the image 102 is seen or detected again, the image is recognized to be the same image by comparing the detected image with the reference images. A description of a reference structure of a reference object using positions or a positional relation as described herein will, in addition to providing a lower ambiguity, also require a relatively shorter descriptor compared to conventional techniques.
As an alternative to using coordinates, an alternative solution of describing the position, in accordance with the present invention, may be provided. The alternative solution may use a position of an object feature described e.g. as a vector, having a certain length and a certain angle, the length being measured from an origin of a coordinate system and the angle being relative to e.g. the first or second axis of the coordinate system. In case an image is provided in which a depth of the features is available, e.g. equal to a distance from an object detection means to an object feature, one of the first and second axes may extend in the direction of the depth or, alternatively, the method and system according to the invention may be provided for three dimensions. For simplicity, the object features in FIG. 1 and FIG. 2 are only described with a type characteristic, such as 'corner' or, more precisely, 'upper left corner', but the features may alternatively or additionally be described with a type characteristic such as a gradient around the corner or an intensity of pixels inside or around the corner, indicated e.g. by a number associated with the symbolized object feature.
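The vector alternative can be sketched as follows (the function name and the values are assumptions; the angle is taken relative to the first axis):

```python
import math

def to_vector(x, y):
    """Describe a feature position as (length from the origin,
    angle in degrees relative to the first (X) axis)."""
    return math.hypot(x, y), math.degrees(math.atan2(y, x))

length, angle = to_vector(3.0, 4.0)
assert abs(length - 5.0) < 1e-9        # 3-4-5 triangle
assert abs(angle - 53.130102) < 1e-4   # atan2(4, 3) in degrees
```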
FIG. 3 illustrates a method in accordance with an embodiment of the present invention. Three different objects 302, 304, 306 are shown. Each object may be provided within different images (not shown) or within the same image (not shown). The objects 302, 304, 306 are characterized with a type characteristic 308, in this example a number which indicates an inside value. In this example the type characteristic is 100 within the objects and 0 outside the objects. The descriptions of the objects 302, 304 and 306 are shown as positional relations 324, 326 and 328, respectively.
A feature relationship 310 is shown with a circle. The circle means that the coordinates on the first and second axes of this feature belong to the 'same object feature', just projected on a different axis. This feature relationship 310 may be provided by determining that these two coordinates are coordinates of an object feature with the same type description. A coordinate relationship 312 is shown with a '+'. The sign '+' means that these coordinates are close to or 'closest to' each other on the axis among the object features. On the X-axis the object feature 314 is closest to the object feature 322. In this example the object features 314 and 322 are not only 'close to each other beyond a predetermined threshold' or 'closest to' each other; for simplicity, the object features in the example, as projected on the axis, have an equal projection on the axis.
A positional relation of the object features may then be described as: 'upper left corner with additional type description 100' + 'lower left corner with additional type description 100', as shown at 324 in FIG. 3. Still on the X-axis, the object feature 315 is closest to the object feature 316 and finally the object feature 318 is closest to the object feature 320. A description is similarly provided for the projection of the object features on the Y-axis. The objects 302, 304 and 306 can then be described by a path. Such a path 330 is shown for the object 306. The path is provided by following the 'closest to' and 'same object feature' relationships.
For the object 306 the path 330 runs along the object feature 'upper left corner' 332, which on the X-axis is 'closest to' 340 the 'lower left corner' 338. The 'lower left corner' is the same 342 object feature which is closest to the 'lower right corner' 336. The 'lower right corner' is, in turn, the same object feature which is closest to the 'upper right corner' 334 on the X-axis. The path is completed until it constitutes at least one closed loop, such as the path 330. In this manner a description of an object by a path as described may be stored for different objects. When an object is detected, the detected positional relation of the object features comprised in the object, e.g. described by a path 330, is compared to a reference structure of a reference object or to a structure of another object within the image, and hereby an effect may be that the object is recognized with a higher precision than that of the known methods.
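A sketch of following such a path on a single rectangle, alternating the 'closest to' link on an axis with the 'same object feature' link (corner names, coordinates and the handling of equal distances are illustrative assumptions):

```python
# Corner name -> hypothetical (x, y) position of a rectangular object.
corners = {"UL": (0, 10), "UR": (20, 10), "LL": (0, 0), "LR": (20, 0)}

def closest_on_axis(feature, axis):
    """Among the other corners, return the one whose projection on the
    given axis (0 = X, 1 = Y) is nearest to `feature`'s projection."""
    return min((f for f in corners if f != feature),
               key=lambda f: abs(corners[f][axis] - corners[feature][axis]))

# Follow the closed loop: start at the upper left corner, hop to the
# nearest corner on the X axis, then on the Y axis, and so on.
path, axis = ["UL"], 0
while True:
    nxt = closest_on_axis(path[-1], axis)
    if nxt == path[0]:
        break  # the loop is closed
    path.append(nxt)
    axis ^= 1  # alternate between the X and the Y axis

assert path == ["UL", "LL", "LR", "UR"]
```

Such a closed-loop path is a compact descriptor that can be stored per reference object and compared against the path detected in a new image.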
In general, a possible advantage provided by using the positional relation or an order of appearance compared to an actual position within the image is that the method may be size invariant. Hereby, the method may be invariant to a zoom factor of an object feature detection means such as a camera.
In FIG. 4 two similarly shaped objects 402 and 404 are shown in an image 406. The image 406 is associated with first and second axes as described for FIG. 2 and FIG. 3. The method as described in FIG. 3 may not be able to distinguish between the objects 402 and 404 by the relative position of the object features alone. Alternatively or additionally to recognizing whether both the objects 402 and 404 match a rectangular reference structure of a reference object, a search for a more precise match may be provided by using a relative ratio between object features. The relative ratio between object features may then be compared to the reference objects, and the reference structure that matches the object is recognized as the object. Alternatively, the reference object with the reference ratio(s) which matches one of the objects, e.g. the object 402, beyond a certain similarity threshold is chosen.
As an example, the ratio for the object 402 may be described as the ratio between relative distances between the positions of object features as projected on the first and second axes. For the object 402 and the object 404 the ratios may then be provided as follows:
Ratio 410 for object 402: (Y4 - Y3)/(X2 - X1) = 1.0 [1]
Similarly, the ratio 408 for the object 404 is: (Y2 - Y1)/(X4 - X3) = 0.7 [2]
For objects with an increased number of features an increased number of ratios may be provided and used for recognition of the object.
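Equations [1] and [2] can be reproduced with a small sketch (the coordinate values are invented so that the ratios come out as in the example):

```python
def aspect_ratio(y_hi, y_lo, x_hi, x_lo):
    """Ratio between a distance of Y projections and a distance of X
    projections of two pairs of object features."""
    return (y_hi - y_lo) / (x_hi - x_lo)

ratio_402 = aspect_ratio(20, 10, 15, 5)   # (Y4 - Y3)/(X2 - X1) = 1.0
ratio_404 = aspect_ratio(17, 10, 40, 30)  # (Y2 - Y1)/(X4 - X3) = 0.7

# Similarly shaped objects are recognized as distinct when their ratios
# differ beyond a similarity threshold.
assert abs(ratio_402 - ratio_404) > 0.1
```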
In FIG. 5 only the positions of the object features of the objects 502 and 504 on one axis are used to recognize the objects as different objects. The positions of the object features of the object 502 as projected on the second axis are shown as positions 506, 508 and 510 on the second axis. Similarly, the positions of the object features of the object 504 as projected on the second axis are shown as positions 516, 514 and 512 on the second axis.
When the object 504 is a reference object with a reference structure, the object 502 can be recognized as being different from the reference object in that the position 508 of the object 502 is different from the position 514 of the reference object.
Furthermore, a ratio between the positions of object features as projected on the second axis can be used to recognize that the object 502 is not the same as the object 504.
FIG. 6 shows an example of an apparatus 602 in accordance with an embodiment of the present invention. Although the apparatus in this example is a robot, such as a robot used for home robotics, the invention may also be used for a mobile phone, an intelligent camera or similar intelligent applications used for recognizing objects.
The apparatus 602 comprises detecting means 606 for detecting a plurality of object features. The detecting means may be one or more cameras. The apparatus furthermore comprises axis associating means 604 for associating the image with at least one axis of a first and a second axis. The axis associating means 604 may be a control logic adapted to associate each edge of the image with a virtual axis. The apparatus furthermore comprises first determining means 610. The first determining means 610 may be a control logic adapted to determine the positions. The positions may be determined by projecting the detected features on the at least one axis of the first and the second axis. Still further, the apparatus comprises first comparing means 614 for comparing the positions with reference positions of a reference structure of a reference object. The comparing means may also be provided as the control logic.
The apparatus may furthermore comprise characterizing means 608, shown with a dashed line box indicating these optional means, for characterizing each object feature with at least one type characteristic associated with the object feature. The characterizing means 608 may be a control logic adapted to characterize the detected object features with the sub-characteristic of the object such as the sub-shape of the object.
The apparatus may furthermore comprise second determining means 612, shown with a dashed line box. These determining means 612 may be a control logic to determine the positional relation of the object features.
Still further, the apparatus may comprise second comparing means 618, shown with a dashed line box, for comparing the positional relation with a reference positional relation of a reference structure of a reference object. The second comparing means may also be provided in the control logic. Although the wording first and second means is used herein, such first and second means may be provided by one processor unit or control logic 616. The apparatus 602 may furthermore comprise data storage 620 with e.g. a database for storing a description of reference objects. Alternatively, the control logic is adapted to be in e.g. a wireless connection with the data storage. The reference objects stored in the database are preferably described in accordance with the present invention. The description of reference objects in accordance with the present invention comprises a description of a plurality of reference positions within an image of the reference object features relative to at least one axis, or the description comprises a description of the reference positional relation between the reference object features from the plurality of reference positions, or the description comprises a description of the reference positional relation between the reference object features from the plurality of reference positions and from the type characteristic associated with the plurality of reference object features.
The database is searched in order to recognize an object in an image of the at least one object by comparing the object in the image with the reference objects stored in the database. Recognizing the object by comparing the detected object with the reference object may be provided by the determination means and comparing means and an object is for example recognized as the reference object when a similarity is beyond a predetermined level.
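The database search can be sketched as a comparison of the detected descriptor with each stored reference descriptor, recognizing the object when the similarity is beyond a predetermined level (the similarity measure, the descriptors and the object names below are assumptions, not part of the disclosure):

```python
def similarity(detected, reference):
    """Fraction of positions at which two equal-length descriptors match."""
    return sum(d == r for d, r in zip(detected, reference)) / len(reference)

# Hypothetical reference database: object name -> stored descriptor,
# here a sequence of corner types in their order of appearance.
database = {
    "door":   ["UL", "UR", "LR", "LL"],
    "window": ["UL", "UR", "LR", "LL", "center"],
}

def recognize(detected, level=0.9):
    for name, reference in database.items():
        if len(reference) == len(detected) and similarity(detected, reference) >= level:
            return name
    return None  # no reference object is similar beyond the level

assert recognize(["UL", "UR", "LR", "LL"]) == "door"
assert recognize(["UL", "UR", "LL", "LR"]) is None  # order of appearance differs
```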
Generally, the invention may be used to recognize objects such as walls, doors or the like household objects. The recognition of the objects may be used for navigation in an environment with the objects.
In one realization, the invention is implemented on a general purpose computer with a processor and a working memory. Software, containing various modules for executing the various functions described above, is loaded into the working memory and executed by the processor. The computer has a local storage, e.g. a hard disk, for storing the software and storing the description of the reference objects. Alternatively, the reference objects are stored at a remote location and the computer is provided with network access to the reference objects. The computer normally includes a display and one or more input devices, such as a keyboard, for interaction with a user. The computer normally also comprises one or more output devices, such as a graphical display or a loudspeaker, for interaction with the user of the apparatus in accordance with the invention. Part of the software may be realized in dedicated hardware. As an alternative to using a general purpose computer, a device can be made containing only dedicated hardware for realizing the various functions. The computer is at least connected to detection means such as a camera. In one application of the invention the computer or control logic and detection means as described are provided in connection with e.g. a robot.
Although the present invention has been described in connection with preferred embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the scope of the present invention is limited only by the accompanying claims.
In this section, certain specific details of the disclosed embodiment are set forth for purposes of explanation rather than limitation, so as to provide a clear and thorough understanding of the present invention. However, it should be understood readily by those skilled in this art, that the present invention may be practiced in other embodiments which do not conform exactly to the details set forth herein, without departing from the scope of this invention as defined by the independent claims. Further, in this context, and for the purposes of brevity and clarity, detailed descriptions of well-known apparatus, circuits and methodology have been omitted so as to avoid unnecessary detail and possible confusion. In the claims, the term "comprising" or "comprises" does not exclude the presence of other elements or steps. Additionally, although individual features may be included in different claims, these may possibly be advantageously combined, and the inclusion in different claims does not imply that a combination of features is not feasible and/or advantageous. In addition, singular references do not exclude a plurality. Thus, references to "a", "an", "first", "second" etc. do not preclude a plurality. Reference signs are included in the claims; however, the inclusion of the reference signs should not be construed as limiting the scope of the claims.


CLAIMS:
1. A method of recognizing an object (104, 106, 136, 146, 302, 304, 306, 404,
402, 502, 504) from an image (102, 134, 406) of at least the object, the method comprising: detecting a plurality of object features (108, 110, 112, 114, 116, 118, 120, 122, 126, 128, 130, 132, 138, 140, 142, 144, 314, 315, 316, 318, 320, 322, 332, 334, 336, 338), determining a plurality of positions within the image of the object features relative to at least one axis (202, 204) of the image, and comparing the positions with reference positions of a reference structure of a reference object.
2. The method according to claim 1, wherein the step of determining a plurality of positions within the image of the object features, comprises projecting (208, 210) each of the object features on the at least one axis.
3. The method according to claim 1, wherein the method further comprises characterizing each object feature with at least one type characteristic (148,
150, 308) associated with the object feature, the type characteristic preferably at least indicating a sub-characteristic of the object such as a sub-shape of the object, and determining a positional relation (206, 324, 326, 328) between the object features from the plurality of positions and from the type characteristic associated with the plurality of object features, the positional relation at least partly providing structural information of the object, and the method step of comparing the positions comprises comparing the positional relation with a reference positional relation of a reference structure of a reference object.
4. The method according to claim 1, wherein the method further comprises: determining two coordinates of each of the object features, and characterizing each object feature with at least one type characteristic (148, 150, 308) associated with the object feature, the type characteristic preferably at least indicating a sub-characteristic of the object such as a sub-shape of the object, and wherein the step of determining the plurality of positions comprises: determining a positional relation (206, 324, 326, 328) between the object features from the plurality of positions and from the type characteristic associated with the plurality of object features, wherein determining the positional relation between the object features comprises determining a feature relationship (310) by determining which of the coordinates on the at least one axis belongs to the same object feature, the positional relation at least partly providing structural information of the object, and wherein the method step of comparing the positions comprises comparing the positional relation with a reference positional relation of a reference structure of a reference object.
5. The method according to claim 1, wherein the method further comprises: determining two coordinates of each of the object features, and characterizing each object feature with at least one type characteristic (148,
150, 308) associated with the object feature, the type characteristic preferably at least indicating a sub-characteristic of the object such as a sub-shape of the object, and wherein the method step of determining the plurality of positions comprises: determining a positional relation (206, 324, 326, 328) between the object features from the plurality of positions and from the type characteristic associated with the plurality of object features, wherein determining the positional relation between the object features comprises determining a coordinate relationship (312) by determining which of the coordinates in a first direction are close to each other beyond a predetermined threshold and determining which of the coordinates in a second direction are close to each other beyond a predetermined threshold, the positional relation at least partly providing structural information of the object, and wherein the method step of comparing the positions comprises: comparing the positional relation with a reference positional relation of a reference structure of a reference object.
6. The method according to claim 1, wherein the method further comprises: determining two coordinates of each of the object features, and characterizing each object feature with at least one type characteristic (148,
150, 308) associated with the object feature, the type characteristic preferably at least indicating a sub-characteristic of the object such as a sub-shape of the object, and wherein the step of determining the plurality of positions comprises: determining a positional relation (206, 324, 326, 328) between the object features from the plurality of positions and from the type characteristic associated with the plurality of object features, wherein determining the positional relation between the object features comprises determining a feature relationship (310) by determining which of the coordinates on a first and a second axis belongs to the same object feature, and determining a coordinate relationship (312) by determining which of the coordinates in a first direction are close to each other beyond a predetermined threshold and determining which of the coordinates in a second direction are close to each other beyond a predetermined threshold, the positional relation at least partly providing structural information of the object, and determining a path (330) provided by and comprising the feature relationship (310) and the coordinate relationship (312), and wherein the method step of comparing the positions comprises comparing the path (330) with a reference path of a reference object.
7. The method according to claim 1, wherein the method further comprises: characterizing each object feature with at least one type characteristic (148, 150, 308) associated with the object feature, the type characteristic preferably at least indicating a sub-characteristic of the object such as a sub-shape of the object, and wherein the step of determining a plurality of positions comprises: determining at least one coordinate of each of the object features, and determining a positional relation (206, 324, 326, 328) between the object features from the plurality of positions and from the type characteristic associated with the plurality of object features by determining an order of appearance (206) of the object features on the first and/or second axis, the positional relation at least partly providing structural information of the object, and wherein the step of comparing the positions comprises: comparing the order of appearance (206) with a reference order of appearance of a reference object.
8. The method according to claim 1, wherein the step of determining a plurality of positions comprises: determining at least one coordinate of each of the object features, and wherein the step of comparing the positions comprises: determining a ratio (408, 410) between at least two coordinates and comparing the ratio with a reference ratio of a reference structure of a reference object.
9. An apparatus (502) for recognizing an object (104, 106, 136, 146, 302, 304, 306, 404, 402, 502, 504) from an image (102, 134, 406) of at least the object, wherein the apparatus comprises detecting means (606) for detecting a plurality of object features (108, 110, 112, 114, 116, 118, 120, 122, 126, 128, 130, 132, 138, 140, 142, 144, 314, 315, 316, 318, 320, 322, 332, 334, 336, 338), determining means (610) for determining a plurality of positions within the image of the object features relative to at least one axis (202, 204), and comparing means (614) for comparing the positions with reference positions of a reference structure of a reference object.
10. A computer readable code enabling a processor to perform the method steps according to claim 1.
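The claims above describe a position-based matching scheme: detect object features, take their coordinates relative to one or two axes, derive an order of appearance (claim 7), group coordinates that lie within a threshold of each other (claim 6), or compare coordinate ratios (claim 8) against a reference object. A minimal Python sketch of those steps follows; all feature names, coordinates, and thresholds are invented for illustration, as the patent itself prescribes no particular implementation:

```python
def order_of_appearance(features, axis):
    """Sort feature labels by their coordinate on the given axis (0 = x, 1 = y),
    giving the claim-7 style 'order of appearance' along that axis."""
    return [label for label, _ in sorted(features.items(), key=lambda kv: kv[1][axis])]

def group_close(coords, threshold):
    """Group 1-D coordinates whose neighbours lie within `threshold` of each other,
    a simple reading of the claim-6 'coordinate relationship'."""
    groups = []
    for c in sorted(coords):
        if groups and c - groups[-1][-1] <= threshold:
            groups[-1].append(c)
        else:
            groups.append([c])
    return groups

def coordinate_ratio(features, a, b, axis):
    """Ratio between two feature coordinates on one axis (claim-8 style comparison),
    which is insensitive to uniform scaling of the image."""
    return features[a][axis] / features[b][axis]

def matches_reference(features, ref_order_x, ref_order_y):
    """Recognize the object when the order of appearance on both axes
    matches the reference object's orders."""
    return (order_of_appearance(features, 0) == ref_order_x and
            order_of_appearance(features, 1) == ref_order_y)

# Example: three hypothetical corner features of a door-like object.
detected = {"hinge": (10.0, 5.0), "handle": (60.0, 40.0), "top": (35.0, 90.0)}
print(order_of_appearance(detected, 0))            # left-to-right order
print(matches_reference(detected,
                        ["hinge", "top", "handle"],
                        ["hinge", "handle", "top"]))
```

Comparing orders and ratios rather than raw pixel positions is what lets such a scheme tolerate translation and scaling of the object within the image.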
PCT/IB2007/051170 2006-04-13 2007-04-02 Object recognition WO2007119186A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP06300368.5 2006-04-13
EP06300368 2006-04-13

Publications (2)

Publication Number Publication Date
WO2007119186A2 true WO2007119186A2 (en) 2007-10-25
WO2007119186A3 WO2007119186A3 (en) 2007-12-27

Family

Family ID: 38529747

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2007/051170 WO2007119186A2 (en) 2006-04-13 2007-04-02 Object recognition

Country Status (1)

Country Link
WO (1) WO2007119186A2 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000215315A (en) * 1999-01-26 2000-08-04 Ricoh Co Ltd Graphic classifying method, graphic retrieving method, graphic classification and retrieval system, and recording medium
US20040062419A1 (en) * 2002-10-01 2004-04-01 Samsung Electronics Co., Ltd. Landmark, apparatus, and method for effectively determining position of autonomous vehicles

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
BOURDON, O. et al., "Object recognition using geometric hashing on the Connection Machine", Proceedings of the International Conference on Pattern Recognition, Atlantic City, 16-21 June 1990, vol. 1, pp. 596-600, XP010020500, ISBN 0-8186-2062-5 *
EDWARDS, J. et al., "Recognition of multiple objects using geometric hashing techniques", Proceedings of the Conference on Decision and Control, San Antonio, 15-17 December 1993, vol. 3, pp. 1617-1622, XP010116521, ISBN 0-7803-1298-8 *
BEBIS, G. et al., "Using Self-Organizing Maps to Learn Geometric Hash Functions for Model-Based Object Recognition", IEEE Transactions on Neural Networks, vol. 9, no. 3, May 1998, XP011040130, ISSN 1045-9227 *
HUTTENLOCHER, D. P. et al., "Object recognition using alignment", Image Understanding Workshop Proceedings, vol. 1, 23 February 1987, pp. 370-380, XP008018730 *
LAMDAN, Y. et al., "Geometric Hashing: A General and Efficient Model-Based Recognition Scheme", Second International Conference on Computer Vision, 5 December 1988, pp. 238-249, XP010225219 *
LAMDAN, Y. et al., "Object recognition by affine invariant matching", Proceedings of the Conference on Computer Vision and Pattern Recognition, Ann Arbor, 5-9 June 1988, pp. 335-344, XP010012869, ISBN 0-8186-0862-5 *

Also Published As

Publication number Publication date
WO2007119186A3 (en) 2007-12-27

Similar Documents

Publication Publication Date Title
US11127203B2 (en) Leveraging crowdsourced data for localization and mapping within an environment
US11644338B2 (en) Ground texture image-based navigation method and device, and storage medium
CN110807350B (en) System and method for scan-matching oriented visual SLAM
US11151281B2 (en) Video monitoring method for mobile robot
US9304970B2 (en) Extended fingerprint generation
CN110095752B (en) Positioning method, apparatus, device and medium
US20130195314A1 (en) Physically-constrained radiomaps
CN110243360A (en) Map structuring and localization method of the robot in moving region
US11373410B2 (en) Method, apparatus, and storage medium for obtaining object information
US20230331485A1 (en) A method for locating a warehousing robot, a method of constructing a map, robot and storage medium
US10001544B2 (en) Method and electronic device identifying indoor location
EP3674661A1 (en) Locating method and device, storage medium, and electronic device
Hile et al. Information overlay for camera phones in indoor environments
Tsuru et al. Online object searching by a humanoid robot in an unknown environment
CN114529621B (en) Household type graph generation method and device, electronic equipment and medium
JP2024502523A (en) Location method and apparatus, computer equipment, and computer readable storage medium
WO2019183928A1 (en) Indoor robot positioning method and robot
WO2007119186A2 (en) Object recognition
Liu et al. Modeling of structure landmark for indoor pedestrian localization
AU2020230251B2 (en) Method for relocating a mobile vehicle in a slam map and mobile vehicle
CN114527456A (en) UWB-based motion trajectory identification method and electronic equipment
CN107729862B (en) Secret processing method for robot video monitoring
CN113960999A (en) Mobile robot repositioning method, system and chip
CN108732925B (en) Intelligent device and advancing control method and device thereof
Abadi et al. Manhattan World Constraint for Indoor Line-based Mapping Using Ultrasonic Scans

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 07735355; Country of ref document: EP; Kind code of ref document: A2
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 07735355; Country of ref document: EP; Kind code of ref document: A2