WO2000070558A1 - Method and device for moving image processing, and medium - Google Patents
Method and device for moving image processing, and medium
- Publication number
- WO2000070558A1 PCT/JP2000/003179
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- subject
- image processing
- moving image
- point
- information
- Prior art date
Classifications
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
- G07C9/00174—Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
- G07C9/00563—Electronically operated locks; Circuits therefor; Nonmechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys using personal physical data of the operator, e.g. finger prints, retinal images, voice patterns
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R25/00—Fittings or systems for preventing or indicating unauthorised use or theft of vehicles
- B60R25/20—Means to switch the anti-theft system on or off
- B60R25/2045—Means to switch the anti-theft system on or off by hand gestures
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R25/00—Fittings or systems for preventing or indicating unauthorised use or theft of vehicles
- B60R25/20—Means to switch the anti-theft system on or off
- B60R25/25—Means to switch the anti-theft system on or off using biometry
- B60R25/252—Fingerprint recognition
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R25/00—Fittings or systems for preventing or indicating unauthorised use or theft of vehicles
- B60R25/30—Detection related to theft or to other events relevant to anti-theft systems
- B60R25/305—Detection related to theft or to other events relevant to anti-theft systems using a camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/251—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
- G07C9/30—Individual registration on entry or exit not involving the use of a pass
- G07C9/32—Individual registration on entry or exit not involving the use of a pass in combination with an identity check
- G07C9/37—Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
-
- E—FIXED CONSTRUCTIONS
- E05—LOCKS; KEYS; WINDOW OR DOOR FITTINGS; SAFES
- E05F—DEVICES FOR MOVING WINGS INTO OPEN OR CLOSED POSITION; CHECKS FOR WINGS; WING FITTINGS NOT OTHERWISE PROVIDED FOR, CONCERNED WITH THE FUNCTIONING OF THE WING
- E05F15/00—Power-operated mechanisms for wings
-
- E—FIXED CONSTRUCTIONS
- E05—LOCKS; KEYS; WINDOW OR DOOR FITTINGS; SAFES
- E05Y—INDEXING SCHEME RELATING TO HINGES OR OTHER SUSPENSION DEVICES FOR DOORS, WINDOWS OR WINGS AND DEVICES FOR MOVING WINGS INTO OPEN OR CLOSED POSITION, CHECKS FOR WINGS AND WING FITTINGS NOT OTHERWISE PROVIDED FOR, CONCERNED WITH THE FUNCTIONING OF THE WING
- E05Y2400/00—Electronic control; Power supply; Power or signal transmission; User interfaces
- E05Y2400/80—User interfaces
- E05Y2400/85—User input means
- E05Y2400/852—Sensors
- E05Y2400/856—Actuation thereof
- E05Y2400/858—Actuation thereof by body parts
- E05Y2400/86—Actuation thereof by body parts by hand
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
Definitions
- The present invention relates to a moving image processing method, a moving image processing device, and a medium, and in particular to a moving image processing method that extracts features based on basic shape features and calculates motion information of a target object by tracking a feature portion in an image, and to a corresponding moving image processing device and medium. Background art
- Conventional moving image processing methods and devices that extract features from time-series images capturing the motion of a target object and estimate the motion information of the target object from the detected features include a method of attaching markers (JP-A-6-89342) and a method of wearing gloves of a specific color.
- As a method for estimating motion information without photographing the motion of the target object, there is a method of wearing a glove or the like with a magnetic sensor attached to the hand.
- Here, the "specific device" is, for example, an automobile.
- the conventional moving image processing method and moving image processing apparatus as described above have the following problems.
- Another technique is to track the contour of the subject using an energy minimization method and extract the motion of the subject from the obtained contour (Japanese Patent Laid-Open No. 5-12443, Japanese Patent Laid-Open No. 8-263666).
- a point corresponding to the reference point in an adjacent image frame can be obtained by using an image pattern in a small area around the reference point.
- A point having a distinct feature, such as a corner, is effective as a reference point, but the number of such points is limited, so the upper limit on the number of reference points is determined by the shape of the subject. Reference points may therefore disappear from the image due to occlusion or the like, making tracking impossible.
- The present invention has been made in view of the above problems of the related art, and has as its object to provide a moving image processing method, a moving image processing apparatus, and a medium capable of stably tracking an arbitrary movement of a subject having an arbitrary shape.
- Another object of the present invention is to provide a moving image processing method, a moving image processing device, and a medium that are applied to user authentication when using a specific device (for example, an automobile) by drawing a figure in space with a finger or the like and calculating its motion information. Disclosure of the invention
- The moving image processing method is characterized in that user authentication is performed when a specific device is used, using motion information of a subject obtained from a plurality of frames that differ in time series.
- It is preferable that the method comprises: a first step of processing a frame of the time-series image to extract a contour shape feature of the subject included in the time-series image; a second step of detecting a reference point from within the frame; and a third step of temporally tracking the reference point and calculating motion information of the subject in a three-dimensional space.
- The moving image processing method of the present invention is a method for tracking the movement of a subject included in a time-series image using a contour shape feature, comprising: a first step of processing a frame of the time-series image to extract a shape feature; a second step of detecting a reference point from within the frame using the shape feature; a third step of temporally tracking the reference point and calculating motion information of the subject in a three-dimensional space; and a fourth step of operating a display object prepared in advance based on the motion information.
- Preferably, a contour line is extracted based on edge information of an input image of the subject, and the following feature points are detected from the edge information: a bending point, where the direction of the edge gradient changes sharply; an inflection point, where the sign of the curvature at a point on the contour line is inverted and the direction of the edge gradient changes gently; and a transition point, where the curvature at a point on the contour line transitions from zero to non-zero or from non-zero to zero and the direction of the edge gradient changes gently. Based on these feature points, the contour is divided into straight-line segments and concave or convex curve segments.
- It is preferable that a specific part is detected from the projected image of the subject by combining a plurality of segments obtained in the same frame, and that a reference point for tracking is determined on the contour of the detected specific part.
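The feature-point taxonomy above (bending, inflection, and transition points) can be pictured with a short sketch. The following is only an illustrative approximation, not the patent's implementation; the thresholds `angle_jump` and `curv_eps` are assumed values, and a discrete curvature estimate stands in for the edge-gradient analysis.

```python
import numpy as np

def classify_contour_points(contour, angle_jump=60.0, curv_eps=0.02):
    """Classify sampled contour points into bending, inflection, and
    transition points from discrete curvature.  `contour` is an (N, 2)
    array of points ordered along the outline."""
    d = np.gradient(contour, axis=0)                 # tangent vectors
    theta = np.unwrap(np.arctan2(d[:, 1], d[:, 0]))  # tangent direction
    ds = np.linalg.norm(d, axis=1)
    curvature = np.gradient(theta) / np.maximum(ds, 1e-9)

    bending, inflection, transition = [], [], []
    for i in range(1, len(contour) - 1):
        dtheta = np.degrees(abs(theta[i + 1] - theta[i - 1]))
        k0, k1 = curvature[i - 1], curvature[i + 1]
        if dtheta > angle_jump:                      # gradient direction changes sharply
            bending.append(i)
        elif k0 * k1 < -curv_eps ** 2:               # sign of curvature is inverted
            inflection.append(i)
        elif (abs(k0) < curv_eps) != (abs(k1) < curv_eps):  # zero <-> non-zero
            transition.append(i)
    return bending, inflection, transition
```

Splitting the contour at the returned indices then yields the straight-line and concave or convex curve segments described above.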
- It is preferable that the contour of the subject is expressed by a plurality of segments separated by feature points, and that a reference point for tracking a corresponding point is determined on a segment.
- As the reference points for tracking corresponding points, at least four points located on the same plane in a three-dimensional space are preferably selected from the image in the initial frame, and corresponding points for each reference point are detected from frames that differ in time series from the initial frame. From a plurality of tracking reference points and corresponding points obtained from a plurality of frames that differ in time series, the motion information of the plane in the three-dimensional space is calculated based on the finite-motion assumption for the plane on which the reference point and corresponding point pairs lie, and a model or pointer prepared in advance can be operated using the calculated motion information.
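The finite-motion assumption for four or more coplanar reference points can be illustrated with a standard direct linear transform (DLT) fit of the plane-induced homography between two frames. The patent does not prescribe this particular formulation; the sketch below only shows how four or more point pairs on one plane determine the plane's image-to-image motion.

```python
import numpy as np

def fit_plane_motion(ref_pts, cor_pts):
    """Estimate the 3x3 homography H mapping reference points to their
    corresponding points (both (N, 2) arrays, N >= 4, all on one plane
    in 3-D space).  Standard DLT: each pair gives two linear equations
    in the 9 entries of H; the solution is the null vector of A."""
    A = []
    for (x, y), (u, v) in zip(ref_pts, cor_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]                    # normalize so H[2, 2] == 1

def apply_homography(H, pts):
    """Apply H to (N, 2) points in homogeneous coordinates."""
    pts_h = np.c_[pts, np.ones(len(pts))] @ H.T
    return pts_h[:, :2] / pts_h[:, 2:3]
```

The recovered H encodes the rotation and translation of the tracked plane up to the unknown depth scale, which is why at least four coplanar pairs are required.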
- When the user uses the specific device, the user inputs motion information, the motion information is compared with an initial registration pattern input in advance, and if it is determined to differ from the initial registration pattern, use of the specific device is disallowed.
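As a hedged illustration of comparing input motion information against an initial registration pattern (the patent does not fix a matching algorithm), a figure drawn in space can be resampled and compared with the registered figure under a distance threshold; `threshold`, the resampling count, and the normalization are all assumed choices.

```python
import numpy as np

def resample(traj, n=32):
    """Resample a 2-D trajectory to n points equally spaced in arc length."""
    traj = np.asarray(traj, dtype=float)
    seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, s[-1], n)
    return np.c_[np.interp(t, s, traj[:, 0]), np.interp(t, s, traj[:, 1])]

def authenticate(candidate, registered, threshold=0.15):
    """Permit use only if the input motion pattern is close enough to the
    initially registered pattern (scale-normalized mean point distance)."""
    a, b = resample(candidate), resample(registered)
    for p in (a, b):
        p -= p.mean(axis=0)
        p /= max(np.abs(p).max(), 1e-9)   # normalize position and scale
    return float(np.mean(np.linalg.norm(a - b, axis=1))) <= threshold
```

A mismatching figure would return `False`, corresponding to the use-disallowed branch described above.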
- the present invention provides a moving image processing apparatus that authenticates a user when using a specific device using motion information of a subject obtained from a plurality of frames different in time series.
- The means for calculating motion information of the subject in the moving image processing apparatus comprises: means for processing a frame of the time-series image to extract a contour shape feature of the subject included in the time-series image; means for detecting a reference point from within the frame; and means for temporally tracking the reference point and calculating motion information of the subject in a three-dimensional space.
- the present invention also provides a moving image processing apparatus that tracks the movement of a subject included in a time-series image by using a contour shape feature.
- The apparatus comprises: means for processing a frame of the time-series image to extract a shape feature; means for detecting a reference point from within the frame using the shape feature; means for temporally tracking the reference point and calculating motion information of the subject in a three-dimensional space; and means for operating a display object prepared in advance based on the motion information.
- The means for extracting the shape feature in the moving image processing apparatus of the present invention preferably extracts a contour line based on edge information of an input image of the subject and detects, from the edge information, bending points where the direction of the edge gradient changes sharply, inflection points where the sign of the curvature at a point on the contour line is inverted and the direction of the edge gradient changes gently, and transition points where the curvature at a point on the contour line transitions from zero to non-zero or from non-zero to zero and the direction of the edge gradient changes gently.
- Based on these feature points, the contour is divided into straight-line segments and concave or convex curve segments, and a specific part is preferably detected from the projected image of the subject by combining a plurality of segments obtained in the same frame.
- In the moving image processing apparatus of the present invention, it is preferable that at least four points located on the same plane in a three-dimensional space are selected from the image in the initial frame as reference points for tracking corresponding points (referred to as "tracking reference points" as appropriate), and that corresponding points for each reference point are detected from frames that differ in time series from the initial frame.
- In the moving image processing device of the present invention, based on a plurality of tracking reference points and corresponding points obtained from a plurality of frames that differ in time series, the motion information of the plane in the three-dimensional space is preferably calculated from the finite-motion assumption for the plane on which the reference point and corresponding point pairs lie, and a model or pointer prepared in advance is preferably operated using the calculated motion information.
- It is preferable that the moving image processing apparatus of the present invention includes: motion information input means for inputting motion information; motion information comparison means for comparing the motion information input from the motion information input means with an initial registration pattern input in advance; and use permission control means for transmitting, to the specific device, a signal that disallows use of the specific device when the comparison by the motion information comparison means determines that the motion information differs from the initial registration pattern.
- the present invention also provides a medium that can be read by a computer (or information processing device) that records a program that tracks the movement of a subject included in a time-series image using contour shape features.
- a computer or information processing device
- The computer processes a frame of the time-series image, extracts a shape feature, detects a reference point from the frame using the shape feature, temporally tracks the reference point, calculates motion information of the subject in a three-dimensional space, and operates a display object prepared in advance based on the motion information.
- Preferably, a contour line is extracted based on edge information of an input image of the subject, and bending points where the direction of the edge gradient changes sharply, inflection points where the sign of the curvature at a point on the contour line is inverted and the direction of the edge gradient changes gently, and transition points where the curvature at a point on the contour line transitions from zero to non-zero or from non-zero to zero and the direction of the edge gradient changes gently are detected as feature points.
- The contour line is divided into straight-line segments and concave or convex curve segments based on the feature points, and a specific part is detected from the projected image of the subject by combining a plurality of segments obtained in the same frame.
- As the reference points for tracking corresponding points, at least four points located on the same plane in a three-dimensional space are preferably selected from the image in the initial frame, and corresponding points for each reference point are preferably detected from frames that differ in time series from the initial frame.
- Preferably, based on a plurality of tracking reference points and corresponding points obtained from a plurality of frames that differ in time series, the computer calculates the motion information of the plane in the three-dimensional space from the finite-motion assumption for the plane on which the pairs lie, and operates a model or pointer prepared in advance using the calculated motion information.
- a computer authenticates a user when using a specific device by using motion information of a subject obtained from a plurality of frames different in time series.
- the present invention provides a user authentication system for permitting or disallowing the use of a specific device using motion information of a subject obtained from a plurality of frames different in time series.
- The user authentication device of the specific device compares an initial registration pattern including initial motion information with the motion information of the subject input when the specific device is used, thereby authenticating the user who operates the specific device. If the comparison determines that the use is unauthorized, use of the specific device is disallowed; otherwise, use of the specific device is permitted.
- In the user authentication system, the user authentication device of the specific device includes: motion information input means for inputting motion information when a user uses the specific device; use permission control means for permitting or disallowing use of the specific device; motion information comparison means for comparing the motion information input by the user when using the specific device with an initial registration pattern input in advance; and transmission means for transmitting a radio wave when use of the specific device is disallowed by the use permission control means. The user authentication device is connected to a server via a network, and the server comprises reception means for receiving the radio wave transmitted from the transmission means. When the user uses the specific device, motion information is input from the motion information input means.
- The motion information is compared by the motion information comparison means with the initial registration pattern set and registered in the motion information storage means.
- When the motion information is determined to differ from the initial registration pattern, the use permission control means transmits a use-disallowed signal to the specific device.
- It is preferable that the reception means on the server side receives the radio wave transmitted from the transmission means and acquires the position information of the specific device.
- It is preferable that, when an attempt is made to use the specific device illegally, the user authentication device sends an abnormality signal to the server, the server receives the signal, and the reception means on the server side starts receiving the radio wave transmitted from the transmission means.
- The image processing method of the present invention is a method of tracking a subject in a time-series image and extracting its movement, characterized in that a reference point used as a tracking target between adjacent image frames of the subject captured by the imaging means is selected from points on the contour using information obtained from the contour of the subject.
- It is preferable to use information on the normal direction of the contour of the subject as the information obtained from the contour, and to select the reference points so that their normal directions include at least two directions having an angle difference of about 90 degrees.
- It is preferable that a provisional corresponding point corresponding to the reference point is obtained using information obtained from the contour, and that the reference point is tracked by expressing the position of the corresponding point on the image frame as a polynomial capable of representing the movement of the subject and obtaining the coefficients of the polynomial from the coordinates of the provisional corresponding points.
- It is preferable to use the normal direction of the contour line as the information obtained from the contour line.
- the present invention provides a moving image processing apparatus that tracks a subject in a time-series image and extracts the movement.
- This apparatus is characterized in that a reference point used as a tracking target between adjacent image frames of the subject captured by the imaging unit is selected from points on the contour using information obtained from the contour of the subject.
- As the information obtained from the contour of the subject, it is preferable to use information on the normal direction of the contour, and to select the reference points so that their normal directions include at least two directions having an angle difference of about 90 degrees.
- It is preferable that a provisional corresponding point corresponding to the reference point is obtained using information obtained from the contour, and that the reference point is tracked by expressing the position of the corresponding point on the image frame as a polynomial capable of representing the movement of the subject and calculating the coefficients of the polynomial from the coordinates of the provisional corresponding points.
- the present invention provides a computer-readable medium for recording a program for tracking a subject in a time-series image and extracting a movement thereof.
- When executed by the computer, the program causes at least the following: a reference point to be used as a tracking target between adjacent image frames of the subject captured by the imaging means is selected from points on the contour using information obtained from the contour of the subject.
- It is preferable to use information on the normal direction of the contour of the subject as the information obtained from the contour, and to select the reference points so that their normal directions include at least two directions having an angle difference of about 90 degrees.
- the present invention provides a program for tracking the movement of a subject included in a time-series image using a contour shape feature.
- When executed by a computer, the program at least processes a frame of the time-series image to extract a shape feature, detects a reference point from the frame using the shape feature, temporally tracks the reference point, calculates motion information of the subject in a three-dimensional space, and operates a display object prepared in advance based on the motion information.
- the present invention provides a program for tracking a subject in a time-series image and extracting its movement.
- When executed by a computer, this program at least selects a reference point to be used as a tracking target between adjacent image frames of the subject captured by the imaging unit from points on the contour, using information obtained from the contour of the subject.
- the program of the present invention can be recorded on a medium such as a computer-readable CD-ROM, FD, hard disk, or semiconductor memory.
- the program recorded on the medium is installed on the computer and executed on the computer.
- FIG. 1 is a schematic configuration diagram of a moving image processing device according to the present invention.
- FIG. 2 is a flowchart of the moving image processing method according to the present invention.
- FIG. 3 is another flowchart of the moving image processing method according to the present invention.
- FIG. 4 is a diagram showing feature points on a contour line.
- FIG. 5 is a diagram showing feature extraction based on a combination of basic shape features.
- FIG. 6 is a diagram showing a process of selecting a segment and a reference point for avoiding the opening problem.
- FIG. 7 is a flowchart for selecting a reference point and determining a corresponding point.
- FIG. 8 is an explanatory diagram illustrating an example of a selected reference point.
- FIG. 9 is an explanatory diagram showing the obtained provisional corresponding points.
- FIG. 10 is a diagram showing a process in which a reference point transitions to a corresponding point.
- FIG. 11 is a diagram showing the concept of calculating motion information.
- FIG. 12 is a diagram showing an application example of the present invention.
- FIG. 13 is a diagram showing a user authentication device.
- FIG. 14 is an explanatory diagram of inputting motion information to the user authentication device and comparing the information.
- FIG. 15 is a diagram showing the configuration of the user authentication system.
- FIG. 16 is a flowchart showing the processing of the user authentication system.
- FIG. 17 is a moving image processing flowchart according to another embodiment. BEST MODE FOR CARRYING OUT THE INVENTION
- FIG. 1 is a schematic configuration diagram of a moving image processing device of the present invention.
- Reference numeral 1 denotes an image pickup unit having an image sensor for picking up an image of a subject, and specifically corresponds to a camera.
- Reference numeral 2 denotes an A/D converter that converts the analog image information captured by the imaging unit 1 into image information composed of digital signals.
- Reference numeral 3 denotes a frame memory that stores, for each time-series frame, the digital image information converted by the A/D converter 2.
- Reference numeral 4 denotes central processing means (CPU) that controls the entire processing apparatus and stores an algorithm including the processing flow of the moving image processing method of the present invention.
- Reference numeral 5 denotes a main memory used for signal processing of the central processing means 4
- reference numeral 6 denotes a display means for displaying an image processed by the present processing device.
- Although the imaging unit 1 is described as an analog camera by way of example, the object of the present invention can also be realized by a digital camera having the functions of both the imaging unit 1 and the A/D conversion unit 2.
- FIG. 2 is a flowchart showing a processing procedure of the moving image processing method according to one embodiment of the present invention.
- the moving image processing method according to the present invention will be described while describing this procedure.
- This method includes the processing of image input (S20), feature extraction (S21), reference point tracking (S22), motion information calculation (S23), and model operation and image display (S24).
- In step S20, a finger as the subject is imaged by the imaging means 1, and contour shape information of the finger is acquired.
- FIG. 4 shows an example of detection of a feature point on a contour line composed of edge information of a finger detected from an initial frame of an input time-series image.
- The symbols in the figure indicate the bending points (□), the inflection points, and the transition points (×) as feature points.
- The "bending point" refers to a point at which the direction of the edge gradient changes sharply based on the edge information.
- The "inflection point" refers to a point at which the sign of the curvature on the contour line of the subject is inverted while the direction of the edge gradient changes gently.
- step S21 in FIG. 2 first, the contour is divided into a straight line and a concave or convex curve segment based on these feature points.
- a reference point for tracking (referred to as a “tracking reference point” as appropriate) is determined from points on the upper-level shape feature.
- Fig. 5 shows basic shape features using a straight-line segment (e1 in the figure) and a curved segment (e2 in the figure), and an example of describing a higher-level shape feature consisting of the segments e1, e2, and e3 connected at transition points.
- FIG. 6 shows an example in which a reference point is set for a shape feature composed of a combination of segments.
- Suppose a point indicated by the marker on the segment indicated by reference numeral a in the figure is selected as a reference point.
- For horizontal movement in the figure, the correct corresponding point can be searched for from the image in a different frame. For vertical movement, however, the corresponding point cannot be found correctly except at a few points on the curved part. This is the aperture problem ("the problem that the corresponding point at time (t+1) corresponding to the tracking reference point at time t cannot be uniquely determined"), and it needs to be avoided.
- This avoidance is achieved by selecting a plurality of segments constituting the higher-level shape feature representing the finger, for example segment a and segment b, so as to form an angle of 90 degrees.
- A segment other than the segment indicated by symbol b in FIG. 6 may be used, as long as the aperture problem can be avoided.
- Points on a segment may be sampled at equal intervals, for example as shown by the marked points in segment a in Fig. 6, and tracking may be performed on the segments that constitute the top-level feature.
- In step S22, a plurality of points, for example at least four points located on the same plane in the three-dimensional space, are selected from the reference points in the initial frame chosen in step S21.
- Then the corresponding points in another frame are searched for, that is, how each reference point transitions and moves to its corresponding point after a predetermined time.
- FIG. 7 is a flowchart showing a process of selecting a reference point and determining a corresponding point. As shown in the figure, the present method includes two steps, a process for the first image frame (S1) and a process for the subsequent image frames (S2).
- step S110 the contour of the subject is extracted from the first image frame, and the normal direction at each point on the contour is calculated from the neighboring contour (S110).
- step S120 a reference point is selected from points on the contour line in consideration of the normal direction (S120).
- The condition is that the normal directions of the selected points are not biased toward a single direction and include two or more normal directions having an angle difference of about 90 degrees; the reference points are selected under this condition.
- 20 reference points are selected.
- Here, the angle between the normal directions of the selected reference points is set to about 90 degrees, but the angle is not limited to this; a range of 45 to 135 degrees is possible, and a range of 80 to 100 degrees is also possible.
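As an illustration of this selection rule, the following sketch (a minimal reconstruction; the function name, data layout, and thresholds are our assumptions, not from the patent) samples candidate points at equal intervals along the contour and checks that their normal directions span roughly 90 degrees, which is what avoids the aperture problem:

```python
import math

def select_reference_points(contour_points, normals, n_points=20, min_spread_deg=80):
    """Select tracking reference points from contour points so that their
    normal directions are not biased toward a single direction.

    contour_points: list of (x, y) tuples on the extracted contour.
    normals: list of normal angles in radians, one per contour point.
    Returns up to n_points (x, y) reference points whose normals include
    directions differing by roughly 90 degrees (80-100 degrees per the text).
    """
    # Sample candidates at equal intervals along the contour.
    step = max(1, len(contour_points) // n_points)
    idx = list(range(0, len(contour_points), step))[:n_points]

    # Measure the angular spread of the selected normals (mod 180 degrees,
    # since a normal and its opposite constrain motion identically).
    angles = sorted(math.degrees(normals[i]) % 180.0 for i in idx)
    spread = angles[-1] - angles[0]
    if spread < min_spread_deg:
        raise ValueError(
            "normal directions too biased; aperture problem not avoided")
    return [contour_points[i] for i in idx]
```

With a contour whose normals all point the same way (a straight edge), the function refuses the selection, mirroring the failure case described for the vertical movement above.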
- FIG. 8 shows an example of a reference point selected so as to satisfy the above condition in the image of the finger.
- In the figure, the marked points are reference points, and the line segment at each reference point indicates its normal direction.
- In step S2, the tracking process of the reference points obtained in step S1 is performed in the subsequent image frames.
- Here, the relationship between the reference point and the corresponding point can be expressed by an affine transformation. That is, if the movement of the finger consists of translation and rotation on an arbitrary plane, it can be expressed by an affine transformation.
- the “corresponding point” refers to a point corresponding to the reference point after the finger as the subject has moved.
- a, b, c, and d are coefficients indicating a rotation component of the motion of the subject, and e and f are coefficients indicating a translation component of the motion of the subject.
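The expressions (1) and (2) are not reproduced at this point of the text; assuming the standard planar affine form implied by the coefficient roles just described (a reconstruction, so the exact notation may differ from the original), they read, for a reference point (x, y) and its corresponding point (x′, y′):

```latex
\begin{align}
x' &= a x + b y + e \tag{1} \\
y' &= c x + d y + f \tag{2}
\end{align}
```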
- In step S210, the intersection of the normal of each reference point of the subject before movement (the subject in the previous image frame) with the contour of the subject after movement (the subject in the next image frame) is provisionally determined.
- The intersection between the normal of a reference point and the contour is determined as the point at which the luminance value of a pixel on the normal differs greatly from that of the adjacent pixel on the normal.
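A minimal sketch of this tentative-intersection search (illustrative only; the function name, search length, and luminance threshold are assumptions): walk outward along the reference point's normal and stop at the first large luminance jump, which is taken as the contour of the moved subject:

```python
def tentative_corresponding_point(image, start, normal, search_len=15, thresh=40):
    """Walk along the normal of a reference point and return the first pixel
    whose luminance differs sharply from its predecessor on the normal,
    i.e. a tentative intersection with the moved subject's contour.

    image: 2D array of luminance values, indexed image[y][x].
    start: (x, y) reference point; normal: (nx, ny) unit normal direction.
    """
    x0, y0 = start
    nx, ny = normal
    prev = image[int(round(y0))][int(round(x0))]
    for t in range(1, search_len + 1):
        x = int(round(x0 + t * nx))
        y = int(round(y0 + t * ny))
        if not (0 <= y < len(image) and 0 <= x < len(image[0])):
            break  # left the image without meeting the moved contour
        cur = image[y][x]
        if abs(cur - prev) >= thresh:  # large luminance jump -> contour edge
            return (x, y)
        prev = cur
    return None  # no edge found along this normal
```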
- FIG. 9 is a diagram showing a reference point and a tentative corresponding point corresponding to the reference point.
- The outline in FIG. 9 is the position of the finger in the previous image frame (before movement); one set of marks indicates the reference points and the other indicates the tentative corresponding points.
- The normals used are those of the reference points on the outline of the finger in the previous image frame.
- In step S220, the coefficients a, b, c, d, e, and f of the above-described polynomials (1) and (2) are obtained from the coordinates of the reference points and the provisional corresponding points. Since the tentative corresponding points are not the true corresponding points, the coefficients a, b, c, d, e, and f cannot be uniquely determined.
- In step S230, using the polynomial coefficients a, b, c, d, e, and f obtained in step S220, it is calculated to which positions the 20 reference points move.
- FIG. 10 shows the temporary reference points to which the reference points shown in FIG. 9 transition.
- That is, the “reference point” has moved to the “first temporary reference point”. The “first temporary reference point” is closer to the “corresponding point”, but their coordinates do not yet match.
- In step S240, if the distance between the “coordinates of the n-th temporary reference point” and the “coordinates of the n-th temporary corresponding point” is equal to or greater than a certain threshold, the reference point is replaced (updated) with the n-th temporary reference point.
- In this way, the reference points are sequentially changed to the first provisional reference points, the second provisional reference points, and so on.
- As this is repeated, the approximate solutions of the coefficients a, b, c, d, e, and f converge, and the corresponding points are finally obtained (S220, S230, and S240).
- In step S250, as a result of the processing in steps S220 to S240, the “reference point” is found to finally transition to the “corresponding point”, and the corresponding point is determined (S250).
- In step S260, if there is a next image frame, the corresponding point determined in step S250 is used as a reference point in the next image frame (S260).
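The coefficient estimation at the heart of steps S220 to S240 can be sketched as a least-squares fit of the affine map x′ = ax + by + e, y′ = cx + dy + f to the point pairs; the patent gives no code, so the function names (`fit_affine`, `solve3`) and the normal-equations formulation here are our own illustration. Each output coordinate is fitted independently:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col and M[col][col] != 0:
                fac = M[r][col] / M[col][col]
                M[r] = [mr - fac * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_affine(src, dst):
    """Least-squares affine fit of dst ~ (a*x + b*y + e, c*x + d*y + f)
    to point pairs; returns (a, b, c, d, e, f).

    src: reference points (x, y); dst: their (tentative) corresponding
    points. With tentative correspondences the result is approximate,
    which is why the patent iterates until convergence.
    """
    # Accumulate the normal equations G p = h with design rows (x, y, 1).
    G = [[0.0] * 3 for _ in range(3)]
    hx = [0.0] * 3
    hy = [0.0] * 3
    for (x, y), (xp, yp) in zip(src, dst):
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                G[i][j] += row[i] * row[j]
            hx[i] += row[i] * xp
            hy[i] += row[i] * yp
    a, b, e = solve3(G, hx)
    c, d, f = solve3(G, hy)
    return a, b, c, d, e, f
```

Applying the fitted coefficients to the reference points yields the temporary reference points of step S230; when the distance to the tentative corresponding points falls below the threshold of step S240, the iteration stops.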
- In step S23 in FIG. 2, the motion information of the plane pattern is calculated from the relationship between the image coordinates of the reference points in the initial frame tracked in the processing of step S22 and those of their corresponding points in frames differing in time series.
- Assuming that the focal length of the lens of the imaging system is known, three degrees of freedom of translation, three degrees of freedom of rotation, the normal direction of the plane, and the distance to the foot of the perpendicular drawn from the origin of the coordinate system to the plane are obtained as motion information.
- Since the translation vector is obtained as a ratio to the distance, the distance must be given in advance by some method in order to calculate an accurate movement amount. If it is sufficient to calculate the relative motion, an appropriate value may be given as the distance.
- Next, an example of the calculation of motion information will be described.
- Suppose the point P (x, y, z) becomes the point P′ (x′, y′, z′) in the coordinate system after the movement.
- X, fx 5 / z '
- V fy, / z.
- Equation (6) is derived from the following equations (4) and (5).
- The translation vector T is expressed as follows.
- k is a constant.
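Under the projection relations above, the motion of a point can be sketched as follows (illustrative; `move` and `project` are our names, and the rotation matrix R, translation T, and scale factor k stand for the quantities in the text). Note that the translation enters only as the product k·T, reflecting that it is recovered as a ratio to the distance:

```python
def project(point, f=1.0):
    """Perspective projection onto the image plane: u = f*x/z, v = f*y/z."""
    x, y, z = point
    return (f * x / z, f * y / z)

def move(point, R, T, k=1.0):
    """Rigid motion P' = R P + k T. The translation is known only up to
    the scale factor k unless the absolute distance is given."""
    x, y, z = point
    xp = R[0][0] * x + R[0][1] * y + R[0][2] * z + k * T[0]
    yp = R[1][0] * x + R[1][1] * y + R[1][2] * z + k * T[1]
    zp = R[2][0] * x + R[2][1] * y + R[2][2] * z + k * T[2]
    return (xp, yp, zp)
```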
- In step S24 in FIG. 2, based on the motion information calculated in step S23, an object prepared in advance in the computer, such as a model or a pointer, is coordinate-transformed, and the object is operated in accordance with the motion of the subject given as the input image.
- As a result, the user can perform operations such as moving and rotating the 3D model inside the computer and moving the cursor according to the movement of his or her finger.
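The model operation of step S24 amounts to applying the recovered rotation and (scaled) translation to every vertex of the model held in the computer; a minimal sketch (function and parameter names are illustrative, not from the patent):

```python
def transform_model(vertices, R, t):
    """Apply motion information (rotation matrix R, translation t) recovered
    from the finger movement to every vertex of a 3D model."""
    out = []
    for x, y, z in vertices:
        out.append((
            R[0][0] * x + R[0][1] * y + R[0][2] * z + t[0],
            R[1][0] * x + R[1][1] * y + R[1][2] * z + t[1],
            R[2][0] * x + R[2][1] * y + R[2][2] * z + t[2],
        ))
    return out
```

Moving a cursor is the degenerate case: only the translation components are mapped to screen coordinates.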
- FIG. 3 is a flowchart showing the processing procedure of a system that, using the above-mentioned motion information, authenticates the user of a specific device (for example, a client device equipped with a user authentication device, such as a computer or an entrance/exit management system, or a moving object managed or owned by an individual or a corporation, such as an airplane, a ship, a motorcycle, or a bicycle) and permits or disallows use of that specific device.
- Since the processing of steps S20 to S23 in FIG. 3 is the same as that of steps S20 to S23 shown in FIG. 2, its description is omitted here; the processing from step S34 onward will be described with reference to the apparatus configuration diagram in FIG. 13 and the motion trajectory diagram in FIG.
- First, the user of the specific device needs to set and register in advance, as an initial registration pattern, authentication information such as motion information, using the authentication information input means 10 (the imaging means 1, A/D conversion means 2, and frame memory 3 shown in FIG. 1).
- The authentication information such as the motion information is stored in the authentication information storage means 11.
- In step S34 in FIG. 3, motion information is input using an object (for example, a finger) via the authentication information input means 10. Thereafter, the motion information calculated in step S23 is compared by the authentication information comparing means 12 with the initial registration pattern 100 (initial motion information) registered and set in the authentication information storage means 11, and it is determined whether the user is a legitimate registrant because motion information 102 equal to the initial registration pattern 100 (initial motion information) has been input, or is not a legitimate registrant because motion information 104 different from the initial registration pattern 100 has been input.
- If it is determined in step S35 that the user is a legitimate registered user, in step S36 the use permission control means 13 transmits a use permission signal to the specific device.
- If it is determined that use is being attempted by a person other than the authorized registered user, in step S37 the use permission control means 13 transmits a use rejection signal to the specific device.
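The comparison of steps S34 to S37 can be sketched as follows (illustrative only; the patent does not specify the representation of motion information or the tolerance, so an equal-length sequence of motion parameters compared element-wise within a threshold is our assumption, and the function names and "permit"/"reject" signals are hypothetical):

```python
def matches_registered(pattern, candidate, tol=0.1):
    """Compare input motion information against the initial registration
    pattern; both are equal-length sequences of motion parameters (e.g.
    per-frame translation/rotation values). True when every parameter
    agrees within the tolerance."""
    if len(pattern) != len(candidate):
        return False
    return all(abs(p - c) <= tol for p, c in zip(pattern, candidate))

def authenticate(pattern, candidate):
    """Steps S35-S37: permit use for a legitimate registrant whose input
    matches the initial registration pattern, reject otherwise."""
    return "permit" if matches_registered(pattern, candidate) else "reject"
```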
- In this way, the motion information is used as a kind of identification information.
- This frees the vehicle user from having to carry a key as in the past.
- FIG. 15 shows an example of a configuration in which predetermined data is exchanged between a server that manages security and the above-mentioned specific device via a network (the Internet, a mobile phone network, or wireless communication).
- First, the user of a specific device performs initial registration in advance using the authentication information input means 10 (the imaging means 1, A/D conversion means 2, and frame memory 3 shown in FIG. 1), and an initial registration pattern (initial motion information) is set and registered in the authentication information storage means 11. Thereafter, when using the specific device, the user inputs motion information using a finger or the like, and as a result the use permission control means 13 transmits a use permission/rejection signal to the specific device, as has been described with reference to FIG. 13 above.
- Taking a car as an example of the specific device, a system is constructed that can obtain the location information of the specific device if it is stolen and taken away; its operation will be described using FIGS. 15 and 16.
- In step S41 of FIG. 16, the user of the car inputs motion information from the authentication information input means 10 when using the car, and as a result of the comparison by the authentication information comparing means 12, it is determined whether the motion information differs from or is the same as the initial registration pattern (initial motion information) set and registered in the authentication information storage means 11.
- In step S42, if the input is determined to be different from the initial registration pattern (initial motion information) set and registered in the authentication information storage means 11, the use permission control means 13 transmits a use rejection signal to the vehicle in step S43. Then, in step S44, a signal is transmitted so that the vehicle doors are locked or the engine cannot be started.
- In step S45, the central control means 16 of the user authentication device transmits, via the communication means 15, the network, and the communication means 22 of the server, an abnormality signal indicating that the use rejection signal has been transmitted, together with the identification number (ID) of the vehicle handled by the user authentication device. At the same time, the central control means 16 sends a command to the transmitting/receiving means 14 to output a transmission radio wave.
- In step S46, upon receiving the abnormality signal and the identification number, the server performs abnormality response processing. Specifically, the receiving/transmitting means 21 on the server side receives, via satellite means, the radio wave transmitted from the transmitting/receiving means 14 on the user authentication device side, and detects the position information of the car handled by the user authentication device.
- When the specific device is an automobile, the following processing may be performed instead of the locking in step S44; however, the processing is not particularly limited to the above.
- In addition, the present invention can be applied to client devices equipped with a user authentication device, such as airplanes, ships, motorcycles, bicycles, and safe deposit boxes managed or owned by individuals or corporations.
- Further, the server may be able to transmit, via the network, a signal for canceling the locked state.
- As described above, imaging with a single camera enables the user to input three-dimensional coordinate transformation parameters from a monochrome grayscale image and to operate a model according to the input parameters.
- In addition, since the index finger and thumb are detected from the input image and the reference points for tracking are determined automatically, motion information can be input without requiring a special coordinate input device or equipment. Furthermore, by comparing the motion information input when a specific device is used with the initial registration pattern (initial motion information) registered in advance and detecting a differing pattern as an abnormal condition, unauthorized use of the specific device can be prevented.
- In the above description, the initial motion information is used as the initial registration pattern for authentication of a specific device, but well-known iris, fingerprint, or voiceprint information can also be used as other initial registration patterns.
- FIG. 17 is a flowchart showing a processing procedure of a moving image processing method according to another embodiment of the present invention, and corresponds to a modification of the processing of FIG. 7 in the above embodiment.
- The method of the present embodiment includes two steps: processing for the first image frame (S1) and processing for the subsequent image frames (S2).
- S1: processing for the first image frame
- S2: processing for the subsequent image frames
- In step S110, a finger as the subject is imaged in advance by the imaging means 1, and in step S120 contour shape information of the subject's finger is acquired (S120).
- Next, in selecting reference points for tracking the movement of the subject from the points on the contour line, it is determined whether the feature extraction processing of step S140 is necessary; if it is necessary, that processing is performed, and if not, the process skips to the following step S150 (S130).
- First, the case where it is determined in step S130 that the feature extraction processing of step S140 is not necessary will be described.
- In step S150, reference points are selected from the points on the contour in consideration of the normal directions (S150).
- That is, the reference points are selected under the condition that their normal directions are not biased toward a single direction and include two or more normal directions differing by about 90 degrees.
- 20 reference points are selected.
- In this embodiment, the angle between the normal directions of the selected reference points is set to about 90 degrees, but the angle is not limited to this; a range of 45 to 135 degrees is possible, and a range of 80 to 100 degrees is also possible.
- In step S140, feature points are detected from the points on the contour of the subject extracted in step S120, and the contour is divided into straight and curved segments at the feature points. The contour shape is then extracted as a shape feature based on a description expressed as a combination of these segments (S140). In the subsequent step, reference points for tracking the subject are determined based on the extracted shape features (S150).
- In step S2, the tracking process of the reference points obtained in step S1 is performed in the subsequent image frames.
- This includes reference point tracking (S210 to S270), motion information calculation (S280), and model operation and image display (S290).
- Here, the relationship between the reference point and the corresponding point can be expressed by an affine transformation.
- First, a plurality of points, for example at least four, located on the same plane in three-dimensional space are selected from the reference points in the initial frame selected in step S150.
- Then, it is searched how each of the above reference points transitions and moves to its corresponding point in a different frame, that is, after a predetermined time.
- At this time, a fixed relationship holds between the two plane patterns obtained by projecting points on a plane in three-dimensional space onto two different planes (for example, the imaging planes of the images), and this relationship may be used.
- In step S300, if there is a next image frame, the corresponding point determined in step S270 is used as a reference point in the next image frame (S310).
- The motion of the subject can be tracked from the coordinate transformation parameters from the “reference point” to the “corresponding point” obtained as described above. IIb. Motion information calculation (S280)
- The motion information of the subject is calculated from the relationship between the image coordinates of the reference points in the initial frame and those of the corresponding points in the frames differing in time series, tracked in the processing of step S270, under the assumption of finite motion of a plane in three-dimensional space (that is, assuming that the subject always remains within the field of view and is imaged in every frame, even as the frames differ in time series).
- The motion information obtained includes three degrees of freedom of translation, three degrees of freedom of rotation, the normal direction of the plane, and the distance from the origin of the coordinate system to the foot of the perpendicular dropped to the plane.
- Next, based on the motion information calculated in step S280, an object prepared in advance in the computer, such as a model or a pointer, is coordinate-transformed, and the object is operated in accordance with the movement of the subject given as the input image.
- As a result, the user can perform operations such as moving and rotating the 3D model inside the computer and moving the cursor according to the movement of the fingers.
- Further, applications such as operating a character in a game in the online shopping or amusement field using a three-dimensional model become possible.
Description
Claims
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP00927804A EP1139286A1 (en) | 1999-05-18 | 2000-05-18 | Dynamic image processing method and device and medium |
US09/719,374 US6993157B1 (en) | 1999-05-18 | 2000-05-18 | Dynamic image processing method and device and medium |
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP11/137410 | 1999-05-18 | ||
JP13741099 | 1999-05-18 | ||
JP13741199 | 1999-05-18 | ||
JP11/137411 | 1999-05-18 | ||
JP2000/71340 | 2000-03-14 | ||
JP2000071340A JP2001034764A (ja) | 1999-05-18 | 2000-03-14 | 動画像処理方法、及び装置並びに媒体 |
JP2000095825A JP2001034767A (ja) | 1999-05-18 | 2000-03-30 | 動画像処理方法、動画像処理装置、及び媒体並びにユーザ認証システム |
JP2000/95825 | 2000-03-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2000070558A1 true WO2000070558A1 (fr) | 2000-11-23 |
Family
ID=27472062
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2000/003179 WO2000070558A1 (fr) | 1999-05-18 | 2000-05-18 | Procede et dispositif de traitement d'image dynamique et support |
Country Status (3)
Country | Link |
---|---|
US (1) | US6993157B1 (ja) |
EP (1) | EP1139286A1 (ja) |
WO (1) | WO2000070558A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2378338A (en) * | 2001-07-31 | 2003-02-05 | Hewlett Packard Co | Automatic identification of features of interest within a video signal |
US10845186B2 (en) | 2016-03-09 | 2020-11-24 | Sony Corporation | Information processing device, information processing method, and information processing system |
Families Citing this family (105)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4172941B2 (ja) * | 2002-03-01 | 2008-10-29 | 日立ソフトウエアエンジニアリング株式会社 | 土地区画データ作成方法および装置 |
US20040140885A1 (en) * | 2003-01-17 | 2004-07-22 | Slicker James M. | Vehicle security system |
US7471846B2 (en) * | 2003-06-26 | 2008-12-30 | Fotonation Vision Limited | Perfecting the effect of flash within an image acquisition devices using face detection |
US9129381B2 (en) * | 2003-06-26 | 2015-09-08 | Fotonation Limited | Modification of post-viewing parameters for digital images using image region or feature information |
US7620218B2 (en) * | 2006-08-11 | 2009-11-17 | Fotonation Ireland Limited | Real-time face tracking with reference images |
US8155397B2 (en) * | 2007-09-26 | 2012-04-10 | DigitalOptics Corporation Europe Limited | Face tracking in a camera processor |
US7362368B2 (en) * | 2003-06-26 | 2008-04-22 | Fotonation Vision Limited | Perfecting the optics within a digital image acquisition device using face detection |
US9692964B2 (en) | 2003-06-26 | 2017-06-27 | Fotonation Limited | Modification of post-viewing parameters for digital images using image region or feature information |
US7440593B1 (en) * | 2003-06-26 | 2008-10-21 | Fotonation Vision Limited | Method of improving orientation and color balance of digital images using face detection information |
US7792970B2 (en) * | 2005-06-17 | 2010-09-07 | Fotonation Vision Limited | Method for establishing a paired connection between media devices |
US8682097B2 (en) * | 2006-02-14 | 2014-03-25 | DigitalOptics Corporation Europe Limited | Digital image enhancement with reference images |
US8330831B2 (en) * | 2003-08-05 | 2012-12-11 | DigitalOptics Corporation Europe Limited | Method of gathering visual meta data using a reference image |
US8363951B2 (en) | 2007-03-05 | 2013-01-29 | DigitalOptics Corporation Europe Limited | Face recognition training method and apparatus |
US8553949B2 (en) * | 2004-01-22 | 2013-10-08 | DigitalOptics Corporation Europe Limited | Classification and organization of consumer digital images using workflow, and face detection and recognition |
US8498452B2 (en) * | 2003-06-26 | 2013-07-30 | DigitalOptics Corporation Europe Limited | Digital image processing using face detection information |
US7844076B2 (en) * | 2003-06-26 | 2010-11-30 | Fotonation Vision Limited | Digital image processing using face detection and skin tone information |
US7269292B2 (en) * | 2003-06-26 | 2007-09-11 | Fotonation Vision Limited | Digital image adjustable compression and resolution using face detection information |
US8494286B2 (en) | 2008-02-05 | 2013-07-23 | DigitalOptics Corporation Europe Limited | Face detection in mid-shot digital images |
US8948468B2 (en) * | 2003-06-26 | 2015-02-03 | Fotonation Limited | Modification of viewing parameters for digital images using face detection information |
US7616233B2 (en) * | 2003-06-26 | 2009-11-10 | Fotonation Vision Limited | Perfecting of digital image capture parameters within acquisition devices using face detection |
US8896725B2 (en) | 2007-06-21 | 2014-11-25 | Fotonation Limited | Image capture device with contemporaneous reference image capture mechanism |
US7565030B2 (en) | 2003-06-26 | 2009-07-21 | Fotonation Vision Limited | Detecting orientation of digital images using face detection information |
US7792335B2 (en) | 2006-02-24 | 2010-09-07 | Fotonation Vision Limited | Method and apparatus for selective disqualification of digital images |
US8593542B2 (en) * | 2005-12-27 | 2013-11-26 | DigitalOptics Corporation Europe Limited | Foreground/background separation using reference images |
US7574016B2 (en) | 2003-06-26 | 2009-08-11 | Fotonation Vision Limited | Digital image processing using face detection information |
US8989453B2 (en) * | 2003-06-26 | 2015-03-24 | Fotonation Limited | Digital image processing using face detection information |
US7564994B1 (en) | 2004-01-22 | 2009-07-21 | Fotonation Vision Limited | Classification system for consumer digital images using automatic workflow and face detection and recognition |
US9826159B2 (en) | 2004-03-25 | 2017-11-21 | Clear Imaging Research, Llc | Method and apparatus for implementing a digital graduated filter for an imaging apparatus |
US10721405B2 (en) | 2004-03-25 | 2020-07-21 | Clear Imaging Research, Llc | Method and apparatus for implementing a digital graduated filter for an imaging apparatus |
WO2005093654A2 (en) | 2004-03-25 | 2005-10-06 | Fatih Ozluturk | Method and apparatus to correct digital image blur due to motion of subject or imaging device |
CA2576528A1 (en) * | 2004-08-09 | 2006-02-16 | Classifeye Ltd. | Non-contact optical means and method for 3d fingerprint recognition |
US8320641B2 (en) | 2004-10-28 | 2012-11-27 | DigitalOptics Corporation Europe Limited | Method and apparatus for red-eye detection using preview or other reference images |
KR100619758B1 (ko) * | 2004-11-10 | 2006-09-07 | 엘지전자 주식회사 | 로봇청소기의 움직임 추적장치 및 방법 |
US7281208B2 (en) * | 2004-11-18 | 2007-10-09 | Microsoft Corporation | Image stitching methods and systems |
US8488023B2 (en) * | 2009-05-20 | 2013-07-16 | DigitalOptics Corporation Europe Limited | Identifying facial expressions in acquired digital images |
US8503800B2 (en) * | 2007-03-05 | 2013-08-06 | DigitalOptics Corporation Europe Limited | Illumination detection using classifier chains |
US7315631B1 (en) * | 2006-08-11 | 2008-01-01 | Fotonation Vision Limited | Real-time face tracking in a digital image acquisition device |
US7715597B2 (en) | 2004-12-29 | 2010-05-11 | Fotonation Ireland Limited | Method and component for image recognition |
KR100724939B1 (ko) | 2005-06-20 | 2007-06-04 | 삼성전자주식회사 | 카메라부를 이용한 유저 인터페이스 구현 방법 및 이를위한 이동통신단말기 |
US20070126864A1 (en) * | 2005-12-05 | 2007-06-07 | Kiran Bhat | Synthesizing three-dimensional surround visual field |
JP2007243904A (ja) * | 2006-02-09 | 2007-09-20 | Seiko Epson Corp | 撮像装置及び画像処理装置 |
US7804983B2 (en) | 2006-02-24 | 2010-09-28 | Fotonation Vision Limited | Digital image acquisition control and correction method and apparatus |
US8433157B2 (en) * | 2006-05-04 | 2013-04-30 | Thomson Licensing | System and method for three-dimensional object reconstruction from two-dimensional images |
ATE497218T1 (de) * | 2006-06-12 | 2011-02-15 | Tessera Tech Ireland Ltd | Fortschritte bei der erweiterung der aam- techniken aus grauskalen- zu farbbildern |
US7515740B2 (en) * | 2006-08-02 | 2009-04-07 | Fotonation Vision Limited | Face recognition with combined PCA-based datasets |
US7403643B2 (en) * | 2006-08-11 | 2008-07-22 | Fotonation Vision Limited | Real-time face tracking in a digital image acquisition device |
US7916897B2 (en) | 2006-08-11 | 2011-03-29 | Tessera Technologies Ireland Limited | Face tracking for controlling imaging parameters |
US8055067B2 (en) * | 2007-01-18 | 2011-11-08 | DigitalOptics Corporation Europe Limited | Color segmentation |
EP2115662B1 (en) * | 2007-02-28 | 2010-06-23 | Fotonation Vision Limited | Separating directional lighting variability in statistical face modelling based on texture space decomposition |
KR101247147B1 (ko) | 2007-03-05 | 2013-03-29 | 디지털옵틱스 코포레이션 유럽 리미티드 | 디지털 영상 획득 장치에서의 얼굴 탐색 및 검출 |
WO2008109622A1 (en) | 2007-03-05 | 2008-09-12 | Fotonation Vision Limited | Face categorization and annotation of a mobile phone contact list |
US7916971B2 (en) * | 2007-05-24 | 2011-03-29 | Tessera Technologies Ireland Limited | Image processing method and apparatus |
FR2920172B1 (fr) * | 2007-08-21 | 2009-12-04 | Valeo Securite Habitacle | Procede de deverrouillage automatique d'un ouvrant de vehicule automobile pour systeme mains-libre et dispositif pour la mise en oeuvre du procede |
US8750578B2 (en) * | 2008-01-29 | 2014-06-10 | DigitalOptics Corporation Europe Limited | Detecting facial expressions in digital images |
US7855737B2 (en) * | 2008-03-26 | 2010-12-21 | Fotonation Ireland Limited | Method of making a digital camera image of a scene including the camera user |
CN106919911A (zh) * | 2008-07-30 | 2017-07-04 | 快图有限公司 | 使用脸部检测的自动脸部和皮肤修饰 |
US8516561B2 (en) * | 2008-09-29 | 2013-08-20 | At&T Intellectual Property I, L.P. | Methods and apparatus for determining user authorization from motion of a gesture-based control unit |
FR2936546B1 (fr) * | 2008-10-01 | 2017-03-10 | Valeo Securite Habitacle | Dispositif de deverrouillage automatique d'un ouvrant de vehicule automatique. |
FR2936545B1 (fr) * | 2008-10-01 | 2017-03-10 | Valeo Securite Habitacle | Dispositif de deverrouillage automatique d'un ouvrant de vehicule automatique. |
US8423916B2 (en) * | 2008-11-20 | 2013-04-16 | Canon Kabushiki Kaisha | Information processing apparatus, processing method thereof, and computer-readable storage medium |
WO2010063463A2 (en) * | 2008-12-05 | 2010-06-10 | Fotonation Ireland Limited | Face recognition using face tracker classifier data |
JP5552252B2 (ja) | 2009-04-02 | 2014-07-16 | 任天堂株式会社 | 情報処理システム、プログラム、および情報処理装置 |
JP5599156B2 (ja) * | 2009-04-02 | 2014-10-01 | 任天堂株式会社 | 情報処理システム、プログラム、および情報処理装置 |
JP2011008601A (ja) * | 2009-06-26 | 2011-01-13 | Sony Computer Entertainment Inc | 情報処理装置および情報処理方法 |
DE102010020896A1 (de) * | 2009-09-04 | 2011-06-01 | Volkswagen Ag | Verfahren zum Bedienen eines Elektrofahrzeugs |
US20110061100A1 (en) * | 2009-09-10 | 2011-03-10 | Nokia Corporation | Method and apparatus for controlling access |
US8379917B2 (en) * | 2009-10-02 | 2013-02-19 | DigitalOptics Corporation Europe Limited | Face recognition performance using additional image features |
CN101840508B (zh) * | 2010-04-26 | 2013-01-09 | 中国科学院计算技术研究所 | 自动识别人体链状结构中特征点的方法及其系统 |
EP2421252A1 (en) | 2010-08-17 | 2012-02-22 | LG Electronics | Display device and control method thereof |
WO2012108552A1 (en) * | 2011-02-08 | 2012-08-16 | Lg Electronics Inc. | Display device and control method thereof |
US8885878B2 (en) * | 2011-07-22 | 2014-11-11 | Microsoft Corporation | Interactive secret sharing |
EP2575106B1 (en) * | 2011-09-30 | 2014-03-19 | Brainlab AG | Method and device for displaying changes in medical image data |
US9001326B2 (en) | 2011-12-13 | 2015-04-07 | Welch Allyn, Inc. | Method and apparatus for observing subsurfaces of a target material |
US10691219B2 (en) | 2012-01-17 | 2020-06-23 | Ultrahaptics IP Two Limited | Systems and methods for machine control |
US11493998B2 (en) | 2012-01-17 | 2022-11-08 | Ultrahaptics IP Two Limited | Systems and methods for machine control |
US9501152B2 (en) | 2013-01-15 | 2016-11-22 | Leap Motion, Inc. | Free-space user interface and control using virtual constructs |
US8638989B2 (en) | 2012-01-17 | 2014-01-28 | Leap Motion, Inc. | Systems and methods for capturing motion in three-dimensional space |
US9679215B2 (en) | 2012-01-17 | 2017-06-13 | Leap Motion, Inc. | Systems and methods for machine control |
US9070019B2 (en) | 2012-01-17 | 2015-06-30 | Leap Motion, Inc. | Systems and methods for capturing motion in three-dimensional space |
US20150253428A1 (en) | 2013-03-15 | 2015-09-10 | Leap Motion, Inc. | Determining positional information for an object in space |
US8693731B2 (en) | 2012-01-17 | 2014-04-08 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging |
EP2896770B1 (en) * | 2012-09-12 | 2020-03-18 | Nissan Motor Co., Ltd | Control device and control method for operating an opening/closing element of a vehicle |
US9285893B2 (en) | 2012-11-08 | 2016-03-15 | Leap Motion, Inc. | Object detection and tracking with variable-field illumination devices |
US20140181710A1 (en) * | 2012-12-26 | 2014-06-26 | Harman International Industries, Incorporated | Proximity location system |
US10609285B2 (en) | 2013-01-07 | 2020-03-31 | Ultrahaptics IP Two Limited | Power consumption in motion-capture systems |
US9465461B2 (en) | 2013-01-08 | 2016-10-11 | Leap Motion, Inc. | Object detection and tracking with audio and optical signals |
US9696867B2 (en) | 2013-01-15 | 2017-07-04 | Leap Motion, Inc. | Dynamic user interactions for display control and identifying dominant gestures |
US9459697B2 (en) | 2013-01-15 | 2016-10-04 | Leap Motion, Inc. | Dynamic, free-space user interactions for machine control |
US10620709B2 (en) | 2013-04-05 | 2020-04-14 | Ultrahaptics IP Two Limited | Customized gesture interpretation |
US9916009B2 (en) | 2013-04-26 | 2018-03-13 | Leap Motion, Inc. | Non-tactile interface systems and methods |
US9747696B2 (en) | 2013-05-17 | 2017-08-29 | Leap Motion, Inc. | Systems and methods for providing normalized parameters of motions of objects in three-dimensional space |
US10281987B1 (en) | 2013-08-09 | 2019-05-07 | Leap Motion, Inc. | Systems and methods of free-space gestural interaction |
JP6221505B2 (ja) * | 2013-08-22 | 2017-11-01 | Fujitsu Ltd | Image processing apparatus, image processing method, and image processing program |
US9721383B1 (en) | 2013-08-29 | 2017-08-01 | Leap Motion, Inc. | Predictive information for free space gesture control and communication |
US9632572B2 (en) | 2013-10-03 | 2017-04-25 | Leap Motion, Inc. | Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation |
US9996638B1 (en) | 2013-10-31 | 2018-06-12 | Leap Motion, Inc. | Predictive information for free space gesture control and communication |
US9613262B2 (en) | 2014-01-15 | 2017-04-04 | Leap Motion, Inc. | Object detection and tracking for providing a virtual device experience |
US9598049B2 (en) * | 2014-07-09 | 2017-03-21 | Toyota Motor Engineering & Manufacturing North America, Inc. | Hands free access system for a vehicle closure |
DE202014103729U1 (de) | 2014-08-08 | 2014-09-09 | Leap Motion, Inc. | Augmented reality with motion capture |
US10043279B1 (en) * | 2015-12-07 | 2018-08-07 | Apple Inc. | Robust detection and classification of body parts in a depth map |
US10366278B2 (en) | 2016-09-20 | 2019-07-30 | Apple Inc. | Curvature-based face detector |
US10140442B1 (en) * | 2017-05-19 | 2018-11-27 | International Business Machines Corporation | Impression authentication |
CN108045344A (zh) * | 2017-11-23 | 2018-05-18 | 胡佳佳 | 一种汽车电子防盗系统 |
FR3078224B1 (fr) * | 2018-02-16 | 2020-02-07 | Renault S.A.S | Method for monitoring the environment of a parked motor vehicle, comprising an asynchronous camera |
US11875012B2 (en) | 2018-05-25 | 2024-01-16 | Ultrahaptics IP Two Limited | Throwable interface for augmented reality and virtual reality environments |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0613097A2 (en) * | 1993-02-26 | 1994-08-31 | Fujitsu Limited | A dynamic image processor |
JPH0757103A (ja) * | 1993-08-23 | 1995-03-03 | Toshiba Corp | Information processing device |
JPH10222241A (ja) * | 1997-02-04 | 1998-08-21 | Canon Inc | Electronic pen, personal authentication system, and personal authentication method |
JPH1123293A (ja) * | 1997-07-04 | 1999-01-29 | Yasuyuki Yamamoto | Stolen vehicle current-position transmission system |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0512443A (ja) | 1991-07-05 | 1993-01-22 | Nippon Telegr & Teleph Corp <Ntt> | Contour tracking method for moving objects |
JPH05197809A (ja) | 1992-01-20 | 1993-08-06 | Nippon Telegr & Teleph Corp <Ntt> | Object tracking processing device |
JP3244798B2 (ja) | 1992-09-08 | 2002-01-07 | Toshiba Corp | Moving image processing device |
KR100287211B1 (ko) * | 1994-08-30 | 2001-04-16 | 윤종용 | Bidirectional motion estimation method and apparatus |
KR0170932B1 (ko) * | 1994-12-29 | 1999-03-20 | 배순훈 | High-speed motion estimation device based on visual and geometric characteristics of images |
US7019614B2 (en) * | 1995-02-07 | 2006-03-28 | Harrow Products, Inc. | Door security system audit trail |
JP2822007B2 (ja) | 1995-03-24 | 1998-11-05 | ATR Communication Systems Research Laboratories | Method for extracting and tracking contours in images |
US6421453B1 (en) * | 1998-05-15 | 2002-07-16 | International Business Machines Corporation | Apparatus and methods for user recognition employing behavioral passwords |
DE69936620T2 (de) * | 1998-09-28 | 2008-05-21 | Matsushita Electric Industrial Co., Ltd., Kadoma | Method and device for segmenting hand gestures |
US6714201B1 (en) * | 1999-04-14 | 2004-03-30 | 3D Open Motion, Llc | Apparatuses, methods, computer programming, and propagated signals for modeling motion in computer applications |
US6654483B1 (en) * | 1999-12-22 | 2003-11-25 | Intel Corporation | Motion detection using normal optical flow |
2000
- 2000-05-18 WO PCT/JP2000/003179 patent/WO2000070558A1/ja not_active Application Discontinuation
- 2000-05-18 EP EP00927804A patent/EP1139286A1/en not_active Withdrawn
- 2000-05-18 US US09/719,374 patent/US6993157B1/en not_active Expired - Fee Related
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2378338A (en) * | 2001-07-31 | 2003-02-05 | Hewlett Packard Co | Automatic identification of features of interest within a video signal |
GB2380348A (en) * | 2001-07-31 | 2003-04-02 | Hewlett Packard Co | Determination of features of interest by analysing the movement of said features over a plurality of frames |
GB2380348B (en) * | 2001-07-31 | 2003-10-01 | Hewlett Packard Co | Automatic photography |
US7030909B2 (en) | 2001-07-31 | 2006-04-18 | Hewlett-Packard Development Company, L.P. | Automatic photography |
US10845186B2 (en) | 2016-03-09 | 2020-11-24 | Sony Corporation | Information processing device, information processing method, and information processing system |
Also Published As
Publication number | Publication date |
---|---|
US6993157B1 (en) | 2006-01-31 |
EP1139286A1 (en) | 2001-10-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2000070558A1 (fr) | Method and device for dynamic image processing, and medium | |
US9785823B2 (en) | Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices | |
KR100597152B1 (ko) | User authentication method and user authentication apparatus | |
CN106096582B (zh) | Distinguishing live faces from flat surfaces | |
US20210084285A1 (en) | Systems and methods of creating a three-dimensional virtual image | |
US20050147282A1 (en) | Image matching apparatus, image matching method, and image matching program | |
KR20160018318A (ko) | Fingerprint recognition method, apparatus, and system | |
CN107438173A (zh) | Video processing apparatus, video processing method, and storage medium | |
WO2013145280A1 (ja) | Biometric authentication device, biometric authentication method, and computer program for biometric authentication | |
CN104246793A (zh) | Three-dimensional face recognition for mobile devices | |
KR100438418B1 (ko) | Back-of-hand blood vessel pattern recognition system and method for personal identification | |
CN106030654A (zh) | Face authentication system | |
KR20120114564A (ko) | Apparatus and method for generating a representative fingerprint template | |
JP2001101429A (ja) | Face observation method, face observation device, and recording medium for face observation processing | |
JP2000251078A (ja) | Method and device for estimating a person's three-dimensional posture, and method and device for estimating the position of a person's elbow | |
CN112528957A (zh) | Method, system, and electronic device for detecting basic human motion information | |
JP2961264B1 (ja) | Three-dimensional object model generation method and computer-readable recording medium storing a three-dimensional object model generation program | |
JPH10275233A (ja) | Information processing system, pointing device, and information processing apparatus | |
KR20160126842A (ko) | Fingerprint authentication method and apparatus | |
CN211087230U (zh) | Eye-tracking unlocking system | |
JP2006277146A (ja) | Matching method and matching device | |
JP2001034764A (ja) | Moving image processing method, device, and medium | |
JP7277855B2 (ja) | Subject-specific feature point separation device, subject-specific feature point separation method, and computer program | |
US20210012513A1 (en) | Method and software system for modeling, tracking and identifying animate beings at rest and in motion and compensating for surface and subdermal changes | |
JPH07264458A (ja) | Moving object tracking device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1
Designated state(s): US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1
Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 09719374
Country of ref document: US |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2000927804
Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 2000927804
Country of ref document: EP |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 2000927804
Country of ref document: EP |