CN102981599A - Three-dimensional human-computer interface system and method thereof - Google Patents

Three-dimensional human-computer interface system and method thereof

Info

Publication number
CN102981599A
Authority
CN
China
Prior art keywords
action
axis
data
dimensional
feature object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201110259995XA
Other languages
Chinese (zh)
Inventor
陈国仁
温明华
林顺正
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SERAFIM TECHNOLOGIES Inc
Original Assignee
SERAFIM TECHNOLOGIES Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SERAFIM TECHNOLOGIES Inc filed Critical SERAFIM TECHNOLOGIES Inc
Priority to CN201110259995XA priority Critical patent/CN102981599A/en
Publication of CN102981599A publication Critical patent/CN102981599A/en
Pending legal-status Critical Current

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a three-dimensional human-computer interface system and a method for the system. A camera lens, together with a digital signal processing function, detects and calculates the relative position and motion data of a feature object on the two-dimensional X-Y plane. At the same time, a proximity sensor, together with an infrared light source and a digital signal processing function, detects and calculates the relative depth value or motion of the feature object along the Z axis, the further dimension relative to the X-Y plane, so that a corresponding one-dimensional Z-axis coordinate and motion can be produced synchronously on a display screen. The digital signal processing function of the system software then couples the two-dimensional X- and Y-axis coordinate and motion data with the one-dimensional Z-axis coordinate and motion data to calculate and output the relative position and motion of the feature object in the three-dimensional X-Y-Z space, so that corresponding X, Y, and Z coordinates and motions are produced synchronously on the display screen, achieving the effect of a three-dimensional human-computer interface system.

Description

Three-dimensional human-computer interface system and method thereof
Technical field
The present invention relates to a three-dimensional human-computer interface system and method thereof (3D human interface system and method thereof), and in particular to a system that uses a camera lens, and a proximity sensor used together with an infrared light source, each with a digital signal processing (DSP) function, to detect and calculate respectively the relative position and motion data of a feature object on the X-Y two-dimensional plane and its relative depth value and motion data along the corresponding third axis, the Z axis. System software then couples the X, Y coordinate or motion data with the Z-axis coordinate or motion data to calculate and output the relative position and motion of the feature object in the three-dimensional X-Y-Z space, so that corresponding X, Y, and Z coordinates and motions are produced synchronously on a display screen, achieving the effect of a three-dimensional human-computer interface system.
Background art
User interface systems and methods of several different types already exist; they can broadly be divided into touch and non-contact (remote) user interfaces. Touch user interface systems include resistive, capacitive, surface acoustic wave (SAW), infrared (IR), and optical imaging touch-control systems and methods, in which a touch object such as a finger or stylus touches a touch display screen directly to control the display's functions, for example clicking, switching images, zooming in or out, or playing touch-controlled games, so as to replace ordinary buttons or joystick controls. A non-contact user interface system instead uses a feature object, such as a gesture or a part of the human body, whose changes in relative position and motion in the three-dimensional X, Y, Z space control the display's functions remotely; that is, the feature object never touches the display screen directly.
In the field of non-contact user interface systems and methods there is already a substantial body of prior art, including PCT International Publication WO 03/071410; United States patents US 7,348,963, US 7,433,024, and US 6,560,019; United States patent publications US 2008/0240502, US 2008/0106746, US 2009/0185274, US 2009/0096783, US 2009/0034649, US 2009/0183125, and US 2010/0020078; and Taiwan patent publications 200847061, 201033938, 201003564, 201010424, and 201112161.
The main technique of these prior-art non-contact user interface systems and methods is to first build a three-dimensional map (3D mapping) to serve as a comparison database. When a feature object appears in the field of view and moves, comparison against this database reveals the relative position and motion of the feature object in three dimensions, so that corresponding X, Y, Z coordinates and motions can then be produced synchronously on an associated display screen, achieving the effect of a three-dimensional user gesture interface system. Parts of these prior-art documents are excerpted below.
For example, US patent 7,433,024 discloses a method for mapping comprising the following steps: projecting a primary speckle pattern from an illumination assembly into a target region; capturing a plurality of reference images of the primary speckle pattern at different, respective distances from the illumination assembly in the target region; capturing a test image of the primary speckle pattern that is projected onto a surface of an object in the target region; comparing the test image to the reference images so as to identify a reference image in which the primary speckle pattern most closely matches the primary speckle pattern in the test image; and estimating a location of the object based on a distance of the identified reference image from the illumination assembly. US 7,433,024 further discloses an apparatus for mapping comprising: an illumination assembly, which is configured to project a primary speckle pattern into a target region; an imaging assembly, which is configured to capture a plurality of reference images of the primary speckle pattern at different, respective distances from the illumination assembly in the target region, and to capture a test image of the primary speckle pattern that is projected onto a surface of an object in the target region; and an image processor, which is coupled to compare the test image to the reference images so as to identify a reference image in which the primary speckle pattern most closely matches the primary speckle pattern in the test image, and to estimate a location of the object based on a distance of the identified reference image from the illumination assembly.
As another example, United States patent publication US 2009/0183125 discloses a three-dimensional user interface method and apparatus. One step of the method is capturing a sequence of depth maps over time of at least a part of a body of a human subject, and a corresponding element of the apparatus is a sensing device, which is configured to capture a sequence of depth maps over time of at least a part of a body of a human subject.
As can be seen from the above, first building a three-dimensional map (3D mapping) to serve as a comparison database is an essential technical means of these prior-art three-dimensional user interface methods. However, for a non-contact user interface method and the system equipment needed to carry it out, building a three-dimensional map as a comparison database at the outset requires more complicated processing and wastes set-up cost, which is unfavorable to mass production and widespread adoption of the technology. For example, when these prior arts capture a plurality of reference images, the imaging assembly must use at least two camera lenses, so that the image data of the primary speckle pattern on a two-dimensional plane (defined here as the X-Y plane) and the image data at different depths along another dimension of the target region (defined here as the Z axis) can be detected separately, and the two sets of image data can then be coupled to build the plurality of reference images; these reference images are regarded as the three-dimensional map (3D mapping), i.e. the comparison database. Therefore these prior arts must use, in terms of equipment, at least two camera lenses and a processor with a digital signal processing (DSP) function, and in terms of method must use the two lenses to capture respectively a plurality of two-dimensional reference images and the images at the corresponding one-dimensional different depths in order to assemble a three-dimensional map (3D mapping) for comparison, which is comparatively complicated and wastes set-up cost.
Consequently, in the technical field of non-contact user interfaces, there is a genuine need to develop a three-dimensional user gesture interface system and method that, in terms of equipment, need not use two camera lenses and need not first build a three-dimensional map (3D mapping) as a comparison database, and that offers simplified design, reduced cost, and adequate efficiency.
Summary of the invention
The primary object of the present invention is to provide a three-dimensional human-computer interface system and method thereof (3D human interface system and method thereof) which uses a camera lens with a digital signal processing (DSP) function to detect and calculate the relative position or motion data of a feature object on the X-Y two-dimensional plane, so that corresponding X and Y two-dimensional coordinates and motions can be produced synchronously on a display screen; which at the same time uses a proximity sensor together with an infrared (IR) light source, again with a digital signal processing (DSP) function, to detect and calculate the relative depth value or motion data of the feature object along the Z axis, the further dimension relative to the X-Y plane, so that a corresponding one-dimensional Z-axis coordinate and motion can be produced synchronously on the display screen; and which then uses the digital signal processing (DSP) function of the system software to couple the X, Y two-dimensional coordinate or motion data with the Z-axis one-dimensional coordinate or motion data, so as to calculate and output the relative position and motion data of the feature object in the three-dimensional X, Y, Z space and thereby produce corresponding X, Y, Z coordinates and motions synchronously on the display screen, achieving the effect of a three-dimensional human-computer interface system.
A further object of the present invention is to provide a three-dimensional human-computer interface system and method thereof that uses a proximity sensor together with an infrared (IR) light source to detect and calculate the relative depth value or motion data of the feature object along the Z axis, the further dimension relative to the X-Y plane. This replaces, and avoids, the prior-art requirement of using at least one additional camera lens to capture a plurality of one-dimensional (Z-axis) images and then, by image processing, first building a comparison 3D-mapping database from those images, together with the trouble this entails, thereby improving efficiency and saving cost.
To achieve the above objects, the present invention uses a camera lens, for example a VGA lens, with a digital signal processing (DSP) function to detect and calculate the relative position and motion data of a feature object, such as a hand or a part of the human body, on a two-dimensional plane, defined here as the X-Y plane; that is, X, Y coordinate data, or motion data such as moving up, moving down, turning left, turning right, rotating, or zooming, so that corresponding X, Y coordinates and motions can be produced synchronously on a display screen. At the same time the invention uses a proximity sensor and an infrared (IR) light source, again with a digital signal processing (DSP) function, to detect and calculate the relative depth value data of the feature object along the other dimension relative to the X-Y plane, defined here as the Z axis; that is, Z coordinate data, or gesture-motion data such as moving forward toward or backward away from the proximity sensor, so that a corresponding Z-axis coordinate and motion can be produced synchronously on the display screen. The digital signal processing (DSP) function of the system software then combines the X, Y coordinate or motion data with the Z coordinate or motion data to calculate the relative position and motion data of the feature object in the three-dimensional space, defined here as the X-Y-Z space; that is, X-Y-Z coordinate or motion data, which can be output so that a corresponding X-Y-Z coordinate and motion are produced synchronously on the display screen. A remote, synchronized interactive relationship is thus formed, achieving the effect of a three-dimensional user gesture interface system, i.e. a three-dimensional human-computer interface system.
With the three-dimensional human-computer interface system and method of the present invention, the trouble of the related prior art can be avoided, in which at least two camera lenses are generally used to capture the speckle pattern of at least one reference surface so as to first build, for a given field of view, a three-dimensional map (3D mapping) database for comparison; only after a feature object moving in that field of view changes a speckle pattern and a three-dimensional map is formed can the prior art, by comparison with that database, produce the corresponding X-Y-Z coordinates and motions on the display screen. Compared with the prior art, the present invention therefore has the advantages of a simpler method, a simpler design, and lower cost.
Description of drawings
Fig. 1 is a schematic perspective view of an embodiment of the three-dimensional human-computer interface system of the present invention;
Fig. 2 is an explanatory flow diagram of the method of the three-dimensional human-computer interface system of the present invention.
Description of reference numerals: 10-three-dimensional human-computer interface system; 11-structure; 20-camera lens; 30-proximity sensor; 40-infrared light source; 50-display; 51-display screen; 52-position; 52a-fixed position; 60-user; 61-feature object (hand); 61a-corresponding image; 70-processor; 81, 82, 83, 84-steps.
Embodiment
To make the present invention clearer and more precise, the structure, technical features, and design method of the present invention are described in detail below with reference to the drawings.
Referring to Fig. 1, which is a schematic perspective view of the system architecture and mode of operation of an embodiment of the three-dimensional human-computer interface system (3D human interface system), also called a three-dimensional user gesture interface system (3D user gesture interface system), of the present invention: the three-dimensional human-computer interface system 10 comprises at least one camera lens 20, at least one proximity sensor 30, at least one infrared (IR) light source 40, and at least one display (monitor) 50. At least one user 60 can operate in front of the three-dimensional human-computer interface system 10, where "in front" is defined as the working direction and range of the camera lens 20, proximity sensor 30, infrared light source 40, and display 50. The display (monitor) 50 may be built with a display screen (also called a screen or panel) 51 of any of various image display types, without limitation, such as a cathode ray tube (CRT) screen, a backlit liquid crystal display (LCD) screen, or a backlit light-emitting diode (LED) screen.
The camera lens 20, for example a VGA lens, uses a digital signal processing (DSP) function to detect and calculate the relative position and motion data of a feature object 61, such as a hand 61 of the user 60 or another part of the human body, on a two-dimensional plane, the X-Y plane shown in Fig. 1; that is, the X, Y coordinate data of the feature object 61, or its motion data such as moving up, moving down, turning left, turning right, rotating, or zooming, so that corresponding X, Y coordinates and motions can be produced synchronously on the display screen 51 of the display 50. For example, on the display screen 51 shown in Fig. 1, a corresponding image 61a of the feature object 61 is produced synchronously and can be moved to a certain fixed position 52a among the many marked positions 52 on the display screen 51.
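The description does not specify which image-processing algorithm the DSP function applies to the camera frames. Purely as an illustrative sketch, and assuming a grayscale frame in which the feature object appears as the brightest region, the X-Y position could be estimated by a simple threshold-and-centroid step such as the following Python/NumPy snippet (the threshold value and the function name are assumptions, not part of the patent):

    import numpy as np

    def estimate_xy(frame: np.ndarray, threshold: int = 200):
        """Return the (x, y) centroid of pixels brighter than `threshold`,
        or None when no feature object is visible.  `frame` is a 2-D
        grayscale image (rows = Y, columns = X)."""
        ys, xs = np.nonzero(frame >= threshold)   # candidate feature-object pixels
        if xs.size == 0:
            return None
        return float(xs.mean()), float(ys.mean())  # centroid in pixel coordinates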
The proximity sensor 30 is used together with the infrared (IR) light source 40. The infrared light source 40 projects infrared light of a certain intensity toward the user 60, in particular toward the position of the feature object 61 or a region containing it, and the proximity sensor 30 senses the reflected infrared light, in particular the infrared light reflected after being projected onto the feature object 61. When the feature object 61 moves relatively in the Z-axis direction, the intensity of the infrared light reflected from the feature object 61 changes to a corresponding degree, becoming relatively stronger or weaker, and the proximity sensor 30 senses this change in infrared intensity. For example, when the movement of the feature object 61 in the Z-axis direction is relatively forward, toward the infrared light source 40, the infrared intensity sensed by the proximity sensor 30 becomes relatively stronger; when the movement of the feature object 61 in the Z-axis direction is relatively backward, away from the infrared light source 40, the infrared intensity sensed by the proximity sensor 30 becomes relatively weaker. By sensing whether the infrared intensity relatively strengthens or weakens, the proximity sensor 30 can therefore judge whether the movement of the feature object 61 in the Z-axis direction is relatively forward, approaching the infrared light source 40, or relatively backward, leaving it. In this way the proximity sensor 30, used with the infrared (IR) light source 40 and with a digital signal processing (DSP) function, can detect and calculate the relative depth value data of the feature object 61 along the Z axis, the other dimension relative to the two-dimensional X-Y plane; that is, the Z coordinate data, or gesture-motion data such as moving forward toward or backward away from the proximity sensor 30 (i.e. forward and backward movement of the feature object 61), so that a corresponding Z-axis coordinate and motion can be produced synchronously on the display screen 51, for example a forward push action (similar to a click) or a backward withdrawal action of the corresponding image 61a of the feature object 61 at the fixed position 52a on the display screen 51 shown in Fig. 1.
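The patent states only the qualitative rule that a stronger reflection means the feature object is closer and a weaker reflection means it is farther away. The following sketch, with an assumed hysteresis threshold and an assumed inverse-square falloff for the relative depth value, is one plausible way such a rule could be coded; it is not taken from the patent itself:

    def classify_z_motion(intensity_prev: float, intensity_now: float,
                          hysteresis: float = 0.05) -> str:
        """Classify Z-axis motion from the change in reflected IR intensity:
        rising intensity -> moving forward (closer), falling -> moving backward."""
        delta = intensity_now - intensity_prev
        if delta > hysteresis:
            return "approach"      # forward push, e.g. a click-like gesture
        if delta < -hysteresis:
            return "recede"        # backward withdrawal
        return "steady"

    def relative_depth(intensity: float, calib: float = 1.0) -> float:
        """Rough relative depth assuming an inverse-square falloff of the
        reflected intensity (d proportional to 1/sqrt(I)); `calib` absorbs
        source power and surface reflectivity and needs per-setup tuning."""
        return calib / max(intensity, 1e-6) ** 0.5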
The three-dimensional human-computer interface system 10 of the present invention then uses the digital signal processing (DSP) function of the system software to couple the X, Y coordinate or motion data obtained from the camera lens 20 with the Z coordinate or motion data obtained from the proximity sensor 30, so as to calculate the relative position and motion data of the feature object 61 in the three-dimensional X-Y-Z space, that is, the X-Y-Z coordinate or motion data, and to output these data so that the X-Y-Z coordinate and motion of the corresponding image 61a of the feature object 61 at the fixed position 52a are produced synchronously on the display screen 51. A remote, synchronized interactive relationship is thus formed between the feature object 61 and the display screen 51, achieving the effect of a three-dimensional human interface, also called a user gesture interface, system.
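As a minimal sketch of the coupling step described above, assuming the hypothetical `estimate_xy`, `relative_depth`, and `classify_z_motion` outputs from the earlier snippets, the X-Y data and the Z data could be merged into a single three-dimensional pose record as follows:

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class Pose3D:
        x: float        # X from the camera lens (pixel/display units)
        y: float        # Y from the camera lens
        z: float        # relative depth from the proximity sensor
        gesture: str    # "approach", "recede", or "steady"

    def couple_xy_with_z(xy: Optional[Tuple[float, float]],
                         z: float, gesture: str) -> Optional[Pose3D]:
        """Couple the camera's X-Y data with the proximity sensor's Z data
        into one three-dimensional coordinate plus motion label."""
        if xy is None:          # no feature object detected in the frame
            return None
        x, y = xy
        return Pose3D(x=x, y=y, z=z, gesture=gesture)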
Referring again to Fig. 1, in the present embodiment the camera lens 20, the proximity sensor 30, and the infrared (IR) light source 40 are arranged on the same structure 11, but the structural arrangement among the three is not limited: the camera lens 20, proximity sensor 30, and infrared light source 40 may be arranged on the same structure 11 as shown in Fig. 1, or arranged separately on different structures, or any two of the three may be combined on the same structure, so that many structural designs are possible and the most convenient one for the manufacturer or user can be chosen. Similarly, although in the present embodiment the camera lens 20, proximity sensor 30, and infrared light source 40 are arranged in sequence in a vertical line on the structure 11, their positions and arrangement are not limited: they need not be in that sequence, and need not be arranged in a vertical or horizontal line. However, to improve the efficiency of the three-dimensional human-computer interface system 10, for example so that in the subsequent steps the X, Y two-dimensional coordinate or motion data can be coupled efficiently with the Z-axis one-dimensional coordinate or motion data, it is preferable, though not required, that the working directions and ranges of the camera lens 20, the proximity sensor 30, and the infrared light source 40 overlap; that is, the present invention does not require the working direction and range of the camera lens 20 and of the proximity sensor 30 to overlap as fully as possible. In contrast, when the related prior art uses one or two camera lenses to capture the speckle pattern of at least one reference surface in order to first build a three-dimensional map (3D mapping) database for comparison, the working directions and ranges, i.e. the fields of view, of the lens that captures the reference-surface speckle pattern and of the light source device are required to overlap; otherwise a processor must additionally be used to calculate and compensate for the difference between the two fields of view, as disclosed for example in United States patent publication US 2009/0096783, which uses a processor to calculate and compensate when a field-of-view difference arises, as shown in its Figs. 2 and 3. Compared with the related prior art, the present invention therefore need not use a processor specifically for this calculation and compensation, which can be regarded as one advantage of the present invention.
In addition, a processor 70 as shown in Fig. 1 may be provided to hold the system software and its digital signal processing (DSP) function, but this is not a limitation; the system software or its digital signal processing (DSP) function may also be built into the structure 11 or into the display screen 51.
Referring now to Fig. 2, which is a schematic flow diagram of an embodiment of the operating method of the three-dimensional human-computer interface system (3D human interface system) of the present invention, the operating method of the three-dimensional human-computer interface system 10 comprises the following steps:
Step 81: use a camera lens 20 to capture an image containing a feature object 61 and, by means of a digital signal processing function, detect and calculate the relative position or motion data of the feature object 61 on the X-Y two-dimensional plane;
Step 82: use a proximity sensor 30 together with an infrared light source 40 and, by means of a digital signal processing function, detect and calculate the relative depth value or motion data of the feature object 61 along the Z axis, the other dimension relative to the X-Y plane;
Step 83: use the digital signal processing function of a system software to couple the X, Y two-dimensional coordinate or motion data with the Z-axis one-dimensional coordinate or motion data, so as to calculate the relative position and motion data of the feature object 61 in the three-dimensional X, Y, Z space and output those data to a display screen; and
Step 84: control the application on the display screen 51 so that it corresponds synchronously to the X, Y, Z coordinates and motions; a combined sketch of steps 81 to 84 follows below.
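Purely for illustration, the four steps can be strung together into one acquisition loop. The sketch below reuses the hypothetical helpers from the earlier snippets; `read_camera_frame`, `read_ir_intensity`, and `update_display` are assumed stand-ins for the camera, proximity-sensor, and display drivers, which the patent does not define:

    def interface_loop(read_camera_frame, read_ir_intensity, update_display):
        """Steps 81-84 as one loop: camera -> X-Y data, proximity sensor ->
        Z data, couple the two, then drive the display synchronously."""
        prev_intensity = read_ir_intensity()
        while True:
            frame = read_camera_frame()                        # step 81
            xy = estimate_xy(frame)
            intensity = read_ir_intensity()                    # step 82
            z = relative_depth(intensity)
            gesture = classify_z_motion(prev_intensity, intensity)
            prev_intensity = intensity
            pose = couple_xy_with_z(xy, z, gesture)            # step 83
            if pose is not None:
                update_display(pose)                           # step 84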
For the operating method of the three-dimensional human-computer interface system 10 of the present invention, a processor 70 as shown in Fig. 1 may additionally be provided to coordinate the digital signal processing (DSP) functions, i.e. the calculation work, in the above steps 81, 82, 83, and 84.
As for the functional devices comprised in the three-dimensional human-computer interface system 10 of the present invention, namely the camera lens 20, the proximity sensor 30, the infrared light source 40, and the display 50, and the digital signal processing (DSP) functions they use: each of these functional devices 20, 30, 40, 50 and the digital signal processing (DSP) functions can, in itself, be implemented with prior art in this field to achieve its operating function in the present invention. The individual devices 20, 30, 40, 50 and the digital signal processing (DSP) functions taken in isolation are not the technical features of the three-dimensional human-computer interface system 10 and method of the present invention, i.e. they are not individually the emphasis of this patent, so the operating function of each individual device is not described further in detail.
With the three-dimensional human-computer interface system 10 and method of the present invention, the trouble of the related prior art can be avoided, in which at least two camera lenses are generally used to capture the speckle pattern of at least one reference surface so as to first build, for a given field of view, a three-dimensional map (3D mapping) database for comparison; only after a feature object moving in that field of view changes a speckle pattern and a three-dimensional map is formed can the prior art, by comparison with that database, produce the corresponding X-Y-Z coordinates and motions on the display screen. Compared with the prior art, the present invention therefore has the advantages of a simpler method, a simpler design, and lower cost.
The foregoing shows only preferred embodiments of the present invention, which are illustrative of and not restrictive of the present invention. A person of ordinary skill in this technical field will understand that many changes, modifications, and even equivalent variations can be made within the spirit and scope defined by the claims of the present invention, all of which fall within the scope of protection of the present invention.

Claims (8)

1. the method for a three-dimensional man-machine interface is characterized in that, comprises:
Utilize a camera lens and by the digital signal processing capacity, in order to detecting and calculate a feature object on an X, Y-axis two dimensional surface relative position or the data of action, namely form the data of X, Y-axis two-dimensional coordinate or action;
Utilize one closely to connect inductor to cooperate an infrared light sources to use, and by the digital signal processing capacity, in order to detecting and calculate this feature object with respect to another dimension relative depth value of Z axis of this X, Y-axis two dimensional surface or the data of action, namely form the data of Z axis one dimension coordinate or action;
Utilize the digital element signal processing capacity of a system software, so that the data of this X, Y-axis two-dimensional coordinate or action further are coupled with the data of this Z axis one dimension coordinate or action, in order to calculate this feature object in the data of this X, Y, the three-dimensional relative position of Z axis and action, namely form the data of X, Y, Z axis three-dimensional coordinate or action, and export the display screen of data to a display of X, Y, Z axis three-dimensional coordinate or the action of this feature object; And
The application of controlling this display screen is with synchronously corresponding to this X, Y, Z coordinate and action.
2. the method for three-dimensional man-machine interface according to claim 1 is characterized in that, the hand that this feature object is the user or a position of user's human body.
3. the method for three-dimensional man-machine interface according to claim 1, it is characterized in that, X, Y two-dimensional coordinate data or this feature object that the data of this X, Y-axis two-dimensional coordinate or action comprise this feature object up, down, turn left, turn right, rotation, zoom action data, for can on this display screen, producing corresponding X, Y coordinate and action synchronously.
4. the method for three-dimensional man-machine interface according to claim 1, it is characterized in that, this closely connects inductor is that this infrared light sources of collocation uses, this infrared light sources is towards this feature object, comprise the position of this feature object or contain the scope zone of this feature object, the infrared ray of projection some strength, this closely connects inductor in order to respond to the variation of this ultrared catoptrical light intensity, when this feature object produces relatively moving of Z-direction, if this closely connect inductor sense ruddiness line reflection light light intensity be changed to relative enhancing the time, namely judge this feature object in the movement of Z-direction for relatively forward near this infrared light sources, if this closely connects being changed to when relatively weakening of light intensity that inductor is sensed ruddiness line reflection light, namely judge this feature object in the movement of Z-direction for leaving relatively backward this infrared light sources, namely form the data of Z axis one dimension coordinate or action.
5. A three-dimensional human-computer interface system, characterized in that it comprises:
a camera lens, which uses a digital signal processing function to detect the relative position or motion of a feature object on an X, Y-axis two-dimensional plane, thereby forming X, Y-axis two-dimensional coordinate or motion data;
a proximity sensor and at least one associated infrared light source, which detect the relative depth value or motion of the feature object along the Z axis, the other dimension relative to the X-Y two-dimensional plane, thereby forming Z-axis one-dimensional coordinate or motion data;
a processor, which couples the X, Y two-dimensional coordinate or motion data with the Z-axis one-dimensional coordinate or motion data so as to calculate the relative position and motion data of the feature object in the three-dimensional X, Y, Z-axis space, thereby forming X, Y, Z-axis three-dimensional coordinate or motion data, and outputs the X, Y, Z-axis three-dimensional coordinate or motion data of the feature object to a display screen; and
a display screen, which accepts the control of the processor so as to produce on the display screen, synchronously, the application corresponding to the X, Y, Z-axis three-dimensional coordinates and motions, achieving the effect of a three-dimensional human interface.
6. The three-dimensional human-computer interface system according to claim 5, characterized in that the feature object is a hand of a user or a part of the user's body.
7. The three-dimensional human-computer interface system according to claim 5, characterized in that the X, Y-axis two-dimensional coordinate or motion data comprise the X, Y two-dimensional coordinate data of the feature object, or motion data of the feature object such as moving up, moving down, turning left, turning right, rotating, or zooming, so that corresponding X, Y coordinates and motions can be produced synchronously on the display screen.
8. The three-dimensional human-computer interface system according to claim 5, characterized in that the proximity sensor is used together with the infrared light source; the infrared light source projects infrared light of a certain intensity toward the feature object, including the position of the feature object or a region containing the feature object, and the proximity sensor senses the change in the intensity of the reflected infrared light; when the feature object moves relatively in the Z-axis direction, if the change in the reflected infrared intensity sensed by the proximity sensor is a relative strengthening, the movement of the feature object in the Z-axis direction is judged to be relatively forward, approaching the infrared light source, and if the change in the reflected infrared intensity sensed by the proximity sensor is a relative weakening, the movement of the feature object in the Z-axis direction is judged to be relatively backward, leaving the infrared light source, thereby forming the Z-axis one-dimensional coordinate or motion data.
CN201110259995XA 2011-09-05 2011-09-05 Three-dimensional human-computer interface system and method thereof Pending CN102981599A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110259995XA CN102981599A (en) 2011-09-05 2011-09-05 Three-dimensional human-computer interface system and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110259995XA CN102981599A (en) 2011-09-05 2011-09-05 Three-dimensional human-computer interface system and method thereof

Publications (1)

Publication Number Publication Date
CN102981599A true CN102981599A (en) 2013-03-20

Family

ID=47855730

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110259995XA Pending CN102981599A (en) 2011-09-05 2011-09-05 Three-dimensional human-computer interface system and method thereof

Country Status (1)

Country Link
CN (1) CN102981599A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030113018A1 (en) * 2001-07-18 2003-06-19 Nefian Ara Victor Dynamic gesture recognition from stereo sequences
US20040151366A1 (en) * 2003-02-04 2004-08-05 Nefian Ara V. Gesture detection from digital video images
CN101281422A (en) * 2007-04-02 2008-10-08 原相科技股份有限公司 Apparatus and method for generating three-dimensional information based on object as well as using interactive system
CN101655739A (en) * 2008-08-22 2010-02-24 原创奈米科技股份有限公司 Device for three-dimensional virtual input and simulation
TW201120684A (en) * 2009-10-07 2011-06-16 Microsoft Corp Human tracking system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105425937A (en) * 2014-09-03 2016-03-23 液态三维系统有限公司 Gesture control system capable of interacting with 3D (three-dimensional) image
CN112435525A (en) * 2020-11-20 2021-03-02 郑州捷安高科股份有限公司 Simulated fire extinguishing method, device, computer equipment and medium

Similar Documents

Publication Publication Date Title
CN103477311B (en) Multiple point touching interactive device based on camera, system and method
CN102799318B (en) A kind of man-machine interaction method based on binocular stereo vision and system
CN101231450B (en) Multipoint and object touch panel arrangement as well as multipoint touch orientation method
CN101419513B (en) Remote virtual touch system of infrared laser pen
TWI476364B (en) Detecting method and apparatus
US8743089B2 (en) Information processing apparatus and control method thereof
JP5308359B2 (en) Optical touch control system and method
CN101644976A (en) Surface multipoint touching device and positioning method thereof
CN102508578B (en) Projection positioning device and method as well as interaction system and method
JP2006031275A (en) Coordinate inputting device and its control method
US10379675B2 (en) Interactive projection apparatus and touch position determining method thereof
CN104102394A (en) Optical multi-point touch control equipment and method
CN201489503U (en) Surface multipoint touch device
CN100478860C (en) Electronic plane display positioning system and positioning method
CN104142739A (en) Laser point tracking system and method based on optical mouse sensing array
CN203386146U (en) Infrared video positioning-based man-machine interactive device
KR20150112198A (en) multi-user recognition multi-touch interface apparatus and method using depth-camera
CN201369027Y (en) Remote finger virtual touch system with infrared laser pen
CN104571726B (en) Optical touch system, touch detection method and computer program product
CN102981599A (en) Three-dimensional human-computer interface system and method thereof
CN105718121B (en) Optical touch device
CN102799344B (en) Virtual touch screen system and method
CN101950221A (en) Multi-touch device based on sphere display and multi-touch method thereof
US9652081B2 (en) Optical touch system, method of touch detection, and computer program product
CN102043543B (en) Optical touch control system and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130320