CN102799273A - Interaction control system and method - Google Patents
- Publication number
- CN102799273A
- Authority
- CN (China)
- Prior art keywords
- control
- range
- information
- gesture
- motion
- Prior art date
- Legal status: Granted (assumed; Google has not performed a legal analysis)
Abstract
The invention provides an interaction control system comprising a front-end information acquisition unit, an information storage unit, and a control range prediction unit. The front-end information acquisition unit acquires gesture control information in real time; the information storage unit stores the gesture control information of the current time period; and the control range prediction unit determines the control state at the current moment from the stored information and generates predicted gesture control range information according to that state. The invention also provides an interaction control method. The system and method can predict and prompt the gesture control range, so that gesture control can be performed more accurately.
Description
Technical field
The present invention relates to interaction control technology, and in particular to an interaction control system and an interaction control method capable of predicting and prompting the gesture control range.
Background technology
With the continuous development of computer vision technology, gesture interaction has been gradually adopted. The core of computer-vision-based gesture interaction is to use techniques such as computer vision and image processing to process the video sequence collected by an image capture device, obtain an understanding of the gesture control behavior, and respond accordingly.
Mid-air gesture control, an emerging interaction means in human-computer interaction, brings a more natural interactive experience and is widely used in smart-TV operation. However, compared with contact-based interaction such as keyboard and mouse, mid-air gesture control is inherently imprecise. To overcome this imprecision, intuitive interaction prompts and intelligent prediction of interaction behavior are needed, so as to make full use of the computer's own computing power and cooperate with the user's adjustments, achieving convenient, intelligent control and improving the interactive experience.
Summary of the invention
In view of this, an interaction control system and an interaction control method capable of predicting and prompting the gesture control range are provided, so that gesture control can be realized more accurately.
An interaction control system comprises: a front-end information acquisition unit for acquiring gesture control information; an information storage unit for storing the gesture control information of the current time period; and a control range prediction unit for determining the control state at the current moment according to the information stored in the information storage unit, and generating predicted gesture control range information according to the control state at the current moment.
In one embodiment, the front-end information acquisition unit is also used to acquire interactive interface information and store it in the information storage unit.
In one embodiment, the interaction control system further comprises a control range prompt unit. The control range prompt unit compares the interactive interface information with the predicted gesture control range information generated by the control range prediction unit, determines the interactive interface information within the current predicted gesture control range, and prompts the predicted gesture control range and any control errors.
In one embodiment, the front-end information acquisition unit analyzes the acquired gesture control images, obtains the relative position information of the gesture at each moment, and compares the relative gesture positions between two adjacent images to obtain the corresponding hand motion speed.
In one embodiment, the control range prediction unit determines the control state according to the image information acquired by the front-end information acquisition unit and the hand motion speed.
In one embodiment, the control state comprises motion start, in motion, motion arrival, and motion turn. When the control range prediction unit determines that the hand is currently in the 'in motion' state, it generates a corresponding sector-shaped range; when it determines that the hand is currently in the motion start, motion arrival, or motion turn state, it generates a corresponding elliptical control range.
In one embodiment, the control range prompt unit performs error detection and correction according to the number of interface icons in the prediction region or their degree of match: if no interface icon in the prediction region matches the hand action, an operation error is prompted; if only one icon exists in the prediction region, the cursor is automatically snapped to that target icon.
An interaction control method comprises: acquiring and storing gesture control information in real time; determining the control state at the current moment according to the stored gesture control information of the current time period; and generating predicted gesture control range information according to the control state at the current moment.
In one embodiment, the method further comprises: acquiring and storing interactive interface information; and comparing the interactive interface information with the generated predicted gesture control range information, determining the interactive interface information within the current predicted gesture control range, and prompting the predicted gesture control range and any control errors.
In one embodiment, the step of prompting the predicted gesture control range and control errors comprises: performing error detection and correction according to the number of interface icons in the prediction region or their degree of match; if no interface icon in the prediction region matches the hand action, prompting an operation error; if only one icon exists in the prediction region, automatically snapping to that target icon.
The interaction control system and interaction control method of the present invention have the following beneficial effects:
(1) They solve the problem of predicting and prompting the control range during mid-air gesture interaction, giving the user a clear perception of the current control situation. (2) Because the control range at the next moment can be predicted, a method for detecting and correcting error situations can be provided, improving the user experience.
Description of drawings
Fig. 1 is a functional block diagram of the interaction control system provided by one embodiment of the present invention;
Fig. 2 is a schematic diagram of the relationship between gesture motion speed and control state over time;
Figs. 3a and 3b are schematic diagrams of gesture control range prediction;
Fig. 4 is a schematic diagram of the gesture control range prompt;
Fig. 5 is a flowchart of the interaction control method of one embodiment of the present invention;
Description of reference numerals:
100, interaction control system; 10, front-end information acquisition unit; 20, information storage unit; 30, control range prediction unit; 40, control range prompt unit.
Embodiment
The technical solution is further explained below in conjunction with the accompanying drawings.
Referring to Fig. 1, one embodiment of the present invention provides an interaction control system 100 that can predict and prompt the gesture control range. The interaction control system 100 comprises a front-end information acquisition unit 10, an information storage unit 20, a control range prediction unit 30, and a control range prompt unit 40.
In one embodiment, the front-end information acquisition unit 10 acquires gesture control information and interactive interface information. The information storage unit 20 stores the gesture control information of the current time period and the interactive interface information. The control range prediction unit 30 determines the control state at the current moment according to the gesture control information stored in the information storage unit 20, and generates predicted gesture control range information according to that state. The control range prompt unit 40 compares the interactive interface information stored in the information storage unit 20 with the predicted gesture control range information generated by the control range prediction unit, determines the interactive interface information within the current predicted gesture control range, and prompts the predicted gesture control range and any control errors.
Referring also to Fig. 2: specifically, the front-end information acquisition unit 10 continuously captures gesture motion images during gesture interaction through an image input device (not shown) serving as a front-end sensor. In this embodiment, the image input device is an ordinary camera or a depth camera, and the computer continuously obtains the video image sequence from the image capture device. More specifically, the front-end information acquisition unit 10 analyzes the captured image sequence to obtain the relative position [x(t), y(t)] of the gesture at each moment t, and compares the relative gesture positions between two adjacent frames to obtain the corresponding hand motion speed v(t). This motion speed v(t) is one kind of gesture control information.
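As a rough sketch (not part of the patent; the frame interval, pixel units, and sample track are illustrative assumptions), the adjacent-frame speed computation v(t) described above could look like this:

```python
import math

def hand_speed(positions, dt):
    """Estimate hand motion speed v(t) by differencing the tracked
    gesture positions [x(t), y(t)] of adjacent frames."""
    speeds = []
    for (x0, y0), (x1, y1) in zip(positions, positions[1:]):
        speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
    return speeds

# Illustrative track: the hand accelerates, peaks, then decelerates
positions = [(0, 0), (5, 0), (15, 0), (30, 0), (40, 0), (45, 0)]
v = hand_speed(positions, dt=0.5)  # dt = assumed time between frames
```

The resulting speed profile (10, 20, 30, 20, 10 here) rises from near zero to a peak and back, which is exactly the physical pattern the prediction unit later relies on to classify control states.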
The information storage unit 20 can store the gesture control information of the current and past time periods. The stored gesture control information can represent a single control fragment or a group of control fragments, and the information storage unit 20 has a certain memory and learning capability with respect to these control fragments.
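A minimal sketch of how such a storage unit might buffer control fragments (the class name, period length, and history bound are hypothetical, not from the patent):

```python
from collections import deque

class InformationStorageUnit:
    """Stores the current period's gesture control samples and a bounded
    history of past control fragments."""
    def __init__(self, period_len=30, max_periods=10):
        self.period_len = period_len
        self.current = []                         # current-period samples
        self.history = deque(maxlen=max_periods)  # past control fragments

    def add_sample(self, sample):
        """Append one gesture sample; roll the period into history when full."""
        self.current.append(sample)
        if len(self.current) >= self.period_len:
            self.history.append(self.current)
            self.current = []

unit = InformationStorageUnit(period_len=3)
for s in range(7):  # seven samples -> two full fragments plus one extra
    unit.add_sample(s)
```

The bounded `deque` keeps only the most recent fragments, which matches the idea of a current period plus a limited memory of past periods.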
The information storage unit 20 also stores the interactive interface information, which includes the task context, the sizes of the interface icons, and the position distribution of the interface icons. In this embodiment, the task context refers to one or more operation instructions that can be executed consecutively. For example, to complete a "search engine" task, the steps to perform are: enter the search term in the search box, click the search button, drag downward to browse the result links, click the next page (optional), and click a target link to enter the new page. The steps preceding and following any one of these steps are called the context of that task. The interactive interface information also includes the sizes and position distribution of all icons in the interface.
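The interactive interface information described above could be represented by a simple structure like the following (the field names and the lookup helper are illustrative assumptions, not the patent's format):

```python
interface_info = {
    # Task context: consecutive operation steps of the example "search" task
    "task_context": [
        "enter search term", "click search button",
        "browse result links", "click target link",
    ],
    # Icon sizes and position distribution of the whole interface
    "icons": [
        {"name": "search_button", "center": (300, 40), "size": (80, 30)},
        {"name": "next_page", "center": (300, 500), "size": (60, 24)},
    ],
}

def icon_at(info, name):
    """Look up one icon's geometry by name."""
    return next(i for i in info["icons"] if i["name"] == name)
```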
The control range prediction unit 30 determines, from the gesture control information stored in the information storage unit 20, the control state of the gesture at the current moment, and generates the gesture control range information for the next moment according to that state. Specifically, the control range prediction unit 30 determines the control state from the image information acquired by the front-end information acquisition unit 10 and the computed hand motion speed, and then predicts the control range according to the determined state. In this embodiment, the control state is divided into four states: motion start, in motion, motion arrival, and motion turn. The control range prediction unit 30 determines the control state according to the following principles. During gesture control, the motion speed of the hand follows general physical rules tied to the control intent: (1) as the hand moves from the starting point to the target point, its speed accelerates from 0 to a peak and then decelerates back to 0; (2) near the target, or when the motion direction changes significantly, the speed slows down; (3) the farther the target is from the current position, the higher the average motion speed of the hand. According to these rules, the control range prediction unit 30 classifies the control behavior into one of the four states above based on the motion speed, and predicts the control range of the hand gesture accordingly during the control process. As shown in Figs. 3a and 3b, a motion range prediction region is defined with the hand position as the reference point, represented as a sector or an ellipse. Fig. 3a shows the relatively fast motion state ("in motion"); in this state the predicted motion range is wider and extends farther, and is sector-shaped. Fig. 3b shows the relatively slow motion states (motion start, motion arrival, and motion turn); in these states the predicted motion range is smaller and nearer, and is elliptical. In this embodiment, when the control range prediction unit 30 determines from the gesture speed that the hand is currently in the "in motion" state, it generates a corresponding sector-shaped range; otherwise, when it determines that the hand is in the motion start, motion arrival, or motion turn state, it generates a corresponding elliptical control range.
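One way the speed-based state classification and region choice could be sketched (the speed and angle thresholds are illustrative guesses, not values from the patent):

```python
def control_state(v_prev, v_curr, turn_angle_deg, slow=80.0, turn_limit=45.0):
    """Classify the control state from consecutive speed samples and the
    change in motion direction, following the four states above."""
    if v_prev < slow <= v_curr:
        return "motion start"      # speed rising from near zero
    if v_curr < slow <= v_prev:
        return "motion arrival"    # speed falling back toward zero
    if turn_angle_deg > turn_limit:
        return "motion turn"       # large direction change
    return "in motion"

def prediction_region(state):
    """Fast 'in motion' -> far sector; slow states -> near ellipse."""
    return "sector" if state == "in motion" else "ellipse"
```

In a real system the thresholds would presumably be tuned per user or learned from the stored control fragments.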
Referring to Fig. 4, the control range prompt unit 40 compares the interactive interface information stored in the information storage unit 20 with the predicted gesture control range information generated by the control range prediction unit, determines the interactive interface information within the current predicted gesture control range, and prompts the predicted gesture control range and any control errors, feeding them back to the user. Specifically, the control range prompt unit 40 reads the interactive interface information from the information storage unit 20 in advance, and then compares it with the prediction region information obtained by the control range prediction unit 30, thereby determining the interactive interface information located within the prediction region. Once the interface icons within the prediction region are determined, on the one hand the prediction region is prompted to the user; on the other hand, error detection and correction are performed according to the number of interface icons in the prediction region or their degree of match. For example, if no interface icon in the prediction region matches the hand action, an operation error is prompted; if only one icon exists in the prediction region, the cursor is automatically snapped to that target icon, achieving fast and convenient operation.
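The icon matching, error prompt, and snap-to-icon behavior could be sketched as follows (the region geometry, parameter values, and return conventions are illustrative assumptions):

```python
import math

def icons_in_region(hand, heading_deg, icons, state,
                    sector_radius=200, sector_half_angle=30,
                    ellipse_a=80, ellipse_b=50):
    """Keep the icons whose centers fall inside the predicted region:
    a sector ahead of the hand in the 'in motion' state, an ellipse
    around the hand otherwise."""
    hits = []
    for icon in icons:
        dx = icon["center"][0] - hand[0]
        dy = icon["center"][1] - hand[1]
        if state == "in motion":
            dist = math.hypot(dx, dy)
            # angular offset from the motion heading, wrapped to [-180, 180]
            off = abs((math.degrees(math.atan2(dy, dx)) - heading_deg + 180) % 360 - 180)
            if dist <= sector_radius and off <= sector_half_angle:
                hits.append(icon)
        elif (dx / ellipse_a) ** 2 + (dy / ellipse_b) ** 2 <= 1:
            hits.append(icon)
    return hits

def prompt(hits):
    """No matching icon -> error; exactly one -> snap; several -> highlight."""
    if not hits:
        return ("error", None)
    if len(hits) == 1:
        return ("snap", hits[0]["name"])
    return ("highlight", [i["name"] for i in hits])

icons = [{"name": "search_button", "center": (100, 0)},
         {"name": "next_page", "center": (0, 300)}]
hits = icons_in_region((0, 0), 0.0, icons, "in motion")
```

Here the fast-motion sector ahead of the hand catches only `search_button`, so the prompt unit would snap to it; an empty hit list would instead raise the operation-error prompt.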
In addition, during long interaction sessions, the interaction control system 100 uses the information storage unit 20 to automatically compare the similarity between the stored operation history and the current operation, and this similarity provides, with a certain weight, auxiliary analytical information for predicting the control area.
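A toy sketch of how the history-similarity weighting might work (both the similarity measure and the weight value are illustrative assumptions; the patent does not specify them):

```python
def fragment_similarity(a, b):
    """Crude similarity between two speed sequences (control fragments):
    1 minus the mean absolute difference over the overlapping prefix,
    normalized by the peak speed."""
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    diff = sum(abs(a[i] - b[i]) for i in range(n)) / n
    peak = max(max(a[:n]), max(b[:n]), 1e-9)
    return max(0.0, 1.0 - diff / peak)

def weighted_region_score(base_score, history_similarity, weight=0.3):
    """Blend the geometric prediction score with the history similarity
    by a fixed weight."""
    return (1 - weight) * base_score + weight * history_similarity
```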
Referring to Fig. 5, one embodiment of the present invention also provides an interaction control method comprising the following steps:
Step S501: acquire gesture control information and interactive interface information.
Step S503: store the gesture control information of the current and past time periods, and store the interactive interface information. In this embodiment, the interactive interface information includes the task context, the interface icon sizes, and the interface icon distribution information.
Step S505: determine the control state at the current moment according to the stored gesture control information of the current time period, and generate predicted gesture control range information according to that state, i.e., the gesture control range expected at the next moment.
Step S507: compare the stored interactive interface information with the predicted gesture control range information generated in step S505, determine the interactive interface information within the current predicted gesture control range, and prompt accordingly. The prompting includes prompting the predicted gesture control range, and performing error detection and correction according to the number of interface icons in the prediction region or their degree of match: if no interface icon in the prediction region matches the hand action, an operation error is prompted; if only one icon exists in the prediction region, the cursor is automatically snapped to that target icon, achieving fast and convenient operation.
The interaction control method described above has the following beneficial effects: (1) it solves the problem of predicting and prompting the control range during mid-air gesture interaction, giving the user a clear perception of the current control situation; (2) because the control range at the next moment can be predicted, a method for detecting and correcting error situations can be provided, improving the user experience.
The embodiments described above express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the claims. It should be pointed out that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of the present invention, and these all fall within the protection scope of the present invention.
Claims (10)
1. An interaction control system, comprising:
a front-end information acquisition unit for acquiring gesture control information in real time;
an information storage unit for storing the gesture control information of the current time period; and
a control range prediction unit for determining the control state at the current moment according to the information stored in the information storage unit, and generating predicted gesture control range information according to the control state at the current moment.
2. The interaction control system of claim 1, wherein the front-end information acquisition unit is also used to acquire interactive interface information and store it in the information storage unit.
3. The interaction control system of claim 1, further comprising a control range prompt unit for comparing the interactive interface information with the predicted gesture control range information generated by the control range prediction unit, determining the interactive interface information within the current predicted gesture control range, and prompting the predicted gesture control range and any control errors.
4. The interaction control system of claim 1, wherein the front-end information acquisition unit analyzes the acquired gesture control images, obtains the relative position information of the gesture at each moment, and compares the relative gesture positions between two adjacent images to obtain the corresponding hand motion speed.
5. The interaction control system of claim 4, wherein the control range prediction unit determines the control state according to the image information acquired by the front-end information acquisition unit and the hand motion speed.
6. The interaction control system of claim 5, wherein the control state comprises motion start, in motion, motion arrival, and motion turn; when the control range prediction unit determines that the hand is currently in the 'in motion' state, it generates a corresponding sector-shaped range; and when it determines that the hand is currently in the motion start, motion arrival, or motion turn state, it generates a corresponding elliptical control range.
7. The interaction control system of claim 3, wherein the control range prompt unit performs error detection and correction according to the number of interface icons in the prediction region or their degree of match: if no interface icon in the prediction region matches the hand action, an operation error is prompted; if only one icon exists in the prediction region, the cursor is automatically snapped to that target icon.
8. An interaction control method, comprising:
acquiring and storing gesture control information in real time; and
determining the control state at the current moment according to the stored gesture control information of the current time period, and generating predicted gesture control range information according to the control state at the current moment.
9. The interaction control method of claim 8, further comprising:
acquiring and storing interactive interface information; and
comparing the interactive interface information with the generated predicted gesture control range information, determining the interactive interface information within the current predicted gesture control range, and prompting the predicted gesture control range and any control errors.
10. The interaction control method of claim 9, wherein prompting the predicted gesture control range and control errors comprises:
performing error detection and correction according to the number of interface icons in the prediction region or their degree of match: if no interface icon in the prediction region matches the hand action, prompting an operation error; if only one icon exists in the prediction region, automatically snapping to that target icon.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210240378.XA CN102799273B (en) | 2012-07-11 | 2012-07-11 | Interaction control system and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201210240378.XA CN102799273B (en) | 2012-07-11 | 2012-07-11 | Interaction control system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102799273A (en) | 2012-11-28 |
CN102799273B (en) | 2015-04-15 |
Family
ID=47198397
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201210240378.XA Expired - Fee Related CN102799273B (en) | 2012-07-11 | 2012-07-11 | Interaction control system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102799273B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103558958A (en) * | 2013-10-29 | 2014-02-05 | 宇龙计算机通信科技(深圳)有限公司 | Application program function calling method and terminal |
CN103885597A (en) * | 2014-03-27 | 2014-06-25 | 广东威创视讯科技股份有限公司 | Space input recognition method and system |
CN104486654A (en) * | 2014-12-15 | 2015-04-01 | 四川长虹电器股份有限公司 | Method for providing guidance and television |
CN104699238A (en) * | 2013-12-10 | 2015-06-10 | 现代自动车株式会社 | System and method for gesture recognition of vehicle |
CN104777900A (en) * | 2015-03-12 | 2015-07-15 | 广东威法科技发展有限公司 | Gesture trend-based graphical interface response method |
CN107430431A (en) * | 2015-01-09 | 2017-12-01 | 雷蛇(亚太)私人有限公司 | Gesture identifying device and gesture identification method |
CN107967058A (en) * | 2017-12-07 | 2018-04-27 | 联想(北京)有限公司 | Information processing method, electronic equipment and computer-readable recording medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1885343A (en) * | 2005-06-21 | 2006-12-27 | 三星电子株式会社 | Apparatus and method for displaying 3-dimensional graphics |
WO2009018161A1 (en) * | 2007-07-27 | 2009-02-05 | Gesturetek, Inc. | Enhanced camera-based input |
CN102193626A (en) * | 2010-03-15 | 2011-09-21 | 欧姆龙株式会社 | Gesture recognition apparatus, method for controlling gesture recognition apparatus, and control program |
CN102272697A (en) * | 2008-12-31 | 2011-12-07 | 惠普开发有限公司 | Computing device and method for selecting display regions responsive to non-discrete directional input actions and intelligent content analysis |
US8166421B2 (en) * | 2008-01-14 | 2012-04-24 | Primesense Ltd. | Three-dimensional user interface |
- 2012-07-11: application CN201210240378.XA filed in CN; granted as CN102799273B (en); status: not active (Expired - Fee Related)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1885343A (en) * | 2005-06-21 | 2006-12-27 | 三星电子株式会社 | Apparatus and method for displaying 3-dimensional graphics |
WO2009018161A1 (en) * | 2007-07-27 | 2009-02-05 | Gesturetek, Inc. | Enhanced camera-based input |
US8166421B2 (en) * | 2008-01-14 | 2012-04-24 | Primesense Ltd. | Three-dimensional user interface |
CN102272697A (en) * | 2008-12-31 | 2011-12-07 | 惠普开发有限公司 | Computing device and method for selecting display regions responsive to non-discrete directional input actions and intelligent content analysis |
CN102193626A (en) * | 2010-03-15 | 2011-09-21 | 欧姆龙株式会社 | Gesture recognition apparatus, method for controlling gesture recognition apparatus, and control program |
Non-Patent Citations (1)
Title |
---|
柳宗浦 et al., "A new video moving-target tracking method fusing Kalman prediction and Mean-shift search", 《光电子技术》 (Optoelectronic Technology), vol. 29, no. 1, 31 March 2009, pages 30-33 *
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103558958A (en) * | 2013-10-29 | 2014-02-05 | 宇龙计算机通信科技(深圳)有限公司 | Application program function calling method and terminal |
CN104699238A (en) * | 2013-12-10 | 2015-06-10 | 现代自动车株式会社 | System and method for gesture recognition of vehicle |
CN104699238B (en) * | 2013-12-10 | 2019-01-22 | 现代自动车株式会社 | System and method of the gesture of user to execute the operation of vehicle for identification |
CN103885597A (en) * | 2014-03-27 | 2014-06-25 | 广东威创视讯科技股份有限公司 | Space input recognition method and system |
CN103885597B (en) * | 2014-03-27 | 2017-06-13 | 广东威创视讯科技股份有限公司 | Space input recognition method and system |
CN104486654A (en) * | 2014-12-15 | 2015-04-01 | 四川长虹电器股份有限公司 | Method for providing guidance and television |
CN107430431A (en) * | 2015-01-09 | 2017-12-01 | 雷蛇(亚太)私人有限公司 | Gesture identifying device and gesture identification method |
CN107430431B (en) * | 2015-01-09 | 2021-06-04 | 雷蛇(亚太)私人有限公司 | Gesture recognition device and gesture recognition method |
CN104777900A (en) * | 2015-03-12 | 2015-07-15 | 广东威法科技发展有限公司 | Gesture trend-based graphical interface response method |
CN107967058A (en) * | 2017-12-07 | 2018-04-27 | 联想(北京)有限公司 | Information processing method, electronic equipment and computer-readable recording medium |
Also Published As
Publication number | Publication date |
---|---|
CN102799273B (en) | 2015-04-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105373785B (en) | Gesture identification detection method and device based on deep neural network | |
CN102799273A (en) | Interaction control system and method | |
US10074186B2 (en) | Image search system, image search apparatus, and image search method | |
US9514551B2 (en) | Efficient fetching of a map data during animation | |
CN105353634B (en) | Utilize the home appliance and method of gesture identification control operation | |
US8373654B2 (en) | Image based motion gesture recognition method and system thereof | |
CN110325949A (en) | For predicting to touch the multitask machine learning explained | |
US7692627B2 (en) | Systems and methods using computer vision and capacitive sensing for cursor control | |
US20120326995A1 (en) | Virtual touch panel system and interactive mode auto-switching method | |
CN108845668B (en) | Man-machine interaction system and method | |
JP2021524951A (en) | Methods, devices, devices and computer readable storage media for identifying aerial handwriting | |
CN103294401A (en) | Icon processing method and device for electronic instrument with touch screen | |
CN105229582A (en) | Based on the gestures detection of Proximity Sensor and imageing sensor | |
CN104252264A (en) | Information processing apparatus and control method | |
CN102236409A (en) | Motion gesture recognition method and motion gesture recognition system based on image | |
WO2012054060A1 (en) | Evaluating an input relative to a display | |
JP2015162088A (en) | Electronic device, method, and program | |
CN101976330A (en) | Gesture recognition method and system | |
JP2015114976A (en) | Electronic device and method | |
CN106202140A (en) | Browsing data device and method for browsing data | |
CN103105924A (en) | Man-machine interaction method and device | |
US11301128B2 (en) | Intended input to a user interface from detected gesture positions | |
JP2013042360A5 (en) | ||
CN107479714A (en) | Content switching display methods, device and robot | |
CN102799271A (en) | Method and system for identifying interactive commands based on human hand gestures |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20150415 Termination date: 20210711 |