CN102945078A - Human-computer interaction equipment and human-computer interaction method - Google Patents

Human-computer interaction equipment and human-computer interaction method

Info

Publication number
CN102945078A
CN102945078A (application CN2012104543440A, also written CN201210454344A)
Authority
CN
China
Prior art keywords
EEG signals
microprocessor
virtual scene
scene
human
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012104543440A
Other languages
Chinese (zh)
Inventor
程俊
陶大程
陈裕华
张子锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN2012104543440A priority Critical patent/CN102945078A/en
Publication of CN102945078A publication Critical patent/CN102945078A/en
Pending legal-status Critical Current


Abstract

The invention discloses a human-computer interaction device. The device comprises an EEG collecting electrode, a signal processing module, a video acquisition module, a microprocessor, and a video display module. The EEG collecting electrode obtains the user's electroencephalogram (EEG) signal; the signal processing module collects and amplifies that signal; the video acquisition module captures the user's hand gestures and images of the surrounding scene. The microprocessor receives the amplified EEG signal and forms a control command from it, processes the surrounding-scene images into a virtual scene, locates a target object in the virtual scene according to the hand gesture, and controls changes to the target object according to the control command. The video display module, electrically connected to the microprocessor, displays the virtual scene. The device realizes interaction between the real scene and the virtual scene through simple hand gestures and EEG signals alone, and is highly operable, accurate, and convenient to use. In addition, the invention also discloses a human-computer interaction method.

Description

Human-computer interaction device and human-computer interaction method
Technical field
The present invention relates to multimedia technology, and in particular to a human-computer interaction device and a human-computer interaction method.
Background technology
With the development of information technology and the spread of information application systems, users have sought computers that are ever more handy, suitable, and easy to use, hoping that in the process of "cooperating" with the computer, the computer can progressively understand the user's needs, preferences, and skill level, so that the knowledge of user and computer grows together. The technical level and the application results of this "human-machine intelligence growing together" are among the important signs that computer technology and artificial-intelligence technology have developed into a new stage. In the era of electronic products centered on computers and computer-like devices, human-computer interaction technology is becoming one of the focal points of research worldwide.
At the early stage of human-computer interaction technology, the keyboard was dominant: the user entered instructions through keyboard and mouse, and the computer executed them. Modern interaction also includes touch control: the user touches the screen with a hand or another touch tool (for example a stylus), the resistance or capacitance at the touched location changes correspondingly, and by measuring these value differences the computer determines the position of the touch point, enabling operations such as dragging and zooming. More recently there is motion-sensing interaction, realized by sensing the motion of the user's hand through an acceleration sensor in a remote controller. Yet another interaction mode is the data glove, which likewise achieves interaction by collecting the user's hand movements with bend sensors mounted on a glove.
Traditional keyboard-and-mouse interaction requires the user to touch the keyboard and mouse in order to operate. Touch control requires the user to be in front of the device and in contact with the touch screen; away from the screen, no interaction is possible. Motion-sensing interaction requires the user to hold a sensor-equipped remote controller and make large limb movements. As for the data glove, the number of sensors is large, and the calibration, response, and decoupling computations are complex, making real-time processing difficult; its reach is also limited. In addition, although a data glove can capture limb movements, it is troublesome to put on and inconvenient to operate, and for users with limited mobility or physical disabilities its operability is poor.
Summary of the invention
In view of the defects of the above human-computer interaction systems, it is necessary to provide a human-computer interaction device that is highly operable, simple, and practical.
A human-computer interaction device comprises: an EEG collecting electrode, for obtaining the user's EEG signal; a signal processing module, for collecting the EEG signal and amplifying it; a video acquisition module, for capturing the user's hand gestures and images of the surrounding scene; a microprocessor, for receiving the amplified EEG signal and forming a control command from it, the microprocessor also being used to process the surrounding-scene images to form a virtual scene, to locate a target object in the virtual scene according to the hand gesture, and to control the change of the target object according to the control command; and a video display module, electrically connected to the microprocessor, for displaying the virtual scene.
In the present embodiment, the video acquisition module is composed of at least one camera.
In the present embodiment, the video display module is a head-mounted glasses display screen.
In addition, the present invention also provides a human-computer interaction method, comprising the steps of: an EEG collecting electrode obtains the user's EEG signal; a signal processing module collects the EEG signal and amplifies it; a video acquisition module captures the user's hand gestures and images of the surrounding scene; a microprocessor converts the surrounding-scene images into a virtual scene and displays the virtual scene through the video display module; the microprocessor receives the amplified EEG signal and forms a control command from it; the microprocessor locates a target object in the virtual scene according to the hand gesture; and the change of the target object is controlled according to the control command.
In the present embodiment, the step in which the microprocessor receives the amplified EEG signal and forms a control command from it comprises the steps of: denoising the received EEG signal; performing wavelet decomposition on the denoised EEG signal and extracting classification features; classifying the features with a trained SVM classifier; and forming a control command according to the resulting class.
In the present embodiment, the step of locating the target object in the virtual scene according to the hand gesture comprises the steps of: converting the coordinate mapping of the surrounding-scene images into image-plane coordinates; segmenting out the finger regions; computing the image-plane coordinates of each finger region; and matching the image-plane coordinates of each finger region against the image-plane coordinates of the displayed virtual scene.
The above human-computer interaction device collects the EEG signal, converts it into a control command, locates a target object in the virtual scene from the hand gesture, and controls the change of the target object according to the control command, thereby realizing interaction with the virtual scene. Because the user does not need to hold devices such as a keyboard, mouse, or touch screen, and does not need to wear sensors on the hands, the user can interact with the live-scene virtual reality through simple hand gestures and EEG signals, zooming in on, zooming out of, locating, and moving the target objects in the scene. The device is highly operable, accurate, simple, and convenient.
Description of drawings
Fig. 1 is a schematic structural diagram of a human-computer interaction device according to an embodiment of the invention.
Fig. 2 is a flowchart of the steps of a human-computer interaction method according to an embodiment of the invention.
Fig. 3 is a flowchart of the steps in which the microprocessor receives the amplified EEG signal and forms a control command from it, according to an embodiment of the invention.
Fig. 4 is a flowchart of the steps of locating the target object in the virtual scene according to the hand gesture, according to an embodiment of the invention.
Embodiment
Referring to Fig. 1, Fig. 1 is a schematic structural diagram of a human-computer interaction device 100 according to an embodiment of the invention.
The human-computer interaction device 100 comprises: an EEG collecting electrode 110, a signal processing module 120, a video acquisition module 130, a microprocessor 140, and a video display module 150.
The EEG collecting electrode 110 is used to obtain the user's EEG signal. It will be appreciated that the EEG collecting electrode 110 is in contact with the user's scalp and collects the user's EEG signals, such as the signals associated with attention, blinking, or frowning.
The signal processing module 120 is electrically connected to the EEG collecting electrode 110, and is used to collect the EEG signal obtained by the EEG collecting electrode 110 and amplify it.
The video acquisition module 130 is used to capture the user's hand gestures and images of the surrounding scene. In the embodiment provided by the invention, the video acquisition module 130 is preferably composed of at least one camera, through which the user's hand gestures and the surrounding scene are captured.
The microprocessor 140 is electrically connected to the signal processing module 120 and the video acquisition module 130. The microprocessor 140 receives the EEG signal amplified by the signal processing module 120 and forms a control command from it.
The microprocessor 140 is also used to receive the surrounding-scene images captured by the video acquisition module 130 and convert them into a virtual scene. The microprocessor 140 is further used to locate a target object in the virtual scene according to the hand gesture, and to control the change of the target object according to the control command.
The video display module 150 is electrically connected to the microprocessor 140 and is used to display the virtual scene. In the embodiment provided by the invention, the video display module 150 is preferably a head-mounted glasses display screen; the surrounding-scene images captured by the video acquisition module 130 are shown through the video display module 150. It will be appreciated that the process and result of the target object's change under the above control command are also shown through the video display module 150.
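The five modules above form an acquire → amplify → process → display pipeline. The following is a minimal sketch of that wiring; all class and method names, and the gain value, are hypothetical illustrations, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class InteractionDevice:
    """Hypothetical wiring of the five modules described above."""
    eeg_gain: float = 1000.0          # amplification in the signal processing module
    frames: list = field(default_factory=list)

    def acquire_eeg(self, raw_uv):
        # EEG collecting electrode -> signal processing module (amplify)
        return [v * self.eeg_gain for v in raw_uv]

    def capture_scene(self, frame):
        # video acquisition module: hand gesture + surrounding scene image
        self.frames.append(frame)
        return frame

    def render(self, virtual_scene):
        # video display module shows the virtual scene built by the microprocessor
        return f"displaying {virtual_scene}"
```

The microprocessor's role (command formation, scene construction, gesture matching) sits between `acquire_eeg`/`capture_scene` and `render`, and is sketched separately below the corresponding method steps.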
Referring to Fig. 2, Fig. 2 is a flowchart of the steps of a human-computer interaction method according to an embodiment of the invention, which comprises the following steps:
Step S210: the EEG collecting electrode 110 obtains the user's EEG signal. In the embodiment provided by the invention, the EEG collecting electrode 110 is in contact with the user's scalp and is used to collect the user's EEG signal. The user produces different EEG signals under different mental states, for example under states such as "attention", "blink", or "frown".
Step S220: the signal processing module 120 collects the EEG signal and amplifies it. The signal processing module 120 collects the user's EEG signals obtained by the EEG collecting electrode 110, such as the signals under "attention", "blink", or "frown", and amplifies them.
Step S230: the video acquisition module 130 captures the user's hand gestures and images of the surrounding scene. It will be appreciated that the surrounding-scene images captured by the video acquisition module 130 are of the real scene.
Step S240: the microprocessor 140 converts the surrounding-scene images into a virtual scene and displays the virtual scene. In the embodiment provided by the invention, the microprocessor 140 merges the surrounding-scene images captured by the video acquisition module 130 and, through matrix operations such as geometric transformation, clipping, and projection, converts them into a virtual scene, which is displayed through the video display module 150.
Step S250: the microprocessor 140 receives the amplified EEG signal and forms a control command from it. It will be appreciated that, because the EEG signal of the human brain is in the nature of intention, it is difficult to locate an object in the scene precisely and directly from the EEG signal; however, if the EEG signal is converted into a control command, actions such as moving, zooming, and rotating the objects in the virtual scene can then be realized through that command.
Referring to Fig. 3, Fig. 3 is a flowchart of the steps in which the microprocessor 140 receives the amplified EEG signal and forms a control command from it, according to an embodiment of the invention. It comprises the following steps:
Step S251: the microprocessor 140 denoises the received EEG signal. In the embodiment provided by the invention, the denoising is based on the currently common wavelet-denoising method.
Step S252: the microprocessor 140 performs wavelet decomposition on the denoised EEG signal and extracts data features. In the embodiment provided by the invention, the microprocessor 140 extracts the data features of the denoised EEG signal by wavelet decomposition; for example, it extracts the data features of "attention" in this way.
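Step S252 does not name a particular wavelet. As an illustration of how wavelet decomposition can turn an EEG trace into a feature vector, here is a one-level Haar DWT and a band-energy feature extractor in NumPy; the choice of the Haar wavelet, the padding rule, and the `band_energies` helper are assumptions for brevity, not the patent's method:

```python
import numpy as np

def haar_dwt(signal):
    """One level of a Haar discrete wavelet transform.

    Returns (approximation, detail) coefficients: the detail band is where
    transients such as blinks tend to show up, while the approximation
    keeps the low-frequency trend.
    """
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:                       # pad odd-length input to even length
        x = np.append(x, x[-1])
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return approx, detail

def band_energies(signal, levels=3):
    """Feature vector: energy of each detail band plus the final approximation."""
    feats = []
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(float(np.sum(d ** 2)))
    feats.append(float(np.sum(a ** 2)))
    return feats
```

Because the Haar transform as written is orthonormal, the energies in the feature vector sum to the energy of the input signal, which makes the features easy to sanity-check.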
Step S253: classification is performed with a trained SVM classifier based on the above data features. It will be appreciated that, based on the differences in the data features extracted in step S252, the microprocessor 140 classifies them with the trained SVM classifier; because the data features of "attention", "blink", and "frown" differ, the SVM classifier can distinguish them.
Step S254: a control command is formed based on the above classification. It will be appreciated that different EEG data features yield different classes, and different classes correspond to different control commands. For example, the class formed from the "attention" features may correspond to the command "move", while "blink" may correspond to "zoom in".
It will be appreciated that, through the above steps, an EEG signal that is difficult to use for direct control of an object is converted into a control command that can control the object.
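The class-to-command mapping of step S254 can be sketched as a simple lookup table, following the examples given in the description ("attention" → move, "blink" → zoom in). The "frown" entry and the function name are illustrative additions, not taken from the patent:

```python
# Mapping from EEG classifier output to a control command.
# "attention" -> "move" and "blink" -> "zoom_in" follow the description;
# the "frown" entry is an added illustration.
COMMAND_TABLE = {
    "attention": "move",
    "blink": "zoom_in",
    "frown": "zoom_out",
}

def to_control_command(eeg_class, table=COMMAND_TABLE):
    """Translate an SVM class label into a command; unknown classes yield None."""
    return table.get(eeg_class)
```

Returning `None` for unrecognized classes gives the microprocessor a natural "no actionable command" path when the classifier output is noise.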
Step S260: the microprocessor 140 locates the target object in the virtual scene according to the hand gesture. In the embodiment provided by the invention, the target object in the virtual scene is first located by the gesture, and then controlled by the above control command, thereby realizing human-computer interaction.
Referring to Fig. 4, Fig. 4 is a flowchart of the steps of locating the target object in the virtual scene according to the hand gesture, according to an embodiment of the invention. It comprises the following steps:
Step S261: the coordinate mapping of the surrounding-scene images is converted into image-plane coordinates.
In the embodiment provided by the invention, this conversion mainly adopts the currently common coordinate-transformation method: first, three-dimensional geometry is used to transform the object coordinate system of the surrounding-scene images into the world coordinate system; the world coordinate system is then transformed into the camera coordinate system through rotation and translation; finally, the camera coordinate system undergoes three-dimensional clipping and projection to obtain the image coordinates.
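The object → world → camera → image chain of step S261 can be sketched with homogeneous 4×4 transforms and a pinhole projection. The matrix contents and the `focal` parameter are illustrative; the patent does not give concrete calibration values:

```python
import numpy as np

def project_point(p_object, object_to_world, world_to_camera, focal=1.0):
    """Object -> world -> camera -> image-plane coordinates.

    object_to_world and world_to_camera are 4x4 homogeneous transforms
    (rotation + translation); the final step is a pinhole projection
    onto the image plane.
    """
    p = np.append(np.asarray(p_object, dtype=float), 1.0)   # homogeneous coords
    p_world = object_to_world @ p
    p_cam = world_to_camera @ p_world
    x, y, z = p_cam[:3]
    if z <= 0:
        raise ValueError("point is behind the camera")
    return np.array([focal * x / z, focal * y / z])          # image plane (u, v)
```

With identity transforms, a point at depth 4 with offset (1, 2) projects to (0.25, 0.5); translating the camera shifts the projection accordingly.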
Step S262: based on the above image-plane coordinates, the finger regions are segmented out.
It will be appreciated that the microprocessor 140 receives the user's hand gestures captured by the video acquisition module 130, and on the basis of the above image-plane coordinates, segments out the finger regions using the differences in finger skin color and contour. Specifically, based on the finger skin color and the background color, the microprocessor 140 performs segmentation with the HSV color-space model, and further refines the finger regions using finger-shape image features.
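A rough sketch of the HSV skin segmentation described above, using only the standard library. The threshold values are illustrative guesses, not values from the patent; a real implementation would tune them per camera and lighting, and would typically operate on full frames with a vision library:

```python
import colorsys

def skin_mask(rgb_image, h_max=0.14, s_range=(0.2, 0.7), v_min=0.35):
    """Very rough HSV skin segmentation.

    rgb_image is a nested list of (r, g, b) tuples with channels in [0, 1].
    A pixel counts as skin when its hue is low (reddish), its saturation is
    moderate, and it is bright enough. All thresholds are assumptions.
    """
    mask = []
    for row in rgb_image:
        mask_row = []
        for r, g, b in row:
            h, s, v = colorsys.rgb_to_hsv(r, g, b)
            is_skin = h <= h_max and s_range[0] <= s <= s_range[1] and v >= v_min
            mask_row.append(is_skin)
        mask.append(mask_row)
    return mask
```

The resulting boolean mask is what the shape-feature refinement mentioned in the description would then operate on.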
Step S263: the image-plane coordinates of each finger region are computed.
Step S264: the image-plane coordinates of each finger region are matched against the image-plane coordinates of the displayed virtual scene.
In the embodiment of the invention, a change in the image-plane coordinates of a finger region corresponds to a change of the matching target object in the image-plane coordinates of the virtual scene, and the target object in the current virtual scene is thereby located. For example, a sliding motion of a single finger after the coordinates have been located can be interpreted as switching between the matching target objects; if two fingers move toward each other so that the distance between them decreases, this can be interpreted as shrinking the matching target object, while moving apart so that the distance increases is interpreted as enlarging it.
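The two-finger rule described above (fingers moving apart → enlarge, moving together → shrink) can be sketched as a distance comparison between fingertip positions at the start and end of a motion. The `eps` dead-band threshold is an illustrative addition:

```python
def classify_two_finger_gesture(p1_start, p1_end, p2_start, p2_end, eps=1e-6):
    """Zoom decision from two fingertip tracks in image-plane coordinates.

    Fingers ending farther apart than they started -> "zoom_in" (enlarge);
    ending closer together -> "zoom_out" (shrink); otherwise no gesture.
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    d0 = dist(p1_start, p2_start)   # fingertip separation before the motion
    d1 = dist(p1_end, p2_end)       # fingertip separation after the motion
    if d1 > d0 + eps:
        return "zoom_in"
    if d1 < d0 - eps:
        return "zoom_out"
    return None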
Step S270: the microprocessor 140 controls the change of the above target object according to the control command.
Given the located target object, the microprocessor 140 controls its change according to the control command. For example, when the control command is "move", the microprocessor 140 makes the target object perform a move action; when the control command is "zoom in", the microprocessor 140 makes the target object perform a zoom-in action.
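Step S270 dispatches the control command to the located object. A minimal sketch, with the `TargetObject` class, the step sizes, and the zoom factor all hypothetical:

```python
class TargetObject:
    """Minimal stand-in for an object in the virtual scene."""
    def __init__(self, x=0.0, y=0.0, scale=1.0):
        self.x, self.y, self.scale = x, y, scale

def apply_command(obj, command, dx=0.0, dy=0.0, factor=1.2):
    """Apply a control command derived from the EEG signal to the located object."""
    if command == "move":
        obj.x += dx
        obj.y += dy
    elif command == "zoom_in":
        obj.scale *= factor
    elif command == "zoom_out":
        obj.scale /= factor
    return obj
```

Unknown commands fall through without effect, matching the intent that only recognized EEG classes drive changes in the scene.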
It will be appreciated that, after a single finger has located the target object in the virtual scene in real time, the target object at the finger's current position can be locked, or the lock released, through the EEG signal.
In the above human-computer interaction method, the user's EEG signal is collected and processed to form a control command, the target object in the virtual scene is located by the user's hand gesture, and the change of the target object is controlled by the control command, thereby realizing human-computer interaction. Because the user does not need to hold devices such as a keyboard, mouse, or touch screen, and does not need to wear sensors on the hands, the user can interact with the live-scene virtual reality through simple hand gestures and EEG signals, zooming in on, zooming out of, locating, and moving the target objects in the scene; the method is highly operable, accurate, simple, and convenient.
The above are merely preferred embodiments of the present invention and do not limit the invention in any form. Although the invention has been disclosed above by way of preferred embodiments, they are not intended to limit it. Any person skilled in the art may, without departing from the scope of the technical solution of the present invention, use the technical content disclosed above to make slight changes or modifications amounting to equivalent embodiments; any simple modification, equivalent variation, or refinement made to the above embodiments in accordance with the technical spirit of the present invention, without departing from the content of the technical solution, still falls within the scope of the technical solution of the present invention.

Claims (6)

1. A human-computer interaction device, characterized by comprising:
an EEG collecting electrode, for obtaining a user's EEG signal;
a signal processing module, for collecting the EEG signal and amplifying it;
a video acquisition module, for capturing the user's hand gestures and images of the surrounding scene;
a microprocessor, for receiving the amplified EEG signal and forming a control command from it, the microprocessor also being used to process the surrounding-scene images to form a virtual scene, to locate a target object in the virtual scene according to the hand gesture, and to control the change of the target object according to the control command; and
a video display module, electrically connected to the microprocessor, for displaying the virtual scene.
2. The human-computer interaction device according to claim 1, characterized in that the video acquisition module is composed of at least one camera.
3. The human-computer interaction device according to claim 1, characterized in that the video display module is a head-mounted glasses display screen.
4. the method for a man-machine interaction is characterized in that, comprises the steps:
Brain electricity power-collecting electrode obtains user's EEG signals;
Signal processing module gathers described EEG signals, and described EEG signals is amplified;
Video acquisition module is caught user's gesture motion and is reached scene graph on every side;
Microprocessor converts scene graph around described to virtual scene, and shows described virtual scene by video acquisition module;
Microprocessor receives the EEG signals after amplifying, simultaneously with the instruction of described EEG signals formation control;
Microprocessor is located target object in the described virtual scene according to described gesture motion;
Control the variation of described target object according to described steering order.
5. the method for man-machine interaction according to claim 4 is characterized in that, wherein, microprocessor receives the EEG signals after amplifying, and simultaneously according to the instruction of described EEG signals formation control, comprises the steps:
The EEG signals that receives is carried out denoising;
With the EEG signals wavelet decomposition after the denoising and extract characteristic of division;
Classify according to the SVM training classifier based on described characteristic of division;
According to the instruction of described classification formation control.
6. the method for man-machine interaction according to claim 4 is characterized in that, wherein, microprocessor is located target object in the described virtual scene according to described gesture motion, comprises the steps:
The coordinate mapping relations of scene graph around described are converted to plane of delineation coordinate;
Based on described plane of delineation coordinate, be partitioned into finger areas;
Calculate the plane of delineation coordinate of each finger areas;
The plane of delineation coordinate of the virtual scene of the plane of delineation coordinate of above-mentioned each finger areas and above-mentioned demonstration is mated.
CN2012104543440A 2012-11-13 2012-11-13 Human-computer interaction equipment and human-computer interaction method Pending CN102945078A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012104543440A CN102945078A (en) 2012-11-13 2012-11-13 Human-computer interaction equipment and human-computer interaction method


Publications (1)

Publication Number Publication Date
CN102945078A 2013-02-27

Family

ID=47728028

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012104543440A Pending CN102945078A (en) 2012-11-13 2012-11-13 Human-computer interaction equipment and human-computer interaction method

Country Status (1)

Country Link
CN (1) CN102945078A (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955269A (en) * 2014-04-09 2014-07-30 天津大学 Intelligent glass brain-computer interface method based on virtual real environment
CN104615243A (en) * 2015-01-15 2015-05-13 深圳市掌网立体时代视讯技术有限公司 Head-wearable type multi-channel interaction system and multi-channel interaction method
CN105700690A (en) * 2016-03-31 2016-06-22 云南省交通科学研究所 Mobile platform based electroencephalogram multi-media control system
WO2016115982A1 (en) * 2015-01-23 2016-07-28 Beijing Zhigu Rui Tuo Tech Co., Ltd. Methods and apparatuses for determining head movement
CN105912110A (en) * 2016-04-06 2016-08-31 北京锤子数码科技有限公司 Method, device and system for performing target selection in virtual reality space
CN106055383A (en) * 2016-05-26 2016-10-26 北京京东尚科信息技术有限公司 Request processing method and device
CN106249879A (en) * 2016-07-19 2016-12-21 深圳市金立通信设备有限公司 The display packing of a kind of virtual reality image and terminal
CN106484111A (en) * 2016-09-30 2017-03-08 珠海市魅族科技有限公司 A kind of method of image procossing and virtual reality device
CN107811735A (en) * 2017-10-23 2018-03-20 广东工业大学 One kind auxiliary eating method, system, equipment and computer-readable storage medium
CN108478189A (en) * 2018-03-06 2018-09-04 西安科技大学 A kind of human body ectoskeleton mechanical arm control system and method based on EEG signals
CN109101807A (en) * 2018-09-10 2018-12-28 清华大学 A kind of brain electricity identity authority control system and method
CN109151555A (en) * 2018-10-29 2019-01-04 奇想空间(北京)教育科技有限公司 Amusement facility and the method for handling video image
CN109313486A (en) * 2016-03-14 2019-02-05 内森·斯特林·库克 E.E.G virtual reality device and method
CN111576539A (en) * 2020-04-30 2020-08-25 三一重机有限公司 Excavator control method and device, computer equipment and readable storage medium
CN111624770A (en) * 2015-04-15 2020-09-04 索尼互动娱乐股份有限公司 Pinch and hold gesture navigation on head mounted display

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6771294B1 (en) * 1999-12-29 2004-08-03 Petri Pulli User interface
CN1776572A (en) * 2005-12-08 2006-05-24 清华大学 Computer man-machine interacting method based on steady-state vision induced brain wave
CN101571748A (en) * 2009-06-04 2009-11-04 浙江大学 Brain-computer interactive system based on reinforced realization
CN101810003A (en) * 2007-07-27 2010-08-18 格斯图尔泰克股份有限公司 enhanced camera-based input
CN101907954A (en) * 2010-07-02 2010-12-08 中国科学院深圳先进技术研究院 Interactive projection system and interactive projection method
CN102012740A (en) * 2010-11-15 2011-04-13 中国科学院深圳先进技术研究院 Man-machine interaction method and system
CN102426480A (en) * 2011-11-03 2012-04-25 康佳集团股份有限公司 Man-machine interactive system and real-time gesture tracking processing method for same
CN102749990A (en) * 2011-04-08 2012-10-24 索尼电脑娱乐公司 Systems and methods for providing feedback by tracking user gaze and gestures
CN102769802A (en) * 2012-06-11 2012-11-07 西安交通大学 Man-machine interactive system and man-machine interactive method of smart television


Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955269A (en) * 2014-04-09 2014-07-30 天津大学 Intelligent glass brain-computer interface method based on virtual real environment
CN104615243A (en) * 2015-01-15 2015-05-13 深圳市掌网立体时代视讯技术有限公司 Head-wearable type multi-channel interaction system and multi-channel interaction method
US10824225B2 (en) 2015-01-23 2020-11-03 Beijing Zhigu Rui Tuo Tech Co., Ltd. Methods and apparatuses for determining head movement
WO2016115982A1 (en) * 2015-01-23 2016-07-28 Beijing Zhigu Rui Tuo Tech Co., Ltd. Methods and apparatuses for determining head movement
CN111624770B (en) * 2015-04-15 2022-05-03 索尼互动娱乐股份有限公司 Pinch and hold gesture navigation on head mounted display
CN111624770A (en) * 2015-04-15 2020-09-04 索尼互动娱乐股份有限公司 Pinch and hold gesture navigation on head mounted display
CN109313486A (en) * 2016-03-14 2019-02-05 内森·斯特林·库克 EEG virtual reality device and method
CN105700690A (en) * 2016-03-31 2016-06-22 云南省交通科学研究所 Mobile-platform-based electroencephalogram multimedia control system
CN105912110A (en) * 2016-04-06 2016-08-31 北京锤子数码科技有限公司 Method, device and system for performing target selection in virtual reality space
CN105912110B (en) * 2016-04-06 2019-09-06 北京锤子数码科技有限公司 Method, apparatus and system for performing target selection in virtual reality space
CN106055383A (en) * 2016-05-26 2016-10-26 北京京东尚科信息技术有限公司 Request processing method and device
CN106249879A (en) * 2016-07-19 2016-12-21 深圳市金立通信设备有限公司 Display method of a virtual reality image and terminal
CN106484111A (en) * 2016-09-30 2017-03-08 珠海市魅族科技有限公司 Image processing method and virtual reality device
CN106484111B (en) * 2016-09-30 2019-06-28 珠海市魅族科技有限公司 Image processing method and virtual reality device
CN107811735B (en) * 2017-10-23 2020-01-07 广东工业大学 Auxiliary eating method, system, equipment and computer storage medium
CN107811735A (en) * 2017-10-23 2018-03-20 广东工业大学 Auxiliary eating method, system, equipment and computer-readable storage medium
CN108478189A (en) * 2018-03-06 2018-09-04 西安科技大学 Human exoskeleton mechanical arm control system and method based on EEG signals
CN109101807A (en) * 2018-09-10 2018-12-28 清华大学 Electroencephalogram identity authority control system and method
CN109101807B (en) * 2018-09-10 2022-12-02 清华大学 Electroencephalogram identity authority control system and method
CN109151555A (en) * 2018-10-29 2019-01-04 奇想空间(北京)教育科技有限公司 Amusement facility and method for processing video images
CN111576539A (en) * 2020-04-30 2020-08-25 三一重机有限公司 Excavator control method and device, computer equipment and readable storage medium
CN111576539B (en) * 2020-04-30 2022-07-29 三一重机有限公司 Excavator control method, excavator control device, computer equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN102945078A (en) Human-computer interaction equipment and human-computer interaction method
Yang et al. Gesture interaction in virtual reality
Kaur et al. A review: Study of various techniques of hand gesture recognition
Quek Eyes in the interface
Suarez et al. Hand gesture recognition with depth images: A review
CN105487673A (en) Man-machine interactive system, method and device
CN104298340A (en) Control method and electronic equipment
Moradi et al. Comparison of machine learning and deep learning approaches for human activity recognition
Nandwana et al. A survey paper on hand gesture recognition
Wu et al. An overview of gesture recognition
CN104077784B Method and electronic equipment for extracting a target object
Hartanto et al. Real time hand gesture movements tracking and recognizing system
CN106383583A Method and system for accurately positioning a virtual object in mid-air human-machine interaction
CN109634408A Extension method for HoloLens gesture recognition
TWI657352B Three-dimensional capacitive wearable human-computer interaction device and method
Premaratne et al. Hand gesture tracking and recognition system for control of consumer electronics
Abdallah et al. An overview of gesture recognition
WO2018076609A1 (en) Terminal and method for operating terminal
Chaudhary Finger-stylus for non touch-enable systems
Li et al. Feature Point Matching for Human-Computer Interaction Multi-Feature Gesture Recognition Based on Virtual Reality VR Technology
CN104375631A (en) Non-contact interaction method based on mobile terminal
CN103019389B Gesture recognition system and gesture recognition method
Maidi et al. Interactive media control using natural interaction-based Kinect
Feng et al. FM: Flexible mapping from one gesture to multiple semantics
CN104363494A (en) Gesture recognition system for smart television

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20130227