CN1731316A - Human-computer interaction method for dummy ape game - Google Patents

Human-computer interaction method for dummy ape game

Info

Publication number
CN1731316A
CN1731316A CNA2005100862540A CN200510086254A
Authority
CN
China
Prior art keywords
ape
participant
game
action
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2005100862540A
Other languages
Chinese (zh)
Inventor
齐越
沈旭昆
赵沁平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CNA2005100862540A priority Critical patent/CN1731316A/en
Publication of CN1731316A publication Critical patent/CN1731316A/en
Pending legal-status Critical Current

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a human-computer interaction method for a virtual ape-play game. Motion capture equipment records the motion data of an ape play performed by a real athlete; a 3D virtual environment and a virtual human are then built, and the captured data is applied to the virtual human so that a 3D graphics engine reproduces the ape play virtually. Three checkpoints are set according to the ape-play movements, and computer vision techniques capture and recognize the participant's motion in real time. If the participant imitates a movement correctly, he passes the checkpoint, and image-morphing techniques blend the image of his movement into the corresponding real ape movement; once all three checkpoints are passed, the imitation succeeds and the system returns to its initial state. If the participant cannot imitate correctly within the allotted time, the imitation fails and the system likewise returns to its initial state.

Description

Human-computer interaction method for a virtual ape-play game
Technical field
The present invention relates to a human-computer interaction method for a virtual ape-play game, widely applicable to digital exhibition in science and technology centers and museums, and belongs to the field of computer virtual reality.
Background technology
At present, many exhibits in science and technology centers and museums are presented with computer technology, and some human-computer interaction means are provided to strengthen visitor participation. In such public settings, complicated interaction devices such as helmets or data gloves are not only expensive but also inconvenient to wear, and the resulting interaction is unnatural. To achieve more harmonious and natural human-computer interaction, some systems apply computer vision research results, analyzing video of human motion to realize interaction. For example, the virtual dancing system built at Carnegie Mellon University (Liu Ren, Gregory Shakhnarovich, Jessica K. Hodgins. Learning Silhouette Features for Control of Human Motion. Technical Report of Carnegie Mellon University, July 2004) uses three cameras to film a real person in motion, extracts the human motion silhouette, compares it with templates in a template library to obtain motion information, and finally drives a virtual human in a 3D environment in real time with this information to realize virtual dancing. At the Dalí exhibition held at the China Millennium Monument in Beijing in 2002, one digital exhibit used a computer to display Dalí's paintings while a camera captured the visitor's hand motion; the displayed works changed continuously with the hand motion, thereby realizing human-computer interaction.
The key to realizing human-computer interaction with computer vision techniques is recognizing the participant's motion. Accurately analyzing human posture requires building complicated human-body models, as in S. Yonemoto, N. Tsuruta, and R. Taniguchi. Tracking of 3D multi-part objects using multiple viewpoint time-varying sequences. In Proc. Int. Conf. Pattern Recognition, pages 490-494, 1998; T. Nunomaki, S. Yonemoto, D. Arita, R. Taniguchi, and N. Tsuruta. Multi-part non-rigid object tracking based on time model-space gradients. In Proc. Int. Workshop Articulated Motion and Deformable Objects, pages 720-82, 2002; and L. Herda, P. Fua, R. Plankers, R. Boulic, and D. Thalmann. Skeleton-Based Motion Capture for Robust Reconstruction of Human Motion. In Proc. Computer Animation, pages 77-83, 2000. All of these need a large amount of computation, so such systems find real-time recognition difficult. For real-time analysis of human motion, simplified human-body models are generally adopted, mainly methods based on analyzing the human contour region, as in M. K. Leung and Y.-H. Yang. First sight: a human body outline labeling system. IEEE Trans. Pattern Analysis and Machine Intelligence, (17)4:359-377, 1995, and K. Takahashi, T. Sakaguchi, and J. Ohya. Remarks on a real-time 3D human body posture estimation method using trinocular images. In Proc. Int. Conf. Pattern Recognition, Vol. 4, pages 693-697, 2000. The limitation of these methods is that they estimate the human motion posture from limited image information.
According to the "Fangji" chapter of the Book of Wei in the Records of the Three Kingdoms, the Chinese physician Hua Tuo, by observing the movements of animals, imitated the typical postures and actions of five creatures (ape, hawk, tiger, bear, and deer) and, according to the requirements of physical exercise, arranged them into the "Five-Animal Exercises" (comprising the ape play, hawk play, tiger play, bear play, and deer play). The appearance of the Five-Animal Exercises marked a new level in the development of the ancient Chinese art of guided exercise. With present virtual reality methods, however, the user must wear interaction devices to reproduce the "ape play" of the Five-Animal Exercises, which is very inconvenient.
Summary of the invention
The technical problem solved by the present invention: to overcome the deficiencies of the prior art by providing a natural mode of human-computer interaction in which the user, without wearing any interactive device and without using traditional devices such as a keyboard or mouse, can take part in the "ape play" — a human-computer interaction method for the virtual ape-play game that combines education with entertainment.
Technical solution of the present invention: a human-computer interaction method for the virtual ape-play game, characterized by the following steps:
(1) a real athlete performs a complete set of ape-play movements, and motion capture equipment acquires the motion data of the human body;
(2) commercial 3D modeling software is used to build the 3D virtual scene and virtual character, and a 3D graphics engine applies the captured ape-play motion data to the virtual human, realizing a virtual reproduction of the ape play;
(3) during playback of the virtual ape-play game, three checkpoints are set according to the difficulty of the movements; when playback reaches a checkpoint it pauses, the participant is asked to imitate the current movement, and the participant's motion is captured in real time by a camera installed in front of him;
(4) the system recognizes the participant's movement; if the imitation is correct, the system uses a morphing method to gradually transform the image of the participant's movement into the corresponding real ape movement, then plays a short video of a real ape leaping joyfully in the wild; the checkpoint is passed and the system continues playing the ape play until the next checkpoint, where step (4) repeats; if the participant passes all three checkpoints, the imitation succeeds and the system returns to its initial state;
(5) if the participant cannot correctly imitate the ape-play movements within the allotted time, the imitation fails and the system returns to its initial state.
The motion recognition method in step (4) above is:
(1) for each of the three movements to be recognized, build a standard-movement template library;
(2) extract the participant's image: the participant imitates the ape-play movements in front of a blue background, the system captures his image, and the blue-background image taken with nobody present is subtracted from it, realizing real-time extraction of the participant's image;
(3) recognize the participant's movement.
Step (3) above recognizes by template matching: the captured image of the participant's movement is first normalized so that the image size and the person's position in the image are unified; the normalized image is then compared with the corresponding template in the template library. If the stipulated matching condition is satisfied, the participant's movement is correct; otherwise the imitation fails.
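The normalization described here (unify the image size and the person's position) can be sketched as follows, using the 64x64 silhouette size and column-32 centring given in the patent's embodiment; the nearest-neighbour resampling and the bounding-box crop are implementation assumptions.

```python
import numpy as np

def normalize_silhouette(mask, size=64):
    """Crop a binary silhouette to its bounding box, scale it so its
    height is `size` pixels, and centre it horizontally at column
    size // 2 (the 32nd pixel for size = 64)."""
    ys, xs = np.nonzero(mask)
    crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h, w = crop.shape
    scale = size / h
    new_w = min(size, max(1, round(w * scale)))
    # Nearest-neighbour resampling to size rows x new_w columns.
    rows = (np.arange(size) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    scaled = crop[rows][:, cols]
    # Paste into a size x size canvas, centred horizontally.
    out = np.zeros((size, size), dtype=mask.dtype)
    left = size // 2 - new_w // 2
    out[:, left:left + new_w] = scaled
    return out

# A 10x10 frame containing a 6-tall, 2-wide silhouette.
sil = np.zeros((10, 10), dtype=np.uint8)
sil[2:8, 4:6] = 1
norm = normalize_silhouette(sil)
```

After this step, silhouettes from participants of different heights and standing positions occupy comparable pixels, so they can be compared against one fixed template.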
The steps of the 3D graphics engine in step (2) above are as follows:
(1) first load the model data of the 3D scene and virtual human built with 3DS MAX, then render;
(2) for rendering the virtual human's motion, take the motion data of the real performer's ape play obtained by the motion capture equipment and render each frame according to the positions of the virtual human's bones and skin, realizing a continuous rendering of the virtual ape-play performance.
The morphing method in step (4) above is as follows: using the matching process described in step (3), a correspondence is established between the image of the participant's movement and the image of the corresponding real ape movement, and a field-based morphing algorithm realizes a continuous transformation from the participant's image to the real ape image.
Compared with existing interaction modes, the beneficial effect of the interaction mode of the present invention is that the user needs no interactive device at all: human-computer interaction with the virtual ape-play game is realized purely through body movements, so the user participates in the "virtual ape play" very naturally, combining education with entertainment. This interaction method can be widely applied to digital exhibition in science and technology centers and museums.
Description of drawings
Fig. 1 shows the main flow chart of the human-computer interaction method of the virtual ape-play game of the present invention;
Fig. 2 shows the system layout of the present invention;
Fig. 3 shows the motion recognition method of the present invention;
Fig. 4 shows the structure of the 3D graphics engine of the present invention;
Fig. 5 shows the image-morphing algorithm of the present invention.
Table 1 shows the criteria for judging the correctness of a user's movement.
Embodiment
As shown in Fig. 1, the human-computer interaction method of the virtual ape-play game of the present invention adopts the following steps:
(1) Virtual reproduction of the ape play: motion capture equipment acquires the motion data of a real performer performing the ape play.
(2) Commercial modeling software such as 3DS MAX or Maya is used to build the 3D virtual scene and virtual human; the acquired ape-play motion data is then applied to the virtual human, and the developed 3D graphics engine virtually reproduces the whole ape play.
The 3D graphics engine structure is shown in Fig. 4; the 3D models are built with 3DS MAX. 3DS MAX is one of the most commonly used modeling packages; it adopts an open architecture in which most functionality consists of plug-ins, so developers can write plug-ins for their own needs. In Fig. 4, a scene-export plug-in obtains the 3D model data from 3DS MAX and saves it to a file. The rendering-management module first loads this file, places the model data of the 3D virtual scene and virtual human into defined data structures, and then renders. For rendering the virtual human's motion, the positions of the virtual human's bones and skin are taken frame by frame from the motion data captured from the real performer; this is a dynamic process that yields a continuous rendering of the virtual ape-play performance, outputting a continuous segment of virtual human motion.
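The rendering loop described above — load the captured motion data, then drive the virtual human's skeleton frame by frame — can be sketched as follows. This is a minimal illustration only; the `MocapFrame` structure, joint naming, and `draw` callback are assumptions, since the internals of the patent's engine are not published.

```python
from dataclasses import dataclass

@dataclass
class MocapFrame:
    """One captured frame of ape-play motion: joint name -> (x, y, z).
    The joint naming is a hypothetical stand-in for the bone/skin
    positions the engine reads from the motion-capture file."""
    joints: dict

def play_motion(frames, draw):
    # Drive the virtual human one frame at a time, as the
    # rendering-management module does for a continuous performance.
    for frame in frames:
        draw(frame.joints)

# Usage: collect the joint sets that would be rendered.
rendered = []
play_motion([MocapFrame({"root": (0.0, 0.0, 0.0)}),
             MocapFrame({"root": (0.0, 1.0, 0.0)})],
            rendered.append)
```

In a real engine, `draw` would pose the skinned mesh from the joint positions and present the frame; here it only records what would be rendered.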
(3) During playback of the virtual ape-play game, three checkpoints are set according to the difficulty of the movements. When playback reaches a checkpoint it pauses, and the participant is asked to imitate the current movement; the participant's motion is captured in real time by a camera installed in front of him.
(4) The system recognizes the participant's movement. If the imitation is correct, the system uses a morphing method to gradually transform the image of the participant's movement into the corresponding real ape movement, then plays a short video of a real ape leaping joyfully in the wild; the checkpoint is passed, and the system continues playing the ape play until the next checkpoint, where step (4) repeats. If the participant passes all three checkpoints, the imitation succeeds and the system returns to its initial state. This step mainly comprises the following five technical points:
(i) Build the standard movement template library. As shown in Fig. 3, there are three movements to be recognized, corresponding to three template files: act1.txt, act2.txt, and act3.txt. The human body is divided into 8 parts; the red parts in Fig. 3 are the parts of the movement to be recognized. The image information of these 8 regions is recorded in the corresponding template file.
(ii) Construction of the virtual ape-play game system. The whole system is laid out as shown in Fig. 2: the participant stands in front of a blue background, a large-screen display is placed about 4 meters away, and a camera sits above the display. Both display and camera are controlled by a computer. The system first plays a 3D animation of the complete set of ape-play movements so that the audience gains a comprehensive understanding of the ape play; it then plays in segments, with three checkpoints set according to the difficulty of the movements. When playback reaches a checkpoint it pauses, and the participant imitates the movement, which is captured by the camera installed in front of him. The computer connects to the camera through a video capture card; the capture card adopted in the present invention is a Daheng CG300 Video Capture, and the camera is a SAERIM DSP 220X DIGITAL ZOOM color camera.
(iii) Real-time extraction of the participant's silhouette. The blue-background image taken with nobody present is subtracted from the captured image of the participant, realizing real-time extraction of the participant's silhouette. To ease the subtraction of the two images, both are in HSV format.
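The subtraction in (iii) can be sketched with NumPy. The patent only states that both images are in HSV and are subtracted; the per-channel comparison and the threshold value below are illustrative assumptions.

```python
import numpy as np

def extract_silhouette(frame_hsv, background_hsv, threshold=30):
    """Subtract the empty blue-background image from the current frame
    (both already converted to HSV) and keep pixels that changed."""
    diff = np.abs(frame_hsv.astype(np.int16) - background_hsv.astype(np.int16))
    # A pixel belongs to the participant if any HSV channel changed enough.
    mask = (diff.max(axis=-1) > threshold).astype(np.uint8)
    return mask

# Tiny synthetic example: a 4x4 "background" and a frame with a 2x2 person.
bg = np.full((4, 4, 3), 100, dtype=np.uint8)
frame = bg.copy()
frame[1:3, 1:3] = 200          # the "participant" region
mask = extract_silhouette(frame, bg)
```

A real pipeline would convert the capture-card frames from RGB to HSV first and clean the mask with morphological filtering; those steps are omitted here.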
(iv) The participant's movement is recognized by template matching, as follows:
A. Shrink transform: the participant's silhouette image is scaled down to a resolution of 64x64, so that the height of the silhouette is 64 pixels. This avoids recognition failures caused by participants of different heights.
B. Translation transform: the participant's silhouette is translated within the 64x64 image so that the midpoint between the left and right edges of the head lies at the horizontal center of the image, i.e. at the 32nd pixel column. This avoids recognition errors caused by participants standing at different positions.
C. Region information of the movement: the participant's movement is divided into the parts shown in Fig. 3 (head, torso, left and right hands, and so on), and the rectangular region of each part is recorded in the files act1.txt, act2.txt, and act3.txt. Each line of these files holds the region of one part in the form x1.y1.x2.y2.x3.y3.x4.y4, where (x1, y1) is the upper-left corner of the region, (x2, y2) the upper-right corner, (x3, y3) the lower-left corner, and (x4, y4) the lower-right corner. The recognition of each movement uses the information of 8 regions. The parts of the three movements to be recognized are:
D. act1: head, torso, left and right hands, left and right legs, and restricted regions for the hands;
E. act2: head, torso, left and right hands, waist, lower legs, and restricted regions for the hands;
F. act3: head, torso, left and right hands, waist, lower legs, and restricted regions for the hands.
G. Pixel information of the movement: count the number of pixels of each part within each region of the standard movement (taking the 64x64 resolution as the reference) and use these counts as the criterion for judging the correctness of the user's movement. The concrete criteria are shown in Table 1.
Table 1

Zone              ACT1 (pixels)   ACT2 (pixels)   ACT3 (pixels)
Zone 1            >50             >90             >75
Zone 2            >200            >250            >150
Zone 3            >250            >20             >40
Zone 4            <230            >40             >50
Zone 5            >20             >100            >120
Zone 6            >20             >250            >150
Zone 7            <5              <5              <5
Zone 8            <5              <5              <5
Body height       reduce by 20    reduce by 40    reduce by 15
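The per-zone pixel-count test of Table 1 can be sketched as follows. Only the ACT1 thresholds are taken from the table; the zone rectangles below are made-up placeholders, since the real regions come from the template files act1.txt–act3.txt.

```python
import numpy as np

# ACT1 thresholds from Table 1: zone id -> (comparison, pixel count).
ACT1_RULES = {
    1: (">", 50), 2: (">", 200), 3: (">", 250), 4: ("<", 230),
    5: (">", 20), 6: (">", 20), 7: ("<", 5), 8: ("<", 5),
}

def action_matches(mask, zones, rules):
    """mask: normalized 64x64 silhouette; zones: zone id -> (y0, y1, x0, x1)
    rectangle (hypothetical layout standing in for the template-file
    regions). The movement is correct only if every zone's pixel
    count satisfies its rule."""
    for zone_id, (op, threshold) in rules.items():
        y0, y1, x0, x1 = zones[zone_id]
        count = int(mask[y0:y1, x0:x1].sum())
        if not (count > threshold if op == ">" else count < threshold):
            return False
    return True

# Synthetic silhouette that satisfies every ACT1 rule.
zones = {1: (0, 8, 0, 8),   2: (8, 24, 0, 16),  3: (24, 40, 0, 16),
         4: (0, 8, 8, 16),  5: (40, 48, 0, 8),  6: (48, 56, 0, 8),
         7: (0, 4, 60, 64), 8: (4, 8, 60, 64)}
mask = np.zeros((64, 64), dtype=np.uint8)
for z in (1, 2, 3, 5, 6):
    y0, y1, x0, x1 = zones[z]
    mask[y0:y1, x0:x1] = 1
```

The "less than" zones (4, 7, 8) act as the restricted regions of steps D–F: the match fails if too many silhouette pixels stray into them.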
(v) The morphing method adopted by the system proceeds as follows:
As shown in Fig. 5, suppose the point in the source image matching a point X in the morphed image is X'. The position of X' is determined by a weighted average over the distances from X to each feature line segment PQ. For each corresponding line segment P'Q' in the source image, X, X', and PQ satisfy:
u = ((X - P) · (Q - P)) / ||Q - P||^2

v = ((X - P) · Perpend(Q - P)) / ||Q - P||

X' = P' + u · (Q' - P') + v · Perpend(Q' - P') / ||Q' - P'||
where Perpend(·) denotes the perpendicular (90°-rotated) vector.
The contribution of each feature line segment to the final X' is determined by:
weight = (length^p / (a + dist))^b
where length is the length of the line segment and dist is the shortest distance from the pixel to the current segment; a is the strength factor of a line's influence (the smaller a, the stronger the influence); b determines how a line's influence falls off with distance (when b = 0, all lines have the same influence); and p controls the influence of line length on the weight (if p = 0, all lines have equal weight; if p = 1, longer lines carry greater weight).
The algorithm proceeds as follows:
For each pixel X in the destination image
    DSUM = (0, 0)
    weightsum = 0
    For each line PiQi
        compute u, v based on PiQi
        compute Xi' based on u, v, and Pi'Qi'
        Di = Xi' - X
        dist = shortest distance from X to PiQi
        weight = (length^p / (a + dist))^b
        DSUM += Di * weight
        weightsum += weight
    X' = X + DSUM / weightsum
    destinationImage(X) = sourceImage(X')
End
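The per-pixel mapping of the equations and pseudocode above can be implemented directly in NumPy. This sketch covers the point-warp step only, with illustrative parameter values a, b, p; looping it over every destination pixel and sampling the source image gives the full morph.

```python
import numpy as np

def perpend(v):
    """Perpend(.) of the equations: the vector rotated 90 degrees."""
    return np.array([-v[1], v[0]], dtype=float)

def warp_point(X, dst_lines, src_lines, a=1.0, b=2.0, p=0.5):
    """Map a destination pixel X to its source position X' by the
    weighted multi-line field-warping equations given above."""
    X = np.asarray(X, dtype=float)
    dsum = np.zeros(2)
    weightsum = 0.0
    for (P, Q), (P2, Q2) in zip(dst_lines, src_lines):
        P, Q, P2, Q2 = (np.asarray(t, dtype=float) for t in (P, Q, P2, Q2))
        PQ, PQ2 = Q - P, Q2 - P2
        u = np.dot(X - P, PQ) / np.dot(PQ, PQ)
        v = np.dot(X - P, perpend(PQ)) / np.linalg.norm(PQ)
        Xp = P2 + u * PQ2 + v * perpend(PQ2) / np.linalg.norm(PQ2)
        # dist: shortest distance from X to the segment PQ
        # (to an endpoint when u falls outside [0, 1]).
        if u < 0:
            dist = np.linalg.norm(X - P)
        elif u > 1:
            dist = np.linalg.norm(X - Q)
        else:
            dist = abs(v)
        length = np.linalg.norm(PQ)
        weight = (length ** p / (a + dist)) ** b
        dsum += (Xp - X) * weight
        weightsum += weight
    return X + dsum / weightsum

# One horizontal feature line, shifted up by 5 pixels in the source image:
# a destination pixel near the line should map 5 pixels up as well.
Xp = warp_point((5, 3), dst_lines=[((0, 0), (10, 0))],
                src_lines=[((0, 5), (10, 5))])
```

With a single feature line the weight cancels out and the mapping is exact; with several lines, the weighted average blends their (generally conflicting) displacements.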
(5) If the participant cannot correctly imitate the ape-play movements within the allotted time, the imitation fails and the system returns to its initial state.

Claims (5)

1. A human-computer interaction method for a virtual ape-play game, characterized by the following steps:
(1) a real athlete performs a complete set of ape-play movements, and motion capture equipment acquires the motion data of the human body;
(2) commercial 3D modeling software is used to build the 3D virtual scene and virtual character, and a 3D graphics engine applies the captured ape-play motion data to the virtual human, realizing a virtual reproduction of the ape play;
(3) during playback of the virtual ape-play game, three checkpoints are set according to the difficulty of the movements; when playback reaches a checkpoint it pauses, the participant is asked to imitate the current movement, and the participant's motion is captured in real time by a camera installed in front of him;
(4) the system recognizes the participant's movement; if the imitation is correct, the system uses a morphing method to gradually transform the image of the participant's movement into the corresponding real ape movement, then plays a short video of a real ape leaping joyfully in the wild; the checkpoint is passed and the system continues playing the ape play until the next checkpoint, where step (4) repeats; if the participant passes all three checkpoints, the imitation succeeds and the system returns to its initial state;
(5) if the participant cannot correctly imitate the ape-play movements within the allotted time, the imitation fails and the system returns to its initial state.
2. The human-computer interaction method for a virtual ape-play game according to claim 1, characterized in that the motion recognition method in step (4) is:
(1) for each of the three movements to be recognized, build a standard-movement template library;
(2) extract the participant's image: the participant imitates the ape-play movements in front of a blue background, the system captures his image, and the blue-background image taken with nobody present is subtracted from it, realizing real-time extraction of the participant's image;
(3) recognize the participant's movement.
3. The human-computer interaction method for a virtual ape-play game according to claim 1, characterized in that step (3) recognizes by template matching: the captured image of the participant's movement is first normalized so that the image size and the person's position in the image are unified; the normalized image is then compared with the corresponding template in the template library; if the stipulated matching condition is satisfied, the participant's movement is correct, otherwise the imitation fails.
4. The human-computer interaction method for a virtual ape-play game according to claim 1, characterized in that the steps of the 3D graphics engine in step (2) are as follows:
(1) first load the model data of the 3D scene and virtual human built with 3DS MAX, then render;
(2) for rendering the virtual human's motion, take the motion data of the real performer's ape play obtained by the motion capture equipment and render each frame according to the positions of the virtual human's bones and skin, realizing a continuous rendering of the virtual ape-play performance.
5. The human-computer interaction method for a virtual ape-play game according to claim 1, characterized in that the morphing method in step (4) is as follows: using the matching process described in step (3), a correspondence is established between the image of the participant's movement and the image of the corresponding real ape movement, and a field-based morphing algorithm realizes a continuous transformation from the participant's image to the real ape image.
CNA2005100862540A 2005-08-19 2005-08-19 Human-computer interaction method for dummy ape game Pending CN1731316A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2005100862540A CN1731316A (en) 2005-08-19 2005-08-19 Human-computer interaction method for dummy ape game

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNA2005100862540A CN1731316A (en) 2005-08-19 2005-08-19 Human-computer interaction method for dummy ape game

Publications (1)

Publication Number Publication Date
CN1731316A true CN1731316A (en) 2006-02-08

Family

ID=35963686

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2005100862540A Pending CN1731316A (en) 2005-08-19 2005-08-19 Human-computer interaction method for dummy ape game

Country Status (1)

Country Link
CN (1) CN1731316A (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101202994B (en) * 2006-12-14 2012-10-24 北京三星通信技术研究有限公司 Method and device assistant to user for body-building
US20140295393A1 (en) * 2007-08-10 2014-10-02 Industrial Technology Research Institute Interactive rehabilitation method and system for movement of upper and lower extremities
US20090042695A1 (en) * 2007-08-10 2009-02-12 Industrial Technology Research Institute Interactive rehabilitation method and system for movement of upper and lower extremities
CN101564594B (en) * 2008-04-25 2012-11-07 财团法人工业技术研究院 Interactive type limb action recovery method and system
CN102448561A (en) * 2009-05-29 2012-05-09 微软公司 Gesture coach
CN102448561B (en) * 2009-05-29 2013-07-10 微软公司 Gesture coach
CN102136070B (en) * 2010-01-22 2013-10-30 财团法人工业技术研究院 Posture recognizing method and system as well as computer program product
CN102136070A (en) * 2010-01-22 2011-07-27 财团法人工业技术研究院 Posture recognizing method and system as well as computer program product
CN101947386A (en) * 2010-09-21 2011-01-19 浙江大学 Spatial position calculating technology based realization method and device for playing music cantor games
CN103327235A (en) * 2012-03-21 2013-09-25 卡西欧计算机株式会社 Image processing device and image processing method
CN103325134A (en) * 2012-03-23 2013-09-25 天津生态城动漫园投资开发有限公司 Real-time three-dimensional animation (2K) creation platform
CN103179437A (en) * 2013-03-15 2013-06-26 苏州跨界软件科技有限公司 System and method for recording and playing virtual character videos
WO2016107226A1 (en) * 2014-12-29 2016-07-07 深圳Tcl数字技术有限公司 Image processing method and apparatus
CN105809653A (en) * 2014-12-29 2016-07-27 深圳Tcl数字技术有限公司 Image processing method and device
CN105809653B (en) * 2014-12-29 2019-01-01 深圳Tcl数字技术有限公司 Image processing method and device
CN108307183A (en) * 2018-02-08 2018-07-20 广州华影广告有限公司 Virtual scene method for visualizing and system
CN110531854A (en) * 2019-08-27 2019-12-03 深圳创维-Rgb电子有限公司 A kind of action imitation display methods, action imitation display system and storage medium
CN115292548A (en) * 2022-09-29 2022-11-04 合肥市满好科技有限公司 Virtual technology-based drama propaganda method and system and propaganda platform
CN115292548B (en) * 2022-09-29 2022-12-09 合肥市满好科技有限公司 Virtual technology-based drama propaganda method and system and propaganda platform

Similar Documents

Publication Publication Date Title
CN1731316A (en) Human-computer interaction method for dummy ape game
WO2021129064A1 (en) Posture acquisition method and device, and key point coordinate positioning model training method and device
Singh et al. Video benchmarks of human action datasets: a review
CN106650687B (en) Posture correction method based on depth information and skeleton information
Negin et al. A decision forest based feature selection framework for action recognition from rgb-depth cameras
CN105107200B (en) Face Changing system and method based on real-time deep body feeling interaction and augmented reality
CN106843460B (en) Multiple target position capture positioning system and method based on multi-cam
CN101853399B (en) Method for realizing blind road and pedestrian crossing real-time detection by utilizing computer vision technology
CN103186775B (en) Based on the human motion identification method of mix description
Chen et al. Computer-assisted self-training system for sports exercise using kinects
CN110929596A (en) Shooting training system and method based on smart phone and artificial intelligence
CN102800126A (en) Method for recovering real-time three-dimensional body posture based on multimodal fusion
CN109766856A (en) A kind of method of double fluid RGB-D Faster R-CNN identification milking sow posture
CN106599770A (en) Skiing scene display method based on body feeling motion identification and image matting
CN1949274A (en) 3-D visualising method for virtual crowd motion
Aloba et al. Kinder-Gator: The UF Kinect Database of Child and Adult Motion.
Zhang et al. Research on volleyball action standardization based on 3D dynamic model
He et al. A new Kinect-based posture recognition method in physical sports training based on urban data
Liu et al. Trampoline motion decomposition method based on deep learning image recognition
CN103310191A (en) Human body action identification method for motion information imaging
CN114565976A (en) Training intelligent test method and device
CN110348344A (en) A method of the special facial expression recognition based on two and three dimensions fusion
WO2016107226A1 (en) Image processing method and apparatus
CN110743160B (en) Real-time pace tracking system and pace generation method based on somatosensory capture device
CN103020631B (en) Human movement identification method based on star model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication