CN102179048A - Method for implementing realistic game based on movement decomposition and behavior analysis - Google Patents

Method for implementing realistic game based on movement decomposition and behavior analysis

Info

Publication number
CN102179048A
CN102179048A (also published as CN 102179048 A; application numbers CN2011100469754A / CN201110046975A)
Authority
CN
China
Prior art keywords
game
information
obtains
image
player
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011100469754A
Other languages
Chinese (zh)
Inventor
黄立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WUHAN CITY GAODE ELECTRIC CO Ltd
Original Assignee
WUHAN CITY GAODE ELECTRIC CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WUHAN CITY GAODE ELECTRIC CO Ltd filed Critical WUHAN CITY GAODE ELECTRIC CO Ltd
Priority to CN2011100469754A priority Critical patent/CN102179048A/en
Publication of CN102179048A publication Critical patent/CN102179048A/en
Pending legal-status Critical Current

Abstract

The invention relates to a method for implementing a realistic game, which comprises the following steps: (1) establishing a human skeleton model; (2) establishing a game action library offline, building a separate action library for each game type, and decomposing each single semantic action into multiple frames; (3) calibrating the binocular cameras to obtain the camera parameters and performing epipolar calibration; (4) background modeling; (5) selecting interactive feature labels; (6) foreground segmentation; (7) initializing the feature label information; (8) detecting the human face and skin color; (9) multi-target tracking; (10) performing sparse stereo matching; (11) acquiring the 3D skeleton; and (12) matching the player's actions against the actions in the offline libraries to realize action recognition, combining single-frame and multi-frame image analysis to obtain the semantic action, and transferring the semantic action to the game execution unit to implement the realistic game. Offering stronger interactivity and realism as well as simpler and more convenient operation, the method of the invention is a low-cost game implementation method well suited to wide acceptance by the general public.

Description

Method for implementing a realistic game based on movement decomposition and behavior analysis
Technical field
The present invention relates to a method for implementing a realistic game based on movement decomposition and behavior analysis.
Background technology
The game market is currently developing rapidly: game consoles of all kinds emerge in an endless stream, paired with a rich variety of game software to satisfy consumer demand. The interactive mode of games has gradually evolved from the early keyboard and gamepad to modes based on gravity, velocity, and other sensors. With the early keyboard-and-gamepad mode, players often sit in front of the screen operating the game for long stretches, which does considerable harm to their physical condition. Sensor-based realistic games, meanwhile, still suffer from limited playability and convenience, and their cost is high. What is therefore needed is a mode of operation that is more interactive, easier to operate, gives the player a stronger sense of reality, and costs less, so as to improve the playability of games and meet the requirement of wide acceptance by the general public.
Summary of the invention
The purpose of the present invention is to overcome the problems and shortcomings of the prior art described above by providing a method for implementing a realistic game based on movement decomposition and behavior analysis. The method of the present invention is more interactive, easier to operate, more realistic, and low in cost.
The device for implementing a realistic game based on movement decomposition and behavior analysis comprises synchronized binocular cameras, a game host, an image display, and a set of feature markers. The game host is divided into four modules: 1) a preloaded database module; 2) a timing control module; 3) an image-analysis module that derives human behavior commands; and 4) a game execution module. The structure of the device is shown in Figure 1. The binocular cameras are connected to the game host by a data cable, the game host is connected to the image display by a data cable, and the feature markers are worn on the key joints of the user's body.
The preloaded database module is shared by the behavior-command analysis module and the game execution module. The timing control module transfers behavior-command information and communication instruction signals among the preloaded database module, the behavior-command analysis module, and the game execution module.
The device is used as follows. The user selectively wears feature markers on the key joints of the body. The synchronized binocular cameras capture images of the marker-wearing user's motion in real time; the behavior-command generation mechanism of the game host reads the binocular images, analyzes the human posture, and, once a posture command is obtained, sends the command to the game execution unit so that the virtual character moves the way the user does. The game execution unit then sends the generated game image to the display. The preloaded database contains the human skeleton model and the decomposed game-action pose library, all stored inside the game host in advance. During play, the user selects feature markers interactively, configuring them so that they do not clash with the player's clothing or the background and thereby defeat feature detection; the number and kind of markers worn are chosen according to the needs of the game type. The markers are small and light, convenient for the player to wear, and impose no restriction or influence on the player's movements. By analyzing the binocular images it acquires, the behavior-command generation mechanism of the game host produces captured pose information and communicates with the game execution unit, so that the virtual character in the game execution unit moves the way the user does.
The method for implementing a realistic game based on movement decomposition and behavior analysis proceeds according to the following steps: (1) offline, establish the human skeleton model in the game host; (2) offline, establish the game action library in the game host using the human skeleton model, building a separate action library for each game type and decomposing each single semantic action into multiple frames; (3) offline, calibrate the binocular cameras to obtain the camera parameters and perform epipolar calibration; (4) establish the background model in the game host; (5) the player wears feature markers on the key body nodes; (6) the game host performs foreground segmentation of the game area using the synchronized binocular cameras and the background model; (7) the game host initializes the player's feature markers via the binocular cameras, obtaining each marker's position and corresponding limb name, and acquires and stores the player's body-size information; (8) the game host detects the face and skin color via the binocular cameras and obtains the positions of the key body nodes; (9) the game host tracks the face, both hands, and the markers via the binocular cameras, obtaining the player's regions of interest and continuous key-node position information; (10) the game host performs sparse stereo matching of the face, hands, and markers between the left and right images, obtaining the disparity of each limb position in the two images; (11) the game host obtains the spatial coordinates of each limb position via the binocular cameras, acquires the player's 3D skeleton model in real time, and normalizes it to the standard human model; (12) the game host matches the player's actions against the actions in the offline library to realize action recognition, combining single-frame and multi-frame image analysis; the resulting semantic action is delivered to the game execution unit of the game host, implementing the realistic game.
In the present invention, the game device can run multiple games, such as tennis, badminton, and table tennis. Each game involves several actions; the execution unit of the game host controls the behavior of the virtual character in the game according to the commands sent by the action recognition unit, and outputs the result as the game picture.
In the present invention, the player's actions can be understood without any contact with the player. The binocular camera rig collects image data of the player; the image analysis unit then identifies the player's posture and behavior, and can also analyze the speed of the player's movements, generating game commands that control the behavior of the virtual character in the game.
Description of drawings
Fig. 1 is a schematic diagram of the structure of the device of the present invention.
Fig. 2a is the human skeleton model diagram;
Fig. 2b and Fig. 2c are human-body decomposition diagrams;
Fig. 3 shows the epipolar geometry;
Fig. 4a shows the left image after epipolar calibration;
Fig. 4b shows the right image after epipolar calibration;
Fig. 5 shows the foreground segmentation result;
Fig. 6a shows a region of interest;
Fig. 6b shows the color clustering result;
Fig. 7 shows a face detection result;
Fig. 8 shows the skin-color region extraction result;
Fig. 9 is a schematic diagram of the relationship between disparity and depth;
Fig. 10 is a schematic diagram of human-body normalization;
Fig. 11 is a schematic diagram of a movement trajectory;
Fig. 12 is the flow chart for acquiring human action information.
The specific embodiment
As shown in Figure 1, the device of the present invention for implementing a realistic game based on movement decomposition and behavior analysis comprises synchronized binocular cameras, a game host, an image display, and 15-20 kinds of feature markers. The game host is divided into four modules: 1) a preloaded database module; 2) a timing control module; 3) an image-analysis module that derives human behavior commands; and 4) a game execution module. The binocular cameras are connected to the game host by a data cable, the game host is connected to the image display by a data cable, and the 15-20 kinds of feature markers are worn on the key joints of the user's body. The preloaded database module is shared by the behavior-command analysis module and the game execution module. The timing control module transfers behavior-command information and communication instruction signals among the preloaded database module, the behavior-command analysis module, and the game execution module.
The key step of the present invention, obtaining the player's behavior commands, is described in detail below.
(1) Human skeleton model modeling (performed offline)
A human model composed of key body nodes (head, left shoulder, right shoulder, left elbow, right elbow, left hand, right hand, belly, left hip, right hip, left knee, right knee, left foot, right foot) is established, as shown in the skeleton model diagram of Fig. 2a. As shown in the human-body decomposition diagrams of Fig. 2b and Fig. 2c, a standard human model is built from adult limb-length proportions. Taking the person's full height as unit length 1, the upper-arm length is 0.18, the forearm length is 0.14, and so on. (The proportions are averages over 80% of the Chinese population, excluding the 20% who are unusually tall or short; men and women are both in the 18-25 age range.)
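A minimal sketch, in Python, of the normalized skeleton described above. The 14 node names follow the list in the text; only the upper-arm (0.18) and forearm (0.14) ratios come from the patent, and any further ratios would have to be filled in the same way.

```python
# Sketch of the 14-node standard skeleton model with limb lengths
# expressed as fractions of total body height (height = 1.0).
# Only the upper-arm (0.18) and forearm (0.14) ratios are stated in
# the text; other limbs would be added analogously.

NODES = [
    "head", "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_hand", "right_hand", "belly", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_foot", "right_foot",
]

# (node_a, node_b) -> length as a fraction of body height
LIMB_RATIOS = {
    ("right_shoulder", "right_elbow"): 0.18,  # upper arm (from the text)
    ("right_elbow", "right_hand"): 0.14,      # forearm (from the text)
    ("left_shoulder", "left_elbow"): 0.18,
    ("left_elbow", "left_hand"): 0.14,
}

def limb_length(node_a, node_b, body_height):
    """Absolute limb length for a player of the given height (same units)."""
    key = (node_a, node_b)
    ratio = LIMB_RATIOS.get(key) or LIMB_RATIOS.get((node_b, node_a))
    if ratio is None:
        raise KeyError(f"no ratio recorded for limb {node_a}-{node_b}")
    return ratio * body_height

# Example: upper-arm length for a 1.75 m player.
print(round(limb_length("right_shoulder", "right_elbow", 1.75), 3))  # 0.315
```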
(2) Building the game action library
According to the characteristics of the game actions, the actions involved in the game are captured and an action library is established. To guarantee efficient action matching later on, the action libraries are divided by game name: each game is assigned its own library, stored as an XML file. Because each game involves relatively few actions, real-time matching is efficient. Dividing the libraries by game type gives the system a degree of robustness to motion ambiguity and, for later matching, narrows the search range and improves matching precision.
The tennis game is taken as an example below. Tennis is mainly hand-driven, with the right hand playing the leading role. When building the action library for the tennis game, the common actions are analyzed according to the motion characteristics of tennis. The actions involved mainly include: the serve, the forehand return, the backhand return, the high forehand return, and so on. Within each action, different stroke speeds are formed according to how quickly the action is performed, which increases the variety of the game actions and improves the user's game experience.
Action 1: fast, medium, slow
Action 2: fast, medium, slow
Action 3: fast, medium, slow
Action 4: fast, medium, slow ... etc.
Each action consists of a continuous sequence of 15-20 frames (e.g. 20 frames).
For each frame, the 3D coordinates of the key nodes of the human model are recorded; according to the structural features of the human limbs, node lines are established and the length and angle of each line are recorded.
Node 1: Node1(X1, Y1, Z1)
Node 2: Node2(X2, Y2, Z2)
Line 1: Line1(node 1, node 2), Angle(L1) = θ1, Length(L1) = a1
Between successive frames, the trajectories of the action's key nodes are recorded as Curve1, Curve2, ... All of the above information is saved in XML form.
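The per-frame record above can be serialized to XML with Python's standard library; a sketch follows, with element and attribute names of my own choosing, since the patent does not specify the schema:

```python
import xml.etree.ElementTree as ET

def frame_to_xml(nodes, lines):
    """Serialize one frame of an action: 3D node coordinates plus the
    node lines with their length and angle, as the text describes.
    nodes: {name: (x, y, z)}; lines: [(node_a, node_b, length, angle)]."""
    frame = ET.Element("frame")
    for name, (x, y, z) in nodes.items():
        ET.SubElement(frame, "node", name=name, x=str(x), y=str(y), z=str(z))
    for a, b, length, angle in lines:
        ET.SubElement(frame, "line", a=a, b=b,
                      length=str(length), angle=str(angle))
    return ET.tostring(frame, encoding="unicode")

xml_text = frame_to_xml(
    {"Node1": (0.1, 0.2, 1.5), "Node2": (0.1, 0.4, 1.5)},
    [("Node1", "Node2", 0.2, 90.0)],
)
print(xml_text)
```

One file per game, holding all of its actions' frames, matches the per-game library layout described above.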
(3) Binocular camera calibration (performed offline), obtaining the binocular camera parameters
The intrinsic parameters (focal length f, principal point) and extrinsic parameters (spatial pose relationship) of the binocular cameras are obtained and saved to a file.
Both cameras are opened simultaneously, and the focal lengths of the left and right cameras are adjusted so that scenery at about 1.5 m (in the 1-2 m range) is observed with essentially identical sharpness in the two images. Zhang Zhengyou's camera calibration method is adopted, with a black-and-white checkerboard plate as the calibration target.
The transformation between image coordinates and world coordinates is given by formula (1): the 2D image coordinates are (u, v), the world coordinates are (X, Y, Z), and the camera intrinsic parameters are (f_x, f_y, u_0, v_0). World coordinates are converted to camera coordinates through the extrinsic matrices (rotation matrix R_{3×3} and translation vector t_{3×1}), and camera coordinates are converted to image coordinates through the intrinsic matrix.
$$z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R_{3\times3} & t_{3\times1} \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \qquad (1)$$
The calibration checkerboard is 7 × 9 squares, each square 30 mm on a side. Corner detection is performed on the images acquired in real time, and from the relationship between the corners of the physical calibration target (X, Y, Z) and the corners after projective imaging (u, v), the system of equations (2) is established:
$$\begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = \frac{1}{z_c} \begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R_{3\times3} & t_{3\times1} \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_i \\ Y_i \\ Z_i \\ 1 \end{bmatrix}, \qquad i = 1 \ldots m \qquad (2)$$
The non-homogeneous system of equations is solved by least squares to find the optimal solution, yielding the camera's intrinsic and extrinsic parameters, which are saved to a file; m is a natural number greater than 1.
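Formula (2) maps a world point through the extrinsic and then the intrinsic matrices to pixel coordinates. A pure-Python sketch of that projection follows; the numeric parameters in the example are invented for illustration only:

```python
def project(point_w, R, t, fx, fy, u0, v0):
    """Project a world point through extrinsics (R, t) and then the
    pinhole intrinsics, per formula (2): world -> camera -> image."""
    X, Y, Z = point_w
    # camera coordinates: Pc = R * Pw + t
    xc = R[0][0]*X + R[0][1]*Y + R[0][2]*Z + t[0]
    yc = R[1][0]*X + R[1][1]*Y + R[1][2]*Z + t[1]
    zc = R[2][0]*X + R[2][1]*Y + R[2][2]*Z + t[2]
    # image coordinates: divide by depth z_c, scale by focal length
    u = fx * xc / zc + u0
    v = fy * yc / zc + v0
    return u, v

# Identity extrinsics and illustrative intrinsics: a point 2 m ahead.
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
u, v = project((0.1, -0.05, 2.0), I3, [0, 0, 0],
               fx=800, fy=800, u0=320, v0=240)
print(u, v)  # 360.0 220.0
```

In calibration, this projection is evaluated for every detected checkerboard corner and the parameters are adjusted until the projected and detected corners agree in the least-squares sense.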
Steps (4)-(5) and (7)-(10) below are each performed separately on the left and right images.
(4) Epipolar calibration
The binocular images acquired by the cameras are epipolarly calibrated to make the subsequent sparse stereo matching of feature points more efficient; every frame is processed in real time.
The epipolar constraint holds in binocular stereo matching; the epipolar geometry is shown in Figure 3: for a space point with image point P1 in the left image, its match in the right image must lie on the right epipolar line corresponding to the left epipolar line (the epipolar line generated by P1). The epipolar lines can be computed from the perspective projection matrices of the binocular cameras; to improve computational efficiency, the epipolar lines are calibrated to coincide with the scan lines of the image, as shown in Figure 4.
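The constraint that P1's match lies on a corresponding line is standard stereo geometry and is often written with the fundamental matrix F: the line in the right image is l' = F·x, and a candidate x' matches only if x'ᵀ·l' = 0. The patent derives the lines from the projection matrices instead, so the sketch below, using the ideal rectified-pair F, is illustrative:

```python
def epipolar_line(F, x):
    """l' = F x: the epipolar line in the right image as (a, b, c),
    meaning a*u' + b*v' + c = 0 for any matching right point."""
    return tuple(sum(F[i][j] * x[j] for j in range(3)) for i in range(3))

def on_line(line, xp, tol=1e-9):
    """Check the epipolar constraint x'^T (F x) = 0."""
    a, b, c = line
    u, v, w = xp
    return abs(a*u + b*v + c*w) < tol

# Ideal rectified pair (pure horizontal baseline): the constraint
# reduces to "same row", v' = v -- exactly the scan-line alignment
# the calibration step aims for.
F_rect = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
line = epipolar_line(F_rect, (10.0, 5.0, 1.0))
print(on_line(line, (3.0, 5.0, 1.0)))  # True: same row
print(on_line(line, (3.0, 6.0, 1.0)))  # False: different row
```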
(5) Establishing the background model
In the unmanned scene, the background region is modeled with a Gaussian mixture model and the model parameters are learned, so that the background region's interference with skin-color and marker detection in the game scene can be excluded. A mixture of Gaussians is fitted to each pixel:
$$P(x_t) = \sum_{k=1}^{K} w_k \, \eta(x_t;\, m_k, \sigma_k)$$
The number of Gaussians K is set to 5. The key parameters of each Gaussian are its mean, variance, and weight: variance σ_k, learning rate α, weight w_k, and mean m_k. The learning rate for the first frame is 1, and (w_k, σ_k, m_k) are initialized from it; thereafter the model parameters are updated with the configured learning rate α, e.g. σ_t = (1-α)·σ_{t-1} + α·σ̂_t, where σ̂_t is the estimate from the current frame. The learning rate α during learning is set to 0.05; after about 3 seconds, once roughly 90 frames have been learned, the Gaussian background model is essentially stable.
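The running update above applies to every parameter of every Gaussian. A sketch simplified to a single Gaussian per pixel follows (the patent keeps K = 5 components and also maintains the weights; the initial variance and foreground threshold here are assumptions):

```python
class PixelBackground:
    """Single-Gaussian-per-pixel background model (the patent uses a
    K=5 mixture; one component is kept here for brevity).  Mean and
    variance are blended with learning rate alpha:
        m_t      = (1-a)*m_{t-1}      + a*x_t
        sigma2_t = (1-a)*sigma2_{t-1} + a*(x_t - m_t)^2"""
    def __init__(self, first_value):
        # learning rate of the first frame is 1: initialize from it
        self.mean = float(first_value)
        self.var = 15.0 ** 2          # assumed initial variance

    def update(self, value, alpha=0.05):
        self.mean = (1 - alpha) * self.mean + alpha * value
        diff = value - self.mean
        self.var = (1 - alpha) * self.var + alpha * diff * diff

    def is_foreground(self, value, k=2.5):
        """A pixel farther than k standard deviations is foreground."""
        return abs(value - self.mean) > k * self.var ** 0.5

bg = PixelBackground(100)
for _ in range(90):           # ~3 s of learning at 30 fps
    bg.update(100)
print(bg.is_foreground(101))  # False: matches the background
print(bg.is_foreground(200))  # True: clearly foreground
```

During play, the same update run with a much smaller learning rate (0.005, per step (7) of the description) keeps the model adaptive without absorbing the player into the background.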
(6) Selecting the feature markers
According to the game the player participates in, corresponding feature markers are chosen to be worn on the designated key limb nodes. The player enters the camera field of view and selects a game on the interactive interface; prompts appropriate to the game item are given, and the markers are selected and worn on the designated positions. In the tennis game, for example, markers are worn on the right shoulder and the right elbow.
(7) Foreground segmentation
Using the background model already obtained, foreground segmentation is performed on scene images containing a person. To keep the model adaptive, the background model must be updated in real time during segmentation, with a lower learning rate here, e.g. 0.005. After the human-body region is obtained, a morphological erosion filter is applied to remove the interference of small stray regions. The foreground segmentation result is shown in Figure 5.
(8) Initializing the feature label information
The identifiability of the feature labels is judged, the labels are detected, and the initial positions of the markers are obtained (in the left and right images). The labels are detected within the foreground human region. Marker information is acquired; after the identifiability of a marker is confirmed, its position and its correspondence to a limb name are obtained, and the player's body-size information is obtained and saved. During initialization, the player performs the prompted pose and waits for the game-start prompt. The player's assumed posture is standing, with both hands placed naturally at the sides. Detection here uses combined color and geometric features, so when selecting markers, ones that differ strongly from the player's clothing should be chosen; to widen the user's range of choice, 15-20 kinds of markers or more should be provided, to meet the needs of players in different clothing.
The positions and number of markers to be worn are set according to the content of the game, so the number of markers is known; the human region is known from foreground segmentation, so the approximate positions of the markers are also known. In the embodiment game, for example, the markers are on the right shoulder and right elbow; then in foreground image Image1 the region of interest ROI is the left side of the image, and from the ratio of arm length to head height, the height of the hand lies at roughly 0.4-0.5 of the person's height, so the region above the 0.4-height line in the left part of the image is the region of interest. As shown in Figure 6, color clustering is performed in this region; the clusters from top to bottom are cluster1, cluster2, ...; here the clustering yields 5 classes. The markers are rectangular, band-shaped, and of known size, i.e. their general proportion to the body size is known. From the geometric analysis of the cluster regions, cluster4 and cluster5 are identified as marker clusters, and from the positional relationship of the two cluster regions the corresponding limb names limb1 and limb2 are obtained. Each cluster region is mapped back into the RGB color space, and the three-channel histogram information of the marker cluster region is obtained. Take marker 1, cluster region cluster4, as an example: although the marker's color is uniform, after imaging the color of the marker region has a certain distribution; its histograms in the three RGB channels are HistR, HistG, HistB. Taking HistR as an example, gray values with low counts are excluded: the threshold is set here to Tvalue = Count × 0.15, where Count is the total number of pixels in the histogram. Gray values whose count is below Tvalue are rejected, and the maximum, minimum, and mean of the remaining histogram, MaxHistR, MinHistR, MeanHistR, are taken as the marker's color feature in the R channel. Because the illumination on the marker changes during motion, tests show that the maximum and minimum fluctuate, but the mean is more stable. The same procedure is applied to the other channels, yielding the marker's color features in the R, G, and B channels, which are saved. The feature information obtained for each marker therefore comprises: the minimum bounding rectangle rect of the marked region and its area; the center (X, Y) of the marked region; the three-channel histogram information (MeanHistR, MeanHistG, MeanHistB); and the limb name, e.g. right shoulder or right elbow.
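The histogram-pruning step for one channel can be sketched as follows, using Tvalue = Count × 0.15 exactly as in the text; the mean is taken count-weighted over the surviving gray values, which is one reasonable reading of the description:

```python
def channel_feature(pixels):
    """Marker color feature for one channel (e.g. R), per the text:
    build the histogram, discard gray values whose count is below
    Tvalue = 0.15 * total pixel count, then take the max, min and
    (count-weighted) mean of the surviving gray values."""
    hist = {}
    for p in pixels:
        hist[p] = hist.get(p, 0) + 1
    tvalue = len(pixels) * 0.15
    kept = {g: c for g, c in hist.items() if c >= tvalue}
    total = sum(kept.values())
    mean = sum(g * c for g, c in kept.items()) / total
    return max(kept), min(kept), mean

# A marker region that is mostly gray value ~200 with a few outliers.
pixels = [200] * 60 + [201] * 30 + [50] * 5 + [255] * 5
print(channel_feature(pixels))  # outliers 50 and 255 are rejected
```

Run over the R, G, and B channels, this yields the (MaxHist, MinHist, MeanHist) triple stored for each marker.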
(9) Face and skin-color detection
A face detection method is adopted to obtain face skin-color information, from which the positions of the hands are then derived. The player should present as frontal a face as possible to the cameras while the image is captured.
a) Face detection method
The mature AdaBoost training method is adopted to detect faces and obtain face positions. To prevent regions of the face that differ strongly from skin color from skewing the skin-color histogram, simple thresholding is applied to the face region after it is obtained: regions below the threshold are considered non-skin. The images acquired directly from the cameras are in the RGB color space, but human skin has quite good detection characteristics in the YCrCb color space, so the image is converted to YCrCb for processing.
The skin-color region in the YCrCb color space is characterized by: 133 < Cr < 173 and 77 < Cb < 127.
Thresholding excludes the non-skin regions of the face, as shown in Figure 7.
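The skin test above can be sketched as follows. The RGB→YCrCb conversion uses the common BT.601 form; the patent does not spell out its conversion, so that choice is an assumption:

```python
def rgb_to_ycrcb(r, g, b):
    """RGB -> YCrCb, BT.601 full-range form (assumed; the patent only
    states that images are converted to YCrCb before thresholding)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128
    cb = (b - y) * 0.564 + 128
    return y, cr, cb

def is_skin(r, g, b):
    """Skin-color test from the text: 133 < Cr < 173 and 77 < Cb < 127."""
    _, cr, cb = rgb_to_ycrcb(r, g, b)
    return 133 < cr < 173 and 77 < cb < 127

print(is_skin(200, 150, 120))  # True: a typical skin tone
print(is_skin(0, 255, 0))      # False: pure green
```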
b) Obtaining the hand positions
After the face region is obtained, the color histogram of the face is computed. Since the skin color of the hands is fairly consistent with that of the face, the positions of the hand regions can be inferred in reverse (the player must not wear gloves), as shown in Figure 8. During initialization, the player's posture must be fairly well determined, i.e. consistent with the posture required when acquiring the marker features. This process and the marker detection above can run synchronously in two threads to improve program execution speed.
(10) Multi-target tracking
The face, both hands, and the markers are tracked. As preprocessing, the current observation image is thresholded using the marker color information and the skin-color features obtained during initialization; the thresholds are set from the following two features:
a) Skin-color features: 133 < Cr < 173, 77 < Cb < 127
b) Marker color features: MaxHistR, MinHistR; MaxHistG, MinHistG; MaxHistB, MinHistB
The regions passing the two thresholding steps are merged to obtain candidate target regions of interest. Because the regions obtained may contain some non-target pixels, morphological erosion is applied to the regions of interest, yielding several target blocks. The features of the detected target blocks are computed and matched against the initial target features to determine which limb each block belongs to. A Kalman filter is used to predict the positions of the tracked targets, with a linear motion model:
$$X_t = X_{t-1} + \Delta x, \qquad Y_t = Y_{t-1} + \Delta y$$
where (X_{t-1}, Y_{t-1}) is the target state in the previous frame and Δx, Δy are the displacements in the X and Y directions over the previous frame. The prediction is corrected, and the target state updated, according to the observed position (X_c, Y_c) and the color feature information. The correction criterion is:
$$\min \; w_p \, d_{\mathrm{pos}}\big((X_c, Y_c), (X_t, Y_t)\big) + w_c \, d_{\mathrm{color}}, \qquad w_p + w_c = 1$$

That is, the candidate minimizing the weighted sum of the position distance and the color-feature distance is selected; position information and color information each carry a weight in the formula. When illumination is stable, the color-information weight can be increased; in general, a weight of 0.5 for each suffices.
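A sketch of the predict-and-correct cycle: linear-motion prediction per the equations above, and a candidate score combining position and color with equal weights of 0.5. The exact distance terms of the patent's correction formula are not recoverable from the text, so the Euclidean position distance and mean absolute color difference below are assumptions:

```python
def predict(x_prev, y_prev, dx, dy):
    """Linear motion model: X_t = X_{t-1} + dx, Y_t = Y_{t-1} + dy."""
    return x_prev + dx, y_prev + dy

def candidate_score(pred, cand_pos, ref_color, cand_color,
                    w_pos=0.5, w_color=0.5):
    """Weighted sum of position distance and color distance; the
    candidate with the smallest score is taken as the target."""
    d_pos = ((pred[0] - cand_pos[0]) ** 2 + (pred[1] - cand_pos[1]) ** 2) ** 0.5
    d_color = sum(abs(a - b) for a, b in zip(ref_color, cand_color)) / 3.0
    return w_pos * d_pos + w_color * d_color

pred = predict(100, 50, dx=4, dy=-2)              # -> (104, 48)
ref = (180.0, 60.0, 40.0)                          # marker mean RGB
candidates = [((105, 49), (182.0, 62.0, 41.0)),    # close in both
              ((105, 49), (30.0, 200.0, 30.0))]    # right place, wrong color
best = min(candidates, key=lambda c: candidate_score(pred, c[0], ref, c[1]))
print(best[0])  # (105, 49)
```

Under stable illumination, raising `w_color` and lowering `w_pos` would implement the weight shift the text describes.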
(11) Sparse stereo matching
The face, both hands, and the markers are matched between the left and right images (sparse stereo matching) to obtain the disparity of each limb position in the two images.
Disparity is obtained by stereo-matching the same regions of the face, hands, and markers according to their positions in the left and right images, yielding each position's disparity between the two images. Sparse stereo matching is used: feature points are chosen in the corresponding regions and matched with a feature-point matching algorithm. After the images have been epipolarly calibrated, matching feature points lie on the same row in the left and right images, a constraint that narrows the match search range.
With the left image as reference, after epipolar calibration and limb-information alignment, the regions to be matched are close to each other, differing only by disparity and partial occlusion between the left and right images. A filter of the same size can therefore extract consistent local-extremum feature points. Since a dense point-by-point match over the whole limb or marker region is unnecessary, only a few strongly distinctive feature points need be found; aided by the feature-point row constraint, and being insensitive to illumination, the feature matching is quite stable.
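After rectification, the search is one-dimensional along the row. A sketch of matching one left-image feature point by minimizing the sum of absolute differences over a small window follows; the window size and search range are my choices, not the patent's:

```python
def match_on_row(left_row, right_row, x_left, half_win=2, max_disp=10):
    """Find the right-image column matching left column x_left on the
    same (rectified) row, by minimal sum of absolute differences over
    a (2*half_win+1)-pixel window; the search spans at most max_disp."""
    def window(row, x):
        return row[x - half_win : x + half_win + 1]
    ref = window(left_row, x_left)
    best_x, best_cost = None, float("inf")
    for x in range(max(half_win, x_left - max_disp), x_left + 1):
        cand = window(right_row, x)
        cost = sum(abs(a - b) for a, b in zip(ref, cand))
        if cost < best_cost:
            best_x, best_cost = x, cost
    return best_x

# A distinctive 5-pixel pattern shifted left by 3 in the right image.
left  = [10, 10, 10, 10, 10, 50, 90, 50, 10, 10, 10, 10, 10, 10]
right = [10, 10, 50, 90, 50, 10, 10, 10, 10, 10, 10, 10, 10, 10]
x_r = match_on_row(left, right, x_left=6)
print(x_r, 6 - x_r)  # 3 3  -> disparity D = 3
```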
(12) Obtaining the 3D skeleton
The spatial coordinates of each limb position are obtained, and the 3D human skeleton model is acquired in real time and normalized. From the epipolar calibration parameters, the spatial coordinate parameters are computed from the field-of-view information; the relationship between disparity and spatial depth, shown in Figure 9, yields the spatial coordinates of each position. The human 3D skeleton model is obtained and, using the player's body-size information, the real-time model is normalized to the standard human model.
a) Disparity computation: matched feature points L1 and L2 correspond to the positions of the same space point in the left and right images; the disparity D = L1(x) - L2(x) is the difference of the two image points' x coordinates.
b) Model normalization: the body height is normalized to 1, and the distance from the camera is transformed to be consistent with the distance set in the action library; the scaling coefficient is obtained, and each joint coordinate and limb length of the body is adjusted by this coefficient, in preparation for the later action matching, as shown in Figure 10.
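The disparity-depth relationship of Figure 9 is, for a rectified pair, the standard Z = f·B/D. A sketch of it together with the height normalization follows; the focal length and baseline values are invented for illustration:

```python
def depth_from_disparity(d, focal_px, baseline_m):
    """Z = f * B / D for a rectified pair (Figure 9's relationship)."""
    return focal_px * baseline_m / d

def normalize_skeleton(joints, body_height):
    """Scale all joint coordinates so that body height becomes 1,
    putting the live model on the same footing as the standard model
    in the action library."""
    s = 1.0 / body_height
    return {name: (x * s, y * s, z * s) for name, (x, y, z) in joints.items()}

# Disparity D = 40 px with assumed f = 800 px, B = 0.12 m:
z = depth_from_disparity(40, focal_px=800, baseline_m=0.12)
print(round(z, 6))  # 2.4  (meters)

joints = {"head": (0.0, 1.75, 2.4), "right_hand": (0.4, 0.9, 2.4)}
norm = normalize_skeleton(joints, body_height=1.75)
print(round(norm["head"][1], 6))  # 1.0
```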
(13) Action recognition
The player's actions are matched against the offline game library to perform action recognition. The normalized 3D skeleton model acquired is matched against the actions in the offline library, comprising both single-frame image matching and multi-frame image matching.
Single-frame image matching considers: node position P; multi-node ordering S (whether the top-to-bottom and left-to-right ordering of the nodes in the image is consistent); adjacent-node line length L (mainly for the long-bone limbs such as the upper arm and forearm — whether the limb length is constant in 3D; here 2D length is considered, e.g. whether the upper-arm length in 2D space is close); and line angle A (the angle between the limb and the reference coordinates; given the 2D length relationships, considering the angle relationships in the 2D plane suffices).
Multi-frame image matching: the end-node motion trajectory C is the track over n frames, as shown in Figure 11. To avoid perceptible delay for the player, n < 20.
The matching degrees are accumulated as P = p1 + p2 + p3 + ... over all nodes, L = l1 + l2 + ... over all limb positions, and A = a1 + a2 + ... over all limb positions. The action or action sequence in the action library with the minimum difference from the current action or action sequence, i.e. the maximum matching degree, is selected.
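The accumulation of per-node and per-limb matching degrees described above can be sketched as follows. This is a simplified illustration under an assumption the text does not specify: each item's matching degree is taken here as 1/(1 + |difference|), so a smaller total difference yields a larger total matching degree. All names and the scalar representation of P, L, and A are hypothetical:

```python
def match_score(current, template):
    """Total matching degree between a current pose and a library template.
    current/template: dicts with per-node positions 'P', per-limb lengths 'L'
    and per-limb angles 'A' (scalars here for simplicity). Each item
    contributes 1/(1 + |difference|), so minimum difference = maximum score."""
    score = 0.0
    for key in ('P', 'L', 'A'):
        for cur, tpl in zip(current[key], template[key]):
            score += 1.0 / (1.0 + abs(cur - tpl))
    return score

def recognize(current, library):
    """Pick the library action whose template has the maximum matching
    degree with the current pose (i.e. the minimum overall difference)."""
    return max(library, key=lambda name: match_score(current, library[name]))
```

An exact match contributes 1 per item, so a pose identical to a template of four items scores 4.0 and wins against any differing template.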
The above steps are executed as shown in Figure 12.
Although the present invention has been illustrated above with preferred embodiments, these are not intended to limit the invention to those embodiments. Accordingly, modifications made without departing from the scope of the present invention shall all be included within the protection scope of the claims.

Claims (9)

1. A method for implementing a realistic game based on movement decomposition and behavior analysis, characterized by the following steps: (1) building a human skeleton model in the game host in an offline state; (2) in the offline state, building a game action library in the game host using the human skeleton model, with a separate action library for each game type, and decomposing each single semantic action into multiple frames; (3) in the offline state, calibrating the binocular camera to obtain the binocular camera parameters and the epipolar calibration; (4) building a background model in the game host; (5) the player wearing signature markers at the key nodes of the body; (6) the game host performing foreground segmentation of the game area using the synchronized binocular cameras and the background model; (7) the game host initializing the markers worn by the player via the synchronized binocular cameras, obtaining each marker's position and corresponding limb name, and obtaining and saving the player's body-size information; (8) the game host detecting the face and skin color via the synchronized binocular cameras to obtain the positions of the key body nodes; (9) the game host tracking the face, both hands and the markers via the synchronized binocular cameras, obtaining the player's region-of-interest blocks and the continuous position information of the key body nodes; (10) the game host performing sparse stereo matching of the face, both hands and the markers between the left and right images via the synchronized binocular cameras, obtaining the disparity of each limb position of the player in the left and right images; (11) the game host obtaining the spatial coordinates of each limb position via the synchronized binocular cameras, building the human 3D skeleton model in real time, and normalizing it to obtain the standard human model; (12) the game host matching the player's action against the actions in the offline action library, combining single-frame and multi-frame image analysis matching to perform action recognition, and delivering the resulting semantic action to the execution unit of the game host, thereby implementing the realistic game.
2. The method according to claim 1, characterized in that building the human skeleton model mainly consists of establishing the standard human model from the crucial nodes of the human body: head, left shoulder, right shoulder, left elbow, right elbow, left hand, right hand, belly, left hip, right hip, left knee, right knee, left foot and right foot.
3. The method according to claim 1, characterized in that building the game action library consists of capturing the actions involved in the game and building action libraries from them, dividing the libraries by game name, with one action library configured per game and saved in file form.
4. The method according to claim 1, characterized in that the background model is built as follows: in an unmanned scene, the background area is modeled with a Gaussian mixture model and the model parameters are learned, establishing a mixture-of-Gaussians distribution for each pixel; the learning rate for the first frame is set to 1, and after the first distribution has been initialized, the model parameters are updated with the configured learning rate, which is set to less than 1 during the learning process.
5. The method according to claim 1, characterized in that the feature-marker information is initialized as follows: during initialization, the player performs the prompted action and waits for the game-start prompt; the positions and number of the worn markers are set according to the game content, so the marker count is known, the human region is known from foreground segmentation, and the marker positions are known; according to the marker positions, the regions of interest in the foreground image are determined and color clustering is performed within them; the marker size is known and the player's body size is known from its proportions, and the marker clustering result is obtained by geometric feature analysis of the clustered regions; the corresponding limb name is obtained from the relative position of each clustered region and the human body; the clustered regions are projected back into the RGB color space to obtain and save the three-channel histogram of each marker region; the feature information obtained for each marker therefore comprises: the minimum bounding rectangle of the marked region, the center of the marked region, the three-channel histogram information, and the limb name information.
6. The method according to claim 1, characterized in that tracking the face, both hands and the markers to obtain the player's region-of-interest blocks and the continuous position information of the key body nodes proceeds as follows: according to the marker color information and the skin-color features obtained during initialization, the current observed image is threshold-preprocessed, with thresholds set from a) the skin-color features and b) the marker color features; the two thresholded regions are merged to obtain the target region-of-interest blocks; the features of the detected target blocks are computed and matched against the initial-state target features to determine which limb each block belongs to; combined with the Kalman filtering method, a linear motion model is used to predict the position of each tracked target, and the prediction is corrected from the observed position and color-feature information to update the target state.
7. The method according to claim 1, characterized in that the game host performs sparse stereo matching of the face, both hands and the markers between the left and right images via the synchronized binocular cameras to obtain the disparity of each limb position as follows: according to the positions of the face, both hands and the markers in the left and right images, feature points of the same region are chosen and matched using a feature-point matching algorithm; after epipolar calibration of the images, the matching benefits from the constraint that corresponding feature points lie on the same row in the left and right images.
8. The method according to claim 1, characterized in that the human 3D skeleton model is obtained as follows: according to the epipolar calibration parameters, the spatial coordinate parameters of each part of the player are computed from the field-of-view information, and the spatial coordinates of each part are obtained from the relationship between disparity and spatial depth, thereby obtaining the human 3D skeleton model, which is normalized in real time to the standard human model according to the player's body-size information; for the disparity calculation, the matched feature points correspond to the same spatial point in the left and right images, and the disparity is obtained from them; for model normalization, the body height is normalized to 1 and the distance from the camera is transformed to match the set distance in the action library, yielding a scaling coefficient by which each joint coordinate position and limb length of the human model is scaled in preparation for the later action matching.
9. The method according to claim 1, characterized in that action recognition consists of matching the normalized human 3D skeleton model against the actions in the offline action library, comprising single-frame image matching and multi-frame image matching; single-frame matching considers the limb node positions, the ordering relation of multiple limb nodes, the connection lengths of adjacent limb nodes, and the connection angles of adjacent limb nodes; multi-frame matching considers the trajectories of the key limb nodes over n frames; the matching degree comprises the matching degrees of the single-frame limb nodes and of the multi-frame images, and the posture or action sequence in the action library with the minimum difference from the current posture or action sequence, i.e. the maximum matching degree, yields the action name information.
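The per-pixel background model of claim 4 can be illustrated with a minimal running-Gaussian update. This is a simplified single-Gaussian stand-in for the full Gaussian mixture the claim describes; the learning rate, initial variance, and the 2.5-sigma foreground threshold are assumed values, and all names are hypothetical:

```python
import numpy as np

class PixelBackground:
    """Per-pixel Gaussian background model. The first frame is absorbed with
    learning rate 1 (it initializes the first distribution); subsequent frames
    are blended in with a configured learning rate < 1, as in claim 4."""

    def __init__(self, alpha=0.05):
        self.alpha = alpha   # learning rate used after the first frame
        self.mean = None
        self.var = None

    def update(self, frame):
        """Update the model with one grayscale frame; return a boolean
        foreground mask (True where the pixel deviates from the background)."""
        frame = np.asarray(frame, dtype=float)
        if self.mean is None:                 # first frame: learning rate = 1
            self.mean = frame.copy()
            self.var = np.full_like(frame, 15.0 ** 2)
            return np.zeros(frame.shape, dtype=bool)
        # foreground where the pixel lies more than 2.5 sigma from the mean
        fg = np.abs(frame - self.mean) > 2.5 * np.sqrt(self.var)
        a = self.alpha                        # learning rate < 1 afterwards
        self.mean = (1 - a) * self.mean + a * frame
        self.var = (1 - a) * self.var + a * (frame - self.mean) ** 2
        return fg
```

Feeding a static scene for a few frames and then a frame with one changed pixel flags only that pixel as foreground, which is the behavior foreground segmentation in steps (4) and (6) relies on.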
CN2011100469754A 2011-02-28 2011-02-28 Method for implementing realistic game based on movement decomposition and behavior analysis Pending CN102179048A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011100469754A CN102179048A (en) 2011-02-28 2011-02-28 Method for implementing realistic game based on movement decomposition and behavior analysis

Publications (1)

Publication Number Publication Date
CN102179048A true CN102179048A (en) 2011-09-14

Family

ID=44565450

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011100469754A Pending CN102179048A (en) 2011-02-28 2011-02-28 Method for implementing realistic game based on movement decomposition and behavior analysis

Country Status (1)

Country Link
CN (1) CN102179048A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0412621A (en) * 1990-04-27 1992-01-17 Sharp Corp Apparatus for switching polarity of power supply
US20060055706A1 (en) * 2004-09-15 2006-03-16 Perlman Stephen G Apparatus and method for capturing the motion of a performer
US20080031512A1 (en) * 2006-03-09 2008-02-07 Lars Mundermann Markerless motion capture system
US20100197400A1 (en) * 2009-01-30 2010-08-05 Microsoft Corporation Visual target tracking

Cited By (51)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103135883A (en) * 2011-12-02 2013-06-05 深圳泰山在线科技有限公司 Method and system for control of window
CN103127717A (en) * 2011-12-02 2013-06-05 深圳泰山在线科技有限公司 Method and system for control and operation of game
CN103135881A (en) * 2011-12-02 2013-06-05 深圳泰山在线科技有限公司 Display control method and system
CN103135881B (en) * 2011-12-02 2016-12-14 深圳泰山体育科技股份有限公司 Display control method and system
CN103135883B (en) * 2011-12-02 2016-07-06 深圳泰山在线科技有限公司 Control the method and system of window
CN103127717B (en) * 2011-12-02 2016-02-10 深圳泰山在线科技有限公司 The method and system of control operation game
CN102609684A (en) * 2012-01-16 2012-07-25 宁波江丰生物信息技术有限公司 Human body posture detection method and device
CN102609684B (en) * 2012-01-16 2013-12-18 宁波江丰生物信息技术有限公司 Human body posture detection method and device
CN102855379B (en) * 2012-05-30 2015-06-17 北京敏思创想软件技术有限公司 Skeleton joint data based standardizing method
CN102749993A (en) * 2012-05-30 2012-10-24 无锡掌游天下科技有限公司 Motion recognition method based on skeleton node data
CN102855379A (en) * 2012-05-30 2013-01-02 无锡掌游天下科技有限公司 Skeleton joint data based standardizing method
CN102749993B (en) * 2012-05-30 2015-06-17 北京敏思创想软件技术有限公司 Motion recognition method based on skeleton node data
CN103285586A (en) * 2012-09-24 2013-09-11 天津思博科科技发展有限公司 Movement health care game device based on internet framework
CN107656618B (en) * 2013-03-15 2021-03-23 奇跃公司 Display system and method
CN107656618A (en) * 2013-03-15 2018-02-02 奇跃公司 Display system and method
WO2014190870A1 (en) * 2013-05-29 2014-12-04 华为技术有限公司 Method and system for identifying user activity type
US9984304B2 (en) 2013-05-29 2018-05-29 Huawei Technologies Co., Ltd. Method and system for recognizing user activity type
CN103520923A (en) * 2013-10-17 2014-01-22 智尊应用程序开发有限公司 Game control method and equipment
CN104598867B (en) * 2013-10-30 2017-12-01 中国艺术科技研究所 A kind of human action automatic evaluation method and dancing points-scoring system
CN104598867A (en) * 2013-10-30 2015-05-06 中国艺术科技研究所 Automatic evaluation method of human body action and dance scoring system
CN103617432B (en) * 2013-11-12 2017-10-03 华为技术有限公司 A kind of scene recognition method and device
CN103617432A (en) * 2013-11-12 2014-03-05 华为技术有限公司 Method and device for recognizing scenes
CN104331158A (en) * 2014-10-29 2015-02-04 山东大学 Gesture-controlled human-computer interaction method and device
CN104331158B (en) * 2014-10-29 2018-05-25 山东大学 The man-machine interaction method and device of a kind of gesture control
CN104474710B (en) * 2014-12-09 2015-09-02 山东大学 Based on large scale scene group of subscribers tracking system and the method for Kinect network
CN104474710A (en) * 2014-12-09 2015-04-01 山东大学 System and method for tracking group users in large-scale scenes on basis of Kinect networks
CN107632698A (en) * 2016-07-18 2018-01-26 岱宇国际股份有限公司 Motion analysis system and method based on image
CN107632698B (en) * 2016-07-18 2020-04-28 岱宇国际股份有限公司 Motion analysis system and method based on image
CN106780681A (en) * 2016-12-01 2017-05-31 北京像素软件科技股份有限公司 A kind of role action generation method and device
CN106780681B (en) * 2016-12-01 2020-04-28 北京像素软件科技股份有限公司 Role action generation method and device
CN106984039A (en) * 2017-04-14 2017-07-28 苏州蜗牛数字科技股份有限公司 A kind of method for projecting to real human body in VR game
CN107551551A (en) * 2017-08-09 2018-01-09 广东欧珀移动通信有限公司 Game effect construction method and device
CN108986197A (en) * 2017-11-30 2018-12-11 成都通甲优博科技有限责任公司 3D skeleton line construction method and device
CN108986197B (en) * 2017-11-30 2022-02-01 成都通甲优博科技有限责任公司 3D skeleton line construction method and device
CN108970084A (en) * 2018-06-29 2018-12-11 西安深睐信息科技有限公司 A kind of moving scene analogy method of Behavior-based control identification
WO2020063009A1 (en) * 2018-09-25 2020-04-02 Oppo广东移动通信有限公司 Image processing method and apparatus, storage medium, and electronic device
CN109711263A (en) * 2018-11-29 2019-05-03 国政通科技有限公司 Examination system and its processing method
CN109685037A (en) * 2019-01-08 2019-04-26 北京汉王智远科技有限公司 A kind of real-time action recognition methods, device and electronic equipment
CN110245634A (en) * 2019-06-20 2019-09-17 招商局重庆交通科研设计院有限公司 Multiposition, multi-angle crag deformation judgement and analysis method
CN110826422A (en) * 2019-10-18 2020-02-21 北京量健智能科技有限公司 System and method for obtaining motion parameter information
CN111325170A (en) * 2020-02-27 2020-06-23 清华大学 Construction site dangerous behavior real-time identification device and method based on preposed action
CN111369687A (en) * 2020-03-04 2020-07-03 腾讯科技(深圳)有限公司 Method and device for synthesizing action sequence of virtual object
WO2021185317A1 (en) * 2020-03-19 2021-09-23 北京市商汤科技开发有限公司 Action recognition method and device, and storage medium
CN112435316A (en) * 2020-11-30 2021-03-02 上海米哈游天命科技有限公司 Anti-threading method and device in game, electronic equipment and storage medium
CN112435316B (en) * 2020-11-30 2023-05-12 上海米哈游天命科技有限公司 Method and device for preventing mold penetration in game, electronic equipment and storage medium
CN113687715A (en) * 2021-07-20 2021-11-23 温州大学 Human-computer interaction system and interaction method based on computer vision
CN114693871A (en) * 2022-03-21 2022-07-01 苏州大学 Method and system for calculating three-dimensional imaging depth of double detectors based on scanning electron microscope
CN114602173A (en) * 2022-05-13 2022-06-10 上海英立视电子有限公司 Human body image clearing method and system in real-person game
CN114602173B (en) * 2022-05-13 2022-08-05 上海英立视电子有限公司 Human body image clearing method and system in real-person game
CN115880776A (en) * 2022-12-13 2023-03-31 北京百度网讯科技有限公司 Method for determining key point information and method and device for generating offline action library
CN115880776B (en) * 2022-12-13 2023-11-03 北京百度网讯科技有限公司 Determination method of key point information and generation method and device of offline action library

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20110914