WO2006117563A2 - Detection of movement in three dimensions using eye toy - Google Patents
Detection of movement in three dimensions using eye toy
- Publication number
- WO2006117563A2 (PCT/GB2006/001631)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- focus
- participant
- amusement device
- depth
- region
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/571—Depth or shape recovery from multiple images from focus
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/55—Controlling game characters or game objects based on the game progress
- A63F13/57—Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/24—Constructional details thereof, e.g. game controllers with detachable joystick handles
- A63F13/245—Constructional details thereof, e.g. game controllers with detachable joystick handles specially adapted to a particular type of game, e.g. steering wheels
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/60—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
- A63F13/65—Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/20—Input arrangements for video game devices
- A63F13/21—Input arrangements for video game devices characterised by their sensors, purposes or types
- A63F13/213—Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/53—Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/10—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
- A63F2300/1062—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals being specially adapted to a type of game, e.g. steering wheel
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/69—Involving elements of the real world in the game world, e.g. measurement in live races, real video
Definitions
- Figure 1 is a diagrammatic illustration showing a first embodiment of an Eye Toy;
- Figure 2 is a diagrammatic illustration like Figure 1 of a second embodiment
- Figure 3 is a diagrammatic illustration like Figure 1 of a third embodiment
- Figure 4 is a diagrammatic illustration of a first imaging arrangement
- Figure 5 is a diagrammatic illustration of a second imaging arrangement
- Figure 6 is a diagrammatic illustration of an active depth-from-defocus arrangement
- Figure 7 is a diagrammatic illustration of a feature of the arrangement of Figure 6;
- Figure 8 is a diagrammatic representation of a moving grid arrangement for use in the arrangement of Figure 6;
- Figure 9 is a diagrammatic representation of an arrangement depending on participant-borne texture.
- the drawings illustrate methods for introducing a third, depth dimension into amusement devices 11 of the type referred to, in this instance an Eye Toy, comprising forming, with a depth imaging device 12, an image of the participant space 13 that has an in-focus region 14 and, closer to and further away from the imaging device, out-of-focus regions 15, 16 respectively, and detecting whether a participant 17, or a part 18 of a participant, is in the in-focus region 14 or an out-of-focus region 15 or 16.
- the Eye Toy 11 comprises a camera 19, mounted atop a television screen or monitor 21, and software displaying a moving, software-generated image on the screen 21.
- the camera forms an image of the participant space 13 and casts it on the screen 21, superimposed over the software image, so that, to participants, it looks as though they are taking part in the screen action.
- the Eye Toy imaging has been essentially two-dimensional.
- the participant endeavours to land a punch, or a kick, on the screen opponent, who is, of course, taking evasive action as a result of interactive software.
- the game is made more interesting, as the participant must aim, not only to hit the two-dimensional coordinate representing the target, but also take into account the depth of the target.
- a passive depth-by-defocus technique is used to detect simply when the participant 17, or a part, such as a glove or boot, of the participant 17 is in the in-focus region 14.
- a suitable arrangement for this would be that disclosed in the US patents referred to herein (US4965840, US5193124, US5148209 and US5231443).
- the software triggers an action of the on-screen opponent appropriate to the nature of the intrusion.
- Figure 2 illustrates a method for introducing a third, depth dimension into an amusement device 11 of the type referred to, comprising forming, with a depth imaging device 12, an image of the participant space 13 that is progressively out of focus. In this method, the participant does not cross the in-focus region 14.
- the participant 17 is shown in the out-of-focus region 16 beyond the in-focus region 14. It is possible to derive depth information for a participant in this region by several means, including means as disclosed in the US patents referred to.
- the out-of-focus image can be analysed as by Fourier analysis to determine its frequency spectrum. High frequencies correspond to substantially in-focus depths, while progressively lower predominant frequencies correspond to progressively out-of-focus depths.
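As a sketch of this frequency-spectrum test, the fraction of spectral energy beyond a cut-off radius can serve as a defocus indicator; the quarter-Nyquist cut-off here is an illustrative assumption, not a figure taken from the patent:

```python
import numpy as np

def high_frequency_fraction(img):
    """Fraction of 2-D FFT magnitude beyond a quarter-Nyquist radius.
    Higher values indicate a substantially in-focus depth; the value
    falls as the image becomes progressively more defocused."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)  # distance from the DC term
    return float(spectrum[r > min(h, w) / 4].sum() / spectrum.sum())
```

A sharp, highly textured patch scores markedly higher than a smooth, defocused-looking one, so ranking this figure per image region yields the depth cue described above.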
- Figure 3 illustrates an arrangement in which there are several in-focus regions 14a, 14b, etc. These would correspond to multiple imaging devices.
- Figures 4 and 5 illustrate imaging devices that could be used in the methods of the invention.
- In Figure 4, two cameras 19a, 19b are shown side-by-side in the same casing. This arrangement could be used for any of the methods above described.
- one of the cameras could be used to form the screen image, while the other is used as a depth-by-defocus camera, providing in- or out-of-focus depth information for use by the interactive software of the Eye Toy game.
- both cameras can be set to give out-of- focus images with regard to focal regions 14 that are spaced apart depthwise. From two such images, depth can be calculated for any pixel, and a sharp image produced by suitable software.
- the cameras 19a, 19b could have two focal regions, separated, again, depthwise, and the software set to detect when either focal region was breached.
- Figure 5 illustrates a camera 19 that has only one lens 19c, but two CCD arrays 19d, 19e, one behind the other. It is possible to have 'transparent' CCD arrays inasmuch as the charge coupled devices that make up the front array can be spaced apart to let light through to devices of the rear array.
- Figure 6 illustrates how depth can, in an inexpensive way, be introduced by an active depth-by-defocus technique.
- a lamp 22 casts an image of a grating into the participant space 13 that is in focus only in the in-focus region 14. If the participant 17 is in the in-focus region 14, the lamp 22 casts a sharp image of the grating on to the participant 17, otherwise the image is out of focus.
- Software as disclosed in WO2004/068400 can determine whether the focus is sharp or otherwise and thus determine if the participant is in the in-focus region 14 or not.
- Figure 7 illustrates the image of the grating in front of (a), in (b), and behind (c) the in-focus region 14.
- the grating image can be subtracted from the image before displaying it on the screen.
- Figure 8 shows a moving grid arrangement for use in the arrangement of Figure 6. This comprises a rotary disc with different grating patterns, which can, as taught in WO2004/068400, be used to enhance the depth perception.
- Figure 9 shows how a participant may carry special grating-like areas 91 from which depth information may be calculated exactly as if they were grating images cast by a lamp arrangement as shown in Figure 6.
- the defocus of the image of these areas 91 here illustrated as grating patterns on boxing gloves 92, can indicate specifically the location in depth of the gloves 92.
- the screen image of the participant may be superimposed on the software-generated screen image by a technique akin to chroma-keying, but which does not need the blue screen background, by various techniques.
- One such technique would be to suppress all signals from the imaging camera that did not change from frame to frame, so that only the changing pixels, corresponding to the moving participant, would be superimposed.
- Another technique would be to limit used signal to those corresponding to a predetermined range of defocus, so that pixels corresponding to a more distant background would be suppressed.
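The frame-to-frame suppression technique can be sketched as follows; the tolerance value and the use of greyscale frames are illustrative assumptions:

```python
import numpy as np

def changed_mask(prev_frame, curr_frame, tol=10):
    """Suppress all pixels that did not change from frame to frame;
    the surviving pixels correspond to the moving participant."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > tol

def superimpose(game_image, camera_frame, mask):
    """Overlay only the changing (participant) pixels on the
    software-generated game image."""
    out = game_image.copy()
    out[mask] = camera_frame[mask]
    return out
```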
- a software representation of a virtual participant is superimposed on to the software image corresponding in position to the participant.
- the virtual participant could be a representation of an actual person, say a well-known boxer, or a cartoon character such as Spiderman or Lara Croft.
- the choice of 3D enhancement system may well be influenced by the type of game being played. It would be possible to use different method embodiments using the same equipment, and to build in software that would automatically select the preferred system for each type of game.
- Means may be provided for calibrating the system, so that games can be played in different sizes of spatial environment. It may be desirable, in some instances, to confine the participant's activity to a predetermined area, such as might be defined by a mat, which may itself have a functionality in the set-up.
Abstract
A method and apparatus are disclosed for introducing a third, depth dimension into amusement devices of the type, such as an Eye Toy, comprising a video imaging device that is associated with a monitor, which may be a television screen, on which is played out an activity, which may, for example, be kick boxing, by means of video software, comprising forming, with a depth imaging device, an image of the participant space that has a defined-focus region and, closer to and/or further away from said defined-focus region, other regions in which the focus is different from the defined focus, and detecting whether a participant, or a part of a participant, is in the defined-focus region or in another region.
Description
Three Dimensional Effects in an Eye Toy
This invention relates to amusement devices of the type (hereinafter "the type referred to") comprising a video imaging device that is associated with a monitor, which may be a television screen, on which is played out an activity, which may, for example, be kick boxing, by means of video software. One such device is known as the Eye Toy, marketed by Sony. The imaging device, set atop the monitor, images one taking part in the amusement, in front of the monitor, and superimposes an image of the participant on to the screen activity. The arrangement may be interactive, so that, in a kick boxing game, landing a punch, say, on a virtual protagonist, has certain consequences, for example, the virtual protagonist falls down.
Such an arrangement is an entirely two-dimensional experience, account being taken only of the screen X and Y co-ordinates of the virtual protagonist in deciding whether or not a hit has been registered. It would clearly add to the realism of the amusement if a third, depth, dimension could be introduced, so that a punch landed in front of the protagonist would not count.
Methods for achieving this are not difficult to envision. However, the obvious methods require three-dimensional video rate imaging, which is cutting edge technology, and far too expensive for the Eye Toy application.
The present invention provides methods for introducing a third, depth dimension into the Eye Toy concept at a cost that is commensurate with the Eye Toy itself.
The invention comprises, in one aspect, a method for introducing a third, depth dimension into amusement devices of the type referred to, comprising forming, with a depth imaging device, an image of the participant space that has a defined-focus region and, closer to and/or further away from said defined-focus region, other regions in which the focus is different from the defined focus, and detecting whether a participant, or a part of a participant, is in the defined-focus region or in another region.
By 'defined-focus' is meant a specific focus state, in which an image is either in focus or is out of focus by a prescribed amount. The invention also comprises a method for introducing a third, depth dimension into amusement devices of the type referred to, comprising forming, with a depth imaging device, an image of the participant space that has an in-focus region and, closer to and further away from the imaging device, out-of-focus regions, and detecting whether a participant, or a part of a participant, is in the in-focus region or an out-of-focus region.
The invention also comprises an amusement device of the type referred to in which a third, depth dimension is introduced, comprising a depth imaging device forming an image of a participant space that has an in-focus region and, closer to and further away from the imaging device, out-of-focus regions, and means for detecting whether a participant, or part of a participant, is in the in-focus region or an out-of-focus region.
In an elementary form, referred to herein as single-bit resolution, the method involves detection simply of whether the participant is or is not in the in-focus region. If the participant, in this form, is in an out-of-focus region, no further information is available as to where, in the out-of-focus region, the participant actually is.
This is adequate for many purposes, and is an inexpensive solution to the problem of introducing a third dimension.
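A single-bit test of this kind might be sketched with a simple focus measure; the Laplacian-variance metric and the threshold below are illustrative assumptions (any defocus measure, including those of the US patents discussed later, could stand in):

```python
import numpy as np

def sharpness(patch):
    """Focus measure: variance of a discrete Laplacian of the patch.
    Sharp (in-focus) texture gives a large value; defocus blurs edges
    and drives the value towards zero."""
    lap = (-4 * patch[1:-1, 1:-1]
           + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return float(lap.var())

def in_focus(patch, threshold=50.0):
    """Single-bit resolution: is the imaged participant (or a glove or
    boot patch) inside the in-focus region?"""
    return sharpness(patch) > threshold
```

In practice the threshold would be calibrated for the scene, the optics and the illumination.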
The game software will be adapted to this form of the method by having depth information about virtual objects in the screen image generated by the game software. It is not necessary to use this depth information to display a three dimensional image on the screen, although this could be done, if desired, at a price, but it might, for example, be matched to visual clues as to depth such as size - a kick boxing protagonist appears smaller if further away.
The software will then be adapted so that a decision as to whether an interaction has taken place will depend not only on the X, Y co-ordinates matching but also the depth co-ordinates, that is to say, the punch will only be 'landed' if the virtual protagonist's assigned depth co-ordinate corresponds to the in-focus region, and the participant's punch penetrates that region. If a second depth imaging device is added, which has an in-focus region spaced from the in-focus region of the first depth imaging device, interactions at two different depths can be detected, giving an enhanced impression of depth. This is two-bit resolution. Clearly, further depth imaging devices may be added further to enhance the impression. An eight bit arrangement gives 256 depth levels, which, for most purposes, is sufficient to produce a realistic three dimensional model of a scene.
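The interaction decision might then be sketched as below; the function names and the circular hit zone are illustrative assumptions:

```python
def punch_lands(punch_xy, punch_in_focus, target_xy, target_at_focus_depth, radius=20):
    """A punch is 'landed' only when (1) the X, Y co-ordinates match,
    here within a circular hit zone, (2) the virtual protagonist's
    assigned depth co-ordinate corresponds to the in-focus region, and
    (3) the participant's punch is detected penetrating that region."""
    dx = punch_xy[0] - target_xy[0]
    dy = punch_xy[1] - target_xy[1]
    xy_match = dx * dx + dy * dy <= radius * radius
    return xy_match and target_at_focus_depth and punch_in_focus
```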
The invention also comprises a method for introducing a third, depth dimension into an amusement device of the type referred to, comprising forming, with a depth imaging device, an image of the participant space that is progressively out of focus. In this method, the participant does not cross the in-focus region.
In this case, a first imaging device may form an image of the participant space which is essentially in focus, while a second imaging device forms an out-of-focus image from which depth information is derived. However, both imaging devices may form out-of-focus images of the participant space, from which depth information may be derived, and from which an in focus image can be created by appropriate software.
Such additional depth imaging devices may be provided in a single depth camera, with two or more CCD arrays placed at different distances behind a single lens system. Other, dynamic, methods of providing two or more in-focus regions in a single camera include variable aperture, moving or changeable focus lens systems, moving CCD array and liquid lens arrangements.
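The recovery of depth from two differently defocused images can be illustrated in one dimension. Assuming a Gaussian blur model (an assumption of this sketch, not a claim of the patent), the log-ratio of the two spectra is proportional to the difference of the squared blur widths, which a lookup calibrated to the optics would then map to depth:

```python
import numpy as np

def gaussian_blur_1d(signal, sigma):
    """Defocus model: circular convolution with a Gaussian of width sigma,
    applied as multiplication in the frequency domain."""
    w = 2 * np.pi * np.fft.fftfreq(signal.size)
    return np.fft.ifft(np.fft.fft(signal) * np.exp(-0.5 * (sigma * w) ** 2)).real

def sigma_sq_difference(img1, img2):
    """Estimate sigma2^2 - sigma1^2 from two defocused images of the same
    scene, using ln(F2/F1) = -0.5 * (sigma2^2 - sigma1^2) * w^2."""
    w = 2 * np.pi * np.fft.fftfreq(img1.size)
    f1 = np.abs(np.fft.fft(img1))
    f2 = np.abs(np.fft.fft(img2))
    band = (np.abs(w) > 0.1) & (np.abs(w) < 1.0)  # mid-band frequencies only
    ratio = np.log(f2[band] / f1[band])
    x = -0.5 * w[band] ** 2
    return float(np.dot(x, ratio) / np.dot(x, x))  # least-squares slope
```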
The depth imaging device may also be the primary imaging device, or may be provided in addition to a primary imaging device, though the primary imaging device and the depth imaging device may be contained in the same case. The primary imaging device may be a colour camera, but a depth imaging device, used purely as such, can be monochrome.

In another arrangement, according to the invention, the image with in-focus and out-of-focus regions may be formed by the primary imaging camera itself, using a structured light illumination arrangement with an in-focus region in which a participant, or part of a participant, is illuminated with a sharp pattern, for example of a grid, and the image of the grid analysed by appropriate software to determine whether it is in or out of focus. The use of more than one grid can provide enhanced depth information, exactly as with the depth imaging arrangement above described. But software can analyse the extent of defocus in such a structured light arrangement, and can give full depth information from only a single grid. Moreover, this is achieved using only a single imaging device. Software, disclosed in principle in WO2004/068400, analyses the extent of defocus of the grid pattern at each part of the image and correlates that to distance from an in-focus region. To avoid anomalous results, it is best to ignore one of the two out-of-focus regions, and restrict the participant space to the region in front of or the region behind the in-focus region.

The grid pattern projected on to the participant is easily removed by software, but may, in the circumstances, not be intrusive, and may, for example, be invisible infrared illumination. Where chroma-keying is used, which involves imaging the participant against a plain backdrop illuminated with a particular colour (usually blue), the same colour can be used to cast the grid pattern, and it will automatically be eliminated from the final image by the chroma-keying software.
In yet another arrangement, according to the invention, using structured light illumination, the illumination may involve, for example, a moving grid, generated as by a filter wheel as used with Digital Mirror Devices (DMD) and Digital Light Processing (DLP).
The filter wheel may be divided, say, into three 120° sectors, each of which has a grid, the grids being similar to, but radially displaced relative to the others. The grid could, for example, be a spiral line. As each sector comes into play, the grid pattern superimposed on the object changes with rotation of the wheel, and this moving grid pattern can be utilized by the software aforementioned to yield depth information.
Methods of deriving depth information by defocus are disclosed in US4965840, US5193124, US5148209 and US5231443. These are passive depth-by-defocus methods, not involving active illumination as by a grid pattern.
US4965840 discloses a method of determining the distance between a surface patch of a 3-D spatial scene and a camera system, in which the distance is measured using a pair of images with a change in the value of at least one camera parameter during the image formation process, such as
(i) the distance between the second principal plane of the image forming system and the image detector plane of the camera system;
(ii) the diameter of the camera aperture; and
(iii) the focal length of the image forming system.
First and second two-dimensional images of the scene are formed using at least one different parameter value, and first and second sub-images are selected which correspond to the surface patch; the distance of the patch is then calculated on the basis of constraints between the spread parameters of the point spread functions corresponding to the two images. Whilst this is used, according to US4965840, for automatic focus in a camera system, it could, according to the present invention, be used in amusement devices of the type referred to.
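The principle of recovering distance from a pair of differently parameterised images can be sketched with a thin-lens blur model. The fragment below is a simplified Python illustration of the first listed parameter change (varying the sensor distance), not the computation claimed in US4965840; all numerical values, and the sign-disambiguation step, are assumptions for illustration:

```python
def blur_radius(u, f, v, A):
    """Thin-lens blur-circle radius for an object at distance u,
    focal length f, sensor distance v, aperture diameter A (metres)."""
    return (A / 2.0) * abs(v / f - 1.0 - v / u)

def depth_from_two_blurs(r1, r2, f, v1, v2, A):
    """Recover object distance u from blur radii r1, r2 measured at two
    sensor positions v1, v2. The absolute value hides the sign of the
    defocus, so both signs are tried and the candidate consistent with
    the second measurement is kept."""
    for s in (+1, -1):
        # r1 = (A/2) * s * (v1/f - 1 - v1/u), solved for u
        denom = v1 / f - 1.0 - s * 2.0 * r1 / A
        if denom <= 0:
            continue
        u = v1 / denom
        if abs(blur_radius(u, f, v2, A) - r2) < 1e-7:
            return u
    return None

f, A = 0.05, 0.02        # illustrative focal length and aperture
v1, v2 = 0.052, 0.053    # the two sensor positions
true_u = 2.0             # object two metres away
r1 = blur_radius(true_u, f, v1, A)
r2 = blur_radius(true_u, f, v2, A)
print(depth_from_two_blurs(r1, r2, f, v1, v2, A))  # approximately 2.0
```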
US5193124 discloses an improvement on the disclosure of US4965840 in which first and second digital images of the surface patch are preprocessed to form normalised images, and sets of Fourier coefficients are calculated for the images to provide sets of ratio values from which the distance of the surface patch from the camera system is calculated. This distance is then used to bring the object into focus. The system is useful in machine vision systems, e.g. robot vision, autonomous vehicle navigation and stereo vision systems, as well as in television microscopy and commercial television broadcasting, where objects in both background and foreground are in focus.
US5148209 discloses a similar arrangement.
US5231443 discloses a similar arrangement using a different computational method.
Depth information may also be derived from special areas of the participant. Boxing gloves and boots, for example, may have patterns which can have the same function as active grid pattern illumination, without the need to provide such illumination.

The viewing screen - television screen or monitor - can be a conventional 2-D screen or a 3-D screen, which may be of the type in which two images are directed in slightly different directions so that one enters only one eye and the other enters only the other eye. The game software may be devised so as, on this type of screen, to give a 3-D effect. Preferably, the software should be compatible with viewing on a regular 2-D screen.
Embodiments of amusement devices according to the invention will now be described with reference to the accompanying drawings, in which:

Figure 1 is a diagrammatic illustration showing a first embodiment of an Eye Toy arrangement with a participant and added 3-D effect;

Figure 2 is a diagrammatic illustration like Figure 1 of a second embodiment;
Figure 3 is a diagrammatic illustration like Figure 1 of a third embodiment;
Figure 4 is a diagrammatic illustration of a first imaging arrangement;
Figure 5 is a diagrammatic illustration of a second imaging arrangement;
Figure 6 is a diagrammatic illustration of an active depth-from-defocus arrangement;
Figure 7 is a diagrammatic illustration of a feature of the arrangement of Figure 6;
Figure 8 is a diagrammatic representation of a moving grid arrangement for use in the arrangement of Figure 6; and

Figure 9 is a diagrammatic representation of an arrangement depending on participant-borne texture.
The drawings illustrate methods for introducing a third, depth dimension into amusement devices 11 of the type referred to, in this instance an Eye Toy, comprising forming, with a depth imaging device 12, an image of the participant space 13 that has an in-focus region 14 and, closer to and further away from the imaging device, out-of-focus regions 15, 16 respectively, and detecting whether a participant 17, or a part 18 of a participant, is in the in-focus region 14 or an out-of-focus region 15 or 16. The Eye Toy 11 comprises a camera 19, mounted atop a television screen or monitor 21, and software displaying a moving, software-generated image on the screen 21. The camera forms an image of the participant space 13 and casts it on the screen 21, superimposed over the software image, so that, to participants, it looks as though they are taking part in the screen action.
Hitherto, the Eye Toy imaging has been essentially two-dimensional. In a boxing, or kick boxing game, the participant endeavours to land a punch, or a kick, on the screen opponent, who is, of course, taking evasive action as a result of interactive software. By introducing a third dimension, the game is made more interesting, as the participant must aim, not only to hit the two-dimensional coordinate representing the target, but also take into account the depth of the target.
In Figure 1, a passive depth-by-defocus technique is used to detect simply when the participant 17, or a part, such as a glove or boot, of the participant 17 is in the in-focus region 14. A suitable arrangement for this would be that disclosed in the US patents above referred to. When intrusion into the in-focus region 14 is detected, the software triggers an action of the on-screen opponent appropriate to the nature of the intrusion.
Figure 2 illustrates a method for introducing a third, depth dimension into an amusement device 11 of the type referred to, comprising forming, with a depth imaging device 12, an image of the participant space 13 that is progressively out of focus. In this method, the participant does not cross the in-focus region 14. In Figure 2, the participant 17 is shown in the out-of-focus region 16 beyond the in-focus region 14. It is possible to derive depth information for a participant in this region by several means, including those disclosed in the US patents referred to. The out-of-focus image can be analysed, as by Fourier analysis, to determine its frequency spectrum. High frequencies correspond to substantially in-focus depths, while progressively lower predominant frequencies correspond to progressively out-of-focus depths.

Figure 3 illustrates an arrangement in which there are several in-focus regions 14a, 14b, etc. These would correspond to multiple imaging devices.
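The frequency-spectrum analysis described above can be sketched with a direct discrete Fourier transform of a single scanline. The Python fragment below is illustrative only: the box blur stands in for optical defocus, and the cutoff bin and test signal are assumptions chosen for clarity:

```python
import cmath

def high_freq_fraction(scanline, cutoff):
    """Share of non-DC spectral energy above the cutoff bin: high for
    a sharp scanline, progressively lower as defocus blurs it."""
    n = len(scanline)
    mags = [abs(sum(scanline[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(1, n // 2 + 1)]   # DC term (k = 0) skipped
    energy = [m * m for m in mags]
    return sum(energy[cutoff:]) / sum(energy)

def box_blur(scanline):
    """Crude stand-in for optical defocus: a 3-tap circular average."""
    n = len(scanline)
    return [(scanline[i - 1] + scanline[i] + scanline[(i + 1) % n]) / 3
            for i in range(n)]

edge = [0.0] * 8 + [255.0] * 8      # a sharp edge in the scene
blurred = box_blur(box_blur(edge))  # the same edge, defocused

print(high_freq_fraction(edge, 4) > high_freq_fraction(blurred, 4))  # True
```

The blurred scanline retains proportionally less high-frequency energy, which is the property the depth estimate relies on.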
Figures 4 and 5 illustrate imaging devices that could be used in the methods of the invention. In Figure 4, two cameras 19a, 19b are shown side-by-side in the same casing. This arrangement could be used for any of the methods described above. In regard to the embodiment of Figure 1, one of the cameras could be used to form the screen image, while the other is used as a depth-by-defocus camera, providing in- or out-of-focus depth information for use by the interactive software of the Eye Toy game. In regard to the Figure 2 arrangement, both cameras can be set to give out-of-focus images with regard to focal regions 14 that are spaced apart depthwise. From two such images, depth can be calculated for any pixel, and a sharp image produced by suitable software. For the Figure 3 arrangement, the cameras 19a, 19b could have two focal regions, again separated depthwise, and the software set to detect when either focal region was breached.
Figure 5 illustrates a camera 19 that has only one lens 19c, but two ccd arrays 19d, 19e, one behind the other. It is possible to have 'transparent' ccd arrays inasmuch as the charge coupled devices that make up the front array can be spaced apart to let light through to devices of the rear array.
Figure 6 illustrates how depth can, in an inexpensive way, be introduced by an active depth-by-defocus technique.
A lamp 22 casts an image of a grating into the participant space 13 that is in focus only in the in-focus region 14. If the participant 17 is in the in-focus region 14, the lamp 22 casts a sharp image of the grating on to the participant 17, otherwise the image is out of focus. Software as disclosed in WO2004/068400 can determine whether the focus is sharp or otherwise and thus determine if the participant is in the in-focus region 14 or not.
Figure 7 illustrates the image of the grating in front of (a), in (b), and behind (c) the in-focus region 14. The grating image can be subtracted from the image before displaying it on the screen.

Figure 8 shows a moving grid arrangement for use in the arrangement of Figure 6. This comprises a rotary disc with different grating patterns, which can, as taught in WO2004/068400, be used to enhance the depth perception.
Figure 9 shows how a participant may carry special grating-like areas 91 from which depth information may be calculated exactly as if they were grating images cast by a lamp arrangement as shown in Figure 6. The defocus of the image of these areas 91, here illustrated as grating patterns on boxing gloves 92, can indicate specifically the location in depth of the gloves 92.

The screen image of the participant may be superimposed on the software-generated screen image by various techniques akin to chroma-keying, but which do not need the blue screen background. One such technique would be to suppress all signals from the imaging camera that do not change from frame to frame, so that only the changing pixels, corresponding to the moving participant, are superimposed. Another technique would be to limit the signals used to those corresponding to a predetermined range of defocus, so that pixels corresponding to a more distant background are suppressed.
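The frame-differencing technique just described can be sketched in a few lines of Python; the threshold value and the list-of-lists greyscale frame representation are illustrative assumptions:

```python
def moving_mask(prev_frame, cur_frame, threshold=10):
    """Mark pixels that changed between consecutive frames: the static
    background is suppressed, the moving participant is kept."""
    return [[abs(c - p) > threshold for p, c in zip(prow, crow)]
            for prow, crow in zip(prev_frame, cur_frame)]

def superimpose(game_image, frame, mask):
    """Composite only the changing pixels over the software-generated
    screen image, with no blue-screen backdrop required."""
    return [[f if m else g for g, f, m in zip(grow, frow, mrow)]
            for grow, frow, mrow in zip(game_image, frame, mask)]

prev = [[50, 50, 50],
        [50, 50, 50]]
cur  = [[50, 200, 50],   # one pixel changed: the participant moved
        [50, 50, 50]]
game = [[0, 0, 0],
        [0, 0, 0]]

composite = superimpose(game, cur, moving_mask(prev, cur))
print(composite)  # [[0, 200, 0], [0, 0, 0]]
```

Only the pixel that changed between frames reaches the composite; everything else shows the software-generated scene.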
Rather than sharpen the image of the participant, it might be arranged that a software representation of a virtual participant is superimposed on to the software image, corresponding in position to the participant. In this regard, the virtual participant could be a representation of an actual person, say a well-known boxer, or a fictional character such as Spiderman or Lara Croft. There are, of course, numerous types of game that can be played using the Eye Toy concept, and the choice of 3D enhancement system may well be influenced by the type of game being played. It would be possible to use different method embodiments using the same equipment, and to build in software that would automatically select the preferred system for each type of game.
Means may be provided for calibrating the system, so that games can be played in different sizes of spatial environment. It may be desirable, in some instances, to confine the participant's activity to a predetermined area, such as might be defined by a mat, which may itself have a functionality in the set-up.
Claims
1 A method for introducing a third, depth dimension into amusement devices of the type referred to, comprising forming, with a depth imaging device, an image of the participant space that has a defined-focus region and, closer to and/or further away from said defined-focus region, other regions in which the focus is different from the defined focus, and detecting whether a participant, or a part of a participant, is in the defined-focus region or in another region.

2 A method for introducing a third, depth dimension into amusement devices of the type referred to, comprising forming, with a depth imaging device, an image of the participant space that has an in-focus region and, closer to and further away from the imaging device, out-of-focus regions, and detecting whether a participant, or a part of a participant, is in the in-focus region or an out-of-focus region.
3 A method according to claim 2, comprising detection simply of whether the participant or a part thereof is or is not in the in-focus region.
4 A method according to claim 2, in which, if the participant is in an out-of-focus region, further information is available as to where, in the out-of-focus region, the participant actually is.
5 A method according to claim 4, in which such further information is derived from an image property other than focus.
6 A method according to claim 5, in which such image property is size.
7 A method according to any one of claims 2 to 6, in which software-generated virtual objects in the activity screen are assigned depths.
8 A method according to claim 7, in which the assigned depths are not used to generate a three-dimensional screen image.
9 A method according to claim 7, in which the assigned depths are used to generate a three-dimensional screen image.
10 A method according to any one of claims 7 to 9, in which the assigned depths are used in the comparison of virtual object size with participant size to determine participant depth.
11 A method according to any one of claims 2 to 10, comprising forming an image of a participant space that has a plurality of in-focus regions at different depths, and, closer to and further away from at least one of said in-focus regions, out-of-focus regions, and detecting whether a participant, or part of a participant, is in one or other of the in-focus regions.
12 A method according to claim 11, in which there are two in-focus regions at different depths.

13 A method according to claim 11, in which there are eight in-focus regions at different depths.
14 A method according to claim 1, in which the defined-focus region is not an in-focus region.
15 A method according to claim 14, in which the participant space is progressively out of focus with increasing distance from the imaging device.
16 A method according to any one of claims 1 to 15, in which software is adapted to decide whether an interaction has taken place between a participant, or part of a participant, and a defined-focus region depending not only on the X, Y co-ordinates matching but also on the depth co-ordinates matching.
17 An amusement device of the type referred to in which a third, depth dimension is introduced, comprising a depth imaging device forming an image of a participant space that has a defined-focus region and, closer to and further away from the imaging device, other regions in which the focus is different from the defined focus, and means for detecting whether a participant, or part of a participant, is in the defined-focus region or another region.
18 An amusement device according to claim 17, in which the defined-focus region is an in-focus region.
19 An amusement device according to claim 18, in which there are multiple in-focus regions.
20 An amusement device according to claim 19, in which there are two in-focus regions.

21 An amusement device according to claim 19, in which there are eight in-focus regions.
22 An amusement device according to any one of claims 17 to 21, comprising a first imaging device which forms an image of the participant space which is essentially in focus, and a second imaging device which forms an image which is not all in focus, from which depth information is obtained.
23 An amusement device according to any one of claims 17 to 22, comprising a camera with at least two ccd arrays placed at different distances behind a single lens system.
24 An amusement device according to any one of claims 17 to 22, comprising a camera with a variable aperture, operated to form images with different focus properties from which depth information may be obtained.
25 An amusement device according to any one of claims 17 to 22, comprising a camera with a moving lens system, operated to form images with different focus properties from which depth information may be obtained.

26 An amusement device according to any one of claims 17 to 22, comprising a camera with a changeable focus lens system, operated to form images with different focus properties, from which depth information may be obtained.
27 An amusement device according to any one of claims 17 to 22, comprising a camera with a moving ccd array, operated to form images with different focus properties, from which depth information may be obtained.
28 An amusement device according to any one of claims 17 to 22, comprising a camera with a liquid lens arrangement, operated to form images with different focus properties, from which depth information may be obtained.
29 An amusement device according to any one of claims 17 to 28, in which the depth imaging device is also the primary imaging device.

30 An amusement device according to any one of claims 17 to 28, in which the depth and primary imaging devices are different.
31 An amusement device according to claim 30, in which the depth and primary imaging devices are contained in the same case.
32 An amusement device according to any one of claims 17 to 31, comprising a colour camera.
33 An amusement device according to claim 32, of which the colour camera is the primary imaging device.
34 An amusement device according to claim 32 or claim 33, comprising a monochrome depth camera.

35 An amusement device according to any one of claims 17 to 34, comprising a structured light illumination system illuminating the participant region.
36 An amusement device according to claim 35, in which the structured light illumination system comprises a system casting an image of a grid.
37 An amusement device according to claim 36, in which the grid is a moving grid.
38 An amusement device according to claim 37, in which the moving grid is generated by a filter wheel.
39 An amusement device according to claim 38, in which the filter wheel is divided into three sectors each having a grid similar to but radially displaced relative to the others.
40 An amusement device according to any one of claims 17 to 39, used in conjunction with participant clothing having patterning from which depth information can be obtained.

41 An amusement device according to any one of claims 17 to 40, used in conjunction with a two-dimensional monitor screen.
42 An amusement device according to any one of claims 17 to 40, used in conjunction with a three-dimensional monitor screen.
43 An amusement device according to claim 42, compatible with use with a two-dimensional monitor screen.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP06727004A EP1893312A2 (en) | 2005-05-04 | 2006-05-04 | Detection of movement in three dimensions using eye toy |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB0509172.3 | 2005-05-04 | ||
GBGB0509172.3A GB0509172D0 (en) | 2005-05-04 | 2005-05-04 | Three dimensional effects in an eye toy |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2006117563A2 true WO2006117563A2 (en) | 2006-11-09 |
WO2006117563A3 WO2006117563A3 (en) | 2007-01-11 |
Family
ID=34685123
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB2006/001631 WO2006117563A2 (en) | 2005-05-04 | 2006-05-04 | Detection of movement in three dimensions using eye toy |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP1893312A2 (en) |
GB (2) | GB0509172D0 (en) |
WO (1) | WO2006117563A2 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010038822A1 (en) * | 2008-10-01 | 2010-04-08 | 株式会社ソニー・コンピュータエンタテインメント | Information processing apparatus, information processing method, information recording medium, and program |
KR20120051208A (en) * | 2010-11-12 | 2012-05-22 | 엘지전자 주식회사 | Method for gesture recognition using an object in multimedia device device and thereof |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6398670B1 (en) * | 2000-05-25 | 2002-06-04 | Xolf, Inc. | Golf training and game system |
GB2398691A (en) * | 2003-02-21 | 2004-08-25 | Sony Comp Entertainment Europe | Control of data processing in dependence on detection of motion in an image region associated with a processor control function |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020010019A1 (en) * | 1998-03-16 | 2002-01-24 | Kazukuni Hiraoka | Game machine, and image processing method for use with the game machine |
JP2001204964A (en) * | 2000-01-28 | 2001-07-31 | Square Co Ltd | Computer-readable recording medium wherein program of game for ball game is recorded, method of displaying and processing image of game for ball game, and video game apparatus |
JP2005312729A (en) * | 2004-04-30 | 2005-11-10 | Aruze Corp | Game machine |
2005

- 2005-05-04 GB GBGB0509172.3A patent/GB0509172D0/en not_active Ceased

2006

- 2006-05-04 GB GB0608793A patent/GB2425910A/en not_active Withdrawn
- 2006-05-04 WO PCT/GB2006/001631 patent/WO2006117563A2/en not_active Application Discontinuation
- 2006-05-04 EP EP06727004A patent/EP1893312A2/en not_active Withdrawn
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6398670B1 (en) * | 2000-05-25 | 2002-06-04 | Xolf, Inc. | Golf training and game system |
GB2398691A (en) * | 2003-02-21 | 2004-08-25 | Sony Comp Entertainment Europe | Control of data processing in dependence on detection of motion in an image region associated with a processor control function |
Non-Patent Citations (2)
Title |
---|
JOHN ENS, PETER LAWRENCE: "AN INVESTIGATION OF METHODS FOR DETERMINING DEPTH FROM FOCUS" IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 15, no. 2, February 1993 (1993-02), pages 97-108, XP002398664 [retrieved on 2006-09-12] *
PAOLO FAVARO: "Depth from Focus/Defocus"[Online] 25 June 2002 (2002-06-25), XP002398665 Retrieved from the Internet: URL:http://homepages.inf.ed.ac.uk/rbf> [retrieved on 2006-09-12] * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8988437B2 (en) | 2009-03-20 | 2015-03-24 | Microsoft Technology Licensing, Llc | Chaining animations |
US9478057B2 (en) | 2009-03-20 | 2016-10-25 | Microsoft Technology Licensing, Llc | Chaining animations |
US9824480B2 (en) | 2009-03-20 | 2017-11-21 | Microsoft Technology Licensing, Llc | Chaining animations |
Also Published As
Publication number | Publication date |
---|---|
EP1893312A2 (en) | 2008-03-05 |
GB0509172D0 (en) | 2005-06-15 |
GB2425910A (en) | 2006-11-08 |
WO2006117563A3 (en) | 2007-01-11 |
GB0608793D0 (en) | 2006-06-14 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWW | Wipo information: withdrawn in national office |
Country of ref document: DE |
|
NENP | Non-entry into the national phase |
Ref country code: RU |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2006727004 Country of ref document: EP |
|
WWW | Wipo information: withdrawn in national office |
Country of ref document: RU |
|
WWP | Wipo information: published in national office |
Ref document number: 2006727004 Country of ref document: EP |