WO2016041088A1 - System and method for tracking wearable peripherals in augmented reality and virtual reality applications - Google Patents

System and method for tracking wearable peripherals in augmented reality and virtual reality applications

Info

Publication number
WO2016041088A1
Authority
WO
WIPO (PCT)
Prior art keywords
hmd
location
orientation
motion
physical environment
Prior art date
Application number
PCT/CA2015/050918
Other languages
French (fr)
Inventor
Dhanushan Balachandreswaran
Jian Zhang
Original Assignee
Sulon Technologies Inc.
Priority date
Filing date
Publication date
Application filed by Sulon Technologies Inc.
Publication of WO2016041088A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 15/00 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S 15/88 Sonar systems specially adapted for specific applications
    • G01S 15/89 Sonar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 15/00 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S 15/66 Sonar tracking systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06 Systems determining position data of a target
    • G01S 17/42 Simultaneous measurement of distance and other co-ordinates
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/66 Tracking systems using electromagnetic waves other than radio waves
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 Head-up displays
    • G02B 27/017 Head mounted
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/0304 Detection arrangements using opto-electronic means
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/66 Radar-tracking systems; Analogous systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/88 Radar or analogous systems specially adapted for specific applications
    • G01S 13/89 Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 Head-up displays
    • G02B 27/0101 Head-up displays characterised by optical features
    • G02B 2027/014 Head-up displays characterised by optical features comprising information/image processing systems
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 Head-up displays
    • G02B 27/0179 Display position adjusting means not related to the information to be displayed
    • G02B 2027/0187 Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye

Definitions

  • the following relates generally to wearable technologies, and more specifically to tracking of a head mounted device, user gestures and peripheral devices in augmented reality and virtual reality environments.
  • AR augmented reality
  • VR virtual reality
  • a system for generating a rendered image stream for display to a user of a head mounted device (HMD) for augmented reality and virtual reality applications comprising: a location, motion and orientation (LMO) system disposed upon each of a plurality of tracked dynamic objects, including the HMD, within the physical environment, the LMO system being configured to provide location, motion and orientation information for each tracked dynamic object; a memory having stored thereon data comprising a correlation between the location, motion and orientation information for the HMD and a field of view of the HMD; and a processor configured to: map the physical environment to a virtual map; obtain from each LMO system, the location, motion and orientation information for each tracked dynamic object; repeatedly and substantially continuously map to the virtual map the substantially real-time location and orientation of each tracked dynamic object within the physical environment; for the HMD, obtain the data from the memory, determine the location and orientation of the field of view based on the correlation and the location and orientation of the HMD, and repeatedly and substantially continuously map to the virtual map the field of view; render computer generated images related to at least one of the tracked dynamic objects; transform the coordinates of the computer generated images from a coordinate system of the virtual map to a coordinate system of a display of the HMD; and provide the transformed computer generated images situated within the field of view as an image stream.
  • a method for generating and displaying a rendered image stream to a user of a head mounted device (HMD) for augmented reality and virtual reality applications comprising: obtaining from a memory, data comprising a correlation between the location, motion and orientation information for the HMD and a field of view of the HMD; mapping the physical environment to a virtual map; obtaining from a plurality of tracked dynamic objects, including the HMD, within the physical environment, location, motion and orientation information; repeatedly and substantially continuously mapping to the virtual map the substantially real-time location and orientation of each tracked dynamic object within the physical environment; by one or more processors, for the HMD, determining the location and orientation of the field of view based on the correlation and the location and orientation of the HMD, and repeatedly and substantially continuously mapping to the virtual map the field of view; rendering computer generated images related to at least one of the tracked dynamic objects; transforming the coordinates of the computer generated images from a coordinate system of the virtual map to a coordinate system of a display of the HMD;
  • a system for tracking dynamic objects in a physical environment to render an image stream for a head mounted device (HMD) for augmented reality and virtual reality applications comprising: a location, motion and orientation (LMO) system disposed upon each tracked dynamic object, configured to provide location, motion and orientation information for each respective tracked dynamic object; and a processor communicatively coupled to the location, motion and orientation systems, the processor configured to: map the physical environment to a virtual map; obtain the location, motion and orientation information for each tracked dynamic object; and repeatedly and substantially continuously map to the virtual map the location and orientation of each tracked dynamic object within the physical environment.
  • LMO location, motion and orientation
  • a method for tracking dynamic objects in a physical environment to render an image stream for a head mounted device (HMD) for augmented reality and virtual reality applications comprising: mapping the physical environment to a virtual map; obtaining from a plurality of tracked dynamic objects, within the physical environment, location, motion and orientation information; and by one or more processors, repeatedly and substantially continuously mapping to the virtual map the location and orientation of each tracked dynamic object within the physical environment.
  • HMD head mounted device
  • a head mounted device for augmented reality and virtual reality application
  • the HMD comprising: a location, motion and orientation system disposed upon the HMD for providing location, motion and orientation information
  • the location, motion and orientation system comprising: a positioning unit to provide a determined location and orientation of the HMD with reference to a physical environment surrounding the HMD; an inertial measurement unit to provide motion information, comprising a gyroscope and an accelerometer; and a processor communicatively coupled to the location, motion and orientation system, configured to calculate the substantially real-time location and orientation of the HMD based on adding the motion information from the inertial measurement unit to a most recent determined location and orientation from the information provided by the positioning unit.
  • a method to determine a substantially real-time location and orientation of a head mounted device (HMD) for augmented reality and virtual reality applications comprising: obtaining an absolute location and orientation of the HMD with reference to a physical environment surrounding the HMD; subsequently obtaining motion and orientation information from a gyroscope disposed upon the HMD and further motion and orientation information from an accelerometer disposed upon the HMD to correct a rotational bias of the gyroscope; and calculating, by one or more processors, the substantially real-time location and orientation of the HMD based on correcting the rotational bias of the gyroscope and adding the corrected motion and orientation information from the gyroscope to a most recent absolute location and orientation from the information provided by the positioning unit.
  • FIG. 1 illustrates in schematic form a system for tracking dynamic objects in a physical environment occupied by a plurality of users equipped with head mounted devices;
  • Fig. 2 illustrates an exemplary configuration of location, motion and orientation systems upon a user equipped with an HMD
  • FIG. 3 illustrates an exemplary configuration of a head mounted device
  • Fig. 4 illustrates a method for mapping to a virtual map the respective locations and orientations of dynamic objects in a physical environment
  • Fig. 5 illustrates a field of view for a head mounted device in a physical environment
  • Fig. 6 illustrates a method for generating a rendered image stream for a head mounted device.
  • Any module, unit, component, server, computer, terminal or device exemplified herein that executes instructions may include or otherwise have access to computer readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical discs, or tape.
  • Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by an application, module, or both. Any such computer storage media may be part of the device or accessible or connectable thereto.
  • any processor or controller set out herein may be implemented as a singular processor or as a plurality of processors. The plurality of processors may be arrayed or distributed, and any processing function referred to herein may be carried out by one or by a plurality of processors, even though a single processor may be exemplified. Any method, application or module herein described may be implemented using computer readable/executable instructions that may be stored or otherwise held by such computer readable media and executed by the one or more processors.
  • Referring to Fig. 1 and Fig. 3, an exemplary scenario is shown in which multiple users occupy a physical environment.
  • the users are equipped with HMDs 12 and peripherals 13.
  • Each dynamic object may be equipped with a location, motion and orientation (LMO) system to provide LMO information.
  • a processor 130 obtains the LMO information from the dynamic objects and maps their respective locations and orientations in a virtual map, as hereinafter described in greater detail.
  • the processor 130 may use the mapped orientations and locations for the dynamic objects to generate dynamic object-related computer generated imagery (CGI).
  • CGI computer generated imagery
  • the processor 130 may generate a rendered image stream comprising the CGI for display to the user on the display of the user's HMD 12.
  • the processor may be distributed amongst the components occupying the physical environment, within the physical environment or in a server 300 in network communication with a network 17 accessible from the physical environment.
  • the processor 130 may be distributed between the HMDs 12 and a console 11, or over the Internet via the network 17.
  • Each user's HMD 12 may communicate with the user's peripheral 13, or the HMDs and peripherals 13 may communicate directly with the console 11 or the server 300 located over a network 17 accessible from the physical environment, as shown.
  • Referring to Fig. 2, an exemplary configuration of LMO systems 21 disposed upon a user 1 is shown.
  • the user 1 may be equipped with an HMD 12, as well as LMO systems 21 disposed upon her hands and feet, and one which is disposed upon the HMD 12.
  • Each LMO system 21 may provide LMO information for the limb or body part upon which it is disposed.
  • Referring to Fig. 3, an exemplary HMD 12 embodied as a helmet is shown; however, other embodiments are contemplated.
  • the HMD 12 may comprise a processor 130 for generating a rendered image stream comprising CGI.
  • the processor may be located apart from the HMD 12.
  • the processor may be communicatively coupled to the following components of the HMD 12: (i) a scanning system 132 for scanning the physical environment surrounding the HMD 12; (ii) an LMO system 141 comprising a local positioning unit for determining the HMD's 12 location within the physical environment, a motion sensing module for detecting the HMD's 12 motions within the physical environment, and/or an orientation detection module for detecting the orientation of the HMD 12; (iii) an imaging system 123, such as, for example, a camera system comprising one or more cameras, to capture a physical image stream of the physical environment; (iv) a display system 121 for displaying to a user of the HMD 12 the physical image stream and/or the rendered image stream; (v) a power management system 113 for distributing power to the components; (vi) a sensory feedback system 120 comprising, for example, haptic feedback devices, for providing sensory feedback to the user; and (vii) an audio system 124 with audio input and output to provide audio interaction.
  • the HMD 12 may further comprise a wireless communication system 126 having, for example, antennae, to communicate with other components in an AR and/or VR system, such as, for example, other HMDs, peripherals, actors, a gaming console, or a router.
  • An exemplary peripheral in the form of a controller 13 is shown.
  • the controller 13 comprises a toggle-type actuator 131 for receiving user input.
  • Each LMO system may comprise a communication module configured to transmit LMO information for the system to the processor, or communication of LMO information may be routed through HMDs, as shown in Fig. 1.
  • communication between an LMO system worn by a user or disposed upon a peripheral held by a user may be effected via a matched transmitter-receiver signal pair between an HMD and the LMO system.
  • an LMO system for a user's peripheral, or which is worn by a user, may comprise a transmitter of a receiver-transmitter pair; the receiver of the receiver-transmitter pair may be disposed upon an HMD worn by the user.
  • the transmitter and receiver may be configured to cooperatively provide LMO information to the processor via the HMD.
  • the receiver may be disposed upon the peripheral while the transmitter may be disposed upon the HMD, and the LMO information for the HMD and peripheral may be provided to the processor via the peripheral.
  • the transmitter may transmit a signal comprising the LMO information for the transmitter relative to the receiver.
  • the transmitter obtains LMO information from its corresponding LMO system and encodes the LMO information into a signal.
  • the transmitter transmits the signal to the paired receiver.
  • the transmitter and receiver repeatedly and continuously (or substantially continuously, at a suitable frequency) transmit and receive, respectively, signals to exchange substantially real-time LMO information for the LMO system.
  • the receiver provides the transmitted LMO information from the transmitter to the processor.
  • the transmitter and receiver pair may communicate using ZigBee, Bluetooth, WiFi, radio-frequency (RF) or other suitable wireless communication protocol.
  • the transmitter may be a magnetic source and the receiver may be a magnetic receiver/antenna, or vice versa.
  • the pair can be configured so that the receiver is able to detect the field of the source, so that the transmitter-receiver pair may serve as both an LMO system and a communication system.
  • the LMO information of the peripheral may be provided to the processor as LMO information relative to the LMO information of the HMD.
  • the processor may then map the substantially real-time location and orientation of the peripheral according to a transformation to transform the LMO information of the peripheral, which is defined relative to the LMO information of the HMD, to LMO information relative to the physical environment.
  • the LMO system of the HMD may return the substantially instantaneous location and orientation of the HMD relative to the coordinates of the physical environment, while the LMO system of the peripheral may cooperate with the LMO system of the HMD to provide LMO information relative to the HMD.
  • the transformation may therefore be implemented in order to redefine the LMO information from the peripheral into the coordinate system defining the physical environment.
  • a user may be equipped with a peripheral comprising a paired transmitter-receiver LMO system, while a display in the physical environment which the user is occupying may comprise the opposite half of the paired transmitter-receiver LMO system.
  • the display may be, for example, an LCD, LED, OLED or plasma television or monitor.
  • a processor within the physical environment or within the display, and in communication with the display and its LMO system, may be configured to obtain the LMO information of the peripheral relative to the location and orientation of the display so that the user's gestures may be tracked by the processor for engaging with the display. A user may thereby interact with the display using gestures.
  • An LMO system may comprise one or more of the following: (i) a positioning sensor configured to track a location, the positioning sensor comprising, for example, one or more of a magnetic, ultrasound, radio-frequency (RF), LiDAR or radar type of sensor, whether alone or in combination; and (ii) a motion sensor configured to track motions, the motion sensor comprising, for example, one or more of the following: an accelerometer, a gyroscope, a magnetometer, whether alone or in combination.
  • the sensors in the LMO system may be, for example, MEMS-type sensors. Certain sensor configurations may provide all LMO information as a single suite.
  • the LMO system may comprise a 3D magnetic positioning unit, which alone may provide real-time location, positioning and orientation information.
  • the magnetic positioning unit provides the location and orientation in absolute terms.
  • absolute refers to locations and orientations which are determined relative to other features within the surrounding physical environment at a given point in time.
  • dead reckoning or relative positioning relies on determining positions and orientations based on measuring and adding motion information.
  • a magnetic positioning unit alone may be sufficiently fast and accurate to provide location and orientation information.
  • typical refresh rates of a magnetic positioning unit may be unsuitable.
  • certain magnetic positioning units are susceptible to magnetic "noise" in some environments which may render their readings ambiguous or unreliable. In such environments, it may be preferable, therefore, to implement other or further sensing systems.
  • the LMO system and the processor cooperate to implement dead-reckoning techniques, i.e., techniques in which relative changes to the motion and orientation of the LMO system are added to a known initial location and orientation to return the location of the LMO system at a given point in time.
  • Dead-reckoning location calculation may suffer from cumulative errors and, further, the initial location and orientation of the sensor must be known or determined in certain applications. Therefore, in an embodiment of an LMO system using dead-reckoning techniques, the LMO system comprises an inertial measurement unit (IMU) having a combination of an accelerometer and a gyroscope. Gyroscopes frequently experience latent rotational bias, while accelerometers may provide relatively less accurate dead-reckoning measurements.
  • IMU inertial measurement unit
  • the accelerometer preferably serves as a reference for the readings of the gyroscope from which the processor may determine the degree of rotational bias exhibited by the gyroscope and adjust the readings accordingly.
  • the LMO system may comprise a positioning sensor, or the initial orientation may be determined from a motion sensor depending on the type of motion sensor used.
  • the processor may be configured to assume aspects of orientation based on the parameters returned by the motion sensor at rest.
  • the resting location of an accelerometer may provide the orientation of the LMO system with reference to the earth's gravity: the motion vector of an IMU at rest may be assumed by the processor to be pointing towards the earth.
  • the processor may use this information to assign an initial orientation, subsequent deviation from which will be provided by the IMU as motion information.
  • the processor may determine the orientation of the LMO system relative to the physical environment at a given point in time.
  • the LMO system may comprise a local positioning unit from which the initial orientation and location may be determined, as previously described.
  • the processor may determine the location and orientation of the HMD relative to an initial scan (or another prior scan) of the physical environment.
  • the processor may identify features common to both the initial scan (or another prior scan) and a later scan to determine the change in location and orientation of the HMD giving rise to any deviation between the scans apparent in the common features.
  • the use of the scanning system to determine the location and orientation of the HMD may be preferable to use of a magnetic positioning unit.
  • the processor obtains readings from an IMU and a magnetic positioning unit in order to periodically verify the accuracy of, and correct, any dead reckoning calculations. Since the magnetic positioning unit provides absolute instead of relative orientation and location information, the processor may incorporate that information to adjust the calculated location and orientation of the LMO system. If the rate at which the IMU provides LMO information exceeds the same rate for the magnetic positioning unit, the IMU may be used by the processor for interim analysis between readings from the magnetic positioning unit.
  • a user equipped with an HMD 12 may be equipped with additional LMO systems 21 configured to provide LMO information for various body parts to which the LMO systems 21 are coupled. For example, a user may wear a wrist band comprising an LMO system 21, as shown in Fig. 2. As the user moves her hand and/or arm, the LMO system 21 may provide real-time LMO information which the processor may use to map the user's motions and gestures to the virtual map.
  • peripherals or other wearable devices may comprise myography sensors, such as, for example, electromyography (EMG), mechanomyography (MMG) or phonomyography sensors, to measure the user's gestures and body movements.
  • the myography sensors may provide motion information based on measurements to the processor so that the user's gestures may be mapped to the virtual map.
  • the processor may access a library of "gestures" against which to compare the output of the myography sensors in order to apply the gestures to a given application. For example, a user equipped with a myography sensor configured to measure finger movements may depress a virtual trigger of a virtual gun by contracting her index finger.
  • the processor may map the motion to the virtual map for rendering, while determining whether the motion corresponds to a trigger motion in the gesture library. If the processor determines that the motion does correspond to the trigger motion, the processor will record the event to actuate an outcome according to predefined rules, such as, for example, game parameters. Alternatively, the user may actuate the trigger event by, for example, depressing a trigger or other button on a peripheral device with her index finger. (An illustrative sketch of matching sensor output against a gesture library appears after this list.)
  • the processor may obtain an approximate finger motion from the gesture library and map the user's finger motion to the virtual map based on an approximate finger motion expected to actuate the trigger motion.
  • an LMO system may be added to an existing peripheral so that the processor may map the motion of the peripheral to the virtual map.
  • the processor may use the LMO information to derive LMO information for virtual objects and actions represented in the rendered image streams. For example, the processor may correlate a given LMO sensor to a virtual device, such as a virtual gun, so that any virtual bullet fired therefrom is represented in the rendered image stream according to the location, motion and orientation of the virtual gun at a time of firing. The trajectory of the virtual bullet may be mapped to the virtual map so that CGI representing the virtual bullet may be rendered according to the mapped trajectory.
  • the LMO systems of dynamic objects coupled to a particular HMD may provide LMO information as values or vectors relative to the location and orientation of the HMD at a point in time.
  • the HMD and a peripheral may each comprise a 3D magnetic positioning sensor configured to provide location and orientation information for the dynamic object relative to the HMD.
  • the processor may map the location and orientation of the peripheral with reference to the HMD's location and orientation.
  • the processor may render a rendered image stream comprising CGI for all mapped dynamic objects in the physical environment according to the method illustrated in Fig. 4.
  • the processor generates or acquires a map of the physical environment, at block 401.
  • the processor may generate the map based on information provided by the scanning system.
  • the processor may obtain the coordinates from the scanning system of one or more HMDs for all scanned physical features of the physical environment and generate a virtual map as, for example, a 3D point cloud in which each point is defined according to its real-world (i.e., physical environment) dimensions.
  • the processor may acquire the map from a map library in which an already generated map of static features within the physical environment is stored.
  • the processor obtains the location and orientation information from the various LMO systems in the physical environment and maps their locations and orientations to the virtual map.
  • the processor may repeatedly and continuously refresh the map by updating the locations and orientations of the dynamic objects according to the substantially real-time LMO information provided by their respective LMO systems. (An illustrative sketch of this mapping and refresh loop appears after this list.)
  • the processor may generate a rendered image stream comprising dynamic object- related CGI.
  • a user's display of the HMD with which the user is equipped may display a physical image stream as well as the rendered image stream.
  • the physical image stream may be captured by an imaging system, such as one or more cameras, and relayed to the display.
  • the processor may therefore generate a rendered image stream whose CGI will be displayed by the display over corresponding features of the physical environment within the physical image stream. For example, in a game in which a first user and a second user occupy a physical environment, the processor may generate a rendered image stream for the first user in which the second user is represented as a zombie when the second user appears within the field of view of the first user's physical imaging system.
  • the first user may perceive the second user as a zombie.
  • the HMD 12 comprises a camera 123 to capture a physical image stream of the physical environment 431 , and an LMO system 21.
  • the field of view of the camera 123 is shown by the dashed lines emanating outwardly therefrom.
  • the processor may ascertain the field of view for the camera 123 from memory or directly from the camera.
  • the LMO system 21 of the HMD provides substantially real-time location and orientation information for the HMD 12.
  • the processor may ascertain the relative location and orientation (shown as coordinates Xc, Yc, Zc, αc, βc, γc) of the camera 123 with respect to the HMD 12 from memory, so that the processor may determine the field of view of the camera 123 at a given time based on the LMO information of the HMD 12 at that time.
  • the processor may thereby map the field of view to the virtual map as a notional field of view.
  • the processor may map a notional field of view which has the same parameters as the field of view of the camera 123; however, the processor may map a notional field of view with different parameters. For example, if the rendered image stream is displayed alone (i.e., without the physical image stream), as in a pure VR application, then the processor may define any parameters for the notional field of view.
  • A method for calibrating the rendered image stream is illustrated in Fig. 6. Assuming the processor has initiated a virtual map of the physical environment, as described above with respect to Fig. 4, at block 601 the processor determines the location and orientation of the HMD based on the LMO information provided by the LMO system of the HMD. At block 603, the processor obtains, from a memory or from a camera system on the HMD, data comprising a correlation between the LMO information for the HMD and the field of view of the HMD. The correlation may be between the LMO information for the HMD and the actual parameters of the field of view of the camera system, or the correlation may be between the LMO information for the HMD and a predetermined field of view having any defined parameters stored in the memory. (An illustrative sketch of this field-of-view mapping and the subsequent display transformation appears after this list.)
  • the processor maps the notional field of view to the virtual map based on the correlation and the LMO information.
  • the processor repeatedly and continuously updates the notional field of view in the virtual map based on substantially real-time LMO information provided by the LMO system of the HMD.
  • the processor maps and repeatedly and continuously updates in the virtual map the locations and orientations of the dynamic objects within the physical environment, as described with respect to blocks 403 and 405 of Fig. 4.
  • the processor either renders CGI for the entire physical environment, or renders CGI only for those features which are within the notional fields of view of any users occupying the physical environment.
  • a rendered image stream may comprise only that CGI which is within the notional field of view.
  • the processor applies a transformation, which may be provided by the imaging system, the display system or from the memory, to the CGI to transform the coordinates of the CGI from the coordinate system of the virtual map to the coordinate system of the display.
  • the transformed CGI is then provided to the display as a rendered image stream.
  • a physical image stream generated by the imaging system may be provided to the display substantially simultaneously to the rendered image stream, depending on whether AR or VR is to be displayed.
  • the processor may generate multiple rendered image streams, one for each LMO system-equipped HMD occupying the physical environment.
  • Each rendered image stream would account for the substantially real-time LMO information for the HMD to which it is provided, thereby enabling multiple users to experience individualised AR or VR according to their respective locations and orientations within the physical environment.
  • the processor may adjust the rendered image stream to change the field of view so that the user experiences a field of view that substantially correlates to the field of view he would experience if he were moving throughout the environment without wearing an HMD. For example, a first user whose natural field of view includes a second user would expect the second user's relative location within the field of view to change as the user turns his head; if the first user turns his head leftwards, the second user should displace rightwards within the first user's field of view.
  • the processor may simulate this effect by invoking the method described above with respect to Fig. 6.
  • the notional field of view preferably has the same real-world parameters (such as dimensions, focal length and orientation) as the physical world field of view of the user's HMD.
  • the user's display may then simultaneously display both the physical and rendered image streams to present an AR.
  • the foregoing systems and methods may enable a user equipped with a wearable LMO system to perform exercises, the motions of which may be mapped and tracked for analysis by, for example, the user's trainer.
  • the foregoing systems and methods may enable a user to use gesture controls to interact with a processor, for example to initiate AR or VR gameplay in a physical environment.
  • the foregoing systems and methods may enable the processor to generate CGI for dynamic objects in a physical environment, even while those dynamic objects lie outside the field of view or capture region of a user's HMD.
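
The gesture-library comparison described above (matching myography sensor output against stored gestures to recognise, for example, a virtual trigger pull) can be illustrated with a short sketch that is not part of the original disclosure. The library contents, the feature vectors and the distance threshold below are hypothetical placeholders chosen only for illustration.

    import numpy as np

    # Hypothetical gesture library: each entry maps a gesture name to a reference
    # feature vector derived from myography (e.g. EMG) measurements.
    GESTURE_LIBRARY = {
        "trigger_pull": np.array([0.9, 0.1, 0.0]),
        "open_hand":    np.array([0.1, 0.8, 0.7]),
    }

    def match_gesture(features, threshold=0.3):
        """Return the library gesture closest to the measured features, or None."""
        best_name, best_dist = None, float("inf")
        for name, reference in GESTURE_LIBRARY.items():
            dist = np.linalg.norm(features - reference)
            if dist < best_dist:
                best_name, best_dist = name, dist
        return best_name if best_dist <= threshold else None

    # A contraction of the index finger measured by a myography sensor might yield
    # a vector close to the stored "trigger_pull" entry; the processor would then
    # record the event and actuate the outcome defined by the application rules.
    event = match_gesture(np.array([0.85, 0.15, 0.05]))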
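
The mapping procedure of Fig. 4 (blocks 401 to 405) can likewise be sketched. The class and method names below, including the read() interface assumed for an LMO system, are illustrative assumptions rather than the disclosed implementation; the virtual map is represented here as a 3D point cloud of scanned static features plus a table of dynamic object poses that is refreshed on every pass.

    import numpy as np

    class VirtualMap:
        def __init__(self, static_points):
            # Block 401: the scanned physical environment as a 3D point cloud in which
            # each point is defined according to its real-world dimensions.
            self.static_points = np.asarray(static_points, dtype=float)
            self.dynamic_objects = {}  # object id -> (location, orientation)

        def update_object(self, object_id, location, orientation):
            # Blocks 403/405: map, and keep refreshing, the substantially real-time
            # location and orientation reported by the object's LMO system.
            self.dynamic_objects[object_id] = (np.asarray(location), np.asarray(orientation))

    def refresh_pass(virtual_map, lmo_systems):
        """One refresh pass: poll every LMO system and update the virtual map."""
        for object_id, lmo_system in lmo_systems.items():
            location, orientation = lmo_system.read()  # assumed LMO system interface
            virtual_map.update_object(object_id, location, orientation)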
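
The field-of-view mapping of Fig. 5 and the display transformation of Fig. 6 can be sketched as follows. The sketch assumes the stored correlation (Xc, Yc, Zc, αc, βc, γc) is available as a rotation matrix and translation vector of the camera relative to the HMD, and it uses a simple pinhole projection as a stand-in for whatever transformation the imaging or display system would actually supply; the focal length and display centre are hypothetical values.

    import numpy as np

    def notional_fov_pose(hmd_rotation, hmd_location, cam_rotation_offset, cam_translation_offset):
        """Derive the notional field-of-view pose from the HMD's LMO information
        and the stored correlation (camera offset relative to the HMD)."""
        fov_rotation = hmd_rotation @ cam_rotation_offset
        fov_location = hmd_rotation @ cam_translation_offset + hmd_location
        return fov_rotation, fov_location

    def cgi_to_display(points_virtual_map, fov_rotation, fov_location,
                       focal_length=800.0, display_centre=(960.0, 540.0)):
        """Transform CGI coordinates from the virtual map into display (pixel) coordinates."""
        # Express the CGI points in the field-of-view (camera) frame.
        points_cam = (np.asarray(points_virtual_map, dtype=float) - fov_location) @ fov_rotation
        # Keep only points situated in front of the notional field of view.
        points_cam = points_cam[points_cam[:, 2] > 0]
        # Hypothetical pinhole projection onto the HMD display.
        u = focal_length * points_cam[:, 0] / points_cam[:, 2] + display_centre[0]
        v = focal_length * points_cam[:, 1] / points_cam[:, 2] + display_centre[1]
        return np.stack([u, v], axis=1)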

Abstract

Tracking of dynamic objects within a physical environment for display to a user equipped with a head mounted device is described. A system comprises the head mounted device, a processor in communication with a display of the head mounted device and further in communication with location, motion and orientation systems, each disposed upon the head mounted device, as well as other dynamic objects within the physical environment. The processor maps the substantially real-time location and orientation of the dynamic objects to a virtual map of the physical environment according to location, motion and orientation information provided by each of the location, motion and orientation systems. The processor repeatedly and continuously updates the virtual map according to changes in the location and orientation of the dynamic objects. The processor may further simulate a changing field of view experienced by the user of the head mounted device.

Description

SYSTEM AND METHOD FOR TRACKING WEARABLE PERIPHERALS IN AUGMENTED REALITY AND VIRTUAL REALITY APPLICATIONS
TECHNICAL FIELD
[0001] The following relates generally to wearable technologies, and more specifically to tracking of a head mounted device, user gestures and peripheral devices in augmented reality and virtual reality environments.
BACKGROUND
[0002] The range of applications for augmented reality (AR) and virtual reality (VR) visualization has increased with the advent of wearable technologies and 3-dimensional (3D) rendering techniques. AR and VR exist on a continuum of mixed reality visualization.
SUMMARY
[0003] In one aspect, a system for generating a rendered image stream for display to a user of a head mounted device (HMD) for augmented reality and virtual reality applications is provided, the system comprising: a location, motion and orientation (LMO) system disposed upon each of a plurality of tracked dynamic objects, including the HMD, within the physical environment, the LMO system being configured to provide location, motion and orientation information for each tracked dynamic object; a memory having stored thereon data comprising a correlation between the location, motion and orientation information for the HMD and a field of view of the HMD; and a processor configured to: map the physical environment to a virtual map; obtain from each LMO system, the location, motion and orientation information for each tracked dynamic object; repeatedly and substantially continuously map to the virtual map the substantially real-time location and orientation of each tracked dynamic object within the physical environment; for the HMD, obtain the data from the memory, determine the location and orientation of the field of view based on the correlation and the location and orientation of the HMD, and repeatedly and substantially continuously map to the virtual map the field of view; render computer generated images related to at least one of the tracked dynamic objects; transform the coordinates of the computer generated images from a coordinate system of the virtual map to a coordinate system of a display of the HMD; and provide the transformed computer generated images situated within the field of view as an image stream.
[0004] In another aspect, a method for generating and displaying a rendered image stream to a user of a head mounted device (HMD) for augmented reality and virtual reality applications is provided, the method comprising: obtaining from a memory, data comprising a correlation between the location, motion and orientation information for the HMD and a field of view of the HMD; mapping the physical environment to a virtual map; obtaining from a plurality of tracked dynamic objects, including the HMD, within the physical environment, location, motion and orientation information; repeatedly and substantially continuously mapping to the virtual map the substantially real-time location and orientation of each tracked dynamic object within the physical environment; by one or more processors, for the HMD, determining the location and orientation of the field of view based on the correlation and the location and orientation of the HMD, and repeatedly and substantially continuously mapping to the virtual map the field of view; rendering computer generated images related to at least one of the tracked dynamic objects; transforming the coordinates of the computer generated images from a coordinate system of the virtual map to a coordinate system of a display of the HMD; and providing the transformed computer generated images situated within the field of view as an image stream.
[0005] In yet another aspect, a system for tracking dynamic objects in a physical environment to render an image stream for a head mounted device (HMD) for augmented reality and virtual reality applications is provided, the system comprising: a location, motion and orientation (LMO) system disposed upon each tracked dynamic object, configured to provide location, motion and orientation information for each respective tracked dynamic object; and a processor communicatively coupled to the location, motion and orientation systems, the processor configured to: map the physical environment to a virtual map; obtain the location, motion and orientation information for each tracked dynamic object; and repeatedly and substantially continuously map to the virtual map the location and orientation of each tracked dynamic object within the physical environment.
[0006] In still another aspect, a method for tracking dynamic objects in a physical environment to render an image stream for a head mounted device (HMD) for augmented reality and virtual reality applications is provided, the method comprising: mapping the physical environment to a virtual map; obtaining from a plurality of tracked dynamic objects, within the physical environment, location, motion and orientation information; and by one or more processors, repeatedly and substantially continuously mapping to the virtual map the location and orientation of each tracked dynamic object within the physical environment.
[0007] In a further aspect, a head mounted device (HMD) for augmented reality and virtual reality application is provided, the HMD comprising: a location, motion and orientation system disposed upon the HMD for providing location, motion and orientation information, the location, motion and orientation system comprising: a positioning unit to provide a determined location and orientation of the HMD with reference to a physical environment surrounding the HMD; an inertial measurement unit to provide motion information, comprising a gyroscope and an accelerometer; and a processor communicatively coupled to the location, motion and orientation system, configured to calculate the substantially real-time location and orientation of the HMD based on adding the motion information from the inertial measurement unit to a most recent determined location and orientation from the information provided by the positioning unit.
[0008] In yet a further aspect, a method to determine a substantially real-time location and orientation of a head mounted device (HMD) for augmented reality and virtual reality applications is provided, the method comprising: obtaining an absolute location and orientation of the HMD with reference to a physical environment surrounding the HMD; subsequently obtaining motion and orientation information from a gyroscope disposed upon the HMD and further motion and orientation information from an accelerometer disposed upon the HMD to correct a rotational bias of the gyroscope; and calculating, by one or more processors, the substantially real-time location and orientation of the HMD based on correcting the rotational bias of the gyroscope and adding the corrected motion and orientation information from the gyroscope to a most recent absolute location and orientation from the information provided by the positioning unit.
[0009] These and other aspects are contemplated and described herein. It will be appreciated that the foregoing summary sets out representative aspects of systems and methods for tracking dynamic objects within a physical environment to assist skilled readers in understanding the following detailed description.
DESCRIPTION OF THE DRAWINGS
[0010] A greater understanding of the embodiments will be had with reference to the Figures, in which:
[0011] Fig. 1 illustrates in schematic form a system for tracking dynamic objects in a physical environment occupied by a plurality of users equipped with head mounted devices;
[0012] Fig. 2 illustrates an exemplary configuration of location, motion and orientation systems upon a user equipped with an HMD;
[0013] Fig. 3 illustrates an exemplary configuration of a head mounted device;
[0014] Fig. 4 illustrates a method for mapping to a virtual map the respective locations and orientations of dynamic objects in a physical environment;
[0015] Fig. 5 illustrates a field of view for a head mounted device in a physical environment; and
[0016] Fig. 6 illustrates a method for generating a rendered image stream for a head mounted device.
DETAILED DESCRIPTION
[0017] For simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the Figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein may be practised without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the embodiments described herein. Also, the description is not to be considered as limiting the scope of the embodiments described herein.
[0018] Various terms used throughout the present description may be read and understood as follows, unless the context indicates otherwise: "or" as used throughout is inclusive, as though written "and/or"; singular articles and pronouns as used throughout include their plural forms, and vice versa; similarly, gendered pronouns include their counterpart pronouns so that pronouns should not be understood as limiting anything described herein to use, implementation, performance, etc. by a single gender; "exemplary" should be understood as "illustrative" or "exemplifying" and not necessarily as "preferred" over other embodiments. Further definitions for terms may be set out herein; these may apply to prior and subsequent instances of those terms, as will be understood from a reading of the present description.
[0019] Any module, unit, component, server, computer, terminal or device exemplified herein that executes instructions may include or otherwise have access to computer readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical discs, or tape. Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by an application, module, or both. Any such computer storage media may be part of the device or accessible or connectable thereto. Further, unless the context clearly indicates otherwise, any processor or controller set out herein may be implemented as a singular processor or as a plurality of processors. The plurality of processors may be arrayed or distributed, and any processing function referred to herein may be carried out by one or by a plurality of processors, even though a single processor may be exemplified. Any method, application or module herein described may be implemented using computer readable/executable instructions that may be stored or otherwise held by such computer readable media and executed by the one or more processors.
[0020] Referring now to Fig. 1 and Fig. 3, an exemplary scenario is shown in which multiple users occupy a physical environment. The users are equipped with HMDs 12 and peripherals 13. The HMDs 12 corresponding to users and their peripherals 13, along with other animate objects occupying the physical environment, are referred to collectively herein as "dynamic objects". Each dynamic object may be equipped with a location, motion and orientation (LMO) system to provide LMO information. A processor 130 obtains the LMO information from the dynamic objects and maps their respective locations and orientations in a virtual map, as hereinafter described in greater detail. The processor 130 may use the mapped orientations and locations for the dynamic objects to generate dynamic object-related computer generated imagery (CGI). For each user equipped with an HMD 12 who occupies the physical environment, the processor 130 may generate a rendered image stream comprising the CGI for display to the user on the display of the user's HMD 12.
[0021] The singular "processor" is used herein, but it will be appreciated that the processor may be distributed amongst the components occupying the physical environment, within the physical environment or in a server 300 in network communication with a network 17 accessible from the physical environment. For example, the processor 130 may be distributed between the HMDs 12 and a console 11, or over the Internet via the network 17.
[0022] Each user's HMD 12 may communicate with the user's peripheral 13, or the HMDs and peripherals 13 may communicate directly with the console 11 or the server 300 located over a network 17 accessible from the physical environment, as shown.
[0023] Referring now to Fig. 2, an exemplary configuration of LMO systems 21 disposed upon a user 1 is shown. The user 1 may be equipped with an HMD 12, as well as LMO systems 21 disposed upon her hands and feet, and one which is disposed upon the HMD 12. Other configurations are possible. Each LMO system 21 may provide LMO information for the limb or body part upon which it is disposed.
[0024] Referring again to Fig. 3, an exemplary HMD 12 embodied as a helmet is shown; however, other embodiments are contemplated. The HMD 12 may comprise a processor 130 for generating a rendered image stream comprising CGI. The processor 130 is shown in Fig. 3 mounted within the HMD 12; however, as previously described, the processor may be located apart from the HMD 12. The processor may be communicatively coupled to the following components of the HMD 12: (i) a scanning system 132 for scanning the physical environment surrounding the HMD 12; (ii) an LMO system 141 comprising a local positioning unit for determining the HMD's 12 location within the physical environment, a motion sensing module for detecting the HMD's 12 motions within the physical environment, and/or an orientation detection module for detecting the orientation of the HMD 12; (iii) an imaging system 123, such as, for example, a camera system comprising one or more cameras, to capture a physical image stream of the physical environment; (iv) a display system 121 for displaying to a user of the HMD 12 the physical image stream and/or the rendered image stream; (v) a power management system 113 for distributing power to the components; (vi) a sensory feedback system 120 comprising, for example, haptic feedback devices, for providing sensory feedback to the user; and (vii) an audio system 124 with audio input and output to provide audio interaction. The HMD 12 may further comprise a wireless communication system 126 having, for example, antennae, to communicate with other components in an AR and/or VR system, such as, for example, other HMDs, peripherals, actors, a gaming console, or a router. An exemplary peripheral in the form of a controller 13 is shown. The controller 13 comprises a toggle-type actuator 131 for receiving user input.
[0025] Communication between the processor and the LMO systems may be via suitable wired or wireless communication. Each LMO system may comprise a communication module configured to transmit LMO information for the system to the processor, or communication of LMO information may be routed through HMDs, as shown in Fig. 1. In embodiments, communication between an LMO system worn by a user or disposed upon a peripheral held by a user may be effected via a matched transmitter-receiver signal pair between an HMD and the LMO system. For example, an LMO system for a user's peripheral, or which is worn by a user, may comprise a transmitter of a receiver-transmitter pair; the receiver of the receiver-transmitter pair may be disposed upon an HMD worn by the user. The transmitter and receiver may be configured to cooperatively provide LMO information to the processor via the HMD. Alternatively, the receiver may be disposed upon the peripheral while the transmitter may be disposed upon the HMD, and the LMO information for the HMD and peripheral may be provided to the processor via the peripheral. The transmitter may transmit a signal comprising the LMO information for the transmitter relative to the receiver. The transmitter obtains LMO information from its corresponding LMO system and encodes the LMO information into a signal. The transmitter transmits the signal to the paired receiver. The transmitter and receiver repeatedly and continuously (or substantially continuously, at a suitable frequency) transmit and receive, respectively, signals to exchange substantially real-time LMO information for the LMO system. The receiver provides the transmitted LMO information from the transmitter to the processor. The transmitter and receiver pair may communicate using ZigBee, Bluetooth, WiFi, radio-frequency (RF) or other suitable wireless communication protocol. For magnetic tracking, the transmitter may be a magnetic source and the receiver may be a magnetic receiver/antenna, or vice versa. The pair can be configured so that the receiver is able to detect the field of the source, so that the transmitter-receiver pair may serve as both an LMO system and a communication system. Where a user is outfitted with an HMD and a peripheral configured with matched transmitter-receiver pairs, the LMO information of the peripheral may be provided to the processor as LMO information relative to the LMO information of the HMD. The processor may then map the substantially real-time location and orientation of the peripheral according to a transformation to transform the LMO information of the peripheral, which is defined relative to the LMO information of the HMD, to LMO information relative to the physical environment. For example, the LMO system of the HMD may return the substantially instantaneous location and orientation of the HMD relative to the coordinates of the physical environment, while the LMO system of the peripheral may cooperate with the LMO system of the HMD to provide LMO information relative to the HMD. The transformation may therefore be implemented in order to redefine the LMO information from the peripheral into the coordinate system defining the physical environment.
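The transformation described in the preceding paragraph can be illustrated with a short sketch that is not part of the original disclosure. It assumes each pose is represented as a 3x3 rotation matrix plus a translation vector, and composes the peripheral pose reported relative to the HMD (by the matched transmitter-receiver pair) with the HMD pose reported relative to the physical environment, yielding the peripheral's location and orientation in the coordinate system of the physical environment; the numeric values are hypothetical.

    import numpy as np

    def compose_pose(r_env_hmd, t_env_hmd, r_hmd_per, t_hmd_per):
        """Redefine a peripheral pose given relative to the HMD into the coordinate
        system of the physical environment (rotation matrices and translations)."""
        r_env_per = r_env_hmd @ r_hmd_per
        t_env_per = r_env_hmd @ t_hmd_per + t_env_hmd
        return r_env_per, t_env_per

    # HMD pose relative to the physical environment (from the HMD's LMO system).
    r_env_hmd = np.eye(3)
    t_env_hmd = np.array([1.0, 0.0, 1.6])

    # Peripheral pose relative to the HMD (from the matched transmitter-receiver pair).
    r_hmd_per = np.eye(3)
    t_hmd_per = np.array([0.2, -0.3, 0.4])

    r_env_per, t_env_per = compose_pose(r_env_hmd, t_env_hmd, r_hmd_per, t_hmd_per)

The same composition can be applied each time new LMO information arrives, so that the peripheral's substantially real-time location and orientation can be written to the virtual map.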
[0026] In aspects, a user may be equipped with a peripheral comprising one half of a paired transmitter-receiver LMO system, while a display in the physical environment which the user is occupying may comprise the opposite half of the paired transmitter-receiver LMO system. The display may be, for example, an LCD, LED, OLED or plasma television or monitor. A processor within the physical environment or within the display, and in communication with the display and its LMO system, may be configured to obtain the LMO information of the peripheral relative to the location and orientation of the display so that the user's gestures may be tracked by the processor for engaging with the display. A user may thereby interact with the display through movements and gestures tracked by the peripheral or a wearable LMO system within the physical environment.
[0027] An LMO system may comprise one or more of the following: (i) a positioning sensor configured to track a location, the positioning sensor comprising, for example, one or more of a magnetic, ultrasound, radio-frequency (RF), LiDAR or radar type of sensor, whether alone or in combination; and (ii) a motion sensor configured to track motions, the motion sensor comprising, for example, one or more of the following: an accelerometer, a gyroscope, a magnetometer, whether alone or in combination. The sensors in the LMO system may be, for example, MEMS-type sensors. Certain sensor configurations may provide all LMO information as a single suite.
[0028] The LMO system may comprise a 3D magnetic positioning unit, which alone may provide real-time location and orientation information. The magnetic positioning unit provides the location and orientation in absolute terms. As used in this disclosure, the term "absolute" refers to locations and orientations which are determined relative to other features within the surrounding physical environment at a given point in time. In contrast, dead reckoning or relative positioning determines positions and orientations by measuring and accumulating motion information. Depending on the application, a magnetic positioning unit alone may be sufficiently fast and accurate to provide location and orientation information. However, in other cases, typical refresh rates of a magnetic positioning unit may be unsuitable. Further, applicant has found that certain magnetic positioning units are susceptible to magnetic "noise" in some environments, which may render their readings ambiguous or unreliable. In such environments, it may therefore be preferable to implement other or further sensing systems.
[0029] In embodiments, the LMO system and the processor cooperate to implement dead-reckoning techniques, i.e., techniques in which relative changes to the motion and orientation of the LMO system are added to a known initial location and orientation to return the location of the LMO system at a given point in time. Dead reckoning location calculation may suffer from cumulative errors and, further, the initial location and orientation of the sensor must be known or determined in certain applications. Therefore, in an embodiment of an LMO system using dead-reckoning techniques, the LMO system comprises an inertial measurement unit (IMU) having a combination of an accelerometer and a gyroscope. Gyroscopes frequently experience latent rotational bias, while accelerometers may provide relatively less accurate dead-reckoning measurements. Therefore, the accelerometer preferably serves as a reference for the readings of the gyroscope, from which the processor may determine the degree of rotational bias exhibited by the gyroscope and adjust the readings accordingly.
[0030] It should be noted that in a full VR environment (i.e., where the processor does not account for features of the physical environment when generating the rendered image stream), ascertaining the initial location for an LMO system may not be necessary, since it may be sufficient for the processor to use a default initial location stored in memory.
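Referring to the bias correction described in paragraph [0029], one common way to use an accelerometer as a reference for a drifting gyroscope is a complementary filter; the sketch below is a minimal, hypothetical example (the function name and the fixed blend factor alpha are assumptions, not part of the disclosure), not necessarily the correction the processor employs.

```python
import numpy as np

def complementary_filter(angle_prev, gyro_rate, accel, dt, alpha=0.98):
    """Estimate a roll (or pitch) angle by integrating the gyroscope rate and
    correcting its slow bias drift with the angle implied by the accelerometer's
    gravity reading (valid when external acceleration is small)."""
    gyro_angle = angle_prev + gyro_rate * dt          # dead-reckoned estimate
    accel_angle = np.arctan2(accel[1], accel[2])      # roll implied by gravity direction
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle

# Hypothetical usage with a 100 Hz update:
# angle = complementary_filter(angle, gyro_x_rad_s, np.array([0.0, 0.05, 9.79]), dt=0.01)
```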
[0031] In order to determine an initial location and orientation for the LMO system, the LMO system may comprise a positioning sensor, or the initial orientation may be determined from a motion sensor, depending on the type of motion sensor used. In the latter case, the processor may be configured to assume aspects of orientation based on the parameters returned by the motion sensor at rest. For example, the resting reading of an accelerometer may provide the orientation of the LMO system with reference to the earth's gravity: the acceleration vector reported by an IMU at rest may be assumed by the processor to be pointing towards the earth. The processor may use this information to assign an initial orientation, subsequent deviation from which will be provided by the IMU as motion information. By adding the subsequent motion information determined by dead reckoning to the initial orientation, the processor may determine the orientation of the LMO system relative to the physical environment at a given point in time. Alternatively, the LMO system may comprise a local positioning unit from which the initial orientation and location may be determined, as previously described. For example, if the HMD comprises a LiDAR-type scanning system configured to periodically scan the physical environment surrounding the HMD, the processor may determine the location and orientation of the HMD relative to an initial scan (or another prior scan) of the physical environment. The processor may identify features common to both the initial scan (or another prior scan) and a later scan to determine the change in location and orientation of the HMD giving rise to any apparent deviation of the common features between the scans. In environments with excessive magnetic noise, the use of the scanning system to determine the location and orientation of the HMD may be preferable to use of a magnetic positioning unit.
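With respect to the scan-based approach of paragraph [0031], the deviation of common features between two scans can, for example, be converted into a change of location and orientation by a least-squares rigid alignment. The sketch below assumes that corresponding features have already been matched row by row (at least three non-collinear matches) and uses the hypothetical name rigid_transform_from_scans; it illustrates one possible method (an SVD-based Kabsch alignment), not necessarily the one employed by the scanning system.

```python
import numpy as np

def rigid_transform_from_scans(points_prev, points_curr):
    """Estimate the rotation R and translation t that best map features of a
    prior scan onto the same features in a later scan (rows are matched 3D
    points). The motion of the features implies the change in the scanner's
    own location and orientation between the scans."""
    c_prev = points_prev.mean(axis=0)
    c_curr = points_curr.mean(axis=0)
    H = (points_prev - c_prev).T @ (points_curr - c_curr)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = c_curr - R @ c_prev
    return R, t   # R @ p_prev + t ≈ p_curr for each matched feature
```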
[0032] In another embodiment, the processor obtains readings from an IMU and a magnetic positioning unit in order to periodically verify the accuracy of, and correct, any dead reckoning calculations. Since the magnetic positioning unit provides absolute instead of relative orientation and location information, the processor may incorporate that information to adjust the calculated location and orientation of the LMO system. If the rate at which the IMU provides LMO information exceeds the rate at which the magnetic positioning unit provides it, the IMU may be used by the processor for interim analysis between readings from the magnetic positioning unit.
[0033] A user equipped with an HMD 12 may be equipped with additional LMO systems 21 configured to provide LMO information for various body parts to which the LMO systems 21 are coupled. For example, a user may wear a wrist band comprising an LMO system 21, as shown in Fig. 2. As the user moves her hand and/or arm, the LMO system 21 may provide real-time LMO information which the processor may use to map the user's motions and gestures to the virtual map.
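As a hypothetical illustration of the interim analysis described in paragraph [0032], the following sketch dead-reckons the location from IMU readings between absolute fixes and replaces the accumulated estimate whenever the slower magnetic positioning unit reports a new absolute location. The class name FusedTracker and the assumption that gravity has already been removed from the acceleration readings are illustrative only.

```python
import numpy as np

class FusedTracker:
    """Dead-reckons location from IMU acceleration between absolute fixes and
    snaps back to each absolute fix from the (slower) magnetic positioning unit."""

    def __init__(self, location, velocity=None):
        self.location = np.asarray(location, dtype=float)
        self.velocity = np.zeros(3) if velocity is None else np.asarray(velocity, dtype=float)

    def imu_update(self, accel_world, dt):
        # Integrate acceleration (gravity assumed already removed) for interim estimates.
        self.velocity += accel_world * dt
        self.location += self.velocity * dt
        return self.location

    def absolute_fix(self, location):
        # An absolute reading replaces the accumulated dead-reckoning estimate,
        # discarding any drift built up since the previous fix.
        self.location = np.asarray(location, dtype=float)
        return self.location
```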
[0034] In addition to the aforementioned sensors in the LMO system, peripherals or other wearable devices may comprise myography sensors, such as, for example, electromyography (EMG), mechanomyography (MMG) or phonomyography sensors, to measure the user's gestures and body movements. The myography sensors may provide motion information, based on their measurements, to the processor so that the user's gestures may be mapped to the virtual map. The processor may access a library of "gestures" against which to compare the output of the myography sensors in order to apply the gestures to a given application. For example, a user equipped with a myography sensor configured to measure finger movements may depress a virtual trigger of a virtual gun by contracting her index finger. The processor may map the motion to the virtual map for rendering, while determining whether the motion corresponds to a trigger motion in the gesture library. If the processor determines that the motion does correspond to the trigger motion, the processor will record the event to actuate an outcome according to predefined rules, such as, for example, game parameters. Alternatively, the user may actuate the trigger event by, for example, depressing a trigger or other button on a peripheral device with her index finger. In that case, the processor may obtain from the gesture library an approximate finger motion expected to actuate the trigger, and map that approximate motion to the virtual map.
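The disclosure does not specify how the output of the myography sensors is compared against the gesture library; one simple possibility is a nearest-template comparison over fixed-length feature vectors, as in the hypothetical sketch below (the function name, the feature encoding and the distance threshold are all assumptions).

```python
import numpy as np

def match_gesture(sample, gesture_library, threshold=1.0):
    """Compare a fixed-length myography feature vector against a library of
    labelled templates and return the best-matching gesture, or None when no
    template is close enough."""
    best_label, best_dist = None, float("inf")
    for label, template in gesture_library.items():
        dist = np.linalg.norm(sample - template)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= threshold else None

# Hypothetical two-gesture library and a sample reading:
library = {"trigger_pull": np.array([0.9, 0.1, 0.0]), "open_hand": np.array([0.1, 0.8, 0.7])}
print(match_gesture(np.array([0.85, 0.15, 0.05]), library))  # -> "trigger_pull"
```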
[0035] In some scenarios, an LMO system may be added to an existing peripheral so that the processor may map the motion of the peripheral to the virtual map.
[0036] In addition to using LMO information to map an LMO system, the processor may use the LMO information to derive LMO information for virtual objects and actions represented in the rendered image streams. For example, the processor may correlate a given LMO sensor to a virtual device, such as a virtual gun, so that any virtual bullet fired therefrom is represented in the rendered image stream according to the location, motion and orientation of the virtual gun at a time of firing. The trajectory of the virtual bullet may be mapped to the virtual map so that CGI representing the virtual bullet may be rendered according to the mapped trajectory.
[0037] Further, the LMO systems of dynamic objects coupled to a particular HMD may provide LMO information as values or vectors relative to the location and orientation of the HMD at a point in time. For example, the HMD and a peripheral may each comprise a 3D magnetic positioning sensor configured to provide location and orientation information for the dynamic object relative to the HMD. The processor may map the location and orientation of the peripheral with reference to the HMD's location and orientation.
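As a hypothetical illustration of the virtual-bullet example in paragraph [0036], the trajectory can be sampled from the location and orientation of the virtual gun at the time of firing and each sample mapped to the virtual map for rendering. The sketch below assumes the muzzle points along the gun frame's negative z-axis and that the vertical axis is y; the function name and parameters are illustrative only.

```python
import numpy as np

def bullet_trajectory(muzzle_location, muzzle_orientation, speed, steps=5, dt=0.02):
    """Sample a simple ballistic trajectory for a virtual bullet fired from the
    location and 3x3 orientation matrix of the virtual gun at the time of firing;
    each sampled point can then be mapped to the virtual map for rendering."""
    forward = muzzle_orientation @ np.array([0.0, 0.0, -1.0])  # assumed muzzle axis
    velocity = speed * forward
    gravity = np.array([0.0, -9.81, 0.0])                      # assumed y-up convention
    points, p = [], np.asarray(muzzle_location, dtype=float)
    for _ in range(steps):
        p = p + velocity * dt
        velocity = velocity + gravity * dt
        points.append(p.copy())
    return np.array(points)
```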
[0038] By mapping the location and orientation of real and virtual dynamic objects within the physical environment to the virtual map, the processor may generate a rendered image stream comprising CGI for all mapped dynamic objects in the physical environment according to the method illustrated in Fig. 4. The processor generates or acquires a map of the physical environment, at block 401. The processor may generate the map based on information provided by the scanning system. The processor may obtain, from the scanning systems of one or more HMDs, the coordinates of all scanned physical features of the physical environment and generate a virtual map as, for example, a 3D point cloud in which each point is defined according to its real-world (i.e., physical environment) dimensions. Alternatively, the processor may acquire the map from a map library in which an already generated map of static features within the physical environment is stored. At block 403, the processor obtains the location and orientation information from the various LMO systems in the physical environment and maps their locations and orientations to the virtual map. At block 405, once the initial map has been generated, the processor may repeatedly and continuously refresh the map by updating the locations and orientations of the dynamic objects according to the substantially real-time LMO information provided by their respective LMO systems.
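A minimal sketch of the structure behind blocks 401-405, assuming a point-cloud map and an LMO-system interface exposing a read() method (both assumptions for illustration, not part of the disclosure), might look as follows.

```python
class VirtualMap:
    """Minimal virtual map holding a static point cloud of the physical
    environment plus the latest pose of each tracked dynamic object."""

    def __init__(self, static_points):
        self.static_points = static_points   # e.g. an (N, 3) array from the scan (block 401)
        self.dynamic_objects = {}            # object id -> latest (location, orientation)

    def update_dynamic(self, object_id, location, orientation):
        self.dynamic_objects[object_id] = (location, orientation)

def refresh_pass(virtual_map, lmo_systems):
    """One refresh pass (blocks 403/405): poll every LMO system and update the map."""
    for object_id, lmo in lmo_systems.items():
        location, orientation = lmo.read()   # assumed interface on each LMO system
        virtual_map.update_dynamic(object_id, location, orientation)
```

In use, refresh_pass would be called repeatedly so that the virtual map continuously reflects the substantially real-time LMO information of every dynamic object.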
[0039] The processor may generate a rendered image stream comprising dynamic object-related CGI. In certain applications, the display of the HMD with which a user is equipped may present a physical image stream as well as the rendered image stream. The physical image stream may be captured by an imaging system, such as one or more cameras, and relayed to the display. The processor may therefore generate a rendered image stream whose CGI will be displayed by the display over corresponding features of the physical environment within the physical image stream. For example, in a game in which a first user and a second user occupy a physical environment, the processor may generate a rendered image stream for the first user in which the second user is represented as a zombie when the second user appears within the field of view of the first user's physical imaging system. By calibrating the rendered image stream to align with the physical image stream, and overlaying the rendered image stream on the physical image stream during simultaneous display of both image streams, the first user may perceive the second user as a zombie.
[0040] As shown in Fig. 5, a user 501 equipped with an HMD 12 occupies a physical environment 531 having a coordinate system originating, for example, at X, Y, Z = 0, 0, 0, with axes of rotation ψ, β, γ. The HMD 12 comprises a camera 123 to capture a physical image stream of the physical environment 531, and an LMO system 21. The field of view of the camera 123 is shown by the dashed lines emanating outwardly therefrom. The processor may ascertain the field of view for the camera 123 from memory or directly from the camera. The LMO system 21 of the HMD provides substantially real-time location and orientation information for the HMD 12. The processor may ascertain from memory the relative location and orientation (shown as coordinates Xc, Yc, Zc, ψc, βc, γc) of the camera 123 with respect to the HMD 12, so that the processor may determine the field of view of the camera 123 at a given time based on the LMO information of the HMD 12 at that time. The processor may thereby map the field of view to the virtual map as a notional field of view. The notional field of view may have the same parameters as the field of view of the camera 123; however, the processor may instead map a notional field of view with different parameters. For example, if the rendered image stream is displayed alone (i.e., without the physical image stream), as in a pure VR application, then the processor may define any parameters for the notional field of view.
[0041] A method for calibrating the rendered image stream is illustrated in Fig. 6. Assuming the processor has initiated a virtual map of the physical environment, as described above with respect to Fig. 4, at block 601 the processor determines the location and orientation of the HMD based on the LMO information provided by the LMO system of the HMD. At block 603, the processor obtains, from a memory or from a camera system on the HMD, data comprising a correlation between the LMO information for the HMD and the field of view of the HMD. The correlation may be between the LMO information for the HMD and the actual parameters of the field of view of the camera system, or the correlation may be between the LMO information for the HMD and a predetermined field of view having any defined parameters stored in the memory. At block 605, the processor maps the notional field of view to the virtual map based on the correlation and the LMO information. The processor repeatedly and continuously updates the notional field of view in the virtual map based on substantially real-time LMO information provided by the LMO system of the HMD. In parallel with the steps performed at blocks 601-605, the processor maps and repeatedly and continuously updates in the virtual map the locations and orientations of the dynamic objects within the physical environment, as described with respect to blocks 403 and 405 of Fig. 4. At block 607, the processor either renders CGI for the entire physical environment, or renders CGI only for those features which are within the notional fields of view of any users occupying the physical environment. It will be appreciated that processing demands may be reduced by rendering CGI only for those features within the notional field of view of the user. In either case, a rendered image stream may comprise only that CGI which is within the notional field of view. At block 609, the processor applies a transformation, which may be provided by the imaging system, the display system or the memory, to the CGI to transform the coordinates of the CGI from the coordinate system of the virtual map to the coordinate system of the display. The transformed CGI is then provided to the display as a rendered image stream. A physical image stream generated by the imaging system may be provided to the display substantially simultaneously with the rendered image stream, depending on whether AR or VR is to be displayed. Although the method of Fig. 6 has been described with respect to a single HMD and a single set of image streams, the processor may generate multiple rendered image streams, one for each LMO system-equipped HMD occupying the physical environment. Each rendered image stream would account for the substantially real-time LMO information for the HMD to which it is provided, thereby enabling multiple users to experience individualised AR or VR according to their respective locations and orientations within the physical environment.
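The coordinate transformation applied at block 609 can be illustrated, under the assumption of conventional view and projection matrices, by the following sketch; the function name world_to_display and the use of normalized device coordinates as the display coordinate system are assumptions for illustration, not the disclosure's prescribed transformation.

```python
import numpy as np

def world_to_display(points_world, view_matrix, projection_matrix):
    """Transform CGI vertices from the virtual map's coordinate system into
    display (normalized device) coordinates, discarding vertices that fall
    outside the notional field of view encoded by the projection."""
    n = points_world.shape[0]
    homo = np.hstack([points_world, np.ones((n, 1))])          # homogeneous coordinates
    clip = (projection_matrix @ view_matrix @ homo.T).T        # to clip space
    w = clip[:, 3:4]
    in_view = np.all(np.abs(clip[:, :3]) <= np.abs(w), axis=1) & (w[:, 0] > 0)
    ndc = clip[in_view, :3] / w[in_view]                       # perspective divide
    return ndc
```

Here the view matrix corresponds to the substantially real-time location and orientation of the notional field of view, and the projection matrix encodes its parameters (such as dimensions and focal length).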
[0042] As a user equipped with an HMD moves throughout the physical space, the processor may adjust the rendered image stream to change the field of view so that the user experiences a field of view that substantially correlates to the field of view he would experience if he were moving throughout the environment without wearing an HMD. For example, a first user whose natural field of view includes a second user would expect the second user's relative location within the field of view to change as the user turns his head; if the first user turns his head leftwards, the second user should displace rightwards within the first user's field of view. The processor may simulate this effect by invoking the method described above with respect to Fig. 6. In AR applications (in which the display may display a physical image stream combined with a rendered image stream), the notional field of view preferably has the same real-world parameters (such as dimensions, focal length and orientation) as the physical world field of view of the user's HMD. The user's display may then simultaneously display both the physical and rendered image streams to present an AR.
[0043] In aspects, the foregoing systems and methods may enable a user equipped with a wearable LMO system to perform exercises, the motions of which may be mapped and tracked for analysis by, for example, the user's trainer.
[0044] In further aspects, the foregoing systems and methods may enable a user to use gesture controls to interact with a processor, for example to initiate AR or VR gameplay in a physical environment.
[0045] In still further aspects, the foregoing systems and methods may enable the processor to generate CGI for dynamic objects in a physical environment, even while those dynamic objects lie outside the field of view or capture region of a user's HMD.
[0046] Although the foregoing has been described with reference to certain specific embodiments, various modifications thereto will be apparent to those skilled in the art without departing from the spirit and scope of the invention as outlined in the appended claims. The entire disclosures of all references recited above are incorporated herein by reference.

Claims

1. A system for generating a rendered image stream for display to a user of a head
mounted device (HMD) for augmented reality and virtual reality applications, the system comprising:
a. a location, motion and orientation (LMO) system disposed upon each of a
plurality of tracked dynamic objects, including the HMD, within the physical environment, the LMO system being configured to provide location, motion and orientation information for each tracked dynamic object;
b. a memory having stored thereon data comprising a correlation between the location, motion and orientation information for the HMD and a field of view of the HMD; and
c. a processor configured to:
i. map the physical environment to a virtual map;
ii. obtain from each LMO system, the location, motion and orientation
information for each tracked dynamic object;
iii. repeatedly and substantially continuously map to the virtual map the substantially real-time location and orientation of each tracked dynamic object within the physical environment;
iv. for the HMD, obtain the data from the memory, determine the location and orientation of the field of view based on the correlation and the location and orientation of the HMD, and repeatedly and substantially continuously map to the virtual map the field of view;
v. render computer generated images related to at least one of the tracked dynamic objects;
vi. transform the coordinates of the computer generated images from a coordinate system of the virtual map to a coordinate system of a display of the HMD; and
vii. provide the transformed computer generated images situated within the field of view as an image stream.
2. The system of claim 1 , wherein the LMO system for each of the tracked dynamic objects, other than the HMD, communicates with the HMD by a matched transmitter-receiver signal pair between the respective LMO system and the HMD.
3. The system of claim 1 , wherein the HMD further comprises a positioning sensor
configured to track a location of the HMD in the physical environment.
4. The system of claim 3, wherein the positioning sensor comprises one or more of: a
magnetic sensor, an ultrasound sensor, a radio-frequency (RF) sensor, a LiDAR sensor, and a radar sensor.
5. The system of claim 1 , wherein each LMO system comprises a motion sensor
comprising an accelerometer.
6. The system of claim 1 , wherein each LMO system comprises a motion sensor
comprising a gyroscope.
7. The system of claim 1 , wherein each LMO system comprises a motion sensor
comprising a magnetometer.
8. The system of claim 1 , wherein the HMD comprises a scanning system in
communication with the processor, the scanning system configured to periodically scan the physical environment surrounding the HMD, and the processor configured to determine the location and orientation of the HMD relative to an initial scan or prior scan of the physical environment.
9. The system of claim 8, wherein the determining of the location and orientation is based on a deviation of identified common features between the periodic scan and the initial scan or the prior scan.
10. The system of claim 8, wherein the scanning system comprises a LiDAR-type scanner.
11. The system of claim 1 , wherein at least one of the dynamic tracked objects comprises a myography sensor to measure the user's gestures and body movements for mapping to the virtual map.
12. The system of claim 11 , wherein the myography sensors comprise one or more of
electromyography (EMG), mechanomyography (MMG) or phonomyography sensors.
13. The system of claim 1 , wherein the processor uses the location, motion and orientation information of each tracked dynamic object to derive location, motion and orientation information for virtual objects and actions represented in the rendered image streams.
14. The system of claim 1 , wherein the memory further has stored thereon a map library, and the processor obtains the virtual map from the map library, the virtual map including a set of static features corresponding to static features of the physical environment.
15. The system of claim 14, wherein the HMD comprises a scanning system in
communication with the processor, the scanning system configured to periodically scan the physical environment surrounding the HMD, and the processor configured to determine the location and orientation of the HMD relative to the obtained virtual map.
16. The system of claim 15, wherein the determining of the location and orientation is based on a deviation of identified common features between the periodic scan and the obtained virtual map.
17. The system of claim 1 , wherein the HMD displays to the user a physical image stream as well as a rendered image stream.
18. The system of claim 17, wherein the physical image stream and rendered image stream are overlaid based on common features.
19. The system of claim 17, wherein the physical image stream is captured by an imaging system.
20. The system of claim 19, wherein the imaging system comprises one or more cameras.
21. A method for generating and displaying a rendered image stream to a user of a head mounted device (HMD) for augmented reality and virtual reality applications, the method comprising:
a. obtaining from a memory, data comprising a correlation between the location, motion and orientation information for the HMD and a field of view of the HMD; b. mapping the physical environment to a virtual map;
c. obtaining from a plurality of tracked dynamic objects, including the HMD, within the physical environment, location, motion and orientation information; d. repeatedly and substantially continuously mapping to the virtual map the
substantially real-time location and orientation of each tracked dynamic object within the physical environment;
e. by one or more processors, for the HMD, determining the location and orientation of the field of view based on the correlation and the location and orientation of the HMD, and repeatedly and substantially continuously mapping to the virtual map the field of view;
f. rendering computer generated images related to at least one of the tracked
dynamic objects;
g. transforming the coordinates of the computer generated images from a
coordinate system of the virtual map to a coordinate system of a display of the HMD; and
h. providing the transformed computer generated images situated within the field of view as an image stream.
22. The method of claim 21 , wherein the location, motion and orientation of each of the tracked dynamic objects, other than the HMD, is communicated to the HMD by a matched transmitter-receiver signal pair between the respective tracked dynamic objects and the HMD.
23. The method of claim 21 , wherein the location, motion and orientation information of the HMD is absolute relative to the physical environment.
24. The method of claim 23, wherein the location, motion and orientation information of the HMD is obtained from one or more of: a magnetic sensor, an ultrasound sensor, a radio- frequency (RF) sensor, a LiDAR sensor, and a radar sensor.
25. The method of claim 21 , wherein each of the tracked dynamic objects comprises a
motion sensor comprising a gyroscope.
26. The method of claim 25, wherein each motion sensor further comprises an
accelerometer.
27. The method of claim 21 , wherein each of the tracked dynamic objects comprises a
motion sensor comprising a magnetometer.
28. The method of claim 21 , wherein the HMD comprises a scanning system in
communication with the processor, the scanning system configured to periodically scan the physical environment surrounding the HMD, and the processor configured to determine the location and orientation of the HMD relative to an initial scan or prior scan of the physical environment.
29. The method of claim 28, wherein the determining of the location and orientation is based on a deviation of identified common features between the periodic scan and the initial scan or the prior scan.
30. The method of claim 28, wherein the scanning system comprises a LiDAR-type scanner.
31. The method of claim 21 , wherein the dynamic tracked objects comprise at least one myography sensor to measure the user's gestures and body movements for mapping to the virtual map.
32. The method of claim 31 , wherein the myography sensors comprise one or more of
electromyography (EMG), mechanomyography (MMG) or phonomyography sensors.
33. The method of claim 21 , wherein the processor uses the location, motion and orientation information of each tracked dynamic object to derive location, motion and orientation information for virtual objects and actions represented in the rendered image streams.
34. The method of claim 21 , wherein the memory further has stored thereon a map library, and the processor obtains the virtual map from the map library, the virtual map including a set of static features corresponding to static features of the physical environment.
35. The method of claim 34, wherein the HMD comprises a scanning system in
communication with the processor, the scanning system configured to periodically scan the physical environment surrounding the HMD, and the processor configured to determine the location and orientation of the HMD relative to the obtained virtual map.
36. The method of claim 35, wherein the determining of the location and orientation is based on a deviation of identified common features between the periodic scan and the obtained virtual map.
37. The method of claim 21 , wherein the HMD displays to the user a physical image stream as well as a rendered image stream.
38. The method of claim 37, wherein the physical image stream and rendered image stream are overlaid based on common features.
39. The method of claim 37, wherein the physical image stream is captured by an imaging system.
40. The method of claim 39, wherein the imaging system comprises one or more cameras.
41. A system for tracking dynamic objects in a physical environment to render an image stream for a head mounted device (HMD) for augmented reality and virtual reality applications, the system comprising:
a. a location, motion and orientation (LMO) system disposed upon each tracked dynamic object, configured to provide location, motion and orientation information for each respective tracked dynamic object; and
b. a processor communicatively coupled to the location, motion and orientation systems, the processor configured to:
i. map the physical environment to a virtual map;
ii. obtain the location, motion and orientation information for each tracked dynamic object; and
iii. repeatedly and substantially continuously map to the virtual map the location and orientation of each tracked dynamic object within the physical environment.
42. The system of claim 41 , wherein the LMO system for each of the tracked dynamic
objects communicates with the processor by a matched transmitter-receiver signal pair between the respective LMO system and the processor.
43. The system of claim 41 , wherein one of the tracked dynamic objects is the HMD, and the HMD comprises a positioning sensor configured to track a location of the HMD in the physical environment.
44. The system of claim 43, wherein the positioning sensor comprises one or more of: a magnetic sensor, an ultrasound sensor, a radio-frequency (RF) sensor, a LiDAR sensor, and a radar sensor.
45. The system of claim 41 , wherein each LMO system comprises a motion sensor
comprising a gyroscope.
46. The system of claim 45, wherein each motion sensor further comprises an
accelerometer.
47. The system of claim 41 , wherein each LMO system comprises a motion sensor
comprising a magnetometer.
48. The system of claim 41 , wherein one of the tracked dynamic objects is the HMD, and the HMD comprises a scanning system in communication with the processor, the scanning system configured to periodically scan the physical environment surrounding the HMD, and the processor configured to determine the location and orientation of the HMD relative to an initial scan or prior scan of the physical environment.
49. The system of claim 48, wherein the determining of the location and orientation is based on a deviation of identified common features between the periodic scan and the initial scan or the prior scan.
50. The system of claim 48, wherein the scanning system comprises a LiDAR-type scanner.
51. The system of claim 41 , wherein the dynamic tracked objects comprise at least one myography sensor to measure the user's gestures and body movements for mapping to the virtual map.
52. The system of claim 51 , wherein the myography sensors comprise one or more of
electromyography (EMG), mechanomyography (MMG) or phonomyography sensors.
53. The system of claim 41 , wherein the processor uses the location, motion and orientation information of each tracked dynamic object to derive location, motion and orientation information for virtual objects and actions represented in the rendered image streams.
54. The system of claim 41 , wherein the memory further has stored thereon a map library, and the processor obtains the virtual map from the map library, the virtual map including a set of static features corresponding to static features of the physical environment.
55. The system of claim 54, wherein one of the tracked dynamic objects is the HMD, and the HMD comprises a scanning system in communication with the processor, the scanning system configured to periodically scan the physical environment surrounding the HMD, and the processor configured to determine the location and orientation of the HMD relative to the obtained virtual map.
56. The system of claim 55, wherein the determining of the location and orientation is based on a deviation of identified common features between the periodic scan and the obtained virtual map.
57. The system of claim 41 , wherein the HMD displays to the user a physical image stream as well as a rendered image stream.
58. The system of claim 57, wherein the physical image stream and rendered image stream are overlaid based on common features.
59. The system of claim 57, wherein the physical image stream is captured by an imaging system.
60. The system of claim 59, wherein the imaging system comprises one or more cameras.
61. A method for tracking dynamic objects in a physical environment to render an image stream for a head mounted device (HMD) for augmented reality and virtual reality applications, the method comprising:
a. mapping the physical environment to a virtual map;
b. obtaining from a plurality of tracked dynamic objects, within the physical
environment, location, motion and orientation information; and
c. by one or more processors, repeatedly and substantially continuously mapping to the virtual map the location and orientation of each tracked dynamic object within the physical environment.
62. The method of claim 61, wherein the location, motion and orientation of each of the tracked dynamic objects is communicated to the processor by a matched transmitter-receiver signal pair between the respective tracked dynamic objects and the processor.
63. The method of claim 61 , wherein one of the tracked dynamic objects is the HMD, and the HMD is configured to track a location of the HMD in the physical environment.
64. The method of claim 63, wherein the tracking is carried out by one or more of: a
magnetic sensor, an ultrasound sensor, a radio-frequency (RF) sensor, a LiDAR sensor, and a radar sensor.
65. The method of claim 61 , wherein each of the tracked dynamic objects comprises a
motion sensor comprising an accelerometer.
66. The method of claim 61 , wherein each of the tracked dynamic objects comprises a
motion sensor comprising a gyroscope.
67. The method of claim 61 , wherein each of the tracked dynamic objects comprises a
motion sensor comprising a magnetometer.
68. The method of claim 61 , wherein one of the tracked dynamic objects is the HMD, and the HMD comprises a scanning system in communication with the processor, the scanning system configured to periodically scan the physical environment surrounding the HMD, and the processor configured to determine the location and orientation of the HMD relative to an initial scan or prior scan of the physical environment.
69. The method of claim 68, wherein the determining of the location and orientation is based on a deviation of identified common features between the periodic scan and the initial scan or the prior scan.
70. The method of claim 68, wherein the scanning system comprises a LiDAR-type scanner.
71. The method of claim 61 , wherein the dynamic tracked objects comprise at least one myography sensor to measure the user's gestures and body movements for mapping to the virtual map.
72. The method of claim 71 , wherein the myography sensors comprise one or more of
electromyography (EMG), mechanomyography (MMG) or phonomyography sensors.
73. The method of claim 61 , wherein the processor uses the location, motion and orientation information of each tracked dynamic object to derive location, motion and orientation information for virtual objects and actions represented in the rendered image streams.
74. The method of claim 61 , wherein the memory further has stored thereon a map library, and the processor obtains the virtual map from the map library, the virtual map including a set of static features corresponding to static features of the physical environment.
75. The method of claim 74, wherein one of the tracked dynamic objects is the HMD, and the HMD comprises a scanning system in communication with the processor, the scanning system configured to periodically scan the physical environment surrounding the HMD, and the processor configured to determine the location and orientation of the HMD relative to the obtained virtual map.
76. The method of claim 75, wherein the determining of the location and orientation is based on a deviation of identified common features between the periodic scan and the obtained virtual map.
77. The method of claim 61 , wherein the HMD displays to the user a physical image stream as well as a rendered image stream.
78. The method of claim 77, wherein the physical image stream and rendered image stream are overlaid based on common features.
79. The method of claim 77, wherein the physical image stream is captured by an imaging system.
80. The method of claim 79, wherein the imaging system comprises one or more cameras.
81. A head mounted device (HMD) for augmented reality and virtual reality application, the HMD comprising:
a. a location, motion and orientation system disposed upon the HMD for providing location, motion and orientation information, the location, motion and orientation system comprising:
i. a positioning unit to provide a determined location and orientation of the HMD with reference to a physical environment surrounding the HMD; and
ii. an inertial measurement unit to provide motion information, comprising a gyroscope and an accelerometer; and
b. a processor communicatively coupled to the location, motion and orientation system, configured to calculate the substantially real-time location and orientation of the HMD based on adding the motion information from the inertial measurement unit to a most recent determined location and orientation from the information provided by the positioning unit.
82. The HMD of claim 81 , wherein the positioning unit is a magnetic positioning unit.
83. The HMD of claim 81 , wherein:
a. the positioning unit is a scanning system configured to scan the physical
environment; and
b. the processor is further configured to calculate the absolute location and
orientation of the HMD based on deviations between subsequent scans of the physical environment.
84. The HMD of claim 83, wherein the processor is configured to correct the calculation of the substantially real-time location and orientation by comparison with the calculated absolute location and orientation.
85. The HMD of claim 82, wherein the correction of location and orientation comprises
comparing the location and orientation information obtained by the inertial measurement unit and the magnetic positioning unit.
86. The HMD of claim 81 , wherein the processor is further configured to correct a rotational bias in the gyroscope by comparing motion and orientation information from the gyroscope and the accelerometer.
87. A method to determine a substantially real-time location and orientation of a head
mounted device (HMD) for augmented reality and virtual reality applications, the method comprising:
i. obtaining an absolute location and orientation of the HMD with reference to a physical environment surrounding the HMD;
ii. subsequently obtaining motion information from a gyroscope disposed upon the HMD and further motion information from an accelerometer disposed upon the HMD to correct a rotational bias of the gyroscope; and
b. calculating, by one or more processors, the substantially real-time location and orientation of the HMD based on correcting the rotational bias of the gyroscope and adding the corrected motion information from the gyroscope to a most recent absolute location and orientation from the information provided by the positioning unit.
88. The method of claim 87, wherein the absolute location and orientation are obtained from a magnetic positioning unit.
89. The method of claim 87, wherein the absolute location and orientation are substantially continuously obtained from a scanning system configured to scan the physical environment, and the one or more processors further calculating the absolute location and orientation of the HMD based on deviations between subsequent scans of the physical environment.
90. The method of claim 89, the one or more processors further correcting the calculation of the substantially real-time location and orientation by comparison with the calculated absolute location and orientation.
91. The method of claim 88, further comprising substantially continuously obtaining absolute location and orientation information and correcting the calculation of the substantially real-time location and orientation by comparison with the absolute location and orientation information.
PCT/CA2015/050918 2014-09-19 2015-09-18 System and method for tracking wearable peripherals in augmented reality and virtual reality applications WO2016041088A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201462052863P 2014-09-19 2014-09-19
US62/052,863 2014-09-19
US201462097331P 2014-12-29 2014-12-29
US62/097,331 2014-12-29
US201562099418P 2015-01-02 2015-01-02
US62/099,418 2015-01-02

Publications (1)

Publication Number Publication Date
WO2016041088A1 true WO2016041088A1 (en) 2016-03-24

Family

ID=55532396

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2015/050918 WO2016041088A1 (en) 2014-09-19 2015-09-18 System and method for tracking wearable peripherals in augmented reality and virtual reality applications

Country Status (1)

Country Link
WO (1) WO2016041088A1 (en)

Cited By (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105785373A (en) * 2016-04-26 2016-07-20 上海与德通讯技术有限公司 Virtual reality position identification system and method
WO2017172661A1 (en) * 2016-03-31 2017-10-05 Microsoft Technology Licensing, Llc Electromagnetic tracking of objects for mixed reality
US20170351094A1 (en) * 2016-06-06 2017-12-07 Adam G. Poulos Optically augmenting electromagnetic tracking in mixed reality
WO2018022657A1 (en) * 2016-07-25 2018-02-01 Ctrl-Labs Corporation System and method for measuring the movements of articulated rigid bodies
WO2018075270A1 (en) * 2016-10-17 2018-04-26 Microsoft Technology Licensing, Llc Generating and displaying a computer generated image on a future pose of a real world object
WO2018118728A3 (en) * 2016-12-22 2018-07-26 Microsoft Technology Licensing, Llc Magnetic interference detection and correction
US10151606B1 (en) 2016-02-24 2018-12-11 Ommo Technologies, Inc. Tracking position and movement using a magnetic field
US10276289B1 (en) 2018-06-01 2019-04-30 Ommo Technologies, Inc. Rotating a permanent magnet in a position detection system
CN110168475A (en) * 2016-11-14 2019-08-23 罗技欧洲公司 User's interface device is imported into virtual reality/augmented reality system
US10409371B2 (en) 2016-07-25 2019-09-10 Ctrl-Labs Corporation Methods and apparatus for inferring user intent based on neuromuscular signals
US10460455B2 (en) 2018-01-25 2019-10-29 Ctrl-Labs Corporation Real-time processing of handstate representation model estimates
US10489986B2 (en) 2018-01-25 2019-11-26 Ctrl-Labs Corporation User-controlled tuning of handstate representation model parameters
CN110521203A (en) * 2017-04-25 2019-11-29 Ati科技无限责任公司 Display pacing in the configuration of bull head mounted displays virtual reality
US10496168B2 (en) 2018-01-25 2019-12-03 Ctrl-Labs Corporation Calibration techniques for handstate representation modeling using neuromuscular signals
US10504286B2 (en) 2018-01-25 2019-12-10 Ctrl-Labs Corporation Techniques for anonymizing neuromuscular signal data
CN110825219A (en) * 2018-08-14 2020-02-21 三星电子株式会社 Electronic device, control method of electronic device, and electronic system
US10592001B2 (en) 2018-05-08 2020-03-17 Facebook Technologies, Llc Systems and methods for improved speech recognition using neuromuscular information
US10684692B2 (en) 2014-06-19 2020-06-16 Facebook Technologies, Llc Systems, devices, and methods for gesture identification
US10687759B2 (en) 2018-05-29 2020-06-23 Facebook Technologies, Llc Shielding techniques for noise reduction in surface electromyography signal measurement and related systems and methods
GB2580915A (en) * 2019-01-29 2020-08-05 Sony Interactive Entertainment Inc Peripheral tracking system and method
US10772519B2 (en) 2018-05-25 2020-09-15 Facebook Technologies, Llc Methods and apparatus for providing sub-muscular control
US10817795B2 (en) 2018-01-25 2020-10-27 Facebook Technologies, Llc Handstate reconstruction based on multiple inputs
US10825241B2 (en) 2018-03-16 2020-11-03 Microsoft Technology Licensing, Llc Using a one-dimensional ray sensor to map an environment
US10842407B2 (en) 2018-08-31 2020-11-24 Facebook Technologies, Llc Camera-guided interpretation of neuromuscular signals
US10905383B2 (en) 2019-02-28 2021-02-02 Facebook Technologies, Llc Methods and apparatus for unsupervised one-shot machine learning for classification of human gestures and estimation of applied forces
US10921764B2 (en) 2018-09-26 2021-02-16 Facebook Technologies, Llc Neuromuscular control of physical objects in an environment
US10928888B2 (en) 2016-11-14 2021-02-23 Logitech Europe S.A. Systems and methods for configuring a hub-centric virtual/augmented reality environment
US10937414B2 (en) 2018-05-08 2021-03-02 Facebook Technologies, Llc Systems and methods for text input using neuromuscular information
CN112601975A (en) * 2018-05-31 2021-04-02 奇跃公司 Radar head pose positioning
US10970936B2 (en) 2018-10-05 2021-04-06 Facebook Technologies, Llc Use of neuromuscular signals to provide enhanced interactions with physical objects in an augmented reality environment
US10970374B2 (en) 2018-06-14 2021-04-06 Facebook Technologies, Llc User identification and authentication with neuromuscular signatures
US10990174B2 (en) 2016-07-25 2021-04-27 Facebook Technologies, Llc Methods and apparatus for predicting musculo-skeletal position information using wearable autonomous sensors
US11000211B2 (en) 2016-07-25 2021-05-11 Facebook Technologies, Llc Adaptive system for deriving control signals from measurements of neuromuscular activity
US11045137B2 (en) 2018-07-19 2021-06-29 Facebook Technologies, Llc Methods and apparatus for improved signal robustness for a wearable neuromuscular recording device
US11069148B2 (en) 2018-01-25 2021-07-20 Facebook Technologies, Llc Visualization of reconstructed handstate information
US11079846B2 (en) 2013-11-12 2021-08-03 Facebook Technologies, Llc Systems, articles, and methods for capacitive electromyography sensors
US11112856B2 (en) 2016-03-13 2021-09-07 Logitech Europe S.A. Transition between virtual and augmented reality
US11179066B2 (en) 2018-08-13 2021-11-23 Facebook Technologies, Llc Real-time spike detection and identification
US11216069B2 (en) 2018-05-08 2022-01-04 Facebook Technologies, Llc Systems and methods for improved speech recognition using neuromuscular information
CN114174959A (en) * 2019-09-11 2022-03-11 脸谱科技有限责任公司 Artificial reality triggered by physical objects
US11331045B1 (en) 2018-01-25 2022-05-17 Facebook Technologies, Llc Systems and methods for mitigating neuromuscular signal artifacts
US11481030B2 (en) 2019-03-29 2022-10-25 Meta Platforms Technologies, Llc Methods and apparatus for gesture detection and classification
US11481031B1 (en) 2019-04-30 2022-10-25 Meta Platforms Technologies, Llc Devices, systems, and methods for controlling computing devices via neuromuscular signals of users
US11493993B2 (en) 2019-09-04 2022-11-08 Meta Platforms Technologies, Llc Systems, methods, and interfaces for performing inputs based on neuromuscular control
US11567573B2 (en) 2018-09-20 2023-01-31 Meta Platforms Technologies, Llc Neuromuscular text entry, writing and drawing in augmented reality systems
US11635736B2 (en) 2017-10-19 2023-04-25 Meta Platforms Technologies, Llc Systems and methods for identifying biological structures associated with neuromuscular source signals
US11666264B1 (en) 2013-11-27 2023-06-06 Meta Platforms Technologies, Llc Systems, articles, and methods for electromyography sensors
US11797087B2 (en) 2018-11-27 2023-10-24 Meta Platforms Technologies, Llc Methods and apparatus for autocalibration of a wearable electrode sensor system
US11868531B1 (en) 2021-04-08 2024-01-09 Meta Platforms Technologies, Llc Wearable device providing for thumb-to-finger-based input gestures detected based on neuromuscular signals, and systems and methods of use thereof
US11907423B2 (en) 2019-11-25 2024-02-20 Meta Platforms Technologies, Llc Systems and methods for contextualized interactions with an environment
US11921471B2 (en) 2013-08-16 2024-03-05 Meta Platforms Technologies, Llc Systems, articles, and methods for wearable devices having secondary power sources in links of a band for providing secondary power in addition to a primary power source
US11961494B1 (en) 2020-03-27 2024-04-16 Meta Platforms Technologies, Llc Electromagnetic interference reduction in extended reality environments


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6054951A (en) * 1995-08-28 2000-04-25 Sypniewski; Jozef Multi-dimensional tracking sensor
US5844482A (en) * 1997-05-20 1998-12-01 Guthrie; Warren E. Tagging system using motion detector
US20050116823A1 (en) * 2003-12-03 2005-06-02 Torsten Paulsen System for tracking object locations using self-tracking tags
US20120092328A1 (en) * 2010-10-15 2012-04-19 Jason Flaks Fusing virtual content into real content
US20120195460A1 (en) * 2011-01-31 2012-08-02 Qualcomm Incorporated Context aware augmentation interactions
US20120320169A1 (en) * 2011-06-17 2012-12-20 Microsoft Corporation Volumetric video presentation

Cited By (74)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11921471B2 (en) 2013-08-16 2024-03-05 Meta Platforms Technologies, Llc Systems, articles, and methods for wearable devices having secondary power sources in links of a band for providing secondary power in addition to a primary power source
US11079846B2 (en) 2013-11-12 2021-08-03 Facebook Technologies, Llc Systems, articles, and methods for capacitive electromyography sensors
US11666264B1 (en) 2013-11-27 2023-06-06 Meta Platforms Technologies, Llc Systems, articles, and methods for electromyography sensors
US10684692B2 (en) 2014-06-19 2020-06-16 Facebook Technologies, Llc Systems, devices, and methods for gesture identification
US10704929B1 (en) 2016-02-24 2020-07-07 Ommo Technologies, Inc. Tracking position and movement using a magnetic field
US10151606B1 (en) 2016-02-24 2018-12-11 Ommo Technologies, Inc. Tracking position and movement using a magnetic field
US11112856B2 (en) 2016-03-13 2021-09-07 Logitech Europe S.A. Transition between virtual and augmented reality
WO2017172661A1 (en) * 2016-03-31 2017-10-05 Microsoft Technology Licensing, Llc Electromagnetic tracking of objects for mixed reality
CN105785373A (en) * 2016-04-26 2016-07-20 上海与德通讯技术有限公司 Virtual reality position identification system and method
US20170351094A1 (en) * 2016-06-06 2017-12-07 Adam G. Poulos Optically augmenting electromagnetic tracking in mixed reality
US10254546B2 (en) 2016-06-06 2019-04-09 Microsoft Technology Licensing, Llc Optically augmenting electromagnetic tracking in mixed reality
US11337652B2 (en) 2016-07-25 2022-05-24 Facebook Technologies, Llc System and method for measuring the movements of articulated rigid bodies
US10656711B2 (en) 2016-07-25 2020-05-19 Facebook Technologies, Llc Methods and apparatus for inferring user intent based on neuromuscular signals
US11000211B2 (en) 2016-07-25 2021-05-11 Facebook Technologies, Llc Adaptive system for deriving control signals from measurements of neuromuscular activity
US10990174B2 (en) 2016-07-25 2021-04-27 Facebook Technologies, Llc Methods and apparatus for predicting musculo-skeletal position information using wearable autonomous sensors
WO2018022657A1 (en) * 2016-07-25 2018-02-01 Ctrl-Labs Corporation System and method for measuring the movements of articulated rigid bodies
US10409371B2 (en) 2016-07-25 2019-09-10 Ctrl-Labs Corporation Methods and apparatus for inferring user intent based on neuromuscular signals
US10134192B2 (en) 2016-10-17 2018-11-20 Microsoft Technology Licensing, Llc Generating and displaying a computer generated image on a future pose of a real world object
WO2018075270A1 (en) * 2016-10-17 2018-04-26 Microsoft Technology Licensing, Llc Generating and displaying a computer generated image on a future pose of a real world object
CN110168475A (en) * 2016-11-14 2019-08-23 罗技欧洲公司 User's interface device is imported into virtual reality/augmented reality system
US10928888B2 (en) 2016-11-14 2021-02-23 Logitech Europe S.A. Systems and methods for configuring a hub-centric virtual/augmented reality environment
EP3539087A4 (en) * 2016-11-14 2020-09-30 Logitech Europe S.A. A system for importing user interface devices into virtual/augmented reality
CN110088711A (en) * 2016-12-22 2019-08-02 微软技术许可有限责任公司 Magnetic disturbance detection and correction
WO2018118728A3 (en) * 2016-12-22 2018-07-26 Microsoft Technology Licensing, Llc Magnetic interference detection and correction
CN110088711B (en) * 2016-12-22 2022-04-15 微软技术许可有限责任公司 Magnetic interference detection and correction
US10746815B2 (en) 2016-12-22 2020-08-18 Microsoft Technology Licensing, Llc Magnetic interference detection and correction
CN110521203B (en) * 2017-04-25 2022-03-11 Ati科技无限责任公司 Display pacing in a multi-head mounted display virtual reality configuration
CN110521203A (en) * 2017-04-25 2019-11-29 Ati科技无限责任公司 Display pacing in the configuration of bull head mounted displays virtual reality
US11635736B2 (en) 2017-10-19 2023-04-25 Meta Platforms Technologies, Llc Systems and methods for identifying biological structures associated with neuromuscular source signals
US11069148B2 (en) 2018-01-25 2021-07-20 Facebook Technologies, Llc Visualization of reconstructed handstate information
US11361522B2 (en) 2018-01-25 2022-06-14 Facebook Technologies, Llc User-controlled tuning of handstate representation model parameters
US11331045B1 (en) 2018-01-25 2022-05-17 Facebook Technologies, Llc Systems and methods for mitigating neuromuscular signal artifacts
US11587242B1 (en) 2018-01-25 2023-02-21 Meta Platforms Technologies, Llc Real-time processing of handstate representation model estimates
US10504286B2 (en) 2018-01-25 2019-12-10 Ctrl-Labs Corporation Techniques for anonymizing neuromuscular signal data
US11163361B2 (en) 2018-01-25 2021-11-02 Facebook Technologies, Llc Calibration techniques for handstate representation modeling using neuromuscular signals
US10950047B2 (en) 2018-01-25 2021-03-16 Facebook Technologies, Llc Techniques for anonymizing neuromuscular signal data
US11127143B2 (en) 2018-01-25 2021-09-21 Facebook Technologies, Llc Real-time processing of handstate representation model estimates
US10496168B2 (en) 2018-01-25 2019-12-03 Ctrl-Labs Corporation Calibration techniques for handstate representation modeling using neuromuscular signals
US10817795B2 (en) 2018-01-25 2020-10-27 Facebook Technologies, Llc Handstate reconstruction based on multiple inputs
US10489986B2 (en) 2018-01-25 2019-11-26 Ctrl-Labs Corporation User-controlled tuning of handstate representation model parameters
US10460455B2 (en) 2018-01-25 2019-10-29 Ctrl-Labs Corporation Real-time processing of handstate representation model estimates
US10825241B2 (en) 2018-03-16 2020-11-03 Microsoft Technology Licensing, Llc Using a one-dimensional ray sensor to map an environment
US10592001B2 (en) 2018-05-08 2020-03-17 Facebook Technologies, Llc Systems and methods for improved speech recognition using neuromuscular information
US10937414B2 (en) 2018-05-08 2021-03-02 Facebook Technologies, Llc Systems and methods for text input using neuromuscular information
US11036302B1 (en) 2018-05-08 2021-06-15 Facebook Technologies, Llc Wearable devices and methods for improved speech recognition
US11216069B2 (en) 2018-05-08 2022-01-04 Facebook Technologies, Llc Systems and methods for improved speech recognition using neuromuscular information
US10772519B2 (en) 2018-05-25 2020-09-15 Facebook Technologies, Llc Methods and apparatus for providing sub-muscular control
US11129569B1 (en) 2018-05-29 2021-09-28 Facebook Technologies, Llc Shielding techniques for noise reduction in surface electromyography signal measurement and related systems and methods
US10687759B2 (en) 2018-05-29 2020-06-23 Facebook Technologies, Llc Shielding techniques for noise reduction in surface electromyography signal measurement and related systems and methods
CN112601975A (en) * 2018-05-31 2021-04-02 Magic Leap, Inc. Radar head pose positioning
US10276289B1 (en) 2018-06-01 2019-04-30 Ommo Technologies, Inc. Rotating a permanent magnet in a position detection system
US10970374B2 (en) 2018-06-14 2021-04-06 Facebook Technologies, Llc User identification and authentication with neuromuscular signatures
US11045137B2 (en) 2018-07-19 2021-06-29 Facebook Technologies, Llc Methods and apparatus for improved signal robustness for a wearable neuromuscular recording device
US11179066B2 (en) 2018-08-13 2021-11-23 Facebook Technologies, Llc Real-time spike detection and identification
CN110825219B (en) * 2018-08-14 2022-04-22 Samsung Electronics Co., Ltd. Electronic device, control method of electronic device, and electronic system
CN110825219A (en) * 2018-08-14 2020-02-21 Samsung Electronics Co., Ltd. Electronic device, control method of electronic device, and electronic system
US10905350B2 (en) 2018-08-31 2021-02-02 Facebook Technologies, Llc Camera-guided interpretation of neuromuscular signals
US10842407B2 (en) 2018-08-31 2020-11-24 Facebook Technologies, Llc Camera-guided interpretation of neuromuscular signals
US11567573B2 (en) 2018-09-20 2023-01-31 Meta Platforms Technologies, Llc Neuromuscular text entry, writing and drawing in augmented reality systems
US10921764B2 (en) 2018-09-26 2021-02-16 Facebook Technologies, Llc Neuromuscular control of physical objects in an environment
US10970936B2 (en) 2018-10-05 2021-04-06 Facebook Technologies, Llc Use of neuromuscular signals to provide enhanced interactions with physical objects in an augmented reality environment
US11797087B2 (en) 2018-11-27 2023-10-24 Meta Platforms Technologies, Llc Methods and apparatus for autocalibration of a wearable electrode sensor system
US11941176B1 (en) 2018-11-27 2024-03-26 Meta Platforms Technologies, Llc Methods and apparatus for autocalibration of a wearable electrode sensor system
GB2580915B (en) * 2019-01-29 2021-06-09 Sony Interactive Entertainment Inc Peripheral tracking system and method
GB2580915A (en) * 2019-01-29 2020-08-05 Sony Interactive Entertainment Inc Peripheral tracking system and method
US11602684B2 (en) 2019-01-29 2023-03-14 Sony Interactive Entertainment Inc. Peripheral tracking system and method
US10905383B2 (en) 2019-02-28 2021-02-02 Facebook Technologies, Llc Methods and apparatus for unsupervised one-shot machine learning for classification of human gestures and estimation of applied forces
US11481030B2 (en) 2019-03-29 2022-10-25 Meta Platforms Technologies, Llc Methods and apparatus for gesture detection and classification
US11481031B1 (en) 2019-04-30 2022-10-25 Meta Platforms Technologies, Llc Devices, systems, and methods for controlling computing devices via neuromuscular signals of users
US11493993B2 (en) 2019-09-04 2022-11-08 Meta Platforms Technologies, Llc Systems, methods, and interfaces for performing inputs based on neuromuscular control
CN114174959A (en) * 2019-09-11 2022-03-11 Facebook Technologies, Llc Artificial reality triggered by physical objects
US11907423B2 (en) 2019-11-25 2024-02-20 Meta Platforms Technologies, Llc Systems and methods for contextualized interactions with an environment
US11961494B1 (en) 2020-03-27 2024-04-16 Meta Platforms Technologies, Llc Electromagnetic interference reduction in extended reality environments
US11868531B1 (en) 2021-04-08 2024-01-09 Meta Platforms Technologies, Llc Wearable device providing for thumb-to-finger-based input gestures detected based on neuromuscular signals, and systems and methods of use thereof

Similar Documents

Publication Publication Date Title
WO2016041088A1 (en) System and method for tracking wearable peripherals in augmented reality and virtual reality applications
JP6690014B2 (en) Head mounted display tracking
US11928838B2 (en) Calibration system and method to align a 3D virtual scene and a 3D real world for a stereoscopic head-mounted display
US10852847B2 (en) Controller tracking for multiple degrees of freedom
EP3343320B1 (en) Information processing apparatus, information processing system, and information processing method
CN103180893B (en) Method and system for providing a three-dimensional user interface
EP3000011B1 (en) Body-locked placement of augmented reality objects
CN106774880B (en) Three-dimensional tracking of user control devices in space
US20170336220A1 (en) Multi-Sensor Position and Orientation Determination System and Device
JP2022529245A (en) Sensor fusion for electromagnetic tracking
US20140192164A1 (en) System and method for determining depth information in augmented reality scene
KR100964951B1 (en) Augmented reality apparatus for simulation training and virtual image composition method
WO2016095057A1 (en) Peripheral tracking for an augmented reality head mounted device
EP3149698A1 (en) Method and system for image georegistration
JP6859447B2 (en) Information processing system and object information acquisition method
US20210157394A1 (en) Motion tracking system and method
EP3627289A1 (en) Tracking system and tracking method using the same
CN111489376B (en) Method, device, terminal equipment and storage medium for tracking interaction equipment
EP3392833B1 (en) Tracking system for tracking an object based on silhouette
WO2021177132A1 (en) Information processing device, information processing system, information processing method, and program
US11845001B2 (en) Calibration system and method for handheld controller
KR102299174B1 (en) Mixed reality system for spatial sharing using time delay compensation
KR102391539B1 (en) Mixed reality system for spatial sharing using time delay compensation
JP2017215263A (en) Position detecting method and position detecting system
CN117664173A (en) Calibration method, device, equipment and medium for motion capture equipment

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 15841840
Country of ref document: EP
Kind code of ref document: A1

NENP Non-entry into the national phase
Ref country code: DE

122 Ep: pct application non-entry in european phase
Ref document number: 15841840
Country of ref document: EP
Kind code of ref document: A1