WO1997042620A1 - Virtual motion controller - Google Patents

Virtual motion controller

Info

Publication number
WO1997042620A1
Authority
WO
WIPO (PCT)
Prior art keywords
operator
control
control zone
zones
zone
Application number
PCT/US1997/007419
Other languages
French (fr)
Inventor
Maxwell J. Wells
Jon Mandeville
Thomas A. Furness
Aaron K. Pulkka
Michael Lamar
Jason Aten
Original Assignee
University Of Washington
Application filed by University Of Washington filed Critical University Of Washington
Priority to AU33676/97A
Publication of WO1997042620A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Definitions

  • This invention relates to an input device for controlling movement, and more particularly to a device which measures movement of a person and uses that movement information to control motion of a virtual environment or of a robot or vehicle.
  • a virtual environment is a sophisticated computing environment depicting a virtual reality, a simulated environment, a game environment or other complex graphical environment.
  • the virtual environment is depicted by one or more displays.
  • An input device serves to control movement within the virtual environment.
  • Conventional user input devices for a general purpose computing environment include a keyboard, a pointing device and a clicking device. Examples of conventional pointing devices include a mouse, a trackball, a joy stick, and a touch pad.
  • the pointing device serves to control the position of a cursor on a computer screen. For a virtual environment a more sophisticated input device is desired.
  • Sophisticated military flight simulators use an aircraft vehicle as the input device.
  • a pilot sits in the cockpit and controls the aircraft. The object is to train the operator and provide a near-real experience.
  • the use of a vehicle as an input device is common in many simulator and game environments.
  • Another known input device is a hand-controlled device.
  • Kim et al. in "The Heaven and Earth Virtual Reality: Designing Metaphor for Novice Users" describe a virtual reality setup, consisting of a tracked head-mounted display and a 3D input device.
  • a user makes a hand gesture (e.g., presses a button on a "bird" device). The orientation and location of the bird device relative to the head mounted display determines the direction and velocity of motion.
  • a small sphere floats with the user. To change position the user moves their hand to the sphere, which always remains in the upper right corner of the field of view.
  • a technique called the lean-based technique a user's head displacement (i) in one version is modified by an exponential function to distort movement or (ii) in another version determines the speed of movement.
  • Pausch et al. disclose a miniature to control movement in "Navigation and Locomotion in Virtual Worlds via Flight into Hand-Held Miniatures."
  • a hand-held miniature graphical representation of a virtual environment is used to control movement in the virtual environment.
  • the user moves an iconic representation of himself in the miniature, the user moves correspondingly in the virtual environment.
  • the miniature graphics change to provide the effect of the user shrinking into the miniature or the miniature expanding to the enlarged virtual environment. The motion then occurs.
  • Iwata et al. describe virtual perambulator prototypes in (i) "Virtual Perambulator: A Novel Interface Device for Locomotion in Virtual Environment;" (ii) "Haptic Walkthrough Simulator: Its Design and Application to Studies on Cognitive Map;" and (iii) "Virtual Perambulator."
  • the user wears roller skates and is held in position by a harness or belt. The user then walks or runs using the skates to achieve motion in a virtual environment. The skates serve as input devices.
  • the harness or belt serves as a safety device to keep the user in place.
  • the skates are replaced with sandal devices having a low friction undersurface.
  • a rubber sole brake pad is positioned toward the toe area of the sandals.
  • a hoop frame replaces the harness/belt to confine the user within a limited real space. Motion is tracked by Polhemus sensors at the feet (e.g., the skates or sandals) and head.
  • the inventors have sought an input controller which involves hands-free use of the body and allows motion along multiple axes in multiple body postures.
  • An intuitive, natural feeling interface is desired which provides feedback on how the user's input affects the system.
  • a cue of the user's real world environment also is desired.
  • a 'sufficient motion' walking simulator serves as a motion control device for a virtual environment, robot or vehicle.
  • a virtual environment the user controls the content and perspective of a display depicting the virtual environment.
  • a real world robot or vehicle the user remotely controls motion via a camera or by direct viewing.
  • Sufficient motion is a term coined herein to refer to the concept of allowing the user enough movement in the real world to create a sense of reality and presence in the virtual environment. This is distinguished from a full motion input device, such as a 360 degree treadmill.
  • a user is positioned on a surface and is able to move among multiple control regions.
  • the control zones define multiple response functions.
  • positional changes within a first control zone translate directly (e.g., 1:1, 1:x) to positional changes in the virtual environment or of the robot or vehicle.
  • Positional changes in a second control zone translate to velocity changes in the virtual environment, robot or vehicle.
  • the specific response function varies for differing implementations.
  • the operator is positioned within a ring elevated above the surface. The operator is tethered to the ring.
  • the ring has only one degree of freedom relative to the surface, allowing movement of the ring only in a rotational direction (i.e., yaw motion).
  • the operator is able to move within the ring to move a sensor among the multiple control zones.
  • the surface is flat in a first control zone and varies in contour in a second control region. The varying contour provides feedback to the operator of the operator's real world position, and of their input to the system (e.g., how much movement has been requested of the device).
  • the varying contour in the second control zone gets steeper along a radial direction from a border with the first control zone outward.
  • the steepness corresponds to velocity within the virtual environment or of the robot or vehicle.
  • a side-step change in position within the second control zone translates into side-to-side motion within the virtual environment or of the robot or vehicle.
  • the apparatus includes a surface upon which an operator is positioned, an elevated ring and multiple sensors.
  • the surface defines multiple control zones.
  • a first control zone is located concentrically inward of a second control zone.
  • the elevated ring is positioned over a border between the first and second control zones.
  • a first sensor is in the possession of the operator (e.g., worn).
  • the position of the first sensor relative to the surface defines the operator's position relative to the control zones.
  • Positional change of the operator position within the first control zone translates directly to positional change within the virtual environment or of the robot or vehicle.
  • a second sensor is coupled to the ring. Movement of the operator into contact with the ring deflects a portion of the ring in the second control zone. Deflection of the ring into the second control zone is sensed and translated to velocity within the virtual environment or of the robot or vehicle.
  • the apparatus includes a platform having a surface upon which an operator is positioned. The surface defines multiple control zones. A plurality of weight sensors are symmetrically displaced about the platform.
  • Each sensor generates a signal corresponding to sensed weight.
  • a processor processes the signals from each of the plurality of weight sensors to derive a location of a center of gravity of the operator, as projected onto the platform. The derived location is taken as the operator position. Positional change of the operator position within a first control zone translates directly to positional change within the virtual environment or of the robot or vehicle. Change in operator position within a second control zone translates to change in velocity within the virtual environment or of the robot or vehicle.
  • a plurality of multiple degree of freedom sensors are positioned about the platform. Each sensor detects forces occurring in an xyz coordinate system.
  • a processor is able to compare sensor readings over time to precomputed movement signatures.
  • a twisting motion may apply a unique force pattern to the sensors.
  • Force pattern signatures for several motions are stored.
  • the force pattern currently applied by a user is compared to the prestored signatures to derive the type of movement the operator is making on the platform surface.
  • an application can better depict the user, the user's motion and/or the user's field of view change in the virtual world.
  • more information than the location of the operator's center of gravity is determined.
  • the location of the operator's center of gravity is used to define the operator position with respect to the multiple control zones.
  • the operator is able to provide command inputs by performing prescribed motions.
  • a user is positioned on an input device having a plurality of decoupled sensing regions.
  • Each sensing zone includes at least one force sensor.
  • Multiple ones of the plurality of sensing zones include multiple force sensors.
  • Each sensor detects forces occurring in an xyz coordinate system.
  • the regions are decoupled, meaning that when the user is not in contact with a given region, such region does not indicate any force. Thus, only the regions that the user is in contact with record a sensed force.
  • the regions form a surface upon which the operator is positioned.
  • the surface also defines multiple control regions.
  • the presence and/or motion of a user within a given control region is mapped to a given function or command to control the virtual environment, robot or vehicle.
  • One advantage of the controller of this invention is that using the body and legs as the input sources provides a natural and intuitive interface leaving the hands free to perform other tasks. It is expected that natural involvement of the body enhances the sense of presence in the virtual environment, provides better spatial awareness and better navigation performance.
  • Another advantage of the invention is that the user is able to assume differing postures (e.g., walking, bending, kneeling).
  • Another advantage of the invention is the feedback of information to the user about the user's input and how that input affects the system.
  • Another advantage of the invention is that the volume of movement of the user in the real world is restricted.
  • Fig. 1 is a block diagram of a system for generating a virtual environment which includes an input device according to an embodiment of this invention.
  • Fig. 2 is a diagram of a surface of the input device according to an embodiment of this invention.
  • Fig. 3 is a diagram of a surface of the input device according to another embodiment of this invention.
  • Fig. 4 is a diagram of a surface of the input device according to another embodiment of this invention.
  • Fig. 5 is a diagram of an operator standing on the surface and wearing a sensor according to an embodiment of the input device of this invention.
  • Fig. 6 is a diagram of a surface having a plurality of force sensors according to an embodiment of the input device of this invention.
  • Fig. 7 is a diagram of a surface having a plurality of force sensors according to another embodiment of the input device of this invention.
  • Fig. 8 is a diagram of a surface having a plurality of force sensors according to another embodiment of the input device of this invention.
  • Fig. 9 is a top view of a portion of the input device according to an embodiment of this invention.
  • Fig. 10 is a side view of the portion of the input device of Fig. 9.
  • Fig. 11 is a top view of a portion of the input device according to another embodiment of this invention.
  • Fig. 12 is a side view of the portion of the input device of Fig. 11.
  • Fig. 13 is a top view of a portion of the input device according to another embodiment of this invention.
  • Fig. 14 is a side view of the portion of the input device of Fig. 13.
  • Fig. 1 is a block diagram of a system 10 for generating a virtual environment.
  • the system 10 includes a host processing system 12, a display device 14 and an input device 16.
  • the host processing system includes one or more processors 18 for executing a computer program which creates the virtual environment.
  • the virtual environment can be any virtual environment including simulation environments, game environments, virtual reality environments or other graphical environments.
  • An operator provides inputs to control movements and enter commands in the virtual environment through the input device 16.
  • the processors 18 process the inputs to control what is displayed at display device 14.
  • the display device 14 provides visual feedback to the operator.
  • this invention addresses the input device 16 the operator controls to provide input to the host processing system 12.
  • the input device also is used for remote control of a robot or vehicle via direct view or via a displayed view.
  • the input device 16 includes a surface 20 upon which an operator is positioned (e.g., stands, sits, crawls, crouches), along with one or more sensors 22 and a processor.
  • one or more processors 18 of the host processing system 12 serve as the input device processor.
  • the surface 20 can be of any shape, and is expected to differ according to the embodiment.
  • the input device surface 20 is designed to resemble a surfboard for a surfing virtual environment.
  • Fig. 2 shows a circular surface 20.
  • Fig. 3 shows an oblong surface 20.
  • Fig. 4 shows a rectangular surface 20. Other geometric or odd shapes also may be used.
  • the surface 20 includes multiple control zones 24, 26.
  • the processor 18 receives input signals from the sensors 22 to determine an operator position on the surface 20.
  • Movement of the operator position within a first control zone 24 is processed with a first transformation function. Movement of the operator position within a second control zone 26 is processed with a second transformation function.
  • the specific transformation function may vary depending on the virtual environment or remote control implementation.
  • movement of the operator position within the first control zone is directly transposed to movement within the virtual environment or of a robot or vehicle. For example a 1 to 1 ratio of movement is implemented. Alternatively a 1 to many ratio of movement is implemented. For example, a 1 inch movement in the first control zone is implemented as a 5 foot movement in the virtual environment or of the robot or vehicle.
  • the relative size, shape and location of each control zone 24, 26 also may vary.
  • Figs. 2 and 3 show two concentric control zones 24, 26. Fig. 4 shows five nonconcentric control zones 24, 26a, 26b, 28a and 28b.
  • control zone 24 has a first transformation function.
  • control zones 26a, 26b have a second transformation function.
  • control zones 28a, 28b have a third transformation function.
  • the sensor(s) 22 determine where the operator position is with respect to the control zones.
  • a three space sensor is worn by the operator.
  • An exemplary sensor 22 is a Polhemus 3Space FasTrak sensor from Polhemus, Inc. of Colchester, Vermont.
  • the Polhemus sensor includes a receiver and a transmitter. The sensor's receiver is worn by the operator.
  • the sensor's transmitter is located within 1 meter of the sensor's receiver.
  • the sensor generates six degree-of-freedom position measurements (three positions, three orientations).
  • the Polhemus sensor is accurate to within ±0.8 mm for position and ±0.15° for orientation.
  • Fig. 5 shows an embodiment in which an operator 40 wears a head mounted display 42. Attached to the head mounted display 42 is the sensor 22 receiver 44.
  • the processor 18 (see Fig. 1) receives a signal from the receiver 44 to determine the operator position.
  • the operator position corresponds to the location of the receiver 44 as projected onto the surface 20.
  • Fig. 5 shows a current operator position at position 46. Such position 46 is within the first control zone 24. As the operator 40 moves, the sensor receiver 44 and thus the operator position 46 move.
  • the sensor(s) 22 are force sensors (e.g., weight sensors detecting a force along a z axis; 3 degree-of-freedom sensors detecting force along three axes). Using weight sensors the operator's center of gravity is projected onto the surface 20.
  • the platform defines a two-dimensional working plane (the xy plane), with the positive z direction normal to that plane.
  • An array of n sensors is located on the working plane, where n is greater than or equal to 3.
  • the sensors are coplanar, but located so as not to be collinear.
  • 4 sensors define a sensor rectangle which is coplanar with the working plane.
  • the sensor rectangle for example is square with the center located at the origin of the xy plane.
  • the total force on the platform is the sum of the forces detected by each sensor.
  • the force on the left half of the working plane is the sum of forces detected by the first and fourth sensors.
  • the force on the right half of the working plane is the sum of forces detected by the second and third sensors.
  • the force on a forward half of the working plane is the sum of forces on the first and second sensors.
  • the force on a rearward half of the working plane is the sum of forces on the third and fourth sensors. The center of gravity of the user has an x-axis force component and a y-axis force component.
  • the x-axis force component is the difference between the right half force and the left half force, divided by the total force.
  • the y-axis force component is the difference between the forward half force and the rearward half force, divided by the total force.
  • Figs. 6-8 and 12 show embodiments in which force sensors are used for the input device sensors 22. In such embodiments an additional Polhemus sensor also may be used. Such additional sensor provides additional information on the operator's movements and posture in 3 dimensional space.
  • the surface 20 rests on the force sensors 22.
  • the force sensors are grouped to define decoupled sensing zones.
  • Fig. 6 for example shows a circular surface 20 having 9 sensing zones 50, 52, 54, 56, 58, 60, 62, 64, 66.
  • the surface 20 is formed by 9 separate pieces 20a-20i.
  • Each sensing zone 50-66 rests on at least one force sensor 22.
  • a plurality of the sensing zones 52-66 rest on at least two force sensors.
  • Fig. 7 shows another embodiment in which there are 15 sensing zones 70-84. Each sensing zone rests on at least one force sensor 22.
  • Fig. 8 shows another embodiment in which there are 15 sensing zones 90-104. Each sensing zone 90-104 rests on at least two force sensors 22.
  • the use of at least one force sensor per sensing zone allows the processor 18 to determine that the operator is applying a force to given sensing zone(s).
  • the use of at least two force sensors per sensing zone allows the processor to determine where within a given zone the operator is applying a force.
  • the sensing zones coincide with the control zones.
  • one or more mutually exclusive subsets of sensing zones define the respective control zones.
  • Fig. 8 shows an embodiment where the border 38 between the control zones 24, 26 need not coincide with the borders of the sensing zones 90-104.
  • Figs. 9 and 10 show an embodiment of a portion 110 of the input device.
  • the portion 110 includes the surface 20.
  • the sensors 22 and processor 18 are not shown.
  • the surface 20 defines two concentric control zones 24, 26.
  • Figs. 2-4 show alternative embodiments with different control zone configurations.
  • the contour of the surface 20 is flat or varies according to the specific embodiment. In one embodiment surface 20 is generally flat along the surface portion 112 corresponding to the inner control zone 24. Surface 20 is inclined along the surface portion 114 corresponding to the outer control zone 26.
  • the portion 110 also includes a frame 116 to which the operator is tethered.
  • Figs. 9 and 10 show a belt 118 which is worn by the operator. Elastic tethers 120 connect the belt 118 to the frame 116.
  • the frame is generally rigid and rotates in a yaw motion with respect to the surface 20.
  • the frame has only one degree of freedom.
  • the tethers provide tensile feedback to the operator of the operator's real world position with respect to the surface 20.
  • the portion 110 of the input device 16 is used with a Polhemus sensor 22 worn by the operator 40 as shown in Fig. 5.
  • the surface 20 rests on a plurality of force sensors as shown in Figs. 13 and 14.
  • the sensor 22 outputs are used to determine the operator's center of gravity. This is taken as the operator position.
  • the location of the operator position with respect to the control zones determines how operator movements are to be processed.
  • Operator movements within the inner control zone 24 are processed according to one transformation function (e.g., direct positional translation). For example, a movement in a given direction by a given amount within the inner control zone is transformed to a movement in such direction in the virtual environment or of the robot or vehicle by an amount equal to a gain factor times the amount moved in the inner control zone.
  • Operator movements within the outer control zone 26 are processed according to another transformation function.
  • a movement in a given direction in the outer control zone 26 by a given amount is transformed to a movement in the virtual environment or of the robot or vehicle in such direction at a velocity equal to a gain factor times the radial change in position within the outer control zone.
  • More complex transformations also are implemented in some embodiments.
  • the incline portions 114 of the surface 20 correlate to the gain factor for the outer control zone transformation function (e.g., the steeper the incline the larger the gain factor and the faster the velocity).
  • Figs. 11 and 12 show an embodiment of a portion 130 of the input device.
  • the portion 130 includes the surface 20.
  • the sensors 22 and processor 18 are not shown.
  • the surface 20 defines two concentric control zones 24, 26.
  • Figs. 2-4 show alternative embodiments with different control zone configurations.
  • the contour of the surface 20 is flat or varies according to the specific embodiment. In one embodiment surface 20 is generally flat and inclines toward its periphery.
  • the incline provides a cue to the operator that the operator is nearing the edge of the surface. Thus, the incline serves as a safety mechanism to cue the operator and provide an indication of real world position.
  • the input device portion 130 also includes a frame 132 which suspends a ring 134.
  • the frame 132 is rigid and held in place with zero degrees of freedom.
  • the ring 134 is generally rigid and movable within the frame 132.
  • Springs 136, elastic tethers or another structure biases the ring 134 to a relaxed position.
  • An operator stands within the ring 134 and wears a Polhemus sensor.
  • another position sensor 138 is located at the ring 134 for determining the position of the ring 134. While the ring 134 is in its relaxed position the ring 134 is elevated over the border between an inner control zone 24 and a concentrically outer control zone 26.
  • the Polhemus sensor 44 worn by the operator provides inputs to the processor 18.
  • the processor derives the operator position within the ring 134 based on such inputs. Movements of the operator within the ring 134 that are within the inner control zone 24 are processed using a first transformation function. For example in a specific embodiment, a movement in a given direction by a given amount within the inner control zone is transformed to a movement in such direction in the virtual environment or of the robot or vehicle by an amount equal to a gain factor times the amount moved in the inner control zone.
  • the position sensor 138 sends sensor signals to the processor enabling the processor to determine that the ring has moved.
  • the amount and direction that the ring is pushed into the outer control zone 26 is determined and processed using a second transformation function. For example in a specific embodiment, a movement in a given direction in the outer control zone 26 by a given amount is transformed to a movement in the virtual environment or of the robot or vehicle in such direction at a velocity equal to a gain factor times the radial change in position within the outer control zone. More complex transformations also are implemented in some embodiments.
  • Figs. 13 and 14 show a platform 150 defining the surface 20 according to an embodiment of this invention.
  • the platform 150 rests on a plurality of force sensors 152-158.
  • the force sensors are one degree-of-freedom weight sensors for detecting the weight applied at each sensor. Multiple weight sensors are included so that the processor 18 is able to calculate the location of the operator's center of gravity when the operator is positioned on the platform 150.
  • the surface 20 defines multiple control zones 24, 26.
  • Figs. 2-4 show alternative embodiments with different control zone configurations.
  • the platform 150 is used alone or with the configurations described above with respect to the tethered embodiment and the ring embodiment.
  • the location of the operator's center of gravity as determined by the processor 18 from the weight sensor readings is used to identify the operator position. The operator position and changes in position are processed according to which control zone the movement occurs in. Movement within a first control zone is transposed in the virtual environment or to the robot or vehicle using a first transformation function. Movement within a second control zone is transposed in the virtual environment or to the robot or vehicle using a second transformation function.
  • the contour of the surface 20 is flat or varies according to the specific embodiment.
  • surface 20 is generally flat and inclines toward its periphery.
  • the incline provides a cue to the operator that the operator is nearing the edge of the surface.
  • the incline serves as a safety mechanism to cue the operator and provide an indication of real world position.
  • surface 20 is generally flat along the surface portion corresponding to the inner control zone 24 and is inclined along the surface portion corresponding to the outer control zone 26.
  • the force sensors 152-158 are three degree-of-freedom force sensors.
  • the weight applied by an operator corresponds to a force along a z-axis.
  • the sensors 152-158 however also detect other directional components of any forces applied by the operator. Different movements tend to result in different sense patterns. For example sensor readings during a twisting motion exhibit recognizable characteristics used to identify the motion as a twisting motion. Other motions and postures also are recognizable, such as crouching, crawling, walking and running.
  • the processor determines a three-dimensional operator movement from a time sequence of the output signals. Specifically, the processor accumulates a movement pattern of the operator and compares such movement pattern with prestored movement pattern characteristics to identify the movement pattern.
  • the processor 18 for a given virtual environment implementation is able to use the sensor readings to provide a more detailed control of motion within the virtual environment or of the robot or vehicle.
  • Figs. 6-8 show alternative embodiments for a surface 20 having multiple decoupled sensing zones.
  • Fig. 6 shows a circular surface 20 having 9 sensing zones 50, 52, 54, 56, 58, 60, 62, 64, 66.
  • Fig. 7 shows another embodiment in which there are 15 sensing zones 70-84.
  • Fig. 8 shows another embodiment in which there are 15 sensing zones 90-104.
  • the surface 20 is formed by separate pieces - one piece for each independent sensing zone. By separating the surface 20 pieces, a force applied only to one sensing zone does not impose a force on any other sensing zone. Accordingly, the sensing zones are decoupled. Each sensing zone rests on at least one force sensor 22.
  • a plurality of the sensing zones rest on at least two force sensors.
  • the use of at least one force sensor per sensing zone allows the processor 18 to determine that the operator is applying a force to given sensing zone(s).
  • the use of at least two force sensors per sensing zone allows the processor to determine where within a given zone the operator is applying a force.
  • decoupled sensing zones are defined by including contact switches on the surface 20. In such embodiment the physical surface 20 need not also be decoupled among the various sensing zones. Activation of contact switches identifies the sensing zone(s) where the user stands. Additional force sensors as described provide the force information to obtain the center of gravity and foot position.
  • the sensing zones coincide with the control zones.
  • an embodiment having the Fig. 7 layout may define 15 separate control zones, each control zone coinciding with a unique sensing zone.
  • one or more complete sensing zones define a control zone.
  • one embodiment having the layout shown in Fig. 6 has a first control zone coincident with sensing zone 50 and a second control zone formed by the remaining sensing zones 52-66.
  • one or more subsets of sensing zones define respective control zones.
  • Fig. 8 shows an embodiment where a first control zone 24 is formed by sensing zone 97 and portions of sensing zones 93-96 and 98-101 and a second control zone is formed by the remaining portions of sensing zones 93-96 and 98-101, along with sensing zones 90-92 and 102-104.
  • the borders of the control zones need not coincide with borders of the sensing zones.
  • the sensing zone has at least two force sensors.

Abstract

A motion control device (16) for a virtual environment, robot or vehicle. The controller allows the user enough movement in the real world to create a sense of reality and presence in the virtual environment. A user is positioned on a surface and is able to move within multiple control regions (24, 26). The virtual environment, robot or vehicle responds differently to inputs from a first control region (24) than from a second control region (26).

Description

VIRTUAL MOTION CONTROLLER
BACKGROUND OF THE INVENTION
This invention relates to an input device for controlling movement, and more particularly to a device which measures movement of a person and uses that movement information to control motion of a virtual environment or of a robot or vehicle.
A virtual environment is a sophisticated computing environment depicting a virtual reality, a simulated environment, a game environment or other complex graphical environment. The virtual environment is depicted by one or more displays. An input device serves to control movement within the virtual environment. Conventional user input devices for a general purpose computing environment include a keyboard, a pointing device and a clicking device. Examples of conventional pointing devices include a mouse, a trackball, a joy stick, and a touch pad. The pointing device serves to control the position of a cursor on a computer screen. For a virtual environment a more sophisticated input device is desired.
Sophisticated military flight simulators use an aircraft vehicle as the input device. A pilot sits in the cockpit and controls the aircraft. The object is to train the operator and provide a near-real experience. The use of a vehicle as an input device is common in many simulator and game environments. Another known input device is a hand-controlled device. Kim et al. in "The Heaven and Earth Virtual Reality: Designing Metaphor for Novice Users," describe a virtual reality setup, consisting of a tracked head-mounted display and a 3D input device. In a technique called the flying hand a user makes a hand gesture (e.g., presses a button on a "bird" device). The orientation and location of the bird device relative to the head mounted display determines the direction and velocity of motion. In a technique called the floating guide a small sphere floats with the user. To change position the user moves their hand to the sphere, which always remains in the upper right corner of the field of view. In a technique called the lean-based technique, a user's head displacement (i) in one version is modified by an exponential function to distort movement or (ii) in another version determines the speed of movement.
Pausch et al. disclose a miniature to control movement in "Navigation and Locomotion in Virtual Worlds via Flight into Hand-Held Miniatures." A hand-held miniature graphical representation of a virtual environment is used to control movement in the virtual environment. When a user moves an iconic representation of himself in the miniature, the user moves correspondingly in the virtual environment. First the user moves the icon, then the miniature graphics change to provide the effect of the user shrinking into the miniature or the miniature expanding to the enlarged virtual environment. The motion then occurs. Then, the graphics change to provide the effect of the user growing or the virtual environment shrinking.
Iwata et al. describe virtual perambulator prototypes in (i) "Virtual Perambulator: A Novel Interface Device for Locomotion in Virtual Environment;" (ii) "Haptic Walkthrough Simulator: Its Design and Application to Studies on Cognitive Map;" and (iii) "Virtual Perambulator." In the early prototypes the user wears roller skates and is held in position by a harness or belt. The user then walks or runs using the skates to achieve motion in a virtual environment. The skates serve as input devices. The harness or belt serves as a safety device to keep the user in place. In a later embodiment, the skates are replaced with sandal devices having a low friction undersurface. A rubber sole brake pad is positioned toward the toe area of the sandals. In addition, a hoop frame replaces the harness/belt to confine the user within a limited real space. Motion is tracked by Polhemus sensors at the feet (e.g., the skates or sandals) and head.
The inventors have sought an input controller which involves hands-free use of the body and allows motion along multiple axes in multiple body postures. An intuitive, natural feeling interface is desired which provides feedback on how the user's input affects the system. To maintain safe interaction, a cue of the user's real world environment also is desired.
SUMMARY OF THE INVENTION
According to the invention, a 'sufficient motion' walking simulator serves as a motion control device for a virtual environment, robot or vehicle. For a virtual environment the user controls the content and perspective of a display depicting the virtual environment. For a real world robot or vehicle, the user remotely controls motion via a camera or by direct viewing. Sufficient motion is a term coined herein to refer to the concept of allowing the user enough movement in the real world to create a sense of reality and presence in the virtual environment. This is distinguished from a full motion input device, such as a 360 degree treadmill.
According to one aspect of the invention, a user is positioned on a surface and is able to move among multiple control regions. The control zones define multiple response functions. In one embodiment positional changes within a first control zone translate directly (e.g., 1:1, 1:x) to positional changes in the virtual environment or of the robot or vehicle. Positional changes in a second control zone translate to velocity changes in the virtual environment, robot or vehicle. The specific response function varies for differing implementations. According to another aspect of the invention, in one embodiment the operator is positioned within a ring elevated above the surface. The operator is tethered to the ring. The ring has only one degree of freedom relative to the surface, allowing movement of the ring only in a rotational direction (i.e., yaw motion). The operator is able to move within the ring to move a sensor among the multiple control zones. According to another aspect of the invention, the surface is flat in a first control zone and varies in contour in a second control region. The varying contour provides feedback to the operator of the operator's real world position, and of their input to the system (e.g., how much movement has been requested of the device).
According to another aspect of the invention, the varying contour in the second control zone gets steeper along a radial direction from a border with the first control zone outward. The steepness corresponds to velocity within the virtual environment or of the robot or vehicle. According to another aspect of the invention, a side-step change in position within the second control zone translates into side-to-side motion within the virtual environment or of the robot or vehicle.
According to another aspect of the invention, in an alternative embodiment the apparatus includes a surface upon which an operator is positioned, an elevated ring and multiple sensors. The surface defines multiple control zones. A first control zone is located concentrically inward of a second control zone. The elevated ring is positioned over a border between the first and second control zones. A first sensor is in the possession of the operator (e.g., worn). The position of the first sensor relative to the surface defines the operator's position relative to the control zones. Positional change of the operator position within the first control zone translates directly to positional change within the virtual environment or of the robot or vehicle. A second sensor is coupled to the ring. Movement of the operator into contact with the ring deflects a portion of the ring in the second control zone. Deflection of the ring into the second control zone is sensed and translated to velocity within the virtual environment or of the robot or vehicle. According to another aspect of the invention, in an alternative embodiment the apparatus includes a platform having a surface upon which an operator is positioned. The surface defines multiple control zones. A plurality of weight sensors are symmetrically displaced about the platform. Each sensor generates a signal corresponding to sensed weight. A processor processes the signals from each of the plurality of weight sensors to derive a location of a center of gravity of the operator, as projected onto the platform. The derived location is taken as the operator position. Positional change of the operator position within a first control zone translates directly to positional change within the virtual environment or of the robot or vehicle. Change in operator position within a second control zone translates to change in velocity within the virtual environment or of the robot or vehicle.
According to another aspect of the invention, in another alternative embodiment including the platform, a plurality of multiple degree of freedom sensors are positioned about the platform. Each sensor detects forces occurring in an xyz coordinate system. By detecting more than just the weight (e.g., z-axis component of the force), a processor is able to compare sensor readings over time to precomputed movement signatures. For example a twisting motion may apply a unique force pattern to the sensors. Force pattern signatures for several motions are stored. The force pattern currently applied by a user is compared to the prestored signatures to derive the type of movement the operator is making on the platform surface. By knowing the type of movement an application can better depict the user, the user's motion and/or the user's field of view change in the virtual world. Thus, more information than the location of the operator's center of gravity is determined. In some implementations of such embodiment, the location of the operator's center of gravity is used to define the operator position with respect to the multiple control zones. Alternatively or in addition, the operator is able to provide command inputs by performing prescribed motions. According to another aspect of the invention, in yet another embodiment a user is positioned on an input device having a plurality of decoupled sensing regions. Each sensing zone includes at least one force sensor. Multiple ones of the plurality of sensing zones include multiple force sensors. Each sensor detects forces occurring in an xyz coordinate system. The regions are decoupled, meaning that when the user is not in contact with a given region, such region does not indicate any force. Thus, only the regions that the user is in contact with record a sensed force. The regions form a surface upon which the operator is positioned. The surface also defines multiple control regions. The presence and/or motion of a user within a given control region is mapped to a given function or command to control the virtual environment, robot or vehicle. One advantage of the controller of this invention is that using the body and legs as the input sources provides a natural and intuitive interface leaving the hands free to perform other tasks. It is expected that natural involvement of the body enhances the sense of presence in the virtual environment, provides better spatial awareness and better navigation performance. Another advantage of the invention is that the user is able to assume differing postures (e.g., walking, bending, kneeling). Another advantage of the invention is the feedback of information to the user about the user's input and how that input affects the system. Another advantage of the invention is that the volume of movement of the user in the real world is restricted. These and other aspects and advantages of the invention will be better understood by reference to the following detailed description taken in conjunction with the accompanying drawings.
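The signature comparison described above can be illustrated with a minimal sketch. The window shape, the weight normalization and the mean-squared-distance match below are assumptions for illustration only; the patent states merely that prestored force-pattern signatures are compared against the pattern the operator currently applies.

```python
import numpy as np

# A movement "signature" here is a short window of readings from the
# multiple-degree-of-freedom sensors: shape (frames, sensors, 3) holding
# xyz force components per sensor per frame.

def normalize(pattern):
    """Scale a force pattern so the comparison is weight-independent."""
    total = np.abs(pattern).sum()
    return pattern / total if total > 0 else pattern

def classify_movement(window, signatures, threshold=0.05):
    """Return the name of the closest prestored signature, or None."""
    window = normalize(window)
    best_name, best_dist = None, threshold
    for name, sig in signatures.items():
        dist = float(np.mean((window - normalize(sig)) ** 2))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

# Example with synthetic data: a "twist" applies opposing x-forces on the
# left and right sensors while the weight on each sensor stays steady.
frames, sensors = 20, 4
twist = np.zeros((frames, sensors, 3))
twist[:, 0, 0], twist[:, 1, 0] = 1.0, -1.0   # opposing tangential forces
twist[:, :, 2] = 5.0                         # steady weight on each sensor
signatures = {"twist": twist}
print(classify_movement(twist * 1.7, signatures))  # scale-invariant: "twist"
```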
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a block diagram of a system for generating a virtual environment which includes an input device according to an embodiment of this invention; Fig. 2 is a diagram of a surface of the input device according to an embodiment of this invention;
Fig. 3 is a diagram of a surface of the input device according to another embodiment of this invention;
Fig. 4 is a diagram of a surface of the input device according to another embodiment of this invention;
Fig. 5 is a diagram of an operator standing on the surface and wearing a sensor according to an embodiment of the input device of this invention; Fig. 6 is a diagram of a surface having a plurality of force sensors according to an embodiment of the input device of this invention;
Fig. 7 is a diagram of a surface having a plurality of force sensors according to another embodiment of the input device of this invention; Fig. 8 is a diagram of a surface having a plurality of force sensors according to another embodiment of the input device of this invention;
Fig. 9 is a top view of a portion of the input device according to an embodiment of this invention;
Fig. 10 is a side view of the portion of the input device of Fig. 9; Fig. 11 is a top view of a portion of the input device according to another embodiment of this invention;
Fig. 12 is a side view of the portion of the input device of Fig. 11;
Fig. 13 is a top view of a portion of the input device according to another embodiment of this invention; and Fig. 14 is a side view of the portion of the input device of Fig. 13.
DESCRIPTION OF SPECIFIC EMBODIMENTS
Overview
Fig. 1 is a block diagram of a system 10 for generating a virtual environment. The system 10 includes a host processing system 12, a display device 14 and an input device 16. The host processing system includes one or more processors 18 for executing a computer program which creates the virtual environment. The virtual environment can be any virtual environment including simulation environments, game environments, virtual reality environments or other graphical environments. An operator provides inputs to control movements and enter commands in the virtual environment through the input device 16. The processors 18 process the inputs to control what is displayed at display device 14. The display device 14 provides visual feedback to the operator. Although other mechanisms for feeding back virtual environment information can be implemented, this invention addresses the input device 16 the operator controls to provide input to the host processing system 12. Although a virtual environment host is described, the input device also is used for remote control of a robot or vehicle via direct view or via a displayed view.
The input device 16 includes a surface 20 upon which an operator is positioned (e.g., stands, sits, crawls, crouches), along with one or more sensors 22 and a processor. In one embodiment one or more processors 18 of the host processing system 12 serve as the input device processor. The surface 20 can be of any shape, and is expected to differ according to the embodiment. For example, the input device surface 20 is designed to resemble a surfboard for a surfing virtual environment. Fig. 2 shows a circular surface 20. Fig. 3 shows an oblong surface 20. Fig. 4 shows a rectangular surface 20. Other geometric or odd shapes also may be used. The surface 20 includes multiple control zones 24, 26. The processor 18 receives input signals from the sensors 22 to determine an operator position on the surface 20. Movement of the operator position within a first control zone 24 is processed with a first transformation function. Movement of the operator position within a second control zone 26 is processed with a second transformation function. The specific transformation function may vary depending on the virtual environment or remote control implementation. In a specific embodiment, movement of the operator position within the first control zone is directly transposed to movement within the virtual environment or of a robot or vehicle. For example a 1 to 1 ratio of movement is implemented. Alternatively a 1 to many ratio of movement is implemented. For example, a 1 inch movement in the first control zone is implemented as a 5 foot movement in the virtual environment or of the robot or vehicle. The relative size, shape and location of each control zone 24, 26 also may vary. Figs. 2 and 3 show two concentric control zones 24, 26. Fig. 4 shows five nonconcentric control zones 24, 26a, 26b, 28a and 28b. Each control zone may have the same or a differing transformation function. For example, in a specific embodiment of the Fig. 4 depiction control zone 24 has a first transformation function, control zones 26a, 26b have a second transformation function and control zones 28a, 28b have a third transformation function. The sensor(s) 22 determine where the operator position is with respect to the control zones. In one embodiment a three space sensor is worn by the operator. An exemplary sensor 22 is a Polhemus 3Space FasTrak sensor from Polhemus, Inc. of Colchester, Vermont. The polhemus sensor includes a receiver and a transmitter. The sensor's receiver is worn by the operator. The sensor's transmitter is located within 1 meter of the sensor's receiver. The sensor generates six degree-of-freedom position measurements (three positions, three orientations). The Polhemus sensor is accurate to within ±0.8 mm for position and ±0.15° for orientation. Fig. 5 shows an embodiment in which an operator 40 wears a head mounted display 42. Attached to the head mounted display 42 is the sensor 22 receiver 44. The processor 18 (see Fig. 1 ) receives a signal from the receiver 44 to determine the operator position. The operator position corresponds to the location of the receiver 44 as projected onto the surface 20. Fig. 5 shows a current operator position at position 46. Such position 46 is within the first control zone 24. As the operator 40 moves the sensor receiver 44 and thus the operator position 46 moves. In alternative embodiments the sensor(s) 22 are force sensors (e.g., weight sensors detecting a force along a z axis; 3 degree-of-freedom sensors detecting force along three axes). 
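Before turning to the sensor embodiments, the zone-based dispatch just described can be sketched minimally. The border radius and both gain factors are assumed values for the concentric layout of Figs. 2 and 3; the patent leaves the specific transformation functions to the implementation.

```python
import math

INNER_RADIUS = 0.5    # meters; assumed border between control zones 24 and 26
POSITION_GAIN = 60.0  # e.g., 1 inch of real movement -> 5 feet (a 60x ratio)
VELOCITY_GAIN = 2.0   # speed per meter of excursion into the outer zone

def transform(position, previous):
    """Map the operator position (projected onto surface 20) to a command."""
    x, y = position
    r = math.hypot(x, y)
    if r <= INNER_RADIUS:
        # First control zone 24: positional change translates directly.
        dx, dy = x - previous[0], y - previous[1]
        return ("move", POSITION_GAIN * dx, POSITION_GAIN * dy)
    # Second control zone 26: radial excursion past the border sets speed;
    # direction follows the operator's offset from center.
    speed = VELOCITY_GAIN * (r - INNER_RADIUS)
    return ("velocity", speed * x / r, speed * y / r)

# Example: stepping 0.1 m forward inside zone 24 yields a 6 m move.
print(transform((0.0, 0.1), (0.0, 0.0)))   # ('move', 0.0, 6.0)
```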
Using weight sensors the operator's center of gravity is projected onto the surface 20. The platform defines a two-dimensional working plane (the xy plane), with the positive z direction normal to that plane. An array of n sensors is located on the working plane, where n is greater than or equal to 3. The sensors are coplanar, but located so as not to be collinear. In one embodiment 4 sensors define a sensor rectangle which is coplanar with the working plane. The sensor rectangle for example is square with the center located at the origin of the xy plane.
The total force on the platform is the sum of the forces detected by each sensor. The force on the left half of the working plane is the sum of forces detected by the first and fourth sensors. The force on the right half of the working plane is the sum of forces detected by the second and third sensors. The force on a forward half of the working plane is the sum of forces on the first and second sensors. The force on a rearward half of the working plane is the sum of forces on the third and fourth sensors. The center of gravity of the user has an x-axis force component and a y-axis force component. The x-axis force component is the difference between the right half force and the left half force, divided by the total force. The y-axis force component is the difference between the forward half force and the rearward half force, divided by the total force.
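These formulas translate directly into code. A minimal sketch follows; the sensor numbering (1 front-left, 2 front-right, 3 rear-right, 4 rear-left) is the assignment implied by the half-plane sums above, and the result is normalized to the sensor rectangle.

```python
def center_of_gravity(f1, f2, f3, f4):
    """Project the operator's center of gravity onto the working plane.

    Sensor numbering follows the half-plane sums in the text:
    1 front-left, 2 front-right, 3 rear-right, 4 rear-left.
    Returns (x, y) normalized to the sensor rectangle, (0, 0) at center.
    """
    total = f1 + f2 + f3 + f4
    if total == 0:
        return (0.0, 0.0)          # nobody on the platform
    left, right = f1 + f4, f2 + f3
    forward, rear = f1 + f2, f3 + f4
    x = (right - left) / total     # x-axis component
    y = (forward - rear) / total   # y-axis component
    return (x, y)

# Example: all weight on the front-right sensor places the center of
# gravity at the front-right corner of the normalized sensor rectangle.
print(center_of_gravity(0, 800, 0, 0))   # (1.0, 1.0)
```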
Using 3 degree-of-freedom force sensors the operator position is similarly determined using the z-component of each sensor reading. The additional force components along the x and y axes allow the processors 18 to determine posture and more complex movements of the operator 40. Figs. 6-8 and 12 show embodiments in which force sensors are used for the input device sensors 22. In such embodiments an additional Polhemus sensor also may be used. Such additional sensor provides additional information on the operator's movements and posture in 3 dimensional space.
For embodiments in which force sensors are used the surface 20 rests on the force sensors 22. In some embodiments the force sensors are grouped to define decoupled sensing zones. Fig. 6 for example shows a circular surface 20 having 9 sensing zones 50, 52, 54, 56, 58, 60, 62, 64, 66. The surface 20 is formed by 9 separate pieces 20a-20i. Each sensing zone 50-66 rests on at least one force sensor 22. A plurality of the sensing zones 52-66 rest on at least two force sensors. By separating the surface 20 pieces 20a-20i, a force applied only to one sensing zone does not impose a force on any other sensing zone. Accordingly, the sensing zones are decoupled. Fig. 7 shows another embodiment in which there are 15 sensing zones 70-84. Each sensing zone rests on at least one force sensor 22. Fig. 8 shows another embodiment in which there are 15 sensing zones 90-104. Each sensing zone 90-104 rests on at least two force sensors 22. The use of at least one force sensor per sensing zone allows the processor 18 to determine that the operator is applying a force to given sensing zone(s). The use of at least two force sensors per sensing zone allows the processor to determine where within a given zone the operator is applying a force. In some embodiments the sensing zones coincide with the control zones. In other embodiments one or more mutually exclusive subsets of sensing zones define the respective control zones. Fig. 8 shows an embodiment where the border 38 between the control zones 24, 26 need not coincide with the borders of the sensing zones 90-104.
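A minimal sketch of reading such decoupled zones follows. The zone names, sensor positions and the noise threshold are assumptions for illustration; the patent states only that an unloaded zone registers no force and that two or more sensors per zone let the processor locate the force within the zone.

```python
from dataclasses import dataclass

@dataclass
class SensingZone:
    name: str
    sensor_positions: list   # (x, y) of each sensor under this zone piece
    readings: list           # current force at each of those sensors

NOISE_FLOOR = 1.0            # newtons; below this a zone counts as untouched

def active_zones(zones):
    """Return (zone name, estimated point of applied force) per loaded zone."""
    result = []
    for z in zones:
        total = sum(z.readings)
        if total < NOISE_FLOOR:
            continue         # decoupled: no contact means no sensed force
        # Force-weighted centroid of the zone's sensors estimates where
        # within the zone the operator is pressing (needs >= 2 sensors).
        x = sum(f * p[0] for f, p in zip(z.readings, z.sensor_positions)) / total
        y = sum(f * p[1] for f, p in zip(z.readings, z.sensor_positions)) / total
        result.append((z.name, (x, y)))
    return result

# Example: an operator straddling zones 50 and 52 of Fig. 6.
zones = [SensingZone("50", [(0, 0)], [400.0]),
         SensingZone("52", [(1, 0), (2, 0)], [150.0, 50.0]),
         SensingZone("54", [(3, 0)], [0.0])]
print(active_zones(zones))   # zones 50 and 52 report; zone 54 stays silent
```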
Tether Embodiment
Figs. 9 and 10 show an embodiment of a portion 110 of the input device. The portion 110 includes the surface 20. The sensors 22 and processor 18 are not shown. The surface 20 defines two concentric control zones 24, 26. Figs. 2-4 show alternative embodiments with different control zone configurations. The contour of the surface 20 is flat or varies according to the specific embodiment. In one embodiment surface 20 is generally flat along the surface portion 112 corresponding to the inner control zone 24. Surface 20 is inclined along the surface portion 114 corresponding to the outer control zone 26. The portion 110 also includes a frame 116 to which the operator is tethered. Figs. 9 and 10 show a belt 118 which is worn by the operator. Elastic tethers 120 connect the belt 118 to the frame 116. The frame is generally rigid and rotates in a yaw motion with respect to the surface 20. The frame has only one degree of freedom. As the operator moves, the tethers provide tensile feedback to the operator of the operator's real world position with respect to the surface 20. The portion 110 of the input device 16 is used with a Polhemus sensor 22 worn by the operator 40 as shown in Fig. 5.
Alternatively, the surface 20 rests on a plurality of force sensors as shown in Figs. 13 and 14. The sensor 22 outputs are used to determine the operator's center of gravity. This is taken as the operator position. The location of the operator position with respect to the control zones determines how operator movements are to be processed. Operator movements within the inner control zone 24 are processed according to one transformation function (e.g., direct positional translation). For example, a movement in a given direction by a given amount within the inner control zone is transformed to a movement in such direction in the virtual environment or of the robot or vehicle by an amount equal to a gain factor times the amount moved in the inner control zone. Operator movements within the outer control zone 26 are processed according to another transformation function. For example, a movement in a given direction in the outer control zone 26 by a given amount is transformed to a movement in the virtual environment or of the robot or vehicle in such direction at a velocity equal to a gain factor times the radial change in position within the outer control zone. More complex transformations also are implemented in some embodiments. In some embodiments the incline portions 114 of the surface 20 correlate to the gain factor for the outer control zone transformation function (e.g., the steeper the incline the larger the gain factor and the faster the velocity).
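The correlation between incline and gain can be sketched as follows. The linear slope profile and the constants are assumed examples; the patent does not specify a particular contour-to-gain curve.

```python
def incline_slope(r, border=0.5, rim=1.0):
    """Assumed contour: flat inside the border, rising linearly to the rim."""
    if r <= border:
        return 0.0            # surface portion 112 (inner zone 24) is flat
    return min((r - border) / (rim - border), 1.0)

def outer_zone_speed(r, max_speed=3.0):
    """Velocity magnitude grows with the steepness under the operator."""
    return max_speed * incline_slope(r)

# Example: standing halfway up the incline commands half the maximum speed.
print(outer_zone_speed(0.75))    # 1.5
```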
Ring Embodiment
Figs. 11 and 12 show an embodiment of a portion 130 of the input device. The portion 130 includes the surface 20. The sensors 22 and processor 18 are not shown. The surface 20 defines two concentric control zones 24, 26. Figs. 2-4 show alternative embodiments with different control zone configurations. The contour of the surface 20 is flat or varies according to the specific embodiment. In one embodiment surface 20 is generally flat, inclining toward its periphery. The incline provides a cue to the operator that the operator is nearing the edge of the surface. Thus, the incline serves as a safety mechanism to cue the operator and provide an indication of real world position. The input device portion 130 also includes a frame 132 which suspends a ring 134. The frame 132 is rigid and held in place with zero degrees of freedom. The ring 134 is generally rigid and movable within the frame 132. Springs 136, elastic tethers or another structure bias the ring 134 to a relaxed position. An operator stands within the ring 134 and wears a Polhemus sensor. In addition, another position sensor 138 is located at the ring 134 for determining the position of the ring 134. While the ring 134 is in its relaxed position, the ring 134 is elevated over the border between an inner control zone 24 and a concentric outer control zone 26.
The Polhemus sensor 44 worn by the operator (see Fig. 5) provides inputs to the processor 18. The processor derives the operator position within the ring 134 based on such inputs. Movements of the operator within the ring 134 that are within the inner control zone 24 are processed using a first transformation function. For example, in a specific embodiment, a movement in a given direction by a given amount within the inner control zone is transformed to a movement in such direction in the virtual environment or of the robot or vehicle by an amount equal to a gain factor times the amount moved in the inner control zone. As the operator moves into contact with the ring 134, the operator pushes the ring 134 away from its relaxed position. The position sensor 138 sends sensor signals to the processor, enabling the processor to determine that the ring has moved. The amount and direction that the ring is pushed into the outer control zone 26 is determined and processed using a second transformation function. For example, in a specific embodiment, a movement in a given direction in the outer control zone 26 by a given amount is transformed to a movement in the virtual environment or of the robot or vehicle in such direction at a velocity equal to a gain factor times the radial change in position within the outer control zone. More complex transformations also are implemented in some embodiments.
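A minimal sketch of the ring-based outer-zone transformation, assuming the position sensor 138 reports the ring center as an (x, y) offset from its relaxed position (the gain value and function name are illustrative):

    RING_VELOCITY_GAIN = 4.0  # virtual m/s per meter of ring deflection

    def ring_velocity(ring_pos, relaxed_pos=(0.0, 0.0)):
        """Convert ring deflection into a velocity command."""
        dx = ring_pos[0] - relaxed_pos[0]
        dy = ring_pos[1] - relaxed_pos[1]
        # The farther the operator pushes the ring toward the outer
        # control zone, the faster the commanded motion in that direction.
        return (RING_VELOCITY_GAIN * dx, RING_VELOCITY_GAIN * dy)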
Force Platform Embodiments
Figs. 13 and 14 show a platform 150 defining the surface 20 according to an embodiment of this invention. The platform 150 rests on a plurality of force sensors 152-158. In one embodiment the force sensors are one degree-of-freedom weight sensors for detecting the weight applied at each sensor. Multiple weight sensors are included so that the processor 18 is able to calculate the location of the operator's center of gravity when the operator is positioned on the platform 150. The surface 20 defines multiple control zones 24, 26. Figs. 2-4 show alternative embodiments with different control zone configurations. The platform 150 is used alone or with the configurations described above with respect to the tethered embodiment and the ring embodiment. The location of the operator's center of gravity, as determined by the processor 18 from the weight sensor readings, is used to identify the operator position. The operator position and changes in position are processed according to the control zone in which the movement occurs. Movement within a first control zone is transformed into movement in the virtual environment or of the robot or vehicle using a first transformation function. Movement within a second control zone is transformed into movement in the virtual environment or of the robot or vehicle using a second transformation function.
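For illustration (the corner coordinates below are assumed, not taken from the specification), the projection of the center of gravity can be computed as the force-weighted mean of the sensor locations, as in this Python sketch:

    # Hypothetical corner coordinates for the four weight sensors 152-158.
    SENSOR_POSITIONS = [(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (-1.0, 1.0)]

    def center_of_gravity(weights):
        """weights -- readings from the sensors, in SENSOR_POSITIONS order."""
        total = sum(weights)
        if total <= 0:
            return None  # no operator on the platform
        x = sum(w * p[0] for w, p in zip(weights, SENSOR_POSITIONS)) / total
        y = sum(w * p[1] for w, p in zip(weights, SENSOR_POSITIONS)) / total
        return (x, y)

The returned (x, y) location is then tested against the control zone geometry to select a transformation function.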
The contour of the surface 20 is flat or varies according to the specific embodiment. In one embodiment surface 20 is generally flat, inclining toward its periphery. The incline provides a cue to the operator that the operator is nearing the edge of the surface. Thus, the incline serves as a safety mechanism to cue the operator and provide an indication of real world position. In another embodiment surface 20 is generally flat along the surface portion corresponding to the inner control zone 24 and is inclined along the surface portion corresponding to the outer control zone 26.
In another embodiment the force sensors 152-158 are three degree-of-freedom force sensors. The weight applied by an operator corresponds to a force along a z-axis. The sensors 152-158, however, also detect other directional components of any forces applied by the operator. Different movements tend to produce different sensor patterns. For example, sensor readings during a twisting motion exhibit recognizable characteristics used to identify the motion as a twisting motion. Other motions and postures also are recognizable, such as crouching, crawling, walking, and running. The processor determines a three-dimensional operator movement from a time sequence of the output signals. Specifically, the processor accumulates a movement pattern of the operator and compares such movement pattern with prestored movement pattern characteristics to identify the movement pattern. The processor 18 for a given virtual environment implementation is able to use the sensor readings to provide more detailed control of motion within the virtual environment or of the robot or vehicle.
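One hypothetical way to realize the comparison against prestored movement pattern characteristics is nearest-neighbor matching over feature vectors extracted from the sensor time sequence; the feature dimensions and signature values below are illustrative placeholders, not data from the specification:

    # Assumed prestored signatures; each value is a feature vector
    # summarizing the force-sensor time sequence for one motion.
    SIGNATURES = {
        "twisting":  [0.1, 0.9, 0.2],
        "walking":   [0.7, 0.1, 0.5],
        "crouching": [0.2, 0.2, 0.9],
    }

    def classify_movement(features):
        """Return the prestored pattern nearest the observed features."""
        def sq_dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(SIGNATURES, key=lambda name: sq_dist(features, SIGNATURES[name]))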
Sensing Zone Embodiments
Figs. 6-8 show alternative embodiments for a surface 20 having multiple decoupled sensing zones. Fig. 6 shows a circular surface 20 having 9 sensing zones 50, 52, 54, 56, 58, 60, 62, 64, 66. Fig. 7 shows another embodiment in which there are 15 sensing zones 70-84. Fig. 8 shows another embodiment in which there are 15 sensing zones 90-104. In each embodiment in which the sensing zones are decoupled, the surface 20 is formed by separate pieces, one piece for each independent sensing zone. Because the surface 20 pieces are separated, a force applied to one sensing zone does not impose a force on any other sensing zone. Accordingly, the sensing zones are decoupled. Each sensing zone rests on at least one force sensor 22. In the embodiments of Figs. 6 and 8, a plurality of the sensing zones rest on at least two force sensors. The use of at least one force sensor per sensing zone allows the processor 18 to determine that the operator is applying a force to a given sensing zone or zones. The use of at least two force sensors per sensing zone allows the processor to determine where within a given zone the operator is applying a force. In an alternative embodiment decoupled sensing zones are defined by including contact switches on the surface 20. In such an embodiment the physical surface 20 need not also be decoupled among the various sensing zones. Activation of the contact switches identifies the sensing zone or zones where the user stands. Additional force sensors as described provide the force information to obtain the center of gravity and foot position.
In some embodiments the sensing zones coincide with the control zones. For example, an embodiment having the Fig. 7 layout may define 15 separate control zones, each control zone coinciding with a unique sensing zone. In other embodiments one or more complete sensing zones define a control zone. For example, one embodiment having the layout shown in Fig. 6 has a first control zone coincident with sensing zone 50 and a second control zone formed by the remaining sensing zones 52-66. In yet another embodiment one or more subsets of sensing zones define respective control zones. For example, Fig. 8 shows an embodiment where a first control zone 24 is formed by sensing zone 97 and portions of sensing zones 93-96 and 98-101, and a second control zone is formed by the remaining portions of sensing zones 93-96 and 98-101, along with sensing zones 90-92 and 102-104. Thus, the borders of the control zones need not coincide with the borders of the sensing zones. In an embodiment in which different portions of a sensing zone are allocated to different control zones, the sensing zone has at least two force sensors.
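As an illustrative sketch of the sensing-zone-to-control-zone mapping (using the Fig. 6 example above, in which sensing zone 50 forms the first control zone and zones 52-66 form the second; the threshold and names are assumptions):

    # Hypothetical mapping from sensing zone number to control zone;
    # zone numbers follow the Fig. 6 numbering (50, 52, ..., 66).
    CONTROL_ZONE_OF = {50: "first"}
    CONTROL_ZONE_OF.update({z: "second" for z in range(52, 68, 2)})

    def active_control_zones(zone_forces, threshold=2.0):
        """zone_forces -- mapping of sensing zone number to total force."""
        return {CONTROL_ZONE_OF[z]
                for z, f in zone_forces.items() if f >= threshold}

Sensing zones split between control zones (as in Fig. 8) would additionally use the within-zone position estimate, which is why such zones require at least two force sensors.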
Meritorious and Advantageous Effects
One advantage of the controller of this invention is that using the body and legs as the input sources provides a natural and intuitive interface, leaving the hands free to perform other tasks. It is expected that natural involvement of the body enhances the sense of presence in the virtual environment and provides better spatial awareness and better navigation performance. Another advantage of the invention is that the user is able to assume differing postures (e.g., walking, bending, kneeling). Another advantage of the invention is the feedback of information to the user about the user's input and how that input affects the system. Another advantage of the invention is that the volume of movement of the user in the real world is restricted.
Although a preferred embodiment of the invention has been illustrated and described, various alternatives, modifications and equivalents may be used. For example, although the input device embodiments are described in detail for controlling motion within a virtual environment, the embodiments alternatively can control motion of a robot or vehicle in the same manner described for the virtual environment. Therefore, the foregoing description should not be taken as limiting the scope of the invention, which is defined by the appended claims.

Claims

WHAT IS CLAIMED IS:
1. An input apparatus (16) which an operator interacts with to control movement within a virtual environment, the apparatus comprising: a surface (20) upon which an operator is positioned, the surface defining multiple control zones (24, 26); a sensor (22) generating an output signal; and a processor (18) receiving the output signal for processing sensor output to determine a current operator position; and wherein movement of the operator within a first control zone (24) of the multiple control zones causes a first response in the virtual environment based upon a first transformation function, and wherein movement of the operator within a second control zone (26) of the multiple control zones causes a second response in the virtual environment based upon a second transformation function which differs from the first transformation function.
2. The input apparatus of claim 1, further comprising an elastic tether (120) fastened to the operator, the tether providing tensile feedback to the operator for movement away from a first position.
3. The input apparatus of claim 1, in which the multiple control zones comprise two concentric control zones, a first control zone of the two control zones located concentrically inward of a second control zone of the two control zones.
4. The input apparatus of claim 3, wherein the first transformation function directly translates positional change of the operator position within the first control zone to positional change within the virtual environment.
5. The input apparatus of claim 3, wherein the second transformation function translates the operator position within the second control zone to velocity within the virtual environment.
6. The input apparatus of claim 3, in which the sensor comprises a first sensor (22) and a second sensor (138), and further comprising an elevated frame (132) located at a border between the first control zone and the second control zone, and wherein the first sensor (22) is in the possession of the operator and indicates operator position within the first control zone, and wherein the second sensor (138) is coupled to the frame and detects movement of the frame, wherein deflection of the frame into the second control zone indicates operator position within the second control zone.
7. The input apparatus of claim 1, wherein the sensor (22) comprises a plurality of weight sensors (152-158) sensing operator weight upon the surface, and wherein the processor receives the output signals from the weight sensors and determines a projection of the operator center of gravity onto the surface, the location of such projection being the current operator position.
8. The input apparatus of claim 1, wherein the sensor comprises a plurality of multiple degree-of-freedom force sensors (22) sensing forces applied by the operator to the surface (20), and wherein the processor receives the output signals from the force sensors and determines a projection of the operator center of gravity onto the surface, the location of such projection being the current operator position.
9. The input apparatus of claim 8, wherein the processor accumulates a movement pattern of the operator on the surface and compares such movement pattern with prestored movement pattern signatures to identify the movement pattern.
10. An input apparatus (16) which an operator interacts with to control movement within a virtual environment, the apparatus comprising: a surface (20) upon which an operator is positioned, the surface defining a plurality of decoupled sensing zones (50-66/70-84/90-104) and a plurality of control zones (24, 26); for each sensing zone a force sensor (22) generating an output signal, wherein for multiple sensing zones of the plurality of sensing zones at least two force sensors are included; a processor (18) receiving the respective output signals from each of the plurality of sensors, the processor determining a current operator position relative to the plurality of control zones from the sensor output signals; and wherein movement of the operator within a first control zone of the plurality of control zones causes a first response in the virtual environment based upon a first transformation function, and wherein movement of the operator within a second control zone of the plurality of control zones causes a second response in the virtual environment based upon a second transformation function which differs from the first transformation function.
11. The input apparatus of claim 10, wherein the processor determines a three-dimensional operator movement from a time sequence of the output signals.
12. An input apparatus (16) which an operator interacts with to control movement within a virtual environment, the apparatus comprising: a surface (20) upon which an operator is positioned, the surface defining multiple concentric control zones (24,26); and a sensor (22) in the possession of the operator, wherein position of the sensor relative to the surface defines operator position; and wherein positional change of the operator position within a first control zone
(24) of the multiple concentric control zones translates directly to positional change within the virtual environment, wherein operator position within a second control zone (26) of the multiple concentric control zones translates to velocity within the virtual environment, and wherein change in operator position within the second control zone corresponds to change in velocity within the virtual environment, and wherein the first control zone is concentrically inward relative to the second control zone.
13. The apparatus of claim 12, further comprising a rigid frame (116) within which the operator is positioned, the operator tethered to the frame, the frame having only one degree of freedom relative to the surface, the one degree of freedom allowing a yaw motion, and wherein the operator is able to move within the frame to move the sensor among the first control zone and the second control zone.
14. The apparatus of claim 12, wherein the surface is flat at the first control zone and varies in contour in the second control zone, the varying contour providing feedback to the operator of a real world position of the operator.
15. The apparatus of claim 14, wherein the varying contour in the second control zone gets steeper along a radial direction from a border with the first control zone outward, and wherein the steepness corresponds to velocity within the virtual environment.
16. An input apparatus (16) which an operator interacts with to control movement within a virtual environment, the apparatus comprising: a surface (20) upon which an operator is positioned, the surface defining first and second concentric control zones (24, 26), the first control zone (24) concentrically inward of the second control zone (26); an elevated ring (134) positioned over a border between the first and second control zones; a sensor (22) in the possession of the operator, wherein position of the sensor relative to the surface defines operator position; and wherein positional change of the operator position within the first control zone translates directly to positional change within the virtual environment, wherein movement of the operator into contact with the ring deflects a portion of the ring into the second control zone, and wherein deflection of the ring into the second control zone translates to velocity within the virtual environment.
17. An input apparatus (16) which an operator interacts with to control movement within a virtual environment, the apparatus comprising: a platform having a surface (20) upon which an operator is positioned, the surface defining multiple concentric control zones (24, 26); and a plurality of force sensors (50-66) symmetrically displaced about the platform, each sensor generating a signal corresponding to sensed force; a processor (18) which processes the signals from each of the plurality of force sensors to derive a location of a center of gravity of the operator, said derived location being an operator position; wherein positional change of the operator position within a first control zone of the multiple concentric control zones translates directly to positional change within the virtual environment, wherein change in operator position within a second control zone of the multiple concentric control zones translates to change in velocity within the virtual environment, and wherein the first control zone is concentrically inward relative to the second control zone.
18. The input apparatus of claim 17, wherein a multiple of the plurality of force sensors are one degree-of-freedom weight sensors generating a respective signal corresponding to a sensed weight.
19. The input apparatus of claim 18, wherein a multiple of the plurality of force sensors sense force in at least three degrees of freedom, wherein the processor determines a three-dimensional operator movement from a time sequence of the output signals of the plurality of sensors.
20. The input apparatus of claim 19, wherein the processor accumulates a movement pattern of the operator on the surface and compares such movement pattern with prestored movement pattern signatures to identify the movement pattern.
21. A method for an operator to control motion within a virtual environment, wherein the operator is positioned on a surface (20), the surface defining first and second concentric control zones, the first control zone (24) concentrically inward of the second control zone (26), the method comprising the steps of: sensing operator position relative to the concentric control zones; for a positional change of the operator position within the first control zone, directly translating the change in operator position to a change in position within the virtual environment; and for a positional change of the operator position within the second control zone, translating the change in operator position to a change in velocity within the virtual environment.
22. The method of claim 21, further comprising the step of providing tensile feedback to the operator for changes in operator position.
23. The method of claim 21, in which the operator is confined to the surface by a frame (116) within which the operator is positioned, the operator tethered to the frame, the frame having only one degree of freedom relative to the surface, the one degree of freedom allowing a yaw motion.
24. The method of claim 21, wherein the surface is flat at the first control zone and varies in contour in the second control zone, the varying contour providing feedback to the operator of a real world position of the operator.
25. The method of claim 24, wherein the varying contour in the second control zone gets steeper along a radial direction from a border with the first control zone outward, and wherein the steepness corresponds to velocity within the virtual environment.
26. The method of claim 21, in which an elevated ring (134) is positioned over a border between the first and second control zones, wherein movement of the operator into contact with the ring deflects a portion of the ring into the second control zone, and wherein deflection of the ring into the second control zone translates to a change in velocity within the virtual environment.
27. A method for an operator to control motion within a virtual environment, wherein the operator is positioned on a surface (20) of a platform, the surface defining first and second concentric control zones, the first control zone (24) concentrically inward of the second control zone (26), and wherein a plurality of weight sensors (22/50-66) are symmetrically displaced about the platform, the method comprising the steps of: sampling output from the plurality of weight sensors; processing the sampled output from each of the plurality of weight sensors to derive a location of a center of gravity of the operator, said derived location being an operator position; for a positional change of the operator position within the first control zone directly translating the change in operator position to a change in position within the virtual environment; and for a positional change of the operator position within the second control zone, translating the change in operator position to a change in velocity within the virtual environment.
28. A method for an operator to control motion within a virtual environment, wherein the operator is positioned on a surface (20), the surface defining multiple control zones (24-26/50-66/70-84/90-104), the method comprising the steps of: sensing operator position relative to the control zones; for a positional change of the operator position within a first control zone of the multiple control zones, generating a first response in the virtual environment based upon a first transformation function; and for a positional change of the operator position within a second control zone of the multiple control zones, generating a second response in the virtual environment based upon a second transformation function which differs from the first transformation function.
29. A method for an operator to control motion within a virtual environment, wherein the operator is positioned on a surface (20), the surface defining multiple sensing zones (24-26/50-66/70-84/90-104) and multiple control zones (24-26/50-66/70-84/90-104), the method comprising the steps of: detecting a force applied by the operator to the surface; detecting each sensing zone to which the operator applies the force; detecting a location within each detected sensing zone that the operator applies the force; determining an operator position relative to the multiple control zones based upon each detected force and detected location; for a positional change of the operator position within a first control zone of the multiple control zones generating a first response in the virtual environment based upon a first transformation function; and for a positional change of the operator position within a second control zone of the multiple control zones generating a second response in the virtual environment based upon a second transformation function which differs from the first transformation function.
30. An input apparatus (16) which an operator interacts with to control movement of a device, the apparatus comprising: a surface (20) upon which an operator is positioned, the surface defining multiple control zones (24-26/50-66/70-84/90-104); a sensor (22) generating an output signal; and a processor (18) receiving the output signal for processing sensor output to determine a current operator position; and wherein movement of the operator within a first control zone of the multiple control zones causes a first response of the device based upon a first transformation function, and wherein movement of the operator within a second control zone of the multiple control zones causes a second response of the device based upon a second transformation function which differs from the first transformation function.
31. An input apparatus (16) which an operator interacts with to control movement of a device, the apparatus comprising: a surface (20) upon which an operator is positioned, the surface defining a plurality of decoupled sensing zones (24-26/50-66/70-84/90-104) and a plurality of control zones (24-26/50-66/70-84/90-104); for each sensing zone a force sensor (22) generating an output signal, wherein for multiple sensing zones of the plurality of sensing zones at least two force sensors are included; a processor (18) receiving the respective output signals from each of the plurality of sensors, the processor determining a current operator position relative to the plurality of control zones from the sensor output signals; and wherein movement of the operator within a first control zone of the plurality of control zones causes a first response of the device based upon a first transformation function, and wherein movement of the operator within a second control zone of the plurality of control zones causes a second response of the device based upon a second transformation function which differs from the first transformation function.
32. An input apparatus (16) which an operator interacts with to control movement of a device, the apparatus comprising: a surface (20) upon which an operator is positioned, the surface defining multiple concentric control zones (24-26); and a sensor (22) in the possession of the operator, wherein position of the sensor relative to the surface defines operator position; and wherein positional change of the operator position within a first control zone of the multiple concentric control zones translates directly to positional change of the device, wherein operator position within a second control zone of the multiple concentric control zones translates to velocity of the device, and wherein change in operator position within the second control zone corresponds to change in velocity of the device, and wherein the first control zone is concentrically inward relative to the second control zone.
33. An input apparatus (16) which an operator interacts with to control movement of a device, the apparatus comprising: a surface (20) upon which an operator is positioned, the surface defining first and second concentric control zones, the first control zone (24) concentrically inward of the second control zone (26); an elevated ring (134) positioned over a border between the first and second control zones; a sensor (22) in the possession of the operator, wherein position of the sensor relative to the surface defines operator position; and wherein positional change of the operator position within the first control zone translates directly to positional change of the device, wherein movement of the operator into contact with the ring deflects a portion of the ring into the second control zone, and wherein deflection of the ring into the second control zone translates to velocity of the device.
34. An input apparatus (16) which an operator interacts with to control movement of a device, the apparatus comprising: a platform having a surface (20) upon which an operator is positioned, the surface defining multiple concentric control zones (24, 26); and a plurality of force sensors (22) symmetrically displaced about the platform, each sensor generating a signal corresponding to sensed force; a processor (18) which processes the signals from each of the plurality of force sensors to derive a location of a center of gravity of the operator, said derived location being an operator position; wherein positional change of the operator position within a first control zone of the multiple concentric control zones translates directly to positional change of the device, wherein change in operator position within a second control zone of the multiple concentric control zones translates to change in velocity of the device, and wherein the first control zone is concentrically inward relative to the second control zone.
35. A method for an operator to control motion of a device, wherein the operator is positioned on a surface (20), the surface defining first and second concentric control zones, the first control zone (24) concentrically inward of the second control zone (26), the method comprising the steps of: sensing operator position relative to the concentric control zones; for a positional change of the operator position within the first control zone directly translating the change in operator position to a change in position of the device; and for a positional change of the operator position within the second control zone, translating the change in operator position to a change in velocity of the device.
37. A method for an operator to control motion of a device, wherein the operator is positioned on a surface (20), the surface defining multiple control zones (24-26/50-66/70-84/90-104), the method comprising the steps of: sensing operator position relative to the control zones; for a positional change of the operator position within a first control zone of the multiple control zones, generating a first response of the device based upon a first transformation function; and for a positional change of the operator position within a second control zone of the multiple control zones, generating a second response of the device based upon a second transformation function which differs from the first transformation function.
38. A method for an operator to control motion of a device, wherein the operator is positioned on a surface (20), the surface defining multiple sensing zones (24-26/50-66/70-84/90-104) and multiple control zones (24-26/50-66/70-84/90-104), the method comprising the steps of: detecting a force applied by the operator to the surface; detecting each sensing zone to which the operator applies the force; detecting a location within each detected sensing zone that the operator applies the force; determining an operator position relative to the multiple control zones based upon each detected force and detected location; for a positional change of the operator position within a first control zone of the multiple control zones generating a first response of the device based upon a first transformation function; and for a positional change of the operator position within a second control zone of the multiple control zones generating a second response of the device based upon a second transformation function which differs from the first transformation function.
38. A method for an operator to control motion of a device, wherein the operator is positioned on a surface (20), the surface defining multiple sensing zones (24- 26/50-66/70-84/90- 104) and multiple control zones (24-26/50-66/70-84/90- 104), the method comprising the steps of: detecting a force applied by the operator to the surface; detecting each sensing zone to which the operator applies the force; detecting a location within each detected sensing zone that the operator applies the force; determining an operator position relative to the multiple control zones based upon each detected force and detected location; for a positional change of the operator position within a first control zone of the multiple control zones generating a first response of the device based upon a first transformation function; and for a positional change of the operator position within a second control zone of the multiple control zones generating a second response of the device based upon a second transformation function which differs from the first transformation function.
PCT/US1997/007419 1996-05-06 1997-05-02 Virtual motion controller WO1997042620A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU33676/97A AU3367697A (en) 1996-05-06 1997-05-02 Virtual motion controller

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US1695496P 1996-05-06 1996-05-06
US60/016,954 1996-05-06

Publications (1)

Publication Number Publication Date
WO1997042620A1 true WO1997042620A1 (en) 1997-11-13

Family

ID=21779918

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1997/007419 WO1997042620A1 (en) 1996-05-06 1997-05-02 Virtual motion controller

Country Status (2)

Country Link
AU (1) AU3367697A (en)
WO (1) WO1997042620A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5283555A (en) * 1990-04-04 1994-02-01 Pandigital Corp. Dimensional continuous motion controller
US5404152A (en) * 1992-02-25 1995-04-04 Mitsubishi Denki Kabushiki Kaisha Multi-dimension track-ring
US5367614A (en) * 1992-04-01 1994-11-22 Grumman Aerospace Corporation Three-dimensional computer image variable perspective display system
US5314391A (en) * 1992-06-11 1994-05-24 Computer Sports Medicine, Inc. Adaptive treadmill

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000073203A1 (en) * 1999-05-28 2000-12-07 Commonwealth Scientific And Industrial Research Organisation Patterned carbon nanotube films
US6916273B2 (en) 2001-07-23 2005-07-12 Southwest Research Institute Virtual reality system locomotion interface utilizing a pressure-sensing mat
US7381153B2 (en) 2001-07-23 2008-06-03 Southwest Research Institute Virtual reality system locomotion interface utilizing a pressure-sensing mat
US7387592B2 (en) 2001-07-23 2008-06-17 Southwest Research Institute Virtual reality system locomotion interface utilizing a pressure-sensing mat
US7520836B2 (en) 2001-07-23 2009-04-21 Southwest Research Institute Virtual reality system locomotion interface utilizing a pressure-sensing mat attached to movable base structure
US7588516B2 (en) 2001-07-23 2009-09-15 Southwest Research Institute Virtual reality system locomotion interface utilizing a pressure-sensing mat
WO2004072836A1 (en) * 2003-02-06 2004-08-26 Southwest Research Institute Virtual reality system locomotion interface utilizing a pressure-sensing mat
CN100373302C (en) * 2003-02-06 2008-03-05 西南研究会 Virtual reality system locomotion interface utilizing a pressure-sensing mat
US8675018B2 (en) 2007-09-05 2014-03-18 Microsoft Corporation Electromechanical surface of rotational elements for motion compensation of a moving object
WO2022042861A1 (en) * 2020-08-31 2022-03-03 Telefonaktiebolaget Lm Ericsson (Publ) Controlling movement of a virtual character in a virtual reality environment

Also Published As

Publication number Publication date
AU3367697A (en) 1997-11-26

Similar Documents

Publication Publication Date Title
US11221730B2 (en) Input device for VR/AR applications
USRE40891E1 (en) Methods and apparatus for providing touch-sensitive input in multiple degrees of freedom
US6597347B1 (en) Methods and apparatus for providing touch-sensitive input in multiple degrees of freedom
US20080010616A1 (en) Spherical coordinates cursor, mouse, and method
US5095303A (en) Six degree of freedom graphic object controller
US8325138B2 (en) Wireless hand-held electronic device for manipulating an object on a display
EP3316735B1 (en) Motion control seat input device
JP2019061707A (en) Control method for human-computer interaction and application thereof
WO2016097841A2 (en) Methods and apparatus for high intuitive human-computer interface and human centric wearable "hyper" user interface that could be cross-platform / cross-device and possibly with local feel-able/tangible feedback
US20150248157A9 (en) Hand-held wireless electronic device with accelerometer for interacting with a display
US11209916B1 (en) Dominant hand usage for an augmented/virtual reality device
Punpongsanon et al. Extended LazyNav: Virtual 3D ground navigation for large displays and head-mounted displays
US20190339791A1 (en) Foot controller computer input device
RU2662399C1 (en) System and method for capturing movements and positions of human body and parts of human body
JP3847634B2 (en) Virtual space simulation device
Englmeier et al. Rock or roll–locomotion techniques with a handheld spherical device in virtual reality
WO1997042620A1 (en) Virtual motion controller
Novacek et al. Overview of controllers of user interface for virtual reality
Lee et al. Design and empirical evaluation of a novel near-field interaction metaphor on distant object manipulation in vr
WO2008003331A1 (en) 3d mouse and method
Hilsendeger et al. Navigation in virtual reality with the wii balance board
CN112527109B (en) VR whole body action control method and system based on sitting posture and computer readable medium
Barrera et al. Hands-free navigation methods for moving through a virtual landscape walking interface virtual reality input devices
KR20180033396A (en) A ring mouse for a computer
Gao et al. Effects of transfer functions and body parts on body-centric locomotion in virtual reality

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE HU IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG UZ VN AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 97540042

Format of ref document f/p: F

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA