VIRTUAL MOTION CONTROLLER
BACKGROUND OF THE INVENTION
This invention relates to an input device for controlling movement, and more particularly to a device which measures movement of a person and uses that movement information to control motion of a virtual environment or of a robot or vehicle.
A virtual environment is a sophisticated computing environment depicting a virtual reality, a simulated environment, a game environment or other complex graphical environment. The virtual environment is depicted by one or more displays. An input device serves to control movement within the virtual environment. Conventional user input devices for a general purpose computing environment include a keyboard, a pointing device and a clicking device. Examples of conventional pointing devices include a mouse, a trackball, a joy stick, and a touch pad. The pointing device serves to control the position of a cursor on a computer screen. For a virtual environment a more sophisticated input device is desired.
Sophisticated military flight simulators use an aircraft vehicle as the input device. A pilot sits in the cockpit and controls the aircraft. The object is to train the operator and provide a near-real experience. The use of a vehicle as an input device is common in many simulator and game environments. Another known input device is a hand-controlled device. Kim et al. in "The Heaven and Earth Virtual Reality: Designing Metaphor for Novice Users," describe a virtual reality setup, consisting of a tracked head-mounted display and a 3D input device. In a technique called the flying hand a user makes a hand gesture (e.g., presses a button on a "bird" device). The orientation and location of the bird device relative to the head mounted display determines the direction and velocity of motion. In a technique called the floating guide a small sphere floats with the user. To change position the user moves their hand to the sphere, which always remains in the upper right corner of the field of view. In a technique called the lean-based technique, a user's head displacement (i) in one version is modified by an exponential function to distort movement or (ii) in another version determines the speed of movement.
Pausch et al. disclose a miniature for controlling movement in "Navigation and Locomotion in Virtual Worlds via Flight into Hand-Held Miniatures." A hand-held miniature graphical representation of a virtual environment is used to control movement in the virtual environment. When a user moves an iconic representation of himself in the miniature, the user moves correspondingly in the virtual environment. First the user moves the icon, then the miniature graphics change to provide the effect of the user shrinking into the miniature or the miniature expanding to the enlarged virtual environment. The motion then occurs. Then, the graphics change to provide the effect of the user growing or the virtual environment shrinking. Iwata et al. describe virtual perambulator prototypes in (i) "Virtual
Perambulator: A Novel Interface Device for Locomotion in Virtual Environment;" (ii) "Haptic Walkthrough Simulator: Its Design and Application to Studies on Cognitive Map;" and (iii) "Virtual Perambulator." In the early prototypes the user wears roller skates and is held in position by a harness or belt. The user then walks or runs using the skates to achieve motion in a virtual environment. The skates serve as input devices. The harness or belt serves as a safety device to keep the user in place. In a later embodiment, the skates are replaced with sandal devices having a low friction undersurface. A rubber sole brake pad is positioned toward the toe area of the sandals. In addition, a hoop frame replaces the harness/belt to confine the user within a limited
real space. Motion is tracked by Polhemus sensors at the feet (e.g., the skates or sandals) and head.
The inventors have sought an input controller which involves hands-free use of the body and allows motion along multiple axes in multiple body postures. An intuitive, natural-feeling interface is desired which provides feedback on how the user's input affects the system. To maintain safe interaction, a cue of the user's real world environment also is desired.
SUMMARY OF THE INVENTION
According to the invention, a "sufficient motion" walking simulator serves as a motion control device for a virtual environment, robot or vehicle. For a virtual environment the user controls the content and perspective of a display depicting the virtual environment. For a real world robot or vehicle, the user remotely controls motion via a camera or by direct viewing. Sufficient motion is a term coined herein to refer to the concept of allowing the user enough movement in the real world to create a sense of reality and presence in the virtual environment. This is distinguished from a full motion input device, such as a 360 degree treadmill.
According to one aspect of the invention, a user is positioned on a surface and is able to move among multiple control zones. The control zones define multiple response functions. In one embodiment positional changes within a first control zone translate directly (e.g., 1:1, 1:x) to positional changes in the virtual environment or of the robot or vehicle. Positional changes in a second control zone translate to velocity changes in the virtual environment, robot or vehicle. The specific response function varies for differing implementations. According to another aspect of the invention, in one embodiment the operator is positioned within a ring elevated above the surface. The operator is tethered to the ring. The ring has only one degree of freedom relative to the surface, allowing movement of the ring only in a rotational direction (i.e., yaw motion). The operator is able to move within the ring to move a sensor among the multiple control zones. According to another aspect of the invention, the surface is flat in a first control zone and varies in contour in a second control zone. The varying contour provides feedback to the operator of the operator's real world position, and of their input to the system (e.g., how much movement has been requested of the device).
According to another aspect of the invention, the varying contour in the second control zone gets steeper along a radial direction from a border with the first control zone outward. The steepness corresponds to velocity within the virtual environment or of the robot or vehicle.
According to another aspect of the invention, a side-step change in position within the second control zone translates into side-to-side motion within the virtual environment or of the robot or vehicle.
According to another aspect of the invention, in an alternative embodiment the apparatus includes a surface upon which an operator is positioned, an elevated ring and multiple sensors. The surface defines multiple control zones. A first control zone is located concentrically inward of a second control zone. The elevated ring is positioned over a border between the first and second control zones. A first sensor is in the possession of the operator (e.g., worn). The position of the first sensor relative to the surface defines the operator's position relative to the control zones. Positional change of the operator position within the first control zone translates directly to positional change within the virtual environment or of the robot or vehicle. A second sensor is coupled to the ring. Movement of the operator into contact with the ring deflects a portion of the ring into the second control zone. Deflection of the ring into the second control zone is sensed and translated to velocity within the virtual environment or of the robot or vehicle. According to another aspect of the invention, in an alternative embodiment the apparatus includes a platform having a surface upon which an operator is positioned. The surface defines multiple control zones. A plurality of weight sensors are symmetrically displaced about the platform. Each sensor generates a signal corresponding to sensed weight. A processor processes the signals from each of the plurality of weight sensors to derive a location of a center of gravity of the operator, as projected onto the platform. The derived location is taken as the operator position. Positional change of the operator position within a first control zone translates directly to positional change within the virtual environment or of the robot or vehicle. Change in operator position within a second control zone translates to change in velocity within the virtual environment or of the robot or vehicle.
According to another aspect of the invention, in another alternative embodiment including the platform, a plurality of multiple degree-of-freedom sensors are positioned about the platform. Each sensor detects forces occurring in an xyz coordinate system. By detecting more than just the weight (e.g., the z-axis component of the force), a processor is able to compare sensor readings over time to precomputed movement signatures. For example, a twisting motion may apply a unique force pattern to the sensors. Force pattern signatures for several motions are stored. The force pattern currently applied by a user is compared to the prestored signatures to derive the type of movement the operator is making on the platform surface. By knowing the type of movement, an application can better depict the user, the user's motion and/or the user's field of view change in the virtual world. Thus, more information than the location of the operator's
center of gravity is determined. In some implementations of such embodiment, the location of the operator's center of gravity is used to define the operator position with respect to the multiple control zones. Alternatively or in addition, the operator is able to provide command inputs by performing prescribed motions. According to another aspect of the invention, in yet another embodiment a user is positioned on an input device having a plurality of decoupled sensing zones. Each sensing zone includes at least one force sensor. Multiple ones of the plurality of sensing zones include multiple force sensors. Each sensor detects forces occurring in an xyz coordinate system. The zones are decoupled, meaning that when the user is not in contact with a given zone, such zone does not indicate any force. Thus, only the zones that the user is in contact with record a sensed force. The sensing zones form a surface upon which the operator is positioned. The surface also defines multiple control zones. The presence and/or motion of a user within a given control zone is mapped to a given function or command to control the virtual environment, robot or vehicle. One advantage of the controller of this invention is that using the body and legs as the input sources provides a natural and intuitive interface, leaving the hands free to perform other tasks. It is expected that natural involvement of the body enhances the sense of presence in the virtual environment, provides better spatial awareness and better navigation performance. Another advantage of the invention is that the user is able to assume differing postures (e.g., walking, bending, kneeling). Another advantage of the invention is the feedback of information to the user about the user's input and how that input affects the system. Another advantage of the invention is that the volume of movement of the user in the real world is restricted.
These and other aspects and advantages of the invention will be better understood by reference to the following detailed description taken in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a block diagram of a system for generating a virtual environment which includes an input device according to an embodiment of this invention;
Fig. 2 is a diagram of a surface of the input device according to an embodiment of this invention;
Fig. 3 is a diagram of a surface of the input device according to another embodiment of this invention;
Fig. 4 is a diagram of a surface of the input device according to another embodiment of this invention;
Fig. 5 is a diagram of an operator standing on the surface and wearing a sensor according to an embodiment of the input device of this invention;
Fig. 6 is a diagram of a surface having a plurality of force sensors according to an embodiment of the input device of this invention;
Fig. 7 is a diagram of a surface having a plurality of force sensors according to another embodiment of the input device of this invention;
Fig. 8 is a diagram of a surface having a plurality of force sensors according to another embodiment of the input device of this invention;
Fig. 9 is a top view of a portion of the input device according to an embodiment of this invention;
Fig. 10 is a side view of the portion of the input device of Fig. 9;
Fig. 11 is a top view of a portion of the input device according to another embodiment of this invention;
Fig. 12 is a side view of the portion of the input device of Fig. 11;
Fig. 13 is a top view of a portion of the input device according to another embodiment of this invention; and
Fig. 14 is a side view of the portion of the input device of Fig. 13.
DESCRIPTION OF SPECIFIC EMBODIMENTS
Overview
Fig. 1 is a block diagram of a system 10 for generating a virtual environment. The system 10 includes a host processing system 12, a display device 14 and an input device 16. The host processing system includes one or more processors 18 for executing a computer program which creates the virtual environment. The virtual environment can be any virtual environment, including simulation environments, game environments, virtual reality environments or other graphical environments. An operator provides inputs to control movements and enter commands in the virtual environment through the input device 16. The processors 18 process the inputs to control what is displayed at display device 14. The display device 14 provides visual feedback to the operator. Although other mechanisms for feeding back virtual environment information can be implemented, this invention addresses the input device 16 which the operator controls to provide input to the host processing system 12. Although a virtual environment host is described, the input device also is used for remote control of a robot or vehicle via direct view or via a displayed view.
The input device 16 includes a surface 20 upon which an operator is positioned (e.g., stands, sits, crawls, crouches), along with one or more sensors 22 and a processor. In one embodiment one or more processors 18 of the host processing system 12 serve as the input device processor. The surface 20 can be of any shape, and is expected to differ according to the embodiment. For example, the input device surface 20 is designed to
resemble a surfboard for a surfing virtual environment. Fig. 2 shows a circular surface 20. Fig. 3 shows an oblong surface 20. Fig. 4 shows a rectangular surface 20. Other geometric or odd shapes also may be used. The surface 20 includes multiple control zones 24, 26. The processor 18 receives input signals from the sensors 22 to determine an operator position on the surface 20. Movement of the operator position within a first control zone 24 is processed with a first transformation function. Movement of the operator position within a second control zone 26 is processed with a second transformation function. The specific transformation function may vary depending on the virtual environment or remote control implementation. In a specific embodiment, movement of the operator position within the first control zone is directly transposed to movement within the virtual environment or of a robot or vehicle. For example a 1 to 1 ratio of movement is implemented. Alternatively a 1 to many ratio of movement is implemented. For example, a 1 inch movement in the first control zone is implemented as a 5 foot movement in the virtual environment or of the robot or vehicle. The relative size, shape and location of each control zone 24, 26 also may vary. Figs. 2 and 3 show two concentric control zones 24, 26. Fig. 4 shows five nonconcentric control zones 24, 26a, 26b, 28a and 28b. Each control zone may have the same or a differing transformation function. For example, in a specific embodiment of the Fig. 4 depiction control zone 24 has a first transformation function, control zones 26a, 26b have a second transformation function and control zones 28a, 28b have a third transformation function. The sensor(s) 22 determine where the operator position is with respect to the control zones. In one embodiment a three space sensor is worn by the operator. An exemplary sensor 22 is a Polhemus 3Space FasTrak sensor from Polhemus, Inc. of Colchester, Vermont. 
The Polhemus sensor includes a receiver and a transmitter. The sensor's receiver is worn by the operator. The sensor's transmitter is located within 1 meter of the sensor's receiver. The sensor generates six degree-of-freedom position measurements (three positions, three orientations). The Polhemus sensor is accurate to within ±0.8 mm for position and ±0.15° for orientation. Fig. 5 shows an embodiment in which an operator 40 wears a head mounted display 42. Attached to the head mounted display 42 is the sensor 22 receiver 44. The processor 18 (see Fig. 1) receives a signal from the receiver 44 to determine the operator position. The operator position corresponds to the location of the receiver 44 as projected onto the surface 20. Fig. 5 shows a current operator position at position 46. Such position 46 is within the first control zone 24. As the operator 40 moves, the sensor receiver 44, and thus the operator position 46, moves.
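As an illustrative sketch only (the specification prescribes no implementation and no dimensions), determining which control zone the operator position falls in reduces to a radius test on the projected receiver location for the concentric layouts of Figs. 2 and 3. The zone radii below are hypothetical:

```python
import math

# Hypothetical zone radii in meters; the specification gives no dimensions.
INNER_RADIUS = 0.5   # outer boundary of the first control zone 24
OUTER_RADIUS = 1.0   # outer boundary of the second control zone 26

def classify_position(x, y):
    """Classify the worn receiver's position, projected onto the
    surface, into one of two concentric control zones."""
    r = math.hypot(x, y)  # radial distance from the surface center
    if r <= INNER_RADIUS:
        return "inner"
    elif r <= OUTER_RADIUS:
        return "outer"
    return "off-surface"
```

The processor would run such a test on every position update and dispatch the movement to the transformation function associated with the reported zone.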
In alternative embodiments the sensor(s) 22 are force sensors (e.g., weight sensors detecting a force along a z axis; 3 degree-of-freedom sensors detecting force along three axes). Using weight sensors, the operator's center of gravity is projected onto the surface 20. The platform defines a two-dimensional working plane (the xy plane), with the positive z direction normal to the plane. An array of n sensors is located on the working plane, where n is greater than or equal to 3. The sensors are coplanar, but located so as not to be colinear. In one embodiment 4 sensors define a sensor rectangle which is coplanar with the working plane. The sensor rectangle, for example, is a square with its center located at the origin of the xy plane.
The total force on the platform is the sum of the forces detected by each sensor. The force on the left half of the working plane is the sum of forces detected by the first and fourth sensors. The force on the right half of the working plane is the sum of forces detected by the second and third sensors. The force on a forward half of the working plane is the sum of forces on the first and second sensors. The force on a rearward half of the working plane is the sum of forces on the third and fourth sensors. The center of gravity of the user has an x-axis component and a y-axis component. The x-axis component is the difference between the right half force and the left half force, divided by the total force. The y-axis component is the difference between the forward half force and the rearward half force, divided by the total force.
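The half-plane sums above reduce to a short routine (an illustration only; the sensor numbering is the one implied by the half-plane sums in the text, i.e., sensor 1 front-left, 2 front-right, 3 rear-right, 4 rear-left):

```python
def center_of_gravity(f1, f2, f3, f4):
    """Normalized (x, y) location of the operator's center of gravity
    projected onto the platform, from four weight sensor readings:
    f1 front-left, f2 front-right, f3 rear-right, f4 rear-left."""
    total = f1 + f2 + f3 + f4
    left = f1 + f4       # left-half force
    right = f2 + f3      # right-half force
    forward = f1 + f2    # forward-half force
    rearward = f3 + f4   # rearward-half force
    x = (right - left) / total
    y = (forward - rearward) / total
    return x, y
```

With equal readings at all four sensors the result is the origin; shifting all weight onto the front-right sensor drives both coordinates to their extremes.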
Using 3 degree-of-freedom force sensors, the operator position is similarly determined using the z-component of each sensor reading. The additional force components along the x and y axes allow the processors 18 to determine posture and more complex movements of the operator 40. Figs. 6-8 and 12 show embodiments in which force sensors are used for the input device sensors 22. In such embodiments an additional Polhemus sensor also may be used. Such additional sensor provides additional information on the operator's movements and posture in 3 dimensional space.
For embodiments in which force sensors are used, the surface 20 rests on the force sensors 22. In some embodiments the force sensors are grouped to define decoupled sensing zones. Fig. 6 for example shows a circular surface 20 having 9 sensing zones 50, 52, 54, 56, 58, 60, 62, 64, 66. The surface 20 is formed by 9 separate pieces 20a-20i. Each sensing zone 50-66 rests on at least one force sensor 22. A plurality of the sensing zones 52-66 rest on at least two force sensors. By
separating the surface 20 pieces 20a-20i, a force applied only to one sensing zone does not impose a force on any other sensing zone. Accordingly, the sensing zones are decoupled. Fig. 7 shows another embodiment in which there are 15 sensing zones 70-84. Each sensing zone rests on at least one force sensor 22. Fig. 8 shows another embodiment in which there are 15 sensing zones 90-104. Each sensing zone 90-104 rests on at least two force sensors 22. The use of at least one force sensor per sensing zone allows the processor 18 to determine that the operator is applying a force to given sensing zone(s). The use of at least two force sensors per sensing zone allows the processor to determine where within a given zone the operator is applying a force. In some embodiments the sensing zones coincide with the control zones. In other embodiments one or more mutually exclusive subsets of sensing zones define the respective control zones. Fig. 8 shows an embodiment where the border 38 between the control zones 24, 26 need not coincide with the borders of the sensing zones 90-104.
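One way to see why two force sensors per zone suffice to localize the applied force is lever-arm weighting: for a rigid zone piece supported at both ends, the fraction of the load carried by each sensor varies linearly with the contact position. The following is a hypothetical sketch; the `span` parameter and the single-contact assumption are not from the specification:

```python
def locate_within_zone(f_a, f_b, span=1.0):
    """Estimate where along a rigid sensing-zone piece a single
    contact force is applied, from two sensors at opposite ends.
    Returns a distance in [0, span] measured from sensor A, or
    None when the zone is unloaded (decoupled zones read zero)."""
    total = f_a + f_b
    if total == 0:
        return None  # no contact with this zone
    # Load carried by sensor B grows as the contact nears sensor B.
    return span * f_b / total
```

Equal readings place the contact at the midpoint; all load on one sensor places it at that sensor's end.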
Tether Embodiment
Figs. 9 and 10 show an embodiment of a portion 110 of the input device. The portion 110 includes the surface 20. The sensors 22 and processor 18 are not shown. The surface 20 defines two concentric control zones 24, 26. Figs. 2-4 show alternative embodiments with different control zone configurations. The contour of the surface 20 is flat or varies according to the specific embodiment. In one embodiment surface 20 is generally flat along the surface portion 112 corresponding to the inner control zone 24. Surface 20 is inclined along the surface portion 114 corresponding to the outer control zone 26. The portion 110 also includes a frame 116 to which the operator is tethered. Figs. 9 and 10 show a belt 118 which is worn by the operator. Elastic tethers 120 connect the belt 118 to the frame 116. The frame is generally rigid and rotates in a yaw motion with respect to the surface 20. The frame has only one degree of freedom. As the operator moves, the tethers provide tensile feedback to the operator of the operator's real world position with respect to the surface 20. The portion 110 of the input device 16 is used with a Polhemus sensor 22 worn by the operator 40 as shown in Fig. 5.
Alternatively, the surface 20 rests on a plurality of force sensors as shown in Figs. 13 and 14. The sensor 22 outputs are used to determine the operator's center of gravity. This is taken as the operator position. The location of the operator position with respect to the control zones determines how operator movements are to be processed. Operator
movements within the inner control zone 24 are processed according to one transformation function (e.g., direct positional translation). For example, a movement in a given direction by a given amount within the inner control zone is transformed to a movement in such direction in the virtual environment or of the robot or vehicle by an amount equal to a gain factor times the amount moved in the inner control zone. Operator movements within the outer control zone 26 are processed according to another transformation function. For example, a movement in a given direction in the outer control zone 26 by a given amount is transformed to a movement in the virtual environment or of the robot or vehicle in such direction at a velocity equal to a gain factor times the radial change in position within the outer control zone. More complex transformations also are implemented in some embodiments. In some embodiments the incline portions 114 of the surface 20 correlate to the gain factor for the outer control zone transformation function (e.g., the steeper the incline, the larger the gain factor and the faster the velocity).
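The two example transformation functions can be sketched as follows (an illustration only; the gain factors, the inner zone radius, and the choice to direct outer-zone velocity along the operator's radial offset are hypothetical):

```python
import math

POSITION_GAIN = 5.0   # assumed gain: 1 unit of real movement -> 5 virtual units
VELOCITY_GAIN = 2.0   # assumed gain: radial penetration -> virtual speed

def transform(prev_pos, pos, inner_radius=0.5):
    """Map an operator position update to a virtual-world command.
    Inner zone: displacement scaled by a position gain.
    Outer zone: velocity proportional to radial penetration past
    the zone border, directed along the operator's offset."""
    r = math.hypot(*pos)
    if r <= inner_radius:
        dx = POSITION_GAIN * (pos[0] - prev_pos[0])
        dy = POSITION_GAIN * (pos[1] - prev_pos[1])
        return ("translate", dx, dy)
    speed = VELOCITY_GAIN * (r - inner_radius)
    return ("velocity", speed * pos[0] / r, speed * pos[1] / r)
```

A steeper incline in the outer zone would correspond to a larger `VELOCITY_GAIN` in this sketch.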
Ring Embodiment
Figs. 11 and 12 show an embodiment of a portion 130 of the input device. The portion 130 includes the surface 20. The sensors 22 and processor 18 are not shown. The surface 20 defines two concentric control zones 24, 26. Figs. 2-4 show alternative embodiments with different control zone configurations. The contour of the surface 20 is flat or varies according to the specific embodiment. In one embodiment surface 20 is generally flat and inclines toward its periphery. The incline provides a cue to the operator that the operator is nearing the edge of the surface. Thus, the incline serves as a safety mechanism to cue the operator and provide an indication of real world position. The input device portion 130 also includes a frame 132 which suspends a ring
134. The frame 132 is rigid and held in place with zero degrees of freedom. The ring 134 is generally rigid and movable within the frame 132. Springs 136, elastic tethers or another structure bias the ring 134 to a relaxed position. An operator stands within the ring 134 and wears a Polhemus sensor. In addition, another position sensor 138 is located at the ring 134 for determining the position of the ring 134. While the ring 134 is in its relaxed position, the ring 134 is elevated over the border between an inner control zone 24 and a concentrically outer control zone 26.
The Polhemus sensor receiver 44 worn by the operator (see Fig. 5) provides inputs to the processor 18. The processor derives the operator position within the ring 134 based
on such inputs. Movements of the operator within the ring 134 that are within the inner control zone 24 are processed using a first transformation function. For example in a specific embodiment, a movement in a given direction by a given amount within the inner control zone is transformed to a movement in such direction in the virtual environment or of the robot or vehicle by an amount equal to a gain factor times the amount moved in the inner control zone. As the operator moves into contact with the ring 134, the operator pushes the ring 134 away from its relaxed position. The position sensor 138 sends sensor signals to the processor enabling the processor to determine that the ring has moved. The amount and direction that the ring is pushed into the outer control zone 26 is determined and processed using a second transformation function. For example in a specific embodiment, a movement in a given direction in the outer control zone 26 by a given amount is transformed to a movement in the virtual environment or of the robot or vehicle in such direction at a velocity equal to a gain factor times the radial change in position within the outer control zone. More complex transformations also are implemented in some embodiments.
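The mapping from ring deflection to velocity can be sketched as follows (illustrative only; the gain and the small deadband used to treat the spring-biased ring as relaxed are assumptions not stated in the specification):

```python
import math

VELOCITY_GAIN = 2.0  # assumed scale from ring deflection to virtual speed

def ring_velocity(ring_center, relaxed_center=(0.0, 0.0), deadband=0.01):
    """Translate deflection of the ring from its spring-biased
    relaxed position into a velocity command.  Below the deadband
    the ring is treated as relaxed and no motion is commanded."""
    dx = ring_center[0] - relaxed_center[0]
    dy = ring_center[1] - relaxed_center[1]
    d = math.hypot(dx, dy)
    if d < deadband:
        return (0.0, 0.0)
    speed = VELOCITY_GAIN * d
    # Velocity points in the direction the operator pushed the ring.
    return (speed * dx / d, speed * dy / d)
```

The ring's position sensor 138 would supply `ring_center` on each update; the springs return the ring, and hence the commanded velocity, to zero when released.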
Force Platform Embodiments
Figs. 13 and 14 show a platform 150 defining the surface 20 according to an embodiment of this invention. The platform 150 rests on a plurality of force sensors 152-158. In one embodiment the force sensors are one degree-of-freedom weight sensors for detecting the weight applied at each sensor. Multiple weight sensors are included so that the processor 18 is able to calculate the location of the operator's center of gravity when the operator is positioned on the platform 150. The surface 20 defines multiple control zones 24, 26. Figs. 2-4 show alternative embodiments with different control zone configurations. The platform 150 is used alone or with the configurations described above with respect to the tethered embodiment and the ring embodiment. The location of the operator's center of gravity as determined by the processor 18 from the weight sensor readings is used to identify the operator position. The operator position and changes in position are processed according to which control zone the movement occurs in. Movement within a first control zone is transposed in the virtual environment or to the robot or vehicle using a first transformation function. Movement within a second control zone is transposed in the virtual environment or to the robot or vehicle using a second transformation function.
The contour of the surface 20 is flat or varies according to the specific embodiment. In one embodiment surface 20 is generally flat and inclines toward its
periphery. The incline provides a cue to the operator that the operator is nearing the edge of the surface. Thus, the incline serves as a safety mechanism to cue the operator and provide an indication of real world position. In another embodiment surface 20 is generally flat along the surface portion corresponding to the inner control zone 24 and is inclined along the surface portion corresponding to the outer control zone 26.
In another embodiment the force sensors 152-158 are three degree-of-freedom force sensors. The weight applied by an operator corresponds to a force along a z-axis. The sensors 152-158, however, also detect other directional components of any forces applied by the operator. Different movements tend to result in different sense patterns. For example, sensor readings during a twisting motion exhibit recognizable characteristics used to identify the motion as a twisting motion. Other motions and postures also are recognizable, such as crouching, crawling, walking and running. The processor determines a three-dimensional operator movement from a time sequence of the output signals. Specifically, the processor accumulates a movement pattern of the operator and compares such movement pattern with prestored movement pattern characteristics to identify the movement pattern. The processor 18 for a given virtual environment implementation is able to use the sensor readings to provide a more detailed control of motion within the virtual environment or of the robot or vehicle.
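One plausible realization of comparing an accumulated movement pattern with prestored signatures is a nearest-template classifier over short time sequences of per-sensor readings. The signature data below are invented purely for illustration; the specification does not define a signature format or matching rule:

```python
import math

# Hypothetical prestored force-pattern signatures: each is a short
# time sequence of per-sensor readings characteristic of a motion.
SIGNATURES = {
    "twist":  [(1.0, -1.0, 1.0, -1.0), (-1.0, 1.0, -1.0, 1.0)],
    "crouch": [(1.0, 1.0, 1.0, 1.0), (2.0, 2.0, 2.0, 2.0)],
}

def pattern_distance(seq_a, seq_b):
    """Sum of Euclidean distances between corresponding frames."""
    return sum(math.dist(a, b) for a, b in zip(seq_a, seq_b))

def classify_movement(observed):
    """Name the prestored signature closest to the observed pattern."""
    return min(SIGNATURES,
               key=lambda name: pattern_distance(observed, SIGNATURES[name]))
```

In practice the processor would accumulate a sliding window of readings and report a motion only when the best match falls under some distance threshold.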
Sensing Zone Embodiments
Figs. 6-8 show alternative embodiments for a surface 20 having multiple decoupled sensing zones. Fig. 6 shows a circular surface 20 having 9 sensing zones 50, 52, 54, 56, 58, 60, 62, 64, 66. Fig. 7 shows another embodiment in which there are 15 sensing zones 70-84. Fig. 8 shows another embodiment in which there are 15 sensing zones 90-104. In each embodiment in which the sensing zones are decoupled, the surface 20 is formed by separate pieces - one piece for each independent sensing zone. By separating the surface 20 pieces, a force applied only to one sensing zone does not impose a force on any other sensing zone. Accordingly, the sensing zones are decoupled. Each sensing zone rests on at least one force sensor 22. In the embodiments of Figs. 6 and 8, a plurality of the sensing zones rest on at least two force sensors. The use of at least one force sensor per sensing zone allows the processor 18 to determine that the operator is applying a force to given sensing zone(s). The use of at least two force sensors per sensing zone allows the processor to determine where within a given zone the operator is applying a force.
In an alternative embodiment decoupled sensing zones are defined by including contact switches on the surface 20. In such embodiment the physical surface 20 need not also be decoupled among the various sensing zones. Activation of the contact switches identifies the sensing zone(s) where the user stands. Additional force sensors as described provide the force information to obtain the center of gravity and foot position.
In some embodiments the sensing zones coincide with the control zones. For example an embodiment having the Fig. 7 layout may define 15 separate control zones, each control zone coinciding with a unique sensing zone. In other embodiments one or more complete sensing zones define a control zone. For example, one embodiment having the layout shown in Fig. 6 has a first control zone coincident with sensing zone 50 and a second control zone formed by the remaining sensing zones 52-66. In yet another embodiment one or more subsets of sensing zones define respective control zones. For example, Fig. 8 shows an embodiment where a first control zone 24 is formed by sensing zone 97 and portions of sensing zones 93-96 and 98-101 and a second control zone is formed by the remaining portions of sensing zones 93-96 and 98-101, along with sensing zones 90-92 and 102-104. Thus, the borders of the control zones need not coincide with borders of the sensing zones. In an embodiment in which different portions of a sensing zone are allocated to different control zones, the sensing zone has at least two force sensors.
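The allocation of sensing zones to control zones described above can be sketched for the Fig. 6 example, where sensing zone 50 forms the first control zone and zones 52-66 form the second (the force threshold and the dictionary mapping are assumptions for illustration; decoupled zones read near zero unless contacted):

```python
# Hypothetical assignment for the Fig. 6 layout: sensing zone 50 forms
# the first control zone; zones 52-66 form the second control zone.
CONTROL_ZONE_OF = {50: 1}
for z in range(52, 68, 2):   # zones 52, 54, ..., 66
    CONTROL_ZONE_OF[z] = 2

def active_control_zones(zone_forces, threshold=1.0):
    """Given per-sensing-zone total force readings, report which
    control zones the operator is currently loading."""
    return sorted({
        CONTROL_ZONE_OF[z]
        for z, f in zone_forces.items()
        if f > threshold
    })
```

An operator standing entirely on zone 50 loads only the first control zone; straddling zones 50 and 54 loads both, which the processor can then map to the appropriate pair of transformation functions.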
Meritorious and Advantageous Effects
One advantage of the controller of this invention is that using the body and legs as the input sources provides a natural and intuitive interface, leaving the hands free to perform other tasks. It is expected that natural involvement of the body enhances the sense of presence in the virtual environment, provides better spatial awareness and better navigation performance. Another advantage of the invention is that the user is able to assume differing postures (e.g., walking, bending, kneeling). Another advantage of the invention is the feedback of information to the user about the user's input and how that input affects the system. Another advantage of the invention is that the volume of movement of the user in the real world is restricted.
Although a preferred embodiment of the invention has been illustrated and described, various alternatives, modifications and equivalents may be used. For example, although the input device embodiments are described in detail for controlling motion within a virtual environment, the embodiments alternatively can control motion of a
robot or vehicle in the same manner described for the virtual environment. Therefore, the foregoing description should not be taken as limiting the scope of the inventions which are defined by the appended claims.