WO2009091536A1 - Low latency navigation for visual mapping for a telepresence robot - Google Patents

Low latency navigation for visual mapping for a telepresence robot

Info

Publication number
WO2009091536A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
location
time
telepresence
remote location
Prior art date
Application number
PCT/US2009/000212
Other languages
French (fr)
Inventor
Roy Sandberg
Dan Sandberg
Original Assignee
Roy Sandberg
Dan Sandberg
Priority date
Filing date
Publication date
Application filed by Roy Sandberg, Dan Sandberg
Publication of WO2009091536A1 publication Critical patent/WO2009091536A1/en

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D 1/0011 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot associated with a remote control arrangement
    • G05D 1/0038 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot associated with a remote control arrangement by providing the operator with simple or augmented images from one or more cameras located onboard the vehicle, e.g. tele-operation
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D 1/02 Control of position or course in two dimensions
    • G05D 1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D 1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D 1/02 Control of position or course in two dimensions
    • G05D 1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0268 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
    • G05D 1/0272 Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means comprising means for registering the travel distance, e.g. revolutions of wheels

Abstract

A method and apparatus for controlling a telepresence robot.

Description

Low Latency Navigation for Visual Mapping for a Telepresence Robot
BACKGROUND OF THE INVENTION
(1) Field of Invention
The present invention is related to the field of telepresence robotics; more specifically, the invention is a method for controlling a telepresence robot.
(2) Related Art
Telepresence robots have been used for military and commercial purposes for some time. Typically, these devices are controlled using a joystick, or some user interface based on a GUI with user-controlled buttons.
While these user interface mechanisms enable some degree of control over the distant robot, they are often plagued by problems concerning latency of the Internet link, steep learning curves, and difficulty of use.
SUMMARY OF THE INVENTION
The present invention is related to the field of telepresence robotics; more specifically, the invention is a method for controlling a telepresence robot. This patent application incorporates by reference copending application 11/223675 (Sandberg). This patent application incorporates by reference copending application PCT/US 07/14489 (Sandberg). Matter essential to the understanding of the present application is contained therein. While the preferred embodiment of this invention relates to a bidirectional videoconferencing robot, it should be understood that the matter taught herein can also be applied to unidirectional video feed robots.
Latency Correction
When using a telepresence robot, a user is always faced with a view of the remote location, transmitted from a remote camera to a local video display. Each point on the local video display maps to a specific remote location as viewed through the remote camera. Consequently, by selecting a point on the local video display the user is implicitly selecting an actual location at the remotely viewed area. When the point on the local video display is a location on the remote location's floor, it is possible to algorithmically compute the distance from the telepresence robot to the specified remote location. [See PCT/US 07/14489 (Sandberg)]. Via this technique, it is possible to navigate a telepresence robot merely by selecting a point on the local display using a pointing device such as a mouse, trackball, touchscreen interface, or other pointing device known in the art.
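The floor-point computation referenced above can be sketched as follows for an idealized pinhole camera at a known height and downward tilt over a flat floor; the function and parameter names are illustrative assumptions, not taken from PCT/US 07/14489:
```python
import math

def pixel_to_floor(u, v, fx, fy, cx, cy, cam_height, tilt_rad):
    """Map a selected pixel (u, v) to a floor location in the robot frame.

    A minimal sketch assuming an ideal pinhole camera (focal lengths fx, fy
    and principal point cx, cy in pixels) mounted cam_height meters above a
    flat floor, pitched down by tilt_rad radians, with no roll. Returns
    (forward, left) in meters, or None if the pixel lies at or above the
    horizon and therefore never intersects the floor.
    """
    # Ray through the pixel in camera coordinates (z forward, x right, y down).
    rx = (u - cx) / fx
    ry = (v - cy) / fy

    # Tilt the ray down by the camera pitch (rotation about the camera x-axis).
    c, s = math.cos(tilt_rad), math.sin(tilt_rad)
    down = ry * c + s        # vertical (downward) component of the ray
    fwd = -ry * s + c        # horizontal (forward) component of the ray

    if down <= 0.0:          # at or above the horizon: no floor intersection
        return None

    t = cam_height / down    # stretch the ray until it reaches the floor
    return t * fwd, -t * rx  # (forward, left) offset from the robot
```
The distance from the robot to the selected remote location is then simply the Euclidean norm of the returned offset.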
The present invention consists of a technique for selecting points in this manner, such that active control over the telepresence robot is afforded to the user, while at the same time minimizing the effects of latency resulting from communication delay between the client and the robot.
A path can be calculated between the robot and the remote location selected by the user. Using techniques known in the art, the robot can be commanded to travel along this path. However, an Internet link does not allow instantaneous communication; a small and variable latency is always present between the time a command is sent from a local user device and the time it is received by a remote telepresence robot. This latency results in an error when selecting a remote location while the telepresence robot is moving. This error occurs because the location viewed by the local user on his/her display represents a location as it existed when the robot camera captured the location. A finite time is required to transmit the camera data to the user's location, and a finite time is required to transmit the user's command back to the robot. During this time, the robot has moved relative to its original location. Therefore the remote location selected by the local user on the camera image is no longer at the same position when the user's command is received at the robot. This results in a movement error.
In the preferred embodiment of this invention, this error is compensated for by using knowledge of the geometry of the remotely located robot and its speed and direction. When a user selects a location on the local terminal's display, this location is translated into an actual location at the remote location. The offset between the robot and this remote location is recorded, and transmitted to the telepresence robot. Additionally, a timestamp representing the time that the local image was generated by the telepresence robot camera is also recorded and transmitted to the telepresence robot. Alternatively, a unique identifier other than a timestamp can be used to identify the local image. When the telepresence robot receives the offset data and timestamp data, it compares the timestamp to the current time, and calculates the movement that occurred between the timestamp time and the current time. Alternatively a unique identifier for the current video frame can be compared to a unique identifier for the past video frame, and the relative movement calculated using a lookup table. This movement is subtracted from the offset data. The adjusted offset data thereby represents the movement required to move to the location originally selected by the user. Via this technique, a telepresence robot can be accurately commanded to move to a desired location while in motion, with minimized error due to latency. This technique may be used to control any system where the desired location of the controlled entity can be selected from an image. Therefore this technique is applicable for general purpose unmanned ground vehicle control and unmanned water-going vehicle control. In alternative embodiments, a three-dimensional visualization system, such as a stereoscopic display, coupled with a three-dimensional pointing device, such as a three-dimensional mouse, may be used to select the final destination in a three-dimensional space. This is useful, for example, to control the end-effector of a telepresence-enabled robot arm, or to control the flight path of an unmanned aerial vehicle or unmanned submersible vehicle.
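The offset adjustment described above can be sketched as follows, under the assumption that the robot keeps a timestamped odometry history it can query at both the frame-capture time and the current time; all names here are hypothetical:
```python
import math

def adjust_offset(offset, frame_pose, current_pose):
    """Subtract the robot's motion since image capture from a selected offset.

    offset: (x, y) target relative to the robot pose at the moment the video
    frame was captured (x forward, y left, meters), as computed on the client.
    frame_pose, current_pose: (x, y, theta) odometry poses looked up via the
    transmitted timestamp (or frame identifier) and via the current time.
    Returns the target re-expressed in the robot's current frame, i.e. the
    movement still required to reach the originally selected location.
    """
    fx, fy, fth = frame_pose
    cx, cy, cth = current_pose

    # The selected target in world coordinates, via the capture-time pose.
    wx = fx + offset[0] * math.cos(fth) - offset[1] * math.sin(fth)
    wy = fy + offset[0] * math.sin(fth) + offset[1] * math.cos(fth)

    # The same world point, seen from the robot's current pose.
    dx, dy = wx - cx, wy - cy
    return (dx * math.cos(cth) + dy * math.sin(cth),
            -dx * math.sin(cth) + dy * math.cos(cth))
```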
As discussed in PCT/US 07/14489 (Sandberg), it is possible to actively control the robot while it is moving at the remote location. By sending a continuous sequence of movement points to the robot, the local user can dynamically control the location of the robot through the use of a pointing device. This embodiment of the invention allows a more fluid control over the robot, because the user perceives that the robot is instantaneously responding to his commands. When using this technique with the aforementioned latency compensation technique, active control over the robot is enabled in a manner similar to joystick control, while at the same time, the effects of latency are minimized.
Map-view Navigation
While navigation using a path line is convenient for maneuvering a telepresence robot about an unknown environment, other techniques exist when the environment is known. For example, a user can click on a location on a map of a remote location, and a telepresence robot could then automatically travel to that location. However, this requires that either a map exist of the location a priori, or that the robot generate a map on the fly. SLAM (Simultaneous Localization and Mapping) is a well-known technique for generating navigation maps on the fly. SLAM is generally used by autonomous robots for navigating about a location. The map generated by SLAM is typically an outline of the boundaries of the area that has been explored by the robot. This outline is generated using range data from laser scanners and similar sensors known in the art to return distance data. It is often difficult for a human operator to correlate this map with the environment as perceived through the eyes of a person, because landmarks are difficult to discern when displayed as crude outlines. Consequently, using a SLAM-generated map to navigate a telepresence robot may be cumbersome and difficult for humans. The present invention consists of a map stitched together from multiple camera images gathered by the telepresence robot camera. This map appears to the user to be a likeness of an aerial photograph of the location explored by the robot. Consequently, it is easy for the remote user to navigate using this map, because objects familiar to him are present in the picture. By manipulating camera images as gathered by the telepresence robot camera such that the camera view is directed vertically downward (using image transformation techniques known in the art), and then stitching these camera images together using techniques known in the art of computer-generated panoramic images, a comprehensive, human-friendly map of a remote location can be created. Note that the area directly in front of the robot can generally be assumed to be at ground level. This creates a natural index location to correlate across multiple images. Range sensors on the robot can inform the algorithm when this assumption has been violated, for example, when the robot has moved right up against a wall.
Because the camera angle on the telepresence robot is known, the search space for the image transformation algorithm used to find common regions between photos to be stitched together can be simplified, and the algorithm can be sped up.
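One way to exploit the known camera angle is to place each top-down-warped camera tile onto the global map at its dead-reckoned pose, and only then refine the placement by image registration over a small search window. The following is a sketch of the placement step, assuming tiles have already been warped to a fixed metric scale; the canvas size, scale, and placement-by-tile-center simplification are illustrative assumptions:
```python
import numpy as np
from scipy.ndimage import rotate

MAP_SIZE = 2000   # global map canvas, in pixels (illustrative)
SCALE = 100.0     # pixels per meter of the top-down view (illustrative)

def stitch_tile(global_map, tile, pose):
    """Composite one bird's-eye camera tile into the global map.

    tile: grayscale image already warped to a top-down view at SCALE pixels
    per meter. pose: (x, y, theta) dead-reckoned robot pose in meters and
    radians. Because the known camera tilt fixes the tile's metric footprint,
    the odometry pose alone gives a good initial placement, and registration
    only needs to search a small window around it. For brevity the tile is
    placed by its center and canvas-edge bounds checks are omitted.
    """
    x, y, theta = pose
    # Rotate the tile to the robot's heading (angle in degrees, about center).
    rotated = rotate(tile, np.degrees(theta), reshape=True, order=1)

    # Top-left corner of the rotated tile on the map canvas.
    col = int(MAP_SIZE // 2 + x * SCALE - rotated.shape[1] // 2)
    row = int(MAP_SIZE // 2 - y * SCALE - rotated.shape[0] // 2)

    # Paste the tile, keeping existing map pixels where the tile is empty.
    region = global_map[row:row + rotated.shape[0], col:col + rotated.shape[1]]
    np.copyto(region, rotated, where=rotated > 0)
    return global_map
```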
When a user wishes to navigate to a location demarcated on the map, he simply selects a location on the map using a pointing device such as a mouse, trackball, or touch screen. In one embodiment, the telepresence robot uses dead-reckoning to trace a path from the current location to the location selected on the map. By limiting allowable movement to the regions already traveled by the robot, a path free from obstructions can be found.
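A path restricted to previously traveled ground can be found with an ordinary breadth-first search over the grid cells the robot has visited; the grid representation below is an assumed, illustrative one rather than anything specified in the application:
```python
from collections import deque

def plan_on_visited(visited, start, goal):
    """Breadth-first path search restricted to already-traveled grid cells.

    visited: set of (col, row) cells the robot has previously driven over,
    so any path found is known to be free of obstructions.
    Returns a list of cells from start to goal, or None if unreachable.
    """
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:     # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        cx, cy = cell
        for nxt in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
            if nxt in visited and nxt not in came_from:
                came_from[nxt] = cell
                frontier.append(nxt)
    return None                         # goal not reachable on visited cells
```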
In an improved embodiment of the invention, the current camera image is compared to the navigation map previously constructed, and errors in dead-reckoning are corrected accordingly. Note that while slight errors incurred while stitching images together may result in an imperfect map, as long as the robot location is compared to the map itself, the robot will correctly navigate the terrain, even if dead-reckoning errors exist in the map.
Active bird's eye "map" view
In an alternative embodiment of the invention, a camera video image gathered by a telepresence robot is transformed such that it represents a "bird's eye" view from directly above the terrain being viewed. This image is displayed in real-time to the user. Either the actual robot base or a synthesized digital representation of the robot base is displayed on the map, such that a user can locate the robot relative to the surrounding terrain. Any area that is not visible via the robotic camera is not displayed. By clicking a location on this map, a telepresence robot can be made to navigate to any location on the map using techniques known in the art of path planning. The user experiences a control technique that consists of clicking on a location on a map, even if areas of the map are not known.
In another embodiment of this invention, the real-time "bird's eye" view discussed above is superimposed over the image map composed of stitched-together images (discussed above: Map-view Navigation). Using this technique, the user sees live imagery of the area within the camera's field of view, and previously-collected static imagery in areas outside the camera's field of view. In this way, a more realistic map-view is presented to the user, facilitating easier real-time navigation.
Network Failure EStop
In the event of a communications network outage, a remotely-controlled robot may in some instances continue to move on its own without the ability for the remote user to intervene. While the danger this poses can be mitigated to some degree by specifying maximum movement distances and using collision sensors on the robot, it is also desirable for the robot to stop on its own when it identifies a network failure. When communicating using a connectionless protocol (such as UDP) it is not always possible to distinguish between a network outage and a simple delay between the transmission of data packets. Assume that the robot receives new data packets from the client at a regular interval. This could be accomplished by sending a "Keep Alive" packet on a periodic basis. One technique to ensure that the robot stops when a network outage is detected is to calculate the distance traveled between data packets, and send an emergency stop command (ESTOP) when the calculated distance exceeds a threshold. The emergency stop command causes the robot to decelerate at the fastest allowable rate until the robot has stopped moving. The threshold should be set to a distance which, when reached, implies a network outage as opposed to a packet transmission delay. In the preferred embodiment, the robot's movement is slowed continuously at a deceleration rate proportional to its speed and lower than the ESTOP deceleration rate when a distance threshold (representing the distance traveled since the last data packet) is reached. This distance threshold is 10 cm in the preferred embodiment. However, the robot's trajectory remains the same, regardless of its speed. If a new data packet is received before the robot comes to a complete stop, the robot accelerates to its original speed. In both cases, the robot ceases movement when a prolonged network failure is detected. Because network failures can sometimes occur for short periods of time (particularly with wireless networks), using distance as the trigger to cause cessation of motion is preferred over the more obvious technique of using a time trigger, because a slow-moving robot can move for a longer time period before a network failure makes it a danger or nuisance to those in its environment. In the preferred embodiment, slowing the robot more gently than a regular ESTOP, but after less of a delay since the last data packet, balances being overly sensitive to long delays between data packets against moving too great a distance after a network failure occurs.
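A sketch of this distance-triggered slow-down follows; the control-loop structure, the instant restoration of speed on packet arrival, and the decay constant are illustrative assumptions, with only the 10 cm threshold taken from the text above:
```python
SOFT_THRESHOLD_M = 0.10  # 10 cm distance threshold from the preferred embodiment
DECAY_RATE = 2.0         # 1/s; deceleration proportional to speed (assumed value)

class PacketWatchdog:
    """Slow, then stop, the robot based on distance since the last data packet."""

    def __init__(self):
        self.dist_since_packet = 0.0
        self.speed = 0.0            # speed currently applied to the wheels, m/s
        self.commanded_speed = 0.0  # speed most recently requested by the user

    def on_packet(self, commanded_speed):
        """A data packet arrived: reset the odometer and restore the speed."""
        self.dist_since_packet = 0.0
        self.commanded_speed = commanded_speed
        self.speed = commanded_speed  # instant restore; a real robot would ramp

    def on_tick(self, dt):
        """Control-loop step: integrate distance and apply the gentle slow-down."""
        self.dist_since_packet += self.speed * dt
        if self.dist_since_packet > SOFT_THRESHOLD_M:
            # Deceleration proportional to current speed, gentler than a hard
            # ESTOP; only the speed along the path is scaled, so the trajectory
            # shape is preserved. A full implementation would also issue a hard
            # ESTOP at a second, larger distance threshold.
            self.speed = max(0.0, self.speed * (1.0 - DECAY_RATE * dt))
        return self.speed
```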
DETAILED DESCRIPTION OF THE DRAWINGS
FIG. 1 is an exemplary embodiment of the invention, diagrammatically showing the control sequence comprising the robotic navigation technique.
FIG. 2 illustrates the manner in which a user controls a telepresence robot using the robotic navigation technique.
FIG. 3 illustrates the network failure emergency stop technique.
DETAILED DESCRIPTION OF THE INVENTION
The present invention is a method and apparatus for controlling a telepresence robot.
FIG. 1 is an exemplary embodiment of the invention, diagrammatically showing the control sequence comprising the robotic navigation technique.
As viewed from above, a telepresence robot 101, 102, and 103 is shown moving along movement path 108, here a circular arc curving 90 degrees to the left. The robot is shown in motion at three different time periods, t=0, t=1, and t=2. The telepresence robot captures a view through its camera at initial time t=0 and initial Cartesian coordinate (0,0). At time t=1 the view captured by the robot has reached the remote user and is displayed on a video screen at the remote user's location. The remote user selects a new destination B 106, which appears to the user to be directly in front of the robot, since it was captured when that was true. At time t=2, the location selected by the remote user is transferred to the robot. By now the robot has moved to location 103, and if the movement request is treated as a request to move directly forward, the robot would move to location C. This is erroneous, as the desired location is B. Using dead reckoning, the robot can compute that it is currently at location 103, and that at time t=0 it was at location 101. Therefore it can determine the location of destination B 106 relative to its position at time t=0. Using this knowledge, a new move path 110 can be constructed by the robot that results in a move to the desired final destination.
In the preferred embodiment, the telepresence robot periodically queries the amount of movement undertaken by the left and right wheels using positional data collected by wheel encoders. This information is converted into an x,y position, and an amount of rotation, theta. In the preferred embodiment, the robot is modeled as traveling in an arc of constant radius when determining the (x,y) position and rotation. However, other techniques of converting encoder values into a position on a Cartesian plane may also be used. A table of (x,y) and theta positions and their associated time is then assembled. Given a timestamp associated with the time that a video image was taken, and knowledge of the current time, a delay time (the time since the video image was taken) can be calculated. By interpolating or extrapolating the delay time using the table of positions, an estimate of the location and rotation angle of the robot can be calculated for a past time. The location and rotation angle of the robot at the present time can also be calculated. By subtracting the location and rotation at the delay time from the location and rotation at the present time, a correction factor can be calculated that corrects for the time delay required to send the image to the client, and for the client to send a command in response to the image.
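The constant-radius arc model and the timestamped pose table can be sketched as follows for a differential-drive base; the wheel-base constant and the simple linear interpolation (which ignores angle wrap-around) are illustrative assumptions:
```python
import bisect
import math

WHEEL_BASE = 0.5  # meters between the drive wheels (illustrative)

def arc_update(pose, d_left, d_right):
    """Advance (x, y, theta) from wheel-encoder deltas, assuming the robot
    travels a constant-radius arc between encoder samples."""
    x, y, theta = pose
    d = (d_left + d_right) / 2.0           # distance traveled along the arc
    d_theta = (d_right - d_left) / WHEEL_BASE
    if abs(d_theta) < 1e-9:                # straight-line limit of the arc
        return x + d * math.cos(theta), y + d * math.sin(theta), theta
    r = d / d_theta                        # arc radius
    x += r * (math.sin(theta + d_theta) - math.sin(theta))
    y -= r * (math.cos(theta + d_theta) - math.cos(theta))
    return x, y, theta + d_theta

def pose_at(history, t):
    """Linearly interpolate (or extrapolate at the ends) a pose at time t.

    history: list of (time, (x, y, theta)) entries in increasing time order,
    assembled from the periodic encoder queries.
    """
    times = [entry[0] for entry in history]
    i = bisect.bisect_left(times, t)
    i = min(max(i, 1), len(history) - 1)   # clamp to a valid bracketing pair
    (t0, p0), (t1, p1) = history[i - 1], history[i]
    a = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
    return tuple(p0[k] + a * (p1[k] - p0[k]) for k in range(3))
```
The correction factor described above is then the difference between pose_at(history, now) and pose_at(history, frame_timestamp).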
FIG. 2 illustrates the manner in which a user controls a telepresence robot using the robotic navigation technique.
By moving a pointer 203, a user selects a remote location 214 on an image 206 captured by the telepresence robot camera, using a pointing device such as a mouse, trackball, touchscreen, or other pointing device known in the art. The image is shown as displayed on a computer monitor 205. A path line 202 is generated from the present location of the telepresence robot to the remote location selected and displayed on the computer monitor. The remote location is assumed to be the floor-height location selected by the user. The end point of this path line is sent to the telepresence robot. In the preferred embodiment, the end point is sent periodically, although intermittent or aperiodic transmission of the end point may also occur.
Any location within the field of view of the camera may be selected. In the preferred embodiment valid moves are limited to areas that are below the horizon (and hence represent a location on the floor) and no farther away than some predetermined distance, for example, 8 meters. Areas behind the robot may be selected by tilting the camera down until a view of the area behind the robot is revealed 211. Note that the camera's line-of-sight must not be obstructed by the robot body for this to work. Tilting the camera down also shows the view of the robot itself 207. Here the supports for the monitor assembly can also be seen 208. By selecting a location behind the robot 215, a path line 209 to a location behind the robot is created and the robot can be made to move backwards. In the preferred embodiment of the invention, the camera automatically tilts down (enabling a view of areas behind the robot) whenever the user moves the pointing device cursor (for example, a mouse pointer) below a lower bound 204 on the screen. The camera also automatically moves back to the original forward-facing position when the user moves the pointing device cursor above a set level 212 when the camera is tilting down. By this means, the user may adjust the angle of the camera to allow downward motion without a need to use keyboard bindings or additional mouse clicks or actions.
FIG. 3 illustrates the network failure emergency stop technique. Graph (a) represents a robot's velocity over a given timeframe. Graph (b) represents data packet transmission and data packet transmission gaps over the same time period. At time t0 301, a robot is traveling at a velocity of 1.0 meters/second. Data packets are being sent continuously until time t1 302, where a short gap in data packet transmission exists until time t2 303. The shaded area 311 represents the distance traveled by the robot between times t1 and t2. This distance is lower than the threshold required to begin slowing the robot, and so no robot deceleration occurs in this time interval. Data packets start to be resent continuously at time t2, and continue to be sent until time t3 304. The shaded area 312 represents the distance traveled by the robot between times t3 and t4 305. This distance is equal to the threshold distance required to begin slowing the robot, and so the robot is seen to begin decelerating at time t4. This deceleration continues until time t5 306 when data packet transmission begins again. The robot begins accelerating as soon as the data packet is received, and continues accelerating until time t6 307, when it returns to its normal velocity. At time t7 308, a network failure occurs. The shaded area 313 represents the distance traveled by the robot between times t7 and t8 309. This distance is equal to the threshold required to begin slowing the robot, and so the robot is seen to be decelerating beginning at time t8. The robot comes to a complete stop at time t9 310, thereby ensuring the safety of the robot and its surroundings during the network failure.
Advantages
What has been described is a method and apparatus for compensating for latency when controlling a telepresence robot. This offers many beneficial advantages, such as navigating through unknown environments, navigating at a higher speed, accurately compensating for latency-induced delays, navigating without a joystick, and navigating without undue cognitive load to the user.
An alternative embodiment of the invention discloses a means of navigating a robot using a visual map, and a means of creating that visual map. This offers many beneficial advantages, such as allowing a user to navigate a known environment more easily, and allowing a user to select known landmarks on a map that offers a user-friendly visualization of the terrain wherein objects appear on the map much as they would if the user was actually present at the robot's location.
Another alternative embodiment of the invention discloses a means of stopping a robot when a network failure occurs. This offers many beneficial advantages, such as allowing safer operation of the robot around people and other obstacles.
While certain exemplary embodiments have been described in detail and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention is not to be limited to the specific arrangements and constructions shown and described, since various other modifications may occur to those with ordinary skill in the art.

Claims

What is claimed:
1. A method for controlling a moving remotely-controllable entity comprising the steps of: capturing an image using a remotely-located camera; recording a first time at which the captured image was captured; transmitting the captured image to a client device; selecting a remote location on the captured image from a user interface on the client device; recording a second time at which the remote location was selected; transmitting the selected remote location and the recorded second time to the remotely-controllable entity; calculating a delta in location between the first recorded time and the second recorded time for the remotely-controllable entity; calculating the actual location by subtracting the delta in location from the selected remote location; and moving the remotely-controllable entity to the actual location.
2. A method for controlling a moving telepresence robot comprising the steps of: capturing an image using a telepresence robot; recording a first time at which the captured image was captured; transmitting the captured image to a client device; selecting a remote location on the captured image from a user interface on the client device; recording a second time at which the remote location was selected; transmitting the selected remote location and the recorded second time to the telepresence robot; calculating the delta in robot location between the first recorded time and the second recorded time; calculating the actual location by subtracting the delta in robot location from the selected remote location; and moving the telepresence robot to the actual location.
3. A method for controlling a moving telepresence robot comprising the steps of: capturing an image using a telepresence robot; recording a first robot location at which the image was captured; transmitting the captured image to a client device; selecting a remote location on the captured image from a user interface on the client device; recording a time at which the remote location was selected; transmitting the selected remote location and the recorded time to the telepresence robot; calculating the delta in robot location between the first robot location and a second robot location at the recorded time; calculating the actual location by subtracting the delta in robot location from the selected remote location; and moving the telepresence robot to the actual location.
4. A method for emergency stopping a remotely controllable robot comprising the steps of: calculating a distance traveled by the robot between contiguously received data packets; and when the distance traveled exceeds a threshold, decelerating the robot.
5. The method of claim 4 further comprising the step of: when a next data packet is received, accelerating the robot if the robot is still moving.
PCT/US2009/000212 2008-01-15 2009-01-14 Low latency navigation for visual mapping for a telepresence robot WO2009091536A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US1113308P 2008-01-15 2008-01-15
US61/011,133 2008-01-15

Publications (1)

Publication Number Publication Date
WO2009091536A1 (en) 2009-07-23

Family

ID=40885591

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2009/000212 WO2009091536A1 (en) 2008-01-15 2009-01-14 Low latency navigation for visual mapping for a telepresence robot

Country Status (1)

Country Link
WO (1) WO2009091536A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5727132A (en) * 1994-08-25 1998-03-10 Fanuc Ltd. Robot controlling method for tracking a moving object using a visual sensor
US6194860B1 (en) * 1999-11-01 2001-02-27 Yoder Software, Inc. Mobile camera-space manipulation
US20040243282A1 (en) * 2003-05-29 2004-12-02 Fanuc Ltd Robot system
US20070156286A1 (en) * 2005-12-30 2007-07-05 Irobot Corporation Autonomous Mobile Robot

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102123266A (en) * 2010-01-12 2011-07-13 华为终端有限公司 Point-to-point video communication method based on telepresence technology, codec (coder/decoder) and client
CN101986219A (en) * 2010-08-27 2011-03-16 东南大学 Implementation method of force telepresence of telerobotics based on integration of virtual strength and real strength
EP2668008A4 (en) * 2011-01-28 2018-01-24 Intouch Technologies, Inc. Interfacing with a mobile telepresence robot
US8947522B1 (en) 2011-05-06 2015-02-03 Google Inc. Systems and methods to adjust actions based on latency levels
US20150057801A1 (en) * 2012-10-10 2015-02-26 Kenneth Dean Stephens, Jr. Real Time Approximation for Robotic Space Exploration
US9623561B2 (en) * 2012-10-10 2017-04-18 Kenneth Dean Stephens, Jr. Real time approximation for robotic space exploration
US20150120048A1 (en) * 2013-10-24 2015-04-30 Harris Corporation Control synchronization for high-latency teleoperation
US9144907B2 (en) * 2013-10-24 2015-09-29 Harris Corporation Control synchronization for high-latency teleoperation
US9300430B2 (en) 2013-10-24 2016-03-29 Harris Corporation Latency smoothing for teleoperation systems
WO2017214551A1 (en) * 2016-06-10 2017-12-14 Cnh Industrial America Llc System and method for autonomous vehicle communications protocols
US9952596B2 (en) 2016-06-10 2018-04-24 Cnh Industrial America Llc System and method for autonomous vehicle communications protocols
RU2730117C2 (en) * 2016-06-10 2020-08-17 СиЭнЭйч ИНДАСТРИАЛ АМЕРИКА ЭлЭлСи Data communication system and method for autonomous vehicle
CN112041126A (en) * 2018-03-29 2020-12-04 捷普有限公司 Sensing authentication apparatus, system, and method for autonomous robot navigation
EP3774200A4 (en) * 2018-03-29 2022-01-05 Jabil Inc. Apparatus, system, and method of certifying sensing for autonomous robot navigation
CN112041126B (en) * 2018-03-29 2023-06-13 捷普有限公司 Sensing authentication device, system and method for autonomous robot navigation
WO2020189230A1 (en) * 2019-03-20 2020-09-24 Ricoh Company, Ltd. Robot and control system that can reduce the occurrence of incorrect operations due to a time difference in network
CN113597363A (en) * 2019-03-20 2021-11-02 株式会社理光 Robot and control system capable of reducing misoperation caused by time difference of network
CN113597363B (en) * 2019-03-20 2023-09-01 株式会社理光 Robot and control system capable of reducing misoperation caused by time difference of network

Similar Documents

Publication Publication Date Title
WO2009091536A1 (en) Low latency navigation for visual mapping for a telepresence robot
JP5324607B2 (en) Method and system for remotely controlling a mobile robot
US8255092B2 (en) Autonomous behaviors for a remote vehicle
US20110087371A1 (en) Responsive control method and system for a telepresence robot
US6845297B2 (en) Method and system for remote control of mobile robot
US20100241289A1 (en) Method and apparatus for path planning, selection, and visualization
CN111716365B (en) Immersive remote interaction system and method based on natural walking
US20220260998A1 (en) Navigating a Mobile Robot
WO2008060689A2 (en) Autonomous behaviors for a remote vehicle
US20230097676A1 (en) Tactical advanced robotic engagement system
KR101436555B1 (en) Internet based Teleoperation System of UAV
KR101536415B1 (en) System and method of remomtely controlling mobile robot
JP2019000918A (en) System and method for controlling arm attitude of working robot
US11586225B2 (en) Mobile device, mobile body control system, mobile body control method, and program
EP2147386B1 (en) Autonomous behaviors for a remote vehicle
CN111583692A (en) Remote visual field acquisition method, device and system based on automatic driving
WO2022137876A1 (en) Mobile object, control method for mobile object, and program
Geerinck et al. Tele-robots with shared autonomy: tele-presence for high level operability
Kadavasal Sivaraman et al. Sensor Augmented Virtual Reality Based Teleoperation Using Mixed Autonomy

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 09701689; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 09701689; Country of ref document: EP; Kind code of ref document: A1)