WO2003027827A1 - Method of providing a graphical user interface for a processing device, and programmable processing device comprising such an interface - Google Patents

Method of providing a graphical user interface for a processing device, and programmable processing device comprising such an interface

Info

Publication number
WO2003027827A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
path
scene
location
command
Prior art date
Application number
PCT/NL2001/000686
Other languages
French (fr)
Inventor
Jacob Schaper
Original Assignee
Le Gué Beheer B.V.
Jacob Schaper Beheer B.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Le Gué Beheer B.V., Jacob Schaper Beheer B.V. filed Critical Le Gué Beheer B.V.
Priority to PCT/NL2001/000686 priority Critical patent/WO2003027827A1/en
Publication of WO2003027827A1 publication Critical patent/WO2003027827A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04842Selection of displayed objects or displayed text elements

Abstract

In a method of providing a GUI for entering a command (27) for a processing device, an image is generated and a pointer is provided to select a location (9) in the image, referring to the command (27). The image is generated by rendering at least part of a three-dimensional scene from a dataset (18), which comprises entries specifying paths (5) traced in a plurality of directions from a viewpoint (4) in the scene to first points in the scene. Each location (9) in the image is associated with one of the paths (5) comprised in the dataset (18). Selecting the location (9) results in selecting the associated path (5). A path (5) is linked to the command (27). A programmable processing device comprises a GUI provided using such a method.

Description

Method of providing a graphical user interface for a processing device, and programmable processing device comprising such an interface.
The invention relates to a method of providing a GUI for entering a command for a processing device, wherein an image is generated and a pointer is provided to select a location in the image, referring to the command.
The invention further relates to a programmable processing device comprising such a GUI.
Graphical user interfaces, or GUIs, find application in a wide range of processing devices. Computers and workstations are prime examples of such devices. Many other devices, however, also make use of GUIs to facilitate control of processes running on the device. Video editing equipment and game consoles, for example, also commonly make use of GUIs.
All of the above-mentioned devices are equipped with screens, displaying a two-dimensional image, and means for pointing a cursor to a location in the image. These means can be a joystick, mouse, keyboard, electronic tablet and pen, or a laser directed at a light-sensitive screen, for example. The processing device keeps track of the location of the cursor in the two-dimensional image. An action, such as a mouse-click, performed when the cursor is at a particular location in the image activates the command linked to that location. The visible objects in the image provide the user with a means to relate the location to the command. In known GUIs, these objects commonly comprise icons, or buttons, sometimes provided with text.
In a known example of the method according to the opening paragraph, providing links to a large number of commands requires the use of many objects. When two-dimensional objects are shown in the image, this also means that a larger screen must be used, or that the objects must be smaller and packed together more densely in the image. This in turn makes selection more difficult, since the information comprised in the image is too dense for the user to comprehend. In addition, a lot of dexterity is required to place the cursor over the correct image component. Conventional rendering methods, using for example piecewise polynomials, also limit the accuracy with which an object can be selected.
Thus, the known example described above suffers from the disadvantage that the two-dimensional image limits the number of commands that can be linked. Above a certain density of image components, it becomes too difficult for the user to differentiate between locations. It is also no longer possible to retain a clear overview of selectable locations.
It is an object of the invention to provide a method of the above-mentioned kind that provides a GUI that allows easy selection of a command from among a large number of commands available for selection.
This object is achieved using the method according to the invention, which is characterised in that the image is generated by rendering at least part of a three-dimensional scene from a dataset, which comprises entries specifying paths traced in a plurality of directions from a viewpoint in the scene to first points in the scene, wherein each location in the image is associated with one of the paths comprised in the dataset, wherein selecting the location results in selecting the associated path, and in that a path is linked to the command.
Thus, paths through a three-dimensional scene can be selected, allowing both a larger number of commands to be provided for selection, and a sharper image to be rendered of the scene, so that selection of the wrong command is prevented.
A preferred embodiment of the invention is characterised in that at least one location in the image is associated with at least one path to a second point in the scene, hit by the path upon reflection or refraction in the first point, and in that the path is linked to a second command and means are provided for selecting between all commands linked to the path, upon selection of the location in the image associated with the path.
Accordingly, one path can be linked to several commands, allowing a larger number of commands to be linked, using the same number of locations in the image. As an example, a transparent object can be shown in the image. In the GUI according to the invention, clicking on one location in the image associated with a path to a point on the object will make a choice of commands available. One command is linked to a point on the transparent object and one command is linked to a point on an object visible through the transparent object. In effect, one selectable location in the image is used to link two commands. The GUI is still easy to understand and use, since the user sees both the transparent object and the object behind it. Of course, the objects should in some way represent the commands, so that it is immediately obvious to the user what choice of commands can be expected.
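By way of illustration only, the short Python sketch below shows how selection among several commands linked to one path might be presented; the command names and the text-based menu are assumptions made for the sketch, not part of the invention as described:

```python
def choose_command(commands: list[str]) -> str:
    """Let the user pick among all commands linked to a selected path."""
    for index, command in enumerate(commands):
        print(f"{index}: {command}")
    return commands[int(input("select> "))]

# A path hitting a transparent pane and the object visible behind it
# carries one command per point hit along the path:
linked = ["open_settings", "show_desktop"]   # illustrative names
# chosen = choose_command(linked)
```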
According to an aspect of the invention, physical properties, e.g. colour, luminance or coefficient of reflection, assigned to the first points are used to determine the appearance of the locations in the image associated with the paths to the first points.
This makes it possible to use a scene with a limited number of basic geometric objects, yet still allow a good level of differentiation between objects. In addition, a more realistic rendering of the scene is possible, so that navigation through the scene is made easier.
According to another aspect of the invention, at least one of the physical properties assigned to a first point is changed upon selection of the location in the image associated with the path to the first point.
Accordingly, components can be made more or less visible. This makes navigation through the GUI easier. For example, an opaque component can be made transparent to reveal components behind it, making further linked commands available for selection.
According to another aspect of the invention, the dataset comprises mapping co-ordinates for a set of first points on or in an object in the scene, and an image is selected and mapped onto the object according to the mapping co-ordinates.
Thus, the appearance of components in a scene can be changed by mapping a different image onto them.
According to another aspect of the invention, locations in the image are defined by the point of intersection of the associated path with a rendering plane, and means are provided to change the geometric properties of the rendering plane and to subsequently re-render the image.
Thus, it is possible to obtain a better view of certain components, for example by zooming in on part of the scene. This makes selection of a component easier for the user.
The invention will now be explained in further detail with reference to the drawings.
Fig. 1 shows an example of a scene defined in a dataset, in order to illustrate some concepts used in the invention.
Fig. 2 schematically shows the mapping of an image onto a component in the scene.
Fig. 3 schematically represents components in an embodiment of a processing device comprising the GUI according to the invention.
The problems mentioned in the introduction are solved with the GUI provided by the invention, wherein the image is a rendering of a three-dimensional scene, generated using paths traced from a viewpoint to points in the scene. Whereas in the known method all the information comprised in the GUI had to be packed into two dimensions, the invention adds a third dimension. The user of the GUI is, however, not overwhelmed, since navigating through a three-dimensional scene corresponds to moving through the real world, and therefore comes naturally to human beings.
The user of the processing device is provided with means to select a location in the image, referring to a command for the processing device. Since the location in the image is associated with a path, selecting the location effectively means selecting a path. In the invention, a path is linked to a command, so a location in the image refers to a command by means of the path associated with that location. The path is traced to a point in the scene, on a component visible in the image, so that the user can relate the component to the command.
The image shown to the user is rendered from a dataset, which comprises entries specifying paths traced in a plurality of directions from a viewpoint to points in a three-dimensional scene. The intersection of each path with a rendering plane defines a location in the image of the scene associated with the path, as will be explained in more detail with reference to Fig. 1, showing a very simple scene.
The scene comprises a cube 1, a cylinder 2 and a light-source 3. The scene is described in a dataset, which defines components of the scene in terms of their geometry and location within the scene. The term component is used here to denote both objects and parts of objects. Thus, the cube 1 is a component, but a face of the cube 1 is also a component.
The dataset describing the scene comprises data resulting from a scan of the scene. It comprises entries for paths traced from a viewpoint 4 to points on or in a component in the scene. As an example, Fig. 1 shows a path 5 from the viewpoint 4 to a point on the upper surface of the cube 1.
It is noted that in this case, the path 5 leads to a point on the surface of a three-dimensional component. However, within the scope of the invention, it is also possible that the dataset comprises entries for paths traced to points inside components like the cube 1. This could be necessary if the cube 1 is to be translucent, for example, to create the impression that it is made of glass.
Information on the point in the scene is stored with the entry for the path 5, so that the path 5 is effectively linked to the point on the surface of the cube 1.
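For illustration, a minimal Python sketch of such a dataset entry might look as follows; the field names, the dictionary-based pixel lookup and the representation of a command as a string are assumptions made for the sketch:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PathEntry:
    """One traced path from the viewpoint into the scene."""
    direction: tuple   # unit vector from the viewpoint 4, e.g. (x, y, z)
    hit_point: tuple   # first point in the scene hit by the path
    properties: dict = field(default_factory=dict)  # e.g. {"colour": (0, 0.8, 0)}
    command: Optional[str] = None                   # linked command, if any

def command_for_pixel(pixel_to_entry: dict, pixel: tuple) -> Optional[str]:
    """Selecting a location selects its associated path, which in turn
    yields the linked command."""
    entry = pixel_to_entry.get(pixel)
    return entry.command if entry is not None else None
```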
In the preferred embodiment of the invention, the dataset comprises entries for paths traced in a large range of directions, covering a very wide spatial angle. In the example of Fig. 1 a hemisphere 6 is covered, but an entire sphere would be even more advantageous. The dataset thus comprises information for paths traced in all directions. This makes it possible to provide a GUI wherein the user can zoom in on part of the scene.
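As an illustration of tracing paths "in a plurality of directions", the sketch below generates a grid of unit direction vectors covering a hemisphere such as the hemisphere 6; the sampling density and the grid scheme are assumptions for the sketch:

```python
import numpy as np

def hemisphere_directions(n_theta: int = 32, n_phi: int = 128) -> np.ndarray:
    """Unit direction vectors covering a hemisphere around +z,
    one traced path per direction (simple grid sampling)."""
    directions = []
    for theta in np.linspace(0.0, np.pi / 2.0, n_theta):           # polar angle
        for phi in np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False):
            directions.append((np.sin(theta) * np.cos(phi),
                               np.sin(theta) * np.sin(phi),
                               np.cos(theta)))
    return np.array(directions)
```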
The image is rendered from the dataset by defining a rendering plane 7. A clipping plane 8 can limit the depth of focus. The intersection with the rendering plane 7 of the path 5 between the viewpoint 4 and the cube 1 defines a pixel 9, in other words a location, in the image. As can be seen in Fig. 1, the spatial angle subtended by paths from the viewpoint 4 to the extremities of the rendering plane 7 is smaller than the spatial angle covered by the hemisphere 6. Thus, there is scope for zooming out.
To zoom out, the area of the rendering plane 7 and clipping plane 8 is enlarged. More of the data in the dataset is then used to render the image, since more paths intersect the rendering plane 7. Because the spatial angle subtended from the viewpoint 4 by the rendering plane 7 is smaller than the spatial angle covered by the traced paths used to generate the dataset, no re-scanning of the scene is required.
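The following sketch, again with assumed names and a viewpoint fixed at the origin looking along the z-axis, illustrates how the intersection of a traced path with the rendering plane can determine a pixel, and how enlarging the plane brings more of the already-stored paths into the image:

```python
import numpy as np

def pixel_for_path(direction, plane_z, plane_min, plane_max, resolution):
    """Intersect a path from the viewpoint (origin, looking along +z)
    with a rendering plane at z = plane_z; return the pixel the path
    maps to, or None if it misses the plane's current extent."""
    d = np.asarray(direction, dtype=float)
    if d[2] <= 0.0:
        return None
    hit = d * (plane_z / d[2])          # point where the path crosses the plane
    if not (plane_min[0] <= hit[0] <= plane_max[0] and
            plane_min[1] <= hit[1] <= plane_max[1]):
        return None                     # path unused at this zoom level
    u = (hit[0] - plane_min[0]) / (plane_max[0] - plane_min[0])
    v = (hit[1] - plane_min[1]) / (plane_max[1] - plane_min[1])
    return int(u * (resolution[0] - 1)), int(v * (resolution[1] - 1))

# Zooming out: enlarge (plane_min, plane_max) so that more traced paths
# intersect the plane; the dataset already contains them, so no re-scan
# of the scene is needed.
```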
As part of the invention, physical properties, e.g. colour, luminance or coefficient of reflection, assigned to the point on the surface of the cube 1, are used to determine the appearance of the location in the image associated with the path 5 to the point on the cube 1. These properties can be defined in the dataset, as an image component with each entry, or assigned when the image is rendered, using a method described more fully in the co-pending patent application, entitled 'method of rendering one or more two-dimensional output images from a 3D-dataset, renderer and device for processing a 3D-dataset', by the same applicant.
If the cube 1 is green, then its colour is stored in the dataset. If the colour is to be defined later, a reference allowing an appropriate algorithm to change the colour is comprised in the dataset. In any case, the entry for the path 5 will comprise co-ordinates for the point on or in the cube 1 to which the path 5 has been traced.
In the preferred embodiment of the invention, the pixel 9 is associated with a second point on or in a component of the scene. In the example of Fig. 1 this is a point on the cylinder 2, hit by an outgoing path 10, which is a reflection of the path 5 to the cube 1. The direction of the outgoing path 10 is determined using physical laws of reflection.
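For reference, the standard mirror-reflection law used in ray tracing gives the outgoing direction as r = d - 2(d·n)n for an incoming direction d and unit surface normal n; a one-function Python sketch:

```python
import numpy as np

def reflect(incoming, normal):
    """Mirror reflection of a direction about a unit surface normal:
    r = d - 2 (d . n) n."""
    d = np.asarray(incoming, dtype=float)
    n = np.asarray(normal, dtype=float)
    return d - 2.0 * np.dot(d, n) * n
```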
The same principle is used to allow an object behind a transparent object to appear in the image. In this case, a refracted path is traced from a first point of intersection with a component of the scene to a second point hit upon refraction of the path 5 in the first point in the scene .
It is noted that the path 10 can be traced even further back, for example by means of a further reflected path 11 from the cylinder 2 to the light-source 3. Physical properties such as the emitted colour, the intensity, or the focus of the light can be assigned to the light-source 3, thus influencing, through reflection in the cylinder 2, the appearance of the cube 1 in the image. This is described more fully in the co-pending patent application 'method of rendering one or more two-dimensional output images from a 3D-dataset, renderer and device for processing a 3D-dataset' by the same applicant. Rendering methods, such as the one just described, are known in the art as ray-tracing algorithms. Various variants of such algorithms exist. It will be understood that the invention is not limited to just one of them, and that further algorithms used to refine the ray-tracing process can also be used within the scope of the invention. For example, the ray-tracing algorithm can be enhanced with a radiosity calculation or another method taking account of indirect lighting. The ray-tracing method described here serves merely as an illustrative example of a currently used embodiment.
A clear advantage of the use of such methods is that they provide a very sharply contrasting image. Because the commands for the processing device are linked to paths, it becomes less likely that the user of the GUI will select the wrong command. When commands are linked to components, correct rendering of the image becomes critical. Rendering the image using piecewise polynomials, for example, will not do, since such a rendering method will not provide a correct rendering of the edges of components. Users might click on the wrong component if the components are not accurately delimited in the image.
Referring to Fig. 2, a further feature of the invention will now be explained, namely the possibility of mapping textures, or images, onto components in the scene during rendering of the image. Thus, different versions of the image can be rendered from one and the same dataset.
This is made possible by the inclusion of mapping co-ordinates with each entry in the dataset. Fig. 2 shows an upper surface 12 of the cube 1 of Fig. 1. A system of mapping co-ordinates (u, v) is superimposed on the surface 12. An image 13 is likewise accorded a co-ordinate system, so that a point 14 on the surface 12 of the cube 1 maps to a point 15 in the image 13. To allow image mapping onto the surface 12 of the cube 1, the entry in the dataset for the path 5 between the viewpoint 4 and the point of contact with the cube 1 will comprise both the spatial co-ordinates and the mapping co-ordinates of that point of contact. The latter co-ordinates are used during rendering to map the image 13, or a similarly rasterised texture, onto the surface 12.
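A minimal sketch of such a lookup, assuming mapping co-ordinates normalised to [0, 1] and a texture stored as a raster array, might be:

```python
import numpy as np

def sample_texture(texture: np.ndarray, u: float, v: float):
    """Nearest-neighbour lookup of the texel selected by the mapping
    co-ordinates (u, v) in [0, 1] in a rasterised image."""
    height, width = texture.shape[:2]
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y, x]
```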
It is noted that three-dimensional mappings are also within the scope of the invention. These are used to map three-dimensional objects onto three-dimensional components. For example, a sphere could be mapped onto the cube 1, such that it appears to be contained in the cube 1, if the cube 1 is transparent. In this case, points on or in the cube 1 are assigned three mapping co-ordinates (u, v, w), and the object to be mapped onto the cube 1 is likewise fitted into a three-dimensional co-ordinate system. The co-ordinates (u, v, w) comprised in the dataset result from tracing a sufficient number of paths to points inside the cube 1. Any three-dimensional object can then be mapped onto the cube 1 during rendering, using the rasterised object to be mapped as input.
Turning now to Fig. 3, there is shown a more detailed schematic view of a processing device comprising an embodiment of the GUI according to the invention.
The device is provided with a pointer controller 16, such as a mouse, keyboard, joystick or tablet. A pointer processor 17 can translate commands issued with the pointer controller 16. It makes use of a dataset 18, to determine which path is currently selected. A user manipulating the pointer controller 16 is provided with an image 19 of the scene. The image 19 is rendered by a rendering engine 20, using the entries in the dataset 18.
Commands to change the user's view of the scene are passed from the pointer processor 17 to a parameter and camera controller 21. The view of the scene afforded by the image 19 can be changed in a number of ways according to the invention.
As mentioned above, the user can zoom in on parts of the scene to make selection of paths easier. This means that a command issued with the pointer controller 16 is translated by the pointer processor 17 and the parameter and camera controller 21 to a definition of a rendering plane 7 and/or clipping plane 8 with different geometric properties. The rendering engine 20 subsequently re-generates the image 19, determining the points of intersection of the paths with the re-defined rendering plane 7.
Alternatively, a command issued with the pointer controller 16, resting on a location associated with a path to a point on an object in the scene, can lead to a change in the physical properties of that object. For example, it can be given a more prominent colour, or made transparent. The command is then translated to an appropriate value for a parameter characterising the property, by the parameter and camera controller 21. The rendering engine 20 subsequently generates a new image 19. It uses the parameter value for the physical property just assigned to the point on the selected object to determine the appearance of the associated location in the image 19.
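A sketch of this behaviour, with hypothetical names standing in for the parameter and camera controller 21 and the rendering engine 20; the property names and values are assumptions:

```python
properties = {"cube_1": {"opacity": 1.0, "colour": (0.0, 0.8, 0.0)}}

def render_image(props: dict) -> None:
    """Stand-in for the rendering engine: re-draws the image using the
    current parameter values for each component's physical properties."""
    print("re-rendering with", props)

def on_select(component: str) -> None:
    """Selecting the location associated with a path to this component
    makes it transparent, revealing components behind it."""
    properties[component]["opacity"] = 0.3
    render_image(properties)

on_select("cube_1")
```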
In the embodiment of Fig. 3, the pointer processor 17 can also issue instructions to a switcher 22, through a switch control 23. The switcher 22 selects images from amongst a selection of available images, to be mapped onto the component hit by the path selected with the pointer controller 16. The mapping is carried out by the rendering engine 20 according to mapping co-ordinates stored in the dataset 18 for the component. Through a command issued through the switch control 23, the switcher 22 can be directed to select a different image to be mapped by the rendering engine 20. The switcher 22 can be an external device, connected to the processing device according to the invention, or it can be comprised in the device, possibly as a software module.
The images available for mapping can be stored images 24 or images 25 generated by image generating applications 26. The latter feature is a straightforward extension of the basic method according to the invention, since commands for the processing device are linked to paths anyhow. Some of these commands can be commands to direct an image generation application 26.
Of course, the main object of the invention is to provide a GUI for issuing commands 27 to an external application. The term external application is used here to denote applications unconnected with the GUI and the generation of this GUI. The commands 27 are usually commands connected to the actual functions the processing device is meant to perform. If the device is a computer, such a command could start a program, retrieve a file or data object from memory, select a variable in a program, or control a peripheral component connected to the device, etc.
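Purely as an illustration of such linking, a dispatch table keyed on path identifiers might look as follows; the identifiers, program name and file name are invented for the sketch:

```python
import subprocess

path_commands = {
    "path_5": lambda: subprocess.Popen(["some_program"]),     # start a program
    "path_10": lambda: open("some_file.txt", "r").read(),     # retrieve a file
}

def execute(path_id: str) -> None:
    """Run the external-application command linked to a selected path."""
    action = path_commands.get(path_id)
    if action is not None:
        action()
```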
The GUI comprised in a processing device can be provided using a program loaded onto the device. In this case, the dataset and the algorithms enabling the method according to the invention to be carried out on the device are provided as executable software. The processing device generates the image. New commands 27 can be added and linked to paths which have not yet been linked to a command, enabling new external applications to be loaded onto the device.
Alternatively, the image and knowledge of which path is associated with which location in the image can be provided in a file, with a much simpler algorithm that allows selection of a path. In effect, the software loaded onto the device is the result of executing the method according to the invention, in this case.
It will be apparent to those skilled in the art, that the invention is not limited to the embodiments described above, which can be varied in a number of ways within the scope of the claims. For example, the image of the scene need not be provided on a flat screen, but could be provided using stereovision equipment to give an even more realistic impression of the three-dimensional scene.

Claims

1. Method of providing a GUI for entering a command (27) for a processing device, wherein an image (19) is generated and a pointer is provided to select a location (9) in the image (19), referring to the command (27), characterised in that the image (19) is generated by rendering at least part of a three-dimensional scene from a dataset (18), which comprises entries specifying paths (5) traced in a plurality of directions from a viewpoint (4) in the scene to first points in the scene, wherein each location (9) in the image (19) is associated with one of the paths (5) comprised in the dataset (18), wherein selecting the location (9) results in selecting the associated path (5), and in that a path (5) is linked to the command (27).
2. Method according to claim 1, characterised in that at least one location (9) in the image (19) is associated with at least one path (10) to a second point in the scene, hit by the path (10) upon reflection or refraction in the first point, and in that the path (5) is linked to a second command and means are provided for selecting between all commands linked to the path (5) upon selection of the location (9) in the image (19) associated with the path (5).
3. Method according to claim 1 or 2, wherein physical properties, e.g. colour, luminance or coefficient of reflection, assigned to the first points are used to determine the appearance of the locations (9) in the image (19) associated with the paths (5) to the first points.
4. Method according to claim 3, wherein at least one of the physical properties assigned to a first point is changed upon selection of the location (9) in the image (19) associated with the path (5) to the first point.
5. Method according to any one of the previous claims, wherein the dataset (18) comprises mapping co-ordinates for a set of first points (14) on or in an object (1, 2) in the scene, and an image (13; 24, 25) is selected and mapped onto the object (1, 2) according to the mapping co-ordinates.
6. Method according to claim 5, wherein a path (5) to one of the first points in the set is linked to a command to select and map a different image (13; 24, 25).
7. Method according to any one of the previous claims, wherein locations (9) in the image (19) are defined by the point of intersection of the associated path (5) with a rendering plane (7), and means (17, 21) are provided to change the geometric properties of the rendering plane (7) and to subsequently re-render the image (19).
8. Programmable processing device comprising a GUI provided using a method according to any one of the previous claims.
9. Computer program loadable into a computer so that the computer programmed in this way is capable of or adapted to carrying out a method according to any one of claims 1-7.
10. Computer program product, comprising a computer readable medium having thereon computer program code means which, when said program is loaded, carry out a method according to any one of claims 1-7.
11. Computer program loadable into a computer so that the computer programmed in this way comprises a GUI provided using a method according to any one of claims 1-7.
12. Computer program product, comprising a computer readable medium having thereon computer program code means comprising a GUI provided using a method according to any one of claims 1-7.
PCT/NL2001/000686 2001-09-14 2001-09-14 Method of providing a graphical user interface for a processing device, and programmable processing device comprising such an interface WO2003027827A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/NL2001/000686 WO2003027827A1 (en) 2001-09-14 2001-09-14 Method of providing a graphical user interface for a processing device, and programmable processing device comprising such an interface

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/NL2001/000686 WO2003027827A1 (en) 2001-09-14 2001-09-14 Method of providing a graphical user interface for a processing device, and programmable processing device comprising such an interface

Publications (1)

Publication Number Publication Date
WO2003027827A1 true WO2003027827A1 (en) 2003-04-03

Family

ID=19760770

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/NL2001/000686 WO2003027827A1 (en) 2001-09-14 2001-09-14 Method of providing a graphical user interface for a processing device, and programmable processing device comprising such an interface

Country Status (1)

Country Link
WO (1) WO2003027827A1 (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4812829A (en) * 1986-05-17 1989-03-14 Hitachi, Ltd. Three-dimensional display device and method for pointing displayed three-dimensional image
US6196917B1 (en) * 1998-11-20 2001-03-06 Philips Electronics North America Corp. Goal directed user interface

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"VIRTUAL LASER FOR 3D ENVIRONMENTS", IBM TECHNICAL DISCLOSURE BULLETIN, IBM CORP. NEW YORK, US, vol. 35, no. 6, 1 November 1992 (1992-11-01), pages 226 - 228, XP000314119, ISSN: 0018-8689 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1769832A3 (en) * 2005-09-02 2007-07-11 Nintendo Co., Ltd. Game apparatus, storage medium storing an American Football game program, and American Football game controlling method
EP2033697A3 (en) * 2005-09-02 2010-08-04 Nintendo Co., Ltd. Game apparatus, storage medium storing an American Football game program, and American Football game controlling method
US7884822B2 (en) 2005-09-02 2011-02-08 Nintendo Co., Ltd. Game apparatus, storage medium storing a game program, and game controlling method
US8094153B2 (en) 2005-09-02 2012-01-10 Nintendo Co., Ltd. Game apparatus, storage medium storing a game program, and game controlling method
US8723867B2 (en) 2005-09-02 2014-05-13 Nintendo Co., Ltd. Game apparatus, storage medium storing a game program, and game controlling method


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BY BZ CA CH CN CO CR CU CZ DE DM DZ EC EE ES FI GB GD GE GH HR HU ID IL IN IS JP KE KG KP KR LC LK LR LS LT LU LV MA MD MG MN MW MX MZ NO NZ PH PL PT RO SD SE SG SI SK SL TJ TM TR TT TZ UG US UZ VN YU ZA

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ UG ZW AM AZ BY KG KZ MD TJ TM AT BE CH CY DE DK ES FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: COMMUNICATION UNDER RULE 69(1) EPC (EPO FORM 1205 OF 14.07.2004)

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP