Method of providing a graphical user interface for a processing device, and programmable processing device comprising such an interface.
The invention relates to a method of providing a GUI for entering a command for a processing device, wherein an image is generated and a pointer is provided to select a location in the image, referring to the command.
The invention further relates to a programmable processing device comprising such a GUI.
Graphical user interfaces, or GUIs, find application in a wide range of processing devices. Computers and workstations are prime examples of such devices. Many other devices, however, also make use of GUIs to facilitate control of processes running on the device. Video editing equipment and game consoles, for example, also commonly make use of GUIs.
All of the above-mentioned devices are equipped with screens, displaying a two-dimensional image, and means for pointing a cursor to a location in the image. These means can be a joystick, mouse, keyboard, electronic tablet and pen, or a laser directed at a light-sensitive screen, for example. The processing device keeps track of the location of the cursor in the two-dimensional image. An input, such as a mouse-click, entered when the cursor is in a particular location in the image, activates the command linked to that particular location. The visible objects in the image provide the user with a means to relate the location to the command. In known GUIs, these objects commonly comprise icons, or buttons, sometimes provided with text.
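The conventional scheme just described can be sketched as a simple two-dimensional hit-test: each on-screen object occupies a region, and a click is resolved to the command linked to the region containing the cursor. The regions and command names below are illustrative assumptions, not taken from any actual GUI toolkit.

```python
# Regions are (x, y, width, height) rectangles with a linked command name.
regions = [
    ((10, 10, 80, 30), "open_file"),
    ((10, 50, 80, 30), "save_file"),
]

def command_at(cursor_x, cursor_y):
    """Return the command linked to the region under the cursor, if any."""
    for (x, y, w, h), command in regions:
        if x <= cursor_x < x + w and y <= cursor_y < y + h:
            return command
    return None
```

As the number of commands grows, the rectangles must shrink and crowd together, which is exactly the limitation discussed below.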
In a known example of the method according to the opening paragraph, providing links to a large number of commands requires the use of many objects. When two-dimensional objects are shown in the image, this also means that a larger screen must be used, or that the objects must be smaller and packed together more densely in the image. This
in turn makes selection more difficult, since the information comprised in the image is too dense for the user to comprehend. In addition, a lot of dexterity is required to place the cursor over the correct image component. Conventional rendering methods, using for example piecewise polynomials, also limit the accuracy with which an object can be selected.
Thus, the known example described above suffers from the disadvantage that the two-dimensional image limits the number of commands that can be linked. Above a certain density of image components, it becomes too difficult for the user to differentiate between locations. It is also no longer possible to retain a clear overview of selectable locations.
It is an object of the invention to provide a method of the above-mentioned kind that provides a GUI that allows easy selection of a command from among a large number of commands available for selection.
This object is achieved using the method according to the invention, which is characterised in that the image is generated by rendering at least part of a three-dimensional scene from a dataset, which comprises entries specifying paths traced in a plurality of directions from a viewpoint in the scene to first points in the scene, wherein each location in the image is associated with one of the paths comprised in the dataset, wherein selecting the location results in selecting the associated path, and in that a path is linked to the command.
Thus, paths through a three-dimensional scene can be selected, allowing both a larger number of commands to be provided for selection, and a sharper image to be rendered of the scene, so that selection of the wrong command is prevented.
A preferred embodiment of the invention is characterised in that at least one location in the image is associated with at least one path to a second point in the scene, hit by the path upon reflection or refraction at the
first point, and in that the path is linked to a second command and means are provided for selecting between all commands linked to the path, upon selection of the location in the image associated with the path.
Accordingly, one path can be linked to several commands, allowing a larger number of commands to be linked, using the same number of locations in the image. As an example, a transparent object can be shown in the image. In the GUI according to the invention, clicking on one location in the image associated with a path to a point on the object will make a choice of commands available. One command is linked to a point on the transparent object and one command is linked to a point on an object visible through the transparent object. In effect, one selectable location in the image is used to link two commands. The GUI is still easy to understand and use, since the user sees both the transparent object and the object behind it. Of course, the objects should in some way represent the commands, so that it is immediately obvious to the user what choice of commands can be expected.
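The idea of linking one path to several commands can be sketched as a mapping from a path identifier to an ordered list of commands, front object first. The identifiers and command names below are hypothetical, chosen only to mirror the transparent-object example above.

```python
# Maps a path identifier to the commands linked along that path: here one
# command for a point on a transparent object and one for a point on the
# object visible behind it.
commands_for_path = {
    "path_5": ["select_glass_cube", "select_object_behind_cube"],
    "path_6": ["select_cylinder"],
}

def commands_at(path_id):
    """Return every command linked to the selected path, so that means for
    choosing between them can be presented to the user."""
    return commands_for_path.get(path_id, [])
```

Selecting the location associated with "path_5" thus yields a choice of two commands from a single selectable location in the image.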
According to an aspect of the invention, physical properties, e.g. colour, luminance or coefficient of reflection, assigned to the first points are used to determine the appearance of the locations in the image associated with the paths to the first points.
This makes it possible to use a scene with a limited number of basic geometric objects, yet still allow a good level of differentiation between objects. In addition, a more realistic rendering of the scene is possible, so that navigation through the scene is made easier.
According to another aspect of the invention, at least one of the physical properties assigned to a first point is changed upon selection of the location in the image associated with the path to the first point.
Accordingly, components can be made more or less visible. This makes navigation through the GUI easier. For example, an opaque component can be made transparent to reveal components behind it, making further linked commands available for selection.
According to another aspect of the invention, the dataset comprises mapping co-ordinates for a set of first points on or in an object in the scene, and an image is selected and mapped onto the object according to the mapping co-ordinates.
Thus, the appearance of components in a scene can be changed, by mapping a different image onto them.
According to another aspect of the invention, locations in the image are defined by the point of intersection of the associated path with a rendering plane, and means are provided to change the geometric properties of the rendering plane and to subsequently re-render the image.
Thus, it is possible to obtain a better view of certain of the components, for example by zooming in on part of the scene. This makes selection of a component easier for the user.
The invention will now be explained in further detail with reference to the drawings.
Fig. 1 shows an example of a scene defined in a dataset, in order to illustrate some concepts used in the invention.
Fig. 2 schematically shows the mapping of an image onto a component in the scene.
Fig. 3 schematically represents components in an embodiment of a processing device comprising the GUI according to the invention.
The problems mentioned in the introduction are solved with the GUI provided with the invention, wherein the image is a rendering of a three-dimensional scene, generated using paths traced from a viewpoint to points in the scene. Whereas in the known method, all the information comprised in the GUI had to be packed into two dimensions, the invention adds a third dimension. The user of the GUI is, however, not overpowered, since navigating through a three-
dimensional scene corresponds to moving through the real world, and therefore comes naturally to human beings.
The user of the processing device is provided with means to select a location in the image, referring to a command for the processing device. Since the location in the image is associated with a path, selecting the location effectively means selecting a path. In the invention, a path is linked to a command, so a location in the image refers to a command by means of the path associated with that location. The path is traced to a point in the scene, on a component visible in the image, so that the user can relate the component to the command.
The image shown to the user is rendered from a dataset, which comprises entries specifying paths traced in a plurality of directions from a viewpoint to points in a three-dimensional scene. The intersection of each path with a rendering plane defines a location in the image of the scene associated with the path, as will be explained in more detail with reference to Fig. 1, showing a very simple scene.
The scene comprises a cube 1, a cylinder 2 and a light-source 3. The scene is described in a dataset, which defines components of the scene in terms of their geometry and location within the scene. The term component is used here to denote both objects and parts of objects. Thus, the cube 1 is a component, but a face of the cube 1 is also a component.
The dataset describing the scene comprises data resulting from a scan of the scene. It comprises entries for paths traced from a viewpoint 4 to points on or in a component in the scene. As an example, Fig. 1 shows a path 5 from the viewpoint 4 to a point on the upper surface of the cube 1.
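An entry of such a dataset can be sketched as follows: each entry records a path's direction from the viewpoint and the point it hits in the scene. The field names are assumptions made for illustration, not taken from the source.

```python
from dataclasses import dataclass, field

@dataclass
class PathEntry:
    direction: tuple      # unit direction of the path from the viewpoint
    hit_point: tuple      # (x, y, z) of the first point hit in the scene
    component: str        # component hit, e.g. a face of the cube
    properties: dict = field(default_factory=dict)  # colour, luminance, ...

# Path 5 of Fig. 1: from viewpoint 4 to a point on the upper face of cube 1.
# All numeric values here are invented for the example.
path_5 = PathEntry(direction=(0.0, -0.6, 0.8),
                   hit_point=(1.0, 2.0, 3.0),
                   component="cube_1_top_face",
                   properties={"colour": "green"})
```

Storing the hit point and component with each entry is what effectively links the path to the point on the surface of the cube 1, as described below.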
It is noted that in this case, the path 5 leads to a point on the surface of a three-dimensional component. However, within the scope of the invention, it is also possible that the dataset comprises entries for paths traced to points inside components like the cube 1. This could be necessary if the cube 1 is to be translucent, for example, to create the impression that it is made of glass.
Information on the point in the scene is stored with the entry for the path 5, so that the path 5 is effectively linked to the point on the surface of the cube 1.
In the preferred embodiment of the invention, the dataset comprises entries for paths traced in a large range of directions, covering a very wide spatial angle. In the example of Fig. 1 a hemisphere 6 is covered, but an entire sphere would be even more advantageous. The dataset thus comprises information for paths traced in all directions. This makes it possible to provide a GUI wherein the user can zoom in onto part of the scene.
The image is rendered from the dataset by defining a rendering plane 7. A clipping plane 8 can limit the depth of focus. The intersection with the rendering plane 7 of the path 5 between the viewpoint 4 and the cube 1 defines a pixel 9, in other words a location, in the image. As can be seen in Fig. 1, the spatial angle subtended by paths from the viewpoint 4 to the extremities of the rendering plane 7 is smaller than the spatial angle covered by the hemisphere 6. Thus, there is scope for zooming out.
To zoom out, the area of the rendering plane 7 and clipping plane 8 is enlarged. More of the data in the dataset is then used to render the image, since more paths intersect the rendering plane 7. Because the spatial angle subtended from the viewpoint 4 by the rendering plane 7 is smaller than the spatial angle covered by the traced paths used to generate the dataset, no re-scanning of the scene is required.
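The zoom-out step can be sketched as follows, under the assumption that the viewpoint sits at the origin and the rendering plane is perpendicular to the z-axis: a path contributes a pixel only if its intersection with the plane falls within the plane's extent, so enlarging the extent admits more of the already-traced paths without re-scanning the scene.

```python
def plane_intersection(direction, plane_distance):
    """Intersect a path from the origin (viewpoint) with the plane
    z = plane_distance; returns the 2D location in the rendering plane."""
    dx, dy, dz = direction
    if dz <= 0:
        return None                 # path never reaches the rendering plane
    t = plane_distance / dz
    return (dx * t, dy * t)

def visible(direction, plane_distance, half_width, half_height):
    """True if the path's intersection lies within the plane's extent."""
    hit = plane_intersection(direction, plane_distance)
    if hit is None:
        return False
    x, y = hit
    return abs(x) <= half_width and abs(y) <= half_height

# A steep path is clipped at the original extent of the rendering plane,
# but is included once the plane is enlarged to zoom out.
steep = (0.9, 0.0, 0.4)
```

Here `visible(steep, 1.0, 1.0, 1.0)` is false, while `visible(steep, 1.0, 3.0, 3.0)` is true: the same dataset entry simply becomes part of the larger image.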
As part of the invention, physical properties, e.g. colour, luminance or coefficient of reflection, assigned to the point on the surface of the cube 1, are used to determine the appearance of the location in the image associated with the path 5 to the point on the cube 1. These
properties can be defined in the dataset, as an image component with each entry, or assigned when the image is rendered, using a method described more fully in the co-pending patent application, entitled 'method of rendering one or more two-dimensional output images from a 3D-dataset, renderer and device for processing a 3D-dataset', by the same applicant.
If the cube 1 is green, then its colour is stored in the dataset. If the colour is to be defined later, a reference allowing an appropriate algorithm to change the colour is comprised in the dataset. In any case, the entry for the path 5 will comprise co-ordinates for the point on or in the cube 1 to which the path 5 has been traced.
In the preferred embodiment of the invention, the pixel 9 is associated with a second point on or in a component of the scene. In the example of Fig. 1 this is a point on the cylinder 2, hit by an outgoing path 10, which is a reflection of the path 5 to the cube 1. The direction of the outgoing path 10 is determined using physical laws of reflection.
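The physical law of reflection used for the outgoing path 10 is the standard mirror formula: for an incoming direction d and a unit surface normal n, the reflected direction is r = d - 2(d.n)n. A minimal sketch, assuming vectors are represented as 3-tuples:

```python
def reflect(d, n):
    """Reflect incoming direction d about the unit surface normal n,
    giving the direction of the outgoing path."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# Path 5 hits the horizontal upper face of cube 1 (normal pointing up):
# a downward incoming direction reflects back upward, continuing toward
# the cylinder 2 as the outgoing path 10.
outgoing = reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0))
```

The same function applies at every bounce, so the further reflected path 11 toward the light-source 3, mentioned below, is traced the same way.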
The same principle is used to allow an object behind a transparent object to appear in the image. In this case, a refracted path is traced from a first point of intersection with a component of the scene to a second point hit upon refraction of the path 5 at the first point in the scene.
It is noted that the path 10 can be traced even further back, for example by means of a further reflected path 11 from the cylinder 2 to the light-source 3. Physical properties such as the emitted colour, the intensity, or the focus of the light can be assigned to the light-source 3, thus influencing, through reflection in the cylinder 2, the appearance of the cube 1 in the image. This is described more fully in the co-pending patent application 'method of rendering one or more two-dimensional output images from a 3D-dataset, renderer and device for processing a 3D-dataset' by the same applicant.
Rendering methods, such as the one just described, are known in the art as ray-tracing algorithms. Various variants of such algorithms exist. It will be understood that the invention is not limited to just one of them, and that further algorithms used to refine the ray-tracing process can also be used within the scope of the invention. For example, the ray-tracing algorithm can be enhanced with a radiosity calculation or another method taking account of indirect lighting. The ray-tracing method here described serves merely as an illustrative example of a currently used embodiment.
A clear advantage of the use of such methods is that they provide a very sharply contrasting image. Because the commands for the processing device are linked to paths, it becomes less likely that the user of the GUI will select the wrong command. When commands are linked to components, correct rendering of the image becomes critical. Rendering the image using piecewise polynomials, for example, will not do, since such a rendering method will not provide a correct rendering of edges of components. Users might click on the wrong component if the components are not accurately delimited in the image.
Referring to Fig. 2, a further feature of the invention will now be explained, namely the possibility of mapping textures, or images, onto components in the scene, during rendering of the image. Thus, different versions of the image can be rendered from one and the same dataset.
This is made possible by the inclusion of mapping co-ordinates with each entry in the dataset. Fig. 2 shows an upper surface 12 of the cube 1 of Fig. 1. A system of mapping co-ordinates (u, v) is superimposed on the surface 12. An image 13 is likewise accorded a co-ordinate system, so that a point 14 on the surface 12 of the cube 1 maps to a point 15 in the image 13. To allow image mapping onto the surface 12 of the cube 1, the entry in the dataset for the path 5 between the viewpoint 4 and the point of contact with the cube 1, will comprise both the spatial co-ordinates and
the mapping co-ordinates of that point of contact. The latter co-ordinates are used during rendering to map the image 13, or a similarly rasterised texture, onto the surface 12.
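The (u, v) mapping step can be sketched as a texture lookup: the dataset entry for a path stores the mapping co-ordinates of its point of contact, and the renderer samples the currently selected raster image at those co-ordinates. The tiny image below is an invented example.

```python
def sample(image, u, v):
    """Sample a raster image (a list of rows) at mapping co-ordinates
    (u, v) in [0, 1], as the renderer would when mapping the image onto
    the surface of a component."""
    height = len(image)
    width = len(image[0])
    col = min(int(u * width), width - 1)
    row = min(int(v * height), height - 1)
    return image[row][col]

# A 2x2 image to be mapped onto the surface 12 of the cube 1.
image_13 = [["red", "green"],
            ["blue", "white"]]
```

Swapping in a different image, as the switcher described with Fig. 3 does, changes the appearance of the component without touching the dataset's geometry.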
It is noted that three-dimensional mappings are also within the scope of the invention. These are used to map three-dimensional objects onto three-dimensional components. For example, a sphere could be mapped onto the cube 1, such that it appears to be contained in the cube 1, if the cube 1 is transparent. In this case, points on or in the cube 1 are assigned three mapping co-ordinates (u, v, w), and the object to be mapped onto the cube 1 is likewise fitted into a three-dimensional co-ordinate system. The co-ordinates (u, v, w), comprised in the dataset, result from tracing a sufficient number of paths to points inside the cube 1. Any three-dimensional object can then be mapped onto the cube 1 during rendering, using the rasterised object to be mapped as input.
Turning now to Fig. 3, there is shown a more detailed schematic view of a processing device comprising an embodiment of the GUI according to the invention.
The device is provided with a pointer controller 16, such as a mouse, keyboard, joystick or tablet. A pointer processor 17 can translate commands issued with the pointer controller 16. It makes use of a dataset 18 to determine which path is currently selected. A user manipulating the pointer controller 16 is provided with an image 19 of the scene. The image 19 is rendered by a rendering engine 20, using the entries in the dataset 18.
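The selection flow just described can be sketched in two lookups: the pointer processor maps the cursor's pixel to the associated path via the dataset, and then to the command linked to that path. The pixel positions, path identifiers and command names below are illustrative assumptions.

```python
# Stand-ins for the dataset 18 (pixel -> path) and the command links
# (path -> command); in practice the first mapping follows from the
# intersections of the traced paths with the rendering plane.
path_for_pixel = {(120, 80): "path_5", (200, 150): "path_6"}
command_for_path = {"path_5": "start_program", "path_6": "open_file"}

def resolve_click(pixel):
    """Translate a click location into the command linked to its path."""
    path = path_for_pixel.get(pixel)
    if path is None:
        return None        # click on a location with no traced path
    return command_for_path.get(path)
```

This is the sense in which selecting a location "effectively means selecting a path": the pixel itself carries no command, only the path does.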
Commands to change the user's view of the scene are passed from the pointer processor 17 to a parameter and camera controller 21. The view of the scene afforded by the image 19 can be changed in a number of ways according to the invention.
As mentioned above, the user can zoom in on parts of the scene to make selection of paths easier. This means that a command issued with the pointer controller 16 is
translated by the pointer processor 17 and the parameter and camera controller 21 to a definition of a rendering plane 7 and/or clipping plane 8 with different geometric properties. The rendering engine 20 subsequently re-generates the image 19, determining the points of intersection of the paths with the re-defined rendering plane 7.
Alternatively, a command issued with the pointer controller 16, resting on a location associated with a path to a point on an object in the scene, can lead to a change in the physical properties of that object. For example, it can be given a more prominent colour, or made transparent. The command is then translated to an appropriate value for a parameter characterising the property, by the parameter and camera controller 21. The rendering engine 20 subsequently generates a new image 19. It uses the parameter value for the physical property just assigned to the point on the selected object to determine the appearance of the associated location in the image 19.
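The property-change step can be sketched as follows: the parameter and camera controller assigns a new value to a parameter characterising a property of the selected component, and the rendering engine uses that value when generating the new image. Component and parameter names here are assumptions for illustration.

```python
# Stand-in for the physical properties held for each component.
scene_properties = {"cube_1": {"colour": "green", "transparency": 0.0}}

def set_property(component, name, value):
    """Translate a user command into a parameter value for the component,
    as the parameter and camera controller 21 would."""
    scene_properties[component][name] = value

# Making the opaque cube transparent reveals components behind it, so
# commands linked to paths continuing through the cube become selectable.
set_property("cube_1", "transparency", 0.8)
```

The rendering engine would then re-render the image 19 using the updated parameter value to determine the appearance of the associated location.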
In the embodiment of Fig. 3, the pointer processor 17 can also issue instructions to a switcher 22, through a switch control 23. The switcher 22 selects images from amongst a selection of available images, to be mapped onto the component hit by the path selected with the pointer controller 16. The mapping is carried out by the rendering engine 20 according to mapping co-ordinates stored in the dataset 18 for the component. Through a command issued through the switch control 23, the switcher 22 can be directed to select a different image to be mapped by the rendering engine 20. The switcher 22 can be an external device, connected to the processing device according to the invention, or it can be comprised in the device, possibly as a software module.
The images available for mapping can be stored images 24 or images 25 generated by image generating applications 26. The latter feature is a straightforward extension of the basic method according to the invention, since commands for the processing device are linked to paths
anyhow. Some of these commands can be commands to direct an image generation application 26.
Of course, the main object of the invention is to provide a GUI for issuing commands 27 to an external application. The term external application is used here to denote applications unconnected with the GUI and the generation of this GUI. The commands 27 are usually commands connected to the actual functions the processing device is meant to perform. If the device is a computer, it could be a command to start a program, to retrieve a file or data object from memory, to select a variable in a program, or a command controlling a peripheral component connected to the device, etc.
The GUI comprised in a processing device can be provided using a program loaded onto the computer. In this case, the dataset, and algorithms enabling the method according to the invention to be carried out on the computer are provided as executable software. The processing device generates the image. New commands 27 can be added, and linked to paths which have not yet been linked to a command, enabling new external applications to be loaded onto the device.
Alternatively, the image and knowledge of which path is associated with which location in the image can be provided in a file, with a much simpler algorithm that allows selection of a path. In effect, the software loaded onto the device is the result of executing the method according to the invention, in this case.
It will be apparent to those skilled in the art, that the invention is not limited to the embodiments described above, which can be varied in a number of ways within the scope of the claims. For example, the image of the scene need not be provided on a flat screen, but could be provided using stereovision equipment to give an even more realistic impression of the three-dimensional scene.