WO1997034213A2 - Computerized graphics systems - Google Patents

Computerized graphics systems Download PDF

Info

Publication number
WO1997034213A2
Authority
WO
WIPO (PCT)
Prior art keywords
image
scene
generating
auxiliary
point
Prior art date
Application number
PCT/IL1997/000094
Other languages
French (fr)
Other versions
WO1997034213A3 (en)
Inventor
Eyal Ofek
Ari Rappoport
Original Assignee
I.I. Interactive Innovations Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from IL11750296A external-priority patent/IL117502A0/en
Priority claimed from IL11951096A external-priority patent/IL119510A0/en
Application filed by I.I. Interactive Innovations Ltd. filed Critical I.I. Interactive Innovations Ltd.
Priority to AU21057/97A priority Critical patent/AU2105797A/en
Priority to GB9819870A priority patent/GB2326806A/en
Publication of WO1997034213A2 publication Critical patent/WO1997034213A2/en
Publication of WO1997034213A3 publication Critical patent/WO1997034213A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects

Definitions

  • the present invention relates to computerized graphics systems in general, and in particular to computerized graphics systems for producing images of three-dimensional scenes.
  • BACKGROUND OF THE INVENTION
  • Producing a representation of a three-dimensional scene using a computerized graphics system is well known in the art.
  • a three-dimensional scene is stored in the computerized graphics system.
  • Conditions of lighting and a point of view of an observer of the scene are specified.
  • Some or all of reflections, diffuse reflections, specular reflections, shadows, and refractions due to objects in the scene are computed. Based on the computed results, an image of the three-dimensional scene is generated.
  • Lighting effects are generally implemented by affecting the color of surfaces in the scene by employing what is referred to in the art as local and global shading models.
  • In local shading models, the color of each object is determined by analyzing the object itself and the light sources alone.
  • In global shading models, the color of an object is affected by that of other objects.
  • The major local shading models determine the color of an object as being either constant, constant plus a correction for ambient light (which practically produces constant color), or as a function of the angle between the normal to the surface and the light source direction.
  • In Gouraud shading, such a color is computed for each vertex of the object, and the color is then interpolated to yield the color inside the object.
  • In Phong shading, the normal to the surface is interpolated and the model is invoked for each pixel.
  • ray tracing typically begins with a single ray of light through every image element or pixel, from the eye of the observer to an object in the scene, and then recursively traces other rays of light beginning at the intersection between the first ray of light and the object in the scene, spawning reflection, refraction and shadow rays.
  • Another global shading model known in the prior art is "radiosity." In the radiosity method, correct simulation of energy transfer in the scene is attempted.
  • A method known in the prior art for accelerating the computation of ray tracing is "beam tracing."
  • In beam tracing, beams of rays are traced instead of individual light rays. These beams are defined by reflecting the view point along the plane of a polygon in the scene, generating bounding planes passing through the view point and the edges of the polygon, and by computing, in object space, the polygons visible inside a beam.
  • Beam tracing has two primary disadvantages: 1) it is limited to reflections on planar polygons, with refractions being computed by a crude approximation, and curved objects not handled at all; and 2) all computations are done in object space, thus making the method highly inefficient and very difficult to implement.
  • beam tracing cannot be implemented using standard graphics APIs such as OpenGL, the standard API for 3-D graphics suggested by Silicon Graphics and adopted by most computer manufacturers.
  • the present invention seeks to provide an improved computerized graphics system for producing representations, typically pictures, of three-dimensional scenes.
  • the present invention provides an improved system which overcomes the known disadvantages of the prior art as discussed above.
  • the present invention is not limited to computing reflections on planar polygons and allows for computing lighting effects for curved objects.
  • the present invention is an image-space method, utilizing the z-buffer method and stencil bit planes, and can be implemented using standard graphics APIs.
  • the present invention also seeks to provide improved methods for receiving 3D information regarding a scene and generating therefrom a 2D view of the scene including at least one of shadow, refractions and reflections.
  • a method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first planar surface and a second surface reflected in the first planar surface, from 3D information regarding the 3D scene, the method including generating an auxiliary 2D image of the reflection of the second surface in the first planar surface, wherein the step of generating includes reflecting the second surface through the plane, thereby to define a virtual second surface, and using a clipping plane to remove at least a portion of the virtual second surface which obstructs the view from the point of view of the first planar surface, generating a main 2D image of the scene including a 2D image of the first surface which does not include the reflection of the second surface in the first surface, and compositing the auxiliary 2D image with the main 2D image, including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
  • a method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first non-planar surface and a second surface reflected in the first non-planar surface, from 3D information regarding the 3D scene including generating an auxiliary 2D image of the reflection of the second surface in the first non-planar surface, generating a main 2D image of the scene including an image of the first non-planar surface which does not include the reflection of the second surface in the first non-planar surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
  • the compositing step includes employing a stencil bit plane to differentiate pixels covered by the first surface from pixels not covered by the first surface.
  • a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first non-planar surface and at least one second surface wherein at least one of the second surfaces is reflected in each of the first non-planar surfaces including generating a main 2D image of the scene including an image of the first non-planar surface which does not include any reflections of any one of the second surfaces in the first non-planar surface, and for each pair of first and second surfaces characterized in that the second surface is reflected in the first surface generating an auxiliary 2D image of the reflection of the second surface in the first non-planar surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
  • the step of generating an auxiliary 2D image of the reflection includes computing a virtual surface including a 3D approximation of the reflection of the second surface in the first surface, and rendering the virtual surface, thereby to generate the auxiliary image.
  • the step of computing a virtual second surface includes selecting a set of second surface locations within the second surface to be reflected in the first surface, for each second surface location in the set determining a polygon P on an approximation of the first surface, wherein a real reflection point of the second surface location falls within the polygon, and computing a possible reflection point for at least one point of the polygon, and computing a location within the virtual surface corresponding to the second surface location based at least partly on the possible reflection points.
  • a method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first surface and a second surface reflected in the first surface, from 3D information regarding the 3D scene including generating an auxiliary 2D image, at a first resolution, of the reflection of the second surface in the first surface, and generating a main 2D image, at a second resolution which differs from the first resolution, of the scene including an image of the first surface which does not include the reflection of the second surface in the first surface including texture mapping the auxiliary image onto the first surface thereby to change the resolution of the auxiliary image to the second resolution.
  • a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first non-planar surfaces and at least one second surfaces wherein at least one of the second surfaces is reflected in each of the first non-planar surfaces the method including for each of the at least one first non-planar surfaces, generating a main 2D image of the scene including an image of the first non-planar surface which does not include any reflections of any one of the second surfaces in the first non-planar surface, and for each pair of first and second surfaces characterized in that the second surface is reflected in the first surface generating an auxiliary 2D image of the reflection of the second surface in the first non-planar surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
  • a method for computing shadows generated on a planar polygon by a light source in an image of a three-dimensional polygonal scene from a point of view of an observer including computing at least one shadowed polygon within a plane, wherein at least one shadow is cast on the shadowed polygon, computing a stencil which includes visible pixels of the at least one shadowed polygon, rendering objects in the scene, including computing a scene including the at least one shadowed polygon using hidden-surface removal, to produce a preliminary image, computing at least one shadow polygon by projecting a scene polygon in the direction of the light source such that shadow generating portions of the scene polygon are generally projected above the plane and non-shadow generating portions of the scene polygon are generally projected below the plane, and rendering the at least one shadow polygon in a shadow color using the stencil and using hidden-surface removal to produce at least one rendered shadow polygon.
  • the at least one first surfaces includes at least two first surfaces A and B and the step of generating a main 2D image for surface A includes generating a main 2D image of the scene including an image of the first non-planar surface A which does not include any reflections of any one of the second surfaces in the first non-planar surface A, and wherein the step of generating a main 2D image for surface B includes rendering surface B into the main 2D image of the scene generated for surface A.
  • a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first surfaces and at least one second surfaces wherein at least one of the second surfaces is reflected in each of the first surfaces including for each of the at least one first surfaces, generating a main 2D image of the scene including an image of the first surface which does not include any reflections of any one of the second surfaces in the first surface, and for each pair of first and second surfaces characterized in that the second surface is reflected in the first surface generating an auxiliary 2D image, at a first resolution, of the reflection of the second surface in the first surface, and generating a main 2D image, at a second resolution which differs from the first resolution, of the scene including an image of the first surface which does not include the reflection of the second surface in the first surface including texture mapping the auxiliary image onto the first surface thereby to change the resolution of the auxiliary image to the second resolution.
  • a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first planar surfaces and at least one second surfaces wherein at least one of the second surfaces is reflected in each of the first planar surfaces the method including for each of the at least one first planar surfaces, generating a main 2D image of the scene including an image of the first planar surface which does not include any reflections of any one of the second surfaces in the first planar surface, and for each pair of first planar surfaces and second surfaces characterized in that the second surface is reflected in the first planar surface generating an auxiliary 2D image of the reflection of the second surface in the first surface, wherein the step of generating includes reflecting the second surface through a plane which includes the first plane in the pair, thereby to define a virtual second surface, and using a clipping plane to remove at least a portion of the virtual second surface which obstructs the view from the point of view of the first planar surface.
  • the at least one second surface includes a plurality of second surfaces and wherein the step of generating an auxiliary 2D image for each pair of first and second surfaces includes generating a single auxiliary 2D image which includes reflections of all of the plurality of second surfaces in the first non-planar image.
  • the steps of generating the main image and compositing include texture mapping the auxiliary image onto the first surface.
  • the first non-planar surface includes a polygonal mesh.
  • the compositing step includes the step of alpha-blending the auxiliary and main images.
  • the step of generating an auxiliary 2D image includes generating an auxiliary 2D image of the reflections of a set of second surfaces including a plurality of second surfaces, in the first surface, thereby to generate a set of reflected second surfaces, and including rendering of the set of second surfaces so as to remove any hidden portions of the reflected second surfaces.
  • the step of rendering includes using a z-buffer to remove any hidden portions of the reflected second surfaces.
  • the step of generating an auxiliary 2D image includes recursively invoking the same method for different first surfaces in the scene.
  • a method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first surface and a second surface refracted in the first surface, from 3D information regarding the 3D scene including generating an auxiliary 2D image of the refraction of the second surface in the first surface, generating a main 2D image of the scene including an image of the first surface which does not include the refraction of the second surface in the first surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
  • the compositing step includes employing a stencil bit plane to differentiate pixels covered by the first surface from pixels not covered by the first surface.
  • a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first surfaces and at least one second surfaces wherein at least one of the second surfaces is refracted in each of the first surfaces the method including generating a main 2D image of the scene including an image of the first surface which does not include any refractions of any one of the second surfaces in the first surface, and for each pair of first and second surfaces characterized in that the second surface is refracted in the first surface generating an auxiliary 2D image of the refraction of the second surface in the first surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
  • the step of generating an auxiliary 2D image of the refraction includes computing a virtual surface including a 3D approximation of the refraction of the second surface in the first surface, and rendering the virtual surface, thereby to generate the auxiliary image.
  • the step of computing a virtual second surface includes selecting a set of second surface locations within the second surface to be refracted in the first surface, for each second surface location in the set determining a polygon on an approximation of the first surface, wherein a real refraction point of the second surface location falls within the polygon, and computing a possible refraction point for at least one point of the polygon, and computing a location within the virtual surface corresponding to the second surface location based at least partly on the possible refraction point.
  • a method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first surface and a second surface refracted in the first surface, from 3D information regarding the 3D scene including generating an auxiliary 2D image, at a first resolution, of the refraction of the second surface in the first surface, and generating a main 2D image, at a second resolution which differs from the first resolution, of the scene including an image of the first surface which does not include the refraction of the second surface in the first surface including texture mapping the auxiliary image onto the first surface thereby to change the resolution of the auxiliary image to the second resolution.
  • a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first surfaces and at least one second surfaces wherein at least one of the second surfaces is refracted in each of the first surfaces including for each of the at least one first surfaces, generating a main 2D image of the scene including an image of the first surface which does not include any refractions of any one of the second surfaces in the first surface, and for each pair of first and second surfaces characterized in that the second surface is refracted in the first surface generating an auxiliary 2D image of the refraction of the second surface in the first surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
  • a method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first planar surface and a second surface refracted in the first planar surface, from 3D information regarding the 3D scene including generating an auxiliary 2D image of the refraction of the second surface in the first planar surface by refracting the second surface through the first planar surface, thereby to define a virtual second surface, and generating a main 2D image of the scene including a 2D image of the first planar surface which does not include the refraction of the second surface in the first planar surface, wherein the step of generating includes texture mapping the auxiliary image onto the first surface.
  • the method also includes the step of using a clipping plane to remove at least a portion of the virtual second surface which obstructs the view from the point of view of the first planar surface.
  • the at least one first surface includes at least two first surfaces A and B and the step of generating a main 2D image for surface A includes generating a main 2D image of the scene including an image of the first surface A which does not include any refractions of any one of the second surfaces in the first surface A, and wherein the step of generating a main 2D image for surface B includes rendering surface B into the main 2D image of the scene generated for surface A.
  • the step of generating includes texture mapping the auxiliary image onto the first surface.
  • a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first planar surfaces and at least one second surfaces wherein at least one of the second surfaces is refracted in each of the first planar surfaces the method including for each of the at least one first planar surfaces, generating a main 2D image of the scene including an image of the first planar surface which does not include any refractions of any one of the second surfaces in the first planar surface, and for each pair of first planar surfaces and second surfaces characterized in that the second surface is refracted in the first planar surface generating an auxiliary 2D image of the refraction of the second surface in the first planar surface, wherein the step of generating includes refracting the second surface through a plane which includes the first planar surface in the pair, thereby to define a virtual second surface.
  • the at least one second surfaces includes a plurality of second surfaces and wherein the step of generating an auxiliary 2D image for each pair of first and second surfaces includes generating a single auxiliary 2D image which includes refractions of all of the plurality of second surfaces in the first image. Additionally in accordance with a preferred embodiment of the present invention, the steps of generating the main image and compositing include texture mapping the auxiliary image onto the first surface.
  • the first surface includes a polygonal mesh.
  • the compositing step includes the step of alpha-blending the auxiliary and main images.
  • the step of generating an auxiliary 2D image includes generating an auxiliary 2D image of the refractions of a set of second surfaces including a plurality of second surfaces, in the first surface, thereby to generate a set of refracted second surfaces, and including rendering of the set of second surfaces so as to remove any hidden portions of the refracted second surfaces.
  • the step of rendering includes using a z-buffer to remove any hidden portions of the refracted second surfaces.
  • the step of generating an auxiliary 2D image includes recursively invoking the same method for different first surfaces in the scene.
  • the method also includes generating an additional auxiliary 2D image of the reflection of the second surface in the first surface, wherein the step of generating a main 2D image includes generating a main 2D image of the scene including an image of the first surface which does not include both of the reflection of the second surface in the first surface and the refraction of the second surface in the first surface, and wherein the compositing step includes compositing both auxiliary 2D images with the main 2D image including mapping both auxiliary 2D images onto the image of the first surface in the main 2D image.
  • the method also includes generating an additional auxiliary 2D image of the shadow of the second surface on the first surface, wherein the step of generating a main 2D image includes generating a main 2D image of the scene including an image of the first surface which does not include both of the shadow of the second surface on the first surface and the refraction of the second surface in the first surface, and wherein the compositing step includes compositing both auxiliary 2D images with the main 2D image including mapping both auxiliary 2D images onto the image of the first surface in the main 2D image.
  • the method also includes generating an additional auxiliary 2D image of the shadow of the second surface on the first surface, wherein the step of generating a main 2D image includes generating a main 2D image of the scene including an image of the first surface which does not include both of the shadow of the second surface on the first surface and the reflection of the second surface in the first surface, and wherein the compositing step includes compositing both auxiliary 2D images with the main 2D image including mapping both auxiliary 2D images onto the image of the first surface in the main 2D image.
  • a method for generating a 2D image of a 3D scene from a point of view using 3D information regarding the 3D scene, the 3D scene including at least one objects and at least one surfaces, at least one of which is reflected in each of the at least one objects, wherein the 3D information regarding the 3D scene includes, for each of the at least one objects, at least one polygons which approximate the object, each of the at least one polygons having at least 3 vertices, the method including, for each of the at least one objects generating an environment map for a selected point, for each individual vertex from among the object's polygons' vertices, computing an ordered pair of texture coordinates within the environment map, and rendering the object's polygons using the environment map as a texture map, wherein the step of generating an environment map includes rendering at least one of the surfaces on each of four faces of a four-faced pyramid defined about the selected point.
  • the method also includes mapping the faces of the pyramid, on each of which at least one of the surfaces are rendered, onto a 2D circle, thereby to generate a modified environment map.
  • the step of computing an ordered pair of texture coordinates includes computing a reflection ray based on the point of view, the individual vertex and a normal to the object at the individual vertex, computing an intersection point of the ray with an object centered at the selected point, and computing a pair of texture coordinates as a function of a vector extending from the selected point to the intersection point.
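As one illustration of this per-vertex computation, the sketch below mirrors the viewing direction about the vertex normal, intersects the resulting ray with a sphere centered at the selected point, and derives an ordered pair of texture coordinates from the vector between the selected point and the intersection point. The choice of a sphere, the latitude/longitude parameterization and all identifiers are assumptions made for illustration; the circular environment map actually described in this specification is the one discussed below with reference to Figs. 22 - 24.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3   sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3   add(Vec3 a, Vec3 b)   { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3   mul(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3   unit(Vec3 a)          { return mul(a, 1.0 / std::sqrt(dot(a, a))); }

// Reflection ray at a vertex: the viewing direction mirrored about the normal.
Vec3 reflectionRay(Vec3 viewPoint, Vec3 vertex, Vec3 normal) {
    Vec3 I = unit(sub(vertex, viewPoint));   // incident (view) direction
    Vec3 N = unit(normal);
    return sub(I, mul(N, 2.0 * dot(N, I)));  // R = I - 2 (N.I) N
}

// Far intersection of the ray (origin p, unit direction d) with a sphere of
// radius r centered at c, standing in for "an object centered at the selected point".
Vec3 raySphereFarHit(Vec3 p, Vec3 d, Vec3 c, double r) {
    Vec3 oc = sub(p, c);
    double b = dot(oc, d);
    double disc = std::max(b * b - (dot(oc, oc) - r * r), 0.0);
    return add(p, mul(d, -b + std::sqrt(disc)));
}

// Texture coordinates as a function of the vector from the selected point c
// to the intersection point; a simple latitude/longitude mapping stands in
// for the circular map of Figs. 22 - 24.
void textureCoords(Vec3 c, Vec3 hit, double& u, double& v) {
    const double kPi = 3.14159265358979323846;
    Vec3 d = unit(sub(hit, c));
    u = 0.5 + std::atan2(d.z, d.x) / (2.0 * kPi);
    v = 0.5 - std::asin(std::min(1.0, std::max(-1.0, d.y))) / kPi;
}
```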
  • the method also includes the step of generating a depth map of the surfaces as seen from the selected point, the depth map corresponding to the environment map, and wherein the combining step includes assigning a weight to each individual one of the directions based on a depth which the depth map assigns to the individual one of the directions.
  • the selected point includes a user-selected location in the scene.
  • the selected point, for which an environment map is generated for an individual one of the at least one objects includes a central location of the individual object.
  • apparatus for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first non-planar surface and a second surface reflected in the first non-planar surface, from 3D information regarding the 3D scene including auxiliary 2D image apparatus which generates the reflection of the second surface in the first non-planar surface, main 2D image apparatus which generates the scene including an image of the first non-planar surface which does not include the reflection of the second surface in the first non- planar surface, and compositor apparatus which combines the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
  • apparatus for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first planar surface and a second surface reflected in the first planar surface, from 3D information regarding the 3D scene including auxiliary 2D image apparatus which generates the reflection of the second surface in the first planar surface
  • the auxiliary 2D image apparatus includes reflecting apparatus which reflects the second surface through the plane, thereby defining a virtual second surface, and removal apparatus which uses a clipping plane to remove at least a portion of the virtual second surface which obstructs the view from the point of view of the first planar surface
  • main 2D image apparatus for generating the scene including a 2D image of the first surface which does not include the reflection of the second surface in the first surface includes reflecting apparatus which reflects the second surface through the plane, thereby defining a virtual second surface
  • removal apparatus which uses a clipping plane to remove at least a portion of the virtual second surface which obstructs the view from the point of view of the first planar surface
  • a method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first surface and a second surface reflected in the first surface, from 3D information regarding the 3D scene including recursively performing the following steps generating an auxiliary 2D image of the reflection of the second surface in the first surface, generating a main 2D image of the scene including an image of the first surface which does not include the reflection of the second surface in the first surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
  • the total number of vertices defined by the object approximating polygons may be less than 3 times the number of object approximating polygons since two or more polygons may have one or more vertices in common.
  • the number of polygons used to approximate different objects need not be the same.
  • the "selected point”, also termed herein the "object point”, for which an environment map is generated in the course of rendering an object's polygons, may be interior to that object, but alternatively may be on the surface of that object or even exterior to that object.
  • Fig. 1A is a simplified block diagram of a computerized graphics system constructed and operative in accordance with a preferred embodiment of the present invention;
  • Fig. 1B is a simplified pictorial illustration of an example of a scene as operated upon in accordance with a preferred embodiment of the present invention;
  • Figs. 2A and 2B, taken together, comprise a simplified flowchart illustration of a preferred method of operation of the system of Fig. 1A;
  • Fig. 3 is a simplified flowchart illustration of a preferred implementation of step 110 of Fig. 2A;
  • Fig. 4 is a simplified flowchart illustration of a preferred implementation of step 130 of Fig. 2A;
  • Fig. 5 is a simplified flowchart illustration of a preferred implementation of steps 190 and 200 of Fig. 2B;
  • Fig. 6 is a simplified flowchart illustration of a preferred method of implementation of step 210 of Fig. 2B;
  • Fig. 7 is a simplified flowchart illustration of a preferred implementation of step 230 of Fig. 2B;
  • Fig. 8 is a simplified flowchart illustration of a preferred implementation of step 220 of Fig. 2B;
  • Fig. 9 is a simplified flowchart illustration of a preferred implementation of step 390 of Fig. 7;
  • Fig. 10 is a simplified flowchart illustration of a preferred implementation of step 460 of Fig. 9;
  • Figs. 11A and 11B, taken together, are a simplified flowchart illustration of a preferred implementation of step 480 of Fig. 10;
  • Fig. 12 is a simplified flowchart illustration of a preferred implementation of step 490 of Fig. 10;
  • Fig. 13 is a simplified flowchart illustration of a preferred implementation of step 500 of Fig. 10;
  • Fig. 14 is a simplified flowchart illustration of a preferred implementation of step 510 of Fig. 10;
  • Fig. 15 is a simplified flowchart illustration of a preferred implementation of steps 520 and 530 of Fig. 10;
  • Fig. 18 is a pictorial illustration of a scene including four spheres each having a shadow rendered on the floor plane of the scene;
  • Fig. 19 is a pictorial illustration of one of the spheres of Fig. 18, a point of view from which the sphere is being observed, and a four-faced pyramid surrounding the sphere;
  • Figs. 20A - 20D are pictorial illustrations of the scene of Fig. 18 rendered onto each of the four faces, respectively, of the pyramid of Fig. 19
  • Fig. 21 is a pictorial illustration of the pyramid of Fig. 19 unfolded onto a planar surface, where the scene of Fig. 18 is rendered onto each of the faces of the pyramid;
  • Fig. 22 is a pictorial illustration of a tessellation of the front face of the pyramid;
  • Fig. 23 is a pictorial illustration of the tessellation of Fig. 22 mapped into a 2D circle
  • Fig. 24 is a circular environment map including a mapping, as illustrated in Figs. 22 - 23, of each of the faces of the pyramid onto the 2D circle of Fig. 23;
  • Fig. 25 is a rendering of the entire scene of Fig. 18 including a reflection of the entire scene in the sphere of Fig. 19, the reflection having been generated by texture-mapping the environment map of Fig. 24 onto the sphere 1510;
  • Fig. 26 is a simplified block diagram of a system for generating a 2D image, including reflections, of a 3D scene from a point of view, using 3D information regarding the 3D scene,
  • Fig. 27 is a simplified flowchart illustration of a preferred method, constructed and operative in accordance with a preferred embodiment of the present invention, for generating a 2D image of a 3D scene using a pyramidal technique to compute an environment map
  • Fig. 28 is a simplified flowchart illustration of a first preferred method, constructed and operative in accordance with a preferred embodiment of the present invention, for generating texture coordinates useful in generating a 2D image of a 3D scene either according to the method of Fig. 27 or according to conventional methods;
  • Fig. 29 is a simplified flowchart illustration of a second preferred method, constructed and operative in accordance with a preferred embodiment of the present invention, for generating texture coordinates useful in generating a 2D image of a 3D scene either according to the method of Fig. 27 or according to conventional methods;
  • Fig. 30 is a pictorial illustration of the relationship between geometric entities participating in the methods shown and described herein.
  • Fig. 1A is a simplified block diagram of a computerized graphics system constructed and operative in accordance with a preferred embodiment of the present invention. It is appreciated that the apparatus of Fig. 1A may be implemented partly in computer hardware and partly in software, or entirely in custom hardware. Preferably, the apparatus of the present invention is implemented in suitably programmed computer hardware.
  • the computer hardware may comprise, for example, a Silicon Graphics Indigo-2 Impact Graphics workstation, commercially available from Silicon Graphics, Mountain View, California, USA.
  • the system of Fig. 1A comprises a control unit 10, receiving an input comprising a scene, as described more fully below with reference to Figs. 2A and 2B.
  • the input to the control unit 10 typically comprises 3-D coordinates of all objects and light sources in the scene, color characteristics for all objects in the scene, textures, and viewing parameters.
  • the system of Fig. 1A also comprises a scan-conversion unit 15, receiving input from the control unit 10.
  • the input received by the scan-conversion unit 15 typically comprises 2-D coordinates of all polygons in the scene and depth information, and optionally also comprises color information, which color information may have been computed by the control unit 10 based on lighting information.
  • the operation of the scan-conversion unit 15 is described more fully below with reference to Figs. 2A and 2B.
  • the system of Fig. 1A also comprises a reflections, refractions and shadows unit (RRS) 20.
  • the RRS 20 receives an input from the control unit 10 indicating what operation is to be carried out and, based on the received input, provides a control signal to a 3-D transformation and lighting unit (3-D unit) 25.
  • the RRS 20 also provides a control signal to a stencils unit 30 and a z-buffer 35.
  • the z-buffer 35 may comprise a plurality of z-buffers.
  • the stencils unit 30 and the z-buffer 35 are also connected to the scan conversion unit 15.
  • the system of Fig. 1A also comprises an image buffer 40, an optional accumulation buffer 45, an alpha bit planes unit 50, and a texture bit planes unit 55.
  • the optional accumulation buffer 45 is used as a working area, preferably of the same size as the image buffer 40.
  • the optional accumulation buffer 45 is referred to herein primarily for purposes of clarity of description; it is appreciated that the present invention need not include an accumulation buffer 45.
  • the system of Fig. 1A also comprises an image output unit 60, receiving input from the image buffer 40.
  • the output unit 60 typically produces the image contained in the image buffer 40 on a device such as a video monitor, printer, plotter, or other output device as is well known in the art.
  • Each of the units 15, 30, 35, 40, 45, 50, and 55 is preferably connected to each of the other units 15, 30, 35, 40, 45, 50 and 55 with a two-way connection.
  • the term "stencil” or “stencil bit plane,” as used herein, refers generally to an image used to mask another image; that is, to specify a sub-image comprising a subset of the pixels of an image.
  • a stencil is a single bit image at the same resolution as that of the generated image, comprising a single bit per pixel.
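Because the specification notes that the method can be implemented using standard graphics APIs such as OpenGL, the following is a minimal sketch, in OpenGL 1.x terms, of one conventional way to build and use such a one-bit mask: the masking surface is first rendered only into the stencil planes, and subsequent drawing is then restricted to the marked pixels. The drawing callbacks passed in are placeholders, not functions defined in this specification.

```cpp
#include <GL/gl.h>

// drawSurface renders the masking surface O; drawAuxiliary renders the
// geometry (e.g. a reflected world) to be confined to O's pixels.
void renderThroughStencil(void (*drawSurface)(), void (*drawAuxiliary)()) {
    // Pass 1: mark the pixels covered by the surface in the stencil planes,
    // without disturbing the color or depth buffers.
    glClearStencil(0);
    glClear(GL_STENCIL_BUFFER_BIT);
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 1, 1);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    drawSurface();

    // Pass 2: draw only where the stencil bit is set, i.e. only into the
    // sub-image defined by the stencil.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    glStencilFunc(GL_EQUAL, 1, 1);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawAuxiliary();

    glDisable(GL_STENCIL_TEST);
}
```

In a recursive implementation, the same pattern can be repeated per reflecting or refracting surface, for example with a distinct stencil value per recursion level.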
  • Fig. 1B is a simplified pictorial illustration of an example of a scene as operated upon in accordance with various preferred embodiments of the present invention.
  • the scene of Fig. 1B is useful in understanding terms used in the present specification, as it depicts reflections, refractions, shadows, and other aspects of a scene as may be produced by various steps in various embodiments described below.
  • the scene of Fig. 1B comprises a polygon 70 on which a shadow is cast, a shadow of a polygonal object 74 as cast on polygon 70, the polygonal object 74, a reflecting surface 75, a refracting surface 76, a refracting beam 78, a reflection 80, and a light source 82.
  • Figs. 2A and 2B which, taken together, comprise a simplified flowchart illustration of a preferred method of operation of the system of Fig. 1A. It is appreciated that the method of Figs. 2A and 2B, as well as other methods described below, need not necessarily be performed in a particular order, and that in fact, for reasons of implementation, a particular implementation of the methods may be performed in a different order than another particular implementation.
  • the method of Figs. 2A and 2B preferably includes the following steps:
  • a scene to be processed is input by the control unit 10, along with viewing parameters which describe how the scene is to be viewed (step 100).
  • the scene typically comprises both information about surfaces in the scene and lighting parameters.
  • the information about surfaces typically includes both geometry and color characteristics of the surfaces, typically including the following: base color; diffuse reflection coefficients; specular reflection coefficients; refraction indices; and other characteristics.
  • the specular reflection coefficients are particularly useful in performing the methods of the present invention.
  • the specular reflection coefficients are usually described in RGBA form, as is well known in the art and are described in chapter 13 of Foley et al, referred to above.
  • the lighting parameters typically define the location, orientation, and intensity of lighting in the scene.
  • Viewing parameters typically comprise point of view parameters, which define the point of view of an observer of the scene, including location of the point of view, orientation of view, and field of view. Processing by the method of Figs. 2A and 2B is in accordance with the viewing parameters.
  • A virtual world W, a stencil S, and a rendered surface RO are initialized (step 110).
  • a preferred implementation of step 110 is described in more detail below with reference to Fig. 3.
  • a stenciled portion SZ of the z-buffer, defined according to the stencil S, is initialized, typically by setting all values within SZ to the largest possible number, also referred to herein as setting to infinity (step 120).
  • All surfaces in W are rendered into a stenciled portion SB of the accumulation buffer 45 and into SZ, including computation of a color C for each pixel point P through which each surface O is visible (step 130).
  • color includes information on color in the usual sense as well as on reflectance and on refraction characteristics.
  • rendering may include the application of a clipping plane, which is a plane defining a portion of the scene, on one side of the plane, which is to be clipped or removed from the image. Methods for applying a clipping plane are well known in the art.
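By way of example only, under OpenGL 1.x a clipping plane can be installed as follows; the plane is assumed here to be given as ax + by + cz + d = 0, with OpenGL keeping the half-space in which the expression is non-negative.

```cpp
#include <GL/gl.h>

// Install the plane a*x + b*y + c*z + d = 0 as clipping plane 0.
// Geometry on the side where a*x + b*y + c*z + d < 0 is clipped away.
void enableClippingPlane(double a, double b, double c, double d) {
    GLdouble eqn[4] = { a, b, c, d };
    glClipPlane(GL_CLIP_PLANE0, eqn);
    glEnable(GL_CLIP_PLANE0);
}

void disableClippingPlane() {
    glDisable(GL_CLIP_PLANE0);
}
```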
  • the stenciled portion SB of the accumulation buffer 45 is defined by the stencil S.
  • a stenciled portion SI of the image buffer 40 is similarly defined.
  • a preferred implementation of step 130 is described in more detail below with reference to Fig. 4.
  • The current color of each pixel in the stenciled portion SI of the image buffer 40 is replaced with a color computed as a weighted average of the current color of the pixel and the color of the corresponding pixel in the accumulation buffer 45 (step 140).
  • This process is known as alpha blending.
  • the weights of the weighted average are determined based, at least in part, on characteristics of the rendered surface RO such as, for example, reflectance and opacity of RO. Other factors, such as level of recursion of the method may also have an effect on the weights.
  • Typical methods, known in the art, for determining the weights are described in Foley et al, referred to above, in chapter 16.
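As a concrete reading of the weighted average, the sketch below blends, per pixel, the current color with the color of the corresponding pixel from the other buffer, using a single weight w that would in practice be derived from properties of the rendered surface RO such as its reflectance and opacity; the simple linear form shown is an illustrative assumption, not a weighting prescribed by this specification.

```cpp
struct Color { float r, g, b; };

// Alpha blending: weighted average of the current pixel color and the
// corresponding pixel color from the other buffer.  w is the weight given to
// the blended-in color, 0 <= w <= 1 (e.g. roughly the reflectance of RO,
// so a weakly reflective floor might use w around 0.3 for its reflection).
Color alphaBlend(Color current, Color blendedIn, float w) {
    return { (1.0f - w) * current.r + w * blendedIn.r,
             (1.0f - w) * current.g + w * blendedIn.g,
             (1.0f - w) * current.b + w * blendedIn.b };
}
```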
  • recursion criteria may be defined in order to determine when a given level of recursion is to be terminated. Such criteria may include: time spent on processing a particular level of the method; amount of change in the rendered surface RO computed in the current level of iteration; reaching a maximum recursion level, such as, for example, 4 levels of recursion; or screen space occupied by the surface being processed being smaller than a predetermined minimum size such as, for example 10 pixels. Examples of a scene after performing one or more recursions of steps 120 through 140 are described in more detail below with reference to Figs. 17A, 17B, 17D, 17E, 17G, and 17H.
  • Recursion criteria are checked to see whether any recursion-ending criteria are satisfied (step 150). If no recursion-ending criteria are satisfied, processing continues with step 190. If some recursion criteria are satisfied, the current level of recursion is ended (step 160). As each recursion level is ended, a check is made as to whether processing is presently at the top recursion level (step 170). If so, the method of Figs. 2A and 2B terminates (step 180), and the present image in the image buffer 40 is output by the control unit 10, being the result of the method of Figs. 2A and 2B. Otherwise, processing continues at the next higher recursion level (step 185).
  • a surface of interest is generally any visible surface which has an effect on computation of the rendered image.
  • a surface of interest includes any visible surface for which we are interested in at least one of the following: computing shadows falling on the object; computing reflections from the object; computing refraction through the object. Whether or not a surface is of interest may be determined using methods well-known in the art, based partly on the color as described above.
  • The next surface of interest O, found in step 190, is selected for processing (step 200).
  • a preferred implementation of the method of steps 190 and 200 is described in more detail below with reference to Fig. 5.
  • step 210 A new stencil S' is prepared (step 210).
  • a preferred implementation of step 210 is described in more detail below with reference to Fig. 6.
  • Step 210 may, in principle, be carried out at the same time as step 130, described above. Carrying out step 210 at the same time as step 130 may be preferable in order to increase the efficiency of the method of Figs. 2A and 2B.
  • Shadows falling on surface O are computed if desired (step 220). Methods of shadow computation are well known in the art and are described, for example, in Reeves, Salesin, and Cook, referred to above. A preferred implementation of step 220, believed to be possibly preferable to those known in the art, is described in more detail below with reference to Fig. 8.
  • If computation of reflections and/or refractions from O is desired, a new virtual world W' is computed, and steps beginning at step 120 are then performed recursively, using S', W' and O instead of S, W, and RO (steps 230 and 240).
  • W' is computed based in part on the surface O and on the fact that reflections and/or refractions are to be computed.
  • An example of a scene after performing all recursions of steps 120 through 240 is described in more detail below with reference to Fig. 171.
  • Fig. 3 is a simplified flowchart illustration of a preferred implementation of step 110 of Fig. 2A.
  • the method of Fig. 3 preferably comprises the following steps:
  • The virtual world W is set to contain all surfaces in the scene (step 250). The stencil S is set to cover the entire image (step 260).
  • the rendered surface RO is set to be the background, that is, an infinite plane at a very great distance, further away from the point of view than all surfaces, (step 270)
  • Fig. 4 is a simplified flowchart illustration of a preferred implementation of step 130 of Fig. 2A.
  • the method of Fig. 4 preferably includes the following steps:
  • a color C is computed for each pixel through which O is visible (step 290).
  • the color computation takes the following into account: the color of O, optionally using a texture map; the color of ambient light; the normal to O at P, optionally perturbed using a bump map; the location and orientation of light sources; and other factors, as are well known in the art.
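For illustration only, a per-pixel color computation along these lines, for a single color channel, might be sketched as follows; the Lambert-plus-ambient form and the coefficient names are assumptions standing in for whichever local shading model is used, and texture and bump map lookups are omitted.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3f { float x, y, z; };
static float dot3(Vec3f a, Vec3f b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3f norm3(Vec3f a) {
    float l = std::sqrt(dot3(a, a));
    return { a.x / l, a.y / l, a.z / l };
}

// Color C for a pixel through which surface O is visible at point p:
// an ambient term plus a diffuse term driven by the angle between the
// (optionally bump-perturbed) normal and the light direction.
float shade(Vec3f p, Vec3f normal, Vec3f lightPos,
            float baseColor, float ambient, float lightIntensity) {
    Vec3f n = norm3(normal);
    Vec3f l = norm3({ lightPos.x - p.x, lightPos.y - p.y, lightPos.z - p.z });
    float diffuse = std::max(0.0f, dot3(n, l));
    return baseColor * (ambient + lightIntensity * diffuse);
}
```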
  • Fig. 5 is a simplified flowchart illustration of a preferred implementation of steps 190 and 200 of Fig. 2B.
  • the method of Fig. 5 preferably includes the following steps:
  • O is considered a surface of interest (step 330)
  • Fig. 6 is a simplified flowchart illustration of a preferred implementation of step 210 of Fig. 2B.
  • the method of Fig. 6 preferably includes the following steps:
  • Pixels covered by O and through which O is visible are determined, using SZ (step 350). Typically, this determination is made using a z-buffer operating on the stenciled portion SZ. Computational methods using a z-buffer are well-known in the art and are described, for example, in Foley et al, referred to above, at pages 668-672.
  • a new stencil S' is then computed as the intersection of stencil S and the pixels determined in step 350 (step 360).
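One hedged way to realize this intersection on OpenGL-class hardware is sketched below, under the assumption that the stencil planes hold the recursion level: the surface O is drawn again with the stencil test requiring the current value of S and with the depth test set to pass only where the stored depth in SZ is matched, so that exactly the pixels that are both inside S and show O are incremented. This is one known multi-pass technique, not necessarily the implementation intended by the specification.

```cpp
#include <GL/gl.h>

// Compute the new stencil S' as the intersection of the current stencil S
// (pixels whose stencil value equals `level`) and the pixels through which O
// is visible according to the depth values already in SZ.  Pixels passing
// both tests are incremented to level + 1, which then identifies S'.
void intersectStencilWithVisibleO(GLint level, void (*drawSurfaceO)()) {
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_EQUAL, level, 0xFF);      // inside current stencil S
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);    // mark pixels that also pass depth
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_EQUAL);                     // only where O is the visible surface
    glDepthMask(GL_FALSE);                     // do not disturb SZ
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    drawSurfaceO();
    // Restore conventional state.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
}
```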
  • Fig. 7 is a simplified flowchart illustration of a preferred implementation of step 230 of Fig. 2B.
  • the method of Fig. 7 preferably includes the following steps:
  • a surface O is determined (step 370).
  • the viewer's location and orientation are determined from the initial viewing parameters (step 380).
  • A new virtual world W' is computed as a transformation of the scene, based on the surface of O (step 390).
  • a preferred implementation of step 390 is described in more detail below with reference to Fig. 9.
  • The transformation depends on the type of surface O. If, for example, the surface O is planar polygonal, W' contains all surfaces in the scene reflected and/or refracted through the plane on which the surface O lies. In the case of reflection, W' does not contain the reflection of surfaces that lie on the side of the surface of O away from the viewer. In the case of refraction, W' does not contain the refraction of surfaces that lie on the side of the surface of O nearest the viewer. In the case of a planar polygonal O, each point in W is reflected through the plane in a uniform way; that is, the same reflection transformation is used for all points.
  • Briefly, in the case of a curved surface O2 (described in more detail below with reference to Figs. 10 - 16), a set of sample points is generated on O2. The sample points, when connected, form a subdivision of O2 into polygons; the subdivision is also known as a tessellation. For each point Q in the scene W, it is determined, as described below, in which tessellation polygon T the point Q is reflected and/or refracted.
  • Q is refracted by taking refraction coefficients into account and applying Snell's law, as described below with reference to Fig. 11B.
  • a weighted average of the reflected and/or refracted points is then computed, and the final location of Q in the reflected and/or refracted world is taken to be the weighted average.
  • the weights for the weighted average are determined as a function of the proximity of Q to the rays reflected and/or refracted from the vertices.
  • Fig. 8 is a simplified flowchart illustration of a preferred implementation of step 220 of Fig. 2B.
  • the method of Fig. 8 is described for the case of a single shadowed polygon.
  • the term "shadowed polygon,” as used herein, refers generally to a polygon on which shadow falls.
  • the term “shadow polygon,” as used herein, refers generally to a representation of a shadow that is used to create shadowed polygons. It is appreciated that the method of Fig. 8 may be applied repeatedly to a plurality of shadowed polygons in the scene, and that shadow polygons may be used to approximate non-polygonal objects in the scene.
  • the method of Fig. 8 preferably includes the following steps:
  • Shadow polygons are computed (step 415).
  • a preferred method for computing shadow polygons is as follows:
  • Let G be a row vector (a, b, c, d) of the coefficients of the implicit equation of the plane in which the polygon PO, on which the shadow is to fall, lies.
  • M = (L dot G - epsilon) I - G^T times L, where: dot indicates dot product; 0 < epsilon << 1; I is the 4 x 4 identity matrix; G^T is the transpose of G; and times indicates matrix multiplication.
  • the projection of a 3-D point Q is given by QM.
  • Choosing epsilon > 0 maintains the spatial order between polygon PO and the shadow polygons, resulting in the removal of shadows of objects that are beyond polygon PO during the hidden-surface removal stage of the rendering.
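A minimal sketch of building this projection matrix follows, in the row-vector convention used here (a point Q is projected as QM); L is assumed to be the homogeneous position of the light source, G = (a, b, c, d) the plane coefficients, and G^T times L the 4 x 4 outer product.

```cpp
#include <array>

using Mat4 = std::array<std::array<double, 4>, 4>;
using Vec4 = std::array<double, 4>;

// Shadow projection matrix M = (L dot G - epsilon) I - G^T times L, for row
// vectors: the projection of a homogeneous point Q onto the plane G, away
// from the light L, is Q * M.  L is assumed to be (x, y, z, 1).
Mat4 shadowMatrix(const Vec4& L, const Vec4& G, double epsilon /* 0 < epsilon << 1 */) {
    double LdotG = L[0]*G[0] + L[1]*G[1] + L[2]*G[2] + L[3]*G[3];
    Mat4 M{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            M[i][j] = (i == j ? LdotG - epsilon : 0.0) - G[i] * L[j];
    return M;
}

// Row-vector multiplication: Q' = Q * M.
Vec4 project(const Vec4& Q, const Mat4& M) {
    Vec4 r{0.0, 0.0, 0.0, 0.0};
    for (int j = 0; j < 4; ++j)
        for (int i = 0; i < 4; ++i)
            r[j] += Q[i] * M[i][j];
    return r;
}
```

A point lying on the plane maps to a multiple of itself, and the light position maps to itself, which is what makes this a projection toward the plane along the light rays.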
  • a stencil, including the visible pixels of polygon PO is computed (step 420).
  • Objects in the scene are rendered, preferably by computing a z-buffer of the scene including the shadowed polygon, preferably with hidden surfaces removed (step 425).
  • The shadow polygon is then rendered in a shadow color, preferably using the stencil and the z-buffer computed in step 425, to produce a rendered shadow polygon.
  • Preferably, alpha blending is used to blend the generated shadows with the image generated in step 425; the weights used are determined by the desired intensity of the shadows.
  • the rendered shadow polygon is then blended with the preliminary shadowed image to produce a final shadowed image (step 435). Examples of a scene after performing step 435 are described in more detail below with reference to Figs. 17Q and 17R.
  • Fig. 9 is a simplified flowchart illustration of a preferred implementation of step 390 of Fig. 7.
  • the method of Fig. 9 preferably comprises the following steps: Each surface in the scene is approximated with a polygonal mesh (step 450).
  • a polygonal mesh typically comprises: a set of vertices; a set of edges connecting the vertices and defining polygonal faces; and optionally a set of normals, one at each vertex.
  • the approximation with a polygonal mesh may be an exact approximation such as, for example, when a planar polygon is represented by a mesh containing a plurality of polygons.
  • the final accuracy of the generated image depends in large part on the density of the mesh, since a denser mesh generally gives a closer approximation and therefore a higher quality image.
  • For each vertex P of the mesh, a reflected and/or refracted image P', or an approximation thereto, is computed (step 460).
  • The collection of all such reflected and/or refracted images P' defines a mesh belonging to the reflected and/or refracted world W', and thus defines the reflected and/or refracted world W'.
  • the computation of P' of a vertex P differs according to the type of reflecting and/or refracting surface SR, and is described as follows with reference to Fig. 10.
  • Fig. 10 is a simplified flowchart illustration of a preferred implementation of step 460 of Fig. 9.
  • the method of Fig. 10 preferably includes the following steps:
  • Each reflecting and/or refracting surface is classified and a reflected and/or refracted image is computed accordingly (step 470).
  • The reflecting and/or refracting surface is classified, and the reflected and/or refracted image is computed, as follows: for a planar surface (step 480), described in more detail below with reference to Figs. 11A and 11B; for an orthogonal linear 3-D extrusion of a convex planar curve (step 490), described in more detail below with reference to Fig. 12; for a cylinder (step 500), described in more detail below with reference to Fig. 13; for a sphere (step 510), described in more detail below with reference to Fig. 14; for a general convex 3-D polygonal mesh (step 520), described in more detail below with reference to Fig. 15; for a general concave 3-D polygonal mesh (step 530), described in more detail below with reference to Fig. 15; for a general 3-D polygonal mesh (step 540), described in more detail below with reference to Fig. 16.
  • Figs. 11A and 11B, taken together, comprise a simplified flowchart illustration of a preferred implementation of step 480 of Fig. 10.
  • The methods of Figs. 11A and 11B preferably comprise the following steps:
  • The reflected image P' of a point P, as seen by a viewer at V via a point Q on the reflecting surface, satisfies the following: P' lies on a straight line L defined by V and Q; Q is between V and P' on the line L; and the distance between Q and P' is equal to the distance between Q and P.
  • a point H on the reflecting surface SR is found, such that H is the point on surface SR nearest to P, the point to be reflected (step 550). Since surface SR is planar, the reflection P' lies on the straight line defined by P and H, on the opposite side of surface SR, and at the same distance from H as P is from H (step 553).
  • the computation of P' can thus be carried out using methods well known in the art, such as projection of point P onto the plane in which surface SR lies, distance computation, and point and vector addition.
  • the transformation necessary to compute the image P' of a point P is the same for all points P in the world W
  • the matrix M depends neither on the position of the viewer nor on the position of the point P to be reflected.
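For a plane written as ax + by + cz + d = 0, one such viewer-independent and point-independent matrix M, again in the row-vector (QM) convention, can be sketched as follows; normalizing the plane normal is an implementation choice made here for clarity.

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>;

// Reflection through the plane a*x + b*y + c*z + d = 0, row-vector convention:
// the mirror image of a homogeneous point Q = (x, y, z, 1) is Q * M.
Mat4 reflectionMatrix(double a, double b, double c, double d) {
    double len = std::sqrt(a*a + b*b + c*c);
    a /= len; b /= len; c /= len; d /= len;        // unit normal, scaled offset
    double n[4] = {a, b, c, d};
    Mat4 M{};                                      // zero-initialized
    for (int i = 0; i < 4; ++i) {
        for (int j = 0; j < 3; ++j)
            M[i][j] = (i == j ? 1.0 : 0.0) - 2.0 * n[i] * n[j];
        M[i][3] = (i == 3 ? 1.0 : 0.0);            // homogeneous column unchanged
    }
    return M;                                      // P' = P - 2 (n.P + d) n
}
```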
  • a refraction of a point P, denoted as P', according to a refracting surface SR, relative to a viewer at position V, obeys Snell's law of refraction, described, for example, in
  • the refracted image for point P' lies on the line from V to H, on the opposite side of SR (relative to V), and the length of the segment from H to P' is equal to the length of the segment from H to P.
  • the computation of P' can thus be carried out using methods well known in the art, such as equation solving, distance computation, and point and vector addition.
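For reference, the standard Snell's-law computation of a refracted direction, which underlies the refraction steps referred to above and with reference to Fig. 11B, can be sketched as follows; eta is the ratio of refraction indices n1/n2 supplied by the caller, and total internal reflection is reported by returning false. This is the textbook formula, not a construction specific to this specification.

```cpp
#include <cmath>

struct V3 { double x, y, z; };
static double dotp(V3 a, V3 b)      { return a.x*b.x + a.y*b.y + a.z*b.z; }
static V3     scale(V3 a, double s) { return {a.x*s, a.y*s, a.z*s}; }
static V3     addv(V3 a, V3 b)      { return {a.x+b.x, a.y+b.y, a.z+b.z}; }

// Snell's law: refract the unit incident direction d about the unit normal n
// (n points back toward the incoming side), with eta = n1 / n2.
// Returns false on total internal reflection.
bool refractDir(V3 d, V3 n, double eta, V3& out) {
    double cosi = -dotp(d, n);                        // cosine of incidence angle
    double k = 1.0 - eta * eta * (1.0 - cosi * cosi);
    if (k < 0.0) return false;                        // total internal reflection
    out = addv(scale(d, eta), scale(n, eta * cosi - std::sqrt(k)));
    return true;
}
```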
  • the transformation necessary to compute the image P' of a point P is the same for all points P in the world W
  • the transformation can be represented by an affine transformation as follows.
  • the position of P' is given by
  • Let N be a 4x4 rotation matrix representing a coordinate transformation to such a coordinate system, and let N^-1 be the inverse rotation.
  • the final refraction matrix is given by:
  • a special case arises for objects whose parts do not lie behind SR but are between SR and the viewer. In this case the reflected and/or refracted image of such a polygon will lie wholly or partially on the wrong side of SR, the side of SR on which there should be no reflection and/or refraction seen.
  • This case is dealt with during rendering such as, for example, in step 130 described above, by applying an appropriate clipping plane, such as the plane of SR as a front clipping plane. Surfaces lying on the other side of the surface SR, not seen by the viewer, may be eliminated or ignored (step 561).
  • Fig. 12 is a simplified flowchart illustration of a preferred implementation of step 490 of Fig. 10. It is appreciated that there is at most a single reflection P' for any point P on a convex curve, and there is at most a single refraction P'.
  • the method of Fig. 12 preferably comprises the following steps: The problem of computing reflection and/or refraction for an orthogonal linear 3-D extrusion of a convex planar curve is reduced to a 2-D problem as follows (step 565). A direction parallel to the extrusion direction of the 3-D extrusion is chosen.
  • a coordinate transformation is performed to make the chosen direction the "z" direction, and all further computations of Fig. 12 are performed in the x-y plane, until step 620, described below.
  • the curve SR must preferably be represented in a form which supports generating a point at any location on SR and computing the normal, or an approximation thereof, at any point on SR.
  • if SR is a parametric curve, point generation and normal computation are easily accomplished using the parametric equation.
  • if SR is an implicit curve, any appropriate method for point generation may be used, such as the "marching cubes" method which is well known in the art, and which is described by Foley et al, referred to above, at pages 1035 - 1036.
  • the normal to an implicit curve may be computed from the implicit equation by differentiation. If the curve SR is not already represented in a form supporting the operations described above, the curve SR is preferably represented in such a form (step 570).
  • a plurality of sample points is generated along the curve SR (step 580).
  • the number of sample points generated may depend on the desired degree of accuracy in the computation of the point P', which is an approximation to the true reflection and/or refraction of the point P. If a more precise approximation is desired, more points are computed.
  • a typical method for producing the sample points is to sample the parameter space of the curve SR using uniform intervals whose size is determined by the desired number of sample points. Alternatively, it is possible to adaptively modify the parameter value associated with a given point if the given point is too close to or too far from a previous point, in accordance with some predetermined criterion.
  • for each sample point, the geometry of the sample point, that is, its location, and the normal to the curve SR at the sample point are computed (step 590).
  • the sample points are referred to hereinbelow as T1,...,Tm, and the associated normals as N1,...,Nm.
  • the discussion below assumes that the sample points T1,...,Tm were generated in a counter clockwise order, but it is appreciated that, more generally, the sample points T1,...,Tm may be generated in any order.
  • a reflected and/or refracted ray Ri is generated for each sample point Ti and the associated normal Ni (step 600), dependent on the viewer location V.
  • for reflection, when the vector (V-Ti) points to the same side of the curve as the normal Ni, the direction Di of Ri is such that the angle between Ni and Di is equal to the angle between Ni and (V-Ti). Rays of this type are referred to herein as "front-facing rays." Otherwise, Di is identical to (Ti-V).
  • for refraction, the direction Di of Ri is such that the angle between the vector (V-Ti) and Di obeys Snell's law.
  • if sin(alpha) (N1/N2) > 1, the refraction is defined as total internal reflection; that is, the same transformation as reflection from surface SR is used.
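The construction of the per-sample ray direction Di, including the total internal reflection fallback, can be sketched as follows. This is an illustrative fragment rather than a transcription of steps 600 onward; n1 and n2 stand for the two refraction indices, Ni is assumed to be a unit normal, and all other names follow the text above.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def reflected_direction(V, Ti, Ni):
    """Direction Di making the same angle with Ni as the vector (V - Ti) does."""
    I = normalize(Ti - V)                      # incident direction from the viewer toward Ti
    return I - 2.0 * np.dot(I, Ni) * Ni

def refracted_direction(V, Ti, Ni, n1, n2):
    """Direction Di obeying Snell's law; total internal reflection falls back to reflection."""
    I = normalize(Ti - V)
    cos_i = -np.dot(I, Ni)
    eta = n1 / n2
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:                                # sin(alpha) * (n1/n2) > 1
        return reflected_direction(V, Ti, Ni)
    return eta * I + (eta * cos_i - np.sqrt(k)) * Ni
```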
  • a plurality of cells C(i,k) are determined by the rays and by line segments connecting adjacent sample points as follows (step 610). Let k be a number greater than 1.
  • some cells may be non convex, in which case such cells are preferably split into two convex subcells for easier testing, typically by extending the defining rays of such a cell into lines and intersecting the lines.
  • once the cell containing P has been located, P' is computed from the images Pi' and P(i+1)' associated with the two rays bounding that cell: P' = (u/(u+v)) Pi' + (v/(u+v)) P(i+1)'
  • the weights u and v in the above equation may be determined by any appropriate method such as, for example, one of the following:
  • FIG. 13 is a simplified flowchart illustration of a preferred implementation of step 500 of Fig. 10.
  • Fig. 17U is an example of a scene containing Ci, O, OV, OP, Q', V, B, R, P, P', dv, and dp as described hereinbelow.
  • Fig. 17U is described in more detail below.
  • the method of Fig. 13 preferably includes the following steps: The problem of computing reflection for a cylindrical surface is reduced to a 2-D problem as described above with reference to step 565 of Fig. 12. The result is a circle Ci which is the generating circle of the cylinder.
  • a first ray OV from the center O of the circle Ci to the viewer V and a second ray OP from the center O of the circle Ci to the point P are generated, and a bisector ray B is generated from the center O of the circle Ci such that the angle between OV and B is equal to the angle between OP and B (step 650).
  • the distances dv and dp of V and P respectively from the surface of the generating circle Ci are computed (step 660). Suppose that V is nearer to the generating circle Ci than P is.
  • a ray R is generated from the center O of the circle Ci between OV and B such that the ratio of angle (R,OV) to angle (R,B) equals dv/dp (step 670). It is appreciated that, if V is not nearer to the generating circle Ci than P is, then R will lie between B and OP, and all computations will be similar to those described.
  • the approximation P' is then computed from the intersection point Q' of the ray R with the circle Ci (step 690). It is appreciated that, optionally, repeated iterations of the method of Fig. 13 may lead to increasingly exact approximations P' (step 700), by repeated bisection of the angle using current and previous values of P'.
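In 2-D, the angle-interpolation of steps 650 - 670 can be sketched as follows. The completion of the method (intersecting R with the circle at Q' and mirroring P about the tangent there) is an assumption consistent with the surrounding description rather than a literal rendering of steps 680 - 690.

```python
import numpy as np

def angle_of(v):
    return np.arctan2(v[1], v[0])

def approx_circle_reflection(O, r, V, P):
    """Approximate reflection P' of P in the generating circle Ci (center O, radius r), seen from V."""
    a_v = angle_of(V - O)                     # direction of ray OV
    a_p = angle_of(P - O)                     # direction of ray OP
    dv = np.linalg.norm(V - O) - r            # distance of V from the circle (step 660)
    dp = np.linalg.norm(P - O) - r            # distance of P from the circle (step 660)
    half = (a_p - a_v) / 2.0                  # the bisector B lies at angle a_v + half (step 650)
    a_r = a_v + half * dv / (dv + dp)         # so that angle(R,OV) / angle(R,B) = dv / dp (step 670)
    Q = O + r * np.array([np.cos(a_r), np.sin(a_r)])   # assumed: ray R meets the circle at Q'
    n = (Q - O) / r                           # outward normal at Q'
    return P - 2.0 * np.dot(P - Q, n) * n     # assumed: P mirrored about the tangent at Q'
```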
  • Fig. 14 is a simplified flowchart illustration of a preferred implementation of step 510 of Fig. 10.
  • when the reflecting surface SR is a 3-D sphere, the problem is reduced to a 2-D problem (step 705) as follows.
  • the center O of the sphere, the viewer V, the point P, and the point Q, described above in reference to Figs. 11A and 11B, will all lie on the same 2-D plane because of the geometry of a sphere.
  • in the method of Fig. 15, SR is a general convex polygonal 3-D mesh or a general concave polygonal 3-D mesh.
  • the convex case for calculating reflection, which is generally the same as the concave case for calculating refraction, will be described first.
  • all polygonal faces in the mesh are assumed to be triangular.
  • any polygonal mesh may be reduced to a polygonal mesh with all faces triangular using methods well known in the art, such as that described in R. P. Ronfart and J. R. Rossignac, referred to above (step 720).
  • a reflected and/or refracted ray is computed (step 725): the reflected and/or refracted ray Ri, which has a direction vector Di, is computed for each mesh vertex Ti and associated normal Ni.
  • the three rays Ri, Rj, Rk defined by the three vertices Ti, Tj, Tk of a mesh triangle T(i,j,k) define a space cell C(i,j,k), each pair of rays defining a bi-linear surface parameterized by the length of each ray, the surface starting at the vertices of the rays (steps 730 and 740).
  • the bi-linear surface defined by a pair of rays R1 and R2 is constructed as follows.
  • ray Ri is defined by a point Ti and a vector Di, so the parametric equations for the rays are R1(s) = T1 + s D1 and R2(t) = T2 + t D2.
  • the surface B is called bilinear because it is linear in both parameters s and t.
  • the surface has an implicit equation whose coefficients are formed from the components of the points Ti and the direction vectors Di, for example A2 = (T2_z - T1_z)(D2_x - D1_x) - (T2_x - T1_x)(D2_z - D1_z) and B2 = (D2_z - D1_z)(x - T1_x) + (D2_x - D1_x)(z - T1_z) + ...
  • the coordinates of P are used as the x, y, and z variables in the implicit equation of the surface, giving the result n.
  • the result n is equal to 0 if P is on the bi-linear surface, is positive if P lies on a first side of the bi-linear surface, and is negative if P lies on a second side of the bi-linear surface.
  • each mesh triangle may be oriented such that the result n is positive for all three bi-linear surfaces of a triangle if P lies in the cell defined by that triangle.
  • One method of orientation of the triangles is to orient all triangles in a counter clockwise order as seen by the viewer. By performing the indicated test on each cell, the cell C in which the point P lies can be determined. Each of the vertices of C is used to reflect and/or refract P, yielding images Pi', and the final image P' is computed as a weighted average of these images.
  • the P' which is thus computed is an approximation of the "true" P'.
  • the accuracy of the approximation depends, in general, on the density of the mesh vertices and on the method used to compute the weighted average.
  • for example, intersection points of a plane with the rays Ri are computed. The weight of each point Pi' is then taken as the distance of the point P from the corresponding intersection point, divided by the sum of the three distances; a sketch of this weighting is given below.
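The distance-based weighting just described can be sketched as follows. The particular plane used here (a plane through P, normal to the mean ray direction) is an illustrative assumption, since the text above does not fix it; the weights themselves follow the description.

```python
import numpy as np

def blend_virtual_images(P, ray_origins, ray_dirs, images):
    """Blend the three images Pi' of P obtained from the vertices of the cell containing P."""
    n = np.mean(ray_dirs, axis=0)
    n = n / np.linalg.norm(n)                     # assumed normal of a plane passing through P
    dists = []
    for T, D in zip(ray_origins, ray_dirs):
        s = np.dot(P - T, n) / np.dot(D, n)       # parameter of the ray/plane intersection
        dists.append(np.linalg.norm(P - (T + s * D)))
    weights = np.array(dists) / sum(dists)        # weight = distance / sum of the three distances
    return sum(w * Pi for w, Pi in zip(weights, images))
```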
  • when SR is a general concave 3-D polygonal mesh reflecting surface or a general convex 3-D polygonal mesh refracting surface, a single point P can generate several reflected and/or refracted images P'. Therefore, it is possible that multiple images P' need to be computed in steps 760 and 770.
  • Fig. 16 is a simplified flowchart illustration of a preferred implementation of step 540 of Fig. 10.
  • the mesh is preferably partitioned into concave and convex pieces (step 780), and then the method of Fig. 15 is performed separately for each piece (step 790).
  • when treating objects lying partially in front of and partially in back of SR, when SR is not a planar surface, one of two methods is preferably used:
  • when an object of the world W is being rendered, in addition to comparing the object's depth against the ordinary z-buffer, it is also compared against the depth of that pixel in the depth map of SR. If the new pixel is closer to the viewer than the corresponding pixel in the depth map, the new pixel is preferably not rendered, nor is it used to update the z-buffer.
  • This method is effectively a generalization of the concept of a front clipping plane; the part of the front clipping plane is played, in the case of this method, by a front clipping buffer.
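Expressed as per-pixel pseudocode, the front clipping buffer behaves as in the following sketch (a software illustration only; the names z_buffer, depth_map_SR and image are placeholders, and depth is assumed to grow away from the viewer).

```python
def write_reflected_pixel(x, y, depth, color, z_buffer, depth_map_SR, image):
    """Render one pixel of a reflected/refracted object, honoring the front clipping buffer."""
    if depth < depth_map_SR[y][x]:
        return                        # closer to the viewer than SR itself: clipped away
    if depth >= z_buffer[y][x]:
        return                        # ordinary hidden-surface removal against the z-buffer
    z_buffer[y][x] = depth
    image[y][x] = color
```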
  • it is appreciated that it is possible to use any available knowledge about the original scene in order to accelerate the computations of the various methods of the present invention. If, for example, the only moving objects in the scene are known to always lie in front of every reflecting and/or refracting surface SR, it is appreciated that it is possible to pre-process the scene by clipping, once and for all, the objects lying partially behind SR, as in method 1) above.
  • bounding box or sphere hierarchies around objects can be used, as is standard in many graphics methods.
  • a preferable method of treating point light sources in the scene when generating W is as follows. Use the point light sources to compute diffuse color values at the vertices of all scene polygonal meshes at the beginning of the method of Fig. 2. These color values are then used when performing Gouraud or Phong shading during rendering in step 130 of Fig. 2.
  • This method may be preferable to reflecting and/or refracting of each point light source in the same manner as any other point P, since in this case the normal at the point P, if there is one, should also be reflected and/or refracted.
  • the normal may be reflected by generating any two points on the normal, reflecting and/or refracting them as points, subtracting the reflected and/or refracted points to obtain a vector, and then normalizing the vector.
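That remark translates directly into the following sketch; reflect_point stands for whichever point reflection and/or refraction routine is in use (for example, the planar one sketched earlier).

```python
import numpy as np

def reflect_normal(point, normal, reflect_point):
    """Reflect a normal by reflecting two points on it, subtracting, and normalizing."""
    a = reflect_point(point)                  # image of a first point on the normal line
    b = reflect_point(point + normal)         # image of a second point on the normal line
    v = b - a
    return v / np.linalg.norm(v)
```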
  • FIG. 17A is a simplified bird's-eye view illustration of a logical representation of an example of a scene after performing steps 100 through 140. Illustrated are a reflecting surface 800, rectangular objects 820 and 830, a reflective surface 810 of rectangular object 830, a light source 840, and a viewer's position 850.
  • FIG. 17B is a simplified pictorial illustration of a representation of an example of the scene referred to in Fig. 17A after performing steps 100 through 140, and as contained in image buffer 40. Illustrated are a reflecting surface 800, rectangular objects 820 and 830, and the representation 860.
  • FIG. 17C is a simplified illustration of a representation of an example of stencil S', designated by reference number 870, after performing step 360 of Fig. 6.
  • FIG. 17D is a simplified bird's-eye view illustration of a logical representation of an example of a scene after recursively performing steps 120 through 140. Illustrated are a reflecting surface 800, rectangular objects 820 and 830, a reflective surface 810 of rectangular object 830, reflected rectangular objects 890 and 900, a reflective surface 910 of reflected rectangular object 900, a light source 840, a reflected light source 880, and a viewer's position 850.
  • FIG. 17E is a simplified pictorial illustration of a representation of an example of the scene referred to in Fig. 17D after recursively performing steps 120 through 140, and as contained in image buffer 40. Illustrated are a reflecting surface 800, reflected rectangular objects 890 and 900, a reflected reflective surface 910 of reflected rectangular object 900, and the representation 860.
  • Fig. 17F is a simplified illustration of a representation of an example of stencil S", designated by reference number 870, after performing step 360 of Fig. 6.
  • Fig. 17G is a simplified bird's-eye view illustration of a logical representation of an example of a scene after recursively performing steps 120 through 140. Illustrated are a reflecting surface 800, rectangular objects 820 and 830, a reflective surface 910 of reflected rectangular object 900, a light source 840, a reflected light source 880, a reflected reflecting surface 940, a reflected light source 950 of reflected light source 880, and a viewer's position 850.
  • Fig. 17H is a simplified pictorial illustration of a representation of an example of the scene referred to in Fig. 17G after recursively performing steps 120 through 140, and as contained in image buffer 40. Illustrated are a reflected reflective surface 910, a reflection 891, and the representation 860.
  • Fig. 17I is a simplified pictorial representation of a representation of an example of a scene after performing all recursions of steps 120 through 240, and as contained in accumulation buffer 45. Illustrated are reflecting surface 800, rectangular objects 820 and 830, reflected rectangular objects 890 and 900, a reflected reflective surface 910 of reflected rectangular object 900, a reflection 891 of reflected rectangular object 890, and the representation 861.
  • Fig. 17J is a simplified bird's-eye view illustration of an example of steps 565 through 630 for calculating refraction and/or reflection of a point P on an orthogonal linear 3-D extrusion of a convex planar curve, consisting of a viewer's position 850, a curved surface SR, designated by reference number 960, point P to be reflected and/or refracted, designated by reference number 970, sample points 975 and 976, normals 977 and 978 to sample points 975 and 976 respectively, reflecting and/or refracting rays 980, reference rays 985, refracted points 990, and calculated refraction point 991.
  • Fig. 17K is a simplified bird's-eye illustration of an example of a scene prior to performing step 415 of Fig. 8. Illustrated are rectangular objects 1000 and 1010, and light direction vector 1020. Reference is now made to Fig. 17L, which is a simplified pictorial illustration of an example of a scene prior to performing step 415 of Fig. 8. Illustrated are rectangular objects 1000 and 1010, surface 1015 of rectangular object 1010 that receives shadows, and light direction vector 1020.
  • Fig. 17M is a simplified illustration of an example of the projection of objects onto a planar polygon as described in step 415 in order to create shadows of the projected objects onto the planar polygon. Illustrated in a two-dimensional representation are rectangular objects 1030, planar polygonal surface 1040, projected shadows 1050, projection vectors 1060, and light source 1070.
  • Fig. 17N is a simplified pictorial illustration of an example of a scene after performing step 415 of Fig. 8. Illustrated are rectangular objects 1000 and 1010, surface 1015 of rectangular object 1010 that receives shadows, light direction vector 1020, and shadow polygon 1030.
  • FIG. 17O is a simplified illustration of an example of a stencil 1040 created after performing step 420 of Fig. 8.
  • Fig. 17P is a simplified illustration of an example of surface 1015 of rectangular object 1010 with projected shadow 1016 after performing step 425 of Fig. 8.
  • Fig. 17Q is a simplified pictorial illustration of an example of a scene after performing step 435 of Fig. 8. Illustrated are rectangular objects
  • FIG. 17R is a simplified pictorial illustration of an example of a scene after performing step 435 of Fig. 8 for all polygons in the scene on which a shadow or shadows are projected. Illustrated are rectangular objects 1000 and 1010, surface 1015 of rectangular object 1010 having received shadows 1016 and 1017, and light direction vector 1020.
  • Fig. 17S is a simplified illustration of an example of a scene as described with reference to Fig.
  • FIG. 17T is a simplified illustration of an example of a scene as described with reference to Fig. 17B, containing a reflecting and/or refracting surface SR, a viewer's position V, a normal vector N, a point H on reflecting and/or refracting surface SR, a point to be refracted P, and a refracted image P'.
  • FIG. 17U is a simplified illustration of an example of a scene as described with reference to Fig. 13, containing a generating circle Ci of a cylinder, a center O of the circle Ci, a viewer's position V, rays OV, OP, B, and R, an intersection point Q', a point to be reflected and/or refracted P, and distances dv and dp.
  • Fig. 17V is a simplified bird's-eye view illustration of an example of steps 565 through 630 for calculating refraction and/or reflection of a point P on an orthogonal linear 3-D extrusion of a convex planar curve, consisting of a viewer's position V, a curved surface SR, point P to be reflected and/or refracted, sample points Ti and Ti+1, normals Ni and Ni+1 to sample points Ti and Ti+1 respectively, reflecting and/or refracting rays Ri and Ri+1, reference rays V-Ti and V-Ti+1, refracted points P'i and P'i+1, cell C(i,1), and calculated refraction point P'.
  • Fig. 18 is a pictorial illustration of a scene including four spheres 1510, 1520, 1530 and 1540 each having a shadow 1550, 1560, 1570 and 1580 respectively. The shadows are rendered on the floor plane 1590 of the scene.
  • Fig. 19 is a pictorial illustration of sphere 1510 of Fig. 18, a center point 1610 of the sphere 1510, a point of view 1620 from which the sphere 1510 is being observed, and a four-faced pyramid 1630 surrounding the sphere 1510 and centered at the center point 1610.
  • the pyramid 1630 is defined by four vertices 1632, 1634, 1636 and 1638.
  • the pyramid 1630 includes a front face 1640 which is orthogonal to the line segment 1650 which connects the point of view 1620 and the center point 1610.
  • Figs. 20A - 20D are pictorial illustrations of the scene of Fig. 18 rendered onto each of the four faces, respectively, of the pyramid 1630 of Fig. 19.
  • the notation employed in Figs. 20A - 20D is that a non-primed reference numeral denotes the scene element which is rendered onto the relevant face whereas a primed reference numeral denotes a reflection of the scene element identified by that reference numeral, which reflection is rendered onto the relevant face.
  • Fig. 21 is a pictorial illustration of the pyramid 1630 of Fig. 19 unfolded onto a planar surface, where the scene of Fig. 18 is rendered onto each of the faces of the pyramid.
  • Fig. 22 is a pictorial illustration of a tessellation of the front face 1640 of the pyramid 1630.
  • Fig. 23 is a pictorial illustration of the tessellation of Fig. 22 mapped into a 2D circle 1660.
  • Fig. 24 is a circular environment map 1670 including a mapping, as illustrated in Figs. 22 - 23, of each of the faces of the pyramid 1630 onto the 2D circle 1660 of Fig. 23.
  • Fig. 25 is a rendering of the entire scene of Fig. 18 including a reflection of the entire scene in sphere 1510. This reflection is generated by texture-mapping the environment map 1670 of Fig. 24 onto the sphere 1510.
  • Fig. 26 is a simplified block diagram of a system for generating a 2D image, including reflections, of a 3D scene from a point of view, using 3D information regarding the 3D scene, which is constructed and operative in accordance with a preferred embodiment of the present invention.
  • the apparatus of Fig. 26 is generally similar to the apparatus of Fig. 1A.
  • 3D information regarding an artificial scene may be generated by a conventional workstation 1770 such as an IBM PC.
  • 3D information regarding a natural scene may be generated using a state of the art measuring device such as a CyberWare measuring device, commercially available from Cyber-Ware Inc., 2110 De Monte Ave, Monterey, CA, 93940, USA, preferably in conjunction with a suitable camera 1790.
  • Fig. 27 is a simplified flowchart illustration of a preferred method of operation for reflections, refractions and shadows unit 1720 of Fig. 26 in conjunction with units 1715 and
  • the method of Fig. 27 preferably includes the following steps for each object in the scene from which a 2D image, seen from a selected viewpoint V, is generated. Preferably, the steps of Fig. 27 are performed only for those objects in the scene which are visible from the selected viewpoint V and on which it is desired to see reflections.
  • Step 1800 - A four-faced pyramid whose center is located at a selected (typically user-selected and/or central) location C within the current object is used to compute an environment map for that object.
  • the selected location C of a point-symmetric object is preferably the point about which the object is symmetric.
  • the selected location C for an object having an axis of symmetry is preferably the center of the axis of symmetry.
  • the pyramid has four faces including a front face which is orthogonal to the line connecting the viewpoint and the selected object point C.
  • Fig. 19 illustrates a four-faced pyramid 1630 whose center 1610 is located at a selected location C within the current object, and which may be used to compute an environment map for object 1600 in Fig. 19.
  • the pyramid 1630 has four faces defined by vertices 1632, 1634, 1636, 1638 including a front face 1640 which is orthogonal to the line 1650 connecting the viewpoint 1620 and the selected object point 1610 (also termed point C).
  • the environment map is computed as follows: a. Each of the triangular faces of the pyramid is circumscribed within a bounding rectangle. An example of this is illustrated in Figs. 20A - 20D. b. The scene is rendered onto each of the bounding rectangles. c. Each of the triangular faces, with the scene rendered thereupon, is mapped onto a single circle which forms the environment map. This step may be performed using tessellation, as described herein, or may be performed as follows:
  • a vector R(x,y,z) is defined which is the difference between that point and the selected object point C.
  • s and t are the coordinates on the circle corresponding to the point P on the pyramid.
  • Figs. 20A - 20D are pictorial illustrations of four triangular faces with a single scene rendered thereupon.
  • a reflection vector R(x,y,z) is computed for each vertex Q of the reflecting object, using Equation 3 where V is the viewpoint from which the object is viewed and N is a "normal" to the object at Q. "Normal” is a known term in the art of computer graphics which is not identical to the term "perpendicular”. Texture coordinates s,t for the vertex Q are computed by plugging the coordinates of R(x,y,z) into Equations 1 and 2.
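Since Equations 1 - 3 themselves are not reproduced in this excerpt, the sketch below substitutes the standard mirror-reflection formula for Equation 3 and a conventional sphere-map style circle mapping for Equations 1 and 2; it should be read as an illustration of the texture-coordinate computation, not as the patent's exact equations.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def reflection_vector(V, Q, N):
    """Stand-in for Equation 3: mirror the direction from Q toward the viewer V about the normal N."""
    E = normalize(V - Q)
    return 2.0 * np.dot(N, E) * N - E

def circle_coords(R):
    """Stand-in for Equations 1 and 2: map direction R(x,y,z) to coordinates (s,t) inside a circle."""
    x, y, z = normalize(R)
    m = 2.0 * np.sqrt(x * x + y * y + (z + 1.0) ** 2)
    return x / m + 0.5, y / m + 0.5

# Texture coordinates for one vertex Q of the reflecting object, with normal N, seen from V:
V = np.array([0.0, 0.0, 5.0])
Q = np.array([0.0, 1.0, 0.0])
N = normalize(np.array([0.0, 1.0, 0.2]))
s, t = circle_coords(reflection_vector(V, Q, N))
```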
  • a conventional rendering process 1820 is performed in which the environment map is mapped as a texture map onto the object.
  • Conventional rendering is described in Foley et al, Computer Graphics: Principles and Practice, published by Addison-Wesley, 1990.
  • Fig. 28 is a simplified flowchart illustration of a preferred method for generating a 2D image of a 3D scene from a point of view including a location and a direction, using 3D information regarding the 3D scene, the 3D scene including at least one objects and at least one surfaces wherein at least one of the surfaces is reflected in each of the objects, wherein the 3D information regarding the 3D scene includes, for each of the at least one objects, a first plurality of polygons which approximate the surface of the object and which define a second plurality of vertices.
  • the method comprises the following steps for each of the at least one objects, a.
  • the step (b), in which an ordered pair of texture coordinates is computed for each object vertex Q, typically comprises the following steps:
  • (i) Computing a reflection ray R based on the point of view, the vertex and a normal to the object at the vertex (step 1830 of Fig. 28; Equation 3); (ii) Computing an intersection point U of the ray R with a user-selected sphere centered at the selected point, using Equations 4 and 5, in which C is the selected point, r is the radius of the sphere and t is the positive solution of the quadratic equation termed herein Equation 5.
  • Step 1850 Computing a pair of texture coordinates as a function of a vector from the selected point C to the intersection point U, as described above with reference to step 1810 of Fig. 27 except that the role of R in step 1810 is replaced, in step 1850, by the final reflection vector F.
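Equations 4 and 5 are not quoted in this excerpt; a standard ray-sphere intersection, taking the positive root of the quadratic as the text prescribes, serves as a stand-in in the sketch below.

```python
import numpy as np

def intersect_ray_sphere(origin, direction, C, r):
    """Intersection point U of the ray origin + t*direction with the sphere (center C, radius r)."""
    d = direction / np.linalg.norm(direction)
    oc = origin - C
    b = 2.0 * np.dot(d, oc)
    c = np.dot(oc, oc) - r * r
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                           # the ray misses the sphere
    t = (-b + np.sqrt(disc)) / 2.0            # positive solution of the quadratic in t
    return origin + t * d
```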
  • Fig. 29 is a simplified flowchart illustration of another preferred method for generating texture coordinates for each vertex Q of each reflecting object in a 3D scene, in order to generate a 2D image of that 3D scene, either according to the method of Fig. 27 or according to conventional methods.
  • Steps 1860 and 1900 of Fig. 29 may be similar to steps 1830 and 1850 respectively.
  • a preliminary step 1852 is added in which a depth map is computed, and step 1840 of Fig. 29 is replaced by the following steps:
  • a. Step 1870 - The intersection U of the reflected ray R computed in step 1860 with a user-selected sphere centered at the selected point C is computed, using Equations 4 and 5 in which C is the selected point, r is the radius of the sphere and t is the positive solution of the quadratic equation termed herein Equation 5.
  • b. Step 1880 - Compute an intermediate ray I(x,y,z) by subtracting U - C.
  • c. Step 1884 - Compute intermediate texture coordinates (s', t') for the intermediate ray by applying Equations 1 and 2 to I(x,y,z).
  • d. Step 1885 - Access the depth map and retrieve the value w stored at the coordinates (s', t').
  • e. Step 1890 - Compute a final ray L using Equation 6 which combines the intermediate ray I and the reflected ray R.
  • w' is as defined in Equation 7.
  • d_max in Equation 7 is an upper bound on the depth of the scene, e.g. the depth of a box which bounds the scene.
  • Preliminary depth map computing step 1852 is performed as follows:
  • a four-faced pyramid whose center is located at a selected (typically user-selected and/or central) location C within the current object is used to compute a depth map for that object.
  • the depth map is computed as follows: a. Each of the triangular faces of the pyramid is circumscribed within a bounding rectangle. b. The scene is rendered, using conventional methods, onto each of the bounding rectangles and a z-buffer generated in the course of the rendering process is stored on each of the triangular faces. c. Each of the triangular faces, with the z-buffer stored thereupon, is mapped onto a single circle which forms the depth map. This step may be performed using tessellation, as described herein, or may be performed as follows:
  • a vector R(x,y,z) is defined which is the difference between that point and the selected object point C.
  • s and t are the coordinates on the circle corresponding to the point P on the pyramid.
  • The 3-D Reflection Subdivision: The reflector is approximated using a triangular mesh. Each vertex of the mesh possesses a normal. This step is redundant when the reflector is already given as a triangular mesh.
  • Mesh vertices can be classified into hidden and (potentially) visible, the visible ones being strictly front-facing, profile or back-facing. We force visible back-facing vertices to be profile vertices, profile and strictly front-facing vertices are called front-facing, and contour vertices that are strictly front-facing are called cut vertices.
  • the explosion map method uses two maps: an explosion map and a hidden map.
  • the reflection triangles do not cover the whole circle, because the concealed region was not mapped onto the circle yet (in addition, the line segment connecting two adjacent profile vertices leaves out a small portion of the circle).
  • Each adjacent pair of contour vertices Vi, Vj defines an extension polygon whose vertices are Vi, Vj, Ei, Ej, where Ei, Ej are points on the rays Ri, Rj outside the circle.
  • the distance between a point Ei and the circle center could be approximately 1.3 times the radius of the circle.
  • the last step in building the explosion map is filling its polygons with a constant value denoting the index of the corresponding mesh triangles. Extension polygons are filled with the index of their adjacent mesh triangle, which, by construction, is unique. Polygon fill can be done by graphics hardware.
  • the explosion map represents a discrete projection of the reflection subdivision on a two-dimensional sheet around the object. The index of a reflection cell is written on all discrete map pixels covered by it. In a separate location, we store a list of the exact intersection points Ii, indexed by cell index.
  • we also use a hidden map, which is simply an item buffer of the visible mesh triangles. That is, it is a discrete map (of a resolution much smaller than the frame buffer) in which a visible mesh triangle is mapped to a 2-D triangle filled with the index of the mesh triangle.
  • Using the explosion map to compute virtual vertices:
  • the explosion map circle represents a mapping of all possible reflection directions from the reflector's boundary. We use it to directly generate the final virtual vertex Q'.
  • For each reflector we define two explosion maps (near and far), a hidden map, and an auxiliary z-buffer of itself.
  • the near map is computed using a sphere that tightly bounds the object, and the far map is computed using a sphere that bounds all the scene. It is important to understand that although the topologies of the two maps are quite similar (because cells do not intersect each other), their geometries are different; reflection rays, which determine the geometry of map vertices, evolve non-linearly.
  • in the first case, when Q is hidden, barycentric coordinates relative to the mesh triangle V found in the hidden map are now computed, and used as weights in a weighted average of the vertices and normals of V that produces a plane of reflection (H, NH) through which Q is mirrored to produce Q'.
  • the second case is when Q is not hidden; we utilize the explosion maps.
  • the intersection points In, If of a line from Q to the center of the spheres with the near and far bounding spheres are located in the respective explosion maps; the pixels in which they fall contain indices of two (generally, different) mesh triangles Tn, Tf. In general, neither of these triangles corresponds to the correct reflection cell in which Q is located, because In, If were computed using a line to the center of the sphere, which is rather arbitrary.
  • the intersection point In is mapped to the correct cell when Q is very near (or on) the near bounding sphere (and similarly for If).
  • Each of the triangles Tn, Tf defines an auxiliary virtual vertex Qn', Qf'.
  • Those auxiliary virtual vertices are computed by (1) computing the barycentric coordinates of In, If relative to Tn, Tf, (2) using these coordinates as weights in a weighted average of the vertices and normals of Tn, Tf to produce planes of reflection, and (3) mirroring Q across the two planes of reflection.
  • the final virtual vertex Q' is a weighted average of Qn', Qf', i.e., it is on the line segment connecting Qn', Qf'.
  • the weights are determined, say, by the relative distances dn, df of Q from the near and far spheres:
  • Q' = (Qn'/dn + Qf'/df) / (1/dn + 1/df).
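Putting the last few steps together, the final virtual vertex can be sketched as follows; the barycentric weights, the mirroring across the reflection plane, and the inverse-distance blend all follow the description above, with every name illustrative.

```python
import numpy as np

def mirror_about_plane(Q, H, NH):
    """Mirror Q across the plane of reflection defined by point H and normal NH."""
    n = NH / np.linalg.norm(NH)
    return Q - 2.0 * np.dot(Q - H, n) * n

def auxiliary_virtual_vertex(Q, bary, tri_vertices, tri_normals):
    """Weighted-average the triangle's vertices and normals, then mirror Q across that plane."""
    H = sum(w * v for w, v in zip(bary, tri_vertices))
    NH = sum(w * n for w, n in zip(bary, tri_normals))
    return mirror_about_plane(Q, H, NH)

def final_virtual_vertex(Qn_prime, Qf_prime, dn, df):
    """Q' = (Qn'/dn + Qf'/df) / (1/dn + 1/df), a point on the segment joining Qn' and Qf'."""
    return (Qn_prime / dn + Qf_prime / df) / (1.0 / dn + 1.0 / df)
```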
  • When rendering these polygons, the stencil automatically masks the parts that do not belong to our finite cone.
  • the hidden region can be represented as the union of the inside of the cone and a truncated pyramid defined by the intersection between (1) the two planes passing through the viewer and tangent to the cone, and (2) the plane passing through the two edges on the geometric contour.
  • the hidden region of a finite cone is more complex, but we don't need it. Thus, a few simple 'plane side' tests and a cone containment test suffice in order to implement the hidden region test. If Q lies in the hidden region and is a vertex of a mixed polygon, we intersect the ray from the viewer to it with the front part of the cone, and create Q' by mirroring Q across the tangent plane at the intersection point.
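The 'plane side' and cone containment tests mentioned here are elementary; a minimal sketch, with apex, axis and half_angle describing the infinite cone and all names assumed, is given below.

```python
import numpy as np

def plane_side(Q, plane_point, plane_normal):
    """Positive, zero or negative according to which side of the plane Q lies on."""
    return np.dot(Q - plane_point, plane_normal)

def inside_infinite_cone(Q, apex, axis, half_angle):
    """True if Q lies inside the infinite cone with the given apex, unit axis and half-angle."""
    v = Q - apex
    d = np.linalg.norm(v)
    if d == 0.0:
        return True
    return np.dot(v, axis) / d >= np.cos(half_angle)
```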
  • Spheres and cylinders allow a direct computation of the point of reflection H, by reducing the computation to a 2-D one.
  • the method described below does not utilize the reflection subdivision at all.
  • Beta = Alpha · dE / (dE + dQ). Assume without loss of generality that dE ≤ dQ.
  • Define H' to be a point such that the angle between H'-C and E-C is Beta.
  • H' is a first approximation to H.
  • a second approximation is to take a point H" as a weighted average of H' and H0 (the point at which Q-C intersects the circle). The weights are determined by the relative squared distances of Q to the reflection rays from H', H0.
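A 2-D sketch of the first approximation H' follows, assuming (consistently with the surrounding text, though not stated explicitly in this excerpt) that C is the circle center, E the eye, Q the reflected point, dE and dQ their distances from the circle, and Alpha the angle at C between E-C and Q-C.

```python
import numpy as np

def first_approximation_H(C, r, E, Q):
    """First approximation H' to the point of reflection on a circle of center C and radius r."""
    e = E - C
    q = Q - C
    dE = np.linalg.norm(e) - r                # distance of the eye from the circle
    dQ = np.linalg.norm(q) - r                # distance of the reflected point from the circle
    alpha = np.arctan2(q[1], q[0]) - np.arctan2(e[1], e[0])   # assumed meaning of Alpha
    beta = alpha * dE / (dE + dQ)             # Beta = Alpha * dE / (dE + dQ)
    a_h = np.arctan2(e[1], e[0]) + beta       # H' makes angle Beta with E - C
    return C + r * np.array([np.cos(a_h), np.sin(a_h)])
```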
  • Concave Reflectors: The behavior of the reflection of a reflected object depends on which of three regions it is located in.
  • the reflection of an object located in region A looks like an enlarged, deformed version of the object, which is otherwise similar to the object.
  • the reflection of an object located in region B looks like an enlarged, deformed, upside-down version of the object.
  • the reflection of objects located in region C is utterly chaotic. Reflections of objects lying in regions A and B can be computed exactly as for convex reflectors, because in these regions the reflection subdivision is well-defined (since the reflection cells are disjoint). Virtual vertices and polygons can be computed and rendered.
  • Reflections of objects lying in region C or intersecting that region are unpredictable and chaotic anyway, so almost any policy for computing virtual vertices will be satisfactory. For example, we can simply select the first cell in which a vertex lies and create a virtual vertex according to that cell.
  • the software components of the present invention may, if desired, be implemented in ROM (read-only memory) form.
  • the software components may, generally, be implemented in hardware, if desired, using conventional techniques.

Abstract

A method for generating a 2D image of a 3D scene (565) from a point of view, the 3D scene including a first surface (830) reflected in a second surface (800). The method generates an auxiliary 2D image of the reflection of the first surface by reflecting the surface through a plane (fig. 10). A clipping plane is used to remove a portion of the reflected image that obstructs the view of the second surface (870). The method composites the reflected image with an image of the second surface which does not include the reflection by mapping the reflected image onto it (861).

Description

COMPUTERIZED GRAPHICS SYSTEMS FIELD OF THE INVENTION The present invention relates to computerized graphics systems in general, and in particular to computerized graphics systems for producing images of three-dimensional scenes.
BACKGROUND OF THE INVENTION Producing a representation of a three-dimensional scene using a computerized graphics system is well known in the art. Typically, a three-dimensional scene is stored in the computerized graphics system. Conditions of lighting and a point of view of an observer of the scene are specified. Some or all of reflections, diffuse reflections, specular reflections, shadows, and refractions due to objects in the scene are computed. Based on the computed results, an image of the three-dimensional scene is generated.
Lighting effects are generally implemented by affecting the color of surfaces in the scene by employing what is referred to in the art as local and global shading models. With local shading models, the color of each object is determined by analyzing the object itself and the light sources alone. With global shading models, the color of an object is affected by that of other objects. The major local shading models determine the color of an object as being either constant, constant plus correction for ambient light (which practically produces constant color), or as a function of the angle between the normal to the surface and the light source direction. In Gouraud shading, such a color is computed for each vertex of the object, and then the color is interpolated to yield the color inside the object. In Phong shading, the normal to the surface is interpolated and the model is invoked for each pixel.
The preferred major global shading model known in the prior art is known as "ray tracing." Briefly, ray tracing typically begins with a single ray of light through every image element or pixel, from the eye of the observer to an object in the scene, and then recursively traces other rays of light beginning at the intersection between the first ray of light and the object in the scene, spawning reflection, refraction and shadow rays.
Ray tracing has been well known for a number of years. The primary disadvantage of ray tracing is that its implementation is quite slow, taking minutes or hours to produce an image of a single scene, even using state of the art hardware and software. Another global shading model known in the prior art is known as "radiosity." In the radiosity method, correct simulation of energy transfer in the scene is attempted.
A method to accelerate the computation of ray tracing known in the prior art is known as "beam tracing." In beam tracing beams of rays are traced instead of individual light rays. These beams are defined by reflecting the view point along the plane of a polygon in the scene, generating bounding planes passing through the view point and the edges of the polygon, and by computing, in object space, the polygons visible inside a beam.
Beam tracing has two primary disadvantages: 1) it is limited to reflections on planar polygons, with refractions being computed by a crude approximation, and curved objects not handled at all; and 2) all computations are done in object space, thus making the method highly inefficient and very difficult to implement. In addition, beam tracing cannot be implemented using standard graphics APIs such as OpenGL, the standard API for 3-D graphics suggested by Silicon Graphics and adopted by most computer manufacturers.
Relevant methods useful in computer graphics are discussed in Foley et al, Computer Graphics: Principles and Practice, published by Addison-Wesley, 1990, particularly as follows: beam tracing, p. 787; z-buffer, pp. 668 - 672; transformation to a view coordinate system, chapter 6; scan conversion, chapter 3; ray tracing, section 16.2; alpha blending,
Gouraud shading, and Phong shading, chapter 16; marching cubes method, pages 1035 - 1036; and chapter 13. Methods of shadow computation are described in Reeves, Salesin, and Cook,
Rendering Antialiased Shadows with Depth Maps, Computer Graphics fSIGGRAPH '87
Proceedings^ July 1987, pp. 283-291.
Additional relevant methods useful in computer graphics are discussed in the following Blinn, J. F., Me and My (Fake) Shadow, IEEE Computer Graphics and
Applications. January, 1988, pp. 82 - 86;
R. P. Ronfart and J. R. Rossignac, "Triangulating Multiply-connected
Polygons," Computer Graphics Forum. Special Issue, Proceedings of Eurographics '94, pp.
281 - 292; and Heckbert, P. S. and Hanrahan, P., Beam Tracing and Polygonal Objects,
Computer Graphics, published by the Association for Computing Machinery. July, 1984, pp.
119-127. 3 The disclosures of the above publications and of the publications cited therein are hereby incorporated by reference. The disclosures of all publications mentioned in this specification and of the publications cited therein are hereby incorporated by reference.
SUMMARY OF THE INVENTION The present invention seeks to provide an improved computerized graphics system for producing representations, typically pictures, of three-dimensional scenes. The present invention provides an improved system which overcomes the known disadvantages of the prior art as discussed above. In contrast to beam tracing the present invention is not limited to computing reflections on planar polygons and allows for computing lighting effects for curved objects. In addition, the present invention is an image-space method, utilizing the z- buffer method and stencil bit planes, and can be implemented using standard graphics APIs.
The present invention also seeks to provide improved methods for receiving 3D information regarding a scene and generating therefrom a 2D view of the scene including at least one of shadow, refractions and reflections. There is thus provided in accordance with a preferred embodiment of the present invention, a method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first planar surface and a second surface reflected in the first planar surface, from 3D information regarding the 3D scene, the method including generating an auxiliary 2D image of the reflection of the second surface in the first planar surface, wherein the step of generating includes reflecting the second surface through the plane, thereby to define a virtual second surface, and using a clipping plane to remove at least a portion of the virtual second surface which obstructs the view from the point of view of the first planar surface, generating a main 2D image of the scene including a 2D image of the first surface which does not include the reflection of the second surface in the first surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
There is also provided in accordance with a preferred embodiment of the present invention, a method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first non-planar surface and a second surface reflected in the first non-planar surface, from 3D information regarding the 3D scene, the method including generating an auxiliary 2D image of the reflection of the second surface in the first non-planar surface, generating a main 2D image of the scene including an image of the first non-planar surface which does not include the reflection of the second surface in the first non-planar surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary
2D image onto the image of the first surface in the main 2D image.
In accordance with a preferred embodiment of the present invention, the compositing step includes employing a stencil bit plane to differentiate pixels covered by the first surface from pixels not covered by the first surface.
There is also provided in accordance with a preferred embodiment of the present invention, a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first non-planar surface and at least one second surface wherein at least one of the second surfaces is reflected in each of the first non-planar surfaces, the method including generating a main 2D image of the scene including an image of the first non-planar surface which does not include any reflections of any one of the second surfaces in the first non-planar surface, and for each pair of first and second surfaces characterized in that the second surface is reflected in the first surface generating an auxiliary 2D image of the reflection of the second surface in the first non-planar surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
In accordance with a preferred embodiment of the present invention, the step of generating an auxiliary 2D image of the reflection includes computing a virtual surface including a 3D approximation of the reflection of the second surface in the first surface, and rendering the virtual surface, thereby to generate the auxiliary image.
Additionally in accordance with a preferred embodiment of the present invention, the step of computing a virtual second surface includes selecting a set of second surface locations within the second surface to be reflected in the first surface, for each second surface location in the set determining a polygon P on an approximation of the first surface, wherein a real reflection point of the second surface location falls within the polygon, and computing a possible reflection point for at least one point of the polygon, and computing a location within the virtual surface corresponding to the second surface location based at least partly on the possible reflection points. There is also provided in accordance with a preferred embodiment of the present invention, a method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first surface and a second surface reflected in the first surface, from 3D information regarding the 3D scene, the method including generating an auxiliary 2D image, at a first resolution, of the reflection of the second surface in the first surface, and generating a main 2D image, at a second resolution which differs from the first resolution, of the scene including an image of the first surface which does not include the reflection of the second surface in the first surface including texture mapping the auxiliary image onto the first surface thereby to change the resolution of the auxiliary image to the second resolution.
There is also provided in accordance with a preferred embodiment of the present invention, a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first non-planar surfaces and at least one second surfaces wherein at least one of the second surfaces is reflected in each of the first non-planar surfaces, the method including for each of the at least one first non-planar surfaces, generating a main 2D image of the scene including an image of the first non-planar surface which does not include any reflections of any one of the second surfaces in the first non-planar surface, and for each pair of first and second surfaces characterized in that the second surface is reflected in the first surface generating an auxiliary 2D image of the reflection of the second surface in the first non-planar surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
There is also provided in accordance with a preferred embodiment of the present invention, a method for computing shadows generated on a planar polygon by a light source in an image of a three-dimensional polygonal scene from a point of view of an observer, the method including computing at least one shadowed polygon within a plane, wherein at least one shadow is cast on the shadowed polygon, computing a stencil which includes visible pixels of the at least one shadowed polygon, rendering objects in the scene, including computing a scene including the at least one shadowed polygon using hidden-surface removal, to produce a preliminary image, computing at least one shadow polygon by projecting a scene polygon in the direction of the light source such that shadow generating portions of the scene polygon are generally projected above the plane and non-shadow generating portions of the scene polygon are generally projected below the plane, and rendering the at least one shadow polygon in a shadow color using the stencil and using hidden-surface removal to produce at least one rendered shadow polygon. In accordance with a preferred embodiment of the present invention, the 3D approximation of the reflection includes the reflection itself.
Additionally in accordance with a preferred embodiment of the present invention, the at least one first surfaces includes at least two first surfaces A and B and the step of generating a main 2D image for surface A includes generating a main 2D image of the scene including an image of the first non-planar surface A which does not include any reflections of any one of the second surfaces in the first non-planar surface A, and wherein the step of generating a main 2D image for surface B includes rendering surface B into the main 2D image of the scene generated for surface A. There is also provided in accordance with a preferred embodiment of the present invention, a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first surfaces and at least one second surfaces wherein at least one of the second surfaces is reflected in each of the first surfaces, the method including for each of the at least one first surfaces, generating a main 2D image of the scene including an image of the first surface which does not include any reflections of any one of the second surfaces in the first surface, and for each pair of first and second surfaces characterized in that the second surface is reflected in the first surface generating an auxiliary 2D image, at a first resolution, of the reflection of the second surface in the first surface, and generating a main 2D image, at a second resolution which differs from the first resolution, of the scene including an image of the first surface which does not include the reflection of the second surface in the first surface including texture mapping the auxiliary image onto the first surface thereby to change the resolution of the auxiliary image to the second resolution.
There is also provided in accordance with a preferred embodiment of the present invention, a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first planar surfaces and at least one second surfaces wherein at least one of the second surfaces is reflected in each of the first planar surfaces, the method including for each of the at least one first planar surfaces, generating a main 2D image of the scene including an image of the first planar surface which does, not include any reflections of any one of the second surfaces in the first planar surface, and for each pair of first planar surfaces and second surfaces characterized in that the second surface is reflected in the first planar surface generating an auxiliary 2D image of the reflection of the second surface in the first surface, wherein the step of generating includes reflecting the second surface through a plane which includes the first plane in the pair, thereby to define a virtual second surface, and using a clipping plane to remove at least a portion of the virtual second surface which obstructs the view from the point of view of the first planar surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
In accordance with a preferred embodiment of the present invention, the at least one second surface includes a plurality of second surfaces and wherein the step of generating an auxiliary 2D image for each pair of first and second surfaces includes generating a single auxiliary 2D image which includes reflections of all of the plurality of second surfaces in the first non-planar image.
Additionally in accordance with a preferred embodiment of the present invention, the steps of generating the main image and compositing include texture mapping the auxiliary image onto the first surface. Further in accordance with a preferred embodiment of the present invention, the first non-planar surface includes a polygonal mesh.
Still further in accordance with a preferred embodiment of the present invention, the compositing step includes the step of alpha-blending the auxiliary and main images.
In accordance with a preferred embodiment of the present invention, the step of generating an auxiliary 2D image includes generating an auxiliary 2D image of the reflections of a set of second surfaces including a plurality of second surfaces, in the first surface, thereby to generate a set of reflected second surfaces, and including rendering of the set of second surfaces so as to remove any hidden portions of the reflected second surfaces.
Additionally in accordance with a preferred embodiment of the present invention, the step of rendering includes using a z-buffer to remove any liidden portions of the reflected second surfaces.
Further in accordance with a preferred embodiment of the present invention, the step of generating an auxiliary 2D image includes recursively invoking the same method for different first surfaces in the scene. There is also provided in accordance with a preferred embodiment of the present invention, a method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first surface and a second surface refracted in the first surface, from 3D information regarding the 3D scene, the method including generating an auxiliary 2D image of the refraction of the second surface in the first surface, generating a main 2D image of the scene including an image of the first surface which does not include the refraction of the second surface in the first surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main
2D image.
In accordance with a preferred embodiment of the present invention, the compositing step includes employing a stencil bit plane to differentiate pixels covered by the first surface from pixels not covered by the first surface. There is also provided in accordance with a preferred embodiment of the present invention, a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first surfaces and at least one second surfaces wherein at least one of the second surfaces is refracted in each of the first surfaces, the method including generating a main 2D image of the scene including an image of the first surface which does not include any refractions of any one of the second surfaces in the first surface, and for each pair of first and second surfaces characterized in that the second surface is refracted in the first surface generating an auxiliary 2D image of the refraction of the second surface in the first surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
In accordance with a preferred embodiment of the present invention, the step of generating an auxiliary 2D image of the refraction includes computing a virtual surface including a 3D approximation of the refraction of the second surface in the first surface, and rendering the virtual surface, thereby to generate the auxiliary image. Additionally in accordance with a preferred embodiment of the present invention, the step of computing a virtual second surface includes selecting a set of second surface locations within the second surface to be refracted in the first surface, for each second surface location in the set determining a polygon on an approximation of the first surface, wherein a real refraction point of the second surface location falls within the polygon, and computing a possible refraction point for at least one point of the polygon, and computing a location within the virtual surface corresponding to the second surface location based at least partly on the possible refraction point. There is also provided in accordance with a preferred embodiment of the present invention, a method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first surface and a second surface refracted in the first surface, from 3D information regarding the 3D scene, the method including generating an auxiliary 2D image, at a first resolution, of the refraction of the second surface in the first surface, and generating a main 2D image, at a second resolution which differs from the first resolution, of the scene including an image of the first surface which does not include the refraction of the second surface in the first surface including texture mapping the auxiliary image onto the first surface thereby to change the resolution of the auxiliary image to the second resolution. There is also provided in accordance with a preferred embodiment of the present invention, a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one" first surfaces and at least one second surfaces wherein at least one of the second surfaces is refracted in each of the first surfaces, the method including for each of the at least one first surfaces, generating a main 2D image of the scene including an image of the first surface which does not include any refractions of any one of the second surfaces in the first surface, and for each pair of first and second surfaces characterized in that the second surface is refracted in the first surface generating an auxiliary 2D image of the refraction of the second surface in the first surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
There is also provided in accordance with a preferred embodiment of the present invention, a method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first planar surface and a second surface refracted in the first planar surface, from 3D information regarding the 3D scene, the method including generating an auxiliary 2D image of the refraction of the second surface in the first planar surface by refracting the second surface through the first planar surface, thereby to define a virtual second surface, and generating a main 2D image of the scene including a 2D image of the first planar surface which does not include the refraction of the second surface in the first planar surface, wherein the step of generating includes texture mapping the auxiliary image onto the first surface. In accordance with a preferred embodiment of the present invention, the method also includes the step of using a clipping plane to remove at least a portion of the virtual second surface which obstructs the view from the point of view of the first planar surface. Additionally in accordance with a preferred embodiment of the present invention, the at least one first surface includes at least two first surfaces A and B and the step of generating a main 2D image for surface A includes generating a main 2D image of the scene including an image of the first surface A which does not include any refractions of any one of the second surfaces in the first surface A, and wherein the step of generating a main 2D image for surface B includes rendering surface B into the main 2D image of the scene generated for surface A.
Further in accordance with a preferred embodiment of the present invention, the step of generating includes texture mapping the auxiliary image onto the first surface. There is also provided in accordance with a preferred embodiment of the present invention, a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first planar surfaces and at least one second surfaces wherein at least one of the second surfaces is refracted in each of the first planar surfaces, the method including for each of the at least one first planar surfaces, generating a main 2D image of the scene including an image of the first planar surface which does not include any refractions of any one of the second surfaces in the first planar surface, and for each pair of first planar surfaces and second surfaces characterized in that the second surface is refracted in the first planar surface generating an auxiliary 2D image of the refraction of the second surface in the first planar surface, wherein the step of generating includes refracting the second surface through a plane which includes the first plane in the pair, thereby to define a virtual second surface, and using a clipping plane to remove at least a portion of the virtual second surface which obstructs the view from the point of view of the first planar surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first planar surface in the main 2D image.
In accordance with a preferred embodiment of the present invention, the at least one second surfaces includes a plurality of second surfaces and wherein the step of generating an auxiliary 2D image for each pair of first and second surfaces includes generating a single auxiliary 2D image which includes refractions of all of the plurality of second surfaces in the first image. Additionally in accordance with a preferred embodiment of the present invention, the steps of generating the main image and compositing include texture mapping the auxiliary image onto the first surface.
Further in accordance with a preferred embodiment of the present invention, the first surface includes a polygonal mesh.
Still further the compositing step includes the step of alpha-blending the auxiliary and main images.
In accordance with a preferred embodiment of the present invention, the step of generating an auxiliary 2D image includes generating an auxiliary 2D image of the refractions of a set of second surfaces including a plurality of second surfaces, in the first surface, thereby to generate a set of refracted second surfaces, and including rendering of the set of second surfaces so as to remove any hidden portions of the refracted second surfaces.
Additionally in accordance with a preferred embodiment of the present invention, the step of rendering includes using a z-buffer to remove any hidden portions of the refracted second surfaces.
Further in accordance with a preferred embodiment of the present invention, the step of generating an auxiliary 2D image includes recursively invoking the same method for different first surfaces in the scene.
Still further in accordance with a preferred embodiment of the present invention, the method also includes generating an additional auxiliary 2D image of the reflection of the second surface in the first surface, wherein the step of generating a main 2D image includes generating a main 2D image of the scene including an image of the first surface which does not include both of the reflection of the second surface in the first surface and the refraction of the second surface in the first surface, and wherein the compositing step includes compositing both auxiliary 2D images with the main 2D image including mapping both auxiliary 2D images onto the image of the first surface in the main 2D image.
In accordance with a preferred embodiment of the present invention, the method also includes generating an additional auxiliary 2D image of the shadow of the second surface on the first surface, wherein the step of generating a main 2D image includes generating a main 2D image of the scene including an image of the first surface which does not include both of the shadow of the second surface on the first surface and the refraction of the second surface in the first surface, and wherein the compositing step includes compositing both auxiliary 2D images with the main 2D image including mapping both auxiliary 2D images onto the image of the first surface in the main 2D image.
Additionally in accordance with a preferred embodiment of the present invention, the method also includes generating an additional auxiliary 2D image of the shadow of the second surface on the first surface, wherein the step of generating a main 2D image includes generating a main 2D image of the scene including an image of the first surface which does not include both of the shadow of the second surface on the first surface and the reflection of the second surface in the first surface, and wherein the compositing step includes compositing both auxiliary 2D images with the main 2D image including mapping both auxiliary 2D images onto the image of the first surface in the main 2D image.
There is also provided in accordance with a preferred embodiment of the present invention, a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one objects and at least one surfaces, at least one of which is reflected in each of the at least one objects, wherein the 3D information regarding the 3D scene includes, for each of the at least one objects, at least one polygons which approximate the object, each of the at least one polygons having at least 3 vertices, the method including, for each of the at least one objects generating an environment map for a selected point, for each individual vertex from among the object's polygons' vertices, computing an ordered pair of texture coordinates within the environment map, and rendering the object's polygons using the environment map as a texture map, wherein the step of generating an environment map includes rendering at least one of the surfaces on each of four faces of a four-faced pyramid defined about the selected point.
In accordance with a preferred embodiment of the present invention, the method also includes mapping the faces of the pyramid, on each of which at least one of the surfaces is rendered, onto a 2D circle, thereby to generate a modified environment map.
There is also provided in accordance with a preferred embodiment of the present invention, a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one objects and at least one surfaces, at least one of which is reflected in each of the at least one objects, wherein the 3D information regarding the 3D scene includes, for each of the at least one objects, at least one polygons which approximate the object, each of the at least one polygons having at least 3 vertices, the method including, for each of the at least one objects generating an environment map for a selected point, for each individual vertex from among the object's polygons' vertices, computing an ordered pair of texture coordinates within the environment map, and rendering the object's polygons using the environment map as a texture map, wherein the step of computing an ordered pair of texture coordinates includes computing a reflection ray based on the point of view, the individual vertex and a normal to the object at the individual vertex, computing an intersection point of the ray with an object centered at the selected point, and computing a pair of texture coordinates as a function of a vector extending from the selected point to the intersection point.
In accordance with a preferred embodiment of the present invention, the step of computing an ordered pair of texture coordinates includes computing a reflection ray based on the point of view, the individual vertex and a normal to the object at the individual vertex, computing an intersection point of the ray with an object centered at the selected point, and computing a pair of texture coordinates as a function of a vector extending from the selected point to the intersection point. There is also provided in accordance with a preferred embodiment of the present invention, a method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one objects and at least one surfaces, at least one of which is reflected in each of the at least one objects, wherein the 3D information regarding the 3D scene includes, for each of the at least one objects, at least one polygons which approximate the object, each of the at least one polygons having at least 3 vertices, the method including, for each of the at least one objects generating an environment map for a selected point, for each individual vertex from among the object's polygons' vertices, computing an ordered pair of texture coordinates within the environment map, and rendering the object's polygons using the environment map as a texture map, wherein the step of computing an ordered pair of texture coordinates includes computing a reflection ray based on the point of view, the individual vertex and a normal to the object at the individual vertex, computing at least one intersection points of the ray with at least one spheres, respectively, which are centered at the selected point, computing at least one intermediate rays from the selected point to each of the at least one intersection points, respectively, combining directions of the reflection ray and all of the at least one intermediate rays, thereby to define a final vector, and computing a pair of texture coordinates as a function of the final vector. In accordance with a preferred embodiment of the present invention, the method also includes the step of generating a depth map of the surfaces as seen from the selected point, the depth map corresponding to the environment map, and wherein the combining step includes assigning a weight to each individual one of the directions based on a depth which the depth map assigns to the individual one of the directions.
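By way of non-limiting illustration, the texture-coordinate computation recited above may be sketched in Python as follows. The function name, the use of an explicit sphere radius, and the final latitude/longitude style mapping of the direction vector to an ordered pair are assumptions made purely for this sketch and are not mandated by the method described above.
import numpy as np

def texture_coords_for_vertex(view_point, vertex, normal, center, radius):
    # Hypothetical helper; all inputs are 3-component numpy arrays.
    # Reflect the viewing direction about the vertex normal to obtain the reflection ray.
    d = vertex - view_point
    d = d / np.linalg.norm(d)
    n = normal / np.linalg.norm(normal)
    r = d - 2.0 * np.dot(d, n) * n            # direction of the reflection ray

    # Intersect the ray (originating at the vertex) with a sphere centered at the selected point.
    oc = vertex - center
    b = np.dot(r, oc)
    disc = b * b - (np.dot(oc, oc) - radius * radius)
    t = -b + np.sqrt(max(disc, 0.0))          # far intersection along the ray
    hit = vertex + t * r

    # Texture coordinates as a function of the vector from the selected point to the
    # intersection point; a latitude/longitude mapping is assumed here for illustration.
    v = (hit - center) / np.linalg.norm(hit - center)
    u_coord = 0.5 + np.arctan2(v[1], v[0]) / (2.0 * np.pi)
    v_coord = 0.5 - np.arcsin(np.clip(v[2], -1.0, 1.0)) / np.pi
    return u_coord, v_coord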
Additionally in accordance with a preferred embodiment of the present invention, the selected point includes a user-selected location in the scene.
Further in accordance with a preferred embodiment of the present invention, the selected point, for which an environment map is generated for an individual one of the at least one objects, includes a central location of the individual object.
There is also provided in accordance with a preferred embodiment of the present invention, apparatus for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one objects and at least one surfaces, at least one of which is reflected in each of the at least one objects, wherein the 3D information regarding the 3D scene includes, for each of the at least one objects, at least one polygons which approximate the object, each of the at least one polygons having at least 3 vertices, the apparatus including an environment map generator operative to generate an environment map for a selected point for each of the at least one objects, a texture coordinate computer operative to compute, for each individual vertex from among each object's polygons' vertices, an ordered pair of texture coordinates within the environment map, and a rendering device operative to render each object's polygons using the environment map as a texture map, wherein the environment map generator includes a pyramidic rendering device operative to render at least one of the surfaces on each of four faces of a four-faced pyramid defined about the selected point. There is also provided in accordance with a preferred embodiment of the present invention, apparatus for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one objects and at least one surfaces, at least one of which is reflected in each of the at least one objects, wherein the 3D information regarding the 3D scene includes, for each of the at least one objects, at least one polygons which approximate the object, each of the at least one polygons having at least 3 vertices, the apparatus including an environment map generator operative to generate an environment map for a selected point for each of the at least one objects, a texture coordinate computer operative to compute, for each individual vertex from among each object's polygons' vertices, an ordered pair of texture coordinates within the environment map, and a rendering device operative to render each object's polygons using the environment map as a texture map, wherein the texture coordinate computer is operative to compute a reflection ray based on the point of view, the individual vertex and a normal to the object at the individual vertex, to compute an intersection point of the ray with a sphere centered at the selected point, and to compute a pair of texture coordinates as a function of a vector extending from the selected point to the intersection point.
There is also provided in accordance with a preferred embodiment of the present invention, apparatus for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one objects and at least one surfaces, at least one of which is reflected in each of the at least one objects, wherein the 3D information regarding the 3D scene includes, for each of the at least one objects, at least one polygons which approximate the object, each of the at least one polygons having at least 3 vertices, the apparatus including an environment map generator operative to generate an environment map for a selected point for each of the at least one objects, a texture coordinate computer operative to compute, for each individual vertex from among each object's polygons' vertices, an ordered pair of texture coordinates within the environment map, and a rendering device operative to render each object's polygons using the environment map as a texture map, wherein the texture coordinate computer is operative to compute a reflection ray based on the point of view, the individual vertex and a normal to the object at the individual vertex, to compute at least one intersection points of the ray with at least one spheres, respectively, which are centered at the selected point, to compute at least one intermediate rays from the selected point to each of the at least one intersection points, respectively, to combine directions of the reflection ray and all of the at least one intermediate rays, thereby to define a final vector and to compute a pair of texture coordinates as a function of the final vector.
There is also provided in accordance with a preferred embodiment of the present invention, apparatus for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first non-planar surface and a second surface reflected in the first non-planar surface, from 3D information regarding the 3D scene, the apparatus including auxiliary 2D image apparatus which generates the reflection of the second surface in the first non-planar surface, main 2D image apparatus which generates the scene including an image of the first non-planar surface which does not include the reflection of the second surface in the first non-planar surface, and compositor apparatus which combines the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image. There is also provided in accordance with a preferred embodiment of the present invention, apparatus for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first planar surface and a second surface reflected in the first planar surface, from 3D information regarding the 3D scene, the apparatus including auxiliary 2D image apparatus which generates the reflection of the second surface in the first planar surface, wherein the auxiliary 2D image apparatus includes reflecting apparatus which reflects the second surface through the plane, thereby defining a virtual second surface, and removal apparatus which uses a clipping plane to remove at least a portion of the virtual second surface which obstructs the view from the point of view of the first planar surface, and main 2D image apparatus for generating the scene including a 2D image of the first surface which does not include the reflection of the second surface in the first surface.
There is also provided in accordance with a preferred embodiment of the present invention, a method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first surface and a second surface reflected in the first surface, from 3D information regarding the 3D scene, the method including recursively performing the following steps generating an auxiliary 2D image of the reflection of the second surface in the first surface, generating a main 2D image of the scene including an image of the first surface which does not include the reflection of the second surface in the first surface, and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image. It is appreciated that the total number of vertices defined by the object approximating polygons may be less than 3 times the number of object approximating polygons since two or more polygons may have one or more vertices in common. The number of polygons used to approximate different objects need not be the same. The "selected point", also termed herein the "object point", for which an environment map is generated in the course of rendering an object's polygons, may be interior to that object, but alternatively may be on the surface of that object or even exterior to that object.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will be understood and appreciated from the following detailed description, taken in conjunction with the drawings in which:
Fig. 1A is a simplified block diagram of a computerized graphics system constructed and operative in accordance with a preferred embodiment of the present invention; Fig. 1B is a simplified pictorial illustration of an example of a scene as operated upon in accordance with a preferred embodiment of the present invention;
Figs. 2A and 2B, taken together, comprise a simplified flowchart illustration of a preferred method of operation of the system of Fig. 1A;
Fig. 3 is a simplified flowchart illustration of a preferred implementation of step 110 of Fig. 2A;
Fig. 4 is a simplified flowchart illustration of a preferred implementation of step 130 of Fig. 2A;
Fig. 5 is a simplified flowchart illustration of a preferred implementation of steps 190 and 200 of Fig. 2B; Fig. 6 is a simplified flowchart illustration of a preferred method of implementation of step 210 of Fig. 2B;
Fig. 7 is a simplified flowchart illustration of a preferred implementation of step 230 of Fig. 2B;
Fig. 8 is a simplified flowchart illustration of a preferred implementation of step 220 of Fig. 2B;
Fig. 9 is a simplified flowchart illustration of a preferred implementation of step 390 of Fig. 7;
Fig. 10 is a simplified flowchart illustration of a preferred implementation of step 460 of Fig. 9; Figs. 11A and 11B, taken together, are a simplified flowchart illustration of a preferred implementation of step 480 of Fig. 10;
Fig. 12 is a simplified flowchart illustration of a preferred implementation of step 490 of Fig. 10;
Fig. 13 is a simplified flowchart illustration of a preferred implementation of step 500 of Fig. 10;
Fig. 14 is a simplified flowchart illustration of a preferred implementation of step 510 of Fig. 10;
Fig. 15 is a simplified flowchart illustration of a preferred implementation of step 520 of Fig. 10;
Fig. 16 is a simplified flowchart illustration of a preferred implementation of step 540 of Fig. 10; Figs. 17A - 17V are simplified pictorial illustrations representing examples of the results of various steps of the methods of the present invention;
Fig. 18 is a pictorial illustration of a scene including four spheres each having a shadow rendered on the floor plane of the scene;
Fig. 19 is a pictorial illustration of one of the spheres of Fig. 18, a point of view from which the sphere is being observed, and a four-faced pyramid surrounding the sphere;
Figs. 20A - 20D are pictorial illustrations of the scene of Fig. 18 rendered onto each of the four faces, respectively, of the pyramid of Fig. 19;
Fig. 21 is a pictorial illustration of the pyramid of Fig. 19 unfolded onto a planar surface, where the scene of Fig. 18 is rendered onto each of the faces of the pyramid; Fig. 22 is a pictorial illustration of a tessellation of the front face of the pyramid;
Fig. 23 is a pictorial illustration of the tessellation of Fig. 22 mapped into a 2D circle,
Fig. 24 is a circular environment map including a mapping, as illustrated in Figs. 22 - 23, of each of the faces of the pyramid onto the 2D circle of Fig. 23; Fig. 25 is a rendering of the entire scene of Fig. 18 including a reflection of the entire scene in the sphere of Fig. 19, the reflection having been generated by texture-mapping the environment map of Fig. 24 onto the sphere 1510;
Fig. 26 is a simplified block diagram of a system for generating a 2D image, including reflections, of a 3D scene from a point of view, using 3D information regarding the 3D scene,
Fig. 27 is a simplified flowchart illustration of a preferred method, constructed and operative in accordance with a preferred embodiment of the present invention, for generating a 2D image of a 3D scene using a pyramidal technique to compute an environment map; Fig. 28 is a simplified flowchart illustration of a first preferred method, constructed and operative in accordance with a preferred embodiment of the present invention, for generating texture coordinates useful in generating a 2D image of a 3D scene either according to the method of Fig. 27 or according to conventional methods;
Fig. 29 is a simplified flowchart illustration of a second preferred method, constructed and operative in accordance with a preferred embodiment of the present invention, for generating texture coordinates useful in generating a 2D image of a 3D scene either according to the method of Fig. 27 or according to conventional methods; and
Fig. 30 is a pictorial illustration of the relationship between geometric entities participating in the methods shown and described herein.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
Reference is now made to Fig. 1A, which is a simplified block diagram of a computerized graphics system constructed and operative in accordance with a preferred embodiment of the present invention. It is appreciated that the apparatus of Fig. 1A may be implemented partly in computer hardware and partly in software, or entirely in custom hardware. Preferably, the apparatus of the present invention is implemented in suitably programmed computer hardware. The computer hardware may comprise, for example, a Silicon Graphics Indigo-2 Impact Graphics workstation, commercially available from Silicon Graphics, Mountain View, California, USA.
The system of Fig. 1A comprises a control unit 10, receiving an input comprising a scene, as described more fully below with reference to Figs. 2A and 2B. The input to the control unit 10 typically comprises 3-D coordinates of all objects and light sources in the scene, color characteristics for all objects in the scene, textures, and viewing parameters.
The system of Fig. 1A also comprises a scan-conversion unit 15, receiving input from the control unit 10. The input received by the scan-conversion unit 15 typically comprises 2-D coordinates of all polygons in the scene and depth information, and optionally also comprises color information, which color information may have been computed by the control unit 10 based on lighting information. The operation of the scan-conversion unit 15 is described more fully below with reference to Figs. 2A and 2B.
The system of Fig. 1A also comprises a reflections, refractions and shadows unit (RRS) 20. The RRS 20 receives an input from the control unit 10 indicating what operation is to be carried out and, based on the received input, provides a control signal to a 3-D transformation and lighting unit (3-D unit) 25. The RRS 20 also provides a control signal to a stencils unit 30 and a z-buffer 35.
The z-buffer 35 may comprise a plurality of z-buffers. The stencils unit 30 and the z-buffer 35 are also connected to the scan conversion unit 15. The system of Fig. 1A also comprises an image buffer 40, an optional accumulation buffer 45, an alpha bit planes unit 50, and a texture bit planes unit 55. The optional accumulation buffer 45 is used as a working area, preferably of the same size as the image buffer 40. The optional accumulation buffer 45 is referred to herein primarily for purposes of clarity of description; it is appreciated that the present invention need not include an accumulation buffer 45.
The system of Fig. 1A also comprises an image output unit 60, receiving input from the image buffer 40. The output unit 60 typically produces the image contained in the image buffer 40 on a device such as a video monitor, printer, plotter, or other output device as is well known in the art.
Each of the units 15, 30, 35, 40, 45, 50, and 55 is preferably connected to each of the other units 15, 30, 35, 40, 45, 50 and 55 with two-way connections. The term "stencil" or "stencil bit plane," as used herein, refers generally to an image used to mask another image; that is, to specify a sub-image comprising a subset of the pixels of an image. Typically, a stencil is a single bit image at the same resolution as that of the generated image, comprising a single bit per pixel.
Reference is now additionally made to Fig. 1B which is a simplified pictorial illustration of an example of a scene as operated upon in accordance with various preferred embodiments of the present invention. The scene of Fig. 1B is useful in understanding terms used in the present specification, as it depicts reflections, refractions, shadows, and other aspects of a scene as may be produced by various steps in various embodiments described below. The scene of Fig. 1B comprises a polygon 70 on which shadow is cast, a shadow 72 as cast on polygon 70, a polygonal object 74, a reflecting surface 75, a refracting surface 76, a refracting beam 78, a reflection 80, and a light source 82.
The operation of the apparatus of Fig. 1A is now briefly described. Reference is now additionally made to Figs. 2A and 2B, which, taken together, comprise a simplified flowchart illustration of a preferred method of operation of the system of Fig. 1A. It is appreciated that the method of Figs. 2A and 2B, as well as other methods described below, need not necessarily be performed in a particular order, and that in fact, for reasons of implementation, a particular implementation of the methods may be performed in a different order than another particular implementation.
The method of Figs. 2A and 2B preferably includes the following steps:
A scene to be processed is input by the control unit 10, along with viewing parameters which describe how the scene is to be viewed (step 100). The scene typically comprises both information about surfaces in the scene and lighting parameters. The information about surfaces typically includes both geometry and color characteristics of the surfaces, typically including the following: base color; diffuse reflection coefficients; specular reflection coefficients; refraction indices; and other characteristics. The specular reflection coefficients are particularly useful in performing the methods of the present invention. The specular reflection coefficients are usually described in RGBA form, as is well known in the art and are described in chapter 13 of Foley et al, referred to above. The lighting parameters typically define the location, orientation, and intensity of lighting in the scene.
Viewing parameters typically comprise point of view parameters, which define the point of view of an observer of the scene, including location of the point of view, orientation of view, and field of view. Processing by the method of Figs. 2A and 2B is in accordance with the viewing parameters.
A virtual world W, a stencil S, and a rendered surface RO are initialized (step 110). A preferred implementation of step 110 is described in more detail below with reference to Fig. 3.
A stenciled portion SZ of the z-buffer, defined according to the stencil S, is initialized, typically by setting all values within SZ to the largest possible number, also referred to herein as setting to infinity (step 120).
All surfaces in W are rendered into a stenciled portion SB of the accumulation buffer 45 and into SZ, including computation of a color C for each pixel point P through which each surface O is visible (step 130).
The term "color," as used in the art and throughout the present specification, includes information on color in the usual sense as well as on reflectance and on refraction characteristics. The term "render," as used throughout the present specification, refers to transformation of the surface to the view coordinate system, followed by finding the pixels in the resulting image which are covered by the surface. Preferred methods for transformation to the view coordinate system and finding the pixels which are covered by the surface via scan conversion are well known in the art, and are discussed by Foley et al, referred to above, in chapters 6 and 3, respectively. Generally, rendering may include the application of a clipping plane, which is a plane defining a portion of the scene, on one side of the plane, which is to be clipped or removed from the image. Methods for applying a clipping plane are well known in the art.
The stenciled portion SB of the accumulation buffer 45 is defined by the stencil
S in a manner similar to the definition of SZ, described above. A stenciled portion SI of the image buffer 40 is similarly defined. A preferred implementation of step 130 is described in more detail below with reference to Fig. 4. The current color of each pixel in the stenciled portion SB of the accumulation buffer 45 is replaced with a color computed as a weighted average of the current color of the pixel and the corresponding pixel in the accumulation buffer AB (step 140). This process is known as alpha blending. Typically, the weights of the weighted average are determined based, at least in part, on characteristics of the rendered surface RO such as, for example, reflectance and opacity of RO. Other factors, such as the level of recursion of the method, may also have an effect on the weights. Typical methods, known in the art, for determining the weights are described in Foley et al, referred to above, in chapter 16.
The method of Figs. 2A and 2B is a recursive method. As is well-known in the art, recursion criteria may be defined in order to determine when a given level of recursion is to be terminated. Such criteria may include: time spent on processing a particular level of the method; amount of change in the rendered surface RO computed in the current level of iteration; reaching a maximum recursion level, such as, for example, 4 levels of recursion; or screen space occupied by the surface being processed being smaller than a predetermined minimum size such as, for example, 10 pixels. Examples of a scene after performing one or more recursions of steps 120 through 140 are described in more detail below with reference to Figs. 17A, 17B, 17D, 17E, 17G, and 17H.
Recursion criteria are checked to see whether any recursion-ending criteria are satisfied (step 150). If no recursion-ending criteria are satisfied, processing continues with step 190. If some recursion criteria are satisfied, the current level of recursion is ended (step 160). As each recursion level is ended, a check is made as to whether processing is presently at the top recursion level (step 170). If so, the method of Figs. 2A and 2B terminates (step 180), and the present image in the image buffer 40 is output by the control unit 10, being the result of the method of Figs. 2A and 2B. Otherwise, processing continues at the next higher recursion level (step 185).
If no recursion criteria are satisfied in step 150, a check is made to see whether any more surfaces of interest are found in the scene (step 190). A surface of interest is generally any visible surface which has an effect on computation of the rendered image. Typically, a surface of interest includes any visible surface for which we are interested in at least one of the following: computing shadows falling on the object; computing reflections from the object; computing refraction through the object. Whether or not a surface is of interest may be determined using methods well-known in the art, based partly on the color as described above.
If no more surfaces of interest are found, recursion at the current level is ended in a way similar to that described above with reference to steps 160, 170, 180, and 185.
The next surface of interest O, found in step 190, is selected for processing (step 200). A preferred implementation of the method of steps 190 and 200 is described in more detail below with reference to Fig. 5.
A new stencil S' is prepared (step 210). A preferred implementation of step 210 is described in more detail below with reference to Fig. 6. Step 210 may, in principle, be carried out at the same time as step 130, described above. Carrying out step 210 at the same time as step 130 may be preferable in order to increase the efficiency of the method of Figs. 2A and 2B.
Shadows falling on surface O are computed if desired (step 220). Methods of shadow computation are well known in the art and are described, for example, in Reeves, Salesin, and Cook, referred to above. A preferred implementation of step 220, believed to be possibly preferable to those known in the art, is described in more detail below with reference to Fig. 8.
The computation of reflections and the computation of refractions as described below are generally the same. Where the computations are the same, reference is made by using "reflection and/or refraction" or similar constructs. Where the computations differ, however, they are described separately.
If computation of reflections and/or refractions from O is desired, a new virtual world W' is computed, and steps beginning at step 120 are then performed recursively, using S', W' and O instead of S, W, and RO (steps 230 and 240). A preferred implementation of step
230 is described in more detail below with reference to Fig. 7. As described below, the world
W' is computed based in part on the surface O and the fact that reflections and/or refractions are to be computed. An example of a scene after performing all recursions of steps 120 through 240 is described in more detail below with reference to Fig. 17I.
Reference is now made to Fig. 3, which is a simplified flowchart illustration of a preferred implementation of step 110 of Fig. 2A. The method of Fig. 3 preferably comprises the following steps: The virtual world W is set to contain all surfaces in the scene (step 250). Stencil
S is set to be a stencil covering the whole image (step 260). The rendered surface RO is set to be the background, that is, an infinite plane at a very great distance, further away from the point of view than all surfaces (step 270).
Reference is now made to Fig. 4, which is a simplified flowchart illustration of a preferred implementation of step 130 of Fig. 2A. The method of Fig. 4 preferably includes the following steps:
All surfaces in W are rendered into the stenciled portion SI of the image and into
SZ, using SZ (step 280). For each surface O in W, during rendering into SI, a color C is computed for each pixel through which O is visible (step 290). Preferably, the color computation takes the following into account: the color of O, optionally using a texture map; the color of ambient light; the normal to O at P, optionally perturbed using a bump map; the location and orientation of light sources; and other factors, as are well known in the art.
Texture maps, bump maps, and using the location and orientation of light sources are described in Foley et al, referred to above, at pages 721 - 744. Reference is now made to Fig. 5, which is a simplified flowchart illustration of a preferred implementation of steps 190 and 200 of Fig. 2B. The method of Fig. 5 preferably includes the following steps:
If we are interested in computing shadows falling on O (step 300), or if we are interested in computing reflections from O (step 310), or if we are interested in computing refractions through O (step 320), then O is considered a surface of interest (step 330).
Otherwise, O is not considered a surface of interest (step 340). Reference is now made to Fig. 6, which is a simplified flowchart illustration of a preferred implementation of step 210 of Fig. 2B. The method of Fig. 6 preferably includes the following steps:
Pixels covered by O and through which O is visible are determined, using SZ (step 350). Typically, this determination is made using a z-buffer operating on the stenciled portion SZ. Computational methods using a z-buffer are well-known in the art and are described, for example, in Foley et al, referred to above, at pages 668-672. A new stencil S' is then computed as the intersection of stencil S and the pixels determined in step 350 (step 360).
Examples of a scene after performing one or more recursions of steps 350 and 360 are described in more detail below with reference to Figs. 17C, and 17F.
Reference is now made to Fig. 7, which is a simplified flowchart illustration of a preferred implementation of step 230 of Fig. 2B. The method of Fig. 7 preferably includes the following steps:
A surface O is determined (step 370). The viewer's location and orientation are determined from the initial viewing parameters (step 380). A new virtual world W' is computed as a transformation of the scene, based on the surface of O (step 390). A preferred implementation of step 390 is described in more detail below with reference to Fig. 9.
The transformation depends on the type of surface O. If, for example, the surface O is planar polygonal, W' contains all surfaces in the scene reflected and/or refracted through the plane on which the surface O lies. In the case of reflection, W' does not contain the reflection of surfaces that lie on the side of the surface of O away from the viewer. In the case of refraction, W' does not contain the refraction of surfaces that lie on the side of the surface of O nearest the viewer. In the case of a planar polygonal O, each point in W is reflected through the plane in a uniform way; that is, the same reflection transformation is used for all points.
The case of a general surface O2 is described in detail below with reference to
Figs. 10 - 16. Briefly, in the case of a curved surface O2, a set of sample points is generated on O2. The sample points, when connected, form a subdivision of O2 into polygons. The subdivision is also known as a tessellation. For each point Q in the scene W, it is determined, as described below, in which tessellation polygon T the point Q is reflected and/or refracted.
In the case of reflection, Q is reflected through each of the vertices of T using the normal to the surface, which, in the case of a curved surface, must be known at each vertex and is therefore generally supplied as an input parameter to control unit 10, as described above with reference to Fig. 1A. Alternatively, the normal to the surface may be computed by the RRS
20, using methods well known in the art. In the case of refraction, Q is refracted by taking refraction coefficients into account and applying Snell's law as described below with reference to Fig. 11B. A weighted average of the reflected and/or refracted points is then computed, and the final location of Q in the reflected and/or refracted world is taken to be the weighted average. The weights for the weighted average are determined as a function of the proximity of Q to the rays reflected and/or refracted from the vertices.
Reference is now made to Fig. 8, which is a simplified flowchart illustration of a preferred implementation of step 220 of Fig. 2B. The method of Fig. 8 is described for the case of a single shadowed polygon. The term "shadowed polygon," as used herein, refers generally to a polygon on which shadow falls. The term "shadow polygon," as used herein, refers generally to a representation of a shadow that is used to create shadowed polygons. It is appreciated that the method of Fig. 8 may be applied repeatedly to a plurality of shadowed polygons in the scene, and that shadow polygons may be used to approximate non-polygonal objects in the scene. The method of Fig. 8 preferably includes the following steps:
Shadow polygons are computed (step 415). A preferred method for computing shadow polygons is as follows:
Let G be a row vector (a,b,c,d) of the coefficients of the implicit equation of the plane in which the polygon PO, on which shadow is to fall, lies. Let L be a row vector (x,y,z,w) of the homogeneous coordinates of the location of a point light source, where w=0 if the light source is at infinity. Let the matrix M = (L dot G - epsilon) I - G^T times L, where: dot indicates dot product; 0 < epsilon << 1; I is the 4 x 4 identity matrix;
G^T is the transpose of G; and times indicates matrix multiplication.
The projection of a 3-D point Q is given by QM. Use of epsilon in defining M allows computation involving objects which are partially hidden by the polygon PO from the point of view of the light source. If epsilon = 0 then all shadow polygons are projected onto the plane in which polygon PO lies whether the object for which shadows are cast is between polygon PO and the light source or beyond polygon PO. By setting epsilon > 0 we maintain the spatial order between polygon PO and the shadow polygons, resulting in the removal of shadows of objects that are beyond polygon PO during the hidden surface removal stage of the rendering.
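By way of non-limiting illustration, the matrix M described above may be assembled as in the following sketch, which follows the row-vector convention used above; the function name and the default value of epsilon are illustrative assumptions only.
import numpy as np

def shadow_projection_matrix(G, L, epsilon=1e-3):
    # G: row vector (a, b, c, d) of the plane in which polygon PO lies.
    # L: row vector (x, y, z, w) of the light source position; w = 0 for a light at infinity.
    G = np.asarray(G, dtype=float)
    L = np.asarray(L, dtype=float)
    # M = (L dot G - epsilon) I - G^T times L, exactly as written above.
    return (np.dot(L, G) - epsilon) * np.eye(4) - np.outer(G, L)

# A 3-D point Q, written as the homogeneous row vector (x, y, z, 1), is projected
# onto the plane of PO by computing Q M (followed by a homogeneous divide).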
Examples of a scene after performing step 415 are described in more detail below with reference to Figs. 17K through 17N.
A stencil, including the visible pixels of polygon PO, is computed (step 420).
An example of a scene after performing step 420 is described in more detail below with reference to Fig. 17O.
Objects in the scene are rendered, preferably by computing a z-buffer of the scene including the shadowed polygon, preferably with hidden surfaces removed (step 425).
An example of a scene after performing step 425 is described in more detail below with reference to Fig. 17P. The shadow polygon is then rendered in a shadow color, preferably by using the stencil and the z-buffer computed in step 425, to produce a rendered shadow polygon
(step 430). Preferably, alpha blending is used to blend the generated shadows with the image generated in step 425. In alpha blending, the weights used are determined by the desired
"strength" of the shadows, where 0 < strength < 1, controlling the relative darkness of the shadow from invisible (strength=0) to total darkness (strength=1). Strength is input as a viewing parameter by the control unit 10 (step 100).
The rendered shadow polygon is then blended with the preliminary shadowed image to produce a final shadowed image (step 435). Examples of a scene after performing step 435 are described in more detail below with reference to Figs. 17Q and 17R.
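For purposes of illustration only, the blending of the rendered shadow polygon into the image using the strength parameter described above might be sketched as follows; the array names and the mask representation are assumptions of the sketch rather than features of the method.
import numpy as np

def blend_shadow(image_rgb, shadow_mask, shadow_rgb, strength):
    # image_rgb:   H x W x 3 array holding the image produced in step 425.
    # shadow_mask: H x W boolean array marking pixels of the rendered shadow polygon
    #              that pass the stencil and z-buffer tests of step 430.
    # shadow_rgb:  the shadow color, a 3-component array.
    # strength:    0 gives an invisible shadow, 1 gives total darkness, as described above.
    out = image_rgb.astype(float).copy()
    out[shadow_mask] = (1.0 - strength) * out[shadow_mask] + strength * np.asarray(shadow_rgb, dtype=float)
    return out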
Reference is now made to Fig. 9, which is a simplified flowchart illustration of a preferred implementation of step 390 of Fig. 7. The method of Fig. 9 preferably comprises the following steps: Each surface in the scene is approximated with a polygonal mesh (step 450).
Approximation with a polygonal mesh is a method well-known in the art. A polygonal mesh typically comprises: a set of vertices; a set of edges connecting the vertices and defining polygonal faces; and optionally a set of normals, one at each vertex.
The approximation with a polygonal mesh may be an exact approximation such as, for example, when a planar polygon is represented by a mesh containing a plurality of polygons. The final accuracy of the generated image depends in large part on the density of the mesh since a denser mesh generally gives a closer approximation and therefore a higher quality image.
The vertices of the polygonal mesh will be denoted herein as P. For each vertex
P of each polygonal mesh in the scene, a reflected and/or refracted image P', or an approximation thereto, is computed (step 460). The collection of all such reflected and/or refracted images P' defines a mesh belonging to the reflected and/or refracted world W', and thus defines the reflected and/or refracted world W'. The computation of P' of a vertex P differs according to the type of reflecting and/or refracting surface SR, and is described as follows with reference to Fig. 10. Reference is now made to Fig. 10, which is a simplified flowchart illustration of a preferred implementation of step 460 of Fig. 9. The method of Fig. 10 preferably includes the following steps:
Each reflecting and/or refracting surface is classified and a reflected and/or refracted image is computed accordingly (step 470). The reflecting and/or refracting surface is classified, and the reflected and/or refracted image is computed, as follows: for a planar surface (step 480), described in more detail below with reference to
Figs. 11A and 11B; for an orthogonal linear 3-D extrusion of a convex planar curve (step 490), described in more detail below with reference to Fig. 12; for a cylinder (step 500), described in more detail below with reference to Fig.
13; for a sphere (step 510), described in more detail below with reference to Fig. 14; for a general convex 3-D polygonal mesh (step 520), described in more detail below with reference to Fig. 15; for a general concave 3-D polygonal mesh (step 530), described in more detail below with reference to Fig. 15; for a general 3-D polygonal mesh (step 540), described in more detail below with reference to Fig. 16.
The following conditions, which are satisfied for P', will aid in the understanding of Figs. 11A - 16:
1. Let V denote the location of the viewer.
2. Let N denote the normal for a planar reflecting and/or refracting surface SR. Reference is now made to Figs. 11A and 11B, which, taken together, comprise a simplified flowchart illustration of a preferred implementation of step 480 of Fig. 10. The methods of Figs. 11A and 11B preferably comprise the following steps:
For each vertex in the scene, the following steps are performed to compute the reflected image:
For each reflected image P', there exists a point Q on surface SR such that the angle between the vector (V-Q) and N is equal to the angle between the vector (P-Q) and N.
P' lies on a straight line L defined by V and Q, Q is between V and P' on the line L, and the distance between Q and P' is equal to the distance between Q and P. An example of a scene containing SR, V, N, Q, P, and P' is described in more detail below with reference to Fig. 17S.
A point H on the reflecting surface SR is found, such that H is the point on surface SR nearest to P, the point to be reflected (step 550). Since surface SR is planar, the reflection P' lies on the straight line defined by P and H, on the opposite side of surface SR, and at the same distance from H as P is from H (step 553). The computation of P' can thus be carried out using methods well known in the art, such as projection of point P onto the plane in which surface SR lies, distance computation, and point and vector addition.
Preferably, since the transformation necessary to compute the image P' of a point P is the same for all points P in the world W, the transformation can be represented by an affine transformation as follows. Let (a,b,c,d) be the four coefficients of an implicit equation of the plane in which surface SR lies, such that for each point (x,y,z) on the plane ax+by+cz=d.
Then define the matrix M as follows:
1-2aa   -2ab    -2ac   0
-2ab    1-2bb   -2bc   0
-2ac    -2bc    1-2cc  0
2ad     2bd     2cd    1
The reflected image P' of P is then given by PM, where P is written as a row vector as follows: x y z 1
It is appreciated that the matrix M depends neither on the position of the viewer nor on the position of the point P to be reflected.
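By way of non-limiting illustration, the matrix M defined above and its application to the vertices of the world W might be sketched as follows, assuming, as is implicit above, that (a,b,c) is a unit normal to the plane.
import numpy as np

def reflection_matrix(a, b, c, d):
    # Reflection about the plane ax + by + cz = d, assuming (a, b, c) is a unit normal,
    # for points written as homogeneous row vectors (x, y, z, 1) and multiplied on the right.
    return np.array([
        [1 - 2*a*a,    -2*a*b,    -2*a*c, 0.0],
        [   -2*a*b, 1 - 2*b*b,    -2*b*c, 0.0],
        [   -2*a*c,    -2*b*c, 1 - 2*c*c, 0.0],
        [    2*a*d,     2*b*d,     2*c*d, 1.0],
    ])

# Example: every vertex P of every mesh in the world W is mapped to its reflected
# image P' = P M to build the reflected world W', e.g.
#     P_reflected = np.array([x, y, z, 1.0]) @ reflection_matrix(a, b, c, d)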
It is appreciated that it is possible that two vertices P1 and P2 of a polygon to be reflected lie on opposite sides of SR, or that both vertices P1 and P2 lie on the same side of SR but are obscured from the viewer by SR. In both cases the reflected image W' of the polygon will lie wholly or partially on the wrong side of SR, the side of SR on which there should be no reflection seen. This case is dealt with during rendering such as, for example, in step 130 described above, by applying an appropriate clipping plane, such as the plane of SR as a front clipping plane. Surfaces lying on the other side of the surface of O, not seen by the viewer, may be eliminated or ignored (step 555).
For each vertex in the scene, the following steps are performed to compute the refracted image:
A refraction of a point P, denoted as P', according to a refracting surface SR, relative to a viewer at position V, obeys Snell's law of refraction, described, for example, in
Foley and van Dam, referred to above. That is, if we denote N1 as the refractive index of the medium in which the viewer is located, and N2 as the refractive index of the refractive object, there is a refraction point H on SR such that sin(alpha)/sin(beta) = N2 / N1, where alpha is the angle between the straight line defined by V and H and the normal N to SR, and beta is the angle between the straight line defined by P and H and the normal N to SR (step
557). An example of a scene containing SR, V, N, H, P, and P' is described in more detail below with reference to Fig. 17T.
The refracted image for point P' lies on the line from V to H, on the opposite side of SR (relative to V), and the length of the segment from H to P' is equal to the length of the segment from H to P. The computation of P' can thus be carried out using methods well known in the art, such as equation solving, distance computation, and point and vector addition.
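Purely as a non-limiting illustration, the construction of P' may be sketched as follows, assuming the canonical coordinate system also used below (the refracting plane SR taken as the XY plane with its normal along the positive Z axis, and V and P lying in the XZ plane); bisection is used here merely as one possible form of the equation solving mentioned above, the helper name is hypothetical, and total internal reflection is not handled.
import numpy as np

def virtual_refraction_point(V, P, n1, n2):
    # V: viewer position (z > 0); P: point to be refracted (z < 0); both numpy arrays
    # lying in the XZ plane.  n1, n2 are the refractive indices of the viewer's medium
    # and of the refracting object, respectively.
    def snell_error(h):
        to_h = np.array([h, 0.0, 0.0]) - V
        from_h = P - np.array([h, 0.0, 0.0])
        sin_alpha = abs(to_h[0]) / np.linalg.norm(to_h)     # angle of the V-H ray to the normal
        sin_beta = abs(from_h[0]) / np.linalg.norm(from_h)  # angle of the H-P ray to the normal
        return n1 * sin_alpha - n2 * sin_beta

    lo, hi = min(V[0], P[0]), max(V[0], P[0])
    for _ in range(60):                                     # bisection on the refraction point H
        mid = 0.5 * (lo + hi)
        if snell_error(lo) * snell_error(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    H = np.array([0.5 * (lo + hi), 0.0, 0.0])

    # P' lies on the line from V through H, beyond SR, at the same distance from H as P.
    direction = (H - V) / np.linalg.norm(H - V)
    return H + np.linalg.norm(P - H) * direction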
When the viewer is relatively far from the surface SR, such that the angle between the normal N to SR and the line from V to any point on SR is almost constant, the position of P' can be derived directly from P and V without the need to calculate the position of the refracting point H on the surface SR.
Preferably, since the transformation necessary to compute the image P' of a point P is the same for all points P in the world W, the transformation can be represented by an affine transformation as follows. In a coordinate system in which the refracting plane SR lies in the XY plane where the normal to the surface is the direction of the positive Z axis, and V and P lie in the XZ plane where the viewing rays from V to H and H to P lie in the XZ plane as well, the position of P' is given by
P' = P R, where the matrix R is defined as follows:
1                                  0    0                     0
0                                  1    0                     0
(sin alpha - sin beta)/cos beta    0    cos alpha/cos beta    0
0                                  0    0                     1
and P is written as a row vector as follows: x y z 1
Let N1 be a 4x4 rotation matrix representing a coordinate transformation to such a coordinate system, and let N1^-1 be the inverse rotation. The final refraction matrix is given by:
M = N1 R N1^-1
and the position of the refracted point P' is given by P' = P M (step 559). It is appreciated that the matrix M does not depend on the position of the point
P to be refracted, yet does depend on the position of the viewer.
It is appreciated that in a case where sin(alpha)(N1/N2) > 1, the refraction is defined as the total internal reflection; that is, the same transformation as reflection from surface SR. It is appreciated that it is possible that two vertices P1 and P2 of a polygon to be refracted lie on opposite sides of SR, or that both vertices P1 and P2 lie on the same side of
SR but are between SR and the viewer. In both cases the refracted image W' of the polygon will lie wholly or partially on the wrong side of SR, the side of SR on which there should be no refraction seen. This case is dealt with during rendering such as, for example, in step 130 described above, by applying an appropriate clipping plane, such as the plane of SR as a front clipping plane. Surfaces lying on the other side of the surface of O, not seen by the viewer, may be eliminated or ignored (step 561).
Reference is now made to Fig. 12, which is a simplified flowchart illustration of a preferred implementation of step 490 of Fig. 10. It is appreciated that there is at most a single reflection P' for any point P on a convex curve, and there is at most a single refraction
P' for any point P on a concave curve. The method of Fig. 12 preferably comprises the following steps: The problem of computing reflection and/or refraction for an orthogonal linear
3-D extrusion of a convex or concave planar curve is reduced to a 2-D problem as follows
(step 565). A direction parallel to the extrusion direction of the 3-D extrusion is chosen. A coordinate transformation is performed to make the chosen direction the "z" direction, and all further computations of Fig. 12 are performed in the x-y plane, until step 620, described below.
In order to perform the remainder of the method of Fig. 12, the curve SR must preferably be represented in a form which supports generating a point at any location on SR and computing the normal, or an approximation thereof, at any point on SR. In the case where
SR is a parametric curve, point generation and normal computation are easily accomplished using the parametric equation. In the case where SR is an implicit curve, any appropriate method for point generation may be used, such as the "marching cubes" method which is well known in the art, and which is described by Foley et al, referred to above, at pages 1035 - 1036. The normal to an implicit curve may be computed from the implicit equation by differentiation. If the curve SR is not already represented in a form supporting the operations described above, the curve SR is preferably represented in such a form (step 570).
A plurality of sample points is generated along the curve SR (step 580). The number of sample points generated may depend on the desired degree of accuracy in the computation of the point P', which is an approximation to the true reflection and/or refraction of the point P. If a more precise approximation is desired, more points are computed. A typical method for producing the sample points is to sample the parameter space of the curve SR using uniform intervals whose size is determined by the desired number of sample points. Alternatively, it is possible to adaptively modify the parameter value associated with a given point if the given point is too close to or too far from a previous point, in accordance with some predetermined criterion. For each sample point, the geometry of the sample point, that is, its location, and the normal to the curve SR at the sample point are computed (step 590). The sample points are referred to hereinbelow as T1,...,Tm, and the associated normals as N1,...,Nm. For simplicity, the discussion below assumes that the sample points T1,...,Tm were generated in a counter-clockwise order, but it is appreciated that, more generally, the sample points T1,...,Tm may be generated in any order.
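By way of example only, uniform sampling of the parameter space of SR, as described above, might be sketched as follows; the evaluator names are hypothetical.
import numpy as np

def sample_curve(curve_point, curve_normal, m):
    # curve_point(t) and curve_normal(t) are hypothetical evaluators for the curve SR,
    # defined over the parameter interval [0, 1].  The parameter space is sampled at
    # uniform intervals, yielding the sample points T1..Tm and the normals N1..Nm.
    ts = np.linspace(0.0, 1.0, m)
    T = [np.asarray(curve_point(t), dtype=float) for t in ts]
    N = [np.asarray(curve_normal(t), dtype=float) for t in ts]
    return T, N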
A reflected and/or refracted ray Ri is generated for each sample point Ti and the associated normal Ni (step 600), dependent on the viewer location V. For reflected rays, if the angle between the vector (V-Ti) and Ni is less than or equal to 90 degrees, the direction Di of
Ri is such that the angle between Ni and Di is equal to the angle between Ni and (V-Ti). Rays of this type are referred to herein as "front-facing rays." Otherwise, Di is identical to (Ti-V).
For refracted rays, the direction Di of Ri is such that the angle between the vector (V-Ti) and Di obeys Snell's law. In a case where sin(alpha) (N1/N2) > 1, the refraction is defined as the total internal reflection; that is, the same transformation as reflection from surface SR.
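The following Python sketch illustrates step 600 under the conventions above: a reflected direction for front-facing rays, and a refracted direction using the simplified construction given below with reference to step 725, falling back to the reflection transformation when sin(alpha) (N1/N2) > 1. The small vector helpers are assumptions of the sketch.

import math

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def length(a): return math.sqrt(dot(a, a))

def reflected_direction(V, Ti, Ni):
    # Direction Di of the reflected ray Ri at sample point Ti with unit normal Ni.
    W = sub(V, Ti)
    s = dot(W, Ni)
    if s >= 0.0:                               # angle(Ni, V-Ti) <= 90 degrees: front-facing ray
        return sub(scale(Ni, 2.0 * s), W)      # angle(Ni, Di) equals angle(Ni, V-Ti)
    return sub(Ti, V)                          # otherwise Di is identical to (Ti - V)

def refracted_direction(V, Ti, Ni, n1, n2):
    # Simplified refraction direction; total internal reflection is handled, as in the text,
    # by reusing the reflection transformation.
    W = sub(V, Ti)
    cos_alpha = dot(W, Ni) / length(W)
    sin_alpha = math.sqrt(max(0.0, 1.0 - cos_alpha * cos_alpha))
    if sin_alpha * (n1 / n2) > 1.0:
        return reflected_direction(V, Ti, Ni)
    n = scale(Ni, dot(W, Ni))
    return add(scale(n, -1.0), scale(sub(W, n), n1 / n2))   # Di = -n + (W - n) (N1/N2)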
A plurality of cells C(i,k) are determined by the rays and by line segments connecting adjacent sample points as follows (step 610). Let k be a number greater than i. The cell C(i,k) is defined as the area of the plane in which the curve SR lies bounded by the infinite rays Ri and Rk and by the line segment connecting Ti and Tk. It is appreciated that the cells C(i,i+1) for i=1,...,m-1 are disjoint, and that a point P therefore lies in at most a single cell C(i,i+1).
In order to determine in which cell a point P lies, first note that some cells may be non-convex, in which case such cells are preferably split into two convex subcells for easier testing, typically by extending the defining rays of such a cell into lines and intersecting the lines. Furthermore, note that the cell C(i,k) contains all cells C(j,l), where i <= j < l <= k. Therefore, the cell in which a given point lies can be found efficiently using a binary search: for example, if a point is found to lie inside C(i,k), a check is made to see whether the point lies inside C(i,k/2). If so, a similar search continues recursively from the cell C(i,k/2). If not, a similar search continues recursively from C(k/2,k). A point which does not lie in any cell must be obscured from the viewer by SR (step 620).
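A minimal sketch of the binary search is given below; it assumes a point-in-cell predicate in_cell(P, i, k) built, for example, from side tests against the bounding rays and the chord, and it halves the index range at its midpoint, which is one way of realizing the recursive halving described above.

def find_cell(P, m, in_cell):
    # in_cell(P, i, k) tests whether P lies in the cell bounded by rays Ri and Rk
    # and by the line segment connecting Ti and Tk (an assumed helper).
    if not in_cell(P, 1, m):
        return None            # P lies in no cell: obscured from the viewer by SR (step 620)
    lo, hi = 1, m
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if in_cell(P, lo, mid):
            hi = mid           # continue the search in C(lo, mid)
        else:
            lo = mid           # otherwise continue in C(mid, hi)
    return lo, hi              # P lies in the adjacent-ray cell C(lo, lo+1)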
We now know which cell C(i,i+1) contains the point to be reflected and/or refracted P. The point Q, as defined above, lies between Ti and T(i+1) on SR. Reflected and/or refracted points Pi' and P(i+1)' are computed in 3-D, where Pn' is the reflection and/or refraction of P through the plane passing through Tn, the plane's normal being Nn. Computation of the reflection and/or refraction is preferably performed using the matrix M described above in reference to Figs. 11A and 11B. P' is now computed as a weighted average of Pi' and P(i+1)' (step 630). A preferred formula for computing P' is as follows: P' = (u/(u+v)) Pi' + (v/(u+v)) P(i+1)' The weights u and v in the above equation may be determined by any appropriate method such as, for example, one of the following (see the sketch after the list):
1) u is the distance of P from Ri and v is the distance of P from R(i+1).
2) Intersection points Ii and I(i+1) of a line parallel to Ti,T(i+1) and passing through P are computed, the intersection points being the points of intersection with the rays Ri and R(i+1). The weight u is then the distance of P from Ii, while the weight v is the distance of
P from I(i+1).
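The following Python sketch applies the weighted-average formula above using option 1 (distances of P from the rays Ri and R(i+1)); the 2-D representation of rays as (origin, direction) pairs is an assumption of the sketch.

import math

def point_ray_distance(P, origin, direction):
    # Distance from 2-D point P to the line supporting the ray (origin, direction).
    px, py = P[0] - origin[0], P[1] - origin[1]
    dx, dy = direction
    return abs(px * dy - py * dx) / math.hypot(dx, dy)

def weighted_virtual_point(P, ray_i, ray_i1, Pi_prime, Pi1_prime):
    # Option 1: u is the distance of P from Ri, v is the distance of P from R(i+1);
    # P' = (u/(u+v)) Pi' + (v/(u+v)) P(i+1)', following the formula above verbatim.
    u = point_ray_distance(P, *ray_i)
    v = point_ray_distance(P, *ray_i1)
    w = u + v
    return tuple((u / w) * a + (v / w) * b for a, b in zip(Pi_prime, Pi1_prime))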
An example of a scene after performing steps 565 through 630 is described in more detail below with reference to Figs. 17J and 17V. Reference is now made to Fig. 13, which is a simplified flowchart illustration of a preferred implementation of step 500 of Fig. 10. Reference is additionally made to Fig. 17U which is an example of a scene containing Ci, O, OV, OP, Q', V, B, R, P, P', dv, and dp as described hereinbelow. Fig. 17U is described in more detail below. The method of Fig. 13 preferably includes the following steps: The problem of computing reflection for a cylindrical surface is reduced to a 2-D problem as described above with reference to step 565 of Fig. 12. The result is a circle Ci which is the generating circle of the cylinder.
A first ray OV from the center O of the circle Ci to the viewer V and a second ray OP from the center O of the circle Ci to the point P are generated, and a bisector ray B is generated from the center O of the circle Ci such that the angle between OV and B is equal to the angle between OP and B (step 650).
The distances dv and dp of V and P respectively from the surface of the generating circle Ci are computed (step 660). Suppose that V is nearer to the generating circle
Ci than P is. A ray R is generated from the center O of the circle Ci between OV and B such that the ratio of angle (R,OV) to angle (R,B) equals dv/dp (step 670). It is appreciated that, if
V is not nearer to the generating circle Ci than P is, then R will lie between B and OP, and all computations will be similar to those described.
The intersection Q' of R and the generating circle Ci is found (step 680). Q' is taken to be an approximation to Q, described above in reference to Figs. 11A and 11B. P is then reflected through Q', using the normal to the generating circle Ci at Q' to generate P'
(step 690). It is appreciated that, optionally, repeated iterations of the method of Fig. 13 may lead to increasingly exact approximations P' (step 700), by repeated bisection of the angle using current and previous values of P'.
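A minimal Python sketch of steps 650 through 690 in the 2-D plane of the generating circle is given below; it assumes, as in the text, that V is nearer to the circle than P, and the tuple representation of points is an assumption of the sketch.

import math

def approximate_reflection_on_circle(O, radius, V, P):
    # O is the circle centre, V the viewer, P the point to be reflected (steps 650 - 690).
    ang_v = math.atan2(V[1] - O[1], V[0] - O[0])   # direction of ray OV
    ang_p = math.atan2(P[1] - O[1], P[0] - O[0])   # direction of ray OP
    # Bisector B of the angle between OV and OP (step 650).
    delta = (ang_p - ang_v + math.pi) % (2 * math.pi) - math.pi
    ang_b = ang_v + 0.5 * delta
    # Distances of V and P from the surface of the generating circle (step 660).
    dv = math.hypot(V[0] - O[0], V[1] - O[1]) - radius
    dp = math.hypot(P[0] - O[0], P[1] - O[1]) - radius
    # Ray R between OV and B such that angle(R,OV)/angle(R,B) = dv/dp (step 670).
    t = dv / (dv + dp)
    ang_r = ang_v + t * (ang_b - ang_v)
    # Intersection Q' of R with the generating circle (step 680).
    Qp = (O[0] + radius * math.cos(ang_r), O[1] + radius * math.sin(ang_r))
    normal = (math.cos(ang_r), math.sin(ang_r))
    # Reflect P through Q', using the normal at Q', to generate P' (step 690).
    d = (P[0] - Qp[0]) * normal[0] + (P[1] - Qp[1]) * normal[1]
    P_prime = (P[0] - 2 * d * normal[0], P[1] - 2 * d * normal[1])
    return Qp, P_prime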
Reference is now made to Fig. 14, which is a simplified flowchart illustration of a preferred implementation of step 510 of Fig. 10. In this case, the reflecting surface SR is a 3-
D spherical surface. First, the problem is reduced to a 2-D problem (step 705) as follows. The center O of the sphere, the viewer V, the point P, and the point Q, described above in reference to Figs. 11A and 11B, will all lie on the same 2-D plane because of the geometry of a sphere.
In the 2-D plane, the method of Fig. 13 is performed beginning with step 650 (step 710). Reference is now made to Fig. 15, which is a simplified flowchart illustration of a preferred implementation of steps 520 and 530 of Fig. 10. In this case, SR is a general convex polygonal 3-D mesh or a general concave polygonal 3-D mesh. The convex case for calculating reflection, which is generally the same as the concave case for calculating refraction, will be described first. For simplicity, all polygonal faces in the mesh are assumed to be triangular. In general, any polygonal mesh may be reduced to a polygonal mesh with all faces triangular using methods well known in the art, such as that described in R. P. Ronfard and J. R. Rossignac, referred to above (step 720).
At each triangular vertex a reflected and/or refracted ray is computed (step 725). The reflected and/or refracted ray Ri, which has a direction vector Di, is computed for each mesh vertex Ti and associated normal Ni. The direction vector Di may, for example, be computed for reflection by defining a vector W = V - Ti, a number s to be the dot product of W and Ni, and a vector n = 2 s Ni. Then Di = n - W. The direction vector Di may, for example, be computed for refraction by using Snell's law by defining a vector W = V - Ti, a number s to be the dot product of W and Ni, a vector n = s Ni, N1 as the refractive index of the medium in which the viewer V is located, and N2 as the refractive index of the refractive object. Then: Di = -n + (W - n) (N1/N2). The three rays Ri, Rj, Rk defined by the three vertices Ti, Tj, Tk of a mesh triangle T(i,j,k) define a space cell C(i,j,k), with each pair of rays defining a bi-linear surface parameterized by the length of each ray, the surface starting at the vertices of the rays (steps 730 and 740). A preferred method for defining a bilinear surface B defined by two rays R1 and
R2 is as follows. Each ray Rn is defined by a point Tn and a vector Dn. The parametric equations for the rays are:
R1(t) = T1 + t D1
R2(t) = T2 + t D2
The parametric equation for the bilinear surface B is:
B(s,t) = (1 - s) R1 + s R2
The surface B is called bilinear because it is linear in both parameters s and t.
Let Ln_q, for L = T or D, for n = 1 or 2 and for q = x or y or z, be the q component of Ln. For example, T1_y is the y component of T1. In order to find the implicit equation for (x,y,z) of the surface B, the following numbers are defined:
A1 = (T2_y - T1_y) (D2_x - D1_x) - (T2_x - T1_x) (D2_y - D1_y)
B1 = (D2_y - D1_y) (X - T1_x) + (D2_x - D1_x) (Y - T1_y) + (T2_y - T1_y) D1_x - (T2_x - T1_x) D1_y
C1 = D1_y (X - T1_x) + D1_x (Y - T1_y)
A2 = (T2_z - T1_z) (D2_x - D1_x) - (T2_x - T1_x) (D2_z - D1_z)
B2 = (D2_z - D1_z) (X - T1_x) + (D2_x - D1_x) (Z - T1_z) + (T2_z - T1_z) D1_x - (T2_x - T1_x) D1_z
C2 = D1_z (X - T1_x) + D1_x (Z - T1_z)
The implicit equation of X,Y,Z for the surface B is then given by the determinant of the following matrix:
A1 B1 C1 0
0  A1 B1 C1
A2 B2 C2 0
0  A2 B2 C2
A determination is then made as to which cell P lies in as follows (step 750).
The coordinates of P are used as the x, y, and z variables in the implicit equation of the surface, giving the result n. The result n is equal to 0 if P is on the bi-linear surface, is positive if P lies on a first side of the bi-linear surface, and is negative if P lies on a second side of the bi-linear surface. As is well-known in the art, each mesh triangle may be oriented such that the result n is positive for all three bi-linear surfaces of a triangle if P lies in the cell defined by that triangle.
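The following Python sketch evaluates the implicit equation above at a point P and returns the sign of the resulting determinant; whether a positive sign means that P lies inside the cell depends on the orientation of the mesh triangle, as discussed below. The function and argument names are assumptions of the sketch.

def side_of_bilinear_surface(P, T1, D1, T2, D2):
    # Form the coefficients A1..C2 defined above using the coordinates of P as X, Y, Z,
    # then return the sign of the 4x4 determinant: 0 on the surface, otherwise which side.
    X, Y, Z = P
    A1 = (T2[1] - T1[1]) * (D2[0] - D1[0]) - (T2[0] - T1[0]) * (D2[1] - D1[1])
    B1 = ((D2[1] - D1[1]) * (X - T1[0]) + (D2[0] - D1[0]) * (Y - T1[1])
          + (T2[1] - T1[1]) * D1[0] - (T2[0] - T1[0]) * D1[1])
    C1 = D1[1] * (X - T1[0]) + D1[0] * (Y - T1[1])
    A2 = (T2[2] - T1[2]) * (D2[0] - D1[0]) - (T2[0] - T1[0]) * (D2[2] - D1[2])
    B2 = ((D2[2] - D1[2]) * (X - T1[0]) + (D2[0] - D1[0]) * (Z - T1[2])
          + (T2[2] - T1[2]) * D1[0] - (T2[0] - T1[0]) * D1[2])
    C2 = D1[2] * (X - T1[0]) + D1[0] * (Z - T1[2])
    m = [[A1, B1, C1, 0.0],
         [0.0, A1, B1, C1],
         [A2, B2, C2, 0.0],
         [0.0, A2, B2, C2]]
    def det(mat):
        # Cofactor expansion along the first row; adequate for a fixed 4x4 matrix.
        if len(mat) == 1:
            return mat[0][0]
        total = 0.0
        for col, val in enumerate(mat[0]):
            minor = [row[:col] + row[col + 1:] for row in mat[1:]]
            total += ((-1) ** col) * val * det(minor)
        return total
    n = det(m)
    return (n > 0) - (n < 0)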
One method of orientation of the triangles is to orient all triangles in a counter clockwise order as seen by the viewer. By performing the indicated test on each cell, the cell C in which the point P lies can be determined. Each of the vertices of C is used to reflect and/or refract P, yielding images Pi',
Pj', and Pk' (step 760). A weighted average of Pi', Pj', and Pk' is then computed to determine the reflected and/or refracted image P' (step 770).
It is appreciated that, in general, the P' which is thus computed is an approximation of the "true" P'. The accuracy of the approximation depends, in general, on the density of the mesh vertices and on the method used to compute the weighted average in step
770. Either of the following methods is believed to be useful for computing the weighted average:
1) The relative distance of P from each ray Ri is the weight of the corresponding point Pi'. 2) A plane is generated passing through P and parallel to the triangle of the cell.
The intersection points of the plane with the rays Ri are computed. The weight of each point Pi' is then taken as the distance of the point P from the corresponding intersection point, divided by the sum of the three distances.
In the case where SR is a general concave 3-D polygonal mesh reflecting surface or a general convex 3-D polygonal mesh refracting surface, a single point P can generate several reflected and/or refracted images P'. Therefore, it is possible that multiple images P' need to be computed in steps 760 and 770.
Reference is now made to Fig. 16, which is a simplified flowchart illustration of a preferred implementation of step 540 of Fig. 10. In the case of a general concave 3-D polygonal mesh, the mesh is preferably partitioned into concave and convex pieces (step 780), and then the method of Fig. 15 is performed separately for each piece (step 790).
Generally, when treating objects lying partially in front and in back of SR, when SR is not a planar surface, one of two methods is preferably used:
1) The object whose reflected and/or refracted image is desired is clipped against SR, i.e. it is intersected with SR and only the part lying in front of SR is retained for reflection and the part lying behind SR is retained for refraction. 2) During the rendering of the reflected world, in addition to the z-buffer, a depth map of SR, that is, a z-buffer containing only SR, is used. For each pixel of an object in
W being rendered, in addition to comparing the object's depth against the ordinary z-buffer, it is also compared against the depth of that pixel in the depth map of SR. If the new pixel is closer to the viewer than the corresponding pixel in the depth map, the new pixel is preferably not rendered, nor is it used to update the z-buffer. This method is effectively a generalization of the concept of a front clipping plane; the role of the front clipping plane is played, in the case of this method, by a front clipping buffer. A sketch of this per-pixel test appears below.
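The following Python sketch is a per-pixel view of method 2); the representation of the z-buffer, the depth map of SR and the frame buffer as simple 2-D arrays, and the convention that smaller depth values are closer to the viewer, are assumptions of the sketch.

def render_pixel(x, y, depth, color, z_buffer, sr_depth_map, frame_buffer):
    # A pixel of the reflected world is drawn only if it passes the ordinary z-buffer
    # test AND is not closer to the viewer than SR; the depth map of SR thus acts
    # as a front clipping buffer (smaller depth = closer to the viewer assumed).
    if depth < sr_depth_map[y][x]:
        return                      # in front of SR: clipped, z-buffer left untouched
    if depth >= z_buffer[y][x]:
        return                      # hidden by something already rendered
    z_buffer[y][x] = depth
    frame_buffer[y][x] = color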
Generally, it is preferable to use any available knowledge about the original scene in order to accelerate the computations of the various methods of the present invention. If, for example, the only moving objects in the scene are known to always lie in front of every reflecting and/or refracting surface SR, it is appreciated that it is possible to pre-process the scene by clipping, once and for all, the objects lying partially behind SR, as in method 1) above. In addition, bounding box or sphere hierarchies around objects can be used, as is standard in many graphics methods.
Generally, a preferable method of treating point light sources in the scene when generating W is as follows. Use the point light sources to compute diffuse color values at the vertices of all scene polygonal meshes at the beginning of the method of Fig. 2. These color values are then used when performing Gouraud or Phong shading during rendering in step 130 of Fig. 2. This method may be preferable to reflecting and/or refracting of each point light source in the same manner as any other point P, since in this case the normal at the point P, if there is one, should also be reflected and/or refracted. The normal may be reflected by generating any two points on the normal, reflecting and/or refracting them as points, subtracting the reflected and/or refracted points to obtain a vector, and then normalizing the vector.
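The following Python sketch illustrates the normal-reflection procedure just described; transform_point stands for whatever point reflection and/or refraction mapping is in use and is an assumption of the sketch.

import math

def transform_normal(point, normal, transform_point):
    # Generate two points on the normal, reflect and/or refract them as points,
    # subtract to obtain a vector, and normalize, as described above.
    p0 = point
    p1 = tuple(p + n for p, n in zip(point, normal))   # a second point along the normal
    q0 = transform_point(p0)
    q1 = transform_point(p1)
    v = tuple(b - a for a, b in zip(q0, q1))
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)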
Reference is now made to Fig. 17A, which is a simplified bird's-eye view illustration of a logical representation of an example of a scene after performing steps 100 through 140. Illustrated are a reflecting surface 800, rectangular objects 820 and 830, a reflective surface 810 of rectangular object 830, a light source 840, and a viewer's position 850.
Reference is now made to Fig. 17B, which is a simplified pictorial illustration of a representation of an example of the scene referred to in Fig. 17A after performing steps 100 through 140, and as contained in image buffer 40. Illustrated are a reflecting surface 800, rectangular objects 820 and 830, and the representation 860.
Reference is now made to Fig. 17C, which is a simplified illustration of a representation of an example of stencil S', designated by reference number 870, after performing step 360 of Fig. 6.
Reference is now made to Fig. 17D, which is a simplified bird's-eye view illustration of a logical representation of an example of a scene after recursively performing steps 120 through 140. Illustrated are a reflecting surface 800, rectangular objects 820 and
830, a reflective surface 810 of rectangular object 830, reflected rectangular objects 890 and 900, a reflective surface 910 of reflected rectangular object 900, a light source 840, a reflected light source 880, and a viewer's position 850.
Reference is now made to Fig. 17E, which is a simplified pictorial illustration of a representation of an example of the scene referred to in Fig. 17D after recursively performing steps 120 through 140, and as contained in image buffer 40. Illustrated are a reflecting surface 800, reflected rectangular objects 890 and 900, a reflected reflective surface 910 of reflected rectangular object 900, and the representation 860.
Reference is now made to Fig. 17F, which is a simplified illustration of a representation of an example of stencil S", designated by reference number 870, after performing step 360 of Fig. 6. Reference is now made to Fig. 17G, which is a simplified bird's-eye view illustration of a logical representation of an example of a scene after recursively performing steps 120 through 140. Illustrated are a reflecting surface 800, rectangular objects 820 and
830, a reflective surface 810 of rectangular object 830, reflected rectangular objects 890, 900,
920, and 930, a reflective surface 910 of reflected rectangular object 900, a light source 840, a reflected light source 880, a reflected reflecting surface 940, a reflected light source 950 of reflected light source 880, and a viewer's position 850.
Reference is now made to Fig. 17H, which is a simplified pictorial illustration of a representation of an example of the scene referred to in Fig. 17G after recursively performing steps 120 through 140, and as contained in image buffer 40. Illustrated are a reflected reflective surface 910, a reflection 891, and the representation 860.
Reference is now made to Fig. 17I, which is a simplified pictorial representation of a representation of an example of a scene after performing all recursions of steps 120 through 240, and as contained in accumulation buffer 45. Illustrated are reflecting surface 800, rectangular objects 820 and 830, reflected rectangular objects 890 and 900, a reflected reflective surface 910 of reflected rectangular object 900, a reflection 891 of reflected rectangular object 890, and the representation 861. Reference is now made to Fig. 17J, which is a simplified bird's-eye view illustration of an example of steps 565 through 630 for calculating refraction and/or reflection of a point P on an orthogonal linear 3-D extrusion of a convex planar curve, consisting of a viewer's position 850, a curved surface SR, designated by reference number 960, point P to be reflected and/or refracted, designated by reference number 970, sample points 975 and 976, normals 977 and 978 to sample points 975 and 976 respectively, reflecting and/or refracting rays 980, reference rays 985, refracted points 990, and calculated refraction point 991.
Reference is now made to Fig. 17K, which is a simplified bird's-eye illustration of an example of a scene prior to performing step 415 of Fig. 8. Illustrated are rectangular objects 1000 and 1010, and light direction vector 1020. Reference is now made to Fig. 17L, which is a simplified pictorial illustration of an example of a scene prior to performing step 415 of Fig. 8. Illustrated are rectangular objects 1000 and 1010, surface 1015 of rectangular object 1010 that receives shadows, and light direction vector 1020.
Reference is now made to Fig. 17M, which is a simplified illustration of an example of the projection of objects onto a planar polygon as described in step 415 in order to create shadows of the projected objects onto the planar polygon. Illustrated in a two- dimensional representation are rectangular objects 1030, planar polygonal surface 1040, projected shadows 1050, projection vectors 1060, and light source 1070.
Reference is now made to Fig. 17N, which is a simplified pictorial illustration of an example of a scene after performing step 415 of Fig. 8. Illustrated are rectangular objects 1000 and 1010, surface 1015 of rectangular object 1010 that receives shadows, light direction vector 1020, and shadow polygon 1030.
Reference is now made to Fig. 17O, which is a simplified illustration of an example of a stencil 1040 created after performing step 420 of Fig. 8. Reference is now made to Fig. 17P, which is a simplified illustration of an example of surface 1015 of rectangular object 1010 with projected shadow 1016 after performing step 425 of Fig. 8. Reference is now made to Fig. 17Q, which is a simplified pictorial illustration of an example of a scene after performing step 435 of Fig. 8. Illustrated are rectangular objects
1000 and 1010, surface 1015 of rectangular object 1010 having received shadow 1016, and light direction vector 1020. Reference is now made to Fig. 17R, which is a simplified pictorial illustration of an example of a scene after performing step 435 of Fig. 8 for all polygons in the scene on which a shadow or shadows are projected. Illustrated are rectangular objects 1000 and 1010, surface 1015 of rectangular object 1010 having received shadows 1016 and 1017, and light direction vector 1020. Reference is now made to Fig. 17S, which is a simplified illustration of an example of a scene as described with reference to Fig. 11 A, containing a reflecting and/or refracting surface SR, a viewer's position V, a normal vector N, a point Q on reflecting and/or refracting surface SR, a point to be reflected P, and a reflected image P'.
Reference is now made to Fig. 17T, which is a simplified illustration of an example of a scene as described with reference to Fig. 11B, containing a reflecting and/or refracting surface SR, a viewer's position V, a normal vector N, a point H on reflecting and/or refracting surface SR, a point to be refracted P, and a refracted image P'.
Reference is now made to Fig. 17U, which is a simplified illustration of an example of a scene as described with reference to Fig. 13, containing a generating circle Ci of a cylinder, a center O of the circle Ci, a viewer's position V, rays OV, OP, B, and R, an intersection point Q', a point to be reflected and/or refracted P, and distances dv and dp.
Reference is now made to Fig. 17V, which is a simplified bird's-eye view illustration of an example of steps 565 through 630 for calculating refraction and/or reflection of a point P on an orthogonal linear 3-D extrusion of a convex planar curve, consisting of a viewer's position V, a curved surface SR, point P to be reflected and/or refracted, sample points Ti and Ti+1, normals Ni and Ni+1 to sample points Ti and Ti+1 respectively, reflecting and/or refracting rays Ri and Ri+1, reference rays V-Ti and V-Ti+1, refracted points P'i and P'i+1, cell C(i,i+1), and calculated refraction point P'.
Fig. 18 is a pictorial illustration of a scene including four spheres 1510, 1520, 1530 and 1540 each having a shadow 1550, 1560, 1570 and 1580 respectively. The shadows are rendered on the floor plane 1590 of the scene. Fig. 19 is a pictorial illustration of sphere 1510 of Fig. 18, a center point 1610 of the sphere 1510, a point of view 1620 from which the sphere 1510 is being observed, and a four-faced pyramid 1630 surrounding the sphere 1510 and centered at the center point 1610. The pyramid 1630 is defined by four vertices 1632, 1634, 1636 and 1638. The pyramid 1630 includes a front face 1640 which is orthogonal to the line segment 1650 which connects the point of view 1620 and the center point 1610.
Figs. 20A - 20D are pictorial illustrations of the scene of Fig. 18 rendered onto each of the four faces, respectively, of the pyramid 1630 of Fig. 18. The notation employed in Figs. 20A - 20D is that a non-primed reference numeral denotes the scene element which is rendered onto the relevant face whereas a primed reference numeral denotes a reflection of the scene element identified by that reference numeral, which reflection is rendered onto the relevant face.
Fig. 21 is a pictorial illustration of the pyramid 1630 of Fig. 19 unfolded onto a planar surface, where the scene of Fig. 18 is rendered onto each of the faces of the pyramid. Fig. 22 is a pictorial illustration of a tessellation of the front face 1640 of the pyramid 1630.
Fig. 23 is a pictorial illustration of the tessellation of Fig. 22 mapped into a 2D circle 1660.
Fig. 24 is a circular environment map 1670 including a mapping, as illustrated in Figs. 22 - 23, of each of the faces of the pyramid 1630 onto the 2D circle 1660 of Fig. 23.
Fig. 25 is a rendering of the entire scene of Fig. 18 including a reflection of the entire scene in sphere 1510. This reflection is generated by texture-mapping the environment map 1670 of Fig. 24 onto the sphere 1510.
Fig. 26 is a simplified block diagram of a system for generating a 2D image, including reflections, of a 3D scene from a point of view, using 3D information regarding the 3D scene, which is constructed and operative in accordance with a preferred embodiment of the present invention. The apparatus of Fig. 26 is generally similar to the apparatus of Fig. 1A. 3D information regarding an artificial scene may be generated by a conventional workstation 1770 such as an IBM PC. 3D information regarding a natural scene may be generated using a state of the art measuring device such as a CyberWare measuring device, commercially available from Cyber-Ware Inc., 2110 De Monte Ave, Monterey, CA, 93940, USA, preferably in conjunction with a suitable camera 1790. Fig. 27 is a simplified flowchart illustration of a preferred method of operation for reflections, refractions and shadows unit 1720 of Fig. 26 in conjunction with units 1715 and
1725 of Fig. 26. The method of Fig. 27 preferably includes the following steps for each object in the scene from which a 2D image, seen from a selected viewpoint V, is generated. Preferably, the steps of Fig. 27 are performed only for those objects in the scene which are visible from the selected viewpoint V and on which it is desired to see reflections.
These objects are termed herein "reflecting objects". a. Step 1800 — A four-faced pyramid whose center is located at a selected
(typically user-selected and/or central) location C within the current object is used to compute an environment map for that object. For example, the selected location C of a point-symmetric object is preferably the point about which the object is symmetric. The selected location C for an object having an axis of symmetry is preferably the center of the axis of symmetry. The pyramid has four faces including a front face which is orthogonal to the line connecting the viewpoint and the selected object point C. For example, Fig. 19 illustrates a four-faced pyramid 1630 whose center 1610 is located at a selected location C within the current object which may be used to compute an environment map for object 1600 in Fig. 19. The pyramid 1630 has four faces defined by vertices 1632, 1634, 1636, 1638 including a front face 1640 which is orthogonal to the line 1650 connecting the viewpoint 1620 and the selected object point 1610 (also termed point C). The environment map is computed as follows: a. Each of the triangular faces of the pyramid is circumscribed within a bounding rectangle. An example of this is illustrated in Figs. 20A - 20D. b. The scene is rendered onto each of the bounding rectangles. c. Each of the triangular faces, with the scene rendered thereupon, is mapped onto a single circle which forms the environment map. This step may be performed using tessellation, as described herein, or may be performed as follows:
For each point P on the pyramid, a vector R(x,y,z) is defined which is the difference between that point and the selected object point C. s and t, as defined by Equations 1 and 2, are the coordinates on the circle corresponding to the point P on the pyramid.
Figs. 20A - 20D are pictorial illustrations of four triangular faces with a single scene rendered thereupon. In step 1810, a reflection vector R(x,y,z) is computed for each vertex Q of the reflecting object, using Equation 3 where V is the viewpoint from which the object is viewed and N is a "normal" to the object at Q. "Normal" is a known term in the art of computer graphics which is not identical to the term "perpendicular". Texture coordinates s,t for the vertex Q are computed by plugging the coordinates of R(x,y,z) into Equations 1 and 2.
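A minimal Python sketch of step 1810 is given below, using Equations 1 through 3 as listed later in the text; normalizing the reflection vector before applying Equations 1 and 2 is an assumption of the sketch.

import math

def reflection_texture_coords(Q, V, N):
    # Reflection vector R per Equation 3, then texture coordinates (s, t) per
    # Equations 1 and 2 (R is normalized first; a ray pointing straight back,
    # i.e. z = -1, is not handled in this sketch).
    d = tuple(q - v for q, v in zip(Q, V))             # (Q - V)
    dn = sum(a * b for a, b in zip(d, N))              # (Q - V) . N
    R = tuple(a - 2.0 * dn * b for a, b in zip(d, N))  # Equation 3
    length = math.sqrt(sum(c * c for c in R))
    x, y, z = (c / length for c in R)
    denom = math.sqrt(2.0 * (z + 1.0))
    s = x / denom                                      # Equation 1
    t = y / denom                                      # Equation 2
    return s, t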
Finally, a conventional rendering process 1820 is performed in which the environment map is mapped as a texture map onto the object. Conventional rendering is described in Foley et al Computer Graphics: principles and practice, published by Addison- Wesley, 1990.
Fig. 28 is a simplified flowchart illustration of a prefened method for generating a 2D image of a 3D scene from a point of view including a location and a direction, using 3D information regarding the 3D scene, the 3D scene including at least one objects and at least one surfaces wherein at least one of the surfaces is reflected in each of the objects, wherein the 3D information regarding the 3D scene including, for each of the at least one objects, a first plurality of polygons which approximate the surface of the object and which define a second plurality of vertices. The method comprises the following steps for each of the at least one objects, a. Generating an environment map for a selected point C of the object using conventional methods as described in the above-referenced Greene publication or alternatively by means of the method described hereinabove with reference to Fig. 27. b. For each vertex Q of the object, computing an ordered pair of texture coordinates within the environment map; and c. Rendering a plurality of polygons of the object using the environment map as a texture map.
The step (b), in which an ordered pair of texture coordinates is computed for each object vertex Q, typically comprises the following steps:
(i) Computing a reflection ray R based on the point of view, the vertex and a normal to the object at the vertex (step 1830 of Fig. 28; Equation 3); (ii) Computing an intersection point U of the ray R with a user-selected sphere centered at the selected point, using Equations 4 and 5 in which C is the selected point, r is the radius of the sphere and t is the positive solution of the quadratic equation termed herein Equation 5.
(iii) Computing a final reflection vector F (step 1840 of Fig. 28) by subtracting Q -
C. (iv) Step 1850 — Computing a pair of texture coordinates as a function of a vector from the selected point C to the intersection point U, as described above with reference to step 1810 of Fig. 27 except that the role of R in step 1810 is replaced, in step 1850, by the final reflection vector F.
Conventional environmental mapping techniques give adequate results only for reflected surfaces which are located relatively far from the reflecting surface of the object. The methods shown and described herein are particularly suited for treatment of a reflected surface whose distance from the reflecting surface of the object is such that conventional techniques provide insufficient quality. In these instances, the user-selected sphere of radius r is selected such that r is approximately equal to the distance of the reflected surface from the selected object point C and then good results are obtained.
Fig. 29 is a simplified flowchart illustration of another preferred method for generating texture coordinates for each vertex Q of each reflecting object in a 3D scene, in order to generate a 2D image of that 3D scene, either according to the method of Fig. 27 or according to conventional methods. Steps 1860 and 1900 of Fig. 29 may be similar to steps 1830 and 1850 respectively. However, a preliminary step 1852 is added in which a depth map is computed and step 1840 of Fig. 29 is replaced by the following steps: a. Step 1870 — The intersection U of the reflected ray R computed in step 1860 with a user-selected sphere centered at the selected point C is computed, using Equations 4 and 5 in which C is the selected point, r is the radius of the sphere and t is the positive solution of the quadratic equation termed herein Equation 5. b. Step 1880 — Compute an intermediate ray I(x,y,z) by subtracting U - C. c. Step 1884 — Compute intermediate texture coordinates (s', t') for the intermediate ray by applying Equations 1 and 2 to I(x,y,z). d. Step 1888 — Access the depth map and retrieve the value w stored at the coordinates (s', t'). e. Step 1890 — Compute a final ray L using Equation 6 which combines the intermediate ray I and the reflected ray R. In Equation 6, w' is as defined in Equation 7. dmax in Equation 7 is an upper bound on the depth of the scene e.g. the depth of a box which bounds the scene.
The texture coordinates for the final ray L(x,y,z) are computed (step 1900) using Equations 1 and 2. Preliminary depth map computing step 1852 is performed as follows:
A four-faced pyramid whose center is located at a selected (typically user-selected and/or central) location C within the current object is used to compute a depth map for that object.
The depth map is computed as follows: a. Each of the triangular faces of the pyramid is circumscribed within a bounding rectangle. b. The scene is rendered, using conventional methods, onto each of the bounding rectangles and a z-buffer generated in the course of the rendering process is stored on each of the triangular faces. c. Each of the triangular faces, with the z-buffer stored thereupon, is mapped onto a single circle which forms the depth map. This step may be performed using tessellation, as described herein, or may be performed as follows:
For each point P on the pyramid, a vector R(x,y,z) is defined which is the difference between that point and the selected object point C. s and t, as defined by Equations 1 and 2, are the coordinates on the circle corresponding to the point P on the pyramid.
The relationship between reflected ray R, intersection point U, intermediate ray
I, the radius r of the user-selected sphere and the user-selected object point C is illustrated in Fig. 30. As shown, r is the length of vector I. The Equations referred to herein are as follows:
1. s = x / [2 (z+1)]^0.5
2. t = y / [2 (z+1)]^0.5
3. R = (Q-V) - 2 ((Q-V) N) N
4. U = Q + t R
5. t^2 R^2 + t (2 R (Q-C)) + (Q-C)^2 - r^2 = 0
6. L = w' R + (1-w') I
7. w' = max [(w-r) / (d_max - r), 0]
8. P = (T_i + l R_i) + g(T_j - T_i + l(R_j - R_i)) + h(T_k - T_i + l(R_k - R_i))
where l is the solution of the following equation:
9. -l^3 R_i ((R_j-R_i) x (R_k-R_i)) + l^2 [(P-T_i) ((R_j-R_i) x (R_k-R_i)) - R_i ((R_j-R_i) x (T_k-T_i) + (R_k-R_i) x (T_j-T_i))] + l [(P-T_i) ((R_j-R_i) x (T_k-T_i) + (R_k-R_i) x (T_j-T_i)) - R_i ((T_j-T_i) x (T_k-T_i))] + (P-T_i) ((T_j-T_i) x (T_k-T_i)) = 0
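The following Python sketch applies Equations 4 through 7 as reconstructed above: the intersection U of the reflected ray with the user-selected sphere, and the blending of the reflected and intermediate rays into the final ray L. It assumes the quadratic has a real positive root, as it does when the ray originates inside the sphere; the helper names are illustrative.

import math

def intersect_reflection_ray_with_sphere(Q, R, C, r):
    # Equations 4 and 5: U = Q + t R, with t the positive root of
    # t^2 (R.R) + 2 t (R.(Q-C)) + (Q-C).(Q-C) - r^2 = 0.
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    qc = tuple(q - c for q, c in zip(Q, C))
    a = dot(R, R)
    b = 2.0 * dot(R, qc)
    c = dot(qc, qc) - r * r
    disc = b * b - 4.0 * a * c
    t = (-b + math.sqrt(disc)) / (2.0 * a)        # positive solution assumed to exist
    return tuple(q + t * d for q, d in zip(Q, R))

def final_ray(R, I, w, r, d_max):
    # Equations 6 and 7: blend the reflected ray R and the intermediate ray I
    # using the depth value w retrieved from the depth map.
    w_prime = max((w - r) / (d_max - r), 0.0)     # Equation 7
    return tuple(w_prime * rr + (1.0 - w_prime) * ii for rr, ii in zip(R, I))  # Equation 6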
The term "surface" refers to the external portion of an object and does not necessarily refer to a planar portion of the object. One method for computing a virtual image of a point reflected by a general polygonal mesh is described above. An alternative method for computing a virtual image of a point PO reflected by a general polygonal mesh is now described:
Vector Equation 8 is solved, thereby to obtain g, h in terms of the other variables in the equation. If 0 <= g, h < = 1, the point PO lies within the space cell C(ij,k) defined hereinabove. In Equation 8, 1 is as defined by Equation 9.
Another method for generating virtual images of scene vertices is now described. The method is based on a 3D reflection subdivision and an explosion map, each of which are now described:
The 3-D Reflection Subdivision The reflector is approximated using a triangular mesh. Each vertex of the mesh possesses a normal. This step is redundant when the reflector is already given as a triangular mesh. Mesh vertices can be classified into hidden and (potentially) visible, the visible ones being strictly front-facing, profile or back-facing. We force visible back-facing vertices to be profile vertices, profile and strictly front-facing vertices are called front-facing, and contour vertices that are strictly front-facing are called cut vertices.
For each front-facing vertex Vi we define a reflection ray Ri and a through ray Ti. Two reflection rays Ri, Rj corresponding to adjacent mesh vertices Vi, Vj define a ruled bi-linear parametric surface Ruled(s,t) = s(V_i + t R_i) + (1-s)(V_j + t R_j). Note that in general that surface is not planar, because the two rays are usually not co-planar.
Consider three vertices Vi, Vj, Vk of a front-facing mesh triangle Vijk. The triangle induces two cells in the reflection subdivision: a reflection cell Cijk and a hidden cell Dijk. Three of the four faces of Cijk are ruled surfaces (the fourth is the triangle itself), while all four faces of Dijk are planar. The subdivision into cells is disjoint because the reflector is convex. A point lies in one of the reflection cells if and only if it is potentially reflected.
If there are cut vertices, the collection of reflection and hidden cells does not cover the whole space. The remaining part of space is again called the concealed region. The Explosion Map Method for Computing Virtual Vertices.
In the following we describe the approximate explosion map method, used both for cell identification and virtual vertex computation. The explosion map method uses two maps: an explosion map and a hidden map.
Building the explosion map. Take any simple object bounding the reflector, say a sphere. For each reflection ray Ri, compute its intersection Ii with the sphere. The intersection points are now mapped onto a discrete image of a circle, whose resolution is large enough so that different points are mapped to different pixels. In practice, a resolution of 100^2 is sufficient.
Mapping of an intersection point Ii is done as follows. Define a normalized direction vector Ni = (xi,yi,zi) from the center of the bounding sphere to the intersection point
Ii. If the resolution of the map is s^2, the point Ii is mapped to pixel coordinates ( s xi / (2(zi+1))^(1/2) + s/2 , s yi / (2(zi+1))^(1/2) + s/2 ). The pixels to which the intersection points Ii are mapped all fall inside a circle of radius s/2.
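A minimal Python sketch of this mapping is given below; the map resolution s and the representation of points as tuples are assumptions of the sketch.

import math

def explosion_map_pixel(Ii, center, s):
    # Map intersection point Ii on the bounding sphere to pixel coordinates of
    # an s x s explosion map, per the mapping described above.
    d = tuple(i - c for i, c in zip(Ii, center))
    length = math.sqrt(sum(x * x for x in d))
    xi, yi, zi = (x / length for x in d)          # normalized direction Ni
    denom = math.sqrt(2.0 * (zi + 1.0))
    px = s * xi / denom + s / 2.0
    py = s * yi / denom + s / 2.0
    return int(px), int(py)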
Take a front-facing mesh triangle Tijk. By definition, reflection rays, hence, intersection points Ii, Ij, Ik, exist. On the circle, pixels corresponding to such adjacent mesh vertices are connected by line segments to form triangles. Each visible mesh triangle now has a corresponding map triangle.
In general, the reflection triangles do not cover the whole circle, because the concealed region was not mapped onto the circle yet (in addition, the line segment connecting two adjacent profile vertices leaves out a small portion of the circle). In order to cover the whole circle, we first identify all contour vertices (cut and profile vertices). For each contour vertex V, we define a ray RV starting at the vertex and in a direction opposite to that of the center of the circle. Each adjacent pair of contour vertices Vi,
Vj defines an extension polygon whose vertices are Vi, Vj, Ei, Ej, where Ei, Ej are points on the rays Ri, Rj outside the circle. The distance between a point Ei and the circle center could be approximately 1.3 times the radius of the circle.
The last step in building the explosion map is filling its polygons with a constant value denoting the index of the corresponding mesh triangles. Extension polygons are filled with the index of their adjacent mesh triangle, which, by construction, is unique. Polygon fill can be done by graphics hardware. The explosion map represents a discrete projection of the reflection subdivision on a two-dimensional sheet around the object. The index of a reflection cell is written on all discrete map pixels covered by it. In a separate location, we store a list of the exact intersection points Ii, indexed by cell index.
In addition to the explosion map, we compute a hidden map, which is simply an item buffer of the visible mesh triangles. That is, it is a discrete map (of a resolution much smaller than the frame buffer) in which a visible mesh triangle is mapped to a 2-D triangle filled with the index of the mesh triangle. Using the explosion map to compute virtual vertices.
The explosion map circle represents a mapping of all possible reflection directions from the reflector's boundary. We use it to directly generate the final virtual vertex Q'. For each reflector we define two explosion maps (near and far), a hidden map, and an auxiliary z-buffer of itself. The near map is computed using a sphere that tightly bounds the object, and the far map is computed using a sphere that bounds all the scene. It is important to understand that although the topologies of the two maps are quite similar (because cells do not intersect each other), their geometries are different; reflection rays, which determine the geometry of map vertices, evolve non-linearly.
Given a scene vertex Q, we first determine whether we should use the explosion maps or the hidden map, by testing a rendered image of Q against the reflector's z-buffer. If Q is hidden and belongs to a mixed polygon, we use the hidden map. If all vertices of all polygons to which Q belongs are hidden, we don't have to compute a virtual vertex Q'. Otherwise, we use the explosion maps. The first case is when Q is hidden. The rendered image of Q falls on one of the pixels of the hidden map, containing an index of a visible mesh triangle V. We compute the intersection I of V and a ray from the viewer to Q. The barycentric coordinates of I relative to
V are now computed, and used as weights in a weighted average of the vertices and normals of V that produces a plane of reflection (H, NH) through which Q is mirrored to produce Q'.
The barycentric coordinates of a point M relative to a triangle A, B, C are computed as follows. A weighted average of the vertices with weights adding to 1 spans all points in the plane. We can thus write (1-(s+t))A + sB + tC = M. These are two linear equations in two variables, which can easily be solved for the weights. If the point M lies outside of the triangle, one or two of the coordinates will be negative. We can find the triangle point nearest to M as follows. If two coordinates are negative, we force them to be 0 and the third to be 1. If one coordinate is negative, we force it to be 0 and divide each of the other two by their sum. In both cases the resulting coordinates are non-negative and sum to 1.
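The following Python sketch computes the barycentric coordinates of a 2-D point M relative to a triangle A, B, C and applies the clamping rule just described; solving the two linear equations by Cramer's rule is merely one implementation choice.

def barycentric_with_clamp(M, A, B, C):
    # Solve (1-(s+t)) A + s B + t C = M for s, t, i.e. s (B-A) + t (C-A) = M - A.
    b = (B[0] - A[0], B[1] - A[1])
    c = (C[0] - A[0], C[1] - A[1])
    m = (M[0] - A[0], M[1] - A[1])
    det = b[0] * c[1] - b[1] * c[0]
    s = (m[0] * c[1] - m[1] * c[0]) / det
    t = (b[0] * m[1] - b[1] * m[0]) / det
    w = 1.0 - (s + t)
    coords = [w, s, t]                       # weights for A, B, C respectively
    negatives = [i for i, v in enumerate(coords) if v < 0.0]
    if len(negatives) == 2:
        # Two negative coordinates: force them to 0 and the third to 1.
        coords = [0.0, 0.0, 0.0]
        coords[[i for i in range(3) if i not in negatives][0]] = 1.0
    elif len(negatives) == 1:
        # One negative coordinate: force it to 0 and renormalize the other two.
        coords[negatives[0]] = 0.0
        total = sum(coords)
        coords = [v / total for v in coords]
    return tuple(coords)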
The second case is when Q is not hidden; we utilize the explosion maps. Define a ray from the center of the bounding spheres to Q, and compute its intersections In, If with the near and far bounding spheres. Map the intersection points onto the near and far explosion maps (just as intersections were mapped to create the maps). The pixels in which they fall contain indices of two (generally, different) mesh triangles Tn, Tf. In general, none of these triangles corresponds to the correct reflection cell in which Q is located, because In, If were computed using a line to the center of the sphere, which is rather arbitrary. The intersection point In is mapped to the correct cell when Q is very near (or on) the near bounding sphere (and similarly to If).
We could try to find the correct cell on a path connecting In and If. However, recall that our final goal is to generate an approximate virtual vertex for Q, not to find the correct cell. Therefore, we use the following approximation. Each of the triangles Tn, Tf defines an auxiliary virtual vertex Qn', Qf'. Those auxiliary virtual vertices are computed by (1) computing the barycentric coordinates of In, If relative to Tn, Tf, (2) using these coordinates as weights in a weighted average of the vertices and normals of Tn, Tf to produce planes of reflection, and (3) mirroring Q across the two planes of reflection. The final virtual vertex Q' is a weighted average of Qn', Qf', i.e., it is on the line segment connecting Qn', Qf'. The weights are determined, say, by the relative distances dn, df of Q from the near and far spheres: Q' = (Qn'/dn + Qf'/df) / (1/dn + 1/df). Virtual Vertices for Cones
Reflecting 3-D cones allow a more efficient treatment than general 3-D convex reflectors. For the purpose of computing virtual polygons, we treat the cone as if it is infinite.
When rendering these polygons, the stencil automatically masks the parts that do not belong to our finite cone. Given a vertex Q, we first test whether it lies in the hidden or the reflected region. The hidden region can be represented as the union of the inside of the cone and a truncated pyramid defined by the intersection between (1) the two planes passing through the viewer and tangent to the cone, and (2) the plane passing through the two edges on the geometric contour. The hidden region of a finite cone is more complex, but we don't need it. Thus, a few simple 'plane side' tests and a cone containment test suffice in order to implement the hidden region test. If Q lies in the hidden region and is a vertex of a mixed polygon, we intersect the ray from the viewer to it with the front part of the cone, and create Q' by mirroring Q across the tangent plane at the intersection point.
If Q lies in the reflected region, we approximate the cone by 'vertical' facets. The two vertices on each vertical edge of an approximation facet share a tangent plane. Because of that, the reflection rays outgoing from all points located on a vertical edge lie in the same plane (the plane is defined by the point of reflection of the viewer through the tangent plane and the two edge vertices). In other words, the boundaries between the cells of the reflection subdivision induced by the cone are planar. The reflection cells can thus be arranged in a hierarchy, which can be efficiently searched in order to find the correct cell containing a given vertex Q. Once the cell is identified, we compute an approximate plane of reflection TQ as a weighted average between the two main boundary planes of the cell. The weights are taken according to the relative distances (or squared distances) of Q from these boundary planes. Q' is computed by mirroring Q across the plane of reflection TQ. Spheres and Cylinders
Spheres and cylinders allow a direct computation of the point of reflection H, by reducing the computation to a 2-D one. The method described below does not utilize the reflection subdivision at all.
We first test whether Q is reflected or hidden (it cannot be concealed, because we use the algebraic representation of spheres and cylinders). The 'hidden' test is treated first. For spheres, we represent the hidden region as the union of the inside of the sphere and a truncated cone (the cone's apex is the viewer, it passes through the geometric contour of the sphere, and it is truncated by a plane containing that contour). For cylinders, we project the viewer and the point Q onto a plane orthogonal to the major axis of the cylinder and implement the 'hidden' test easily in 2-D.
If Q is identified as hidden and it belongs to a mixed polygon, we intersect the ray from the viewer to Q with the front of the sphere or cylinder, and create a virtual vertex Q' by mirroring Q using the point of intersection and its normal.
Suppose now that Q is a reflected vertex. For spheres, note that the center C of the sphere, the viewer, Q and the point of reflection H all lie in the same plane. Hence, the problem of computing H when Q is given can be reduced to a circular 2-D reflector by working on the plane spanned by C, V, Q. The same holds for cylinders, by projection as described above.
Computation of H proceeds as follows. First, define Alpha to be the angle between the vectors E-C, Q-C, and dE, dQ to be the distances from E and Q to the circle.
Define Beta = Alpha (dE / (dE+dQ)). Assume without loss of generality that dE < dQ. Define H' to be a point such that the angle between H'-C, E-C is Beta. H' is a first approximation to H. A second approximation is to take a point H" as a weighted average of H' and H0 (the point at which Q-C intersects the circle). The weights are determined by the relative squared distances of Q to the reflection rays from H', H0.
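A minimal Python sketch of the first approximation H' for a circular 2-D reflector is given below; the signed-angle handling and the tuple representation of points are assumptions of the sketch, and the viewer is denoted E as in the text.

import math

def first_approximation_H(C, radius, E, Q):
    # H' on the circle of centre C and given radius: the angle between H'-C and E-C
    # is Beta = Alpha * dE / (dE + dQ), assuming dE < dQ as stated above.
    ang_e = math.atan2(E[1] - C[1], E[0] - C[0])
    ang_q = math.atan2(Q[1] - C[1], Q[0] - C[0])
    alpha = (ang_q - ang_e + math.pi) % (2 * math.pi) - math.pi   # signed angle E-C to Q-C
    dE = math.hypot(E[0] - C[0], E[1] - C[1]) - radius
    dQ = math.hypot(Q[0] - C[0], Q[1] - C[1]) - radius
    beta = alpha * dE / (dE + dQ)
    ang_h = ang_e + beta
    return (C[0] + radius * math.cos(ang_h), C[1] + radius * math.sin(ang_h))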
Concave Reflectors The behavior of the reflection of a reflected object depends on which of three regions it is located in. The reflection of an object located in region A looks like an enlarged, deformed version of the object, which is otherwise similar to the object. The reflection of an object located in region B looks like an enlarged, deformed, upside-down version of the object. The reflection of objects located in region C is utterly chaotic. Reflections of objects lying in regions A and B can be computed exactly as for convex reflectors, because in these regions the reflection subdivision is well-defined (since the reflection cells are disjoint). Virtual vertices and polygons can be computed and rendered.
Reflections of objects lying in region C or intersecting that region are unpredictable and chaotic anyway, so almost any policy for computing virtual vertices will be satisfactory. For example, we can simply select the first cell in which a vertex lies and create a virtual vertex according to that cell. It is appreciated that the software components of the present invention may, if desired, be implemented in ROM (read-only memory) form. The software components may, generally, be implemented in hardware, if desired, using conventional techniques.
It is appreciated that various features of the invention which are, for clarity, described in the contexts of separate embodiments may also be provided in combination in a single embodiment. Conversely, various features of the invention which are, for brevity, described in the context of a single embodiment may also be provided separately or in any suitable subcombination. Particularly, it is appreciated that features relating to the computation of reflections, refractions, and shadows may be provided separately or in any combination. It will be appreciated by persons skilled in the art that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention is defined only by the claims that follow:

Claims

What is claimed is:
1. A method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first planar surface and a second surface reflected in said first planar surface, from 3D information regarding the 3D scene, the method comprising: generating an auxiliary 2D image of the reflection of the second surface in the first planar surface, wherein the step of generating comprises: reflecting the second surface through said plane, thereby to define a virtual second surface; and using a clipping plane to remove at least a portion of the virtual second surface which obstructs the view from said point of view of the first planar surface; generating a main 2D image of the scene including a 2D image of the first surface which does not include the reflection of the second surface in the first surface; and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
2. A method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first non-planar surface and a second surface reflected in said first non-planar surface, from 3D information regarding the 3D scene, the method comprising: generating an auxiliary 2D image of the reflection of the second surface in the first non-planar surface; generating a main 2D image of the scene including an image of the first non- planar surface which does not include the reflection of the second surface in the first non- planar surface; and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
3. A method according to claim 2 wherein said compositing step comprises employing a stencil bit plane to differentiate pixels covered by the first surface from pixels not covered by the first surface.
4. A method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first non-planar surface and at least one second surface wherein at least one of the second surfaces is reflected in each of said first non-planar surfaces, the method comprising: generating a main 2D image of the scene including an image of the first non- planar surface which does not include any reflections of any one of the second surfaces in the first non-planar surface; and for each pair of first and second surfaces characterized in that the second surface is reflected in the first surface: generating an auxiliary 2D image of the reflection of the second surface in the first non-planar surface; and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
5. A method according to claim 2 or claim 3 wherein said step of generating an auxiliary 2D image of the reflection comprises: computing a virtual surface comprising a 3D approximation of the reflection of the second surface in the first surface; and rendering the virtual surface, thereby to generate the auxiliary image.
6. A method according to claim 5 wherein said step of computing a virtual second surface comprises: selecting a set of second surface locations within said second surface to be reflected in said first surface; for each second surface location in said set: determining a polygon P on an approximation of the first surface, wherein a real reflection point of the second surface location falls within the polygon: and computing a possible reflection point for at least one point of the polygon; and computing a location within said virtual surface corresponding to said second surface location based at least partly on said possible reflection points.
7. A method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first surface and a second surface reflected in said first surface, from 3D information regarding the 3D scene, the method comprising: generating an auxiliary 2D image, at a first resolution, of the reflection of the second surface in the first surface; and generating a main 2D image, at a second resolution which differs from the first resolution, of the scene including an image of the first surface which does not include the reflection of the second surface in the first surface including texture mapping the auxiliary image onto the first surface thereby to change the resolution of the auxiliary image to said second resolution.
8. A method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first non-planar surfaces and at least one second surfaces wherein at least one of the second surfaces is reflected in each of said first non-planar surfaces, the method comprising: for each of the at least one first non-planar surfaces, generating a main 2D image of the scene including an image of the first non-planar surface which does not include any reflections of any one of the second surfaces in the first non-planar surface; and for each pair of first and second surfaces characterized in that the second surface is reflected in the first surface: generating an auxiliary 2D image of the reflection of the second surface in the first non-planar surface; and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
9. A method for computing shadows generated on a planar polygon by a light source in an image of a three-dimensional polygonal scene from a point of view of an observer, the method comprising: computing at least one shadowed polygon within a plane, wherein at least one shadow is cast on said shadowed polygon; computing a stencil which includes visible pixels of the at least one shadowed polygon; rendering objects in the scene, including computing a scene including the at least one shadowed polygon using hidden-surface removal, to produce a preliminary image; computing at least one shadow polygon by projecting a scene polygon in the direction of the light source such that shadow generating portions of the scene polygon are generally projected above the plane and non-shadow generating portions of the scene polygon are generally projected below the plane; and rendering the at least one shadow polygon in a shadow color using the stencil and using hidden-surface removal to produce at least one rendered shadow polygon.
10. A method according to claim 5 wherein the 3D approximation of the reflection comprises the reflection itself.
11. A method according to claim 8 wherein said at least one first surfaces comprises at least two first surfaces A and B and said step of generating a main 2D image for surface A comprises generating a main 2D image of the scene including an image of the first non-planar surface A which does not include any reflections of any one of the second surfaces in the first non-planar surface A, and wherein said step of generating a main 2D image for surface B comprises rendering surface B into said main 2D image of the scene generated for surface A.
12. A method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first surfaces and at least one second surfaces wherein at least one of the second surfaces is reflected in each of said first surfaces, the method comprising: for each of the at least one first surfaces, generating a main 2D image of the scene including an image of the first surface which does not include any reflections of any one of the second surfaces in the first surface; and for each pair of first and second surfaces characterized in that the second surface is reflected in the first surface: generating an auxiliary 2D image, at a first resolution, of the reflection of the second surface in the first surface; and generating a main 2D image, at a second resolution which differs from the first resolution, of the scene including an image of the first surface which does not include the reflection of the second surface in the first surface including texture mapping the auxiliary image onto the first surface thereby to change the resolution of the auxiliary image to said second resolution.
13. A method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first planar surfaces and at least one second surfaces wherein at least one of the second surfaces is reflected in each of said first planar surfaces, the method comprising: for each of the at least one first planar surfaces, generating a main 2D image of the scene including an image of the first planar surface which does not include any reflections of any one of the second surfaces in the first planar surface; and for each pair of first planar surfaces and second surfaces characterized in that the second surface is reflected in the first planar surface: generating an auxiliary 2D image of the reflection of the second surface in the first surface, wherein the step of generating comprises: reflecting the second surface through a plane which includes the first plane in the pair, thereby to define a virtual second surface; and using a clipping plane to remove at least a portion of the virtual second surface which obstructs the view from said point of view of the first planar surface; and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
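For orientation only, the two generating sub-steps of claim 13 can be pictured per point: mirror the point through the plane of the reflecting surface, and discard virtual geometry that ends up between the observer and that plane. The Python sketch below assumes a unit plane normal and an observer on the positive side of the plane; both the helper names and that sign convention are assumptions.

```python
import numpy as np

def reflect_through_plane(point, n, d):
    """Mirror a point through the plane  n . x = d  (n is a unit normal),
    producing the corresponding point of the virtual second surface."""
    return point - 2.0 * (n.dot(point) - d) * n

def keep_after_clip(point, n, d, eps=1e-9):
    """Clipping-plane test: keep only virtual geometry on the far side of
    the mirror plane, so it cannot obstruct the view of the first surface
    (assumes the observer is on the side where n . x - d > 0)."""
    return n.dot(point) - d <= eps
```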
14. A method according to claim 4 wherein said at least one second surface comprises a plurality of second surfaces and wherein said step of generating an auxiliary 2D image for each pair of first and second surfaces comprises generating a single auxiliary 2D image which includes reflections of all of said plurality of second surfaces in the first non-planar image.
15. A method according to any of claims 2, 4, 8, 11, 14 wherein said steps of generating the main image and compositing comprise texture mapping the auxiliary image onto the first surface.
16. A method according to any of claims 2-6, 8, 11 wherein said first non-planar surface comprises a polygonal mesh.
17. A method according to any of the preceding claims 2-6, 8, 10, 11, 13-16 wherein said compositing step comprises the step of alpha-blending the auxiliary and main images.
18. A method according to any of claims 1-8, 10-17, 31 wherein said step of generating an auxiliary 2D image comprises generating an auxiliary 2D image of the reflections of a set of second surfaces including a plurality of second surfaces, in the first surface, thereby to generate a set of reflected second surfaces, and including rendering of the set of second surfaces so as to remove any hidden portions of said reflected second surfaces.
19. A method according to claim 18 wherein said step of rendering comprises using a z-buffer to remove any hidden portions of said reflected second surfaces.
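As a reminder of what the z-buffer of claim 19 does, each pixel retains only the nearest fragment written so far, which is how hidden portions of the reflected surfaces are removed. The sketch below uses assumed buffer shapes and a hypothetical helper name.

```python
import numpy as np

def zbuffer_write(color_buf, depth_buf, x, y, z, color):
    """Write a fragment only if it is nearer than the stored depth,
    discarding hidden portions of the reflected surfaces."""
    if z < depth_buf[y, x]:
        depth_buf[y, x] = z
        color_buf[y, x] = color

# Buffers initialised to 'infinitely far' depth and a black background.
H, W = 256, 256
depth_buffer = np.full((H, W), np.inf)
color_buffer = np.zeros((H, W, 3), dtype=np.float32)
```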
20. A method according to any of the preceding claims 1-8, 10-19, 31 wherein said step of generating an auxiliary 2D image comprises recursively invoking the same method for different first surfaces in said scene.
21. A method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first surface and a second surface refracted in said first surface, from 3D information regarding the 3D scene, the method comprising: generating an auxiliary 2D image of the refraction of the second surface in the first surface; generating a main 2D image of the scene including an image of the first surface which does not include the refraction of the second surface in the first surface; and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
22. A method according to claim 21 wherein said compositing step comprises employing a stencil bit plane to differentiate pixels covered by the first surface from pixels not covered by the first surface.
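The compositing of claims 21 and 22 can be pictured as a masked blend: the stencil marks exactly the pixels covered by the first surface, and only those pixels receive the auxiliary image. A minimal sketch, assuming NumPy image arrays and simple alpha blending, follows.

```python
import numpy as np

def composite_with_stencil(main_img, aux_img, stencil, alpha=1.0):
    """Blend the auxiliary image into the main image only where the stencil
    marks pixels covered by the refracting (or reflecting) first surface."""
    out = main_img.copy()
    mask = stencil.astype(bool)
    out[mask] = (1.0 - alpha) * main_img[mask] + alpha * aux_img[mask]
    return out
```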
23. A method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first surfaces and at least one second surfaces wherein at least one of the second surfaces is refracted in each of said first surfaces, the method comprising: generating a main 2D image of the scene including an image of the first surface which does not include any refractions of any one of the second surfaces in the first surface; and for each pair of first and second surfaces characterized in that the second surface is refracted in the first surface: generating an auxiliary 2D image of the refraction of the second surface in the first surface; and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
24. A method according to any of claims 21 - 23 wherein said step of generating an auxiliary 2D image of the refraction comprises: computing a virtual surface comprising a 3D approximation of the refraction of the second surface in the first surface; and rendering the virtual surface, thereby to generate the auxiliary image.
25. A method according to claim 24 wherein said step of computing a virtual second surface comprises: selecting a set of second surface locations within said second surface to be refracted in said first surface; for each second surface location in said set: determining a polygon on an approximation of the first surface, wherein a real refraction point of the second surface location falls within the polygon; and computing a possible refraction point for at least one point of the polygon; and computing a location within said virtual surface corresponding to said second surface location based at least partly on the possible refraction point.
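The virtual-surface construction of claims 24 and 25 ultimately rests on per-point refraction. As background only, Snell's law gives the refracted direction as in the Python sketch below; the eta convention and the handling of total internal reflection are standard textbook assumptions, not details taken from the claims.

```python
import numpy as np

def refract(incident, normal, eta):
    """Refracted direction by Snell's law.  'incident' is a unit vector
    pointing toward the surface, 'normal' is a unit normal pointing back
    toward the incoming ray, and eta = n1 / n2.  Returns None when total
    internal reflection occurs."""
    cos_i = -normal.dot(incident)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None
    return eta * incident + (eta * cos_i - np.sqrt(k)) * normal
```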
26. A method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first surface and a second surface refracted in said first surface, from 3D information regarding the 3D scene, the method comprising: generating an auxiliary 2D image, at a first resolution, of the refraction of the second surface in the first surface; and generating a main 2D image, at a second resolution which differs from the first resolution, of the scene including an image of the first surface which does not include the refraction of the second surface in the first surface including texture mapping the auxiliary image onto the first surface thereby to change the resolution of the auxiliary image to said second resolution.
27. A method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first surfaces and at least one second surfaces wherein at least one of the second surfaces is refracted in each of said first surfaces, the method comprising: for each of the at least one first surfaces, generating a main 2D image of the scene including an image of the first surface which does not include any refractions of any one of the second surfaces in the first surface; and for each pair of first and second surfaces characterized in that the second surface is refracted in the first surface: generating an auxiliary 2D image of the refraction of the second surface in the first surface; and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
28. A method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first planar surface and a second surface refracted in said first planar surface, from 3D information regarding the 3D scene, the method comprising: generating an auxiliary 2D image of the refraction of the second surface in the first planar surface by refracting the second surface through said first planar surface, thereby to define a virtual second surface; and generating a main 2D image of the scene including a 2D image of the first planar surface which does not include the refraction of the second surface in the first planar surface, wherein the step of generating includes texture mapping the auxiliary image onto the first surface.
29. A method according to claim 28 and also comprising the step of: using a clipping plane to remove at least a portion of the virtual second surface which obstructs the view from said point of view of the first planar surface.
30. A method according to claim 23 wherein said at least one first surface comprises at least two first surfaces A and B and said step of generating a main 2D image for surface A comprises generating a main 2D image of the scene including an image of the first surface A which does not include any refractions of any one of the second surfaces in the first surface A, and wherein said step of generating a main 2D image for surface B comprises rendering surface B into said main 2D image of the scene generated for surface A.
31. A method according to claim 1 and wherein the step of generating includes texture mapping the auxiliary image onto the first surface.
32. A method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one first planar surfaces and at least one second surfaces wherein at least one of the second surfaces is refracted in each of said first planar surfaces, the method comprising: for each of the at least one first planar surfaces, generating a main 2D image of the scene including an image of the first planar surface which does not include any refractions of any one of the second surfaces in the first planar surface; and for each pair of first planar surfaces and second surfaces characterized in that the second surface is refracted in the first planar surface: generating an auxiliary 2D image of the refraction of the second surface in the first planar surface, wherein the step of generating comprises: refracting the second surface through a plane which includes the first plane in the pair, thereby to define a virtual second surface; and using a clipping plane to remove at least a portion of the virtual second surface which obstructs the view from said point of view of the first planar surface; and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first planar surface in the main 2D image.
33. A method according to claim 23 wherein said at least one second surfaces comprises a plurality of second surfaces and wherein said step of generating an auxiliary 2D image for each pair of first and second surfaces comprises generating a single auxiliary 2D image which includes refractions of all of said plurality of second surfaces in the first image.
34. A method according to any of claims 21, 24, 27, 30, 33 wherein said steps of generating the main image and compositing comprise texture mapping the auxiliary image onto the first surface.
35. A method according to any of claims 21-27, 30, 32-34 wherein said first surface comprises a polygonal mesh.
36. A method according to any of the preceding claims 21-25, 27, 30, 32-35 wherein said compositing step comprises the step of alpha-blending the auxiliary and main images.
37. A method according to any of claims 21-30, 32-36 wherein said step of generating an auxiliary 2D image comprises generating an auxiliary 2D image of the refractions of a set of second surfaces including a plurality of second surfaces, in the first surface, thereby to generate a set of refracted second surfaces, and including rendering of the set of second surfaces so as to remove any hidden portions of said refracted second surfaces.
38. A method according to claim 37 wherein said step of rendering comprises using a z-buffer to remove any hidden portions of said refracted second surfaces.
39. A method according to any of the preceding claims 21-30, 32-38 wherein said step of generating an auxiliary 2D image comprises recursively invoking the same method for different first surfaces in said scene.
40. A method according to any of claims 21-25, 27, 30, 32-33, 35-39 and also comprising: generating an additional auxiliary 2D image of the reflection of the second surface in the first surface; wherein said step of generating a main 2D image comprises generating a main 2D image of the scene including an image of the first surface which does not include both of the reflection of the second surface in the first surface and the refraction of the second surface in the first surface; and wherein said compositing step comprises compositing both auxiliary 2D images with the main 2D image including mapping both auxiliary 2D images onto the image of the first surface in the main 2D image.
41. A method according to any of claims 21-25, 27, 30, 32-33, 35-39 and also comprising: generating an additional auxiliary 2D image of the shadow of the second surface on the first surface; wherein said step of generating a main 2D image comprises generating a main 2D image of the scene including an image of the first surface which does not include both of the shadow of the second surface on the first surface and the refraction of the second surface in the first surface; and wherein said compositing step comprises compositing both auxiliary 2D images with the main 2D image including mapping both auxiliary 2D images onto the image of the first surface in the main 2D image.
42. A method according to any of claims 2-6, 8, 10-11, 13-14, 16-20 and also comprising: generating an additional auxiliary 2D image of the shadow of the second surface on the first surface; wherein said step of generating a main 2D image comprises generating a main 2D image of the scene including an image of the first surface which does not include both of the shadow of the second surface on the first surface and the reflection of the second surface in the first surface; and wherein said compositing step comprises compositing both auxiliary 2D images with the main 2D image including mapping both auxiliary 2D images onto the image of the first surface in the main 2D image.
43. A method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one objects and at least one surfaces, at least one of which is reflected in each of said at least one objects, wherein the 3D information regarding the 3D scene includes, for each of said at least one objects, at least one polygons which approximate the object, each of said at least one polygons having at least 3 vertices, the method comprising, for each of said at least one objects: generating an environment map for a selected point; for each individual vertex from among the object's polygons' vertices, computing an ordered pair of texture coordinates within said environment map; and rendering the object's polygons using the environment map as a texture map; wherein said step of generating an environment map comprises rendering at least one of the surfaces on each of four faces of a four-faced pyramid defined about said selected point.
44. A method according to claim 43 and also comprising mapping the faces of the pyramid, on each of which at least one of the surfaces are rendered, onto a 2D circle, thereby to generate a modified environment map.
45. A method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one objects and at least one surfaces, at least one of which is reflected in each of said at least one objects, wherein the 3D information regarding the 3D scene includes, for each of said at least one objects, at least one polygons which approximate the object, each of said at least one polygons having at least 3 vertices, the method comprising, for each of said at least one objects: generating an environment map for a selected point; for each individual vertex from among the object's polygons' vertices, computing an ordered pair of texture coordinates within said environment map; and rendering the object's polygons using the environment map as a texture map; wherein said step of computing an ordered pair of texture coordinates comprises: computing a reflection ray based on the point of view, the individual vertex and a normal to the object at the individual vertex; computing an intersection point of the ray with an object centered at said selected point; and computing a pair of texture coordinates as a function of a vector extending from said selected point to said intersection point.
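The texture-coordinate computation of claim 45 can be illustrated in three small steps: reflect the view direction about the vertex normal, intersect the resulting ray with a sphere centered at the selected point, and turn the vector from the selected point to the hit point into a pair of coordinates. The Python sketch below shows one possible realization; the equirectangular (u, v) parameterization at the end is an assumption, since the claim only requires coordinates computed as a function of that vector.

```python
import numpy as np

def reflect(view_dir, normal):
    """Reflect the unit view direction (eye toward vertex) about the unit
    surface normal at that vertex."""
    return view_dir - 2.0 * view_dir.dot(normal) * normal

def ray_sphere_hit(origin, direction, center, radius):
    """Forward intersection of a unit-direction ray with a sphere, or None."""
    oc = origin - center
    b = 2.0 * direction.dot(oc)
    c = oc.dot(oc) - radius * radius
    disc = b * b - 4.0 * c              # direction is unit length, so a = 1
    if disc < 0.0:
        return None
    t = (-b + np.sqrt(disc)) / 2.0      # far root: the vertex may lie inside
    return origin + t * direction if t > 0.0 else None

def sphere_tex_coords(center, hit_point):
    """One possible (u, v) parameterization of the direction from the
    selected point to the intersection point."""
    d = hit_point - center
    d = d / np.linalg.norm(d)
    u = 0.5 + np.arctan2(d[1], d[0]) / (2.0 * np.pi)
    v = 0.5 - np.arcsin(np.clip(d[2], -1.0, 1.0)) / np.pi
    return u, v
```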
46. A method according to claim 44 wherein said step of computing an ordered pair of texture coordinates comprises: computing a reflection ray based on the point of view, the individual vertex and a normal to the object at the individual vertex; computing an intersection point of the ray with an object centered at said selected point; and computing a pair of texture coordinates as a function of a vector extending from said selected point to said intersection point.
47. A method for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one objects and at least one surfaces, at least one of which is reflected in each of said at least one objects, wherein the 3D information regarding the 3D scene includes, for each of said at least one objects, at least one polygons which approximate the object, each of said at least one polygons having at least 3 vertices, the method comprising, for each of said at least one objects: generating an environment map for a selected point; for each individual vertex from among the object's polygons' vertices, computing an ordered pair of texture coordinates within said environment map; and rendering the object's polygons using the environment map as a texture map; wherein said step of computing an ordered pair of texture coordinates comprises: computing a reflection ray based on the point of view, the individual vertex and a normal to the object at the individual vertex; computing at least one intersection points of the ray with at least one spheres, respectively, which are centered at said selected point; computing at least one intermediate rays from said selected point to each of said at least one intersection points, respectively; combining directions of said reflection ray and all of said at least one intermediate rays, thereby to define a final vector; and computing a pair of texture coordinates as a function of said final vector.
48. A method according to claim 47 and also comprising the step of generating a depth map of the surfaces as seen from the selected point, said depth map corresponding to said environment map, and wherein said combining step comprises assigning a weight to each individual one of said directions based on a depth which the depth map assigns to the individual one of said directions.
49. A method according to any of claims 43-48 wherein said selected point comprises a user-selected location in the scene.
50. A method according to any of claims 43-48 wherein said selected point, for which an environment map is generated for an individual one of said at least one objects, comprises a central location of said individual object.
51. Apparatus for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one objects and at least one surfaces, at least one of which is reflected in each of said at least one objects, wherein the 3D information regarding the 3D scene includes, for each of said at least one objects, at least one polygons which approximate the object, each of said at least one polygons having at least 3 vertices, the apparatus comprising: an environment map generator operative to generate an environment map for a selected point for each of said at least one objects: a texture coordinate computer operative to compute, for each individual vertex from among each object's polygons' vertices, an ordered pair of texture coordinates within said environment map; and a rendering device operative to render each object's polygons using the environment map as a texture map; wherein said environment map generator comprises a pyramidic rendering device operative to render at least one of the surfaces on each of four faces of a four-faced pyramid defined about said selected point.
52. Apparatus for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one objects and at least one surfaces, at least one of which is reflected in each of said at least one objects, wherein the 3D information regarding the 3D scene includes, for each of said at least one objects, at least one polygons which approximate the object, each of said at least one polygons having at least 3 vertices, the apparatus comprising: an environment map generator operative to generate an environment map for a selected point for each of said at least one objects; a texture coordinate computer operative to compute, for each individual vertex from among each object's polygons' vertices, an ordered pair of texture coordinates within said environment map; and a rendering device operative to render each object's polygons using the environment map as a texture map; wherein said texture coordinate computer is operative to compute a reflection ray based on the point of view, the individual vertex and a normal to the object at the individual vertex, to compute an intersection point of the ray with a sphere centered at said selected point, and to compute a pair of texture coordinates as a function of a vector extending from said selected point to said intersection point.
53. Apparatus for generating a 2D image of a 3D scene from a point of view, using 3D information regarding the 3D scene, the 3D scene including at least one objects and at least one surfaces, at least one of which is reflected in each of said at least one objects, wherein the 3D information regarding the 3D scene includes, for each of said at least one objects, at least one polygons which approximate the object, each of said at least one polygons having at least 3 vertices, the apparatus comprising: an environment map generator operative to generate an environment map for a selected point for each of said at least one objects; a texture coordinate computer operative to compute, for each individual vertex from among each object's polygons' vertices, an ordered pair of texture coordinates within said environment map; and a rendering device operative to render each object's polygons using the environment map as a texture map; wherein the texture coordinate computer is operative to compute a reflection ray based on the point of view, the individual vertex and a normal to the object at the individual vertex, to compute at least one intersection points of the ray with at least one spheres, respectively, which are centered at said selected point, to compute at least one intermediate rays from said selected point to each of said at least one intersection points, respectively, to combine directions of said reflection ray and all of said at least one intermediate rays, thereby to define a final vector and to compute a pair of texture coordinates as a function of said final vector.
54. Apparatus for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first non-planar surface and a second surface reflected in said first non-planar surface, from 3D information regarding the 3D scene, the apparatus comprising: auxiliary 2D image apparatus which generates the reflection of the second surface in the first non-planar surface; main 2D image apparatus which generates the scene including an image of the first non-planar surface which does not include the reflection of the second surface in the first non-planar surface; and compositor apparatus which combines the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
55. Apparatus for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first planar surface and a second surface reflected in said first planar surface, from 3D information regarding the 3D scene, the apparatus comprising: auxiliary 2D image apparatus which generates the reflection of the second surface in the first planar surface, wherein the auxiliary 2D image apparatus comprises: reflecting apparatus which reflects the second surface through said plane, thereby defining a virtual second surface; and removal apparatus which uses a clipping plane to remove at least a portion of the virtual second surface which obstructs the view from said point of view of the first planar surface; and main 2D image apparatus for generating the scene including a 2D image of the first surface which does not include the reflection of the second surface in the first surface.
56. A method for generating a 2D image of a 3D scene from a point of view, the 3D scene including a first surface and a second surface reflected in said first surface, from 3D information regarding the 3D scene, the method comprising recursively performing the following steps: generating an auxiliary 2D image of the reflection of the second surface in the first surface; generating a main 2D image of the scene including an image of the first surface which does not include the reflection of the second surface in the first surface; and compositing the auxiliary 2D image with the main 2D image including mapping the auxiliary 2D image onto the image of the first surface in the main 2D image.
PCT/IL1997/000094 1996-03-14 1997-03-13 Computerized graphics systems WO1997034213A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU21057/97A AU2105797A (en) 1996-03-14 1997-03-13 Computerized graphics systems
GB9819870A GB2326806A (en) 1996-03-14 1997-03-13 Computerized graphics systems

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
IL117502 1996-03-14
IL11750296A IL117502A0 (en) 1996-03-14 1996-03-14 System for processing graphic scenes
IL119510 1996-10-28
IL11951096A IL119510A0 (en) 1996-10-28 1996-10-28 Apparatus and method for generating 2D images of 3D scenes including reflections

Publications (2)

Publication Number Publication Date
WO1997034213A2 true WO1997034213A2 (en) 1997-09-18
WO1997034213A3 WO1997034213A3 (en) 1997-10-23

Family

ID=26323234

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL1997/000094 WO1997034213A2 (en) 1996-03-14 1997-03-13 Computerized graphics systems

Country Status (3)

Country Link
AU (1) AU2105797A (en)
GB (1) GB2326806A (en)
WO (1) WO1997034213A2 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000049573A1 (en) * 1999-02-19 2000-08-24 Sony Computer Entertainment Inc. System for and method of implementing refraction mapping
WO2001020554A1 (en) * 1999-09-10 2001-03-22 Sony Computer Entertainment Inc. Method and apparatus for rendering images with refractions
WO2007129476A1 (en) 2006-05-09 2007-11-15 Sega Corporation Image processing program and image processor
AT503743B1 (en) * 2002-10-09 2008-05-15 Vrvis Zentrum Fuer Virtual Rea METHOD FOR THE COMPUTER-BASED PRESENTATION OF OBJECTS
EP2204776A1 (en) * 2007-11-01 2010-07-07 Konami Digital Entertainment Co., Ltd. Image processing device, image processing method, information recording medium and program
KR20170091710A (en) * 2014-12-03 2017-08-09 노키아 테크놀로지스 오와이 Digital video rendering
KR101910893B1 (en) 2017-04-21 2019-01-04 오스템임플란트 주식회사 Apparatus And Method For Selecting Data Using Clipping Fuction
US20220237852A1 (en) * 2021-01-25 2022-07-28 Nvidia Corporation Techniques for texture filtering using refracted ray cones

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5043922A (en) * 1988-09-09 1991-08-27 International Business Machines Corporation Graphics system shadow generation using a depth buffer
US5544292A (en) * 1990-01-15 1996-08-06 U.S. Philips Corporation Display apparatus having a display processor for storing and filtering two dimensional arrays forming a pyramidal array, and method of operating such an apparatus
US5533185A (en) * 1991-11-27 1996-07-02 Seiko Epson Corporation Pixel modification unit for use as a functional unit in a superscalar microprocessor
US5461712A (en) * 1994-04-18 1995-10-24 International Business Machines Corporation Quadrant-based two-dimensional memory manager

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6563508B1 (en) 1999-02-19 2003-05-13 Sony Computer Entertainment Inc. System for and method of implementing refraction mapping
US7075533B2 (en) 1999-02-19 2006-07-11 Sony Computer Entertainment Inc. System for and method of implementing refraction mapping
WO2000049573A1 (en) * 1999-02-19 2000-08-24 Sony Computer Entertainment Inc. System for and method of implementing refraction mapping
WO2001020554A1 (en) * 1999-09-10 2001-03-22 Sony Computer Entertainment Inc. Method and apparatus for rendering images with refractions
US6784882B1 (en) 1999-09-10 2004-08-31 Sony Computer Entertainment Inc. Methods and apparatus for rendering an image including portions seen through one or more objects of the image
US6972758B2 (en) 1999-09-10 2005-12-06 Sony Computer Entertaiment Inc. Methods and apparatus for rendering an image including portions seen through one or more objects of the image
AT503743B1 (en) * 2002-10-09 2008-05-15 Vrvis Zentrum Fuer Virtual Rea METHOD FOR THE COMPUTER-BASED PRESENTATION OF OBJECTS
JP4983793B2 (en) * 2006-05-09 2012-07-25 株式会社セガ Image processing program and image processing apparatus
WO2007129476A1 (en) 2006-05-09 2007-11-15 Sega Corporation Image processing program and image processor
EP2034446A1 (en) * 2006-05-09 2009-03-11 Sega Corporation Image processing program and image processor
US8400445B2 (en) 2006-05-09 2013-03-19 Sega Corporation Image processing program and image processing apparatus
EP2034446A4 (en) * 2006-05-09 2011-01-05 Sega Corp Image processing program and image processor
EP2204776A4 (en) * 2007-11-01 2010-12-01 Konami Digital Entertainment Image processing device, image processing method, information recording medium and program
TWI384414B (en) * 2007-11-01 2013-02-01 Konami Digital Entertainment Image processing device, image processing method, and information recording medium
EP2204776A1 (en) * 2007-11-01 2010-07-07 Konami Digital Entertainment Co., Ltd. Image processing device, image processing method, information recording medium and program
CN101911129B (en) * 2007-11-01 2013-08-28 科乐美数码娱乐株式会社 Image processing device, image processing method, information recording medium and program
KR20170091710A (en) * 2014-12-03 2017-08-09 노키아 테크놀로지스 오와이 Digital video rendering
KR102059732B1 (en) * 2014-12-03 2020-02-20 노키아 테크놀로지스 오와이 Digital video rendering
KR101910893B1 (en) 2017-04-21 2019-01-04 오스템임플란트 주식회사 Apparatus And Method For Selecting Data Using Clipping Fuction
US20220237852A1 (en) * 2021-01-25 2022-07-28 Nvidia Corporation Techniques for texture filtering using refracted ray cones

Also Published As

Publication number Publication date
GB2326806A (en) 1998-12-30
GB9819870D0 (en) 1998-11-04
AU2105797A (en) 1997-10-01
WO1997034213A3 (en) 1997-10-23

Similar Documents

Publication Publication Date Title
US7030879B1 (en) System and method of improved calculation of diffusely reflected light
US7362332B2 (en) System and method of simulating motion blur efficiently
Ofek et al. Interactive reflections on curved objects
Zhang et al. Visibility culling using hierarchical occlusion maps
EP1128330B1 (en) Visibility splatting and image reconstruction for surface elements
US6172679B1 (en) Visibility calculations for 3D computer graphics
US6583787B1 (en) Rendering pipeline for surface elements
US7009608B2 (en) System and method of using multiple representations per object in computer graphics
US5442733A (en) Method and apparatus for generating realistic images using a discrete representation
US7973790B2 (en) Method for hybrid rasterization and raytracing with consistent programmable shading
EP1128375B1 (en) Texture filtering for surface elements
US6825840B2 (en) System and method of adjusting ray origins when shading vertices with rays
EP1128331B1 (en) Hierarchical data structures for surface elements
WO1997034213A2 (en) Computerized graphics systems
García et al. Interactive ray tracing using the compute shader in directx 11
US20030184546A1 (en) Image processing method
Sobierajski Global illumination models for volume rendering
Zhang et al. Single-pass point rendering and transparent shading
KR100372901B1 (en) Ray-tracing acceleration method using ZF-buffer
Han et al. In-Depth Buffers
Meseth et al. Interactive fragment tracing
Měch Real-Time Image-Based Rendering Using Surface Proxies and Texture Packing
Koyamada Fast ray-casting for irregular volumes
Güdükbay et al. Three-dimensional scene representations: Modeling, animation, and rendering techniques
Frühauf Joining volume with surface rendering

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AL AM AT AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH HU IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG US UZ VN YU AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE

AK Designated states

Kind code of ref document: A3

Designated state(s): AL AM AT AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH HU IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG US UZ VN YU AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
ENP Entry into the national phase in:

Ref country code: GB

Ref document number: 9819870

Kind code of ref document: A

Format of ref document f/p: F

NENP Non-entry into the national phase in:

Ref country code: JP

Ref document number: 97532408

Format of ref document f/p: F

NENP Non-entry into the national phase in:

Ref country code: CA

122 Ep: pct application non-entry in european phase