US 7236169 B2

Abstract

A geometric processing stage for a pipelined engine for processing video signals and generating a processed video signal in space coordinates (S) adapted for display on a screen. The geometric processing stage includes: a model view module for generating projection coordinates of primitives of the video signals in a view space, said primitives including visible and non-visible primitives; a back face culling module arranged downstream of the model view module for at least partially eliminating the non-visible primitives; a projection transform module for transforming the coordinates of the video signals from view space coordinates into normalized projection coordinates (P); and a perspective divide module for transforming the coordinates of the video signals from normalized projection (P) coordinates into screen space coordinates (S). The back face culling module is arranged downstream of the projection transform module and operates on the normalized projection (P) coordinates of said primitives. The perspective divide module is arranged downstream of the back face culling module for transforming the coordinates of the video signals from normalized projection (P) coordinates into screen space coordinates (S). A circuit in the back face culling module can be shared with a standard three-dimensional back face culling operation when necessary. An application is in graphic engines using standard graphics languages such as OpenGL and NokiaGL.
Claims (31)

1. A geometric processing stage for a pipelined engine for processing video signals and generating therefrom a processed video signal in space coordinates (S) adapted for display on a screen, said geometric processing stage comprising:
a model view module for generating projection coordinates of primitives of said video signals in a view space, said primitives including visible and non-visible primitives;
a back face culling module arranged downstream of said model view module for at least partially eliminating said non-visible primitives;
a projection transform module for transforming coordinates of said video signals from said view space coordinates into normalized projection coordinates (P); and
a perspective divide module for transforming coordinates of said video signals from said normalized projection (P) coordinates into said screen space coordinates (S);
wherein:
said back face culling module is arranged downstream of said projection transform module and operates on normalized projection (P) coordinates of said primitives;
said perspective divide module is arranged downstream of said back face culling module for transforming the coordinates of said video signals from said normalized projection (P) coordinates into said screen space coordinates (S); and
wherein said back face culling module is configured for performing a perspective division (S=P/Pw) for transforming said normalized projection (P) coordinates into said screen coordinates (S).
2. The processing stage of
3. The processing stage of
4. The processing stage of
5. A geometric processing stage for a pipelined engine for processing video signals and generating therefrom a processed video signal in space coordinates (S) adapted for display on a screen, said geometric processing stage comprising:
a model view module for generating projection coordinates of primitives of said video signals in a view space, said primitives including visible and non-visible primitives;
a back face culling module arranged downstream of said model view module for at least partially eliminating said non-visible primitives;
a projection transform module for transforming coordinates of said video signals from said view space coordinates into normalized projection coordinates (P); and
a perspective divide module for transforming coordinates of said video signals from said normalized projection (P) coordinates into said screen space coordinates (S);
wherein:
said back face culling module is arranged downstream of said projection transform module and operates on normalized projection (P) coordinates of said primitives; and
said perspective divide module is arranged downstream of said back face culling module for transforming the coordinates of said video signals from said normalized projection (P) coordinates into said screen space coordinates (S), wherein said back face culling module is configured for operating on:
4D normalized projection (P) coordinates of said primitives as received from said projection transform module; and
3D view space coordinates of said primitives as received from said model view module.
6. The processing stage of
a first layer of multipliers for computing first subproducts of components of said coordinates (P);
a layer of subtraction nodes for computing differences of said subproducts of coordinates as computed in said first layer of multipliers;
a second layer of multipliers for computing second subproducts of said differences as computed in said layer of subtraction nodes; and
a summation node for summing said second subproducts as computed in said second layer of multipliers.
7. The processing stage of
8. A method of processing video signals and generating therefrom a processed video signal in space coordinates (S) adapted for display on a screen, said method comprising:
generating projection coordinates of primitives of said video signals in a view space, said primitives including visible and non-visible primitives;
back face culling said primitives for at least partially eliminating said non-visible primitives;
transforming coordinates of said video signals from said view space coordinates into normalized projection coordinates (P);
transforming coordinates of said video signals from said normalized projection (P) coordinates into said screen space coordinates (S);
operating said back face culling on said normalized projection (P) coordinates of said primitives; and
transforming coordinates of said video signals as resulting from said back face culling from said normalized projection (P) coordinates into said screen space coordinates (S), wherein said back face culling includes perspective dividing (S=P/Pw) for transforming said normalized projection (P) coordinates into said screen coordinates (S).
9. The method of
10. The method of
11. The method of
12. A method of processing video signals and generating therefrom a processed video signal in space coordinates (S) adapted for display on a screen, said method comprising:
generating projection coordinates of primitives of said video signals in a view space, said primitives including visible and non-visible primitives;
back face culling said primitives for at least partially eliminating said non-visible primitives;
transforming coordinates of said video signals from said view space coordinates into normalized projection coordinates (P);
transforming coordinates of said video signals from said normalized projection (P) coordinates into said screen space coordinates (S);
operating said back face culling on said normalized projection (P) coordinates of said primitives;
transforming coordinates of said video signals as resulting from said back face culling from said normalized projection (P) coordinates into said screen space coordinates (S); and
providing a back face culling module configured for performing said back face culling and configured for selectively operating on:
4D normalized projection (P) coordinates of said primitives as received from said projection transform module; and
3D view space coordinates of said primitives as received from said model view module.
13. The method of
including in said back face culling module a first layer of multipliers, a layer of subtraction nodes, a second layer of multipliers, and a summation node; and
computing via said first layer of multipliers subproducts of components of said coordinates (P);
computing via said layer of subtraction nodes differences of said subproducts of coordinates as computed in said first layer of multipliers;
computing via said second layer of multipliers second subproducts of said differences as computed in said layer of subtraction nodes; and
summing via said summation node said second subproducts as computed in said second layer of multipliers.
14. The method of
15. An article of manufacture, comprising:
a computer program product loadable into a memory of a computer and having software code portions to perform the following if the product is run on the computer:
generate projection coordinates of primitives of a video signal in a view space, the primitives including visible and non-visible primitives;
back face cull the primitives to at least partially eliminate the non-visible primitives;
transform coordinates of the video signal from the view space coordinates into normalized projection coordinates;
operate the back face culling on the normalized projection coordinates of the primitives; and
transform coordinates of the video signal from the normalized projection coordinates into screen space coordinates, including transform coordinates of the video signals resulting from the back face culling from the normalized projection coordinates into the screen space coordinates, wherein the computer program product further has software code portions to perform the following:
compute, via first multipliers, first subproducts of components of the normalized projection coordinates;
compute, via subtraction nodes, differences of the first subproducts as computed by the first multipliers;
compute, via second multipliers, second subproducts of the differences as computed by the subtraction nodes; and
sum the second subproducts as computed by the second multipliers.
16. The article of manufacture of
calculate a projection normal vector of a primitive defined in the normalized projection coordinates;
evaluate a sign of the projection normal vector; and
issue a signal enabling subsequent processing based on the sign.
17. The article of manufacture of
4D normalized projection (P) coordinates of said primitives as received from said projection transform module; and
3D view space coordinates of said primitives as received from said model view module.
18. An article of manufacture, comprising:
a computer program product loadable into a memory of a computer and having software code portions to perform the following if the product is run on the computer:
generate projection coordinates of primitives of a video signal in a view space, the primitives including visible and non-visible primitives;
back face cull the primitives to at least partially eliminate the non-visible primitives, wherein the software code portions to perform the back face culling includes software code portions to perform a perspective division;
transform coordinates of the video signal from the view space coordinates into normalized projection coordinates;
operate the back face culling on the normalized projection coordinates of the primitives; and
transform coordinates of the video signal from the normalized projection coordinates into screen space coordinates, including transform coordinates of the video signals resulting from the back face culling from the normalized projection coordinates into the screen space coordinates.
19. The article of manufacture of
compute, via first multipliers, first subproducts of components of the normalized projection coordinates;
compute, via subtraction nodes, differences of the first subproducts as computed by the first multipliers;
compute, via second multipliers, second subproducts of the differences as computed by the subtraction nodes; and
sum the second subproducts as computed by the second multipliers.
20. The article of manufacture of
3D view space coordinates of said primitives as received from said model view module.
21. A system to generate a processed video signal in space coordinates, the system comprising:
means for transforming coordinates of the video signal from view space coordinates into normalized projection coordinates;
means for generating projection coordinates of primitives of the video signals in a view space, the primitives including visible and non-visible primitives;
means for back face culling the primitives for at least partially eliminating the non-visible primitives;
means for transforming the coordinates of the video signals from the view space coordinates into normalized projection coordinates;
means for operating the back face culling on the normalized projection coordinates of the primitives;
means for transforming the coordinates of the video signals from the normalized projection coordinates into the screen space coordinates, including means for transforming the coordinates of the video signals as resulting from the back face culling from the normalized projection coordinates into the screen space coordinates;
means for computing, via first multipliers, first subproducts of components of the normalized projection coordinates;
means for computing, via subtraction nodes, differences of the first subproducts as computed by the first multipliers;
means for computing, via second multipliers, second subproducts of the differences as computed by the subtraction nodes; and
means for summing the second subproducts as computed by the second multipliers.
22. The system of
23. The system of
means for calculating a projection normal vector of a primitive defined in the normalized projection coordinates;
means for evaluating a sign of the projection normal vector; and
means for providing a signal that enables subsequent processing based on the evaluated sign.
24. An apparatus to generate a processed video signal in space coordinates, the apparatus comprising:
a first module to generate projection coordinates of primitives of the video signal in a view space;
a second module operatively coupled downstream of the first module to transform coordinates of the video signal from the view space into normalized projection coordinates;
a third module operatively coupled downstream of the second module to operate on the normalized projection coordinates of the primitives using back face culling, wherein the third module is configured to perform perspective division; and
a fourth module operatively coupled downstream of the third module to transform coordinates of the video signal, as resulting from the back face culling, from the normalized projection coordinates into screen space coordinates.
25. The apparatus of
26. The apparatus of
a first layer of multipliers to compute first subproducts of components of the normalized projection coordinates;
a layer of subtraction nodes operatively coupled to the first layer of multipliers to compute differences of the subproducts as computed in the first layer of multipliers;
a second layer of multipliers operatively coupled to the layer of subtraction nodes to compute second subproducts of the differences as computed in the layer of subtraction nodes;
a summation node operatively coupled to the second layer of multipliers to sum the second subproducts as computed in the second layer of multipliers; and
a comparison circuit operatively coupled to the summation node to evaluate a sign of the projection normal vector and to issue a signal that enables subsequent processing based on the sign.
27. The apparatus of
28. A method to process a video signal, the method comprising:
generating projection coordinates of primitives of the video signal in a view space;
transforming coordinates of the video signal from the view space into normalized projection coordinates;
operating on the normalized projection coordinates of the primitives using back face culling, wherein using back face culling includes performing perspective division; and
transforming coordinates of the video signal, as resulting from the back face culling, from the normalized projection coordinates into screen space coordinates.
29. The method of
30. The method of
31. The method of
Description

1. Field of the Invention

The present disclosure relates generally to techniques for triangle culling in pipelined 3D graphic engines, and was developed with specific attention paid to its possible application to graphic engines that operate in association with graphic languages. Exemplary of such an application are graphic engines operating in association with, e.g., OpenGL, NokiaGL and Direct3D, in particular but not exclusively in mobile phones. However, reference to this preferred application is in no way to be construed as limiting the scope of the invention.

2. Description of the Related Art

Modern 3D graphic pipelines for graphic engines in graphic cards include a rich set of features for synthesizing interactive three-dimensional scenes with high and realistic quality. Nowadays, in order to output frame rates ranging from 30 to 100 frames per second, powerful dedicated graphic cards are required, having correspondingly high costs. The number of pictures per time unit is usually increased by parallelizing, inasmuch as possible, all the operations involved in the process of drawing in the frame buffer those graphic objects that are to be displayed, while at the same time increasing the computational power available by performing at higher speeds those operations that cannot be parallelized.

A description of an OpenGL pipeline for 3D interactive graphics is given in a co-pending European Patent application filed on the same day and in the name of one of the Applicants herein, entitled "Graphic system comprising a pipelined graphic engine, pipelining method and computer program product". That application, whose contents are herein incorporated by reference, describes graphical operations performed in a geometry stage comprised in the graphic engine. In particular, a back-face culling operation is described.
In the back-face culling operation, it is very convenient to cull triangles at an early stage of the pipeline, since this appreciably decreases the number of "potentially visible triangles" submitted to later stages and the processing needed. Assuming that a 3D model is composed of triangles within the unit cube (which means that they are potentially visible), triangles can be of two types: those that are visible to the observer and those that are not. Back-face culling is able to remove from the list of potentially visible triangles those of the "back-face" type. For symmetric objects, 50% of the triangles are front facing and the other 50% are back facing. Removing 50% of the triangles avoids all subsequent pipeline computations on their vertexes. Consequently, it is convenient to apply this technique as early as possible within a 3D pipeline.

The structure of the geometry stage is considered next. One of the primary roles of the pipelined graphic system is to transform coordinates from the 3D space used in an application scene/stage into the 2D space of the final display unit. This transformation normally involves several intermediate coordinate systems, namely:
- the modeling space: the modeling space is the space in which individual elements in a model are defined. These are usually the “natural” coordinates for an object. For example, a sphere may be defined as having unit radius and be centered at the origin in the modeling coordinates. Subsequent scaling and translation would position and resize the sphere appropriately within a scene;
- the world space: this represents the coordinate system of the final scene prior to viewing. OpenGL does not support an explicit notion of world space separate from view space, but some other graphics systems do (e.g., Direct3D, GKS 3D);
- the view space: the view space has the synthetic camera as its origin, with the view direction along the z-axis. The view space coordinates are obtained from the modeling space via a model-view transformation. This coordinate system is sometimes referred to as "eye space";
- the normalized projection space: here the coordinates are projected into a canonical space via the projection transformation. It is a four-dimensional space where each coordinate is represented by (x, y, z, w). The view volume represents the region of the model visible to the synthetic camera. It is bounded by six planes, for example z=0, z=w, y=−w, y=w, x=−w, x=w. A perspective projection results in a view volume that is a frustum;
- the normalized device space: the normalized projection coordinates are converted into normalized device coordinates by dividing the first three coordinates (x, y, z) by the fourth, w, to obtain (x/w, y/w, z/w). The values of the resulting coordinates normally lie in the range −1 to 1, or from 0 to 1 in the case of the z coordinate;
- the screen space: this space corresponds to the coordinates used to address the physical display device.

Thus, the geometry stage includes, first, a model view transform module. Each vertex is multiplied by a 4×4 transform matrix in order to rotate, translate, scale or skew it. Each vertex is represented by four coordinates (x, y, z, w), where w=1. Such a four-coordinate system defines a homogeneous coordinate space and is very convenient, since points and vectors may be processed mathematically with the same 4×4 matrices. For example, a 3D transform is implemented as:
It is not possible to use a single matrix in this case, since a 3D translation cannot be expressed as a 3×3 matrix product. A four-dimensional transform is implemented as:
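The two forms can be written out as follows, using the conventional homogeneous formulation; the matrix entries shown are generic placeholders, since the original figures are not reproduced here:

```latex
% 3D: rotation/scale/skew as a matrix product, translation as a separate addition
\begin{pmatrix} x' \\ y' \\ z' \end{pmatrix}
  = \begin{pmatrix} m_{11} & m_{12} & m_{13} \\
                    m_{21} & m_{22} & m_{23} \\
                    m_{31} & m_{32} & m_{33} \end{pmatrix}
    \begin{pmatrix} x \\ y \\ z \end{pmatrix}
  + \begin{pmatrix} t_x \\ t_y \\ t_z \end{pmatrix}
% 4D: the same transform, translation included, as a single 4x4 matrix product
\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix}
  = \begin{pmatrix} m_{11} & m_{12} & m_{13} & t_x \\
                    m_{21} & m_{22} & m_{23} & t_y \\
                    m_{31} & m_{32} & m_{33} & t_z \\
                    0      & 0      & 0      & 1 \end{pmatrix}
    \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
```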
M is the 4×4 transform matrix. The output of the model view transform module is fed to a projection transform module, which transforms the coordinates from the view space into the normalized projection space. A frustum culling module follows; the pipelined frustum culling module computes an outcode for each vertex.

An outcode is here a 6-bit set of flags indicating the relationship of a vertex to the viewing frustum. If the outcodes of all three vertexes are zero, then the triangle is wholly within the frustum and must be retained. If the conjunction of the outcodes is non-zero, then all three vertexes must lie to one side of the frustum and the geometry must be culled. Otherwise the geometry must be retained, even though it may not actually overlap the view volume. The outcodes are useful in the pipeline for a clipper module, which clips the triangles against the view volume.

The clipping module as implemented increases the view volume slightly before applying the clips. This prevents artifacts from appearing at the edges of the screen, but implies that the rasterization stage must tolerate coordinates slightly outside the nominal view volume. Then, in the geometry stage, a back-face culling operation is performed.

For concave solids, front facing polygons may not all be visible. Back-face culling is not appropriate for transparent objects, where back faces may well be visible. In the OpenGL pipeline, it is possible to assign different material properties to each side of a primitive. In this case, culling should be disabled. However, the back face calculation is still required in order to select the material that should be applied. In most pipeline configurations, this causes problems, since it is not known at lighting time whether a face is front or back facing, necessitating double the lighting calculations and extra color information in each vertex.

Back-face culling typically leads to roughly half of the polygons in a scene being eliminated. Since the decision is local to each polygon, it does not eliminate polygons that are obscured by other polygons. Back-face culling must be consistent with rasterization: if the back face culling module and the rasterizer disagree on facing, triangles may be wrongly eliminated or retained. Moreover, it is important to also cull polygons that should not be rendered. The latter case is more subtle.
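The outcode logic just described can be sketched as follows. This is an illustrative Python rendering: the plane set −w≤x≤w, −w≤y≤w, 0≤z≤w follows the view volume given earlier, but the particular bit assignment is an assumption, not taken from the patent.

```python
def outcode(x, y, z, w):
    """6-bit outcode of a vertex vs. the view volume -w<=x<=w, -w<=y<=w, 0<=z<=w.
    Each set bit marks one frustum plane the vertex lies outside of."""
    code = 0
    if x < -w: code |= 0x01
    if x >  w: code |= 0x02
    if y < -w: code |= 0x04
    if y >  w: code |= 0x08
    if z <  0: code |= 0x10
    if z >  w: code |= 0x20
    return code

def cull_triangle(v0, v1, v2):
    """Trivial accept/reject per the outcode rules in the text.
    Returns 'inside' (retain), 'culled' (reject), or 'clip' (retain, may clip)."""
    c0, c1, c2 = (outcode(*v) for v in (v0, v1, v2))
    if c0 == 0 and c1 == 0 and c2 == 0:
        return 'inside'      # all three vertexes inside the frustum
    if c0 & c1 & c2:
        return 'culled'      # conjunction non-zero: all on one side of a plane
    return 'clip'            # retained even though it may not overlap the volume
```

Note that, as the text points out, the 'clip' outcome is conservative: the triangle is kept even when it does not actually overlap the view volume.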
An incorrect implementation can lead to artifacts at the edges of transparent objects whose back faces have not been properly culled. The operation of the back face culling module is as follows. In OpenGL, the rasterization stage determines facing from the order of the vertexes in screen space, and the back face culling operation performed in the module must apply the same convention.

Specifically, a triangle TR is considered along with its triangle normal vector TN, i.e., the vector that is normal to the triangle surface. An eye-point WP is associated with an eye-point vector V. A projected normal vector N, i.e., the projection of the normal vector TN in the direction of the eye-point vector V, is also considered. To calculate the projected normal vector N, two triangle edges T1 and T2 are used. Conventionally, the sign is defined on the basis of the order of the triangle vertexes: the first edge vector runs from the first vertex toward the second vertex, and the second edge vector runs from the second vertex toward the third vertex. The eye-point vector V is drawn from the eye-point WP toward one of the triangle vertexes. The projected normal vector N on the eye-point vector V is obtained via the inner (i.e., dot) product of the eye-point vector with the cross product of the two edge vectors T1 and T2. If the sign of the dot product operation is positive, then the triangle is visible.

In screen space the test simplifies: if the sign of the z component of the cross product of the first and second triangle edge vectors is positive, then the triangle is visible, and only that component needs to be evaluated. This method requires only two multiplications and one algebraic sum for each triangle; however, it is possible to perform culling only at the end stage of the pipeline: consequently, the triangles that are not visible are eliminated at a very late stage and the resulting pipeline is not efficient.

In the alternative configuration of the geometry stage, the projected normal vector N is computed in view space from the edge vectors and the eye-point vector. This arrangement, where triangles are culled at an early stage of the pipeline, requires nine multiplications and five algebraic sums for each triangle.
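The two tests described above can be sketched in Python. This is a hedged illustration: the positive-sign-means-visible convention follows the text, but in practice the sign depends on the chosen vertex winding. The operation counts match the text (9 multiplications and 5 algebraic sums for the 3D test; 2 multiplications and 1 sum for the screen-space test).

```python
def sub(a, b):
    """Component-wise a - b."""
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    """3D cross product: 6 multiplications, 3 subtractions."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    """3D dot product: 3 multiplications, 2 additions."""
    return sum(x*y for x, y in zip(a, b))

def backface_visible_3d(v0, v1, v2, eye):
    """Early (view-space) test: sign of (T1 x T2) . V."""
    t1 = sub(v1, v0)       # first edge: vertex 1 -> vertex 2
    t2 = sub(v2, v1)       # second edge: vertex 2 -> vertex 3
    n = cross(t1, t2)      # triangle normal TN
    v = sub(v0, eye)       # eye-point vector toward a vertex
    return dot(n, v) > 0   # positive sign: visible (per the text's convention)

def backface_visible_screen(v0, v1, v2):
    """Late (screen-space) test: only the z component of the 2D cross
    product is needed: 2 multiplications, 1 subtraction."""
    t1x, t1y = v1[0] - v0[0], v1[1] - v0[1]
    t2x, t2y = v2[0] - v1[0], v2[1] - v1[1]
    return t1x*t2y - t1y*t2x > 0
```

The trade-off the text describes is visible here: the screen-space test is far cheaper per triangle but can only run after perspective division, at the end of the pipeline.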
Thus, neither of the geometry stage configurations discussed above is fully satisfactory: one culls late with few operations, the other culls early at a higher operation count.

One embodiment of the present invention provides a geometric processing stage for a pipelined graphic engine wherein the overall mathematical operation count and the number of primitives, i.e., triangles, to be processed are significantly reduced without appreciably affecting the accuracy of the results obtained or the speed of processing. Embodiments of the invention also relate to a corresponding method, as well as to a computer program product loadable into the memory of a computer and comprising software code portions for performing the method of the invention when the product is run on a computer.

Substantially, the arrangement described herein provides a graphic engine comprising a geometric processing stage that performs back face culling after the projection transform, i.e., in four dimensions. The arrangement described herein exploits the relationship between the screen space and the projection space established by the perspective division in order to lower the complexity of calculation. As a result, an appreciable reduction is achieved, in comparison with the prior art, in terms of operation count and number of primitives to be processed.

A further advantage is the possibility of a hardware implementation in a form adapted both to 3D back face culling and 4D back face culling: a geometry stage adapted for use with different graphics languages can be implemented with a reduction of hardware. Finally, the arrangement described herein gives rise to a back face culling module able to deal in a simplified manner with four-dimensional coordinates.
One embodiment provides a geometric processing stage for a pipelined engine for processing video signals and generating therefrom a processed video signal in space coordinates (S) adapted for display on a screen, said geometric processing stage including: a model view module for generating projection coordinates of primitives of said video signals in a view space, said primitives including visible and non-visible primitives; a back face culling module arranged downstream of said model view module for at least partially eliminating said non-visible primitives; a projection transform module for transforming coordinates of said video signals from said view space coordinates into normalized projection coordinates (P); and a perspective divide module for transforming coordinates of said video signals from said normalized projection (P) coordinates into said screen space coordinates (S); wherein: said back face culling module is arranged downstream of said projection transform module and operates on normalized projection (P) coordinates of said primitives; said perspective divide module is arranged downstream of said back face culling module for transforming the coordinates of said video signals from said normalized projection (P) coordinates into said screen space coordinates (S); and said back face culling module is configured for performing a perspective division (S=P/Pw) for transforming said normalized projection (P) coordinates into said screen coordinates (S).
Another embodiment provides a method of processing video signals and generating therefrom a processed video signal in space coordinates (S) adapted for display on a screen, said method including: generating projection coordinates of primitives of said video signals in a view space, said primitives including visible and non-visible primitives; back face culling said primitives for at least partially eliminating said non-visible primitives; transforming coordinates of said video signals from said view space coordinates into normalized projection coordinates (P); transforming coordinates of said video signals from said normalized projection (P) coordinates into said screen space coordinates (S); operating said back face culling on said normalized projection (P) coordinates of said primitives; and transforming coordinates of said video signals as resulting from said back face culling from said normalized projection (P) coordinates into said screen space coordinates (S), wherein said back face culling includes perspective dividing (S=P/Pw) for transforming said normalized projection (P) coordinates into said screen coordinates (S).
Still another embodiment provides an apparatus to generate a processed video signal in space coordinates, the apparatus including: a first module to generate projection coordinates of primitives of the video signal in a view space; a second module operatively coupled downstream of the first module to transform coordinates of the video signal from the view space into normalized projection coordinates; a third module operatively coupled downstream of the second module to operate on the normalized projection coordinates of the primitives using back face culling, wherein the third module is configured to perform perspective division; and a fourth module operatively coupled downstream of the third module to transform coordinates of the video signal, as resulting from the back face culling, from the normalized projection coordinates into screen space coordinates.

Yet another embodiment provides a method to process a video signal, the method including: generating projection coordinates of primitives of the video signal in a view space; transforming coordinates of the video signal from the view space into normalized projection coordinates; operating on the normalized projection coordinates of the primitives using back face culling, wherein using back face culling includes performing perspective division; and transforming coordinates of the video signal, as resulting from the back face culling, from the normalized projection coordinates into screen space coordinates.

The invention will now be described, purely by way of non-limiting example, with reference to the attached drawings. Embodiments of a geometric processing stage for a pipelined graphic engine, and of a corresponding method and computer program product, are described herein. In the following description, numerous specific details are given to provide a thorough understanding of embodiments.
One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.

Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

In the arrangement described herein, the back face culling operation is performed on normalized projection coordinates, after the projection transform, so that the calculation takes place in four dimensions. Accordingly, the geometry stage comprises a model view transform module and a projection transform module, followed by a back face culling module operating on normalized projection coordinates; downstream of the back face culling module, a perspective divide module transforms the coordinates into screen space. The same reference numerals already appearing in the foregoing designate identical or corresponding parts.

Therefore, this section of the present description will primarily deal with operation of the back face culling module. As indicated, direct calculation of the cross product and the dot product of the four-dimensional vectors originated by the projection transform module would be onerous; the geometry stage instead exploits the relationship between screen coordinates and projection coordinates established by the perspective division. Such a relationship may be expressed as:
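With S denoting the screen (normalized device) coordinates and P the normalized projection coordinates, the perspective division reads (viewport scaling omitted):

```latex
S = \frac{P}{P_w}, \qquad
S_x = \frac{P_x}{P_w}, \quad
S_y = \frac{P_y}{P_w}, \quad
S_z = \frac{P_z}{P_w}
```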
Thus, the following relationships apply for the first and second triangle edge vectors T1 and T2, built from the three screen space vertexes S0, S1 and S2:

T1 = S1 - S0, T2 = S2 - S0

The cross product between T1 and T2 has a z component whose sign indicates the orientation of the triangle:

(T1 x T2)z = (S1x - S0x)(S2y - S0y) - (S1y - S0y)(S2x - S0x)

The screen coordinate vertexes can be substituted by homologous projection coordinate vertexes by exploiting the relationship S = P/Pw. By so doing, the following relationship will result:

(T1 x T2)z = (P1x/P1w - P0x/P0w)(P2y/P2w - P0y/P0w) - (P1y/P1w - P0y/P0w)(P2x/P2w - P0x/P0w)

The formula above is algebraically equivalent to the following:

(T1 x T2)z = [1/(P0w P1w P2w)] [P0x(P1y P2w - P2y P1w) - P0y(P1x P2w - P2x P1w) + P0w(P1x P2y - P2x P1y)]
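As a check on the algebra above, the following Python sketch (not part of the original disclosure; the vertex values are made up for illustration) compares the screen space cross product against the 4D determinant form and verifies that the two differ exactly by the positive factor 1/(P0w P1w P2w):

```python
# Hedged sketch: verifies that the screen-space orientation test agrees with
# the 4D determinant test derived above. Vertex values are illustrative.

def screen_orientation(s0, s1, s2):
    # z component of the cross product of the two triangle edge vectors
    # T1 = S1 - S0 and T2 = S2 - S0, computed in screen coordinates.
    return ((s1[0] - s0[0]) * (s2[1] - s0[1])
          - (s1[1] - s0[1]) * (s2[0] - s0[0]))

def projection_orientation(p0, p1, p2):
    # Determinant of the matrix whose rows are (Px, Py, Pw); when all
    # Pw > 0 its sign matches the screen-space orientation, no division.
    m = [(p[0], p[1], p[3]) for p in (p0, p1, p2)]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Example triangle in normalized projection coordinates (x, y, z, w), w > 0.
P = [(1.0, 0.5, -1.0, 2.0), (3.0, 1.0, -1.5, 1.0), (0.5, 2.5, -2.0, 4.0)]
S = [(x / w, y / w) for (x, y, z, w) in P]  # perspective divide S = P / Pw

same_sign = screen_orientation(*S) * projection_orientation(*P) > 0
```

Because the product of the three w components here is 2.0 * 1.0 * 4.0, dividing the determinant by that product reproduces the screen space cross product exactly.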
This is in fact, up to the positive factor 1/(P0w P1w P2w), the determinant of the following matrix:

| P0x  P0y  P0w |
| P1x  P1y  P1w |
| P2x  P2y  P2w |
Besides, for a canonical perspective projection Pw = -z. Also, z < 0 inside the frustum, hence Pw > 0 for all visible vertexes: the factor 1/(P0w P1w P2w) is positive, so the sign of the determinant alone indicates the orientation of the triangle and no division is required. The vertex coordinates expressed in projection coordinates P are sent to a layer of multipliers that compute the subproducts appearing in the determinant. Such subproducts are then sent to three corresponding subtraction nodes that form the 2x2 minors. Finally, a summation node combines the minors, weighted by the first-row coefficients, to produce the determinant whose sign drives the culling decision. The circuit just described can be shared between four dimensional and three dimensional back face culling, since both reduce to the computation of a 3x3 determinant. In four dimensions (4D):

det | P0x  P0y  P0w |
    | P1x  P1y  P1w |
    | P2x  P2y  P2w |
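The dataflow of the culling circuit described above, with its layer of multipliers, three subtraction nodes and final summation node, can be sketched as follows. This is a hypothetical model assuming the determinant is expanded by cofactors along the first row; the actual gate-level wiring in the patent may differ:

```python
# Hedged dataflow sketch of the back face culling circuit: multipliers,
# then subtraction nodes, then a summation node. Inputs are projection
# coordinates (x, y, z, w); only x, y and w enter the determinant.

def culling_circuit(p0, p1, p2):
    # Layer 1: multipliers form the six subproducts of the 2x2 minors.
    m = [p1[1] * p2[3], p2[1] * p1[3],   # P1y*P2w, P2y*P1w
         p1[0] * p2[3], p2[0] * p1[3],   # P1x*P2w, P2x*P1w
         p1[0] * p2[1], p2[0] * p1[1]]   # P1x*P2y, P2x*P1y
    # Layer 2: three subtraction nodes compute the 2x2 minors.
    d0 = m[0] - m[1]
    d1 = m[2] - m[3]
    d2 = m[4] - m[5]
    # Layer 3: summation node combines the minors with the first-row
    # coefficients, yielding the determinant whose sign drives culling.
    return p0[0] * d0 - p0[1] * d1 + p0[3] * d2
```

With the example triangle used earlier, the circuit outputs the same value as a direct 3x3 determinant expansion.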
In three dimensions (3D), the corresponding test is the sign of a 3x3 determinant built from the two edge vectors and the vector from the first vertex V0 to the eye point E:

det | V1x - V0x  V1y - V0y  V1z - V0z |
    | V2x - V0x  V2y - V0y  V2z - V0z |
    | Ex - V0x   Ey - V0y   Ez - V0z  |
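The shared-circuit idea can be sketched in Python as follows. The 3D form (two edge vectors plus the eye-to-vertex vector) and the counter-clockwise front-face sign convention are assumptions made for illustration, not quoted from the original text:

```python
# Hedged sketch: one 3x3 determinant datapath serving both culling modes.

def det3(r0, r1, r2):
    # Shared 3x3 determinant: the "circuit" reused by both culling modes.
    return (r0[0] * (r1[1] * r2[2] - r1[2] * r2[1])
          - r0[1] * (r1[0] * r2[2] - r1[2] * r2[0])
          + r0[2] * (r1[0] * r2[1] - r1[1] * r2[0]))

def cull_4d(p0, p1, p2):
    # 4D mode: rows are (Px, Py, Pw) of the three vertexes.
    # Determinant <= 0 is treated as back-facing (CCW front, assumed).
    return det3((p0[0], p0[1], p0[3]),
                (p1[0], p1[1], p1[3]),
                (p2[0], p2[1], p2[3])) <= 0.0

def cull_3d(v0, v1, v2, eye):
    # 3D mode: rows are the two edge vectors and the eye-to-vertex vector.
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    return det3(sub(v1, v0), sub(v2, v0), sub(eye, v0)) <= 0.0
```

A counter-clockwise triangle facing the eye is kept by the 3D test, while the same triangle with reversed winding is culled; the 4D test behaves consistently on projection coordinates with positive w.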
This possibility of exploiting the same circuit for 3D and 4D back face culling is particularly interesting in connection with the OpenGL and NokiaGL software languages for computer graphics, since the geometry stage can then support both languages with the same culling hardware.

In OpenGL or NokiaGL, only two types of projection are admitted directly with a specific command: orthogonal and perspective projection. If another projection is required, a corresponding projection matrix must be loaded by means of a load matrix command. This matrix is generic, and the 3D position of the eye viewpoint is unknown, whereby 3D back face culling cannot be applied. In the geometry stage described herein:

if the user adopts a specific command to effect orthogonal or perspective projection, then the pipeline applies the 3D back face culling solution; in that case culling is operated upstream of the projection transform module, thus being more efficient than 4D culling;

if the user loads his or her own projection matrix, then the pipeline applies the 4D back face culling solution.

The arrangement disclosed herein thus leads to significant advantages over previously known solutions. Specifically, the arrangement described significantly reduces the overall mathematical operation count and the number of primitives, i.e., triangles, to be processed by the 3D pipeline and in the geometry stage in particular: in fact, the number of primitives at the first stages of the pipeline is approximately halved, so that the workload and the power consumption in the subsequent pipeline stages are correspondingly reduced.

A further advantage is provided by the possibility of a hardware implementation adapted for both 3D and 4D back face culling. A geometry stage adapted for use with different graphics languages can thus be implemented with reduced hardware requirements. Finally, the arrangement described gives rise to a back face culling module able to deal in a simplified manner with four dimensional coordinates.
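The mode selection described above can be sketched as follows. The command names ("ortho", "frustum", "load_matrix") are illustrative placeholders, not the actual API identifiers from the text:

```python
# Hedged sketch of the culling-mode selection: known projection commands
# expose the eye position and allow the cheaper 3D test; a user-loaded
# generic matrix hides the eye position, forcing the 4D test.

def choose_culling_mode(projection_source):
    if projection_source in ("ortho", "frustum"):
        # Orthogonal/perspective commands: eye position known, so culling
        # can run upstream of the projection transform (3D mode).
        return "3D"
    # Generic load-matrix path: eye position unknown, so culling runs on
    # normalized projection coordinates after the transform (4D mode).
    return "4D"
```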
It is therefore evident that, without prejudice to the underlying principle of the invention, the details and embodiments may vary, also significantly, with respect to what has been disclosed purely by way of example, without departing from the scope of the invention as defined by the claims that follow.

All of the above U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in the Application Data Sheet are incorporated herein by reference, in their entirety.