US7528841B2 - Image transformation apparatus, image transformation circuit and image transformation method

Info

Publication number
US7528841B2
Authority
US
United States
Prior art keywords
image
filter coefficient
read address
pixel
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
US11/184,799
Other versions
US20060022989A1 (en)
Inventor
Akihiro Takashima
Hiroshi Yamauchi
Hideyuki Shimizu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Assigned to Sony Corporation. Assignors: SHIMIZU, HIDEYUKI; TAKASHIMA, AKIHIRO; YAMAUCHI, HIROSHI
Publication of US20060022989A1
Application granted
Publication of US7528841B2
Status: Expired - Fee Related

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/04 - Texture mapping

Abstract

An image transformation apparatus is provided, which includes a modeling unit 5 that calculates the coordinates of vertices of each polygon and calculates a pre-filter coefficient corresponding to a reduction ratio at the position of a vertex of each polygon, with respect to a model to which an image is attached; a texture address unit 6 that converts the coordinates of vertices of each polygon calculated in the modeling unit 5 into the coordinates of each pixel and sets a read address for attaching an image to the model using the coordinates of each pixel; a filter coefficient unit 7 that converts a pre-filter coefficient calculated in the modeling unit 5 into a pre-filter coefficient at the position of each pixel; an H-direction pre-filter 9, an HV scan converter 10 and a V-direction pre-filter 11 that perform filtering on input image data with a pre-filter coefficient obtained through conversion in the filter coefficient unit 7; a texture memory 13 to which image data filtered in the H-direction pre-filter 9, HV scan converter 10 and V-direction pre-filter 11 are written; and a texture memory controller 12 which reads out image data from the texture memory 13 in accordance with a read address set in the texture address unit 6.

Description

CROSS REFERENCES TO RELATED APPLICATIONS
The present invention contains subject matter related to Japanese Patent Application JP 2004-224209 filed in the Japanese Patent Office on Jul. 30, 2004, the entire contents of which are incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to an apparatus, a circuit and a method wherein an image is transformed by means of texture mapping.
2. Description of the Related Art
In the field of computer graphics, as a technique of transforming an image in a virtual three-dimensional space, a technique called texture mapping is used in which a model representing an object's shape is constructed from a combination of triangles called polygons, and an image is then attached to the model. Here, when the model is positioned far from the camera viewpoint in the virtual three-dimensional space, the model is reduced in size.
On this occasion, texture mapping with less aliasing is executed by preparing in advance texture data in which the same image is given lower resolution in stages (discretely), and attaching to the model the texture data whose reduction ratio is closest to that of the model. Typically, an anti-aliasing method according to this technique is called the mipmap method, and the texture data lowered in resolution in stages are called mipmap images.
If the reduction ratio in the original state in which an image has not been reduced in size is defined as 1.0 and the reduction ratio when an image has been reduced in size is defined as 0.0 to 0.999…, in the mipmap method, image data with a finite number of reduction ratios reduced by ½ⁿ (one divided by the n-th power of two), such as 1.0, 0.5, 0.25, 0.125, 0.0625…, are prepared as texture data in a memory. After that, if the reduction ratio of a model is, for example, 0.75, the mipmap data whose reduction ratio is 1.0 and the mipmap data whose reduction ratio is 0.5 are read out from the memory, and further, by linearly interpolating these mipmap data by means of the weighting addition of one half each, texture data whose reduction ratio is 0.75 are calculated (refer to Patent Literature 1, for example).
[Patent Literature 1] Japanese Patent Publication No. 2002-83316 (Paragraph No. 0004)
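As a concrete picture of the blending just described, the following sketch (Python; the array shapes are hypothetical, and the nearest-neighbour upsampling is our simplification, not taken from the literature) mixes the 1.0 and 0.5 mipmap levels half-and-half to approximate a reduction ratio of 0.75:

    import numpy as np

    def blend_mipmaps(level_1_0, level_0_5, target=0.75):
        """Approximate texture data for a reduction ratio lying between two
        prepared mipmap levels (1.0 and 0.5) by weighted addition."""
        # 0.75 sits halfway between 1.0 and 0.5, so each level gets one half.
        w = (1.0 - target) / (1.0 - 0.5)
        # Bring the half-resolution level back to full size before mixing
        # (nearest-neighbour repeat keeps the sketch short).
        upsampled = level_0_5.repeat(2, axis=0).repeat(2, axis=1)
        return (1.0 - w) * level_1_0 + w * upsampled

    full = np.random.rand(8, 8)           # mipmap level, reduction ratio 1.0
    half = full[::2, ::2]                 # mipmap level, reduction ratio 0.5
    tex_0_75 = blend_mipmaps(full, half)  # stands in for ratio 0.75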
However, there have been such problems as the following (1) to (3) in executing anti-aliasing in a typical manner by means of the mipmap method.
(1) If the reduction ratio of a model differs from the reduction ratios of the prepared mipmap data, anti-aliasing may not be appropriately executed, because two mipmap data sets are linearly interpolated as described above. Thus, aliasing or a blur occurs in an output image.
(2) Since it is necessary to prepare texture data of the input image reduced in size by ½ⁿ, the processing time becomes longer and the scale of a circuit is enlarged.
(3) Since approximately twice the amount of the input image is required as a texture data amount, namely an input image + (half the input image) + (a quarter of the input image) + (an eighth of the input image) + …, a memory having a large capacity is required.
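The factor of approximately two follows from summing the geometric series of data amounts, 1 + 1/2 + 1/4 + 1/8 + … = 2, so the complete set of reduced copies asymptotically doubles the storage required for the original image alone.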
SUMMARY OF THE INVENTION
The present invention addresses the above-identified and other problems associated with conventional methods and apparatuses and provides an image transformation apparatus, an image transformation circuit, and an image transformation method in which, when an image is transformed by means of texture mapping, an output image having less aliasing and high image quality is obtained irrespective of the reduction ratio of a model, while the processing time is shortened, the circuit scale is kept small and the memory capacity is reduced.
To solve these problems, an image transformation apparatus according to an embodiment of the present invention includes: modeling means which calculates the coordinates of vertices of each polygon and calculates a pre-filter coefficient corresponding to a reduction ratio at the position of a vertex of each polygon, with respect to a model to which an image is attached; read address setting means which converts the coordinates of vertices of each polygon calculated by the modeling means into the coordinates of each pixel and sets a read address for attaching an image to a model using the coordinates of each pixel; pre-filter coefficient conversion means which converts a pre-filter coefficient calculated by the modeling means into a pre-filter coefficient at the position of each pixel; pre-filter processing means which performs filtering on input image data with a pre-filter coefficient obtained through conversion by the pre-filter coefficient conversion means; image storage means to which image data filtered by the pre-filter processing means are written; and readout means which reads out image data from the image storage means in accordance with a read address set by the read address setting means.
In this image transformation apparatus, with respect to a model to which an image is attached, after the coordinates of the vertices of each polygon are calculated, the coordinates of the vertices of each polygon are converted into the coordinates of each pixel, and a read address for attaching an image to a model is set using the coordinates of each pixel.
Further, with respect to this model, after a pre-filter coefficient corresponding to a reduction ratio at the position of a vertex of each polygon is calculated, this pre-filter coefficient is converted into a pre-filter coefficient at the position of each pixel.
Further, after input image data are filtered with a pre-filter coefficient at the position of each pixel and are written to the image storage means, the image data are read out from the image storage means in accordance with the set read address, and so an image is attached to a model (an image is transformed).
As described above, according to this image transformation apparatus, input image data to be used as texture data are pre-filtered with a pre-filter coefficient corresponding to a reduction ratio at the position of each pixel of a model. Thus, since optimum pre-filtering in accordance with the reduction ratio of a model is executed, an output image having less aliasing and a high image quality can be obtained, irrespective of the reduction ratio of a model.
Further, since only one texture data set needs to be prepared in accordance with the reduction ratio of a model, the processing time can be made short and the scale of a circuit can be made small.
Since there is only one texture data set provided as described above, the amount of texture data equals the amount of input image data, and so the capacity of a memory (image storage means) can be reduced.
Further, as an embodiment of this image transformation apparatus, it is preferable that the image transformation apparatus further includes address storage means to which a read address set in the read address setting means is written with video data being input to the pre-filter processing means and from which the read address is read out by one frame based on a synchronous signal input along with the video data, and that the readout means reads out image data from the image storage means in accordance with a read address read out from the address storage means.
Thus, it becomes possible to apply real-time texture mapping to input video data.
Next, an image transformation circuit according to an embodiment of the present invention is mounted on a single substrate and includes: modeling means which calculates the coordinates of vertices of each polygon and calculates a pre-filter coefficient corresponding to a reduction ratio at the position of a vertex of each polygon, with respect to a model to which an image is attached; read address setting means which converts the coordinates of vertices of each polygon calculated by the modeling means into the coordinates of each pixel and sets a read address for attaching an image to a model using the coordinates of each pixel; pre-filter coefficient conversion means which converts a pre-filter coefficient calculated by the modeling means into a pre-filter coefficient at the position of each pixel; pre-filter processing means which performs filtering on input image data with a pre-filter coefficient obtained through conversion by the pre-filter coefficient conversion means; image storage means to which image data filtered by the pre-filter processing means are written; and readout means which reads out image data from the image storage means in accordance with a read address set by the read address setting means.
Furthermore, an image transformation method according to an embodiment of the present invention includes, with respect to a model to which an image is attached: a first step of calculating the coordinates of vertices of each polygon and calculating a pre-filter coefficient corresponding to a reduction ratio at the position of a vertex of each polygon; a second step of converting the coordinates of vertices of each polygon calculated in the first step into the coordinates of each pixel and setting a read address for attaching an image to a model using the coordinates of each pixel; a third step of converting a pre-filter coefficient calculated in the first step into a pre-filter coefficient at the position of each pixel; a fourth step of filtering input image data with a pre-filter coefficient obtained through conversion in the third step; a fifth step of writing image data that have been filtered in the fourth step to image storage means; and a sixth step of reading out image data from the image storage means in accordance with a read address set in the second step.
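Read end to end, the six steps amount to the following deliberately simplified, runnable sketch (Python). It is our illustration, not the claimed implementation: the model degenerates to a single uniform reduction ratio, so steps 1 to 3 collapse to a constant coefficient map and a regular grid of read addresses, and a box filter stands in for the unspecified pre-filter.

    import numpy as np

    def box_filter(img, radius):
        """Separable box low-pass; radius 0 passes the image through."""
        if radius == 0:
            return img
        k = np.ones(2 * radius + 1) / (2 * radius + 1)
        out = np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 1, img)
        return np.apply_along_axis(lambda c: np.convolve(c, k, "same"), 0, out)

    def transform(image, ratio):
        # Steps 1-3: with a uniform reduction ratio the per-pixel pre-filter
        # coefficient is constant; a crude heuristic maps ratio to a radius.
        radius = max(int(round(1.0 / ratio)) // 2, 0)
        # Steps 4-5: pre-filter the input and treat the result as the
        # contents of the texture memory.
        texture = box_filter(image.astype(float), radius)
        # Steps 2 and 6: read addresses form a regular grid scaled by the
        # ratio; reading through them attaches the image to the "model".
        h, w = image.shape
        v = (np.arange(int(h * ratio)) / ratio).astype(int)
        u = (np.arange(int(w * ratio)) / ratio).astype(int)
        return texture[np.ix_(v, u)]

    out = transform(np.random.rand(64, 64), ratio=0.5)   # 32x32 output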
According to the above circuit and method, similarly to the above image transformation apparatus according to an embodiment of the present invention, when an image is transformed by means of texture mapping, it is possible to obtain an output image having less aliasing and high image quality irrespective of the reduction ratio of a model, reduce the processing time, keep the circuit scale small and reduce the memory capacity.
According to embodiments of the present invention, in the case where an image is transformed by means of texture mapping, an output image having less aliasing and high image quality can be obtained irrespective of the reduction ratio of a model, while the processing time is reduced, the circuit scale is kept small and the memory capacity is reduced.
Further, real-time texture mapping can be applied to input video data.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram showing an example of a configuration of an image transformation apparatus according to an embodiment of the present invention;
FIG. 2 is a view showing an example of a model to which an image is attached;
FIGS. 3A and 3B are views showing a polygon and the texture coordinates of vertices thereof;
FIG. 4 is a view showing the texture coordinates of each sub-pixel; and
FIG. 5 is a figure showing an example of generating data of a sub-pixel.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Hereinafter, embodiments of the present invention are explained in detail, using the figures. FIG. 1 is a block diagram showing an example of a configuration of an image transformation apparatus according to an embodiment of the present invention. An image transformation apparatus 1 is an apparatus of a single unit stored in a chassis, and it broadly includes an address processing block 2 and a video processing block 3.
The address processing block 2 is a block performing a reduction in size and transformation of a model, which includes a network interface 4 for executing communication via the Ethernet®, a modeling unit 5, a texture address DDA (Digital Differential Analyzer) 6, a filter coefficient DDA 7 and an address buffer 8.
The video processing block 3 is a block performing anti-aliasing in order to attach an image to a model transformed by the address processing block 2, and it includes an H-direction pre-filter 9, an HV scan converter 10, a V-direction pre-filter 11, a texture memory controller 12, a texture memory 13, an interpolation unit 14 and a synchronous separation unit 15.
This image transformation apparatus 1 is used as an effecter (apparatus which performs special effects on an image) that is part of a nonlinear editing system in a television broadcasting station, in which the address processing block 2 is connected to an editing terminal (a computer installed with editing software) 21 through the Ethernet®, and the video processing block 3 is connected to a video storage (for example, a VTR or an AV server) 22 and a monitor 23.
In the editing terminal 21, a model to which an image is attached and an image (video data) which is to be transformed are designated based on an operation by an operator. FIG. 2 shows an example of a model designated at the editing terminal 21. A side surface (shown with oblique lines) of a cube diagonally seen is designated as a model 31.
As shown in FIG. 1, data (wire frame data) showing the shape of the designated model are supplied from the editing terminal 21 to the address processing block 2 in the image transformation apparatus 1.
Further, from the editing terminal 21, a command to read out the designated video data is sent to the video storage 22. From the video storage 22, video data read out in accordance with this command are sent to the video processing block 3 in the image transformation apparatus 1.
In the address processing block 2, the modeling unit 5 divides a model into a plurality of polygons based on the wire frame data supplied from the editing terminal 21. Then, the texture coordinates of vertices of each polygon are calculated, and a pre-filter coefficient corresponding to a reduction ratio at the position of a vertex of each polygon is calculated. FIG. 3A shows the state in which the model (side surface of a cube) shown in FIG. 2 has been divided into a plurality of polygons 32, and FIG. 3B shows the texture coordinates (s1, t1, q1), (s2, t2, q2) and (s3, t3, q3) of vertices A, B and C of one polygon 32 thereof.
As shown in FIG. 1, texture coordinates data calculated in the modeling unit 5 are sent to the texture address DDA 6. As shown in FIG. 4, the texture address DDA 6 converts the texture coordinates (s1, t1, q1), (s2, t2, q2) and (s3, t3, q3) of vertices A, B and C of each polygon into the texture coordinates (s, t, q) of each sub-pixel Ps (pixel having resolution higher than that of the video data in the video storage 22) by means of linear interpolation. Then, by executing the calculation of u=s/q, v=t/q, the texture address (u, v) of each sub-pixel is set. This texture address (u, v) is written to the address buffer 8.
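The paragraph above is the heart of the address path: homogeneous texture coordinates are interpolated linearly across the polygon, and the divide by q happens per sub-pixel. A small sketch follows (Python; the vertex values and the sample spacing are hypothetical, and a real DDA would step incrementally along scan lines rather than loop over barycentric weights):

    import numpy as np

    def texture_addresses(A, B, C, n=4):
        """Linearly interpolate (s, t, q) given at vertices A, B, C, then
        derive the texture address (u, v) = (s/q, t/q) at each sample."""
        A, B, C = map(np.asarray, (A, B, C))
        for i in range(n + 1):
            for j in range(n + 1 - i):
                a, b = i / n, j / n                  # barycentric weights
                s, t, q = a * A + b * B + (1 - a - b) * C
                u, v = s / q, t / q                  # perspective divide
                print(f"w=({a:.2f},{b:.2f})  u={u:.3f}  v={v:.3f}")

    # (s, t, q) at the three vertices; q carries the perspective depth.
    texture_addresses((0.0, 0.0, 1.0), (1.0, 0.0, 0.5), (0.0, 1.0, 0.5))

Interpolating first and dividing per sub-pixel is what keeps the mapping perspective-correct; the filter coefficient DDA described next interpolates its scalar coefficient across the polygon in the same way, only without the divide.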
A pre-filter coefficient calculated in the modeling unit 5 is sent to the filter coefficient DDA 7. The filter coefficient DDA 7 converts a pre-filter coefficient corresponding to a reduction ratio at the position of a vertex of each polygon into a pre-filter coefficient corresponding to a reduction ratio at the position of each pixel (pixel having resolution equal to that of the video data in the video storage 22) by means of linear interpolation. The pre-filter coefficient obtained through conversion in the filter coefficient DDA 7 is sent to the H-direction pre-filter 9 and the V-direction pre-filter 11 in the video processing block 3.
In the video processing block 3, the H-direction pre-filter 9 performs filtering on video data sent from the video storage 22 by pixel in the horizontal direction on the screen with a pre-filter coefficient from the filter coefficient DDA 7.
Video data filtered by the H-direction pre-filter 9 are sent to the HV scan converter 10. After writing image data of one frame to an internal memory, the HV scan converter 10 reads out the data of each pixel from the memory in the vertical direction on the screen, thereby scan-converting the video data.
Video data scanned and converted by the HV scan converter 10 are sent to the V-direction pre-filter 11. The V-direction pre-filter 11 performs filtering on the video data in a vertical direction by pixel with a pre-filter coefficient from the filter coefficient DDA 7.
Video data filtered by the V-direction pre-filter 11 are written to the texture memory 13 through the texture memory controller 12.
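A compact way to picture this H-filter / scan-convert / V-filter chain is a separable filter applied once per axis with a transpose in between. In the sketch below (Python), the 3-tap kernel and the constant coefficient map are our assumptions, since the patent fixes neither the tap count nor the coefficient values:

    import numpy as np

    def directional_prefilter(img, coeff):
        """Filter along the horizontal axis with a 3-tap kernel whose side
        weight is the per-pixel coefficient (0 = pass-through)."""
        left = np.roll(img, 1, axis=1)
        right = np.roll(img, -1, axis=1)
        return coeff * left + (1.0 - 2.0 * coeff) * img + coeff * right

    frame = np.random.rand(64, 64)
    coeffs = np.full(frame.shape, 0.2)      # from the filter coefficient DDA

    x = directional_prefilter(frame, coeffs)     # H-direction pre-filter 9
    x = x.T                                      # HV scan converter 10
    x = directional_prefilter(x, coeffs.T)       # V-direction pre-filter 11
    texture = x.T                                # written to texture memory 13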
Further, in the video processing block 3, the synchronous separation unit 15 separates a vertical synchronous signal from video data sent from the video storage 22 and sends the signal to the address buffer 8 in the address processing block 2. From the address buffer 8, in synchronization with this vertical synchronous signal, a texture address (u, v) of each sub-pixel is read out by one frame and sent to the texture memory controller 12 within the video processing block 3.
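The role of the address buffer 8 can be sketched as a frame-granular queue: address sets go in as the DDA produces them, and one full set comes out per vertical synchronous signal (Python; the class and the toy addresses are hypothetical):

    from collections import deque

    class AddressBuffer:
        """One-frame-at-a-time address queue keyed to vertical sync."""
        def __init__(self):
            self._frames = deque()

        def write(self, frame_addresses):      # from the texture address DDA
            self._frames.append(frame_addresses)

        def on_vsync(self):                    # from the sync separation unit
            return self._frames.popleft() if self._frames else None

    buf = AddressBuffer()
    buf.write([(0.50, 0.50), (1.25, 0.50)])    # (u, v) pairs for one frame
    addresses = buf.on_vsync()                 # to texture memory controller 12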
The texture memory controller 12 uses the texture address (u, v) of each sub-pixel from the address buffer 8 as a read address and reads out from the texture memory 13 a plurality of (four or eight) pixel data in the vicinity of each sub-pixel.
Data read out from the texture memory 13 are sent from the texture memory controller 12 to the interpolation unit 14. The interpolation unit 14 generates data of each sub-pixel by linearly interpolating a plurality of pixel data in the vicinity of each sub-pixel. FIG. 5 shows an example of generating data of a sub-pixel. In this example, data D0 to D3 of four pixels P0 to P3 in the vicinity of a sub-pixel Ps are read out from the texture memory 13, and by linearly interpolating the data D0 to D3 with weighting addition coefficients K0 to K3 corresponding to the distances between the sub-pixel Ps and the pixels P0 to P3, data Ds of the sub-pixel Ps is generated.
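Concretely, with four neighbours the weights K0 to K3 reduce to bilinear weights, which is one choice consistent with "corresponding to the distances"; the following sketch (Python, with hypothetical texture contents) reads the 2×2 neighbourhood around a fractional address and blends it:

    import numpy as np

    def sample_subpixel(texture, u, v):
        """Blend the four texels around fractional address (u, v) with
        bilinear weights K0..K3 to produce the sub-pixel value Ds."""
        u0, v0 = int(np.floor(u)), int(np.floor(v))
        fu, fv = u - u0, v - v0
        D = texture[v0:v0 + 2, u0:u0 + 2]                    # P0..P3
        K = np.array([[(1 - fv) * (1 - fu), (1 - fv) * fu],  # K0, K1
                      [fv * (1 - fu),       fv * fu]])       # K2, K3
        return float((K * D).sum())

    tex = np.arange(16, dtype=float).reshape(4, 4)
    print(sample_subpixel(tex, u=1.25, v=2.5))               # Ds for Ps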
As shown in FIG. 1, data of each sub-pixel generated in the interpolation unit 14 is sent from the image transformation apparatus 1 to the monitor 23 and is displayed on the monitor 23.
In this image transformation apparatus 1, with respect to a model to which an image is attached, after the coordinates of vertices of each polygon are calculated by the modeling unit 5, the coordinates of the vertices of each polygon are converted into the coordinates of each pixel in the texture address DDA 6, and a texture address that is a read address for attaching an image to a model is set using the coordinates of each pixel.
Further, with respect to this model, after a pre-filter coefficient corresponding to a reduction ratio at the position of a vertex of each polygon is calculated by the modeling unit 5, this pre-filter coefficient is converted into a pre-filter coefficient at the position of each pixel in the filter coefficient DDA 7.
Then, after the input video data, filtered with a pre-filter coefficient at the position of each pixel in the H-direction pre-filter 9 and the V-direction pre-filter 11, are written to the texture memory 13, an image is attached to the model (the image is transformed) by reading the video data out of the texture memory 13 in accordance with the texture address that has been set.
As described above, according to this image transformation apparatus 1, input image data pre-filtered with a pre-filter coefficient corresponding to a reduction ratio at the position of each pixel of a model are used as texture data. Thus, since optimum pre-filtering in accordance with the reduction ratio of a model is executed, an output image having less aliasing and a high image quality can be obtained irrespective of the reduction ratio of a model, as compared with the mipmap method in which texture data with reduction ratios in stages (discretely) are prepared.
Further, since only one texture data set needs to be prepared in accordance with the reduction ratio of a model, the processing time can be made short and the scale of a circuit can be made small, in comparison with the mipmap method in which a series of texture data sets reduced in size by ½ⁿ are prepared for an input image.
Since there is only one texture data set as described above, the amount of texture data equals the amount of input image data, and so the capacity of a memory (texture memory 13) can be reduced in comparison with the mipmap method, in which the texture data amount is approximately twice the amount of an input image.
Further, according to this image transformation apparatus 1, a texture address, once written to the address buffer 8, is read out from the address buffer 8 by one frame in synchronization with a vertical synchronous signal separated from the input video data and is sent to the texture memory controller 12, so that real-time texture mapping can be applied to input video data.
Note that although video data are input to the image transformation apparatus 1 in the embodiment above, still image data and data produced by means of computer graphics may also be input to the image transformation apparatus 1. If still image data are input, real-time operation is not required, so the address buffer 8 may be omitted, allowing a texture address set in the texture address DDA 6 to be sent directly to the texture memory controller 12.
Further, in the embodiment above, the image transformation apparatus 1 as a single apparatus connected to the editing terminal 21 by means of the Ethernet® has been explained. However, as another embodiment, an image transformation circuit in which the same elements constituting this image transformation apparatus 1 are mounted on a single substrate may be produced, and the image transformation circuit may be installed in a slot of the editing terminal 21.
Furthermore, in the above embodiments, the present invention is applied to an effecter that is part of a nonlinear editing system. However, the present invention is not limited thereto and may also be applied to a computer game apparatus, for example.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. An image transformation apparatus comprising:
modeling means which calculate, with respect to a model to which an image is attached, the coordinates of vertices of each polygon and calculate a pre-filter coefficient corresponding to a reduction ratio at the position of a vertex of each polygon;
read address setting means which convert the coordinates of vertices of each polygon calculated by said modeling means into the coordinates of each pixel and which set a read address for attaching an image to said model using said coordinates of each pixel;
pre-filter coefficient conversion means which convert a pre-filter coefficient calculated by said modeling means into a pre-filter coefficient at the position of each pixel;
pre-filter processing means which perform filtering on input image data with a pre-filter coefficient obtained through conversion by said pre-filter coefficient conversion means;
image storage means to which image data filtered by said pre-filter processing means is written;
readout means which read out image data from said image storage means in accordance with a read address set by said read address setting means; and
address storage means to which a read address set in said read address setting means is written with video data being input to said pre-filter processing means, and from which the read address is read out by one frame based on a synchronous signal input along with said video data;
wherein said readout means read out image data from said image storage means in accordance with a read address read out from said address storage means,
wherein resolution of said coordinates of each pixel in said read address setting means is higher than that of said input video data, and
wherein resolution of said position of each pixel in said pre-filter coefficient conversion means is equal to that of said input video data.
2. The image transformation apparatus according to claim 1, further comprising:
interpolation means which generate, in accordance with a read address read out from said address storage means, image data of said read address using a plurality of image data in the vicinity thereof that are read out from said image storage means.
3. The image transformation apparatus according to claim 2, wherein
said plurality of image data in the vicinity thereof are either four or eight sets of image data.
4. An image transformation circuit mounted on a single substrate, comprising:
modeling means which calculate, with respect to a model to which an image is attached, the coordinates of vertices of each polygon and calculate a pre-filter coefficient corresponding to a reduction ratio at the position of a vertex of each polygon;
read address setting means which convert the coordinates of vertices of each polygon calculated by said modeling means into the coordinates of each pixel and which set a read address for attaching an image to said model using the coordinates of said each pixel;
pre-filter coefficient conversion means which convert a pre-filter coefficient calculated by said modeling means into a pre-filter coefficient at the position of each pixel;
pre-filter processing means which perform filtering on input image data with a pre-filter coefficient obtained through conversion by said pre-filter coefficient conversion means;
image storage means to which image data filtered by said pre-filter processing means is written;
readout means which read out image data from said image storage means in accordance with a read address set by said read address setting means; and
address storage means to which a read address set in said read address setting means is written with video data being input to said pre-filter processing means, and from which a read address is read out by one frame based on a synchronous signal input along with said video data,
wherein said readout means read out image data from said image storage means in accordance with a read address read out from said address storage means,
wherein resolution of said coordinates of each pixel in said read address setting means is higher than that of said input video data, and
wherein resolution of said position of each pixel in said pre-filter coefficient conversion means is equal to that of said input video data.
5. The image transformation circuit according to claim 4, further comprising:
interpolation means which generate, in accordance with a read address read out from said address storage means, image data of said read address using a plurality of image data in the vicinity thereof that are read out from said image storage means.
6. The image transformation circuit according to claim 5, wherein
said plurality of image data in the vicinity thereof are either four or eight sets of image data.
7. An image transformation method comprising:
a first step of calculating, with respect to a model to which an image is attached, the coordinates of vertices of each polygon and of calculating a pre-filter coefficient corresponding to a reduction ratio at the position of a vertex of each polygon;
a second step of converting the coordinates of vertices of each polygon calculated in said first step into the coordinates of each pixel, and of setting a read address for attaching an image to said model using the coordinates of said each pixel;
a third step of converting a pre-filter coefficient calculated in said first step into a pre-filter coefficient at the position of each pixel;
a fourth step of filtering input image data with a pre-filter coefficient obtained through conversion in said third step;
a fifth step of writing image data filtered in said fourth step to image storage means; and
a sixth step of reading out image data from said image storage means in accordance with a read address set in said second step;
wherein in said fourth step, video data is input,
wherein in said second step, a read address that has been set is written into address storage means, and the read address is read out from said address storage means by one frame based on a synchronous signal input along with said video data,
wherein in said sixth step, image data is read out from said image storage means in accordance with a read address read out from said address storage means,
wherein resolution of said coordinates of each pixel in said second step is higher than that of said input video data, and
wherein resolution of said position of each pixel in said third step is equal to that of said input video data.
8. An image transformation method according to claim 7, further comprising:
a seventh step of generating, in accordance with a read address read out from said address storage means, image data of said read address by means of interpolation using a plurality of image data in the vicinity thereof that are read out from said image storage means.
9. An image transformation method according to claim 8, wherein
said plurality of image data in the vicinity thereof are either four or eight image data sets.
10. An image transformation apparatus comprising:
a modeling unit which calculates, with respect to a model to which an image is attached, the coordinates of vertices of each polygon and calculates a pre-filter coefficient corresponding to a reduction ratio at the position of a vertex of each polygon;
a read address setting unit which converts the coordinates of vertices of each polygon calculated by said modeling unit into the coordinates of each pixel and sets a read address for attaching an image to said model using the coordinates of said each pixel;
a pre-filter coefficient conversion unit which converts a pre-filter coefficient calculated by said modeling unit into a pre-filter coefficient at the position of each pixel;
a pre-filter processing unit which performs filtering on input image data with a pre-filter coefficient obtained through conversion by said pre-filter coefficient conversion unit;
an image storage unit to which image data filtered by said pre-filter processing unit is written;
a readout unit which reads out image data from said image storage unit in accordance with a read address set by said read address setting unit; and
an address storage unit to which a read address set in said read address setting unit is written with video data being input to said pre-filter processing unit, and from which the read address is read out by one frame based on a synchronous signal input along with said video data;
wherein said readout unit reads out image data from said image storage unit in accordance with a read address read out from said address storage unit,
wherein resolution of said coordinates of each pixel in said read address setting unit is higher than that of said input video data, and
wherein resolution of said position of each pixel in said pre-filter coefficient conversion unit is equal to that of said input video data.
US11/184,799 2004-07-30 2005-07-20 Image transformation apparatus, image transformation circuit and image transformation method Expired - Fee Related US7528841B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2004-224209 2004-07-30
JP2004224209A JP4140575B2 (en) 2004-07-30 2004-07-30 Image deformation device, image deformation circuit, and image deformation method

Publications (2)

Publication Number Publication Date
US20060022989A1 (en) 2006-02-02
US7528841B2 (en) 2009-05-05

Family

ID=35731616

Family Applications (1)

Application Number Title Priority Date Filing Date
US11/184,799 Expired - Fee Related US7528841B2 (en) 2004-07-30 2005-07-20 Image transformation apparatus, image transformation circuit and image transformation method

Country Status (3)

Country Link
US (1) US7528841B2 (en)
JP (1) JP4140575B2 (en)
CN (1) CN100354896C (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI367457B (en) * 2006-07-03 2012-07-01 Nippon Telegraph & Telephone Image processing method and apparatus, image processing program, and storage medium for storing the program
US8896505B2 (en) * 2009-06-12 2014-11-25 Global Oled Technology Llc Display with pixel arrangement
WO2018129044A1 (en) * 2017-01-03 2018-07-12 Walmart Apollo, Llc Delivery reservation apparatus and method
CN111008928B (en) * 2019-11-26 2024-03-29 杭州小影创新科技股份有限公司 Method and system for realizing special effects of image raindrop dropping and waving

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5909219A (en) * 1996-06-28 1999-06-01 Cirrus Logic, Inc. Embedding a transparency enable bit as part of a resizing bit block transfer operation
US6226012B1 (en) * 1998-04-02 2001-05-01 Nvidia Corporation Method and apparatus for accelerating the rendering of graphical images
JP2002083316A (en) 2000-09-08 2002-03-22 Sony Corp Image deforming device and image deforming method
US20030117399A1 (en) * 2001-11-21 2003-06-26 Tanio Nagasaki Image processing apparatus and method, storage medium, and program

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6771835B2 (en) * 2000-06-12 2004-08-03 Samsung Electronics Co., Ltd. Two-dimensional non-linear interpolation system based on edge information and two-dimensional mixing interpolation system using the same
EP1282078A1 (en) * 2001-08-02 2003-02-05 Koninklijke Philips Electronics N.V. Video object graphic processing device
JP2005516313A (en) * 2002-02-01 2005-06-02 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Using texture filtering to remove edge aliases

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5909219A (en) * 1996-06-28 1999-06-01 Cirrus Logic, Inc. Embedding a transparency enable bit as part of a resizing bit block transfer operation
US6226012B1 (en) * 1998-04-02 2001-05-01 Nvidia Corporation Method and apparatus for accelerating the rendering of graphical images
JP2002083316A (en) 2000-09-08 2002-03-22 Sony Corp Image deforming device and image deforming method
US20030117399A1 (en) * 2001-11-21 2003-06-26 Tanio Nagasaki Image processing apparatus and method, storage medium, and program

Also Published As

Publication number Publication date
CN1728183A (en) 2006-02-01
CN100354896C (en) 2007-12-12
US20060022989A1 (en) 2006-02-02
JP2006048140A (en) 2006-02-16
JP4140575B2 (en) 2008-08-27

Similar Documents

Publication Publication Date Title
JP4462132B2 (en) Image special effects device, graphics processor, program
US7733419B1 (en) Method and apparatus for filtering video data using a programmable graphics processor
US6327000B1 (en) Efficient image scaling for scan rate conversion
CN102014249B (en) Image processing device and imaging apparatus
JP3190762B2 (en) Digital video special effects device
US20200143516A1 (en) Data processing systems
US7528841B2 (en) Image transformation apparatus, image transformation circuit and image transformation method
US8134557B2 (en) Image processing apparatus and image processing method
EP2346240B1 (en) Image processing method and device, and imaging apparatus using the image processing device
US6441818B1 (en) Image processing apparatus and method of same
JP5585885B2 (en) Image processing apparatus and image processing method
JP6750847B2 (en) Image processing apparatus, control method thereof, and program
JP3395195B2 (en) Image distortion correction method
US20070217714A1 (en) Image processing apparatus and image processing method
JP4670185B2 (en) Image generating apparatus, image processing apparatus, and methods thereof
JP2002099926A (en) Image processing device, receiving device, and their methods
JP3451644B2 (en) TV intercom
JP2000348196A (en) Three-dimensional picture generation device and environment map generation method
JP2611211B2 (en) Two-dimensional filter device
JP2011182040A (en) Method of compressing data for image processing, and compressor and imaging apparatus including the same
JP2001155673A (en) Scanning electron microscope
CN117931120A (en) Camera image visual angle adjusting method based on GPU
JPH10126686A (en) Picture special effect device
JPH07225833A (en) Method and device for processing video signal
JPH09147137A (en) Method for generating three-dimensional image

Legal Events

Date Code Title Description
AS Assignment

Owner name: SONY CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TAKASHIMA, AKIHIRO;YAMAUCHI, HIROSHI;SHIMIZU, HIDEYUKI;REEL/FRAME:016799/0969

Effective date: 20050708

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20130505