CA1286393C - Method and system for effecting a transformation of a video image - Google Patents

Method and system for effecting a transformation of a video image

Info

Publication number
CA1286393C
CA1286393C
Authority
CA
Canada
Prior art keywords
image
address
plane
data
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CA000498556A
Other languages
French (fr)
Inventor
Nobuo Sasaki
Nobuyuki Minami
Tetsuzo Kuragano
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Application granted granted Critical
Publication of CA1286393C publication Critical patent/CA1286393C/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/91Television signal processing therefor
    • H04N5/92Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation

Abstract

ABSTRACT OF THE DISCLOSURE
A method and system for effecting a transformation of a video image on a video screen, applicable to a system for producing a special visual effect on, e.g., a television screen, in which a two-dimensional address plane is defined within a memory area, an input video image is stored within the memory area, a cylinder-shaped virtual image is placed on the address plane, a part of the address plane is wound on the cylinder-shaped image, and when the cylinder-shaped image is displaced along a predetermined direction on the address plane with the radius of a circle in vertical section thereof being varied with time, the address plane can be viewed as if it were turned over. If the input address data within the memory area are read out on the basis of output address data indicating the above-described displacement of the address plane, the output video image on the video screen can be viewed as if the video image were being turned over.

Description

METHOD AND SYSTEM FOR EFFECTING A TRANSFORMATION OF A VIDEO IMAGE

BACKGROUND OF THE INVENTION
The present invention relates generally to a method and system for effecting a transformation of a video image from an original image on a TV screen, applicable, for example, to a system for producing a special visual effect on a television screen in a television broadcasting station.
Such a kind of special visual effect system has been proposed in which image signals in a standard television method, constructed so as to form a piece of two-dimensional image having a rectangular shape, are converted into digital signals. Thereafter the digital image signals are read into predetermined address locations generated within an input image memory having a memory capacity corresponding to one field. When the output image data are read out by accessing the input image memory to read out the read-in data in an order changed from the read-in order according to necessity, in order to display the output image data on the screen of a display unit, a piece of image having a special effect such that an image derived from the input image data is geometrically changed can be displayed.
In this case, a read-out address signal for the input image memory is generated by means of a read-out address transform circuit for transforming the input image address of the input image data according to necessity.
As the read-out address transform circuit, a circuit is used in which three-dimensional surface data are previously stored on the basis of a concept such that, on the basis of input image data generating a plane image, the input image data are converted into a three-dimensional surface, and a calculation to map the input image on the three-dimensional surface using the three-dimensional surface data is achieved by software calculating means.
However, there are problems in the conventional read-out address transform circuit. That is to say, not only is a large-scale memory required as a storage means for storing the three-dimensional surface data, but the transform calculation for the many picture elements constituting the displayed image also needs to be executed, so that a large-sized and complicated construction of the whole special effect system cannot be avoided.
Especially, in a case when the whole screen on which the input video image is displayed is transformed into a video screen which can be viewed as if the screen were three-dimensionally inflexed, the inflexed surface being varied with time, an oversized construction of the special visual effect system exceeding a practical capacity range cannot be avoided. Therefore, it is desirable to provide a method and system for effecting a transformation of the video image which achieves the practically sufficient special visual effect described above with a simple hardware construction in place of various conventional software methods.
SUMMARY OF THE INVENTION
With these problems in mind, it is an object of the present invention to provide a system and method for effecting a transformation of a video image on a video screen which can remarkably reduce the scale of the special effect system as compared with the above-described system and in which a plane image formed with the input image data is converted into an image signal having an effect such that an image on the screen can be viewed as if a page of a book were turned over. This is referred to as a "page turn-over effect".
This effect can be achieved by a method for effecting a transformation of a video image on a video screen, which comprises the steps of: (a) storing an input video image in a memory device; (b) defining a two-dimensional address plane in a memory area of the memory device; (c) providing a first line on the address plane to divide the address plane into first and second regions; (d) providing second and third lines on the first and second regions of the address plane in parallel to the first line; (e) calculating address data of the address plane for providing transferred address data so that the address data of the first region are symmetrically transformed with respect to the first line, and the address data between the first and second lines and between the first and third lines are non-linear compression transformed along an axis perpendicular to the first line; (f) calculating transformed address data between the first and second lines and between the first and third lines when the address data between the first and second lines and between the first and third lines are non-linear compression transformed along an axis perpendicular to the first line; and (g) reading out the input video image from the memory device and generating an output video image according to the transformed address data, whereby the output image can be viewed such as to be turned over along the first line.
This can be achieved by a system for effecting a transformation of a video image on a video screen, which comprises: (a) first means for storing input image data; (b) second means for sequentially generating a positional output image address signal; (c) third means for presetting parameters representing a locus on which an output of the video image is turned over as if a sheet of paper were folded up; (d) fourth means for sequentially generating position designation signals indicative of a displacement of the input image on a two-dimensional plane; (e) fifth means for calculating values including a positional reference point signal of the input image on the two-dimensional plane on the basis of which the input image is displaced, rotation transform matrix data based on a given angle through which the two-dimensional plane is rotated, and radius data on a virtual cylindrical image on which part of the input image is wound, the positional reference point signal, the rotation transform matrix data, and the radius data being based on preset parameters derived from the third means and position designation signals derived from the fourth means; (f) sixth means for executing transform arithmetic operations for transformable parts of an output video image, the transformable parts being defined by a first part representing a rear part of the output video image which is wound on an upper surface of the cylindrical image as viewed through the video screen, a second part representing a front part of the output video image which is outside of a projection portion of the cylindrical image, a third part representing the front part of the output video image which is wound on a lower surface of the cylindrical image as viewed through the video screen, and a fourth part representing the rear part of the output image which is outside of the wound first part so as to overlap on the second part, on the basis of the reference point signal, rotation transform matrix data, and radius data of the cylindrical image calculated by the fifth means, and for reading out the input image data the contents of which are to be the output image and which are specified by the positional output image address signal generated by the second means; and (g) seventh means for displaying the input video image whose data are stored in the first means and read out from the first means by the sixth means according to the positional output image address signal on the video screen, so that the whole video screen can be viewed as if a sheet of paper were being folded up.
This can also be achieved by a method for effecting a transformation of a video image on a video screen, comprising the steps of:
(a) defining a two-dimensional address plane within a memory area;
(b) storing an input video image within the memory area so that data on each picture element thereof is placed at the corresponding address;
(c) virtually placing a cylinder-shaped image, whose radius of a section thereof is varied, on the address plane defined in the step (a) and winding a part of the address plane on the cylinder-shaped image;
(d) displacing the cylinder-shaped image along a predetermined direction on the address plane with its radius varied with time so that the address plane is turned over along the predetermined direction;
(e) transforming parallel translation and rotation for the whole address plane, a non-linear compression with respect to the predetermined direction for front and rear parts of the address plane which are wound on a surface of the cylinder-shaped image as viewed vertically through the video screen, and a fold back of the rear part of the address plane;
(f) inverse transforming the transformed address data obtained in the step (e) so as to unfold the output video image and reading out the inverse transformed image address data, with a priority taken for the turned-over part of the output image, as input image address data; and
(g) displaying the input image on the video screen on the basis of the input image address, whereby the video image on the video screen can be viewed as if a page were turned over.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete understanding of the present invention may be obtained from the following description taken in conjunction with the attached drawings, in which like reference numerals designate corresponding elements and in which:
Figs. 1(A) through 1(D) are schematic diagrams of the image transformation procedure in the method for transforming image signals according to the present invention;
Fig. 2 is a schematic diagram of an unfolded transformed image;
Fig. 3 is a schematic diagram for explaining the compression transform processing;
Fig. 4 is a flowchart of the image signal transform processing method according to the present invention;
Fig. 5 is a circuit block diagram of the system for transforming image signals according to the present invention;
Fig. 6 is a schematic circuit block diagram of a read-out address generator 3 shown in Fig. 5; and
Figs. 7 and 8 are schematic diagrams for explaining a model of the image signal transform method according to the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
Reference will hereinafter be made to the drawings in order to facilitate an understanding of the present invention.
In a method for effecting a transformation of a video image according to the present invention, a positional relationship of each picture element on a video screen between input picture image data and output image data is modeled on the basis of a technique shown in Fig. 7 and Fig. 8.
Suppose that a virtual cylindrical image CYL is mounted on an input image IM represented by input image data so as to cross the input image IM obliquely. In addition, one end IM1 of the input image IM is folded and wound around the cylindrical image CYL. A screen in a state in which the end IM1 described above is wound around the cylindrical image CYL is viewed from an upward position in a vertical direction with respect to an unwound part of the screen IM0. A construction of the screen in this case can provide a visual effect such that an image represented by a piece of paper can be visualized as if it were gradually turned over.
In more detail, simultaneously when a center axis of the cylindrical image CYL is translated in parallel to a direction in which one page of a book is turned, a diameter of the cylindrical image CYL is increased according to an increase in the distance of the parallel translation of the cylindrical image CYL.
One sheet of image is thus turned over obliquely from one of the corners. As the turned sheet part of the image becomes large as compared with the remaining sheet part, the diameter of a part of image IM2 currently wound on the cylindrical image CYL, continued from the folded part of image IM1, is gradually varied and the position of the cylindrical image CYL is accordingly moved in a page turned-over direction denoted by an arrow MR1.
As shown in Fig. 7, suppose that, with the part of the image IM wound on the cylindrical image CYL, the contents of the whole image on the screen are observed when viewing the screen, on which the part of image is wound on the cylindrical image, from a position vertical to a plane of the part of image IM. In this case, an unfolded part of the image IG0 remains unchanged without image alteration by a compression, shift, or rotation. On the other hand, a part of the image IG1 folded back over the part of image IG0 indicates inversely projected contents of, e.g., the part of image IG0. In addition, a part of image IG21 located below the part of image IM2 (refer to Fig. 7) wound on the cylindrical image CYL has the projected contents of the image before the part of image IM0 is wound on the cylindrical image CYL, without folding back of the image, and is subjected to a non-linear compression. In addition, the upper part of image IG22 has the projected contents with the folding back of the previous image IG0 before winding on the image CYL and with a non-linear compression of the image.
When the effect of page turning over is modeled as shown in Fig. 8, the contents of the part of image IG2 wound on the cylindrical image CYL can be obtained if the contents of the image before the folding back of the original image are processed by way of a one-dimensional non-linear compression only for a direction (represented by an arrow mark MK2) orthogonal to a fold line L1 on a plane including the part of image IG0, with respect to the fold line L1, i.e., a straight line parallel to a center axis of the cylindrical image CYL.
For the transform processing when each picture element constituting the image on the screen is mapped on the wound part of image IG2, the non-linear compression transform may be executed only for one axial direction, i.e., the direction MK2 orthogonal to the fold line L1.
The one-axis-direction non-linear compression transform processing may be executed over the confines from the fold line L1 through the first and second non-linear compression parts of images IG21 and IG22 to a fold boundary line L2 representing a boundary between the original part of image IG0 and the folded part of the image IG1. The fold boundary line L2 corresponds to a position of the wound part of image IM2 in Fig. 7 which has been separated from the cylindrical image CYL, and comprises upper and lower fold boundary lines L21 and L22.
In order to effect a transformation of the folded part of image into an image projected on a plane including the original part of image IG0 in Fig. 8, the position of each picture element on the original screen may sequentially be transformed in accordance with a procedure of transformation processing shown in Figs. 1(A) through 1(D).
In a first transformation step, the fold line L1 is set on an original part of image OIG on an x-y plane in an orthogonal coordinate system.
In a second transformation step, the original part of image OIG is translated in parallel to an x1-axis direction by a distance +a so that the set fold line L1 is aligned with a y1 axis, and is rotated through +θ in a counterclockwise direction, as shown in Figs. 1(A) and 1(B).
Thereafter, a part of image OIGN which belongs to a negative area in the x1-axis direction (-x1, y1), (-x1, -y1) is folded up along the y1 axis (hence, the fold line L1) to overlap the part of image OIGN over the remaining part of image OIGP present in a positive area of the x1-axis direction (x1, y1), (x1, -y1). Consequently, although the unfolded part of image OIGP is maintained as the original image OIG without being subjected to the transformation processing, the part of image OIGN is transformed to take a reversed form of the original image OIG (denoted by oblique lines). The whole image subjected to such a folding transformation processing is represented on the x1-y1 plane.
Next, in a third transformation step, a fold boundary line L2 is set which is parallel to the fold line L1 on the part of image OIGN and the part of image OIGP mutually overlapping, and then a part of area ER (refer to Fig. 1(C)) between the parts of images OIGN and OIGP is non-linearly compressed. The non-linear compression is carried out in such a way that the part of the image wound on the cylindrical image CYL, obtained through a perspective view from an upward direction as described above with reference to Fig. 8, is produced on the x2-y2 plane. This can be achieved by obtaining the position of each picture element on the cylindrical image calculated as a result of mapping the parts of plane images OIGN and OIGP present in the area ER on a surface of the cylindrical image extending toward a direction along the y2 axis.
Consequently, the upper section of the part of image OIGN which belongs to the area ER is transformed into a part of image OIG3 representing the cylindrical surface through the non-linear compression processing with respect to the one axis of the x2-axis direction. The plane lower part of image OIGP which belongs to the area ER is transformed into the part of image OIG4 (refer to Fig. 1(C)) representing the cylindrical surface through the non-linear compression processing with respect to the x2 axis. It should be noted that the parts OIG1 and OIG2 other than the area ER, which belong to the parts of images OIGN and OIGP shown in Fig. 1(C), are not subjected to the non-linear compression transformation.
Next, in a fourth transformation step, the whole transformed image obtained as shown in Fig. 1(C) is rotated by θ in the clockwise direction and translated in parallel (shifted) in the x2 direction by -a, as appreciated from Fig. 1(D).
The above-described parallel translation and rotation transformations mean such steps as to return the position of the whole image, moved by the parallel translation and rotation transformation executed as shown in Fig. 1(A), to the original image position.
In this way, all transformation operations are ended, and the entire image on the screen PIG after the transformation operation, represented on the X-Y plane, provides the same visual effect as the perspective view from the upward direction orthogonal to a plane including the original screen OIG when a part of the left upward corner QL in the original screen OIG (Fig. 1(A)) is folded up in a direction orthogonal to the fold line L1.
If the transformed image PIG shown in Fig. 1(D) is unfolded on a plane as shown in Fig. 2, the part of transformed image OIG4, which has been subjected to the non-linear compression, appears on the screen from the fold line L1 to the lower fold boundary line L22; the part of transformed image OIG3 is also present between the fold line L1 and an upper fold boundary line L21 drawn along a position which is axially symmetric to the lower fold boundary line L22 with respect to the fold line L1; and the upper part of image OIG1, which has not been subjected to the non-linear compression transformation, is present outside of the upper fold boundary line L21.
On the other hand, if the partially transformed image PIG shown in Fig. 2 is compared with the original image OIG shown in Fig. 1(A), the part of transformed image OIG2 which has not been subjected to the non-linear compression transformation is subjected to such a transformation that a corresponding part of the original image OIG on the x-y plane is translated in parallel by the distance +a along the x axis (Fig. 1(A)), is then rotated by the angle +θ in the counterclockwise direction (Fig. 1(B)), and is, in turn, rotated in the clockwise direction and translated in parallel to the x-axis direction by -a (Fig. 1(D)). However, during such a transformation procedure, the position of each picture element in the part of image OIG2 is finally returned to the original position of the corresponding picture element (pixel) in the original image OIG. Consequently, the part of transformed image OIG2 is directly derived from the corresponding part of the original image OIG without any alteration. Thus, a position (X, Y) of each picture element on the X-Y plane in an area of the transformed part of image OIG2 can be expressed in the following equation by transforming a position (x, y) in the corresponding part of image present on the x-y plane.

\[ \begin{bmatrix} X \\ Y \end{bmatrix} = \begin{bmatrix} x \\ y \end{bmatrix} \qquad \cdots (1) \]

Next, the part of transformed image OIG4 is obtained by parallel translating the corresponding part of the original image OIG by +a (Fig. 1(A)), rotating that part through the angle +θ in the counterclockwise direction (Fig. 1(B)), transforming that part in the non-linear compression (Fig. 1(C)), and rotating that part through the angle θ in the clockwise direction and translating that part in parallel by the distance -a (Fig. 1(D)). In this case, if a position of each picture element (x, y) present on the part of the original image OIG corresponding to the part of transformed image OIG4 is expressed relative to a reference position (x0, y0), a position (X, Y) on the transformed image PIG of each picture element can be expressed in the following equation.

\[ \begin{bmatrix} X \\ Y \end{bmatrix} = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} + R(-\theta) \begin{bmatrix} F & 0 \\ 0 & 1 \end{bmatrix} R(\theta) \begin{bmatrix} x - x_0 \\ y - y_0 \end{bmatrix} \qquad \cdots (2) \]
In the above equation (2), the first term of the right side represents an amount of the parallel translation of the part of image with respect to the screen after processing of the non-linear compression transformation, and R(θ) denotes a rotation matrix by which the image on the screen is rotated through +θ in the counterclockwise direction. The rotation matrix R(θ) may be expressed as

\[ R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \qquad \cdots (3) \]

In addition, R(-θ) denotes that the coordinate position after the processing of the non-linear compression transformation is rotated through -θ in the counterclockwise direction, and may be expressed as follows:

\[ R(-\theta) = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \qquad \cdots (4) \]

Furthermore, the second matrix in the equation (2) is the non-linear compression transform matrix and is denoted by T_F:

\[ T_F = \begin{bmatrix} F & 0 \\ 0 & 1 \end{bmatrix} \qquad \cdots (5) \]

In the equation (5), the operator F denotes the execution of the non-linear compression transformation, i.e., an operator for obtaining a value of the X ordinate after transformation, as expressed below:

\[ F \cdot x = f(x) = D - r \sin\!\left(\frac{D - x}{r}\right) \qquad \cdots (6) \]

In the equation (6), r denotes a radius of the cylindrical image CYL (refer to Figs. 1(A) through 1(D)) used for the non-linear compression transformation, D denotes the width of the region subjected to the compression, and r is related to D by the following equation:

\[ r = \frac{2}{\pi} D \qquad \cdots (7) \]

Furthermore, the fourth matrix of the right-side second term of the above equation (2), i.e.,

\[ \begin{bmatrix} x - x_0 \\ y - y_0 \end{bmatrix} \qquad \cdots (8) \]

indicates that the position of each picture element (x, y) before the transformation processing is translated in parallel by the distance corresponding to the coordinates (x0, y0) of the reference position. Consequently, the reference position (x0, y0) is placed on a position which coincides with an origin of the x1-y1 plane (Fig. 1(B)) after transformation.
Next, the part of transformed image OIG3 is derived from the following procedure: the corresponding part of the original image OIG is translated in parallel by +a (refer to Fig. 1(A)), rotated through the angle +θ in the counterclockwise direction and folded back (refer to Fig. 1(B)), transformed through the non-linear compression (refer to Fig. 1(C)), and finally rotated through the angle θ in the clockwise direction together with the parallel translation by -a (refer to Fig. 1(D)).
In this case, the part of transformed image OIG3 is derived by the similar transformation procedure as the other part of transformed image OIG4 except for the above-described fold back transformation procedure. The part of transformed image OIG3 can be expressed in the following equation.

\[ \begin{bmatrix} X \\ Y \end{bmatrix} = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} + R(-\theta) \begin{bmatrix} -F & 0 \\ 0 & 1 \end{bmatrix} R(\theta) \begin{bmatrix} x - x_0 \\ y - y_0 \end{bmatrix} \qquad \cdots (9) \]

It should be noted that a difference between the equations (2) and (9) lies in the term -F which is included in the second matrix of the second term of the equation (9). The minus sign of the term F represents the original image being turned over.
Next, the part of transformed image OIG1 is derived from the following procedure: after parallel translation by +a, the original image thereof is subjected to the rotation transformation through the angle +θ in the counterclockwise direction and is folded back (refer to Fig. 1(B)), and thereafter undergoes the rotation transformation through θ in the clockwise direction and parallel translation by -a. Consequently, each picture element in the part of transformed image OIG1 is transformed into a position with respect to the original image OIG which can be expressed in the following equation.

\[ \begin{bmatrix} X \\ Y \end{bmatrix} = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} + R(-\theta) \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} R(\theta) \begin{bmatrix} x - x_0 \\ y - y_0 \end{bmatrix} \qquad \cdots (11) \]

A difference of the above equation (11) from the equation (9) lies in the use of a coefficient -1 in place of the operator -F in the second matrix of the right-side second term. This represents that in the case of equation (11) the image before being subjected to transformation is turned over through the fold back transformation processing without the non-linear compression transformation.
In this way, the parts of transformed image OIG2, OIG4, OIG3, and OIG1 constituting the transformed picture PIG can be obtained by transforming the image on the original part OIG into such positions as to satisfy the transformation equations represented by equations (2), (9), and (11).
The equation (1) represents that the part of transformed image OIG2 is returned to the same position as the original part of image OIG as the result of the series of transformation steps. In this case, the following equation can be substituted for the equation (1) if the intermediate series of transformation processings are included.

\[ \begin{bmatrix} X \\ Y \end{bmatrix} = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} + R(-\theta) \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} R(\theta) \begin{bmatrix} x - x_0 \\ y - y_0 \end{bmatrix} \qquad \cdots (12) \]
The transformation equations applied to all of the parts of image OIG2, OIG4, OIG3, and OIG1 can be represented in the following general formula.

\[ \begin{bmatrix} X - x_0 \\ Y - y_0 \end{bmatrix} = R(-\theta)\, T_F\, R(\theta) \begin{bmatrix} x - x_0 \\ y - y_0 \end{bmatrix} \qquad \cdots (13) \]

In the above equation (13), T_F denotes a matrix in which the operator F, or a numerical value substituted for the operator F, is included for each arithmetic operation.
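As an illustration only (not part of the original specification), the general formula (13) can be written out as the following minimal Python sketch; the function names, the NumPy dependency, and the reading of the minus signs in equations (9) and (11) as a fold back applied to the rear parts before the one-axis compression are assumptions made here for clarity.

import numpy as np

def R(theta):
    # Rotation matrix of equation (3); R(-theta) corresponds to equation (4)
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def F(x1, D, r):
    # One-axis non-linear compression operator of equation (6)
    return D - r * np.sin((D - x1) / r)

def forward_map(x, y, part, x0, y0, theta, r):
    """Forward transform of one picture element (x, y) into an output
    position (X, Y) following the general formula (13).
    part is one of 'OIG1', 'OIG2', 'OIG3', 'OIG4'."""
    D = np.pi * r / 2.0                       # width of the wound region, equation (15)
    # shift the reference point to the origin and rotate by +theta (x1-y1 plane)
    x1, y1 = R(theta) @ np.array([x - x0, y - y0])
    # apply the compression matrix T_F to the x component only
    if part == 'OIG2':                        # front, flat (equation (12))
        x2 = x1
    elif part == 'OIG1':                      # rear, flat, folded back (equation (11))
        x2 = -x1
    elif part == 'OIG4':                      # front, wound on the cylinder (equation (2))
        x2 = F(x1, D, r)
    else:                                     # 'OIG3': rear, wound and folded back (equation (9))
        x2 = F(-x1, D, r)
    y2 = y1
    # rotate back by -theta and restore the reference point
    X, Y = R(-theta) @ np.array([x2, y2]) + np.array([x0, y0])
    return X, Y

This is only a readability aid; the patent itself computes the inverse of this mapping in hardware, as described below.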
The non-linear compression transformation equation represented by the equation (6) can be obtained by the utilization of the relationship shown in Fig. 3.
In detail, in the case when the double-folded transformed image on the x1-y1 plane shown in Fig. 1(B) is subjected to the non-linear compression as shown in Fig. 1(C), the continuous parts OIG4 and OIG3 of the image following the part of the transformed image OIG2 are folded back so as to be wound about the cylindrical image having a radius r.
In this case, suppose that the part of transformed image OIG4 present between the fold line L1 on the x1 axis and the folded boundary line L22 (having a width of D) is wound about an angular range of π/2 (90°) at a quarter lower part of the cylinder surface, and that the position of a point x1 on the part of transformed image OIG4 is moved to a position on the cylinder surface at an angle φ [rad] with respect to the folded boundary line L22. The position of the point on the cylinder surface can then be expressed as follows with respect to an ordinate on the x2 axis.

\[ x_2 = D - r \sin\varphi \qquad \cdots (14) \]

In the above equation (14), the width D can be expressed as follows, since the part of transformed image OIG4 having the width D is wound on the angular range of π/2 at a quarter lower part of an outer surface of the cylindrical image CYL.

\[ D = \frac{\pi}{2}\, r \qquad \cdots (15) \]

In addition, the following equation is established from the relationship between the angle φ for the transformed point of the ordinate x1 and the wound angle π/2 with respect to the center of the cylindrical image CYL.

\[ \frac{\varphi}{\pi/2} = \frac{D - x_1}{D} \qquad \cdots (16) \]

If the equations (15) and (16) are substituted into the above equation (14), the following relationship is established.

\[ x_2 = D - r \sin\!\left(\frac{\pi}{2}\cdot\frac{D - x_1}{D}\right) = D - r \sin\!\left(\frac{D - x_1}{r}\right) \qquad \cdots (17) \]

Although the transformation from the x1-y1 plane to the x2-y2 plane is described with reference to Fig. 3, the general formula can be expressed as the above-mentioned equation (6).
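As a quick sanity check of equations (14) through (17), the following short Python fragment (an illustration added here, not part of the original specification) evaluates the compression f and its inverse, which is used later in equations (25) and (26).

import math

def f(x1, r):
    """Non-linear compression of equation (17): an arc ordinate x1 in [0, D]
    is projected onto x2 in [D - r, D], with D = (pi/2) * r."""
    D = math.pi * r / 2.0
    return D - r * math.sin((D - x1) / r)

def f_inv(x2, r):
    """Inverse compression used in the read-out address calculation."""
    D = math.pi * r / 2.0
    return D - r * math.asin((D - x2) / r)

r = 1.0
D = math.pi * r / 2.0
print(f(D, r))              # the fold boundary line L22 stays at x2 = D
print(f(0.0, r))            # the fold line L1 projects to x2 = D - r
print(f_inv(f(0.3, r), r))  # round trip recovers the original ordinate 0.3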
In order to display the transformed image obtained in the fourth transformation step (Fig. 1(D)), in accordance with the transformation procedure of Figs. 1(A) through 1(D), as the output image on the screen, the data on the original image OIG (refer to Fig. 1(A)) are sequentially written into an input image memory together with predetermined address data (i.e., input image address data). If an output image address is inversely transformed into the input image address in order to sequentially access the input image address corresponding to the output image address required for the display of the output image after transformation of the input image, the transformed output image (refer to Fig. 1(D)) can be read out from the input image memory. To obtain the corresponding input image address from the output image address in this way, the image position data on which the transformation processing has been executed in the order beginning from Fig. 1(A) and ending at Fig. 1(D) are inversely transformed, in the procedure starting from Fig. 1(D) and ending at Fig. 1(A), on the basis of the output image address.
The above-described inverse transformation can be derived by solving the general formula expressed in the equation (13) with respect to the term expressed below.

\[ \begin{bmatrix} x - x_0 \\ y - y_0 \end{bmatrix} \qquad \cdots (18) \]

Then the following equation is established.

\[ \begin{bmatrix} x - x_0 \\ y - y_0 \end{bmatrix} = R(-\theta)\,(T_F)^{-1}\,R(\theta) \begin{bmatrix} X - x_0 \\ Y - y_0 \end{bmatrix} \qquad \cdots (19) \]

Each value of picture element expressed in the above equation (19) may be obtained sequentially from the coordinates (X, Y) of the transformed image PIG as the coordinate (x, y) of the original image OIG.
The above-described inverse transformation is executed in accordance with a processing sequence shown in Fig. 4.
In a step SP1 of Fig. 4, a position (X, Y) of transformed image PIG (hence, output image address) is inputted.
In the next steps SP2 and SP3, the parallel translation (shift) transformation and rotation transformation are executed on the basis of the following equation (20).

\[ \begin{bmatrix} x_2 \\ y_2 \end{bmatrix} = R(\theta) \begin{bmatrix} X - x_0 \\ Y - y_0 \end{bmatrix} \qquad \cdots (20) \]

The coordinate position (x2, y2) on the x2-y2 plane shown in Fig. 1(C) is thus obtained.
In an image on the x2-y2 plane, the above-described fold line L1 is represented by the following equation.

\[ x_2 = D - r \qquad \cdots (21) \]

On the other hand, the above-described fold boundary line L2 (hence, corresponding to a center axial line of the cylindrical image about which the part of transformed image is wound) is expressed by the following equation.

\[ x_2 = D \qquad \cdots (22) \]

In addition, in the transformed image on the x2-y2 plane, no input image is present in the area outside the fold line L1, and an inverse transform of the non-linear compression is necessary for the area between the fold line L1 and the fold boundary line L2.
Next, in a step SP4 of Fig. 4, the inverse transform of the non-linear compression transformed image on the x2-y2 plane is executed so as to form the transformed image on the x1-y1 plane as described above with reference to Fig. 1(B).
Thereafter, in a step SP5, a fold back transform processing is executed so as to unfold the folded image.
Since, in the transformation processing in the steps SP4 and SP5, there is no part of image to be transformed in the following range on the x2-y2 plane, no transformation is executed there.

\[ x_2 < D - r \qquad \cdots (23) \]

On the other hand, since in the area expressed below on the x2-y2 plane the two parts of image OIG3 and OIG4 are mutually overlapped, the inverse transform needs to be executed for each part of image.

\[ D - r < x_2 < D \qquad \cdots (24) \]
For a point (x2, y2) of the part of image OIG4 in which the image is not turned over, a point (x14, y14) obtained on the x1-y1 plane through the transformation is expressed as follows.

\[ \begin{bmatrix} x_{14} \\ y_{14} \end{bmatrix} = \begin{bmatrix} F^{-1} & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \end{bmatrix} \qquad \cdots (25) \]

For a point (x2, y2) within the part OIG3 in which the image is turned over, a point (x13, y13) obtained on the x1-y1 plane through the transformation is expressed as follows.

\[ \begin{bmatrix} x_{13} \\ y_{13} \end{bmatrix} = \begin{bmatrix} -F^{-1} & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \end{bmatrix} \qquad \cdots (26) \]

In the equations (25) and (26), F^{-1} denotes an operator for executing the inverse transform of the non-linear compression represented by F.
In addition, for a point (x2, y2) within the part OIG2 in which the contents of the image are not turned over, a point (x12, y12) obtained on the x1-y1 plane through the transformation is expressed as follows.

\[ \begin{bmatrix} x_{12} \\ y_{12} \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \end{bmatrix} \qquad \cdots (28) \]

The above-described equation (28) indicates that neither the inverse non-linear compression transform nor the fold back transform is executed, as appreciated from the use of the coefficient 1 in place of the operator F^{-1}.
On the other hand, for a point (x2, y2) of the part OIG1 in which the contents of the image are turned over, a point (x11, y11) obtained on the x1-y1 plane through the transformation is expressed as follows.

\[ \begin{bmatrix} x_{11} \\ y_{11} \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \end{bmatrix} \qquad \cdots (29) \]

The above-described equation (29) indicates that the contents of the image are folded back, as appreciated from the use of the coefficient -1 in place of the operator F^{-1}.
Whereas the fold-back transform outputs for the parts of image OIG1 and OIG3 are obtained in the step SP5 and are subjected to the rotation and parallel translation transforms in the next steps SP6 and SP7, the other parts of image OIG2 and OIG4 are directly subjected to the rotation and parallel translation transformations without the fold-back transform in the next steps SP8 and SP9, respectively. Such series of transforms are executed respectively for the points (x11, y11), (x12, y12), (x13, y13), and (x14, y14) on the x1-y1 plane related to the parts of transformed image OIG1, OIG2, OIG3, and OIG4. Consequently, the points (x01, y01), (x02, y02), (x03, y03), and (x04, y04) on the x-y plane (refer to Fig. 1(A)) are derived from the following four equations, respectively.

\[ \begin{bmatrix} x_{01} - x_0 \\ y_{01} - y_0 \end{bmatrix} = R(-\theta) \begin{bmatrix} x_{11} \\ y_{11} \end{bmatrix} \qquad \cdots (30) \]

\[ \begin{bmatrix} x_{02} - x_0 \\ y_{02} - y_0 \end{bmatrix} = R(-\theta) \begin{bmatrix} x_{12} \\ y_{12} \end{bmatrix} \qquad \cdots (31) \]

\[ \begin{bmatrix} x_{03} - x_0 \\ y_{03} - y_0 \end{bmatrix} = R(-\theta) \begin{bmatrix} x_{13} \\ y_{13} \end{bmatrix} \qquad \cdots (32) \]

\[ \begin{bmatrix} x_{04} - x_0 \\ y_{04} - y_0 \end{bmatrix} = R(-\theta) \begin{bmatrix} x_{14} \\ y_{14} \end{bmatrix} \qquad \cdots (33) \]

In this way, whenever the output image address allocated to each picture element included in the parts of transformed image OIG1, OIG2, OIG3, and OIG4 is specified by a sequential specification of an output address allocated to each picture element of the transformed image PIG described above with reference to Fig. 1(D), the input image address allocated to a picture element located at a position expressed by the corresponding one of the above-described equations (30) through (33) can be fetched from the input image memory as the read-out address.
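The inverse processing of steps SP1 through SP9 (equations (20) and (25) through (33)) can be summarized by the following illustrative Python sketch; the helper names, the region tests, and the NumPy dependency are assumptions added for clarity, not part of the specification.

import numpy as np

def inverse_map(X, Y, x0, y0, theta, r):
    """For one output picture element (X, Y), return the candidate input
    positions (x, y) for the parts OIG1..OIG4, following equations (20),
    (25), (26), (28), (29) and (30)-(33)."""
    D = np.pi * r / 2.0
    R = lambda t: np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    f_inv = lambda x2: D - r * np.arcsin((D - x2) / r)   # inverse of equation (17)

    # steps SP2, SP3: shift and rotation (equation (20))
    x2, y2 = R(theta) @ np.array([X - x0, Y - y0])

    candidates = {}
    if x2 >= D:                        # flat region: OIG1 (rear) and OIG2 (front)
        x11, x12 = -x2, x2             # equations (29) and (28)
        candidates['OIG1'] = R(-theta) @ np.array([x11, y2]) + np.array([x0, y0])
        candidates['OIG2'] = R(-theta) @ np.array([x12, y2]) + np.array([x0, y0])
    elif x2 >= D - r:                  # wound region: OIG3 (rear) and OIG4 (front)
        x13, x14 = -f_inv(x2), f_inv(x2)   # equations (26) and (25)
        candidates['OIG3'] = R(-theta) @ np.array([x13, y2]) + np.array([x0, y0])
        candidates['OIG4'] = R(-theta) @ np.array([x14, y2]) + np.array([x0, y0])
    # otherwise (x2 < D - r) no input image is present, equation (23)
    return candidates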
Consequently, the image data constituting each part of the transformed image can be read out of the input image memory. It should be noted that it becomes practically necessary to select, with a higher priority, the image data on a part of transformed image which is located at an upper side of the mutually opposing parts of the transformed image, in a case when a plane image having a page turned-over effect as shown in Fig. 1(D) is produced. Therefore, a control of the priority for each part of image is carried out as shown in step SP10 of Fig. 4.
The order of priority for the transformed image exhibiting the page turn-over effect in Fig. 1(D) is set in such a way that the part of image OIG1 has a higher priority than the part of image OIG2, and the part of image OIG3 has a higher priority than the part of image OIG4.
Thus, in the same way as when a page of a book is turned over, the lower part of the front page hidden by the folded-up part of the rear page is not displayed, so that the input image address (x, y) which is capable of forming an image having a more practical page turn-over effect can be produced in the final step SP11.
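This priority rule of step SP10 can be illustrated by the following hedged Python fragment (the candidates dictionary from the sketch above and the in_image predicate are assumed helpers, not part of the patent).

def select_read_address(candidates, in_image):
    """Choose one input address per output picture element. The rear
    (turned-over) parts OIG1 and OIG3 take priority over OIG2 and OIG4;
    in_image(x, y) tests whether the candidate falls inside the stored
    input picture, since the folded corner covers only part of the screen."""
    for part in ('OIG1', 'OIG3', 'OIG2', 'OIG4'):
        if part in candidates and in_image(*candidates[part]):
            return part, candidates[part]
    return None, None   # nothing to display at this output address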
A system for transforming image signals which achieves the above-described series of transformation processing is shown schematically in Fig. 5.
As shown in Fig. 5, the input image data IND are sequentially read into the input image memory 2 via a low pass filter/interpolation circuit 1. The read image data are, in turn, read out by means of a read-out address signal RAD produced in a read-out address generator 3 and transmitted as output image data OUD via an interpolation/low pass filter circuit 4.
The read-out address generator 3 receives an output image address signal OADD generated by an output image address generator 5.
Furthermore, the read-out address generator 3 receives a reference point signal (h0, v0) obtained through an arithmetic operation, rotation transformation matrix data AR, B11, B21, B12, and B22 calculated on the basis of the angle θ in the rotation transform, and a value of the radius r of the above-described cylindrical image CYL, these signals and data being derived from a control parameter calculation circuit 8 on the basis of data PPD representing a preset parameter inputted from a pointing device 6 and position assignment data PDD inputted from a joystick (control stick) 7.
In addition, the read-out address generator 3 executes respective transform calculations for the four transformable parts of image OIG1, OIG2, OIG3, and OIG4, as described above with reference to Figs. 1(A) through 1(D), on the basis of the output image address signal OADD, the reference point signal (h0, v0), the rotation transformation matrix data AR, B11, B21, B12, and B22, and the radius r of the above-described cylindrical image CYL (in addition, a signal SD to be described later), and reads out the input image data which are to be the contents of the image specified by the output image address fetched from the input image memory 2.
In addition, the read-out address generator 3 has an internal circuit configuration shown in Fig. 6 and executes a sequential transform processing of each signal corresponding to the inverse transform steps shown in Fig. 4.
Although in the image transform processing described above with reference to Figs. 1(A) through 1(D) the arithmetic operation is carried out on the basis of square lattice coordinates between the x-y plane and the X-Y plane, both the output image address signal OADD produced from the output image address generator 5 and the reference point signal (h0, v0) produced from the control parameter calculation circuit 8 are signals represented by addresses of respective picture elements on a raster screen (these are called real addresses). The read-out address signals RAD to be supplied from the read-out address generator 3 to the input image memory 2 also need to be converted to data having the contents of the real addresses.

Therefore, a relationship between each point (x, y) and (X, Y) on the x-y coordinates and X-Y coordinates and each real address (h, v) and (H, V) corresponding to the former coordinate point is defined by the following four equations.

\[ x = \frac{X_{size}}{H_{size}}\, h \qquad \cdots (34) \]

\[ X = \frac{X_{size}}{H_{size}}\, H \qquad \cdots (35) \]

\[ y = \frac{Y_{size}}{V_{size}}\, v \qquad \cdots (36) \]

\[ Y = \frac{Y_{size}}{V_{size}}\, V \qquad \cdots (37) \]

On the basis of the above-described definition, the reference point (x0, y0) corresponds to the real address (h0, v0).

\[ x_0 = \frac{X_{size}}{H_{size}}\, h_0 \qquad \cdots (38) \]

\[ y_0 = \frac{Y_{size}}{V_{size}}\, v_0 \qquad \cdots (39) \]

Such a conversion from the x-y coordinates and X-Y coordinates to the corresponding real addresses (h, v) and (H, V) is executed simultaneously when the read-out address generator 3 executes the rotation transform.
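As a small illustration (not part of the specification), the conversion of equations (34) through (39) simply rescales raster addresses into the square lattice coordinates; Xsize, Ysize, Hsize, and Vsize are the picture and raster dimensions assumed here.

def real_to_square(h, v, Xsize, Ysize, Hsize, Vsize):
    # equations (34) and (36): raster address (h, v) -> square lattice (x, y)
    return (Xsize / Hsize) * h, (Ysize / Vsize) * v

def square_to_real(x, y, Xsize, Ysize, Hsize, Vsize):
    # inverse scaling, which the read-out address generator folds into its
    # rotation coefficients when forming the read-out address RAD
    return (Hsize / Xsize) * x, (Vsize / Ysize) * y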
In the read-out address generator 3, as shown in Fig. 6, the signal (H, V) indicative of the real address supplied from the output image address generator 5 as the output image address signal OADD is supplied to adders 16 and 17 of a parallel translation (shift) circuit 15.
The adders 16 and 17 receive the respective reference point signals (h0, v0) supplied from the control parameter calculation circuit 8 as subtraction inputs so as to produce the addition outputs (H-h0) and (V-v0) representing the parallel translation processing such that the reference point (h0, v0) is shifted to an origin. The addition outputs (H-h0) and (V-v0) are sent to a multiplier 19 in the rotation transform circuit 18.
It should be noted that the reference point signal (h0, v0) is derived from the control parameter calculation circuit 8 on the basis of the change in the output PDD of the joystick 7, and the position of the reference point (h0, v0) is changed by an operation of the joystick 7 so that the visual effect of the page being folded gradually can be achieved.
The multiplier 19 multiplies the transform data AR by the address signals H-h0 and V-v0. The transform data AR are derived from the control parameter calculation circuit 8; the value of R(θ) in the above-described equation (20) is obtained in the control parameter calculation circuit 8 as data having the contents expressed in the following equation, multiplied by the conversion coefficients in the equations (34) through (37).

\[ A_R = \begin{bmatrix} \dfrac{X_{size}}{H_{size}}\cos\theta & -\dfrac{Y_{size}}{V_{size}}\sin\theta \\[2mm] \dfrac{X_{size}}{H_{size}}\sin\theta & \dfrac{Y_{size}}{V_{size}}\cos\theta \end{bmatrix} \qquad \cdots (40) \]

The data of the angle θ for the rotation transform are previously inputted to the control parameter calculation circuit 8 using the pointing device 6. The control parameter calculation circuit 8 outputs the rotation transform data AR calculated from the equation (40).
The output data RV01 appearing at an output terminal of the multiplier 19 are thus expressed by the following equation.

\[ RV_{01} = \begin{bmatrix} \dfrac{X_{size}}{H_{size}}\cos\theta & -\dfrac{Y_{size}}{V_{size}}\sin\theta \\[2mm] \dfrac{X_{size}}{H_{size}}\sin\theta & \dfrac{Y_{size}}{V_{size}}\cos\theta \end{bmatrix} \begin{bmatrix} H - h_0 \\ V - v_0 \end{bmatrix} \qquad \cdots (41) \]

Since the rotation transform data AR include a conversion coefficient for converting the addresses (h, v) and (H, V) to the square lattice coordinates (x, y) and (X, Y) (as shown in the above equation (40)), the contents of the output data RV01 are converted into the x-y coordinates and X-Y coordinates described with reference to the equations (18) through (33), on the basis of which the inverse transform arithmetic operation is executed at the subsequent stage of the non-linear compression inverse transform circuit 20.
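The following fragment (illustrative only; the function and variable names are assumptions) shows how the rotation transform data AR of equation (40) fold the raster-to-square-lattice conversion into the rotation of equation (20).

import numpy as np

def make_AR(theta, Xsize, Ysize, Hsize, Vsize):
    """Rotation transform data AR of equation (40): rotation by +theta with
    the scale factors of equations (34)-(37) built in."""
    kx, ky = Xsize / Hsize, Ysize / Vsize
    return np.array([[kx * np.cos(theta), -ky * np.sin(theta)],
                     [kx * np.sin(theta),  ky * np.cos(theta)]])

def rotate_output_address(H, V, h0, v0, AR):
    # equation (41): multiplier 19 output RV01 for one output raster address (H, V),
    # which is already the square lattice pair (x2, y2)
    return AR @ np.array([H - h0, V - v0])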
The non-linear compression inverse transform circuit 20 calculates the non-linear compression inverse transform using the data of the radius r sent from the control parameter calculation circuit 8. Specifically, an inverse transform calculation circuit 21 constituting the non-linear compression inverse transform circuit 20 operates on the x2-axis component of the output RV01 produced from the previous stage of rotation transform circuit 18, and the calculation result is sent to a selection switching circuit 22. The selection switching circuit 22 also receives the data x2 directly and enables the output of the data on the part of image not requiring this inverse transform, without passage through the non-linear compression inverse transform calculation circuit 21, when the data on the part of image not requiring such a transform are received.
In addition, the selection switching circuit 22 receives, as selective switching control signals CH, the x2-axis direction data x2 and a transform region specification signal SD (corresponding to the region D in Fig. 2) indicating that data on the regions of the parts of transformed image OIG3 and OIG4 requiring the inverse transform of the non-linear compression have arrived from the circuit 8.
In this way, the selection switching circuit 22 selects the output of the non-linear compression inverse transform calculation circuit 21 and outputs it as the output data x1 of the non-linear compression inverse transform circuit 20 upon arrival of the data corresponding to the parts of transformed image OIG3 and OIG4, and outputs the data x2 directly as the data x1 constituting the output NCOMP of the non-linear compression inverse transform circuit 20 upon arrival of the picture element data of the other parts of transformed image OIG1 and OIG2. On the other hand, a y-axis component y2 of the output RV01 from the multiplier 19 is directly outputted as the y-axis data y1 of the output NCOMP.
The non-linear compression inverse transform calculation circuit 21 calculates the above equations (25) and (26) by means of the following equation.

\[ F^{-1} \cdot x = f^{-1}(x) \qquad \cdots (42) \]

After the calculation is executed, the minus sign attached to the term of the operator F^{-1} in equation (26) is operated arithmetically at the subsequent stage of rotation transform circuit 23 in order to simplify the construction.
For the matrix of the right-side first term, the inverse transform operator F 1 and minus sign in the coefficient 1 are used as shown in the equations (26) and (29).
Therefore, the inverse transform operator in the non-linear compression inverse transform matrix and sign in the term of the coefficient are moved to the term of x in the rotation matrix. Thereafter, when the arithmetic operation of the rotation transform matrix is executed as shown in the following four equations (43) through ~46), the sign in the term x is exchanged so that the arithmetic operation for all parts of transformed image can be executed using the same construction shown in Fig. 6.

.~

.. ~ .

. :
.

\[ \begin{bmatrix} x_{01} - x_0 \\ y_{01} - y_0 \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \end{bmatrix} = \begin{bmatrix} -\cos\theta & \sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \end{bmatrix} \qquad \cdots (43) \]

\[ \begin{bmatrix} x_{02} - x_0 \\ y_{02} - y_0 \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \end{bmatrix} \qquad \cdots (44) \]

\[ \begin{bmatrix} x_{03} - x_0 \\ y_{03} - y_0 \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} -F^{-1} & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \end{bmatrix} = \begin{bmatrix} -\cos\theta & \sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} F^{-1} & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \end{bmatrix} \qquad \cdots (45) \]

\[ \begin{bmatrix} x_{04} - x_0 \\ y_{04} - y_0 \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} F^{-1} & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \end{bmatrix} \qquad \cdots (46) \]

Therefore, the x-axis component x1 among the outputs NCOMP of the non-linear compression inverse transform circuit 20 is sent to multipliers 25 and 26, in which the rotation transform data B11 and B21, expressed in the following equations, are multiplied by the component x1.

\[ B_{11} = \frac{H_{size}}{X_{size}}\cos\theta \qquad \cdots (47) \]

\[ B_{21} = -\frac{V_{size}}{Y_{size}}\sin\theta \qquad \cdots (48) \]

The multiplied output of the multiplier 25 is directly sent to an adder 27 and is sent to an adder 29 via a sign inverter 28. The multiplied output of the multiplier 26 is directly sent to an adder 30 and is sent to an adder 32 via a sign inverter 31.
On the other hand, the y-axis component y1 among the non-linear compression inverse transformed outputs NCOMP is sent to the multipliers 35 and 36, in which the rotation transform data B12 and B22, expressed in the following equations, are multiplied by the y-axis component y1.
\[ B_{12} = \frac{H_{size}}{X_{size}}\sin\theta \qquad \cdots (49) \]

\[ B_{22} = \frac{V_{size}}{Y_{size}}\cos\theta \qquad \cdots (50) \]

Thereafter, the multiplied output of the multiplier 35 is sent to the adders 27 and 29, and that of the multiplier 36 is sent to the adders 30 and 32.
The adders 27 and 30 directly receive the results of multiplication from the multipliers 25 and 26, respectively, in which the x-axis component x1 among the non-linear compression inverse transform outputs NCOMP is multiplied by the rotation transform data B11 and B21. On the other hand, the adders 29 and 32 receive the results of multiplication from the multipliers 25 and 26 via the sign inverters 28 and 31. Consequently, the results of the calculations from the equations (44) and (46) are sent to the adders 27 and 30, and the results of the calculations from the equations (43) and (45) are sent to the adders 29 and 32, respectively.
In this way, when the results RV02 of the arithmetic operations using the above-described equations (43) through (46) are delivered from the rotation transform circuit 23, the reference point data h0, h0, v0, and v0 are respectively added to them in the adders 41, 42, 43, and 44 constituting the parallel translation (shift) circuit 40.
This arithmetic operation means that the reference point (h0, v0) is returned to the original position.
Consequently, among the parts of the original image OIG (refer to Fig. 1(A)), the addresses ha and hb in the direction of the x axis of the parts of image OIG1, OIG3, OIG2, and OIG4, corresponding to the rear and front portions of the transformed image PIG, can be obtained. On the other hand, at output terminals of the adders 43 and 44 in the parallel translation (shift) circuit 40 appear the addresses va and vb in the y-axis direction for the parts of image at the front and rear sides of the transformed image PIG among the original image OIG (refer to Fig. 1(A)).
These address signals ha, hb, va, and vb are sent as the address data (h, v) of the read-out address signal RAD, with the higher priority given to the address data corresponding to the rear side parts of image, i.e., OIG1 and OIG3, over the front side parts of image, i.e., OIG2 and OIG4, as appreciated from Fig. 1(D).
According to the internal configuration of the read-out address generator 3 shown in Fig. 6, in a case when the generator 3 carries out an image transformation such that the input image can be transformed into the output image having the page turn-over effect, the read-out address generator 3 is so constructed as to generate sequentially the transformed output image address and to execute an arithmetic operation of the inverse transform of the sequentially generated output image address into the read-out address signal RAD, so that the part of image data required for the appearance of the page turn-over effect among the image data stored in the input image memory can assuredly be read out.
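To summarize the data path of Fig. 6 in executable form, the following Python sketch (illustrative only; the coefficient names mirror equations (40) and (47) through (50), while everything else is an assumption) computes a pair of candidate read-out addresses for one output address.

import numpy as np

def readout_addresses(H, V, h0, v0, theta, r, sizes):
    """One pass of the read-out address generator 3 for an output raster
    address (H, V): shift circuit 15, rotation circuit 18 (AR), inverse
    compression circuit 20, rotation circuit 23 (B11..B22 with the sign
    inverters), and shift circuit 40. Returns (rear, front) candidate
    real addresses, or (None, None) where no image part maps."""
    Xsize, Ysize, Hsize, Vsize = sizes
    D = np.pi * r / 2.0
    kx, ky = Xsize / Hsize, Ysize / Vsize

    # circuits 15 and 18: equations (40) and (41)
    x2 = kx * np.cos(theta) * (H - h0) - ky * np.sin(theta) * (V - v0)
    y2 = kx * np.sin(theta) * (H - h0) + ky * np.cos(theta) * (V - v0)

    # circuit 20: inverse of the non-linear compression where required
    if x2 < D - r:
        return None, None                      # outside the fold line, equation (23)
    x1 = D - r * np.arcsin((D - x2) / r) if x2 < D else x2
    y1 = y2                                    # the y component passes through unchanged

    # circuit 23 (equations (43)-(46)) followed by shift circuit 40
    B11, B21 = (1 / kx) * np.cos(theta), -(1 / ky) * np.sin(theta)
    B12, B22 = (1 / kx) * np.sin(theta),  (1 / ky) * np.cos(theta)
    front = ( B11 * x1 + B12 * y1 + h0,  B21 * x1 + B22 * y1 + v0)   # OIG2 or OIG4
    rear  = (-B11 * x1 + B12 * y1 + h0, -B21 * x1 + B22 * y1 + v0)   # OIG1 or OIG3
    return rear, front

Under the priority rule of step SP10, the rear candidate would be used whenever it falls inside the stored input picture, and the front candidate otherwise.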
Parameters representing a locus for a page to be turned over are preset in the control parameter calculation circuit 8 (refer to Fig. 5) using the pointing device 6. In addition, the data on the reference point (h0, v0), the rotation transform data AR, B11, B21, B12, and B22 calculated on the basis of the angle θ for the rotation transform, and the data on the radius of the cylindrical image CYL, these data being generated on the basis of the preset parameters, are modified by means of the joystick 7. Consequently, the reference point (h0, v0), the angle θ of the rotation transform, and the radius r of the above-described cylindrical image CYL can be changed in accordance with the positions specified by the joystick 7. Therefore, the output image data generating the page turn-over effect, as if a series of changes from the beginning of the page turn-over to the end of the page turn-over were viewed perspectively from an upper position of the screen, can be read out from the input image memory 2.
As appreciated from Figs. 5 and 6, a major part of all arithmetic operations is achievable by the hardware con8truction 50 that the image 8ignal transform ~yBtem which can remarkably 8implify part of a software arithmetic operation program ag compared with the ~oftware execution for all arithmetic operatiQns, According to the present invention, the transformation of image signals with the page turn-over effect to be arithmetically operated as a three-dimensional surface can easily be achieved by the two-dimenslonal (plane) data transform and by one-dimen lonal compres6ion transform of the part of output image constitutlng the surface part of the output image. Consequently, the effect of page turning over can easily be achieved with a ~imple construction.
It will be clearly understood by those skilled in the art that the foregoing description is made in terms of a preferred embodiment and that various changes and modifications may be made without departing from the scope of the invention, which is to be defined by the appended claims.


Claims (21)

1. A method for effecting a transformation of a video image on a video screen, comprising the steps of:
(a) storing an input video image in a memory device;
(b) defining a two-dimensional address plane in a memory area of said memory device;
(c) providing a first line on said address plane to divide said address plane into first and second regions;
(d) providing second and third lines on said first and second regions of said address plane;
(e) calculating address data of said address plane for providing transferred address data so that said address data of said first region are symmetrically transformed with respect to said first line;
(f) calculating said address data between said first and second lines and between said first and third lines so that said address data between said first and second lines and between said first and third lines are non-linear compression transformed along an axis perpendicular to said first line; and (g) reading out said input video image from said memory device and generating an output video image according to said calculated address data, whereby said output image can be viewed such as to be turned over along said first line.
2. The method according to claim 1 which further comprises:
(h) transforming parallel translation and rotation for said address plane so that said first line coincides with one axis of said address plane before said transforming of the parallel translation and rotation is carried out; and
(i) transforming inverse parallel translation and rotation for said address plane so that axes of said address plane are returned to an original condition.
3. The method as set forth in claim 1 wherein step (d) further includes the step of providing said second and third lines in parallel to each other.
4. The method as set forth in claim 3 wherein, when the parallel second and third lines are parallel to said first line, an imaginary cylinder is defined over which the video image is turned.
5. The method as set forth in claim 3 where the distance between said first line and said second line is about the same as the distance between said first line and said third line.
6. The method as set forth in claim 1 wherein said second line and said third line define an imaginary geometrical shape over which the video image is transformed.
7. The method as set forth in claim 1 further including the step of moving said first line to achieve a turnover effect.
8. The method as set forth in claim 1 wherein the step (d) further includes the step of varying the distance between said second and third lines.
9. The method as set forth in claim 1 further including a step of displacing said first line on said address plane in a predetermined direction so that said output image is turned over a geometrical shape having said first line as its axis.
10. The method as set forth in claim 9 wherein a radius of said geometrical shape varies with time.
11. The method as set forth in claim 9 wherein said predetermined direction is normal to said first line.
12. A system for effecting a transformation of a video image on a video screen, comprising:
(a) first means for storing input image data;
(b) second means for sequentially generating a positional output image address signal;
(c) third means for presetting parameters representing a locus on which an output of the video image is turned over as if a sheet of paper were folded up;
(d) fourth means for sequentially generating position designation signals indicative of a displacement of the input image on a two-dimensional plane;
(e) fifth means for calculating values including a positional reference point signal of the input image on the two-dimensional plane on the basis of which the input image is displaced, rotation transform matrix data based on a given angle through which the two-dimensional plane is rotated, and a radius data on a virtual geometrical image on which part of the input image is wound, said positional reference point signal, rotation transform matrix data, and radius data being based on preset parameters derived from said third means and position designation signals derived from said fourth means;
(f) sixth means for executing transform arithmetic operations for transformable parts of an output video image, said transformable parts being defined by a first part representing a rear part of the output video image which is wound on an upper surface of said geometrical image as viewed through the video screen, a second part representing a front part of the output video image which is outside of a projection portion of the geometrical image, a third part representing the front part of the output video image which is wound on a lower surface of said geometrical image as viewed through the video screen, and a fourth part representing the rear part of the output image which is outside of said wound first part so as to overlap on said second part, on the basis of the reference point signal, rotation transform matrix data, and radius data of said geometrical image calculated by said fifth means and reading out the input image data the contents of which are to be the output image and specified by the positional output image address signal generated by said second means; and
(g) seventh means for displaying the input video image whose data are stored in said first means and read out from said first means by said sixth means according to the positional output image address signal on the video screen so that the whole video screen can be viewed as if a sheet of paper were being folded up about said geometrical image.
13. The system according to claim 12, wherein said sixth means comprises:
(a) a first parallel translation circuit which receives the output address signal from said second means and reference point signal from said fifth means and adds both signals for each axis of the two-dimensional plane for processing a parallel translation such that the reference point of the output image is moved to an origin of the two-dimensional plane;
(b) a first rotation transform circuit for processing a rotation transform of the positional output signal of said parallel translation circuit through a first predetermined angle so that a center axis of said geometrical image coincides with one axis of the two-dimensional plane using one of the rotation transform matrix data derived from said fifth means;
(c) a non-linear compression inverse transform circuit for processing an inverse transform of non-linear compression transformation for said first and third parts with respect to only the other axis of the two-dimensional plane orthogonal to the one axis thereof using the radius data from said fifth means and for passing the data on said second and fourth parts and on the one axis of the two-dimensional plane without being subjected to the inverse transform of non-linear compression;
(d) a second rotation transform circuit for processing a rotation transform of the positional output signal of said non-linear compression inverse transform circuit through a second predetermined angle so that the center axis of said cylindrical image is returned to the original position using the other rotation matrix data derived from said fifth means;
(e) a second parallel translation circuit which receives the positional output signal from said second rotation transform circuit and positional reference point signal from said fifth means and adds both signals for each axis of the two-dimensional plane for processing the parallel translation such that the reference point of the output image is returned to the original position; and
(f) a priority selection circuit for selecting with a higher priority the positional output signal on said first and fourth parts of the output image from said second parallel translation circuit than that on said second and third parts of the output image and outputting the positional output signal of said second parallel translation circuit as the read-out address signal to said first means.
14. The system according to claim 12, wherein said fourth means generates sequentially the position designation signals in such a way that the respective values of the reference point, the first predetermined angle, and radius are varied so that the output video image is gradually turned over.
15. The system according to claim 14, wherein said fourth means is a joystick.
16. The system according to claim 13, wherein said positional output image address signal generated by said fourth means and positional reference point signal calculated by said fifth means are signals representing addresses of respective picture elements on a raster screen.
17. The system according to claim 16, wherein said first and second rotation transform circuits carry out the rotation transform processing using a conversion coefficient for converting each positional address data on the two-dimensional plane to each positional address data on the raster screen.
18. The system according to claim 13, wherein said second rotation transform circuit processes the rotation transform together with a fold back transform processing for the positional address data outputted from said non-linear compression inverse transform circuit corresponding to said first and fourth parts of the output video image.
19. The system according to claim 13, wherein said second predetermined angle is a minus value of said first predetermined angle.
20. The system according to claim 13, wherein a coordinate system of the two-dimensional plane is changed whenever the transform processing is carried out.
21. A method for effecting a transformation of a video image on a video screen, comprising the steps of:
(a) defining a two-dimensional address plane within a memory area;
(b) storing an input video image within said memory area so that each video data of a picture element thereof is placed at a corresponding address;
(c) virtually placing a cylinder shaped image whose radius of a section thereof is varied on the address plane defined in said step (a) and winding a part of said address plane on said cylinder shaped image;
(d) displacing said cylinder shaped image along a predetermined direction on said address plane with its radius varied with time so that the address plane is turned over along the predetermined direction;
(e) transforming parallel translation and rotation for the whole address plane, a non-linear compression with respect to the predetermined direction for front and rear parts of the address plane which are wound on a surface of said cylinder shaped image as viewed vertically through the video screen, and a fold back of the rear part of the address plane;

(f) inverse transforming the transformed address data obtained in said step (e) so as to unfold the output video image and reading out an inverse transformed image address data with a priority taken for the turned over part of output image as input image address data;
(g) displaying the input image on the video screen on the basis of the input image address, whereby the video image on the video screen can be viewed as if a page were turned over.
CA000498556A 1984-12-27 1985-12-24 Method and system for effecting a transformation of a video image Expired - Lifetime CA1286393C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP59-281225 1984-12-27
JP59281225A JP2526857B2 (en) 1984-12-27 1984-12-27 Image signal conversion method

Publications (1)

Publication Number Publication Date
CA1286393C true CA1286393C (en) 1991-07-16

Family

ID=17636106

Family Applications (1)

Application Number Title Priority Date Filing Date
CA000498556A Expired - Lifetime CA1286393C (en) 1984-12-27 1985-12-24 Method and system for effecting a transformation of a video image

Country Status (8)

Country Link
US (1) US4860217A (en)
EP (1) EP0186206B1 (en)
JP (1) JP2526857B2 (en)
KR (1) KR930011840B1 (en)
AT (1) ATE63797T1 (en)
AU (1) AU586846B2 (en)
CA (1) CA1286393C (en)
DE (1) DE3582921D1 (en)

Families Citing this family (60)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2181929B (en) * 1985-10-21 1989-09-20 Sony Corp Methods of and apparatus for video signal processing
GB8613447D0 (en) * 1986-06-03 1986-07-09 Quantel Ltd Video image processing systems
GB8622610D0 (en) * 1986-09-19 1986-10-22 Questech Ltd Processing of video image signals
US4689682A (en) * 1986-10-24 1987-08-25 The Grass Valley Group, Inc. Method and apparatus for carrying out television special effects
US4689681A (en) * 1986-10-24 1987-08-25 The Grass Valley Group, Inc. Television special effects system
US5070465A (en) * 1987-02-25 1991-12-03 Sony Corporation Video image transforming method and apparatus
FR2638308A1 (en) * 1988-10-24 1990-04-27 Mosaic Sa METHOD AND MATERIAL FOR TOUCH-BASED CONTROLLING FROM ONE IMAGE TO ANOTHER IN A SELECTED SEQUENCE ON A VIDEODISK
DE3843232A1 (en) * 1988-12-22 1990-06-28 Philips Patentverwaltung CIRCUIT ARRANGEMENT FOR GEOMETRIC IMAGE TRANSFORMATION
US5444831A (en) * 1988-12-24 1995-08-22 Fanuc Ltd. Developed product shape deciding method for a computer-aided design system
US5053762A (en) * 1989-04-28 1991-10-01 Microtime, Inc. Page turn simulator
DE69027712T2 (en) * 1989-05-19 1997-02-06 Sony Corp Image transformation process
US5063608A (en) * 1989-11-03 1991-11-05 Datacube Inc. Adaptive zonal coder
JP2773354B2 (en) * 1990-02-16 1998-07-09 ソニー株式会社 Special effect device and special effect generation method
EP0449478A3 (en) * 1990-03-29 1992-11-25 Microtime Inc. 3d video special effects system
US5369735A (en) * 1990-03-30 1994-11-29 New Microtime Inc. Method for controlling a 3D patch-driven special effects system
US5367623A (en) * 1990-09-25 1994-11-22 Sharp Kabushiki Kaisha Information processing apparatus capable of opening two or more windows on screen, one window containing a page and other windows containing supplemental information
US5283864A (en) * 1990-10-30 1994-02-01 Wang Laboratories, Inc. Computer apparatus and method for graphical flip book
US5214511A (en) * 1990-11-30 1993-05-25 Sony Corporation Image transforming apparatus
EP0526918A2 (en) * 1991-06-12 1993-02-10 Ampex Systems Corporation Image transformation on a folded curved surface
JP3092259B2 (en) * 1991-10-14 2000-09-25 ソニー株式会社 Image processing device
US5592599A (en) * 1991-12-18 1997-01-07 Ampex Corporation Video special effects system with graphical operator interface
JP3117097B2 (en) * 1992-01-28 2000-12-11 ソニー株式会社 Image conversion device
US5351995A (en) * 1992-01-29 1994-10-04 Apple Computer, Inc. Double-sided, reversible electronic paper
US5544295A (en) * 1992-05-27 1996-08-06 Apple Computer, Inc. Method and apparatus for indicating a change in status of an object and its disposition using animation
US5422988A (en) * 1992-06-25 1995-06-06 International Business Machines Corporation Method and apparatus for rendering a three-dimensional object with a plurality of dots
US5463725A (en) * 1992-12-31 1995-10-31 International Business Machines Corp. Data processing system graphical user interface which emulates printed material
US5467446A (en) * 1993-01-29 1995-11-14 Colorage, Inc. System and method for rasterizing images subject to orthogonal rotation
US6262732B1 (en) * 1993-10-25 2001-07-17 Scansoft, Inc. Method and apparatus for managing and navigating within stacks of document pages
JP3731212B2 (en) * 1994-04-28 2006-01-05 ソニー株式会社 Image special effect device
US5835692A (en) * 1994-11-21 1998-11-10 International Business Machines Corporation System and method for providing mapping notation in interactive video displays
US5640540A (en) * 1995-02-13 1997-06-17 International Business Machines Corporation Method and apparatus for translating key codes between servers over a conference networking system
US5887170A (en) * 1995-02-13 1999-03-23 International Business Machines Corporation System for classifying and sending selective requests to different participants of a collaborative application thereby allowing concurrent execution of collaborative and non-collaborative applications
US6356275B1 (en) 1995-02-13 2002-03-12 International Business Machines Corporation Pixel color matching across X servers in network conferencing systems by master-participant pair mapping
US5557725A (en) * 1995-02-13 1996-09-17 International Business Machines Corporation Method and system for switching between users in a conference enabled application
GB9607647D0 (en) * 1996-04-12 1996-06-12 Discreet Logic Inc Proportional modelling
DE69739685D1 * 1996-09-11 2010-01-21 Sony Corp Special effect device for special effects, image averaging method and object image generation method
JP3677924B2 (en) * 1997-02-17 2005-08-03 株式会社セガ Display method and control method of video game apparatus
US6069668A (en) * 1997-04-07 2000-05-30 Pinnacle Systems, Inc. System and method for producing video effects on live-action video
US6674484B1 (en) * 2000-01-10 2004-01-06 Koninklijke Philips Electronics N.V. Video sample rate conversion to achieve 3-D effects
GB0007974D0 (en) * 2000-04-01 2000-05-17 Discreet Logic Inc Processing image data
JP3846445B2 (en) * 2003-04-04 2006-11-15 ソニー株式会社 Special effect device, address signal generating device, address signal generating method, and address signal generating program
JP3846446B2 (en) * 2003-04-04 2006-11-15 ソニー株式会社 Special effect device, address signal generation device, address signal generation method, and address signal generation program
US20050231643A1 (en) * 2004-03-26 2005-10-20 Ross Video Limited Method, system and device for real-time non-linear video transformations
CN1905924B (en) * 2004-06-21 2012-08-29 威科私人有限公司 Virtual card gaming system
US7794324B2 (en) 2004-09-13 2010-09-14 Pokertek, Inc. Electronic player interaction area with player customer interaction features
US7898541B2 (en) 2004-12-17 2011-03-01 Palo Alto Research Center Incorporated Systems and methods for turning pages in a three-dimensional electronic document
USD683730S1 (en) * 2010-07-08 2013-06-04 Apple Inc. Portable display device with graphical user interface
US20120096374A1 (en) * 2010-10-18 2012-04-19 Nokia Corporation Computer modeling
US9674407B2 (en) 2012-02-14 2017-06-06 Honeywell International Inc. System and method for interactive image capture for a device having a camera
USD708638S1 (en) 2012-03-07 2014-07-08 Apple Inc. Display screen or portion thereof with graphical user interface
USD749637S1 (en) * 2012-10-17 2016-02-16 Samsung Electronics Co., Ltd. Portable electronic device with a graphical user interface
USD736221S1 (en) * 2012-10-17 2015-08-11 Samsung Electronics Co., Ltd. Portable electronic device with a graphical user interface
USD747346S1 (en) * 2012-10-17 2016-01-12 Samsung Electronics Co., Ltd. Portable electronic device with a graphical user interface
USD734765S1 (en) * 2012-10-17 2015-07-21 Samsung Electronics Co., Ltd. Portable electronic device with a graphical user interface
USD746336S1 (en) * 2012-10-17 2015-12-29 Samsung Electronics Co., Ltd. Portable electronic device with graphical user interface
USD736784S1 (en) * 2012-10-17 2015-08-18 Samsung Electronics Co., Ltd. Portable electronic device with a graphical user interface
USD736783S1 (en) * 2012-10-17 2015-08-18 Samsung Electronics Co., Ltd. Portable electronic device with a graphical user interface
US10486068B2 (en) 2015-05-14 2019-11-26 Activision Publishing, Inc. System and method for providing dynamically variable maps in a video game
US10463964B2 (en) 2016-11-17 2019-11-05 Activision Publishing, Inc. Systems and methods for the real-time generation of in-game, locally accessible heatmaps
US10709981B2 (en) 2016-11-17 2020-07-14 Activision Publishing, Inc. Systems and methods for the real-time generation of in-game, locally accessible barrier-aware heatmaps

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS50122119A (en) * 1974-03-12 1975-09-25
US3976982A (en) * 1975-05-12 1976-08-24 International Business Machines Corporation Apparatus for image manipulation
GB1547119A (en) * 1977-12-09 1979-06-06 Ibm Image rotation apparatus
US4283765A (en) * 1978-04-14 1981-08-11 Tektronix, Inc. Graphics matrix multiplier
JPS57138685A (en) * 1981-02-23 1982-08-27 Hitachi Ltd Graphic conversion for graphic indicator
JPS58108868A (en) * 1981-12-23 1983-06-29 Sony Corp Picture converter
JPS58156273A (en) * 1982-03-11 1983-09-17 Hajime Sangyo Kk Masking device of picture information
GB2119594B (en) * 1982-03-19 1986-07-30 Quantel Ltd Video processing systems
US4475104A (en) * 1983-01-17 1984-10-02 Lexidata Corporation Three-dimensional display system
US4594673A (en) * 1983-06-28 1986-06-10 Gti Corporation Hidden surface processor
US4625290A (en) * 1983-11-02 1986-11-25 University Of Florida Apparatus and method for producing a three-dimensional display on a video display device
ATE75872T1 (en) * 1983-11-29 1992-05-15 Tandy Corp HIGH RESOLUTION GRAPHIC VIDEO DISPLAY SYSTEM.
JPH0644814B2 (en) * 1984-04-13 1994-06-08 日本電信電話株式会社 Image display device
GB2158321B (en) * 1984-04-26 1987-08-05 Philips Electronic Associated Arrangement for rotating television pictures
US4685070A (en) * 1984-08-03 1987-08-04 Texas Instruments Incorporated System and method for displaying, and interactively excavating and examining a three dimensional volume
US4653013A (en) * 1984-11-19 1987-03-24 General Electric Company Altering spatial characteristics of a digital image

Also Published As

Publication number Publication date
JPS61156980A (en) 1986-07-16
AU586846B2 (en) 1989-07-27
KR860005538A (en) 1986-07-23
EP0186206A3 (en) 1988-03-30
JP2526857B2 (en) 1996-08-21
KR930011840B1 (en) 1993-12-21
DE3582921D1 (en) 1991-06-27
AU5159085A (en) 1986-07-03
US4860217A (en) 1989-08-22
ATE63797T1 (en) 1991-06-15
EP0186206A2 (en) 1986-07-02
EP0186206B1 (en) 1991-05-22

Similar Documents

Publication Publication Date Title
CA1286393C (en) Method and system for effecting a transformation of a video image
JP2550530B2 (en) Video signal processing method
EP0437074B1 (en) Special effects using polar image coordinates
JP2985847B2 (en) Input device
US6441864B1 (en) Video signal processing device and method employing transformation matrix to generate composite image
JPH05342310A (en) Method and device for three-dimensional conversion of linear element data
EP0205252A1 (en) Video signal processing
JPH1049704A (en) Image conversion method
KR100412166B1 (en) Image processing apparatus and image processing method
US5327501A (en) Apparatus for image transformation
US6020932A (en) Video signal processing device and its method
JP4635437B2 (en) Electronic apparatus and image display method
JP2005195867A5 (en)
US5150213A (en) Video signal processing systems
EP0435905B1 (en) Improvements in and relating to the production of digital video effects
JP2713895B2 (en) 3D direction vector input method
EP0707420A1 (en) 3D special effects system
JP3602899B2 (en) Graphic data display device
JPH07298136A (en) Image special effect device
JPH05126937A (en) Video display device
JP3092779B2 (en) Graphic processing method and apparatus
JPH0371275A (en) Method and device for image transformation
JPS61196681A (en) Image signal converter
JPS61195085A (en) Picture signal converting device
JPS6126714B2 (en)

Legal Events

Date Code Title Description
MKLA Lapsed