WO1993011525A1 - Global coordinate image representation system - Google Patents

Global coordinate image representation system

Info

Publication number
WO1993011525A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
storage structure
gray scale
memory storage
Application number
PCT/US1992/010633
Other languages
French (fr)
Inventor
Barry B. Sandrew
Ronald Rinaldi
Original Assignee
American Film Technologies, Inc.
Application filed by American Film Technologies, Inc. filed Critical American Film Technologies, Inc.
Publication of WO1993011525A1 publication Critical patent/WO1993011525A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/02 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed

Definitions

  • This invention relates to computer animation and more generally to computer graphics and image processing.
  • Animation is a process by which a series or sequence of hand-drawn or computer-drawn images is combined in such a way as to appear to the viewer that an image or multiple images have moved from one frame to another.
  • Because films and videos are actually a sequence of still pictures linked together, it is necessary to relate the images from one frame to another so that the transition appears fluid when shown at the speed of the final product (30 frames per second for video).
  • The most prevalent animation technique is called cell animation.
  • various layers or “cells” are hand drawn by animators on transparencies (or on paper which is later transferred to a transparency), depending on the stage of the process.
  • each cell tracks an individual object.
  • This type of system is referred to as a "paper" system.
  • the first step is to draw the image in stick or outline form for the first frame.
  • this image is redrawn as it would appear a number of frames into the future. For purposes of example, this future frame is nine frames into the future. After these two frames are drawn, the "in-betweening" step takes place.
  • the "in- betweener” begins by drawing the frame which occurs in the middle of frames 1 and 9 (frame 5) .
  • the frame between the extreme frame and the first in-between frame (frames 1 and 5) is drawn, and so on until all of the images occurring in successive frames (1 to 9) have been drawn.
  • In-betweening in paper systems is accomplished with backlit paper, so that the outer frames are in the in-betweener's view when he or she is drawing the in-between frames.
  • the first pass of "in-betweening," called pencil testing, is usually drawn in pencil on individual pieces of paper that are pin-registered.
  • the paper drawings are then videotaped so that the accuracy of the animation can be checked. This allows for verification of lip synching, expression and movement of the elements of a set of frames.
  • the next pass is called inking, where the pencil drawing is traced with clean lines drawn in ink.
  • the third step involves photocopying, followed by hand painting, and the final step of compositing. In compositing, all of the cells (layers comprising each animated image) for an individual frame are stacked on top of one another and photographed to make up each frame for the animated sequence.
  • the animated images track the sound or speaking with which the images will be displayed.
  • the soundtrack and speaking parts are usually recorded prior to the animated images, and the animator, in creating the images, tracks the sound. This means that, as an animated character is speaking, the animator draws the mouth and facial expressions to sync with the pre-recorded soundtrack.
  • Computer Animated Design (CAD)
  • a CAD program may represent an image being drawn as a set of vectors.
  • the use of an equation to represent the image information allows for complicated effects, such as image manipulation, translation and rotation.
  • the animator may have a set of drawings on paper called turnarounds that provide multiple orientations and poses.
  • When the animator needs to draw an image in a particular position, he will reference the turnarounds and draw an image on a separate piece of paper in whatever scale and whatever modification are deemed necessary by the animator.
  • Microsoft Paint™ allows a user to select from a library of shapes or templates of objects, for use in drawing.
  • Typically, a user first selects an object, such as a circle. This is dragged or brought over to the drawing area and then appears on screen as a circle. The user can adjust the size of the circle by drawing a radius line or "stretching" the circle. Paint programs work on pixels, as the images are represented as bitmaps. This can also be referred to as a raster representation.
  • CAD software programs are also used for creation and manipulation of images. CAD programs do not manipulate pixels, but rather vectors. These vectors are then later converted into pixel representation for display on screen. Manipulation of vectors provides for a greater degree of modification of images. Like paint programs, CAD programs have a library of objects which can be brought into the drawing area and manipulated to create vector-represented images.
  • the present invention involves a computerized animation system which comprises a series of images stored in various representations in a plurality of memory storage structures.
  • the images are broken down into several components, which facilitate the creation and modification of images and the application of color thereto.
  • the images are first drawn and modified in a vector representation.
  • the vector represented images are then stored in a raster representation for display and creation of subsequent images.
  • Subsequent additions or modifications to the stored raster representation are facilitated by modifications or additions of the vector representations, then restored as a new raster representation.
  • gray scale information and color region information (which, in turn, addresses color look-up tables) are assigned to each pixel of an image. This information is then used to construct unique transfer functions to produce the final color versions of the animated images.
  • the present invention also involves a computer- based method for producing animation sequences by storing images in several orientations, each orientation corresponding to the rotation of the image in a global coordinate system. In this way, an image's desired rotational and pivotal movements are represented. All of the orientations of the image are then stored for later retrieval in putting together an animation sequence.
  • An image can also be broken down into primary and secondary parts, with secondary parts linked to the primary part as well as other secondary parts. In this manner, as one part moves or changes its orientation, related movements of secondary parts are automatically implemented by the system.
  • Fig. 1 shows an expanded view of a memory structure of the present invention.
  • Fig. 1 A is a block diagram of an apparatus in accordance with an embodiment of this invention.
  • Fig. 2 shows a diagrammatic view of the use of the memory structure of the present invention in animation.
  • Fig. 3 shows storage in the lower memory storage structure of the present invention for a series of frames.
  • Fig. 4 shows storage in the upper-upper memory storage structure for a series of frames.
  • Fig. 5 is a flow diagram of the animation process of the present invention.
  • Fig. 6 is a more detailed flow diagram of the first three steps illustrated in Fig. 5.
  • Fig. 7 is a more detailed flow diagram of the gray scale assignment process.
  • Fig. 8 is a more detailed flow diagram of the color assignment process.
  • Fig. 9 is a flow diagram of the shading special effect.
  • Fig. 10 is a flow diagram of the shadowing special effect.
  • Fig. 11 is a flow diagram of the gradient shading special effect.
  • Fig. 12 is a flow diagram of the dissolve special effect.
  • Fig. 13 is a flow diagram of the merge special effect.
  • Fig. 14 is a flow diagram of the composite special effect.
  • Fig. 15 is a flow diagram of the pixel offset process.
  • Fig. 16 is a flow diagram of the automatic in-betweening process.
  • Fig. 17 is a sphere illustrating the present invention as a Global Coordinate System.
  • Fig. 18 shows an image in different positions corresponding to its location on the sphere.
  • Fig. 19 shows another image with its position corresponding to its location on the sphere.
  • Fig. 20 shows a sphere with the images of Figs. 18 and 19 combined into one object with the position of each object corresponding to its location on the sphere.
  • Fig. 21 shows a figure illustrating attach points of several objects.
  • Fig. 22 shows an exploded view of the image of Fig. 21.
  • Fig. 23 is an illustration of the data structures used in the present invention.
  • Fig. 24 is an illustration of additional data structures used in the present invention.
  • Memory storage structure 10 comprises a lower memory storage structure 18, an upper memory storage structure 16, an upper-upper memory storage structure 14 and a vector storage structure 22.
  • the information from vector storage structure 22 is translated into raster information and stored in memory structure 12 prior to storage in upper memory storage structure 16 and lower memory storage structure 18.
  • Upper memory storage structure 16 comprises a plurality of bit planes 16a-16d. Each of these bit planes is further comprised of a plurality of bits 20. Bits 20 correspond to the picture elements (pixels) which make up a visual image as it is displayed on a display device.
  • each of the bit planes 16a through 16d represents an independent (one level of information) image.
  • memory storage structure 16 comprises four bit planes of independent information by which four monochrome images can be stored.
  • each of the four bit planes in upper memory storage structure 16 can contain an image. These images are comprised of a group of bits, and are termed raster images. Temporarily storing a raster represented image in one of the bit planes for motion reference (i.e., bit plane 16a), and then temporarily storing an image representing motion or animation of the earlier image in an additional bit plane for motion reference (i.e., bit plane 16b) allows for animation sequences to be created. Using images in this way is referred to as "ghosting" of images.
  • A block diagram of an example computer animation station 8 is shown in Fig. 1A.
  • This animation station includes a computer 2, containing: pipeline architecture image processing hardware with a central processing unit and related hardware 6, which operates on the described memory structures 10; a digitizing device 4, such as a Summagraphics™ tablet; and graphics display hardware 9, such as an enhanced graphics adapter (EGA) monitor.
  • the image memory is 16 bits deep, and is organized as a two- dimensional rectangular array. The horizontal dimension of the memory is 2K words, and may be expanded.
  • the upper and lower bytes of the memory may be addressed separately, so that two different 8-bit images can occupy the same area of memory simultaneously.
  • One of these bytes is further divided into the two 4-bit segments comprising upper memory storage structure 16 and upper- upper memory storage structure 14.
  • Image memory is controlled through a group of registers that physically reside on an image memory interface board. The registers are accessible through I/O ports. The five registers include control, status and address registers. Generally, the control register selects a mode of operation for a transfer of image data, and the address registers control the address where the data is placed.
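The three storage structures can be pictured as fields of the 16-bit word that the image memory holds for every pixel. The following is an illustrative sketch only: the exact bit positions and these function names are assumptions, not taken from the patent.

```python
def pack_pixel(gray, outline_planes, region):
    """gray: 0-255 (lower structure 18); outline_planes: 4-bit mask over
    planes 16a-16d (upper structure 16); region: 0-15 (upper-upper 14)."""
    assert 0 <= gray <= 0xFF and 0 <= outline_planes <= 0xF and 0 <= region <= 0xF
    return gray | (outline_planes << 8) | (region << 12)

def unpack_pixel(word):
    gray = word & 0xFF                  # 8-bit gray scale value
    outline_planes = (word >> 8) & 0xF  # four independent outline bit planes
    region = (word >> 12) & 0xF         # 4-bit color region code
    return gray, outline_planes, region

word = pack_pixel(gray=200, outline_planes=0b0001, region=3)
assert unpack_pixel(word) == (200, 0b0001, 3)
```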
  • the image to be animated is drawn in vector form with a drawing device and stored in vector files in vector storage structure 22 as a vector representation.
  • This vector representation is a series of vector defined lines (line segments) created by the animator prior to storage in vector storage structure 22.
  • Vector storage structure 22 can contain numerous vector files. As the vectors cannot be displayed per se, they must be translated into a raster representation and displayed in graphics bit plane 12.
  • Graphics bit plane 12 comprises a graphics bit plane such as an EGA bit plane.
  • the information in graphics bit plane 12 is displayed directly on a display device, such as a computer monitor. Therefore, though the information for the image as it is being drawn is displayed in a raster format (so that the animator can see the image as an image, and not a string of numbers), the image is actually generated from a vector file stored in the host memory of the computer. Using vectors facilitates manipulation and modification of the images as they are being created, as well as after they are created.
  • the animator can store the vector represented image in one of the four bit planes of upper memory storage structure 16 selected by the animator. Storing the vector represented image in upper memory storage structure 16 displays the image in a raster represented format. In this way, each pixel of the image (an outline figure) is stored and addressable as a single bit in one of the four bit planes of upper memory storage structure 16.
  • lower memory storage structure 18 contains eight bit planes in the present embodiment of the invention.
  • lower memory storage structure 18 can contain a fewer or greater number of bit planes depending on the desired memory structure design.
  • eight bit planes are used so that 256 shades of gray (gray scale values) can be addressed and assigned to the images created.
  • gray scale values are assigned to the images.
  • lower memory storage structure 18 comprises dependent bit planes. These dependent bit planes comprise an eight-bit word (in the present embodiment) for each pixel.
  • each pixel can be assigned a gray scale value from a range of 256 values.
  • the animator creates an image in a vector represented format which is stored in a vector file contained in vector storage structure 22.
  • the vector represented image is translated into a raster representation in the graphics bit plane 12, so that it can be displayed visually as an image on a display device.
  • This image can then be stored in a raster representation in one of the independent bit planes (16a- 16d) of upper memory storage structure 16.
  • Subsequent images, representing the image in various stages of motion, can also be stored in the additional bit planes of upper memory storage structure 16. Storing these subsequent images allows the animator to display the images simultaneously on the display device so that in-between stages of the image to be animated can be drawn.
  • the animator selects gray scale values to be assigned to the various regions of the images. Once the gray scale values are selected for the regions of the images, the images with the gray scale value information are stored in lower memory storage structure 18.
  • the bit planes of lower memory storage structure 18 are dependent. In this way, each bit (pixel) of lower memory storage structure 18 contains 8 bits of information.
  • the animator can assign a gray scale value from a range of 256 gray scale values to the regions of the images.
  • the images stored in the bit planes of upper memory storage structure 16, in the preferred embodiment of the present invention, are assigned a different color, solely for display purposes. This color is wholly unrelated to the ultimate color of an image. Therefore, when the animator chooses to display multiple images from the bit planes of upper memory storage structure 16, the displayed images will appear in different colors. This allows for easier distinction on the part of the animator between the image being created and the image in its various stages of motion. As an additional aid, the animator can select the intensity of the colors to be displayed. By choosing the color and the intensity of the color for images in each of the bit planes, the animator can adapt the system to produce the most effective workspace environment.
  • the cycling tool allows the animator to "run" the images in order to test for the smoothness of the animation. This is similar to the flipping of pages in a paper animation system.
  • the animator can choose either an automatic or manual mode.
  • Automatic mode runs through the images at a preselected speed in a forward, then backward direction to maintain continuity of motion.
  • Manual mode allows the animator to interactively choose the direction, speed and starting point (frame) for the cycling.
  • each of the images stored in lower memory storage structure 18 is cycled.
  • the present system works with 32 frames. This allows for cycling of up to 32 frames, over and over. A system working with more than 32 frames at a time could cycle through more than 32 frames.
  • In order to produce a color image (as opposed to the colors assigned to the bit planes of upper memory storage structure 16, which are used for reference purposes), the color transfer function also requires a color region value to point to the color look-up table which contains the HLS (hue, luminance, saturation) color coordinate value associated with each of the possible gray scale values (i.e., 256 in the present embodiment).
  • Upper-upper memory storage structure 14 provides this information for the color transfer functions.
  • Upper-upper memory storage structure 14 contains four bit planes. In the present embodiment, then, there is a possibility of 16 different regions of color, any one of which can be addressed. Those skilled in the art will understand and recognize that a fewer or greater number of bit planes can be present, yielding a fewer or greater number of colors (color regions) which can be addressed. As there are four bit planes in the present embodiment, there are a total of sixteen colors which can be addressed or assigned. Each region addresses 256 different values of the hue. As in lower memory storage structure 18, the bit planes of upper-upper memory storage structure 14 are dependent in that the four bit planes comprise a four-bit word for each pixel 20 in the image.
  • the animator selects the color for each region and designates the region as being assigned this selected color.
  • the color assignment to each region can be concurrent with, or separate from, the assignment of gray scale information to each of the regions of the image.
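A hedged sketch of how the color transfer function described above could be realized: the 4-bit color region value selects one of 16 look-up tables, and the 8-bit gray scale value indexes that table to yield an HLS color coordinate. The table contents and helper names are placeholders, not the patent's actual values.

```python
def build_lut(hue):
    # One 256-entry table per region: fixed hue, luminance tracking the
    # gray scale, constant saturation (assumptions of this sketch).
    return [(hue, gray / 255.0, 0.8) for gray in range(256)]

lookup_tables = {region: build_lut(hue=region * 22.5) for region in range(16)}

def color_transfer(region, gray):
    """Map one pixel's (color region, gray scale) pair to its final HLS color."""
    return lookup_tables[region][gray]

print(color_transfer(region=3, gray=128))   # (67.5, 0.50196..., 0.8)
```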
  • Sequence A of Fig. 2 shows the image (as a series of four "X" characters) displayed in graphics bit plane 12.
  • the image shown in graphics bit plane 12 of Sequence A is created by the animator and is represented in a vector file of vector storage structure 22 as a set of vector values.
  • the animator then stores the image in one of the bit planes of upper memory storage structure 16.
  • the animator can select any of the 4 bit planes of upper memory storage structure 16.
  • the image is stored in bit plane 16a.
  • Sequence B shows the image from bit plane 16a displayed on a display device 9. Additionally, a second image representing movement of the first image (shown as a series of four "X" characters) is drawn by the animator in graphics bit plane 12 (not shown).
  • the new image is stored as a vector file in vector storage structure 22, but represented as a raster image in graphics bit plane 12 (and displayed as such on display device 9).
  • each of these images is displayed in a different color and possibly a different intensity on the animator's display monitor 9. This is due to each bit plane of upper memory storage structure 16 having a different color assigned to it.
  • Graphics bit plane 12 (raster representation of the vector information of the image) also has a color assigned to it which should be different than those colors assigned to the bit planes of upper memory storage structure 16.
  • the second image is stored in bit plane 16b of upper memory storage structure 16.
  • the original and new images are both displayed on display device 9 (in the colors assigned to their respective bit planes), and the animator can draw in the third image in graphics bit plane 12 (not shown).
  • This third image (shown as a series of four "X" characters) represents the "in-between image" of the first and second images.
  • the first and second images are displayed in their respective colors on display device 9 to allow the animator to draw in the third image (in graphics bit plane 12) in the proper position.
  • the animator can then store this third image in a third bit plane, shown as bit plane 16c in Sequence C of Fig. 2.
  • Each of the raster bit planes 16a, 16b, and 16c represents the image to be animated as it would appear in three separate frames of an animation sequence. Therefore, when assigning gray scale information to these respective images, the gray scale information is stored in a different lower memory storage structure 18 for each frame. In this way, the image stored in bit plane 16a is assigned a gray scale value, and then the gray scale value is stored in a lower memory storage structure 18 for that frame. The image in bit plane 16b is assigned a gray scale value, and then this gray scale information is stored in a lower memory storage structure 18 for a subsequent frame. Lastly, the image in bit plane 16c would be assigned a gray scale value, and this gray scale information would be stored in a lower memory storage structure 18 for a third frame.
  • the gray scale values should be the same for all three frames.
  • In A of Fig. 3 (corresponding to Sequence A of Fig. 2), an arbitrary gray scale (for purposes of illustration) represented by the binary value of eight ones (1 1 1 1 1 1 1 1) is repeated for the four pixels illustrated by the shaded area.
  • the animation of the images would appear as a transition of the X characters in their location in the first frame to their location in the second and third frames.
  • each of the images is assigned the same gray scale value.
  • the final animation product could yield each of the images in a different color. In that case, it would be necessary to assign each of the regions, represented by the gray scale values, a different color.
  • There is shown in Fig. 4 a representation of the color information for the images drawn in Sequences A, B and C of Fig. 2 and stored in the upper-upper memory storage structure 14.
  • Structure A of Fig. 4 shows the color information for the image drawn in Sequence A of Fig. 2.
  • An arbitrary value (for purposes of illustration) of four ones (1 1 1 1) is stored in the bit planes of the shaded area.
  • B and C of Fig. 4 show similar storage for the corresponding images from Figs. 2 and 3.
  • Look-up tables (not shown), a selected one of which is defined for each color region by an identifier, define color transfer functions corresponding to the values stored in the bit planes for each pixel in A, B and C of Fig. 4.
  • This information, along with the 8-bit gray scale information (stored in lower memory storage structure 18), provides for a unique output color for each color pixel.
  • This results in the color being applied in the final image displayed on display device 9, which is dependent upon, but not a mere summation of, gray scale values and operator-selected colors for the various regions of an image.
  • In Figs. 2-4, only one color is being assigned to all images. Combining these images, as a sequence of images, results in the final animation or animated feature.
  • the information that is used for the production of the final, colored animated images is contained in lower memory storage structure 18, upper-upper memory storage structure 14, and the look-up tables (not shown) for each of the color regions (colors) which can be assigned to areas of the images.
  • the vector information is no longer necessary once the unfilled images are satisfactory to the animator and completed.
  • the information in graphics bit plane 12 is temporary, corresponding to that displayed on the display device at any given time.
  • the raster representation of the images stored in the bit planes of upper memory storage structure 16 is also temporary, designed to facilitate the drawing and creation of the animated images. Once this is completed and gray scale information for the images is stored in lower memory storage structure 18, the information in upper memory storage structure 16 is no longer required.
  • There is shown in Fig. 5 an illustration of the animation process of the present invention in the form of a flow chart, with block 30 representing the creation of the first animation frame image and block 32 representing the creation of the Nth frame image.
  • the Nth frame image is the ninth frame for image animation.
  • the creation of in-between frames based on the first frame and the Nth frame is represented by block 34.
  • the Nth frame is the second frame created, and the in-between frame is the third frame created.
  • Block 36 represents the assignment of gray scale values to regions, wherein the regions are defined by the image outlines.
  • Block 38 represents the assignment of colors to the regions where gray scales were assigned in block 36. Blocks 36 and 38 can be combined into one simultaneous step.
  • Block 40 represents the addition of special effects, including the combinations of images and layering of images.
  • There is shown in Fig. 6 a more detailed schematic of the steps represented by blocks 30-34.
  • an animator begins, in block 42, by creating the image outlines for an image to be animated. This information is stored in a vector file in vector storage structure 22.
  • the animator has the ability to modify and manipulate the images through such techniques as rotation, stretching, shrinking, duplicating, etc. For this reason, image information in vector representation is stored in a vector file as shown in block 44 should image modification or adjustment be necessary at a later time.
  • the image is stored in raster representation (as a bit map) in one of the bit planes of upper memory storage structure 16.
  • each bit plane of upper memory storage structure 16 has associated with it a particular color and intensity to differentiate between frames.
  • the storage of vectors in a vector file in vector storage structure 22 is carried out at the time that the image is stored in raster representation in one of the bit planes of upper memory storage structure 16. It is possible, however, to store the vector information at a separate time than that of the raster information. At this point, one image outline has been created in a vector representation and stored in a raster representation in upper memory storage structure 16.
  • the next step for the animator is to create a second image corresponding to where the image just created will be located after "motion" has taken place.
  • this second frame is the ninth frame in a sequence of animated images. In Fig. 5, this is referred to as the "Nth" frame. Deciding whether this step is necessary is shown in decision block 48.
  • the animator can display the first image in the color of the bit plane in which it is stored. This color should be different than the color in which the animator is displaying the vector represented image (in graphics bit plane 12) that is currently being created. This is referred to in block 50 as the ghosting of images.
  • the animator can draw the current image using the ghosted image as a reference. This greatly aids in the production of high-quality and accurate animation.
  • the animator goes back to block 42 and draws a new image.
  • When complete, the animator stores the raster representation of this second image in its appropriate lower memory storage structure 18. But for its appropriate frame, all ghosted images can be displayed in the same plane within their respective frames. During ghosting, each ghosted image is assigned to the frame being in-betweened, for reference purposes only. The ghosted frame is not saved to memory storage structure 18 where the new in-between image is stored. The in-betweening is represented in Fig. 5 as block 34. The process for in-betweening is identical to that described earlier for creating the first and second images. The difference is that this is a further iteration of the process as already described.
  • a region is typically any area surrounded by the lines of the image.
  • the inside of a circle is an example of a region, as is the area outside the circle.
  • Block 56 shows the animator selecting a region of an image to be filled with the gray scale values selected in block 54.
  • an image contains several areas or regions which will eventually receive different colors. An example of this would be a cartoon figure wearing clothing having different colors. Two hundred fifty-six gray scales can be achieved in one image, but only 16 regions in the present embodiment.
  • the selected region is filled with the selected gray scale value. In the present embodiment, the animator will immediately see the gray scale appear in the selected region on the display device.
  • decision block 59 it is determined whether all gray scale values have been selected. If not, the next gray scale value must be selected in block 54. Blocks 54-58 are repeated for all of the regions in a particular frame.
  • gray scale values are assigned to all of the regions of a frame. There may be several different gray scales for each region.
  • the gray scale values are stored in lower memory storage structure 18, as shown in block 60. In the present embodiment, this is an eight- bit value stored for each pixel corresponding to a possible selection of 256 gray scale values.
  • lower storage structure 18 is not always "empty" before the gray scale values are stored there. It is possible that other image information, such as a digitized live-action image, is already residing there. It is also possible to circumvent the upper memory storage structure 16 and store the raster representation of the vector-based images being created directly into lower memory storage structure 18. In any case, the gray scale values assigned in Fig. 7 are stored in lower memory storage structure 18, and overwrite the location of any information previously stored there.
  • this raster gray scale information is anti-aliased in block 61 and then stored in a permanent memory location (i.e., hard disk, removable storage media, etc.), as shown in block 62.
  • Anti-aliasing typically takes place on the designated background to provide proper fusing of images. This can occur after storage of all information in lower memory storage structure 18, or after lower memory storage structure information is stored for several frames.
  • each frame has gray scale values stored in its own lower memory storage structure 18.
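The gray scale assignment loop of Fig. 7 (blocks 54-60) amounts to a region fill bounded by the image outlines. A minimal sketch, assuming a seed-point flood fill; the array names are illustrative.

```python
def flood_fill(lower18, outline, seed_x, seed_y, gray):
    """Fill the region around (seed_x, seed_y) with an 8-bit gray value."""
    h, w = len(lower18), len(lower18[0])
    stack = [(seed_x, seed_y)]
    while stack:
        x, y = stack.pop()
        if not (0 <= x < w and 0 <= y < h):
            continue
        if outline[y][x] or lower18[y][x] == gray:
            continue                      # hit a boundary or already filled
        lower18[y][x] = gray              # one of 256 gray scale values
        stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])

w, h = 8, 8
outline = [[int(x in (0, 7) or y in (0, 7)) for x in range(w)] for y in range(h)]
lower18 = [[0] * w for _ in range(h)]
flood_fill(lower18, outline, 3, 3, gray=255)   # blocks 54-58 for one region
```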
  • There is shown in Fig. 8 a flow chart representation of the assignment of colors (block 38).
  • a color is selected for a particular region which is not yet designated. An example of this would be selecting red for application to a cartoon character's dress.
  • a region is selected to which this color is to be applied. It is possible to designate a color at the time a gray scale is applied to the region, as was described in Fig. 7. In doing so, it is understood that a particular color is selected and associated with a particular gray scale. This can be done prior to the selection and application of the gray scale to a particular region, so that gray scale and color are thus applied simultaneously to a region. Color can also be applied after the selection and application of the gray scale to a particular region.
  • it is necessary to designate a region to which a color is to be applied.
  • the color is applied to the selected region in block 65.
  • a color will typically be applied at the time that a gray scale is applied to a region.
  • the present embodiment does not provide for displaying gray scale information concurrent with color information.
  • the animator in order to display the color information, the animator must choose an option for solely displaying color. This is not a limitation of the present system, as it is readily understood that additional information can be used in systems operating on more information to allow for display of color as well as gray scale information and image modification information.
  • Colors are selected in the present embodiment by using a color bar on the display device.
  • these colors are selected prior to the animation process, as the palette from which the animator colors each region. This facilitates efficient and consistent coloring in a production-type animation process.
  • the animator designates which regions receive a certain color by pointing to a color wheel displaying colors.
  • these colors are generated by a 24-bit color generating board, such as the Targa Board®.
  • the colors are then locked into a color bar.
  • the color bar is used by the designer and colorist for completing the production coloring. After the colors are selected, they appear on the side of the menu used for filling the colors. Colors are chosen for all regions, and the color bar is passed on as data to all persons carrying out the coloring in later stages of production.
  • There is shown in Fig. 9 a flow chart for the process of applying a shading special effect.
  • Shading is an operation which assigns a particular gray scale (and corresponding color) to an area designated by the animator, called the active mask. This allows the animator to provide an effect such as a black shadow or the assignment of gray scale values to a region not defined by the outlines of an image.
  • an animator selects an area to be shaded in a first step, represented by block 67. This is accomplished through the region selection tools such as a window or a free-hand designated region.
  • the animator selects a gray scale value to be applied to this region. Typically, this is a black or very dark gray scale.
  • the gray scale is applied to the selected area.
  • the selected area is also referred to as a designated mask or active mask. Note that the application of the selected gray scale information to the selected area will overwrite the underlying gray scale information in that entire area.
  • the new gray scale values are stored in lower memory storage structure 18, overwriting any prior values stored there. Again, this tool is effective for blackening out (or, conversely, whitening out) sections or areas of the screen.
  • There is shown in Fig. 10 a flow chart of the process for providing a special effect of shadowing, in accordance with the preferred embodiment of the present invention. A shadow is different from a shade (Fig. 9) in that the underlying gray scale is not replaced by a selected gray scale value, as is done in shading.
  • the underlying gray scale pixel values are offset by an operator designated value. In this way, the underlying pixels will be adjusted upward or downward according to this offset value.
  • the animator selects an area to be shadowed, in a first step represented by block 72. Again, this is accomplished through any number of tools, such as windows or free-hand area designation.
  • a gray scale offset value is selected. This value is either positive or negative, reflecting an increase or decrease, respectively, of the underlying gray scale values of the designated region.
  • the gray scale offset value is then applied to the gray scale values located in the selected area, block 76, and, finally, the new gray scale values are stored in lower memory storage structure 18, block 78.
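Shading and shadowing reduce to two small operations over the active mask: shading replaces the underlying gray scale, while shadowing offsets it. A hedged sketch, with the array layout and clamping to the 8-bit range as assumptions of the sketch.

```python
def shade(lower18, mask, gray):
    """Shade (Fig. 9): replace every masked pixel's gray scale outright."""
    for y, row in enumerate(mask):
        for x, inside in enumerate(row):
            if inside:
                lower18[y][x] = gray

def shadow(lower18, mask, offset):
    """Shadow (Fig. 10): offset every masked pixel; offset may be negative."""
    for y, row in enumerate(mask):
        for x, inside in enumerate(row):
            if inside:
                lower18[y][x] = max(0, min(255, lower18[y][x] + offset))
```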
  • There is shown in Fig. 11 a flow chart for the process of providing the grading special effect, another feature of the preferred embodiment of the present invention.
  • the grading of an image involves the production of a gradation of gray scale values for an intermediary region based upon the gray scale values in adjacent operator-selected regions.
  • the region which is desired to be graded is selected, block 80. This can be accomplished with any of the selection tools such as windows, free- hand drawing, connection of operator-selected vertices, etc.
  • the "light” and “dark” regions are selected. Light and dark are only used as labels to distinguish the gradient intensity and direction of two gray scales from which the region to be graded is derived.
  • the selection of light and dark regions is not limited to one region apiece. As the light and dark regions only refer to the gray scale values from which the region to be graded is derived, the animator can position multiple light and dark regions around the region to be graded. These light and dark regions can be positioned randomly around or adjacent to the region to be graded.
  • gray scale values are assigned to the light and dark regions. These are the values from which the region to be graded will be derived. Note that the labels "light" and "dark" refer to a "light" region having a gray scale value which is greater than that of the "dark" region. These are merely labels to distinguish the lower gray scale value region from the higher gray scale value region.
  • the pixels in the region to be graded are assigned gray scale values based upon the linear relationship between the light and dark regions.
  • the light and dark regions can be placed exactly opposite each other (on opposite sides of the region to be graded), or can be positioned anywhere between 180° and 0°. As the light and dark regions approach each other (i.e., approach 0° apart), the effect that they have on the region to be graded diminishes.
  • the grading occurs by operating on each pixel in the light region with respect to each pixel in each dark region that is linearly related through the region to be graded to that pixel in the light region.
  • Linearly related refers to the relationship between the light and dark pixels and the region to be graded. At least one pixel in the region to be graded must be within a line segment extending between at least one pixel in each of the light and dark regions. Absent this linear relationship, there will be no pixels in the region to be graded which will undergo grading.
  • each pixel in the dark region is operated upon with respect to each pixel in the light region that is linearly related through the region to be graded to that pixel in the dark region.
  • These operations occur for each pixel in each light and dark region with respect to each pixel in the opposite contrast region that has a linear relationship through the region to be graded.
  • the necessity for a linear relationship between light and dark region pixels is why placing a light and dark region adjacent to each other, without the region to be graded between them, results in no grading of the region to be graded. Also affecting the grading is the distance between the light and dark regions and their angular relationship.
  • the new gray scale values are stored in lower memory storage structure 18, as shown in block 90.
  • the actual grading process operates by determining the difference in gray scale values between the light and dark regions.
  • a light region with a gray scale value of 150 and a dark region with a gray scale value of 50 yields a difference of 100.
  • the number of pixels which are linearly between the light and dark regions is determined.
  • the difference in gray scale values between the light and dark regions is then "ramped" according to the number of pixels linearly between the light and dark regions. If there are 100 pixels between the light and dark regions, and the light and dark regions have gray scale values of 150 and 50 respectively, then each pixel between a pixel in the light and a pixel in the dark regions would be incremented by one. This would result in the "between" pixels having values of 51, 52, 53 . . .
  • If the region to be graded has a length of 50 pixels in a line between a pixel in the light region and a pixel in the dark region, and the region to be graded is located 50 pixels from the dark region, then a gray scale value of 101 would be added to the pixel within the region to be graded which is closest to the dark region.
  • the gray scale of 102 would be added to the gray scale value of the next pixel within the region to be graded. This continues until all pixels in the region to be graded have offset values added to their underlying gray scale values. If the region to be graded has a gray scale value of 10 for all of its pixels, this would be added to gray scale values of 101 . . . 150 for the respective pixels.
  • the new gray scale values are assigned to all pixels inside the region to be graded. Regions which are either outside the region to be graded or not linearly between the light and dark regions are not affected. All linear relationships between light and dark regions (through the region to be graded) are determined on a pixel pair basis; i.e., a pixel from the light region must be linearly related (through the region to be graded) to a pixel in the dark region.
  • Multiple grading occurs when pixels in the region to be graded are linearly between multiple pairs of light and dark regions (or effective light and dark regions, due to overlap). These pairs need not be comprised of unique pairs, as many-to-one relationships may exist. This is handled sequentially by ordering the grading of multiple light and dark region pairs.
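For a single light/dark pixel pair, the grading arithmetic above is a linear ramp added to the underlying gray scales. The sketch below reduces the full region-pair iteration to one line of pixels; the names and the clamping are illustrative assumptions.

```python
def grade_line(underlying, light_value, dark_value):
    """underlying: gray scales of the pixels between the dark and light ends."""
    n = len(underlying)
    step = (light_value - dark_value) / n
    graded = []
    for i, gray in enumerate(underlying, start=1):
        ramp = round(dark_value + i * step)        # 51, 52 ... 150 in the example
        graded.append(max(0, min(255, gray + ramp)))
    return graded

# 100 in-between pixels, light 150, dark 50, underlying value 10 throughout:
graded = grade_line([10] * 100, light_value=150, dark_value=50)
print(graded[0], graded[-1])   # 61 160: the ramp 51..150 plus the underlying 10
```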
  • There is shown in Fig. 12 a flow chart illustrating the dissolve special effect.
  • the dissolve effect allows for the dissolving or fading in and out of one image into another. This also can take place over multiple images.
  • the source image or images to be faded in or out of an image are selected.
  • the image into which, or out of which, the fading is to occur is selected.
  • the number of frames over which the dissolve takes place are selected.
  • Block 96 shows the dissolving of the source image(s) into the destination image(s).
  • the dissolve takes place over the number of frames selected in block 95. This is reflected as a percentage. For example, if ten frames are selected, ten percent of the source image pixels will dissolve into the destination frame. This will continue through until the dissolve is complete.
  • the process is done, as shown in block 100. Until the dissolve is complete, the process loops back, and greater and greater amounts of the source image are faded into (or faded out of) the destination image.
  • There is shown in Fig. 13 a flow chart illustrating the merge special effect.
  • the merge effect is similar to the dissolve, except the transfer from source to destination occurs in only one frame at a preselected percentage of the gray scale value of the source pixels.
  • Merging allows for transparent effects such as images appearing through fire or smoke and also reflections of images. Examples include a person's image (source) reflected on a tile floor (destination) whereby the tile floor is discernable through the reflected (transparent) source image.
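Dissolve and merge can both be sketched with one weighted blend of per-pixel gray scales: merge applies a preselected percentage in a single frame, while dissolve steps that percentage over the selected number of frames. The blend here stands in for the patent's pixel-transfer mechanism, which is an assumption of the sketch.

```python
def merge(source, destination, percent):
    """Blend one frame at a preselected percentage of the source values."""
    return [round(s * percent + d * (1.0 - percent))
            for s, d in zip(source, destination)]

def dissolve(source, destination, n_frames):
    """Yield the image at each step of an n-frame dissolve (blocks 93-100)."""
    for frame in range(1, n_frames + 1):
        yield merge(source, destination, frame / n_frames)

src, dst = [200, 200, 200], [40, 40, 40]
frames = list(dissolve(src, dst, n_frames=10))   # 10% more source per frame
assert frames[-1] == src                         # dissolve complete at frame 10
```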
  • There is shown in Fig. 14 a flow chart illustrating the composite special effect.
  • the composite special effect works in the manner of cell animation. This allows the creation of several layers which are composited together to form a final image. This is often the case in animation, and allows for the creation and production of various parts of a complete animation sequence or a part of a character. For instance, this tool allows the animators to isolate an image or a part of an image and create animation for those individual images or parts of images. This "sub-animation” can then be composited into the other image or part of the image. An example of this is animating blinking eyes separately from the face in which the eyes are to appear. Through each frame of a sequence, a different "blink" of the eyes would be composited into the image of the face.
  • the composite tool completes an absolute transfer from the source image to a destination image.
  • the source image or images are selected.
  • the destination image or images are selected.
  • the images are composited, with the source images overwriting any memory locations occupied by the destination images.
  • Compositing can be done so that the source image is only transferred to areas where there are no pre-existing masks or masks of non-assignable pixels. In this way, a character walking behind another character will appear to walk behind that character, instead of having portions of the characters intermingled with each other as if they were transparent. If the compositing is complete, the process is finished. If not (there are additional layers to composite), the process cycles back to block 108.
  • the animator must, of course, decide the order of compositing so that a character who is to appear in front of a background will not have portions of it overwritten by corresponding areas of the background image.
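A minimal sketch of the composite's absolute transfer, with a mask of non-assignable pixels protecting the destination (the character-walking-behind case above). Gray scale 0 marking transparent source pixels is an assumption of this sketch.

```python
def composite(source, destination, protect_mask):
    out = list(destination)
    for i, s in enumerate(source):
        if s != 0 and not protect_mask[i]:   # skip transparent and protected
            out[i] = s                       # absolute transfer, no blending
    return out

# Layer order matters: composite the background first, then each nearer
# layer, so a foreground character overwrites the background behind it.
face = [0, 90, 90, 0]
blink = [0, 0, 30, 0]                        # separately animated blinking eyes
print(composite(blink, face, protect_mask=[0, 0, 0, 0]))   # [0, 90, 30, 0]
```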
  • There is shown in Fig. 15 a flow chart illustrating the process for the pixel offset special effect.
  • This tool allows an animator to pan background images across a series of frames, producing the effect of motion of these background images.
  • Many images can be set along a predetermined trajectory, such as clouds sliding across the sky.
  • the clouds can be translated over an X and Y coordinate distance and rate pre-set by the animator. This occurs from frame to frame and can be cycled after the images have left the "boundaries" of the screen.
  • the velocity of pixel offset can be modified so that there is a slow in and out of the apparent motion of an element moving across the screen.
  • the trajectory of movement can be programmed into the pixel offset operation.
  • the image to be offset is selected.
  • the X and Y coordinates for the offset distances are selected.
  • the offset is completed. This translates into the image moving across the screen.
  • the option of cycling the images once they have left the screen back to the other side of the screen is accomplished. This allows images to "wrap around" to the beginning of the next frame.
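The pixel offset pan is modular arithmetic on pixel coordinates, applied once per frame. A sketch in one dimension with the wrap-around cycling described above; the 2-D case applies the same shift in Y. Names are illustrative.

```python
def pixel_offset(row, dx, cycle=True):
    """Shift one scanline by dx pixels; wrap pixels that leave the screen."""
    w = len(row)
    if cycle:
        return [row[(x - dx) % w] for x in range(w)]
    return [row[x - dx] if 0 <= x - dx < w else 0 for x in range(w)]

sky = [1, 2, 3, 4, 5, 6, 7, 8]
for frame in range(3):                # clouds slide 2 pixels per frame
    sky = pixel_offset(sky, dx=2)
print(sky)                            # [3, 4, 5, 6, 7, 8, 1, 2]
```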
  • There is shown in Fig. 16 a flow chart illustrating the auto-in-betweening special effect.
  • This tool is useful where characters or images are moving along a predetermined trajectory or through a predetermined rotation, all at a relatively constant rate.
  • An example of this concept is a character that tumbles across a screen.
  • Another example is the movement of a character's arm, or leg, or head, etc. Any predictable motion can be determined by this tool.
  • the image (or a portion of an image) to be auto-in-betweened is selected.
  • the angle of rotation around a fixed point is selected.
  • the trajectory of the image is selected. Note that, if an image is not rotating, only a trajectory will be selected. Conversely, if an image is rotating without moving, only an angle of rotation will be selected. It is also possible to select particular locations instead of supplying measurements for the distances that the images are to move across.
  • the number of frames in which the motion is being undertaken are selected.
  • the in-between frames are determined by the system.
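For constant-rate motion, auto-in-betweening is linear interpolation of the selected trajectory and rotation across the selected frame count. A hedged sketch, with the (x, y, angle) pose representation as an assumption.

```python
def auto_in_between(start_xy, total_dxdy, total_angle, n_frames):
    """Return an (x, y, angle) pose for each generated in-between frame."""
    poses = []
    for frame in range(1, n_frames + 1):
        t = frame / n_frames                    # constant rate of motion
        x = start_xy[0] + total_dxdy[0] * t
        y = start_xy[1] + total_dxdy[1] * t
        poses.append((x, y, total_angle * t))
    return poses

# A character tumbling across the screen: one full rotation over 8 frames
# while translating 240 pixels in X.
for pose in auto_in_between((0, 100), (240, 0), 360.0, 8):
    print(pose)
```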
  • the blurring function allows an animator to blur or soften colors by averaging gray scale values of selected numbers of pixels.
  • To blur a portion of an image (i.e., a "rosy" cheek on a face),
  • an active mask is set up consisting of the portion to be blurred and the surrounding area of the image into which the blur will fade.
  • An example would be designating an outline of a face without eyes, nose, etc., but with the cheeks to be blurred as a mask.
  • This mask is displayed in graphics bit plane 12.
  • the parts of the image which are not part of the mask are displayed in graphics bit plane 12 in their proper position on the mask. This means that the eyes, nose, etc., are displayed on the face.
  • a value is selected corresponding to the number of pixels on either side of a pixel undergoing the averaging which are to be included in the processing.
  • an average of the gray scale values of a selected number of adjacent pixels is made and assigned to the pixel being processed.
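The averaging described above is a box average restricted to the active mask. A one-dimensional sketch; the mask setup of the preceding paragraphs is omitted and the names are illustrative.

```python
def blur(grays, mask, radius):
    """Average each masked pixel with `radius` neighbors on either side."""
    out = list(grays)
    for i in range(len(grays)):
        if not mask[i]:
            continue                       # only masked pixels are softened
        lo, hi = max(0, i - radius), min(len(grays), i + radius + 1)
        out[i] = round(sum(grays[lo:hi]) / (hi - lo))
    return out

cheek = [10, 10, 200, 200, 10, 10]         # a hard-edged "rosy" patch
print(blur(cheek, [1] * 6, radius=1))      # [10, 73, 137, 137, 73, 10]
```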
  • There is shown in Fig. 17 a sphere having intersection points (201-226) at the various intersections of latitude lines (227-229) and longitude lines (230-237).
  • the number of intersection points in Fig. 17 is illustrative of the method and system embodied in the present invention.
  • the number of intersection points shown in Fig. 17 is not a limitation of the present invention. It should be understood that many more or fewer intersection points are possible, depending on the particular application for which the present invention is being used.
  • intersection points are assigned a unique name.
  • This name can be a number, as is the case in Fig. 17. It is necessary to ensure that each name is unique.
  • Each image or part of an image is drawn in a position corresponding to an intersection point on the sphere.
  • the location of an image on the sphere corresponds to the movement of the image in a spherical coordinate system.
  • the nose and head of a character are shown in numerous positions (intersection points) located on the sphere.
  • the parts of the head are shown in five positions. Each of these positions corresponds to the movement of the head, with its axis of rotation in the center, to a location at an intersection point on the sphere. For example, at position A the head is initially facing straight ahead without any tilt or rotation. If the head turns a number of degrees to its right (position B), the corresponding partial profile position is shown at the equatorial position adjacent to the head's base equatorial position. This process is continued throughout the sphere, including reverse images (not shown) as if the object were actually turning around in the sphere. In this case, only the rear or back of the head of the object is seen by an observer. The corresponding locations on the far side (or back) of the sphere can be displayed when included. Fig. 18 shows only a few of the possible head positions.
  • Each image located at an intersection point of the sphere is assigned a unique name corresponding to the location of the intersection point on the sphere.
  • the location (intersection point) on the sphere is designated by a number, called an orientation code (see Fig. 23, orientation code 290).
  • the part, itself, is given a file name.
  • the combined image representation is now comprised of the file name for the part and its location on the sphere.
  • An example could be "head,1." This shows that the head is the object (part) in question and 1 is the location of its position and orientation on the sphere.
  • This unique name (file name, location on sphere) can be called up at any time for producing or choreographing an animation sequence.
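The naming scheme amounts to keying each stored drawing by its part file name plus the orientation code of its intersection point on the sphere. A minimal sketch with placeholder library contents:

```python
library = {
    ("head", 1): "head facing straight ahead (position A)",
    ("head", 2): "head turned partially to its right (position B)",
}

def fetch(part, orientation_code):
    """Call up one unique (file name, location on sphere) entry."""
    return library[(part, orientation_code)]

print(fetch("head", 1))   # retrieves the drawing named "head,1"
```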
  • Fig. 19 shows a sphere with the eyes and mouth of the image from Fig. 18 drawn at each intersection on the sphere corresponding to the positions (A-E) of the image from Fig. 18.
  • the parts in Fig. 19 are considered linked parts.
  • a linked part is a part which has its own orientations on the sphere, but which moves in conjunction with another part.
  • An example of this is the eyes and mouth (shown in Fig. 19) on a face (shown in Fig. 18) .
  • the eyes and mouth move in corresponding manner. This is true, even though the eyes and mouth may have their own separate and independent movements. For instance, as a face moves through a range of motions, eyes can be blinking as the face is moving.
  • There is shown in Figs. 21 and 22 an image with multiple parts (extremities) 241-253 attached to a base image 240. Each of these attached parts is connected at "attach" points 254-266.
  • An attach point is the location on an object at which it is attached to a corresponding point on another object. Many attach points can occur at multiple positions on an image.
  • an arm is typically thought of as composed of several parts.
  • both right and left arms are shown.
  • the fingers are drawn incorporated into hands 244 and 247.
  • Each of these arm segments has an attach point.
  • the upper arms connect to the shoulder at points 255 and 258. At the same time, they connect to the lower arms at attach points 256 and 259.
  • the lower arms conversely connect to the upper arm while at the same time connecting to the hand through the wrist at attach points 257 and 260.
  • the part of an image which is moving at a particular time in an animation sequence is important in determining whether a part is an attach part or a linked part.
  • the body and its appendages do not necessarily have to move in relation to a head turning. It is easy to see that a person can move his head while maintaining his body position.
  • the shoulder must turn along with the body.
  • the arm may still remain visible or contort to its approximated prior position, but, nevertheless, at least part of the arm moves in conjunction with the shoulder.
  • the attached part (the arm) may be linked to the turn of the body.
  • the attached part (the arm) would not be linked to the turning of the head.
  • the linking or non- linking of parts to the movement of other parts is contained in a file which catalogs which parts are linked to the movement of other parts, as well as showing which parts are attached to other parts.
  • the designation of linking or attaching is detailed by the animator.
  • the animator then creates the file using graphical manipulation and interface tools, or alternatively by typing out the file using a keyboard.
  • attached parts are distinct from linked parts.
  • linked parts have an identical sphere of motion to that of the part with which they are linked.
  • a mouth moves in correspondence with the head movement; therefore, it has an identical range of motion and orientation sphere. For mouth movements, such as the movements necessary to speak, each mouth movement has its own sphere.
  • the mouth movement for the letter "A” has a sphere of motion corresponding to the head motion, whereas another sphere must be generated for the mouth motion necessary for the letter "E".
  • a half-a-dozen or so mouth movements are generated to cover the entire range necessary for visualizing speaking.
  • more mouth movements might be added.
  • a new sphere must be drawn by the animator corresponding to that mouth movement, so that its range of motion and orientation on the sphere correspond to the corresponding movements of the head.
  • the attached part does not re-orient itself with the part with which it is attached.
  • In the case of an arm and a moving torso, the arm must be re-oriented by the animator to adjust for the rotation of the torso.
  • This is the present embodiment of the invention and not a limitation thereof. It is wholly within the spirit and scope of the invention to adjust the orientation of attached parts to correspond with the orientation of the parts with which they are attached. In doing so, a rotation of the attached parts is made to compensate for the rotation of the part to which it is attached.
  • the spherical library of movements for the arm is a library of positions corresponding to rotation about an axis located at the shoulder attach point.
  • the position of the attach point (shoulder) changes, even though the range of motion of the arm is still about its attached point.
  • an arm moving about a shoulder has its axis point at the shoulder. This can be visualized as the shoulder being at the center of the sphere. As the arm moves to different intersection points on the sphere, it always moves about its central axis point, the shoulder. As a torso moves through its sphere, the arm still retains its independent sphere of motion. The complicating factor in this motion is that the independent sphere of motion of the arm is now viewed differently from the perspective of the observer looking into the paper. Imagine an arm when lifted up from the shoulder being pivoted sideways as the torso turns. This results in a different viewing of the arm, even though the arm is moving within its independent sphere of motion. By rotation of the arms spherical locations, it can tied or linked to the corresponding torso locations.
There is shown in Fig. 23 a description of the data structures of the host computer. The present invention uses AT-compatible computers using 386 or 486 microprocessors, associated memory and support circuitry, plus an intelligent graphics controller and associated memory. The basic element of the host data structure shown in Fig. 23 is base part record 270 (270a-270f). The base part record contains information for each part which is an element of an image representation library called the screen. In the simplest case, there is only one base part record; this is the case where the outline of an entire image is requested through only one sphere. This simple case does not have any attached parts.
The primary information contained in base part records 270 consists of an orientation list (ORILIST) 271, a transformation list (TRANLIST) 272, a link part array (LINKARRAY) 273, the number of link parts (NLINKS) 274, a group list 275, and information to manage the hierarchical data structure. This linking information is shown as SIBLING 276 and ATTACH 277. ORIFILE 278 indicates the file which contains the valid orientation codes for the part.
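As a reading aid, here is a minimal C sketch of base part record 270 built from the fields just named. The array bounds, field types, and the reuse of the 32-frame limit mentioned later in the document are assumptions, not the patent's actual layout.

```c
#define MAXFRAMES 32   /* the present system works with 32 frames */
#define MAXGROUPS 16   /* assumed upper bound on link groups      */

struct transform {     /* one record per frame, when allocated    */
    int dx, dy;        /* movement                                */
    int rotation;
    int scale;
};

struct base_part {
    int orilist[MAXFRAMES];              /* 271: orientation code per frame */
    struct transform *tranlist;          /* 272: NULL until the part is
                                                 actually transformed       */
    int linkarray[MAXFRAMES][MAXGROUPS]; /* 273: link part per frame/group  */
    int nlinks;                          /* 274: number of link parts       */
    int grouplist[MAXGROUPS];            /* 275: group indices (G#)         */
    struct base_part *sibling;           /* 276: next part at this level    */
    struct base_part *attach;            /* 277: first attached part        */
    char orifile[32];                    /* 278: file of valid orientation
                                                 codes for the part         */
};
```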
Orientation list 271 is set up for each base part and, in turn, each base part record 270. The array contained in orientation list 271 contains the base part's orientation for each frame. It should be noted that in any given frame, all parts and all images have exactly one orientation. Changes in orientation correspond to movement, and animation is accomplished through changes from frame to frame of the images and the various parts of images.
A transformation list 272 is set up for every base part that is to be graphically transformed. This array contains one transformation record for each frame, indicating the transformation that is to take place on that part for that frame. Example transformations are movement, rotation, and scaling upward or downward from an image's original form. Transformations take place during the animation choreography process. Because of this, the spherical image library need only contain the images at a unit size; in this way, only one spherical representation library is necessary, even though the images can be adjusted, contorted, and modified by animation choreography. Transformation information is only allocated for a given part when it becomes apparent that the part is to be transformed, and the absence of transformation information on a part means that the part is not transformed.
Link array 273 is set up for every base part that has link parts defined for it. A link part is a part which is drawn in conjunction with a base part at the same position, orientation and transformation as the base part. There may be one or many link parts defined for a given base part. The origin of the link part is drawn at the same location as the origin of the base part to which it is linked.

A link part must belong to one link group. A link group is used to name a collection of link parts that are used for one purpose. For example, there may be a series of eye parts for a particular head, but only one eye part is to be displayed in a particular frame or sequence of frames. A link group may be used to indicate all of the eye parts. For uniformity, all of the parts which belong to a link group have the first few letters of their identification name in common, and these common letters are used for the link group name. The link group for eyes might be called EYE, while part names of this link group might be EYE OPEN, EYE CLOSED, EYE HALF, etc.

Link array 273 is two-dimensional in the present embodiment of the invention. The first "dimension" of the link array indicates the frame number in which the particular link part is displayed; the second "dimension" is the link group for the given link part. This allows for a series of link parts to be defined for each frame, if necessary.
Group list 275 is set up for every base part that has link parts. Each entry in group list 275 contains the link group name (actually a link group index) that defines a group of link parts. A group index (G#) is an index into a list of group names; this is a way of indicating a particular group within the data structures of Fig. 23.

Group list 275 works in conjunction with link array 273. Information is set up in base part record 270 so that it can be easily changed on a per-frame basis. At a given frame, the orientation of the base part, as well as any link parts which may be present, is changed by simply changing one number in the orientation array. A link part can be replaced (e.g., EYE OPEN replaced with EYE CLOSED) simply by replacing the link part number in link array 273 with another one from the appropriate group listed in group list 275.
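A sketch of that per-frame replacement, assuming the two-dimensional indexing just described; the group index and part numbers are made up for illustration.

```c
#define MAXFRAMES 32
#define MAXGROUPS 16

/* link array indexed [frame][group], as described above */
typedef int link_array_t[MAXFRAMES][MAXGROUPS];

enum { G_EYE = 0 };                      /* assumed index of the EYE group */
enum { EYE_OPEN = 10, EYE_CLOSED = 11 }; /* assumed part numbers           */

/* replacing EYE OPEN with EYE CLOSED at one frame changes one number */
void close_eyes(link_array_t links, int frame)
{
    links[frame][G_EYE] = EYE_CLOSED;
}
```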
Sibling 276 relates to multiple base parts being tied together or defined to exist at a root level. These multiple parts are completely independent of each other, yet tied to the same root level. This allows multiple characters to be choreographed together in one session. Base part data structure 270 supports this by having each base part linked into sibling list 276, beginning from the "first part," which is always the root part.

A single base part can also have other parts attached to it, forming more complex images. The attached parts are affected by the location, orientation and transformation of the given base part. All attachments for a base part are kept in a standard linked list, sibling list 276. In the image shown in Figs. 21 and 22, it can be seen that a body part having five attach points, including two arms, two legs, and one head, has five sibling parts. These five parts are considered siblings among themselves, since they are affected in common by the base part and its related movements, but do not affect one another. Each sibling part is independent of the other sibling parts attached to the base part.
Each of the sibling parts may in turn have its own attached parts. For example, each arm includes an upper arm, a lower arm, and a hand. Though these components may move in conjunction with one another, each has its own independent spherical representation library of movements (e.g., upper arms 242 and 245 in Figs. 21 and 22). Because numerous parts have their own spherical representation library, changes in orientation and movement of a base part can tie directly to the changes in orientation and movement of an attached part. Each attached part must be assigned to an attach point (e.g., 254-266 in Figs. 21 and 22) that must be drawn on each orientation of the base part. Once an attached part has been created, it can become its own base part (e.g., base part 270b and base part 270e). In this way, a hierarchical tree of parts is built around the first base part, and all parts in the system are always affected by the parts to which they are subordinate.
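The hierarchy can be pictured as the traversal sketch below: each part's effect is applied before recursing into its attached parts, while siblings are walked independently at the same level. The node layout and the placeholder apply_part() are assumptions.

```c
#include <stdio.h>

struct part {
    const char  *name;
    struct part *sibling;   /* next independent part at the same level */
    struct part *attach;    /* first part attached to this one         */
};

/* stand-in for applying a part's position, orientation and transformation */
static void apply_part(const struct part *p) { printf("draw %s\n", p->name); }

void draw_tree(const struct part *p)
{
    for (; p != NULL; p = p->sibling) { /* siblings do not affect each other */
        apply_part(p);                  /* ancestors affect all subordinates */
        draw_tree(p->attach);           /* attached parts inherit the effect */
    }
}
```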
Name list array 283 contains part name 285 and address list 286, which is a pointer to the GSP address list (shown in Fig. 24). This is the information entry point to the GSP data structure. Group list array 284 contains group name 287, as well as information on which parts are contained in the group. N#, or name index 288, is an index into a list of part names. G#, or group index 282, is an index into a list of group names; this is a way of indicating a particular group within base part record 270.
The GSP is a TIGA™-compatible graphics board having its own local memory. The GSP data structures contain the actual vector data needed to draw each part. Vector data is the basic data which describes the graphic drawing; it is composed of lines and points (vertex points) which, when connected appropriately, describe the desired image. Part data is downloaded to the GSP on an as-needed basis from a memory storage device such as a hard disk drive, or from a virtual disk drive set up in faster RAM memory.
Address list 286 is associated with the GSP and is an array of GSP addresses which point to part data. The desired orientation code 290 (contained in orientation list 271, Fig. 23) is used as an index into address list 286. An entry of zero in address list 286 indicates that a part is not loaded, nor has any load been attempted for that part. If the entry is a negative one, it means an attempt was made to load the part, but part data was not found; in that case, no further attempts need be made to load the data for this part at this particular orientation. This may occur if the corresponding orientation for the part is not present in the spherical representation library of orientations for that part. If the entry is neither a zero nor a negative one, then it is a GSP address to part data.
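In code, the three-valued convention might look like the sketch below; the loader is a stub standing in for the actual disk-to-GSP download, and the use of intptr_t for GSP addresses is an assumption.

```c
#include <stdint.h>

#define NOT_LOADED  ((intptr_t)0)    /* no load ever attempted         */
#define LOAD_FAILED ((intptr_t)-1)   /* attempted, part data not found */

/* stub: pretend every requested orientation is absent from the library */
static intptr_t load_part_data(int orientation_code)
{
    (void)orientation_code;
    return NOT_LOADED;
}

/* returns a GSP address, or 0 if the part cannot be drawn at this orientation */
intptr_t get_part_address(intptr_t *address_list, int orientation_code)
{
    intptr_t entry = address_list[orientation_code];
    if (entry == LOAD_FAILED)
        return 0;                        /* known missing: never retry */
    if (entry == NOT_LOADED) {
        entry = load_part_data(orientation_code);
        address_list[orientation_code] = entry ? entry : LOAD_FAILED;
    }
    return entry == LOAD_FAILED ? 0 : entry;
}
```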
The actual part data is contained in a fixed header section consisting of extent 292 and attach points 293, followed by a variable-size section consisting of all vector data for the part 294. Vector data is terminated by a pair of zeros in termination record 295. The vector data consists of a series of polylines. Each polyline contains a number of coordinates and a delimiter, followed by the actual coordinate data. The first coordinate in a polyline always represents the "move to" position or starting point; all remaining coordinates in the polyline are "draw to" positions, which follow the initialization established by the move-to position.
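A sketch of walking that format: here the per-polyline delimiter is taken to be the coordinate count itself, which is an assumption, since the text does not spell out the exact word layout.

```c
#include <stdio.h>

void draw_part(const short *v)
{
    while (!(v[0] == 0 && v[1] == 0)) {     /* termination record: 0,0    */
        int n = *v++;                       /* number of coordinate pairs */
        printf("move to (%d,%d)\n", v[0], v[1]);
        v += 2;
        for (int i = 1; i < n; i++, v += 2) /* remaining pairs: draw-tos  */
            printf("draw to (%d,%d)\n", v[0], v[1]);
    }
}

int main(void)
{
    /* a triangle: 4 pairs (move + 3 draws), then the 0,0 terminator */
    static const short tri[] = { 4, 10,10, 50,10, 30,40, 10,10, 0,0 };
    draw_part(tri);
    return 0;
}
```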

Abstract

There is provided a system and method for a computerized animation system comprising a series of images stored in a memory storage structure (10). These images are comprised of picture elements, and the memory storage structures comprise a plurality of bit planes (12, 14, 16, 18) which are further divided into at least two storage structures. A first storage structure (22) is used for storing operator-modified image information in a vector representation, and a second storage structure (12, 14, 16, 18) is used to store a raster representation of the operator-modified image information. The system includes means to display multiple images from the second storage structure to facilitate the creation of additional images. There is also provided a system and method for a computerized animation system wherein an image is created in a first orientation within a global coordinate orientation system. Subsequent images representing new orientations of the first image are also stored.

Description

GLOBAL COORDINATE IMAGE REPRESENTATION SYSTEM
This application is a continuation-in-part of pending U.S. Patent Application Serial No. 07/526,977 filed May 22, 1990.
FIELD OF THE INVENTION
This invention relates to computer animation and more generally to computer graphics and image processing.
BACKGROUND OF THE INVENTION
Animation is a process by which a series or sequence of hand or computer drawn images are combined in such a way as to appear to the viewer that an image or multiple images have moved from one frame to another. As films and videos are actually a sequence of still pictures linked together, it is necessary to relate the images from one frame to another so that the transition appears fluid when shown at the speed of the final product (30 frames per second for video) .
The most prevalent animation technique is called cell animation. In cell animation, various layers or "cells" are hand drawn by animators on transparencies (or on paper which is later transferred to a transparency) , depending on the stage of the process. Typically, each cell tracks an individual object. This type of a system is referred to as a "paper" system. For a typical cell ani ation application, the first step is to draw the image in stick or outline form for the first frame. Next, this image is redrawn as it would appear a number of frames into the future. For purposes of example, this future frame is nine frames into the future. After these two frames are drawn, the "in-betweening" step takes place. Here, the "in- betweener" begins by drawing the frame which occurs in the middle of frames 1 and 9 (frame 5) . After this step, the frame between the extreme frame and the first in- between, frame (frames 1 and 5) is drawn, and so on until all of the images occurring in successive frames (1 to 9) have been, drawn. In-betweening in paper systems is accomplished with backlit paper, so that the outer frames are in the in-betweener's view when he or she is drawing the in-between frames.
The first pass of "in-betweening, " called pencil testing, is usually drawn in pencil or on individual pieces of paper that are pin registered. The paper drawings are then videotaped for viewing the accuracy of the animation. This allows for verification of lip synching, expression and movement of the elements of a set of frames. The next pass is called inking, where the pencil drawing is traced with clean lines drawn in ink. The third step involves photocopying, followed by hand painting, and the final step of compositing. In compositing, all of the cells (layers comprising each animated image) for an individual frame are stacked on top of one another and photographed to make up each frame for the animated sequence. Traditional cell animation uses approximately three layers (transparencies) , where each layer is considered a "cell." Each cell is drawn in outline form, and then turned over and painted from the back. Next, each cell is layered, one on top of another, to produce a final image on film. In reality, though three layers are involved, there may actually be four or five cells produced. This is because each layer may itself involve multiple cells.
To produce high quality and more realistic animation, it is always essential to have the animated images track the sound or speaking with which the images will be displayed. To ensure this identity between sounds and images, the soundtrack and speaking parts are usually recorded prior to the animated images, and the animator, in creating the images, tracks the sound. This means that, as an animated character is speaking, the animator draws the mouth and facial expressions to sync with the pre-recorded soundtrack.
There are various software programs on the market which facilitate the drawing of images. CAD (Computer-Aided Design) programs use an equation to generate a series of screen pixels between two points. A CAD program, then, may represent an image being drawn as a set of vectors. The use of an equation to represent the image information allows for complicated effects, such as image manipulation, translation and rotation.
Other drawing programs work with raster-represented image information. This is also referred to as bit mapping. In this technique, an image is drawn and stored as a map of pixels on the screen. Manipulation of images is much more limited, since there is a one-to-one correlation between what is stored for each pixel and what is displayed for each pixel. This is in contrast to an equation or vector based system, where each pixel can be altered by changing the variables in the equation. One benefit, however, of a raster-representation system is its simplicity, as contrasted with the complex calculations occurring in a vector or graphics-based system.
In traditional paper-based animation techniques, an animator will draw successive images, each incorporating a slight modification, on separate pages. When the individual pages of paper are viewed in rapid succession, the modifications of successive images provide the viewer with the perception that the image is moving. The modification of images may involve different orientations, profiles, and/or poses.
As a reference, the animator may have a set of drawings on paper called turnarounds that provide multiple orientations and poses. When the animator needs to draw an image in a particular position, he will reference the turnarounds and draw an image on a separate piece of paper in whatever scale and whatever modification are deemed necessary by the animator.
With this system of referenced positions for images, it is necessary for the animators to draw each image in each position in which it occurs. An image may occur every frame or every other frame, if the animation sequence is being displayed in twos, for instance. If any changes or modifications are necessary after images are drawn, the entire sequence of images may have to be completely redrawn.
Traditional cell animation mitigates the problem of redrawing entire sequences by breaking down a scene into different layers or cells of images. In this case, an entire scene would not have to be redrawn should any modifications or corrections be necessary. It is still necessary, however, to draw each image at each occurrence.
Paint programs for personal computers, such as Microsoft Paint™, allow a user to select from a library of shapes or templates of objects, for use in drawing. Typically, a user will first select an object, such as a circle. This is dragged or brought over to the drawing area and then appears on screen as a circle. The user can adjust the size of the circle by drawing a radius line or "stretching" the circle. Paint programs work on pixels, as the images are represented as bitmaps. This can also be referred to as a raster representation.
Computer Aided Design (CAD) software programs are also used for creation and manipulation of images. CAD programs do not manipulate pixels, but rather vectors. These vectors are then later converted into pixel representation for display on screen. Manipulation of vectors provides for a greater degree of modification of images. Like paint programs, CAD programs have libraries of objects which can be brought into the drawing area and manipulated to create vector-represented images.
SUMMARY OF THE INVENTION
The present invention involves a computerized animation system which comprises a series of images stored in various representations in a plurality of memory storage structures. The images are broken down into several components, which facilitate the creation and modification of images and the application of color thereto. The images are first drawn and modified in a vector representation. The vector-represented images are then stored in a raster representation for display and creation of subsequent images. Subsequent additions or modifications to the stored raster representation are facilitated by modifications or additions of the vector representations, then restored as a new raster representation. Finally, gray scale information and color region information (which, in turn, addresses color look-up tables) is assigned to each pixel of an image. This information is then used to construct unique transfer functions to produce the final color versions of the animated images.
The present invention also involves a computer-based method for producing animation sequences by storing images in several orientations, each orientation corresponding to the rotation of the image in a global coordinate system. In this way, an image's desired rotational and pivotal movements are represented. All of the orientations of the image are then stored for later retrieval in putting together an animation sequence. An image can also be broken down into primary and secondary parts, with secondary parts linked to the primary part as well as other secondary parts. In this manner, as one part moves or changes its orientation, related movements of secondary parts are automatically implemented by the system.
BRIEF DESCRIPTION OF THE FIGURES
Fig. 1 shows an expanded view of a memory structure of the present invention.
Fig. 1A is a block diagram of an apparatus in accordance with an embodiment of this invention.
Fig. 2 shows a diagrammatic view of the use of the memory structure of the present invention in animation.
Fig. 3 shows storage in the lower memory storage structure of the present invention for a series of frames.
Fig. 4 shows storage in the upper-upper memory storage structure for a series of frames.
Fig. 5 is a flow diagram of the animation process of the present invention.
Fig. 6 is a more detailed flow diagram of the first three steps illustrated in Fig. 5.
Fig. 7 is a more detailed flow diagram of the gray scale assignment process.
Fig. 8 is a more detailed flow diagram of the color assignment process.
Fig. 9 is a flow diagram of the shading special effect.
Fig. 10 is a flow diagram of the shadowing special effect.
Fig. 11 is a flow diagram of the gradient shading special effect.
Fig. 12 is a flow diagram of the dissolve special effect.
Fig. 13 is a flow diagram of the merge special effect.
Fig. 14 is a flow diagram of the composite special effect.
Fig. 15 is a flow diagram of the pixel offset process.
Fig. 16 is a flow diagram of the automatic in-betweening process.
Fig. 17 is a sphere illustrating the present invention as a Global Coordinate System.
Fig. 18 shows an image in different positions corresponding to its location on the sphere.
Fig. 19 shows another image with its position corresponding to its location on the sphere.
Fig. 20 shows a sphere with the images of Figs. 18 and 19 combined into one object with the position of each object corresponding to its location on the sphere.
Fig. 21 shows a figure illustrating attach points of several objects.
Fig. 22 shows an exploded view of the image of Fig. 21.
Fig. 23 is an illustration of the data structures used in the present invention.
Fig. 24 is an illustration of additional data structures used in the present invention.
DETAILED DESCRIPTION OF THE INVENTION
There is shown in Fig. 1 a memory storage structure 10. Memory storage structure 10 comprises a lower memory storage structure 18, an upper memory storage structure 16, an upper-upper memory storage structure 14 and a vector storage structure 22. The information from vector storage structure 22 is translated into raster information and stored in memory structure 12 prior to storage in upper memory storage structure 16 and lower memory storage structure 18. Upper memory storage structure 16 comprises a plurality of bit planes 16a-16d. Each of these bit planes is further comprised of a plurality of bits 20. Bits 20 correspond to the picture elements (pixels) which make up a visual image as it is displayed on a display device. In memory storage structure 16, each of the bit planes 16a through 16d represents an independent (one level of information) image. Thus, memory storage structure 16 comprises four bit planes of independent information by which four monochrome images can be stored.
In the present invention, for animation, each of the four bit planes in upper memory storage structure 16 can contain an image. These images are comprised of a group of bits, and are termed raster images. Temporarily storing a raster represented image in one of the bit planes for motion reference (i.e., bit plane 16a), and then temporarily storing an image representing motion or animation of the earlier image in an additional bit plane for motion reference (i.e., bit plane 16b) allows for animation sequences to be created. Using images in this way is referred to as "ghosting" of images.
Before an image can be stored in any of the bit planes in upper memory storage structure 16, the image must first be "drawn" or created. In the present invention, images to be animated are created using a computer animation station. A block diagram of an example computer animation station 8 is shown in Fig. 1A. This animation station includes a computer 2, containing: pipeline architecture image processing hardware with a central processing unit and related hardware 6, which operates on the described memory structures 10; a digitizing device 4, such as a Summagraphics™ tablet; and graphics display hardware 9, such as an extended graphics adapter (EGA) monitor. The image memory is 16 bits deep, and is organized as a two-dimensional rectangular array. The horizontal dimension of the memory is 2K words, and may be expanded. The upper and lower bytes of the memory may be addressed separately, so that two different 8-bit images can occupy the same area of memory simultaneously. One of these bytes is further divided into the two 4-bit segments comprising upper memory storage structure 16 and upper-upper memory storage structure 14. Image memory is controlled through a group of registers that physically reside on an image memory interface board. The registers are accessible through I/O ports. The five registers consist of control and status registers. Generally, the control register selects a mode of operation for a transfer of image data, and the address registers control the address where the data is placed.
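Under the stated layout, one 16-bit image-memory word might be unpacked as in the sketch below; which 4-bit segment sits in which nibble is an assumption, since the text only says the second byte is split between structures 16 and 14.

```c
#include <stdint.h>

/* lower byte: 8-bit gray scale (lower memory storage structure 18) */
#define GRAY(w)    ((uint16_t)(w) & 0x00FFu)
/* low nibble of the upper byte: ghost bit planes (structure 16)    */
#define GHOST(w)   (((uint16_t)(w) >> 8) & 0x0Fu)
/* high nibble of the upper byte: color region (structure 14)       */
#define REGION(w)  (((uint16_t)(w) >> 12) & 0x0Fu)

static uint16_t make_pixel(unsigned gray, unsigned ghost, unsigned region)
{
    return (uint16_t)((gray & 0xFFu) | ((ghost & 0x0Fu) << 8)
                                     | ((region & 0x0Fu) << 12));
}
```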
Referring back to Fig. 1, the image to be animated is drawn in vector form with a drawing device and stored in vector files in vector storage structure 22 as a vector representation. This vector representation is a series of vector defined lines (line segments) created by the animator prior to storage in vector storage structure 22. Vector storage structure 22 can contain numerous vector files. As the vectors cannot be displayed per se, they must be translated into a raster representation and displayed in graphics bit plane 12.
Graphics bit plane 12 comprises a graphics bit plane such as an EGA bit plane. The information in graphics bit plane 12 is displayed directly on a display device, such as a computer monitor. Therefore, though the information for the image as it is being drawn is displayed in a raster format (so that the animator can see the image as an image, and not a string of numbers) , the image is actually generated from a vector file stored in the host memory of the computer. Using vectors facilitates manipulation and modification of the images as they are being created, as well as after they are created.
Using an on-screen menu, the animator can store the vector represented image in one of the four bit planes of upper memory storage structure 16 selected by the animator. Storing the vector represented image in upper memory storage structure 16 displays the image in a raster represented format. In this way, each pixel of the image (an outline figure) is stored and addressable as a single bit in one of the four bit planes of upper memory storage structure 16.
Though there are four bit planes in upper memory storage structure 16 of the present invention, it should be understood by those skilled in the art that any number of bit planes could be included in upper memory storage structure 16. It should be equally well understood that not all of the bit planes in upper memory storage structure 16 need be used. The optimal use of upper memory storage structure 16 is to store images to be animated in different stages of motion so that these images can be displayed simultaneously (ghosted), allowing the animator to draw or create images in-between the ghosted images. Thus, the ghosting of images serves as a reference for the animator.

Lower memory storage structure 18 contains eight bit planes in the present embodiment of the invention. As with upper memory storage structure 16, those skilled in the art will understand that lower memory storage structure 18 can contain a fewer or greater number of bit planes depending on the desired memory structure design. In the present embodiment, eight bit planes are used so that 256 shades of gray (gray scale values) can be addressed and assigned to the images created. After an animator is satisfied with the image or images which are created, gray scale values are assigned to the images. Though not necessary, it is typical for animated images to be filled, either in monochrome or in color. If the animated images are to be filled in, the animator assigns gray scale values to the different areas of the animated image, and then stores the gray scale selected image in lower memory storage structure 18. In contrast to upper memory storage structure 16, which comprises independent bit planes, lower memory storage structure 18 comprises dependent bit planes. These dependent bit planes comprise an eight-bit word (in the present embodiment) for each pixel. Thus, each pixel can be assigned a gray scale value from a range of 256 values.
Also of note, it is possible to store the raster representation of the image directly into the lower memory storage structure 18 instead of first storing the raster representation of the image in upper memory storage structure 16.
If the animated images are to be colored, it is still necessary to assign a gray scale value to the different regions to be assigned colors. Gray scale values are used by the color transfer function (disclosed in Applicant's related application, European Patent Office Publication No. 0,302,454, and incorporated by reference herein), which provides for the application of color to the images.
To review, the animator creates an image in a vector-represented format which is stored in a vector file contained in vector storage structure 22. The vector-represented image is translated into a raster representation in graphics bit plane 12, so that it can be displayed visually as an image on a display device. This image can then be stored in a raster representation in one of the independent bit planes (16a-16d) of upper memory storage structure 16. Subsequent images, representing the image in various stages of motion, can also be stored in the additional bit planes of upper memory storage structure 16. Storing these subsequent images allows the animator to display the images simultaneously on the display device so that in-between stages of the image to be animated can be drawn.
After the animator is satisfied with the images, the animator selects gray scale values to be assigned to the various regions of the images. Once the gray scale values are selected for the regions of the images, the images with the gray scale value information are stored in lower memory storage structure 18. The bit planes of lower memory storage structure 18 are dependent. In this way, each pixel of lower memory storage structure 18 is represented by 8 bits of information. Thus, the animator can assign a gray scale value from a range of 256 gray scale values to the regions of the images.
To facilitate the ghosting of images, the images stored in the bit planes of upper memory storage structure 16, in the preferred embodiment of the present invention, are assigned a different color, solely for display purposes. This color is wholly unrelated to the ultimate color of an image. Therefore, when the animator chooses to display multiple images from the bit planes of upper memory storage structure 16, the displayed images will appear in different colors. This allows for easier distinction on the part of the animator between the image being created and the image in its various stages of motion. As an additional aid, the animator can select the intensity of the colors to be displayed. By choosing the color and the intensity of the color for images in each of the bit planes, the animator can adapt the system to produce the most effective workspace environment.
Another tool used by the animator is the cycling tool. The cycling tool allows the animator to "run" the images in order to test for the smoothness of the animation. This is similar to the flipping of pages in a paper animation system.
In order to cycle through the images, the animator can choose either an automatic or manual mode. Automatic mode runs through the images at a preselected speed in a forward, then backward direction to maintain continuity of motion. Manual mode allows the animator to interactively choose the direction, speed and starting point (frame) for the cycling. During cycling, each of the images stored in lower memory storage structure 18 is cycled. The present system works with 32 frames. This allows for cycling of up to 32 frames, over and over. A system working with more than 32 frames at a time could cycle through more than 32 frames.
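Automatic mode, as described, is essentially a ping-pong loop over the stored frames; the sketch below assumes a hypothetical display_frame() routine standing in for showing one stored frame.

```c
#include <stdio.h>

#define NFRAMES 32   /* the present system works with 32 frames */

static void display_frame(int f) { printf("frame %d\n", f); }

/* run forward, then backward, to maintain continuity of motion */
static void cycle_automatic(int total_displays)
{
    int f = 0, step = 1;
    for (int shown = 0; shown < total_displays; shown++) {
        display_frame(f);
        if (f + step < 0 || f + step >= NFRAMES)
            step = -step;          /* reverse direction at either end */
        f += step;
    }
}

int main(void) { cycle_automatic(2 * NFRAMES); return 0; }
```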
In order to produce a color image (as opposed to the colors assigned to the bit planes of upper memory storage structure 16, which are used for reference purposes) , the color transfer function also requires a color region value to point to the color look-up table which contains the HLS (hue, luminance, saturation) color coordinate value associated with each of the possible gray scale values (i.e., 256 in the present embodiment) . Upper-upper memory storage structure 14 provides this information for the color transfer functions.
Upper-upper memory storage structure 14 contains four bit planes. In the present embodiment, then, there is a possibility of 16 different regions of color, any one of which can be addressed. Those skilled in the art will understand and recognize that a fewer or greater number of bit planes can be present, yielding a fewer or greater number of colors (color regions) which can be addressed. As there are four bit planes in the present embodiment, there are a total of sixteen colors which can be addressed or assigned. Each region addresses 256 different values of the hue. As in lower memory storage structure 18, the bit planes of upper-upper memory storage structure 14 are dependent in that the four bit planes comprise a four-bit word for each pixel 20 in the image.
The animator selects the color for each region and designates the region as being assigned this selected color. The color assignment to each region can be concurrent with, or separate from, the assignment of gray scale information to each of the regions of the image.
In order to display the image with color information on a display device, in the present embodiment, it is necessary to "move" the color information in upper-upper memory storage structure 14 into the memory space of upper memory storage structure 16. This is necessary because the present embodiment only operates on 12 bits of image information for display purposes. It will be understood by those skilled in the art that a system designed to handle a greater number of bits of display information could display color information as well as the raster-represented "ghost" bit plane images. This operation is accomplished in the present embodiment by swapping the color information into the memory storage locations of the "ghost" bit planes.
There is shown in Fig. 2 an illustration of the use of bit planes 16a-16d of upper memory storage structure 16. Sequence A of Fig. 2 shows the image (as a series of four "X" characters) displayed in graphics bit plane 12. The image shown in graphics bit plane 12 of Sequence A is created by the animator and is represented in a vector file of vector storage structure 22 as a set of vector values. The animator then stores the image in one of the bit planes of upper memory storage structure 16. The animator can select any of the 4 bit planes of upper memory storage structure 16. In Sequence A, the image is stored in bit plane 16a.
Sequence B shows the image from bit plane 16a displayed on a display device 9. Additionally, a second image representing movement of the first image (shown as a series of four "X" characters) is drawn by the animator in graphics bit plane 12 (not shown). The new image is stored as a vector file in vector storage structure 22, but represented as a raster image in graphics bit plane 12 (and displayed as such on display device 9). For purposes of distinction, each of these images is displayed in a different color and possibly a different intensity on the animator's display monitor 9. This is due to each bit plane of upper memory storage structure 16 having a different color assigned to it. Graphics bit plane 12 (the raster representation of the vector information of the image) also has a color assigned to it, which should be different than those colors assigned to the bit planes of upper memory storage structure 16.
After drawing the second image in Sequence B (using the image from bit plane 16a as a reference), the second image is stored in bit plane 16b of upper memory storage structure 16. In Sequence C, the original and new images are both displayed on display device 9 (in the colors assigned to their respective bit planes), and the animator can draw in the third image in graphics bit plane 12 (not shown). This third image (shown as a series of four "X" characters) represents the "in-between image" of the first and second images. The first and second images are displayed in their respective colors on display device 9 to allow the animator to draw in the third image (in graphics bit plane 12) in the proper position. The animator can then store this third image in a third bit plane, shown as bit plane 16c in Sequence C of Fig. 2.
Each of the raster bit planes 16a, 16b, and 16c represents the image to be animated as it would appear in three separate frames of an animation sequence. Therefore, when assigning gray scale information to these respective images, the gray scale information is stored in a different memory storage structure 18 for each frame. In this way, the image stored in bit plane 16a is assigned a gray scale value, and then the gray scale value is stored in a lower memory storage structure 18 for that frame. The image in bit plane 16b is assigned a gray scale value, and then this gray scale information is stored in a lower memory storage structure 18 for a subsequent frame. Lastly, the image in bit plane 16c would be assigned a gray scale value, and this gray scale information would be stored in a lower memory storage structure 18 for a third frame. Unless the animation "story" requires different colors or effects for the same image over several frames, the gray scale values should be the same for all three frames. The storage of the gray scale information in lower memory storage structure 18 for each of the bit planes 16a, 16b and 16c is illustrated in Fig. 3. In A of Fig. 3 (corresponding to Sequence A of Fig. 2), an arbitrary gray scale (for purposes of illustration) represented by the binary value of eight ones (1 1 1 1 1 1 1 1) is repeated for the four pixels illustrated by the shaded area.
In B of Fig. 3 (corresponding to Sequence B of Fig. 2), the arbitrary gray scale value of the eight ones is shown and covers the four pixels represented by the shaded area.
In C of Fig. 3 (corresponding to Sequence C of Fig. 2), the arbitrary gray scale value of eight ones is shown and covers the four pixels represented by the shaded area.
For this example, then, the animation of the images would appear as a transition of the X characters in their location in the first frame to their location in the second and third frames. In the three frames of this animation sequence, each of the images is assigned the same gray scale value.
The final animation product could yield each of the images in a different color. In that case, it would be necessary to assign each of the regions, represented by the gray scale values, a different color.
There is shown in Fig. 4 a representation of the color information for the images drawn in Sequences A, B and C of Fig. 2 and stored in the upper-upper memory storage structure 14. Structure A of Fig. 4 shows the color information for the image drawn in Sequence A of Fig. 2. An arbitrary value (for purposes of illustration) of four ones (1 1 1 1) is stored in the bit planes of the shaded area. B and C of Fig. 4 show similar storage for the corresponding images from Figs. 2 and 3.
Look-up tables (not shown) , a selected one of which is defined for each color region by an identifier, define color transfer functions corresponding to the values stored in the bit planes for each pixel in A, B and C of Fig. 4. This information, along with the 8-bit gray scale information (stored in lower memory storage structure 18) , provides for a unique output color for each color pixel. This, in turn, results in the color being applied in the final image displayed on display device 9, which is dependent upon, but not a mere summation of, gray scale values and operator-selected colors for the various regions of an image. In the present example of Figs. 2-4, only one color is being assigned to all images. Combining these images, as a sequence of images, results in the final animation or animated feature.
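The per-pixel lookup reduces to indexing one of sixteen tables by the 8-bit gray scale value, as in the sketch below; the HLS triple layout and how the tables are filled in are assumptions.

```c
#include <stdint.h>

struct hls { uint8_t hue, luminance, saturation; };

/* 16 color regions x 256 gray scale values, filled in during setup */
static struct hls lut[16][256];

/* unique output color for each pixel: the region selects the table,
   the gray scale value selects the entry */
static struct hls transfer(uint8_t gray, uint8_t region)
{
    return lut[region & 0x0Fu][gray];
}
```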
The information that is used for the production of the final, colored animated images is contained in lower memory storage structure 18, upper-upper memory storage structure 14, and the look-up tables (not shown) for each of the color regions (colors) which can be assigned to areas of the images. The vector information is no longer necessary once the unfilled images are satisfactory to the animator and completed. The information in graphics bit plane 12 is temporary, corresponding to that displayed on the display device at any given time. The raster representation of the images stored in the bit planes of upper memory storage structure 16 is also temporary, designed to facilitate the drawing and creation of the animated images. Once this is completed and gray scale information for the images is stored in lower memory storage structure 18, the information in upper memory storage structure 16 is no longer required.
There is shown in Fig. 5 an illustration of the animation process of the present invention in the form of a flow chart, with block 30 representing the creation of the first animation frame image and block 32 representing the creation of the Nth frame image. Typically, the Nth frame image is the ninth frame for image animation. The creation of in-between frames based on the first frame and the Nth frame is represented by block 34. As described, the Nth frame is the second frame created, and the in-between frame is the third frame created. Block 36 represents the assignment of gray scale values to regions, wherein the regions are defined by the image outlines. Block 38 represents the assignment of colors to the regions where gray scales were assigned in block 36. Blocks 36 and 38 can be combined into one simultaneous step. Block 40 represents the addition of special effects, including the combinations of images and layering of images.

There is shown in Fig. 6 a more detailed schematic of the steps represented by blocks 30-34. As indicated in Fig. 6, an animator begins, in block 42, by creating the image outlines for an image to be animated. This information is stored in a vector file in vector storage structure 22. By drawing the images in vector representation, the animator has the ability to modify and manipulate the images through such techniques as rotation, stretching, shrinking, duplicating, etc. For this reason, image information in vector representation is stored in a vector file, as shown in block 44, should image modification or adjustment be necessary at a later time.
Once the image is satisfactorily created, the image is stored in raster representation (as a bit map) in one of the bit planes of upper memory storage structure 16. As previously described, each bit plane of upper memory storage structure 16 has associated with it a particular color and intensity to differentiate between frames. In the present embodiment, the storage of vectors in a vector file in vector storage structure 22 is carried out at the time that the image is stored in raster representation in one of the bit planes of lower memory storage structure 18. It is possible, however, to store the vector information at a separate time from that of the raster information. At this point, one image outline has been created in a vector representation and stored in a raster representation in upper memory storage structure 16. The next step for the animator is to create a second image corresponding to where the image just created will be located after "motion" has taken place. Typically, this second frame is the ninth frame in a sequence of animated images. In Fig. 5, this is referred to as the "Nth" frame. Deciding whether this step is necessary is shown in decision block 48.
In order to provide proper relationship of the first image to that of this second image, the animator can display the first image in the color of the bit plane in which it is stored. This color should be different than the color in which the animator is displaying the vector represented image (in graphics bit plane 12) that is currently being created. This is referred to in block 50 as the ghosting of images.
By ghosting (displaying the images from upper memory storage structure 16) , the animator can draw the current image using the ghosted image as a reference. This greatly aids in the production of high-quality and accurate animation. To draw the second image (i.e., the ninth frame) , the animator goes back to block 42 and draws a new image.
When complete, the animator stores the raster representation of this second image in its appropriate lower memory storage structure 18. But for its appropriate frame, all ghosted images can be displayed in the same plane within their respective frames. During ghosting, each ghosted image is assigned to the frame being in-betweened, for reference purposes only. The ghosted frame is not saved to memory storage structure 18, where the new in-between image is stored.

The in-betweening is represented in Fig. 5 as block 34. The process for in-betweening is identical to that described earlier for creating the first and second images; the difference is that this is a further iteration of the process as already described. At block 50, where images are ghosted, there would now be at least two images displayed, in different colors corresponding to the bit planes of upper memory storage structure 16 to which they have been ghosted. The animator uses both ghosted images to serve as references in order to draw an image which shows motion between the ghosted images. It is understood that there is no limitation of only ghosting two images; any number of image layers (limited only by the number of bit planes in upper memory storage structure 16) could be ghosted as an aid to the animator. When all frames (frames 1-9, for this example) have been created, gray scale values can be assigned to the regions of the images which are selected by the animator, as shown in block 36.

There is shown in Fig. 7 a flow diagram of the gray scale assignment process 36. In block 54, it is shown that the animator selects a gray scale value which he or she desires to assign to a region of an image. In animation, a region is typically any area surrounded by the lines of the image. The inside of a circle is an example of a region, as is the area outside the circle.
Block 56 shows the animator selecting a region of an image to be filled with the gray scale value selected in block 54. Typically, an image contains several areas or regions which will eventually receive different colors. An example of this would be a cartoon figure wearing clothing having different colors. Two hundred fifty-six gray scales can be achieved in one image, but only 16 regions in the present embodiment. In block 58, the selected region is filled with the selected gray scale value. In the present embodiment, the animator will immediately see the gray scale appear in the selected region on the display device. In decision block 59, it is determined whether all gray scale values have been selected. If not, the next gray scale value must be selected in block 54. Blocks 54-58 are repeated for all of the regions in a particular frame. In this way, different gray scale values are assigned to all of the regions of a frame. There may be several different gray scales for each region. Once this is complete, the gray scale values are stored in lower memory storage structure 18, as shown in block 60. In the present embodiment, this is an eight-bit value stored for each pixel, corresponding to a possible selection of 256 gray scale values.
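The patent does not specify how the fill in block 58 is carried out; one common possibility is a 4-connected flood fill over the 8-bit gray scale buffer, sketched recursively below for clarity (a production version would avoid deep recursion).

```c
#define W 320
#define H 200

static unsigned char gray[H][W];   /* lower memory storage structure 18 */

/* replace one connected region of `from` values with `to` */
static void fill(int x, int y, unsigned char from, unsigned char to)
{
    if (x < 0 || x >= W || y < 0 || y >= H) return;
    if (gray[y][x] != from || from == to) return;
    gray[y][x] = to;
    fill(x + 1, y, from, to);
    fill(x - 1, y, from, to);
    fill(x, y + 1, from, to);
    fill(x, y - 1, from, to);
}
```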
As it is possible to combine animation with live-action images or background images created prior to animation, lower storage structure 18 is not always "empty" before the gray scale values are stored there. It is possible that other image information, such as a digitized live-action image, is already residing there. It is also possible to circumvent the upper memory storage structure 16 and store the raster representation of the vector-based images being created directly into lower memory storage structure 18. In any case, the gray scale values assigned in Fig. 7 are stored in lower memory storage structure 18, and overwrite the location of any information previously stored there. Further, if lower storage structure 18 is "empty", a uniform gray scale value is stored for all non-image pixels. In the present system, a gray scale value of 128 is selected and stored.
Once the gray scale information is stored in lower memory storage structure 18, and the animator is satisfied with the region and gray scale selection, this raster gray scale information is anti-aliased in block 61 and then stored in a permanent memory location (i.e., hard disk, removable storage media, etc.), as shown in block 62. Anti-aliasing typically takes place on the designated background to provide proper fusing of images. This can occur after storage of all information in lower memory storage structure 18, or after lower memory storage structure information is stored for several frames.
As already described, it is necessary to provide gray scale information in lower memory storage structure 18 for each frame. This means that each frame has gray scale values stored in its own lower memory storage structure 18.
There is shown in Fig. 8 a flow chart representation of the assignment of colors 38. In block 63, a color is selected for a particular region which is not yet designated. An example of this would be selecting red for application to a cartoon character's dress. In block 64, a region is selected to which this color is to be applied. It is possible to designate a color at the time a gray scale is applied to the region, as was described in Fig. 7. In doing so, it is understood that a particular color is selected and associated with a particular gray scale. This can be done prior to the selection and application of the gray scale to a particular region, so that gray scale and color are thus applied simultaneously to a region. Color can also be applied after the selection and application of the gray scale to a particular region. In any case, it is necessary to designate a region to which a color is to be applied. The color is applied to the selected region in block 65. In the presently preferred embodiment, a color will typically be applied at the time that a gray scale is applied to a region. The present embodiment, however, does not provide for displaying gray scale information concurrent with color information. Thus, in order to display the color information, the animator must choose an option for solely displaying color. This is not a limitation of the present system, as it is readily understood that systems operating on more information could allow for display of color as well as gray scale information and image modification information.
Colors are selected in the present embodiment by using a color bar on the display device. In the present embodiment, these colors are selected prior to the animation process, as the palette from which the animator colors each region. This facilitates efficient and consistent coloring in a production-type animation process. The animator designates which regions receive a certain color by pointing to a color wheel displaying colors. In the present embodiment, these colors are generated by a 24-bit color generating board, such as the Targa Board®. The colors are then locked into a color bar. The color bar is used by the designer and colorist for completing the production coloring. After the colors are selected, they appear on the side of the menu used for filling the colors. Colors are chosen for all regions, and the color bar is passed on as data to all persons carrying out the coloring in later stages of production.
There is shown in Fig. 9 a flow chart for the process of applying a shading special effect. Shading is an operation which assigns a particular gray scale (and corresponding color) to an area designated by the animator, called the active mask. This allows the animator to provide an effect such as a black shadow or the assignment of gray scale values to a region not defined by the outlines of an image. To produce this effect, an animator selects an area to be shaded in a first step, represented by block 67. This is accomplished through the region selection tools such as a window or a free-hand designated region. In the second step, represented by block 68, the animator selects a gray scale value to be applied to this region. Typically, this is a black or very dark gray scale. In the next step, block 69, the gray scale is applied to the selected area. The selected area is also referred to as a designated mask or active mask. Note that the application of the selected gray scale information to the selected area will overwrite the underlying gray scale information in that entire area. In block 70, the new gray scale values are stored in lower memory storage structure 18, overwriting any prior values stored there. Again, this tool is effective for blackening out (or, conversely, whitening out) sections or areas of the screen.
There is shown in Fig. 10 a flow chart of the process for providing a special effect of shadowing, in accordance with the preferred embodiment of the present invention. A shadow is different from a shade (Fig. 9) in that the underlying gray scale is not replaced by a selected gray scale value, as is done in shading. In shadowing, the underlying gray scale pixel values are offset by an operator-designated value. In this way, the underlying pixels will be adjusted upward or downward according to this offset value.
For shadowing, the animator selects an area to be shadowed, in a first step represented by block 72. Again, this is accomplished through any number of tools, such as windows or free-hand area designation. In the next step, represented by block 74, a gray scale offset value is selected. This value is either positive or negative, reflecting an increase or decrease, respectively, of the underlying gray scale values of the designated region. The gray scale offset value is then applied to the gray scale values located in the selected area, block 76, and, finally, the new gray scale values are stored in lower memory storage structure 18, block 78.
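A sketch of the offset step, assuming a byte mask marking the operator-designated area and clamping of the adjusted values to the 0-255 range (the text does not say how out-of-range results are handled).

```c
/* offset the underlying gray scale of every pixel inside the active mask */
static void shadow(unsigned char *gray, const unsigned char *mask,
                   int npixels, int offset)
{
    for (int i = 0; i < npixels; i++) {
        if (!mask[i])
            continue;                  /* outside the selected area */
        int v = gray[i] + offset;      /* positive or negative      */
        gray[i] = (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v);
    }
}
```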
This tool provides the realistic effect of causing the underlying surface to be reflected as either a darker or lighter image, as is the case in many real-life shadows.

There is shown in Fig. 11 a flow chart for the process of providing the grading special effect, another feature of the preferred embodiment of the present invention. The grading of an image involves the production of a gradation of gray scale values for an intermediary region based upon the gray scale values in adjacent operator-selected regions.
To begin, the region which is desired to be graded is selected, block 80. This can be accomplished with any of the selection tools such as windows, free-hand drawing, connection of operator-selected vertices, etc. Next, as represented by blocks 82 and 84, the "light" and "dark" regions are selected. Light and dark are only used as labels to distinguish the gradient intensity and direction of the two gray scales from which the region to be graded is derived. The selection of light and dark regions is not limited to one region apiece. As the light and dark regions only refer to the gray scale values from which the region to be graded is derived, the animator can position multiple light and dark regions around the region to be graded. These light and dark regions can be positioned randomly around or adjacent to the region to be graded.
In block 86, gray scale values are assigned to the light and dark regions. These are the values from which the region to be graded will be derived. Note that "light" and "dark" are merely labels to distinguish the higher gray scale value region (light) from the lower gray scale value region (dark), consistent with the example given below.
In the step represented by block 88, the pixels in the region to be graded are assigned gray scale values based upon the linear relationship between the light and dark regions. The light and dark regions can be placed exactly opposite each other (on opposite sides of the region to be graded), or can be positioned anywhere between 180° and 0° apart. As the light and dark regions approach each other (i.e., approach 0° apart), the effect that they have on the region to be graded diminishes.
As a result, the farther apart (closer to 180° apart) the light and dark regions are placed, the greater the visual change in the region to be graded. Placing multiple light and dark regions around the region to be graded achieves more varied effects in the region to be graded. Each of these multiple light and dark regions may be thought of as subdivisions of the light and dark regions, respectively. The grading occurs by operating on each pixel in the light region with respect to each pixel in each dark region that is linearly related, through the region to be graded, to that pixel in the light region. "Linearly related" refers to the relationship between the light and dark pixels and the region to be graded: at least one pixel in the region to be graded must lie on a line segment extending between at least one pixel in each of the light and dark regions. Absent this linear relationship, no pixels in the region to be graded will undergo grading.
Similarly, each pixel in the dark region is operated upon with respect to each pixel in the light region that is linearly related, through the region to be graded, to that pixel in the dark region. These operations occur for each pixel in each light and dark region with respect to each pixel in the opposite contrast region that has a linear relationship through the region to be graded. The necessity for this linear relationship is why placing a light and a dark region adjacent to each other, without the region to be graded between them, results in no grading. Also affecting the grading are the distance between the light and dark regions and their angular relationship.
Once the region to be graded has gray scale values assigned to it, the new gray scale values are stored in lower memory storage structure 18, as shown in block 90.
The actual grading process operates by determining the difference in gray scale values between the light and dark regions. A light region with a gray scale value of 150 and a dark region with a gray scale value of 50 yields a difference of 100. Next, the number of pixels which lie linearly between the light and dark regions is determined. The difference in gray scale values between the light and dark regions is then "ramped" according to the number of pixels linearly between the light and dark regions. If there are 100 pixels between the light and dark regions, and the light and dark regions have gray scale values of 150 and 50 respectively, then each successive pixel between a pixel in the light region and a pixel in the dark region is incremented by one. This results in the "between" pixels having values of 51, 52, 53 . . . 147, 148, 149. These between values are then added to the appropriate gray scale values in the region to be graded. Thus, if the region to be graded has a length of 50 pixels along a line between a pixel in the light region and a pixel in the dark region, and the region to be graded is located 50 pixels from the dark region, then a gray scale value of 101 is added to the pixel within the region to be graded which is closest to the dark region. A gray scale value of 102 is added to the gray scale value of the next pixel within the region to be graded. This continues until all pixels in the region to be graded have offset values added to their underlying gray scale values. If the region to be graded has a gray scale value of 10 for all of its pixels, that underlying value of 10 is added to the offsets of 101 . . . 150 for the respective pixels.
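The ramp arithmetic of this example can be sketched as follows for a single light/dark pixel pair. The integer rounding and endpoint handling are assumptions, chosen to approximate the unit-step ramp above.

```c
#include <stddef.h>

/* Minimal 1-D sketch of the grading ramp: pixel i, counted from the
 * dark pixel toward the light pixel along the connecting line, gets
 * the offset dark + i*(light-dark)/n_between. For light=150, dark=50,
 * and n_between=100 this yields the unit-step ramp 51, 52, ... of the
 * example above. Offsets are added only where the mask marks the
 * region to be graded. */
void grade_line(unsigned char *img, const unsigned char *mask,
                size_t start, size_t n_between,
                int dark_value, int light_value)
{
    for (size_t i = 1; i <= n_between; i++) {
        size_t p = start + i;            /* index along the scan line   */
        if (!mask[p])
            continue;                    /* outside region to be graded */
        int ramp = dark_value +
            (int)((long)i * (light_value - dark_value) / (long)n_between);
        int v = img[p] + ramp;           /* added to underlying value   */
        if (v < 0)   v = 0;
        if (v > 255) v = 255;
        img[p] = (unsigned char)v;
    }
}
```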
The new gray scale values are assigned to all pixels inside the region to be graded. Regions which are either outside the region to be graded or not linearly between the light and dark regions are not affected. All linear relationships between light and dark regions (through the region to be graded) are determined on a pixel pair basis; i.e., a pixel from the light region must be linearly related (through the region to be graded) to a pixel in the dark region.
Multiple grading occurs when pixels in the region to be graded are linearly between multiple pairs of light and dark regions (or effective light and dark regions, due to overlap). These pairs need not be unique, as many-to-one relationships may exist. This is handled sequentially, by ordering the grading of the multiple light and dark region pairs.
There is shown in Fig. 12 a flow chart illustrating the dissolve special effect. The dissolve effect allows for the dissolving or fading in and out of one image into another. This also can take place over multiple images.
In block 92, the source image or images to be faded into or out of another image are selected. In block 94, the destination image into which, or out of which, the fading is to occur is selected. In block 95, the number of frames over which the dissolve takes place is selected.
Block 96 shows the dissolving of the source image(s) into the destination image(s). The dissolve takes place over the number of frames selected in block 95, and is reflected as a percentage. For example, if ten frames are selected, ten percent of the source image pixels will dissolve into the destination frame on each frame. This continues until the dissolve is complete.
Once the dissolve is complete, the process is done, as shown in block 100. Until then, the process loops back, and greater and greater amounts of the source image are faded into (or faded out of) the destination image until the dissolve is complete.
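One reading of this loop, sketched below, transfers a growing fraction of the source pixels on each frame. The interleaved pixel-selection pattern is an assumption; the text specifies only the per-frame percentage, not which pixels dissolve first.

```c
#include <stddef.h>

/* Minimal sketch of Fig. 12 (blocks 92-100): on frame k of n, roughly
 * k/n of the source pixels have dissolved into the destination image.
 * Selecting pixels by index interleave is an assumption. */
void dissolve_frame(unsigned char *dst, const unsigned char *src,
                    size_t npixels, int frame, int nframes)
{
    for (size_t i = 0; i < npixels; i++)
        if ((int)(i % (size_t)nframes) < frame)   /* growing subset */
            dst[i] = src[i];
}
```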
This is a very useful tool for a gradual transition from scene to scene, or the appearance of characters such as ghosts.
There is shown in Fig. 13 a flow chart illustrating the merge special effect. The merge effect is similar to the dissolve, except the transfer from source to destination occurs in only one frame, at a preselected percentage of the gray scale value of the source pixels. Merging allows for transparent effects such as images appearing through fire or smoke, and also reflections of images. An example is a person's image (source) reflected on a tile floor (destination), whereby the tile floor is discernible through the reflected (transparent) source image.
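A sketch of the merge as a one-frame weighted transfer. Reading "a preselected percentage of the gray scale value" as a weighted average of source and destination is an assumption about the combining rule.

```c
#include <stddef.h>

/* Minimal sketch of Fig. 13: a single-frame transfer at a preselected
 * percentage of the source pixels' gray scale values, leaving the
 * destination partly visible (the transparent-reflection effect). */
void merge_images(unsigned char *dst, const unsigned char *src,
                  size_t npixels, int src_percent)
{
    for (size_t i = 0; i < npixels; i++)
        dst[i] = (unsigned char)((src[i] * src_percent +
                                  dst[i] * (100 - src_percent)) / 100);
}
```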
There is shown in Fig. 14 a flow chart illustrating the composite special effect. The composite special effect works in the manner of cell animation: it allows the creation of several layers which are composited together to form a final image. This allows various parts of a complete animation sequence, or parts of a character, to be created and produced separately. For instance, this tool allows the animators to isolate an image or a part of an image and create animation for those individual images or parts of images. This "sub-animation" can then be composited into the other image or part of the image. An example of this is animating blinking eyes separately from the face in which the eyes are to appear. Through each frame of a sequence, a different "blink" of the eyes would be composited into the image of the face, so that over the entire sequence a complete blink of the eyes would be present in the face of the image. Another example is having one animation group working on the background while another works on the foreground, a third on the primary character, and a fourth on secondary characters. All of these "layers" need to be brought together (composited) for the final production. The composite tool completes an absolute transfer from the source image to a destination image. In block 108, the source image or images are selected. In block 110, the destination image or images are selected. In block 112, the images are composited, with the source images overwriting any memory locations occupied by the destination images.
Compositing can be done so that the source image is transferred only to areas where there are no pre-existing masks or masks of non-assignable pixels. In this way, a character walking behind another character will appear to pass behind that character, instead of having portions of the two characters intermingled with each other as if they were transparent. If the compositing is complete, the process is finished. If not (there are additional layers to composite), the process cycles back to block 108.
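The occlusion rule just described can be sketched as a guarded overwrite. Treating a zero source byte as "no ink on this layer" and a nonzero protect mask as "non-assignable" are assumptions about the pixel conventions.

```c
#include <stddef.h>

/* Minimal sketch of Fig. 14 (blocks 108-112) with the occlusion rule:
 * source pixels overwrite the destination only where no pre-existing
 * mask marks the pixel as non-assignable. */
void composite_layer(unsigned char *dst, const unsigned char *src,
                     const unsigned char *protect_mask, size_t npixels)
{
    for (size_t i = 0; i < npixels; i++)
        if (src[i] != 0 && !protect_mask[i])  /* layer has ink, pixel free */
            dst[i] = src[i];
}
```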
The animator must, of course, decide the order of compositing so that a character who is to appear in front of a background will not have portions of it overwritten by corresponding areas of the background image.
There is shown in Fig. 15 a flow chart illustrating the process for the pixel offset special effect. This tool allows an animator to pan background images across a series of frames, producing the effect of motion of these background images. Many images can be set along a predetermined trajectory, such as clouds sliding across the sky. Using the pixel offset tool, the clouds can be translated over an X and Y coordinate distance and rate pre-set by the animator. This occurs from frame to frame and can be cycled after the images have left the "boundaries" of the screen. The velocity of pixel offset can be modified so that there is a slow in and out of the apparent motion of an element moving across the screen. Also, the trajectory of movement can be programmed into the pixel offset operation.
In block 118, the image to be offset is selected. In block 120, the X and Y coordinates for the offset distances are selected. In block 122, the offset is completed; this translates into the image moving across the screen. Finally, in block 124, the option of cycling images that have left the screen back to the other side of the screen is applied. This allows images to "wrap around" to the beginning of the next frame.
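A sketch of one frame of the pixel offset, including the optional wrap-around of block 124. The row-major buffer layout and the zero fill for vacated, non-wrapping areas are assumptions.

```c
/* Minimal sketch of Fig. 15 (blocks 118-124): translate an image by
 * (dx, dy) for one frame, optionally cycling content that leaves one
 * edge back in at the opposite edge. */
void pixel_offset(unsigned char *dst, const unsigned char *src,
                  int width, int height, int dx, int dy, int wrap)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int sx = x - dx, sy = y - dy;   /* source pixel for (x, y)  */
            if (wrap) {                     /* cycle off-screen content */
                sx = ((sx % width) + width) % width;
                sy = ((sy % height) + height) % height;
            } else if (sx < 0 || sx >= width || sy < 0 || sy >= height) {
                dst[y * width + x] = 0;     /* vacated area: assumption */
                continue;
            }
            dst[y * width + x] = src[sy * width + sx];
        }
    }
}
```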
There is shown in Fig. 16 a flow chart illustrating the auto-in-betweening special effect. This tool is useful where characters or images are moving along a predetermined trajectory or through a predetermined rotation, all at a relatively constant rate. An example of this concept is a character that tumbles across a screen. Another example is the movement of a character's arm, leg, head, etc. Any predictable motion can be produced by this tool.
In block 126, the image (or a portion of an image) to be auto-in-betweened is selected. In block 128, the angle of rotation around a fixed point is selected. In block 130, the trajectory of the image is selected. Note that, if an image is not rotating, only a trajectory will be selected. Conversely, if an image is rotating without moving, only an angle of rotation will be selected. It is also possible to select particular locations instead of supplying measurements for the distances that the images are to move across. In block 132, the number of frames over which the motion is to take place is selected. Finally, in block 134, the in-between frames are determined by the system.
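Since the motion is at a constant rate, the in-between poses follow by linear interpolation; the pose structure and function below are illustrative assumptions, not the patent's data types.

```c
/* Minimal sketch of Fig. 16 (blocks 126-134): the pose of frame k of n
 * along a straight trajectory with a constant-rate rotation about a
 * fixed point. */
struct pose { double x, y, angle_deg; };

struct pose in_between(struct pose start, struct pose end,
                       int k, int nframes)
{
    double t = (double)k / (double)nframes;   /* 0 at start, 1 at end */
    struct pose p;
    p.x         = start.x + t * (end.x - start.x);
    p.y         = start.y + t * (end.y - start.y);
    p.angle_deg = start.angle_deg + t * (end.angle_deg - start.angle_deg);
    return p;
}
```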
The blurring function allows an animator to blur or soften colors by averaging the gray scale values of selected numbers of pixels. When a portion of an image is selected (e.g., a "rosy" cheek on a face), an active mask is set up consisting of the portion to be blurred and the surrounding area of the image into which the blur will fade. An example would be designating as the mask an outline of a face without eyes, nose, etc., but with the cheeks to be blurred.
This mask is displayed in graphics bit plane 12. Next, the parts of the image which are not part of the mask are displayed in graphics bit plane 12 in their proper position on the mask. This means that the eyes, nose, etc., are displayed on the face. Next, a value is selected corresponding to the number of pixels, on either side of a pixel undergoing the averaging, which are to be included in the processing. Then, for each pixel displayed, an average of the gray scale values of the selected number of adjacent pixels is made and assigned to the pixel being processed. These new gray scale values are only stored for pixels within the active mask, i.e., for the cheeks and surrounding pixels in which there are new gray scale values as a result of the blurring (softening).
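The averaging can be sketched as a box filter confined to the active mask. The square neighborhood (the text says only "pixels on either side") and the buffer layout are assumptions.

```c
#include <stdlib.h>
#include <string.h>

/* Minimal sketch of the blur (softening): each masked pixel is replaced
 * by the average of the gray scale values within `radius` pixels of it.
 * Averages are taken over a copy so blurred neighbors don't feed back. */
void blur_region(unsigned char *img, const unsigned char *mask,
                 int width, int height, int radius)
{
    unsigned char *src = malloc((size_t)width * height);
    if (!src) return;
    memcpy(src, img, (size_t)width * height);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (!mask[y * width + x]) continue;   /* outside active mask */
            long sum = 0, n = 0;
            for (int j = -radius; j <= radius; j++)
                for (int i = -radius; i <= radius; i++) {
                    int xx = x + i, yy = y + j;
                    if (xx < 0 || xx >= width || yy < 0 || yy >= height)
                        continue;                 /* clip at image edges */
                    sum += src[yy * width + xx];
                    n++;
                }
            img[y * width + x] = (unsigned char)(sum / n);
        }
    }
    free(src);
}
```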
There is shown in Fig. 17 a sphere having intersection points (201-226) at the various intersections of latitude lines (227-229) and longitude lines (230-237). The number of intersection points in Fig. 17 is illustrative of the method and system embodied in the present invention, and is not a limitation of it. It should be understood that many more or fewer intersection points are possible, depending on the particular application for which the present invention is being used.
Each of these intersection points is assigned a unique name. This name can be a number, as is the case in Fig. 17. It is necessary to ensure that each name is unique. When beginning a project or creation of images, it is necessary to determine the number of locations which are to be used on the sphere (i.e., designate the number of intersection points) and assign unique names. This establishes the "Universe" for the creation of images. Each image or part of an image is drawn in a position corresponding to an intersection point on the sphere. The location of an image on the sphere corresponds to the movement of the image in a spherical coordinate system. In Fig. 18, the nose and head of a character are shown in numerous positions (intersection points) located on the sphere. For illustration, the parts of the head are shown in five positions. Each of these positions corresponds to the movement of the head, with its axis of rotation in the center, to a location at an intersection point on the sphere. For example, at position A the head is initially facing straight ahead without any tilt or rotation. If the head turns a number of degrees to its right (position B), the corresponding partial profile is shown at the equatorial position adjacent to the head's base equatorial position. This process is continued throughout the sphere, including reverse images (not shown), as if the object were actually turning around in the sphere. In that case, only the rear or back of the head of the object is seen by an observer. The corresponding locations on the far side (or back) of the sphere can be displayed when included. Fig. 18 shows only a few of the possible head positions.
Each image located at an intersection point of the sphere is assigned a unique name corresponding to the location of the intersection point on the sphere. The location (intersection point) on the sphere is designated by a number, called an orientation code (see Fig. 23 - orientation code 290). The part itself is given a file name. The combined image representation is thus comprised of the file name for the part and its location on the sphere. An example could be "head,1", showing that the head is the object (part) in question and 1 is the location of its position and orientation on the sphere. This unique name (file name, location on sphere) can be called up at any time for producing or choreographing an animation sequence.
Fig. 19 shows a sphere with the eyes and mouth of the image from Fig. 18 drawn at each intersection on the sphere corresponding to the positions (A-E) of the image from Fig. 18. The parts in Fig. 19 are considered linked parts. A linked part is a part which has its own orientations on the sphere, but which moves in conjunction with another part. An example of this is the eyes and mouth (shown in Fig. 19) on a face (shown in Fig. 18). As a face moves through a range of motions, the eyes and mouth move in a corresponding manner. This is true even though the eyes and mouth may have their own separate and independent movements. For instance, as a face moves through a range of motions, the eyes can be blinking as the face is moving. This results in the eyes moving with the face, yet moving independently of the face in a blinking fashion. The same is true for mouth movement. A character can be talking as its head is moving: as the head moves through a range of motions, the portion of the mouth which is visible to a viewer from the front changes in correspondence with the movement of the face, while the movement of the mouth in speaking remains completely independent of the head and face movement. In Fig. 20, the combinations of the two linked images from Figs. 18 and 19 are shown at five intersection points on the sphere corresponding to the intersection points shown in Figs. 18 and 19. At each intersection, the combination or linking from the two separate spherical libraries can be seen. This illustrates the linking technique. Through linking, a figure or object composed of various independently moveable objects can be generated. The ultimate result is that libraries of images and parts of images can be maintained and combined to produce animation. By constructing such a library of images and parts of images, the constant drawing and redrawing of traditional animation is eliminated. Images, and thus entire animated sequences, are generated by selecting parts, and the image is animated by selecting corresponding parts of the image to show motion in subsequent frames. This technique is referred to as choreography of an image.
In addition to merely showing motion, characters speaking, as well as secondary characters and backgrounds, can be incorporated into the process. An image is then choreographed through an animated sequence according to a storyboard.
There is shown in Figs. 21 and 22 an image with multiple parts (extremities) 241-253 attached to a base image 240. Each of these attached parts is connected at an "attach" point (254-266). An attach point is the location on an object at which it is attached to a corresponding point on another object. Attach points can occur at multiple positions on an image.
For example, an arm is typically thought of as composed of several parts. In Fig. 21, both right and left arms are shown. First, there are the upper arms (242 and 245); next there are the lower arms (243 and 246); next there are the hands (244 and 247); then there could be separate fingers, with each finger having multiple joints. In this example, the fingers are drawn incorporated into hands 244 and 247. Each of these arm segments has an attach point. The upper arms connect to the shoulders at points 255 and 258. At the same time, they connect to the lower arms at attach points 256 and 259. The lower arms conversely connect to the upper arms while at the same time connecting to the hands through the wrists at attach points 257 and 260. The part of an image which is moving at a particular time in an animation sequence is important in determining whether a part is an attached part or a linked part. In the case of a head turning, the body and its appendages do not necessarily have to move in relation to the head turning: it is easy to see that a person can move his head while maintaining his body position. In the case of a body turn, however, the shoulder must turn along with the body. As a body is turning, the arm may still remain visible or contort to approximately its prior position, but, nevertheless, at least part of the arm moves in conjunction with the shoulder. In the case of a body turn, the attached part (the arm) may be linked to the turn of the body. In the case of a head turn, the attached part (the arm) would not be linked to the turning of the head.
In the present invention, the linking or non-linking of parts to the movement of other parts is contained in a file which catalogs which parts are linked to the movement of other parts, as well as showing which parts are attached to other parts. The designation of linking or attaching is specified by the animator. The animator creates the file using graphical manipulation and interface tools, or alternatively by typing out the file using a keyboard. In the present embodiment of this invention, attached parts are distinct from linked parts. The reason for this is that linked parts have a sphere of motion identical to that of the part with which they are linked. A mouth moves in correspondence with the head movement; therefore, it has an identical range of motion and orientation sphere. For mouth movements, such as the movements necessary to speak, each mouth movement has its own sphere. For example, the mouth movement for the letter "A" has a sphere of motion corresponding to the head motion, whereas another sphere must be generated for the mouth motion necessary for the letter "E". Typically in animation, only a half-dozen or so mouth movements are generated to cover the entire range necessary for visualizing speaking. In more complicated animation pieces, more mouth movements might be added. For each mouth movement, a new sphere must be drawn by the animator corresponding to that mouth movement, so that its range of motion and orientation on the sphere correspond to the corresponding movements of the head. In the case of an attached part, the attached part does not re-orient itself with the part to which it is attached. In the case of an arm and a moving torso, the arm must be re-oriented by the animator to adjust for the rotation of the torso. This is the present embodiment of the invention and not a limitation thereof. It is wholly within the spirit and scope of the invention to adjust the orientation of attached parts to correspond with the orientation of the parts to which they are attached. In doing so, a rotation of the attached part is made to compensate for the rotation of the part to which it is attached.
In the case of an arm attached to the torso, the spherical library of movements for the arm is a library of positions corresponding to rotation about an axis located at the shoulder attach point. As the torso moves in a different direction, the position of the attach point (the shoulder) changes, even though the range of motion of the arm is still about its attach point.
For example, an arm moving about a shoulder has its axis point at the shoulder. This can be visualized as the shoulder being at the center of the sphere. As the arm moves to different intersection points on the sphere, it always moves about its central axis point, the shoulder. As a torso moves through its sphere, the arm still retains its independent sphere of motion. The complicating factor in this motion is that the independent sphere of motion of the arm is now viewed differently from the perspective of the observer looking into the paper. Imagine an arm, lifted up from the shoulder, being pivoted sideways as the torso turns. This results in a different view of the arm, even though the arm is moving within its independent sphere of motion. By rotating the arm's spherical locations, they can be tied or linked to the corresponding torso locations.
There is shown in Fig. 23 a description of the data structures of the host computer. The present invention uses AT-compatible computers with 386 or 486 microprocessors, associated memory and support circuitry, plus an intelligent graphics controller and associated memory.
******** Parts List *******
The basic element of the host data structure shown in Fig. 23 is base part record 270 (270a-270f). A base part record contains information for each part which is an element of an image representation library called to the screen. In the simplest case, there is only one base part record. This is the case where the outline of an entire image is represented through only one sphere. This simple case does not have any attached parts. In more complex cases, there may be several sets of base part records, each linked together to form a hierarchical part tree. An example of this is the figure shown in Figs. 18 through 22, where both linked and attached parts are needed to form and choreograph the image through animation sequences.
The primary information contained in base part records 270 consists of an orientation list (ORILIST) 271, a transformation list (TRANLIST) 272, a link part array (LINKARRAY) 273, the number of link parts (NLINKS) 274, a group list 275, and information to manage the hierarchical data structure. This linking information is shown as SIBLING 276 and ATTACH 277. ORIFILE 278 indicates the file which contains the valid orientation codes for the part.
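Rendered as a C structure, the record might look like the sketch below. The patent names these fields but not their representation, so every type and size here is an assumption.

```c
/* Illustrative C rendering of base part record 270 and its fields
 * (271-278). Field types, sizes, and the flattened link array are
 * assumptions, not the patent's layout. */
#define MAX_FRAMES 1024

struct transform { int dx, dy; double rotate, scale; };

struct base_part {
    int  orilist[MAX_FRAMES];    /* ORILIST 271: orientation per frame    */
    struct transform *tranlist;  /* TRANLIST 272: NULL if not transformed */
    int  *linkarray;             /* LINKARRAY 273: part# by [frame][group]*/
    int  nlinks;                 /* NLINKS 274                            */
    int  *grouplist;             /* group list 275: link group indices    */
    struct base_part *sibling;   /* SIBLING 276: next part at this level  */
    struct base_part *attach;    /* ATTACH 277: first attached child part */
    char orifile[32];            /* ORIFILE 278: valid orientation codes  */
};
```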
Orientation list 271 is set up for each base part and, in turn, each base part record 270. The array contained in orientation list 271 contains the base part's orientation for each frame. It should be noted that in any given frame, all parts and all images have exactly one orientation. Changes in orientation correspond to movement, and animation is accomplished through changes, from frame to frame, of the images and the various parts of images.
A transformation list 272 is set up for every base part that is to be graphically transformed. This array contains one transformation record for each frame, indicating the transformation that is to take place on that part for that frame. Example transformations are movement, rotation, and scaling upward or downward from an image's original form. Transformations take place during the animation choreography process. The spherical image library need only contain the images at a unit size. In this way, only one spherical representation library is necessary, even though the images can be adjusted, contorted, and modified by animation choreography.
Transformation information is only allocated for a given part when it becomes apparent that the part is to be transformed. The absence of transformation information on a part means that the part is not transformed.
Link array 273 is set up for every base part that has link parts defined for it. As stated earlier, a link part is a part which is drawn in conjunction with a base part, at the same position, orientation, and transformation as the base part. There may be one or many link parts defined for a given base part. The origin of the link part is drawn at the same location as the origin of the base part to which it is linked.
Each link part must belong to one link group. A link group is used to name a collection of link parts that are used for one purpose. For example, there may be a series of eye parts for a particular head, but only one eye part is to be displayed in a particular frame or sequence of frames. A link group may be used to indicate all of the eye parts. For uniformity, all of the parts which belong to a link group have the first few letters of their identification name in common. These common letters are used for the link group name. For example, the link group for eyes might be called EYE, while part names of this link group might be EYE OPEN, EYE CLOSED, EYE HALF, etc. Link array 273 is two-dimensional in the present embodiment of the invention. The first "dimension" of the link array indicates the frame number in which the particular link part is displayed. The second "dimension" is the link group for the given link part. This allows for a series of link parts to be defined for each frame, if necessary.
Group list 275 is set up for every base part that has link parts. Each entry in group list 275 contains the link group name (actually a link group index) that defines a group of link parts. A group index (G#) is an index into a list of group names. This is a way of indicating a particular group within the data structures of Fig. 23. Group list 275 works in conjunction with link array 273. Information is set up in base part record 270 so that it can be easily changed on a per-frame basis. At a given frame, the orientation of the base part, as well as of any link parts which may be present, is changed by simply changing one number in the orientation array. A link part can be replaced (e.g., EYE OPEN replaced with EYE CLOSED) simply by replacing the link part number in link array 273 with another one from the appropriate group listed in group list 275.
Sibling 276 relates to multiple base parts being tied together or defined to exist at a root level. These multiple parts are completely independent of each other, yet tied to the same root level. This allows multiple characters to be choreographed together in one session. Base part data structure 270 supports this by having each base part linked into sibling list 276 beginning from the "first part" which is always the root part.
A single base part can also have other parts attached to it, forming more complex images. The attached parts are affected by the location and transformation of the given base part. All attachments for a base part are kept in a standard linked list, sibling list 276. In the image shown in Figs. 21 and 22, it can be seen that a body part having five attach points, including two arms, two legs, and one head, has five sibling parts. These five parts are considered siblings among themselves, since they are affected in common by the base part and its related movements, but do not affect one another. Each sibling part is independent of the other sibling parts attached to the base part. Each of the sibling parts, however, may in turn have its own attached parts. In the case of the arms, each arm includes an upper arm, a lower arm, and a hand. While these components may move in conjunction with each other, each has its own independent spherical representation library of movements. Because numerous parts have their own spherical representation library (e.g., upper arms 242 and 245 in Figs. 21 and 22), changes in orientation and movement of a base part can tie directly to the changes in orientation and movement of an attached part.
Each attached part must be assigned to an attach point (e.g., 254-266 in Figs. 21 and 22) that must be drawn on each orientation of the base part. Once an attached part has been created, it can become its own base part (e.g., base part 270b and base part 270e). In this way, a hierarchical tree of parts is built around the first base part. All parts in the system are always affected by the parts to which they are subordinate.
In addition to base part records 270, two other arrays are maintained on the host computer. These are the name list array 283 and group list array 284. Name list array 283 contains part name 285 and address list 286 which is a pointer to the GSP (shown in Fig. 24) address list. This is the information entry point to the GSP data structure. Group list array 284 contains group name 287, as well as information on which parts are contained in the group. N# or name index 288 is an index into a list of part names. G# or group index 282 is an index into a list of group names. This is a way of indicating a particular group within base part record 270.
There is shown in Fig. 24 an additional data structure representation of the GSP, or graphic system processor. In the present embodiment, the GSP is a TIGA™-compatible graphics board having its own local memory. The GSP data structures contain the actual vector data needed to draw each part. Vector data is the basic data which describes the graphic drawing. It is composed of lines and points (vertex points) which, when connected appropriately, describe the desired image. Part data is downloaded to the GSP on an as-needed basis. Part data is downloaded from a memory storage device such as a hard disk drive; it can also come from a virtual disk drive set up in faster RAM memory. Each time a part is called to be drawn, a check is made to see if it has already been downloaded to the GSP. If not, the part is downloaded to the GSP at the time the part is called to be drawn. In either case, after a pointer to the particular vector data is obtained, a GSP memory-resident routine is called to draw the data for the part on the display screen. Address list 286 is associated with the GSP and is an array of GSP addresses which point to part data. The desired orientation code 290 (contained in orientation list 271, Fig. 23) is used as an index into address list 286.
An entry of zero in address list 286 indicates that a part is not loaded and that no load has been attempted for that part. If the entry is a negative one, it means an attempt was made to load the part, but part data was not found. In the case of a negative one, no further attempts need be made to load the data for this part at this particular orientation. This may occur if the corresponding orientation for the part is not present in the spherical representation library of orientations for that part. If the entry is neither a zero nor a negative one, then it is a GSP address to part data.
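This convention can be sketched as a lookup-with-download. The download_part loader below is hypothetical, standing in for whatever routine moves part data from disk to GSP memory; the long-integer address type is also an assumption.

```c
/* Minimal sketch of the address list 286 convention: 0 means no load
 * attempted, -1 means a load failed (no data for that orientation),
 * anything else is a GSP address to part data. */
extern long download_part(int orientation_code);  /* hypothetical loader */

long gsp_part_address(long *address_list, int orientation_code)
{
    long entry = address_list[orientation_code];
    if (entry == 0) {                         /* no load attempted yet    */
        entry = download_part(orientation_code);
        address_list[orientation_code] = (entry != 0) ? entry : -1;
    }
    if (entry <= 0)                           /* known missing: skip draw */
        return 0;
    return entry;                             /* GSP address of part data */
}
```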
The actual part data is contained in a fixed header section consisting of extent 292 and attach points 293, followed by a variable-size section consisting of all vector data for the part 294. The vector data is terminated by a pair of zeros in termination record 295.
The vector data consists of a series of polylines. Each polyline contains a number of coordinates and a delimiter followed by the actual coordinate data. The first coordinate in a polyline always represents the "moved to" position or starting point. All remaining coordinates in the polyline are "draw to" positions. The draw to positions follow the initialization established by the move to position.
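A sketch of a routine walking this layout follows. The 16-bit record width and the draw_line callback are assumptions about details the text leaves open.

```c
#include <stddef.h>

/* Minimal sketch of the vector data layout: a sequence of polylines,
 * each headed by a coordinate count and a delimiter, then coordinate
 * pairs; the first pair is the move-to position, the rest are draw-to
 * positions, and a pair of zeros terminates the list (record 295). */
void draw_part(const short *vec,
               void (*draw_line)(int x0, int y0, int x1, int y1))
{
    size_t i = 0;
    for (;;) {
        int count = vec[i++];
        int delim = vec[i++];
        if (count == 0 && delim == 0)     /* termination record          */
            break;
        int x = vec[i++], y = vec[i++];   /* first pair: move-to         */
        for (int k = 1; k < count; k++) { /* remaining pairs: draw-to    */
            int nx = vec[i++], ny = vec[i++];
            draw_line(x, y, nx, ny);
            x = nx; y = ny;
        }
    }
}
```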
Final offset transformations of the images are completed for a given part when it is drawn, selected and written to the screen.

Claims
1. A computer based method for creating animated images comprising the steps of:
a) creating a first image in a first global coordinate orientation;
b) storing said first image;
c) creating a preselected number of additional images in a preselected number of additional global coordinate orientations, wherein said additional images are different in orientation from said first image created in step (a), as well as from each other; and
d) storing said additional images.
2. The method of claim 1 wherein said first image is a primary part of an entire image made up of at least one primary part.
3. The method of claim 1 wherein said first image is comprised of at least one secondary part.
4. The method of claim 3 comprising the further steps of:
a) creating a first image for said secondary part in a first global coordinate orientation for said secondary part;
b) storing said first image for said secondary part;
c) creating a preselected number of additional images for said secondary part, each having a unique global coordinate orientation and each representing said first image for said secondary part in a different global coordinate orientation; and
d) storing said additional images for said secondary part.
5. The method of claim 4 wherein each image for said secondary part is combined with each image for said primary part having the same global coordinate orientation.
6. The method of claim 1 wherein said first image is a face.
7. The method of claim 3 wherein said secondary part is a mouth.
8. The method of claim 5 wherein said primary part is a face and said secondary part is a mouth.
9. The method of claim 5 wherein said combined image constitutes one frame of a sequence of frames, said sequence of frames representing animation of said combined image over said sequence of frames.
10. A computer based system for generating animated images, wherein an animation frame including one of said images is generated at least in part by selecting at least one of said stored images associated with a preselected global coordinate position for said frame and storing said image information to comprise at least a part of said frame.
PCT/US1992/010633 1991-12-06 1992-12-07 Global coordinate image representation system WO1993011525A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US80608891A 1991-12-06 1991-12-06
US806,088 1991-12-06

Publications (1)

Publication Number Publication Date
WO1993011525A1 (en) 1993-06-10

Family

ID=25193288

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1992/010633 WO1993011525A1 (en) 1991-12-06 1992-12-07 Global coordinate image representation system

Country Status (5)

Country Link
CN (1) CN1075564A (en)
AU (1) AU3274093A (en)
IL (1) IL103979A0 (en)
MX (1) MX9207034A (en)
WO (1) WO1993011525A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6317130B1 (en) * 1996-10-31 2001-11-13 Konami Co., Ltd. Apparatus and method for generating skeleton-based dynamic picture images as well as medium storing therein program for generation of such picture images
US11912960B2 (en) 2015-05-19 2024-02-27 Ecolab Usa Inc. Efficient surfactant system on plastic and all types of ware


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4600919A (en) * 1982-08-03 1986-07-15 New York Institute Of Technology Three dimensional animation
US4600919B1 (en) * 1982-08-03 1992-09-15 New York Inst Techn
US4899293A (en) * 1988-10-24 1990-02-06 Honeywell Inc. Method of storage and retrieval of digital map data based upon a tessellated geoid system


Also Published As

Publication number Publication date
CN1075564A (en) 1993-08-25
AU3274093A (en) 1993-06-28
IL103979A0 (en) 1993-05-13
MX9207034A (en) 1994-06-30


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU BB BG BR CA CS FI HU JP KP KR LK MG MN MW NO NZ PL RO RU SD

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA