CA2225400A1 - Method and system for image combination using a parallax-based technique - Google Patents

Method and system for image combination using a parallax-based technique

Info

Publication number
CA2225400A1
CA2225400A1
Authority
CA
Canada
Prior art keywords
mosaic
image
images
scene
parallax
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002225400A
Other languages
French (fr)
Inventor
Rakesh Kumar
Keith James Hanna
James R. Bergen
Padmanabhan Anandan
Michal Irani
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sarnoff Corp
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of CA2225400A1 publication Critical patent/CA2225400A1/en
Abandoned legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/111 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/20 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
    • H04N19/23 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding with coding of regions that are present throughout a whole video segment, e.g. sprites, background or mosaic
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/16 Image acquisition using multiple overlapping images; Image stitching
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/189 Recording image signals; Reproducing recorded image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/243 Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074 Stereoscopic image analysis
    • H04N2013/0081 Depth or disparity estimation from stereoscopic image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074 Stereoscopic image analysis
    • H04N2013/0088 Synthesising a monoscopic image signal from stereoscopic images, e.g. synthesising a panoramic or high resolution monoscopic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Computing Systems (AREA)
  • Image Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Image Generation (AREA)

Abstract

A system for generating three-dimensional (3D) mosaics from a plurality of input images representing an imaged scene. The plurality of input images contains at least two images of a single scene, where at least two of the images have overlapping regions. The system combines the images using a parallax-based approach that generates a 3D mosaic comprising an image mosaic representing a panoramic view of the scene and a shape mosaic representing the three-dimensional geometry of the scene. Specifically, in one embodiment, the system registers the input images along a parametric surface within the imaged scene and derives translation vectors useful in aligning the images into a two-dimensional image mosaic. Once the images are registered, the system generates a shape mosaic representing objects within the scene.

Description

METHOD AND SYSTEM FOR IMAGE COMBINATION USING
A PARALLAX-BASED TECHNIQUE

The invention relates to image processing systems, and more particularly, the invention relates to an image processing system that combines multiple images into a mosaic using a parallax-based technique.
Until recently, image processing systems have generally processed images, such as frames of video, still photographs, and the like, on an individual, image-by-image basis. Each individual frame or photograph is typically processed by filtering, warping, and applying various parametric transformations. In order to form a panoramic view of the scene, the individual images are combined to form a two-dimensional mosaic, i.e., an image that contains a plurality of individual images. Additional image processing is performed on the mosaic to ensure that the seams between the images are invisible such that the mosaic looks like a single large image.
The alignment of the images and the additional processing to remove seams is typically accomplished manually by a technician using a computer workstation, i.e., the image alignment and combination processes are computer aided. In such computer-aided image processing systems, the technician manually selects processed images, manually aligns those images, and a computer applies various image combining processes to the images to remove any seams or gaps between the images. Manipulation of the images is typically accomplished using various computer input devices such as a mouse, trackball, keyboard and the like. Since manual mosaic generation is costly, those skilled in the art have developed automated systems for generating image mosaics.
In automated systems for constructing mosaics, the information within a mosaic is generally expressed as two-dimensional motion fields. The motion is represented as a planar motion field, e.g., an affine or projective motion field. Such a system is disclosed in U.S. patent application serial number 08/339,491, entitled "Mosaic Based Image Processing System", filed November 14, 1994 and herein incorporated by reference. The image processing approach disclosed in the '491 application automatically combines multiple image frames into one or more two-dimensional mosaics. However, that system does not account for parallax motion that may cause errors in the displacement fields representing motion in the mosaic.

In other types of image processing systems, multiple images are analyzed in order to recover photogrammetric information, such as relative orientation estimation, range map recovery and the like, without generating a mosaic. These image analysis techniques assume that the internal camera parameters (e.g., focal length, pixel resolution, aspect ratio, and image center) are known. In automated image processing systems that use alignment and photogrammetry, the alignment and photogrammetric process involves two steps: (1) establishing correspondence between pixels within various images via some form of area- or feature-based matching scheme, and (2) analyzing pixel displacement in order to recover three-dimensional scene information.

The invention is a method of processing a plurality of images to generate a three-dimensional (3D) mosaic of a scene comprising the steps of providing a plurality of images of the scene and registering said images to construct said 3D mosaic containing an image mosaic and a shape mosaic, where said image mosaic represents a panoramic view of the scene and said shape mosaic represents a 3D geometry of the scene.
The invention is also a system for generating a 3D mosaic of a scene from a plurality of images of the scene, comprising means for storing said plurality of images and a registration processor, connected to said storing means, for registering said images to construct said 3D mosaic containing an image mosaic and a shape mosaic, where said image mosaic represents a panoramic view of the scene and said shape mosaic represents a 3D geometry of the scene.
The teachings of the invention can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:
FIG. 1 depicts a block diagram of an imaging system incorporating an image processing system of the invention;
FIG. 2 schematically depicts the input images and output mosaics of the system of FIG. 1;
FIG. 3 is a geometric representation of the relationship amongst a reference image generated by a reference camera, an inspection image generated by an inspection camera, and an arbitrary parametric surface within a scene imaged by the cameras;
FIG. 4 is a flow chart of a P-then-P routine for registering images and extracting parallax information from the registered images;
FIG. 5 is a flow chart of a P-and-P routine for registering images and extracting parallax information from the registered images;
FIG. 6 is a functional block diagram of an image processing system of the invention;
FIG. 7 is a flow chart of a pose estimation routine;
FIG. 8 is a flow chart of a 3D corrected mosaic construction routine;
FIG. 9 is a two-dimensional geometric representation of the plane OMP of FIG. 3, where the scene contains an object that occludes points within the image;
FIG. 10 depicts an experimental set-up for estimating heights of objects within a scene using the system of the invention; and
FIG. 11 depicts a block diagram of an application for the inventive system that synthesizes a new view of existing 3D mosaics.
An image processing system that combines a plurality of images representing an imaged scene to form a 3D mosaic, where the 3D mosaic contains an image mosaic representing a panoramic view of the scene and a shape mosaic representing the 3D geometry of the scene, is first described.
The shape mosaic defines a relationship between any two images by a motion field that is decomposed into two-dimensional image motion of a two-dimensional, parametric surface and a residual parallax field. Although many techniques may be useful in generating the motion fields and the parametric translation parameters, the following disclosure discusses two illustrative processes. The first process, known as plane-then-parallax (P-then-P), initially registers the images along a parametric surface (plane) in the scene and then determines a parallax field representing the 3D geometry of the scene. The second illustrative process, known as plane-and-parallax (P-and-P), simultaneously registers the images and determines the parallax field. With either process, the results of registration are translation parameters for achieving image alignment along the parametric surface, a parallax field representing the 3D geometry (motion) of the scene with respect to the parametric surface, and a planar motion field representing motion within the parametric surface. These results can be used to combine the input images to form a 3D mosaic.
Image motion of a parametric surface is, in essence, a conventional representation of a 2D mosaic. Motion of the parametric surface is generally expressed as a parametric motion field that is estimated using one of the many available techniques for directly estimating two-dimensional motion fields. Generally speaking, a direct approach is sufficient for aligning and combining a plurality of images to form a two-dimensional mosaic. Such a two-dimensional mosaic represents an alignment of a two-dimensional parametric surface within a scene captured by the image sequence. This parametric surface can either be an actual surface in the scene within which lie most objects of the scene, or the parametric surface can be a virtual surface that is arbitrarily selected within the scene. All objects within the scene generate what is known as parallax motion as a camera moves with respect to the parametric surface. This parallax motion is represented by a parallax motion field (also referred to herein as a parallax field). The parallax field has value for objects within the scene that do not lie in the plane of the surface. Although objects lying in the plane of the surface are represented in the parallax field, those objects have zero parallax. More particularly, the parallax field represents the objects that lie in front of and behind the parametric surface and the distance (height) of these objects from the surface, i.e., the 3D geometry of the scene. As such, using the parallax field in combination with the parametric surface and its planar motion field, the system can generate a 3D reconstruction of the scene up to an arbitrary collineation. If camera calibration parameters such as focal length and optical center are known, then this 3D reconstruction of the scene is Euclidean.
FIG. 1 depicts a block diagram of the image processing system 100 as it is used to generate 3D mosaics from a plurality of images. The system is, in general, a general-purpose computer that is programmed to function as an image processing system as described herein. The system further contains one or more cameras 104_n that image a scene 102. In the illustrative system, two cameras, 104_1 and 104_2, are shown. Each camera, for simplicity, is assumed to be a digital video camera that generates a series of frames of digitized video information. Alternatively, the cameras could be still cameras, conventional video cameras, or some other form of imaging sensor such as an infrared sensor, an ultrasonic sensor, and the like, whose output signal is separately digitized before the signal is used as an input to the system 100. In any event, each camera 104_1 and 104_2 generates an image having a distinct view of the scene. Specifically, the images could be selected frames from each camera imaging a different view of the scene, or the images could be a series of frames from a single camera as the camera pans across the scene. In either case, the input signal to the image processing system of the invention is at least two images taken from different viewpoints of a single scene. Each of the images partially overlaps the scene depicted in at least one other image. The system 100 combines the images into a 3D mosaic and presents the mosaic to an output device 106. The output device could be a video compression system, a video storage and retrieval system, or some other application for the 3D mosaic.

FIG. 2 schematically depicts the input images 200_n to the system 100 and the output 3D mosaic 202 generated by that system in response to the input images. The input images, as mentioned above, are a series of images of a scene, where each image depicts the scene from a different viewpoint. The system 100 aligns the images and combines them to form an image mosaic 204, e.g., a two-dimensional mosaic having the images aligned along an arbitrary parametric surface extending through all the images. Aligning the images to form the image mosaic requires both the parametric translation parameters and the planar motion field. In addition to the image mosaic, the system generates a shape mosaic 206 that contains the motion field that relates the 3D objects within the images to one another and to the parametric surface. The shape mosaic contains a parallax motion field 208. The planar motion field represents motion within the parametric surface that appears from image to image, while the parallax flow field represents motion due to parallax of 3D objects in the scene with respect to the parametric surface.
A. Determining a Residual Parallax Field

Consider two camera views, one denoted the "reference" camera and the other denoted the "inspection" camera (e.g., respectively, cameras 104_1 and 104_2 of FIG. 1). In general, the image processing system maps any three-dimensional (3D) point P_1 in the reference camera coordinate system to a 3D point P_2 in the inspection camera coordinate system using a rigid body transformation represented by Equation 1:

$$ P_2 = R\,P_1 + T_2 = R\,(P_1 - T_1) \qquad (1) $$

The mapping is represented by a rotation (R) followed by a translation (T_2), or by a translation (T_1) followed by a rotation (R). Using perspective projection, the image coordinates (x, y) of a projected point P are given by the vector p of Equation 2:

$$ p = (x, y)^\top = \frac{f}{P_z}\,(P_x, P_y)^\top \qquad (2) $$

where f is the focal length of the camera.
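As a concrete illustration of Equations 1 and 2, a minimal sketch follows (assumed Python/NumPy; the rotation and translation values are illustrative only, not taken from the patent):

```python
import numpy as np

def rigid_transform(P1, R, T2):
    """Map a 3D point from the reference to the inspection frame (Equation 1)."""
    return R @ P1 + T2

def project(P, f):
    """Perspective projection of a 3D point to image coordinates (Equation 2)."""
    return f * P[:2] / P[2]

# Example: a small rotation about the y axis plus a sideways translation.
theta = np.deg2rad(2.0)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
T2 = np.array([0.1, 0.0, 0.0])

P1 = np.array([1.0, 0.5, 4.0])       # point in the reference coordinate system
P2 = rigid_transform(P1, R, T2)      # same point in the inspection system
p1, p2 = project(P1, f=1.0), project(P2, f=1.0)
print(p1, p2)                        # the difference p2 - p1 is the image motion
```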
FIG. 3 is a geometric representation of the relationship amongst a reference image 302 generated by the reference camera, an inspection image 304 generated by the inspection camera, and an arbitrary parametric surface 300 within the imaged scene. Let S denote the surface of interest (a real or virtual surface 300), let P denote an environmental point (e.g., a location of an object) within the scene that is not located on S, and let O and M denote the center locations (focal points) of each camera. The image of P on the image 302 is p. Let the ray MP intersect the surface S at location Q. A conventional warping process, used to align the images 302 and 304 by aligning all points on the surface S, warps p', the image of P on the image 304, to q, the image of Q on the image 302. Therefore, the residual parallax vector is pq, which is the image of the line PQ. It is immediately obvious from the figure that vector pq lies on the plane OMP, which is the epipolar plane passing through p. Since such a vector is generated for any point P in the scene, the collection of all parallax vectors forms a parallax displacement field. Since the parallax displacement vector associated with each image point lies along the epipolar plane associated with that image, the vector field is referred to as an epipolar field. This field has a radial structure, each vector appearing to emanate from a common origin in the image dubbed the "epipole" (alias focus of expansion (FOE)). In FIG. 3 the epipole is located at point "t". From FIG. 3, it is evident that the epipole t lies at the intersection of the line OM with the image plane 302. The parallax displacement field is also referred to herein simply as a parallax field or parallax motion field.
In determining the residual parallax information (e.g., the parallax field), it is assumed that the two images are aligned (registered) along the parametric surface using a conventional parametric motion estimation method. These alignment methods are also known in the art as "hierarchical direct methods" of alignment or registration. One such method is described in commonly assigned U.S. patent application serial number 08/339,491, filed November 14, 1994 and herein incorporated by reference. As shall be discussed in detail below, once the inventive system determines the transformation and planar motion field for aligning the two images along the parametric surface, the system determines the residual parallax information representing the height, above or below the parametric surface, of objects within the scene.

B. Registration of Images

Using the general principles discussed above to accurately represent a 3D scene, the system must recover both the planar and parallax motions as well as the translation parameters for aligning the images. Illustratively, the system uses two techniques, either separately or in sequence, to determine the transformation parameters and the motions within the images. The first technique is a "sequential registration" approach, in which a plane (parametric surface) within the scene that is imaged by both cameras is first registered using an eight-parameter planar transformation; the residual parallax motion is then estimated in a separate, sequentially executed step. The second technique is a "simultaneous registration" approach, in which the system simultaneously estimates the parametric transformation as well as the planar and parallax motion fields.
i. Sequential Registration

FIG. 4 depicts a flow chart of a routine 400 executed by the system to perform sequential registration and determine the parallax field. To register a plane within the scene, the system uses a hierarchical direct registration technique. This technique uses a planar flow field model for motion within a plane. Once a plurality of images are input to the system at step 402, the routine performs two sequential steps to determine the translation parameters and the motion fields; namely, at step 404, the routine derives the planar motion fields and, at step 406, the routine estimates both the translation parameters and the parallax field. The resulting output 408 from the routine is the relational information regarding the input images, e.g., the translation parameters for aligning the images along a plane and the planar and parallax motion fields representing the 3D geometry of the scene.

Specifically, the total motion vector of a point in the scene is expressed as the sum of the motion vectors due to the planar surface motion (u_p, v_p) and the residual parallax motion (u_r, v_r). As such, this motion vector is represented as Equation 3.

$$ (u, v) = (u_p, v_p) + (u_r, v_r) \qquad (3) $$

Further, the motion field of a (two-dimensional) planar surface is represented as:

$$ u_p(x, y) = p_1 x + p_2 y + p_5 + p_7 x^2 + p_8 x y $$
$$ v_p(x, y) = p_3 x + p_4 y + p_6 + p_7 x y + p_8 y^2 \qquad (4) $$

where

$$ p_1 = N_{2x} T_{2x} - N_{2z} T_{2z} \qquad p_2 = N_{2y} T_{2x} - \Omega_z $$
$$ p_3 = N_{2x} T_{2y} + \Omega_z \qquad p_4 = N_{2y} T_{2y} - N_{2z} T_{2z} $$
$$ p_5 = f\,(N_{2z} T_{2x} - \Omega_y) \qquad p_6 = f\,(N_{2z} T_{2y} + \Omega_x) $$
$$ p_7 = \tfrac{1}{f}\,(\Omega_y - N_{2x} T_{2z}) \qquad p_8 = \tfrac{1}{f}\,(-\Omega_x - N_{2y} T_{2z}) \qquad (5) $$

(T_{2x}, T_{2y}, T_{2z}) denotes the translation vector between camera views, (Ω_x, Ω_y, Ω_z) denotes the angular-velocity vector, f denotes the focal length of the camera, and (N_{2x}, N_{2y}, N_{2z}) denotes the normal vector to the planar surface from a camera center. The residual parallax vector is further represented as:

$$ u_r(x) = \gamma\,(f T_{2x} - x T_{2z}) \qquad v_r(x) = \gamma\,(f T_{2y} - y T_{2z}) \qquad (6) $$

where the parallax magnitude field γ is represented by Equation 7:

$$ \gamma = \frac{H}{P_z\, T_\perp} \qquad (7) $$

where H is the perpendicular distance of the point of interest from the plane, P_z is the depth of the point of interest (also referred to in the art as range), T_⊥ is the perpendicular distance from the center of the first (reference) camera to the plane, and f is the focal length of that camera. At each point in the image, the parallax magnitude field γ varies directly with the height of the corresponding 3D point above the reference surface and inversely with the depth of the point, i.e., the distance of the point from the camera center.
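To make the decomposition concrete, the following sketch (assumed Python/NumPy, with purely illustrative parameter values) evaluates the planar flow of Equation 4, the residual parallax of Equation 6, and their sum (Equation 3) on a toy grid:

```python
import numpy as np

def planar_flow(x, y, p):
    """Eight-parameter quadratic planar motion field (Equation 4).
    p is the parameter vector (p1..p8)."""
    p1, p2, p3, p4, p5, p6, p7, p8 = p
    u_p = p1 * x + p2 * y + p5 + p7 * x**2 + p8 * x * y
    v_p = p3 * x + p4 * y + p6 + p7 * x * y + p8 * y**2
    return u_p, v_p

def parallax_flow(x, y, gamma, T2, f):
    """Residual parallax field (Equation 6); gamma is the per-pixel
    parallax magnitude of Equation 7, T2 the translation vector."""
    u_r = gamma * (f * T2[0] - x * T2[2])
    v_r = gamma * (f * T2[1] - y * T2[2])
    return u_r, v_r

# Total motion (Equation 3) on a toy 4x4 grid.
x, y = np.meshgrid(np.arange(4.0), np.arange(4.0))
p = np.array([0.01, 0.0, 0.0, 0.01, 0.5, -0.3, 1e-4, 1e-4])
gamma = np.zeros_like(x)
gamma[2, 2] = 0.05                     # one point lying off the plane
u_p, v_p = planar_flow(x, y, p)
u_r, v_r = parallax_flow(x, y, gamma, T2=np.array([0.2, 0.0, 0.01]), f=1.0)
u, v = u_p + u_r, v_p + v_r
```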
To determine the total motion field, the sequential approach first solves Equation 4 for (u_p, v_p) and then Equation 3 for (u_r, v_r). To achieve alignment in a coarse-to-fine, iterative manner, the input images are subsampled to form multi-resolutional image pyramids. Within each level of the pyramid, the measure used as an indication of an image alignment match is the sum of the squared difference (SSD) measure integrated over selected regions of interest in the images. Typically, the system initially selects the entirety of the images as the selected region and, thereafter, recursively selects smaller regions until the alignment measure is minimized. To perfect alignment, the alignment measure is minimized with respect to the quadratic flow field parameters (defined above). The SSD error measure for estimating the flow field within an image region is:

$$ E(\{u\}) = \sum_{x}\bigl(I(x, t) - I(x - u(x),\, t - 1)\bigr)^2 \qquad (8) $$

where x = (x, y) denotes the spatial position of a point within an image, I is the multi-resolutional pyramid image intensity, u(x) = (u(x, y), v(x, y)) denotes the image velocity at a point (x, y) within an image region, and {u} denotes the entire motion field within the region. The motion field is modeled by a set of global and local parameters.
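A direct transcription of Equation 8 might look like this (assumed Python/NumPy; nearest-neighbor back-warping is used for brevity, whereas a real implementation would interpolate):

```python
import numpy as np

def ssd_error(I_t, I_prev, u, v):
    """Sum-of-squared-differences alignment measure (Equation 8).
    I_t, I_prev: 2D intensity arrays; u, v: per-pixel motion field."""
    h, w = I_t.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Back-warp coordinates x - u(x), rounded to the nearest pixel.
    xs = np.clip(np.rint(xx - u).astype(int), 0, w - 1)
    ys = np.clip(np.rint(yy - v).astype(int), 0, h - 1)
    residual = I_t - I_prev[ys, xs]
    return np.sum(residual ** 2)
```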
To use this technique, the system, at step 410, first constructs a multi-resolutional pyramid representation (e.g., Laplacian or Gaussian pyramids) of each of the two input images. Thereafter, at step 412, the routine estimates, in a coarse-to-fine manner, the motion parameters that align the two images to one another, i.e., although not specifically shown, the routine iterates over the levels of the pyramids to achieve the coarse-to-fine alignment. Specifically, the routine aligns the images using the foregoing planar motion field computations and minimizing the SSD at each level of the image pyramids. The routine estimates the eight motion parameters (p_1 through p_8) and the resulting motion field with reference to a region within a planar surface comprising a substantial number of pixels in the two images (e.g., a "real" or physical surface). In particular, the routine begins with some initial parameter values (typically, zero) and then iteratively refines the parameters in order to first minimize the SSD error at a coarse image resolution, then successively at finer image resolutions within the image pyramids. After each alignment iteration, the transformation based on the current set of parameters is applied to the inspection image in order to reduce the residual displacement between the two images. The reference and inspection images are registered so that the selected image region (e.g., the overlapping region in the physical plane) is aligned along a "visible" planar surface. The routine queries, at step 414, whether further computational iterations are necessary to achieve alignment. The decision is based on a comparison of the SSD to a predefined threshold SSD level. If further alignment is necessary, the routine loops to step 412. Once the images are accurately registered, the routine proceeds from step 414 to step 406.

At step 406, the routine uses the planar flow field information to compute the translation parameters and the parallax field. At step 418, the value of (u_p, v_p) as expressed in Equation 4 is computed using the estimated values of p_1 through p_8 computed in step 404. Then, the values of (u_p, v_p) and the expression for (u_r, v_r) in Equation 6 are substituted into the SSD function (Equation 8). Equation 8 is then minimized to solve for the translation direction T_2(x, y, z) and the parallax vector field γ. The routine iterates from step 420 as the parameters are computed to minimize the SSD. Although not explicitly shown, the routine also iterates through the pyramid levels to achieve sufficient translation parameter accuracy. Once the SSD is minimized to a sufficient level, the routine generates, at step 408, as an output the translation parameters, the planar motion field, and the parallax motion field. Note that these values are generated using the various levels of image pyramids and, as such, these parameters and motion fields are generated as multi-resolutional pyramids. Thus, the parameter and motion field pyramids can be directly used to produce a multi-resolutional pyramid of the 3D mosaic.
The system generally uses this sequential registration process to align images that depict a scene containing a well-defined planar surface. The result of the process is a set of translation parameters for aligning the images along the plane to produce a 2D mosaic and motion fields representing 3D geometry in the scene. In other words, the system generates the parameters used to produce a 3D mosaic using a two-step process: registration to a plane (step 404), then determination of the parallax information (step 406). This two-step process has been dubbed the plane-then-parallax (P-then-P) method.
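The two-step structure of P-then-P can be sketched as follows (Python; `refine_planar` and `refine_parallax` are illustrative placeholders for the SSD minimizations described above, and the block-average pyramid is a crude stand-in for the Laplacian/Gaussian pyramids the text names):

```python
import numpy as np

def build_pyramid(img, levels):
    """Multi-resolution pyramid via 2x2 block averaging (step 410)."""
    pyr = [np.asarray(img, dtype=float)]
    for _ in range(levels - 1):
        a = pyr[-1]
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
        pyr.append(0.25 * (a[:h:2, :w:2] + a[1:h:2, :w:2]
                           + a[:h:2, 1:w:2] + a[1:h:2, 1:w:2]))
    return pyr

def refine_planar(ref, insp, p):
    return p                                  # placeholder: SSD descent over p1..p8

def refine_parallax(ref, insp, p):
    return np.zeros(3), np.zeros_like(ref)    # placeholder: solve for T2 and gamma

def plane_then_parallax(reference, inspection, levels=3):
    """Skeleton of routine 400: plane first (step 404), parallax second (406)."""
    ref_pyr = build_pyramid(reference, levels)
    insp_pyr = build_pyramid(inspection, levels)
    p = np.zeros(8)                           # planar parameters p1..p8
    for lvl in range(levels - 1, -1, -1):     # coarse to fine (step 412)
        p = refine_planar(ref_pyr[lvl], insp_pyr[lvl], p)
    T2, gamma = refine_parallax(ref_pyr[0], insp_pyr[0], p)
    return p, T2, gamma
```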
ii. Simultaneous Registration

FIG. 5 depicts a routine 500 for simultaneously registering two images and generating a parallax field. In the simultaneous registration approach, the system simultaneously solves for both (u_p, v_p) and (u_r, v_r) in the total motion vector as defined in Equations 3 and 6. Using the simultaneous registration approach, a "real" planar surface is not necessary; thus, the images can be registered to a "virtual" planar surface lying arbitrarily within the images. As such, this approach is more flexible than the sequential registration approach.

Routine 500 begins in the same manner as routine 400, in that steps 402 and 410 input images and then construct multi-resolutional pyramids therefrom. Thereafter, the routine, in step 504, computes the translation parameters, the planar motion field and the parallax field. The results are output in step 510.

More specifically, the expressions for (u, v) in Equations 3 and 4 are substituted into the SSD measure (Equation 8) to obtain a complete objective function. The resulting function is then minimized, in step 504, with respect to the SSD to simultaneously solve for the planar motion parameters (p_1 through p_8), the direction of translation T_2(x, y, z), and the parallax field γ at each level of the image pyramid. As such, this process is iterated using the multi-resolutional image pyramids in a coarse-to-fine fashion. Results obtained at the coarse level of the pyramid are used as an initial estimate for the computation at the next level. At each level of the pyramid, the computation is iterated to minimize the SSD. However, the results at each iteration are stored to form a multi-resolutional pyramid of the computation results, i.e., the process forms multi-resolutional pyramids for the translation parameters and motion fields. After each iteration through step 506, the estimated motion parameters are normalized, at step 508, such that the planar registration parameters correspond to a virtual plane which gives rise to the smallest parallax field (e.g., the average plane of the 3D scene imaged by the two cameras). The result, generated at step 510, is a set of translation parameters for aligning the images along the plane to produce a 2D mosaic, a planar motion field representing motion within the plane, and a parallax vector field representing objects in the scene that do not lie in the plane. In other words, the system generates a 3D alignment using a one-step process: simultaneous registration to a plane and determination of the parallax information. This one-step process has been dubbed the plane-and-parallax (P-and-P) method.
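In sketch form, the P-and-P objective packs all unknowns into a single vector and evaluates one SSD; the following (assumed Python/NumPy) shows only the joint parameterization, leaving the descent strategy open:

```python
import numpy as np

def total_flow(params, gamma, x, y, f=1.0):
    """Total motion field of Equation 3 from one packed unknown vector:
    params = (p1..p8, T2x, T2y, T2z); gamma is the per-pixel parallax field."""
    p, T2 = params[:8], params[8:]
    u = p[0]*x + p[1]*y + p[4] + p[6]*x**2 + p[7]*x*y + gamma*(f*T2[0] - x*T2[2])
    v = p[2]*x + p[3]*y + p[5] + p[6]*x*y + p[7]*y**2 + gamma*(f*T2[1] - y*T2[2])
    return u, v

def joint_ssd(params, gamma, I_t, I_prev):
    """Objective for P-and-P: Equation 8 evaluated on the total flow."""
    h, w = I_t.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    u, v = total_flow(params, gamma, xx, yy)
    xs = np.clip(np.rint(xx - u).astype(int), 0, w - 1)
    ys = np.clip(np.rint(yy - v).astype(int), 0, h - 1)
    return np.sum((I_t - I_prev[ys, xs]) ** 2)
```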
iii. Combination of Sequential and Simultaneous Image Registration

FIG. 6 depicts a functional block diagram of the system 100 of the invention. The input images are temporarily stored in image storage 600. First, the system 100 uses sequential registration to register the reference and inspection images and provide an estimate of the parallax field (P-then-P registration processor 602 operating in accordance with routine 400 of FIG. 4). Secondly, the system uses simultaneous registration to provide further image alignment and an accurate parallax field (P-and-P registration processor 604 operating in accordance with routine 500 of FIG. 5). If either processor 602 or 604 generates a flow field for the parametric surface that is deemed accurate to within a predefined measure of accuracy (e.g., a minimal SSD), the system ceases processing the images and begins generating a 3D mosaic. For example, in a scene containing simple shaped objects, the P-then-P processing may be enough to accurately generate a 3D mosaic of the scene. More complicated scenes having many parallax objects may require both forms of image processing to generate an accurate 3D mosaic. Generally, alignment quality is tested by computing the magnitude of the normal flow field between the inspection and reference images. Regions in the parametric surface having normal flow above a predefined threshold (e.g., 0.5) are labeled as unaligned and further processed by a subsequent processor. If these regions are of de minimis size, the system deems the images aligned and the next processor is not executed.
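The alignment-quality test can be sketched as follows (assumed Python/NumPy; the 0.5 threshold is quoted from the text, while approximating normal flow as temporal difference over spatial gradient magnitude is a standard reading, not lifted verbatim from the patent):

```python
import numpy as np

def misaligned_mask(ref, warped_insp, thresh=0.5, eps=1e-6):
    """Label pixels whose normal-flow magnitude exceeds `thresh`.
    Normal flow ~ temporal difference / spatial gradient magnitude."""
    dI = warped_insp - ref                 # residual after planar warping
    gy, gx = np.gradient(ref)              # spatial gradients
    grad_mag = np.sqrt(gx**2 + gy**2)
    normal_flow = np.abs(dI) / (grad_mag + eps)
    return normal_flow > thresh            # True = unaligned region
```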
The outputs of the two processors are the translation parameters and the motion fields. A 3D mosaic generator 606 combines the input images with the translation parameters and motion fields to produce a 3D mosaic. As defined above, the 3D mosaic contains an image mosaic and a shape mosaic, where the image mosaic is a panoramic view of the scene represented by the images and the shape mosaic represents the 3D geometry of the scene within the panoramic view.

The 3D mosaic can then be used in various extensions and applications (reference 608) of the basic system discussed above. These extensions and applications of the system are discussed in detail below.
C. Extensions of the Invention

There are a number of optional processes that enhance the usefulness of the invention. The first is a pose estimation routine that provides a simple technique for relating a new image taken from a new viewpoint to an existing mosaic. The second extension is a technique for generating a new 3D mosaic by combining an existing mosaic with a new image of the scene represented by the mosaic. The third extension is a technique for detecting and processing occlusions within a 3D mosaic.
i. Pose Estimation

FIG. 7 is a flow chart of a pose estimation routine 700. Given both reference and inspection images, the system aligns the images and determines a parallax field using the P-then-P process and/or the P-and-P process as discussed above to form a reference image mosaic and a reference shape mosaic. The reference mosaics then serve as an initial representation of the 3D scene. The reference image and shape mosaics are input at step 702 and then converted into multi-resolutional pyramids at step 704. If the reference mosaics are provided as pyramids, then step 704 can be disabled. Given a new image of the scene taken from a new viewpoint (step 706) that has been converted into an image pyramid at step 708, the routine computes, at step 710, the pose of the new viewpoint with respect to the reference view used to construct the reference mosaics.

The pose of the new image is represented by eleven pose parameters; namely, eight planar motion parameters (p_1 through p_8) and three translation parameters (T_2(x, y, z)). To compute the pose parameters, the system again uses the direct hierarchical technique used above to register the images, and iterates through step 712 until the SSD achieves a predefined value. Specifically, given the parallax field γ, Equation 8 is minimized using Equation 3 to estimate the eleven pose parameters. As with the registration approaches described above, the new image is aligned using the coarse-to-fine registration process over an image pyramid of both the 3D representation of the scene and the new image. The outcome of the pose estimation routine is a set of translation parameters, a planar motion field, and a parallax motion field for the new image. With these results, the new image can be integrated into the 3D mosaic as discussed below.
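In outline, pose estimation reuses the same SSD objective with the parallax field held fixed; the sketch below (Python; a deliberately naive coordinate descent stands in for the coarse-to-fine direct method) shows only the shape of the computation:

```python
import numpy as np

def estimate_pose(ssd, n_params=11, steps=(1e-2, 1e-3), iters=50):
    """Greedy coordinate descent over the eleven pose parameters
    (p1..p8 and T2x, T2y, T2z). `ssd(params)` evaluates Equation 8 for a
    candidate pose with the parallax field gamma held fixed."""
    params = np.zeros(n_params)
    best = ssd(params)
    for _ in range(iters):
        improved = False
        for i in range(n_params):
            for step in steps:
                for sign in (1.0, -1.0):
                    trial = params.copy()
                    trial[i] += sign * step
                    err = ssd(trial)
                    if err < best:
                        params, best, improved = trial, err, True
        if not improved:
            break
    return params, best

# Usage: estimate_pose(lambda q: joint_ssd(q, gamma, I_new, I_ref)),
# where joint_ssd is the objective sketched in the P-and-P section.
```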
ii. 3D Corrected Mosaic Generation

Given a reference 3D mosaic (e.g., an existing mosaic) relating a plurality of images of a scene to one another, and using pose estimation, the system can update the existing mosaic with new image information as it becomes available. This process of integrating information from new images into an existing mosaic is known as correcting the 3D mosaic.

As discussed above, 3D mosaics contain two parts; namely, a mosaic image representing an assemblage of the various images of a scene into a single (real or virtual) camera view, and a parallax map (shape mosaic) corresponding to that view. Note that the parallax map is itself a mosaic produced by arranging the parallax maps relating various images to one another. To construct a 3D corrected mosaic, a new image is registered with the existing mosaic and then the new image information is merged into the existing mosaic. The merged image and mosaic become a corrected mosaic that then becomes the existing mosaic for the next new image.
FIG. 8 depicts a flow chart of a routine 800 for constructing 3D corrected mosaics. At step 802, the system is supplied an existing 3D mosaic; the mosaic assembly process then proceeds as follows.

1. At step 804, a camera provides a new image of the scene represented in the existing mosaic. The new image is taken from a new viewpoint of the scene.

2. At step 806, the routine uses the pose estimation process to compute the eleven pose parameters that register the existing mosaic to the new image.

3. At step 808, the routine creates a synthetic image taken from the new viewpoint by reprojecting the existing image mosaic using the estimated pose parameters. The reprojection is accomplished by forward warping the existing image mosaic using Equation 3. To avoid generating any holes in the synthetic image arising from forward image warping, the routine conventionally super-samples the second image as described in Wolberg, Digital Image Warping, IEEE Computer Society Press, Los Alamitos, California (1990). The reprojection must also be sensitive to occlusion within the image. Occlusion detection and processing is described below. (A sketch of the reprojection-and-merge computation appears after these steps.)

It should be noted that the new image can also be registered to the existing image mosaic and then the new image warped to the existing image mosaic. However, to accomplish this warping, parallax information concerning the new image is needed to accurately warp the new image and capture the 3D geometry of the scene. To generate the necessary parallax information, either the previous image merged into the mosaic is temporarily stored in memory and used as a reference image to generate the parallax information with respect to the new image, or two new images and their respective parallax field are provided at step 804. In either instance, if the new image is to be warped to the mosaic, the new image must be provided with a parallax field. This is not necessary when the existing mosaic is warped to the new image. FIG. 8 only depicts the process for warping the mosaic to the image.

4. At step 810, the routine merges the synthetic image into the new image to create a new mosaic. This new mosaic is supplied along path 814 as the existing mosaic. The synthetic image contains image regions not present in the new image. These new regions are added to the new image and extend its boundaries to create the new 3D mosaic. Note that, in the merging process, to achieve a smooth mosaic construction, the routine can temporally average the intensities of common regions in the synthetic image and the new image.

To construct the shape mosaic for the new viewpoint, the system forward warps the existing shape mosaic to the new image coordinate system in much the same way as the existing image mosaic was reprojected. Given the pose parameters between the existing image mosaic and the new image, the shape mosaic of those portions of the existing 3D mosaic not visible in the new image, but only visible in the existing mosaic, can also be estimated. The reprojected shape mosaic is merged with this additional parallax information to complete the 3D mosaic as viewed from the new viewpoint.

5. At step 812, the routine displays the corrected 3D mosaic as seen from the new viewpoint. As such, new image information is accurately incorporated into an existing 3D mosaic.
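A minimal sketch of the reprojection (step 3) and merge (step 4) follows, assuming Python/NumPy and nearest-pixel forward splatting in place of the super-sampling of Wolberg (1990):

```python
import numpy as np

def forward_warp(mosaic, u, v, fill=np.nan):
    """Forward-warp `mosaic` by the per-pixel flow (u, v) of Equation 3.
    Nearest-pixel splatting; collisions are resolved arbitrarily here
    (see the occlusion section for the principled ordering)."""
    h, w = mosaic.shape
    out = np.full((h, w), fill)
    yy, xx = np.mgrid[0:h, 0:w]
    xd = np.rint(xx + u).astype(int)
    yd = np.rint(yy + v).astype(int)
    ok = (xd >= 0) & (xd < w) & (yd >= 0) & (yd < h)
    out[yd[ok], xd[ok]] = mosaic[yy[ok], xx[ok]]
    return out

def merge(synthetic, new_image):
    """Step 4: keep the new image where defined, fill the rest from the
    reprojected mosaic, and average where both are defined."""
    both = ~np.isnan(synthetic) & ~np.isnan(new_image)
    out = np.where(np.isnan(new_image), synthetic, new_image)
    out[both] = 0.5 * (synthetic[both] + new_image[both])
    return out
```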
iii. Occlusion Detection and Processing

Due to occlusion, in creation of the synthetic image (as detailed above), more than one image point in the second image may project to the same point in the synthetic image. As shown in FIG. 9, points P, Q and R all project to the same point in the inspection image 304. If the depth of each point relative to inspection image 304 were known, then it would be known that points P and Q are occluded by point R. In other words, from the viewpoint of the inspection camera, the points P and Q are occluded by the corner of the box 900. However, the parallax map does not contain the necessary information to deduce the relative depth of each point. Nonetheless, the relative depth information can be derived from the relative locations of the image points p, q, and r in the reference image 302. These points in image 302 must lie on an epipolar line 902 within the image. By connecting the focal points O and M of each image (camera) with a line 904, an epipole m is defined on line 902. Given that focal point M is nearer the scene than focal point O, the order of the points from point m on line 902 identifies the occluded points. In this example, point r precedes points p and q and, as such, point R occludes points P and Q. If, however, focal point O is nearer to the scene than focal point M, then the ordering of the occluded points is reversed and the occluded points are nearest to point m on line 902. The system uses this relatively simple technique for determining occluded points within an image. Once recognized, the occluded points can be deleted from the image, filtered, or otherwise processed such that potential artifacts generated by the occluded points are avoided.
i. Object Height Estimation In general, parallax flow vectors vary directly with height and inversely with depth, where depth is the distance of an object from the 15 camera. As such, 3D mos~iç.s generated from aerial views of objects on the ground can be used to estimate the height of the objects above the earth's surface. To elimin~te depth and efitim~te object height from the parametric surface (the earth's surface), the inventive system is adapted to use a characteristic property of aerial view images (her~.in~fter referred to as an 20 aerial view property). Sperific~lly, the depth from a camera to an object is typically much greater than the height of the object from the ground. In nadir aerial images, the depth of all points is apl~loxi..~tely the same so that a weak perspective projection can be used to estimate object height .
VVhereas, in an oblique view, there can be con~i~çrable depth variation 25 across a given image. How~ver, for any single point in an oblique aerial image, the depth of that image point is appro~im~tely the same depth of a virtual 3D point obtained by ~xtPntling a line of sight ray from the camera and intersecting it with the ground plane. Therefore, the system can factor out the depth in the parallax Eqll~tion.~ 4 and 5 by estim~tinF an equation 30 of for ground plane.
This specific application for the inventive image processing system uses the magnitude of the displacement vectors to infer the magnitude of the height and the direction of the flow vectors to infer the sign of the height. The sign of the height indicates whether the point is above or below the plane. For the sequential approach, the magnitude of the displacement vector $\gamma_2 = \sqrt{(x_w - x_1)^2 + (y_w - y_1)^2}$ for the case where the translation is parallel to the image plane is given by Equation 9:

$$ \gamma_2 = \frac{H\, f\, \|T\|}{P_z\, T_\perp} \qquad (9) $$

The magnitude of the displacement vector for the case where $T_z \neq 0$ is given by Equation 10:

$$ \gamma_2 = \frac{H\, T_z}{P_z\, T_\perp}\, \gamma_F \qquad (10) $$

where $\gamma_F = \sqrt{(x_w - x_F)^2 + (y_w - y_F)^2}$ is the distance of the point (x_w, y_w) from the focus of expansion (FOE).

Since $N^\top p_1 = f$, using the aerial view property, Equations 9 and 10 can be reduced to Equation 11:

$$ I = S\, H\, (N_{2x}\, x + N_{2y}\, y + N_{2z}\, f) \qquad (11) $$

where I = γ_2/γ_F is a measurement obtained from the alignment parameters and the estimated parallax vector, and S is a proportionality factor that depends upon the translation vector T_1 and the distance T_⊥. Equation 11 can be rewritten as Equation 12 to solve for the height H of any image point in a scene:

$$ H = \frac{I}{K_1 x + K_2 y + K_3} \qquad (12) $$

where K is an unknown vector having components K_1 = S N_{2x}, K_2 = S N_{2y} and K_3 = f S N_{2z}.

To best utilize the height estimation computation, the intrinsic camera parameters such as focal length and image center should be determined using standard techniques. However, in some applications it is not possible to calibrate the camera parameters nor obtain these parameters a priori. As such, a number of alternative estimation methods have been developed to estimate height when either the focal length and/or image center are known or unknown.

If the focal length and center are both unknown and the heights of at least three points are known (reference points), then Equation 12 can be solved to linearly estimate the vector K.

If focal length and camera center are both known, a normal plane is inferred using Equation 12. This equation relates the quadratic registration parameters to the translation, rotation and normal of the plane, but the translation direction is computed during the quasi-parametric residual estimation. The translation direction together with Equation 10 provides a linear set of eight equations containing six unknowns; namely, the normal vector N and the rotation vector Ω. Since the translation used in Equation 9 is T_2, while the translation computed for the parallax flow vectors is T_1, the quadratic transformation defined by parameters p_1 through p_8 is inverted. Alternatively, the inverse quadratic transformation can be directly estimated by interchanging the two images during parallax vector estimation. The translation vector is determined up to a predefined scaling factor. As such, the height of at least one point is needed to determine the height of each other point in the 3D mosaic or any constituent image (real or virtual).

To determine the height of image points when the focal length is unknown and the image center is known, Equation 12 is solved using the known heights of two points in the scene. Since the focal length is unknown, it is not possible to utilize all eight parameters given by Equation 8. However, the linear parameters p_1 through p_4 do not depend on the focal length and, as such, Equation 10 pertaining to these parameters can be used. On inspecting these equations, when T_z = 0, the normal component N_z cannot be determined. However, the components N_x and N_y can be determined up to a scaling factor. Since the focal length is unknown, this result is also true for the case when T_z ≠ 0. As such, the translation vector is a scaled version of the vector [f T_x, f T_y, T_z]. Therefore, whether T_z is zero or not, the method is capable of determining at least one component of the vector N and subsequently the vector K. The method uses the heights of at least two image points and Equation 12 to determine the vector K and the height of any point in the mosaic.
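For the fully uncalibrated case, Equation 12 rearranges to H·(K_1 x + K_2 y + K_3) = I, which is linear in K; a minimal sketch (assumed Python/NumPy, with illustrative measurement values) solves it from three known heights:

```python
import numpy as np

def estimate_K(points, heights, measurements):
    """Solve Equation 12 linearly for K = (K1, K2, K3) from reference points
    with known heights. points: (N, 2) image coordinates; heights: (N,)
    known H values; measurements: (N,) values of I = gamma2 / gammaF."""
    x, y = points[:, 0], points[:, 1]
    A = heights[:, None] * np.column_stack([x, y, np.ones_like(x)])
    K, *_ = np.linalg.lstsq(A, measurements, rcond=None)
    return K

def height_from_K(point, measurement, K):
    """Height of an arbitrary image point via Equation 12."""
    x, y = point
    return measurement / (K[0] * x + K[1] * y + K[2])

# Three reference points of known height suffice when nothing is calibrated.
pts = np.array([[10.0, 5.0], [40.0, 22.0], [70.0, 60.0]])
H_ref = np.array([1.0, 2.5, 4.9])
I_ref = np.array([0.02, 0.11, 0.59])       # illustrative measurements
K = estimate_K(pts, H_ref, I_ref)
```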
The foregoing technique has been used experimentally to determine the height of a number of objects lying upon a plane (ground plane). FIG. 10 schematically depicts the experimental set-up. Specifically, a camera 1000 was mounted on a tripod 1002 proximate to a flat plane 1004 upon which various shaped objects 1006 were placed. The objects ranged in height from 1 inch to 4.9 inches above the plane. Initially, an image was taken with the camera at an angle of 3~ degrees from horizontal and approximately 69 inches above the plane, i.e., position 1. Next, the camera was moved forward in the y-z plane by approximately 4 inches to position 2. At position 2, a second image was captured. The foregoing height estimation technique was used to register the image from position 2 (inspection image) to that of the image taken at position 1 (reference image) and then determine the heights of the various objects. Without knowing the focal length and camera center, and knowing the heights of three points, the method determined the heights of the entire scene. When compared to the actual height of each object, the largest standard deviation for the estimated height was 0.2 inches.

In a second experiment, the inspection image was generated by moving the camera to position 3, e.g., approximately 4 inches laterally along the x axis from position 1. The method was again used to register the images and generate a height map. The result of this experiment showed a largest standard deviation of 0.27 inches.
The foregoing height estimation processes were discussed in the context of three scenarios; namely, (1) when no camera information is known, (2) when camera focal length is known and camera center is unknown (or vice versa), and (3) when both camera center and focal length are known. In the third scenario, the assumption regarding the aerial view property is not relevant and is not assumed. In the first and second scenarios, the assumption was used. Nonetheless, the foregoing equations can be slightly modified to avoid using this assumption to solve for the height.

The foregoing technique used to find object height within a mosaic of two images can be extended to determine height information in mosaics comprised of more than two images. The multiframe technique uses a batch method that registers all the images to a single reference image and a single reference plane that extends through all the images. Once all the images are aligned along the plane, the method computes the residual parallax displacement vectors between each pair of image frames. The height map is inferred from the sequence of estimated residual parallax displacement vectors.

To accomplish this computation, Equation 11 is rewritten as Equation 13.

$$ \frac{I_i}{S_i} = H\,(K_1 x + K_2 y + K_3) \qquad (13) $$

where I_i and S_i each vary from frame to frame, while the right-hand side of the equation is constant over the entire image sequence. As such, the ratio of I_i and S_i is an invariant quantity across the mosaic. For a sequence of N inspection frames, and given the heights of three image points relative to the image plane, the method solves 3N linear equations containing N+3 unknown values; namely, the N S_i terms and the vector K. First the method finds a solution for the N+3 unknown values and then uses these values to solve Equation 13 to estimate the height of other points in the mosaic. If the focal length and/or image center is known, then the equations can be solved using only one or two known image height values.
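A sketch of the batch solve follows (assumed Python/NumPy; fixing the gauge S_1 = 1 and recovering the remaining S_i from frame-to-frame ratios is one simple reading of the invariance noted above, not necessarily the patent's exact solver):

```python
import numpy as np

def multiframe_heights(I, points, H_ref):
    """Batch height-recovery sketch. I: (N, 3) parallax measurements for N
    inspection frames at 3 reference points; points: (3, 2) image coords;
    H_ref: (3,) known heights. Recovers the per-frame scales S_i from the
    ratios I_i / I_1 (gauge S_1 = 1), then solves Equation 13 linearly
    for K via least squares."""
    S = np.mean(I / I[0], axis=1)               # S_i / S_1, with S_1 = 1
    x, y = points[:, 0], points[:, 1]
    A = (S[:, None, None] * H_ref[None, :, None] *
         np.stack([x, y, np.ones(3)], axis=-1)[None, :, :]).reshape(-1, 3)
    K, *_ = np.linalg.lstsq(A, I.reshape(-1), rcond=None)
    return S, K
```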
The image height computation can be combined with the 3D corrected mosaic routine to produce topographic mapping information. For example, the foregoing height estimation system is used to produce a height map of terrain, and the 3D corrected mosaic routine uses the same images used to generate the height map to produce a 3D corrected mosaic. Thereafter, a new view, e.g., perpendicular to the terrain, can be synthesized and the height map can be corrected (altered to conform to the new view). As such, the height map can be generated from any arbitrary viewpoint of the scene. Consequently, images that are captured at an oblique angle to a scene can be converted into an image of the scene from an orthogonal viewpoint, and height information can be generated from that new viewpoint.
ii. Synthetic View Generation (Tweening)

Generally speaking, given an existing 3D mosaic representing a 3D scene and the pose of a new viewpoint with respect to that mosaic, the system can derive a synthetic image of the scene. As such, by capturing a scene using different cameras having different viewpoints of the scene, the system can synthesize images that are a view of the scene from viewpoints other than those of the cameras.
FIG. 11 depicts a hardware arrangement of camera(s) within a 3D studio 1100 used to generate a 3D mosaic representation of the studio. The studio is merely illustrative of one type of 3D scene that can be recorded by the system. It, of course, can be replaced with any 3D scene. The 3D mosaic generation process, as discussed above, uses a plurality of images of the scene to produce one or more mosaics representing the scene. As such, a two-dimensional grid 1102, defining a plurality of one-foot by one-foot squares, is used to define camera positions within an area proximate to the studio. In general, the specific size of the grid squares, i.e., the number of camera positions, will vary depending upon the complexity of the scene. Also, the shape of the grid will vary depending upon the type of scene being recorded, e.g., some scenes, such as a sporting event, may be circumscribed by the grid.

To produce the images for the mosaic(s), a camera 1104 records an image (or a series of images, e.g., video) from each of the grid squares. The images are typically recorded at various camera pan, tilt, rotate and zoom positions for each grid square to generate the plurality of images from a plurality of viewpoints. The image processing system described above generates a 3D mosaic from the various images recorded at each camera location. Similarly, 3D mosaics are generated for the other camera locations at each of the grid points. For example, 3D mosaics 1106, 1108, 1110 (only the image mosaic portion is depicted) represent the scene as recorded from grid locations 1112, 1114, and 1116. These 3D mosaics are merged to generate a synthetic image 1118 representing the scene as viewed from, for example, location 1120. The image generated at the synthetic viewpoint is not a "real" camera viewpoint, but rather is synthesized from information contained in the various mosaics.
The system of the invention generates the synthetic image using one of two processes. The first process, used to generate a synthetic image view of the scene, warps each of the individual mosaics (e.g., mosaics 1106, 1108, and 1110) to the location of the synthetic viewpoint (e.g., location 1120). Thus, as each 3D mosaic is generated for each grid point, the 3D mosaic is stored in memory (mosaic storage 1122) with respect to its associated grid point. Given a new viewpoint location, the mosaics are recalled from memory to generate a synthetic image representing the scene from the new viewpoint. Depending upon the complexity of the scene being imaged, the system may recall each of the 3D mosaics in memory or some subset of those mosaics, e.g., only recall those mosaics that are nearest the new view location. Using new view generator 1124, each recalled 3D mosaic is warped to the new viewpoint location (e.g., location 1120) and the mosaics are merged to form the new view image 1118. Image merging is typically accomplished by averaging the pixels of the various mosaics used to form the new view image. However, other forms of image merging are known in the art and can be applied to these 3D mosaics. The result generated by the new view generator is a new view (e.g., image 1118) of the scene 1100.
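A minimal sketch of the merging step, assuming each recalled 3D mosaic has already been warped to the new viewpoint and uses NaN to mark pixels it does not cover; the NaN convention is an assumption, as the text specifies only pixel averaging.

```python
import numpy as np

def merge_warped_mosaics(warped):
    """Average a list of equally sized mosaics already warped to the
    synthetic viewpoint; NaN pixels contribute nothing to the average."""
    stack = np.stack(warped)
    counts = np.sum(~np.isnan(stack), axis=0)  # mosaics covering each pixel
    total = np.nansum(stack, axis=0)           # NaNs are skipped in the sum
    return np.where(counts > 0, total / np.maximum(counts, 1), np.nan)
```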
The second process warps each camera view 3D mosaic to the location of a previously generated 3D mosaic. Illustratively, the 3D mosaic 1106 from camera location 1112 is produced first, the mosaic 1108 produced from camera location 1114 is then warped to the coordinate system of location 1112, and lastly, the mosaic 1110 produced at camera location 1116 is warped to the coordinate system of location 1112. As such, a composite 3D mosaic of the scene (not specifically shown) is generated by combining (merging) the various 3D mosaics as viewed from a reference coordinate system (e.g., location 1112). Of course, any coordinate system can be used as the reference coordinate system. Also, depending upon the scene being imaged, less than all the 3D mosaics generated at each grid point may be used to produce the composite mosaic. Thereafter, any synthetic view of the scene can be produced by warping the composite 3D mosaic to the coordinate system of the synthetic view, e.g., location 1120.
The result is a new view (image 1118) of the scene.
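The sketch below illustrates this second process with plane-projective (homography) warps standing in for the full parallax-based warp, which is a simplification; OpenCV is used only as a convenient warping backend, and all names are assumptions.

```python
import cv2
import numpy as np

def composite_then_view(images, H_to_ref, H_ref_to_view, size):
    """Warp every grid mosaic into one reference coordinate system, merge
    them into a composite, then warp the composite to the synthetic view.

    H_to_ref[i]: 3x3 homography taking images[i] to the reference system
    (e.g., that of location 1112); H_ref_to_view: homography from the
    reference system to the synthetic viewpoint (e.g., location 1120);
    size: (width, height) of the working canvas. Single-channel mosaics
    are assumed for brevity.
    """
    w, h = size
    acc = np.zeros((h, w), np.float64)
    cnt = np.zeros((h, w), np.float64)
    for img, H in zip(images, H_to_ref):
        acc += cv2.warpPerspective(img.astype(np.float64), H, size)
        cnt += cv2.warpPerspective(np.ones(img.shape[:2]), H, size)
    composite = np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
    # A single warp then takes the composite to the synthetic viewpoint.
    return cv2.warpPerspective(composite, H_ref_to_view, size)
```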
iii. Scene Change Detection

The system of the invention can be used to monitor a scene through a moving imaging device (e.g., camera) and detect changes in the scene. The system corrects for changes that are due to parallax and viewpoint changes and, therefore, is less sensitive to false scene changes than prior art systems.
Specifically, the system detects change by combining a sequence of images to form a 3D mosaic (or a corrected 3D mosaic). For any image in the sequence of images, or for any new images that are to be added to the 3D mosaic, the system compares the selected image to both a previous and a next image in the sequence using the P-then-P process, the P-and-P process, or pose estimation. The "final" areas of change that represent "real" moving objects are those that appear in both the comparisons to the previous and next images. The system deems all other areas of change to be due to viewpoint changes, i.e., parallax. This simple heuristic operates quite well in eliminating many areas of change which are viewpoint dependent, such as specularities and occlusions.
iv. Other applications

3D mosaics can be used in applications where 2D mosaics presently find use. Specifically, since image redundancy is removed by combining sequences of images into mosaics, mosaics find use in video transmission, video storage and retrieval, and video analysis and manipulation. By using mosaics, less video data need be transmitted, stored, or analyzed. As such, the 3D mosaics generated by the system of the invention will find use in many applications where image information needs to be efficiently manipulated, stored, and/or transmitted.
Although various embodiments which incorporate the teachings of the invention have been shown and described in detail herein, those skilled in the art can readily devise many other varied embodiments that still incorporate these teachings.

Claims (14)

WHAT IS CLAIMED IS:
1. A method of processing a plurality of images to generate a three-dimensional (3D) mosaic of a scene comprising the steps of:
providing a plurality of images of the scene; and registering said images to construct said 3D mosaic containing an image mosaic and a shape mosaic, where said image mosaic represents a panoramic view of the scene and said shape mosaic represents a three dimensional geometry of the scene.
2. The method of claim 1 wherein said registering step further comprises the steps of:
registering each image in said plurality of images along a parametric surface to produce registered images;
determining, in response to said registered images, translation parameters and a parametric motion field useful in aligning the images along the parametric surface; and generating a parallax field representing parallax of objects within the scene.
3. The method of claim 2 further comprising the step of converting said plurality of images into a plurality of multi-resolutional pyramids, where each image pyramid contains a plurality of levels; and wherein said registering and determining steps are iterated over each of said levels within said multi-resolutional pyramids until said plurality of images are registered to a predefined degree of accuracy.
4. The method of claim 1 wherein said registering step further comprises:
simultaneously registering said images in said plurality of images along a parametric surface to produce registered images, determining, in response to said registered images, translation parameters and a parametric motion field useful in aligning the images along the parametric surface, and generating a parallax field representing parallax of objects not lying within said parametric surface.
5. The method of claim 1 further comprising the steps of:
converting said image mosaic and said shape mosaic into multi-resolutional pyramids;
converting a new image into a multi-resolutional pyramid; and determining pose parameters for relating the new image with the image mosaic and the shape mosaic, where the pose parameters contain translation parameters, a planar motion field, and a parallax motion field for the new image.
6. The method of claim 5 further comprising the steps of:
providing an existing 3D mosaic;
determining pose parameters for a new image with respect to said existing 3D mosaic;
warping said existing 3D mosaic to image coordinates of said new image to create a synthetic image, where said synthetic image represents a view of the 3D mosaic from the coordinates of the new image; and merging said synthetic image into said new image to produce a new 3D mosaic that is a combination of said new image and said existing 3D
mosaic.
providing a next image that sequentially follows said new image; and detecting changes between said new image, said existing 3D mosaic, and said next image, where said changes represent motion within the scene without detecting parallax due to viewpoint change as said motion.
7. The method of claim 1 further comprising the steps of:
detecting points within said 3D mosaic that are occluded within the scene by objects in the scene; and image processing the detected occluded points such that said occluded points do not produce artifacts in said 3D mosaic.
8. The method of claim 1 further comprising the step of:
estimating a height of points within said 3D mosaic relative to said parametric surface, where said height of said points form a height map that represents the height of object points within said scene.
9. The method of claim 1 further comprising the steps of:
providing a plurality of 3D mosaics representing a scene from different viewpoints, where a 3D mosaic has been generated at each viewpoint;
warping said plurality of 3D mosaics to a reference coordinate system;
merging said plurality of 3D mosaics to form a composite 3D mosaic;
providing coordinates for a new viewpoint of said scene;
determining parameters to relate said new viewpoint coordinates to said composite 3D mosaic; and warping said composite 3D mosaic to said viewpoint coordinates to create a synthetic image, where said synthetic image represents a new view of the composite 3D mosaic taken from the new viewpoint.
10. The method of claim 1 further comprising the steps of:
providing a plurality of 3D mosaics representing a scene from different viewpoints, where a 3D mosaic has been generated at each viewpoint;
providing coordinates for a new viewpoint of said scene;
determining parameters to relate said new viewpoint coordinates to a plurality of the 3D mosaics;
warping said plurality of 3D mosaics to said viewpoint coordinates to create a synthetic image, where said synthetic image represents a new view of the 3D mosaic taken from the new viewpoint; and merging said plurality of 3D mosaics to form said synthetic image.
11. The method of claim 1 wherein said registering step further comprises the steps of:
performing a plane-then-parallax process including the steps of registering each image in said plurality of images along a parametric surface to produce initially registered images; determining, in response to said initially registered images, initial translation parameters and an initial parametric motion field useful in initially aligning the images along the parametric surface; and generating an initial parallax field representing parallax of objects within the scene; and simultaneously registering, using said initial translation parameters, initial parametric motion field and initial parallax field, said images in said plurality of images along said parametric surface to produce final registered images, determining, in response to said final registered images, final translation parameters and a final parametric motion field useful in aligning the images along the parametric surface, and generating a final parallax field representing parallax of objects within the scene.
12. A system for generating a three-dimensional (3D) mosaic of a scene from a plurality of images of the scene, comprising:
means for storing said plurality of images;
a registration processor, connected to said storing means, for registering said images to construct said 3D mosaic containing an image mosaic and a shape mosaic, where said image mosaic represents a panoramic view of the scene and said shape mosaic represents a three-dimensional geometry of the scene.
13. The system of claim 12 wherein said registration processor further comprises:

a plane-then-parallax registration processor for aligning said images along a parametric surface that extends through the plurality of images to produce translation parameters and a parametric motion field used to align the images within the image mosaic and then for determining a parallax field representing objects within the scene.
14. The system of claim 12 wherein said registration processor further comprises:
a plane-and-parallax registration processor for aligning said images along a parametric surface that extends through the plurality of images to produce translation parameters and a parametric motion field used to align the images within the image mosaic and for determining a parallax field representing objects within the scene.
CA002225400A 1995-06-22 1996-06-24 Method and system for image combination using a parallax-based technique Abandoned CA2225400A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/493,632 1995-06-22
US08/493,632 US5963664A (en) 1995-06-22 1995-06-22 Method and system for image combination using a parallax-based technique

Publications (1)

Publication Number Publication Date
CA2225400A1 true CA2225400A1 (en) 1997-01-09

Family

ID=23961056

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002225400A Abandoned CA2225400A1 (en) 1995-06-22 1996-06-24 Method and system for image combination using a parallax-based technique

Country Status (6)

Country Link
US (1) US5963664A (en)
EP (1) EP0842498A4 (en)
JP (1) JPH11509946A (en)
KR (1) KR100450469B1 (en)
CA (1) CA2225400A1 (en)
WO (1) WO1997001135A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10552981B2 (en) 2017-01-16 2020-02-04 Shapetrace Inc. Depth camera 3D pose estimation using 3D CAD models

Families Citing this family (226)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6038349A (en) 1995-09-13 2000-03-14 Ricoh Company, Ltd. Simultaneous registration of multiple image fragments
US6389179B1 (en) * 1996-05-28 2002-05-14 Canon Kabushiki Kaisha Image combining apparatus using a combining algorithm selected based on an image sensing condition corresponding to each stored image
EP0918439B1 (en) * 1996-07-18 2008-12-24 SANYO ELECTRIC Co., Ltd. Device for converting two-dimensional video into three-dimensional video
WO1998029834A1 (en) * 1996-12-30 1998-07-09 Sharp Kabushiki Kaisha Sprite-based video coding system
US6222583B1 (en) * 1997-03-27 2001-04-24 Nippon Telegraph And Telephone Corporation Device and system for labeling sight images
US5946645A (en) * 1997-04-09 1999-08-31 National Research Council Of Canada Three dimensional imaging method and device
US6597818B2 (en) 1997-05-09 2003-07-22 Sarnoff Corporation Method and apparatus for performing geo-spatial registration of imagery
US6512857B1 (en) * 1997-05-09 2003-01-28 Sarnoff Corporation Method and apparatus for performing geo-spatial registration
JPH114398A (en) * 1997-06-11 1999-01-06 Hitachi Ltd Digital wide camera
US6501468B1 (en) * 1997-07-02 2002-12-31 Sega Enterprises, Ltd. Stereoscopic display device and recording media recorded program for image processing of the display device
US6061468A (en) * 1997-07-28 2000-05-09 Compaq Computer Corporation Method for reconstructing a three-dimensional object from a closed-loop sequence of images taken by an uncalibrated camera
US20020113865A1 (en) * 1997-09-02 2002-08-22 Kotaro Yano Image processing method and apparatus
US6466701B1 (en) * 1997-09-10 2002-10-15 Ricoh Company, Ltd. System and method for displaying an image indicating a positional relation between partially overlapping images
US7409111B2 (en) * 1997-10-21 2008-08-05 Canon Kabushiki Kaisha Image synthesizing system for automatically retrieving and synthesizing images to be synthesized
JP3567066B2 (en) * 1997-10-31 2004-09-15 株式会社日立製作所 Moving object combination detecting apparatus and method
US6396961B1 (en) * 1997-11-12 2002-05-28 Sarnoff Corporation Method and apparatus for fixating a camera on a target point using image alignment
AU761950B2 (en) 1998-04-02 2003-06-12 Kewazinga Corp. A navigable telepresence method and system utilizing an array of cameras
US6522325B1 (en) 1998-04-02 2003-02-18 Kewazinga Corp. Navigable telepresence method and system utilizing an array of cameras
US6324299B1 (en) * 1998-04-03 2001-11-27 Cognex Corporation Object image search using sub-models
US6192393B1 (en) * 1998-04-07 2001-02-20 Mgi Software Corporation Method and system for panorama viewing
US6333749B1 (en) * 1998-04-17 2001-12-25 Adobe Systems, Inc. Method and apparatus for image assisted modeling of three-dimensional scenes
US7015951B1 (en) * 1998-05-08 2006-03-21 Sony Corporation Picture generating apparatus and picture generating method
US6198852B1 (en) * 1998-06-01 2001-03-06 Yeda Research And Development Co., Ltd. View synthesis from plural images using a trifocal tensor data structure in a multi-view parallax geometry
EP1418766A3 (en) * 1998-08-28 2010-03-24 Imax Corporation Method and apparatus for processing images
US6269175B1 (en) * 1998-08-28 2001-07-31 Sarnoff Corporation Method and apparatus for enhancing regions of aligned images using flow estimation
JP4146938B2 (en) * 1998-09-08 2008-09-10 オリンパス株式会社 Panorama image synthesis apparatus and recording medium storing panorama image synthesis program
WO2000039995A2 (en) * 1998-09-17 2000-07-06 Yissum Research Development Company System and method for generating and displaying panoramic images and movies
US6681056B1 (en) * 1999-03-30 2004-01-20 International Business Machines Corporation Method and system for digital image acquisition and continuous zoom display from multiple resolutional views using a heterogeneous image pyramid representation
US6891561B1 (en) * 1999-03-31 2005-05-10 Vulcan Patents Llc Providing visual context for a mobile active visual display of a panoramic region
US6424351B1 (en) * 1999-04-21 2002-07-23 The University Of North Carolina At Chapel Hill Methods and systems for producing three-dimensional images using relief textures
US6661913B1 (en) 1999-05-05 2003-12-09 Microsoft Corporation System and method for determining structure and motion using multiples sets of images from different projection models for object modeling
US7620909B2 (en) 1999-05-12 2009-11-17 Imove Inc. Interactive image seamer for panoramic images
JP3867883B2 (en) * 1999-06-01 2007-01-17 株式会社リコー Image composition processing method, image composition processing apparatus, and recording medium
EP1208526A2 (en) * 1999-06-28 2002-05-29 Cognitens, Ltd Aligning a locally-reconstructed three-dimensional object to a global coordinate system
US6587601B1 (en) 1999-06-29 2003-07-01 Sarnoff Corporation Method and apparatus for performing geo-spatial registration using a Euclidean representation
JP2001024874A (en) * 1999-07-07 2001-01-26 Minolta Co Ltd Device and method for processing picture and recording medium recording picture processing program
US6198505B1 (en) * 1999-07-19 2001-03-06 Lockheed Martin Corp. High resolution, high speed digital camera
CA2278108C (en) 1999-07-20 2008-01-29 The University Of Western Ontario Three-dimensional measurement method and apparatus
US7098914B1 (en) * 1999-07-30 2006-08-29 Canon Kabushiki Kaisha Image synthesis method, image synthesis apparatus, and storage medium
US7015954B1 (en) * 1999-08-09 2006-03-21 Fuji Xerox Co., Ltd. Automatic video system using multiple cameras
US6507665B1 (en) * 1999-08-25 2003-01-14 Eastman Kodak Company Method for creating environment map containing information extracted from stereo image pairs
US7502027B1 (en) * 1999-09-13 2009-03-10 Solidworks Corporation Electronic drawing viewer
US7477284B2 (en) * 1999-09-16 2009-01-13 Yissum Research Development Company Of The Hebrew University Of Jerusalem System and method for capturing and viewing stereoscopic panoramic images
US6366689B1 (en) * 1999-10-14 2002-04-02 Asti, Inc. 3D profile analysis for surface contour inspection
JP4484288B2 (en) 1999-12-03 2010-06-16 富士機械製造株式会社 Image processing method and image processing system
US6516087B1 (en) 2000-01-10 2003-02-04 Sensar, Inc. Method for real time correlation of stereo images
US6798923B1 (en) * 2000-02-04 2004-09-28 Industrial Technology Research Institute Apparatus and method for providing panoramic images
US6690840B1 (en) * 2000-02-08 2004-02-10 Tektronix, Inc. Image alignment with global translation and linear stretch
US7262778B1 (en) 2000-02-11 2007-08-28 Sony Corporation Automatic color adjustment of a template design
US8407595B1 (en) 2000-02-11 2013-03-26 Sony Corporation Imaging service for automating the display of images
US7810037B1 (en) 2000-02-11 2010-10-05 Sony Corporation Online story collaboration
US7522186B2 (en) * 2000-03-07 2009-04-21 L-3 Communications Corporation Method and apparatus for providing immersive surveillance
US6741757B1 (en) * 2000-03-07 2004-05-25 Microsoft Corporation Feature correspondence between images using an image pyramid
EP1330783A2 (en) * 2000-03-20 2003-07-30 Cognitens, Ltd System and method for globally aligning individual views based on non-accurate repeatable positioning devices
US6834128B1 (en) * 2000-06-16 2004-12-21 Hewlett-Packard Development Company, L.P. Image mosaicing system and method adapted to mass-market hand-held digital cameras
US6788333B1 (en) * 2000-07-07 2004-09-07 Microsoft Corporation Panoramic video
US6701030B1 (en) * 2000-07-07 2004-03-02 Microsoft Corporation Deghosting panoramic video
JP2002083285A (en) * 2000-07-07 2002-03-22 Matsushita Electric Ind Co Ltd Image compositing device and image compositing method
US6704460B1 (en) * 2000-07-10 2004-03-09 The United States Of America As Represented By The Secretary Of The Army Remote mosaic imaging system having high-resolution, wide field-of-view and low bandwidth
FR2811849B1 (en) * 2000-07-17 2002-09-06 Thomson Broadcast Systems STEREOSCOPIC CAMERA PROVIDED WITH MEANS TO FACILITATE THE ADJUSTMENT OF ITS OPTO-MECHANICAL PARAMETERS
GB0019399D0 (en) * 2000-08-07 2001-08-08 Bae Systems Plc Height measurement enhancement
US7095905B1 (en) 2000-09-08 2006-08-22 Adobe Systems Incorporated Merging images to form a panoramic image
JP2002224982A (en) * 2000-12-01 2002-08-13 Yaskawa Electric Corp Thin substrate transfer robot and detection method of the same
WO2002045003A1 (en) * 2000-12-01 2002-06-06 Imax Corporation Techniques and systems for developing high-resolution imagery
US6650396B2 (en) 2000-12-04 2003-11-18 Hytechnology, Inc. Method and processor for stereo cylindrical imaging
US7447754B2 (en) 2000-12-06 2008-11-04 Microsoft Corporation Methods and systems for processing multi-media editing projects
US6882891B2 (en) 2000-12-06 2005-04-19 Microsoft Corporation Methods and systems for mixing digital audio signals
US6774919B2 (en) * 2000-12-06 2004-08-10 Microsoft Corporation Interface and related methods for reducing source accesses in a development system
US7103677B2 (en) * 2000-12-06 2006-09-05 Microsoft Corporation Methods and systems for efficiently processing compressed and uncompressed media content
US6959438B2 (en) * 2000-12-06 2005-10-25 Microsoft Corporation Interface and related methods for dynamically generating a filter graph in a development system
US6983466B2 (en) 2000-12-06 2006-01-03 Microsoft Corporation Multimedia project processing systems and multimedia project processing matrix systems
SE519884C2 (en) * 2001-02-02 2003-04-22 Scalado Ab Method for zooming and producing a zoomable image
US7046831B2 (en) * 2001-03-09 2006-05-16 Tomotherapy Incorporated System and method for fusion-aligned reprojection of incomplete data
US8958654B1 (en) * 2001-04-25 2015-02-17 Lockheed Martin Corporation Method and apparatus for enhancing three-dimensional imagery data
US7006707B2 (en) * 2001-05-03 2006-02-28 Adobe Systems Incorporated Projecting images onto a surface
US20030030638A1 (en) * 2001-06-07 2003-02-13 Karl Astrom Method and apparatus for extracting information from a target area within a two-dimensional graphical object in an image
US20030012410A1 (en) * 2001-07-10 2003-01-16 Nassir Navab Tracking and pose estimation for augmented reality using real features
US7103236B2 (en) 2001-08-28 2006-09-05 Adobe Systems Incorporated Methods and apparatus for shifting perspective in a composite image
JP3720747B2 (en) * 2001-09-28 2005-11-30 キヤノン株式会社 Image forming system, image forming apparatus, and image forming method
JP3776787B2 (en) * 2001-11-02 2006-05-17 Nec東芝スペースシステム株式会社 3D database generation system
KR100433625B1 (en) * 2001-11-17 2004-06-02 학교법인 포항공과대학교 Apparatus for reconstructing multiview image using stereo image and depth map
KR100837776B1 (en) * 2001-12-24 2008-06-13 주식회사 케이티 Apparatus and Method for Converting 2D Images to 3D Object
US20030152264A1 (en) * 2002-02-13 2003-08-14 Perkins Christopher H. Method and system for processing stereoscopic images
AU2003209553A1 (en) * 2002-03-13 2003-09-22 Imax Corporation Systems and methods for digitally re-mastering or otherwise modifying motion pictures or other image sequences data
US7006706B2 (en) * 2002-04-12 2006-02-28 Hewlett-Packard Development Company, L.P. Imaging apparatuses, mosaic image compositing methods, video stitching methods and edgemap generation methods
JP4211292B2 (en) * 2002-06-03 2009-01-21 ソニー株式会社 Image processing apparatus, image processing method, program, and program recording medium
US7321386B2 (en) * 2002-08-01 2008-01-22 Siemens Corporate Research, Inc. Robust stereo-driven video-based surveillance
US8994822B2 (en) 2002-08-28 2015-03-31 Visual Intelligence Lp Infrastructure mapping system and method
US7893957B2 (en) 2002-08-28 2011-02-22 Visual Intelligence, LP Retinal array compound camera system
US8483960B2 (en) 2002-09-20 2013-07-09 Visual Intelligence, LP Self-calibrated, remote imaging and data processing system
US6928194B2 (en) * 2002-09-19 2005-08-09 M7 Visual Intelligence, Lp System for mosaicing digital ortho-images
USRE49105E1 (en) 2002-09-20 2022-06-14 Vi Technologies, Llc Self-calibrated, remote imaging and data processing system
GB0222211D0 (en) * 2002-09-25 2002-10-30 Fortkey Ltd Imaging and measurement system
JP3744002B2 (en) * 2002-10-04 2006-02-08 ソニー株式会社 Display device, imaging device, and imaging / display system
US7424133B2 (en) 2002-11-08 2008-09-09 Pictometry International Corporation Method and apparatus for capturing, geolocating and measuring oblique images
US7746379B2 (en) * 2002-12-31 2010-06-29 Asset Intelligence, Llc Sensing cargo using an imaging device
US20040196282A1 (en) * 2003-02-14 2004-10-07 Oh Byong Mok Modeling and editing image panoramas
US7164800B2 (en) * 2003-02-19 2007-01-16 Eastman Kodak Company Method and system for constraint-consistent motion estimation
CA2529044C (en) * 2003-06-13 2011-08-09 Universite Laval Three-dimensional modeling from arbitrary three-dimensional curves
US7633520B2 (en) * 2003-06-19 2009-12-15 L-3 Communications Corporation Method and apparatus for providing a scalable multi-camera distributed video processing and visualization surveillance system
JP4280656B2 (en) * 2003-06-20 2009-06-17 キヤノン株式会社 Image display device and image display method thereof
US7259778B2 (en) * 2003-07-01 2007-08-21 L-3 Communications Corporation Method and apparatus for placing sensors using 3D models
SE0302065D0 (en) * 2003-07-14 2003-07-14 Stefan Carlsson Video - method and apparatus
US7190826B2 (en) * 2003-09-16 2007-03-13 Electrical Geodesics, Inc. Measuring the location of objects arranged on a surface, using multi-camera photogrammetry
US7463778B2 (en) * 2004-01-30 2008-12-09 Hewlett-Packard Development Company, L.P Motion estimation for compressing multiple view images
US7421112B2 (en) * 2004-03-12 2008-09-02 General Electric Company Cargo sensing system
US7983835B2 (en) 2004-11-03 2011-07-19 Lagassey Paul J Modular intelligent transportation system
FR2868168B1 (en) * 2004-03-26 2006-09-15 Cnes Epic FINE MATCHING OF STEREOSCOPIC IMAGES AND DEDICATED INSTRUMENT WITH A LOW STEREOSCOPIC COEFFICIENT
EP1584896A1 (en) * 2004-04-06 2005-10-12 Saab Ab Passive measurement of terrain parameters
EP1587035A1 (en) * 2004-04-14 2005-10-19 Koninklijke Philips Electronics N.V. Ghost artifact reduction for rendering 2.5D graphics
US7187809B2 (en) * 2004-06-10 2007-03-06 Sarnoff Corporation Method and apparatus for aligning video to three-dimensional point clouds
DE102004041115A1 (en) * 2004-08-24 2006-03-09 Tbs Holding Ag Airline passenger`s finger-or face characteristics recording method, involves recording objects by two sensors, where all points of surface to be displayed are represented in two different directions in digital two-dimensional pictures
US7606395B2 (en) 2004-08-25 2009-10-20 Tbs Holding Ag Method and arrangement for optical recording of data
US7342586B2 (en) * 2004-09-13 2008-03-11 Nbor Corporation System and method for creating and playing a tweening animation using a graphic directional indicator
JP2006101224A (en) * 2004-09-29 2006-04-13 Toshiba Corp Image generating apparatus, method, and program
US20060132482A1 (en) * 2004-11-12 2006-06-22 Oh Byong M Method for inter-scene transitions
US7363157B1 (en) 2005-02-10 2008-04-22 Sarnoff Corporation Method and apparatus for performing wide area terrain mapping
WO2007094765A2 (en) * 2005-02-10 2007-08-23 Sarnoff Corporation Method and apparatus for performing wide area terrain mapping
US8645870B2 (en) 2005-03-31 2014-02-04 Adobe Systems Incorporated Preview cursor for image editing
US7680307B2 (en) * 2005-04-05 2010-03-16 Scimed Life Systems, Inc. Systems and methods for image segmentation with a multi-stage classifier
JP4102386B2 (en) * 2005-05-31 2008-06-18 三菱電機株式会社 3D information restoration device
US7499586B2 (en) * 2005-10-04 2009-03-03 Microsoft Corporation Photographing big things
US7471292B2 (en) * 2005-11-15 2008-12-30 Sharp Laboratories Of America, Inc. Virtual view specification and synthesis in free viewpoint
US7660464B1 (en) 2005-12-22 2010-02-09 Adobe Systems Incorporated User interface for high dynamic range merge image selection
US8842730B2 (en) 2006-01-27 2014-09-23 Imax Corporation Methods and systems for digitally re-mastering of 2D and 3D motion pictures for exhibition with enhanced visual quality
WO2007148219A2 (en) 2006-06-23 2007-12-27 Imax Corporation Methods and systems for converting 2d motion pictures for stereoscopic 3d exhibition
US7912319B2 (en) * 2006-08-28 2011-03-22 Avago Technologies General Ip (Singapore) Pte. Ltd. Systems and methods for panoramic image construction using small sensor array
US7873238B2 (en) * 2006-08-30 2011-01-18 Pictometry International Corporation Mosaic oblique images and methods of making and using same
US8019180B2 (en) * 2006-10-31 2011-09-13 Hewlett-Packard Development Company, L.P. Constructing arbitrary-plane and multi-arbitrary-plane mosaic composite images from a multi-imager
DE102006052779A1 (en) * 2006-11-09 2008-05-15 Bayerische Motoren Werke Ag Method for generating an overall image of the surroundings of a motor vehicle
US20080118159A1 (en) * 2006-11-21 2008-05-22 Robert Wendell Sharps Gauge to measure distortion in glass sheet
US7865285B2 (en) * 2006-12-27 2011-01-04 Caterpillar Inc Machine control system and method
US8593518B2 (en) 2007-02-01 2013-11-26 Pictometry International Corp. Computer system for continuous oblique panning
US8520079B2 (en) 2007-02-15 2013-08-27 Pictometry International Corp. Event multiplexer for managing the capture of images
US9262818B2 (en) 2007-05-01 2016-02-16 Pictometry International Corp. System for detecting image abnormalities
US8385672B2 (en) 2007-05-01 2013-02-26 Pictometry International Corp. System for detecting image abnormalities
US7991226B2 (en) 2007-10-12 2011-08-02 Pictometry International Corporation System and process for color-balancing a series of oblique images
DE102007053812A1 (en) * 2007-11-12 2009-05-14 Robert Bosch Gmbh Video surveillance system configuration module, configuration module monitoring system, video surveillance system configuration process, and computer program
US8531472B2 (en) 2007-12-03 2013-09-10 Pictometry International Corp. Systems and methods for rapid three-dimensional modeling with real façade texture
WO2009089128A1 (en) * 2008-01-04 2009-07-16 3M Innovative Properties Company Hierarchical processing using image deformation
TW200937348A (en) * 2008-02-19 2009-09-01 Univ Nat Chiao Tung Calibration method for image capturing device
US8675068B2 (en) * 2008-04-11 2014-03-18 Nearmap Australia Pty Ltd Systems and methods of capturing large area images in detail including cascaded cameras and/or calibration features
US8497905B2 (en) 2008-04-11 2013-07-30 nearmap australia pty ltd. Systems and methods of capturing large area images in detail including cascaded cameras and/or calibration features
US8866920B2 (en) 2008-05-20 2014-10-21 Pelican Imaging Corporation Capturing and processing of images using monolithic camera array with heterogeneous imagers
US11792538B2 (en) 2008-05-20 2023-10-17 Adeia Imaging Llc Capturing and processing of images including occlusions focused on an image sensor by a lens stack array
US8588547B2 (en) 2008-08-05 2013-11-19 Pictometry International Corp. Cut-line steering methods for forming a mosaic image of a geographical area
US8401222B2 (en) 2009-05-22 2013-03-19 Pictometry International Corp. System and process for roof measurement using aerial imagery
EP2261827B1 (en) * 2009-06-10 2015-04-08 Dassault Systèmes Process, program and apparatus for displaying an assembly of objects of a PLM database
WO2011068582A2 (en) * 2009-09-18 2011-06-09 Logos Technologies, Inc. Systems and methods for persistent surveillance and large volume data streaming
JP2011082918A (en) * 2009-10-09 2011-04-21 Sony Corp Image processing device and method, and program
US9330494B2 (en) 2009-10-26 2016-05-03 Pictometry International Corp. Method for the automatic material classification and texture simulation for 3D models
EP2502115A4 (en) 2009-11-20 2013-11-06 Pelican Imaging Corp Capturing and processing of images using monolithic camera array with heterogeneous imagers
JP2011146762A (en) * 2010-01-12 2011-07-28 Nippon Hoso Kyokai <Nhk> Solid model generator
CN101789123B (en) * 2010-01-27 2011-12-07 中国科学院半导体研究所 Method for creating distance map based on monocular camera machine vision
US9237294B2 (en) 2010-03-05 2016-01-12 Sony Corporation Apparatus and method for replacing a broadcasted advertisement based on both heuristic information and attempts in altering the playback of the advertisement
US20130194414A1 (en) * 2010-03-05 2013-08-01 Frédéric Poirier Verification system for prescription packaging and method
US8917929B2 (en) * 2010-03-19 2014-12-23 Lapis Semiconductor Co., Ltd. Image processing apparatus, method, program, and recording medium
DE102010016109A1 (en) 2010-03-24 2011-09-29 Tst Biometrics Holding Ag Method for detecting biometric features
JP5641200B2 (en) * 2010-05-28 2014-12-17 ソニー株式会社 Image processing apparatus, image processing method, image processing program, and recording medium
US8477190B2 (en) 2010-07-07 2013-07-02 Pictometry International Corp. Real-time moving platform management system
CN103080976B (en) 2010-08-19 2015-08-05 日产自动车株式会社 Three-dimensional body pick-up unit and three-dimensional body detection method
JP2012085252A (en) * 2010-09-17 2012-04-26 Panasonic Corp Image generation device, image generation method, program, and recording medium with program recorded thereon
FR2965616B1 (en) * 2010-10-01 2012-10-05 Total Sa METHOD OF IMAGING A LONGITUDINAL DRIVE
US9832528B2 (en) 2010-10-21 2017-11-28 Sony Corporation System and method for merging network-based content with broadcasted programming content
JP2012118666A (en) * 2010-11-30 2012-06-21 Iwane Laboratories Ltd Three-dimensional map automatic generation device
US8878950B2 (en) 2010-12-14 2014-11-04 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using super-resolution processes
US8823732B2 (en) 2010-12-17 2014-09-02 Pictometry International Corp. Systems and methods for processing images with edge detection and snap-to feature
WO2012115009A1 (en) * 2011-02-21 2012-08-30 日産自動車株式会社 Periodic stationary object detection device and periodic stationary object detection method
CA2835290C (en) 2011-06-10 2020-09-08 Pictometry International Corp. System and method for forming a video stream containing gis data in real-time
WO2013049699A1 (en) 2011-09-28 2013-04-04 Pelican Imaging Corporation Systems and methods for encoding and decoding light field image files
KR20130036593A (en) * 2011-10-04 2013-04-12 삼성디스플레이 주식회사 3d display apparatus prevneting image overlapping
EP2817955B1 (en) 2012-02-21 2018-04-11 FotoNation Cayman Limited Systems and methods for the manipulation of captured light field image data
US9153063B2 (en) * 2012-02-29 2015-10-06 Analytical Graphics Inc. System and method for data rendering and transformation images in 2- and 3- dimensional images
US9183538B2 (en) 2012-03-19 2015-11-10 Pictometry International Corp. Method and system for quick square roof reporting
WO2013146269A1 (en) * 2012-03-29 2013-10-03 シャープ株式会社 Image capturing device, image processing method, and program
KR20150023907A (en) 2012-06-28 2015-03-05 펠리칸 이매징 코포레이션 Systems and methods for detecting defective camera arrays, optic arrays, and sensors
US20140002674A1 (en) 2012-06-30 2014-01-02 Pelican Imaging Corporation Systems and Methods for Manufacturing Camera Modules Using Active Alignment of Lens Stack Arrays and Sensors
JP2014027448A (en) * 2012-07-26 2014-02-06 Sony Corp Information processing apparatus, information processing metho, and program
KR102111181B1 (en) 2012-08-21 2020-05-15 포토내이션 리미티드 Systems and methods for parallax detection and correction in images captured using array cameras
US20140055632A1 (en) 2012-08-23 2014-02-27 Pelican Imaging Corporation Feature based high resolution motion estimation from low resolution images captured using an array source
WO2014052974A2 (en) 2012-09-28 2014-04-03 Pelican Imaging Corporation Generating images from light fields utilizing virtual viewpoints
CN103942754B (en) * 2013-01-18 2017-07-04 深圳市腾讯计算机系统有限公司 Panoramic picture complementing method and device
EP2962309B1 (en) 2013-02-26 2022-02-16 Accuray, Inc. Electromagnetically actuated multi-leaf collimator
US8866912B2 (en) 2013-03-10 2014-10-21 Pelican Imaging Corporation System and methods for calibration of an array camera using a single captured image
US9881163B2 (en) 2013-03-12 2018-01-30 Pictometry International Corp. System and method for performing sensitive geo-spatial processing in non-sensitive operator environments
US9244272B2 (en) 2013-03-12 2016-01-26 Pictometry International Corp. Lidar system producing multiple scan paths and method of making and using same
US9124831B2 (en) 2013-03-13 2015-09-01 Pelican Imaging Corporation System and methods for calibration of an array camera
WO2014159779A1 (en) 2013-03-14 2014-10-02 Pelican Imaging Corporation Systems and methods for reducing motion blur in images or video in ultra low light with array cameras
US9100586B2 (en) 2013-03-14 2015-08-04 Pelican Imaging Corporation Systems and methods for photometric normalization in array cameras
US9275080B2 (en) 2013-03-15 2016-03-01 Pictometry International Corp. System and method for early access to captured images
US9497429B2 (en) 2013-03-15 2016-11-15 Pelican Imaging Corporation Extended color processing on pelican array cameras
JP2016524125A (en) 2013-03-15 2016-08-12 ペリカン イメージング コーポレイション System and method for stereoscopic imaging using a camera array
US10122993B2 (en) 2013-03-15 2018-11-06 Fotonation Limited Autofocus system for a conventional camera that uses depth information from an array camera
US9445003B1 (en) 2013-03-15 2016-09-13 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information
US9753950B2 (en) 2013-03-15 2017-09-05 Pictometry International Corp. Virtual property reporting for automatic structure detection
TWI496109B (en) * 2013-07-12 2015-08-11 Vivotek Inc Image processor and image merging method thereof
US9898856B2 (en) 2013-09-27 2018-02-20 Fotonation Cayman Limited Systems and methods for depth-assisted perspective distortion correction
WO2015074078A1 (en) 2013-11-18 2015-05-21 Pelican Imaging Corporation Estimating depth from projected texture using camera arrays
WO2015081279A1 (en) 2013-11-26 2015-06-04 Pelican Imaging Corporation Array camera configurations incorporating multiple constituent array cameras
US9612598B2 (en) 2014-01-10 2017-04-04 Pictometry International Corp. Unmanned aircraft structure evaluation system and method
US10111714B2 (en) 2014-01-27 2018-10-30 Align Technology, Inc. Adhesive objects for improving image registration of intraoral images
US9292913B2 (en) 2014-01-31 2016-03-22 Pictometry International Corp. Augmented three dimensional point collection of vertical structures
WO2015120188A1 (en) 2014-02-08 2015-08-13 Pictometry International Corp. Method and system for displaying room interiors on a floor plan
WO2015134996A1 (en) 2014-03-07 2015-09-11 Pelican Imaging Corporation System and methods for depth regularization and semiautomatic interactive matting using rgb-d images
EP3467776A1 (en) * 2014-09-29 2019-04-10 Fotonation Cayman Limited Systems and methods for dynamic calibration of array cameras
US10008027B1 (en) * 2014-10-20 2018-06-26 Henry Harlyn Baker Techniques for determining a three-dimensional representation of a surface of an object from a set of images
US10292369B1 (en) * 2015-06-30 2019-05-21 Vium, Inc. Non-contact detection of physiological characteristics of experimental animals
AU2016377491A1 (en) * 2015-12-23 2018-06-21 Mayo Foundation For Medical Education And Research System and method for integrating three dimensional video and galvanic vestibular stimulation
US10402676B2 (en) 2016-02-15 2019-09-03 Pictometry International Corp. Automated system and methodology for feature extraction
US10671648B2 (en) 2016-02-22 2020-06-02 Eagle View Technologies, Inc. Integrated centralized property database systems and methods
WO2017218834A1 (en) 2016-06-17 2017-12-21 Kerstein Dustin System and method for capturing and viewing panoramic images having motion parralax depth perception without images stitching
CN106162143B (en) * 2016-07-04 2018-11-09 腾讯科技(深圳)有限公司 parallax fusion method and device
US10430994B1 (en) 2016-11-07 2019-10-01 Henry Harlyn Baker Techniques for determining a three-dimensional textured representation of a surface of an object from a set of images with varying formats
US10477064B2 (en) * 2017-08-21 2019-11-12 Gopro, Inc. Image stitching with electronic rolling shutter correction
WO2020097128A1 (en) * 2018-11-06 2020-05-14 Flir Commercial Systems, Inc. Automatic co-registration of thermal and visible image pairs
US11830163B2 (en) 2019-07-01 2023-11-28 Geomagical Labs, Inc. Method and system for image generation
DE112020004391T5 (en) 2019-09-17 2022-06-02 Boston Polarimetrics, Inc. SYSTEMS AND METHODS FOR SURFACE MODELING USING POLARIZATION FEATURES
JP2022552833A (en) 2019-10-07 2022-12-20 ボストン ポーラリメトリックス,インコーポレイティド System and method for polarized surface normal measurement
WO2021108002A1 (en) 2019-11-30 2021-06-03 Boston Polarimetrics, Inc. Systems and methods for transparent object segmentation using polarization cues
KR20220132620A (en) 2020-01-29 2022-09-30 인트린식 이노베이션 엘엘씨 Systems and methods for characterizing object pose detection and measurement systems
US11797863B2 (en) 2020-01-30 2023-10-24 Intrinsic Innovation Llc Systems and methods for synthesizing data for training statistical models on different imaging modalities including polarized images
US11953700B2 (en) 2020-05-27 2024-04-09 Intrinsic Innovation Llc Multi-aperture polarization optical systems using beam splitters
US11190748B1 (en) 2020-11-20 2021-11-30 Rockwell Collins, Inc. Dynamic parallax correction for visual sensor fusion
US11954886B2 (en) 2021-04-15 2024-04-09 Intrinsic Innovation Llc Systems and methods for six-degree of freedom pose estimation of deformable objects
US11290658B1 (en) 2021-04-15 2022-03-29 Boston Polarimetrics, Inc. Systems and methods for camera exposure control
US11689813B2 (en) 2021-07-01 2023-06-27 Intrinsic Innovation Llc Systems and methods for high dynamic range imaging using crossed polarizers

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4797942A (en) * 1987-03-02 1989-01-10 General Electric Pyramid processor for building large-area, high-resolution image by parts
JPH0766445B2 (en) * 1988-09-09 1995-07-19 工業技術院長 Image processing method
US5187754A (en) * 1991-04-30 1993-02-16 General Electric Company Forming, with the aid of an overview image, a composite image from a mosaic of images
US5568384A (en) * 1992-10-13 1996-10-22 Mayo Foundation For Medical Education And Research Biomedical imaging and analysis
US5550937A (en) * 1992-11-23 1996-08-27 Harris Corporation Mechanism for registering digital images obtained from multiple sensors having diverse image collection geometries
US5682198A (en) * 1993-06-28 1997-10-28 Canon Kabushiki Kaisha Double eye image pickup apparatus
US5530774A (en) * 1994-03-25 1996-06-25 Eastman Kodak Company Generation of depth image through interpolation and extrapolation of intermediate images derived from stereo image pair using disparity vector fields

Also Published As

Publication number Publication date
KR100450469B1 (en) 2005-06-13
EP0842498A2 (en) 1998-05-20
JPH11509946A (en) 1999-08-31
US5963664A (en) 1999-10-05
WO1997001135A2 (en) 1997-01-09
KR19990028281A (en) 1999-04-15
WO1997001135A3 (en) 1997-03-06
EP0842498A4 (en) 2000-07-19

Similar Documents

Publication Publication Date Title
CA2225400A1 (en) Method and system for image combination using a parallax-based technique
Irani et al. About direct methods
Kumar et al. Registration of video to geo-referenced imagery
US6353678B1 (en) Method and apparatus for detecting independent motion in three-dimensional scenes
Avidan et al. Novel view synthesis by cascading trilinear tensors
Stein et al. Model-based brightness constraints: On direct estimation of structure and motion
Szeliski et al. Direct methods for visual scene reconstruction
Irani Multi-frame correspondence estimation using subspace constraints
US6571024B1 (en) Method and apparatus for multi-view three dimensional estimation
Hsu et al. Automated mosaics via topology inference
US6137491A (en) Method and apparatus for reconstructing geometry using geometrically constrained structure from motion with points on planes
US6597818B2 (en) Method and apparatus for performing geo-spatial registration of imagery
Coorg et al. Spherical mosaics with quaternions and dense correlation
US6219462B1 (en) Method and apparatus for performing global image alignment using any local match measure
US8208029B2 (en) Method and system for calibrating camera with rectification homography of imaged parallelogram
Baker et al. Parameterizing homographies
JP2004515832A (en) Apparatus and method for spatio-temporal normalization matching of image sequence
US20040071367A1 (en) Apparatus and method for alignmemt of spatial or temporal non-overlapping images sequences
US10762654B2 (en) Method and system for three-dimensional model reconstruction
Jain et al. A review paper on various approaches for image mosaicing
Patidar et al. Automatic image mosaicing: an approach based on FFT
Deschenes et al. An unified approach for a simultaneous and cooperative estimation of defocus blur and spatial shifts
Yang et al. Single-shot extrinsic calibration of a generically configured RGB-D camera rig from scene constraints
Grattoni et al. A mosaicing approach for the acquisition and representation of 3d painted surfaces for conservation and restoration purposes
Chen Multi-view image-based rendering and modeling

Legal Events

Date Code Title Description
FZDE Discontinued

Effective date: 20010626