US20050232513A1 - System and method for aligning images - Google Patents

System and method for aligning images

Info

Publication number
US20050232513A1
Authority
US
United States
Prior art keywords
image
reference points
positioning
template
geometrical object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/133,544
Inventor
Daniel Ritt
Matthew Whitaker
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Radiological Imaging Technology Inc
Original Assignee
Radiological Imaging Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Radiological Imaging Technology Inc filed Critical Radiological Imaging Technology Inc
Priority to US11/133,544
Publication of US20050232513A1
Status: Abandoned

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; context of image processing
    • G06T 2207/30004: Biomedical image processing

Definitions

  • the present invention relates in general to the alignment of images. More specifically, the present invention relates to a system or method for aligning two or more images (collectively “alignment system” or simply the “system”).
  • Another possible application of image alignment is for quality assurance measurements. For example, radiation oncology often requires image treatment plans to be compared to quality assurance films to determine if the treatment plan is actually being executed. There are also numerous non-medical applications for which image alignment can be very useful.
  • the invention is a system or method for aligning images (the “system”).
  • a definition subsystem including a first image, a second image, one or more target reference points, one or more template reference points, and a geometrical object.
  • the definition subsystem identifies one or more target reference points associated with the first image and one or more template reference points associated with the second image by providing a geometrical object for positioning the first image in relation to the second image.
  • a combination subsystem is configured to generate an aligned image from the first image and second image.
  • An interface subsystem may be used to facilitate interactions between users and the system.
  • the alignment system can be applied to images involving two, three, or more dimensions.
  • an Affine transform heuristic is performed using various target and template points.
  • the Affine transform can eliminate shift, rotational, and magnification differences between different images.
  • different types of combination heuristics may be used.
  • FIG. 1 is an environmental block diagram illustrating an example of an image alignment system accessible by a user.
  • FIG. 2A is a subsystem-level block diagram illustrating an example of a definition subsystem and a combination subsystem.
  • FIG. 2B is a subsystem-level block diagram illustrating an example of a definition subsystem, a combination subsystem, and an interface subsystem.
  • FIG. 2C is a subsystem-level block diagram illustrating an example of a definition subsystem, a combination subsystem, an interface subsystem, and a detection subsystem.
  • FIG. 3 is a flow diagram illustrating an example of how the system receives input and generates output.
  • FIG. 4 is a flow diagram illustrating an example of facilitating a positioning of images and generating an aligned image according to the positioned images.
  • FIG. 5 is a flow diagram illustrating an example of steps that an image alignment system or method may execute to generate an aligned image.
  • FIG. 6 is a flow diagram illustrating an example of steps that a user of an image alignment system may perform to generate an aligned image.
  • FIG. 7A is a diagram illustrating one example of target reference points associated with a first image.
  • FIG. 7B is a diagram illustrating one example of a geometrical object connecting target reference points associated with a first image.
  • FIG. 7C is a diagram illustrating an example of a geometrical object and a centroid associated with that geometrical object.
  • FIG. 7D is a diagram illustrating a geometrical object and various template reference points positioned in relation to a second image.
  • the present invention relates generally to methods and systems for aligning images (collectively an “image alignment system” or “the system”) by producing an aligned image from a number of images and a relationship between various reference points associated with those images.
  • a geometrical object can be formed from selected reference points in one image, copied or transferred to a second image, and positioned within that second image to establish a relationship between reference points.
  • FIG. 1 is a block diagram illustrating an example of some of the elements that can be incorporated into an image alignment system 20 .
  • FIG. 1 shows a human being to represent a user 22 , a computer terminal to represent an access device 24 , a GUI to represent an interface 26 , and a computer tower to represent a computer 28 .
  • a user 22 can access the system 20 through an access device 24 .
  • the user 22 is a human being.
  • the user 22 may be an automated agent, a robot, a neural network, an expert system, an artificial technology device, or some other form of intelligence technology (collectively “intelligence technology”).
  • the system 20 can be implemented in many different ways, giving users 22 a potentially wide variety of different ways to configure the processing performed by the system 20 .
  • the access device 24 can be any device that is either: (a) capable of performing the programming logic of the system 20 ; or (b) capable of communicating with a device that is capable of performing the programming logic of the system 20 .
  • Access devices 24 can include desktop computers, laptop computers, mainframe computers, mini-computers, programmable logic devices, embedded computers, hardware devices capable of performing the processing required by the system 20 , cell phones, satellite pagers, personal data assistants (“PDAs”), and a wide range of future devices that may not yet currently exist.
  • the access device 24 can also include various peripherals associated with the device such as a terminal, keyboard, mouse, screen, printer, input device, output device, or any other apparatus that can relay data or commands between a user 22 and an interface 26 .
  • the user 22 uses the access device 24 to interact with an interface 26 .
  • the interface 26 is typically a web page that is viewable from a browser on the access device 24 .
  • the interface 26 is likely to be influenced by the operating system and other characteristics of the access device 24 .
  • Users 22 can view system 20 outputs through the interface 26 , and users 22 can also provide system 20 inputs by interacting with the interface 26 .
  • the interface 26 can be described as a combination of the various information technology layers relevant to communications between various software applications and the user 22 .
  • the interface 26 can be the aggregate characteristics of a graphical user interface (“GUI”), an intranet, an extranet, the Internet, a local area network (“LAN”), a wide area network (“WAN”), a software application, some other type of network, and any other factor relating to the relaying of data or commands between an access device 24 and a computer 28 , or between a user 22 and a computer 28 .
  • a computer 28 is any device or combination of devices that allows the processing of the system 20 to be performed.
  • the computer 28 may be a general purpose computer capable of running a wide variety of different software applications or a specialized device limited to particular functions.
  • the computer 28 is the same device as the access device 24 .
  • the computer 28 is a network of computers 28 accessed by the access device 24 .
  • the system 20 can incorporate a wide variety of different information technology architectures.
  • the computer 28 is able to receive, incorporate, store, and process information that may relate to operation of the image alignment system 20 .
  • the computer 28 may include any type, number, form, or configuration of processors, system memory, computer-readable mediums, peripheral devices, and operating systems.
  • the computer 28 is a server and the access device 24 is a client device accessing the server.
  • Images to be aligned by the system 20 are examples of processing elements existing as representations within the computer 28 .
  • An image may include various reference points, and those reference points can exist as representations within the computer 28 .
  • a geometrical object 35 formed from reference point(s), used to align a first image 30 with respect to a second image 32 , also exists as a representation within the computer 28 .
  • An image is potentially any visual representation that can be aligned with one or more other visual representations.
  • images are captured through the use of a light-based sensor, such as a camera.
  • images can be generated from non-light-based sensors or from other sources of information and data.
  • An ultrasound image is an example of an image that is generated from a non-light-based sensor.
  • the images processed by the system 20 are preferably digital images.
  • the images are initially captured in a digital format and are passed unmodified to the system 20 .
  • digital images may be generated from analog images.
  • Various enhancement heuristics may be applied to an image before it is aligned by the system 20 , but the system 20 does not require the performance of such pre-alignment enhancement processing.
  • target reference points 34 are associated with the first image 30 (the “target image”) and template reference points 36 are associated with a second image 32 (the “template image”). Any number of target images can be aligned with respect to a single template image.
  • the target reference points 34 and template reference points 36 are locations in relation to an image, and the system 20 uses the locations of the target reference points 34 and the template reference points 36 to determine a relationship so that an aligned image 38 can be generated. Locations of the template reference points 36 may be determined by positioning the geometrical object 35 within the second image 32 .
  • the geometrical object 35 can be used to facilitate a generation of an aligned image 38 .
  • a geometrical object 35 is transmitted or copied from a first image 30 to a second image 32 . In alternative embodiments, the geometrical object 35 may be reproduced in the second image 32 in some other way.
  • the geometrical object 35 is the configuration of target reference point(s) 34 within the target image 30 that is used to align the target image 30 with the template image 32 .
  • the geometrical object 35 is made up of at least three points.
  • the system 20 can be implemented in the form of various subsystems. A wide variety of different subsystem configurations can be incorporated into the system 20 .
  • FIGS. 2A, 2B, and 2C illustrate different subsystem-level configurations of the system 20 .
  • FIG. 2A shows a system 20 made up of two subsystems: a definition subsystem 40 and a combination subsystem 42 .
  • FIG. 2B illustrates a system 20 made up of three subsystems: the definition subsystem 40 , the combination subsystem 42 , and an interface subsystem 44 .
  • FIG. 2C displays an association of four subsystems: the definition subsystem 40 , the combination subsystem 42 , the interface subsystem 44 , and a detection subsystem 45 .
  • Interaction between subsystems 40 - 44 can include an exchange of data, algorithms, instructions, commands, locations of points in relation to images, or any other communication helpful for implementation of the system 20 .
  • the definition subsystem 40 allows the system 20 to define the relationship(s) between the first image 30 and the second image 32 so that the combination subsystem 42 can create the aligned image 38 from the first image 30 and the second image 32 .
  • the processing elements of the definition subsystem 40 can include the first image 30 , the second image 32 , the target reference points 34 , the template reference points 36 , and the geometrical object 35 .
  • the target reference points 34 are associated with the first image 30 .
  • the template reference points 36 are associated with the second image 32 .
  • the target reference points 34 may be selected through an interface subsystem 44 or by any other method readable to the definition subsystem 40 .
  • the definition subsystem 40 is configured to define or create the geometrical object 35 .
  • the definition subsystem 40 generates the geometrical object 35 by connecting at least a subset of the target reference points 34 .
  • the definition subsystem 40 may further identify a centroid of the geometrical object 35 .
  • the definition subsystem 40 may impose a constraint upon one or more target reference points 34 . Constraints may be purely user defined on a case-by-case basis, or may be created by the system 20 through the implementation of user-defined processing rules. By imposing the constraint upon one or more target reference points 34 , the definition subsystem 40 can ensure that the target reference points 34 are adequate for generation of the geometrical object 35 .
  • the definition subsystem 40 can impose any number, combination, or type of constraint. These constraints may include a requirement that a minimum number of target reference points 34 be identified, that a minimum number of target reference points 34 not be co-linear, or that target reference points 34 be within or outside of a specified area of an image.
  • the definition subsystem 40 generates the geometrical object 35 and coordinates the geometrical object 35 with the second image 32 , which generation and coordination can be accomplished by any method known to a person skilled in the art, including by transferring or copying the geometrical object 35 to the second image 32 .
  • the definition subsystem 40 can provide a plurality of controls for positioning the geometrical object 35 within the second image 32 .
  • the controls may include any one of or any combination of a control for shifting the geometrical object 35 along a dimensional axis, a control for rotating the geometrical object 35 , a control for changing a magnification of the geometrical object 35 , a coarse position control, a fine position control, or any other control helpful for a positioning of the geometrical object 35 in relation to the second image 32 .
  • the definition subsystem 40 can include a thumbnail image of the geometrical object 35 .
  • the definition subsystem 40 can identify a plurality of positions of the geometrical object 35 in relation to the second image 32 . Those positions may include a gross position and a fine position.
  • the thumbnail image may be used to identify gross or fine positions of the geometrical object 35 in relation to the second image 32 .
  • the definition subsystem 40 can identify a plurality of positions of the geometrical object in a substantially similar and consistent manner.
  • the definition subsystem 40 adjusts the geometrical object 35 within the second image 32 .
  • the definition subsystem 40 may adjust a positioning of the geometrical object 35 within the second image 32 .
  • the geometrical object 35 can be used to define template reference points 36 .
  • vertices of the geometrical object 35 correspond with template reference points 36 when the geometrical object 35 is located within or about the second image 32 .
  • a positioning of the geometrical object 35 in relation to the second image 32 positions the vertices or other relevant points of the geometrical object 35 so as to define the template reference points 36 .
  • the definition subsystem 40 can provide for an accuracy metric related to at least one of the template reference points 36 .
  • the accuracy metric can identify a measurement of accuracy of a positioning of at least one of the template reference points 36 in relation to an estimated or predicted position of reference points within the second image 32 .
  • the alignment system 20 can be applied to images involving two, three, or more dimensions.
  • an Affine transform heuristic is performed using various target reference points 34 and template reference points 36 .
  • the Affine transform can eliminate shift, rotational, and magnification differences between different images.
  • different types of relationship-related heuristics may be used by the definition subsystem 40 and/or the combination subsystem 42 .
  • Other examples of heuristics known in the art that relate to potential relationships between images and/or points include a linear conformal heuristic, a projective heuristic, a polynomial heuristic, a piecewise linear heuristic, and a locally weighted mean heuristic.
  • the various relationship-related heuristics allow the system 20 to compare images and points that would otherwise not be in a format suitable for the establishment of a relationship between the various images and/or points.
  • the relationship-related heuristics such as the Affine transform heuristic are used to “compare apples to apples and oranges to oranges.”
  • the combination subsystem 42 is responsible for creating the aligned image 38 from the images and relationships maintained in the definition subsystem 40 .
  • the combination subsystem 42 includes the aligned image 38 .
  • the combination subsystem 42 is configured to generate the aligned image 38 from the first image 30 , the second image 32 , at least one of the target reference points 34 , and at least one of the template reference points 36 .
  • the generation of the aligned image 38 by the combination subsystem 42 can be accomplished in a number of ways.
  • the combination subsystem 42 may access the target reference points 34 and the template reference points 36 from the definition subsystem 40 .
  • the combination subsystem 42 can generate an alignment calculation or determine a relationship between at least one of the target reference points 34 and at least one of the template reference points 36 .
  • the combination subsystem 42 can use an alignment calculation or relationship to align the first image 30 and the second image 32 .
  • the combination subsystem 42 uses locations of the target reference points 34 and the template reference points 36 to generate the aligned image 38 .
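
By way of illustration only, a minimal Python sketch of this combination step might fit the point relationship by least squares and resample the target image onto the template grid. The NumPy/SciPy usage, the function name, and the (row, column) point convention are assumptions, not taken from the patent:

```python
import numpy as np
from scipy import ndimage

def combine_images(target_img, target_pts, template_pts):
    """Fit the affine map sending template coordinates to target
    coordinates, then resample the target image onto the template
    grid. Points are (row, col) pairs, matching ndimage indexing."""
    s = np.asarray(template_pts, dtype=float)   # (n, 2), n >= 3
    t = np.asarray(target_pts, dtype=float)     # (n, 2)
    A = np.hstack([s, np.ones((len(s), 1))])    # design matrix rows [r c 1]
    X, *_ = np.linalg.lstsq(A, t, rcond=None)   # least-squares fit, (3, 2)
    M = X.T                                     # 2x3 matrix: [linear | offset]
    # affine_transform samples the input at M[:, :2] @ out_coord + M[:, 2]
    return ndimage.affine_transform(target_img, M[:, :2], offset=M[:, 2],
                                    output_shape=np.shape(target_img),
                                    order=1)
```

Here order=1 (bilinear resampling) is simply one reasonable choice; the patent does not specify an interpolation scheme.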
  • a detection subsystem 45 can be configured to detect distortions, or other indications of a problem, relating to an aligned image 38 .
  • the detection subsystem 45 also allows a user 22 to check for distortions in an aligned image 38 . Once a distortion has been detected, the detection subsystem 45 identifies the extent and nature of the distortion.
  • the user 22 can use data provided by the detection subsystem 45 to check for a misalignment of a device or system that generated the first image 30 or the second image 32 .
  • the detection subsystem 45 can be configured by a user 22 through the use of the interface subsystem 44 .
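
As a hedged sketch of the kind of check the detection subsystem 45 could perform (the residual-based approach and all names are illustrative, not from the patent):

```python
import numpy as np

def distortion_report(aligned_vertex_locs, desired_template_pts):
    """Residual vectors between post-alignment vertex locations and the
    desired template reference point locations; large or systematic
    residuals hint at misalignment of the image-generating device."""
    a = np.asarray(aligned_vertex_locs, dtype=float)
    d = np.asarray(desired_template_pts, dtype=float)
    residuals = a - d                       # per-point error vectors
    mags = np.linalg.norm(residuals, axis=1)
    return {"residuals": residuals,
            "max_error": float(mags.max()),
            "mean_error": float(mags.mean())}
```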
  • FIG. 3 is a flow diagram illustrating an example of how the system receives input and generates output.
  • a computer program 50 residing on a computer-readable medium receives user input 46 through an input interface 48 and provides output 54 to the user 22 through an output interface 52 .
  • the computer program 50 includes the target reference points 34 , the geometrical shape 35 ( FIGS. 1-2 ), the first image 30 , the second image 32 , the template reference points 36 , a third image, and the interface 26 .
  • the target reference points 34 are associated with the first image 30 .
  • the computer program 50 can generate a geometrical object 35 or shape in a number of ways, including by connecting at least a subset of the target reference points 34 .
  • the geometrical shape 35 can be any number or combination of any shape, including but not limited to a segment, line, ellipse, arc, polygon, and triangle.
  • the input 46 may include a constraint imposed upon the target reference points 34 or the geometrical shape 35 by the computer program 50 .
  • the computer program 50 ensures that the target reference points 34 are adequate for generation of the geometrical shape 35 .
  • the system 20 can impose any number, combination, or type of constraint, including a requirement that a minimum number of target reference points 34 be identified, that a minimum number of target reference points 34 not be co-linear, or that target reference points 34 be within or outside of a specified area.
  • the computer program 50 requires more than four target reference points 34 .
  • the computer program 50 may identify a centroid of the geometrical shape 35 .
  • the second image 32 can be configured to include the geometrical shape 35 .
  • the geometrical shape 35 is generated by the computer program 50 within the second image 32 .
  • the computer program 50 can accomplish a generation of the geometrical shape 35 within the second image 32 in a number of ways. For example, the computer program 50 may transfer or copy the geometrical shape 35 from one image to another.
  • the computer program 50 provides for identifying the template reference points 36 or locations of the template reference points 36 in relation to the second image 32 .
  • the template reference points 36 can be identified by a positioning of the geometrical shape 35 in relation to a second image 32 , which positioning is provided for by the computer program 50 .
  • the computer program 50 provides for a number of controls for positioning the geometrical shape 35 within the second image 32 .
  • the manipulation of the controls is a form of input 46 .
  • the controls may include any one of or any combination of a shift control, a rotation control, a magnification control, a coarse position control, a fine position control, or any other control helpful for a positioning of the geometrical shape 35 in relation to a second image 32 .
  • the controls can function in a number of modes, including a coarse mode and a fine mode.
  • the computer program 50 provides for positioning the geometrical shape 35 by shifting the geometrical shape 35 along a dimensional axis, rotating the geometrical shape 35 , and changing a magnification of the geometrical shape 35 .
  • a positioning of the geometrical shape 35 can include a coarse adjustment and a fine adjustment.
  • the computer program 50 is capable of identifying a plurality of positions of the geometrical shape 35 in relation to the second image 32 , including a gross position and a fine position of the geometrical shape 35 in relation to the second image 32 . This identification can be performed in a substantially simultaneous manner.
  • a thumbnail image of an area adjacent to a vertex of the geometrical shape 35 can be provided by the computer program 50 .
  • the computer program 50 can provide for an accuracy metric related to at least one of the template reference points 36 .
  • the accuracy metric is a form of output 54 .
  • the accuracy metric can identify a measurement of accuracy of a positioning of at least one of the template reference points 36 in relation to an estimated or predicted position of reference points within the second image 32 .
  • the third image (the aligned image 38 ) is created from the first image 30 , the second image 32 , and a relationship between the target reference points 34 and the template reference points 36 .
  • the creation of the third image by the computer program 50 can be accomplished in a number of ways.
  • the computer program 50 can generate an alignment calculation or determine a relationship between at least one of the target reference points 34 and at least one of the template reference points 36 .
  • the computer program 50 can use an alignment calculation or relationship to align the first image 30 and the second image 32 .
  • the computer program 50 uses locations of the target reference points 34 and the template reference points 36 to generate the third image.
  • the computer program 50 can be configured to detect distortions of the third image. Once a distortion has been detected, the computer program 50 can identify the extent and nature of the distortion. A user 22 can use data generated by the computer program 50 to check for a misalignment of a device or system that generated the first image 30 or the second image 32 .
  • the output 54 of the computer program 50 can include various distortion metrics, misalignment metrics, and other forms of error metrics (collectively “accuracy metrics”).
  • the interface 26 of the computer program 50 is configured to receive input.
  • the interface 26 can include an input interface 48 and an output interface 52 .
  • the input can include but is not limited to an instruction for defining the target reference points 34 and a command for positioning the geometrical shape 35 in relation to the second image 32 .
  • the computer program 50 can be configured to execute other operations disclosed herein or known to a person skilled in the art that are relevant to the present invention.
  • FIG. 4 is a flow diagram illustrating an example of facilitating a positioning of images and generating an aligned image according to the positioned images.
  • a relationship is defined between the various images to be aligned.
  • the system 20 facilitates the positioning of the images in accordance with the previously defined relationship. For example, the template image 32 is positioned in relation to the target image 30 and the target image 30 is positioned in relation to the template image 32 .
  • the system generates the aligned image 38 in accordance with the positioning performed at 57 .
  • the system 20 can perform the three steps identified above in a wide number of different ways.
  • the positioning of the images can be facilitated by providing controls for the positioning of the template image in relation to the target image.
  • FIG. 5 is a flow diagram illustrating an example of steps that an image alignment system 20 may execute to generate the aligned image 38 .
  • the system 20 receives input for defining the target reference points 34 associated with a first image 30 . Once the input 46 is received, or as it is received, the system 20 can then at 62 generate the geometrical object 35 .
  • the input 46 may include a command.
  • the geometrical object 35 can be generated in a variety of ways, such as by connecting the target reference points 34 . In the preferred embodiment, the system 20 may be configured to require that at least four target reference points 34 be connected in generating the geometrical object 35 .
  • the geometrical object 35 can take any form or shape that connects the target reference points 34 , and each target reference point 34 is a vertex or other defining feature of the geometrical object 35 . In one category of embodiments, the geometrical object 35 is a polygon.
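
One plausible way to connect target reference points into a simple polygon, offered only as an illustration (the patent does not prescribe an ordering rule), is to sort the points by angle about their mean position:

```python
import numpy as np

def connect_points(points):
    """Order reference points by angle about their mean so that
    connecting them in sequence traces a simple polygon."""
    p = np.asarray(points, dtype=float)
    c = p.mean(axis=0)
    angles = np.arctan2(p[:, 1] - c[1], p[:, 0] - c[0])
    return p[np.argsort(angles)]
```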
  • the system 20 imposes and checks a constraint against the target reference points 34 . If the target reference points 34 at 66 do not meet constraints imposed by the system 20 , the system 20 at 68 prompts and waits for input changing or adding to definitions of the target reference points 34 . Once additional reference point data is received, the system 20 again generates a geometrical object 35 at 62 and checks constraints against the target reference points 34 at 64 . The system 20 may repeat these steps until the target reference points 34 satisfy the constraints. Any type of constraint can be imposed upon the target reference points 34 , including requiring enough target reference points 34 to define a particular form of geometrical object 35 . For example, the system 20 may require that at least four target reference points 34 are defined. If more than two target reference points 34 are co-linear, the system 20 may require that additional target reference points 34 be defined. The system 20 may use the geometrical object 35 to impose constraints upon the target reference points 34 .
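
A minimal sketch of such a constraint check, assuming the "at least four points" and co-linearity rules described above; the names and tolerance are illustrative, and this version only rejects a fully co-linear set rather than any three co-linear points:

```python
import numpy as np

def check_constraints(points, min_points=4, tol=1e-6):
    """Return (ok, reason) for a candidate set of target reference
    points: enough points, and not all lying on a single line."""
    p = np.asarray(points, dtype=float)
    if len(p) < min_points:
        return False, "too few target reference points"
    # A centered 2-D point cloud has rank < 2 iff all points are co-linear.
    if np.linalg.matrix_rank(p - p.mean(axis=0), tol=tol) < 2:
        return False, "target reference points are co-linear"
    return True, "ok"
```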
  • the system 20 generates a geometrical object 35 within the first image 30 and regenerates the geometrical object 35 in the second image 32 space at 70 .
  • the geometrical object 35 can be generated in the second image 32 space in a number of ways, including transferring or copying the geometrical object 35 from the first image 30 to the second image 32 .
  • the geometrical object 35 can be represented by a set of connected points, a solid object, a semi-transparent object, a transparent object, or any other object that can be used to represent a geometrical object 35 . Any such representation can be displayed by the system 20 .
  • the system 20 identifies the template reference points 36 based on a placement of the geometrical object 35 in relation to the second image 32 .
  • one method of identifying the template reference points is to provide controls at 71 for positioning the geometrical object 35 in relation to the second image 32 .
  • a variety of controls can be made available, including one of or a combination of controls for shifting the geometrical object 35 up, down, left, or right in relation to the second image 32 , rotating the geometrical object 35 in relation to the second image 32 , changing the magnification or size of the geometrical object 35 in relation to the second image 32 , moving the geometrical object 35 through multiple dimensions, switching between coarse and fine positioning of the geometrical object 35 , or any other control that can be used to adjust the geometrical object 35 in relation to the second image 32 .
  • a command can be received as an input 46 allowing for the positioning of the geometrical object 35 by at least one of rotating the geometrical object 35 , adjusting a magnification of the geometrical object 35 , and shifting the geometrical object 35 along a dimensional axis.
  • a command may allow for coarse and fine adjustments of the geometrical object 35 .
  • the system 20 provides a thumbnail image to the interface 26 for displaying an area proximate to at least one of the template reference points 36 .
  • the thumbnail image can be configured to allow for a substantially simultaneous display of fine positioning detail and coarse positioning detail, for example, by providing both a view of thumbnail images and a larger view of the geometrical object 35 in relation to the second image 32 to the user 22 for simultaneous viewing.
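
For illustration, the thumbnail view could be as simple as a crop around a vertex; the window size and all names are assumptions:

```python
import numpy as np

def vertex_thumbnail(image, vertex, half_size=16):
    """Crop a small window centered on a vertex so fine positioning
    detail can be shown alongside the full-image (coarse) view."""
    r, c = int(round(vertex[0])), int(round(vertex[1]))
    r0, c0 = max(r - half_size, 0), max(c - half_size, 0)
    return image[r0:r + half_size, c0:c + half_size]
```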
  • the system 20 may provide for an accuracy metric or an accuracy measurement detail, either for a composite of the template reference points 36 or individually for one or more of the template reference points 36 .
  • the system 20 may provide accuracy metrics by calculating a number of accuracy metrics.
  • Accuracy metrics facilitate an optimal positioning of the geometrical object 35 within the second image 32 .
  • the system 20 receives input commands from an interface or from the user 22 for positioning the geometrical object 35 at 72 in relation to the second image 32 .
  • the system 20 adjusts a positioning of the geometrical object 35 at 74 within the second image 32 . This adjustment can be based upon the accuracy metric.
  • the system 20 may use a computer implemented process, such as a refinement heuristic, or any other image alignment tool for adjusting a placement of the geometrical object 35 in relation to the second image 32 .
  • the locations of the template reference points 36 can be determined in other ways.
  • the user 22 may define the template reference points 36 by pointing and clicking on locations within the second image 32 , or the template reference points 36 can be predefined.
  • the system 20 can determine a relationship at 78 between the target reference points 34 and the template reference points 36 .
  • Such a relationship can be a mathematical relationship and can be determined in any of a number of ways.
  • the system 20 at 80 generates the aligned image 38 from the first image 30 and the second image 32 .
  • the generation occurs by the system producing the aligned image 38 from the first image 30 , the second image 32 , and a relationship between at least one of the target reference points 34 in the first image 30 and at least one of the template reference points 36 in the second image 32 .
  • the system 20 can use an alignment calculation or a computer implemented combination heuristic to generate the aligned image 38 . Some such heuristics are known in the prior art.
  • the system 20 analyzes the degree and nature of any misalignment between locations of the vertices of the geometrical object 35 and defined locations of the template reference points 36 to reveal information about the degree and nature of any misalignment of an image generating device or system. Analyzing distortions allows the system 20 or the user 22 to analyze the alignment status of an image generation device.
  • FIG. 6 is a flow diagram illustrating an example of steps that a user 22 of an image alignment system 20 can perform through an access device 24 and an interface 26 to generate an aligned image 38 .
  • the user 22 selects or inputs images for alignment at 84 .
  • the user 22 can provide images to the system 20 in any form recognizable by the system 20 , including digital representations of images.
  • the user 22 at 86 defines the target reference points 34 of a first image 30 .
  • the target reference points 34 can be defined by pointing and clicking on locations within the first image 30 , by importing or selecting predefined target reference points 34 , or by any other way understandable to the system 20 .
  • the user 22 of the system 20 at 88 can initiate generation of the geometrical object 35 .
  • the geometrical object 35 can be initiated in a number of ways, including defining the target reference points 34 , defining a set number of the target reference points 34 that satisfy constraints, submitting a specific instruction to the system 20 to generate the geometrical object 35 , or any other means by which the user 22 may signal the system 20 to generate the geometrical object 35 .
  • the system 20 can select an appropriate type of geometrical object 35 to generate, or the user 22 may select a type of geometrical object 35 to be generated. In one embodiment, the system 20 generates a geometrical object 35 by connecting the target reference points 34 .
  • a user determines a centroid 104 of a geometrical object 35 .
  • the system 20 can determine and indicate the centroid 104 of the geometrical object 35 .
  • a determination of the centroid 104 is helpful for eliminating or at least mitigating errors that can occur in the image alignment system 20 .
  • the user 22 can verify that the centroid 104 is near the center of a critical area of the first image 30 . If the system 20 or the user 22 of the system 20 determines that the centroid 104 of the geometrical object 35 is not near enough to a critical area of the first image 30 as is desired, the user 22 can redefine the target reference points 34 .
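
As an illustration, the centroid 104 might be computed as the mean of the object's vertices (an area-weighted polygon centroid is an equally plausible reading), and the "near a critical area" check as a simple distance test; all names here are hypothetical:

```python
import numpy as np

def centroid(vertices):
    """Vertex-mean centroid of the geometrical object."""
    return np.asarray(vertices, dtype=float).mean(axis=0)

def near_critical_area(vertices, critical_center, max_dist):
    """True if the centroid lies within max_dist of the center of a
    critical area of the first image."""
    d = np.linalg.norm(centroid(vertices)
                       - np.asarray(critical_center, dtype=float))
    return float(d) <= max_dist
```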
  • the user 22 positions the geometrical object 35 within the second image 32 space.
  • the user 22 can use controls provided by the system 20 or that are a part of the system 20 to position the geometrical object 35 .
  • the user 22 can shift the geometrical object 35 up, down, left, or right in relation to the second image 32 , rotate the geometrical object 35 in relation to a second image 32 , change the magnification or size of the geometrical object 35 in relation to the second image 32 , move the geometrical object 35 through multiple dimensions, switch between coarse and fine positioning of the geometrical object 35 , or execute any other control that can be used to adjust the geometrical object 35 in relation to the second image 32 .
  • the user 22 may use a thumbnail view or an accuracy metric to position the geometrical object 35 .
  • the user 22 of the system 20 initiates alignment of the first image 30 and the second image 32 .
  • the user 22 may signal the system 20 to transfer the geometrical object 35 in any way recognizable by the system 20 .
  • One such way is to send an instruction for alignment to the system 20 via the access device 24 or the interface 26 .
  • upon receipt of an alignment signal, the system 20 generates the aligned image 38 from the first image 30 and the second image 32 .
  • the user 22 of the system 20 checks for distortion of the aligned image 38 .
  • the user 22 determines and inputs to the system 20 desired locations of the template reference points 36 in relation to the second image 32 .
  • the system 20 can analyze the desired locations and post-alignment locations of the template reference points 36 to discover information about any distortions in an aligned image 38 .
  • the system 20 may reveal to the user 22 any information pertaining to a distortion analysis.
  • FIG. 7A illustrates the target reference points 34 defined in relation to the first image 30 .
  • FIG. 7B illustrates the geometrical object 35 connecting the target reference points 34 .
  • FIG. 7C illustrates an indication of the centroid 104 of the geometrical object 35 .
  • FIG. 7D illustrates a transferred geometrical object 35 and the template reference points 36 in relation to the second image 32 .
  • FIG. 7A is a diagram illustrating one example of target reference points 34 defined in relation to the first image 30 .
  • FIG. 7B is a diagram illustrating one example of a geometrical object 35 connecting target reference points 34 associated with a first image 30 .
  • FIG. 7C is a diagram illustrating an example of a geometrical object 35 and a centroid 104 associated with that geometrical object 35 .
  • FIG. 7D is a diagram illustrating a transferred geometrical object 35 and various template reference points 36 positioned in relation to a second image 32 .

Abstract

The invention is a system or method for aligning images (the “system”). A definition subsystem includes a first image, a second image, one or more target reference points, one or more template reference points, and a geometrical object. The definition subsystem identifies one or more target reference points associated with the first image and one or more template reference points associated with the second image by providing a geometrical object for positioning the first image in relation to the second image. A combination subsystem is configured to generate an aligned image from the first image and second image. An interface subsystem may be used to facilitate interactions between a user and the system.

Description

    RELATED APPLICATIONS
  • This application is a continuation of U.S. application Ser. No. 10/630,015, filed on Jul. 30, 2003, which application is hereby incorporated herein by reference in its entirety.
  • BACKGROUND OF THE INVENTION
  • The present invention relates in general to the alignment of images. More specifically, the present invention relates to a system or method for aligning two or more images (collectively “alignment system” or simply the “system”).
  • Image processing often requires that two or more images from the same source or from different sources be “registered,” or aligned, so that they can occupy the same image space. Once properly aligned to the same image space, images can then be compared or combined to form a multidimensional image. Image alignment can be useful in many applications. One such possible application is in medical imaging. For example, an image produced by magnetic resonance imaging (“MRI”) and an image produced by computerized axial tomography (“CAT” or “CT”) originate from different sources. When the images are overlaid, information acquired in relation to soft tissue (MRI) and hard tissue (CT) can be combined to more accurately depict an area of the body. The total value of the combined integrated image can exceed the sum of its parts.
  • Another possible application of image alignment is for quality assurance measurements. For example, radiation oncology often requires image treatment plans to be compared to quality assurance films to determine if the treatment plan is actually being executed. There are also numerous non-medical applications for which image alignment can be very useful.
  • Several methods are available for image alignment, including automated and manual alignment methods. However, currently available image alignment tools and techniques are inadequate. In many instances, computer automated methods are unsuccessful in aligning images because boundaries are not well defined and images can be poorly focused. Although automated alignment methods perform alignment activities more quickly than existing manual alignment methods, manual alignment methods are often more accurate than automated methods. Thus, manual image alignment methods are often used to make up for deficiencies and inaccuracies of automated alignment methods. However, existing manual alignment systems and methods can be tedious, time consuming, and error prone. It would be desirable for an alignment system to perform in an efficient, accurate, and automated manner.
  • SUMMARY OF THE INVENTION
  • The invention is a system or method for aligning images (the “system”). A definition subsystem includes a first image, a second image, one or more target reference points, one or more template reference points, and a geometrical object. The definition subsystem identifies one or more target reference points associated with the first image and one or more template reference points associated with the second image by providing a geometrical object for positioning the first image in relation to the second image. A combination subsystem is configured to generate an aligned image from the first image and second image. An interface subsystem may be used to facilitate interactions between users and the system.
  • The alignment system can be applied to images involving two, three, or more dimensions. In some embodiments, an Affine transform heuristic is performed using various target and template points. The Affine transform can eliminate shift, rotational, and magnification differences between different images. In other embodiments, different types of combination heuristics may be used.
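
As a rough sketch of the kind of Affine transform heuristic referred to here (the patent gives no code), the six parameters of a 2-D affine map can be estimated by least squares from three or more point pairs; the NumPy usage and function names below are assumptions:

```python
import numpy as np

def estimate_affine(target_pts, template_pts):
    """Least-squares 2-D affine map taking target reference points to
    template reference points; absorbs shift, rotation, and
    magnification differences (plus shear)."""
    t = np.asarray(target_pts, dtype=float)      # (n, 2), n >= 3
    s = np.asarray(template_pts, dtype=float)    # (n, 2)
    A = np.hstack([t, np.ones((len(t), 1))])     # design matrix rows [x y 1]
    X, *_ = np.linalg.lstsq(A, s, rcond=None)
    return X.T                                   # 2x3 matrix [linear | shift]

def apply_affine(M, pts):
    """Apply the 2x3 affine matrix M to an (n, 2) array of points."""
    p = np.asarray(pts, dtype=float)
    return p @ M[:, :2].T + M[:, 2]
```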
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Certain embodiments of the present invention will now be described, by way of example, with reference to the accompanying drawings, in which:
  • FIG. 1 is an environmental block diagram illustrating an example of an image alignment system accessible by a user.
  • FIG. 2A is a subsystem-level block diagram illustrating an example of a definition subsystem and a combination subsystem.
  • FIG. 2B is a subsystem-level block diagram illustrating an example of a definition subsystem, a combination subsystem, and an interface subsystem.
  • FIG. 2C is a subsystem-level block diagram illustrating an example of a definition subsystem, a combination subsystem, an interface subsystem, and a detection subsystem.
  • FIG. 3 is a flow diagram illustrating an example of how the system receives input and generates output.
  • FIG. 4 is a flow diagram illustrating an example of facilitating a positioning of images and generating an aligned image according to the positioned images.
  • FIG. 5 is a flow diagram illustrating an example of steps that an image alignment system or method may execute to generate an aligned image.
  • FIG. 6 is a flow diagram illustrating an example of steps that a user of an image alignment system may perform to generate an aligned image.
  • FIG. 7A is a diagram illustrating one example of target reference points associated with a first image.
  • FIG. 7B is a diagram illustrating one example of a geometrical object connecting target reference points associated with a first image.
  • FIG. 7C is a diagram illustrating an example of a geometrical object and a centroid associated with that geometrical object.
  • FIG. 7D is a diagram illustrating a geometrical object and various template reference points positioned in relation to a second image.
  • DETAILED DESCRIPTION I. Introduction of Elements and Definitions
  • The present invention relates generally to methods and systems for aligning images (collectively an “image alignment system” or “the system”) by producing an aligned image from a number of images and a relationship between various reference points associated with those images. A geometrical object can be formed from selected reference points in one image, copied or transferred to a second image, and positioned within that second image to establish a relationship between reference points.
  • The system can be used in a wide variety of different contexts, including medical applications, photography, geology, and any other field that involves the use of images. The system can be implemented in a wide variety of different devices and hardware configurations. A wide variety of different interfaces, software applications, operating systems, computer hardware, and peripheral components may be incorporated into or interface with the system. There are numerous combinations and environments that can utilize one or more different embodiments of the system. Referring now to the drawings, FIG. 1 is a block diagram illustrating an example of some of the elements that can be incorporated into an image alignment system 20. For illustrative purposes only, FIG. 1 shows a human being to represent a user 22, a computer terminal to represent an access device 24, a GUI to represent an interface 26, and a computer tower to represent a computer 28.
  • A. User
  • A user 22 can access the system 20 through an access device 24. In many embodiments of the system 20, the user 22 is a human being. In some embodiments of the system 20, the user 22 may be an automated agent, a robot, a neural network, an expert system, an artificial technology device, or some other form of intelligence technology (collectively “intelligence technology”). The system 20 can be implemented in many different ways, giving users 22 a potentially wide variety of different ways to configure the processing performed by the system 20.
  • B. Access Device
  • The user 22 accesses the system 20 through the access device 24. The access device 24 can be any device that is either: (a) capable of performing the programming logic of the system 20; or (b) capable of communicating with a device that is capable of performing the programming logic of the system 20. Access devices 24 can include desktop computers, laptop computers, mainframe computers, mini-computers, programmable logic devices, embedded computers, hardware devices capable of performing the processing required by the system 20, cell phones, satellite pagers, personal data assistants (“PDAs”), and a wide range of future devices that may not yet currently exist. The access device 24 can also include various peripherals associated with the device such as a terminal, keyboard, mouse, screen, printer, input device, output device, or any other apparatus that can relay data or commands between a user 22 and an interface 26.
  • C. Interface
  • The user 22 uses the access device 24 to interact with an interface 26. In an Internet embodiment of the system 20, the interface 26 is typically a web page that is viewable from a browser on the access device 24. In other embodiments, the interface 26 is likely to be influenced by the operating system and other characteristics of the access device 24. Users 22 can view system 20 outputs through the interface 26, and users 22 can also provide system 20 inputs by interacting with the interface 26.
  • In many embodiments, the interface 26 can be described as a combination of the various information technology layers relevant to communications between various software applications and the user 22. For example, the interface 26 can be the aggregate characteristics of a graphical user interface (“GUI”), an intranet, an extranet, the Internet, a local area network (“LAN”), a wide area network (“WAN”), a software application, some other type of network, and any other factor relating to the relaying of data or commands between an access device 24 and a computer 28, or between a user 22 and a computer 28.
  • D. Computer
  • A computer 28 is any device or combination of devices that allows the processing of the system 20 to be performed. The computer 28 may be a general purpose computer capable of running a wide variety of different software applications or a specialized device limited to particular functions. In some embodiments, the computer 28 is the same device as the access device 24. In other embodiments, the computer 28 is a network of computers 28 accessed by the access device 24. The system 20 can incorporate a wide variety of different information technology architectures. The computer 28 is able to receive, incorporate, store, and process information that may relate to operation of the image alignment system 20. The computer 28 may include any type, number, form, or configuration of processors, system memory, computer-readable mediums, peripheral devices, and operating systems. In many embodiments, the computer 28 is a server and the access device 24 is a client device accessing the server.
  • Many of the processing elements of the system 20 exist as representations within the computer 28. Images to be aligned by the system 20, such as a first image 30 and a second image 32, are examples of processing elements existing as representations within the computer 28. An image may include various reference points, and those reference points can exist as representations within the computer 28. A geometrical object 35 formed from reference point(s), used to align a first image 30 with respect to a second image 32, also exists as a representation within the computer 28.
  • E. Images
  • The images 30 and 32 can be any representation that can be read or acted upon by the computer 28, including graphical or data representations. The representations can involve two-dimensional, three-dimensional, or even greater than three-dimensional information. One or more of the images may be a digital image. An aligned image 38 can be formed from any number of images.
  • An image is potentially any visual representation that can be aligned with one or more other visual representations. In many embodiments, images are captured through the use of a light-based sensor, such as a camera. In other embodiments, images can be generated from non-light-based sensors or from other sources of information and data. An ultrasound image is an example of an image that is generated from a non-light-based sensor.
  • The images processed by the system 20 are preferably digital images. In some embodiments, the images are initially captured in a digital format and are passed unmodified to the system 20. In other embodiments, digital images may be generated from analog images. Various enhancement heuristics may be applied to an image before it is aligned by the system 20, but the system 20 does not require the performance of such pre-alignment enhancement processing.
  • The computer 28 may act upon multiple images in myriad ways, including the execution of commands or instructions that are provided by the user 22 of the system 20. For example, the computer 28 can receive input from the user 22 through the interface 26 and, from a first image 30 (a “target image”), a second image 32 (a “template image”), target reference points 34, a geometrical object 35, and template reference points 36, generate an aligned image 38.
  • F. Reference Points
  • A reference point is a location on an image that is used by the system 20 to align the image with another image. Reference points may be as small as an individual pixel or as large as a constellation of pixels. In a preferred embodiment, the user 22 identifies the reference points and the system 20 generates the aligned image 38 from the reference points in an automated fashion without further user 22 intervention.
  • As seen in FIG. 1, target reference points 34 are associated with the first image 30 (the “target image”) and template reference points 36 are associated with a second image 32 (the “template image”). Any number of target images can be aligned with respect to a single template image. The target reference points 34 and template reference points 36 are locations in relation to an image, and the system 20 uses the locations of the target reference points 34 and the template reference points 36 to determine a relationship so that an aligned image 38 can be generated. Locations of the template reference points 36 may be determined by positioning the geometrical object 35 within the second image 32. Thus, the geometrical object 35 can be used to facilitate a generation of an aligned image 38. In the embodiment shown in FIG. 1, a geometrical object 35 is transmitted or copied from a first image 30 to a second image 32. In alternative embodiments, the geometrical object 35 may be reproduced in the second image 32 in some other way.
  • G. Geometrical Object
  • The geometrical object 35 is the configuration of target reference point(s) 34 within the target image 30 that is used to align the target image 30 with the template image 32. In a preferred embodiment, the geometrical object 35 is made up of at least three points.
  • II. Subsystem-Level Views
  • The system 20 can be implemented in the form of various subsystems. A wide variety of different subsystem configurations can be incorporated into the system 20.
  • FIGS. 2A, 2B, and 2C illustrate different subsystem-level configurations of the system 20. FIG. 2A shows a system 20 made up of two subsystems: a definition subsystem 40 and a combination subsystem 42. FIG. 2B illustrates a system 20 made up of three subsystems: the definition subsystem 40, the combination subsystem 42, and an interface subsystem 44. FIG. 2C displays an association of four subsystems: the definition subsystem 40, the combination subsystem 42, the interface subsystem 44, and a detection subsystem 45. Interaction between subsystems 40-44 can include an exchange of data, algorithms, instructions, commands, locations of points in relation to images, or any other communication helpful for implementation of the system 20.
  • A. Definition Subsystem
  • The definition subsystem 40 allows the system 20 to define the relationship(s) between the first image 30 and the second image 32 so that the combination subsystem 42 can create the aligned image 38 from the first image 30 and the second image 32.
  • The processing elements of the definition subsystem 40 can include the first image 30, the second image 32, the target reference points 34, the template reference points 36, and the geometrical object 35. The target reference points 34 are associated with the first image 30, and the template reference points 36 are associated with the second image 32. The target reference points 34 may be selected through an interface subsystem 44 or by any other method readable to the definition subsystem 40. The definition subsystem 40 is configured to define or create the geometrical object 35.
  • In one embodiment, the definition subsystem 40 generates the geometrical object 35 by connecting at least a subset of the target reference points 34. The definition subsystem 40 may further identify a centroid of the geometrical object 35. In addition, the definition subsystem 40 may impose a constraint upon one or more target reference points 34. Constraints may be purely user defined on a case-by-case basis, or may be created by the system 20 through the implementation of user-defined processing rules. By imposing the constraint upon one or more target reference points 34, the definition subsystem 40 can ensure that the target reference points 34 are adequate for generation of the geometrical object 35. The definition subsystem 40 can impose any number, combination, or type of constraint. These constraints may include a requirement that a minimum number of target reference points 34 be identified, that a minimum number of target reference points 34 not be co-linear, or that target reference points 34 be within or outside of a specified area of an image.
  • The definition subsystem 40 generates the geometrical object 35 and coordinates the geometrical object 35 with the second image 32, which generation and coordination can be accomplished by any method known to a person skilled in the art, including by transferring or copying the geometrical object 35 to the second image 32. The definition subsystem 40 can provide a plurality of controls for positioning the geometrical object 35 within the second image 32. The controls may include any one of or any combination of a control for shifting the geometrical object 35 along a dimensional axis, a control for rotating the geometrical object 35, a control for changing a magnification of the geometrical object 35, a coarse position control, a fine position control, or any other control helpful for a positioning of the geometrical object 35 in relation to the second image 32.
  • The definition subsystem 40 can include a thumbnail image of the geometrical object 35. In some embodiments, the definition subsystem 40 can identify a plurality of positions of the geometrical object 35 in relation to the second image 32. Those positions may include a gross position and a fine position. The thumbnail image may be used to identify gross or fine positions of the geometrical object 35 in relation to the second image 32, and the plurality of positions can be identified in a substantially simultaneous manner. In some embodiments, the definition subsystem 40 adjusts a positioning of the geometrical object 35 within the second image 32.
  • The geometrical object 35 can be used to define template reference points 36. In one embodiment, vertices of the geometrical object 35 correspond with template reference points 36 when the geometrical object 35 is located within or about the second image 32. A positioning of the geometrical object 35 in relation to the second image 32 positions the vertices or other relevant points of the geometrical object 35 so as to define the template reference points 36. The definition subsystem 40 can provide for an accuracy metric related to at least one of the template reference points 36. The accuracy metric can identify a measurement of accuracy of a positioning of at least one of the template reference points 36 in relation to an estimated or predicted position of reference points within the second image 32.
  • The alignment system 20 can be applied to images involving two, three, or more dimensions. In some embodiments, an Affine transform heuristic is performed using various target reference points 34 and template reference points 36. The Affine transform can eliminate shift, rotational, and magnification differences between different images. In other embodiments, different types of relationship-related heuristics may be used by the definition subsystem 40 and/or the combination subsystem 42. Other examples of heuristics known in the art that relate to potential relationships between images and/or points include a linear conformal heuristic, a projective heuristic, a polynomial heuristic, a piecewise linear heuristic, and a locally weighted mean heuristic. The various relationship-related heuristics allow the system 20 to compare images and points that would otherwise not be in a format suitable for the establishment of a relationship between the various images and/or points. In other words, relationship-related heuristics such as the Affine transform heuristic are used to “compare apples to apples and oranges to oranges.”
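  • As an illustration of how such an affine relationship could be estimated (a minimal sketch, assuming NumPy and planar (x, y) points; the function names are not from the patent): given corresponding target reference points 34 and template reference points 36, a 2×3 affine matrix that removes shift, rotational, and magnification differences can be solved by least squares.

```python
import numpy as np

def estimate_affine(target_pts, template_pts):
    """Least-squares 2x3 affine mapping target reference points onto
    template reference points. Requires at least three non-co-linear
    point pairs, supplied as (n, 2) arrays of (x, y) coordinates."""
    src = np.asarray(target_pts, dtype=float)
    dst = np.asarray(template_pts, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])   # rows of [x, y, 1]
    params, _, _, _ = np.linalg.lstsq(A, dst, rcond=None)
    return params.T                                # [[a, b, tx], [c, d, ty]]

def apply_affine(M, pts):
    """Map (n, 2) points through the 2x3 affine matrix M."""
    pts = np.asarray(pts, dtype=float)
    return pts @ M[:, :2].T + M[:, 2]
```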
  • B. Combination Subsystem
  • The combination subsystem 42 is responsible for creating the aligned image 38 from the images and relationships maintained in the definition subsystem 40. The combination subsystem 42 includes the aligned image 38. The combination subsystem 42 is configured to generate the aligned image 38 from the first image 30, the second image 32, at least one of the target reference points 34, and at least one of the template reference points 36. The generation of the aligned image 38 by the combination subsystem 42 can be accomplished in a number of ways. The combination subsystem 42 may access the target reference points 34 and the template reference points 36 from the definition subsystem 40. The combination subsystem 42 can generate an alignment calculation or determine a relationship between at least one of the target reference points 34 and at least one of the template reference points 36. The combination subsystem 42 can use an alignment calculation or relationship to align the first image 30 and the second image 32. In another embodiment, the combination subsystem 42 uses locations of the target reference points 34 and the template reference points 36 to generate the aligned image 38.
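  • One plausible realization of this combination step, assuming the relationship is the affine matrix estimated above and using scipy.ndimage for resampling (the patent does not mandate any particular library or transform), is to resample the target image into the template image's coordinate space:

```python
import numpy as np
from scipy import ndimage

def generate_aligned_image(target_img, M, output_shape):
    """Resample the target image into template space. M is the 2x3
    affine mapping target (row, col) coordinates to template (row, col)
    coordinates; scipy expects the inverse (output -> input) mapping."""
    A, t = M[:, :2], M[:, 2]
    A_inv = np.linalg.inv(A)
    return ndimage.affine_transform(target_img, A_inv, offset=-A_inv @ t,
                                    output_shape=output_shape, order=1)
```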
  • C. Interface Subsystem
  • An interface subsystem 44 can be included in the system 20 and configured to allow the system 20 to interact with users 22. Inputs may be received by the system 20 from the user 22 through the interface subsystem 44, and users 22 may view the outputs of the system 20 through the interface subsystem 44. Any data, command, or other item understandable to the system 20 may be communicated to or from the interface subsystem 44. In a preferred embodiment, the user 22 can create processing rules through the interface subsystem 44 that can be applied to many different processing contexts on an ongoing basis. The interface subsystem 44 includes the interface 26 discussed above.
  • D. Detection Subsystem
  • A detection subsystem 45 can be configured to detect distortions, or other indications of a problem, relating to an aligned image 38. The detection subsystem 45 also allows a user 22 to check for distortions in an aligned image 38. Once a distortion has been detected, the detection subsystem 45 identifies the extent and nature of the distortion. The user 22 can use data provided by the detection subsystem 45 to check for a misalignment of a device or system that generated the first image 30 or the second image 32. The detection subsystem 45 can be configured by a user 22 through the use of the interface subsystem 44.
  • III. Input/Output View
  • FIG. 3 is a flow diagram illustrating an example of how the system 20 receives input and generates output. A computer program 50 residing on a computer-readable medium receives user input 46 through an input interface 48 and provides output 54 to the user 22 through an output interface 52. The computer program 50 includes the target reference points 34, the geometrical shape 35 (FIGS. 1-2), the first image 30, the second image 32, the template reference points 36, a third image, and the interface 26. As previously discussed, the target reference points 34 are associated with the first image 30. The computer program 50 can generate a geometrical object 35 or shape in a number of ways, including by connecting at least a subset of the target reference points 34. The geometrical shape 35 can be any number or combination of any shape, including but not limited to a segment, line, ellipse, arc, polygon, and triangle.
  • The input 46 may include a constraint imposed upon the target reference points 34 or the geometrical shape 35 by the computer program 50. By imposing a constraint upon the target reference points 34, the computer program 50 ensures that the target reference points 34 are adequate for generation of the geometrical shape 35. The system 20 can impose any number, combination, or type of constraint, including a requirement that a minimum number of target reference points 34 be identified, that a minimum number of target reference points 34 not be co-linear, or that target reference points 34 be within or outside of a specified area. In one embodiment, the computer program 50 requires at least four target reference points 34. The computer program 50 may identify a centroid of the geometrical shape 35.
  • The second image 32 can be configured to include the geometrical shape 35. The geometrical shape 35 is generated by the computer program 50 within the second image 32. The computer program 50 can accomplish a generation of the geometrical shape 35 within the second image 32 in a number of ways. For example, the computer program 50 may transfer or copy the geometrical shape 35 from one image to another.
  • The computer program 50 provides for identifying the template reference points 36 or locations of the template reference points 36 in relation to the second image 32. In one embodiment, the template reference points 36 can be identified by a positioning of the geometrical shape 35 in relation to the second image 32, which positioning is provided for by the computer program 50. The computer program 50 provides for a number of controls for positioning the geometrical shape 35 within the second image 32. The manipulation of the controls is a form of input 46. The controls may include any one of or any combination of a shift control, a rotation control, a magnification control, a coarse position control, a fine position control, or any other control helpful for a positioning of the geometrical shape 35 in relation to the second image 32. The controls can function in a number of modes, including a coarse mode and a fine mode. The computer program 50 provides for positioning the geometrical shape 35 by shifting the geometrical shape 35 along a dimensional axis, rotating the geometrical shape 35, and changing a magnification of the geometrical shape 35. A positioning of the geometrical shape 35 can include a coarse adjustment and a fine adjustment. The computer program 50 is capable of identifying a plurality of positions of the geometrical shape 35 in relation to the second image 32, including a gross position and a fine position of the geometrical shape 35 in relation to the second image 32. This identification can be performed in a substantially simultaneous manner. A thumbnail image of an area adjacent to a vertex of the geometrical shape 35 can be provided by the computer program 50.
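  • The shift, rotation, and magnification controls can be understood as simple transformations applied to the vertices of the geometrical shape 35, with the coarse and fine modes differing only in step size. A minimal sketch, assuming NumPy, (n, 2) vertex arrays, and illustrative step values:

```python
import numpy as np

COARSE_STEP, FINE_STEP = 10.0, 0.5   # illustrative step sizes, in pixels

def shift(verts, dx=0.0, dy=0.0):
    """Shift the shape along a dimensional axis."""
    return np.asarray(verts, dtype=float) + np.array([dx, dy])

def rotate(verts, angle_deg, center):
    """Rotate the shape about a center point (e.g., its centroid)."""
    theta = np.radians(angle_deg)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return (np.asarray(verts, dtype=float) - center) @ R.T + center

def magnify(verts, factor, center):
    """Change the magnification of the shape about a center point."""
    return (np.asarray(verts, dtype=float) - center) * factor + center
```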
  • The computer program 50 can provide for an accuracy metric related to at least one of the template reference points 36. The accuracy metric is a form of output 54. The accuracy metric can identify a measurement of accuracy of a positioning of at least one of the template reference points 36 in relation to an estimated or predicted position of reference points within the second image 32.
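  • One simple form such an accuracy metric could take (an assumption for illustration; the patent does not specify a formula) is the distance of each positioned template reference point 36 from its estimated or predicted position, together with a composite root-mean-square value:

```python
import numpy as np

def accuracy_metrics(positioned_pts, predicted_pts):
    """Per-point distances between positioned template reference points
    and their predicted positions, plus a composite RMS value."""
    d = np.linalg.norm(np.asarray(positioned_pts, dtype=float)
                       - np.asarray(predicted_pts, dtype=float), axis=1)
    return d, float(np.sqrt(np.mean(d ** 2)))
```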
  • The third image (the aligned image 38) is created from the first image 30, the second image 32, and a relationship between the target reference points 34 and the template reference points 36. The creation of the third image by the computer program 50 can be accomplished in a number of ways. The computer program 50 can generate an alignment calculation or determine a relationship between at least one of the target reference points 34 and at least one of the template reference points 36. The computer program 50 can use an alignment calculation or relationship to align the first image 30 and the second image 32. In another embodiment, the computer program 50 uses locations of the target reference points 34 and the template reference points 36 to generate the third image.
  • The computer program 50 can be configured to detect distortions of the third image. Once a distortion has been detected, the computer program 50 can identify the extent and nature of the distortion. A user 22 can use data generated by the computer program 50 to check for a misalignment of a device or system that generated the first image 30 or the second image 32. The output 54 of the computer program 50 can include various distortion metrics, misalignment metrics, and other forms of error metrics (collectively “accuracy metrics”).
  • The interface 26 of the computer program 50 is configured to receive input. The interface 26 can include an input interface 48 and an output interface 52. The input can include but is not limited to an instruction for defining the target reference points 34 and a command for positioning the geometrical shape 35 in relation to the second image 32. The computer program 50 can be configured to execute other operations disclosed herein or known to a person skilled in the art that are relevant to the present invention.
  • IV. Process-Flow Views
  • A. Example 1
  • FIG. 4 is a flow diagram illustrating an example of facilitating a positioning of images and generating an aligned image according to the positioned images. At 56, a relationship is defined between the various images to be aligned. At 57, the system 20 facilitates the positioning of the images in accordance with the previously defined relationship. For example, the template image 32 is positioned in relation to the target image 30 and the target image 30 is positioned in relation to the template image 32. At 58, the system 20 generates the aligned image 38 in accordance with the positioning performed at 57.
  • The system 20 can perform the three steps identified above in a wide variety of ways. For example, the positioning of the images can be facilitated by providing controls for the positioning of the template image in relation to the object image.
  • B. Example 2
  • FIG. 5 is a flow diagram illustrating an example of steps that an image alignment system 20 may execute to generate the aligned image 38.
  • At 60, the system 20 receives input for defining the target reference points 34 associated with a first image 30. Once the input 46 is received, or as it is received, the system 20 can then at 62 generate the geometrical object 35. The input 46 may include a command. The geometrical object 35 can be generated in a variety of ways, such as by connecting the target reference points 34. In a preferred embodiment, the system 20 may be configured to require that at least four target reference points 34 be connected in generating the geometrical object 35. The geometrical object 35 can take any form or shape that connects the target reference points 34, and each target reference point 34 is a vertex or other defining feature of the geometrical object 35. In one category of embodiments, the geometrical object 35 is a polygon.
  • At 64, the system 20 imposes and checks a constraint against the target reference points 34. If the target reference points 34 at 66 do not meet constraints imposed by the system 20, the system 20 at 68 prompts and waits for input changing or adding to definitions of the target reference points 34. Once additional reference point data is received, the system 20 again generates a geometrical object 35 at 62 and checks constraints against the target reference points 34 at 64. The system 20 may repeat these steps until the target reference points 34 satisfy the constraints. Any type of constraint can be imposed upon the target reference points 34, including requiring enough target reference points 34 to define a particular form of geometrical object 35. For example, the system 20 may require that at least four target reference points 34 are defined. If more than two target reference points 34 are co-linear, the system 20 may require that additional target reference points 34 be defined. The system 20 may use the geometrical object 35 to impose constraints upon the target reference points 34.
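  • The constraint check at 64 might look like the following sketch, which enforces a minimum point count and flags co-linear configurations via the rank of the centered coordinate matrix (the threshold, names, and minimum of four points are illustrative assumptions):

```python
import numpy as np

def check_constraints(points, min_points=4, tol=1e-6):
    """Return a list of constraint violations; an empty list means the
    target reference points are adequate for generating the object."""
    pts = np.asarray(points, dtype=float)
    problems = []
    if len(pts) < min_points:
        problems.append(f"need at least {min_points} points, got {len(pts)}")
    if len(pts) >= 2:
        # All points lie (nearly) on one line when the centered
        # coordinate matrix has rank < 2.
        centered = pts - pts.mean(axis=0)
        if np.linalg.matrix_rank(centered, tol=tol) < 2:
            problems.append("points are co-linear; define additional points")
    return problems
```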
  • Once the target reference points 34 are deemed at 66 to meet the constraints imposed by the system 20, the system 20 generates a geometrical object 35 within the first image 30 and regenerates the geometrical object 35 in the second image 32 space at 70. The geometrical object 35 can be generated in the second image 32 space in a number of ways, including transferring or copying the geometrical object 35 from the first image 30 to the second image 32. The geometrical object 35 can be represented by a set of connected points, a solid object, a semi-transparent object, a transparent object, or any other object that can be used to represent a geometrical object 35. Any such representation can be displayed by the system 20.
  • The system 20 identifies the template reference points 36 based on a placement of the geometrical object 35 in relation to the second image 32. In one embodiment, the template reference points 36 are identified by providing controls for positioning at 71 the geometrical object 35 in relation to the second image 32. A variety of controls can be made available, including one of or a combination of controls for shifting the geometrical object 35 up, down, left, or right in relation to the second image 32, rotating the geometrical object 35 in relation to the second image 32, changing the magnification or size of the geometrical object 35 in relation to the second image 32, moving the geometrical object 35 through multiple dimensions, switching between coarse and fine positioning of the geometrical object 35, or any other control that can be used to adjust the geometrical object 35 in relation to the second image 32. A command can be received as an input 46 allowing for the positioning of the geometrical object 35 by at least one of rotating the geometrical object 35, adjusting a magnification of the geometrical object 35, and shifting the geometrical object 35 along a dimensional axis. A command may allow for coarse and fine adjustments of the geometrical object 35.
  • In one embodiment, the system 20 provides a thumbnail image to the interface 26 for displaying an area proximate to at least one of the template reference points 36. The thumbnail image can be configured to allow for a substantially simultaneous display of fine positioning detail and coarse positioning detail, for example, by providing both a view of thumbnail images and a larger view of the geometrical object 35 in relation to the second image 32 to the user 22 for simultaneous viewing.
  • The system 20 may provide for an accuracy metric or an accuracy measurement detail, either for a composite of the template reference points 36 or individually for one or more of the template reference points 36. The system 20 may calculate any number of accuracy metrics. Accuracy metrics facilitate an optimal positioning of the geometrical object 35 within the second image 32. In one embodiment, the system 20 receives input commands from an interface or from the user 22 for positioning the geometrical object 35 at 72 in relation to the second image 32. In one category of embodiments, the system 20 adjusts a positioning of the geometrical object 35 at 74 within the second image 32. This adjustment can be based upon the accuracy metric. The system 20 may use a computer implemented process, such as a refinement heuristic, or any other image alignment tool for adjusting a placement of the geometrical object 35 in relation to the second image 32.
  • In other embodiments, the locations of the template reference points 36 can be determined in other ways. For example, the user 22 may define the template reference points 36 by pointing and clicking on locations within the second image 32, or the template reference points 36 can be predefined. Once locations of the template reference points 36 have been determined, the system 20 can determine a relationship at 78 between the target reference points 34 and the template reference points 36. Such a relationship can be a mathematical relationship and can be determined in any of a number of ways.
  • The system 20 at 80 generates the aligned image 38 from the first image 30 and the second image 32. In one embodiment, the generation occurs by the system 20 producing the aligned image 38 from the first image 30, the second image 32, and a relationship between at least one of the target reference points 34 in the first image 30 and at least one of the template reference points 36 in the second image 32. The system 20 can use an alignment calculation or a computer implemented combination heuristic to generate the aligned image 38. Some such heuristics are known in the prior art.
  • The system 20 can check at 82 for distortions of the aligned image 38. By checking for a distortion in the aligned image 38, the system 20 can detect a possible misalignment of a device used to generate the first image 30 or the second image 32. In one embodiment of the present invention, the system 20 checks for distortions in the aligned image 38 by comparing the locations of the vertices of the geometrical object 35 in relation to the second image 32 with defined or desired locations of the template reference points 36, which defined or desired points may be indicated by the user 22 of the system 20. This comparison of discrepancies produces an alignment status. The system 20 analyzes the degree and nature of any misalignment between locations of the vertices of the geometrical object 35 and defined locations of the template reference points 36 to reveal information about the degree and nature of any misalignment of an image generating device or system. Analyzing distortions allows the system 20 or the user 22 to analyze the alignment status of an image generation device.
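  • One way the distortion check at 82 could be realized (an illustrative assumption, not a method prescribed by the patent) is to fit an affine transform between the achieved vertex locations and the defined or desired template reference point locations and decompose it into shift, rotation, and magnification components; the residual then indicates distortion that no affine can explain:

```python
import numpy as np

def misalignment_report(achieved, desired):
    """Fit an affine from achieved vertex locations to desired template
    reference point locations and decompose the discrepancy."""
    achieved = np.asarray(achieved, dtype=float)
    desired = np.asarray(desired, dtype=float)
    A = np.hstack([achieved, np.ones((len(achieved), 1))])
    M, _, _, _ = np.linalg.lstsq(A, desired, rcond=None)
    M = M.T                                   # 2x3: [[a, b, tx], [c, d, ty]]
    lin, shift = M[:, :2], M[:, 2]
    rotation_deg = float(np.degrees(np.arctan2(lin[1, 0], lin[0, 0])))
    magnification = float(np.sqrt(abs(np.linalg.det(lin))))
    residual = desired - A @ M.T              # what the affine cannot explain
    rms = float(np.sqrt((residual ** 2).sum(axis=1).mean()))
    return {"shift": shift, "rotation_deg": rotation_deg,
            "magnification": magnification, "residual_rms": rms}
```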
  • C. Example 3
  • FIG. 6 is a flow diagram illustrating an example of steps that a user 22 of an image alignment system 20 can perform through an access device 24 and an interface 26 to generate an aligned image 38.
  • At 84, the user 22 selects or inputs images for alignment. The user 22 can provide images to the system 20 in any form recognizable by the system 20, including digital representations of images. The user 22 at 86 defines the target reference points 34 of a first image 30. The target reference points 34 can be defined by pointing and clicking on locations within the first image 30, by importing or selecting predefined target reference points 34, or by any other way understandable to the system 20.
  • The user 22 of the system 20 at 88 can initiate generation of the geometrical object 35. Generation of the geometrical object 35 can be initiated in a number of ways, including defining the target reference points 34, defining a set number of the target reference points 34 that satisfy constraints, submitting a specific instruction to the system 20 to generate the geometrical object 35, or any other means by which the user 22 may signal the system 20 to generate the geometrical object 35. The system 20 can select an appropriate type of geometrical object 35 to generate, or the user 22 may select a type of geometrical object 35 to be generated. In one embodiment, the system 20 generates a geometrical object 35 by connecting the target reference points 34.
  • The user 22 determines the centroid 104 of the geometrical object 35. In an alternative embodiment, the system 20 can determine and indicate the centroid 104 of the geometrical object 35. A determination of the centroid 104 is helpful for eliminating, or at least mitigating, errors that can occur in the image alignment system 20. By determining the centroid 104 of the geometrical object 35, the user 22 can verify that the centroid 104 is near the center of a critical area of the first image 30. If the system 20 or the user 22 of the system 20 determines that the centroid 104 of the geometrical object 35 is not as near to a critical area of the first image 30 as is desired, the user 22 can redefine the target reference points 34.
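  • The centroid 104 can be computed either as the mean of the vertices or, for a polygonal geometrical object 35, as the area-weighted centroid. The following sketch (assuming NumPy; names chosen for illustration) shows both:

```python
import numpy as np

def vertex_centroid(verts):
    """Centroid as the mean of the vertices; adequate when the
    reference points are roughly evenly distributed."""
    return np.asarray(verts, dtype=float).mean(axis=0)

def polygon_centroid(verts):
    """Area-weighted centroid of a simple (non-self-intersecting)
    polygon, via the shoelace formula."""
    v = np.asarray(verts, dtype=float)
    x, y = v[:, 0], v[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x * yn - xn * y
    area = cross.sum() / 2.0
    cx = ((x + xn) * cross).sum() / (6.0 * area)
    cy = ((y + yn) * cross).sum() / (6.0 * area)
    return np.array([cx, cy])
```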
  • The user 22 of the system 20 at 92 initiates a transfer or copy of the geometrical object 35 to the second image 32 space. The user 22 may signal the system 20 to transfer the geometrical object 35 in any way recognizable by the system 20. One such way is by sending an instruction to the system 20 for generation of the geometrical object 35 in the second image 32. Upon receipt of an appropriate signal, the system 20 transfers or copies the geometrical object 35 to the second image 32 space.
  • Once the geometrical object 35 is transferred to the second image 32 space, the user 22 positions the geometrical object 35 within the second image 32 space. The user 22 can use controls provided by the system 20 or that are a part of the system 20 to position the geometrical object 35. In one embodiment, the user 22 can shift the geometrical object 35 up, down, left, or right in relation to the second image 32, rotate the geometrical object 35 in relation to a second image 32, change the magnification or size of the geometrical object 35 in relation to the second image 32, move the geometrical object 35 through multiple dimensions, switch between coarse and fine positioning of the geometrical object 35, or execute any other control that can be used to adjust the geometrical object 35 in relation to the second image 32. The user 22 may use a thumbnail view or an accuracy metric to position the geometrical object 35.
  • The user 22 of the system 20 initiates alignment of the first image 30 and the second image 32. The user 22 may signal the system 20 to perform the alignment in any way recognizable by the system 20. One such way is to send an instruction for alignment to the system 20 via the access device 24 or the interface 26. Upon receipt of an alignment signal, the system 20 generates the aligned image 38 from the first image 30 and the second image 32.
  • At 98, the user 22 of the system 20 checks for distortion of the aligned image 38. In one embodiment, the user 22 determines and inputs to the system 20 desired locations of the template reference points 36 in relation to the second image 32. The system 20 can analyze the desired locations and post-alignment locations of the template reference points 36 to discover information about any distortions in an aligned image 38. The system 20 may reveal to the user 22 any information pertaining to a distortion analysis.
  • V. Examples of Reference Points and Geometric Objects
  • FIG. 7A is a diagram illustrating one example of target reference points 34 defined in relation to the first image 30. FIG. 7B is a diagram illustrating one example of a geometrical object 35 connecting the target reference points 34 associated with the first image 30. FIG. 7C is a diagram illustrating an example of a geometrical object 35 and an indication of the centroid 104 associated with that geometrical object 35. FIG. 7D is a diagram illustrating a transferred geometrical object 35 and various template reference points 36 positioned in relation to the second image 32.
  • VI. Alternative Embodiments
  • The above description is intended to be illustrative and not restrictive. Many embodiments and applications other than the examples provided would be apparent to those of skill in the art upon reading the above description. The scope of the invention should be determined not with reference to the above description, but with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. It is anticipated and intended that future developments will occur in image alignment systems and methods, and that the invention will be incorporated into such future embodiments.

Claims (17)

1-49. (canceled)
50. A method for aligning radiographic images, comprising:
facilitating a positioning of a template image in relation to an object image; and
generating an aligned image from a target image and said object image according to said positioning of said template image in relation to said object image.
51. The method of claim 50, wherein said positioning is facilitated by providing controls for said positioning of said template image in relation to said object image.
52. An apparatus for aligning images, comprising:
a computer program tangibly embodied on a computer-readable medium, said computer program including instructions for:
facilitating a positioning of a template image in relation to an object image; and
generating an aligned image from a target image and said object image according to said positioning of said template image in relation to said object image.
53. The apparatus of claim 52, wherein said positioning is facilitated by providing controls for said positioning of said template image in relation to said object image.
54. An apparatus for aligning images, comprising:
a computer program tangibly embodied on a computer-readable medium, said computer program including instructions for:
receiving an input for defining target reference points associated with a first image;
generating a geometrical object by connecting at least four said target reference points;
identifying template reference points based on a placement of said geometrical object in relation to a second image; and
producing an aligned image from said first image, said second image, and a relationship between at least one of said target reference points in said first image and at least one of said template reference points in said second image.
55. The apparatus of claim 54, said computer program further including instructions for imposing a constraint on said target reference points.
56. The apparatus of claim 54, wherein said input includes a command.
57. The apparatus of claim 56, wherein said command allows for at least one of shifting said geometrical object along a dimensional axis, rotating said geometrical object, and adjusting a magnification of said geometrical object.
58. The apparatus of claim 56, wherein said command allows for coarse adjustments and fine adjustments of said geometrical object.
59. The apparatus of claim 54, said computer program further including instructions for calculating a plurality of accuracy metrics.
60. The apparatus of claim 54, said computer program further including instructions for displaying said geometrical object as one of a solid object, a semi-transparent object, and a transparent object.
61. The apparatus of claim 54, said computer program further including instructions for displaying a thumbnail image of an area proximate to at least one of said template reference points.
62. The apparatus of claim 61, wherein said thumbnail image is configured to allow for a substantially simultaneous display of fine positioning detail and coarse positioning detail.
63. The apparatus of claim 54, said computer program further including instructions for providing an accuracy measurement detail for at least one of said template reference points or for a composite of said template reference points.
64. The apparatus of claim 63, said computer program further including instructions for analyzing an alignment status of an image generation device, wherein said alignment status is determined from a discrepancy between said accuracy measurement detail and defined locations for said template reference points.
65. The apparatus of claim 54, said computer program further including instructions for adjusting a positioning of said geometrical object within said second image.
US11/133,544 2003-07-30 2005-05-20 System and method for aligning images Abandoned US20050232513A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/133,544 US20050232513A1 (en) 2003-07-30 2005-05-20 System and method for aligning images

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/630,015 US6937751B2 (en) 2003-07-30 2003-07-30 System and method for aligning images
US11/133,544 US20050232513A1 (en) 2003-07-30 2005-05-20 System and method for aligning images

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/630,015 Continuation US6937751B2 (en) 2003-07-30 2003-07-30 System and method for aligning images

Publications (1)

Publication Number Publication Date
US20050232513A1 true US20050232513A1 (en) 2005-10-20

Family

ID=34103737

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/630,015 Expired - Lifetime US6937751B2 (en) 2003-07-30 2003-07-30 System and method for aligning images
US11/133,544 Abandoned US20050232513A1 (en) 2003-07-30 2005-05-20 System and method for aligning images

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/630,015 Expired - Lifetime US6937751B2 (en) 2003-07-30 2003-07-30 System and method for aligning images

Country Status (5)

Country Link
US (2) US6937751B2 (en)
EP (1) EP1649424A2 (en)
JP (1) JP2007501451A (en)
CA (1) CA2529760A1 (en)
WO (1) WO2005020150A2 (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8724865B2 (en) * 2001-11-07 2014-05-13 Medical Metrics, Inc. Method, computer software, and system for tracking, stabilizing, and reporting motion between vertebrae
US7298876B1 (en) * 2002-11-04 2007-11-20 R2 Technology, Inc. Method and apparatus for quality assurance and quality control in radiological equipment using automatic analysis tools
US20050219558A1 (en) * 2003-12-17 2005-10-06 Zhengyuan Wang Image registration using the perspective of the image rotation
US20050285947A1 (en) * 2004-06-21 2005-12-29 Grindstaff Gene A Real-time stabilization
US7819057B2 (en) * 2005-03-30 2010-10-26 Goss International Americas, Inc. Print unit having blanket cylinder throw-off bearer surfaces
CN101631679B (en) 2005-03-30 2011-12-07 高斯国际美洲公司 Cantilevered blanket cylinder lifting mechanism
CN101111379B (en) 2005-03-30 2011-12-07 高斯国际美洲公司 Web offset printing press with articulated tucker
CN101208201B (en) * 2005-03-30 2011-10-05 高斯国际美洲公司 Web offset printing press with autoplating
JP4829291B2 (en) * 2005-04-11 2011-12-07 ゴス インターナショナル アメリカス インコーポレイテッド Printing unit that enables automatic plating using a single motor drive
KR20080025055A (en) * 2005-06-08 2008-03-19 코닌클리케 필립스 일렉트로닉스 엔.브이. Point subselection for fast deformable point-based imaging
JP2009544446A (en) * 2006-07-28 2009-12-17 トモセラピー・インコーポレーテッド Method and apparatus for calibrating a radiation therapy system
US8872911B1 (en) * 2010-01-05 2014-10-28 Cognex Corporation Line scan calibration method and apparatus
EP2962309B1 (en) 2013-02-26 2022-02-16 Accuray, Inc. Electromagnetically actuated multi-leaf collimator
US11884311B2 (en) * 2016-08-05 2024-01-30 Transportation Ip Holdings, Llc Route inspection system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010036302A1 (en) * 1999-12-10 2001-11-01 Miller Michael I. Method and apparatus for cross modality image registration

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5696835A (en) * 1994-01-21 1997-12-09 Texas Instruments Incorporated Apparatus and method for aligning and measuring misregistration
US6351573B1 (en) * 1994-01-28 2002-02-26 Schneider Medical Technologies, Inc. Imaging device and method
US6563942B2 (en) * 1994-03-07 2003-05-13 Fuji Photo Film Co., Ltd. Method for adjusting positions of radiation images
US6219462B1 (en) * 1997-05-09 2001-04-17 Sarnoff Corporation Method and apparatus for performing global image alignment using any local match measure
US5926568A (en) * 1997-06-30 1999-07-20 The University Of North Carolina At Chapel Hill Image object matching using core analysis and deformable shape loci
US6754374B1 (en) * 1998-12-16 2004-06-22 Surgical Navigation Technologies, Inc. Method and apparatus for processing images with regions representing target objects
US6839454B1 (en) * 1999-09-30 2005-01-04 Biodiscovery, Inc. System and method for automatically identifying sub-grids in a microarray
US6528803B1 (en) * 2000-01-21 2003-03-04 Radiological Imaging Technology, Inc. Automated calibration adjustment for film dosimetry
US6351660B1 (en) * 2000-04-18 2002-02-26 Litton Systems, Inc. Enhanced visualization of in-vivo breast biopsy location for medical documentation
US20020048393A1 (en) * 2000-09-19 2002-04-25 Fuji Photo Film Co., Ltd. Method of registering images
US6675116B1 (en) * 2000-09-22 2004-01-06 Radiological Imaging Technology, Inc. Automated calibration for radiation dosimetry using fixed or moving beams and detectors

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7626597B2 (en) * 2004-06-23 2009-12-01 Fujifilm Corporation Image display method, apparatus and program
US20050285812A1 (en) * 2004-06-23 2005-12-29 Fuji Photo Film Co., Ltd. Image display method, apparatus and program
US8965884B2 (en) 2005-10-12 2015-02-24 Google Inc. Entity display priority in a distributed geographic information system
US7933897B2 (en) 2005-10-12 2011-04-26 Google Inc. Entity display priority in a distributed geographic information system
US8290942B2 (en) 2005-10-12 2012-10-16 Google Inc. Entity display priority in a distributed geographic information system
US9715530B2 (en) 2005-10-12 2017-07-25 Google Inc. Entity display priority in a distributed geographic information system
US9785648B2 (en) 2005-10-12 2017-10-10 Google Inc. Entity display priority in a distributed geographic information system
US9870409B2 (en) 2005-10-12 2018-01-16 Google Llc Entity display priority in a distributed geographic information system
US10592537B2 (en) 2005-10-12 2020-03-17 Google Llc Entity display priority in a distributed geographic information system
US11288292B2 (en) 2005-10-12 2022-03-29 Google Llc Entity display priority in a distributed geographic information system
US7616217B2 (en) * 2006-04-26 2009-11-10 Google Inc. Dynamic exploration of electronic maps
US20080059205A1 (en) * 2006-04-26 2008-03-06 Tal Dayan Dynamic Exploration of Electronic Maps
US20170147552A1 (en) * 2015-11-19 2017-05-25 Captricity, Inc. Aligning a data table with a reference table
US10417489B2 (en) * 2015-11-19 2019-09-17 Captricity, Inc. Aligning grid lines of a table in an image of a filled-out paper form with grid lines of a reference table in an image of a template of the filled-out paper form

Also Published As

Publication number Publication date
US6937751B2 (en) 2005-08-30
JP2007501451A (en) 2007-01-25
WO2005020150A3 (en) 2005-12-22
WO2005020150A2 (en) 2005-03-03
CA2529760A1 (en) 2005-03-03
US20050025386A1 (en) 2005-02-03
EP1649424A2 (en) 2006-04-26

Similar Documents

Publication Publication Date Title
US20050232513A1 (en) System and method for aligning images
US7496222B2 (en) Method to define the 3D oblique cross-section of anatomy at a specific angle and be able to easily modify multiple angles of display simultaneously
US6954212B2 (en) Three-dimensional computer modelling
US8831324B2 (en) Surgical method and workflow
US6975326B2 (en) Image processing apparatus
US6459821B1 (en) Simultaneous registration of multiple image fragments
US8917924B2 (en) Image processing apparatus, image processing method, and program
US20030210254A1 (en) Method, system and computer product for displaying axial images
US20080304621A1 (en) Display System For the Evaluation of Mammographies
Hering et al. Unsupervised learning for large motion thoracic CT follow-up registration
JP5461782B2 (en) Camera image simulator program
US11020189B2 (en) System and method for component positioning by registering a 3D patient model to an intra-operative image
US20160287345A1 (en) Surgical method and workflow
Al‐Kofahi et al. Algorithms for accurate 3D registration of neuronal images acquired by confocal scanning laser microscopy
Penzias et al. AutoStitcher: An automated program for efficient and robust reconstruction of digitized whole histological sections from tissue fragments
US20180374224A1 (en) Dynamic local registration system and method
Manda et al. Image stitching using RANSAC and Bayesian refinement
Perdigoto et al. Estimation of mirror shape and extrinsic parameters in axial non-central catadioptric systems
US11600030B2 (en) Transforming digital design objects utilizing dynamic magnetic guides
US11334997B2 (en) Hinge detection for orthopedic fixation
Hauenstein On Determination of Curvature in Range Images and Volumes
Lobonc Jr Human supervised tools for digital photogrammetric systems
JP2023104399A (en) Image processing device, image processing method, and program
KR20220006292A (en) Apparatus for Generating Learning Data and Driving Method Thereof, and Computer Readable Recording Medium
Van Essen et al. User’s Guide to SureFit Cortical Segmentation and Surface Reconstruction

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION