US20010046331A1 - Apparatus and method of increasing scanner resolution - Google Patents
- Publication number: US20010046331A1 (application Ser. No. 09/181,346)
- Authority: US (United States)
- Prior art keywords: representation, scanner, resolution, image, motion error
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/387—Composing, repositioning or otherwise geometrically modifying originals
- H04N1/393—Enlarging or reducing
- H04N1/3935—Enlarging or reducing with modification of image resolution, i.e. determining the values of picture elements at new relative positions
Definitions
- The invention relates to digital imaging. More specifically, the invention relates to optical scanners and methods of increasing scanner resolution.
- Scanners are typically advertised as having two resolutions: a hardware resolution and an enhanced or interpolated resolution.
- The hardware resolution provides a measure of the imaging ability of the scanner.
- A typical resolution for a low-end scanner might be 300 dots per inch (“dpi”).
- The hardware resolution of a scanner is dependent, in part, upon the quality of the scanner's sensor array and imaging optics.
- Ideally, the sensor array and optics would image a point source as a point of light. In reality, however, the image is smeared.
- Factors contributing to the smearing of the image include the geometry of the sensor's receptive field, optical defocus, and chromatic aberration effects in which different wavelengths of light from a single point source do not coincide on the sensor array's surface. Scanners including higher quality sensor arrays and imaging optics will cause less smearing than scanners including lower quality sensor arrays and imaging optics.
- The enhanced or interpolated resolution, in contrast, is more a function of software.
- Software-based techniques such as bilinear interpolation and pixel replication are typically used to enhance the hardware resolution. For example, a hardware resolution of 300 dpi might be enhanced to a resolution of 4800 dpi. Enhancing or interpolating the hardware resolution allows the size of the scanned image to be enlarged.
- However, enhancing the hardware resolution does not increase the real detail that is collected by the scanner. That is, enhancing the hardware resolution does not provide real information about the image.
- An exemplary bilinear interpolation algorithm might interpolate a pixel by finding four neighboring pixels, multiplying the color intensities of the four neighboring pixels by weighting coefficients, and adding the results to obtain the color intensity value of the interpolated pixel.
- Thus, resolution enhancement is merely estimated from the information provided by the scanner; it does not increase the amount of real information obtained by the scanner.
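The four-neighbor weighting described above can be sketched as follows (an illustrative toy, not code from the patent; the function name and the proximity-based weights are my own assumptions):

```python
# Sketch of bilinear interpolation: a new pixel is estimated from its four
# neighbors, each weighted by how close the new pixel lies to it.

def bilinear(p00, p01, p10, p11, fx, fy):
    """Interpolate an intensity at fractional offsets (fx, fy) in [0, 1]
    between four neighboring pixel intensities."""
    w00 = (1 - fx) * (1 - fy)   # weight of top-left neighbor
    w01 = fx * (1 - fy)         # top-right
    w10 = (1 - fx) * fy         # bottom-left
    w11 = fx * fy               # bottom-right
    return w00 * p00 + w01 * p01 + w10 * p10 + w11 * p11

# Midway between four pixels, the estimate is simply their average:
print(bilinear(10, 20, 30, 40, 0.5, 0.5))  # 25.0
```

Note that the output is always a blend of existing samples, which is why interpolation adds no new information about the scanned page.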
- Hardware resolution of the scanner may be increased by increasing the number of detector elements in the sensor array, using higher quality imaging optics, etc.
- For instance, the density of detector elements could be increased from 300 dpi to 600 dpi, or aspherical imaging lenses could be used instead of spherical imaging lenses.
- However, increasing the number of detector elements and improving the quality of the imaging optics will substantially increase the cost of manufacturing the scanner. The market for scanners is fiercely competitive, and there is a need to increase the hardware resolution of a scanner without substantially increasing the cost of manufacture.
- The present invention offers an approach for increasing scanner resolution without substantially increasing the cost of manufacture.
- A first representation of an image is generated during a first scan of the image, and a second representation of the image is generated during a second scan of the image.
- A motion error is intentionally induced in one of the scans.
- Using a super resolution technique, the first and second representations are processed to generate a third representation of the image.
- The third representation of the image has a higher resolution than either the first representation or the second representation.
- Other aspects and advantages of the present invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrating by way of example the principles of the invention.
- FIG. 1 is an illustration of an imaging model of a scanner;
- FIG. 1a is an illustration of two low-resolution representations of an image and a composite representation formed from the two low-resolution representations;
- FIG. 2 is an illustration of a scanner according to the present invention;
- FIG. 2a is an illustration of a sensor array of the scanner;
- FIG. 3 is an illustration of light rays passing through a deflection plate of the scanner;
- FIGS. 4 and 5 illustrate a method of generating a high-resolution representation from two lower-resolution representations; and
- FIG. 6 is a flowchart of a method of performing a scanning operation according to the present invention.
- As shown in the drawings for purposes of illustration, the present invention is embodied in an optical scanner.
- The invention improves the hardware resolution of the scanner without increasing the number of sensors in the sensor array or the quality of the imaging optics. Instead, the hardware resolution is improved by performing multiple scans of an image and using a super resolution technique to process the outputs of the multiple scans into a high-resolution image.
- Thus, the invention offers a low-cost approach for increasing the hardware resolution of a scanner.
- The scanner according to the present invention will be described below in connection with FIGS. 2 to 6. First, however, the general principle behind generating a high-resolution image from two or more low-resolution images will be described.
- Referring to FIGS. 1 and 1a, a scanner performs first and second scans of a continuous-domain image f(x,y,t) to produce first and second representations R1 and R2 of the image f(x,y,t).
- The first two variables x and y of the continuous-domain image f represent space, and the third variable t represents time.
- In each scan, blurring (modeled by a first block 202) of the continuous image f(x,y,t) occurs due to optical lens blurring, sensor integration area blurring, etc.
- There is also a motion error or warping intentionally induced between successive scans of the continuous image (modeled by a second block 204).
- The continuous-domain image f(x,y,t) is sampled to produce a low-resolution digital image g(m1, m2, k), where k is a scan index (modeled by a third block 206).
- Thus, the first representation R1 includes a first sequence of sample values, and the second representation R2 includes a second sequence of sample values.
- The image is intentionally shifted relative to the scanner during the second scan to create the motion error. Consequently, the first value g(1,1,1) in the first representation R1 corresponds to a different part of the image than the first value g(1,1,2) in the second representation R2.
- Thus, each scan yields different information about the image.
- The sample values of the first and second image representations R1 and R2 are combined into a more densely sampled, composite representation R3.
- A high-resolution representation R4 of the continuous-domain image f(x,y,t) can then be obtained from the composite representation R3.
- A super resolution technique can be used to generate the high-resolution representation R4 of the image from the composite representation R3.
- The relationship of the composite representation R3 to a continuous high-resolution representation R4 might be as follows:
- R3 = D(W(PSF * R4))
- where D is the downsampling, W is the intentional shift or motion warp, PSF is the Point Spread Function of the scanner (which takes into account sensor integration blur, optical lens blur, etc.), and the operator “*” represents a convolution operation. There may be a single PSF corresponding to the scanner, a PSF corresponding to each detector element in the scanner, etc.
- Given the Point Spread Function of the scanner and the amount of intentionally induced motion error, the composite representation R3 can be deconvolved into the high-resolution representation R4.
- In this manner, the scanner can generate a higher resolution representation R4 of the image from lower resolution representations R1 and R2.
- Thus, hardware resolution of the scanner can be improved without increasing the cost of the scanner's sensor array or imaging optics.
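The imaging model R3 = D(W(PSF * R4)) can be simulated in one dimension as follows (my own toy sketch, not code from the patent; the signal, PSF, and a shift of one high-resolution pixel, i.e. half a low-resolution pixel, are assumed values):

```python
import numpy as np

# Forward model sketch: the high-resolution signal R4 is blurred by the
# PSF, warped by the intentional motion error, and downsampled.

def forward(r4, psf, shift, factor):
    blurred = np.convolve(r4, psf, mode="same")  # PSF * R4 (blur)
    warped = np.roll(blurred, shift)             # W: intentional shift
    return warped[::factor]                      # D: downsampling

r4 = np.sin(np.linspace(0.0, 3.0, 16))     # stand-in high-resolution signal
psf = np.array([0.25, 0.5, 0.25])          # assumed point spread function
g1 = forward(r4, psf, shift=0, factor=2)   # first scan
g2 = forward(r4, psf, shift=-1, factor=2)  # second scan: one high-res pixel
                                           # (half a low-res pixel) of error
# Interleaving the two scans yields the densely sampled composite R3.
# In this idealized case R3 equals the blurred high-resolution signal,
# which could then be deconvolved using the known PSF:
r3 = np.empty(16)
r3[0::2], r3[1::2] = g1, g2
```

The point of the sketch is that each downsampled scan keeps a different half of the blurred samples, so the composite recovers the full sampling density that either scan alone throws away.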
- FIG. 2 shows a flatbed scanner 10 according to the present invention.
- The flatbed scanner 10 includes a scan head 12, a dc motor drive assembly 14, an electronics assembly 16, a glass pane 18 and a housing (not shown).
- The glass pane 18 and the housing form an enclosure inside which the scan head 12, the motor drive assembly 14 and the electronics assembly 16 are mounted.
- The glass pane 18 also defines a scanning plane.
- The dc motor drive assembly 14 includes a pair of slider rods 20 for guiding the scan head 12 in a linear direction along the glass pane 18, and a dc motor 22 and transmission for moving the scan head 12 along the slider rods 20.
- The dc motor 22 can be a brush or brushless dc motor.
- The transmission is typical of flatbed scanners: a motor shaft 24 turns a worm gear 26, which drives a gear 28, which turns a timing belt 30.
- The timing belt 30 moves the scan head 12 in a first, scanning direction, which is indicated by an arrow D.
- The sensor array 34 moves with the scan head 12 in the scanning direction D.
- Referring additionally to FIG. 2a, the scan head 12 includes imaging optics 32 and a sensor array 34.
- The sensor array 34 might be a CCD array having a resolution of 300 dpi.
- The CCD array of the scan head 12 might include three color sensors at each pixel location or CCD cell. The cells extend in a second direction indicated by the arrow A. A linear response of the sensor array 34 is preferred but not required.
- The scan head 12 further includes a device that induces intentional motion error between successive scans.
- The motion error may be a global translational or affine motion between the successive scans.
- A number of different approaches may be used to induce the motion error. One such approach is shown in FIGS. 2 and 3.
- A light-transmissible plate 36 is placed in an optical path between the glass pane 18 and the sensor array 34.
- The plate 36 is movable between a first position and a second position.
- During a first scan, the plate 36 is moved to the first position by a second motor 38 and linkage 40.
- Light reflected at the scanning plane is transmitted by the plate 36 without a change in the direction of the light beam (i.e., the light is normal to the light-receiving surface of the plate 36), and the transmitted light impinges upon a light-receiving surface of the sensor array 34.
- The path of the light transmitted by the plate 36 during the first scan is indicated by the solid line P0.
- During a second scan, the plate 36 is moved or tilted to the second position.
- When moved to the second position, the plate 36 changes the direction of the light path, as indicated by the dashed line P1.
- Thus, the sensor array 34 “sees” different images during the first and second scans.
- The plate 36 may be moved to produce a motion error Δ having a fractional component between 0.35 and 0.65 pixels (e.g., 0.40 pixels, 1.40 pixels).
- A motion error Δ of 0.5 pixels would improve resolution by a factor of two.
- The plate 36 may be made of an optical quality glass.
- The material for the plate 36 is homogeneous so that direction changes are relatively uniform across the plate 36. It would be undesirable for the beam P0 to be deflected by one degree at one portion of the plate 36 and by two degrees at another portion of the plate 36.
- Dimensions of the plate 36 could include, for example, a thickness of one millimeter, a length of one or two centimeters, and a width of one centimeter.
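The tilt needed for a half-pixel deflection can be sanity-checked with the standard plane-parallel-plate refraction formula (a back-of-envelope calculation of my own; the 7.2-degree tilt and refractive index n = 1.5 are assumed values, not figures from the patent):

```python
import math

# For a plate of thickness t and refractive index n tilted by angle theta,
# the transmitted ray is laterally displaced by
#   d = t * sin(theta) * (1 - cos(theta) / sqrt(n**2 - sin(theta)**2))

def lateral_shift(t, n, theta):
    s = math.sin(theta)
    return t * s * (1.0 - math.cos(theta) / math.sqrt(n * n - s * s))

# A 1 mm glass plate (n ~ 1.5) tilted by about 7 degrees displaces the
# beam by roughly 42 micrometers -- about half a pixel at a 300 dpi
# sampling pitch (~85 micrometers).
d = lateral_shift(t=1e-3, n=1.5, theta=math.radians(7.2))
print(round(d * 1e6, 1))  # ~42 (micrometers)
```

This suggests only modest tilts of a millimeter-thick plate are needed to produce the subpixel motion errors described above.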
- The electronics assembly 16 includes a controller 42 for controlling the operation of the scanner 10.
- Among its functions, the controller 42 controls the dc motor 22 to move the scan head 12 along the slider rods 20.
- The controller 42 also controls the second motor 38 and linkage 40 to move the plate 36 between the first and second positions.
- The controller 42 can control the second motor 38 and linkage 40 to cause a precise motion error.
- In the alternative, the motion error need not be controlled precisely.
- Instead, the plate 36 may be moved to cause an approximate subpixel error, and the precise motion error Δ could then be determined by a motion estimation algorithm in software (e.g., on a host computer) or hardware (e.g., the controller 42).
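The motion-estimation step could, for example, recover the shift by cross-correlating the two scans. Below is a minimal one-dimensional, whole-pixel sketch of my own (the patent does not specify an algorithm; a real implementation would interpolate the correlation peak to subpixel accuracy):

```python
import numpy as np

# Toy motion estimation by FFT-based cross-correlation, assuming the
# motion error is a pure (circular) translation.

def estimate_shift(a, b):
    """Return k such that b is approximately np.roll(a, k)."""
    a = a - a.mean()   # remove the dc component so the peak stands out
    b = b - b.mean()
    corr = np.fft.ifft(np.fft.fft(b) * np.conj(np.fft.fft(a))).real
    return int(np.argmax(corr))   # peak index = displacement

a = np.random.default_rng(0).random(64)   # reference scan line
b = np.roll(a, 3)                         # second scan, shifted 3 samples
print(estimate_shift(a, b))  # 3
```

Estimating the shift after the fact relaxes the mechanical tolerance on the plate 36, since the reconstruction only needs to know Δ, not to command it exactly.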
- To scan a sheet S of paper, the sheet S is positioned atop the glass pane 18, and a host (e.g., a personal computer) commands the electronics assembly 16 to scan the sheet S.
- Under control of the controller 42, the second motor 38 and linkage 40 move the plate 36 to the first position.
- The dc motor drive assembly 14 moves the scan head 12 along the slider rods 20 to an initial position (e.g., a wall) and then starts moving the scan head 12 along the slider rods 20.
- A fluorescent bulb 48 in the scan head 12 is turned on to illuminate a portion of the sheet S with white light, and the imaging optics 32 focuses an image of the illuminated portion onto the sensor array 34.
- The sensor array 34 is exposed to, and integrates, a line of pixels at a time, and the electronics assembly 16 processes the signals generated by the sensor array 34 and buffers a first representation of the scanned image in memory 44.
- After the first scan is completed, the controller 42 commands the dc motor assembly 14 to return the scan head 12 to the initial position, and the controller 42 commands the second motor 38 and linkage 40 to move the plate 36 to the second position.
- A second scan is then performed. During the second scan, the plate 36 changes the direction of the light reflected by the sheet S and thereby causes the sensor array 34 to see a different image of the sheet S.
- A second representation of the scanned image is buffered in the memory 44.
- After the second scan, the first and second representations are processed into a higher resolution representation. Processing can be performed by an on-board processor 46. In the alternative, the low-resolution representations may be sent to the host, which would generate the high-resolution image.
- FIG. 2 happens to show a scanner 10 that performs the processing.
- The memory 44 also stores the Point Spread Function of the scanner 10 and a program that instructs the processor 46 to generate the composite representation from the two low-resolution representations, and to use the Point Spread Function and a super resolution algorithm to process the composite representation into the high-resolution representation.
- The high-resolution representation is then sent to the host.
- Exemplary super resolution algorithms include the Projections Onto Convex Sets-based algorithm (“POCS”) and the Maximum A Posteriori Probability estimation-based algorithm (“MAP”).
- FIG. 4 shows that the composite representation R 3 is formed by interleaving the two low-resolution representations R 1 and R 2 .
- For example, a 600×600 dpi composite image could be formed from two low-resolution 300×600 dpi images.
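Forming the composite by interleaving might look like this (a sketch under my own axis convention, in which the second scan's rows fall halfway between the first scan's rows):

```python
import numpy as np

# Interleave two low-resolution scans, offset by half a pixel in the scan
# direction, into a composite with twice the row density.

def interleave(r1, r2):
    rows, cols = r1.shape
    r3 = np.empty((2 * rows, cols), dtype=r1.dtype)
    r3[0::2] = r1   # even rows from the first scan
    r3[1::2] = r2   # odd rows from the shifted second scan
    return r3

r1 = np.zeros((3, 4))   # stand-in first scan
r2 = np.ones((3, 4))    # stand-in second scan
print(interleave(r1, r2)[:, 0])  # [0. 1. 0. 1. 0. 1.]
```

Interleaving by itself only raises the sampling density; the filtering step described next is what turns the composite into the high-resolution representation.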
- FIG. 5 shows that the composite representation R3 is then convolved with an N×M filter support FS as follows to produce the high-resolution representation R4:
- f(n1, n2) = Σ(i,j)∈FS w(i, j) · s(n1+i, n2+j)
- where f(n1, n2) is a pixel in the high-resolution representation R4; s(n1+i, n2+j) is a pixel in the composite representation R3; and w(i, j) is a weighting coefficient.
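The weighted sum over the filter support can be sketched as follows (the uniform 3×3 weights are placeholders of my own; the actual coefficients come from the algorithm referenced below):

```python
import numpy as np

# Each high-resolution pixel is a weighted sum of composite pixels over an
# N x M filter support.

def filter_pixel(s, w, n1, n2):
    N, M = w.shape
    return sum(w[i, j] * s[n1 + i, n2 + j]
               for i in range(N) for j in range(M))

s = np.arange(25, dtype=float).reshape(5, 5)   # toy composite R3
w = np.full((3, 3), 1.0 / 9.0)                 # placeholder averaging weights
print(round(filter_pixel(s, w, 1, 1), 6))  # 12.0 (mean of the 3x3 block)
```

With properly derived weights the same structure sharpens rather than averages, approximating the deconvolution by the scanner's Point Spread Function.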
- The weighting coefficients w may be determined by the algorithm disclosed in U.S. Ser. No. ______ filed on ______ (Attorney Docket No. 10960578-1), assigned to the assignee of the present invention, and incorporated herein by reference.
- In the alternative, the weighting coefficients w could be determined via a POCS technique or a frequency domain technique.
- Such techniques are disclosed in T. S. Huang and R. Y. Tsai, “Multiple frame image restoration and registration,” Advances in Computer Vision and Image Processing, vol. 1, 1984, pp. 317-339; R. R. Schultz and R. L. Stevenson, “Improved definition video frame enhancement,” IEEE Int. Conference on Acoustics, Speech and Signal Processing, vol. IV, May 1995, pp. 2169-2172; and A. M. Tekalp, M. K. Ozkan and M. I. Sezan, “High resolution image reconstruction from lower resolution image sequences and space varying image restoration,” IEEE Int. Conference on Acoustics, Speech and Signal Processing, vol. III, March 1992, pp. 169-172.
- The Point Spread Function of the scanner can be measured directly.
- For example, interferometrically generated sinusoidal imaging targets could be used to measure the Point Spread Function. See, for example, J. Greivenkamp and A. Lowman, “Modulation transfer function measurement of sparse-array sensors using a self-calibrating fringe pattern,” Applied Optics, Vol. 33, No. 22, pp. 5029-5036, August 1994; and S. Johansson et al., “Experimental MTF measurements of CCD using an interferometrically generated test pattern,” Journal of Imaging Science, Vol. 35, No. 5, pp. 320-325, September-October 1991.
- In the alternative, the Point Spread Function could be estimated from Line Spread Functions (“LSF”) of the sensor array 34.
- Estimating the PSF from the LSF can be done in any number of ways. See, for example, A. Bewsher and I. Powell, “Optical transfer function measurement facility for aerial survey cameras,” Applied Optics, Vol. 33, No. 28, pp. 6573-6577, October 1994.
- The Line Spread Function possesses a number of useful properties from a measurement standpoint. Fine lines on an image target can closely approximate an impulsive line, while providing sufficient illumination to achieve meaningful sensor responses.
- The Line Spread Function is a one-dimensional function; therefore, accurate, densely sampled Line Spread Function measurements would appear to be much more feasible from a single exposure to some appropriate imaging target. Additionally, the simple geometry of a linear stimulus could be exploited to recover precise relative displacements between the line and the sensor array 34.
- FIG. 6 shows a generalized method of using a scanner to generate a high-resolution representation of an image.
- A first scan of the image is performed to produce a first representation of the image (block 102).
- The scanner is then configured to cause a motion error in the next scan (block 104).
- The motion error could be induced by moving the plate 36.
- In the alternative, the motion error could be induced by offsetting the initial position of the scan. For example, the first scan could be started at the wall, and the second scan could be started at a distance of, say, 0.5 pixels from the wall.
- A second scan of the image is performed to produce a second representation of the image (block 106).
- Thus, a motion error exists between the first and second representations.
- A composite representation is then formed from the first and second representations of the image (block 108).
- A super resolution technique is then used to generate a high-resolution representation of the image from the composite representation (block 110).
- For example, a linear filtering method may be used to generate the high-resolution image from the composite representation.
- The present invention is not limited to the specific embodiments described and illustrated above.
- For example, the present invention is not limited to a particular super resolution algorithm.
- Other algorithms that process the composite representation into a high-resolution representation could be used.
- Moreover, the composite representation could be formed from more than two low-resolution representations. Step size of the first motor 22 would be adjusted to achieve the proper motion error in the scanning direction D for each scan. Step size of the second motor 38 would be adjusted to achieve the proper motion error in the second direction A. Additional representations having additional motion error would increase the density of the composite representation and thereby increase the resolution of the high-resolution representation in the scanning direction D.
- The motion error may be non-translational. If a non-translational motion error is intentionally induced, a super resolution algorithm based on POCS could be used. Moreover, the motion error may be induced in both directions A and D or only in a single direction A or D.
- For example, the plate 36 may be used to induce motion error in the scanning and second directions D and A; or the plate 36 may be used to induce motion error in the second direction A and the dc motor drive assembly 14 may be used to induce motion error in the scanning direction D; or the plate 36 may be used to induce motion error only in the scanning direction D or only in the second direction A; or the dc motor drive assembly 14 may be used to induce motion error only in the scanning direction D.
- The processing could be performed on-board the scanner 10.
- In the alternative, the processing could be performed off-board the scanner by an external device such as the host computer.
- The location of the plate 36 is not limited to the optical path between the glass pane 18 and the sensor array 34.
- For example, the plate 36 could be placed along the optical path between the light source 48 and the glass pane 18.
- In the alternative, the motion error could be induced by offsetting the initial position of the scan.
- Optical elements other than a flat, transmissive plate 36 could be used.
- For example, prisms or diffraction gratings could be used instead.
Abstract
Description
- The invention relates to digital imaging. More specifically, the invention relates to optical scanners and methods of increasing scanner resolution.
- Scanners are typically advertised as having two resolutions: a hardware resolution and an enhanced or interpolated resolution. The hardware resolution provides a measure of the imaging ability of the scanner. A typical resolution for a low-end scanner might be 300 dots per inch (“dpi”).
- The hardware resolution of a scanner is dependent, in part, upon quality of the scanner's sensor array and imaging optics. Ideally, the sensor array and optics would image a point source as a point of light. In reality, however, the image is smeared. Factors contributing to the smearing of the image include the geometry of the sensor's receptive field, optical defocus and chromatic aberration effects in which different wavelengths of light from the single point source do not coincide on the sensor array's surface. Scanners including higher quality sensor arrays and imaging optics will cause less smearing than scanners including lower quality sensor arrays and imaging optics.
- The enhanced or interpolated resolution, in contrast, is more a function of software. Software-based techniques such as bilinear interpolation and pixel replication are typically used to enhance the hardware resolution. For example, a hardware resolution of 300 dpi might be enhanced to a resolution of 4800 dpi. Enhancing or interpolating the hardware resolution allows the size of the scanned image to be enlarged.
- However, enhancing the hardware resolution does not increase the real detail that is collected by the scanner. That is, enhancing the hardware resolution does not provide real information about the image. An exemplary bilinear interpolation algorithm might interpolate a pixel by finding four neighboring pixels, multiplying color intensities of the four neighboring pixels by weighting coefficients, and adding the results to obtain the color intensity value of the interpolated pixel. Thus, resolution enhancement is merely estimated from the information provided by the scanner, it does not increase the amount of real information obtained by the scanner.
- Hardware resolution of the scanner may be increased by increasing the number of detector elements in the sensor array, using higher quality imaging optics, etc. For instance, the density of detector elements could be increased from 300 dpi to 600 dpi, or aspherical imaging lenses could be used instead of spherical imaging lenses.
- However, increasing the number of detector elements and improving the quality of the imaging optics will substantially increase the cost of manufacturing the scanner. The market for scanners is fiercely competitive. Increasing hardware resolution by increasing the density of detector elements or improving the quality of the optics is something that manufacturers of low-end scanners cannot afford.
- There is a need to increase the hardware resolution of a scanner without substantially increasing the cost of manufacturing the scanner.
- The present invention offers an approach for increasing scanner resolution without substantially increasing the cost of manufacture. A first representation of an image is generated during a first scan of the image is performed, and a second representation of the image is generated during a second scan of the image. A motion error is intentionally induced in one of the scans. Using a super resolution technique, the first and second representations are processed to generate a third representation of the image. The third representation of the image has a higher resolution than either then first representation or the second representation.
- Other aspects and advantages of the present invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrating by way of example the principles of the invention.
- FIG. 1 is an illustration of an imaging model of a scanner;
- FIG. 1a is an illustration of two low-resolution representations of an image and a composite representation formed from the two low resolution representations;
- FIG. 2 is an illustration of a scanner according to the present invention;
- FIG. 2a is an illustration of a sensor array of the scanner;
- FIG. 3 is an illustration of light rays passing through a deflection plate of the scanner;
- FIGS. 4 and 5 illustrate a method of generating a high resolution representation from two lower resolution representations; and
- FIG. 6 is a flowchart of a method of performing a scanning operation according to the present invention.
- As shown in the drawings for purposes of illustration, the present invention is embodied in an optical scanner. The invention improves the hardware resolution of the scanner without increasing the number of sensors in the sensor array or without increasing the quality of the imaging optics. Instead, the hardware resolution is improved by performing multiple scans of an image and using a super resolution technique to process outputs of the multiple scans into a high-resolution image. Thus, the invention offers a low cost approach for increasing hardware resolution of a scanner.
- The scanner according to the present invention will be described below in connection with FIGS.2 to 6. First, however, the general principle behind generating a high-resolution image from two or more low-resolution images will be described.
- Referring to FIGS. 1 and 1a, a scanner performs first and second scans of a continuous-domain image f(x,y,t) to produce first and second representations R1 and R2 of the image f(x,y,t). The first two variables x and y of the continuous-domain image f represent space, and the third variable t represents time. In each scan, blurring (modeled by a first block 202) of the continuous image f(x,y,t) occurs due to optical lens blurring, sensor integration area blurring, etc. There is also a motion error or warping intentionally induced between successive scans of the continuous (modeled by a second block 204). The continuous-domain image f(x,y,t) is sampled to produce a low-resolution digital image g(m1, m2, k) where k is a scan index (modeled by a third block 206). Thus, the first representation R1 includes a first sequence of sample values, and the second representation R2 includes a second sequence of sample values. The image is intentionally shifted relative to the scanner during the second scan to create the motion error. Consequently, the first value g(1,1,1) in the first representation R1 corresponds to a different part of the image than the first value g(1,1,2) in the second representation R2. Thus, each scan yields different information about the image.
- The sample values of the first and second image representations R1 and R2 are combined into a more densely sampled, composite representation R3. A high-resolution representation R4 of the continuous-domain image f(x,y,t) can then be obtained from composite image R3.
- A super resolution technique can then be used to generate the high-resolution representation R4 of the image from the composite representation R3. In the frequency domain, the relationship of the composite representation R3 to a continuous high-resolution representation R4 might be as follows:
- R 3=D(W(PSF*R 4 ))
- where D is the downsampling; W is the intentional shift or motion warp, PSF is the Point Spread Function of the scanner (which takes into account sensor integration blur, optical lens blur, etc.), and the operator “*” represents a convolution operation. There may be a single PSF corresponding to the scanner, a PSF corresponding to each detector element in the scanner, etc.
- Given the Point Spread Function of the scanner and the amount of intentionally induced motion error, the composite representation R3 can be deconvolved into the high-resolution representation R4. In this manner, the scanner can generate a higher resolution representation R4 of the image from lower resolution representations R1 and R2. Thus, hardware resolution of the scanner can be improved without increasing the cost of the scanner's sensor array or imaging optics.
- Reference is now made to FIG. 2, which shows a
flatbed scanner 10 according to the present invention. Theflatbed scanner 10 includes ascan head 12, a dcmotor drive assembly 14,anelectronics assembly 16, aglass pane 18 and a housing (not shown). Theglass pane 18 and the housing form an enclosure inside which thescan head 12, themotor drive assembly 14 and theelectronics assembly 16 are mounted. Theglass plane 18 also defines a scanning plane. - The dc
motor drive assembly 14 includes a pair ofslider rods 20 for guiding thescan head 12 in a linear direction along theglass pane 18, and adc motor 22 and transmission for moving thescan head 12 along theslider rods 20. Thedc motor 22 can be a brush or brushless dc motor. The transmission is typical to flatbed scanners: a motor shaft 24 turns aworm gear 26, which drives agear 28, which turns atiming belt 30. Thetiming belt 30 moves thescan head 12 in a first, scanning direction, which is indicated by an arrow D. Thesensor array 34 moves with thescan head 12 in the scanning direction D. - Referring additionally to FIG. 2a, the
scan head 12 includesimaging optics 32 and asensor array 34. Thesensor array 34 might be a CCD array having a resolution of 300 dpi. The CCD array of thescan head 12 might include three color sensors at each pixel location or CCD cell. The cells extend in a second direction indicated by the arrow A. A linear response of thesensor array 34 is preferred but not required. - The
scan head 12 further includes a device that induces intentional motion error between successive scans. The motion error may be a global translational or affine motion between the successive scans. - A number of different approaches may be used to induce the motion error. One such approach is shown in FIGS. 2 and 3. A light-
transmissible plate 36 is placed in an optical path between thesensor array 34. Theplate 36 is movable between a first position a second position. During a first scan, theplate 36 is moved to the first position by asecond motor 38 andlinkage 40. Light reflected at the scanning plane is transmitted by theplate 36 without changing the direction of the beam light (i.e., the light is normal to the light-receiving surface of the plate 36), and the transmitted light impinges a light-receiving surface of thesensor array 34. The path of the light transmitted by theplate 36 during the first scan is indicated by the solid line P0. During a second scan, theplate 36 is moved or tilted to the second position. When moved to the second position, theplate 36 changes the direction of the light path, as indicated by the dashed line P1. Thus, thesensor array 34 “sees” different images during the first and second scans. Theplate 36 may be moved to produce a motion error Δ between fractional values of 0.35 and 0.65 pixels (e.g., 0.40 pixels, 1.40 pixels). A motion error Δ of 0.5 pixels would improve resolution by a factor of two. - The
plate 36 may be made of an optical-quality glass. The material for the plate 36 is homogeneous, so that direction changes are relatively uniform across the plate 36. It would be undesirable for the beam P0 to be deflected by one degree at one portion of the plate 36 and by two degrees at another portion of the plate 36. Dimensions of the plate 36 could include, for example, a thickness of one millimeter, a length of one or two centimeters, and a width of one centimeter. - The
electronics assembly 16 includes a controller 42 for controlling the operation of the scanner 10. Among its functions, the controller 42 controls the dc motor 22 to move the scan head 12 along the slider rods 20, and the controller 42 controls the second motor 38 and linkage 40 to move the plate 36 between the first and second positions. The controller 42 can control the second motor 38 and linkage 40 to cause a precise motion error. In the alternative, the motion error need not be controlled precisely. Instead, the plate 36 may be moved to cause an approximate subpixel error, and the precise motion error Δ could be determined by a motion estimation algorithm in software (e.g., on a host computer) or hardware (e.g., the controller 42). - To scan a sheet S of paper, the sheet S is positioned atop the
glass pane 18, and a host (e.g., a personal computer) commands the electronics assembly 16 to scan the sheet S. Under control of the controller 42, the second motor 38 and linkage 40 move the plate 36 to the first position, and the dc motor drive assembly 14 moves the scan head 12 along the slider rods 20 to an initial position (e.g., a wall) and then starts moving the scan head 12 along the slider rods 20. A fluorescent bulb 48 in the scan head 12 is turned on to illuminate a portion of the sheet S with white light, and the imaging optics 32 focuses an image of the illuminated portion onto the sensor array 34. The sensor array 34 is exposed to, and integrates, a line of pixels at a time, and the electronics assembly 16 processes signals generated by the sensor array 34 and buffers a first representation of the scanned image in memory 44. - After the first scan is completed, the
controller 42 commands the dc motor assembly 14 to return the scan head 12 to the initial position, and the controller 42 commands the second motor 38 and linkage 40 to move the plate 36 to the second position. After the plate 36 has been moved to the second position, a second scan is performed. During the second scan, the plate 36 changes the direction of the light reflected by the sheet S and thereby causes the sensor array 34 to see a different image of the sheet S. A second representation of the scanned image is buffered in the memory 44. - After the second scan has been completed, the first and second representations are processed into a higher resolution representation. Processing can be performed by an on-
board processor 46. In the alternative, the low-resolution representations may be sent to the host, which would generate the high-resolution image. FIG. 2 shows a scanner 10 that performs the processing on board. - The
memory 44 also stores the Point Spread Function of the scanner 10 and a program that instructs the processor 46 to generate the composite representation from the two low-resolution representations, and to use the Point Spread Function and a super resolution algorithm to process the composite representation into the high-resolution representation. The high-resolution representation is then sent to the host. - There are a number of different super resolution algorithms that could be used to generate the high-resolution representation. Exemplary super resolution algorithms include the Projections onto Convex Sets-based algorithm (“POCS”) and the Maximum A Posteriori probability estimation-based algorithm (“MAP”).
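As an intuition for the projection-based (POCS) family of algorithms: each low-resolution scan defines a set of high-resolution images consistent with it, and an estimate is refined by alternately projecting onto those sets. Below is a deliberately simplified one-dimensional sketch; the ideal half-pixel shift and the absence of optical blur are assumptions made for illustration, not details from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
x_true = rng.random(16)          # unknown high-resolution signal
y0 = x_true[0::2]                # first scan: even samples
y1 = x_true[1::2]                # second scan: odd (half-pixel-shifted) samples

x = np.zeros(16)                 # initial high-resolution estimate
for _ in range(3):
    x[0::2] = y0                 # project onto {x : x[0::2] == y0}
    x[1::2] = y1                 # project onto {x : x[1::2] == y1}
```

With optical blur and non-ideal shifts the projections are less trivial, and the iteration converges to a point in the intersection of the constraint sets rather than recovering the signal exactly.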
- A faster algorithm that uses linear filtering is illustrated in FIGS. 4 and 5. FIG. 4 shows that the composite representation R3 is formed by interleaving the two low-resolution representations R1 and R2. For example, a 600×600 dpi composite image could be formed from two low-resolution 300×600 dpi images.
- The high-resolution representation R4 is computed from the composite representation R3 by linear filtering: f(n1, n2) = Σi Σj w(i, j) s(n1+i, n2+j)
- where f(n1, n2) is a pixel in the high-resolution representation R4; s(n1+i, n2+j) is a pixel in the composite representation R3; and w(i, j) is a weighting coefficient.
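The interleaving of FIG. 4 and the linear filter defined above can be sketched as follows. The uniform 3×3 weighting kernel is a placeholder of my own; the actual coefficients w would be determined as described in the following paragraphs.

```python
import numpy as np

def interleave(r1, r2):
    """Form composite R3 by interleaving the rows of two H x W
    low-resolution scans offset by roughly half a pixel."""
    h, w = r1.shape
    r3 = np.empty((2 * h, w))
    r3[0::2] = r1
    r3[1::2] = r2
    return r3

def linear_filter(s, wgt):
    """f(n1, n2) = sum_{i,j} wgt[i, j] * s[n1 + i, n2 + j] (valid region only)."""
    k1, k2 = wgt.shape
    h, w = s.shape
    f = np.zeros((h - k1 + 1, w - k2 + 1))
    for i in range(k1):
        for j in range(k2):
            f += wgt[i, j] * s[i:i + h - k1 + 1, j:j + w - k2 + 1]
    return f

r1 = np.arange(12.0).reshape(3, 4)        # toy "first scan"
r2 = r1 + 0.5                             # toy "second scan"
r3 = interleave(r1, r2)                   # composite, 6 x 4
f = linear_filter(r3, np.full((3, 3), 1.0 / 9.0))
```

For two 300×600 dpi scans, `interleave` doubles the row density, which is exactly the 600×600 dpi composite described above.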
- The weighting coefficients w may be determined by the algorithm disclosed in U.S. Ser. No. ______ filed on ______ (Attorney Docket No. 10960578-1), assigned to the assignee of the present invention, and incorporated herein by reference.
- The weighting coefficients w could instead be determined via a POCS technique or a frequency domain technique. Such techniques are disclosed in T. S. Huang and R. Y. Tsai, “Multiple frame image restoration and registration,” Advances in Computer Vision and Image Processing, vol. 1, 1984, pp. 317-339; R. R. Schultz and R. L. Stevenson, “Improved definition video frame enhancement,” IEEE Int. Conference on Acoustics, Speech and Signal Processing, vol. IV, May 1995, pp. 2169-2172; and A. M. Tekalp, M. K. Ozkan and M. I. Sezan, “High resolution image reconstruction from lower resolution image sequences and space varying image restoration,” IEEE Int. Conference on Acoustics, Speech and Signal Processing, vol. III, March 1992, pp. 169-172.
- The Point Spread Function of the scanner can be measured directly. For example, interferometrically generated sinusoidal imaging targets could be used to measure the Point Spread Function. See, for example, J. Greivenkamp and A. Lowman, “Modulation transfer function measurement of sparse-array sensors using a self-calibrating fringe pattern,” Applied Optics, Vol. 33, No. 22, pp. 5029-5036, August 1994; and S. Johansson et al., “Experimental MTF measurements of CCD using an interferometrically generated test pattern,” Journal of Imaging Science, Vol. 35, No. 5, pp. 320-325, September-October 1991.
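The modulation transfer function targeted by these measurement papers is the magnitude of the Fourier transform of the Point Spread Function. A toy one-dimensional check follows; the Gaussian PSF is an assumed stand-in, not a measured scanner response.

```python
import numpy as np

sigma = 1.0
x = np.arange(-16, 17, dtype=float)
psf = np.exp(-x ** 2 / (2.0 * sigma ** 2))   # toy 1-D Gaussian PSF
psf /= psf.sum()                              # unit response at zero frequency

# MTF = |Fourier transform of the PSF|; zero-padding to 64 samples
# just evaluates the transform on a finer frequency grid.
mtf = np.abs(np.fft.rfft(psf, 64))
```

A normalized PSF gives MTF(0) = 1, and a wider (more smeared) PSF would make the curve fall off faster, which is the sense in which smearing limits hardware resolution.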
- In the alternative, the Point Spread Function (“PSF”) could be estimated from Line Spread Functions (“LSF”) of the
sensor array 34. Estimating the PSF from the LSF can be done in any number of ways. See, for example, A. Bewsher and I. Powell, “Optical transfer function measurement facility for aerial survey cameras,” Applied Optics, Vol. 33, No. 28, pp. 6573-6577, October 1994. - The Line Spread Function possesses a number of useful properties from a measurement standpoint. Fine lines on an image target can closely approximate an impulsive line, while providing sufficient illumination to achieve meaningful sensor responses. The Line Spread Function is a one-dimensional function; therefore, accurate, densely sampled Line Spread Function measurements would appear to be much more feasible from a single exposure to some appropriate imaging target. Additionally, the simple geometry of a linear stimulus could be exploited to recover precise relative displacements between the line and the
sensor array 34. - FIG. 6 shows a generalized method of using a scanner to generate a high-resolution representation of an image. A first scan of the image is performed to produce a first representation of the image (block 102).
- The scanner is then configured to cause a motion error in the next scan (block 104). The motion error could be induced by moving the
plate 36. Instead of using the plate 36 to induce motion error between the first and second representations, the motion error could be induced by offsetting the initial position of the scan. For example, the first scan could be started at the wall, and the second scan could be started at a distance of, say, 0.5 pixels from the wall. - After the scanner has been configured to cause a motion error, a second scan of the image is performed to produce a second representation of the image (block 106). A motion error exists between the first and second representations.
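For the tilted-plate approach, the subpixel error produced by a given tilt follows from the standard plane-parallel-plate refraction result. The sketch below assumes n = 1.5 glass, the one-millimeter thickness given earlier, a 300 dpi sensor, and illustrative tilt angles; none of the specific angles come from the patent.

```python
import math

def plate_shift(t_mm, n, theta_deg):
    """Lateral displacement (mm) of a ray crossing a plane-parallel
    plate of thickness t_mm and refractive index n at tilt theta_deg."""
    th = math.radians(theta_deg)
    return t_mm * math.sin(th) * (
        1.0 - math.cos(th) / math.sqrt(n ** 2 - math.sin(th) ** 2))

pixel_mm = 25.4 / 300.0                     # 300 dpi sensor pitch
for theta in (2.0, 4.0, 6.0, 8.0):          # illustrative tilt angles
    d = plate_shift(1.0, 1.5, theta)        # 1 mm plate, n = 1.5
    print(f"tilt {theta:.0f} deg -> {d / pixel_mm:.2f} pixel shift")
```

Under these assumptions, tilts of roughly six to eight degrees land in the 0.35 to 0.65 pixel band discussed above.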
- A composite representation is then formed from the first and second representations of the image (block 108). A super resolution technique is then used to generate a high-resolution representation of the image from the composite representation (block 110). For example, a linear filtering method may be used to generate the high-resolution image from the composite representation.
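As noted earlier, when the induced error is only approximate, the precise motion error Δ can be recovered by a motion estimation step before the composite is formed. A minimal one-dimensional estimator follows, using cross-correlation with three-point parabolic refinement; this particular estimator is a common choice, not one prescribed by the patent.

```python
import numpy as np

def subpixel_shift(a, b):
    """Estimate d such that b[n] ~ a[n - d]: locate the circular
    cross-correlation peak, then refine with a parabolic fit."""
    n = len(a)
    c = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    k = int(np.argmax(c))
    y0, y1, y2 = c[(k - 1) % n], c[k], c[(k + 1) % n]
    frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)  # parabola vertex offset
    m = k + frac
    if m > n / 2:
        m -= n          # wrap to a signed displacement
    return -m

xs = np.arange(64, dtype=float)
a = np.exp(-(xs - 20.0) ** 2 / 8.0)      # a line feature in scan one
b = np.exp(-(xs - 22.6) ** 2 / 8.0)      # same feature displaced 2.6 pixels
est = subpixel_shift(a, b)               # recovers roughly 2.6
```

The parabolic fit interpolates the correlation peak between integer lags, which is what makes subpixel estimates such as 0.4 or 0.5 pixels recoverable.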
- Thus disclosed is an invention that increases the resolution of a scanner beyond its hardware resolution without increasing the quality of the sensor array or imaging optics of the scanner. The addition of the
plate 36, second motor 38, and linkage 40 adds little to the cost of manufacturing the scanner 10. If the initial position of the scan head 12 can be controlled to induce the motion error, the plate 36, motor 38, and linkage 40 would not be needed, and the higher resolution could be obtained without an additional increase in the cost of hardware. - The present invention is not limited to the specific embodiments described and illustrated above. For example, the present invention is not limited to a particular super resolution algorithm. Other algorithms that process the composite representation into a high-resolution representation could be used.
- The composite representation could be formed from more than two low-resolution representations. Step size of the
first motor 22 would be adjusted to achieve the proper motion error in the scanning direction D for each scan. Step size of the second motor 38 would be adjusted to achieve the proper motion error in the second direction A. Additional representations having additional motion error would increase the density of the composite representation and thereby increase the resolution of the high-resolution representation in the scanning direction D. - The motion error may be non-translational. If a non-translational motion error is intentionally induced, a super resolution algorithm based on POCS could be used. Moreover, the motion error may be induced in both directions A and D or only in a single direction A or D. For example, the
plate 36 may be used to induce motion error in the scanning and second directions D and A; or the plate 36 may be used to induce motion error in the second direction A and the dc motor drive assembly 14 may be used to induce motion error in the scanning direction D; or the plate 36 may be used to induce motion error only in the scanning direction D or only in the second direction A; or the dc motor drive assembly 14 may be used to induce motion error only in the scanning direction D. - The processing could be performed on-board the
scanner 10. In the alternative, the processing could be performed off-board the scanner by an external device such as the host computer. - The location of the
plate 36 is not limited to the optical path between the glass pane 18 and the sensor array 34. The plate 36 could be placed along the optical path between the light source 48 and the glass pane 18. - Instead of using the
plate 36 to induce the motion error between the first and second representations, the motion error could be induced by offsetting the initial position of the scan. Optical elements other than a flat, transmissive plate 36 could be used. Optical elements such as prisms or diffraction gratings could be used instead. - Therefore, the invention is not limited to the specific embodiments described and illustrated above. Instead, the invention is construed according to the claims that follow.
Claims (20)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/181,346 US6459823B2 (en) | 1998-10-28 | 1998-10-28 | Apparatus and method of increasing scanner resolution |
JP11305573A JP2000285228A (en) | 1998-10-28 | 1999-10-27 | Image scanning method |
EP99308489A EP0998122A3 (en) | 1998-10-28 | 1999-10-27 | Apparatus and method of increasing scanner resolution |
Publications (2)
Publication Number | Publication Date |
---|---|
US20010046331A1 true US20010046331A1 (en) | 2001-11-29 |
US6459823B2 US6459823B2 (en) | 2002-10-01 |
Family
ID=22663900
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/181,346 Expired - Fee Related US6459823B2 (en) | 1998-10-28 | 1998-10-28 | Apparatus and method of increasing scanner resolution |
Country Status (2)
Country | Link |
---|---|
US (1) | US6459823B2 (en) |
JP (1) | JP2000285228A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6567192B1 (en) * | 1998-03-17 | 2003-05-20 | Matsushita Electric Industrial Co., Ltd. | Image reading apparatus |
US20050163402A1 (en) * | 2003-09-30 | 2005-07-28 | Seiji Aiso | Generation of high-resolution image based on multiple low-resolution images |
US20070097682A1 (en) * | 2005-10-31 | 2007-05-03 | Razavi Hosein A | Illumination source comprising a pluralitiy of light emitting diode groups |
US20070097463A1 (en) * | 2005-10-31 | 2007-05-03 | Razavi Hosein A | Illumination source comprising more light emitting diodes than terminals |
US20120314198A1 (en) * | 2011-06-10 | 2012-12-13 | Sang-Hee Lee | Methods of estimating point spread functions in electron-beam lithography processes |
US10506177B2 (en) * | 2012-01-10 | 2019-12-10 | Sharp Kabushiki Kaisha | Image processing device, image processing method, image processing program, image capture device, and image display device |
KR20210002591A (en) * | 2018-05-21 | 2021-01-08 | 가부시키가이샤 시마즈세이사쿠쇼 | X-ray inspection device |
Families Citing this family (69)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8352400B2 (en) | 1991-12-23 | 2013-01-08 | Hoffberg Steven M | Adaptive pattern recognition based controller apparatus and method and human-factored interface therefore |
US7966078B2 (en) | 1999-02-01 | 2011-06-21 | Steven Hoffberg | Network media appliance system and method |
DE10001800C2 (en) * | 2000-01-18 | 2002-07-18 | Om Engineering Gmbh | Method and device for measuring, in particular, surface topologies in microscopic resolution |
US6947133B2 (en) * | 2000-08-08 | 2005-09-20 | Carl Zeiss Jena Gmbh | Method for increasing the spectral and spatial resolution of detectors |
US7239428B2 (en) * | 2001-06-11 | 2007-07-03 | Solectronics, Llc | Method of super image resolution |
US6906305B2 (en) * | 2002-01-08 | 2005-06-14 | Brion Technologies, Inc. | System and method for aerial image sensing |
US6828542B2 (en) * | 2002-06-07 | 2004-12-07 | Brion Technologies, Inc. | System and method for lithography process monitoring and control |
US6807503B2 (en) * | 2002-11-04 | 2004-10-19 | Brion Technologies, Inc. | Method and apparatus for monitoring integrated circuit fabrication |
US6759297B1 (en) | 2003-02-28 | 2004-07-06 | Union Semiconductor Technology Corporatin | Low temperature deposition of dielectric materials in magnetoresistive random access memory devices |
US7053355B2 (en) | 2003-03-18 | 2006-05-30 | Brion Technologies, Inc. | System and method for lithography process monitoring and control |
US7492967B2 (en) * | 2003-09-24 | 2009-02-17 | Kabushiki Kaisha Toshiba | Super-resolution processor and medical diagnostic imaging apparatus |
US7707039B2 (en) * | 2004-02-15 | 2010-04-27 | Exbiblio B.V. | Automatic modification of web pages |
US8442331B2 (en) | 2004-02-15 | 2013-05-14 | Google Inc. | Capturing text from rendered documents using supplemental information |
US20060041484A1 (en) | 2004-04-01 | 2006-02-23 | King Martin T | Methods and systems for initiating application processes by data capture from rendered documents |
US10635723B2 (en) | 2004-02-15 | 2020-04-28 | Google Llc | Search engines and systems with handheld document data capture devices |
US7812860B2 (en) | 2004-04-01 | 2010-10-12 | Exbiblio B.V. | Handheld device for capturing text from both a document printed on paper and a document displayed on a dynamic display device |
US8799303B2 (en) | 2004-02-15 | 2014-08-05 | Google Inc. | Establishing an interactive environment for rendered documents |
US7389002B1 (en) | 2004-03-22 | 2008-06-17 | Knight Andrew F | Method for increasing resolution in a camera |
US8081849B2 (en) | 2004-12-03 | 2011-12-20 | Google Inc. | Portable scanning and memory device |
US8793162B2 (en) | 2004-04-01 | 2014-07-29 | Google Inc. | Adding information or functionality to a rendered document via association with an electronic counterpart |
US8146156B2 (en) | 2004-04-01 | 2012-03-27 | Google Inc. | Archive of text captures from rendered documents |
US20060098900A1 (en) * | 2004-09-27 | 2006-05-11 | King Martin T | Secure data gathering from rendered documents |
US7894670B2 (en) | 2004-04-01 | 2011-02-22 | Exbiblio B.V. | Triggering actions in response to optically or acoustically capturing keywords from a rendered document |
US9143638B2 (en) | 2004-04-01 | 2015-09-22 | Google Inc. | Data capture from rendered documents using handheld device |
US8621349B2 (en) | 2004-04-01 | 2013-12-31 | Google Inc. | Publishing techniques for adding value to a rendered document |
US20060081714A1 (en) | 2004-08-23 | 2006-04-20 | King Martin T | Portable scanning device |
US9008447B2 (en) | 2004-04-01 | 2015-04-14 | Google Inc. | Method and system for character recognition |
US20070300142A1 (en) | 2005-04-01 | 2007-12-27 | King Martin T | Contextual dynamic advertising based upon captured rendered text |
US7990556B2 (en) | 2004-12-03 | 2011-08-02 | Google Inc. | Association of a portable scanner with input/output and storage devices |
US9116890B2 (en) | 2004-04-01 | 2015-08-25 | Google Inc. | Triggering actions in response to optically or acoustically capturing keywords from a rendered document |
US20080313172A1 (en) | 2004-12-03 | 2008-12-18 | King Martin T | Determining actions involving captured information and electronic content associated with rendered documents |
US8713418B2 (en) | 2004-04-12 | 2014-04-29 | Google Inc. | Adding value to a rendered document |
US8489624B2 (en) | 2004-05-17 | 2013-07-16 | Google, Inc. | Processing techniques for text capture from a rendered document |
US8874504B2 (en) | 2004-12-03 | 2014-10-28 | Google Inc. | Processing techniques for visual capture data from a rendered document |
US9460346B2 (en) | 2004-04-19 | 2016-10-04 | Google Inc. | Handheld device for capturing text from both a document printed on paper and a document displayed on a dynamic display device |
US8620083B2 (en) | 2004-12-03 | 2013-12-31 | Google Inc. | Method and system for character recognition |
US8346620B2 (en) | 2004-07-19 | 2013-01-01 | Google Inc. | Automatic modification of web pages |
WO2006015305A2 (en) | 2004-07-30 | 2006-02-09 | Sionex Corporation | Systems and methods for ion mobility control |
US20110029504A1 (en) * | 2004-12-03 | 2011-02-03 | King Martin T | Searching and accessing documents on private networks for use with captures from rendered documents |
US7920169B2 (en) | 2005-01-31 | 2011-04-05 | Invention Science Fund I, Llc | Proximity of shared image devices |
US9124729B2 (en) | 2005-01-31 | 2015-09-01 | The Invention Science Fund I, Llc | Shared image device synchronization or designation |
US9910341B2 (en) | 2005-01-31 | 2018-03-06 | The Invention Science Fund I, Llc | Shared image device designation |
US20060174203A1 (en) | 2005-01-31 | 2006-08-03 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Viewfinder for shared image device |
US8902320B2 (en) | 2005-01-31 | 2014-12-02 | The Invention Science Fund I, Llc | Shared image device synchronization or designation |
US20060170956A1 (en) | 2005-01-31 | 2006-08-03 | Jung Edward K | Shared image devices |
US9325781B2 (en) | 2005-01-31 | 2016-04-26 | Invention Science Fund I, Llc | Audio sharing |
US9489717B2 (en) * | 2005-01-31 | 2016-11-08 | Invention Science Fund I, Llc | Shared image device |
US7876357B2 (en) | 2005-01-31 | 2011-01-25 | The Invention Science Fund I, Llc | Estimating shared image device operational capabilities or resources |
US8606383B2 (en) | 2005-01-31 | 2013-12-10 | The Invention Science Fund I, Llc | Audio sharing |
US9082456B2 (en) | 2005-01-31 | 2015-07-14 | The Invention Science Fund I Llc | Shared image device designation |
US9001215B2 (en) | 2005-06-02 | 2015-04-07 | The Invention Science Fund I, Llc | Estimating shared image device operational capabilities or resources |
US10003762B2 (en) | 2005-04-26 | 2018-06-19 | Invention Science Fund I, Llc | Shared image devices |
US9819490B2 (en) | 2005-05-04 | 2017-11-14 | Invention Science Fund I, Llc | Regional proximity for shared image device(s) |
US8270049B2 (en) * | 2006-08-01 | 2012-09-18 | Xerox Corporation | System and method for high resolution characterization of spatial variance of color separation misregistration |
US8274717B2 (en) | 2006-08-01 | 2012-09-25 | Xerox Corporation | System and method for characterizing color separation misregistration |
EP2067119A2 (en) | 2006-09-08 | 2009-06-10 | Exbiblio B.V. | Optical scanners, such as hand-held optical scanners |
JP4829750B2 (en) * | 2006-11-20 | 2011-12-07 | キヤノン株式会社 | Image reading device |
US8228559B2 (en) | 2007-05-21 | 2012-07-24 | Xerox Corporation | System and method for characterizing color separation misregistration utilizing a broadband multi-channel scanning module |
US7974498B2 (en) * | 2007-08-08 | 2011-07-05 | Microsoft Corporation | Super-resolution in periodic and aperiodic pixel imaging |
JP2009171563A (en) * | 2007-12-21 | 2009-07-30 | Canon Inc | Image processor, image processing method,program for executing image processing method, and storage medium |
JP5305883B2 (en) * | 2007-12-28 | 2013-10-02 | キヤノン株式会社 | Image processing apparatus, image processing method, and program for executing image processing method |
JP5028328B2 (en) | 2008-05-13 | 2012-09-19 | キヤノン株式会社 | Image processing apparatus and image processing method |
DE202010018601U1 (en) | 2009-02-18 | 2018-04-30 | Google LLC (n.d.Ges.d. Staates Delaware) | Automatically collecting information, such as gathering information using a document recognizing device |
US8447066B2 (en) * | 2009-03-12 | 2013-05-21 | Google Inc. | Performing actions based on capturing information from rendered documents, such as documents under copyright |
WO2010105245A2 (en) | 2009-03-12 | 2010-09-16 | Exbiblio B.V. | Automatically providing content associated with captured information, such as information captured in real-time |
JP2010257179A (en) * | 2009-04-24 | 2010-11-11 | Fuji Xerox Co Ltd | Image processing apparatus and image processing program |
US9081799B2 (en) | 2009-12-04 | 2015-07-14 | Google Inc. | Using gestalt information to identify locations in printed information |
US9323784B2 (en) | 2009-12-09 | 2016-04-26 | Google Inc. | Image search using text-based elements within the contents of images |
JP5761958B2 (en) * | 2010-10-25 | 2015-08-12 | キヤノン株式会社 | Image processing apparatus, image processing apparatus control method, and program |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3867569A (en) * | 1974-02-25 | 1975-02-18 | Bell Telephone Labor Inc | Compact flatbed page scanner |
JPH05314250A (en) | 1992-05-11 | 1993-11-26 | Fuji Xerox Co Ltd | Method and device for smoothing enlarged image |
US5412577A (en) * | 1992-10-28 | 1995-05-02 | Quad/Tech International | Color registration system for a printing press |
US5414782A (en) | 1992-12-16 | 1995-05-09 | The United States Of Amercia As Represented By The Secretary Of Commerce | Procedure for digital image restoration |
US5739898A (en) * | 1993-02-03 | 1998-04-14 | Nikon Corporation | Exposure method and apparatus |
US5336878A (en) | 1993-05-10 | 1994-08-09 | Hewlett-Packard Company | Variable speed single pass color optical scanner |
DE4429416A1 (en) * | 1994-08-19 | 1996-02-22 | Velzel Christiaan H F | Method and interference microscope for microscoping an object to achieve a resolution beyond the diffraction limit (super resolution) |
US5578813A (en) * | 1995-03-02 | 1996-11-26 | Allen; Ross R. | Freehand image scanning device which compensates for non-linear movement |
US6240219B1 (en) * | 1996-12-11 | 2001-05-29 | Itt Industries Inc. | Apparatus and method for providing optical sensors with super resolution |
US5949914A (en) * | 1997-03-17 | 1999-09-07 | Space Imaging Lp | Enhancing the resolution of multi-spectral image data with panchromatic image data using super resolution pan-sharpening |
US6037584A (en) * | 1998-05-08 | 2000-03-14 | Hewlett-Packard Company | Optical scanner including exposure control |
US6239883B1 (en) * | 1998-08-20 | 2001-05-29 | Microtek International Inc. | High resolution scanner |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6567192B1 (en) * | 1998-03-17 | 2003-05-20 | Matsushita Electric Industrial Co., Ltd. | Image reading apparatus |
US20100150474A1 (en) * | 2003-09-30 | 2010-06-17 | Seiko Epson Corporation | Generation of high-resolution images based on multiple low-resolution images |
US20050163402A1 (en) * | 2003-09-30 | 2005-07-28 | Seiji Aiso | Generation of high-resolution image based on multiple low-resolution images |
US7953297B2 (en) | 2003-09-30 | 2011-05-31 | Seiko Epson Corporation | Generation of high-resolution images based on multiple low-resolution images |
US7702184B2 (en) * | 2003-09-30 | 2010-04-20 | Seiko Epson Corporation | Generation of high-resolution image based on multiple low-resolution images |
US7835038B2 (en) | 2005-10-31 | 2010-11-16 | Hewlett-Packard Development Company, L.P. | Illumination source comprising more light emitting diodes than terminals |
US20070097463A1 (en) * | 2005-10-31 | 2007-05-03 | Razavi Hosein A | Illumination source comprising more light emitting diodes than terminals |
US7852530B2 (en) | 2005-10-31 | 2010-12-14 | Hewlett-Packard Development Company, L.P. | Illumination source comprising a plurality of light emitting diode groups |
US20070097682A1 (en) * | 2005-10-31 | 2007-05-03 | Razavi Hosein A | Illumination source comprising a pluralitiy of light emitting diode groups |
US20120314198A1 (en) * | 2011-06-10 | 2012-12-13 | Sang-Hee Lee | Methods of estimating point spread functions in electron-beam lithography processes |
US10506177B2 (en) * | 2012-01-10 | 2019-12-10 | Sharp Kabushiki Kaisha | Image processing device, image processing method, image processing program, image capture device, and image display device |
KR20210002591A (en) * | 2018-05-21 | 2021-01-08 | 가부시키가이샤 시마즈세이사쿠쇼 | X-ray inspection device |
EP3798623A4 (en) * | 2018-05-21 | 2021-06-09 | Shimadzu Corporation | X-ray inspection device |
KR102355657B1 (en) | 2018-05-21 | 2022-02-08 | 가부시키가이샤 시마즈세이사쿠쇼 | X-ray inspection device |
US11268917B2 (en) * | 2018-05-21 | 2022-03-08 | Shimadzu Corporation | X-ray inspection apparatus |
Also Published As
Publication number | Publication date |
---|---|
US6459823B2 (en) | 2002-10-01 |
JP2000285228A (en) | 2000-10-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6459823B2 (en) | Apparatus and method of increasing scanner resolution | |
US6414760B1 (en) | Image scanner with optical waveguide and enhanced optical sampling rate | |
JP4021594B2 (en) | Image scanning method using image scanner | |
JPH0654182A (en) | Image scanning device | |
US20070273930A1 (en) | Method and system for super-resolution of confocal images acquired through an image guide, and device used for implementing such a method | |
KR20080097218A (en) | Method and apparatus and computer program product for collecting digital image data from microscope media-based specimens | |
EP0748108A3 (en) | Method of electronic scanning | |
JPH10290389A (en) | Multi-focus image formation method and image formation device | |
JP4653123B2 (en) | Image acquisition apparatus and image acquisition method | |
JP5202267B2 (en) | Image reading device | |
US20200301121A1 (en) | Method for high-resolution scanning microscopy | |
JP4947072B2 (en) | Image reading device | |
JP4913089B2 (en) | Image reading device | |
JP5068236B2 (en) | Image reading device | |
US6201619B1 (en) | Autofocus process and system with fast multi-region sampling | |
EP0998122A2 (en) | Apparatus and method of increasing scanner resolution | |
KR101150987B1 (en) | Image processing apparatus and control method thereof | |
AU663760B2 (en) | Image input device having optical deflection elements for capturing multiple sub-images | |
JPH0735992A (en) | Solid-state image pickup device | |
JPH07322151A (en) | Solid-state image pickup device | |
CN115052077B (en) | Scanning device and method | |
JPH07113944A (en) | Picture read out device | |
EP0953861A2 (en) | Method and apparatus for doubling a CCD's resolution using a gated shifted optical path | |
JP3362927B2 (en) | Image reading device | |
Lenz et al. | ProgRes 3000: a digital color camera with a 2-D array CCD sensor and programmable resolution up to 2994 x 2320 picture elements |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: HEWLETT-PACKARD COMPANY, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ALTUNBASAK, YUCEL;TAUBMAN, DAVID S.;REEL/FRAME:009742/0455;SIGNING DATES FROM 19981027 TO 19981028 |
FPAY | Fee payment |
Year of fee payment: 4 |
FPAY | Fee payment |
Year of fee payment: 8 |
AS | Assignment |
Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD COMPANY;REEL/FRAME:026945/0699 Effective date: 20030131 |
REMI | Maintenance fee reminder mailed | ||
LAPS | Lapse for failure to pay maintenance fees | ||
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
FP | Expired due to failure to pay maintenance fee |
Effective date: 20141001 |