WO2010151686A1 - Non-uniformity error correction with a bilateral filter - Google Patents
- Publication number: WO2010151686A1 (PCT application PCT/US2010/039852)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G06T5/70—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration by the use of local operators
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/67—Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response
- H04N25/671—Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response for non-uniformity detection or correction
- H04N25/673—Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response for non-uniformity detection or correction by using reference sources
- H04N25/674—Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response for non-uniformity detection or correction by using reference sources based on the scene itself, e.g. defocusing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/30—Transforming light or analogous information into electric information
- H04N5/33—Transforming infrared radiation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20028—Bilateral filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20192—Edge enhancement; Edge preservation
Definitions
- Focal plane arrays are used to digitally capture electromagnetic radiation, and are common components of digital imaging systems.
- A focal plane array comprises an array of individual detectors, generally photodiodes, which produce current when exposed to photons of sufficient energy. This current, later converted to a voltage, may be treated in a variety of ways to generate an image that is displayed to the end user.
- Individual detectors in focal plane arrays are subject to non-uniformity errors. Specifically, if two detectors at a uniform temperature are exposed to the same scene information, their response voltages may differ. Non-uniformity errors are described in terms of offset (the difference in response to a calibrated input from the expected response) and gain (the difference in slope of the line between two responses to two calibrated inputs from the expected slope).
- A common method of generating correction maps is the two-point correction method, in which uniform radiation at two different, precisely known temperatures is applied to the array and the response of each detector is measured. The offset and gain of each detector are then calculated by approximating the response as a line through these two calibration points. For a given detector, the extrapolation of the line through any particular temperature gives the offset response of the detector, and the gain is simply the slope of the response line.
- Because detectors are assumed to be linear devices, correcting detector gain and offset at one set of exposure intensity levels is assumed to allow correction at all other levels. However, detectors often do not respond linearly, resulting in errors even after correction by the two-point method.
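The two-point calculation described above can be sketched numerically. The following is a minimal illustration under the linear-detector assumption; the function names and calibration values are chosen for the example, not taken from the patent.

```python
import numpy as np

def two_point_calibration(resp_lo, resp_hi, flux_lo, flux_hi):
    """Per-detector gain and offset from responses to two uniform,
    precisely known calibration inputs (flux_lo < flux_hi)."""
    gain = (resp_hi - resp_lo) / (flux_hi - flux_lo)  # slope of the response line
    offset = resp_lo - gain * flux_lo                 # extrapolated response at zero input
    return gain, offset

def apply_correction(raw, gain, offset):
    """Invert the assumed linear detector model: raw = gain * scene + offset."""
    return (raw - offset) / gain

# Example: a 2x2 array with known (simulated) non-uniformities.
true_gain = np.array([[1.00, 1.20], [0.90, 1.10]])
true_offset = np.array([[0.10, -0.20], [0.00, 0.30]])
resp_lo = true_gain * 10.0 + true_offset   # response to uniform input of 10
resp_hi = true_gain * 20.0 + true_offset   # response to uniform input of 20
gain, offset = two_point_calibration(resp_lo, resp_hi, 10.0, 20.0)
corrected = apply_correction(true_gain * 15.0 + true_offset, gain, offset)
# For a perfectly linear detector, every corrected pixel equals the scene value 15.
```

As the passage notes, real detectors deviate from linearity, so residual error remains at input levels away from the two calibration points.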
- Alternatively, detector response may be measured at various temperatures to experimentally obtain offset and gain corrections for each detector over the range of expected operating temperatures. Because detector response typically changes non-linearly with temperature, measuring at various temperatures is commonly more accurate than the two-point correction method.
- In practice, focal plane array detectors are tested under controlled conditions after manufacture, for example using the two-point correction method, to derive an offset table and/or a gain table.
- Such tables might include, for example, the gain of each detector and the offset of the detector at one particular temperature, allowing the offset to be calculated at any other temperature for each detector in an array.
- These tables are fixed after testing and used to correct for each detector's unique non-uniformity error.
- Because non-uniformity errors may be nonlinear and may vary over time and under different operating conditions, correction using data solely from calibration shortly after manufacture is error-prone.
- Accordingly, manufacturers have developed non-uniformity error correction methods that can be performed in the field.
- For example, one proposal involves application of known electromagnetic radiation signals using shutters, mirrors, or the like to achieve a two-point calibration during operation.
- In this approach, correction values are calculated for each detector in the array based on the assumption that all detectors should exhibit equal signals when exposed to an equal intensity of radiation. Because the in-field method may be performed repeatedly during operation, rather than once following manufacture, non-uniformity errors that evolve over time or in response to operating conditions may be corrected.
- U.S. Pat. No. 6,507,018 describes a passive method of non-uniformity correction.
- In this method, a determination is first made whether sufficient relative motion exists between the scene and the focal plane array. If so, the difference in temporal frequency between the high-frequency moving scene information and the low-frequency stationary detector non-uniformity errors may permit the two signals to be decoupled.
- A spatial low-pass filter algorithm is then used to perform corrections to the high-temporal-frequency moving scene information. The algorithm estimates each detector's offset and gain value by comparing each detector's response to a local response average among neighboring detectors that are exposed to substantially identical scene information.
- This method may be performed iteratively, to update the offset and gain tables in response to changing conditions.
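The neighborhood-comparison idea can be sketched as an iterative offset update. This is an illustrative reconstruction, not the patented algorithm; the 3x3 local mean, the update rate alpha, and all names are assumptions.

```python
import numpy as np

def local_mean_3x3(img):
    """Mean over each pixel's 3x3 neighborhood, edges padded by replication."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def update_offsets(frames, offsets, alpha=0.2):
    """Iteratively refine per-detector offset estimates from frames in which
    neighboring detectors are assumed to see substantially identical scene
    content (illustrative sketch)."""
    for frame in frames:
        corrected = frame - offsets
        # Residual above the local average is attributed to fixed-pattern offset.
        residual = corrected - local_mean_3x3(corrected)
        offsets = offsets + alpha * residual
    return offsets
```

Run over a sequence of frames, this estimate converges toward the high-spatial-frequency component of the true offsets, which is one way to see why sufficient scene motion (or dithering) is needed: without it, static high-frequency scene detail would be absorbed into the correction.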
- One limitation of this method is that sufficient scene motion is required to separate scene information from detector non-uniformity error.
- Where natural scene motion is insufficient, intentional motion such as mechanical dithering may be introduced.
- An example of a prior art non-uniformity correction method involving mechanical dithering is described in U.S. Pat. No. 5,925,880.
- In that method, periodic mechanical motion is introduced to the imaging system such that, under the assumption of a slowly changing scene, multiple detectors are exposed to substantially identical scene information.
- Neighborhood averaging is then used to determine the ideal output for each detector, and thus to derive updated gain and offset correction tables. This method, however, suffers from several drawbacks.
- First, because the algorithm assumes a precise magnitude of scene translation across the detector, the method is very sensitive to dither translation accuracy.
- Second, any detector with a response substantially different from the ideal value will skew the average detector response of those detectors in its neighborhood.
- Third, performance can suffer during rapid scene movement. This potential decline in performance may be overcome to some extent by increasing the dithering frequency, but such an increase may come at the expense of dithering accuracy and mechanical reliability. Therefore, the mechanical dithering method is not particularly suitable for applications with rapid scene motion, such as moving-platform applications.
- The present teachings disclose methods and apparatus for correction of spatial non-uniformities among detectors in a focal plane array.
- Incoming image data is incident on the array, and the resulting image signals are corrected with a bilateral filter.
- The bilateral filter accounts for edge effects by filtering based both on spatial separation between image points and photometric separation between image points.
- The wavelength ranges identified in the definitions below are exemplary, not limiting, and may overlap slightly, depending on source or context.
- The wavelength ranges lying between about 1 nm and about 1 mm, which include ultraviolet, visible, and infrared radiation, and which are bracketed by x-ray radiation and microwave radiation, may collectively be termed optical radiation.
- Ultraviolet radiation Invisible electromagnetic radiation having wavelengths from about 100 nm, just longer than x-ray radiation, to about 400 nm, just shorter than violet light in the visible spectrum.
- Ultraviolet radiation includes (A) UV-C (from about 100 nm to about 280 or 290 nm), (B) UV-B (from about 280 or 290 nm to about 315 or 320 nm), and (C) UV-A (from about 315 or 320 nm to about 400 nm).
- Visible light Visible electromagnetic radiation having wavelengths from about 360 or 400 nanometers, just longer than ultraviolet radiation, to about 760 or 800 nanometers, just shorter than infrared radiation. Visible light may be imaged and detected by the human eye and includes violet (about 390-425 nm), indigo (about 425-445 nm), blue (about 445-500 nm), green (about 500-575 nm), yellow (about 575-585 nm), orange (about 585-620 nm), and red (about 620-740 nm) light, among others.
- Infrared radiation Invisible electromagnetic radiation having wavelengths from about 700 nanometers, just longer than red light in the visible spectrum, to about 1 millimeter, just shorter than microwave radiation.
- Infrared radiation includes (A) IR-A (from about 700 nm to about 1,400 nm), (B) IR-B (from about 1,400 nm to about 3,000 nm), and (C) IR-C (from about 3,000 nm to about 1 mm).
- IR radiation, particularly IR-C, may be caused or produced by heat and may be emitted by an object in proportion to its temperature and emissivity.
- Portions of the infrared having wavelengths between about 3,000 and 5,000 nm (i.e., 3 and 5 µm) and between about 7,000 or 8,000 and 14,000 nm (i.e., 7 or 8 and 14 µm) may be especially useful in thermal imaging, because they correspond to minima in atmospheric absorption and thus are more easily detected (particularly at a distance).
- NIR near infrared
- SWIR short-wave infrared
- MWIR mid-wave infrared
- LWIR long-wave infrared
- VLWIR very long-wave infrared
- Figure 1 is a schematic diagram showing elements of an image correction system for correcting focal plane non- uniformity errors, according to aspects of the present disclosure.
- Figure 2 depicts an airborne moving platform suitable for use with the apparatus depicted in Fig. 1 and methods depicted in Figs. 3-5.
- Figure 3 is a flowchart depicting a method of correcting focal plane non-uniformity errors, according to aspects of the present disclosure.
- Figure 4 is a flowchart depicting another method of correcting focal plane non-uniformity errors, according to aspects of the present disclosure.
- Figure 5 is a flowchart depicting another method of correcting focal plane non-uniformity errors, according to aspects of the present disclosure.
Detailed Description
- The term "bilateral filter" is used in this disclosure to mean an image-smoothing filter that simultaneously considers location similarity and attribute similarity between pixels, where a "pixel" refers to the portion of a displayed image corresponding to one detector in a detector array.
- A bilateral filter may be defined using a combination of spatial separation and photometric separation.
- For example, a bilateral filter according to the present teachings may be defined using the product of a first weight factor that depends on spatial separation between pixels and a second weight factor that depends on photometric separation between pixels.
- In other words, the weight afforded to a particular pixel by the filter depends on both its spatial distance from, and its photometric similarity to, the pixel upon which the filter is operating.
- Such a filter may be written (Eq. 1):

  Î(p) = (1/W(p)) Σ_{q∈S} f(|p − q|) g(M(p) − M(q)) I(q), with W(p) = Σ_{q∈S} f(|p − q|) g(M(p) − M(q)),

  where:
  - S represents the spatial domain, consisting of the set of all possible positions in an image;
  - p, q are vectors representing 2-dimensional positions (x, y) in an image;
  - I represents an unfiltered image intensity, and Î represents an image intensity to which a bilateral filter has been applied;
  - f(|p − q|) represents a function of the difference in spatial positions of points in an image;
  - M(p) represents a general photometric property (such as intensity or color) of point p in an image; and
  - g(M(p) − M(q)) represents a function of the difference in values of this property.
- In Eq. 1, f(|p − q|) is a weight factor that depends on the spatial separation between point q and point p of the image, and g(M(p) − M(q)) is a weight factor that depends on the photometric separation, i.e., the difference in photometric property M, between point q and point p of the image.
- Photometric separation between image points may be a function of a difference in intensity or color (R, G and/or B) between the points.
- More generally, any photometric variable by which it may be advantageous to define a bilateral filter may be used as the photometric property M.
- Thus, the bilateral filter may operate to correct errors in properties other than the intensity I of the image signal.
- For example, the bilateral filter may operate to correct errors in color, in which case I and Î in Eq. 1 would be replaced by suitable vector quantities representing unfiltered and filtered color components.
- The weight functions f(|p − q|) and g(M(p) − M(q)) can generally take any functional form that assigns a decreasing weight to more distant and more dissimilar pixels in an image, and the precise weight functions chosen may be selected based on a particular imaging application. Suitable weight functions may include, for example, decreasing polynomials of various degrees, and any of the numerous well-known distribution functions that assign decreasing weight to points further from a central value.
- One particular type of weight function that may be suitable is a Gaussian function, generally defined by G_σ(x) = exp(−x²/(2σ²)).
- For example, a bilateral filter may be defined by the product of a first Gaussian function of spatial separation characterized by a first width parameter σ_f, and a second Gaussian function of photometric separation characterized by a second width parameter σ_M.
- The photometric property M may be defined, for example, as intensity I such that the second weight factor is a Gaussian function of intensity difference.
- Here, σ_f and σ_M are independent width parameters, allowing any desired relative importance to be given to the spatial and photometric distance between pixels when applying the bilateral filter.
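A brute-force sketch of such a filter with two Gaussian weights, using intensity as the photometric property M, follows. This is an illustrative implementation, not code from the patent; the parameter names mirror the width parameters above, and a production version would be vectorized or use one of the well-known fast approximations.

```python
import numpy as np

def bilateral_filter(img, sigma_f=2.0, sigma_m=10.0, radius=4):
    """Bilateral filter with Gaussian spatial and photometric weights.

    sigma_f: width of the spatial Gaussian f(|p - q|)
    sigma_m: width of the photometric Gaussian g(I(p) - I(q))
    """
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_f ** 2))
    pad = np.pad(img.astype(float), radius, mode='edge')
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            photometric = np.exp(-(patch - img[y, x]) ** 2 / (2.0 * sigma_m ** 2))
            weight = spatial * photometric
            out[y, x] = (weight * patch).sum() / weight.sum()  # normalized weighted average
    return out
```

On a step edge whose height is many multiples of sigma_m, the photometric weight across the edge is effectively zero, so each side is averaged only with itself: the edge is preserved while low-amplitude temporal noise within each side is smoothed.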
- The bilateral filter smooths images while also taking into account photometric variation, and thus preserves edges better than a conventional smoothing filter. More specifically, in regions of nearly uniform intensity with image noise consisting mainly of temporal noise, photometric values of neighboring pixels are similar to each other, such that the photometric distribution function approaches unity and the filter operates as a conventional smoothing filter. However, at an intensity edge where p lies on the high-intensity side, the photometric function assigns high values to neighbors on the high-intensity side and low values to neighbors on the low-intensity side. In this case, the bilateral filter essentially considers only those pixels with intensity similar to p, and Î(p) becomes a weighted average of neighbors on the high-intensity side. Thus, noise is reduced while edge features are substantially preserved.
- As a result, a bilateral filter can effectively filter high-spatial-frequency data with relatively less blurring than a conventional filter.
- For example, a bilateral filter may be used to effectively remove row and column noise, which has high spatial frequency, from an image containing other high-spatial-frequency data. Because row and column noise will typically have significant intensity dissimilarity among near neighbors beyond the affected row or column, the bilateral filter may smooth this noise while preserving other high-spatial-frequency data.
- At the same time, a bilateral filter can reduce temporal noise in relatively smooth areas, as a conventional spatial filter would.
- By contrast, a conventional spatial Gaussian filter averages each pixel's properties among all near neighbors regardless of photometric differences, such that isolated noise is smoothed into the background but high-spatial-frequency information is blurred. As a result, a Gaussian filter would blur row and column noise into the image.
- Figure 1 shows a system, according to aspects of the present disclosure, for correcting fixed position non-uniformities, one of which is generally indicated at 10, in a focal plane array 12 observing a scene.
- Incoming image data 14 encounters one or more optical elements, generally depicted at 16, and may also encounter a dithering mechanism 18, before arriving at a focal plane array 12.
- Optical elements 16 may include, for example, one or more lenses, mirrors, apertures or the like configured to receive, direct and/or focus incoming image data, and to pass the image data toward the focal plane array.
- Focal plane array 12 generally includes a plurality of detectors 20 disposed within a substantially planar array.
- Detectors 20 may be any devices configured to receive image data, generally in the form of electromagnetic radiation, and to produce an image signal corresponding to a pixel in the acquired image in response.
- For example, each detector 20 may be capable of producing an image signal in response to visible, near-infrared, or infrared radiation.
- More generally, methods according to the present disclosure are suitable for use with detectors sensitive to any wavelength regime of electromagnetic radiation.
- A dithering mechanism 18 may be configured to spatially translate incoming image data 14. If utilized, dithering mechanism 18 moves along a dithering path 22, which spatially translates image data 14 in a known manner relative to focal plane array 12. Such translation may occur, for example, with fixed frequency and sub-pixel resolution, although any known dithering path, whether fixed or variable, may be suitable.
- Notably, dithering need not occur at particularly high frequencies, even when the system is mounted to a moving platform or used to image rapidly moving objects.
- In some embodiments, dithering mechanism 18 is configured to spatially translate incoming image data with a frequency of less than one cycle per second. This is in contrast to some prior art systems, which, as described previously, require relatively high-frequency dithering when scene information changes rapidly.
- The dithering motion changes the position over time of image data 14 relative to each detector 20 in the focal plane array.
- Thus, the effect of the dithering motion is to translate image data 14 across focal plane array 12 in a known path, thereby decoupling scene information from fixed non-uniformity errors in focal plane array 12.
- Decoupling permits a more accurate determination of detector non-uniformity errors, which can thus be corrected in subsequent frames (as described in detail below with respect to Fig. 5). Therefore, scene information may be more accurately rendered in the resultant image.
- Alternatively or in addition, other methods may be used to decouple scene information from non-uniformity errors. For example, a motion detector or estimator may be used to determine scene motion, as described in more detail below.
- A processor 24 may be configured to receive image signals produced by detectors 20 of focal plane array 12, apply a non-uniformity correction algorithm, including a bilateral filter, to the received signals to at least partially correct the image signals for non-uniformity errors among the detectors 20, and produce corrected image data.
- Processor 24 may be further configured to remove translation effects of dithering mechanism 18 from the signals and to iteratively update detector offset correction data based on non-uniformity errors removed by the bilateral filter (as described in detail below with respect to Fig. 5).
- Additionally, the processor may be configured to operate as a motion sensor or detector, updating detector offset correction data only when sufficient scene motion is detected.
- In some embodiments, processor 24 is configured to apply a bilateral filter defined using a combination of spatial separation and photometric separation between pixels.
- For example, processor 24 may be configured such that the photometric separation between pixels is a function of a difference in intensity between the pixels.
- Alternatively, processor 24 may be configured such that the photometric separation between pixels is a function of a difference in color between the pixels.
- More generally, processor 24 may be configured using any photometric attribute by which it may be advantageous to define a bilateral filter.
- In some embodiments, processor 24 is configured to apply a bilateral filter defined using the product of a first weight factor that depends on spatial separation between pixels and a second weight factor that depends on photometric separation between the pixels.
- For example, processor 24 may be configured to apply a bilateral filter defined such that the first weight factor is a Gaussian function of spatial separation characterized by a first width parameter, and the second weight factor is a Gaussian function of photometric separation characterized by a second width parameter.
- The photometric property may be defined, for example, as intensity such that the second weight factor is a Gaussian function of intensity difference.
- More generally, processor 24 may be configured to apply a bilateral filter defined using weight factors of any functional form that assigns a decreasing weight to more distant and more dissimilar pixels in an image, and the precise weight functions chosen may be selected based on a particular imaging application.
- In some embodiments, processor 24 is configured to apply a threshold bilateral filter.
- A threshold changes the neighborhood of pixels considered by the bilateral filter and thus may be used to improve non-uniformity correction results and enhance system performance.
- Specifically, processor 24 is configured such that the bilateral filter considers only those pixels meeting a predetermined criterion in its calculations. Thus, only pixels whose value of a chosen parameter falls below a certain threshold are considered by the bilateral filter, while pixels above the threshold are ignored.
- The threshold represents a maximum or minimum spatial or photometric difference that the filter will consider when calculating a correction for each pixel.
- The threshold may improve bilateral filter performance by reducing the number of pixels considered to those most likely to be relevant to the correction of a particular pixel.
- In general, a threshold filter may consider only those pixels with parameter values either above or below a threshold, depending on the parameter selected and the application.
- The predetermined criterion may, for example, be defined as photometric dissimilarity g(M(p) − M(q)) below a certain value, such that only those image points with photometric dissimilarity below that value are considered by the bilateral filter.
- In particular, photometric dissimilarity may be defined as the intensity difference (I(p) − I(q)) between pixels, so that the bilateral filter only considers pixels below a defined intensity difference. In some cases, this may improve the correction of detector non-uniformity errors while leaving naturally occurring intensity differences unchanged.
- Alternatively, the threshold may be defined by any other function that reflects the likelihood of a particular detector or group of detectors within a focal plane array exhibiting non-uniformity errors at a particular time.
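One way to realize such an intensity-difference criterion is to zero the weight of any neighbor whose intensity differs from the center pixel by more than a cutoff. The sketch below uses illustrative names and defaults, not the patent's implementation, and extends a Gaussian bilateral filter in this way:

```python
import numpy as np

def threshold_bilateral_filter(img, sigma_f=2.0, sigma_m=10.0,
                               radius=4, max_diff=25.0):
    """Bilateral filter that ignores neighbors whose intensity differs
    from the center pixel by more than max_diff."""
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_f ** 2))
    pad = np.pad(img.astype(float), radius, mode='edge')
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            diff = patch - img[y, x]
            weight = spatial * np.exp(-diff ** 2 / (2.0 * sigma_m ** 2))
            weight[np.abs(diff) > max_diff] = 0.0  # excluded by the threshold
            out[y, x] = (weight * patch).sum() / weight.sum()
    return out
```

Because the center pixel's own difference is zero, it always passes the criterion and the normalizing sum never vanishes; neighbors across a strong scene edge are excluded outright rather than merely down-weighted.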
- In some embodiments, processor 24 is configured to apply an adaptable threshold bilateral filter, in which the threshold value of photometric dissimilarity is adaptable to changing imaging conditions.
- Processor 24 may be configured to change the threshold in response to changes in image noise, contrast, non-uniformity, or any other parameter that may affect bilateral filter performance. For example, with a non-uniformity-based adaptable threshold, as residual non-uniformity increases, the threshold may change (adapt) so that more pixels are considered by the filter. Conversely, as residual non-uniformity decreases, the threshold may change in the opposite direction, so that fewer pixels are considered.
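Such a feedback rule might be sketched as follows; the residual-non-uniformity metric, adaptation rate, and bounds are illustrative assumptions, not values from the patent.

```python
import numpy as np

def residual_nonuniformity(img):
    """Crude residual metric: standard deviation of the column means,
    which grows with leftover column fixed-pattern noise."""
    return float(np.std(img.mean(axis=0)))

def adapt_threshold(threshold, residual_nu, target_nu,
                    rate=0.1, t_min=5.0, t_max=50.0):
    """Raise the photometric threshold when residual non-uniformity exceeds
    the target (so the filter considers more pixels), and lower it when the
    residual falls below the target; the result is clamped to [t_min, t_max]."""
    threshold = threshold * (1.0 + rate * np.sign(residual_nu - target_nu))
    return float(np.clip(threshold, t_min, t_max))
```

Called once per frame on the corrected output, a rule of this kind widens the filter's neighborhood while fixed-pattern noise is still visible and narrows it as the correction converges.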
- In many applications, a threshold that adapts to changing imaging conditions may be advantageous.
- Fig. 2 illustrates an imaging system 100 mounted to a helicopter, which is a common application where an adaptable threshold bilateral filter may be particularly suitable.
- Indeed, any moving platform application is likely to benefit from implementation of an adaptable threshold bilateral filter, and the filter also may be advantageous for viewing rapidly changing scene information from a stationary platform.
- The corrected image data may be directed to a real-time display 26, recorded for subsequent use and/or analysis, or both.
- Figure 3 depicts a method, generally indicated at 27, for correcting focal plane array non-uniformity errors using a bilateral filter, according to aspects of the present disclosure.
- First, image data is received at a substantially planar array of detectors, and image signal 28 is produced by the array of detectors in response to the received image data.
- Image signal 28 produced by the detectors may be partially corrected by means of a gain correction table 30 to produce an approximately gain-corrected image signal.
- Gain is a measure of the response gradient of each detector 20.
- Gain correction table 30 provides for at least partial correction of response-gradient non-uniformities between detectors 20.
- Typically, gain correction table 30 will be determined for focal plane array 12 at the time of its manufacture. However, any suitable means may be used to determine response gradients for focal plane array 12 such that an approximate gain correction table may be determined.
- The gain-corrected image signal may be further partially corrected by means of offset correction table 32 to produce an approximately gain- and offset-corrected image signal 48.
- Initial offset correction approximately corrects for the differential response of each detector 20 to a fixed input signal, typically a "dark" signal corresponding to zero or near-zero input, although an offset correction corresponding to any fixed input may be used.
- Typically, the offset correction table 32 will be determined for the focal plane array at the time of manufacture.
- Thereafter, offset correction table 32 is calculated from residual fixed-position noise filtered from approximately gain- and offset-corrected image signals.
- In general, any suitable means may be used to determine detector response differences for focal plane array 12 such that an approximate offset correction table may be determined.
- bilateral filter 34 is applied to image signal 28 produced by the detectors, which may be approximately gain- and offset-corrected, to at least partially correct for non-uniformity errors among the detectors 20.
- bilateral filter 34 may be defined using a combination of spatial separation and photometric separation, such as a difference in intensity between pixels.
- bilateral filter 34 may be defined using the product of a first weight factor that depends on spatial separation between pixels and a second weight factor that depends on photometric separation between the pixels.
- the first weight factor is a Gaussian function of spatial separation characterized by a first width parameter
- the second weight factor is a Gaussian function of photometric separation characterized by a second width parameter.
- the photometric property may be defined, for example, as intensity such that the second weight factor is a Gaussian function of intensity difference.
- bilateral filter 34 may be configured using any photometric property and weight functions, depending on the details of a particular imaging application.
- filtered image 50 may be shifted by a demodulator 36, to remove translation effects of dithering (as described in detail below with respect to Fig. 5) before displaying image 38 to the user.
Adaptable Threshold Bilateral Filter
- Figure 4 illustrates a method 27' of non-uniformity correction which is similar to method 27 of Fig. 3, except that method 27' of Fig. 4 includes the use of an adaptable threshold bilateral filter 40.
- bilateral filter 34 considers only those pixels meeting a predetermined criterion, as defined by threshold 44, in its calculations.
- threshold 44 represents a maximum spatial or photometric difference that the filter will consider when calculating a correction for a particular image point.
- the filter may be applied to pixels with parameter values above (rather than below) a threshold, depending on the parameter selected and the application.
- Threshold 44 may be defined, for example, where the predetermined criterion is photometric dissimilarity g(M(p)-M(q)) such that only pixels with photometric dissimilarity below a certain value are considered by bilateral filter 34.
- threshold 44 may be defined by any other function that reflects the likelihood of a particular detector or group of detectors within a focal plane array to exhibit non-uniformity errors at a particular time.
- threshold 44 may change, such that the threshold value of photometric dissimilarity is adaptable to changing imaging conditions. With an adaptable threshold, as residual non-uniformity increases, threshold 44 adapts and more scene information is filtered.
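The predetermined criterion can be sketched as a modified photometric weight. The hard cutoff below is one hedged reading of threshold 44, assuming intensity is the photometric property and a Gaussian form for the remaining weight; neither choice is mandated by the disclosure.

```python
import numpy as np

def threshold_weight(d_intensity, sigma_m, threshold):
    """Photometric weight g(M(p)-M(q)) with threshold 44: pixels whose
    photometric dissimilarity meets or exceeds the threshold are excluded
    (weight 0); the Gaussian form below is one illustrative choice."""
    if abs(d_intensity) >= threshold:
        return 0.0
    return float(np.exp(-(d_intensity ** 2) / (2.0 * sigma_m ** 2)))
```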
- adaptable threshold bilateral filter 40 typically operates by iteratively calculating a non-uniformity correction metric ("NUC metric") 42 and adjusting threshold 44 in response to changes in the NUC metric.
- the NUC metric may be calculated using any parameter that is expected to change in response to iterative non-uniformity error correction by the system. For example, in one embodiment where detector column noise is the primary source of detector non-uniformity error, the intensity difference between affected columns and scene information in neighboring columns may be used to calculate the NUC metric.
- the intensity difference between affected columns and scene information in neighboring columns will decrease, and the NUC metric will correspondingly decrease.
- the NUC metric may be calculated based on image noise, contrast, non-uniformity, or any other parameter that may affect bilateral filter performance.
- threshold 44 may then be updated in response to changes in the NUC metric.
- threshold 44 is updated such that, as the NUC metric decreases, threshold 44 decreases and the neighborhood of pixels considered by bilateral filter 34 decreases.
- threshold 44 increases and the neighborhood considered by bilateral filter 34 increases.
- threshold 44 may be calculated from the NUC metric using any algorithm that is responsive to changes in non-uniformity error, depending on the details of a particular imaging application.
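The iterative metric-and-threshold loop might look as follows. The column-difference metric and the fixed-step update rule are illustrative assumptions; the patent permits any metric and any update algorithm responsive to changes in non-uniformity error.

```python
import numpy as np

def column_nuc_metric(img):
    """NUC metric sketch for column noise: mean absolute difference between
    each interior column and the average of its two neighboring columns."""
    neighbors = (np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1)) / 2.0
    return float(np.mean(np.abs(img[:, 1:-1] - neighbors[:, 1:-1])))

def update_threshold(threshold, metric, prev_metric, step=0.1):
    """Raise threshold 44 when the NUC metric grows (filter more scene
    information); lower it as the metric decreases (shrink the neighborhood
    considered by the bilateral filter)."""
    if metric > prev_metric:
        return threshold + step
    return max(threshold - step, 0.0)

uniform = np.full((4, 5), 7.0)          # no column noise -> metric of 0
noisy = uniform.copy()
noisy[:, 2] += 3.0                      # one bright (noisy) column
```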
- bilateral filter 34 then operates to remove non-uniformity noise from each image point, as described above.
- filtered image 50 again may be shifted by a demodulator 36, to remove translation effects of dithering (as described in detail below with respect to Fig. 5) before displaying image 38 to the user.
- Figure 5 illustrates a method 27" of non-uniformity correction which is similar to method 27' of Fig. 4, except that method 27" of Fig. 5 also includes a flow to iteratively update detector offset correction data based on non-uniformity errors corrected by the bilateral filter, generally indicated at 46.
- In method 27", mechanical dithering may be used to obtain non-uniformity error data from noise removed by bilateral filter 34.
- Detector offset correction table 32 is then iteratively updated based on this data.
- a similar scheme may be used to update gain correction table 30, although this may not be necessary when offset table 32 is updated frequently and/or at input levels corresponding to the actual imaging temperature of each detector.
- a motion detector or estimator may be used to determine when the offset and/or gain correction tables are to be updated.
- a reference image frame is first saved by the processor. As each subsequent image frame is captured, a determination is made as to whether scene motion has occurred relative to the reference frame.
- Motion detection can include an actual determination of the amount of scene motion, or motion may be detected without measuring the amount of motion. Detecting motion without calculating the precise amount of motion may be less computationally intensive than measuring the amount of the motion.
- a simple motion detector may project each captured image frame onto two vectors: a horizontal "profile" vector and a vertical "profile" vector. Correlation between the profile vectors of the reference frame and the captured frame then gives a rough estimate of scene motion. If there is sufficient motion detected, noise image 52 may be sent to iterative flow 46, including (temporal) low pass filter 54 and integrator 58. If there is insufficient motion detected, no offset and/or gain map update may be performed. When a motion estimator is used, the reference image is also generally periodically updated. This can be done on a fixed schedule (e.g., every 200 frames) or when sufficient motion of a captured frame relative to the previous reference frame has been measured.
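The profile-vector motion detector might be sketched as follows; the correlation search range and the nonzero-best-shift criterion are assumptions for illustration, not the patent's exact test.

```python
import numpy as np

def detect_motion(reference, frame, shift_limit=3):
    """Project each frame onto horizontal and vertical profile vectors and
    correlate them against the reference profiles over small shifts; report
    motion when the best correlation occurs at a nonzero shift."""
    def best_shift(a, b):
        a = a - a.mean()
        b = b - b.mean()
        if np.allclose(a, 0.0) or np.allclose(b, 0.0):
            return 0                    # featureless profile: no evidence of motion
        scores = [np.dot(a, np.roll(b, s))
                  for s in range(-shift_limit, shift_limit + 1)]
        return int(np.argmax(scores)) - shift_limit
    h_shift = best_shift(reference.sum(axis=0), frame.sum(axis=0))
    v_shift = best_shift(reference.sum(axis=1), frame.sum(axis=1))
    return h_shift != 0 or v_shift != 0

# Illustrative scene: one bright column and one bright row.
ref = np.zeros((8, 8))
ref[:, 2] = 10.0
ref[3, :] += 5.0
```

Correlating one-dimensional profiles instead of full frames keeps the check cheap, consistent with detecting motion without measuring its precise amount.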
- Residual noise image 52 commonly includes detector non-uniformity noise, residual scene information, and temporal noise.
- Dithering mechanism 18 and/or scene motion spatially translates image data and associated temporal noise, but not detector non-uniformity noise, relative to focal plane array 12.
- residual scene information and associated temporal noise in residual noise image 52 are spatially translated along dithering path 22 and/or along the path of the scene motion, while detector non-uniformity noise remains relatively fixed.
- low pass filter 54 may be used to obtain the relatively low frequency detector non-uniformity noise 56 from the residual noise image 52, while filtering out the relatively higher frequency residual scene information and associated temporal noise. Because detector non-uniformity noise is relatively fixed, neither dithering nor scene motion need occur at particularly high frequency for low pass filter 54 to remove residual scene information and associated temporal noise from residual noise image 52. For example, in some embodiments according to the present teachings, spatial translation due to dithering may occur with frequency less than one cycle per second.
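A single-pole recursive filter is one hedged way to realize (temporal) low pass filter 54; the coefficient value below is illustrative, and the patent does not specify a particular filter structure.

```python
def temporal_low_pass(prev_estimate, residual_pixel, alpha=0.05):
    """Recursive temporal low-pass: the slowly varying fixed-pattern noise is
    retained in the running estimate, while dithered scene content and
    temporal noise, which change quickly from frame to frame, average out."""
    return (1.0 - alpha) * prev_estimate + alpha * residual_pixel
```

Applied per pixel over successive residual noise images 52, the running estimate converges toward the stationary detector non-uniformity noise 56.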
- detector non-uniformity noise 56 is used to update offset correction table 32 by means of integrator 58.
- Integrator 58 calculates the inverse of detector non-uniformity noise image 56, which corresponds to the degree of non-uniformity error in the image. Integrator 58 then uses the inverse of detector non-uniformity noise image 56 to update offset correction table 32.
- Various algorithms may be used by integrator 58 to produce updated offset correction table 60, which replaces the prior offset correction table 32 in the system. For example, the inverse noise image may simply replace the previous values in the offset table. Alternatively, the inverse noise image may be averaged with or otherwise combined with the previous values in the offset table, to produce an updated correction table.
- integrator 58 is adjustable by means of constant block 62, which may be adjusted to either slow or quicken the responsiveness of the integrator to changes in detector non-uniformity noise image 56.
- constant block 62 may be adjusted to rapidly update offset correction table 32 in response, while in applications where stability is expected, constant block 62 may be adjusted to slowly update offset correction table 32.
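Integrator 58 and constant block 62 might be sketched as follows; the fractional-update rule is one assumption consistent with the "averaged with or otherwise combined" language above, not the only algorithm the disclosure contemplates.

```python
import numpy as np

def integrate_offset(offset_table, noise_image, k=0.2):
    """Blend the inverse of the detector non-uniformity noise image into the
    offset correction table. k stands in for constant block 62: a larger k
    updates the table quickly, a smaller k favors stability."""
    return offset_table + k * (-noise_image)

offset32 = np.zeros((2, 2))             # prior offset correction table
noise56 = np.ones((2, 2))               # detected fixed-pattern noise
offset60 = integrate_offset(offset32, noise56, k=0.2)
```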
- the updated offset table is then applied to the next image signal, and the partially corrected image proceeds through a bilateral filter just as in methods 27 and 27'.
- translation effects (if any) of dithering mechanism 18 may be removed from filtered image 50 by means of demodulator 36.
- Demodulator 36 stores each filtered image 50 in a frame buffer, where a signal is applied to remove translation effects of dithering mechanism 18 when dithering is employed.
- the image 38 presented to the user appears stable and free of system-induced motion.
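Demodulator 36 can be sketched as an inverse shift by the known dither offset; whole-pixel np.roll stands in for whatever sub-pixel-capable shift the actual dither path 22 would require.

```python
import numpy as np

def demodulate(filtered_image, dither_offset):
    """Shift the filtered frame back by the known dither offset (dy, dx) so
    the displayed image 38 appears free of system-induced motion."""
    dy, dx = dither_offset
    return np.roll(filtered_image, shift=(-dy, -dx), axis=(0, 1))

scene = np.zeros((5, 5))
scene[2, 3] = 1.0                       # a single bright feature
dithered = np.roll(scene, shift=(1, 1), axis=(0, 1))
restored = demodulate(dithered, (1, 1))
```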
- the present disclosure contemplates methods and apparatus suitable for applying a bilateral filter 34 and/or a demodulator 36 with or without the adaptable threshold depicted in Fig. 4, and with or without the iteratively adjusted offset table depicted in Fig. 5.
- the disclosure set forth above may encompass multiple distinct inventions with independent utility.
- the disclosure relates information regarding specific embodiments, which are included for illustrative purposes, and which are not to be considered in a limiting sense, because numerous variations are possible.
- the inventive subject matter of the disclosure includes all novel and nonobvious combinations and subcombinations of the various elements, features, functions, and/or properties disclosed herein.
- the following claims particularly point out certain combinations and subcombinations regarded as novel and nonobvious.
Abstract
Correction of spatial nonuniformities among detectors in a focal plane array. Incoming image data is incident on the array, and the resulting image signals are corrected with a bilateral filter. The bilateral filter accounts for edge effects by filtering based both on spatial separation between image points and photometric separation between image points.
Description
NON-UNIFORMITY ERROR CORRECTION WITH A BILATERAL FILTER
Background
Focal plane arrays are used to digitally capture electromagnetic radiation, and are common components of digital imaging systems. A focal plane array comprises an array of individual detectors, generally photodiodes, which produce current when exposed to photons of sufficient energy. This current, later converted to a voltage, may be treated in a variety of ways to generate an image that is displayed to the end user. Individual detectors in focal plane arrays are subject to non-uniformity errors. Specifically, if two detectors at a uniform temperature are exposed to the same scene information, their response voltages may differ. Non-uniformity errors are described in terms of offset (the difference in response to a calibrated input from the expected response) and gain (the difference in slope of the line between two responses to two calibrated inputs from the expected slope). These errors increase the minimum detectable signal, decrease the signal-to-noise ratio, and thus have a detrimental effect on system performance. The effect is particularly severe for high performance, high sensitivity detector arrays. Various solutions to the problem of detector non-uniformity error have been developed. These solutions generally involve exposing the detector array to calibrated electromagnetic radiation. Under the assumption that all detectors should exhibit equal signals when exposed to an equal intensity of radiation, response deviations from the nominal signals are used to obtain offset and gain correction maps, which are then applied to detected signals before displaying the image data.
Perhaps the simplest way of obtaining correction maps is the two-point correction method, in which uniform radiation at two different precisely known temperatures is applied to the array and the response of each detector is measured. The offset and gain response of each detector is then calculated by approximating the response as a line through these two calibration points. For a given detector, the extrapolation of the line through any particular
temperature gives the offset response of the detector, and the gain is just the slope of the response line.
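To make the two-point procedure concrete, here is a minimal Python sketch; the variable names and the assumed linear response model r = gain·t + offset are illustrative.

```python
def two_point_calibration(r1, r2, t1, t2):
    """Fit a line through a detector's responses r1, r2 to two calibrated
    uniform inputs t1, t2: gain is the slope of the response line, and the
    offset is the extrapolated response at zero input (assumed linear model)."""
    gain = (r2 - r1) / (t2 - t1)
    offset = r1 - gain * t1
    return gain, offset

# Detector reading 10.0 at calibrated input 100.0 and 30.0 at input 300.0:
gain, offset = two_point_calibration(10.0, 30.0, 100.0, 300.0)
```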
Because the detectors are assumed to be linear devices, correcting detector gain and offset at one set of exposure intensity levels is assumed to allow correction at all other levels. However, detectors often do not respond linearly, resulting in errors even after correction by the two-point method. In one enhancement of the two-point method, detector response is measured at various temperatures to experimentally obtain offset and gain corrections for each detector over the range of expected operating temperatures. Because detector response typically changes non-linearly with temperature, measuring at various temperatures is commonly more accurate than the two-point correction method.
Commonly, focal plane array detectors are tested under controlled conditions after manufacture, for example using the two-point correction method, to derive an offset table and/or a gain table. Such tables might include, for example, the gain of each detector and the offset of the detector at one particular temperature, allowing the offset to be calculated at any other temperature for each detector in an array. These tables are fixed after testing and used to correct for each detector's unique non-uniformity error. However, because non-uniformity errors may be nonlinear and may vary over time and under different operating conditions, correction using data solely from calibration shortly after manufacture is error-prone. Thus, manufacturers have developed non-uniformity error correction methods that can be performed in the field. For example, one proposal involves application of known electromagnetic radiation signals using shutters, mirrors, or the like to achieve a two-point calibration during operation. As in the two-point correction method described above, correction values are calculated for each detector in the array based on the assumption that all detectors should exhibit equal signals when exposed to an equal intensity of radiation. Because the in-field method may be performed repeatedly during operation, rather than once
following manufacture, non-uniformity errors that evolve over time or in response to operating conditions may be corrected.
One specific in-field calibration approach has been to periodically place a mechanical paddle, designed to act as a uniform radiation source, in front of the detector array such that an image of the ideally uniform paddle surface is captured. Because the paddle is assumed to be uniform, deviations in individual detectors are treated as errors, from which a revised offset table may be derived. However, this particular approach has several drawbacks. First, scene acquisition must be interrupted, which is undesirable where continuously obtaining scene information is required or preferred. Second, because the paddle is treated as an ideal surface, actual non-uniformities in the reflectivity of the paddle will mistakenly be assumed to be detector non- uniformity error and "burned in" to the offset table. Furthermore, paddle systems correct detector response at only one electromagnetic frequency at a time, making it difficult or impossible to correct for errors that vary as a function of radiation wavelength or intensity.
Another general approach to non-uniformity error correction in focal plane arrays has been to image substantially identical scene information on a plurality of detectors within the array. By comparing the resultant output from a plurality of detectors exposed to substantially identical scene information, detector non-uniformity errors, which remain spatially fixed, may be identified and corrected. In this general method, scene information is translated across the focal plane array either passively, by comparing subsequent frames of a moving set of objects, or actively, by introducing mechanical motion such as dithering in the imaging system. Under either method, because scene information has been translated across the array, multiple detectors are assumed to have seen substantially identical scene information in subsequent frames. Thus, any differences between reported values are assumed to be due to detector non-uniformity errors, and may be used to derive the offset and gain correction tables.
For example, U.S. Pat. No. 6,507,018 describes a passive method of non-uniformity correction. In the passive method described by U.S. Pat.
No. 6,507,018, a determination is first made whether sufficient relative motion exists between the scene and focal plane array. If so, the difference in temporal frequency between the high-frequency moving scene information and the low-frequency stationary detector non-uniformity errors may permit the two signals to be decoupled. Then, a spatial low pass filter algorithm is used to perform corrections to the high-temporal-frequency moving scene information. The algorithm estimates each detector's offset and gain value by comparing each detector's response to a local response average among neighboring detectors that are exposed to substantially identical scene information. This method may be performed iteratively, to update the offset and gain tables in response to changing conditions. However, one limitation to this method is that sufficient scene motion is required to separate scene information from detector non-uniformity error. To address this limitation, intentional motion such as mechanical dithering may be introduced. An example of a prior art nonuniformity correction method involving mechanical dithering is described in U.S. Pat. No. 5,925,880. In the described method, periodic mechanical motion is introduced to the imaging system such that under the assumption of a slowly changing scene, multiple detectors are exposed to substantially identical scene information. As in the passive method, neighborhood averaging is used to determine the ideal output for each detector, and thus to derive updated gain and offset correction update tables. This method, however, suffers from several drawbacks. First, because the algorithm assumes a precise magnitude of scene translation across the detector, the method is very sensitive to dither translation accuracy. Second, any detector with a response substantially different from the ideal value will skew the average detector response of those detectors in its neighborhood.
Furthermore, because the method assumes a slowly changing scene, performance can suffer during rapid scene movement. This potential decline in performance may be overcome to some extent by increasing the dithering frequency, but such an increase may come at the expense of dithering accuracy and mechanical reliability. Therefore, the mechanical dithering
method is not particularly suitable for applications with rapid scene motion, such as moving platform applications.
Another nonuniformity correction method using mechanical dithering is described in U.S. Pat. No. 5,925,875. The described method entails passing the dithered image through a temporal high pass filter. Because dithering introduces known periodic motion to scene information, the high pass filter passes high temporal frequency scene information while removing low temporal frequency non-uniformity error. The image is then restored either by time-delay-integration (TDI) or by spatial inversion. In TDI, detectors imaging similar scene information in consecutive frames are matched, and their signals are time-averaged over several frames, from which the image may be restored. In spatial inversion, effects of dithering and filtering are reduced using precise knowledge of the dither pattern and high pass filter response, such that an image signal is constructed from the filtered signal. Both of the methods disclosed in U.S. Pat. No. 5,925,875 suffer from several weaknesses. The temporal high pass filter algorithm requires storage and real-time access of several full image frames simultaneously. Because of the storage limitations of embedded memory technology, the application of this method to higher resolution images is limited. Second, the high pass filter will pass all high temporal frequency information, whether induced by dithering or by imaging platform motion. Deconvolution of these two sources of motion introduces severe complications in a moving platform application. Under the TDI technique, where detectors measuring similar scene information in consecutive frames are matched, the platform motion must be measured and compensated for in the detector matching process. Platform motion may be measured either with mechanical devices or additional image processing, but at a penalty of size, cost, and complexity. 
Under the spatial inversion technique, where the image signal is constructed using precise knowledge of the dither pattern and filter response to remove image distortion, the algorithm will need to take platform motion into account. As in the TDI case, precise knowledge of platform motion will be needed, at the penalty of size, cost, and complexity.
In light of the foregoing, there is a need for a relatively simple and robust method to provide non-uniformity error correction of detectors in a focal plane array, particularly in a high resolution imaging system and in a moving platform application.
Summary
The present teachings disclose methods and apparatus for correction of spatial non-uniformities among detectors in a focal plane array. Incoming image data is incident on the array, and the resulting image signals are corrected with a bilateral filter. The bilateral filter accounts for edge effects by filtering based both on spatial separation between image points and photometric separation between image points.
Definitions
Technical terms used in this disclosure have the meanings that are commonly recognized by those skilled in the art. However, the following terms may have additional meanings, as described below. The wavelength ranges identified in these meanings are exemplary, not limiting, and may overlap slightly, depending on source or context. The wavelength ranges lying between about 1 nm and about 1 mm, which include ultraviolet, visible, and infrared radiation, and which are bracketed by x-ray radiation and microwave radiation, may collectively be termed optical radiation.
Ultraviolet radiation. Invisible electromagnetic radiation having wavelengths from about 100 nm, just longer than x-ray radiation, to about 400 nm, just shorter than violet light in the visible spectrum. Ultraviolet radiation includes (A) UV-C (from about 100 nm to about 280 or 290 nm), (B) UV-B (from about 280 or 290 nm to about 315 or 320 nm), and (C) UV-A (from about 315 or 320 nm to about 400 nm).
Visible light. Visible electromagnetic radiation having wavelengths from about 360 or 400 nanometers, just longer than ultraviolet radiation, to about 760 or 800 nanometers, just shorter than infrared radiation. Visible light may be imaged and detected by the human eye and includes violet (about 390-425 nm), indigo (about 425-445 nm), blue (about 445-500 nm),
green (about 500-575 nm), yellow (about 575-585 nm), orange (about 585-620 nm), and red (about 620-740 nm) light, among others.
Infrared (IR) radiation. Invisible electromagnetic radiation having wavelengths from about 700 nanometers, just longer than red light in the visible spectrum, to about 1 millimeter, just shorter than microwave radiation. Infrared radiation includes (A) IR-A (from about 700 nm to about 1,400 nm), (B) IR-B (from about 1,400 nm to about 3,000 nm), and (C) IR-C (from about 3,000 nm to about 1 mm). IR radiation, particularly IR-C, may be caused or produced by heat and may be emitted by an object in proportion to its temperature and emissivity. Portions of the infrared having wavelengths between about 3,000 and 5,000 nm (i.e., 3 and 5 μm) and between about 7,000 or 8,000 and 14,000 nm (i.e., 7 or 8 and 14 μm) may be especially useful in thermal imaging, because they correspond to minima in atmospheric absorption and thus are more easily detected (particularly at a distance). The particular interest in relatively shorter wavelength IR has led to the following classifications: (A) near infrared (NIR) (from about 780 nm to about 1,000 nm), (B) short-wave infrared (SWIR) (from about 1,000 nm to about 3,000 nm), (C) mid-wave infrared (MWIR) (from about 3,000 nm to about 6,000 nm), (D) long-wave infrared (LWIR) (from about 6,000 nm to about 15,000 nm), and (E) very long-wave infrared (VLWIR) (from about 15,000 nm to about 1 mm). Portions of the infrared, particularly portions in the far or thermal IR having wavelengths between about 0.1 and 1 mm, may alternatively, or in addition, be termed millimeter-wave (MMV) wavelengths.
Brief Description of the Drawings Figure 1 is a schematic diagram showing elements of an image correction system for correcting focal plane non- uniformity errors, according to aspects of the present disclosure.
Figure 2 depicts an airborne moving platform suitable for use with the apparatus depicted in Fig. 1 and methods depicted in Figs. 3-5. Figure 3 is a flowchart depicting a method of correcting focal plane non-uniformity errors, according to aspects of the present disclosure.
Figure 4 is a flowchart depicting another method of correcting focal plane non-uniformity errors, according to aspects of the present disclosure.
Figure 5 is a flowchart depicting another method of correcting focal plane non-uniformity errors, according to aspects of the present disclosure.
Detailed Description
This disclosure describes a system and method for correcting focal plane array non-uniformity errors using a bilateral filter. The term "bilateral filter" is used in this disclosure to mean an image smoothing filter that simultaneously considers location similarity and attribute similarity between pixels, where a "pixel" refers to the portion of a displayed image corresponding to one detector in a detector array. For example, a bilateral filter may be defined using a combination of spatial separation and photometric separation. More specifically, a bilateral filter according to the present teachings may be defined using the product of a first weight factor that depends on spatial separation between pixels and a second weight factor that depends on photometric separation between pixels. Thus, the weight afforded to a particular pixel by the filter depends on both its spatial distance and its photometric similarity to the pixel upon which the filter is operating. To more precisely define the bilateral filters disclosed herein, the following terminology will be used:
• S represents the spatial domain, consisting of the set of all possible positions in an image;
• p, q are vectors representing 2-dimensional positions (x,y) in an image;
• I represents an unfiltered image intensity, and Ĩ represents an image intensity to which a bilateral filter has been applied;
• f(|p−q|) represents a function of the difference in spatial positions of points in an image; and
• M(p) represents a general photometric property (such as intensity or color) of point p in an image, and g(M(p)−M(q)) represents a function of the difference in values of this property.
Using this terminology, a bilateral filter according to the present teachings may be generally represented by the following equation:
Ĩ(p) = (1/W_p) Σ_{q∈S} f(|p−q|) g(M(p)−M(q)) I(q) ,  (1)
where the denominator is a normalization factor defined by:
W_p = Σ_{q∈S} f(|p−q|) g(M(p)−M(q)) .  (2)
In Eq. 1, the function f(|p−q|) is a weight factor that depends on the spatial separation between point q and point p of the image, and g(M(p)−M(q)) is a weight factor that depends on the photometric separation, i.e., the difference in photometric property M, between point q and point p of the image. Photometric separation between image points may be a function of a difference in intensity or color (R, G and/or B) between the points. However, any photometric variable by which it may be advantageous to define a bilateral filter may be used as the photometric property M. Furthermore, the bilateral filter may operate to correct errors in properties other than the intensity I of the image signal. For instance, the bilateral filter may operate to correct errors in color, in which case I and Ĩ in Eq. 1 would be replaced by suitable vector quantities representing unfiltered and filtered color components.
The weight factors f(|p−q|) and g(M(p)−M(q)) can generally take any functional form that assigns a decreasing weight to more distant and more dissimilar pixels in an image, and the precise weight functions chosen may be selected based on a particular imaging application. Suitable weight functions may include, for example, decreasing polynomials of various degrees, and any of the numerous well-known distribution functions that assign decreasing weight to points further from a central value. One particular
type of weight function that may be suitable is a Gaussian function, generally defined by
G_σ(x) = exp(−x²/(2σ²)) ,  (3)
where σ is a width parameter that determines the size of the neighborhood effectively considered by the weight function. For example, a bilateral filter may be defined by the product of a first Gaussian function of spatial separation characterized by a first width parameter σf, and a second Gaussian function of photometric separation characterized by a second width parameter σM. Furthermore, the photometric property M may be defined, for example, as intensity I such that the second weight factor is a Gaussian function of intensity difference. Substituting Gaussian distributions of the form of Eq. 3 into Eq. 1, and using the intensity I as the photometric property M, a suitable bilateral filter according to the present teachings is described by the equation:
Ĩ(p) = (1/W_p) Σ_{q∈S} exp(−|p−q|²/(2σf²)) exp(−(I(p)−I(q))²/(2σM²)) I(q) ,  (4)
where W_p is the corresponding normalization factor. In Eq. 4, σf and σM are independent width parameters, allowing any desired relative importance to be given to the spatial and photometric distance between pixels when applying the bilateral filter.
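As an illustration only (not part of the patent disclosure), a direct Python sketch of the Gaussian bilateral filter of Eq. 4 follows; the width parameters, neighborhood radius, and test image are assumed values chosen for clarity rather than performance.

```python
import numpy as np

def bilateral_filter(img, sigma_f=1.0, sigma_m=10.0, radius=2):
    """Direct (unoptimized) form of Eq. 4: each output pixel is a normalized
    sum over nearby pixels q, weighted by a spatial Gaussian of |p - q| and a
    photometric Gaussian of the intensity difference I(p) - I(q)."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    qy, qx = y + dy, x + dx
                    if 0 <= qy < h and 0 <= qx < w:
                        w_s = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma_f ** 2))
                        d = img[y, x] - img[qy, qx]
                        w_m = np.exp(-(d * d) / (2.0 * sigma_m ** 2))
                        num += w_s * w_m * img[qy, qx]
                        den += w_s * w_m
            out[y, x] = num / den       # normalization by W_p
    return out

# A noise-free step edge: columns 0-2 dark, columns 3-5 bright.
step = np.zeros((4, 6))
step[:, 3:] = 100.0
filtered = bilateral_filter(step)
```

Because the intensity jump across the edge (100) is large relative to σM (10), cross-edge weights are negligible and the edge survives filtering essentially intact, illustrating the edge-preserving behavior described above.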
By defining a bilateral filter using both a function of spatial distance f(|p−q|) between pixels and a function of photometric difference g(M(p)−M(q)) between pixels, the bilateral filter smoothes images while also taking into account photometric variation, and thus preserves edges better than a conventional smoothing filter. More specifically, in regions of nearly uniform intensity with image noise consisting mainly of temporal noise, photometric values of neighboring pixels are similar to each other such that the photometric distribution function approaches unity, and the filter operates as a conventional smoothing filter. However, at an intensity edge, where p
lies on the high intensity side, the photometric function assigns high values for neighbors on the high intensity side and low values for neighbors on the low intensity side. In this case, the bilateral filter essentially considers only those pixels with similar intensity to p, and Ĩ(p) becomes a weighted average of neighbors on the high intensity side. Thus, noise is reduced while edge features are substantially preserved.
By simultaneously considering spatial and photometric information, a bilateral filter can effectively filter high spatial frequency data with relatively less blurring than a conventional filter. For example, a bilateral filter may be used to effectively remove row and column noise, which has high spatial frequency, from an image containing other high spatial frequency data. Because row and column noise will typically have significant intensity dissimilarity among near neighbors beyond the affected row or column, the bilateral filter may smooth this noise while preserving other high spatial frequency data. Additionally, a bilateral filter can reduce temporal noise in relatively smooth areas, as a conventional spatial filter would. In contrast, a conventional spatial Gaussian filter averages each pixel's properties among all near neighbors regardless of photometric differences, such that isolated noise is smoothed into the background but high spatial frequency information is blurred. In such cases, a Gaussian filter would blur row and column noise into the image.
I. System Overview
Figure 1 shows a system, according to aspects of the present disclosure, for correcting fixed position non-uniformities, one of which is generally indicated at 10, in a focal plane array 12 observing a scene. Incoming image data 14 encounters one or more optical elements, generally depicted at 16, and may also encounter a dithering mechanism 18, before arriving at a focal plane array 12. Optical elements 16 may include, for example, one or more lenses, mirrors, apertures or the like configured to receive, direct and/or focus incoming image data, and to pass the image data toward the focal plane array. Focal plane array 12 generally includes a
plurality of detectors 20, disposed within a substantially planar array.
Detectors 20 may be any devices configured to receive image data, generally in the form of electromagnetic radiation, and to produce in response an image signal corresponding to a pixel in the acquired image. For example, each detector 20 may be capable of producing an image signal in response to visible, near-infrared, or infrared radiation. However, methods according to the present disclosure are suitable for use with detectors sensitive to any wavelength regime of electromagnetic radiation. Concurrent with the acquisition of scene information, a dithering mechanism 18 may be configured to spatially translate incoming image data 14. If utilized, dithering mechanism 18 moves along a dithering path 22, which spatially translates image data 14 in a known manner relative to focal plane array 12. Such translation may occur, for example, with fixed frequency and sub-pixel resolution, although any known dithering path, whether fixed or variable, may be suitable.
When systems according to the present disclosure include a dithering mechanism 18, dithering need not occur at particularly high frequencies, even when the system is mounted to a moving platform or used to image rapidly moving objects. For instance, in one embodiment, dithering mechanism 18 is configured to spatially translate incoming image data with a frequency of less than one cycle per second. This is in contrast to some prior art systems, which, as described previously, require relatively high-frequency dithering when scene information changes rapidly. In systems including a dithering mechanism 18, the dithering motion changes the position over time of image data 14 relative to each detector 20 in the focal plane array. The effect of the dithering motion is to translate image data 14 across focal plane array 12 in a known path, thus decoupling scene information from fixed non-uniformity errors in focal plane array 12. Decoupling permits a more accurate determination of detector non-uniformity errors, which can thus be corrected in subsequent frames (as described in detail below with respect to Fig. 5). Therefore, scene information may be more
accurately rendered in the resultant image. Aside from a dithering mechanism, other methods may be used to decouple scene information from non-uniformity errors. For example, a motion detector or estimator may be used to determine scene motion, as will be described in more detail below. A processor 24 may be configured to receive image signals produced by detectors 20 of focal plane array 12, apply a non-uniformity correction algorithm, including a bilateral filter, to the received signals to at least partially correct the image signals for non-uniformity errors among the detectors 20, and produce corrected image data. In systems including dithering mechanism 18, processor 24 may be further configured to remove translation effects of dithering mechanism 18 from the signals and to iteratively update detector offset correction data based on non-uniformity errors removed by the bilateral filter (as described in detail below with respect to Fig. 5). Also as described in more detail below, the processor may be configured to operate as a motion sensor or detector, which updates detector offset correction data only when sufficient scene motion is detected.
In some embodiments according to the present teachings, processor 24 is configured to apply a bilateral filter defined using a combination of spatial separation and photometric separation between pixels. For example, processor 24 may be configured such that the photometric separation between pixels is a function of a difference in intensity between the pixels. Alternatively, processor 24 may be configured such that the photometric separation between pixels is a function of a difference in color between the pixels. However, processor 24 may be configured using any photometric attribute by which it may be advantageous to define a bilateral filter.
In some embodiments according to the present teachings, processor 24 is configured to apply a bilateral filter defined using the product of a first weight factor that depends on spatial separation between pixels and a second weight factor that depends on photometric separation between the pixels. Thus, the weight of a particular pixel (i.e., its effect on the application of the filter to a pixel upon which the filter is operating) would depend on both its spatial
distance and its photometric similarity to the pixel upon which the filter is operating.
For example, processor 24 may be configured to apply a bilateral filter defined such that the first weight factor is a Gaussian function of spatial separation characterized by a first width parameter, and the second weight factor is a Gaussian function of photometric separation characterized by a second width parameter. Furthermore, the photometric property may be defined, for example, as intensity such that the second weight factor is a Gaussian function of intensity difference. However, processor 24 may be configured to apply a bilateral filter defined using weight factors that generally take any functional form that assigns a decreasing weight to more distant and more dissimilar pixels in an image, and the precise weight functions chosen may be selected based on a particular imaging application.
In some embodiments according to the present teachings, processor 24 is configured to apply a threshold bilateral filter. A threshold changes the neighborhood of pixels considered by the bilateral filter and thus may be used to improve non-uniformity correction results and enhance system performance. In a threshold bilateral filter, processor 24 is configured such that the bilateral filter considers only those pixels meeting a predetermined criterion in its calculations. Thus, only pixels whose values for some chosen parameter fall below a certain threshold are considered by the bilateral filter, while pixels above the threshold are ignored. In other words, the threshold represents a maximum or minimum spatial or photometric difference that the filter will consider when calculating a correction for each pixel. Thus, the threshold may improve bilateral filter performance by reducing the number of pixels considered to those most likely to be relevant to the correction of a particular pixel.
A threshold filter may consider only those pixels with parameter values either above or below a threshold, depending on the parameter selected and the application. The predetermined criterion may, for example, be defined as photometric dissimilarity g(M(p)-M(q)) below a certain value, such that only those image points with photometric dissimilarity below a certain value are
considered by the bilateral filter. More specifically, photometric dissimilarity may be defined as intensity difference (I(p) − I(q)) between pixels, so that the bilateral filter considers only pixels below a defined intensity difference. In some cases, this may improve the correction of detector non-uniformity errors while leaving naturally occurring intensity differences unchanged. Alternatively, the threshold may be defined by any other function that reflects the likelihood of a particular detector or group of detectors within a focal plane array to exhibit non-uniformity errors at a particular time.
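A minimal sketch of such a thresholded photometric weight (pure Python; the function name, parameter names, and values are illustrative assumptions, not taken from the disclosure):

```python
import math

def thresholded_photometric_weight(d_intensity, sigma_r, threshold):
    """Photometric weight g(I(p) - I(q)) with a hard cutoff: pixels whose
    intensity difference meets or exceeds the threshold are ignored
    entirely (weight 0), as in the threshold bilateral filter described
    above.  Parameter values are illustrative."""
    if abs(d_intensity) >= threshold:
        return 0.0
    return math.exp(-(d_intensity * d_intensity) / (2 * sigma_r ** 2))
```

Combined with a spatial weight inside a bilateral filter loop, this cutoff excludes photometrically distant neighbors from the average entirely rather than merely down-weighting them.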
In some embodiments according to the present teachings, processor 24 is configured to apply an adaptable threshold bilateral filter, where the threshold value of photometric dissimilarity is adaptable to changing imaging conditions. Processor 24 may be configured to change the threshold in response to changes in image noise, contrast, non-uniformity, or any other parameter that may affect bilateral filter performance. For example, with a non-uniformity-based adaptable threshold, as residual non-uniformity increases, the threshold may change (adapt) so that more pixels are considered by the filter. Conversely, as residual non-uniformity decreases, the threshold may change in the opposite direction, so that fewer pixels are considered. Because the degree of non-uniformity error may change rapidly during imaging, for example in a moving platform application, a threshold that adapts to changing imaging conditions may be advantageous. For example, Fig. 2 illustrates an imaging system 100 mounted to a helicopter, a common application where an adaptable threshold bilateral filter may be particularly suitable. However, any moving platform application is likely to benefit from implementation of an adaptable threshold bilateral filter, and the filter may also be advantageous for viewing rapidly changing scene information from a stationary platform.
Following correction of non-uniformity errors by processor 24, the corrected image data may be directed to a real-time display 26, recorded for subsequent use and/or analysis, or both.
II. Correction of Non-uniformity Errors With A Bilateral Filter
Figure 3 depicts a method, generally indicated at 27, for correcting focal plane array non-uniformity errors using a bilateral filter, according to aspects of the present disclosure. In method 27, image data is received at a substantially planar array of detectors, and image signal 28 is produced by the array of detectors in response to the received image data. Image signal 28 produced by the detectors may be partially corrected by means of a gain correction table 30 to produce an approximately gain-corrected image signal. As described previously, gain is a measure of the response gradient of each detector 20. Accordingly, gain correction table 30 provides for at least partial correction of response gradient non-uniformities between detectors 20. In one embodiment, gain correction table 30 will be determined for the focal plane array 12 at the time of its manufacture. However, any suitable means may be used to determine response gradients for focal plane array 12 such that an approximate gain correction table may be determined.
According to method 27, the gain-corrected image signal may be further partially corrected by means of offset correction table 32 to produce an approximately gain- and offset-corrected image signal 48. Initial offset correction approximately corrects for the differential response of each detector 20 to a fixed input signal, typically a "dark" signal corresponding to zero or near-zero input, although an offset correction corresponding to any fixed input may be used. In one embodiment, the offset correction table 32 will be determined for the focal plane array at time of manufacture. In another embodiment, described further below and depicted in Fig. 5, offset correction table 32 is calculated from residual fixed position noise filtered from approximately gain- and offset-corrected image signals. However, any suitable means may be used to determine detector response differences for focal plane array 12 such that an approximate offset correction table may be determined. According to the present teachings, bilateral filter 34 is applied to image signal 28 produced by the detectors, which may be approximately gain- and offset-corrected, to at least partially correct for non-uniformity errors
among the detectors 20. As mentioned earlier, bilateral filter 34 may be defined using a combination of spatial separation and photometric separation, such as a difference in intensity between pixels. Furthermore, bilateral filter 34 may be defined using the product of a first weight factor that depends on spatial separation between pixels and a second weight factor that depends on photometric separation between the pixels. In some embodiments according to the present teachings, the first weight factor is a Gaussian function of spatial separation characterized by a first width parameter, and the second weight factor is a Gaussian function of photometric separation characterized by a second width parameter. In some cases, the photometric property may be defined, for example, as intensity such that the second weight factor is a Gaussian function of intensity difference. However, bilateral filter 34 may be configured using any photometric property and weight functions, depending on the details of a particular imaging application. In systems equipped with dithering mechanism 18, filtered image 50 may be shifted by a demodulator 36, to remove translation effects of dithering (as described in detail below with respect to Fig. 5) before displaying image 38 to the user.

III. Adaptable Threshold Bilateral Filter

Figure 4 illustrates a method 27' of non-uniformity correction which is similar to method 27 of Fig. 3, except that method 27' of Fig. 4 includes the use of an adaptable threshold bilateral filter 40. In method 27', bilateral filter 34 considers only those pixels meeting a predetermined criterion, as defined by threshold 44, in its calculations. Generally, threshold 44 represents a maximum spatial or photometric difference that the filter will consider when calculating a correction for a particular image point.
Thus, only pixels with parameter values below a certain threshold 44 are considered by the adaptable threshold bilateral filter, while pixels above the threshold are ignored. Alternatively, the filter may be applied to pixels with parameter values above (rather than below) a threshold, depending on the parameter selected and the application.
Threshold 44 may be defined, for example, where the predetermined criterion is photometric dissimilarity g(M(p)-M(q)) such that only pixels with photometric dissimilarity below a certain value are considered by bilateral filter 34. Alternatively, threshold 44 may be defined by any other function that reflects the likelihood of a particular detector or group of detectors within a focal plane array to exhibit non-uniformity errors at a particular time. Furthermore, in some embodiments according to the present teachings, threshold 44 may change, such that the threshold value of photometric dissimilarity is adaptable to changing imaging conditions. With an adaptable threshold, as residual non-uniformity increases, threshold 44 adapts and more scene information is filtered.
As Fig. 4 depicts, adaptable threshold bilateral filter 40 typically operates by iteratively calculating a non-uniformity correction metric ("NUC metric") 42 and adjusting threshold 44 in response to changes in the NUC metric. As threshold 44 adjusts, the neighborhood of pixels considered by bilateral filter 34 grows or shrinks, ideally to those pixels most likely to be relevant to the correction of a particular image point. The NUC metric may be calculated using any parameter that is expected to change in response to iterative non-uniformity error correction by the system. For example, in one embodiment where detector column noise is the primary source of detector non-uniformity error, the intensity difference between affected columns and scene information in neighboring columns may be used to calculate the NUC metric. As such column noise is reduced by adaptable threshold bilateral filter 40, the intensity difference between affected columns and scene information in neighboring columns will decrease, and the NUC metric will correspondingly decrease. In other embodiments according to the present teachings, the NUC metric may be calculated based on image noise, contrast, non-uniformity, or any other parameter that may affect bilateral filter performance. As depicted in Fig. 4, threshold 44 may then be updated in response to changes in the NUC metric. Typically, threshold 44 is updated such that, as the NUC metric decreases, threshold 44 decreases and the neighborhood of
pixels considered by bilateral filter 34 decreases. Likewise, as the NUC metric increases, threshold 44 increases and the neighborhood considered by bilateral filter 34 increases. These adjustments may be performed by means of an algorithm that calculates an appropriate threshold 44 from each NUC metric value. For example, in one embodiment where detector column noise is the primary source of detector non-uniformity error, the NUC metric may be defined as the maximum intensity difference between adjacent image points within a row that traverses the affected column. Then, threshold 44 may be defined directly as this magnitude of intensity difference, such that bilateral filter 34 will consider no pixels with an intensity difference (I(p) − I(q)) greater than this value. This allows detector column non-uniformity noise to be corrected without unnecessarily blurring high-contrast portions of the actual image, such as the image edges. However, threshold 44 may be calculated from the NUC metric using any algorithm that is responsive to changes in non-uniformity error, depending on the details of a particular imaging application.
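The column-noise example above can be sketched as follows (pure Python; the function names and the direct metric-to-threshold rule are illustrative assumptions, not the disclosure's exact algorithm):

```python
def nuc_metric(row):
    # NUC metric for column noise: the maximum intensity difference
    # between horizontally adjacent pixels in a row that traverses
    # the affected column(s).
    return max(abs(row[i + 1] - row[i]) for i in range(len(row) - 1))

def update_threshold(metric):
    # Illustrative adaptation rule: use the metric directly as the
    # photometric threshold, so the neighborhood of pixels considered
    # shrinks as the residual column noise is corrected away.
    return metric
```

For a row [10, 10, 40, 10, 10] with a noisy column at value 40, the metric is 30; as iterative correction pulls that column toward 10, the metric and hence the threshold fall toward the scene's own contrast.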
Considering the neighborhood of pixels as defined by threshold 44, bilateral filter 34 then operates to remove non-uniformity noise from each image point, as described above. In systems equipped with dithering mechanism 18, filtered image 50 again may be shifted by a demodulator 36, to remove translation effects of dithering (as described in detail below with respect to Fig. 5) before displaying image 38 to the user.

IV. Iterative Offset Correction Table Update Flow
Figure 5 illustrates a method 27" of non-uniformity correction which is similar to method 27' of Fig. 4, except that method 27" of Fig. 5 also includes a flow to iteratively update detector offset correction data based on non-uniformity errors corrected by the bilateral filter, generally indicated at 46. In method 27", mechanical dithering may be used to obtain non-uniformity error data from noise removed by bilateral filter 34. Detector offset correction table 32 is then iteratively updated based on this data. A similar scheme may be used to update gain correction table 30, although this may not be necessary
when offset table 32 is updated frequently and/or at input levels corresponding to the actual imaging temperature of each detector.
As an alternative or in addition to a dithering mechanism, a motion detector or estimator may be used to determine when the offset and/or gain correction tables are to be updated. In this case, a reference image frame is first saved by the processor. As each subsequent image frame is captured, a determination is made as to whether scene motion has occurred relative to the reference frame. Motion detection can include an actual determination of the amount of scene motion, or motion may be detected without measuring the amount of motion. Detecting motion without calculating the precise amount of motion may be less computationally intensive than measuring the amount of the motion.
For example, a simple motion detector may project each captured image frame onto two vectors: a horizontal "profile" vector and a vertical "profile" vector. Correlation between the profile vectors of the reference frame and the captured frame then gives a rough estimate of scene motion. If sufficient motion is detected, noise image 52 may be sent to iterative flow 46, including (temporal) low pass filter 54 and integrator 58. If insufficient motion is detected, no offset and/or gain map update may be performed. When a motion estimator is used, the reference image is also generally periodically updated. This can be done on a fixed schedule (e.g., every 200 frames) or when sufficient motion of a captured frame relative to the previous reference frame has been measured.
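Such a profile-based detector might be sketched as follows (pure Python; the row-sum/column-sum projection and the exhaustive correlation search are illustrative assumptions about one simple implementation):

```python
def profiles(frame):
    """Project a 2-D frame onto horizontal and vertical 'profile'
    vectors: one sum per row and one sum per column."""
    h_profile = [sum(row) for row in frame]        # one value per row
    v_profile = [sum(col) for col in zip(*frame)]  # one value per column
    return h_profile, v_profile

def shift_estimate(ref_profile, cur_profile, max_shift=3):
    """Coarse 1-D motion estimate: the shift that best correlates the
    reference and current profiles.  A sketch; a real system would
    normalize the correlation and may interpolate to sub-pixel shifts."""
    best, best_score = 0, float("-inf")
    n = len(ref_profile)
    for s in range(-max_shift, max_shift + 1):
        score = sum(ref_profile[i] * cur_profile[i + s]
                    for i in range(n) if 0 <= i + s < n)
        if score > best_score:
            best, best_score = s, score
    return best
```

A detector like this can flag motion whenever the best-correlating shift is nonzero, without computing a full per-pixel motion field.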
Because bilateral filter 34 removes noise from gain- and offset-corrected image 48, subtracting filtered image 50 from gain- and offset-corrected image 48 yields residual noise image 52. Residual noise image 52 commonly includes detector non-uniformity noise, residual scene information, and temporal noise. Dithering mechanism 18 and/or scene motion spatially translates image data and associated temporal noise, but not detector non-uniformity noise, relative to focal plane array 12. Thus, residual scene information and associated temporal noise in residual noise image 52 are spatially translated along dithering path 22 and/or along the path of the scene
motion, while detector non-uniformity noise remains relatively fixed. Therefore, low pass filter 54 may be used to obtain the relatively low frequency detector non-uniformity noise 56 from the residual noise image 52, while filtering out the relatively higher frequency residual scene information and associated temporal noise. Because detector non-uniformity noise is relatively fixed, neither dithering nor scene motion need occur at particularly high frequency for low pass filter 54 to remove residual scene information and associated temporal noise from residual noise image 52. For example, in some embodiments according to the present teachings, spatial translation due to dithering may occur with frequency less than one cycle per second.
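One common way to realize such a temporal low-pass filter is an exponential moving average over successive residual-noise frames (a sketch; the per-pixel recursion and the smoothing factor alpha are illustrative assumptions, not the disclosure's specific filter):

```python
def temporal_low_pass(prev_estimate, residual, alpha=0.05):
    # Per-pixel exponential moving average: fixed-pattern (non-uniformity)
    # noise, which stays put from frame to frame, accumulates in the
    # estimate, while translating scene residue and temporal noise
    # average out over many frames.
    h, w = len(residual), len(residual[0])
    return [[(1 - alpha) * prev_estimate[y][x] + alpha * residual[y][x]
             for x in range(w)]
            for y in range(h)]
```

A small alpha corresponds to a low cutoff frequency, which is consistent with the point above that neither dithering nor scene motion needs to be fast for the separation to work.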
In method 27" of Fig. 5, detector non-uniformity noise 56 is used to update offset correction table 32 by means of integrator 58. Integrator 58 calculates the inverse of detector non-uniformity noise image 56, which corresponds to the degree of non-uniformity error in the image. Integrator 58 then uses the inverse of detector non-uniformity noise image 56 to update offset correction table 32. Various algorithms may be used by integrator 58 to produce updated offset correction table 60, which replaces the prior offset correction table 32 in the system. For example, the inverse noise image may simply replace the previous values in the offset table. Alternatively, the inverse noise image may be averaged with or otherwise combined with the previous values in the offset table, to produce an updated correction table.
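The integrator update might be sketched as a leaky integration of the noise estimate into the offset table (pure Python; the subtraction form and the gain k, standing in for constant block 62, are illustrative assumptions):

```python
def update_offset_table(offset, noise_estimate, k=0.1):
    # Blend the inverse of the non-uniformity noise estimate into the
    # offset correction table.  The gain k plays the role of the
    # adjustable constant block: a larger k tracks drifting detector
    # responses faster; a smaller k gives a more stable table.
    h, w = len(offset), len(offset[0])
    return [[offset[y][x] - k * noise_estimate[y][x]
             for x in range(w)]
            for y in range(h)]
```

With k = 1 this applies the full inverse noise image in one step, akin to replacing the prior values; smaller k values combine the new estimate with the table's history, matching the averaging variant described above.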
In method 27" of Fig. 5, integrator 58 is adjustable by means of constant block 62, which may be adjusted to either slow or quicken the responsiveness of the integrator to changes in detector non-uniformity noise image 56. Thus, in applications where detector non-uniformity is expected to evolve rapidly over time, constant block 62 may be adjusted to rapidly update offset correction table 32 in response, while in applications where stability is expected, constant block 62 may be adjusted to slowly update offset correction table 32. In any event, the updated offset table is then applied to the next image signal, and the partially corrected image proceeds through a bilateral filter just as in methods 27 and 27'. Following bilateral filtering, translation effects (if any) of dithering mechanism 18 may be removed from
filtered image 50 by means of demodulator 36. Demodulator 36 stores each filtered image 50 in a frame buffer, where a signal is applied to remove translation effects of dithering mechanism 18 when dithering is employed. Thus, the image 38 presented to the user appears stable and free of system-induced motion.
V. Other Embodiments
The present disclosure contemplates methods and apparatus suitable for applying a bilateral filter 34 and/or a demodulator 36 with or without the adaptable threshold depicted in Fig. 4, and with or without the iteratively adjusted offset table depicted in Fig. 5. The disclosure set forth above may encompass multiple distinct inventions with independent utility. The disclosure relates information regarding specific embodiments, which are included for illustrative purposes, and which are not to be considered in a limiting sense, because numerous variations are possible. The inventive subject matter of the disclosure includes all novel and nonobvious combinations and subcombinations of the various elements, features, functions, and/or properties disclosed herein. The following claims particularly point out certain combinations and subcombinations regarded as novel and nonobvious. Inventions embodied in other combinations and subcombinations of features, functions, elements, and/or properties may be claimed in applications claiming priority from this or a related application. Such claims, whether directed to a different invention or to the same invention, and whether broader, narrower, equal, or different in scope to the original claims, also are regarded as included within the subject matter of the inventions of the present disclosure.
Claims
1. A non-uniformity error correction system, comprising: a plurality of detectors disposed substantially within a planar array, the detectors configured to receive image data and to produce image signals in response; and a processor configured to receive the image signals produced by the detectors and to apply a bilateral filter to the signals; wherein the bilateral filter is configured to at least partially correct the image signals for non-uniformity errors among the detectors.
2. The system of claim 1, wherein the bilateral filter is defined using a combination of spatial separation and photometric separation between pixels.
3. The system of claim 2, wherein the photometric separation between pixels is a function of a difference in intensity between the pixels.
4. The system of claim 2, wherein the photometric separation between pixels is a function of a difference in color between the pixels.
5. The system of claim 1, wherein the bilateral filter is defined using the product of a first weight factor that depends on spatial separation between pixels and a second weight factor that depends on photometric separation between the pixels.
6. The system of claim 5, wherein the first weight factor is a Gaussian function of spatial separation characterized by a first width parameter, and the second weight factor is a Gaussian function of photometric separation characterized by a second width parameter.
7. The system of claim 6, wherein the second weight factor is a Gaussian function of intensity difference.
8. The system of claim 1, wherein the processor is configured such that the bilateral filter considers only those pixels meeting a predetermined criterion in its calculations.
9. The system of claim 8, wherein the predetermined criterion is photometric dissimilarity below a certain threshold value.
10. The system of claim 9, wherein the threshold value of photometric dissimilarity is adaptable to changing image conditions.
11. The system of claim 1, further comprising a dithering mechanism configured to spatially translate incoming image data; wherein the processor is further configured to remove translation effects of the dithering mechanism from the signals and to iteratively update detector offset correction data based on non-uniformity errors removed by the bilateral filter.
12. The system of claim 11, wherein the dithering mechanism is further configured to spatially translate incoming image data with frequency less than one cycle per second.
13. The system of claim 1, wherein the processor is further configured to detect scene motion and to iteratively update detector offset correction data based on non-uniformity errors removed by the bilateral filter when sufficient scene motion is detected.
14. A method of correcting focal plane array non-uniformity errors, comprising: receiving image data at a substantially planar array of detectors; producing image signals with the array of detectors in response to the received image data; and applying a bilateral filter to the image signals produced by the detectors to at least partially correct the signals for non-uniformity errors among the detectors.
15. The method of claim 14, wherein the bilateral filter is defined using a combination of spatial separation and a difference in intensity between pixels.
16. The method of claim 14, wherein the bilateral filter is defined using the product of a first weight factor that depends on spatial separation between pixels and a second weight factor that depends on photometric separation between the pixels.
17. The method of claim 16, wherein the first weight factor is a Gaussian function of spatial separation characterized by a first width parameter, and the second weight factor is a Gaussian function of photometric separation characterized by a second width parameter.
18. The method of claim 17, wherein the second weight factor is a Gaussian function of intensity difference.
19. The method of claim 14, wherein the bilateral filter considers only those pixels meeting a predetermined criterion.
20. The method of claim 19, wherein the predetermined criterion is photometric dissimilarity below a certain threshold value.
21. The method of claim 20, wherein the threshold value of photometric dissimilarity is adaptable to changing image conditions.
22. The method of claim 14, further comprising iteratively updating detector offset correction data based on non-uniformity errors corrected by the bilateral filter.
23. The method of claim 22, further comprising dithering the image data to spatially translate the data, prior to receiving the image data at the array of detectors.
24. The method of claim 23, wherein dithering the image data occurs at frequency less than one cycle per second.
25. The method of claim 22, further comprising detecting scene motion by comparing a captured image frame with a reference frame, prior to iteratively updating the detector offset correction data.
26. A non-uniformity error correction system, comprising: a dithering mechanism configured to spatially translate image data on a focal plane array of infrared radiation detectors; and a processor configured to receive image signals produced by the array of detectors, to at least partially correct the image signals for non-uniformity errors among the detectors by applying a bilateral filter to the signals, and to remove translation effects of the dithering mechanism from the signals; wherein the bilateral filter is defined using the mathematical product of a function of spatial separation between the image signals and a function of photometric separation between the image signals.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/491,165 | 2009-06-24 | ||
US12/491,165 US8428385B2 (en) | 2009-06-24 | 2009-06-24 | Non-uniformity error correction with a bilateral filter |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010151686A1 true WO2010151686A1 (en) | 2010-12-29 |
Family
ID=43380817
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2010/039852 WO2010151686A1 (en) | 2009-06-24 | 2010-06-24 | Non-uniformity error correction with a bilateral filter |
Country Status (2)
Country | Link |
---|---|
US (1) | US8428385B2 (en) |
WO (1) | WO2010151686A1 (en) |
WO2016073054A2 (en) | 2014-08-20 | 2016-05-12 | Seek Thermal, Inc. | Gain calibration for an imaging system |
WO2016028755A1 (en) | 2014-08-20 | 2016-02-25 | Seek Thermal, Inc. | Adaptive adjustment of operating bias of an imaging system |
US10600164B2 (en) | 2014-12-02 | 2020-03-24 | Seek Thermal, Inc. | Image adjustment based on locally flat scenes |
US10467736B2 (en) | 2014-12-02 | 2019-11-05 | Seek Thermal, Inc. | Image adjustment based on locally flat scenes |
WO2016089823A1 (en) | 2014-12-02 | 2016-06-09 | Seek Thermal, Inc. | Image adjustment based on locally flat scenes |
EP3289758A1 (en) | 2015-04-27 | 2018-03-07 | Flir Systems, Inc. | Moisture measurement device with thermal imaging capabilities and related methods |
US9549130B2 (en) | 2015-05-01 | 2017-01-17 | Seek Thermal, Inc. | Compact row column noise filter for an imaging system |
US10521888B2 (en) * | 2015-12-23 | 2019-12-31 | Huazhong University Of Science And Technology | Aerothermal radiation effect frequency domain correction method |
US10867371B2 (en) | 2016-06-28 | 2020-12-15 | Seek Thermal, Inc. | Fixed pattern noise mitigation for a thermal imaging system |
US10417745B2 (en) | 2016-06-28 | 2019-09-17 | Raytheon Company | Continuous motion scene based non-uniformity correction |
DE102018001076A1 (en) * | 2018-02-10 | 2019-08-14 | Diehl Defence Gmbh & Co. Kg | Method for determining characteristic correction factors of a matrix detector imaging in the infrared spectral region |
US10692191B2 (en) * | 2018-08-10 | 2020-06-23 | Apple Inc. | Per-pixel photometric contrast enhancement with noise control |
US10798309B2 (en) | 2018-11-21 | 2020-10-06 | Bae Systems Information And Electronic Systems Integration Inc. | Method and apparatus for nonuniformity correction of IR focal planes |
US11276152B2 (en) | 2019-05-28 | 2022-03-15 | Seek Thermal, Inc. | Adaptive gain adjustment for histogram equalization in an imaging system |
CN111862227B (en) * | 2020-04-28 | 2024-04-12 | 南京航空航天大学 | On-orbit non-uniformity correction method of mechanical staggered spliced camera based on complex scene |
FR3118558B1 (en) * | 2020-12-24 | 2023-03-03 | Safran Electronics & Defense | Method for calibrating an array of photodetectors, calibration device and associated imaging system |
US11632506B2 (en) | 2021-07-13 | 2023-04-18 | Simmonds Precision Products, Inc. | Non-uniformity correction (NUC) self-calibration using images obtained using multiple respective global gain settings |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5925880A (en) * | 1996-08-30 | 1999-07-20 | Raytheon Company | Non uniformity compensation for infrared detector arrays |
US20020186309A1 (en) * | 2001-03-21 | 2002-12-12 | Renato Keshet | Bilateral filtering in a demosaicing process |
WO2007106018A1 (en) * | 2006-03-16 | 2007-09-20 | Flir Systems Ab | Method for correction of non-uniformity in detector elements comprised in an ir-detector |
US20080170800A1 (en) * | 2007-01-16 | 2008-07-17 | Ruth Bergman | One-pass filtering and infrared-visible light decorrelation to reduce noise and distortions |
US20080267530A1 (en) * | 2007-04-27 | 2008-10-30 | Suk Hwan Lim | Generating compound images having increased sharpness and reduced noise |
Family Cites Families (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3756721A (en) | 1972-03-06 | 1973-09-04 | Univ Florida State | Spectrometer system |
US4675532A (en) | 1985-11-06 | 1987-06-23 | Irvine Sensors Corporation | Combined staring and scanning photodetector sensing system having both temporal and spatial filtering |
US5077816A (en) | 1989-12-26 | 1991-12-31 | United Technologies Corporation | Fiber embedded grating frequency standard optical communication devices |
US5439000A (en) | 1992-11-18 | 1995-08-08 | Spectrascience, Inc. | Method of diagnosing tissue with guidewire |
US5651047A (en) | 1993-01-25 | 1997-07-22 | Cardiac Mariners, Incorporated | Maneuverable and locateable catheters |
EP0648049A1 (en) | 1993-10-08 | 1995-04-12 | Hitachi, Ltd. | Information recording and reproducing method and apparatus |
GB9406605D0 (en) | 1994-04-05 | 1994-06-08 | British Nuclear Fuels Plc | Radiation beam position sensor |
US5514865A (en) | 1994-06-10 | 1996-05-07 | Westinghouse Electric Corp. | Dither image scanner with compensation for individual detector response and gain correction |
US5582171A (en) | 1994-07-08 | 1996-12-10 | Insight Medical Systems, Inc. | Apparatus for doppler interferometric imaging and imaging guidewire |
CA2143900C (en) | 1995-03-03 | 2000-05-09 | John Robbins | Scanner mechanism for use in differential optical absorption spectroscopy (doas) |
US5602820A (en) | 1995-08-24 | 1997-02-11 | International Business Machines Corporation | Method and apparatus for mass data storage |
US5905571A (en) | 1995-08-30 | 1999-05-18 | Sandia Corporation | Optical apparatus for forming correlation spectrometers and optical processors |
CA2185865C (en) | 1995-09-26 | 2002-07-16 | Richard Edward Epworth | Dispersion compensation |
US6515285B1 (en) | 1995-10-24 | 2003-02-04 | Lockheed-Martin Ir Imaging Systems, Inc. | Method and apparatus for compensating a radiation sensor for ambient temperature variations |
US5925875A (en) | 1996-04-26 | 1999-07-20 | Lockheed Martin Ir Imaging Systems | Apparatus and method for compensating for fixed pattern noise in planar arrays |
US5717208A (en) | 1996-05-30 | 1998-02-10 | He Holdings, Inc. | Staring IR-FPA with dither-locked frame circuit |
US6507018B2 (en) | 1996-08-30 | 2003-01-14 | Raytheon Company | Ditherless non-uniformity compensation for infrared detector arrays with recursive spatial low pass filtering |
US5721427A (en) | 1996-12-19 | 1998-02-24 | Hughes Electronics | Scene-based nonuniformity correction processor incorporating motion triggering |
US5838813A (en) | 1996-12-20 | 1998-11-17 | Lockheed Martin Corp. | Dithered image reconstruction |
US6184527B1 (en) | 1997-08-26 | 2001-02-06 | Raytheon Company | Dither correction for infrared detector arrays |
AU9472298A (en) | 1997-09-05 | 1999-03-22 | Micron Optics, Inc. | Tunable fiber fabry-perot surface-emitting lasers |
US6222861B1 (en) | 1998-09-03 | 2001-04-24 | Photonic Solutions, Inc. | Method and apparatus for controlling the wavelength of a laser |
US6243498B1 (en) | 1998-10-19 | 2001-06-05 | Raytheon Company | Adaptive non-uniformity compensation using feedforward shunting |
US6330371B1 (en) | 1998-10-19 | 2001-12-11 | Raytheon Company | Adaptive non-uniformity compensation using feedforward shunting and min-mean filter |
US6211515B1 (en) | 1998-10-19 | 2001-04-03 | Raytheon Company | Adaptive non-uniformity compensation using feedforward shunting and wavelet filter |
US6901173B2 (en) | 2001-04-25 | 2005-05-31 | Lockheed Martin Corporation | Scene-based non-uniformity correction for detector arrays |
US7862188B2 (en) | 2005-07-01 | 2011-01-04 | Flir Systems, Inc. | Image detection improvement via compensatory high frequency motions of an undedicated mirror |
US7515767B2 (en) | 2005-07-01 | 2009-04-07 | Flir Systems, Inc. | Image correction across multiple spectral regimes |
US7697142B2 (en) * | 2007-12-21 | 2010-04-13 | Xerox Corporation | Calibration method for compensating for non-uniformity errors in sensors measuring specular reflection |
- 2009-06-24: US application US12/491,165 filed (granted as US8428385B2, status: active)
- 2010-06-24: PCT application PCT/US2010/039852 filed (published as WO2010151686A1, application filing)
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117372285A (en) * | 2023-12-05 | 2024-01-09 | 成都市晶林科技有限公司 | Time domain high-pass filtering method and system for static and dynamic region distinction |
CN117372285B (en) * | 2023-12-05 | 2024-02-20 | 成都市晶林科技有限公司 | Time domain high-pass filtering method and system for static and dynamic region distinction |
Also Published As
Publication number | Publication date |
---|---|
US20100329583A1 (en) | 2010-12-30 |
US8428385B2 (en) | 2013-04-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8428385B2 (en) | Non-uniformity error correction with a bilateral filter | |
Budzier et al. | Calibration of uncooled thermal infrared cameras | |
KR102391619B1 (en) | Method of infrared image processing for non-uniformity correction | |
US7899271B1 (en) | System and method of moving target based calibration of non-uniformity compensation for optical imagers | |
US8373757B1 (en) | Flat field correction for infrared cameras | |
US20120091340A1 (en) | Scene based non-uniformity correction for infrared detector arrays | |
KR101955498B1 (en) | Infrared image correction apparatus using neural network structure and method thereof | |
US6075903A (en) | Process for correcting the intensity of images from a digital infrared camera | |
CN109813442B (en) | Multi-frame processing-based internal stray radiation non-uniformity correction method | |
US8481918B2 (en) | System and method for improving the quality of thermal images | |
US20170372453A1 (en) | Continuous motion scene based non-uniformity correction | |
CN103985089B (en) | With reference to weight edge analysis and the image streak correction method of frame inner iteration | |
CN111932478A (en) | Self-adaptive non-uniform correction method for uncooled infrared focal plane | |
CN116109491A (en) | Method for correcting image non-uniformity of imaging spectrometer by using bright-dark uniform region | |
Fischer et al. | Median spectral-spatial bad pixel identification and replacement for hyperspectral SWIR sensors | |
Svensson | An evaluation of image quality metrics aiming to validate long term stability and the performance of NUC methods | |
Rossi et al. | A comparison of deghosting techniques in adaptive nonuniformity correction for IR focal-plane array systems | |
Wang et al. | An enhanced non-uniformity correction algorithm for IRFPA based on neural network | |
Zhou et al. | Local spatial correlation-based stripe non-uniformity correction algorithm for single infrared images | |
Lingxiao et al. | A novel infrared focal plane non-uniformity correction method based on co-occurrence filter and adaptive learning rate | |
US11875483B2 (en) | Method and device for removing remanence in an infrared image of a changing scene | |
CN108700462B (en) | Double-spectrum imager without moving part and drift correction method thereof | |
US11875484B2 (en) | Method and device for removing remanence in an infrared image of a static scene | |
Rossi et al. | A technique for ghosting artifacts removal in scene-based methods for non-uniformity correction in IR systems | |
Petrov et al. | Calibration of thermal imaging systems based on matrix IR photodetectors |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 10792675; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 10792675; Country of ref document: EP; Kind code of ref document: A1 |