CA2211258C - Method and apparatus for separating foreground from background in images containing text - Google Patents

Method and apparatus for separating foreground from background in images containing text


Publication number
CA2211258C
CA2211258C CA002211258A CA2211258A
Authority
CA
Canada
Prior art keywords
image
value
bin
contrast
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CA002211258A
Other languages
French (fr)
Other versions
CA2211258A1 (en)
Inventor
Michael C. Moed
Izrail S. Gorian
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
United Parcel Service of America Inc
Original Assignee
United Parcel Service of America Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by United Parcel Service of America Inc
Publication of CA2211258A1
Application granted
Publication of CA2211258C
Anticipated expiration
Legal status: Expired - Fee Related (Current)


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/16Image preprocessing
    • G06V30/162Quantising the image signal

Abstract

A method and apparatus for thresholding gray-scale images containing text as foreground and dirt, colors, noise and the like as background. The method is adaptive in nature. The method uses multiple sets of thresholding process parameters in a contrast-based histogram evaluation to produce multiple candidate thresholded binary images. The method automatically selects the highest quality binary image, that is, the binary image that best represents text in the foreground, from among the multiple candidate binary images. The apparatus for implementing the method is a pipeline processor for highest speed.

Description

METHOD AND APPARATUS FOR SEPARATING FOREGROUND
FROM BACKGROUND IN IMAGES CONTAINING TEXT

FIELD OF THE INVENTION

The present invention relates generally to a method and apparatus for parameterized thresholding of gray-scale images to create binary black/white images corresponding to the foreground/background of the original gray-scale images, and more particularly to gray-scale images which contain text characters as the foreground. The invention includes a method and apparatus of determining and selecting the best quality binary image for instances where multiple sets of parameters have been used to create multiple binary images.

BACKGROUND OF THE INVENTION

In many applications, it is desirable to extract textual information from an image by automatic means. For example, using a machine to read address labels without manual intervention is a labor and cost saving technique. One technique for automatic label address reading employs a charge-coupled device ("CCD") camera to capture an image of the address label area. The n x m image provided by the CCD camera is called a gray-scale image, because each of the n x m pixels in the image can assume a gray value between zero and a maximum value, where zero is black, the maximum value is white and the intermediate values are gray. Typically, the maximum value will be 255 or 1023. For most address label reading applications, the gray-scale image must be converted to a binary image, where pixels take on only one of two values, 0 for black and 1 for white. In the binary image, black pixels correspond to foreground, usually text, while white pixels correspond to background. The process of converting from a gray-scale image to a binary image is called thresholding. After the image has been thresholded, the label can be read automatically using an optical character recognition ("OCR") process.
Unfortunately, many common thresholding methods have great difficulty separating foreground from background on address labels due to the large variety of labels. Labels can have different color backgrounds, different color text printing, blurs, blotches, belt burns, tape effects, low contrast text, graphics, and plastic windows. Text on labels can be dot matrix, laser printed, typewritten, or handwritten and can be of any one of a number of fonts. The variety of labels makes it difficult for many thresholding processes to adequately separate foreground text from background and may cause many erroneous foreground/background assignments. An erroneous assignment occurs when a background pixel is assigned a value of 0 (black) or a foreground text pixel is assigned a value of 1 (white). In the former case, noise is introduced into the resultant binary image, and this may lead to difficulty reading the label automatically using OCR. In the latter case, text that was present in the gray level image is removed, and again the OCR process may fail.
In addition to the requirement for accuracy, an automatic thresholding technique must be fast and efficient to be effective as a cost-savings technique. In many applications, automatic label readers are employed by mail and package delivery services, where automatic systems convey packages at a rate of up to 3600 per hour. At this rate, a label must be imaged and processed in less than one second in order to be completed before processing on the next label must begin. In order for the thresholding technique to be used in real time, therefore, it must threshold the gray-scale image in considerably less than one second to give the OCR techniques time to process the thresholded text image.
Thus there is a need for a thresholding method and apparatus that employs a set of fast algorithms for separating foreground text from background in a way that minimizes the number of foreground/background misassignments.
There is also a need for the thresholding method and apparatus to be adaptable so that it may adjust to the wide variety of character fonts and other differences in text presentation.

SUMMARY OF THE INVENTION

The invention is a method and apparatus for producing a high-quality thresholded binary image from a gray-scale image. In a preferred embodiment the method is comprised of two stages. In the first stage, a number of candidate binary label images are generated from the original gray-scale image. Each candidate image is generated by applying a different set of thresholding process parameters to an intermediate image (the contrast image), which is constructed from the original gray-scale label image by using local and global contrast information. In the second stage, the candidate binary images are compared and the best quality binary image is selected. The invention is called Label Adaptive Thresholding ("LAT") because it adapts to many different kinds of labels and characters on the labels.
The method of constructing the intermediate contrast image corresponding to a gray-scale image comprises the steps of selecting a position (i,j) in the contrast image, where i denotes the row of the position in the contrast image and j denotes the column; finding the minimum gray-scale value, MINij, of all the gray-scale image pixels in a rectangular window centered on the corresponding row-and-column position in the gray-scale image; finding the maximum gray-scale value, MAXij, of all the gray-scale image pixels in the same rectangular window; computing the local contrast, LCij, of the gray-scale image by the formula LCij = C x (1 - (MINij + p)/(MAXij + p)), where C is a preselected maximum contrast value and p is a preselected positive number small in comparison to C; assigning a gray-scale value equal to LCij to the (i,j) position of the contrast image; and repeating the above steps for each position which corresponds to a portion of the gray-scale image which is to be thresholded.
As part of the process of generating a candidate thresholded binary image, a minimum-allowable-contrast value is selected. The method of selecting this value comprises the steps of constructing a contrast histogram for all possible contrast values in the contrast image; finding all peaks in the contrast histogram greater than a preselected peak-floor-level value and separated from any higher peak by a preselected nominal-peak-width value; selecting from among the peaks found the minimum-text peak by locating the peak occurring at the lowest contrast value which is greater than a preselected minimum-contrast value; and obtaining the minimum-allowable-contrast value by subtracting a preselected nominal-text-peak-width value from the contrast value of the minimum-text peak. The preselected values for the peak-floor-level value, the nominal-peak-width value, the minimum-contrast value, and the nominal-text-peak-width value, along with the two cut-off values described below, comprise a set of thresholding process parameters.
Finally, a candidate binary image is constructed by assigning a 1 to each pixel in the background and a 0 to each pixel in the foreground. The pixels of a gray-scale image are divided between a background class and a foreground class by a method comprised of the steps of finding the minimum gray-scale value, MINij, of all the pixels in a rectangular window centered on the pixel; finding the maximum gray-scale value, MAXij, of all the pixels in the rectangular window;
computing the local contrast, LCij, by the formula LCij = C x (1 - (MINij + p)/(MAXij + p)),
where C is a preselected maximum-contrast value and p is a preselected positive number small in comparison to C; and assigning the pixel to the background class if LCij is less than a preselected minimum-allowable-contrast value. The previous steps are the same as those required for computing the contrast image; if the contrast image has been constructed, the steps need not be repeated.
For pixels not assigned to the background class by the comparison of the local contrast to the minimum-allowable-contrast value, the difference between MAXij and MINij is computed and the pixel is assigned to the background class if the max-to-min difference is less than a preselected minimum-contrast-scale value. If the pixel is still not assigned to the background class, the difference between MAXij and the gray-scale value of the pixel is computed and the pixel is assigned to the background class if the max-to-pixel-value difference is less than a preselected nominal-contrast-excess value. If the pixel is not assigned to the background class by the above steps, the pixel is assigned to the foreground class.
The invention envisions the use of two or more sets of thresholding process parameters, yielding two or more candidate binary images. To choose between any two candidate binary thresholded images of an original gray-scale image, the invention uses a method comprising the steps of converting the portions of the binary images corresponding to the foreground into a run-length-encoded (RLE) image; determining the numbers and sizes of the pieces (i.e., the piece size distribution) of the RLE images using piece extraction; constructing histograms of the piece size distributions of the RLE images; and using the histograms for statistical or neural network analysis to select one of the two binary images. If there are more than two binary images, this method is applied pairwise until all binary images save one have been eliminated. The use of RLE images is optional, but their use greatly reduces the processing time over the technique that determines the piece size distributions from the binary images in positional representations.
The piece size distribution of a binary image is a specification of the number of pieces by size for pieces of each said binary image having one or the other of the two values of the binary image. The piece size distribution may be used for simple comparisons of two binary images even where no quality selection is involved.
It is thus an object of this invention to provide a method and apparatus for quickly thresholding a gray-scale image.
It is a further object of this invention to provide a method and apparatus which uses local contrast information in a gray-scale image to unambiguously identify text in the image.
It is a further object of this invention to provide a method and apparatus to select the binary image that best represents text in the foreground and non-text in the background from among two or more candidate binary images which were thresholded from an original gray-scale image containing text.
It is a further object of this invention to automatically process labels with poor readability to generate a high-quality binary image for OCR input.
The present invention meets these objects and overcomes the drawbacks of the prior art, as will be apparent from the detailed description of the embodiments that follow.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 depicts the major steps and data flow of an embodiment of the method of the invention.
Figure 2 depicts the major steps and data flow associated with the thresholding technique of an embodiment of the method of the invention.
Figure 3 depicts a sample set of pixel values for a gray-scale image and illustrates the construction of the MIN image.
Figure 4 depicts a sample set of pixel values for a gray-scale image and illustrates the construction of the MAX image.

Figure 5 illustrates the construction of the contrast image from the MIN and MAX images.
Figure 6 depicts a typical contrast histogram and illustrates the various thresholding process parameters associated with an embodiment of the method of the invention.
Figure 7 depicts an example binary image to illustrate the reduction in representational data resulting from converting the binary image to run-length-encoded (RLE) format.
Figure 8 depicts a gray-scale image of a specimen label.
Figure 9 shows a thresholded binary image of the gray-scale image in Figure 8, using one set of thresholding parameters.
Figure 10 shows a thresholded binary image of the gray-scale image in Figure 8, using a second set of thresholding parameters.
Figure 11 shows a thresholded binary image of the gray-scale image in Figure 8, using a third set of thresholding parameters.
Figure 12 is a block diagram of the major components of a preferred embodiment of the apparatus of the invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Figure 1 is a block diagram of the overall process and data flow of an embodiment of the invention. Figure 2 is a block diagram of the major steps and data flow in stage one of an embodiment of the invention. In block 1 of Figure 1, stage one of the process begins when digital data is acquired which represents a gray-scale image. The gray-scale image has n x m pixels representing n rows and m columns of pixels, each having a gray-scale value from 1 to the maximum provided by the device which generates the image. Preferably, the device is a CCD
camera and the pixel's gray values vary from 0 (black) to 255 (white). Also, in block 1 of Figure 1, three separate n x m pixel images are constructed from the gray-scale image: the MIN image; the MAX image; and the contrast image. The pixel at row i and column j in the MIN image contains the minimum gray-scale value in a small region, preferably a rectangular box, around row i and column j of the gray-scale image. In this specification, row i and column j of an image will also be called the (i,j) position of an image. The MAX image is similarly constructed using maximum values instead of minimum values. The pixel at the (i,j) position of the contrast image is computed algebraically from the values in the (i,j) positions in the MIN and MAX images. In addition, a histogram of the values in the contrast image is compiled. These steps are shown in greater detail in blocks 1 through 4 of Figure 2. As shown in block 2 of Figure 1 and block 6 of Figure 2, peak analysis using one set of thresholding parameters is performed on the contrast histogram.
The results of the peak analysis are used to select a minimum contrast value, so that, as shown in block 7 of Figure 2, the contrast image can be thresholded. As shown in block 8 of Figure 2, the thresholded contrast image is used in conjunction with the original gray-scale image, the MIN image, the MAX image and the contrast image itself to yield a binary image.
Figure 2 also shows the data flow associated with the construction and use of the various intermediate constructions of the method, such as the MIN, the MAX, and the contrast images and the contrast histogram. The data generated in one step of the method may, if necessary, accompany the processing from one method step to the next method step. For example, construction of the MIN, the MAX, and the contrast images is necessary to form the contrast histogram and to conduct the histogram peak analysis in accordance with the invention. However, once these three images are constructed, they may be used again in the final steps of peak analysis and thresholding the original gray-scale image. The thick lines of Figure 2 show how the data for the original gray-scale, the MIN, the MAX, and the contrast images are made available for use by later steps in the method.
Recognition of the dual use of these intermediate images and design of the thresholding method to re-use the intermediate images, rather than re-constructing them, is an important part of the time savings and speed achieved by the invention.
Stage two of the invention begins with the binary images produced in stage one. Figure 1 depicts a preferred mode of operation where three sets of thresholding process parameters are used to generate three thresholded binary images. If it is desired to use additional sets of process parameters, the steps represented by blocks 2 through 5 of Figure 1 would be repeated for each set of process parameters.
Each binary image is converted to run-length-encoded (RLE) form in blocks 3, 7, and 11 of Figure 1. Next, in blocks 4, 8, and 12, piece extraction is performed to determine the number and size of the connected components. In blocks 5, 9, and 13 a histogram of the piece size distribution is constructed for each image. In block 14, the piece size histograms and the RLE images are used to select the highest quality binary image. The thick lines in Figure 1 show the data flow associated with providing the RLE data for each binary image directly to the final step in block 14, in addition to the normal data flow from one method step to the next.
The method of the invention may be implemented on a pipeline processor for highest speed. Preferably, the pipeline processor employs a selection of 150/40 processing modules from Imaging Technology, Inc. The 150/40 modules may be interconnected via a VME bus, as depicted in Figure 12. One or more general processors 21, 22, and 23, such as the Nitro60 processor sold by Heurikon Corporation, may also be connected to the VME bus. An image manager 31, such as the IMA 150/40 module sold by Imaging Technology, Inc., and a computational module controller 41, such as the CMC 150/40 module, also sold by Imaging Technology, Inc., may also be connected to the VME bus. The method of the invention may be coded in the C language and cross-compiled for installation on the Nitro60 processors 21, 22, and 23 and the other processing modules on the VME bus. One Nitro60 processor 21 may have a mouse-control line 25 and an RS232 port 26 to provide operator control over the system. One Nitro60 processor 21 may also have data ports such as a SCSI-2 port 28 and an Ethernet port 29. The IMA 150/40 module 31 preferably provides four megabytes of memory for storing up to four 1024 x 1024 images and provides overall timing control to the pipeline hardware. The IMA 150/40 module 31 also allows up to three daughter boards, 32, 33, and 34, discussed below. The CMC 150/40 module 41 operates as a pipeline switching module, allows up to three daughter boards, 42, 43, and 44, and routes images to and from any daughter boards. The IMA 150/40 module 31 and the CMC 150/40 module 41 are separately interconnected via a pixel bus 71.
Three daughter boards are mounted on the IMA 150/40 module 31.
The AMDIG daughter board 34 receives gray-scale image data from the CCD
camera, preferably a Kodak™ Megaplus 1.4 camera, and routes the image data to an appropriate area of memory in the IMA 150/40 module 31. The AMDIG
daughter board 34 can also scale the input image data if desired. The DMPC
daughter board 32 can provide a display of any image via an RGB high-resolution monitor 61. The MMP daughter board 33 is a morphological processor which automatically provides the MAX and MIN images when provided with a gray-scale image and a rectangular box size. Three additional daughter boards are mounted on
the CMC 150/40 module 41. The HF daughter board 43 is a histogram and feature extractor which automatically provides a histogram of 256 bins when provided with a binary image. The feature extractor may also be used to automatically generate RLE images. The CLU daughter board 44 is a convolver and logical processor which automatically provides the contrast image. The PA daughter board 42 is a separate digital signal processor which operates asynchronously as a co-processor in parallel with the pipeline processing. This daughter board is used to provide the quality estimating method, receiving multiple RLE images, selecting the best RLE image, and providing the data for the best RLE image to one of the Nitro60 processors 21, 22, and 23 for OCR processing. The entire hardware configuration of Figure 12 operates at the speed of 40 megapixels per second, which implies that a one megabyte image (1024 x 1024 pixels) requires approximately 28 milliseconds for processing. At this speed, less than 0.5 seconds is required to construct three parametric thresholdings of a single 1024 x 1024 gray-scale image and to select the best quality binary thresholded image for OCR processing. The method of the invention may also be implemented on any digital computer having sufficient storage to accommodate the original gray-scale image, the intermediate processing images (the MIN, the MAX, and the contrast images), and at least two binary thresholded images. Programming of the method on a digital computer may be carried out in the C language or any comparable processing language.

From this overview, we now describe the entire method in detail.
At a given location (i,j) in the gray-scale label image, local contrast is obtained by the following procedure:

Step 1. Find the minimum gray level value (MINij) in a k x k window centered at (i,j).
Step 2. Find the maximum gray level value (MAXij) in the same window.
Step 3. The local contrast LC at (i,j) is computed as:
LCij = C x (1 - (MINij + p) / (MAXij + p))

The value C is chosen based on the desired maximum local contrast value. For high contrast gray-scale images, a smaller value of C may be sufficient to differentiate between foreground and background. For low contrast gray-scale images, more differentiation in contrast may be necessary to select between foreground and background. A value of 255 for C is preferred. This value works for most gray-scale images and allows the values of the contrast image to be stored one per 8-bit byte. The value p is selected to be positive and small in comparison to C. Its primary role is to protect against dividing by zero and it should be selected to facilitate rapid computation of the local contrast value. A preferred value of p is one.
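
This windowed minimum/maximum computation and the contrast formula translate directly into code. The following is a minimal sketch in C (the language the detailed description later names for the implementation), assuming 8-bit pixels stored row-major, C = 255 and p = 1 as preferred above, and off-image window areas treated as zero as stated below in connection with Figure 3; the function name and memory layout are assumptions of this sketch, not the patent's code.

    #define C_MAX 255   /* preselected maximum contrast value C */
    #define P_VAL 1     /* small positive p, guards against division by zero */

    /* Build the MIN, MAX and contrast images for an n x m gray-scale image
       using a k x k window centered on each pixel. */
    static void build_contrast_image(const unsigned char *gray, int n, int m,
                                     int k, unsigned char *min_img,
                                     unsigned char *max_img,
                                     unsigned char *contrast)
    {
        int half = k / 2;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < m; j++) {
                unsigned char lo = 255, hi = 0;
                for (int wi = i - half; wi <= i + half; wi++) {
                    for (int wj = j - half; wj <= j + half; wj++) {
                        /* areas off the image are considered to be zero */
                        unsigned char v =
                            (wi < 0 || wi >= n || wj < 0 || wj >= m)
                                ? 0 : gray[wi * m + wj];
                        if (v < lo) lo = v;
                        if (v > hi) hi = v;
                    }
                }
                min_img[i * m + j] = lo;
                max_img[i * m + j] = hi;
                /* LCij = C x (1 - (MINij + p) / (MAXij + p)) */
                contrast[i * m + j] = (unsigned char)
                    (C_MAX - (C_MAX * (lo + P_VAL)) / (hi + P_VAL));
            }
        }
    }
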
The formula above for local contrast leads to a large local contrast LCij when there is a large difference in gray level between the minimum and maximum pixel in the k x k window. The local contrast value can vary from 0 to C, where 0 is minimum contrast, and C is maximum contrast. It is not necessary that the maximum contrast value be the same as the maximum possible pixel value in the original gray-scale image. However, the preferred maximum contrast value is 255, and this value is used in the remainder of the detailed description. A new image is created, called the contrast image, that contains the local contrast information. For each gray-scale pixel at (i,j) in the original label image, a corresponding pixel in the contrast image is set equal in value to LCij. Each pixel in the contrast image is thereby assigned a value between zero and the maximum contrast. Similarly, two other new images are created, the MIN image and the MAX image. The pixel in the (i,j) position of the MIN image has the value MINij and the pixel in the corresponding position in the MAX image has the value MAXij.
The window size parameter, k, is determined by finding the maximum allowable stroke width for a text character. For example, at 250 dots per inch, an 8 point font character has a stroke width of about 3 pixels. At the same resolution, an 18 point font character has a stroke width of about 14 pixels. The value k is then set slightly larger than this maximum allowable width. For example, to have high text contrast values for a label with an 18 point font, the dimension k of the window must be greater than 14 pixels. If the dimension is smaller than this amount, a very low contrast value will be obtained for pixels that are in the center of a character stroke, because all the gray-scale values within the centered k x k window will be almost equal. For some applications or character fonts, it may be preferable to use a rectangular window of size k x j, where k and j have different values. A preferred window size is a square of 15 pixels by 15 pixels. This window size will allow processing of characters in any rotational orientation up to a font size of 18 points.
Examples showing the derivation of the pixel values for the MIN, MAX, and contrast images are shown in Figures 3, 4, and 5. In all three cases, a square window size is used and k is chosen to have a value of 3. In Figure 3, the original gray-scale image pixel values are shown at the top portion of the Figure. In the upper left corner, the minimum pixel value of "3" in a 3 x 3 square around the (2,2) position determines the (2,2) position of the MIN image to have a value of "3."
Similarly, the (6,6) position of the MIN image is determined to be a "195." At the edges of the gray-scale image, the window will include areas off the gray-scale image. The pixel value for any area off the original image is considered to be zero.
In Figure 4, the (2,2) position of the MAX image is seen to have a value of "39," while the (6,6) position has a value of "231." Figure 5 illustrates how the derivation of the MIN and MAX images leads directly, through algebraic computation, to the contrast image.
To construct the contrast histogram, a set of 256 bins is created, each corresponding to one of the contrast levels (0-255). Each pixel in the contrast image is examined in turn, and the bin corresponding to the pixel value is increased by one. This forms the contrast histogram, which is a count of the number of pixels in the image that have pixel values at each gray level. An example of a contrast histogram is presented in Figure 6.
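
Accumulating this histogram is a single pass over the contrast image; a minimal C sketch under the same assumptions as the previous one:

    /* 256-bin contrast histogram: one count per pixel, in the bin for its
       contrast level. */
    static void build_contrast_histogram(const unsigned char *contrast,
                                         int n, int m, long hist[256])
    {
        for (int b = 0; b < 256; b++)
            hist[b] = 0;
        for (long idx = 0; idx < (long)n * m; idx++)
            hist[contrast[idx]]++;
    }
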
Peaks in the contrast histogram correspond to parts of the gray-scale label image that have similar contrast. In general, these parts, or contrast segments, are similar types of text, graphics, or noise. Peak analysis is the process of determining which peaks correspond to contrast segments that are text, and which peaks correspond to contrast segments that are background or noise. The result of peak analysis will aid in the correct assignment of 0 (black) for text pixels, and 1 (white) for background or noise pixels.
The underlying assumption in the peak analysis process is that, for a given label, the contrast value for contrast segments corresponding to background pixels and noise pixels is on the average lower than the values for foreground/text pixels. However, there are a number of difficulties that occur due to the variety of labels. These difficulties are enumerated as follows:

1. The number of contrast segments is variable. Therefore, the number of contrast peaks in the contrast histogram is variable. This also means that there may be several background or noise peaks, as well as several text peaks.
2. The average contrast value for a contrast segment is variable. Therefore, the location of contrast peaks in the contrast histogram is variable.
3. The number of pixels in a contrast segment is variable. Therefore, the height of a contrast peak in the contrast histogram is variable.
4. The distribution of contrast values around the average contrast value in a contrast segment is variable. Therefore, the width of a contrast peak in the contrast histogram is variable.

Given a contrast histogram with several peaks of various heights, widths, and locations, the goal of peak analysis is to find a contrast value called the minimum allowable contrast such that any pixel having a contrast value below this value will be automatically assigned a value of 1 (background or noise).
To accomplish this task, a set of thresholding process parameters are used for analyzing the contrast histogram. These parameters are: (1) Peak Floor Level, (2) Minimum Contrast Value, (3) Nominal Peak Width, and (4) Nominal Text Peak Width. The parameters are illustrated in Figure 6.
Peak Floor Level is the minimum height that a contrast histogram peak may have. Any contrast histogram peaks that are below the Peak Floor Level are not considered to correspond to possible text contrast segments. Typically, the Peak Floor Level will vary from 800 to 1200, and is preferably set to 1100 for an image which is 1024 x 1024 pixels in size.
Minimum Contrast Value is the minimum value a contrast pixel may have and still be considered part of a text contrast segment. Preferably, it has a value of 25 to 75 when C has a value of 255.
Nominal Peak Width is the number of different contrast values that are expected to be in a nominal contrast histogram peak. Preferably, it has a value of 50 to 75 when C has a value of 255.
Nominal Text Peak Width is the number of different contrast values that are expected to be in a nominal contrast histogram peak that corresponds to a text contrast segment. Preferably, it has a value of 25 to 75 when C has a value of 255.

The peak analysis process for the contrast histogram is a process for enabling the thresholding of the contrast image by determining a minimum allowable contrast. This process is as follows:

Step 1. Peak search: Find the contrast histogram contrast value with the greatest number of pixels. This is the largest current peak.
Step 2. Peak evaluation: If the peak height is not larger than Peak Floor Level, go to step 6. The peak is disallowed because it is too short.
Step 3. Peak storage: Store the peak contrast value.
Step 4. Peak inhibition: Disallow peak search within Nominal Peak Width of this peak, and all previous peaks.
Step 5. Go to step 1.
Step 6. Peak selection: Find the stored peak that is located at the smallest contrast value greater than Minimum Contrast Value. This is called the minimum text peak.
Step 7. Threshold Determination: Subtract Nominal Text Peak Width from the contrast value of the minimum text peak. This is the minimum allowable contrast.
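
The seven steps above amount to an iterated search-and-inhibit loop over the 256 histogram bins. The following C sketch is one straightforward realization; the inhibition array and the -1 return on failure are implementation choices of this sketch, while the four parameters are the thresholding process parameters defined above.

    /* Returns the minimum allowable contrast, or -1 if no text peak exists. */
    static int minimum_allowable_contrast(const long hist[256],
                                          long peak_floor_level,
                                          int nominal_peak_width,
                                          int minimum_contrast_value,
                                          int nominal_text_peak_width)
    {
        int inhibited[256] = {0};   /* contrast values closed to peak search */
        int peaks[256];
        int npeaks = 0;

        for (;;) {
            /* Step 1: peak search over non-inhibited contrast values */
            int best = -1;
            for (int v = 0; v < 256; v++)
                if (!inhibited[v] && (best < 0 || hist[v] > hist[best]))
                    best = v;
            /* Step 2: peak evaluation against the Peak Floor Level */
            if (best < 0 || hist[best] <= peak_floor_level)
                break;                       /* go to step 6 */
            /* Step 3: peak storage */
            peaks[npeaks++] = best;
            /* Step 4: peak inhibition within the Nominal Peak Width */
            for (int v = best - nominal_peak_width;
                 v <= best + nominal_peak_width; v++)
                if (v >= 0 && v < 256)
                    inhibited[v] = 1;
            /* Step 5: repeat from step 1 */
        }

        /* Step 6: peak selection - the stored peak at the smallest contrast
           value greater than the Minimum Contrast Value */
        int min_text_peak = -1;
        for (int s = 0; s < npeaks; s++)
            if (peaks[s] > minimum_contrast_value &&
                (min_text_peak < 0 || peaks[s] < min_text_peak))
                min_text_peak = peaks[s];
        if (min_text_peak < 0)
            return -1;

        /* Step 7: threshold determination */
        return min_text_peak - nominal_text_peak_width;
    }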

Application of the peak analysis process to the contrast histogram shown in Figure 6 results in the identification of three text peaks: Peak 2, Peak 3, and Peak 4. Peak 3 is the minimum text peak. Identification of Peak 3 allows the determination of the minimum allowable contrast, as shown in Figure 6.
Once the minimum allowable contrast is determined, the original gray-scale image can be thresholded. Given the pixel at (i,j) in the original image, its MINij, MAXij and LCij values, the following steps are taken to determine if the pixel is foreground (text) or background:

Step 1. Is LCij greater than the minimum allowable contrast? If not, the pixel at (i,j) is labeled background.
Step 2. Is MAXij - MINij sufficiently large? If not, the pixel at (i,j) is labeled background. To implement this test, the difference MAXij - MINij is compared to a first cutoff value, called the Minimum Contrast Scale. If this difference is less, the pixel is labeled background. Preferably, the Minimum Contrast Scale value is between 20 and 32 when C has a value of 255.

Step 3. Is the pixel at (i,j) much closer to MINij, or closer to MAXij? If sufficiently closer to MAXij, the pixel at (i,j) is labeled background. To implement this test, the difference MAXij - (value of the pixel at (i,j)) is compared to a second cutoff value, called the Nominal Contrast Excess. If this difference is less, the pixel is labeled background. Preferably, the Nominal Contrast Excess value is midway between MINij and MAXij, i.e. (MINij + MAXij) / 2.
Step 4. If the pixel at (i,j) has not been labeled background by any of the above three tests, then the pixel is labeled foreground (text).
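
Per pixel, the four steps reduce to three early-exit comparisons. Below is a minimal C sketch, assuming the preferred midpoint formula for the Nominal Contrast Excess; the function name and the 0 = foreground (text), 1 = background return convention are illustrative only.

    static int classify_pixel(unsigned char gray, unsigned char min_ij,
                              unsigned char max_ij, unsigned char lc_ij,
                              int min_allowable_contrast,
                              int min_contrast_scale)
    {
        /* Step 1: local contrast not above the minimum allowable contrast */
        if (lc_ij <= min_allowable_contrast)
            return 1;
        /* Step 2: max-to-min difference below the Minimum Contrast Scale */
        if (max_ij - min_ij < min_contrast_scale)
            return 1;
        /* Step 3: pixel sufficiently close to MAXij; the Nominal Contrast
           Excess here is the preferred (MINij + MAXij) / 2 */
        if (max_ij - gray < (min_ij + max_ij) / 2)
            return 1;
        /* Step 4: otherwise foreground (text) */
        return 0;
    }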

The thresholding of the gray-scale image with a specific set of parameters completes the first stage of processing. The performance of stage one is dependent on the selection of an adequate set of thresholding process parameters. The set of parameters specified above produces satisfactory results for a wide variety of labels. It is difficult, though, to find a single set of parameters that produces satisfactory results for the entire variety of labels that may be encountered in any given environment. However, with a method of choosing between multiple sets of process parameters, it is possible to find two or three sets of parameters that cover almost all possibilities. Three sets of process parameters are given in Table 1 and are used in conjunction with Example 1, discussed below.
To threshold a given gray-scale image with a set of processing parameters, it is preferable to preselect the parameter values to be used. As can be appreciated from the above description of the thresholding steps, the term "preselected" means not only the preselection of constant values but also the preselection of formulae which may be used to determine the values in question. If a label image is processed by the LAT method and apparatus several times, each time with a different set of process parameters, a set of thresholded label images will be the result. It then becomes necessary to automatically judge which thresholded image of the set is the best, i.e., has the highest quality for OCR processing. The second stage of the invention, Quality Analysis, supplements the stage one processing described above and provides the necessary judgment to choose the best thresholded image.
The Quality Analysis process has the following major steps:

Step 1. Generate n thresholded images using n sets of process parameters.

Step 2. Convert each binary image to a Run-Length Encoded (RLE) image.
Step 3. Perform piece extraction on each RLE image and determine the number and size of connected components in the image.
Step 4. Using statistical information, determine pairwise which of two images has the higher quality, based on the distribution of piece sizes in each of the two images.
Step 5. Using the image selected in step 4 with any other image not yet compared, repeat step 4. Continue until all images have been compared to select the thresholded image with the highest quality.

In step one, stage one of the LAT method is performed n times to produce n images, each with a different set of process parameters. Each binary image thus produced is run-length encoded. Run-length encoding (RLE) is a process that transforms a binary pixel image to another representation. In this representation, each horizontally connected group of black pixels, called a run, is transformed to RLE format by retaining only the start pixel location for the run, and the length of the run. All runs are transformed in this manner. RLE is a standard image processing technique.
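
As a concrete illustration of the transformation, the following C sketch encodes one row of a binary image into (row, start, length) records for the runs of black (0) pixels; the Run structure is an assumed layout for this sketch, not a format specified by the patent.

    typedef struct {
        int row;      /* image row of the run */
        int start;    /* column of the first black pixel in the run */
        int length;   /* number of consecutive black pixels */
    } Run;

    /* Encode one row of a binary image (0 = black, 1 = white) into runs.
       Returns the number of runs appended to runs[]. */
    static int encode_row(const unsigned char *row_pixels, int row, int m,
                          Run *runs)
    {
        int nruns = 0;
        for (int j = 0; j < m; ) {
            if (row_pixels[j] == 0) {          /* start of a black run */
                int start = j;
                while (j < m && row_pixels[j] == 0)
                    j++;
                runs[nruns].row = row;
                runs[nruns].start = start;
                runs[nruns].length = j - start;
                nruns++;
            } else {
                j++;
            }
        }
        return nruns;
    }
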
The RLE transformation tends to reduce the size of the image significantly, without reducing the quality of the binary image. For example, the image in Figure 7 is a 100 x 100 binary image and normally requires storage of 10,000 bits. However, since it contains only a vertical bar (of height 100 pixels, and width of 3 pixels), it can be represented in RLE format by 100 start bytes, each with 1 length byte (of value 3). Therefore, this image only requires storage of 200 bytes in RLE format. Some algorithms have been shown to take less execution time due to the nature of this format, as well as the smaller representation. Next, each RLE image is subjected to piece extraction. Piece extraction is a standard image processing technique for finding runs that are touching on different lines of the images. Runs that touch are called connected components. By assembling groups of connected runs, pieces are formed. For example, a piece can be a typewritten character or a blotch of noise.
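
One standard way to realize piece extraction on such run records is a union-find pass that merges runs on adjacent rows whose column extents touch, then totals the pixel count of each component as its piece size. The C sketch below takes this approach, assuming the Run records of the previous sketch, sorted by row and then by start column; it is illustrative, not necessarily the patent's implementation.

    static int find_root(int *parent, int x)
    {
        while (parent[x] != x) {
            parent[x] = parent[parent[x]];   /* path halving */
            x = parent[x];
        }
        return x;
    }

    /* runs[] sorted by row, then start column; parent[] and piece_size[]
       must each hold nruns entries.  Returns the number of pieces. */
    static int extract_pieces(const Run *runs, int nruns,
                              int *parent, long *piece_size)
    {
        for (int r = 0; r < nruns; r++)
            parent[r] = r;

        for (int a = 0, b = 0; a < nruns; a++) {
            /* advance b to the first run that could lie on the row above a */
            while (b < nruns && runs[b].row < runs[a].row - 1)
                b++;
            for (int c = b; c < nruns && runs[c].row == runs[a].row - 1; c++) {
                int a_end = runs[a].start + runs[a].length - 1;
                int c_end = runs[c].start + runs[c].length - 1;
                /* runs touch when their column extents overlap */
                if (runs[c].start <= a_end && runs[a].start <= c_end) {
                    int ra = find_root(parent, a);
                    int rc = find_root(parent, c);
                    if (ra != rc)
                        parent[ra] = rc;     /* merge the two components */
                }
            }
        }

        /* piece size = total pixel count of the runs in each component */
        int npieces = 0;
        for (int r = 0; r < nruns; r++)
            piece_size[r] = 0;
        for (int r = 0; r < nruns; r++)
            piece_size[find_root(parent, r)] += runs[r].length;
        for (int r = 0; r < nruns; r++)
            if (parent[r] == r)
                npieces++;
        return npieces;
    }
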
With the statistical information thus assembled, the image with the best quality can be selected. Quality estimation is a method for examining the results of the LAT process and judging which threshold is the best among a group of images. This process uses the size of pieces (i.e., number of pixels in a piece) generated by the piece extraction algorithm to estimate quality.
Piece size is a very useful feature to examine because it provides good representation of the quality of text-based images such as labels. The underlying assumption is that the distribution of piece sizes provides information on the amount of noise and text that is found in a thresholded image. Images with a lot of tiny pieces tend to be noisy. Images with many large pieces tend to have high quality text. Oftentimes, images have some measure of pieces of many different sizes.
The invention uses a histogram of the piece sizes as the feature set for quality estimation. The range of piece sizes is divided into b bins. Each bin contains the number of pieces in the images that fall within the size range of the bin. A decision making process is used to examine the histogram to determine if the image is of better quality than any of the other thresholded images. Two different decision processes have been developed for performing this examination.
1. The first decision process uses a histogram of two bins. Bin 1 (Noise) contains small pieces between 0 and 10 pixels in size. Bin 2 (Big) contains big pieces with size greater than 100 pixels. Label the images so that the image with more big pieces is labeled with the number 1, i.e., Big(1) > Big(2).
The following comparison/selection is made pairwise for two thresholded images, Image(1) and Image(2):

Step 1. If Big(2) < 30, then select Image(1).
Step 2. If Q x Big(1)/Big(2) > Noise(1)/Noise(2), then select Image(1); otherwise, select Image(2).

The value Q is selected to have the following values:

If 30 ≤ Big(2) ≤ 150, Q = ((-2)(Big(2)) + 60)/120 + 3
If 150 < Big(2), Q = 1

For more than two images, the algorithm is applied pairwise until a suitable selection is made.
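
The whole first decision process fits in a few lines of C. In the sketch below, the caller passes the two-bin counts with image 1 being the image with more big pieces, per the labeling rule above; the use of floating-point arithmetic for Q is an assumption of this sketch.

    /* Returns 1 to select Image(1), 2 to select Image(2).  Image 1 must be
       the image with more big pieces: Big(1) > Big(2). */
    static int select_better_image(long big1, long noise1,
                                   long big2, long noise2)
    {
        /* Step 1: image 2 has too few big pieces */
        if (big2 < 30)
            return 1;

        /* Q falls linearly from 3 at Big(2) = 30 to 1 at Big(2) = 150 */
        double q = (big2 <= 150)
                       ? ((-2.0 * (double)big2) + 60.0) / 120.0 + 3.0
                       : 1.0;

        /* Step 2: compare Q x Big(1)/Big(2) against Noise(1)/Noise(2) */
        if (q * (double)big1 / (double)big2 >
            (double)noise1 / (double)noise2)
            return 1;
        return 2;
    }

Applied to the Table 2 values discussed in the Example below, this routine selects image A over image B (7 is not greater than 1884/95) and then image C over image A (5.2 exceeds 109/95), matching the selections described there.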

2. The second decision process uses a piece size histogram of four bins as follows:

Bin 1: 0-10.
Bin 2: 11-40.
Bin 3: 41-100.
Bin 4: more than 100.
Bin 1 corresponds to small pieces (image noise) and Bin 4 corresponds to big pieces of the image. Bin 3 corresponds to disconnected big pieces such as may be encountered in an image of text produced by a dot-matrix printer.
The values in these bins can be considered as the components of a four-dimensional vector. Quality analysis may then be performed on two images, Image(1) and Image(2), with a feedforward neural network with eight inputs. The eight inputs are, in order, the values in the four bins from Image(1) and the values in the four bins from Image(2). The eight input network produces a value between 0-255 as output. The network is trained on a large set of training images using the back propagation algorithm.
To train the network, the large set of training images are examined pairwise and the better image of each pair is selected by visual inspection. The neural network is trained by specifying an output of 255 when the vector values for the better image appear first in the 8-component input vector, then reversing the two four-component vectors in the input and specifying a neural network output of 0.
This training process is performed for a large number of pairs of the training images.
After training, the network functions as follows: For a given input pair of images, Image(1) and Image(2), the two four-component vectors from the piece size histogram are provided as input to the neural network. If the output value of the network is greater than 128, Image(1) is better than Image(2). Otherwise, Image(2) is better. For more than two images, the algorithm is applied pairwise until a suitable selection is made.
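
The text fixes only the interface of this network: eight inputs (the two four-bin vectors) and an output in the 0-255 range compared against 128. The C sketch below fills in an assumed single-hidden-layer sigmoid architecture purely for illustration; the hidden-layer size and the weight arrays are hypothetical and would come from the back-propagation training just described.

    #include <math.h>

    #define NN_IN  8     /* four bins from each of the two images */
    #define NN_HID 16    /* hidden-layer size: an assumption of this sketch */

    static double nn_output(const double in[NN_IN],
                            const double w1[NN_HID][NN_IN],
                            const double b1[NN_HID],
                            const double w2[NN_HID], double b2)
    {
        double hidden[NN_HID];
        for (int h = 0; h < NN_HID; h++) {
            double s = b1[h];
            for (int i = 0; i < NN_IN; i++)
                s += w1[h][i] * in[i];
            hidden[h] = 1.0 / (1.0 + exp(-s));   /* sigmoid unit */
        }
        double s = b2;
        for (int h = 0; h < NN_HID; h++)
            s += w2[h] * hidden[h];
        return 255.0 / (1.0 + exp(-s));          /* output in 0..255 */
    }

    /* Image(1) is better if the trained network's output exceeds 128. */
    static int better_of_pair(const long bins1[4], const long bins2[4],
                              const double w1[NN_HID][NN_IN],
                              const double b1[NN_HID],
                              const double w2[NN_HID], double b2)
    {
        double in[NN_IN];
        for (int k = 0; k < 4; k++) {
            in[k] = (double)bins1[k];
            in[k + 4] = (double)bins2[k];
        }
        return nn_output(in, w1, b1, w2, b2) > 128.0 ? 1 : 2;
    }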

W 096/24114 CA 022112S8 1997-07-23 PCTnUS96/01060 Other .l,P~h~nic...c for e~ g quality can also be employed using the histogram of piece sizes. For exarnple, one such m.och~nicm employs a peak detection algolitlllll similar to the one in the first stage of the LAT process to find text and noise piece segmPn~c For another example, neural net~oll~s can be used with only selecte~ bins of the piece size hislo~l~ls being used as inputs.

Example

Figures 8-11 illustrate the use of the invention with a sample label.
Figure 8 depicts the original gray-scale image of a specimen label. Figure 9 shows a thresholded, binary image of the image in Figure 8, using the parameters in column 1 of Table 1. Note that, while all background has been set to white, some of the text is missing. Figure 10 shows a thresholded, binary image of the gray-scale image in Figure 8, using the parameters in column 2 of Table 1. Here, all of the text is present, but some of the background, corresponding to wrinkles in the label, has been set to foreground black. Figure 11 shows a thresholded, binary image of the gray-scale image in Figure 8, using the parameters in column 3 of Table 1. Here, all of the text is present, and all of the background has been set to white. This last binary image is, from visual inspection, clearly the best quality image of the three binary images shown in Figures 9, 10, and 11. When these three binary images were subjected to the first method of quality analysis, the last image, corresponding to Figure 11, was selected.

Parameter Set               1                2                3
Peak Floor Level            1100             1100             1100
Nominal Peak Width          75               25               75
Minimum Contrast Value      75               50               50
Nominal Text Peak Width     75               75               25
Minimum Contrast Scale      32               20               20
Nominal Contrast Excess     (MINij+MAXij)/2  (MINij+MAXij)/2  (MINij+MAXij)/2

Table 1. Process Parameters.

The selection of the last image, corresponding to Figure 11, as the best quality thresholded binary image can be illustrated by reference to Table 2, which contains the values for the Noise bin and Big bin of the piece-size histogram.

W 096/24114 CA 02211258 1997-07-23 PCTrUS96/01060 ImageCo,l~sponding Figure Noise Big Table 2. Number of small and large size binary image pieces.

The quality analysis process first compares images A and B. Since image B has more big pieces, it is labeled image 1 and image A is labeled image 2. The value of Q in this instance is

Q = ((-2)(Big(2)) + 60)/120 + 3 = ((-2)(30) + 60)/120 + 3 = 3

The test for the better quality image is

Q x Big(1)/Big(2) = 3(70)/30 = 7

which is not greater than Noise(1)/Noise(2) = 1884/95, so image 2 (image A) is selected. Next, image A is compared to image C. In this case, image C is labeled image 1. The value Q is again computed to be 3. The test for the better quality image is therefore

Q x Big(1)/Big(2) = 3(52)/30 = 5.2

which is greater than Noise(1)/Noise(2) = 109/95. Hence image 1 (image C), corresponding to Figure 11, is selected as the best image of the three.

While this invention has been described in detail with particular reference to preferred embodiments thereof, it will be understood that variations and modifications can be made to these embodiments without departing from the spirit and scope of the invention as described herein and as defined in the appended claims.

Claims (19)

The embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows:
1. A method for selecting one of two binary images corresponding to different thresholdings of a gray-scale image, each of said binary images having a first one of two values for portions of said binary image and a second one of two values for the remainder of said binary image, comprising, prior to applying a character recognition process to said images, the steps of:
locating connected components in each of said binary images;
determining a piece size of each of said connected components, said piece size having a value corresponding to a number of pixels in each of said connected components;
determining piece size distributions for said binary images, each piece size distribution having a number of connected components by size for connected components of each said binary image having said first one of two values;
comparing said piece size distributions; and selecting one of said images based on said comparison of piece size distributions.
2. The method of claim 1, wherein the step of determining piece size distributions for said binary images comprises the steps of:
converting the portions of each of said binary images having said first one of two values into a run-length-encoded (RLE) image; and determining the piece size distribution of connected components of each of said RLE images using piece extraction.
3. The method of claim 1, wherein the step of comparing said piece size distributions comprises the steps of:
constructing a multi-bin histogram of each of said piece size distributions;
and comparing the values in selected bins of said multi-bin histograms.
4. The method of claim 3, wherein the step of comparing the values in selected bins of said multi-bin histograms comprises the step of:

comparing the ratio of values in a first bin of each of said multi-bin histograms with the ratio of values in a second bin of each of said multi-bin histograms;
and wherein the step of selecting one of said images comprises:
selecting one of said two binary images based on the comparison in the previous step.
5. The method of claim 1, wherein the step of comparing said piece size distributions comprises the steps of:
constructing a multi-bin histogram of each of said piece size distributions;
and providing values from selected bins of each of said multi-bin histograms as inputs to a previously-trained neural network; and wherein the step of selecting one of said images comprises:
selecting one of said two binary images based upon a value of the output of said previously-trained neural network.
6. The method of claim 5, wherein said histograms are multi-bin histograms with a first bin corresponding to small connected components in said binary images and with a second bin corresponding to big connected components in said binary images and said step of comparing the values in selected bins of said multi-bin histograms comprises the steps of:
computing a first quotient by dividing the value of said first bin of said histogram for a first of said two binary images by the value of said first bin of said histogram for a second of said two binary images;
computing a second quotient by dividing the value of said second bin of said histogram for a first of said two binary images by the value of said second bin of said histogram for a second of said two binary images; and wherein the step of selecting one of said images comprises:
selecting said first binary image if said first quotient is less than said second quotient multiplied by a quality value.
7. The method of claim 6, wherein said quality value is a preselected constant.
8. The method of claim 6, wherein said quality value is computed from a predetermined formula employing preselected constants and said value of said second bin of said histogram for said first of two binary images.
9. The method of claim 6, wherein said quality value is computed from a predetermined formula employing preselected constants and said value of said second bin of said histogram for said second of two binary images.
10. The method of claim 6, wherein said first bin of said multi-bin histogram contains the count of connected components of said binary images having from 0 to approximately 10 pixels.
11. The method of claim 6, wherein said second bin of said multi-bin histogram contains the count of connected components of said binary images having more than approximately 100 pixels.
12. A method of thresholding a gray-scale image, comprising the steps of:
constructing a contrast image corresponding to said gray-scale image, wherein each pixel of said contrast image has a value LC ij given by the formula LC ij = C(1 - (MIN ij + p)/(MAX ij + p)), where (i,j) is the position of said pixel in said contrast image, C is a preselected maximum contrast value, p is a preselected positive value small in comparison to C, MIN ij is the minimum gray-scale value of all pixels in a rectangular window centered on the corresponding position in said gray-scale image and MAX ij is the maximum gray-scale value of all pixels in said rectangular window;
selecting a predetermined number of sets of thresholding parameters, said thresholding parameters being a peak-floor-level value, a nominal-peak-width value, a minimum-contrast value, a nominal-text-peak-width value, a minimum-contrast-scale value and a nominal-contrast-excess value;
for each set of thresholding parameters, obtaining a candidate binary image by constructing a contrast histogram of said contrast image over all possible contrast values in said contrast image, finding all peaks in said contrast histogram greater than said preselected peak-floor-level value and separated from any higher peak by said preselected nominal-peak-width value, selecting from among said found peaks a minimum-text peak by locating a peak occurring at the lowest contrast value which is greater than said preselected minimum-contrast value, obtaining a minimum-allowable-contrast value by subtracting said preselected nominal-text-peak-width value from the contrast value of said minimum-text peak, assigning the pixel at the (i,j) position of said gray scale image to a background class of said candidate binary image if LC ij is less than said minimum-allowable-contrast value, computing, if said pixel of said gray scale image is not already assigned to said background class, the difference between MAX ij and MIN ij and assigning said pixel to said background class if said max-to-min difference is less than said preselected minimum-contrast-scale value, computing, if said pixel of said gray scale image is not already assigned to said background class, the difference between MAX ij and the gray-scale value of said pixel and assigning said pixel to said background class if said max-to-pixel-value difference is less than said preselected nominal-contrast-excess value and assigning, if said pixel of said gray scale image is not already assigned to said background class, said pixel to a foreground class;
selecting a first one of said candidate binary images, converting portions of said first-selected candidate binary image having a first one of two values into a first run-length-encoded (RLE) image;
prior to applying a character recognition process to said first RLE image, locating connected components in said first RLE image and determining a piece size of each of said connected components, said piece size having a value corresponding to the number of pixels in each of said connected components;
determining a first piece size distribution of connected components of said first RLE image, constructing a first multi-bin histogram of said first piece size distribution;
selecting a second one of said candidate binary images from among said candidate binary images not previously selected, converting portions of said second-selected candidate binary image having a first one of two values into a second RLE
image;
prior to applying a character recognition process to said second RLE image, locating connected components in said second RLE image and determining a piece size of each of said connected components, said piece size having a value corresponding to the number of pixels in each of said connected components;
determining a second piece size distribution of connected components of said second RLE image, constructing a second multi-bin histogram of said second piece size distribution;
picking one of said first-selected candidate binary image and second-selected candidate binary image based on an evaluation of said first and second multi-bin histograms;
and repeating the previous step, with said picked candidate binary image replacing said first-selected candidate binary image, until all said candidate binary images have been selected.
13. The method of claim 12, wherein the step of picking one of said first-selected candidate binary image and second-selected candidate binary image based on an evaluation of said first and second multi-bin histograms comprises the steps of:
comparing the ratio of the value in a first bin of said first multi-bin histogram to the value in a first bin of said second multi-bin histogram with the ratio of the value in a second bin of said first multi-bin histogram to the value in a second bin of said second multi-bin histogram; and picking one of said first-selected candidate binary image and second-selected candidate binary image based on the comparison of said ratios.
14. The method of claim 13, wherein the step of picking one of said first-selected candidate binary image and second-selected candidate binary image based on an evaluation of said first and second multi-bin histograms comprises the steps of:
providing the values from selected bins of each of said multi-bin histograms as inputs to a previously-trained neural network; and picking one of said first-selected candidate binary image and second-selected candidate binary image based upon a value of the output of said previously-trained neural network.
15. An apparatus for thresholding a gray-scale image, comprising:
means for constructing a contrast image corresponding to said gray-scale image, wherein each pixel of said contrast image has a value LC ij given by the formula LC ij = C(1 - (MIN ij + p)/(MAX ij + p)), where (i,j) is the position of said pixel in said contrast image, C is a preselected maximum contrast value, p is a preselected positive value small in comparison to C, MIN ij is the minimum gray-scale value of all pixels in a rectangular window centered on the corresponding position in said gray-scale image and MAX ij is the maximum gray-scale value of all pixels in said rectangular window;
means for selecting a predetermined number of sets of thresholding parameters, said thresholding parameters being a peak-floor-level value, a nominal-peak-width value, a minimum-contrast value, a nominal-text-peak-width value, a minimum- contrast-scale value and a nominal-contrast-excess value;
means for obtaining, for each set of thresholding parameters, a candidate binary image by constructing a contrast histogram of said contrast image over all possible contrast values in said contrast image, finding all peaks in said contrast histogram greater than said preselected peak-floor-level value and separated from any higher peak by said preselected nominal-peak-width value, selecting from among said found peaks a minimum-text peak by locating a peak occurring at the lowest contrast value which is greater than said preselected minimum-contrast value, obtaining a minimum-allowable-contrast value by subtracting said preselected nominal-text-peak-width value from the contrast value of said minimum-text peak, assigning the pixel at the (i,j) position of said gray scale image to a background class of said candidate binary image if LC ij is less than said minimum-allowable-contrast value, computing, if said pixel of said gray scale image is not already assigned to said background class, the difference between MAX ij and MIN ij and assigning said pixel to said background class if said max-to-min difference is less than said preselected minimum-contrast-scale value, computing, if said pixel of said gray scale image is not already assigned to said background class, the difference between MAX ij and the gray-scale value of said pixel and assigning said pixel to said background class if said max-to-pixel-value difference is less than said preselected nominal-contrast-excess value and assigning, if said pixel of said gray scale image is not already assigned to said background class, said pixel to a foreground class;
means for selecting a first one of said candidate binary images, converting the portions of said first-selected candidate binary image having a first one of two values into a first run-length-encoded (RLE) image; means operative prior to applying a character recognition process to said first RLE image, for locating connected components in said first RLE image and determining a piece size of each of said connected components, said piece size having a value corresponding to the number of pixels in each of said connected components, determining a first piece size distribution of connected components of said first RLE image and constructing a first multi-bin histogram of said first piece size distribution;
means for selecting a second one of said candidate binary images from among said candidate binary images not previously selected, converting the portions of said second-selected candidate binary image having a first one of two values into a second RLE image;
means operative prior to applying a character recognition process to said second RLE image, for locating connected components in said second RLE image and determining a piece size of each of said connected components, said piece size having a value corresponding to the number of pixels in each of said connected components, determining a second piece size distribution of connected components of said second RLE image and constructing a second multi-bin histogram of said second piece size distribution;
means for picking one of said first-selected candidate binary image and second-selected candidate binary image based on an evaluation of said first and second multi-bin histograms; and means for repeating the previous function, with said picked candidate binary image replacing said first-selected candidate binary image, until all said candidate binary images have been selected.
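For readers tracing the signal flow of claim 15, the following Python sketch shows one plausible software rendering of the contrast-image construction and histogram-based pixel classification recited above. It is a minimal illustration under stated assumptions, not the patented apparatus: the window size, the default parameter values, and every helper name (build_contrast_image, find_peaks, threshold_candidate) are inventions of this sketch, and scipy's windowed min/max filters stand in for whatever pipeline hardware the specification contemplates.

```python
# Illustrative sketch of the contrast-based thresholding of claim 15.
# Window size and parameter values are assumptions, not taken from the patent.
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def build_contrast_image(gray, C=255.0, p=1.0, window=(5, 5)):
    """LC_ij = C * (1 - (MIN_ij + p) / (MAX_ij + p)) over a rectangular window."""
    gray = gray.astype(float)
    mn = minimum_filter(gray, size=window)
    mx = maximum_filter(gray, size=window)
    return C * (1.0 - (mn + p) / (mx + p)), mn, mx

def find_peaks(hist, peak_floor, nominal_peak_width):
    """Bins that exceed peak_floor and are not dominated by a higher bin
    within nominal_peak_width bins on either side (an approximation of the
    peak-separation test recited in claim 15)."""
    peaks = []
    for v in range(len(hist)):
        lo = max(0, v - nominal_peak_width)
        hi = min(len(hist), v + nominal_peak_width + 1)
        if hist[v] > peak_floor and hist[v] >= hist[lo:hi].max():
            peaks.append(v)
    return peaks

def threshold_candidate(gray, params):
    """One candidate binary image (True = foreground) for one parameter set,
    applying the background tests of claim 15 in order."""
    contrast, mn, mx = build_contrast_image(gray)
    hist = np.bincount(np.round(contrast).astype(int).ravel(), minlength=256)
    peaks = find_peaks(hist, params["peak_floor"], params["nominal_peak_width"])
    text_peaks = [v for v in peaks if v > params["min_contrast"]]
    if not text_peaks:                       # no usable text peak: all background
        return np.zeros(gray.shape, dtype=bool)
    min_allowable = min(text_peaks) - params["nominal_text_peak_width"]
    fg = contrast >= min_allowable                          # low-contrast test
    fg &= (mx - mn) >= params["min_contrast_scale"]         # max-to-min test
    fg &= (mx - gray) >= params["nominal_contrast_excess"]  # max-to-pixel test
    return fg
```

A driver would call threshold_candidate once per parameter set to produce the candidate binary images that the selection means of claims 16 through 18 then compare.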
16. The apparatus of claim 15, wherein the means for picking one of said first-selected candidate binary image and second-selected candidate binary image based on an evaluation of said first and second multi-bin histograms comprises:
means for comparing the ratio of the value in a first bin of said first multi-bin histogram to the value in a first bin of said second multi-bin histogram with the ratio of the value in a second bin of said first multi-bin histogram to the value in a second bin of said second multi-bin histogram; and means for picking one of said first-selected candidate binary image and second-selected candidate binary image based on the comparison of said ratios.
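As one concrete reading of the ratio test in claims 13 and 16, the sketch below compares two bins of the two piece-size histograms. Which bins are compared, the epsilon guard against empty bins, and which candidate wins the comparison are assumptions of this illustration; the claims leave those choices to the preselected configuration.

```python
def pick_by_ratio(hist_a, hist_b, bin1=0, bin2=1, eps=1e-9):
    """Compare hist_a[bin1]/hist_b[bin1] against hist_a[bin2]/hist_b[bin2]
    and return the label of the preferred candidate (claims 13 and 16)."""
    r1 = (hist_a[bin1] + eps) / (hist_b[bin1] + eps)
    r2 = (hist_a[bin2] + eps) / (hist_b[bin2] + eps)
    # Assumption: a candidate with relatively fewer tiny pieces (bin1)
    # than mid-size pieces (bin2) is the cleaner thresholding.
    return "a" if r1 <= r2 else "b"
```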
17. The apparatus of claim 15, wherein the means for picking one of said first-selected candidate binary image and second-selected candidate binary image based on an evaluation of said first and second multi-bin histograms comprises:
means for providing the values from selected bins of each of said multi-bin histograms as inputs to a previously-trained neural network; and means for picking one of said first-selected candidate binary image and second-selected candidate binary image based upon a value of the output of said previously-trained neural network.
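Claims 14 and 17 swap the fixed ratio rule for a previously-trained neural network. The stand-in below is a single logistic unit over normalized histogram bins, shown purely to illustrate the data flow; the real network's architecture, weights, and input bin selection are not disclosed in the claims and are assumed here.

```python
import numpy as np

def pick_by_network(hist_a, hist_b, weights, bias):
    """Feed selected bins of both histograms to a placeholder logistic unit
    and pick candidate 'a' when the output exceeds 0.5 (claims 14 and 17).
    `weights` and `bias` stand in for previously-trained values."""
    x = np.concatenate([hist_a, hist_b]).astype(float)
    x /= x.sum() + 1e-9              # normalize so absolute counts do not dominate
    out = 1.0 / (1.0 + np.exp(-(weights @ x + bias)))
    return "a" if out > 0.5 else "b"
```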
18. An apparatus for selecting one of two binary images corresponding to different thresholdings of a gray-scale image, each of said binary images having a first one of two values for portions of said binary image and a second one of two values for the remainder of said binary image, comprising:
means, operative prior to applying a character recognition process to said images, for locating connected components in each of said binary images;
means for determining a piece size of each of said connected components, said piece size having a value corresponding to the number of pixels in each of said connected components;
means for determining piece size distributions for said binary images, each piece size distribution having the number of connected components by size for connected components of each said binary image having said first one of two values; means for comparing said piece size distributions; and means for selecting one of said images based on said comparison of piece size distributions.
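The comparison of claim 18 is driven entirely by piece-size distributions of connected components. A brief sketch follows; it labels the binary mask directly rather than an RLE image for simplicity, and the bin edges are illustrative assumptions, since the claims leave binning unspecified.

```python
import numpy as np
from scipy.ndimage import label

def piece_size_histogram(binary, bin_edges=(1, 5, 20, 100, 1000, np.inf)):
    """Multi-bin histogram of connected-component pixel counts for the
    foreground-valued portions of one binary image (claim 18)."""
    labeled, n = label(binary)                # 4-connected labeling by default
    if n == 0:
        return np.zeros(len(bin_edges) - 1, dtype=int)
    sizes = np.bincount(labeled.ravel())[1:]  # per-component pixel counts, background dropped
    return np.histogram(sizes, bins=bin_edges)[0]
```

The two resulting histograms are exactly the inputs consumed by the ratio test of claim 16 or the network of claim 17.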
19. A computer-readable memory for storing statements or instructions for use in the execution in a computer of the process of claim 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 or 14.
CA002211258A 1995-01-31 1996-01-25 Method and apparatus for separating foreground from background in images containing text Expired - Fee Related CA2211258C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US38073295A 1995-01-31 1995-01-31
US08/380,732 1995-01-31
PCT/US1996/001060 WO1996024114A2 (en) 1995-01-31 1996-01-25 Method and apparatus for separating foreground from background in images containing text

Publications (2)

Publication Number Publication Date
CA2211258A1 CA2211258A1 (en) 1996-08-08
CA2211258C true CA2211258C (en) 2000-12-26

Family

ID=23502251

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002211258A Expired - Fee Related CA2211258C (en) 1995-01-31 1996-01-25 Method and apparatus for separating foreground from background in images containing text

Country Status (10)

Country Link
US (1) US5889885A (en)
EP (1) EP0807297B1 (en)
JP (1) JP3264932B2 (en)
AT (1) ATE185211T1 (en)
CA (1) CA2211258C (en)
DE (1) DE69604481T2 (en)
DK (1) DK0807297T3 (en)
ES (1) ES2140825T3 (en)
GR (1) GR3031478T3 (en)
WO (1) WO1996024114A2 (en)

Families Citing this family (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2211258C (en) * 1995-01-31 2000-12-26 United Parcel Service Of America, Inc. Method and apparatus for separating foreground from background in images containing text
JP3380831B2 (en) * 1996-03-08 2003-02-24 シャープ株式会社 Image forming device
FR2769733B1 (en) * 1997-10-15 2000-01-07 Jeffrey Horace Johnson PROCESS FOR AUTOMATICALLY EXTRACTING PRINTED OR HANDWRITTEN INSCRIPTIONS ON A BACKGROUND, IN A MULTI-LEVEL DIGITAL IMAGE
US6411737B2 (en) * 1997-12-19 2002-06-25 Ncr Corporation Method of selecting one of a plurality of binarization programs
US6411741B1 (en) * 1998-01-14 2002-06-25 Kabushiki Kaisha Toshiba Image processing apparatus
US6507670B1 (en) * 1998-03-05 2003-01-14 Ncr Corporation System and process for removing a background pattern from a binary image
EP0961218B1 (en) * 1998-05-28 2004-03-24 International Business Machines Corporation Method of binarization in an optical character recognition system
US6222642B1 (en) * 1998-08-10 2001-04-24 Xerox Corporation System and method for eliminating background pixels from a scanned image
TW429348B (en) * 1999-02-12 2001-04-11 Inst Information Industry The method of dividing an image
EP1226508B1 (en) * 1999-07-30 2018-10-10 Microsoft Technology Licensing, LLC Adjusting character dimensions to compensate for low contrast character features
US6738526B1 (en) * 1999-07-30 2004-05-18 Microsoft Corporation Method and apparatus for filtering and caching data representing images
US6577762B1 (en) * 1999-10-26 2003-06-10 Xerox Corporation Background surface thresholding
US6728391B1 (en) 1999-12-03 2004-04-27 United Parcel Service Of America, Inc. Multi-resolution label locator
EP1104916B1 (en) * 1999-12-04 2011-05-11 LuraTech Imaging GmbH Method for compressing color and/or grey-level scanned documents
DE19958553A1 (en) 1999-12-04 2001-06-07 Luratech Ges Fuer Luft Und Rau Image compression scheme for scanned images divides picture into three levels allows separate maximum compression of text and pictures
US6652489B2 (en) * 2000-02-07 2003-11-25 Medrad, Inc. Front-loading medical injector and syringes, syringe interfaces, syringe adapters and syringe plungers for use therewith
JP2001251507A (en) * 2000-03-03 2001-09-14 Fujitsu Ltd Image processor
US6473522B1 (en) * 2000-03-14 2002-10-29 Intel Corporation Estimating text color and segmentation of images
US6470094B1 (en) * 2000-03-14 2002-10-22 Intel Corporation Generalized text localization in images
US6674900B1 (en) * 2000-03-29 2004-01-06 Matsushita Electric Industrial Co., Ltd. Method for extracting titles from digital images
US6771804B1 (en) * 2000-05-16 2004-08-03 Siemens Aktiengesellschaft Method and apparatus for signal segmentation
KR100430273B1 (en) * 2000-07-21 2004-05-04 엘지전자 주식회사 Multimedia Query System Using Non-uniform Bin Quantization Of Color Histogram
AU2002227262A1 (en) * 2000-12-04 2002-06-18 Isurftv E-mail, telephone number or url within tv frame
JP2002271611A (en) * 2001-03-14 2002-09-20 Fujitsu Ltd Image processing unit
JP4140048B2 (en) * 2001-08-20 2008-08-27 富士フイルム株式会社 Image management apparatus, image management program, and image management method
JP4003428B2 (en) * 2001-10-10 2007-11-07 セイコーエプソン株式会社 Check processing apparatus and check processing method
KR100425324B1 (en) * 2002-02-09 2004-03-30 삼성전자주식회사 Method for processing the contrast of a video signal
RU2251737C2 (en) * 2002-10-18 2005-05-10 Аби Софтвер Лтд. Method for automatic recognition of language of recognized text in case of multilingual recognition
US7079686B2 (en) * 2002-08-20 2006-07-18 Lexmark International, Inc. Systems and methods for content-based document image enhancement
KR100484170B1 (en) * 2002-10-25 2005-04-19 삼성전자주식회사 Method and apparatus for improvement of digital image quality
JP4118749B2 (en) * 2002-09-05 2008-07-16 株式会社リコー Image processing apparatus, image processing program, and storage medium
DE10326035B4 (en) * 2003-06-10 2005-12-22 Hema Electronic Gmbh Method for adaptive error detection on a structured surface
DE10326033B4 (en) * 2003-06-10 2005-12-22 Hema Electronic Gmbh Method for adaptive error detection on an inhomogeneous surface
DE10326032B4 (en) * 2003-06-10 2006-08-31 Hema Electronic Gmbh Method for self-testing an image processing system
DE10326031B4 (en) * 2003-06-10 2005-12-22 Hema Electronic Gmbh Method for adaptive edge detection
JP4259949B2 (en) * 2003-08-08 2009-04-30 株式会社リコー Image creating apparatus, image creating program, and recording medium
US7921159B1 (en) * 2003-10-14 2011-04-05 Symantec Corporation Countering spam that uses disguised characters
US7313275B2 (en) * 2005-02-15 2007-12-25 Primax Electronics Ltd. Method of separating text and graphs in digital image data
US7492957B1 (en) * 2005-08-29 2009-02-17 Symantec Corporation Using run length encoding to detect target images
JP4215038B2 (en) * 2005-09-16 2009-01-28 セイコーエプソン株式会社 Image processing apparatus, image processing method, and program
US7668394B2 (en) * 2005-12-21 2010-02-23 Lexmark International, Inc. Background intensity correction of a scan of a document
DE102006001075A1 (en) * 2006-01-09 2007-07-12 Carl Zeiss Ag Electronic vision aid and electronic visual aid procedure
US20070201743A1 (en) * 2006-02-28 2007-08-30 Sharp Laboratories Of America, Inc. Methods and systems for identifying characteristics in a digital image
US7813547B1 (en) 2006-04-05 2010-10-12 Unisys Corporation Apparatus and method for detection and analysis of imagery
US8306336B2 (en) * 2006-05-17 2012-11-06 Qualcomm Incorporated Line or text-based image processing tools
ATE409334T1 (en) * 2006-05-19 2008-10-15 Datasensor Spa IMAGE SENSOR FOR SECURITY SYSTEM AND ASSOCIATED OPERATING METHOD
EP2064658A4 (en) * 2006-09-07 2017-08-23 Lumex As Relative threshold and use of edges in optical character recognition process
US8467614B2 (en) * 2007-11-28 2013-06-18 Lumex As Method for processing optical character recognition (OCR) data, wherein the output comprises visually impaired character images
US8351720B2 (en) * 2008-04-24 2013-01-08 Hewlett-Packard Development Company, L.P. Method and system providing edge enhanced image binarization
US8170291B2 (en) * 2008-05-09 2012-05-01 The United States Postal Service Methods and systems for analyzing the quality of digital signature confirmation images
TWI386030B (en) * 2008-06-11 2013-02-11 Vatics Inc Algorithm for hybrid connected-component-labeling
US8442348B2 (en) * 2008-11-06 2013-05-14 Seiko Epson Corporation Image noise reduction for digital images using Gaussian blurring
US8675060B2 (en) * 2009-08-28 2014-03-18 Indian Institute Of Science Machine vision based obstacle avoidance system
US20110149753A1 (en) * 2009-12-21 2011-06-23 Qualcomm Incorporated Switching between media broadcast streams having varying levels of quality
US8693743B1 (en) 2010-02-19 2014-04-08 Olive Tree Media, LLC Analysis and display of multiple biomarker co-expression in cells and tissues
CN101807253B (en) * 2010-03-22 2012-07-25 南京工程学院 Transmission line-oriented and zone width information-based image framework extraction method
US8977629B2 (en) * 2011-05-24 2015-03-10 Ebay Inc. Image-based popularity prediction
AU2013343203B2 (en) * 2012-11-08 2019-08-15 Total Sa Method for processing an image
JP5939154B2 (en) * 2012-12-27 2016-06-22 ブラザー工業株式会社 Image processing apparatus and computer program
JP6080259B2 (en) * 2013-02-06 2017-02-15 日本電産サンキョー株式会社 Character cutting device and character cutting method
CN104036498B (en) * 2014-05-28 2017-01-11 杭州电子科技大学 Fast evaluation method of OCT image quality based on layer by layer classification
JP6642970B2 (en) * 2015-03-05 2020-02-12 キヤノン株式会社 Attention area detection device, attention area detection method, and program
US9367899B1 (en) * 2015-05-29 2016-06-14 Konica Minolta Laboratory U.S.A., Inc. Document image binarization method
US9710911B2 (en) * 2015-11-30 2017-07-18 Raytheon Company System and method for generating a background reference image from a series of images to facilitate moving object identification
DE102016004549A1 (en) * 2016-04-15 2017-11-02 Noventi Gmbh Method for recognizing printed texts on documents
CN106775548B (en) * 2016-11-25 2018-02-23 北京小米移动软件有限公司 page processing method and device
US10242449B2 (en) * 2017-01-04 2019-03-26 Cisco Technology, Inc. Automated generation of pre-labeled training data
CN111368837B (en) * 2018-12-25 2023-12-05 中移(杭州)信息技术有限公司 Image quality evaluation method and device, electronic equipment and storage medium

Family Cites Families (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3715724A (en) * 1969-12-24 1973-02-06 Olivetti & Co Spa Apparatus for recognizing graphic symbols
US4003999A (en) * 1972-10-25 1977-01-18 A. H. Robins Company, Incorporated Aspirin-tea coprecipitates for treating inflammation
US4119947A (en) * 1977-07-20 1978-10-10 Howard Noyes Leighton Optical signal processor
DE3107521A1 (en) * 1981-02-27 1982-09-16 Siemens AG, 1000 Berlin und 8000 München METHOD FOR AUTOMATICALLY DETECTING IMAGE AND TEXT OR GRAPHIC AREAS ON PRINT ORIGINALS
GB2103449B (en) * 1981-06-29 1985-05-30 Nippon Telegraph & Telephone Method and apparatus for gray level signal processing
JPS5960690A (en) * 1982-09-30 1984-04-06 Toshiba Corp Binary coding device
JPS60146378A (en) * 1984-01-11 1985-08-02 Matsushita Electric Ind Co Ltd Picture recognition device
JPS6158087A (en) * 1984-08-29 1986-03-25 Nec Corp Picture area dividing device
EP0198481B1 (en) * 1985-04-17 1996-03-13 Hitachi, Ltd. Image processing apparatus
JPS61276080A (en) * 1985-05-31 1986-12-06 Toshiba Corp Top and reverse surface deciding device
US4837450A (en) * 1985-07-01 1989-06-06 Canon Kabushiki Kaisha Apparatus for reading a film image with controllable illumination and threshold value
US4742556A (en) * 1985-09-16 1988-05-03 Davis Jr Ray E Character recognition method
US5046114A (en) * 1985-10-01 1991-09-03 The Palantir Corporation Method and structure for separating joined patterns for use in pattern and character recognition system
US4809308A (en) * 1986-02-20 1989-02-28 Irt Corporation Method and apparatus for performing automated circuit board solder quality inspections
CA1268547A (en) * 1986-03-31 1990-05-01 Ahmed Mostafa El-Sherbini Thresholding algorithm selection apparatus
US4794239A (en) * 1987-10-13 1988-12-27 Intermec Corporation Multitrack bar code and associated decoding method
JPH01137385A (en) * 1987-11-25 1989-05-30 Matsushita Electric Ind Co Ltd Character recognizing device
US4924078A (en) * 1987-11-25 1990-05-08 Sant Anselmo Carl Identification symbol, system and method
US4929979A (en) * 1988-01-29 1990-05-29 Konica Corporation Method and apparatus for processing image
JP2727549B2 (en) * 1988-01-29 1998-03-11 日本電気株式会社 Optimal image quality selection device
JPH0746979B2 (en) * 1988-05-26 1995-05-24 不二製油株式会社 Special nutrition food
EP0358815B1 (en) * 1988-09-12 1993-05-26 Océ-Nederland B.V. System and method for automatic segmentation
JPH02100575A (en) * 1988-10-07 1990-04-12 Toshiba Corp Picture processor
JPH02196565A (en) * 1989-01-25 1990-08-03 Eastman Kodatsuku Japan Kk Picture binarizing system
JPH087785B2 (en) * 1989-05-16 1996-01-29 松下電器産業株式会社 Binarization processor
US5048096A (en) * 1989-12-01 1991-09-10 Eastman Kodak Company Bi-tonal image non-text matter removal with run length and connected component analysis
JP2935863B2 (en) * 1989-12-27 1999-08-16 本田技研工業株式会社 Hough transform apparatus and method
JP2768786B2 (en) * 1990-02-20 1998-06-25 キヤノン株式会社 Image reading device
US5081690A (en) * 1990-05-08 1992-01-14 Eastman Kodak Company Row-by-row segmentation and thresholding for optical character recognition
US5121224A (en) * 1990-06-01 1992-06-09 Eastman Kodak Company Reproduction apparatus with selective screening and continuous-tone discrimination
US5091965A (en) * 1990-07-16 1992-02-25 Sony Corporation Video image processing apparatus
US5086485A (en) * 1990-10-19 1992-02-04 Xerox Corporation Method and apparatus for dynamically setting a background level
JPH04268989A (en) * 1991-02-25 1992-09-24 Nippon Steel Corp Method and device for recognizing character
JPH04268987A (en) * 1991-02-25 1992-09-24 Nippon Steel Corp Character recognizing device
US5179599A (en) * 1991-06-17 1993-01-12 Hewlett-Packard Company Dynamic thresholding system for documents using structural information of the documents
US5268967A (en) * 1992-06-29 1993-12-07 Eastman Kodak Company Method for automatic foreground and background detection in digital radiographic images
JP2951814B2 (en) * 1993-02-25 1999-09-20 富士通株式会社 Image extraction method
US5384864A (en) * 1993-04-19 1995-01-24 Xerox Corporation Method and apparatus for automatic determination of text line, word and character cell spatial features
US5444797A (en) * 1993-04-19 1995-08-22 Xerox Corporation Method and apparatus for automatic character script determination
JP2933801B2 (en) * 1993-06-11 1999-08-16 富士通株式会社 Method and apparatus for cutting out characters
US5452367A (en) * 1993-11-29 1995-09-19 Arch Development Corporation Automated method and system for the segmentation of medical images
US5630037A (en) * 1994-05-18 1997-05-13 Schindler Imaging, Inc. Method and apparatus for extracting and treating digital images for seamless compositing
US5583659A (en) * 1994-11-10 1996-12-10 Eastman Kodak Company Multi-windowing technique for thresholding an image using local image properties
CA2211258C (en) * 1995-01-31 2000-12-26 United Parcel Service Of America, Inc. Method and apparatus for separating foreground from background in images containing text

Also Published As

Publication number Publication date
WO1996024114A3 (en) 1996-11-28
ES2140825T3 (en) 2000-03-01
GR3031478T3 (en) 2000-01-31
DE69604481T2 (en) 2000-03-30
DK0807297T3 (en) 2000-04-10
JP3264932B2 (en) 2002-03-11
WO1996024114A2 (en) 1996-08-08
CA2211258A1 (en) 1996-08-08
EP0807297B1 (en) 1999-09-29
EP0807297A2 (en) 1997-11-19
DE69604481D1 (en) 1999-11-04
JPH10506733A (en) 1998-06-30
US5889885A (en) 1999-03-30
ATE185211T1 (en) 1999-10-15

Similar Documents

Publication Publication Date Title
CA2211258C (en) Method and apparatus for separating foreground from background in images containing text
US6701010B1 (en) Color image processing apparatus and pattern extracting apparatus
KR100625755B1 (en) Character recognition apparatus, character recognition method, medium processing method and computer readable recording medium having character recognition program
US5761344A (en) Image pre-processor for character recognition system
EP0677817B1 (en) Page segmentation and character recognition system
US6771813B1 (en) Image processing apparatus and pattern extraction apparatus
EP0632402B1 (en) Method for image segmentation and classification of image elements for document processing
US6026177A (en) Method for identifying a sequence of alphanumeric characters
US5867277A (en) Reduced resolution document storage and retrieval system
US6674900B1 (en) Method for extracting titles from digital images
US6757426B2 (en) System and method for image processing by automatic color dropout
JP2003228712A (en) Method for identifying text-like pixel from image
CN110378337B (en) Visual input method and system for drawing identification information of metal cutting tool
Gllavata et al. Finding text in images via local thresholding
JPH0291789A (en) Character recognizing system
JP2832928B2 (en) Character recognition method
ZhiWei et al. A new automatic extraction method of container identity codes
JP2972516B2 (en) Print pattern inspection device for inkjet printer
JPH06187490A (en) Area dividing method
JP3220226B2 (en) Character string direction determination method
Fernandez-Maloigne et al. Texture analysis for road detection
JP2003016386A (en) Method and device for character recognition
JPH0644404A (en) Inspecting instrument for printed matter
Wong et al. Use of colour in form layout analysis
JP2918363B2 (en) Character classification method and character recognition device

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed

Effective date: 20150126