US20110091098A1 - System and Method for Detecting Text in Real-World Color Images - Google Patents


Info

Publication number
US20110091098A1
Authority
US
United States
Prior art keywords
text
cascade
classifiers
regions
logic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/906,997
Inventor
Alan Yuille
Xiangrong Chen
Stellan Lagerstrom
Daniel Terry
Mark Nitzberg
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US12/906,997
Publication of US20110091098A1
Status: Abandoned

Classifications

    • G06V 20/62: Text, e.g. of license plates, overlay texts or captions on TV images
    • G06V 20/63: Scene text, e.g. street names
    • G06V 30/10: Character recognition

Abstract

A method and apparatus for detecting text in real-world images comprises calculating a cascade of classifiers, the cascade comprising a plurality of stages, each stage including one or more weak classifiers, the plurality of stages organized to start out with classifiers that are most useful for ruling out non-text regions, and removing regions classified as non-text regions from the cascade prior to completion of the cascade, to further speed up processing.

Description

    RELATED APPLICATIONS
  • The present invention claims priority to U.S. Provisional Patent Application No. 60/711,100, filed Sep. 2, 2005.
  • U.S. GOVERNMENT RIGHTS
  • This invention was made with United States government support under Grants R44EY011821 and R44EY014487 from the National Institutes of Health (NIH). The United States Government has certain rights in this invention.
  • FIELD OF THE INVENTION
  • The present invention relates to image analysis, and more particularly to identifying text in real-world images.
  • BACKGROUND
  • Many efforts have attempted to address the challenge of text detection. Accurate detection and identification of text in documents has been achieved via optical character recognition. This method is most effective with high-quality, black-and-white documents that make it easy to segment the images into text and non-text regions, a much simpler problem than detecting and reading text in diverse, real-world, color images. The detection of captions in video sequences is also largely a solved problem because the position and size of captions are generally standardized, and the backgrounds change rapidly while the captions change more slowly. This too is a simpler problem than real-world text detection because of the presence of these additional image cues.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:
  • FIG. 1 illustrates one embodiment of the data flow in the system.
  • FIG. 2 is a flowchart of one embodiment of the process used by the system to detect text.
  • FIG. 3 is a flowchart of one embodiment of how the detection system can be trained and customized for new applications or configurations using labeled training images.
  • FIG. 4 is a block diagram of one embodiment of the text detection system.
  • DETAILED DESCRIPTION
  • The method and apparatus described herein provide a system and method for detecting and reading text in real-world color images or video taken in a diverse range of environments, such as indoor environments and outdoor street scenes. The system and method are accurate even with different fonts and sizes of text, changes in lighting, and perspective distortions due to viewer angle. At the same time, this system and method for text detection has a rapid processing time while maintaining a low rate of false positives and negatives. The system and method use a learning algorithm that enables the system to adapt to novel image domains and new hardware components such as different cameras and mobile devices including cell phones. While the examples below address text detection, this algorithm may be used for detecting other types of data in images, such as UPC codes or other orderly marking systems.
  • System and Method
  • The system receives a color or black-and-white digital image as input and outputs outlined and labeled regions indicating where text is present in the image. In one embodiment, the "digital image" may be a frame from a video, a digitized image, a digital photograph, or any other type of data which can be presented as one or more digital images.
  • FIG. 1 illustrates one embodiment of the data flow in the system. The user (1) submits a digital image (2) for processing by the text detection and extension system (3). The system processes the original digital image and outputs a digital image with outlined and labeled regions indicating where text is present (4). This image and its associated text regions can be loaded into a subsequent image enhancement system (5), and shown on a display device (6). Alternatively, or additionally, the image and its text regions can be binarized by the system (7), loaded into an optical character recognition system (8), and output as text (9), output in Braille or another format, read aloud via an audio output device (10), or output via other means. In one embodiment, the output may be a summary or other short version of the text. In one embodiment, the output may be further processed for outputting in a different language or different format.
  • In its various embodiments, the system may exist as a standalone computing device, an application on a computing device, or a plug-in or extension to an existing application on a computing device.
  • FIG. 2 is a flowchart of one embodiment of the process used by the system to detect text. In one embodiment, the digital image is broken into multiple layers of regions of various sizes (1). In one embodiment, this is done using a standard pyramid algorithm. Each individual region is fed into the detection algorithm. In one embodiment, the regions are fed into the detection algorithm in parallel. Alternatively, the regions may be analyzed in series. The ordering of the regions may be modified for optimal results in a variety of application domains and tasks.
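  • The following is a minimal sketch of the multi-scale region generation described above, assuming a fixed-ratio pyramid scanned with a fixed-size sliding window; the window size, scale step, and stride are illustrative constants, not values from this disclosure.
    #include <vector>
    
    struct Region { int x, y, w, h; float scale; };
    
    // Enumerate candidate regions over an image pyramid: at each scale the
    // image is conceptually downsampled by 'scaleStep' and scanned with a
    // fixed-size sliding window.
    std::vector<Region> GenerateRegions(int imgW, int imgH,
                                        int winW = 48, int winH = 24,
                                        float scaleStep = 1.25f, int stride = 4)
    {
        std::vector<Region> regions;
        for (float scale = 1.0f; imgW / scale >= winW && imgH / scale >= winH;
             scale *= scaleStep)
        {
            int levelW = static_cast<int>(imgW / scale);
            int levelH = static_cast<int>(imgH / scale);
            for (int y = 0; y + winH <= levelH; y += stride)
                for (int x = 0; x + winW <= levelW; x += stride)
                    // Coordinates are in level space; multiply by 'scale'
                    // to map back to the original image.
                    regions.push_back({x, y, winW, winH, scale});
        }
        return regions;
    }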
  • In one embodiment, the detection algorithm (2) consists of cascading levels of tests (classifiers). Each cascade level contains one or more individual tests that are performed in series. Each test uses one or more image features that detect cues indicating the possible presence of text in the region being analyzed. Each test yields a confidence value for the region; the region passes the test if its confidence value is greater than the threshold value for that test. After passing each test, the region's overall confidence value for the cascade level is updated, and the region is either submitted to the subsequent test in that level, accepted for that level, or rejected for that level depending on how its overall confidence value compares to the overall threshold value for the cascade level.
  • If the region is rejected at any cascade level, it is not processed further (3). If the region is accepted by a cascade level, it is passed on to the next level of the cascade for further testing (4). In one embodiment, if the confidence value is high enough, the region may be accepted and not passed to further testing. Image regions which are accepted by all cascade levels are given an overall confidence value (5). In one embodiment, the overall confidence value (5) is based on the region's performance in the final level of the cascade. Alternatively, the overall confidence value may be a cumulative value based on results from multiple levels of the cascade.
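  • A minimal sketch of the accept/reject flow through the cascade is shown below. It simplifies each level to a single confidence score with per-level reject and early-accept thresholds; the CascadeLevel interface and threshold names are assumptions for illustration only.
    #include <functional>
    #include <vector>
    
    struct Region { int x, y, w, h; };
    
    struct CascadeLevel {
        float rejectThreshold;   // below this the region is discarded
        float acceptThreshold;   // at or above this the region is accepted early
        std::function<float(const Region&)> evaluate;  // level confidence for the region
    };
    
    // Returns a confidence >= 0 if the region survives the cascade, or -1
    // if it is rejected at some level and never processed further.
    float RunCascade(const std::vector<CascadeLevel>& levels, const Region& r)
    {
        float confidence = 0.0f;
        for (const CascadeLevel& level : levels) {
            confidence = level.evaluate(r);
            if (confidence < level.rejectThreshold)
                return -1.0f;                  // rejected: stop immediately
            if (confidence >= level.acceptThreshold)
                break;                         // accepted early: skip remaining levels
        }
        return confidence;                     // confidence from the last level evaluated
    }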
  • Once all regions have been accepted or rejected by the detection algorithm, overlapping and adjacent regions are clustered (6) to create larger contiguous regions. These larger regions are each given a confidence value based on the confidence values of their smaller constituent regions. At this stage, the specific outline of each region is determined, and the regions are labeled using their confidence values.
  • Once the final text-containing regions have been determined, in one embodiment, an extension algorithm (7) is used to expand the regions to include nearby text or truncated characters. The region, in one embodiment, can also be binarized as desired before being output to the user or to another image processing system.
  • FIG. 3 is a flowchart of one embodiment of how the detection system can be trained and customized for new applications or configurations using labeled training images.
  • A set of example images from a new application domain is human-labeled to indicate regions with and without text. In another embodiment, if available, a set of example images with known-good automatically labeled regions may be used. In one embodiment, at least 100 images are used for training. These images (1) are then fed to the system in training mode to allow it to learn to successfully detect text in the new domain. The training mode can also be used to refine the system's text detection when a new image capture device or type of image is used.
  • In training mode, the system processes the training images using the detection algorithm (2) just as it does in regular mode. After the regions have been accepted or rejected, the system compares them to the actual labels to determine the accuracy of the algorithm. The system then adapts the algorithm, selecting the most effective features for each test, the most effective ways to order and combine the tests and cascade levels to produce more accurate results, and the most effective test weights and thresholds for the calculations. The training system seeks to reduce the number of false positive and false negative text detections. It also seeks to minimize the processing time for the first few cascade levels. This ensures that most regions of the image can be rapidly rejected by only a small number of tests, making the overall detection algorithm more time-efficient. In one embodiment, an authorized user can manually modify or refine the cascade levels, tests, and weights to further customize the system.
  • One type of learning algorithm that may be used is the AdaBoost machine learning algorithm. The various detection cascade levels in the system can use this algorithm to process and detect text in regions of the image.
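  • As an illustration of this kind of training, the sketch below implements a textbook discrete AdaBoost loop: each round selects the weak classifier with the lowest weighted error and then reweights the samples so later rounds focus on previously misclassified examples. The Sample and WeakClassifier types and the rounds parameter are hypothetical; this is not the system's actual training code.
    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <functional>
    #include <vector>
    
    struct Sample { std::vector<float> features; int label; };   // label in {-1, +1}
    using WeakClassifier = std::function<int(const Sample&)>;    // returns -1 or +1
    
    struct BoostedStage {
        std::vector<WeakClassifier> classifiers;
        std::vector<float> alphas;                               // per-classifier weights
    };
    
    BoostedStage TrainAdaBoost(const std::vector<Sample>& samples,
                               const std::vector<WeakClassifier>& pool,
                               int rounds)
    {
        std::vector<double> w(samples.size(), 1.0 / samples.size());
        BoostedStage stage;
        for (int t = 0; t < rounds; ++t) {
            // Pick the weak classifier with the lowest weighted error.
            std::size_t best = 0;
            double bestErr = 1.0;
            for (std::size_t c = 0; c < pool.size(); ++c) {
                double err = 0.0;
                for (std::size_t i = 0; i < samples.size(); ++i)
                    if (pool[c](samples[i]) != samples[i].label) err += w[i];
                if (err < bestErr) { bestErr = err; best = c; }
            }
            double eps = std::max(bestErr, 1e-10);
            double alpha = 0.5 * std::log((1.0 - eps) / eps);
            stage.classifiers.push_back(pool[best]);
            stage.alphas.push_back(static_cast<float>(alpha));
            // Reweight: increase the weight of misclassified samples, then normalize.
            double sum = 0.0;
            for (std::size_t i = 0; i < samples.size(); ++i) {
                int h = pool[best](samples[i]);
                w[i] *= std::exp(-alpha * samples[i].label * h);
                sum += w[i];
            }
            for (double& wi : w) wi /= sum;
        }
        return stage;
    }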
  • Tests (Classifiers) Used in One Embodiment
  • In the preferred embodiment of this system, rectangular image regions of various sizes are used to analyze predefined image features. In one embodiment, the rectangular regions are selected using a standard pyramid algorithm. In one embodiment, luminance (brightness) values of the pixels are used in the detection process, while color information is used later in the binarization process.
  • In one embodiment, an AdaBoost cascade with 7 layers is used. Each layer of the cascade contains 1 to 30 tests. Each test uses one or more image feature values, each of which is sorted into bins by comparing it with test-specific threshold values. The threshold values are set by the system during training as described earlier in this document. The bin numbers are used as an index to a test-specific n-dimensional matrix, where the value at the intersection is either true or false, which specifies the result of the test as a whole. The specific image features used in one embodiment of each test are listed below.
  • In one embodiment, each test is given a weight that is used when combining the individual test results within the cascade layer into an overall result for the layer. The overall result is updated as each test is performed to determine if the image region can be accepted or rejected by the layer, or if the next test in the layer must be performed.
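  • A minimal sketch of such a binned test is shown below: each feature value is bucketed by test-specific thresholds, the bin indices address a boolean lookup table, and the weights of the tests that fire are summed into a layer score. The structure names and the omitted early-exit logic are simplifications; real thresholds and tables come from training.
    #include <vector>
    
    // One weak test over a pair of feature values (e.g. the "D3, D4" pairs
    // listed below): each value is binned against its own thresholds, and
    // the (binA, binB) cell of a boolean table gives the test outcome.
    struct BinnedPairTest {
        int featureA, featureB;                  // indices into the feature vector
        std::vector<float> thresholdsA;          // sorted bin boundaries for feature A
        std::vector<float> thresholdsB;          // sorted bin boundaries for feature B
        std::vector<std::vector<bool>> table;    // binsA x binsB outcome matrix
        float weight;                            // contribution to the layer score
    
        static int Bin(float v, const std::vector<float>& t) {
            int b = 0;
            while (b < static_cast<int>(t.size()) && v >= t[b]) ++b;
            return b;                            // 0 .. t.size()
        }
    
        bool Fires(const std::vector<float>& features) const {
            return table[Bin(features[featureA], thresholdsA)]
                        [Bin(features[featureB], thresholdsB)];
        }
    };
    
    // Layer decision: accumulate the weights of the tests that fire and
    // compare against the layer threshold (early accept/reject during the
    // accumulation is omitted here for brevity).
    bool LayerAccepts(const std::vector<BinnedPairTest>& tests,
                      const std::vector<float>& features, float layerThreshold)
    {
        float score = 0.0f;
        for (const BinnedPairTest& t : tests)
            if (t.Fires(features)) score += t.weight;
        return score >= layerThreshold;
    }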
  • The layers, classifiers, and image features in the preferred system embodiment are as follows. Classifier weights are given as well, but could be further refined through system training. The individual image features used in each classifier are defined later in this document. Note that these layers, classifiers, image features, and classifier weights are merely exemplary, and one of skill in the art would understand that the layers may be reorganized, altered, or removed from the cascade without changing the underlying idea.
  • This is an exemplary cascade, including seven layers. The classifiers are explained in more detail below.
  • Adaboost Layer 1: 1 weak classifier
  • Classifier Weight
    1 D3, D4 4.626257
  • Adaboost Layer 2: 1 weak classifier
  • Classifier Weight
    1 D0, D3 4.278939
  • Adaboost Layer 3: 5 weak classifiers
  • Classifier Weight
    1 D4, D15 3.870142
    2 D0, D3 2.046390
    3 D4, D15 1.947373
    4 S6, D14 1.538185
    5 S5, S11 1.069461
  • Adaboost Layer 4: 10 weak classifiers
  • Classifier Weight
    1 D7, D14 3.886540
    2 D0, D4 1.752814
    3 M0, D13 1.367982
    4 D3, D14 1.274082
    5 D0, D6 0.967092
    6 S11, D17 0.873878
    7 S3, D13 0.942438
    8 D1, D14 0.840898
    9 S5, S10 0.666019
    10 S4, D4 0.660017
  • Adaboost Layer 5: 22 weak classifiers
  • Classifier Weight
    1 S5, D6 3.951040
    2 D4, D14 1.571396
    3 D0, D15 1.308625
    4 S6, D3 1.025399
    5 S4, D14 0.823495
    6 S9, D4 0.872460
    7 S4, D16 0.743971
    8 D4, D13 0.736302
    9 D0, D3 0.665261
    10 M0, D14 0.630531
    11 S5, D6 0.684585
    12 S3, D3 0.587298
    13 D3, D4 0.578154
    14 M3, S11 0.566080
    15 S3, D13 0.496378
    16 S5, S10 0.490426
    17 S0, D1 0.526227
    18 M0, M3 0.473949
    19 D4, D12 0.436995
    20 M0, M2 0.490757
    21 S4, D14 0.501030
    22 D0, D2 0.520316
  • Adaboost Layer 6: 30 weak classifiers
  • Classifier Weight
    1 D3, D4 3.001183
    2 D0, D16 1.351147
    3 D3, D13 1.121551
    4 S5, D4 0.758123
    5 D3, D5 0.656535
    6 S3, D13 0.712661
    7 M0, D14 0.653778
    8 D0, D4 0.601257
    9 M3, S8 0.556955
    10 S4, D13 0.510116
    11 D0, D16 0.519914
    12 S4, D4 0.548812
    13 S0, D18 0.490303
    14 S9, D13 0.453983
    15 S3, D15 0.470483
    16 D1, D15 0.526004
    17 D0, D14 0.417721
    18 M0, S0 0.433557
    19 S4, D14 0.415910
    20 S5, D2 0.444604
    21 S6, D14 0.424369
    22 D0, D1 0.379253
    23 S3, D13 0.405478
    24 D4, D13 0.472468
    25 S4, D14 0.407701
    26 D1, D2 0.397965
    27 M2, S0 0.378079
    28 S0, D3 0.387972
    29 S10, D12 0.371740
    30 M0, S0 0.370144
  • Adaboost layer 7: 30 weak classifiers
  • Classifier Weight
    1 E0, E1 4.140843
    2 H5, H11 0.981255
    3 H6, H10 0.707663
    4 H0, H3 0.644695
    5 H13, E0 0.558645
    6 H8, H9 0.531337
    7 H1, E3 0.420097
    8 H2, E0 0.407218
    9 H3, H7 0.374002
    10 H7, H11 0.360664
    11 H10, E2 0.331540
    12 H0, H1 0.302133
    13 H5, H10 0.312395
    14 H1, E4 0.272916
    15 E0, E5 0.281763
    16 H1, H9 0.290753
    17 H2, E0 0.262424
    18 H0, H6 0.250681
    19 H10, E4 0.259521
    20 H2, H3 0.252718
    21 H8, H13 0.235930
    22 H0, E5 0.227033
    23 H10, H12 0.211346
    24 H5, H11 0.250197
    25 H5, E2 0.264241
    26 H1, H8 0.199238
    27 H9, E0 0.189235
    28 H7, H11 0.194733
    29 H13, E3 0.189933
    30 E0, E3 0.182727
  • Image Features
  • In the preferred embodiment of this system, the image features used in the classifiers are grouped and defined as follows. These features, as well as their positions within the cascade (described above), define only one of many possible configurations of the system. This particular configuration is the product of much fine-tuning and is optimized to detect text in real-world images.
  • Group A. In this group, the region is divided into 3 subregions. In one embodiment the division is horizontal (like a Spanish flag). The subregions from the top have mean values of luminance of m1, m, and m2. In the current implementation, m1 and m2 are of equal height, each ⅛ of the total height, while m is ¾ of the total height.
  • m1
    m
    m2
  • The classifiers of Group A are:
  • M0. m
  • M1. m1−m
  • M2. m2−m
  • M3. (m1−m)*(m2−m)
  • Group B. This group is divided as in group A, but the values used are based on the standard deviation (STD) of the luminance values in the region. From the top the values are referred to as s1, s, and s2.
  • s1
    s
    s2
  • The classifiers of Group B are:
  • S0. s
  • S1. s1
  • S2. s2
  • S3. s1/s
  • S4. s2/s
  • Group C. This group uses the same s, s1 and s2 as in group B, but divides the s region horizontally into two equal parts and computes the standard deviation of luminance values within the two parts of the newly divided region, referring to them as s3 and s4.
  • s1
    s3
    s4
    s2
  • The classifiers of Group C are:
  • S5. s3/s
  • S6. s4/s
  • Group D. This group uses the same s1 and s2 as in group B, and divides s vertically into two equal parts with STDs s5 and s6.
  • s1
    s5 s6
    s2
  • The classifiers of Group D are:
  • S7. s5/s
  • S8. s6/s
  • Group E. This group uses the same s1 and s2 as in group B, and divides s vertically into three equal parts with STDs s7, s8, and s9.
  • s1
    s7 s8 s9
    s2
  • The classifiers of Group E are:
  • S9. s7/s
  • S10. s8/s
  • S11. s9/s
  • Group F. This group uses the same divisions and s as group B, but calculates the mean of the absolute value of the horizontal gradient for all vertical-edge pixels in each of the subregions: from the top, dx1, dx, and dx2.
  • The horizontal gradient at each pixel is defined as the result of this Sobel convolution kernel:
  • −1 0 1
    −2 0 2
    −1 0 1
  • The vertical gradient at each pixel is defined as the result of this Sobel convolution kernel:
  • 1 2 1
    0 0 0
    −1 −2 −1
  • If the absolute value of the vertical gradient is larger than the horizontal, the pixel is a horizontal-edge pixel; otherwise it is a vertical-edge pixel.
  • The classifiers of Group F are:
  • D0. dx/s
  • D1. dx1/s
  • D2. dx2/s
  • D3. dx1/dx
  • D4. dx2/dx
  • Group G. This group uses the same dx as group F, and also divides that region as in group C with mean horizontal gradients dx3 and dx4.
  • The classifiers of Group G are:
  • D5. dx3/dx
  • D6. dx4/dx
  • Group H. This group uses the same dx as group F, and also divides that region as in group D, with mean horizontal gradients dx5 and dx6.
  • The classifiers of Group H are:
  • D7. dx5/dx
  • D8. dx6/dx
  • Group I. This group uses the same dx as group F, and also divides that region as in group E, with mean horizontal gradients dx7, dx8, and dx9.
  • D9. dx7/dx
  • D10. dx8/dx
  • D11. dx9/dx
  • Groups J, K, L, and M are analogous to groups F, G, H, and I but use the mean of the absolute value of the vertical gradient for all horizontal-edge pixels in each of the subregions.
  • D12-D23 are then analogous to D0-D11.
  • Group N. This group is based on a histogram of the Sobel gradient directions in the region. Using the horizontal and vertical Sobel gradients as above for each pixel, the direction is determined as 0-8, where 0-7 signify N, NE, E, SE, S, SW, W and NW and 8 indicates a flat, edgeless region. d[n] is then the proportion of pixels with the gradient direction value n.
  • H0. d[0]+d[4]
  • H1. d[1]+d[5]
  • H2. d[2]+d[6]
  • H3. d[3]+d[7]
  • H4. d[4]
  • H5. d[5]
  • H6. d[6]
  • H7. d[7]
  • Group O. This group is based on an adaptive histogram of the values calculated as dx, in group F, above. In the current embodiment, three buckets are used; hdx[0] is the relative number of pixels with horizontal gradients in the lowest third of the range, etc.
  • H8. hdx[0]
  • H9. hdx[1]
  • H10. hdx[2]
  • Group P. This group is analogous to group O, but uses dy.
  • H11. hdy[0]
  • H12. hdy[1]
  • H13. hdy[2]
  • Group Q. This group divides the entire region into horizontal stripes of equal size. In the current embodiment, 3 stripes are used. For each stripe, the average of the absolute value of the horizontal difference is calculated. The following convolution kernel is used:
  • 0 0 0
    −1 1 0
    0 0 0
  • edx[n] is the average for the stripe n.
  • The classifiers for Group Q are:
  • E0. edx[0]−edx[1]
  • E1. edx[1] Center stripe
  • E2. edx[2]−edx[1]
  • Group R. This group is like group Q, except for each horizontal stripe, the average of the absolute value of the vertical difference is calculated. The following convolution kernel is used:
  • 0 −1 0
    0 1 0
    0 0 0
  • edy[n] is the average for the stripe n.
  • The classifiers for Group R are:
  • E3. edy[0]−edy[1]
  • E4. edy[1] Center stripe
  • E5. edy[2]−edy[1]
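  • The sketch below shows how a few of the features defined above might be computed from a grayscale region: the Group A/B band means and standard deviations (top ⅛, middle ¾, bottom ⅛) and a Group F-style mean absolute horizontal Sobel gradient over vertical-edge pixels. The GrayRegion type and helper names are assumptions; the band proportions and Sobel kernels follow the text.
    #include <algorithm>
    #include <cmath>
    #include <vector>
    
    // Grayscale region stored row-major, luminance in [0, 255].
    struct GrayRegion {
        int w, h;
        std::vector<float> pix;
        float at(int x, int y) const { return pix[y * w + x]; }
    };
    
    // Mean and standard deviation of luminance over rows [y0, y1).
    static void MeanStd(const GrayRegion& g, int y0, int y1, float& mean, float& sd)
    {
        double sum = 0, sum2 = 0;
        int n = (y1 - y0) * g.w;
        for (int y = y0; y < y1; ++y)
            for (int x = 0; x < g.w; ++x) { double v = g.at(x, y); sum += v; sum2 += v * v; }
        double m = sum / n;
        mean = static_cast<float>(m);
        sd = static_cast<float>(std::sqrt(std::max(0.0, sum2 / n - m * m)));
    }
    
    // Group A / B style band statistics: top (1/8), middle (3/4), bottom (1/8).
    void BandFeatures(const GrayRegion& g, float& m1, float& m, float& m2,
                      float& s1, float& s, float& s2)
    {
        int topEnd = g.h / 8, botStart = g.h - g.h / 8;
        MeanStd(g, 0, topEnd, m1, s1);       // m1, s1
        MeanStd(g, topEnd, botStart, m, s);  // m,  s  (used by M0, S0, S3, S4, ...)
        MeanStd(g, botStart, g.h, m2, s2);   // m2, s2
    }
    
    // Group F style feature: mean absolute horizontal Sobel gradient over
    // the vertical-edge pixels (|dx| >= |dy|) of rows [y0, y1).
    float MeanAbsHorizGradient(const GrayRegion& g, int y0, int y1)
    {
        double sum = 0; int count = 0;
        for (int y = std::max(1, y0); y < std::min(y1, g.h - 1); ++y)
            for (int x = 1; x < g.w - 1; ++x) {
                double dx = g.at(x + 1, y - 1) + 2 * g.at(x + 1, y) + g.at(x + 1, y + 1)
                          - g.at(x - 1, y - 1) - 2 * g.at(x - 1, y) - g.at(x - 1, y + 1);
                double dy = g.at(x - 1, y - 1) + 2 * g.at(x, y - 1) + g.at(x + 1, y - 1)
                          - g.at(x - 1, y + 1) - 2 * g.at(x, y + 1) - g.at(x + 1, y + 1);
                if (std::fabs(dx) >= std::fabs(dy)) { sum += std::fabs(dx); ++count; }
            }
        return count ? static_cast<float>(sum / count) : 0.0f;
    }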
  • In the preferred embodiment, the following method is used to calculate the posterior probability value for a rectangular region once it has been identified as a text region.
  • float CAdaBClassifier::Classify(CASample *pSample)
    {
        float fAlpha = 0;
        // m_fSumAlpha is really 1/2 \sum \alpha
        float fRes = m_fSumAlpha + m_fSumAlpha;
        float p, fVal;
        int nClassifiers = m_vpClassifiers.size();
        for (int i = 0; i < nClassifiers; i++)
        {
            CAClassifier *pClassifier = m_vpClassifiers[i];
            p = pClassifier->Classify(pSample);  // weak classifier returning 0 or 1
            fVal = m_vfAlpha[i];                 // weight from training
            // 0.5 == probability of text
            // this implements \sum a_i h_i, assuming p is 0 or 1
            // (this should be fAlpha += fVal * p in general)
            if (p > 0.5)
            {
                fAlpha += fVal;
                if (fAlpha > m_fSumAlpha) break;
            }
            // test if we can't ever reach the threshold (assumes p \in [0,1])
            fRes -= fVal;
            if (fAlpha + fRes < m_fSumAlpha) break;
        }
        // WARNING: the final AdaBoost posterior is NOT fully computed in most cases
        // returns a negative number if the normalized score fAlpha / \sum\alpha is below 0.5
        return (fAlpha - m_fSumAlpha);
    }
  • In one embodiment, overlapping detected rectangles are joined, and the total posterior probability is calculated:
  • p = 1 − Π_i (1 − p(i)).
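  • A minimal sketch of this combination rule (the probability that at least one of the joined rectangles is a true detection, treating the individual posteriors as independent):
    #include <vector>
    
    // p = 1 - prod_i(1 - p(i)): total posterior for a set of joined,
    // overlapping detection rectangles with individual posteriors p[i].
    float CombinedPosterior(const std::vector<float>& p)
    {
        float allWrong = 1.0f;
        for (float pi : p) allWrong *= (1.0f - pi);
        return 1.0f - allWrong;
    }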
  • Image Extension and Binarization
  • In one embodiment of the image extension and binarization process, an algorithm is first applied to the detected regions to classify individual pixels as non-text or potential-text. In one embodiment, for each pixel the algorithm examines neighborhoods of increasing size centered at that pixel until it finds one with a luminance variance above a given variance threshold. Two neighborhood thresholds are then created, TLight=μ+kσ and TDark=μ−kσ, where μ and σ are the mean and variance within the selected neighborhood, respectively, and k is a constant. This process produces a three-band image in which each pixel has been classified as non-text, light potential-text, or dark potential-text.
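  • A minimal sketch of this per-pixel classification, treating σ as the standard deviation of the selected neighborhood; the neighborhood growth schedule and the constants (variance threshold, k, maximum radius) are illustrative assumptions, not values from this disclosure.
    #include <algorithm>
    #include <cmath>
    #include <vector>
    
    enum class PixelClass { NonText, LightPotentialText, DarkPotentialText };
    
    struct GrayImage {
        int w, h;
        std::vector<float> pix;                  // luminance, row-major
        float at(int x, int y) const { return pix[y * w + x]; }
    };
    
    // Mean and variance of luminance in the (2r+1)x(2r+1) window at (cx, cy),
    // clipped to the image borders.
    static void WindowStats(const GrayImage& img, int cx, int cy, int r,
                            double& mean, double& var)
    {
        double sum = 0, sum2 = 0; int n = 0;
        for (int y = std::max(0, cy - r); y <= std::min(img.h - 1, cy + r); ++y)
            for (int x = std::max(0, cx - r); x <= std::min(img.w - 1, cx + r); ++x) {
                double v = img.at(x, y); sum += v; sum2 += v * v; ++n;
            }
        mean = sum / n;
        var = sum2 / n - mean * mean;
    }
    
    // Grow the neighborhood until its variance exceeds 'varThreshold', then
    // classify the pixel against TLight = mu + k*sigma and TDark = mu - k*sigma.
    PixelClass ClassifyPixel(const GrayImage& img, int x, int y,
                             double varThreshold = 100.0, double k = 0.4,
                             int maxRadius = 16)
    {
        double mean = 0, var = 0;
        for (int r = 2; r <= maxRadius; r += 2) {
            WindowStats(img, x, y, r, mean, var);
            if (var > varThreshold) break;
        }
        if (var <= varThreshold) return PixelClass::NonText;  // flat area: no text evidence
        double sigma = std::sqrt(var);
        double v = img.at(x, y);
        if (v > mean + k * sigma) return PixelClass::LightPotentialText;
        if (v < mean - k * sigma) return PixelClass::DarkPotentialText;
        return PixelClass::NonText;
    }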
  • Neighboring pixels within the same band (light potential-text and dark potential-text) are grouped into connected components (denoted cc's) and each connected component is then classified as text or non-text. This is accomplished using a number of statistics including the number of pixels in the cc (NP), the number of cc pixels on the border of the cc's bounding box (NB), the height of the bounding box (h), the width of the bounding box (w), the ratios h/w and NP/w*h, and a measure of the local size of the text as determined by the detection algorithm (MS).
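  • The sketch below computes the listed per-component statistics (NP, NB, bounding-box height and width, and the two ratios) from a pixel list; the actual text/non-text decision rule that consumes them is learned and is not reproduced here.
    #include <algorithm>
    #include <vector>
    
    struct Pixel { int x, y; };
    
    struct CCStats {
        int np;        // NP: number of pixels in the cc
        int nb;        // NB: cc pixels lying on the border of the bounding box
        int h, w;      // bounding-box height and width
        float hOverW;  // h / w
        float fill;    // NP / (w * h)
    };
    
    // 'cc' must be non-empty.
    CCStats ComputeStats(const std::vector<Pixel>& cc)
    {
        int minX = cc[0].x, maxX = cc[0].x, minY = cc[0].y, maxY = cc[0].y;
        for (const Pixel& p : cc) {
            minX = std::min(minX, p.x); maxX = std::max(maxX, p.x);
            minY = std::min(minY, p.y); maxY = std::max(maxY, p.y);
        }
        CCStats s{};
        s.np = static_cast<int>(cc.size());
        s.w = maxX - minX + 1;
        s.h = maxY - minY + 1;
        for (const Pixel& p : cc)
            if (p.x == minX || p.x == maxX || p.y == minY || p.y == maxY) ++s.nb;
        s.hOverW = static_cast<float>(s.h) / s.w;
        s.fill = static_cast<float>(s.np) / (s.w * s.h);
        return s;
    }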
  • Following the removal of non-text cc's, the system groups words or stray cc's into lines of text and uses the context of nearby cc's to reject any cc's that do not fit into any group. This is accomplished by calculating the bounding box for each cc and giving it a label i. The system then calculates features like the center of the box (xi, yi), the height (hi), the average luminance intensity of the box (li).
  • A color distance cdist (i,j) between the colors of two cc's i,j is computed, in one embodiment, by:
      • 1. Computing a set for each cc consisting of the color values for each pixel in the cc in 3-dimensional YCrCb space with values in the range [0,255]. Call these Ci and Cj, producing vectors of 3D points.
      • 2. Computing the average points as the geometric center of gravity of these vectors: μi and μj, both 3D points.
      • 3. Taking the smaller of the two Mahalanobis distances DM between one average point and the other vector.

  • cdist(i,j) = min(D_M(μi, Cj), D_M(μj, Ci))
  • The result will be in the range [0, 441] (√(3·255²) ≈ 441.7).
  • The distance (dist) between two cc's i, j is then defined as

  • dist(i,j) = (wx·|xi − xj| + wy·|yi − yj| + wh·|hi − hj|)/s + wI·|Ii − Ij| + wc·cdist(i,j),
  • where s is the expected height of characters, computed as the average height of the detection rectangles that were merged to produce the detected region, and the w's are constants selected to maximize the performance of the system. In one embodiment the values of w used by this system are: wx=1.0, wy=0.7, wI=0.01 for Ii in [0,255], wh=0.3, wc=0.05.
  • By using this metric, each cc is grouped with its closest neighbors. Neighbors are then grouped into lines of text. Grouping never extends beyond a constant distance T; the algorithm thus rejects a cc k if dist(k, l) > T for all other cc's l. In one embodiment the value of T used is 2.2.
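  • A minimal sketch of this distance metric using the stated weights and threshold; the color-distance term is stubbed because it depends on the per-component color sets described above.
    #include <cmath>
    
    // Per-connected-component summary used by the distance metric.
    struct CCInfo {
        float x, y;   // bounding-box center (xi, yi)
        float h;      // bounding-box height (hi)
        float I;      // average luminance intensity (Ii), in [0, 255]
    };
    
    // Placeholder: in the full system this is the minimum Mahalanobis distance
    // between the two components' YCrCb pixel sets (range [0, 441]).
    float ColorDistance(const CCInfo&, const CCInfo&) { return 0.0f; }
    
    // dist(i,j) = (wx|xi-xj| + wy|yi-yj| + wh|hi-hj|)/s + wI|Ii-Ij| + wc*cdist(i,j)
    // with the weights listed above; s is the expected character height.
    float Dist(const CCInfo& a, const CCInfo& b, float s)
    {
        const float wx = 1.0f, wy = 0.7f, wh = 0.3f, wI = 0.01f, wc = 0.05f;
        return (wx * std::fabs(a.x - b.x) +
                wy * std::fabs(a.y - b.y) +
                wh * std::fabs(a.h - b.h)) / s +
               wI * std::fabs(a.I - b.I) +
               wc * ColorDistance(a, b);
    }
    
    // Components i and j are candidate neighbors only if Dist(i, j) <= T.
    const float kGroupingThresholdT = 2.2f;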
  • Recovery
  • For each of these groups, in one embodiment, a baseline is fitted through the centers using robust regression (giving low weight to outliers). In one embodiment, every rejected cc(k) is tested against each group and recovered if all of the following conditions are true:
      • 1. The cc height (hk) is close to the average height of the group (hg). Ta*hg<hk<Tb*hg.
      • 2. The vertical distance between the center of the cc and the baseline is less than Tv*hg.
      • 3. The cc's color is close to the nearest cc of the group (n). Cdist(k,n)<Tr
  • In one embodiment the values for these constants are
      • Ta=0.8
      • Tb=1.5
      • Tv=0.5
      • Tr=1.1
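  • A minimal sketch of this recovery test, representing the robust-regression baseline as y = a·x + b and taking the color distance to the nearest group member as an input:
    #include <cmath>
    
    // Center and height of a rejected connected component.
    struct RejectedCC { float x, y, h; };
    
    // Group summary from the grouping and baseline-fitting steps:
    // average height hg and a baseline y = a*x + b.
    struct TextLineGroup { float avgHeight, baselineA, baselineB; };
    
    // Conditions 1-3 above, with Ta, Tb, Tv, Tr as listed. 'cdistToNearest'
    // is the color distance to the nearest cc of the group, computed
    // elsewhere (see cdist above).
    bool Recover(const RejectedCC& cc, const TextLineGroup& g, float cdistToNearest)
    {
        const float Ta = 0.8f, Tb = 1.5f, Tv = 0.5f, Tr = 1.1f;
    
        // 1. Height close to the group average: Ta*hg < hk < Tb*hg.
        if (!(cc.h > Ta * g.avgHeight && cc.h < Tb * g.avgHeight)) return false;
    
        // 2. Vertical distance from the cc center to the baseline below Tv*hg.
        float baselineY = g.baselineA * cc.x + g.baselineB;
        if (std::fabs(cc.y - baselineY) >= Tv * g.avgHeight) return false;
    
        // 3. Color close to the nearest cc of the group: cdist(k, n) < Tr.
        return cdistToNearest < Tr;
    }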
  • In one embodiment, if any groups adjoin the edges of the detection region, the region is expanded. Then, the above described binarization process is applied to the newly included area(s), and any cc's found there are submitted to the same recovery process as above, if originally rejected.
  • Performance and error reporting and categorization:
  • In one embodiment, the system has the ability to store the results of the various intermediate stages into a database along with any useful annotations about those intermediate results. When the system is run in this mode on an entire dataset the database gets populated with a large amount of detailed information that can be used to calculate specific performance metrics as well as pinpoint and categorize sources of error.
  • Used in conjunction with detailed ground truth (that has all pixels in the dataset labeled as text/non-text and each text character labeled with its value, e.g. "a"), the database can be used to locate errors in virtually every step of the algorithm:
      • 1. The database may include an image corresponding to the output of the initial stage of binarization in which pixels have been classified as “non-text, light potential-text, or dark potential-text.” For each region of detected text, this image can be compared to the ground truth in order to gather a set of examples where individual pixels have been mistakenly classified as text or non-text.
      • 2. The database may also contain an image corresponding to the result of the text/non-text connected component classifier. This can be used to gather a group of examples where cc's are incorrectly classified as text or non-text.
      • 3. The database may further contain an image corresponding to the result of the character/word grouping and can be used to find examples where characters are incorrectly grouped together into words or where they are incorrectly not grouped into words.
      • 4. Finally the database may store the output of the OCR system which can be compared to the true characters in each word to determine in what cases the OCR system fails.
  • In this way, the database may be used in conjunction with analysis to further tweak the settings of the system.
  • Hardware Implementation in One Embodiment
  • FIG. 4 is a block diagram of one embodiment of the text detection system. In one embodiment, the entire text detection system 400 consists of a digital camera 410, a computing device 420 (including processor, program code, and data storage), optionally a display 430, and/or speakers 440 in a wearable configuration 405. The system 400, in one embodiment, is designed to be used by blind or visually impaired persons to help detect and identify text including street signs, bus numbers, shop signs, and building directories. When coupled with additional image enhancement or OCR systems, the system in the embodiment can be used to visually enhance and display these regions of text and optionally read them aloud to the user. The integration of OCR systems can also enhance the performance of the system by ruling out false positives that cannot be classified as characters and words. In one embodiment alternative outputs, such as Braille, or translated output, may also be used.
  • In one embodiment the digital camera 410 may be integrated into a multi-function wireless communications device 470 that either (a) contains sufficient computing power to perform the computation described above in a reasonably short time, or (b) is able to transfer the digital image—or subregions thereof detected as likely text regions—to a more powerful remote computing device 450 elsewhere via a wireless communications medium 460, wait for the remote computing device 450 to perform the computation described above, and receive the resulting text in a response from the remote computing device 450, all within a reasonably short time.
  • In one embodiment, the wireless communications medium 460 may be a cellular network, a wireless connection such as a WiFi connection, or any other connection which enables the communications device 470 to communicate with a remote device. The remote computing device 450 may be a personal computer running a program, or may be a server system accessed through the Internet.
  • Other embodiments of the system may serve as an image processing and text detection algorithm component within larger applications or computing devices. For example, applications may include (a) a sign reader to assist drivers by automatically reading street signs aloud, (b) a generalized text reader/translator for tourists or military personnel in foreign lands where they cannot understand the language—or even the alphabet—of signs and other text, or (c) a system, such as a web crawler, designed to detect and index the location and value of text in images on the world wide web or in any other set of images.
  • In one embodiment, the present system functions well to detect text in various languages, including non-Latin languages such as Cyrillic, Chinese, Arabic, etc. Furthermore by modifying the feature choice and training the system on new datasets other embodiments may serve to detect various families of graphics such as text in other non-Latin writing systems such as Cuneiform, Hieroglyphics, etc., as well as other classes of targets such as bar codes, logos, etc. that may be derived from or resemble an orderly marking system.
  • In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims (17)

1. A method of detecting text in real-world images comprising:
dividing an image representing a real-world scene into one or more regions;
feeding the one or more regions into a cascade of classifiers, the cascade comprising a plurality of stages; and
removing regions of the image classified as non-text regions from the cascade prior to completion of the cascade to avoid subsequent processing of the removed regions.
2. The method of claim 1, wherein the cascade comprises seven AdaBoost layers.
3. The method of claim 2, wherein each layer of the cascade has an equal or greater number of classifiers than each previous layer of the cascade.
4. The method of claim 2, wherein the classifiers in layers are secondarily ordered based on speed of computation.
5. The method of claim 1, further comprising: outputting an output data comprising identified text regions separated from non-text regions.
6. The method of claim 1, further comprising utilizing a binarization process including:
classifying individual pixels as one of: non-text, light potential-text, and dark potential-text; and
outputting binarization output data.
7. The method of claim 6, further comprising utilizing two neighborhood thresholds: TLight=μ+kσ and TDark=μ−kσ where and μ and σ are the mean and variance within the selected neighborhood respectively, and k is a constant.
8. The method of claim 6, further comprising:
grouping the pixels into connected components based on their classification and proximity to other pixels.
9. The method of claim 8, further comprising classifying the connected component as text or non-text based on one or more factors including:
a number of pixels in the connected component;
a number of pixels on the border of the connected component;
a height of the connected component;
a width of the connected component;
a ratio of the height of the connected component to the width of the connected component;
a ratio of the pixels in the connected component to the width of the connected component multiplied by the height of the connected component; and
a local size of text in the connected component.
10. The method of claim 9, further comprising grouping the connected components into lines of text based on a color distance between colors of two connected components.
11. The method of claim 1, further comprising removing regions classified as text regions from the cascade prior to completion of the cascade when a confidence level exceeds a threshold, wherein the confidence level indicates the likelihood of a region being a text region.
12. The method of claim 1, further comprising:
receiving training images;
feeding the training images into the cascade;
comparing classifier results to known training image results; and
adapting one or more of an order of stages in the cascade, an order of classifiers in the stages, one or more classifier confidence level thresholds, and the classifiers by selecting features for each classifier that reduce a number of false positive and false negative detections by a reduced number of tests.
13. A system for detecting text in real-world images comprising:
a processor including:
a dividing logic to divide an image into one or more regions;
a calculating logic to calculate a cascade of classifiers, the cascade comprising a plurality of stages, each stage including one or more weak classifiers, wherein the plurality of stages is organized to start out with classifiers that are most useful for ruling out non-text regions;
a feeding logic to feed the one or more regions into the cascade; and
a remove non-text image regions logic to remove image regions classified as the non-text regions from the cascade prior to completion of the cascade, to avoid subsequent processing of the removed regions.
14. The system of claim 13, further comprising:
an outputting logic to output output data comprising the identified text regions separated from the non-text regions.
15. The system of claim 13, further comprising binarization logic including:
logic to classify individual pixels as one of: non-text, light potential-text, and dark potential-text.
16. The system of claim 13, further comprising a training system including:
a feed logic to feed training images into the cascade;
a comparison logic to compare classifier results to known training image results; and
an adapting logic to adapt one or more of an order of stages in the cascade of classifiers, an order of classifiers in the stages, one or more classifier confidence level thresholds, and the classifiers by selecting features for each classifier that reduce the number of false positive and false negative detections by a reduced number of tests.
17. A training system to detect text in real-world images comprising a processor including:
a cascade comprising a plurality of stages, each stage including one or more weak classifiers, wherein the plurality of stages is organized to start out with classifiers that are most useful for ruling out non-text regions;
a feed logic to feed training images into the cascade;
a comparison logic to compare classifier results to known training image results; and
an adapting logic to adapt one or more of:
an order of stages in the cascade of classifiers,
an order of classifiers in the stages,
one or more classifier confidence level thresholds, and
the classifiers by selecting features for each classifier that reduce the number of false positive and false negative detections by a reduced number of tests.
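
For purposes of illustration only, the per-pixel binarization recited in claims 6 and 7 might be computed as in the following sketch. The neighborhood half-width, the constant k = 0.5, the function name classify_pixels, and the use of the neighborhood standard deviation for σ are assumptions made for this example; they are not limitations of the claims and do not describe any particular embodiment.

# Illustrative sketch of neighborhood-based pixel classification
# (assumed parameters; NumPy-based, not the disclosed implementation).
import numpy as np


def classify_pixels(gray: np.ndarray, half: int = 7, k: float = 0.5) -> np.ndarray:
    """Label each pixel 0 = non-text, 1 = light potential-text,
    2 = dark potential-text, using neighborhood thresholds
    TLight = mu + k*sigma and TDark = mu - k*sigma."""
    h, w = gray.shape
    labels = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - half), min(h, y + half + 1)
            x0, x1 = max(0, x - half), min(w, x + half + 1)
            patch = gray[y0:y1, x0:x1]
            mu = patch.mean()
            sigma = patch.std()                  # sigma taken here as the standard deviation
            if gray[y, x] > mu + k * sigma:      # brighter than TLight
                labels[y, x] = 1                 # light potential-text
            elif gray[y, x] < mu - k * sigma:    # darker than TDark
                labels[y, x] = 2                 # dark potential-text
            # otherwise the pixel remains 0 (non-text)
    return labels

The resulting labels could then be grouped into connected components and filtered using geometric factors of the kind listed in claim 9, such as pixel count, height, width, and aspect ratio.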
US12/906,997 2005-09-02 2010-10-18 System and Method for Detecting Text in Real-World Color Images Abandoned US20110091098A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/906,997 US20110091098A1 (en) 2005-09-02 2010-10-18 System and Method for Detecting Text in Real-World Color Images

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US71110005P 2005-09-02 2005-09-02
US11/516,147 US7817855B2 (en) 2005-09-02 2006-09-05 System and method for detecting text in real-world color images
US12/906,997 US20110091098A1 (en) 2005-09-02 2010-10-18 System and Method for Detecting Text in Real-World Color Images

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/516,147 Continuation US7817855B2 (en) 2005-09-02 2006-09-05 System and method for detecting text in real-world color images

Publications (1)

Publication Number Publication Date
US20110091098A1 true US20110091098A1 (en) 2011-04-21

Family

ID=37809664

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/516,147 Expired - Fee Related US7817855B2 (en) 2005-09-02 2006-09-05 System and method for detecting text in real-world color images
US12/906,997 Abandoned US20110091098A1 (en) 2005-09-02 2010-10-18 System and Method for Detecting Text in Real-World Color Images

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/516,147 Expired - Fee Related US7817855B2 (en) 2005-09-02 2006-09-05 System and method for detecting text in real-world color images

Country Status (3)

Country Link
US (2) US7817855B2 (en)
EP (1) EP1938249A2 (en)
WO (1) WO2007028166A2 (en)

Families Citing this family (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6735253B1 (en) * 1997-05-16 2004-05-11 The Trustees Of Columbia University In The City Of New York Methods and architecture for indexing and editing compressed video over the world wide web
US7143434B1 (en) 1998-11-06 2006-11-28 Seungyup Paek Video description system and method
US7339992B2 (en) 2001-12-06 2008-03-04 The Trustees Of Columbia University In The City Of New York System and method for extracting text captions from video and generating video summaries
JP4112968B2 (en) * 2002-12-26 2008-07-02 富士通株式会社 Video text processing device
WO2006096612A2 (en) 2005-03-04 2006-09-14 The Trustees Of Columbia University In The City Of New York System and method for motion estimation and mode decision for low-complexity h.264 decoder
WO2007028166A2 (en) * 2005-09-02 2007-03-08 Blindsight, Inc. A system and method for detecting text in real-world color images
CA2638774C (en) * 2006-03-31 2015-11-24 Medarex, Inc. Transgenic animals expressing chimeric antibodies for use in preparing human antibodies
US7720851B2 (en) * 2006-05-16 2010-05-18 Eastman Kodak Company Active context-based concept fusion
US8036415B2 (en) 2007-01-03 2011-10-11 International Business Machines Corporation Method and system for nano-encoding and decoding information related to printed texts and images on paper and other surfaces
US8929461B2 (en) * 2007-04-17 2015-01-06 Intel Corporation Method and apparatus for caption detection
US8326041B2 (en) * 2007-05-11 2012-12-04 John Wall Machine character recognition verification
US7840502B2 (en) 2007-06-13 2010-11-23 Microsoft Corporation Classification of images as advertisement images or non-advertisement images of web pages
US20090089677A1 (en) * 2007-10-02 2009-04-02 Chan Weng Chong Peekay Systems and methods for enhanced textual presentation in video content presentation on portable devices
CN101408874A (en) * 2007-10-09 2009-04-15 深圳富泰宏精密工业有限公司 Apparatus and method for translating image and character
WO2009126785A2 (en) 2008-04-10 2009-10-15 The Trustees Of Columbia University In The City Of New York Systems and methods for image archaeology
US8917935B2 (en) * 2008-05-19 2014-12-23 Microsoft Corporation Detecting text using stroke width based text detection
WO2009155281A1 (en) 2008-06-17 2009-12-23 The Trustees Of Columbia University In The City Of New York System and method for dynamically and interactively searching media data
US8620080B2 (en) * 2008-09-26 2013-12-31 Sharp Laboratories Of America, Inc. Methods and systems for locating text in a digital image
KR100957716B1 (en) 2008-10-07 2010-05-12 한국 한의학 연구원 Extraction Method of Skin-Colored Region using Variable Skin Color Model
US8671069B2 (en) 2008-12-22 2014-03-11 The Trustees Of Columbia University, In The City Of New York Rapid image annotation via brain state decoding and visual pattern mining
US8559672B2 (en) * 2009-06-01 2013-10-15 Hewlett-Packard Development Company, L.P. Determining detection certainty in a cascade classifier
US8867828B2 (en) 2011-03-04 2014-10-21 Qualcomm Incorporated Text region detection system and method
US8755595B1 (en) * 2011-07-19 2014-06-17 Google Inc. Automatic extraction of character ground truth data from images
US9064191B2 (en) 2012-01-26 2015-06-23 Qualcomm Incorporated Lower modifier detection and extraction from devanagari text images to improve OCR performance
US20130194448A1 (en) 2012-01-26 2013-08-01 Qualcomm Incorporated Rules for merging blocks of connected components in natural images
US8837830B2 (en) * 2012-06-12 2014-09-16 Xerox Corporation Finding text in natural scenes
US9076242B2 (en) 2012-07-19 2015-07-07 Qualcomm Incorporated Automatic correction of skew in natural images and video
US9262699B2 (en) 2012-07-19 2016-02-16 Qualcomm Incorporated Method of handling complex variants of words through prefix-tree based decoding for Devanagiri OCR
US9047540B2 (en) 2012-07-19 2015-06-02 Qualcomm Incorporated Trellis based word decoder with reverse pass
US9141874B2 (en) 2012-07-19 2015-09-22 Qualcomm Incorporated Feature extraction and use with a probability density function (PDF) divergence metric
US9014480B2 (en) 2012-07-19 2015-04-21 Qualcomm Incorporated Identifying a maximally stable extremal region (MSER) in an image by skipping comparison of pixels in the region
US9171224B2 (en) 2013-07-04 2015-10-27 Qualcomm Incorporated Method of improving contrast for text extraction and recognition applications
US9355311B2 (en) * 2014-09-23 2016-05-31 Konica Minolta Laboratory U.S.A., Inc. Removal of graphics from document images using heuristic text analysis and text recovery
US9892301B1 (en) 2015-03-05 2018-02-13 Digimarc Corporation Localization of machine-readable indicia in digital capture systems
US9471990B1 (en) * 2015-10-20 2016-10-18 Interra Systems, Inc. Systems and methods for detection of burnt-in text in a video
JP6808330B2 (en) * 2016-02-26 2021-01-06 キヤノン株式会社 Information processing equipment, information processing methods, and programs
US10255513B2 (en) * 2016-06-02 2019-04-09 Skyworks Solutions, Inc. Systems and methods for recognition of unreadable characters on printed circuit boards
US10943116B2 (en) 2019-02-22 2021-03-09 International Business Machines Corporation Translation to braille
US20210034907A1 (en) * 2019-07-29 2021-02-04 Walmart Apollo, Llc System and method for textual analysis of images
US20240073337A1 (en) * 2021-01-13 2024-02-29 Hewlett-Packard Development Company, L.P. Output resolution selections

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5048096A (en) * 1989-12-01 1991-09-10 Eastman Kodak Company Bi-tonal image non-text matter removal with run length and connected component analysis
US5754684A (en) * 1994-06-30 1998-05-19 Samsung Electronics Co., Ltd. Image area discrimination apparatus
US5901243A (en) * 1996-09-30 1999-05-04 Hewlett-Packard Company Dynamic exposure control in single-scan digital input devices
US6738512B1 (en) * 2000-06-19 2004-05-18 Microsoft Corporation Using shape suppression to identify areas of images that include particular shapes
US20050114313A1 (en) * 2003-11-26 2005-05-26 Campbell Christopher S. System and method for retrieving documents or sub-documents based on examples
US20050125402A1 (en) * 2003-12-04 2005-06-09 Microsoft Corporation Processing an electronic document for information extraction
US20050144149A1 (en) * 2001-12-08 2005-06-30 Microsoft Corporation Method for boosting the performance of machine-learning classifiers
US7817855B2 (en) * 2005-09-02 2010-10-19 The Blindsight Corporation System and method for detecting text in real-world color images

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8843424B2 (en) * 2009-04-01 2014-09-23 Sony Corporation Device and method for multiclass object detection
US20120089545A1 (en) * 2009-04-01 2012-04-12 Sony Corporation Device and method for multiclass object detection
US20120212593A1 (en) * 2011-02-17 2012-08-23 Orcam Technologies Ltd. User wearable visual assistance system
US9421866B2 (en) 2011-09-23 2016-08-23 Visteon Global Technologies, Inc. Vehicle system and method for providing information regarding an external item a driver is focusing on
US9389431B2 (en) 2011-11-04 2016-07-12 Massachusetts Eye & Ear Infirmary Contextual image stabilization
US10571715B2 (en) 2011-11-04 2020-02-25 Massachusetts Eye And Ear Infirmary Adaptive visual assistive device
US11335210B2 (en) 2013-03-10 2022-05-17 Orcam Technologies Ltd. Apparatus and method for analyzing images
US10636322B2 (en) 2013-03-10 2020-04-28 Orcam Technologies Ltd. Apparatus and method for analyzing images
US10339406B2 (en) * 2013-03-15 2019-07-02 Orcam Technologies Ltd. Apparatus and method for using background change to determine context
US20140267651A1 (en) * 2013-03-15 2014-09-18 Orcam Technologies Ltd. Apparatus and method for using background change to determine context
US10592763B2 (en) 2013-03-15 2020-03-17 Orcam Technologies Ltd. Apparatus and method for using background change to determine context
AU2014251221B2 (en) * 2013-04-12 2019-09-12 Facebook, Inc. Identifying content in electronic images
US10296933B2 (en) * 2013-04-12 2019-05-21 Facebook, Inc. Identifying content in electronic images
US20140306986A1 (en) * 2013-04-12 2014-10-16 Facebook, Inc. Identifying Content in Electronic Images
EP3203417A3 (en) * 2016-02-03 2017-08-16 StradVision Korea, Inc. Method for detecting texts included in an image and apparatus using the same
CN107038409A (en) * 2016-02-03 2017-08-11 斯特拉德视觉公司 Method, device and the computer readable recording medium storing program for performing of contained text in detection image
US9524430B1 (en) * 2016-02-03 2016-12-20 Stradvision Korea, Inc. Method for detecting texts included in an image and apparatus using the same

Also Published As

Publication number Publication date
EP1938249A2 (en) 2008-07-02
US20070110322A1 (en) 2007-05-17
WO2007028166A3 (en) 2007-09-27
US7817855B2 (en) 2010-10-19
WO2007028166A2 (en) 2007-03-08

Similar Documents

Publication Publication Date Title
US7817855B2 (en) System and method for detecting text in real-world color images
Lu et al. Scene text extraction based on edges and support vector regression
Epshtein et al. Detecting text in natural scenes with stroke width transform
Park et al. Automatic detection and recognition of Korean text in outdoor signboard images
US8744196B2 (en) Automatic recognition of images
Minetto et al. SnooperText: A text detection system for automatic indexing of urban scenes
CN110276342B (en) License plate identification method and system
US8620078B1 (en) Determining a class associated with an image
CN111428723B (en) Character recognition method and device, electronic equipment and storage medium
US8290268B2 (en) Segmenting printed media pages into articles
CN113158808B (en) Method, medium and equipment for Chinese ancient book character recognition, paragraph grouping and layout reconstruction
CN104182722A (en) Text detection method and device and text information extraction method and system
Faustina Joan et al. A survey on text information extraction from born-digital and scene text images
Matas et al. A new class of learnable detectors for categorisation
Qomariyah et al. The segmentation of printed Arabic characters based on interest point
Khan et al. Text detection and recognition on traffic panel in roadside imagery
KR20140112869A (en) Apparatus and method for recognizing character
JP2017228297A (en) Text detection method and apparatus
Chen et al. Adaboost learning for detecting and reading text in city scenes
JP3476595B2 (en) Image area division method and image binarization method
Mishra Understanding Text in Scene Images
Cherian et al. Automatic localization and recognition of perspectively distorted text in natural scene images
Nguyen et al. Digital transformation for shipping container terminals using automated container code recognition
Tamirat Customers Identity Card Data Detection and Recognition Using Image Processing
Usha et al. Traffic Signboard Recognition and Text Translation System using Word Spotting and Machine Learning

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION