WO2012089261A1 - Method of automatically extracting lane markings from road imagery - Google Patents

Method of automatically extracting lane markings from road imagery

Info

Publication number
WO2012089261A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
lane
road
markings
imagery
Prior art date
Application number
PCT/EP2010/070895
Other languages
French (fr)
Inventor
Tim Bekaert
Original Assignee
Tomtom Belgium Nv
Priority date
2010-12-29
Filing date
2010-12-29
Publication date
2012-07-05
Application filed by Tomtom Belgium Nv filed Critical Tomtom Belgium Nv
Priority to PCT/EP2010/070895 priority Critical patent/WO2012089261A1/en
Publication of WO2012089261A1 publication Critical patent/WO2012089261A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 - Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/24 - Aligning, centring, orientation detection or correction of the image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

A method for automatically identifying and extracting lane markings from the center of a lane on a road surface, without having to manually view the entire unmarked road surface extending between marked sections of the road surface, is provided. The method includes a sequence of image processing steps: providing a stretch of road imagery (14); performing a thresholding procedure on the road imagery (14) to identify bright markings; filtering the thresholded road imagery to suppress bright markings on the sides of the lane and to maintain bright markings in the center of the lane; classifying the bright markings in the center of the lane as a particular type of "lane marking"; and geo-coding the "lane marking".

Description

METHOD OF AUTOMATICALLY EXTRACTING LANE MARKINGS FROM
ROAD IMAGERY
BACKGROUND OF THE INVENTION
Field of the Invention
[0001] This invention relates generally to digital maps of the type for displaying road or pathway information, and more particularly to a method for automatically extracting lane markings from road imagery for incorporation into a digital map.
Related Art
[0002] Personal navigation devices 10 like that shown, for example, in Figure 1 utilize digital maps combined with accurate positioning data from GPS or other data streams. These devices have been developed for many applications, such as navigation assistance for automobile drivers. The effectiveness of these devices is inherently dependent upon the accuracy of the information provided to them in the form of digital maps, stored in their memory or otherwise accessed through a suitable database connection such as wireless signal, cable, telephone line, etc.
[0003] Typically, the navigation device 10 (Figure 1), such as several models manufactured by TomTom NV (www.tomtom.com), includes a display screen 12 that portrays a portion of a stored digital map as a network of roads 14. A traveler having access to the GPS-enabled navigation device 10 may then be located on the digital map close to, or relative to, a particular road 14 or segment thereof.
[0004] Digital maps are obtained by various methods, including high resolution imagery from space, as well as orthorectified images taken from land-based mobile vehicles. In the latter case, the images obtained from land-based mapping systems must be converted to an orthorectified image which is scale-corrected and depicts ground features as seen from above in their exact ground positions. An orthorectified image is a kind of aerial photograph that has been geometrically corrected such that the scale of the photograph is uniform, meaning that the photograph can be considered equivalent to a map. An orthorectified image can be used to measure true distances, because it is an accurate representation of the surface of interest, e.g., the Earth's surface. Orthorectified images are adjusted for topographic relief, lens distortion and camera tilt.
[0005] Mobile mapping vehicles, typically terrestrial based vehicles such as a van or car, but possibly also aerial vehicles, are used to collect mobile data for enhancement of digital map databases. The mobile mapping vehicles are typically fitted with a number of cameras, possibly some of them stereographic and all of them accurately geo-positioned as a result of having precision GPS and other position and orientation determination equipment (e.g., inertial navigation system - INS) on board. While driving the road network or an established course, the geo-coded image sequences are captured in successive frames or images. Geo-coded means that a position, computed by the GPS receiver and possibly INS, and possibly additional heading and/or orientation data associated with the image, is attached to the metadata of each image captured by the camera. The mobile mapping vehicles record more than one image sequence of the surface of interest, e.g., a road surface, and for each image of an image sequence, the geo-position in a geographic coordinate reference system is accurately determined together with the position and orientation data of the image sequence with respect to the geo-position. Image sequences with corresponding geo-position information are referred to as geo-coded image sequences. Other data may also be collected by other sensors, simultaneously and similarly geo-coded.
[0006] Prior techniques for obtaining orthorectified tiles for use in assembling a bird's eye mosaic (BEM) of a large surface of interest, such as the Earth, are known. An excellent example of this technique is described in the Applicant's International Publication No. WO/2008/044927, published April 17, 2008. In jurisdictions where incorporation by reference is recognized, the entire disclosure of the said International Publication is hereby incorporated by reference and relied upon.
[0007] According to known techniques, orthorectified images are assembled together to create a mosaic without considering the quality of the image content contained therein. Rather, such images are typically tiled in sequence, one after the other, much like shingles are overlapped one upon another in courses on a roof.
[0008] It is known to utilize the aforementioned imagery, regardless of source, to identify markings on the road surfaces, i.e. lane markings found in the center of lanes, such as, for example, "MERGE", "LEFT", "EXIT", "TOLL", "TURN ONLY", "STOP HERE", and "HOV", for inclusion in digital maps. However, present methods used to identify the lane markings are tedious, very time consuming, and thus, highly inefficient and costly. Presently, the markings are identified by viewing the entire imaged road surface until an indication of a lane marking is spotted. Accordingly, many miles of unmarked road surface must be viewed before finally arriving at a relatively small marked section of the imaged road surface. As such, there remains a need for a method that allows markings on a road surface to be identified more readily, without having to view the entire unmarked road surface.
SUMMARY OF THE INVENTION
[0009] This invention relates to methods and techniques for automatically identifying and extracting lane markings in the center of the lane on a road surface from road imagery without having to manually view the entire unmarked road surface extending between marked sections of the road surface. The method includes a sequence of image processing steps. One such sequence includes providing a stretch of road imagery; performing a thresholding procedure on the road imagery to identify bright markings; filtering the thresholded imagery to suppress bright markings on the sides of the lane and to maintain bright markings in the center of the lane; classifying the bright markings in the center of the lane as a particular type of "lane marking"; and geo-coding the "lane marking".
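By way of a non-limiting illustration, this sequence can be pictured as the compressed sketch below, written in Python with OpenCV (cv2) and NumPy. The kernel sizes, the use of a local mean threshold in place of the per-strip dynamic threshold detailed later, and the morphological opening used as the centre-of-lane filter are assumptions of the sketch, not features taken from this disclosure.

```python
# Compressed, illustrative sketch of the claimed sequence (parameters assumed).
import cv2
import numpy as np


def extract_candidate_lane_markings(image_path):
    road = cv2.imread(image_path)                                   # stretch of road imagery
    dilated = cv2.dilate(road, np.ones((3, 3), np.uint8))           # enhance bright objects
    gray = cv2.cvtColor(dilated, cv2.COLOR_BGR2GRAY)                # drop colour bands
    bw = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY, 31, -10)          # thresholding procedure
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 1))     # wide, short element
    centre = cv2.morphologyEx(bw, cv2.MORPH_OPEN, kernel)           # suppress lengthwise stripes
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(centre)
    # Each remaining blob is a candidate "lane marking" to be classified and geo-coded.
    return [tuple(int(v) for v in stats[i, :4]) for i in range(1, n_labels)]
```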
[0010] Accordingly, principles of this invention can be used to more effectively and more efficiently locate lane markings in the center of a lane from an image of a road surface for incorporation into a digital map.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] These and other aspects, features and advantages will be readily apparent to those skilled in the art in view of the following detailed description of presently preferred embodiments and best mode, appended claims, and accompanying drawings, in which:
[0012] Figure 1 is an exemplary view of a portable navigation device according to one embodiment of this invention including a display screen for presenting map data information;
[0013] Figure 2 is an example of an imaged road segment to be analyzed for lane markings;
[0014] Figure 3 is a view of the road segment of Figure 2 wherein the image has been processed in a thresholding procedure;
[0015] Figure 4 is a view of the road segment of Figure 3 wherein the image has been further processed in the thresholding procedure;
[0016] Figure 5 is a view of the thresholded image of Figure 4 after the image has been filtered;
[0017] Figure 6 is a view of the filtered image of Figure 5 after the image has been further converted to a black-and-white image using a threshold value; and
[0018] Figure 7 is a graph showing a standard deviation of the pixel intensities of a white area of Figure 6 with the pixels inside the bounding oval being classified as desired, such as "lane marking" and the pixels outside the bounding oval being classified as "no lane marking", for example.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
[0019] Referring in more detail to the drawings, this invention pertains generally to digital maps as used by navigation systems, as well as other map applications which may include those viewable through internet enabled computers, PDAs, cellular phones, and the like.
[0020] Figure 2 depicts a sample portion of an imaged road segment 14, such as may be imaged via a land-based vehicle, e.g., a mobile mapping van, or via an aerial instrument, e.g., a satellite (not shown). Of course, there are many different possible image formats that can be utilized to obtain the imaged road segment 14, such as, for example, an orthorectified image or a linearly referenced image (LRI), which is an image constructed from geo-referenced (identified by latitude/longitude) mobile mapping van video frames stitched together. Though, as mentioned, other formats can be used, e.g., aerial photographs or satellite images. As the image is presently shown, it is not readily apparent where along the road segment 14 lane markings, such as those mentioned above, may be present. Thus, it is desirable to further process the image of the road segment 14 to determine if lane markings are present.
[0021] In Figure 3, the image of the road segment 14 of Figure 2 is processed using a thresholding procedure in accordance with one step of the invention sequence to render any lane marking(s) on the road segment 14 more readily identifiable. In this step, the image of Figure 2 is preferably first enhanced in a dilation operation to reduce the influence of undesired "noise" and JPG artifacts in the image, such that any bright objects (white, for example, of which typical lane markings consist) are enhanced. Further, the image can be processed using a binary threshold procedure, by way of example, wherein the image from Figure 2 is converted from a color image to a grayscale image 16, as the color bands do not contain information needed to detect the lane markings. Otherwise, other thresholding procedures could be used, such as those including more than two classifications or values, or those using a predetermined set of probabilities, by way of example and without limitation.
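By way of example and without limitation, the dilation and grayscale conversion of this step might be sketched as follows in Python with OpenCV; the 3x3 structuring element is an assumed size, since the disclosure does not specify kernel dimensions.

```python
import cv2
import numpy as np


def enhance_and_grayscale(bgr_image):
    """Dilate to strengthen bright (e.g. white) objects against noise and JPG
    artifacts, then drop the colour bands, which carry no information needed
    for detecting the lane markings."""
    kernel = np.ones((3, 3), np.uint8)           # assumed structuring-element size
    dilated = cv2.dilate(bgr_image, kernel, iterations=1)
    return cv2.cvtColor(dilated, cv2.COLOR_BGR2GRAY)
```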
[0022] In Figure 4, the grayscale image 16 from Figure 3 is further processed in accordance with the thresholding procedure. In this step, the grayscale image 16 is transformed further using a binary thresholding procedure to a black-and-white image 18 using a dynamic thresholding process. The dynamic thresholding accounts for the differing brightness and contrast of the grayscale image 16, such as may result from changing meteorological conditions, shadows, tunnels, different types of road pavement, and varying camera settings, for example. Thus, with the brightness and contrast differing throughout the image 16, the threshold value is changed dynamically for each horizontal strip during the transformation process, such that each horizontal strip of the image 16 is thresholded based on the distribution of pixel intensities. The resulting black-and-white image 18 is thereby accurately produced to maintain the bright areas of the grayscale image 16 as white portions of the black-and-white image 18. Of course, as mentioned, other thresholding procedures could be used in lieu of a binary procedure.
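One possible realisation of the per-strip dynamic threshold is sketched below; the strip height and the mean-plus-k-standard-deviations rule are assumptions, the disclosure stating only that each horizontal strip is thresholded based on its own distribution of pixel intensities.

```python
import numpy as np


def dynamic_threshold(gray, strip_height=32, k=2.0):
    """Binary-threshold each horizontal strip using a value derived from that
    strip's own pixel-intensity distribution, so that varying brightness and
    contrast (shadows, tunnels, pavement type, camera settings) are absorbed."""
    bw = np.zeros_like(gray)
    for top in range(0, gray.shape[0], strip_height):
        strip = gray[top:top + strip_height].astype(np.float32)
        threshold = strip.mean() + k * strip.std()   # assumed per-strip rule
        bw[top:top + strip_height] = np.where(strip > threshold, 255, 0)
    return bw
```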
[0023] In Figure 5, the black-and-white image 18 from Figure 4 is further processed in accordance with yet another step of the invention. In this step, the thresholded road imagery is filtered, wherein in the exemplary embodiment utilizing the binary thresholding procedure, the black-and-white image 18 is transformed via an image filtering process to produce a filtered image 20. The filtering process is used to exploit bright lane markings located in the center of each lane, thereby making their presence more readily apparent, while allowing other bright markings along the sides of each lane in the black-and-white image that do not constitute lane markings to be suppressed. This is done, by way of example and without limitation, by enhancing geometries that are expected to represent centrally located lane markings, i.e., horizontally extended, compact markings, while allowing other geometries which are not expected to represent lane markings, i.e., those vertically extended along the axis of the road, e.g. lane dividers, to be suppressed. Otherwise, the filtering could be performed using non-geometric properties, such as colors, for example. Accordingly, it should be understood that the filtering can be performed based on the characteristics desired to be captured, while allowing the other, non-desired characteristics to be suppressed. Thus, as the filtering process moves along the road image 18, the unmarked portions of the road image 14 and the markings which do not represent "lane markings" are suppressed, i.e. made less visible, as not being of further interest, while the aspects in the image 18 which represent highly probable lane markings are enhanced, i.e., made more visible, in the filtered image 20. It should be recognized that the filtering process can be manipulated, as desired, to enhance various types and configurations of details within the road images which most likely constitute "lane markings", while allowing other types of marking configurations to be suppressed. This is done by establishing the predetermined aspects to be highlighted while allowing others to be suppressed.
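As one non-limiting sketch of such a geometric filter, the black-and-white image could be correlated with a wide, short box kernel: horizontally extended, compact blobs produce a strong grayscale response, whereas narrow structures running lengthwise along the road, such as lane dividers, produce a weak response and are effectively suppressed. The kernel dimensions are assumptions of the sketch.

```python
import cv2
import numpy as np


def filter_centre_lane_markings(bw_image, kernel_width_px=15, kernel_height_px=5):
    """Correlate with a normalised wide, short box kernel so that candidate
    centre-of-lane markings yield a high grayscale response while lengthwise
    stripes (e.g. lane dividers) yield a low one."""
    kernel = np.ones((kernel_height_px, kernel_width_px), np.float32)
    kernel /= kernel.sum()                        # normalised box filter
    return cv2.filter2D(bw_image, -1, kernel)     # grayscale "filtered image"
```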
[0024] In Figure 6, the "filtered" image 20 from Figure 5 is further processed in accordance with yet another step of the invention. In this step, the filtered image 20 is transformed via a secondary thresholding process to return the image to a thresholded black-and-white image 22, by way of example. After this step, all remaining bright or white objects are labeled as "probable lane markings", shown here, by way of example and without limitation, as a white merge arrow. Of course, some of the bright or white objects will constitute something other than a lane marking, and thus, a further process is performed to evaluate the probability of each bright or white object constituting a lane marking or not, e.g., parts of passing vehicles or bright areas in the road pavement.
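A sketch of this secondary thresholding and labelling step is given below, assuming the grayscale filter response of the previous sketch as input; the threshold value of 200 is illustrative only.

```python
import cv2


def label_probable_markings(filter_response, threshold_value=200):
    """Threshold the filter response back to black-and-white, then label every
    remaining bright object as a 'probable lane marking' for later classification."""
    _, bw = cv2.threshold(filter_response, threshold_value, 255, cv2.THRESH_BINARY)
    n_labels, _, stats, centroids = cv2.connectedComponentsWithStats(bw)
    probable = []
    for i in range(1, n_labels):                  # label 0 is the background
        left, top, width, height, area = (int(v) for v in stats[i])
        probable.append({"bbox": (left, top, width, height), "area": area,
                         "centroid": tuple(float(c) for c in centroids[i])})
    return bw, probable
```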
[0025] In Figure 7, the image 22 from Figure 6, which identifies those aspects most probable as being lane markings, is further processed using statistical analysis to classify the bright or white markings from Figure 6, such as via a binary classification, e.g. as either "lane marking" or "no lane marking", for example. The statistical analysis identifies those features which are likely to indicate lane markings in the bright or white area of the image 20. In the exemplary classification illustrated, two features are used, by way of example and without limitation. The features used in this example include the standard deviation of the pixel intensities "inside" and "outside" a predetermined boundary 24. The markings identified within the specified boundary 24 are classified as "lane marking", while the markings identified outside the boundary 24 are classified as "no lane marking". It should be recognized that the classifying could be performed using other than binary classifications, and thus, the classifications could include a plurality of classes, such as different types of arrows, shapes, words, and the like, for example. Further, it should be recognized that the classification can be broadened to include 3, 4 or more classifications, and thus, the classifications need not be limited to just those "inside" and "outside" the boundary 24. For example, the classification could further use features such as the average pixel intensity inside the boundary 24 or the width/height ratio of the boundary 24.
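The two-feature classification might be sketched as below. The candidate's bounding box stands in for the boundary 24, a small surrounding neighbourhood stands in for the "outside" region, and a simple rectangular decision region stands in for the bounding oval of Figure 7; all of these, and the numeric thresholds, are assumptions of the sketch and would in practice be tuned or learned from labelled examples.

```python
import numpy as np


def classify_candidate(gray, bbox, max_inside_std=45.0, max_outside_std=30.0, margin=10):
    """Toy two-feature classifier in the spirit of Figure 7: compute the standard
    deviation of pixel intensities inside the candidate's boundary and in a
    surrounding neighbourhood, and accept the candidate only if the feature pair
    falls inside an assumed decision region."""
    left, top, width, height = bbox
    inside = gray[top:top + height, left:left + width]
    # Neighbourhood around the boundary, excluding the inside region itself.
    t0, t1 = max(top - margin, 0), min(top + height + margin, gray.shape[0])
    l0, l1 = max(left - margin, 0), min(left + width + margin, gray.shape[1])
    mask = np.ones((t1 - t0, l1 - l0), dtype=bool)
    mask[top - t0:top - t0 + height, left - l0:left - l0 + width] = False
    outside = gray[t0:t1, l0:l1][mask]
    std_inside, std_outside = float(inside.std()), float(outside.std())
    if std_inside < max_inside_std and std_outside < max_outside_std:
        return "lane marking", (std_inside, std_outside)
    return "no lane marking", (std_inside, std_outside)
```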
[0026] Then, upon automatically being identified as a "lane marking", the specific location within the road image 14 is able to be inspected by an operator to further identify and classify the specific type of lane marking that is present. Of course, by the "lane marking" being automatically identified by the process steps in accordance with the invention, those areas of the road image 14 which do not include lane markings are able to be readily and summarily dismissed by the operator as not being of interest. This allows the operator to efficiently and readily view only the areas of the road imagery 14 that have been deemed to include lane markings. Thus, the operator can efficiently and cost effectively classify the automatically identified "lane markings" without having to scan through the entire stretch of road image 14.
[0027] Lastly, once the position and type of the identified "lane markings" are determined in the road image 14 via pixel coordinates, the "lane markings" are geo-coded using the original image 14 geo-coding information, thereby providing a precise latitude/longitude associated with the "lane markings". Thus, the lane markings can be readily attributed to the precise location within the digital map viewed via the navigation device 10.
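As a final non-limiting sketch, and assuming that the orthorectified or linearly referenced road image carries a GDAL-style six-parameter affine geotransform as its geo-coding information (the disclosure itself only states that the original image's geo-coding is used), the pixel-to-latitude/longitude step could look as follows; the example geotransform values are hypothetical.

```python
def pixel_to_lat_lon(col, row, geotransform):
    """Map a pixel coordinate (col, row) of the road image to geographic
    coordinates via an assumed affine geotransform of the form
    (origin_x, pixel_width, row_rotation, origin_y, col_rotation, pixel_height)."""
    origin_x, pixel_w, row_rot, origin_y, col_rot, pixel_h = geotransform
    lon = origin_x + col * pixel_w + row * row_rot
    lat = origin_y + col * col_rot + row * pixel_h
    return lat, lon


# Hypothetical example: attach latitude/longitude to a detected merge arrow
# whose centroid lies at pixel (column=412, row=1380).
lat, lon = pixel_to_lat_lon(412, 1380, (4.3500, 1e-6, 0.0, 50.8500, 0.0, -1e-6))
```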
[0028] The foregoing invention has been described in accordance with the relevant legal standards, thus the description is exemplary rather than limiting in nature. Variations and modifications to the disclosed embodiment may become apparent to those skilled in the art and fall within the scope of the invention. Accordingly, it should be recognized that other sequences of image processing steps than those exemplified above could be used to achieve the invention as claimed.

Claims

1. A method for automatically extracting lane markings in the center of a road lane from road imagery, said method comprising the steps of:
providing a stretch of road imagery (14);
performing a thresholding procedure on the road imagery to identify bright markings;
filtering the thresholded road imagery to suppress bright markings on the sides of the lane and to maintain bright markings in the center of the lane;
classifying the bright markings in the center of the lane as a particular type of "lane marking"; and
geo-coding the classified lane markings.
2. The method according to claim 1 wherein the thresholding procedure includes converting the road imagery to a black-and-white image (18).
3. The method according to claim 2 wherein the thresholding procedure includes converting the road imagery to a grayscale image (16) prior to converting the road imagery to the black-and-white image (18), and then transforming the grayscale image (16) to the black- and-white image (18).
4. The method according to claim 3 wherein the transforming of the grayscale image (16) to the black-and-white image (18) further includes dynamically changing a threshold value across each horizontal strip within the image, with the threshold value being determined by the pixel intensities of the horizontal strip.
5. The method according to any one of the preceding claims wherein the filtering further includes providing a filter for detecting a specific road image.
6. The method according to any one of the preceding claims further including producing a grayscale image (20) during the filtering step and transforming the filtered grayscale image (20) to a black-and-white image (22).
7. The method according to claim 6 further including applying a threshold value to the filtered grayscale image (20) to retain only those areas having a high response to the filter in the form of highlighted white portions in the black-and-white image (22) transformed from the filtered image (20).
8. The method according to claim 7 further including classifying the highlighted bright portions as either "lane marking" or "no lane marking".
9. The method according to claim 8 further including assigning a standard deviation to the pixel intensities of the highlighted bright portions to determine the classification of "lane marking" or "no lane marking".
10. The method according to any one of the preceding claims including providing the road imagery (14) as an orthorectified image.
11. The method according to claim 1 wherein the thresholding is performed using probabilities.
12. The method according to claim 1 wherein the filtering is performed by filtering colors.
PCT/EP2010/070895 2010-12-29 2010-12-29 Method of automatically extracting lane markings from road imagery WO2012089261A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2010/070895 WO2012089261A1 (en) 2010-12-29 2010-12-29 Method of automatically extracting lane markings from road imagery

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2010/070895 WO2012089261A1 (en) 2010-12-29 2010-12-29 Method of automatically extracting lane markings from road imagery

Publications (1)

Publication Number Publication Date
WO2012089261A1 (en) 2012-07-05

Family

ID=44624987

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2010/070895 WO2012089261A1 (en) 2010-12-29 2010-12-29 Method of automatically extracting lane markings from road imagery

Country Status (1)

Country Link
WO (1) WO2012089261A1 (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090033540A1 (en) * 1997-10-22 2009-02-05 Intelligent Technologies International, Inc. Accident Avoidance Systems and Methods
EP1667085A1 (en) * 2003-09-24 2006-06-07 Aisin Seiki Kabushiki Kaisha Device for detecting road traveling lane
US20060044389A1 (en) * 2004-08-27 2006-03-02 Chai Sek M Interface method and apparatus for video imaging device
WO2007145566A1 (en) * 2006-06-11 2007-12-21 Volvo Technology Corporation Method and apparatus for determining and analyzing a location of visual interest
WO2008044927A1 (en) 2006-10-09 2008-04-17 Tele Atlas B.V. Method and apparatus for generating an orthorectified tile
EP1975558A2 (en) * 2007-03-30 2008-10-01 Aisin AW Co., Ltd. Image recognition apparatus and image recognition method
WO2008130219A1 (en) * 2007-04-19 2008-10-30 Tele Atlas B.V. Method of and apparatus for producing road information

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
GROTE A ET AL: "Segmentation Based on Normalized Cuts for the Detection of Suburban Roads in Aerial Imagery", URBAN REMOTE SENSING JOINT EVENT, 2007, IEEE, PI, 1 April 2007 (2007-04-01), pages 1 - 5, XP031177639, ISBN: 978-1-4244-0711-8, DOI: 10.1109/URS.2007.371817 *
JUBERTS M ET AL: "Vision-based Vehicle Control For AVCS", INTELLIGENT VEHICLES '93 SYMPOSIUM TOKYO, JAPAN 14-16 JULY 1993, NEW YORK, NY, USA,IEEE, US, 14 July 1993 (1993-07-14), pages 195 - 200, XP010117283, ISBN: 978-0-7803-1370-5 *
LOPEZ A ET AL: "Detection of lane markings based on ridgeness and RANSAC", INTELLIGENT TRANSPORTATION SYSTEMS, 2005. PROCEEDINGS. 2005 IEEE VIENNA, AUSTRIA 13-16 SEPT. 2005, PISCATAWAY, NJ, USA,IEEE, 13 September 2005 (2005-09-13), pages 733 - 738, XP010843114, ISBN: 978-0-7803-9215-1, DOI: 10.1109/ITSC.2005.1520139 *
SCHNEIDERMAN H ET AL: "Visual processing for autonomous driving", APPLICATIONS OF COMPUTER VISION, PROCEEDINGS, 1992., IEEE WORKSHOP ON PALM SPRINGS, CA, USA 30 NOV.-2 DEC. 1992, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 30 November 1992 (1992-11-30), pages 164 - 171, XP010029151, ISBN: 978-0-8186-2840-5, DOI: 10.1109/ACV.1992.240315 *
WIJESOMA W S ET AL: "A laser and a camera for mobile robot navigation", CONTROL, AUTOMATION, ROBOTICS AND VISION, 2002. ICARCV 2002. 7TH INTERNATIONAL CONFERENCE ON DEC. 2-5, 2002, PISCATAWAY, NJ, USA, IEEE, vol. 2, 2 December 2002 (2002-12-02), pages 740 - 745, XP010663135, ISBN: 978-981-04-8364-7, DOI: 10.1109/ICARCV.2002.1238514 *

Similar Documents

Publication Publication Date Title
US11604076B2 (en) Vision augmented navigation
US8325979B2 (en) Method and apparatus for detecting objects from terrestrial based mobile mapping data
US8280107B2 (en) Method and apparatus for identification and position determination of planar objects in images
CN106919915B (en) Map road marking and road quality acquisition device and method based on ADAS system
KR102362714B1 (en) Methods and systems for generating route data
US20100086174A1 (en) Method of and apparatus for producing road information
CN111179152B (en) Road identification recognition method and device, medium and terminal
CN109949365B (en) Vehicle designated position parking method and system based on road surface feature points
CN101842808A (en) Method of and apparatus for producing lane information
WO2011023244A1 (en) Method and system of processing data gathered using a range sensor
KR101735557B1 (en) System and Method for Collecting Traffic Information Using Real time Object Detection
JP2006119591A (en) Map information generation method, map information generation program and map information collection apparatus
KR102267517B1 (en) Road fog detecting appartus and method using thereof
KR100981588B1 (en) A system for generating geographical information of city facilities based on vector transformation which uses magnitude and direction information of feature point
US11676397B2 (en) System and method for detecting an object collision
WO2012089261A1 (en) Method of automatically extracting lane markings from road imagery
KR100959246B1 (en) A method and a system for generating geographical information of city facilities using stereo images and gps coordination
CN114821288A (en) Image identification method and unmanned aerial vehicle system
CN117152166A (en) Drainage ditch ponding area example segmentation method and device
TW201232422A (en) Method of automatically extracting lane markings from road imagery

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10800955

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10800955

Country of ref document: EP

Kind code of ref document: A1