US9070023B2 - System and method of alerting a driver that visual perception of pedestrian may be difficult - Google Patents

System and method of alerting a driver that visual perception of pedestrian may be difficult

Info

Publication number
US9070023B2
US9070023B2
Authority
US
United States
Prior art keywords
pedestrian
set forth
clutter
score
local
Prior art date
Legal status
Active, expires
Application number
US14/034,103
Other versions
US20150086077A1 (en)
Inventor
Eliza Y. Du
Kai Yang
Pingge Jiang
Rini Sherony
Hiroyuki Takahashi
Current Assignee
Toyota Motor Corp
Indiana University Research and Technology Corp
Original Assignee
Toyota Motor Corp
Indiana University Research and Technology Corp
Toyota Motor Engineering and Manufacturing North America Inc
Priority date
Filing date
Publication date
Priority to US14/034,103
Application filed by Toyota Motor Corp, Indiana University Research and Technology Corp, Toyota Motor Engineering and Manufacturing North America Inc
Assigned to TOYOTA JIDOSHA KABUSHIKI KAISHA. Assignors: TAKAHASHI, HIROYUKI
Assigned to TOYOTA MOTOR ENGINEERING & MANUFACTURING NORTH AMERICA, INC. Assignors: SHERONY, RINI
Assigned to INDIANA UNIVERSITY RESEARCH AND TECHNOLOGY CORPORATION. Assignors: DU, ELIZA YINGZI
Assigned to INDIANA UNIVERSITY RESEARCH AND TECHNOLOGY CORPORATION. Assignors: DU, ELIZA YINGZI; JIANG, PINGGE; YANG, KAI
Priority to EP20140182908 (published as EP2851841A3)
Priority to JP2014192637A (published as JP6144656B2)
Publication of US20150086077A1
Assigned to TOYOTA JIDOSHA KABUSHIKI KAISHA. Assignors: TOYOTA MOTOR ENGINEERING & MANUFACTURING NORTH AMERICA, INC.
Publication of US9070023B2
Application granted
Legal status: Active
Adjusted expiration

Classifications

    • G06K9/00805
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 - Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06K9/00362
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/755 - Deformable models or variational models, e.g. snakes or active contours
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/16 - Anti-collision systems
    • G08G1/166 - Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30236 - Traffic on road, railway or crossing

Definitions

  • the invention relates to a system and method for alerting a driver that the visual perception of a pedestrian may be difficult. More particularly, the system and method generate a global clutter score and a local pedestrian clutter score, and process both the global clutter score and local pedestrian clutter score so as to calculate a pedestrian detection score, wherein the driver is alerted when the pedestrian detection score is outside of a predetermined threshold.
  • Pedestrian perception alert systems utilizing three dimensional features are known in the art.
  • three dimensional detection systems require the use of range sensors such as radar, sonar, laser or the like.
  • three dimensional detection systems require robust computing platforms capable of fusing the three dimensional data with a two dimensional video camera image.
  • Pedestrian detection utilizing two dimensional video image analyses is also known.
  • current two dimensional pedestrian detection systems are configured to process the two dimensional image so as to ascertain the presence of a pedestrian.
  • upon detecting a pedestrian, the two dimensional pedestrian detection systems will identify the location of the detected pedestrian and/or alert the driver.
  • current systems may produce numerous false positives.
  • current two dimensional pedestrian detection systems do not address the difficulty that a driver may have in visually perceiving a pedestrian. Thus, by alerting the driver that visual perception is difficult, the driver may be able to ascertain with better certainty whether a pedestrian detection alert is a false positive.
  • a pedestrian perception alert system and a method for issuing an alert to a driver are provided.
  • the system and method are configured to issue an alert in real-time where a driver's visual detection of a pedestrian is difficult.
  • the pedestrian perception alert system includes a video camera, a processor, and an alert.
  • the video camera is configured to capture two dimensional video images.
  • the alert is configured to issue a warning that it is difficult to visually perceive a pedestrian within the driving environment.
  • the processor is in electrical communication with the camera.
  • the pedestrian perception alert system further includes a Pedestrian Detection Unit (“PDU”), Global Clutter Analysis Unit (“GCAU”), and Local Pedestrian Clutter Analysis Unit (“LPCAU”).
  • the PDU is configured to analyze the video image to detect a pedestrian.
  • the GCAU is configured to generate a global clutter score of the video image.
  • the global clutter score measures the clutter of the entire video image.
  • the LPCAU is configured to generate a local pedestrian clutter score.
  • the local pedestrian clutter score measures the clutter of each of the pedestrians detected in the video image.
  • the PDU detects a pedestrian in the video image, and the processor subsequently initiates both the GCAU and the LPCAU.
  • the processor processes the global clutter score and local pedestrian clutter score so as to generate a pedestrian detection score.
  • the processor is further configured to actuate the alert when the pedestrian detection score is outside of a predetermined threshold.
  • the pedestrian perception alert system may further include a Saliency Map Generating Unit (“SMGU”).
  • the SMGU is configured to process the video image and extract salient features from the video image.
  • the processor is further configured to actuate the LPCAU so as to process the extracted salient features when generating the local pedestrian clutter score.
  • the local pedestrian clutter score is processed with the global clutter score so as to calculate a pedestrian detection score.
  • the salient features may include pedestrian behavior, such as pedestrian motion. The pedestrian behavior may be further predicated upon the environment surrounding the pedestrian.
  • the pedestrian perception alert system may further include a Pedestrian Group Analysis Unit (“PGAU”) configured to detect a group of pedestrians and assign a perception difficulty value to the group of pedestrians.
  • the PGAU analyzes individual pedestrian interaction within the group of pedestrians, and the interaction of one group of pedestrians with respect to another group of pedestrians so as to determine the impact the group of pedestrians may have on the driver's ability to visually perceive the group or an individual pedestrian within the group.
  • a method for issuing an alert in real-time when a driver's visual detection of a pedestrian is difficult includes the steps of providing a video camera, an alert, and a processor.
  • the video camera is configured to capture video images.
  • the alert is configured to issue a warning that a pedestrian within the driving environment is difficult to visually perceive.
  • the processor is in electrical communication with the camera.
  • the method further includes the steps of providing a Pedestrian Detection Unit (“PDU”), a Global Clutter Analysis Unit (“GCAU”), and a Local Pedestrian Clutter Analysis Unit (“LPCAU”).
  • the PDU is configured to analyze the video camera image to detect a pedestrian.
  • the GCAU is configured to generate a global clutter score.
  • the global clutter score is a measurement of the clutter of the entire video image.
  • the LPCAU is configured to generate a local pedestrian clutter score.
  • the local pedestrian clutter score is a measurement of the clutter of each of the pedestrians detected in the video image.
  • the processor processes the global clutter score and local pedestrian clutter score so as to generate a pedestrian detection score.
  • the processor is further configured to actuate the alert when the pedestrian detection score is outside of a predetermined threshold.
  • the method further includes the step of providing a Saliency Map Generating Unit (“SMGU”).
  • the SMGU is configured to process the video image and extract salient features from the video image.
  • the processor is further configured to process the extracted salient features with LPCAU so as to generate the local pedestrian clutter score.
  • the local pedestrian clutter score is processed with the global clutter score so as to calculate a pedestrian detection score.
  • the salient features may include pedestrian behavior, such as pedestrian motion. The pedestrian behavior may be further predicated upon the environment surrounding the pedestrian.
  • the method may further include the step of providing a Pedestrian Group Analysis Unit (“PGAU”) configured to detect a group of pedestrians and assign a perception difficulty value to the group of pedestrians.
  • the PGAU analyzes individual pedestrian interaction within the group of pedestrians, and the interaction of one group of pedestrians with respect to another group of pedestrians so as to determine the impact the group of pedestrians may have on the driver's ability to visually perceive the group or an individual pedestrian within the group.
  • FIG. 1 is a perspective view showing the system employed in a natural driving environment
  • FIG. 2 is a diagram of the system
  • FIG. 3 is a perspective view showing the operation of an embodiment of the GCAU populating a luminance variation matrix
  • FIG. 4 is an illustration showing the operation of an embodiment of the PCGU utilizing a pedestrian mask
  • FIG. 5 is an illustration of the operation of an embodiment of the PCGU applying a cloth mask
  • FIG. 6 is an illustration of the operation of an embodiment of the LPCAU generating a background window and a detected pedestrian window
  • FIG. 7 is a chart showing the global clutter score and local pedestrian clutter score for a corresponding driving scene
  • FIG. 8 is a diagram of a system showing the input of the SMGU and PGAU to generate a pedestrian detection score
  • FIG. 9 is an example of a saliency map
  • FIG. 10 is a diagram showing the steps of a method for issuing real-time warning when a driver's visual detection of a pedestrian is difficult.
  • a pedestrian perception alert system 10 is provided.
  • the pedestrian perception alert system 10 is configured to issue an alert in real-time in instances where a driver's visual detection of a pedestrian is difficult.
  • the pedestrian perception alert system 10 may be further incorporated with an autonomous control system wherein vehicle movement is further restricted, or in the alternative, the autonomous control system may be configured to take control of the vehicle in instances where it is difficult for a driver to visually perceive a pedestrian.
  • the pedestrian perception alert system 10 may be further advantageous in that the driver may be able to ascertain whether a pedestrian detection is a false positive.
  • the pedestrian perception alert system 10 may be integrated into an automotive vehicle 100 .
  • the pedestrian perception alert system 10 includes a video camera 12 configured to capture video images.
  • the pedestrian perception alert system 10 further includes an alert 14 , and a processor 16 .
  • the alert 14 is configured to issue a warning that a pedestrian within the driving environment is visually difficult to perceive.
  • the alert 14 may be disposed within the cabin space of the vehicle, and may be a visual notification such as a light, or an audible signal such as a chime, or a series of chimes.
  • the processor 16 is in electrical communication with the video camera 12 and is configured to process the video image utilizing analysis units, as described below, so as to issue a warning to the driver.
  • though FIG. 1 shows the video camera 12 mounted to the underside of a rearview mirror, it should be appreciated that the video camera 12 may be mounted elsewhere. Further, multiple video cameras 12 may be used to provide 360 degree coverage of the natural driving environment. In such an embodiment, it should be appreciated that the processor 16 may be further configured to fuse the video images captured by each video camera 12 so as to build a 360 degree view of the natural driving environment.
  • the video camera 12 is a high resolution camera configured to capture a 122 degree view and record 32 frames per second at 1280×720 resolution, such as the camera commonly referenced as the DOD GS600 Digital Video Recorder (“DVR”).
  • the video camera 12 may include other features such as GPS antenna 12 a for obtaining geographic location, and a gravity sensor 12 b for sensing motion.
  • the system 10 captures video image, measures the global clutter of the image, processes the image to detect a pedestrian, utilizes pedestrian contour and color clustering to verify that the detected pedestrian is indeed a pedestrian, and then measures the clutter of the pedestrian.
  • Features such as the pedestrian contour, and color of the cloth may also be used to measure the clutter.
  • the output is a measurement of the difficulty a driver may have of visually perceiving the detected pedestrian within the driving environment.
  • the pedestrian perception alert system 10 further includes a Pedestrian Detection Unit (“PDU”) 18 , a Global Clutter Analysis Unit (“GCAU”) 20 , and a Local Pedestrian Clutter Analysis Unit (“LPCAU”) 22 .
  • the PDU 18 , GCAU 20 , and LPCAU 22 may be manufactured as firmware with protocol configured to be processed and actuated by the processor 16 .
  • the firmware may be a separate unit disposed with other electronics of the vehicle.
  • the PDU 18 is configured to analyze the video camera 12 image to detect a pedestrian.
  • the PDU 18 may use input such as the geographic location of the vehicle gathered by the GPS antenna 12 a , or motion input gathered by the gravity sensor 12 b to perform pedestrian detection.
  • the processor 16 actuates the PDU 18 wherein the PDU 18 analyzes predetermined frames to determine if a pedestrian is present in the natural driving environment. For instance, the PDU 18 may be configured to identify regions of interests within each frame, wherein the background of the frame is eliminated so as to focus processing and analysis on the regions of interest.
  • the PDU 18 may then apply pedestrian feature matching, to include size, motion and speed, height-width ratio and orientation.
  • the PDU 18 notifies the processor 16 in the event a pedestrian is present within the natural driving environment.
  • the processor 16 then actuates both the GCAU 20 and the LPCAU 22 upon notification from the PDU 18 .
  • the GCAU 20 is configured to generate a global clutter score 24 .
  • the global clutter score 24 is a measurement of the clutter of the entire video image.
  • the LPCAU 22 is configured to generate a local pedestrian clutter score 26 .
  • the local pedestrian clutter score 26 is a measurement of the clutter of each pedestrian detected in the video image.
  • the processor 16 is further configured to process both the global clutter score 24 and local pedestrian clutter score 26 so as to generate a pedestrian detection score 28 .
  • the pedestrian detection score 28 is the difference between the global clutter score 24 and local pedestrian clutter score 26 .
  • the pedestrian detection score 28 measures the difficulty of visually seeing a pedestrian based upon both the amount of clutter in the driving environment and the clutter of the detected pedestrian with respect to the clutter in the environment.
  • clutter refers to a combination of foreground and background in a view that provides distracting details, making it difficult for some individuals to detect an object from its background.
  • the processor 16 is further configured to actuate the alert 14 when the pedestrian detection score 28 is outside of a predetermined threshold.
  • the GCAU 20 measures the overall clutter score of the entire video image based upon the edge density, luminance variation and chrominance variation of the video image to calculate the global clutter score 24 .
  • the edge density may be calculated by applying a detector, such as a Canny detector, with fixed threshold range to detect an edge and to compare the edge density of various frames of the video image having different driving scenarios, illumination and weather conditions.
  • the lower threshold may be set to 0.11 and the upper threshold may be set to 0.27.
  • a 7×7 Gaussian filter is applied to each video frame processed by the Canny detector so as to remove excess high frequency image components to which human vision is not sensitive.
  • the dimensions provided herein correspond to the dimensions and resolution of the video image captured by the DOD GS600 Digital Video Recorder, and may change to correspond to the dimensions and resolution of the video image captured by a different camera.
  • the edge density is calculated as the ratio between the number of edge pixels and the total number of pixels within the frame of the video image.
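  • As a hedged illustration only, a minimal OpenCV sketch of this edge density computation might look as follows; the mapping of the normalized 0.11/0.27 thresholds onto OpenCV's 8-bit Canny thresholds, and blurring before edge detection, are assumptions rather than details taken from the patent.

```python
import cv2
import numpy as np

def global_edge_density(frame_gray):
    # Remove excess high frequency components to which human vision is
    # not sensitive (the patent describes a 7x7 Gaussian filter).
    smoothed = cv2.GaussianBlur(frame_gray, (7, 7), 0)
    # Canny thresholds scaled from the normalized 0.11 / 0.27 values to
    # the 8-bit intensity range (this scaling is an assumption).
    edges = cv2.Canny(smoothed, int(0.11 * 255), int(0.27 * 255))
    # Edge density: ratio of edge pixels to total pixels in the frame.
    return np.count_nonzero(edges) / edges.size
```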
  • Luminance variation is measured globally. Luminance variation measures the luminance change of the entire video image 200 .
  • the GCAU 20 may include a sliding window 34 and a luminance variation matrix 36 , as shown in FIG. 3 .
  • the luminance variation matrix 36 is dimensioned the same size as the video frame.
  • a 9×9 sliding window 34 is slid across the frame of the video image so as to calculate a standard deviation of luminance value within the sliding window 34 with respect to the same space of the luminance variation matrix 36 .
  • the standard deviation for a particular area of the video frame is entered into the corresponding position of the luminance variation matrix 36 .
  • the global luminance variation is calculated as the mean value of the populated luminance variation matrix 36 .
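  • As a sketch of the sliding-window scheme just described, the local standard deviation can be computed for every window position at once using the identity Var[X] = E[X²] − E[X]²; this vectorization is an implementation choice, not the patent's prescribed procedure.

```python
import cv2
import numpy as np

def global_luminance_variation(luma, win=9):
    # Populate a luminance variation matrix with the standard deviation
    # of luminance inside a win x win sliding window at every position.
    luma = luma.astype(np.float64)
    mean = cv2.blur(luma, (win, win))            # E[X] per window
    mean_sq = cv2.blur(luma * luma, (win, win))  # E[X^2] per window
    local_std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    # Global luminance variation: mean of the populated matrix.
    return float(local_std.mean())
```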
  • the chrominance variation is calculated using two chrominance channels, “a” and “b”.
  • the chrominance variation is calculated by determining the standard deviation for each respective channel.
  • the global clutter score 24 may be outputted as a weighted sum of the edge density, luminance variation, and chrominance variation.
  • the edge density, luminance variation, and chrominance variation may be evenly weighted, with each selected at a 1/3 weighted value.
  • the resultant global environmental clutter score may be scaled and normalized to a value between 0 and 1 such that the higher score means higher clutter.
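  • Tying the three features together, a hedged sketch of the global clutter score follows, reusing the helpers sketched above; the per-feature normalization constants that bring each term into [0, 1] are assumptions, since the patent does not state them.

```python
import cv2
import numpy as np

def global_clutter_score(frame_bgr):
    # Work in Lab space: L for luminance, a/b for the two chrominance
    # channels named in the description above.
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    L, a, b = cv2.split(lab)
    edge_density = global_edge_density(L)
    luminance_var = global_luminance_variation(L) / 128.0        # assumed scale
    chrominance_var = (float(a.std()) + float(b.std())) / 256.0  # assumed scale
    # Evenly weighted (1/3 each) sum, clipped to [0, 1]; higher = more clutter.
    score = (edge_density + luminance_var + chrominance_var) / 3.0
    return float(np.clip(score, 0.0, 1.0))
```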
  • a Pedestrian Contour Generation Unit (“PCGU”) 30 is provided.
  • the LPCAU 22 processes edge density of the detected pedestrian, edge distribution, local luminance variation, local chrominance variation, mean luminance intensity, and mean chrominance intensity to calculate the local pedestrian clutter score 26 .
  • the PCGU 30 is configured to generate a pedestrian mask 32 , which may be used to obtain edge density, edge distribution, local luminance variation, local chrominance variation, mean luminance intensity and mean chrominance intensity of the detected pedestrian.
  • the pedestrian mask 32 shown as a dashed silhouette of a pedestrian, is a constructed image of the pedestrian based upon features commonly associated with a pedestrian.
  • the pedestrian mask 32 may include the contours of the pedestrian, which are applied to the detected pedestrian so as to verify that the detected pedestrian is indeed an actual pedestrian. It should be appreciated that these features may vary based upon the location of the pedestrian within the driving environment and/or the time at which the PCGU 30 is actuated, and may be used to generate the pedestrian mask 32 and to refine the pedestrian mask 32 through subsequent video frames so as to ensure accuracy of the verification process.
  • the pedestrian mask 32 is a deformable model 40 , as indicated by the quotation marks surrounding the pedestrian mask 32 shown in FIG. 4 .
  • the deformable mask is applied around the pedestrian contour 38 . Energy minimization may be used to evolve the contour 38 .
  • the edge detector function may be defined as:
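  • As a reconstruction from standard active contour literature (an assumption, since the patent's exact expression is not reproduced here), a common edge detector function is

  $$ g(I) = \frac{1}{1 + \lvert \nabla (G_\sigma * I) \rvert^2}, $$

  where $G_\sigma * I$ is the image smoothed by a Gaussian of scale $\sigma$; $g$ approaches zero at strong edges, which halts the contour's evolution there.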
  • the generated contour 38 defines the pedestrian mask 32 which may be used by the LPCAU 22 to compute pedestrian clutter features, to include local pedestrian luminance variation and local pedestrian chrominance variation.
  • the PCGU 30 may be further configured to generate a cloth mask 42 .
  • the cloth mask 42 may be used to replicate a human visual attention model by providing a cloth region that is homogenous in both color and luminance intensity, wherein the cloth region may be compared with the background so as to simulate the human visual attention model.
  • the cloth mask 42 is generated by K-means color clustering based cloth region segmentation, which is subsequently applied to the detected pedestrian to segment the cloth region.
  • in the clustering objective, I(x,y) is the chrominance pixel value and μn is the mean value of each cluster.
  • the cloth mask 42 is then formed as the intersection of the pedestrian mask 32 derived by the active contour 38 and the cloth region derived from the K-means color clustering algorithm.
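  • A hedged sketch of such a K-means cloth segmentation follows; the number of clusters and the choice of the most populous chrominance cluster as the cloth region are assumptions, not values from the patent.

```python
import cv2
import numpy as np

def cloth_mask(frame_bgr, pedestrian_mask, k=3):
    # Cluster chrominance (a, b) values of pixels inside the
    # active-contour pedestrian mask with K-means.
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    ab = lab[:, :, 1:3].reshape(-1, 2).astype(np.float32)
    inside = pedestrian_mask.reshape(-1) > 0
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(ab[inside], k, None, criteria, 5,
                              cv2.KMEANS_PP_CENTERS)
    # Take the most populous chrominance cluster as the cloth region,
    # so that the region is homogenous in color.
    dominant = np.argmax(np.bincount(labels.ravel()))
    cloth = np.zeros(inside.shape, dtype=bool)
    cloth[inside] = labels.ravel() == dominant
    # The intersection with the pedestrian mask is implicit, since only
    # pixels inside the mask were clustered.
    return cloth.reshape(pedestrian_mask.shape)
```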
  • the LPCAU 22 is further configured to process the pedestrian mask 32 and the cloth mask 42 so as to compute the local pedestrian clutter score 26 .
  • the local pedestrian clutter score 26 may include features of shapes associated with the pedestrian which may be affected by movement of the pedestrian, the location of the pedestrian, and the color of the pedestrian's clothes.
  • the LPCAU 22 may be further configured to generate a background window 44 and a detected pedestrian window 46 .
  • the background window 44 is a portion of the video image having a predetermined dimension of the environment surrounding the detected pedestrian.
  • the detected pedestrian window 46 is a portion of the video frame dimensioned to capture the image of the detected pedestrian.
  • the background window 44 may be at least twice the area of the detected pedestrian window 46 .
  • the LPCAU 22 is further configured to determine the ratio between the number of edge pixels and the total number of pixels within both (1) the detected pedestrian window 46 and (2) the background window 44 absent the detected pedestrian window 46 , so as to calculate an edge density for a pedestrian.
  • the edge density may be calculated in a similar manner as the edge density for the global environment.
  • the edge density of the background window 44 and the detected pedestrian window 46 may be calculated by applying a detector with a fixed threshold range, after removing excess high frequency image components, so as to detect edges and compare the edge density of the background window 44 with respect to the detected pedestrian window 46 .
  • the fixed threshold range and the detector may be selected based upon factors such as the dimensions of the detected pedestrian window 46 or the background window 44 , the resolution of the video image, the processing capabilities of the processor 16 , and the like. For example, when detecting the edge density of the detected pedestrian window of a video image taken by the DOD GS600 Digital Video Recorder, the lower threshold may be set to 0.11 and the upper threshold may be set to 0.27.
  • a 7×7 Gaussian filter is respectively applied to the detected pedestrian window 46 or the background window 44 processed by a Canny detector so as to remove excess high frequency image components to which human vision is not sensitive.
  • the edge density of the detected pedestrian window 46 may be calculated by applying a Canny detector with a fixed threshold range to detect edges. Again, when detecting the edge density of the detected pedestrian window of a video image taken by the DOD GS600 Digital Video Recorder, the lower threshold may be set to 0.11 and the upper threshold may be set to 0.27.
  • a 7×7 Gaussian filter is applied to the detected pedestrian window 46 processed by the Canny detector so as to remove excess high frequency image components to which human vision is not sensitive.
  • the LPCAU 22 is configured to calculate an edge distribution of the background window 44 and the detected pedestrian by determining the histogram of edge magnitude binned by the edge orientation for both (1) the detected pedestrian window 46 and (2) the Isolated Background Window, wherein the Isolated Background Window is the background window 44 minus the detected pedestrian window 46 .
  • the edge distribution is a feature which may be used to calculate the local pedestrian clutter score 26 .
  • the edge distribution is also useful to help verify that the detected pedestrian is in fact a pedestrian.
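  • The edge distribution feature can be sketched as a gradient-magnitude histogram binned by gradient orientation, computed for each window; the bin count below is an assumption.

```python
import cv2
import numpy as np

def edge_distribution(window_gray, bins=8):
    # Gradient components via Sobel filters.
    gx = cv2.Sobel(window_gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(window_gray, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    orientation = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned, [0, pi)
    # Histogram of edge magnitude binned by edge orientation.
    hist, _ = np.histogram(orientation, bins=bins, range=(0, np.pi),
                           weights=magnitude)
    # Normalize so pedestrian and background histograms are comparable.
    return hist / (hist.sum() + 1e-12)
```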
  • the LPCAU 22 may be configured to calculate the local luminance variation within the pedestrian mask 32 and also within a region defined by the subtraction of the pedestrian mask 32 from the background window 44 (the “Maskless Background Window”).
  • the LPCAU 22 utilizes a sliding window 34 and a mask luminance variation matrix 36 .
  • the mask luminance variation matrix 36 is dimensioned the same size as that of the pedestrian mask 32 so as to calculate the luminance variation of the pedestrian mask 32 .
  • a sliding window 34 is slid across the pedestrian mask 32 so as to calculate a standard deviation of luminance value within the sliding window 34 with respect to the same space of the mask luminance variation matrix 36 .
  • the standard deviation for a particular area of the pedestrian mask 32 is entered into the corresponding position of the luminance variation matrix 36 .
  • the luminance variation of the pedestrian mask 32 is calculated as the mean value of the populated mask luminance variation matrix 36 .
  • a sliding window 34 and a Maskless Background Window Luminance (the “MBWL”) variation matrix 36 are provided.
  • the MBWL variation matrix 36 is dimensioned the same size as the Maskless Background Window so as to calculate the luminance variation of the Maskless Background Window.
  • sliding window 34 is slid across the Maskless Background Window so as to calculate a standard deviation of luminance value within the sliding window 34 with respect to the same space of the MBWL variation matrix 36 .
  • the standard deviation for a particular area of the Maskless Background Window is entered into the corresponding position of the MBWL variation matrix 36 .
  • the luminance variation of the Maskless Background Window is calculated as the mean value of the populated MBWL variation matrix 36 .
  • the LPCAU 22 may be further configured to calculate the local chrominance variation within the pedestrian mask 32 and also within Maskless Background Window.
  • the computation of local chrominance variation is calculated using two chrominance channels, “a” and “b” for both the pedestrian mask 32 and the Maskless Background Window.
  • the chrominance variation is calculated by determining the standard deviation for each respective channel.
  • the LPCAU 22 may be further configured to calculate the mean luminance intensity within the cloth mask 42 and a region generated by subtracting the cloth mask 42 from the background window 44 (the “Cloth Maskless Background Region”).
  • the LPCAU 22 may also calculate the mean chrominance intensity within the cloth mask 42 and Cloth Maskless Background Region.
  • the LPCAU 22 may calculate the local pedestrian clutter using features described above, that is the: (1) calculated edge density and edge distribution; (2) the local luminance variation of the pedestrian mask 32 and the Maskless Background Window; (3) the local chrominance variation within the pedestrian mask 32 and also within Maskless Background Window; (4) the mean luminance intensity within the cloth mask 42 and also of the Cloth Maskless Background Region, and (5) the mean chrominance intensity of the cloth mask 42 and the Cloth Maskless Background Region.
  • the local pedestrian clutter (the “LPC”) score may be calculated by computing the above referenced features in the following formulation:
  • LPC = 1 − dist(T, B)/‖dist(T, B)‖, where T is a dimensional feature vector of the pedestrian area and B is a corresponding dimensional feature vector of the background area, wherein the features are the calculated edge distribution, the local luminance variation of the pedestrian mask 32 and the Maskless Background Window, the local chrominance variation within the pedestrian mask 32 and also within the Maskless Background Window, the mean luminance intensity within the cloth mask 42 and also of the Cloth Maskless Background Region, and the mean chrominance intensity of the cloth mask 42 and the Cloth Maskless Background Region.
  • dist measures the distance between the two vectors, which may be measured using Euclidean distance.
  • the local pedestrian clutter score 26 is normalized to a value between 0 to 1, wherein the higher the local pedestrian clutter score 26 , the more cluttered the pedestrian is, and thus the more difficult it is for a driver to perceive the pedestrian from the environment.
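  • A minimal sketch of the LPC computation, under the assumption that the normalization term bounds the Euclidean distance between the pedestrian and background feature vectors to [0, 1]:

```python
import numpy as np

def local_pedestrian_clutter(T, B):
    # T: feature vector of the pedestrian area; B: corresponding feature
    # vector of the background area. A small distance means the pedestrian
    # resembles its background, so clutter (perception difficulty) is high.
    T = np.asarray(T, dtype=np.float64)
    B = np.asarray(B, dtype=np.float64)
    distance = np.linalg.norm(T - B)  # Euclidean distance
    scale = np.linalg.norm(T) + np.linalg.norm(B) + 1e-12  # assumed norm
    return float(np.clip(1.0 - distance / scale, 0.0, 1.0))
```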
  • the chart includes both the global clutter score 24 and the local pedestrian clutter score 26 , each of which were computed in accordance with the details provided herein.
  • Images 4 and 5 are of the same environment with a global clutter score 24 of 0.307.
  • the global clutter score 24 provides a reasonable reference for the global clutter level, although it is not very discriminative when comparing similar driving scenes.
  • the local pedestrian clutter score 26 reflects the difficulty of pedestrian perception more closely than the global clutter score 24 .
  • the images indicate that (1) low contrast images tend to have a lower global clutter score 24 , such as a night image (Image 1, with a global clutter score 24 of 0.116) and an image with excessive glare and reflections (Image 2, with a global clutter score 24 of 0.220); (2) color saliency is the most important factor that may affect the local pedestrian clutter score 26 , e.g., Image 6 has the lowest local pedestrian clutter score 26 (0.527) due to its highly saturated and discriminative pants color compared to the neighborhood area; and (3) local pedestrian clutter could be a better indicator and reference for pedestrian perception difficulty in naturalistic driving scenarios. For example, even though Image 1 has the lowest global clutter score 24 (0.116), its pedestrian in dark clothing is the most difficult to detect relative to the other images, as reflected by its high local pedestrian clutter score 26 (0.928).
  • the pedestrian perception alert system 10 processes both the global clutter score 24 and the local pedestrian clutter score 26 so as to calculate a pedestrian detection score 28 .
  • the pedestrian detection score 28 may be calculated by simply determining the difference between the two scores, wherein the alert 14 is actuated when the pedestrian detection score 28 is outside of a predetermined threshold, or above a desired value.
  • the global clutter score 24 or the local pedestrian clutter score 26 is weighted based upon the environment such that one of the scores factors more heavily in calculation of the pedestrian detection score 28 .
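  • As a sketch of this final step, with illustrative weights and threshold that are assumptions rather than patent values:

```python
def pedestrian_detection_score(global_score, local_score,
                               w_global=1.0, w_local=1.0):
    # Difference of the two clutter scores; the weights let one score
    # factor more heavily depending on the environment.
    return w_local * local_score - w_global * global_score

def should_alert(detection_score, threshold=0.4):
    # Actuate the alert when the pedestrian detection score is outside
    # the predetermined threshold.
    return detection_score > threshold
```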
  • the pedestrian perception alert system 10 includes a PDU 18 .
  • the PDU 18 is configured to process two dimensional video to detect a pedestrian.
  • the PDU 18 is configured to execute a first detection method 48 or a second detection method 50 based upon the probability of a pedestrian appearance within the video image.
  • the first detection method 48 is executed in instances where there is a low chance of pedestrian appearance and the second detection method 50 is executed in instances where there is a high chance of pedestrian appearance.
  • the PDU 18 may determine a probability of a pedestrian appearance based upon the time of day, geographic location, or traffic scene. Alternatively, the PDU 18 may process a look-up table having pre-calculated or observed statistics regarding the probability of a pedestrian based upon time, geographic location, or traffic scene. For illustrative purposes, the look-up table may indicate that there is a five (5) percent probability of a pedestrian at 3:22 a.m. on December 25th, in Beaverton, Oreg., on a dirt road. Accordingly, as the probability of a pedestrian appearance in the driving scene is relatively low, the PDU 18 executes the first detection method 48 .
  • the first detection method 48 is configured to identify a region of interest within the video image by determining the variation between sequential frames of the video image.
  • the PDU 18 identifies a region of interest in instances where the variation between sequential frames exceeds a predetermined threshold.
  • the first detection method 48 further applies a set of constraints, such as pedestrian size, shape, orientation, height-width ratio and the like, to each of the regions of interest, wherein each region of interest having a requisite number of constraints is labeled as having a pedestrian.
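  • A hedged sketch of this first, frame-differencing method follows; the difference threshold and minimum region area are illustrative assumptions.

```python
import cv2

def motion_regions_of_interest(prev_gray, cur_gray,
                               diff_thresh=25, min_area=400):
    # Regions where the variation between sequential frames exceeds a
    # threshold become regions of interest.
    diff = cv2.absdiff(prev_gray, cur_gray)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # A size constraint stands in for the fuller set of constraints
    # (shape, orientation, height-width ratio) described above.
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```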
  • the second detection method 50 is configured to determine regions of interests within the video image by detecting vertical edges within the frame.
  • the PDU 18 identifies a region of interest in instances where the vertical edge has a predetermined characteristic.
  • the second detection method 50 further applies a feature filter, illustratively including, but not limited to, a Histogram of Oriented Gradient detector to each region of interest, wherein each region of interest having a requisite number of features is labeled as having a pedestrian.
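  • The second method can be sketched with OpenCV's stock HOG person detector, one concrete instance of the Histogram of Oriented Gradient feature filter named above; the confidence cutoff is an assumption.

```python
import cv2
import numpy as np

def hog_pedestrian_regions(frame_bgr):
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    # Each returned box is a candidate region of interest; weights are
    # the detector's SVM confidence scores.
    boxes, weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8))
    return [tuple(box) for box, w in zip(boxes, np.ravel(weights))
            if w > 0.5]
```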
  • the pedestrian perception alert system 10 may include additional units configured to calculate a pedestrian detection score 28 .
  • the pedestrian detection score 28 may be computed using the global clutter score 24 , saliency measure, location prior, local pedestrian clutter score 26 , pedestrian behavior analysis, and group interaction (each referenced hereafter as a “Factor” and collectively as the “Factors”).
  • the Factors may be processed together by the processor 16 to generate a Probabilistic Learned Model (the “PLM”) which may be further processed so as to generate a pedestrian detection score 28 .
  • the PLM stores the Factors over time and calculates the pedestrian detection score 28 based in part upon the learned influence one Factor may have upon the other Factor.
  • the PLM is helpful in refining and providing an accurate pedestrian detection score through learned experiences.
  • the pedestrian perception alert system 10 may further include a Saliency Map Generating Unit (“SMGU”) 52 .
  • the SMGU 52 is configured to process the video image and extract salient features from the video image.
  • the SMGU 52 is directed to replicating the human vision system, wherein, between the pre-attention stage and the recognition stage, task and target functions of the human vision system are completed.
  • the SMGU 52 computes and generates a task- and target-independent bottom-up saliency map using saliency computation approaches currently known and used in the art, illustratively including the saliency map shown in FIG. 9 .
  • the map shows strongly connected edges of the image above. Specifically, regions with highly salient features have high intensity.
  • the processor 16 processes the extracted salient features and provides the salient features to the LPCAU 22 so as to generate a local pedestrian clutter score 26 .
  • the salient features may include, but are not limited to: (1) edges of the image; and (2) connecting edges of the image.
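  • One known bottom-up, task- and target-independent approach the SMGU could use is the spectral residual method, sketched below; the patent does not name a specific algorithm, so this particular choice is an assumption.

```python
import cv2
import numpy as np

def spectral_residual_saliency(gray):
    # Downsample; saliency is a coarse, pre-attentive signal.
    small = cv2.resize(gray, (64, 64)).astype(np.float64)
    spectrum = np.fft.fft2(small)
    log_amplitude = np.log(np.abs(spectrum) + 1e-12)
    phase = np.angle(spectrum)
    # The spectral residual is the log-amplitude minus its local mean.
    residual = log_amplitude - cv2.blur(log_amplitude, (3, 3))
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = cv2.GaussianBlur(saliency, (9, 9), 2.5)
    # High intensity marks highly salient regions, as in FIG. 9.
    return cv2.normalize(saliency, None, 0.0, 1.0, cv2.NORM_MINMAX)
```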
  • the pedestrian perception alert system 10 may be further configured to process pedestrian behavior to calculate the pedestrian detection score 28 .
  • Pedestrian behavior may include how the pedestrian motion affects the perception difficulty of the driver, and may be further used to verify pedestrian detection.
  • Pedestrian behavior may also be examined in the context of the environment, wherein pedestrian behavior includes analyzing the location and status of the appearing pedestrians, including standing, walking, running, carrying objects, etc., with the perceived pedestrian clutter determined in part by the environment surrounding the pedestrian.
  • the SMGU 52 may be programmed with the behavior of a pedestrian at an urban crosswalk, or on a sidewalk adjacent to a residential street.
  • the pedestrian perception alert system 10 may further include a Pedestrian Group Analysis Unit (“PGAU”) 54 configured to detect a group of pedestrians and assign a perception difficulty value to the group of pedestrians.
  • the PGAU 54 analyzes individual pedestrian interaction within the group of pedestrians, and the interaction of one group of pedestrians with respect to another group of pedestrians. For the within-group interaction case, pedestrians located close together within the scene with a similar behavior pattern, e.g. standing/crossing/walking in the same direction, may be grouped by the viewer, so that the clutter score of an individual pedestrian within the group is of limited use in describing the pedestrian perception difficulty. Accordingly, a highly cluttered pedestrian would be much easier to detect if he/she were grouped by the viewer into a group with much more salient pedestrians.
  • the PGAU 54 utilizes group pedestrians' characteristics combined with individual pedestrian clutter features in judging visual clutter.
  • the PGAU 54 accounts for the fact that the perception of a pedestrian may also be affected by other pedestrians or distracting events/objects existing in the same scene. For example, a moving pedestrian may distract a driver's attention more easily than a static pedestrian, and a dashing vehicle or bicycle may catch the driver's attention immediately.
  • the PGAU 54 may utilize learned behavior of pedestrians in group interactions to calculate the pedestrian detection score 28 .
  • a method for issuing an alert 14 in real-time when a driver's visual detection of a pedestrian is difficult includes the steps of providing a video camera 12 , an alert 14 , and a processor 16 . These steps are referenced in FIG. 10 as 110 , 120 , and 130 , respectively.
  • the video camera 12 is configured to capture video images.
  • the alert 14 is configured to issue a warning that the pedestrian within the driving environment is difficult to visually perceive.
  • the processor 16 is in electrical communication with the camera and processes the video image.
  • the method further includes detecting a pedestrian in the video image 140 , measuring the clutter of the entire video image 150 , measuring the clutter of each of the pedestrians detected in the video image 160 , calculating a global clutter score 170 , and calculating a local pedestrian clutter score 180 .
  • the method 100 proceeds to step 190 wherein the global clutter score and local pedestrian clutter score are processed so as to calculate a pedestrian detection score, and in step 200 the method issues a warning when the pedestrian detection score is outside of a predetermined threshold so as to notify the driver that visual perception of a pedestrian is difficult.
  • the method 100 may utilize the PDU 18 , GCAU 20 , and LPCAU 22 as described herein so as to detect a pedestrian, measure global clutter and pedestrian clutter, and calculate a global clutter score and a local pedestrian clutter score.
  • the PDU 18 analyzes the video camera 12 image to detect a pedestrian.
  • the GCAU 20 generates the global clutter score 24 which measures the clutter of the entire video image.
  • the LPCAU 22 generates the local pedestrian clutter score 26 which measures the clutter of each of the pedestrians detected in the video image.
  • Both the GCAU 20 and the LPCAU 22 are initiated when the PDU 18 detects a pedestrian in the video image.
  • the GCAU 20 and the LPCAU 22 may calculate a respective global clutter score 24 and local pedestrian clutter score 26 as described herein.
  • the method proceeds to the step of processing the global clutter score 24 and local pedestrian clutter score 26 so as to generate a pedestrian detection score 28 , and actuating the alert 14 when a pedestrian detection score 28 is outside of a predetermined threshold.
  • the method may further include step 210 , generating a pedestrian mask 32 .
  • the PCGU 30 may be configured to generate the pedestrian mask 32 .
  • the pedestrian mask 32 is a constructed image of the pedestrian based upon features commonly associated with a pedestrian.
  • the pedestrian mask 32 includes the contour of the pedestrian, which is applied to the detected pedestrian so as to verify that the detected pedestrian is indeed an actual pedestrian. It should be appreciated that these features may vary based upon the location of the pedestrian within the driving environment and/or the time at which the PCGU 30 is actuated, and may be used to generate the pedestrian mask 32 and to refine the pedestrian mask 32 through subsequent video frames so as to ensure accuracy of the verification process.
  • the pedestrian mask 32 is a deformable model 40 which is applied around the pedestrian contour 38 .
  • Energy minimization may be used to evolve the contour 38 .
  • C′(s) is the tangent of the curve and C′′(s) is normal to the curve.
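  • As a reconstruction from the classic snake formulation in the literature (an assumption, since the patent's exact energy functional is not reproduced here), the internal energy minimized when evolving the contour is commonly

  $$ E_{\mathrm{int}}(C) = \int_0^1 \tfrac{1}{2}\left( \alpha \,\lvert C'(s) \rvert^2 + \beta \,\lvert C''(s) \rvert^2 \right) ds, $$

  where the $C'(s)$ term penalizes stretching, the $C''(s)$ term penalizes bending, and an image-dependent external term attracts the contour toward edges.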
  • the edge detector function may be defined as set forth above.
  • the generated contour 38 defines the pedestrian mask 32 which may be used by the LPCAU 22 to compute pedestrian clutter features, to include local pedestrian luminance variation and local pedestrian chrominance variation.
  • the method may include utilizing edge density, luminance variation and chrominance variation of the video image to calculate the global clutter score 24 and edge density of the detected pedestrian, edge distribution, local luminance variation, local chrominance variation, mean luminance intensity, and mean chrominance intensity to calculate the local pedestrian clutter score 26 .
  • the pedestrian detection score 28 is the difference between the global clutter score 24 and the local pedestrian clutter score 26 .
  • Edge density may be calculated by removing high frequency image components and subsequently determining a ratio between the number of edge pixels and the total number of pixels within the video frame.
  • the method may utilize a sliding window 34 and a luminance variation matrix 36 dimensioned the same size as the video frame, to calculate the luminance variation, wherein the GCAU 20 is configured to slide the sliding window 34 across the entire video frame so as to calculate a standard deviation of luminance value within the sliding window 34 .
  • the luminance variation may be calculated by entering the standard deviation for a particular area of the video frame into the corresponding position of the luminance variation matrix 36 , and calculating the mean value of the luminance variation matrix.
  • the chrominance variation may be calculated using two chrominance channels as described above.
  • the global clutter score 24 may be outputted as a weighted sum of the edge density, luminance variation, and chrominance variation.
  • the edge density, luminance variation, and chrominance variation may be evenly weighted, with each selected at a 1/3 weighted value.
  • the resultant global environmental clutter score may be scaled and normalized to a value between 0 and 1 such that the higher score means higher clutter.
  • the LPCAU 22 may be further configured to generate a background window 44 and a detected pedestrian window 46 .
  • the background window 44 is a portion of the video image having a predetermined dimension of the environment surrounding the detected pedestrian.
  • the detected pedestrian window 46 is a portion of the video frame dimensioned to capture the image of the detected pedestrian.
  • the background window 44 may be at least twice the area of the detected pedestrian window 46 .
  • the LPCAU 22 is further configured to determine the ratio between the number of edge pixels and the total number of pixels within both (1) the detected pedestrian window 46 and (2) the background window 44 absent the detected pedestrian window 46 , so as to calculate an edge density for a pedestrian.
  • the LPCAU 22 is configured to calculate an edge distribution of the background window 44 and the detected pedestrian by determining the histogram of edge magnitude binned by the edge orientation for both (1) the detected pedestrian window 46 and (2) the Isolated Background Window, as defined herein.
  • the edge distribution is a feature which may be used to calculate the local pedestrian clutter score 26 .
  • the edge distribution is also useful to help verify that the detected pedestrian is in fact a pedestrian.
  • the LPCAU 22 may be configured to calculate the local luminance variation within the pedestrian mask 32 and also within a region defined by the subtraction of the pedestrian mask 32 from the background window 44 (the “Maskless Background Window”).
  • the LPCAU 22 utilizes a sliding window 34 and a mask luminance variation matrix 36 .
  • the mask luminance variation matrix 36 is dimensioned the same size as that of the pedestrian mask 32 so as to calculate the luminance variation of the pedestrian mask 32 .
  • a sliding window 34 is slid across the pedestrian mask 32 so as to calculate a standard deviation of luminance value within the sliding window 34 with respect to the same space of the mask luminance variation matrix 36 .
  • the standard deviation for a particular area of the pedestrian mask 32 is entered into the corresponding position of the luminance variation matrix 36 .
  • the luminance variation of the pedestrian mask 32 is calculated as the mean value of the populated mask luminance variation matrix 36 .
  • a sliding window 34 and a MBWL variation matrix 36 are provided.
  • the MBWL variation matrix 36 is dimensioned the same size as the Maskless Background Window so as to calculate the luminance variation of the Maskless Background Window.
  • sliding window 34 is slid across the Maskless Background Window so as to calculate a standard deviation of luminance value within the sliding window 34 with respect to the same space of the MBWL variation matrix 36 .
  • the standard deviation for a particular area of the Maskless Background Window is entered into the corresponding position of the MBWL variation matrix 36 .
  • the luminance variation of the Maskless Background Window is calculated as the mean value of the populated MBWL variation matrix 36 .
  • the LPCAU 22 may be further configured to calculate the local chrominance variation within the pedestrian mask 32 and also within Maskless Background Window.
  • the computation of local chrominance variation is calculated using two chrominance channels, “a” and “b” for both the pedestrian mask 32 and the Maskless Background Window.
  • the chrominance variation is calculated by determining the standard deviation for each respective channel.
  • the LPCAU 22 may be further configured to calculate the mean luminance intensity within the cloth mask 42 and a region generated by subtracting the cloth mask 42 from the background window 44 (the “Cloth Maskless Background Region”).
  • the LPCAU 22 may also calculate the mean chrominance intensity within the cloth mask 42 and Cloth Maskless Background Region.
  • the LPCAU 22 may calculate the local pedestrian clutter using features described above, that is the: (1) calculated edge distribution; (2) the local luminance variation of the pedestrian mask 32 and the Maskless Background Window; (3) the local chrominance variation within the pedestrian mask 32 and also within Maskless Background Window; (4) the mean luminance intensity within the cloth mask 42 and also of the Cloth Maskless Background Region, and (5) the mean chrominance intensity of the cloth mask 42 and the Cloth Maskless Background Region.
  • the local pedestrian clutter (LPC) score may be calculated by computing the above referenced features in the following formulation:
  • LPC = 1 − dist(T, B)/‖dist(T, B)‖, where T is a dimensional feature vector of the pedestrian area and B is a corresponding dimensional feature vector of the background area. dist measures the distance between the two vectors, which may be measured using Euclidean distance.
  • the local pedestrian clutter score 26 is normalized to a value between 0 to 1, wherein the higher the local pedestrian clutter score 26 , the more cluttered the pedestrian is, and thus the more difficult it is for a human to perceive the pedestrian from the environment.
  • the method includes the step of providing a PDU 18 to detect a pedestrian.
  • the PDU 18 is configured to execute a first detection method 48 or a second detection method 50 based upon the probability of a pedestrian appearance within the video image.
  • the first detection method 48 is executed in instances where there is a low chance of pedestrian appearance and the second detection method 50 is executed in instances where there is a high chance of pedestrian appearance.
  • the PDU 18 may determine a probability of a pedestrian appearance based upon the time of day, geographic location, or traffic scene. Alternatively, the PDU 18 may process a look-up table having pre-calculated or observed statistics regarding the probability of a pedestrian based upon time, geographic location, or traffic scene. For illustrative purposes, the look-up table may indicate that there is a five (5) percent probability of a pedestrian at 3:22 a.m. on December 25th, in Beaverton, Oreg., on a dirt road. Accordingly, as the probability of a pedestrian appearance in the driving scene is relatively low, the PDU 18 executes the first detection method 48 .
  • the first detection method 48 is configured to identify regions of interest within the video image by determining the variation between sequential frames of the video image.
  • the PDU 18 identifies regions of interest in instances where the variation between sequential frames exceeds a predetermined threshold.
  • the first detection method 48 further applies a set of constraints, such as pedestrian size, shape, orientation, height-width ratio and the like to each of the regions of interest, wherein each region of interest having a requisite number of constraints is labeled as having a pedestrian.
  • the second detection method 50 is configured to determine regions of interests within the video image by detecting vertical edges within the frame.
  • the PDU 18 identifies a region of interest in instances where the vertical edge has a predetermined characteristic.
  • the second detection method 50 further applies a feature filter, illustratively including, but not limited to, a Histogram of Oriented Gradient detector, to each region of interest, wherein each region of interest having a requisite number of features is labeled as having a pedestrian.
  • the method may include the processing of additional features to calculate a pedestrian detection score 28 .
  • the pedestrian detection score 28 may be computed using the global clutter score 24 , saliency measure, location prior, local pedestrian clutter score 26 , pedestrian behavior analysis, and group interaction, (each referenced hereafter as a “Factor” and collectively as the “Factors”).
  • the Factors may be processed together by the processor 16 to generate a Probabilistic Learned Model (the “PLM”) which may be further processed so as to generate a pedestrian detection score 28 .
  • the PLM stores the Factors over time and calculates the pedestrian detection score 28 based in part upon the learned influence one Factor may have upon the other Factor. Thus, the PLM is helpful in refining and providing an accurate pedestrian detection score through learned experiences.
  • the method may further include the step of providing a Saliency Map Generating Unit (“SMGU 52 ”).
  • the SMGU 52 is configured to process the video image and extract salient features from the video image.
  • the SMGU 52 is directed to replicating the human vision system, wherein, between the pre-attention stage and the recognition stage, task and target functions of the human vision system are completed.
  • the SMGU 52 computes and generates a task- and target-independent bottom-up saliency map using saliency computation approaches currently known and used in the art, illustratively including the saliency map shown in FIG. 9 .
  • the map shows strongly connected edges of the image above. Specifically, regions with highly salient features have high intensity.
  • the processor 16 processes the extracted salient features and provides the salient features to the LPCAU 22 so as to generate a local pedestrian clutter score 26 .
  • the salient features may include, but are not limited to: (1) edges of the image; and (2) connecting edges of the image.
  • the method may further include step 220, processing pedestrian behavior to calculate the pedestrian detection score 28.
  • Pedestrian behavior may include how pedestrian motion affects the perception difficulty of the driver, and may be further used to verify pedestrian detection.
  • Pedestrian behavior may also be examined in the context of the environment, wherein pedestrian behavior analysis includes the location and status of the appearing pedestrians (standing, walking, running, carrying objects, etc.), with the perceived pedestrian clutter determined by the environment surrounding the pedestrian.
  • the SMGU 52 may be programmed with the behavior of a pedestrian at an urban crosswalk, or on a sidewalk adjacent to a residential street.
  • the method may further include step 230, analyzing individual pedestrian interaction within the group of pedestrians, and the interaction of one group of pedestrians with respect to another group of pedestrians, to calculate the pedestrian detection score.
  • a Pedestrian Group Analysis Unit (“PGAU”) 54 is configured to detect a group of pedestrians and assign a perception difficulty value to the group of pedestrians.
  • the PGAU 54 analyzes individual pedestrian interaction within the group of pedestrians, and the interaction of one group of pedestrians with respect to another group of pedestrians. For the within-group interaction case, pedestrians located close together within the scene with a similar behavior pattern, e.g., standing/crossing/walking in the same direction, may be grouped by the viewer, so that the clutter score of an individual pedestrian within the group is of limited use in describing the pedestrian perception difficulty. Accordingly, a highly cluttered pedestrian would be much easier to detect if he/she is grouped by the viewer into a group with much more salient pedestrians.
  • the PGAU 54 utilizes the group pedestrians' characteristics combined with individual pedestrian clutter features when judging visual clutter.
  • the PGAU 54 accounts for the fact that the perception of a pedestrian may also be affected by other pedestrians or distracting events/objects existing in the same scene. For example, a moving pedestrian may attract a driver's attention more easily than a static pedestrian, and a dashing vehicle or bicycle may catch the driver's attention immediately.
  • the PGAU 54 may utilize learned behavior of pedestrians in group interactions to calculate the pedestrian detection score 28.

Abstract

A pedestrian perception alert system configured to issue a warning in real time when a driver's visual detection of a pedestrian is difficult, and a method thereof, are provided. The system includes a video camera, an alert for issuing a warning, a processor, and a Pedestrian Detection Unit (“PDU”). The PDU analyzes the video camera image to detect a pedestrian. A Global Clutter Analysis Unit (“GCAU”) generates a global clutter score. A Local Pedestrian Clutter Analysis Unit (“LPCAU”) generates a local pedestrian clutter score. The processor processes the global clutter score and local pedestrian clutter score so as to generate a pedestrian detection score. The alert is actuated when the pedestrian detection score is outside of a predetermined threshold so as to notify the driver that perception of a pedestrian is difficult at that time.

Description

FIELD OF THE INVENTION
The invention relates to a system and method for alerting a driver that the visual perception of a pedestrian may be difficult. More particularly, the system and method generate a global clutter score and a local pedestrian clutter score, and process both scores so as to calculate a pedestrian detection score, wherein the driver is alerted when the pedestrian detection score is outside of a predetermined threshold.
BACKGROUND OF THE INVENTION
Pedestrian perception alert systems utilizing three dimensional features are known in the art. However, three dimensional detection systems require the use of range sensors such as radar, sonar, laser or the like. Further, three dimensional detection systems require robust computing platforms capable of fusing the three dimensional data with a two dimensional video camera image.
Pedestrian detection utilizing two dimensional video image analysis is also known. However, current two dimensional pedestrian detection systems are configured to process the two dimensional image so as to ascertain the presence of a pedestrian. Upon detecting a pedestrian, these systems will identify the location of the detected pedestrian and/or alert the driver. However, without additional three dimensional features, current systems may produce many false positives. Further, current two dimensional pedestrian detection systems do not address the difficulty that a driver may have in visually perceiving a pedestrian. By alerting the driver that visual perception is difficult, the driver may be able to ascertain with greater certainty whether a pedestrian detection alert is a false positive.
Further, current two dimensional pedestrian detection systems do not take into account pedestrian behavior as a factor for generating a clutter value. Though it is known to project the movement of a pedestrian in subsequent video images so as to facilitate the detection of a pedestrian in two dimensional space, current systems do not consider how the movement or location of a pedestrian affects a driver's ability to see the pedestrian.
Accordingly, it remains desirable to have a system and method for alerting the driver in instances where visual detection of a pedestrian is difficult. Further, it remains desirable to have a system and method utilizing two-dimensional video imagery for alerting the driver in instances where visual detection of a pedestrian is difficult. Further, it remains desirable to have a system and method wherein pedestrian behavior is calculated into determining the difficulty of perceiving a pedestrian.
SUMMARY OF THE INVENTION
A pedestrian perception alert system and a method for issuing an alert to a driver are provided. The system and method are configured to issue an alert in real time when a driver's visual detection of a pedestrian is difficult. The pedestrian perception alert system includes a video camera, a processor, and an alert. The video camera is configured to capture two dimensional video images. The alert is configured to issue a warning that it is difficult to visually perceive a pedestrian within the driving environment. The processor is in electrical communication with the camera.
The pedestrian perception alert system further includes a Pedestrian Detection Unit (“PDU”), Global Clutter Analysis Unit (“GCAU”), and Local Pedestrian Clutter Analysis Unit (“LPCAU”). The PDU is configured to analyze the video image to detect a pedestrian. The GCAU is configured to generate a global clutter score of the video image. The global clutter score measures the clutter of the entire video image. The LPCAU is configured to generate a local pedestrian clutter score. The local pedestrian clutter score measures the clutter of each of the pedestrians detected in the video image.
In operation, the PDU detects a pedestrian in the video image, and the processor subsequently initiates both the GCAU and the LPCAU. The processor processes the global clutter score and local pedestrian clutter score so as to generate a pedestrian detection score. The processor is further configured to actuate the alert when the pedestrian detection score is outside of a predetermined threshold.
The pedestrian perception alert system may further include a Saliency Map Generating Unit (“SMGU”). The SMGU is configured to process the video image and extract salient features from the video image. The processor is further configured to actuate the LPCAU so as to process the extracted salient features when generating the local pedestrian clutter score. The local pedestrian clutter score is processed with the global clutter score so as to calculate a pedestrian detection score. The salient features may include pedestrian behavior, such as pedestrian motion. The pedestrian behavior may be further predicated upon the environment surrounding the pedestrian.
The pedestrian perception alert system may further include a Pedestrian Group Analysis Unit (“PGAU”) configured to detect a group of pedestrians and assign a perception difficulty value to the group of pedestrians. The PGAU analyzes individual pedestrian interaction within the group of pedestrians, and the interaction of one group of pedestrians with respect to another group of pedestrians so as to determine the impact the group of pedestrians may have on the driver's ability to visually perceive the group or an individual pedestrian within the group.
A method for issuing an alert in real time when a driver's visual detection of a pedestrian is difficult is also provided. The method includes the steps of providing a video camera, an alert, and a processor. The video camera is configured to capture video images. The alert is configured to issue a warning that a pedestrian within the driving environment is difficult to visually perceive. The processor is in electrical communication with the camera.
The method further includes the steps of providing a Pedestrian Detection Unit (“PDU”), a Global Clutter Analysis Unit (“GCAU”), and a Local Pedestrian Clutter Analysis Unit (“LPCAU”). The PDU is configured to analyze the video camera image to detect a pedestrian. The GCAU is configured to generate a global clutter score. The global clutter score is a measurement of the clutter of the entire video image. The LPCAU is configured to generate a local pedestrian clutter score. The local pedestrian clutter score is a measurement of the clutter of each of the pedestrians detected in the video image. The processor processes the global clutter score and local pedestrian clutter score so as to generate a pedestrian detection score. The processor is further configured to actuate the alert when the pedestrian detection score is outside of a predetermined threshold.
The method further includes the step of providing a Saliency Map Generating Unit (“SMGU”). The SMGU is configured to process the video image and extract salient features from the video image. The processor is further configured to process the extracted salient features with LPCAU so as to generate the local pedestrian clutter score. The local pedestrian clutter score is processed with the global clutter score so as to calculate a pedestrian detection score. The salient features may include pedestrian behavior, such as pedestrian motion. The pedestrian behavior may be further predicated upon the environment surrounding the pedestrian.
The method may further include the step of providing a Pedestrian Group Analysis Unit (“PGAU”) configured to detect a group of pedestrians and assign a perception difficulty value to the group of pedestrians. The PGAU analyzes individual pedestrian interaction within the group of pedestrians, and the interaction of one group of pedestrians with respect to another group of pedestrians so as to determine the impact the group of pedestrians may have on the driver's ability to visually perceive the group or an individual pedestrian within the group.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a perspective view showing the system employed in a natural driving environment;
FIG. 2 is a diagram of the system;
FIG. 3 is a perspective view showing the operation of an embodiment of the GCAU populating a luminance variation matrix;
FIG. 4 is an illustration showing the operation of an embodiment of the PCGU utilizing a pedestrian mask;
FIG. 5 is an illustration of the operation of an embodiment of the PCGU applying a cloth mask;
FIG. 6 is an illustration of the operation of an embodiment of the LPCAU generating a background window and a detected pedestrian window;
FIG. 7 is a chart showing the global clutter score and local pedestrian clutter score for a corresponding driving scene;
FIG. 8 is a diagram of a system showing the input of the SMGU, and PGAU to generate a pedestrian detection score;
FIG. 9 is an example of a saliency map; and
FIG. 10 is a diagram showing the steps of a method for issuing a real-time warning when a driver's visual detection of a pedestrian is difficult.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
With reference first to FIG. 1, a pedestrian perception alert system 10 according to an embodiment of the invention is provided. The pedestrian perception alert system 10 is configured to issue an alert in real time in instances where a driver's visual detection of a pedestrian is difficult. Thus, by alerting a driver that a pedestrian is difficult to visually perceive, the driver may adjust his/her driving behavior. Further, the pedestrian perception alert system 10 may be incorporated with an autonomous control system wherein vehicle movement is further restricted, or, in the alternative, the autonomous control system may be configured to take control of the vehicle in instances where it is difficult for a driver to visually perceive a pedestrian. The pedestrian perception alert system 10 may be further advantageous in that the driver may be able to ascertain whether a pedestrian detection is a false positive.
The pedestrian perception alert system 10 may be integrated into an automotive vehicle 100. The pedestrian perception alert system 10 includes a video camera 12 configured to capture video images. The pedestrian perception alert system 10 further includes an alert 14, and a processor 16. The alert 14 is configured to issue a warning that the pedestrian is within the driving environment, and is visually difficult to perceive. The alert 14 may be disposed within the cabin space of the vehicle, and may be a visual notification such as a light, or an audible signal such as a chime, or a series of chimes. The processor 16 is in electrical communication with the video camera 12 and is configured to process the video image utilizing analysis units, as described below, so as to issue a warning to the driver.
Though FIG. 1 shows the video camera 12 mounted to the underside of a rearview mirror, it should be appreciated that the video camera 12 may be mounted elsewhere. Further, multiple video cameras 12 may be used to provide 360 degree coverage of the natural driving environment. In such an embodiment, it should be appreciated that the processor 16 may be further configured to fuse the video images captured by each video camera 12 so as to build a 360 degree view of the natural driving environment. In one embodiment, the video camera 12 is a high resolution camera configured to capture a 122 degree camera view and record 32 frames per second at 1280×720 resolution, commonly referenced as the DOD GS600 Digital Video Recorder (“DVR”). The video camera 12 may include other features such as a GPS antenna 12a for obtaining geographic location, and a gravity sensor 12b for sensing motion.
With reference also to FIG. 2, an overall diagram showing the operation of the pedestrian perception alert system 10 is provided. The system 10 captures a video image, measures the global clutter of the image, processes the image to detect a pedestrian, utilizes pedestrian contour and color clustering to verify that the detected pedestrian is indeed a pedestrian, and then measures the clutter of the pedestrian. Features such as the pedestrian contour and the color of the clothing may also be used to measure the clutter. The output is a measurement of the difficulty a driver may have in visually perceiving the detected pedestrian within the driving environment. A more detailed description is provided below.
The pedestrian perception alert system 10 further includes a Pedestrian Detection Unit (“PDU”) 18, a Global Clutter Analysis Unit (“GCAU”) 20, and a Local Pedestrian Clutter Analysis Unit (“LPCAU”) 22. The PDU 18, GCAU 20, and LPCAU 22 may be manufactured as firmware with protocol configured to be processed and actuated by the processor 16. The firmware may be a separate unit disposed with other electronics of the vehicle.
The PDU 18 is configured to analyze the video camera 12 image to detect a pedestrian. The PDU 18 may use input such as the geographic location of the vehicle gathered by the GPS antenna 12 a, or motion input gathered by the gravity sensor 12 b to perform pedestrian detection. The processor 16 actuates the PDU 18 wherein the PDU 18 analyzes predetermined frames to determine if a pedestrian is present in the natural driving environment. For instance, the PDU 18 may be configured to identify regions of interests within each frame, wherein the background of the frame is eliminated so as to focus processing and analysis on the regions of interest. The PDU 18 may then apply pedestrian feature matching, to include size, motion and speed, height-width ratio and orientation.
The PDU 18 notifies the processor 16 in the event a pedestrian is present within the natural driving environment. The processor 16 then actuates both the GCAU 20 and the LPCAU 22 upon notification from the PDU 18. The GCAU 20 is configured to generate a global clutter score 24. The global clutter score 24 is a measurement of the clutter of the entire video image. The LPCAU 22 is configured to generate a local pedestrian clutter score 26. The local pedestrian clutter score 26 is a measurement of the clutter of each pedestrian detected in the video image. The processor 16 is further configured to process both the global clutter score 24 and local pedestrian clutter score 26 so as to generate a pedestrian detection score 28. The pedestrian detection score 28 is the difference between the global clutter score 24 and local pedestrian clutter score 26. The pedestrian detection score 28 measures the difficulty of visually seeing a pedestrian based upon both the amount of clutter in the driving environment and the clutter of the detected pedestrian with respect to the clutter in the environment. For use herein, the term clutter refers to a combination of foreground and background in a view that provides distracting details, making it difficult for some individuals to distinguish an object from its background. The processor 16 is further configured to actuate the alert 14 when the pedestrian detection score 28 is outside of a predetermined threshold.
The GCAU 20 measures the edge density, luminance variation, and chrominance variation of the entire video image to calculate the global clutter score 24. The global clutter score 24 may be expressed as follows:
GEC = α·ρ_E + β·σ_L + (1 − α − β)·σ_c,
where ρ_E is the edge density, σ_L is the luminance variation, and σ_c is the chrominance variation; α > 0 and β > 0 are feature weights.
The edge density may be calculated by applying a detector, such as a Canny detector, with a fixed threshold range to detect edges, and comparing the edge density of various frames of the video image having different driving scenarios, illumination, and weather conditions. For example, the lower threshold may be set to 0.11 and the upper threshold may be set to 0.27. To replicate the low pass characteristic of human vision, a 7×7 Gaussian filter is applied to each video frame processed by the Canny detector so as to remove excess high frequency image components to which human vision is not sensitive. It should be appreciated that the dimensions provided herein correspond to the resolution of the video image captured by the DOD GS600 Digital Video Recorder, and may be changed to correspond to the resolution of the video image captured by a different camera. The edge density is calculated as the ratio between the number of edge pixels and the total number of pixels within the frame of the video image.
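For illustration, a minimal sketch of this edge-density computation in Python with OpenCV follows. It assumes the 0.11/0.27 thresholds are normalized against the 8-bit intensity range, since OpenCV's Canny detector expects absolute threshold values; the function name is hypothetical.

```python
import cv2
import numpy as np

def edge_density(frame_bgr, low=0.11, high=0.27):
    """Ratio of edge pixels to total pixels in one video frame (sketch)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # 7x7 Gaussian low-pass filter to mimic human vision's insensitivity
    # to high-frequency image components.
    smoothed = cv2.GaussianBlur(gray, (7, 7), 0)
    # Thresholds are assumed to be normalized; scale to the 8-bit range.
    edges = cv2.Canny(smoothed, low * 255, high * 255)
    return np.count_nonzero(edges) / edges.size
```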
Luminance variation is measured globally and captures the luminance change across the entire video image. For example, the GCAU 20 may include a sliding window 34 and a luminance variation matrix 36, as shown in FIG. 3. The luminance variation matrix 36 is dimensioned the same size as the video frame. When using a DOD GS600 Digital Video Recorder, a 9×9 sliding window 34 is slid across the frame of the video image so as to calculate a standard deviation of luminance values within the sliding window 34 with respect to the same space of the luminance variation matrix 36. The standard deviation for a particular area of the video frame is entered into the corresponding position of the luminance variation matrix 36. The global luminance variation is calculated as the mean value of the populated luminance variation matrix 36.
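A vectorized equivalent of this sliding-window computation (up to border handling) can be written with box filters, as sketched below; the helper name is hypothetical.

```python
import cv2
import numpy as np

def global_luminance_variation(frame_bgr, win=9):
    """Mean of the local (win x win) luminance standard deviations (sketch)."""
    lum = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    # E[X] and E[X^2] over each window via box filters; their difference
    # gives the per-window variance, i.e., the luminance variation matrix.
    mu = cv2.blur(lum, (win, win))
    mu_sq = cv2.blur(lum * lum, (win, win))
    sigma = np.sqrt(np.maximum(mu_sq - mu * mu, 0.0))
    return float(sigma.mean())
```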
The chrominance variation is calculated using two chrominance channels, “a” and “b”. The chrominance variation is calculated by determining the standard deviation for each respective channel. The global chrominance variation may be calculated as follows:
σ_c = √(σ_a² + σ_b²),
where σ_c is the global chrominance variation, σ_a is the chrominance variation of channel “a,” and σ_b is the chrominance variation of channel “b.”
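The patent does not name the color space for the “a” and “b” channels; the sketch below assumes the CIELAB chrominance channels.

```python
import cv2
import numpy as np

def global_chrominance_variation(frame_bgr):
    """sigma_c = sqrt(sigma_a^2 + sigma_b^2) (sketch; assumes CIELAB a/b)."""
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    sigma_a = lab[:, :, 1].astype(np.float64).std()
    sigma_b = lab[:, :, 2].astype(np.float64).std()
    return float(np.sqrt(sigma_a**2 + sigma_b**2))
```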
The global clutter score 24 may be outputted as a weighted sum of the edge density, luminance variation, and chrominance variation. The edge density, luminance variation, and chrominance variation may be evenly weighted, with each weight set to ⅓. The resultant global environmental clutter score may be scaled and normalized to a value between 0 and 1, such that a higher score indicates greater clutter.
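Combining the three helpers sketched above gives the weighted sum; the /128 scaling used to bring the variation terms into [0, 1] is an assumption, since the patent does not state its normalization constants.

```python
import numpy as np

def global_clutter_score(frame_bgr, alpha=1/3, beta=1/3):
    """GEC = alpha*rho_E + beta*sigma_L + (1 - alpha - beta)*sigma_c (sketch)."""
    rho_e = edge_density(frame_bgr)
    sigma_l = global_luminance_variation(frame_bgr) / 128.0   # assumed scaling
    sigma_c = global_chrominance_variation(frame_bgr) / 128.0  # assumed scaling
    gec = alpha * rho_e + beta * sigma_l + (1 - alpha - beta) * sigma_c
    return float(np.clip(gec, 0.0, 1.0))
```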
With reference now to FIG. 4, an illustrative diagram showing the operation of a Pedestrian Contour Generation Unit (“PCGU”) 30 is provided. As discussed above, the LPCAU 22 processes the edge density of the detected pedestrian, edge distribution, local luminance variation, local chrominance variation, mean luminance intensity, and mean chrominance intensity to calculate the local pedestrian clutter score 26. The PCGU 30 is configured to generate a pedestrian mask 32, which may be used to obtain the edge density, edge distribution, local luminance variation, local chrominance variation, mean luminance intensity, and mean chrominance intensity of the detected pedestrian. The pedestrian mask 32, shown as a dashed silhouette of a pedestrian, is a constructed image of the pedestrian based upon features commonly associated with a pedestrian. The pedestrian mask 32 may include the contours of the pedestrian, which are applied to the detected pedestrian so as to verify that the detected pedestrian is indeed an actual pedestrian. It should be appreciated that these features, which may vary based upon the location of the pedestrian within the driving environment and/or the time at which the PCGU 30 is actuated, may be used to generate the pedestrian mask 32 and to refine the pedestrian mask 32 through subsequent video frames so as to ensure the accuracy of the verification process. Because it is continuously refined through iteration, the pedestrian mask 32 is a deformable model 40, as indicated by the quotation marks surrounding the pedestrian mask 32 shown in FIG. 4. The deformable mask is applied around the pedestrian contour 38. Energy minimization may be used to evolve the contour 38. The energy function may be expressed as follows:
E(C) = α∫₀¹ |C′(s)|² ds + β∫₀¹ |C″(s)|² ds − γ∫₀¹ |∇u₀(C(s))|² ds,
where the first two integrals represent the internal energy, which controls the smoothness of the contour 38, and the third integral is the external energy, which evolves the contour 38 toward the object. C′(s) is the tangent of the curve and C″(s) is normal to the curve. The edge detector function may be defined as:
g(∇u₀(x, y)) = 1 / (1 + |G_σ(x, y) ∗ ∇u₀(x, y)|^p),
where G_σ is a Gaussian smoothing filter, ∗ denotes convolution, and ∇u₀ is the image gradient. The generated contour 38 defines the pedestrian mask 32, which may be used by the LPCAU 22 to compute pedestrian clutter features, including local pedestrian luminance variation and local pedestrian chrominance variation.
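As a sketch of this contour-evolution step, scikit-image's snake implementation minimizes the same style of energy (internal smoothness terms plus an image-gradient external term); the elliptical initialization around the detection window and all parameter values below are illustrative assumptions, not the patent's settings.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def refine_pedestrian_contour(gray_image, cx, cy, rx, ry, n_points=200):
    """Evolve a deformable contour around a detected pedestrian (sketch)."""
    s = np.linspace(0, 2 * np.pi, n_points)
    # Elliptical initialization around the detection window, (row, col) order.
    init = np.column_stack([cy + ry * np.sin(s), cx + rx * np.cos(s)])
    smoothed = gaussian(gray_image, sigma=3, preserve_range=False)
    # alpha/beta weight the internal (smoothness) energies; w_edge weights
    # the external energy pulling the contour toward strong gradients.
    return active_contour(smoothed, init, alpha=0.015, beta=10.0,
                          gamma=0.001, w_edge=1.0)
```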
With reference now to FIG. 5, an illustrative diagram showing the operation of an embodiment of the PCGU 30 is provided. The PCGU 30 may be further configured to generate a cloth mask 42. The cloth mask 42 may be used to replicate a human visual attention model by providing a cloth region that is homogeneous in both color and luminance intensity, wherein the cloth region may be compared with the background so as to simulate the human visual attention model. The cloth mask 42 is generated by K-means color-clustering-based cloth region segmentation, which is subsequently applied to the detected pedestrian to segment the cloth region. For instance, k color clusters are generated to minimize the within-cluster distance:
argmin_S Σ_{n=1..k} Σ_{I(x,y)∈S_n} ‖I(x, y) − μ_n‖²,
where S = {S₁, …, S_k} denotes the k clusters, I(x, y) is the chrominance pixel value, and μ_n is the mean value of each cluster. The cloth mask 42 is then formed as the intersection of the pedestrian mask 32, obtained by the active contour 38, and the cloth region derived from the K-means color clustering algorithm. The LPCAU 22 is further configured to process the pedestrian mask 32 and the cloth mask 42 so as to compute the local pedestrian clutter score 26. Accordingly, the local pedestrian clutter score 26 may include features of shapes associated with the pedestrian, which may be affected by the movement of the pedestrian, the location of the pedestrian, and the color of the pedestrian's clothes.
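A minimal sketch of the chrominance clustering with OpenCV's k-means follows; taking the largest cluster as the cloth region, and the boolean `pedestrian_mask` input from the contour step, are illustrative assumptions.

```python
import cv2
import numpy as np

def cloth_mask(frame_bgr, pedestrian_mask, k=3):
    """Segment a cloth region by k-means clustering of chrominance (sketch)."""
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    ab = lab[:, :, 1:3].astype(np.float32)
    samples = ab[pedestrian_mask]              # chrominance pixels I(x, y)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    # cv2.kmeans minimizes the within-cluster squared distance to mu_n.
    _, labels, _ = cv2.kmeans(samples, k, None, criteria, 5,
                              cv2.KMEANS_PP_CENTERS)
    labels = labels.ravel()
    cloth_cluster = np.bincount(labels).argmax()   # assumption: largest cluster
    mask = np.zeros(frame_bgr.shape[:2], dtype=bool)
    mask[pedestrian_mask] = labels == cloth_cluster
    return mask
```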
With reference now to FIG. 6, the LPCAU 22 may be further configured to generate a background window 44 and a detected pedestrian window 46. The background window 44 is a portion of the video image having a predetermined dimension of the environment surrounding the detected pedestrian. The detected pedestrian window 46 is a portion of the video frame dimensioned to capture the image of the detected pedestrian. For example, the background window 44 may be at least twice the area of the detected pedestrian window 46. The LPCAU 22 is further configured to determine the ratio between the number of edge pixels and the total number of pixels within both (1) the detected pedestrian window 46 and (2) the background window 44 with the detected pedestrian window 46 excluded, so as to calculate an edge density for the pedestrian.
The edge density may be calculated in a similar manner as the edge density for the global environment. For instance, the edge density of the background window 44 and of the detected pedestrian window 46 may each be calculated by applying a Canny detector with a fixed threshold range, and the two edge densities may then be compared. The fixed threshold range and the detector may be selected based upon factors such as the dimensions of the detected pedestrian window 46 or the background window 44, the resolution of the video image, the processing capabilities of the processor 16, and the like. For example, when processing a video image taken by the DOD GS600 Digital Video Recorder, the lower threshold may be set to 0.11 and the upper threshold may be set to 0.27. To replicate the low pass characteristic of human vision, a 7×7 Gaussian filter is respectively applied to the detected pedestrian window 46 and the background window 44 processed by the Canny detector so as to remove excess high frequency image components to which human vision is not sensitive.
The LPCAU 22 is configured to calculate an edge distribution of the background window 44 and the detected pedestrian by determining the histogram of edge magnitude binned by the edge orientation for both (1) the detected pedestrian window 46 and (2) the Isolated Background Window, wherein the Isolated Background Window is the background window 44 minus the detected pedestrian window 46. The edge distribution is a feature which may be used to calculate the local pedestrian clutter score 26. The edge distribution is also useful to help verify that the detected pedestrian is in fact a pedestrian.
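The orientation-binned edge histogram can be sketched as below; the nine orientation bins and the Sobel gradients are assumptions (the patent does not fix these details), and the window images are assumed to be grayscale crops of the frame.

```python
import cv2
import numpy as np

def edge_orientation_histogram(gray_window, bins=9):
    """Histogram of edge magnitude binned by edge orientation (sketch)."""
    gx = cv2.Sobel(gray_window, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray_window, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx) % np.pi      # unsigned orientation
    hist, _ = np.histogram(orientation, bins=bins, range=(0.0, np.pi),
                           weights=magnitude)
    return hist / (hist.sum() + 1e-9)             # normalized edge distribution
```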
The LPCAU 22 may be configured to calculate the local luminance variation within the pedestrian mask 32 and also within a region defined by the subtraction of the pedestrian mask 32 from the background window 44 (the “Maskless Background Window”). The LPCAU 22 utilizes a sliding window 34 and a mask luminance variation matrix 36. The mask luminance variation matrix 36 is dimensioned the same size as that of the pedestrian mask 32 so as to calculate the luminance variation of the pedestrian mask 32. When calculating the luminance variation of the pedestrian mask 32, a sliding window 34 is slid across the pedestrian mask 32 so as to calculate a standard deviation of luminance values within the sliding window 34 with respect to the same space of the mask luminance variation matrix 36. The standard deviation for a particular area of the pedestrian mask 32 is entered into the corresponding position of the mask luminance variation matrix 36. The luminance variation of the pedestrian mask 32 is calculated as the mean value of the populated mask luminance variation matrix 36.
Likewise, a sliding window 34 and a Maskless Background Window Luminance (“MBWL”) variation matrix 36 are provided. The MBWL variation matrix 36 is dimensioned the same size as the Maskless Background Window so as to calculate the luminance variation of the Maskless Background Window. When calculating the luminance variation of the Maskless Background Window, the sliding window 34 is slid across the Maskless Background Window so as to calculate a standard deviation of luminance values within the sliding window 34 with respect to the same space of the MBWL variation matrix 36. The standard deviation for a particular area of the Maskless Background Window is entered into the corresponding position of the MBWL variation matrix 36. The luminance variation of the Maskless Background Window is calculated as the mean value of the populated MBWL variation matrix 36.
The LPCAU 22 may be further configured to calculate the local chrominance variation within the pedestrian mask 32 and also within the Maskless Background Window. As with computing the global chrominance variation, the local chrominance variation is calculated using the two chrominance channels, “a” and “b,” for both the pedestrian mask 32 and the Maskless Background Window. The chrominance variation is calculated by determining the standard deviation for each respective channel. The local chrominance variation may be calculated as follows:
σ_c = √(σ_a² + σ_b²),
where σ_c is the local chrominance variation, σ_a is the chrominance variation of channel “a,” and σ_b is the chrominance variation of channel “b.”
The LPCAU 22 may be further configured to calculate the mean luminance intensity within the cloth mask 42 and within a region generated by subtracting the cloth mask 42 from the background window 44 (the “Cloth Maskless Background Region”). The LPCAU 22 may also calculate the mean chrominance intensity within the cloth mask 42 and the Cloth Maskless Background Region. The LPCAU 22 may calculate the local pedestrian clutter using the features described above, that is: (1) the calculated edge density and edge distribution; (2) the local luminance variation of the pedestrian mask 32 and the Maskless Background Window; (3) the local chrominance variation within the pedestrian mask 32 and within the Maskless Background Window; (4) the mean luminance intensity within the cloth mask 42 and of the Cloth Maskless Background Region; and (5) the mean chrominance intensity of the cloth mask 42 and the Cloth Maskless Background Region. For instance, the local pedestrian clutter (“LPC”) score may be calculated by combining the above referenced features in the following formulation:
LPC = 1 − dist(T, B) / dist_max(T, B),
where T is a feature vector of the pedestrian area and B is the corresponding feature vector of the background area, the features being the calculated edge distribution, the local luminance variation of the pedestrian mask 32 and the Maskless Background Window, the local chrominance variation within the pedestrian mask 32 and within the Maskless Background Window, the mean luminance intensity within the cloth mask 42 and of the Cloth Maskless Background Region, and the mean chrominance intensity of the cloth mask 42 and the Cloth Maskless Background Region. dist(T, B) measures the distance between the two vectors, e.g., the Euclidean distance, and is normalized by the maximum attainable distance dist_max(T, B). The local pedestrian clutter score 26 is normalized to a value between 0 and 1, wherein the higher the local pedestrian clutter score 26, the more cluttered the pedestrian is, and thus the more difficult it is for a driver to perceive the pedestrian against the environment.
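A sketch of this score under the assumptions above (unit-scaled features, Euclidean distance, normalization by the maximum attainable distance, which the patent text does not spell out):

```python
import numpy as np

def local_pedestrian_clutter(t_features, b_features):
    """LPC = 1 - dist(T, B) / dist_max (sketch; normalizer is an assumption)."""
    t = np.asarray(t_features, dtype=np.float64)
    b = np.asarray(b_features, dtype=np.float64)
    dist = np.linalg.norm(t - b)          # Euclidean distance between T and B
    dist_max = np.sqrt(t.size)            # max distance for features in [0, 1]
    return float(np.clip(1.0 - dist / dist_max, 0.0, 1.0))
```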
With reference now to FIG. 7, a chart and accompanying views of the driving environment are provided. The chart includes both the global clutter score 24 and the local pedestrian clutter score 26, each of which was computed in accordance with the details provided herein. Images 4 and 5 are of the same environment, with a global clutter score 24 of 0.307. The global clutter score 24 provides a reasonable reference for the global clutter level, although it is not very discriminative when comparing similar driving scenes. The local pedestrian clutter score 26, however, reflects the difficulty of pedestrian perception better than the global clutter score 24. The images indicate that (1) low contrast images tend to have a lower global clutter score 24, such as a night image (Image 1, with a global clutter score 24 of 0.116) and an image with excessive glare and reflections (Image 2, with a global clutter score 24 of 0.220); (2) color saliency is the most important factor that may affect the local pedestrian clutter score 26, e.g., Image 6 has the lowest local pedestrian clutter score 26 (0.527) due to its highly saturated and discriminative pants color compared to the neighborhood area; and (3) local pedestrian clutter could be a better indicator and reference for pedestrian perception difficulty in naturalistic driving scenarios. For example, even though Image 1 has the lowest global clutter score 24 (0.116), it is the image in which the pedestrian, who is in dark clothing, is the most difficult to detect, because of its high local pedestrian clutter score 26 (0.928).
The pedestrian perception alert system 10 processes both the global clutter score 24 and the local pedestrian clutter score 26 so as to calculate a pedestrian detection score 28. The pedestrian detection score 28 may be calculated by simply determining the difference between the two scores, wherein the alert 14 is actuated when the pedestrian detection score 28 is outside of a predetermined threshold, or above a desired value. In another embodiment, the global clutter score 24 or the local pedestrian clutter score 26 is weighted based upon the environment, such that one of the scores factors more heavily into the calculation of the pedestrian detection score 28.
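Both embodiments reduce to a weighted difference plus a threshold test, as sketched below; the default weights and the 0.5 threshold are illustrative placeholders, not values from the patent.

```python
def pedestrian_detection_score(global_clutter, local_clutter,
                               w_global=1.0, w_local=1.0):
    """Weighted difference of the two clutter scores (sketch)."""
    return w_local * local_clutter - w_global * global_clutter

def should_alert(detection_score, threshold=0.5):
    """Actuate the alert when the score falls outside the threshold (sketch;
    the threshold would be calibrated against driver-perception data)."""
    return detection_score > threshold
```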
As stated above, the pedestrian perception alert system 10 includes a PDU 18. The PDU 18 is configured to process two dimensional video to detect a pedestrian. In one embodiment, the PDU 18 is configured to execute a first detection method 48 or a second detection method 50 based upon the probability of a pedestrian appearance within the video image. The first detection method 48 is executed in instances where there is a low chance of pedestrian appearance and the second detection method 50 is executed in instances where there is a high chance of pedestrian appearance.
The PDU 18 may determine a probability of a pedestrian appearance based upon the time of day, geographic location, or traffic scene. Alternatively, the PDU 18 may process a look-up table having pre-calculated or observed statistics regarding the probability of a pedestrian based upon time, geographic location, or traffic scene. For illustrative purposes, the look-up table may indicate that there is a five (5) percent probability of a pedestrian at 3:22 a.m., on December 25th, in Beaverton, Oreg., on a dirt road. Accordingly, as the probability of a pedestrian appearance in the driving scene is relatively low, the PDU 18 executes the first detection method 48.
The first detection method 48 is configured to identify a region of interest within the video image by determining the variation between sequential frames of the video image. The PDU 18 identifies a region of interest in instances where the variation between sequential frames exceeds a predetermined threshold. The first detection method 48 further applies a set of constraints, such as pedestrian size, shape, orientation, height-width ratio and the like, to each of the regions of interest, wherein each region of interest having a requisite number of constraints is labeled as having a pedestrian.
The second detection method 50 is configured to determine regions of interest within the video image by detecting vertical edges within the frame. The PDU 18 identifies a region of interest in instances where a vertical edge has a predetermined characteristic. The second detection method 50 further applies a feature filter, illustratively including, but not limited to, a Histogram of Oriented Gradients detector, to each region of interest, wherein each region of interest having a requisite number of features is labeled as having a pedestrian.
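The two detection paths can be sketched with OpenCV as below. The frame-differencing threshold, the minimum region area, and the height-width ratio constraint are illustrative assumptions; the HOG path uses OpenCV's stock people detector as a stand-in for the feature filter described above.

```python
import cv2

def detect_low_probability(prev_gray, curr_gray, diff_thresh=25, min_area=500):
    """First detection method: regions of interest from frame differencing (sketch)."""
    delta = cv2.absdiff(curr_gray, prev_gray)
    _, motion = cv2.threshold(delta, diff_thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(motion, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    rois = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # Constraint check: region size and height-width ratio of a pedestrian.
        if w * h >= min_area and 1.5 <= h / max(w, 1) <= 4.0:
            rois.append((x, y, w, h))
    return rois

def detect_high_probability(frame_bgr):
    """Second detection method: HOG-based pedestrian detection (sketch)."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    boxes, _ = hog.detectMultiScale(frame_bgr, winStride=(8, 8))
    return [tuple(box) for box in boxes]
```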
With reference now to FIG. 8, the pedestrian perception alert system 10 may include additional units configured to calculate a pedestrian detection score 28. As shown, the pedestrian detection score 28 may be computed using the global clutter score 24, saliency measure, location prior, local pedestrian clutter score 26, pedestrian behavior analysis, and group interaction (each referenced hereafter as a “Factor” and collectively as the “Factors”). The Factors may be processed together by the processor 16 to generate a Probabilistic Learned Model (the “PLM”), which may be further processed so as to generate a pedestrian detection score 28. The PLM stores the Factors over time and calculates the pedestrian detection score 28 based in part upon the learned influence one Factor may have upon another Factor. Thus, the PLM is helpful in refining and providing an accurate pedestrian detection score through learned experiences.
The pedestrian perception alert system 10 may further include a Saliency Map Generating Unit (“SMGU”) 52. The SMGU 52 is configured to process the video image and extract salient features from the video image. The SMGU 52 is directed to replicating the human vision system, wherein the task and target functions of the human vision system are completed between the pre-attention stage and the recognition stage. The SMGU 52 computes and generates a task- and target-independent bottom-up saliency map using saliency computation approaches currently known and used in the art, illustratively including the saliency map shown in FIG. 9. The map shows strongly connected edges of the source image; specifically, regions with highly salient features appear with high intensity. The processor 16 processes the extracted salient features and provides the salient features to the LPCAU 22 so as to generate a local pedestrian clutter score 26. The salient features may include, but are not limited to: (1) edges of the image; and (2) connecting edges of the image.
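One bottom-up approach known in the art is spectral-residual saliency, available in OpenCV's contrib saliency module; the sketch below uses it as a stand-in, since the patent does not commit to a particular saliency algorithm.

```python
import cv2
import numpy as np

def bottom_up_saliency(frame_bgr):
    """Task- and target-independent bottom-up saliency map (sketch).

    Requires the opencv-contrib-python package for cv2.saliency.
    """
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = detector.computeSaliency(frame_bgr)
    if not ok:
        raise RuntimeError("saliency computation failed")
    # High intensity corresponds to highly salient regions, as in FIG. 9.
    return (saliency_map * 255).astype(np.uint8)
```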
The pedestrian perception alert system 10 may be further configured to process pedestrian behavior to calculate the pedestrian detection score 28. Pedestrian behavior may include how pedestrian motion affects the perception difficulty of the driver, and may be further used to verify pedestrian detection. Pedestrian behavior may also be examined in the context of the environment, wherein pedestrian behavior analysis includes the location and status of the appearing pedestrians (standing, walking, running, carrying objects, etc.), with the perceived pedestrian clutter determined by the environment surrounding the pedestrian. For instance, the SMGU 52 may be programmed with the behavior of a pedestrian at an urban crosswalk, or on a sidewalk adjacent to a residential street.
The pedestrian perception alert system 10 may further include a Pedestrian Group Analysis Unit (“PGAU”) 54 configured to detect a group of pedestrians and assign a perception difficulty value to the group of pedestrians. The PGAU 54 analyzes individual pedestrian interaction within the group of pedestrians, and the interaction of one group of pedestrians with respect to another group of pedestrians. For the within-group interaction case, pedestrians located close together within the scene with a similar behavior pattern, e.g., standing/crossing/walking in the same direction, may be grouped by the viewer, so that the clutter score of an individual pedestrian within the group is of limited use in describing the pedestrian perception difficulty. Accordingly, a highly cluttered pedestrian would be much easier to detect if he/she were grouped by the viewer into a group with much more salient pedestrians. The PGAU 54 utilizes the group pedestrians' characteristics combined with individual pedestrian clutter features when judging visual clutter.
With respect to the analysis of the between-group interactions, the PGAU 54 accounts for the fact that the perception of a pedestrian may also be affected by other pedestrians or distracting events/objects existing in the same scene. For example, a moving pedestrian may attract a driver's attention more easily than a static pedestrian, and a dashing vehicle or bicycle may catch the driver's attention immediately. The PGAU 54 may utilize learned behavior of pedestrians in group interactions to calculate the pedestrian detection score 28.
With reference now to FIG. 10, a method for issuing the alert 14 in real time when a driver's visual detection of a pedestrian is difficult is also provided. The method includes the steps of providing a video camera 12, an alert 14, and a processor 16. These steps are referenced in FIG. 10 as 110, 120, and 130, respectively. The video camera 12 is configured to capture video images. The alert 14 is configured to issue a warning that a pedestrian within the driving environment is difficult to visually perceive. The processor 16 is in electrical communication with the camera and processes the video image.
The method further includes detecting a pedestrian in the video image 140, measuring the clutter of the entire video image 150, measuring the clutter of each of the pedestrians detected in the video image 160, calculating a global clutter score 170, and calculating a local pedestrian clutter score 180. The method 100 proceeds to step 190, wherein the global clutter score and local pedestrian clutter score are processed so as to calculate a pedestrian detection score, and in step 200 the method issues a warning when the pedestrian detection score is outside of a predetermined threshold so as to notify the driver that visual perception of a pedestrian is difficult.
The method 100 may utilize the PDU 18, GCAU 20, and LPCAU 22 as described herein so as to detect a pedestrian, measure global clutter and pedestrian clutter, and calculate a global clutter score and a local pedestrian clutter score. The PDU 18 analyzes the video camera 12 image to detect a pedestrian. The GCAU 20 generates the global clutter score 24 which measures the clutter of the entire video image. The LPCAU 22 generates the local pedestrian clutter score 26 which measures the clutter of each of the pedestrians detected in the video image.
Both the GCAU 20 and the LPCAU 22 are initiated when the PDU 18 detects a pedestrian in the video image. The GCAU 20 and the LPCAU 22 may calculate a respective global clutter score 24 and local pedestrian clutter score 26 as described herein. The method proceeds to the step of processing the global clutter score 24 and local pedestrian clutter score 26 so as to generate a pedestrian detection score 28, and actuating the alert 14 when a pedestrian detection score 28 is outside of a predetermined threshold.
The method may further include step 210, generating a pedestrian mask 32. The PCGU 30 may be configured to generate the pedestrian mask 32. The pedestrian mask 32 is a constructed image of the pedestrian based upon features commonly associated with a pedestrian. The pedestrian mask 32 includes the contour of the pedestrian, which is applied to the detected pedestrian so as to verify that the detected pedestrian is indeed an actual pedestrian. It should be appreciated that these features, which may vary based upon the location of the pedestrian within the driving environment and/or the time at which the PCGU 30 is actuated, may be used to generate the pedestrian mask 32 and to refine the pedestrian mask 32 through subsequent video frames so as to ensure the accuracy of the verification process. Because it is continuously refined, the pedestrian mask 32 is a deformable model 40 which is applied around the pedestrian contour 38. Energy minimization may be used to evolve the contour 38. The energy function may be expressed as follows:
E(C) = α∫₀¹ |C′(s)|² ds + β∫₀¹ |C″(s)|² ds − γ∫₀¹ |∇u₀(C(s))|² ds,
where the first two integrals represent the internal energy, which controls the smoothness of the contour 38, and the third integral is the external energy, which evolves the contour 38 toward the object. C′(s) is the tangent of the curve and C″(s) is normal to the curve. The edge detector function may be defined as:
g(∇u₀(x, y)) = 1 / (1 + |G_σ(x, y) ∗ ∇u₀(x, y)|^p),
where G_σ is a Gaussian smoothing filter, ∗ denotes convolution, and ∇u₀ is the image gradient. The generated contour 38 defines the pedestrian mask 32, which may be used by the LPCAU 22 to compute pedestrian clutter features, including local pedestrian luminance variation and local pedestrian chrominance variation.
The method may include utilizing the edge density, luminance variation, and chrominance variation of the video image to calculate the global clutter score 24, and the edge density of the detected pedestrian, edge distribution, local luminance variation, local chrominance variation, mean luminance intensity, and mean chrominance intensity to calculate the local pedestrian clutter score 26. The pedestrian detection score 28 is the difference between the global clutter score 24 and the local pedestrian clutter score 26.
Edge density may be calculated by removing high frequency image components and subsequently determining the ratio between the number of edge pixels and the total number of pixels within the video frame. The method may utilize a sliding window 34 and a luminance variation matrix 36, dimensioned the same size as the video frame, to calculate the luminance variation, wherein the GCAU 20 is configured to slide the sliding window 34 across the entire video frame so as to calculate a standard deviation of luminance values within the sliding window 34. The luminance variation may be calculated by entering the standard deviation for a particular area of the video frame into the corresponding position of the luminance variation matrix 36, and calculating the mean value of the luminance variation matrix 36. The chrominance variation may be calculated using two chrominance channels as described above.
The global clutter score 24 may be outputted as a weighted sum of the edge density, luminance variation, and chrominance variation. The edge density, luminance variation, and chrominance variation may be evenly weighted, with each weight set to ⅓. The resultant global environmental clutter score may be scaled and normalized to a value between 0 and 1, such that a higher score indicates greater clutter.
The LPCAU 22 may be further configured to generate a background window 44 and a detected pedestrian window 46. The background window 44 is a portion of the video image having a predetermined dimension of the environment surrounding the detected pedestrian. The detected pedestrian window 46 is a portion of the video frame dimensioned to capture the image of the detected pedestrian. For example, the background window 44 may be at least twice the area of the detected pedestrian window 46. The LPCAU 22 is further configured to determine the ratio between the number of edge pixels and the total number of pixels within both (1) the detected pedestrian window 46 and (2) the background window 44 with the detected pedestrian window 46 excluded, so as to calculate an edge density for the pedestrian.
The LPCAU 22 is configured to calculate an edge distribution of the background window 44 and the detected pedestrian by determining the histogram of edge magnitude binned by the edge orientation for both (1) the detected pedestrian window 46 and (2) the Isolated Background Window, as defined herein. The edge distribution is a feature which may be used to calculate the local pedestrian clutter score 26. The edge distribution is also useful to help verify that the detected pedestrian is in fact a pedestrian.
The LPCAU 22 may be configured to calculate the local luminance variation within the pedestrian mask 32 and also within a region defined by the subtraction of the pedestrian mask 32 from the background window 44 (the “Maskless Background Window”). The LPCAU 22 utilizes a sliding window 34 and a mask luminance variation matrix 36. The mask luminance variation matrix 36 is dimensioned the same size as that of the pedestrian mask 32 so as to calculate the luminance variation of the pedestrian mask 32. When calculating the luminance variation of the pedestrian mask 32, a sliding window 34 is slid across the pedestrian mask 32 so as to calculate a standard deviation of luminance values within the sliding window 34 with respect to the same space of the mask luminance variation matrix 36. The standard deviation for a particular area of the pedestrian mask 32 is entered into the corresponding position of the mask luminance variation matrix 36. The luminance variation of the pedestrian mask 32 is calculated as the mean value of the populated mask luminance variation matrix 36.
Likewise, a sliding window 34 and a MBWL variation matrix 36 are provided. The MBWL variation matrix 36 is dimensioned the same size as the Maskless Background Window so as to calculate the luminance variation of the Maskless Background Window. When calculating the luminance variation of the Maskless Background Window, the sliding window 34 is slid across the Maskless Background Window so as to calculate a standard deviation of luminance values within the sliding window 34 with respect to the same space of the MBWL variation matrix 36. The standard deviation for a particular area of the Maskless Background Window is entered into the corresponding position of the MBWL variation matrix 36. The luminance variation of the Maskless Background Window is calculated as the mean value of the populated MBWL variation matrix 36.
The LPCAU 22 may be further configured to calculate the local chrominance variation within the pedestrian mask 32 and also within the Maskless Background Window. As with computing the global chrominance variation, the local chrominance variation is calculated using the two chrominance channels, “a” and “b,” for both the pedestrian mask 32 and the Maskless Background Window. The chrominance variation is calculated by determining the standard deviation for each respective channel. The local chrominance variation may be calculated as follows:
σ_c = √(σ_a² + σ_b²),
where σ_c is the local chrominance variation, σ_a is the chrominance variation of channel “a,” and σ_b is the chrominance variation of channel “b.”
The LPCAU 22 may be further configured to calculate the mean luminance intensity within the cloth mask 42 and within a region generated by subtracting the cloth mask 42 from the background window 44 (the “Cloth Maskless Background Region”). The LPCAU 22 may also calculate the mean chrominance intensity within the cloth mask 42 and the Cloth Maskless Background Region. The LPCAU 22 may calculate the local pedestrian clutter using the features described above, that is: (1) the calculated edge distribution; (2) the local luminance variation of the pedestrian mask 32 and the Maskless Background Window; (3) the local chrominance variation within the pedestrian mask 32 and within the Maskless Background Window; (4) the mean luminance intensity within the cloth mask 42 and of the Cloth Maskless Background Region; and (5) the mean chrominance intensity of the cloth mask 42 and the Cloth Maskless Background Region. For instance, the local pedestrian clutter (“LPC”) score may be calculated by combining the above referenced features in the following formulation:
LPC = 1 − dist(T, B) / dist_max(T, B),
where T is a feature vector of the pedestrian area and B is the corresponding feature vector of the background area. dist(T, B) measures the distance between the two vectors, e.g., the Euclidean distance, and is normalized by the maximum attainable distance dist_max(T, B). The local pedestrian clutter score 26 is normalized to a value between 0 and 1, wherein the higher the local pedestrian clutter score 26, the more cluttered the pedestrian is, and thus the more difficult it is for a human to perceive the pedestrian against the environment.
As stated above, the method includes the step of providing a PDU 18 to detect a pedestrian. In one embodiment, the PDU 18 is configured to execute a first detection method 48 or a second detection method 50 based upon the probability of a pedestrian appearance within the video image. The first detection method 48 is executed in instances where there is a low chance of pedestrian appearance and the second detection method 50 is executed in instances where there is a high chance of pedestrian appearance.
The PDU 18 may determine a probability of a pedestrian appearance based upon the time of day, geographic location, or traffic scene. Alternatively, the PDU 18 may process a look-up table having pre-calculated or observed statistics regarding the probability of a pedestrian based upon time, geographic location, or traffic scene. For illustrative purposes, the look-up table may indicate that there is a five (5) percent probability of a pedestrian at 3:22 a.m., on December 25th, in Beaverton, Oreg., on a dirt road. Accordingly, as the probability of a pedestrian appearance in the driving scene is relatively low, the PDU 18 executes the first detection method 48.
The first detection method 48 is configured to identify regions of interest within the video image by determining the variation between sequential frames of the video image. The PDU 18 identifies a region of interest in instances where the variation between sequential frames exceeds a predetermined threshold. The first detection method 48 further applies a set of constraints, such as pedestrian size, shape, orientation, height-width ratio, and the like, to each of the regions of interest, wherein each region of interest satisfying a requisite number of constraints is labeled as having a pedestrian.
The second detection method 50 is configured to determine regions of interest within the video image by detecting vertical edges within the frame. The PDU 18 identifies a region of interest in instances where a vertical edge has a predetermined characteristic. The second detection method 50 further applies a feature filter, illustratively including, but not limited to, a Histogram of Oriented Gradients detector, to each region of interest, wherein each region of interest having a requisite number of features is labeled as having a pedestrian.
The method may include the processing of additional features to calculate a pedestrian detection score 28. As shown, the pedestrian detection score 28 may be computed using the global clutter score 24, saliency measure, location prior, local pedestrian clutter score 26, pedestrian behavior analysis, and group interaction (each referenced hereafter as a “Factor” and collectively as the “Factors”). The Factors may be processed together by the processor 16 to generate a Probabilistic Learned Model (the “PLM”), which may be further processed so as to generate a pedestrian detection score 28. The PLM stores the Factors over time and calculates the pedestrian detection score 28 based in part upon the learned influence one Factor may have upon another Factor. Thus, the PLM is helpful in refining and providing an accurate pedestrian detection score through learned experiences.
The method may further include the step of providing a Saliency Map Generating Unit (“SMGU 52”). The SMGU 52 is configured to process the video image and extract salient features from the video image. The SMGU 52 is directed to replicating the human vision system, wherein task and target functions are completed between the pre-attention stage and the recognition stage. The SMGU 52 computes and generates a task- and target-independent bottom-up saliency map using saliency computation approaches currently known and used in the art, illustratively including the saliency map shown in FIG. 9. The map shows the strong connected edges of the image above it; specifically, regions with highly salient features appear with high intensity. The processor 16 processes the extracted salient features and provides the salient features to the LPCAU 22 so as to generate a local pedestrian clutter score 26. The salient features may include, but are not limited to: (1) edges of the image; and (2) connecting edges of the image.
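By way of example only, the Python sketch below computes one such bottom-up, task-independent saliency map using the spectral-residual approach of Hou and Zhang (2007); this is merely one of the approaches known in the art and is not mandated here, and the working resolution and smoothing parameters are assumptions.

import cv2
import numpy as np

def bottom_up_saliency(gray, work_size=64):
    img = cv2.resize(gray.astype(np.float32), (work_size, work_size))
    spectrum = np.fft.fft2(img)
    log_amplitude = np.log1p(np.abs(spectrum))
    # The spectral residual is the log-amplitude minus its local average.
    residual = log_amplitude - cv2.blur(log_amplitude, (3, 3))
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(spectrum)))) ** 2
    sal = cv2.GaussianBlur(sal.astype(np.float32), (9, 9), 2.5)
    return cv2.normalize(sal, None, 0.0, 1.0, cv2.NORM_MINMAX)  # high intensity = salient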
The method may further include step 220, processing pedestrian behavior to calculate the pedestrian detection score 28. Pedestrian behavior may include how the pedestrian's motion affects the perception difficulty of the driver, and may be further used to verify pedestrian detection. Pedestrian behavior may also be examined in the context of the environment, wherein pedestrian behavior analysis includes analyzing the location and status of the appearing pedestrians (standing, walking, running, carrying objects, and the like), with the perceived pedestrian clutter determined in part by the environment surrounding the pedestrian. For instance, the SMGU 52 may be programmed with the behavior of a pedestrian at an urban crosswalk, or on a sidewalk adjacent to a residential street.
The method may further include step 230, analyzing individual pedestrian interaction within a group of pedestrians, and the interaction of one group of pedestrians with respect to another group of pedestrians, to calculate the pedestrian detection score. A Pedestrian Group Analysis Unit (“PGAU 54”) is configured to detect a group of pedestrians and assign a perception difficulty value to the group of pedestrians. The PGAU 54 analyzes individual pedestrian interaction within the group of pedestrians, and the interaction of one group of pedestrians with respect to another group of pedestrians. For the within-group interaction case, pedestrians located close together within the scene with similar behavior patterns, e.g., standing, crossing, or walking in the same direction, may be grouped by the viewer, so that the clutter score of an individual pedestrian within the group is of limited use in describing that pedestrian's perception difficulty. Accordingly, a highly cluttered pedestrian would be much easier to detect if he or she is grouped by the viewer with much more salient pedestrians. The PGAU 54 utilizes the group's characteristics combined with individual pedestrian clutter features when judging visual clutter.
With respect to the analysis of between-group interactions, the PGAU 54 accounts for the fact that the perception of a pedestrian may also be affected by other pedestrians or by distracting events or objects existing in the same scene. For example, a moving pedestrian may draw the driver's attention more easily than a static pedestrian, and a dashing vehicle or bicycle may catch the driver's attention immediately. The PGAU 54 may utilize learned behavior of pedestrians in group interactions to calculate the pedestrian detection score 28.
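A minimal Python sketch of the within-group adjustment described above follows; the rule that a group is noticed through its most salient (least cluttered) member, and the margin used to cap individual scores, are illustrative assumptions, as the PGAU's computation is not specified in that detail.

def adjust_group_clutter(individual_scores, margin=0.2):
    # The most salient member has the lowest clutter score; once the viewer
    # notices that member, the rest of the group is assumed to be found as well.
    group_floor = min(individual_scores)
    return [min(score, group_floor + margin) for score in individual_scores]

For example, adjust_group_clutter([0.9, 0.3, 0.85]) returns [0.5, 0.3, 0.5]: the highly cluttered pedestrians inherit much of the visibility of their salient neighbor.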
Obviously, many modifications and variations of the present invention are possible in light of the above teachings and may be practiced otherwise than as specifically described while within the scope of the appended claims.

Claims (55)

The invention claimed is:
1. A pedestrian perception alert system configured to issue a warning in real-time when driver visual perception of a pedestrian is difficult, the pedestrian perception alert system comprising:
a video camera configured to capture video image;
an alert for issuing the warning;
a processor in electrical communication with the video camera;
a Pedestrian Detection Unit (“PDU”) configured to analyze the video image to detect the pedestrian;
a Global Clutter Analysis Unit (“GCAU”) configured to generate a global clutter score, the global clutter score measuring the clutter of the entire video image; and
a Local Pedestrian Clutter Analysis Unit (“LPCAU”) configured to generate a local pedestrian clutter score, the local pedestrian clutter score measuring the clutter of each of the pedestrians detected in the video image, wherein when the PDU detects a pedestrian in the video image the processor initiates both the GCAU and the LPCAU, the processor processes the global clutter score and local pedestrian clutter score so as to generate a pedestrian detection score, and the processor further actuates the alert when the pedestrian detection score is outside of a predetermined threshold so as to warn a driver that it is difficult to visually perceive a pedestrian.
2. The pedestrian perception alert system as set forth in claim 1, further including a Pedestrian Contour Generation Unit (“PCGU”) configured to generate a pedestrian mask, the LPCAU further processing the pedestrian mask so as to compute the local pedestrian clutter score.
3. The pedestrian perception alert system as set forth in claim 2, wherein the GCAU processes edge density, luminance variation and chrominance variation of the video image to calculate the global clutter score and the LPCAU processes edge density of the detected pedestrian, edge distribution, local luminance variation, local chrominance variation, mean luminance intensity, and mean chrominance intensity to calculate the local pedestrian clutter score.
4. The pedestrian perception alert system as set forth in claim 3, wherein the pedestrian detection score is the difference between the global clutter score and the local pedestrian clutter score.
5. The pedestrian perception alert system as set forth in claim 3, wherein the GCAU is further configured to remove high frequency image components and subsequently calculating a ratio between a number of edge pixels and a total number of pixels within a video frame so as to calculate the edge density.
6. The pedestrian perception alert system as set forth in claim 3, wherein the GCAU includes a sliding window and a luminance variation matrix dimensioned the same size as the video frame, the GCAU configured to slide the sliding window across the entire video frame so as to calculate a standard deviation of luminance value within the sliding window, wherein the standard deviation for a particular area of the video frame is entered into a corresponding position of the luminance variation matrix, and wherein the luminance variation is calculated as the mean value of the luminance variation matrix.
7. The pedestrian perception alert system as set forth in claim 3, wherein the chrominance variation is calculated using two chrominance channels.
8. The pedestrian perception alert system as set forth in claim 3, wherein the edge density, luminance variation, and chrominance variation are evenly weighted when calculating the global clutter score.
9. The pedestrian perception alert system as set forth in claim 2, wherein the PCGU generates a contour of the detected pedestrian and generates a deformable model which is applied to the contour, wherein the PCGU applies an energy minimization function to further refine the contour so as to generate the pedestrian mask.
10. The pedestrian perception alert system as set forth in claim 9, wherein the PCGU is further configured to segment a cloth region from a background image of the video image so as to further generate a cloth mask.
11. The pedestrian perception alert system as set forth in claim 9, wherein the LPCAU is further configured to generate a background window and a detected pedestrian window, the background window being at least twice the area of the detected pedestrian window, and wherein the background window includes the video image surrounding the detected pedestrian.
12. The pedestrian perception alert system as set forth in claim 11, wherein the LPCAU is further configured to determine the ratio between the number of edge pixels and the total number of pixels within both (1) the detected pedestrian window and (2) the background window absent the detected pedestrian window, so as to calculate an edge density for a pedestrian.
13. The pedestrian perception alert system as set forth in claim 12, wherein the LPCAU is configured to calculate the edge distribution of the background window and the detected pedestrian by determining the histogram of edge magnitude binned by the edge orientation for both (1) the detected pedestrian window and (2) the background window absent the detected pedestrian window.
14. The pedestrian perception alert system as set forth in claim 11, wherein the LPCAU is configured to calculate the local luminance variation within the pedestrian mask and also within a region defined by the subtraction of the pedestrian mask from the background window.
15. The pedestrian perception alert system as set forth in claim 11, wherein the LPCAU calculates the local chrominance variation within the pedestrian mask and also within a region defined by the subtraction of the pedestrian mask from the background window.
16. The pedestrian perception alert system as set forth in claim 11, wherein the LPCAU calculates the mean luminance intensity within the cloth mask and a region generated by subtracting the cloth mask from the background window.
17. The pedestrian perception alert system as set forth in claim 11, wherein the LPCAU is configured to calculate the mean chrominance intensity within the cloth mask and a region generated by subtracting the cloth mask from the background window.
18. The pedestrian perception alert system as set forth in claim 1, wherein the PDU is configured to execute a first detection method or a second detection method based upon the probability of pedestrian appearance within the video image, wherein the first detection method is executed in instances where there is a low chance of pedestrian appearance and the second detection method is executed in instances where there is a high chance of pedestrian appearance.
19. The pedestrian perception alert system as set forth in claim 18, wherein the PDU determines a probability of a pedestrian appearance based upon at least one of the following criteria: time, geographic location, or traffic scene.
20. The pedestrian perception alert system as set forth in claim 19, wherein the first detection method is configured to identify a region of interest within the video image by determining the variation between sequential frames of the video image, and identifies a region of interest in instances where the variation exceeds a predetermined threshold, the first detection method further applying a set of constraints to each of the regions of interest, wherein each region of interest having a requisite number of constraints is labeled as having a pedestrian.
21. The pedestrian perception alert system as set forth in claim 20, wherein the second detection method is configured to determine the region of interest within the video image by detecting vertical edges within the frame, and identifies the region of interest in instances where the vertical edge has a predetermined characteristic, the second detection method further applying a feature filter to each region of interest, wherein each region of interest having a requisite number of features is labeled as having a pedestrian.
22. The pedestrian perception alert system as set forth in claim 1, further including a Saliency Map Generating Unit (“SMGU”), the SMGU being configured to process the video image and extract salient features from the video image, wherein the processor is further configured to actuate the LPCAU, wherein the extracted salient features are processed so as to generate the local pedestrian clutter score.
23. The pedestrian perception alert system as set forth in claim 22, wherein the salient features include pedestrian behavior.
24. The pedestrian perception alert system as set forth in claim 23, wherein the pedestrian behavior is pedestrian motion.
25. The pedestrian perception alert system as set forth in claim 23, wherein the pedestrian behavior is based upon an environment surrounding the pedestrian.
26. The pedestrian perception alert system as set forth in claim 22, further including a Pedestrian Group Analysis Unit (“PGAU”) configured to detect a group of pedestrians and assign a perception difficulty value to the group of pedestrians, wherein the PGAU analyzes individual pedestrian interaction within the group of pedestrians, and the interaction of one group of pedestrians with respect to another group of pedestrians.
27. A method for issuing an alert in real-time when a driver's visual detection of a pedestrian is difficult, the method comprising the steps of:
providing a video camera configured to capture video image;
providing an alert for issuing a warning that the pedestrian in a driving environment is visually difficult to perceive;
providing a processor in electrical communication with the camera;
analyzing the video image to detect a pedestrian;
measuring a clutter of an entire video image and calculating a global clutter score;
measuring the clutter of each pedestrian detected in the video image and calculating a local pedestrian clutter score;
processing the global clutter score and local pedestrian clutter score to calculate a pedestrian detection score; and
issuing a warning when the pedestrian detection score is outside of a predetermined threshold so as to notify the driver that visual perception of a pedestrian is difficult.
28. The method as set forth in claim 27, further including the step of providing a processor, Pedestrian Detection Unit (“PDU”), a Global Clutter Analysis Unit (“GCAU”), and a Local Pedestrian Clutter Analysis Unit (“LPCAU”), the PDU analyzes the video image to detect a pedestrian, the GCAU analyzes the video image to measure the clutter of the entire video image and calculate a global clutter score, and the LPCAU analyzes the detected pedestrians to measure the clutter of the detected pedestrians and calculate a local pedestrian clutter score, the processor processing the global clutter score and local pedestrian clutter score so as to calculate the pedestrian detection score.
29. The method as set forth in claim 28, further including the step of generating a pedestrian mask, the LPCAU processing the pedestrian mask to calculate the local pedestrian clutter score.
30. The method as set forth in claim 29, including a Pedestrian Contour Generation Unit (“PCGU”) configured to generate the pedestrian mask.
31. The method as set forth in claim 30, wherein the GCAU processes edge density, luminance variation and chrominance variation of the video image to calculate the global clutter score and the LPCAU processes edge density of the detected pedestrian, edge distribution, local luminance variation, local chrominance variation, mean luminance intensity, and mean chrominance intensity to calculate the local pedestrian clutter score.
32. The method as set forth in claim 31, wherein the pedestrian detection score is the difference between the global clutter score and the local pedestrian clutter score.
33. The method as set forth in claim 32, wherein the GCAU is further configured to remove high frequency image components and subsequently determine a ratio between a number of edge pixels and a total number of pixels within a video frame so as to calculate the edge density.
34. The method as set forth in claim 33, wherein the GCAU includes a sliding window and a luminance variation matrix dimensioned the same size as the video frame, the GCAU configured to slide the sliding window across the entire video frame so as to calculate a standard deviation of luminance value within the sliding window, the standard deviation for a particular area of the video frame is entered into the corresponding position of the luminance variation matrix, and wherein the luminance variation is calculated as the mean value of the luminance variation matrix.
35. The method as set forth in claim 31, wherein the chrominance variation is calculated using two chrominance channels.
36. The method as set forth in claim 31, wherein the edge density, luminance variation, and chrominance variation are evenly weighted when calculating the global clutter score.
37. The method as set forth in claim 30, wherein the PCGU generates a contour of the detected pedestrian and generates a deformable model which is applied to the contour, wherein the PCGU applies an energy minimization function to further refine the contour so as to generate the pedestrian mask.
38. The method as set forth in claim 37, wherein the PCGU is further configured to segment a cloth region from a background image of the video image so as to further generate a cloth mask.
39. The method as set forth in claim 38, wherein the LPCAU is further configured to generate a background window and a detected pedestrian window, the background window being at least twice the area of the detected pedestrian window, and wherein the background window includes the video image surrounding the detected pedestrian.
40. The method as set forth in claim 39, wherein the LPCAU is further configured to determine the ratio between the number of edge pixels and the total number of pixels within both (1) the detected pedestrian window and (2) the background window absent the detected pedestrian window, so as to calculate an edge density for a pedestrian.
41. The method as set forth in claim 40, wherein the LPCAU is configured to calculate an edge distribution of the background window and the detected pedestrian by determining the histogram of edge magnitude binned by the edge orientation for both (1) the detected pedestrian window and (2) the background window absent the detected pedestrian window.
42. The method as set forth in claim 39, wherein the LPCAU is configured to calculate the local luminance variation within the pedestrian mask and also within a region defined by the subtraction of the pedestrian mask from the background window.
43. The method as set forth in claim 39, wherein the LPCAU is configured to calculate the local chrominance variation within the pedestrian mask and also within a region defined by the subtraction of the pedestrian mask from the background window.
44. The method as set forth in claim 39, wherein the LPCAU is configured to calculate the mean luminance intensity within the cloth mask and a region generated by subtracting the cloth mask from the background window.
45. The method as set forth in claim 39, wherein the LPCAU is configured to calculate the mean chrominance intensity within the cloth mask and a region generated by subtracting the cloth mask from the background window.
46. The method as set forth in claim 27, wherein the PDU is configured to execute a first detection method or a second detection method based upon the probability of a pedestrian appearance within the video image, wherein the first detection method is executed in instances where there is a low chance of pedestrian appearance and the second detection method is executed in instances where there is a high chance of pedestrian appearance.
47. The method as set forth in claim 46, wherein the PDU determines a probability of a pedestrian appearance based upon at least one of time, geographic location, or traffic scene.
48. The method as set forth in claim 47, wherein the first detection method is configured to identify regions of interest within the video image by determining the variation between sequential frames of the video image, and identifies a region of interest in instances where the variation exceeds a predetermined threshold, the first detection method further applying a set of constraints to each of the regions of interest, wherein each region of interest having a requisite number of constraints is labeled as having a pedestrian.
49. The method as set forth in claim 48, wherein the second detection method is configured to determine regions of interest within the video image by detecting vertical edges within the frame, and identifies a region of interest in instances where the vertical edge has a predetermined characteristic, the second detection method further applying a feature filter to each region of interest, wherein each region of interest having a requisite number of features is labeled as having a pedestrian.
50. The method as set forth in claim 27, further including the step of utilizing pedestrian behavior to calculate the pedestrian detection score.
51. The method as set forth in claim 50, including a Saliency Map Generating Unit (“SMGU”), the SMGU being configured to process the video image and extract salient features from the video image, wherein the processor is further configured to process the extracted salient features with the LPCAU so as to generate a local pedestrian clutter score, and wherein the salient features include pedestrian behavior.
52. The method as set forth in claim 51, wherein the pedestrian behavior is pedestrian motion.
53. The method as set forth in claim 51, wherein the pedestrian behavior is based upon an environment surrounding the pedestrian.
54. The method as set forth in claim 27, further including the step of analyzing individual pedestrian interaction within a group of pedestrians, and the interaction of one group of pedestrians with respect to another group of pedestrians to calculate the pedestrian detection score.
55. The method as set forth in claim 54, further including a Pedestrian Group Analysis Unit (“PGAU”) configured to detect the group of pedestrians and assign a perception difficulty value to the group of pedestrians, the PGAU analyzes individual pedestrian interaction within the group of pedestrians, and the interaction of one group of pedestrians with respect to another group of pedestrians.