US20050249382A1 - System and Method for Restricting Access through a Mantrap Portal - Google Patents
- Publication number
- US20050249382A1 (U.S. application Ser. No. 10/908,557)
- Authority
- US
- United States
- Prior art keywords
- mantrap
- zone
- primary
- sensor
- door
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
- G07C9/10—Movable barriers with registering means
- G07C9/15—Movable barriers with registering means with arrangements to prevent the passage of more than one individual at a time
Definitions
- This invention relates to security systems that permit controlled access to a secured area. Specifically, this invention relates to automatic door control in a secured area using a mantrap portal.
- a typical security issue associated with most access-controlled portal security systems is ensuring that when a person is authorized for entry into a secured area, only that person is permitted to enter.
- a mantrap is a secured-portal configuration commonly employed to restrict access to only one authorized person at a time.
- a mantrap portal is a small room with two doors: one door for access to/from an unsecured area (called “landside”); and one door for access to/from a secured area (called “airside”).
- a representative mantrap 100 is shown to provide an access portal between an unsecured landside region 130 and a secured airside region 140 .
- the mantrap 100 has a landside door 120 and an airside door 110 .
- the landside door 120 can be locked in the closed position with landside lock 150
- the airside door 110 can be locked in the closed position with airside lock 160 .
- the airside door 110 is closed and locked with airside lock 160
- the landside door 120 is closed, but not locked by landside lock 150 .
- a person seeking access to the secured area will approach the mantrap 100 represented by person 125 .
- the landside door can be opened while the airside door is locked.
- a request for entry can be made at entry request 155 .
- the entry request can be a card reader, a doorbell, or a biometric input such as a palm or fingerprint reader, or a retina scan.
- the landside door 120 is in the closed position and locked by landside lock 150 .
- the airside lock 160 can be released, and the airside door 110 can be opened.
- the person seeking access can enter the secured area, represented as person 115 .
- the landside lock 150 can be released, to return the mantrap to the normal, unoccupied position.
- the mantrap 100 can operate to permit a person to exit the secured airside region 140 while maintaining a high degree of security.
- a request can be made at exit request 165 , which starts the door locking cycle.
- the landside door 120 is locked by landside lock 150
- the airside door 110 is unlocked by airside lock 160 .
- the person seeking to exit can enter the mantrap, and the airside door 110 can be locked so that the landside door 120 can be unlocked, thereby permitting a person to exit.
- the mantrap configuration operates to control access since the door to the unsecured area can be locked in the closed position while the door to the secured area is open.
- mantrap portals are commonly equipped with IR sensors and pressure mats to prevent piggyback and tailgate violations.
- Piggybacking can occur when an authorized person knowingly or unknowingly provides access through a portal to another traveling in the same direction. If a second, unauthorized person is permitted to enter the secured area with the authorized person, the security is breached.
- Tailgating can occur when an authorized person knowingly or unknowingly provides unauthorized access through a portal to another traveling in the opposite direction. For example, an unauthorized person entering the mantrap from the unsecured area can wait until someone leaves the secured area - and while the door is opened into the mantrap from the secured area, the unauthorized person can enter, thereby breaching security.
- Piggybacking and tailgating can be prevented in a mantrap using door locks controlled by a door controller that has the ability to count the number of people in the mantrap.
- the door to the secured area is only unlocked if there is exactly one authorized person seeking access to the secured area.
- Tailgating is prevented by unlocking the door from the secured area, to permit someone to exit, only if nobody is detected in the mantrap.
- Mantrap portals with enhanced security features such as pressure mats and IR sensors are easily defeated by two people walking close together, or by one person carrying another. Accordingly, there exists a need for a system that can effectively enhance the security of a mantrap portal.
- the present invention provides for improved methods and systems for restricting access to a secured area using a mantrap portal.
- An embodiment of the present invention continuously monitors a primary zone to determine the presence or absence of one person in the primary zone.
- the primary zone is a region of the mantrap having an area less than the area of the entire mantrap, preferably located at a location proximal to the airside door. While the primary zone is monitored, the present invention continuously monitors a secondary zone to determine that no persons are present.
- the secondary zone is a region of the mantrap not including the primary zone.
- An exemplary embodiment of the present invention uses a three-dimensional machine vision sensor to monitor the primary zone and the secondary zone to identify and track detected features that can be associated with people or a person.
- alarm conditions can be generated when unexpected conditions are detected.
- other embodiments of the present invention use a three-dimensional machine vision sensor to monitor the primary zone in combination with one or more presence/absence detectors to monitor the secondary zone.
- another embodiment of the present invention discloses a three-dimensional image analysis so that the extreme extents of the respective primary and secondary zones, and regions not captured by the respective zones, are analyzed for the presence of any people or objects.
- FIG. 1 is a plan view of a mantrap security portal according to the background art
- FIG. 2 is a plan view of a mantrap security portal according to the present invention.
- FIG. 3 is a block diagram of a control system according to the present invention.
- FIG. 4 is a flowchart of the operation of the mantrap security portal according to the present invention.
- FIG. 5 is a perspective view of an embodiment of the present invention.
- FIG. 6 is a flowchart of the method used to detect people or objects according to the exemplary embodiment of the present invention.
- FIG. 7 is a flowchart of the additional image analysis methods used to detect people or objects according to an alternate embodiment of the present invention.
- FIG. 8 is a plan view of a mantrap security portal according to an exemplary embodiment of the present invention.
- FIG. 9 is a block diagram illustrating a coarse segmentation process that identifies coarse people candidates according to an embodiment of the present invention.
- FIG. 10 is a diagram illustrating the coarse segmentation process that identifies coarse people candidates according to an embodiment of the present invention.
- FIG. 11 is a block diagram illustrating a fine segmentation process for validating or discarding coarse people candidates according to an embodiment of the present invention.
- FIG. 12 is a diagram illustrating a fine segmentation process for validating or discarding coarse people candidates according to an embodiment of the present invention.
- FIG. 13 is a diagram illustrating a fine segmentation process for validating or discarding coarse people candidates according to an embodiment of the present invention.
- FIG. 14 is a block diagram illustrating a method used to determine the number of people candidates by confidence level scoring according to an embodiment of the present invention.
- the mantrap 100 is a portal region between an insecure landside region 130 and a secured airside region 140 .
- the mantrap 100 has a landside door 120 for access into and out from the landside region 130 , and an airside door 110 , for access into and out from the airside region 140 .
- An airside door lock 160 permits remote locking of the airside door 110
- a landside door lock 150 permits remote locking of the landside door 120 .
- An access request 155 is shown as a panel for requesting access into the secure airside region 140
- an exit access 165 is shown as a panel for requesting access from the secured airside region 140 into the mantrap 100 .
- a primary zone 210 is established as a region in the mantrap having an area less than the area of the mantrap 100 .
- a primary sensor 230 monitors the primary zone 210 to determine if exactly one person is present in the primary zone.
- the primary zone 210 can be located anywhere within the mantrap 100 , though preferably, the primary zone 210 is located adjacent to the airside door 110 .
- a secondary zone 220 is established as a region within the mantrap 100 , not including the primary zone 210 .
- the secondary zone 220 does not need to include the entire region of the mantrap 100 exclusive of the primary zone 210 , though it is preferred that any region within the mantrap not included in either the primary zone 210 or the secondary zone 220 be too small for a person to occupy.
- a secondary sensor 240 monitors the secondary zone to determine whether or not a person or object exists within the secondary zone 220 .
- a controller 310 of the type conventionally known in the art of access control for security applications is used to control the airside door lock 160 and the landside door lock 150 .
- the controller can be any device that is capable of reading inputs, processing simple logic, and controlling the landside door and airside door.
- the controller may have the capability for performing automatic door control, i.e., opening and closing, in addition to actuation of the respective door locks.
- the controller 310 can be a Programmable Logic Controller (PLC), or a Personal Computer (PC) with the appropriate software instructions.
- the controller 310 is responsive to signals from an entry request 155 and an exit request 165 upon presentation of an appropriate credential by the person seeking access/exit.
- Each of the entry request 155 and exit request 165 being of the type conventionally known in the art of access control for security, including, but not limited to, card readers, keypad terminals, or biometric input stations, such as finger- or palm-print readers, retinal scanners, or voice recognition stations.
- the controller 310 is adapted to receive input signals from the primary sensor 230 and the secondary sensor 240 to actuate the airside door lock 160 and the landside door lock 150 in response to either of the entry request 155 or exit request 165 terminals.
- FIG. 4 depicts a flowchart of the basic operation of the controller 310 according to the present invention.
- the controller initializes the mantrap 100 with the appropriate signals to lock the airside door at step 410 , and unlock the landside door at step 420 .
- the entry request terminal 155 is monitored at step 430 and the exit request terminal 165 is monitored at step 440 . If neither an entry request 430 nor an exit request 440 is made, processing loops continuously.
- At step 450 , the outputs of the primary sensor 230 and the secondary sensor 240 are considered by the controller 310 . If the primary sensor does not output a signal indicating that one person is in the primary zone, or if the secondary sensor does not output a signal indicating that no objects or people are detected in the secondary zone, processing continues by looping in place, as shown by processing path 455 , until both conditions are met.
- When the person seeking access is in the primary zone 210 , shown in FIG. 2 as person 105 , the primary sensor outputs a signal indicating that one person is detected in the primary zone. If there are no people or objects detected in the secondary zone 220 , the secondary sensor outputs a signal indicating that no such people or objects are detected, and processing continues. At this point, the landside door is locked at step 470 and the airside door is unlocked at step 480 , so that the person seeking access can enter the secured airside region, shown as person 115 .
- Processing continues by looping back to step 410 where the airside door is returned to the locked state, and the landside door is unlocked at step 420 .
- For an exit request, processing continues to step 460 , where the signals from the primary sensor 230 and secondary sensor 240 are considered by the controller 310 .
- At step 460 , if the primary sensor does not indicate that no people are present in the primary zone 210 , or if the secondary sensor does not indicate that no people or objects are present in the secondary zone 220 , processing continues by looping in place, as shown by processing path 465 .
- Otherwise, processing continues to step 470 , where the landside door is locked.
- At step 480 , the airside door is unlocked, and the person requesting to exit from the secured airside region can enter the mantrap through the airside door 110 .
- the airside door can be locked at step 410 and the landside door can be unlocked at 420 , so that the person can exit the mantrap through the landside door 120 .
- the entry request terminal 155 can be placed outside the mantrap in the unsecured landside region 130 , and the normal idle state of the mantrap can be configured with both the airside door 110 and the landside door 120 in the locked state. Further, several alarm conditions can be initiated by the controller 310 if the looping path 455 or the looping path 465 are traversed for a specified duration.
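The control loop of FIG. 4 can be sketched as a single per-cycle decision function. This is an illustrative reconstruction, not the patent's controller implementation; the names `mantrap_step`, `primary_count`, and `secondary_clear` are assumptions.

```python
def mantrap_step(state, entry_req, exit_req, primary_count, secondary_clear):
    """One controller cycle: return the next lock state.

    state: dict with 'airside_locked'/'landside_locked' booleans.
    entry_req / exit_req: request-terminal signals (steps 430/440).
    primary_count: number of people the primary sensor reports.
    secondary_clear: secondary sensor reports no people or objects.
    """
    # Idle state (steps 410/420): airside locked, landside unlocked.
    if not (entry_req or exit_req):
        return {"airside_locked": True, "landside_locked": False}

    if entry_req:
        # Step 450: admit only when exactly one person occupies the
        # primary zone and the secondary zone is empty; otherwise loop
        # in place on path 455.
        if primary_count == 1 and secondary_clear:
            return {"airside_locked": False, "landside_locked": True}
        return dict(state)

    # Exit request, step 460: the mantrap must be completely empty
    # before the landside is locked and the airside released;
    # otherwise loop in place on path 465.
    if primary_count == 0 and secondary_clear:
        return {"airside_locked": False, "landside_locked": True}
    return dict(state)
```

Timing out while the function keeps returning the unchanged state would correspond to the alarm conditions raised on paths 455 and 465.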
- the primary sensor 230 and the secondary sensor 240 are each a three-dimensional machine vision sensor described herein with reference to FIG. 5 .
- Each of the primary sensor 230 and the secondary sensor 240 has a 3D image processor, memory, discrete I/O, and a set of stereo cameras 10 , in an integrated unit mounted in the mantrap 100 .
- the primary sensor 230 is mounted in the ceiling above the airside door 110 looking downward and outward towards the primary zone 210 .
- the secondary sensor 240 is mounted in a position so that it can observe the secondary zone 220 .
- the primary sensor 230 and the secondary sensor 240 can be mounted in any number of positions relative to the respective primary and secondary zones.
- the set of cameras 10 is calibrated to provide heights above the ground plane for any point in the field of view. Therefore, when any object enters the field of view, it generates interest points called “features”, the heights of which are measured relative to the ground plane. These points are then clustered in 3D space to provide “objects”. These objects are then tracked in multiple frames to provide “trajectories”.
- the baseline distance between the optical centers of the cameras is 12 mm and the lenses have a focal length of 2.1 mm (150 degree Horizontal Field of View (HFOV)).
- the cameras are mounted approximately 2.2 meters from the ground and have a viewing area that is approximately 2.5 by 2.5 meters.
- the surface normal to the plane of the cameras points downward and outward as shown in FIG. 5 wherein the cameras are angled just enough to view the area just below the mounting point.
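The geometry quoted above (12 mm baseline, 2.1 mm focal length, cameras about 2.2 m above the floor) fixes the expected disparity of a point through d = f·B/Z. A short worked sketch, assuming a hypothetical 6 µm pixel pitch (the text does not give one), shows why a head near the cameras yields a much larger disparity than the floor:

```python
# Stated geometry; the pixel pitch is an assumption for illustration.
BASELINE_MM = 12.0
FOCAL_MM = 2.1
PIXEL_PITCH_MM = 0.006  # assumed 6-micron pixels

def disparity_pixels(depth_mm):
    """Stereo disparity d = f * B / Z, converted from mm to pixels."""
    return FOCAL_MM * BASELINE_MM / depth_mm / PIXEL_PITCH_MM

floor = disparity_pixels(2200.0)           # a point on the ground plane
head = disparity_pixels(2200.0 - 1700.0)   # top of a 1.7 m person
```

Under these assumptions the floor produces roughly a 2-pixel disparity while the head of a standing person produces over 8 pixels, which is what makes height-above-ground measurement practical with such a short baseline.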
- various parameters are set up in the factory.
- the factory setup involves calibration and the computation of the intrinsic parameters for the cameras and the relative orientation between the cameras.
- Calibration involves the solution of several sub-problems, as discussed hereinafter, each of which has several solutions that are well understood by persons having ordinary skill in the art.
- rectification coefficients described hereinafter, must be computed to enable run time image correction.
- Stereo measurements could be made in a coordinate system that is different from the coordinate system of either camera.
- the scene or world coordinates correspond to the points in a viewed scene.
- Camera coordinates (left and right) correspond to the viewer-centered representation of scene points.
- Undistorted image coordinates correspond to scene points projected onto the image plane.
- Distorted image coordinates correspond to points having undergone lens distortion.
- Pixel coordinates correspond to the grid of image samples in the image array.
- one camera is designated to be a “reference camera”, to which the stereo coordinate system is tied.
- An interior orientation process is performed to determine the internal geometry of a camera.
- These parameters, also called the intrinsic parameters, include the following: effective focal length, also called the camera constant; location of the principal point, also called the image center; radial distortion coefficients; and horizontal scale factor, also called the aspect ratio.
- the cameras used in the exemplary embodiment have fixed-focus lenses that cannot be modified; therefore these parameters can be computed and preset at the factory.
- a relative orientation process is also performed to determine the relative position and orientation between two cameras from projections of calibration points in the scene. Again, the cameras are mechanically fixtured such that they stay in alignment and hence these parameters can also be preset at the factory.
- Rectification is the process of resampling stereo images so that epipolar lines correspond to image rows.
- “An epipolar line on one stereo image corresponding to a given point in another stereo image is the perspective projection on the first stereo image of the three-dimensional ray that is the inverse perspective projection of the given point from the other stereo image,” as described in Robert M. Haralick & Linda G. Shapiro, Computer and Robot Vision Vol. II 598 (1993), incorporated herein by reference. If the left and right images are coplanar and the horizontal axes are collinear (no rotation about the optical axis), then the image rows are epipolar lines and stereo correspondences can be found along corresponding rows. These images, referred to as normal image pairs, provide computational advantages because the rectification of normal image pairs need only be performed one time.
- the method for rectifying the images is independent of the representation used for the given pose of the two cameras. It relies on the principle that any perspective projection is a projective transformation. Image planes corresponding to the two cameras are replaced by image planes with the desired geometry (normal image pair) while keeping the geometry of the rays spanned by the points and the projection centers intact. This results in a planar projective transformation. These coefficients can also be computed at the factory.
- the camera images can be corrected for distortion and misalignment either in software or hardware.
- the resulting corrected images have the geometry of a normal image pair, i.e., square pixels, aligned optical planes, aligned axes (rows), and a pinhole camera model.
- An exterior orientation process is also performed during factory setup of the exemplary embodiment.
- the exterior orientation process is needed because 3D points in a viewed scene are only known relative to the camera coordinate system.
- Exterior orientation determines the position and orientation of a camera in an absolute coordinate system.
- An absolute 3D coordinate system is established such that the XY plane corresponds to the ground plane and the origin is chosen to be an arbitrary point on the plane.
- Ground plane calibration is performed at the location of the installation.
- the primary sensor 230 and the secondary sensor 240 are mounted on a plane that is parallel to the floor, and the distance between the respective sensor and the floor is entered.
- calibration targets can be laid out on the floor to compute the relationship between the stereo coordinate system attached to the reference camera and the world or scene coordinate system attached to the ground plane.
- Regions of interest are also set up manually at the location of the installation. This involves capturing the image from the reference camera (camera that the stereo coordinate system is tied to), rectifying it, displaying it and then using a graphics overlay tool to specify the zones to be monitored. Multiple zones can be pre-selected to allow for different run-time algorithms to run in each of the zones. The multiple zones typically include particular 3D spaces of interest. Filtering is performed to eliminate features outside of the zones being monitored, i.e., the primary zone 210 . In alternative embodiments of the invention, automatic setup can be performed by laying out fiducial markings or tape on the floor.
- This method detects features in a 3D scene using primarily boundary points or edges (due to occlusion and reflectance) because the information is most reliable at these points.
- One skilled in the art will appreciate that the following method can be performed by each of the primary sensor 230 and the secondary sensor 240 simultaneously and independently. By the manner in which each of the respective sensors are independently coupled to the controller 310 , it is not necessary for both primary and secondary sensors to communicate directly with each other.
- a set of two dimensional images are provided, e.g., a right image and a left image.
- One of the images is designated the reference image.
- Both of the images are rectified at step 610 .
- Each respective rectification step is performed by applying an image rectification transform that corrects for alignment and lens distortion, resulting in virtually coplanar images. Rectification can be performed by using standard image rectification transforms known in the art.
- the image rectification transform is implemented as a lookup table through which pixels of a raw image are transformed into pixels of a rectified image.
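Lookup-table rectification as described above can be sketched as follows. The function `rectify` and the tiny identity-plus-swap table are illustrative assumptions, with nearest-neighbour sampling standing in for whatever interpolation a production implementation would use.

```python
def rectify(raw, lut):
    """Apply a precomputed rectification lookup table.

    raw: 2-D list of pixel values.
    lut: same-shaped grid where each entry is the (row, col) of the
         raw pixel to sample for that rectified pixel.
    Returns the rectified image.
    """
    out = []
    for lut_row in lut:
        out.append([raw[r][c] for (r, c) in lut_row])
    return out

raw = [[10, 20], [30, 40]]
# Identity LUT except the two top pixels are swapped, standing in for
# a small distortion/alignment correction computed at the factory.
lut = [[(0, 1), (0, 0)], [(1, 0), (1, 1)]]
rectified = rectify(raw, lut)
```

Because the cameras are mechanically fixed, the table is computed once and then applied unchanged at run time, which is what makes the per-frame cost a simple gather.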
- the rectified two-dimensional image points from the reference image (X R , Y R ) are matched to corresponding two-dimensional image points in the non-reference image (X L , Y L ).
- reference image points (X R , Y R ) are matched to non-reference image points (X L , Y L ) along the same row, or epipolar line. Matching can be performed through known techniques in the art, such as in T. Kanade et al, A Stereo Machine for Video-rate Dense Depth Mapping and its New Applications, Proc. IEEE Computer Vision and Pattern Recognition (CVPR), pp. 196-202 (1996), the entire contents of which are incorporated herein by reference.
- a set of disparities D corresponding to the matched image points is computed relative to the reference image points (X R , Y R ), resulting in a disparity map (X R , Y R , D), also called the depth map or the depth image.
- the disparity map contains a corresponding disparity ‘d’ for each reference image point (X R , Y R ). By rectifying the images, each disparity ‘d’ corresponds to a shift in the x-direction.
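The row-wise matching of reference points to non-reference points can be sketched with a toy sum-of-absolute-differences search along the epipolar row. This is a minimal illustration in the spirit of the cited correlation matchers, not the patented method; `row_disparities` and its window and disparity limits are assumptions.

```python
def row_disparities(ref_row, other_row, max_d=3, win=1):
    """For each reference pixel, find the x-shift (disparity) into the
    non-reference row that minimises the SAD over a 1-D window."""
    n = len(ref_row)

    def px(row, i):
        return row[min(max(i, 0), n - 1)]  # clamp at the borders

    disp = []
    for x in range(n):
        best_d, best_cost = 0, float("inf")
        for d in range(max_d + 1):
            cost = sum(abs(px(ref_row, x + k) - px(other_row, x + d + k))
                       for k in range(-win, win + 1))
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp.append(best_d)
    return disp
```

Running it on a bright feature shifted two pixels between the rectified rows recovers a disparity of 2 at the feature, which is exactly the shift-in-x behaviour the rectified geometry guarantees.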
- a three dimensional model of the door scene is generated in 3D world coordinates.
- the three dimensional scene is first generated in 3D camera coordinates (X c , Y c , Z c ) from the disparity map (X R , Y R , D) and intrinsic parameters of the reference camera geometry.
- the 3D camera coordinates (X c , Y c , Z c ) for each image point are then converted into 3D world coordinates (X w , Y w , Z w ) by applying an appropriate coordinate system transform.
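Back-projecting a disparity into 3D camera coordinates and then into world coordinates can be sketched as below, assuming a pinhole model, a hypothetical focal length in pixels and principal point, and a simplified straight-down camera pose in place of the full exterior-orientation transform described above.

```python
F = 350.0   # focal length in pixels (assumed)
B = 12.0    # baseline in mm (from the stated geometry)
H = 2200.0  # camera height above the ground plane, mm

def to_world(x_r, y_r, d, cx=320.0, cy=240.0):
    """(x_r, y_r): reference-image pixel; d: disparity in pixels.

    Returns (X_w, Y_w, Z_w) in mm, with Z_w the height above the floor.
    """
    z_c = F * B / d             # depth along the optical axis
    x_c = (x_r - cx) * z_c / F  # back-project through the pinhole model
    y_c = (y_r - cy) * z_c / F
    # Assumed pose: camera looks straight down, so the world height of
    # a point is simply the camera height minus its depth.
    return (x_c, y_c, H - z_c)
```

With these assumed numbers, a point at the principal point with an 8.4-pixel disparity back-projects to about 1.7 m above the floor, i.e. a head-height feature.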
- the target volume is the volume of space directly above the observed zone.
- the 3D world coordinates of the mantrap scene (X w , Y w , Z w ) that fall outside the 3D world coordinates of the target volume are clipped.
- clipping can be effectively performed by setting the disparity value ‘d’ to zero for each image point (X R , Y R ) whose corresponding 3D world coordinates fall outside the target volume, resulting in a filtered disparity map “filtered (X R , Y R , D)”.
- a disparity value that is equal to zero is considered invalid.
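The clipping step can be sketched as a filter that invalidates (zeroes) any disparity whose back-projected world point lies outside an axis-aligned target volume. `clip_to_volume`, the dict-based disparity map, and the injected `world_of` back-projection are illustrative assumptions.

```python
def clip_to_volume(disparity_map, world_of, volume):
    """Zero out disparities outside the target volume.

    disparity_map: {(x, y): d}, with d == 0 meaning invalid.
    world_of: callable (x, y, d) -> (X_w, Y_w, Z_w).
    volume: ((xmin, xmax), (ymin, ymax), (zmin, zmax)) in world units.
    """
    (x0, x1), (y0, y1), (z0, z1) = volume
    filtered = {}
    for (x, y), d in disparity_map.items():
        wx, wy, wz = world_of(x, y, d)
        inside = x0 <= wx <= x1 and y0 <= wy <= y1 and z0 <= wz <= z1
        # Downstream stages treat a zero disparity as invalid.
        filtered[(x, y)] = d if (inside and d != 0) else 0
    return filtered
```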
- the filtered disparity map is provided as input to a multi-resolution people segmentation process commencing at 660 .
- coarse segmentation is performed for identifying people candidates within the target volume.
- coarse segmentation includes generating a topological profile of the target volume from a low resolution view of the filtered disparity map. Peaks within the topological profile are identified as potential people candidates.
- A particular embodiment for performing coarse segmentation is illustrated in FIGS. 9 and 10 .
- fine segmentation is performed for confirming or discarding people candidates identified during coarse segmentation.
- the filtered disparity map is analyzed within localized areas at full resolution.
- the localized areas correspond to the locations of the people candidates identified during the coarse segmentation process.
- the fine segmentation process attempts to detect head and shoulder profiles within three dimensional volumes generated from the localized areas of the disparity map.
- A particular embodiment for performing fine segmentation is illustrated in FIGS. 11 through 13 .
- FIGS. 9 and 10 are diagrams illustrating a coarse segmentation process that identifies coarse people candidates according to one embodiment.
- FIG. 9 is a flow diagram illustrating a coarse segmentation process that identifies coarse people candidates according to one embodiment. The detected locations of the coarse people candidates resulting from the segmentation process are then forwarded to a fine segmentation process for validation or discard.
- the filtered disparity map is segmented into bins.
- the filtered disparity map 755 includes points (X R , Y R , D) which are segmented into bins 752 , such that each bin contains a set of image points (X BIN , Y BIN ) and their corresponding disparities (D BIN ).
- a low resolution disparity map is generated from calculated mean disparity values of the bins.
- a low resolution disparity map 760 is generated including points (X M , Y M , D M ) where the points (X M , Y M ) correspond to bin locations in the high resolution disparity map 755 and D M corresponds to the mean disparity values d M calculated from those bins.
- a mean disparity value d M for a particular bin can be calculated by generating a histogram of all of the disparities D BIN in the bin having points (X BIN , Y BIN ). Excluding the bin points in which the disparities are equal to zero and thus invalid, a normalized mean disparity value d M is calculated. The normalized mean disparity d M is assigned to a point in the low resolution disparity map for that bin.
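The binning and mean-disparity steps above can be sketched as follows, assuming square bins and using a plain arithmetic mean of the valid (non-zero) disparities in place of the histogram-based normalized mean; `low_res_disparity` is an assumed name.

```python
def low_res_disparity(disp, bin_size):
    """Build the low-resolution disparity map from a high-resolution one.

    disp: 2-D list (rows x cols) of disparities, where 0 is invalid.
    Each output point is the mean of the non-zero disparities in its
    bin, or 0 if the bin contains no valid disparities.
    """
    rows, cols = len(disp), len(disp[0])
    out = []
    for by in range(0, rows, bin_size):
        row = []
        for bx in range(0, cols, bin_size):
            valid = [disp[y][x]
                     for y in range(by, min(by + bin_size, rows))
                     for x in range(bx, min(bx + bin_size, cols))
                     if disp[y][x] != 0]
            row.append(sum(valid) / len(valid) if valid else 0)
        out.append(row)
    return out
```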
- peaks are identified in the topological profile of the low resolution disparity map.
- a peak is identified at a location in the low resolution disparity map having the largest value for mean disparity value d M .
- the extent of the peak is determined by traversing points in every direction, checking the disparity values at each point, and stopping in a direction when the disparity values start to rise. After determining the extent of the first peak, the process repeats for any remaining points in the low resolution map that have not been traversed.
- peak locations are identified at (x M1 , y M1 ) and (x M2 , y M2 ) of the low resolution disparity map 760 having mean disparity values d M1 , d M2 .
- the arrows extending from the peak locations illustrate the paths traversed from the peak locations.
- a watershed algorithm can be implemented for performing the traversal routine.
- pixels in the disparity map having relatively flat 3 × 3 neighborhoods can be determined to be regions that can be considered peak locations.
- each of the peak locations is converted to an approximate head location in the high resolution filtered disparity map.
- peak locations (x M1 , y M1 ) and (x M2 , y M2 ) in the low resolution disparity map 760 are converted into locations (x R1 , y R1 ) and (x R2 , y R2 ) in the high resolution disparity map 755 .
- This conversion can be accomplished by multiplying the peak locations by the number and size of the bins in the corresponding x- or y-direction.
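Peak identification and the conversion back to high-resolution coordinates can be sketched minimally as picking the largest mean disparity (the nearest surface, i.e. a head top) and scaling its bin coordinates by the bin size. The full traversal/watershed routine is elided, and `top_peak` is an assumed name.

```python
def top_peak(low_res, bin_size):
    """Return (x_r, y_r, d_m): the approximate head location in
    high-resolution coordinates for the strongest peak, or None if the
    low-resolution map holds no valid disparities."""
    best = None
    for y, row in enumerate(low_res):
        for x, d in enumerate(row):
            if d > 0 and (best is None or d > best[2]):
                best = (x, y, d)
    if best is None:
        return None
    x_m, y_m, d_m = best
    # Scale bin coordinates up to the high-resolution map.
    return (x_m * bin_size, y_m * bin_size, d_m)
```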
- the locations of the coarse people candidates (e.g., (x R1 , y R1 ) and (x R2 , y R2 )) in the filtered disparity map and the mean disparity values d M1 , d M2 of the corresponding peak locations are forwarded to a fine segmentation process for validating or discarding these locations as people candidates, as in FIG. 11 .
- FIGS. 11, 12 , and 13 are diagrams illustrating a fine segmentation process for validating or discarding coarse people candidates according to one embodiment.
- FIG. 11 is a flow diagram illustrating a fine segmentation process for validating or discarding coarse people candidates according to one embodiment.
- the fine segmentation process obtains more accurate, or fine, locations of the coarse people candidates in the filtered disparity map and then determines whether the coarse people candidates have the characteristic head/shoulder profiles from localized analysis of the high resolution filtered disparity map. Depending on the results, the fine segmentation process either validates or discards the people candidates.
- FIG. 12 is a block diagram of an exemplary head template according to one embodiment.
- the template model 870 includes a head template 875 .
- the head template 875 is a circular model that corresponds to the top view of a head.
- the dimensions of the head template 875 are based on the coarse location of the candidate (e.g., x R1 , y R1 ), the mean disparity value (e.g., d M1 ), and known dimensions of a standard head (e.g. 20 cm in diameter, 10 cm in radius).
- the position of the head is computed in 3D world coordinates (X, Y, Z) from the calculated coarse location and a mean disparity value using the factory data (e.g., intrinsic parameters of camera geometry) and field calibration data (e.g., camera to world coordinate system transform).
- each point within the area of the resulting head template 875 is assigned the mean disparity value (e.g., d M1 ) determined for that candidate. Points outside the head template 875 are assigned an invalid disparity value equal to zero.
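- A minimal sketch of such a template, assuming the head radius has already been converted to pixels (the names and values are illustrative):

```python
import numpy as np

def make_head_template(radius_px, mean_disparity):
    """Circular head template: points inside the circle are assigned the
    candidate's mean disparity; points outside are assigned the invalid
    disparity value of zero."""
    size = 2 * radius_px + 1
    yy, xx = np.mgrid[0:size, 0:size]
    inside = (xx - radius_px) ** 2 + (yy - radius_px) ** 2 <= radius_px ** 2
    return np.where(inside, float(mean_disparity), 0.0)

template = make_head_template(10, 42.0)
```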
- a fine location for the candidate is determined through template matching.
- the template model 870 overlays the filtered disparity map 755 at an initial position corresponding to the coarse head location (e.g., x R1 , y R1 ).
- the disparities of the filtered disparity map 755 that fall within the head template 875 are then subtracted from the mean disparity value for the coarse people candidate (e.g., d M1 ).
- a sum of the absolute values of these differences is then computed as a template score that serves as a relative indication of whether the underlying points of the filtered disparity map correspond to a head.
- Other correlation techniques may also be implemented to generate the template score.
- the template matching is repeated, for example, by repositioning the template 870 such that the center of the head template 875 corresponds to locations about the original coarse location of the candidate (e.g., x R1 , y R1 ).
- a fine location for the candidate (x F1 , y F1 ) is obtained from the position of the head template 875 at which the best template score was obtained.
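- The template-matching search described above can be sketched as follows; the sum-of-absolute-differences score follows the description, while the search window size and border handling are illustrative assumptions:

```python
import numpy as np

def template_score(disparity_map, template, cx, cy):
    """Sum of absolute differences between the disparities under the
    template (centered at cx, cy) and the template's disparity values.
    Lower scores indicate a better match. Assumes the template fits
    entirely inside the map at (cx, cy)."""
    r = template.shape[0] // 2
    patch = disparity_map[cy - r:cy + r + 1, cx - r:cx + r + 1]
    return float(np.abs(patch - template).sum())

def refine_location(disparity_map, template, cx, cy, search=2):
    """Search a small neighborhood about the coarse location and return
    the center with the best (lowest) template score."""
    best = min(
        ((template_score(disparity_map, template, x, y), x, y)
         for y in range(cy - search, cy + search + 1)
         for x in range(cx - search, cx + search + 1)),
        key=lambda t: t[0])
    return best[1], best[2]
```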
- another mean disparity value d F1 is computed from the points of the filtered disparity map within the head template 875 centered at the fine candidate location (x F1 , y F1 ).
- the mean disparity value d F1 can be calculated by generating a histogram of all the disparities of the filtered disparity map that fall within the head template. Excluding the points in which the disparities are equal to zero and thus invalid, the normalized mean disparity value d F1 is calculated.
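- The mean-disparity computation, which excludes the zero-valued (invalid) points as described above, can be sketched as:

```python
import numpy as np

def mean_valid_disparity(disparities):
    """Mean of the disparities under the head template, excluding points
    whose disparity is zero and thus invalid."""
    d = np.asarray(disparities, dtype=float)
    valid = d[d > 0]
    return float(valid.mean()) if valid.size else 0.0
```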
- people candidates are discarded for lack of coverage by analyzing the disparities that fall within the head template, which is fixed at the fine head location. For example, it is known that disparity corresponds to the height of an object. Thus, a histogram of a person's head is expected to have a distribution, or coverage, of disparities that is centered at a particular disparity and tapering downward. If the resulting histogram generated at step 820 does not conform to such a distribution, it is likely that the candidate is not a person, and the candidate is discarded for lack of coverage.
- the process determines whether there are more coarse candidates to process. If so, the process returns to step 800 to analyze the next candidate. Otherwise, the process continues at step 850.
- people candidates having head locations that overlap with head locations of other people candidates are discarded.
- the head locations of all of the people candidates are converted from the filtered disparity map into their corresponding 3D world coordinates. People candidates whose head locations overlap with the head locations of other people candidates result in at least one of the candidates being discarded.
- the candidate corresponding to the shorter head location is discarded, because that candidate likely corresponds to a neck, shoulder, or some object other than a person.
- the one or more resulting fine head locations (e.g., x F1 , y F1 ) of the validated people candidates and the corresponding mean disparity values (e.g., d F1 ) are forwarded for further processing to determine if the number of people in the observed zone can be determined, at step 652 .
- FIG. 14 is a flow diagram illustrating augmenting people candidates by confidence level scoring according to one embodiment.
- the input to the scoring algorithm includes the list of validated people candidates and their locations in the filtered disparity map.
- the input can be a data structure (e.g., array or linked list data structure) in which the size of the data structure corresponds to the number of validated people candidates.
- a confidence score F 1 can be generated at 910 .
- the confidence score F 1 corresponds to a confidence level that the target volume contains only one person.
- the confidence score F 1 can be a value between 0 and 1.
- a confidence score F 2 can be generated at 930 .
- the confidence score F 2 corresponds to a confidence level that the target volume contains two or more persons.
- the confidence score F 2 can be a value between 0 and 1.
- a confidence score F 0 can be generated regardless of the number of validated people candidates.
- the confidence score F 0 corresponds to a confidence level that the target volume contains at least one person.
- the confidence score F 0 can be a value between 0 and 1.
- the confidence scores F 0 , F 1 , and F 2 are each averaged with confidence scores from previous frames, resulting in average confidence scores F 0 AVG , F 1 AVG and F 2 AVG .
- the confidence scores F 0 , F 1 , F 2 are weighted according to weights assigned to each frame. The weights are intended to filter out confidence scores generated from frames giving spurious results.
- the average confidence scores F 0 AVG , F 1 AVG and F 2 AVG are used to determine the number of people present (or absent) in the target volume.
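- The frame-weighted averaging of confidence scores can be sketched as follows; the window size and weighting scheme are illustrative assumptions, since the description does not fix them:

```python
from collections import deque

class ConfidenceAverager:
    """Rolling, frame-weighted average of a per-frame confidence score
    (applied separately to F0, F1, and F2). A weight near zero filters
    out a frame giving spurious results."""

    def __init__(self, window=5):
        self.scores = deque(maxlen=window)
        self.weights = deque(maxlen=window)

    def add(self, score, weight=1.0):
        self.scores.append(score)
        self.weights.append(weight)

    def average(self):
        total = sum(self.weights)
        if total == 0:
            return 0.0
        return sum(s * w for s, w in zip(self.scores, self.weights)) / total
```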
- the primary sensor 230 and the secondary sensor 240 each consider the confidence scores from step 980 to make a determination about the number of people candidates in the respective primary zone 210 and secondary zone 220, and a confidence level of that determination, as shown at decision step 652. If the confidence level is sufficient that such a determination can be made, and the sensor is interfaced to the controller 310 using discrete I/O, a signal can be asserted to the controller 310 to indicate whether no people are present, one person is present, or more than one person is present, at step 672. If the confidence level is not sufficient to make such a determination, a signal is asserted to indicate that the sensor is "not ready," at step 662.
- motion analysis between frames is used for the purpose of deciding whether to assert the "not ready" signal, i.e., whether the respective sensor has an ambiguous result and cannot yet determine the number of people in the observed zone.
- motion detection is performed using an orthographic projection histogram of 3D points on the floor. Each point in the histogram is weighted by the square of its distance from the sensor, so that the closer the point is to the sensor, the less it contributes to the histogram value: a point twice as far away contributes four times as much, resulting in a normalized count. The sum of absolute differences is then computed between the histogram for the current frame and the histogram from several frames earlier, held in a ring buffer.
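- A sketch of this distance-weighted floor histogram and ring-buffer difference, under the assumption that 3D points and the sensor position are available in a common coordinate system (all names are illustrative):

```python
import numpy as np
from collections import deque

def floor_histogram(points_xyz, sensor_xyz, bins, extent):
    """Orthographic projection of 3D points onto the floor plane. Each
    point is weighted by the square of its distance from the sensor, so
    nearer points (which the sensor sees more densely) contribute less,
    giving a normalized count."""
    pts = np.asarray(points_xyz, dtype=float)
    weights = ((pts - np.asarray(sensor_xyz, dtype=float)) ** 2).sum(axis=1)
    hist, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=bins,
                                range=extent, weights=weights)
    return hist

def motion_score(hist, history):
    """Sum of absolute differences between the current floor histogram
    and the one from several frames earlier, held in a ring buffer
    (a deque whose maxlen sets the frame lag)."""
    score = 0.0
    if len(history) == history.maxlen:
        score = float(np.abs(hist - history[0]).sum())
    history.append(hist)
    return score
```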
- the exemplary embodiment of the present invention can be implemented using the CPS-1000 PeopleSensor, available from Cognex Corporation, Natick, Mass., for both the primary sensor 230 and the secondary sensor 240.
- FIG. 8 depicts an exemplary arrangement of a plurality of secondary sensors in an “L” shaped mantrap 105 .
- the primary sensor 230 is mounted to observe the primary zone 210 in front of the airside door 110.
- the secondary zone is split into two regions, each with a secondary sensor.
- the first secondary zone 221 is observed by a first secondary sensor 241 .
- the second secondary zone 222 is observed by a second secondary sensor 242 .
- the first secondary zone 221 can overlap the second secondary zone 222 to ensure complete coverage.
- a plurality of secondary sensors can be adapted to provide complete coverage of a secondary zone of a mantrap that is shaped in an irregular pattern, or where regions of the mantrap secondary zone would be obscured from view of a single secondary sensor due to internal walls and/or partitions.
- additional image analysis can be performed to provide increased levels of security.
- the primary and secondary sensors in the exemplary embodiment analyze a three-dimensional space for features associated with objects or people in the respective zones. As described above, each of the sensors performs volume filtering to consider only those features that are detected in the 3D space above the respective primary zone 210 or secondary zone 220 .
- the additional image analysis of the alternate embodiment will detect a person lying down, or attempting to bring foreign objects into the secure area.
- A flowchart of the operation of the additional image analysis of the alternate embodiment is shown in FIG. 7.
- the three-dimensional space is analyzed according to the methods described above.
- processing continues to step 720 where a comparison of a two-dimensional image is made to a baseline image 725 .
- An initial baseline image is provided during an initial setup configuration.
- a plurality of images of the scene are acquired and statistics about the variation of each pixel are computed. If the variance of the intensity of a pixel is too high, it is added into a mask image so that it is not considered by subsequent processing. For example, a video monitor mounted within the mantrap will appear to be constantly changing appearance, and therefore, can be masked from consideration so that it does not falsely indicate the presence of a person or object in the region during operation.
- the computed statistics can also be used to set threshold levels used to determine changes that are significant and those that are not.
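- The variance-mask construction from setup images can be sketched as follows; the threshold value is an illustrative parameter:

```python
import numpy as np

def build_variance_mask(frames, var_threshold):
    """Per-pixel intensity variance over a stack of setup images. Pixels
    whose variance exceeds the threshold (e.g. a constantly changing
    video monitor) are masked out of later comparisons."""
    stack = np.stack([np.asarray(f, dtype=float) for f in frames])
    variance = stack.var(axis=0)
    return variance > var_threshold  # True = ignore this pixel
```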
- the comparison step 720 compares the current two-dimensional rectified image (from steps 610 and 612 of FIG. 6 ) to the baseline image. If a pixel in the current image significantly differs in value from the baseline, it is noted. These differing points can be clustered together and if the resulting clusters have sufficient size, it would suggest that a foreign object is in the mantrap.
- the clustering can be performed using conventional blob image analysis.
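- A sketch of the baseline comparison and blob clustering, using a simple 4-connected flood fill in place of a particular blob-analysis library; the thresholds are illustrative parameters:

```python
import numpy as np

def detect_foreign_objects(current, baseline, mask, diff_thresh, min_blob_px):
    """Compare the current rectified image to the baseline, cluster the
    significantly differing pixels into 4-connected blobs, and report
    whether any blob is large enough to suggest a foreign object."""
    diff = np.abs(current.astype(float) - baseline.astype(float)) > diff_thresh
    diff &= ~mask  # ignore high-variance (masked) pixels
    seen = np.zeros_like(diff, dtype=bool)
    h, w = diff.shape
    for sy in range(h):
        for sx in range(w):
            if diff[sy, sx] and not seen[sy, sx]:
                # flood-fill one blob and count its pixels
                stack, size = [(sy, sx)], 0
                seen[sy, sx] = True
                while stack:
                    y, x = stack.pop()
                    size += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and diff[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if size >= min_blob_px:
                    return True
    return False
```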
- At step 730, if a significant difference is not detected, processing continues to step 740, where the baseline image is updated so that the comparison step 720 does not become susceptible to gradual changes in appearance.
- the baseline image 725 is linearly combined with the current image compared at step 720. Processing then continues with another cycle of the continuous analysis.
- At step 730, if a significant difference is detected, processing continues to step 735, where each significantly differing pixel increments a timestamp count.
- At step 745, if the timestamp count exceeds a threshold, the baseline pixel is updated at step 740.
- This threshold could be user-settable, allowing the user to decide how fast differences in the appearance of the mantrap get blended into the baseline image. By setting the threshold long enough, the dynamic baseline can be rendered essentially static.
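- The dynamic baseline update of steps 730 through 745 can be sketched as follows; the blend factor and persistence threshold are illustrative, user-settable parameters:

```python
import numpy as np

def update_baseline(baseline, current, diff_mask, counts, blend=0.1,
                    persist_frames=30):
    """Dynamic baseline update: unchanged pixels are blended into the
    baseline immediately; changed pixels are blended only after they
    have differed for persist_frames consecutive frames, so transient
    differences are not absorbed."""
    counts = np.where(diff_mask, counts + 1, 0)  # per-pixel difference count
    update = (~diff_mask) | (counts > persist_frames)
    blended = (1.0 - blend) * baseline + blend * current
    return np.where(update, blended, baseline), counts
```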
- At step 750, a signal is asserted to indicate to the controller that a person or object is detected, and processing continues with another cycle of the continuous analysis.
- When a person seeking entry into the secured region enters the mantrap, the primary zone must be masked out of the image in addition to the regions of high pixel value variance. When someone is exiting the secured region through the mantrap, the entire space (both primary and secondary zones) can be examined to make sure that the area is clear and no one is attempting an ambush.
- the primary sensor 230 and the secondary sensor 240 can alternatively be a single three-dimensional machine vision sensor configured to observe both the primary zone and the secondary zone at the same time, or in rapid succession.
- the secondary sensor 240 is a presence/absence detector, or a series of presence/absence detectors.
- the secondary sensor can be a pressure-sensitive mat that outputs a signal indicating that a person or object is standing or resting on the mat.
- the presence/absence detector can be one or more light beam emitter/detector pairs that outputs a signal indicating that a person or object blocks the light emissions directed from the emitter to the detector.
- the presence/absence detector can be an IR sensor that outputs a signal indicating that motion of a person or object is detected in the secondary zone.
- the secondary sensor according to the present invention can be any combination of various types of presence/absence detectors that can be logically combined to output a signal indicating that a person or object exists in the secondary zone.
- Although ground plane calibration in the exemplary embodiments described herein is performed at the location of installation, persons having ordinary skill in the art should appreciate that ground plane calibration could also be performed in the factory or at alternate locations without departing from the spirit and scope of the invention.
- a single camera can be used to take two or more images from different locations to provide stereo images within the scope of the invention.
- a camera could take separate images from a plurality of locations.
- a plurality of optical components could be arranged to provide a plurality of consecutive views to a stationary camera for use as stereo images according to the invention.
- Such optical components include reflective optical components, for example, mirrors, and refractive optical components, for example, lenses.
- Although exemplary embodiments of the present invention are described in terms of filtering objects having predetermined heights above the ground plane, persons having ordinary skill in the art should appreciate that a stereo vision system according to the present invention could also filter objects at a predetermined distance from any arbitrary plane, such as a wall, without departing from the spirit or scope of the invention.
Abstract
A method and system provides increased levels of security for a mantrap portal by continuously monitoring two zones of the mantrap: a primary zone and a secondary zone. A primary sensor determines that exactly one or zero people are present in the primary zone when requesting access into a secured area. A secondary sensor determines that exactly zero people are present in the secondary zone when access to the secured area is granted. The primary and secondary sensors in combination can detect piggyback events and tailgating events before granting access to a secured area. Further, the primary and secondary sensors in combination can detect the presence of unauthorized persons in a mantrap prior to granting access to the mantrap for exit from a secured area.
Description
- This application is a continuation-in-part of U.S. patent application Ser. No. 10/702,059, entitled "Method and System for Enhanced Portal Security through Stereoscopy," filed Nov. 5, 2003, the contents of which are hereby incorporated by reference.
- This invention relates to security systems that permit controlled access to a secured area. Specifically, this invention relates to automatic door control in a secured area using a mantrap portal.
- A typical security issue associated with most access controlled portal security systems is to ensure that when a person is authorized for entry into a secured area, only that person is permitted to enter. A mantrap secured portal is a configuration of secured portal that is commonly employed to restrict access to only one authorized person at a time.
- A mantrap portal is a small room with two doors: one door for access to/from an unsecured area (called “landside”); and one door for access to/from a secured area (called “airside”). The basic operation of a mantrap for entry into a secured area from an unsecured area can be described with reference to
FIG. 1 . - A
representative mantrap 100 is shown to provide an access portal between an unsecured landside region 130 and a secured airside region 140. The mantrap 100 has a landside door 120 and an airside door 110. The landside door 120 can be locked in the closed position with landside lock 150, and the airside door 110 can be locked in the closed position with airside lock 160. In the normal, unoccupied configuration (not shown), the airside door 110 is closed and locked with airside lock 160, while the landside door 120 is closed, but not locked by landside lock 150. - A person seeking access to the secured area (airside) will approach the
mantrap 100, represented by person 125. The landside door can be opened while the airside door is locked. Once the person seeking access is fully inside the mantrap, as represented by person 105, a request for entry can be made at entry request 155. The entry request can be a card reader, a doorbell, or a biometric input such as a palm or fingerprint reader, or a retina scan. Once entry access is granted, the landside door 120 is in the closed position and locked by landside lock 150. With the landside door 120 locked closed, the airside lock 160 can be released, and the airside door 110 can be opened. The person seeking access can enter the secured area, represented as person 115. Once the airside door 110 is closed and locked by airside lock 160, the landside lock 150 can be released, to return the mantrap to the normal, unoccupied position. - The
mantrap 100 can operate to permit a person to exit the secured airside region 140 while maintaining a high degree of security. A request can be made at exit request 165, which starts the door locking cycle. The landside door 120 is locked by landside lock 150, and the airside door 110 is unlocked by airside lock 160. The person seeking to exit can enter the mantrap, and the airside door 110 can be locked so that the landside door 120 can be unlocked, thereby permitting the person to exit. The mantrap configuration operates to control access since the door to the unsecured area can be locked in the closed position while the door to the secured area is open. - The basic operation of a mantrap portal becomes increasingly complex as security of the portal is enhanced. For example, mantrap portals are commonly equipped with IR sensors and pressure mats to prevent piggyback and tailgate violations.
- Piggybacking can occur when an authorized person knowingly or unknowingly provides access through a portal to another traveling in the same direction. If a second, unauthorized person is permitted to enter the secured area with the authorized person, the security is breached.
- Tailgating can occur when an authorized person knowingly or unknowingly provides unauthorized access through a portal to another traveling in the opposite direction. For example, an unauthorized person entering the mantrap from the unsecured area can wait until someone leaves the secured area, and while the door from the secured area into the mantrap is open, the unauthorized person can enter, thereby breaching security.
- Piggybacking and tailgating can be prevented in a mantrap using door locks controlled by a door controller that has the ability to count the number of people in the mantrap. To prevent piggybacking violations, the door to the secured area is unlocked only if exactly one authorized person is seeking access to the secured area. Tailgating is prevented by unlocking the door from the secured area, to permit someone to exit, only if nobody is detected in the mantrap.
- Mantrap portals with enhanced security, such as pressure mats and IR sensors, are easily defeated by two people walking close together, or by one person carrying the other. Accordingly, there exists a need for a system that can effectively enhance the security of a mantrap portal.
- The present invention provides for improved methods and systems for restricting access to a secured area using a mantrap portal. An embodiment of the present invention continuously monitors a primary zone to determine the presence or absence of one person in the primary zone. The primary zone is a region of the mantrap having an area less than the area of the entire mantrap, preferably located at a location proximal to the airside door. While the primary zone is monitored, the present invention continuously monitors a secondary zone to determine that no persons are present. The secondary zone is a region of the mantrap not including the primary zone. When the primary zone has exactly one or zero people present, and at the same time the secondary zone has exactly zero people present, the mantrap door locking/unlocking cycle can commence to permit access/egress to/from the secured area.
- An exemplary embodiment of the present invention uses a three-dimensional machine vision sensor to monitor the primary zone and the secondary zone to identify and track detected features that can be associated with people or a person. When used in conjunction with a door access control system, alarm conditions can be generated when unexpected conditions are detected.
- Other embodiments of the present invention use a three-dimensional machine vision sensor to monitor the primary zone in combination with one or more presence/absence detectors to monitor the secondary zone.
- Further embodiments disclose methods and systems that perform additional two-dimensional image analysis of regions of the mantrap in combination with a three-dimensional image analysis so that the extreme extents of the respective primary and secondary zones, and regions not captured by the respective primary and secondary zones are analyzed for the presence of any people or objects.
- The present invention is further described in the detailed description which follows, by reference to the noted drawings by way of non-limiting exemplary embodiments, in which like reference numerals represent similar parts throughout the several views of the drawings, and wherein:
-
FIG. 1 is a plan view of a mantrap security portal according to the background art; -
FIG. 2 is a plan view of a mantrap security portal according to the present invention; -
FIG. 3 is a block diagram of a control system according to the present invention; -
FIG. 4 is a flowchart of the operation of the mantrap security portal according to the present invention; -
FIG. 5 is a perspective view of an embodiment of the present invention; -
FIG. 6 is a flowchart of the method used to detect people or objects according to the exemplary embodiment of the present invention; -
FIG. 7 is a flowchart of the additional image analysis methods used to detect people or objects according to an alternate embodiment of the present invention; -
FIG. 8 is a plan view of a mantrap security portal according to an exemplary embodiment of the present invention. -
FIG. 9 is a block diagram illustrating a coarse segmentation process that identifies coarse people candidates according to an embodiment of the present invention; -
FIG. 10 is a diagram illustrating the coarse segmentation process that identifies coarse people candidates according to an embodiment of the present invention; -
FIG. 11 is a block diagram illustrating a fine segmentation process for validating or discarding coarse people candidates according to an embodiment of the present invention; -
FIG. 12 is a diagram illustrating a fine segmentation process for validating or discarding coarse people candidates according to an embodiment of the present invention; -
FIG. 13 is a diagram illustrating a fine segmentation process for validating or discarding coarse people candidates according to an embodiment of the present invention; and -
FIG. 14 is a block diagram illustrating a method used to determine the number of people candidates by confidence level scoring according to an embodiment of the present invention. - Referring to
FIG. 2, in accordance with the present invention, there is provided a mantrap 100 to permit an enhanced level of security. The mantrap 100 is a portal region between an unsecured landside region 130 and a secured airside region 140. The mantrap 100 has a landside door 120 for access into and out from the landside region 130, and an airside door 110 for access into and out from the airside region 140. An airside door lock 160 permits remote locking of the airside door 110, and a landside door lock 150 permits remote locking of the landside door 120. An entry request 155 is shown as a panel for requesting access into the secured airside region 140, and an exit request 165 is shown as a panel for requesting access from the secured airside region 140 into the mantrap 100. - As shown in
FIG. 2, a primary zone 210 is established as a region in the mantrap having an area less than the area of the mantrap 100. A primary sensor 230 monitors the primary zone 210 to determine if exactly one person is present in the primary zone. The primary zone 210 can be located anywhere within the mantrap 100, though preferably, the primary zone 210 is located adjacent to the airside door 110. - A
secondary zone 220 is established as a region within the mantrap 100, not including the primary zone 210. The secondary zone 220 does not need to include the entire region of the mantrap 100 exclusive of the primary zone 210, though it is preferred that any region within the mantrap not inclusive of the primary zone 210 and the secondary zone 220 not be large enough for a person to occupy. - A
secondary sensor 240 monitors the secondary zone to determine whether or not a person or object exists within the secondary zone 220. - Referring to
FIG. 3, a block diagram is shown in accordance with the present invention. A controller 310 of the type conventionally known in the art of access control for security applications is used to control the airside door lock 160 and the landside door lock 150. The controller can be any device that is capable of reading inputs, processing simple logic, and controlling the landside door and airside door. The controller may have the capability for performing automatic door control, i.e., opening and closing, in addition to actuation of the respective door locks. The controller 310 can be a Programmable Logic Controller (PLC), or a Personal Computer (PC) with the appropriate software instructions. - The
controller 310 is responsive to signals from an entry request 155 and an exit request 165 upon presentation of an appropriate credential by the person seeking access/exit. Each of the entry request 155 and exit request 165 is of the type conventionally known in the art of access control for security, including, but not limited to, card readers, keypad terminals, or biometric input stations, such as finger- or palm-print readers, retinal scanners, or voice recognition stations. - The
controller 310 is adapted to receive input signals from the primary sensor 230 and the secondary sensor 240 to actuate the airside door lock 160 and the landside door lock 150 in response to either of the entry request 155 or exit request 165 terminals. FIG. 4 depicts a flowchart of the basic operation of the controller 310 according to the present invention. - Referring to
FIG. 4, the controller initializes the mantrap 100 with the appropriate signals to lock the airside door at step 410, and unlock the landside door at step 420. The entry request terminal 155 is monitored at step 430 and the exit request terminal 165 is monitored at step 440. If neither an entry request 430 nor an exit request 440 is made, processing loops continuously. - Referring to
FIG. 2 in conjunction with FIG. 4, a person seeking access to the secured airside region approaches the mantrap, shown as person 125. Once an entry request is made, processing continues to step 450 where the outputs of the primary sensor 230 and the secondary sensor 240 are considered by the controller 310. If the primary sensor does not output a signal indicating that one person is in the primary zone, or if the secondary sensor does not output a signal indicating that no objects or people are detected in the secondary zone, processing continues by looping in place, as shown by processing path 455, until both conditions are met. - When the person seeking access is in the
primary zone 210, shown in FIG. 2 as person 105, the primary sensor outputs a signal indicating that one person is detected in the primary zone. If there are no people or objects detected in the secondary zone 220, the secondary sensor outputs a signal indicating that no such people or objects are detected, and processing continues. At this point, the landside door is locked at step 470 and the airside door is unlocked at step 480, so that the person seeking access can enter the secured airside region, shown as person 115. - Processing continues by looping back to step 410 where the airside door is returned to the locked state, and the landside door is unlocked at
step 420. - If an exit request is detected at
step 440, processing continues to step 460, where the signals from the primary sensor 230 and secondary sensor 240 are considered by the controller 310. At step 460, if the primary sensor does not indicate that no people are present in the primary zone 210 or if the secondary sensor does not indicate that no people or objects are present in the secondary zone 220, processing continues by looping in place, as shown by processing path 465. - When both the primary sensor detects that zero people are present in the
primary zone 210, and the secondary sensor detects that no people or objects are present in the secondary zone, processing continues to step 470 where the landside door is locked. When the airside door is unlocked at step 480, the person requesting to exit from the secured airside region can enter the mantrap through the airside door 110. At that point, the airside door can be locked at step 410 and the landside door can be unlocked at step 420, so that the person can exit the mantrap through the landside door 120. - One skilled in the art of controlling access to a secured area using a conventional door control system will appreciate that the basic operation of the
mantrap 100 can be modified in various ways without departing from the scope and spirit of the present invention. For example, the entry request terminal 155 can be placed outside the mantrap in the unsecured landside region 130, and the normal idle state of the mantrap can be configured with both the airside door 110 and the landside door 120 in the locked state. Further, several alarm conditions can be initiated by the controller 310 if the looping path 455 or the looping path 465 is traversed for a specified duration. - In an exemplary embodiment of the present invention, the
primary sensor 230 and the secondary sensor 240 are each a three-dimensional machine vision sensor described herein with reference to FIG. 5. Each of the primary sensor 230 and the secondary sensor 240 has a 3D image processor, memory, discrete I/O, and a set of stereo cameras 10, in an integrated unit mounted in the mantrap 100. The primary sensor 230 is mounted in the ceiling above the airside door 110 looking downward and outward towards the primary zone 210. The secondary sensor 240 is mounted in a position so that it can observe the secondary zone 220. One skilled in the art will appreciate that the primary sensor 230 and the secondary sensor 240 can be mounted in any number of positions relative to the respective primary and secondary zones. - In each of the sensors, the set of
cameras 10 is calibrated to provide heights above the ground plane for any point in the field of view. Therefore, when any object enters the field of view, it generates interest points called "features," the heights of which are measured relative to the ground plane. These points are then clustered in 3D space to provide "objects." These objects are then tracked in multiple frames to provide "trajectories." - In an exemplary system, the baseline distance between the optical centers of the cameras is 12 mm and the lenses have a focal length of 2.1 mm (150 degree Horizontal Field of View (HFOV)). The cameras are mounted approximately 2.2 meters from the ground and have a viewing area that is approximately 2.5 by 2.5 meters. The surface normal to the plane of the cameras points downward and outward as shown in
FIG. 5, wherein the cameras are angled just enough to view the area just below the mounting point. - In the exemplary embodiment of the present invention, various parameters are set up in the factory. The factory setup involves calibration and the computation of the intrinsic parameters for the cameras and the relative orientation between the cameras. Calibration involves the solution of several sub-problems, as discussed hereinafter, each of which has several solutions that are well understood by persons having ordinary skill in the art. Further, rectification coefficients, described hereinafter, must be computed to enable run-time image correction.
- Stereo measurements could be made in a coordinate system that is different from the coordinate systems of either camera. For example, the scene or world coordinates correspond to the points in a viewed scene. Camera coordinates (left and right) correspond to the viewer-centered representation of scene points. Undistorted image coordinates correspond to scene points projected onto the image plane. Distorted image coordinates correspond to points having undergone lens distortion. Pixel coordinates correspond to the grid of image samples in the image array.
- In the exemplary embodiment, one camera is designated to be a "reference camera", to which the stereo coordinate system is tied. An interior orientation process is performed to determine the internal geometry of a camera. These parameters, also called the intrinsic parameters, include the following: effective focal length, also called the camera constant; location of the principal point, also called the image center; radial distortion coefficients; and horizontal scale factor, also called the aspect ratio. The cameras used in the exemplary embodiment have fixed-focus lenses that cannot be modified; therefore, these parameters can be computed and preset at the factory.
- A relative orientation process is also performed to determine the relative position and orientation between two cameras from projections of calibration points in the scene. Again, the cameras are mechanically fixtured such that they stay in alignment and hence these parameters can also be preset at the factory.
- A rectification process, closely associated with the relative orientation, is also performed. Rectification is the process of resampling stereo images so that epipolar lines correspond to image rows. "An epipolar line on one stereo image corresponding to a given point in another stereo image is the perspective projection on the first stereo image of the three-dimensional ray that is the inverse perspective projection of the given point from the other stereo image," as described in Robert M. Haralick & Linda G. Shapiro, Computer and Robot Vision Vol. II 598 (1993), incorporated herein by reference. If the left and right images are coplanar and the horizontal axes are collinear (no rotation about the optical axis), then the image rows are epipolar lines and stereo correspondences can be found along corresponding rows. These images, referred to as normal image pairs, provide computational advantages because the rectification of normal image pairs need only be performed one time.
- The method for rectifying the images is independent of the representation used for the given pose of the two cameras. It relies on the principle that any perspective projection is a projective projection. Image planes corresponding to the two cameras are replaced by image planes with the desired geometry (normal image pair) while keeping the geometry of the rays spanned by the points and the projection centers intact. This results in a planar projective transformation. These coefficients can also be computed at the factory.
- Given the parameters computed in interior orientation, relative orientation, and rectification, the camera images can be corrected for distortion and misalignment either in software or hardware. The resulting corrected images have the geometry of a normal image pair, i.e., square pixels, aligned optical planes, aligned axes (rows), and a pinhole camera model.
- An exterior orientation process is also performed during factory setup of the exemplary embodiment. The exterior orientation process is needed because 3D points in a viewed scene are known only relative to the camera coordinate system. Exterior orientation determines the position and orientation of a camera in an absolute coordinate system. An absolute 3D coordinate system is established such that the XY plane corresponds to the ground plane and the origin is chosen to be an arbitrary point on the plane.
- Ground plane calibration is performed at the location of the installation. In an embodiment, the
primary sensor 230 and the secondary sensor 240 are mounted on a plane that is parallel to the floor, and the distance between the respective sensor and the floor is entered. Alternatively, calibration targets can be laid out on the floor to compute the relationship between the stereo coordinate system attached to the reference camera and the world or scene coordinate system attached to the ground plane. - Regions of interest are also set up manually at the location of the installation. This involves capturing the image from the reference camera (the camera to which the stereo coordinate system is tied), rectifying it, displaying it, and then using a graphics overlay tool to specify the zones to be monitored. Multiple zones can be pre-selected to allow for different run-time algorithms to run in each of the zones. The multiple zones typically include particular 3D spaces of interest. Filtering is performed to eliminate features outside of the zones being monitored, i.e., the
primary zone 210. In alternative embodiments of the invention, automatic setup can be performed by laying out fiducial markings or tape on the floor. - While there are several methods to perform stereo vision to monitor each of the
primary zone 210 and the secondary zone 220 according to the present invention, one such method is outlined below with reference to FIG. 6. This method detects features in a 3D scene using primarily boundary points or edges (due to occlusion and reflectance) because the information is most reliable only at these points. One skilled in the art will appreciate that the following method can be performed by each of the primary sensor 230 and the secondary sensor 240 simultaneously and independently. Because each of the respective sensors is independently coupled to the controller 310, it is not necessary for the primary and secondary sensors to communicate directly with each other. - Referring to
FIG. 6, a set of two-dimensional images is provided, e.g., a right image and a left image. One of the images is designated the reference image. Both of the images are rectified at step 610. Each respective rectification step is performed by applying an image rectification transform that corrects for alignment and lens distortion, resulting in virtually coplanar images. Rectification can be performed by using standard image rectification transforms known in the art. In an exemplary embodiment, the image rectification transform is implemented as a lookup table through which pixels of a raw image are transformed into pixels of a rectified image. - At 620, the rectified two-dimensional image points from the reference image (XR, YR) are matched to corresponding two-dimensional image points in the non-reference image (XL, YL). By first rectifying the images, reference image points (XR, YR) are matched to non-reference image points (XL, YL) along the same row, or epipolar line. Matching can be performed through techniques known in the art, such as in T. Kanade et al., A Stereo Machine for Video-rate Dense Depth Mapping and its New Applications, Proc. IEEE Computer Vision and Pattern Recognition (CVPR), pp. 196-202 (1996), the entire contents of which are incorporated herein by reference.
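By way of illustration only, the row-wise matching at step 620 can be sketched as follows. This is a minimal one-dimensional block-matching sketch, not the Kanade technique cited above; the window size, search range, and sum-of-absolute-differences cost are assumptions for illustration:

```python
import numpy as np

def match_row(ref_row, other_row, x, window=3, max_disp=16):
    """Find the disparity for reference-image pixel x by scanning the same
    row of the other image. Rectification guarantees the match lies on the
    same row (epipolar line), so the search is purely horizontal."""
    half = window // 2
    if x - half < 0 or x + half >= len(ref_row):
        return 0  # zero disparity marks an invalid match, as in the text
    patch = ref_row[x - half:x + half + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        xo = x + d  # candidate position in the other image's row
        if xo + half >= len(other_row):
            break
        cost = np.abs(other_row[xo - half:xo + half + 1] - patch).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

In practice a matcher would also validate the best cost against a threshold; this sketch simply returns the lowest-cost shift.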
- At 630, a set of disparities D corresponding to the matched image points is computed relative to the reference image points (XR, YR), resulting in a disparity map (XR, YR, D), also called the depth map or the depth image. The disparity map contains a corresponding disparity ‘d’ for each reference image point (XR, YR). By rectifying the images, each disparity ‘d’ corresponds to a shift in the x-direction.
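Because a disparity in a rectified normal image pair corresponds inversely to range, depth can be recovered from the disparity map using the stereo parameters given above (12 mm baseline, 2.1 mm focal length). The pixel pitch below (6 µm) is an assumed sensor parameter, not a value given in the text:

```python
def disparity_to_depth(d_pixels, baseline_m=0.012, focal_m=0.0021,
                       pixel_pitch_m=6e-6):
    """Depth along the optical axis from a disparity in pixels, using
    Z = f * B / d. A zero (or negative) disparity is invalid and yields None."""
    if d_pixels <= 0:
        return None
    return (focal_m * baseline_m) / (d_pixels * pixel_pitch_m)
```

Under these assumed parameters, a two-pixel disparity corresponds to a range of about 2.1 meters, consistent with the approximately 2.2 meter mounting height described above.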
- At 640, a three dimensional model of the door scene is generated in 3D world coordinates. In one embodiment, the three dimensional scene is first generated in 3D camera coordinates (Xc, Yc, Zc) from the disparity map (XR, YR, D) and intrinsic parameters of the reference camera geometry. The 3D camera coordinates (Xc, Yc, Zc) for each image point are then converted into 3D world coordinates (Xw, Yw, Zw) by applying an appropriate coordinate system transform.
- At 650, the target volume, i.e., the volume of space directly above the observed zone, can be dynamically adjusted, and image points outside the target volume are clipped. The 3D world coordinates of the mantrap scene (Xw, Yw, Zw) that fall outside the 3D world coordinates of the target volume are clipped. In a particular embodiment, clipping can be effectively performed by setting the disparity value 'd' to zero for each image point (XR, YR) whose corresponding 3D world coordinates fall outside the target volume, resulting in a filtered disparity map "filtered (XR, YR, D)". A disparity value that is equal to zero is considered invalid. The filtered disparity map is provided as input to a multi-resolution people segmentation process commencing at 660.
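The clipping at step 650 can be sketched as follows, using an axis-aligned box as an illustrative target volume (the actual volume geometry is configurable):

```python
import numpy as np

def clip_to_target_volume(xw, yw, zw, disparity, volume):
    """Zero out disparities whose 3D world coordinates fall outside the
    target volume; a zero disparity is treated as invalid by later stages.
    `volume` is ((xmin, xmax), (ymin, ymax), (zmin, zmax)) in world units."""
    (xmin, xmax), (ymin, ymax), (zmin, zmax) = volume
    inside = ((xw >= xmin) & (xw <= xmax) &
              (yw >= ymin) & (yw <= ymax) &
              (zw >= zmin) & (zw <= zmax))
    return np.where(inside, disparity, 0)
```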
- At 660, coarse segmentation is performed for identifying people candidates within the target volume. According to one embodiment, coarse segmentation includes generating a topological profile of the target volume from a low resolution view of the filtered disparity map. Peaks within the topological profile are identified as potential people candidates. A particular embodiment for performing coarse segmentation is illustrated in
FIGS. 9 and 10. - At 670, fine segmentation is performed for confirming or discarding people candidates identified during coarse segmentation. According to one embodiment, the filtered disparity map is analyzed within localized areas at full resolution. The localized areas correspond to the locations of the people candidates identified during the coarse segmentation process. In particular, the fine segmentation process attempts to detect head and shoulder profiles within three-dimensional volumes generated from the localized areas of the disparity map. A particular embodiment for performing fine segmentation is illustrated in
FIGS. 11 through 13. - Coarse Segmentation of People Candidates
-
FIGS. 9 and 10 are diagrams illustrating a coarse segmentation process that identifies coarse people candidates according to one embodiment. In particular, FIG. 9 is a flow diagram illustrating a coarse segmentation process that identifies coarse people candidates according to one embodiment. The detected locations of the coarse people candidates resulting from the segmentation process are then forwarded to a fine segmentation process for validation or discard. - At 700, the filtered disparity map is segmented into bins. For example, in
FIG. 10, the filtered disparity map 755 includes points (XR, YR, D), which are segmented into bins 752 such that each bin contains a set of image points (XBIN, YBIN) and their corresponding disparities (DBIN). - At 701 of
FIG. 9, a low resolution disparity map is generated from calculated mean disparity values of the bins. For example, in FIG. 10, a low resolution disparity map 760 is generated including points (XM, YM, DM), where the points (XM, YM) correspond to bin locations in the high resolution disparity map 755 and DM corresponds to the mean disparity values dM calculated from those bins. - In a particular embodiment, a mean disparity value dM for a particular bin can be calculated by generating a histogram of all of the disparities DBIN in the bin having points (XBIN, YBIN). Excluding the bin points in which the disparities are equal to zero and thus invalid, a normalized mean disparity value dM is calculated. The normalized mean disparity dM is assigned to a point in the low resolution disparity map for that bin.
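The binning at steps 700 and 701 can be sketched as follows; the bin size is an assumption, and invalid (zero) disparities are excluded from each bin's mean as described:

```python
import numpy as np

def low_res_disparity(disp, bin_size=4):
    """Collapse a high resolution disparity map into bins, assigning each
    bin the mean of its nonzero (valid) disparities. Bins with no valid
    disparities stay at zero (invalid)."""
    h, w = disp.shape
    out = np.zeros((h // bin_size, w // bin_size))
    for by in range(out.shape[0]):
        for bx in range(out.shape[1]):
            block = disp[by * bin_size:(by + 1) * bin_size,
                         bx * bin_size:(bx + 1) * bin_size]
            valid = block[block > 0]
            if valid.size:
                out[by, bx] = valid.mean()
    return out
```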
- At 702 of
FIG. 9, peaks are identified in the topological profile of the low resolution disparity map. In a particular embodiment, a peak is identified at a location in the low resolution disparity map having the largest mean disparity value dM. The extent of the peak is determined by traversing points in every direction, checking the disparity values at each point, and stopping in a direction when the disparity values start to rise. After determining the extent of the first peak, the process repeats for any remaining points in the low resolution map that have not been traversed. - For example, in
FIG. 10, peak locations are identified at (xM1, yM1) and (xM2, yM2) of the low resolution disparity map 760 having mean disparity values dM1, dM2. The arrows extending from the peak locations illustrate the paths traversed from the peak locations. A watershed algorithm can be implemented for performing the traversal routine.
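As a simplified stand-in for the traversal or watershed routine described above, peak extraction can be sketched as a greedy maximum search with neighborhood suppression; the 3×3 suppression radius and minimum-disparity cutoff are assumptions for illustration:

```python
import numpy as np

def find_peaks(low_res, min_disp=1.0):
    """Repeatedly take the largest remaining mean disparity as a peak,
    then zero out its immediate neighborhood so adjacent bins are not
    reported as separate candidates."""
    work = low_res.copy()
    peaks = []
    while work.max() >= min_disp:
        y, x = np.unravel_index(np.argmax(work), work.shape)
        peaks.append((y, x, work[y, x]))
        work[max(0, y - 1):y + 2, max(0, x - 1):x + 2] = 0  # suppress 3x3
    return peaks
```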
- At 703 of
FIG. 9, each of the peak locations is converted to an approximate head location in the high resolution filtered disparity map. For example, in FIG. 10, peak locations (xM1, yM1) and (xM2, yM2) in the low resolution disparity map 760 are converted into locations (xR1, yR1) and (xR2, yR2) in the high resolution disparity map 755. This conversion can be accomplished by multiplying the peak locations by the number and size of the bins in the corresponding x- or y-direction. - At 704 of
FIG. 9, the locations of the coarse people candidates (e.g., (xR1, yR1) and (xR2, yR2)) in the filtered disparity map and the mean disparity values dM1, dM2 of the corresponding peak locations are forwarded to a fine segmentation process for validating or discarding these locations as people candidates, as in FIG. 11. Fine Segmentation of People Candidates
-
FIGS. 11, 12, and 13 are diagrams illustrating a fine segmentation process for validating or discarding coarse people candidates according to one embodiment. In particular, FIG. 11 is a flow diagram illustrating a fine segmentation process for validating or discarding coarse people candidates according to one embodiment. The fine segmentation process obtains more accurate, or fine, locations of the coarse people candidates in the filtered disparity map and then determines whether the coarse people candidates have the characteristic head/shoulder profiles from localized analysis of the high resolution filtered disparity map. Depending on the results, the fine segmentation process either validates or discards the people candidates. - At 800, a two dimensional head template is generated having a size relative to the disparity of one of the coarse candidates. Disparity corresponds indirectly to height such that as disparity increases, the distance from the camera decreases, and thus the height of the person increases. For example,
FIG. 12 is a block diagram of an exemplary head template according to one embodiment. In the illustrated embodiment, the template model 870 includes a head template 875. The head template 875 is a circular model that corresponds to the top view of a head. - The dimensions of the
head template 875 are based on the coarse location of the candidate (e.g., xR1, yR1), the mean disparity value (e.g., dM1), and known dimensions of a standard head (e.g., 20 cm in diameter, 10 cm in radius). For example, to compute the dimensions of the head template, the position of the head is computed in 3D world coordinates (X, Y, Z) from the calculated coarse location and a mean disparity value using the factory data (e.g., intrinsic parameters of camera geometry) and field calibration data (e.g., camera to world coordinate system transform). Next, consider another point in the world coordinate system which is (X+10 cm, Y, Z) and compute the position of that point in the rectified image space (e.g., xR2, yR2), which is the image space in which all the image coordinates are maintained. The length of the vector defined by (xR1, yR1) and (xR2, yR2) corresponds to the radius of the circular model for the head template 875. - Furthermore, each point within the area of the resulting
head template 875 is assigned the mean disparity value (e.g., dM1) determined for that candidate. Points outside the head template 875 are assigned an invalid disparity value equal to zero. - At 810 of
FIG. 11, a fine location for the candidate is determined through template matching. For example, in the illustrated embodiment of FIG. 13, the template model 870 overlays the filtered disparity map 755 at an initial position corresponding to the coarse head location (e.g., xR1, yR1). The disparities of the filtered disparity map 755 that fall within the head template 875 are then subtracted from the mean disparity value for the coarse people candidate (e.g., dM1). A sum of the absolute values of these differences is then computed as a template score that serves as a relative indication of whether the underlying points of the filtered disparity map correspond to a head. Other correlation techniques may also be implemented to generate the template score. - The template matching is repeated, for example, by positioning the
template 870 to other areas such that the center of the head template 875 corresponds to locations about the original coarse location of the candidate (e.g., xR1, yR1). A fine location for the candidate (xF1, yF1) is obtained from the position of the head template 875 at which the best template score was obtained. - At 820, another mean disparity value dF1 is computed from the points of the filtered disparity map within the
head template 875 centered at the fine candidate location (xF1, yF1). In a particular embodiment, the mean disparity value dF1 can be calculated by generating a histogram of all the disparities of the filtered disparity map that fall within the head template. Excluding the points in which the disparities are equal to zero and thus invalid, the normalized mean disparity value dF1 is calculated. - At 830, people candidates are discarded for lack of coverage by analyzing the disparities that fall within the head template which is fixed at the fine head location. For example, it is known that disparity corresponds to the height of an object. Thus, a histogram of a person's head is expected to have a distribution, or coverage, of disparities that is centered at a particular disparity tapering downward. If the resulting histogram generated at 820 does not conform to such a distribution, it is likely that the candidate is not a person and the candidate is discarded for lack of coverage.
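By way of illustration, the circular head template and its sum-of-absolute-differences score can be sketched as follows. The template radius would come from the projection computation described above; here it is passed in directly, and boundary handling is omitted for brevity:

```python
import numpy as np

def head_template(radius_px):
    """Boolean mask for a circular head model; True cells carry the
    candidate's mean disparity, False cells are invalid (zero)."""
    size = 2 * radius_px + 1
    yy, xx = np.mgrid[:size, :size] - radius_px
    return (yy ** 2 + xx ** 2) <= radius_px ** 2

def template_score(disp, cx, cy, mean_d, radius_px):
    """Sum of absolute differences between the candidate's mean disparity
    and the filtered disparity map under the template; lower scores
    indicate a better head match. The search repeats this around the
    coarse location and keeps the best-scoring position."""
    mask = head_template(radius_px)
    r = radius_px
    patch = disp[cy - r:cy + r + 1, cx - r:cx + r + 1]
    return np.abs(patch[mask] - mean_d).sum()
```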
- At 840, the process determines whether there are more coarse candidates to process. If so, the process returns to 800 to analyze the next candidate. Otherwise, the process continues at 850.
- At 850, people candidates having head locations that overlap with head locations of other people candidates are discarded. In a particular embodiment, the head locations of all of the people candidates are converted from the filtered disparity map into their corresponding 3D world coordinates. People candidates whose head locations overlap with the head locations of other people candidates result in at least one of the candidates being discarded. Preferably, the candidate corresponding to a shorter head location is discarded, because the candidate likely corresponds to a neck, shoulder, or other object other than a person.
- At 860, the one or more resulting fine head locations (e.g., xF1, yF1) of the validated people candidates and the corresponding mean disparity values (e.g., dF1) are forwarded for further processing to determine if the number of people in the observed zone can be determined, at
step 652. - Confidence Level Scoring of the Fuzzy Scoring Module
-
FIG. 14 is a flow diagram illustrating the augmentation of people candidates with confidence level scoring according to one embodiment. The input to the scoring algorithm includes the list of validated people candidates and their locations in the filtered disparity map. In particular, the input can be a data structure (e.g., an array or linked list) in which the size of the data structure corresponds to the number of validated people candidates. - If, at 900, the number of validated people candidates is equal to one or more persons, a confidence score F1 can be generated at 910. The confidence score F1 corresponds to a confidence level that the target volume contains only one person. The confidence score F1 can be a value between 0 and 1.
- If, at 920, the number of validated people candidates is equal to two or more persons, a confidence score F2 can be generated at 930. The confidence score F2 corresponds to a confidence level that the target volume contains two or more persons. The confidence score F2 can be a value between 0 and 1.
- At 940, a confidence score F0 can be generated regardless of the number of validated people candidates. The confidence score F0 corresponds to a confidence level that the target volume contains at least one person. The confidence score F0 can be a value between 0 and 1.
- At 950, 960, and 970 respectively, the confidence scores F0, F1, and F2 are each averaged with confidence scores from previous frames, resulting in average confidence scores F0 AVG, F1 AVG and F2 AVG. In a preferred embodiment, the confidence scores F0, F1, F2 are weighted according to weights assigned to each frame. The weights are intended to filter out confidence scores generated from frames giving spurious results.
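The per-frame averaging at 950 through 970 can be sketched as follows, with optional frame weights; the uniform default and the window length are assumptions, since the text specifies only that weights filter out spurious frames:

```python
from collections import deque

class ConfidenceFilter:
    """Weighted running average of one confidence score (e.g., F0, F1, or
    F2) over the most recent frames."""
    def __init__(self, n_frames=5, weights=None):
        self.scores = deque(maxlen=n_frames)
        self.weights = weights  # optional per-frame weights, newest last

    def update(self, score):
        self.scores.append(score)
        n = len(self.scores)
        w = self.weights[-n:] if self.weights else [1.0] * n
        return sum(s * wi for s, wi in zip(self.scores, w)) / sum(w)
```

One instance would be kept per score (F0, F1, F2), and each frame's raw score passed through update() to obtain the corresponding average used at 980.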
- At 980, the average confidence scores F0 AVG, F1 AVG and F2 AVG are used to determine the number of people present (or absent) in the target volume.
- Referring back to
FIG. 6, the primary sensor 230 and the secondary sensor 240 according to the exemplary embodiment consider the confidence scores from step 980 to make a determination about the number of people candidates in the respective primary zone 210 and secondary zone 220, and a confidence level of that determination, as shown at decision step 652. If the confidence level is sufficient that such a determination can be made, a signal can be asserted, when interfaced to the controller 310 using discrete I/O, to indicate to the controller 310 whether no people are present, one person is present, or more than one person is present, at step 672. If the confidence level is not sufficient to make such a determination, a signal is asserted to indicate that the sensor is "not ready", at step 662. - At
step 652, motion analysis between frames is used for the purpose of asserting a “not ready” signal, i.e., that the respective sensor does not have an ambiguous result, and can determine the number of people in the observed zone. In an illustrative embodiment, motion detection is performed using an orthographic projection histogram of 3D points on the floor. Each point in the histogram is weighted such that the closer the point is to the sensor, the less it contributes to the histogram value following the square law. A point twice as far away contributes four times as much resulting in a normalized count. The sum of absolute differences is computed for the current frame and several frames earlier, using a ring buffer. If the difference is excessive, motion is sufficient to suggest that the observed scene is not at a steady state to report a result. One skilled in the art will appreciate that other methods of motion detection an/or tracking objects between frames can be performed to determine a steady state sufficient to report a result. A sequence of such views and statistics for a duration (determined by the size of the ring buffer) is used to determine if the system “ready/not ready” signal can be asserted so that the number (or absence) of people in the observed zone can be determined. - The exemplary embodiment of the present invention can be implemented using the CPS-1000 PeopleSensor available from Cognex Corporation, Natick, Mass. for both the
primary sensor 230 and thesecondary sensor 240. - While the exemplary embodiment describes an implementation of the present invention in a basic rectangular mantrap, the invention can also be applied to large mantrap implementations and complex geometrical shaped mantraps. The secondary sensor can accommodate a large or an irregularly shaped secondary zone, through the use of a plurality of secondary sensors with the respective outputs logically combined (i.e., “ORed”).
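The motion analysis described above for step 652, an orthographic floor histogram with square-law distance weighting compared against an earlier frame held in a ring buffer, can be sketched as follows; the bin layout, buffer depth, and threshold are assumptions for illustration:

```python
import numpy as np
from collections import deque

def floor_histogram(points, sensor_pos, bins=(10, 10), extent=2.5):
    """Orthographic projection of 3D points onto the floor. Each point is
    weighted by the square of its distance from the sensor, so a point
    twice as far away contributes four times as much (normalized count)."""
    hist = np.zeros(bins)
    for p in points:
        d2 = sum((a - b) ** 2 for a, b in zip(p, sensor_pos))
        bx = min(int(p[0] / extent * bins[0]), bins[0] - 1)
        by = min(int(p[1] / extent * bins[1]), bins[1] - 1)
        hist[by, bx] += d2
    return hist

class MotionDetector:
    """Ring buffer of floor histograms; the scene is steady when the sum
    of absolute differences against the oldest buffered frame is small."""
    def __init__(self, depth=8, threshold=1.0):
        self.buf = deque(maxlen=depth)
        self.threshold = threshold

    def steady(self, hist):
        moving = (len(self.buf) == self.buf.maxlen and
                  np.abs(hist - self.buf[0]).sum() > self.threshold)
        self.buf.append(hist)
        return not moving
```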
FIG. 8 depicts an exemplary arrangement of a plurality of secondary sensors in an "L" shaped mantrap 105. Referring to FIG. 8, the primary sensor 230 is mounted to observe the primary zone 210 in front of the airside door 110. The secondary zone is split into two regions, each with a secondary sensor. The first secondary zone 221 is observed by a first secondary sensor 241. The second secondary zone 222 is observed by a second secondary sensor 242. As shown in FIG. 8, the first secondary zone 221 can overlap the second secondary zone 222 to ensure complete coverage. One skilled in the art will appreciate that a plurality of secondary sensors can be adapted to provide complete coverage of a secondary zone of a mantrap that is shaped in an irregular pattern, or where regions of the mantrap secondary zone would be obscured from the view of a single secondary sensor due to internal walls and/or partitions. - In an alternate embodiment of the present invention, additional image analysis can be performed to provide increased levels of security. The primary and secondary sensors in the exemplary embodiment analyze a three-dimensional space for features associated with objects or people in the respective zones. As described above, each of the sensors performs volume filtering to consider only those features that are detected in the 3D space above the respective
primary zone 210 or secondary zone 220. The additional image analysis of the alternate embodiment will detect a person lying down, or attempting to bring foreign objects into the secure area. - A flowchart of the operation of the additional image analysis of the alternate embodiment is shown in
FIG. 7. During operation, the three-dimensional space is analyzed according to the methods described above. At step 710, if there are no people or objects detected, e.g., the signal asserted by step 672 of FIG. 6 corresponds to no people or objects present, processing continues to step 720, where a comparison of a two-dimensional image is made to a baseline image 725. - An initial baseline image is provided during an initial setup configuration. To collect the initial baseline image, a plurality of images of the scene are acquired and statistics about the variation of each pixel are computed. If the variance of the intensity of a pixel is too high, it is added into a mask image so that it is not considered by subsequent processing. For example, a video monitor mounted within the mantrap will appear to be constantly changing appearance and, therefore, can be masked from consideration so that it does not falsely indicate the presence of a person or object in the region during operation. The computed statistics can also be used to set threshold levels that determine which changes are significant and which are not.
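A sketch of the baseline collection and variance masking described above; the variance threshold is an assumption, and a real installation would derive it from the computed statistics:

```python
import numpy as np

def build_baseline(frames, var_threshold=100.0):
    """Compute a baseline image and a mask of unstable pixels from a stack
    of setup frames. High-variance pixels (e.g., a video monitor in the
    scene) are masked so they cannot falsely indicate a person or object."""
    stack = np.stack(frames).astype(float)
    baseline = stack.mean(axis=0)
    mask = stack.var(axis=0) > var_threshold  # True = ignore this pixel
    return baseline, mask
```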
- The
comparison step 720 compares the current two-dimensional rectified image (from steps 610 and 612 of FIG. 6) to the baseline image. If a pixel in the current image significantly differs in value from the baseline, it is noted. These differing points can be clustered together, and if the resulting clusters have sufficient size, it would suggest that a foreign object is in the mantrap. The clustering can be performed using conventional blob image analysis. - At
step 730, if a significant difference is not detected, processing continues to step 740 where the baseline image is updated so that thecomparison step 720 does not become susceptible to gradual changes in appearance. Atstep 740, thebaseline image 725 is linearly combined with the current image compared atstep 720. Processing then continues for another of a continuous cycle of analysis. - At
step 730, if a significant difference is detected, processing continues to step 735 where a significantly differing pixel increments a timestamp count. Atstep 745, if the timestamp count exceeds a threshold, the baseline pixel is updated atstep 740. This threshold could be user settable, allowing the user to decide how fast differences in the appearance of the mantrap get blended into the baseline image. By setting the threshold long enough, the dynamic baseline can be rendered essentially static. Atstep 750, a signal is asserted to indicate to the controller that a person or object is detected, and processing continues for another of a continuous cycle of analysis. - Optionally, for an even higher level of security, one might cluster the pixels being updated, and if there are sufficient numbers and areas, a security guard might be notified with an image of the new baseline image.
- If most of the pixels are different from the dynamic baseline, it could signify a drastic lighting change. This could be caused by something like a light burning out. In this case, one could automatically reselect image exposure parameters, rerun 3D processing, reselect a dynamic 2D baseline, and/or notify a security guard about the change.
- When a person seeking entry into the secured region enters the mantrap, the primary zone must be masked out of the image in addition to the regions of high pixel value variance. When someone is exiting the secured region through the mantrap, the entire space (both primary and secondary zones) can be examined to make sure that the area is clear and no one is attempting an ambush.
- In a second alternative embodiment of the present invention, both the
primary sensor 230 and the secondary sensor 240 are implemented as a single three-dimensional machine vision sensor configured to observe both the primary zone and the secondary zone at the same time, or in rapid succession. - In yet another alternative embodiment of the present invention, the
secondary sensor 240 is a presence/absence detector, or a series of presence/absence detectors. In this embodiment, for example, the secondary sensor can be a pressure-sensitive mat that outputs a signal indicating that a person or object is standing or resting on the mat. Alternatively, the presence/absence detector can be one or more light beam emitter/detector pairs that outputs a signal indicating that a person or object blocks the light emissions directed from the emitter to the detector. - Alternatively, the presence/absence detector can be an IR sensor that outputs a signal indicating that motion of a person or object is detected in the secondary zone. Further, one skilled in the art will appreciate that the secondary sensor according to the present invention can be any of a combination of various types of presence/absence detectors that can be logically combined to output a signal indicating that a person or object exists in the secondary zone.
- Although various calibration methods are described herein in terms of exemplary embodiments of the invention, persons having ordinary skill in the art should appreciate that any number of calibration methods can be used without departing from the spirit and scope of the invention. Although the exemplary embodiment described herein is setup in the factory using factory setup procedures, persons having ordinary skill in the art should appreciate that any of the described setup steps can also be performed in the field without departing from the scope of the invention.
- Although an interior orientation process is described herein for determining the internal geometry of cameras in terms of the camera constant, the image center, radial distortion coefficients, and aspect ratio, persons having ordinary skill in the art should appreciate that additional intrinsic parameters may be added, or some of these parameters ignored, in alternative embodiments within the scope of the present invention.
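The intrinsic parameters named above can be arranged as follows. This parameterization (a pinhole intrinsic matrix plus a two-coefficient radial model) is a common convention assumed here for illustration, not the patent's specific formulation.

```python
import numpy as np

def intrinsic_matrix(c, cx, cy, aspect):
    """Assemble a pinhole intrinsic matrix from the interior-orientation
    parameters: camera constant c, image center (cx, cy), and aspect
    ratio. Radial distortion is handled separately."""
    return np.array([[c,         0.0,  cx],
                     [0.0, c * aspect, cy],
                     [0.0,       0.0, 1.0]])

def apply_radial_distortion(x, y, k1, k2, cx, cy):
    """Map ideal image coordinates to distorted coordinates using a
    two-coefficient radial model about the image center."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + dx * scale, cy + dy * scale
```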
- Although ground plane calibration in the exemplary embodiments described herein is performed at the location of installation, persons having ordinary skill in the art should appreciate that ground plane calibration could also be performed in the factory or at alternate locations without departing from the spirit and scope of the invention.
- Although the invention is described herein in terms of a two camera stereo vision system, persons skilled in the art should appreciate that a single camera can be used to take two or more images from different locations to provide stereo images within the scope of the invention. For example, a camera could take separate images from a plurality of locations. Alternatively, a plurality of optical components could be arranged to provide a plurality of consecutive views to a stationary camera for use as stereo images according to the invention. Such optical components include reflective optical components, for example, mirrors, and refractive optical components, for example, lenses.
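Whichever arrangement supplies the two views (two cameras, one camera moved between locations, or mirrors and lenses feeding a stationary camera), the recovered depth follows the standard stereo triangulation relation Z = f·B/d once the baseline between views is known. A minimal sketch, with illustrative names:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard stereo triangulation: depth Z = f * B / d, where f is
    the focal length in pixels, B the baseline between the two views
    in meters, and d the disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```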
- Although exemplary embodiments of the present invention are described in terms of filtering objects having predetermined heights above the ground plane, persons having ordinary skill in the art should appreciate that a stereo vision system according to the present invention could also filter objects at a predetermined distance from any arbitrary plane, such as a wall, without departing from the spirit or scope of the invention.
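Filtering 3D features by distance from an arbitrary plane can be sketched with a point-to-plane distance test. The function name, units, and the 5 cm threshold are illustrative assumptions, not from the patent.

```python
import numpy as np

def filter_points_near_plane(points, plane_n, plane_d, min_dist=0.05):
    """Discard 3D features within min_dist of the plane n.p + d = 0.

    For the ground plane n = (0, 0, 1), d = 0, this removes shadows,
    floor texture, and low clutter; a wall plane works the same way.
    """
    n = np.asarray(plane_n, dtype=float)
    n = n / np.linalg.norm(n)  # normalize so n.p + d is a true distance
    pts = np.asarray(points, dtype=float)
    dist = np.abs(pts @ n + plane_d)
    return pts[dist >= min_dist]
```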
Claims (17)
1. A method of controlling access to a secured area using a mantrap, the mantrap having a landside door and an airside door, the method comprising:
monitoring a primary zone, the primary zone comprising a region within the mantrap having an area less than the area of the mantrap, to determine the presence of one person in the primary zone;
monitoring a secondary zone, the secondary zone being an area comprising a region of the mantrap not including the primary zone, to determine the absence of any persons in the secondary zone; and
controlling access through the landside door and the airside door in response to the steps of monitoring the primary zone and monitoring the secondary zone.
2. The method according to claim 1 wherein the step of monitoring the primary zone further comprises:
acquiring a stereo image of the primary zone;
computing a first set of 3D features from the stereo image of the primary zone; and
determining the presence of one person in the primary zone using the first set of 3D features.
3. The method according to claim 2 wherein the step of monitoring the secondary zone further comprises:
acquiring a stereo image of the secondary zone;
computing a second set of 3D features from the stereo image of the secondary zone; and
determining the absence of any person in the secondary zone using the second set of 3D features.
4. The method according to claim 1 further comprising setting an alarm signal if the step of monitoring the primary zone fails to determine the presence of one person in the primary zone.
5. The method according to claim 4 further comprising setting an alarm signal if the step of monitoring the secondary zone fails to determine the absence of any persons in the secondary zone.
6. The method according to claim 2 further comprising filtering the first set of 3D features to exclude features that are computed to be substantially near the ground in the primary zone.
7. The method according to claim 3 further comprising filtering the second set of 3D features to exclude features that are computed to be substantially near the ground in the secondary zone.
8. The method according to claim 1 wherein both the step of monitoring the primary zone and monitoring the secondary zone are performed by a single three-dimensional machine vision sensor.
9. A system for controlling access to a secured area using a mantrap, the system comprising:
a mantrap having a lockable landside door and a lockable airside door;
a primary sensor to detect the presence of a person in a primary zone within the mantrap, the primary zone comprising a region within the mantrap having an area less than the area of the mantrap;
a secondary sensor to detect the absence of any persons within a secondary zone within the mantrap, the secondary zone comprising a region within the mantrap not including the primary zone;
a controller coupled to the primary sensor and the secondary sensor, the controller actuating the lockable landside door and the lockable airside door in response to the output of the primary sensor and the secondary sensor.
10. The system according to claim 9 wherein the primary sensor is a three-dimensional machine vision sensor adapted to monitor a first volume of space directly above the primary zone.
11. The system according to claim 10 wherein the secondary sensor is a three-dimensional machine vision sensor adapted to monitor a second volume of space directly above the secondary zone.
12. The system according to claim 10 wherein the secondary sensor comprises a plurality of three-dimensional machine vision sensors, the plurality of three-dimensional machine vision sensors adapted to cooperatively monitor a second volume of space directly above the secondary zone, and wherein the controller is cooperatively coupled to each of the plurality of three-dimensional machine vision sensors.
13. The system according to claim 10 wherein the secondary sensor is a presence/absence detector.
14. The system according to claim 13 wherein the presence/absence detector is a sensor selected from the list consisting of a pressure-sensitive mat, a light beam emitter/detector pair, and an infra-red motion sensor.
15. A method for detecting objects in a mantrap, the method comprising:
acquiring a stereo image of a region of the mantrap, the stereo image comprising a plurality of two-dimensional images of the region;
computing a set of 3D features from the stereo image;
determining the absence of any person in the region using the set of 3D features;
comparing one of the plurality of two-dimensional images of the region to a baseline image; and
detecting an object in the mantrap from the step of comparing.
16. The method according to claim 15 wherein the baseline image is computed from a plurality of images of the region of the mantrap when no known objects are present.
17. The method according to claim 15 further comprising combining the baseline image with at least one of the plurality of two-dimensional images of the region if no objects are detected.
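The interlock of claims 1, 4, and 5 can be sketched as a small decision function: unlock the airside door only when exactly one person occupies the primary zone and the secondary zone is empty, and otherwise hold both doors and raise an alarm. The return labels and the landside rule for an empty mantrap are illustrative assumptions, not claim language.

```python
def door_command(primary_count, secondary_occupied):
    """One-person interlock per claim 1.

    primary_count: number of persons detected in the primary zone.
    secondary_occupied: True if any person/object is in the
        secondary zone.
    """
    if primary_count == 1 and not secondary_occupied:
        return "unlock_airside"    # exactly one person: admit to secured area
    if primary_count == 0 and not secondary_occupied:
        return "unlock_landside"   # mantrap empty: admit the next person
    return "alarm_lock_both"       # piggybacking or ambush suspected
```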
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/908,557 US20050249382A1 (en) | 2003-11-05 | 2005-05-17 | System and Method for Restricting Access through a Mantrap Portal |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/702,059 US7623674B2 (en) | 2003-11-05 | 2003-11-05 | Method and system for enhanced portal security through stereoscopy |
US10/908,557 US20050249382A1 (en) | 2003-11-05 | 2005-05-17 | System and Method for Restricting Access through a Mantrap Portal |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/702,059 Continuation-In-Part US7623674B2 (en) | 2003-11-05 | 2003-11-05 | Method and system for enhanced portal security through stereoscopy |
Publications (1)
Publication Number | Publication Date |
---|---|
US20050249382A1 true US20050249382A1 (en) | 2005-11-10 |
Family
ID=34551584
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/702,059 Expired - Fee Related US7623674B2 (en) | 2003-11-05 | 2003-11-05 | Method and system for enhanced portal security through stereoscopy |
US10/908,557 Abandoned US20050249382A1 (en) | 2003-11-05 | 2005-05-17 | System and Method for Restricting Access through a Mantrap Portal |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/702,059 Expired - Fee Related US7623674B2 (en) | 2003-11-05 | 2003-11-05 | Method and system for enhanced portal security through stereoscopy |
Country Status (3)
Country | Link |
---|---|
US (2) | US7623674B2 (en) |
EP (1) | EP1683113A2 (en) |
WO (1) | WO2005048200A2 (en) |
Cited By (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060037372A1 (en) * | 2003-04-10 | 2006-02-23 | Barrie Jones | Door lock |
US20060143470A1 (en) * | 2004-12-24 | 2006-06-29 | Fujitsu Limited | Personal authentication apparatus |
US20070098253A1 (en) * | 2005-09-23 | 2007-05-03 | Neuricam Spa | Electro-optical device for counting persons, or other, based on stereoscopic vision, and relative method |
US20070257790A1 (en) * | 2006-05-04 | 2007-11-08 | Shmuel Hershkovitz | Security system entry control |
US20080100438A1 (en) * | 2002-09-05 | 2008-05-01 | Marrion Cyril C | Multi-Zone Passageway Monitoring System and Method |
WO2008049404A2 (en) | 2006-10-25 | 2008-05-02 | Norbert Link | Method and apparatus for monitoring a spatial volume and a calibration method |
US20080117020A1 (en) * | 2004-12-23 | 2008-05-22 | Christian Martin | Method of Detecting Presence and Motion for Door Control Devices and Door Control Devices Implementing Such a Demand |
US20080266165A1 (en) * | 2007-04-27 | 2008-10-30 | Robert Patrick Daly | System for deployment of a millimeter wave concealed object detection system |
US20090010490A1 (en) * | 2007-07-03 | 2009-01-08 | Shoppertrak Rct Corporation | System and process for detecting, tracking and counting human objects of interest |
US20090087039A1 (en) * | 2007-09-28 | 2009-04-02 | Takayuki Matsuura | Image taking apparatus and image taking method |
US20090140908A1 (en) * | 2007-06-07 | 2009-06-04 | Robert Patrick Daly | System for deployment of a millimeter wave concealed object detection system using an outdoor passively illuminated structure |
DE102008006449A1 (en) * | 2008-01-29 | 2009-07-30 | Kaba Gallenschütz GmbH | Method and device for monitoring a volume of space |
DE202009010858U1 (en) | 2009-08-11 | 2009-10-22 | Magnetic Autocontrol Gmbh | Passage or transit barrier with a device for monitoring the passage or passage area |
US20100147201A1 (en) * | 2008-12-17 | 2010-06-17 | 1St United Services Credit Union | Security, Monitoring and Control System for Preventing Unauthorized Entry into a Bank or Other Building |
US20100188509A1 (en) * | 2009-01-23 | 2010-07-29 | Ik Huh | Central access control apparatus |
US20110032341A1 (en) * | 2009-08-04 | 2011-02-10 | Ignatov Artem Konstantinovich | Method and system to transform stereo content |
US20110157191A1 (en) * | 2009-12-30 | 2011-06-30 | Nvidia Corporation | Method and system for artifically and dynamically limiting the framerate of a graphics processing unit |
US20110169917A1 (en) * | 2010-01-11 | 2011-07-14 | Shoppertrak Rct Corporation | System And Process For Detecting, Tracking And Counting Human Objects of Interest |
US20110181414A1 (en) * | 2010-01-28 | 2011-07-28 | Honeywell International Inc. | Access control system based upon behavioral patterns |
US20110267440A1 (en) * | 2010-04-29 | 2011-11-03 | Heejin Kim | Display device and method of outputting audio signal |
US8326084B1 (en) | 2003-11-05 | 2012-12-04 | Cognex Technology And Investment Corporation | System and method of auto-exposure control for image acquisition hardware using three dimensional information |
US20130201286A1 (en) * | 2010-04-15 | 2013-08-08 | Iee International Electronics & Engineering S.A. | Configurable access control sensing device |
US20140161305A1 (en) * | 2012-12-07 | 2014-06-12 | Morris Lee | Methods and apparatus to monitor environments |
US20150242691A1 (en) * | 2012-09-07 | 2015-08-27 | Siemens Schweiz AG a corporation | Methods and apparatus for establishing exit/entry criteria for a secure location |
US9177195B2 (en) | 2011-09-23 | 2015-11-03 | Shoppertrak Rct Corporation | System and method for detecting, tracking and counting human objects of interest using a counting system and a data capture device |
CN106997453A (en) * | 2016-01-22 | 2017-08-01 | 三星电子株式会社 | Event signal processing method and equipment |
US9900584B2 (en) * | 2016-04-27 | 2018-02-20 | Semyon Nisenzon | Depth map generation based on cluster hierarchy and multiple multiresolution camera clusters |
US20180075680A1 (en) * | 2016-09-15 | 2018-03-15 | Deutsche Post Ag | Method for Providing Security for a Transfer Point |
US20180336737A1 (en) * | 2017-05-17 | 2018-11-22 | Bespoke, Inc. d/b/a Topology Eyewear | Systems and methods for determining the scale of human anatomy from images |
US10706528B2 (en) * | 2015-02-27 | 2020-07-07 | Cognex Corporation | Detecting object presence on a target surface |
US10936859B2 (en) | 2011-09-23 | 2021-03-02 | Sensormatic Electronics, LLC | Techniques for automatically identifying secondary objects in a stereo-optical counting system |
WO2021217011A1 (en) | 2020-04-24 | 2021-10-28 | Alarm.Com Incorporated | Enhanced property access with video analytics |
US11180344B2 (en) | 2017-05-23 | 2021-11-23 | Otis Elevator Company | Elevator doorway display systems for elevator cars |
WO2023039485A1 (en) * | 2021-09-08 | 2023-03-16 | Boon Edam, Inc. | Separation systems, piggybacking detection devices, and related computer program products for controlling access to a restricted area and related methods |
Families Citing this family (71)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7295106B1 (en) * | 2003-09-03 | 2007-11-13 | Siemens Schweiz Ag | Systems and methods for classifying objects within a monitored zone using multiple surveillance devices |
US7831087B2 (en) * | 2003-10-31 | 2010-11-09 | Hewlett-Packard Development Company, L.P. | Method for visual-based recognition of an object |
US9087380B2 (en) * | 2004-05-26 | 2015-07-21 | Timothy J. Lock | Method and system for creating event data and making same available to be served |
US8330814B2 (en) * | 2004-07-30 | 2012-12-11 | Panasonic Corporation | Individual detector and a tailgate detection device |
EP1810220A4 (en) * | 2004-11-03 | 2008-04-30 | Tyzx Inc | An integrated image processor |
KR20060063265A (en) * | 2004-12-07 | 2006-06-12 | 삼성전자주식회사 | Method and apparatus for processing image |
US7639841B2 (en) * | 2004-12-20 | 2009-12-29 | Siemens Corporation | System and method for on-road detection of a vehicle using knowledge fusion |
JP4122384B2 (en) * | 2005-01-31 | 2008-07-23 | オプテックス株式会社 | Traffic monitoring device |
EP1875030B2 (en) * | 2005-04-19 | 2019-09-04 | Cedes AG | Device for controlling a driving moving element, for example, a door |
US20070047837A1 (en) * | 2005-08-29 | 2007-03-01 | John Schwab | Method and apparatus for detecting non-people objects in revolving doors |
WO2007067721A2 (en) | 2005-12-08 | 2007-06-14 | Lenel Systems International, Inc. | System and method for counting people near objects |
US7733043B2 (en) * | 2006-06-27 | 2010-06-08 | B.E.A., Inc. | Revolving door control system |
KR101311896B1 (en) * | 2006-11-14 | 2013-10-14 | 삼성전자주식회사 | Method for shifting disparity of three dimentions and the three dimentions image apparatus thereof |
US7900398B2 (en) * | 2006-11-14 | 2011-03-08 | Overhead Door Corporation | Security door system |
DE102006053708A1 (en) * | 2006-11-15 | 2008-05-29 | BLASI - GMBH Automatische Türanlagen | Revolving door |
KR101345303B1 (en) * | 2007-03-29 | 2013-12-27 | 삼성전자주식회사 | Dynamic depth control method or apparatus in stereo-view or multiview sequence images |
US20080244978A1 (en) * | 2007-04-05 | 2008-10-09 | Rahmi Soyugenc | Motorized security revolving door |
FR2919413B1 (en) * | 2007-07-23 | 2009-11-13 | Gunnebo Electronic Security Sa | METHOD OF MONITORING ACCESSIBLE SPACE BY A CONTROL DOOR AND CONTROL DOOR. |
WO2009085233A2 (en) * | 2007-12-21 | 2009-07-09 | 21Ct, Inc. | System and method for visually tracking with occlusions |
ITVI20080099A1 (en) | 2008-04-23 | 2009-10-24 | Bft S P A | SAFETY SYSTEM FOR AUTOMATION OF GATES, DOORS AND MOTORIZED BARRIERS |
JP2012501506A (en) * | 2008-08-31 | 2012-01-19 | ミツビシ エレクトリック ビジュアル ソリューションズ アメリカ, インコーポレイテッド | Conversion of 3D video content that matches the viewer position |
EP2194504A1 (en) * | 2008-12-02 | 2010-06-09 | Koninklijke Philips Electronics N.V. | Generation of a depth map |
CN101527046B (en) * | 2009-04-28 | 2012-09-05 | 青岛海信数字多媒体技术国家重点实验室有限公司 | Motion detection method, device and system |
KR20100135032A (en) * | 2009-06-16 | 2010-12-24 | 삼성전자주식회사 | Conversion device for two dimensional image to three dimensional image and method thereof |
GB2475104A (en) * | 2009-11-09 | 2011-05-11 | Alpha Vision Design Res Ltd | Detecting movement of 3D objects using a TOF camera |
BR112012024277A2 (en) * | 2010-03-26 | 2016-05-24 | Siemens Sas | door closing and opening safety installation |
WO2011139734A2 (en) * | 2010-04-27 | 2011-11-10 | Sanjay Nichani | Method for moving object detection using an image sensor and structured light |
EP2656613A1 (en) * | 2010-12-22 | 2013-10-30 | Thomson Licensing | Apparatus and method for determining a disparity estimate |
EP2729915B1 (en) | 2011-07-05 | 2017-12-27 | Omron Corporation | A method and apparatus for projective volume monitoring |
JP6102930B2 (en) * | 2011-10-14 | 2017-03-29 | オムロン株式会社 | Method and apparatus for projective space monitoring |
TWI448990B (en) * | 2012-09-07 | 2014-08-11 | Univ Nat Chiao Tung | Real-time people counting system using layer scanning method |
ES2496665B1 (en) * | 2013-02-14 | 2015-06-16 | Holding Assessoria I Lideratge, S.L. | FRAUDULENT ACCESS DETECTION METHOD IN CONTROLLED ACCESS POINTS |
JP2014186547A (en) * | 2013-03-22 | 2014-10-02 | Toshiba Corp | Moving object tracking system, method and program |
US9349179B2 (en) * | 2013-05-10 | 2016-05-24 | Microsoft Technology Licensing, Llc | Location information determined from depth camera data |
CN103670129A (en) * | 2013-12-24 | 2014-03-26 | 北京宝盾门业技术有限公司 | Rotation speed control system and method for manual rotating door |
US9589402B2 (en) * | 2014-08-25 | 2017-03-07 | Accenture Global Services Limited | Restricted area access control system |
US9922294B2 (en) | 2014-08-25 | 2018-03-20 | Accenture Global Services Limited | Secure short-distance-based communication and enforcement system |
US9514589B2 (en) | 2014-08-25 | 2016-12-06 | Accenture Global Services Limited | Secure short-distance-based communication and access control system |
US9633493B2 (en) | 2014-08-25 | 2017-04-25 | Accenture Global Services Limited | Secure short-distance-based communication and validation system for zone-based validation |
US10009745B2 (en) | 2014-08-25 | 2018-06-26 | Accenture Global Services Limited | Validation in secure short-distance-based communication and enforcement system according to visual objects |
US9608999B2 (en) | 2014-12-02 | 2017-03-28 | Accenture Global Services Limited | Smart beacon data security |
CN106144797B (en) | 2015-04-03 | 2020-11-27 | 奥的斯电梯公司 | Traffic list generation for passenger transport |
CN106144861B (en) | 2015-04-03 | 2020-07-24 | 奥的斯电梯公司 | Depth sensor based passenger sensing for passenger transport control |
CN104828664B (en) * | 2015-04-03 | 2020-05-22 | 奥的斯电梯公司 | Automatic debugging system and method |
CN106144795B (en) | 2015-04-03 | 2020-01-31 | 奥的斯电梯公司 | System and method for passenger transport control and security by identifying user actions |
CN106144801B (en) | 2015-04-03 | 2021-05-18 | 奥的斯电梯公司 | Depth sensor based sensing for special passenger transport vehicle load conditions |
CN106144862B (en) | 2015-04-03 | 2020-04-10 | 奥的斯电梯公司 | Depth sensor based passenger sensing for passenger transport door control |
WO2017153626A1 (en) * | 2016-03-09 | 2017-09-14 | Kone Corporation | Access gate arrangement |
US10614578B2 (en) * | 2016-03-23 | 2020-04-07 | Akcelita, LLC | System and method for tracking people, animals and objects using a volumetric representation and artificial intelligence |
US10049462B2 (en) * | 2016-03-23 | 2018-08-14 | Akcelita, LLC | System and method for tracking and annotating multiple objects in a 3D model |
US10360445B2 (en) * | 2016-03-23 | 2019-07-23 | Akcelita, LLC | System and method for tracking persons using a volumetric representation |
US10074225B2 (en) | 2016-04-18 | 2018-09-11 | Accenture Global Solutions Limited | Validation in secure short-distance-based communication and enforcement system according to visual object flow |
US10380814B1 (en) * | 2016-06-27 | 2019-08-13 | Amazon Technologies, Inc. | System for determining entry of user to an automated facility |
US10445593B1 (en) | 2016-06-27 | 2019-10-15 | Amazon Technologies, Inc. | User interface for acquisition of group data |
WO2018061812A1 (en) * | 2016-09-30 | 2018-04-05 | パナソニックIpマネジメント株式会社 | Gate device |
US10755428B2 (en) * | 2017-04-17 | 2020-08-25 | The United States Of America, As Represented By The Secretary Of The Navy | Apparatuses and methods for machine vision system including creation of a point cloud model and/or three dimensional model |
US10655389B2 (en) * | 2017-04-17 | 2020-05-19 | Conduent Business Services, Llc | Rotary gate |
US10186051B2 (en) | 2017-05-11 | 2019-01-22 | Dantec Dynamics A/S | Method and system for calibrating a velocimetry system |
US10386460B2 (en) | 2017-05-15 | 2019-08-20 | Otis Elevator Company | Self-calibrating sensor for elevator and automatic door systems |
US10221610B2 (en) | 2017-05-15 | 2019-03-05 | Otis Elevator Company | Depth sensor for automatic doors |
CN108229292A (en) * | 2017-07-28 | 2018-06-29 | 北京市商汤科技开发有限公司 | target identification method, device, storage medium and electronic equipment |
US10509969B2 (en) * | 2017-09-12 | 2019-12-17 | Cisco Technology, Inc. | Dynamic person queue analytics |
KR102437456B1 (en) * | 2017-11-14 | 2022-08-26 | 애플 인크. | Event camera-based deformable object tracking |
CN108737810B (en) * | 2018-05-23 | 2019-08-06 | 苏州新光维医疗科技有限公司 | Image processing method, device and 3-D imaging system |
JP7220373B2 (en) * | 2018-06-28 | 2023-02-10 | パナソニックIpマネジメント株式会社 | Gate device and system |
JP7180243B2 (en) * | 2018-09-27 | 2022-11-30 | 株式会社デンソーウェーブ | surveillance systems and cameras |
CN110390747A (en) * | 2019-06-26 | 2019-10-29 | 深圳中青文化投资管理有限公司 | A kind of Intelligent Office space building guard method and computer readable storage medium |
CN111325082B (en) * | 2019-06-28 | 2024-02-02 | 杭州海康威视系统技术有限公司 | Personnel concentration analysis method and device |
DE202020100583U1 (en) * | 2020-02-03 | 2020-03-17 | KEMAS Gesellschaft für Elektronik, Elektromechanik, Mechanik und Systeme mbH | Revolving door |
WO2021233719A1 (en) * | 2020-05-18 | 2021-11-25 | Inventio Ag | Additional zone monitoring for a building door |
CN112017346B (en) * | 2020-08-25 | 2023-08-18 | 杭州海康威视数字技术股份有限公司 | Access control method, access control terminal, access control system and storage medium |
Citations (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3727034A (en) * | 1972-01-19 | 1973-04-10 | Gen Electric | Counting system for a plurality of locations |
US3779178A (en) * | 1972-02-14 | 1973-12-18 | G Riseley | Restrained access protection apparatus |
US4000400A (en) * | 1975-04-09 | 1976-12-28 | Elder Clarence L | Bidirectional monitoring and control system |
US4303851A (en) * | 1979-10-16 | 1981-12-01 | Otis Elevator Company | People and object counting system |
US4481887A (en) * | 1982-08-31 | 1984-11-13 | Enrique Urbano | Security doors |
US4799243A (en) * | 1987-09-01 | 1989-01-17 | Otis Elevator Company | Directional people counting arrangement |
US4847485A (en) * | 1986-07-15 | 1989-07-11 | Raphael Koelsch | Arrangement for determining the number of persons and a direction within a space to be monitored or a pass-through |
US5201906A (en) * | 1989-10-11 | 1993-04-13 | Milan Schwarz | Anti-piggybacking: sensor system for security door to detect two individuals in one compartment |
US5519784A (en) * | 1992-10-07 | 1996-05-21 | Vermeulen; Pieter J. E. | Apparatus for classifying movement of objects along a passage by type and direction employing time domain patterns |
US5581625A (en) * | 1994-01-31 | 1996-12-03 | International Business Machines Corporation | Stereo vision system for counting items in a queue |
US5866887A (en) * | 1996-09-04 | 1999-02-02 | Matsushita Electric Industrial Co., Ltd. | Apparatus for detecting the number of passers |
US6028626A (en) * | 1995-01-03 | 2000-02-22 | Arc Incorporated | Abnormality detection and surveillance system |
US6081619A (en) * | 1995-07-19 | 2000-06-27 | Matsushita Electric Industrial Co., Ltd. | Movement pattern recognizing apparatus for detecting movements of human bodies and number of passed persons |
US6195102B1 (en) * | 1987-03-17 | 2001-02-27 | Quantel Limited | Image transformation processing which applies realistic perspective conversion to a planar image |
US6205233B1 (en) * | 1997-09-16 | 2001-03-20 | Invisitech Corporation | Personal identification system using multiple parameters having low cross-correlation |
US6295367B1 (en) * | 1997-06-19 | 2001-09-25 | Emtera Corporation | System and method for tracking movement of objects in a scene using correspondence graphs |
US6297844B1 (en) * | 1999-11-24 | 2001-10-02 | Cognex Corporation | Video safety curtain |
US20010030689A1 (en) * | 1999-12-10 | 2001-10-18 | Spinelli Vito A. | Automatic door assembly with video imaging device |
US6307951B1 (en) * | 1996-03-29 | 2001-10-23 | Giken Trastem Co., Ltd. | Moving body detection method and apparatus and moving body counting apparatus |
US6308644B1 (en) * | 1994-06-08 | 2001-10-30 | William Diaz | Fail-safe access control chamber security system |
US6345105B1 (en) * | 1998-09-01 | 2002-02-05 | Mitsubishi Denki Kabushiki Kaisha | Automatic door system and method for controlling automatic door |
US20020039135A1 (en) * | 1999-12-23 | 2002-04-04 | Anders Heyden | Multiple backgrounds |
US20020041698A1 (en) * | 2000-08-31 | 2002-04-11 | Wataru Ito | Object detecting method and object detecting apparatus and intruding object monitoring apparatus employing the object detecting method |
US6408109B1 (en) * | 1996-10-07 | 2002-06-18 | Cognex Corporation | Apparatus and method for detecting and sub-pixel location of edges in a digital image |
US20020118114A1 (en) * | 2001-02-27 | 2002-08-29 | Hiroyuki Ohba | Sensor for automatic doors |
US6469734B1 (en) * | 2000-04-29 | 2002-10-22 | Cognex Corporation | Video safety detector with shadow elimination |
US20030053660A1 (en) * | 2001-06-21 | 2003-03-20 | Anders Heyden | Adjusted filters |
US20030071199A1 (en) * | 2001-09-28 | 2003-04-17 | Stefan Esping | System for installation |
US20030135483A1 (en) * | 1997-06-04 | 2003-07-17 | Sharp Gary L. | Database structure and management |
US6678394B1 (en) * | 1999-11-30 | 2004-01-13 | Cognex Technology And Investment Corporation | Obstacle detection system |
US20040017929A1 (en) * | 2002-04-08 | 2004-01-29 | Newton Security Inc. | Tailgating and reverse entry detection, alarm, recording and prevention using machine vision |
US6701005B1 (en) * | 2000-04-29 | 2004-03-02 | Cognex Corporation | Method and apparatus for three-dimensional object segmentation |
US20040045339A1 (en) * | 2002-09-05 | 2004-03-11 | Sanjay Nichani | Stereo door sensor |
US20040061781A1 (en) * | 2002-09-17 | 2004-04-01 | Eastman Kodak Company | Method of digital video surveillance utilizing threshold detection and coordinate tracking |
US6720874B2 (en) * | 2000-09-29 | 2004-04-13 | Ids Systems, Inc. | Portal intrusion detection apparatus and method |
US20040153671A1 (en) * | 2002-07-29 | 2004-08-05 | Schuyler Marc P. | Automated physical access control systems and methods |
US6791461B2 (en) * | 2001-02-27 | 2004-09-14 | Optex Co., Ltd. | Object detection sensor |
US20040218784A1 (en) * | 2002-09-05 | 2004-11-04 | Sanjay Nichani | Method and apparatus for monitoring a passageway using 3D images |
US20050105765A1 (en) * | 2003-11-17 | 2005-05-19 | Mei Han | Video surveillance system with object detection and probability scoring based on object class |
US6999600B2 (en) * | 2003-01-30 | 2006-02-14 | Objectvideo, Inc. | Video scene background maintenance using change detection and classification |
US7003136B1 (en) * | 2002-04-26 | 2006-02-21 | Hewlett-Packard Development Company, L.P. | Plan-view projections of depth image data for object tracking |
US7260241B2 (en) * | 2001-06-12 | 2007-08-21 | Sharp Kabushiki Kaisha | Image surveillance apparatus, image surveillance method, and image surveillance processing program |
Family Cites Families (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
IL64015A (en) * | 1980-10-24 | 1984-10-31 | Pretini Gisberto | Automated bank counter |
US5339163A (en) | 1988-03-16 | 1994-08-16 | Canon Kabushiki Kaisha | Automatic exposure control device using plural image plane detection areas |
ATE108288T1 (en) | 1988-07-28 | 1994-07-15 | Contraves Ag | AUTOMATIC BRIGHTNESS AND CONTRAST CONTROL OF AN INDUSTRIAL/MILITARY VIDEO CAMERA. |
US5097454A (en) * | 1989-10-11 | 1992-03-17 | Milan Schwarz | Security door with improved sensor for detecting unauthorized passage |
US5432712A (en) | 1990-05-29 | 1995-07-11 | Axiom Innovation Limited | Machine vision stereo matching |
US5387768A (en) * | 1993-09-27 | 1995-02-07 | Otis Elevator Company | Elevator passenger detector and door control system which masks portions of a hall image to determine motion and court passengers |
US5559551A (en) | 1994-05-30 | 1996-09-24 | Sony Corporation | Subject tracking apparatus |
FR2725278B1 (en) | 1994-10-04 | 1997-08-14 | Telecommunications Sa | THREE-DIMENSIONAL SHAPE RECOGNITION EQUIPMENT |
JPH08186761A (en) | 1994-12-30 | 1996-07-16 | Sony Corp | Video camera device and video camera exposure control method |
US5850352A (en) | 1995-03-31 | 1998-12-15 | The Regents Of The University Of California | Immersive video, including video hypermosaicing to generate from multiple video views of a scene a three-dimensional video mosaic from which diverse virtual video scene images are synthesized, including panoramic, scene interactive and stereoscopic images |
GB9511140D0 (en) | 1995-06-02 | 1995-07-26 | Mayor Limited | Security control system |
US6198484B1 (en) | 1996-06-27 | 2001-03-06 | Kabushiki Kaisha Toshiba | Stereoscopic display system |
GB9617592D0 (en) | 1996-08-22 | 1996-10-02 | Footfall Limited | Video imaging systems |
IT1289712B1 (en) | 1996-12-04 | 1998-10-16 | Ist Trentino Di Cultura | PROCEDURE AND DEVICE FOR THE DETECTION AND AUTOMATIC COUNTING OF BODIES CROSSING A GATE |
DE19700811A1 (en) | 1997-01-13 | 1998-07-16 | Heinrich Landert | Method and device for controlling door systems depending on the presence of people |
US6215898B1 (en) | 1997-04-15 | 2001-04-10 | Interval Research Corporation | Data processing system and method |
JP3077745B2 (en) | 1997-07-31 | 2000-08-14 | 日本電気株式会社 | Data processing method and apparatus, information storage medium |
US6205242B1 (en) * | 1997-09-29 | 2001-03-20 | Kabushiki Kaisha Toshiba | Image monitor apparatus and a method |
US6173070B1 (en) | 1997-12-30 | 2001-01-09 | Cognex Corporation | Machine vision method using search models to find features in three dimensional images |
JP3459000B2 (en) * | 1998-09-22 | 2003-10-20 | インターナショナル・ビジネス・マシーンズ・コーポレーション | Method of displaying objects displayed in a plurality of client areas and display device used therefor |
US6963661B1 (en) | 1999-09-09 | 2005-11-08 | Kabushiki Kaisha Toshiba | Obstacle detection system and method therefor |
US6710770B2 (en) | 2000-02-11 | 2004-03-23 | Canesta, Inc. | Quasi-three-dimensional method and apparatus to detect and localize interaction of user-object and virtual transfer device |
US7479980B2 (en) | 1999-12-23 | 2009-01-20 | Wespot Technologies Ab | Monitoring system |
JP3873554B2 (en) * | 1999-12-27 | 2007-01-24 | 株式会社日立製作所 | Monitoring device, recording medium on which monitoring program is recorded |
US6940545B1 (en) | 2000-02-28 | 2005-09-06 | Eastman Kodak Company | Face detecting camera and method |
GB0008037D0 (en) | 2000-04-01 | 2000-05-24 | Integrated Design Limited | Monitoring entry through doorways |
US6301440B1 (en) | 2000-04-13 | 2001-10-09 | International Business Machines Corp. | System and method for automatically setting image acquisition controls |
AU2001290608A1 (en) * | 2000-08-31 | 2002-03-13 | Rytec Corporation | Sensor and imaging system |
US7058204B2 (en) * | 2000-10-03 | 2006-06-06 | Gesturetek, Inc. | Multiple camera control system |
US6680745B2 (en) | 2000-11-10 | 2004-01-20 | Perceptive Network Technologies, Inc. | Videoconferencing method with tracking of face and dynamic bandwidth allocation |
US6690354B2 (en) | 2000-11-19 | 2004-02-10 | Canesta, Inc. | Method for enhancing performance in a system utilizing an array of sensors that sense at least two-dimensions |
US6744462B2 (en) | 2000-12-12 | 2004-06-01 | Koninklijke Philips Electronics N.V. | Apparatus and methods for resolution of entry/exit conflicts for security monitoring systems |
WO2002056251A1 (en) * | 2000-12-27 | 2002-07-18 | Mitsubishi Denki Kabushiki Kaisha | Image processing device and elevator mounting it thereon |
WO2002095692A1 (en) | 2001-05-21 | 2002-11-28 | Gunnebo Mayor Ltd. | Security door |
JP2003015019A (en) | 2001-06-27 | 2003-01-15 | Minolta Co Ltd | Device for detecting object and camera |
AUPS170902A0 (en) | 2002-04-12 | 2002-05-16 | Canon Kabushiki Kaisha | Face detection and tracking in a video sequence |
US7088236B2 (en) * | 2002-06-26 | 2006-08-08 | It University Of Copenhagen | Method of and a system for surveillance of an environment utilising electromagnetic waves |
US20040086152A1 (en) | 2002-10-30 | 2004-05-06 | Ramakrishna Kakarala | Event detection for video surveillance systems using transform coefficients of compressed images |
EP1614159B1 (en) | 2003-04-11 | 2014-02-26 | Microsoft Corporation | Method and system to differentially enhance sensor dynamic range |
EP1633950B1 (en) | 2003-06-16 | 2010-01-13 | Secumanagement B.V. | Sensor arrangements, systems and method in relation to automatic door openers |
US7471846B2 (en) | 2003-06-26 | 2008-12-30 | Fotonation Vision Limited | Perfecting the effect of flash within an image acquisition devices using face detection |
US7280673B2 (en) | 2003-10-10 | 2007-10-09 | Intellivid Corporation | System and method for searching for changes in surveillance video |
JP2005175853A (en) | 2003-12-10 | 2005-06-30 | Canon Inc | Imaging apparatus and imaging system |
US20060170769A1 (en) | 2005-01-31 | 2006-08-03 | Jianpeng Zhou | Human and object recognition in digital video |
- 2003
  - 2003-11-05 US US10/702,059 patent/US7623674B2/en not_active Expired - Fee Related
- 2004
  - 2004-10-28 WO PCT/US2004/035754 patent/WO2005048200A2/en active Application Filing
  - 2004-10-28 EP EP04796603A patent/EP1683113A2/en not_active Withdrawn
- 2005
  - 2005-05-17 US US10/908,557 patent/US20050249382A1/en not_active Abandoned
Patent Citations (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3727034A (en) * | 1972-01-19 | 1973-04-10 | Gen Electric | Counting system for a plurality of locations |
US3779178A (en) * | 1972-02-14 | 1973-12-18 | G Riseley | Restrained access protection apparatus |
US4000400A (en) * | 1975-04-09 | 1976-12-28 | Elder Clarence L | Bidirectional monitoring and control system |
US4303851A (en) * | 1979-10-16 | 1981-12-01 | Otis Elevator Company | People and object counting system |
US4481887A (en) * | 1982-08-31 | 1984-11-13 | Enrique Urbano | Security doors |
US4847485A (en) * | 1986-07-15 | 1989-07-11 | Raphael Koelsch | Arrangement for determining the number of persons and a direction within a space to be monitored or a pass-through |
US6195102B1 (en) * | 1987-03-17 | 2001-02-27 | Quantel Limited | Image transformation processing which applies realistic perspective conversion to a planar image |
US4799243A (en) * | 1987-09-01 | 1989-01-17 | Otis Elevator Company | Directional people counting arrangement |
US5201906A (en) * | 1989-10-11 | 1993-04-13 | Milan Schwarz | Anti-piggybacking: sensor system for security door to detect two individuals in one compartment |
US5519784A (en) * | 1992-10-07 | 1996-05-21 | Vermeulen; Pieter J. E. | Apparatus for classifying movement of objects along a passage by type and direction employing time domain patterns |
US5581625A (en) * | 1994-01-31 | 1996-12-03 | International Business Machines Corporation | Stereo vision system for counting items in a queue |
US6308644B1 (en) * | 1994-06-08 | 2001-10-30 | William Diaz | Fail-safe access control chamber security system |
US6028626A (en) * | 1995-01-03 | 2000-02-22 | Arc Incorporated | Abnormality detection and surveillance system |
US6081619A (en) * | 1995-07-19 | 2000-06-27 | Matsushita Electric Industrial Co., Ltd. | Movement pattern recognizing apparatus for detecting movements of human bodies and number of passed persons |
US6307951B1 (en) * | 1996-03-29 | 2001-10-23 | Giken Trastem Co., Ltd. | Moving body detection method and apparatus and moving body counting apparatus |
US5866887A (en) * | 1996-09-04 | 1999-02-02 | Matsushita Electric Industrial Co., Ltd. | Apparatus for detecting the number of passers |
US6408109B1 (en) * | 1996-10-07 | 2002-06-18 | Cognex Corporation | Apparatus and method for detecting and sub-pixel location of edges in a digital image |
US20030135483A1 (en) * | 1997-06-04 | 2003-07-17 | Sharp Gary L. | Database structure and management |
US6295367B1 (en) * | 1997-06-19 | 2001-09-25 | Emtera Corporation | System and method for tracking movement of objects in a scene using correspondence graphs |
US6205233B1 (en) * | 1997-09-16 | 2001-03-20 | Invisitech Corporation | Personal identification system using multiple parameters having low cross-correlation |
US6345105B1 (en) * | 1998-09-01 | 2002-02-05 | Mitsubishi Denki Kabushiki Kaisha | Automatic door system and method for controlling automatic door |
US6297844B1 (en) * | 1999-11-24 | 2001-10-02 | Cognex Corporation | Video safety curtain |
US6678394B1 (en) * | 1999-11-30 | 2004-01-13 | Cognex Technology And Investment Corporation | Obstacle detection system |
US20010030689A1 (en) * | 1999-12-10 | 2001-10-18 | Spinelli Vito A. | Automatic door assembly with video imaging device |
US20020039135A1 (en) * | 1999-12-23 | 2002-04-04 | Anders Heyden | Multiple backgrounds |
US6469734B1 (en) * | 2000-04-29 | 2002-10-22 | Cognex Corporation | Video safety detector with shadow elimination |
US6701005B1 (en) * | 2000-04-29 | 2004-03-02 | Cognex Corporation | Method and apparatus for three-dimensional object segmentation |
US20020041698A1 (en) * | 2000-08-31 | 2002-04-11 | Wataru Ito | Object detecting method and object detecting apparatus and intruding object monitoring apparatus employing the object detecting method |
US6720874B2 (en) * | 2000-09-29 | 2004-04-13 | Ids Systems, Inc. | Portal intrusion detection apparatus and method |
US6791461B2 (en) * | 2001-02-27 | 2004-09-14 | Optex Co., Ltd. | Object detection sensor |
US20020118114A1 (en) * | 2001-02-27 | 2002-08-29 | Hiroyuki Ohba | Sensor for automatic doors |
US7260241B2 (en) * | 2001-06-12 | 2007-08-21 | Sharp Kabushiki Kaisha | Image surveillance apparatus, image surveillance method, and image surveillance processing program |
US20030053660A1 (en) * | 2001-06-21 | 2003-03-20 | Anders Heyden | Adjusted filters |
US20030071199A1 (en) * | 2001-09-28 | 2003-04-17 | Stefan Esping | System for installation |
US20040017929A1 (en) * | 2002-04-08 | 2004-01-29 | Newton Security Inc. | Tailgating and reverse entry detection, alarm, recording and prevention using machine vision |
US7003136B1 (en) * | 2002-04-26 | 2006-02-21 | Hewlett-Packard Development Company, L.P. | Plan-view projections of depth image data for object tracking |
US20040153671A1 (en) * | 2002-07-29 | 2004-08-05 | Schuyler Marc P. | Automated physical access control systems and methods |
US20040045339A1 (en) * | 2002-09-05 | 2004-03-11 | Sanjay Nichani | Stereo door sensor |
US20040218784A1 (en) * | 2002-09-05 | 2004-11-04 | Sanjay Nichani | Method and apparatus for monitoring a passageway using 3D images |
US20040061781A1 (en) * | 2002-09-17 | 2004-04-01 | Eastman Kodak Company | Method of digital video surveillance utilizing threshold detection and coordinate tracking |
US6999600B2 (en) * | 2003-01-30 | 2006-02-14 | Objectvideo, Inc. | Video scene background maintenance using change detection and classification |
US20050105765A1 (en) * | 2003-11-17 | 2005-05-19 | Mei Han | Video surveillance system with object detection and probability scoring based on object class |
Cited By (72)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080100438A1 (en) * | 2002-09-05 | 2008-05-01 | Marrion Cyril C | Multi-Zone Passageway Monitoring System and Method |
US7920718B2 (en) | 2002-09-05 | 2011-04-05 | Cognex Corporation | Multi-zone passageway monitoring system and method |
US20060037372A1 (en) * | 2003-04-10 | 2006-02-23 | Barrie Jones | Door lock |
US8326084B1 (en) | 2003-11-05 | 2012-12-04 | Cognex Technology And Investment Corporation | System and method of auto-exposure control for image acquisition hardware using three dimensional information |
US8068006B2 (en) * | 2004-12-23 | 2011-11-29 | Celec Conception Electronique | Method of detecting presence and motion for door control devices and door control devices implementing such a method |
US20080117020A1 (en) * | 2004-12-23 | 2008-05-22 | Christian Martin | Method of Detecting Presence and Motion for Door Control Devices and Door Control Devices Implementing Such a Method |
US7818583B2 (en) * | 2004-12-24 | 2010-10-19 | Fujitsu Limited | Personal authentication apparatus |
US20060143470A1 (en) * | 2004-12-24 | 2006-06-29 | Fujitsu Limited | Personal authentication apparatus |
US20070098253A1 (en) * | 2005-09-23 | 2007-05-03 | Neuricam Spa | Electro-optical device for counting persons, or other, based on stereoscopic vision, and relative method |
US20070257790A1 (en) * | 2006-05-04 | 2007-11-08 | Shmuel Hershkovitz | Security system entry control |
US7965171B2 (en) * | 2006-05-04 | 2011-06-21 | Shmuel Hershkovitz | Security system entry control |
WO2008049404A2 (en) | 2006-10-25 | 2008-05-02 | Norbert Link | Method and apparatus for monitoring a spatial volume and a calibration method |
US8384768B2 (en) * | 2006-10-25 | 2013-02-26 | Vitracom Ag | Pass-through compartment for persons and method for monitoring a spatial volume enclosed by a pass-through compartment for persons |
US20100026786A1 (en) * | 2006-10-25 | 2010-02-04 | Norbert Link | Method and device for monitoring a spatial volume as well as calibration method |
US20080266165A1 (en) * | 2007-04-27 | 2008-10-30 | Robert Patrick Daly | System for deployment of a millimeter wave concealed object detection system |
US20090140908A1 (en) * | 2007-06-07 | 2009-06-04 | Robert Patrick Daly | System for deployment of a millimeter wave concealed object detection system using an outdoor passively illuminated structure |
US7858938B2 (en) * | 2007-06-07 | 2010-12-28 | Brijot Imaging Systems, Inc. | System for deployment of a millimeter wave concealed object detection system using an outdoor passively illuminated structure |
GB2463819B (en) * | 2007-07-03 | 2012-05-30 | Shoppertrak Rct Corp | System and process for detecting, tracking and counting human objects of interest |
US20220148321A1 (en) * | 2007-07-03 | 2022-05-12 | Shoppertrak Rct Corporation | System and process for detecting, tracking and counting human objects of interest |
US9384407B2 (en) | 2007-07-03 | 2016-07-05 | Shoppertrak Rct Corporation | System and process for detecting, tracking and counting human objects of interest |
US10558890B2 (en) | 2007-07-03 | 2020-02-11 | Shoppertrak Rct Corporation | System and process for detecting, tracking and counting human objects of interest |
US20090010490A1 (en) * | 2007-07-03 | 2009-01-08 | Shoppertrak Rct Corporation | System and process for detecting, tracking and counting human objects of interest |
US11232326B2 (en) * | 2007-07-03 | 2022-01-25 | Shoppertrak Rct Corporation | System and process for detecting, tracking and counting human objects of interest |
US7965866B2 (en) * | 2007-07-03 | 2011-06-21 | Shoppertrak Rct Corporation | System and process for detecting, tracking and counting human objects of interest |
US8472672B2 (en) | 2007-07-03 | 2013-06-25 | Shoppertrak Rct Corporation | System and process for detecting, tracking and counting human objects of interest |
US8238607B2 (en) | 2007-07-03 | 2012-08-07 | Shoppertrak Rct Corporation | System and method for detecting, tracking and counting human objects of interest |
US11670086B2 (en) * | 2007-07-03 | 2023-06-06 | Shoppertrak Rct Llc | System and process for detecting, tracking and counting human objects of interest |
US20090087039A1 (en) * | 2007-09-28 | 2009-04-02 | Takayuki Matsuura | Image taking apparatus and image taking method |
US8477993B2 (en) * | 2007-09-28 | 2013-07-02 | Fujifilm Corporation | Image taking apparatus and image taking method |
DE102008006449A1 (en) * | 2008-01-29 | 2009-07-30 | Kaba Gallenschütz GmbH | Method and device for monitoring a volume of space |
WO2009095014A1 (en) * | 2008-01-29 | 2009-08-06 | Kaba Gallenschütz GmbH | Method and device for monitoring a spatial volume |
US20100147201A1 (en) * | 2008-12-17 | 2010-06-17 | 1St United Services Credit Union | Security, Monitoring and Control System for Preventing Unauthorized Entry into a Bank or Other Building |
US8171864B2 (en) | 2008-12-17 | 2012-05-08 | 1St United Services Credit Union | Security, monitoring and control system for preventing unauthorized entry into a bank or other building |
US20100188509A1 (en) * | 2009-01-23 | 2010-07-29 | Ik Huh | Central access control apparatus |
US20110032341A1 (en) * | 2009-08-04 | 2011-02-10 | Ignatov Artem Konstantinovich | Method and system to transform stereo content |
WO2011018078A1 (en) | 2009-08-11 | 2011-02-17 | Magnetic Autocontrol Gmbh | Installation for blocking passage by walking or driving, having a device for monitoring the passage area by walking or driving |
DE202009010858U1 (en) | 2009-08-11 | 2009-10-22 | Magnetic Autocontrol Gmbh | Passage or transit barrier with a device for monitoring the passage or passage area |
US20110157191A1 (en) * | 2009-12-30 | 2011-06-30 | Nvidia Corporation | Method and system for artificially and dynamically limiting the framerate of a graphics processing unit |
US9256265B2 (en) * | 2009-12-30 | 2016-02-09 | Nvidia Corporation | Method and system for artificially and dynamically limiting the framerate of a graphics processing unit |
US10909695B2 (en) | 2010-01-11 | 2021-02-02 | Shoppertrak Rct Corporation | System and process for detecting, tracking and counting human objects of interest |
US20110169917A1 (en) * | 2010-01-11 | 2011-07-14 | Shoppertrak Rct Corporation | System And Process For Detecting, Tracking And Counting Human Objects of Interest |
US8680995B2 (en) * | 2010-01-28 | 2014-03-25 | Honeywell International Inc. | Access control system based upon behavioral patterns |
US20110181414A1 (en) * | 2010-01-28 | 2011-07-28 | Honeywell International Inc. | Access control system based upon behavioral patterns |
US20130201286A1 (en) * | 2010-04-15 | 2013-08-08 | Iee International Electronics & Engineering S.A. | Configurable access control sensing device |
US9355556B2 (en) * | 2010-04-15 | 2016-05-31 | Iee International Electronics & Engineering S.A. | Configurable access control sensing device |
US8964010B2 (en) * | 2010-04-29 | 2015-02-24 | Lg Electronics Inc. | Display device and method of outputting audio signal |
US20110267440A1 (en) * | 2010-04-29 | 2011-11-03 | Heejin Kim | Display device and method of outputting audio signal |
US9734388B2 (en) | 2011-09-23 | 2017-08-15 | Shoppertrak Rct Corporation | System and method for detecting, tracking and counting human objects of interest using a counting system and a data capture device |
US10410048B2 (en) | 2011-09-23 | 2019-09-10 | Shoppertrak Rct Corporation | System and method for detecting, tracking and counting human objects of interest using a counting system and a data capture device |
US9177195B2 (en) | 2011-09-23 | 2015-11-03 | Shoppertrak Rct Corporation | System and method for detecting, tracking and counting human objects of interest using a counting system and a data capture device |
US10936859B2 (en) | 2011-09-23 | 2021-03-02 | Sensormatic Electronics, LLC | Techniques for automatically identifying secondary objects in a stereo-optical counting system |
US10733427B2 (en) | 2011-09-23 | 2020-08-04 | Sensormatic Electronics, LLC | System and method for detecting, tracking, and counting human objects of interest using a counting system and a data capture device |
US9305363B2 (en) | 2011-09-23 | 2016-04-05 | Shoppertrak Rct Corporation | System and method for detecting, tracking and counting human objects of interest using a counting system and a data capture device |
US9639760B2 (en) * | 2012-09-07 | 2017-05-02 | Siemens Schweiz Ag | Methods and apparatus for establishing exit/entry criteria for a secure location |
US20150242691A1 (en) * | 2012-09-07 | 2015-08-27 | Siemens Schweiz AG a corporation | Methods and apparatus for establishing exit/entry criteria for a secure location |
US9529451B2 (en) | 2012-12-07 | 2016-12-27 | The Nielsen Company (Us), Llc | Methods and apparatus to monitor environments |
US10049265B2 (en) | 2012-12-07 | 2018-08-14 | The Nielsen Company (Us), Llc | Methods and apparatus to monitor environments |
US10685221B2 (en) | 2012-12-07 | 2020-06-16 | The Nielsen Company (Us), Llc | Methods and apparatus to monitor environments |
US9020189B2 (en) * | 2012-12-07 | 2015-04-28 | The Nielsen Company (Us), Llc | Methods and apparatus to monitor environments |
US20140161305A1 (en) * | 2012-12-07 | 2014-06-12 | Morris Lee | Methods and apparatus to monitor environments |
US10706528B2 (en) * | 2015-02-27 | 2020-07-07 | Cognex Corporation | Detecting object presence on a target surface |
CN106997453A (en) * | 2016-01-22 | 2017-08-01 | 三星电子株式会社 | Event signal processing method and equipment |
US9900584B2 (en) * | 2016-04-27 | 2018-02-20 | Semyon Nisenzon | Depth map generation based on cluster hierarchy and multiple multiresolution camera clusters |
US11107309B2 (en) * | 2016-09-15 | 2021-08-31 | StreetScooter GmbH | Method for providing security for a transfer point |
US20180075680A1 (en) * | 2016-09-15 | 2018-03-15 | Deutsche Post Ag | Method for Providing Security for a Transfer Point |
US10777018B2 (en) * | 2017-05-17 | 2020-09-15 | Bespoke, Inc. | Systems and methods for determining the scale of human anatomy from images |
US20180336737A1 (en) * | 2017-05-17 | 2018-11-22 | Bespoke, Inc. d/b/a Topology Eyewear | Systems and methods for determining the scale of human anatomy from images |
US11495002B2 (en) * | 2017-05-17 | 2022-11-08 | Bespoke, Inc. | Systems and methods for determining the scale of human anatomy from images |
US11180344B2 (en) | 2017-05-23 | 2021-11-23 | Otis Elevator Company | Elevator doorway display systems for elevator cars |
WO2021217011A1 (en) | 2020-04-24 | 2021-10-28 | Alarm.Com Incorporated | Enhanced property access with video analytics |
EP4139836A4 (en) * | 2020-04-24 | 2023-08-02 | Alarm.com Incorporated | Enhanced property access with video analytics |
WO2023039485A1 (en) * | 2021-09-08 | 2023-03-16 | Boon Edam, Inc. | Separation systems, piggybacking detection devices, and related computer program products for controlling access to a restricted area and related methods |
Also Published As
Publication number | Publication date |
---|---|
EP1683113A2 (en) | 2006-07-26 |
WO2005048200A2 (en) | 2005-05-26 |
WO2005048200A3 (en) | 2005-12-15 |
US7623674B2 (en) | 2009-11-24 |
US20050093697A1 (en) | 2005-05-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20050249382A1 (en) | System and Method for Restricting Access through a Mantrap Portal | |
US7400744B2 (en) | Stereo door sensor | |
US7397929B2 (en) | Method and apparatus for monitoring a passageway using 3D images | |
US11232326B2 (en) | System and process for detecting, tracking and counting human objects of interest | |
US7920718B2 (en) | Multi-zone passageway monitoring system and method | |
US20040153671A1 (en) | Automated physical access control systems and methods | |
US8326084B1 (en) | System and method of auto-exposure control for image acquisition hardware using three dimensional information | |
US11657650B2 (en) | Techniques for automatically identifying secondary objects in a stereo-optical counting system | |
US8873804B2 (en) | Traffic monitoring device | |
WO2017114846A1 (en) | Depth sensing based system for detecting, tracking, estimating, and identifying occupancy in real-time | |
Snidaro et al. | Automatic camera selection and fusion for outdoor surveillance under changing weather conditions | |
US11354940B2 (en) | Method and apparatus for foreground geometry and topology based face anti-spoofing | |
US20220092807A1 (en) | Method and system for monitoring a spatial area in a personnel interlock | |
Chowdhury et al. | Human detection and localization in secure access control by analysing facial features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: COGNEX TECHNOLOGY AND INVESTMENT CORPORATION, CALI Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SCHWAB, JOHN;FIX, RAYMOND A.;NICHANI, SANJAY;REEL/FRAME:016310/0251 Effective date: 20050721 |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |