WO2001055963A1 - Providing information of objects - Google Patents

Providing information of objects

Info

Publication number
WO2001055963A1
WO2001055963A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
boundary
picture element
picture elements
picture
Application number
PCT/GB2001/000118
Other languages
French (fr)
Inventor
Olli Oinonen
Seppo Olli Antero LEPPÄJÄRVI
Original Assignee
Robotic Technology Systems Plc
Application filed by Robotic Technology Systems Plc
Priority to AU2001225345A1
Publication of WO2001055963A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/12: Edge-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30108: Industrial image inspection


Abstract

The present invention relates to a method and apparatus for providing information of objects by means of imaging apparatus. The imaging apparatus (3) employs an imaging window that is divided into a plurality of picture elements. A detection mask is moved over the picture elements for generating initial boundary information of the object (2). The initial information defines the location and direction of a boundary line of the object. Said initial boundary information is generated based on boundary strength values that are computed based on information from the detection mask. Said initial boundary information is then processed such that the width of the boundary line is modified to equal a predefined width. The boundary line information is generated only if a predefined number of adjacent picture elements is determined to form a contiguous borderline segment.

Description

PROVIDING INFORMATION OF OBJECTS
Field of the Invention
The present invention relates to provision of information of objects, and in particular, but not exclusively, to provision of boundary information of objects by means of imaging apparatus.
Background of the Invention
Handling and/or processing of objects is commonplace in various fields of industry. The objects may be, for example, any workpieces, tools, goods, articles, pallets or similar articles that are to be handled in industrial or commercial processes. During the handling operations the objects may need to be gripped, picked, machined or otherwise processed. More particularly, in several applications there exists a need to pick the objects e.g. from a conveyor or similar and/or to move the objects to another location and/or to process the objects further e.g. by subjecting the objects to predefined packaging, machining or finishing operations. The objects may also need to be moved, e.g. during manufacturing or transporting operations, by an appropriate apparatus that is arranged to move the objects.
During the further processing of an object it may be necessary to know one or more of the characteristics of the object, such as the shape or size of the object, and/or the position and orientation of the object, so that it is possible, for example, to grip and move the object from one location to another or to machine one or several of the surfaces or edges of the object. Machine vision systems may be used for providing at least part of the required information of an object to be handled and/or processed. When employing a vision system an object may be detected and/or recognised by imaging apparatus of the vision system. The object may then be subjected to further processing based on information on the object provided by the vision system. The vision system arrangement may even be such that an object and/or predefined characteristic information of an object is detected while the object is moving. After the detection the object may, for example, be picked by appropriate means such as a gripping mechanism of a robot or a manipulator or similar actuator from a conveyor, another handling apparatus or other supporting element, and moved to a desired next location or stage of processing. In other words, the machine vision based systems typically operate such that an object is detected and imaged by means of imaging apparatus such as a camera whereafter the object is recognised and predefined characteristics thereof are determined based on the image.
After the determination of information that is required to be able to process the object, the object may be processed, such as gripped by appropriate gripping means. The further processing may include operations such as machining that utilises the information received from the imaging of the object. International publications Nos. WO95/00299 and WO97/17173 discuss in more detail some examples of the (machine) vision systems and the possible processing operations that are based on information provided by the machine vision system.
According to a prior art arrangement the imaging apparatus is arranged to take a picture of the object that is placed within the imaging area of the imaging apparatus. In order to speed up the processing of subsequent objects e.g. on a conveyor, each object may have been imaged beforehand and the necessary information required for the further processing may have been stored in an object image database or library of the system. During the actual processing of subsequent objects each of the objects entering the imaging area is imaged again, and the new image is compared against the images in the database to find a matching image. When the matching image is found, it is possible to retrieve the associated information, such as machining information (parameters, tools, programs and so on) and/or software, from the database and to process the object accordingly. A recreation of the object related processing information is not required; it is enough that the object is recognised and the associated information is then retrieved from an appropriate database. According to another possibility, instead of using object information databases, the imaging system may create new processing information for the object based on the image and use this new information for processing the object. The system may also be based on a combination of these approaches.
The imaging area that is viewed by the imaging apparatus is typically divided into picture elements that can be referred to as pixels. The image typically consists of an array of pixels that may, for example, have a square, rectangular or hexagonal shape. The pixel resolution depends on the used imaging apparatus and standard. For example, CCIR cameras (CCIR is a European standard for machine vision cameras) employ a resolution of 768x576 pixels and RS170 cameras (a North American standard) operate with a resolution of 640x480 pixels. However, other pixel resolutions are also employed. Each of the pixels may indicate a value in a grey scale, such as a value in the range from 0 (black) to 255 (white). Conventionally the object has been pictured in its entirety, and all image information has been compared on a pixel-by-pixel basis against the information stored in the database. Since all pixels on top of the visible surface of the object need to be processed, this requires a lot of data processing resources and makes the working cycle of an object relatively long. The amount of data to be transmitted between the imaging apparatus and the actual processing device may also become excessive. In addition, in some applications the object that is to be processed may have features that are not essential for the purposes of further processing of the object, e.g. for gripping the object. Despite this, the image processing system may process the entire image information (i.e. all pixels) of the object and the background while processing the image data. This takes a considerable amount of time and requires a lot of computing resources.
According to one approach the amount of data to be processed is reduced by detecting points on the boundary regions of the object. The detected boundary region is then used for the purposes of the further processing instead of the digitised image of the entire object. However, conditions such as non-uniform lighting, colouring of the object and the background, reflections of light that may distort the image, dirt and dust and so on may decrease the accuracy and/or reliability of the borderline detection.
Summary of the Invention
It is an aim of the embodiments of the present invention to address one or several of the above problems.
According to one aspect of the present invention, there is provided a method of providing information of objects by means of imaging apparatus, comprising: positioning an object within an imaging window of the imaging apparatus, the imaging window being divided into a plurality of picture elements; moving a detection mask that consists of a predefined pattern of detection elements that correspond to the picture elements over the picture elements of the imaging window and generating initial boundary information defining the location of picture elements that indicate a boundary line and the direction of the boundary line at these picture elements, the generation of the initial boundary information being based on boundary strength values that are computed for the picture elements of the imaging window based on information provided by the detection mask; processing said initial boundary information such that the width of the detected boundary line is modified into a predefined width and those picture elements of the initial boundary information that are outside the predefined width are deleted; and generating boundary information only if a predefined number of adjacent picture elements form a continuous boundary segment.
According to another aspect of the present invention there is provided a system for providing information of objects, comprising: imaging apparatus arranged to generate an image of an object positioned within an imaging window of the imaging apparatus, the imaging window being divided into a plurality of picture elements; a processor connected to the imaging apparatus and arranged to process the image information provided by the imaging apparatus; a detection mask consisting of a predetermined pattern of detection elements that substantially correspond to the picture elements, the detection mask being movable over the picture elements of the imaging window for generating initial boundary information defining the location of those picture elements that indicate a boundary line and the direction of the boundary line at said picture elements, the generation of the initial boundary information being based on boundary strength values that are computed by the processor for the picture elements of the imaging window based on information provided by the detection mask, wherein the processor is arranged to generate boundary line information based on said initial boundary information such that the width of the detected boundary line is modified into a predefined width and those picture elements of the initial boundary information that are outside the predefined width are deleted and such that the boundary information is generated only if a predefined number of adjacent picture elements form a continuous boundary segment.
In accordance with an embodiment the width of the boundary line is modified to equal a measure that is defined by the size of one picture element.
In accordance with an embodiment the processing of the initial boundary information comprises further steps of determining a picture element of the initial boundary information that has a local maximum in the strength value, storing information of the picture element that has the local maximum in the strength value, determining the direction of the boundary line at said picture element, deleting information of any picture element positioned in the lateral direction from said picture element, determining if the next picture element in the direction of the boundary line has a local maximum in the strength value, and if the next picture element is determined to have a local maximum, storing information of the next picture element that has the local maximum in the strength value and deleting information of any picture element that is positioned in the lateral direction from said next picture element. The steps for generating the initial boundary information and processing said initial boundary information may overlap.
According to a further embodiment the method comprises steps of recognising the object based on the boundary information and retrieving information that associates with the object from a database.
The embodiments of the present invention may provide a fast and accurate image processing solution that does not necessarily process such image information that is of less importance for the purposes of recognising and/or positioning and/or further processing of the object. The accuracy of the processed image information is good, especially relative to the time and processing capacity required for the processing of data from an imaging apparatus.
Brief Description of Drawings
For better understanding of the present invention, reference will now be made by way of example to the accompanying drawings in which:
Figure 1 shows one embodiment of the present invention;
Figure 2 shows an image window of an imaging apparatus;
Figure 3 shows a borderline extraction mask;
Figure 4 shows an example of use of the mask of Figure 3;
Figure 5 illustrates possible mask element values for the mask of Figure 4;
Figures 6A and 6B illustrate extraction of a borderline;
Figure 7 illustrates borderline information obtained by means of modifying the borderline information of Figure 6B;
Figure 8 illustrates borderline information obtained by means of processing further the borderline information of Figure 7; and
Figure 9 is a flowchart illustrating the operation of one embodiment of the present invention.
Description of Preferred Embodiments of the Invention
Reference is first made to Figure 1 showing a schematic presentation of an embodiment of the present invention. The exemplifying object processing system includes a belt conveyor 1 for supporting and moving objects 2. The objects are moved at speed v in a direction from left to right as indicated by the arrow 10. The subsequent objects to be processed by means of the system may differ in size, shape and/or position.
It is noted that even though Figure 1 shows a conveyor, the skilled person understands that the objects to be processed may also be located in a fixed position, e.g. supported on a work table. In addition, the processing apparatus, such as a robot 5 and/or imaging apparatus 3, may be arranged to be movable relative to the objects. The skilled person is also familiar with many other possible types of conveying arrangements that could be used for moving the objects. These include chain conveyors and conveyors in which the objects are moved below the conveyor structure, e.g. supported by appropriate hangers. The objects may also be moved by another type of appropriate handling device (e.g. by means of the robot 5). Thus it is to be appreciated that the embodiments of the invention are not restricted to arrangements with conveyors but can be applied to any type of arrangement adapted to process objects.
As already briefly referred to, the processing system includes imaging apparatus. In Figure 1 the imaging apparatus comprises a camera 3. Various possibilities for the imaging apparatus are known, including, but not limited to, cameras such as CCD (Charge Coupled Device) matrix cameras, progressive scan cameras, CCIR cameras and RS170 cameras, and laser and infrared imaging applications. The camera 3 is arranged to image objects 2 on the belt 1 that are within an imaging area, i.e. window 40, between the dashed lines 4a and 4b (see also Fig. 2 or Fig. 6A).
In Figure 1 a single camera 3 is shown disposed above the conveyor 1. However, the position and general arrangement of the imaging apparatus may differ from this, and the embodiments of the invention are applicable in various possible imaging apparatus variations in which the number and positioning of the imaging apparatus and components can be freely chosen. The position of the imaging device 3 is chosen such that it is possible to detect desired points of the objects for the provision of a reliable detection of the objects forwarding on the belt. Thus the camera may also be positioned on the side of the conveyor or even below the conveyor. The imaging apparatus may also consist of more than one camera, e.g. in applications where three dimensional images of the objects are produced or great accuracy is required.
The exemplifying system of Figure 1 further includes a robot 5 for picking the objects from the conveyor 1. More particularly, the objects are picked by gripping means 6 of the robot 5. It is to be understood that the robot 5 is only an example of a possible subsequent handling and/or processing device. Any suitable actuator device may be used for the further processing of the objects 2 after they have been imaged and recognised by the machine vision system. In addition, instead of or in addition to the gripping, the further processing may comprise operations such as machining, finishing, painting and so on of an object based on the information of the object that is obtained by means of the machine vision system.
A control unit 7 of the machine vision system is also shown. The control unit is arranged to process information received from the imaging apparatus 3 via connection 8 and to control the operation of the robot 5 via connection 9. The controller unit preferably includes the required data processing and storage capability. Thus Figure 1 shows schematically a central processing unit 11 and a database 12. The central processing unit may be based on microprocessor technology. As a more practical example, the controller unit 7 may be based on a Pentium™ processor, even though a less or more powerful processor may also be employed depending on the requirements of the system and the objects to be handled. Depending on the application, the controller 7 may be provided with appropriate memory devices, drives, display means, a keyboard, a mouse or other pointing device and any adapters and interfaces that may be required. In addition, appropriate imaging software is typically required. The controller may also be provided with a network card for installations where the imaging system is connected to a data network, such as a network based on TCP/IP (Transmission Control Protocol/Internet Protocol) or a local area network (LAN).
According to the embodiments of the present invention the imaging is based on methods where only the boundary information of the object is extracted from the image and/or analysed, and wherein the object is recognised and/or positioned based on the boundary characteristics. This enables faster processing as the computing capacity and time required for the detection and computations is less than if the entire object surface area is analysed on a pixel-by-pixel basis. According to a possibility, information that is required for the further processing of the object is then retrieved from an appropriate object information database (e.g. from the database 12) based on the recognition of predefined boundary information. This retrieved information may then be used in the actual further handling and/or processing of the object.
An image window in accordance with an embodiment of the present invention will now be discussed with reference to Figure 2. Figure 2 shows an imaging window 40 and an object 2 that has been located within the area of the imaging window 40. The imaging window is divided into a two-dimensional array of picture elements 20 that are referred to in the following as pixels. Although Figure 2 shows schematically a slack net-like array of square pixels, wherein the square pixels 20 are arranged in rows and columns, it shall be understood that the imaging window may comprise a denser and different array of pixels than the illustrated "grid". For example, the imaging window may consist of 700x500 pixels. The pixels may also have a shape that differs from the square-like form. For example, the pixels may be rectangular or hexagonal. It is also to be understood that the size of a pixel on the surface of the object 2 depends on the distance between the object and the imaging apparatus.
The pixels 20 are arranged to indicate a value in a grey scale. Thus an object in the imaging window 40 can be detected based on analysis of the grey scale values in the pixels. For example, if lightly coloured or relatively white backgrounds are used, a darker, i.e. lower, value in the grey scale will indicate that an object is positioned on top of the lighter coloured background (e.g. on the belt 1 of the conveyor). Depending on various parameters, such as the lighting conditions and/or colours of the object and the belt, the grey scale value detected by the pixels will be different on the surface area of the object and on the belt area. Based on this it is possible to define from the imaging window the area covered by the object and also, as will be described below, the boundary of the object.
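As a rough illustration (not taken from the patent text), such a grey scale separation of object and background could be sketched in Python as follows; the background level of 200 is an assumed value for a light belt:

    def object_mask(img, background_grey=200):
        # A pixel noticeably darker (lower grey value) than the assumed
        # light background is taken to belong to the object. The value of
        # background_grey is illustrative and would be calibrated per belt.
        return [[pixel < background_grey for pixel in row] for row in img]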
The purpose of pre-processing the information obtained by means of the imaging apparatus 3 is to make the boundary or borderline information more accurate and to reduce the amount of data processing and/or computations required during the recognition of an object and/or the positioning thereof and/or further processing thereof. In the pre-processing the image information is "filtered" so that only such image information is saved and used in further processing as is actually required for these purposes, while the non-important information is deleted or ignored. In other words, the pre-processing of the image information aims to produce image information that comprises only those boundary segments of the object that are actually required and/or are true boundaries, clearly and accurately detected.
The pre-processing may be divided into several substages, such as extraction of the boundaries, processing the initial boundary information in order to reduce the width of the boundary lines to a predefined uniform width, and rejection of any boundary line segments that are shorter than a predefined threshold value. The following will discuss these substages in more detail.
The extraction of the borderlines comprises detection of the borderlines by convolving the grey scale image by means of an appropriate borderline extraction mask. Figure 3 discloses as an exemplifying embodiment a lattice extraction mask 30. The mask 30 comprises two lines of detection elements 21, 22 that have a common element, i.e. intersect, at point 23. The detection elements of the mask 30 correspond to the picture elements or pixels 20 of the imaging window 40. The mask 30 has two lines of detection elements that are normal to each other. The preferred length of each element line 21, 22 is five elements. The common point 23 is preferably in the middle of each line, i.e. the third element of each five element long line of elements.
To be able to locate the borderlines, the mask 30 is moved over the area to be imaged. The "strength" of a borderline is determined based on the grey scale value that is computed for each image pixel of the area to be imaged. The strength of the borderline may be computed by means of the following equations.

Gx = (z1 * a1 + z2 * a2) - (z3 * a3 + z4 * a4)    (1)

Gy = (z5 * a5 + z6 * a6) - (z7 * a7 + z8 * a8)    (2)

G = √(Gx² + Gy²)    (3)
In the above equations the value G denotes the strength of the borderline and each zi value is the grey scale value of the image pixel above the corresponding mask element ai. The zi values (grey scale values) are typically within the range 0 to 255. The mask element values may be selected from any appropriate range. "*" designates the multiplying of the values of the mask elements by the grey scale values of the respective pixels. For exemplifying values of the mask elements, see Figure 5. The total strength G of the borderline may be computed by means of equation (3) from the computed horizontal and vertical strengths of the borderline Gx, Gy. The total strength G is computed for the middle point or pixel 31 of the mask 30. In other words, the respective mask elements and pixels are multiplied with each other and the results of the multiplying are summed together.
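As an illustration, equations (1) to (3) can be evaluated at a single pixel with the short Python sketch below. The exact assignment of the mask element values to positions is an assumption inferred from Figure 5 and the worked example that follows: along each five-element line the centre element is skipped and the remaining four elements take the values 1, 2, -2 and -1 in order.

    # Assumed lattice-mask layout (cf. Figure 5): coefficients 1, 2, -2, -1
    # along each five-element line, with the centre element skipped.
    # Each entry is an (offset from the centre, coefficient) pair.
    COEFFS = [(-2, 1), (-1, 2), (1, -2), (2, -1)]

    def border_strength(img, x, y):
        # img is a grey scale image held as a 2D list (rows of 0..255
        # values); (x, y) is the pixel under the mask centre, point 23.
        gx = sum(c * img[y][x + d] for d, c in COEFFS)   # horizontal line 21
        gy = sum(c * img[y + d][x] for d, c in COEFFS)   # vertical line 22
        return gx, gy, (gx * gx + gy * gy) ** 0.5        # equations (1)-(3)

With the grey scale values of Figure 4 this sketch reproduces the results computed below for mask positions (1) and (2).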
The mask 30 is moved on top of each pixel. The strength of the borderline is computed for corresponding positions in the resulting image information. The two columns at the right and left hand sides, as well as the two rows at the top and bottom of the imaging window 40, are preferably ignored. This may be done since otherwise the mask 30 would extend over the imaging area, and it would be impossible to compute strength values for the detection elements of the mask that are not positioned above the picture elements.
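Building on the border_strength sketch above, the sweep over the window, leaving the two outermost rows and columns untouched as just described, might look as follows:

    def strength_image(img):
        # Compute the borderline strength G for every pixel of the window,
        # skipping the two-pixel border where the mask would extend
        # outside the imaging area.
        h, w = len(img), len(img[0])
        G = [[0.0] * w for _ in range(h)]
        for y in range(2, h - 2):
            for x in range(2, w - 2):
                G[y][x] = border_strength(img, x, y)[2]
        return G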
Figure 4 illustrates a portion (5x12 pixels) of an imaging window. The numbers designate the grey scale values of the respective pixels. A clear vertical line can be seen on the left hand side at a point where the grey scale values change from about 50 to about 150. In the following equations the strength values are computed for two mask positions (1) and (2) with mask element values that correspond to those of Figure 5 (i.e. with values 1, 2, -2, -1). The final G values are positioned in the resulting initial image (i.e. initial image information) at a point that corresponds to the centre of the lattice mask 30.
For mask position (1):

Gx = 1 * 147 + 2 * 150 + (-2) * 146 + (-1) * 147 = 8
Gy = 1 * 50 + 2 * 51 + (-2) * 150 + (-1) * 150 = -298
G = √(8² + (-298)²) ≈ 298

For mask position (2):

Gx = 1 * 145 + 2 * 146 + (-2) * 146 + (-1) * 150 = -5
Gy = 1 * 150 + 2 * 150 + (-2) * 150 + (-1) * 150 = 0
G = √((-5)² + 0²) = 5
As can be seen, the G value indicates that the borderline strength with the greatest magnitude was found at mask position (1).
In other words, the G value that is computed for each pixel defines the difference in the grey scale values between the different sides of a particular pixel at a given moment of time. When the lattice mask 30 is used the particular pixel under consideration is positioned at the common point 23 where the vertical and horizontal lines 21, 22 of detection elements intersect. The greater the difference between the values of the horizontal and, respectively, vertical "branches", the more likely it is that the mask 30 is positioned on top of a discontinuity, i.e. on top of a borderline of the object. The boundary information may then be based on those G values that exceed a predefined threshold value. When the lattice mask 30 is used, large G values are obtained for the edge areas of the object and small G values are obtained for the plain surface areas of the object. Figure 4 shows a more practical example of the values that may be obtained by a borderline extraction mask 30 used for pre-processing of an image.
It is noted that other types and sizes of masks may also be used. For example, it is possible to extract the boundary by a 3x3 or a 5x5 box-type mask.
Figure 6A shows an example of an object to be imaged. Figure 6B shows the obtained borderline image that is based on the image information obtained by the use of the mask 30 of Figure 5. However, the inventors have found that conditions such as non-uniform lighting, reflections, colouring of the object and background, dirt and dust and so on may substantially decrease the reliability of the borderline detection. The borderlines may appear wider than they actually are and/or than is necessary from the point of view of further processing of the object. In practice, the borderline image or initial image information obtained by means of any type of borderline extraction mask comprises borderlines that in most cases have a width of several pixels. This may make the borderline information unclear and inaccurate. Therefore it is advantageous to be able to modify the borderline information such that an evenly dimensioned and preferably as thin a borderline as possible may be produced.
When the image of Figure 6A is examined by means of the lattice mask 30 it is possible to compute the strength of the borderline separately for both the horizontal and the vertical directions. Based on these two values it is then possible to compute the direction of the borderline at a given pixel. The direction of the borderline may be stored during the extraction of the borderline information for use in the further processing of the borderline information, that is during the modifying of the borderline to produce evenly dimensioned, thin borderlines.
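One plausible realisation of this direction computation is sketched below; the quantisation of the direction to the four 8-neighbour axes is an assumption, as the description only states that a direction is computed from the two strengths and stored:

```python
import numpy as np

def borderline_directions(gx, gy):
    """Quantise the borderline direction at each pixel to one of four
    axes: 0 = horizontal, 1 = 45 deg, 2 = vertical, 3 = 135 deg.

    The vector (gx, gy) points across the borderline, so the line
    itself runs perpendicular to it.
    """
    across = np.degrees(np.arctan2(gy, gx))  # direction across the line
    along = (across + 90.0) % 180.0          # direction along the line
    return np.round(along / 45.0).astype(int) % 4
```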
The borderline image may be used in applications where an object needs to be recognised and positioned, whereafter the further processing parameters may be retrieved from an appropriate database. The inventors have found that especially in this kind of application it may be advantageous to narrow the borderlines to a width of one pixel. By means of narrowing the borderline it is possible to reduce the number of pixels that need to be processed by the positioning algorithm during the recognition and/or positioning process, and thus to speed up the operation of the system processing objects based on information from the imaging apparatus.
According to a preferred embodiment, a local maximum value is searched for in the initial borderline image of Figure 6B during the borderline narrowing process (see also the flowchart of Figure 9). In other words, the narrowing algorithm seeks a pixel on the borderline that has a value greater than the values of the neighbouring pixels. This is based on the realisation that pixels with maximum G values should be found in the middle of the borderline segments (in the lateral direction of the borderlines). Only those pixels that have the strongest borderline value are saved while the others are deleted from the borderline information. The deleted pixels will not be used when processing the borderline information further.
When a pixel with a local maximum is found and saved to form a part of the final borderline image, the direction of the borderline at the point of that pixel is determined. The direction has already been determined during the extraction of the borderline and this information may be used herein. The next step is to move the point under consideration by one pixel in the direction of the borderline and to check whether this next pixel is also a local maximum. The previous local maximum is preferably ignored at this stage so that the narrowing operation may proceed even in such instances where the previous pixel has a local maximum that is greater than the next pixel under consideration. If the new pixel is a local maximum, it is saved in the final borderline image that comprises a borderline that is one pixel wide.
In case the new pixel is not a local maximum, this indicates that a line segment has been investigated to its end. The algorithm will then start again to look for such local maximum values as have not yet been processed. When a new local maximum is found, the line segment will be investigated to its end, i.e. until the last local maximum that associates with the particular borderline segment has been processed. When no new, i.e. unprocessed, local maximums can be found, the borderline narrowing procedure may be stopped.
It is noted that the local maximums, and more particularly, information of the pixels that have a local maximum strength value, are saved only if they are strong enough, i.e. if they exceed a threshold value. That is, information of a pixel is saved only if its grey scale value is greater than a predefined minimum grey scale value for the borderline.
The process is repeated until all of the local maximums of the initial borderline information have been gone through. The result of this will be a clearer borderline image with even borderlines, as is shown in Figure 7.
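The narrowing procedure may be sketched as follows, under a few assumptions that the description leaves open: directions are quantised to four axes as above, a pixel counts as a local maximum when neither of its two lateral neighbours is stronger, and each segment is traced in one direction only from the first local maximum found. This is one possible realisation of the flowchart of Figure 9, not the only one.

```python
import numpy as np

# One pixel step along each quantised line direction, and the lateral
# (perpendicular) step used for the local-maximum test.
ALONG = {0: (0, 1), 1: (-1, 1), 2: (-1, 0), 3: (-1, -1)}
LATERAL = {0: (1, 0), 1: (1, 1), 2: (0, 1), 3: (1, -1)}

def is_local_max(G, y, x, d):
    dy, dx = LATERAL[d]
    return G[y, x] >= G[y + dy, x + dx] and G[y, x] >= G[y - dy, x - dx]

def narrow(G, directions, min_strength):
    """Produce a one pixel wide borderline image from the strength map G."""
    h, w = G.shape
    final = np.zeros((h, w), dtype=bool)
    done = np.zeros((h, w), dtype=bool)
    for y in range(3, h - 3):
        for x in range(3, w - 3):
            if done[y, x] or G[y, x] < min_strength:
                continue
            if not is_local_max(G, y, x, directions[y, x]):
                continue
            # An unprocessed local maximum starts a segment: walk it to
            # its end, saving each local maximum that is strong enough.
            cy, cx = y, x
            while (3 <= cy < h - 3 and 3 <= cx < w - 3
                   and not done[cy, cx]
                   and G[cy, cx] >= min_strength
                   and is_local_max(G, cy, cx, directions[cy, cx])):
                final[cy, cx] = True
                done[cy, cx] = True
                dy, dx = ALONG[directions[cy, cx]]
                cy, cx = cy + dy, cx + dx
    return final
```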
According to a further embodiment the borderline information is processed further to detect and delete such segments of the borderline that may not be of relevance for the further processing of the object or that may be a result of an error that occurred during the extraction process. More particularly, the length of each borderline segment is compared against a threshold value. A borderline segment means a portion of the borderline information that is continuous throughout the length of the segment. If the length is smaller than a predefined value, for example 10 pixels, the borderline segment is determined to be irrelevant and/or incorrect and is deleted from the picture information. By means of this it is possible to produce a smooth and clean borderline image (see Figure 7), as any "too short" and incorrect or "unreal" line segments can be removed from the image. It is also possible to reduce further the amount of required processing of the image data during the recognition and/or positioning of the object.
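This filtering step may be sketched as follows, assuming SciPy is available and treating 8-connected runs of borderline pixels as segments; the minimum length of 10 pixels is the example value given above:

```python
import numpy as np
from scipy import ndimage

def remove_short_segments(borderline, min_length=10):
    # Label 8-connected groups of borderline pixels as segments.
    structure = np.ones((3, 3), dtype=int)
    labels, n = ndimage.label(borderline, structure=structure)
    if n == 0:
        return borderline.copy()
    # Count the pixels of each segment and keep only the long ones.
    sizes = ndimage.sum(borderline, labels, index=np.arange(1, n + 1))
    keep = np.concatenate(([False], sizes >= min_length))
    return keep[labels]
```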
The above discusses the different stages of processing of the image information as successive operations. In some embodiments the stages may overlap. For example, when the borderline extraction mask advances to the third pixel row of an image, it is possible to start the modifying operations on the borderlines in the first pixel row of the image. By means of this it is possible to reduce further the overall time of processing of the image information.
It is possible to change the accuracy level of the imaging apparatus. This may be done e.g. by means of changing the width of the borderline and/or the length of the acceptable borderline segments .
It was already noted above that the described solution is also applicable in the case of several cameras or other imaging devices attached to the vision system. The two or more imaging devices may also be monitoring different conveyors. Each of the imaging devices may have its own control and/or processor system, or the arrangement may be such that a common controller controls the operation of each of the separate imaging devices.
A message may be shown to the user to confirm a successful imaging procedure. If the process fails, an error message may be shown.
It should be appreciated that whilst embodiments of the present invention have been described in relation to locating/finding of an object from an image, embodiments of the present invention are applicable to any other type of operations as well. An example of the other possibilities is determination of a machining path for a machining tool. In this embodiment the outer borderlines of the object are detected by means of the above described processing method, whereafter the object is subjected to machining operations by the machining tool (e.g. grinding) in accordance with the detected and processed borderline information. The machining tool may be attached to and moved by an arm of the robot 5 of Figure 1.
It is also noted herein that while the above describes exemplifying embodiments of the invention, there are several variations and modifications which may be made to the disclosed solution without departing from the scope of the present invention as defined in the appended claims.

Claims
1. A method of providing information of objects by means of imaging apparatus, comprising: positioning an object within an imaging window of the imaging apparatus, the imaging window being divided into a plurality of picture elements; moving a detection mask that consists of a predefined pattern of detection elements that correspond to the picture elements over the picture elements of the imaging window and generating initial boundary information defining the location of picture elements that indicate a boundary line and the direction of the boundary line at these picture elements, the generation of the initial boundary information being based on boundary strength values that are computed for the picture elements of the imaging window based on information provided by the detection mask; processing said initial boundary information such that the width of the detected boundary line is modified into a predefined width and those picture elements of the initial boundary information that are outside the predefined width are deleted; and generating boundary information only if a predefined number of adjacent picture elements form a continuous boundary segment.
2. A method according to claim 1, wherein the width of the boundary line is modified to equal a measure defined by the size of one picture element.
3. A method according to claim 1 or 2, wherein the minimum number of adjacent picture elements that are required to form a boundary segment is 10.
4. A method according to any of the preceding claims, wherein the boundary information is generated based on information of the picture elements that have the strongest value in a grey scale.
5. A method according to any of the preceding claims, wherein the detection mask comprises a lattice mask consisting of two lines of detection elements, wherein the boundary strength value for a picture element is computed for a point of the lattice mask in which the two lines of detection elements have a common element.
6. A method according to claim 5, wherein the lines are normal to each other, the length of each of the lines equals five picture elements, and the common element is positioned in the middle of each line.
7. A method according to any preceding claim, wherein the step of processing the initial boundary information comprises: determining a picture element of the initial boundary information that has a local maximum in the strength value; storing information of the picture element that has the local maximum in the strength value; determining the direction of the boundary line at said picture element; deleting information of any picture element positioned in the lateral direction from said picture element; determining if the next picture element in the direction of the boundary line has a local maximum in the strength value; and if the next picture element is determined to have a local maximum, storing information of the next picture element that has the local maximum in the strength value and deleting information of any picture element that is positioned in the lateral direction from said next picture element.
8. A method according to claim 7, wherein information of a picture element is stored only if the determined local maximum strength value exceeds a predefined threshold value.
9. A method according to claim 7 or 8, wherein the processing of the initial boundary information is continued until all picture elements of the initial boundary information that have a local maximum strength value are processed.
10. A method according to any of claims 7 to 9, wherein the local maximum strength value of the previous picture element in the boundary line is ignored when determining if the next adjacent picture element in the boundary line has a local maximum strength value.
11. A method according to any of claims 7 to 10, wherein, if it is detected that the next picture element does not have a local maximum value, it is determined that the previous picture element was the last element of a borderline segment.
12. A method according to any of the preceding claims, wherein the steps for generating the initial boundary information and processing said initial boundary information overlap.
13. A method according to any of the preceding claims, comprising recognising an object based on the boundary information and retrieving information that associates with the object from a database.
14. A method according to any preceding claim, comprising further processing of the object based on the boundary information.
15. A method according to claim 14, wherein the further processing comprises machining of the object.
16. A method according to any preceding claim, wherein the imaging apparatus comprises at least one camera.
17. A method according to any preceding claim, wherein the objects are moved into the imaging window of the imaging apparatus by conveyor apparatus.
18. A method according to any preceding claim, wherein the accuracy level of the boundary information is changed.
19. A system for providing information of objects, comprising: imaging apparatus arranged to generate an image of an object positioned within an imaging window of the imaging apparatus, the imaging window being divided into a plurality of picture elements; a processor connected to the imaging apparatus and arranged to process the image information provided by the imaging apparatus; a detection mask consisting of a predetermined pattern of detection elements that substantially correspond to the picture elements, the detection mask being movable over the picture elements of the imaging window for generating initial boundary information defining the location of those picture elements that indicate a boundary line and the direction of the boundary line at said picture elements, the generation of the initial boundary information being based on boundary strength values that are computed by the processor for the picture elements of the imaging window based on information provided by the detection mask; wherein the processor is arranged to generate boundary line information based on said initial boundary information such that the width of the detected boundary line is modified into a predefined width and those picture elements of the initial boundary information that are outside the predefined width are deleted and such that the boundary information is generated only if a predefined number of adjacent picture elements form a continuous boundary segment.
20. A system according to claim 19, wherein the width of the boundary line is modified to equal a measure defined by the size of one picture element.
21. A system according to claim 19 or 20, wherein the boundary information is generated based on information of the picture elements that have the strongest detected grey scale value.
22. A system according to any of claims 19 to 21, wherein the detection mask comprises a lattice mask consisting of two lines of detection elements, the arrangement being such that the strength value for a picture element is computed for a point of the lattice mask in which the two lines of detection elements have a common element.
23. A system according to any of claims 19 to 22, wherein the processor is arranged to: determine a picture element of the initial boundary information that has a local maximum strength value; determine the direction of the boundary line at said picture element; store information of the picture element that has the local maximum strength value to a database and delete information of any picture elements that are positioned in the lateral direction from said picture element; determine if the next picture element in said direction of the boundary line has a local maximum strength value; and if the next picture element is determined to have a local maximum strength value, store information of the next picture element that has the local maximum strength value to the database and delete information of any picture elements that are positioned in the lateral direction from said next picture element.
24. A system according to any of claims 19 to 23, wherein the objects are moved by conveyor apparatus and processed by at least one actuator apparatus, and the processor is arranged to control the operation of said at least one actuator apparatus based on information from the imaging apparatus.
25. A system according to claim 24, wherein the actuator apparatus comprises at least one industrial robot.
PCT/GB2001/000118 2000-01-26 2001-01-12 Providing information of objects WO2001055963A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001225345A AU2001225345A1 (en) 2000-01-26 2001-01-12 Providing information of objects

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0001813.5 2000-01-26
GB0001813A GB2358702A (en) 2000-01-26 2000-01-26 Providing information of objects by imaging

Publications (1)

Publication Number Publication Date
WO2001055963A1 true WO2001055963A1 (en) 2001-08-02

Family

ID=9884392

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2001/000118 WO2001055963A1 (en) 2000-01-26 2001-01-12 Providing information of objects

Country Status (3)

Country Link
AU (1) AU2001225345A1 (en)
GB (1) GB2358702A (en)
WO (1) WO2001055963A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5757961A (en) * 1992-06-11 1998-05-26 Canon Kabushiki Kaisha Image processing method and apparatus for obtaining a zoom image using contour information of a binary image
US5974169A (en) * 1997-03-20 1999-10-26 Cognex Corporation Machine vision methods for determining characteristics of an object using boundary points and bounding regions

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5396584A (en) * 1992-05-29 1995-03-07 Destiny Technology Corporation Multi-bit image edge enhancement method and apparatus
WO1994018639A1 (en) * 1993-02-09 1994-08-18 Siemens Medical Systems, Inc. Method and apparatus for generating well-registered subtraction images for digital subtraction angiography
US5987172A (en) * 1995-12-06 1999-11-16 Cognex Corp. Edge peak contour tracker
US5845007A (en) * 1996-01-02 1998-12-01 Cognex Corporation Machine vision method and apparatus for edge-based image histogram analysis
JPH10191020A (en) * 1996-12-20 1998-07-21 Canon Inc Object image segmenting method and device


Also Published As

Publication number Publication date
AU2001225345A1 (en) 2001-08-07
GB2358702A (en) 2001-08-01
GB0001813D0 (en) 2000-03-22


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP