US20020123977A1 - Neural network assisted multi-spectral segmentation system - Google Patents

Neural network assisted multi-spectral segmentation system

Info

Publication number
US20020123977A1
US20020123977A1
Authority
US
United States
Prior art keywords
nuclear
digitized
analyzing
images
wavelength
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US09/970,610
Inventor
Ryan Raz
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Veracel Inc
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US09/970,610
Assigned to VERACEL INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: RAZ, RYAN
Publication of US20020123977A1
Current legal status: Abandoned

Classifications

    • G01N15/1433
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/155Segmentation; Edge detection involving morphological operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/69Microscopic objects, e.g. biological cells or cellular parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10056Microscopic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20036Morphological image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30024Cell structures in vitro; Tissue sections in vitro

Definitions

  • the segmentation is the delineation of the objects of interest within the micrographic image.
  • “background” material, debris and contamination interferes with the identification of the cervical cells and therefore must be delineated.
  • Feature Extraction operation is performed after the completion of the segmentation operation.
  • Feature extraction comprises characterizing the segmented regions as a series of descriptors based on the morphological, textural, densitometric and colorimetric attributes of these regions.
  • the Classification step is the final step in the image analysis.
  • the features extracted in the previous stage are used in some type of discriminant-based classification procedure.
  • the results of this classification are then translated into a “diagnosis” of the cells in the image.
  • the stains or dyes included in the Papanicolaou Stain are Haematoxylin, Orange G and Eosin Azure (a mixture of two acid dyes, Eosin Y and Light Green SF Yellowish, together with Bismarck Brown). Each stain component is sensitive to or binds selectively to a particular cell structure or material. Haematoxylin binds to the nuclear material, coloring it dark blue. Orange G is an indicator of keratin protein content. Eosin Y stains nucleoli, red blood cells and mature squamous epithelial cells. Light Green SF Yellowish stains metabolically active epithelial cells. Bismarck Brown stains vegetable material and cellulose.
  • the first stage according to the present invention comprises the acquisition of three images of the same micrographic scene. Each image is obtained using a different narrow band-pass optical filter which has the effect of selecting a narrow band of optical wavelengths associated with distinguishing absorption peaks in the stain spectra. The choice of optical wavelength bands is guided by the degree of separation afforded by these peaks when used to distinguish the different types of cellular material on the slide surface.
  • the second stage according to the invention comprises a neural-network (trained on an extensive set of typical examples) to make decisions on the identity of material already deemed to be cellular in origin.
  • the neural network decides whether a picture element in the digitized image is nuclear or non-nuclear in character. With the completion of this step the system can continue by applying a standard range of image processing techniques to refine the segmentation.
  • the relationship between the cellular components and the transmission intensity of the light images in each of the three spectral bands is a complex and non-linear one.
  • Papanicolaou Stain is a combination of several stains or dyes together with a specific protocol designed to emphasize and delineate cellular structures of importance to pathological analysis.
  • the stains or dyes included in the Papanicolaou Stain are Haematoxylin, Orange G and Eosin Azure (a mixture of two acid dyes, Eosin Y and Light Green SF Yellowish, together with Bismarck Brown).
  • Each stain component is sensitive to or binds selectively to a particular cellular structure or material.
  • three optical wavelength bands are used in a complex procedure to segment Papanicolaou-stained epithelial cells in digitized images.
  • the procedure utilizes standard segmentation operations (erosion, dilation, etc.) together with the neural-network to identify the location of nuclear components in areas already determined to be cellular material.
  • the purpose of the segmentation is to extract the cellular objects, i.e. to distinguish the nucleus of the cell from the cytoplasm.
  • the multi-spectral images are divided into two classes, cytoplasm objects and nuclear objects, which are separated by a multi-dimensional threshold surface t in a 3-dimensional space.
  • the neural network according to the invention comprises a Probability Projection Neural Network (PPNN).
  • PPNN Probability Projection Neural Network
  • the PPNN according to the present invention features fast training for a large volume of data, processing of multi-modal non-Gaussian data distribution, good generalization simultaneously with high sensitivity to small clusters of patterns representing the useful subclasses of cells.
  • the PPNN is implemented as a hardware-encoded algorithm.
  • the step of analyzing the values for each pixel preferably includes utilizing a classifier trained on a training set of data developed from images having known regions of nuclear and cytoplasmic material.
  • the previously developed classification information preferably includes values for pixels stored in a look-up table in a memory storage device.
  • the previously developed classification information preferably has memory addresses in the look-up table, the memory addresses comprising a concatenation of the values from each of the digitized images representing the same region of the at least one cell.
  • Each of the digitized images is drawn from a band of optical wavelengths.
  • the previously developed classification information includes a predetermined discriminant between values for pixels representing regions of nuclear and cytoplasmic material.
  • the method preferably includes the step of assigning a classification to each pixel as representing nuclear or cytoplasmic material.
  • the step of analyzing may include applying a linear discriminant analysis to define a linear boundary between values for pixels representing regions of nuclear and cytoplasmic material.
  • the linear discriminant analysis preferably discriminates between pixels of nuclear material and at least two types of cytoplasmic material.
  • the at least one cell of unknown nuclear or cytoplasmic material may comprise, for example, a cellular sample prepared according to the Papanicolaou staining procedure.
  • the previously developed classification information may be developed by: providing a plurality of digitized images of at least one cell of regions of known nuclear or cytoplasmic material, each digitized image being formed from an optical image having a plurality of pixels associated therewith, each digitized image being formed in a narrow band of optical wavelength different from the other digitized images, each of the plurality of pixels having values from each of the digitized images; assigning a classification to each pixel as representing regions of nuclear or cytoplasmic material; and storing the classifications in a look-up table in a memory storage device.
  • the step of storing preferably includes storing the classifications in locations of the look-up table, the locations having addresses corresponding to the pixels.
  • the step of analyzing comprises neural network processing of the values for pixels of the digitized images.
  • the neural network processing comprises accepting one or more inputs into at least one processing element, multiplying the inputs by weighting factors, and applying a formula to the weighted inputs to provide an output to a plurality of other processing elements.
  • Each value for a pixel is preferably accepted and processed by a different one of the at least one processing elements.
  • a method of analyzing cells comprises providing a plurality of digitized images of at least one cell.
  • Each digitized image is formed in a narrow band of optical wavelength different from the other digitized images and includes: at least one first digitized image in a wavelength of between 525 and 575 nanometers; at least one second digitized image in a wavelength of between 565 and 582 nanometers; and at least one third digitized image in a wavelength of between 625 and 635 nanometers.
  • the digitized images are analyzed to identify nuclear and cytoplasmic material of the at least one cell.
  • in one example, the at least one first digitized image may have a wavelength of between 525 and 535 nanometers, the at least one second digitized image a wavelength of between 572 and 582 nanometers, and the at least one third digitized image a wavelength of between 625 and 635 nanometers.
  • in another example, the at least one first digitized image may have a wavelength of between 535 and 545 nanometers, the at least one second digitized image a wavelength of between 572 and 582 nanometers, and the at least one third digitized image a wavelength of between 625 and 635 nanometers.
  • in a further example, the at least one first digitized image may have a wavelength of between 565 and 575 nanometers, the at least one second digitized image a wavelength of between 565 and 575 nanometers, and the at least one third digitized image a wavelength of between 625 and 635 nanometers.
  • the step of analyzing the pixels preferably includes utilizing a classifier trained on a set of data developed from images having known regions of nuclear and cytoplasmic material.
  • the step of analyzing preferably includes analyzing the digitized images based upon previously developed classification information.
  • the step of analyzing may comprise analyzing values for each pixel, each pixel having values from each of the digitized images.
  • a classification is preferably assigned to each pixel as representing regions of nuclear or cytoplasmic material.
  • the at least one cell may comprise, for example, a cellular sample prepared according to the Papanicolaou staining procedure.
  • the previously developed classification information is preferably developed from at least one cell of known regions of nuclear or cytoplasmic material for analyzing cells of unknown regions of nuclear or cytoplasmic material.
  • the method includes the step of storing the previously developed classification information in a look-up table in an electronic memory device.
  • the step of analyzing may comprise neural network processing of the digitized images.
  • the present invention provides a method for identifying nuclear and cytoplasmic objects in a biological specimen, said method comprising the steps of: (a) acquiring a plurality of images of said biological specimen; (b) identifying cellular material from said images and creating a cellular material map; (c) applying a neural network to said cellular material map and classifying nuclear and cytoplasmic objects from said images.
  • the present invention provides a system for identifying nuclear and cytoplasmic objects in a biological specimen, said system comprising: (a) image acquisition means for acquiring a plurality of images of said biological specimen; (b) processing means for processing said images and generating a cellular material map identifying cellular material; (c) neural processor means for processing said cellular material map and including means for classifying nuclear and cytoplasmic objects from said images.
  • the present invention provides a hardware-encoded neural processor for classifying input data, said hardware-encoded processor comprising: (a) a memory having a plurality of addressable storage locations; (b) said addressable storage locations containing classification information associated with the input data; (c) address generation means for generating an address from said input data for accessing the classification information stored in said memory for selected input data.
  • FIG. 1 shows in flow chart form a neural network assisted multi-spectral segmentation method according to the present invention
  • FIG. 2 shows in diagrammatic form a processing element for the neural network
  • FIG. 3 shows in diagrammatic form a neural network comprising the processing elements of FIG. 2;
  • FIG. 4 shows in diagrammatic form a training step for the neural network
  • FIG. 5 shows in flow chart form a clustering algorithm for the neural network according to the present invention.
  • FIG. 6 shows a hardware implementation for the neural network according to the present invention.
  • the present invention provides a Neural Network Assisted Multi-Spectral Segmentation (also referred to as NNA-MSS) system and method.
  • NNA-MSS Neural Network Assisted Multi-Spectral Segmentation
  • the multi-spectral segmentation method is related to that described and claimed in co-pending International Patent Application No. CA96/00477 filed Jul. 18, 1996 and in the name of the applicant.
  • the NNA-MSS according to the present invention is particularly suited to Papanicolaou-stained gynaecological smears and will be described in this context. It is however to be understood that the present invention has wider applicability to applications outside of Papanicolaou-Stained smears.
  • FIG. 1 shows in flow chart form a Neural Network Assisted Multi-Spectral Segmentation (NNA-MSS) method 1 according to the present invention.
  • the first step 10 involves inputting three digitized images, i.e. micrographic scenes, of a cellular specimen.
  • the images are taken in each of the three narrow optical bands: 540±5 nm; 577±5 nm and 630±5 nm.
  • (The images are generated by an imaging system (not shown), as will be understood by one skilled in the art, and thus need not be described in detail here.)
  • the images are next processed by the multi-spectral segmentation method 1 and neural network as will be described.
  • the images are subjected to a leveling operation (block 12 ).
  • the leveling operation 12 involves removing the spatial variations in the illumination intensity from the images.
  • the leveling operation is implemented as a simple mathematical routine using known image processing techniques.
  • the result of the leveling operation is a set of 8-bit digitized images with uniform illumination across their fields.
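The patent describes the leveling only as a simple mathematical routine without giving it. As an illustration only, the Python sketch below flattens illumination by dividing out a smoothed background estimate; the Gaussian estimate and the sigma value are assumptions, not the patent's procedure.

```python
import numpy as np
from scipy import ndimage

def level(image, sigma=50):
    """Flatten slowly varying illumination and rescale to 8 bits.
    The large-sigma Gaussian background estimate is an assumption."""
    img = image.astype(float)
    illum = ndimage.gaussian_filter(img, sigma)   # illumination field
    flat = img / np.maximum(illum, 1e-6)          # divide it out
    return (flat / flat.max() * 255.0).astype(np.uint8)
```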
  • the 8-bit digitized images first undergo a series of processing steps to identify cellular material in the digitized images.
  • the digitized images are then processed by the neural network to segment the nuclear objects from the cytoplasm objects.
  • the next operation comprises a threshold procedure block 14 .
  • the threshold procedure involves analyzing the leveled images in a search for material of cellular origin.
  • the threshold procedure 14 is applied to the 530 nm and 630 nm optical wavelength bands and comprises identifying material in the image of cellular origin as regions of the digitized image that fall within a range of specific digital values.
  • the threshold procedure 14 produces a single binary “map” of the image where the single binary bit identifies regions that are, or are not, cellular material.
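A minimal sketch of the threshold procedure of block 14, assuming illustrative digital limits (the patent does not specify the range of values that counts as cellular):

```python
import numpy as np

def cellular_map(img530, img630, lo=30, hi=225):
    """Block 14 sketch: flag pixels of the leveled 530 nm and 630 nm
    images whose digital values fall within a 'cellular' range.
    The limits 30 and 225 are illustrative assumptions."""
    in_range = lambda im: (im >= lo) & (im <= hi)
    return in_range(img530) & in_range(img630)   # single binary map
```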
  • the threshold operation 14 is followed by a dilation operation (block 16 ).
  • the dilation operation 16 is a conventional image processing operation which modifies the binary map of cellular material generated in block 14.
  • the dilation operation allows the regions of cellular material to grow or dilate by one pixel in order to fill small voids in large regions.
  • the dilation operation 16 is modified with the condition that the dilation does not allow two separate regions of cellular material to join to make a single region, i.e. a “no-join” condition. This condition allows the accuracy of the binary map to be preserved through dilation operation 16 .
  • the dilation operation is applied twice to ensure a proper filling of voids.
  • the result of the dilation operations 16 is a modified binary map of cellular material.
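The "no-join" condition makes this a conditional dilation rather than a standard one. The sketch below is one plausible reading of that condition, not the patent's code: each pass grows regions by one pixel but refuses any background pixel bordered by more than one distinct region.

```python
import numpy as np
from scipy import ndimage

def no_join_dilate(binary_map, iterations=2):
    """Grow regions by one pixel per iteration without letting two
    separate regions merge (the 'no-join' condition of block 16)."""
    labels, _ = ndimage.label(binary_map)
    labels = labels.astype(np.int32)
    for _ in range(iterations):
        # largest and smallest non-zero label in each 3x3 neighborhood
        neigh_max = ndimage.maximum_filter(labels, size=3)
        no_zero = np.where(labels == 0, np.iinfo(np.int32).max, labels)
        neigh_min = ndimage.minimum_filter(no_zero, size=3)
        # a background pixel may join only if exactly one region borders it
        grow = (labels == 0) & (neigh_max > 0) & (neigh_min == neigh_max)
        labels = np.where(grow, neigh_max, labels)
    return labels > 0
```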
  • the dilation operation 16 is followed by an erosion operation (block 18 ).
  • the erosion operation 18 brings the modified binary map of cellular material (a result of the dilation operation 16 ) back to its original boundaries.
  • the erosion operation 18 is implemented using conventional image processing techniques.
  • the erosion operation 18 allows the cellular boundaries in the binary image to shrink or erode but will not affect the filled voids.
  • the erosion operation 18 has the additional effect of eliminating small regions of cellular material that are not important to the later diagnostic analysis.
  • the result of the erosion operation 18 is a final binary map of the regions in the digitized image that are cytoplasm.
  • the next stage according to the invention is the operation of the neural network at block 20 .
  • the neural network 20 is applied to the 8-bit digitized images, with attention restricted to those regions that lie within the cytoplasm as determined by the final binary cytoplasm map generated as a result of the previous operations.
  • the neural network 20 makes decisions concerning the identity of individual picture elements (or “pixels”) in the binary image as either being part of a nucleus or not part of a nucleus.
  • the result of the operation of the neural network is a digital map of the regions within the cytoplasm that are considered to be nuclear material.
  • the nuclear material map is then subjected to further processing.
  • the neural network 20 according to the present invention is described in detail below.
  • the resulting nuclear material map is subjected to an erosion operation (block 22 ).
  • the erosion operation 22 eliminates regions of the nuclear material map that are too small to be of diagnostic significance.
  • the result is a modified binary map of nuclear regions.
  • the modified binary map resulting from the erosion operation 22 is then subjected to a dilation operation (block 24 ).
  • the dilation operation 24 is subject to a no-join condition, such that, the dilation operation does not allow two separate regions of nuclear material to join to make a single region. In this way the accuracy of the binary map is preserved notwithstanding the dilation operation.
  • the dilation operation 24 is preferably applied twice to ensure a proper filling of voids.
  • the result of these dilation operations is a modified binary map of nuclear material.
  • an erosion operation is applied (block 26 ). Double application of the erosion operation 26 eliminates regions of the nuclear material in the binary map that are too small to be of diagnostic significance. The result is a modified binary map of nuclear regions.
  • the remaining operations involve constructing a binary map comprising high gradients, i.e. boundaries, of pixel intensity, in order to sever nuclear regions that share high gradient boundaries.
  • the first step in severing the high-gradient boundaries in the nuclear map is to construct a binary map of these high gradient boundaries using a threshold operation (block 28 ) applied to a Sobel map.
  • the Sobel map is generated by applying the Sobel gradient operator to the 577 nm 8-bit digitized image to determine regions of that image that contain high gradients of pixel intensity (block 29 ).
  • the 8-bit digitized image for the 577 nm band was obtained from the leveling operation in block 12 .
  • the result of the Sobel operation in block 29 is an 8-bit map of gradient intensity.
  • a logical NOT operation is performed (block 30 ).
  • the logical NOT operation 30 locates the coincidence of the two states, high gradients and nuclei, and reverses the pixel value of the nuclear map at each point of coincidence in order to eliminate such points from the regions that are presumed to be nuclear material.
  • the result of this logical operation is a modified nuclear map.
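A sketch of blocks 28-30, combining the Sobel gradient map, the threshold, and the logical removal of coincident pixels; the gradient threshold value of 96 is an illustrative assumption:

```python
import numpy as np
from scipy import ndimage

def sever_high_gradients(nuclear_map, img577, grad_thresh=96):
    """Blocks 28-30 sketch: threshold a Sobel gradient map of the
    577 nm image and clear coincident pixels from the nuclear map."""
    sx = ndimage.sobel(img577.astype(float), axis=0)
    sy = ndimage.sobel(img577.astype(float), axis=1)
    gradient = np.hypot(sx, sy)          # map of gradient intensity
    high_grad = gradient > grad_thresh   # binary high-gradient map (block 28)
    return nuclear_map & ~high_grad      # logical removal (block 30)
```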
  • the modified nuclear map is next subjected to an erosion operation (block 32 ).
  • the erosion operation 32 eliminates regions in the modified nuclear map that are too small to be of diagnostic significance.
  • the result is a modified binary map of nuclear regions.
  • following these operations the binary map of nuclear regions has been dramatically altered, and a further dilation operation (block 34) is applied to restore the nuclear regions.
  • the dilation operation 34 includes the condition that no two nuclear regions will become joined as they dilate and that no nuclear region will be allowed to grow outside its old boundary as defined by the binary map that existed before the Sobel procedure was applied.
  • the dilation operation 34 is preferably applied four times. The result is a modified binary map of nuclear material.
  • the operation at block 20 in FIG. 1 comprises neural network processing of the digitized images.
  • the neural network 20 is a highly parallel, distributed, information processing system that has the topology of a directed graph.
  • the network comprises a set of “nodes” and series of “connections” between the nodes.
  • the nodes comprise processing elements and the connections between the nodes represent the transfer of information from one node to another.
  • FIG. 2 shows a node or processing element 100 a for a backpropagation neural network 20 .
  • Each of the nodes 100 a accepts one or more inputs 102, shown individually as a_1, a_2, a_3, …, a_n in FIG. 2.
  • the inputs 102 are taken into the node 100 a and each input 102 is multiplied by its own mathematical weighting factor before being summed together with the threshold factor for the processing element 100 a .
  • the processing element 100 a then generates a single output 104 (i.e. b_j) according to the “transfer function” being used in the network 20.
  • the output 104 is then available as an input to other nodes or processing elements, for example processing elements 100 b , 100 c , 100 d , 100 e and 100 f as depicted in FIG. 1.
  • the transfer function may be any suitable mathematical function but it is usual to employ a “sigmoid” function.
  • the relationship between the inputs 102 into the node 100 and the output 104 is given by expression (1) as follows:

    $b_j = f\Big(\sum_i w_{ji}\, a_i + \theta_j\Big) \qquad (1)$

    where $b_j$ is the output 104 of the node 100, $a_i$ is the value of the input 102 to the node labeled i, $w_{ji}$ is the weighting given to that input 102, and $\theta_j$ is the threshold value for the node 100.
  • the transfer function is modeled after a sigmoid function.
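Expression (1) reduces to a few lines of code. A minimal sketch, assuming the sigmoid transfer function the text describes:

```python
import numpy as np

def node_output(a, w, theta):
    """Expression (1): b_j = f(sum_i w_ji * a_i + theta_j),
    with a sigmoid transfer function f."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, a) + theta)))

# e.g. a three-input node, as used for the three spectral values of a pixel
b = node_output(np.array([0.2, 0.7, 0.5]), np.array([1.5, -0.8, 0.3]), -0.1)
```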
  • the nodes or processing elements for the neural network are arranged in a series of layers denoted by 106 , 108 and 110 as shown in FIG. 3.
  • the first layer 106 comprises nodes or processing elements 112 shown individually as 112 a , 112 b , 112 c , 112 d and 112 e .
  • the first layer 106 is an input layer and accepts the information required for a decision.
  • the second layer 108 in the neural network 20 is known as the hidden layer and comprises processing elements 114 shown individually as 114 a , 114 b , 114 c , 114 d and 114 e . All of the nodes 112 in the input layer 106 are connected to all of the nodes 114 in the hidden layer 108 . It will be understood that there may be more than one hidden layer, with each node in the successive layer connected to each node of the previous layer. For convenience only one hidden layer 108 is shown in FIG. 3.
  • the (last) hidden layer 108 leads to the output layer 110 .
  • the output layer 110 comprises processing elements 116 shown individually as 116 a , 116 b , 116 c , 116 d and 116 e in FIG. 3.
  • Each node 114 of the (last) hidden layer 108 (FIG. 3) is connected to each node 116 of the output layer 110 .
  • the output layer 110 renders the decision to be interpreted by subsequent computing machinery.
  • the strength of the neural network architecture is its ability to generalize based on previous training of particular examples.
  • the neural network is presented a series of examples of the type of objects that it is destined to classify.
  • the backpropagation neural network organizes itself by altering the multiplicity of its connection weights and thresholds according to its success in rendering a correct decision. This is called supervised learning wherein the operator provides the network with the information regarding its success in classification.
  • the network relies on a standard general rule for modifying its connection weights and thresholds based on the success of its performance, i.e. back-propagation.
  • the multi-spectral images are divided into two classes: cytoplasm objects and nuclear objects.
  • the distribution of the pixels for the nuclear and cytoplasm objects is complex and the 3-D space comprises numerous clusters and non-overlapped regions. It has been found that the optimal threshold has a complex non-linear surface in the 3-D space, and the neural network according to the present invention provides the means for finding the complex threshold surface in the 3-D space in order to segment the nuclear and cytoplasmic objects.
  • the neural network 20 comprises an input layer 106 , a single hidden layer 108 , and an output layer 110 .
  • the input layer 106 comprises three nodes or processing elements 112 (FIG. 3), one for each of the three 8-bit digitized values for the particular pixel being examined. (The three digitized values arise from the three leveled images collected in each of the three optical bands, as described above with reference to FIG. 1.)
  • the output layer 110 comprises a single processing element 116 (FIG. 3) which indicates whether the pixel under examination is or is not part of the nucleus.
  • Before the neural network 20 can be successfully operated for decision-making it must first be “trained” in order to establish the proper combination of weights and thresholds. The training is performed outside of the segmentation procedure on a large set of examples. Errors made in the classification of pixels in the examples are “back-propagated” as corrections to the connection weights and the threshold values in each of the processing units. Once the classification error is acceptable the network is “frozen” at these weight and threshold values and it is integrated as a simple algebraic operation into the segmentation procedure as shown at block 20 in FIG. 1.
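As an illustration of this supervised training, the sketch below trains a three-input, single-output sigmoid network by gradient back-propagation; the hidden-layer size, learning rate and epoch count are assumptions, since the patent does not state them:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_backprop(X, y, hidden=16, lr=0.5, epochs=500, seed=0):
    """Train a 3-input / 1-output sigmoid network by back-propagation.
    X: (N, 3) pixel values from the three bands scaled to [0, 1];
    y: (N,) labels, 1 for nuclear pixels and 0 otherwise."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (hidden, X.shape[1]))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, hidden)
    b2 = 0.0
    for _ in range(epochs):
        h = sigmoid(X @ W1.T + b1)        # hidden-layer outputs
        out = sigmoid(h @ W2 + b2)        # single output node
        err = out - y                     # classification error
        # back-propagate the error as corrections to weights and thresholds
        d_out = err * out * (1.0 - out)
        d_h = np.outer(d_out, W2) * h * (1.0 - h)
        W2 -= lr * (d_out @ h) / len(X)
        b2 -= lr * d_out.mean()
        W1 -= lr * (d_h.T @ X) / len(X)
        b1 -= lr * d_h.mean(axis=0)
    return W1, b1, W2, b2
```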
  • the neural network 20 comprises a Probability Projection Neural Network which will also be referred to as a PPNN.
  • the PPNN according to the present invention features fast training for a large volume of data, processing of multi-modal non-Gaussian data distribution, good generalization simultaneously with high sensitivity to small clusters of patterns representing the useful subclasses of cells.
  • the PPNN is well-suited to a hardware-encoded implementation.
  • the PPNN utilizes a Probability Density Function (PDF) estimator.
  • PDF Probability Density Function
  • the PPNN is suitable for use as a Probability Density Function estimator or as a general classifier in pattern recognition.
  • the PPNN uses the training data to create an N-dimensional PDF array which in turn is used to estimate the likelihood of a feature vector being within the given classes as will now be described.
  • the input space is partitioned into m × m × … × m discrete nodes (if the discrete input space is known, m is usually selected to be less than its range). For example, for a 3-D PDF array a 2^6 × 2^6 × 2^6 grid is sufficient.
  • P_j[ξ_0, ξ_1, …, ξ_{n−1}] is the current value of the [ξ_0, ξ_1, …, ξ_{n−1}] node after the j-th iteration
  • d_j[ξ_0, ξ_1, …, ξ_{n−1}] represents the influence of the j-th input pattern on the [ξ_0, ξ_1, …, ξ_{n−1}] node
  • r_k is the distance from the pattern to the k-th node
  • r_0 is the minimum distance between two neighboring nodes
  • n is the dimension of the space.
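The patent's exact expressions for the node update are not reproduced in this text, so the sketch below uses one plausible influence function d_j that falls off with the distance r_k relative to r_0; it is an assumption, not the claimed formula:

```python
import numpy as np

def train_ppnn_pdf(patterns, m=64, n_dims=3):
    """Accumulate a PDF array on an m x ... x m grid from training
    patterns (8-bit vectors). The influence function used here is an
    assumption; the patent's expression for d_j is not given in this text."""
    pdf = np.zeros((m,) * n_dims)
    scale = 256 // m                 # map 8-bit values onto grid nodes
    r0 = 1.0                         # minimum inter-node distance
    for p in patterns:
        node = np.clip(np.asarray(p) // scale, 0, m - 1).astype(int)
        # spread the pattern's influence over the node and its neighbours
        for offset in np.ndindex((3,) * n_dims):
            idx = tuple(np.clip(node + np.array(offset) - 1, 0, m - 1))
            r_k = np.linalg.norm(np.array(offset) - 1)
            pdf[idx] += 1.0 / (1.0 + (r_k / r0) ** 2)
    return pdf / pdf.sum()
```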
  • the PPNN according to the present invention is particularly suited to handle multi-modal data distributions; however, in many practical situations there will be an unbalanced data set. This means that some clusters will contain fewer data samples than others, and as a result some natural clusters which were represented with a small number of patterns could be lost after the PPNN joining. To solve this problem there is provided an algorithm which equalizes all natural clusters according to another aspect of the invention.
  • FIG. 5 shows in flow chart form an embodiment of a clustering algorithm 200 according to the present invention.
  • All training patterns, i.e. N samples, in block 202 and a given number (i.e. “K”) of clusters in block 204 are applied to a K-mean clustering operation block 206 .
  • the clustering operation 206 clusters the input data and generates clusters 1 through K (block 208 ).
  • all the training data which belongs to the i-th cluster is extracted into a separate subclass.
  • the final operation in the clustering algorithm comprises joining all of the K PPNN's together and normalizing the resulting PPNN by dividing all nodes by the number of clusters (block 212 ).
  • the operation performed in block 212 may be expressed as follows:

    $P[\xi_0, \ldots, \xi_{n-1}] = \frac{1}{K} \sum_{i=1}^{K} P_i[\xi_0, \ldots, \xi_{n-1}] \qquad (6)$
  • the clustering algorithm 200 may be applied to each class separately before creating the final classifier according to expression (6) above, as follows.
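A sketch of the equalization procedure of algorithm 200, reusing the train_ppnn_pdf sketch above; the K-means initialization and k = 8 are illustrative choices, not the patent's parameters:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def equalized_ppnn(patterns, k=8):
    """Algorithm 200 sketch: K-means the training patterns
    (blocks 206-208), train one PPNN per cluster, then join and
    normalize by the number of clusters (block 212)."""
    _, labels = kmeans2(patterns.astype(float), k, minit='points')
    joined = None
    for i in range(k):
        subclass = patterns[labels == i]       # extract i-th cluster
        if len(subclass) == 0:
            continue
        pdf_i = train_ppnn_pdf(subclass)       # one PPNN per cluster
        joined = pdf_i if joined is None else joined + pdf_i
    return joined / k                          # normalize (expression 6)
```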
  • BP Backpropagation
  • EBF Elliptic Basis Functions
  • LVQ Learning Vector Quantization
  • of these alternatives, the PPNN is preferred. The performance results of the Probability Projection Neural Net have been found to exceed those achieved by conventional networks.
  • the neural network assisted multi-spectral segmentation process is implemented as a hardware-encoded procedure embedded in conventional FPGA (Field Programmable Gate Array) logic as part of a special-purpose computer.
  • FPGA Field Programmable Gate Array
  • the hardware implementation of this network is found in the form of a look-up table contained in a portion of hardware memory (FIG. 6).
  • the neural network 20 comprises three input nodes and a single, binary output node.
  • the structure of the neural network 20 according to the present invention also simplifies the hardware implementation of the network.
  • the three input nodes correspond to three optical bands 301 , 302 , 303 used in gathering the images.
  • the images taken in the 530 nm and 630 nm bands have 7-bits of useful resolution while the 577 nm band retains all 8-bits. (The 577 nm band is centered on the nucleus.)
  • the performance of the neural network 20 is then determined for all possible combinations of these three inputs. Since there are 22 bits in total, there are 2^22, or approximately 4.2 million, possible combinations.
  • To create the look-up table, all input pixel combinations in the space (2^7 × 2^7 × 2^8 variants for the three images in the present embodiment) are scanned and the look-up table is filled with the PPNN decision for each of these pixel combinations, i.e. 1 where the pixel belongs to a nucleus and 0 where it does not.
  • the coding of the results (i.e. outputs) of the neural network comprises assigning each possible combination of inputs a unique address 304 in a look-up table 305 stored in memory.
  • the address 304 in the table 305 is formed by joining together the binary values of the three channel values indicated by 306 , 307 , 308 , respectively in FIG. 6.
  • for example, the pixel values for the image from the first channel 301 (i.e. 530 nm) and for the image from the second channel 302 (i.e. 630 nm) supply the first two binary fields 306, 307 of the address, and the pixel value for the image from the third channel 303 (i.e. 577 nm) is binary 00101011; concatenated together, the binary representations 306, 307, 308 form the address 304 (binary 01010110101100101011).
  • the address 304 points to a location in the look-up table 305 (i.e. memory) which stores a single binary value 309 that represents the response of the neural network to this combination of inputs, e.g. a logic 0 at that memory location signifies that the pixel in question does not belong to the nucleus.
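The address formation and table fill can be sketched as follows; the ordering of the three bit-fields within the 22-bit address is an assumption, and classify stands in for the trained PPNN:

```python
import numpy as np

def lut_address(p530, p630, p577):
    """Concatenate the 7-bit 530 nm value, the 7-bit 630 nm value and
    the 8-bit 577 nm value into a 22-bit address (field order assumed)."""
    return ((p530 & 0x7F) << 15) | ((p630 & 0x7F) << 8) | (p577 & 0xFF)

def build_lut(classify):
    """Scan all 2^7 * 2^7 * 2^8 input combinations once and store the
    network's binary decision (1 = nuclear, 0 = not nuclear)."""
    lut = np.zeros(1 << 22, dtype=np.uint8)
    for p530 in range(128):
        for p630 in range(128):
            for p577 in range(256):
                lut[lut_address(p530, p630, p577)] = classify(p530, p630, p577)
    return lut

# Run-time segmentation is then a single memory read per pixel, e.g.:
# nuclear = lut[lut_address(v530 >> 1, v630 >> 1, v577)]
```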
  • The hardware-encoding of the NNA-MSS advantageously allows the process to execute at a high speed while making a complex decision. Further, as experimental data is tabulated and evaluated, more complex decision spaces can be utilized to improve segmentation accuracy. Thus, an algorithm according to the present invention can be optimized further by adjusting a table of coefficients that describe the neural-network connection weights, without the necessity of altering the system architecture.

Abstract

In a segmentation method and system, a plurality of digitized images having different optical bands are acquired for the same micrographic scene of a biological sample. Each digitized image comprises a plurality of pixels, each pixel having a value in each digitized image. The pixel values are processed to identify nuclear or cytoplasmic material utilizing classification information previously developed from at least one cell having known regions of nuclear or cytoplasmic material. In a preferred embodiment, the neural network comprises a hardware-encoded algorithm in the form of a look-up table.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application is a continuation of U.S. application Ser. No. 09/040,378, filed Mar. 18, 1998, the disclosure of which is hereby incorporated by reference herein. The '378 application is the national phase of International Application No. PCT/CA 96/00619, filed Sep. 18, 1996, the disclosure of which is hereby incorporated by reference herein. This application also claims benefit of U.S. Provisional Application No. 60/003,964, filed Sep. 19, 1995, the disclosure of which is hereby incorporated by reference herein.[0001]
  • FIELD OF THE INVENTION
  • The present invention relates to automated diagnostic techniques in medicine and biology, and more particularly to multi-spectral segmentation of nuclear and cytoplasmic objects. [0002]
  • BACKGROUND OF THE INVENTION
  • Automated diagnostic systems in medicine and biology often rely on the visual inspection of microscopic images. Known systems attempt to mimic or imitate the procedures employed by humans. An appropriate example of this type of system is an automated instrument designed to assist a cytotechnologist in the review or diagnosis of Pap smears. In its usual operation such a system will rapidly acquire microscopic images of the cellular content of the Pap smears and then subject them to a battery of image analysis procedures. The goal of these procedures is the identification of images that are likely to contain unusual or potentially abnormal cervical cells. [0003]
  • The image analysis techniques utilized by these automated instruments are similar to the procedures consciously, and often unconsciously, performed by the human cytotechnologist. There are three distinct operations that must follow each other for this type of evaluation: (1) segmentation; (2) feature extraction; and (3) classification. [0004]
  • The segmentation is the delineation of the objects of interest within the micrographic image. In addition to the cervical cells required for an analysis there is a wide range of “background” material, debris and contamination that interferes with the identification of the cervical cells and therefore must be delineated. Also, for each cervical cell, it is necessary to delineate the nucleus within the cytoplasm. [0005]
  • The Feature Extraction operation is performed after the completion of the segmentation operation. Feature extraction comprises characterizing the segmented regions as a series of descriptors based on the morphological, textural, densitometric and colorimetric attributes of these regions. [0006]
  • The Classification step is the final step in the image analysis. The features extracted in the previous stage are used in some type of discriminant-based classification procedure. The results of this classification are then translated into a “diagnosis” of the cells in the image. [0007]
  • Of the three stages outlined above, segmentation is the most crucial and the most difficult. This is particularly true for the types of images typically encountered in medical or biological specimens. [0008]
  • In the case of a Pap smear, the goal of segmentation is to accurately delineate the cervical cells and their nuclei. The situation is complicated not only by the variety of cells found in the smear, but also by the alterations in morphology produced by the sample preparation technique and by the quantity of debris associated with these specimens. Furthermore, during preparation it is difficult to control the way cervical cells are deposited on the surface of the slide which as a result leads to a large amount of cell overlap and distortion. [0009]
  • Under these circumstances a segmentation operation is difficult. One known way to improve the accuracy and speed of segmentation for these types of images involves exploiting the differential staining procedure associated with all Pap smears. According to the Papanicolaou protocol the nuclei are stained dark blue while the cytoplasm is stained anything from a blue-green to an orange-pink. The Papanicolaou Stain is a combination of several stains or dyes together with a specific protocol designed to emphasize and delineate cellular structures of importance for pathological analysis. The stains or dyes included in the Papanicolaou Stain are Haematoxylin, Orange G and Eosin Azure (a mixture of two acid dyes, Eosin Y and Light Green SF Yellowish, together with Bismarck Brown). Each stain component is sensitive to or binds selectively to a particular cell structure or material. Haematoxylin binds to the nuclear material coloring it dark blue. Orange G is an indicator of keratin protein content. Eosin Y stains nucleoli, red blood cells and mature squamous epithelial cells. Light Green SF Yellowish stains metabolically active epithelial cells. Bismarck Brown stains vegetable material and cellulose. [0010]
  • The combination of these stains and their diagnostic interpretation has evolved into a stable medical protocol which predates the advent of computer-aided imaging instruments. Consequently, the dyes present a complex pattern of spectral properties to standard image analysis procedures. Specifically, a simple spectral decomposition based on the optical behavior of the dyes is not sufficient on its own to reliably distinguish the cellular components within an image. The overlap of the spectral response of the dyes is too large for this type of straight-forward segmentation. [0011]
  • The use of differential staining characteristics is only the means to the end in the solution to the problem of segmentation. Of equal importance is the procedure for handling the information provided by the spectral character of the cellular objects when making a decision concerning identity. [0012]
  • In the art, attempts have been made to automate diagnostic procedures, however, there remains a need for a system for performing the segmentation process. [0013]
  • BRIEF SUMMARY OF THE INVENTION
  • The present invention provides a Neural-Network Assisted Multi-Spectral Segmentation (also referred to as the NNA-MSS) method and system. [0014]
  • The first stage according to the present invention comprises the acquisition of three images of the same micrographic scene. Each image is obtained using a different narrow band-pass optical filter which has the effect of selecting a narrow band of optical wavelengths associated with distinguishing absorption peaks in the stain spectra. The choice of optical wavelength bands is guided by the degree of separation afforded by these peaks when used to distinguish the different types of cellular material on the slide surface. [0015]
  • The second stage according to the invention comprises a neural-network (trained on an extensive set of typical examples) to make decisions on the identity of material already deemed to be cellular in origin. The neural network decides whether a picture element in the digitized image is nuclear or non-nuclear in character. With the completion of this step the system can continue by applying a standard range of image processing techniques to refine the segmentation. The relationship between the cellular components and the transmission intensity of the light images in each of the three spectral bands is a complex and non-linear one. By using a neural network to combine the information from these three images it is possible to achieve a high degree of success in separating the cervical cell from the background and the nuclei from the cytoplasm, a success that would not be possible with a set of linear operations alone. [0016]
  • The diagnosis and evaluation of Pap smears is aided by the introduction of a differential staining procedure called the Papanicolaou Stain. The Papanicolaou Stain is a combination of several stains or dyes together with a specific protocol designed to emphasize and delineate cellular structures of importance to pathological analysis. The stains or dyes included in the Papanicolaou Stain are Haematoxylin, Orange G and Eosin Azure (a mixture of two acid dyes, Eosin Y and Light Green SF Yellowish, together with Bismarck Brown). Each stain component is sensitive to or binds selectively to a particular cellular structure or material. Haematoxylin binds to the nuclear material coloring it dark blue; Orange G is an indicator of keratin protein content; Eosin Y stains nucleoli, red blood cells and mature squamous epithelial cells; Light Green SF yellowish stains metabolically active epithelial cells; Bismarck Brown stains vegetable material and cellulose. [0017]
  • According to another aspect of the invention, three optical wavelength bands are used in a complex procedure to segment Papanicolaou-stained epithelial cells in digitized images. The procedure utilizes standard segmentation operations (erosion, dilation, etc.) together with the neural-network to identify the location of nuclear components in areas already determined to be cellular material. [0018]
  • The purpose of the segmentation is to extract the cellular objects, i.e. to distinguish the nucleus of the cell from the cytoplasm. According to this segmentation the multi-spectral images are divided into two classes: cytoplasm objects and nuclear objects, which are separated by a multi-dimensional threshold t which comprises a 3-dimensional space. [0019]
  • The neural network according to the invention comprises a Probability Projection Neural Network (PPNN). The PPNN according to the present invention features fast training for a large volume of data, processing of multi-modal non-Gaussian data distribution, good generalization simultaneously with high sensitivity to small clusters of patterns representing the useful subclasses of cells. In another aspect, the PPNN is implemented as a hardware-encoded algorithm. [0020]
  • A method of analyzing cells comprises providing a plurality of digitized images of at least one cell of regions of unknown nuclear or cytoplasmic material. Each digitized image is formed from an optical image having a plurality of pixels associated therewith. Each digitized image is formed in a narrow band of optical wavelength different from the other digitized images. Each of the plurality of pixels has values from each of the digitized images. The method further includes analyzing the values for each pixel to identify nuclear and cytoplasmic material of the at least one cell of unknown regions of nuclear or cytoplasmic material. The analyzing step utilizes previously developed classification information for discriminating nuclear or cytoplasmic material. The previously developed classification information is developed from at least one cell of known regions of nuclear or cytoplasmic material. [0021]
  • The step of analyzing the values for each pixel preferably includes utilizing a classifier trained on a training set of data developed from images having known regions of nuclear and cytoplasmic material. [0022]
  • In a preferred embodiment, the previously developed classification information preferably includes values for pixels stored in a look-up table in a memory storage device. The previously developed classification information preferably has memory addresses in the look-up table, the memory addresses comprising a concatenation of the values from each of the digitized images representing the same region of the at least one cell. Each of the digitized images is drawn from a band of optical wavelengths. [0023]
  • In a preferred embodiment, the previously developed classification information includes a predetermined discriminant between values for pixels representing regions of nuclear and cytoplasmic material. [0024]
  • The method preferably includes the step of assigning a classification to each pixel as representing nuclear or cytoplasmic material. [0025]
  • In a preferred embodiment, an absorption map is formed from each of the digitized images to represent the light absorption characteristics associated with each value for each pixel before the step of analyzing the values for each pixel. A classification is assigned to each pixel based upon absorption characteristics. The step of forming an absorption map preferably comprises applying a formula to each of the digitized images. [0026]
  • The step of analyzing may include applying a linear discriminant analysis to define a linear boundary between values for pixels representing regions of nuclear and cytoplasmic material. The linear discriminant analysis preferably discriminates between pixels of nuclear material and at least two types of cytoplasmic material. [0027]
  • The at least one cell of unknown nuclear or cytoplasmic material may comprise, for example, a cellular sample prepared according to the Papanicolaou staining procedure. [0028]
  • The previously developed classification information may be developed by: providing a plurality of digitized images of at least one cell of regions of known nuclear or cytoplasmic material, each digitized image being formed from an optical image having a plurality of pixels associated therewith, each digitized image being formed in a narrow band of optical wavelength different from the other digitized images, each of the plurality of pixels having values from each of the digitized images; assigning a classification to each pixel as representing regions of nuclear or cytoplasmic material; and storing the classifications in a look-up table in a memory storage device. [0029]
  • The step of storing preferably includes storing the classifications in locations of the look-up table, the locations having addresses corresponding to the pixels. In a preferred embodiment, the step of analyzing comprises neural network processing of the values for pixels of the digitized images. The neural network processing comprises accepting one or more inputs into at least one processing element, multiplying the inputs by weighting factors, and applying a formula to the weighted inputs to provide an output to a plurality of other processing elements. Each value for a pixel is preferably accepted and processed by a different one of the at least one processing elements. [0030]
  • In another aspect of the present invention, a method of analyzing cells comprises providing a plurality of digitized images of at least one cell. Each digitized image is formed in a narrow band of optical wavelength different from the other digitized images and includes: at least one first digitized image in a wavelength of between 525 and 575 nanometers; at least one second digitized image in a wavelength of between 565 and 582 nanometers; and at least one third digitized image in a wavelength of between 625 and 635 nanometers. The digitized images are analyzed to identify nuclear and cytoplasmic material of the at least one cell. [0031]
  • For example, the at least one first digitized image may have a wavelength of between 525 and 535 nanometers, the at least one second digitized image may have a wavelength of between 572 and 582 nanometers, and the at least one third digitized image may have a wavelength of between 625 and 635 nanometers. [0032]
  • For example, the at least one first digitized image may have a wavelength of between 535 and 545 nanometers, the at least one second digitized image may have a wavelength of between 572 and 582 nanometers, and the at least one third digitized image may have a wavelength of between 625 and 635 nanometers. [0033]
  • For example, the at least one first digitized image may have a wavelength of between 565 and 575 nanometers, the at least one second digitized image may have a wavelength of between 565 and 575 nanometers, and the at least one third digitized image may have a wavelength of between 625 and 635 nanometers. [0034]
  • The step of analyzing the pixels preferably includes utilizing a classifier trained on a set of data developed from images having known regions of nuclear and cytoplasmic material. The step of analyzing preferably includes analyzing the digitized images based upon previously developed classification information. [0035]
  • The step of analyzing may comprise analyzing values for each pixel, each pixel having values from each of the digitized images. [0036]
  • A classification is preferably assigned to each pixel as representing regions of nuclear or cytoplasmic material. The at least one cell may comprise, for example, a cellular sample prepared according to the Papanicolaou staining procedure. [0037]
  • The previously developed classification information is preferably developed from at least one cell of known regions of nuclear or cytoplasmic material for analyzing cells of unknown regions of nuclear or cytoplasmic material. [0038]
  • The method, in certain preferred embodiments, includes the step of storing the previously developed classification information in a look-up table in an electronic memory device. In certain preferred embodiments, the step of analyzing may comprise neural network processing of the digitized images. [0039]
  • In a further aspect, the present invention provides a method for identifying nuclear and cytoplasmic objects in a biological specimen, said method comprising the steps of: (a) acquiring a plurality of images of said biological specimen; (b) identifying cellular material from said images and creating a cellular material map; (c) applying a neural network to said cellular material map and classifying nuclear and cytoplasmic objects from said images. [0040]
  • In another aspect, the present invention provides a system for identifying nuclear and cytoplasmic objects in a biological specimen, said system comprising: (a) image acquisition means for acquiring a plurality of images of said biological specimen; (b) processing means for processing said images and generating a cellular material map identifying cellular material; (c) neural processor means for processing said cellular material map and including means for classifying nuclear and cytoplasmic objects from said images. [0041]
  • In another aspect, the present invention provides a hardware-encoded neural processor for classifying input data, said hardware-encoded processor comprising: (a) a memory having a plurality of addressable storage locations; (b) said addressable storage locations containing classification information associated with the input data; (c) address generation means for generating an address from said input data for accessing the classification information stored in said memory for selected input data. [0042]
  • A preferred embodiment of the present invention will now be described, by way of example, with reference to the following specification, claims, and drawings.[0043]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows in flow chart form a neural network assisted multi-spectral segmentation method according to the present invention; [0044]
  • FIG. 2 shows in diagrammatic form a processing element for the neural network; [0045]
  • FIG. 3 shows in diagrammatic form a neural network comprising the processing elements of FIG. 2; [0046]
  • FIG. 4 shows in diagrammatic form a training step for the neural network; [0047]
  • FIG. 5 shows in flow chart form a clustering algorithm for the neural network according to the present invention; and [0048]
  • FIG. 6 shows a hardware implementation for the neural network according to the present invention.[0049]
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
  • The present invention provides a Neural Network Assisted Multi-Spectral Segmentation (also referred to as NNA-MSS) system and method. The multi-spectral segmentation method is related to that described and claimed in co-pending International Patent Application No. CA96/00477 filed Jul. 18, 1996 and in the name of the applicant. [0050]
  • The NNA-MSS according to the present invention is particularly suited to Papanicolaou-stained gynaecological smears and will be described in this context. It is however to be understood that the present invention has wider applicability to applications outside of Papanicolaou-Stained smears. [0051]
  • Reference is first made to FIG. 1 which shows in flow chart form a Neural Network Assisted Multi-Spectral Segmentation (NNA-MSS) method 1 according to the present invention. [0052]
  • The first step 10 involves inputting three digitized images, i.e. micrographic scenes, of a cellular specimen. The images are taken in each of the three narrow optical bands: 540±5 nm; 577±5 nm and 630±5 nm. (The images are generated by an imaging system (not shown), as will be understood by one skilled in the art, and thus need not be described in detail here.) The images are next processed by the multi-spectral segmentation method 1 and neural network as will be described. [0053]
  • As shown in FIG. 1, the images are subjected to a leveling operation (block 12). The leveling operation 12 involves removing the spatial variations in the illumination intensity from the images. The leveling operation is implemented as a simple mathematical routine using known image processing techniques. The result of the leveling operation is a set of 8-bit digitized images with uniform illumination across their fields. [0054]
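  • The patent does not disclose the leveling routine itself. The following minimal sketch, assuming a flat-field style correction in which a heavy Gaussian blur estimates the slowly varying illumination (the `sigma` value and the use of `scipy.ndimage` are assumptions, not part of the patent), illustrates the kind of simple mathematical routine described:

```python
import numpy as np
from scipy import ndimage

def level_image(raw, sigma=50):
    """Remove slowly varying illumination: estimate the illumination
    field with a heavy Gaussian blur, divide it out, and rescale the
    result to an 8-bit image with a uniform field."""
    illum = ndimage.gaussian_filter(raw.astype(float), sigma)
    flat = raw / np.maximum(illum, 1e-6)
    flat = 255.0 * (flat - flat.min()) / max(np.ptp(flat), 1e-6)
    return flat.astype(np.uint8)
```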
  • The 8-bit digitized images first undergo a series of processing steps to identify cellular material in the digitized images. The digitized images are then processed by the neural network to segment the nuclear objects from the cytoplasm objects. [0055]
  • Referring to FIG. 1, following the leveling operation 12, the next operation comprises a threshold procedure (block 14). The threshold procedure involves analyzing the leveled images in a search for material of cellular origin. The threshold procedure 14 is applied to the 530 nm and 630 nm optical wavelength bands and comprises identifying material of cellular origin as regions of the digitized image that fall within a range of specific digital values. The threshold procedure 14 produces a single binary “map” of the image where the single binary bit identifies regions that are, or are not, cellular material. [0056]
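  • By way of illustration only, such a thresholding step might look as follows. The digital range (`lo`, `hi`) is a hypothetical placeholder, since the patent does not disclose the specific values, and requiring both bands to agree is one plausible reading of how the two wavelengths are combined:

```python
import numpy as np

def cellular_material_map(img_530, img_630, lo=30, hi=220):
    """Binary map of cellular material (block 14): a pixel is marked
    cellular when its leveled value falls inside the given digital
    range in both the 530 nm and 630 nm bands."""
    in_range = lambda img: (img >= lo) & (img <= hi)
    return in_range(img_530) & in_range(img_630)
```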
  • The threshold operation 14 is followed by a dilation operation (block 16). The dilation operation 16 is a conventional image processing operation which modifies the binary map of cellular material generated in block 14. The dilation operation allows the regions of cellular material to grow or dilate by one pixel in order to fill small voids in large regions. Preferably, the dilation operation 16 is modified with the condition that the dilation does not allow two separate regions of cellular material to join to make a single region, i.e. a “no-join” condition. This condition allows the accuracy of the binary map to be preserved through the dilation operation 16. Preferably, the dilation operation is applied twice to ensure a proper filling of voids. The result of the dilation operations 16 is a modified binary map of cellular material. [0057]
  • As shown in FIG. 1, the dilation operation 16 is followed by an erosion operation (block 18). The erosion operation 18 brings the modified binary map of cellular material (a result of the dilation operation 16) back to its original boundaries. The erosion operation 18 is implemented using conventional image processing techniques. The erosion operation 18 allows the cellular boundaries in the binary image to shrink or erode but will not affect the filled voids. Advantageously, the erosion operation 18 has the additional effect of eliminating small regions of cellular material that are not important to the later diagnostic analysis. The result of the erosion operation 18 is a final binary map of the regions in the digitized image that are cytoplasm. [0058]
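  • The no-join condition is not a standard morphological primitive. The following is a minimal sketch of one plausible reading, using `scipy.ndimage` (an implementation choice, not the patent's): each pass, every labeled region grows by one pixel, and any growth pixel claimed by two different regions is rejected, so separate regions can never merge. A plain erosion helper for blocks 18, 22 and 26 is included:

```python
import numpy as np
from scipy import ndimage

def dilate_no_join(mask, iterations=2):
    """Dilate a binary map (blocks 16/24) without letting two separate
    regions merge: growth pixels claimed by more than one region are
    rejected, while all original pixels are kept."""
    out = mask.copy()
    for _ in range(iterations):
        labels, n = ndimage.label(out)
        claims = np.zeros(out.shape, dtype=np.int32)
        conflict = np.zeros(out.shape, dtype=bool)
        for i in range(1, n + 1):
            grown = ndimage.binary_dilation(labels == i)
            # A pixel reached by two different regions is a join point.
            conflict |= grown & (claims > 0) & (claims != i)
            claims[grown & (claims == 0)] = i
        out = out | ((claims > 0) & ~conflict)
    return out

def erode(mask, iterations=1):
    """Erosion (blocks 18/22/26): shrink region boundaries, which also
    removes regions too small to be of diagnostic significance."""
    return ndimage.binary_erosion(mask, iterations=iterations)
```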
  • The next stage according to the invention is the operation of the neural network at block 20. The neural network 20 is applied to the 8-bit digitized images, with attention restricted to those regions that lie within the cytoplasm as determined by the final binary cytoplasm map generated as a result of the previous operations. The neural network 20 makes decisions concerning the identity of individual picture elements (or “pixels”) in the binary image as either being part of a nucleus or not part of a nucleus. The result of the operation of the neural network is a digital map of the regions within the cytoplasm that are considered to be nuclear material. The nuclear material map is then subjected to further processing. The neural network 20 according to the present invention is described in detail below. [0059]
  • Following the application of the neural network 20, the resulting nuclear material map is subjected to an erosion operation (block 22). The erosion operation 22 eliminates regions of the nuclear material map that are too small to be of diagnostic significance. The result is a modified binary map of nuclear regions. [0060]
  • The modified binary map resulting from the erosion operation 22 is then subjected to a dilation operation (block 24). The dilation operation 24 is subject to a no-join condition, such that the dilation operation does not allow two separate regions of nuclear material to join to make a single region. In this way the accuracy of the binary map is preserved notwithstanding the dilation operation. The dilation operation 24 is preferably applied twice to ensure a proper filling of voids. The result of these dilation operations is a modified binary map of nuclear material. [0061]
  • Following the dilation operation 24, an erosion operation is applied (block 26). Double application of the erosion operation 26 eliminates regions of the nuclear material in the binary map that are too small to be of diagnostic significance. The result is a modified binary map of nuclear regions. [0062]
  • The remaining operations involve constructing a binary map of high gradients of pixel intensity, i.e. boundaries, in order to sever nuclear regions that share high-gradient boundaries. The presence of these high-gradient boundaries is evidence of two closely spaced but separate nuclei. [0063]
  • The first step in severing the high-gradient boundaries in the nuclear map is to construct a binary map of these high-gradient boundaries using a threshold operation (block 28) applied to a Sobel map. [0064]
  • The Sobel map is generated by applying the Sobel gradient operator to the 577 nm 8-bit digitized image to determine regions of that image that contain high gradients of pixel intensity (block 29). (The 8-bit digitized image for the 577 nm band was obtained from the leveling operation in block 12.) The result of the Sobel operation in block 29 is an 8-bit map of gradient intensity. [0065]
  • Following the threshold Sobel operation 28, a logical NOT operation is performed (block 30). The logical NOT operation 30 determines the coincidence of the two states, high gradient and nuclear, and reverses the pixel value of the nuclear map at the points of coincidence in order to eliminate them from regions that are presumed to be nuclear material. The result of this logical operation is a modified nuclear map. [0066]
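  • A compact sketch of blocks 28 through 30 might look as follows; the 8-bit gradient cutoff `grad_thresh` is an assumed illustrative value, as the patent does not state the threshold:

```python
import numpy as np
from scipy import ndimage

def sever_shared_boundaries(nuclear_map, leveled_577, grad_thresh=64):
    """Cut nuclear regions along high-gradient boundaries: build the
    Sobel gradient map of the 577 nm image (block 29), threshold it
    (block 28), and clear those pixels from the nuclear map (block 30)."""
    img = leveled_577.astype(float)
    grad = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
    grad8 = (255.0 * grad / max(grad.max(), 1e-6)).astype(np.uint8)
    high_grad = grad8 >= grad_thresh
    return nuclear_map & ~high_grad
```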
  • The modified nuclear map is next subjected to an erosion operation (block 32). The erosion operation 32 eliminates regions in the modified nuclear map that are too small to be of diagnostic significance. The result is a modified binary map of nuclear regions. [0067]
  • After the application of the gradient technique for severing close nuclear boundaries (blocks 28 and 30) and the erosion operation (block 32) for clearing the image of insignificant regions, the binary map of nuclear regions is dramatically altered. To restore the map to its original boundaries while preserving the newly formed separations, the process applies a dilation operation at block 34. The dilation operation 34 includes the condition that no two nuclear regions will become joined as they dilate and that no nuclear region will be allowed to grow outside its old boundary as defined by the binary map that existed before the Sobel procedure was applied. The dilation operation 34 is preferably applied four times. The result is a modified binary map of nuclear material. [0068]
  • With the application of the dilation operation 34, the nuclear segmentation procedure according to the multi-spectral segmentation process 1 is complete; the resulting binary nuclear map is labeled in block 36 and, if required, further image processing is applied. [0069]
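  • The labeling of block 36 is ordinary connected-component labeling; a one-line sketch using `scipy.ndimage` (an assumed implementation, with `nuclear_map` the binary map produced by the preceding steps):

```python
from scipy import ndimage

# Assign each connected nuclear region a unique integer label (block 36).
labeled_nuclei, num_nuclei = ndimage.label(nuclear_map)
```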
  • As described above, the operation at block 20 in FIG. 1 comprises neural network processing of the digitized images. In general, the neural network 20 is a highly parallel, distributed information processing system that has the topology of a directed graph. The network comprises a set of “nodes” and a series of “connections” between the nodes. The nodes comprise processing elements and the connections between the nodes represent the transfer of information from one node to another. [0070]
  • Reference is made to FIG. 2, which shows a node or processing element 100a for a backpropagation neural network 20. Each of the nodes 100a accepts one or more inputs 102, shown individually as a_1, a_2, a_3, . . . , a_n in FIG. 2. The inputs 102 are taken into the node 100a and each input 102 is multiplied by its own mathematical weighting factor before being summed together with the threshold factor for the processing element 100a. The processing element 100a then generates a single output 104 (i.e. b_j) according to the “transfer function” being used in the network 20. The output 104 is then available as an input to other nodes or processing elements, for example processing elements 100b, 100c, 100d, 100e and 100f as depicted in FIG. 2. [0071]
  • The transfer function may be any suitable mathematical function, but it is usual to employ a “sigmoid” function. The relationship between the inputs 102 into the node 100 and the output 104 is given by expression (1) as follows: [0072]
  • $b_j = \left\{1 + \exp\left[-\left(\sum_i w_{ji} a_i - \theta_j\right)\right]\right\}^{-1}$  (1)
  • where $b_j$ is the output 104 of the node 100, $a_i$ is the value of the input 102 to the node labeled $i$, $w_{ji}$ is the weighting given to that input 102, and $\theta_j$ is the threshold value for the node 100. In the present application, the transfer function is modeled after a sigmoid function. [0073]
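  • For illustration, expression (1) for a single processing element can be sketched in a few lines; the weights, threshold and three inputs below (standing for the three leveled pixel values) are arbitrary illustrative numbers:

```python
import numpy as np

def node_output(a, w, theta):
    """Processing element of FIG. 2: weighted sum of the inputs minus
    the node threshold, squashed by the sigmoid of expression (1)."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, a) - theta)))

# Example: a three-input node fed the three leveled pixel values.
b = node_output(np.array([0.2, 0.8, 0.5]), np.array([1.5, -0.7, 0.3]), 0.1)
```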
  • In its general form, the nodes or processing elements for the neural network are arranged in a series of layers denoted by 106, 108 and 110 as shown in FIG. 3. The first layer 106 comprises nodes or processing elements 112 shown individually as 112a, 112b, 112c, 112d and 112e. The first layer 106 is an input layer and accepts the information required for a decision. [0074]
  • The second layer 108 in the neural network 20 is known as the hidden layer and comprises processing elements 114 shown individually as 114a, 114b, 114c, 114d and 114e. All of the nodes 112 in the input layer 106 are connected to all of the nodes 114 in the hidden layer 108. It will be understood that there may be more than one hidden layer, with each node in the successive layer connected to each node of the previous layer. For convenience only one hidden layer 108 is shown in FIG. 3. [0075]
  • The (last) hidden layer 108 leads to the output layer 110. The output layer 110 comprises processing elements 116 shown individually as 116a, 116b, 116c, 116d and 116e in FIG. 3. Each node 114 of the (last) hidden layer 108 (FIG. 3) is connected to each node 116 of the output layer 110. The output layer 110 renders the decision to be interpreted by subsequent computing machinery. [0076]
  • The strength of the neural network architecture is its ability to generalize based on previous training on particular examples. In order to take advantage of this, the neural network is presented with a series of examples of the type of objects that it is destined to classify. The backpropagation neural network organizes itself by altering the multiplicity of its connection weights and thresholds according to its success in rendering a correct decision. This is called supervised learning, wherein the operator provides the network with information regarding its success in classification. The network relies on a standard general rule for modifying its connection weights and thresholds based on the success of its performance, i.e. back-propagation. [0077]
  • In the context of the multi-spectral segmentation process, the multi-spectral images are divided into two classes: $C_0$—cytoplasm and $C_1$—nuclear, separated by the multi-dimensional threshold t in a 3-dimensional space. The distribution of the pixels for the nuclear and cytoplasm objects is complex, and the 3-D space comprises numerous clusters and non-overlapped regions. It has been found that the optimal threshold has a complex non-linear surface in the 3-D space, and the neural network according to the present invention provides the means for finding this complex threshold surface in the 3-D space in order to segment the nuclear and cytoplasmic objects. [0078] [0079]
  • According to this aspect of the invention, the neural network 20 comprises an input layer 106, a single hidden layer 108, and an output layer 110. The input layer 106 comprises three nodes or processing elements 112 (FIG. 3), one for each of the three 8-bit digitized values for the particular pixel being examined. (The three digitized values arise from the three leveled images collected in each of the three optical bands, as described above with reference to FIG. 1.) The output layer 110 comprises a single processing element 116 (FIG. 3) which indicates whether the pixel under examination is or is not part of the nucleus. [0080]
  • Before the neural network 20 can be successfully operated for decision-making it must first be “trained” in order to establish the proper combination of weights and thresholds. The training is performed outside of the segmentation procedure on a large set of examples. Errors made in the classification of pixels in the examples are “back-propagated” as corrections to the connection weights and the threshold values in each of the processing units. Once the classification error is acceptable the network is “frozen” at these weight and threshold values and it is integrated as a simple algebraic operation into the segmentation procedure as shown at block 20 in FIG. 1. [0081]
  • In a preferred embodiment, the neural network 20 according to the invention comprises a Probability Projection Neural Network, which will also be referred to as a PPNN. The PPNN according to the present invention features fast training on a large volume of data, processing of multi-modal non-Gaussian data distributions, and good generalization simultaneously with high sensitivity to small clusters of patterns representing the useful subclasses of cells. In another aspect, the PPNN is well suited to a hardware-encoded implementation. [0082]
  • The PPNN according to the invention utilizes a Probability Density Function (PDF) estimator. As a result, the PPNN is suitable for use as a Probability Density Function estimator or as a general classifier in pattern recognition. The PPNN uses the training data to create an N-dimensional PDF array, which in turn is used to estimate the likelihood of a feature vector being within the given classes, as will now be described. [0083]
  • To create and train the PPNN, the input space is partitioned into m × m × . . . × m discrete nodes (if the discrete input space is known, then m is usually selected to be less than the range). For example, for a 3-D PDF array, creating a $2^6 \times 2^6 \times 2^6$ grid is sufficient. [0084]
  • As shown in FIG. 4, the next step involves mapping or projecting the influence of each training pattern onto the neighboring nodes. This is accomplished according to expression (2) as shown below: [0085]
  • $P_j[\chi_0, \chi_1, \ldots, \chi_{n-1}] = P_{j-1}[\chi_0, \chi_1, \ldots, \chi_{n-1}] + d_j[\chi_0, \chi_1, \ldots, \chi_{n-1}]$
  • $d_j[\chi_0, \chi_1, \ldots, \chi_{n-1}] = \begin{cases} 1, & \text{if } r_k = 0 \\ (1 - r_k) \big/ \sum_{i=0}^{2^n - 1} (1 - r_i), & \text{if } r_k < r_0 \\ 0, & \text{if } r_k \ge r_0 \end{cases} \quad (2)$
  • where $P_j[\chi_0, \chi_1, \ldots, \chi_{n-1}]$ is the current value of the $[\chi_0, \chi_1, \ldots, \chi_{n-1}]$ node after the j-th iteration; $d_j[\chi_0, \chi_1, \ldots, \chi_{n-1}]$ represents the influence of the j-th input pattern on the $[\chi_0, \chi_1, \ldots, \chi_{n-1}]$ node; $r_k$ is the distance from the pattern to the k-th node; $r_0$ is the minimum distance between two neighboring nodes; and $n$ is the dimension of the space. [0086]
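  • A compact sketch of the training projection of expression (2), together with the normalization of expression (3) described below, might look as follows. The patterns are assumed to be scaled to the unit cube and m = 2^6 per the example above; this is a reading of the patent's description, not its actual code:

```python
import numpy as np
from itertools import product

def train_ppnn(patterns, m=64):
    """Project each training pattern's influence onto its 2**n neighbor
    grid nodes per expression (2), with weights falling off linearly
    with distance and summing to 1, then normalize per expression (3)."""
    n = patterns.shape[1]
    P = np.zeros((m,) * n)
    r0 = 1.0                              # spacing between neighbor nodes
    for x in patterns * (m - 1):          # map pattern into grid coordinates
        base = np.floor(x).astype(int)
        nodes = np.array([np.minimum(base + off, m - 1)
                          for off in product((0, 1), repeat=n)])
        r = np.linalg.norm(x - nodes, axis=1)
        if r.min() == 0:                  # pattern falls exactly on a node
            d = (r == 0).astype(float)
        else:
            d = np.where(r < r0, 1.0 - r, 0.0)
            d /= max(d.sum(), 1e-12)      # influences sum to 1
        np.add.at(P, tuple(nodes.T), d)
    return P / len(patterns)              # expression (3): P* = P_N / N
```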
  • From expression (2), it will be appreciated that $\sum_{k=1}^{2^n} d_k^{(j)} = 1$, i.e. the influence values for each training pattern are normalized. [0087] [0088]
  • Once the accumulation of $P_N[\chi_0, \chi_1, \ldots, \chi_{n-1}]$ (where j = N, the number of training patterns) is completed, a normalization operation is performed to obtain the total energy value for the PPNN, $E_{PPNN} = 1$. The normalized values (i.e. $P^*$) for the PPNN are calculated according to expression (3) as follows: [0089]
  • $P^*_N[\chi_0, \chi_1, \ldots, \chi_{n-1}] = P_N[\chi_0, \chi_1, \ldots, \chi_{n-1}] / N \quad (3)$
  • For feed-forward calculations, the trained and normalized nodes $P^*_N[\chi_0, \chi_1, \ldots, \chi_{n-1}]$ and the reverse mapping are utilized according to expression (4) given below: [0090]
  • $h_j = \sum_{i=0}^{2^n - 1} P^{*(i)}_N[\chi_0, \chi_1, \ldots, \chi_{n-1}] \, d_j^{(i)}[\chi_0, \chi_1, \ldots, \chi_{n-1}] \quad (4)$
  • where $d_j^{(i)}[\chi_0, \chi_1, \ldots, \chi_{n-1}]$ are calculated according to expression (2) above. [0091]
  • To solve a two-class (i.e. $C_0$—cytoplasm and $C_1$—nuclear) application using the PPNN according to the present invention, two networks must be trained, one for each class, that is, $P_{C_0}[\chi_0, \chi_1, \ldots, \chi_{n-1}]$ and $P_{C_1}[\chi_0, \chi_1, \ldots, \chi_{n-1}]$. Because both PPNNs are normalized, they can be joined together according to expression (5) below as follows: [0092]
  • $P_{C_0/C_1}[\chi_0, \chi_1, \ldots, \chi_{n-1}] = P^*_{C_0}[\chi_0, \chi_1, \ldots, \chi_{n-1}] - P^*_{C_1}[\chi_0, \chi_1, \ldots, \chi_{n-1}] \quad (5)$
  • The final decision from expressions (4) and (5) is given by: [0093]
  • $\mathrm{Pattern}_j \in \begin{cases} C_0, & \text{if } h_j > 0 \\ C_1, & \text{if } h_j \le 0 \end{cases} \quad (6)$
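  • Combining expressions (4) through (6), the feed-forward decision for a single pixel could be sketched as follows, reusing the grid convention of the training sketch above (`P_c0` and `P_c1` being the two trained, normalized class networks):

```python
import numpy as np
from itertools import product

def classify_pixel(pixel, P_c0, P_c1, m=64):
    """Reverse mapping of expression (4) applied to the joined network
    of expression (5); the sign test of expression (6) gives the class.
    `pixel` is assumed scaled to the unit cube; returns 0 for C0
    (cytoplasm) or 1 for C1 (nuclear)."""
    x = np.asarray(pixel, dtype=float) * (m - 1)
    base = np.floor(x).astype(int)
    h = 0.0
    for off in product((0, 1), repeat=len(x)):
        node = tuple(np.minimum(base + np.array(off), m - 1))
        w = max(1.0 - np.linalg.norm(x - node), 0.0)
        h += w * (P_c0[node] - P_c1[node])   # expression (5) under (4)
    return 0 if h > 0 else 1                 # expression (6)
```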
  • While the PPNN according to the present invention is particularly suited to handle multi-modal data distributions, in many practical situations there will be an unbalanced data set. This means that some clusters will contain fewer data samples than other clusters, and as a result some natural clusters which were represented with a small number of patterns could be lost after the PPNN joining. To solve this problem there is provided an algorithm which equalizes all natural clusters, according to another aspect of the invention. [0094]
  • Reference is next made to FIG. 5, which shows in flow chart form an embodiment of a clustering algorithm 200 according to the present invention. All training patterns, i.e. N samples, in block 202 and a given number (i.e. “K”) of clusters in block 204 are applied to a K-means clustering operation (block 206). The clustering operation 206 clusters the input data and generates clusters 1 through K (block 208). Next, all the training data which belongs to the i-th cluster is extracted into a separate subclass. For each subclass of training data, a normalized PPNN, i.e. $E_i = 1$, is created (block 210). The final operation in the clustering algorithm comprises joining all of the K PPNNs together and normalizing the resulting PPNN by dividing all nodes by the number of clusters (block 212). The operation performed in block 212 may be expressed as follows: [0095]
  • $E = (E_1 + \cdots + E_K)/K = 1$
  • It will also be understood that the clustering algorithm 200 may be applied to each class separately before creating the final classifier according to expression (6) above, as follows. The optimal number of clusters for each of the two classes may be found from a performance analysis of the final PPNN (expression (6) above). First, the number of clusters for PPN2 is fixed at 1 and the optimal number of clusters for PPN1 is found. Next, the reverse variant is modeled: PPN1 = 1 and PPN2 = opt. Lastly, the two optimal networks PPN1 opt and PPN2 opt are combined together according to expression (6). [0096]
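  • A sketch of the equalization of FIG. 5, reusing `train_ppnn` from the earlier sketch; SciPy's `kmeans2` stands in for the K-means step (an implementation choice, not the patent's):

```python
from scipy.cluster.vq import kmeans2

def equalized_ppnn(patterns, K, m=64):
    """Cluster equalization (blocks 202-212): split the N training
    patterns into K clusters, train one normalized PPNN per cluster,
    then join and renormalize so that E = (E1 + ... + EK)/K = 1."""
    _, assignment = kmeans2(patterns, K, minit='++')
    nets = [train_ppnn(patterns[assignment == i], m) for i in range(K)]
    return sum(nets) / K
```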
  • While the neural network assisted multi-spectral segmentation process is described with a Probability Projection Neural Network according to the present invention, it will be understood that other conventional neural networks are suitable, including, for example, Backpropagation (BP) networks, Elliptic Basis Function (EBF) networks, and Learning Vector Quantization (LVQ) networks. However, the PPNN is preferred: the performance results of the Probability Projection Neural Network have been found to exceed those achieved by conventional networks. [0097]
  • According to another aspect of the present invention, the neural network assisted multi-spectral segmentation process is implemented as a hardware-encoded procedure embedded in conventional FPGA (Field Programmable Gate Array) logic as part of a special-purpose computer. [0098]
  • The hardware implementation of this network takes the form of a look-up table contained in a portion of hardware memory (FIG. 6). As described above, the neural network 20 comprises three input nodes and a single, binary output node. The structure of the neural network 20 according to the present invention also simplifies the hardware implementation of the network. [0099]
  • As shown in FIG. 6, the three input nodes correspond to the three optical bands 301, 302, 303 used in gathering the images. The images taken in the 530 nm and 630 nm bands have 7 bits of useful resolution, while the 577 nm band retains all 8 bits. (The 577 nm band is centered on the nucleus.) The performance of the neural network 20 is then determined for all possible combinations of these three inputs. Since there are 22 bits in total, there are $2^{22}$, or approximately 4.2 million, possible combinations. To create the look-up table, all input pixels in the space ($2^7 \times 2^7 \times 2^8$ variants for the three images in the present embodiment) are scanned and the look-up table is filled with the PPNN decision (1—the pixel belongs to a nucleus; 0—the pixel does not) for each of these pixel combinations. [0100]
  • The coding of the results (i.e. outputs) of the neural network comprises assigning each possible combination of inputs a unique address 304 in a look-up table 305 stored in memory. The address 304 in the table 305 is formed by joining together the binary values of the three channel values, indicated by 306, 307, 308, respectively, in FIG. 6. For example, as shown in FIG. 6, the pixel for the image from the first channel 301 (i.e. 530 nm) is binary 0101011, the pixel for the image from the second channel 302 (i.e. 630 nm) is binary 0101011, and the pixel for the image from the third channel 303 (i.e. 577 nm) is binary 00101011; concatenated together, the binary representations 306, 307, 308 form the 22-bit address 304, which is binary 0101011010101100101011. The address 304 points to a location in the look-up table 305 (i.e. memory) which stores a single binary value 309 that represents the response of the neural network to this combination of inputs, e.g. the logic 0 at memory location 0101011010101100101011 signifies that the pixel in question does not belong to the nucleus. [0101]
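  • A sketch of the address formation and of the table fill described above (`classify_pixel` is the earlier feed-forward sketch, and `P_c0`/`P_c1` its trained class networks; the bit layout follows FIG. 6):

```python
def lut_address(p530, p630, p577):
    """Concatenate the 7-bit 530 nm, 7-bit 630 nm, and 8-bit 577 nm
    values into the 22-bit look-up-table address of FIG. 6."""
    return ((p530 & 0x7F) << 15) | ((p630 & 0x7F) << 8) | (p577 & 0xFF)

# Fill the 2**22-entry table with the frozen network's binary decision.
table = bytearray(1 << 22)
for p530 in range(128):
    for p630 in range(128):
        for p577 in range(256):
            pixel = (p530 / 127.0, p630 / 127.0, p577 / 255.0)
            table[lut_address(p530, p630, p577)] = classify_pixel(pixel, P_c0, P_c1)
```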
  • The hardware encoding of the NNA-MSS advantageously allows the process to execute at high speed while making a complex decision. Secondly, as experimental data is further tabulated and evaluated, more complex decision spaces can be utilized to improve segmentation accuracy. Thus, an algorithm according to the present invention can be optimized further by adjusting a table of coefficients that describes the neural-network connection weights, without the necessity of altering the system architecture. [0102]
  • The present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Therefore, the presently discussed embodiments are considered to be illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. [0103]

Claims (28)

1. A method of analyzing cells, comprising:
a. providing a plurality of digitized images of at least one cell of regions of unknown nuclear or cytoplasmic material, each digitized image being formed from an optical image having a plurality of pixels associated therewith, each digitized image being formed in a narrow band of optical wavelength different from the other digitized images, each of the plurality of pixels having values from each of the digitized images; and
b. analyzing the values for each pixel to identify nuclear and cytoplasmic material of the at least one cell of unknown regions of nuclear or cytoplasmic material, said analyzing step utilizing previously developed classification information for discriminating nuclear or cytoplasmic material, the previously developed classification information being developed from at least one cell of known regions of nuclear or cytoplasmic material.
2. The method of claim 1, wherein the step of analyzing the values for each pixel includes utilizing a classifier trained on a training set of data developed from images having known regions of nuclear and cytoplasmic material.
3. The method of claim 1, wherein the previously developed classification information includes values for pixels stored in a look-up table in a memory storage device.
4. The method of claim 3, wherein the previously developed classification information has memory addresses in the look-up table, the memory addresses comprising a concatenation of the values from each of the digitized images representing the same region of the at least one cell, each of the digitized images being drawn from the band of optical wavelength.
5. The method of claim 1 wherein the previously developed classification information includes a predetermined discriminant between values for pixels representing regions of nuclear and cytoplasmic material.
6. The method of claim 1, further comprising the step of assigning a classification to each pixel as representing nuclear or cytoplasmic material.
7. The method of claim 6, further comprising:
a. forming an absorption map from each of the digitized images to represent the light absorption characteristics associated with each value for each pixel before the step of analyzing; and
b. assigning a classification to each pixel based upon absorption characteristics.
8. The method of claim 7, wherein the step of forming an absorption map comprises applying a formula to each of the digitized images.
9. The method of claim 5, wherein the step of analyzing includes applying a linear discriminant analysis to define a linear boundary between values for pixels representing regions of nuclear and cytoplasmic material.
10. The method of claim 9, wherein the linear discriminant analysis discriminates between pixels of nuclear material and at least two types of cytoplasmic material.
11. The method of claim 1, wherein the at least one cell of unknown nuclear or cytoplasmic material comprises a cellular sample prepared according to the Papanicolaou staining procedure.
12. The method of claim 1, further comprising developing the previously developed classification information by:
a. providing a plurality of digitized images of at least one cell of regions of known nuclear or cytoplasmic material, each digitized image being formed from an optical image having a plurality of pixels associated therewith, each digitized image being formed in a narrow band of optical wavelength different from the other digitized images, each of the plurality of pixels having values from each of the digitized images; and
b. assigning a classification to each pixel as representing regions of nuclear or cytoplasmic material; and
c. storing the classifications in a look-up table in a memory storage device.
13. The method of claim 12, wherein the step of storing includes storing the classifications in locations of the look-up table, the locations having addresses corresponding to the pixels.
14. The method of claim 1, wherein the step of analyzing comprises neural network processing of the values for pixels of the digitized images.
15. The method of claim 14, wherein the neural network processing comprises accepting one or more inputs into at least one processing element, multiplying the inputs by weighting factors, and applying a formula to the weighted inputs to provide an output to a plurality of other processing elements.
16. The method of claim 15, wherein each value for a pixel is accepted and processed by a different one of the at least one processing elements.
17. A method of analyzing cells, comprising:
a. providing a plurality of digitized images of at least one cell, each digitized image being formed in a narrow band of optical wavelength different from the other digitized images and including at least one first digitized image in a wavelength of between 525 to 575 nanometers, at least one second digitized image in a wavelength of between 565 to 582 nanometers, and at least one third digitized image in a wavelength of between 625 to 635 nanometers; and
b. analyzing the digitized images to identify nuclear and cytoplasmic material of the at least one cell.
18. The method of claim 17, wherein the at least one first digitized image has a wavelength of between 525 to 535 nanometers, the at least one second digitized image has a wavelength of between 572 to 582 nanometers, and the at least one third digitized image has a wavelength of between 625 to 635 nanometers.
19. The method of claim 17, wherein the at least one first digitized image has a wavelength of between 535 to 545 nanometers, the at least one second digitized image has a wavelength of between 572 to 582 nanometers, and the at least one third digitized image has a wavelength of between 625 to 635 nanometers.
20. The method of claim 17, wherein the at least one first digitized image has a wavelength of between 565 to 575 nanometers, the at least one second digitized image has a wavelength of between 565 to 575 nanometers, and the at least one third digitized image has a wavelength of between 625 to 635 nanometers.
21. The method of claim 17, wherein the step of analyzing the pixels includes utilizing a classifier trained on a set of data developed from images having known regions of nuclear and cytoplasmic material.
22. The method of claim 17, wherein the step of analyzing includes analyzing the digitized images based upon previously developed classification information.
23. The method of claim 22, wherein the step of analyzing comprises analyzing values for each pixel, each pixel having values from each of the digitized images.
24. The method of claim 17, further comprising the step of assigning a classification to each pixel as representing regions of nuclear or cytoplasmic material.
25. The method of claim 17, wherein the at least one cell comprises a cellular sample prepared according to the Papanicolaou staining procedure.
26. The method of claim 22, wherein the previously developed classification information is developed from at least one cell of known regions of nuclear or cytoplasmic material for analyzing cells of unknown regions of nuclear or cytoplasmic material.
27. The method of claim 26, further comprising the step of storing the previously developed classification information in a look-up table in an electronic memory device.
28. The method of claim 17, wherein the step of analyzing comprises neural network processing of the digitized images.
US09/970,610 1995-09-19 2001-10-04 Neural network assisted multi-spectral segmentation system Abandoned US20020123977A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/970,610 US20020123977A1 (en) 1995-09-19 2001-10-04 Neural network assisted multi-spectral segmentation system

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US396495P 1995-09-19 1995-09-19
US09/040,378 US6463425B2 (en) 1995-09-19 1998-03-18 Neural network assisted multi-spectral segmentation system
US09/970,610 US20020123977A1 (en) 1995-09-19 2001-10-04 Neural network assisted multi-spectral segmentation system

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
PCT/CA1996/000619 Continuation WO1997011350A2 (en) 1995-09-19 1996-09-18 A neural network assisted multi-spectral segmentation system
US09/040,378 Continuation US6463425B2 (en) 1995-09-19 1998-03-18 Neural network assisted multi-spectral segmentation system

Publications (1)

Publication Number Publication Date
US20020123977A1 true US20020123977A1 (en) 2002-09-05

Family

ID=21708431

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/040,378 Expired - Fee Related US6463425B2 (en) 1995-09-19 1998-03-18 Neural network assisted multi-spectral segmentation system
US09/970,610 Abandoned US20020123977A1 (en) 1995-09-19 2001-10-04 Neural network assisted multi-spectral segmentation system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/040,378 Expired - Fee Related US6463425B2 (en) 1995-09-19 1998-03-18 Neural network assisted multi-spectral segmentation system

Country Status (6)

Country Link
US (2) US6463425B2 (en)
EP (1) EP0850405A2 (en)
JP (1) JPH11515097A (en)
AU (1) AU726049B2 (en)
CA (1) CA2232164A1 (en)
WO (1) WO1997011350A2 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070011118A1 (en) * 2005-06-28 2007-01-11 Snook James A Addressing Scheme for Neural Modeling and Brain-Based Devices using Special Purpose Processor
US20070094481A1 (en) * 2005-06-28 2007-04-26 Snook James A Neural Modeling and Brain-Based Devices Using Special Purpose Processor
US20070100780A1 (en) * 2005-09-13 2007-05-03 Neurosciences Research Foundation, Inc. Hybrid control device
US20080262984A1 (en) * 2007-04-19 2008-10-23 Microsoft Corporation Field-Programmable Gate Array Based Accelerator System
US20090202128A1 (en) * 2005-02-25 2009-08-13 Iscon Video Imaging Llc Methods and systems for detecting presence of materials
US20100076911A1 (en) * 2008-09-25 2010-03-25 Microsoft Corporation Automated Feature Selection Based on Rankboost for Ranking
US20100076915A1 (en) * 2008-09-25 2010-03-25 Microsoft Corporation Field-Programmable Gate Array Based Accelerator System
US20130163844A1 (en) * 2011-12-21 2013-06-27 Fuji Xerox Co., Ltd. Image processing apparatus, image processing method, non-transitory computer-readable medium, and image processing system
CN108885682A (en) * 2016-02-26 2018-11-23 谷歌有限责任公司 Use Processing with Neural Network cell image
US10902577B2 (en) 2017-06-19 2021-01-26 Apeel Technology, Inc. System and method for hyperspectral image processing to identify object
US10902581B2 (en) 2017-06-19 2021-01-26 Apeel Technology, Inc. System and method for hyperspectral image processing to identify foreign object
EP3662271A4 (en) * 2017-07-31 2021-04-14 Smiths Detection Inc. System for determining the presence of a substance of interest in a sample

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000070549A1 (en) * 1999-05-12 2000-11-23 Siemens Aktiengesellschaft Address reading method
JP2005331394A (en) 2004-05-20 2005-12-02 Olympus Corp Image processor
US20060017740A1 (en) * 2004-07-26 2006-01-26 Coleman Christopher R Diurnal variation of geo-specific terrain temperatures in real-time infrared sensor simulation
US20070036467A1 (en) * 2004-07-26 2007-02-15 Coleman Christopher R System and method for creating a high resolution material image
US20060020563A1 (en) * 2004-07-26 2006-01-26 Coleman Christopher R Supervised neural network for encoding continuous curves
EP2257636A4 (en) * 2008-07-03 2014-10-15 Nec Lab America Inc Epithelial layer detector and related methods
JPWO2010021043A1 (en) * 2008-08-21 2012-01-26 グローリー株式会社 Cash management system
US9551700B2 (en) * 2010-12-20 2017-01-24 Milagen, Inc. Device and methods for the detection of cervical disease
US9053429B2 (en) * 2012-12-21 2015-06-09 International Business Machines Corporation Mapping neural dynamics of a neural model on to a coarsely grained look-up table
US9087301B2 (en) 2012-12-21 2015-07-21 International Business Machines Corporation Hardware architecture for simulating a neural network of neurons
US9373059B1 (en) 2014-05-05 2016-06-21 Atomwise Inc. Systems and methods for applying a convolutional network to spatial data
US10546237B2 (en) 2017-03-30 2020-01-28 Atomwise Inc. Systems and methods for correcting error in a first classifier by evaluating classifier output in parallel
GB201705876D0 (en) 2017-04-11 2017-05-24 Kheiron Medical Tech Ltd Recist
GB201705911D0 (en) 2017-04-12 2017-05-24 Kheiron Medical Tech Ltd Abstracts
HUE058907T2 (en) 2018-06-14 2022-09-28 Kheiron Medical Tech Ltd Second reader suggestion
EP3598194A1 (en) * 2018-07-20 2020-01-22 Olympus Soft Imaging Solutions GmbH Method for microscopic assessment
US11151356B2 (en) * 2019-02-27 2021-10-19 Fei Company Using convolution neural networks for on-the-fly single particle reconstruction
CN113515798B (en) * 2021-07-05 2022-08-12 中山大学 Urban three-dimensional space expansion simulation method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6007996A (en) * 1995-12-12 1999-12-28 Applied Spectral Imaging Ltd. In situ method of analyzing cells
US6665060B1 (en) * 1999-10-29 2003-12-16 Cytyc Corporation Cytological imaging system and method
US6690817B1 (en) * 1993-08-18 2004-02-10 Applied Spectral Imaging Ltd. Spectral bio-imaging data for cell classification using internal reference

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4998284A (en) * 1987-11-17 1991-03-05 Cell Analysis Systems, Inc. Dual color camera microscope and methodology for cell staining and analysis
US4839807A (en) * 1987-08-03 1989-06-13 University Of Chicago Method and system for automated classification of distinction between normal lungs and abnormal lungs with interstitial disease in digital chest radiographs
US5544650A (en) * 1988-04-08 1996-08-13 Neuromedical Systems, Inc. Automated specimen classification system and method
US4965725B1 (en) * 1988-04-08 1996-05-07 Neuromedical Systems Inc Neural network based automated cytological specimen classification system and method
WO1991020048A1 (en) * 1990-06-21 1991-12-26 Applied Electronic Vision, Inc. Cellular analysis utilizing video processing and neural network
US5734022A (en) * 1990-08-01 1998-03-31 The Johns Hopkins University Antibodies to a novel mammalian protein associated with uncontrolled cell division
US5257182B1 (en) * 1991-01-29 1996-05-07 Neuromedical Systems Inc Morphological classification system and method
US5276772A (en) * 1991-01-31 1994-01-04 Ail Systems, Inc. Real time adaptive probabilistic neural network system and method for data sorting
US5331550A (en) * 1991-03-05 1994-07-19 E. I. Du Pont De Nemours And Company Application of neural networks as an aid in medical diagnosis and general anomaly detection
IL98622A (en) * 1991-06-25 1996-10-31 Scitex Corp Ltd Method and apparatus for employing neural networks in color image processing
US5276771A (en) * 1991-12-27 1994-01-04 R & D Associates Rapidly converging projective neural network
EP0587093B1 (en) * 1992-09-08 1999-11-24 Hitachi, Ltd. Information processing apparatus using inference and adaptive learning
JP3207690B2 (en) * 1994-10-27 2001-09-10 シャープ株式会社 Image processing device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6690817B1 (en) * 1993-08-18 2004-02-10 Applied Spectral Imaging Ltd. Spectral bio-imaging data for cell classification using internal reference
US6007996A (en) * 1995-12-12 1999-12-28 Applied Spectral Imaging Ltd. In situ method of analyzing cells
US6665060B1 (en) * 1999-10-29 2003-12-16 Cytyc Corporation Cytological imaging system and method

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090202128A1 (en) * 2005-02-25 2009-08-13 Iscon Video Imaging Llc Methods and systems for detecting presence of materials
US7709796B2 (en) * 2005-02-25 2010-05-04 Iscon Video Imaging, Inc. Methods and systems for detecting presence of materials
US8126828B2 (en) 2005-06-28 2012-02-28 Neuroscience Research Foundation, Inc. Special purpose processor implementing a synthetic neural model of the human brain
US7533071B2 (en) 2005-06-28 2009-05-12 Neurosciences Research Foundation, Inc. Neural modeling and brain-based devices using special purpose processor
US8326782B2 (en) 2005-06-28 2012-12-04 Neurosciences Research Foundation, Inc. Addressing scheme for neural modeling and brain-based devices using special purpose processor
US20090240642A1 (en) * 2005-06-28 2009-09-24 Neurosciences Research Foundation, Inc. Neural modeling and brain-based devices using special purpose processor
US7627540B2 (en) 2005-06-28 2009-12-01 Neurosciences Research Foundation, Inc. Addressing scheme for neural modeling and brain-based devices using special purpose processor
WO2007002731A3 (en) * 2005-06-28 2007-05-10 Neurosciences Res Found Addressing scheme for neural modeling and brain-based devices using special purpose processor
US20070094481A1 (en) * 2005-06-28 2007-04-26 Snook James A Neural Modeling and Brain-Based Devices Using Special Purpose Processor
US20070011118A1 (en) * 2005-06-28 2007-01-11 Snook James A Addressing Scheme for Neural Modeling and Brain-Based Devices using Special Purpose Processor
US7765029B2 (en) 2005-09-13 2010-07-27 Neurosciences Research Foundation, Inc. Hybrid control device
US20070100780A1 (en) * 2005-09-13 2007-05-03 Neurosciences Research Foundation, Inc. Hybrid control device
US20080262984A1 (en) * 2007-04-19 2008-10-23 Microsoft Corporation Field-Programmable Gate Array Based Accelerator System
US8583569B2 (en) 2007-04-19 2013-11-12 Microsoft Corporation Field-programmable gate array based accelerator system
US8117137B2 (en) 2007-04-19 2012-02-14 Microsoft Corporation Field-programmable gate array based accelerator system
US20100076915A1 (en) * 2008-09-25 2010-03-25 Microsoft Corporation Field-Programmable Gate Array Based Accelerator System
US20100076911A1 (en) * 2008-09-25 2010-03-25 Microsoft Corporation Automated Feature Selection Based on Rankboost for Ranking
US8301638B2 (en) 2008-09-25 2012-10-30 Microsoft Corporation Automated feature selection based on rankboost for ranking
US8131659B2 (en) 2008-09-25 2012-03-06 Microsoft Corporation Field-programmable gate array based accelerator system
US20130163844A1 (en) * 2011-12-21 2013-06-27 Fuji Xerox Co., Ltd. Image processing apparatus, image processing method, non-transitory computer-readable medium, and image processing system
US9070005B2 (en) * 2011-12-21 2015-06-30 Fuji Xerox Co., Ltd. Image processing apparatus, image processing method, non-transitory computer-readable medium, and image processing system for detection of target cells using image feature determination
US11443190B2 (en) 2016-02-26 2022-09-13 Google Llc Processing cell images using neural networks
CN108885682A (en) * 2016-02-26 2018-11-23 谷歌有限责任公司 Use Processing with Neural Network cell image
US11915134B2 (en) 2016-02-26 2024-02-27 Google Llc Processing cell images using neural networks
US10902577B2 (en) 2017-06-19 2021-01-26 Apeel Technology, Inc. System and method for hyperspectral image processing to identify object
US11410295B2 (en) 2017-06-19 2022-08-09 Apeel Technology, Inc. System and method for hyperspectral image processing to identify foreign object
US11443417B2 (en) 2017-06-19 2022-09-13 Apeel Technology, Inc. System and method for hyperspectral image processing to identify object
US10902581B2 (en) 2017-06-19 2021-01-26 Apeel Technology, Inc. System and method for hyperspectral image processing to identify foreign object
US11379709B2 (en) 2017-07-31 2022-07-05 Smiths Detection Inc. System for determining the presence of a substance of interest in a sample
EP3662271A4 (en) * 2017-07-31 2021-04-14 Smiths Detection Inc. System for determining the presence of a substance of interest in a sample
US11769039B2 (en) 2017-07-31 2023-09-26 Smiths Detection, Inc. System for determining the presence of a substance of interest in a sample

Also Published As

Publication number Publication date
WO1997011350A3 (en) 1997-05-22
JPH11515097A (en) 1999-12-21
EP0850405A2 (en) 1998-07-01
CA2232164A1 (en) 1997-03-27
US20020042785A1 (en) 2002-04-11
WO1997011350A2 (en) 1997-03-27
AU726049B2 (en) 2000-10-26
US6463425B2 (en) 2002-10-08
AU6921496A (en) 1997-04-09

Similar Documents

Publication Publication Date Title
US6463425B2 (en) Neural network assisted multi-spectral segmentation system
CN109800736B (en) Road extraction method based on remote sensing image and deep learning
KR102108050B1 (en) Method for classifying breast cancer histology images through incremental boosting convolution networks and apparatus thereof
CN106056595B (en) Based on the pernicious assistant diagnosis system of depth convolutional neural networks automatic identification Benign Thyroid Nodules
US7027627B2 (en) Medical decision support system and method
Guo et al. Breast cancer histology image classification based on deep neural networks
Buyssens et al. Multiscale convolutional neural networks for vision–based classification of cells
CN106248559A (en) A kind of leukocyte five sorting technique based on degree of depth study
Kartikeyan et al. An expert system for land cover classification
Song et al. Hybrid deep autoencoder with Curvature Gaussian for detection of various types of cells in bone marrow trephine biopsy images
CN109712150A (en) Optical microwave image co-registration method for reconstructing and device based on rarefaction representation
Yonekura et al. Improving the generalization of disease stage classification with deep CNN for glioma histopathological images
CN114445356A (en) Multi-resolution-based full-field pathological section image tumor rapid positioning method
CN116580394A (en) White blood cell detection method based on multi-scale fusion and deformable self-attention
Jonnalagedda et al. [regular paper] mvpnets: Multi-viewing path deep learning neural networks for magnification invariant diagnosis in breast cancer
LU500715B1 (en) Hyperspectral Image Classification Method Based on Discriminant Gabor Network
Harvey et al. Investigation of automated feature extraction techniques for applications in cancer detection from multispectral histopathology images
Hasan et al. Nuclei segmentation in er-ihc stained histopathology images using mask r-cnn
Kao et al. A novel deep learning architecture for testis histology image classification
Putzu Computer aided diagnosis algorithms for digital microscopy
Sahoo et al. An efficient approach for enhancing contrast level and segmenting satellite images: HNN and FCM approach
Subhija Detection of Breast Cancer from Histopathological Images
CN113642518B (en) Transfer learning-based her2 pathological image cell membrane coloring integrity judging method
Parisse et al. Graph encoding of multiscale structural networks from binary images with application to bio imaging
Bianchin et al. Remote sensing and urban analysis

Legal Events

Date Code Title Description
AS Assignment

Owner name: VERACEL INC., CANADA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:RAZ, RYAN;REEL/FRAME:012524/0571

Effective date: 20011212

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE