CA1232358A - Probabilistic learning element - Google Patents

Probabilistic learning element

Info

Publication number
CA1232358A
CA1232358A CA000472104A CA472104A CA1232358A CA 1232358 A CA1232358 A CA 1232358A CA 000472104 A CA000472104 A CA 000472104A CA 472104 A CA472104 A CA 472104A CA 1232358 A CA1232358 A CA 1232358A
Authority
CA
Canada
Prior art keywords
states
state
objects
sequences
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired
Application number
CA000472104A
Other languages
French (fr)
Inventor
Allen R. Smith
Chuan-Chieh Tan
Thomas B. Slack
Jeffrey N. Denenberg
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Standard Electric Corp
Original Assignee
International Standard Electric Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Standard Electric Corp filed Critical International Standard Electric Corp
Application granted granted Critical
Publication of CA1232358A publication Critical patent/CA1232358A/en
Expired legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/29Graphical models, e.g. Bayesian networks
    • G06F18/295Markov models or related models, e.g. semi-Markov models; Markov random fields; Networks embedding Markov models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/84Arrangements for image or video recognition or understanding using pattern recognition or machine learning using probabilistic graphical models from image or video features, e.g. Markov models or Bayesian networks
    • G06V10/85Markov-related models; Markov random fields

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Character Discrimination (AREA)
  • Machine Translation (AREA)

Abstract

ABSTRACT OF THE DISCLOSURE

A probabilistic learning element for performing task independent sequential pattern recognition. Said element receives sequences of objects and outputs sequences of recognized states composed of objects. A plurality of memory means are utilized for storing the received objects in sequence and for storing in context learned information including previously learned states, objects contained in previously learned states, positional information for each object in a learned state and other predetermined types of knowledge relating to the previously learned states and the objects contained therein. The element correlates the sequences of received objects with the learned information relating to previously learned states for providing conditional probabilities to possible sequences of recognized states. The most likely state sequence is determined and outputted as a recognized sequence when the element detects that a state has ended. The memory for storing learned information is a context organized memory including a plurality of tree structures having various types of information stored in the nodes thereof, with certain of said tree structures including at each node an attribute list referring to other tree structures, whereby searching is facilitated and unnecessary searching eliminated.
The element derives support coefficients relating to how much information was available when calculating the conditional probabilities and said support coefficients and conditional probabilities are combined to provide a rating of confidence.
When the rating of confidence exceeds a predetermined level, the element is caused to store the outputted recognized state sequence as a learned state sequence with said memories storing the various types of knowledge relating to said learned sequence of states.

Description

A. R. SMITH ET AL 4-2-3-8

IMPROVED PROBABILISTIC LEARNING ELEMENT

BACKGROUND OF THE INVENTION

Field of the Invention

The present invention relates generally to recognition systems and more particularly to trainable or learning systems, which are capable of modifying their own internal processing in response to information descriptive of system performance.

Description of the Prior Art

Recognition systems of the type that recognize patterns deal with the problem of relating a set or sequence of input objects or situations to what is already known by the system.
This operation is a necessary part of any intelligent system since such a system must relate its current input and input environment to what it has experienced in order to respond appropriately.
A pattern recognition task is typically divided into three steps: data acquisition, feature extraction, and pattern classification. The data acquisition step is performed by a transducer which converts measurements of the pattern into digital signals appropriate for a recognition system. In the feature extraction step these signals are converted into a set of features or attributes which are useful for discriminating the patterns relevant to the purposes of the recognizer. In the final step of pattern classification these features are matched to the features of the classes known by the system to decide which class best explains the input pattern.
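The three-step division above can be sketched in outline. The function names and the toy feature set below are illustrative assumptions for the sketch, not anything prescribed by the patent:

```python
# Hypothetical sketch of the three-step pipeline (all names are invented here).

def acquire(raw_measurement):
    # Data acquisition: transduce measurements into digital signals.
    return [int(v) for v in raw_measurement]

def extract_features(signals):
    # Feature extraction: reduce signals to discriminating attributes.
    return ["hi" if v > 127 else "lo" for v in signals]

def classify(features, known_classes):
    # Pattern classification: pick the class whose known feature set
    # best explains the extracted features.
    def overlap(cls):
        return sum(f in known_classes[cls] for f in features)
    return max(known_classes, key=overlap)

known = {"A": {"hi"}, "B": {"lo"}}
result = classify(extract_features(acquire([200, 40, 250])), known)
print(result)
```

As the text notes, a powerful classifier can tolerate such simple features; here two of the three toy features match class "A".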

The division between the step of feature extraction and pattern classification is somewhat arbitrary. A powerful feature extractor would make the classifier's job trivial and conversely, a powerful decision mechanism in the classifier would perform well even with simple features. However in practice, feature extractors tend to be more task dependent.
For example, data acquisition and feature extraction for hand-printed character recognition will differ from that needed for speech recognition. Pattern classification on the other hand can be designed to be task independent, although it often is not.
A particular category of pattern recognition tasks is characterized by whether or not the features can be reduced to a linear sequence of input objects for the classification step.
This category is called sequential pattern recognition.
Examples of tasks which naturally fall into this category are optical character recognition, waveform recognition, and speech recognition. Other tasks such as computer image recognition can be placed within sequential pattern recognition by an appropriate ordering of the features.
Patterns of features must be acquired by the pattern recognizer for a new class of features before the system can recognize the class. When patterns cannot be learned from examples, acquisition of the patterns is a major problem.
Prior art optical character and speech recognition systems correlate input patterns with a set of templates, in order to determine a "best match". A correlation is performed using a particular algorithm which is specifically derived for the matching operation required for a particular problem such as speech recognition, character recognition, etc. A change in type font or speaker, for example, would require replacing the templates and changing parameters of the algorithm in such prior art systems.
Many trainable systems exist in the prior art, of which the following U. S. Patents are descriptive. U. S. Patent No.
3,950,733, an Information Processing System, illustrates an adaptive information processing system in which the learning growth rate is exponential rather than linear. U. S. Patent No. 3,715,730, a Multi-criteria Search Procedure for Trainable Processors, illustrates a system having an expanded search capability in which trained responses to input signals are produced in accordance with predetermined criteria. U. S.
Patent No. 3,702,986, a Trainable Entropy System illustrates a series of trainable non-linear processors in cascade. U. S.
Patent No. 3,70~,865, a Synthesized Cascaded Processor System, illustrates a system in which a series of trainable processors generate a probabilistic signal for the next processor in the cascade which is a best estimate for that processor of a desired response. U. S. Patent Nos. 3,638,196 and 3,601,811, Learning Machines, illustrate the addition of hysteresis to perceptron-like systems. U. S. Patent No. 3,701,974, Learning Circuit, illustrates a typical learning element of the prior art. U. S.
Patent 3,613,084, Trainable Digital Apparatus, illustrates a deterministic synthesized boolean function. U. S. Patent No.
3,623,015, Statistical Pattern Recognition System With Continual Update of Acceptance Zone Limits, illustrates a pattern recognition system capable of detecting similarities between patterns on a statistical basis. U. S. Patents No.
3,999,161 and 4,066,999 relate to statistical character recognition systems having learning capabilities.
Other patents that deal with learning systems that appear to be adaptive based upon probability or statistical experience include U. S. Patents No. 3,725,875; 3,576,976; 3,678,461;
3,440,617 and 3,414,885. Patents showing logic circuits that may be used in the above systems include U. S. Patents No.
3,566,359; 3,562,502; 3,446,950; 3,103,648; 3,646,329;
3,753,243; 3,772,558; and 3,934,231.
Adaptive pattern, speech or character recognition systems are shown in the following U. S. Patents No. 4,318,083;
4,189,779; 3,581,281; 3,588,823; 3,196,399; 4,100,370; and 3,457,552. U. S. Patent No. 3,988,715 describes a system that develops conditional probabilities character by character with the highest probability being selected as the most probable interpretation of an optically scanned word. U. S. Patent No.
3,267,431 describes a system that uses a "perceptron", a weighted correlation network, that is trained on sample patterns for identification of other patterns.
Articles and publications relating to the subject matter of the invention include the following: Introduction To Artificial Intelligence, P. C. Jackson Jr., Petrocelli/Charter, N. Y., 1974, pages 368-381; "Artificial Intelligence", S. K.
Roberts, Byte, Vol. 6, No. 9, September 1981, pages 164-178;

"How Artificial Is Intelligence?", W. R. Bennett Jr., American Scientist, Vol. 65, November-December 1977, pages 69~-702; and "Machine Intelligence and Communications In Future NASA
Missions", T. J. Healy, IEEE Communications Magazine, Vol. 19, No. 6, November 1981, pages 8-15.

SUMMARY OF THE INVENTION

The present invention provides a probabilistic learning system (PLS) which performs the task independent pattern classification step for sequential pattern recognition systems and which acquires pattern descriptions of classes by learning from example. Thus, the PLS of the present invention is an adaptive or trainable learning system. Although a PLS could be applied to the problem of selecting good features for the feature extraction step, that application will not be described here.
The PLS may comprise a plurality of probabilistic learning elements (PLE's) configured in an array or could be an individual PLE depending upon overall system requirements.
Each PLE is an element operating in accordance with its own set of multi-dimensional databases which are "learned" or altered through feedback from the environment in which it operates.
The array or the PLE has as its input a sequence of objects containing information, such as pixels, characters, speech or digital input from the environment. This information is processed as it passes through the array or the PLE, thereby generating an output which may be either extracted knowledge in the form of an output state, such as a recognized pattern, or a set of control signals to be fed back for use as a future input modification, i.e. a process control adaptive equalizer.
The invention includes control mechanisms to provide performance feedback information to the array or the PLE. This information is used locally by each PLE of the array to modify its own databases for more appropriate behavior. Such performance feedback information can be supplied either to the entire array (globally) or to selected positions of the array, i.e., one row, column or to the PLE involved in the generation of a particular output.
It is a primary objective of the present invention to utilize, in each individual PLE, four interacting but independent processing modules. An input module receives and stores input object sequence information. The input module provides two outputs: firstly, a set of most probable output states that would end at the present time and their probabilities; secondly, the probability that some state ends at the present time. A predict module receives and stores information on output state options including state and length information. The predict module provides information on probable state length outputs. A decide module is responsive to the first output of the input module, the output of the predict module and previous state options to derive a current list of state options. An output module receives the list of state options and the second output of the input module to choose the best state option which is outputted along with a confidence factor signal. When the confidence factor signal exceeds a predetermined threshold value, the databases in both the input and predict modules are updated with the new valid data.
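The four-module flow just described can be sketched as one processing step. Everything numeric below — the threshold value, the end-of-state estimate, and the length weighting — is an invented stand-in for the patent's actual computations:

```python
THRESHOLD = 0.8  # assumed confidence threshold for updating the databases

def ple_step(obj, learned, predicted_lengths, prev_options):
    # Input module: states that could end at this object, with probabilities.
    states = learned.get(obj, {})
    total = sum(states.values()) or 1
    state_probs = {s: c / total for s, c in states.items()}
    p_end = min(1.0, total / 10)          # toy probability that a state ends now

    # Decide module: weight each state option by its predicted plausibility.
    options = {s: p * predicted_lengths.get(s, 0.5)
               for s, p in state_probs.items()} or dict(prev_options)

    # Output module: choose the best option; confidence scales with p_end.
    best = max(options, key=options.get)
    confidence = options[best] * p_end
    if confidence > THRESHOLD:            # feedback: reinforce the databases
        learned.setdefault(obj, {})[best] = learned[obj].get(best, 0) + 1
    return best, confidence, options

learned = {"x": {"A": 8, "B": 2}}
best, conf, _ = ple_step("x", learned, {"A": 0.9}, {})
print(best, conf)
```

With these toy numbers the best option "A" scores 0.72, which falls below the assumed 0.8 threshold, so no database update occurs — mirroring the gating behavior described above.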
The data stored concerning the input objects and output states includes several types of knowledge extracted from the actual input objects and output states. Sets of extracted knowledge are stored and correlated in the modules using various methods of association depending upon the type of knowledge included in the particular set. The membership function of each set is learned using the adaptive process of the PLE.
The types of knowledge extracted and stored include:
frequency of objects and sequences of objects; position and positional frequency of objects and sequence of objects within states; state-lengths and state frequencies.
The PLE uses context driven searching in context organized memories to maintain a high throughput from the large database.
Efficient searching is facilitated by organizing the inputted objects and the various types of extracted intelligence in context.
When a plurality of PLE's are used in an array to form a PLS, parallelism may be employed to speed up task execution.
When using an array the size of the individual PLE's may be reduced as opposed to that required to do a complete task. The overall task is broken down into subtasks each accomplished by single PLE's or combinations of PLE's.

In order to maintain the general purpose nature of the PLS and its use for wide applicability, the representation step for specific tasks is accomplished in an input preprocessor rather than in the array itself.

The invention may be summarized according to a first broad aspect as a probabilistic learning element that sequentially receives objects and outputs sequences of recognized states, said learning element comprising: means for sequentially receiving objects; means for storing received object information, including, said received objects, and sequences of received objects; means for storing items of previously learned information, said items including, sequences of states, states contained in said sequences of states, objects contained in said states contained in said sequences of states, sequences of objects contained in said states contained in said sequences of states, positional information for each object contained in said states contained in said sequences of states, and predetermined types of knowledge relating to said previously learned information, whereby received object information, relating to received objects, is stored as well as previously learned information;
means for correlating said received object information with said previously learned information for assigning conditional probabilities to possible sequences of recognized states; means, responsive to said conditional probabilities of possible sequences of recognized states, for determining a most likely sequence of recognized states; means, responsive to said previously learned information, for detecting that a state has ended and for providing an end of state signal; and means, responsive to said end-of-state signal, for outputting said most likely sequence of recognized states as a recognized state sequence.

According to another broad aspect, the invention provides a probabilistic learning element that sequentially receives objects and outputs sequences of recognized states and includes context driven searching, said learning element comprising: means for sequentially receiving objects; short term memory means for storing, in sequential context, said received objects; a context organized memory means comprising a plurality of tree structures for storing items of previously occurring learned information, said items including, states and the number of previous occurrences of said states, said states each having a length, objects contained in said states and the number of previous occurrences of said objects, lengths of said states and the number of occurrences of said state lengths, and state-length pairs in said states and the number of occurrences of said state-length pairs, said items of stored information being stored in accordance with the context in which the items of stored information statistically occurred, whereby from any items of stored information an item of stored information which statistically occurs next in context is directly accessible; said tree structures used to store the object information include an alltree structure and a plurality of singletree structures, the alltree structure stores the contextual occurrences of all objects received by the probabilistic learning element and at each node of the alltree there is provided an attribute list which refers to singletrees that include the same object context as the node of the alltree, a singletree is provided for each said state, whereby searching is facilitated by using the alltree as a pointer to the less complex singletrees; means for correlating said received objects stored in the short term memory means with information stored in the context organized memory means, said correlation being facilitated by use of the context of said received objects stored in the short term memory means as a pointer to the context of the statistically stored information in the context organized memory means, said correlating means assigning conditional probabilities to possible sequences of recognized states; means, responsive to said conditional probabilities, for determining a most likely state sequence; means, responsive to the stored information, to determine a probability of an end of a state; and means, responsive to the end-of-state probability, for outputting said most likely state sequence as a sequence of recognized states.

In accordance with another broad aspect of the invention there is provided, a probabilistic learning element that sequentially receives objects and outputs sequences of recognized states, said learning element comprising: means for sequentially receiving objects; means for storing, said received objects, sequences of received objects, sequences of previously learned states, states contained in said sequences of previously learned states, objects contained in said states contained in said sequences of previously learned states, sequences of said objects contained in said states contained in said sequences of previously learned states, and predetermined types of knowledge relating to, said sequences of previously learned states, states contained in said sequences of previously learned states, objects contained in said states contained in said sequences of previously learned states, and sequences of said objects contained in said states contained in said sequences of previously learned states, so that current object information relating to said received objects and sequences of objects is stored as well as statistical information relating to said previously learned sequences of states, said states, objects and sequences of objects contained in said previously learned sequences of states; means for correlating said current object information with stored statistical information relating to previously learned sequences of states for assigning conditional probabilities to possible sequences of recognized states; means, responsive to said conditional probabilities of possible sequences of recognized states, for determining a most likely state sequence;
means, responsive to the stored current object information and statistical information, to determine a probability of an end of a state; means, responsive to the probability of an end of a state, for outputting the most likely state sequence as a sequence of recognized states; and means for providing a rating of confidence in said sequence of recognized states, said means including means for deriving support coefficients relating to how much information was available when calculating the conditional probabilities, said confidence rating being a function of the conditional probabilities and the support coefficients for the conditional probabilities used to determine the most likely state sequence.

In another broad aspect, the invention is a probabilistic learning element that sequentially receives objects and outputs sequences of recognized states, said learning element comprising: means for sequentially receiving objects; short term memory means for storing said received objects in sequential context; context organized memory means, for storing items of previously occurring learned information, including a plurality of tree structures, each tree having a plurality of connected nodes, said plurality of tree structures including, an alltree structure having objects stored at the nodes of the tree along with the number of previous occurrences of each object, said alltree storing all objects contained in previously learned states in context so that from any stored object, objects which statistically occur next in context are directly accessible, each node of the alltree including an attribute list pointing to nodes of singletrees having objects stored therein in the same context as the context of the alltree node, a plurality of singletrees, one for each previously learned state, each node of the singletrees storing an object in context along with the number of previous occurrences of said object and an attribute list including positional information relating to the position of the object within the state and the number of previous occurrences of the object in that position, a tree structure for storing learned states in context so as to include states, the number of previous occurrences of each state, sequences of states and the number of previous occurrences of each state sequence, a tree structure for storing lengths of learned states in context so as to include state lengths, the number of previous occurrences of each state length, sequences of state lengths and the number of previous occurrences of each state length sequence, and a tree structure for storing state-length pairs of learned states in context so as to include the number of previous occurrences of each state-length pair, sequences of state-length pairs and the number of previous occurrences of each state-length pair sequence; means for correlating said received objects stored in the short term memory means with information stored in the context organized memory means, said correlation being facilitated by use of the context of said received objects stored in the short term memory means as a pointer to the context of the stored information in the context organized memory means, said correlating means assigning conditional probabilities to possible sequences of recognized states; means, responsive to said conditional probabilities, for determining a most likely state sequence; means, responsive to the stored information, to determine a probability of an end of a state; and means, responsive to the end-of-state probability, for outputting said most likely state sequence as a sequence of recognized states.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a simplified block diagram of a PLS
including an array of PLE's in accordance with the present invention.

Figure 2 is a diagram showing the recognition function of a PLE.

Figure 3 is an example of a context organized memory.

Figure 4 illustrates the probability computing process in a PLE and also illustrates the relationship of the major subroutines of a PLE with respect to probability computations.

Figure 5 is a simplified functional block diagram of a PLE in accordance with the present invention.

Figure 6 is a block diagram of the input module shown in Figure 5.

Figure 7 is a block diagram of the end time state length function shown in Figure 6.

Figure 8 is a block diagram of the span-length function module shown in Figure 6.

Figure 9 is a block diagram of the length normalizer shown in Figure 6.

Figure 10 is a block diagram of the predict module shown in Figure 5.

Figure 11 is a block diagram of the decide module shown in Figure 5.

Figure 12 is a block diagram of the output module shown in Figure 5.

Figure 13 illustrates the use of a PLS in a character recognition system.

Figure 14 illustrates a script reading system using a plurality of PLS's.

DESCRIPTION OF THE INVENTION

Prior to describing the invention it may be helpful to define certain terms used in the description.

Object: a feature extracted by the device immediately before the PLS and inputted to the PLS. An object may be a picture element (pixel), a set of pixels, a character or even a word depending upon the application.
State: a recognized item outputted from the PLS such as a character, a word or a script depending upon the application.
Length: the number of objects in a state.
State-Length Pair: a state and its length indexed and stored together.
Position: information which identifies that an inputted object is part of a state and where in the sequence of all objects that are in that state it occurs. Conversely, this same information identifies that a particular state was formed of a particular set of objects from the sequence sent by the feature extractor. Thus, the position of a state means both where in the input stream the state begins and where it ends.
The position of an object means how far from the beginning of the state and from the end of the state it occurred.
Confidence: a rating related to the probability that a particular state occurred in a particular position and the support coefficient of said probability. Confidence = Probability × Support Coefficient.
Support Coefficient: a rating related to how much information was available when calculating a given probability.
It is possible to have a high probability based on little information.
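The Confidence = Probability × Support Coefficient relation can be illustrated numerically. The saturating support function below is an invented stand-in for however the element actually derives support coefficients:

```python
# Toy illustration of Confidence = Probability * Support Coefficient.
# The saturation constant is an assumption, not from the patent.

def support_coefficient(observations, saturation=20):
    # Support grows toward 1.0 as more evidence backs the probability.
    return min(1.0, observations / saturation)

def confidence(probability, observations):
    return probability * support_coefficient(observations)

# The same probability rates very differently with little vs. much evidence:
weak = confidence(0.9, 2)     # high probability, little information
strong = confidence(0.9, 40)  # same probability, ample information
print(weak, strong)
```

This captures the point of the definition: a probability of 0.9 backed by only two observations earns a far lower confidence rating than the same probability backed by forty.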
Referring to Figure 1, there is shown a block diagram of a trainable PLS 10 formed of an array 12 of trainable PLE's constructed in accordance with the present invention. The PLS
includes an input 11 for receiving objects and an output 13 for outputting recognized states. The array 12 of PLE's 14a to 14h is configured as a two dimensional array of elements, each of which operates according to its own set of multi-dimensional databases. Values for the databases are obtained from the external environment and from outputs which comprise the results of the overall system operation. An output processor 16 also includes a feedback interface portion coupled to a bidirectional bus 18 which is adapted to feedback output information to a maintenance and human interface processor 20. The interface processor 20 also intercouples data from an input preprocessor 22 and the array 12 on bidirectional busses 24 and 26 to provide feedback paths not only between the output processor 16 and the trainable array 12, but also between the input processor 22 and the trainable array 12 via the maintenance and human interface processor 20.
Prior to discussing the operation of the PLS 10, which comprises an array 12 of PLE's 14a to 14h, it should be understood that a PLS may consist of only one PLE if it has sufficient capacity for the assigned task.
A PLE inputs a sequence of objects and outputs a sequence of states which it learned from past experience to be the state sequence most closely associated with the input object sequence.
The confidence which the PLE has in the association is indicated by assigning a rating to the output state sequence. This recognition function is shown most clearly in Figure 2. The learning function could be illustrated by reversing the arrow now pointing to the output state sequence and ignoring the confidence rating.
In keeping with the task independent goal of the PLS there is no inherent meaning associated with an input object or an output state; they are members of finite sets. The input and output may in fact be the same set, but this is transparent to the system. The number of unique objects and states appearing in the task does, however, affect the database size of each PLE.
Although much of the PLE's processing power, generality, and speed can be attributed to statistical modeling of its environment and the organization of that model in memory, the basic idea behind the modeling is simple. A sequence of objects as shown in Figure 2 is learned and modeled by counting the n-grams of objects making up the sequence, where an n-gram is defined simply as a subsequence of n objects. Thus after learning, the element knows how often each object (1-gram) appeared in any sequence for each state, how often each pair of objects (2-gram) appeared in any sequence for each state, and so forth up to a specified limit of n. If D is the number of different objects there can be as many as D to the power of n different n-grams. However, the number is limited by the realities of a pattern recognition task. The size of D is determined by the feature extractor and the number of unique n-grams is determined by the states being recognized. Typically a finite set of states uses only a small fraction of the pattern space (e.g., this is true in speech recognition and optical character recognition).
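The n-gram counting just described can be sketched directly, here run on the "MISSISSIPPI" sequence that the description later uses for Figure 3:

```python
# Sketch of the n-gram counting model: count every subsequence of
# length 1..n in a training sequence.
from collections import Counter

def count_ngrams(sequence, max_n):
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(sequence) - n + 1):
            counts[sequence[i:i + n]] += 1
    return counts

counts = count_ngrams("MISSISSIPPI", 3)
print(counts["S"], counts["SS"], counts["SIS"])  # 4 2 1
```

The counts match the description: with D = 4 objects (S, I, M, P) and n = 3, the 1-gram "S" occurs four times, the 2-gram "SS" twice, and the 3-gram "SIS" once.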

A. R. SMITH ET AL 4-2-3-8

The identity and frequency of n-grams are stored in databases in a context organized manner for long term memory.
We call the databases that are organized in this manner Context Organized Memories or COM's. This type of memory storage is a modified tree structure in which each node represents a particular n-gram and is the parent node of all (n+1)-gram nodes that share the same first n objects. In addition, each node is linked to an (n-1)-gram node which represents the same object sequence with one less object at the beginning of the sequence.
Figure 3 gives an example of a COM in which the object n-grams are composed of letters for the word "MISSISSIPPI".
For "MISSISSIPPI" there are four objects, i.e., S, I, M, P; therefore D=4, and the highest level n-gram shown is a 3-gram for n=3. The objects on the path to a node at level n define the n-gram represented by the node. The number stored at the node is the frequency count of the n-gram. The dotted lines show links to the related (n-1)-grams. For example, the 3-gram "SIS" has occurred in the training once and it is linked to its unique 2-gram "IS".
The COM supports an efficient Context Driven Search. The memory arranges the objects so that the set of objects which statistically occur next in context are directly accessible from the current point or node in the structure. If the next input object does not match any of those in the expected set, the next position searched in the structure corresponds to the less specific context obtained conceptually by deleting the oldest object and algorithmically following the link to the (n-1)-gram node. At level n the greatest number of nodes expanded (i.e., searching all sons of a node) before the next object is found will be n. This corresponds to the case when the new object has never been found to follow any subpart of the current n-gram and the search must "drop all context" to find the object at level 1. An important feature of Context Driven Searching is that the average number of nodes expanded per input object is two. This is obvious if we remember that every failed node expansion (decreasing level by one) must be matched by some successful node expansion (increasing level by one) since the search remains within the finite levels of the tree.

The data structure of a PLE consists of four major long term databases and their supporting structures and a number of short term memories which will be discussed subsequently. The long term memories are COM's which may comprise one or more connected or associated tree structures. The COM's of the four major databases are indexed by object, state, length, or state-length.
The object database comprises a plurality of tree structures wherein the nodes of the trees each have an attribute list. One tree is called an alltree while the other trees of the object database are called singletrees. There are singletrees for each previously learned state, i.e., each state has its own object indexed singletree. Associated with each node of the alltree is an attribute list which acts as a pointer to all singletrees that include the same context as that of the alltree node. Thus, for every singletree there is a corresponding place in the alltree and that place in the alltree has an attribute list pointing to the place in the singletree.
Each node in the alltrees provides a record with these components:

(1) The additional object that this node represents.
(2) The frequency of occurrence of this pattern among the learned sequences of patterns. This occurrence is based not only on the object in this node but on the pattern of the nodes above it that point down to it.
(3) A calculated value from (2) derived by taking the logarithm of (2) and then multiplying by a constant. This operation maps the integer occurrence values to the integer range of zero to a predefined upper bound.
(4) A calculated value from the nodes under the current node. It is a measure of the usefulness of the information represented by those nodes. This value is called the support coefficient and is initialized to -1 to indicate that no valid data is stored here. Each time a node is updated, its support coefficient is also reset to -1 to indicate that the support coefficient has not been updated yet. This value is calculated when it is requested the first time after the last update. The value is then stored in the node and is valid until the next update.
(5) The pointer to the node which represents the same correlation information except for the last object, and whose object is greater than the object of this node.
(6) The pointer to the node which represents the same correlation information and with one more object.
This is called one level deeper. There may be more than one such node. The one that the down pointer points to is the one with the smallest object.
(7) The pointer to the node which represents the same pattern except it is one level higher. That is to say it does not have the oldest object.
(8) The pointer to the node which represents the same pattern without the last object. That node is also one level higher.

Singletrees are similar to alltrees in structure and purpose. The only difference in structure is in the attribute lists and the only difference in purpose is that an alltree contains pattern information independent of the output state recognized, while a singletree contains pattern information concerning a single output state. Thus, an alltree may contain the cumulative information of several singletrees. In the described embodiment we use one alltree to contain and control all the singletrees in the object database.
The entries of singletree attribute lists represent detailed correlation information for the state represented by the singletree for the node with which it is associated. Each entry has four components:

(1) The number of objects (distance) in front of the object this node represents. This provides the distance from the beginning of the state to the current node.
(2) The number of objects (distance) from the object of this node to the end of the state.
(3) The number of times this object in this position has been learned.
(4) The calculated data from (3), using the same calculation as done in component (3) of the alltree records.

The state, length and state-length databases each comprise one singletree structure indexed by state, length and state-length respectively. These singletrees do not have attribute lists as do the singletrees of the object database but the type of information stored at each node is similar to that stored in the object database tree.
When a COM is used to store the frequency of object n-grams forming parts of states the storage is efficient since only one tree is used for all states. An n-gram which appears in more than one state is stored once, and entries within the attribute list for the node list the proper states together with separate frequency counts.
Learning the next object in a sequence is simply a matter of creating a new node in the tree whenever the object appears in a new context or incrementing a frequency count when it appears in a previously learned context. The object is learned in all possible contexts from the (n-1)-gram preceding it for some maximum n down to a null context in which the object is recorded by itself as a 1-gram.
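A hypothetical sketch of this update step: each incoming object is recorded under every context from the longest available preceding (n-1)-gram down to the null context, which reproduces the same counts as batch n-gram counting would:

```python
def learn_object(counts, history, obj, max_n):
    """Record obj under every context from the (max_n - 1)-gram that
    precedes it down to the null context (obj alone as a 1-gram)."""
    context = history[-(max_n - 1):] if max_n > 1 else ""
    for k in range(len(context) + 1):
        gram = context[k:] + obj
        counts[gram] = counts.get(gram, 0) + 1

counts, seen = {}, ""
for obj in "MISSISSIPPI":
    learn_object(counts, seen, obj, 3)
    seen += obj

print(counts["SIS"], counts["S"])  # 1 4
```

Feeding "MISSISSIPPI" one object at a time yields the same frequencies as counting all n-grams of the finished sequence.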
The databases are arranged to store five different types of knowledge. The five types of knowledge that are modeled by the PLE and stored in COM's are as follows:

Type 1 The frequency of object n-grams forming parts of all possible states; this knowledge is stored at the nodes of the alltree.
Type 2 The position and positional frequency of object n-grams within states; this knowledge is stored in the singletree attribute lists of the object database.
Type 3 The frequency of n-grams composed of states (i.e. for states T and A a 2-gram of states would be TA); this knowledge is stored in the nodes of the singletree of the state database.
Type 4 The frequency of n-grams composed of state lengths (i.e., the lengths of the underlying object sequences; for state lengths of 4, 3 and 5 a 3-gram of state lengths would be 435); this knowledge is stored in the nodes of the singletree of the length database.
Type 5 The frequency of n-grams composed of state-length pairs, which knowledge is stored at the nodes of the state-length database.

Consider an object 4-gram, y1y2y3y4, stored at node j and let fj be the frequency of occurrence of the 4-gram and fi be the frequency of occurrence for its parent node, a 3-gram. Then the conditional probability that object y4 will occur in the context y1y2y3 is given by the maximum likelihood estimate:

P(y4 | y1y2y3) = fj/fi.   (1)

This is the probabilistic basis for pattern matching in the PLE. The following types of conditional probabilistic knowledge may be retrieved from the COM's using the above knowledge types:

P1. The probability that object yt will occur given the previous object context and state xi, from the nodes of the singletree in the object database.
P2. The probability that object Yt will occur with beginning position f, ending position g, given previous object context with consistent positioning and state xi, from the singletree attribute lists in the object database.
P3. The probability that state xi will occur given previous output states, from the nodes of the singletree in the state database.
P4. The probability of state length Lj given lengths of previous output states, from the nodes of the singletree in the length database.
P5. The probability of state and length xi, Lj given previous sequence of state-length pairs, from the nodes of the singletree in the state-length database.

These probabilities will be more formally defined and derived subsequently.
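A worked instance of equation (1), using counts from the "MISSISSIPPI" example of Figure 3 (a sketch; the helper name is ours):

```python
# frequency counts as stored at COM nodes for "MISSISSIPPI":
# "SI" occurs twice, followed once by "S" (in "SIS") and once by "P" (in "SIP")
counts = {"SI": 2, "SIS": 1, "SIP": 1}

def cond_prob(counts, context, obj):
    """Equation (1): P(obj | context) = f(context + obj) / f(context),
    the child node's count over its parent node's count."""
    fi = counts.get(context, 0)
    return counts.get(context + obj, 0) / fi if fi else 0.0

print(cond_prob(counts, "SI", "S"))  # 0.5
```

The estimate says that, after the context "SI", the objects "S" and "P" are equally likely, each with probability 0.5.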
Note that the sequence of state-length pairs is given as much attention by PLE modeling as the states themselves. This was done to permit the PLE to extract all relevant information from its environment so that it could decide what was needed to perform a task. In some pattern recognition tasks such as Morse code recognition or music classification the length of object sequences may be as important as the identity of the objects or the states. The PLE has the ability to use the information which is most helpful for the recognition task being performed.
The databases also include short term memory capability for storing the five types of knowledge that have recently been observed. The recently observed knowledge is correlated with the five types of knowledge that have been experienced, modeled and stored in COM's for long term memory in order to assign probabilities to possible output states. Short term memories build up and maintain the context in which the next object is handled. This saved context includes pointers into the trees of the COM's of long term memory.
Using the conditional probabilities retrieved from the COM's the following two basic probabilities are computed for all states and lengths previously stored in the COM's each time a new input object is received:

1. Input Probability: the probability that an input object sequence beginning at time b will occur and span a state given that it will end at time t and that the state will occur;
2. Predict Probability: the probability that a state and length will occur given that a previous sequence of states and lengths have occurred.

Figure 4 shows the Input and Predict processes that compute these probabilities. Since mathematical details will be given subsequently, only an overview of these processes is discussed here. Probabilistic knowledge type P2, introduced above as the conditional probability of an object and the object's position, would be sufficient by itself for obtaining the needed input probability if enough training data could be guaranteed. However, in real recognition tasks the knowledge is too specific to stand alone. For example, if n-gram 'bacb' at a position of 5 objects from the beginning and 2 objects from the end in state S is highly likely to occur if state S
occurs then it is likely to occur in other positions when state S occurs given any noise or uncertainty in the input. But if the n-gram occurs in some yet unobserved position, probabilistic knowledge type P2 will give no support for state S occurring based on n-gram 'bacb'. For this reason n-gram frequencies for a state are learned independent of position as probabilistic knowledge type P1. Probabilistic knowledge type P2 is used only to estimate the probability of a state's beginning time given an ending time, the intervening object sequence, and the state. Thus, probabilistic knowledge type P2 segments the input and probabilistic knowledge type P1 identifies it.
Similarly, in the predict process probabilistic knowledge type P5, containing the probability that a state with a particular length (state-length pair) occurs given that a previous sequence of states and lengths have occurred, is very specific and would require a large amount of training and memory to be useful by itself. However it does supply the relationship between states and lengths (e.g., if state S occurs it will have length L with probability p, or if length L' occurs it will be state S' with probability p'). Probabilistic knowledge types P3 and P4 give predictions of state and length respectively based on more context and are combined with probabilities from probabilistic knowledge type P5 to find the best predictor of each state-length pair.
The two basic probabilities, Input and Predict, are used in the PLE decision process. From the input and predict probabilities at each object input time, t, the decision process computes the probability that a state and a length and the input object sequence spanning the length ending at t will occur given past context. These probabilities are combined over time using the Viterbi Algorithm to compute the k most likely state sequences ending at time t, for some k. The most likely state sequence ending at final time T is the recognized state sequence.
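The decision step can be sketched with toy stand-ins for the Input and Predict processes (everything here, including the uniform predict probability and the per-object emission model, is invented for illustration and is not the patent's implementation):

```python
import heapq

def decide(T, states, input_prob, predict_prob, k=3):
    """k-best Viterbi over segmentations (a simplification of the decision
    process): best[t] holds the k most likely state sequences for y(1:t)."""
    best = {0: [(1.0, ())]}
    for t in range(1, T + 1):
        options = []
        for b in range(1, t + 1):              # start time of the last state
            for p0, seq in best.get(b - 1, []):
                for x in states:
                    p = p0 * input_prob(b, t, x) * predict_prob(x, seq)
                    options.append((p, seq + (x,)))
        best[t] = heapq.nlargest(k, options)
    return best[T][0]                          # most likely sequence at T

# toy task: state A tends to emit 'a', state B tends to emit 'b'
y = "aab"
def input_prob(b, t, x):
    p = 1.0
    for obj in y[b - 1:t]:
        p *= 0.9 if (x == "A") == (obj == "a") else 0.1
    return p

prob, seq = decide(len(y), ["A", "B"], input_prob, lambda x, seq: 0.5)
print(seq)  # ('A', 'B')
```

On the toy input "aab" the decision process segments the sequence into state A spanning "aa" followed by state B spanning "b".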
The foregoing discussions of the use of probabilities in the PLE will now be expanded to include another important PLE
concept. In any human decision at least three factors come into play when assigning a confidence to the decision that is finally made:

1. How much do I trust my input information about the current circumstance?;
2. How well do the circumstances match the circumstances of previous decision experience I have had?; and
3. How much experience have I had and do I trust it?

The PLE attempts to use the last two factors to make a decision and to compute a rating of confidence in its decision.
The PLE assumes that the input object sequence is completely reliable and therefore does not use the first factor. It is understood that this constraint may not always be true. The second factor corresponds to the two basic correlation probabilities and the decision process.
The third factor is implemented by computing a 'coefficient of support' for every conditional probability obtained from the COM structures. For a particular condition (i.e., context) the coefficient measures how diverse the experience has been and varies between 0 and 1 as the support ranges from no support (experience shows random chance) to complete support (no other possible choice). In addition, the support coefficient measures the amount of experience under each condition.
The support coefficients are combined together throughout the computation of the final probability to obtain an overall confidence rating for the probability of the recognized state sequence. The confidence rating is passed on to the interface processor 20 shown in Figure 1 for the PLS array or to a learning supervision circuit which decides whether or not to learn to associate the output state sequence with the input object sequence. This decision is based on a threshold test of the confidence rating or external reinforcement. External reinforcement may be either from another PLE, as in an array, or from a human operator. The reinforcement may also include corrections to some of the state and boundary decisions made by the PLE. These corrections are passed on to the PLE databases before the COM's are updated by a command from the learning supervision circuit.
This type of correlation of conditional probabilities derived from learned experience allows the PLS to be general purpose in nature. To preserve this general purpose nature of the PLS, part of the representation for each specific task will be in the input preprocessor 22 designed for that recognition task and shown in Figure 1. This will allow the PLS to be independent of special purpose aspects of application problems since often the representations are problem-specific.

The following describes in detail the computations performed by the PLE to assign the most likely sequence of states to a sequence of objects given the probabilistic knowledge stored in the COM's. Referring to Figure 2 let y1, y2, ..., yT, or more compactly, y(1:T), be an input sequence of objects to the PLE during time units 1 through T, and x(1:R) be the output sequence of recognized states. Since an output state is represented by one or more input objects, R is less than or equal to T. Let b(1:R) be the mapping of input objects to output states such that br gives the time unit of the first object for state xr. Thus,

1 <= bi < bj <= T for 1 <= i < j <= R.   (2)

The task of the PLE is to find R output states x(1:R) with boundaries in the input sequence of b(1:R) for a given input object sequence y(1:T) such that P(x(1:R),b(1:R)|y(1:T)) is maximized.
By Bayes' rule

P(x(1:R),b(1:R)|y(1:T)) = P(x(1:R),b(1:R)) * P(y(1:T)|x(1:R),b(1:R)) / P(y(1:T))   (3)

But since y(1:T) is constant over any one recognition task we can obtain the same solution by maximizing the numerator:

P(y(1:T),x(1:R),b(1:R)) = P(x(1:R),b(1:R)) * P(y(1:T)|x(1:R),b(1:R)).   (4)

It is not computationally practical to compute equation 4 for all possible sets of [R, x(1:R), b(1:R)]. Therefore the restrictions of a more specific model are applied. The model used by the PLE is that the object sequences within a state, the sequences of states, the sequences of state lengths, and the sequences of state-length pairs represent probabilistic functions of Markov processes.
Specifically it is assumed that:

1. The conditional probability of object yt given y(t-c1:t-1) and state xr is independent of t, r, x(1:r-1), x(r+1:R), and any other y's for some context level c1 determined by the training of the PLE;
2. The conditional probability of state xr depends only on x(r-c2:r-1) for some context level c2;
3. The conditional probability of length Lr = b(r+1) - br depends only on L(r-c3:r-1) for some context level c3; and
4. The conditional probability of (xr,Lr) depends only on (x(r-c4:r-1),L(r-c4:r-1)) for some context level c4.
We are using what might be called variable order Markov processes since for each Markov STATE (i.e., object, output state, length or state-length pair) of these four Markov processes the context level c varies depending on the training of the PLE. The order of a Markov process is given by the number of previous STATES affecting the current STATE. We will use "STATE" in bold type to differentiate a Markov STATE from an output state of the PLE. Now, an nth order Markov process is equivalent to some first order Markov process with an expanded STATE space. In fact, the learning process using the COM of the PLE maintains such a STATE expansion automatically. For example each node on the COM object tree can be viewed as representing a STATE of the Markov chain encoding a particular n-gram of objects. The transitions to all possible next Markov STATES are given by the links to all sons of the node. These sons encode (n+1)-grams of objects. New Markov STATES are added as new n-grams are observed and deleted as transitions to them become improbable.
Given the above Markov assumptions, a simple method for finding the most likely set [R,x(l:R),b(l:R)] for a given y(l:T) is a dynamic programming scheme called the Viterbi Algorithm.
Let W(t,k) be the kth most likely sequence of states x(1:r) that match input y(1:t) up to time t. Let

G(t,k) = P(W(t,k)) = P(y(1:t),xk(1:r),b(1:r),b(r+1)=t+1)   (5)

denote the probability of the kth best sequence. The term b(r+1) is included to make explicit that the last state xr ends with object yt. The goal of the PLE is to determine W(T,1).
W(t,k) can be shown to be an extension of W(t',k') for some t'<t and k'. Specifically,

G(t,k) = kth-max [G(t',k') * P(y(br:t),xr,br,b(r+1)=t+1 | x(1:r'),b(1:r'))]   (6)

where r' is the number of states in the k'th best sequence ending at time t' and r = r'+1 is the number of states in the kth best sequence ending at time t. Computing the best k sequences rather than only the best sequence to time t permits a non-optimal W(t,k) to be part of the final sequence W(T,1) if W(t,k) is supported by the context of later states. Thus context learned in a forward direction as P(xr|x(r-c:r-1)) has an effect in a backward direction as P(x(r-c)|x(r-c+1:r)).
Equation 6 is computed in the decision process using

P(y(br:t),xr,br,b(r+1)=t+1 | x(r-c:r-1),b(r-c:r-1)) =
P(y(br:t),br | b(r+1)=t+1,xr) * P(xr,b(r+1)=t+1 | x(r-c:r-1),b(r-c:r-1)).   (7)

where r-1 has replaced r'. The left and right terms of the product are the Input and Predict probabilities respectively.
These were discussed and appear in Figure 4 as the output of the Input and Predict processes. Hereinafter we will discuss the computation of the Input and Predict probabilities, but first we will derive the support coefficient used in these computations.
In explaining the support coefficient we want to do four things: describe what a support coefficient is, show how it is used to compute a confidence rating, show how support coefficients are combined, and describe how support coefficients permit the PLE to weight its knowledge according to its experience.
Let p(1:n) be the estimated probability vector for the occurrence of n mutually exclusive events; the n probabilities sum to 1. The remaining uncertainty about what event will occur in terms of bits of information is given by the entropy measure:

H(p(1:n)) = - Σj pj log pj   (8)

Let us assume that the probability vector gives an accurate although perhaps incomplete description of reality concerning the n events.
The fraction of information supplied by the probability vector of the total information needed to correctly predict what event will occur is given by:

S(p) = 1 - H(p(1:n)) / log n   (9)

We call this fraction the support coefficient since it measures the amount of support given by the probability vector towards making a decision. The support coefficient is 1 when the probability of one of the events is 1. Its value is 0 when the probabilities of all events are equal.
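Equations (8) and (9) translate directly into code (a sketch; any log base works since the base cancels in the ratio):

```python
import math

def support(p):
    """Equation (9): the fraction of the information needed for a decision
    that the probability vector already supplies."""
    h = -sum(pj * math.log(pj) for pj in p if pj > 0)  # entropy, eq. (8)
    return 1.0 - h / math.log(len(p))

print(round(support([0.8, 0.2, 0.0, 0.0]), 2))  # 0.64
print(support([1.0, 0.0, 0.0, 0.0]))            # 1.0  (complete support)
print(round(support([0.25] * 4), 2))            # 0.0  (no support)
```

The value 0.64 for [0.8, 0.2, 0, 0] matches the example discussed later in the text.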
Let Pij represent the probability of event j obtained by some estimator i and Si be the support coefficient for the estimator computed from the probability vector it produced. We use Pij * Si as a measure of confidence in a decision that chooses event j as the one that will occur. The PLE uses this measure of confidence in four areas:

1. To choose between various conditional probabilities assigned to an object based on different levels of context;
2. To choose between state-based and length-based estimates of the predict probabilities;
3. To choose in the decision process the kth best state sequence ending at time t in equation 6. Thus equation 6 should be amended with 'where the kth max is determined by'

kth-max [G(t',k') * P(...) * S(G(t',k') * P(...))].   (10)

4. And to indicate to the learning supervisor the confidence it has in the final decision.

The combining of support coefficients from different probability vectors to obtain the support coefficient for their joint distribution is quite simple. It can be shown that for probability vectors p and q each of length n:

H(p*q) = H(p) + H(q).   (11)

From which it follows that:

S(p*q) = (S(p) + S(q)) / 2.   (12)

Extending this to more than two vectors gives a means by which the PLE can assign a support coefficient to the probability of a sequence of objects (or states) by averaging the support coefficients of the individual conditional probabilities of the objects (or states) making up the sequence.
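Equations (11) and (12) can be checked numerically for two independent probability vectors (a quick verification sketch):

```python
import math

def support(p):
    h = -sum(x * math.log(x) for x in p if x > 0)
    return 1.0 - h / math.log(len(p))

p, q = [0.9, 0.1], [0.6, 0.4]
joint = [pi * qj for pi in p for qj in q]   # joint distribution, length n*n

# equation (12): support of the joint equals the average of the supports
print(abs(support(joint) - (support(p) + support(q)) / 2) < 1e-9)  # True
```

The identity holds because the entropy of a joint distribution of independent vectors is the sum of the entropies (equation 11), while the normalizer log(n*n) is twice log n.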
A weakness of support coefficients as described to this point is that they do not measure the amount of experience behind the estimated probability vectors. For example, a probability vector of [0.8, 0.2, 0, 0] for four events has a support coefficient of 0.64 according to equation 9, which does not distinguish whether the probabilistic estimate is based on frequency counts of [4,1,0,0] or of [100,25,0,0]. Ideally the support coefficient would be higher in the second case than in the first. We will modify equations 8 and 9 to first put them in terms of frequency counts and then to incorporate the concept of experience.
The COM structures store frequency counts at each node for the object n-gram it represents (or state, length, or state-length pair n-grams, depending on the knowledge type). The conditional probability pj of the node is simply the ratio of the frequency count, fj, of the node to the frequency count, f'i, of its parent node. Thus,

pj = fj/f'i = fj / Σk fk   (13)

where the sum is over all sons of the parent node. Substituting equation 13 into equation 8, combining it with equation 9 and simplifying yields:

S(p) = 1 - [log f'i - (1/f'i) Σj=1..n fj log fj] / log N   (14)

where N is the number of possible nodes (e.g., equal to the number of unique objects) and n is the number of existing nodes.
We can now incorporate the concept of experience by assuming that all non-existing nodes (objects not yet seen in the current context) occur with some finite frequency, u. The larger u is the greater the frequency counts of the existing nodes must be to achieve the same support coefficient. On the other hand, greater experience raises the support coefficient even if it does not change the conditional probabilities.
Equation (14) now becomes

S(p) = 1 - [log f~i - (1/f~i)((N-n) u log u + Σj=1..n fj log fj)] / log N   (15)

where the frequency count of the parent node, f'i, has been replaced by:

f~i = f'i + (N-n) u.
The value of u does not have to be an integer. In the example given above, if u is set to 0.5 the support coefficient for the probabilities based on the low frequency counts of [4,1,0,0] drops to 0.29. The support coefficient for the frequency counts of [100,25,0,0] remains almost unchanged at 0.63.
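A sketch of equation (15) in terms of raw frequency counts. This is our reconstruction of the garbled original; the exact expression in the patent may differ slightly, so the second printed value is left unannotated:

```python
import math

def support_from_counts(freqs, N, u=0.5):
    """Equation (15), reconstructed: the N - n unseen events are assumed
    to occur u times each, so meagre experience is penalized."""
    existing = [f for f in freqs if f > 0]
    n = len(existing)
    total = sum(existing) + (N - n) * u            # the adjusted parent count
    h = math.log(total) - ((N - n) * u * math.log(u) +
                           sum(f * math.log(f) for f in existing)) / total
    return 1.0 - h / math.log(N)

print(round(support_from_counts([4, 1, 0, 0], N=4), 2))    # 0.29
print(round(support_from_counts([100, 25, 0, 0], N=4), 2))
```

With u = 0.5 this reconstruction reproduces the 0.29 figure for [4,1,0,0] quoted in the text, and gives a much higher value for [100,25,0,0], close to the unpenalized 0.64 of equation (9).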
The Input probability is given by

P(y(br:t),br | b(r+1)=t+1,xr) = P(y(br:t) | t,xr) * P(br | y(br:t),t,xr)   (16)

As summarized previously the first term, called probabilistic knowledge type P1, identifies the input and the second term, called probabilistic knowledge type P2, segments the input.
The first term is obtained from

P(y(br:t) | t,xr) = P(y_br | xr) * Π(br<i<=t) P(yi | y(i-ci:i-1),xr)

where c_br = 0 and

ci = argmax(0<j<=c(i-1)+1) [P(yi | y(i-j:i-1),xr) * S(p)]   (17)

Each P(yi | y(i-ci:i-1),xr) is a conditional probability obtained from the frequency counts stored at the nodes of the COM tree structure. The logs of frequency counts are also stored at each node to permit efficient probability computations in the log domain. However this is an implementation issue and the math has been left in the non-log domain. The value ci determines equivalently: the level in the tree, the context in which the object yi is matched, and the conditional probability used for the object. It is chosen to maximize the confidence value of the decision which as explained above is equal to the product of the probability and the support coefficient. Equation (17) shows that the context level for the next object is limited to be no more than one level deeper than the best level of the current object.
The derivation of the second term containing the second probabilistic knowledge type will now be discussed. The frequency counts stored at a node in the COM object tree for a particular object n-gram are first divided between the states that were learned when the n-gram appeared (knowledge type 1) and then further divided between the various positions in which the n-gram appeared within each state (knowledge type 2). The position is given by two values: the number of objects preceding the last object of the n-gram; and the number of objects following the last object plus 1. We call these values the "distance to beginning" and the "distance to ending". The sum of these values will always equal the length of the state (i.e., number of objects) at the time the pattern was learned.

Let fi and gi be the distance to beginning and ending respectively for the last object yi of n-grams appearing in patterns learned for state X. The probability that an object sequence y(b:e) is a complete pattern for state X (i.e., both begins and ends the state) is estimated by

P(y(b:e),b,e | X) = P(y_b,f_b=0,g_b=L | X) * Π(b<i<=e) P(yi,fi=i-b,gi=e-i+1 | y(i-ci:i-1),f(i-ci:i-1),g(i-ci:i-1),X)   (18)

where L=e-b+1 and ci takes on the same values as in equation (17). The conditional probabilities returned by the tree are bounded below by a small positive value since in many cases there will be no learned examples of a particular n-gram, in a particular position, for a particular state. The effect of this "deficiency" in the training is removed by replacing zero probabilities with small probabilities, and normalizing by length by taking the Lth root of the product. These calculations take place in the log domain.
We can now compute the second term of equation (16) with

P(br | y(br:t),t,xr) = P(y(br:t),br,t | xr) / Σ(br<=i<=t) P(y(i:t),i,t | xr)   (19)

The Predict probability (the second term of (7)) can be rewritten as

P(xr,Lr | x(r-c:r-1),L(r-c:r-1)) = P(xr,b(r+1)=t+1 | x(r-c:r-1),b(r-c:r-1))   (20)

where Lr = b(r+1) - br is the length of state xr. This probability can be computed based on state predictions from probabilistic knowledge type P3 as

P(xr,Lr | x(r-c:r-1),L(r-c:r-1)) = P(Lr | xr,x(r-c':r-1),L(r-c':r-1)) * P(xr | x(r-c:r-1))   (21)

or based on length predictions from probabilistic knowledge type P4 as

P(xr,Lr | x(r-c:r-1),L(r-c:r-1)) = P(xr | Lr,x(r-c':r-1),L(r-c':r-1)) * P(Lr | L(r-c:r-1))   (22)

For each state and length pair the method is chosen to give the maximum confidence value for the decision. The first term in each case is derived from the state-length COM tree structure, probabilistic knowledge type P5, by summing over all lengths for a given state or all states for a given length as appropriate.
In equation (21) the first term is derived from the equation:

P(Lj | xi,x(r-c':r-1),L(r-c':r-1)) = P(xi,Lj | x(r-c':r-1),L(r-c':r-1)) / ΣL P(xi,L | x(r-c':r-1),L(r-c':r-1))   (21A)

In equation (22) the first term is derived from the equation:

P(xi | Lj,x(r-c':r-1),L(r-c':r-1)) = P(xi,Lj | x(r-c':r-1),L(r-c':r-1)) / Σx P(x,Lj | x(r-c':r-1),L(r-c':r-1))   (22A)

The context level c' of this tree is typically less than the context levels of the state and length prediction trees.
If c = c' there is no advantage in combining in the state or length prediction information. In all trees the context level is chosen to maximize the confidence values of the conditional probabilities.
The following is a description of a physical embodiment of a PLE constructed in accordance with the present invention.
Referring to Figure 5 there is shown a block diagram of a PLE
comprising four major modules, namely input module 28, predict module 30, decide module 32 and output module 34. A comparator 36 is also provided having one input connected to a variable divider circuit 38. An OR-gate 40 is provided having one input connected to the output of the comparator 36 and a second input connected to receive a learn signal.
Input information, in the form of objects, enters the input module 28 at an input terminal 42. The input module uses the input objects to provide two kinds of probability information based on previously learned object sequences. At a terminal 44, the input module provides a signal PE corresponding to the probability that some state ends at the present time; this probability will be known as the End-Of-State Probability. At a terminal 46, module 28 provides a signal PI corresponding to the probability that an input object sequence beginning at a time b will occur and span a state given that it will end at time t and that the state will occur. This probability will be known as the Input Probability PI and is derived using the previously discussed equation (16).
The predict module 30 receives Options Information at an input 48 from the decide module 32 and uses this information in conjunction with other information to calculate the most likely state-length pairs and their probabilities, which probability information is provided as signal Pp at an output 50. The state-length pair probability information shall be known as Predict Probability and may be derived using the previously discussed equations (20), (21), and (22).
The decide module 32 includes a first input 52 for receiving the Input Probability signal PI from the input module 28 and a second input 54 for receiving the Predict Probability signal Pp from the predict module 30. The decide module combines the Input and the Predict Probabilities to form the previously mentioned Options Information which is provided at terminal 56. The Options Information is derived using the previously discussed equations (5), (6) and (7) implementing the Viterbi Algorithm.
The output module 34 includes two inputs, 58 and 60, for receiving the End-Of-State Probability signal PE and the Options Information respectively. The output module accumulates the Options Information and utilizes the End-Of-State Probability to decide when the Options Information is sufficiently good to output a recognized state and its preceding states at a final time T at a terminal 62. The output module also provides an output at a terminal 64 corresponding to the probability that the state occurred in a particular position; this signal is known as the Confidence Factor, derived using equation (9) and the probability vector as previously discussed. The output module provides one additional output at a terminal 66 corresponding to the positions of the recognized state and the preceding states.
The recognized states are fed back to the input and predict modules at terminals 68 and 70 respectively while the position information is fed back to the input and predict modules at terminals 72 and 74 respectively.
The Confidence Factor is applied to a second input of the comparator 36 so that when the level of the Confidence Factor exceeds a threshold established by the divider 38, a self learn signal is provided from the comparator 36 to an input of the OR-gate 40, which in response thereto provides an update signal to inputs 76 and 78 of the input and predict modules respectively. The second input of the OR-gate 40 is adapted to receive a learn signal. The learn signal may be from a human interface processor, such as the one shown in Figure 1. A human interface processor may be used to provide human initiated reinforcement when the PLE is learning. Such a processor and its reinforcing function may be used with a single PLE or a PLS, as shown in Figure 1. The learn signal may also come from another PLE when a PLS is used.
The OR-gate 40, in response to either a learn or a self learn signal, will cause an update signal to be provided to terminals 76 and 78 of the input and predict modules. When an update signal is received at terminals 76 and 78 of the input and predict modules, the current information being received from the output module and the objects that were received and stored in the input module will be accepted as true and recognized states. The state and position information will be used to update the COM's contained in the input and predict modules.
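The gating around comparator 36 and OR-gate 40 reduces to a small decision rule. The following minimal sketch assumes a hypothetical software interface: an update fires when an external learn signal is present or when the Confidence Factor clears the threshold set by the divider.

```python
# Sketch of the learn/self-learn gating: comparator 36 produces the
# self-learn signal, and OR-gate 40 combines it with an external learn
# signal to produce the update signal for terminals 76 and 78.

def update_signal(confidence, threshold, external_learn=False):
    """Self-learn when confidence clears the divider threshold;
    the OR-gate also passes an externally supplied learn signal."""
    self_learn = confidence > threshold
    return self_learn or external_learn
```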
Referring to Figure 6, there is shown a more detailed block diagram of the input module 28 of Figure 5. An object database 80 includes short term memories and a plurality of COM's for long term memory as previously discussed. The object database has input terminals 42, 68, 72 and 76 for receiving the object information, the state information, the position information and the update signal respectively. The received object information is stored in the short term memory and is used to search in the long term memories. A first COM, called an alltree, within object database 80 stores the previously described type 1 knowledge, namely the frequency of object n-grams forming parts of all possible states. From this COM we receive pointers to appropriate singletrees from the nodes of which we receive the first type of probabilistic knowledge P1 at terminal 82, namely the conditional probability that object Yt occurs given the previous object context or n-gram and state xi. This conditional probability, identified as P1, is independent of position within the state and is calculated for all significant states. The attribute lists of the singletrees are used to provide an output at terminal 84 corresponding to the conditional probability that object Yt with beginning position f and ending position g will occur given the previous object context or n-gram with consistent positioning and state xi. This conditional probability P2 is derived from the type 2 knowledge, namely the positional frequency of object n-grams within states, and is calculated for all significant states and times that such states could end.
The conditional probability P1 from terminal 82 is provided to a spanned-length function module 86 by way of an input terminal 88. Module 86 also receives at a terminal 90 a signal DB from an end-time state-length function module 92 having an output terminal 94. Said signal DB corresponds to the distance back (DB) to the begin time for each significant state-length. The spanned-length function module 86 stores the previously received P1 value and combines the currently received P1 value with the stored previous value. The sum is then stored and indexed by time to develop accumulated probabilities stored for various times. The module uses the DB input to calculate the difference between the accumulated probability at the current time and the accumulated probability at the time DB before the current time. This difference is then outputted at terminal 96 as a probability P6 that the sequence of objects between the begin time and the end time occurs for a given state. This probability is calculated using the previously discussed equation (17).
The end-time state-length function module 92 receives at terminal 98 the conditional probability P2 outputted from terminal 84. Module 92 outputs at terminal 100 the accumulated probability values as each end-time passes, said accumulated probability being the probability that the sequence back to some begin time occurs in the given state. This probability P7 is derived using the product found in equation (18), previously discussed.
The maximum value of the P7 probability will give the probability that some state ends at the present time. This maximum value of P7 is determined by the maximum value function module 102, which includes the output terminal 44 which provides the End-Of-State Probability PE.
A length normalizer module 104 receives the outputs of module 92 and provides at a terminal 106 a signal P8 corresponding to the probability that the begin time is correct given the sequence of objects, the end-time and the state. This probability is calculated in accordance with the previously discussed equation (19).
The outputs of modules 86 and 104 are multiplied together to provide at terminal 46 the previously discussed Input Probability calculated in accordance with equation (16), wherein the results of equations (17) and (19) are multiplied together.
The end-time state-length function module 92 receives the previously discussed second type of conditional probabilistic knowledge P2 from the object database 80. The positional information stored in the database provides values for the number of objects preceding the last object of the n-gram and the number of objects following the last object plus 1. These values are called the "distance to beginning" and the "distance to ending" and the sum of these values will always equal the length of the state at the time that the pattern was learned.
The probability P7 that an object sequence is a complete pattern for a state is determined by the product found within the previously discussed equation (18), which defines the signal provided at terminal 100 of module 92.
Referring to Figure 7, there is shown a detailed block diagram of the end-time state-length function module.
Conditional probabilistic information P2 arrives at terminal 98 of a decoder 108. The decoder functions to separate the information received at the input and to provide a timing signal each time a New P2 signal enters at terminal 98. The decoder outputs the timing signal called New P2, a DE signal, a DB signal and a probability value at terminals 110, 112, 114 and 116 respectively.
A matrix of all possible state-lengths would be exceedingly large and most nodes would have zero entries. Dealing with such a large matrix would tax the memory capacity of the database;
therefore, the significant states including their DB and DE
information will be indexed by a common parameter q. Thus, at a given object time the information provided by the decoder 108 includes the New P2 timing signal, DE (q, state), DB (q, state) and probability value (q, state).
The New P2 timing signal is provided to a counter 118, a multiplexer 120 and a latch 124. The counter 118 increments each time a New P2 signal is received and provides a current time output which is a modular number based on how many addresses are needed in the memory to span the distance from the beginning to the end of the longest state.
An adder 113 is provided to add the current time to DE to provide a signal corresponding to end-time, i.e. current time plus distance to end equals "end-time". The DE signal is added to the DB signal by another adder 115 to provide a signal corresponding to "length". The probability value is multiplied in a multiplier 117 by an accumulated probability value to provide a "product". The "end-time", "length" and "product" signals are applied to multiplexer 120 on the left side marked "1".
The top side of the multiplexer marked "0" receives three signals, two corresponding to 0 and one being the current time.
The multiplexer provides output signals on the right side marked "out". When the multiplexer receives a high level signal at a select input "S" from the New P2 signal, the multiplexer selects from the left side marked "1".

Memory 122 has an address input which receives a time signal corresponding to end-time or current time depending on the multiplexer output. Two data signals are inputted to the memory from the left by the multiplexer. The first data signal is either "length" or zero and the second is the "product" or zero depending upon whether New P2 is high or low.
When New P2 is high the multiplexer selects from the left and the memory address receives the value of the time when the state ends, i.e. "end-time". The memory stores the "length" (q, state) and the "product" (q, state). A read modify write operation is done on the memory to develop the accumulated value which is stored at the addressed "end-time".
When the New P2 signal goes low, the multiplexer selects from the top. Thus, the memory address input receives the current time and a second read modify write is done. Latch 124 is responsive to the low signal on New P2 so that the data values at the current time are latched and will be available at the outputs. The write operation provides a clearing of the information in the memory at the current time address since "0"s are written in. This prepares the memory for the next cycle of information. It should be noted that the data values were actually written for the "end-times" of the states so that when the current time reaches the "end-time" the "length" of the state is the same as the DB and the length information outputted from the memory corresponds to DB.
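The write-at-end-time, read-and-clear-at-current-time cycle described above can be sketched as follows. This is a hedged software analogue, not the hardware: the class and its accumulation-by-list are illustrative stand-ins for the read-modify-write on the modular (ring) memory 122.

```python
# Sketch of the end-time state-length memory cycle: entries are written
# at the address where a state would END, then read out and cleared when
# the current time reaches that address.

class EndTimeMemory:
    def __init__(self, size):
        self.size = size                        # must span the longest state
        self.cells = [[] for _ in range(size)]  # each cell: list of (length, product)

    def write(self, t, de, db, product):
        end_time = (t + de) % self.size         # current time + distance-to-end
        length = de + db                        # DB + DE equals the state length
        self.cells[end_time].append((length, product))

    def read_and_clear(self, t):
        addr = t % self.size
        out = self.cells[addr]
        self.cells[addr] = []                   # "0"s written in for the next cycle
        return out                              # at read time, length equals DB
```

Because values were written at end-times, a read at the current time returns exactly the state-lengths that end now, matching the observation that the length read out corresponds to DB.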
Referring now to Figure 8, there is shown a detailed block diagram of the spanned-length function module 86. As previously discussed, terminal 88 receives the conditional probabilistic information P1 which enters a decoder 126. The decoder provides a timing signal New P1 at an output 128 when new probability information P1 is entered. The New P1 signal is provided to a counter 130, a multiplexer 132, a delay circuit 134, a memory 136, a latch 138 and another latch 140. The counter 130, in response to the timing signal New P1, generates a current time signal in a manner similar to that generated in the end-time state-length function module. The current time signal is applied to one input of the multiplexer 132 and to an add circuit 129. The DB signal from the end-time state-length function module 92 is provided to a terminal 90 which is an inverted input of the add circuit. Thus, the add circuit effectively subtracts the DB from the current time to output a begin time signal which is provided to another input of the multiplexer.
The multiplexer is controlled by the New P1 signal to provide either the current time or the begin time to an address input of memory 136.
When the New P1 signal is high the multiplexer 132 selects from the left and the memory is in a write mode. At this time, the memory is addressed by the value of the current time signal provided from counter 130.
Decoder 126 provides at a second output 142 a signal corresponding to the conditional probability P1, which output is connected to a first input of a multiplier 144. The multiplier has a second input connected to its output through the delay circuit 134 so that the output of the multiplier corresponds to the product of the probability value P1 multiplied by the accumulated value resulting from the product of previously inputted probability values. The output of multiplier 144 is connected to an input of memory 136 and an input of latch 140, where the current accumulated value is latched for use during the next New P1 low period and stored in the memory 136 and indexed at the current time.
When the timing signal New P1 is low the multiplexer selects the begin time signal, which is the value of the count signal outputted from counter 130 minus the DB (q, state) received at terminal 90. At this time the memory 136 is reading and latch 138 holds the information corresponding to the accumulated value at the begin time that is read. The outputs of latches 138 and 140 are provided to an add circuit 141 with the output of latch 138 going to an inverted input 143 so that the output of the add circuit 141 on terminal 96 is really the difference between the inputs. Thus, the output at terminal 96 is the difference between the current accumulated value and the accumulated value that existed at a distance DB in the past, i.e. at the begin time. The output at terminal 96 is derived in accordance with the previously discussed equation (17) and is identified as P6. It must be kept in mind that we are only interested in the difference; it is assumed that the borrow is possible and the value of the data in the memory may be allowed to overflow without penalty, provided that the memory was all the same value when the first object arrived and that the size of the data is larger than the largest difference possible.
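The spanned-length computation above amounts to a running sum of log-probabilities whose difference over a window of DB objects recovers the probability of that span. The sketch below is an illustrative software analogue of equation (17)'s accumulate-and-subtract scheme (a growing list stands in for the modular memory, and explicit log arithmetic stands in for the hardware's accumulated products); names are hypothetical.

```python
# Sketch of the spanned-length function: accumulate log P1 values by
# time, then recover the probability of the last DB objects as the
# difference of two accumulated values.

import math

class SpannedLength:
    def __init__(self):
        self.accum = [0.0]           # accumulated log P1, indexed by time

    def push(self, p1):
        """Combine the current P1 with the stored accumulated value."""
        self.accum.append(self.accum[-1] + math.log(p1))

    def span_prob(self, db):
        """P6: probability of the sequence spanning the last db objects."""
        return math.exp(self.accum[-1] - self.accum[-1 - db])
```

For example, after pushing P1 values 0.5, 0.5 and 0.25, the probability of the last two objects is 0.5 * 0.25 = 0.125, obtained purely from the two accumulated values, which mirrors why the hardware can let the accumulator wrap without penalty.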

Referring to Figure 9, there is shown a detailed block diagram of the length normalizer 104, which receives the probability information P7 and the distance to begin (DB) information from module 92 and provides an output P8 in accordance with equation (19). Both the probability value P7 and the distance to begin value DB are provided to a module 146 which provides an output equivalent to X^Y, i.e. P7 raised to a power, in accordance with equation (18). The output of module 146 is provided to a module 144 where all probability values for each (q, state) are added together to provide an output that is indexed only by (q, state). In order to do this summation the value of the probability, which is a log function, must be exponentiated, after which the sum is taken. The log is then applied before the value is passed on. The outputs of modules 144 and 146 are provided to a module 148 where the output of module 146 is divided by the output of module 144. The result of this division is provided to an encoder 150 where it is encoded with the distance to begin or length information to provide an output P8 at terminal 106 in accordance with equation (19). The probability P8 is indexed by length and state, with the parameter q being eliminated by the encoder.
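The exponentiate-sum-relog step of the length normalizer is a standard log-domain normalization. A minimal sketch, assuming values are held as log-probabilities keyed by hypothetical (state, length) pairs:

```python
# Sketch of the length normalizer's summation (equation (19) in spirit):
# log-probabilities must be exponentiated before they can be summed;
# each value is then divided by the total, all in the log domain.

import math

def length_normalize(log_probs):
    """Map each log-probability to log(p / sum of all p)."""
    total = sum(math.exp(lp) for lp in log_probs.values())
    return {k: lp - math.log(total) for k, lp in log_probs.items()}
```

The division by the sum is what makes P8 a proper conditional probability over the competing begin times for a given end-time and state.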
Referring to Figure 10, there is shown a detailed block diagram of the predict module 30 including a length database 152, a both database 154 and a state database 156, all comprising separate COM's for storing the type 4, type 5 and type 3 knowledge respectively. A decoder 168 receives options information at terminal 48, state information at terminal 70, position information at terminal 74 and an update signal at terminal 78. The decoder separates the received information and provides at an output 170 length information and at an output 172 state information. The length database 152 receives the length information in the form of numbers of objects that make up states. The length information is organized by sequences of lengths for storage in a COM. The state database 156 receives state information which is organized by sequences of states in a COM. The both database 154 receives both length and state information which is stored in a COM and is organized by state-length pairs. The state database 156 provides a first output comprising all possible next states and their probabilities. The probabilities for the states are in the form of the previously discussed type 3 conditional probabilistic information P3. The output of database 156 is provided to an input of a multiplexer 158. The length database 152 provides an output comprising all possible next lengths and their probabilities in the form of the type 4 conditional probabilistic information P4. The length database output is connected to another input of multiplexer 158. Database 154 provides an output comprising all possible next state-length pairs and their probabilities, which probabilities are in the form of the type 5 conditional probabilistic information P5 previously discussed. The output information from databases 152 and 156 each include support coefficients corresponding to the usefulness of the probability information being provided by the respective database.
The support coefficients are derived using equation (9).
The P5 information from the both database 154 is provided to summing circuits 153 and 155 where the probabilities of all states and all lengths are summed respectively. This is the same type of summing that was done in the length normalizer. The outputs of the summing circuits 153 and 155 are provided to divider circuits 157 and 159 respectively. The P5 signal is also provided to dividers 157 and 159 so that the dividers each output a signal in accordance with equations (21A) and (22A) respectively.
The outputs of dividers 157 and 159 are provided to multipliers 161 and 163 respectively, as are the P4 and P3 signals. Multipliers 161 and 163 output signals to multiplexer 158 in accordance with equations (22) and (21) respectively.
The output information, including the probabilities and the support coefficients from multipliers 161 and 163, is provided to module 166 where the probabilities are multiplied by the support coefficients to provide confidence factors for both the state and length information provided to multiplexer 158. The confidence factor signals for state and length information are provided to a comparator 160. Comparator 160 provides an output depending upon which confidence factor is higher; this output controls the multiplexer 158 so that the output signal Pp is selected from either equation (21) or (22) depending upon which has the higher confidence factor.
Referring to Figure 11, there is shown a detailed block diagram of the decide module 32. The Input Probability PI, calculated in accordance with equation (16), is received at terminal 52 of a decoder circuit 174. The decoder circuit separates the Input Probability into its value and length and further provides a clock signal New PI when a new Input Probability arrives. The clock signal is provided to a counter 176, a multiplexer 178, an option memory 180 and a prediction memory 182. The clock signal New PI clocks the counter so that it provides an output corresponding to current time. The length information from the decoder 174 is provided to an inverting input of a summing circuit 175 where it is effectively subtracted from the current time to provide a signal corresponding to begin time which is provided to an input on the left or "1" side of the multiplexer 178.
Multiplexer 178 also receives on the left side past options information from the option memory 180. The top or "0" side of the multiplexer receives current time and options information. The outputs from the multiplexer 178 are provided to both the option memory 180 and the prediction memory 182. The prediction memory 182 is addressed by the time and the option data from the multiplexer.
Multiplexer 178 is clocked by signal New PI and first selects from the left when New PI is high, which causes the option memory to be addressed by the current time minus the length information, or the begin time. The output of the option memory is a list of options that were available at the addressed time or begin time. This list includes states, positions and probabilities at the addressed begin time. The output of the option memory is looped back and provided as a second input to the multiplexer 178 so that the past options data may be used to aid in the addressing of the prediction memory 182. The time from the multiplexer and the past options data both address the prediction memory for storage of the Predict Probability Pp data received at terminal 54. The Pp data consists of sets of states, lengths and probabilities.
The value information provided by decoder 174 containing the input probability, the past options data from the option memory 180, and the past predictions data from the prediction memory 182 are multiplied together in multiplier 183 to implement equation (6) using equation (7). The first term in equation (7) is the input probability; the second term of equation (7) is the past predictions. Equation (7) is the second term of equation (5) and the first term is the past options from the option memory.
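The three-way product above is the core of a Viterbi-style recursion: the score of a state ending at time t with length L combines the best option at the begin time (t - L), the Input Probability, and the Predict Probability. The sketch below is illustrative, with hypothetical data layouts, not the patented equations (5)-(7) themselves:

```python
# Illustrative Viterbi-style scoring for the decide module: each
# candidate (state, length) is scored by past_option(begin) * PI * Pp,
# and the maximum-N function keeps the N best options.

def best_options(t, past_options, candidates, n_best=3):
    """candidates: list of (state, length, p_input, p_predict) tuples.
    past_options: dict mapping begin time -> best accumulated score."""
    scored = []
    for state, length, p_in, p_pred in candidates:
        begin = t - length
        score = past_options.get(begin, 0.0) * p_in * p_pred
        scored.append((score, state, length))
    scored.sort(reverse=True)
    return scored[:n_best]
```

For example, at t = 5 a candidate of length 2 reaches back to the accumulated score at time 3, while a length-3 candidate reaches back to time 2, so a weaker per-step probability can still win if its begin-time option was stronger.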

The product of this multiplication is provided to a maximum N function circuit 184. The maximum N function circuit chooses the N best options based on their confidence levels. These options are outputted at terminal 56.
When the New PI timing signal goes low the multiplexer 178 selects from the top and the option memory is addressed at the current time. The write input of the option memory 180 is activated so that the current options from the maximum N function circuit 184 are written into option memory 180 through multiplexer 178 and are addressed at the current time. These current options and the current time also address the prediction memory 182, which is also write enabled by the New PI low, to store the Predict Probability data for future use.
The size of both of the memories 180 and 182 and the counter 176 must be sufficient to span the length of the longest state plus the options needed to specify their history.
Referring to Figure 12, there is shown a detailed block diagram of the output module 34. An option decoder 188 receives the options from the decide module 32 at terminal 60. The options, including states, lengths and probabilities, are stored in decoder 188 and are addressed by time. The output module uses the End-Of-State Probability signal, which is received at terminal 58, to decide when the data in the list of options is sufficiently good to output as a recognition of the input objects as a recognized state. The End-Of-State Probability is smoothed by circuit 186 to avoid false triggers. The smoothed function is provided to circuit 190 where the maximum value of the smoothed function is stored. A divider 192 is provided to select a predetermined percentage of the stored maximum value.
The outputs of the smoother 186 and the divider 192 are provided to a comparator 194 so that when the peak value of the signal coming from the smoother 186 drops below the predetermined percentage of the stored maximum value, comparator 194 provides an output to option decoder 188 to trigger said decoder.
The End-Of-State Probability signal PE is also provided to a maximum end-time circuit 196 which stores the time of the actual maximum End-Of-State Probability value. This maximum end-time value is also provided to option decoder 188 so that when the decoder 188 is triggered it may select the best options that were stored and addressed at the maximum end-time.
These best options signals are then provided as confidence, state and position output signals. At this time an output changed signal is provided by the decoder 188 which is used to reset the maximum function circuit 190 and the maximum end-time function circuit 196 so that a new maximum function and maximum end-time may be sought.
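The trigger logic described above, smooth the End-Of-State Probability, track its running maximum and the time it occurred, and fire when the smoothed signal falls below a fraction of that maximum, can be sketched as follows. The smoothing filter, its weight, and the divider fraction are illustrative assumptions; the hardware circuits 186, 190, 192, 194 and 196 are only modeled in spirit.

```python
# Sketch of the output-module trigger: exponential smoothing stands in
# for circuit 186, the running max for circuit 190, the fraction for
# divider 192, and the returned time for the maximum end-time circuit 196.

class OutputTrigger:
    def __init__(self, fraction=0.5, alpha=0.5):
        self.fraction = fraction  # divider 192's predetermined percentage
        self.alpha = alpha        # smoothing weight (circuit 186)
        self.smoothed = 0.0
        self.max_value = 0.0
        self.max_time = None

    def step(self, t, pe):
        """Feed one End-Of-State Probability sample; return the maximum
        end-time when a recognition should be emitted, else None."""
        self.smoothed = self.alpha * pe + (1 - self.alpha) * self.smoothed
        if self.smoothed > self.max_value:
            self.max_value, self.max_time = self.smoothed, t
        if self.smoothed < self.fraction * self.max_value:
            best_time, self.max_value, self.max_time = self.max_time, 0.0, None
            return best_time  # decoder 188 selects options stored at this time
        return None
```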
Referring to Figure 13, there is shown the use of a probabilistic learning system in a character recognition application. A video camera 200 is focused on a hand printed word 202, namely "HELLO", that is to be recognized. The signals from the video camera 200 are provided to a video processor 204 which provides outputs to a video monitor 206. A workstation 208 may be provided, particularly for use during the learning mode, for providing external reinforcement. The PLS is similar to that shown in Figure 1 in that it comprises an array 12 consisting of eight individual PLE's 14a to 14h, an input processor 22, an output processor 16 and an interface circuit 20.
The input to the PLS is terminal 11 from the video processor while the PLS output 13 is passed through the user interface 20 and on to the video processor 204. The video representation of the hand printed word "HELLO" is shown in the upper portion of the video monitor at 210. The video representation is digitized as shown at 212 on the video monitor.
The digitized word "HELLO" is shown more clearly in Figure 14, where each vertical slice is a time frame containing a predetermined number of pixels of information, for example 10 pixels as shown in Figure 14. The digitized word is scanned from left to right and input objects in the form of time slices containing 10 pixels each are provided to the PLS. The sequences of objects could be provided to a single high capacity PLE or to a PLS comprised of an array as in the case of Figure 13.
The power of using a PLS comprising an array may be illustrated by referring to Figures 1 and 13. Inputting objects containing 10 pixels presents a rather complex recognition problem which would require a PLE with considerable capacity.
The array provides the advantage of parallelism to speed up the execution of the recognition task by permitting the input information to be partitioned between a plurality of individual PLE's. The information is partitioned in an overlapping or redundant manner to enhance the reliability of the system. Due to the redundant overlapping a breakdown in a portion of the system will not affect the overall system operation.

Referring to Figure 13, there is shown that the input preprocessor 22 receives 10 pixels of information and partitions these pixels so that pixels 1 to 4 are provided to the first PLE 14a, pixels 3 to 6 are provided to PLE 14b, pixels 5 to 8 are provided to PLE 14c and pixels 7 to 10 are provided to PLE 14d.
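The overlapping partition above can be expressed in a few lines. This sketch simply reproduces the stated assignment, four 4-pixel groups with two pixels of overlap, feeding PLE's 14a to 14d; the function name is hypothetical.

```python
# Sketch of the input preprocessor's overlapping partition: one
# 10-pixel time slice yields pixels 1-4, 3-6, 5-8 and 7-10.

def partition(slice10):
    """Split one 10-pixel slice into four overlapping 4-pixel groups."""
    assert len(slice10) == 10
    return [slice10[i:i + 4] for i in (0, 2, 4, 6)]
```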
Each PLE performs a recognition function on the inputs it receives to identify output states in the form of a certain type of feature. This is not to be confused with a feature extraction step but is a true pattern classification step and illustrates the generalized aspect of the PLE which allows it to recognize and learn anything, such as an abstract feature, as opposed to such things as letters, numbers, etc. It might be said that the features that are recognized are slices of output states. Thus, the first bank of four PLE's, i.e. PLE's 14a to 14d, receives a total of 16 bits of information, 4 bits to each PLE in overlapping relationship. Each PLE in the first bank outputs 4 bits identifying a particular feature out of 16 possible features.
The 4 bit feature representation outputted from PLE 14a of the first bank is provided to the inputs of the PLE's of the second bank, i.e. 14e to 14h. In like manner, the 4 bit representation of a feature at the output of the second PLE 14b of the first bank is provided to the inputs of each PLE of the second bank. Thus, each PLE of the second bank receives four 4 bit feature inputs. Each PLE in the second bank provides a 4 bit output which comprises one fourth of a 16 bit redundantly coded name for a recognized character or output state. Thus, the recognition task is simplified in that each PLE in the second bank must only recognize the first 4 bits of a 16 bit coded name for a character. The output processor 16 receives the 16 bit redundantly coded representation of the output state and reduces the 16 bits to 8 bits using a BCH decoding system. The 8 bit output provided at 13 is in the form of an ASCII code for a character recognition system. The ASCII code has the capability of representing 256 possible letters or characters.
By using the 16 to 8 bit reduction, significant data overlap is provided so that many of the 16 bits could be in error or missing and the output would still be correct. Thus, one PLE could fail and the system would continue to function without error.
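A real BCH decoder is not reproduced here. As a deliberately simplified stand-in, the sketch below uses a repetition code with erasure handling to illustrate the same property the text describes: a redundantly coded 16 bit word can lose the bits contributed by one PLE and still yield the correct 8 bit output.

```python
# Stand-in for the 16-to-8-bit redundancy (NOT BCH): each of the 8 data
# bits is sent twice; a missing position (None) falls back to its copy.

def encode(bits8):
    """Duplicate 8 data bits into a 16 bit redundantly coded word."""
    return bits8 + bits8

def decode(bits16):
    """Recover 8 bits; a position marked None uses its redundant copy."""
    first, second = bits16[:8], bits16[8:]
    return [a if a is not None else b for a, b in zip(first, second)]
```

Unlike this erasure-only sketch, an actual BCH code can also correct bits that are silently flipped, which is why the patent's system tolerates errors as well as missing bits.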


Training of the array takes place in a manner similar to that of an individual PLE in that external reinforcement learn signals may be provided through a human interface. In addition, the PLE's of an array are interconnected so that the self learn signal from an individual PLE is provided to the learn input of its source PLE's. Thus, when a PLE of the second bank provides an output with a high confidence level, this indication will be relayed back to the source PLE's in the first bank. All of this training is of course in addition to the internal self learning of each individual PLE.
The array shown in Figures 1 and 13 comprises a 4 by 2 arrangement. The next size array would be 16 by 3 for a total of 64 PLE's comprising the array. Larger arrays may be built using this progression in size; however, while a larger array would provide for more parallelism in its operation and greater reliability, its speed would be reduced due to the number of PLE's through which the data must flow from the input to the output.
Figure 14 shows an expanded concept of using a plurality of PLS's wherein pixels 216 are first used to recognize characters 218 as output states from PLS 12. The characters 218 may become input objects to a PLS 220 which is used to recognize words 222 as output states. The words 222 become input objects to a PLS 224 to recognize scripts 226 as output states.
It should also be remembered that the PLS is not limited to use in an optical character reader but rather may be used in many other applications, such as voice recognition. The PLS is appropriate for use wherever sequential patterns are to be recognized.

Claims (41)

THE EMBODIMENTS OF THE INVENTION IN WHICH AN EXCLUSIVE PROPERTY
OR PRIVILEGE IS CLAIMED OR DEFINED AS FOLLOWS:
1. A probabilistic learning element that sequentially receives objects and outputs sequences of recognized states, said learning element comprising:
means for sequentially receiving objects;

means for storing received object information, including, said received objects, and sequences of received objects;

means for storing items of previously learned information, said items including, sequences of states, states contained in said sequences of states, objects contained in said states contained in said sequences of states, sequences of objects contained in said states contained in said sequences of states, positional information for each object contained in said states contained in said sequences of states, and predetermined types of knowledge relating to said previously learned information, whereby received object information, relating to received objects, is stored as well as previously learned information;
means for correlating said received object information with said previously learned information for assigning conditional probabilities to possible sequences of recognized states;
means, responsive to said conditional probabilities of possible sequences of recognized states, for determining a most likely sequence of recognized states;
means, responsive to said previously learned information, for detecting that a state has ended and for providing an end of state signal; and means, responsive to said end-of-state signal, for out-putting said most likely sequence of recognized states as a recog-nized state sequence.
2. A probabilistic learning element as claimed in Claim 1, wherein said positional information stored for each object includes the object's distance to begin and distance to end of a state.
3. A probabilistic learning element as claimed in Claim 2, wherein said items of previously learned information may occur a plurality of times and said predetermined types of knowledge include the number of occurrences of each said item of stored previously learned information.
4. A probabilistic learning element as claimed in Claim 3, wherein said states each have a length and said predetermined types of knowledge further include the length of each state, the number of occurrences of each length, sequences of state lengths, the number of occurrences of each sequence of state lengths, state-length pairs, the number of occurrences of each state-length pair, sequences of state-length pairs and the number of occur-rences of each sequence of state-length pairs.
5. A probabilistic learning element as claimed in Claim 4, wherein the means for correlating includes a first means for determining the conditional probabilities that possible states will span an object sequence having particular begin and end times and second means for determining the conditional probabili-ties of possible state-length pairs given the previous state-length pair context.
6. A probabilistic learning element as claimed in Claim 5, additionally comprising means responsive to the conditional prob-abilities that possible states will span an object sequence having particular begin and end times and the conditional probabilities of possible state-length pairs given the previous state-length pair context to implement an algorithm known as the Viterbi Algo-rithm and provide probabilities of possible state-length pairs that span a particular object sequence given the previous state-length pair context of each possible state-length pair.
7. A probabilistic learning element as claimed in Claim 5, wherein the first means is responsive to two types of conditional probability signals, a first type signal corresponding to the conditional probabilities of object sequences occurring within a state given the state and a second type signal corresponding to the conditional probabilities of states with a particular begin time given an end time, object sequence and a state.
8. A probabilistic learning element as claimed in Claim 7, wherein the first type probability signal is derived from the conditional probabilities of an object occurring given the pre-vious object context and state which probabilities are calculated from the stored learned information relating to objects and object occurrences.
9. A probabilistic learning element as claimed in Claim 7, wherein the second type probability signal is derived from con-ditional probabilities of an object occurring in a particular position in a state given the previous object context its posi-tion and state which probabilities are derived from the stored learned information relating to objects, object occurrences and the object positional information.
10. A probabilistic learning element as claimed in Claim 5, wherein the second means for determining the conditional probabil-ities of state length pairs is responsive to the stored learned information relating to previously learned states and their occur-rences, lengths of previously learned states and their occurrences, state-length pairs from previously learned states and their occur-rences, sequences of previously learned states and their occur-rences, sequences of lengths of previously learned states and their occurrences, sequences of state-length pairs and their occurrences.
11. A probabilistic learning element as claimed in Claim 1, wherein the means for storing are adapted to store the information in accordance with the context in which the stored information statistically occurred, whereby from any stored information the stored information which statistically occurs next in context is directly accessible and the conditional probabilities may be easily derived from the contextually stored information.
12. A probabilistic learning element that sequentially receives objects and outputs sequences of recognized states and includes context driven searching, said learning element comprising means for sequentially receiving objects;
short term memory means for storing, in sequential context, said received objects;
a context organized memory means comprising a plurality of tree structures for storing items of previously occurring learned information, said items including, states and the number of previous occurrences of said states, said states each having a length, objects contained in said states and the number of previous occurrences of said objects, lengths of said states and the number of occur-rences of said state lengths, and state-length pairs in said states and the number of occurrences of said state-length pairs, said items of stored information being stored in accordance with the context in which the items of stored information statistically occurred, whereby from any items of stored information an item of stored information which statistically occurs next in context is directly accessible;

said tree structures used to store the object information include an alltree structure and a plurality of singletree structures, the alltree structure stores the contextual occurrences of all objects received by the probabilistic learning element and at each node of the alltree there is provided an attribute list which refers to singletrees that include the same object context as the node of the alltree, a singletree is provided for each said state, whereby searching is facilitated by using the alltree as a pointer to the less complex singletrees;

means for correlating said received objects stored in the short term memory means with information stored in the context organized memory means, said correlation being facilitated by use of the context of said received object stored in the short term memory means as a pointer to the context of the statistically stored information in the context organized memory means, said correlating means assigning conditional probabilties to possible sequences of recognized states;

means, responsive to said conditional probabilities, for determining a most likely state sequence;

means, responsive to the stored information, to deter-mine a probability of an end of a state; and means, responsive to the end-of-state probability, for outputting said most likely state sequence as a sequence of recognized states.
13. A probabilistic learning element as claimed in Claim 12, wherein each singletree contains object information for a state with each node representing an object and having an attri-bute list, said attribute list including information relating to said object including the objects distance from said states beginning, the objects distance to said states end and the number of times that the object appeared at that particular position within a state.
14. A probabilistic learning element that sequentially receives objects and outputs sequences of recognized states, said learning element comprising:
means for sequentially receiving objects;
means for storing, said received objects, sequences of received objects, sequences of previously learned states, states contained in said sequences of previously learned states, objects contained in said states contained in said sequences of previously learned states, sequences of said objects contained in said states contained in said sequences of previously learned states, and predetermined types of knowledge relating to, said sequences of previously learned states, states contained in said sequences of previously learned states, objects contained in said states contained in said sequences of previously learned states, and sequences of said objects contained in said states contained in said sequences of previously learned states, so that current object information relating to said received objects and sequences of objects is stored as well as statistical information relating to said previously learned sequences of states, said states, objects and sequences of objects contained in said previously learned sequences of states;

means for correlating said current object information with stored statistical information relating to previously learned sequences of states for assigning conditional probabilities to possible sequences of recognized states;

means, responsive to said conditional probabilities of possible sequences of recognized states, for determining a most likely state sequence;

means, responsive to the stored current object information and statistical information, to determine a probability of an end of a state;

means, responsive to the probability of an end of a state, for outputting the most likely state sequence as a sequence of recognized states; and means for providing a rating of confidence in said sequence of recognized states said means including means for deriving support coefficients relating to how much information was available when calculating the conditional probabilities, said confidence rating being a function of the conditional probabilities and the support coefficients for the conditional probabilities used to determine the most likely state sequence.
15. A probabilistic learning element that sequentially receives objects and outputs sequences of recognized states said learning element comprising:
means for sequentially receiving objects;

short term memory means for storing said received objects in sequential context;

context organized memory means, for storing items of previously occurring learned information, including a plurality of tree structures, each tree having a plurality of connected nodes, said plurality of tree structures including, an alltree structure having objects stored at the nodes of the tree along with the number of previous occurrences of each object, said alltree storing all objects contained in previously learned states in context so that from any stored object, objects which statistically occur next in context are directly accessible, each node of the alltree including an attribute list pointing to nodes of singletrees having objects stored therein in the same context as the context of the alltree node, a plurality of singletrees, one for each previously learned state, each node of the singletrees storing an object in context along with the number of previous occurrences of said object and an attribute list including positional information relating to the position of the object within the state and the number of previous occurrences of the object in that position, a tree structure for storing learned states in context so as to include states, the number of previous occurrences of each state, sequences of states and the number of previous occurrences of each state sequence, a tree structure for storing lengths of learned states in context so as to include state lengths, the number of previous occurrences of each state length, sequences of state lengths and the number of previous occurrences of each state length sequence, and a tree structure for storing state-length pairs of learned states in context so as to include the num-ber of previous occurrences of each state-length pair, sequences of state-length pairs and the number of pre-vious occurrences of each state-length pair sequence;
means for correlating said received objects stored in the short term memory means with information stored in the context organized memory means, said correlation being facilitated by use of the context of said received objects stored in the short term memory means as a pointer to the context of the stored information in the context organized memory means, said correlating means assigning conditional probabilities to possible sequences of recognized states;
means, responsive to said conditional probabilities, for determining a most likely state sequence;
means, responsive to the stored information, to deter-mine a probability of an end of a state; and means, responsive to the end of-state probability, for outputting said most likely state sequence as a sequences of recognized states.
16. A probabilistic learning element as claimed in Claim 15, wherein the means for correlating comprises:
means for correlating the object information stored in the context organized memory means with the object information stored in the short term memory means for determining conditional probabilities that possible states will span an object sequence having a particular begin time and end time;
means for correlating the state, length and state-length pair information stored in the context organized memory means for determining conditional probabilities of state-length pairs given the previous state-length pair context; and means, responsive to the two previously mentioned con-ditional probabilities, for implementing an algorithm known as the Viterbi Algorithm and for providing probabilities of possible states, with a particular length that spans an object sequence given the previous state-length pair context.
17. A probabilistic learning element as claimed in Claim 15, wherein the means responsive to stored information comprises means responsive to the object information stored in the short term memory means and the object information stored in the context organized memory means for providing a probability signal corres-ponding to a probability that a state has ended.
18. A probabilistic learning element as claimed in Claim 15, additionally comprising means for providing a rating of con-fidence in said recognized state sequence said means including means for deriving support coefficients relating to how much information was available when calculating the conditional prob-abilities, said confidence rating being a function of the condi-tional probabilities and the support coefficients for the condi-tional probabilities used to determine the most likely state sequence.
19. A probabilistic learning element as claimed in Claim 1, additionally comprising means for providing a rating of confidence in said sequence of recognized states.
20. A probabilistic learning element as claimed in Claim 19, additionally comprising means, responsive to said rating of confidence, for causing said means for storing items of previously learned information to store the recognized state sequence, the objects, sequences of objects and states forming the recognized state sequence and the predetermined types of knowledge relating to the objects, sequences of objects, states and sequences of states forming said recognized state sequence as items of pre-viously learned information when the rating exceeds a predeter-mined threshold level.
21. A probabilistic learning element as claimed in Claim 1, additionally comprising learning supervision means, responsive to external reinforcement signals, for causing said means for storing items of previously learned information to store the recognized state sequence, the objects, sequences of objects and states forming the recognized state sequence and the predeter-mined types of knowledge relating to the objects, sequences of objects, states and sequences of states forming the recognized state sequence as items of previously learned information.
22. A probabilistic learning element as claimed in Claim 1, additionally comprising:

means for providing a rating of confidence in said sequence of recognized states;
learning supervision means adapted to receive said rating of confidence and an external reinforcement signal, said means being responsive to the rating of confidence of the recog-nized state sequence and the external reinforcement signal for providing an output signal when either the rating of confidence exceed a predetermined threshold level or an external reinforce-ment signal is received; and means responsive to the output signal from the learning supervision means to cause said means for storing items of pre-viously learned information to store the recognized state sequence, the objects, sequences of objects, and states forming the recog-nized state sequence and the predetermined types of knowledge relating to the objects, sequences of objects, states and sequences of states forming said sequence of recognized states as items of previously learned information.
23. A probabilistic learning element as claimed in Claim 21, additionally comprising means for correcting a recognized state sequence prior to initiating an external reinforcement signal.
24. A probabilistic learning element as claimed in Claim 22, additionally comprising means for correcting a recognized state sequence prior to initiating an external reinforcement signal.
25. A probabilistic learning element as claimed in Claim 12, additionally comprising means for providing a rating of con-fidence in said sequence of recognized states.
26. A probabilistic learning element as claimed in Claim 25, additionally comprising means responsive to said rating of confidence to cause said context organized memory to store the objects and states forming the recognized state sequence, the lengths and state length pairs of said states and the predeter-mined types of knowledge relating to the objects, states, state lengths and state-length pairs from said recognized state sequence as items of previously learned information when the rating exceeds a predetermined threshold level.
27. A probabilistic learning element as claimed in Claim 12, additionally comprising learning supervision means responsive to external reinforcement signals to cause said context organized memory to store the objects and states forming the recognized state sequence the lengths and state-length pairs of said states and the predetermined types of knowledge relating to the objects, states, state lengths and state-length pairs from the recognized state sequence as items of previously learned information.
28. A probabilistic learning element as claimed in Claim 12, additionally comprising;
means for providing a rating of confidence in said sequence of recognized states;
learning supervision means adapted to receive said rating of confidence and an external reinforcement signal, said means being responsive to the rating of confidence of the recog-nized state sequence and the external reinforcement signal for providing an output signal when either the rating of confidence exceeds a predetermined threshold level or an external reinforce-ment signal is received; and means responsive to the output signal from the learning supervision means to cause said context organized memory to store the objects and states forming the recognized state sequence, the lengths and state-length pairs of said states and the predeter-mined types of knowledge relating to the objects, states, state lengths and state-length pairs from said sequence of recognized states as items of previously learned information.
29. A probabilistic learning element as claimed in Claim 27, additionally comprising means for correcting a recognized state sequence prior to initiating an external reinforcement signal.
30. A probabilistic learning element as claimed in Claim 28, additionally comprising means for correcting a recognized state sequence prior to initiating an external reinforcement signal.
31. A probabilistic learning element as claimed in Claim 14, additionally comprising means responsive to said rating of confidence to cause said means for storing to store the recognized state sequence, the objects, sequences of objects and states form-ing the recognized state sequence and the predetermined types of knowledge relating to the objects, sequences of objects, states and sequences of states forming said recognized state sequence as items of previously learned information when the rating exceeds a predetermined threshold level.
32. A probabilistic learning element as claimed in Claim 14, additionally comprising learning supervision means responsive to external reinforcement signals to cause said means for stor-ing to store the recognized state sequence, the objects, sequen-ces of objects and states forming the recognized state sequence and the predetermined types of knowledge relating to the objects, sequences of objects, states and sequences of states forming the recognized state sequence as items of previously learned infor-mation.
33. A probabilistic learning element as claimed in Claim 14, additionally comprising:
learning supervision means adapted to receive said rating of confidence and an external reinforcement signal, said means being responsive to the rating of confidence of the recog-nized state sequence and the external reinforcement signal for providing an output signal when either the rating of confidence exceeds a predetermined threshold level or an external reinforce-ment signal is received; and means responsive to the output signal from the learning supervision means to cause said means for storing to store the recognized state sequence, the objects, sequences of objects, and states forming the recognized state sequence and the predeter-mined types of knowledge relating to the objects, sequences of objects, states and sequences of states forming said sequence of recognized states as items of previously learned information.
34. A probabilistic learning element as claimed in Claim 32, additionally comprising means for correcting a recognized state sequence prior to initiating an external reinforcement signal.
35. A probabilistic learning element as claimed in Claim 33, additionally comprising means for correcting a recognized state sequence prior to initiating an external reinforcement signal.
36. A probabilistic learning element as claimed in Claim 15, additionally comprising means for providing a rating of con-fidence in said sequence of recognized states.
37. A probabilistic learning element as claimed in Claim 36, additionally comprising means responsive to said rating of confidence to cause said context organized memory to store the objects and states forming the recognized state sequence, the lengths and state-length pairs of said states and the predeter-mined types of knowledge relating to the objects, states, state lengths and state-length pairs from said recognized state sequence as items of previously learned information when the rating exceeds a predetermined threshold level.
38. A probabilistic learning element as claimed in Claim 15, additionally comprising learning supervision means responsive to external reinforcement signals to cause said context organized memory to store the objects and states forming the recognized state sequence the lengths and state-length pairs of said states and the predetermined types of knowledge relating to the objects, states, state lengths and state length-pairs from the recognized state sequence as items of previously learned information.
39. A probabilistic learning element as claimed in Claim 15, additionally comprising:
means for providing a rating of confidence in said sequence of recognized states;
learning supervision means adapted to receive said rating of confidence and an external reinforcement signal, said means being responsive to the rating of confidence of the recog-nized state sequence and the external reinforcement signal for providing an output signal when either the rating of confidence exceeds a predetermined threshold level or an external reinforce-ment signal is received; and means responsive to the output signal from the learning supervision means to cause said context organized memory to store the objects and states forming the recognized state sequence, the lengths and state-length pairs of said states and the predeter-mined types of knowledge relating to the objects, states, state lengths and state-length pairs from said sequence of recognized states as items of previously learned information.
40. A probabilistic learning element as claimed in Claim 38, additionally comprising means for correcting a recognized state sequence prior to initiating an external reinforcement signal.
41. A probabilistic learning element as claimed in Claim 39, additionally comprising means for correcting a recognized state sequence prior to initiating an external reinforcement signal.
CA000472104A 1984-01-16 1985-01-15 Probabilistic learning element Expired CA1232358A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US571,027 1984-01-16
US06/571,027 US4620286A (en) 1984-01-16 1984-01-16 Probabilistic learning element

Publications (1)

Publication Number Publication Date
CA1232358A true CA1232358A (en) 1988-02-02

Family

ID=24282033

Family Applications (1)

Application Number Title Priority Date Filing Date
CA000472104A Expired CA1232358A (en) 1984-01-16 1985-01-15 Probabilistic learning element

Country Status (6)

Country Link
US (1) US4620286A (en)
JP (1) JPS60230283A (en)
AU (1) AU3666884A (en)
BR (1) BR8500191A (en)
CA (1) CA1232358A (en)
ES (1) ES8608195A1 (en)

Families Citing this family (59)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0734162B2 (en) * 1985-02-06 1995-04-12 株式会社日立製作所 Analogical control method
US4805091A (en) * 1985-06-04 1989-02-14 Thinking Machines Corporation Method and apparatus for interconnecting processors in a hyper-dimensional array
JPH0743722B2 (en) * 1985-08-02 1995-05-15 株式会社東芝 Inductive reasoning device
GB8617444D0 (en) * 1985-08-16 1990-08-08 Wang Laboratories Expert system apparatus and methods
US4964060A (en) * 1985-12-04 1990-10-16 Hartsog Charles H Computer aided building plan review system and process
JPH0642268B2 (en) * 1986-10-31 1994-06-01 日本電気株式会社 Character recognition device
US4943932A (en) * 1986-11-25 1990-07-24 Cimflex Teknowledge Corporation Architecture for composing computational modules uniformly across diverse developmental frameworks
US4905162A (en) * 1987-03-30 1990-02-27 Digital Equipment Corporation Evaluation system for determining analogy and symmetric comparison among objects in model-based computation systems
US4881178A (en) * 1987-05-07 1989-11-14 The Regents Of The University Of Michigan Method of controlling a classifier system
US4891766A (en) * 1987-06-15 1990-01-02 International Business Machines Corporation Editor for expert system
AU613062B2 (en) * 1987-07-10 1991-07-25 Formulab International Pty Ltd A cognizant system
US4884218A (en) * 1987-10-01 1989-11-28 International Business Machines Corporation Knowledge system with improved request processing
US5014220A (en) * 1988-09-06 1991-05-07 The Boeing Company Reliability model generator
US5292995A (en) * 1988-11-28 1994-03-08 Yamaha Corporation Method and apparatus for controlling an electronic musical instrument using fuzzy logic
US5077677A (en) * 1989-06-12 1991-12-31 Westinghouse Electric Corp. Probabilistic inference gate
US5119425A (en) * 1990-01-02 1992-06-02 Raytheon Company Sound synthesizer
US5130936A (en) * 1990-09-14 1992-07-14 Arinc Research Corporation Method and apparatus for diagnostic testing including a neural network for determining testing sufficiency
WO1992007525A1 (en) * 1990-10-31 1992-05-14 Baxter International Inc. Close vascularization implant material
EP0494788B1 (en) * 1991-01-11 1996-10-02 Canon Kabushiki Kaisha Fault diagnosis using simulation
US5418954A (en) * 1991-06-19 1995-05-23 Cadence Design Systems, Inc. Method for preparing and dynamically loading context files
US5307445A (en) * 1991-12-02 1994-04-26 International Business Machines Corporation Query optimization by type lattices in object-oriented logic programs and deductive databases
US5365423A (en) * 1992-01-08 1994-11-15 Rockwell International Corporation Control system for distributed sensors and actuators
WO1993018483A1 (en) * 1992-03-02 1993-09-16 American Telephone And Telegraph Company Method and apparatus for image recognition
US5325466A (en) * 1992-05-07 1994-06-28 Perceptive Decision Systems, Inc. System for extracting knowledge of typicality and exceptionality from a database of case records
US5586215A (en) * 1992-05-26 1996-12-17 Ricoh Corporation Neural network acoustic and visual speech recognition system
US5621858A (en) * 1992-05-26 1997-04-15 Ricoh Corporation Neural network acoustic and visual speech recognition system training method and apparatus
US5598511A (en) * 1992-12-28 1997-01-28 Intel Corporation Method and apparatus for interpreting data and accessing on-line documentation in a computer system
JP3218107B2 (en) * 1993-01-29 2001-10-15 ローム株式会社 Fuzzy neuron
JPH07168913A (en) 1993-12-14 1995-07-04 Chugoku Nippon Denki Software Kk Character recognition system
JPH07210190A (en) * 1993-12-30 1995-08-11 Internatl Business Mach Corp <Ibm> Method and system for voice recognition
US5524169A (en) * 1993-12-30 1996-06-04 International Business Machines Incorporated Method and system for location-specific speech recognition
US5689696A (en) * 1995-12-28 1997-11-18 Lucent Technologies Inc. Method for maintaining information in a database used to generate high biased histograms using a probability function, counter and threshold values
US5796922A (en) * 1996-03-29 1998-08-18 Weber State University Trainable, state-sampled, network controller
US6041172A (en) * 1997-11-26 2000-03-21 Voyan Technology Multiple scale signal processing and control system
JPH11250030A (en) * 1998-02-27 1999-09-17 Fujitsu Ltd Evolution type algorithm execution system and program recording medium therefor
EP1264253B1 (en) * 2000-02-28 2006-09-20 Panoratio Database Images GmbH Method and arrangement for modelling a system
WO2002101581A2 (en) * 2001-06-08 2002-12-19 Siemens Aktiengesellschaft Statistical models for improving the performance of database operations
US8094591B1 (en) * 2002-03-19 2012-01-10 Good Technology, Inc. Data carrier detector for a packet-switched communication network
WO2004036461A2 (en) * 2002-10-14 2004-04-29 Battelle Memorial Institute Information reservoir
US20050222897A1 (en) * 2004-04-01 2005-10-06 Johann Walter Method and system for improving at least one of a business process, product and service
US7412842B2 (en) 2004-04-27 2008-08-19 Emerson Climate Technologies, Inc. Compressor diagnostic and protection system
US7275377B2 (en) 2004-08-11 2007-10-02 Lawrence Kates Method and apparatus for monitoring refrigerant-cycle systems
US20060112056A1 (en) * 2004-09-27 2006-05-25 Accenture Global Services Gmbh Problem solving graphical toolbar
WO2006066556A2 (en) * 2004-12-24 2006-06-29 Panoratio Database Images Gmbh Relational compressed data bank images (for accelerated interrogation of data banks)
US8590325B2 (en) 2006-07-19 2013-11-26 Emerson Climate Technologies, Inc. Protection and diagnostic module for a refrigeration system
US20080216494A1 (en) 2006-09-07 2008-09-11 Pham Hung M Compressor data module
US20090037142A1 (en) 2007-07-30 2009-02-05 Lawrence Kates Portable method and apparatus for monitoring refrigerant-cycle systems
US9140728B2 (en) 2007-11-02 2015-09-22 Emerson Climate Technologies, Inc. Compressor sensor module
US8706653B2 (en) * 2010-12-08 2014-04-22 Microsoft Corporation Knowledge corroboration
WO2012118830A2 (en) 2011-02-28 2012-09-07 Arensmeier Jeffrey N Residential solutions hvac monitoring and diagnosis
US8964338B2 (en) 2012-01-11 2015-02-24 Emerson Climate Technologies, Inc. System and method for compressor motor protection
US9310439B2 (en) 2012-09-25 2016-04-12 Emerson Climate Technologies, Inc. Compressor having a control and diagnostic module
US9803902B2 (en) 2013-03-15 2017-10-31 Emerson Climate Technologies, Inc. System for refrigerant charge verification using two condenser coil temperatures
US9551504B2 (en) 2013-03-15 2017-01-24 Emerson Electric Co. HVAC system remote monitoring and diagnosis
WO2014144446A1 (en) 2013-03-15 2014-09-18 Emerson Electric Co. Hvac system remote monitoring and diagnosis
AU2014248049B2 (en) 2013-04-05 2018-06-07 Emerson Climate Technologies, Inc. Heat-pump system with refrigerant charge diagnostics
US9871895B2 (en) * 2015-04-24 2018-01-16 Google Llc Apparatus and methods for optimizing dirty memory pages in embedded devices
DE102016003424B4 (en) * 2016-03-21 2023-09-28 Elektrobit Automotive Gmbh Method and device for recognizing traffic signs
CN113223379A (en) * 2021-05-17 2021-08-06 日照职业技术学院 University mathematics probability event teaching demonstration equipment based on big data analysis

Family Cites Families (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NL269512A (en) * 1960-09-23 1900-01-01
US3103648A (en) * 1961-08-22 1963-09-10 Gen Electric Adaptive neuron having improved output
US3196399A (en) * 1962-10-01 1965-07-20 Ibm Specimen identification techniques employing selected functions of autocorrelation functions
US3267431A (en) * 1963-04-29 1966-08-16 Ibm Adaptive computing system capable of being trained to recognize patterns
US3446950A (en) * 1963-12-31 1969-05-27 Ibm Adaptive categorizer
US3457552A (en) * 1966-10-24 1969-07-22 Hughes Aircraft Co Adaptive self-organizing pattern recognizing system
US3581281A (en) * 1967-03-28 1971-05-25 Cornell Aeronautical Labor Inc Pattern recognition computer
US3440617A (en) * 1967-03-31 1969-04-22 Andromeda Inc Signal responsive systems
US3562502A (en) * 1967-08-14 1971-02-09 Stanford Research Inst Cellular threshold array for providing outputs representing a complex weighting function of inputs
US3576976A (en) * 1967-10-19 1971-05-04 Bendix Corp Nonlinear optimizing computer for process control
US3601811A (en) * 1967-12-18 1971-08-24 Matsushita Electric Ind Co Ltd Learning machine
US3588823A (en) * 1968-03-28 1971-06-28 Ibm Mutual information derived tree structure in an adaptive pattern recognition system
US3566359A (en) * 1968-04-17 1971-02-23 Melpar Inc Trainable computer module
US3613084A (en) * 1968-09-24 1971-10-12 Bell Telephone Labor Inc Trainable digital apparatus
JPS5039976B1 (en) * 1968-11-20 1975-12-20
FR2051725B1 (en) * 1969-07-14 1973-04-27 Matsushita Electric Ind Co Ltd
US3623015A (en) * 1969-09-29 1971-11-23 Sanders Associates Inc Statistical pattern recognition system with continual update of acceptance zone limits
US3725875A (en) * 1969-12-30 1973-04-03 Texas Instruments Inc Probability sort in a storage minimized optimum processor
US3678461A (en) * 1970-06-01 1972-07-18 Texas Instruments Inc Expanded search for tree allocated processors
US3715730A (en) * 1970-06-01 1973-02-06 Texas Instruments Inc Multi-criteria search procedure for trainable processors
US3716840A (en) * 1970-06-01 1973-02-13 Texas Instruments Inc Multimodal search
US3702986A (en) * 1970-07-06 1972-11-14 Texas Instruments Inc Trainable entropy system
US3700866A (en) * 1970-10-28 1972-10-24 Texas Instruments Inc Synthesized cascaded processor system
US3772658A (en) * 1971-02-05 1973-11-13 Us Army Electronic memory having a page swapping capability
US3701974A (en) * 1971-05-20 1972-10-31 Signetics Corp Learning circuit
US3753243A (en) * 1972-04-20 1973-08-14 Digital Equipment Corp Programmable machine controller
CH591726A5 (en) * 1973-07-30 1977-09-30 Nederlanden Staat
US3934231A (en) * 1974-02-28 1976-01-20 Dendronic Decisions Limited Adaptive boolean logic element
US3950733A (en) * 1974-06-06 1976-04-13 Nestor Associates Information processing system
NL165863C (en) * 1975-06-02 1981-05-15 Nederlanden Staat DEVICE FOR RECOGNIZING SIGNS.
US3988715A (en) * 1975-10-24 1976-10-26 International Business Machines Corporation Multi-channel recognition discriminator
US4286330A (en) * 1976-04-07 1981-08-25 Isaacson Joel D Autonomic string-manipulation system
US4189779A (en) * 1978-04-28 1980-02-19 Texas Instruments Incorporated Parameter interpolator for speech synthesis circuit
CA1102451A (en) * 1979-06-29 1981-06-02 Percy E. Argyle Apparatus for pattern recognition
US4384273A (en) * 1981-03-20 1983-05-17 Bell Telephone Laboratories, Incorporated Time warp signal recognition processor for matching signal patterns
US4450530A (en) * 1981-07-27 1984-05-22 New York University Sensorimotor coordinator
US4507760A (en) * 1982-08-13 1985-03-26 At&T Bell Laboratories First-in, first-out (FIFO) memory configuration for queue storage
US4504970A (en) * 1983-02-07 1985-03-12 Pattern Processing Technologies, Inc. Training controller for pattern processing system

Also Published As

Publication number Publication date
ES539627A0 (en) 1986-06-16
US4620286A (en) 1986-10-28
JPS60230283A (en) 1985-11-15
AU3666884A (en) 1985-07-25
ES8608195A1 (en) 1986-06-16
BR8500191A (en) 1985-08-20

Similar Documents

Publication Publication Date Title
CA1232358A (en) Probabilistic learning element
US4593367A (en) Probabilistic learning element
US4599692A (en) Probabilistic learning element employing context drive searching
US4599693A (en) Probabilistic learning system
Ramalho et al. Adaptive posterior learning: few-shot learning with a surprise-based memory module
US10380236B1 (en) Machine learning system for annotating unstructured text
US5933806A (en) Method and system for pattern recognition based on dynamically constructing a subset of reference vectors
EP2548096B1 (en) Temporal memory using sparse distributed representation
CN111914085B (en) Text fine granularity emotion classification method, system, device and storage medium
US5748850A (en) Knowledge base system and recognition system
CN108932342A (en) A kind of method of semantic matches, the learning method of model and server
CN110362723B (en) Topic feature representation method, device and storage medium
CN111709493B (en) Object classification method, training device, object classification equipment and storage medium
US20140114896A1 (en) Performing multistep prediction using spatial and temporal memory system
US8612371B1 (en) Computing device and method using associative pattern memory using recognition codes for input patterns
KR20220098991A (en) Method and apparatus for recognizing emtions based on speech signal
CN110781687B (en) Same intention statement acquisition method and device
CN113296755A (en) Code structure tree library construction method and information push method
US6286012B1 (en) Information filtering apparatus and information filtering method
CN113254641A (en) Information data fusion method and device
EP0157080A2 (en) Probabilistic learning element
CN115282606A (en) Cloud game big data mining method and system based on intelligent visualization
CN109902273A (en) The modeling method and device of keyword generation model
CN115062769A (en) Knowledge distillation-based model training method, device, equipment and storage medium
US11442986B2 (en) Graph convolutional networks for video grounding

Legal Events

Date Code Title Description
MKEX Expiry