WO1990010274A1 - Neuronal data processing network - Google Patents
- Publication number
- WO1990010274A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- node
- nodes
- type
- module
- activation
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
Definitions
- This invention relates to a data processing module, and to a data processing network comprising a plurality of such modules.
- A further object is to provide such a network composed of a multiplicity of relatively small, substantially identical modules, each in turn having a simple structure, so that the network as a whole can have a relatively simple structure.
- The present invention provides a data processing module comprising a plurality of nodes and connections between said nodes, through which connections data weighted by a weighting factor can be transmitted among the nodes, the nodes having an activation value representative of the data received. At least three types of nodes are provided: a first type, adapted to receive external data, of which at least two are present; a second type, of which there is always one paired with a node of the first type; and a third type, of which at least one is present. Each node of the first type is connected, through a connection with a weighting factor from a first class, to the associated node of the second type and to the node of the third type; each node of the second type is connected, through a connection with a weighting factor from a second class, to the other nodes of the second type, to the nodes of the first type not paired therewith, and to the node of the third type; the node of the third type being connected through connections with a weighting factor
- The weighting factors from the first class are all positive and the weighting factors from the second class are all negative.
- The present invention also provides a data processing network comprising at least two modules according to the invention, wherein the modules are present at different levels in the network and only modules at different levels can be interconnected, while, if modules are interconnected, at least each node of the first type of one module is connected to all nodes of the first type of the other module.
- Fig. 1a is a diagrammatic view of the structure of a module according to the present invention;
- Fig. 1b shows the module of Fig. 1a with the names of the various connections;
- Fig. 2 is a graphic view of the activation of a node as a function of the total weighted activation presented;
- Fig. 3 shows three modules according to the present invention with intermodular connections;
- Fig. 4 is a diagrammatic explanation of the interactions within a module;
- Figs. 5a-h are diagrammatic views of the response of a module to a presented input stimulus during a number of cycles;
- Fig. 6 is a diagrammatic view of an output module.
- Fig. 1a diagrammatically shows the structure of a module according to the present invention, to be referred to hereinafter as a CALM module (categorizing and learning module).
- A CALM module consists of a plurality of nodes, indicated in the figures by circles. As will be explained below, there are a number of different types of nodes, which, however, all have a number of properties in common. Each node is a simple data processing element which processes a one-dimensional variable (a voltage value), called the activation, according to a small number of invariable rules.
- The activation of a node may assume any value between 0 and a maximum value M. The activations of the various nodes are exchanged through connections between the nodes, and means may be incorporated in the connections for influencing the transfer of the activations by means of a weighting factor.
- Such means can be implemented in a very simple manner, e.g. by means of resistors.
- The structure of the nodes can be simple, too; e.g. they may have the structure described in J.J. Hopfield and D.W. Tank: "Computing with Neural Circuits", Science, 233, pp. 625-633 (1986).
- The effective input signal for a node i in Fig. 1, the excitation ei, is determined by the weighted sum of the activations of all separate nodes connected to the input of node i.
- The activation of a node i at time t+1, designated ai(t+1), is a function of the activation at time t, designated ai(t), and of the input excitation ei.
- The activation of node i is given in a discrete-time representation by formula 1:
- wij indicates the weighting factor of the connection between a node j and node i, and k is a constant with 0 < k < 1. It is assumed that there are N nodes connected to node i.
- The first component, (1-k)·ai(t), gives the autonomous decrease in the activation of a node.
- The activation of node i decreases to zero at a rate determined by the magnitude of k.
- This decrease is exponential, which is why 0 < k < 1.
- The second component restricts the excitation at the maximum activation value M.
- The third component ensures that the negative excitation, i.e. the inhibition, asymptotically approaches the minimum activation value ai(min). It is indicated above that the activation of all nodes in a CALM module is determined by formula 1. However, there are also other expressions defining a variation in activation in the manner shown in Fig. 2. Such a formula is:
- ai(t+1) = (1-k)·ai(t) + (ei/(1+ei))·[M - (1-k)·ai(t)]   for ei > 0
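The update rule above can be sketched numerically as follows. Since the body of formula 1 is not reproduced in this text, the squashing factors ei/(1+ei) and ei/(1-ei) are assumptions chosen to match the described behaviour (autonomous decay, saturation at the maximum M, inhibition approaching the minimum); the constants and function names are illustrative placeholders.

```python
# Hedged sketch of the CALM node update; exact squashing form is an assumption.
M = 1.0       # maximum activation value
A_MIN = 0.0   # minimum activation value
K = 0.05      # decay constant k, with 0 < k < 1

def excitation(weights, activations):
    """Total input excitation e_i: weighted sum over all connected nodes."""
    return sum(w * a for w, a in zip(weights, activations))

def update(a, e, k=K):
    """One discrete-time step for a node with activation a and excitation e."""
    decayed = (1.0 - k) * a                      # autonomous exponential decay
    if e > 0:
        # positive excitation pushes the activation toward the maximum M
        return decayed + (e / (1.0 + e)) * (M - decayed)
    # negative excitation (inhibition) pushes it toward the minimum A_MIN
    return decayed + (e / (1.0 - e)) * (decayed - A_MIN)
```

With e = 0 the node simply decays; large positive e drives the activation toward M without ever exceeding it, and large negative e drives it toward A_MIN without undershooting.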
- This node type can form connections with nodes in other modules, which mostly are of the same type.
- The nodes of the first type have excitatory (positively activating) output connections. Because the activation of this type of node may correspond to the presence of a specific pattern of input activation signals presented to the module, these nodes are called representation nodes, or R nodes for short.
- A node of the second type is always coupled with a node of the first type to form a pair. These nodes have only inhibitory (negatively activating) output connections and suppress the activation of all nodes in the CALM module, although the extent of suppression need not be the same for all nodes.
- This type of node is called a V node for short, in which V stands for Veto.
- Of the third type of node, only one need be present per CALM module. This node is excited by all R nodes and inhibited by all V nodes. For the sake of brevity, this type of node is called the A (arousal) node. Because the A node is connected, in the manner described, to all V and R nodes, its activation is a positive function of the extent of competition between the V nodes. In the CALM module, this competition is fiercest with new input patterns, so that the activation of the A node is an indirect measure of the newness of an input pattern.
- The fourth type is the E (external) node, which receives an input signal exclusively from the A node and randomly transmits activation pulses to all R nodes in the CALM module. These activation pulses are distributed uniformly over the range of values 0 to aE(t), wherein aE(t) indicates the activation of the E node at time t.
- The E node is also important for the learning process in a CALM module.
- Fig. 1a shows a CALM module 1, in which the above-described four types of nodes with their interconnections are designated by the letters likewise indicated above.
- There are always equal numbers of R nodes and V nodes in a module.
- Fig. 1a shows only three nodes of each type, but in principle their number is unlimited. In actual practice, however, the number of R and V nodes will seldom be higher than about 100, since networks composed of a large number of relatively small modules can in principle operate more quickly than networks consisting of a few very large modules.
- Although the A node and the E node are named and shown separately because they have clearly different functions, these nodes may be combined into a single node, if desired, because the E node receives activation exclusively from the A node.
- Each V node in a module is connected through inhibitory connections to all R nodes not paired therewith, to all other V nodes and to the A node.
- Each V node may also be connected through an inhibitory connection to its paired R node, but this is not necessary.
- Each R node is connected through an excitatory connection to its associated V node, as well as to the A node.
- The A node is connected through an excitatory connection to the E node, and finally the E node is connected through excitatory connections to all R nodes.
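The wiring just described can be captured as a table of fixed weighting factors. The sketch below is illustrative only: the function name, node labels and numeric values (up, down, cross, etc.) are placeholders, not the values used in the patent; only the signs and the connection pattern follow the text.

```python
# Hedged sketch of CALM intramodular wiring; values are placeholders.
def calm_weights(n, up=1.0, down=-1.0, cross=-10.0,
                 ra=1.0, va=-1.5, ae=1.0, er=0.5):
    """Return a dict mapping (source, target) node names to weights,
    for a module with n R-V pairs plus one A node and one E node."""
    w = {}
    for i in range(n):
        w[(f"R{i}", f"V{i}")] = up        # up: R excites its paired V
        w[(f"V{i}", f"R{i}")] = down      # down: V inhibits its paired R (optional)
        w[(f"R{i}", "A")] = ra            # every R node excites the A node
        w[(f"V{i}", "A")] = va            # every V node inhibits the A node
        w[("E", f"R{i}")] = er            # E node randomly excites every R node
        for j in range(n):
            if i != j:
                w[(f"V{i}", f"R{j}")] = cross  # cross: V inhibits non-paired R nodes
                w[(f"V{i}", f"V{j}")] = cross  # mutual inhibition between V nodes
    w[("A", "E")] = ae                    # the A node excites the E node
    return w
```

Every entry here is fixed and invariable; only the intermodular weights discussed later are adaptive.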
- In Fig. 1b the various types of excitatory and inhibitory connections are named.
- The weighting factors incorporated in all connections present in a CALM module are predetermined and invariable.
- The values of these intramodular weighting factors are not limited to a given range; in an experimental set-up, values in the range of -10 to +3 were used. It will be clear that a weighting factor of +1 means, in fact, a through-connection, and a weighting factor of -1 a through-connection to the inverted output of a node.
- Up (R→V; excitatory): connects an R node to the V node paired with it.
- Down (V→R; inhibitory): the reciprocal of the up weighting factor.
- An up weighting factor and a down weighting factor together provide for an R-V node pair to exhibit a differentiating characteristic.
- Cross (V→R; inhibitory): controls the inhibition laterally between V nodes and R nodes not paired together.
- These cross weighting factors are usually strongly negative, so that a V node can suppress all R nodes not paired with it.
- VA (V→A): inhibitory.
- AE (A→E): excitatory.
- ER (E→R; excitatory): controls the influence of the random activations of the E node on the R nodes.
- In the Table below we specify, by way of example, the weighting factors that were implemented in the various connections within a CALM module, with which the following example, given on the basis of Fig. 4 and Table 1, has been realized.
- The weighting factor between the V nodes themselves determines how quickly the competition between the V nodes can be solved. To render the inhibition so strong that only one V node remains, and the activation of the others decreases to zero, the minimum requirement is that
- The inhibitory weighting factor between a veto node and the A node should be so high that under the proper conditions the activation of the A node can be fully reduced to zero. To that end, this weighting factor should comply with formula 5.
- The weighting factor from the R nodes to the A node should also meet formulae 4 and 5.
- The weighting factor between the A node and the E node is mostly chosen to equal one.
- The excitatory weighting factor from the E node to the R nodes determines the extent of change in the learning parameter (to be explained hereinafter) and the random activations of the R nodes. There is an interaction between these two parameters, so that when the AE weighting factor is kept at the value 1.0, the random activation and the learning parameter are determined by the ER weighting factor.
- Variable weighting factors are included in the intermodular connections between R nodes and in the connections between input sources and the R nodes in a module.
- In general, a complete connection pattern is assumed to exist between CALM modules, i.e. if there is a connection from CALM 1 to CALM 2, each R node in CALM 2 receives an input signal from all R nodes in CALM 1.
- CALM modules at one level in the network are not connected to other CALM modules at the same level but only to the CALM modules at higher levels; possibly, connections may also exist to CALM modules at lower levels.
- Where connections exist between two CALM modules, there is a complete connection pattern.
- The weighting factors in the connections between the modules are called I (inter) weighting factors. These I weighting factors are all variable within a given range of values, e.g. the range 0 to 2. In general, the initial value of such a weighting factor is chosen to equal the mean of the limiting values, in the example given therefore the value 1.0. If the initial values are chosen too close to the minimum or maximum values, the module will not function properly.
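A minimal sketch of this complete intermodular connection pattern, with every I weighting factor initialised midway in its range; the function name and node labels are illustrative assumptions.

```python
import itertools

# Hedged sketch: every R node of the sending module connects to every
# R node of the receiving module, with a variable I weighting factor
# initialised at the mean of its limiting values (1.0 for the range 0-2).
def connect_modules(senders, receivers, w_min=0.0, w_max=2.0):
    init = (w_min + w_max) / 2.0
    return {(s, r): init for s, r in itertools.product(senders, receivers)}
```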
- The adjustment of the I weighting factors is effected according to a variant of the Grossberg learning rule, as described in the above-mentioned adaptive resonance theory. This Grossberg rule forms an extension of the Hebb rule.
- In the Hebb rule, an increase in a weighting factor in a connection between two nodes depends upon the correlation between the activation of the node at the beginning and that of the node at the end of the connection.
- The Grossberg rule in addition takes into account the total additional input excitation caused by the "neighbours" of the node situated at the beginning of the connection, which also have a connection with the node situated at the end of the connection.
- Fig. 3 diagrammatically shows three CALM modules, two of which are at a level I and one at a higher level II. For the sake of clarity, only the R nodes of these modules are shown, and it is assumed that each CALM module has only three R nodes. However, it will be clear that the following also applies if the CALM module at level II is connected to more CALM modules at level I, and each of the CALM modules may contain many more than three R nodes.
- wij(t) is the inter weighting factor at time t, which is always larger than zero, because there are only excitatory connections between modules;
- The first part of the component between square brackets in formula 11 represents the Hebbian part of the learning rule, where the activation value of the node at the beginning of a connection contributes to an increase in the weighting factor.
- A difference from the Hebb rule is that high weighting factors tend to limit the increase in the weighting factor.
- The second bracketed term forms an extension of the Hebb rule, used for the first time by Grossberg. This term is responsible for a decrease in the changes in the weighting factor. In the event of high excitation due to the "neighbour" R nodes, a decrease in the weighting factor is highly probable, in particular when wij(t) is also high.
- This component therefore applies an adaptive reducing scale factor to the weighting factor when the total input excitation to a node becomes too high. This may happen, for instance, when many modules are connected to a single module, when high input activations are present, or when the weighting factor is too high due to prolonged learning. Besides, this component provides the important property that the module is capable of having non-orthogonal patterns represented by different R nodes.
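Since formula 11 itself is not reproduced in this text, the following is only an assumed shape for the update, consistent with the description above: a Hebbian growth term limited by the current weight, minus a Grossberg term driven by the excitation arriving from neighbouring senders. All names and constants are hypothetical.

```python
# Hedged sketch of a Grossberg-style weight change (formula 11 is not
# given in the text; this shape is an assumption based on its description).
def delta_w(mu, w_ij, a_i, a_j, neighbour_exc, w_max=2.0, L=1.0):
    """Change of the I weighting factor from sender j to receiver i.

    mu            -- learning parameter (proportional to the E node activation)
    a_i, a_j      -- activations of the receiving and sending R nodes
    neighbour_exc -- total excitation from the other ("neighbour") senders
    """
    hebbian = (w_max - w_ij) * a_j        # growth, limited when w_ij is high
    decrease = L * w_ij * neighbour_exc   # Grossberg term: neighbour input shrinks w
    return mu * a_i * (hebbian - decrease)
```

Note how both described effects appear: a weight near w_max can barely grow, and strong neighbour excitation combined with a high w_ij makes the net change negative.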
- To understand the dynamic behaviour of a CALM module, it is useful to discriminate between the three processes that define its operation: the excitatory process, the inhibitory process and the arousal process. Although these processes are interdependent, it is possible to analyse their functions separately. In fact, the module can best be understood on the basis of the interactions between these processes.
- Fig. 4 shows a diagram indicating these interactions.
- The first mechanism, the excitatory system, formed by the R nodes in a module, is directly activated by the stimulations presented to the module. Only the R nodes are connected to nodes outside the module. Therefore, the excitatory process is stimulated either from other modules, from the E node, or from receptor nodes which convert physical stimulation (e.g. light, sound, displacement and the like) into activations.
- Suppose that all variable weighting factors in the connections to the R nodes have equal values.
- All R nodes are then activated equally. Each R node will activate its associated V node, resulting in competition between the V nodes due to the mutually inhibitory connections between these nodes.
- The R nodes also excite the A node and hence feed the arousal mechanism.
- The major function of the excitatory mechanism is the activation of the two other systems.
- The inhibitory process, composed of the V nodes, is controlled exclusively by the excitatory system. As soon as a V node becomes active, it starts to inhibit the A node, the R nodes and the other V nodes. The mutual inhibition of the V nodes provides for the competition between these nodes. The activation of the V nodes will have an oscillatory pattern so long as competition continues. With respect to the inhibition of the R nodes by the V nodes, a distinction should be made between the inhibition of the paired R node and the inhibition of other, non-paired R nodes.
- The arousal process, consisting of the A node and the E node, has at least two functions.
- The first is to force the module to solve the competition in case several R nodes receive the same quantity of input excitation and cannot solve the competition themselves.
- By transmitting random activation pulses to the R nodes, the equilibrium is disturbed, so that one of the R-V node pairs can win the competition.
- A stochastic search process thus takes place for possible representations among the activated R nodes, the probability that a given R node is selected as a representation being a function of the relative value of its activation with respect to the activations of the other R nodes, and of the random fluctuations in the data received from the E node.
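These tie-breaking pulses can be sketched as below; the function name, the ER weighting factor value of 0.5 and the use of Python's random module are illustrative assumptions. Each R node receives an independent pulse drawn uniformly from 0 up to the E node activation scaled by the ER weight, which perturbs an otherwise symmetric competition.

```python
import random

# Hedged sketch of the E node's random pulses to the R nodes.
def e_node_pulses(n_r_nodes, a_e, er_weight=0.5, rng=random):
    """One pulse per R node, uniform over [0, a_E(t) * w_ER]."""
    return [rng.uniform(0.0, a_e * er_weight) for _ in range(n_r_nodes)]
```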
- The second function of the arousal system is to control the learning process. As described above, the learning rate is proportional to the activation of the E node.
- The influence of the arousal mechanism on the learning process is the following:
- The ratio between the VA weighting factors and the RA weighting factors is selected so that when a single R-V node pair is activated, the total input activation for the A node is negative and the A node is suppressed by this pair.
- The activation of the A node will increase and random activations will be distributed over the R nodes, which activations help to solve the competition.
- The R nodes must be activated more strongly than the V nodes for them to be able to activate the arousal system.
- The oscillation will stop and the winning pair will preserve its activation at a low, stationary level, while all other R-V node pairs no longer have activation.
- Because both the A node and the E node integrate their input signals, the activations of these nodes can attain high levels as a result of an oscillating input signal.
- The result is that the presentation of stimuli producing substantial competition in the module is accompanied by more learning than stimuli that can activate a single R node in a simple manner. Stimuli that were presented earlier therefore activate the arousal mechanism to a substantially lesser extent and for a shorter period of time. In this manner, the CALM module can discriminate between old and new stimuli.
- The CALM module adjusts its learning rate, so that new stimuli, requiring much learning, are learned much more quickly than old stimuli, which do not require much learning.
- The disturbance of representations by the prolonged presentation of old stimuli is prevented by this mechanism.
- The module has two forms of learning: in the first place the active search for new representations of newly presented patterns of stimuli, accompanied by much learning, and in the second place the passive activation of representations of patterns of stimuli already presented earlier, which need not require much learning and reinforces the representations only slightly.
- Categorization is effected by associating a given stimulus presented to the R nodes with a single R node in the module, which node is then said to represent the stimulus.
- This categorization is an autonomous processing step by the CALM module according to the present invention, which means therefore that the CALM module according to the invention is capable of unsupervised learning, while supervised learning is possible as well. Learning takes place during and subsequent to the categorization process, thereby preserving the association between the stimulus and the R node by adjusting the I weighting factors to the R node.
- The CALM module continues to represent newly presented patterns of stimuli that are to be discriminated in new R nodes.
- The CALM module is therefore capable of discriminating patterns of stimuli, i.e. representing comparatively strongly differing patterns by different R nodes, and is also capable of generalization, i.e. representing relatively strongly resembling patterns by the same R node. It will be explained on the basis of Fig. 5 and Table I how the above-described three mechanisms cooperate to categorize a stimulus pattern and how this process affects the value of the I weighting factor to the winning R node.
- Fig. 5 shows a CALM module with two R nodes 3, 4, two V nodes 5, 6, an A node 7 and an E node 8.
- An input stimulus is presented to the module from two nodes 1 and 2.
- These nodes may be the R nodes of a CALM module at a different level, but also two external input sources producing a pattern that is representative of an externally received stimulus. It is assumed for the example that node 1 has an activation of 1.0 and node 2 an activation of 0.0.
- The activation of the various nodes is shown diagrammatically in Figs. 5a-h by the extent of blackening of a node; an entirely black node has an activation of 1.0.
- The Table shows the cycles 1-20 traversed by the CALM module at successive times t. Because it is not illustrative to represent all cycles in the manner of Figs. 5a-h, these figures only show cycles 0, 1, 2, 3, 4, 10, 12 and 20 in the respective Figs. 5a-5h.
- The Table gives the values of the activations of nodes 1-8 by means of the symbols a(1)-a(8), while the right-hand side of the Table indicates for each cycle the value of the learning parameter μ and the variable weighting factors w31, w32, w41 and w42, which weighting factors are also shown in Fig. 5. Cycle 0 is not shown in the Table, since it only indicates the initial state.
- Fig. 5a shows the initial state, wherein only node 1 has an activation of 1.000 and the other nodes an activation of 0.000.
- The learning factor μ has its low initial value and the weighting factors all equal 1.000, i.e. a value intermediate between the limiting values of these weighting factors, 0 and 2.
- Fig. 5b: the activation has reached the row of R nodes 3 and 4 in the CALM module and has been distributed uniformly over these nodes.
- The weighting factors have not changed, since nodes 3, 4 were not yet activated.
- Fig. 5c: the wave of activations has now also reached V nodes 5 and 6, and A node 7 has received activation from nodes 5, 6.
- The weighting factors have changed slightly. The reason the changes are small is that the learning parameter still has its low quiescent value.
- The weighting factors from the activated node 1 have slightly increased, while the weighting factors from the non-activated node 2 have decreased. This increase and decrease are a direct result of the learning rule employed.
- Nodes 3 and 4 receive random activations ranging between 0.000 and the activation of the E node multiplied by the weighting factor in the connection from the E node to nodes 3, 4. This weighting factor is 0.5.
- The learning parameter μ will increase; μ is linearly dependent upon the activation of the E node.
- The increase in μ leads to a greater change in the weighting factors, which means that learning takes place more quickly.
- The V nodes have an inhibitory effect on the activation of the A node.
- The weighting factors in the CALM module have been chosen so that when a V node and the R node paired with it, which has an excitatory connection with the A node, have the same activation, the net contribution of these two nodes to the activation of the A node is negative, so that this activation decreases.
- Fig. 5e: the activations of nodes 3 and 4 now differ slightly from one another due to the random activations these nodes receive from the E node.
- The learning parameter μ has increased from 0.005 to 0.016.
- The activation of the A node has decreased as a result of the inhibitory effect of the V nodes.
- The weighting factors have again increased further, and the weighting factors to the respective nodes 3 and 4 may now also assume different values, because the activations of nodes 3, 4 differ from one another. These differences in weighting factors are initially minute, however, and not yet visible in the Table.
- Fig. 5f: the activation of node 3 has suddenly decreased strongly, again due to the random fluctuations in the activation which nodes 3, 4 receive from the E node.
- The weighting factors now start to differ strongly from each other.
- The weighting factor from node 1 to node 3 is the higher, which may mean that eventually node 3 will "win" after all.
- The weighting factor from the non-activated node 2 to node 3, on the other hand, decreases strongly.
- Fig. 5g: the battle has now almost been decided.
- The activation of R node 4 and the associated V node 6 has become very low, and the weighting factor w31 is clearly the higher.
- Node 3 definitively forms the representation for the input pattern [1,0]; the activation of nodes 4, 6 has decreased to zero, and the activation of the A node has also decreased strongly. The value of μ decreases, but this decrease takes place considerably more slowly; the activation of the E node, too, decreases only slowly. From now on, an equilibrium condition will slowly be established in the CALM module. Repeated presentation of the pattern [1,0], after all activations have been reset to zero, now results in node 3 rapidly being activated more strongly. The presentation of the orthogonal pattern [0,1] will probably lead, after about 20 cycles, to a representation by node 4.
- The learning rule is so composed that after the pattern [1,0] has been presented a few times, the total of the weighting factors to that pattern will always decrease.
- Fig. 6 shows within the dotted lines a so-called output module, which, in fact, consists of a combination of a single CALM module designated in the figure by reference numeral 1 and comprising N R-V node pairs, and a modified CALM module designated by numeral 2.
- The modified CALM module contains only N pairs of R-V nodes and no A or E node.
- N-1 R-V node pairs in module 1 are coupled with N-1 R-V node pairs in module 2. Both module 1 and module 2 therefore have one "free" pair of R-V nodes.
- Such an output module makes it possible for a plurality of simultaneous parallel activations within a CALM module to be converted into a series of responses.
- An output module, contrary to the impression its name makes, need not necessarily be provided at the output end of the network.
- The output module forms a kind of parallel-serial converter, wherein the probability of a production in the series depends upon the activations that can be produced in the CALM module in response to a specific presented pattern.
- Module 2 shown in the figure consists of pairs of R-V nodes. These pairs, however, can always be combined into a single node without impeding the function. Each V node again inhibits the other V nodes and the R nodes not paired with it.
- The free R-V node pair in module 2 is connected through an excitatory connection to the A node of CALM module 1.
- Each R-V node pair of CALM module 1 excites only one R-V node pair in module 2.
- All R nodes of coupled R-V node pairs of module 2 excite the "free" R-V node pair in module 1. So long as the competition has not yet been solved, the A node excites the "free" R-V node pair in module 2, which blocks further transmission of activations by module 2.
- Activations in the other node pairs in module 2 can only be produced after the activation of the A node has disappeared. As explained above, this is not the case until the competition in CALM module 1 has been solved. In that case, only one activation is transmitted from module 1 to module 2, and the activated R node can provide an output signal. In its turn, module 2 again activates the "free" R-V node pair in the CALM module. This pair only produces inhibition and does not receive inhibition itself. CALM module 1 is thereby reset, because the activations in all coupled R-V node pairs are reduced to zero. After termination of the activation of the free node pair in CALM module 1, new activations can be built up in CALM module 1 on the basis of the weighting factors and the data supplied. On the basis of the new activations in the CALM module, a new response can be generated in module 2.
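The gating behaviour described above can be sketched as a single decision step; the function name, the threshold and the selection of the winner by maximum activation are illustrative assumptions, standing in for the veto-mediated blocking by the "free" node pair.

```python
# Hedged sketch of the output module's gating: module 2 passes the winner
# of module 1's competition only once the A node activation has vanished.
def output_module_step(r_activations, a_node_activation, threshold=1e-3):
    """Return the index of the transmitted representation, or None while
    competition in module 1 is unresolved (A node still active)."""
    if a_node_activation > threshold:
        return None                 # free pair in module 2 blocks transmission
    winner = max(range(len(r_activations)), key=r_activations.__getitem__)
    return winner if r_activations[winner] > threshold else None
```

After a winner is emitted, the reset of module 1 via the free pair (as described above) would clear `r_activations` before the next pattern is processed.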
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
NL8900425A NL8900425A (en) | 1989-02-21 | 1989-02-21 | INFORMATION PROCESSING MODULE AND AN INFORMATION PROCESSING NETWORK CONTAINING NUMBER OF THESE MODULES. |
NL8900425 | 1989-02-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1990010274A1 true WO1990010274A1 (en) | 1990-09-07 |
Family
ID=19854171
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/NL1990/000018 WO1990010274A1 (en) | 1989-02-21 | 1990-02-20 | Neuronal data processing network |
Country Status (6)
Country | Link |
---|---|
EP (1) | EP0473592A1 (en) |
JP (1) | JPH04505815A (en) |
AU (1) | AU5164490A (en) |
CA (1) | CA2048598A1 (en) |
NL (1) | NL8900425A (en) |
WO (1) | WO1990010274A1 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4660166A (en) * | 1985-01-22 | 1987-04-21 | Bell Telephone Laboratories, Incorporated | Electronic network for collective decision based on large number of connections between signals |
WO1988002894A1 (en) * | 1986-10-07 | 1988-04-21 | The Regents Of The University Of California | Pattern learning and recognition device |
-
1989
- 1989-02-21 NL NL8900425A patent/NL8900425A/en not_active Application Discontinuation
-
1990
- 1990-02-20 EP EP90903964A patent/EP0473592A1/en not_active Withdrawn
- 1990-02-20 AU AU51644/90A patent/AU5164490A/en not_active Abandoned
- 1990-02-20 CA CA002048598A patent/CA2048598A1/en not_active Abandoned
- 1990-02-20 JP JP2503991A patent/JPH04505815A/en active Pending
- 1990-02-20 WO PCT/NL1990/000018 patent/WO1990010274A1/en not_active Application Discontinuation
Non-Patent Citations (6)
Title |
---|
Computer, Vol. 21, No. 3, March 1988, IEEE, (New York, US), K. FUKUSHIMA: "A Neural Network for Visual Pattern Recognition", pages 65-75 * |
IEEE Transactions on Military Electronics, Vol. MIL-7, No. 2/3, April-July 1963, IEEE, (US), G.J. DUSHECK et al.: "A Flexible Neural Logic Network", pages 208-213 * |
Parallel Processing by Cellular Automata and Arrays; Proceedings of the Third International Workshop on Parallel Processing by Cellular Automata and Arrays, Berlin, 9-11 September 1986, North-Holland, (Amsterdam, NL), H. MIZUNO: "A Neural Network Model for Pattern Recognition", pages 234-241 * |
Proceedings of the International Joint Conference on Neural Networks, Washington, 19-22 June 1989, IJCNN, J.M.J. MURRE et al.: "CALM Networks: a Modular Approach to Supervised and Unsupervised Learning", pages 649-656 see figures 1-3; page 649, left-hand column, line 1 - page 654, left-hand column, line 26 * |
Systems, Computers, Controls, Vol. 6, No. 5, 1975, K. FUKUSHIMA: "Self-Organizing Multilayered Neural Network", pages 15-22 * |
WESCON 63, Technical Papers, San Francisco, 20-23 August 1963, Part 7, Instrumentation, (US) R.O. DUDA et al.: "An Adaptive Prediction Technique and its Application to Weather Forecasting", pages 11.3-1 - 11.3-7 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2671207A1 (en) * | 1991-01-02 | 1992-07-03 | Abin Claude | NEURONAL NETWORK WITH BINARY OPERATORS AND METHODS FOR MAKING SAME. |
WO1992012497A1 (en) * | 1991-01-02 | 1992-07-23 | Claude Abin | Neural network with binary operators and methods of implementation |
GB2258554A (en) * | 1991-08-07 | 1993-02-10 | Haneef Akhter Fatmi | Function-neuron net analyzer |
WO1996037028A1 (en) * | 1995-05-17 | 1996-11-21 | Aeg Atlas Schutz- Und Leittechnik Gmbh | Process and system for determining the states of operation of an electric energy supply network |
Also Published As
Publication number | Publication date |
---|---|
EP0473592A1 (en) | 1992-03-11 |
JPH04505815A (en) | 1992-10-08 |
CA2048598A1 (en) | 1990-08-22 |
NL8900425A (en) | 1990-09-17 |
AU5164490A (en) | 1990-09-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Dehaene et al. | Neural networks that learn temporal sequences by selection. | |
Chapeau-Blondeau et al. | Stable, oscillatory, and chaotic regimes in the dynamics of small neural networks with delay | |
Merrill et al. | Fractally configured neural networks | |
Buhmann et al. | Influence of noise on the function of a “physiological” neural network | |
Wang | Oscillatory and chaotic dynamics in neural networks under varying operating conditions | |
Kryukov | An attention model based on the principle of dominanta | |
Kreiser et al. | On-chip unsupervised learning in winner-take-all networks of spiking neurons | |
Grosan et al. | Artificial neural networks | |
WO1990010274A1 (en) | Neuronal data processing network | |
Crook et al. | A novel chaotic neural network architecture. | |
Amit et al. | Architecture of attractor neural networks performing cognitive fast scanning | |
Bennett | Large competitive networks (neurons) | |
Shimizu et al. | How animals understand the meaning of indefinite information from environments? | |
Baird | Learning with synaptic nonlinearities in a coupled oscillator model of olfactory cortex | |
Yarushev et al. | Time Series Prediction based on Hybrid Neural Networks. | |
Karouia et al. | Performance analysis of a MLP weight initialization algorithm. | |
Konen et al. | Unsupervised symmetry detection: A network which learns from single examples | |
Micheli-Tzanakou | Nervous System | |
MacLennan | Neural networks, learning, and intelligence. | |
Wang et al. | Low-Order Model of Biological Neural Networks | |
Handrich et al. | A biologically plausible winner-takes-all architecture | |
Okamoto et al. | Biochemical Neuron: Application to Neural Network Studies and Practical Implementation of Device | |
Amemori et al. | Unsupervised learning for sub-millisecond temporal coded sequence | |
Nishimuray et al. | Deterministic Chaos Approach to Multistable Perception | |
Michler et al. | Adaptive feedback inhibition improves pattern discrimination learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AK | Designated states | Kind code of ref document: A1; Designated state(s): AU CA JP US |
| AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): AT BE CH DE DK ES FR GB IT LU NL SE |
| WWE | Wipo information: entry into national phase | Ref document number: 2048598; Country of ref document: CA |
| WWE | Wipo information: entry into national phase | Ref document number: 1990903964; Country of ref document: EP |
| WWP | Wipo information: published in national office | Ref document number: 1990903964; Country of ref document: EP |
| WWW | Wipo information: withdrawn in national office | Ref document number: 1990903964; Country of ref document: EP |