The most widely used form of logical neural networks is based on conventional random-access memory (RAM). This approach to neural computing works with a weightless network. At first sight it looks very different from the biological idea of a neural network, but seen in connection with the adaptive node the differences are not so significant. The RAM unit is an adaptive node using the simple response schema.
Figure 3.3: A Single RAM Unit.
The basic architecture is as follows: the input vector is divided into parts; each part is connected to the address inputs of a 1-Bit-RAM unit. The outputs of all the RAMs within one discriminator are summed up. The number of discriminators needed in a network is determined by the number of classes that need to be distinguished by the network, see Figure 3.4.
The 1-Bit-RAM unit, depicted in Figure 3.3, is a device which can store one bit of information for each input address. A control input is available to switch the mode of the RAM between `Write' and `Read' for learning and recall.
Initially all memory units are set to `0'. During the learn (`Write') mode the memory is set to `1' for each supplied address; in the recall (`Read') mode the output is returned for each supplied address, either `1' (if the pattern was learned) or `0' (if the pattern was not learned).
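The write/read behaviour described above can be sketched in a few lines of Python; the class name and interface below are illustrative, not taken from the text. A set of learned addresses stands in for the memory cells, so every unseen address implicitly holds `0'.

```python
class RAMNode:
    """Minimal sketch of a 1-bit RAM unit (illustrative names)."""

    def __init__(self):
        self.memory = set()  # addresses whose cell is '1'; all others are '0'

    def write(self, address_bits):
        # Learn mode: set the cell at the supplied address to '1'.
        self.memory.add(tuple(address_bits))

    def read(self, address_bits):
        # Recall mode: '1' if this sub-pattern was learned, else '0'.
        return 1 if tuple(address_bits) in self.memory else 0


ram = RAMNode()
ram.write([1, 0, 1])
print(ram.read([1, 0, 1]))  # 1 (learned)
print(ram.read([0, 0, 1]))  # 0 (not learned)
```

Using a set rather than an explicit 2^n-entry table is only a convenience here; a hardware RAM stores one bit per address directly.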
The discriminator is the device which performs the generalization. It consists of several RAMs and one node which sums the outputs of the RAMs in recall mode. The discriminator is connected to the whole input vector; each RAM within the discriminator is connected to a part of this vector, so that each input bit is connected to exactly one RAM, see Figure 3.4(a). The connections are preferably chosen at random.
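A discriminator along these lines can be sketched as follows (a Python illustration under assumed names; each RAM is again represented by a set of learned addresses). The input indices are shuffled once so that each bit feeds exactly one RAM, and the response is the number of RAMs that recognize their sub-pattern.

```python
import random


class Discriminator:
    """Illustrative sketch: random input-to-RAM mapping, summed RAM outputs."""

    def __init__(self, input_size, tuple_size, seed=0):
        rng = random.Random(seed)
        indices = list(range(input_size))
        rng.shuffle(indices)  # random, but fixed, wiring of bits to RAMs
        self.tuples = [indices[i:i + tuple_size]
                       for i in range(0, input_size, tuple_size)]
        self.rams = [set() for _ in self.tuples]  # one memory per RAM

    def train(self, pattern):
        # Write mode: each RAM stores its addressed sub-pattern.
        for ram, idx in zip(self.rams, self.tuples):
            ram.add(tuple(pattern[i] for i in idx))

    def response(self, pattern):
        # Read mode: count the RAMs whose sub-pattern was learned.
        return sum(tuple(pattern[i] for i in idx) in ram
                   for ram, idx in zip(self.rams, self.tuples))


d = Discriminator(input_size=8, tuple_size=2)
d.train([1, 0, 1, 1, 0, 0, 1, 0])
print(d.response([1, 0, 1, 1, 0, 0, 1, 0]))  # 4: all four sub-patterns match
```

A pattern similar but not identical to a trained one typically matches some of the sub-patterns, which is the source of the generalization.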
The network shown in Figure 3.4(b) can be used to distinguish classes; it consists of one discriminator per class of input patterns, each trained solely on the input data of its own class. In the recall mode each discriminator responds with the number of matching subpatterns. The pattern is classified according to the discriminator with the highest response.
The difference between the response of the winning discriminator and that of the runner-up is a measure of confidence ([alek89a, p. 174f]). It is also possible to set a threshold: if the output of a discriminator exceeds the threshold, the pattern is accepted and recognized.
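The decision rule and confidence measure reduce to a few lines; the response values and threshold below are illustrative, standing in for the per-class discriminator outputs.

```python
# Hypothetical discriminator responses for three classes.
responses = {"A": 14, "B": 9, "C": 3}

best_class = max(responses, key=responses.get)
ranked = sorted(responses.values(), reverse=True)
confidence = ranked[0] - ranked[1]      # gap between winner and runner-up

threshold = 10                          # assumed acceptance threshold
accepted = responses[best_class] > threshold

print(best_class, confidence, accepted)  # A 5 True
```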
The major advantages of this kind of network are the ease of implementation and the ability to learn from a single presentation of the training patterns. These networks are mainly used for recognition purposes.