Improvements on Logical Neural Networks

This section presents some new ideas in the field of logical neural networks, focusing in particular on concepts aimed at improving the basic processing unit.

In [canu95] a generalization process for weightless neurons is introduced, using a Radial-RAM. In this approach each adaptive node has the ability to generalize. The idea is based directly on simple RAM neurons and uses the same learning mechanism; the improvement lies in the recall characteristics. Instead of accessing only the address assigned to an input pattern, the region around that address is read as well. The response therefore depends on all learned patterns that are similar to the applied vector. Networks built from this new type of neuron showed better generalization performance than networks using simple RAM neurons.
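
The following is a minimal sketch of the idea, assuming Hamming distance as the measure of the region around an address; the class name RadialRAMNeuron, the radius parameter, and the normalized response are illustrative choices, not details taken from [canu95]:

    class RadialRAMNeuron:
        """Weightless RAM neuron with radial recall (illustrative sketch).

        Training is identical to a simple RAM neuron: the address formed
        by the binary input tuple is marked as seen.  Recall additionally
        reads every stored address within Hamming distance `radius` of
        the applied address, so the response reflects all learned
        patterns that are similar to the input."""

        def __init__(self, n_inputs, radius=1):
            self.n = n_inputs
            self.radius = radius
            self.memory = set()           # addresses written during training

        def train(self, bits):
            self.memory.add(tuple(bits))  # standard RAM write

        def recall(self, bits):
            # Count stored addresses within the radius and normalize, so
            # the output grades similarity instead of demanding an exact hit.
            hits = sum(1 for addr in self.memory
                       if sum(a != b for a, b in zip(addr, bits)) <= self.radius)
            return hits / max(len(self.memory), 1)

    # An exact match and a pattern corrupted in one bit both respond:
    neuron = RadialRAMNeuron(n_inputs=4, radius=1)
    neuron.train([1, 0, 1, 1])
    print(neuron.recall([1, 0, 1, 1]))   # 1.0  (exact address)
    print(neuron.recall([1, 0, 1, 0]))   # 1.0  (within Hamming distance 1)
    print(neuron.recall([0, 1, 0, 0]))   # 0.0  (outside the region)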

Another attempt to improve logical NNs uses probabilistic logic nodes (PLNs) [alek95, 192f]. A PLN stores more than one bit of information at each address location, and the content of an address determines the probability of firing.
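
A sketch of such a node is given below, assuming the common three-state variant in which a location holds "never fire", "fire with probability 0.5" (the initial, undefined state), or "always fire"; the class name PLNNode and the simple one-step update rule are illustrative stand-ins, not the training scheme of [alek95]:

    import random

    class PLNNode:
        """Probabilistic logic node, three-state variant (illustrative).

        Each address stores a small integer interpreted as a firing
        probability: with levels=3 the contents 0, 1, 2 mean "never
        fire", "fire with probability 0.5" (the undefined state every
        location starts in), and "always fire"."""

        def __init__(self, n_inputs, levels=3):
            self.levels = levels
            self.memory = [(levels - 1) // 2] * (2 ** n_inputs)

        def _address(self, bits):
            return int("".join(str(b) for b in bits), 2)

        def fire(self, bits):
            p = self.memory[self._address(bits)] / (self.levels - 1)
            return 1 if random.random() < p else 0

        def update(self, bits, target):
            # Illustrative training step: move the stored value one level
            # toward the desired output for this address.
            a = self._address(bits)
            step = 1 if target == 1 else -1
            self.memory[a] = min(self.levels - 1, max(0, self.memory[a] + step))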

A universal architecture for logical neural networks is discussed in [zhan91]. Most logical neural networks obtain their ability to generalize from the architecture of the system and from the training procedure used. The generalization desired may differ with the problem domain, and therefore the optimal network structure may differ as well.

The paper considers two extreme architectures used in LNNs: the single-layer N-tuple RAM network and the multilayer pyramid architecture. The proposed universal architecture uses N-tuple networks as a substructure within a pyramidal architecture; a sketch of this combination follows below. The paper also investigates how methods of spreading and of training with noise can improve generalization performance. One observation was that N-tuple networks with a small number of inputs per node have difficulty coping with noise.
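
The sketch below shows one way such a combination could look: each pyramid layer is an N-tuple layer of small RAM nodes with a fixed random input mapping, and the layer outputs feed a narrower layer until a single node remains. The class names and the naive write-through training pass are assumptions made for illustration; [zhan91] does not prescribe this particular scheme.

    import random

    class RAMNode:
        """Simple n-input RAM node used as the building block."""
        def __init__(self, n):
            self.memory = [0] * (2 ** n)
        def _addr(self, bits):
            return int("".join(str(b) for b in bits), 2)
        def train(self, bits):
            self.memory[self._addr(bits)] = 1
        def recall(self, bits):
            return self.memory[self._addr(bits)]

    class PyramidOfTuples:
        """Pyramid whose layers are N-tuple layers of small RAM nodes.
        Each layer samples its inputs through a fixed random wiring and
        feeds a narrower layer, down to a single output node."""
        def __init__(self, input_size, n=2):
            self.layers = []
            width = input_size
            while width > 1:
                nodes = width // n
                taps = random.sample(range(width), nodes * n)  # fixed wiring
                self.layers.append([(RAMNode(n), taps[i*n:(i+1)*n])
                                    for i in range(nodes)])
                width = nodes

        def _forward(self, bits, train=False):
            for layer in self.layers:
                out = []
                for node, wiring in layer:
                    sub = [bits[t] for t in wiring]
                    if train:
                        node.train(sub)        # naive write-through training
                    out.append(node.recall(sub))
                bits = out
            return bits[0]

        def train(self, bits):
            self._forward(bits, train=True)

        def recall(self, bits):
            return self._forward(bits)

    net = PyramidOfTuples(input_size=8)
    net.train([1, 0, 1, 1, 0, 0, 1, 0])
    print(net.recall([1, 0, 1, 1, 0, 0, 1, 0]))   # 1 for a trained pattern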

A different approach to building better LNNs is to improve the way the inputs are connected to the RAM units [alek89a, 202ff]. One obvious method is to follow the biological model of evolution and use a genetic algorithm. The paper reports how the connections from an input retina to a logical neural network can be optimized with a genetic algorithm; the system with optimized connections proved superior to a randomly connected one.
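
As a rough sketch of this approach, a plain generational GA can evolve the retina-to-network wiring, with the fitness function assumed to build and train a weightless net from a candidate mapping and return its recognition score. The operators here (swap mutation on a permutation, truncation selection), the function names, and the toy fitness at the end are generic stand-ins, not the operators used in [alek89a]:

    import random

    def make_mapping(retina_size):
        """A connection pattern: a permutation assigning each retina
        pixel to an input position of the weightless network."""
        m = list(range(retina_size))
        random.shuffle(m)
        return m

    def mutate(mapping, rate=0.05):
        """Swap mutation keeps the mapping a valid permutation."""
        m = mapping[:]
        for i in range(len(m)):
            if random.random() < rate:
                j = random.randrange(len(m))
                m[i], m[j] = m[j], m[i]
        return m

    def evolve(retina_size, fitness, generations=50, pop_size=20):
        """Plain generational GA over connection patterns.  `fitness` is
        assumed to train a net wired by the mapping and return its
        recognition score on a validation set."""
        pop = [make_mapping(retina_size) for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(pop, key=fitness, reverse=True)
            parents = ranked[:pop_size // 2]          # truncation selection
            pop = parents + [mutate(random.choice(parents))
                             for _ in range(pop_size - len(parents))]
        return max(pop, key=fitness)

    # Toy stand-in for the real fitness (training a net and scoring it):
    # closeness of the mapping to some arbitrary target wiring.
    target = make_mapping(16)
    best = evolve(16, lambda m: sum(a == b for a, b in zip(m, target)))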

