
Selecting a Logical Network

Designing a logical neural network appears much easier than choosing all the parameters for a Backpropagation MLFFN. The most important parameter is the size of the simple processing units; for a RAM based network this is the number of memory cells per RAM. This size has a major influence on the generalization performance of the network. The filling of the network is the ratio of memory cells containing `1' to the total number of cells in the RAM. It depends directly on the number of inputs to the RAM, and therefore on the RAM size, and on the training set used.
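The filling can be made concrete with a small sketch. The following Python class is an added illustration (the name RAMUnit and its methods are illustrative, not from the original text): each distinct subpattern of input bits addresses one one-bit memory cell, training writes a `1' into the addressed cell, and the filling is the fraction of cells containing `1'.

    class RAMUnit:
        """Minimal sketch of a single RAM node in a logical neural network."""

        def __init__(self, n_inputs):
            self.n_inputs = n_inputs
            self.cells = [0] * (2 ** n_inputs)    # RAM size is 2**n_inputs

        def _address(self, subpattern):
            # Interpret the tuple of input bits as a binary address,
            # e.g. (1, 0) -> cell 2.
            addr = 0
            for bit in subpattern:
                addr = (addr << 1) | bit
            return addr

        def train(self, subpattern):
            # Learning writes a 1 into the addressed cell.
            self.cells[self._address(subpattern)] = 1

        def respond(self, subpattern):
            # Recall reads the addressed cell back.
            return self.cells[self._address(subpattern)]

        def filling(self):
            # Ratio of cells containing 1 to the RAM size.
            return sum(self.cells) / len(self.cells)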

If the number of inputs is small, the filling will be very high. Consider an extreme case: a unit with two inputs, and therefore a RAM size of four. If all four possible subpatterns occur in the training set, the filling will be 100%.
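Using the sketch above, this extreme case looks as follows:

    unit = RAMUnit(n_inputs=2)                # RAM size 2**2 = 4
    for subpattern in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        unit.train(subpattern)

    print(unit.filling())                     # 1.0, i.e. 100% filling
    print(unit.respond((1, 1)))               # 1 for every possible input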

A network with a very high filling over-generalizes. In the extreme case of 100% filling, the discriminator returns a response of `1' for every input pattern. This problem often appears when the training set is large or the data is very diverse.

Conversely, a RAM unit with a large number of inputs is likely to have a low filling. For example, with 16 inputs there are 2^16 = 65536 memory cells. To achieve an adequate filling in this case, a large number of training examples is needed. Furthermore, different input vectors are much more likely to share small subpatterns than large ones.
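The effect can be quantified with a simple occupancy estimate (an added illustration, not a result from the original text): if the training subpatterns address cells roughly uniformly at random, the expected filling of a RAM with k inputs after N distinct subpatterns is about 1 - (1 - 2^-k)^N.

    def expected_filling(n_inputs, n_subpatterns):
        # Expected fraction of cells set to 1 after storing
        # n_subpatterns subpatterns drawn uniformly at random.
        ram_size = 2 ** n_inputs
        return 1.0 - (1.0 - 1.0 / ram_size) ** n_subpatterns

    # With 16 inputs (65536 cells), even 10000 random subpatterns
    # leave the filling low:
    print(expected_filling(16, 10000))        # about 0.14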

If the filling is low, the generalization performance is poor. In theory, if the input size of a unit equals the dimension of the input vector, there is no generalization at all; the vectors are simply stored.

The choice of training vectors can be important, too. The simple method of using one half of the data set to train the network and the other half to test its performance may not always be appropriate, particularly for large data sets. Statistical methods or a genetic algorithm can be used to select the most useful subset for training the network, as sketched below.
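A genetic algorithm for this selection could look roughly as follows. This is a minimal sketch: the function fitness is assumed to train the network on the given subset of indices and return its accuracy on the remaining data, and all parameter values are illustrative defaults, not recommendations from the original text.

    import random

    def select_training_subset(n_total, subset_size, fitness,
                               pop_size=20, generations=30):
        # Evolve subsets of data-set indices; fitness(indices) is assumed
        # to train the network on these indices and return its accuracy
        # on the remaining data.

        def random_individual():
            return frozenset(random.sample(range(n_total), subset_size))

        def crossover(a, b):
            # A child draws its indices from the union of both parents.
            return frozenset(random.sample(list(a | b), subset_size))

        def mutate(ind):
            # Swap one index for a fresh one.
            ind = set(ind)
            ind.remove(random.choice(list(ind)))
            while True:
                candidate = random.randrange(n_total)
                if candidate not in ind:
                    ind.add(candidate)
                    return frozenset(ind)

        population = [random_individual() for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(population, key=fitness, reverse=True)
            parents = ranked[: pop_size // 2]     # keep the fitter half
            children = [mutate(crossover(random.choice(parents),
                                         random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=fitness)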

The best generalization performance for real-world data is achieved with a filling in the range of 30% to 50%. For most applications this can be realized with a RAM size of 256 (eight inputs) [band96].
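Inverting the occupancy estimate given earlier suggests how many distinct random subpatterns per RAM such a filling corresponds to; this back-of-the-envelope calculation is an added illustration, not part of the cited result.

    import math

    def subpatterns_for_filling(n_inputs, target_filling):
        # Invert the occupancy estimate: number of distinct random
        # subpatterns needed to reach the target filling.
        ram_size = 2 ** n_inputs
        return math.log(1.0 - target_filling) / math.log(1.0 - 1.0 / ram_size)

    for f in (0.3, 0.5):
        print(f, round(subpatterns_for_filling(8, f)))    # about 91 and 177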

