
Experimental Evaluation

  To evaluate the proposed modular neural network, a number of tests were carried out, examining its ability to memorize different data sets and to generalize from them. A variety of data sets were used to establish a framework for the possible application domain of the proposed architecture.

A minimum criterion for the experimental evaluation of neural network learning algorithms is: ``An Algorithm evaluation is called acceptable if it uses a minimum of two real or realistic problems and compares the results to those of at least one alternative algorithm.'' [prec95, p. 227]. The experimental studies carried out during the project were designed to go well beyond this guideline.

The new architecture was compared to two other well-known and well-researched network types: logical neural networks and multilayer feedforward networks (MLFFNs) trained with backpropagation (BP). For data sets represented in binary form the comparison was made with both reference networks; for continuous data the comparison was made with the MLFFN only.

A number of different real-world data sets were used. Some were based on binary input variables; others were coded with continuous values. The input dimension varied from eight to 6750, and the number of instances in the different training sets was between four and 384.

A simulation is always based on a finite number of test cases, so it is difficult to draw general conclusions from the tests. This problem is particularly apparent when comparing different networks: a reference network with a different learning parameter or a different number of hidden layers might have given a better result, or a different number of inputs to each module in the modular network might have increased its performance.

Both the BP network and the modular neural network have a number of variables and parameters that may be changed for a test. To estimate the number of test networks required for a comprehensive study, consider the following variables:

This would result in:

5 × 5 × 5 × 2 × 2 × 4 × 10 = 20000

different test networks that would have to be trained and evaluated. This huge number is reached even though the number of settings for each individual variable is too low to be statistically reliable, and it covers only a single data representation. The problem of the huge number of possible networks is addressed in Section 8.1.
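The combinatorial explosion above is simply the product of the option counts for each free variable. As a minimal sketch (the option counts below are taken from the product in the text; the variable names are illustrative assumptions, not the thesis's actual list):

```python
from math import prod

# Hypothetical option counts per free variable, reproducing the
# 5 x 5 x 5 x 2 x 2 x 4 x 10 combination from the text.
option_counts = {
    "variable_1": 5,
    "variable_2": 5,
    "variable_3": 5,
    "variable_4": 2,
    "variable_5": 2,
    "variable_6": 4,
    "variable_7": 10,
}

total_networks = prod(option_counts.values())
print(total_networks)  # 20000
```

Every additional variable, or additional setting per variable, multiplies this total, which is why an exhaustive comparison quickly becomes infeasible.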

Because of this, a different test strategy was used: each experiment focused on a particular issue while all other variables were kept constant.

In some of the experiments the optimal solution for the MLFFN was found using the program opti, which was developed during an in-course project at the MMU [schm96]. It is based on the BP algorithm and searches for the optimal set of weights for a given training and test set. The basic idea of the program is to check the performance on the test set after each training cycle, to prevent the network from overfitting.
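The idea behind opti is what is now commonly called early stopping: after every training cycle the test-set error is measured, and the weights with the lowest test error seen so far are kept. The following is a minimal sketch of that idea, not the original program, using plain gradient descent on a toy one-dimensional regression task (the function name and data are illustrative assumptions):

```python
import random

def fit_with_early_stopping(train, test, epochs=200, lr=0.05):
    """Fit y = w*x + b by gradient descent, snapshotting the weights
    that score best on the test set after each training cycle."""
    w, b = 0.0, 0.0
    best_err, best_w, best_b = float("inf"), w, b
    for _ in range(epochs):
        for x, y in train:                       # one training cycle
            err = (w * x + b) - y
            w -= lr * err * x                    # gradient step on w
            b -= lr * err                        # gradient step on b
        # check generalisation after the cycle, as opti does
        test_err = sum(((w * x + b) - y) ** 2 for x, y in test) / len(test)
        if test_err < best_err:                  # keep best snapshot
            best_err, best_w, best_b = test_err, w, b
    return best_err, best_w, best_b

random.seed(0)
true = lambda x: 2.0 * x + 1.0                   # underlying target
train = [(x, true(x) + random.gauss(0, 0.1)) for x in (0.0, 0.2, 0.4, 0.6)]
test = [(x, true(x) + random.gauss(0, 0.1)) for x in (0.1, 0.5, 0.9)]
best_err, best_w, best_b = fit_with_early_stopping(train, test)
print(best_w, best_b)
```

The returned weights are those that generalized best, rather than those at the end of training, which is exactly the safeguard against overfitting described above.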


Albrecht Schmidt
Wed Oct 4 16:45:34 CEST 2000