In this experiment the task was to memorize five pictures of different faces. Each gray-level picture had a size of 75 by 90 pixels, giving 6750 continuous input variables. The gray value (0-255) of each pixel was converted into a normalized value. The original pictures are from [pict96]. After training, the recognition performance of the MLP and the modular network was tested with distorted pictures.
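A minimal sketch of this preprocessing step, assuming the gray values are scaled into the unit interval [0, 1] (the exact target interval is an assumption here, as is the function name):

```python
# Hypothetical sketch: flatten an 8-bit grayscale picture into a
# normalized input vector for the network. The scaling to [0, 1]
# via division by 255 is an assumption, not confirmed by the text.
def normalize_picture(pixels, width=75, height=90):
    # 75 x 90 pixels yield the 6750 continuous input variables
    assert len(pixels) == width * height
    return [p / 255.0 for p in pixels]

# Example: three sample gray values padded to a full picture
inputs = normalize_picture([0, 128, 255] + [0] * 6747)
print(inputs[:3])
```

Each picture is thus presented to the network as one long vector of values between 0 and 1 rather than as a two-dimensional image.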
No comparison with the logical neural network was made because continuous input values were used.
Figure 7.6: The Original Pictures used as Training Set.
In Figure 7.6 the five original pictures are shown. This set served as the training set for the following two tests. Training was stopped once the propagated error was sufficiently small. Both network types memorized the training set very well: for all patterns the response was below 0.1 for a desired `0' and above 0.9 for a desired `1'.
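The acceptance criterion just stated can be written down directly. The following sketch checks one network response against a one-hot target; the function name and thresholds-as-parameters are illustrative assumptions:

```python
# Hypothetical sketch of the acceptance test: every output must lie
# below 0.1 for a desired '0' and above 0.9 for a desired '1'.
def memorized(outputs, targets, low=0.1, high=0.9):
    return all(
        (o < low) if t == 0 else (o > high)
        for o, t in zip(outputs, targets)
    )

# One-hot target for the third of the five faces:
targets = [0, 0, 1, 0, 0]
print(memorized([0.02, 0.05, 0.97, 0.01, 0.08], targets))  # True
print(memorized([0.02, 0.15, 0.97, 0.01, 0.08], targets))  # False
```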
The training time for the modular network was much shorter than for the monolithic MLP. This is easily explained by the number of weights in each network. Table 7.5 gives an overview of the training times.
It can be seen that the training time needed to reach a fixed error value does not depend directly on the number of weights; if the number of training steps is fixed, however, the time is proportional to the number of weights.
In this experiment training was stopped when the root mean square error of the network fell below 0.01. A larger number of nodes may learn the data set in fewer training cycles, so that the overall time is shorter; this effect can be seen by comparing the corresponding entries in Table 7.5.
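The stopping criterion can be sketched as follows. This computes the root mean square error over all outputs of all training patterns and compares it with the 0.01 threshold; the function name and the exact averaging (over every output of every pattern) are assumptions:

```python
import math

# Hypothetical sketch of the stopping criterion: training continues
# until the root mean square error over all outputs and all patterns
# drops below 0.01.
def rms_error(outputs, targets):
    n = sum(len(t) for t in targets)
    squared = sum((o - t) ** 2
                  for out_vec, tgt_vec in zip(outputs, targets)
                  for o, t in zip(out_vec, tgt_vec))
    return math.sqrt(squared / n)

# Two toy patterns whose outputs each deviate by 0.005 from the target:
outs = [[0.005, 0.995], [0.995, 0.005]]
tgts = [[0.0, 1.0], [1.0, 0.0]]
err = rms_error(outs, tgts)
print(err < 0.01)  # True: training would stop here
```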