Figure 7.5: Examples of good and faulty Pictures of Wood.
The wood data set was generated to test the ability of a network to generalize over a large number of continuous inputs.
To generate the data set, a board of wood was scanned in as a gray-scale picture. A program, SelectPic, was then written to select sub-pictures of a given size from the original picture. These sub-pictures were then classified manually into the two categories.
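The extraction step can be sketched as follows. This is a minimal illustration in the spirit of SelectPic, assuming the scanned board is available as a 2-D gray-scale array; the function name, patch size, and stride are illustrative, not taken from the original program.

```python
import numpy as np

def select_subpictures(board, size=30, stride=30):
    """Cut non-overlapping size x size patches out of a gray-scale image."""
    patches = []
    rows, cols = board.shape
    for r in range(0, rows - size + 1, stride):
        for c in range(0, cols - size + 1, stride):
            patches.append(board[r:r + size, c:c + size])
    return patches

# Example: a stand-in 90 x 120 "scan" yields 3 * 4 = 12 patches of 30 x 30.
board = np.random.rand(90, 120)
patches = select_subpictures(board)
print(len(patches), patches[0].shape)  # 12 (30, 30)
```

Each extracted patch would then be labeled by hand as good or faulty.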
The basic task is to discriminate pictures of wood, determining whether they are faulty or not. The data set consists of 100 gray-scale pictures of size 30 by 30 pixels; the gray level of each pixel is coded as a value between 0 and 1. The output is either `good' or `faulty'. The result is a data set with 100 instances, each having 900 continuous inputs and two output classes. Figure 7.5 shows examples of training and test pictures.
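A hypothetical encoding of this data set as network input might look as follows; the arrays here are random stand-ins for the real scans and hand-assigned labels, shown only to make the 100 x 900 input shape concrete.

```python
import numpy as np

# 100 pictures of 30 x 30 gray values in [0, 1], each flattened to
# 900 continuous inputs, with a binary label (0 = good, 1 = faulty).
n_pictures, height, width = 100, 30, 30
pictures = np.random.rand(n_pictures, height, width)   # stand-in for real scans
labels = np.random.randint(0, 2, size=n_pictures)      # stand-in for manual labels

X = pictures.reshape(n_pictures, height * width)       # 100 x 900 input matrix
print(X.shape)  # (100, 900)
```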
The modular structure learnt the examples in a very short time and achieved a performance of 100% on the training set and 92% on the test set. These results were achieved with different system structures.
It was very difficult to find a parameter set for the monolithic MLFFN that converged on this data set. After many tests (about 40 different configurations), a network was found that performed very well. A small learning constant was necessary to ensure a successful learning process. The best network likewise achieved 100% on the training set and 92% on the test set.
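The role of the learning constant can be illustrated with a minimal sketch, assuming plain gradient descent on a single logistic output unit rather than the original MLFFN; all data and the learning-rate value are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((100, 900))           # stand-in for the 100 flattened pictures
y = rng.integers(0, 2, size=100)     # stand-in for good/faulty labels

def train(eta, steps=200):
    """Gradient descent on a logistic unit; eta is the learning constant."""
    w = np.zeros(900)
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # logistic activation
        grad_w = X.T @ (p - y) / len(y)          # cross-entropy gradient
        grad_b = np.mean(p - y)
        w -= eta * grad_w
        b -= eta * grad_b
    return np.mean((p > 0.5) == y)               # training-set accuracy

# With 900 highly correlated inputs, a large eta makes the updates unstable,
# while a small eta descends smoothly (the value below is illustrative).
print(train(eta=0.01))
```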
The time difference between training the MLP and the modular NN was significant. The fastest monolithic network trained successfully in about ten minutes, whereas the average training time for the modular neural networks was two and a half minutes; this is a speed-up by a factor of four.
The significant gain in training speed can be explained by the number of weights in the different networks, see Table 7.4.
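The weight counts behind this comparison follow from the standard formula for fully connected feed-forward networks; the small helper below computes them for an assumed 900-10-2 architecture (the hidden-layer size is illustrative, the actual figures are in Table 7.4).

```python
def mlp_weight_count(layers):
    """Weights (incl. one bias per unit) in a fully connected feed-forward net."""
    return sum((fan_in + 1) * fan_out
               for fan_in, fan_out in zip(layers, layers[1:]))

# Even 10 hidden units on 900 inputs already give ~9000 trainable weights:
# (900 + 1) * 10 + (10 + 1) * 2 = 9032
print(mlp_weight_count([900, 10, 2]))  # 9032
```

Since every training epoch touches every weight, a monolithic network over all 900 inputs has far more to adjust per pass than the smaller modules.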