The term Multiple Neural Networks is used for strongly separated architectures: each network works independently on its own domain, and each is built and trained for its specific task. The final decision is based on the results of the individual networks, often called expert networks or agents. The decision system can be implemented in many different ways; depending on the problem, a simple logical majority vote function, another neural network, or a rule-based expert system may be employed.
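The simplest of the decision systems mentioned above, a logical majority vote over the expert outputs, can be sketched as follows (a minimal illustration; the function name and labels are hypothetical, not from the text):

```python
from collections import Counter

def majority_vote(expert_outputs):
    """Return the class label proposed by the most experts
    (a simple stand-in for the decision system)."""
    counts = Counter(expert_outputs)
    label, _ = counts.most_common(1)[0]
    return label

# Three experts classify the same object; the majority label wins.
print(majority_vote(["person_A", "person_A", "person_B"]))  # → person_A
```

A rule-based expert system or a trained decision network would replace `majority_vote` when the experts' outputs need to be weighted rather than counted.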
Multiple NNs are used when different information sources (e.g. different sensors) are available to provide information on one object. Another approach is to preprocess the input data in different ways (e.g. with different filters) and train a separate network on each preprocessed input. The networks within the multiple network system can be of any architecture.
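The preprocessing variant can be sketched as follows; the two filters are illustrative stand-ins (simple moving-average and difference filters), not taken from the text:

```python
def lowpass(signal):
    """Two-point moving average: smooths the signal (illustrative filter)."""
    return [(a + b) / 2 for a, b in zip(signal, signal[1:])]

def highpass(signal):
    """Two-point difference: emphasizes changes (illustrative filter)."""
    return [(b - a) / 2 for a, b in zip(signal, signal[1:])]

# Each filter produces its own view of the same raw input; one expert
# network would then be trained on each preprocessed stream.
raw = [1.0, 3.0, 2.0, 4.0]
streams = [f(raw) for f in (lowpass, highpass)]
print(streams)  # → [[2.0, 2.5, 3.0], [1.0, -0.5, 1.0]]
```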
Each network can be seen as an expert on its domain and is trained on that domain only; its output therefore reflects only its specific input. In the example given in Figure 4.5(a), one network is trained to identify a person by voice while the other is trained to identify the person by vision.
The outputs of the expert networks are the input data of the decision network, which is trained after the expert networks have been trained. The decision is thus made from the outputs of the experts, not directly from the input data. For further examples of such network architectures see [patt96, pp. 17, 235].
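The two-stage training described above can be sketched with a single logistic unit standing in for the decision network. The expert confidence scores below are invented for illustration; a real setup would take them from the already-trained voice and vision experts:

```python
import math

def train_decision_unit(samples, labels, lr=0.5, epochs=500):
    """Train one logistic unit on expert outputs (a minimal sketch of
    the decision stage, trained after the experts are fixed)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the cross-entropy loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# Each sample: (voice-expert score, vision-expert score); label: 1 if
# both refer to the same person. Scores are hypothetical.
samples = [(0.9, 0.8), (0.2, 0.1), (0.8, 0.3), (0.1, 0.7)]
labels = [1, 0, 1, 0]
w, b = train_decision_unit(samples, labels)
```

Note that the decision unit never sees the raw voice or image data, only the expert scores, which matches the architecture described above.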
The term Modular Neural Networks (MNN) is rather fuzzy and is used for many different structures; everything that is not monolithic seems to be called modular. In the next paragraphs two types of modular NNs are briefly introduced.
One idea of a modular neural network architecture is to build a bigger network by using modules as building blocks.
All modules are neural networks. The architecture of a single module is simpler and the sub-networks are smaller than a monolithic network. Due to this structural decomposition, the task a module has to learn is in general easier than the overall task of the network, which makes a single module easier to train.
In a further step the modules are connected into a network of modules rather than a network of neurons. The modules are independent to a certain degree, which allows the system to work in parallel.
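The parallel operation of largely independent modules can be sketched as follows; the two stand-in modules and their toy computations are hypothetical, and a control stage would consume the merged result:

```python
from concurrent.futures import ThreadPoolExecutor

# Each "module" is an independent callable; names and computations are
# illustrative stand-ins for trained sub-networks.
def edge_module(image):
    return {"edges": sum(image)}

def color_module(image):
    return {"color": max(image)}

def run_modules(image, modules):
    """Run independent modules in parallel and collect their outputs
    for a downstream control stage (assumes modules share no state)."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda m: m(image), modules)
    merged = {}
    for r in results:
        merged.update(r)
    return merged

print(run_modules([3, 1, 2], [edge_module, color_module]))
# → {'edges': 6, 'color': 3}
```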
For this modular approach a control system is always necessary to enable the modules to work together in a useful way.
Another idea of modularity is a network that is not fully connected. In this model the structure is more difficult to analyze; for an example see Figure 4.5(b). A clear division between modules cannot be made; a module is instead seen as a part of the network that is locally fully connected. This modular approach is biologically quite plausible. Experiments with this structure are described in [boer92].
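The notion of locally fully connected regions can be made concrete with a connectivity mask: units are fully connected inside each module and disconnected across modules. This is a minimal sketch under that assumption; sparse cross-module links would simply set additional entries to 1:

```python
def block_mask(module_sizes):
    """Build a connectivity mask for a not-fully-connected layer:
    mask[i][j] = 1 only if units i and j belong to the same module."""
    n = sum(module_sizes)
    mask = [[0] * n for _ in range(n)]
    start = 0
    for size in module_sizes:
        for i in range(start, start + size):
            for j in range(start, start + size):
                mask[i][j] = 1
        start += size
    return mask

# Two modules of 2 units each: connections exist only within a module.
for row in block_mask([2, 2]):
    print(row)
# → [1, 1, 0, 0]
#   [1, 1, 0, 0]
#   [0, 0, 1, 1]
#   [0, 0, 1, 1]
```

During training, such a mask would be multiplied element-wise with the weight matrix so that absent connections stay absent.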