
Introduction

The Multilayer Perceptron (MLP) trained by the Backpropagation (BP) algorithm has been used to solve real-world problems in prediction, recognition, and optimization.

If the input dimension is small, the network can be trained very quickly. For large input spaces, however, the performance of the BP algorithm decreases [3]. In many cases it becomes difficult to find a parameter set that leads to convergence towards an acceptable minimum. This often makes it very hard to find a useful solution, especially in recognition tasks, where large input spaces are common.

Considerable research is being done to overcome these problems; many of the proposed approaches include modularity as a basic concept.

In [4], a locally connected adaptive modular neural network is described. This model employs a combination of BP training and a Winner-Take-All layer.

A modular neural system combining a self-organizing map and a Multilayer Perceptron is presented in [2]. It is applied in a cosmic-ray space experiment.

In this paper, a modular neural network is proposed to enhance the generalization ability of neural networks for high-dimensional inputs. The network consists of several MLPs, each trained by the BP algorithm. The number of weight connections in the proposed architecture is significantly smaller than in a comparable monolithic network.
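To make the weight-count argument concrete, the following Python sketch compares the number of weight connections in a monolithic MLP with those in a modular decomposition. The input dimension, the number of modules, the layer sizes, and the integrating network are illustrative assumptions for this sketch only, not the architecture described in this paper.

def mlp_weights(layers):
    """Weight connections (including biases) of a fully connected MLP."""
    return sum((fan_in + 1) * fan_out
               for fan_in, fan_out in zip(layers, layers[1:]))

input_dim = 64     # assumed high-dimensional input
num_modules = 8    # assumed decomposition into 8 disjoint sub-vectors

# Monolithic network: a single MLP over the full input vector.
monolithic = mlp_weights([input_dim, 64, 10])

# Modular network: each module sees only input_dim / num_modules inputs;
# a small integrating MLP combines the module outputs.
per_module = mlp_weights([input_dim // num_modules, 8, 4])
integrator = mlp_weights([num_modules * 4, 16, 10])
modular = num_modules * per_module + integrator

print(monolithic)  # 4810
print(modular)     # 1562

Under these assumptions, restricting each module to a small slice of the input removes most of the input-to-hidden connections, which dominate the weight count when the input dimension is large.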

The remainder of the paper introduces the modular architecture, gives a training algorithm, describes the operation of the network, and presents experimental results.


