An interesting investigation of the relation between structure and function of modular neural networks is given in [happ94].
The article ``Design and Evolution of Modular Neural Network Architectures'' examines the structural evidence for a modular architecture in the human brain presented by psychologists, biologists, and neurologists. Several levels of modularity in the brain are described, and human multitasking abilities and disabilities are explained by the modular and parallel structure of the brain.
Happel and Murre conclude ``[...] that the nature of information processing in the brain is modular. Individual functions are broken up into subprocesses that can be executed in separate modules without mutual interference.'' [happ94, p. 984]. They further speculate that modules are subdivided into submodules and tasks into subtasks down to a very basic level, and that the modular architecture of the brain, which developed over a long evolutionary process, may be the key to this division of complex tasks into subtasks.
They suggest building artificial neural networks whose architecture mirrors the modular structure of the brain. Such architectures may then increase a network's ability to solve more complex real-world problems.
The authors point out that most well-established ANNs have no pre-imposed modular structure. For high-dimensional input spaces, such networks suffer from convergence problems and greatly increased training effort. They also note that generalization ability decreases in very large ANNs.
Following this motivation for a modular architecture, a new network structure is introduced. Its basic building block is the CALM (Categorization And Learning Module), which operates on a competitive and unsupervised basis and is able to sort input patterns into distinct categories. For a very detailed description of the CALM see [murr92, p. 15ff].
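As a rough illustration of this kind of module (not the actual CALM, whose internal node types and learning dynamics are described in [murr92]), the following sketch shows plain winner-take-all competitive learning, which likewise forms categories from input patterns without supervision; all function names and parameters here are our own:

```python
# Minimal winner-take-all competitive learner (illustrative sketch only;
# the real CALM has a much richer internal structure, see [murr92]).

def categorize(weights, x):
    """Return the index of the weight vector closest to input x."""
    dists = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in weights]
    return dists.index(min(dists))

def train(inputs, n_categories, lr=0.3, epochs=20):
    # Deterministic init: seed each category with one of the first inputs.
    weights = [list(inputs[i]) for i in range(n_categories)]
    for _ in range(epochs):
        for x in inputs:
            k = categorize(weights, x)            # competition: pick the winner
            weights[k] = [wi + lr * (xi - wi)     # move only the winner toward x
                          for wi, xi in zip(weights[k], x)]
    return weights

# Two well-separated clusters end up in different categories.
data = [(0.0, 0.1), (0.1, 0.0), (0.9, 1.0), (1.0, 0.9)]
w = train(data, n_categories=2)
```

Because only the winning unit is updated, each weight vector drifts toward the centre of one cluster of inputs, which is the essence of unsupervised categorization.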
The article also discusses the process of adaptation and the basic biological principles underlying it.
In conclusion, the evolutionary perspective suggests the use of genetic algorithms to optimize the structure of a modular neural network. For a further investigation of this issue see [boer92].
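To make the idea concrete, here is a minimal genetic-algorithm sketch in which each genome is a bit string encoding which inter-module connections exist. The encoding, the stand-in fitness function, and all names are our own illustrative assumptions, not the method of [boer92]; a real system would train each candidate network and score its error:

```python
import random

# Minimal GA sketch: a genome is a bit string over possible inter-module
# connections.  The fitness below merely rewards matching a hypothetical
# "good" connectivity mask; it stands in for evaluating a trained network.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]            # hypothetical ideal mask

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]         # one-point crossover
            if rng.random() < 0.1:            # point mutation: flip one bit
                i = rng.randrange(len(TARGET))
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Since the parents (including the current best genome) survive into the next generation, the best fitness in the population can never decrease.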
The evidence for modular design in natural neural systems presented in this article, and in several of the references it cites, is sufficiently encouraging to develop and test ideas of modular structures within the field of artificial neural computing.
The problems the authors identify in large conventional ANNs, such as multilayer feedforward networks or Hopfield networks, are very significant, especially for the simulation of large brain-like structures. The modular approach seems very promising for tackling these problems.
Learning and forgetting as a mechanism for developing modules is described in [ishi95]. A technique called ``learning of modular structured networks'' is proposed to overcome the problem of training large-scale networks. Learning proceeds in two stages: first, the connections within the small modules are learned; then the inter-modular connections are established. Modules that have already learned a function can also be reused elsewhere within the network.
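The two-stage idea can be sketched on a toy task; the task, the module split, and all names below are our own illustration, not the actual procedure of [ishi95]. Two small perceptron modules first learn AND and OR separately, are then frozen, and only the combining (inter-modular) weights are trained, which is enough to solve XOR:

```python
# Two-stage training sketch for a modular network (illustrative only).
# Stage 1 trains each small module on its own subtask; stage 2 freezes
# the modules and trains only the inter-modular combining weights.

def step(z):
    return 1 if z > 0 else 0

def perceptron_train(samples, n_in, lr=0.5, epochs=25):
    w, b = [0.0] * n_in, 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = step(sum(wi * xi for wi, xi in zip(w, x)) + b)
            w = [wi + lr * (t - y) * xi for wi, xi in zip(w, x)]
            b += lr * (t - y)
    return w, b

bits = [(0, 0), (0, 1), (1, 0), (1, 1)]

# Stage 1: train the AND module and the OR module separately.
w_and, b_and = perceptron_train([(x, x[0] & x[1]) for x in bits], 2)
w_or,  b_or  = perceptron_train([(x, x[0] | x[1]) for x in bits], 2)

def modules(x):  # frozen after stage 1
    a = step(w_and[0] * x[0] + w_and[1] * x[1] + b_and)
    o = step(w_or[0] * x[0] + w_or[1] * x[1] + b_or)
    return (a, o)

# Stage 2: train only the combining weights on the modules' outputs (XOR).
w_top, b_top = perceptron_train([(modules(x), x[0] ^ x[1]) for x in bits], 2)

def network(x):
    m = modules(x)
    return step(w_top[0] * m[0] + w_top[1] * m[1] + b_top)
```

The frozen AND and OR modules could equally be reused in another network, mirroring the paper's point that trained modules can be employed elsewhere.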
Modularity beyond the level of single neurons is demonstrated in [svan92]. The basic units in this approach are blocks of synapses, blocks of neurons, and a control block. The system is realized as a hardware implementation and is more flexible than a model built from neurons alone.