Connectionist artificial neural networks are an approach to neural computing that uses simple interconnected processors, called neurons, to form a simplified model of the structures found in the biological nervous system.
Such networks learn from examples: they are trained to solve problems rather than explicitly programmed, and they can adapt to a changing environment.
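A minimal sketch can make "trained rather than programmed" concrete. The following hypothetical example (not from the cited works) trains a single threshold neuron on the logical AND function with the classical perceptron learning rule; the weights are adapted from examples instead of being set by hand.

```python
# Hypothetical illustration: a single threshold neuron learns the logical
# AND function from examples via the perceptron learning rule.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection weights
    b = 0.0          # bias (negative threshold)
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
            err = target - out
            # adapt weights and bias towards the training examples
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b    += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
outputs = [1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0 for x, _ in AND]
# outputs == [0, 0, 0, 1]: the AND function has been learned, not programmed
```

Because AND is linearly separable, the perceptron convergence theorem guarantees that this procedure finds a correct weight setting after finitely many updates.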
The connectionist approach is strongly inspired by biology and psychology. The beginnings of this field can be traced to the works of McCulloch and Pitts ([mccu43]), Hebb ([hebb49]), and Rosenblatt ([rose58]), which introduced the idea of parallel, interconnected neural systems built from simple units.
However, single-layer networks of simple neurons have fundamental limitations. These limitations, in particular the inability to classify sets that are not linearly separable (the XOR problem being the canonical example), were analysed by Minsky and Papert ([mins69]). The publication of these results led to a significant decline in neural network research.
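The XOR limitation can be demonstrated directly. In this illustrative sketch (the trainer and parameter values are assumptions, not taken from [mins69]), the perceptron learning rule is applied to the XOR patterns; since no single line in the input plane separates the positive from the negative examples, no weight/bias setting classifies all four patterns, and training never converges.

```python
# Illustrative sketch: a single threshold neuron cannot learn XOR,
# because XOR is not linearly separable.
def predict(w, b, x):
    return 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0

def train(samples, epochs, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, t in samples:
            err = t - predict(w, b, x)
            w = [w[0] + lr*err*x[0], w[1] + lr*err*x[1]]
            b += lr * err
    return w, b

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w, b = train(XOR, epochs=1000)
errors = sum(predict(w, b, x) != t for x, t in XOR)
# errors >= 1 for every possible (w, b): no line separates the XOR classes
```

The impossibility is easy to verify algebraically: correct outputs would require b <= 0, w1 + b > 0, w2 + b > 0, and w1 + w2 + b <= 0, and adding the two middle inequalities contradicts the last.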
In the 1980s, advanced learning algorithms for multilayer networks (e.g. backpropagation and its derivatives) and new network architectures, such as the Self-Organizing Maps of Kohonen ([koho82]) and the ART networks of Grossberg ([gros87]), revitalized research in this area. This renewed interest persists and encompasses many different approaches; one of them is the idea of modular and multiple neural networks.
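The following sketch illustrates why a hidden layer removes the single-layer limitation. The weights here are chosen by hand for clarity rather than learned; in practice, algorithms such as backpropagation find comparable weights from examples. The hidden units re-map the XOR inputs into a representation that is linearly separable for the output unit.

```python
# Illustrative sketch (hand-chosen weights, not learned): a two-layer
# network of threshold units computes XOR, which no single-layer
# network of such units can do.
def step(s):
    return 1 if s > 0 else 0

def xor_net(x1, x2):
    h1 = step(1.0*x1 + 1.0*x2 - 0.5)    # hidden unit computing OR
    h2 = step(1.0*x1 + 1.0*x2 - 1.5)    # hidden unit computing AND
    return step(1.0*h1 - 2.0*h2 - 0.5)  # output: OR and not AND

results = [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
# results == [0, 1, 1, 0]: XOR is realized with one hidden layer
```

In the hidden-unit space, the four patterns map to (h1, h2) values (0,0), (1,0), (1,0), (1,1), which the single output unit separates with one line.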