**Figure 2.1:** An Artificial Neuron.

The artificial neuron shown in Figure 2.1 is a very simple
processing unit. The neuron has a fixed number of inputs $n$; each input is
connected to the neuron by a weighted link $w_i$. The neuron sums up the
$net$ input according to the equation
$net = \sum_{i=1}^{n} x_i w_i$, or,
expressed in vector form, $net = x^T w$.
To calculate the output, an activation function $f$ is applied to
the $net$ input of the neuron.
This function is either a simple threshold
function or a continuous nonlinear function. Two often-used activation
functions are:

$f_C(net) = \frac{1}{1 + e^{-net}}$

$f_T(net) = \begin{cases} 1 & \text{if } net > \theta \\ 0 & \text{otherwise} \end{cases}$
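As a minimal sketch of the forward pass (hypothetical helper names; the logistic function is assumed for the continuous activation $f_C$ and a step function with threshold $\theta$ for $f_T$):

```python
import math

def net_input(x, w):
    """Weighted sum of the inputs: net = sum_i x_i * w_i."""
    return sum(xi * wi for xi, wi in zip(x, w))

def f_threshold(net, theta=0.0):
    """Threshold activation f_T: fires (1) when net exceeds theta."""
    return 1 if net > theta else 0

def f_sigmoid(net):
    """Continuous nonlinear activation f_C (logistic function)."""
    return 1.0 / (1.0 + math.exp(-net))

x = [1.0, 0.5]            # input vector
w = [0.4, -0.2]           # weight vector
net = net_input(x, w)     # 0.4 - 0.1 = 0.3
print(f_threshold(net))           # 1
print(round(f_sigmoid(net), 3))   # 0.574
```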

The artificial neuron is an abstract model of the biological neuron. The strength of a connection is coded in the weight. The intensity of the input signal is modeled by using a real number instead of a temporal summation of spikes. The artificial neuron works in discrete time steps; the inputs are read and processed at one moment in time.

There are many different learning methods possible for a single neuron.
Most of the supervised methods are based on the idea of changing the weights
in a direction that decreases the difference between the calculated output
and the desired output. Examples of such rules are the *Perceptron
Learning Rule*, the *Widrow-Hoff Learning Rule*, and the *Gradient
Descent Learning Rule*.

The Gradient Descent Learning Rule operates on a differentiable activation function. The weight updates are a function of the input vector $x$, the calculated output $f(net)$, the derivative of the calculated output $f'(net)$, the desired output $d$, and the learning constant $\eta$.

$net = x^T w$

$\Delta w = \eta \, f'(net) \, (d - f(net)) \, x$

The delta rule changes the weights to minimize the error, defined as the difference between the calculated output and the desired output. The weights are adjusted for one pattern per learning step. This process is repeated with the aim of finding a weight vector that minimizes the error for the entire training set.
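Assuming the logistic function for $f$ (so that $f'(net) = f(net)\,(1 - f(net))$), the update rule above can be sketched as a training loop. The function names and the OR training set with a constant bias input are illustrative, not part of the original text:

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def train_delta(patterns, eta=0.5, epochs=2000):
    """Delta rule: w += eta * f'(net) * (d - f(net)) * x, one pattern per step."""
    n = len(patterns[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, d in patterns:
            net = sum(xi * wi for xi, wi in zip(x, w))
            y = sigmoid(net)
            grad = y * (1.0 - y)              # f'(net) for the logistic function
            for i in range(n):
                w[i] += eta * grad * (d - y) * x[i]
    return w

# OR function; the last component of each input is a constant bias input.
patterns = [([0, 0, 1], 0), ([0, 1, 1], 1), ([1, 0, 1], 1), ([1, 1, 1], 1)]
w = train_delta(patterns)
for x, d in patterns:
    y = sigmoid(sum(xi * wi for xi, wi in zip(x, w)))
    print(x[:2], d, round(y, 2))
```

Because OR is linearly separable, repeated application of the per-pattern update drives the outputs toward the desired values for the whole training set.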

A set of weights that solves a given task can only be found if the training set is linearly separable [mins69]. This limitation is independent of the learning algorithm used; it follows directly from the structure of the single neuron.

To illustrate this, consider an artificial neuron with two inputs and a
threshold activation function $f_T$; this neuron
is intended to learn the XOR-problem
(see table). It can easily be shown that there are no real
numbers $w_1$ and $w_2$ that solve the equations, and hence the neuron cannot
learn this problem.

| Input Vector | Desired Output | Weight Equation |
|--------------|----------------|-----------------|
| 0 0 | 1 | $0 \cdot w_1 + 0 \cdot w_2 > \theta \Rightarrow 0 > \theta$ |
| 1 0 | 0 | $1 \cdot w_1 + 0 \cdot w_2 < \theta \Rightarrow w_1 < \theta$ |
| 0 1 | 0 | $0 \cdot w_1 + 1 \cdot w_2 < \theta \Rightarrow w_2 < \theta$ |
| 1 1 | 1 | $1 \cdot w_1 + 1 \cdot w_2 > \theta \Rightarrow w_1 + w_2 > \theta$ |
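The contradiction in the table can also be checked numerically: a brute-force scan over candidate weights and thresholds (a sketch with an arbitrarily chosen grid; `separates_xor` is a hypothetical helper) finds no two-input threshold neuron that reproduces the desired outputs:

```python
import itertools

def separates_xor(w1, w2, theta):
    """Check whether a threshold neuron reproduces the table's desired outputs."""
    out = lambda x1, x2: 1 if x1 * w1 + x2 * w2 > theta else 0
    return (out(0, 0), out(1, 0), out(0, 1), out(1, 1)) == (1, 0, 0, 1)

vals = [i / 4 for i in range(-20, 21)]  # grid from -5 to 5 in steps of 0.25
found = any(separates_xor(w1, w2, t)
            for w1, w2, t in itertools.product(vals, repeat=3))
print(found)  # False
```

The scan is only illustrative; the table's inequalities already prove impossibility for all real weights, since $0 > \theta$ together with $w_1 < \theta$ and $w_2 < \theta$ forces $w_1 + w_2 < 2\theta < \theta$, contradicting $w_1 + w_2 > \theta$.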

Wed Oct 4 16:45:34 CEST 2000