
An Artificial Neuron

Figure 2.1: An Artificial Neuron.

The artificial neuron shown in Figure 2.1 is a very simple processing unit. The neuron has a fixed number of inputs n; each input xi is connected to the neuron by a weighted link wi. The neuron sums up its net input according to the equation net = ∑i=1..n xi wi or, expressed in vector notation, net = xT w. To calculate the output, an activation function f is applied to the net input of the neuron. This function is either a simple threshold function or a continuous nonlinear function. Two commonly used activation functions are:

fC(net) = 1 / (1 + e^(-net))

fT(net) = 1 if net > θ, otherwise 0
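To make the computation concrete, the following Python sketch (not part of the original text) evaluates a single neuron: it computes the net input as the weighted sum of the inputs and applies both activation functions. The example input vector, weight vector, and threshold θ are arbitrary illustrative values.

import math

def net_input(x, w):
    # net = sum of x_i * w_i, i.e. the dot product x^T w
    return sum(xi * wi for xi, wi in zip(x, w))

def f_c(net):
    # continuous sigmoid activation: 1 / (1 + e^(-net))
    return 1.0 / (1.0 + math.exp(-net))

def f_t(net, theta=0.0):
    # threshold activation: 1 if net > theta, otherwise 0
    return 1 if net > theta else 0

x = [0.5, 1.0, -0.3]    # example input vector (arbitrary values)
w = [0.2, 0.4, 0.1]     # example weight vector (arbitrary values)

net = net_input(x, w)
print(net, f_c(net), f_t(net))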

The artificial neuron is an abstract model of the biological neuron. The strength of a connection is coded in the weight. The intensity of the input signal is modeled by using a real number instead of a temporal summation of spikes. The artificial neuron works in discrete time steps; the inputs are read and processed at one moment in time.

There are many different learning methods for a single neuron. Most supervised methods are based on the idea of changing the weights in a direction that decreases the difference between the calculated output and the desired output. Examples of such rules are the Perceptron Learning Rule, the Widrow-Hoff Learning Rule, and the Gradient Descent Learning Rule.

The Gradient Descent Learning Rule requires a differentiable activation function. The weight update is a function of the input vector x, the calculated output f(net), the derivative of the calculated output f'(net), the desired output d, and the learning constant η.

net = xT w

Δw = η f'(net) (d - f(net)) x

The delta rule changes the weights so as to minimize the error, which is defined as the difference between the calculated output and the desired output. The weights are adjusted for one pattern per learning step. This process is repeated with the aim of finding a weight vector that minimizes the error over the entire training set.
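A minimal sketch of this training process in Python, assuming the sigmoid activation fC (whose derivative is f(net)(1 - f(net))), a hypothetical training set for the logical AND function, and an arbitrary learning constant η = 0.5 with a fixed number of passes over the data. A constant bias input is added as the first component of each pattern so that the neuron can realise the threshold as an ordinary weight; this is a common extension, not something stated in the text above.

import math

def f(net):
    # sigmoid activation f_C(net) = 1 / (1 + e^(-net))
    return 1.0 / (1.0 + math.exp(-net))

def f_prime(net):
    # derivative of the sigmoid: f(net) * (1 - f(net))
    s = f(net)
    return s * (1.0 - s)

# Hypothetical training set: logical AND.
# The first component of each input is a constant bias input of 1.
patterns = [([1.0, 0.0, 0.0], 0.0),
            ([1.0, 0.0, 1.0], 0.0),
            ([1.0, 1.0, 0.0], 0.0),
            ([1.0, 1.0, 1.0], 1.0)]

eta = 0.5                      # learning constant (arbitrary choice)
w = [0.0, 0.0, 0.0]            # initial weight vector

for epoch in range(5000):      # repeat the per-pattern updates many times
    for x, d in patterns:
        net = sum(xi * wi for xi, wi in zip(x, w))
        scale = eta * f_prime(net) * (d - f(net))
        # delta rule: w <- w + eta * f'(net) * (d - f(net)) * x
        w = [wi + scale * xi for wi, xi in zip(w, x)]

for x, d in patterns:
    net = sum(xi * wi for xi, wi in zip(x, w))
    print(x[1:], d, round(f(net), 2))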

A suitable set of weights can only be found if the training set is linearly separable [mins69]. This limitation is independent of the learning algorithm used; it follows directly from the structure of the single neuron.

To illustrate this, consider an artificial neuron with two inputs and the threshold activation function fT that is intended to learn the XOR problem (the table below lists the complementary XNOR function, which is equally not linearly separable). The resulting weight equations cannot all be satisfied by any real numbers w1, w2, and θ, and hence the neuron cannot learn this problem.

Input Vector   Desired Output   Weight Equation
(0, 0)         1                0·w1 + 0·w2 > θ   ⇒   0 > θ
(1, 0)         0                1·w1 + 0·w2 < θ   ⇒   w1 < θ
(0, 1)         0                0·w1 + 1·w2 < θ   ⇒   w2 < θ
(1, 1)         1                1·w1 + 1·w2 > θ   ⇒   w1 + w2 > θ

The first row forces θ < 0, and the second and third rows force w1 < θ and w2 < θ; therefore w1 + w2 < 2θ < θ, which contradicts the requirement w1 + w2 > θ in the last row.
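The following brute-force sketch in Python illustrates this (it is an illustration, not a proof): it scans a coarse grid of candidate values for w1, w2, and θ and reports the largest number of rows of the table that any single combination can satisfy, which turns out to be three.

def f_t(net, theta):
    # threshold activation: fires (1) only if the net input exceeds theta
    return 1 if net > theta else 0

# the four rows of the table above: ((x1, x2), desired output)
table = [((0, 0), 1), ((1, 0), 0), ((0, 1), 0), ((1, 1), 1)]

values = [i / 10.0 for i in range(-20, 21)]   # candidate values -2.0 ... 2.0
best = 0
for w1 in values:
    for w2 in values:
        for theta in values:
            correct = sum(1 for (x1, x2), d in table
                          if f_t(x1 * w1 + x2 * w2, theta) == d)
            best = max(best, correct)

print(best)   # at most 3 of the 4 rows can be classified correctly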


