Training occurs in two stages, using the Backpropagation algorithm described in section 2.3.

In the first phase all sub-networks in the input layer are trained. The individual training set for each sub-network is selected from the original training set: it consists of those components of the original input vector that are connected to this particular network (forming its input vector), together with the desired output class represented in binary or 1-out-of-k coding.

In the second stage the decision network is trained. To construct its training set, each original input pattern is applied to the input layer; the resulting vector, together with the desired output class (represented in 1-out-of-k coding), forms a training pair for the decision module.

To simplify the description of the training, a *small* intermediate representation is used; furthermore, it is assumed that the permutation function is the identity, $\pi(x) = x$.

The original training set $TS$ is:

$(x_1^j, x_2^j, \ldots, x_l^j; d^j)$,

where $x_i^j \in \mathbb{R}$ is the $i$th component of the $j$th input vector, $d^j$ is the class number, and $j = 1, \ldots, t$, where $t$ is the number of training instances.

The module $MLP_i$ is connected to:

$x_{in+1}, x_{in+2}, \ldots, x_{(i+1)n}$

The training set $TS_i$ for the module $MLP_i$ is:

$(x_{in+1}^j, x_{in+2}^j, \ldots, x_{(i+1)n}^j; d_{BIN}^j)$

for all $j = 1, \ldots, t$, where $d_{BIN}^j$ is the output class $d^j$ represented in binary code.
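As an illustration, the construction of the per-module training sets $TS_i$ can be sketched as follows. This is a minimal sketch assuming NumPy arrays; the function names `binary_code` and `module_training_sets` and the convention that classes are numbered $1, \ldots, k$ are assumptions for illustration, not part of the original description.

```python
import numpy as np

def binary_code(d, k):
    """Encode class number d (1..k) as a ceil(log2 k)-bit binary vector."""
    bits = max(1, int(np.ceil(np.log2(k))))
    return np.array([(d - 1) >> b & 1 for b in reversed(range(bits))],
                    dtype=float)

def module_training_sets(X, d, n, m, k):
    """Split each input vector of length l = n*m into m slices of n
    components; pair slice i with the binary-coded class labels to
    form the training set TS_i for module MLP_i."""
    D_bin = np.array([binary_code(dj, k) for dj in d])
    return [(X[:, i * n:(i + 1) * n], D_bin) for i in range(m)]
```

For example, with $t = 2$ patterns, $n = 2$, $m = 3$ (so $l = 6$) and $k = 4$ classes, `module_training_sets(X, d, n=2, m=3, k=4)` yields three training sets, one per module, each pairing a $t \times n$ slice of the inputs with the $t \times \lceil \log_2 k \rceil$ binary-coded labels.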

The mapping performed by the input layer is denoted by:

$\Phi: \mathbb{R}^{n \cdot m} \to \mathbb{R}^{m \cdot \log_2 k}$

The training set for the decision network:

$(\; \Phi ((x$_{1}^{j}, x_{2}^{j}, &ldots;, x_{l}^{j}));d_{BIT}^{j}) and
$j=1,\&ldots;,\; t$. Where $d$_{BIT}^{j} is the output class $dj$
represented in a 1-out-of-k code.

The mapping of the decision network is denoted by:

$\Psi: \mathbb{R}^{m \cdot \log_2 k} \to \mathbb{R}^k$

**Figure 5.3:** The Training Algorithm.

The training algorithm is summarized in Figure 5.3.

The training of each module in the input layer is independent of all other modules, so the modules can be trained in parallel. Training is stopped either when each module has reached a sufficiently small error or when a defined maximum number of steps has been performed. This keeps the modules independent.

Alternatively, training can be stopped when the overall error of all modules is sufficiently small or when the maximum number of steps has been performed. This assumes that training proceeds step by step simultaneously in all modules.
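The alternative stopping rule can be sketched as a simple loop; `train_step` is a hypothetical function that performs one training step on a module and returns that module's current error, and the names and thresholds are assumptions for illustration.

```python
def train_input_layer(modules, train_step, max_steps, eps):
    """Train all modules step by step simultaneously; stop when the
    overall (summed) error falls below eps or max_steps is reached.
    train_step(module) performs one training step on a module and
    returns its error after that step (an assumed interface)."""
    total_error = float("inf")
    for step in range(max_steps):
        total_error = sum(train_step(mod) for mod in modules)
        if total_error < eps:
            return step + 1, total_error
    return max_steps, total_error
```

Note that this couples the stopping decision across modules: a module whose error is already small keeps training until the summed error of all modules is below the threshold.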
