Recurrent neural network#Elman networks and Jordan networks

A Jordan network is a partially recurrent artificial neural network in which the output of the network at one time step is fed back as part of the input at the next time step. Like the Elman network, the Jordan network is thus able to process inputs from different points in time.

The context neurons may additionally have feedback onto themselves, through which the influence of older patterns can be controlled. The strength of this self-feedback is set by a fixed memory factor λ. Jordan networks can be trained with backpropagation, for example; the backward (feedback) edges, however, are not adjusted.
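The following is a minimal sketch of this structure in Python with NumPy; the layer sizes, weight names, and the value of the memory factor are illustrative assumptions, not part of the original description. The previous output is copied into the context units, whose self-feedback decays older patterns with the fixed factor λ, while the feedback connections themselves carry no trainable weights.

```python
import numpy as np

# Hypothetical layer sizes and randomly initialised weights, for illustration only.
rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 5, 2
W_ih = rng.standard_normal((n_hidden, n_in))    # input -> hidden (trainable)
W_ch = rng.standard_normal((n_hidden, n_out))   # context -> hidden (trainable)
W_ho = rng.standard_normal((n_out, n_hidden))   # hidden -> output (trainable)
lam = 0.5                                       # assumed fixed memory factor λ

def jordan_step(x, context):
    """One time step: the context units carry the previous output back as input."""
    h = np.tanh(W_ih @ x + W_ch @ context)
    y = np.tanh(W_ho @ h)
    # The feedback edges are fixed: the output is copied into the context units,
    # whose self-feedback lets older patterns decay with the constant factor λ.
    new_context = lam * context + y
    return y, new_context

context = np.zeros(n_out)                       # context is as wide as the output layer
for x in rng.standard_normal((4, n_in)):        # a short input sequence
    y, context = jordan_step(x, context)
```

Note that the context layer in this sketch is exactly as wide as the output layer, which is the restriction discussed below.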

Compared to the similar Elman network, the structure of the Jordan network has the disadvantage that the number of feedback loops is determined by the number of output neurons and thus by the problem.

In Elman networks, the outputs of the hidden layer are fed back instead of those of the output neurons, so this restriction of the Jordan network does not apply. For this reason, Elman networks generally achieve better results.
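For comparison, a sketch of a single Elman step under the same illustrative assumptions: here the hidden activations themselves are fed back, so the number of context units equals the size of the hidden layer and is independent of the number of output neurons.

```python
import numpy as np

# Hypothetical sizes; the context layer now mirrors the hidden layer.
rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 5, 2
W_ih = rng.standard_normal((n_hidden, n_in))      # input -> hidden
W_hh = rng.standard_normal((n_hidden, n_hidden))  # context (previous hidden) -> hidden
W_ho = rng.standard_normal((n_out, n_hidden))     # hidden -> output

def elman_step(x, h_prev):
    """One time step: the hidden activations are fed back as the context."""
    h = np.tanh(W_ih @ x + W_hh @ h_prev)
    y = np.tanh(W_ho @ h)
    return y, h                                   # the hidden state becomes the next context

h_prev = np.zeros(n_hidden)
for x in rng.standard_normal((4, n_in)):          # a short input sequence
    y, h_prev = elman_step(x, h_prev)
```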
