Rprop

Resilient backpropagation (Rprop), or "elastic propagation", is an iterative method for determining the minimum of the error function in a neural network. The algorithm is sometimes counted among the second-order learning methods, because the previous weight change is taken into account when determining the current one. The weight is changed according to the sign of the gradient, while the step size is determined by a parameter individual to each weight. The weight change is carried out in two steps. In the first step, the step-size parameter Δ_{ij} of each weight w_{ij} for the k-th iteration is determined as follows:

Δ_{ij}^{(k)} = η⁺ · Δ_{ij}^{(k−1)},  if ∂E/∂w_{ij}^{(k−1)} · ∂E/∂w_{ij}^{(k)} > 0
Δ_{ij}^{(k)} = η⁻ · Δ_{ij}^{(k−1)},  if ∂E/∂w_{ij}^{(k−1)} · ∂E/∂w_{ij}^{(k)} < 0
Δ_{ij}^{(k)} = Δ_{ij}^{(k−1)},       otherwise

with 0 < η⁻ < 1 < η⁺.

Here η⁺ and η⁻ are fixed factors: the step size grows by η⁺ after two steps in the same direction and shrinks by η⁻ after a direction reversal, and it is bounded below by a minimum step size Δ_min and above by a maximum step size Δ_max. Good values for the parameters are:

η⁺ = 1.2, η⁻ = 0.5, Δ₀ = 0.1, Δ_min = 10⁻⁶, Δ_max = 50
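The step-size rule above can be sketched for a single weight as follows; the function name and its defaults are illustrative, not taken from a specific library, with the defaults set to the parameter values given above:

```python
def update_step_size(delta, grad_prev, grad, eta_plus=1.2, eta_minus=0.5,
                     delta_min=1e-6, delta_max=50.0):
    """Return the new Rprop step size Delta for one weight."""
    s = grad_prev * grad
    if s > 0:    # gradient kept its sign: accelerate, capped at delta_max
        delta = min(delta * eta_plus, delta_max)
    elif s < 0:  # sign flipped, a minimum was skipped: back off, floored at delta_min
        delta = max(delta * eta_minus, delta_min)
    return delta  # s == 0: keep the previous step size
```

For example, a step size of 0.1 grows to 0.12 when the gradient keeps its sign and shrinks to 0.05 when the sign flips.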
In the second step, the weight change for the k-th iteration is determined:

Δw_{ij}^{(k)} = −sign(∂E/∂w_{ij}^{(k)}) · Δ_{ij}^{(k)}

and the weight is updated according to

w_{ij}^{(k+1)} = w_{ij}^{(k)} + Δw_{ij}^{(k)}

Here w_{ij} is the weight of neuron j for input i, and E is the sum of the errors (the error function of the network).
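The two steps can be combined into one update rule. The following NumPy sketch (the name rprop_update and the toy error function are ours, not from the original paper) applies the plain rule without weight-backtracking, using the parameter values given above:

```python
import numpy as np

def rprop_update(w, grad, grad_prev, delta,
                 eta_plus=1.2, eta_minus=0.5,
                 delta_min=1e-6, delta_max=50.0):
    """One Rprop iteration for a weight array w.

    Step 1: adapt each per-weight step size from the sign of
    grad_prev * grad. Step 2: move each weight against the sign
    of its current gradient by its own step size.
    """
    s = grad_prev * grad
    delta = np.where(s > 0, np.minimum(delta * eta_plus, delta_max), delta)
    delta = np.where(s < 0, np.maximum(delta * eta_minus, delta_min), delta)
    w = w - np.sign(grad) * delta
    return w, delta

# Usage: minimise the toy error E(w) = sum(w**2), whose gradient is 2*w.
w = np.array([2.0, -3.0])
delta = np.full_like(w, 0.1)      # initial step size Delta_0
grad_prev = np.zeros_like(w)
for _ in range(100):
    grad = 2 * w
    w, delta = rprop_update(w, grad, grad_prev, delta)
    grad_prev = grad
# w is now close to the minimum at (0, 0)
```

Note that only the sign of the gradient enters the weight change; its magnitude is ignored, which is what makes the method robust to badly scaled error surfaces.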

The Rprop algorithm generally converges faster than plain error backpropagation, but because the sign-based update is discontinuous at the minimum of the local approximation, it may skip over the extremum.

The Rprop algorithm was first presented in 1992 by Martin Riedmiller and Heinrich Braun in their paper "RPROP - A Fast Adaptive Learning Algorithm".

Christian Igel and Michael Hüsken (2003) proposed a slight modification of the algorithm that improves its stability and speed. With the changes described by Aristoklis D. Anastasiadis et al. (2005), global convergence of the method can be proved.
