Quickprop

Quickprop is an iterative method for finding the minimum of the error function of an artificial neural network, based on Newton's method. The algorithm is sometimes classified as a second-order learning method, since it infers the minimum of the error function from a quadratic approximation built from the previous gradient step and the current gradient. Under the assumption that the error function is locally approximately quadratic, it attempts to describe it by an upward-opening parabola. The minimum is then sought at the vertex of this parabola. The procedure requires only local information of the artificial neuron to which it is applied.

The k-th approximation step is given by:

\Delta^{(k)} w_{ij} = \Delta^{(k-1)} w_{ij} \cdot \frac{\nabla_{ij} E^{(k)}}{\nabla_{ij} E^{(k-1)} - \nabla_{ij} E^{(k)}}

Here w_{ij} is the weight of neuron j for input i, and E is the summed error.
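The update step can be sketched as follows. This is a minimal illustration, not a reference implementation: the function name, the `max_growth` step limiter, and the division guard are assumptions added for numerical safety, not part of the original description.

```python
import numpy as np

def quickprop_step(w, grad, prev_grad, prev_step, max_growth=1.75):
    """One Quickprop update per weight: fit a parabola through the
    previous and current gradients and jump toward its vertex.
    All parameter names here are illustrative."""
    denom = prev_grad - grad
    # Guard against division by zero when successive gradients coincide.
    safe = np.where(np.abs(denom) > 1e-12, denom, 1e-12)
    step = prev_step * grad / safe
    # Limit step growth to damp the large, potentially chaotic steps
    # mentioned below (an assumed safeguard, commonly used in practice).
    bound = max_growth * np.abs(prev_step)
    step = np.clip(step, -bound, bound)
    return w + step, step
```

For a truly quadratic error such as E(w) = w^2, the parabola model is exact, so a single Quickprop step taken after an initial gradient step lands directly at the minimum.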

The Quickprop algorithm generally converges faster than backpropagation, but the network can behave chaotically during the learning phase due to large step sizes.
