Supervised learning

Supervised learning is a branch of machine learning. Learning here means the ability to reproduce regularities. The correct results are known from natural laws or expert knowledge and are used to train the system.

A learning algorithm tries to find a hypothesis that makes predictions which are as accurate as possible. A hypothesis is understood here as a mapping that assigns to each input value the presumed output value. To this end, the algorithm adjusts the free parameters of the chosen hypothesis class. The hypothesis class is often taken to be the set of all hypotheses that can be modeled by a given artificial neural network; in that case, the freely adjustable parameters are the weights of the neurons. In supervised learning, these weights are adjusted so that the output of the neurons comes as close as possible to a given teaching vector (also called a learning vector). The method is therefore guided by an output to be learned that is fixed in advance and whose correct values are known; the results of the learning process can be compared with these known correct results, i.e. "supervised".
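
As an illustration of such a hypothesis class, the following is a minimal sketch in which a single linear neuron serves as the hypothesis; the weight vector and bias are its free parameters. The choice of model and the numbers are purely illustrative assumptions, not taken from the article.

    import numpy as np

    # Hypothesis class: a single linear neuron. The weight vector w and the
    # bias b are the free parameters the learning algorithm may change.
    def hypothesis(x, w, b):
        # Maps an input vector x to the presumed output value.
        return float(np.dot(w, x) + b)

    # Two different parameter settings give two different hypotheses
    # from the same class.
    x = np.array([1.0, 2.0])
    print(hypothesis(x, np.array([0.5, -0.2]), 0.1))   # 0.2
    print(hypothesis(x, np.array([1.0,  1.0]), 0.0))   # 3.0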

To determine how accurate a hypothesis is, an error measure is introduced that is to be minimized. A popular choice is the mean squared error over all training data. A learning step could look as follows:
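
The original illustration of the learning step is not reproduced here. As a hedged sketch, one common choice is a gradient-descent step that reduces the mean squared error; the linear model, learning rate, and toy data below are assumptions made for illustration only.

    import numpy as np

    def mse(y_pred, y_teach):
        # Mean squared error over all training data.
        return np.mean((y_pred - y_teach) ** 2)

    def learning_step(X, y_teach, w, b, lr=0.1):
        # One gradient-descent step: the weights are shifted so that the
        # outputs move closer to the given teaching values.
        error = X @ w + b - y_teach
        grad_w = 2 * X.T @ error / len(y_teach)
        grad_b = 2 * np.mean(error)
        return w - lr * grad_w, b - lr * grad_b

    # Toy training data: inputs X with known correct outputs y_teach.
    X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
    y_teach = np.array([1.0, 2.0, 3.0])
    w, b = np.zeros(2), 0.0
    for _ in range(200):
        w, b = learning_step(X, y_teach, w, b)
    print(mse(X @ w + b, y_teach))   # error decreases toward zero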

After this training or learning process, the system should be able to deliver the correct output for an unknown input that is similar to the learned examples.

To test this capability, the system is validated. One possibility is to split the available data into a training set and a test set. The goal is to minimize the error measure on the test set, which is not used for training. Cross-validation methods are often used for this.
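
The following is a minimal sketch of such a holdout split and of the fold assignment used in k-fold cross-validation; the function names, the random seed, and the toy data are illustrative assumptions.

    import numpy as np

    def holdout_split(X, y, test_fraction=0.25, seed=0):
        # Randomly divide the available data into a training set and a test set.
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(y))
        n_test = int(len(y) * test_fraction)
        test, train = idx[:n_test], idx[n_test:]
        return X[train], y[train], X[test], y[test]

    def k_fold_indices(n_samples, k=5, seed=0):
        # k-fold cross-validation: each fold is held out once as the test set
        # while the remaining folds are used for training.
        rng = np.random.default_rng(seed)
        return np.array_split(rng.permutation(n_samples), k)

    X, y = np.arange(20).reshape(10, 2), np.arange(10)
    X_train, y_train, X_test, y_test = holdout_split(X, y)
    print(len(y_train), len(y_test))                   # 8 2
    print([len(f) for f in k_fold_indices(10, k=5)])   # [2, 2, 2, 2, 2]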

If the model has a large number of parameters (weights) or if only few training data are available, overfitting occurs easily. This can be recognized when the error on the training set is still decreasing while the error on the test set begins to rise again, because the known data are learned individually rather than the general rule behind them. Often the training process is simply stopped at this point. In that case, however, the test set has been used during training; for the assessment, a third set, the validation set, is therefore introduced.
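
A minimal sketch of this early-stopping scheme with three disjoint sets, following the article's naming (the test set is watched during training, a third validation set gives the final assessment); the simple linear model, the noise level, and the patience value are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_set(n):
        # Noisy samples of a simple underlying rule y = 2x.
        X = rng.uniform(-1, 1, (n, 1))
        return X, 2.0 * X[:, 0] + rng.normal(0, 0.3, n)

    train, test, validation = make_set(20), make_set(20), make_set(20)

    def mse(w, data):
        X, y = data
        return np.mean((X @ w - y) ** 2)

    w, best_w, best_err, patience, bad = np.zeros(1), np.zeros(1), np.inf, 10, 0
    for epoch in range(1000):
        X, y = train
        w = w - 0.05 * 2 * X.T @ (X @ w - y) / len(y)   # training error keeps dropping
        err = mse(w, test)                              # watched for the rise that signals overfitting
        if err < best_err:
            best_err, best_w, bad = err, w.copy(), 0
        else:
            bad += 1
            if bad >= patience:
                break                                   # stop the training process here
    print(mse(best_w, validation))                      # final assessment on the third set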

Methods of supervised learning

  • Inductive learning, see also induction (reasoning)
  • Concept learning problem
  • Learning of decision trees, see also decision tree

Examples of supervised learning

  • Bayes classifier
  • Perceptron
  • Support Vector Machines