Information fusion comprises methods for linking data from different sensors or information sources, with the aim of gaining new and more precise knowledge about measured values and events.
Related disciplines are sensor fusion, data fusion, and distributed measurement.
The theoretical origins of information fusion date back to the late 1960s. These mathematical principles were transferred to technology only later, initially in the field of artificial intelligence (AI). In this discipline, biology, and in particular the human brain, often served as a model for the design of technical systems. Considering how well the brain fuses data from the various sensory organs, it is not surprising that the first approaches came from AI.
Today the use of information fusion is very broad and spans many disciplines, including robotics, pattern recognition, medicine, nondestructive testing, earth sciences, defense, and finance. Although the literature on the subject is extensive, many of the methods it contains are not very systematic.
In recent years, some systematic fusion approaches have emerged, the most important of which are discussed briefly here. First, the fusion problem is formulated as a parameter estimation problem. A source emits a parameter θ, which is regarded as the realization of a random variable Θ. The target quantity may be a measurable quantity, but it may also be a latent construct that need not correspond to any physical reality. In the latter case, the quantity can be understood in the Platonic sense as an idealization of the sensor data that exhibits desired or known properties of the target quantity. Data y = (y_1, …, y_N) are collected by N sensors and are likewise regarded as realizations of a random process. The measurement corresponds to a mapping θ → y, which can be described mathematically by the conditional probability distribution of y given θ. In the following it is assumed that θ and y are continuous quantities whose probability distributions are described by probability density functions.
Classical statistics is based on an empirical, frequentist interpretation of probabilities: the sensor data y are regarded as realizations of random variables, but the measured quantity θ itself is not. Estimating θ from the sensor data relies on the so-called likelihood function p(y | θ), which is interpreted as a function of θ and maximized:

    θ̂_ML = arg max_θ p(y | θ)

The maximizing value θ̂_ML is the maximum-likelihood (ML) estimate.
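As a minimal sketch (assuming independent Gaussian sensor noise with known variances, an assumption of this example that the text does not prescribe), the ML estimate of one quantity observed by several sensors reduces to the inverse-variance weighted mean:

```python
def ml_fuse(readings, variances):
    """ML fusion of independent Gaussian sensor readings of a single
    quantity theta: maximizing the joint likelihood
    prod_i N(y_i; theta, sigma_i^2) over theta yields the
    inverse-variance weighted mean of the readings."""
    weights = [1.0 / v for v in variances]
    return sum(w * y for w, y in zip(weights, readings)) / sum(weights)

# Two sensors observe the same quantity; the more precise sensor
# (smaller variance) dominates the fused estimate.
estimate = ml_fuse([10.0, 10.4], [0.04, 0.16])   # -> 10.08
```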
In Bayesian statistics, the measured quantity θ is itself interpreted as the realization of a random variable, so an a-priori probability density function p(θ) can be combined with the likelihood to obtain the a-posteriori probability density function via Bayes' theorem:

    p(θ | y) = p(y | θ) p(θ) / p(y)
Maximizing this expression over θ (the denominator p(y) does not depend on θ) yields the maximum a-posteriori (MAP) estimate of the parameter:

    θ̂_MAP = arg max_θ p(y | θ) p(θ)
This approach has the significant advantage that it yields a probability distribution for the parameter to be estimated given the measured data, whereas the classical approach only provides the distribution of the sensor data for a given parameter value.
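A sketch under the assumption of independent Gaussian sensor noise and a Gaussian prior on the parameter (neither is prescribed by the text): in this special case the prior acts like one additional virtual measurement in the weighted mean, so the MAP estimate stays in closed form:

```python
def map_fuse(readings, variances, prior_mean, prior_var):
    """MAP fusion of independent Gaussian sensor readings with a
    Gaussian prior N(prior_mean, prior_var) on the parameter: the
    prior enters the inverse-variance weighted mean exactly like one
    extra measurement."""
    weights = [1.0 / v for v in variances] + [1.0 / prior_var]
    values = list(readings) + [prior_mean]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# One reading of 10.0 (variance 1.0) is pulled toward a prior mean of
# 8.0 with equal weight, so the estimate lands exactly halfway.
estimate = map_fuse([10.0], [1.0], 8.0, 1.0)   # -> 9.0
```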
Dempster–Shafer theory of evidence
The theory of evidence is often regarded as an extension of probability theory or as a generalization of Bayesian statistics. It is based on two non-additive measures, the degree of belief and the plausibility, and offers the ability to express uncertainty in a differentiated way. In practical situations, however, it is not always possible to represent the available knowledge about the relevant quantities in such detail and thus to fully exploit the theoretical possibilities of this approach.
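A sketch of Dempster's rule of combination, which fuses evidence from two sources expressed as mass functions (the frame of discernment and the sensor masses below are invented for illustration):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the
    same frame of discernment. Focal elements are frozensets; mass
    falling on incompatible (disjoint) pairs is the conflict, which is
    normalized away."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two sensors over the frame {fault, ok}: each commits some mass to
# 'fault' and leaves the rest on the whole frame (ignorance).
m1 = {frozenset({'fault'}): 0.7, frozenset({'fault', 'ok'}): 0.3}
m2 = {frozenset({'fault'}): 0.6, frozenset({'fault', 'ok'}): 0.4}
fused = dempster_combine(m1, m2)   # mass on {'fault'} rises to 0.88
```

Note how agreement strengthens belief: neither sensor alone commits more than 0.7 to 'fault', but their combination commits 0.88.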
Fuzzy logic is based on a generalization of the concept of a set, with the aim of obtaining a fuzzy knowledge representation. This is achieved by means of a membership function that assigns each element a degree of membership in a set. Because of the arbitrariness in the choice of this function, fuzzy set theory is a rather subjective method, which makes it particularly suitable for representing human knowledge. In information fusion, fuzzy methods are used to handle uncertainty and vagueness associated with the sensor data.
Another method for the fusion of information is the artificial neural network (ANN). ANNs can consist of software-simulated processing units connected into a network, or they can be implemented in hardware to solve specific tasks. Their use is particularly advantageous when it is difficult or impossible to specify an algorithm for combining the sensor data. In such cases, the neural network is taught the desired behavior in a training phase using test data. A disadvantage of neural networks is the limited means of incorporating a-priori knowledge about the quantities involved in the fusion.
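The training idea can be sketched with the smallest possible "network": a single linear unit fitted by gradient descent. All data and names here are invented for illustration, and real ANNs use nonlinear, multi-layer models; the point is only that the fusion rule is learned from examples rather than specified:

```python
def train_fusion_weights(samples, targets, lr=0.05, epochs=500):
    """Train a single linear unit y = w1*x1 + w2*x2 + b by stochastic
    gradient descent on squared error -- a minimal stand-in for
    learning a fusion rule when no explicit combination algorithm can
    be specified."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), t in zip(samples, targets):
            y = w[0] * x1 + w[1] * x2 + b   # forward pass
            err = y - t                      # prediction error
            w[0] -= lr * err * x1            # gradient steps
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

# Invented training data: the 'correct' fusion here happens to be the
# mean of the two sensor channels; the unit discovers this on its own.
samples = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0), (0.0, 2.0), (3.0, 3.0)]
targets = [0.0, 1.0, 1.0, 1.0, 3.0]
w, b = train_fusion_weights(samples, targets)
```

After training, the learned weights are close to (0.5, 0.5) with a bias near zero, i.e. the averaging rule implicit in the examples.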