Neuroinformatics is a branch of computer science and neurobiology that deals with information processing in neural systems in order to apply it in technical systems. It must be distinguished from computational neuroscience, which, as a subfield of neurobiology, is concerned with understanding biological neural systems by means of mathematical models.
Neuroinformatics is a highly interdisciplinary field of research at the intersection of artificial intelligence research and cognitive science.
In contrast to artificial intelligence, whose aim is to build machines that behave "intelligently", neuroinformatics is concerned with the inner workings of the brain. Its operation is investigated by simulating its basic building blocks, neurons and synapses, and their interconnection.
Branches of neuroinformatics
Neural methods are used primarily when information must be extracted from poor or noisy data, but algorithms that adapt to new situations, i.e. that learn, are also typical of neuroinformatics. A basic distinction is made between supervised learning and unsupervised learning; a compromise between the two techniques is reinforcement learning. Associative memory is a particular application of neural methods and is therefore a frequent subject of research in neuroinformatics. Many applications of artificial neural networks are found in pattern recognition, and especially in image understanding.
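The distinction between the learning paradigms can be illustrated with supervised learning, where a teacher provides the correct label for every input. The following is a minimal sketch using the classic perceptron learning rule trained on the logical AND function; the data, learning rate, and epoch count are illustrative choices, not taken from the article.

```python
# Minimal sketch of supervised learning: a perceptron trained on the
# logical AND function. The learning rate and epoch count are illustrative.

def train_perceptron(samples, lr=0.1, epochs=20):
    """Learn weights w and bias b so that step(w.x + b) matches the labels."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Weighted sum followed by a hard threshold (step activation).
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            # Supervised update: nudge the weights toward the known label.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # reproduces the AND labels
```

In unsupervised learning, by contrast, no target labels exist and the network must discover structure in the inputs on its own, as in k-means clustering or self-organizing maps.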
Neuroinformatics is a relatively young and small part of computer science, but institutes, departments, or working groups for neuroinformatics can be found at many universities.
Biological basis of neural networks
Neurons (nerve cells) are found everywhere in the body, but they occur particularly frequently in the brain; almost all higher animals have a brain. In fresh slices through the brain one finds a reddish-brown layer, the so-called gray matter, and a whitish layer, the white matter.
A typical neuron consists of three parts:
- The soma
- The dendritic tree
- The axon
The dendrites and the axon are two different types of processes that originate from the cell body. In most cases, each cell body gives rise to a number of dendrites, which branch out into a tree, but only a single axon. The dendrites and cell bodies lie exclusively in the gray matter, which also contains a few axons, but only those not covered by a myelin sheath.
Only myelinated axons run in the white matter. Since myelin sheaths consist of cell membranes that contain many lipids, their fat content is relatively high, which gives the layer its whitish appearance.
Two neurons are connected by synaptic coupling. Synapses are the sites where excitation passes from one neuron to another. The excitation is transmitted either chemically, by means of a neurotransmitter, or electrically. The gap bridged by a chemical synapse, the synaptic cleft, is 20-30 nm wide. A distinction is made between inhibitory and excitatory synapses. The excitations arriving at a nerve cell via its synapses are summed; if this sum exceeds a certain threshold, an action potential is triggered in the neuron and propagates along the membrane of the nerve cell.
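The summation-and-threshold behaviour described above is the basis of the simplest artificial neuron models, such as the McCulloch-Pitts neuron. The sketch below is illustrative only: positive weights play the role of excitatory synapses, negative weights that of inhibitory ones, and the specific values are made up.

```python
# Illustrative threshold-neuron sketch: inputs are summed with synaptic
# weights and the neuron "fires" only if the total exceeds a threshold.

def fires(inputs, weights, threshold):
    """Return True if the summed synaptic input exceeds the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total > threshold

# Two excitatory synapses (+1.0 each) and one inhibitory synapse (-1.5).
weights = [1.0, 1.0, -1.5]

print(fires([1, 1, 0], weights, threshold=1.5))  # both excitatory active
print(fires([1, 1, 1], weights, threshold=1.5))  # inhibition suppresses firing
```

With both excitatory inputs active the summed input (2.0) exceeds the threshold and the neuron fires; activating the inhibitory input as well drops the sum to 0.5 and the neuron stays silent.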
Modeling of neural networks
There are many different models for modeling neural networks. One approach is to interconnect a number of artificial neurons into a network; depending on the problem, these neurons can be modeled more or less closely on their biological counterparts. There are also many other types of artificial neural networks:
- Perceptron (Frank Rosenblatt), in particular the multi-layer perceptron (MLP)
- Hetero-associative networks
- Radial basis function networks
- Self-organizing maps (also called Kohonen maps, after Teuvo Kohonen)
- k-means cluster analysis
- Learning vector quantization (LVQ)
- Adaptive resonance theory (ART)
- Hopfield networks (John Hopfield)
- Auto-association
- Boltzmann machine (Terrence J. Sejnowski, Geoffrey Hinton)
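One of the models listed above, the Hopfield network, directly realizes the associative memory mentioned earlier: stored patterns become stable states of the network, and a corrupted input relaxes back onto the nearest stored pattern. The following is a minimal sketch under illustrative assumptions (one stored pattern of +1/-1 states, Hebbian weights, synchronous sign updates), not a full treatment.

```python
# Illustrative sketch of a tiny Hopfield network as an associative memory.
# A single +1/-1 pattern is stored with the Hebbian rule and then recalled
# from a corrupted copy of itself. All values are made up for illustration.

def train_hopfield(patterns):
    """Hebbian learning: w[i][j] = sum over patterns of p[i]*p[j], diagonal 0."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    """Synchronous updates: each neuron takes the sign of its summed input."""
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s

stored = [1, 1, -1, -1, 1, -1]
w = train_hopfield([stored])
noisy = [1, -1, -1, -1, 1, -1]   # one state flipped
print(recall(w, noisy))           # recovers the stored pattern
```

Starting from the corrupted pattern, a single update step already flips the wrong state back, because the summed input from the other neurons outweighs the single erroneous contribution.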