Cognitive science

Cognitive science is the interdisciplinary study of mental processes, the "science of the mind".

The subject matter of cognitive science comprises the conscious and unconscious processes that take place between sensory input and motor output, such as thinking, memory, learning, and language. Its scope is not limited to cognition but equally includes emotion, motivation, and volition.

By treating cognitive processes as information processing, cognitive science partly abstracts away from whether cognition takes place in organic systems (living organisms) or in artificial systems such as computers and robots. Methodologically, it works on several levels:

  • The theoretical level, which serves the formation of hypotheses,
  • The level of cognitive modeling, which simulates cognitive performance with the help of computer models and incorporates new hypotheses into these models,
  • And the empirical level, which deals with the empirical testing of the models and with the concrete implementation of cognitive performance.

Cognitive science is the result of interdisciplinary efforts among psychology, neuroscience, computer science/artificial intelligence, linguistics, and philosophy, but also anthropology and sociology.


Development of Cognitive Science

History of Cognitive Science

The development of cognitive science is closely connected with the so-called "cognitive revolution", whose culmination falls in the year 1956. Until then, behaviorism had played the key role in psychology and the philosophy of mind. Behaviorism arose as a reaction to the problems of introspection as a psychological research method: introspective reports about the mental inner life could not be checked by outside scientists. Behaviorism drew the consequence that psychology must confine itself to an investigation of behavior. In the philosophy of mind, Gilbert Ryle went a step further and claimed that mental states are nothing other than behavioral dispositions.

In 1956 the Symposium on Information Theory took place at the Massachusetts Institute of Technology, with the participation of the AI pioneers Allen Newell, Herbert Simon, and Marvin Minsky as well as the linguist Noam Chomsky. Chomsky presented a sharp critique of behaviorism and introduced his enormously influential transformational grammar. Newell and Simon presented the Logic Theorist, which for the first time could independently "prove" a mathematical theorem. Important precursors of this development were the formulation of cybernetics by Norbert Wiener and the work of Alan Turing, who designed the Turing machine and developed the Turing test.

Cognitive science, which took shape in the context of the developments just described, rests on a central assumption that has been called the "computer model of the mind". By this is meant the thesis that the brain is an information-processing system and basically works like a computer. The distinction between mind and brain can then be understood by analogy with the distinction between software and hardware: just as software is determined by data structures and algorithms, the mind is determined by mental representations and computational processes. Just as an abstract description of software is possible without examining the hardware directly, an abstract description of mental faculties should be possible without studying the brain directly. And just as the existence of a software level is perfectly compatible with materialism, the mental level should be embeddable in a materialistic interpretation of the world.

Recent Developments

In recent decades, the computer model of the mind has come under sharp criticism. This criticism has two main sources. First, the description of the brain has advanced rapidly through cognitive neuroscience. This is evident, for example, in the increasing importance of imaging techniques, which make it implausible to ignore the brain in the study of the mind. Second, other successful approaches have developed, such as connectionism and the modeling of neural networks. Artificial neural networks are programmed, among other things, to simulate the activity of groups of neurons, and it is doubtful to what extent a distinction between a software level and a hardware level is still possible here.

Other alternative paradigms in cognitive science are, for example, dynamicism, artificial life, and embodied and situated cognitive science. According to dynamicism, the theory of dynamical systems provides a suitable model of cognitive behavior, because cognitive behavior always takes place in a temporal context and requires temporal coordination. It is postulated that this temporal aspect of cognition, which is neglected in the computer model of the mind, is essential. At the same time, this approach calls into question the central role of internal representation and symbol manipulation (see symbolism), since these concepts have no place in a dynamical explanation.

" Artificial Life " is a term which opposes artificial intelligence: instead of solving abstract tasks (such as chess positions analyze ) what us humans often appear difficult because of the sheer number of possible solutions, computers easily falls, however, one should understand only deal with the supposedly mundane everyday problems. Many tasks that seem to us easy (such as seen running, friends and enemies, catch a ball ... ) are computers or robots are still not at all or only be accomplished very limited.

Embodied and situated cognitive science, in turn, assumes that cognition cannot be explained without reference to a specific body (embodiment) and a specific environment (situatedness). These claims arise from doubts that cognition is a process taking place in a world of abstract symbolic representations, relatively independently of the precise sensory, motor, and temporal events in the outside world. Well-known representatives of this view are Alva Noë, Susan Hurley, Evan Thompson, Francisco Varela, and Kevin O'Regan. Within embodied and situated cognitive science, a connection is often sought between the ideas of the phenomenology of Maurice Merleau-Ponty and Edmund Husserl and the classical analytic philosophy of mind.

These various currents (connectionism, dynamicism, artificial life, situatedness, and embodiment) are often grouped together under the heading "New AI", because their demands and assumptions partly overlap. They cannot, however, be regarded as identical, since they differ in many respects in their assumptions, consequences, and applications, or even contradict one another.

Criticism of the computer model of the mind temporarily led to a general questioning of cognitive science. By now the dust has largely settled: cognitive scientists themselves use neural networks and are in close contact with cognitive neuroscience.

Philosophy of Cognition

Cognitive science studies topics that presuppose human consciousness or self-consciousness. Individual aspects of consciousness, such as perceptions, thoughts, or memories, are considered and commonly referred to as mental states. "Higher" cognitive abilities such as learning, problem solving, and speaking are in turn attributed to thinking, that is, to mental states. It is therefore of great methodological importance for cognitive science to clarify what is meant by talk of mental states. A classic position in the philosophy of mind is connected with the computer model of the mind: functionalism.

Functionalism, which was developed in the 1960s by Hilary Putnam, claims that mental states are functional states. A functional state is specified by its causal role in a system. The concept of a functional state can be explained quite well with the example of a simple machine: imagine a vending machine that dispenses a candy for one euro. The machine can be described in terms of different states: there must be a state in which the machine ejects the candy without demanding further money, but there must also be states in which the machine demands another euro or another 50 cents before it ejects anything. Each of these states of the automaton is a functional state. It is specified by the fact that it reacts to a particular input (here: 50 cents or 1 euro) in a particular way: it produces a particular output (here: candy or no candy) and passes into another state.

The crucial point of this consideration is that the description of a functional state is independent of what the candy machine is concretely built from. If mental states were functional states as well, it would likewise be irrelevant whether a functional state is realized in a brain or in a computer. This would also make clear the conditions that would have to be met for a computer to have mental states: the computer would merely have to realize the same functional states. This seems possible: the Turing machine, formulated by Alan Turing in 1936 as a mathematical model, can in principle realize any functional state.
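
The vending machine can be written down as a small finite-state machine. The following Python sketch is only an illustration of the idea of a functional state (the state names, coin values, and transition table are invented for this example): each state is specified purely by how it maps inputs to outputs and successor states, regardless of what the machine is physically made of.

```python
# Minimal sketch of the candy machine as a finite-state machine.
# Each functional state is defined only by its input/output behavior
# and its successor state, not by any physical details of the machine.

# Transition table: (state, coin in cents) -> (output, next state).
# State names describe the amount still owed; all values are invented.
TRANSITIONS = {
    ("owes_100", 50):  (None,    "owes_50"),   # 50 cents paid, 50 still owed
    ("owes_100", 100): ("candy", "owes_100"),  # full euro paid, dispense candy
    ("owes_50",  50):  ("candy", "owes_100"),  # remaining 50 cents paid
    ("owes_50",  100): ("candy", "owes_50"),   # overpayment: candy plus 50 cents credit
}

def run(coins, state="owes_100"):
    """Feed a sequence of coins into the machine and collect its outputs."""
    outputs = []
    for coin in coins:
        output, state = TRANSITIONS[(state, coin)]
        if output is not None:
            outputs.append(output)
    return outputs, state

print(run([50, 50]))    # (['candy'], 'owes_100')
print(run([100, 100]))  # (['candy', 'candy'], 'owes_100')
```

Whether such a transition table is realized in silicon, in relays, or in neurons makes no difference to the functional description; this hardware-independence is exactly the point functionalism makes about mental states.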

Cognitive abilities and cognitive architectures

People have many different cognitive abilities: memory, language, perception, problem solving, the formation of intentions, attention, and more. The aim of cognitive psychology is to investigate the characteristics of these abilities and, as far as possible, to describe them in formal models. These models can then be realized on a computer as a cognitive architecture. Artificial intelligence (AI) pursues the goal of realizing cognitive abilities in machines. In contrast to cognitive architectures, however, artificial agents may use strategies that are not employed by humans.

Problem solving

" Problem solving " is called actions that are aimed to reach a goal state. Problem solving processes are therefore commonplace, they are necessary for as the days planning, calculating, playing chess or route planning a trip. It was early on the goal of artificial intelligence to give machines the ability to problem-solving.

In artificial intelligence, a start state and a goal state are specified. The task is to find the (or a) path to the goal. There are basically two approaches. On the one hand, the program can try to find the path to the goal blindly by trying out all the different paths (so-called brute-force methods), as happens, for example, in depth-first or breadth-first search. This approach quickly reaches its limits, however, because in NP-complete problems the number of possible paths is so large that trying them all would exceed the computing capacity of the machine. In such cases, search algorithms that use heuristics are necessary, such as the A* algorithm. Heuristics describe selection mechanisms that try to determine the most promising steps before executing them.
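
The difference between blind and heuristic search can be illustrated with a short Python sketch of an A*-style best-first search over an explicitly given graph. The graph, step costs, and heuristic estimates below are invented for the example; with a heuristic that always returns zero, the same code degenerates into uninformed uniform-cost search.

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Best-first search: always expand the node whose cost so far plus
    heuristic estimate is lowest. With heuristic == 0 this is blind
    uniform-cost search; a good heuristic prunes the search drastically."""
    frontier = [(heuristic(start), 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        if node in visited:
            continue
        visited.add(node)
        for nxt, step_cost in neighbors(node):
            if nxt not in visited:
                new_cost = cost + step_cost
                heapq.heappush(
                    frontier,
                    (new_cost + heuristic(nxt), new_cost, nxt, path + [nxt]))
    return None, float("inf")

# Invented toy graph and heuristic estimates of the remaining distance.
GRAPH = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 1)], "D": []}
ESTIMATE = {"A": 3, "B": 2, "C": 1, "D": 0}

print(a_star("A", "D", lambda n: GRAPH[n], lambda n: ESTIMATE[n]))
# (['A', 'B', 'C', 'D'], 3)
```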

The first program that worked extensively with heuristics was the General Problem Solver (GPS) by Allen Newell and Herbert Simon. GPS was capable of finding solutions to puzzles such as the Towers of Hanoi. The game consists of a number of disks of different sizes and three pegs. At the start, all disks lie on the left peg. The goal is reached when all disks lie on the right peg. However, a disk may only rest on a larger disk, and only one disk at a time may be moved to the left, middle, or right peg. Although the problem can be solved with an algorithm, people often use heuristics to solve it, since the number of possible paths grows rapidly.
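
The algorithmic solution mentioned above can be stated as a short recursive procedure: to move n disks, first clear the n-1 smaller disks onto the auxiliary peg, move the largest disk, then restack the smaller ones. The Python sketch below (peg names are arbitrary) produces the minimal sequence of 2^n - 1 moves.

```python
def hanoi(n, source="left", target="right", auxiliary="middle"):
    """Return the list of moves that transfers n disks from source to target."""
    if n == 0:
        return []
    moves = hanoi(n - 1, source, auxiliary, target)   # clear the smaller disks
    moves.append((source, target))                    # move the largest disk
    moves += hanoi(n - 1, auxiliary, target, source)  # restack the smaller disks
    return moves

print(hanoi(3))
# [('left', 'right'), ('left', 'middle'), ('right', 'middle'), ('left', 'right'),
#  ('middle', 'left'), ('middle', 'right'), ('left', 'right')]
print(len(hanoi(3)))  # 7 moves, i.e. 2**3 - 1
```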

Solving games such as the Towers of Hanoi was a popular task in the early days of artificial intelligence, because only a rather limited number of actions is possible and there are no unpredictable events. This made it easier to test cognitive strategies experimentally. Today researchers also turn to complex everyday tasks, such as the successful "execution" of a restaurant visit.

Cognitive architectures

The goal of a cognitive architecture is to summarize the various results of cognitive psychology in a comprehensive computer model. The results, however, must be available in a sufficiently formalized form to serve as the basis of a computer program. By combining the individual results, a comprehensive theory of cognition should emerge on the one hand, and a commercially usable model on the other. The three most successful cognitive architectures are ACT-R (Adaptive Control of Thought - Rational), SOAR, and EPIC. With the PSI model, another approach has been introduced in recent years which, compared to the other architectures, is based to a greater extent on the current state of general psychology.

ACT-R is a production system with a number of modules. It consists of input and output modules, a production memory, and a declarative memory. The goal module determines which goal is pursued in the production system. The production memory contains production rules that determine which action is carried out when a particular goal is to be reached and which contents must be present in working memory (or in the various partitions of working memory) for the action to be carried out successfully. This "pattern matching" may lead to the selection of a production rule and determines the action of the output module.
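
The match-select-act cycle just described can be sketched in a few lines of Python. The following is not the actual ACT-R implementation but an invented toy production system: each rule fires when the current goal matches and all of its conditions are present in working memory, and its action updates memory and sets a new goal.

```python
# Invented toy production system illustrating the match-select-act cycle.
RULES = [
    {"name": "add-ones",
     "goal": "add-columns",
     "conditions": {"ones-digits-known"},
     "add": {"ones-sum-known"},
     "new_goal": "add-tens"},
    {"name": "add-tens",
     "goal": "add-tens",
     "conditions": {"ones-sum-known", "tens-digits-known"},
     "add": {"answer-known"},
     "new_goal": "done"},
]

def run(goal, working_memory):
    while goal != "done":
        # "Pattern matching": find rules whose goal and conditions are satisfied.
        matching = [r for r in RULES
                    if r["goal"] == goal and r["conditions"] <= working_memory]
        if not matching:
            break                        # impasse: no rule applies
        rule = matching[0]               # conflict resolution: take the first match
        working_memory |= rule["add"]    # the rule's action updates working memory
        goal = rule["new_goal"]
        print("fired:", rule["name"])
    return working_memory

run("add-columns", {"ones-digits-known", "tens-digits-known"})
# fired: add-ones
# fired: add-tens
```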

Cognitive architectures are characterized by the fulfillment of certain criteria, the Core Cognitive Criteria (CCC). These are:

  • Suitable representational data structure(s)
  • Support of classification
  • Support of the Frege principle (compositionality)
  • Solving the binding problem
  • Productivity
  • Performance
  • Syntactic generalization
  • Robustness
  • Adaptability
  • Memory consumption
  • Scalability
  • Independent gain in knowledge (logical reasoning, recognition of correlations)
  • Triangulation (merging data from different sources)
  • Compactness (the simplest possible basic structure)

A computer system that satisfies these properties is IBM's DeepQA.

Language and Cognition

Language proficiency is one of the most outstanding cognitive abilities of human beings. Moreover, language is a prerequisite for having some other cognitive abilities: without language, at least, many thoughts could not be thought and many problems could not be solved. Language has therefore played a central role in cognitive science. On the one hand there is the question of how human language proficiency is possible at all; on the other, the question of how machines can be endowed with language proficiency.

The human capacity for language

How is it that people are usually capable of learning languages? Until the twentieth century, the prevailing view was that language acquisition is explained by filtering out the rules of the language in dialogue with other people. Such a position, called "cognitivism", was held, for example, by Jean Piaget: according to it, the ability to use language is derived from the general ability to think. Noam Chomsky was the first to oppose this theory, with his so-called "nativist" position. Chomsky claims that people are genetically equipped with a language organ that makes language acquisition possible. This language organ is located in the brain, but not as a sharply circumscribed neural region.

Chomsky argues that language acquisition cannot be explained by a cognitivist approach, because the linguistic input from other people is not sufficient to determine the rules of correct speech. First, the spoken input is often ungrammatical and thus deficient. Second, the input would permit grammatical errors in learning children which they in fact do not make. Chomsky concludes that there must be innate linguistic knowledge that can be drawn on during language acquisition. This innate knowledge is, in particular, grammatical knowledge: all people are endowed with a universal grammar from birth.

Chomsky's hypotheses were heavily criticized in the scientific debate of the 1960s and 1970s known as the "Linguistics Wars": his syntax-oriented interpretive semantics by George Lakoff's generative semantics, and his universal grammar by the so-called linguistic relativity of Benjamin Whorf.

Since the 1980s, research has increasingly returned to concepts that, similarly to Piaget, place socialization at the center of language acquisition. Chomsky's approach (see also: the debate over semantic theory) is, like all traditional "in-the-head" philosophy, put into perspective by constructivist concepts and neurobiological models:

According to Humberto Maturana and Francisco Varela (see also: The Tree of Knowledge, El árbol del conocimiento, 1984), the brain is not structured as an input/output model but, through a network of a hundred billion interneurons that interconnect millions of motor and sensory nerve cells, has the capacity for massive parallel processing. A representationist idea, in which a concept is depicted in the brain, is hardly tenable for Maturana and Varela, since at the switching points hundreds of neurons from other parts of the nervous system converge with the most varied effects and lead to superpositions. The nervous system does not operate with representations of an independent external world. Words as names of objects or situations in the world do not do justice to the fact of structural coupling; rather, they are ontologically determined coordinations of behavior. For Maturana and Varela, language does not arise in a unified design (it is not part of the brain) but is learned as variable communicative behavior through the coordination of actions (language is part of the environment, the so-called "domain of language"): our shared being-in-language is what we experience as consciousness, as our "mind" and our "I".

Dialogue and expert systems

Attempts to equip machines with linguistic abilities often result in dialogue systems. A dialogue system is usually a computer program with which one can converse via keyboard. One of the first successful dialogue systems was ELIZA by Joseph Weizenbaum from 1966. ELIZA simulated a psychotherapist. Through the skillful use of phrases such as "Tell me more about X" or "You often think of X", ELIZA could often deceive test subjects for a long time about its non-human nature. Some test subjects even felt so well understood that they wanted to talk with ELIZA about their problems privately, outside the test situation. If ELIZA is asked questions that do not fit the context of a therapy session, however, it cannot give any reasonable answers.
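
The kind of phrase substitution ELIZA relied on can be imitated in a few lines. The following Python sketch is a drastically simplified, invented reimplementation of the idea, not Weizenbaum's original script: it searches for a keyword pattern, reflects part of the user's sentence back, and otherwise falls back on a stock phrase.

```python
import random
import re

# Invented ELIZA-style rules: each regex captures part of the input, and the
# response templates reflect the captured text back at the user.
RULES = [
    (re.compile(r"i am (.*)", re.IGNORECASE),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"i feel (.*)", re.IGNORECASE),
     ["Tell me more about feeling {0}.", "Do you often feel {0}?"]),
    (re.compile(r"my (.*)", re.IGNORECASE),
     ["Tell me more about your {0}.", "Why is your {0} important to you?"]),
]
FALLBACKS = ["Please go on.", "I see. Tell me more.", "Why do you think that is?"]

def respond(utterance):
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)

print(respond("I feel tired of my work."))
# e.g. "Tell me more about feeling tired of my work."
```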

Related to dialogue systems are expert systems, which now also have numerous commercial applications. Expert systems attempt to store the knowledge of human experts and make it available to users. Applications include automatic medical or technical experts. Such systems presuppose a working knowledge representation through which the program has access to the knowledge. In a comprehensive knowledge representation, the material must be structured in a convenient way, so that the necessary knowledge can be accessed, the relations between the knowledge elements are clear, and the content can be surveyed by the developer and extended if necessary.
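
One common way to organize such a knowledge representation is as a set of if-then rules over facts, which the program chains together until no new conclusions follow. The sketch below is a generic, invented forward-chaining example; the medical-sounding facts and rules are purely illustrative and not taken from any real expert system.

```python
# Invented toy knowledge base: each rule concludes a new fact once all of its
# premises are among the known facts.
RULES = [
    ({"fever", "cough"}, "respiratory-infection"),
    ({"respiratory-infection", "shortness-of-breath"}, "refer-to-specialist"),
]

def forward_chain(facts):
    """Repeatedly apply the rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"fever", "cough", "shortness-of-breath"})))
# ['cough', 'fever', 'refer-to-specialist', 'respiratory-infection', 'shortness-of-breath']
```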

The Turing Test

→ Main article: Turing test

The fascination of dialogue systems is closely connected with a thought experiment formulated by the computer pioneer Alan Turing in 1950. Turing sought a clear criterion for deciding when a computer may be considered intelligent. His answer was the famous Turing test: a person conducts a dialogue with a computer, via screen and keyboard. The computer may be considered intelligent precisely when the person finds it difficult to decide whether they are conversing with a human or with a computer program.

Today's dialogue systems are still far from being able to pass the Turing test. This is not surprising when one considers everything a program would have to be capable of in order to pass it: it would, for example, have to explain jokes, understand allusions and irony, and formulate questions and answers appropriate to the context. There is now the Loebner Prize, endowed with $100,000, for the developer of the first program that passes the Turing test.

The Turing test has attracted much criticism. The best known is probably John Searle's Chinese room argument, which is meant to show that passing the Turing test is not sufficient for understanding language. Imagine being inside a huge library. From outside, a sheet with Chinese characters, which you do not understand, is passed in. Since the books of the library record only sequences of Chinese characters, you can look up the character strings from the sheet. Each string is assigned another character string in the book, which you finally write on the sheet and pass back outside. To a Chinese speaker outside, it appears as if they were chatting with another person who understands Chinese. Yet you yourself do not understand Chinese, and the library does not understand Chinese either. So a system could pass the Turing test without understanding the slightest bit of what was said.

Connectionism

→ Main article: connectionism

In cognitive science, the development of connectionism has led to major changes. While in classical artificial intelligence, in keeping with the computer model of the mind, cognitive abilities were simulated with a symbolic programming language, connectionism works with artificial neural networks. An artificial neural network is an interconnection of simple units, the so-called artificial neurons. The neurons can pass their activity on to neighboring neurons. For a given input, complex excitation patterns can thus arise, which in turn generate an output.
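
A single artificial neuron of the kind meant here can be written as a weighted sum of its inputs followed by a threshold. The miniature example below uses arbitrarily chosen weights and a threshold purely for illustration; with these values the unit happens to compute a logical AND of its two inputs.

```python
def neuron(inputs, weights, threshold):
    """Simple threshold unit: fire (1) if the weighted input sum reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two input units feeding one output unit; weights and threshold are invented.
for pattern in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pattern, "->", neuron(pattern, weights=(0.6, 0.6), threshold=1.0))
# (0, 0) -> 0, (0, 1) -> 0, (1, 0) -> 0, (1, 1) -> 1
```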

The concept of neural networks was developed in 1943 by Warren McCulloch and Walter Pitts. In 1949 the psychologist Donald O. Hebb formulated the Hebbian learning rule, which can be integrated into the concept of neural networks. According to the Hebbian rule, learning can be described by weighting the individual connections between neurons: learning takes place through changes in the weights between the neurons. Despite this early development toward a model of learning neural networks, cognitive science remained limited for a long time to the symbol-processing approach (GOFAI).
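
Hebb's rule can be stated compactly: a connection weight grows in proportion to the joint activity of the two neurons it links (delta_w = learning_rate * pre * post). The Python sketch below uses an invented learning rate and invented activity patterns to show how repeated co-activation strengthens exactly those connections.

```python
def hebbian_update(weights, pre, post, learning_rate=0.1):
    """Strengthen each weight in proportion to the product of presynaptic
    and postsynaptic activity: delta_w = learning_rate * pre * post."""
    return [w + learning_rate * x * post for w, x in zip(weights, pre)]

weights = [0.0, 0.0, 0.0]
# Invented training data: the first two input neurons are repeatedly active
# together with the output neuron, the third one never is.
for pre, post in [([1, 1, 0], 1)] * 5:
    weights = hebbian_update(weights, pre, post)

print(weights)  # approximately [0.5, 0.5, 0.0]: only co-active connections grew
```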

Only since the 1980s has cognitive science made increasing use of neural networks again. This is due in particular to the fact that neural networks are able to handle tasks with which the symbol-processing approach had little success, such as pattern recognition or movement. This development is also of theoretical importance: connectionism no longer recognizes the distinction between software and hardware that was so important for classical cognitive science.

Cognitive Science at Universities

In the United States, but also in the United Kingdom, Australia, and the Netherlands, cognitive science is a widespread and accepted field of study. Influential institutions include Rutgers University, Tufts University, the University of California, San Diego, and the University of California, Berkeley.

In Germany, by contrast, cognitive science is still not very widespread as a field of study. The University of Osnabrück has its own institute of cognitive science with bachelor's, master's, and doctoral programs; at the University of Tübingen, a bachelor's and master's degree program in cognitive science, offered by the Faculty of Mathematics and Natural Sciences, was established in the winter semester of 2009/10. Cognitive science can be studied as a minor subject at the Albert-Ludwigs-Universität Freiburg and at the University of Potsdam. Since the winter semester of 2012/2013, an M.Sc. degree program has also been offered in Freiburg. Related programs are the bachelor's degree program in Cognitive Informatics at Bielefeld University, the bachelor's degree program "Philosophy - Neuroscience - Cognition" at the Otto von Guericke University Magdeburg, and MEi:CogSci, the joint "Middle European interdisciplinary master programme in Cognitive Science" offered jointly by the universities of Vienna, Bratislava, Budapest, Ljubljana, and Zagreb. At the University of Duisburg-Essen there are bachelor's and master's degree programs in "Applied Cognitive and Media Studies".
