Artificial intelligence

Artificial intelligence (AI) is a branch of computer science that deals with the automation of intelligent behavior. The term cannot be delimited precisely, since a precise definition of intelligence itself is lacking. Nevertheless, it is used in research and development.

In general, "artificial intelligence" or "AI" denotes the attempt to replicate a human-like intelligence, that is, to build a computer or write a program that can work on problems independently. Often, however, the term also refers to an effective imitation, so-called fake intelligence, particularly in computer games, where simple algorithms are usually used to simulate apparently intelligent behavior.

Overview

The understanding of the term artificial intelligence often reflects the Enlightenment idea of "man as machine", whose imitation is the goal of so-called strong AI: to create an intelligence that, like a human being, can think creatively and solve problems, and that is characterized by a form of consciousness or self-awareness as well as emotions. After decades of research, the goals of strong AI remain visionary.

In contrast to strong AI, weak AI aims to solve concrete application problems. Of particular interest are applications whose solution, by general understanding, seems to require some form of "intelligence". Ultimately, weak AI is concerned with simulating intelligent behavior by means of mathematics and computer science; it is not concerned with creating consciousness or with a deeper understanding of intelligence. While strong AI has to date failed on its philosophical questions, significant progress has been made in recent years on the side of weak AI.

In addition to results from core computer science, results from psychology, neurology and neuroscience, mathematics and logic, communication science, philosophy, and linguistics have been incorporated into AI. Conversely, AI has in turn influenced other fields, especially the neurosciences. This is reflected in the establishment of neuroinformatics, a field of biologically oriented computer science, and of computational neuroscience. In addition, the entire field of cognitive science should be mentioned, which, in cooperation with cognitive psychology, relies significantly on the results of artificial intelligence.

It can be stated that AI is not a closed field of research. Rather, techniques from different disciplines are used without these necessarily being connected with each other. Artificial neural networks, for example, are techniques that have been developed since the mid-20th century building on neurophysiology.

History

On 13 July 1956, a famous conference began at Dartmouth College, organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. McCarthy had coined the term "artificial intelligence" in 1955, in the grant application to the Rockefeller Foundation, as the theme of this conference. The Dartmouth Conference in the summer of 1956 was the first conference devoted to the topic of artificial intelligence.

Building on the work of Alan Turing (among others the essay Computing Machinery and Intelligence), Allen Newell (1927–1992) and Herbert A. Simon (1916–2001) of Carnegie Mellon University in Pittsburgh formulated the Physical Symbol System Hypothesis, according to which thinking is information processing, information processing is computation, that is, symbol manipulation, and the brain as such is irrelevant to thinking: "Intelligence is mind implemented by any patternable kind of matter."

This view, that intelligence is independent of the carrier substance, is shared by proponents of the strong AI thesis, such as Marvin Minsky (born 1927) of the Massachusetts Institute of Technology (MIT), one of the pioneers of AI, for whom "the goal of AI is the overcoming of death", or by the robotics specialist Hans Moravec (born 1948) of Carnegie Mellon University, who in his book Mind Children describes the scenario of an evolution of post-biological life: a robot transfers the knowledge stored in a human brain into a computer, so that the biomass of the brain becomes redundant and a post-human era can begin, in which the stored knowledge remains accessible for as long as desired.

The initial phase of AI in particular was characterized by almost boundless expectations concerning the ability of computers "to solve problems whose solution requires intelligence when they are performed by humans" (Minsky). In 1957, Simon predicted, among other things, that within the next ten years a computer would become world chess champion and would discover and prove an important mathematical theorem; predictions that did not come true, and which Simon repeated in 1990, though without a time limit. After all, in 1997 the Deep Blue system developed by IBM succeeded in defeating the world chess champion Garry Kasparov in a six-game match.

In the 1960s, Newell and Simon developed the General Problem Solver, a program that was supposed to solve arbitrary problems with simple methods; the project was eventually discontinued after almost ten years of development. In 1958, John McCarthy proposed bringing all human knowledge into a homogeneous, formal representation, first-order predicate logic. The idea was to construct theorem provers that combine symbolic expressions in order to reason about the world's knowledge.

In the late 1960s, Joseph Weizenbaum (1923–2008) of MIT developed, using a relatively simple strategy, the program ELIZA, which simulates the dialogue of a psychotherapist with a patient. The impact of the program was overwhelming. Weizenbaum was surprised that one can, in a relatively simple way, give people the illusion of an animate conversation partner. "If you misunderstand the program, you can consider it a sensation," Weizenbaum later said about ELIZA. AI achieved successes in some fields, for example in strategy games (chess, checkers, etc.), in mathematical symbol processing, in the simulation of robots, in proving logical and mathematical theorems, and finally in expert systems. In an expert system, the rule-based knowledge of a particular domain is formally represented.
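
ELIZA's "relatively simple strategy" amounts to keyword spotting and pronoun reflection. A minimal sketch in that spirit (the rules and reflections below are invented illustrations, not Weizenbaum's original script):

```python
import re

# Reflect first-person words so the reply mirrors the user (illustrative subset).
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# A few keyword rules in the style of ELIZA's DOCTOR script (invented examples).
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."  # fallback when no keyword matches

print(respond("I need my coffee"))  # → Why do you need your coffee?
```

The illusion rests entirely on such surface transformations; no model of meaning is involved.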

For concrete questions, the system then applies these rules automatically, also in combinations that were not explicitly anticipated (by the human expert) in advance. The rules applied for a specific problem solution can in turn be displayed, that is, the system can "explain" its result. Individual knowledge elements can be added, changed, or deleted; modern expert systems have comfortable user interfaces.
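
Such rule application with an explanation trace can be sketched as simple forward chaining (the toy rules and fact names below are invented for illustration):

```python
# Rules: (premises, conclusion). Invented toy knowledge base, not a real expert system.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_breath"}, "see_doctor"),
]

def forward_chain(facts):
    """Apply rules until no new facts arise; record which premises derived each fact."""
    facts = set(facts)
    explanation = {}
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                explanation[conclusion] = premises  # lets the system "explain" its result
                changed = True
    return facts, explanation

facts, why = forward_chain({"fever", "cough", "short_breath"})
print("see_doctor" in facts)      # → True
print(sorted(why["see_doctor"]))  # → ['flu_suspected', 'short_breath']
```

Note how the second rule fires on a fact the first rule derived, a combination nobody wrote down explicitly.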

One of the best-known expert systems was MYCIN, developed in the early 1970s by T. Shortliffe at Stanford University to support diagnostic and therapeutic decisions for blood infections and meningitis. An evaluation attested that its decisions were as good as those of an expert in the relevant field and better than those of a non-expert. However, when data for a cholera case (known to be an intestinal infection, not a blood infection) were entered, the system responded with diagnostic and therapeutic proposals for a blood infection; that is, MYCIN did not recognize the limits of its competence. This cliff-and-plateau effect is not untypical of expert systems, which are highly specialized on a narrow field of knowledge.

In the 1980s, parallel to major advances in hardware and software, AI was assigned the role of a key technology, particularly in the field of expert systems. A variety of industrial applications were hoped for, as well as, in the longer term, the replacement of "monotonous" human labor (and its costs) by AI-controlled systems. After many of these predictions could not be fulfilled, however, industry and research funding reduced their commitment.

At the same time, neural networks brought a new perspective on AI to light, triggered among other things by the work of the Finnish engineer Teuvo Kohonen. In this area, weak AI let go of concepts of "intelligence" and instead analyzed, starting from neurophysiology, the information architecture of the human and animal brain. Modeling in the form of artificial neural networks then illustrated how complex pattern processing can arise from a very simple basic structure. Neuroinformatics has developed as a scientific discipline for the study of these processes.

It became clear that this type of learning, in contrast to expert systems, is not based on the derivation and application of rules. It also suggests that the special abilities of the human brain are not reducible to a rule-based concept of intelligence. The implications of these insights for AI research, but also for learning theory, teaching methodology, the relationship to consciousness, and other areas, are still being discussed.

Within AI, numerous subdisciplines have since emerged, such as special languages and concepts for representing and applying knowledge, models for questions of revisability, uncertainty, and imprecision, and machine learning techniques. Fuzzy logic has established itself as another form of weak AI, for example in machine control.
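
How fuzzy logic replaces sharp thresholds by degrees of membership can be sketched with triangular membership functions (the temperature ranges are chosen arbitrarily for illustration):

```python
def triangle(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 for x = b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Degrees of membership for an invented temperature scale (degrees Celsius).
def cold(t):
    return triangle(t, -10.0, 0.0, 15.0)

def warm(t):
    return triangle(t, 10.0, 20.0, 30.0)

# 12 °C is simultaneously a little "cold" and a little "warm":
t = 12.0
print(round(cold(t), 2), round(warm(t), 2))  # → 0.2 0.2
```

A fuzzy controller would combine such membership degrees through fuzzy rules instead of switching abruptly at a single threshold.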

Other successful AI applications lie in the fields of natural-language interfaces, sensor technology, and robotics.

Subregions

Knowledge-Based Systems

Knowledge-based systems model a form of rational intelligence for so-called expert systems. These are able, on the basis of formalized knowledge and logical conclusions drawn from it, to deliver answers to a user's questions. Typical applications are found in the diagnosis of diseases or in finding and eliminating faults in technical systems.

Speech recognition

By means of linguistic intelligence, it is possible to convert a written text into speech (speech synthesis) and, conversely, to record a spoken text in writing (speech recognition). This automatic language processing can be extended, for example by latent semantic analysis (latent semantic indexing, LSI for short), so that meaning can be attached to words and texts.
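
Latent semantic analysis starts from a term-document matrix in which words that occur in similar documents receive similar vectors. A minimal sketch of that matrix and the cosine similarity it builds on (full LSI would additionally apply a singular value decomposition, which is omitted here; the corpus is invented):

```python
from math import sqrt

# Toy corpus (invented); each document is treated as a bag of words.
docs = [
    "the ship sailed the sea",
    "the boat crossed the sea",
    "the compiler parsed the code",
]

# Term-document matrix: one row (vector) per word, one component per document.
def word_vector(word):
    return [d.split().count(word) for d in docs]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

# Words that co-occur in similar documents get similar vectors:
print(cosine(word_vector("ship"), word_vector("sea")) >
      cosine(word_vector("ship"), word_vector("code")))  # → True
```

LSI then projects these vectors into a lower-dimensional "latent" space, which smooths over synonymy and noise.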

Pattern analysis and pattern recognition

Visual intelligence makes it possible to recognize and analyze images or shapes. Application examples include handwriting recognition, identification of persons by comparison of fingerprints or the iris, industrial quality control, and manufacturing automation (the latter in combination with findings from robotics).
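
A common baseline for such pattern recognition is nearest-neighbor classification over extracted feature vectors; a minimal sketch with invented 2-D features:

```python
from math import dist

# Labeled 2-D feature points (invented), e.g. shape features after preprocessing.
training = [
    ((1.0, 1.0), "circle"),
    ((1.2, 0.9), "circle"),
    ((5.0, 5.0), "square"),
    ((5.1, 4.8), "square"),
]

def nearest_neighbor(point):
    """Classify a feature vector by the label of its nearest training example."""
    return min(training, key=lambda item: dist(point, item[0]))[1]

print(nearest_neighbor((1.1, 1.1)))  # → circle
```

Real systems differ mainly in how the feature vectors are obtained from the raw image, not in this basic decision rule.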

Robotics

Robotics is concerned with manipulative intelligence. With the help of robots, dangerous activities or always-identical manipulations, as they occur for example in welding or painting, can be automated.

The basic idea is to create systems that can reproduce the intelligent behavior of living beings.

Methods

AI methods can be broadly classified along two dimensions: symbolic vs. neural AI, and simulation methods vs. phenomenological methods. The relationships are illustrated in the following graphic:

Neural AI follows a bottom-up approach and seeks to emulate the human brain as precisely as possible. Symbolic AI, conversely, pursues a top-down approach and approaches intelligence from a conceptual level. The simulation method stays as close as possible to the actual cognitive processes of humans, whereas the phenomenological approach is concerned only with the result.

Many older methods developed in AI are based on heuristic solution procedures. More recently, mathematically well-founded approaches from statistics, mathematical programming, and approximation theory have come to play an important role.

The concrete techniques of AI can be roughly divided into groups:

Search

AI frequently deals with problems in which specific solutions are searched for, and various search algorithms are employed. A prime example of search is the process of pathfinding, which plays a central role in many computer games and is based on search algorithms such as the A* algorithm.
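
A minimal A* pathfinder on a 4-connected grid with the Manhattan-distance heuristic (the grid is an invented example):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a grid of 0 (free) / 1 (wall); returns the path length or None."""
    def h(p):  # Manhattan distance: admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # entries: (f = g + h, g, position)
    best_g = {start: 0}
    while open_heap:
        _, g, pos = heapq.heappop(open_heap)
        if pos == goal:
            return g
        if g > best_g.get(pos, float("inf")):
            continue  # stale heap entry
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # goal unreachable

grid = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
print(a_star(grid, (0, 0), (2, 0)))  # → 6 (around the wall)
```

The heuristic steers the search toward the goal without sacrificing optimality, which is exactly why A* dominates game pathfinding.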

Planning

In addition to the search for solutions, planning represents an important aspect of AI. The planning process is divided into two phases: the formulation of the goal and the formulation of the problem.

Planning systems then create, from such problem descriptions, sequences of actions that agent systems can execute in order to achieve their goals.
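
Such a planning system can be sketched as breadth-first search over STRIPS-style actions with preconditions, add lists, and delete lists (the toy domain below is invented):

```python
from collections import deque

# STRIPS-style actions (invented toy domain):
# (name, preconditions, facts added, facts deleted)
ACTIONS = [
    ("pick_up", {"on_table", "hand_empty"}, {"holding"}, {"on_table", "hand_empty"}),
    ("put_down", {"holding"}, {"on_table", "hand_empty"}, {"holding"}),
]

def plan(state, goal):
    """Breadth-first search through the state space; returns a list of action names."""
    state = frozenset(state)
    frontier = deque([(state, [])])
    seen = {state}
    while frontier:
        current, actions = frontier.popleft()
        if goal <= current:  # all goal facts hold
            return actions
        for name, pre, add, delete in ACTIONS:
            if pre <= current:  # action is applicable
                nxt = frozenset((current - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, actions + [name]))
    return None  # no plan exists

print(plan({"on_table", "hand_empty"}, {"holding"}))  # → ['pick_up']
```

Real planners replace the blind breadth-first search with heuristics, but the state-transition model is the same.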

Optimization methods

Tasks in AI often lead to optimization problems. Depending on their structure, these are solved either with search algorithms from computer science or, increasingly, by means of mathematical programming. Well-known heuristic search methods from the context of AI are evolutionary algorithms.
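
A minimal evolutionary algorithm, here a (1+1) scheme on the standard OneMax toy problem: flip one random bit and keep the child whenever it is not worse.

```python
import random

def onemax(bits):
    """Toy fitness function: the number of ones in the bit string."""
    return sum(bits)

def one_plus_one_ea(n_bits=20, steps=300, seed=0):
    """(1+1) evolutionary algorithm: mutate one bit, keep the child if not worse."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(steps):
        child = parent[:]
        child[rng.randrange(n_bits)] ^= 1    # mutation: flip one random bit
        if onemax(child) >= onemax(parent):  # selection: greedy acceptance
            parent = child
    return onemax(parent)

print(one_plus_one_ea())  # best OneMax fitness found (maximum possible is 20)
```

Because a child is only accepted if it is at least as fit, the best fitness never decreases; larger populations and crossover extend this scheme to full genetic algorithms.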

Reasoning

One question of AI is the development of knowledge representations that can then be used for automatic logical reasoning. Human knowledge is formalized as far as possible in order to bring it into a machine-readable form. The developers of various ontologies have set themselves this goal.

Early on, AI was concerned with constructing automatic proof systems that would assist mathematicians and computer scientists in proving theorems and in programming (logic programming). Two difficulties became apparent.

Another form of logical inference is induction (inductive inference, inductive logic), in which examples are generalized to rules (machine learning). Here, too, the type and richness of the knowledge representation play an important role. A distinction is made between symbolic systems, in which the knowledge (both the examples and the induced rules) is explicitly represented, and subsymbolic systems such as neural networks, which can be "trained" to predictable behavior but do not allow insight into the learned solutions.
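
Generalizing examples to a rule can be illustrated with the classic Find-S procedure, which computes the most specific conjunctive hypothesis consistent with the positive examples (the toy attributes are invented):

```python
# Positive examples: attribute tuples (sky, temperature, wind) for days on
# which some activity was possible (invented toy data).
positives = [
    ("sunny", "warm", "strong"),
    ("sunny", "warm", "weak"),
]

def find_s(examples):
    """Generalize the most specific conjunctive hypothesis; '?' matches anything."""
    hypothesis = list(examples[0])
    for example in examples[1:]:
        for i, value in enumerate(example):
            if hypothesis[i] != value:
                hypothesis[i] = "?"  # generalize the attribute that differs
    return tuple(hypothesis)

print(find_s(positives))  # → ('sunny', 'warm', '?')
```

The induced rule here is fully symbolic and readable, in contrast to the weights of a trained neural network.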

Approximation methods

In many applications, the task is to derive a general rule from a set of data (machine learning). Mathematically, this leads to an approximation problem. In the context of AI, artificial neural networks have been proposed for this. In practical applications, alternative methods that are mathematically easier to analyze are often used instead.
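
The approximation view can be sketched with the simplest case: fitting a linear rule to data by gradient descent on the mean squared error (the data are generated from an invented rule, y = 2x + 1, which the learner must recover).

```python
# Toy data from the (hidden) rule y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(-5, 6)]

def fit(data, lr=0.01, epochs=2000):
    """Fit y = a*x + b by gradient descent on the mean squared error."""
    a, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        grad_a = sum(2 * (a * x + b - y) * x for x, y in data) / n
        grad_b = sum(2 * (a * x + b - y) for x, y in data) / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

a, b = fit(data)
print(round(a, 2), round(b, 2))  # → 2.0 1.0
```

A neural network generalizes exactly this scheme: more parameters, nonlinear functions, and the same gradient-based minimization of an approximation error.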

Applications

In the past, knowledge from artificial intelligence has often passed over time into other areas of computer science: once a problem is understood well enough, AI turns to new tasks. Compiler construction and computer algebra, for example, were originally attributed to artificial intelligence.

Numerous applications have been developed on the basis of techniques that once were, or still are, research areas of AI. A few examples:

  • Search engines make it easier to cope with the flood of information on the Internet.
  • In the exploration of oil wells, in the control of Mars robots, or in medical diagnosis, expert systems are used.
  • Machine translation is widespread. Its results are not yet comparable with those of human translators, but it saves a great deal of time and money.
  • Data-mining and text-mining methods extract key information from unstructured or only weakly structured texts, as needed, for example, when creating content analyses.
  • Information retrieval aims at finding and merging existing complex structures in very large data sets; one application area is Internet search engines.
  • The analysis and forecasting of stock price developments is occasionally supported by artificial neural networks.
  • Optical character recognition reads printed texts reliably.
  • Handwriting recognition is used millions of times over in PDAs.
  • Speech recognition makes it possible to dictate a text.
  • Computer algebra systems such as Mathematica or Maple support mathematicians, scientists, and engineers in their work.
  • Computer vision systems monitor public spaces and production processes and safeguard road traffic.
  • In computer games, algorithms developed in AI serve to make computer-controlled players act intelligently. Examples of such applications are Deep Blue, a chess computer that defeated world champion Garry Kasparov in 1997, and the program Chinook, which has been world checkers champion since 1994.
  • For crowd simulations in safety planning or computer animation, the realistic behavior of (human) masses is computed.
  • A knowledge-based system, or more specifically an expert system, provides solutions to complex questions. Examples of such applications are the computer program Watson, which in 2011 won the quiz show Jeopardy! against the two most successful players, and the knowledge base Cyc.

Turing Test

In order to have a criterion for when a machine simulates an intelligence equivalent to a human's, Alan Turing proposed the test named after him, the Turing Test. In it, a person poses arbitrary questions via a terminal to another human and to an AI, without knowing which is which. The questioner must then decide which conversation partner was the machine and which the human. If the machine cannot be distinguished from the human, then, according to Turing, the machine is intelligent. So far, no machine has passed this test. Since 1991, the Loebner Prize has existed for the Turing Test.

Adjacent Sciences

Linguistics

The interpretation of human language by machines plays a decisive role in AI research. Any results of the Turing test, in particular, arise in dialogue situations that must be mastered.

With its grammar models and psycholinguistic semantic models, such as feature semantics or prototype semantics, linguistics provides foundations for the machine "understanding" of complex natural-language phrases.

A subfield of linguistics, and at the same time an interface between it and computer science, is computational linguistics, which deals among other things with machine language processing and artificial intelligence.

Psychology

Psychology deals, among other things, with the concept of intelligence.

Psychotherapy

In psychotherapy research, experimental applications of artificial intelligence have existed for some time, intended to bridge gaps and bottlenecks in psychotherapeutic care and to reduce costs.

Philosophy

The philosophical aspects of the AI problem are among the most far-reaching in all of computer science.

The answers given to the central questions in this area reach deep into ontological and epistemological issues that have occupied human thought since the very beginnings of philosophy. Whoever gives such answers must also draw the consequences from them, for humanity and for themselves. Not infrequently, one proceeds conversely and transfers answers found before the development of artificial intelligence onto it. As it turned out, however, artificial intelligence has prompted many researchers to view questions such as the relationship between matter and mind, the origins of consciousness, the limits of knowledge, the problem of emergence, and the possibility of non-human intelligence in a new light, and in part to re-evaluate them.

A view committed to metaphysical or even idealistic thinking holds it (in the sense of weak AI) to be impossible that machines could ever possess more than merely simulated consciousness, with genuine knowledge and freedom. From an ontological standpoint, the American philosopher Hubert Dreyfus criticized the view of strong AI. Drawing on the ontology of the "worldliness of the world" developed by Martin Heidegger in his work Being and Time, Dreyfus attempts to show that one cannot get behind the world as a meaningful whole of significance: meaning, that is, the relations of things in the world to one another, is an emergent phenomenon, for there is no "some meaning" to which "more meaning" is then added. From this, however, it follows that the task of programming the meaningful relations between the things of the world into a computer proves to be an actually impossible or endless undertaking. This is because meaning cannot be produced by adding up initially meaningless elements.

An evolutionary-progressive line of thought, by contrast, considers it possible (in the sense of strong AI) that artificial intelligence systems could one day surpass humans in precisely what is currently regarded as specifically human. On the one hand, this entails the risk that such AI machines could be misused, for example for military purposes. On the other hand, this technology offers the chance to solve problems whose solution is impossible for humans because of their limited minds (see also technological singularity).

Further points of contact lie in analytic philosophy; Wittgenstein's Tractatus Logico-Philosophicus can serve as an example here.

Computer science

Naturally, AI is closely interconnected with the other disciplines of computer science. One attempt at demarcation could be based on evaluating the achieved results with regard to their degree of intelligence. For this purpose, it seems useful to distinguish several dimensions of intelligence. In the following, these dimensions are listed; the first three appear to be necessary conditions.

The more of these characteristics an application fulfills, the more intelligent it is. An application that can be classified as intelligent on this scale is more likely to be assigned to AI than to another discipline of computer science.

Representation in film and literature

Since classical modernism, artificial intelligence has been treated in art, film, and literature. In the artistic discourse, in contrast to AI research, where the technical realization is paramount, it is above all the moral, ethical, and religious aspects and consequences of a non-human "machine intelligence" that are of interest.

During the Renaissance, the term homunculus was coined for an artificially created miniature human being without a soul. In the literature of the 18th and 19th centuries, human-like machines appeared, for example in E. T. A. Hoffmann's Der Sandmann and Jean Paul's Die Automaten.

In the 20th and 21st centuries, science fiction takes up the subject in film and prose in many ways. In 1920, the writer Karel Čapek coined the term "robot" in his stage play R.U.R.; in 1926, Fritz Lang treated the theme in Metropolis, in which robots take over the work of humans. In various works, film audiences were presented with robots as intelligent and sophisticated machines with very different personalities: they are developed to be used for good purposes but frequently turn into dangerous machines that develop hostile plans against humans. In the course of film history, they increasingly become self-aware beings who want to subjugate humanity.

Some significant examples from the recent literature and film history are:

  • HAL 9000 in 2001: A Space Odyssey (1968)
  • Colossus and Guardian in Colossus (1970)
  • The talking bombs in Dark Star (1974)
  • The Master Control Program in Tron (1982)
  • Skynet in The Terminator (1984)
  • All programs (Oracle, Architect, Agents, etc.) in The Matrix (1999)
  • The main character in A.I. – Artificial Intelligence by Steven Spielberg (2001)
  • The Red Queen in Resident Evil (2002–2012)
  • Sonny in I, Robot (2004)
  • Deep Thought in The Hitchhiker's Guide to the Galaxy (2005)
  • Jarvis in Iron Man (2008)
  • The computer game Erebos in the novel of the same name by Ursula Poznanski (2010)