Theoretical computer science
Theoretical computer science is concerned with the abstraction, modeling, and fundamental questions related to the structure, processing, transmission, and reproduction of information. Its subjects include automata theory, formal language theory, computability and complexity theory, but also logic and formal semantics, as well as information, algorithm, and database theory.
Theoretical computer science is classified among the structural sciences and provides foundations for the definition, verification, and execution of programs written in programming languages, for the construction of compilers for programming languages, and for the mathematical formalization and investigation of mostly discrete problems and their models. Using mathematical abstraction of the properties of the models obtained, useful definitions, theorems, proofs, algorithms, applications, and solutions to problems have been derived. With its timeless truths and mathematical methods, theoretical computer science forms a formal skeleton that pervades practical computer science with concrete implementations. By means of computability theory, theoretical computer science has identified many unsolvable problems, and complexity theory makes it possible to separate, often with a constructive proof, the problems that are efficiently solvable in practice from those for which the opposite holds.
Among the constructive methods of theoretical computer science are the design of formal systems, automata, graphs, and syntax diagrams, as well as the definition of grammars and semantics, in order to capture a problem in mathematical expressions and to lift it formally away from the informal level. Such constructs describe the inner logic of a problem with mathematical and logical statements, which permits formal investigation and makes further, potentially new statements and algorithms of the formal models deducible as results, supported by proofs. In addition to the mathematical insights gained, some of the solutions found can be implemented in practice, bringing people the automated advantages of mathematics and semantics through the use of machines.
History of theoretical computer science
Theoretical computer science is closely related to mathematics and logic. In the 20th century it emancipated itself and developed into an independent discipline.
Automata Theory and Formal Languages
Automata theory defines and formalizes automata, or machine models, and deals with their properties and computational power. Among other things, automata theory examines which problems can be solved by the various classes of machines.
The theory of formal languages considers formal grammars and the formal languages generated by these grammars. It deals with syntactic and semantic features of these formal languages over an alphabet. The problem of whether a word belongs to a formal language is solved by machines; there is thus a close relationship between the grammars that generate formal languages and the machines that recognize them.
Most formal languages that occur in practice, such as programming languages, have a simple structure and can be classified according to their complexity into one of the known language classes of the Chomsky hierarchy. The Chomsky hierarchy, due to Noam Chomsky, a pioneer of language theory, consists of four classes. These are, in ascending order of expressive power: the regular languages (type 3), the context-free languages (type 2), the context-sensitive languages (type 1), and the recursively enumerable languages (type 0).
Between the four grammar classes of the Chomsky hierarchy and four classes of machines there is an equivalence in terms of the language classes they generate and recognize: the formal languages generated by the respective grammar classes of the Chomsky hierarchy, as listed above, can be recognized by the corresponding machine classes, and vice versa.
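The machine side of this correspondence for type-3 languages can be sketched with a deterministic finite automaton. The language chosen here, binary strings containing an even number of 1s, is an illustrative regular language, and the helper name `make_dfa` is hypothetical:

```python
# Sketch: a deterministic finite automaton (DFA) recognizing the regular
# language of binary strings with an even number of 1s.
def make_dfa(start, accepting, transitions):
    """Build a recognizer from a start state, accepting states, and a
    transition table mapping (state, symbol) -> next state."""
    def accepts(word):
        state = start
        for symbol in word:
            state = transitions[(state, symbol)]
        return state in accepting
    return accepts

even_ones = make_dfa(
    start="even",
    accepting={"even"},
    transitions={
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    },
)

assert even_ones("101")        # two 1s: accepted
assert not even_ones("1011")   # three 1s: rejected
assert even_ones("")           # zero 1s: accepted
```

An equivalent type-3 grammar generates exactly the same language, which is the equivalence the Chomsky hierarchy asserts for each level.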
Pumping and Jaffe lemmas
Well-known practical tools for characterizing regular and context-free languages are the pumping lemmas, which give a necessary but not sufficient condition for a language generated by a grammar to be regular or context-free. Because of the structure of the lemmas' statements, the pumping lemma for regular languages is also called the uvw theorem and the pumping lemma for context-free languages the uvwxy theorem. Extensions such as Jaffe's lemma, in contrast to the pumping lemmas, provide a sufficient criterion.
Description of Type 2 grammars
The Backus-Naur form (after John W. Backus and Peter Naur), or BNF, is a notational convention for context-free grammars and thus for context-free languages. In practice, BNF is used, for example, to define the syntax of programming languages. The actual syntax of the programming languages Pascal and Modula-2 was defined in the extended Backus-Naur form, EBNF. The extended Backus-Naur form differs from BNF only in a few notational extensions.
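As an illustration, a small EBNF grammar for arithmetic expressions might look as follows. This is a hypothetical example for this article, not taken from the Pascal or Modula-2 reports; braces `{ }` denote repetition, one of EBNF's notational extensions over plain BNF:

```ebnf
expression = term { ( "+" | "-" ) term } ;
term       = factor { ( "*" | "/" ) factor } ;
factor     = number | "(" expression ")" ;
number     = digit { digit } ;
digit      = "0" | "1" | "2" | "3" | "4"
           | "5" | "6" | "7" | "8" | "9" ;
```

Each rule is a context-free production, so the grammar as a whole describes a type-2 language.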
Computability theory

In computability theory, the algorithmic solvability of mathematical problems, that is, their computability, is examined. In particular, computability theory deals with the analysis of the internal structure of problems and with the classification of problems according to different degrees of solvability or their unsolvability.
Intuitive and formal computability and Church's Thesis
Starting from intuitive computability, the informal idea of which problems can be formulated and solved, computability theory developed a formal, mathematical definition of computability, with which mathematical proofs can be carried out that make it possible to verify or falsify statements about computability. Attempts to capture the notion of computability formally led to Church's thesis, which claims that the notion of mathematical computability is captured by the Turing machine and formal models of computation of equal power. The actual findings and statements of computability theory rest on the foundation of these mathematical models of computation and Church's thesis.
Incompleteness, the halting problem and Rice's theorem
With the methods of computability theory, for example, Kurt Gödel's incompleteness theorem can be formulated and proved. Another result of computability theory is the insight that the halting problem is undecidable: no algorithm exists that examines an arbitrary program and decides whether it ever halts on a particular input or not. By Rice's theorem, every non-trivial semantic property of a program in a Turing-complete language is likewise undecidable.
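The practical face of this undecidability can be sketched with a step-bounded interpreter for a tiny counter-machine language (the instruction set and the name `runs_within` are inventions of this sketch). Such an interpreter only semi-decides halting: a "yes" within the bound is definitive, but exhausting the bound proves nothing, and by Turing's result no bound and no cleverer method decides halting in general:

```python
# Minimal counter-machine sketch (hypothetical instruction set):
#   ("inc", r)        increment register r
#   ("dec", r)        decrement register r (floor at 0)
#   ("jnz", r, addr)  jump to addr if register r is nonzero
# A program halts when the instruction pointer runs past the last instruction.

def runs_within(program, registers, max_steps):
    """Return True if the program halts within max_steps, else False.
    False may merely mean 'not yet': this is a semi-decision procedure,
    since the halting problem admits no total, correct decider."""
    regs = dict(registers)
    ip = 0
    for _ in range(max_steps):
        if ip >= len(program):
            return True  # ran past the end: halted
        op = program[ip]
        if op[0] == "inc":
            regs[op[1]] = regs.get(op[1], 0) + 1
            ip += 1
        elif op[0] == "dec":
            regs[op[1]] = max(0, regs.get(op[1], 0) - 1)
            ip += 1
        elif op[0] == "jnz":
            ip = op[2] if regs.get(op[1], 0) != 0 else ip + 1
    return ip >= len(program)

# Counts register "x" down to zero, then halts.
halting = [("dec", "x"), ("jnz", "x", 0)]
# Increments forever: never halts, and no step bound can certify that.
looping = [("inc", "x"), ("jnz", "x", 0)]

assert runs_within(halting, {"x": 3}, 100)       # definitively halts
assert not runs_within(looping, {"x": 0}, 100)   # inconclusive "no"
```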
Complexity theory

Complexity theory examines which resources (such as CPU time and memory) must be expended, and to what extent, in order to solve certain problems algorithmically. As a rule, problems are classified into complexity classes. The best known such classes are probably P and NP (in German literature also written in Fraktur letters: 𝔓 and 𝔑𝔓). P is the class of efficiently solvable problems (more precisely: P is the class of problems that can be decided by a deterministic Turing machine in polynomial time); NP is the class of problems whose solutions can be verified efficiently (or equivalently: NP is the class of problems that can be decided by a nondeterministic Turing machine in polynomial time).
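The asymmetry between solving and verifying can be sketched on SUBSET-SUM, an NP-complete problem. Checking a proposed certificate takes polynomial time, while the naive search inspects all 2ⁿ subsets (the function names here are illustrative, not standard):

```python
from itertools import combinations

def verify(numbers, target, indices):
    """Polynomial-time check of a certificate: a tuple of distinct indices
    whose elements are claimed to sum to the target."""
    return len(set(indices)) == len(indices) and \
           sum(numbers[i] for i in indices) == target

def search(numbers, target):
    """Brute-force search over all 2^n subsets: exponential time.
    No polynomial-time algorithm is known; one would prove P = NP."""
    n = len(numbers)
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if verify(numbers, target, subset):
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = search(nums, 9)
assert cert is not None and verify(nums, 9, cert)
```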
By specifying an algorithm that solves a problem, one can give an upper bound on the resource requirements mentioned above. Finding lower bounds, however, turns out to be much more difficult: one must show that no algorithm whatsoever that uses only a certain amount of resources can solve the problem.
A central question of complexity theory (if not the central one), open for decades, is whether the classes NP and P coincide; solving a single NP-complete problem in deterministic polynomial time would suffice as a proof. Equivalently, one can try to solve an NP-complete problem efficiently on a Turing-equivalent computer (see also Church's thesis).
Parameterized algorithmics is a relatively young branch of theoretical computer science in which it is examined in more detail which instances of NP-complete problems can be solved efficiently.
Formal semantics

Formal semantics deals with the meaning of programs described in a formal language. Expressed mathematically, a semantic function is constructed that maps a given program to the function it computes:

⟦·⟧ : P → (Σ ⇀ Σ)

Here ⟦·⟧ denotes the semantic function, P the set of syntactically correct programs, ⟦p⟧ the (partial) function computed by the program p, and Σ the set of possible memory states.
Depending on the mathematical approach, a distinction is made between:
- Axiomatic semantics
- Denotational semantics
- Dialogical semantics
- Fixed point semantics
- Operational semantics
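In the denotational style, the scheme above can be sketched for a tiny assignment language: the meaning of a program is a function from memory states to memory states, and sequential composition of programs becomes composition of functions. The mini-language and all names here are hypothetical:

```python
# Denotational-style sketch: the meaning of a program is a function
# from memory states (dicts mapping variable names to integers) to
# memory states. Programs of the hypothetical mini-language:
#   ("assign", var, expr)  with expr one of ("const", n), ("var", x),
#                          ("add", e1, e2)
#   ("seq", p1, p2)        sequential composition

def eval_expr(expr, state):
    tag = expr[0]
    if tag == "const":
        return expr[1]
    if tag == "var":
        return state[expr[1]]
    if tag == "add":
        return eval_expr(expr[1], state) + eval_expr(expr[2], state)
    raise ValueError(expr)

def meaning(program):
    """The semantic function: maps a program to a state transformer."""
    tag = program[0]
    if tag == "assign":
        _, var, expr = program
        return lambda state: {**state, var: eval_expr(expr, state)}
    if tag == "seq":
        f, g = meaning(program[1]), meaning(program[2])
        return lambda state: g(f(state))  # run p1, then p2
    raise ValueError(program)

# x := 1 ; y := x + 2
prog = ("seq", ("assign", "x", ("const", 1)),
               ("assign", "y", ("add", ("var", "x"), ("const", 2))))
assert meaning(prog)({}) == {"x": 1, "y": 3}
```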
Information theory

The subject of information theory is the mathematical description of information. The information content of a message is characterized by its entropy. This makes it possible, among other things, to determine the transmission capacity of an information channel, which establishes the connection to coding theory. Information-theoretic methods are also used in cryptography; for example, the one-time pad is an information-theoretically secure encryption method.
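Both notions can be sketched briefly: Shannon entropy measured in bits per symbol from empirical frequencies, and the one-time pad as XOR with a random, message-length, single-use key (function names are illustrative):

```python
import os
from collections import Counter
from math import log2

def entropy(message):
    """Shannon entropy in bits per symbol, from empirical frequencies."""
    counts = Counter(message)
    n = len(message)
    return -sum(c / n * log2(c / n) for c in counts.values())

# A uniform 4-symbol message carries 2 bits per symbol.
assert abs(entropy("abcd") - 2.0) < 1e-9

def one_time_pad(data, key):
    """XOR encryption; with a truly random key, as long as the message
    and used only once, this is information-theoretically secure."""
    assert len(key) >= len(data)
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"attack at dawn"
key = os.urandom(len(msg))            # random, message-length, single-use
ciphertext = one_time_pad(msg, key)
assert one_time_pad(ciphertext, key) == msg   # decryption is the same XOR
```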
Logic

Mathematical logic is used in many ways in theoretical computer science; conversely, this has also provided impulses for mathematical logic. Propositional logic and Boolean algebra are used, for example, to describe circuits; in doing so, basic results of logic such as Craig interpolation are applied. Basic concepts of the theory of programming can be expressed naturally in logic, in addition to the semantics mentioned above, especially in the area of the theory of logic programming. In the area of formal specification, various logics, including predicate logic, temporal logic, modal logic, and dynamic logic, are used to describe the intended behavior of software and hardware systems, which can then be verified by model checking or theorem proving. The logics used in artificial intelligence, such as modal logics with which the knowledge of an agent is represented, are also the subject of theoretical investigations. Combinatory logic is used for the theory of functional programming.
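The use of propositional logic for circuits can be sketched by checking, via an exhaustive truth table, that two circuit descriptions compute the same Boolean function; here De Morgan's law serves as the example (the helper `equivalent` is an invention of this sketch):

```python
from itertools import product

# Propositional formulas as circuit descriptions: check by exhaustive
# truth table that NAND equals De Morgan's rewriting (not a) or (not b).
def equivalent(f, g, n_vars):
    """True iff f and g agree on every assignment of n_vars variables."""
    return all(f(*vals) == g(*vals)
               for vals in product([False, True], repeat=n_vars))

nand      = lambda a, b: not (a and b)
de_morgan = lambda a, b: (not a) or (not b)
assert equivalent(nand, de_morgan, 2)
```

Exhaustive checking takes 2ⁿ assignments; model checkers and SAT solvers exist precisely to do better than this in practice.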