Unix philosophy

The Unix philosophy is a set of rules and approaches to software development based on the experience of leading Unix programmers.

McIlroy: A Quarter Century of Unix

Douglas McIlroy, the inventor of Unix pipes, summed up the philosophy as follows:

  • Write programs that do one thing and do it well.
  • Write programs that work together.
  • Write programs that handle text streams, because that is a universal interface.

This is usually shortened to: "Do one thing and do it well."

The first two tenets in particular are not limited to Unix, but Unix programmers stress all three more strongly than other programmers do.
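These tenets can be seen in miniature even from within a C program: popen() connects a program to a shell pipeline in which small tools cooperate through a plain text stream. The sketch below is only an illustration of the tenets above; count_words is a made-up helper name, and the particular pipeline is chosen for the example.

```c
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>

/* Illustrative sketch: two small programs, printf and wc, are combined
 * through a pipe and exchange plain text.  count_words() returns the
 * word count reported by wc, or -1 on error. */
int count_words(void)
{
    int words = -1;
    /* printf emits a text stream; wc -w consumes it and counts words */
    FILE *p = popen("printf 'one two three' | wc -w", "r");
    if (p == NULL)
        return -1;
    if (fscanf(p, "%d", &words) != 1)
        words = -1;
    pclose(p);
    return words;
}
```

Each tool in the pipeline does one thing; the pipe and the text format are what let them work together.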

Pike: Notes on Programming in C

In his text Notes on Programming in C, Rob Pike proposes the following rules as principles of programming, but they can equally be read as the core of the Unix philosophy:

  • Rule 1: You cannot tell which part of a program will eat up the bulk of the running time. Since bottlenecks often occur in surprising places, do not add a speed hack until you have proven where the bottleneck actually is.
  • Rule 2: Measure the program's running time. Tune for speed only after you have measured, and even then only if the part in question consumes a dominant share of the computing time.
  • Rule 3: Fancy algorithms are slow when the input (see complexity theory) is small, and that is the normal case. Fancy algorithms have large fixed costs. As long as you do not know that the input will often be large, do without fancy algorithms. (And even if it does get large, apply Rule 2 first.)
  • Rule 4: Fancy algorithms are buggier than simple ones, and they are much harder to implement. Use simple algorithms as well as simple data structures.
  • Rule 5: Data dominate. If you have chosen the right data structures and organized things well, the algorithms will almost always suggest themselves. Data structures, not algorithms, are the central theme of programming.
  • Rule 6: There is no Rule 6.

Rules 1 and 2 restate Tony Hoare's famous maxim "Premature optimization is the root of all evil." Ken Thompson formulated Rules 3 and 4 as "When in doubt, use brute force"; both are instances of the KISS principle. Rule 5 comes from Fred Brooks, who mentions it in The Mythical Man-Month; it is often phrased as "write stupid code that uses smart data". Rule 6 is borrowed from the Bruces sketch in Monty Python's Flying Circus (episode 22).

Mike Gancarz: The UNIX Philosophy

In 1994 Mike Gancarz, one of the developers of the X Window System, distilled his own experience with Unix, together with ideas from discussions with colleagues and with people in other fields who also depended on Unix, into The Unix Philosophy. It identifies nine main tenets:

  • Small is beautiful.
  • Make each program do one thing well.
  • Build a prototype as soon as possible.
  • Choose portability over efficiency.
  • Store data in flat text files.
  • Use software leverage to your advantage.
  • Use shell scripts to increase leverage and portability.
  • Avoid captive user interfaces.
  • Make every program a filter.

The following ten less stringent tenets are not universally accepted as part of the Unix philosophy, and some of them still lead to heated debate (for example, the question of monolithic kernels versus microkernels):

  • Allow the user to tailor the environment.
  • Make operating system kernels small and lightweight.
  • Use lowercase and keep it short.
  • Save trees.
  • Silence is golden.
  • Think parallel.
  • The sum of the parts is greater than the whole.
  • Look for the 90 percent solution.
  • Worse is better.
  • Think hierarchically.

Worse is better

Richard P. Gabriel claims that a fundamental advantage of Unix stems from a design philosophy he calls "worse is better". In this philosophy, the simplicity of both the user interface and the implementation is far more important than any other property of the system, including correctness, consistency, and completeness. Gabriel argues that this approach offers substantial advantages for software development, but he also doubts the quality of many results of applying it.

For example, the first Unix systems had a purely monolithic kernel; user processes executing kernel calls used the user stack to do so. If a signal was to be delivered to a process while it was blocked in a long-running kernel call, the signal handler could not be executed, because potentially critical kernel data lay on the stack. What should be done? One option would be to delay the signal until the kernel call completes, but that can take a long time, sometimes too long. Another would be to save the state of the kernel call so that it could be resumed later, provided the signal handler completed correctly.

In such cases, Ken Thompson and Dennis Ritchie preferred simplicity over perfection. When this situation occurs, the kernel aborts the call with an error indicating that the call was not executed (the "interrupted system call", error number 4, EINTR; the interruption, of course, came from the signal handler). This happens only for a handful of long-running system calls such as read(), write(), open(), and select(). The approach has the advantage of making the I/O system much simpler, since no special cases need to be considered. Most programs are unaffected, because they either do not use signals or terminate anyway on a SIGINT. The few programs that do use signals can deal with the problem by surrounding the kernel calls with a wrapper that simply repeats the call when EINTR occurs. Thus the problem is solved in a simple way.
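The retry wrapper described above can be sketched in a few lines of C. safe_read is an illustrative name, not a standard function; real programs wrap whichever long-running calls they use.

```c
#include <errno.h>
#include <unistd.h>

/* Sketch of the EINTR retry wrapper: if read() is interrupted by a
 * signal handler, the kernel returns -1 with errno == EINTR, and the
 * wrapper simply issues the same call again. */
ssize_t safe_read(int fd, void *buf, size_t count)
{
    ssize_t n;
    do {
        n = read(fd, buf, count);
    } while (n == -1 && errno == EINTR);
    return n;
}
```

The same pattern applies to write(), open(), select(), and the other interruptible calls.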

For these reasons, Unix in its early days was the operating system that crashed most often (several times a day), but it also rebooted fastest. Because of its simplicity, within ten years Unix had become the most stable system available, with error-free uptimes measured in months and years rather than hours.

Raymond: The Art of Unix Programming

In his book The Art of Unix Programming, Eric S. Raymond summarizes the Unix philosophy with the well-known engineering wisdom Keep it Simple, Stupid (KISS). He then describes how, in his opinion, this attitude is put into practice in Unix culture (although in practice these rules are, of course, occasionally violated quite clearly):

  • Rule of Modularity: Write simple parts connected by clean interfaces.
  • Rule of Clarity: Clarity is better than cleverness.
  • Rule of Composition: Design programs so that they can be connected with other programs.
  • Rule of Separation: Separate policy from mechanism; separate interfaces from processing logic.
  • Rule of Simplicity: Design for simplicity; add complexity only where absolutely necessary.
  • Rule of Parsimony: Write a big program only when it is clear that nothing else will do.
  • Rule of Transparency: Design for transparency, in order to ease troubleshooting.
  • Rule of Robustness: Robustness is the child of transparency and simplicity.
  • Rule of Representation: Fold knowledge into the data structures, so that the program logic can be stupid and robust.
  • Rule of Least Surprise: In interface design, always do the least surprising thing.
  • Rule of Silence: When a program has nothing surprising to say, it should say nothing.
  • Rule of Repair: When a program must fail, it should fail loudly and as early as possible.
  • Rule of Economy: Programmer time is expensive; conserve it at the expense of machine time.
  • Rule of Generation: Avoid manual work; write programs that write programs whenever possible.
  • Rule of Optimization: Prototype before polishing. Get it working before you optimize it.
  • Rule of Diversity: Distrust all claims to "the one true way".
  • Rule of Extensibility: Design for the future, because it will come sooner than you think.

Many of these norms are accepted outside the Unix community as well; where they were not first applied in Unix, they were soon adopted there. Still, Unix experts regard the combination of these rules as the foundation of the Unix style.

Role of the operating system

The statements above describe the properties of programs that make Unix what it is. Another aspect of the Unix philosophy, however, concerns the operating system itself: so that programs can be kept as simple, clear, and modular as possible, can work well together, and can be highly portable, the operating system must create the necessary conditions in the form of clear interfaces and a high level of abstraction. In practice:

Everything is a file

  • Access to local and network drives alike goes through the same directory structure; there are no separate drive letters, only directories and files in a single tree.
  • Virtual drives are equally easy to realize, since they too appear simply as directories. Any image file can be mounted at any point in the directory tree.
  • Devices are likewise accessed through the file system. A device driver is assigned a device file under /dev; by reading and writing this file, a program communicates with the device driver.
  • Kernel data can also be accessed through the directory structure, via the /proc directory.
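Device access through the file system means the ordinary file API suffices. As a minimal sketch, writing to the /dev/null device below uses exactly the same calls as writing to a regular file; discard() is an illustrative helper name, not a standard function.

```c
#include <fcntl.h>
#include <unistd.h>

/* Sketch illustrating "everything is a file": a device such as
 * /dev/null is opened, written, and closed with the same calls as any
 * regular file.  Returns the number of bytes the null device accepted,
 * or -1 on error. */
ssize_t discard(const void *buf, size_t len)
{
    int fd = open("/dev/null", O_WRONLY); /* a device file under /dev */
    if (fd == -1)
        return -1;
    ssize_t n = write(fd, buf, len);      /* the driver handles the write */
    close(fd);
    return n;
}
```

No device-specific API is needed; the driver behind the file decides what "write" means.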

Client- server model

Communication takes place over network connections. Internal communication, for example between client programs and daemons, also goes through the network interface, so that programming stays uniform and the same programs can equally be used across the network.

For this reason, Unix does not provide a specialized programming interface for every application; instead, even comparatively exotic applications are mapped onto files or network connections.
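This uniformity can be sketched in C: a local client/daemon exchange uses the same read() and write() calls as file I/O. In the sketch below, socketpair() stands in for a real daemon connection, and echo_once is a made-up name for the example.

```c
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sketch: one end of a local socket plays the "daemon", the other the
 * "client".  The client reads the reply with the same read() call it
 * would use on a file.  Returns the number of bytes received, or -1. */
ssize_t echo_once(const char *msg, char *reply, size_t cap)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1)
        return -1;
    write(sv[1], msg, strlen(msg));      /* "daemon" side sends */
    ssize_t n = read(sv[0], reply, cap); /* "client" side reads like a file */
    close(sv[0]);
    close(sv[1]);
    return n;
}
```

Replacing the local socket with a network connection would leave the read/write logic unchanged, which is exactly the point.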

Quotes

  • "Unix is simple. It just takes a genius to understand its simplicity." - Dennis Ritchie
  • "UNIX was not designed to stop its users from doing stupid things, as that would also stop them from doing clever things." - Doug Gwyn
  • "Unix never says 'please'." - Rob Pike