Algorithmic efficiency

The efficiency of an algorithm is its frugality with the resources, computing time and memory, that it requires to solve a specified problem. As a rule, greater efficiency comes at the price of a more complex algorithm: efficient algorithms are usually harder to understand, since they are often based on ingenious ideas. Efficient algorithms solve the corresponding problem quickly.

Evaluating efficiency

Running time and memory efficiency are not properties of the algorithm alone. Efficiency is also influenced by the concrete implementation in the chosen programming language, by the underlying hardware, and by the input data. Efficiency is therefore rated using factors that are independent of implementation and hardware (a small sketch follows the list):

  • The running time of the algorithm, measured by the number of computational steps it requires
  • The memory requirement, measured by the variables it uses
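
The following minimal sketch in Python illustrates both measures for a simple summation (the function name and the step-counting convention are assumptions made for this example, not a standard):

    def summation_steps(values):
        """Sum a list while counting elementary operations.

        Convention assumed for this sketch: one step per addition
        and one per loop test. Memory: two scalar variables
        (total, steps) in addition to the input list.
        """
        total = 0
        steps = 0
        for v in values:
            total += v    # one addition
            steps += 2    # the addition plus the loop test
        return total, steps

    print(summation_steps([3, 1, 4, 1, 5]))  # (14, 10): steps grow linearly with input size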

Whether an algorithm can be regarded as efficient depends mainly on the perspective from which it is analyzed and on what is known about the complexity of the problem the algorithm treats.

A distinction is made between the following cases of efficiency (a sketch for a linear search follows the list):

  • Worst case - the worst possible behavior
  • Average case - the average behavior
  • Best case - the best behavior
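
The three cases can be made concrete with a linear search, sketched below in Python (using the comparison count as the cost measure is an illustrative choice):

    def linear_search(items, target):
        """Return (index, comparisons); comparisons is the cost measure."""
        comparisons = 0
        for i, item in enumerate(items):
            comparisons += 1
            if item == target:
                return i, comparisons
        return -1, comparisons

    data = [7, 2, 9, 4, 5]
    print(linear_search(data, 7))  # best case: 1 comparison
    print(linear_search(data, 5))  # worst case, element last: 5 comparisons
    print(linear_search(data, 0))  # worst case, element absent: 5 comparisons
    # Average case over all positions of a present element: about (n + 1) / 2 comparisons.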

Since the number of clock cycles needed for an elementary operation varies between computer architectures, but as a rule only by a constant factor, and since the running time and memory requirement for small inputs are usually negligible, Landau (big-O) notation is used. It represents the running-time behavior and the memory requirement for large input sizes while neglecting a constant prefactor. However, this still says nothing about the practicality of the algorithm: in practice one usually works with relatively small input sizes, whereas Landau notation considers very large ones.
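
Written out, the underlying definition reads as follows (a standard textbook formulation, restated here in LaTeX as a worked example):

    % f grows at most like g, up to a constant factor:
    f(n) \in O(g(n)) \iff \exists\, c > 0,\ n_0 \in \mathbb{N} :\ f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0
    % Worked example: T(n) = 3n^2 + 5n \in O(n^2), since
    % 3n^2 + 5n \le 4n^2 for all n \ge 5; the constant prefactor 3 and
    % the lower-order term 5n are neglected.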

The term efficient algorithm is used rather loosely in theoretical computer science. It usually means an algorithm whose running time is polynomial in the size of the input, but even such algorithms can be practically useless if, for example, the fixed exponent of the associated polynomial is too large. There are even linear-time algorithms that are practically unusable because their constant prefactor is too large. It may thus happen that although algorithm A is far more efficient than algorithm B in theory, in practice only algorithm B is ever used, because the input sizes that occur are not large enough for algorithm A to demonstrate its advantages.
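
The effect can be sketched with two invented cost functions (the step counts below are assumptions chosen purely for illustration and stand in for a hypothetical algorithm A and algorithm B):

    # Algorithm A is linear but with a huge constant prefactor;
    # algorithm B is quadratic but cheap per step.
    def steps_a(n):
        return 1_000_000 * n   # O(n), large constant

    def steps_b(n):
        return n * n           # O(n^2), small constant

    for n in (10, 1_000, 100_000, 10_000_000):
        winner = "A" if steps_a(n) < steps_b(n) else "B"
        print(f"n = {n:>10}: cheaper in this model -> {winner}")
    # B wins for every n up to 1,000,000; only beyond that crossover
    # does A's asymptotic advantage pay off.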

Optimizing efficiency

The amount of memory a data structure uses must always be assessed in relation to how frequently it occurs during the run of a program. Optimization effort often pays off only when many objects of the type under consideration are created in main memory or stored permanently. In that case, reducing memory consumption can both lower costs and increase system throughput.

Reducing the memory usage of objects can worsen the execution time of certain accesses to the underlying data structure, namely when partial information is combined so that it can be stored compactly in base types. The efficiency of the encoding operations this introduces then plays a role. The relative frequency of specific accesses in a typical program run must be taken into account in order to be optimal overall. Operations on data fall into two categories, reading and writing; these correspond to decoding and encoding the partial information, respectively, and must therefore be considered separately. A further aspect enters through the model of the abstract data type: where appropriate, the internal representation can be processed without converting it into its components.
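
A minimal sketch of such compact storage in base types (the bit layout for a date is an assumption chosen for illustration): three partial values are packed into a single integer, and every read pays for the decoding:

    def encode_date(year, month, day):
        """Write operation: pack a date into one int
        (5 bits for the day, 4 bits for the month, the rest for the year)."""
        return (year << 9) | (month << 5) | day

    def decode_date(packed):
        """Read operation: decoding costs extra shifts and masks."""
        return packed >> 9, (packed >> 5) & 0xF, packed & 0x1F

    p = encode_date(2024, 12, 31)
    print(p, decode_date(p))  # one int instead of three, at decoding cost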

The goal is to find a balanced solution that does not pay for advantages on one side with losses of efficiency on the other. Effort and complexity must be justified by the gain achieved. When in doubt, the clear implementation is to be preferred to the tricky one. This means considering not only the immediate resource needs of the individual algorithm at run time, but also, depending on the requirements, the efficiency of the overall system.

A further goal is the avoidance of redundancy in storing object states. It should also be noted that redundancy reduces maintainability. However, in certain applications, targeted use of redundancy can considerably increase the speed of data access.
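
A sketch of such targeted redundancy (the class and its fields are illustrative): a running total is stored alongside the individual entries, so reading the total costs a single access, at the price of keeping the redundant field consistent on every write:

    class Account:
        def __init__(self):
            self._entries = []
            self._total = 0    # redundant: could be recomputed from _entries

        def add(self, amount):
            self._entries.append(amount)
            self._total += amount   # every write must maintain the redundancy

        def total(self):
            return self._total      # O(1) read instead of summing all entries

    acc = Account()
    acc.add(100)
    acc.add(-25)
    print(acc.total())  # 75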

See also

  • Practical computer science
  • Theoretical computer science