Code coverage

Test coverage is the ratio of the statements actually made by a test to the theoretically possible statements, or to the set of statements one wishes to make. As a metric, test coverage plays an important role in quality assurance and in improving quality, particularly in engineering and in software engineering.
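As a rough sketch of this ratio (the function name and the figures below are illustrative, not taken from any particular tool), statement coverage can be expressed as the number of statements actually executed by a test divided by the number of executable statements:

```python
def statement_coverage(executed: int, executable: int) -> float:
    """Ratio of statements actually hit by a test to the statements
    that could theoretically be hit."""
    if executable == 0:
        return 1.0  # nothing to cover
    return executed / executable

# Illustrative figures: a test run that executes 180 of 200
# executable statements reaches a coverage of 0.9, i.e. 90 %.
print(statement_coverage(180, 200))  # 0.9
```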

In practice, test coverage is influenced by various factors. It can be improved by increasing the number of measurements, samples and test cases. However, it is limited in practice by the cost associated with each test.

Test coverage in mechanical engineering

Depending on the nature, benefit and cost of the tests, some tests are carried out on random samples, others on every unit. A simple test that can be performed automatically is carried out on every product, since its cost increases the production cost only slightly. A crash test of a vehicle, however, is of course carried out only on a sample, since the tested product becomes unusable afterwards.

For a production run of 1000 vehicles, this could mean, for example, that particularly elaborate tests and crash tests are carried out on a single vehicle, while less expensive tests are carried out on a larger number of vehicles or even on all of them.

Necessary but costly tests are varied in their frequency and thus in their test coverage. If a test yields predominantly or exclusively positive results, its frequency is reduced. If a test yields negative results, it is used more frequently until changes to production have led to a significant increase in positive results and thus again to higher product quality.

The cost-benefit analysis of such tests is performed using stochastics (probability theory). If, for example, only 5 out of 1000 vehicles are tested for whether the power windows work properly, probability theory can be used to calculate the statistical relevance of the test result and the likelihood of drawing a wrong conclusion from it.
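For illustration, the following sketch assumes a purely hypothetical number of defective vehicles (which is not given in the example above) and uses the hypergeometric distribution to compute how likely such a small sample is to miss the defect entirely:

```python
from math import comb

# Assumption for illustration only: 20 of the 1000 vehicles actually
# have defective power windows.  The probability that a random sample
# of 5 vehicles contains none of them follows the hypergeometric
# distribution.
total, defective, sample = 1000, 20, 5

p_no_defect_found = comb(total - defective, sample) / comb(total, sample)

print(f"P(sample of {sample} finds no defect) = {p_no_defect_found:.3f}")
# With these assumed numbers the sample misses the problem in roughly
# 90 % of cases, which quantifies the risk of a wrong conclusion.
```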

Test coverage in software engineering

For test coverage in software engineering (also referred to as code coverage), probability theory plays virtually no role, since computer programs are not mass-produced individual products on which tests are carried out using samples. Instead, tests are defined on the basis of the specification (the properties of the interface) or of the internal structure of the software unit under test.

In software engineering, test coverage is determined for different areas of the software, among them coverage of the code, of the data, or of the business functionality. To achieve the highest possible test coverage, different test cases have to be written depending on the area to be covered. Not all test methods allow a measure of test coverage (a software metric) to be stated, since determining the number of possible test cases is often not feasible for real-world problems.

Complete test coverage of the business functionality is the exception, because the number of possible test cases very quickly becomes unimaginably large (combinatorial explosion). A complete functional test of even a simple function that takes two 16-bit values as arguments would require 2^(16+16), i.e. about 4 billion, test cases to test the specification completely. Instead, one restricts oneself to a selection of tests that appear useful, especially for borderline cases. For example, a square root function for rational numbers could be tested with all elements of the set { -10; -1; -0.0000001; 0; 0.0000001; 1; 2; 3; 4; 5.25; 9; 10000 }. A sensible selection of test cases for adequate test coverage usually comprises different types of valid arguments and, for components with robustness requirements, additional boundary elements (just barely valid and just barely invalid arguments). It has also proven useful to add the argument that triggered an error to the set of test elements whenever a defect is found.
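A minimal sketch of such a boundary-value selection is shown below; the function name rational_sqrt is invented for illustration, the test framework is assumed to be pytest, and the test values are taken from the set given above:

```python
import math
import pytest

def rational_sqrt(x: float) -> float:
    """Hypothetical unit under test: square root for non-negative numbers."""
    if x < 0:
        raise ValueError("argument must be non-negative")
    return math.sqrt(x)

# Valid arguments, including boundary values close to zero.
@pytest.mark.parametrize("x", [0, 0.0000001, 1, 2, 3, 4, 5.25, 9, 10000])
def test_sqrt_of_valid_arguments(x):
    assert rational_sqrt(x) == pytest.approx(math.sqrt(x))

# Just barely invalid arguments: a robust component must reject them.
@pytest.mark.parametrize("x", [-10, -1, -0.0000001])
def test_sqrt_rejects_negative_arguments(x):
    with pytest.raises(ValueError):
        rational_sqrt(x)
```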

A largely complete test coverage of the code, on the other hand, is often the goal of module and unit tests: with high test coverage of 'small' functional units, the total number of required test cases results only from adding up these test cases, not from the combinatorics of the larger functionality. Even here, however, 100% of this goal is usually not reached, both because of the remaining risks (unexpected 'action at a distance' of errors) and for cost-benefit reasons.
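As a minimal sketch of this additive effect (the two functions and their names are invented for illustration), each small unit is covered by its own handful of test cases, so the totals add up rather than multiply:

```python
# Hypothetical small units under test.
def is_valid_quantity(n: int) -> bool:
    """Accept order quantities between 1 and 100."""
    return 0 < n <= 100

def format_quantity(n: int) -> str:
    """Render a quantity for display."""
    return f"{n} piece(s)"

# Covering the units separately needs 3 + 2 = 5 cases; covering every
# combination of their behaviours at once would need 3 * 2 = 6 and
# would grow multiplicatively as further units are chained together.
def test_is_valid_quantity():
    assert not is_valid_quantity(0)    # just barely invalid
    assert is_valid_quantity(1)        # just barely valid
    assert not is_valid_quantity(101)  # just barely invalid again

def test_format_quantity():
    assert format_quantity(1) == "1 piece(s)"
    assert format_quantity(42) == "42 piece(s)"
```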

Tools for measuring code coverage

  • Bullseye Coverage - C, C++
  • Cantata - C, C++
  • CC ANALYZER - Cobol
  • Clover - Java, Groovy
  • Cobertura - Java
  • CodeCover - Java, Cobol
  • Coverage.py - Python
  • Devel::Cover - Perl
  • Froglogic's Squish Coco - C and C++
  • Gcov - C, C++, Ada
  • EMMA - Java
  • JetBrains dotCover - .NET
  • LDRA Testbed - C, C++
  • NCover - .NET
  • PartCover - .NET
  • Rcov - Ruby
  • Shcov - shell/Bash scripts
  • Simulink Verification and Validation - Simulink models
  • Tessy - C and C++
  • Testwell CTC++ - C, C++, Java, C#
  • VBWatch - Visual Basic
  • Xdebug - PHP
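As an illustration of how such a tool is used, Coverage.py (listed above) can also be driven from Python code. The following is only a sketch: it assumes the coverage package is installed and that a module named my_module with a main() function exists, both of which are hypothetical here.

```python
import coverage

# Begin recording which statements are executed.
cov = coverage.Coverage()
cov.start()

import my_module   # hypothetical code under measurement
my_module.main()   # exercise it, e.g. by running its test suite

cov.stop()
cov.save()

# Print a statement-coverage report, listing lines that were never hit.
cov.report(show_missing=True)
```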