Black-box testing

Black-box testing is a method of software testing in which tests are developed without any knowledge of the internal workings of the system under test. It is limited to specification-based (functional) testing, i.e. only the requirements, not the implementation of the test object, may be used to determine the test cases. The internal structure of the program is not considered; the program is treated as a black box, and only its externally visible behavior is incorporated into the test.

Objective

The aim is to check the compliance of a software system with its specification. Based on the formal or informal specification, test cases are developed to ensure that the required functionality is provided. The system under test is viewed as a whole; only its external behavior is used in the evaluation of the test results.
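For illustration, the following is a minimal sketch in Python of such a specification-driven black-box test. The module calendar_utils, its function is_leap_year and the stated leap-year specification are assumed purely for illustration; only the documented interface is exercised, never its internals:

    import unittest

    # Assumed specification (illustrative only): is_leap_year(y) is True for years
    # divisible by 4, except century years, which must also be divisible by 400.
    from calendar_utils import is_leap_year  # hypothetical module under test


    class LeapYearBlackBoxTest(unittest.TestCase):
        """Test cases derived solely from the specification above;
        the implementation of is_leap_year is never inspected."""

        def test_regular_leap_year(self):
            self.assertTrue(is_leap_year(2024))

        def test_regular_non_leap_year(self):
            self.assertFalse(is_leap_year(2023))

        def test_century_non_leap_year(self):
            self.assertFalse(is_leap_year(1900))

        def test_century_leap_year(self):
            self.assertTrue(is_leap_year(2000))


    if __name__ == "__main__":
        unittest.main()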

Deriving test cases from an informal specification is comparatively expensive and, depending on the degree of precision of the specification, may not be possible at all. A complete black-box test is therefore often just as uneconomical as a complete white-box test.

A successful black-box test is no guarantee of the correctness of the software, since specifications created in the early stages of software design do not cover later detail and implementation decisions.

Gray-box testing, which also exists, is an approach from Extreme Programming that uses test-driven development to combine the desired benefits of black-box and white-box testing as far as possible while largely eliminating their undesirable drawbacks.

Black-box tests prevent programmers from developing tests "around their own mistakes" and thereby overlooking gaps in the implementation. A developer who knows the inner workings of a system could, by making additional assumptions that lie outside the specification, unintentionally omit things from the tests or interpret them differently than the specification does. Another useful property: black-box tests are also suitable as an additional means of checking the specification for completeness, since an incomplete specification often raises questions during the development of the tests.

Because the developers of the tests must not have any knowledge of the inner workings of the system under test, black-box testing in practice requires a separate team to develop the tests. Many companies even have dedicated testing departments responsible for this.

Compared with white-box testing

Black-box tests are used to detect errors with respect to the specification, but they are hardly suitable for identifying errors in particular components, let alone the component that causes the error. For the latter, white-box tests are needed. It should also be kept in mind that two errors in two components can temporarily cancel each other out, producing a seemingly correct overall system. This is easily uncovered by white-box tests, whereas with black-box tests, after only one of the two errors has been corrected, it cannot be ruled out that the other will surface as an apparent regression.

Compared with white-box tests, black-box tests are considerably more complex to carry out, since they require a larger organizational infrastructure (a separate team).

Advantages of black-box testing over white-box testing:

  • Better verification of the entire system
  • Testing of semantic properties, given a suitable specification
  • Portability of systematically generated test sequences across platform-independent implementations

Disadvantages of black-box testing compared with white-box testing:

  • Greater organizational effort
  • Functions additionally introduced in the implementation are tested only by chance
  • Test sequences derived from an inadequate specification are of little use

It should also be mentioned that the distinction between black-box and white-box testing partly depends on the perspective. Testing a subcomponent of the overall system is, with respect to the overall system, a white-box test, since from an outside view of the overall system there is no knowledge of the system structure and hence of the existing subcomponents. From the perspective of the subcomponent, in turn, the same test can under certain circumstances be regarded as a black-box test if it is developed and carried out without knowledge of the internals of that subcomponent.

Selection of test cases

The number of test cases in a test sequence created systematically from an appropriate specification is, for almost all practical applications, too high. There are, for example, the following ways to reduce it systematically (see the sketch after this list):

  • Boundary and special values
  • Equivalence class partitioning, classification tree method
  • (Simplified) decision tables
  • Condition-based tests
  • Use case tests
  • Cause-effect analysis
  • Detection of robustness and security issues through fuzzing
  • Risk analysis and prioritization according to the desired results of the application (important vs. unimportant functions)
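For example, equivalence class partitioning combined with boundary values reduces an input domain to a few representative test cases. The following is a minimal sketch in Python; the module grading, its function grade and its specification (scores 0 to 49 fail, 50 to 100 pass, anything else is rejected) are assumed purely for illustration:

    import unittest

    # Assumed specification (illustrative only):
    #   grade(score) returns "fail" for 0..49, "pass" for 50..100,
    #   and raises ValueError for any score outside 0..100.
    from grading import grade  # hypothetical module under test


    class GradeEquivalenceClassTest(unittest.TestCase):
        """One representative per equivalence class plus its boundary values,
        instead of testing all valid inputs exhaustively."""

        def test_invalid_below_range(self):      # invalid class, lower boundary
            with self.assertRaises(ValueError):
                grade(-1)

        def test_fail_class(self):               # valid class "fail": 0..49
            for score in (0, 25, 49):
                self.assertEqual(grade(score), "fail")

        def test_pass_class(self):               # valid class "pass": 50..100
            for score in (50, 75, 100):
                self.assertEqual(grade(score), "pass")

        def test_invalid_above_range(self):      # invalid class, upper boundary
            with self.assertRaises(ValueError):
                grade(101)


    if __name__ == "__main__":
        unittest.main()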

In contrast, the reduction can also be performed intuitively (error guessing). This method should, however, be used with caution, since unconscious assumptions always flow into it, which can have negative effects in the later use of the application. There are, however, other schools of testing that are very successful with it; representatives are, for example, James Bach with Rapid Testing and Cem Kaner with Exploratory Testing (ad-hoc testing). These types of tests belong to the experience-based or non-systematic techniques. Vulnerability-based testing also falls into this category.

Representative testing

All functions are tested according to the frequency with which they will later be used.
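As a rough sketch of this idea in Python, the operations and usage frequencies below are assumed purely for illustration; test effort is distributed in proportion to the expected usage profile:

    import random

    # Hypothetical usage profile: relative frequency with which each function
    # is expected to be called in production.
    USAGE_PROFILE = {
        "search": 0.70,   # used most often -> tested most often
        "create": 0.20,
        "delete": 0.10,   # used rarely -> tested least often
    }


    def build_test_plan(total_tests, profile, seed=42):
        """Draw a test plan whose operation mix mirrors the usage profile."""
        rng = random.Random(seed)
        operations = list(profile)
        weights = [profile[op] for op in operations]
        return rng.choices(operations, weights=weights, k=total_tests)


    if __name__ == "__main__":
        plan = build_test_plan(1000, USAGE_PROFILE)
        for op in USAGE_PROFILE:
            print(op, plan.count(op))   # roughly 700 / 200 / 100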

Vulnerability-based testing

Extensive testing is often restricted to those functions in which the probability of errors occurring is high (complex algorithms, parts with insufficient specification, parts written by inexperienced programmers, ...). More intensive tests can be performed with fuzzing tools, since they allow extensive automation of robustness and vulnerability tests. The results of these tests are then information about data packets that can compromise the SUT (system under test). Vulnerability tests can be performed, for example, with vulnerability scanners or fuzzers.
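The following is a minimal fuzzing sketch in Python; the deliberately buggy parse_packet function stands in for a hypothetical SUT, and real fuzzers (e.g. coverage-guided tools) are far more sophisticated. It only illustrates feeding random input to the SUT and recording the inputs that crash it:

    import random


    def parse_packet(data: bytes) -> dict:
        """Hypothetical SUT: parses a length-prefixed packet (deliberately buggy)."""
        if not data:
            raise ValueError("empty packet")
        length = data[0]
        # Bug: the type byte data[1] is read without checking that it exists.
        return {"type": data[1], "payload": data[2:2 + length]}


    def fuzz(iterations=10_000, max_len=64, seed=0):
        """Feed random byte strings to the SUT and collect inputs that crash it."""
        rng = random.Random(seed)
        findings = []
        for _ in range(iterations):
            data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
            try:
                parse_packet(data)
            except ValueError:
                pass                     # input rejected cleanly: expected behavior
            except Exception as exc:     # any other exception is a robustness finding
                findings.append((data, exc))
        return findings


    if __name__ == "__main__":
        for data, exc in fuzz()[:3]:
            print(repr(data), "->", type(exc).__name__, exc)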

Extent-of-damage-based testing (risk analysis)

Extensive testing is limited to those functions in which errors can have particularly serious consequences (e.g. corruption or destruction of a large file, danger to human life (cars, machine controllers), etc.).

These functions are prioritized or classified (1, 2, 3, ...) and then tested in this order.
