Software testing

A software test checks and evaluates software against the requirements defined for its use and measures its quality. The findings gained are used to detect and correct software errors. Testing during software development serves to put the software into operation with as few errors as possible.

To be distinguished from such a single test measure is the term 'testing' of the same name, which denotes the totality of measures for verifying software quality (including planning, preparation, management, execution, documentation, etc.; see Definitions).

Software testing cannot prove that no (more) errors are present; it can only establish that certain test cases were successful. Edsger W. Dijkstra put it this way: "Program testing can be used to show the presence of bugs, but never to show their absence." The reason is that all program functions and all possible values of the input data would have to be tested in every combination, which is practically impossible except for very simple test objects; a function with just two 32-bit integer parameters, for example, already has 2^32 · 2^32 = 2^64 possible input combinations. For this reason, various test strategies and concepts address the question of how to achieve large test coverage with the smallest possible number of test cases.

Pol, Koomen and Spillner explain 'testing' as follows: "Tests are not the only measure in the quality management of software development, but often the last possible one. The later errors are detected, the more costly their correction, which leads to the conclusion: quality must be implemented (throughout the course of the project) and cannot be 'tested in'." And: "In testing during software development, a more or less large number of errors is typically assumed to be 'normal' or accepted. Here lies a significant difference from industry: there, in the process stage 'quality control', errors are often expected only in exceptional situations."


Definition

There are different definitions of software testing:

According to ANSI/IEEE Std 610.12-1990, testing is "the process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component."

Another definition is provided by Denert, according to which "testing is the verifiable and repeatable demonstration of the correctness of a software module relative to previously defined requirements."

A further definition is used by Pol, Koomen and Spillner: testing is understood as the process of planning, preparation and measurement with the aim of determining the characteristics of an IT system and showing the difference between the actual and the required state. Noteworthy here: the yardstick is 'the required state', not merely the (possibly faulty) specification.

'Testing' is an essential part of quality management in software development projects.

Standardization

In September 2013, the ISO/IEC/IEEE 29119 Software Testing standard was published, which for the first time summarizes and replaces, internationally, many (older) national software testing standards such as IEEE 829. The ISO/IEC 25000 standard complements this on the software engineering side as a guide for (common) quality criteria and replaces the ISO/IEC 9126 standard.

Objectives

The overall objective of software testing is to measure the quality of the software system. The defined requirements serve as test references against which any existing errors are uncovered. In the words of the ISTQB: the effects of errors (in production operation) are thereby to be prevented.

A framework for these requirements can be the quality characteristics per ISO/IEC 9126, to which concrete detail requirements, e.g. for functionality, usability, security, etc., can be assigned. In particular, the fulfillment of legal and/or contractual requirements is to be demonstrated.

The test results, obtained with different test methods, contribute to assessing the real quality of the software, as a precondition for its release into operation. Testing is intended to create confidence in the quality of the software.

Individual test objectives: since software testing consists of numerous individual measures, usually performed across several test levels and on many test objects, there are individual test objectives for each test case and each test level, for example: arithmetic function X in program Y tested, interface test for re-commissioning successfully completed, load test passed, program XYZ tested, etc.

Test levels

The classification of test levels (also called test cycles) often follows the stages in the development of the system. Their content is oriented toward the development stages of projects according to the V-model. At each test level (right side of the 'V'), testing is done against the system designs and specifications of the corresponding development stage (left side); that is, the test objectives and test cases are based on the corresponding specifications. This basic principle is, however, only applicable if changes made in later development stages were also tracked in the older specifications.

In reality, these levels are further subdivided, depending on the size and complexity of the software product. For example, the tests for the development of safety-related systems in transportation safety technology could be subdivided as follows: unit test on the development computer, unit test on the target hardware, product integration test, product test, product validation test, system integration test, system test, system validation test, field test and acceptance test.

In practice, the test levels described below are often not sharply delimited from one another; depending on the project situation, they may run in parallel or additional intermediate levels may occur. Thus, for example, acceptance of the system could take place on the basis of test results (reviews, test protocols) from system tests.

Unit test

The module test, also called unit test or component test, is a test at the level of the individual modules of the software. The test object is the functionality within individual definable parts of the software (modules, programs or subprograms, units or classes). The purpose of this test, which is often performed by the software developers themselves, is to demonstrate the technical executability and correct technical (partial) results.
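As an illustration, the following is a minimal unit test sketch in Python; the function gross_price and its VAT rate are hypothetical examples, not taken from the source.

    import unittest

    def gross_price(net: float, vat_rate: float = 0.19) -> float:
        """Hypothetical unit under test: adds VAT to a net price."""
        if net < 0:
            raise ValueError("net price must not be negative")
        return round(net * (1 + vat_rate), 2)

    class GrossPriceTest(unittest.TestCase):
        """Module test: checks one definable unit of the software in isolation."""

        def test_regular_price(self):
            # Expected result defined in advance; actual result compared against it.
            self.assertEqual(gross_price(100.0), 119.0)

        def test_negative_price_rejected(self):
            # Correct technical behavior also includes rejecting invalid input.
            with self.assertRaises(ValueError):
                gross_price(-1.0)

    if __name__ == "__main__":
        unittest.main()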

Integration test

The integration test or interaction test checks the cooperation of mutually dependent components. The test focus lies on the interfaces of the components involved and should demonstrate correct results across complete process sequences.
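A minimal sketch of an integration test in Python; the two components and their interface are invented for illustration.

    import unittest

    # Component A (hypothetical): prepares a record for transfer.
    def export_record(name: str, amount: float) -> dict:
        return {"name": name.strip(), "amount": round(amount, 2)}

    # Component B (hypothetical): consumes what component A produces.
    def book_record(record: dict) -> str:
        return "booked {:.2f} for {}".format(record["amount"], record["name"])

    class ExportBookingIntegrationTest(unittest.TestCase):
        """Integration test: the focus is on the interface between the components."""

        def test_complete_sequence(self):
            # The output of component A is fed directly into component B.
            record = export_record("  Smith ", 12.5)
            self.assertEqual(book_record(record), "booked 12.50 for Smith")

    if __name__ == "__main__":
        unittest.main()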

System Test

The system test is the test level at which the whole system is tested against all requirements (functional and non-functional). Usually, the test takes place in a test environment rather than the production environment and is carried out with test data. The test environment should simulate the customer's production environment, i.e. be as similar to it as possible. In general, the system test is carried out by the implementing organization.

Acceptance test

An acceptance test, procedure test or user acceptance test (UAT) is the testing of the delivered software by the customer or client. Successful completion of this test level is usually a prerequisite for the legally effective acceptance of the software and its payment. This test can (e.g. for new applications) already be performed on the production environment, with copies of real data.

Especially for system and acceptance tests, the black-box method is used; that is, the test is based not on the code of the software but only on the behavior of the software in specified situations/actions (user inputs, limit values in data entry, etc.).
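A sketch of a black-box test case in Python: the test is derived only from the specified behavior (here a hypothetical input field that accepts whole numbers from 0 to 120), not from the code of the software.

    import unittest

    def validate_age(value: str) -> bool:
        """Hypothetical input check: accepts whole numbers from 0 to 120."""
        return value.isdigit() and 0 <= int(value) <= 120

    class AgeFieldBlackBoxTest(unittest.TestCase):
        """Black-box test: based only on specified behavior, not on the code."""

        def test_specified_limits(self):
            self.assertTrue(validate_age("0"))      # lower limit of the specification
            self.assertTrue(validate_age("120"))    # upper limit of the specification
            self.assertFalse(validate_age("121"))   # just outside the valid range
            self.assertFalse(validate_age("abc"))   # not a number at all

    if __name__ == "__main__":
        unittest.main()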

Test process / test phases

Pol, Koomen and Spillner recommend an approach following the phase model shown in the graphic. They call this approach the test process and distinguish the individual steps as test phases: test planning, test preparation, test specification, test execution, test evaluation, test completion.

This approach is generic, i.e. it is applied, as required, at different levels: for the whole project, for each test level and, ultimately, for each test object and test case. The work intensity typically arising at these levels is represented in the graphic by dots (= low), dashes (= medium) and solid lines (= main focus).

Testing activities are grouped (after Pol, Koomen and Spillner) into so-called test functions for specific roles: testing, test management, methodological support, technical support, domain support, administration, coordination and consulting, application integrator, TAKT architect and TAKT engineer (when test automation is used; TAKT = testing, automation, knowledge, tools). These functions (roles) have their focal points at specific test levels; they can be staffed within the project itself or provided by specialized organizational units.

Other authors or institutions use partly different groupings and names, but these are almost identical in content. For example, the fundamental test process in the ISTQB is defined with the following main activities: test planning and control, test analysis and design, test implementation and execution, evaluation of exit criteria and reporting, test closure activities.

Test Planning

The result of this phase, which typically runs in parallel with software development, is essentially the test plan. It is prepared for each project and defines the whole testing process. TMap puts it this way: both the increasing importance of IT systems for business processes and the high costs of testing justify an optimally manageable and structured testing process. The plan can and should be updated and refined per test level so that the individual tests can be performed appropriately and efficiently.

The test plan should contain, for example, the following aspects: test strategy (test scope, test coverage, risk assessment); test objectives and criteria for test start, test end and test abortion, for all test levels; procedures (test types); aids and tools for testing; documentation (specifying its type, structure and level of detail); test environment (description); test data (general provisions); test organization (dates, roles), all resources, training needs; test metrics; problem management.

Test preparation

Based on the test planning, the matters laid down in it are prepared for operational use and made available.

Examples of individual tasks (global and per test level): providing the documents of the test basis; setting up (e.g. customizing) the tools for test case and defect management; setting up the test environment(s) (systems, data); transferring the test objects, as the basis for test cases, from the development environment into the test environment; creating users and user rights; ... Examples of preparations for individual tests: transfer/provision of test data or input data into the test environment(s).
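In xUnit-style frameworks, parts of this preparation can be expressed directly as fixture code. A minimal sketch in Python, in which an in-memory SQLite database stands in for the test environment (all names are hypothetical):

    import sqlite3
    import unittest

    class OrderQueryTest(unittest.TestCase):

        def setUp(self):
            # Set up the test environment: here an in-memory database.
            self.db = sqlite3.connect(":memory:")
            self.db.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
            # Provide the test data in the test environment.
            self.db.executemany("INSERT INTO orders VALUES (?, ?)",
                                [(1, 19.99), (2, 5.00)])

        def tearDown(self):
            # Dismantle the environment so that every test starts clean.
            self.db.close()

        def test_total_amount(self):
            total = self.db.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
            self.assertAlmostEqual(total, 24.99)

    if __name__ == "__main__":
        unittest.main()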

Test Specification

Here, all determinations and preparations are made that are required in order to be able to execute a particular test case.

Examples of specific activities: test case identification and test case optimization; describing each test case (what exactly is to be tested); setting preconditions (incl. dependencies on other test cases); designing and creating the input data; specifications for the test procedure and test sequence; defining the expected result; defining the condition(s) for 'test passed'; ...
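Such a specification can also be captured as a data structure. A sketch in Python; all field names and values are chosen here purely for illustration:

    from dataclasses import dataclass

    @dataclass
    class TestCaseSpec:
        case_id: str
        description: str         # what exactly is to be tested
        preconditions: list      # incl. dependencies on other test cases
        input_data: dict         # the designed and created input data
        expected_result: object  # the defined expected result
        pass_condition: str      # condition for 'test passed'

    spec = TestCaseSpec(
        case_id="TC-042",
        description="Discount calculation for regular customers",
        preconditions=["customer master data loaded (see TC-040)"],
        input_data={"customer_type": "regular", "net_amount": 200.0},
        expected_result=180.0,
        pass_condition="actual result equals expected result",
    )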

Test execution

For dynamic tests, the program to be tested must be executed; for static tests, it is the subject of analytical examinations.

Examples of specific activities: selecting the test cases to be executed; starting the test object, manually or automatically; providing the test data and the actual results for evaluation; archiving the environment information for the test run; ...

Additional note: a test object should, if possible, be tested not by the developer himself but by other, independent persons.

Test Evaluation

The results of the executed tests (of the test cases) are checked. Here, the actual result is compared with the expected result, and a decision on the test result (OK or error) is then made.

  • In case of error: classification (e.g. error cause, error severity, etc.), appropriate error description and explanation, transfer to defect management; the test case remains open
  • In case of OK: the test case is considered completed
  • For all tests: documentation, historicization/archiving of the documents
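A sketch of such an evaluation step in Python; the structure of the result record is hypothetical:

    def evaluate(case_id, actual, expected):
        """Compares the actual result with the expected result and
        decides on the test result (OK or error)."""
        if actual == expected:
            # Test case is considered completed.
            return {"case": case_id, "result": "OK"}
        # Test case remains open and is transferred to defect management.
        return {
            "case": case_id,
            "result": "error",
            "description": "expected {!r}, got {!r}".format(expected, actual),
            "severity": "to be classified",  # e.g. error cause, error severity
        }

    print(evaluate("TC-042", 180.0, 180.0))  # {'case': 'TC-042', 'result': 'OK'}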

Test completion

Completion activities take place at all testing levels: test case, test object, test level, project. The status at the end of test levels is documented (e.g. by means of test statistics) and communicated, decisions are brought about, and documents are archived. A basic distinction is made between:

  • Regular completion = objectives achieved; initiate next steps
  • Alternatively possible: exiting the test level, possibly prematurely, or interrupting it (for various reasons, which must be documented); in cooperation with project management

Classification of test types

Hardly any other discipline of software development has produced, in keeping with the complexity of the task 'testing', such a large variety of terms for procedural approaches as software testing. This already begins with the terms for test variants, which are referred to by expressions such as test level, test cycle, test phase, test kind, test type, test method, test procedure. ...

The designation of specific test types is mostly derived from their individual goals and characteristics, which results in a variety of designations. Owing to this multidimensionality, the names of several test types can apply to one specific test. Example: developer test, dynamic test, black-box test, error test, integration test, equivalence class test, batch test, regression test. It is quite efficient, in the sense of the testing process, to cover several test cases with a single specific test, e.g. checking the correct value range and a calculation formula for one technical data interface.

Such test type names are described in the following examples from the literature. In practical use, however, many of these names are not applied; tests are (for example) simply referred to as 'functional test' rather than as error test, batch test, high-level test, etc. Test efficiency is not affected by this, provided the tests are otherwise reasonably planned and executed. The following lists can also convey an idea of what should or could be taken into account in testing, especially in the test plan.

One means of coping with this diversity of terms is the following classification, in which test types are grouped according to various criteria.

Classification according to the testing approach

Analytical measures

Software tests are often defined as analytical measures that can only be performed after the test object has been created. Liggesmeyer classifies these test methods as follows (shortened and partially commented):

Static test (test without program execution )

  • Review
  • Static code analysis and formal verification

Dynamic test (test during program execution )

  • Structure-oriented, control-flow-based (measures of control flow coverage): statement, branch, condition and path coverage tests
  • Structure-oriented, data-flow-based (measures of data flow coverage): defs/uses criteria, required k-tuples test, data context coverage
  • Function-oriented: equivalence partitioning, state transition testing, cause-effect analysis (e.g. by means of cause-effect diagrams), syntax test, transaction-flow-based test, test based on decision tables
  • Positive test (attempts to verify the requirements) and negative test (checks the robustness of an application)
  • Regression test, back-to-back test, mutation test
  • Range test or domain testing (generalization of equivalence classes), error guessing, boundary value analysis, representation techniques (see the sketch of equivalence partitioning and boundary value analysis after this list)
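As an illustration of the equivalence partitioning and boundary value analysis named above, a minimal sketch in Python, assuming a hypothetical specification that accepts whole numbers from 1 to 100:

    LOW, HIGH = 1, 100  # hypothetical specified valid range

    def is_valid(n: int) -> bool:
        return LOW <= n <= HIGH

    def derive_test_inputs() -> dict:
        """One representative per equivalence class, plus the values
        at and next to the limits (boundary value analysis)."""
        return {
            "valid_class": [50],          # representative of the valid class
            "invalid_below": [LOW - 1],   # representative of the class below
            "invalid_above": [HIGH + 1],  # representative of the class above
            "boundaries": [LOW, LOW + 1, HIGH - 1, HIGH],
        }

    # Each derived input becomes one test case with a known expected result.
    for group, values in derive_test_inputs().items():
        for v in values:
            print(group, v, is_valid(v))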

Constructive measures

These analytical measures, by which test objects are 'checked', are preceded by so-called constructive measures that serve quality assurance already in the course of software development. Examples: requirements management, prototyping, review of specifications.

Specification techniques

Furthermore, specification techniques can be distinguished from the testing techniques: they do not denote test types by which test objects are actively tested, but only the procedures by which the tests are prepared and specified.

Examples are the equivalence class test and the coverage test: test cases are identified and concretely specified using these methods; the actual checking then takes place (for example) in the integration test, batch test, security test, etc.

Classification according to the test criterion

The classification here is made according to the content-related aspects that are to be tested.

Other classifications of test species

From the quality characteristics per ISO/IEC 9126 (which can form the framework for most test requirements), a large number of test types can be derived. Owing to their diversity, only a few examples are mentioned here: security test, functional test, recovery test, GUI test, error test, installation test, load test.

Selected methodological approaches to testing are also reflected in test type designations. These include, for example:

Test type names are also derived, among other things, from the time of test execution:

Some test types are specifically designated according to the test intensity:

Test types are also designated according to the level of information available about the components to be tested (on the basis of which test cases can be specified):

According to the nature and extent of the test object, the following test type designations are used:

Test types can also be named according to who performs or specifies the tests:

Of minor importance are test type terms that are based on the type of software measure from which the test requirements result:

Other aspects when testing

Test strategy

Pol, Koomen and Spillner describe the test strategy as a comprehensive approach: a test strategy is necessary because a complete test, i.e. a test that checks all parts of the system with all possible input values under all preconditions, is not feasible in practice. Therefore, based on a risk assessment, the test plan must specify how critical the occurrence of a fault in a given part of the system is to be assessed (e.g. financial loss only, or danger to human life) and how intensively (taking into account the available resources and the budget) that part of the system must or can be tested.

Accordingly, the test strategy defines which parts of the system are to be tested at what intensity, using which test methods and techniques, with which test infrastructure, and in what order (see also test levels).

It is developed by test management in the context of test planning, documented and agreed in the test plan, and serves as the binding framework for testing (by the test team).

According to another interpretation, 'test strategy' is understood as the methodological approach according to which testing is carried out.

The ISTQB, for example, names characteristics of test strategies as follows:

  • Top-down: first test the main functions; subordinate routines are initially ignored during the test or simulated using 'stubs' (see the sketch after this list)
  • Bottom-up: first test the detail functions; higher-level functions or calls are simulated using 'test drivers'
  • Hardest first: depending on the situation, the most difficult things first
  • Big bang: everything at once
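A minimal sketch of the top-down principle in Python, in which a not-yet-implemented subordinate routine is replaced by a stub; the function names are hypothetical:

    import unittest
    from unittest import mock

    # Hypothetical main function under test; it calls a subordinate routine.
    def monthly_average(fetch_sales):
        data = fetch_sales()  # subordinate routine, possibly not yet implemented
        return sum(data) / len(data)

    class TopDownTest(unittest.TestCase):

        def test_average_with_stub(self):
            # The missing routine is simulated by a stub with fixed values.
            stub = mock.Mock(return_value=[10.0, 20.0, 30.0])
            self.assertEqual(monthly_average(stub), 20.0)

    if __name__ == "__main__":
        unittest.main()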

Other principles and techniques for test strategies are:

  • Risk-based testing: a test principle according to which the test coverage is aligned with the risks that could arise in the test objects (should errors go undetected).
  • Data-driven testing: a test technique by which the data constellations can be selectively varied via settings in the test scripts, so that multiple test cases can be run efficiently one after the other (see the sketch after this list)
  • Test-driven development
  • SMART: test principle 'specific, measurable, achievable, realistic, time-bound'
  • Keyword-driven testing
  • Framework-based: test automation using testing tools for specific development environments/languages
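A sketch of data-driven testing in Python: the test script stays the same while the data constellations are varied; the shipping cost function and its rates are invented for illustration:

    import unittest

    def shipping_cost(weight_kg: float) -> float:
        """Hypothetical function under test: flat rates by weight class."""
        return 4.90 if weight_kg <= 2.0 else 8.90

    class ShippingDataDrivenTest(unittest.TestCase):

        # Data constellations; changing this table changes the tests.
        CASES = [
            (0.5, 4.90),
            (2.0, 4.90),   # boundary of the lower weight class
            (2.1, 8.90),
            (30.0, 8.90),
        ]

        def test_all_constellations(self):
            for weight, expected in self.CASES:
                with self.subTest(weight=weight):
                    self.assertEqual(shipping_cost(weight), expected)

    if __name__ == "__main__":
        unittest.main()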

Documentation

Test planning also includes the preparation of the documentation. IEEE 829 recommends a standardized approach for this. According to this standard, complete test documentation comprises the following documents: test plan, test design specification, test case specification, test procedure specification, test item transmittal report, test log, test incident report, test summary report.

Test Automation

Automation is recommended particularly for tests that are repeated frequently. This is especially the case for regression testing and in test-driven development. In addition, test automation is used for tests that are impossible or difficult to perform manually (e.g. load tests).

  • Regression tests, usually as part of system or acceptance testing, check that the previous functionality has been preserved without error after software changes.
  • In test-driven development, tests are written in the course of software development, ideally before each change, and are run after every change.

With non-automated tests, in both cases the effort is so high that the tests are often dispensed with.
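Automation is what makes this frequent repetition affordable. With Python's standard unittest runner, for example, a complete regression suite can be re-run unattended after every change (the directory name 'tests' is an assumption):

    python -m unittest discover -s tests -v

A failing run then signals that previously working functionality has been broken by the change.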

Overviews / relationships

Concepts in testing

The graphic shows terms that occur in the context of 'testing' and how they relate to other terms.

Interfaces in testing

The graphic shows the most important interfaces that occur in testing. For the 'partners' in testing named by Thaller, examples are given below of what is communicated or exchanged with each of them.

  • Project management: schedule and effort framework, status of the individual test objects ('ready for testing'), documentation systems
  • Line management (and line departments): provide domain support, test acceptance, provide domain testers
  • Data center: provide and operate the test environment(s) and test tools
  • Database administration: set up, load and manage test databases
  • Configuration management: set up the test environment, integrate the new software
  • Development: documents of the test basis, test objects, support for testing, discussion of test results
  • Problem and CR management: error messages, feedback on retests, error statistics
  • Steering committee: decisions on test (level) acceptance or test termination