Software testing: Types and how to choose the test strategy
In the testing process, the test strategy describes the organization's general approach to testing. It covers how testing is used to manage product and project risks, how testing is divided into levels, and the high-level activities associated with testing. The test strategy, along with the processes and activities it describes, must be consistent with the test policy, and it should provide the general entry and exit criteria for testing, either for the organization as a whole or for one or more software products.
Test strategies often include both preventive and reactive elements. The preventive elements of the test strategy are those that apply testing activities to early work products, for example using test analysis and design to find defects in the test basis documents, such as the requirements specifications. The reactive elements of the test strategy are those that respond to the software as it is actually delivered, keeping the test activities consistent with the system under test.
Types and choice of strategy
As mentioned above, test strategies describe the general approaches to testing, which typically include:
- Analytical strategies, such as risk-based testing, in which the test team analyzes the test basis to identify the test conditions to be covered. For example, in requirements-based testing, test analysis derives the test conditions from the requirements, and tests are then designed and implemented to cover those conditions. The tests are then executed, often using the priority of each requirement to determine the order of execution. Test results are reported in terms of requirement status, for example: requirement tested and passed, requirement tested and failed, requirement not yet fully tested, requirement with blocked tests, and so on.
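As a rough sketch of this kind of reporting (the requirement IDs, priorities, and status labels below are invented for illustration), each test can be tracked against the requirement it covers and the results rolled up into a per-requirement status:

```python
# Hypothetical sketch of requirement-based test reporting: each test
# covers one requirement; results roll up to a per-requirement status.
from collections import defaultdict

def requirement_status(tests):
    """tests: list of (requirement_id, priority, result), where result is
    'pass', 'fail', 'blocked' or 'not_run'. Priority can drive the order
    in which tests are executed; here it only accompanies the record."""
    by_req = defaultdict(list)
    for req, _priority, result in tests:
        by_req[req].append(result)
    status = {}
    for req, results in by_req.items():
        if "fail" in results:
            status[req] = "tested and failed"
        elif "blocked" in results:
            status[req] = "test blocked"
        elif "not_run" in results:
            status[req] = "not yet fully tested"
        else:
            status[req] = "tested and passed"
    return status

# Example with invented requirement IDs.
tests = [("REQ-1", 1, "pass"), ("REQ-1", 1, "pass"),
         ("REQ-2", 2, "fail"), ("REQ-3", 3, "not_run")]
print(requirement_status(tests))
```

In a real project the mapping from tests to requirements would come from a traceability matrix or a test management tool rather than a hard-coded list.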
- Model-based strategies, such as operational profiles, in which the test team develops a model (based on real or expected situations) of the environment that hosts the system, the inputs and conditions the system is subjected to, and how the system should behave. For example, in the performance testing of a fast-growing mobile application, you could develop:
- models of incoming and outgoing network traffic, active and idle users, and the resulting processing load, based on current usage and projected growth over time;
- models of the current production environment, including hardware, software, data capacity, network and infrastructure;
- models of throughput rate, response time and resource allocation, each with target, expected and minimum values.
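A sketch of the last kind of model follows; all figures, metric names, and grading labels are invented. Each metric carries a target, an expected, and a minimum acceptable value, and measured results are graded against them:

```python
# Hypothetical performance model: each metric has a target, expected and
# minimum (worst acceptable) value; measurements are graded against it.
PERF_MODEL = {
    # metric: (target, expected, minimum_acceptable) -- invented figures
    "throughput_req_per_s": (500, 400, 250),
    "response_time_ms":     (200, 350, 800),   # lower is better
}

LOWER_IS_BETTER = {"response_time_ms"}

def grade(metric, measured):
    target, expected, minimum = PERF_MODEL[metric]
    if metric in LOWER_IS_BETTER:
        # Negate so that "higher is better" comparisons apply uniformly.
        measured, target, expected, minimum = -measured, -target, -expected, -minimum
    if measured >= target:
        return "meets target"
    if measured >= expected:
        return "meets expected"
    if measured >= minimum:
        return "meets minimum"
    return "unacceptable"

print(grade("throughput_req_per_s", 420))  # between expected and target
print(grade("response_time_ms", 900))      # slower than the minimum
```

In practice such thresholds would come from service-level agreements or capacity planning, and the measurements from a load-testing tool.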
- Methodical strategies, such as quality-characteristic-based testing, in which the test team adopts a predetermined set of test conditions, such as a quality standard, a checklist, or a general set of test conditions related to a particular domain or a particular type of testing (for example, security testing), and then reuses this set of conditions from one iteration or version to the next. For example, in maintenance testing of a simple and stable e-commerce site, testers could use a checklist that identifies the key features, attributes and links for each page, and re-check those elements whenever a change is made to the site.
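Such a checklist can be as simple as a per-page list of elements to re-verify after each change. The sketch below uses invented page names and checklist items:

```python
# Hypothetical per-page checklist for maintenance testing of an
# e-commerce site: key features, attributes and links to re-check
# whenever a page changes. Page names and items are invented.
CHECKLIST = {
    "home":     ["logo link", "search box", "cart icon", "login link"],
    "product":  ["price shown", "add-to-cart button", "image gallery"],
    "checkout": ["payment form", "order summary", "confirm button"],
}

def items_to_recheck(changed_pages):
    """Return the checklist items to verify for the pages that changed."""
    return {page: CHECKLIST[page] for page in changed_pages if page in CHECKLIST}

# After a change touching the product and checkout pages:
todo = items_to_recheck(["product", "checkout"])
for page, items in todo.items():
    print(page, "->", len(items), "items to verify")
```

The point of the methodical strategy is that this same set of conditions is applied consistently, change after change, rather than being re-derived each time.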
- Process- or standard-compliant strategies, such as testing medical systems subject to US FDA regulations, in which the test team follows a set of processes defined by a standards committee or other panel of experts, where the processes prescribe the documentation, the proper identification and use of the test basis and test oracles, and the organization of the test team. For example, on projects following Scrum, in each iteration the testers analyze the user stories that describe particular features, estimate the test effort for each story as part of iteration planning, identify the test conditions (often called acceptance criteria) for each user story, execute the tests that cover those conditions, and report the status of each user story (untested, failing, passing) during test execution.
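The story-status reporting described above can be sketched as follows; the acceptance criteria and the extra "in progress" label for partially tested stories are invented for illustration. A story passes only when every one of its acceptance criteria has a passing test:

```python
# Hypothetical sketch: a user story passes only when every one of its
# acceptance criteria has a passing test. Criteria texts are invented.
def story_status(criteria_results):
    """criteria_results: mapping acceptance-criterion -> 'pass', 'fail',
    or None (None means not yet tested)."""
    results = list(criteria_results.values())
    if not results or all(r is None for r in results):
        return "untested"
    if any(r == "fail" for r in results):
        return "failing"
    if all(r == "pass" for r in results):
        return "passing"
    return "in progress"  # partially tested, nothing failing yet

story = {"guest can add item to cart": "pass",
         "cart total updates": "pass",
         "out-of-stock items rejected": None}
print(story_status(story))
```

In a Scrum team this status would typically live on the task board or in the iteration backlog rather than in code.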
- Reactive strategies, such as the use of defect-based attacks, in which the test team waits to design and implement tests until the software is delivered to them, then reacts to the first results from the system under test. For example, in exploratory testing of a desktop application, a set of test charters corresponding to features, screens and menu selections could be developed. Each tester is assigned a set of charters, which they use to structure their exploratory test sessions. The testers periodically report the results of the test sessions to the test manager, who may revise the charters based on those results.
- Consultative strategies, such as user-directed testing, in which the test team relies on the input of one or more stakeholders to determine the test conditions to be covered. For example, in outsourced compatibility testing of a web application, a company can give its external test service provider a prioritized list of the browser versions, antivirus software, operating systems, connection types and other configuration options it wants tested for its application. The provider can then use techniques such as pairwise testing (for the high-priority options) and equivalence partitioning (for the lower-priority options) to generate the tests.
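Pairwise testing rests on the observation that most configuration faults are triggered by the interaction of at most two options, so it suffices to cover every pair of values rather than every full combination. A minimal greedy sketch (the browser, OS and connection options are invented): from the full cartesian product, repeatedly pick the combination covering the most not-yet-covered value pairs until every pair is covered.

```python
# Greedy all-pairs (pairwise) test generation sketch. Practical tools
# use more sophisticated algorithms; this illustrates the principle.
from itertools import combinations, product

def pairwise_tests(params):
    """params: dict of parameter name -> list of values."""
    names = list(params)
    all_combos = list(product(*(params[n] for n in names)))

    def pairs_of(combo):
        # All (parameter, value) pairs that this combination covers.
        return set(combinations(zip(names, combo), 2))

    uncovered = set()
    for combo in all_combos:
        uncovered |= pairs_of(combo)

    tests = []
    while uncovered:
        best = max(all_combos, key=lambda c: len(pairs_of(c) & uncovered))
        tests.append(dict(zip(names, best)))
        uncovered -= pairs_of(best)
    return tests

# Invented configuration options for a compatibility test.
options = {"browser": ["Chrome", "Firefox", "Safari"],
           "os": ["Windows", "macOS"],
           "connection": ["wifi", "4g"]}
suite = pairwise_tests(options)
print(len(suite), "tests instead of", 3 * 2 * 2, "full combinations")
```

Even on this tiny example the suite is smaller than the full product, and the saving grows quickly as parameters and values are added.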
- Regression testing strategies, such as extensive automation, in which the test team uses various techniques to manage regression risk, particularly the automation of functional and non-functional regression tests at one or more levels. For example, for regression testing of a web application, testers can use a GUI-based test automation tool to automate the typical and exceptional use cases; these tests are then executed whenever the application is changed.
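GUI tooling aside, the core idea of automated regression testing can be sketched as re-running recorded use cases and comparing results against baselines captured before the change. The function under test and the baselines below are invented:

```python
# Minimal regression-check sketch: run each recorded use case and compare
# the result against the baseline captured from the last known-good
# version. The function under test and baselines are invented.
def apply_discount(price, code):        # stand-in for application logic
    return round(price * 0.9, 2) if code == "SAVE10" else price

BASELINES = [
    ({"price": 100.0, "code": "SAVE10"}, 90.0),   # typical use case
    ({"price": 100.0, "code": "BOGUS"},  100.0),  # anomalous use case
]

def run_regression():
    """Return the list of regressions: (inputs, expected, actual)."""
    failures = []
    for inputs, expected in BASELINES:
        actual = apply_discount(**inputs)
        if actual != expected:
            failures.append((inputs, expected, actual))
    return failures

print("regressions:", run_regression())  # empty list: no regressions
```

Because such suites run on every change, keeping them fast and deterministic matters as much as their coverage.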
Different strategies can be combined. The strategy selected must be appropriate to the needs of the organization, and organizations can tailor strategies to suit particular situations and projects.
The test strategy can describe the levels of testing to be performed. In that case, it should provide general guidance on the entry and exit criteria of each level and on the relationships between the levels (for example, how test coverage objectives are shared between them).
Finally, different test strategies may be needed for the short and long term, as well as for different organizations and projects. For example, for safety-critical applications a more intensive strategy may be more appropriate than in other cases. Furthermore, test strategies may vary depending on the development model adopted.