Software Testing Management

Test Strategy

In software testing, a test strategy is a document describing how the software will be tested. It is generally developed for all levels of testing. The Test Lead / Test Manager writes the Test Strategy and reviews it with the project team. Note also that the Test Plan may include the test environment, risk assessment, test cases, conditions, pass/fail criteria, and a list of related tasks.

The inputs of a Test Strategy can be:
- Test environment
- Test tool data
- Project schedule
- Defined software testing standards
- Functional and technical requirements

An ideal Test Strategy must contain:
- Details of the required hardware and software components, including test tools
- Roles and responsibilities
- Software testing methodology
- Limitations of the application

At the completion of this stage, the testing team will have the following three documents:
- Test Strategy
- Test Plan
- Test Cases

All of the above, collectively, are called the Software Testing Methodology.

Test Strategy - All Aspects
Continuing from my previous post on Test Strategy, here I describe it in more detail.
A test strategy can be defined as a high-level management method for establishing adequate confidence in the software product being tested, while ensuring that cost, effort, and timelines all remain within acceptable limits.
A test strategy can exist at different levels:
- Enterprise-wide test strategy, to set up a test group for the entire organization
- Application or product test strategy, to set up the test requirements of a software product for its entire life
- Project-level test strategy, to set up test plans for a single project life cycle

A good test strategy should:
- Define the objectives, timelines, and approach for the testing effort
- List the various testing activities and define roles and responsibilities
- Identify and coordinate the test environment and data requirements before the testing phase starts
- Communicate the test plans to stakeholders and obtain buy-in from business clients
- Proactively minimize fires during the testing phase

Decisions to be made in a test strategy:
- When should testing be stopped?
- What should be tested?
- What can remain untested?

Further, the following three types of risk can be considered while making a test strategy:
1. Development-related risks include:

- Inefficiently controlled project timelines
- Complexity of the code
- Less-skilled programmers
- Defects in existing code
- Problems in team coordination
- Lack of reviews and configuration control
- Lack of specifications
2. Testing-related risks include:

- Lack of domain knowledge
- Lack of testing and platform skills
- Lack of a test bed and test data
- Lack of sufficient time
3. Production-related risks include:

- Dynamic frequency of usage
- Complexity of the user interface
- High business impact of the function

Test strategies generally acknowledge that:
- The earlier a bug is detected, the cheaper it is to fix
- Splitting the testing into smaller parts and then aggregating ensures a quicker debug-and-fix cycle
- Bugs found in critical functions take more time to fix

Six steps for developing a test strategy:
1. Determine the objective and scope of the testing
2. Identify the types of tests required
3. Based on the external and internal risks, add, modify, or delete processes
4. Plan the environment, test bed, test data, and other infrastructure
5. Plan a strategy for managing changes, defects, and timeline variations
6. Decide both the in-process and the post-process metrics to match the objectives

While creating a test strategy for maintenance projects, the following aspects need to be considered:
- How much longer will the software be supported, and is it worthwhile strategizing to improve its testing?
- Is it worthwhile to incrementally automate the testing of the existing code and functionality?
- How much of the old functionality should we test to gain confidence that the new code has not corrupted the old code?
- Should the same set of people support the testing of multiple software maintenance projects?
- What analysis can be applied to the current defect database to help us improve the development itself?

For the test strategy of a product development project, however, the following aspects need to be considered:
- How much longer is the product going to last? This includes multiple versions, since test cases and artifacts can continue to be used across versions.
- Would automation be worth considering?
- How much testing should we do per release (minor, patch, major, etc.)?
- Do we have a risk-based test plan that will allow releases to be made earlier than planned?
- Is the development cycle iterative? Should the test cycle follow it?

As per IEEE software engineering standards, the Test Strategy document should address the following aspects:
1. Objective and scope of testing
- What is the business objective of the testing?
- What are the quality goals to be met by the testing effort?
- To what extent will each application be tested?
- What external systems will not be tested?
- What systems and components need to be tested?

2. Types of testing
- Different phases of testing that are required
- Different types of testing
- Test coverage

3. Testing approach
- Definition of the testing process life cycle
- Creation of testing-related templates, checklists, and guidelines
- Methodology for test development and execution
- Specification of the test environment setup
- Planning for test execution cycles

4. Test environment specification
- Hardware and software requirements
- Test data creation, database requirements, and setup
- Configuration management, maintenance of the test bed, and build management

5. Test Automation
- Criteria and feasibility of test automation
- Test tool identification
- Test automation strategy (effort, timelines, etc.)
6. Roles and Responsibilities / Escalation mechanism
- Testing team organization and reporting structure
- Different roles within the testing activities and their corresponding responsibilities
- Who to escalate to, and when
7. Defect Management
- Categorization of defects based on criticality and priority
- Definition of a workflow for the disposition of defects
- Techniques and tools for tracking defects
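To make the categorization and workflow ideas concrete, here is a minimal sketch in Python (my own illustration; the severity levels, states, and transitions are hypothetical, not mandated by any standard or tool):

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):   # criticality of the defect's impact
    CRITICAL = 1
    MAJOR = 2
    MINOR = 3

class Priority(Enum):   # urgency of fixing it
    HIGH = 1
    MEDIUM = 2
    LOW = 3

class State(Enum):      # a simple disposition workflow
    NEW = "new"
    ASSIGNED = "assigned"
    FIXED = "fixed"
    VERIFIED = "verified"
    CLOSED = "closed"

# Allowed state transitions in the workflow.
TRANSITIONS = {
    State.NEW: {State.ASSIGNED},
    State.ASSIGNED: {State.FIXED},
    State.FIXED: {State.VERIFIED, State.ASSIGNED},  # reopen if the fix fails verification
    State.VERIFIED: {State.CLOSED},
    State.CLOSED: set(),
}

@dataclass
class Defect:
    defect_id: str
    summary: str
    severity: Severity
    priority: Priority
    state: State = State.NEW
    history: list = field(default_factory=list)

    def move_to(self, new_state: State) -> None:
        """Advance the defect through the workflow, rejecting illegal jumps."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"Illegal transition {self.state} -> {new_state}")
        self.history.append((self.state, new_state))
        self.state = new_state
```

A real defect tracker adds fields such as reporter, assignee, and timestamps, but the core of defect management is exactly this: a classification plus a controlled disposition workflow.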
8. Communication and status reporting
- Status meetings for communication of testing status
- Format and content of the different status reports
- Periodicity of each status report
- Distribution list for each report

9. Risks and mitigation plans
- Identification of all testing-related risks, their impact, and exposure
- Plan for mitigating and managing these risks

10. Configuration management
- List of testing artifacts under version control
- Tools and techniques for configuration management
11. Change management
- Plan for managing requirement changes
- Models for assessing the impact of changes on testing
- Process for keeping test artifacts in sync with development artifacts
12. Testing metrics
- What metrics are to be collected?
- Do they match the strategic objectives?
- What will be the techniques for collecting the metrics?
- What tools will be employed to gather and analyze the metrics?
- What process improvements are planned based on these metrics?
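As a small illustration of metrics collection (a sketch of my own; the metric choices and numbers are examples, not requirements of the IEEE standards mentioned above), common in-process test metrics reduce to simple calculations over counts the test team already records:

```python
def defect_density(defects_found: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / size_kloc

def test_execution_progress(executed: int, planned: int) -> float:
    """Fraction of planned test cases executed so far."""
    return executed / planned

def defect_removal_efficiency(found_in_test: int, found_in_production: int) -> float:
    """Share of total defects caught before release."""
    return found_in_test / (found_in_test + found_in_production)

# Example: 48 defects in 12 KLOC, 300 of 400 cases run, 48 pre-release vs. 6 post-release.
print(f"Defect density: {defect_density(48, 12):.1f}/KLOC")            # 4.0/KLOC
print(f"Execution progress: {test_execution_progress(300, 400):.0%}")  # 75%
print(f"DRE: {defect_removal_efficiency(48, 6):.0%}")                  # 89%
```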
Spiral Testing Approach

The purpose of software testing is to identify the differences between existing and expected conditions, i.e., to detect software defects. Testing identifies the requirements that have not been satisfied and the functions that have been impaired. The most commonly recognized test objective is to identify bugs, but this is a limited definition of the aim of testing. Not only must bugs be identified, but they must be put into a framework that enables testers to predict how the software will perform.
In the spiral and rapid application development testing environment there may be no final functional requirements for the system. They are probably informal and evolutionary. Also, the test plan may not be completed until the system is released for production. The relatively long lead time to create test plans based on a good set of requirement specifications may not be available. Testing is an ongoing improvement process that occurs frequently as the system changes. The product evolves over time and is not static.
The testing organization needs to get inside the development effort and work closely with development. Each new version needs to be tested as it becomes available. The approach is to first test the new enhancements or modified software to resolve defects reported in the previous spiral. If time permits, regression testing is then performed to assure that the rest of the system has not regressed.
In the spiral development environment, software testing is again described as a continuous improvement process that must be integrated into a rapid application development methodology. Testing as an integrated function prevents development from proceeding without testing. Deming's continuous improvement process, using the PDCA model, will again be applied to the software testing process.

Before the continuous improvement process begins, the testing function needs to perform a series of information-gathering planning steps to understand the development project objectives, current status, project plans, function specification, and risks.
Once this is completed, the formal Plan step of the continuous improvement process commences. A major step is to develop a software test plan. The test plan is the basis for accomplishing testing and should be considered an ongoing document, i.e., as the system changes, so does the plan. The outline of a good test plan includes an introduction, the overall plan, testing requirements, test procedures, and test plan details. These are further broken down into business functions, test scenarios and scripts, function/test matrix, expected results, test case checklists, discrepancy reports, required software, hardware, data, personnel, test schedule, test entry criteria, exit criteria, and summary reports.
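To illustrate one of the artifacts named above, a function/test matrix can be as simple as a mapping from business functions to the test cases that cover them. This hypothetical sketch (function names and test IDs are invented) also shows the coverage check such a matrix enables:

```python
# A function/test matrix: business function -> covering test cases.
function_test_matrix = {
    "Login":           ["TC-001", "TC-002", "TC-003"],
    "Transfer funds":  ["TC-010", "TC-011"],
    "Print statement": [],  # uncovered: flagged by the check below
}

# Report any business function with no covering test case.
for function, cases in function_test_matrix.items():
    if not cases:
        print(f"Coverage gap: '{function}' has no test cases")
```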
The more definitive a test plan is, the easier the plan step will be. If the system changes between development of the test plan and when the tests are to be executed, the test plan should be updated accordingly.
The Do step of the continuous improvement process consists of test case design, test development, and test execution. This step describes how to design test cases and execute the tests included in the test plan. Design includes the functional tests, GUI tests, and fragment system and acceptance tests. Once an overall test design is completed, test development starts. This includes building test scripts and procedures to provide test case details.
The test team is responsible for executing the tests and must ensure that they are executed according to the test design. The Do step also includes test setup, regression testing of old and new tests, and recording any defects discovered.
The Check step of the continuous improvement process includes metric measurements and analysis. As discussed in “Quality Through a Continuous Improvement Process,” crucial to the Deming method is the need to base decisions as much as possible on accurate and timely data. Metrics are key to verifying whether the work effort and test schedule are on track, and to identifying any new resource requirements.

During the Check step it is important to publish intermediate test reports. This includes recording the test results and relating them to the test plan and test objectives.
The Act step of the continuous improvement process involves preparation for the next spiral iteration. It entails refining the function/GUI tests, test suites, test cases, test scripts, and fragment system and acceptance tests, and modifying the defect tracking system and the version control system, if necessary. It also includes devising measures for appropriate actions relating to work that was not performed according to the plan, or results that were not what was anticipated. Examples include a reevaluation of the test team, test procedures, and the technology dimensions of testing. All of the above is fed back into the test plan, which is updated.
Once several testing spirals have been completed and the application has been verified as functionally stable, full system and acceptance testing starts. These tests are often optional. Respective system and acceptance test plans are developed defining the test objects and the specific tests to be completed.
The final activity in the continuous improvement process is summarizing and reporting the spiral test results. A major test report should be written at the end of all testing. The process used for report writing is the same whether it is an interim or a final report, and, like other tasks in testing, report writing is also subject to quality control. However, the final test report should be much more comprehensive than interim test reports. For each type of test it should describe a record of defects discovered, data reduction techniques, root cause analysis, the development of findings, and follow-on recommendations for the current and/or future projects.
The methodology provides a framework for testing in this environment. The major steps include information gathering, test planning, test design, test development, test execution/evaluation, and preparing for the next spiral. It includes a set of tasks associated with each step, or a checklist from which the testing organization can choose based on its needs. The spiral approach flushes out the system functionality. When this has been completed, it also provides for classical system and acceptance testing.
Software Testing Estimation Process

The software testing estimation process is one of the most difficult and critical activities. When we say that a project will be completed in a particular time at a particular cost, it must happen. If it does not, the consequences may range from peers' comments to senior management's warnings, or even dismissal, depending on the reasons for and seriousness of the failure.
Here are a few rules for effective software testing estimation:
- Estimation must be based on previous projects: all estimates should be grounded in data from comparable previous projects.
- Estimation must be recorded: all estimation decisions should be recorded. This is important because, if requirements change for any reason, the records help the testing team estimate again.
- Estimation shall always be based on the software requirements: all estimation should be based on what will be tested. The software requirements shall be read and understood by the testing team as well as the development team. Without the testing team's participation, no estimation can be considered serious.
- Estimation must be verified: two spreadsheets can be created for recording the estimates independently, and the two compared at the end. If an estimate deviates from the recorded ones, a re-estimation should be made.
- Estimation must be supported by tools: a tool such as a spreadsheet of metrics can automatically calculate the cost and duration of each testing phase (a minimal sketch of such a calculation appears after this list). A document containing sections such as a cost table, risks, and free notes should also be created; showing this document to the customer can help them decide which kind of testing they need.
- Estimation shall be based on expert judgment: experienced testers can readily estimate how long the testing would take.
Classify the requirements into the following categories:
Critical: the development team has little knowledge of how to implement it.

High: the development team has good knowledge of how to implement it, but it is not an easy task.
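As an illustration of tool support (a minimal sketch of my own; the phase names, productivity rates, and risk multipliers are hypothetical, not from any standard), an estimate per testing phase can be derived from test case counts and historical productivity:

```python
# Hypothetical historical productivity: hours per test case, by phase.
HOURS_PER_CASE = {"design": 1.0, "execution": 0.5, "regression": 0.25}
HOURLY_RATE = 40.0  # hypothetical cost per person-hour

# Risk multipliers from the requirement classification above (hypothetical values).
RISK_FACTOR = {"critical": 1.5, "high": 1.2}

def estimate_phase(test_cases: int, phase: str, classification: str) -> tuple[float, float]:
    """Return (effort_hours, cost) for one testing phase."""
    hours = test_cases * HOURS_PER_CASE[phase] * RISK_FACTOR[classification]
    return hours, hours * HOURLY_RATE

total_hours = total_cost = 0.0
for phase in HOURS_PER_CASE:
    hours, cost = estimate_phase(test_cases=120, phase=phase, classification="high")
    total_hours += hours
    total_cost += cost
    print(f"{phase:10s}: {hours:6.1f} h  ${cost:,.2f}")
print(f"{'total':10s}: {total_hours:6.1f} h  ${total_cost:,.2f}")
```

Recording the inputs (test case counts, rates, multipliers) alongside the outputs is what makes the estimate verifiable and repeatable when requirements change.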
Software Test Planning

The quality of the software testing effort depends on the quality of the software test planning. Software test planning is a very critical and important part of the software testing process.
Below are some questions and suggestions for Software Test Planning:
- Have you planned the overall testing schedule, the personnel required, and the associated training requirements?
- Have the test team members been given assignments?
- Have you established test plans and test procedures for:
  1. Module testing
  2. Integration testing
  3. System testing
  4. Acceptance testing
- Have you designed at least one black-box test case for each system function?
- Have you designed test cases for verifying quality objectives/factors (e.g. reliability, maintainability, etc.)?
- Have you designed test cases for verifying resource objectives?
- Have you defined test cases for performance tests, boundary tests, and usability tests?
- Have you designed test cases for stress tests (intentional attempts to break system)?
- Have you designed test cases with special input values (e.g. empty files)?
- Have you designed test cases with default input values?
- Have you described how traceability of testing to requirements is to be demonstrated (e.g. references to the specified functions and requirements)?
- Do all test cases agree with the specification of the function or requirement to be tested?
- Have you sufficiently considered error cases? Have you designed test cases for invalid and unexpected input conditions as well as valid conditions?
- Have you defined test cases for white-box-testing (structural tests)?
- Have you stated the level of coverage to be achieved by structural tests?
- Have you unambiguously provided test input data and expected test results or expected messages for each test case?
- Have you documented the purpose of and the capability demonstrated by each test case?
- Is it possible to meet and to measure all test objectives defined (e.g. test coverage)?
- Have you defined the test environment and tools needed for executing the software test?
- Have you described the hardware configuration and resources needed to implement the designed test cases?
- Have you described the software configuration needed to implement the designed test cases?
- Have you described the way in which tests are to be recorded?
- Have you defined criteria for evaluating the test results?
- Have you determined the criteria on which the completion of the test will be judged?
- Have you considered requirements for regression testing?
Software Test Plan

A test plan is a document describing the approach to be taken for the intended testing activities, and it serves as a service-level agreement between the quality assurance testing function and other interested parties, such as development. A test plan should be developed early in the development cycle and should help improve the interactions of the analysis, design, and coding activities. A test plan defines the test objectives; the scope, strategy, and approach; test procedures; the test environment; test completion criteria; test cases; the items to be tested; the tests to be performed; the test schedules; personnel requirements; reporting procedures; assumptions; risks; and contingency planning.
While developing a test plan, one should be sure that it is simple, complete, current, and accessible to the appropriate individuals for feedback and approval. A good test plan flows logically and minimizes redundant testing; demonstrates full functional coverage; provides workable procedures for monitoring, tracking, and reporting test status; contains a clear definition of the roles and responsibilities of the parties involved, along with target delivery dates; and clearly documents the test results.
There are two ways of building a test plan. The first approach is a master test plan, which provides an overview of each detailed test plan, i.e., a test plan of test plans. A detailed test plan verifies a particular phase of the waterfall development life cycle; examples include unit, integration, system, and acceptance test plans. Other detailed test plans cover application enhancements, regression testing, and package installation. Unit test plans are code-oriented and very detailed, but short because of their limited scope. System or acceptance test plans focus on the functional, or black-box, view of the entire system, not just a software unit.
The second approach is a single test plan. This approach includes all the test types in one plan, often called the acceptance/system test plan, covering unit, integration, system, and acceptance testing along with all the planning considerations needed to complete the tests.
A major component of a test plan, often in the Test Procedure section, is the test case. A test case defines the step-by-step process whereby a test is executed. It includes the objectives and conditions of the test, the steps needed to set up the test, the data inputs, the expected results, and the actual results. Other information, such as the software, environment, version, test ID, screen, and test type, is also provided.
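To make this concrete, here is a minimal sketch (the field names are my own, not from any standard template) of a test case record mirroring the elements listed above:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One test case: objectives, setup, inputs, expected vs. actual results."""
    test_id: str
    objective: str
    preconditions: list[str]   # conditions and setup steps
    steps: list[str]           # step-by-step execution process
    data_inputs: dict          # named input values
    expected_result: str
    actual_result: str = ""    # filled in during execution
    environment: str = ""      # software/hardware environment and version
    test_type: str = ""        # e.g., functional, regression, acceptance

    def passed(self) -> bool:
        """A test passes when the recorded actual result matches the expected one."""
        return self.actual_result == self.expected_result

# Example usage (hypothetical feature and values):
tc = TestCase(
    test_id="TC-042",
    objective="Reject login with an empty password",
    preconditions=["Test user 'alice' exists"],
    steps=["Open login screen", "Enter user name", "Leave password empty", "Submit"],
    data_inputs={"username": "alice", "password": ""},
    expected_result="Error: password required",
    environment="build 1.3.7, staging",
    test_type="functional",
)
tc.actual_result = "Error: password required"
print(tc.passed())  # True
```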
Major steps to develop a test plan:
A test plan is the basis for accomplishing testing and should be considered a living document, i.e., as the application changes, the test plan should change.
A good test plan encourages the attitude of “quality before design and coding.” It is able to demonstrate that it contains full functional coverage and that the test cases trace back to the functions being tested. It also contains workable mechanisms for monitoring and tracking discovered defects and for reporting status.
The following are the major steps that need to be completed to build a good test plan:
Define the Test Objectives. The first step for planning any test is to establish what is to be accomplished as a result of the testing. This step ensures that all responsible individuals contribute to the definition of the test criteria that will be used. The developer of a test plan determines what is going to be accomplished with the test, the specific tests to be performed, the test expectations, the critical success factors of the test, the constraints, the scope of the tests to be performed, the expected end products of the test, a final system summary report, and the final signatures and approvals. The test objectives are reviewed and approval for the objectives is obtained.
Develop the Test Approach. The test plan developer outlines the overall approach or how each test will be performed. This includes the testing techniques that will be used, test entry criteria, test exit criteria, procedures to coordinate testing activities with development, the test management approach, such as defect reporting and tracking, test progress tracking, status reporting, test resources and skills, risks, and a definition of the test basis (functional requirement specifications, etc.).
Define the Test Environment. The test plan developer examines the physical test facilities, defines the hardware, software, and networks, determines which automated test tools and support tools are required, defines the help desk support required, builds special software required for the test effort, and develops a plan to support the above.
Develop the Test Specifications. The developer of the test plan forms the test team to write the test specifications, develops test specification format standards, divides up the work tasks and work breakdown, assigns team members to tasks, and identifies features to be tested. The test team documents the test specifications for each feature and cross-references them to the functional specifications. It also identifies the interdependencies and work flow of the test specifications and reviews the test specifications.
Schedule the Test. The test plan developer develops a test schedule based on the resource availability and development schedule, compares the schedule with deadlines, balances resources and work load demands, defines major checkpoints, and develops contingency plans.
Review and Approve the Test Plan. The test plan developer or manager schedules a review meeting with the major players, reviews the plan in detail to ensure it is complete and workable, and obtains approval to proceed.
Test Specification

A test specification is a detailed summary of what scenarios will be tested, how they will be tested, how often they will be tested, and so on, for a given feature. Trying to include all editor features or all window-management features in one test specification would make it too large to read effectively. A test plan, by contrast, is a collection of all the test specifications for a given area; it contains a high-level overview of what is tested for that feature area.

Contents of a test specification:

- Revision History: who created the test specification, when it was created, and when it was last updated.
- Feature Description: a brief description of the area being tested.
- What is tested?: an overview of the scenarios that are tested.
- What is not tested?: any areas that are not being tested, with the reasons (e.g., covered by different people, or test limitations). If so, include this information as well.
- Nightly Test Cases: a list of the test cases, with a high-level description of what is tested whenever a new build becomes available.
- Breakout of Major Test Areas: the most interesting part of the test specification, where testers arrange test cases according to what they test:
  - Specific Functionality Tests: tests to verify the feature works according to the design specification, including error conditions.
  - Security Tests: any tests related to security.
  - Accessibility Tests: any tests related to accessibility.
  - Performance Tests: tests verifying any performance requirements for the feature.
  - Localization / Globalization: tests to ensure the product's local and international requirements are met.

Note that the test specification should make it easy to prioritize test cases, e.g., into nightly test cases, weekly test cases, and a full test pass:
- Nightly: must run whenever a new build is available.
- Weekly: other major functionality tests, run once every three or four builds.
- Lower priority: run once every major coding milestone.
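As one way to realize this prioritization in an automated suite (a sketch assuming pytest is used; the marker names and the feature stub are my own), test cases can be tagged with their cadence and selected per run:

```python
import pytest

def validate_password(pw: str) -> bool:
    """Stand-in for the feature under test (hypothetical)."""
    return 0 < len(pw) <= 128

# Cadence markers; register them to silence warnings, e.g. in pyproject.toml:
#   [tool.pytest.ini_options]
#   markers = ["nightly: run on every new build", "weekly: run every 3-4 builds"]

@pytest.mark.nightly
def test_empty_password_rejected():
    # Nightly: core scenario, must pass whenever a new build is available.
    assert validate_password("") is False

@pytest.mark.weekly
def test_long_password_accepted():
    # Weekly: broader functionality, run once every three or four builds.
    assert validate_password("x" * 64) is True
```

Selection then happens at the command line: `pytest -m nightly` on every new build, `pytest -m "nightly or weekly"` every three or four builds, and plain `pytest` for the full test pass.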