Sunday, May 3, 2009

SDLC : Software Development Life Cycle


1) System engineering and modeling
2) Software requirement analysis
3) Systems analysis and design
4) Code generation
5) Testing
6) Deployment and Maintenance

System Engineering and Modeling
In this process we identify the project's requirements and the main features proposed for the application. The development team visits the customer and studies their system, investigating the need for possible software automation in the given system. By the end of the investigation, the team writes a document that holds the specifications for the customer's system.

Software Requirement Analysis
In software requirements analysis, the requirements for the proposed system are analyzed first. To understand the nature of the program to be built, the system engineer must understand the information domain for the software, as well as the required functions, performance and interfacing. From the available information the system engineer develops a list of the actors, use cases and system-level requirements for the project. With the help of key users, the list of use cases and requirements is reviewed, refined and updated in an iterative fashion until the user is satisfied that it represents the essence of the proposed system.

Systems analysis and design
Design is the process of deciding exactly how the specifications are to be implemented. It defines specifically how the software is to be written, including an object model with properties and methods for each object, the client/server technology, the number of tiers needed for the package architecture and a detailed database design. Analysis and design are very important in the whole development cycle. Any glitch in the design can be very expensive to solve in later stages of software development.

Code generation
The design must be translated into a machine-readable form; the code generation step performs this task. The development phase involves the actual coding of the entire application. If the design is done in a detailed manner, code generation can be accomplished without much complication. Programming tools such as compilers and interpreters for languages like C, C++ and Java are used for coding, and the right programming language is chosen with respect to the type of application.

Testing
After coding, program testing begins. Different methods are available to detect errors in the code, and some companies have developed their own testing tools.

Deployment and Maintenance
Deployment is a staged roll-out of the new application; it involves installation and initial training and may involve hardware and network upgrades. Software will definitely undergo change once it is delivered to the customer. There are many reasons for change: changes could happen because of unexpected input values into the system, and changes in the customer's environment could directly affect the software's operation. The software should be developed to accommodate changes that could occur during the post-implementation period.

V-Model Testing Method


There are various levels of testing:
Unit Testing
Integration Testing
System Testing



There are various types of testing based upon the intent of testing such as:
Acceptance Testing
Performance Testing
Load Testing
Regression Testing



Based on the testing techniques, testing can be classified as:
Black box Testing
White box Testing


How does Unit Testing fit into the Software Development Life Cycle?
This is the first and the most important level of testing. As soon as the programmer develops a unit of code, the unit is tested for various scenarios. It is much more economical to find and eliminate bugs early on as the application is built; as the software project progresses, it becomes more and more costly to find and fix them. Hence Unit Testing is the most important of all the testing levels.



In most cases it is the developer’s responsibility to deliver Unit Tested Code.
Unit Testing Tasks and Steps:


Step 1: Create a Test Plan


Step 2: Create Test Cases and Test Data


Step 3: If applicable create scripts to run test cases


Step 4: Once the code is ready execute the test cases


Step 5: Fix the bugs, if any, and retest the code


Step 6: Repeat the test cycle until the “unit” is free of all bugs
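To make the steps above concrete, here is a minimal sketch of a unit test written with JUnit. The divide() helper, the test class name and the input values are hypothetical; the point is simply to show a test case with input data, an expected result and an error-handling scenario.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CalculatorTest {

    // Hypothetical unit under test: a small integer-division helper.
    static int divide(int dividend, int divisor) {
        return dividend / divisor; // throws ArithmeticException when divisor is 0
    }

    @Test
    public void divideReturnsQuotientForValidInput() {
        // Input data: 10 / 2, expected result: 5
        assertEquals(5, divide(10, 2));
    }

    @Test(expected = ArithmeticException.class)
    public void divideByZeroIsRejected() {
        // Error-handling case: dividing by zero must raise an exception.
        divide(10, 0);
    }
}

Each test method here corresponds to one documented test case; the test cycle in Steps 4 to 6 simply means re-running this class until all its methods pass.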



What is a Unit Test Plan?
This document describes the Test Plan, in other words how the tests will be carried out. It will typically include the list of things to be tested, roles and responsibilities, prerequisites to begin testing, the test environment, assumptions, what to do after a test is successfully carried out, what to do if a test fails, a glossary and so on.



What is a Test Case?
Simply put, a Test Case describes exactly how the test should be carried out. For example, the test case may describe a test as follows: Step 1: Type 10 characters in the Name field. Step 2: Click on Submit.
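Where tooling is available, such a test case can also be expressed as a short script. The sketch below uses Selenium WebDriver as one possible tool; the URL, field name and button id are hypothetical and would come from the application under test.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class NameFieldTest {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://localhost:8080/app/form");                 // hypothetical page under test
            driver.findElement(By.name("name")).sendKeys("abcdefghij");   // Step 1: type 10 characters in the Name field
            driver.findElement(By.id("submit")).click();                  // Step 2: click on Submit
            // Expected result would be compared with the actual result and recorded as Pass/Fail.
            System.out.println("Page title after submit: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}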



Test Cases clubbed together form a Test Suite


Test Case Sample
Test Case ID | Test Case Description | Input Data | Expected Result | Actual Result | Pass/Fail | Remarks


Additionally the following information may also be captured:


a) Unit Name and Version Being tested


b) Tested By


c) Date


d) Test Iteration (One or more iterations of unit testing may be performed)

Steps to Effective Unit Testing:


1) Documentation: Early on, document all the Test Cases needed to test your code. A lot of times this task is not given due importance. Document the test cases, the actual results when executing them and the response time of the code for each test case. There are several important advantages if the test cases and their actual execution are well documented.

a. Documenting Test Cases prevents oversight.


b. Documentation clearly indicates the quality of test cases


c. If the code needs to be retested we can be sure that we did not miss anything


d. It provides a level of transparency of what was really tested during unit testing. This is one of the most important aspects.

e. It helps in knowledge transfer in case of employee attrition.

f. Sometimes Unit Test Cases can be used to develop test cases for other levels of testing.


2) What should be tested when Unit Testing: A lot depends on the type of program or unit that is being created. It could be a screen or a component or a web service. Broadly the following aspects should be considered:



a. For a UI screen, include test cases to verify all the screen elements that need to appear on the screen.

b. For a UI screen, include test cases to verify the spelling/font/size of all the “labels” or text that appears on the screen.

c. Create test cases such that every line of code in the unit is tested at least once in a test cycle.

d. Create test cases such that every condition in case of “conditional statements” is tested once.

e. Create test cases to test the minimum/maximum range of data that can be entered. For example, what is the maximum “amount” that can be entered, or the maximum length of string that can be entered or passed in as a parameter.

f. Create test cases to verify how various errors are handled.

g. Create test cases to verify if all the validations are being performed.



3) Automate where Necessary: Time pressure to get the job done may result in developers cutting corners in unit testing. Sometimes it helps to write scripts which automate a part of unit testing. This may help ensure that the necessary tests were done and may save the time required to perform the tests; a small sketch follows.
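As one small illustration of such automation, the sketch below runs a suite of JUnit test classes programmatically and reports the outcome, so the same unit tests can be re-executed after every fix without manual steps. It reuses the hypothetical CalculatorTest class from the earlier sketch.

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

public class UnitTestRunner {
    public static void main(String[] args) {
        // Run the unit test classes that make up this suite.
        Result result = JUnitCore.runClasses(CalculatorTest.class);

        for (Failure failure : result.getFailures()) {
            System.out.println("FAILED: " + failure.toString());
        }
        System.out.println("Tests run: " + result.getRunCount()
                + ", failures: " + result.getFailureCount()
                + ", successful: " + result.wasSuccessful());
    }
}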


Summary:
“Unit Testing” is the first level of testing and the most important one. Detecting and fixing bugs early in the software lifecycle helps reduce costly fixes later on. An effective Unit Testing process can and should be developed to increase software reliability and the credibility of the developer. The above article explains how Unit Testing should be done and the important points that should be considered when doing it.
Many new developers take the unit testing tasks lightly and only realize the importance of Unit Testing further down the road, if they are still part of the project. This article serves as a starting point for laying out an effective (Unit) Testing Strategy.



Integration Testing: Why? What? & How?


Each level of testing builds on the previous level.
“Unit testing” focuses on testing a unit of the code. “Integration testing” is the next level of testing. This ‘level of testing’ focuses on testing the integration of “units of code” or components.
How does Integration Testing fit into the Software Development Life Cycle?
Even if a software component is successfully unit tested, in an enterprise n-tier distributed application it is of little or no value if the component cannot be successfully integrated with the rest of the application.



Once unit tested components are delivered, we integrate them together. These “integrated” components are tested to weed out errors and bugs caused by the integration. This is a very important step in the Software Development Life Cycle.
It is possible that different programmers developed different components.
A lot of bugs emerge during the integration step.
In most cases a dedicated testing team focuses on Integration Testing.



Prerequisites for Integration Testing: Before we begin Integration Testing it is important that all the components have been successfully unit tested.

Integration Testing Steps: Integration Testing typically involves the following steps:


Step 1: Create a Test Plan


Step 2: Create Test Cases and Test Data


Step 3: If applicable create scripts to run test cases


Step 4: Once the components have been integrated execute the test cases


Step 5: Fix the bugs, if any, and retest the code


Step 6: Repeat the test cycle until the components have been successfully integrated
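As an illustration, an integration test typically exercises one component calling another rather than a single unit in isolation. The InventoryComponent and OrderService classes below are hypothetical stand-ins for two separately developed components being integrated; the test verifies the combined behaviour.

import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

// Hypothetical component A: keeps track of stock levels.
class InventoryComponent {
    private int stock = 5;
    int reserve(int quantity) {
        if (quantity > stock) throw new IllegalArgumentException("insufficient stock");
        stock -= quantity;
        return stock;
    }
}

// Hypothetical component B: places orders by calling component A.
class OrderService {
    private final InventoryComponent inventory;
    OrderService(InventoryComponent inventory) { this.inventory = inventory; }
    boolean placeOrder(int quantity) {
        inventory.reserve(quantity); // data and control flow from one component to the other
        return true;
    }
}

public class OrderInventoryIntegrationTest {
    @Test
    public void orderReducesInventoryWhenComponentsAreWiredTogether() {
        InventoryComponent inventory = new InventoryComponent();
        OrderService orders = new OrderService(inventory);
        assertTrue(orders.placeOrder(2));
        // Verify the integrated behaviour, not just each unit in isolation:
        // after ordering 2 of 5 items, reserving 1 more should leave 2 in stock.
        assertEquals(2, inventory.reserve(1));
    }
}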

What is an ‘Integration Test Plan’? As you may have read in the other articles in the series, this document typically describes one or more of the following:


- How the tests will be carried out


- The list of things to be Tested


- Roles and Responsibilities


- Prerequisites to begin Testing


- Test Environment


- Assumptions


- What to do after a test is successfully carried out


- What to do if test fails


- Glossary


How to write an Integration Test Case?
Simply put, a Test Case describes exactly how the test should be carried out. The Integration test cases specifically focus on the flow of data/information/control from one component to the other.
So the Integration Test cases should typically focus on scenarios where one component is being called from another. Also the overall application functionality should be tested to make sure the app works when the different components are brought together.



The various Integration Test Cases clubbed together form an Integration Test Suite. Each suite may have a particular focus; in other words, different Test Suites may be created to focus on different areas of the application.
As mentioned before a dedicated Testing Team may be created to execute the Integration test cases. Therefore the Integration Test Cases should be as detailed as possible.


Sample Test Case Table:
Test Case ID | Test Case Description | Input Data | Expected Result | Actual Result | Pass/Fail | Remarks


Additionally the following information may also be captured:


a) Test Suite Name


b) Tested By


c) Date


d) Test Iteration (One or more iterations of Integration testing may be performed)


Working towards Effective Integration Testing:
There are various factors that affect Software Integration and hence Integration Testing:
1) Software Configuration Management: Since Integration Testing focuses on the integration of components, and components can be built by different developers and even different development teams, it is important that the right versions of components are tested. This may sound very basic, but the biggest problem faced in n-tier development is integrating the right versions of components. Integration testing may run through several iterations, and to fix bugs components may undergo changes. Hence it is important that a good Software Configuration Management (SCM) policy is in place. We should be able to track the components and their versions, so that each time we integrate the application components we know exactly which versions go into the build process.



2) Automate Build Process where Necessary: A lot of errors occur because the wrong versions of components were sent for the build or components are missing. If possible, write a script to integrate and deploy the components; this helps reduce manual errors.
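A hedged sketch of this idea in Java: before assembling a build, a small utility can read the Implementation-Version from each component JAR's manifest and fail fast if a component is missing or has an unexpected version. The file names and expected versions are hypothetical.

import java.io.File;
import java.io.IOException;
import java.util.jar.JarFile;
import java.util.jar.Manifest;

public class BuildVersionCheck {
    public static void main(String[] args) throws IOException {
        // Hypothetical components and the versions expected in this build.
        String[][] expected = {
            {"lib/order-service.jar", "1.4.2"},
            {"lib/inventory-component.jar", "2.0.1"}
        };

        for (String[] component : expected) {
            File jar = new File(component[0]);
            if (!jar.exists()) {
                throw new IllegalStateException("Missing component: " + component[0]);
            }
            Manifest manifest = new JarFile(jar).getManifest();
            String version = (manifest == null)
                    ? null
                    : manifest.getMainAttributes().getValue("Implementation-Version");
            if (!component[1].equals(version)) {
                throw new IllegalStateException(component[0] + " has version " + version
                        + " but the build expects " + component[1]);
            }
        }
        System.out.println("All component versions match the build configuration.");
    }
}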



3) Document: Document the Integration process/build process to help eliminate the errors of omission or oversight. It is possible that the person responsible for integrating the components forgets to run a required script and the Integration Testing will not yield correct results.



4) Defect Tracking: Integration Testing will lose its edge if the defects are not tracked correctly. Each defect should be documented and tracked. Information should be captured as to how the defect was fixed. This is valuable information. It can help in future integration and deployment processes.


Summary:
Integration testing is one of the most crucial steps in the Software Development Life Cycle. Different components are integrated together and tested. This can be a daunting task in enterprise applications where diverse teams build different modules and components. In this article you learned the steps needed to perform Integration Testing.




System Testing: Why? What? & How?


‘Unit testing’ focuses on testing each unit of the code.
‘Integration testing’ focuses on testing the integration of “units of code” or components. Each level of testing builds on the previous level.



‘System Testing’ is the next level of testing. It focuses on testing the system as a whole.
This article attempts to take a close look at the System Testing Process and analyze: Why System Testing is done? What are the necessary steps to perform System Testing? How to make it successful?


How does System Testing fit into the Software Development Life Cycle?
In a typical Enterprise, ‘unit testing’ is done by the programmers. This ensures that the individual components are working OK. The ‘Integration testing’ focuses on successful integration of all the individual pieces of software (components or units of code).
Once the components are integrated, the system as a whole needs to be rigorously tested to ensure that it meets the Quality Standards.



Thus the System testing builds on the previous levels of testing namely unit testing and Integration Testing.


Usually a dedicated testing team is responsible for doing ‘System Testing’.


Why is System Testing important? System Testing is a crucial step in the Quality Management Process.

-In the Software Development Life cycle System Testing is the first level where the System is tested as a whole


-The System is tested to verify if it meets the functional and technical requirements


- The application/System is tested in an environment that closely resembles the production environment where the application will be finally deployed


- The System Testing enables us to test, verify and validate both the Business requirements as well as the Application Architecture


Prerequisites for System Testing:


The prerequisites for System Testing are:

- All the components should have been successfully Unit Tested

- All the components should have been successfully integrated and Integration Testing should be completed

- An environment closely resembling the production environment should be created



When necessary, several iterations of System Testing are done in multiple environments.



Steps needed to do System Testing:


The following steps are important to perform System Testing:


Step 1: Create a System Test Plan


Step 2: Create Test Cases


Step 3: Carefully build the data used as input for System Testing


Step 4: If applicable, create scripts to build the environment and to automate execution of the test cases


Step 5: Execute the test cases


Step 6: Fix the bugs, if any, and retest the code


Step 7: Repeat the test cycle as necessary

What is a ‘System Test Plan’?


As you may have read in the other articles in the testing series, this document typically describes the following:


- The Testing Goals

- The key areas to be focused on while testing


- The Testing Deliverables


- How the tests will be carried out


- The list of things to be Tested


- Roles and Responsibilities


- Prerequisites to begin Testing


- Test Environment


- Assumptions


- What to do after a test is successfully carried out


- What to do if test fails


- Glossary



How to write a System Test Case?


A Test Case describes exactly how the test should be carried out.


The System test cases help us verify and validate the system.


The System Test Cases are written such that:

- They cover all the use cases and scenarios


- The Test cases validate the technical Requirements and Specifications


- The Test cases verify if the application/System meet the Business & Functional Requirements specified


- The Test cases may also verify if the System meets the performance standards



Since a dedicated test team may execute the test cases, it is necessary that the System Test Cases be detailed. Detailed test cases help the test executors do the testing as specified without any ambiguity.

The format of the System Test Cases may be like all other Test Cases, as illustrated below.

Sample Test Case Format:
Test Case ID | What To Test? | How to Test? | Input Data | Expected Result | Actual Result | Pass/Fail

Additionally the following information may also be captured:


a) Test Suite Name

b) Tested By

c) Date

d) Test Iteration (the Test Cases may be executed one or more times)


Working towards Effective Systems Testing:
There are various factors that affect success of System Testing:


1) Test Coverage: System Testing will be effective only to the extent of the coverage of Test Cases. What is Test coverage? Adequate Test coverage implies the scenarios covered by the test cases are sufficient. The Test cases should “cover” all scenarios, use cases, Business Requirements, Technical Requirements, and Performance Requirements. The test cases should enable us to verify and validate that the system/application meets the project goals and specifications.


2) Defect Tracking: The defects found during the process of testing should be tracked. Subsequent iterations of test cases verify if the defects have been fixed.


3) Test Execution: The Test cases should be executed in the manner specified. Failure to do so results in improper Test Results.


4) Build Process Automation: A Lot of errors occur due to an improper build. ‘Build’ is a compilation of the various components that make the application deployed in the appropriate environment. The Test results will not be accurate if the application is not ‘built’ correctly or if the environment is not set up as specified. Automating this process may help reduce manual errors.



5) Test Automation: Automating the Test process could help us in many ways:
a. The test can be repeated with fewer errors of omission or oversight
b. Some scenarios can be simulated if the tests are automated, for instance simulating a large number of users or increasingly large amounts of input/output data.
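For instance, point (b) above can be approximated with a very small load-simulation sketch: a fixed pool of threads, each repeatedly exercising the operation under test. The user and request counts are arbitrary, and the simulated "transaction" is a placeholder for a real call into the application.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SimpleLoadSimulation {
    public static void main(String[] args) throws InterruptedException {
        final int users = 50;            // number of simulated concurrent users (assumption)
        final int requestsPerUser = 20;  // requests each simulated user sends (assumption)
        final AtomicInteger completed = new AtomicInteger();

        ExecutorService pool = Executors.newFixedThreadPool(users);
        for (int u = 0; u < users; u++) {
            pool.submit(new Runnable() {
                public void run() {
                    for (int i = 0; i < requestsPerUser; i++) {
                        // Placeholder for a real call to the system under test.
                        completed.incrementAndGet();
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("Simulated requests completed: " + completed.get());
    }
}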


6) Documentation: Proper Documentation helps keep track of Tests executed. It also helps create a knowledge base for current and future projects. Appropriate metrics/Statistics can be captured to validate or verify the efficiency of the technical design /architecture.
Summary:
In this article we studied the necessity of ‘System Testing’ and how it is done.

What is User Acceptance Testing?

User Acceptance Testing is often the final step before rolling out the application.
Usually the end users who will be using the applications test the application before ‘accepting’ the application.
This type of testing gives the end users the confidence that the application being delivered to them meets their requirements.
This testing also helps nail bugs related to usability of the application.


User Acceptance Testing – Prerequisites:
Before User Acceptance Testing can be done, the application must be fully developed. Various levels of testing (Unit, Integration and System) are already completed before User Acceptance Testing is done. As the various levels of testing have been completed, most of the technical bugs have already been fixed before UAT.

User Acceptance Testing – What to Test?
To ensure effective User Acceptance Testing, Test Cases are created. These Test Cases can be created using the various use cases identified during the Requirements definition stage. The Test Cases ensure proper coverage of all the scenarios during testing.
During this type of testing the specific focus is the exact real-world usage of the application. The testing is done in an environment that simulates the production environment, and the Test Cases are written using real-world scenarios for the application.


User Acceptance Testing – How to Test?
The user acceptance testing is usually a black box type of testing. In other words, the focus is on the functionality and the usability of the application rather than the technical aspects. It is generally assumed that the application would have already undergone Unit, Integration and System Level Testing.


However, it is useful if the User acceptance Testing is carried out in an environment that closely resembles the real world or production environment.
The steps taken for User Acceptance Testing typically involve one or more of the following:

1) User Acceptance Test (UAT) Planning

2) Designing UA Test Cases

3) Selecting a Team that would execute the (UAT) Test

4) Executing Test Cases

5) Documenting the Defects found during UAT

6) Resolving the issues/Bug Fixing

7) Sign Off

User Acceptance Test (UAT) Planning: As always the Planning Process is the most important of all the steps. This affects the effectiveness of the Testing Process. The Planning process outlines the User Acceptance Testing Strategy. It also describes the key focus areas, entry and exit criteria.


Designing UA Test Cases: The User Acceptance Test Cases help the Test Execution Team to test the application thoroughly. This also helps ensure that the UA Testing provides sufficient coverage of all the scenarios. The Use Cases created during the Requirements definition phase may be used as inputs for creating Test Cases. The inputs from Business Analysts and Subject Matter Experts are also used for creating them.


Each User Acceptance Test Case describes in a simple language the precise steps to be taken to test something.
The Business Analysts and the Project Team review the User Acceptance Test Cases.
Selecting a Team that would execute the (UAT) Test Cases: Selecting the team that would execute the UAT Test Cases is an important step. The UAT team is generally a good representation of the real-world end users; it thus comprises the actual end users who will be using the application.

Executing Test Cases: The Testing Team executes the Test Cases and may additionally perform random tests relevant to them.


Documenting the Defects found during UAT: The Team logs their comments and any defects or issues found during testing.


Resolving the issues/Bug Fixing: The issues/defects found during Testing are discussed with the Project Team, Subject Matter Experts and Business Analysts. The issues are resolved as per the mutual consensus and to the satisfaction of the end users.


Sign Off: Upon successful completion of the User Acceptance Testing and resolution of the issues, the team generally indicates the acceptance of the application. This step is important in commercial software sales. Once the users “accept” the software delivered, they indicate that it meets their requirements.
The users are now confident of the software solution delivered, and the vendor can be paid for the same.

What are the key deliverables of User Acceptance Testing?
In the Traditional Software Development Lifecycle successful completion of User Acceptance Testing is a significant milestone.


The Key Deliverables typically of User Acceptance Testing Phase are:
1) The Test Plan- This outlines the Testing Strategy
2) The UAT Test cases – The Test cases help the team to effectively test the application
3) The Test Log – This is a log of all the test cases executed and the actual results.
4) User Sign Off – This indicates that the customer finds the product delivered to their satisfaction

Bug Life Cycle & Guidelines

Bug Life Cycle & Guidelines

Introduction:
A bug can be defined as abnormal behavior of the software. No software exists without bugs. The elimination of bugs from the software depends upon the efficiency of the testing done on it. A bug is a specific concern about the quality of the Application Under Test (AUT).

Bug Life Cycle:
In the software development process, a bug has a life cycle, and it must go through this life cycle to be closed. A specific life cycle ensures that the process is standardized. The bug attains different states in the life cycle, which can be shown diagrammatically.

The different states of a bug can be summarized as follows:
1. New 2. Open 3. Assign 4. Test 5. Verified 6. Deferred 7. Reopened 8. Duplicate 9. Rejected and 10. Closed


Description of Various Stages:
1. New: When the bug is posted for the first time, its state will be “NEW”. This means that the bug is not yet approved.
2. Open: After a tester has posted a bug, the lead of the tester approves that the bug is genuine and he changes the state as “OPEN”.
3. Assign: Once the lead changes the state as “OPEN”, he assigns the bug to corresponding developer or developer team. The state of the bug now is changed to “ASSIGN”.
4. Test: Once the developer fixes the bug, he has to assign the bug to the testing team for next round of testing. Before he releases the software with bug fixed, he changes the state of bug to “TEST”. It specifies that the bug has been fixed and is released to testing team.
5. Deferred: Changing the bug to the deferred state means the bug is expected to be fixed in subsequent releases. There are many reasons for moving a bug to this state: the priority of the bug may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.
6. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to “REJECTED”.
7. Duplicate: If the bug is repeated twice or the two bugs mention the same concept of the bug, then one bug status is changed to “DUPLICATE”.
8. Verified: Once the bug is fixed and the status is changed to “TEST”, the tester tests the bug. If the bug is not present in the software, he approves that the bug is fixed and changes the status to “VERIFIED”.
9. Reopened: If the bug still exists even after the bug is fixed by the developer, the tester changes the status to “REOPENED”. The bug traverses the life cycle once again.
10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “CLOSED”. This state means that the bug is fixed, tested and approved.
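One way to picture these states in code is a simple enumeration; a real defect-tracking tool would additionally enforce which transitions between states are allowed. This is only an illustrative sketch, not the model of any particular tool.

// Illustrative sketch of the bug states described above.
public enum BugState {
    NEW,        // posted by a tester, not yet approved
    OPEN,       // approved as genuine by the test lead
    ASSIGNED,   // handed to a developer or development team
    TEST,       // fixed by the developer and released to the testing team
    VERIFIED,   // tester confirms the fix
    DEFERRED,   // fix postponed to a later release
    REOPENED,   // fix did not work; the cycle starts again
    DUPLICATE,  // the same defect reported more than once
    REJECTED,   // not accepted as a genuine defect
    CLOSED      // fixed, tested and approved
}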
While defect prevention is much more effective and efficient in reducing the number of defects, most organizations conduct defect discovery and removal. Discovering and removing defects is an expensive and inefficient process; it is much more efficient for an organization to conduct activities that prevent defects.
Guidelines on deciding the Severity of Bug:
Indicate the impact each defect has on testing efforts or users and administrators of the application under test. This information is used by developers and management as the basis for assigning priority of work on defects.


A sample guideline for assignment of Priority Levels during the product test phase includes:


Critical / Show Stopper - An item that prevents further testing of the product or a function under test can be classified as a Critical Bug. No workaround is possible for such bugs. Examples include a missing menu option or a security permission required to access a function under test.


Major / High - A defect that does not function as expected/designed or causes other functionality to fail to meet requirements can be classified as a Major Bug. A workaround can be provided for such bugs. Examples include inaccurate calculations, the wrong field being updated, etc.


Average / Medium - Defects which do not conform to standards and conventions can be classified as Medium Bugs. Easy workarounds exist to achieve functionality objectives. Examples include matching visual and text links which lead to different end points.
Minor / Low - Cosmetic defects which do not affect the functionality of the system can be classified as Minor Bugs.



Guidelines on writing Bug Description:
A bug can be expressed as “result followed by the action”; that is, the unexpected behavior occurring when a particular action takes place is given as the bug description.
1. Be specific. State the expected behavior which did not occur (for example, a pop-up did not appear) and the behavior which occurred instead.
2. Use present tense.
3. Don’t use unnecessary words.
4. Don’t add exclamation points. End sentences with a period.
5. DON’T USE ALL CAPS. Format words in upper and lower case (mixed case).
6. Mention the steps to reproduce the bug compulsorily.

Introduction to CMM


Quality software should reasonably be bug-free, delivered on time and within budget. It should meet the given requirements and/or expectations, and should be maintainable. In order to produce error free and high quality software certain standards need to be followed.

Quality Standards
ISO 9001:2000 is a Quality Management System certification. To achieve it, an organization must satisfy the ISO 9001:2000 clauses (clauses 1 - 8). Six Sigma is a process improvement methodology focused on reducing the variation of processes around the mean; its objective is to make the process defect free. SEI CMM is a de facto standard for assessing and improving processes related to software development, developed by the software community in 1986 with leadership from the SEI. It is a software-specific process maturity model. It provides guidance for measuring software process maturity and helps process improvement programs.

SEI CMM is organized into 5 maturity levels:
1 Initial
2 Repeatable
3 Defined
4 Managed
5 Optimizing

1) Initial:
The software process is characterized as ad hoc, and occasionally even chaotic. Few processes are defined, and success depends on individual effort and heroics.

2) Repeatable:
Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.

3) Defined:
The software process for both management and engineering activities is documented, standardized, and integrated into a standard software process for the organization. All projects use an approved, tailored version of the organization's standard software process for developing and maintaining software.

4) Managed:
Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled.

5) Optimizing:
Continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies.

Automated Testing Advantages, Disadvantages and Guidelines


Automation is the use of strategies, tools and artifacts that augment or reduce the need for manual or human involvement or interaction in unskilled, repetitive or redundant tasks.
Minimally, such a process includes:
Detailed test cases, including predictable "expected results", which have been developed from Business Functional Specifications and Design documentation
A standalone Test Environment, including a Test Database that is restorable to a known constant, such that the test cases are able to be repeated each time there are modifications made to the application.
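For example, a test fixture can restore the test database to its known constant state before every automated run. The TestDatabase helper below is a hypothetical placeholder for whatever restore mechanism the project actually uses (a SQL script, a backup image, and so on); the sketch only shows where that restore step fits.

import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class RepeatableOrderTest {

    // Hypothetical helper that reloads the test database from a known snapshot.
    static class TestDatabase {
        static void restoreKnownState() {
            // e.g. run a SQL script or restore a backup image here
        }
        static int countOrders() {
            return 0; // placeholder: the restored snapshot is assumed to contain no orders
        }
    }

    @Before
    public void resetEnvironment() {
        // Restoring the database before each test keeps the test repeatable.
        TestDatabase.restoreKnownState();
    }

    @Test
    public void startsFromTheSameBaselineEveryRun() {
        assertEquals(0, TestDatabase.countOrders());
    }
}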
The following types of testing can be automated
Functional - testing that operations perform as expected.
Regression - testing that the behavior of the system has not changed.
Exception or Negative - forcing error conditions in the system.
Stress - determining the absolute capacities of the application and operational infrastructure.
Performance - providing assurance that the performance of the system will be adequate for both batch runs and online transactions in relation to business projections and requirements.
Load - determining the points at which the capacity and performance of the system become degraded to the situation that hardware or software upgrades would be required.

Benefits of Automated Testing
Reliable: Tests perform precisely the same operations each time they are run, thereby eliminating human error
Repeatable: You can test how the software reacts under repeated execution of the same operations.
Programmable: You can program sophisticated tests that bring out hidden information from the application.
Comprehensive: You can build a suite of tests that covers every feature in your application.
Reusable: You can reuse tests on different versions of an application, even if the user interface changes.
Better Quality Software: Because you can run more tests in less time with fewer resources
Fast: Automated Tools run tests significantly faster than human users.
Cost Reduction: The number of resources required for regression testing is reduced.
These benefits can only be realized by choosing the right tools for the job and targeting the right areas of the organization to deploy them. The areas where automation fits best must be chosen.
The following areas must be automated first
1. Highly redundant tasks or scenarios
2. Repetitive tasks that are boring or tend to cause human error
3. Well-developed and well-understood use cases or scenarios
4. Relatively stable areas of the application (over volatile ones)
Automated testers must follow the following guidelines to get the benefits of automation:
Concise: As simple as possible and no simpler.
Self-Checking: Test reports its own results; needs no human interpretation.
Repeatable: Test can be run many times in a row without human intervention.
Robust: Test produces same result now and forever. Tests are not affected by changes in the external environment.
Sufficient: Tests verify all the requirements of the software being tested.
Necessary: Everything in each test contributes to the specification of desired behavior.
Clear: Every statement is easy to understand.
Efficient: Tests run in a reasonable amount of time.
Specific: Each test failure points to a specific piece of broken functionality; unit test failures provide "defect triangulation".
Independent: Each test can be run by itself or in a suite with an arbitrary set of other tests in any order.
Maintainable: Tests should be easy to understand and modify and extend.
Traceable: To and from the code it tests and to and from the requirements.

Disadvantages of Automation Testing
Though automation testing has many advantages, it has its own disadvantages too. Some of the disadvantages are:
- Proficiency is required to write the automation test scripts.
- Debugging the test script is a major issue. If any error is present in the test script, it may sometimes lead to serious consequences.
- Test maintenance is costly in the case of playback methods. Even if a minor change occurs in the GUI, the test script has to be re-recorded or replaced by a new test script.
- Maintenance of test data files is difficult if the test script tests more screens.
Some of the above disadvantages often cause damage to the benefit gained from automated scripts. Though automation testing has its pros and cons, it is adopted widely all over the world.

Metrics Used In Testing


The Product Quality Measures:
1. Customer satisfaction index
This index is surveyed before product delivery and after product delivery (and on an ongoing, periodic basis, using standard questionnaires). The following are analyzed:
Number of system enhancement requests per year
Number of maintenance fix requests per year
User friendliness: call volume to customer service hotline
User friendliness: training time per new user
Number of product recalls or fix releases (software vendors)
Number of production re-runs (in-house information systems groups)

2. Delivered defect quantities
They are normalized per function point (or per LOC) at product delivery (first 3 months or first year of operation) or Ongoing (per year of operation) by level of severity, by category or cause, e.g.: requirements defect, design defect, code defect, documentation/on-line help defect, defect introduced by fixes, etc.

3. Responsiveness (turnaround time) to users
Turnaround time for defect fixes, by level of severity
Time for minor vs. major enhancements; actual vs. planned elapsed time

4. Product volatility
Ratio of maintenance fixes (to repair the system & bring it into compliance with specifications), vs. enhancement requests (requests by users to enhance or change functionality)

5. Defect ratios
Defects found after product delivery per function point.
Defects found after product delivery per LOC
Ratio of pre-delivery defects to annual post-delivery defects
Defects per function point of the system modifications

6. Defect removal efficiency
Number of post-release defects (found by clients in field operation), categorized by level of severity
Ratio of defects found internally prior to release (via inspections and testing), as a percentage of all defects
All defects include defects found internally plus externally (by customers) in the first year after product delivery

7. Complexity of delivered product
McCabe's cyclomatic complexity counts across the system
Halstead’s measure
Card's design complexity measures
Predicted defects and maintenance costs, based on complexity measures

8. Test coverage
Breadth of functional coverage
Percentage of paths, branches or conditions that were actually tested
Percentage by criticality level: perceived level of risk of paths
The ratio of the number of detected faults to the number of predicted faults.

9. Cost of defects
Business losses per defect that occurs during operation
Business interruption costs; costs of work-arounds
Lost sales and lost goodwill
Litigation costs resulting from defects
Annual maintenance cost (per function point)
Annual operating cost (per function point)
Measurable damage to your boss's career

10. Costs of quality activities
Costs of reviews, inspections and preventive measures
Costs of test planning and preparation
Costs of test execution, defect tracking, version and change control
Costs of diagnostics, debugging and fixing
Costs of tools and tool support
Costs of test case library maintenance
Costs of testing & QA education associated with the product
Costs of monitoring and oversight by the QA organization (if separate from the development and test organizations)

11. Re-work
Re-work effort (hours, as a percentage of the original coding hours)
Re-worked LOC (source lines of code, as a percentage of the total delivered LOC)
Re-worked software components (as a percentage of the total delivered components)

12. Reliability
Availability (percentage of time a system is available, versus the time the system is needed to be available)
Mean time between failure (MTBF).
Mean time to repair (MTTR)
Reliability ratio (MTBF / MTTR)
Number of product recalls or fix releases
Number of production re-runs as a ratio of production runs .

Metrics for Evaluating Application System Testing:
Metric = Formula
Test Coverage = Number of units (KLOC/FP) tested / total size of the system. (LOC represents Lines of Code)

Number of tests per unit size = Number of test cases per KLOC/FP (LOC represents Lines of Code).

Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria
Defects per size = Defects detected / system size

Test cost (in %) = Cost of testing / total cost *100

Cost to locate defect = Cost of testing / the number of defects located

Achieving Budget = Actual cost of testing / Budgeted cost of testing

Defects detected in testing = Defects detected in testing / total system defects

Defects detected in production = Defects detected in production/system size
Quality of Testing = No of defects found during Testing/(No of defects found during testing + No of acceptance defects found after delivery) *100

Effectiveness of testing to business = Loss due to problems / total resources processed by the system.

System complaints = Number of third party complaints / number of transactions processed
Scale of Ten = Assessment of testing by giving rating in scale of 1 to 10

Source Code Analysis = Number of source code statements changed / total number of tests.

Effort Productivity:
Test Planning Productivity = No. of Test Cases designed / Actual Effort for Design and Documentation

Test Execution Productivity = No of Test cycles executed / Actual Effort for testing
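As a worked example of the “Quality of Testing” formula above (the figures are invented purely for illustration): with 25 defects found during testing and 5 acceptance defects found after delivery, Quality of Testing = 25 / (25 + 5) * 100, which is about 83%. The short sketch below computes this and two related metrics in code, again with invented sample numbers.

public class TestingMetrics {
    public static void main(String[] args) {
        // Invented sample figures, for illustration only.
        double defectsFoundInTesting = 25;
        double acceptanceDefectsAfterDelivery = 5;
        double costOfTesting = 40000;   // in any currency unit
        double totalCost = 400000;

        double qualityOfTesting = defectsFoundInTesting
                / (defectsFoundInTesting + acceptanceDefectsAfterDelivery) * 100;
        double testCostPercent = costOfTesting / totalCost * 100;
        double costToLocateDefect = costOfTesting / defectsFoundInTesting;

        System.out.println("Quality of Testing   : " + qualityOfTesting + " %");  // about 83.3 %
        System.out.println("Test cost            : " + testCostPercent + " %");   // 10 %
        System.out.println("Cost to locate defect: " + costToLocateDefect);       // 1600 per defect
    }
}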

Challenges in Testing Web Based Applications

Introduction:
Web based applications are increasingly becoming more feature rich, important and also the most popular means for developing commercial systems. Most companies opt for developing web based software wherever possible; this helps in catering to a large number of end users, and the deployment of the apps (once the infrastructure is in place) is fairly easy. Web based applications are powerful and have the ability to provide feature rich content to a wide audience spread across the globe at an economical cost. Hence it is a daunting task to test these applications, and with more and more features, testing these apps is becoming even more complex. In this article we will study the challenges faced when testing them.

Why testing Web Applications is different?
Testing web applications is different because many factors and scenarios affect their performance and user experience. Web applications can typically cater to a large and diverse audience. They can also be exposed to a wide range of security threats and may open up illegal points of entry to the databases and other systems holding sensitive information. To ensure that the web application works reliably and correctly under different situations, these factors need to be accounted for and tested; hence a lot of effort needs to be put into Test Planning and Test Design. Test Cases should be written covering the different scenarios, not only of functional usage but also of technical considerations such as network speeds, screen resolution, etc. For example, an application may work fine for broadband internet users but may perform miserably for users with dial-up internet connections. Web applications are known to give errors on slow networks whereas they perform well on high speed connections. Web pages may not render correctly in certain situations but work fine in others. Images may take longer to download on slower networks and the end user's perception of the application may not be good.

Factors effecting Testing of Web Applications:
As mentioned above Web Applications can have a lot of variables affecting them such as:
- Numerous Application Usage (Entry - Exit) Paths are possible: Due to the design and nature of web applications it is possible that different users follow different application usage paths. For example, in an online banking application one user may go directly to the “Bill Pay” page, while other users may check account balances, view previous transactions and then pay the bills. Generally a large number of usage paths are possible and all are supposed to work well. All these permutations and combinations need to be tested thoroughly.
- People with varying backgrounds and technical skills may use the application: Not all applications are self explanatory to all people. People have varying backgrounds and may find the application hard to use. For instance, a Business Intelligence application with “Drill-Down Reports” may work out for certain users but not for others. Although this affects the design of the applications, these factors should also be covered in usability testing of the applications.

- Intranet versus Internet based Applications: Intranet based applications generally cater to a controlled audience. The developers and architects can make accurate assumptions about the people accessing the apps and the hardware/software/technical specifications of the client machines, while it may be difficult to make similar assumptions for Internet based applications. Also, intranet users can generally access the app from ‘trusted’ sources, whereas for internet applications the users may need to be authenticated and the security measures may have to be much more stringent. Test Cases need to be designed to test the various scenarios and risks involved.
- The end users may use different types of browsers to access the app: Typically for internet based applications users may use different browsers when accessing the apps. This aspect also needs to be tested. If we test the app only on IE then we cannot ensure it works well on Netscape or Firefox, because these browsers may not only render pages differently but also have varying levels of support for client-side scripting languages such as JavaScript.
- Even on similar browsers the application may be rendered differently based on the screen resolution/hardware/software configuration.

- Network speeds: Slow network speeds may cause the various components of a web page to be downloaded with a time lag. This may cause errors to be thrown up. The testing process needs to consider this as an important factor, especially for Internet based applications.
- ADA (Americans with Disabilities Act): It may be required that the applications be compliant with ADA. Due to certain disabilities, some users may have difficulty in accessing the web applications unless the applications are ADA compliant. The application may need to be tested for compliance and usability.
- Other Regulatory Compliance/Standards: Depending on the nature of the application and the sensitivity of the data captured, the applications may have to be tested for relevant compliance standards. This is more crucial for web based applications because of their possible exposure to a wide audience.
- Firewalls: As mentioned earlier, applications may behave differently across firewalls. Applications may have certain web services or may operate on different ports that may have been blocked. So the apps need to be tested for these aspects as well.
- Security Aspects: If the application captures certain personal or sensitive information, it may be crucial to test the security strength of the application. Sufficient care needs to be taken that the security of the data is not compromised.

Why technology platforms affect testing?
The technology platform upon which the web application is built also creates different strengths and weaknesses. Different Test Automation tools and packages are available for different technology platforms. This can influence the Test Strategy and the way in which Test Execution is done.

Challenges in Testing Web Based Applications:
To ensure that sufficient Test Coverage is provided for web based applications and to provide a secure, reliable application to the user, the above factors need to be considered.

Testing for Agile Software Development


Background:
To understand the testing process in an Agile Development methodology, it is important to understand the Agile Development paradigm. The Agile Development paradigm is not very new: although the Agile Software Development Manifesto came into existence in February 2001, the concepts existed long before that and were expressed in different ways. The Spiral Development methodology is one such example.

Understanding Agile Software Development:
Agile Software Development primarily focuses on an iterative method of development and delivery. The developers and end users communicate closely as the software is built. A working piece of software is delivered in a short span of time, and based on the feedback more features and capabilities are added. The focus is on satisfying the customer by quickly delivering working software with minimum features and then improving on it based on feedback. The customer is thus closely involved in the Software Design and Development Process. The delivery timelines are short and the new code is built on the previous one. Despite this, high quality of the product cannot be compromised. This creates a different set of challenges for Software Testing.

How is Testing approach different in an Agile Development Scenario?
The Testing Strategy and Approach in Agile Development can be very different from traditional, more bureaucratic methods. In fact it can vary with project needs and the project team. In many scenarios it may make sense not to have a separate testing team. The above statement should be understood carefully: by not having a testing team we do not consider testing to be any less important. In fact, testing can be done more effectively in short turnaround times by people who know the system and its objectives very well. For example, in certain teams Business Analysts may do a few rounds of testing each time a software version is released; Business Analysts understand the business requirements of the software and test it to verify that it meets those requirements. Developers may also test the software; they tend to understand the system better and can verify the test results in a better way. Testing for Agile Software Development requires innovative thinking, and the right mix of people should be chosen for doing the testing.

What to test?
Given the relatively short turnaround times in this methodology, it is important that the team is clear on what needs to be tested. Even though close interaction and innovation are advocated rather than processes, sufficient emphasis is given to the testing effort. While each team may have its own group dynamics based on the context, each piece of code has to be unit tested. The developers do the unit testing to ensure that the software unit is functioning correctly. Since the development itself is iterative, it is possible that the next release of the code was built by modifying the previous one. Hence Regression Testing gains significant importance in these situations.
The team tests if the newly added functionality works correctly and that the previously released functionality still works as expected. Test Automation also gains importance due to short delivery timelines. Test Automation may prove effective in ensuring that everything that needs to be tested was covered. It is not necessary that costly tools be purchased to automate testing. Test Automation can be achieved in a relatively cost effective way by utilizing the various open source tools or by creating in-house scripts. These scripts can run one or more test cases to exercise a unit of code and verify the results or to test several modules. This would vary with the complexity of the Project and the experience of the Team

Typical bugs found when doing agile testing?
Although nothing is typical about any Agile Development Project and each project may have its own set of complexities, by the very nature of the paradigm bugs may be introduced in the system when a piece of code is modified/enhanced/changed by one or more Developers.
Whenever a piece of code is changed it is possible that bugs have been introduced to it or previously working code is now broken. New bugs/defects can be introduced at every change or old bugs/defects may be reopened.

Steps Taken to Effectively Test in Agile Development Methodology:
As a wise person once said, there is no substitute for hard work. The only way one can test effectively is by ensuring sufficient Test Coverage and testing effort, automated or otherwise. The challenge could be a lack of documentation, but the advantage could be close communication between team members, resulting in greater clarity of thought and understanding of the system. Each time code is changed, Regression Testing is done. Test Coverage is ensured by having automated scripts and the right mix/combination of people executing the Test Cases. Exploratory Testing may also be encouraged. Exploratory tests are not pre-designed or pre-defined; the tests are designed and executed immediately. Similarly, ad hoc testing may also be encouraged. Ad hoc testing is done based on the tester's experience and skills. While automated test cases will ensure that the scripted Test Cases are executed as defined, the team may not have enough time to design and script all the test cases.

Ensuring software test coverage
To ensure that the delivered product meets the end user's requirements, it is important that sufficient testing is done and all scenarios are tested. Sufficient Test Coverage in an Agile Development scenario may be tricky, but with close cooperation and the right team dynamics it is not impossible. The objectives of the project should be clear to the entire team. Many teams advocate Test Driven Development: at every stage the software is tested to check that it meets the requirements. Every requirement is translated into a Test Case and the software is validated/verified. While processes and documentation are not stressed, sufficient steps are taken to ensure that the software is delivered as per the user's expectations. This implies that each software delivery should be tested thoroughly before it is released. The short timelines require that the person testing the software has sufficient knowledge about the system and its objectives.

Best Practices in Automated Testing

Today, rigorous application testing is a critical part of virtually all software development projects. As more organizations develop mission-critical systems to support their business activities, the need is greatly increased for testing methods that support business objectives. It is necessary to ensure that these systems are reliable, built according to specification and able to support business processes. Many internal and external factors are forcing organizations to ensure a high level of software quality and reliability.

Why Automate the Testing Process?
In the past, most software tests were performed using manual methods. This required a large staff of test personnel to perform expensive and time-consuming manual test procedures. Owing to the size and complexity of today’s advanced software applications, manual testing is no longer a viable option for most testing situations.
Using Testing Effectively
By definition, testing is a repetitive activity. The methods that are employed to carry out testing (manual or automated) remain repetitious throughout the development life cycle. Automation of testing processes allows machines to complete the tedious, repetitive work while human personnel perform other tasks. Automation eliminates the “think time” or “read time” required for the manual interpretation of when or where to click the mouse. An automated test executes the next operation in the test hierarchy at machine speed, allowing tests to be completed many times faster than by the fastest individual. Automated tests also perform load/stress testing very effectively.

Reducing Testing Costs

The cost of performing manual testing is prohibitive when compared to automated methods. The reason is that computers can execute instructions many times faster and with fewer errors than individuals. Many automated testing tools can replicate the activity of a large number of users (and their associated transactions) using a single computer. Therefore, load/stress testing using automated methods requires only a fraction of the computer hardware that would be necessary to complete a manual test.

Replicating testing across different platforms
Automation allows the testing organization to perform consistent and repeatable test. When applications need to be deployed across different hardware or software platforms, standard or benchmark tests can be created and repeated on target platforms to ensure that new platforms operate consistently.

Greater Application Coverage
The productivity gains delivered by automated testing allow and encourage organizations to test more often and more completely. Greater application test coverage also reduces the risk of exposing users to malfunctioning or non-compliant software.

Results Reporting
Full-featured automated testing systems also produce convenient test reporting and analysis. These reports provide a standardized measure of test status and results, thus allowing more accurate interpretation of testing outcomes. Manual methods require the user to self-document test procedures and test results.

Understanding the Testing Process
Typical Testing Steps: Most software testing projects can be divided into the following general steps:
Test Planning: This step determines what to test and when to test it.
Test Design: This step determines how the tests should be built and the level of quality required.
Test Environment Preparation: The technical environment is established during this step.
Test Construction: In this step, test scripts are generated and test cases are developed.
Test Execution: In this step the test scripts are executed according to the test plans.
Test Evaluation: After the tests are executed, the results are compared to the expected results and conclusions can be drawn about the quality of the application.

Identifying Tests Requiring Automation
Most, but not all, types of tests can be automated. Certain types of tests, such as user comprehension tests, tests that run only once, and tests that require constant human intervention, are usually not worth the investment needed to automate them. The following are examples of criteria that can be used to identify tests that are prime candidates for automation:
High Path Frequency – Automated testing can be used to verify the behavior of application paths that are used with a high degree of frequency when the software is running in full production. Examples include creating customer records.
Critical Business Processes – Mission-critical processes are prime candidates for automated testing. Examples include financial month-end closings, production planning, sales order entry and other core activities. Any application with a high degree of risk associated with a failure is a good candidate for test automation.
Repetitive Testing – If a testing procedure can be reused many times, it is also a prime candidate for automation.
Applications with a Long Life Span – The longer an application is planned to be in production, the greater the benefits from automation.

Task Automation and Test Set-Up
In performing software testing, there are many tasks that need to be performed before or after the actual test. For example, if a test needs to be executed to create sales orders against current inventory, goods need to be in inventory. The tasks associated with placing items in inventory can be automated so that the test can run repeatedly. Additionally, highly repetitive tasks not associated with testing can be automated utilizing the same approach.
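As a rough sketch of automating such set-up, test frameworks let preparation code run before every test. The inventory map and item below are hypothetical, but the pattern (a @BeforeEach fixture in JUnit 5) is a common way to automate pre-test tasks so the test can run repeatedly without manual preparation.

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import java.util.HashMap;
import java.util.Map;
import static org.junit.jupiter.api.Assertions.assertTrue;

class SalesOrderTest {
    private final Map<String, Integer> inventory = new HashMap<>();

    @BeforeEach
    void stockInventory() {
        // Automated set-up task: goods are placed in inventory before every test,
        // so the sales-order test can run repeatedly.
        inventory.put("WIDGET", 100);
    }

    @Test
    void salesOrderCanBeCreatedAgainstStock() {
        int ordered = 10;
        assertTrue(inventory.get("WIDGET") >= ordered, "enough stock to create the order");
    }
}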

Who Should Be Testing?
There is no clear consensus in the testing community about which group within an organization should be responsible for performing the testing function. It depends on the situation prevailing in the organization.

SQA Principles

Developing software is not just writing code; software is essentially an answer to a pressing problem, whether in the office or, as with games, simply to cure boredom. Underlying these answers to problems and needs are principles that guide developers in their software development.

The SQA team also has to follow certain principles. As a provider of quality assurance for procedures and applications, they need a strong foundation of what to believe in and what to stand for. These principles ensure that the application will live up to the expectations of the client and the users. Like the application, the SQA principles run in the background but ultimately dictate how the system will be tackled.
The following are some of the most powerful principles that can be used for proper execution of software quality assurance:

Feedback – In essence, the faster the feedback, the faster the application moves forward. An SQA practice built on rapid feedback is far more likely to succeed. Time is both the best friend and the worst enemy of any developer, and it is up to the SQA team to give feedback as soon as possible. If the team can get feedback on the application quickly, the chance of developing a better application faster improves.

Focus on Critical Factors – This principle has several implications. First, some factors of the software being developed are not as critical as others, which means SQA should focus on the more important matters.

Second, SQA measurement should never be uniform, in the sense that every factor in the application should not get the same treatment. A good example is the treatment of specific functions compared with the skin or color of the interface: clearly, the function deserves more attention than a simple skin color.

Multiple Objectives – This is partly a challenge as well as a risk for the SQA team. At the start of SQA planning, the team should have more than one objective. That may sound risky, but it is already common practice. What is emphasized here is that each objective must receive focus; as far as possible, the SQA team should build a matrix so it can track the actual actions that relate to each objective.

Evolution – Reaching an objective is one thing, but every time something new happens it should be noted. Evolution means setting a benchmark with each development. Because the SQA team records every time something new is done, evolution can be monitored.

The benefit of this principle comes later: whenever a benchmark is not reached, the SQA team can study its previous projects. Evolution should inform and educate the SQA team while working on the project.

Quality Control – As the name implies, quality control is the pillar of Software Quality Assurance. Everything needs quality control, from start to finish. With this principle there has to be an emphasis on where to start: the biggest and tightest quality control should be executed as early as possible.

For example, when the SQA team receives the SR (software requirements document), the intensity of quality control should already be at its highest. Quality control will of course continue until the end, but developers should take into account that anything that starts out badly rarely takes off. It is better to know what is wrong at the start than to find it out later.

Motivation – There is no substitute for having the right people who have the will to do their job at all times. When they have the right mindset and the willingness to do the work, everything falls into place. Work becomes lighter, expertise shows, and creativity is almost assured when everyone has drive and passion for their line of work. Quality assurance is a very tedious task and will take its toll on anyone who is not dedicated to the work.
Process Improvement – Every project of the SQA team should be a learning experience. Each project is a chance to increase SQA experience, but there is more to it than that: process improvement develops the way projects themselves are handled.
Every project has a unique situation that gives the SQA team a chance to experience something new, and this "new" will never translate into anything useful if it is not documented for future reference. Learning should be based not only on individual experience but also on the company's ability to adapt to new situations and use them in the future.

Persistence – There is no perfect application. The bigger applications get, the more errors there can be. The SQA team should be tenacious in looking for concerns in every aspect of the software development process. Despite the obstacles, every part has to be scrutinized without hesitation.

Different Effects of SQA – SQA should go beyond software development. A mediocre SQA engineer will just report for work, look for errors and leave. The SQA team should instead be a role model of business conduct at all times. That way SQA fosters excellence not only in the application but also in the way people work; it may sound off topic, but when people carry themselves professionally, their work reflects it.

Result-focused – SQA should look not only at the process but ultimately at its effect on the clients and users. The SQA process should always look for results whenever a phase is completed.
These are the principles that every SQA plan and team should foster. They encourage dedication to the work and patience, not necessarily in pursuit of perfection but of maximum efficiency.

SQA Approaches and Methodologies

A scientific approach needs a method. As with any scientific process, a sequence of stages or steps should be established to ensure the final product meets the user's specifications. The method is usually determined by the wishes of the client, the available manpower and the circumstances.

It is not that clients specify the actual method; rather, the provider takes the client's needs into consideration. Using the facts and data provided by the client, together with experience and the available information, a method for developing the product is determined and executed.

In SQA, the client's need for a better application is a given, but a good application is not the client's only need. There are metrics an application should meet, and anything below par is bad for business. The SQA team ensures the metrics are reached by constantly monitoring the development process and giving feedback to the developers; the team is there to ensure the process is done correctly so that the needed metrics are met.

To ensure that proper SQA is done, the SQA team should select a suitable methodology. Selecting one is quite a challenge, but if the relevant facts are laid out, the SQA team should be able to choose a good SQA methodology. The deciding factors are also determined beforehand so that the methodology is known early. Since SQA is an evaluating process, it reacts to the available information.

Why Not to Use a Methodology
On the other hand, there are SQA and software engineers who doubt the importance and usefulness of an SQA methodology. Their reasoning is that SQA methodologies and approaches are very specific, and because they are so specific and strict they leave no room for additional information.

There are also developers and SQA managers who develop their own type of software quality assurance methodology based on their present situation. Again, the reason is that the published methodologies are so strict that they leave no room for creativity in software development.
Wasted time is another reason some software developers disregard methodologies: instead of determining the methodology, they would rather spend their time on other things. Proof of the usefulness of an SQA methodology is also scarce; the few studies that try to show that an SQA methodology makes work more efficient are usually dominated by general information, with only a small part of the text dedicated to the methodology itself.

Last but not least, a methodology for SQA can simply be a waste of time, especially when trying out a new methodology for software development. It is always a gamble to try something new, even if it has been tested in simulated environments. It all goes back to the fact that SQA methodologies are very specific, and going beyond what is written is rarely a good thing.

SQA Software and Tools

In quality assurance, it is always important to get all the help we can. In other industries, manufacturers can easily check products manually and discard those that do not meet the standard: the length and width of a product are checked to maintain standardization, and special machines are often used to check products. With tools and machines, they can easily hold their products to a standard.

The same goes for software and applications. Although no physical machines are used, applications go through rigorous testing before they are released to the public, even for beta testing.

The tools used in SQA are generally testing tools wherein an application is run through a series of tests to gauge the performance of the application.

The tools used in SQA vary in purpose and performance. They range from tools that test the code to tools that run the application under great stress. These tools are employed to test the application and produce numbers and statistics about it. Through these numbers, the SQA team and the developers know whether the application lives up to the targeted performance.

Like most developers, each SQA team has its preferred tools for application testing. Based on their beliefs and expertise, the SQA team will usually give the owners or business managers a free hand on what type of testing tool to use.
Notable SQA Tools

The following are some of the best-known SQA tools and applications. There are hundreds of others, but the tools below have been around for years and have been used by thousands, probably millions, of testers.

WinRunner – Developed by HP, WinRunner is a user-friendly application that can test how the application reacts to the user. Besides measuring response time, WinRunner can also replay and verify every transaction and interaction the application had with the user. The tool behaves like an ordinary user and captures and records every response the application gives.

LoadRunner – Developed by HP, LoadRunner is a straightforward application for testing actual performance. If you are looking for a program to test your application's tolerance to stress, LoadRunner is your tool: it can act like thousands of users at the same time, stress-testing the application.

QuickTest Professional – If you have worked with WinRunner you have surely bumped into this tool. Built by HP, QuickTest emulates the actions of users and exercises the application according to the procedures set by testers. It can be used on GUI and non-GUI websites and applications, and the tool can be customized with different plug-ins.

Mercury TestDirector – An all-in-one package, this web-based tool can be used from start to end in testing an application or a website. Every defect is managed according to its effect on the application. Users also have the option of using it exclusively for their own application or together with a wide array of other testing tools.

Silktest – Although available on only a limited set of operating systems, Silktest is a very smart testing tool. It lists all the possible functions and tries to exercise them one by one, and it can be implemented in small iterations as it translates the available code into actual objects.

Bugzilla – Developed by Mozilla, this open-source tool works as the name suggests: Bugzilla specializes in tracking bugs found in an application or website. Since it is open source it can be used freely, and its availability on different operating systems makes it a viable alternative for error tracking. The only downside is the long list of requirements that must be met before it can run.

Application Center Test – Also known as ACT, this testing tool was developed by Microsoft for ASP.NET applications. It is primarily used for determining the capacity of the servers that handle an application: testers can exercise the server by sending a constant stream of requests, and a customized script written in VBScript or JScript can be used to test the server's capacity.

OpenSTA – Another open-source tool; testers can easily launch it and use it to test an application's capacity under stress. The testing process can be recorded and test runs can be scheduled, which makes it great for websites that need regular maintenance.

SQA Management Standards and Metrics

SQA Management Standards
Aside from internationally recognized SQA standards, there are specific standards that were developed to cater to the management of software development:

ISO 9001 and ISO 9000-3 – ISO 9001 is a general quality management standard, and ISO 9000-3 provides guidance on applying it to software development. These standards encourage leadership and can be applied continuously, even after the product has been developed and released to its users. Good supplier relations are also emphasized, since technology is not developed only in-house.

SW-CMM (Software Capability Maturity Model) – Developed in the late 1980s, SW-CMM became the standard for large-scale software development organizations. It drew support because the model was established by developers for developers. It relies on quantitative methods to develop and maintain productivity, and its five-level model gauges process maturity and establishes a detailed plan to improve it further. The biggest draw of SW-CMM is that it does not mandate a particular SDLC model, tool or documentation standard, leaving room for creativity in software development.

ISO/IEC 15504 Process Assessment Model and SPICE (Software Process Improvement and Capability Determination) – Aiming for international acceptance, this standard defines a model for assessing software processes. Commonly known as SPICE (not to be confused with the circuit-simulation program of the same name), it can be used to assess each part of the development process.
Metrics
There are many forms of metrics in SQA but they can easily be divided into three categories: product evaluation, product quality, and process auditing.

Product Evaluation Metrics – Basically, this metric is the number of hours an SQA member spends evaluating the application. Developers who deliver a good application will require less evaluation effort, while an application riddled with errors will take more. The numbers extracted from this metric give the SQA team a good estimate of the timeframe for product evaluation.

Product Quality Metrics – These metrics tabulate all the known errors in the application. The numbers show how many errors there are and where they come from. The main purpose of this metric is to expose the trend in errors: once the trend is identified, the common source of errors can be located, and developers can address the root problem rather than chasing its smaller symptoms. There are also metrics that record the actual time taken to correct errors, so that even a management team not entirely familiar with the application can track progress.
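As a small illustration of the tally behind a product quality metric, the sketch below counts defects per module so that the trend (the most error-prone area) stands out. The module names, defect types and counts are invented for illustration only.

import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class DefectTrend {
    record Defect(String module, String type) { }

    public static void main(String[] args) {
        // Invented sample data, purely for illustration.
        List<Defect> defects = List.of(
                new Defect("Billing", "Logic Error"),
                new Defect("Billing", "Boundary Conditions Neglected"),
                new Defect("Billing", "Logic Error"),
                new Defect("Reports", "Typographical Error"));

        // Tally defects per module so the most error-prone area stands out.
        Map<String, Long> perModule = new TreeMap<>();
        for (Defect d : defects) {
            perModule.merge(d.module(), 1L, Long::sum);
        }
        perModule.forEach((module, count) ->
                System.out.println(module + ": " + count + " defect(s)"));
    }
}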

Process Audit Metrics – These metrics show how the application behaves. They look not for errors but at performance. One classic example is actual response time against the load placed on the application. Businesses always look at this metric because they want to be sure the application will still work well when thousands of users hit it at the same time.

There are lots of options for which standard to use when developing the plan for Software Quality Assurance. But with metrics, the numbers are constant and will always be the gauge of whether the application works as planned.

SQA Lifecycle Standards

Software Quality Assurance procedures have been standardized and refined over years of work on how to standardize application development.

Through experience, organizations have been able to put in writing how to develop a plan for software development. Because the process has been standardized, an application developed under SQA can be recognized worldwide, since it has been built according to the standards.
Along with the standards, the metrics are also standardized. More than anything else written in the report, clients looking for an efficient application look at the numbers. Metrics tell whether the application has been developed according to plan and can perform when released or sold to its intended users.
SQA Standards

IEEE (Institute of Electrical and Electronics Engineers) – This standard was established by the organization of the same name, founded in 1963; IEEE began publishing standards for software development in 1986. There are two IEEE standards for Software Quality Assurance planning: Standard 730-1989, developed in 1989, and ANSI/IEEE Standard 983-1986, the original version developed in 1986. IEEE is very popular, especially for SQA planning and development.


ISO (International Organization for Standardization) – One of the oldest standardization bodies in the world, ISO was established in 1947 and has established itself as a standards body not only for software development but also for business processes. Because it is internationally recognized, it has become a powerful standard for many different business uses.


DOD (US Department of Defense) – The US government has also developed its own standardization scheme, especially for technology development, evolving from ISO 9000 into specialized standards of its own. There are currently two DOD standards: MIL-STD-961E, which covers program-specific standards, and MIL-STD-962D, which covers general standardization. Both formats have been applied by the DOD since August 1, 2003.


ANSI (American National Standards Institute) – Working with US-based companies, ANSI has become the bridge between smaller US companies and international standards, helping them achieve international recognition. ANSI covers almost everything in the country: products, technology and applications. ANSI ensures that products developed in the US can be used in other countries as well.


IEC (International Electrotechnical Commission) – Started in June 1906, the commission has dedicated itself to the standardization of electrical technology and its development. Today it is usually associated with ISO, as it has become part of the technical industries. IEC is known for standardizing electronics through the International Electrotechnical Vocabulary, which is still used by electronics industries today.

EIA (Electronic Industries Alliance) – EIA is an alliance of different electronics manufacturers in the US. The organization sets the standards for electronic products in the country, and these have been accepted by thousands of companies worldwide.

Friday, May 1, 2009

Errors, Defects and Bugs

Software Errors

One common definition of a software error is a mismatch between the program and its specification. In other words, a software error is present in a program when the program does not do what its end user expects.

Categories of Software Errors:
User interface errors, such as output errors or incorrect user messages
Function errors
Hardware defects
Incorrect program version
Requirements errors
Design errors
Documentation errors
Architecture errors
Module interface errors
Performance errors
Boundary-related errors
Logic errors, such as calculation errors
State-based behavior errors
Communication errors
Program structure errors, such as control-flow errors

Most programmers are rather cavalier about controlling the quality of the software they write. They bang out some code, run it through some fairly obvious ad hoc tests, and if it seems okay, they're done. While this approach may work all right for small, personal programs, it doesn't cut the mustard for professional software development. Modern software engineering practices include considerable effort directed toward software quality assurance and testing. The idea, of course, is to produce software with a high probability of satisfying the customer's needs.

There are two ways to deliver software free of errors:
Preventing the introduction of errors in the first place
Identifying the bugs lurking in the program code, seeking them out, and destroying them

Obviously, the first method is superior. A big part of software quality comes from doing a good job of defining the requirements for the system you're building and designing a software solution that will satisfy those requirements. Testing concentrates on detecting those errors that creep in despite your best efforts to keep them out.
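To make two of the categories above concrete, here is a hypothetical snippet (not from the article) showing a boundary-related error, which is exactly the kind of mismatch between program and specification described above, together with the unit test that exposes it. The Grader class and the pass mark of 50 are invented for illustration.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class Grader {
    // Boundary-related error: the specification says a score of 50 passes,
    // but '>' instead of '>=' makes the boundary value fail.
    String grade(int score) {
        return score > 50 ? "PASS" : "FAIL";
    }
}

class GraderTest {
    @Test
    void boundaryScoreOfFiftyShouldPass() {
        // This test fails against the code above, exposing the boundary defect.
        assertEquals("PASS", new Grader().grade(50));
    }
}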

Classification of Defects / Bugs

There are various ways in which we can classify defects. Below are some of the classifications:


Severity Wise:

Major: A defect, which will cause an observable product failure or departure from requirements.
Minor: A defect that will not cause a failure in execution of the product.
Fatal: A defect that will cause the system to crash or close abruptly, or affect other applications.
Work product wise:

SSD: A defect from System Study document
FSD: A defect from Functional Specification document
ADS: A defect from Architectural Design Document
DDS: A defect from Detailed Design document
Source code: A defect from Source code
Test Plan/ Test Cases: A defect from Test Plan/ Test Cases
User Documentation: A defect from User manuals, Operating manuals
Type of Errors Wise:

Comments: Inadequate/ incorrect/ misleading or missing comments in the source code
Computational Error: Improper computation of the formulae / improper business validations in code.
Data error: Incorrect data population / update in database
Database Error: Error in the database schema/Design
Missing Design: Design features/approach missed/not documented in the design document and hence does not correspond to requirements
Inadequate or Suboptimal Design: Design features/approach needs additional inputs to be complete, or the design features described do not provide the best (optimal) approach towards the required solution
Incorrect Design: Wrong or inaccurate design
Ambiguous Design: Design feature/approach is not clear to the reviewer. Also includes ambiguous use of words or unclear design features.
Boundary Conditions Neglected: Boundary conditions not addressed/incorrect
Interface Error: Interfacing error internal or external to the application, incorrect handling of passed parameters, incorrect alignment, incorrect/misplaced fields or objects, unfriendly window/screen positions
Logic Error: Missing or Inadequate or irrelevant or ambiguous functionality in source code
Message Error: Inadequate/ incorrect/ misleading or missing error messages in source code
Navigation Error: Navigation not coded correctly in source code
Performance Error: An error related to performance/optimality of the code
Missing Requirements: Implicit/Explicit requirements are missed/not documented during requirement phase
Inadequate Requirements: Requirement needs additional inputs for it to be complete
Incorrect Requirements: Wrong or inaccurate requirements
Ambiguous Requirements: Requirement is not clear to the reviewer. Also includes ambiguous use of words – e.g. Like, such as, may be, could be, might etc.
Sequencing / Timing Error: Error due to incorrect/missing consideration to timeouts and improper/missing sequencing in source code.
Standards: Standards not followed like improper exception handling, use of E & D Formats and project related design/requirements/coding standards
System Error: Hardware and Operating System related error, Memory leak
Test Plan / Cases Error: Inadequate/ incorrect/ ambiguous or duplicate or missing - Test Plan/ Test Cases & Test Scripts, Incorrect/Incomplete test setup
Typographical Error: Spelling / Grammar mistake in documents/source code
Variable Declaration Error: Improper declaration / usage of variables, Type mismatch error in source code
Status Wise:

Open
Closed
Deferred
Cancelled

Bug Life Cycle


The steps in the defect life cycle vary from company to company, but the basic flow remains the same. Below is a basic flow for the Bug Life Cycle:


A tester finds a bug. Status --> Open
The test lead reviews the bug and authorizes it. Status --> Open
The development team lead reviews the defect. Status --> Open
The defect can be authorized or unauthorized by the development team. (Here the status of the defect/bug will be Open for authorized defects and Rejected for unauthorized defects.)
Now, the authorized bugs will be fixed or deferred by the development team. The status of fixed bugs will be Fixed, and the status will be Deferred for bugs that were deferred.
The fixed bugs will be re-tested by the testing team. Based on the closure of the bug, the status will be set to Closed; if the bug still remains, it will be re-raised and the status set to Re-opened.
The above cycle continues until all the bugs/defects in the application are fixed.
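The flow above can also be sketched as a simple state machine. The enum and allowed transitions below mirror the basic flow described in this post rather than the workflow of any particular bug-tracking tool.

import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

public class BugLifeCycle {
    enum Status { OPEN, REJECTED, FIXED, DEFERRED, REOPENED, CLOSED }

    // Allowed transitions, following the basic flow described above.
    private static final Map<Status, Set<Status>> TRANSITIONS = new EnumMap<>(Status.class);
    static {
        TRANSITIONS.put(Status.OPEN,     EnumSet.of(Status.REJECTED, Status.FIXED, Status.DEFERRED));
        TRANSITIONS.put(Status.FIXED,    EnumSet.of(Status.CLOSED, Status.REOPENED));
        TRANSITIONS.put(Status.REOPENED, EnumSet.of(Status.FIXED, Status.DEFERRED));
        TRANSITIONS.put(Status.DEFERRED, EnumSet.of(Status.REOPENED));
        TRANSITIONS.put(Status.REJECTED, EnumSet.noneOf(Status.class));
        TRANSITIONS.put(Status.CLOSED,   EnumSet.noneOf(Status.class));
    }

    static boolean canMove(Status from, Status to) {
        return TRANSITIONS.getOrDefault(from, EnumSet.noneOf(Status.class)).contains(to);
    }

    public static void main(String[] args) {
        System.out.println(canMove(Status.OPEN, Status.FIXED));   // true
        System.out.println(canMove(Status.CLOSED, Status.FIXED)); // false
    }
}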

Software Testing Bug Report


If you are doing manual software testing and reporting bugs without the help of any tool, assign a unique number to each bug report. This will help to identify the bug record.
Clearly mention the steps to reproduce the bug. Do not assume or skip any reproduction step.
Be specific and to the point.
Apart from these tips, below are some good practices:

Report the problem immediately
Reproduce the bug at least one more time before you report it
Test for the same bug on other similar modules of the application
Read the bug report before you submit or send it
Never criticize any developer or attack any individual

Software Testing Bug Report Template


If you are using any software testing management tool or bug reporting tool such as Bugzilla, Test Director or BugHost, or any other online bug tracking tool, then the tool will automatically generate the bug report. If you are not using any tool, you may refer to the following template for your software bug report:



Name of Reporter:
Email Id of Reporter:
Version or Build:
Module or component:
Platform / Operating System:
Type of error:
Priority:
Severity:
Status:
Assigned to:
Summary:
Description:
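As a rough sketch, the template above maps naturally onto a simple data structure, which is roughly what bug-tracking tools store for each report. The field names follow the template, and the sample values in main() are invented.

public class BugReport {
    // Fields follow the bug report template above.
    String reporterName;
    String reporterEmail;
    String versionOrBuild;
    String moduleOrComponent;
    String platformOrOs;
    String typeOfError;
    String priority;
    String severity;
    String status;
    String assignedTo;
    String summary;
    String description;

    public static void main(String[] args) {
        // Invented example values for illustration only.
        BugReport bug = new BugReport();
        bug.reporterName = "A Tester";
        bug.versionOrBuild = "1.0.3";
        bug.moduleOrComponent = "Login";
        bug.typeOfError = "Logic Error";
        bug.priority = "High";
        bug.severity = "Major";
        bug.status = "Open";
        bug.summary = "Login fails for valid users";
        bug.description = "Steps to reproduce: 1) open login page 2) enter valid credentials 3) click Login";
        System.out.println(bug.summary + " [" + bug.status + ", " + bug.severity + "]");
    }
}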

Software Testing Management

Test Strategy


In software testing, the test strategy is a document describing how the software will be tested. Generally, it is developed for all levels of testing. The Test Lead / Test Manager writes the test strategy and reviews it with the project team. Note also that the test plan may include the test environment, risk assessment, test cases, conditions, pass/fail criteria and a list of related tasks.

The inputs to a test strategy can be:
Test environment
Test tool data
Project schedule
Defined software testing standards
Functional and technical requirements

An ideal test strategy must contain:
Required hardware and software component details, including test tools
Roles and responsibilities
Software testing methodology
Limitations of the application

At the completion of this stage, the testing team will have the following three documents:
Test Strategy
Test Plan
Test Cases

All the above tasks together are called the software testing methodology.


Test Strategy - all aspects.
In continuation of my previous post on test strategy, I describe it here in more detail.
A test strategy can be defined as a high-level management approach that establishes adequate confidence in the software product being tested, while ensuring that cost, effort and timelines all stay within acceptable limits.
Test strategy can be of different levels:
An enterprise-wide test strategy, to set up a test group for the entire organization
An application or product test strategy, to set up the test requirements of a software product for its entire life
A project-level test strategy, to set up test plans for a single project life cycle

A good test strategy should:
Define the objectives, timelines and approach for the testing effort
List the various testing activities and define roles and responsibilities
Identify and coordinate the test environment and data requirements before the testing phase starts
Communicate the test plans to stakeholders and obtain buy-in from business clients
Proactively minimize fires during the testing phase

Decisions to be made in the test strategy:
When should the testing be stopped?
What should be tested?
What can remain untested?

Further, the following three types of risks can be considered while making a test strategy:
1. Development related risks include:
Inefficiently controlled project timelines
Complexity of the code
Less skilled programmers
Defects in existing code
Problems in team co-ordination
Lack of reviews and configuration control
Lack of specifications
2. Testing related risks include:
Lack of domain knowledge
Lack of testing and platform skills
Lack of test bed and test data
Lack of sufficient time
3. Production related risks include:
Dynamic frequency of usage
Complexity of the user interface
High business impact of the function

Test strategies generally acknowledge that:
The earlier a bug is detected, the cheaper it is to fix
Splitting the testing into smaller parts and then aggregating ensures a quicker debug-and-fix cycle
Bugs found in critical functions take more time to fix

Six steps for developing a test strategy:
Determine the objective and scope of the testing
Identify the types of tests required
Based on the external and internal risks, add, modify or delete processes
Plan the environment, test bed, test data and other infrastructure
Plan the strategy for managing changes, defects and timeline variations
Decide both the in-process and the post-process metrics to match the objectives

While creating a test strategy for maintenance projects, the following aspects need to be considered:
How much longer will the software be supported, and is it worthwhile strategizing to improve the testing of this software?
Is it worthwhile to incrementally automate the testing of the existing code and functionality?
How much of the old functionality should we test to be confident that the new code has not corrupted the old code?
Should the same set of people support the testing of multiple software maintenance projects?
What analysis can be applied to the current defect database to help us improve the development itself?

However, for creating the test strategy for product development, the following aspects need to be considered:
How much longer is the product going to last? This includes multiple versions, since test cases and artifacts can continue to be used across versions.
Would automation be worth considering?
How much testing should we do per release (minor, patch, major, etc.)?
Do we have a risk-based test plan that will allow releases to be made earlier than planned?
Is the development cycle iterative, and should the test cycle follow it?

As per IEEE software engineering standards, the test strategy document should address the following aspects:
1. Objective and scope of testing
What is the business objective of testing?
What are the quality goals to be met by the testing effort?
To what extent will each application be tested?
What external systems will not be tested?
What systems and components need to be tested?

2. Types of testing
The different phases of testing that are required
The different types of testing
Test coverage

3. Testing approach
Definition of the testing process life cycle
Creation of testing-related templates, checklists and guidelines
Methodology for test development and execution
Specification of the test environment setup
Planning for test execution cycles

4. Test environment specification
Hardware and software requirements
Test data creation, database requirements and setup
Configuration management, maintenance of the test bed and build management

5. Test automation
Criteria and feasibility of test automation
Test tool identification
Test automation strategy (effort, timelines, etc.)

6. Roles and responsibilities / escalation mechanism
Testing team organization and reporting structure
The different roles in the testing activities and their corresponding responsibilities
Who to escalate to, and when

7. Defect management
Categorization of defects based on criticality and priority
Definition of a workflow for the disposition of defects
Techniques and tools for tracking defects

8. Communication and status reporting
Status meetings for communicating testing status
Format and content of the different status reports
Periodicity of each status report
Distribution list for each report

9. Risks and mitigation plans
Identification of all testing-related risks, their impact and exposure
Plan for mitigating and managing these risks

10. Configuration management
List of testing artefacts under version control
Tools and techniques for configuration management

11. Change management
Plan for managing requirement changes
Models for assessing the impact of changes on testing
Process for keeping test artifacts in sync with development artifacts

12. Testing metrics
What metrics are to be collected?
Do they match the strategic objectives?
What will be the techniques for collection of metrics?
What tools will be employed to gather and analyze metrics?
What process improvements are planned based on these metrics?
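As a rough example of the kind of in-process metrics a strategy might commit to collecting in answer to the questions above, the sketch below computes defect density and test pass rate. The figures are invented sample data, not results from any real project.

public class TestingMetrics {
    public static void main(String[] args) {
        // Invented sample figures, purely for illustration.
        int defectsFound   = 42;
        double sizeInKloc  = 12.5;   // thousand lines of code
        int testsExecuted  = 400;
        int testsPassed    = 368;

        double defectDensity = defectsFound / sizeInKloc;           // defects per KLOC
        double passRate      = 100.0 * testsPassed / testsExecuted; // percentage

        System.out.printf("Defect density: %.2f defects/KLOC%n", defectDensity);
        System.out.printf("Test pass rate: %.1f%%%n", passRate);
    }
}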

Spiral Testing Approach

The purpose of software testing is to identify the differences between existing and expected conditions, i.e., to detect software defects. Testing identifies the requirements that have not been satisfied and the functions that have been impaired. The most commonly recognized test objective is to identify bugs, but this is a limited definition of the aim of testing. Not only must bugs be identified, but they must be put into a framework that enables testers to predict how the software will perform.
In the spiral and rapid application development testing environment there may be no final functional requirements for the system. They are probably informal and evolutionary. Also, the test plan may not be completed until the system is released for production. The relatively long lead time to create test plans based on a good set of requirement specifications may not be available. Testing is an ongoing improvement process that occurs frequently as the system changes. The product evolves over time and is not static.
The testing organization needs to get inside the development effort and work closely with development. Each new version needs to be tested as it becomes available. The approach is to first test the new enhancements or modified software to resolve defects reported in the previous spiral. If time permits, regression testing is then performed to assure that the rest of the system has not regressed.
In the spiral development environment, software testing is again described as a continuous improvement process that must be integrated into a rapid application development methodology. Testing as an integrated function prevents development from proceeding without testing. Deming's continuous improvement process using the PDCA model will again be applied to the software testing process. Before the continuous improvement process begins, the testing function needs to perform a series of information-gathering planning steps to understand the development project objectives, current status, project plans, function specification, and risks.
Once this is completed, the formal Plan step of the continuous improvement process commences. A major step is to develop a software test plan. The test plan is the basis for accomplishing testing and should be considered an ongoing document, i.e., as the system changes, so does the plan. The outline of a good test plan includes an introduction, the overall plan, testing requirements, test procedures, and test plan details. These are further broken down into business functions, test scenarios and scripts, function/test matrix, expected results, test case checklists, discrepancy reports, required software, hardware, data, personnel, test schedule, test entry criteria, exit criteria, and summary reports.
The more definitive a test plan is, the easier the plan step will be. If the system changes between development of the test plan and when the tests are to be executed, the test plan should be updated accordingly.
The Do step of the continuous improvement process consists of test case design, test development and test execution. This step describes how to design test cases and execute the tests included in the test plan. Design includes the functional tests, GUI tests, and fragment system and acceptance tests. Once an overall test design is completed, test development starts. This includes building test scripts and procedures to provide test case details.
The test team is responsible for executing the tests and must ensure that they are executed according to the test design. The do step also includes test setup, regression testing of old and new tests, and recording any defects discovered.
The Check step of the continuous improvement process includes metric measurements and analysis. As described in "Quality Through a Continuous Improvement Process," crucial to the Deming method is the need to base decisions as much as possible on accurate and timely data. Metrics are key to verifying whether the work effort and testing are on schedule, and to identifying any new resource requirements. During the Check step it is important to publish intermediate test reports. This includes recording the test results and relating them to the test plan and test objectives.
The Act step of the continuous improvement process involves preparation for the next spiral iteration. It entails refining the function/GUI tests, test suites, test cases, test scripts, and fragment system and acceptance tests, and modifying the defect tracking system and the version control system, if necessary. It also includes devising measures for appropriate actions relating to work that was not performed according to the plan or results that were not what was anticipated. Examples include a reevaluation of the test team, test procedures, and technology dimensions of testing. All the above is fed back into the test plan, which is updated.
Once several testing spirals have been completed and the application has been verified as functionally stable, full system and acceptance testing starts. These tests are often optional. Respective system and acceptance test plans are developed defining the test objects and the specific tests to be completed.
The final activity in the continuous improvement process is summarizing and reporting the spiral test results. A major test report should be written at the end of all testing. The process used for report writing is the same whether it is an interim or a final report, and, like other tasks in testing, report writing is also subject to quality control. However, the final test report should be much more comprehensive than interim test reports. For each type of test it should describe a record of defects discovered, data reduction techniques, root cause analysis, the development of findings, and follow-on recommendations for the current and/or future projects.
The methodology provides a framework for testing in this environment. The major steps include information gathering, test planning, test design, test development, test execution/evaluation, and preparing for the next spiral. It includes a set of tasks associated with each step, or a checklist from which the testing organization can choose based on its needs. The spiral approach flushes out the system functionality, and once this has been completed, it also provides for classical system and acceptance testing.


Software Testing Estimation Process

The software testing estimation process is one of the most difficult and critical activities. When we say that a project will be completed in a particular time at a particular cost, it must happen. If it does not, the result may range from peers' comments and senior management's warnings to being fired, depending on the reasons and seriousness of the failure.
Here are a few rules for effective software testing estimation:
- Estimation must be based on previous projects: All estimation should be based on previous projects.
- Estimation must be recorded: All decisions should be recorded. It is very important because if requirements change for any reason, the records would help the testing team to estimate again.
- Estimation shall be always based on the software requirements: All estimation should be based on what would be tested. The software requirements shall be read and understood by the testing team as well as development team. Without the testing participation, no serious estimation can be considered.
- Estimation must be verified: Two spreadsheets can be created for recording the estimations. At the end, compare both estimations; if one deviates from the recorded figures, a re-estimation should be made.
- Estimation must be supported by tools: Tools such as a spreadsheet containing metrics can automatically calculate the cost and duration of each testing phase (a rough sketch of such a calculation appears after the classification below). A document containing sections such as a cost table, risks, and free notes should also be created; showing this document to the customer can help the customer decide which kind of testing he needs.
- Estimation shall be based on expert judgment: Experienced resources can estimate more easily how long testing will take.
Classify the requirements into the following categories:
Critical: The development team has little knowledge in how to implement it.
High: The development team has good knowledge in how to implement it but it is not an easy task.
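Purely as an illustrative sketch of estimation supported by tools, the snippet below totals testing hours per requirement category. The categories mirror the classification above, while the hours-per-requirement multipliers and requirement counts are invented; in practice they would come from previous projects and expert judgment.

import java.util.LinkedHashMap;
import java.util.Map;

public class TestEffortEstimate {
    public static void main(String[] args) {
        // Assumed testing hours per requirement in each category (illustration only).
        Map<String, Double> hoursPerRequirement = new LinkedHashMap<>();
        hoursPerRequirement.put("Critical", 16.0);
        hoursPerRequirement.put("High", 8.0);

        // Number of requirements classified in each category (sample data).
        Map<String, Integer> requirementCount = Map.of("Critical", 5, "High", 12);

        double total = 0;
        for (Map.Entry<String, Double> entry : hoursPerRequirement.entrySet()) {
            int count = requirementCount.getOrDefault(entry.getKey(), 0);
            double hours = count * entry.getValue();
            System.out.println(entry.getKey() + ": " + count + " requirements -> " + hours + " h");
            total += hours;
        }
        System.out.println("Estimated total testing effort: " + total + " h");
    }
}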

Software Test Planning

The quality of the software testing effort depends on the quality of software test planning. Software test planning is a very critical and important part of the software testing process.
Below are some questions and suggestions for Software Test Planning:
- Have you planned for an overall testing schedule and the personnel required, and associated training requirements?
- Have the test team members been given assignments?
- Have you established test plans and test procedures for
1. Module testing
2. Integration testing
3. System testing
4. Acceptance testing
- Have you designed at least one black-box test case for each system function?
- Have you designed test cases for verifying quality objectives/factors (e.g. reliability, maintainability, etc.)?
- Have you designed test cases for verifying resource objectives?
- Have you defined test cases for performance tests, boundary tests, and usability tests?
- Have you designed test cases for stress tests (intentional attempts to break system)?
- Have you designed test cases with special input values (e.g. empty files)?
- Have you designed test cases with default input values?
- Have you described how traceability of testing to requirements is to be demonstrated (e.g. references to the specified functions and requirements)?
- Do all test cases agree with the specification of the function or requirement to be tested?
- Have you sufficiently considered error cases? Have you designed test cases for invalid and unexpected input conditions as well as valid conditions?
- Have you defined test cases for white-box-testing (structural tests)?
- Have you stated the level of coverage to be achieved by structural tests?
- Have you unambiguously provided test input data and expected test results or expected messages for each test case?
- Have you documented the purpose of and the capability demonstrated by each test case?
- Is it possible to meet and to measure all test objectives defined (e.g. test coverage)?
- Have you defined the test environment and tools needed for executing the software test?
- Have you described the hardware configuration and resources needed to implement the designed test cases?
- Have you described the software configuration needed to implement the designed test cases?
- Have you described the way in which tests are to be recorded?
- Have you defined criteria for evaluating the test results?
- Have you determined the criteria on which the completion of the test will be judged?
- Have you considered requirements for regression testing?

Software Test Plan
A test plan is a document describing the approach to be taken for intended testing activities and serves as a service level agreement between the quality assurance testing function and other interested parties, such as development. A test plan should be developed early in the development cycle and help improve the interactions of the analysis, design, and coding activities. A test plan defines the test objectives, scope, strategy and approach, test procedures, test environment, test completion criteria, test cases, items to be tested, the tests to be performed, the test schedules, personnel requirements, reporting procedures, assumptions, risks, and contingency planning.
While developing a test plan, one should be sure that it is simple, complete, current, and accessible by the appropriate individuals for feedback and approval. A good test plan flows logically and minimizes redundant testing, demonstrates full functional coverage, provides workable procedures for monitoring, tracking, and reporting test status, contains a clear definition of the roles and responsibilities of the parties involved, target delivery dates, and clearly documents the test results.
There are two ways of building a test plan. The first approach is a master test plan, which provides an overview of each detailed test plan, i.e., a test plan of test plans. A detailed test plan verifies a particular phase in the waterfall development life cycle. Examples of detailed test plans include unit, integration, system and acceptance test plans; others cover application enhancements, regression testing, and package installation. Unit test plans are code-oriented and very detailed but short because of their limited scope. System or acceptance test plans focus on the functional, black-box view of the entire system, not just a software unit.
The second approach is one test plan. This approach includes all the test types in one test plan, often called the acceptance/system test plan, but covers unit, integration, system, and acceptance testing and all the planning considerations to complete the tests.
A major component of a test plan, often in the Test Procedure section, is a test case. A test case defines the step-by-step process whereby a test is executed. It includes the objectives and conditions of the test, the steps needed to set up the test, the data inputs, the expected results, and the actual results. Other information such as the software, environment, version, test ID, screen, and test type are also provided.
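A test case of this kind can be sketched as a simple record with the fields listed above. The class and sample values below are hypothetical, intended only to show the information a test case carries.

public class TestCase {
    // Fields follow the test case contents described above.
    String testId;
    String objective;
    String setupSteps;
    String dataInputs;
    String expectedResult;
    String actualResult;
    String environment;
    String softwareVersion;
    String testType;

    boolean passed() {
        return expectedResult != null && expectedResult.equals(actualResult);
    }

    public static void main(String[] args) {
        // Invented sample values for illustration only.
        TestCase tc = new TestCase();
        tc.testId = "TC-001";
        tc.objective = "Verify login with valid credentials";
        tc.dataInputs = "user=demo, password=demo123";
        tc.expectedResult = "User lands on home page";
        tc.actualResult = "User lands on home page";
        tc.testType = "System";
        System.out.println(tc.testId + " passed: " + tc.passed());
    }
}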
Major steps to develop a test plan:
A test plan is the basis for accomplishing testing and should be considered a living document, i.e., as the application changes, the test plan should change.
A good test plan encourages the attitude of “quality before design and coding.” It is able to demonstrate that it contains full functional coverage, and the test cases trace back to the functions being tested. It also contains workable mechanisms for monitoring and tracking discovered defects and report status.
The following are the major steps that need to be completed to build a good test plan:
Define the Test Objectives. The first step for planning any test is to establish what is to be accomplished as a result of the testing. This step ensures that all responsible individuals contribute to the definition of the test criteria that will be used. The developer of a test plan determines what is going to be accomplished with the test, the specific tests to be performed, the test expectations, the critical success factors of the test, constraints, scope of the tests to be performed, the expected end products of the test, a final system summary report, and the final signatures and approvals. The test objectives are reviewed and approval for the objectives is obtained.
Develop the Test Approach. The test plan developer outlines the overall approach or how each test will be performed. This includes the testing techniques that will be used, test entry criteria, test exit criteria, procedures to coordinate testing activities with development, the test management approach, such as defect reporting and tracking, test progress tracking, status reporting, test resources and skills, risks, and a definition of the test basis (functional requirement specifications, etc.).
Define the Test Environment. The test plan developer examines the physical test facilities, defines the hardware, software, and networks, determines which automated test tools and support tools are required, defines the help desk support required, builds special software required for the test effort, and develops a plan to support the above.
Develop the Test Specifications. The developer of the test plan forms the test team to write the test specifications, develops test specification format standards, divides up the work tasks and work breakdown, assigns team members to tasks, and identifies features to be tested. The test team documents the test specifications for each feature and cross-references them to the functional specifications. It also identifies the interdependencies and work flow of the test specifications and reviews the test specifications.
Schedule the Test. The test plan developer develops a test schedule based on the resource availability and development schedule, compares the schedule with deadlines, balances resources and work load demands, defines major checkpoints, and develops contingency plans.
Review and Approve the Test Plan. The test plan developer or manager schedules a review meeting with the major players, reviews the plan in detail to ensure it is complete and workable, and obtains approval to proceed.

Test Specification

Test Specification – A test specification is a detailed summary of what scenarios will be tested, how they will be tested, how often they will be tested, and so on, for a given feature. Trying to include all Editor features or all Window Management features in one test specification would make it too large to read effectively. A test plan, by contrast, is a collection of all the test specifications for a given area; the test plan contains a high-level overview of what is tested for the given feature area.

Contents of a Test Specification:
Revision History – Who created the test specification? When was it created? When was it last updated?
Feature Description – A brief description of what area is being tested.
What is tested? – An overview of what scenarios are tested.
What is not tested? – Are there any areas that are not being tested, for example because they are covered by different people or because of test limitations? If so, include this information as well.
Nightly Test Cases – A list of the test cases, and a high-level description of what is tested, whenever a new build becomes available.
Breakout of Major Test Areas – The most interesting part of the test specification, where testers arrange test cases according to what they are testing.
Specific Functionality Tests – Tests to verify the feature is working according to the design specification, including verifying error conditions.
Security Tests – Any tests that are related to security.
Accessibility Tests – Any tests that are related to accessibility.
Performance Tests – Tests verifying any performance requirements for the feature.
Localization / Globalization – Tests to ensure the feature meets the product's local and international requirements.

Note that your test specification document should make it easy to prioritize the test cases, for example into nightly test cases, weekly test cases and a full test pass:

Nightly - Must run whenever a new build is available.
Weekly - Other major functionality tests run once every three or four builds.
Lower priority - Run once every major coding milestone
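One common way to realize this kind of prioritization is to tag test cases and let the build filter on the tags. The sketch below uses JUnit 5 @Tag annotations with hypothetical test names; a nightly build would then run only the tests tagged "nightly", while a full test pass runs everything (build tools such as Maven Surefire or Gradle can filter JUnit 5 tests by tag).

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

class EditorFeatureTests {
    @Test
    @Tag("nightly")        // must run whenever a new build is available
    void documentOpensAndSaves() {
        assertTrue(true);  // placeholder for the real check
    }

    @Test
    @Tag("weekly")         // major functionality, run every three or four builds
    void largeDocumentPerformance() {
        assertTrue(true);  // placeholder for the real check
    }

    @Test
    @Tag("milestone")      // lower priority, run once per major coding milestone
    void obscureImportFormat() {
        assertTrue(true);  // placeholder for the real check
    }
}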