Test Case Execution


This activity starts once the test case review has been completed. All the test cases are executed and the results are marked as pass, fail, or blocked.

Pass: - a test case is marked as pass if its expected result matches the actual result.

Fail: - a test case is marked as fail if its expected result does not match the actual result.

(Note 1) Before you report a failed test case as a bug, try to reproduce it to ensure that it is consistent and reproducible.
(Note 2) Bugs that are not consistently reproducible but occur unexpectedly and very frequently can still be reported by giving the sequence of steps you followed, the system configuration at that moment, etc.
(Note 3) You can take a snapshot (screenshot) of the bug and send it to the development team as an attachment for more clarity.

Blocked: - a test case is marked as blocked if it is not executed because of the non-availability of the functionality or environment, etc.
(Note) Report blocked test cases with suitable comments on why they are marked as blocked.
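The three outcomes above can be captured in a small helper that compares expected and actual results; a minimal sketch in Python, with illustrative result strings:

```python
from enum import Enum

class Result(Enum):
    PASS = "pass"
    FAIL = "fail"
    BLOCKED = "blocked"

def mark_test_case(expected, actual, executable=True):
    """Mark a test case: blocked if it could not be executed at all,
    pass if the expected result matches the actual result, fail otherwise."""
    if not executable:
        return Result.BLOCKED
    return Result.PASS if expected == actual else Result.FAIL

print(mark_test_case("Login succeeds", "Login succeeds"))        # Result.PASS
print(mark_test_case("Login succeeds", "Error 500 shown"))       # Result.FAIL
print(mark_test_case("Login succeeds", None, executable=False))  # Result.BLOCKED
```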



Test Case Review


Test cases should be reviewed in order to eliminate duplicates; this activity also makes it easy to map each test case to its requirement. Once the test case review is completed, we can prepare a test traceability matrix.

Test traceability matrix:
It is prepared to ensure that all requirements are covered, and that each requirement is covered by one or more use case(s) and, in turn, test case(s). It is represented in a tabular format, which makes it easy to identify which requirement has which test case(s).
It looks like the following.

SRS ID | Use Case ID | Test Case ID
1.0    | 1.1         | 1.1.1
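The matrix can also be kept as a simple data structure so that coverage gaps are detected mechanically; a minimal sketch (all IDs are illustrative):

```python
# A traceability matrix maps each SRS requirement to its use cases,
# and each use case to its test cases.
matrix = {
    "SRS 1.0": {"UC 1.1": ["TC 1.1.1", "TC 1.1.2"]},
    "SRS 2.0": {"UC 2.1": []},  # requirement with no test coverage yet
}

def uncovered_requirements(matrix):
    """Return the requirements that have no test case at all."""
    return [srs for srs, use_cases in matrix.items()
            if not any(tcs for tcs in use_cases.values())]

print(uncovered_requirements(matrix))  # ['SRS 2.0']
```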



Use Case


The use case is made up of a set of possible sequences of interactions between systems and users in a particular environment, related to a particular goal. A use case can be thought of as a collection of possible scenarios related to a particular goal; indeed, the use case and the goal are sometimes considered synonymous. Use cases represent the interfaces that a system makes visible to external entities and their interrelationships. A use case has the following components:

a) Use case name
b) Brief Description / Goal: what is to be achieved?
c) Flow of events for <name> use case
d) Preconditions: what the situation must be before the use case can take place
e) Main Flow: the basic successful flow of events
f) Sub Flow (if applicable): subordinate flows
g) Alternative flows: variations in business conditions
h) Trigger: the event that causes the use case to happen (usually initiated by an actor)
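These components can be captured in a small structured template; a minimal sketch as a Python dataclass, with an illustrative "Log in" use case:

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    """A use case with the components listed above (sub flows omitted)."""
    name: str
    goal: str
    trigger: str
    preconditions: list = field(default_factory=list)
    main_flow: list = field(default_factory=list)
    alternative_flows: list = field(default_factory=list)

login = UseCase(
    name="Log in",
    goal="User gains access to the application",
    trigger="User clicks the Login button",
    preconditions=["User account exists"],
    main_flow=["Enter credentials", "Submit", "Dashboard is shown"],
    alternative_flows=["Wrong password: error message is shown"],
)
print(login.name, "-", login.goal)
```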

Test Case


A test case is a document that describes the input, the sequence of actions, and the expected output, in order to determine whether a requirement of a product/system/application is working correctly or not. A test case should contain the following information:

a) Test case number
b) Test case name/title
c) Objective
d) Test conditions/setup
e) Input data requirements and sequence of steps
f)  Expected result
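A test case carrying these fields maps directly onto an automated check. A minimal sketch using Python's unittest, where add() is a hypothetical function under test and the test-case fields are recorded in the docstring:

```python
import unittest

# Hypothetical function under test (stands in for the real application).
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    """TC-001 'add two positive integers'.
    Objective: verify that add() returns the arithmetic sum.
    Test conditions/setup: none.  Input data: 2 and 3.
    Expected result: 5."""

    def test_add_two_positive_integers(self):
        self.assertEqual(add(2, 3), 5)  # expected result vs. actual result

# Run the test case programmatically and report the outcome.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all passed:", result.wasSuccessful())
```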

(Note 1) It is good practice to always prepare test cases from use cases.
(Note 2) Preparation of test cases should start once the SRS and design documents are frozen. This approach changes depending on the life cycle model that is selected for the execution of the project.
(Note 3) The IEEE provides standards for writing test cases (e.g., IEEE 829, the Standard for Software Test Documentation).

Software Test Case Preparation


The primary objective of test case preparation is to see whether the requirements of the application/system/product are fully covered or not. Test cases can be prepared from the use cases. If use cases are not available, test cases can be prepared from the SRS and design documents.

Test cases are prepared using following techniques

Boundary value analysis
A test data selection technique in which values are chosen to lie along data extremes. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values. The hope is that, if a system works correctly for these special values, then it will work correctly for all values in between.
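For an integer range, these boundary values can be generated mechanically; a minimal sketch, assuming an illustrative SRS rule that an age field accepts 18 to 60:

```python
def boundary_values(lo, hi):
    """Classic boundary values for an integer range [lo, hi]:
    just outside, on, and just inside each boundary, plus a typical value."""
    return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]

# Illustrative validator for the assumed SRS rule (age in 18..60).
def accepts_age(age):
    return 18 <= age <= 60

for v in boundary_values(18, 60):
    print(v, accepts_age(v))
```

The values just outside the range (17 and 61) are the ones most likely to expose off-by-one errors in the implementation.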

Equivalence class partitioning: -
A technique where test cases are designed with valid and invalid ranges of values; the ranges are decided with reference to the SRS. Examples are:
a) Range
b) Set
c) Number of inputs
d) Exists/constraint
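A minimal sketch of the idea, assuming an illustrative SRS rule that usernames must be 3 to 10 lowercase letters; one representative value is tested per partition:

```python
import string

# Illustrative validator for the assumed SRS rule.
def valid_username(name):
    return 3 <= len(name) <= 10 and all(c in string.ascii_lowercase for c in name)

# One equivalence class per row: inputs the SRS treats identically.
partitions = {
    "valid length, valid chars": ("alice", True),
    "too short": ("ab", False),
    "too long": ("a" * 11, False),
    "invalid characters": ("al!ce", False),
}

for label, (sample, expected) in partitions.items():
    assert valid_username(sample) == expected, label
print("all partitions behave as specified")
```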

Error guessing
A test case design technique in which the experience of the tester is used to guess what faults might occur, and test cases are designed specifically to expose them.

A few of these techniques are:
a) Null input.
b) Long input.
c) Random input.
d) Almost correct input.
e) Spaces in strings.
f) Quoted strings.
g) All CAPS.
h) Negative numbers.
i) Special characters

(Note) All the above-mentioned error-guessing techniques are valid only if the SRS places the corresponding limitation(s) on the input.
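Several of these probes can be fired at a target in one loop; a minimal sketch, where parse_quantity() is a hypothetical function under test:

```python
# Hypothetical function under test: parse a quantity string into a
# non-negative integer.
def parse_quantity(text):
    value = int(text.strip())
    if value < 0:
        raise ValueError("quantity must be non-negative")
    return value

# Error-guessing probes drawn from the list above: null, long, spaces,
# quoted, caps, negative, almost-correct, and special inputs.
probes = ["", "9" * 1000, "  42  ", '"7"', "ALL CAPS", "-5", "3.5", None]

for p in probes:
    try:
        print(repr(p), "->", parse_quantity(p))
    except (ValueError, TypeError, AttributeError) as exc:
        print(repr(p), "-> rejected:", type(exc).__name__)
```

A robust implementation should reject every bad probe with a clear error rather than crash or silently accept it.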

Software Test Plan


A test plan is generally a document that provides and records important information about a test project, such as relevant background information and the resources that will be used during the testing effort. It also includes entry, suspension, and exit criteria, which are all very important for running an accountable test effort. Besides this, other material needs to be included as well, such as statements of quality risks, tests to be performed, assumptions, dependencies, and risks, as well as a basic schedule:

a timeline that indicates what testing is being done, when it will start, when it will end, and any milestones that might be pertinent. The plan should give complete information about 'why' and 'how' the product is validated. We will now look in detail at the contents of a test plan.

(Note) If the organization has CMM certification, it may have a standard template; if you are supposed to prepare a test plan, you can request that template.

Types of Test Plans: -
a) Unit test plan
b) Integration test plan
c) System test plan
d) Acceptance test plan

Contents of a test plan: -

Introduction: -
This is the first section of the test plan and should describe the information presented in this (test plan) document: in short, a brief description of the purpose of the test execution plan for that specific project.

Approach:-
This section of the test plan describes the type of test approach adopted for the execution of the project: is it manual testing or automation testing, and for what reasons?

Schedules: -
Here, mention the details of the project schedules, such as the start date and end date. If there is a beta or alpha release in between, mention its schedule as well; apart from these, give build-wise start and end dates. If test cases are divided module-wise, you can also give the schedules accordingly.

(Note) It is advised to include buffer time in the schedules, so that unexpected events can be handled in time.

Resources: -
Give the hardware and software requirements and the manpower needed for project execution.
(Note 1) Retain spare systems (systems apart from the test environment) and at least one backup person, so that unexpected events can be handled.

(Note 2) Consider the budget allocation for the project while requesting the hardware, software, and manpower.

Environment: -
Give a detailed list of the environment(s) on which the test bed setup has to be created. Mention each OS and software along with its version(s) and patch(es).

(Note) If any test cases are specific to an OS or software, give the total number of such test cases in a tabular format (if applicable).

Test methodologies: -
This section of the test plan mentions the details of the test methodology that is used to validate the system/application/product. Give a brief description of the methodology adopted, along with suitable reasons why that method was chosen with respect to the project/product. The following are the most frequently used test methodologies:

a) Black box testing methodology
b) Gray box testing methodology
c) White Box testing methodology

Requirements/features to be tested: -
List the features/requirements of the system/product/application that are to be tested. Give a detailed, module-wise list (if available).

(Note) Represent the data in a tabular format with use case ID, SRS ID, and test case ID; this makes for easy readability.

Requirements/features not to be tested: -
List the features/requirements of the system/product/application that are not to be tested, because of various reasons like:
a) Non-availability of the environment or of specific software.
b) Any specific feature(s) or functionality that is not yet implemented in the current build, etc.

Assumptions: -
List the possible assumptions for the project, like:
a) It is assumed that the test environment is properly configured, is specific to this project only, and will not be used for any other purpose.
b) Development's test environment will be separate from QA's test environment.
c) The development team releases testable software to the testing team on schedule, etc.

Risks: -
Here, mention the possible risks for the project execution. For example:
a) Testing functionality of the product that is integrated with third-party software.
b) The team is not properly trained on the technology on which the application is built and is to be tested.
c) Execution of many test cases in a short period of time due to deadlines, etc.

Test Exit Criteria: -
Give information about when to stop testing the product. List the conditions under which testing can be stopped. The reasons can be like:
a) All test cases are executed and bugs are fixed to an acceptable extent.
b) Test case execution is completed with a certain percentage passed.
c) The project has reached its end date.
d) The product is stable after a certain number of regression test cycles performed on it.
e) The bug rate falls below a certain level, etc.
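Criteria (b) and (e) above can be checked mechanically; a minimal sketch, with illustrative thresholds for the required pass rate and the tolerated number of open bugs:

```python
def may_stop_testing(results, open_bugs, pass_threshold=0.95, max_open_bugs=5):
    """True when the executed test cases meet the required pass rate
    and the number of open bugs is within the tolerated limit."""
    executed = [r for r in results if r in ("pass", "fail")]  # ignore blocked
    if not executed:
        return False
    pass_rate = sum(r == "pass" for r in executed) / len(executed)
    return pass_rate >= pass_threshold and open_bugs <= max_open_bugs

results = ["pass"] * 97 + ["fail"] * 3
print(may_stop_testing(results, open_bugs=2))  # True: 97% pass, 2 open bugs
```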

Release Criteria: -
When testing is completed and QA deems that the following items have been satisfactorily met, QA will make a recommendation to release the product. A test and evaluation report reflecting Quality Assurance's assessment of the product will be submitted at this time.
a) All Severity 1 defects are resolved.
b) All must-fix or high-priority bugs are resolved.
c) All Severity 2 problems have acceptable workarounds, documented in the release notes or readme file, which are available to customers.
d) The main features of the product are fully implemented and function according to requirements.
e) All test cases are executed at least once.
f) The product is stable on the specified OS(es).
g) User documentation is accurate and fully descriptive of the product, etc.
(This is the end of the contents of the test plan)

Test Strategy


The first stage in testing is the formulation of a test strategy. A test strategy is a statement of the overall approach to testing, identifying what levels of testing are to be applied and the methods, techniques, and tools to be used. A test strategy should ideally be organization-wide, applicable to all of the organization's development practices. Developing a test strategy that efficiently meets the needs of an organization is critical to the success of product development within the organization. The actual application (or implementation) of a test strategy to a development project should be detailed in the project's software quality plan; others feel that this should be handled by the test plan.

Introduction to Software Testing


Testing can be considered an activity performed on a system or an application under controlled conditions, evaluating the results with reference to its design or requirements specifications. It is done with the intention of finding bugs in the system or application.

Testing Life cycle has the following stages
1) Test Strategy Preparation
2) Test Plan Preparation
3) Test Cases Preparation
4) Test Cases Review
5) Test Cases Execution
6) Reporting the Results
7) Preparation of bug tracking report
8) Regression Testing
9) Test Report Preparation

Software Testing Preface


This white paper gives a brief description of and information about the various activities that a software test engineer might come across at various stages of testing. The primary intention of this document is to provide basic-level testing information to a test engineer who has one to three years of testing experience; others can just brush up on their basics.