Test Design - Test Case Steps

Test Design Steps - Test case writing steps

1. Read and analyze the requirement.
2. Write the related prerequisites and information steps, if required (for example, a setting that must already have been made, or a particular browser that must have been selected).
3. Write the procedure (the steps to perform a connection, configuration, and so on). This will contain the majority of the steps needed to reproduce the problem if this test case fails.
4. Write a step to capture the tester input/record. This is used as objective evidence.
5. Write the verify step (usually the expected result).
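
Purely as an illustration of these five steps, below is a minimal sketch of how they might map onto an automated test. It assumes Python's unittest, and the login function, account name and password are invented for the example, not taken from any real system.

import unittest


def login(user_store, username, password):
    # Toy system under test: succeeds only for a known username/password pair.
    return user_store.get(username) == password


class TestLoginWithValidCredentials(unittest.TestCase):

    def setUp(self):
        # Step 2: prerequisites - a test account must already exist.
        self.user_store = {"qa_user": "Secret123"}

    def test_valid_login(self):
        # Step 3: procedure - perform the login with the prepared account.
        result = login(self.user_store, "qa_user", "Secret123")

        # Step 4: capture the tester input/record as objective evidence.
        evidence = {"username": "qa_user", "result": result}
        print("Evidence:", evidence)

        # Step 5: verify step - the expected result.
        self.assertTrue(result)


if __name__ == "__main__":
    unittest.main()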

 

What is a Test Case? – A test case is a document that specifies the test inputs, events and expected results developed for a particular objective, so as to evaluate a particular program path or to verify compliance with a specific requirement, based on the test specification.
 

Areas for Test Design - Below are some of the areas in Test Design.

o Deriving Test Cases from Use Cases
o Deriving Test Cases from Supplementary Specifications
     - Deriving Test Cases for Performance Tests
     - Deriving Test Cases for Security / Access Tests
     - Deriving Test Cases for Configuration Tests
     - Deriving Test Cases for Installation Tests
     - Deriving Test Cases for other Non-Functional Tests

o Deriving Test Cases for Unit Tests
     - White-box tests
     - Black-box tests

o Deriving Test Cases for Product Acceptance Tests
o Building Test Cases for Regression Tests
 

Test case content:

The contents of a test case are:
* Prerequisites
* Procedures
* Information if required
* Tester input/record
* Verify step

Please refer to the Software Test Templates area for a Test Case Template.
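
As one way to picture these contents, here is a small, hypothetical sketch of a test case record in Python; the field names and sample values are illustrative only and do not replace the template referred to above.

from dataclasses import dataclass, field
from typing import List


@dataclass
class TestCaseRecord:
    # The test case contents listed above, captured as a simple record.
    identifier: str
    prerequisites: List[str] = field(default_factory=list)
    procedure: List[str] = field(default_factory=list)
    information: List[str] = field(default_factory=list)  # only if required
    tester_record: str = ""                                # objective evidence
    verify_step: str = ""                                  # expected result


tc = TestCaseRecord(
    identifier="TC-001",
    prerequisites=["Test account exists", "Browser is configured"],
    procedure=["Open the login page", "Enter credentials", "Submit"],
    tester_record="Screenshot of the result page",
    verify_step="User is redirected to the dashboard",
)
print(tc.identifier, "-", tc.verify_step)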
 

Types of Test Cases - Test cases are often categorized or classified by the type of test / requirement for test they are associated with, and they will vary accordingly. Best practice is to develop at least two test cases for each requirement for test:

1. A Test Case to demonstrate that the requirement has been achieved, often referred to as a Positive Test Case.
2. Another Test Case, reflecting an unacceptable, abnormal or unexpected condition or data, to demonstrate that the requirement is achieved only under the desired conditions, referred to as a Negative Test Case.
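
As a sketch of such a pair, the example below uses an invented age-validation requirement (the is_valid_age function and the 18 to 120 range are assumptions made up for illustration); the first test is the positive case, the second the negative case.

import unittest


def is_valid_age(age):
    # Toy requirement: age must be an integer between 18 and 120 inclusive.
    return isinstance(age, int) and 18 <= age <= 120


class TestAgeRequirement(unittest.TestCase):

    def test_positive_valid_age_is_accepted(self):
        # Positive test case: demonstrate the requirement is achieved.
        self.assertTrue(is_valid_age(30))

    def test_negative_invalid_age_is_rejected(self):
        # Negative test case: abnormal or unexpected data must be rejected.
        self.assertFalse(is_valid_age(-5))
        self.assertFalse(is_valid_age("thirty"))


if __name__ == "__main__":
    unittest.main()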

 

Some general software documentation terms.

 

USR:

User Requirements Specification - Contains User (Customer) requirements received from the user/client.

 

PS:

Product Specification - Derived from the USR so that it can be implemented in the product. A high-level product requirement.

 

DRS:

Design Requirement Specification - Design-level requirements, typically for hardware components.

 

SRS:

Software Requirement Specification - Low-level requirements derived from the PS, for software-related components.

 

SVP:

Software Verification Procedure - Written from the SRS. These are the actual test cases.

 

DVP:

Design Verification Procedure - Written from the DRS. More design-oriented test cases.

 

Some of the test design techniques are described below.

Test Design Technique 1 - Fault Tree analysis

Fault tree analysis is useful both in designing new products/services (test cases for new components) and in dealing with identified problems in existing products/services. Fault tree analysis (FTA) is a failure analysis in which the system is analyzed using Boolean logic.
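
To make the Boolean-logic idea concrete, here is a minimal sketch in Python; the gates and basic events (a power failure and two pump failures) are invented for illustration. Each combination that triggers the top-level failure suggests a test case for the corresponding components.

from itertools import product


def top_event_occurs(power_failed, pump_a_failed, pump_b_failed):
    # Fault tree: the system fails if power fails (OR gate)
    # or if pump A and pump B both fail (AND gate).
    pumps_failed = pump_a_failed and pump_b_failed
    return power_failed or pumps_failed


# Enumerate the basic-event combinations that lead to the top event.
for power, pump_a, pump_b in product([False, True], repeat=3):
    if top_event_occurs(power, pump_a, pump_b):
        print("Failure path:", {"power": power, "pump_a": pump_a, "pump_b": pump_b})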

 

Test Design Technique 2 - Boundary value analysis

Boundary value analysis is a software test design technique in which tests are designed to include representatives of boundary values. The test cases are developed around the boundary conditions. A common example: if a text box (say, a username field) supports 10 characters, we can write test cases with 0, 1, 5, 10 and more than 10 characters.
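
Continuing that example, here is a rough sketch of boundary value test data for a username field limited to 1 to 10 characters; the validation function itself is invented for the example.

def is_valid_username(value):
    # Toy rule: a username must be 1 to 10 characters long.
    return 1 <= len(value) <= 10


# Boundary value analysis: test at and around the limits (0, 1, 10, 11 characters)
# plus one representative in-range value (5 characters).
cases = {
    "": False,             # 0 characters - below the lower boundary
    "a": True,             # 1 character  - lower boundary
    "abcde": True,         # 5 characters - representative valid value
    "abcdefghij": True,    # 10 characters - upper boundary
    "abcdefghijk": False,  # 11 characters - just above the upper boundary
}

for value, expected in cases.items():
    assert is_valid_username(value) == expected, value
print("All boundary value cases passed")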

 

Test Design Technique 3 - Equivalence partitioning

Equivalence partitioning is a software test design technique that divides the input data of a software unit into partitions of data from which test cases can be derived.
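
As a rough sketch of the idea, the example below partitions an age input into three classes and tests one representative value per partition; the classification function and the partitions are assumptions made up for illustration.

def classify_age(age):
    # Toy unit under test: partitions ages into invalid / minor / adult.
    if not isinstance(age, int) or age < 0:
        return "invalid"
    return "minor" if age < 18 else "adult"


# One representative test value per equivalence partition.
partitions = {
    "invalid (negative or non-integer)": (-3, "invalid"),
    "minor (0 to 17)": (10, "minor"),
    "adult (18 and above)": (40, "adult"),
}

for name, (value, expected) in partitions.items():
    assert classify_age(value) == expected, name
print("One representative per partition passed")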

 

Test Design Technique 4 - Orthogonal Array Testing

This technique can be used to reduce the number of combinations and provide maximum coverage with a minimum number of test cases. It is an old and proven technique: orthogonal arrays were first introduced by Plackett and Burman in 1946 and applied to testing by G. Taguchi in 1987.

It is a mathematical technique to determine which variations of parameters need to be tested. [William E. Lewis, 2000]
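
As a minimal sketch, the smallest two-level orthogonal array, L4(2^3), applied to three hypothetical configuration parameters (browser, operating system, locale): four runs cover every pair of parameter values at least once, instead of all 2^3 = 8 full combinations.

# L4(2^3) orthogonal array: 4 runs for 3 factors at 2 levels each.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

# Hypothetical factors and levels for a configuration test.
browser = ["Chrome", "Firefox"]
operating_system = ["Windows", "Linux"]
locale = ["en_US", "de_DE"]

for b, o, l in L4:
    print("Test run:", browser[b], operating_system[o], locale[l])
# Every pair of factor values appears together in exactly one run,
# so pairwise interactions are covered with 4 runs instead of 8.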