Thursday 31 October 2013

If information is king, then coverage metrics must be the joker?!


Problem: Test coverage reporting can easily lead to a false sense of security.

One of my peers asked me to review an acceptance test summary report before it was sent to his customer. It was an excellent report, with loads of information and a nice management summary that suggested exactly what to do. The test coverage section did, however, catch my eye: it was a table showing test case passes per use case along with the execution status.

The table looked something like this (simplified). Looking only at the percentages in the right column would suggest that everything is good…
 
Test coverage and execution status per use case:

Use Case      | Test cases | Execution status              | % Run
UC1 – [Title] | 17         | 12 Passed, 5 Failed           | 100%
UC2 – [Title] | 11         | 11 Passed                     | 100%
UC3 – [Title] | 14         | 12 Run, 2 No Run              | 86%
Total         | 42         | 35 Passed, 5 Failed, 2 No Run | 95%
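
To make the gap concrete, here is a minimal Python sketch of the two metrics. It is my own illustration, not taken from the report, and the TestCase structure is hypothetical:

from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    status: str  # "Passed", "Failed", or "No Run"

def percent_run(cases):
    # "Executed" means anything but "No Run"; failed cases count as covered!
    executed = sum(1 for c in cases if c.status != "No Run")
    return 100 * executed / len(cases)

def percent_passed(cases):
    passed = sum(1 for c in cases if c.status == "Passed")
    return 100 * passed / len(cases)

# UC1 from the table above: 17 cases, 12 Passed, 5 Failed
uc1 = [TestCase(f"tc{i:02}", "Passed") for i in range(12)] + \
      [TestCase(f"tc{i:02}", "Failed") for i in range(12, 17)]

print(f"% Run:    {percent_run(uc1):.0f}%")     # 100%, looks perfect
print(f"% Passed: {percent_passed(uc1):.0f}%")  # 71%, the real story

The same 17 test cases score 100% on one metric and 71% on the other; reporting only % Run hides the five failures entirely.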

Solution: Be VERY careful when reporting on coverage, and make sure to explain what the numbers mean.

The first problem when looking at coverage is setting the level of measurement. I like exact numbers like code coverage, but in some cases they are impossible to get. The report I reviewed covered acceptance testing of two deliveries from 3rd party vendors, making code-related metrics impossible to obtain.

Basing coverage on functionality is like navigating a minefield wearing flippers. It raises two problems: how do you measure whether a function is covered, and how do you measure whether its alternative scenarios are sufficiently covered?

I would base my coverage measurements on the acceptance criteria of the user stories to show functional coverage. If the user stories are broken down into acceptance criteria, the customer will have a very clear idea of which features have been tested. The table below drills down into the use case specifics and shifts the focus from run cases to passed cases.
 
Use Case / Acceptance criteria | Test cases | Execution status   | % Passed
UC1 – [Title]                  |            |                    |
· AC 1                         | 8          | 7 Passed           | 88%
· AC 2                         | 5          | 3 Failed           | 0%
· AC 3                         | 4          | 1 Passed, 2 Failed | 25%
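
Here is a quick sketch of how such a roll-up per acceptance criterion could be computed. The tagging scheme is my own invention, and I have filled in the cases the table leaves unaccounted for as "No Run"; that is an assumption on my part:

from collections import defaultdict

# Hypothetical tagging: each test case carries the acceptance criterion
# it verifies. Counts mirror UC1 in the table above; cases the table does
# not account for are marked "No Run" here (my assumption).
results = (
    [("AC 1", "Passed")] * 7 + [("AC 1", "No Run")] * 1 +
    [("AC 2", "Failed")] * 3 + [("AC 2", "No Run")] * 2 +
    [("AC 3", "Passed")] * 1 + [("AC 3", "Failed")] * 2 + [("AC 3", "No Run")] * 1
)

by_ac = defaultdict(list)
for ac, status in results:
    by_ac[ac].append(status)

for ac in sorted(by_ac):
    statuses = by_ac[ac]
    passed = statuses.count("Passed")
    print(f"{ac}: {passed}/{len(statuses)} passed "
          f"({100 * passed / len(statuses):.0f}%)")
# AC 1: 7/8 passed (88%)
# AC 2: 0/5 passed (0%)
# AC 3: 1/4 passed (25%)

Note how AC 2 drops to 0% even though UC1 as a whole would have looked healthy in a run-based table.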
There are shortcomings in reporting like this, but when the code is a black box you have to take what you can get. Keeping this in mind, there are things that must be communicated as part of the report:
· Functional test coverage gives an indication of which features are done, in the sense that they satisfy their acceptance criteria.
· Many test cases per acceptance criterion do not automatically mean better coverage than a few.
· This approach requires the acceptance criteria to be very crisp; a missing acceptance criterion will skew the picture significantly.
For each of these bullets you have to state what risks you see, and what they mean for the project.

Nonetheless, if you decide to put coverage numbers into a report, make sure to tell the reader what they mean. In my opinion you need both code and functional coverage numbers for a complete coverage report, but you can live with one if you are in a tight spot.

Happy testing!

/Nicolai
