Wednesday 3 September 2014

Monitoring your defects


Problem: Unmonitored defects will turn into technical debt

You have found a bucketload of defects during testing. So far so good, but those defects need constant care to avoid ending up as technical debt.

Solution: Defect trend monitoring and tracking

You need to arm your organization with a few good metrics that give you an overview of the state of affairs and let you see trends in your defect database. There are plenty of metrics to choose from, especially if you are using a modern defect tracker, but in my experience less is more in this context.

Consider the following metrics (a small sketch of how to pull them out of a tracker export follows after the list):

·         Defect arrival rate: A graph that shows how many defects have been found over time.

·         Severity and priority spread: A pie chart that shows the severity and priority of the open defects.

·         Defect turnaround time & defect ageing: Time from discovery until closure, and time since the last update of the defect in your database.
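
If your tracker can export defects to a CSV file, the raw numbers behind all three metrics are easy to pull out yourself. Below is a minimal Python sketch, assuming a hypothetical export with opened, closed, last_updated, severity and priority columns and ISO-formatted dates; the column names are assumptions, so adjust them to whatever your own tool produces.

    import csv
    from collections import Counter
    from datetime import date, datetime

    def parse(value):
        # Dates are assumed to be ISO formatted (YYYY-MM-DD); an empty 'closed' field means still open.
        return datetime.strptime(value, "%Y-%m-%d").date() if value else None

    def load_defects(path):
        # Hypothetical CSV export from the defect tracker.
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def weekly_arrivals(defects):
        # Defect arrival rate: how many defects were opened per (year, week).
        return Counter(parse(d["opened"]).isocalendar()[:2] for d in defects)

    def severity_priority_spread(defects):
        # Severity and priority spread for the defects that are still open.
        return Counter((d["severity"], d["priority"]) for d in defects if not d["closed"])

    def turnaround_and_ageing(defects, today=None):
        # Turnaround: days from discovery to closure. Ageing: days since last update for open defects.
        today = today or date.today()
        turnaround = [(parse(d["closed"]) - parse(d["opened"])).days for d in defects if d["closed"]]
        ageing = [(today - parse(d["last_updated"])).days for d in defects if not d["closed"]]
        return turnaround, ageing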

Each of these metrics tells you something about the state of affairs in the item under test, and here is what you should look for:

Defect arrival rate:

I usually just look at three lines - open, closed and total defects - in order to get the following information:

Total defects:

·         Are we testing? – A flatline means no.

·         Is quality improving? – A slowdown in arrivals indicates that the chances of meeting the release goals are increasing. If you do not see a slowdown in arrivals in the weeks prior to shipping the software, you should expect a large volume of defects to be found in production after go-live.

Open/closed defects:

·         Closure rate - Is the gap between open and closed defects shrinking or growing? Extrapolating the trend will tell you whether to expect lots of known bugs to ship with the product, something that comes in handy when managing expectations with the customer receiving the product (see the sketch below).
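
As a rough illustration of that extrapolation, the sketch below fits a straight line to the weekly count of open defects and projects it to an assumed ship week. All the numbers are made up for illustration; treat it as a back-of-the-envelope trend, not a forecast model.

    def linear_fit(xs, ys):
        # Ordinary least-squares fit of y = slope * x + intercept.
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
                 / sum((x - mean_x) ** 2 for x in xs))
        return slope, mean_y - slope * mean_x

    weeks = [6, 7, 8, 9, 10]             # hypothetical data points
    open_defects = [45, 40, 34, 30, 25]  # open defects at the end of each week

    slope, intercept = linear_fit(weeks, open_defects)
    ship_week = 13                       # assumed release week
    projected = max(0, round(slope * ship_week + intercept))
    print(f"Projected open defects in week {ship_week}: about {projected}")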

Example:


·         It seems that the testers were not testing much in weeks 6-8, as the total defect count is stable at 100.

·         There is a slowdown in new defects from week 9 to week 10, but this might just be a coincidence, so do not draw conclusions about the trend until you see the numbers from week 11.

·         The defect closure rate seems constant, and open defects are dropping. This is healthy if the product ships soon and we assume that testing is nearing completion. A small sketch that flags these patterns automatically follows below.
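
If you want something to nag you about the same patterns automatically, a few lines can flag a flatline or a slowdown in the weekly totals. The totals and the thresholds below are made up; tune them to your own project.

    def flatline(totals, weeks=3):
        # True if the cumulative defect count has not moved for the last few weeks.
        return len(totals) >= weeks and len(set(totals[-weeks:])) == 1

    def slowdown(totals):
        # True if last week's arrivals were clearly below the week before (here: less than half).
        if len(totals) < 3:
            return False
        previous = totals[-2] - totals[-3]
        latest = totals[-1] - totals[-2]
        return latest < previous * 0.5

    totals = [60, 80, 100, 100, 100, 112, 117]    # hypothetical cumulative totals, weeks 4-10
    print("Flatline in weeks 6-8:", flatline(totals[:5]))   # were we testing at all?
    print("Slowdown in week 10:", slowdown(totals))          # maybe quality is improving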

Severity & Priority Spread:

Look for high volumes of high-priority or high-severity defects. If you find yourself in a situation with too many high ones, you need to stop and do a bug triage, or you risk losing the ability to use severity & priority to steer your efforts. This is detailed in one of my previous posts.
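
To make the "too many high ones" check concrete, here is a small sketch that counts open defects per severity and priority and shouts when the top bucket crosses a threshold. The labels, the data and the 40% threshold are all assumptions; use whatever scale and limit fit your tracker and your project.

    from collections import Counter

    # Hypothetical open defects as (severity, priority) labels.
    open_defects = [
        ("critical", "high"), ("major", "high"), ("major", "medium"),
        ("critical", "high"), ("critical", "medium"), ("major", "high"),
    ]

    severity_counts = Counter(severity for severity, _ in open_defects)
    priority_counts = Counter(priority for _, priority in open_defects)
    print("Severity spread:", dict(severity_counts))
    print("Priority spread:", dict(priority_counts))

    TRIAGE_THRESHOLD = 0.4   # arbitrary: trigger a triage if more than 40% of open defects are critical
    if severity_counts["critical"] / len(open_defects) > TRIAGE_THRESHOLD:
        print("Too many critical defects open - time for a bug triage.")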


Defect turnaround time & ageing:

These two metrics tell you something about your organization’s ability to process the defects it finds. Here is what you should look for:

·         Do we address our defects in a timely manner? Looking at the time since the last update for defects with high severity and/or priority is interesting when talking to customers and other stakeholders in a project. Furthermore, it will help you determine whether there is a problem getting those critical ones fixed in due time.

·         Knowing your defect fix rate, or capacity, makes estimating for future planning much easier (see the sketch after this list).
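
As a back-of-the-envelope illustration of both points, the sketch below chases critical defects that have not been touched for a while and turns recent closure counts into a rough fix rate for planning. The defect data, the dates and the seven-day cut-off are all made up.

    from datetime import date

    # Hypothetical open defects: (id, severity, last_updated).
    open_defects = [
        ("D-101", "critical", date(2014, 8, 5)),
        ("D-117", "major",    date(2014, 8, 28)),
        ("D-121", "critical", date(2014, 8, 30)),
    ]

    today = date(2014, 9, 3)
    STALE_AFTER_DAYS = 7   # arbitrary cut-off for "not addressed in a timely manner"

    for defect_id, severity, last_updated in open_defects:
        age = (today - last_updated).days
        if severity == "critical" and age > STALE_AFTER_DAYS:
            print(f"{defect_id} is critical and has not been touched for {age} days - chase it")

    # Fix rate for planning: average defects closed per week over the last month (made-up numbers).
    closed_per_week = [8, 10, 9, 11]
    fix_rate = sum(closed_per_week) / len(closed_per_week)
    print(f"Average fix rate: {fix_rate:.1f} defects per week")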

Happy defect monitoring!

/Nicolai

2 comments:

  1. Hi guys,

    Great article, good to see it on Facebook via the Testing Club. I agree that a little tracking can help quite a lot.[1]

    I disagree, though, that if there are no new defects, then the testers are slacking. It could be that they are running test cases and/or that things are working as expected. That would be good, right?

    The graph of open defects interests me - how significant are the humps?

    /Jesper
    1: http://www.ministryoftesting.com/2011/07/a-little-track-history-that-goes-a-long-way/
    2: http://www.ministryoftesting.com/2014/04/daily-defect-count-image-camel/

    1. Hi Jesper!

      Thanks for the comment!

      I think that the curvy trends seen for defects are often caused by a significant rise in focus on verification and validation of the product just before closure and just after go-live. My experience is that it tends to be the discovery of defects that causes the humps in the graph, not the closure of defects. Most projects/delivery teams I have worked with seem to have a fairly stable defect correction rate.

      Going back to my claim that flatline = no test, I find it unlikely that running tests would reveal no defects. Keep in mind that this is not necessarily a signal that “the testers are slacking”, but more likely that they are doing something other than test execution. From experience there is always something that needs to be raised; even in situations where “things are working as expected”, defects will just be fewer and of lower severity, but never absent.

      As to the significance of the humps, I find that all such trends should be kept in mind when estimating. If you repeatedly see this pattern in your metrics, it can go straight into your plans for staffing the teams, as you will know when business analysts and developers will be needed for bug-fixing.

      Have a nice weekend!

      /Nicolai
