Sunday, 6 October 2013

We do not have time to…

Problem: There is not enough time to do everything by the book

One of my peers told me that his clients said they did not have time to attend sprint demos for sprints that were not directly linked to a release. That made me think of all the projects I have seen where someone did not have the time to do something they should have done.

There can be many reasons for not doing various tasks, but the argument that there is not enough time is rarely a good sign.

Solution: Be aware of, and communicate the consequences of not doing something.

Let us look at the consequences of some of the “We don’t have time to…” statements:

“We do not have time to attend sprint demos” Feedback is needed to ensure that the solution meets the requirements. The longer the feedback loop in your project, the more rework will be needed and the greater the impact of misunderstandings.

“We do not have time for reviewing our documentation” Reviewing for grammar and spelling mistakes adds little value, but skipping reviews for code quality and testability can soon become expensive. Think cost escalation here: the later the discovery, the more expensive the fix will be. Some argue that static testing, such as a review, is actually one of the activities with the biggest RoI in your projects.

“We do not have time to test” Not testing allows defects to go undetected into production, leaving little chance of successfully implementing the application. Not testing is a huge risk: you risk not only application failure but also business failure, which equals loss of money and prestige.

“We do not have time to write unit tests” Some less mature projects I have seen shipped code to test as long as it compiled, skipping all developer-driven testing. Unit tests allow early defect detection and reduce turnaround in projects; skipping them means that cost will escalate. On top of this, such a strategy puts more pressure on the test team, who are often already under pressure as the release approaches.
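To make the trade-off concrete, here is a minimal sketch of the kind of developer-driven check being skipped. The function and test names are hypothetical, invented purely for illustration; the point is that a few minutes of unit testing catches a defect long before it lands on the test team:

```python
import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        # Rejecting bad input here is exactly the kind of early
        # defect detection the post argues for.
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(apply_discount(200, 25), 150)

    def test_zero_discount_keeps_price(self):
        self.assertEqual(apply_discount(99, 0), 99)

    def test_invalid_percent_is_rejected(self):
        # Without this test, a compiling-but-wrong build would
        # still be shipped straight to the test team.
        with self.assertRaises(ValueError):
            apply_discount(100, 150)

if __name__ == "__main__":
    unittest.main()
```

A suite like this runs in milliseconds on every build, so the "we do not have time" argument rarely holds up against the turnaround saved later.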

Common to all the above is that they are investments someone might not have time to undertake. Some investments are not a necessity, but postponing investments in quality is likely to drive cost up and customer satisfaction down.

I will end this post with a quote from something I read recently: “Postponing investments in software quality entails risks for the business. When investments are delayed too long, business continuity can be put at risk - sometimes sooner than expected.” – from “You can’t avoid investing in your software forever” by Rick Klompé

Happy testing & investing :)


1 comment:

  1. Very well put Nicolai!

    I remember one such project where most of the stakeholders used to skip the sprint demos and retrospective meetings. But our QA Director pulled up her socks and wrote an epic email stating all the consequences of not attending the sprint demos and retrospective meetings. She was able to relate, for each role, how their absence impacted the overall product quality, along with a relevant and recent example. That, I think, struck everyone, and we started seeing a full house ever since.

    Anyhow, apart from sprint demos, we as testers also face this situation of not having time to update our own test cases, especially when we have back-to-back releases coming up. This, I feel, poses an even bigger risk, because an invalid or irrelevant test case not only lets a defect go unnoticed but also wastes the tester's time (as they sometimes find out that the test case is invalid only after executing it). This time could have been better invested in executing a valid test case and probably catching a defect.

    This happened in one of the projects I worked on, and it caused a lot of embarrassment, as we saw defect slippage in back-to-back releases. We soon found out the root cause was the sheer number of invalid test cases in our test suite. We requested the client to give us a day, right after the sprint release, to do housekeeping tasks and update the test cases (at least the higher-priority ones). Thankfully the client agreed, and we were able not only to curb the defect slippage rate, but also to bring down the number of valid test cases to be executed in each release (since it was an agile model, the client requirements kept changing frequently, which meant the application changed and so did our test cases).