Monday 24 February 2014

Reviewing your QA process using old failures as a driver


I came across this interesting article on some of the most spectacular software failures last year:
http://www.softwaretestingnews.co.uk/the-10-most-spectacular-software-failures-in-2013/

The article highlights the extreme consequences software failures can have for a business, not to mention the loss of image, and it serves as a reminder of the importance of quality assurance and contingency planning.

After reading the article I did a little exercise that I would invite you to try. It is called the cost of failure, and it is basically just trying to remember the failures you experienced in your organization last year. The list (short or long) can give you two things:
  • The ability to reflect on and learn from previous mistakes.
  • Broader acceptance of quality assurance in the organization, once the cost of those failures becomes visible.
Note: Remember that root cause discussions related to failures can get personal unless they are kept in a positive and constructive tone.
 
Using past failures for a QA process review:
Put it on the agenda for a cross-functional meeting involving developers, maintenance and test roles. The purpose is to put the quality assurance process, and the way it is implemented, under review, triggering a discussion on how to do better. Take the conclusions to next year's to-do list, prioritize and implement them, and remember to include the non-test roles in the implementation.

Have a nice day & Happy testing!

/Nicolai

Friday 7 February 2014

Testing User Manuals

Problem: User manuals are a source of cost escalation after go-live.

Despite technical writers doing their best to comply with all guidelines, problems typically arise when users apply the manual in practice. When reality hits the users, they often get stuck and rely heavily on the manual for help. If the manual is useless, they will start calling support or super users, or start using the system wrong, all of which leads to increased cost and overhead, not to mention hostility towards the system.

Solution: Test your user manual as part of the user acceptance test, or maybe even earlier.

It is therefore useful to have real users test the manual before go-live. User manuals depend heavily on a solid project scope before they can be written accurately, which means they are often one of those deliveries that arrive late in a project. That does not mean, however, that they cannot be tested earlier.

The user manual is likely to have a paragraph for each new feature added to the system, which means that paragraph can be written during development as part of the sprint. Putting it on the scrum board ensures that the manual is not neglected and allows you to test it well before the large-scale user acceptance test of the entire system.

This is how it could be done: Feature X is delivered and is scheduled for the upcoming sprint demo. Testing of the manual is planned as an activity in relation to the customer demo, and the PO or a user representative is charged with the task of doing exactly what the manual says. While the user performs the actions, shortcomings are recorded and raised as defects against the manual. Once the defects have been corrected, the new paragraph can be included in the manual.
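The post does not prescribe a tool for recording the shortcomings; a simple spreadsheet is usually enough. If you prefer something scriptable, here is a minimal sketch of what such a findings log could look like (the field names, file name and severity labels are my own assumptions, not taken from the post):

# Hypothetical findings log for a user manual walkthrough (illustration only).
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class ManualFinding:
    feature: str         # e.g. "Feature X" from the sprint
    manual_section: str  # the paragraph the user was following
    step: str            # what the manual told the user to do
    observed: str        # where the user got stuck or what went wrong
    severity: str        # e.g. "blocker", "confusing wording", "typo"

def log_findings(findings, path="manual_findings.csv"):
    # Write the walkthrough findings to a CSV so they can be raised as defects against the manual.
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(ManualFinding)])
        writer.writeheader()
        writer.writerows(asdict(finding) for finding in findings)

if __name__ == "__main__":
    log_findings([
        ManualFinding(
            feature="Feature X",
            manual_section="4.2 Creating an order",
            step="Click 'Submit order'",
            observed="Button is actually labelled 'Confirm'; the user could not find it",
            severity="confusing wording",
        ),
    ])

Each row in the resulting file can then be raised as a defect against the manual and tracked like any other defect.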

When larger feature or business areas are ready, users can be invited to another session where the manual is used in a larger context. Doing this gives the users good insight into the system being delivered, teaches them the new ways of working, and allows them to reflect on how their business will work in the new system.

The cool thing about user manual testing is that all the test cases are already well described in the manual, so it is just a matter of finding some volunteer users and recording their findings. Some will argue that the users will not bother with the manual, but my claim is that well-written user guidelines are always worth the effort.

Have a nice weekend & Happy testing!
/Nicolai