Solution: Use simple quality metrics for each story
The traditional V-model approach lets you monitor the defect detection ratio and test efficiency for each individual test phase. This gives an indication of the quality of the delivery and pinpoints where to look when optimizing your test effort. That approach, however, is not viable in an agile setup where a release happens every other week.
That is where two simple metrics will help you in your retrospective when discussing how to improve the team's quality effort: story rejection rate and defects per story.
Story rejection rate is measured per story delivered in a sprint. It is a binary answer to the question: "Did the customer accept the story, as presented, without any objections?" Yes is green, no is red, leaving you with a very clear pie chart or graph to monitor from sprint to sprint. From there it is simple math to derive the story-point rejection rate, in case you break everything down to the point.
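The "simple math" above can be sketched as follows. The story data structure and field names here are illustrative assumptions, not from any particular tool:

```python
# Sketch: sprint-level rejection metrics, assuming each story is
# recorded with an 'accepted' flag and a 'points' estimate.

def rejection_rates(stories):
    """Return (story rejection rate, story-point rejection rate)."""
    total_stories = len(stories)
    rejected_stories = sum(1 for s in stories if not s["accepted"])

    total_points = sum(s["points"] for s in stories)
    rejected_points = sum(s["points"] for s in stories if not s["accepted"])

    return rejected_stories / total_stories, rejected_points / total_points

# Example sprint: one 8-point story rejected out of four stories.
sprint = [
    {"accepted": True,  "points": 3},
    {"accepted": False, "points": 8},
    {"accepted": True,  "points": 5},
    {"accepted": True,  "points": 2},
]

story_rate, point_rate = rejection_rates(sprint)
print(f"Story rejection rate: {story_rate:.0%}")        # 25%
print(f"Story-point rejection rate: {point_rate:.0%}")  # 44%
```

Note how the two rates can diverge: one rejected story out of four is 25%, but because it was the big 8-point story, almost half the sprint's points were rejected.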
Defects per story gives you some indication of how defects are spread across your stories. It requires that you actually register the defects the development team finds, rather than just fixing them on the fly. But since it is good practice to accompany code changes with documentation such as defect descriptions, this shouldn't be a problem. The reasoning behind measuring this is to follow up on two things: the general level of rework, and where the defects are found. A trend such as the majority of defects turning up in large (read: high story point) stories tells you that you might want to break those stories down to avoid confusion.
Go measure that agile delivery, and then use the figures in your retrospectives to give yourself an edge in improving the quality of each and every story point you deliver.