Thursday 29 January 2015

Following up on the project risk log

Problem: Measuring progress to facilitate risk management.
Many a risk log has ended its life without adding any real value, due to lack of implementation and follow-up. The reason is, in my experience, that the project fails to use metrics for follow-up and governance.
Solution: Baseline the risk register on a regular basis and compare metrics.
In order to baseline the risk log you need a document containing the risks and some metrics. There are many ways of working with risks, but the most common is probability and consequence. This allows the project to plot the risks in a risk matrix, detailing the priority that the risks should have when working with mitigation. The matrix might look like this:
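To make the idea concrete, here is a minimal sketch of how such a matrix could be built in code. The risk ids, the 1–3 scoring scale and the 3×3 grid are assumptions for illustration; your own log will have its own scale.

```python
from collections import defaultdict

# Hypothetical risk records: id, probability (1=low..3=high), consequence (1..3).
risks = [
    ("R1", 3, 3), ("R2", 3, 1), ("R3", 2, 2),
    ("R4", 1, 3), ("R5", 1, 1),
]

def risk_matrix(risks, size=3):
    """Group risk ids into a probability x consequence grid."""
    grid = defaultdict(list)
    for rid, prob, cons in risks:
        grid[(prob, cons)].append(rid)
    # Print with high probability at the top ("North") and
    # high consequence to the right ("East").
    for prob in range(size, 0, -1):
        row = ["{:8}".format(",".join(grid[(prob, cons)])) for cons in range(1, size + 1)]
        print(f"P{prob} | " + " | ".join(row))
    print("      " + "      ".join(f"C{c}" for c in range(1, size + 1)))
    return grid

grid = risk_matrix(risks)
```

The risks in the top-right corner (high probability, high consequence) are the ones to mitigate first.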
 
From here on the fun starts! Based on the risk mitigation activities that you implement you should see changes in the risk matrix. E.g. if your strategy is to address high probability risks first, you should see a trend of risks moving downwards in the matrix, and here is how it works:
On a monthly basis, you compare the risk matrix to last month, and you look for the following:
  • No change = Your risk management process is not kicking in, as the log is not changing!
  • Fewer risks = Your risk management is either mitigating those risks, or they are turning into issues.
  • More risks = The project is getting wiser and risks are being identified – food for thought and planning/mitigation.
  • Risks are migrating South & West = You are mitigating the probability and impact of the risks.
  • Risks are migrating North & East = Your efforts are not mitigating the probability and impact of the risks.
By doing this you will raise awareness on the risk management exercise, and force stakeholders to engage in a dialogue on risk within the project.
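The monthly comparison can even be automated. Below is a small sketch that classifies each risk's movement between two baselines along the lines above; the snapshot format (risk id mapped to probability and consequence scores) and the sample data are assumptions.

```python
# Hypothetical monthly snapshots: risk id -> (probability, consequence), 1=low..3=high.
last_month = {"R1": (3, 3), "R2": (2, 2), "R3": (1, 2)}
this_month = {"R1": (2, 2), "R2": (2, 3), "R4": (1, 1)}

def compare_snapshots(old, new):
    """Classify the movement of each risk between two baselines."""
    report = {"closed": [], "new": [], "improved": [], "worsened": [], "unchanged": []}
    for rid in old.keys() - new.keys():
        report["closed"].append(rid)        # mitigated, or turned into an issue
    for rid in new.keys() - old.keys():
        report["new"].append(rid)           # the project is getting wiser
    for rid in old.keys() & new.keys():
        (op, oc), (np_, nc) = old[rid], new[rid]
        if (np_, nc) == (op, oc):
            report["unchanged"].append(rid) # process may not be kicking in
        elif np_ <= op and nc <= oc:
            report["improved"].append(rid)  # migrating South & West
        else:
            report["worsened"].append(rid)  # migrating North & East
    return report

report = compare_snapshots(last_month, this_month)
print(report)
```

A mixed move (probability down but consequence up) is counted as worsened here; a real tool would want a finer classification.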
Another and even simpler way of baselining is to count the number of risks per status (assuming that you track a status). You might have a list like this:
Potential: 14
Actual: 25
Averted: 12
Irrelevant: 3
Total: 54
Comparing the numbers on a regular basis will tell you if your risk management process is moving the risks in the right direction. You can even do a little trend-graph to keep things manager-friendly.
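This kind of count is trivial to produce from the log itself. The sketch below assumes the log is a list of records with a status field, and that last month's baseline was saved somewhere; the statuses and numbers are made up.

```python
from collections import Counter

# Hypothetical risk log entries with a status field.
risk_log = [
    {"id": "R1", "status": "Potential"},
    {"id": "R2", "status": "Actual"},
    {"id": "R3", "status": "Averted"},
    {"id": "R4", "status": "Potential"},
]

def baseline_counts(log):
    """Count risks per status, plus a total, for the monthly baseline."""
    counts = Counter(r["status"] for r in log)
    counts["Total"] = len(log)
    return counts

this_month = baseline_counts(risk_log)
# Compare against last month's saved baseline to spot the trend.
last_month = Counter({"Potential": 3, "Actual": 1, "Averted": 0, "Total": 4})
delta = {k: this_month[k] - last_month[k] for k in this_month.keys() | last_month.keys()}
print(delta)
```

A growing "Averted" count and a shrinking "Potential" count is the trend you want to see.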
Finally, the use of tools will make it much easier for you in the long run – I’m not going to promote any tools, as I have no preference. One word of advice is to use Dr. Google and look for a risk management template for Excel; that should get you quite far for free. Look for a template that draws the risk matrix for you.
Happy testing!
/Nicolai
 

Friday 23 January 2015

Static test planning


Problem: Planning the verification part of testing does not always happen in a structured way.

Reviewing, aka static testing, is the verification part of V&V, used for checking a delivery. Documentation is very important to testers, and test artifacts and the test basis need processes to support them throughout their lifetime. A part of this is reviews and the document approval they lead to – something that requires careful planning, as timing and resources must be aligned for it to be done in due time.

Solution: Static test planning

Base your static test on a plan, much like you would in the case of dynamic testing. The plan should be built on the same principles as your dynamic test plan. The following items should (as a minimum) be detailed in your static test plan:

  • Deliverables under review – Look at the list of deliveries and nominate those that need review.
  • Approach – Describe the methods/types of review applied for each document under review.
  • Document approval – What are the prerequisites for approval of a document?
  • Procedures and templates – What is the procedure for reviewing, and what tools are needed?
  • Responsibilities – Who is the document owner, review lead and approver?
  • Staffing and training needs – Who will do the work? Do they need training in doing reviews?
  • Schedule – When should the work be done? Remember your dependencies to the project plan.
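The checklist above can be captured per deliverable in something as simple as a spreadsheet row. As a sketch, here it is as a small record type; the field names mirror the checklist, and the example values are invented.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewPlanItem:
    """One deliverable under review - fields mirror the checklist above."""
    deliverable: str
    approach: str             # e.g. walkthrough, technical review, inspection
    approval_criteria: str
    owner: str
    review_lead: str
    approver: str
    reviewers: list = field(default_factory=list)
    scheduled_week: int = 0

item = ReviewPlanItem(
    deliverable="System test plan",
    approach="technical review",
    approval_criteria="all major findings resolved",
    owner="Test manager",
    review_lead="Senior tester",
    approver="Project manager",
    reviewers=["Dev lead", "Business analyst"],
    scheduled_week=12,
)

def missing_staffing(item):
    """Flag items where no reviewers have been planned yet."""
    return not item.reviewers

print(item.deliverable, "staffed:", not missing_staffing(item))
```

Even a simple structure like this makes it easy to spot unstaffed or unscheduled reviews before they slip.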

What is the purpose of static testing? Much like in dynamic testing, we will be looking for deviations from standards, missing requirements, design defects, non-maintainable code and inconsistent specifications. The review process also resembles the one for dynamic testing:

Define expectations for the review, then perform the review and record the results, and finally implement the required changes. When the review is done and the findings have been addressed, look at signing off the document.

Happy testing!

/Nicolai

Friday 9 January 2015

The quality of testing


Just recently I attended an event where the quality of testing was the topic of the day. There were some interesting points about how this could be measured and monitored that I would like to share.

The discussion had the following topics all related to quality of the testing:

  • Completeness of the test done
  • Credibility of the test documentation
  • Reliability
  • Quality Parameters to be observed

The discussion revolved around the parameters that drive the quality of a product, and especially the quality of the work done by testers. It quickly became evident that this work and its quality are hard to quantify, and that quantifying them requires looking into the details of the work: completeness, credibility and reliability.

You can see the parameters we discussed below.

Completeness of the test done:

  • Coverage vs. objects under test
    • Actual coverage vs. planned
    • Execution vs. high-risk areas first, and coverage of same.
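Actual vs. planned coverage is easy to report on if you track planned and executed test cases per object under test. A minimal sketch, with made-up objects and numbers, sorting high-risk areas first:

```python
# Hypothetical coverage data: test objects with planned vs. executed test
# cases and a risk level; names and numbers are made up for illustration.
coverage = [
    {"object": "Payments",  "risk": "high",   "planned": 40, "executed": 38},
    {"object": "Reporting", "risk": "medium", "planned": 25, "executed": 10},
    {"object": "Admin UI",  "risk": "low",    "planned": 15, "executed": 0},
]

def coverage_report(rows):
    """Actual vs. planned coverage per object, high-risk areas first."""
    order = {"high": 0, "medium": 1, "low": 2}
    report = []
    for row in sorted(rows, key=lambda r: order[r["risk"]]):
        pct = 100 * row["executed"] / row["planned"]
        report.append((row["object"], row["risk"], round(pct, 1)))
    return report

report = coverage_report(coverage)
for name, risk, pct in report:
    print(f"{name:10} {risk:6} {pct:5.1f}% executed")
```

If the high-risk rows at the top are not the ones closest to 100%, execution order is not following the risk priority.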

Credibility of the test effort

  • Test cases are created and executed based on objective foundation
    • Structure in test cases.
    • Independence in review and test.
    • Dedicated resources for testing.
    • Separate estimate / time for test activities.

Reliability in test

  • Predictability in test result (based on metrics and previous experience).
  • Status on test can easily and accurately be derived at any time.
  • Defects and other observations are handled according to defined process.

Quality Parameters to be observed

  • Defined metrics, that are monitored (less is more)
  • Approval criteria and compliance in the deliveries (e.g. open defects vs. categories)
  • Test efficiency per test phase (& number of defects found after go-live)
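Test efficiency per phase is often expressed as a detection percentage: the defects found in a phase divided by those found in that phase plus everything that slipped through to later phases, including after go-live. A sketch with invented phase names and counts:

```python
# Hypothetical defect counts per phase, in execution order; the last
# entry is defects found after go-live.
defects_per_phase = [
    ("component test", 120),
    ("system test", 60),
    ("acceptance test", 15),
    ("after go-live", 5),
]

def detection_percentage(phases):
    """Per phase: found here / (found here + found in any later phase)."""
    result = {}
    remaining = sum(count for _, count in phases)
    for name, count in phases[:-1]:   # no percentage for the post-go-live bucket
        result[name] = round(100 * count / remaining, 1)
        remaining -= count
    return result

ddp = detection_percentage(defects_per_phase)
print(ddp)
```

A low percentage in an early phase means defects that could have been caught cheaply are leaking into later, more expensive phases.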

…And there are probably a lot more if you really want to scrutinize that test department of yours – just remember to have a little faith in the work done before going all in.

Happy testing & Have a nice weekend!

/Nicolai