Wednesday, 24 December 2014

Happy Holidays!

Hi all,

Another year passed, and I would like to wish you all the best & happy holidays.


Thanks for all the positive feedback received this year!


Monday, 22 December 2014

Testing Business Intelligence (BI) and datawarehouse solutions

I recently had the pleasure of entertaining a small group of BI professionals on the ever-interesting topic of “test”. My brief was very short – something like “can you participate for an hour and talk about test?”. Of course I could.

So we went through the usual discussions: testing challenges in general, the more specific ones about the nature of BI and datawarehouse solutions, the complexity of data, the heterogeneous system-of-systems setups we usually encounter in this area of business, and last but not least the more universal ones about always being last in the food chain and how this affects testing.


Before speaking I got a bit of inspiration from this excellent link.

A lot of problems were discussed and finally somebody raised his voice and said: “But I hoped you were able to provide us with the silver bullet of testing BI solutions.” After 20 seconds of silence I had to admit: “There is no silver bullet.”

Of course there is no silver bullet. There are several bullets, including those big ones for cannons, but the real one is looking at the organisation of BI and datawarehouse teams and understanding their background and the way they developed from “merge this data into one Excel sheet” into “align this data from these 80 sources and give me a transparent and flexible datamart”. That is the essence of the testing challenges. Most other branches of IT have understood and acknowledged these challenges and have adapted with proper processes and tools.

Within BI there seems to have been an understanding that “we can test our way out of the problems” – and then it has been one failure after the other. The combination of functional testers, test managers and “BI teams” trying to do end-to-end testing is not a happy one. Add to that missing or incomplete test environments, and lots of configuration and reconfiguration happening all the time, and you have a pre-programmed failure.

If I were to spend my money on testing within this field from scratch, I would bet it on testing the ETL part. This is where you have a relative chance of success, based on the fact that:

  • It is a relatively simple process (or a set of different processes with a similar goal).
  • It is possible to do checks for every step.
  • The input and output can be predicted, and to some extent it does not matter whether you have complete data sets.
  • It is possible to repeat the ETL process (fully or partially) for every error that is found, to see that re-test and regression testing results are as expected.
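The per-step checks above can be sketched in a few lines. This is a minimal illustration, not a real framework: the row lists and the checksum approach are invented, and in practice the rows would come from queries against the source and target systems.

```python
# A minimal sketch of per-step ETL checks: compare row counts and a simple
# order-independent checksum between a source and target data set.
import hashlib

def checksum(rows):
    """Order-independent checksum over a list of row tuples."""
    digest = 0
    for row in rows:
        digest ^= int(hashlib.md5(repr(row).encode()).hexdigest(), 16)
    return digest

def check_etl_step(source_rows, target_rows):
    """Return a list of findings; an empty list means the step passed."""
    findings = []
    if len(source_rows) != len(target_rows):
        findings.append(f"row count mismatch: {len(source_rows)} vs {len(target_rows)}")
    elif checksum(source_rows) != checksum(target_rows):
        findings.append("checksum mismatch: same counts, different content")
    return findings
```

Because each step has its own check, a failed step can be re-run and re-checked in isolation, which is exactly what makes re-test and regression testing feasible here.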

Doing end-to-end testing is the ultimate goal; ETL is the pragmatic start with a chance of success. It’s similar to all other complex integrated test tasks, with some slightly different challenges related to BI.

Friday, 19 December 2014

The retrospective starfish

Problem: Structuring the retrospective in a way that facilitates lessons learned.

Preparing for a retrospective is needed in order to get valuable feedback and ensure that the participants are prepared for the session.

Solution: Try the retrospective starfish

There are many ways of doing retrospectives, some simpler than others, and in my mind simplicity is the key to success. If you expect people to spend time preparing, then you should make sure that the process is understandable and that the product your peers need to produce is well defined.

I came across a method that was new to me a couple of months back, called the retrospective starfish. It is all about listing items under 5 simple headlines:
· Do more
· Do less
· Start doing
· Stop doing
· Continue doing

All input is consolidated under the respective headline, and the team then evaluates where to look for improvements for the next sprint, test phase etc. Try it out – it gives quick results and a nice overview of where your project is headed. Furthermore, it allows you to see trends from retrospective to retrospective, by comparing the starfish from sprint to sprint.
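The consolidation step can be sketched as a simple grouping under the five headlines. The item texts below are invented examples; in practice the input would come from the team's notes.

```python
# A small sketch of consolidating starfish input: each participant tags an
# item with one of the five headlines, and the items are grouped for review.
from collections import defaultdict

HEADLINES = ["Do more", "Do less", "Start doing", "Stop doing", "Continue doing"]

def consolidate(items):
    """Group (headline, text) pairs under the five starfish headlines."""
    board = defaultdict(list)
    for headline, text in items:
        if headline not in HEADLINES:
            raise ValueError(f"unknown headline: {headline}")
        board[headline].append(text)
    return dict(board)
```

Keeping the grouped result per sprint is what makes the sprint-to-sprint trend comparison possible.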


You’ll find a nice description of the method here:

Have a nice weekend!



Tuesday, 16 December 2014

Risk-driven test

Problem: Calculating or defining the risks and priorities driving the test
Risk-driven testing is, for obvious reasons, in need of risk definitions or priorities. Obtaining these might be difficult in cases where the framework or organization does not support a risk-driven test setup.
Solution: Pursue simple estimates with the right stakeholders
I usually base my test priorities on the combination of technical complexity and business criticality, following the formula below:
Technical complexity * Business Criticality = Test Risk or Priority
I usually apply a scale of 1 to 5, with 5 as the most complex / critical and 1 as the least. This means that all test items will be rated from a technical and a business perspective, following this model:

Test item                      | Tech complexity | Business Crit | Test Priority
Use Case 1: Report Print       |                 |               |
Use Case 2: Advanced filtering |                 |               |
User statistics                |                 |               |
Data maintenance               |                 |               |
Technical complexity is based on items like test data complexity and availability, requirements and code complexity, environment and technology, number of integrations, and developer skill and knowledge of business and technology. You get an indication of this for free in the story estimation done as part of the estimation of the stories. Seek advice from the techies in the project in case you require some input on this.
Business criticality ranges from Need-To-Have over Important features to Nice-To-Have, using a scale of 1 to 5. Seek this from the business representative, product owner or other person representing the customer.
The alternative is to apply a shortcut:
Label all test items using the following scale, with business importance as the driver:
Need-To-Have, Important features & Nice-to-Have
Be aware however: everything is Need-To-Have in the initial discussion with the customer, and getting to a point where you have an even spread across the scale is hard. Furthermore, ignoring the tech complexity is not always advisable.
Happy testing!

Thursday, 11 December 2014

Simple quality metrics for agile teams

Problem: Measuring the quality of delivered story points
One thing is to get the agile team working and to measure trends in the ability to deliver finished stories or story points; another is to monitor the quality of the delivered story points.

Solution: Use simple quality metrics for each story
The traditional V-model approach allows you to monitor defect detection ratio and test efficiency for the individual test phases. This offers an indication of the quality of the delivery and pinpoints where to look for optimizing your test effort. This approach is however not viable in an agile setup where a release happens every other week.

That is where two simple metrics will help you in your retrospective on the topic of improving the quality effort in the team: story rejection rate and defects per story.

Story rejection rate is measured per story delivered in a sprint. It is a binary answer to the question: “Did the customer accept the story as presented, without any objections?” Yes is green, no is red, leaving you with a very clear pie or graph to be monitored from sprint to sprint. From here it is simple math to derive the story point rejection rate, in case you break everything down to the point.

Defects / story allows you to get some indication of the defect spread vs. your stories. It requires that you actually register the defects that the development team finds, rather than just fixing them on the fly – but since it is good practice to accompany code changes with documentation like defect descriptions, this shouldn’t be a problem. The reasoning behind measuring this is to follow up on two things: the general level of rework, and where the defects are found. Trends like the majority of defects being found in large (read: high story point) stories tell you that you might want to break the story down to avoid confusion.
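Both metrics fall out of the same per-story records. A minimal sketch, with invented sprint data; the records would normally come from your ALM tool.

```python
# A small sketch of the two metrics: story rejection rate and defects per
# story, computed from one sprint's records.
def sprint_metrics(stories):
    """stories: list of dicts with 'accepted' (bool) and 'defects' (int)."""
    total = len(stories)
    rejected = sum(1 for s in stories if not s["accepted"])
    defects = sum(s["defects"] for s in stories)
    return {
        "rejection_rate": rejected / total,
        "defects_per_story": defects / total,
    }
```

Plotting these two numbers sprint over sprint gives you the trend line to bring into the retrospective.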

Go measure that agile delivery, and then use the figures in your retrospectives to give yourself an edge in improving the quality of each and every story point in your delivery.
Happy testing!


Wednesday, 26 November 2014

Efficient reporting

Problem: Inefficient reports are not read, hence wasting effort in the development organization.
It has become evident to me that reporting, and especially efficient reporting, is becoming paramount in software development projects. Complexity is higher than ever and deadlines are tight, leaving little room in the organization for reading, understanding and reflecting on huge reports.
Solution: Write reports that deliver information refined for the receiver.
An effective report presents and analyses facts and evidence relevant to the specific problem or issue, and does so briefly and precisely. All reports need to be clear, concise and well structured. The key to writing an effective report is to allocate time for planning and preparation.
I suggest that following steps are applied:
Understand the purpose of the report and the recipient group - Consider who the report is for and why it is being written.
Gather information for the report – Your information may come from a variety of sources; make sure that you know them. In addition, consider how much information you will need, depending on how much detail is required in the report. Keep referring to your report purpose and recipient list to help decide the level of information.
Organize the materials – Arrange your content so it makes logical sense; for a test summary report it could be test preparation, execution and finally test results.
Write the report – Start by drafting the report; take time to consider and make notes on the points you will make using the facts and evidence you have gathered. What conclusions can be drawn from the material?
Review your work – It goes without saying that a review is needed; like any other work product it pays off to read it twice and weed out the spelling mistakes and contradictions that entered the report while writing it.
Present it to the world – A report not reaching an audience loses its meaning. Make sure that you communicate the report to the stakeholders, and make sure that you make it easily available.
Other tips:
·         Be careful with reporting templates; their generic nature often includes all kinds of information that might not serve you. Make sure that you critically review the template and lose anything that does not support your report.
·         Your management summary must be short and to the point – often this is the only thing the recipient reads. The more manager-friendly your report is, the bigger your chance of creating awareness in the organization. If distributing the report by mail, consider including the management summary in the mail body text.
·         Be very clear about conclusions and recommendations – I often include these in the management summary as bullet lists, to ensure that they are communicated unambiguously to the reader.
·         Avoid Cover My Ass (CMA) clauses all over the report – CMA clauses blur your message and hurt your credibility with the reader.
·         Remember that test is all about dealing with information about the delivery, meaning that the reports and the communication of them are paramount for documenting the results of your hard work.
Happy Reporting!

Monday, 24 November 2014

Happy reading

If you fancy reading a bit about the current state of affairs within testing, take a look at this excellent report. Yes, it is advertising, but it might underline some of the issues that you are dealing with in your own organisation.

"Testing" is after all "testing" and we can always use some extra voices to state our case. Whether it's more money, a broader scope for testing (and more money), the adoption of new techniques and tools (and more money), or something completely different, a little inspiration is always useful.

So click the link and fill in the form, and you have access to 30 minutes of written entertainment.

There are no major testing or QA revolutions mentioned in the report and even though it might not apply to your organisation it is still interesting to compare your own expectations to the forecasts in the report. After all there might be a thing or two your own organisation has overlooked.

Happy reading

Thursday, 20 November 2014

Strange error messages

I got the following error the other day, while working in a web-based tool.
It seems that the developers of this application did not foresee my actions and my 'inappropriate' use of the tool, and hence did not do anything to help me understand what went wrong when presenting the error message. From my perspective the message might as well have been: Website exploded! Have a nice day!
This experience made me think of an excellent blog post I read a while back on writing meaningful error messages for the user. It was written by Ben Rowe, who advocates the use of the 4 H’s when communicating with the user: Human, Helpful, Humorous & Humble – Check it out, it is very informative:
Next time you encounter a message like the one I saw, include a link to the 4H's blog post in the defect report you file, to help get understandable error messages to the users.
Happy Testing!

Monday, 17 November 2014

Combination testing strategies

Problem: Complete testing is impossible.
I had an interesting discussion about the test of a critical feature. The discussion was on test coverage and completeness of the planned test. The argument was that this feature was of a criticality that required “complete testing”.
Solution: Acknowledge that complete testing is not possible and apply combination testing strategies.
There are enormous numbers of possible tests. To test everything, you would have to:
·         Test every possible input to every variable.
·         Test every possible combination of inputs to every combination of variables.
·         Test every possible sequence through the program.
·         Test every hardware / software configuration, including configurations of servers not under your control.
·         Test every way in which the user might try to use the program.
No test effort satisfies all of the above. How do we balance them? It is a matter of balancing the tradeoffs vs. risk, so that you get the highest coverage for the effort. I came across a nice slide set that I need to share, as it details different combination testing strategies:
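To see why "every possible combination" is off the table, it is enough to multiply a few parameter counts together. The parameters and value counts below are invented examples.

```python
# A quick illustration of the explosion behind "test every combination":
# even a handful of small parameters multiplies into thousands of cases.
from math import prod

parameters = {
    "browser": 4,
    "os": 3,
    "language": 10,
    "user_role": 5,
    "screen_size": 6,
}
# Full coverage means one test case per combination of all values
full_coverage = prod(parameters.values())
print(full_coverage)  # 3600 cases for just five modest parameters
```

Combination strategies like pairwise testing attack exactly this number: instead of every full combination, they cover every pair of values with a far smaller set of cases.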
Happy testing!

Thursday, 13 November 2014

What is your mission?

Problem: Expectations of the test team are not aligned with the test team’s mission.
Do you know what your team’s mission is? Has it been aligned with the expectations of the project and the customer? It is not unlikely that your answer is no and no, suggesting that there is a misalignment between your team’s goals and the expectations of the project, or maybe even of the customer.
Solution: Make and communicate your team’s mission statement.
First step is to write a mission statement for the test team in the project. Do not confuse this with the mission statement that might exist for the testing services or QA department in general, this one needs to be for the project test team. There are numerous good guides to writing a mission statement – Google will help you with that.
Here are some bullets for inspiration when looking for your mission statement:
·         Find defects
·         Block premature product releases
·         Help customer make go/no-go decisions
·         Minimize technical support costs
·         Assess conformance to specification
·         Conform to regulations
·         Minimize safety-related risk
·         Find safe scenarios for use of the product
·         Assess and/or assure quality
·         Verify correctness of the product
Then it is time to start communicating it. Start with the stakeholders closest to you, project management etc., and ensure that you at some point show it to the customer and have a nice discussion on expectations of the outcome of the work being done by the test team. The mission statement is the means for a constructive dialogue on expectations, and once you have come to an agreement with your peers it will give your team a clear purpose.
Print the mission statement and stick it on the wall – Make sure that it catches the eye of those passing by, as it will lure them over for a chat about expectations and purpose. This will ensure that you are reminded about your purpose, and get a chance to reflect on the mission statement every now and then – Like plans, your mission statement might need revision in case expectations, stakeholders or deliveries change.
Happy testing!

Monday, 10 November 2014

The curse of the notification e-mail...

I still remember one of the selling points from the Mercury sales pitch for TestDirector back in the days: ”The system is so clever! It will send you a notification Email with all the information you need every time that you get something assigned to you!” Since then I have received thousands of notification Emails from all kinds of ALM systems – Not all of them were especially clever or informative.

The notification Email is a very popular feature, widely implemented in various ALM and defect/issue management solutions. For those not very involved in the project they are a joy, as you do not need to visit the ALM tool unless you get a mail. For those in key positions in the workflow they are a pain, as you are already in the tool daily and can find this information via your dashboard or queries in the tool. For those not understanding what they are used for, the mails become the curse that threatens the use of the ALM solution.

It is the administrator of the tool who carries the key to unleashing the curse, as (s)he can by default set the notification rules in a way that does not support the users. If (s)he at the same time adopts a strategy where notification rules are global and not fitted to the user roles, then BAD things will happen.

The Emails are often not understood, contain lots of irrelevant data, or are out of context, making them the equivalent of SPAM. This wastes the recipient’s time and makes him angry, fuelling the curse even further. Angry users are not very constructive, and those familiar with change management theory know that this can provoke powerful negative reactions from stakeholders.

This is why it is important to consider the usage of notification emails carefully before firing the mail-cannon at your organization. My advice would be to:

·         Identify and understand your stakeholders – Who will receive the mails?

·         Define who needs to be involved, who needs to be informed, and what the volumes of notifications are.

·         Make sure all stakeholders are appropriately informed about the ALM notifications, and ensure that you offer them a way to modify the mail subscriptions to fit their needs.

·         Consider doing daily, weekly or monthly notification roundup mails (if the feature is available in the tool).

·         Ensure that the template used for the mails contains only the information that is actually needed, not everything.
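Fitting the notification rules to user roles, as advised above, can be sketched as a simple lookup. The role names, event names and digest options below are hypothetical, not features of any particular ALM tool.

```python
# A sketch of role-based notification rules, as opposed to one global rule
# set: each role subscribes to its own events and delivery frequency.
NOTIFICATION_RULES = {
    "tester":    {"events": ["defect_rejected", "defect_fixed"], "digest": "immediate"},
    "developer": {"events": ["defect_assigned"],                 "digest": "immediate"},
    "manager":   {"events": ["defect_created", "defect_closed"], "digest": "weekly"},
}

def should_notify(role, event):
    """Decide whether a given role gets a mail for a given event."""
    rule = NOTIFICATION_RULES.get(role)
    return bool(rule) and event in rule["events"]
```

The point of the sketch is the shape, not the code: a manager on a weekly digest never drowns in per-defect mails, while a developer still gets assignments immediately.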

Alternatively you are likely to experience some of the following:

·         Users apply mail auto-filtering, sending the notification mails directly to the archive (or deleted items).

·         Users complain that the ALM system is a pain, and stop using the tool.

·         Users get stressed by all the mails they receive.

·         Users start to forward notification emails, disabling the ALM workflow in the tool.

·         Users lose track of other items in their crowded inbox.

All of the above is waste, injected and maintained directly in your development cost – not as clever as the sales pitch originally stated…

Happy testing!


Monday, 3 November 2014

Tips for registering defects

We have just been through a couple of rounds of testing and gathered a lot of observations. Some of these are not defects; they are the result of tester wish-listing, misunderstandings, wrong test cases etc. This is however as expected, and as always calls for a bug triage and refinement BEFORE any fixing and distribution of the defects happens – in other words, no defect management before you are actually sure that it is a defect.
Luckily for us, the testers have been very careful when raising an issue – Something that we benefit from now, saving vast amounts of time in the analysis. Despite the testers being new to testing, they have reported the observations in a way that facilitates future analysis. Simply by following our tips for issue reporting!
There is little new in the tips for issue reporting below for all you professional testers, but for business reps who are invited to join a project as testers, everything is new, and that is when you will benefit from guides that explain key areas of your test process. We made such a guideline, consisting of a manual for using the test tool – stripped of anything but what the testers needed – and a list of advice to help with filing the issues while testing.
Make sure that you nurse your testers, so they understand their role and the expectations towards their deliveries – They become a powerful asset if you do.
 Happy testing!
Tips for registering issues:
Reproduce the issue before filing an issue report: Your issue should be reproducible. Make sure your steps are robust enough to reproduce the issue without any ambiguity by someone who is not you. If the issue is not reproducible every time, you can still file an issue mentioning the periodic nature of the bug.
Report the problem immediately: If you find any issue while testing, do not postpone the detailed report until later; write the issue report immediately. This will ensure a good and detailed bug report. If you decide to write the bug report later on, chances are that you will miss important steps in your report.
Spend 2 extra minutes on the description: When writing a description please adhere to the following structure, to ensure that sufficient information is captured:
·         Steps to reproduce: Clearly mention the steps to reproduce the issue, to facilitate reproduction by developers and others who will work on fixing the problem.
·         Expected result: How the application is expected to behave on the above-mentioned steps. Add a reference (if known) to the requirement that appears to be violated.
·         Actual result: What the actual result is when running the above steps. Include a screenshot of the scenario, or a clear description of the test scenario, including the test data used.
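The three-part structure above is easy to bake into a template so it cannot be skipped. A minimal sketch with invented field names; a real ALM tool would have its own fields for this.

```python
# A minimal sketch of an issue-report template enforcing the structure:
# steps to reproduce, expected result, actual result.
ISSUE_TEMPLATE = """\
Title: {title}

Steps to reproduce:
{steps}

Expected result:
{expected}

Actual result:
{actual}
"""

def format_issue(title, steps, expected, actual):
    """Render a list of steps and the two results into the template."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return ISSUE_TEMPLATE.format(title=title, steps=numbered,
                                 expected=expected, actual=actual)
```

Handing business-rep testers a fill-in skeleton like this is a cheap way to get the consistent reports described above.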
Read the issue report before hitting the SAVE button
Read all sentences, wording and steps used in the bug report. See if any sentence creates ambiguity that can lead to misinterpretation. Misleading words or sentences should be avoided in order to have a clear bug report and to ease communication with the receiver.