Friday 31 October 2014

Assumptions – The mother of all evil?

Yesterday we were discussing some topics related to a delivery, and I became aware that our counterpart kept saying things like: “We assume…”, “It is safe to assume…”, “You might be in a position to assume…” – That made me lose faith in the plan they had prepared.

In my book assumptions are a constant source of risk, because they represent something unknown hidden under a blanket of guessing. They are, however, easy to make; they simplify things and offer a shortcut in situations where unknown variables are encountered. In other words, a necessary evil for most of the things we do, and when made consciously they might even serve you well. If assumptions are injected unconsciously, bad things might happen because of the risk you indirectly accept.

This corresponds (more or less) with the dictionary definition of assumption: “Accepted cause and effect relationships, or estimates of the existence of a fact from the known existence of other fact(s). Although useful in providing basis for action and in creating "what if" scenarios to simulate different realities or possible situations, assumptions are dangerous when accepted as reality without thorough examination”

Back to the plan under discussion the other day… With all the assumptions aired while walking through the plan, I had to ask the author about the foundation on which it was built. My claim was that it could hardly be estimates, as many of the activities were based on assumptions; it would have to be guesstimates, or in other words inaccurate estimates – This argument caused quite a stir.

We ended up agreeing that the plan was fine, but all assumptions related to activities on the critical path of the project had to be documented. These assumptions had to be proved right or dismissed, with adequate re-planning as a consequence. Until that point in time they were considered risks. It was hard work identifying the assumptions, but it gave a good picture of risk vs. the critical path.
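As a sketch of how such documentation could be kept, here is a minimal Python example (the activities and assumptions are invented for illustration): every assumption tied to a critical-path activity counts as an open risk until it has been confirmed or dismissed.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    """One assumption made while planning an activity (hypothetical structure)."""
    description: str
    activity: str            # the plan activity the assumption supports
    on_critical_path: bool   # does that activity sit on the critical path?
    status: str = "open"     # "open", "confirmed" or "dismissed"

def open_risks(assumptions):
    """Unconfirmed assumptions behind critical-path activities are unmanaged risk."""
    return [a for a in assumptions if a.on_critical_path and a.status == "open"]

# Illustrative entries only - the plan content is made up for the example.
plan = [
    Assumption("Test environment is ready in week 2", "System test", True),
    Assumption("Supplier delivers interface spec on time", "Integration", True, "confirmed"),
    Assumption("Nice-to-have report can be dropped", "Reporting", False),
]

for risk in open_risks(plan):
    print(f"RISK: '{risk.description}' ({risk.activity}) must be proved right or trigger re-planning")
```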

Conclusion: Assumptions are not the mother of all evil, but be very conscious when operating on them. You cannot avoid assumptions, but if you are not conscious about them they are the equivalent of unmanaged risk – especially if you build critical plans on them!

Happy testing!
 
/Nicolai

Monday 27 October 2014

Choosing the right approach for the test plan


Following my game of who-is-who last week I continued to think about the questions (or test cases) used in the game. I was wondering if a traditional test approach would work in a setup like the one encountered in who-is-who.
 
So I tried preparing the game just like I would if I were following the approach suggested by the V-model:
·         I listed all questions I would ask (test case preparation)
·         I ordered the cases in a sequence (fixed test plan)
·         And then I played the game (test execution)
 
Result: I lost miserably – After several games where I kept tweaking my test plan I did not win a single one. The reason was that I could not change the test plan based on experience gained while playing, meaning that some of my questions had little to no impact.
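For illustration, a rough Python sketch of the fixed-plan approach (the candidate pool and the questions are invented): the questions are asked in their planned order, no matter what earlier answers have already ruled out, so some of them end up having no impact at all.

```python
# Hypothetical candidate pool for a who-is-who style game.
candidates = [
    {"name": "Anna",  "hat": True,  "glasses": False, "beard": False},
    {"name": "Bent",  "hat": False, "glasses": True,  "beard": True},
    {"name": "Carla", "hat": True,  "glasses": True,  "beard": False},
    {"name": "David", "hat": False, "glasses": False, "beard": True},
]

# Fixed test plan: the question sequence never changes.
fixed_plan = ["hat", "glasses", "beard"]

def play_fixed(secret, pool, plan):
    for question in plan:                                   # ask every planned question...
        pool = [c for c in pool if c[question] == secret[question]]
        # ...even when the pool is already narrowed down and the question has no impact
    return pool

print(play_fixed(candidates[2], candidates, fixed_plan))    # Carla is found, but only after all three questions
```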
 
Then I changed strategy – Experience driven test execution following this approach:
·         I listed all questions I would ask (test case preparation)
·         I asked questions based on experience (no fixed test plan)
·         And then I played the game (test execution)
 
Result: I started winning, but I spotted some problems in the approach – I found situations where my test coverage was insufficient, and the lacking test cases meant I either lost or took the long route to the answer I was looking for.
 
Conclusion: Changing strategy again, to a charter-driven, experience-based test, would likely have been the best approach for a game like this. There are a lot of strategies for testing, and picking the right one is the hard part. This suggests that the test needs to be driven by more than a good question or test case – namely the ability to ask the right question at the right time.
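A sketch of what the experience-driven alternative could look like, using the same invented pool: each round picks whichever remaining question splits the current pool most evenly, which is one simple way of asking the right question at the right time.

```python
# Same invented pool as in the fixed-plan sketch above.
candidates = [
    {"name": "Anna",  "hat": True,  "glasses": False, "beard": False},
    {"name": "Bent",  "hat": False, "glasses": True,  "beard": True},
    {"name": "Carla", "hat": True,  "glasses": True,  "beard": False},
    {"name": "David", "hat": False, "glasses": False, "beard": True},
]

def best_question(pool, questions):
    """Pick the question whose yes/no split of the remaining pool is closest to 50/50."""
    def imbalance(q):
        yes = sum(1 for c in pool if c[q])
        return abs(yes - (len(pool) - yes))
    return min(questions, key=imbalance)

def play_adaptive(secret, pool, questions):
    questions = list(questions)
    while len(pool) > 1 and questions:
        q = best_question(pool, questions)   # chosen from what is left, not from a fixed plan
        questions.remove(q)
        pool = [c for c in pool if c[q] == secret[q]]
    return pool

print(play_adaptive(candidates[2], candidates, ["hat", "glasses", "beard"]))   # Carla found in two questions
```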
 
Happy testing!
 
/Nicolai

Friday 24 October 2014

Asking the right question?

I played a game of who-is-who yesterday. The game is all about eliminating options until you are able to guess who your opponent is. In other words: it is all about probing for specific information that will give you a complete picture of your opponent – Much like testing an application, where you’re verifying and validating until you reach an agreed level of knowledge.
 
Probing for information is about asking questions. Poor questions and poor questioning techniques interfere with the information gathering by creating confusion and misinformation, just like poor test cases lead to false positives and false negatives.
 
Question: So, thinking about questions as test cases and vice versa, I started wondering if questioning techniques for human communication could be applied when writing test cases…?
 
To answer a question like that I had a look at questioning techniques from the HR & communication world, and it seems that there are many takes on how to ask a good question, and interestingly enough several takes on inefficient questioning. Less effective questioning consists of closed-ended questions – answered by yes or no – and leading questions – containing the answer embedded in the question. A test step has a binary answer (pass or fail) and contains the answer in the step as the expected result, suggesting that the traditional test case uses the less efficient methods for gathering information?
 
I tend to think that questioning a person and testing a machine are miles apart. When questioning a person you have the benefit of verbal and nonverbal communication. When testing a machine you get observations that need to be checked against an expected result. That is why you need to ask a closed-ended question in your test case and have the answer you are looking for embedded in the case – Something that (according to the communication experts) is a no-no when talking to people.
 
Consider this example:
 
 
Human interpretation: There is no definition of how to identify a bitch. The outcome is complicated; Mr. Jackson’s and my perception of ‘the bitch variable’ might differ, meaning that the answer will never be unambiguous enough to get a result that would work in a game of who-is-who.

Machine interpretation: The check vs. variables will be BitchVariableExists = Yes or No and BitchVariablePopulated = Yes or No? The outcome is simple: either the variable exists or not, and if it exists it can be either populated or not.
 
This means that the test case prepared by Mr. Jackson would work if run against a machine, but not in an analogue game of who-is-who.
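As a sketch of the machine interpretation turned into actual checks (the variable names are borrowed from the example above; the record structure is invented), both the closed-ended question and the expected answer are baked into the test step:

```python
# Hypothetical record under test - the structure is invented for illustration.
character = {"BitchVariable": "yes"}

def test_bitch_variable_exists():
    # Closed-ended question with the expected answer embedded: the variable must exist.
    assert "BitchVariable" in character

def test_bitch_variable_populated():
    # Closed-ended question with the expected answer embedded: the variable must be populated.
    assert character.get("BitchVariable") not in (None, "")
```

Each check can only pass or fail – exactly the binary, unambiguous answer a machine can give, and exactly what the communication experts warn against in human conversation.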
 
Conclusion: Questioning techniques for human communication are not necessarily a benefit when writing test cases, as these must be unambiguous and hence have to rely on closed-ended questions. Questioning techniques are, however, paramount when you gather information from your peers and human test oracles for writing your test cases.
 
Happy testing & have a nice weekend!
 
/Nicolai

Thursday 16 October 2014

Ignorance is not the strategy to pursue

I came across this comic a while ago, and it underlines many interesting points when thinking about testing and QA:
“It is true, Hobbes, ignorance is bliss” – Indeed it is, but this changes quickly when your ignorance is exposed on the front page of the printed press just after the launch of your application. This is why you need knowledge about your solution; something that testing will give you.
“Once you know things you start seeing problems everywhere… And once you see problems, you feel like you ought to try to fix them.” – Knowledge is power, as knowing where the problem is will give you the means to fix it. When you start seeing problems everywhere it becomes paramount that the knowledge is formalized and priorities are applied. Tools for knowledge management, specifically defect management, will help here.
“And fixing problems always seems to require personal change, and change means doing things that aren’t fun. I say phooey to that!” – Fixing problems is indeed hard work, and if they are not well described and do not follow an agreed procedure it becomes even harder. Lack of a process for handling problems will often lead to confusion and people saying “phooey to that!”
“But if you are willfully stupid you don’t know any better, so you can keep doing whatever you like!” – Yes you can, but you will also keep repeating your mistakes, introduce immense overhead, build frustration in your development organization and run the risk of an epic fail that scares all talent away from your team. Management needs to take responsibility for making the organization work smarter, not harder.
“The secret to happiness is short-term stupid self-interest!” – That strategy is called pissing your pants: ill-advised but often practiced by organizations or people who have lost their foothold.
“We’re heading for that cliff!” “I don’t want to know about it” – Does that sound familiar? You have probably seen this scene in a professional setting: some unpleasant reports are deliberately ignored, and some risks are left without mitigation. This is cost escalation and waste – only OK if you (or your customer) have too much money to spend on nothing, or if you are planning to fail.
“I’m not sure I can stand so much bliss!” “Careful! We don’t want to learn anything from this!” – Maybe it is worth learning something, at least to avoid getting bruised all the time?!
Applying Calvin’s statements in a software development environment leads to the following conclusions:
·         Ignorance is not the strategy to pursue.
·         Knowledge is king, test can help you wise up.
·         Fixing problems is hard, applying process and tools will help you.
·         Management is responsible for making the organization work smarter, not harder, all the time.
·         Do not ignore warnings – Act!
Happy testing!
/Nicolai

Monday 13 October 2014

Why do we test?

My daughter asked me what I do at work, and soon thereafter why. The answer to the what is testing, and the answer to the why is, in short, to add value, as this is the core nature of developing solutions.
If we were only focusing our test on functionality, the test would soon be of little to no value. So the discussion with my daughter led me to think of the following items that I find necessary as part of testing, in order to ensure that the test adds value to the development of the solution:
·         Increase customer happiness: Is it fit for purpose, easy to use and supporting their business.
·         Increase profitability: faster and easier to market, something that the solution should ensure.
The next question is of course how?
There are two methods that come to mind: risk-based and business-driven testing. Risk-based in order to ensure that the impact on users, money, name, market share, brand value, customers, people etc. is minimized, and business-driven to ensure that business goals are met. From here on you can go exploratory or similar; the important thing is just to keep in mind that there is so much more to testing than the functionality.
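A minimal sketch of the risk-based part (the areas and scores are invented for illustration): each area gets a likelihood and an impact, and their product decides test priority, so users, money and brand are protected first.

```python
# Hypothetical risk items - names and scores are invented for illustration.
risks = [
    {"area": "Payment flow",  "likelihood": 3, "impact": 5},   # money, brand value
    {"area": "User login",    "likelihood": 4, "impact": 4},   # users, market share
    {"area": "Weekly report", "likelihood": 2, "impact": 2},
]

# Classic risk-based scoring: priority = likelihood x impact, test the highest first.
for item in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    print(f"{item['area']}: risk score {item['likelihood'] * item['impact']}")
```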
Remember that happy users (who make money while being happy) are on the agenda for developing the right solution, and not just a solution.
Happy Testing!
/Nicolai