Monday, 16 December 2013

Pairwise testing

Problem: Test cases covering multiple variables can be tricky to create
Lately we have been testing workflow rules. This calls for test cases that cover many variables, and as the number of variables goes up, so does the complexity. We started the test specification in front of a decision tree, and quickly realized that writing the cases manually was too time-consuming, not to mention risky, as the number of combinations was too much for us to cope with.

Solution: Apply all-pairs testing for the test case creation, and a tool to do the pairing.

There are many tools for creating the test cases. Here you will find one that does the job, no matter if you prefer command-prompt, text-driven or GUI-driven tools that can create nice graphical models of the test.

I like it simple, so I used PICT, a command prompt driven tool that takes text files as input and delivers text files as output. Easy-peasy: just put your variables in a file, and transform that into a list of test cases that you can use for your test. For more information on PICT I suggest that you read the help file found here:

Many of our test cases are parameter driven in MS Test Manager, meaning that it is possible to copy the result file directly into the parameters of the test case. The only precondition is that you make sure that the columns in the result are in the same order as the parameters in MS Test Manager.
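For illustration, the pairing itself can be sketched in a few lines of Python. The greedy generator below is a hypothetical stand-in for what PICT does, not its actual algorithm, and the parameter names in the example are made up:

```python
from itertools import combinations, product

def all_pairs(parameters):
    """Greedy all-pairs generator: every value pair of every two
    parameters appears in at least one returned test case."""
    names = list(parameters)
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va, vb in product(parameters[a], parameters[b])}

    def covered(assignment):
        return {p for p in uncovered
                if all(assignment.get(k) == v for k, v in p)}

    cases = []
    while uncovered:
        case = {}
        for name in names:
            # pick the value that knocks out the most uncovered pairs
            case[name] = max(parameters[name],
                             key=lambda v: len(covered({**case, name: v})))
        hit = covered(case)
        if not hit:  # greedy stalled: force one remaining pair into the case
            (a, va), (b, vb) = next(iter(uncovered))
            case[a], case[b] = va, vb
            hit = covered(case)
        uncovered -= hit
        cases.append(case)
    return cases
```

Feeding it three parameters with two values each yields fewer cases than the full cartesian product while still covering every pair. For real work, stick with PICT: it produces tighter suites and supports constraints between parameters.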

Consider this technique and these tools the next time you need test cases covering many input variables.

Happy testing!


Tuesday, 10 December 2013

Are you ready for testing?

Problem: Assumptions about test readiness will ruin your day

Like any other project activity, test execution relies on agreements with stakeholders, preparations and deliveries that must be in place before the test can start. I have seen quite a few test runs foiled by invalid assumptions and unknown status.

Solution: Test Readiness Review

Run your entry criteria through a Test Readiness Review before executing the test, either as an informal or a formal review. This is how I usually do it:

All my test readiness reviews are meetings where stakeholders are invited to talk about the entry criteria for the test that is about to be executed. The level of formality differs depending on the test being done: internal tests that only involve internal stakeholders get informal reviews, while external test runs with external stakeholders get very formal ones.

Informal Test Readiness Review

Done in relation to System and System Integration Test (to internal systems)
When: A couple of days before test start.
Length: max 30 min
Who: Test team, key players in development org (like build manager, lead dev, project manager etc.)

I call for a meeting where the only preparation is to bring a cup of coffee, and the agenda is a discussion of the preconditions / entry criteria for the test to come. I gather information about readiness up front, making a shortlist of questions / actions to be discussed, but the majority of the meeting is about who does what.

Formal Test Readiness Review

Done in relation to System Integration Test (to external systems) and acceptance test
When: One week before test start.
Length: 1 hour+ (depending on the size of the test project)
Who: Test teams, key players in delivery organizations (like project managers etc.)

I call for a meeting where the agenda states the entry criteria and responsibilities (from the test plan), and the preparation for the participants is to give status on their readiness. The agenda is a short presentation of the test to come and readiness status from all participants.

The outcome of the meeting is a status/action list that states exactly what needs to be done, and by whom, before the test can start.

If your test readiness review raises a concern about being ready on time, then you have time to do something about the problem proactively before the test is running. At the very least you can report the risk to the project owner.

The trick is to raise awareness about the test to come – this is often forgotten in the rush of completing the coding. I recommend that the meetings are called the second the test plan is approved, in order to reserve the time in people's calendars and to create visibility about the milestone that delivery to test is.

Happy testing!


EDIT: You can see my test readiness checklist here:

Thursday, 28 November 2013

Obsessive testers?

We had quite a laugh here at the office the other day when looking at the pictures in this Blog post:

Obsessive or not, any tester looking at the pictures will recognize the feeling they get when testing and hunting for bugs. When a bug is close, you get that “something-is-not-right-here!” feeling.

All of the pictures include an element that makes you wonder: is this right? Whenever I start asking myself that question while testing, the hunt is on, because that is where the bugs will be hiding. The pictures are indeed situations similar to what the tester will encounter in a piece of software – in the documentation, GUI, database or code.

I often start my test sessions with some exploratory testing. The reason for doing this is to learn about the item under test, and to allow myself to wonder whether things are right. I would like to invite you to try this some time – it is fun and rewarding.

Happy testing!


Sunday, 24 November 2013

Test Plan Reality Check

Problem: The test plan needs constant care in order for it to remain valid.

Plans are created as part of the startup of a project, and then often left alone during long periods of the project. As reality kicks in, the plan will become more and more invalid, leaving the reader alone in a world of false assumptions and inaccurate statements. “Plan the work, work the plan” is not an advisable strategy!

Solution: Put some reality checks into your plan and update the plan, as you wise up.

I came across this picture on LinkedIn, and realized that it serves as an excellent reminder that the test plan needs updating, just as any other plan. Looking at ‘Your plan’ vs. ‘Reality’ it becomes obvious that a reality check is needed every now and then.

Whenever reality bites, it will change the foundation of the plan, either confirming some of your assumptions or invalidating them. Often it is dependencies on other plans or deliveries that rock the boat, suggesting that you need to question your plan at least every time you (are supposed to) receive something from others. I have seen these points called quality gates in some organizations – no matter what you call the deliveries, you will need entry criteria (or similar) to communicate your expectations to the delivery organization and to check whether the delivery meets the requirements.

When you find that the delivery does not match your expectations, update the plan, communicate risks/issues to stakeholders and take corrective action to ensure that the plan reflects what you know and where that will take you and the test project.
Remembering the project triangle (aka the Iron Triangle), here are some pointers on where to look for changes in your plan:
Scope changes, check following areas of the plan:
  • Test items
  • Features to be tested
  • Features not to be tested
Cost changes, check following areas of the plan:
  • Item pass/fail criteria
  • Suspension criteria and resumption requirements
  • Test deliverables
Schedule changes, check following areas of the plan:
  • Approach
  • Responsibilities
  • Staffing and training needs
  • Schedule

I strongly advise that the plan is kept simple (see “KISS your test plan, but not goodbye”), and one of the reasons for this is to keep the administration workload at a minimum. In complex projects with loads of dependencies, just keeping a plan up to date can be quite a task.

Happy planning and testing!


Monday, 18 November 2013

Hunting bugs in the jungle.

Problem: Some bugs are like ghosts – they are seen only rarely, but they can be quite a scare.
Some bugs are elusive and (almost) impossible to recreate, but that does not mean they are not there. Most likely these bugs get written off as ‘not reproducible’, but they are lurking under the surface, just waiting to ruin your day - or the release you have been hacking on for the past weeks.

Solution: Team up, and gather all the information you can get – Then eliminate the possibilities.

We have recently been puzzled by a bug, one of those that require a lot of investigation to reproduce and document. The root cause was really simple, but pinpointing the origin and reproducing it was hard.

The problem was caused by the faulty population of a dropdown that dictated what data the user was working on in the system. The selection made by the user was supposed to be persisted in the session and saved in the database, to ensure that the selection stayed even if the user quit and reentered the application.

While executing the test we discovered that the selection changed – apparently for no reason. It happened a couple of times, and after discussing the observation we agreed that this was indeed a bug. It was a bug that we could not recreate, nor could we point at a single plausible root cause.

We did two things to start the hunt for the bug: first we recorded everything we knew about it in our bug-tracking tool, then we teamed up to form a bug-hunting crew. We asked the developer who wrote the code to assist with two things: to participate in the discussion on root cause theories, and to enable all logging and stand by for the next sighting of the bug.

We used the theories to guide the testing, as we performed the scenarios believed to lead to the reproduction of the bug. One by one, the theories were dismissed as root cause, until we had the bug cornered.

When we finally encountered the problem again we had lots of logs to look through and fewer possible root causes to check – that made it much easier to find the bugger and recreate it based on the information we had available.

It turned out that the session handling was not set up correctly, causing the dropdown to be populated before the value was fetched from the database. It only happened on the rare occasion when a user was transferred from one instance in the cloud to another, which explains the elusive nature of the bug.

Conclusion: Use root cause guessing/analysis as a guide, and your peers to help you, when hunting those elusive bugs.

Happy bug hunting!

Thursday, 31 October 2013

If information is king, then coverage metrics must be the joker?!

Problem: Test coverage reporting can easily lead to a false sense of security.

One of my peers asked me to review an acceptance test summary report before it was sent to his customer. It was an excellent report, with loads of information and a nice management summary that suggested exactly what to do. The test coverage section did however catch my eye, as it was a table showing test cases per use case and their execution status.

The table looked something like this (simplified). Just looking at the % in the right column would suggest that everything is good…
Test coverage and execution status per use case:

Use Case      | Test cases | Execution status              | % Run
UC1 – [Title] | 17         | 12 Passed, 5 Failed           | 100%
UC2 – [Title] | 11         | 11 Passed                     | 100%
UC3 – [Title] | 14         | 12 Run, 2 No Run              | 86%
Total         | 42         | 35 Passed, 5 Failed, 2 No Run | 95%

Solution: Be VERY careful when reporting on coverage, and make sure to explain the meaning.

The first problem when looking at coverage is to set the level of measurement. I like exact numbers like code coverage, but in some cases that is impossible to get. The report I reviewed covered acceptance testing of two deliveries from 3rd party vendors, making code-related metrics impossible to obtain.

Basing coverage on functionality is like navigating a minefield wearing flippers. It raises two problems; How do you measure if a function is covered, and how do you measure if alternative scenarios are sufficiently covered?

I would base my coverage measurements on the acceptance criteria for the user stories to see functional coverage. If the user stories are broken down into acceptance criteria, then the customer will have a very clear idea of which features have been tested. Drilling down into the use case specifics, and changing the focus from run to passed cases, gives a table like this:
Use Case / Acceptance criteria | Test cases | Execution status   | % Passed
UC1 – [Title]                  |            |                    |
  AC 1                         | 7          | 7 Passed           | 100%
  AC 2                         | 3          | 3 Failed           | 0%
  AC 3                         | 3          | 1 Passed, 2 Failed | 33%
There are shortcomings in reporting like this, but when the code is a black box you will have to take what you can get. Keeping this in mind, there are things that must be communicated as part of the report:
  • Functional test coverage gives an indication of which features are done, in that they satisfy the acceptance criteria.
  • Many cases per acceptance criterion does not equal better coverage than a few.
  • This approach requires the acceptance criteria to be very crisp; a missing acceptance criterion will mean a lot.
For each of these bullets you have to tell what risks you see, and what they mean for the project.
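If the execution results are available as raw data, the roll-up to % passed per acceptance criterion can be scripted. This is a minimal sketch, assuming a simple list of (criterion, status) records; the export format of your test management tool will differ:

```python
from collections import defaultdict

def functional_coverage(results):
    """Roll raw execution results up to % passed per acceptance criterion.

    results: iterable of (acceptance_criterion, status) tuples,
    where status is "Passed", "Failed" or "No Run".
    """
    passed, total = defaultdict(int), defaultdict(int)
    for criterion, status in results:
        total[criterion] += 1
        if status == "Passed":
            passed[criterion] += 1
    # percentage of executed-or-not cases per criterion that passed
    return {c: round(100 * passed[c] / total[c]) for c in total}
```

Note that "No Run" cases still count in the denominator, so an untested criterion drags its percentage down instead of hiding.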

Nonetheless, if you decide to put coverage numbers into the reports, make sure to tell the reader what they mean. In my opinion you need both code and functional coverage numbers for a complete coverage report, but you can live with one if you are in a tight spot.

Happy testing!


Friday, 25 October 2013

Parallel testing as part of a platform upgrade

Problem: Testing an application after platform upgrades can be tricky.

We are about to start a project where an application is moving from an old platform to a new one, and this calls for testing. Unfortunately, this old application is not well documented, neither in requirements nor in test cases. This makes the test tricky, because determining the expected result is not possible from existing documentation.

Solution: Deploy parallel testing techniques!

We do in fact have the expected result in a well documented manner – We have a running system in production. The production deployment has been running for years and no bugs are raised against it. The system is a number-cruncher based on a huge order database, making it perfect for some parallel testing, and this is the prospect for our test:

We assume that the result in production running on the old platform is valid, hence equal to our expected result in the test cases to be run on the application after deployment to the new platform. Our test cases are the functions that can be invoked in the production environment, and the input data is the datasets from the database.

This means that we need the following setup to run our test:
Two test environments, one running the old platform (same as production) and one running the new platform (the production-to-be), pointing at the same test data. We can use one database for input data, as the application makes calculations on data rather than changing it. This means we will copy data from production and use that as the foundation for our test.

This is how we will create our test cases:
Reverse engineering of the production system. For each screen, we will list all functions and break them down into steps. From production we get the test scenarios for each test case from business examples, and that will dictate what test data we need for the test. On top of that, there are some batch jobs and other ‘hidden’ functions that will require attention.

This is how we will run the test:
First we will run a case in the old environment, then do the same in the new environment. If the results are the same, we move on to the next one.
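The comparison loop itself is trivial once each environment is wrapped in something callable. A minimal sketch, assuming run_old and run_new are such wrappers (in reality the environments would sit behind service calls or database queries, not Python functions):

```python
def run_parallel(cases, run_old, run_new):
    """Run every case against both platforms.

    The old platform's output serves as the oracle: a case passes when
    the new platform produces the same result. Returns the mismatches.
    """
    mismatches = []
    for case_id, dataset in cases:
        expected = run_old(dataset)   # trusted production behaviour
        actual = run_new(dataset)     # new platform under test
        if actual != expected:
            mismatches.append((case_id, expected, actual))
    return mismatches
```

An empty result list means the new platform matched the old one on every case; anything else is a candidate bug report with the expected and actual values already captured.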

The cool thing about doing it this way is that we now have documented test cases and a very good foundation for regression testing the application in the following releases.

Happy testing & Have a nice weekend!


Friday, 18 October 2013

Business-driven test vs. application

Problem: Users/business reps can have a hard time using their business knowledge in a new system context.
Test sessions and workshops that involve users / business reps (acceptance, prototype etc.) rely heavily on the participants' business knowledge and their ability to turn it into meaningful test cases and scenarios. Seeing (and understanding) an application for the first time, combined with a request to mix new impressions with business knowledge, can be a tough cookie for some.

Solution: Roleplay your way through the business scenarios.

I have had the pleasure of running quite a few test sessions involving real users, either with the purpose of writing test scenarios or running exploratory tests against a release or prototype. Common to the sessions is that they involve clever people who know the business, but not necessarily anything about the new application they are about to test. To unlock the business knowledge in the system context, I have found that roleplaying with the business reps is very efficient.
This is how I would structure a business test workshop:
  • Welcome, meet & greet + expectations
  • Demo of application, showing a standard flow through the application
  • Go play session, where participants get some time with hands on the application
  • Roleplaying, explanation of the rules and concept
  • Roleplaying session 1
  • Recap of findings and recording of additional scenarios that might have been discovered.
  • Roleplaying session 2
  • Recap of findings & new scenarios
  • Repeat until all scenarios have been played out / recorded

Rules are simple
Everybody gets to play a role as either business rep, customer or any other applicable role. Participants who usually deal with the customers are excellent for the customer role, as their first-hand knowledge of customer requests will come in handy.
The test lead acts as gamemaster; he makes sure that the scope of the scenario is not creeping, keeps flow in the test, and records spinoff scenarios and defects for the recap session that follows the scenario.

Sessions consist of a scenario, defined by a headline and the roles involved. If you have use cases, start by playing those, but refrain from giving the steps to the users, as that will kill their creativity. A word of advice here – keep it simple, as scenario execution becomes time-consuming with complexity.

Preparation for a session is required much like for any other test activity: systems, data, access, scenarios etc. need to be planned up front. If you are running this without a working prototype, you need a mockup of the central parts of the system in order to facilitate the roleplaying sessions.

A spin-off you get from doing this: usability, or lack of same, will show instantly when the users get their hands on the application for the first time. Make sure to take a lot of notes when users get stuck – that is where usability bugs will be hiding.

Happy roleplaying & Have a nice weekend!


Sunday, 6 October 2013

We do not have time to…

Problem: There is not enough time to do everything by the book

One of my peers told me that his clients said they did not have time to attend sprint demos for sprints that were not directly linked to a release. That made me think of all the projects I have seen where someone did not have time to do something that they should have done.

There can be a lot of reasons for not doing various tasks, but the argument that there is not enough time is rarely a good sign.

Solution: Be aware of, and communicate the consequences of not doing something.

Let us look at the consequences of some of the “We don’t have time to…” statements:

“We do not have time to attend sprint demos” Feedback is needed to ensure that the solution meets the requirements – the longer the feedback loop in your project, the more rework will be needed and the greater the impact of misunderstandings.

“We do not have time for reviewing our documentation” Reviewing for grammar and spelling mistakes adds little value, but skipping reviews for code- and testability can soon become expensive. Think cost escalation here: the later the discovery, the more expensive the fix. Some argue that static testing, or a review, is actually one of the activities with the biggest RoI in your projects.

“We do not have time to do test” Not testing allows defects to go undetected into production, leaving little chance of success when implementing the application. Not testing is a huge risk: you risk not only application failure but also business failure, which equals loss of money and prestige.

“We do not have time to write unit tests” Some less mature projects I have seen shipped code for test if it could compile, skipping all developer-driven test. Unit tests allow early defect detection and reduce turnaround in projects; skipping them means that cost will escalate. On top of this, such a strategy results in more pressure on the test team, who are often already under pressure when the release approaches.

Common to all the above is the fact that they are investments that someone might not have time to undertake. Some investments are not a necessity, but postponing investments in quality is likely to drive cost up and customer satisfaction down.

I will end this post with a quote from something I read recently: “Postponing investments in software quality entails risks for the business. When investments are delayed too long, business continuity can be put at risk - sometimes sooner than expected.” – from “You can’t avoid investing in your software forever” by Rick Klompé

Happy testing & investing :)


Wednesday, 2 October 2013

To test in Production, or not...

Test or verification in production? This discussion emerges from time to time, usually following the statement “We will verify in production!”

I came across this example of test in production on YouTube:
It is Matt Davis from Armor Express testing a bulletproof vest while wearing it. I sincerely hope that this is not a real test, but rather a demonstration of a well-tested product…

I really enjoy atsay714's comment: "This is a test of his balls, not his vest." From a QA and test perspective this comment really nails the concept of test in production. It is a test of the guts of the system owner, as a HUGE risk is accepted when testing in production.

Remember to tell your stakeholders what kind of risk they take on if they accept testing in production.

Happy testing!


Tuesday, 1 October 2013

Use tools to help you gather information

Problem: Writing and documenting bugs can be time-consuming.

A lot of the test execution we do these days is exploratory. This means we have little information from scripted test cases that can be copy-pasted into bug reports and scenario descriptions. In order to transfer bugs and other information to development, there is a considerable workload in writing everything up as steps and do-this-do-that descriptions. This poses two problems: it is boring to describe everything in detail, and details are easily forgotten in the process.

Solution: Let snipping and recording tools ease your work

Snipping tools
“A picture is worth a thousand words” – this goes for defect reports as well. The better the picture, the less explanation is needed when pointing development towards the problem. In my experience a good picture on a bug report is a screenshot with some pointers and maybe a little text pointing out areas of attention.

Capturing and sharing pictures can be done easily using a screen capture tool. I use the Snipping Tool already built into Windows, primarily because it is free and easily available. Snipping Tool in Windows:

More sophisticated screen capture tools are on the market, and I suggest that you check some of them out, as they might be a shortcut to faster feedback to your peers. A colleague of mine demonstrated Snagit a while back, if you are after a more feature-rich tool than the Windows Snipping Tool:

Recording tools
Problem Steps Recorder, included in Windows 7 and 8, allows you to record and share scenarios with step descriptions and screenshots. Before we started using MS Test Manager for exploratory testing, this was frequently used to record repro steps for a bug. Check it out:

MS Test Manager takes the recording to a new level with the possibility of recording both bugs and test cases using the features built into the tool.

There are lots of other recording tools, but if you go for one, I suggest that you select one that is more than a video recording of the screen.

One last thing to remember: “A fool with a tool is still a fool” – tools are not a silver bullet, but a way to increase productivity and information flow in your organization. Evaluate a new tool over a short period of time, and scrap it if it does not give you the results you expect.
Have a nice day & Happy testing!

Friday, 27 September 2013

When defect priority loses its meaning... STOP!

Problem: All defects are critical just before launch – defect priority can easily lose its value & meaning

The clock is ticking and a release is approaching. Development only has a few days/hours before code freeze, and only the most urgent bugs can be addressed. This is where defect priority can lose its meaning, as people mark all bugs as critical, hoping for them to make the cut.

Solution: STOP! Check your defect metrics and call for a bug triage

The most extreme example I have seen was a large program where the share of priority 1 & 2 defects went from 17% to 92% in a week. This is what happened:

Program management announced to all stakeholders that the deadline was tight, and that there would no longer be time to pay attention to defects of severity 3 or lower. In the two days that followed this announcement, we could see a pattern forming: more than half of the existing defects had their priority changed, the majority to priority 1 or 2, and 90+% of new defects were opened with priority 2 or higher.

This story holds two lessons:
  • Do not tell stakeholders that you will be ignoring their defects – They will not take it kindly.
  • Make sure to check your defect metrics from time to time and do a bug triage when needed.

Since then I have always kept a pie chart of defect priority (and severity) at hand in my projects – this is easy, as you get it for free in most defect tracking tools. Checking it from time to time offers a sanity check of the priority spread before doing a bug triage exercise.
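The sanity check behind the pie chart can even be scripted. A hedged sketch, assuming the defect priorities can be exported as a simple list of labels (most tracking tools can do this via CSV or an API); the "P1"/"P2" labels and the 50% threshold are illustrative:

```python
from collections import Counter

def priority_spread(priorities):
    """Percentage of defects per priority - the numbers behind the pie chart."""
    counts = Counter(priorities)
    total = sum(counts.values())
    return {prio: round(100 * n / total, 1) for prio, n in counts.items()}

def triage_needed(priorities, threshold=50):
    """Flag a suspicious spread: more than `threshold` % of all defects
    marked priority 1 or 2 suggests priorities have lost their meaning."""
    spread = priority_spread(priorities)
    return spread.get("P1", 0) + spread.get("P2", 0) > threshold
```

Run it against last week's export and this week's: a jump like the 17% to 92% one above will trip the flag long before the triage meeting turns into a battle.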

In the example above, priority got confused with severity, and everyone got jumpy about when, and if, things got fixed. This meant that the first bug triage meeting was a battle of wills – who was prepared to give ground and sacrifice some bugs to lower priorities? The meeting did not change any priorities, so we simply agreed that all stakeholders (business areas) had to nominate 3 bugs they needed and 5 they wanted, and that set the new priorities – not very pretty, but it did the job, giving development priorities for completing bug fixing in the time left.

For more information on bug triage have a look at this:

Have a nice weekend & Happy testing!


Thursday, 26 September 2013

Testing ROI - what's your story?


There are numerous ROI calculations on various forms of testing. Be it manual or automated testing, in waterfall or agile contexts someone has crunched the numbers for "testing as a whole" or for a specific industry.
It very often leads to some rather academic figures.

And very often to a setup where a lot of assumptions or "model projects" have to form the foundation for the calculations - and then another set of "best practices" on top of this, with estimates of how many minutes it takes to write a test script, review it and execute it, and how many defects per test script. And then you have to put in the human factor - multiply by 1.07 and the ROI is... lost.


Find a good "product" story for your project or for the QA department in your company. You will most likely not be able to do a flashy ROI on top of the story, or only partially.
I've been with a company that was very focused on software and hardware quality, yet what we needed was not ROI to remind us that test was important. We all shared the same story, which was told when you joined the company: two failed product launches and we are out of business.
And QA didn't want to contribute to that in any way. At all. So we tested because we had to - because we did not want to fail. And that did not change for years. ROI calculations were updated during that time, better tools to support testing were introduced, the test department matured in terms of processes and the participants' skills and experience, but at the end of the day our story was still the selling point. And it could be shared outside the QA department with other departments.

I'm not saying that ROIs are useless. Some are obviously so easy to pick apart that they are. On the other hand, good ROIs are great input for understanding how much effort and reuse it takes before automation is feasible. They are also good for telling the story about how many parameters impact testing. In short, the ROI process is good for getting discussions and understandings aligned internally. If there is a factor of 20 between two estimates or factors in an ROI, where is the consensus, and why?

If you are tasked with doing an ROI to justify a test department in your company, you should consider something to be fundamentally wrong. Test departments are needed whenever there is software or hardware development. They can be outsourced, with different disasters waiting to happen - unless you also ROI "communication challenges". And if you do that, please share your thoughts on this blog.
But we would much rather hear your product story.


Monday, 23 September 2013

Test automation return on investment

Problem: Calculating & using Return on Investment (RoI) in test automation implementation

Engaging in discussions on RoI when implementing test automation is tricky business. Discussions are often derailed by incorrect numbers, a strange formula, or stakeholders who for political or personal reasons want to influence the decision.

Solution: Expectation management and a simple RoI calculation to steer the discussion

RoI calculations are needed for doing the business case detailing the investment that test automation is. There is really no way around this, but I would recommend that you start your test automation discussion with expectation management.

Expectation management is needed, as there will be many different opinions on what can, and what cannot be done using test automation. Especially the falsely expected benefits are dangerous, as they will pull discussion in the wrong direction, and set the project up for failure. When looking for falsely expected benefits and other mistakes I suggest that you start the discussion with stakeholders listing the tangible and intangible benefits they see. This will serve as input for the business case and give you an idea of the realism in the expectations of the stakeholders.

Test automation is an investment that takes a long time to get RoI from. Especially the arguments about savings and cost reduction should be examined closely, and that is where an RoI calculation comes in handy. Some years ago I got a copy of Dorothy Graham’s test automation calculator, a nice Excel sheet that allows you to make a simple RoI. You can find it here, along with some notes on the subject, on Dorothy's blog.

In my experience RoI calculations can never stand alone – the numbers in the calculation need to be backed by the experience you get from doing the actual test automation. When introducing test automation to a project, I suggest that you do a short proof of concept to ensure that the assumptions are right and the estimates are reasonable.
In short:
  • Engage stakeholders in a discussion on expectations and manage those
  • Do an RoI based on a simple calculation – Remember to mention the intangible benefits
  • Make a proof of concept, where you check your numbers and assumptions

Want to know more? Check out Douglas Hoffman’s excellent paper on ’Cost Benefits Analysis of Test Automation’ found here:

Happy testing!


Thursday, 19 September 2013

Test policy considerations


Problem: "Test" is a holistic expression used randomly across the organisation until 2 (weeks/days/hours) before go-live.

If you've worked with test for more than six months, you know the above is true. Some exceptions exist, but they are exceptions.


Solution: Write an ultra-short test policy, and if organisational acceptance fails, at least you know that things will never change.

A fellow test blogger has written an interesting blog entry about test policies (in Danish). So based on her headlines we tried to come up with the shortest possible test policy that can be adapted to almost all organisations.

Why not a 20-page document? Because they are never read, never updated and never detailed enough to deal with the fact that IT development is one giant moving target, no two projects are alike and change happens fast.

So instead of spending weeks and months on that unpolished jewel, spend 1 hour. Preferably with fellow test professionals, and see how briefly you are able to fill in the 9 sections below – or whether they are relevant for you at all.

Test policy for you

  1. Definition and purpose of test in the organisation. This organisation recognises the importance of structured test, with the aim of providing valuable and tangible information to relevant parts of the organisation and the senior management group.
  2. Test is organised according to demand since it is a support activity to 1) projects of whatever size, 2) maintenance and 3) IT-operations. Test is a distinct discipline and thus also has its own distinct manager (insert name).
  3. Career tracks and competence development of the testers is subject to demand from the organisation. Competence planning will be carried out at least two times a year or at the end of each (project) assignment.
  4. The overall test process that must be followed is embedded in the 1) project model and the 2) development model that are implemented in the company. No stand-alone test models are accepted. The implementation of test activities and related gates are evaluated and adjusted when the models are up for revision.
  5. Standards that must be followed. Since test is a professional discipline we follow the vocabulary of ISTQB as a standard. In the case we are working within a regulated industry we follow (insert name) standard(s).
  6. Other relevant internal and external policies that must be followed - refer to (4)
  7. Measurement of test value creation is based on the following three criteria:
    • Input from test in terms of blocking defects (number of accepted, requirements traceable bugs that have required a fix)
    • Input from test in terms of implemented process improvement suggestions (time to market improvements).
    • Assistance with defect prevention activities (number of accepted bugs found during analysis and inspection of requirements and other static test environments)
  8. Archiving and reuse of test results are determined at the end of each task or project.  A peer review will be conducted to determine which test artefacts are worth saving for re-use, and which existing test artefacts must be discarded.
  9. Ethics for the testers:
    • Bugs must be documented and made visible to the organisation
    • You own your own bugs until handover is accepted by the receiver
    • All testers are expected to speak up when they come across something "fishy" i.e. improper solutions, processes, implementations or the like.

Wednesday, 18 September 2013

Cutting corners has a price…

Problem: Risk impact discussions can be problematic

Someone posted this picture on LinkedIn, and that made me think of the iron triangle. While working with quality you often have to talk about the cost of quality. Quality is fluffy and hard to explain, especially when talking quality in large projects. As a tester you have valuable input for the risk logs and risk meetings – make sure that you get the point across to the receiver.

Solution: Use the iron triangle to illustrate the consequences of a risk to your peers

I often use the iron triangle when talking impact of risks and issues. The reason for this is that it is simple and speaks a language that most project managers will understand.

This is how I do it:

Every time I identify something that threatens quality, I think about the triangle and ask the question: where will this hurt the most? Cost, scope or schedule? I really like this angle on risk analysis, as it adds much to the risk index from traditional risk management.

What this approach adds is easier prioritization among risks based on the overall project goals, as most project managers will be able to say which of the corners of the triangle they can afford to cut. Furthermore, it links the risk impact score directly to something that project members can relate to.
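The idea can be sketched in a few lines of code. The corner weights, risks and likelihood scale below are all hypothetical examples – the point is simply that multiplying likelihood by the weight of the threatened corner gives a ranking tied to project goals:

```python
# Hypothetical sketch: score risks by which corner of the iron
# triangle (cost, scope, schedule) they threaten.
# Weights reflect what this project can least afford to cut --
# here, schedule is the corner the project manager cannot give up.
corner_weight = {"cost": 2, "scope": 1, "schedule": 3}

risks = [
    {"name": "Test environment delayed", "corner": "schedule", "likelihood": 4},
    {"name": "Requirements still changing", "corner": "scope", "likelihood": 3},
    {"name": "Extra licences needed", "corner": "cost", "likelihood": 2},
]

# Impact = likelihood of the risk * weight of the corner it hits.
for risk in risks:
    risk["impact"] = corner_weight[risk["corner"]] * risk["likelihood"]

# Highest impact first -- the risks hitting the corner you cannot cut.
for risk in sorted(risks, key=lambda r: r["impact"], reverse=True):
    print(f'{risk["impact"]:>2}  {risk["corner"]:<8}  {risk["name"]}')
```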

Give it a try – I bet it will raise awareness about risk impact in your project. At least it will give you a nice discussion on which corner your project will cut off the triangle when things get tough.

One last thing: Cutting corners always costs quality...

Have a nice day & Happy testing!


Monday, 16 September 2013

Get to know your defects

How many times have you been in a project where the standard defect report has been published - and nobody cared?

The numbers are right, the defects are grouped according to severity, system area, functional area or whatever makes sense in the project. If you are in a well-managed project you'll also be presented with an overview showing changes over time. The report is filled with graphs for easy presentation of the numbers. Nobody is awake now. Why?

Well, the "why" word is exactly the reason nobody cares about standard reports, because they hardly say a thing about why the defects are there.

It's comparable to a train wreck. It has happened. The dust has settled. What remains now is twisted metal and a lot of questions with few obvious answers.

In well-driven projects there's room for the most fabulous defect activity - root cause analysis. The kind of work that really pays off both in the short run and in the long run.

The most fantastic book I've read for a while is actually this one.
It's an old book. It's from the time when the PC was a future concept, the Internet was research and tablets were science fiction. Yet it captures many of the conceptual problems we face when working in present day projects.

The reason I mention this book is that the authors have the most wonderful and simple problem break-down, which goes as follows:
  • What is the problem?
  • What is really the problem?
  • What is the problem, really?

Most projects only pay attention to the first bullet. Raise a defect, describe the initial problem. Done. Fix. Re-test. Done-done.

Instead of the "let's see how many defects we can close or downgrade" approach, try to apply the problem break-down method to your defects. Maybe not all of them. Maybe just the trivial ones. You know the most severe ones will get the attention anyway, so start from the bottom instead.

Then you'll have a chance of understanding why so many defects are being reported. And just then you might also have some actions to take to prevent defects. That is when you understand your defects.
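One cheap way to start is to tag each defect with a root cause and report on those instead of severities. The defects and cause labels below are hypothetical, but the tally is the kind of "why" view a standard report never gives you:

```python
# Hypothetical sketch: tally defects by root cause instead of severity,
# so the report answers "why" rather than just "how many".
from collections import Counter

defects = [
    {"id": 101, "severity": "minor", "root_cause": "ambiguous requirement"},
    {"id": 102, "severity": "minor", "root_cause": "missing unit test"},
    {"id": 103, "severity": "major", "root_cause": "ambiguous requirement"},
    {"id": 104, "severity": "minor", "root_cause": "environment mismatch"},
]

by_cause = Counter(d["root_cause"] for d in defects)

# Most frequent causes first -- these point at what to prevent.
for cause, count in by_cause.most_common():
    print(f"{count}  {cause}")
```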

Monday, 9 September 2013

Load testing in the cloud

Problem: Scalability and setup can be problematic when setting up load and stress tests

Load testing is one of the tougher disciplines: not only does it require automated test execution, it also requires high volumes of cases running every second. Scalability can easily become a problem for your execution, and limitations in the infrastructure where the test is running dictate the test rather than your non-functional requirements.

Solution: Take your load test to the cloud

We have for quite some time used Microsoft Visual Studio & Azure to run distributed load testing. It is a test setup with a test controller and multiple test agents hosted in Azure, using Visual Studio for running the distributed test.

It is the test controller's responsibility to manage the test agents. The controller handles the execution of the test cases by nominating who (i.e. which test agents) is to perform the test and how the cases should be performed. It also makes sure to distribute the relevant test cases to a given test agent when the test starts. When the test is finished, the controller collects the data from all test agents, and this forms the basis of the test results.

The test agent is responsible for the execution of the tests and the simulation of a given number of virtual users. The test controller tells it which tests should be performed and how many virtual users are to be allocated for each test. The test agent reports status to the test controller during the test, and this information is used to generate the load test metrics.
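Conceptually, the controller's allocation job boils down to splitting the total virtual-user load across the agents. This is just an illustrative sketch of that idea, not the actual Visual Studio API:

```python
# Conceptual sketch (not the Visual Studio API): a controller splits
# the total virtual-user load evenly across its test agents.
def distribute_users(total_users, agents):
    """Return {agent: virtual user count}, spreading any remainder."""
    base, extra = divmod(total_users, len(agents))
    return {agent: base + (1 if i < extra else 0)
            for i, agent in enumerate(agents)}

allocation = distribute_users(1000, ["agent1", "agent2", "agent3"])
print(allocation)  # agent1 gets the remainder: 334, the others 333
```

Because the agents are stock Azure machines, adding capacity is just a matter of spawning more of them and re-running the allocation.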

For more information on test controllers and agents please see MSDN:

The trick is getting all the test controllers and agents setup, and then running the automated test on this setup. But once it is running there is really no limit to the number of test agents you can spawn to do your bidding. There is a nice demo on Youtube, showing running Web performance test and load test in Visual Studio 2010:

Another thing that you might want to look into is the preview version of MS Visual Studio 2013 – it incorporates distributed load testing as a feature, cutting out most of the setup of controllers and agents and making distributed load testing much more easily available.

For an overview of the new features of VS 2013 have a look at Brian Harry’s Blog:

Preview version of VS 2013 can be found here:

Have a nice day & Happy testing!