Tuesday 15 December 2015

Season's greetings - and a few thoughts on the testing challenges for 2016

It's that time of year. Many of us are engaged in testing just how far the Christmas mood can be stretched. It's a kind of stress test - and this year is no exception. We do not do predictions on this blog - at least not the bold ones. For those we have the experts who publish the World Quality Report every year. If you have a spare moment before Christmas, take a look at it. At the very least it's a good source of inspiration for strategic initiatives.

But back to this blog. What's happening here? Well, for 2016 we would like to see the following.

Make your black box testing a bit more white

This is a tough one. On the one hand, black box testing has a specific purpose: interact with a system using known inputs in order to get predicted test results and prove that the requirements are implemented. On the other hand, wouldn't it be nice if the testers had a deeper understanding of what actually goes on behind the scenes? Maybe that is also what the World Quality Report covers when it puts emphasis on "Expand testing teams’ skills beyond manual and test automation".

Spend more resources on test data

Test data will only become more critical for testing. For system integration testing it is perhaps the single most important component to be in control of. And yet, this is where most testing organisations fail to address the problems they are facing. This is not about strategy or big frameworks. This is hands-on work that requires broad participation from the development organisation - which is not the same as many resources or a lot of effort. Take an agile approach and focus on the areas with high frequency, where it really hurts. Bad test data is like a toothache: it does not go away on its own, the problem is there every time you test, and working around it consumes more resources each time - so there is no need for a fancy ROI analysis. Just do it.

DevOps and testing are two of a kind

If your organisation is beyond "agile" and is now focusing on the new management buzzword "DevOps", make sure that testing is involved in whatever activities happen around this area. We all know that testers are the most valuable source of knowledge prior to go-live, and the "Ops" side should therefore be interested in that knowledge. And vice versa - testers always love real "Ops" stories, because they are such good input for filling in the blanks: the areas we forgot to think about when we did our test planning and analysis, but which the end users - those who pay our salary - found.

Season's greetings

So please, Santa - these were three of the wishes on the list. There are about 999 other interesting test-related topics that we would also like to address, but they are currently in the drawer waiting for 2016 and more action on this blog.

Season's greetings to all of you, and thanks for following this blog.



Thursday 15 October 2015

Black box testing - FAIL



Unless you've just arrived on planet Earth from some other galaxy, you cannot have missed the biggest software news story of this year - and perhaps of this decade: VW's "cheat software", which made various diesel engine models behave one way when they went through the US EPA emission tests - only to behave quite differently once they were driven by the right foot of real people.

No matter who decided what, who knew what - and who did what - the implications from a software testing perspective are interesting. First of all, this is a signal to everybody about not trusting what's inside "the box". If you work professionally with test, software audit or other types of certification jobs that include software, the black box testing activities must undergo increased scrutiny.

This event should lead to a much more focused approach to the vendor's black box testing activities, and a much better understanding of the tests and their results should be the outcome. It is of course not possible to have completely transparent white box testing activities all the way through the development process, and even if it were, looking over the vendor's shoulder is not really a feasible option.

One simple lesson is to limit the re-use of test cases. Make sure that the test is flexible within certain boundaries. Secondly, do not rely on tests that are known to the vendor - and definitely not too much on test cases from their own black box testing activities.
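To make "flexible within certain boundaries" concrete: below is a minimal property-based testing sketch in Python using the hypothesis library. Everything named here - measure_nox(), the limit, the operating ranges - is a hypothetical stand-in for whatever the system under test actually exposes, not anything from a real emission test suite.

```python
# A property-based test samples fresh inputs within declared boundaries on
# every run, so the system cannot simply recognise a fixed, well-known cycle.
from hypothesis import given, strategies as st

NOX_LIMIT_G_PER_KM = 0.08  # hypothetical limit, not the real regulatory value

def measure_nox(speed_kmh, load_pct):
    """Stand-in for a call into the system under test."""
    return 0.05  # placeholder result

@given(speed_kmh=st.floats(min_value=0, max_value=180),
       load_pct=st.floats(min_value=0, max_value=100))
def test_emissions_within_limit(speed_kmh, load_pct):
    # The assertion must hold for any operating point, not just the points
    # on the official test cycle.
    assert measure_nox(speed_kmh, load_pct) <= NOX_LIMIT_G_PER_KM
```

The point of the design is that the concrete test cases are generated rather than enumerated, so the vendor cannot optimise for them in advance.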

Instead, when you are involved in FAT and SAT activities, your main goal should be to understand the testing scope of your vendor and challenge it with your own black box test cases. A further approach is to have a team of skilled testers and end users involved in exploratory testing - a set of planned and focused activities with the vendor's software prior to accepting the delivery.

VW is presumably not the only company where the content of the box differs from what was expected, and the motivation for spending extra time and resources on this is now forever embedded in their specific case. The ROI from a customer perspective is quite easy to calculate: insufficient testing = not knowing how the engine of your business is running.

From a historical perspective, software has often included unwanted or hidden features - just look up a few of the examples from Office 97.


Thursday 3 September 2015

PRINCE2 quality management strategy


I have spent the past months working in an organization that uses PRINCE2 as the driver for its projects. One of the products I have been working with is the Quality Management Strategy (QMS), an essential work product that defines how to approach quality in a project.

The QMS describes how the quality management systems of the participating organizations will be applied to the project, and it confirms any quality standards, procedures, techniques and tools that will be used. Sounds easy enough, but it requires a little thought to ensure the proper level of information so that quality remains manageable in the project.

The QMS is created as part of project initiation and continues to be updated throughout the project. It contains the following:

  • Quality management procedures for planning and control.
  • Tools and techniques, including records and reporting.
  • Quality management activities, covering dependencies, roles and responsibilities.

During my short time working with the QMS work product, I have reached the following conclusions:

  • Gold-plating your QM will cost you a lot – Make sure that you aim for fit for project purpose and scope, and that the scaling and tailoring of standards is described. You might not need everything related to quality that you can find on the corporate intranet. Less is more – too little is a problem.
  • Project quality is about customer happiness and satisfaction, meaning that you need to take the lead on the acceptance criteria and definitions that tell whether the product is fit for purpose. Furthermore, you need to think about how to get input from the customers.
  • Remember that PRINCE2 is about learning, meaning that you should keep some focus on improving the described processes and tools, based on the observations and lessons from your project. These activities need a place in the QMS.

Happy testing!
/Nicolai

Monday 31 August 2015

A few thoughts about risk

We've been around this topic a number of times - risk based testing. Another Danish blogger wrote a short post about it (in Danish) just before the summer holidays.

The point is right - base the test approach on the identified risks. But there are numerous problems with that approach. A few are mentioned below.



Customer input

The "stakeholders" who should be able to list and prioritise risks hardly know what they've bought or ordered. Most organisations or customers have little idea about what's going to hit them when they embark on projects where risk based testing could be of good use. They are about to participate in the expectation roller coaster ride where each day will bring new challenges and decisions forward and on top of that they have to prepare their own organisation and their customers for the changes they've ordered. They are not used to working with the fluffy term of "risks" and being the test manager trying to facilitate them is no easy task.

The professional testers

This group of project participants has to be kept on a short leash when it comes to risks. Depending on their interest, experience and beliefs, they can carpet bomb any risk session in a matter of minutes. Most of it is very relevant input, but too often it is input that is way outside what the project is capable of dealing with - even with lots of resources (time, hands and money).
That said, testers are usually your best chance of getting qualified risk input based on experience - so put on your angry test manager hat and cut away any risks that are improbable or impossible to deal with.

Development and service organisation

This might be your chance of getting some significant input for the risk list. Understand the changes that are being implemented: what's being reused, what's new development, who's the experienced crew and who's new - new and totally uncharted territory vs. known land. That's something you can get from the developers. Not on a silver platter, but they might have an idea early in the project about where and what will be impacted. Architects are a similarly good source for this kind of input, although they tend to be a bit further out into the universe.
The service organisation - well, they know where it failed last time, and the time before that. Service delivery managers are usually the best source of historic insight into failures that should have been addressed by the testing effort. If they have kept a record of their findings (yes, it can be recorded systematically in a database), they can even quantify and group this for you.
And if you, as the responsible test manager, are not able to get useful input from these critical parts of the project organisation, you should consider your position - or be prepared for a "memorable project experience".

Management

This might also be a good place to get input on the risks affecting the testing. Not in terms of tangible risks that can be listed and prioritised, but rather in terms of management's understanding of which risks are worth focusing on. For one thing, this will give the test manager an indication of whether the test organisation's and (project) management's understandings of risk are aligned. If not, then that is the primary risk to address - and stop testing while the priorities are being sorted out.

In fact, this is where you start. Get a risk definition for the project which is agreed with the management - be it project management, a project owner or a sponsor. Then you know where to aim, and where to prioritise as a test manager.

For the record - this blog is not dead, it just took a 14 week vacation. Happy testing. We're back in action.

>M

Tuesday 19 May 2015

QA Engineer walks into a bar...


I came across this one a while back - a fun reminder of equivalence partitioning and the test cases you can derive from different equivalence classes.
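Translated into code, the joke is simply one test case drawn from each equivalence class. A minimal pytest sketch, where order_beers() is a hypothetical function standing in for the bartender logic:

```python
import pytest

def order_beers(quantity):
    """Hypothetical system under test."""
    if not isinstance(quantity, int):
        raise TypeError("quantity must be an integer")
    if quantity < 0:
        raise ValueError("cannot order a negative number of beers")
    return quantity

# One representative per equivalence class is enough.
@pytest.mark.parametrize("quantity", [1, 999999])  # valid orders
def test_valid_orders(quantity):
    assert order_beers(quantity) == quantity

@pytest.mark.parametrize("quantity", [-1, -99])  # invalid: negative amounts
def test_negative_orders(quantity):
    with pytest.raises(ValueError):
        order_beers(quantity)

@pytest.mark.parametrize("quantity", ["a lizard", 2.5])  # invalid: not a count of beers
def test_nonsense_orders(quantity):
    with pytest.raises(TypeError):
        order_beers(quantity)
```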

Happy testing!

/Nicolai

Friday 17 April 2015

Management Potential


I came across an interesting view on the (test) management discipline during a course I attended this week. The discussion was about what it takes to be a manager, and this was quantified in the following formula:

Management Potential = P * E * L * M

P: Personality

E: Experience

L: Leadership

M: Management

The points I find interesting about this way of perceiving a manager's potential are:

Four key elements define the cornerstones of the skills that are needed. Because the factors are multiplied, they are all essential: having no skill or ability in one of them equals a management potential of zero. At the same time, good managers need not be masters of all - they can compensate for a lack in one area by being skilled in another.

So what is the reasoning behind this discussion? Think about your own management potential: maybe you want to cultivate an area where you are weak, or maybe you would like to strengthen one skillset to compensate for another. In my opinion, inexperienced managers especially can benefit from putting some thought into how they compensate for their lack of experience (while building it).

Food for thought – Have a nice weekend & Happy testing!

/Nicolai

Wednesday 18 March 2015

Guiding your estimates in the right direction


Problem: Estimating effort for (test-)activities is inaccurate

Estimation, or educated guessing, is the foundation of many an activity related to the management of a project. Given that estimation is not an exact science, there will be inaccuracy in the numbers used for planning and resource allocation.

 
Solution: Apply general estimation rules and use a technique to guide your estimates.

To improve the accuracy of your estimates, you can rely on some general rules and use a technique to guide the estimation.

General estimation rules:

  • A full-time resource will only be productive around 80% of the time.
  • Shared resources working on multiple projects will have lower productivity, since they lose time to context switching.
  • There is a high degree of optimism in estimates, as most people have a tendency to underestimate complexity and time consumption.
  • Base your estimates on multiple sources – don't just use your own experience; draw on the experience of others if possible.
  • Estimation of a task should be done by the person or team responsible for doing the task.
  • Remember to include issue management, meetings and other supportive activities in the estimate.
  • Break estimates down if possible.
  • Make sure that assumptions, exceptions and limitations are clearly documented as part of the estimate.

Techniques for estimating:

  • Top-down estimation
  • Bottom-up estimation
  • Combined top-down & bottom-up estimation
  • Comparative estimation
  • Parameter estimation
  • One-point estimation
  • 3-point estimation (see the sketch below)
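As an illustration of the last one, here is a small sketch of 3-point (PERT) estimation, which weighs an optimistic, a most likely and a pessimistic estimate into an expected value:

```python
def three_point_estimate(optimistic, most_likely, pessimistic):
    """Return (expected effort, standard deviation) using PERT weighting."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# Example: a test task estimated at 3 / 5 / 12 days.
effort, spread = three_point_estimate(3, 5, 12)
print(f"Expected: {effort:.1f} days, +/- {spread:.1f}")  # Expected: 5.8 days, +/- 1.5
```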

The point is that there are numerous techniques that allow you to do more or less scientific estimation – You need to choose the one that fits the purpose and tolerance of your project.

Happy testing!

/Nicolai

Monday 9 March 2015

Supporting your principles of testing

Problem: Your principles of testing cannot stand alone.

Having spent the time defining the principles you want your testing to follow is not enough – it is definitely a good start, but the principles need to be anchored in the project through activities and actions that turn principle into practice.

Solution: Walk the talk by supporting the principles with action.

One thing is to have defined test principles; another is to support them in practice in the projects. I am going to look at some supportive actions related to the 7 Principles of Testing from the ISTQB syllabus, as these are commonly accepted.

1) Testing shows the presence of defects, but cannot prove that there are no defects. This means that testing needs to be very structured to ensure that it probes the right areas. Risk based testing can be one approach to testing the right areas, but no matter how you go about it, there are two things I would recommend you inject into the project to address this principle: a strategy that guides your test planning and priorities, and expectation management that ensures that the steering group, project managers, external stakeholders etc. are aware that the test will not prove that there are no errors.

2) Exhaustive testing is impossible: testing everything, including all combinations of inputs and preconditions, is not feasible. Applying test techniques such as boundary-value analysis, all-pairs testing and state transition tables (a small sketch follows after the principles) aims at finding the areas most likely to be defective. Use equivalence partitioning as a driver for designing the test to a level of detail that makes sense for the project.

3) Early testing. Remember the principles of verification and validation? Apply them early and make a plan for when to do it – some want the V-model to drive early testing, others have different approaches; the successful one is the one that ensures testing happens in a timely manner (read: as early as possible). Another thing that facilitates early testing is to have a champion in the project from the start – someone who advocates good quality from project planning onwards.

4) Defect clustering. Does your risk driven approach consider technical complexity and risk? If not, then you might want to consider this in conjunction with some coverage analysis. Another approach is to do experience driven explorative testing with the purpose of finding candidates for a thorough test – it is a shortcut to finding those troubled areas.

5) Pesticide paradox. Change your approach every now and then – switch responsibility for functional areas within the test team, invite new people in, try explorative testing, arrange bug hunts, have a bug-off of testers vs. developers or business vs. IT team. The point is that you need to refresh the testing now and then, and you might as well put it in your plans and have some fun while doing so.

6) Testing is context dependent. Base your strategy on the nature of the system and the error tolerance of the receiver. Don't just run all test cases because you can – run those that are related to the functions delivered.

7) Absence-of-errors fallacy. Invite those users in early, sooner rather than later, to gather feedback that ensures you are moving towards ‘fit for purpose’. Involving users requires careful planning, and since you want early involvement this is something you need to address ASAP in any project.

Again, my point is that you need actions that support the principles; without them, your principles are worthless. Another point is that you need to drive all of the above aided by common sense. Do not invent a wheel unless you have a need for driving somewhere.
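As an example of turning principle into practice, here is a minimal sketch of a state-transition-table test, as mentioned under principle 2. The login component and its table are invented for the illustration; in a real project next_state() would drive the actual application and read back its state:

```python
# The specified transition table: (current state, event) -> expected next state.
SPEC_TABLE = {
    ("logged_out", "login_ok"):   "logged_in",
    ("logged_out", "login_fail"): "logged_out",
    ("logged_in",  "logout"):     "logged_out",
    ("logged_in",  "timeout"):    "logged_out",
}

def next_state(state, event):
    """Stand-in for the system under test."""
    return SPEC_TABLE.get((state, event), state)

def test_every_row_of_the_table():
    # The table itself drives the test design: one check per row.
    for (state, event), expected in SPEC_TABLE.items():
        assert next_state(state, event) == expected
```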
Happy testing!

/Nicolai

Wednesday 25 February 2015

Risks against test start

Problem: The test is delayed before it has even started

A test may be delayed for different reasons. A test not starting at the planned time is due to events that violate the start criteria for the test activity.

Solution: Monitor and act on risks against your test start criteria.

In order to monitor the risks you need to know what they are – the most common risks against starting a test are related to delays in activities on the critical path for your test. I have listed the most common problems I see:

  • Missing test component or system.
  • High error levels in test component or system.
  • Testware not complete.
  • Missing or wrong test data.
  • Too many (partial) deliveries for test.

There are two ways of approaching these risks – reactive and proactive.

The reactive approach is easy, but it introduces waste into your development life cycle:

  1. Wait until the test is delayed by one of the above.
  2. Look for the root cause of the delay.
  3. Eliminate the root cause.
  4. Start the test at the first possible time.

The proactive approach is also easy, but it requires a higher degree of advance planning than the reactive one:

  1. Do a risk identification workshop, listing things that will derail your test.
  2. List actions that minimize likelihood and impact of identified risks.
  3. Adopt actions in your test plans, and follow-up on these.
  4. Do a test readiness review in due time before the test starts – a minimal sketch of such a check follows below.
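That readiness review can even be partly automated. Below is a minimal sketch of a check against start criteria; the criteria and the probes behind them are hypothetical examples, not a real tool:

```python
# Each start criterion is paired with a probe. In a real setup the probes
# would ping the environment, query the bug tracker, verify test data etc.
READINESS_CRITERIA = {
    "Test environment deployed":      lambda: True,
    "No open blocker defects":        lambda: True,
    "Test data loaded and verified":  lambda: False,
    "Testware reviewed and approved": lambda: True,
}

def readiness_review():
    failed = [name for name, probe in READINESS_CRITERIA.items() if not probe()]
    for name in READINESS_CRITERIA:
        print(("FAIL  " if name in failed else "OK    ") + name)
    return not failed

if not readiness_review():
    print("Start criteria violated - do not start the test.")
```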

To help you on your way you might want to consider the following actions to mitigate the mentioned risks:

  • Missing test component or system.
    • Likelihood is reduced through continuous integration, automation of the deployment/build process, and release management of test deliveries. A high degree of automation is preferred in order to ensure timeliness and correctness.
  • High error levels in test component or system.
    • Test as part of development, and cooperation between developers and testers. Focus should be on early testing, and on testing the critical items that are must-haves in the following test phases. Furthermore, a regular bug triage session can help spend resources on bug-fixing that is important from a test perspective, rather than random bug-fixing.
  • Testware not complete.
    • Test planning is paramount. The plan needs to include all activities and the deliveries they lead up to. Follow up on the critical path, and make sure that you update the plan as you go along.
  • Missing or wrong test data.
  • Too many (partial) deliveries for test.
    • Release management will get you far – know what is in the box, and bundle the test to match the partial deliveries. Prioritize test cases based on the release content, to ensure that you do not run cases prematurely.

Happy testing!
/Nicolai

Tuesday 17 February 2015

Gold plating = Cost injection


Problem: Gold plating of your product is expensive.

Ever experienced features being added to your product without adding any real value? If yes, then you have probably been gold plating that delivery of yours. Gold plating is done with the best of intentions, and most of the time it is appreciated by the customer. However, there are many cases where it is not, and the gold plating backfires on your product.

Solution: Stick to implementing approved features.

Usually gold plating is introduced either by the project team or by a project manager, at no cost to the customer. Gold plating does not come cheap, however! It can increase operation and maintenance costs significantly and will make a dent in quality. The reason is that it raises complexity, introduces features that are not traceable, and operates outside the feature approval process – in other words, it is a risk that needs to be mitigated.

Although gold plating sounds good to everyone, it is bad for the project team in the long run. Gold plating increases the development cost and raises the expectations of the customer by inflating the features per hour / story point of development. If you do another project for the same customer, they will again expect you to deliver a product with extra features.

Avoiding gold plating is important to stay in control of the product and to ensure that the development effort is spent on value-adding features. If you operate in an agile setup, you should see gold plating as a violation of the product backlog setup and its priorities. Avoiding gold plating requires discipline. It requires the team to ask themselves whether they are adding unnecessary or over-engineered functionality to the application.

Start out by discussing gold plating as part of your retrospective, and then set some ground rules like:

  • Never add any extra functions or features to a story without approval.
  • Product managers and sales people should use the product log to add new features – not the sprint log.
  • Establish proper communication lines within the project team, ensuring that customer approval is obtained.

Then do a little measurement of the number of features delivered per iteration vs. the number of features in the accepted sprint log. If you have gold plated the solution anyway, make sure that the result can still be tested, and ultimately accepted by the client - otherwise you just might end up paying a very high price for the golden plates.

Happy Testing!

/Nicolai

Wednesday 11 February 2015

Your very best friend - The Service Delivery Manager

Working with test also means networking. Since most test management is about getting the product tested and shipped according to deadline, a natural part of this networking is with the development organisation: project managers, developers, testers, business partners and other SMEs. They provide essential knowledge and resources to the test effort, getting things prepared and executed.



This means we have a tendency to forget our most important body of knowledge - the "service organisation". They tend to take over servicing of the product at go-live, and then we forget them. Let's correct this mistake in the future.


First and foremost, the service delivery manager should be included in the handover of the product prior to go-live. Essential knowledge from the test team must be embedded in the service organisation - not as a long, trivial list of open defects, but as a story about product quality, workarounds and other necessary information.



Secondly, the service delivery manager sits on a treasure trove of knowledge about real user problems and user behaviour, as well as the technical problems faced by operations. This can be used by the test organisation for a number of tasks, including estimation, prioritisation of the test effort, general test planning, and as a source for planning tests of "off-normal" situations.

The service delivery manager is as such an essential member of the test team - not on a permanent basis, but ad hoc. At the right time during the project, the service delivery manager is your best friend.
If the service delivery manager is not able to provide the input, he is at least the gateway to the organisation where the knowledge is present and available. Just kick in the door.

Monday 9 February 2015

TDD – My experience

Reading this excellent post at ’The Codeless Code’ I came to think of my own experience with Test Driven Development (TDD): http://thecodelesscode.com/case/44
 
My first take on Test Driven Development was many years ago, before agile had really made an impact. We faced the following challenges:
  • Customer wanted shorter development cycles.
  • Development was outsourced.
  • Test resources were scarce and had no coding skills.
  • Offshore resources had little business domain knowledge.
 
In order to address especially the last two bullets, it became evident that we had to change strategy and focus on implementing a process that supported the offshore development team and enabled the onshore resources to assist the offshore team through reviews and guidance.
 
TDD was introduced as part of the low level specification done by the offshore team. One of the sections in the specification dealt with the unit test cases that had to be written to cover the functionality detailed in the spec.
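For readers unfamiliar with the mechanics, here is a minimal test-first sketch of the idea: the unit test is written from the low-level specification before the production code exists. The calculate_rebate() function and its rule are invented for the illustration:

```python
import unittest

# Written first, straight from the (hypothetical) spec:
# "orders of 1000 or more get a 10% rebate".
class TestCalculateRebate(unittest.TestCase):
    def test_rebate_at_threshold(self):
        self.assertEqual(calculate_rebate(1000), 100)

    def test_no_rebate_below_threshold(self):
        self.assertEqual(calculate_rebate(999), 0)

# Only then is the production code written to make the tests pass.
def calculate_rebate(order_total):
    return order_total * 0.10 if order_total >= 1000 else 0

if __name__ == "__main__":
    unittest.main()
```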
 
Another challenge was that the offshoring happened fast, and we had little time to train the team. That meant we had to come up with ways of “testing” the team’s understanding before countless hours were spent coding a solution. We found that testing the solution late in the development cycle often proved too late to counter misunderstandings. So we came up with this very simple procedure for handing over a development assignment to the offshore team:
 
  1. Walkthrough of the business requirements and the related high level specification, to empower the offshore team to take up development.
  2. The offshore team wrote a low-level detailed specification of the solution, including test cases, which was sent for review.
  3. Testers reviewed the test cases, architects reviewed the solution, and senior developers reviewed the pseudo code for completeness and compatibility with the current solution.

The interesting thing was that it almost always proved to be the test cases that gave the pointers on whether or not the proposed solution was in line with expectations.
 
Once the low-level design was approved, the test cases were implemented. Once done, they were checked in, added to the system codebase, and baselined as part of the delivery. After development of the actual solution, all test cases became part of the regression test suite, meaning that we soon had lots of automated tests on the project, leaving us with a high level of code coverage.
 
The real challenge of introducing TDD was shaping the organization to facilitate this new way of working, and enforcing the procedures. There was quite a stir in the onshore organization: not only did they have to embrace the new offshore colleagues, they also had to hand over some of their assignments to them. At first the offshore colleagues were put off by the constant review and scrutiny of their code – little code could be written before passing a series of quality gates. These gates were not in any of the standard development processes, meaning that they were sailing uncharted waters. This meant a lot of explaining and discussion up front before the TDD approach could be attempted.
 
The biggest impact was on the testers – they had to abandon trivial functional testing, as this responsibility now rested on the shoulders of the developers. This was hard, as they were used to writing test cases one to one against the functional requirements. Their scope now expanded to compiling the TDD test results into test coverage reporting, and then testing the areas that looked a little weak. On top of this, they were now in charge of the factory acceptance, calling for testing focused on the system as a whole and challenging their business domain knowledge far more than they were used to.
 
Happy testing!
 
/Nicolai
 
PS. MSDN has a very nice guide for those wanting to pursue TDD:
http://msdn.microsoft.com/en-us/library/aa730844(v=vs.80).aspx
 

Tuesday 3 February 2015

Enforcing your quality standards


Problem: Enforcing the quality standards.

You spent months negotiating the quality policy and test strategy for your development department, and now it is time to implement them - but nothing is happening. The quality of your products remains the same, and there is little to no measurable change that points towards better quality.

Solution: Identify quality rules and enforce them as part of your build process.

Assuming that you have your quality goals in a quality policy and/or a test strategy, you will be looking for means to inject them into the development organization. Focus on a few goals at a time, seek fact-based follow-up on trends, and be clear about how the means to reach each goal will be enforced.

A word of advice – without management commitment you will not get very far with your efforts, so a good place to start is to get management buy-in on enforcing that quality policy. Another thing to think about is that you need help from all members of your development community, meaning that you will have to bring arguments that can be understood across the organization when explaining the changes. Avoid communication like this:


A good place to start is the rules for checking in / committing code. Here you have a chance to enforce a quality gate based on rules that support your goals. Furthermore, it is important that the state of the current build is visible to everyone who works on it, allowing a quick response to problems. Collect metrics that allow you to monitor trends, and use these metrics to support the quality policy goals.

Communicate your goals and how they are measured. E.g.: test coverage must be equal to or higher than in the last build – measured as the line coverage delta in the build report vs. the previous build. Define build break rules around these measurements, and stop builds that violate or jeopardize quality.
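As an illustration, here is a minimal sketch of such a build break rule. The report file names and their format are assumptions, not the output of any particular tool:

```python
import json
import sys

def check_coverage_gate(current="coverage.json", previous="coverage_prev.json"):
    # Compare the line coverage of this build against the previous one.
    with open(current) as f:
        now = json.load(f)["line_coverage_pct"]
    with open(previous) as f:
        before = json.load(f)["line_coverage_pct"]
    print(f"Line coverage: {now:.1f}% (previous build: {before:.1f}%)")
    if now < before:
        print("BUILD BROKEN: coverage dropped below the previous build.")
        sys.exit(1)

if __name__ == "__main__":
    check_coverage_gate()
```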

Another thing that can support the check-in rules is to outline best practice. Simple guidelines will get you far and promote a higher standard of builds. Formulate these guidelines with statements like: Don't check in on a broken build. Always run the commit tests locally before committing. Do not comment out failing tests.

Finally, you can consider deriving personalized measurements from the trends gathered on the quality metrics, and using these as a driver for one's ability to get a bonus or a raise.

Happy testing!

/Nicolai


PS. I just came across this blog post, promoting tool-based build monitoring:
http://www.federicosilva.net/2015/02/about-continuous-integration-and-build.html

Thursday 29 January 2015

Following up on the project risk log

Problem: Measuring progress to facilitate risk management.
Many a risk log has ended its life without any real value being added, due to a lack of implementation and follow-up. The reason is, in my experience, that the project fails to use the metrics for follow-up and governance.
Solution: Baseline the risk register on a regular basis and compare metrics.
In order to baseline the risk log you need a document containing the risks and some metrics. There are many ways of working with risks, but the most common is probability and consequence. This allows the project to plot the risks in a risk matrix, detailing the priority the risks should have when working on mitigation. The matrix might look like this:
 
From here on the fun starts! Based on the risk mitigation activities you implement, you should see changes in the risk matrix. E.g. if your strategy is to address high probability risks first, you should see a trend of risks moving downwards in the matrix. Here is how it works:
On a monthly basis, you compare the risk matrix to last month's and look for the following:
  • No change = Your risk management process is not kicking in, as the log is not changing!
  • Fewer risks = Your risk management is either mitigating those risks, or they are turning into issues.
  • More risks = Project is getting wiser, risks are identified – Food for thought and planning / mitigation.
  • Risks are migrating South & West = you are mitigating the probability and impact of the risks.
  • Risks are migrating North & East = your efforts are not mitigating the probability and impact of the risks.
By doing this you will raise awareness on the risk management exercise, and force stakeholders to engage in a dialogue on risk within the project.
Another, even simpler way of baselining is to count the number of risks per status (assuming that you operate with one). You might have a list like this:
Potential: 14
Actual: 25
Averted: 12
Irrelevant: 3
Total: 54
Comparing the numbers on a regular basis will tell you whether your risk management process is moving the risks in the right direction. You can even make a little trend graph to keep things manager-friendly.
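A small sketch of that comparison, using the counts above against an invented set of numbers for the following month:

```python
last_month = {"Potential": 14, "Actual": 25, "Averted": 12, "Irrelevant": 3}
this_month = {"Potential": 10, "Actual": 22, "Averted": 19, "Irrelevant": 3}  # hypothetical

for status in last_month:
    delta = this_month[status] - last_month[status]
    print(f"{status:<11}{this_month[status]:>4} ({delta:+d})")

# More risks averted and fewer actual ones month over month suggests the
# mitigation work is moving the risks in the right direction.
```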
Finally, the use of tools will make it much easier for you in the long run – I’m not going to promote any tools, as I have no preference. One word of advice: use Dr. Google and look for a risk management template for Excel; that should get you quite far for free. Look for a template that draws the risk matrix for you.
Happy testing!
/Nicolai
 

Friday 23 January 2015

Static test planning


Problem: Planning the verification part of testing does not always happen in a structured way.

Reviewing, aka static testing, is the verification part of the V&V used for checking a delivery. Documentation is very important to testers, and test artifacts and the test basis need processes to support them throughout their lifetime. Part of this is the reviews and the document approvals they lead to – something that requires careful planning, as timing and resources must be aligned for the review to be done in due time.

Solution: Static test planning

Base your static test on a plan, much like you would in the case of dynamic testing. The plan should be built on the same principles as your dynamic test plan. The following items should (as a minimum) be detailed in your static test plan:

  • Deliverables under review – Look at the list of deliveries and nominate those that need review.
  • Approach – Describe the methods/types of review applied for each document under review.
  • Document approval – What are the prerequisites for approval of a document?
  • Procedures and templates – What is the procedure for reviewing and what tools are needed?
  • Responsibilities – Who is document owner, review lead and approver?
  • Staffing and training needs – Who will do the work? Do they need training in doing reviews?
  • Schedule – When should the work be done? Remember your dependencies to project plan.

What is the purpose of static testing? Much like in dynamic testing, we will be looking for deviations from standards, missing requirements, design defects, non-maintainable code and inconsistent specifications. The review process also resembles the one for dynamic testing:

Define the expectations for the review, then perform the review and record the results, and finally implement the required changes. When the review is done and the findings have been addressed, proceed to sign-off of the document.

Happy testing!

/Nicolai

Friday 9 January 2015

The quality of testing


Just recently I attended an event where the quality of testing was the topic of the day. There were some interesting points about how it could be measured and monitored that I would like to share.

The discussion covered the following topics, all related to the quality of testing:

  • Completeness of the test done
  • Credibility of the test documentation
  • Reliability
  • Quality Parameters to be observed

The discussion revolved around the parameters that drive the quality of a product, and especially the quality of the work done by testers. It quickly became evident that this work and its quality are hard to quantify, and that you have to look into the details of the work, looking for completeness, credibility and reliability.

You can see the parameters we discussed below.

Completeness of the test done:

  • Coverage vs. objects under test
    • Actual coverage vs. planned.
    • Execution of high risk areas first, and coverage of same.

Credibility of the test effort

  • Test cases are created and executed based on objective foundation
    • Structure in test cases.
    • Independence in review and test.
    • Dedicated resources for testing.
    • Separate estimate / time for test activities.

Reliability in test

  • Predictability of the test result (based on metrics and previous experience).
  • The status of the test can easily and accurately be derived at any time.
  • Defects and other observations are handled according to defined process.

Quality Parameters to be observed

  • Defined metrics that are monitored (less is more)
  • Approval criteria and compliance in the deliveries (e.g. open defects vs. categories)
  • Test efficiency per test phase (& number of defects found after go-live) – see the sketch below
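On the last bullet: one common way to quantify test efficiency per phase is the Defect Detection Percentage (DDP) - the share of the defects present that a phase actually caught, judged with the hindsight of later phases. A small sketch:

```python
def defect_detection_percentage(found_in_phase, found_later):
    """DDP: defects found in a phase vs. all defects present at that point."""
    return 100 * found_in_phase / (found_in_phase + found_later)

# Example: system test found 90 defects; 10 more surfaced after go-live.
print(f"System test DDP: {defect_detection_percentage(90, 10):.0f}%")  # 90%
```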

…And there are probably a lot more if you really want to scrutinize that test department of yours – just remember to have a little faith in the work done before going all in.

Happy testing & Have a nice weekend!

/Nicolai