Wednesday 25 February 2015

Risks against test start

Problem: The test is delayed before it has even started

A test can be delayed for many different reasons. When a test does not start on the planned date, it is because one or more events have violated the start criteria for the test activity.

Solution: Monitor and act on risks against your test start criteria.

In order to monitor the risks you need to know what they are. The most common risks against starting a test are related to delays in activities that are on the critical path for your test. I have listed the most common problems I see:

  • Missing test component or system.
  • High error levels in test component or system.
  • Testware not complete.
  • Missing or wrong test data.
  • Too many (partial) deliveries for test.

There are two ways of approaching these risks – reactive and proactive.

The reactive approach is easy, but it introduces waste in your development life cycle:

  1. Wait until the test is delayed by one of the above.
  2. Look for the root cause of the delay.
  3. Eliminate the root cause.
  4. Start the test at the first possible time.

The proactive approach is also easy, but it requires a higher degree of advance planning than the reactive approach:

  1. Do a risk identification workshop, listing the things that could derail your test.
  2. List actions that minimize the likelihood and impact of the identified risks (see the register sketch below).
  3. Adopt the actions in your test plan, and follow up on them.
  4. Do a test readiness review in due time before the test starts.
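
A risk register does not need to be fancy. As a minimal sketch of step 2 – the risk descriptions, the 1-5 scoring scale and the mitigations are made-up examples, not a recommended list – it could look like this in Python:

  from dataclasses import dataclass

  @dataclass
  class Risk:
      description: str
      likelihood: int   # 1 (unlikely) .. 5 (almost certain)
      impact: int       # 1 (minor) .. 5 (blocks test start)
      mitigation: str

      @property
      def exposure(self) -> int:
          # Simple likelihood x impact score used to rank the risks.
          return self.likelihood * self.impact

  register = [
      Risk("Test system not delivered on time", 3, 5, "Automate build and deployment"),
      Risk("Missing or wrong test data", 4, 3, "Agree on test data delivery up front"),
  ]

  # Walk through the biggest exposures first at the test readiness review.
  for risk in sorted(register, key=lambda r: r.exposure, reverse=True):
      print(f"{risk.exposure:>2}  {risk.description} -> {risk.mitigation}")

Follow up on the top of this list at every status meeting, not only at the readiness review.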

To help you on your way, you might want to consider the following actions to mitigate the risks listed above (a minimal readiness-check sketch follows the list):

  • Missing test component or system.
    • Likelihood is reduced through continuous integration, automation of the deployment and build process, and release management of test deliveries. A high degree of automation is preferred in order to ensure timeliness and correctness.
  • High error levels in test component or system.
    • Test as part of development, and foster cooperation between developers and testers. Focus should be on early testing, and on testing the critical items that are must-haves in the following test phases. Furthermore, a regular bug triage session helps direct resources towards the bug fixes that matter from a test perspective, rather than random bug fixing.
  • Testware not complete.
    • Test planning is paramount. The plan needs to include all activities and the deliveries these lead up to. Follow up on the critical path, and make sure that you update the plan as you go along.
  • Missing or wrong test data.
  • Too many (partial) deliveries for test.
    • Release management will get you far – know what is in the box, and bundle tests to match the partial deliveries. Prioritize test cases based on the release content, to ensure that you do not run cases prematurely.
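
Several of these mitigations boil down to checking your start criteria automatically, early and often. Below is a minimal sketch of such a readiness check in Python – the health-check URL, the test data path and the defect count are placeholders, not real endpoints – that can be run daily, or wired into the build, in the weeks leading up to test start:

  import os
  import urllib.request

  def system_reachable() -> bool:
      """Start criterion: the system under test answers its health endpoint."""
      try:
          with urllib.request.urlopen("https://test-env.example.com/health", timeout=5) as resp:
              return resp.status == 200
      except OSError:
          return False

  def test_data_present() -> bool:
      """Start criterion: the agreed test data set has been delivered."""
      return os.path.exists("testdata/accounts.csv")

  def no_open_blockers() -> bool:
      """Start criterion: no blocking defects remain open (stubbed here; query your defect tracker)."""
      open_blockers = 0
      return open_blockers == 0

  CHECKS = [system_reachable, test_data_present, no_open_blockers]

  if __name__ == "__main__":
      failures = [check.__name__ for check in CHECKS if not check()]
      for name in failures:
          print(f"NOT READY: {name}")
      raise SystemExit(1 if failures else 0)

A red result weeks before the planned start date is a far cheaper way to discover a risk than a delayed test.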

Happy testing!
/Nicolai

Tuesday 17 February 2015

Gold plating = Cost injection


Problem: Gold plating of your product is expensive.

Ever experienced that features are added to your product without adding any real value? If yes, then you have probably been gold plating that delivery of yours. Gold plating is done with the best of intentions and is, most of the time, appreciated by the customer. However, there are many cases where it is not appreciated, and the gold plating backfires on your product.

Solution: Stick to implementing approved features.

Usually gold plating is introduced either by the project team or by a project manager, at no cost to the customer. Gold plating does not come cheap, however! It can increase operation and maintenance costs significantly and will make a dent in quality. The reason is that it raises complexity, introduces features that are not traceable, and operates outside the feature approval process. In other words, it is a risk that needs to be mitigated.

Although gold plating sounds good to everyone, it is bad for the project team in the long run. It increases the development cost and raises the customer's expectations by inflating the features delivered per hour or story point of development. If you do another project for the same customer, they will again expect you to deliver a product with extra features.

Avoiding gold plating is important in order to stay in control of the product and to ensure that the development effort is spent on value-adding features. If you operate in an agile setup, you should see gold plating as a violation of the product backlog and its priorities. Avoiding gold plating requires discipline: the team must keep asking themselves whether they are adding unnecessary or over-engineered functionality to the application.

Start out by discussing gold plating as part of your retrospective, and then set some ground rules like:

  • Never add extra functions or features to a story without approval.
  • Product managers and sales people should use the product backlog to add new features – not the sprint backlog.
  • Establish proper communication lines within the project team, ensuring that customer approval is obtained.

Then do a little measurement: compare the number of features delivered per iteration with the number of features in the accepted sprint backlog. If you have gold plated the solution anyway, make sure that the result can still be tested, and ultimately accepted by the client; otherwise you might end up paying a very high price for the gold plates.
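
If your backlog tool cannot give you that number directly, a rough sketch of the measurement could look like this – the feature names are invented, and in practice both sets would be pulled from your backlog tool:

  # Hypothetical feature lists; in practice, pull both from your backlog tool.
  accepted_sprint_backlog = {"login", "password reset", "export to csv"}
  delivered_features = {"login", "password reset", "export to csv", "export to pdf"}

  gold_plated = delivered_features - accepted_sprint_backlog
  share = len(gold_plated) / len(delivered_features)

  print(f"Gold-plated features: {sorted(gold_plated)}")
  print(f"Share of the delivery outside the accepted backlog: {share:.0%}")

Even a crude number like this, tracked per iteration, makes gold plating visible at the retrospective.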

Happy Testing!

/Nicolai

Wednesday 11 February 2015

Your very best friend - The Service Delivery Manager

Working with test also means networking. Since most test management is about getting the product tested and shipped according to deadline, a natural part of this networking is with the development organisation: project managers, developers, testers, business partners and other SMEs. They provide essential knowledge and resources to the test effort, getting things prepared and executed.



This means we tend to forget our most important body of knowledge - the "service organisation". They take over servicing the product at go-live, and from then on we forget about them. Let's correct this mistake in the future.


First and foremost, the service delivery manager should be included in the handover of the product prior to go-live. Essential knowledge from the test team must be embedded in the service organisation - not as a long, trivial list of open defects, but as a story about product quality, workarounds and other necessary information.



Secondly, the service delivery manager sits on a treasure trove of knowledge about real user problems and user behaviour, as well as the technical problems faced by operations. The test organisation can use this for a number of tasks, including estimation, prioritisation of the test effort, general test planning, and planning tests of "off-normal" situations.

The service delivery manager is, as such, an essential member of the test team. Not on a permanent basis, but ad hoc, at the right times during the project, the service delivery manager is your best friend.
If the service delivery manager cannot provide the input directly, he is at least the gateway to the part of the organisation where the knowledge is present and available. Just kick in the door.

Monday 9 February 2015

TDD – My experience

Reading this excellent post at 'The Codeless Code', I came to think of my own experience with Test Driven Development (TDD): http://thecodelesscode.com/case/44
 
My first take on Test Driven Development was many years ago, before agile had really made an impact. We faced the following challenges:
  • Customer wanted shorter development cycles.
  • Development was outsourced.
  • Test resources were scarce and had no coding skills.
  • Offshore resources had little business domain knowledge.
 
In order to address the last two bullets in particular, it became evident that we had to change strategy and focus on implementing a process that supported the offshore development team and enabled the onshore resources to assist the offshore team through reviews and guidance.
 
TDD was introduced as part of the low-level specification done by the offshore team. One of the sections in the specification dealt with the unit test cases that had to be written in order to cover the functionality detailed in the spec.
 
Another challenge was that the offshoring happened fast, and we had little time to train the team. That meant we had to come up with ways of “testing” the team’s understanding before countless hours were spent coding a solution. We found that testing the solution late in the development cycle often proved too late to counter misunderstandings, so we came up with this very simple procedure for handing over a development assignment to the offshore team:
 
We walked through the business requirements and the related high-level specification to empower the offshore team to take up development. The offshore team then wrote low-level detailed specifications of the solution, including test cases, and these were sent for review. Testers reviewed the test cases, architects reviewed the solution, and senior developers reviewed the pseudo code for completeness and compatibility with the current solution. The interesting thing was that it was almost always the test cases that indicated whether or not the proposed solution was in line with expectations.
 
Once the low-level design was approved, the test cases were implemented. Once done, they were checked in, added to the system codebase, and baselined as part of the delivery. After development of the actual solution, all test cases became part of the regression test suite, meaning that we soon had a lot of automated tests on the project, leaving us with a high level of code coverage.
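
To give a feel for the flow – the example below is illustrative only and not from the project; the names, the language and the rounding rule are my own – a unit test was written first from the low-level spec, allowed to fail, and only then was the production code written to make it pass:

  # test_interest.py - written first, from the low-level spec.
  import pytest
  from interest import monthly_interest

  def test_monthly_interest_rounds_to_two_decimals():
      # Spec example: 1000.00 at 3.6% yearly gives 3.00 per month.
      assert monthly_interest(balance=1000.00, yearly_rate=0.036) == 3.00

  def test_negative_balance_is_rejected():
      with pytest.raises(ValueError):
          monthly_interest(balance=-1.00, yearly_rate=0.036)

  # interest.py - written only after the tests above fail for the right reason.
  def monthly_interest(balance: float, yearly_rate: float) -> float:
      if balance < 0:
          raise ValueError("balance must be non-negative")
      return round(balance * yearly_rate / 12, 2)

Once green, both would be checked in together, so the tests run with every subsequent build – which is how the regression suite and the code coverage grew.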
 
The real challenge of introducing TDD was shaping the organization to facilitate this new way of working and enforcing the procedures. There was quite a stir in the onshore organization: not only did they have to embrace the new offshore colleagues, they also had to hand over some of their assignments to them. At first the offshore colleagues were put off by the constant review and scrutiny of their code – little code could be written without passing a series of quality gates. These gates were not part of any of the standard development processes, meaning that they were sailing uncharted waters. So there was a lot of explaining and discussion up front before the TDD approach could be attempted.
 
The biggest impact was on the testers – they had to abandon trivial functional testing, as this responsibility now rested on the shoulders of the developers. This was hard, as they were used to writing test cases one-to-one against the functional requirements. Their scope now expanded to compiling the TDD results into test coverage reporting, and then testing the areas that looked a little weak. On top of this, they were now in charge of the factory acceptance test, calling for testing focused on the system as a whole and challenging their business domain knowledge far more than they were used to.
 
Happy testing!
 
/Nicolai
 
PS. MSDN has a very nice guide for those wanting to pursue TDD:
http://msdn.microsoft.com/en-us/library/aa730844(v=vs.80).aspx
 

Tuesday 3 February 2015

Enforcing your quality standards


Problem: Enforcing the quality standards.

You have spent months negotiating the quality policy and test strategy for your development department, and now it is time to implement, but nothing is happening. The quality of your products remains the same, and there is little to no measurable change pointing towards better quality.

Solution: Identify quality rules and enforce them as part of your build process.

Assuming that you have your quality goals captured in a quality policy and/or a test strategy, you will be looking for ways to inject them into the development organization. Focus on a few goals at a time, with fact-based follow-up on trends and clear decisions on how to enforce the means of reaching those goals.

A word of advice – without management commitment you will not get very far with your efforts, so a good place to start is to get management buy-in on enforcing that quality policy. Another thing to keep in mind is that you need help from all members of your development community, meaning that you will have to bring arguments that can be understood across the organization when explaining the changes.


A good place to start is the check-in rules for committing code. Here you have a chance to enforce a quality gate based on rules that support your goals. Furthermore, it is important to ensure that the state of the current build is visible to everyone who works on it, allowing a quick response to problems. Collect metrics that allow you to monitor trends, and use these metrics to support the quality policy goals.

Communicate your goals and how they are measured. For example: test coverage must be equal to or higher than in the last build, measured as the line coverage delta between the current build report and the previous one. Define build break rules around these measurements, and stop builds that violate or jeopardize quality.
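
A rough sketch of such a build break rule – the report file names and the "line_coverage" field are assumed formats, so adapt them to whatever your coverage tool actually produces:

  # break_build_on_coverage.py - fail the build if line coverage drops.
  import json
  import sys

  def line_coverage(report_path: str) -> float:
      # Assumed report format: a JSON file with a "line_coverage" percentage.
      with open(report_path) as f:
          return json.load(f)["line_coverage"]

  current = line_coverage("build/coverage.json")
  previous = line_coverage("build/previous_coverage.json")

  if current < previous:
      print(f"BUILD BROKEN: line coverage fell from {previous:.1f}% to {current:.1f}%")
      sys.exit(1)

  print(f"Coverage OK: {current:.1f}% (previous build {previous:.1f}%)")

Run as the last step of the build, this turns the quality goal into something the build server enforces rather than something a slide deck recommends.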

Another thing that can support the check-in rules is to outline best practice. Simple guidelines will get you far and promote a higher standard of builds. Formulate these guidelines with statements like: don't check in on a broken build; always run the commit tests locally before committing; do not comment out failing tests.
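
The "run the commit tests locally" guideline is easy to nudge with a Git pre-commit hook. A minimal sketch – the test command and the tests/commit folder are assumptions, so point them at your own commit test suite:

  #!/usr/bin/env python3
  # .git/hooks/pre-commit - refuse the commit if the local commit tests fail.
  import subprocess
  import sys

  result = subprocess.run(["python", "-m", "pytest", "-q", "tests/commit"])
  if result.returncode != 0:
      print("Commit rejected: the commit tests are failing - fix them, do not comment them out.")
      sys.exit(1)

Remember to make the hook file executable. Developers can still bypass it, but the default behaviour now matches the guideline.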

Finally, you can consider deriving personalized measurements from the trends gathered on the quality metrics, and use those as a driver for bonuses and raises.

Happy testing!

/Nicolai


PS. Just came across this blog, promoting tool-based build monitoring:
http://www.federicosilva.net/2015/02/about-continuous-integration-and-build.html