Friday, 25 October 2013

Parallel testing as part of a platform upgrade

Problem: Testing an application after platform upgrades can be tricky.

We are about to start a project where an application is moving from an old platform to a new one, and this calls for testing. Unfortunately, the application is not well documented, in either requirements or test cases. This makes the test tricky, because the expected results cannot be determined from existing documentation.

Solution: Deploy parallel testing techniques!

We do in fact have the expected result in a well documented form – a running system in production. The production deployment has been running for years, and no bugs are raised against it. The system is a number-cruncher based on a huge order database, which makes it perfect for parallel testing. This is the prospect for our test:

We assume that the result in production, running on the old platform, is valid, and hence equal to the expected result of the test cases to be run against the application after deployment to the new platform. Our test cases are the functions that can be invoked in the production environment, and the input data is the datasets from the database.

This means that we need the following setup to run our test:
Two test environments, one running the old platform (same as production) and one running the new platform (same as the future production), both pointing at the same test data. We can use one database for input data, because the application makes calculations on data rather than changing it. So we will copy data from production and use that as the foundation for our test.

This is how we will create our test cases:
Reverse engineering the production system. For each screen, we will list all functions and break them down into steps. From production we get the test scenarios for each test case from business examples, and those dictate what test data we will need. On top of that, there are some batch jobs and other ‘hidden’ functions that will require attention.
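To make the reverse-engineering step concrete, one simple way to capture the result is a small record per function found on a screen. This is only a sketch; the screen names, function names, steps, and dataset identifiers below are invented for illustration:

```python
from dataclasses import dataclass, field

# Hypothetical record for one reverse-engineered test case:
# one entry per function found on a production screen.
@dataclass
class TestCase:
    screen: str                     # the screen the function lives on
    function: str                   # the function invoked on that screen
    steps: list = field(default_factory=list)  # steps broken out of the function
    dataset_id: str = ""            # which copied production dataset drives this case

# Example catalogue built during reverse engineering (names are made up):
cases = [
    TestCase("OrderOverview", "recalculate_totals",
             steps=["open order", "trigger recalculation", "read totals"],
             dataset_id="orders_2013_q3"),
    TestCase("BatchMonitor", "nightly_settlement",
             steps=["queue batch job", "wait for completion", "read output"],
             dataset_id="orders_2013_q3"),
]
```

A catalogue like this also doubles as the documentation the application was missing, since each record names the screen, the function, and the data behind it.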

This is how we will run the test:
First we run each case in the old environment, then in the new environment. If the results are the same, we move on to the next case.
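The loop above can be sketched as a small comparison harness. This is a minimal illustration, assuming each environment can be driven by a callable that takes a case and returns its result; the stand-in dictionaries at the bottom are invented, not the real application:

```python
# Minimal sketch of the parallel run: the old platform's result is
# treated as the expected result for the new platform.
def run_parallel_test(cases, run_old, run_new):
    """Run each case on both platforms and collect any mismatches."""
    failures = []
    for case in cases:
        expected = run_old(case)   # old platform = the accepted baseline
        actual = run_new(case)     # new platform = the system under test
        if actual != expected:
            failures.append((case, expected, actual))
    return failures

# Illustrative stand-ins for the two environments (one mismatch seeded):
old_results = {"recalculate_totals": 100, "nightly_settlement": 42}
new_results = {"recalculate_totals": 100, "nightly_settlement": 41}

failures = run_parallel_test(
    list(old_results),
    run_old=old_results.get,
    run_new=new_results.get,
)
# failures now holds the one case where the platforms disagree
```

Keeping the mismatch list rather than stopping at the first difference makes it easier to spot patterns, e.g. every failing case touching the same batch job.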

The cool thing about doing it this way is that we now have documented test cases and a very good foundation for regression testing the application in the following releases.

Happy testing & Have a nice weekend!



  1. Hi Nicolai,

    I just did something similar, although I didn't have the luxury of the two systems in parallel, so I had to archive data before the migration. I also kept the tests running on the migrated system for a short period as a kind of stability test, which I hadn't tried before but which worked well.

    1. Thank you for your perspective on parallel testing!

      Have a nice day!