Wednesday, June 19, 2013

Agile manual test case creation and management

What happens if you work in an environment where you have to run acceptance tests manually; an environment where you cannot run an automated regression test suite in each iteration and thus can't report automatically which test cases have to be adapted or marked as obsolete?  Is agile even possible without automation?

In this post I propose a workflow for agile testers - agile as in agile athlete - to conquer this essential lack of automation using a TMS like TestLink (finally surrendering myself to using the word "agile" in one of my posts). You might ask: "Why is he talking about regression and release test suites and at the same time about agile?" Let me answer this: agile methodologies, e.g. Scrum, might not always be correctly implemented, and though this may (sometimes) lead to failure, testers have to adapt to the environment they test in.

Lifetime of an agile test case

First there are acceptance tests; then these are extended in order to find bugs or pursue other test goals. So we are in an iteration with a set of test cases, but in agile these can have quite different lifetimes. When testing manually we ask ourselves: what are the odds that this test will ever have to be run again? If the test case were automated, the cost of running it again would be nearly zero. But if it is checked manually, the picture changes a lot. For example:
  1. Imagine you develop a CMS. Users will want to export content, such as a picture, and there are tests for that. But in a later iteration that file will automatically be opened in MSPaint. This additional feature makes the first set of tests obsolete for manual testing: if the picture isn't opened in MSPaint, we'll investigate why, and checking whether the file was exported at all is part of that investigation.
  2. If there was a dead button in the GUI, the test for the fix should become part of our regression test suite. The lifetime of this test case is undetermined, but for now we know that it will survive until the next regression test.
  3. If we change test data or naming in the project, e.g. image vs. picture, the written test case will need refactoring in order to survive.
The important point is that testers, I assume, have developed an intuition about the lifetime of their test cases: testers assign life expectancies to test cases.

Test cases with low life expectancy (throw-away test cases)

If our intuition tells us that a test case is quite specific and probably won't ever need to be repeated, it's enough to write it down quickly and concretely, e.g.

1.) Select a picture.
2.) Click the button labelled "Download".
3.) Open the documents folder of the user.
=> There is a folder OurCMS. It contains the selected picture.

These test cases can be flagged in the TMS. In TestLink we can assign Test Importance = Low or create a custom field "throw-away". After the test case has run successfully, we deactivate this version. This way we know we won't have to care about it again, yet we still document what has been tested.
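The workflow above can be sketched as a small model. This is a minimal illustration, not TestLink's actual API; the class and field names (ManualTestCase, record_pass, importance, active) are hypothetical, with active standing in for TestLink's "Deactivate this version" state:

```python
from dataclasses import dataclass, field


@dataclass
class ManualTestCase:
    """A manual test case with a tester-assigned life expectancy."""
    name: str
    expected: str
    steps: list = field(default_factory=list)
    importance: str = "medium"  # "low" marks a throw-away candidate
    active: bool = True         # stands in for "Deactivate this version"

    def record_pass(self):
        # A throw-away case is deactivated right after its first pass,
        # but it stays in the TMS as documentation of what was tested.
        if self.importance == "low":
            self.active = False


download_test = ManualTestCase(
    name="Export picture to documents folder",
    expected="There is a folder OurCMS. It contains the selected picture.",
    steps=[
        "Select a picture.",
        'Click the button labelled "Download".',
        "Open the documents folder of the user.",
    ],
    importance="low",
)

download_test.record_pass()
print(download_test.active)  # False: deactivated, but still documented
```

The point of the model: deactivation is triggered by the life-expectancy flag, not by deleting the test case, so the test record survives as documentation.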

Test cases with high life expectancy

If our intuition tells us that a test case describes a general or essential aspect of the system's functionality, it will have a somewhat higher life expectancy. In example 1.) above, in the beginning we might talk about downloading and managing photos; later about graphics or images. Or we might change the default check-out location. If we anticipate such changes, we can reduce refactoring in future release or regression tests by writing our test cases so that they mimic coded tests:

var GRAPHIC = a picture
var BUTTON = Download

1.) Select GRAPHIC
2.) Click BUTTON.

Testers can easily document decisions they took while testing by recording the chosen test data:

GRAPHIC = VacationsExamples/DayOnTheBeach.png

As this test case will probably be run again in the future, we assign it a higher priority and avoid deactivating it. Having written the test case in this fashion, we'll be able to refactor it quickly when the system changes, simply by adapting the Preconditions.
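To show what "mimicking coded tests" buys us, here is a minimal sketch of the same idea in code. The names (TEST_DATA, STEPS, render_steps) are hypothetical; the point is that the placeholders GRAPHIC and BUTTON live in one place, so refactoring means changing one line of data, not every step:

```python
# Variables play the role of GRAPHIC and BUTTON in the written test case.
TEST_DATA = {
    "GRAPHIC": "VacationsExamples/DayOnTheBeach.png",  # decision taken while testing
    "BUTTON": "Download",
}

# Step templates reference the variables instead of concrete values.
STEPS = [
    "Select {GRAPHIC}.",
    "Click {BUTTON}.",
]


def render_steps(steps, data):
    """Substitute the chosen test data into the step templates."""
    return [step.format(**data) for step in steps]


for step in render_steps(STEPS, TEST_DATA):
    print(step)
```

If the project renames "Download" to "Export", only TEST_DATA changes; the rendered steps follow automatically, which is exactly the refactoring saving we want from the Preconditions block in the manual test case.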

Of course, now we talk to humans as if they were computers, but surely this is how non-testers treat us, too, demanding manual testing and high coverage, maintainability and traceability.
