Wednesday, June 19, 2013

Agile manual test case creation and management

What happens if you work in an environment where you have to run acceptance tests manually; an environment where you cannot run an automated regression test suite in each iteration and thus can't report automatically which test cases have to be adapted or marked as obsolete?  Is agile even possible without automation?

In this post I propose a workflow for agile testers - agile as in agile athlete - to conquer this essential lack of automation using a test management system (TMS) like TestLink (finally surrendering to using the word "agile" in one of my posts). You might ask: "Why is he talking about regression and release test suites and about agile at the same time?" Let me answer this: agile methodologies, e.g. Scrum, are not always implemented correctly, and though this may (sometimes) lead to failure, testers have to adapt to the environment they test in.

Lifetime of an agile test case

First there are acceptance tests; then these are extended in order to find bugs or pursue other test goals. So within an iteration we have a set of test cases, but in agile these can have quite different lifetimes. When testing manually we ask ourselves: what are the odds that this test will ever have to be run again? If the test case were automated, the cost of running it again would be nearly zero. But if it is checked manually, the picture changes a lot. For example:
  1. Imagine you develop a CMS. We want to export content, like a picture, and there are tests for that. But in a later iteration that file is automatically opened in MSPaint. This additional feature makes the first set of tests obsolete for manual testing: if the picture isn't opened in MSPaint, we'll investigate why, and in doing so we'll discover whether the file was exported in the first place.
  2. If there was a dead button in the GUI, the fix should form part of our regression test suite. The lifetime of this test case is undetermined, but for now we know that it will survive until the next regression test.
  3. If we change test data or naming in the project, e.g. image vs. picture, the written test case will need refactoring in order to survive.
The important point is that testers, I assume, have developed an intuition about the lifetime of test cases: they assign life expectancies to them.

Test cases with low life expectancy (throw-away test cases)

If our intuition tells us that a test case is quite specific and probably won't ever need to be repeated, it's enough to write it down quickly and concretely, e.g.

1.) Select a picture.
2.) Click the button labelled "Download".
3.) Open the documents folder of the user.
=> There is a folder OurCMS. It contains the selected picture.

These test cases can be flagged in the TMS. In TestLink we can assign Test Importance = Low or create a custom field "throw-away". After the test case has run successfully, we deactivate this version. This way we know we won't have to care about it again, but we still document what has been tested.

Test cases with high life expectancy

If our intuition tells us that a test case describes a general or essential aspect of the system's functionality, it will have a somewhat higher life expectancy. In example 1.) above, in the beginning we might talk about downloading and managing photos; later about graphics or images. Or we might change the default check-out location. If we anticipate such changes, we can reduce refactoring in future release or regression tests by writing our test case so that it mimics coded tests:

Preconditions:
var GRAPHIC = a picture
var BUTTON = Download
var CHECKOUT_FOLDER = $USER$/Documents/OurCMS

Steps:
1.) Select GRAPHIC
2.) Click BUTTON.
=> CHECKOUT_FOLDER contains the GRAPHIC file.

Testers can easily document decisions they took while testing by noting the chosen test data:

Notes:
GRAPHIC = VacationsExamples/DayOnTheBeach.png

As this test case will probably be run again in the future, we assign it a higher priority and avoid deactivating it. Having written the test case in this fashion, we can quickly refactor it when the system changes by adapting only the Preconditions.
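For comparison, here is roughly what the same test might look like as actual code - the style the template above mimics. This is an illustrative sketch only; the helper methods and constant names are hypothetical:

 // Illustrative sketch: the coded analogue of the parameterized manual test.
 // When the system changes, we touch only these "preconditions".
 private const string Graphic = "VacationsExamples/DayOnTheBeach.png";
 private const string Button = "Download";
 private static readonly string CheckoutFolder =
      System.IO.Path.Combine(
           System.Environment.GetFolderPath(System.Environment.SpecialFolder.MyDocuments),
           "OurCMS");

 public void TestCaseDownloadExportsSelectedGraphic()
 {
      SelectGraphic(Graphic);    // hypothetical UI-driver helpers
      ClickButton(Button);
      AssertThat(System.IO.File.Exists(
           System.IO.Path.Combine(CheckoutFolder, "DayOnTheBeach.png")));
 }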

Of course, now we talk to humans as if they were computers - but surely this is also how non-testers treat us, demanding manual testing along with high coverage, maintainability and traceability.


Thursday, May 30, 2013

Some advanced code testing

For those who don't want to, or don't have the time to, read thick books or framework testing APIs, I have collected a few topics that can improve your unit testing but might not always be obvious. Note that they're not restricted to C#, even though the examples are.

Mighty testing frameworks

There are some mighty mocking and unit testing frameworks out there with impressive features like mocking static methods you depend on, testing private members etc. Although in some cases these are vital, e.g. when writing tests for legacy code, they might lead you to write lower-quality code, taking less care about class design, dependencies etc. What's better: code quality or a new (and possibly costly) dependency on a mighty framework? Code quality does not depend on the language you're programming in; it depends mainly on you as a developer!

Use delegates for dependencies on static methods

C# delegates can work like functions in languages such as Python or Go, where functions are first-class citizens. This is a good thing, e.g. in the following case:
 public void Init()  
 {  
      var content = File.ReadAllText(this.Path);  
      ...  
 }  
We have a dependency here that makes testing somewhat difficult. Whereas in some situations it's good practice to wrap calls to static methods in an instance, here we can get along without one:
 public void Init()  
 {  
      this.Init(File.ReadAllText);  
 }  
 internal void Init(ReadAllText readAllText)  
 {  
      var content = readAllText(this.Path);  
      ...  
 }  
 internal delegate string ReadAllText(string path);  
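The payoff shows up in the test: we can now inject a stub instead of touching the file system. A minimal sketch, assuming NUnit-style asserts and a hypothetical class Parser that exposes Path and Content and implements the pattern above:

 public void TestCaseInitReadsContentViaInjectedDelegate()
 {
      var parser = new Parser { Path = "irrelevant.txt" };
      parser.Init(path => "canned content");  // stub replaces File.ReadAllText
      Assert.AreEqual("canned content", parser.Content);
 }

Since Init(ReadAllText) is internal, the test assembly needs access to it, e.g. via the InternalsVisibleTo attribute mentioned further down.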

Dependencies to the surface

Somewhat related to the last topic on delegates is the following: often we program quickly and only afterwards discover that we have dependencies, especially on built-ins like System.IO.FileInfo. Now, built-ins can have bugs, too, and a dependency on them had better be injected, e.g. for testability or extensibility. We can avoid missing those dependencies by not using global imports: delete the using statements at the beginning of your files and use full namespaces. What is the difference between
 using System.IO;  
 using System.Xml;  
 ...  
 internal void Init()  
 {  
      var file = new FileInfo(this.Path);  
      var dir = new DirectoryInfo(this.DirPath);  
      var xdoc = new XmlDocument();  
      ...  
 }  
and
 internal void Init()  
 {  
      var file = new System.IO.FileInfo(this.Path);  
      var dir = new System.IO.DirectoryInfo(this.DirPath);  
      var xdoc = new System.Xml.XmlDocument();  
      ...  
 }  
The difference is that in the second case you get so tired of typing the namespaces that you will want to do something about it. Since global imports are off the table for the moment, you will need to refactor towards dependency injection, interface extraction etc.
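As a sketch of where such a refactoring might end up (all names here are hypothetical, not a prescription), the FileInfo dependency can be extracted behind an interface and injected:

 public interface IFileSystem
 {
      bool FileExists(string path);
 }

 public class RealFileSystem : IFileSystem
 {
      public bool FileExists(string path)
      {
           return new System.IO.FileInfo(path).Exists;
      }
 }

 public class Importer
 {
      private readonly IFileSystem fileSystem;

      // the dependency is now visible in the constructor and replaceable in tests
      public Importer(IFileSystem fileSystem)
      {
           this.fileSystem = fileSystem;
      }

      internal bool CanImport(string path)
      {
           return this.fileSystem.FileExists(path);
      }
 }

A test can now pass in a fake IFileSystem and never touch the disk.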

The protected antipattern

There is this rule that production code and test code should not be mixed. In .NET we choose to create separate test projects and test only against the public interface, letting private members become internal where testing them is necessary and sensible.
Another possibility is to make members protected instead: in our test project we then simply inherit and override.
In my experience though, this second approach has at least two downsides that the use of internal doesn't have:

  • the access modifiers lose their meaning entirely, because we cannot control what is done with protected members.
  • what gets tested is the inheriting class, and in a more complicated setup we might eventually lose track: did we really test our code, or the test code?
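For reference, the internal route relies on the InternalsVisibleTo attribute that ships with .NET; the test assembly name below is a placeholder:

 // in the production assembly, e.g. in AssemblyInfo.cs; "MyApp.Tests" is a placeholder
 [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyApp.Tests")]

This keeps the access modifiers meaningful for all other assemblies.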

Any vs. Some

In order for our test cases to serve as documentation we need method and variable names communicating intention:
 public void TestCaseConstructorSetsProperties()  
 {  
      var to = new TestObject(anyParameter());  
      AssertThatPropertiesAreSet(to);  
 }  
My personal taste is to use the prefix any only if null is permitted, too. So if - following common practice - the constructor checks for a null argument, that will be another test case. We can opt for anyNonNullParameter() or simply:
 public void TestCaseConstructorSetsProperties()  
 {  
      var to = new TestObject(someParameter());  
      AssertThatPropertiesAreSet(to);  
 }  
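And the companion test for the null check might look like this (a sketch using NUnit's Assert.Throws; TestObject remains the hypothetical class under test):

 public void TestCaseConstructorThrowsOnNullParameter()
 {
      Assert.Throws<ArgumentNullException>(() => new TestObject(null));
 }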

Advanced setup and teardown - a file cleanup example

Have a look at the following code:
 public void TestCaseUsingFileSystem()  
 {  
      var file = createTestFile();  
      var to = new TestObject();  
      to.doSomething(file);  
      AssertThatSomethingHoldsOn(to);  
      file.Delete();  
 }  
The problem here is that the file probably won't be deleted if the assertion fails, making this test case fragile. You can surely think of other objects that need proper teardown even when the test fails, in order to keep the test fixture correct. A common solution to this problem is to introduce a class variable serving as trash, plus a shared teardown. Mind also the file creation method, which could simply have kept the name createTestFile() as before:
 private readonly List<object> trash = new List<object>();  // System.Collections.Generic

 public void TestCaseUsingFileSystem()
 {
      var file = createAndRegisterForCleanupTestFile();
      var to = new TestObject();
      to.doSomething(file);
      AssertThatSomethingHoldsOn(to);
 }

 public void TearDown()
 {
      foreach(var item in this.trash)
      {
           try
           {
                // System.IO.File is a static class, so the trash holds FileInfo instances
                var file = item as FileInfo;
                if(file != null) file.Delete();
                ...
           }
           catch(Exception e)
           {
                reportToTestRunner(e.Message);
           }
      }
 }

 private FileInfo createAndRegisterForCleanupTestFile()
 {
      var file = createTestFile();
      this.trash.Add(file);
      return file;
 }

Event checking

You should always check whether events are raised, too! An easy pattern for doing so is this:
 public void TestCaseSomeMethodRaisesEvent()
 {
      var eventHasBeenRaised = false;
      // subscribe with += : an event cannot be assigned to from outside its class
      testObject.SomeEvent += (sender, args) => eventHasBeenRaised = true;
      testObject.SomeMethod();
      AssertThat(eventHasBeenRaised);
 }
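A variation on the same pattern captures the event arguments so they can be inspected, too (SomeEventArgs is a hypothetical args type):

 public void TestCaseSomeMethodRaisesEventWithExpectedArgs()
 {
      SomeEventArgs receivedArgs = null;
      testObject.SomeEvent += (sender, args) => receivedArgs = args;
      testObject.SomeMethod();
      AssertThat(receivedArgs != null);
      // further asserts on receivedArgs go here
 }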

Friday, May 10, 2013

Android Programming and Testing with ADT

It's been a while since I dared to take a look at Android application development using Eclipse. I'm certainly surprised by the high-quality tutorials and documentation. It is fun to learn - with the right tools, of course. Also, the developer community hasn't left out the vital testing perspective, delivering automatic test project setup and JUnit extensions (even mocks!). If you want to make your first steps in Android application development and learn how to test it right from the start, I recommend taking the following steps, assuming you have some experience with Java, Eclipse and, of course, unit testing:


  1. Download the Android ADT Bundle here.
  2. Follow the steps for setting it up here.
  3. Complete the tutorial Building Your First App. I recommend using a real device, not only for performance, but because it feels great ;-) In case you're working on Linux, you'll probably have to add a udev rule. This is well documented in the tutorial. Tip: find your vendorId using lsusb, and use MSC as the transfer protocol.
  4. Skim through Managing Projects from Eclipse with ADT, Building and Running from Eclipse and Testing Fundamentals, the latter being a fascinating read in itself for testing developers (and for suffering testers in automation).
  5. Make sure you have the Samples for SDK package. If you don't, download it using the Android SDK Manager as described here.
  6. Skip Testing from Eclipse with ADT and dive directly into the Activity Testing Tutorial.
This is a good starting point and fun to do :-)