Saturday, November 19, 2016

Fast Frontend Development and HUI Testing with mockyeah

The backend mocking library mockyeah has recently gained new features inspired by WireMock. Its big advantage is that mock data prepared during development can be re-used for testing.
By mocking the backend there is no immediate need to set up and maintain backend data, which speeds up development.
You can find working sample code here.

Fast frontend development - backend mocking

In order to be independent of a real backend instance during frontend development, we'll organize our expected responses and add them to a default setup. We'll then configure our frontend to point to the mockyeah instance, start the mock server with the default setup, and we're ready to develop our web application!

Organizing the mock data

By organizing our mock data we'll later have easy access to it in our test definitions:
// mock-data.js
var usersGet = {
    pattern: /users$/,
    ok: {
        status: 200,
        json: [
            {
                id: 1,
                username: 'user1',
                email: 'user1@email.com'
            },
            {
                id: 2,
                username: 'user2',
                email: 'user2@email.com'
            },
            {
                id: 3,
                username: 'user3',
                email: 'user3@email.com'
            }
        ]
    },
    ko: {
        status: 500,
        json: {errorCode: "UnexpectedException"}
    }
};

module.exports = { usersGet: usersGet /*, userGet, userPost, ... defined analogously */ };

Collecting mock data

mockyeah has a record-and-play feature that allows it to run as a proxy and save data you want to mock. Have a look at the library's documentation to find out more.

Define a default setup

//default-setup.js

var mockyeah = require('mockyeah');
var mockdata = require('./mock-data');

var init_mock = function () {
    mockyeah.get(mockdata.usersGet.pattern, mockdata.usersGet.ok);
    mockyeah.get(mockdata.userGet.pattern, mockdata.userGet.ok);
    mockyeah.post(mockdata.userPost.pattern, mockdata.userPost.ok);
};

init_mock();

exports.init_mock = init_mock;
exports.mockyeah = mockyeah;
exports.mockdata = mockdata;
Now we run node default-setup.js and have a backend available during development.
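During development we then just point the frontend at the mock server. A minimal, hypothetical configuration module might look like this (the file name and property name are made up; port 4001 matches the mockyeah logs shown later in this post):
// api-config.js (hypothetical - adapt to however your app resolves its API base URL)
module.exports = {
    apiBaseUrl: 'http://localhost:4001'
};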

Hermetic User Interface (HUI) testing

The frontend by itself is a subsystem of your application. HUI tests aim for more stable and maintainable tests by mocking out every dependency on a service. In practice this means running the tests you would normally implement at system level, e.g. with Selenium WebDriver or Protractor, against the mocked backend.

Starting mockyeah for testing

mockyeah starts automatically when it is required, using the configuration in the .mockyeah file of your project.
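A minimal configuration might look like this (only keys that appear elsewhere in this post are shown; see the mockyeah documentation for the full set):
{
  "port": 4001,
  "output": true,
  "journal": false
}
With that in place, your test suite simply imports the default setup defined above: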
var setup = require('./../default-setup');

describe('Handle users', function () {
    describe('Users view', function () {
        beforeAll(setup.init_mock);
        afterAll(setup.mockyeah.close);

Changing mockyeah behaviour during test definition

Simply set the response in the test definition itself. Notice that we re-use the mockyeah instance exported by the default setup.
describe('Handle users', function () {
    describe('Users view', function () {
        // ...
        
        it('should load the users list', function () {
            return usersPage.navigateToUsersView().then(function () {
                expect(usersPage.getUserList()).toContain({id: 1, username: 'user1'});
                expect(usersPage.getMessage()).toEqual("SUCCESS");
            });
        });

        it('should show the error code if list cannot be loaded', function () {
            setup.mockyeah.get(setup.mockdata.usersGet.pattern, 
            setup.mockdata.usersGet.ko);
            return usersPage.navigateToUsersView().then(function () {
                expect(usersPage.getMessage()).toEqual("UnexpectedException");
            });
        });

Request verification and logging

During test development we'll want to inspect the received requests (the "request journal") when something doesn't come out as expected. We can do this by setting
{
    output: true,
    journal: true
}
giving the output
      ✓ should send the correct request
[mockyeah][12:31:46][SERVE][MOUNT][GET] /users$/
[mockyeah][12:31:46][REQUEST][JOURNAL] {
  "callCount": 1,
  "url": "/users",
  "fullUrl": "http://localhost:4001/users",
  "clientIp": "127.0.0.1",
  "method": "GET",
  "headers": {
    "host": "localhost:4001",
    "accept": "application/json",
    "connection": "close"
  },
  "query": {},
  "body": {}
}
[mockyeah][12:31:46][REQUEST][GET] /users (2ms)
If our application code composes a complex request from several sources, it is sometimes useful to verify the request that was actually sent:
it('should send the correct request', function() {
    return usersPage.navigateToUsersView().then(function () {
        usersPage.enterNewUserDetails("user1", "user1@email.com");
        var expectation = setup.mockyeah.post(setup.mockdata.userPost.pattern, 
                                              setup.mockdata.userPost.ok)
            .expect()
            .body({
                username: 'user1',
                email: 'user1@email.com'
            })
            .once();
        return usersPage.confirm().then(function () {
            expectation.verify();
        });
    });
});
With its default configuration mockyeah writes a request log, which is very helpful during test development.
[21:59:59] I/local - Selenium standalone server started at http://192.168.0.155:48608/wd/hub
Spec started
[mockyeah][SERVE] Listening at http://127.0.0.1:4001
[mockyeah][REQUEST][GET] /users (2ms)

  Handle users

    Users view
      ✓ should load the users list
[mockyeah][REQUEST][GET] /users (1ms)
      ✓ should show the error code if list cannot be loaded
[mockyeah][REQUEST][GET] /users (1ms)
[mockyeah][REQUEST][POST] /users (1ms)
      ✓ should load user details when created successfully

Executed 3 of 3 specs SUCCESS in 0.062 sec.
[mockyeah][SERVE][EXIT] Goodbye.
[22:00:00] I/local - Shutting down selenium standalone server.
However, for test reporting you might prefer to switch logging off by setting
{ ...
  "output": false,
  "verbose": false
}
which will give the following less cluttered output.
[21:59:59] I/local - Selenium standalone server started at http://192.168.0.155:48608/wd/hub
Spec started

  Handle users

    Users view
      ✓ should load the users list
      ✓ should show the error code if list cannot be loaded
      ✓ should load user details when created successfully

Executed 3 of 3 specs SUCCESS in 0.062 sec.
[22:00:00] I/local - Shutting down selenium standalone server.

Sunday, June 26, 2016

WireMock for your Depended-On HTTP Service

Recently I needed to mock an HTTP service our application depends on. A developer recommended WireMock, which he uses at unit level.

I target the system level. But - wow - WireMock running as a standalone server gives me the same feature set, thanks to its great JSON API for configuration at runtime.

Here I give you an overview and some advice about how to get started mocking an HTTP service at system level.

Collection of the Mocked Data

The first things you need to know are:
  • the URLs you call
  • the data returned by those calls.
WireMock has a great record-and-playback feature: it proxies all calls to the depended-on service (DOS) and automatically creates files for the responses and mappings.

Example: GetWeather

We'll record responses for calling the global weather API of WebserviceX.NET.

Run
java -jar wiremock-1.58-standalone.jar --proxy-all="http://www.webservicex.net/" --record-mappings --verbose
and make a sample call to the GetWeather method, for example:
curl --header "Content-Type: text/xml;charset=UTF-8" --header "SOAPAction: http://www.webserviceX.NET/GetWeather" --data @request.xml http://localhost:8080/globalweather.asmx
(request.xml containing a valid request, obviously).
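For this example, request.xml could contain the same GetWeather envelope that appears in the recorded mapping shown below:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:web="http://www.webserviceX.NET">
   <soapenv:Header/>
   <soapenv:Body>
      <web:GetWeather>
         <web:CityName>Madrid</web:CityName>
         <web:CountryName>Spain</web:CountryName>
      </web:GetWeather>
   </soapenv:Body>
</soapenv:Envelope>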

This will create a file containing the response in __files and a sample mapping in mappings in your current working directory.

If you open the mapping just created you'll see something like:

{
  "request" : {
    "url" : "/globalweather.asmx",
    "method" : "POST",
    "bodyPatterns" : [ {
      "contains" : "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\" xmlns:web=\"http://www.webserviceX.NET\">\n   <soapenv:Header/>\n   <soapenv:Body>\n      <web:GetWeather>\n         <!--Optional:-->\n         <web:CityName>Madrid</web:CityName>\n         <!--Optional:-->\n         <web:CountryName>Spain</web:CountryName>\n      </web:GetWeather>\n   </soapenv:Body>\n</soapenv:Envelope>"
    } ]
  },
  "response" : {
    "status" : 200,
    "bodyFileName" : "body-globalweather.asmx-8VMpb.json",
    "headers" : {
      "Cache-Control" : "private, max-age=0",
      "Content-Type" : "text/xml; charset=utf-8",
      "Content-Encoding" : "gzip",
      "Vary" : "Accept-Encoding",
      "Server" : "Microsoft-IIS/7.0",
      "X-AspNet-Version" : "4.0.30319",
      "X-Powered-By" : "ASP.NET",
      "Date" : "Sun, 26 Jun 2016 12:49:57 GMT",
      "Content-Length" : "691"
    }
  }
}
Woohoo! Your first configuration for the JSON API.

You can easily check your configuration and mock data by restarting WireMock, this time without the proxy and record options,

java -jar wiremock-1.58-standalone.jar

and rerunning the above curl command.

Configuration of the Mock via JSON API

While you can keep a static mock configuration in the mappings and __files folders, the really cool stuff is changing the configuration at runtime, on the fly during testing, by calling the JSON API. This way we can even control delays for performance testing.

Let's suppose you're running WireMock on its standard port at localhost:8080 without any previous static configuration. You then create a new configuration by posting
{
  "request" : {
    "url" : "/globalweather.asmx",
    "method" : "POST",
    "bodyPatterns" : [ {
      "contains" : "GetWeather"
    } ]
  },
  "response" : {
    "status" : 200,
    "body" : ...,
    "fixedDelayMilliseconds": 2000
  }
}
to http://localhost:8080/__admin/mappings/new, which makes the mock respond after 2 seconds.
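With curl this would be (assuming you saved the mapping above as delayed-mapping.json; the file name is just an example):
curl --header "Content-Type: application/json" --data @delayed-mapping.json http://localhost:8080/__admin/mappings/new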

WireMock has many well-documented features any tester could wish for.

JSON API Cheat Sheet

Here is a listing of the admin endpoints for easy reference during test development.

  • __admin/reset - Removes all stub mappings and deletes the request log. Great for clearing mock behaviour before setting anything up.
  • __admin/mappings - Lists all configured mappings. Helps while developing your tests.
  • __admin/mappings/save - Saves all mappings. This way you can easily create a static default setup and refresh it whenever your mappings change.
  • __admin/mappings/new - Creates a new mapping. Use it in the setup or during a test, for example to switch from success to error. By default, WireMock uses the most recently added matching stub to satisfy a request, and mappings can even be prioritised.
  • __admin/mappings/reset - Removes all non-static mappings. Like a full reset, but preserving your default setup.
  • __admin/settings and __admin/socket_delay - Set the global settings for your stubbing. You can vary delays for your performance testing.
  • __admin/scenarios/reset - Resets all scenario states to START. Again, great for resetting your test environment if you use stateful behaviour.
  • __admin/requests/count, __admin/requests/find and __admin/requests/reset - Let you manage and check the request log: check requests during test development and verify that the DOS has been called. find with body { "urlPattern" : "/" } lets you inspect all recorded requests.
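For example, a global delay for performance testing can be introduced by posting a settings body such as
{ "fixedDelay": 500 }
to http://localhost:8080/__admin/settings (the delay is in milliseconds; check the API reference below for the exact keys supported by your WireMock version).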
You can find the full API reference here.


Tuesday, April 19, 2016

OpenCover and FitNesse/fitSharp


OpenCover is a great tool for measuring .NET code coverage. In ATDD some tests are written and documented below system level.
If you use FitNesse/fitSharp the code coverage cannot be determined by calling FitNesse on the console via java -jar fitnesse-standalone.jar -c args.

But... the test runner, Runner.exe, is implemented in .NET.

It is called from the FitNesse server with arguments args1 = -r {assembly list} HOST PORT SOCKET.

You can get a coverage report from OpenCover by defining a new test runner that uses OpenCover as a proxy. The FitNesse server will call this proxy with args1. The new test runner (definable in the wiki via the global variable TEST_RUNNER) calls OpenCover.Console.exe, which in turn calls the original runner, passing args1 on and returning its exit code thanks to the -returntargetcode argument.
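A sketch of such a wrapper, assuming it is what TEST_RUNNER points to (the file name, output path and the -register option are illustrative and may need adjusting to your environment):

:: CoverageRunner.bat - forwards the arguments FitNesse passes (%*) to the original Runner.exe through OpenCover
OpenCover.Console.exe -register:user -returntargetcode -output:coverage.xml -target:"Runner.exe" -targetargs:"%*"
exit /b %ERRORLEVEL%

Thanks to -returntargetcode the wrapper exits with Runner.exe's exit code, so FitNesse still sees the original result.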

Monday, February 17, 2014

Breaking Dependency on Third Party Executables - Examples in Go


Often in distributed systems our products depend on third party applications. In this post I describe how we can mock these in order to improve coverage, testability and ease the setup of the test environment.

Imagine you have an application that calls an executable. You could for example create an XML document from a database and feed it to a third party application to get a PostScript file. We implement the Decorator Pattern and use a suitable class or function in our programming environment to
  1. Call the executable with or without certain arguments (input interface)
  2. Intercept the standard output and error to check our results and return to the main application (output interface)
In .NET we could use the Process class to accomplish this: we wrap the CLI into a Process object and redirect stderr and stdout for verification.
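For illustration, here is a minimal sketch of such a decorator in Go (the package and type names are made up for this example); it forwards the arguments and intercepts stdout, stderr and the exit code via os/exec:

package decorator

import (
    "bytes"
    "os/exec"
)

// Result is what the main application gets back from the wrapped call.
type Result struct {
    Stdout   string
    Stderr   string
    ExitCode int
}

// Run calls the third party executable with the given arguments (input interface)
// and captures standard output, standard error and the exit code (output interface).
func Run(path string, args ...string) (Result, error) {
    var out, errOut bytes.Buffer
    cmd := exec.Command(path, args...)
    cmd.Stdout = &out
    cmd.Stderr = &errOut
    err := cmd.Run()
    res := Result{Stdout: out.String(), Stderr: errOut.String()}
    if cmd.ProcessState != nil {
        res.ExitCode = cmd.ProcessState.ExitCode()
    }
    return res, err
}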

On the unit test level we can mock the decorator itself. But if we have legacy code which isn't unit testable, or at the integration test level, it's useful to mock the third party executable itself. This will:
  1. speed up testing in case the third party application would take a while to finish,
  2. improve coverage: by taking control over the third party application we can provoke error states and need less test data setup to comply with the interface,
  3. make it easier to log which arguments the application is called with.
I will call a mocked executable FakeExe.

If our executable expects certain inputs, we can control our FakeExe's behaviour based on these: we create an arguments encoder and a decoder. Here is a short example in Go of how we could control the exit code for an executable expecting an XML file as input:
package fakeexe

import (
    "errors"
    "strconv"
)

// ext is the file extension the real executable expects for its input file.
const ext = ".xml"

// EncodeArgument builds an argument for the FakeExe that encodes the desired exit code.
func EncodeArgument(exitCode int) string {
    return strconv.Itoa(exitCode) + ext
}

type FakeExe struct {
    ExitCode int
}

// DecodeArgument extracts the exit code from an argument built by EncodeArgument.
func (f *FakeExe) DecodeArgument(arg string) {
    var err error
    if len(arg) > len(ext) {
        ecode := arg[:len(arg)-len(ext)]
        f.ExitCode, err = strconv.Atoi(ecode)
    } else {
        err = errors.New("input not valid: " + arg)
    }
    if err != nil {
        f.handleError(err)
    }
}

func (f *FakeExe) Run(arg string) {
    f.DecodeArgument(arg)
}

func (f *FakeExe) handleError(err error) {
    //...
}

package main

import (
    "os"

    "fakeexe"
)

func main() {
    f := new(fakeexe.FakeExe)
    // os.Args[0] is the program name; the encoded input argument is os.Args[1].
    f.Run(os.Args[1])
    os.Exit(f.ExitCode)
}

The Encoder can be used in test setup code to generate the correct input args for the FakeExe (a short usage example follows this list). The FakeExe can be extended with the following useful behaviour:

  1. Write a log: this way we can verify that the Decorator calls the executable as expected, and we can check the FakeExe in case the possible actions are more complicated than the easy example given above.
  2. Use input files from paths, for example if there is a directory of input files that will be used as the working directory. This can also be used to configure the FakeExe when several similar behaviours are expected and we don't want to implement a separate FakeExe for each executable we mock.
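In a test this might look roughly as follows (paths and package names are illustrative, re-using the encoder from above):

package app_test

import (
    "os/exec"
    "testing"

    "fakeexe"
)

// TestFailingConversion drives the FakeExe to exit with code 3,
// simulating a failing third party call.
func TestFailingConversion(t *testing.T) {
    arg := fakeexe.EncodeArgument(3)
    cmd := exec.Command("./fakeexe", arg)
    err := cmd.Run()
    exitErr, ok := err.(*exec.ExitError)
    if !ok || exitErr.ExitCode() != 3 {
        t.Fatalf("expected exit code 3, got %v", err)
    }
}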


At some point I faced the problem that the working directory of the executable wasn't known beforehand; it was created only when the task using the Decorator was run. Furthermore, many instances of the same executable were called in a workflow. This created two problems:

  1. There was no use in writing the log per FakeExe instance: I needed a log of all instances together.
  2. Configuring the FakeExe via a configuration file placed beside it wasn't feasible, because the file wouldn't have been copied to the target working directory.
I solved the problem by implementing a logger and a configuration service, which in Go reduce to just a few lines of code (see http://golang.org/pkg/net/). The simplest implementation might look like this:

 package configuring  
 import (  
      "fmt"  
      "io/ioutil"  
      "net"  
 )  
 var ConfigFilename = "Config.txt"
 const MAX_MESSAGE_LENGTH = 1024 // maximum size of a config message in bytes
 //...
 type server struct {  
      getListener func(protocol, port string) (net.Listener, error)  
      ln     net.Listener  
      protocol  string  
      port    string  
 }  
 func (srv *server) Start() {  
      var err error  
      srv.ln, err = srv.getListener(srv.protocol, ":"+srv.port)  
      if err != nil {  
           panic(err)  
      }
      for {  
           conn, err := srv.ln.Accept()  
           if err != nil {  
                fmt.Println(err)  
                continue  
           }  
           go srv.sendConfig(conn)  
      }  
      return  
 }  
 func (srv *server) sendConfig(conn net.Conn) {  
      bytes, err := ioutil.ReadFile(ConfigFilename)  
      if err != nil {  
           panic(err)  
      }  
      if len(bytes) > MAX_MESSAGE_LENGTH {  
           panic("Config message too long.")  
      }  
      _, err = conn.Write(bytes)  
      if err != nil {  
           panic(err)  
      }  
      return  
 }  
 func (srv *server) Stop() {  
      if srv.ln != nil {  
           srv.ln.Close()  
      }  
 }  
 package logging  
 import (  
      "net"  
      "fmt"  
 )  
 const MAX_MESSAGE_LENGTH = 1024 // keep in sync with the configuring package
 //...
 type server struct{  
      getListener func(protocol, port string) (net.Listener, error)  
      ln net.Listener  
      protocol string  
      port string  
      msgs []string  
 }  
 func (srv *server)Msgs() (msgs []string){  
      msgs = srv.msgs  
      return  
 }  
 func (srv *server)Start() (err error){  
      srv.ln, err = srv.getListener(srv.protocol, ":" + srv.port)  
      if(err != nil){  
           panic(err)  
      }  
      for {  
           conn, err := srv.ln.Accept()  
           if (err != nil){  
                fmt.Println(err)  
                continue  
           }  
           go srv.appendMessage(conn)  
      }  
      return  
 }  
 func (srv *server)appendMessage(conn net.Conn){  
      defer conn.Close()  
      buf := make([]byte, MAX_MESSAGE_LENGTH)  
      msg_length, err := conn.Read(buf)  
      var msg string  
      if(err != nil){  
           msg = err.Error()  
      }else{  
           msg = string(buf[:msg_length])  
      }  
      srv.msgs = append(srv.msgs, msg)  
 }  
 func (srv *server)Stop(){  
      if(srv.ln != nil){  
           srv.ln.Close()  
      }  
 }  

From here, the next step could be implementing a little DSL for our testing, extending the FakeExe with an interpreter as described, for example, in Bob's Blog - Writing a Lisp Interpreter in Go. Then, instead of sending concrete, implementation-specific configuration values via the configuration service, we just send a script:

package fakeexe

import (
    "os"
    "strings"

    "lisp" // https://github.com/bobappleyard/golisp
)

//...

func (f *FakeExe) Run(script string) {
    i := lisp.New()
    i.Repl(strings.NewReader(script), os.Stdout)
    //...
}

*Examples are written in Go. 

Wednesday, June 19, 2013

Agile manual test case creation and management

What happens if you work in an environment where you have to run acceptance tests manually; an environment where you cannot run an automated regression test suite in each iteration and thus can't report automatically which test cases have to be adapted or marked as obsolete?  Is agile even possible without automation?

In this post I try to propose a workflow for agile testers - agile as in agile athlete - to conquer this essential lack of automation using a TMS like TestLink (finally surrendering myself to using the word "agile" in one of my posts). You might ask: "Why is he talking about regression and release test suites and at the same time about agile?" Let me answer this: agile methodologies, e.g. SCRUM, are not always implemented correctly, and though this might (sometimes) lead to failure, testers have to adapt to the environment they test in.

Lifetime of an agile test case

First there are acceptance tests; then these are extended in order to find bugs or pursue other test goals. So we are in an iteration and have a set of test cases, but in agile these can have quite different lifetimes. When testing manually we ask ourselves: what are the odds that this test will ever have to be run again? If test cases were automated, the cost of running a test case again would be nearly zero. But if it is checked manually the picture changes a lot. For example:
  1. Imagine you develop a CMS. We want to export content, like a picture, and there are tests for that. But in a later iteration that file will automatically be opened in MSPaint. This additional feature makes the first test set obsolete for manual testing: if the picture isn't opened in MSPaint, we'll check why that is and will find out along the way whether the file was exported at all.
  2. If there was a dead button in the GUI, the fix should form part of our regression test suite. The lifetime of this test case is undetermined, but for now we know that it will survive until the next regression test.
  3. If we change test data or naming in the project, e.g. image vs. picture, the written test case will need refactoring in order to survive.
The important point is my assumption that testers have developed an intuition about the lifetime of their test cases: testers assign life expectancies to test cases.

Test cases with low life expectancy (throw-away test cases)

If our intuition tells us that a test case is quite specific and probably won't ever need to be repeated, it's enough to write it down quickly and concretely, e.g.

1.) Select a picture.
2.) Click the Button labelled "Download".
3.) Open the documents folder of the user.
=> There is a folder OurCMS. It contains the selected picture.

These test cases can be flagged in the TMS. In TestLink we can assign Test Importance = Low or create a custom field "throw-away". After the test case has run successfully, we Deactivate this version. This way we know we won't have to care about it again, but we still document what has been tested.

Test cases with high life expectancy

If our intuition tells us that a test case describes a general or essential aspect of the functionality of the system, it will have a somewhat higher life expectancy. In our example 1.) above, in the beginning we might be talking about downloading and managing photos; later about graphics or images. Or we might change the default check-out location. If we anticipate this change, we can reduce refactoring in future release or regression tests by writing our test case so that it mimics coded tests:

Preconditions:
var GRAPHIC = a picture
var BUTTON = Download
var CHECKOUT_FOLDER = $USER$/Documents/OurCMS

Steps:
1.) Select GRAPHIC
2.) Click BUTTON.
=> CHECKOUT_FOLDER contains GRAPHIC_FILE.

Testers can easily document decisions they made while testing by specifying the chosen test data:

Notes:
GRAPHIC = VacationsExamples/DayOnTheBeach.png

As this test case will probably be run again in the future, we'll assign it a higher priority and avoid deactivating it. Having written the test case in this fashion, we'll be able to refactor it quickly when the system changes by just adapting the Preconditions.

Of course, now we talk to humans as if they were computers, but surely this is how non-testers treat us, too, demanding manual testing and high coverage, maintainability and traceability.


Thursday, May 30, 2013

Some advanced code testing

For those who don't want or don't have the time to read thick books or framework testing APIs, I have collected a few topics that can help improve your unit testing but might not always be obvious. Note that they're not purely restricted to C# even though the examples are.

Mighty testing frameworks

There are some mighty mocking and unit testing frameworks out there with impressive features like mocking depended-on static methods, testing private members, etc. Although in some cases these are vital, e.g. when writing tests for legacy code, they might lead you to write lower-quality code, taking less care about class design, dependencies, etc. What's better: code quality or a new (and possibly costly) dependency on a mighty framework? Code quality does not depend on the language you're programming in; it depends mainly on you as a developer!

Use delegates for dependency to static methods

C# delegates can work like functions in languages like Python or Go where they are first class citizens. This is a good thing, e.g. in the following case:
 public void Init()  
 {  
      var content = File.ReadAllText(this.Path);  
      ...  
 }  
We have a dependency here that makes testing somewhat difficult. Whereas in some situations it's good practice to wrap calls to static methods in an instance, here we can get along without it:
 public void Init()  
 {  
      this.Init(File.ReadAllText);  
 }  
 internal void Init(ReadAllText readAllText)  
 {  
      var content = readAllText(this.Path);  
      ...  
 }  
 internal delegate string ReadAllText(string path);  

Dependencies to the surface

Somewhat related to the last topic on delegates is the following: often we program quickly and only afterwards discover that we have dependencies, especially on built-ins like System.IO.FileInfo. Now, built-ins can have bugs, too, and a dependency on them should better be injected, e.g. for testability or extensibility. We can avoid missing those dependencies by not using global imports: delete the using statements at the beginning of our files and use full namespaces. What is the difference between
 using System.IO;  
 using System.Xml;  
 ...  
 internal void Init()  
 {  
      var file = new FileInfo(this.Path);  
      var dir = new DirectoryInfo(this.DirPath);  
      var xdoc = new XmlDocument();  
      ...  
 }  
and
 internal void Init()  
 {  
      var file = new System.IO.FileInfo(this.Path);  
      var dir = new System.IO.DirectoryInfo(this.DirPath);  
      var xdoc = new System.Xml.XmlDocument();  
      ...  
 }  
The difference is that in the second case you get so tired of typing the namespaces that you will want to do something about it. Since you have ruled out global imports for the moment, you will need to refactor towards dependency injection, interface extraction, etc.
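A sketch of where that refactoring might end up (the interface and class names are invented for this example):
 internal interface IFileSystem
 {
      string ReadAllText(string path);
 }

 internal sealed class PhysicalFileSystem : IFileSystem
 {
      public string ReadAllText(string path)
      {
           return System.IO.File.ReadAllText(path);
      }
 }

 internal void Init(IFileSystem fileSystem)
 {
      // The dependency is now injected and can be replaced by a test double.
      var content = fileSystem.ReadAllText(this.Path);
      ...
 }
In tests, IFileSystem can then be replaced by a simple stub without touching the file system.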

The protected antipattern

There is this rule that production code and test code should not be mixed. In .NET we choose to create separate test projects and only test against the public interface, letting private members become internal to be testable where that is necessary and sensible.
Another possibility is to make members protected instead; then in our test project we just inherit and override (sketched below).
In my experience though, this second approach has at least two downsides that the use of internal doesn't have:

  • the access modifiers lose their meaning, because we cannot control what's done with protected members.
  • what gets tested is the derived class, and in a more complicated setup we might eventually lose track: did we really test our code, or the test code?
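For illustration, the inherit-and-override approach looks roughly like this (names invented for the example):
 // production code
 public class Importer
 {
      protected virtual string ReadContent(string path)
      {
           return System.IO.File.ReadAllText(path);
      }
 }

 // test project
 internal class TestableImporter : Importer
 {
      protected override string ReadContent(string path)
      {
           return "fake content";
      }
 }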

Any vs. Some

In order for our test cases to serve as documentation we need method and variable names communicating intention:
 public void TestCaseConstructorSetsProperties()  
 {  
      var to = new TestObject(anyParameter());  
      AssertThatPropertiesAreSet(to);  
 }  
My personal preference is to use the prefix any only if null is permitted, too. So if - following common practice - the constructor checks for a null argument, that will be another test case. We can opt for anyNonNullParameter() or simply:
 public void TestCaseConstructorSetsProperties()  
 {  
      var to = new TestObject(someParameter());  
      AssertThatPropertiesAreSet(to);  
 }  

Advanced setup and teardown - cleanup files example

Have a look at the following code:
 public void TestCaseUsingFileSystem()  
 {  
      var file = createTestFile();  
      var to = new TestObject();  
      to.doSomething(file);  
      AssertThatSomethingHoldsOn(to);  
      file.Delete();  
 }  
The problem here is that the file probably won't be deleted if the assertion fails, making this test case fragile. Surely you can think of other objects that need a proper teardown even if the test fails, in order to ensure the correctness of the test fixture. A common solution to this problem is to introduce a class variable serving as a trash bin and to use a shared teardown. Note also the file creation method, which could simply have been called createTestFile() as before:
 public void TestCaseUsingFileSystem()  
 {  
      var file = createAndRegisterForCleanupTestFile();  
      var to = new TestObject();  
      to.doSomething(file);  
      AssertThatSomethingHoldsOn(to);  
 }  
 public void TearDown()  
 {  
      foreach(var item in this.trash)  
      {  
           try  
           {  
                var file = item as FileInfo;  
                if(file != null) file.Delete();  
                ...  
           }catch(Exception e){  
                reportToTestRunner(e.Message);  
           }  
      }  
 }  
 private FileInfo createAndRegisterForCleanupTestFile()  
 {  
      var file = createTestFile();  
      this.trash.Add(file);  
      return file;  
 }       

Event checking

You should always check if events are raised, too! An easy pattern for doing so is this:
 public void TestCaseSomeMethodRaisesEvent()  
 {  
      var eventHasBeenRaised = false;  
      testObject.SomeEvent += (sender, args) => eventHasBeenRaised = true;  
      testObject.SomeMethod();  
      AssertThat(eventHasBeenRaised);  
 }  

Friday, May 10, 2013

Android Programming and Testing with ADT

It's been a while since I dared to take a look at Android application development using Eclipse. I'm positively surprised by the high-quality tutorials and documentation. It is fun to learn - with the right tools, of course. Also, the developer community hasn't left out the vital testing perspective, delivering automatic test project setup and JUnit extensions (even mocks!). If you want to take your first steps in Android application development and learn how to test right from the start, I recommend the following steps, assuming you have some experience with Java, Eclipse and, of course, unit testing:


  1. Download the Android ADT Bundle here.
  2. Follow the steps for setting it up here.
  3. Complete the tutorial Building your First App. I recommend using a real device, not only for performance but also because it feels great ;-) If you're working on Linux, you'll probably have to add a udev rule; this is well documented in the tutorial. Tip: find your vendorId using lsusb and use MSC as the transfer protocol (a sample rule is shown after this list).
  4. Skim through Managing Projects from Eclipse with ADT, Building and Running from Eclipse and Testing Fundamentals, the latter being a fascinating read by itself for testing developers (and suffering testers in automation).
  5. Make sure you have the Samples for SDK. If you don't, download them using the Android SDK Manager as described here.
  6. Skip Testing from Eclipse with ADT and dive directly into the Activity Testing Tutorial.
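A typical udev rule looks like this (the vendor ID is just an example - substitute the one lsusb reports for your device):
# /etc/udev/rules.d/51-android.rules
SUBSYSTEM=="usb", ATTR{idVendor}=="0bb4", MODE="0666", GROUP="plugdev"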
This is a good starting point and fun to do :-)