Tuesday, February 21, 2017

Sunk Cost Fallacy - Let's talk about bias 3

We continue our review of biases that are intrinsically wired into our brains and lead us to make certain decisions, as Rolf Dobelli describes in his bestseller The Art of Thinking Clearly.

Sunk Cost Fallacy - Are you Lean enough?

This is a very useful bias to be aware of. If we're honest with ourselves, it helps us do a better job of saving unnecessary costs for our company.

Without much introduction, two cases that immediately come to my mind:

  1. You see this huge, beautiful implementation arriving at testing. Verification passes without a flaw. Then you notice: validation fails! This is not really what we wanted. Do you dare to file a bug and vote for throwing away / reworking all that hard, good work? Or do you try to bend reality and persuade any opposing mind that this is what they wanted? Sometimes that is not the worst choice - sometimes clients just don't know what they want XD
  2. You've convinced everybody you need those automated tests / manual test scripts / that test management system - whatever - to improve your testing process and the overall quality of your product(s). In time it becomes a nuisance; things change, tempora mutantur. Are you willing to let go and stop holding on to it? Throw away what lacks purpose and be really lean.

Wednesday, January 11, 2017

Social Proof - Let's talk about bias 2

We continue our review of biases that are intrinsically wired into our brains and lead us to make certain decisions, as Rolf Dobelli describes in his bestseller The Art of Thinking Clearly.

Social Proof

A Scrum planning session. A tester (role) points out that a certain user story should include some test to minimize the risk of failure (functional or non-functional, whatever it may be). The team and the product owner discuss the risk and convince themselves that the test is not necessary. The tester role gives in and rests in peace.

Social proof makes you believe a decision is right because everybody else seems to believe it's right.
As a tester you then have to ask yourself critically: is it their arguments that convince me, or is it the fact that they are so convinced? If it's their arguments, you may rest in peace, really. If not, argue against it or forever remain silent.

Survivorship bias - Let's talk about bias 1

You'll probably remember that one of the main reasons for a tester's profile to exist is avoiding author bias.

As QA and testers dive into Agile environments it's hard not to become part of the problem.

Interestingly enough, there are well-known biases intrinsically wired into our brains that lead us to make certain decisions, as Rolf Dobelli describes in his bestseller The Art of Thinking Clearly.

This post starts a series that reviews those biases with QA-specific examples, so any tester or QA role can improve their work. Beyond Agile, I like the move towards Quality Assistance, because in my experience it works. So anybody in the team should gain some advantage by knowing these biases.

Survivorship Bias

As QA you are confronted with developers and business representatives telling you "product X didn't have all of those design patterns, automated tests, branching policies, etc."
They strike you hard. You feel like the earth's gonna swallow you. You're in China and only speak Hopi. Shit!

This is not your bias! Let them check their bug reports and the time (and money - YEEEES, business speaks MO-O-NEY!) spent on fixing those bugs. Speak to them about customers serving as testers - but we're not Facebook, we sell other stuff under different conditions. And if none of those arguments has any effect, present them the worst case and find examples of companies that failed with much embarrassment - or quit your job; right now there are great developers at work!


Saturday, November 19, 2016

Fast Frontend Development and HUI Testing with mockyeah

The backend mocking library mockyeah has recently gained new features inspired by WireMock. The library has the big advantage of letting you re-use your development mock data for testing.
By mocking the backend there's no immediate need to set up and maintain backend data, which speeds up development.
You can find working sample code here.

Fast frontend development - backend mocking

In order to be independent of a real backend instance during frontend development, we'll organize our expected responses and add them to a default setup. We'll then configure our frontend to point to the mockyeah instance, start the mock server with the default setup, and we're ready to develop our web application!
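For illustration, a tiny config module could be all it takes to point the frontend at the mockyeah instance. The file name, property name and environment variable here are my own example, not part of mockyeah; port 4001 is the mockyeah port that shows up in the logs further down.
// config.js (hypothetical example)
// During development and HUI testing the frontend talks to mockyeah instead of the real backend.
module.exports = {
    apiBaseUrl: process.env.API_BASE_URL || 'http://localhost:4001'
};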

Organizing the mock data

By organizing our mock data we get easy access to it later in our test definitions:
// mock-data.js
var usersGet = {
    pattern: /users$/,
    ok: {
        status: 200,
        json: [
            {
                id: 1,
                username: 'user1',
                email: 'user1@email.com'
            },
            {
                id: 2,
                username: 'user2',
                email: 'user2@email.com'
            },
            {
                id: 3,
                username: 'user3',
                email: 'user3@email.com'
            }
        ]
    },
    ko: {
        status: 500,
        json: {errorCode: "UnexpectedException"}
    }
};

// userGet and userPost (used in the default setup below) follow the same pattern
// and are exported the same way.
exports.usersGet = usersGet;

Collecting mock data

mockyeah has a record-and-play feature that allows it to run as a proxy and save data you want to mock. Have a look at the library's documentation to find out more.

Define a default setup

//default-setup.js

var mockyeah = require('mockyeah');
var mockdata = require('./mock-data');

var init_mock = function () {
    mockyeah.get(mockdata.usersGet.pattern, mockdata.usersGet.ok);
    mockyeah.get(mockdata.userGet.pattern, mockdata.userGet.ok);
    mockyeah.post(mockdata.userPost.pattern, mockdata.userPost.ok);
};

init_mock();

exports.init_mock = init_mock;
exports.mockyeah = mockyeah;
exports.mockdata = mockdata;
Now we run node default-setup.js and have a backend available during development.

Hermetic User Interface (HUI) testing

The frontend by itself is a subsystem of your application. HUI tests aim to be more stable and maintainable by mocking out any dependency on a service. This means you would normally implement them as system-level tests with Selenium WebDriver or Protractor.
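As a sketch, a minimal Protractor configuration for such HUI tests could look like this (the spec file name and base URL are assumptions, not taken from the sample project):
// protractor.conf.js (sketch)
exports.config = {
    framework: 'jasmine',
    specs: ['users.spec.js'],
    // the frontend under test, which itself talks to the mockyeah backend
    baseUrl: 'http://localhost:3000'
};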

Starting mockyeah for testing

As mockyeah starts automatically with the configuration from .mockyeah when it is required, your test suite simply has to import the default setup defined above.
var setup = require('./../default-setup');

describe('Handle users', function () {
    describe('Users view', function () {
        beforeAll(setup.init_mock);
        afterAll(setup.mockyeah.close);
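For reference, a minimal .mockyeah file could look like the following sketch; output, journal and verbose are the logging options used later in this post and port 4001 matches the server logs below, but the exact set of supported keys may vary between mockyeah versions, so check the library's documentation.
// .mockyeah (sketch)
{
  "port": 4001,
  "output": true,
  "journal": false,
  "verbose": false
}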

Changing mockyeah behaviour during test definition

Simply set the response in the test definition itself. Notice that we re-use the mockyeah instance exported by the default setup.
describe('Handle users', function () {
    describe('Users view', function () {
        // ...
        
        it('should load the users list', function () {
            return usersPage.navigateToUsersView().then(function () {
                expect(usersPage.getUserList()).toContain({id: 1, username: 'user1'});
                expect(usersPage.getMessage()).toEqual("SUCCESS");
            });
        });

        it('should show the error code if list cannot be loaded', function () {
            setup.mockyeah.get(setup.mockdata.usersGet.pattern,
                               setup.mockdata.usersGet.ko);
            return usersPage.navigateToUsersView().then(function () {
                expect(usersPage.getMessage()).toEqual("UnexpectedException");
            });
        });

Request verification and logging

During test development we'll want to inspect the received requests (the "request journal") when something doesn't turn out as expected. We can do this by setting (in the mockyeah configuration)
{
    output: true,
    journal: true
}
giving the output
      ✓ should send the correct request
[mockyeah][12:31:46][SERVE][MOUNT][GET] /users$/
[mockyeah][12:31:46][REQUEST][JOURNAL] {
  "callCount": 1,
  "url": "/users",
  "fullUrl": "http://localhost:4001/users",
  "clientIp": "127.0.0.1",
  "method": "GET",
  "headers": {
    "host": "localhost:4001",
    "accept": "application/json",
    "connection": "close"
  },
  "query": {},
  "body": {}
}
[mockyeah][12:31:46][REQUEST][GET] /users (2ms)
If our application code composes a complex request from several sources, we sometimes find it useful to verify the request that was sent:
it('should send the correct request', function() {
    return usersPage.navigateToUsersView().then(function () {
        usersPage.enterNewUserDetails("user1", "user1@email.com");
        var expectation = setup.mockyeah.post(setup.mockdata.userPost.pattern, 
                                              setup.mockdata.userPost.ok)
            .expect()
            .body({
                username: 'user1',
                email: 'user1@email.com'
            })
            .once();
        return usersPage.confirm().then(function () {
            expectation.verify();
        });
    });
});
The default configuration of mockyeah writes a request log, which is very helpful during test development.
[21:59:59] I/local - Selenium standalone server started at http://192.168.0.155:48608/wd/hub
Spec started
[mockyeah][SERVE] Listening at http://127.0.0.1:4001
[mockyeah][REQUEST][GET] /users (2ms)

  Handle users

    Users view
      ✓ should load the users list
[mockyeah][REQUEST][GET] /users (1ms)
      ✓ should show the error code if list cannot be loaded
[mockyeah][REQUEST][GET] /users (1ms)
[mockyeah][REQUEST][POST] /users (1ms)
      ✓ should load user details when created successfully

Executed 3 of 3 specs SUCCESS in 0.062 sec.
[mockyeah][SERVE][EXIT] Goodbye.
[22:00:00] I/local - Shutting down selenium standalone server.
However, for test reporting you might prefer to switch logging off by setting
{ ...
  "output": false,
  "verbose": false
}
which will give the following less cluttered output.
[21:59:59] I/local - Selenium standalone server started at http://192.168.0.155:48608/wd/hub
Spec started

  Handle users

    Users view
      ✓ should load the users list
      ✓ should show the error code if list cannot be loaded
      ✓ should load user details when created successfully

Executed 3 of 3 specs SUCCESS in 0.062 sec.
[22:00:00] I/local - Shutting down selenium standalone server.

Sunday, June 26, 2016

WireMock for your Dependent-On HTTP Service

Recently I needed to mock an HTTP service our application depends on. A developer recommended WireMock, which he uses at unit level.

I target the system level. But - wow - WireMock running as a standalone server gives me the same feature set, thanks to its great JSON API for configuration at runtime.

Here I give you an overview and some advice about how to get started mocking an HTTP service at system level.

Collection of the Mocked Data

The first things you need to know are:
  • the URLs you call,
  • the data returned by those calls.
WireMock has a great record-and-playback feature: it proxies any call to the dependent-on service (DOS) and automatically creates files for the responses and mappings.

Example: GetWeather

We'll record responses for calling the global weather API of WebserviceX.NET.

Run
java -jar wiremock-1.58-standalone.jar --proxy-all="http://www.webservicex.net/" --record-mappings --verbose
and make a sample call to the GetWeather method, for example:
curl --header "Content-Type: text/xml;charset=UTF-8" --header "SOAPAction:her.asmxww.webserviceX.NET/GetWeather" --data @request.xml http://localhost:8080/globalweather.asmx
(request.xml containing a valid request, obviously).

This will create a file containing the response in __files and a sample mapping in mappings in your current working directory.

If you open the mapping just created you'll see something like:

{
  "request" : {
    "url" : "/globalweather.asmx",
    "method" : "POST",
    "bodyPatterns" : [ {
      "contains" : "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\" xmlns:web=\"http://www.webserviceX.NET\">\n   <soapenv:Header/>\n   <soapenv:Body>\n      <web:GetWeather>\n         <!--Optional:-->\n         <web:CityName>Madrid</web:CityName>\n         <!--Optional:-->\n         <web:CountryName>Spain</web:CountryName>\n      </web:GetWeather>\n   </soapenv:Body>\n</soapenv:Envelope>"
    } ]
  },
  "response" : {
    "status" : 200,
    "bodyFileName" : "body-globalweather.asmx-8VMpb.json",
    "headers" : {
      "Cache-Control" : "private, max-age=0",
      "Content-Type" : "text/xml; charset=utf-8",
      "Content-Encoding" : "gzip",
      "Vary" : "Accept-Encoding",
      "Server" : "Microsoft-IIS/7.0",
      "X-AspNet-Version" : "4.0.30319",
      "X-Powered-By" : "ASP.NET",
      "Date" : "Sun, 26 Jun 2016 12:49:57 GMT",
      "Content-Length" : "691"
    }
  }
}
Woohoo! Your first configuration for the JSON API.

You can easily check your configuration / mock data by restarting WireMock, this time without the proxy and recording options

java -jar wiremock-1.58-standalone.jar

and rerunning the above curl command.

Configuration of the Mock via JSON API

While you can have a static mock configuration in the mappings and __files folders, the really cool stuff is changing the configuration at runtime, on the fly, by calling the JSON API during testing. This way we can even control delays for performance testing.

Let's suppose you're running WireMock on its default port 8080 on localhost, without any previous static configuration. You then create a new configuration by posting
{
  "request" : {
    "url" : "/globalweather.asmx",
    "method" : "POST",
    "bodyPatterns" : [ {
      "contains" : "GetWeather"
    } ]
  },
  "response" : {
    "status" : 200,
    "body" : ...,
    "fixedDelayMilliseconds": 2000
  }
}
to http://localhost:8080/__admin/mappings/new, which causes the mock to respond after 2 seconds.
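For example, with the mapping stored in a file (new-mapping.json is just an illustrative name), the POST can be done with curl:

curl -X POST --data @new-mapping.json http://localhost:8080/__admin/mappings/new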

WireMock has many well documented features any tester could wish for.

JSON API Cheat Sheet

Here is a listing for easy reference during test development.

  • __admin/reset: Removes all stub mappings and deletes the request log. Great for clearing mock behaviour before setting anything up.
  • __admin/mappings: Lists all configured mappings. Helps while developing your tests.
  • __admin/mappings/save: Saves all mappings. This way you can create a static default setup easily, and refresh it if your mappings change.
  • __admin/mappings/new: Creates a new mapping. Create a new mapping in setup or during a test, switching for example from success to error. By default, WireMock will use the most recently added matching stub to satisfy the request, but mappings can also be prioritised.
  • __admin/mappings/reset: Removes all non-static mappings. Like a full reset, but preserving your default setup.
  • __admin/settings and __admin/socket_delay: Set the global settings for your stubbing. You can vary delays for your performance testing.
  • __admin/scenarios/reset: Resets all scenario states to START. Again, great for resetting your test environment if you use stateful behaviour.
  • __admin/requests/count, __admin/requests/find and __admin/requests/reset: Let you manage and check the request log. Check requests during test development and verify that the DOS has been called; find with body { "urlPattern" : "/" } lets you inspect all recorded requests.
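As a quick example of the verification endpoints, the following sketch posts request criteria to the count endpoint for the GetWeather stub used above; the response contains the number of matching requests:

curl -X POST --data '{ "method": "POST", "url": "/globalweather.asmx" }' http://localhost:8080/__admin/requests/count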
You can find the full API reference here.


Tuesday, April 19, 2016

OpenCover and FitNesse/fitSharp


OpenCover is a great tool for measuring .NET code coverage. In ATDD some tests are written and documented below the system level.
If you use FitNesse/fitSharp, code coverage cannot be determined by calling FitNesse on the console via java -jar fitnesse-standalone.jar -c args.

But... the test runner, Runner.exe, is implemented in .NET.

It is called from the FitNesse server with arguments args1 = -r {assembly list} HOST PORT SOCKET.

You can get a coverage report from OpenCover by defining a new test runner that uses OpenCover as a proxy. The FitNesse server will call this proxy with args1. The new test runner (definable in the wiki via the global variable TEST_RUNNER) calls OpenCover.Console.exe, which in turn calls the original runner, passing args1 on and returning its exit code via the -returntargetcode argument.
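A sketch of such a proxy runner as a small batch file follows; the paths, file names and coverage output file are my assumptions, while -register, -returntargetcode, -target, -targetargs and -output are documented OpenCover switches:

@echo off
REM RunnerWithCoverage.bat - registered in the wiki via !define TEST_RUNNER {RunnerWithCoverage.bat}
REM FitNesse calls this wrapper with args1; %* passes them on to the original Runner.exe.
OpenCover.Console.exe -register:user -returntargetcode ^
  -target:"C:\fitsharp\Runner.exe" ^
  -targetargs:"%*" ^
  -output:coverage.xml
exit /b %ERRORLEVEL%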

Monday, February 17, 2014

Breaking Dependency on Third Party Executables - Examples in Go


Often in distributed systems our products depend on third party applications. In this post I describe how we can mock these in order to improve coverage, testability and ease the setup of the test environment.

Imagine you have an application that calls an executable. You could for example create an XML document from a database and feed it to a third party application to get a PostScript file. We implement the Decorator Pattern and use a suitable class or function in our programming environment to
  1. Call the executable with or without certain arguments (input interface)
  2. Intercept the standard output and error to check our results and return to the main application (output interface)
In .NET we could use the Process class to accomplish this: we wrap the CLI into a Process object and redirect stderr and stdout for verification.
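Since the examples in this post are written in Go, here is a minimal sketch of the same decorator idea using os/exec; the names Exe and Result are my own illustration, not taken from any product code:

 package decorator

 import (
      "bytes"
      "os/exec"
 )

 // Result captures what the main application needs from the executable call.
 type Result struct {
      Stdout   string
      Stderr   string
      ExitCode int
 }

 // Exe wraps a third party executable behind a small interface (the Decorator).
 type Exe struct {
      Path string
 }

 // Run calls the executable with the given arguments and intercepts stdout,
 // stderr and the exit code for verification by the caller.
 func (e *Exe) Run(args ...string) (Result, error) {
      cmd := exec.Command(e.Path, args...)
      var out, errOut bytes.Buffer
      cmd.Stdout = &out
      cmd.Stderr = &errOut
      err := cmd.Run()
      res := Result{Stdout: out.String(), Stderr: errOut.String()}
      if exitErr, ok := err.(*exec.ExitError); ok {
           // A non-zero exit code is reported in the result, not as a Go error.
           res.ExitCode = exitErr.ExitCode()
           return res, nil
      }
      return res, err
 }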

On the unit test level we can mock the decorator itself. But if we have legacy code which isn't unit testable, or at the integration test level, it's useful to mock the third party executable itself. This will:
  1. speed up testing in case the third party application would take a while to finish,
  2. improve coverage: by taking control over the third party application we can provoke error states and need less test data setup to comply with the interface,
  3. make it easier to log which arguments the application is called with.
I will call a mocked executable FakeExe.

If our executable expects certain inputs we can control our FakeExe's behaviour based on them. We create an arguments encoder and a decoder. Here is a short example in Go of how we could control the exit code of an executable expecting an XML file as input:
 // fakeexe.go
 package fakeexe

 import (
      "errors"
      "strconv"
 )

 // ext is the extension of the input file the real executable expects (an XML file).
 const ext = ".xml"

 // EncodeArgument builds an input argument that encodes the desired exit code,
 // e.g. EncodeArgument(3) returns "3.xml".
 func EncodeArgument(exitCode int) string {
      return strconv.Itoa(exitCode) + ext
 }

 type FakeExe struct {
      ExitCode int
 }

 // DecodeArgument extracts the exit code from the encoded input argument.
 func (f *FakeExe) DecodeArgument(arg string) {
      var err error
      if len(arg) > len(ext) {
           ecode := arg[:len(arg)-len(ext)]
           f.ExitCode, err = strconv.Atoi(ecode)
      } else {
           err = errors.New("input not valid: " + arg)
      }
      if err != nil {
           f.handleError(err)
      }
 }

 func (f *FakeExe) Run(arg string) {
      f.DecodeArgument(arg)
 }

 func (f *FakeExe) handleError(error) {
      //...
 }

 // main.go
 package main

 import (
      "fakeexe"
      "os"
 )

 func main() {
      f := new(fakeexe.FakeExe)
      // os.Args[0] is the program name; the encoded argument is the first real argument.
      f.Run(os.Args[1])
      os.Exit(f.ExitCode)
 }

The Encoder can be used in test setup code to generate the correct input args for the FakeExe. The FakeExe can be extended with the following useful behaviour:

  1. Write a log: this way we can verify that the Decorator calls the executable as expected, and we can check the FakeExe in case the possible actions are more complicated than the simple example given above.
  2. Use input files from paths, for example if there is an input files directory which will be used as working directory. This can also be used to configure the FakeExe in case of several similar expected behaviours, where we might not want to implement a separate FakeExe for each executable we mock.


At some point I faced the problem that the working directory of the executable wasn't known beforehand; it was created when the task using the Decorator was run. Furthermore, many instances of the same executable were called in a workflow. This created two problems:

  1. There was no use in writing the log per FakeExe instance: I needed a log of all instances together.
  2. Configuring the FakeExe via a configuration file placed next to it wasn't feasible, because that file wouldn't have been copied to the target working directory.
I solved the problem by implementing a logger and a configuration service, which in Go reduce to just a few lines of code, see http://golang.org/pkg/net/. The easiest implementation might look like this:

 // configuring.go
 package configuring

 import (
      "fmt"
      "io/ioutil"
      "net"
 )

 // MAX_MESSAGE_LENGTH limits the size of a configuration message (example value).
 const MAX_MESSAGE_LENGTH = 1024

 var ConfigFilename = "Config.txt"

 //...

 type server struct {
      getListener func(protocol, port string) (net.Listener, error)
      ln          net.Listener
      protocol    string
      port        string
 }

 // Start listens for connections and serves the configuration to each client.
 func (srv *server) Start() {
      var err error
      srv.ln, err = srv.getListener(srv.protocol, ":"+srv.port)
      if err != nil {
           panic(err)
      }
      for {
           conn, err := srv.ln.Accept()
           if err != nil {
                fmt.Println(err)
                continue
           }
           go srv.sendConfig(conn)
      }
 }

 // sendConfig writes the contents of the configuration file to the connection.
 func (srv *server) sendConfig(conn net.Conn) {
      defer conn.Close()
      bytes, err := ioutil.ReadFile(ConfigFilename)
      if err != nil {
           panic(err)
      }
      if len(bytes) > MAX_MESSAGE_LENGTH {
           panic("Config message too long.")
      }
      if _, err = conn.Write(bytes); err != nil {
           panic(err)
      }
 }

 func (srv *server) Stop() {
      if srv.ln != nil {
           srv.ln.Close()
      }
 }
 // logging.go
 package logging

 import (
      "fmt"
      "net"
 )

 // MAX_MESSAGE_LENGTH limits the size of a single log message (example value).
 const MAX_MESSAGE_LENGTH = 1024

 //...

 type server struct {
      getListener func(protocol, port string) (net.Listener, error)
      ln          net.Listener
      protocol    string
      port        string
      msgs        []string
 }

 // Msgs returns the log messages received so far.
 func (srv *server) Msgs() (msgs []string) {
      msgs = srv.msgs
      return
 }

 // Start listens for connections and collects one log message per connection.
 func (srv *server) Start() {
      var err error
      srv.ln, err = srv.getListener(srv.protocol, ":"+srv.port)
      if err != nil {
           panic(err)
      }
      for {
           conn, err := srv.ln.Accept()
           if err != nil {
                fmt.Println(err)
                continue
           }
           go srv.appendMessage(conn)
      }
 }

 // appendMessage reads one message from the connection and stores it.
 // Note: concurrent appends are not synchronized in this minimal example.
 func (srv *server) appendMessage(conn net.Conn) {
      defer conn.Close()
      buf := make([]byte, MAX_MESSAGE_LENGTH)
      msgLength, err := conn.Read(buf)
      var msg string
      if err != nil {
           msg = err.Error()
      } else {
           msg = string(buf[:msgLength])
      }
      srv.msgs = append(srv.msgs, msg)
 }

 func (srv *server) Stop() {
      if srv.ln != nil {
           srv.ln.Close()
      }
 }
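The post only shows the server side; for completeness, the client call a FakeExe could use to report a log message can be as small as the following sketch (the function name and the choice of plain TCP are my assumptions):

 package logging

 import "net"

 // Send writes one log message to the logging server, e.g. the arguments the
 // FakeExe was invoked with. address has the form "host:port".
 func Send(address, msg string) error {
      conn, err := net.Dial("tcp", address)
      if err != nil {
           return err
      }
      defer conn.Close()
      _, err = conn.Write([]byte(msg))
      return err
 }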

From here, the next step could be implementing a little DSL for our testing by extending the FakeExe with an interpreter, as for example in Bob's Blog - Writing a Lisp Interpreter in Go. Then, instead of sending concrete, implementation-specific configuration values with the configuration service, we just send a script:

 package fakeexe

 import (
      "os"
      "strings"

      "lisp" // https://github.com/bobappleyard/golisp
 )

 //...

 func (f *FakeExe) Run(script string) {
      i := lisp.New()
      i.Repl(strings.NewReader(script), os.Stdout)
      //...
 }

*Examples are written in Go.