Sunday, July 05, 2015

Using Ansible with Vagrant

The story often starts with a single developer on a single project cobbling infrastructure together as it's needed. Another developer joins the project and does much the same.

One day, one developer pulls down a recent change and the project doesn't build. Reaching out to the last contributor, the request for a bugfix is met with a truly terrible excuse:

It works on my machine!

After hours of debugging environment differences, the issue turns up. It's a version mismatch in the runtime or a key service, it's a case-sensitivity issue on OS X, or it's any of a thousand other things it doesn't have to be.

Reproducible development environments aren't a new problem. Tools like VirtualBox (and more recently, docker) allow software teams to share read-only, just-add-water machine images or containers; Vagrant allows us to share reproducible configurations of the VMs themselves, and provisioning tools let us keep system-level configurations transparent and instantly repeatable from source control.

Images stop "it works on my machine", but—as attractive as they sound—they come with several significant drawbacks. They're opaque, making it difficult to know with certainty what a specific image contains; they're big, often running into several hundreds of megabytes that must be stored and distributed with every update; and they only represent a single machine.

If we could share a base image and decorate it with a configuration written in a human-readable format, we could get around these challenges. Changes would be clearly exposed in source control; they would only be as big as the text diff; and descriptions of one service could be easily copied and edited to describe others.

So let's do that. We'll use Vagrant to set up a basic machine, then use a very simple ansible playbook to provision it to a repeatable state.

Set up Vagrant

Check Vagrant's downloads page for a binary installation. We'll also need to install a virtualizer to run our Vagrant box; we'll use VirtualBox here, but VMware or any other can be easily substituted.

Once Vagrant is installed, verify that the install ran successfully:

$ vagrant --version
Vagrant 1.7.2

Set up Ansible

Ansible may be available from [your package manager here], but it's a quick build from source. Check out the latest version from master (or use a tagged release):

$ git clone git@github.com:ansible/ansible.git --depth 1

Next, we need to update its submodules and build it:

$ cd ansible
$ git submodule update --init --recursive
$ sudo make install

Verify that it worked:

$ ansible --version
ansible 1.9.0

A simple, reproducible development environment

Now for the fun part. Let's define a basic VM using Vagrant and use Ansible to set it up.

# ./Vagrantfile
Vagrant.configure("2") do |config|

  # Base image to use
  config.vm.box = "hashicorp/precise64"

  # Declare ansible provisioner
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "dev/playbook.yml"
  end
end

Next, we'll need to add a dead-simple playbook. In a real application, we'd load it up with the machine's various roles; for the sake of demonstration we can simply drop a file in the vagrant user's home directory:

# ./dev/playbook.yml
---
- hosts: all
  tasks:
    - name: Hello, world
      shell: 'echo "Hey there! --ansible" > hello_world.txt'

Let's bring the machine up and see how things look:

$ vagrant up
$ vagrant ssh
vagrant@precise64:~$ cat hello_world.txt
Hey there! --ansible

Conclusion

At this point, we would begin expanding our playbook with roles that install application runtimes, datastores, web services, and anything else needed for happy, healthy development. We could then expose the Vagrant instance's network adapter to the host machine, sync local folders to ease development, and tune the entire setup to our heart's content. But even our trivial example demonstrates repeatability: if something ever happens to our development environment, we simply check out "good" versions of the Vagrantfile and playbook.yml, blast away the offending VM, and bring it up again:

$ vagrant destroy -f && vagrant up
$ vagrant ssh
vagrant@precise64:~$ cat hello_world.txt
Hey there! --ansible

Not too shabby for a dozen lines of text!



Friday, June 05, 2015

Testing API requests from window.fetch

Note: this article is the second of two parts in a miniseries on functional client-server testing. In the first, we used sinon.js's fakeServer utility to test an XMLHttpRequest-based client. Picking up where we left off, we'll now apply similar techniques to high-level tests around the window.fetch API.

If we designed a generic API for JavaScript client requests today, chances are that it wouldn't look like XHR. Most of the interactions we need to make with a server can be represented simply as a request (method, path, body); custom headers take care of a few more edges; and the remainder will be made using websockets or novel browser transport layers. It's no surprise, then, that these common cases are the first covered by the emerging specification for window.fetch.

Introduction

Simple requests made with fetch look much like those made by any other client library. They take a path and any request-specific options and return the server's (eventual) Response as a Promise object:

window.fetch('/api/v1/users')
  .then((res) => {
    console.log(res.status);
  });

Note that we'll use ES6 syntax throughout this discussion on the assumption that ES6 will be in widespread use by the time window.fetch is broadly supported (and that babel-compiled projects are sufficient for now); back-porting to ES5-compliant code is left as an exercise for the reader.

Supported tomorrow, but usable today

At press time window.fetch enjoys native support in exactly zero browsers, but the fine folks over at github have released an XHR-based polyfill for the current specification. It may change before the standard is finalized. For us, though, it's a start—using it, we can begin using fetch in client applications today:

$ npm install whatwg-fetch

Next, we'll need a simple client to test. This implementation proxies window.fetch requests to a JSON API, employing some trivial response parsing to type error responses and capitalize successful ones.

require('whatwg-fetch');

function apiError (status, message) {
  var err = new Error(message);
  err.status = status;
  return err;
}

function onAPIError (res) {
  return res.json().then(function (json) {
    return Promise.reject(apiError(res.status, json.message));
  });
}

function onAPIResponse (res) {
  return res.json().then(function (json) {
    return {
      hello: json.hello.toUpperCase()
    };
  });
}

export const client = (path) => {
  return window.fetch(path)
    .catch(onAPIError)
    .then(onAPIResponse);
};

export default client;

In practice, users will not interact with the client directly. Rather, they'll be interacting with it through the UI and scheduled events within a bigger web application. While our tests will focus on the client for simplicity, the same approaches used to test the client can also unlock higher-level tests for the interface. If we can mimic a server response to a direct client request, we can mimic the same response (for instance) when a user clicks a button or saves their progress.
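To make that concrete, here's a sketch of a UI-level handler exercised through a stubbed fetch. All names here (onSaveClick, the /api/v1/save endpoint) are hypothetical, and we assume a runtime where fetch and Response are available globally, natively or via the polyfill:

```javascript
// Hypothetical UI handler: what might run when a user clicks "save".
// It talks to the API through fetch, just as our client does.
function onSaveClick () {
  return fetch('/api/v1/save')          // assumed endpoint
    .then((res) => res.json())
    .then((json) => json.saved);
}

// Stub fetch with a canned Response: the same move we make when testing
// the client directly. No network is involved.
globalThis.fetch = () =>
  Promise.resolve(new Response('{"saved":true}', {
    status: 200,
    headers: { 'Content-type': 'application/json' }
  }));

// The handler sees the mocked "server" exactly as if a user had clicked.
onSaveClick().then((saved) => console.log(saved)); // logs: true
```

The point is that the stub lives below the handler, so nothing in the UI code needs to know it is talking to a fake.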

High-level tests

In our previous look at testing XHR-based applications, we considered describing client-server interactions at both the unit and functional levels:

  1. unit - test that our code provides correct arguments to window.fetch or any client libraries that wrap it

  2. functional - test that our code results in a correctly-dispatched Request and reacts appropriately to the Response

Functional tests are marginally more difficult to set up, but testing against a standard (even an emerging one) enables separation between application logic and the request layer while encouraging tests that are clearer and more portable than those written at the unit level. We'll take a similar approach here, though the details change a bit.

Testing the client

Let's look at a simple functional test that captures the client's "success" behavior.

it('formats the response correctly', (done) => {
  client('/foobar')
    .catch(done)
    .then((json) => {
      expect(json.hello).toBe('WORLD');
      done();
    });
});

Even though we expect success, we implement the promise's .catch interface to notify the test runner if an error is encountered. And indeed, running the test as written yields a 404 when the runner is unable to GET /foobar. In order to make it pass, we need to describe the expected behavior of the underlying server. With the more complex XHR API we reached for an existing utility in sinon.fakeServer, but the fetch API is simple enough for us to mock ourselves.

First, let's stub window.fetch. This serves both to cancel any outbound requests and to let us replace its default behavior with one of our own:

beforeEach(() => {
  sinon.stub(window, 'fetch');
});

afterEach(() => {
  window.fetch.restore();
});

Next, we need to mock a behavior that matches the actual Response we would receive from a fetched request. For a simple success response from a JSON API, we could simply write:

beforeEach(() => {
  var res = new window.Response('{"hello":"world"}', {
    status: 200,
    headers: {
      'Content-type': 'application/json'
    }
  });

  window.fetch.returns(Promise.resolve(res));
});

Note that this behavior is synchronous—the Promise with the Response resolves immediately—but our test is written in an asynchronous style (runners like jasmine and mocha will wait for the done callback before proceeding with other tests). While not strictly necessary, assuming that a fetch could resolve during a separate tick through the event loop yields both a more flexible test and a better representation of reality.
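If we want to force that extra tick explicitly, we could resolve the mocked Response after a zero-delay timeout. A sketch, where jsonOkLater is a hypothetical helper name and the Response constructor is assumed to be in scope (natively or via the polyfill):

```javascript
// Resolve the canned Response on a later turn of the event loop,
// mimicking real network latency more closely than an immediate resolve.
function jsonOkLater (body) {
  return new Promise((resolve) => {
    setTimeout(() => {
      resolve(new Response(JSON.stringify(body), {
        status: 200,
        headers: { 'Content-type': 'application/json' }
      }));
    }, 0);
  });
}

// Usage with the stub from the previous snippet:
// window.fetch.returns(jsonOkLater({ hello: 'world' }));
```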

In any case, the test client will now encounter the resolved Response, apply its formatting, and pass successfully.

Tidying up

Just as with XHRs, the server behaviors mocked across a non-trivial test suite are likely to involve some repetition. Rather than formatting each JSON response independently, or injecting the same headers across multiple tests, it's well worth considering test helpers to reduce the volume of boilerplate code. For an example, we can update the jsonOk and jsonError helpers used in our XHR-based tests to build Response objects instead:

function jsonOk (body) {
  var mockResponse = new window.Response(JSON.stringify(body), {
    status: 200,
    headers: {
      'Content-type': 'application/json'
    }
  });

  return Promise.resolve(mockResponse);
}

function jsonError (status, body) {
  var mockResponse = new window.Response(JSON.stringify(body), {
    status: status,
    headers: {
      'Content-type': 'application/json'
    }
  });

  return Promise.reject(mockResponse);
}

These barely scratch the surface of useful testing facilities—we might want to match specific requests, for instance, or write helpers to describe sequences of requests (as in an authentication flow)—but even a simple helper like jsonOk can reduce test setup to a nearly-trivial line:

beforeEach(() => {
  window.fetch.returns(jsonOk({
    hello: 'world'
  }));
});
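Matching specific requests, for instance, could start as a small hand-rolled dispatcher. fakeFetch and its route map here are hypothetical names, and the Response constructor is again assumed to be in scope:

```javascript
// Build a fetch-compatible stub that serves canned JSON per path and
// rejects any unexpected request, so stray calls fail loudly in tests.
function fakeFetch (routes) {
  return function (path) {
    if (!(path in routes)) {
      return Promise.reject(new Error('Unexpected request: ' + path));
    }
    return Promise.resolve(new Response(JSON.stringify(routes[path]), {
      status: 200,
      headers: { 'Content-type': 'application/json' }
    }));
  };
}

// window.fetch = fakeFetch({ '/hello': { hello: 'world' } });
```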

Conclusion

window.fetch provides a more straightforward API than XMLHttpRequest, and it's reflected in our tests. Instead of needing to contrive a mock with a wide range of event states, accessors, and boutique behaviors, fetch can be tested with simple stubs and instances of the actual objects used in its normal operation. There's still a fair amount of boilerplate, which helpers can mitigate somewhat, but the volume of "magic"—fake global objects and the like—needed to mimic low-level behavior is significantly reduced.



Sunday, May 31, 2015

Testing API requests with XHR and sinon.js

Clients are nothing without their servers, but necessity hasn't meant an easy relationship. Security considerations and the molasses-slow evolution of web standards across the major browsers mean that most applications are still interacting with the HTTP world through the venerable channels of the XMLHttpRequest API.

We rarely interact with XHR directly, though, preferring a variety of wrappers and client libraries to replace its less-than-obvious structure with something more familiar. That might be Backbone.sync (which in turn delegates to jQuery) or Angular's $http; we might simply consume it through a third-party service's client. If we're making requests, though, odds are that somewhere, somehow, we're relying on XHR.

For application developers that ubiquity is both a blessing and a curse. On the one hand, XHR provides a common currency for nearly every outbound request. On the other, its complexity ensures that it is usually wrapped—with plenty of room for inconsistency between each wrapping implementation. Not only are we wedded to XHR; we're attached to the wrappers' details as well.

The strategy

This presents a difficult choice for testing. We can stub each wrapper and write unit tests around our own applications (now we have a bunch of unit tests), or we can find some way to test at a functional level, treating the wrapper as a black box between our own code and the XHRs that it ultimately triggers. There are several advantages to the functional approach:

  • Application logic is divorced from the request layer. Multiple transport methods (request libraries, client libraries, etc.) can coexist peacefully within a single application. This also encourages:

  • Portability. Tests at the XHR level describe the underlying logic in a standard-as-in-browsers format. If they were originally written for a Backbone app, their logic will still apply after a custom client has been swapped in for Backbone.sync.

  • Server behaviors can be described directly. Servers speak HTTP. XHRs describe HTTP. Writing test fixtures in terms of the raw status codes, headers, and bodies expected from the server makes it easy to compare tests to actual server behavior.

The downside is complexity: a quick look through Angular's $httpBackend mock gives some idea of how involved server responses can get. The many-splendored features of the XHR API don't make it any easier, so let's start simple.

Sinon.fakeServer

XHR is hardly a novel problem, and the contributors to the fabulous sinon.js mocking library have provided a facility to save us from reimplementing it in our tests: fakeServer.

fakeServer works by mocking the global XMLHttpRequest object to provide predetermined response fixtures when certain requests are matched. Say we have a simple client (demo source is available on github):

function apiError (status, message) {
  var err = new Error(message);
  err.status = status;
  return err;
}

function client (path, callback) {

  var xhr = new window.XMLHttpRequest();

  xhr.addEventListener('load', function () {
    var body;
    try {
      body = JSON.parse(this.responseText);
    }
    catch (e) {
      return callback(new Error('Invalid JSON: ' + this.responseText));
    }

    if (this.status < 200 || this.status > 299) {
      return callback(apiError(this.status, body.message));
    }

    return callback(null, body);
  });

  xhr.open('get', path);
  xhr.send();
}

Not much there—just a tool for wrapping XHR outcomes in node.js's continuation-passing style. If we want to write a simple jasmine spec for it using sinon's server, it might look something like this:

describe('client', function () {

  var server = null;

  beforeEach(function () {
    server = sinon.fakeServer.create();
  });

  afterEach(function () {
    server.restore();
  });

  describe('responding to a generic request', function () {

    beforeEach(function () {
      var okResponse = [
        200,
        { 'Content-type': 'application/json' },
        '{"hello":"world"}'
      ];

      server.respondWith('GET', '/hello', okResponse);
    });

    it('returns correct body', function (done) {
      client('/hello', function (err, json) {
        if (err) return done(err);
        expect(json.hello).toBe('world');
        done();
      });

      server.respond();
    });
  });
});

From this test, we expect the client to translate the response to a callback. We could simply describe this as:

it('returns correct body', function (done) {
  client('/hello', function (err, json) {
    expect(json.hello).toBe('world');
    done();
  });
});

Running this test, however, we would see a 404 as the outbound XHR tries—and fails—to GET /hello from the local server. To prevent the XHR from ever making it that far, we set up a fake server and preload it with a fixed response (following [ status, headerObj, bodyStr ]) to GET /hello:

var server = null;

beforeEach(function () {
  server = sinon.fakeServer.create();
});

afterEach(function () {
  server.restore();
});

beforeEach(function () {
  var okResponse = [
    200,
    { 'Content-type': 'application/json' },
    '{"hello":"world"}'
  ];

  server.respondWith('GET', '/hello', okResponse);
});

Finally, we tell the server to respond.

it('returns correct body', function (done) {
  client('/hello', function (err, json) {
    if (err) return done(err);
    expect(json.hello).toBe('world');
    done();
  });
  server.respond();
});

There's a subtlety in the flow of this test: we're sending the request and response synchronously, but the request callback will resolve the test whenever it is actually invoked. In other words, the actual evaluation follows:

  1. client request is sent
  2. server responds
  3. client callback is invoked with (err, json)
  4. client callback runs assertions
  5. client callback resolves test by calling done()
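The flow above can be sketched without sinon at all. Here's a minimal stand-in (hypothetical names, not sinon's actual internals) that queues the request callback and replays a canned fixture only when respond() is called:

```javascript
// Minimal illustration of the fakeServer flow: a "request" only registers
// its callback; nothing fires until respond() replays the fixture.
function makeFakeServer (fixture) {
  var pending = [];
  return {
    request: function (callback) {   // step 1: request is sent
      pending.push(callback);
    },
    respond: function () {           // step 2: server responds
      pending.forEach(function (cb) {
        cb(null, fixture);           // steps 3-5: callback is invoked
      });
    }
  };
}

var server = makeFakeServer({ hello: 'world' });
server.request(function (err, json) {
  console.log(json.hello);           // runs only after respond()
});
server.respond();                    // logs: world
```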

Dressing it up

We're now testing via XHR, but phew!—it's taken a lot of boilerplate to get there. We can clean things up a bit by extracting common operations to test helpers. For instance, we can wrap fake responses from a JSON server up in their own almost-trivial methods:

function jsonOk (body) {
  return [
    200, {
      'Content-type': 'application/json'
    }, JSON.stringify(body)
  ];
}

function jsonError (statusCode, body) {
  return [
    statusCode, {
      'Content-type': 'application/json'
    }, JSON.stringify(body || {
      error: statusCode,
      message: 'an error has befallen us!'
    })
  ];
}

The behavior of each API will be slightly different, of course, but if we can contain repeated behaviors (headers, standard error codes, response format, etc) inside test helpers it can make a somewhat cumbersome test much more legible:

describe('responding to a generic request', function () {

  beforeEach(function () {
    server.respondWith('GET', '/hello', jsonOk({
      hello: 'world'
    }));
  });

  it('returns correct body', function (done) {
    client('/hello', function (err, json) {
      if (err) return done(err);
      expect(json.hello).toBe('world');
      done();
    });

    server.respond();
  });
});

Conclusion

Replacing library-specific unit tests with full-on mocks of XHR is a non-trivial project. For client codebases deeply involved with one or more HTTP APIs, though, tests aimed directly at XHR can be clearer, more flexible, and easier to maintain than tests targeted at a particular XHR wrapper. And even though it requires an intimidating volume of boilerplate up front, the ubiquity of XHR makes it easy to extract and reuse helpers across multiple tests.

Note: this article is the first of two parts in a miniseries on functional client-server testing. Next up, we'll use similar techniques to author high-level tests around the window.fetch API.



