Tuesday, July 28, 2015

Multi-machine deployments with Vagrant and Ansible

Note: this is the second part in a short series about using Ansible and Vagrant to create reproducible development environments modeled after production systems. Part one closed with Vagrant and Ansible playing nicely together; next, we'll extend our basic environment to reflect a multi-machine deployment.

Ansible plays run against a single machine aren't much more than a reproducible SSH session. There's undeniable value in the ability to document, commit, and replay repetitive tasks, but the tool's real power emerges once it's extended across multiple hosts.

On the other hand, Vagrant is insanely useful for enabling consistent development environments on a single virtual machine, provisioning the same OS, runtime, and dependencies wherever it's used. Nothing stops it at just one host, though, and with a little bit of work, we can use it to configure multiple machines in a model of a small- to mid-size production setup.

Let's take a look at how we can use both tools to build production-like environments we can set up, adjust, destroy, and recreate with just a few keystrokes on the command line.

An Ansible group

To start things off, consider a simple Ansible inventory. One group, two hosts—no secrets here.

# /etc/ansible/hosts
[group-a]
instance-1.example.com
instance-2.example.com

In production, we could now define a playbook and use it to describe the target state of the group-a hosts. For the sake of demonstration, let's just check that each machine is up and available:

# ./dev/playbook.yml
---
- hosts: group-a
  sudo: no
  tasks:
    - command: 'echo {{ greeting }}'
      register: echoed

    - debug: var=echoed.stdout

Running the playbook, we see the expected result:

$ ansible-playbook ./dev/playbook.yml \
  --extra-vars='{"greeting":"bongiorno"}'

# ...

TASK: [debug var=echoed.stdout] *******************************************
ok: [instance-1] => {
    "var": {
        "echoed.stdout": "bongiorno"
    }
}
ok: [instance-2] => {
    "var": {
        "echoed.stdout": "bongiorno"
    }
}

Great! But these are production machines: if we want to develop and test the playbook from the safety of a development environment, we're going to need some help. Cue Vagrant.

Defining the Vagrant group

Vagrant lets us define virtual machines in a human-readable format, which is just what we need to set up a development group. Let's recreate the group-a hosts and attach them to the local network:

# Somewhere inside ./Vagrantfile
config.vm.define "instance-1" do |instance|
  instance.vm.network "private_network", ip: "192.168.32.10"
end

config.vm.define "instance-2" do |instance|
  instance.vm.network "private_network", ip: "192.168.32.11"
end

Vagrant has no concept of a "group", but we can use the Ansible provisioner to redefine group-a. We'll simply need to tell it how the group is organized:

# Somewhere inside ./Vagrantfile
config.vm.provision "ansible" do |ansible|
  ansible.groups = {
    "group-a" => [
      "instance-1",
      "instance-2"
    ]
  }
end

Finally, we can assign the playbook used to provision the group, recycling the same plays from our original, production example and overriding the greeting with a development-only value:

config.vm.provision "ansible" do |ansible|

  # ...

  ansible.playbook = "dev/playbook.yml"

  ansible.extra_vars = {
    "greeting" => "Hello, world"
  }
end

We're almost ready to make it go. We just need to gain access to the hosts Vagrant has created.

Authentication, a brief interlude

By default, recent versions of Vagrant create a separate SSH key for the vagrant user on each managed instance. This makes sense, but to keep things simple for development we'll bypass the individual keys and provision our hosts with the global key used before version 1.7:

# Somewhere inside ./Vagrantfile
config.ssh.insert_key = false

If we'd rather keep the unique keys, we can find each machine's key in .vagrant/machines/<instance_name>/virtualbox/private_key and provide a "development" inventory file pointing at the correct key for each machine. For local development, though, the convenience of a shared key may trump other concerns.
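If we went the per-machine route, the development inventory might look something like the sketch below. The ports here are illustrative (vagrant ssh-config will report the real ones):

```ini
# dev/inventory -- hypothetical per-machine inventory; ports are illustrative
instance-1 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2201 ansible_ssh_private_key_file=.vagrant/machines/instance-1/virtualbox/private_key
instance-2 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_private_key_file=.vagrant/machines/instance-2/virtualbox/private_key

[group-a]
instance-1
instance-2
```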

Provisioning the Vagrant group

Let's bring the VMs up. As they're built, we'll see each machine created and provisioned with the new message in turn:

$ vagrant up

# ...

TASK: [debug var=echoed.stdout] ********************************************
ok: [instance-1] => {
    "var": {
        "echoed.stdout": "Hello, world"
    }
}

# ...

TASK: [debug var=echoed.stdout] ********************************************
ok: [instance-2] => {
    "var": {
        "echoed.stdout": "Hello, world"
    }
}

As the boxes are initialized, the provisioner also creates an Ansible inventory describing the group at .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory. Opening it, we see contents very similar to our original hosts file:

# .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
# Generated by Vagrant

instance-2 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200
instance-1 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2201

[group-a]
instance-1
instance-2

As long as we run Ansible through the provisioner, this inventory will be used by any plays we specify. If we want to run Ansible against running boxes outside of Vagrant, though, we'll need to reference the inventory ourselves.

Running plays on the Vagrant guests

A full vagrant reload --provision takes time, slowing the pace of iterative development. Now that we have an inventory, though, we can run plays against provisioned machines without rebuilding them. All we need are a playbook and a valid SSH key:

ansible-playbook \
  --inventory-file=.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory \
  --user=vagrant \
  --private-key=~/.vagrant.d/insecure_private_key \
  --extra-vars='{"greeting":"bongiorno"}' \
  dev/playbook.yml

--extra-vars will override the corresponding variable declared in the playbook, and we'll now see the Vagrant guests echo our original greeting. Typing that out is a bit of a mouthful, though, so let's script up the details:

# dev/tasks.sh
#!/bin/bash

set -e

ANSIBLE_PLAYBOOK_PATH="dev/playbook.yml"
ANSIBLE_INVENTORY_PATH=".vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory"

VAGRANT_SSH_USER=vagrant
VAGRANT_SSH_KEY_PATH="$HOME/.vagrant.d/insecure_private_key"

usage () {
  echo "$0 <greeting>"
  exit 1
}

run_playbook () {
  ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook \
    --inventory-file="$ANSIBLE_INVENTORY_PATH" \
    --user="$VAGRANT_SSH_USER" \
    --private-key="$VAGRANT_SSH_KEY_PATH" \
    --extra-vars="{\"greeting\":\"$1\"}" \
    "$ANSIBLE_PLAYBOOK_PATH"
}

[ "$#" == "0" ] && usage

run_playbook "$@"

The contents of this boilerplate will change depending on what tasks we need to run. Maybe we allow different playbooks or user-specified tags; maybe we expose certain variables as command-line options. But we now have a simple basis for running Ansible against our development machines.
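One possible direction, sketched here with illustrative flag names: parse a --tags option and an alternate playbook path, then thread them through to ansible-playbook. The function below only builds and echoes the command so the wiring is easy to inspect; a real script would execute it instead.

```shell
#!/bin/bash
# Sketch only: flag names and defaults are illustrative.

build_playbook_command () {
  local playbook="dev/playbook.yml"
  local tags=""
  local greeting=""

  # Collect options; anything unrecognized is treated as the greeting.
  while [ "$#" -gt 0 ]; do
    case "$1" in
      --tags=*)     tags="${1#--tags=}" ;;
      --playbook=*) playbook="${1#--playbook=}" ;;
      *)            greeting="$1" ;;
    esac
    shift
  done

  local cmd="ansible-playbook --extra-vars={\"greeting\":\"$greeting\"}"
  if [ -n "$tags" ]; then
    cmd="$cmd --tags=$tags"
  fi
  echo "$cmd $playbook"
}

build_playbook_command --tags=deploy 'Hello, world'
```

From here, executing the built command (or calling ansible-playbook directly with the parsed values) is a one-line change.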

./dev/tasks.sh 'Hello, world!'


Sunday, July 05, 2015

Using Ansible with Vagrant

The story often starts with a single developer on a single project cobbling infrastructure together as it's needed. Another developer joins the project and does much the same.

One day, a developer pulls down a recent change and the project doesn't build. When they reach out to the last contributor, the request for a bugfix is met with a truly terrible excuse:

It works on my machine!

After hours of debugging environment differences, the issue turns up: a version mismatch in the runtime or a key service provider, a mis-cased file name that OS X's case-insensitive filesystem let slip by, or any of a thousand other things it doesn't have to be.

Reproducible development environments aren't a new problem. Tools like VirtualBox (and, more recently, Docker) let software teams share read-only, just-add-water machine images or containers; Vagrant lets us share reproducible configurations of the VMs themselves; and provisioning tools keep system-level configurations transparent and instantly repeatable from source control.

Images stop "it works on my machine", but—as attractive as they sound—they come with several significant drawbacks. They're opaque, making it difficult to know with certainty what a specific image contains; they're big, often weighing in at several hundred megabytes that must be stored and distributed with every update; and they only represent a single machine.

If we could share a base image and decorate it with a configuration written in a human-readable format, we could get around these challenges. Changes would be clearly exposed in source control; they would only be as big as the text diff; and descriptions of one service could be easily copied and edited to describe others.

So let's do that. We'll use Vagrant to set up a basic machine, then use a very simple Ansible playbook to provision it to a repeatable state.

Set up Vagrant

Check Vagrant's downloads page for a binary installation. We'll also need a virtualizer to run our Vagrant box; we'll use VirtualBox here, but VMware or another provider can easily be substituted.

Once Vagrant is installed, verify that the install ran successfully:

$ vagrant --version
Vagrant 1.7.2

Set up Ansible

Ansible may be available from [your package manager here], but it's a quick build from source. Check out the latest version from master (or use a tagged release):

$ git clone git@github.com:ansible/ansible.git --depth 1

Next, we need to update its submodules and build it:

$ cd ansible
$ git submodule update --init --recursive
$ sudo make install

Verify that it worked:

$ ansible --version
ansible 1.9.0

A simple, reproducible development environment

Now for the fun part. Let's define a basic VM using Vagrant and use Ansible to set it up.

# ./Vagrantfile
Vagrant.configure("2") do |config|

  # Base image to use
  config.vm.box = "hashicorp/precise64"

  # Declare ansible provisioner
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "dev/playbook.yml"
  end
end

Next, we'll need to add a dead-simple playbook. In a real application, we'd load it up with the machine's various roles; for the sake of demonstration we can simply drop a file in the vagrant user's home directory:

# ./dev/playbook.yml
---
- hosts: all
  tasks:
    - name: Hello, world
      shell: 'echo "Hey there! --ansible" > hello_world.txt'

Let's bring the machine up and see how things look:

$ vagrant up
$ vagrant ssh
vagrant@precise64:~$ cat hello_world.txt
Hey there! --ansible

Conclusion

At this point, we would begin expanding our playbook with roles to apply application runtimes, datastores, web services, and anything else needed for happy, healthy development. We could then expose the Vagrant instance's network adapter to the host machine, sync local folders to ease development, and tune the entire setup to our heart's content. But even our trivial example demonstrates repeatability: if something ever happens to our development environment, we simply check out "good" versions of the Vagrantfile and playbook.yml, blast away the offending VM, and bring it up again:

$ vagrant destroy -f && vagrant up
$ vagrant ssh
vagrant@precise64:~$ cat hello_world.txt
Hey there! --ansible

Not too shabby for a dozen lines of text!
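Those next steps—exposing the network and syncing folders—are one-liners in the Vagrantfile. A sketch, with illustrative port numbers and paths:

```ruby
# Somewhere inside ./Vagrantfile -- values are illustrative
Vagrant.configure("2") do |config|
  # Forward a guest web server's port to the host
  config.vm.network "forwarded_port", guest: 8080, host: 8080

  # Sync the project directory into the guest for live editing
  config.vm.synced_folder ".", "/home/vagrant/app"
end
```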

Note: this article is the first in a miniseries on using Vagrant and Ansible to replicate multi-machine production systems. Next up, we'll extend our simple setup to incorporate multiple Vagrant instances configured by Ansible. Read on!



Friday, June 05, 2015

Testing API requests from window.fetch

Note: this article is the second of two parts in a miniseries on functional client-server testing. In the first, we used sinon.js's fakeServer utility to test an XMLHttpRequest-based client. Picking up where we left off, we'll now apply similar techniques to high-level tests around the window.fetch API.

If we designed a generic API for JavaScript client requests today, chances are that it wouldn't look like XHR. Most of the interactions we need to make with a server can be represented simply as a request (method, path, body); custom headers take care of a few more edge cases; and the remainder will be handled by websockets or novel browser transport layers. It's no surprise, then, that these common cases are the first ones covered by the emerging specification for window.fetch.

Introduction

Simple requests made with fetch look much like those made by any other client library. They take a path and any request-specific options and return the server's (eventual) Response as a Promise object:

window.fetch('/api/v1/users')
  .then((res) => {
    console.log(res.status);
  });

Note that we'll use ES6 syntax throughout this discussion on the assumption that ES6 will be in widespread use by the time window.fetch is broadly supported (and that babel-compiled projects are sufficient for now); back-porting to ES5-compliant code is left as an exercise for the reader.

Supported tomorrow, but usable today

At press time, window.fetch enjoys native support in exactly zero browsers, but the fine folks over at GitHub have released an XHR-based polyfill for the current specification. The polyfill may change before the standard is finalized, but it's a start: using it, we can begin using fetch in client applications today:

$ npm install whatwg-fetch

Next, we'll need a simple client to test. This implementation proxies window.fetch requests to a JSON API, employing some trivial response parsing to type error responses and capitalize successful ones.

require('whatwg-fetch');

function apiError (status, message) {
  var err = new Error(message);
  err.status = status;
  return err;
}

function onAPIError (res) {
  return res.json().then(function (json) {
    return Promise.reject(apiError(res.status, json.message));
  });
}

function onAPIResponse (res) {
  return res.json().then(function (json) {
    return {
      hello: json.hello.toUpperCase()
    };
  });
}

export const client = (path) => {
  return window.fetch(path)
    .catch(onAPIError)
    .then(onAPIResponse);
};

export default client;

In practice, users will not interact with the client directly. Rather, they'll be interacting with it through the UI and scheduled events within a bigger web application. While our tests will focus on the client for simplicity, the same approaches used to test the client can also unlock higher-level tests for the interface. If we can mimic a server response to a direct client request, we can mimic the same response (for instance) when a user clicks a button or saves their progress.

High-level tests

In our previous look at testing XHR-based applications, we considered describing client-server interactions at both the unit and functional levels:

  1. unit - test that our code provides correct arguments to window.fetch or any client libraries that wrap it

  2. functional - test that our code results in a correctly-dispatched Request and reacts appropriately to the Response

Functional tests are marginally more difficult to set up, but testing against a standard (even an emerging one) enables separation between application logic and the request layer while encouraging tests that are clearer and more portable than those written at the unit level. We'll take a similar approach here, though the details change a bit.
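To make the unit level concrete before moving on, here's roughly what such a check might look like. This sketch uses a hand-rolled recording stub and a trivial stand-in client so that it's self-contained; in the real suite, sinon.stub and the client above would fill those roles:

```javascript
// Stand-in for the browser global so the sketch is self-contained;
// in a browser test this would be the real window object.
const window = {};

// Hand-rolled stub: records every call and resolves with nothing,
// since a unit-level test never inspects the response.
function stubFetch () {
  const calls = [];
  window.fetch = (path, options) => {
    calls.push({ path, options });
    return Promise.resolve();
  };
  return calls;
}

// Trivial stand-in for the client under test.
const client = (path) => window.fetch(path);

// Unit level: assert on the arguments passed, not on any behavior.
const calls = stubFetch();
client('/api/v1/users');
console.log(calls.length, calls[0].path); // 1 /api/v1/users
```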

Testing the client

Let's look at a simple functional test that captures the client's "success" behavior.

it('formats the response correctly', (done) => {
  client('/foobar')
    .catch(done)
    .then((json) => {
      expect(json.hello).toBe('WORLD');
      done();
    });
});

Even though we expect success, we implement the promise's .catch interface to notify the test runner if an error is encountered. And indeed, running the test as written yields a 404 when the runner is unable to GET /foobar. To make it pass, we need to describe the expected behavior of the underlying server. Rather than reaching for an existing utility like sinon.fakeServer, as we did with the more complex XHR API, we can mock the fetch API ourselves; its design is simple enough.

First, let's stub window.fetch. This serves both to cancel any outbound requests and to let us replace its default behavior with one of our own:

beforeEach(() => {
  sinon.stub(window, 'fetch');
});

afterEach(() => {
  window.fetch.restore();
});

Next, we need to mock a behavior that matches the actual Response we would receive from a fetched request. For a simple success response from a JSON API, we could simply write:

beforeEach(() => {
  var res = new window.Response('{"hello":"world"}', {
    status: 200,
    headers: {
      'Content-type': 'application/json'
    }
  });

  window.fetch.returns(Promise.resolve(res));
});

Note that this behavior is synchronous—the Promise with the Response resolves immediately—but our test is written in an asynchronous style (runners like jasmine and mocha will wait for the done callback before proceeding with other tests). While not strictly necessary, assuming that a fetch could resolve during a separate tick through the event loop yields both a more flexible test and a better representation of reality.

In any case, the test client will now encounter the resolved Response, apply its formatting, and pass successfully.

Tidying up

Just as with XHRs, the server behaviors mocked across a non-trivial test suite are likely to involve some repetition. Rather than formatting each JSON response independently or injecting the same headers across multiple tests, it's well worth writing test helpers to reduce the volume of boilerplate code. As an example, we can update the jsonOk and jsonError helpers used in our XHR-based tests to build Response objects instead:

function jsonOk (body) {
  var mockResponse = new window.Response(JSON.stringify(body), {
    status: 200,
    headers: {
      'Content-type': 'application/json'
    }
  });

  return Promise.resolve(mockResponse);
}

function jsonError (status, body) {
  var mockResponse = new window.Response(JSON.stringify(body), {
    status: status,
    headers: {
      'Content-type': 'application/json'
    }
  });

  return Promise.reject(mockResponse);
}

These barely scratch the surface of useful testing facilities (we might want to match specific requests, for instance, or write helpers to describe sequences of requests, as in an authentication flow), but even a simple helper like jsonOk can reduce test setup to a nearly trivial line:

beforeEach(() => {
  window.fetch.returns(jsonOk({
    hello: 'world'
  }));
});

Conclusion

window.fetch provides a more straightforward API than XMLHttpRequest, and that simplicity is reflected in our tests. Instead of needing to contrive a mock with a wide range of event states, accessors, and boutique behaviors, fetch can be tested with simple stubs and instances of the actual objects used in its normal operation. There's still a fair amount of boilerplate, which helpers can mitigate somewhat, but the volume of "magic" (fake global objects and the like) needed to mimic low-level behavior is significantly reduced.



