Wednesday, August 05, 2015

Circular Queue

Software developers aren't keen on losing data. In certain tasks, though—retaining a limited event history, say, or buffering a realtime stream—data loss might be a deliberate, fully accepted feature of the system. In these scenarios, we might employ a circular queue: a fixed-size queue with the Ouroborosian behavior that the oldest items in the queue are overwritten by newly arrived ones.

Getting Started

We'll use the circular-queue package from NPM to get started:

$ npm install circular-queue

Let's create our first queue and specify how many items it can hold. The maximum size will vary greatly between applications, but 8 is as good a number as any for demonstration.

var CircularQueue = require('circular-queue');
var queue = new CircularQueue(8);

Operations

Our new queue has two basic operations: queue.offer(item), which adds items to the queue, and queue.poll(), which will remove and return the queue's oldest item:

queue.offer(3);
queue.poll();  // 3
queue.isEmpty; // true

It can also be useful to inspect the next item from the queue before deciding whether to poll it. We can do this using queue.peek():

queue.offer(3);
queue.peek(); // 3
queue.poll(); // 3

Here's what it looks like in action. Use the offer() and poll() methods to add and remove additional items, noting the "rotation" of the oldest element (blue) as the contents shift:
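
If you'd rather follow along in code, here's a rough static sketch of the same rotation, using only the methods we've already seen:

var CircularQueue = require('circular-queue');
var queue = new CircularQueue(8);

queue.offer('a');
queue.offer('b');
queue.offer('c');

queue.peek(); // 'a' -- the oldest item is next in line
queue.poll(); // 'a' -- removing it makes 'b' the oldest
queue.peek(); // 'b'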

Eviction

The idea at the heart of the circular queue is eviction. As new items are pushed onto an already-full queue, the oldest items in the queue are "evicted" and their places overwritten. Using circular-queue, we can detect evictions as they happen using the 'evict' event.

queue.addEventListener('evict', function (item) {
  console.info('queue evicted', item);
});

How we handle eviction will vary from one domain to the next. It's an opportunity to apply error-correcting behavior—to expand a buffer, allocate new workers, or take other steps to relieve upstream pressure—but even if the loss is "expected", it may be worth at least a note in the logs.

To see eviction in practice, use offer() to overrun the full queue below. Note the value of the evicted item and the "rotation" of the queue as its head and tail move.
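
To sketch the same experiment in code (using only the offer() and 'evict' APIs we've already seen), we can fill an 8-item queue and then overrun it:

var CircularQueue = require('circular-queue');
var queue = new CircularQueue(8);

queue.addEventListener('evict', function (item) {
  console.info('queue evicted', item);
});

// Fill the queue to capacity...
for (var i = 0; i < 8; i++) {
  queue.offer(i);
}

// ...then overrun it. Each extra offer evicts the oldest remaining item:
queue.offer(8); // logs "queue evicted 0"
queue.offer(9); // logs "queue evicted 1"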

Buffering

We're now ready to put it all together. In a live application, we'll rarely be clicking offer and poll ourselves. Rather, some upstream "producer" will be pushing new data to the queue while a downstream "consumer" races to pull items off.

We can get a sense for how this behaves using a final demo. Try adjusting the rate of production and consumption to see the queue in balance (consumption matches production), filling up (production exceeds consumption), or draining (consumption exceeds production).
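
There's no substitute for the live demo, but a minimal sketch of the same setup (with made-up production and consumption rates) might look something like this:

var CircularQueue = require('circular-queue');
var queue = new CircularQueue(8);

// An upstream "producer" pushing a new item every 100ms...
var produced = 0;
setInterval(function () {
  queue.offer(produced++);
}, 100);

// ...and a slower downstream "consumer" draining one every 250ms. Since
// production outpaces consumption, the queue will fill up and begin evicting.
setInterval(function () {
  if (!queue.isEmpty) {
    console.log('consumed', queue.poll());
  }
}, 250);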



Tuesday, July 28, 2015

Bring your production environment home with Vagrant and Ansible

Note: this is the second part in a short series about using Ansible and Vagrant to create reproducible development environments modeled after production systems. Part one closed with Vagrant and Ansible playing nicely together; next, we'll extend our basic environment to reflect a multi-machine deployment.

With our powers combined

Ansible plays applied to a single machine aren't much more than a reproducible SSH session. There's undeniable value in the ability to document, commit, and replay repetitive tasks, but the tool's real power emerges once it's extended across multiple hosts.

Vagrant, on the other hand, is insanely useful for single machine environments. Define an image, its hardware configuration, and provisioning, and—boom—your team can tuck HFS+ compatibility into bed forever. Nothing stops it at just one host, though, and with a little bit of work, we can use it to configure multiple machines in a model of a small- to mid-size production setup.

Let's take a look at how combining both tools will let us bring production home with prod-like environments we can set up, adjust, destroy, and recreate with just a few steps on the command line.

A simple group

To start things off, consider a simple Ansible inventory. One group, two hosts—no secrets here.

# /etc/ansible/hosts
[group-a]
instance-1.example.com
instance-2.example.com

In production, we could now define a playbook and use it to describe the target state of the group-a hosts. For the sake of demonstration, let's just check that each machine is up and available:

# ./dev/playbook.yml
---
- hosts: group-a
  sudo: no
  tasks:
    - command: 'echo {{ greeting }}'
      register: echoed

    - debug: var=echoed.stdout

Running the playbook, we see the expected result:

$ ansible-playbook ./dev/playbook.yml \
  --extra-vars='{"greeting":"bongiorno"}'

# ...

TASK: [debug var=echoed.stdout] *******************************************
ok: [instance-1] => {
    "var": {
        "echoed.stdout": "bongiorno"
    }
}
ok: [instance-2] => {
    "var": {
        "echoed.stdout": "bongiorno"
    }
}

Great! But these are production machines: if we want to develop and test the playbook from the safety of a development environment, we're going to need some help. Cue Vagrant.

Defining the Vagrant group

Vagrant lets us define virtual machines in a human-readable format, which makes it a natural fit for setting up a development group. Let's recreate the group-a hosts and attach them to a private network:

# Somewhere inside ./Vagrantfile
config.vm.define "instance-1" do |instance|
  instance.vm.network "private_network", ip: "192.168.32.10"
end

config.vm.define "instance-2" do |instance|
  instance.vm.network "private_network", ip: "192.168.32.11"
end

Vagrant has no concept of a "group", but we can use the Ansible provisioner to redefine group-a. We'll simply need to tell it how the group is organized:

# Somewhere inside ./Vagrantfile
config.vm.provision "ansible" do |ansible|
  ansible.groups = {
    "group-a" => [
      "instance-1",
      "instance-2"
    ]
  }
end

Finally, we can assign the playbook used to provision them, recycling the same plays from our original production example and overriding the greeting with a small, development-only value:

config.vm.provision "ansible" do |ansible|

  # ...

  ansible.playbook = "dev/playbook.yml"

  ansible.extra_vars = {
    "greeting" => "Hello, world"
  }
end

We're almost ready to make it go. We just need to gain access to the hosts Vagrant has created.

Authentication, a brief interlude

By default, recent versions of Vagrant create separate SSH keys for the vagrant user on each managed instance. This makes sense, but to keep things simple for development we'll bypass the individual keys and provision our hosts with the global key used before version 1.7.

# Somewhere inside ./Vagrantfile
config.ssh.insert_key = false

If we'd rather keep the unique keys, we can instead find each machine's key in .vagrant/machines/<instance_name>/virtualbox/private_key and provide a "development" inventory file that points to the correct key for each machine. For local development, though, the convenience of a shared key may trump other concerns.
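
For the curious, such a development inventory might look something like the sketch below (the filename, ports, and key paths are illustrative; check your own .vagrant directory for the real values):

# dev/inventory (sketch only)
instance-1 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2201 ansible_ssh_private_key_file=.vagrant/machines/instance-1/virtualbox/private_key
instance-2 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_private_key_file=.vagrant/machines/instance-2/virtualbox/private_key

[group-a]
instance-1
instance-2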

Provisioning the Vagrant group

Let's bring the VMs up. As they're built, we'll see each machine created and provisioned with the new message in turn:

$ vagrant up

# ...

TASK: [debug var=echoed.stdout] ********************************************
ok: [instance-1] => {
    "var": {
        "echoed.stdout": "Hello, world"
    }
}

# ...

TASK: [debug var=echoed.stdout] ********************************************
ok: [instance-2] => {
    "var": {
        "echoed.stdout": "Hello, world"
    }
}

As the boxes are initialized, the provisioner also creates an Ansible inventory describing the group at .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory. Opening it, we see contents very similar to our original hosts file:

# .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
# Generated by Vagrant

instance-2 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200
instance-1 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2201

[group-a]
instance-1
instance-2

As long as we run Ansible through the provisioner, this inventory will be used by any plays we specify. If we want to run Ansible against running boxes outside of Vagrant, though, we'll need to reference the inventory ourselves.

Running plays on the Vagrant guests

A full vagrant reload --provision takes time, slowing the pace of iterative development. Now that we have an inventory, though, we can run plays against provisioned machines without rebuilding them. All we need are a playbook and a valid SSH key:

ansible-playbook \
  --inventory-file=.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory \
  --user=vagrant \
  --private-key=~/.vagrant.d/insecure_private_key \
  --extra-vars='{"greeting":"bongiorno"}' \
  dev/playbook.yml

--extra-vars supplies the greeting variable referenced in the playbook, and we'll now see the Vagrant guests echo our original greeting. Typing all of that out is a bit of a mouthful, though, so let's script up the details:

# dev/tasks.sh
#!/bin/bash

set -e

ANSIBLE_PLAYBOOK_PATH="dev/playbook.yml"
ANSIBLE_INVENTORY_PATH=".vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory"

VAGRANT_SSH_USER=vagrant
VAGRANT_SSH_KEY_PATH="$HOME/.vagrant.d/insecure_private_key"

usage () {
  echo "$0 <greeting>"
  exit 1
}

run_playbook () {
  ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook \
    --inventory-file=$ANSIBLE_INVENTORY_PATH \
    --user=$VAGRANT_SSH_USER \
    --private-key=$VAGRANT_SSH_KEY_PATH \
    --extra-vars="{\"greeting\":\"$1\"}" \
    $ANSIBLE_PLAYBOOK_PATH
}

[ "$#" == "0" ] && usage

run_playbook "$@"

The contents of this boilerplate will change depending on what tasks we need to run. Maybe we allow different playbooks or user-specified tags; maybe we expose certain variables as command-line options. But we now have a simple basis for running Ansible against our development machines.

./dev/tasks.sh 'Hello, world!'
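
If we later wanted to expose, say, Ansible's --tags option through the same script, one possible (purely illustrative) tweak would be to have run_playbook forward an optional second argument:

# dev/tasks.sh (illustrative variation): forward an optional second
# argument to ansible-playbook as --tags
run_playbook () {
  local tags_arg=""
  if [ -n "$2" ]; then
    tags_arg="--tags=$2"
  fi

  ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook \
    --inventory-file=$ANSIBLE_INVENTORY_PATH \
    --user=$VAGRANT_SSH_USER \
    --private-key=$VAGRANT_SSH_KEY_PATH \
    --extra-vars="{\"greeting\":\"$1\"}" \
    $tags_arg \
    $ANSIBLE_PLAYBOOK_PATH
}

# ./dev/tasks.sh 'Hello, world!' some-tag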


Sunday, July 05, 2015

Using Ansible with Vagrant

The story often starts with a single developer on a single project cobbling infrastructure together as it's needed. Another developer joins the project and does much the same.

One day, one developer pulls down a recent change and the project doesn't build. When they reach out to the last contributor, the request for a bugfix is met with a truly terrible excuse:

It works on my machine!

After hours of debugging environment differences, the issue turns up. It's a version mismatch in the runtime or a key service, it's a case-sensitive filename on OS X, or it's any of a thousand other things it doesn't have to be.

Reproducible development environments aren't a new problem. Tools like VirtualBox (and more recently, Docker) allow software teams to share read-only, just-add-water machine images or containers; Vagrant allows us to share reproducible configurations of the VMs themselves; and provisioning tools let us keep system-level configurations transparent and instantly repeatable from source control.

Images stop "it works on my machine", but—as attractive as they sound—they come with several significant drawbacks. They're opaque, making it difficult to know with certainty what a specific image contains; they're big, often running into several hundreds of megabytes that must be stored and distributed with every update; and they only represent a single machine.

If we could share a base image and decorate it with a configuration written in a human-readable format, we could get around these challenges. Changes would be clearly exposed in source control; they would only be as big as the text diff; and descriptions of one service could be easily copied and edited to describe others.

So let's do that. We'll use Vagrant to set up a basic machine, then use a very simple Ansible playbook to provision it to a repeatable state.

Set up Vagrant

Check Vagrant's downloads page for a binary installation. We'll also need to install a virtualizer to run our Vagrant box; we'll use VirtualBox here, but VMware or any other can be easily substituted.

Once Vagrant is installed, verify that the install ran successfully:

$ vagrant --version
Vagrant 1.7.2

Set up Ansible

Ansible may be available from [your package manager here], but it's a quick build from source. Check out the latest version from master (or use a tagged release):

$ git clone git@github.com:ansible/ansible.git --depth 1

Next, we need to update its submodules and build it:

$ cd ansible
$ git submodule update --init --recursive
$ sudo make install

Verify that it worked:

$ ansible --version
ansible 1.9.0

A simple, reproducible development environment

Now for the fun part. Let's define a basic VM using Vagrant and use Ansible to set it up.

# ./Vagrantfile
Vagrant.configure("2") do |config|

  # Base image to use
  config.vm.box = "hashicorp/precise64"

  # Declare ansible provisioner
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "dev/playbook.yml"
  end
end

Next, we'll need to add a dead-simple playbook. In a real application, we'd load it up with the machine's various roles; for the sake of demonstration we can simply drop a file in the vagrant user's home directory:

# ./dev/playbook.yml
---
- hosts: all
  tasks:
    - name: Hello, world
      shell: 'echo "Hey there! --ansible" > hello_world.txt'

Let's bring the machine up and see how things look:

$ vagrant up
$ vagrant ssh
vagrant@precise64:~$ cat hello_world.txt
Hey there! --ansible

Conclusion

At this point, we would begin expanding our playbook with roles to apply application runtimes, datastores, web services, and anything else needed for happy, healthy development. We could then expose the Vagrant instance's network adapter to the host machine, sync local folders to ease development, and tune the entire setup to our heart's content (there's a quick sketch of those tweaks below). But even our trivial example lets us demonstrate repeatability: if something ever happens to our development environment, we simply check out "good" versions of the Vagrantfile and playbook.yml, blast away the offending VM, and bring it up again:

$ vagrant destroy -f && vagrant up
$ vagrant ssh
vagrant@precise64:~$ cat hello_world.txt
Hey there! --ansible

Not too shabby for a dozen lines of text!
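
For reference, the networking and folder-syncing tweaks mentioned above might look something like this in the Vagrantfile (the IP and paths are illustrative only):

# Somewhere inside ./Vagrantfile
# Expose the guest on a host-only network...
config.vm.network "private_network", ip: "192.168.32.10"

# ...and sync the project directory into the guest for convenient editing
config.vm.synced_folder ".", "/home/vagrant/app"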

Note: this article is the first in a miniseries on using Vagrant and Ansible to replicate multimachine production systems. Next up, we'll extend our simple setup to incorporate multiple Vagrant instances configured by Ansible. Read on!



