Friday, September 04, 2015

Confab: Simple Node.js Configurations

Node.js service configurations run the gamut from sophisticated libraries to ad-hoc lookups of environment variables or arbitrary JSON on the file system. Somewhere in the middle lies confab, a tiny utility for building configurations that are simple, external, and utterly predictable.

$ npm install confab

The core concept is dead simple: a single configuration object is constructed, extended, and validated using a sequence of transformations. With no transformations supplied, the configuration is simply an empty object:

// config.js
module.exports = confab([]); // {}
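
Under the hood, that sequence is just a left-to-right reduction over the transformation list. Here's a minimal sketch of the idea in plain JavaScript (not confab's actual source; `miniConfab` and `defaults` are illustrative stand-ins):

```javascript
// Each transformation receives the config built so far and returns
// the config to hand to the next transformation in the list.
function miniConfab (transformations) {
  return transformations.reduce(function (config, transform) {
    return transform(config);
  }, {});
}

// Transformations are plain functions, so a `defaults` stand-in is easy:
function defaults (values) {
  return function (config) {
    Object.keys(values).forEach(function (key) {
      if (!config.hasOwnProperty(key)) {
        config[key] = values[key];
      }
    });
    return config;
  };
}

var config = miniConfab([
  defaults({ port: 3000 })
]);

console.log(config); // { port: 3000 }
```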

That isn't so useful, but confab also ships with a few built-in transformations that can:

  • load configuration data from JSON
  • merge environment variables
  • specify defaults and required fields
  • assign programmatic overrides
  • freeze the results against further modification

To set a default port and allow the environment to override it, we can extend our empty configuration with a few transformations:

var config = confab([
  confab.loadEnvironment({ 'PORT': 'port' }),
  confab.defaults({ port: 3000 }),
]); // { "port": 3000 }

Custom Transformations

We can also easily define new transformations; if we want all applications to report certain configuration values at startup, for instance, we can whitelist and log the interesting parts:

function logConfigValue (whitelist) {
  return function (config) {
    var output = {};
    whitelist.forEach(function (key) {
      if (config.hasOwnProperty(key)) {
        output[key] = config[key];
      }
    });
    console.log(JSON.stringify(output));
    return config;
  };
}

The custom transformation can now be used just like the built-ins:

var config = confab([
  confab.loadEnvironment({ 'PORT': 'port' }),
  confab.defaults({ port: 3000 }),
  logConfigValue(['port']) // logs { "port": 3000 }
]);


Besides encouraging simple, declarative configuration, confab also provides a platform for applying consistent rules across multiple projects. Once our transformation list is settled, we can package the configuration logic up and publish it to a (private) package repository. Maybe we want to require all projects to respect JSON config files stored in a "global" directory:

var confab = require('confab');
var path = require('path');

module.exports = function myConfig (projectName) {

  var env = process.env.NODE_ENV || 'default';
  var homeDir = process.env.USERPROFILE || process.env.HOME;

  return confab([
    confab.loadJSON([
      path.resolve(__dirname, 'config.json'),
      path.resolve(homeDir, '.org-config', projectName + '.' + env + '.json')
    ]),
    confab.loadEnvironment({ 'PORT': 'port' }),
    confab.defaults({ port: 3000 })
  ]);
};

To reuse this configuration, we can simply install it:

$ npm install my-config

...and pass it project-specific options as needed.

// app-config.js
module.exports = require('my-config')('my-app');


That's about it: simple, reusable configuration logic. It's been invaluable to me for simplifying configuration management and runtime visibility across projects, scheduled tasks, and services, and now it's open-sourced for your development convenience.

Happy Hacking!

Wednesday, August 05, 2015

Circular Queue

Software developers aren't keen on losing data. In certain tasks, though—retaining a limited event history, say, or buffering a realtime stream—data loss might be a fully-underwritten feature of the system. In these scenarios, we might employ a circular queue: a fixed-size queue with the Ouroborosian behavior that stale data from the end of the queue are overwritten by newly-arrived items.

Getting Started

We'll use the circular-queue package from NPM to get started:

$ npm install circular-queue

Let's create our first queue and specify how many items it can hold. The maximum size will vary greatly between applications, but 8 is as good a number as any for demonstration.

var CircularQueue = require('circular-queue');
var queue = new CircularQueue(8);


Our new queue has two basic operations: queue.offer(item), which adds items to the queue, and queue.poll(), which will remove and return the queue's oldest item:

queue.offer(3);

queue.poll();  // 3
queue.isEmpty; // true

It can also be useful to inspect the next item from the queue before deciding whether to poll it. We can do this using queue.peek():

queue.offer(3);

queue.peek(); // 3
queue.poll(); // 3

Here's what it looks like in action. Use the offer() and poll() methods to add and remove items, noting the "rotation" of the oldest element (blue) as the contents shift:


The idea at the heart of the circular queue is eviction. As new items are pushed onto an already-full queue, the oldest items in the queue are "evicted" and their places overwritten. Using circular-queue, we can detect evictions as they happen using the 'evict' event.

queue.addEventListener('evict', function (item) {
  console.log('queue evicted', item);
});

How we handle eviction will vary from one domain to the next. It's an opportunity to apply error-correcting behavior—to expand a buffer, allocate new workers, or take other steps to relieve upstream pressure—but even if the loss is "expected", it may be worth at least a note in the logs.

To see eviction in practice, use offer() to overrun the full queue below. Note the value of the evicted item and the "rotation" of the queue as its head and tail move.


We're now ready to put it all together. In a live application, we'll rarely be clicking offer and poll ourselves. Rather, some upstream "producer" will be pushing new data to the queue while a downstream "consumer" races to pull items off.

We can get a sense for how this behaves using a final demo. Try adjusting the rate of production and consumption to see the queue in balance (consumption matches production), filling up (production exceeds consumption), or draining (consumption exceeds production).
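
Without the live demo, we can still get a feel for those three regimes with a toy simulation. This sketch uses a plain array as a bounded queue; the capacity and rates are arbitrary illustration values:

```javascript
// Produce two items and consume one per tick: production outpaces
// consumption, so the queue fills up and then starts evicting.
var CAPACITY = 8;
var queue = [];
var evictions = 0;

function produce (item) {
  if (queue.length === CAPACITY) {
    queue.shift(); // evict the oldest item to make room
    evictions++;
  }
  queue.push(item);
}

function consume () {
  return queue.shift(); // undefined once the queue drains
}

for (var tick = 0; tick < 10; tick++) {
  produce(tick * 2);
  produce(tick * 2 + 1);
  consume();
}

console.log(queue.length); // 7: hovering just below capacity
console.log(evictions);    // 3: each overrun tick forced an eviction
```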

Tuesday, July 28, 2015

Bring your production environment home with Vagrant and Ansible

Note: this is the second part in a short series about using Ansible and Vagrant to create reproducible development environments modeled after production systems. Part one closed with Vagrant and Ansible playing nicely together; next, we'll extend our basic environment to reflect a multi-machine deployment.

With our powers combined

Ansible plays applied to a single machine aren't much more than a reproducible SSH session. There's undeniable value in the ability to document, commit, and replay repetitive tasks, but the tool's real power emerges once it's extended across multiple hosts.

Vagrant, on the other hand, is insanely useful for single machine environments. Define an image, its hardware configuration, and provisioning, and—boom—your team can tuck HFS+ compatibility into bed forever. Nothing stops it at just one host, though, and with a little bit of work, we can use it to configure multiple machines in a model of a small- to mid-size production setup.

Let's take a look at how combining both tools will let us bring production home with prod-like environments we can set up, adjust, destroy, and recreate with just a few steps on the command line.

A simple group

To start things off, consider a simple Ansible inventory. One group, two hosts—no secrets here.

# /etc/ansible/hosts
[group-a]
instance-1
instance-2
In production, we could now define a playbook and use it to describe the target state of the group-a hosts. For the sake of demonstration, let's just check that each machine is up and available:

# ./dev/playbook.yml
- hosts: group-a
  sudo: no
  tasks:
    - command: 'echo {{ greeting }}'
      register: echoed

    - debug: var=echoed.stdout

Running the playbook, we see the expected result:

$ ansible-playbook ./dev/playbook.yml \
    --extra-vars='{"greeting":"bongiorno"}'
# ...

TASK: [debug var=echoed.stdout] *******************************************
ok: [instance-1] => {
    "var": {
        "echoed.stdout": "bongiorno"
    }
}
ok: [instance-2] => {
    "var": {
        "echoed.stdout": "bongiorno"
    }
}

Great! But these are production machines: if we want to develop and test the playbook from the safety of a development environment, we're going to need some help. Cue Vagrant.

Defining the Vagrant group

Vagrant lets us define virtual machines in a human-readable format; we can use it to set up a development group. Let's recreate the group-a hosts and attach them to the local network:

# Somewhere inside ./Vagrantfile
config.vm.define "instance-1" do |instance| "private_network", ip: ""

config.vm.define "instance-2" do |instance| "private_network", ip: ""

Vagrant has no concept of a "group", but we can use the Ansible provisioner to redefine group-a. We'll simply need to tell it how the group is organized:

# Somewhere inside ./Vagrantfile
config.vm.provision "ansible" do |ansible|
  ansible.groups = {
    "group-a" => ["instance-1", "instance-2"]
  }
end
Finally, we can assign the playbook used to provision them, recycling the same plays from our original, production example and overriding the greeting with a small, development-only value:

config.vm.provision "ansible" do |ansible|

  # ...

  ansible.playbook = "dev/playbook.yml"

  ansible.extra_vars = {
    "greeting" => "Hello, world"
  }
end
We're almost ready to make it go. We just need to gain access to the hosts Vagrant has created.

Authentication, a brief interlude

By default, recent versions of Vagrant will create separate SSH keys for the vagrant user on each managed instance. This makes sense, but to keep things simple for development we'll bypass individual keys and provision our hosts with the global key used before version 1.7.

# Somewhere inside ./Vagrantfile
config.ssh.insert_key = false

If we're willing to work around the unique keys, we can instead find them in .vagrant/machines/<instance_name>/virtualbox/private_key and provide a "development" inventory file with the correct key for each machine. For local development, though, the convenience of a shared key may trump other concerns.

Provisioning the Vagrant group

Let's bring the VMs up. As they're built, we'll see each machine created and provisioned with the new message in turn:

$ vagrant up

# ...

TASK: [debug var=echoed.stdout] ********************************************
ok: [instance-1] => {
    "var": {
        "echoed.stdout": "Hello, world"
    }
}

# ...

TASK: [debug var=echoed.stdout] ********************************************
ok: [instance-2] => {
    "var": {
        "echoed.stdout": "Hello, world"
    }
}

As the boxes are initialized, the provisioner also creates an Ansible inventory describing the group at .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory. Opening it, we see contents very similar to our original hosts file:

# .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
# Generated by Vagrant

instance-2 ansible_ssh_host= ansible_ssh_port=2200
instance-1 ansible_ssh_host= ansible_ssh_port=2201


As long as we run Ansible through the provisioner, this inventory will be used by any plays we specify. If we want to run Ansible against running boxes outside of Vagrant, though, we'll need to reference the inventory ourselves.

Running plays on the Vagrant guests

A full vagrant reload --provision takes time, slowing the pace of iterative development. Now that we have an inventory, though, we can run plays against provisioned machines without rebuilding them. All we need are a playbook and a valid SSH key:

ansible-playbook \
  --inventory-file=.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory \
  --user=vagrant \
  --private-key=~/.vagrant.d/insecure_private_key \
  --extra-vars='{"greeting":"bongiorno"}' \
  dev/playbook.yml

--extra-vars will override the corresponding variable declared in the playbook, and we'll now see the Vagrant guests echo our original greeting. Typing that out is a bit of a mouthful, though, so let's script up the details:

# dev/

set -e

ANSIBLE_INVENTORY_PATH=.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
VAGRANT_SSH_USER=vagrant
VAGRANT_SSH_KEY_PATH=~/.vagrant.d/insecure_private_key

usage () {
  echo "$0 <greeting>"
  exit 1
}

run_playbook () {
  ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook \
    --inventory-file=$ANSIBLE_INVENTORY_PATH \
    --user=$VAGRANT_SSH_USER \
    --private-key=$VAGRANT_SSH_KEY_PATH \
    --extra-vars="{\"greeting\":\"$1\"}" \
    dev/playbook.yml
}

[ "$#" == "0" ] && usage

run_playbook "$@"

The contents of this boilerplate will change depending on what tasks we need to run. Maybe we allow different playbooks or user-specified tags; maybe we expose certain variables as command-line options. But we now have a simple basis for running Ansible against our development machines.

./dev/ 'Hello, world!'
