Bring your production environment home with Vagrant and Ansible
- 7/28/2015
Note: this is the second part in a short series about using Ansible and Vagrant to create reproducible development environments modeled after production systems. Part one closed with Vagrant and Ansible playing nicely together; next, we’ll extend our basic environment to reflect a multi-machine deployment.
With our powers combined
Ansible plays applied to a single machine aren’t much more than a reproducible SSH session. There’s undeniable value in the ability to document, commit, and replay repetitive tasks, but the tool’s real power emerges once it’s extended across multiple hosts.
Vagrant, on the other hand, is insanely useful for single-machine environments. Define an image, its hardware configuration, and provisioning, and–boom–your team can tuck HFS+ compatibility into bed forever. Nothing stops it at just one host, though, and with a little bit of work, we can use it to configure multiple machines in a model of a small- to mid-size production setup.
Let’s take a look at how combining both tools will let us bring production home with prod-like environments we can set up, adjust, destroy, and recreate with just a few steps on the command line.
A simple group
To start things off, consider a simple Ansible inventory. One group, two hosts–no secrets here.
# /etc/ansible/hosts
[group-a]
instance-1.example.com
instance-2.example.com
In production, we could now define a playbook and use it to describe the target state of the group-a hosts. For the sake of demonstration, let’s just check that each machine is up and available:
# ./dev/playbook.yml
---
- hosts: group-a
  sudo: no
  tasks:
    - command: 'echo {{ greeting }}'
      register: echoed
    - debug: var=echoed.stdout
Running the playbook, we see the expected result:
$ ansible-playbook ./dev/playbook.yml \
    --extra-vars='{"greeting":"bongiorno"}'
# ...
TASK: [debug var=echoed.stdout] *******************************************
ok: [instance-1.example.com] => {
    "var": {
        "echoed.stdout": "bongiorno"
    }
}
ok: [instance-2.example.com] => {
    "var": {
        "echoed.stdout": "bongiorno"
    }
}
Great! But these are production machines: if we want to develop and test the playbook from the safety of a development environment, we’re going to need some help. Cue Vagrant.
Defining the Vagrant group
Vagrant lets us define virtual machines in a human-readable format, for instance to set up a development group. Let’s recreate the group-a hosts and attach them to the local network:
# Somewhere inside ./Vagrantfile
config.vm.define "instance-1" do |instance|
  instance.vm.network "private_network", ip: "192.168.32.10"
end

config.vm.define "instance-2" do |instance|
  instance.vm.network "private_network", ip: "192.168.32.11"
end
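Since the Vagrantfile is plain Ruby, nothing stops us from generating these definitions in a loop once the group grows. A quick sketch, equivalent to the two blocks above:

# Somewhere inside ./Vagrantfile
(1..2).each do |i|
  config.vm.define "instance-#{i}" do |instance|
    # instance-1 gets 192.168.32.10, instance-2 gets 192.168.32.11
    instance.vm.network "private_network", ip: "192.168.32.#{9 + i}"
  end
end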
Vagrant has no concept of a “group”, but we can use the Ansible provisioner to redefine group-a. We’ll simply need to tell it how the group is organized:
# Somewhere inside ./Vagrantfile
config.vm.provision "ansible" do |ansible|
  ansible.groups = {
    "group-a" => [
      "instance-1",
      "instance-2"
    ]
  }
end
Finally, we can tell the provisioner which playbook to use, recycling the same plays from our original, production example and overriding greeting with a small, development-only value:
config.vm.provision "ansible" do |ansible|
  # ...
  ansible.playbook = "dev/playbook.yml"
  ansible.extra_vars = {
    "greeting" => "Hello, world"
  }
end
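Assembled from the fragments above, the complete provisioner block would look something like this:

# Somewhere inside ./Vagrantfile
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "dev/playbook.yml"
  ansible.groups = {
    "group-a" => ["instance-1", "instance-2"]
  }
  ansible.extra_vars = {
    "greeting" => "Hello, world"
  }
end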
We’re almost ready to make it go. We just need to gain access to the hosts Vagrant has created.
Authentication, a brief interlude
By default, recent versions of Vagrant create a separate SSH key for the vagrant user on each managed instance. This makes sense, but to keep things simple for development we’ll bypass individual keys and provision our hosts with the global key used before version 1.7:
# Somewhere inside ./Vagrantfile
config.ssh.insert_key = false
If we’re willing to work around the unique keys, we can instead find each machine’s key in .vagrant/machines/<instance_name>/virtualbox/private_key and provide a “development” inventory file pointing at the correct key for each machine. For local development, though, the convenience of a shared key may trump other concerns.
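For illustration, such a development inventory might look like the sketch below. The forwarded ports are hypothetical and will vary with your local Vagrant setup:

# ./dev/inventory (hypothetical)
instance-1 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2201 ansible_ssh_private_key_file=.vagrant/machines/instance-1/virtualbox/private_key
instance-2 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_private_key_file=.vagrant/machines/instance-2/virtualbox/private_key

[group-a]
instance-1
instance-2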
Provisioning the Vagrant group
Let’s bring the VMs up. As they’re built, we’ll see each machine created and provisioned with the new message in turn:
$ vagrant up
# ...
TASK: [debug var=echoed.stdout] ********************************************
ok: [instance-1] => {
    "var": {
        "echoed.stdout": "Hello, world"
    }
}
# ...
TASK: [debug var=echoed.stdout] ********************************************
ok: [instance-2] => {
    "var": {
        "echoed.stdout": "Hello, world"
    }
}
As the boxes are initialized, the provisioner also creates an Ansible inventory describing the group at .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory. Opening it, we see contents very similar to our original hosts file:
# .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
# Generated by Vagrant
instance-2 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200
instance-1 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2201
[group-a]
instance-1
instance-2
As long as we run Ansible through the provisioner, this inventory will be used by any plays we specify. If we want to run Ansible against running boxes outside of Vagrant, though, we’ll need to reference the inventory ourselves.
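For example, a quick ad-hoc ping against the generated inventory (using Ansible’s built-in ping module and the shared insecure key from earlier) confirms the guests are reachable:

$ ansible group-a -m ping \
    --inventory-file=.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory \
    --user=vagrant \
    --private-key=~/.vagrant.d/insecure_private_key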
Running plays on the Vagrant guests
A full vagrant reload --provision takes time, slowing the pace of iterative development. Now that we have an inventory, though, we can run plays against the provisioned machines without rebuilding them. All we need are a playbook and a valid SSH key:
$ ansible-playbook \
    --inventory-file=.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory \
    --user=vagrant \
    --private-key=~/.vagrant.d/insecure_private_key \
    --extra-vars='{"greeting":"bongiorno"}' \
    dev/playbook.yml
--extra-vars will override the corresponding variable declared in the playbook, and we’ll now see the Vagrant guests echo our original greeting. Typing all of that out is a bit of a mouthful, though, so let’s script up the details, disabling SSH host-key checking along the way (Vagrant reuses forwarded ports across rebuilt machines, which would otherwise trip the known-hosts check):
#!/bin/bash
# dev/tasks.sh
set -e

ANSIBLE_PLAYBOOK_PATH="dev/playbook.yml"
ANSIBLE_INVENTORY_PATH=".vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory"
VAGRANT_SSH_USER=vagrant
VAGRANT_SSH_KEY_PATH="$HOME/.vagrant.d/insecure_private_key"

usage () {
  echo "$0 <greeting>"
  exit 1
}

run_playbook () {
  ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook \
    --inventory-file="$ANSIBLE_INVENTORY_PATH" \
    --user="$VAGRANT_SSH_USER" \
    --private-key="$VAGRANT_SSH_KEY_PATH" \
    --extra-vars="{\"greeting\":\"$1\"}" \
    "$ANSIBLE_PLAYBOOK_PATH"
}

[ "$#" -eq 0 ] && usage
run_playbook "$@"
The contents of this boilerplate will change depending on what tasks we need to run. Maybe we allow different playbooks or user-specified tags; maybe we expose certain variables as command-line options. But we now have a simple basis for running Ansible against our development machines.
$ ./dev/tasks.sh 'Hello, world!'
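As a hypothetical sketch of one such extension, user-specified tags could ride along as an optional second argument, passed through ansible-playbook’s standard --tags flag to limit which tagged tasks run:

# dev/tasks.sh, hypothetical extension: optional tag filtering
run_playbook () {
  local tags="${2:-all}"  # "all" is Ansible's run-everything default
  ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook \
    --inventory-file="$ANSIBLE_INVENTORY_PATH" \
    --user="$VAGRANT_SSH_USER" \
    --private-key="$VAGRANT_SSH_KEY_PATH" \
    --tags="$tags" \
    --extra-vars="{\"greeting\":\"$1\"}" \
    "$ANSIBLE_PLAYBOOK_PATH"
}

# Usage: ./dev/tasks.sh 'Hello, world!' some-tag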