The story often starts with a single developer on a single project cobbling infrastructure together as it’s needed. Another developer joins the project and does much the same.
One day, one developer pulls down a recent change and the project doesn’t build. Reaching out to the last contributor, the request for a bugfix is met with a truly terrible excuse:
It works on my machine!
After hours of debugging environment differences, the issue turns up. It’s a version mismatch in the runtime or a key service provider, it’s a case-sensitive filename on OS X, or it’s any of a thousand other things it doesn’t have to be.
Reproducible development environments aren’t a new problem. Tools like VirtualBox (and more recently, Docker) allow software teams to share read-only, just-add-water machine images or containers; Vagrant lets us share reproducible configurations of the VMs themselves; and provisioning tools keep system-level configurations transparent and instantly repeatable from source control.
Images stop “it works on my machine”, but, as attractive as they sound, they come with several significant drawbacks. They’re opaque, making it difficult to know with certainty what a specific image contains; they’re big, often running to several hundred megabytes that must be stored and distributed with every update; and each image only represents a single machine.
If we could share a base image and decorate it with a configuration written in a human-readable format, we could get around these challenges. Changes would be clearly exposed in source control; they would only be as big as the text diff; and descriptions of one service could be easily copied and edited to describe others.
So let’s do that. We’ll use Vagrant to set up a basic machine, then use a very simple Ansible playbook to provision it to a repeatable state.
Check Vagrant’s downloads page for a binary installation. We’ll also need to install a virtualizer to run our Vagrant box; we’ll use VirtualBox here, but VMware or any other provider can easily be substituted.
Once Vagrant is installed, verify that the install ran successfully:
$ vagrant --version
Vagrant 1.7.2
Ansible may be available from [your package manager here], but it’s a quick build from source. Check out the latest version from master (or use a tagged release):
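If a build from source isn’t needed, a package-manager install is usually quicker. A sketch (exact package names and availability vary by platform):

```shell
# Install Ansible from PyPI (assumes Python and pip are available)...
$ sudo pip install ansible

# ...or from a distribution package manager, e.g. on Ubuntu:
$ sudo apt-get install ansible
```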
$ git clone https://github.com/ansible/ansible.git --depth 1
Next, we need to update its submodules and build it:
$ cd ansible
$ git submodule update --init --recursive
$ sudo make install
Verify that it worked:
$ ansible --version
ansible 1.9.0
Now for the fun part. Let’s define a basic VM using Vagrant and use Ansible to set it up.
# ./Vagrantfile
Vagrant.configure("2") do |config|

  # Base image to use
  config.vm.box = "hashicorp/precise64"

  # Declare ansible provisioner
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "dev/playbook.yml"
  end
end
Next, we’ll need to add a dead-simple playbook. In a real application, we’d load it up with the machine’s various roles; for the sake of demonstration we can simply drop a file in the vagrant user’s home directory:
# ./dev/playbook.yml
---
- hosts: all
  tasks:
    - name: Hello, world
      shell: 'echo "Hey there! --ansible" > hello_world.txt'
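Before wiring the playbook into Vagrant, it can be sanity-checked on its own: `ansible-playbook --syntax-check` parses the playbook without running any tasks (the inline `-i 'localhost,'` inventory is a throwaway, used only to satisfy the command):

```shell
# Parse the playbook without executing any tasks
$ ansible-playbook --syntax-check -i 'localhost,' dev/playbook.yml
```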
Let’s bring the machine up and see how things look:
$ vagrant up
$ vagrant ssh
vagrant@precise64:~$ cat hello_world.txt
Hey there! --ansible
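If we change the playbook later, there’s no need to rebuild the VM from scratch; Vagrant can re-run the provisioner against the running machine:

```shell
# Re-apply the Ansible playbook to the existing VM
$ vagrant provision
```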
At this point, we would begin expanding our playbook with roles that apply application runtimes, datastores, web services, and anything else needed for happy, healthy development. We could then expose the Vagrant instance’s network adapter to the host machine, sync local folders to ease development, and tune the entire setup to our heart’s content. But even our trivial example demonstrates repeatability: if something ever happens to our development environment, we simply check out “good” versions of our configuration, throw away the offending VM, and bring it up again:
$ vagrant destroy -f && vagrant up
$ vagrant ssh
vagrant@precise64:~$ cat hello_world.txt
Hey there! --ansible
Not too shabby for a dozen lines of text!
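The networking and folder-sync tweaks mentioned earlier are one-liners in the Vagrantfile. A sketch, extending our example (the port numbers and paths here are placeholders, not part of the original setup):

```ruby
# ./Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/precise64"

  # Forward the guest's port 80 to port 8080 on the host
  config.vm.network "forwarded_port", guest: 80, host: 8080

  # Keep the project directory in sync with /vagrant on the guest
  config.vm.synced_folder ".", "/vagrant"

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "dev/playbook.yml"
  end
end
```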
Note: this article is the first in a miniseries on using Vagrant and Ansible to replicate multimachine production systems. Next up, we’ll extend our simple setup to incorporate multiple Vagrant instances configured by Ansible. Read on!