With the continuous growth of PC performance and the proliferation of cloud platforms, Virtual Machines (VMs) have become less of a luxury reserved for those with a large server capable of running them, and more of a day-to-day utility for developers across the industry. But with the shift towards virtualization comes the question of control and repeatability of all these new VMs. For us at Zaleos, consistency and automation are two foundational values, and below you can learn how we put them into action when it comes to virtual environments.

Why use virtual environments in the first place?

If you can’t immediately think of a couple of reasons why software engineers should use virtual environments, here are some to get you started:

  • Isolation and disposability - spawn a new VM, try something out and then discard it without leaving a single trace. This can be repeated numerous times without the risk of persisting undesired changes that cause surprises later.
  • Environment cloning - take an environment where a problem manifests itself and share it with a colleague to help identify the cause.
  • Testing - take an image of the production environment and use it to carry out various testing phases.

Why automate?

The most common answers that come to mind are probably:

  • To save time
  • To minimize human error

And those are very valid reasons, but one thing that may not immediately stand out is that automating processes increases an organization's ability to scale up and manage change.

Imagine for a second that you are responsible for maintaining a standardized development environment for your team. You take the team’s favorite operating system, perform all the necessary provisioning steps (capturing them either as a set of reusable scripts or with the help of a more sophisticated infrastructure configuration tool) and package the final result as a virtual machine for the benefit of your team.

And just as you’ve done all that work, you find out that:

  • The testing team liked the VM approach so much that they would like to run all their tests inside disposable VMs as well. And what better configuration to start with than the one used by the development team? Except...the image is too big, its performance settings are tuned for debugging rather than speed, and so on. So they ask you to produce a separate flavor of your VM to be used for testing.
  • A newsletter arrives in your inbox announcing the upcoming end of support for the version of the operating system your team has been using, which gets you thinking about the transition to the newer version.
  • Your boss walks past your desk, compliments you on the great work done to provide a common foundation for all teams through a standardized virtual environment setup, and casually mentions how much he liked the idea of lightweight containers like Docker that he heard about at the last conference he attended, and that the team should really consider adopting them where possible.

And just like that, in a single afternoon, the amount of effort required to maintain virtual environments has increased eightfold.

Of course, the above scenario is fiction, but it illustrates how changing infrastructure requirements can significantly increase the effort required to maintain virtual environments, and it makes the case for automating virtual environment creation.

As mentioned earlier, the drive for automation is one of our foundational values, and you can see how we’ve been applying it in other areas in our blog post “Automated Testing”.

What is Packer?

Packer is an open source tool for the automated creation of identical virtual environments. From a single configuration, it can build, provision and deploy images for a wide range of virtualization platforms.

From the configuration point of view, Packer provides several simple, intuitive abstractions that govern its behavior.

  • Builders - typically the first thing you have to configure when setting up Packer to automate your infrastructure. A builder is the component of Packer responsible for creating a machine and turning it into an image. Depending on your target platform, you choose a builder type from this exhaustive list and supply additional configuration specific to that type. Don't fret if you can't find your target platform in the list; Packer can be extended with your own custom builder type.
  • Provisioners - apply customizations on top of the base OS image produced by a builder. They play well with sophisticated infrastructure automation tools such as Ansible, Chef or Puppet, as well as with plain shell scripts.
  • Post-processors - are optional; they help you compress and upload artifacts from previous steps, repackage them (for example, a VirtualBox OVF into a Vagrant box) or run custom post-processing steps outside of the image being built.

Example

Let us revisit the scenario mentioned earlier and see how we can address one of the requests - creating two different types of virtual images that share common provisioning logic, using Packer.

To reproduce the following steps yourself, make sure you have installed Packer, Docker and Vagrant, as well as a Vagrant provider (we will use VirtualBox in this case).
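A quick way to verify that the prerequisites are in place is to check that each tool is on your PATH. A minimal sketch (it only reports what is installed, it doesn't install anything; VBoxManage is the VirtualBox command-line tool, so its presence implies VirtualBox):

```shell
# Report which of the required tools are available on this machine.
status=""
for tool in packer docker vagrant VBoxManage; do
  if command -v "$tool" >/dev/null 2>&1; then
    status="$status $tool:ok"
  else
    status="$status $tool:missing"
  fi
done
echo "prerequisites:$status"
```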

A Packer configuration (called a "template" in the Packer world) is defined as a JSON file. Below is a template outline that makes a good starting point in the majority of situations.

{
  "variables": {
    ...
  },
  "builders": [
    ...
  ],
  "provisioners": [
    ...
  ],
  "post-processors": [
    ...
  ]
}

Considering our scenario, we are going to start by specifying the necessary builders. Here is the bare minimum configuration needed to create a Docker image based on a pre-existing one:

{
  "type": "docker",
  "image": "centos:7",
  "export_path": "{{user `output_path`}}/docker/image.tar"
}
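As an aside: if you don't need the tar archive on disk, the Docker builder can commit the resulting container instead of exporting it (a sketch; note that commit and export_path are mutually exclusive options, so you set one or the other):

```json
{
  "type": "docker",
  "image": "centos:7",
  "commit": true
}
```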

Three lines are all it took! To create a Vagrant box, we will need to configure a few extra things, but hopefully you will find the following template snippet just as clear and concise as the previous one:

{
  "type": "vagrant",
  "source_path": "centos/7",
  "ssh_port": 22,
  "ssh_username": "vagrant",
  "communicator": "ssh",
  "add_force": true,
  "provider": "virtualbox",
  "output_dir": "{{user `output_path`}}/vagrant"
}

Here we are also creating a Vagrant box from a pre-existing box (which can be stored locally or in Vagrant Cloud), and specifying how Packer should connect to it (in our case, via SSH on the given port with the given user).

The above snippet demonstrates a fairly recent Packer feature that allows building a Vagrant box from another Vagrant box. Prior to this, one had to first build a virtual image (using the builder type for one of the providers supported by Vagrant) and then package it into a Vagrant box using a post-processor step. At Zaleos we've been creating Vagrant boxes by first building a VirtualBox image from the official CentOS 7 distribution ISO. As you can imagine, this process is time-consuming (fetching the ISO and installing) and requires much more configuration. We are looking forward to trialing the new approach of reusing existing Vagrant boxes and applying it in production.

You might have noticed that in both Docker and Vagrant builder configurations the following construct appears:

{{user `output_path`}}

This is the Packer's way to inject user variables in the configuration. Before output_path variable can be injected anywhere though, it has to be defined in the variables section of the template JSON:

"variables": {
  "output_path": "./output"
},
...
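Variables defined this way act as defaults; they can be overridden at build time without editing the template, using Packer's -var flag. For example, to redirect the output of the same output_path variable:

```shell
packer build -var 'output_path=./custom-output' template.json
```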

Next, we are going to look at how provisioning steps can be applied to the builder output. As mentioned earlier, provisioners take care of this in Packer. Below is a sample configuration that ensures that both the Docker and Vagrant images come with the SIP message flow viewer sngrep preinstalled.

"provisioners": [
  {
    "type": "shell",
    "inline": "yum install -y sudo",
    "only": ["docker"]
  },
  {
    "type": "shell",
    "inline": [
      "echo Starting Provisioning",
      "sudo rpm --import http://packages.irontec.com/public.key",
      "REPOFILE=irontec.repo",
      "cat <<'EOF' > /tmp/${REPOFILE}\n[irontec]\nname=Irontec RPMs repository\nbaseurl=http://packages.irontec.com/centos/$releasever/$basearch/\nEOF",
      "sudo mv /tmp/${REPOFILE} /etc/yum.repos.d/",
      "sudo yum install -y sngrep",
      "sudo rm -f /etc/yum.repos.d/${REPOFILE}"
    ]
  }
]

Both provisioners are of the plain shell type, which allows writing shell scripts directly in the template. However, the first provisioner will run only in Docker, due to the "only": ["docker"] line. Think of it as an if-statement that lets you define customizations specific to particular virtualization platforms. The second provisioner above is our workhorse that does all the work of installing the sngrep utility. As you can see, thanks to a bit of only trickery (ensuring sudo is available in Docker), we were able to keep the main provisioning logic identical for both target platforms.
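The heredoc in the second provisioner is worth a closer look: because the delimiter is quoted ('EOF'), the shell leaves $releasever and $basearch untouched, so yum can expand them later. Here is the same construct in isolation (a sketch writing to a temporary directory instead of /tmp):

```shell
# Reproduce the repo-file heredoc from the provisioner on its own.
REPOFILE=irontec.repo
dest=$(mktemp -d)
# The quoted 'EOF' delimiter disables variable expansion inside the
# heredoc body, keeping the yum placeholders literal.
cat <<'EOF' > "${dest}/${REPOFILE}"
[irontec]
name=Irontec RPMs repository
baseurl=http://packages.irontec.com/centos/$releasever/$basearch/
EOF
cat "${dest}/${REPOFILE}"
```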

If you were to run Packer at this point with the template shown above, you would end up with two output artifacts - ./output/docker/image.tar and ./output/vagrant/package.box. You would then have to upload them to the corresponding platform cloud or import them locally before you could run them. Let us see how we can avoid these extra manual steps by automatically importing the Docker image into our local environment. A post-processor will help us with that.

"post-processors": [
  {
    "type": "docker-import",
    "repository": "zaleos",
    "tag": "demo",
    "only": ["docker"]
  }
]

This will create a local Docker image zaleos:demo, which you can then run directly with docker run -it zaleos:demo /bin/bash without having to import it manually first.

Putting things together

Now that we have looked at the various parts of the Packer template up close, let's put everything together and run it! Below is the complete version of the template that we are going to execute:

{
  "variables": {
    "output_path": "./output"
  },
  "builders": [
    {
      "type": "vagrant",
      "source_path": "centos/7",
      "ssh_port": 22,
      "ssh_username": "vagrant",
      "communicator": "ssh",
      "add_force": true,
      "provider": "virtualbox",
      "output_dir": "{{user `output_path`}}/vagrant"
    },
    {
      "type": "docker",
      "image": "centos:7",
      "export_path": "{{user `output_path`}}/docker/image.tar"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": "yum install -y sudo",
      "only": ["docker"]
    },
    {
      "type": "shell",
      "inline": [
        "echo Starting Provisioning",
        "sudo rpm --import http://packages.irontec.com/public.key",
        "REPOFILE=irontec.repo",
        "cat <<'EOF' > /tmp/${REPOFILE}\n[irontec]\nname=Irontec RPMs repository\nbaseurl=http://packages.irontec.com/centos/$releasever/$basearch/\nEOF",
        "sudo mv /tmp/${REPOFILE} /etc/yum.repos.d/",
        "sudo yum install -y sngrep",
        "sudo rm -f /etc/yum.repos.d/${REPOFILE}"
      ]
    }
  ],
  "post-processors": [
    {
      "type": "docker-import",
      "repository": "zaleos",
      "tag": "demo",
      "only": ["docker"]
    }
  ]
}

You can get the above template by cloning the GitHub repository:

cd <your-clone-path> 
git clone https://github.com/zaleos/post-packer-demo
cd post-packer-demo
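Before building, it is worth running Packer's built-in syntax check, which catches template errors without starting any builders:

```shell
packer validate template.json
```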

And now we can run Packer (notice how Vagrant and Docker builders are started simultaneously and run in parallel):

packer build template.json

Once Packer finishes, you should see a summary of the build artifacts:

==> Builds finished. The artifacts of successful builds are:
--> docker: Imported Docker image: zaleos:demo
--> vagrant: Vagrant box is output/vagrant/package.box

Let us check that everything worked as expected:

Launching a Docker container based on the new image and running sngrep
Importing the Vagrant box, launching an instance and running sngrep
When validating the Vagrant build, we had to disable the default synced folder, because the centos/7 Vagrant box doesn't come with VirtualBox Guest Additions preinstalled. If we hadn't disabled the synced folder, we would have seen a nasty error message saying that Vagrant was unable to mount VirtualBox shared folders. If you're wondering how to add Guest Additions to your image, check the suggestions in the CentOS blog post or consider installing them via Packer (as an extra provisioning step).
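For reference, disabling the default synced folder is a one-line change in the Vagrantfile used to boot the box (a sketch; the box name zaleos-demo is hypothetical - use whatever name you gave the box when importing it with vagrant box add):

```ruby
Vagrant.configure("2") do |config|
  # Hypothetical box name; substitute the name you used on import.
  config.vm.box = "zaleos-demo"
  # The centos/7 base lacks Guest Additions, so VirtualBox shared
  # folders cannot be mounted; disable the default mount instead.
  config.vm.synced_folder ".", "/vagrant", disabled: true
end
```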

What next?

Using Packer to manage your infrastructure build process essentially turns that process into a Configuration Item with a clearly defined state (Packer's configuration expressed as a JSON file). At Zaleos, we use GitHub to store our Packer configuration, which allows us to peer-review changes and keep track of versions in a way that is familiar to software engineers. In addition, we have set up a CI process that runs Packer and produces ready-to-use virtual images, so installing new tools or updating build dependencies is just a Pull Request away.

Conclusions

If the challenges of maintaining infrastructure as a collection of virtual images outlined in the fictitious scenario have struck a chord, then perhaps Packer is what you have been looking for. It deserves credit for its easily configurable, maintainable, repeatable, CI-friendly approach to building images, and it comes with support for a wide range of virtualization platforms out of the box. Make sure to check out the official Getting Started guide or head to Packer's GitHub repository for a collection of examples that might suit your current needs.

References

  1. Introduction to Packer
  2. sngrep is a Ncurses SIP Messages flow viewer made by Irontec