Automating your cloud infrastructure, part one: automating server deployment with pyrax, unittest and mock

Sun 19 January 2014

I've been tinkering with cloud infrastructure a lot in the past couple of years. I mostly administer my servers by hand, but recently I tasked myself with migrating a dev environment of half a dozen servers - I figured it would be a good opportunity to roll up my sleeves and do some writing on the topic of cloud infrastructure automation. In this first post, I'll present a possible approach to automating server deployment (as well as unit-testing your deployment scripts) using Rackspace Cloud's API. I'll eventually get round to other topics, such as automating configuration with Puppet and setting up monitoring and intrusion prevention - but for now, my focus is on automating server deployments on the cloud.

Your mission, should you choose to accept it...

Imagine that you are a sysadmin tasked with deploying several servers on cloud infrastructure. There was a time when you had to painstakingly configure and deploy each server manually - a boring, highly repetitive, and error-prone task. These annoyances have largely been overcome: advances in virtualization and cloud computing have made automating server deployments a breeze.

Before we begin, though, here are a couple of assumptions: I assume that you are working with Rackspace Cloud and are familiar with its services. I also assume that you're familiar with the concept of ghosting, i.e. creating a pre-configured, base template of a typical server that would be deployed in your infrastructure. Configuration of the base template is not in the scope of this post; I might cover some basics in the near-future when broaching the subject of configuration management with Puppet, though.

I also assume that you are familiar with software development concepts such as test-driven development, code versioning, and software design patterns. I've tried to provide links where it makes sense; if something's not easy to follow, feel free to comment and I'll answer/amend accordingly.

Introducing the tools

Infrastructure automation is a subject mainly discussed in sysadmin circles; however, the tools that I use in my approach come largely from my programming / testing toolkit. I'm an advocate of Test-Driven Development, and I see no reason why the same cannot be applied to systems administration.

My entire approach is based on the assumption that you are comfortable with Python. I've set myself up with PyDev for this task; PyDev is a Python editor built on the excellent open-source Eclipse IDE. The benefits of using PyDev over Notepad, Notepad++ or gedit are that 1) you get syntax highlighting and code completion, 2) the refactoring plug-ins are sweet, and 3) you can manage and run your unit tests from the IDE. I realize and respect that there are a lot of vi / nano / emacs purists out there - I used to be one. If you're happier using a nice, clean editor like that, cool! It doesn't change my approach.

But I digress. Rackspace Cloud has a REST API that allows you to perform (almost) all the tasks you can perform from the admin dashboard. You can do things like create servers and isolated networks, or list server images... The full panoply of functionality is covered in Rackspace's API documentation. If you're familiar with python's urllib library, you can implement your own client with a little work; however, I would recommend using pyrax instead. The library is easy to use, well-documented and only a pip/pypi install away. I'll be using this library in my sample source code.

As mentioned before, I'm keen on TDD; when developing my deployment script, I begin by writing tests that are bound to fail when they are first run, then implement the code that will make them succeed. This way I can catch a lot of silly errors before I launch the script in my production environment and I can make sure that changes I make as I go along don't break the scripts. I use the unittest and mock libraries to achieve this purpose. I don't go as far as to check code coverage, though I may do so eventually for larger scripts.
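To make that workflow concrete, here's a minimal sketch of a test-first deployment script. The `CreateServerScript` class and the server name are invented for illustration - they aren't part of pyrax. On Python 2 the `Mock` class lives in the standalone `mock` package; on Python 3 it ships as `unittest.mock`.

```python
import unittest
try:
    from unittest.mock import Mock  # Python 3
except ImportError:
    from mock import Mock           # Python 2

class CreateServerScript(object):
    """Toy deployment script: creates one server through an API client."""
    def actuate(self, api):
        api.servers.create("web-1")

class CreateServerScriptTests(unittest.TestCase):
    def test_actuate_creates_the_server(self):
        api = Mock()  # fake API client - no real network calls are made
        CreateServerScript().actuate(api)
        # The mock records the call, so we can verify it without a cloud account:
        api.servers.create.assert_called_once_with("web-1")
```

Written first, this test fails with an import error; you then implement `actuate` until it passes, and from that point on any regression is caught before the script ever touches a real account.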

Setting up the project

I recommend setting up a basic environment so that you can comfortably write scripts for your infrastructure. If you administer several infrastructures, I urge you to have one environment per infrastructure so as to avoid any accidental deployments (or deletions!).

Your entire environment should be contained in a single folder as a package. I'd recommend setting up code versioning with a tool like git to manage code changes but also branches - for instance, you could easily maintain deployment scripts for several infrastructures that way.

Here's what your environment should look like; I've called the root directory of the environment my_rackspace_toolkit - I provide explanations for each component below:

my_rackspace_toolkit [dir]
+--> rackspace_context.py
+--> rackspace_shell.py
+--> category [dir]
     +--> __init__.py
     +--> your deployment scripts
     +--> tests [dir]
          +--> __init__.py
          +--> your unit tests

rackspace_context.py

This contains a single class, RackspaceContext. It allows you to supply your scripts with contextual variables for calling pyrax objects. Here's an example implementation:

import pyrax as pyrax_lib
import keyring as keyring_lib

class RackspaceContext(object):

    # Set up aliases to the pyrax and keyring libraries:
    pyrax = pyrax_lib
    keyring = keyring_lib

    def __init__(self):
        # Set up authentication
        self.pyrax.set_setting("identity_type", "rackspace")
        self.keyring.set_password("pyrax", "my_username", "my_api_token")
        self.pyrax.keyring_auth("my_username")

        # Set up aliases
        self.cs     = self.pyrax.cloudservers
        self.cnw    = self.pyrax.cloud_networks

As the name indicates, RackspaceContext is a typical implementation of the Context design pattern. There are several benefits to this:

  1. With a Context class, you can consistently set up authentication throughout all your deployment scripts. If your API token changes, you only have one file to worry about.
  2. If you want to re-deploy your environment for multiple rackspace accounts, you need only change the context and you're good to go.
  3. If done right, your deployment scripts don't need to worry about authentication - they just need to consume the context class.
  4. This makes testing your scripts insanely simple. We'll see why in a moment.
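Benefit 2 can be as simple as subclassing: keep the setup recipe in a base class and override only the account details. The class and attribute names below are illustrative, not part of pyrax:

```python
class BaseContext(object):
    """Shared setup; subclasses supply account-specific credentials."""
    username = None
    keyring_service = "pyrax"

    def describe(self):
        # Stand-in for the real authentication / alias setup.
        return "%s as %s" % (self.keyring_service, self.username)

class StagingContext(BaseContext):
    username = "staging_admin"

class ProductionContext(BaseContext):
    username = "production_admin"

def deploy(context):
    """A deployment script only ever sees 'a context', never which account it is."""
    return context.describe()
```

Swapping `StagingContext()` for `ProductionContext()` at the call site is the only change needed to re-target an entire deployment.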

rackspace_shell.py

The rackspace shell is a command-line interface that pre-loads the context and any scripts that you've written so that you can execute them easily. Here's an example:

#!/usr/bin/env python

from rackspace_context import RackspaceContext
context = RackspaceContext()

# Import the CreateDevEnvironment script so it can easily be called from the shell.
from dev.create_dev_environment import CreateDevEnvironment

print """
Pyrax Interactive Shell - preloaded with the rackspace context.

When running your scripts, please make calls using the context object.

For instance:

script = CreateDevEnvironment()
result = script.actuate(context)

print result
"""

# Drop to the shell:
import code
code.interact("Rackspace shell", local=locals())

Note that if you're writing deployment scripts for use in a CI environment like Jenkins, you may wish to adapt this file to make it either interactive or non-interactive, perhaps by using a flag. I've found that it's a useful thing to have in any case.
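One way to sketch that flag (the `--batch` name and the `run_shell` helper are my own inventions, not part of the toolkit above): parse the arguments first, and only drop into `code.interact` when running interactively.

```python
import argparse
import code

def run_shell(argv=None):
    """Return 'batch' in CI mode; otherwise drop into an interactive shell."""
    parser = argparse.ArgumentParser(description="Rackspace toolkit shell")
    parser.add_argument("--batch", action="store_true",
                        help="run non-interactively (e.g. under Jenkins)")
    args = parser.parse_args(argv)
    if args.batch:
        # In a real toolkit you would actuate your deployment scripts here.
        return "batch"
    # Interactive mode: hand control to the user.
    code.interact("Rackspace shell", local=locals())
```

Jenkins can then invoke the same entry point with `--batch` that you use interactively at your desk.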

category directory

You are likely to have several types of deployment scripts; I recommend dividing them by category using packages. For instance, why not have a dev package for the development servers? Why not separate creation scripts from deletion scripts? How you separate your scripts, whether by functionality or by server type, is up to you; I've found that some categorization is essential, particularly because you may find yourself executing many of these scripts at a time and you need a way to do this in a logical manner. Make sure that each of your directories has an __init__.py file, making it a package.
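Bootstrapping a category is just a couple of commands (the dev name is only an example):

```shell
# Create a "dev" category package with a nested tests package.
mkdir -p dev/tests
touch dev/__init__.py dev/tests/__init__.py
```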

Each deployment script should be a file containing a class that will be called from your shell. For instance:

class CreateDevEnvironment(object):
    """Script object used to perform the necessary initialization tasks.
    To use, instantiate the class and call the actuate() method."""

    # Set up list of machines
    MACHINES = ["machine-1",
                "machine-2"]

    def actuate(self, pyrax_context):
        """Actually performs the task of creating the machines."""
        try:
            # Get the flavors of distribution
            flavors = [flavor
                       for flavor in pyrax_context.cs.flavors.list()
                       if flavor.name == u"512MB Standard Instance"]
            flavor_exists = len(flavors) == 1

            # Get the networks
            networks = [network
                        for network in pyrax_context.cnw.list()
                        if network.label == "MyPrivateNetwork"]
            network_exists = len(networks) == 1
            network_list = []
            for network in networks:
                network_list += network.get_server_networks()

            # Get the images
            images = [image
                      for image in pyrax_context.cs.images.list()
                      if image.name == "my_image"]
            image_exists = len(images) == 1

            if (flavor_exists and network_exists and image_exists):
                for machine_name in self.MACHINES:
                    pyrax_context.cs.servers.create(machine_name, images[0].id, flavors[0].id, nics = network_list)
            return "Creation of machines is successful."
        except Exception as e:
            return "An exception has occurred! Details: " + e.message

The class contains a single method, actuate, which carries out your infrastructure deployment tasks - in this case, the creation of two machines based on a previously created image, using the standard 512 MB server flavor.

tests directory

This contains the files holding your unit tests. You can write your tests using unittest, pyunit or nose; I've written mine with unittest, and I use mocks to provide my tests with fake versions of the pyrax objects. The goal is to verify that the script calls the correct functions with the appropriate parameters, not to actually carry out the calls. Once again, here's an example of how this can be done:

import unittest
from rackspace_context import RackspaceContext as RackspaceContextClass
from mock import Mock
from dev.create_dev_environment import CreateDevEnvironment
from collections import namedtuple

Flavor = namedtuple("Flavor", "id name")
Network = namedtuple("Network", "id label")
Network.get_server_networks = Mock(return_value = [{'net-id': u'some-guid'}])
Image = namedtuple("Image", "id name")

class CreateDevEnvironmentTests(unittest.TestCase):

    RackspaceContext = RackspaceContextClass

    def setUp(self):
        self.RackspaceContext.pyrax = Mock()
        self.RackspaceContext.keyring = Mock()

        self.RackspaceContext.pyrax.cloudservers.flavors.list = Mock(return_value = [
            Flavor(id = u'2', name = u'512MB Standard Instance')
        ])

        self.RackspaceContext.pyrax.cloud_networks.list = Mock(return_value = [
            Network(id = u'1', label = u'MyPrivateNetwork')
        ])

        self.RackspaceContext.pyrax.cloudservers.images.list = Mock(return_value = [
            Image(id = u'3', name = u'my_image')
        ])

        self.context = self.RackspaceContext()

    def tearDown(self):
        pass

    def testActuate(self):
        create_script = CreateDevEnvironment()
        create_script.actuate(self.context)

        # The script should first check that the 512 standard server flavor exists.
        self.assertTrue(self.context.pyrax.cloudservers.flavors.list.called, "cloudservers flavors list method was not called!")

        # The script should then check that the MyPrivateNetwork isolated network exists.
        self.assertTrue(self.context.pyrax.cloud_networks.list.called, "cloud networks list method was not called!")

        # The script should also check that the image it is going to use exists.
        self.assertTrue(self.context.pyrax.cloudservers.images.list.called, "cloudservers images list method was not called!")

        # Finally, the script should call the create method for each of the machines in the script:
        for args in self.context.pyrax.cloudservers.servers.create.call_args_list:
            machine_name, image_id, flavor_id = args[0]
            nic_list = args[1]["nics"]
            self.assertTrue(machine_name in CreateDevEnvironment.MACHINES)
            self.assertTrue(image_id == u'3')
            self.assertTrue(flavor_id == u'2')
            self.assertTrue(nic_list == [{'net-id': u'some-guid'}])

if __name__ == "__main__":
    unittest.main()

Notice how I'm setting up mocks that return plausible values for each method the script calls - this allows me to do back-to-back testing on my scripts so that I know exactly how the script will be calling the pyrax libraries. While this doesn't prevent mistakes based on a misunderstanding of how pyrax is used, it does prevent you from doing things like accidentally swapping an image id with a flavor id!
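As a quick illustration of the kind of slip this catches, here's the same idea with a bare Mock rather than the full harness above (the ids are made up): the mock records exactly what was passed, so an accidental inversion of positional arguments shows up immediately.

```python
try:
    from unittest.mock import Mock  # Python 3
except ImportError:
    from mock import Mock           # Python 2

cs = Mock()
# Pretend a script created a server; the mock records the exact call.
cs.servers.create("machine-1", "image-id-3", "flavor-id-2")

# call_args unpacks into the positional and keyword arguments of the call:
args, kwargs = cs.servers.create.call_args
name, image_id, flavor_id = args

# Had the script swapped image and flavor, these checks would fail:
assert name == "machine-1"
assert image_id == "image-id-3"
assert flavor_id == "flavor-id-2"
```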


Using this methodology, you should be able to easily develop and test scripts that you can use to mass-deploy and configure rackspace cloud servers. Initial setup of your environment using this approach should take no more than half an hour; once your environment is set up, you should be able to whip out scripts easily and, more importantly, make use of this nifty little test harness so that you avoid costly accidents!

Suggestions and constructive criticism are welcome; I'm particularly interested if you have seen better approaches to automation, or if you know any other nifty tools. I'd also be interested in finding out if anyone out there has real-world experience using pyrax with Jenkins or Bamboo, and/or integrated this type of scripting with WebDriver scripts.

In my next post, I'll be discussing Puppet. Now that automating server deployments is no longer a secret to you, how do you get your machines to automatically download packages, set up software, properly configure the firewall et cetera? I'll attempt to address this and more shortly.