Setting up Ansible for multiple environment deployments

Ansible is a tool for automating software installation and configuration over SSH. You can run it against a server, a VM, or even a desktop; anything that can be a target of SSH! If you haven’t had the chance to use Ansible yet, go get a basic understanding of it before attempting to follow the things I will talk about.

The goal of Ansible is to be able to set up a server the same way, every time (idempotency). This fits well with code development and the need for production-like servers. Ansible does not, however, smooth the transition between different environments. In fact, there is a reserved keyword (environment) that has nothing to do with the concept I’m writing about, which makes it a confusing thing to describe.

Environments

Developing code professionally, especially in teams, requires some type of repository to store your code and a server to host your code for others to run and use. However, a professional developer doesn’t make a habit of directly altering a production server: users might get a broken experience, or worse, the server is changed without any record of what was changed and how. Too many things can go wrong and there is no safety net. The production environment should be touched only when valid code and resources are ready for use (which implies testing). Solving this issue requires creating a development environment. This is where you can sandbox your changes and test them before moving them into the production environment. In a larger team you might even have a staging environment for large-scale testing and for deploying code into the production environment. In this setup there are at least 3 server environments that need to be as similar as possible. This is exactly what Ansible should be used for: an efficient and reliable way to clone a server setup.

Ansible Roles

Ansible is built around roles. The key to writing a role is to have it do one thing and do it well. A role should be a distinct and separate action. A good example would be a role that creates a user, or a role that installs Nginx. Roles matter when writing an environment-based Ansible setup because they decouple each role from environment-specific requirements. An example would be configuring hostnames within an HTTP server. Each environment will most likely have separate names (dev.geedew.com vs geedew.com). To accomplish this, installing the HTTP server would be a different role than installing the host configuration files. Each environment can then install an identical HTTP server, differing only in configs. Splitting the work into multiple roles breaks down the complexity.
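As a sketch of what such a decoupled role might look like, here is a hypothetical roles/hosts/tasks/main.yml. The role name, the vhost.conf.j2 template, and the http_hostname variable are all illustrative; the point is that the variable is supplied by the environment, not hard-coded in the role.

```
# roles/hosts/tasks/main.yml (hypothetical example)
# http_hostname is expected to come from the environment's group_vars,
# so this role stays identical across development and production.
- name: Install the virtual host configuration
  template:
    src: vhost.conf.j2
    dest: "/etc/apache2/sites-available/{{ http_hostname }}.conf"
  become: yes

- name: Enable the virtual host
  command: "a2ensite {{ http_hostname }}.conf"
  become: yes
```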

A little-described fact of Ansible

Directly from the docs, there is a briefly mentioned feature of Ansible that makes all of this possible.

Tip: In Ansible 1.2 or later the group_vars/ and host_vars/ directories can exist in either the playbook directory OR the inventory directory. If both paths exist, variables in the playbook directory will override variables set in the inventory directory.
Ansible requires an inventory, and most examples use a single inventory file in the root of the project. That means the group_vars/ and host_vars/ folders will also be located in the root. I recommend against this; it should never be done.

Environments by using multiple inventory files

I recommend having one inventory file per environment. To accomplish this, a project would have a directory structure with a folder for each environment, each with its own inventory file. For instance, at the root the folder structure would consist of:

../
  environments/
    development/
    production/
  roles/

Within each environment folder you would then find a common set of files and folders.

../
  environments/
    development/
      group_vars/
      host_vars/
      inventory
    production/
      group_vars/
      host_vars/
      inventory
  roles/
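A per-environment inventory file is just a normal Ansible inventory. The group and host names below are made up for illustration; the only requirement is that each environment defines the same groups so the playbooks apply cleanly to both.

```
# environments/development/inventory (illustrative hosts)
[app]
dev-app-01.geedew.com

[db]
dev-db-01.geedew.com
```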

This works because, as the tip in the docs states, the inventory location is the ‘root’ from which variables are loaded.
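With this layout, you select an environment at run time by pointing Ansible at that environment’s inventory. The -e env=… extra-var here is an assumption that matches the vars_files paths used in the playbook later in this post; app.yml is the playbook name.

```
# Run the playbook against development
ansible-playbook -i environments/development/inventory app.yml \
  -e env=environments/development

# The same playbook, unchanged, against production
ansible-playbook -i environments/production/inventory app.yml \
  -e env=environments/production
```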

Separate environment, separate variables

It should be clear how this keeps a clean split between environments, as each one can have different variables. Since variables now live only in their respective environment, you can easily write roles that use variables to describe how behavior should differ between environments.
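For example, each environment could define the same variable with a different value (the file names and values here are illustrative):

```
# environments/development/group_vars/all.yml
http_hostname: dev.geedew.com

# environments/production/group_vars/all.yml
http_hostname: geedew.com
```

Any role or template that references {{ http_hostname }} then stays identical across environments; only the inventory you run against changes the result.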

Sharing variable files across environments can be accomplished with the ‘vars_files’ statement in the playbook.

Example playbook: App.yml

---
- hosts: app
  remote_user: "{{ remote_user }}"
  vars_files:
    - "{{ env }}/group_vars/certs.yml"
    - "{{ env }}/group_vars/credentials.yml"
    - "{{ env }}/group_vars/keys.yml"
    - "{{ env }}/group_vars/deploy_keys.yml"
  roles:
    - common
    - security
    - credentials
    - apache
    - php
    - hosts
    - ssl-keys
    - node
    - newrelic
  tasks:
    - name: Bring Apache Online
      service:
        name: apache2
        state: started
        enabled: yes
      become: yes  # 'become' replaces the deprecated 'sudo' directive

Closing thoughts
Using this method splits your inventory into multiple files and locations, which can create duplication and extra maintenance. However, it allows for vastly clearer roles and less error-prone YAML (no complicated when statements scattered throughout). If you can live with that, or develop a dynamic inventory, then give this setup a try; it’s worth it.

Notice: I am not affiliated with Ansible, other than as a user. All logos and documentation are owned and controlled by Ansible.