Testing Ansible Roles Using Containers

My Services – Docker repository provides the means to test the Ansible roles in my Libraries – Ansible repository using Docker containers, which simulate the Varilink Computing Ltd server estate on the developer desktop and support the use of Ansible to manage them via SSH connections. Since our servers are based on Debian, so are those Docker containers.

I believe this to be an artificial/unusual use of Docker. You wouldn’t normally use Ansible and SSH to connect to Docker containers and configure services within them using non-Docker related Ansible modules, though there is a set of Ansible community modules for managing Docker containers, which includes a module to execute a command in a Docker container. However, I’ve found this approach useful for testing Ansible roles that will subsequently be used on non-Docker hosts, because it’s so easy to recreate containers.

I had a scenario in which I was retrofitting Ansible to a server estate that was already hosting services I had built without an automation tool. I was therefore seeking confidence that the Ansible roles I had developed would work for builds from scratch, which made the ability to easily recreate containers important.


Using my “Services – Docker” Repository

If you want to try out my Services – Docker repository for yourself, for example to learn more about Ansible and the services that my Libraries – Ansible repository defines, then I’ve made it very easy for you to do so. The Services – Docker README contains detailed usage instructions. Furthermore, as well as using Docker containers to simulate the Varilink Computing Ltd server estate, this repository extends that idea by implementing Ansible itself, automation helper scripts and testing clients as Docker containers too.

This approach minimises, so far as is possible, any reliance on a specific desktop environment in order to use Services – Docker. The only remaining dependencies are:

  • Docker, including Docker Compose.
  • The ability to run Linux containers, so the Windows Subsystem for Linux (WSL) is needed on a Windows desktop.
  • A shell to use, which should come with your operating system of course.

I personally have tested Services – Docker in two desktop environments:

  1. Debian bullseye with the docker APT package installed, using the Bash and Dash shells.
  2. Windows 10 with Docker Desktop on Windows and the Windows Subsystem for Linux (WSL) installed, using PowerShell.

I’d love to hear about the experience of anybody else trying to use this repository, especially if it’s on a different desktop environment to the ones that I’ve used it on.

Docker Container Workarounds

When I started down this path I thought, “Docker containers are not exactly the same as hosts, so workarounds will be inevitable.” In that context I set two goals to determine the success or otherwise of this approach:

  1. Don’t pollute Libraries – Ansible with considerations arising from using it to manage Docker containers.
  2. Avoid workarounds when testing Libraries – Ansible using Docker containers that would negate the value of that testing as assurance that Libraries – Ansible works on server hosts.

With that in mind, I’ve kept a close eye on the workarounds that I’ve had to adopt to get this to work. Here are those workarounds:

Simulating service management within containers

Debian host environments include the systemd system and service manager by default. By contrast, a container does not, and indeed the concept of a service manager within a container is somewhat alien to the design concept of containerisation – see Run multiple services in a container in the Docker documentation. To work around this I have adopted the following approach within my Libraries – Ansible and Services – Docker repositories.

Within Libraries – Ansible itself I have split the main task list of roles into “install” and “configure” (post install) aspects; the install aspect typically includes a task to install APT packages, which the configure aspect then goes on to set up for my needs (a sketch follows the list below). When packages are installed on a Debian server host with systemd present, any associated services are immediately started by systemd if that’s possible at that point, which it generally is. The same is not true in a Debian container. Taking this approach of splitting the install and configure tasks in a role allows a container environment to:

  1. Import the role taking tasks from the install tasks list.
  2. Do something within the container to start processes that is equivalent to systemd starting services in a server host.
  3. Import the role taking tasks from the configure tasks list.
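
As a minimal sketch of that split, here is how a hypothetical role in Libraries – Ansible might lay its tasks out. The role name, package and task contents are illustrative assumptions of mine, not the repository’s actual code:

```yaml
# roles/apache/tasks/install.yml - the "install" aspect (illustrative)
- name: Install the Apache package
  ansible.builtin.apt:
    name: apache2
    state: present

# roles/apache/tasks/configure.yml - the "configure" (post install) aspect (illustrative)
- name: Deploy our Apache virtual host configuration
  ansible.builtin.template:
    src: vhost.conf.j2
    dest: /etc/apache2/sites-available/000-default.conf
  notify: reload apache

# roles/apache/tasks/main.yml - chains both aspects, which is all a server host needs
- ansible.builtin.import_tasks: install.yml
- ansible.builtin.import_tasks: configure.yml
```

On a server host the role can therefore still be imported as a whole, with systemd starting the service as soon as the package is installed; a container environment instead imports the two aspects separately, with its own process start step in between, as the list above describes.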

Furthermore, in Libraries – Ansible I define handlers in roles that, on being notified of configuration changes, use the ansible.builtin.service module to either reload service configurations (if the services support reload) or restart services (if they do not). Those handlers are defined within roles in handlers/service.yml rather than handlers/main.yml, so that they are not included by default when the roles are imported. See the documentation for the ansible.builtin.import_role module, where the default value for handlers_from is “main”.
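
Continuing with the same illustrative role, the handler arrangement might look like this; again this is a sketch of the pattern just described, not the repository’s actual content:

```yaml
# roles/apache/handlers/service.yml - not loaded unless a playbook asks for it
- name: reload apache
  ansible.builtin.service:
    name: apache2
    state: reloaded

# A playbook targeting real server hosts opts in to those handlers explicitly
- ansible.builtin.import_role:
    name: apache
    handlers_from: service
```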

What does Services – Docker do in order to utilise these structural changes within Libraries – Ansible? To answer that question, examine the contents of the my-roles folder within Services – Docker. Rather than directly importing roles from Libraries – Ansible, playbooks in Services – Docker import “wrapper” roles defined in this my-roles folder. Those wrapper roles map one-to-one to roles within Libraries – Ansible and define the following (sketched in the example after this list):

  • Two task lists to start and stop processes that are the container substitutions for equivalent systemd commands to start and stop services.
  • A main task list that imports the install tasks for the equivalent role in Libraries – Ansible, starts the process(es) and then imports the configure task from the same role in Libraries – Ansible.
  • Handlers that respond to notifications of configuration change in Libraries – Ansible roles by invoking the stop process and start process tasks instead of using the handlers defined in Libraries – Ansible that use the ansible.builtin.service module.
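
Here is a sketch of what one of those wrapper roles might look like. The my_apache name, the apachectl commands and the exact file layout are my assumptions for illustration, not the actual contents of the my-roles folder:

```yaml
# my-roles/my_apache/tasks/start.yml - container stand-in for systemd starting the service
- name: Start Apache, appending its output to the shared services log
  ansible.builtin.shell: apachectl start >> /var/local/services.log 2>&1

# my-roles/my_apache/tasks/stop.yml - container stand-in for systemd stopping the service
- name: Stop Apache
  ansible.builtin.shell: apachectl stop >> /var/local/services.log 2>&1

# my-roles/my_apache/tasks/restart.yml - bounce the process
- ansible.builtin.import_tasks: stop.yml
- ansible.builtin.import_tasks: start.yml

# my-roles/my_apache/tasks/main.yml - install, start the process(es), then configure
- ansible.builtin.import_role:
    name: apache
    tasks_from: install
- ansible.builtin.import_tasks: start.yml
- ansible.builtin.import_role:
    name: apache
    tasks_from: configure

# my-roles/my_apache/handlers/main.yml - answers the same notification that the
# Libraries - Ansible role issues, but without using ansible.builtin.service
- name: reload apache
  ansible.builtin.include_tasks: restart.yml
```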

Putting all this together along with other features implemented by Services – Docker, what happens is:

  1. The Docker Compose services that simulate hosts in the Varilink Computing Ltd server estate are brought up by the raise-hosts command – see the README in Services – Docker.
  2. They each start an sshd process that appends to the file /var/local/services.log, but that sshd process runs in the background and it’s the command tail -f /var/local/services.log that attaches to the console (a sketch of this pattern follows the list below).
  3. The playbooks are run and further processes are started (between install and configure task lists) or stopped and started again (when notification of configuration change triggers handlers). Again those processes are run in the background and they all append to /var/local/services.log. Thus the console is continually spooled with notifications of what’s going on.
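
The pattern described in step 2 could be expressed in a Compose service definition along these lines; this is a sketch of the idea only, and the service name, build context and sshd options are assumptions rather than the repository’s actual configuration:

```yaml
services:
  wp:
    build: ./build/host
    # sshd daemonises and appends its logs to the shared file; tail then becomes
    # the foreground process, so the log is what appears on the container console
    command: sh -c "/usr/sbin/sshd -E /var/local/services.log && exec tail -f /var/local/services.log"
```

Because tail -f is the foreground process, everything that the background processes append to /var/local/services.log shows up in the output of docker compose logs for that simulated host.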

Personally, I think it works very well but then I probably would!

Enabling “unsafe_writes” to use my own DNS service within containers

Libraries – Ansible defines services that are listed at the top of the README for that repository. One of those is “DNS lookup on our internal network.” The dns_client role within Libraries – Ansible configures our hosts to use that service by updating their /etc/resolv.conf file.

The Docker documentation on DNS services states:

By default, containers inherit the DNS settings of the host, as defined in the /etc/resolv.conf configuration file. Containers that attach to the default bridge network receive a copy of this file. Containers that attach to a custom network use Docker’s embedded DNS server. The embedded DNS server forwards external DNS lookups to the DNS servers configured on the host.

DNS services in Docker Docs

Testing the DNS service implemented by Libraries – Ansible requires us to configure containers to use that service and not Docker’s embedded DNS server. To do this we must overwrite the /etc/resolv.conf that containers inherit from the host.

One consequence of this is that when a playbook uses the dns_client role to overwrite /etc/resolv.conf in a container using the ansible.builtin.template module, it must do so with the unsafe_writes parameter set to true; Docker bind mounts /etc/resolv.conf into the container, so Ansible’s usual atomic write (create a temporary file and rename it into place) fails and the file must instead be written in place. The default for this parameter is, unsurprisingly, false.

To work around this, I set the value of unsafe_writes in the dns_client role in Libraries – Ansible using a variable, which defaults to false. When my_dns_client, the equivalent wrapper role in Services – Docker, imports the dns_client role, it overrides that default and sets the variable to true instead, as required within the container environment.
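
A minimal sketch of that arrangement follows; the dns_client_unsafe_writes variable name and the task contents are assumptions of mine, not the repository’s actual code:

```yaml
# Libraries - Ansible: roles/dns_client/tasks/main.yml, with
# dns_client_unsafe_writes defaulting to false in roles/dns_client/defaults/main.yml
- name: Point the host at our internal DNS service
  ansible.builtin.template:
    src: resolv.conf.j2
    dest: /etc/resolv.conf
    unsafe_writes: "{{ dns_client_unsafe_writes }}"

# Services - Docker: my-roles/my_dns_client/tasks/main.yml overrides that default
- ansible.builtin.import_role:
    name: dns_client
  vars:
    dns_client_unsafe_writes: true
```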

Avoiding “unreachable host” conditions

The dns role in Libraries – Ansible is configured with one or more host patterns that it matches to hosts in the inventory. These define the hosts that a DNS server should provide lookups for. During configuration of the DNS service, Ansible connects to those hosts to gather the value of the ansible_default_ipv4 address for each of them.

Under Raise Hosts in the Using this Repository section of the README for Services – Docker, it explains how the [SERVICES…] argument(s) can be used to limit the containers that are run to those needed for one or more of the services; for example backup, calendars, DNS, etc. It also explains that since the DNS service is a dependency for all of the other services, it is always included in scope.

When you limit the containers that are run in this way, Ansible will raise an “unreachable host” condition whenever it tries to gather the ansible_default_ipv4 address from a host whose container has not been run.

To avoid this, the my_dns role in Services – Docker that imports the dns role from Libraries – Ansible interrogates the ansible_limit variable, if it is defined, and uses its value to restrict the hosts that the dns role connects to, so that it only includes those corresponding to containers that are running. This coordination is facilitated by the docker-entrypoint.sh scripts used by the raise-hosts and playbook Docker Compose services in Services – Docker.
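
Sketched with an assumed dns_host_patterns variable name and an assumed ‘:’ separator in the limit, the idea is roughly this:

```yaml
# my-roles/my_dns/tasks/main.yml (sketch) - only pass the dns role those hosts that
# fall within the current --limit, i.e. hosts whose containers are actually running
- ansible.builtin.import_role:
    name: dns
  vars:
    dns_host_patterns: >-
      {{ groups['all'] | intersect(ansible_limit.split(':'))
      if ansible_limit is defined else groups['all'] }}
```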

Multiple Test Environments and Testing Clients

Services – Docker further implements two features to facilitate comprehensive testing of the services defined in Libraries – Ansible after they’ve been deployed.

  1. The ability to configure multiple environments to simulate different deployment topologies (the mapping of services to hosts) and releases of Debian on hosts. See Test Environments in the README for Services – Docker.
  2. A set of clients defined as Docker Compose services that you can use to connect to the services that Libraries – Ansible defines; for example an email client, a web browser, etc. See the build/ directory in Services – Docker and the sketch after this list.
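
As an illustration of the second feature, a testing client can be as simple as a Compose service kept alive for interactive use; the service name and build context here are assumptions, not the repository’s actual definitions:

```yaml
services:
  browser:
    build: ./build/browser   # e.g. a Debian image with a text-mode web browser installed
    command: sleep infinity  # keep the container running for interactive use
```

You can then open a shell in it with docker compose exec browser bash and exercise the web services that the simulated hosts provide.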

In the coming weeks/months I will publish a series of posts, one for each of the services defined by Libraries – Ansible, each covering the testing of that service within Services – Docker. This series of posts is listed below; links to each of them will become available as they are published.

These posts will focus on the use of test environments and testing clients in Services – Docker.

Conclusions

This approach to testing the roles defined in Libraries – Ansible has been very successful. Using it, I have been able to test the services created by Libraries – Ansible in test environments that simulate multiple versions of the Varilink Computing Ltd server estate. It’s like having “a hybrid hosting environment in a (Docker) box” that can be used for testing on a desktop, with minimal dependencies on that desktop’s environment.

Referring back to the two goals that I set to determine the success or otherwise of this approach:

  1. Don’t pollute Libraries – Ansible with considerations arising from using it to manage Docker containers.
  2. Avoid workarounds when testing Libraries – Ansible using Docker containers that would negate the value of that testing as assurance that Libraries – Ansible works on server hosts.

In terms of changes made to Libraries – Ansible to make this work, I deem them to easily pass “Don’t pollute Libraries – Ansible with considerations arising from using it to manage Docker containers.” I haven’t really had to change it at all, merely restructure it a bit.

Similarly, the only significant workaround I’ve had to use is to substitute the handlers in Libraries – Ansible that use the ansible.builtin.service module with container equivalents that start and stop processes using scripts instead. This is a minor compromise to make when one considers that the ansible.builtin.service module is so tried and trusted.
