Brainsware Blog


Constraints of a Design

by igor

The last time we discussed a Puppet module, the design was mostly driven by what we wanted to achieve. This time around, the design is entirely driven by external constraints.

Before we go down that road, let's first talk about our goal.


When you set out to write a Puppet module, the best way to go about it - in my very humble opinion - is still to start out with one line:

% puppet module generate Brainsware-kvmhost
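For reference, this produces a module skeleton roughly like the following (the exact layout varies by Puppet version; this sketch reflects the Puppet 3 era tooling):

```
kvmhost/
├── Modulefile        # module metadata: name, version, dependencies
├── README            # the file we edit first
├── manifests/
│   └── init.pp       # class kvmhost
└── tests/
    └── init.pp       # smoke test: include kvmhost
```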

This will give you a set of files, and the first thing you should do is open the README, and change the second line from this:

This is the kvmhost module.

to something that matches the purpose of this module. If that purpose does not fit into one line, you either have not thought it through well enough, or you are trying to shove too much functionality into a single thing. Which probably means you haven't thought it through well enough.

When I set out to write this module, I had one goal:

I want to be able to provision new virtual machines with a standard setup of Ubuntu $latest LTS.

A little background... Most of our infrastructure is located at Hetzner and is distinguished by a number of peculiarities. We have learned to work around most of those for two reasons:

  • the server offering is in our sweet spot between cheap & powerful
  • Hetzner's support is superb

Peculiarities and Constraints

Let's talk about those peculiarities then. Hetzner offers

  • a "rescue" & setup system you can activate and boot into over the network.
  • only servers with one NIC

As such, only Hetzner's infrastructure servers are allowed to send out PXE boot messages on the network. There is another issue I'd like to point out:

The network at Hetzner is entirely routed, rather than switched. This has historical and practical reasons. The network used to be switched, but it turned out to be far too easy to hijack a neighbouring server's IPs. Accidentally, even. For practical reasons, I assume, it was decided to make it routed, rather than configuring one VLAN per server, or per customer. Configuring one new VLAN may not be vastly more complex than one more route, but VLANs are restricted to about 4K per network (and 4K × 4K ≈ 16M with stacking), while routes are essentially unlimited. Additionally, VLANs are hard to configure during a PXE setup, while routes can be pushed via DHCP.
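As an illustration of that last point: pushing a classless static route over DHCP (option 121, RFC 3442) is a one-liner in a dnsmasq configuration, whereas VLAN membership has no DHCP equivalent. The addresses below are placeholders, not Hetzner's actual ones:

```
# dnsmasq sketch: hand the client a host route to an off-subnet gateway,
# then a default route via that gateway (DHCP option 121)
dhcp-option=option:classless-static-route,203.0.113.1/32,0.0.0.0,0.0.0.0/0,203.0.113.1
```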

In reality this comes down to: we will have to run the virtual machine provisioning software on every single hardware box.

Finally, there's DNS. Cobbler, our machine provisioning software of choice, supports two DNS daemons: dnsmasq or Bind.

Bind is a monstrosity of indescribable proportions. But if we don't use Bind, how will we update our DNS? After all, Puppet needs DNS to run an infrastructure.

In the context of the constraints that our ISP puts on us, neither Bind nor dnsmasq is very appealing, because we would have to run one on each hardware box, which isn't much different from having to synchronise entries in /etc/hosts(5).

Up to this point, Brainsware's internal infrastructure has been using unbound as its DNS daemon. Its main appeal is its simple design, and its security history distinguishes it from Bind. We decided not to use Cobbler for managing DNS entries; we decided not to run a gaping security hole on each machine.1 We decided to keep using unbound as our DNS daemon of choice and feed it from Puppet.
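A minimal sketch of what "feeding unbound from Puppet" can boil down to: a templated configuration fragment with static records. The zone and host names below are taken from later in this post; the addresses are placeholders:

```
# unbound.conf fragment, e.g. rendered from a Puppet template
server:
  local-zone: "esat." static
  local-data: "bacon.esat.    IN A 192.0.2.10"
  local-data: "proxy02.esat.  IN A 192.0.2.52"
```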

The Framing Device

I lied when I said I only had one goal in mind.

Let's not call this secondary thing a "goal", though. Let's call it our "framing device."

Like with our last module, we wanted this module to be data-driven. All data should be in a concise format, in a central place.

The most obvious choice for our framing device under Puppet was Hiera. If we don't take something like Hiera for granted, we have to step back and think about its implications:

  • All data that we need to provision a piece of hardware or a VM running on it are gathered in one place
  • There is no need to repeat ourselves
  • There is no confusion about what is running where
  • Since everything is version controlled it is crystal-clear who added or removed or altered a machine, when, and, perhaps even why

And so we separate this data from our code, knowing full well that all it takes to move from YAML to Puppet is one function call: create_resources().
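A minimal sketch of that one call, assuming a hypothetical kvmhost::vms hash in Hiera and a defined type kvmhost::vm (both names are illustrative, not necessarily the module's actual ones):

```puppet
# Look up the hash of VM definitions from Hiera...
$vms = hiera_hash('kvmhost::vms', {})

# ...and turn every entry into one kvmhost::vm resource,
# with each entry's values becoming that resource's parameters.
create_resources('kvmhost::vm', $vms)
```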

Walk Through

Let''s take a look at this data then, and see how it all falls in place.

You can follow this walk-through by looking at the code of the module on GitHub. I designed the data such that it could be grasped even without the code. However, having worked with and on the module for so long, I might be biased. I welcome any feedback on how to make it better.

Global Settings

First of all, there are a number of settings that, for various reasons, we elected not to hard-code. Sometimes the reasons were as simple as: this is an external module we depend on, and we couldn't, or didn't want to, change it, so we had to use Hiera:

# customize the installation of apache for cobbler
apache::default_vhost: false

to make it work in our setup

# Managing the apache installation on this server
# now happens in hiera's kvmhost role!

# However, we secure it here by only listening to private interfaces:
apache::listen:

The rest of that data then configures global settings that we want to keep the same on all KVM hosts.

kvmhost::defaultrootpw: our super secret root password


  ubuntu-12.04.3-x86_64:
    arch: x86_64
    breed: ubuntu
    os_version: precise
    initrd: /srv/www/cobbler/ks_mirror/ubuntu-12.04.3-x86_64/install/netboot/ubuntu-installer/amd64/initrd.gz
    kernel: /srv/www/cobbler/ks_mirror/ubuntu-12.04.3-x86_64/install/netboot/ubuntu-installer/amd64/linux
    tree: http://@@http_server@@/cblr/links/ubuntu-12.04.3-x86_64

Cobbler offers a number of abstractions for managing (virtual) machines and their installations. We use the distro to define a base from which to drive the installation. A small number of virtual machine characteristics is cast into profiles. The main distinguishing point between the two profiles we define is how many interfaces the virtual machine derived from it will have.


  internal:
    distro: ubuntu-12.04.3-x86_64
    virt_ram: 2048
    virt_cpus: 2
    virt_bridge: virbr0
    kickstart: /srv/www/cobbler/ks_mirror/config/internal.cfg

  external:
    distro: ubuntu-12.04.3-x86_64
    virt_ram: 2048
    virt_cpus: 2
    virt_bridge: virbr0
    netcfg/choose_interface: eth0
    kickstart: /srv/www/cobbler/ks_mirror/config/external.cfg

An internal machine will only have one interface, connected to virbr0. This will give it a single network interface (eth0) with a NAT (in IPv4) and an implicitly routed IPv6 network.

An external machine will have the same setup. But it will also have a second interface, eth1. This interface will have a routed IPv4 address configured to make it reachable from the outside world. By the constraints of Cobbler's abstractions, a second network interface isn't something we can configure in the profile itself, only in a system. We can, or rather have to, set a special kernel parameter for the PXE boot, so that the Debian Installer won't be too confused about which interface to boot from.
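What the preseed ends up writing into such an external guest might look roughly like this /etc/network/interfaces sketch (the addresses are placeholders, and the /32-plus-host-route pattern is one common way to configure a routed address whose gateway lies outside its own subnet):

```
# eth0: NATed via virbr0, addressed by libvirt's DHCP
auto eth0
iface eth0 inet dhcp

# eth1 (external profile only): routed, world-reachable IPv4
auto eth1
iface eth1 inet static
    address 203.0.113.10
    netmask 255.255.255.255
    # the gateway lives outside the /32, so add an explicit host route first
    post-up ip route add 203.0.113.1 dev eth1
    post-up ip route add default via 203.0.113.1 dev eth1
```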

This wraps up our global configurations. So let's look at

Machine Specific Configuration

# steak is a KVM Host.
# These IP addresses are needed to setup virbr1
# n.b.: We can't use facter for these values, because as soon as we start
# managing /etc/network/interfaces, the "facts" will change... in a way.
kvmhost::ipv6: 3b12:3e7:273:b4
kvmhost::ipv6_gateway: fe80::1

Given that all we really need to designate a node as KVM host is:

node bacon.esat {
  include 'kvmhost'
}

the above data is crucially important. It means we have to sit down once and copy all of this data out of Hetzner's horribly designed, not copy/pastable web interface. At first this might sound like it's not DRY. When we think about the alternatives, the "obvious" choice that comes to mind is to retrieve all of this from Hetzner's API. This is certainly an option, but it means that instead of shooting myself in the foot, I'm allowing other people to shoot me in the foot when they "update" or break their API.

Finally, we get to the real meat of the matter:2

# On steak we'll host the following Virtual Machines


   # backup VM
      profile: internal
      ipv6_address: "%{kvmhost::ipv6}::10:126"

The first thing we'll notice is that it is indeed extremely concise to declare a single virtual machine. We could have stripped this down further by making profile => 'internal' the default somewhere. However, I wanted this to be visually explicit.

   # yet another Apache Traffic Server backed caching proxy
      profile: external
      hostname: proxy02.esat
      mac_address: '00:50:56:00:48:FE'
      virt_bridge: virbr1
      ipv6_address: "%{kvmhost::ipv6}::10:52"

Declaring an external machine is slightly more involved. We have to change the IPv4 gateway3, and add another network interface, attached to a different bridge (virbr1).

Again, this process involves copying data from Hetzner's web interface. However, it also puts all data into one place and gives it a semantically different meaning. A meaning that makes more sense to us than the way our ISP chooses to represent it.

The Evolution of Design

This module didn't come into existence in this form.

It came to be over the course of numerous weeks of hacking and learning, while I was still very fresh with Puppet and Hiera.

One of the things I did not foresee at the beginning was how urgent the need for IPv6 support would become for us. A gradual rewrite proved impossible, and so did a gradual migration: the network configuration was built in too deeply and configured too early. I had to re-provision the hardware and the virtual machines. Thankfully, that turned out to be easy enough. And that's all I really wanted:

I want to be able to provision new virtual machines with a standard setup of Ubuntu $latest LTS.

This module is far from perfect. A lot of things are still very Brainsware specific. Many things are very Ubuntu specific. Some people might not like our use of unbound as a DNS server. Others might frown at our use of ufw for firewall management.

None of that really matters. What really matters is this:

Almost all design decisions we make are informed by the constraints we face. We must design in a way that doesn''t compromise our goals.

You could say that this is a no-brainer, or you could cite Antoine de Saint-Exupéry:

It seems that perfection is attained not when there is nothing more to add, but when there is nothing more to remove.

As engineers we all know what happens to the perfect design over the course of its extensive use. It gets leaky and we need to patch those leaks. New demands from customers, or new discoveries mean that we will have to shrink or extend in areas we might not have foreseen.

It''s really hard to design in a way that will allow for such elasticity. I have yet to find a way. When you do before me, please tell me how.

  1. n.b.: We are still running dnsmasq on each machine, but we don't do much with it. It's used by libvirt to provide DHCP, which we only use in the setup phase for the PXE boot. 

  2. What phrase do vegetarians use instead? 

  3. The default gateway is still hard-coded. This needs to be ripped out and read from Hiera. 
