How to Create a BOSH Release of a DNS Server

April 13, 2015 Brian Cunnie

BOSH is a tool that (among other things) deploys VMs. In this blog post we cover the procedure for creating a BOSH release of a DNS server, customizing the release with a deployment manifest, and deploying the customized release to a VirtualBox VM.

BOSH is frequently used to deploy applications, but rarely to deploy infrastructure services (e.g. NTP, DHCP, LDAP). When our local IT staff queried us about using BOSH to deploy services, we felt it would be both instructive and helpful to map out the procedure using DNS as an example.

Note: if you’re interested in using BOSH to deploy a BIND 9 server (i.e. you are not interested in learning how to create a BOSH release), you should not follow these steps. Instead, you should follow the instructions on our BOSH DNS Server Release repository’s page.

We acknowledge that creating a BOSH release is a non-trivial task and there are tools available to make it simpler, tools such as bosh-gen. Although we haven’t used bosh-gen, we have nothing but the highest respect for its author, and we encourage you to explore it.

0. Install BOSH Lite

BOSH runs in a special VM. We will install that VM using BOSH Lite, an easy-to-use tool for running BOSH under VirtualBox.

We follow the BOSH Lite installation instructions up to and including execution of the bosh login command.

The instructions in the remainder of this blog post will not work unless you are logged into BOSH.

1. Initialize a skeletal BOSH Release

A BOSH release is a package, analogous to Microsoft Windows’s .msi, Apple OS X’s .app, or RedHat’s .rpm.

We will create a BOSH release of ISC’s BIND 9 [1].

BIND versus named

BIND is a collection of software that includes, among other things, the named DNS daemon.

You may find it convenient to think of BIND and named as synonyms [2].

Initialize Release

We follow these instructions. We name our release bind-9 because BIND 9 [3] is the DNS server daemon for which we are creating a release.

cd ~/workspace
bosh init release --git bind-9
cd bind-9

The --git parameter above is helpful if you’re using git to version your release, and including it won’t break your release if you’re not using git.

Since we are using git, we populate the following 3 files:

Our release is available on GitHub.

Create Job Skeletons

Our release consists of one job, which we call named (after the BIND daemon):

bosh generate job named
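This command generates a job skeleton. With the BOSH CLI we are using, the layout looks roughly like this (the exact files may vary by CLI version):

```
jobs/named/
├── monit          monit configuration (we edit this below)
├── spec           job metadata: templates, packages, properties
└── templates/     ERB templates, rendered onto the deployed VM
```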

Let’s create the control script:

vim jobs/named/templates/ctl.sh

It should look like this (patterned after the Ubuntu 13.04 bind9 control script):

#!/bin/bash

# named logs to syslog, daemon facility
# BOSH captures these in /var/log/daemon.log
RUN_DIR=/var/vcap/sys/run/named
# PIDFILE is created by named, not by this script
PIDFILE=${RUN_DIR}/named.pid

case $1 in

  start)
    # ugly way to install libjson.0 shared library dependency
    if [ -f /etc/redhat-release ]; then
      # Install libjson0 for CentOS stemcells.
      # We first check if it's installed to prevent an
      # an over-eager yum from contacting the Internet.
      rpm -qi json-c > /dev/null || yum install -y json-c
    elif [ -f /etc/lsb-release ]; then
      # install libjson0 for Ubuntu stemcells (not tested)
      apt-get install libjson0
    fi

    mkdir -p $RUN_DIR
    chown -R vcap:vcap $RUN_DIR

    exec /var/vcap/packages/bind-9-9.10.2/sbin/named -u vcap -c /var/vcap/jobs/named/etc/named.conf

    ;;

  stop)

    PID=$(cat "$PIDFILE" 2>/dev/null)
    if [ -n "$PID" ]; then
      SIGNAL=TERM
      N=1
      while kill -$SIGNAL $PID 2>/dev/null; do
        if [ $N -eq 1 ]; then
          echo "waiting for pid $PID to die"
        fi
        if [ $N -eq 11 ]; then
          echo "giving up on pid $PID with kill -TERM; trying -KILL"
          SIGNAL=KILL
        fi
        if [ $N -gt 20 ]; then
          echo "giving up on pid $PID"
          break
        fi
        N=$(($N+1))
        sleep 1
      done
    fi

    rm -f $PIDFILE

    ;;

  *)
    echo "Usage: ctl {start|stop}" ;;

esac
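The TERM-then-KILL escalation in the stop) branch can be sketched in isolation. Below is a self-contained demonstration (not part of the release): we stub kill with a shell function standing in for a stubborn process that ignores TERM, so the loop's escalation is visible without a real daemon, and we drop the sleep 1 so it runs instantly.

```shell
# Stub standing in for `kill -$SIGNAL $PID`: the fake process ignores
# TERM and dies only on KILL. Purely illustrative; not in the release.
ALIVE=yes
fake_kill() {
  [ "$ALIVE" = yes ] || return 1   # process already gone: kill fails
  [ "$1" = KILL ] && ALIVE=no      # KILL succeeds where TERM did not
  return 0
}

SIGNAL=TERM
N=1
while fake_kill $SIGNAL; do
  if [ $N -eq 11 ]; then SIGNAL=KILL; fi   # escalate after ~10 TERMs
  if [ $N -gt 20 ]; then break; fi         # give up entirely
  N=$(($N+1))                              # (the real script sleeps here)
done
echo "escalated to $SIGNAL; process alive: $ALIVE"
```

Running this prints "escalated to KILL; process alive: no": ten-plus TERMs fail, the loop escalates to KILL, and the loop exits once kill reports the process gone.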

Edit the monit configuration

vim jobs/named/monit

It should look like this:

check process named
  with pidfile /var/vcap/sys/run/named/named.pid
  start program "/var/vcap/jobs/named/bin/ctl start"
  stop program "/var/vcap/jobs/named/bin/ctl stop"
  group vcap

Create Package Skeletons

Create the bind-9-9.10.2 package; we include the version number (9.10.2) in the package name.

bosh generate package bind-9-9.10.2

Create the spec file. We also need an empty placeholder source file; without it, bosh create release fails with this error:

`resolve_globs': undefined method `each' for nil:NilClass (NoMethodError)

BOSH expects us to place source files within the BOSH package; however, we are deviating from that model: we don’t place the source files within our release; instead, we configure our package to download the source from the ISC. But we need at least one source file to placate BOSH, hence the placeholder file.

vim packages/bind-9-9.10.2/spec

It should look like this:

---
name: bind-9-9.10.2

dependencies:

files:
- bind/placeholder

Then we create the placeholder:

mkdir src/bind
touch src/bind/placeholder

Create the bind-9 packaging script:

vim packages/bind-9-9.10.2/packaging

It should look like this:

# abort script on any command that exits with a non zero value
set -e

if [ -f /etc/redhat-release ]; then
  # install libjson0 for CentOS stemcells
  yum install -y json-c
elif [ -f /etc/lsb-release ]; then
  # install libjson0 for Ubuntu stemcells (not tested)
  apt-get install libjson0
fi

curl -OL ftp://ftp.isc.org/isc/bind9/9.10.2/bind-9.10.2.tar.gz
tar xvzf bind-9.10.2.tar.gz
cd bind-9.10.2
./configure \
  --prefix=${BOSH_INSTALL_TARGET} \
  --sysconfdir=/var/vcap/jobs/named/etc \
  --localstatedir=/var/vcap/sys
make
make install
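A note on BOSH_INSTALL_TARGET: BOSH sets this environment variable while compiling the package, and the compiled result is later exposed to jobs under /var/vcap/packages. Roughly (exact paths vary by BOSH version, so treat this as a sketch):

```
compile time:  ${BOSH_INSTALL_TARGET}            (a directory under /var/vcap/data/packages)
run time:      /var/vcap/packages/bind-9-9.10.2  (points at the compiled package)
```

This is why our ctl script execs /var/vcap/packages/bind-9-9.10.2/sbin/named.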

Configure a blobstore

We skip this section because we’re not using the blobstore—we’re downloading the source and building from it.

Create Job Properties

We edit jobs/named/templates/named.conf.erb. This will be used to create named’s configuration file, named.conf. Note that we don’t populate this template ourselves; instead, we tell BOSH to populate it from the config_file key of the properties section of the deployment manifest:

<%= p('config_file') %>
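For example (a hypothetical illustration, not our actual manifest), if the manifest set config_file to a one-line options block, the rendered file on the VM would contain exactly that text:

```
# deployment manifest excerpt:
properties:
  config_file: |
    options { recursion yes; };

# rendered /var/vcap/jobs/named/etc/named.conf:
options { recursion yes; };
```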

We edit the spec file jobs/named/spec. Note that the properties → config_file value from the deployment manifest supplies the contents of named.conf:

---
name: named
templates:
  ctl.sh: bin/ctl
  named.conf.erb: etc/named.conf

packages:
- bind-9-9.10.2

properties:
  config_file:
    description: 'The contents of named.conf'

Create a Dev Release

bosh create release --force
    [WARNING] Missing blobstore configuration, please update config/final.yml before making a final release
    Syncing blobs...
    Please enter development release name: bind-9
    ...
    Release name: bind-9
    Release version: 0+dev.1
    Release manifest: /Users/cunnie/workspace/bind-9/dev_releases/bind-9/bind-9-0+dev.1.yml

Upload the Dev Release

bosh upload release dev_releases/bind-9/bind-9-0+dev.1.yml

Create the sample Deployment Manifest

We create an examples subdirectory:

mkdir examples

We create examples/bind-9-bosh-lite.yml. Much of this is boilerplate for a BOSH Lite deployment. Note that we hard-code our VM’s IP address to 10.244.0.66:

---
name: bind-9-server
director_uuid: PLACEHOLDER-DIRECTOR-UUID
compilation:
  cloud_properties:
    ram: 2048
    disk: 4096
    cpu: 2
  network: default
  reuse_compilation_vms: true
  workers: 1
jobs:
- instances: 1
  name: named
  networks:
  - default:
    - dns
    - gateway
    name: default
    static_ips:
    - 10.244.0.66
  persistent_disk: 16
  resource_pool: bind-9_pool
  templates:
  - { release: bind-9, name: named }
  properties:
    config_file: |
      options {
        recursion yes;
        allow-recursion { any; };
        forwarders { 8.8.8.8; 8.8.4.4; };
      };
networks:
- name: default
  subnets:
  - cloud_properties:
      name: VirtualBox Network
    range: 10.244.0.64/30
    dns:
      - 8.8.8.8
    gateway: 10.244.0.65
    static:
    - 10.244.0.66
releases:
  - name: bind-9
    version: latest
resource_pools:
- cloud_properties:
    ram: 2048
    disk: 8192
    cpu: 1
  name: bind-9_pool
  network: default
  stemcell:
    name: bosh-warden-boshlite-centos-go_agent
    version: latest
update:
  canaries: 1
  canary_watch_time: 30000 - 600000
  max_in_flight: 8
  serial: false
  update_watch_time: 30000 - 600000
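A note on the addressing: the /30 range in the networks section is the smallest subnet that fits this deployment, and it leaves exactly one address for our static IP:

```
10.244.0.64/30 → four addresses:
  10.244.0.64   network address
  10.244.0.65   gateway
  10.244.0.66   our named job's static IP
  10.244.0.67   broadcast address
```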

Create the Deployment Manifest

We create the deployment manifest by copying the sample manifest in the examples directory and substituting our BOSH’s UUID:

cp examples/bind-9-bosh-lite.yml config/
perl -pi -e "s/PLACEHOLDER-DIRECTOR-UUID/$(bosh status --uuid)/" config/bind-9-bosh-lite.yml

Deploy the Release

bosh deployment config/bind-9-bosh-lite.yml
bosh -n deploy

Test the Deployment

We use the nslookup command to ensure our newly-deployed DNS server can resolve pivotal.io:

nslookup pivotal.io 10.244.0.66
    Server:     10.244.0.66
    Address:    10.244.0.66#53

    Non-authoritative answer:
    Name:   pivotal.io
    Address: 54.88.108.63
    Name:   pivotal.io
    Address: 54.210.84.224

We have successfully created a BOSH release including one job. We have also successfully created a deployment manifest customizing the release, and deployed the release using our manifest. Finally we tested that our deployment succeeded.

Addendum: BOSH directory differs from BIND’s

The BOSH directory structure differs from BIND’s, and systems administrators may find the BOSH structure unfamiliar.

Here are some examples:

File type       BOSH location                                 Ubuntu 13.04 location
executable      /var/vcap/packages/bind-9-9.10.2/sbin/named   /usr/sbin/named
configuration   /var/vcap/jobs/named/etc/named.conf           /etc/bind/named.conf
pid             /var/vcap/data/sys/run/named/named.pid        /var/run/named/named.pid
logs            /var/log/daemon.log                           (same)

That is not to say the BOSH layout lacks advantages. For example, it allows multiple instances (jobs) of the same package, each with its own configuration.

That advantage, however, is lost on BIND: running multiple instances of BIND was not a primary consideration, for only one program can bind [4] to DNS’s assigned port 53 [5], making it difficult to run more than one BIND job on a given VM.


Footnotes

1 We chose BIND 9 and not BIND 10 (nor the open-source variant Bundy) because BIND 10 had been orphaned by the ISC (read about it here).

There are alternatives to the BIND 9 DNS server. One of my peers, Michael Sierchio, is a strong proponent of djbdns, which was written with a focus on security.

2 Although it is convenient to think of BIND and named as synonyms, they are different, though the differences are subtle.

For example, the software is named BIND, so when creating our BOSH release, we use the term BIND (e.g. bind-9 is the name of the BOSH release).

The daemon that runs is named named. We use the term named where we deem appropriate (e.g. named is the name of the BOSH job). Also, many of the job-related directories and files are named named (a systems administrator would expect the configuration file to be named.conf, not bind.conf, for that’s what it’s named in RedHat, FreeBSD, Ubuntu, et al.).

Even polished distributions struggle with the BIND vs. named dichotomy, and the result is evident in the placement of configuration files. For example, the default location for named.conf in Ubuntu is /etc/bind/named.conf, but in FreeBSD it is /etc/namedb/named.conf (it’s even more complicated in that FreeBSD’s directory /etc/namedb is actually a symbolic link to /var/named/etc/namedb, for FreeBSD prefers to run named in a chroot environment whose root is /var/named; this symbolic link has the advantage that named’s configuration file has the same location both from within the chroot and without).

3 The number “9” in BIND 9 appears to be a version number, but it isn’t: BIND 9 is a distinct codebase from BIND 4, BIND 8, and BIND 10. It’s different software.

This is an important distinction because version numbers, by convention, are not used in BOSH release names. For example, the version number of BIND 9 that we are downloading is 9.10.2, but we don’t name our release bind-9-9.10.2-release; instead we name it bind-9-release.

4 We refer to the UNIX system call bind (e.g. “binding to port 53”) and not the DNS nameserver BIND.

5 One could argue that a multi-homed host could bind [4] different instances of BIND to distinct IP addresses. It’s technically feasible, though not common practice, and multi-homing is infrequently used in BOSH.

In an interesting side note, the aforementioned nameserver djbdns makes use of multi-homed hosts, for it runs several instances of its nameservers to accommodate different purposes, e.g. one server (dnscache) to handle general DNS queries, another server (tinydns) to handle authoritative queries, another server (axfrdns) to handle zone transfers.

One might be tempted to think that djbdns would be a better fit to BOSH’s structure than BIND, but one would be mistaken: djbdns makes very specific decisions about the placement of its files and the manner in which the nameserver is started and stopped, decisions which don’t quite dovetail with BOSH’s (e.g. BOSH uses monit to supervise processes; djbdns assumes the use of daemontools).
