Developing a Custom Concourse Resource

April 11, 2018 Brian McClain

One of my favorite concepts in Concourse is resources. Representing any entity that a pipeline can act on, resources are an interface that allows developers to easily add support for new integrations into Concourse. Concourse ships with resources for everything from pulling and pushing git repositories and S3 buckets to deploying applications on Cloud Foundry.

What makes this abstraction so important is that it allows the community to contribute their own resources, a list of which is already growing to be quite impressive. Support for Slack notifications, Kubernetes, Terraform and more have all been added by the community.

“This abstraction is immensely powerful, as it does not limit Concourse to whatever things its authors thought to integrate with. Instead, as a user of Concourse you can just reuse resource type implementations, or implement your own.”

Since this abstraction is so straightforward, I thought a quick look at how to create your own custom Concourse resource would be interesting. In our example, we’ll walk through creating a resource that interacts with a MySQL server and triggers a build when a new row is added to a table. While this resource may not be terribly useful in practice, it will demonstrate the full capabilities of a Concourse resource. I’ve made the code available here so you can follow along and try it for yourself.

What goes into a resource?

There aren’t actually too many artifacts that come out of building a Concourse resource. The resource developer can choose which of these to implement, depending on the desired abilities of their resource (aside from the Dockerfile, which is always required).

  1. Dockerfile — For ease of integration, Concourse resources are distributed as container images. This way, we can simply point our configuration to a repository and instantly bring the resource into our pipeline.

  2. check — This is the script or executable that will be responsible for detecting new versions of the resource. In the git resource, for example, this will look for new commits to a branch. In the time resource, this is responsible for checking if the predefined amount of time has passed.

  3. in — Responsible for fetching the new version of the resource as detected in the check script. The git resource would pull down the newly detected commit, or the s3 resource would download the new version of a file that was uploaded.

  4. out — This script is generally responsible for handling a newly built artifact. The git resource could be responsible for pushing to a version branch, or if you’re using the cf resource it would push an application to Cloud Foundry.

It’s important to note that, other than the Dockerfile, which scripts you implement depends on the needs of your resource. A resource that only publishes artifacts, for example, might define only an “out” script, while other resources may only ever implement “check” and “in”.

What will our MySQL resource do?

As mentioned, this example will implement all three scripts as well as the Dockerfile. From a high level, we can define the behavior as such:

check — Watch a user-defined table for new entries

in — Get the newest row(s) from the table to perform some user-defined processing

out — Update a specified field in the row. In our example, we’ll have a column named “processed” that we’ll change from “false” to “true”

These scripts can be any form of executable, such as a bash script, a Ruby script, or a precompiled binary. For this example, we’ll write them in Ruby since it’s fairly easy to read. As long as the container can execute these files, Concourse doesn’t know or care how they get the job done.
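Before diving into each script, it helps to see the shape they all share. The sketch below shows how a Ruby script might read the JSON payload that Concourse writes to STDIN; the `parse_payload` helper is a hypothetical name for illustration, and STDIN is simulated with a StringIO so the sketch runs standalone:

```ruby
require 'json'
require 'stringio'

# Hypothetical helper: every script starts by parsing the JSON payload
# that Concourse writes to STDIN. "version" is nil on the very first check.
def parse_payload(io)
  payload = JSON.parse(io.read)
  [payload["source"] || {}, payload["version"]]
end

# Simulate STDIN with a StringIO; a real script would pass $stdin instead.
source, version = parse_payload(StringIO.new(
  '{"source": {"table": "mytable"}, "version": {"id": "1"}}'
))
puts source["table"]  # prints "mytable"
puts version["id"]    # prints "1"
```

The `in` and `out` scripts additionally receive a directory path as their first command-line argument, which we’ll see below.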

Check

We’ll start with the check script, which can be found here. This script will periodically check a user-defined table in our database for new entries. For the sake of simplicity, we’ve made the assumption that the primary key of the table will be a column named id containing an auto-incrementing integer. This is how our resource will know when new rows are added to the table.

All check scripts take a JSON string on STDIN that provides both the previous version that was processed and all of the parameters defined in the pipeline YAML file. In our case, we expect all of the basic connection information to be provided in the resource definition:

resources:
- name: mysql-gcp
  type: mysql
  source:
    host: "127.0.0.1"
    user: myuser
    password: mypass
    database: mydb
    table: my-concourse-table

When the check script is called, the resulting JSON that’s passed to the script would look similar to what we see below:

{
  "source": {
    "host": "127.0.0.1",
    "user": "myuser",
    "password": "mypass",
    "database": "mydb",
    "table": "my-concourse-table"
  },
  "version": { "id": "1234" }
}

The output of the check script will be all new “versions” of our resource. What qualifies as a version depends on your resource; for the git resource, for example, a new version could be the SHA of a new commit. In our case, the “id” primary key will act as the version number.

The one edge case we’ll need to account for is the first build, when there will not have been a previous version of our resource. In this case, the “version” field will simply be null. You can see us handling this scenario in our check script.

Once our script has determined what new “versions” we have, Concourse expects us to emit a JSON string to STDOUT stating which new versions are available. The expected format is an array of JSON objects, one per new version, so if three new rows were added to our table, IDs 111 through 113, we would return the following:

[
  { "id": "111" },
  { "id": "112" },
  { "id": "113" }
]
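To make this concrete, here is a minimal sketch of the version-diffing step. The `new_versions` helper and its inputs are illustrative: in the real script, the ids would come from a SELECT against the configured table via a MySQL client gem, and the previous version arrives on STDIN as shown above.

```ruby
require 'json'

# Illustrative helper: given the previous version hash (nil on the first
# check) and the ids currently in the table, return the array of new
# versions in the format Concourse expects on STDOUT.
def new_versions(previous_version, ids)
  last_id = previous_version ? previous_version["id"].to_i : 0
  ids.select { |id| id > last_id }
     .map { |id| { "id" => id.to_s } }
end

# Previous version was id 110; rows 111-113 have since been added.
versions = new_versions({ "id" => "110" }, [109, 110, 111, 112, 113])
puts JSON.generate(versions)  # prints [{"id":"111"},{"id":"112"},{"id":"113"}]
```

Treating a nil previous version as “everything is new” is one simple way to handle the first-build edge case described above.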

In

Next, let’s take a look at the in script, available here. This script will pull down the whole row so that our build script can process it. Much like the check script, we’ll receive all of the configuration provided in the pipeline, including the host, username and password for the MySQL server. We’ll also receive a single version indicating exactly which row we’ll be requesting from the database:

{
  "source": {
    "host": "127.0.0.1",
    "user": "myuser",
    "password": "mypass",
    "database": "mydb",
    "table": "my-concourse-table"
  },
  "version": { "id": "3" }
}

 

Additionally, our script will receive as a command-line argument the path where we should write the version of our resource to disk. For example, in the git resource this would be where the repository is cloned for a build. In our case, we’ll simply write the row that we’re working on to a file in this directory, formatted as a JSON string.

After the in script completes, Concourse expects it to output a JSON string on STDOUT containing both the version of the resource and any metadata we’d like to make available for public consumption. The git resource is a wonderful reference for this: the version it emits would be the SHA of the commit that was pulled, and the metadata could be information such as the author of the commit, the branch it was pulled from, and so on. In our case, we’ll simply emit the ID of the row we’re operating on and omit any metadata:

 

{
  "version": {
    "id": "3"
  },
  "metadata": []
}
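A minimal sketch of the in step’s write-and-report logic might look like the following. The `write_row` helper and the `row.json` filename are assumptions for illustration; the real script would first SELECT the row from MySQL before writing it out:

```ruby
require 'json'
require 'fileutils'
require 'tmpdir'

# Illustrative helper: persist the fetched row into the directory Concourse
# provides as ARGV[0], then return the version/metadata hash for STDOUT.
def write_row(dest_dir, row)
  FileUtils.mkdir_p(dest_dir)
  # Assumed filename; the build's task scripts would read this file back.
  File.write(File.join(dest_dir, "row.json"), JSON.generate(row))
  { "version" => { "id" => row["id"].to_s }, "metadata" => [] }
end

result = write_row(File.join(Dir.tmpdir, "mysql-resource-demo"),
                   { "id" => 3, "value" => "c" })
puts JSON.generate(result)  # prints {"version":{"id":"3"},"metadata":[]}
```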

Out

Finally, we’ll put together the script that writes to the MySQL server: the out script, shown here. Much like the check and in scripts, this script receives a JSON string on STDIN. Again, we’ll see the same “source” block we got with the previous scripts, describing the connection details for the MySQL server, but we’ll also receive a new “params” block, which is defined in the pipeline YAML, specifically in the “put” section. For example, if the “put” section of the job in our pipeline YAML looks like this:

- put: mysql-gcp
  params:
    column: processed
    value: true

 

The JSON that this script will receive will look as follows:

 

{
  "source": {
    "database": "...",
    "host": "...",
    "password": "...",
    "table": "...",
    "user": "..."
  },
  "params": {
    "column": "processed",
    "value": true
  }
}

As with the in script, we’ll also want to take the provided command-line argument, which is a path that will contain all of our build’s resources.

Apart from actually writing to the MySQL table, Concourse expects us to emit a JSON string on STDOUT in the same format as the in script, including the version and metadata, but this time we’ll emit the resulting version of our build. Using the git resource as an example again, if we were pushing up a release candidate, we would emit the SHA of the new commit here instead. For our simple scenario, however, we’ll simply emit the same ID as the row we’re working on.

This time, we’ll also emit a bit of metadata to show off the expected format: the ID of the row, as well as the column and value we updated.

{
  "version": {
    "id": "3"
  },
  "metadata": [
    { "name": "id", "value": "3" },
    { "name": "processed", "value": "1" }
  ]
}
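A sketch of the out step’s bookkeeping, given the params block above, might look like this. The `out_result` helper is an illustrative name, and actually executing the UPDATE (via a MySQL client gem, using the table from the source block) is omitted:

```ruby
require 'json'

# Illustrative helper: build the parameterized UPDATE for the row being
# processed, plus the version/metadata hash Concourse expects on STDOUT.
def out_result(table, id, params)
  sql = "UPDATE `#{table}` SET `#{params['column']}` = ? WHERE `id` = ?"
  output = {
    "version"  => { "id" => id.to_s },
    "metadata" => [
      { "name" => "id", "value" => id.to_s },
      { "name" => params["column"], "value" => params["value"].to_s }
    ]
  }
  [sql, output]
end

sql, output = out_result("mytable", 3, { "column" => "processed", "value" => true })
puts sql  # prints UPDATE `mytable` SET `processed` = ? WHERE `id` = ?
puts JSON.generate(output)
```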

 

Dockerfile

Now that we have our three scripts, it’s time to put them all together into a container image so that we can start using it and share it with others. You can see the Dockerfile that I’ve written for this resource here; it’s fairly straightforward. Let’s take a look:

 

FROM ruby:2.5.0-alpine3.7

RUN apk add --no-cache build-base mysql-dev

COPY check.rb /opt/resource/check
COPY in.rb /opt/resource/in
COPY out.rb /opt/resource/out
COPY Gemfile /opt/resource/Gemfile

RUN chmod +x /opt/resource/check /opt/resource/in /opt/resource/out

WORKDIR /opt/resource
RUN bundle install

 

The important thing to notice here is where we’re placing our three scripts (check, in and out): all in the /opt/resource directory, with no file extension. This is where Concourse expects to find these files when invoking our resource. Other than that, the rest of the Dockerfile is simply setting up the container for running Ruby scripts (note the base container image and the bundle install at the end) and installing any required dependencies (mysql-dev, for example). Concourse doesn’t care what else runs in this container or what the base OS image is, only that it can properly execute these scripts as needed.

We’ll build, tag and publish this image so that we can make this available for any pipeline to consume. This process is no different than any other time we’d be pushing something up to, say, Docker Hub. For example, this is exactly what I would run to push this up to my personal Docker Hub account:

docker build -t mysql-concourse-resource .
docker tag mysql-concourse-resource:latest brianmmcclain/mysql-concourse-resource:0.0.1
docker push brianmmcclain/mysql-concourse-resource:0.0.1

Building the pipeline

The resource is complete! But now we should probably put together a pipeline to try it out. You can see the example pipeline here, but the main things to point out are the resource_types section, which tells Concourse where to find our resource, and the resources section, which is where we define our parameters for using our new resource.

resource_types

resource_types:
- name: mysql
  type: docker-image
  source:
    repository: brianmmcclain/mysql-concourse-resource
    tag: 0.0.1

There’s not much to explain here: we’re telling Concourse about a new resource type, where to find it, and which tag to use. In this case I’m pointing to where I’ve pushed the image on Docker Hub, but Concourse supports private registries as well, which you can read more about here.

resources

resources:
- name: my-mysql
  type: mysql
  source:
    host: ((host))
    user: ((user))
    password: ((password))
    database: ((database))
    table: ((table))

 

Since we named our resource type “mysql” in the resource_types section, we refer to it as such here. Otherwise, all we’re doing is giving this specific resource a name and passing the credentials in the source block. Well, sort of: we’re parameterizing them and will reference an external file to replace those parameters.

Past that, the pipeline is just like any other. In our plan we have a get to fetch new rows from MySQL and trigger builds using the check and in scripts, a custom task that doesn’t do much in this case, and then a put block with parameters specific to the out script.

 

- put: my-mysql
  params:
    sql_path: ./my-mysql/concourse-mysql-resource.XXXXXX
    column: processed
    value: true

I’ve gone ahead and set up a database and table and populated it with some simple data:

CREATE TABLE `mytable` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `timestamp` datetime NOT NULL,
  `value` varchar(255) DEFAULT NULL,
  `processed` tinyint(1) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=utf8;

INSERT INTO mytable (timestamp, value) VALUES (NOW(), "a");
INSERT INTO mytable (timestamp, value) VALUES (NOW(), "b");
INSERT INTO mytable (timestamp, value) VALUES (NOW(), "c");

Before running our pipeline, the table looks like this:

+----+---------------------+-------+-----------+
| id | timestamp           | value | processed |
+----+---------------------+-------+-----------+
|  1 | 2018-04-02 16:00:10 | a     |         0 |
|  2 | 2018-04-02 16:00:13 | b     |         0 |
|  3 | 2018-04-02 16:00:14 | c     |         0 |
+----+---------------------+-------+-----------+

Nothing too exciting, but it will at least show off our resource! Next, we’ll set up the parameters file that will contain our credentials for the MySQL server.

creds.yml

As mentioned, we really don’t want to check these credentials into source control alongside our code, so we’ve moved them into a separate file. You’ll see the keys all match what we found in our pipeline YAML file. For a deeper look at credential management, check out the Concourse docs on the subject.

host: "127.0.0.1"
user: myuser
password: mypass
database: mydb
table: mytable

 

Finally, we’ll set up our pipeline, passing the path to our credentials file, and then unpause it.

fly -t lite set-pipeline -p mysql-test -c pipeline.yml -l creds.yml
fly -t lite unpause-pipeline -p mysql-test

If we check our Concourse server, we can see our pipeline set up and, eventually, start running!

 

 

Once completed, we can see that the latest row (id 3) has been updated as per our configuration:

 

+----+---------------------+-------+-----------+
| id | timestamp           | value | processed |
+----+---------------------+-------+-----------+
|  1 | 2018-04-02 16:00:10 | a     |         0 |
|  2 | 2018-04-02 16:00:13 | b     |         0 |
|  3 | 2018-04-02 16:00:14 | c     |         1 |
+----+---------------------+-------+-----------+

 

Additionally, if we take a look at the show_result task, we’ll see that we also successfully cat’d the contents of the file we wrote during the in script:

 

{"id":3,"timestamp":"2018-04-02 16:00:14","value":"c","processed":1}

 

As we add additional rows to this table, we’ll see this job kick off a new build, which will update the rows as expected.

 

mysql> insert into mytable (timestamp, value) VALUES (NOW(), "d");
Query OK, 1 row affected (0.16 sec)

mysql> select * from mytable;
+----+---------------------+-------+-----------+
| id | timestamp           | value | processed |
+----+---------------------+-------+-----------+
|  1 | 2018-04-02 16:00:10 | a     |         0 |
|  2 | 2018-04-02 16:00:13 | b     |         0 |
|  3 | 2018-04-02 16:00:14 | c     |         1 |
|  4 | 2018-04-09 22:19:44 | d     |         0 |
+----+---------------------+-------+-----------+

 

 

And after the new build completes, the same query shows the new row processed:

mysql> select * from mytable;

+----+---------------------+-------+-----------+
| id | timestamp           | value | processed |
+----+---------------------+-------+-----------+
|  1 | 2018-04-02 16:00:10 | a     |         0 |
|  2 | 2018-04-02 16:00:13 | b     |         0 |
|  3 | 2018-04-02 16:00:14 | c     |         1 |
|  4 | 2018-04-09 22:19:44 | d     |         1 |
+----+---------------------+-------+-----------+

Wrap Up

It’s really exciting to see how easy it is to write custom resource types for Concourse and the number of possibilities this presents. To me, this is one of the strongest features of Concourse, one that makes it stand out from other automation solutions. As technologies change and grow, Concourse can easily grow with them.

If you’d like to read more on developing a custom Concourse resource, be sure to check out the official docs, and if you’re a fan of Concourse in general, follow them on Twitter for updates!

 

About the Author

Brian McClain

Brian is a Senior Product Marketing Manager at Pivotal with a focus on technical educational content for Pivotal customers as well as the Cloud Foundry and BOSH communities. Prior to Pivotal, Brian led a team of infrastructure engineers to help a large entertainment company build out and operate Cloud Foundry, as well as help educate a large team of developers how to deploy and run on Cloud Foundry.
