Docker & Redis on DigitalOcean in 5 mins

It all started when I casually spotted a sponsored tweet offering me $10 worth of virtual computing power from DigitalOcean. I had seen their ads on everything from Facebook to Gmail and thought I’d check out what the fuss was about.

I had been planning to try Docker on something other than my MacBook for a while and was tempted to spin up another Linode server. Instead I decided to see if I could get Docker running on DigitalOcean.

Docker turned 0.6 over the weekend and is heading full speed towards being a production-grade container platform. If you are reading this post you are probably already interested in Docker, but for the uninitiated: Docker allows you to ship applications as containers running in what appear to be self-contained Linux environments. It is based upon Linux container magic and runs within the same operating system as its host.

So here is my guide to getting Docker 0.6.1 running on a DigitalOcean VM (the 5 minutes start now!)

First create a new droplet based upon the Ubuntu 13.04 x64 image

Create the droplet and within a minute or so you should be able to interact with your new machine. Your root password is emailed to the registered email address.

Once it is created, ssh directly onto the box as root; for me it was as simple as

 ssh root@

Now following docker’s install guide I ran

apt-get update
apt-get install linux-image-extra-`uname -r`

sh -c "curl | apt-key add -"

sh -c "echo deb docker main > /etc/apt/sources.list.d/docker.list"

apt-get update
apt-get install lxc-docker

Which allowed me to run docker for the first time 

root@blog:~# docker version
Client version: 0.6.1
Server version: 0.6.1
Git commit: 5105263
Go version: go1.1.2
Last stable version: 0.6.1

Now to run something exciting, let’s run Redis

docker run -d johncosta/redis 

Install redis-cli on the host machine

apt-get install redis-server

redis-cli -h -p 6379

Or connect from your macbook

brew install redis

redis-cli -h -p 6379

Now time to play with Redis

redis localhost:6379> set docker magic
OK
redis localhost:6379> get docker
"magic"

5 minutes must be up by now!

Grails create link to homepage

After reading ZeroTurnaround’s bake-off of modern JVM web frameworks I thought I would give Grails one last go. After all, in the report it beat the likes of Wicket, Spring, Play, etc.

IntelliJ 12 made quick work of setting up a project and soon I had a functioning website. All good so far; I remembered my basic Rails/Grails knowledge from before and have to admit I was able to do a lot in a very short period of time.

However one of the simplest links stopped me in my tracks, and it took a disproportionate amount of googling to figure it out.

How to link to the homepage/root?

After much searching the best I got was

 <a href="${createLink(uri: '/')}">home</a>

Easy once you see it! Hopefully I will save you some time too.
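For comparison, linking to a specific controller and action works the same way; Grails also provides the g:link tag, which generates the whole anchor for you (the controller and action names below are hypothetical):

```html
<!-- createLink builds just the URL, so you write the anchor yourself -->
<a href="${createLink(controller: 'book', action: 'list')}">book list</a>

<!-- g:link builds the whole anchor tag -->
<g:link controller="book" action="list">book list</g:link>

<!-- and the homepage link again, via g:link -->
<g:link uri="/">home</g:link>
```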

Java Collections API Enhancements: Thanks to Closures – Lambda Expressions

Amit Phaltankar

Thank you for visiting this site.

The blog and its content have been moved to a new location and will be maintained from there.

The new address is :

The same post can be read at

Java Collections API Enhancements: Thanks to Closures – Lambda Expressions 

You are requested to visit the blog at the new address and we will continue from there.



Open Source Application Monitoring: Catching Exceptions

Imagine if you will, you are working on a new critical application and you need to write the output of your process calculations to a file. Simple stuff, a few lines of Java later you have

File file = new File("myfile.txt");
try {
  boolean fileCreated = file.createNewFile();
  log.debug("fileCreated = " + fileCreated);
} catch (IOException e) {
  log.error("Could not create file", e);
}
You even remembered to log the exception just in case there was a problem in production. A few weeks later the code ships and works perfectly, until one day the network mount disappears and the application starts to throw exceptions.

Your application’s logs then fill up with the exception message and stacktrace, but no one realizes there is an issue until an angry customer rings up complaining that they never received their report.

A far worse scenario is that the exception occurs in production but the development staff decide that it is a “good exception” and that the best course of action is to ignore it. Forever! Well until the new guy starts and they have to explain that it is a “good exception”, and so are the following 600 exceptions.

I remember when I first heard the term “Good Exception”. I was working for a startup in London over ten years ago. I was new to the company and the first phase of the application was already in production as part of a critical beta phase of the product. Each morning a developer would have to be in the office from 6am, ready to deal with any issues that might arise.

One cold December morning I was in the office, and as part of the morning grind I was going through a checklist for the application. Checkpoint number 27 was “Check application logs”. No more detail, so I jumped onto the application server and started to tail the logs, and to my dismay hundreds of exceptions were being logged in realtime.

I spent the next hour trying to work out what was wrong with the application and what had changed to cause such an exception storm in production. At around 8:00am one of the developers with the longest tenure in the team arrived and calmly pointed out: “oh those, those are good exceptions, you can ignore them. They occur the day after a billing cycle due to a bug in one of the core components”.

Key lesson: exceptions should be exceptional. If you get an exception in production you need to deal with it.
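To make the lesson concrete, here is a minimal sketch (the class and method names are my own invention, not from any real system) contrasting swallowing an exception with failing fast so that both callers and monitoring see the problem:

```java
import java.io.IOException;

public class ReportWriter {

    // Fail fast: wrap and rethrow instead of logging and carrying on,
    // so the failure is visible to callers and to monitoring.
    static String write(boolean mountAvailable) {
        try {
            if (!mountAvailable) {
                throw new IOException("network mount unavailable");
            }
            return "report written";
        } catch (IOException e) {
            throw new IllegalStateException("Could not write report", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(write(true));
        try {
            write(false);
        } catch (IllegalStateException e) {
            // The caller is forced to notice; nothing is silently swallowed.
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

The point is not the wrapper type; it is that the failure propagates instead of dying quietly in a log file.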

Exceptional Workflow

Exceptions are part of both the development process and the application monitoring process. An ideal flow is that once an alert is generated in production it is fed back into the development process as a potential bug fix or improvement. The key is to provide adequate monitoring of exceptions in production and to provide sufficient feedback into the development team.

How many of the applications you have worked on have had anything more than log-level or log-scraping exception monitoring?

How many development processes have you seen that link production exceptions to bug fixes and strive to fix as many exceptions as possible?

How many “good exceptions” were written to your logs in production since you started reading this post?

Baking Exception Monitoring

Personally I think one of the reasons for poor infrastructure in critical areas like this is down to the way different parts of an organization are structured. In many large teams people are dedicated to different functions of the application lifecycle. Developers are generally focused on the application’s business requirements and have unforgiving deadlines. Support teams have deadlines of a different type. They also tend to support many applications across a range of functions.

With the advancement of the DevOps movement these communities are starting to join forces and work on the infrastructure behind the applications. So one problem is certainly being addressed and will become more and more widespread in the next 2-3 years. The other major factor is tool support. How many good modern tools are available for application monitoring that are quick to use and onboard? There are a number of interesting commercial startups in this space at the moment; Airbrake, for example, is used by a number of corporations to add monitoring support to their applications.

Airbrake offers rich functionality and also supports almost all popular languages in its API arsenal. However, it is hosted on their servers, and this deployment configuration will not suit the many application developers who build bespoke software for internal clients and are forbidden to publish information externally, regardless of content. Interestingly enough there is an open source alternative to Airbrake called ErrBit which is compatible with the Airbrake API.

It’s a Ruby on Rails application that can be easily installed on your local server; for the purposes of this blog I put it up on Heroku, mainly for ease of use. Once you have installed ErrBit you can quickly post exceptions and stacktraces to the server, and it has some basic workflow for your support staff to monitor and deal with the exceptions. It also has integration with some of the most popular issue trackers; however there is currently no Jira support.

Installing ErrBit

This was the first time I used Heroku for anything, even though I had heard great things. I had an account, but it was unverified, something that I overlooked when I did my first installation. ErrBit needs MongoDB, and to use MongoDB with Heroku you need to verify your account with a credit card. This surprisingly stopped my application working for a while, and it took me ages to notice the small error message in the install script. You have been warned!

To install the application you need to follow the simple steps from the GitHub page (you will need git and ruby installed locally)

Clone the repository

git clone

Create & configure for Heroku

gem install heroku
heroku create example-errbit --stack cedar
heroku addons:add mongolab:starter
cp -f config/mongoid.mongolab.yml config/mongoid.yml
git add -f config/mongoid.yml
git commit -m "Added mongoid config for Mongolab"
heroku addons:add sendgrid:starter
heroku config:add HEROKU=true
heroku config:add
heroku config:add
git push heroku master

Seed the DB

heroku run rake db:seed

Pretty quick, at least once you have a validated Heroku account. Once completed simply type

heroku open

And your new ErrBit install should be running. My instance is at and you can use to login



Once you have installed ErrBit you will need to configure your users and whatever applications you plan to monitor. Again straightforward: clicking the “Add a new app” button will bring you to the configuration screen, and once you create the application record you will get the important application ID. You will need this later when publishing exceptions.

Publishing Exceptions from Java

As I mentioned earlier, ErrBit is compatible with the language APIs that Airbrake provides, and luckily for me there is an actively developed API for Java available at This will allow you to send exceptions from your Java server applications, mobile applications and desktop clients. To start using it with Maven add the following dependencies to your pom file


Once I imported the libraries I saw a slight problem: how to override the URL for communicating with the backend server. In the AirbrakeNotifier class, which is responsible for calling the server-side REST API, the URL for Airbrake is hardcoded, whereas I needed to override it for ErrBit. A quick solution was to create a new ErrBitNotifier class which takes the base URL as a constructor argument.

import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.ProtocolException;
import java.net.URL;

import airbrake.AirbrakeNotice;
import airbrake.NoticeXml;


public class ErrBitNotifier {

    private final String baseUrl;

    public ErrBitNotifier(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    private void addingProperties(final HttpURLConnection connection) throws ProtocolException {
        connection.setRequestMethod("POST");
        connection.setDoOutput(true);
        connection.setRequestProperty("Content-type", "text/xml");
        connection.setRequestProperty("Accept", "text/xml, application/xml");
    }

    private HttpURLConnection createConnection() throws IOException {
        return (HttpURLConnection) new URL(String.format("http://%s/notifier_api/v2/notices", baseUrl)).openConnection();
    }

    private void err(final AirbrakeNotice notice, final Exception e) {
        e.printStackTrace();
    }

    public int notify(final AirbrakeNotice notice) {
        try {
            final HttpURLConnection toairbrake = createConnection();
            addingProperties(toairbrake);
            String toPost = new NoticeXml(notice).toString();
            return send(toPost, toairbrake);
        } catch (final Exception e) {
            err(notice, e);
        }
        return 0;
    }

    private int send(final String xml, final HttpURLConnection connection) throws IOException {
        final OutputStreamWriter writer = new OutputStreamWriter(connection.getOutputStream());
        writer.write(xml);
        writer.close();
        return connection.getResponseCode();
    }
}

Perhaps the Airbrake API could allow for custom configuration of the URL in the next revision. Once you have created a new ErrBitNotifier you can start publishing exceptions. Going back to our previous example

import java.io.File;
import java.io.IOException;

import airbrake.AirbrakeNotice;
import airbrake.AirbrakeNoticeBuilder;
import org.apache.log4j.Logger;


public class TestException {

    private static org.apache.log4j.Logger log = Logger.getLogger(TestException.class);

    public static void main(String[] args) {

        File file = new File("h://myfile.txt");
        try {
            boolean fileCreated = file.createNewFile();
            System.out.println("fileCreated = " + fileCreated);
        } catch (IOException e) {
            log.error("Could not create file", e);
            AirbrakeNotice notice = new AirbrakeNoticeBuilder("b4f7cb2020b2972bde2f21788105d645", e, "prod").newNotice();
            ErrBitNotifier notifier = new ErrBitNotifier("");
            notifier.notify(notice);
        }
    }
}


This code will throw an IOException (well, at least on my computer, since I don’t have an h drive!) and the exception will be seen on the ErrBit console. ErrBit has the ability to spot duplicate exceptions, and you can set it up to email you when an exception is generated.

Also, the Airbrake API has log4j appender support, but it is tied to the Airbrake public URL and I have left it out of this post. However, it can be turned on with the following log4j configuration example

log4j.rootLogger=INFO, stdout, airbrake

log4j.appender.stdout.layout.ConversionPattern=[%d,%p] [%c{1}.%M:%L] %m%n



Application Exception Monitoring is an important part of your application lifecycle.

Exceptions should be easily visible to the support and development teams and your development process should look to address all exceptions in forthcoming sprints.

Exceptions should be used for exceptional cases only; any exception that is not acted upon in production is noise and creates confusion.

Tool Support is important in this area and ErrBit looks like a great multi-language tool that can help support your Exception Management workflow.

How Spring 3.1 Environments & Profiles will make your life better!

My goal of writing one technical blog post per week fell by the wayside around December, mainly due to work-related project time constraints. I have a four-day weekend and a sparkling new home office, which has to be used for something other than surfing Hacker News. It’s noon here and I have two hours to produce this blog post, so here I go!

The software I write or design generally needs to be deployed in different configurations. These deployment endpoints can be generalised into the following buckets

  1. Java Enterprise containers (JBoss, WebLogic, Tomcat, GlassFish, etc.)
  2. Standalone Java application
  3. GUI Applications
  4. Testing Frameworks

Ignoring GUI applications for the moment (I might return to these later), the code is often the same between container, standalone and testing. This leads to a key design consideration, or philosophy, when designing and coding this “type” of software: the code I write needs to run perfectly and untainted in each scenario.

That’s crucial to quality and robustness! The problem is that there are environmentally aware resources that are configured depending on where the code is executed. When I am writing a unit test I will not (I know I could, but then you are missing the point) have my datasource bound to a JNDI tree. Whereas in the container I simply look up the tree and ask, can I have a datasource please?

And frameworks like Spring encourage this form of development, or at least have popularized patterns such as inversion of control. Instead of the executing code configuring the database or queue, it is injected at run time and life is good again.

So is this a blog post to reiterate the same warbling about inversion of control? Not quite. At this point I have a piece of code

public class BusinessClazz implements SomethingReallyImportant {
    private DataSource dataSource;

    public void setDataSource(DataSource dataSource) {
        this.dataSource = dataSource;
    }
}
The datasource is injected and the BusinessClazz is none the wiser about the origination of the datasource. Hey, I’m not the smartest guy in the world, but I’m certainly not the dumbest. I mean, I’ve read books like “J2ee Development Without Ejb” and “Expert one-on-one J2EE design and development”, and I think I’ve understood them. They’re about girls, right? Just Kidding. I create a datasource in spring and inject it

<bean name="myBusinessClazz" class="BusinessClazz">
   <property name="dataSource" ref="dataSource"/>
</bean>
My business service now is agnostic to the datasource origin and can now happily run in any of my deployment end points. But what about the datasource, how do we configure this to run anywhere? We are focusing on the datasource but this example can be applied to any component that changes between environments or runtimes.

A datasource typically requires two pieces of configuration. The first part of the puzzle is where the database is and how I should connect to it. I need hostnames, port numbers, service names, etc.

The second piece of configuration is how it is represented. Here we have a few options. I could create a simple single connection to the database

 <bean id="dataSource" class="org.springframework.jdbc.datasource.SingleConnectionDataSource" destroy-method="close">
    <property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
    <property name="url" value="jdbc:hsqldb:hsql://localhost"/>
    <property name="username" value="sa"/>
    <property name="password" value=""/>
 </bean>

Or I could use apache pooling to create a pool of connections

<bean id="dataSource" 
   class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
  <property name="driverClassName" value="net.sourceforge.jtds.jdbc.Driver"/>
  <property name="url" value="jdbc:myserver"/>
  <property name="username" value="username"/>
  <property name="password" value="password"/>
  <property name="initialSize" value="2"/>
  <property name="maxActive" value="5"/>
  <property name="maxIdle" value="2"/>
</bean>
Or the container manages the connection and we look it up

<jee:jndi-lookup id="dataSource" jndi-name="java:mydatasource"/>

There are many other ways of configuring the datasource, but this is enough for illustration purposes. Spring’s PropertyPlaceholderConfigurer class does a great job of handling properties that change between environments.
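As a sketch of what that looks like (the property file name and keys here are hypothetical), the placeholder configurer is declared once and the environment-specific values move out into a properties file:

```xml
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="location" value="classpath:env.properties"/>
</bean>

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="${jdbc.driverClassName}"/>
    <property name="url" value="${jdbc.url}"/>
    <property name="username" value="${jdbc.username}"/>
    <property name="password" value="${jdbc.password}"/>
</bean>
```

This handles values that vary per environment, but not the structural differences between datasource styles, which is the problem below.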

The problem is how do we seamlessly go between deployment types and pickup different styles of datasources?

Roll your own

This is not a new problem, and it has been solved in many different ways by many development teams over the last six or seven years. Each team would look at the problem and create a custom way of adding enough intelligence to the code to help alleviate it.

One of the biggest drivers behind making code run cleanly inside and outside of the container is to allow developers to write unit tests that test true production code.

A typical example of a solution to this problem is to define datasources in separate context files and create a naming convention to bring order to the chaos. For example, if we had three modes of operation we would have three separate context files.


Next comes custom infrastructure code that has to be used at every point of Spring initialization throughout your application. It would leverage the ability to use Ant-style wildcard searches for context files across the deployed classpath.

ClassPathXmlApplicationContext containerContext = 
   new ClassPathXmlApplicationContext("**/**-containerContext.xml");
ClassPathXmlApplicationContext nonContainerContext = 
   new ClassPathXmlApplicationContext("**/**-pooledContext.xml");
ClassPathXmlApplicationContext testingContextContext = 
   new ClassPathXmlApplicationContext("**/**-singleContext.xml");

In a few hours you can have a system that is tolerant of each runtime, and the code is engineered to take advantage of this. This roll-your-own approach has worked for several years but has the major disadvantage that each team and project has a different way of tackling the problem.

Enter Spring Profiles

As of Spring 3.1 there is a solution to this problem (If you do not have an upgrade path for the libraries your project depends on then you have a much larger problem!)

Spring has introduced the notion of Environments and Profiles across the container. Each application context has an Environment object which can be accessed easily

 ClassPathXmlApplicationContext classPathXmlApplicationContext = 
    new ClassPathXmlApplicationContext();
 ConfigurableEnvironment configurableEnvironment = 
    classPathXmlApplicationContext.getEnvironment();
Each environment can have a number of active profiles. Most Spring profile examples talk about profiles such as dev or prod. However, I already have a Spring solution to environment-type issues; my problem is the different profiles needed for multiple runtimes. That is the advantage of the implementation: you are free to decide how you use it.

By default your beans have no profile and are loaded into the container. So let’s start with an example. Imagine this is my current single-connection datasource context

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="…">
 <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
   <property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
   <property name="url" value="jdbc:hsqldb:hsql://localhost"/>
   <property name="username" value="sa"/>
   <property name="password" value=""/>
 </bean>
</beans>

As of Spring 3.0 there is a new xml application context class called GenericXmlApplicationContext which is an alternative to ClassPathXmlApplicationContext and FileSystemXmlApplicationContext.

The advantage of using GenericXmlApplicationContext is that it can be configured completely with setters rather than a single clunky constructor. Just remember to call refresh() once you are ready to instantiate the container.

Armed with GenericXmlApplicationContext we initialize the container with the following code snippet

GenericXmlApplicationContext ctx = new GenericXmlApplicationContext();
ctx.getEnvironment().setActiveProfiles("standalone");

Here I set the active profile to standalone. By convention in my project I will consider any code running outside the application container as “standalone” and anything inside the container as “container”. I can set multiple profiles here; for example, I could have the following to set it to standalone and ActiveMQ rather than MQSeries

ctx.getEnvironment().setActiveProfiles("standalone", "activemq");

Setting active profiles will have no effect on the current configuration context, since I haven’t set a profile on the beans. So we change our configuration context to

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="…" profile="standalone">
  <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
   <property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
   <property name="url" value="jdbc:hsqldb:hsql://localhost"/>
   <property name="username" value="sa"/>
   <property name="password" value=""/>
  </bean>
</beans>

These beans will now only be initialized if the active profile is set to “standalone”. Profiles are an attribute of <beans/> rather than <bean/>, so you cannot select profiles for individual beans. In older versions of Spring this would still leave the problem of multiple files and Ant wildcards to select the correct context at runtime.

Spring 3.1 has introduced the ability to nest <beans/> within <beans/>. With a quick refactor we can now have a single datasource context file such as

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="…">
 <beans profile="standalone">  
  <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"> 
   <property name="driverClassName" value="org.hsqldb.jdbcDriver"/> 
   <property name="url" value="jdbc:hsqldb:hsql://localhost"/> 
   <property name="username" value="sa"/> 
   <property name="password" value=""/> 
  </bean>
 </beans>

 <beans profile="container">
  <jee:jndi-lookup id="dataSource" jndi-name="java:mydatasource"/>
 </beans>
</beans>

And I can quickly change the profile to container by

 ctx.getEnvironment().setActiveProfiles("container");
How to change profiles

As shown above, you can change profiles programmatically by coding something like

 ctx.getEnvironment().setActiveProfiles("standalone");

Another way to change the profile is to pass a system parameter at run time

 -Dspring.profiles.active="standalone"

But you can also set it as part of the init parameters of your ear/war, for example in web.xml:

 <context-param>
   <param-name>spring.profiles.active</param-name>
   <param-value>container</param-value>
 </context-param>
What does it mean for you?

Hopefully this gives an insight into a new and powerful feature of Spring 3.1 and a feature that has been sorely missed over the 3 major revisions of Spring.

London Java Community Open Conference 2011

Yesterday I was explaining my blogging habit to an ex-colleague and he felt that to make any kind of impact in the tech blogging sphere I would need to blog a thousand words a day! I don’t think I will ever come close to that; however, I will try to blog at least one technical article a week until the end of the year. That’s achievable, right? That is only 5 weeks away; I am sure I have 5 blog posts in me.

Back to the original point of this post: I was at the London Java Community Open Conference, where I met many ex-colleagues including my friend who thinks a thousand words is a realistic target. The conference is a yearly collection of presentations loosely linked to the daily life of a Java developer. It was held at IBM’s offices on the South Bank, which allowed for some great views of the Thames as we sat and listened to the eclectic mix of presenters.

I woke up late with a slightly sore head and was only able to catch the second half of the keynote speech by Ben Evans and Martijn Verburg on their insights and predictions for the Java ecosystem and community. They interestingly made the comment that Swing was dead and Oracle was moving it into maintenance mode, to be replaced by JavaFX. They didn’t give a timeline on this, but I think, like ‘roaches, Swing will be hard to kill off. It’s interesting, as I never gave any credence to JavaFX; I had heard it was a competitor to Flash/Flex and immediately dismissed its importance. But the guys were saying that Oracle has dropped the scripting concept and refocused the team. (Ah, that will be a blog post: a frustrated web developer’s trials and tribulations of JavaFX development.)

I also learned the London Java Community has a place on the JCP. This is an amazing testament to the strength of the community in London. I remember when I first moved to London and did a quick search for technical user groups, I only found the Java Special Interest Group, which didn’t seem extremely active. Now there are so many well-supported and exciting developer groups throughout the city, across many different programming languages and frameworks. Read more about their amazing election win at

As part of the JCP process the LJC gets to vote on every JSR. I can only contemplate the sheer volume of paperwork this produces, so to encourage wider community participation they have introduced an “Adopt a JSR” programme. I think the title speaks for itself: pick a JSR and get involved. As with all these things, it requires a considerable amount of effort and dedication. A quick look at the LJC website shows that at present they seem to be looking for help with the following JSRs

These are key JSRs for both the JSE and JEE standards, and it is a great way to get involved in shaping the future of both specs. If you want to find out more go to

“The really frakkin simple guide to clojure” by John Stevenson

Next up on the conference schedule was a quick introduction to Clojure by John Stevenson who happens to be the Atlassian Ambassador for the UK.

I go through a functional programming phase every 3-4 months: I pick the functional language of the day, buy a book, hack around, give up complaining that real people don’t need this, and go back to object-oriented programming. I had heard of Clojure but never mustered enough interest to check it out. I am still trying to improve at Scala and really do not have the capacity to learn two pseudo-functional languages at the same time.

Clojure is a dialect of Lisp; interestingly, Lisp was the second programming language I learned as a kid. The reason is that one of the applications I had access to was AutoCAD, which had a dialect of Lisp called AutoLISP. I am not sure if AutoLISP is around anymore, but it was perfect for extending AutoCAD’s primitive functions.

Back to the presentation: it was a real fly-through of the language and its constructs. John gave a good feel for the language and highlighted its strengths in a multi-core world, or rather, the strengths of functional languages in general. Enough of an introduction to bait my interest, but not enough to ditch Scala right now as my “language to master”.

Haskell – Emily Green

If I knew little about Clojure, I knew even less about Haskell! Emily Green works for a company called Scrive, who build their server-side applications in Haskell. She was a Java developer in a previous life but has been programming Haskell for the last nine months. An engaging presenter, one of the best I saw at the conference. The interesting thing is that for the majority of the talk she gave no real insight into the language itself. I was hoping for a 40-minute introduction to Haskell and to leave with some idea about the language, but it wasn’t until the very end that there was even the slightest hint of code.

Nonetheless the talk was very interesting and was centred on her life with Haskell. She gave a very brief tour of her likes and dislikes with the language and the community. I will not do the talk any justice by trying to list out all the points, but two stuck in my mind. The first was to make the compiler work harder: instead of writing unit tests to ensure that all code is tested, write the logic into the code and get the compiler to ensure correctness at compile time. Emily noted that when working with Haskell the compiler will refuse to compile code that is not just syntactically but also functionally correct. The other point I remember from yesterday is that most of the content produced around Haskell consists of academic papers, as the language is rooted in research. Also she mentioned something about monads 😉

The Future of Java – Steve Elliott

Steve Elliott from Oracle presented on the future of Java. Starting by going through some of the highlights of the recently released Java 7, he moved on to focus on versions 8 & 9. Java 8 is expected sometime in the summer of 2013 and will introduce some ground-breaking changes. Language-level support for lambda expressions is due for this version and is hotly debated throughout the Java community. Recently a build of Java 8 with the first true preview of lambda expressions was released, and the syntax will be (copied and pasted from Brian Goetz’s email to the lambda-dev mailing list on the final decision):

x -> x + 1

(x) -> x + 1

(int x) -> x + 1

(int x, int y) -> x + y

(x, y) -> x + y

(x, y) -> { System.out.printf("%d + %d = %d%n", x, y, x+y); }

() -> { System.out.println("I am a Runnable"); }

I need to spend some quality time with the latest compiler, but this is pretty much the C# syntax, so for .NET developers, or Scala developers for that matter, it should be relatively straightforward to pick up.
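To see what that shape looks like in real code, here is a minimal sketch using the functional interface types planned for Java 8 (the class name is my own invention; it needs a Java 8 or later compiler):

```java
import java.util.function.IntBinaryOperator;

public class LambdaDemo {

    // (x, y) -> x + y: parameter types are inferred from the target interface
    static final IntBinaryOperator ADD = (x, y) -> x + y;

    public static void main(String[] args) {
        // () -> { ... }: a zero-argument lambda targeting Runnable
        Runnable r = () -> System.out.println("I am a Runnable");
        r.run();

        System.out.println("2 + 3 = " + ADD.applyAsInt(2, 3));
    }
}
```

The lambda is just a terser way of writing the anonymous inner class we have all been typing for years.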

Another key core component predicted for release in 8 is Project Jigsaw, more commonly known as Java modularization. Steve reiterated that this was not just bolting Ivy or OSGi on top of Java but going to the core of the platform and adding modularization to the Java platform itself. Java has evolved over the years and the component entanglement is a major problem. The task of unpicking the components and stitching them back together is currently underway.

Steve referenced Mark Reinhold's requirements documents on the Java module system as a good place to start to understand the truly mammoth scale of the task. You can check it out at
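Nothing is final yet, but the module declarations being discussed look roughly like the sketch below (the module and package names are invented for illustration, and the syntax may well change before release):

```java
// module-info.java — a speculative sketch of a Jigsaw-style module declaration
module com.example.app {
    requires java.sql;        // declare a dependency on another module
    exports com.example.api;  // make only this package visible to consumers
}
```

The interesting part is that dependencies and visible packages become platform-level concepts rather than something grafted on by an external build tool.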

Five Hours with ThoughtWorks Go


After a day of looking after a sick child I had won enough brownie points to get a few solid hours of geek time on my computer. It's the first time I have used my PC in anger since I installed bias lighting after reading Jeff Atwood's post earlier this month. I bought the Antec Halo 6 LED Bias Lighting Kit and placed it on the back of my middle monitor. Two weeks later the lights have fallen off and are now doing a great job of lighting the floor underneath my desk.

But this is not a review of backlights; it's a review of ThoughtWorks' agile release tool "Go". I believe it is the reincarnation of Cruise Control. Remember Cruise Control? I do, and I never liked it. Once I discovered TeamCity I was hooked and have never looked back.

However TeamCity is a build tool. It compiles code. It doesn't care what you do with it, it doesn't care when you deploy, it doesn't care about your team structure, it doesn't care about release notes, etc. If you need any extra "team" features you will have to customize build tools such as Ant/Gradle/Maven to accommodate your wishes. Who has the time? Well, I know some teams do, but the vast majority of development teams do not have the time to sit down and write custom team build tools.

What I need is to take a TeamCity style build tool but introduce the concept of

  • Environments
  • Release Calendar
  • Issue Tracker Integration
  • Team Structure
  • Release Notes
  • Code Quality View
  • Signoff

I had heard that Go was more than just a build server; it was a release management tool. It wasn't just Cruise Control with a fancy UI, it was a one-stop shop for your team's release needs. So time to check it out…

There are two versions available, an Enterprise Server edition (no pricing on the site) and a Community Edition.  Nice to see that the community edition comes with LDAP integration, often the carrot to get development teams to purchase enterprise versions.

I wanted to get a feel for the full version so I downloaded and installed the Enterprise Version.


Installation on a Windows 7 machine was a breeze. I downloaded both the Server and Agent exes and installation was quick and painless. The server installation picked port 8153 (no option in the installation procedure to choose the port) and the Agent only needed the server's IP address.

Once installed, a browser was opened pointing to http://localhost:8153/go/home where I was asked to add a pipeline. I know, a good boy scout reads the manual (I wasn't even able to find the manual later on!), but I hadn't at this stage and really could have done with some on-screen information explaining what a pipeline meant in this context.

I think, in Go's parlance, a pipeline is a grouping of units of work. For example, if you had a complex release procedure made up of several stages, each stage would be modelled as a pipeline, which could allow for quite elaborate release models.

For this demo I originally wanted to build Netty from its repository on GitHub. Netty uses Maven, and although I am not a fan of Maven I do have it on my machine. The Maven builds I attempt rarely work out of the box; I always thought working out of the box was the point of Maven!? Well, Netty didn't, as it needed another repository added to its pom.xml and I had neither the will nor the time to go fork the project and add it. I switched to a much simpler project called netty-examples, again Maven-backed but completely buildable out of the box. This is not a post on Maven, so back to Go!

The relationship of Pipeline to source to build step is as follows

I wonder whether only allowing one source repository per pipeline will be considered a limitation? You can always chain on to other pipelines by selecting their source as your source.

Materials is almost the same as saying SCM, except for the last option, which is Pipeline. As mentioned, Pipeline allows you to chain on to a previous pipeline's source. The list of SCMs available out of the box is:

  • Git
  • Subversion
  • Mercurial
  • Perforce

Surprised not to see CVS here; it's still very popular! With materials added it is time to add the build job.

Here we can add the build job; it's time to compile the code. The task types can be:

  • Ant / Nant
  • Rake
  • Custom Command

No Maven. This is where I started to regret picking a Maven-based project, but a quick custom command with Maven later and we were up and running.
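For reference, behind the UI Go stores this setup in an XML config file, and a custom-command task for Maven ends up looking roughly like the sketch below. The element names, pipeline name, and repository URL here are from memory and purely illustrative, so check them against your own generated config rather than treating this as gospel:

```xml
<pipeline name="netty-examples">
  <materials>
    <!-- placeholder URL: point this at your own repository -->
    <git url="https://github.com/your-org/netty-examples.git" />
  </materials>
  <stage name="build">
    <jobs>
      <job name="compile">
        <tasks>
          <!-- the "Custom Command" task: run Maven's package goal -->
          <exec command="mvn">
            <arg>package</arg>
          </exec>
        </tasks>
      </job>
    </jobs>
  </stage>
</pipeline>
```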

Before I built the project I decided to take a closer look at the environments option of the tool. Predictably, no environments were present, and I was expecting the ability to define exactly what an environment meant to my team. I was disappointed!

The environment relationship is as follows

It is pretty simplistic and lacked the sophistication I was hoping for. Yes, I understand you can change anything with environment variables, but I was hoping to move away from name-value pairs and into something more structured. I created an environment for each of the stages in my pretend team.

I assigned the build pipeline to the development environment and pressed build on the project. Several iterations later (by which point I had dropped Netty for netty-examples) I had a working build.

Here is a view from the build console.

And finally a glowing green light to give any techie a warm and fuzzy feeling.

So far we have a build that is linked to an environment, and looking at the environment dashboard you can get a great overview of where your different stages are at.

And once you spend the time to configure everything, you can start to see where Go differs from other build servers.


I would have to spend more time with Go to really give it a proper evaluation; five hours is just not enough to make an informed decision on its usefulness to a global development team. I do like the concept of environments. It is probably the one killer feature of the product that sets it apart from all the other build servers I have used. I feel it could have been more sophisticated in terms of environment definition, but the environment variables do allow you to achieve all I would really need.

I felt its core build infrastructure was not as mature or as powerful as TeamCity. It lacked the feature set that TeamCity has to offer out of the box. TeamCity caters for almost anything you can throw at it and has great integration with most build tools and plugins.

I don't think I will be dropping TeamCity in favour of Go any time soon, but I will continue to monitor the product. You can download a copy from