Developer Experience for Containers

As our organization attempts to transition to Docker, one question keeps coming up: “How are we going to change the local developer experience?” Arguably, this is the most important aspect of the entire Docker/container stack. If containers don’t offer developers an improvement to their current workflow, why would they be motivated to use them?

By now, selling the benefits of containers to a developer isn’t all that difficult; in theory, the case is clear. But once developers attempt to actually dockerize their code, they start to run into roadblocks and critical choices. This post describes our attempt to resolve those choices early.

Our Setup

The first application we chose to containerize is one that was recently brought into our organization; we inherited it. The application is a Node app, made up of a series of microservices, also running on Node, that together form the full product. The goal was not only to unify the experience of developing across the whole app, but to make it possible to test changes against a full environment running on a developer’s laptop.

The application consists of a number of individual git repos. The first step didn’t involve containers at all; it was simply to provide an easy way to check out the entire application in one step. Our solution was git submodules, which lets devs run one command to get all the code they might need:

    git clone --recursive <repo-url>

This clones the whole repo and pulls in all the submodules. For applications consisting of hundreds of microservices, this clearly won’t work. At that size, I recommend drawing your context boundaries around specific services and isolating them, which will also help define better boundaries within your application.
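For reference, wiring up the submodules in the first place is a one-time step in the top-level repo. The repo names and paths below are hypothetical, purely for illustration:

    # run once in the top-level repo to register each service as a submodule
    git submodule add https://example.com/org/api-service.git services/api-service
    git submodule add https://example.com/org/auth-service.git services/auth-service
    git commit -m "Add service submodules"

    # devs who already cloned without --recursive can catch up with:
    git submodule update --init --recursive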

The other thing we did was write a couple of helper scripts, kept deliberately simple. The top-level script executes a build in each of the submodule repos by calling a standardized build script that each repo implements, and those scripts simply wrap each repo’s existing build. Why did we do this? To standardize the steps needed to set up an environment that is ready to run a container.
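As a sketch, the top-level script is little more than a loop. The layout below (a services/ directory, a build.sh in each submodule) is an assumption for illustration; the real point is that every repo exposes the same entry point:

    #!/usr/bin/env bash
    # build-all.sh -- run each submodule's standardized build script
    # (directory layout and script names are hypothetical)
    set -euo pipefail

    for dir in services/*/; do
        if [ -x "${dir}build.sh" ]; then
            echo "Building ${dir}..."
            (cd "$dir" && ./build.sh)
        fi
    done

Each repo’s build.sh can be as simple as wrapping the existing npm steps, e.g. npm install && npm run build.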

Containers

Finally, to the containers. For this, we put our docker-compose file in the top-level repo. Individual Dockerfiles live in the child repos and are referenced from the docker-compose file. This means initial docker builds might be semi-lengthy (though still quick), but subsequent runs are very fast.
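A minimal sketch of that layout, with hypothetical service names and ports (the compose file and Dockerfile below are typical patterns, not our exact files):

    # docker-compose.yml in the top-level repo
    version: "2"
    services:
      api:
        build: ./services/api-service    # Dockerfile lives in the child repo
        ports:
          - "3000:3000"
      auth:
        build: ./services/auth-service
        ports:
          - "3001:3001"

    # Dockerfile in each child repo (a common node pattern; base image tag is illustrative)
    FROM node:6
    WORKDIR /app
    # copying package.json first lets the npm install layer cache across code changes
    COPY package.json .
    RUN npm install
    COPY . .
    CMD ["npm", "start"]

With this in place, a single docker-compose up from the top level brings up the whole platform.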

To make a code change, a developer drops into the child repo of the service they want to work on, makes the change, and re-runs the docker-compose command. Here is where the magic of Docker really speeds things up. As only the one container has changed, only that one container is rebuilt. Additionally, thanks to layer caching, only the layers from the changed files onward (often just the last layer) need to be rebuilt.
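Concretely, the inner loop looks something like this (again with a hypothetical service name):

    cd services/api-service
    # ...edit some code...
    cd ../..
    docker-compose up -d --build api    # rebuilds and restarts only the changed service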

Areas for improvement

One area for improvement is the workflow for git submodules. Submodules add two difficulties we didn’t have before. One is the need for --recursive on the call to git clone. More important is the need to update the uber repo after every update to a submodule. This is a definite pain point.
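To make the pain point concrete, every submodule change requires two commits, one in the child repo and one in the uber repo to bump the recorded pointer (paths and messages hypothetical):

    # commit the fix in the submodule as usual
    cd services/api-service
    git commit -am "Fix the thing"
    git push

    # then also record the new submodule pointer in the uber repo
    cd ../..
    git add services/api-service
    git commit -m "Bump api-service to latest"
    git push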

The current discussion for solving this revolves around using a CI system (like Jenkins) to monitor the submodule repos and trigger an update of the uber repo. This seems viable and simple, so we will likely go with it for now, though it is unfortunate that it requires a separate tool.
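The job itself wouldn’t need to be complicated. A sketch of what the CI system would run on each submodule push (the repo URL and branch name are assumptions):

    # run by CI in a fresh checkout whenever a submodule repo gets a push
    git clone --recursive https://example.com/org/uber-repo.git
    cd uber-repo
    # check out the latest commit on each submodule's tracked branch
    git submodule update --remote
    # the moved pointers show up as ordinary changes; commit and push them
    git commit -am "Bump submodules to latest"
    git push origin master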

Conclusion: does it actually speed up devs?

At this point, we haven’t actually sped devs up, but we haven’t added much friction either. While the submodule structure has some friction, it’s not friction beyond what we already had in some form. What we have done is make it easy for devs to run the whole platform locally.

I’d like to get metrics on how much that is helping our devs, though I’m not sure what those metrics would be. If you have any suggestions, feel free to drop me an email or leave a comment below!