Before You Go Over the Container Cliff with Docker, Mesos, etc.: Points to Consider

December 15, 2016

As a company making software for Continuous Delivery and DevOps at scale, XebiaLabs is pretty much always in discussions with users about the benefits and challenges of new development styles, application architectures and runtime platforms. Unsurprisingly, many of these discussions focus on microservices on the application side, and on containers and related frameworks such as Docker, Kubernetes, Mesos and Marathon on the platform side.

I’m personally really excited about the potential of microservices and containers, and typically recommend emphatically that our users research them. But I also add that doing research is absolutely not the same thing as deciding up front to go for full-scale adoption.

Given the incredibly rapid pace of change in this area, it’s essential to develop a clear understanding of the capabilities of the technology in your environment before making any decisions: production is not usually a good arena for R&D.

Based on what we have learned from our users and partners that have been undertaking such research, our own experiences (we use containers quite a lot internally) and lessons from companies such as eBay and Google, here are six important criteria to bear in mind when deciding whether to move from research to adoption:

1. Genuine business need

Perhaps the most fundamental question that needs to be answered before deciding to adopt microservices or containers is whether there is a real business problem that needs to be solved…and that cannot satisfactorily be solved with your existing approaches or technologies.

Microservices and containers are new, fast-moving and still not very well understood, all of which represents a risk that needs to be weighed against some concrete benefit for your teams and organization.

I can’t say it better than Etsy’s former principal engineer Dan McKinley:

[Consider] how you would solve your immediate problem without adding anything new. First, posing this question should detect the situation where the “problem” is that someone really wants to use the technology. If that is the case, you should immediately abort.

2. Engineering know-how

If you are clear that microservices and/or containers do indeed promise to solve a problem that you can’t address in other ways, check that you have access to expert platform engineering resources because you will need them.

It’s not just that most of the APIs and frameworks that people are looking at are pretty much brand new: getting a container-based platform up and running in production means solving many “adjacent” problems that the current frameworks aren’t even intended to address, such as optimizing networking, deciding on storage strategies, handling backups and failover, dealing with security and so on.

3. Willingness to “learn as you go”

At present, there are many more questions around microservices and containers at any kind of production scale than there are readily accessible answers. Even if you have the right engineering expertise to handle these challenges, you should be prepared for a multi-year period of ongoing experimentation and learning.

At least some of the APIs and frameworks you initially pick will undergo significant, backwards-incompatible changes or even fall by the wayside entirely. You will also need to rip and replace others that turn out not to be suitable or mature enough for your scenario. And as regards best practices for everything from operational procedures to app delivery patterns: be prepared to develop these yourself.

4. Microservices != containers

When we talk with users coming from a platform/operations angle, or with those who have heard about Docker or other technologies and want to dive in, we often find a perception that microservices and containers are “basically two sides of the same coin,” and that you need one to do the other.

I’d tend to agree that containers nudge you in the direction of making your deliverables smaller, and so tend to move you away from large, monolithic applications (although I have also seen plenty of multi-gigabyte container images). However, the reverse is definitely a misconception, in my view: it’s perfectly possible to move towards a microservices architecture without using containers as the underlying runtime technology.

In fact, if you’re looking to “microservice-ize” existing applications and are not working in a greenfield environment, it may even make more sense to leave containers out of the picture. Sticking with your existing runtime platform (you can easily run tens or hundreds of microservice processes on a server without wrapping them in containers, after all!) takes a big variable out of the “change equation” and so reduces the risk to your project.
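
To make that alternative concrete, here is a minimal sketch of running a handful of microservices as plain OS processes; the jar names and ports are hypothetical, and the example assumes Spring Boot-style executable jars, but the same idea applies to any self-contained service process.

```bash
# Hypothetical example: three microservices started as ordinary OS processes,
# each listening on its own port, with no container runtime involved.
nohup java -jar order-service.jar --server.port=8081 > order.log 2>&1 &
nohup java -jar catalogue-service.jar --server.port=8082 > catalogue.log 2>&1 &
nohup java -jar payment-service.jar --server.port=8083 > payment.log 2>&1 &
```

In practice you would, of course, run these under whatever process supervision you already have in place.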

5. Handling dependencies

A definition of a microservice we often hear is an “independently-deployable unit,” and indeed it is good practice to design your microservices so they can start up successfully without requiring all kinds of other components to be available. But in the vast majority of cases, “no microservice is an island”: a single service may boot up and respond to certain API calls on its own, but to handle scenarios that are actually useful to the user, you typically need multiple services to be available and talking to each other.

For example, an order service should be able to start and tell you how many orders are open on its own, but if you actually want to simulate a user browsing through your catalogue, picking some items, completing a purchase and tracking an order through to completion, you’ll need a whole bunch of services to be running.

If you’re looking to implement microservices using containers, the available frameworks provide increasing levels of support for this. Indeed, Kubernetes, Marathon, Docker Compose, Docker Swarm and other container orchestration tools have been created largely to handle container dependencies and links.
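
As a rough illustration (not from the original post), here is a minimal Docker Compose sketch declaring the dependencies of the hypothetical order service from the example above; the image names are placeholders.

```yaml
# Minimal sketch: order-service depends on a catalogue service and a database.
version: "2"
services:
  catalogue-service:
    image: example/catalogue-service:latest   # placeholder image
  orders-db:
    image: postgres:9.4                       # backing store for orders
  order-service:
    image: example/order-service:latest       # placeholder image
    depends_on:
      - catalogue-service
      - orders-db
```

Note that depends_on only controls start-up order; it does not wait for a dependency to actually be ready to serve requests, which is one small example of the gap described next.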

Still, the state of the art in terms of runtime/microservice dependency management, and especially visualization, is way behind what we have for build-time dependencies (which, from what we’re seeing, is one of the reasons many of our users are interested in the dependency management features of XL Deploy). This is an area you will likely need to tackle yourself, at the very least by augmenting the capabilities of existing tools.

6. Beyond “hello world”

One of the main reasons why Docker in particular comes up in many of our recent conversations, above and beyond the general buzz, is that the “hello world” experience is truly great. Getting a sample application in your language of choice running in a container is a very simple, rewarding experience, and even taking the next steps of adding some tweaks is easy to get right.
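
For instance, the entire “hello world” experience can be as small as the following sketch (the base image and the hello.py script are illustrative placeholders, not from the original post):

```dockerfile
# A three-line image definition for a one-file sample application.
# The base image and hello.py script are illustrative placeholders.
FROM python:2.7-slim
COPY hello.py /app/hello.py
CMD ["python", "/app/hello.py"]
```

One docker build and one docker run later, your sample application is up and running, which is exactly the low barrier to entry that generates so much enthusiasm.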

However, getting real applications running in production in a container environment is a totally different matter, especially if you’re moving towards microservices. Not only is building your own PaaS (which is effectively what you’ll be doing) a hard engineering challenge, but there is also a whole set of process-related questions that need to be addressed.

I talked about what we consider to be the most important questions in a previous blog post (there’s overlap with some of the topics discussed here). Coming up with approaches to deal with them should be part of your microservices and container research.

Summary

In short, microservices and containers should definitely be on your tech research agenda (and we can hopefully help with some of the challenges you’ll encounter through the microservice- and container-related features available in XL Release and XL Deploy, as well as with our plugins for Kubernetes, Docker Compose, Docker and Ansible).

Before you decide to push ahead with any kind of adoption, ensure that you understand the challenges and are aware of the investment in time and resources that will be required, and that you have a genuine business need that justifies the effort and risk.

Editor’s note: This post was originally published in April 2015 and has been updated for accuracy and comprehensiveness.

About the Author

Andrew Phillips is the VP of DevOps Strategy at XebiaLabs. He is a DevOps thought-leader, speaker and developer.