Data, not Docker: Why Metrics Are the Key to CD

November 25, 2014

One of the readers of a previous blog post left a nice comment:

Kudos! This is the very first article on continuous delivery that I’ve read that included the following caveat: “Have a real business reason for investigating release automation and other changes to your delivery process. Just wanting to experiment with new technology is not enough.” If only other “DevOps” bloggers would think critically before simply pushing their company’s latest continuous delivery “solution”.

My response to that:

Thank you! Sorry, must go and test this new DevOps Screwdriver I’ve just been sold that will solve all my problems…

Seriously, though: I’m glad to see we’re getting more and more friendly pushback, especially from higher levels of management in enterprises, about proving that all these concepts and methodologies we’re so enthusiastic about actually work.

We know from analysis of all kinds of non-linear processes that complex dependencies mean that the real bottlenecks often occur in unexpected places. So the idea that we can eyeball the real bottlenecks in this specific non-linear process, i.e. the releases/delivery pipelines in a modern IT environment, and “just go away and write some scripts before wiring it all up with this CI server” seems hugely naïve to me.

That’s why collecting and analysing data about what actually happens in your releases and pipelines is a key focus of our tools, especially for XL Release. It may be “boring” and not as attention-span-grabbing as Look Here We Support Docker Too Now!! (we do, actually ;-)), but it’s what you really need to make your CD initiatives successful at any kind of scale.
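To make the idea concrete: even a simple analysis of per-stage timings across past releases can reveal where the pipeline actually spends its time. The sketch below is purely illustrative — the stage names and durations are made-up sample data, not output from any real tool — but it shows the kind of question the data lets you answer instead of eyeballing.

```python
from statistics import mean

# Hypothetical timing data (stage -> durations in minutes) collected
# from past release pipelines. Names and numbers are illustrative only.
stage_durations = {
    "build": [12, 14, 11, 13],
    "integration-tests": [45, 50, 48, 52],
    "security-review": [240, 1440, 480, 960],  # often waits on a person
    "deploy-to-prod": [8, 9, 7, 10],
}

# Average time spent in each stage, sorted worst-first: the stage with
# the largest mean is a candidate bottleneck worth investigating.
averages = {stage: mean(times) for stage, times in stage_durations.items()}
for stage, avg in sorted(averages.items(), key=lambda kv: -kv[1]):
    print(f"{stage:20s} {avg:8.1f} min")
```

Note that in this (fabricated) data set, the bottleneck isn't the build or the deployment scripts at all — it's a manual review gate, which is exactly the kind of unexpected result that eyeballing tends to miss.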


About the Author

Andrew Phillips is the VP of DevOps Strategy at XebiaLabs. He is a DevOps thought-leader, speaker and developer.