Building software is an extremely creative, highly repetitive process that generates tremendous amounts of data begging for machine learning. To maximize the creativity, you have to optimize the repetition.
“It’s all about IT.”
“We want to become a software company.”
“We believe in an Engineering Culture.”
These are just some of the quotes being used by CEOs and CTOs at big financial, insurance, and telecom companies today. If you go to the DevOps Enterprise Summit, or any other DevOps conference, you will hear things like this said time after time. And these people are right––if you don’t become a software company, it’s better to sell the company today and share the profit amongst all employees.
Lean and Lean principles are taken very seriously in these companies when it comes to the processes where humans are involved in the customer journey––measurements for throughput times, satisfaction, and “first time right” are commonplace. However, when we look at IT in these companies, we go from Lean to Agile, and in most cases there is a lack of data, or worse, a lack of a scientific approach to improving the IT process.
The lack of a data-driven approach
The market is overhyping Artificial Intelligence, claiming AI is the new black. Yet, most companies have yet to start leveraging AI to improve IT delivery processes. In fact, we barely use our data for basic reporting and insights. While companies might have some isolated insights in place, end-to-end insight into the IT delivery process is missing.
But… “We’re always prototyping”
This is a false argument, and it has been for a while. You’re no longer prototyping: even if you’re only using Jenkins for software delivery, you’re already running a repetitive process.
For example, the 2018 Accelerate State of DevOps Report states that the teams it categorizes as “Elite” have an extremely repetitive process in place. Whether you like it or not, you’re no longer prototyping; you’re part of a software factory.
The speed of your factory
An important question to ask yourself is, ‘how much value am I creating, and how much time and effort do I really spend on the creation of user value?’
Take the example of a car factory. Value there means delivering high-quality cars, time after time, in a repeatable, predictable process. What are the non-value-adding activities in a car factory? Things like cleaning the machines and designing new robots. I’m not saying these activities are meaningless, but if 100% of your capacity is spent on them, you will go bankrupt without producing a single car. The key is to find balance and maximize the value-adding activities.
The value in your DevOps team is in creating software that makes an impact in the hands of the user. What activities are non-value-adding? Setting up your development toolchain, keeping your GitLab and Kubernetes scripts up to date, building the same deployment scripts time after time, just like another team one floor above you. Again, I’m not saying these activities are useless, but it’s all about balance.
Finding the right balance
How do you find a balance when all of these tools add value to your software delivery process? Automation is key, but if you automate everything, you end up like Tesla’s early Model 3 production lines, where, as Elon Musk himself admitted, excessive automation was a mistake because it removed the human involvement crucial for success. So, the challenge now becomes, how can you gain better insights with the help of Machine Learning and AI?
The approach toward Predictive DevOps
1. Data collection: It’s obvious, but predictive DevOps is all about data. There is no machine learning without data and no AI without machine learning. So, harvesting the data from all DevOps tools in your toolchain is the most important step to take.
2. Data context creation: Data without context is useless. Creating the context for the data you collect is vital: which data belongs to your tickets, pull requests, teams, Continuous Integration runs, and so on. More importantly, context is identifying how everything relates to your releases in terms of pushing new features to customers. Then, you need to train various ML models to figure out what to do with it all.
3. Real-time interpretation and impact determination: This is advice given at run time, predicting upfront whether a release will fail and which activities caused the failure.
4. Data-driven actions: With models getting better, data becoming richer, and more process automation, it’s time to automate decision making. For example, not allowing a release to start if you discover an unhealthy development pattern. Or advising, and taking action, when a delivery is about to be missed.
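The four steps above can be sketched end to end in a toy example. Everything here is illustrative: the feature names, the synthetic release records, and the hand-rolled logistic-regression model are assumptions for the sketch, not a real toolchain integration.

```python
import math

# Steps 1 + 2: data harvested from the DevOps toolchain and put into
# context. Each record is one past release: (commits, failed CI runs,
# reopened tickets) plus a label, where 1 means the release failed.
# These records are synthetic, for illustration only.
releases = [
    ((40, 6, 5), 1),
    ((35, 5, 4), 1),
    ((50, 7, 6), 1),
    ((10, 0, 0), 0),
    ((12, 1, 0), 0),
    ((8, 0, 1), 0),
]

def sigmoid(z):
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

# A minimal logistic regression trained with stochastic gradient descent
# stands in for the ML models mentioned in step 2.
weights, bias, lr = [0.0, 0.0, 0.0], 0.0, 0.05
for _ in range(2000):
    for features, label in releases:
        pred = sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)
        error = pred - label
        weights = [w - lr * error * x for w, x in zip(weights, features)]
        bias -= lr * error

def release_risk(features):
    """Step 3: predict upfront how likely a release is to fail."""
    return sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)

def allow_release(features, threshold=0.5):
    """Step 4: a data-driven gate that blocks a risky release."""
    return release_risk(features) < threshold

risky = release_risk((45, 6, 5))  # resembles the failed releases
safe = release_risk((9, 0, 0))    # resembles the healthy releases
```

In practice you would replace the hand-rolled model with a proper ML library and the feature tuples with real toolchain data, but the shape of the pipeline stays the same: collect, contextualize, predict, gate.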
ML and AI: Scary or helpful?
Machine learning and Artificial Intelligence have already changed entire industries. Think about Alibaba––the world’s largest e-commerce platform––or your recommendations on Amazon and eBay. Machine learning and AI are now being used as an extension of human beings.
Is it scary? Not at all. It’s about time that we start embracing the smart processing and interpretation of the vast amount of data being generated from our IT processes to improve them. Welcome to the new era of producing software.