10 Reasons You Can’t Scale DevOps with Configuration Management Tools

April 10, 2018

Provisioning and Configuration Management Tools

Provisioning and configuration management (CM) tools such as Terraform, AWS CloudFormation, Puppet, Chef, SaltStack, and Ansible are popular choices for infrastructure automation and configuration. These tools manage infrastructure and other components with scripts that seem simple at first. At scale, however, the scripts become complex and labor intensive, and domain knowledge is often lost as the people who wrote them move on to other work.

Application Release Automation (ARA) solutions are designed for a very different purpose: to automate the process of releasing your complete software packages (applications, infrastructure, and configurations) and deploying them to different environments in the enterprise release pipeline, from development to testing to staging to production.

Provisioning environments, orchestrating releases, and deploying applications are all closely related parts of the software delivery process, so you might think that you can use one tool to do it all. Unfortunately, provisioning and configuration management tools alone won’t give you the foundation you need for successful enterprise DevOps at scale. Why not?

1. They don’t help you orchestrate releases or manage dependencies

The ability to scale effectively requires the flexibility to release at the velocity of the business, whether that’s daily or hundreds or even thousands of times a month. This level of release acceleration requires repeatable release coordination with comprehensive dependency management. Unfortunately, these capabilities are outside the scope of environment provisioning and management tools. Delivering at scale requires an ARA solution to orchestrate releases and manage both technical and logical dependencies among applications and microservices.
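To make the dependency problem concrete, here is a minimal sketch (Python, with hypothetical service names) of the kind of dependency-aware deploy ordering an orchestration layer has to compute. With provisioning or CM scripts alone, this bookkeeping, plus the logical dependencies that sit on top of it, is yours to build and maintain.

```python
# Minimal sketch: compute a deploy order that respects technical dependencies.
# Service names and the dependency map are hypothetical examples.
from graphlib import TopologicalSorter

# service -> services that must be deployed before it
DEPENDENCIES = {
    "web-frontend": {"orders-api", "auth-service"},
    "orders-api": {"auth-service", "inventory-db"},
    "auth-service": set(),
    "inventory-db": set(),
}

def release_order(deps):
    """Return a deploy order in which every dependency comes first."""
    return list(TopologicalSorter(deps).static_order())

if __name__ == "__main__":
    for step, service in enumerate(release_order(DEPENDENCIES), start=1):
        print(f"{step}. deploy {service}")
```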

2. Scripting doesn’t scale

Deploying applications with provisioning or CM tools requires constantly writing and testing scripts, customizing and configuring them for different applications and environments, and updating them every time something changes. This endless custom scripting costs time and money in both the short run and the long run, and it inevitably leads to quality problems as maintenance becomes ever more complex.
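To illustrate, here is a deliberately toy deploy script (Python, with hypothetical application and environment names) showing how environment-specific branches creep in. Multiply this across dozens of applications, environments, and one-off exceptions, and the maintenance cost compounds.

```python
# A deliberately simplified sketch of how hand-rolled deploy scripts accumulate
# per-application and per-environment special cases. All names and settings
# here are hypothetical; the point is that every new app or environment adds
# branches that someone has to test and maintain.
def deploy(app: str, env: str) -> None:
    if env == "dev":
        replicas, heap = 1, "512m"
    elif env == "staging":
        replicas, heap = 2, "1g"
    elif env == "prod":
        replicas, heap = 6, "4g"
    else:
        raise ValueError(f"unknown environment: {env}")

    if app == "orders-api" and env == "prod":
        # one-off exception added during an incident, never removed
        heap = "6g"

    print(f"deploying {app} to {env}: replicas={replicas}, heap={heap}")

if __name__ == "__main__":
    deploy("orders-api", "prod")
```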

RELATED READING: 11 Black Holes of DevOps – How Not to Get Lost in Outer Space

3. No support for advanced deployment patterns

The larger and more complex your environments are, the more benefit you’ll see from advanced deployment approaches such as blue-green deployments, canary deployments, and rolling or dark releases. Implementing these patterns with a provisioning or CM tool requires additional scripting effort, which increases exponentially if you want to support more than one approach. Plus, you’ll face more complicated maintenance in the future.
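For a sense of what even the simplest of these patterns involves, here is a minimal blue-green cutover sketch (Python, assuming a hypothetical router abstraction and health check). A production-grade version also has to handle connection draining, data migrations, and rollback, and each additional pattern you support multiplies the scripting.

```python
# Minimal blue-green cutover sketch. Router, healthy(), and deploy_to_idle are
# hypothetical stand-ins for your load balancer or service mesh API.
import time

class Router:
    def __init__(self):
        self.active = "blue"

    def switch_to(self, color: str) -> None:
        print(f"routing production traffic to {color}")
        self.active = color

def healthy(color: str) -> bool:
    # stand-in for real health checks against the idle pool
    return True

def blue_green_deploy(router: Router, deploy_to_idle) -> None:
    idle = "green" if router.active == "blue" else "blue"
    deploy_to_idle(idle)                 # release the new version to the idle pool
    for _ in range(3):                   # verify before taking traffic
        if healthy(idle):
            router.switch_to(idle)       # instant cutover; old pool kept for rollback
            return
        time.sleep(5)
    raise RuntimeError(f"{idle} pool never became healthy; traffic stays on {router.active}")

if __name__ == "__main__":
    blue_green_deploy(Router(), deploy_to_idle=lambda c: print(f"deploying build to {c} pool"))
```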

4. Intelligent and automated rollback is mandatory for success

When a deployment fails, you need a tool that can take action, fast. Provisioning and CM tools don’t offer automated rollback, and it’s nearly impossible to create and maintain custom scripts that will reliably undo advanced deployments when things go wrong. You need a solution that can automatically roll back from a failure at any point in the deployment process.
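The sketch below (Python, with hypothetical deploy and smoke-test hooks) shows the basic shape of that logic: detect a failure at any step and restore the last known-good version. Hand-rolling this reliably for blue-green or canary deployments, across every application and environment, is far harder than the toy example suggests.

```python
# Minimal rollback-on-failure sketch. deploy() and smoke_test() are hypothetical
# hooks; real "undo" for advanced patterns also means re-routing traffic and
# restoring state, not just reinstalling the previous build.
def deploy(version: str) -> None:
    print(f"deploying {version}")

def smoke_test(version: str) -> bool:
    return version != "2.0.0"  # pretend the new build fails its checks

def deploy_with_rollback(new: str, previous: str) -> None:
    try:
        deploy(new)
        if not smoke_test(new):
            raise RuntimeError(f"smoke tests failed for {new}")
    except Exception as err:
        print(f"deployment failed ({err}); rolling back to {previous}")
        deploy(previous)
        raise

if __name__ == "__main__":
    try:
        deploy_with_rollback(new="2.0.0", previous="1.9.3")
    except Exception:
        pass  # failure already handled by the rollback above
```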

5. You need visibility into your real-world process

Releasing an application doesn’t just mean provisioning infrastructure and deploying software; most enterprises also have manual steps and business tasks that can’t be automated. Provisioning and CM tools don’t account for all of the work that makes up the release pipeline, which means they don’t represent your real-world process.

6. Risk visibility is crucial for all types of stakeholders

The status of a release, and the chance that it’s going to fail, isn’t just a technical concern. Stakeholders across the business need to see release status at a glance and receive proactive alerts when releases are at risk of failing. Provisioning and CM tools are designed to provide status information that’s aimed at technical users and that covers only a subset of the complete delivery pipeline. These tools do not proactively notify stakeholders when there’s a danger that something will go wrong, instead reporting only failure conditions. Why fail when you can avoid it?

7. Reports from provisioning and CM tools contain only infrastructure data

Of course, provisioning and CM tools collect data in reports that let you analyze your processes. But that data is inherently limited to the provisioning phase of the pipeline, when what you need are reports that visualize the entire release process and show you where you are in it. Ideally, reports shouldn’t just present raw data; they should also be tailored to an individual’s role, providing insight that helps individuals and teams improve efficiency and speed by focusing on measurable DevOps goals.

8. Compliance data collection should be a no-brainer

When the tools in your pipeline collect compliance data automatically and present it in an easy-to-read and meaningful manner, meeting compliance requirements and providing information to auditors become easy tasks. Collecting information beyond basic compliance data with a provisioning or CM tool requires you to explicitly identify the data you want, and there’s no guarantee it will be usable for non-technical team members.

9. Access control should be easy to configure

As development teams are empowered to deploy applications as part of their Continuous Delivery process, intuitive user management with granular access control is critical to ensure that code that doesn’t belong in production doesn’t end up there. Provisioning and CM tools require access control to be defined in code, which means that initial setup, consistency across projects, and significant ongoing maintenance all land on already overburdened technical staff.
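By way of contrast, here is a minimal sketch (Python, with hypothetical roles and environments) of the role-to-environment policy that has to be expressed, tested, and kept consistent in code when access control lives in your provisioning or CM scripts.

```python
# Minimal role-based access control sketch for deployments. The role ->
# environment mapping and user names are hypothetical examples.
PERMISSIONS = {
    "developer": {"dev", "staging"},
    "release-manager": {"dev", "staging", "prod"},
}

def can_deploy(role: str, env: str) -> bool:
    return env in PERMISSIONS.get(role, set())

def deploy(user: str, role: str, env: str) -> None:
    if not can_deploy(role, env):
        raise PermissionError(f"{user} ({role}) may not deploy to {env}")
    print(f"{user} deploying to {env}")

if __name__ == "__main__":
    deploy("alice", "release-manager", "prod")   # allowed
    try:
        deploy("bob", "developer", "prod")       # blocked: not authorized for prod
    except PermissionError as err:
        print(err)
```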

10. You need standardized processes to scale up

Provisioning and CM tools depend on customized scripts that you have to write and maintain. As a result, using them for application deployment inevitably leads to one-off scripts that don’t scale across the organization, with no release orchestration to help you manage disparate processes. Delivering repeatable, scalable pipelines for releasing and deploying applications allows you to scale your DevOps initiatives, reuse pipelines, and ensure you’re collecting the data you need for compliance and audit purposes.

Application Release Automation brings together all of the steps and tools in your software delivery cycle to accelerate delivery and provide the enterprise-level scalability, reusability, and standardization that your business requires.

Rob Stroud

About the Author

Rob Stroud is Chief Product Officer for XebiaLabs and a recognized industry thought leader in DevOps and Continuous Deployment. Before XebiaLabs, Rob was Principal Analyst for Forrester Research, Inc., where he helped large enterprises drive their DevOps transformations and guided them through organizational change. As VP Strategy and Innovation for IT Business Management for CA Technologies, Rob developed the strategy and product portfolio for products within multi-billion dollar markets.