Stop Scripting Your Deployments in Your CI Server

January 10, 2017

Continuous Integration tools like Jenkins and Bamboo are especially effective at building, unit testing, and validating your applications. In fact, with CI tools, teams can do almost anything they want with code. But do CI tools scale to application deployment?

The short answer is no. There are many steps required to deploy applications: removing servers from load balancers, stopping servers, updating web server content and application server binaries, running database updates, and restarting servers, to name a few. Deploying an application with a CI server basically means scripting out the entire deployment yourself, then using the CI tool to move your script to a target server and run it there. In an enterprise environment, it doesn’t take long for all of this scripting to turn into a convoluted mess.
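To make that concrete, here is a minimal sketch of the kind of hand-rolled deployment script this approach produces. The hostnames and the lb-ctl, app-ctl, and db-migrate commands are hypothetical stand-ins for whatever load balancer, packaging, and migration tooling your environment actually uses; none of this comes from a particular CI product.

```python
# Minimal sketch of a hand-rolled deployment script.
# Hostnames and the lb-ctl / app-ctl / db-migrate commands are hypothetical.
import subprocess

WEB_SERVERS = ["web-01.example.com", "web-02.example.com"]
DB_SERVER = "db-01.example.com"


def run_on(host: str, command: str) -> None:
    """Run a shell command on a remote host over SSH."""
    subprocess.run(["ssh", host, command], check=True)


def deploy(version: str) -> None:
    for host in WEB_SERVERS:
        # Take the server out of the load balancer and stop the application.
        run_on(host, f"lb-ctl drain {host}")                 # hypothetical LB tool
        run_on(host, "systemctl stop my-app")
        # Update web server content and application server binaries.
        run_on(host, f"app-ctl install my-app {version}")    # hypothetical installer

    # Run database updates once, against the database server.
    run_on(DB_SERVER, f"db-migrate my-app {version}")        # hypothetical migration tool

    for host in WEB_SERVERS:
        # Restart the application and put the server back into rotation.
        run_on(host, "systemctl start my-app")
        run_on(host, f"lb-ctl enable {host}")


if __name__ == "__main__":
    deploy("1.4.2")
```

Every change to the process, whether a new server, a different restart command, or an extra migration step, means editing scripts like this by hand, which is exactly the maintenance burden described below.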

But I Like to Script!

So maybe you’re fine with scripting. But are you OK with:

  • Maintaining all of those deployment scripts, especially if something changes?
  • Installing agents on every target machine that’s not a build server? A QA database or app server, for example, is unlikely to already be a build agent for your CI tool.
  • Having no cross-machine orchestration? CI servers, after all, are designed to run an entire build on one machine out of a large pool of possible machines. In contrast, application deployments need different steps to run on multiple, specific machines.
  • Lacking a suitable domain model for applications and deployments? Yes, you can see whether a job succeeded or not. But answering a simple, very common deployment question such as “which version of my application is running in which environment?” is non-trivial with most CI tools.

 

Continuous Delivery with a CI tool is like using workflows to scale deployments. Read “5 Reasons To Stay Away from Workflows” next!

 

Deploying with CI Tools

CI servers typically offer a couple of choices for deployments:

  1. Creating jobs for each application/environment combination (e.g., “MyApp to Dev,” “MyApp to Test”)
  2. Creating a parameterized job that accepts the application and/or environment as parameters

The latter is the more scalable of the two options, but how do you secure and configure it effectively? You don’t, because you can’t. The target servers on which the deployment needs to run, and the build agents on which the job executes, depend on the “target environment” parameter. Who has permission to run the job also depends on the target environment, but CI tools generally do not support security configuration of the form “this user is only allowed to run this job if the value of this parameter is X, Y, or Z.”
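To illustrate option 2, here is a hedged sketch of what such a parameterized deployment entry point might look like; the environment-to-server mapping and the deploy_to() helper are hypothetical placeholders rather than any CI tool’s API. Notice that the set of target servers is only known at run time, once the environment parameter has a value, which is exactly why a CI job’s static agent and permission settings can’t distinguish a dev deployment from a production one.

```python
# Sketch of "option 2": one parameterized deployment entry point.
# The environment map and deploy_to() are hypothetical placeholders.
import argparse

# Which servers a run touches is decided entirely by the "environment"
# parameter, so a job's static security settings cannot restrict it per environment.
ENVIRONMENTS = {
    "dev":  ["dev-app-01.example.com"],
    "test": ["test-app-01.example.com", "test-app-02.example.com"],
    "prod": ["prod-app-01.example.com", "prod-app-02.example.com"],
}


def deploy_to(host: str, application: str, version: str) -> None:
    # Placeholder for the real per-host deployment steps.
    print(f"deploying {application} {version} to {host}")


def main() -> None:
    parser = argparse.ArgumentParser(description="Parameterized deployment job")
    parser.add_argument("application")
    parser.add_argument("version")
    parser.add_argument("environment", choices=sorted(ENVIRONMENTS))
    args = parser.parse_args()

    for host in ENVIRONMENTS[args.environment]:
        deploy_to(host, args.application, args.version)


if __name__ == "__main__":
    main()
```

Invoked as, say, `python deploy.py my-app 1.4.2 test`, the same script serves every application and environment, which is what makes it attractive, and also what makes it hard to secure.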

So if you want security, you’re left with option 1. That’s right: building delivery pipelines with CI servers tends to mean one deployment job per application/environment combination. Changing your deployment logic, for example to support a new target server version, suddenly means modifying tens or even hundreds of jobs. With most CI tools, that is neither quick nor easy.

CI Tools Are Great, for CI

CI tools are proven workhorses for building, testing, and validating applications on the way to production. It’s time to appreciate these pipeline standards for what they’re great at, while recognizing that they aren’t made to automate deployment.

Fortunately, IT teams all over the world are doing just that. More and more of them are turning to deployment automation and release orchestration software, integrating their trusted CI tools into an efficient, scalable, and easily maintainable delivery process.

 

Interested in seeing what a Continuous Delivery tool looks like? Check it out; you might be surprised: XL Deploy & XL Release


Gino Toro

About the Author