How we fixed our broken deployment process

Ryan Rousseau

I can be found IRL at the Octopus Deploy table


I can be found on Twitter @ryanrousseau


Slides can be found at https://broken.rousseau.dev/deliveryconf/


Check #DeliveryConf on Twitter

The year is 2013

My team builds a web application spanning VB6, classic ASP, and ASP.NET

Our deployment process is broken

It's not going great

Our deployments

Mostly manual (deploy by Word doc), but there are some scripts for copying files to servers

Different steps for different environments

They take too much time - usually multiple hours

Non-production woes

"Hey, can you refresh the demo system for me?"

Delays in testing new changes and bug fixes

No two environments are the same, and the state of each environment is always in question

Production woes

Customers know when we deploy

Because stuff breaks a lot

Deploying to production is nearly a full-time job

Did I mention?

We deploy twice a week (scheduled)

or for anything considered a "Work Stoppage"

Something has to change

Process changes

Slow down!

Big changes are only released monthly

More oversight on labeling issues as "Work Stoppage"

Dallas Day of .NET

I attended a session by Jeffrey Palermo on Iteration Zero

PSake: PowerShell Make

Book: Configuration Management Best Practices

I'll automate our deployments with PSake!

Friend: wait, check out this tool I found (Octopus Deploy)

Downloaded the trial and had a web deployment to a test environment in about two hours

Full application deployment from Development to Production in two weeks

Bad Practice: Manual Web Deployments

Error-prone, unreliable, not repeatable

Average of 1-2 hours per deployment, sometimes 8 hours when troubleshooting was needed

Few people knew how to perform the deployment

How we fixed it

Created a PowerShell script to automate the deployment and configuration of our apps

Updated our builds using PSake to create the artifacts Octopus needed

Configured Octopus to deploy our applications to our infrastructure
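The build side of that fix can be sketched as a small psake script. This is a minimal illustration, not the team's actual script: the solution name, package id, and paths are assumptions, and `octo pack` is the Octopus CLI's packaging command.

```powershell
# default.ps1 -- a minimal psake build sketch (hypothetical names throughout)
properties {
    $configuration = "Release"
    $outputDir     = ".\artifacts"
}

task default -depends Package

task Clean {
    # Start from a clean artifacts folder
    if (Test-Path $outputDir) { Remove-Item $outputDir -Recurse -Force }
}

task Compile -depends Clean {
    # exec fails the build if the external command returns a non-zero exit code
    exec { msbuild .\WebApp.sln /p:Configuration=$configuration }
}

task Package -depends Compile {
    # Produce the package that Octopus picks up and deploys
    exec { octo pack --id WebApp --version 1.0.0 --basePath .\WebApp\bin --outFolder $outputDir }
}
```

The script runs with `Invoke-psake .\default.ps1`, and the `-depends` chain gives every build the same Clean, Compile, Package order.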

Bad practice: branch per environment

Development

Staging

Production

But why was this bad?

Different builds per environment

Hotfixes had to be merged backwards from Production -> Development

We were doing a lot of cherry pick merges

Merge conflicts were super fun

Environment configuration

Updating config files at build time

Passwords in source control

How we fixed it

Removed Staging branch

Development and Production branches only

Everything gets deployed or nothing gets deployed

Octopus applied configuration at deploy time
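The idea behind deploy-time configuration: ship one artifact with placeholders, and fill in environment-specific values during the deployment. Octopus has a built-in variable-substitution feature for this; the sketch below shows the equivalent done by hand in a deployment script, with a hypothetical variable name. `$OctopusParameters` is the variable dictionary Octopus supplies to deployment scripts.

```powershell
# Sketch of deploy-time configuration in an Octopus deployment script.
# "WebApp.ConnectionString" is a hypothetical project variable, scoped
# per environment in Octopus -- no passwords in source control.
$connectionString = $OctopusParameters["WebApp.ConnectionString"]
$installDir       = $OctopusParameters["Octopus.Action.Package.InstallationDirectoryPath"]

# Replace the placeholder in the deployed config with the real value,
# so the same build artifact works in every environment
$configPath = Join-Path $installDir "Web.config"
(Get-Content $configPath) -replace '#\{ConnectionString\}', $connectionString |
    Set-Content $configPath
```

Because the value is resolved at deploy time, the build that passed testing is byte-for-byte the build that reaches Production.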

Bad Practice: Manual Database Deployments

Scripts, schema changes, and stored procedures were usually run via a cmd file

Each change had its own cmd file or script to run

Why manual DB deployments were worse than manual web deployments

Sometimes the order of the scripts mattered, and that order wasn't always followed in each environment

The release manager had little context for these scripts in the event they failed

Release manager (and others) had write access to production database

How we fixed it

We incorporated a database migration tool, FluentMigrator

Run order was defined in the migration scripts

Only needed to run one command for a release instead of many commands

Most people (including me) had production write access revoked
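With FluentMigrator, each migration class carries a version attribute (e.g. `[Migration(20130401120000)]`), so the runner applies pending migrations in version order, identically in every environment. The release then collapses to one runner invocation. This is a sketch only: the assembly name and connection string are made up, and the flag spellings vary between runner versions, so treat them as placeholders and check the runner's help.

```powershell
# One command per release instead of one command per script.
# Flags shown (-c connection, -p provider, -a assembly, -t task) are
# indicative of the FluentMigrator console runner; verify against your version.
.\tools\Migrate.exe -c "Server=db01;Database=App;Trusted_Connection=True" `
                    -p sqlserver2008 `
                    -a ".\App.Migrations.dll" `
                    -t migrate
```

Since the runner records which migrations have already been applied, rerunning the same command is safe, and the release manager no longer needs to know anything about the individual scripts.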

Some obstacles along the way

Change is hard

Legacy deployment steps

Personalities and buy-in

The results

Deployment times down to 10-15 minutes

Trained a team of release engineers

Four scheduled deployments a week

And customers couldn't tell (except for the new features)

Thank you!


@ryanrousseau

https://broken.rousseau.dev/deliveryconf/


Background Photo by Zo Razafindramamba on Unsplash