Time to Invest in Deployment By @Itransition | @DevOpsSummit [#DevOps]

You can’t be sure that every one of your app deployments will be smooth sailing

Why Now Is the Right Time to Invest in Deployment Automation

Isn't it great to treat your girlfriend by cooking her favorite omelet every morning? In theory, sure, but in reality, chances are most of the time you end up with darn scrambled eggs instead. Let's face it: you're a great boyfriend but a terrible cook. Believe it or not, this is quite analogous to app deployment. You can't be sure that every one of your app deployments will be smooth sailing; every now and then you will mess up a thing or two (or a dozen) along the way.

Be it a critical urge to rapidly roll back to a previous release or the inability to find the phone number of that one guy responsible for deployment, the opportunities for things to go terribly wrong are endless. As a rule, there are two reasons behind your worst nightmares coming true:

  • You're good at development but operations isn't your strong suit.
  • You're not using deployment automation.

In this article, we'll focus on the second point - deployment automation.

Automating the software deployment process for .NET has pretty much become a 'no-brainer' during the past few years. New tools have made it extremely easy to make deployments faster and less risky, at costs tending toward zero. Not using automated deployment in 2014 is like not using source control: it's possible to live without it, but having it in place keeps you safe while requiring very little effort. Yet many still resist, put off by the perceived hassle of creating, configuring and maintaining automated deployment. And they're really missing out.

There are many reasons to start investing in deployment automation for .NET, including a drastic increase in deployment success rates and frequencies, but most importantly, it's good for business. Here's why:

Stable Manual Deployment Is a Utopia
Let's have a brief look at what it usually takes to deploy an ordinary application:

  1. Checking out the version of the source code that you want to deploy (e.g., the latest commit of the /Release_01 branch);
  2. Building the solution with the appropriate settings applied;
  3. Transforming the configuration appropriately;
  4. Publishing/packaging the new version;
  5. Stopping the application/tuning the load balancer so that users don't hit the app in the middle of the deployment;
  6. Backing up the database;
  7. Updating the database structure/data;
  8. Removing old files (but keeping some, e.g., the /Uploads folder);
  9. Copying the new version to the production server;
  10. Setting appropriate ACL permissions/other environment settings;
  11. Deploying dependencies recursively;
  12. Starting the application;
  13. Executing health checks.

That's only the basic list; of course, every application is unique and has a slightly different process (so your app may have additional steps or not require some of the aforementioned ones, but most deployments are similar).
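
To make this concrete, here is a minimal sketch (in C#, though any scripting language works) of what wiring these steps into a single repeatable script might look like. Every command, path and branch name below is a hypothetical placeholder, and in practice you would more likely lean on a dedicated deployment tool or your CI server rather than roll your own:

    // Minimal sketch of a scripted deployment pipeline. All commands, paths
    // and names are hypothetical placeholders for illustration.
    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    class DeployPipeline
    {
        static void Main()
        {
            // Each step is a name plus an action; the first failure aborts the
            // run, so later steps never execute against a half-deployed system.
            var steps = new List<KeyValuePair<string, Action>>
            {
                Step("Check out release branch",
                     () => Run("git", "checkout Release_01")),
                Step("Build in Release configuration",
                     () => Run("msbuild", "App.sln /p:Configuration=Release")),
                Step("Back up database",
                     () => Run("sqlcmd", @"-Q ""BACKUP DATABASE App TO DISK='D:\Backups\App.bak'""")),
                Step("Copy new version to production server",
                     () => Run("xcopy", @"bin\Release \\prodserver\app /E /I /Y")),
                Step("Run health checks",
                     () => Run("powershell", "-File healthcheck.ps1")),
            };

            foreach (var step in steps)
            {
                Console.WriteLine("==> " + step.Key);
                step.Value();
            }
            Console.WriteLine("Deployment finished successfully.");
        }

        static KeyValuePair<string, Action> Step(string name, Action action)
        {
            return new KeyValuePair<string, Action>(name, action);
        }

        static void Run(string fileName, string arguments)
        {
            using (var process = Process.Start(fileName, arguments))
            {
                process.WaitForExit();
                if (process.ExitCode != 0)
                    throw new Exception(fileName + " " + arguments +
                                        " exited with code " + process.ExitCode);
            }
        }
    }

The point is not this particular script; it is that once the sequence lives in code, it is versioned, reviewable and identical on every run.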

All those steps may seem easy enough to perform manually without anything going wrong. Well, day-to-day experience says that deployments executed manually do go wrong from time to time, mainly due to our nature. Humans (particularly creative people like developers) are not very good at performing routine, repetitive tasks; that's why we have computers. Here are a few examples of some of the most avoidable manual deployment errors I've seen:

  • Checking out the version of the source code that you want to deploy:
      ◦ Ever had the /Dev branch deployed to the production environment just because the guy who executed the deployment was sure he had switched away from /Dev when, in actuality, he hadn't?
      ◦ Or even worse: getting the /Production branch deployed with local intermediate changes that were never checked into source control (it's so easy to forget about your local changes when you have to deploy a hotfix), so that no one actually understands why the application misbehaves. (I've seen people decompiling a production .dll with .NET Reflector to understand what is in there, because the production code differed from every version in the repository.)
  • Building the solution with the appropriate settings applied:
      ◦ It's so easy to forget to switch Visual Studio to the Release configuration before you build the source code. And it's so sad to find out that your app is too slow in production because of a DEBUG compilation.
  • Transforming the configuration appropriately:
      ◦ There's not much joy in discovering that your production application uses a development database after a new release, when "data loss" is reported by end users (somebody forgot to replace the connection string in web.config). In general, manual configuration transformation is a bad idea because it usually isn't versioned: developers have to figure out how the production configuration files differ from the dev versions and "merge" them manually every time, so nobody is actually sure what the production configuration is or when/why it was changed. "What's the requestTimeOut in production?" "Hmm, I see it's 300 seconds now, but it was around 100 last week. Who changed it?" (One way to make this step reproducible is shown in the sketch after this list.)
  • Backing up the database:
      ◦ Discovering you forgot to back up the production database just at the moment your migration script fails, corrupting the data.
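
One way to take the guesswork out of the configuration step in particular is to keep the production deltas in a versioned transform file and apply them mechanically. Below is a minimal sketch using the Microsoft.Web.XmlTransform library (the engine behind Visual Studio's web.config transforms, shipped as the Microsoft.Web.Xdt NuGet package); the file names are illustrative, not prescriptive:

    // Generate the production config from a versioned XDT transform instead of
    // hand-editing it on the server. File names here are examples only.
    using System;
    using Microsoft.Web.XmlTransform; // Microsoft.Web.Xdt NuGet package

    class ConfigTransform
    {
        static void Main()
        {
            var doc = new XmlTransformableDocument { PreserveWhitespace = true };
            doc.Load("Web.config"); // the checked-in base configuration

            var transform = new XmlTransformation("Web.Production.config"); // versioned delta
            if (!transform.Apply(doc))
                throw new InvalidOperationException("Configuration transform failed");

            doc.Save(@"deploy\Web.config"); // the file that actually ships
            Console.WriteLine("Production configuration generated.");
        }
    }

Because the transform file lives in source control, "what is the requestTimeOut in production and who changed it?" becomes a question your version history can answer.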

You get the idea. I have no doubt you've encountered some of the aforementioned errors and can easily add a ton of others. The key problem with these errors is the fact that it's very easy to make any of them but very hard to get to the bottom of them. As a rule, it takes a lot of effort and nervous hours to detect and fix them.

Therefore, you are forced to have either a deployment document (checklist) or a special "deployment" guy on your team (or both). Each of these approaches has major drawbacks:

  • Deployment checklists are often outdated (developers are generally bad at maintaining documentation, and for a reason: maintaining docs is boring). Moreover, new team members still need to undergo deployment training when they join the project (and that takes time).
  • Your team's bus factor is 1. No new version/hotfix can be deployed if your "deployment" guy is on vacation.

These issues will not occur if deployments are done automatically, because computers are good at repetitive tasks (humans are not).

Time Is Money: Gain Both
If you'd rather have a million dollars straight away than a penny doubled every day for one month, you should revisit your math: the penny leaves you more than $10M richer ($0.01 × 2^30 ≈ $10.7M after a month of daily doubling). The story is exactly the same when investing in deployment automation. In addition to streamlining release operations, you're gaining profit and saving resources; in other words, you're increasing ROI.

In my experience, carefully going through a deployment process manually takes an average developer at least an hour, even on a small project. In an agile environment, you usually want frequent deployments to QA/UAT platforms so features are delivered and validated quickly; with 3-4 QA deployments per week, manual deployment costs you at least 12-16 hours per month. On the other hand, configuring automated deployment for a simple project rarely takes more than 16 hours, and deployment itself is then just a click away. It truly is as simple as that: automation wins even within a tight time frame. Now add in the time needed to train new developers to do the deployment, plus the time spent troubleshooting deployment errors. It turns out you can save about 25-40 hours per month with automation, which translates to roughly $500-800.

Another important aspect is that overall team performance increases: developers no longer need to be pulled away from new features whenever QA needs a new build delivered for validation. This, in turn, means greater flexibility.

Take a look at this graph from McConnell's Software Project Survival Guide:

The longer it takes from introducing an error to detecting it, the more time (and money) it takes to fix. Being able to deploy without a developer's involvement means much more frequent deployments, which means much quicker error detection (i.e., it allows your team to move even faster!).

It's also about making errors less risky, which again means higher deployment frequency. And higher frequency entails faster feedback from testers and end users.

But there's more. Done intelligently, deployment automation also builds automated reporting into the process, which means virtually no effort or money spent complying with audit requirements, and a much lower likelihood of failing an audit.

Sound too good to be true? Here's a summary of what our team achieved in terms of ROI when we introduced deployment automation in one of our .NET projects:

Overall efforts: 90 man-months
Duration: 20 months

                          Manual Deployment             Automated Deployment
Deployment time           2 man-hours per deployment    16 man-hours (one-time implementation)
Deployment frequency      Twice a week                  Once every day
Deployment error rate     10%                           2%
Deployment cost           16 man-hours/month            16 man-hours (one-time)

Overall benefits:

  • $6,400 (320 man-hours) saved
  • 200% increase in deployment frequency
  • 5x decrease in deployment errors

And that's just a simple case. For complex projects and projects with high deployment frequencies, teams can save more than a thousand hours per year.

New Opportunities Made Possible By Deployment Automation
I am a strong advocate of deployment automation for two main reasons:

  • It saves developers from routine error-prone tasks (which saves money).
  • It opens up several new opportunities that can make a great difference to your project's overall success.

I personally think the second reason is the more important one for business, because of the following:

  • Integration tests as part of your daily workflow:

Everybody knows it is cost-effective to automate regression checks, and nowadays we have plenty of good tools to help implement integration tests. A solid suite of UI regression tests is crucial for the sustainable development of long-running projects; otherwise, in a year or two, you reach a point where you can't add new features because you're in a constant rush fixing the old ones. Automated deployment is required to run these tests frequently and in an automated fashion. I recommend running them throughout the day (ideally, for every check-in): this saves money and helps you keep to the schedule, because you detect errors within an hour of introducing them (see the graph above).
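
As a hedged illustration, a post-deployment smoke/integration test can be as simple as the NUnit sketch below, which a CI server would run against the freshly deployed environment; the host name and the /health endpoint are hypothetical placeholders:

    // Post-deployment smoke tests, run against the live QA/UAT environment
    // after every automated deployment. Assumes NUnit; the host name and the
    // /health endpoint are hypothetical placeholders.
    using System.Net;
    using NUnit.Framework;

    [TestFixture]
    public class PostDeploymentSmokeTests
    {
        private const string BaseUrl = "http://uat.example.com";

        [Test]
        public void HomePageRespondsWithOk()
        {
            Assert.AreEqual(HttpStatusCode.OK, GetStatus(BaseUrl + "/"));
        }

        [Test]
        public void HealthCheckEndpointRespondsWithOk()
        {
            Assert.AreEqual(HttpStatusCode.OK, GetStatus(BaseUrl + "/health"));
        }

        private static HttpStatusCode GetStatus(string url)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                return response.StatusCode;
            }
        }
    }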

  • Provide visibility with nightly builds:

Everybody wants visibility in agile development. The sooner you get something done and put it in front of product owners/target users, the sooner you understand what they actually need from the software (eliminating the "you built exactly what I asked for, but it is not what I need" situation). A great way to provide visibility at almost no cost is to set up a nightly build that deploys the current development version to a test environment just for reference. That way, everybody who is interested (stakeholders, managers, beta users, etc.) can see what has been built on a daily basis.

  • DevOps:

If you want your team to get into the DevOps world, be ready to invest in deployment automation. Developers tend not to like ops tasks, but they love automating stuff (that's why we automate email replies and program our coffee makers to run every morning). So the boring task of configuring a new UAT platform becomes a challenging, exciting one when the goal is to automate it.

  • Continuous deployment:

Continuous deployment, where every change that passes automated testing is deployed to production automatically, has been gaining traction over the last few years, with major tech companies such as WordPress, Google, Facebook and Amazon adopting it.

Figure: the standard workflow with automated acceptance (integration) tests employed.
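
In its simplest form, the production gate is just "run the acceptance suite; deploy only if it passes." A minimal sketch, where the test runner invocation and the deploy script are hypothetical placeholders standing in for whatever your CI server provides:

    // Sketch of a continuous-deployment gate: the new version is deployed only
    // when the automated acceptance suite passes. Runner and script names are
    // hypothetical placeholders.
    using System;
    using System.Diagnostics;

    class ContinuousDeploymentGate
    {
        static void Main()
        {
            if (Run("nunit-console.exe", "AcceptanceTests.dll") == 0)
            {
                Console.WriteLine("Acceptance tests passed; deploying.");
                Run("deploy.cmd", "production"); // hypothetical deployment script
            }
            else
            {
                Console.WriteLine("Acceptance tests failed; deployment blocked.");
            }
        }

        static int Run(string fileName, string arguments)
        {
            using (var process = Process.Start(fileName, arguments))
            {
                process.WaitForExit();
                return process.ExitCode;
            }
        }
    }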

The bottom line: these days there is no excuse not to use automated deployment on .NET web projects. The technology is mature and easy to use; it saves time and money; it eliminates hard-to-troubleshoot errors because the process is reproducible and versioned; and it allows your team to be more flexible and move significantly faster.

More Stories By Ivan Antsipau

Ivan Antsipau is a senior .NET developer at Itransition specializing in the architecture and implementation of business-specific web applications. With a specialist degree in Radiophysics and Computer Science, a knack for team leading, and years of hands-on programming experience under his belt, he sees the key to sustainable and accelerated delivery of software projects in the elimination of stressful manual effort with the help of continuous integration and automated testing.
