ArchOps vs. DevOps: Foundation and Automation 

Good order is the foundation of all things.

– Edmund Burke

The benefits of DevOps are widely known: 70% of senior IT leaders believe the need for DevOps has never been greater, and the risk of manual, undocumented, slapdash procedures has never been clearer.

The pressure to deliver software applications quickly will only increase in the next several years. This is part of why deployment automation has always been a crucial component of DevOps philosophy. But fully automated environments require a sophisticated development process, and in the rush to production, that can add unwanted complexity.

This is why your organization may need to be more strategic about selecting the aspects of DevOps your team can currently handle. DevOps is a spectrum, and you do not need to implement every DevOps best practice in order to get some benefit from DevOps principles. We like to think of that spectrum as running from the most basic practices to the most advanced.

You can also think of this as a spectrum from what Amazon Web Services calls “ArchOps” to DevOps. ArchOps is about laying the foundation — the minimum viable product of the environment — and AWS DevOps is about building the automation.

To get behind the terminology: ArchOps focuses on automating just the infrastructure buildout, so that you can deploy “standard” environments in a few hours or days. At the most basic level, ArchOps means creating a “perfect” base image of an instance that you bring up manually. A good base image should not make any assumptions about what its name is, should be destructively tested, and so on.
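
As a rough sketch of what that first step can look like (the region, instance ID, and image name below are placeholders, not our actual values), capturing a hardened instance as a reusable AMI with boto3 might be as simple as:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Capture a hardened, already-configured instance as a reusable base image.
# The instance ID and image name are hypothetical, shown only for illustration.
response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",      # hypothetical source instance
    Name="base-webserver-2024-01-01",      # follow your own naming convention
    Description="Hardened base image with no host-specific assumptions baked in",
    NoReboot=False,                        # reboot for a consistent filesystem snapshot
)

print("Created AMI:", response["ImageId"])
```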

An automated network architecture buildout is the next level of complexity. Even if you have hundreds of complex apps, your network configuration often does not change radically, and remembering to close X, Y, and Z security loopholes each time you configure a network is a recipe for disaster. Setting up credentials and baking in naming conventions will also save you huge amounts of time.
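
For illustration, here is a hedged sketch of what baking one of those security rules and a naming convention into code could look like with boto3; the VPC ID, CIDR range, and tag values are assumptions, not a prescribed setup:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a security group that only allows SSH from an admin/bastion range,
# so "remember to lock down port 22" is codified rather than recalled from memory.
# The VPC ID, CIDR block, and tag values are hypothetical placeholders.
sg = ec2.create_security_group(
    GroupName="prod-web-sg",
    Description="Web tier security group, SSH restricted to bastion range",
    VpcId="vpc-0abc1234def567890",
    TagSpecifications=[{
        "ResourceType": "security-group",
        "Tags": [
            {"Key": "Name", "Value": "prod-web-sg"},       # naming convention baked in
            {"Key": "Environment", "Value": "production"},
        ],
    }],
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "10.0.0.0/24", "Description": "bastion subnet only"}],
    }],
)
```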

We use CloudFormation, a native AWS service, to do this. If you ever need to make a change to your network infrastructure, you change your CloudFormation template. This helps prevent regressions in your security policy and gives you a concrete document that fully defines the environment you are building. CloudFormation lets you automate tasks like deploying secure, Multi-AZ web servers and network infrastructure, and it can even download our Puppet scripts and configure our Puppet Master — dozens of small tasks that are very time-consuming if done manually.
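
As a minimal sketch of how that workflow can be driven (the stack name, template location, and parameters are hypothetical), launching and later updating a network foundation from a version-controlled template might look like this in boto3:

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Launch the "standard" network foundation from a version-controlled template.
# Stack name, template URL, and parameter names are hypothetical placeholders.
cfn.create_stack(
    StackName="prod-network-foundation",
    TemplateURL="https://s3.amazonaws.com/example-bucket/network.template.yaml",
    Parameters=[
        {"ParameterKey": "Environment", "ParameterValue": "production"},
        {"ParameterKey": "VpcCidr", "ParameterValue": "10.0.0.0/16"},
    ],
    Capabilities=["CAPABILITY_IAM"],   # needed only if the template creates IAM resources
)

# Later, a change to the network goes through the template rather than the console:
# cfn.update_stack(StackName="prod-network-foundation", TemplateURL="...", Parameters=[...])
```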

CloudFormation is especially time-saving if your “standard” deployment has HIPAA or PCI compliance constraints or must incorporate legacy systems; without it, your team spends countless hours writing and re-installing 3rd-party tools and a custom mix of AWS tools for each deployment. This might be sustainable if you are working on a small deployment of a handful of instances, but you will need a fleet of NOC engineers if you are looking to spin up hundreds of environments in an enterprise or growing start-up.

In essence, CloudFormation allows you to maintain buildout velocity as you grow by orchestrating your AWS resources to spin up the foundation of an environment quickly. This is the fundamental goal of ArchOps.

Once CloudFormation has done the work of installing and configuring Puppet, we start pulling the strings. At this point, you are getting into true AWS DevOps practices.

First, you start with deployment automation, putting the principle of continuous integration into practice. For example, you might develop a set of lint scripts that check for syntax errors, so that every time you make a commit, something is running and checking that code.
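
One possible illustration, not our actual scripts: a short Python pre-commit hook that runs a linter over whatever is staged in Git, so a commit cannot land without at least a syntax check.

```python
#!/usr/bin/env python3
"""Run a linter against files staged for commit; a hypothetical pre-commit hook sketch."""
import subprocess
import sys


def staged_python_files():
    # List the files staged in Git for this commit.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in out.stdout.splitlines() if f.endswith(".py")]


def main():
    files = staged_python_files()
    if not files:
        return 0
    # pyflakes is just one example of a syntax checker; swap in your own linter.
    result = subprocess.run(["pyflakes", *files])
    return result.returncode


if __name__ == "__main__":
    # A non-zero exit blocks the commit when installed as .git/hooks/pre-commit.
    sys.exit(main())
```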

With highly sophisticated deployment automation, you can deploy code to production hundreds of times a day. With every commit that is pushed out to the site, a whole suite of unit tests verifies what is working. If you get to the point where you have a one-button process to push from your Git — so you can deploy a big update to just one instance and then run statistics on that one node to make sure it is working correctly — you have a very mature systems development process and a true DevOps shop. At Logicworks, we do this through CodeDeploy, which handles tasks like deploying your update to a fleet of AWS EC2 instances (from a single instance to thousands) and automatically scheduling updates across multiple Availability Zones to maintain High Availability during deployment.
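
A minimal sketch of what triggering such a push can look like with boto3; the application name, deployment group, and S3 revision below are placeholders rather than a real Logicworks configuration:

```python
import boto3

codedeploy = boto3.client("codedeploy", region_name="us-east-1")

# Kick off a deployment of a packaged revision to a fleet of EC2 instances.
# Application, deployment group, bucket, and key names are hypothetical.
response = codedeploy.create_deployment(
    applicationName="example-web-app",
    deploymentGroupName="production-fleet",
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "example-deploy-artifacts",
            "key": "releases/web-app-1.2.3.zip",
            "bundleType": "zip",
        },
    },
    # Roll through instances one at a time so the site stays available mid-deploy.
    deploymentConfigName="CodeDeployDefault.OneAtATime",
    description="Automated push from CI",
)

print("Deployment started:", response["deploymentId"])
```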

A final step is integrating your deploy process with Auto Scaling, using Puppet scripts to automatically fold those EC2 instances into your Auto Scaling groups. You also usually get rid of your “Golden Master” and instead configure a vanilla template at launch with Puppet scripts. Consider our Auto Scaling best practices for optimal cloud scalability.
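
As a hedged sketch of that last step (the AMI ID, Puppet server hostname, and subnet IDs are hypothetical), wiring a plain image plus a boot-time Puppet run into an Auto Scaling group might look like this:

```python
import base64
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# User data that configures a vanilla image at boot with Puppet instead of a "Golden Master".
# The Puppet server hostname, AMI ID, and subnet IDs are hypothetical placeholders.
user_data = """#!/bin/bash
puppet agent --server puppet.example.internal --onetime --no-daemonize
"""

ec2.create_launch_template(
    LaunchTemplateName="web-launch-template",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",   # plain base image, not a golden master
        "InstanceType": "t3.medium",
        "UserData": base64.b64encode(user_data.encode()).decode(),
    },
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="prod-web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-launch-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",  # spread across Availability Zones
)
```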

Cloud DevOps engineers ultimately break these code blocks into the smallest possible pieces and then build reusable components, so that even custom environments can still leverage that automated architecture. Building a central repository of these templates allows us to deploy complex, compliant, and secure environments quickly. These practices enabled Logicworks to build the environment for the largest Health Information Exchange in America on AWS in 30 days.
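
To make that concrete, here is one hypothetical way such a template catalog could be stitched together; the bucket, component names, and parameters are illustrative assumptions, not our production layout:

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# A hypothetical "catalog" of reusable building blocks kept in a central S3 bucket.
# Bucket, component keys, and parameter names are placeholders for illustration.
TEMPLATE_BUCKET = "https://s3.amazonaws.com/example-template-library"
COMPONENTS = ["network", "bastion", "web-tier", "database"]


def deploy_environment(env_name, parameters):
    """Assemble an environment from small, reusable templates rather than one monolith."""
    for component in COMPONENTS:
        cfn.create_stack(
            StackName=f"{env_name}-{component}",
            TemplateURL=f"{TEMPLATE_BUCKET}/{component}.template.yaml",
            Parameters=[
                {"ParameterKey": k, "ParameterValue": v} for k, v in parameters.items()
            ],
            Capabilities=["CAPABILITY_IAM"],
        )


deploy_environment("client-a-prod", {"Environment": "production", "VpcCidr": "10.20.0.0/16"})
```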

It takes a significant amount of time and focus to create and maintain these automation resources. Unit tests alone become a whole separate infrastructure you need to maintain. But you are ultimately saving hundreds of hours of manual work and keeping your development velocity high, so your team can spend time on code, not on deploys.

Find out more about Managed AWS and DevOps as a Service or contact Logicworks.


by DevOps @ Logicworks
