Over the past year, AWS has made it easier than ever for enterprises to move to the cloud. That includes everything from reusable data shipping machines to giant instance types to services that enable development teams to run code on AWS without having to think about AWS resources at all.
AWS Lambda, announced in November 2014, is a service that allows you to run arbitrary code without managing instances or networks. In other words, it is a new abstraction layer above virtual machines (EC2), containers (Docker), and AWS APIs: a kind of meta-microservices platform that orchestrates bits of code without requiring you to manage container images or compute clusters. It is easiest to visualize this as levels of increasing compute abstraction: physical servers, then virtual machines, then containers, and finally individual functions.
We are in the midst of a sea change in the definition of infrastructure management. This is not just about a transition from on-premises datacenters to AWS and virtual machines. It is not just about "infrastructure as code." Lambda points to the ultimate ideal of every innovation in cloud, IT automation, and DevOps: automated, self-managing infrastructure. Indeed, IT automation lies at the heart of any DevOps transformation, with a majority of IT leaders ranking "IT automation" as the most important DevOps component, ahead of methodologies like agile development (47%) and collaboration (45%).
AWS is creating a world where infrastructure control is defined by its ability to change intelligently and efficiently. Rather than relying on complex monitoring and management tools to measure change or error, the system responds to prescribed rules to heal itself, measure itself, and recreate itself. Automation does not remove humans from the picture; it places them as rule-makers at the helm of complex systems that carry out their demands and take action.
What Lambda Looks Like
The best part about automated, self-managing infrastructure is that it sounds far more esoteric and difficult than it actually is. Every enterprise computing system already uses some of these tools today. The next step is simply to use more of them. Like the DevOps philosophy it stems from, IT automation is not an all-or-nothing proposition.
If fully manual process orchestration sits at one end of the spectrum, each subsequent effort at automation moves your team further into the realm of "DevOps." For those who want a more technical explanation: infrastructure automation is the creation of templated virtual compute resources that can be spun up or down dynamically in response to events or changes, usually governed by Auto Scaling Groups, an orchestration tool like AWS Elastic Beanstalk, or, in more sophisticated environments, a centralized configuration management hub like Puppet or Chef. Deployment automation usually means an automated CI/CD pipeline, including testing, built with homegrown tools or a service like AWS CodeDeploy. If your team can spin up a "vanilla" AMI, bootstrap it with Puppet, and deploy it into an Auto Scaling Group running the most recent version of your code with no human intervention, you truly have a DevOps shop. Most teams already use some configuration management tools, although orchestrating those tools to work with AWS is a fairly complex job.
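That vanilla-AMI-to-Auto-Scaling-Group flow can be sketched in a few lines. Everything below is a hypothetical illustration: the AMI ID, availability zones, group names, and Puppet server address are placeholders, and the functions only build the request bodies that a real pipeline would hand to boto3's `create_launch_configuration` and `create_auto_scaling_group` calls.

```python
import base64

# Hypothetical bootstrap script: a "vanilla" AMI installs Puppet on first
# boot, then pulls its role configuration from a central Puppet master.
# The server name is a placeholder.
BOOTSTRAP = """#!/bin/bash
yum install -y puppet
puppet agent --server puppet.example.internal --onetime --no-daemonize
"""

def launch_config_payload(name, ami_id, instance_type):
    """Build a request body for autoscaling.create_launch_configuration()."""
    return {
        "LaunchConfigurationName": name,
        "ImageId": ami_id,
        "InstanceType": instance_type,
        # base64-encode the bootstrap script for transport
        "UserData": base64.b64encode(BOOTSTRAP.encode()).decode(),
    }

def auto_scaling_group_payload(group, launch_config, min_size, max_size):
    """Build a request body for autoscaling.create_auto_scaling_group()."""
    return {
        "AutoScalingGroupName": group,
        "LaunchConfigurationName": launch_config,
        "MinSize": min_size,
        "MaxSize": max_size,
        "AvailabilityZones": ["us-east-1a", "us-east-1b"],  # placeholders
    }

lc = launch_config_payload("web-v42", "ami-12345678", "t2.micro")
asg = auto_scaling_group_payload("web-asg", "web-v42", 2, 10)
```

The point of the sketch is that every input is data: bake the payloads into version control and the whole fleet can be recreated, resized, or replaced without a human touching a console.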
AWS Lambda transcends even this sophisticated IT automation, and even the most forward-thinking IT teams use Lambda in addition to the configuration management and deployment tools described above, not instead of them. As Netflix has announced, it currently uses Lambda for just four functions:
- Encoding media files: Rules triggered by movement of video assets to aggregate parts of files into single files
- Backup for disaster recovery: Rules trigger what needs to be backed up, and check that the file has been backed up
- Security: Lambda validates that each new instance is constructed according to rules
- Monitoring: Events (like the launch of a new instance) turn on monitoring on that instance
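The monitoring use case above maps naturally onto a very small handler. The sketch below is our illustration, not Netflix's actual code; the event shape and instance ID are made up for the example, and the commented-out boto3 call marks where a deployed function would actually act.

```python
def handler(event, context):
    """Triggered by an EC2 state-change event notification.

    When a new instance enters the "running" state, turn on detailed
    monitoring for it. No human has to remember this step; the rule
    fires on every launch, automatically.
    """
    detail = event.get("detail", {})
    if detail.get("state") != "running":
        return {"action": "none"}

    instance_id = detail["instance-id"]
    # In a deployed function, boto3 would do the work here, e.g.:
    # boto3.client("ec2").monitor_instances(InstanceIds=[instance_id])
    return {"action": "enable-monitoring", "instance": instance_id}

# Invoke locally with a sample event:
sample_event = {
    "source": "aws.ec2",
    "detail-type": "EC2 Instance State-change Notification",
    "detail": {"instance-id": "i-0abcd1234", "state": "running"},
}
result = handler(sample_event, None)
```

Because the handler is just a function of the event, there is no server to patch, scale, or replicate; the rule itself is the infrastructure.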
At Logicworks, we are in the process of converting existing background jobs to AWS Lambda functions. These tasks demonstrate Lambda's real value:
- We expect 30-40% cost reductions, because a Lambda function runs only in response to events and is not constantly "on"
- We will no longer have to replicate the background job server for reliability; Lambda is by nature resilient and scalable
- It becomes cost-efficient to run checks that your infrastructure is behaving the way it should, in effect creating a series of "insurance" functions that verify your automation
- Developers can write Lambda functions, and can therefore build an extremely scalable abstract computation engine without having to understand the underlying AWS resources
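The "insurance" idea is concrete enough to sketch. The function below is an illustration (the required tag names are an assumed policy, not a Logicworks standard): it scans instance records of the shape `ec2.describe_instances()` returns and flags any instance missing mandatory tags, which is exactly the kind of cheap, event-driven audit Lambda makes economical.

```python
REQUIRED_TAGS = {"Owner", "Environment"}  # hypothetical tagging policy

def untagged_instances(reservations):
    """Return the IDs of instances missing any required tag.

    `reservations` mirrors the structure of ec2.describe_instances():
    a list of reservations, each holding a list of instances.
    """
    offenders = []
    for reservation in reservations:
        for instance in reservation.get("Instances", []):
            tags = {t["Key"] for t in instance.get("Tags", [])}
            if not REQUIRED_TAGS <= tags:  # required set not covered
                offenders.append(instance["InstanceId"])
    return offenders

# Sample data in the describe_instances() shape:
sample = [{"Instances": [
    {"InstanceId": "i-ok", "Tags": [{"Key": "Owner", "Value": "web"},
                                    {"Key": "Environment", "Value": "prod"}]},
    {"InstanceId": "i-bad", "Tags": [{"Key": "Owner", "Value": "web"}]},
]}]
flagged = untagged_instances(sample)
```

Run on a schedule, a check like this costs fractions of a cent per invocation, so there is no longer a reason not to audit your automation continuously.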
Overall, Lambda reduces the complexity of critical functions and makes it more cost-efficient to run background jobs. And given the attention Lambda received at re:Invent 2015, one can only expect its usefulness as a computation engine to grow.
IoT, Amazon Echo, and… the Future
You have undoubtedly seen the commercials for Amazon Echo, the cylindrical, voice-controlled device that is part Siri, part in-home speaker. What you may not know is that Amazon has made the Alexa Voice Service available to hardware makers, so that they can add voice-powered experiences to their devices. (Ford has already done so; you can now tell your Amazon Echo to start your car.) Even more interesting, Amazon lets developers write a "skill" for the Echo backed by an AWS Lambda function.
Hypothetically, an engineer could use Amazon Echo to launch virtual instances, even launch entire AWS environments. (Logicworks gave every engineer on our team an Amazon Echo for fun, just to see what was possible — stay tuned for the results of their experiments.) That is not to say that enterprise systems will be voice-activated in 5 years. But the fact that this is even possible highlights the tremendous power of AWS Lambda.
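To make the thought experiment concrete: an Alexa skill's backend is just a Lambda handler that receives a JSON request and returns a JSON response. The sketch below is purely hypothetical (intent name and all), and the commented-out boto3 call marks where an instance launch would actually happen in a real, carefully locked-down skill.

```python
def skill_handler(event, context):
    """Hypothetical Alexa skill: "Alexa, ask Ops to launch an instance." """
    intent = event.get("request", {}).get("intent", {}).get("name")
    if intent == "LaunchInstanceIntent":
        # A real skill would delegate to boto3 here, e.g.:
        # boto3.client("ec2").run_instances(
        #     ImageId="ami-12345678", MinCount=1, MaxCount=1)
        speech = "Launching one instance now."
    else:
        speech = "Sorry, I did not understand that."
    # Minimal Alexa response envelope:
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }

# Invoke locally with a sample Alexa-style request:
reply = skill_handler(
    {"request": {"type": "IntentRequest",
                 "intent": {"name": "LaunchInstanceIntent"}}}, None)
```

Whether or not anyone should launch production infrastructure by voice, the fact that the plumbing reduces to a twenty-line function is the point.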
In many ways, manual work has long been the foundation of enterprise IT. Existing procurement, governance, and security processes required engineers to be available at every step to build and maintain critical systems. Change required detailed monitoring, project management, and human intervention.
To make change more efficient, migrating to AWS is not enough. This is why automation, DevOps, and cloud are a three-legged stool; automation is the critical step that turns the cloud from just another pool of compute resources that needs to be manually changed into the PaaS layer that DevOps teams need. Whether or not containers, Lambda, or some other abstraction service ultimately gains acceptance in the enterprise, automated, self-managing infrastructure is the future of enterprise cloud management.
Voice-control, however, is optional.