We're ready to help

Our cloud experts can answer your questions and provide a free assessment.

Schedule
a meeting
close
hardware

Orchestration Software: Does Hardware Matter Anymore?

In IT infrastructure departments, hardware matters less every day.

Today, engineers can make commoditized infrastructure (“the cloud”) more secure, agile, and cost-effective with software than they ever could with bare metal servers, switches, and motherboards.

The migration of functionality from hardware to software goes far beyond the world of cloud computing. Yesterday in Harvard Business Review, Willy C. Shih of Harvard Business School argued that this migration will also move entire industries from hardware-defined innovation to software-defined innovation.

Just listen to how Shih describes his experience with a new Volkswagen Beetle:

“…I noticed that when I opened the door the window rolled down just a little bit, anticipating the air pressure buildup that would occur when I shut the door. I got a nice satisfying door slam, and afterward the window rolled up. That would be really hard to do with analog controls, but with software? Easy.”

What Shih is describing is a world of predictive, intelligent design. A world where we build application- (or human-) centered software, not machine-limited applications.

Every day at Logicworks, software engineers write and manage thousands of lines of code that spin up hundreds of virtual machines in minutes. This is a remarkable feat, of course, but what makes this revolutionary is that we are not just replacing servers with code. Our clients are not just using Amazon Web Services (AWS) because it is cheap. They are building systems that were never possible before.
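
To make this concrete, here is a minimal sketch of what “spinning up servers with code” can look like using AWS’s boto3 SDK. The AMI ID, instance type, and instance count are illustrative placeholders, not a real configuration:

```python
# Minimal sketch: launching a fleet of EC2 instances with a few lines of code.
# The AMI ID, instance type, and counts below are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",           # placeholder size
    MinCount=100,
    MaxCount=100,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Environment", "Value": "staging"}],
    }],
)

print(f"Launched {len(response['Instances'])} instances")
```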

This innovation is possible because software control systems are both cheaper and faster to develop than analog control systems. “The biggest benefit from this trend is that you can incorporate more sophisticated control regimes into products,” Shih writes. This is as true for Volkswagen as it is for AWS. Five years ago, engineers shied away from AWS because they wanted complete control over their environment, meaning the physical hardware running their applications. Today, more engineers are realizing that the fine-grained control their applications actually require is over things like access control, compute power, and cost, not power supply and disk drives.
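
As one hedged illustration of that kind of control, here is what expressing access control as code (rather than as hardware topology) might look like with boto3; the policy name and bucket ARN are hypothetical:

```python
# Sketch: access control defined in code, not in hardware.
# The policy name and bucket ARN are hypothetical.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-app-data/*",  # hypothetical bucket
    }],
}

iam.create_policy(
    PolicyName="ExampleAppDataReadOnly",  # hypothetical name
    PolicyDocument=json.dumps(policy_document),
)
```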

The quality of the power supply feeding an enterprise’s datacenter will almost never differentiate its product from its competitors’. And increasingly, neither will the degree to which it customizes its servers. It turns out that application-specific hardware customization achieves great performance but is time-consuming, expensive, and therefore inefficient. In the cloud, engineers can customize instances “over-the-air”, instantly fixing errors and deploying new features in hours, not months.
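
For instance, resizing a server “over-the-air” is just an API call. A rough boto3 sketch, where the instance ID and target type are placeholders (and EC2 requires the instance to be stopped before its type can change):

```python
# Sketch: more compute power is one API call, not a hardware order.
# The instance ID and target type are placeholders.
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # placeholder

# EC2 instances must be stopped before their type can be changed.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "m5.2xlarge"},  # scale up in one line
)

ec2.start_instances(InstanceIds=[instance_id])
```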

What is the point of great performance if the application it supports is outdated? To put it bluntly, many system engineers are tasked with perfecting and controlling hardware that supports aging applications with unaddressed security and performance vulnerabilities. Researching whether type X disks outperform type Y disks is the last thing those engineers should worry about. It is simply a waste to spend engineering dollars regulating a datacenter’s cooling systems when the application crashes regularly because it cannot scale. This is as frustrating to system engineers as it is to business leaders.
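
Compare that with letting the application scale itself. Here is a sketch of a target-tracking scaling policy in boto3, assuming a hypothetical Auto Scaling group named "example-app-asg" already exists:

```python
# Sketch: the application scales itself instead of crashing under load.
# Assumes an Auto Scaling group named "example-app-asg" already exists.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="example-app-asg",  # hypothetical group
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 60.0,  # add or remove servers to keep CPU near 60%
    },
)
```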

Instead, more engineering dollars should (and will) go towards infrastructure software engineers, a.k.a. cloud engineers, DevOps engineers, and automation engineers.

These dollars will be focused on time-to-market, not perimeter security; on “using building blocks, adding the custom pieces and then rapidly deploying them,” as Shih writes, not individually customizing each rack. When system engineers work alongside application developers, the entire team is focused on the performance of the application overall, not on the performance of their particular piece of the infrastructure.
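
In code, “building blocks, adding the custom pieces and then rapidly deploying them” can mean a single call that stands up an entire environment from a template. This sketch assumes a hypothetical CloudFormation template file and stack name:

```python
# Sketch: deploying a whole environment from a template in one call.
# The template file, stack name, and parameter are hypothetical.
import boto3

cloudformation = boto3.client("cloudformation")

with open("app-environment.yaml") as f:  # hypothetical template
    template_body = f.read()

cloudformation.create_stack(
    StackName="example-app-environment",
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "Environment", "ParameterValue": "production"},
    ],
)
```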

The result? Applications are faster and more resilient. Business leaders will never again hear that IT cannot do anything about downtime.

One final point. The last bastion of cloud resistance in any IT department is usually the threat of security breaches and/or NSA snooping. This is a legitimate concern, especially when your cloud is supported by internal teams that are new to the cloud.

However, we are nearing a stage when security strategy will be a “core design principle”, not the final checkbox on the list. Not only will this make system security stronger, it will make security the responsibility of every engineer, not just the guy who installs the anti-virus program. When you put treasure in a safe, snapping the lock closed is the last step. When you put treasure next to everyone else’s treasure, you give the gold an intrinsic property so that only certain people can touch it (the rest are poisoned), you disguise the gold to look like dust, and then you put several hundred video cameras on it to monitor every person who comes within 50 feet of it. The responsibility shifts from the one guy who put the lock on to an entire team of people who make every part of the treasure secure. Who is to say which is safer?
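
To ground the analogy in code: encrypt the treasure by default, and put the video cameras (an audit trail) on every action that touches it. A hedged boto3 sketch; the bucket and trail names are hypothetical:

```python
# Sketch: security as a core design principle, written in code.
# Bucket and trail names are hypothetical.
import boto3

s3 = boto3.client("s3")
cloudtrail = boto3.client("cloudtrail")

# Disguise the gold: everything written to the bucket is encrypted by default.
s3.put_bucket_encryption(
    Bucket="example-treasure",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"},
        }],
    },
)

# The video cameras: record every API call made against the account.
cloudtrail.create_trail(
    Name="example-audit-trail",
    S3BucketName="example-audit-logs",  # bucket must exist with a trail policy
)
cloudtrail.start_logging(Name="example-audit-trail")
```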

As cloud software becomes more complex, differentiated, and precise, hardware will become less and less relevant. The question is whether enterprises will take infrastructure as code seriously and start experimenting, or continue to put dollars where they do not count.

By Jason Deck
VP – Strategic Development, Logicworks

Logicworks is a managed service provider with 22 years of experience in designing and managing complex enterprise systems. Contact us to learn more about our team of system architects and DevOps engineers.
