There will be an estimated 25,000 petabytes of patient data in the US by 2020. For many health organizations and health care SaaS providers, the cost of continually expanding on-premises storage is already unsustainable, and it will only get worse.
These pressures make cloud an attractive option. Mature cloud platforms like Amazon Web Services (AWS) that offer the ability to scale quickly and painlessly have attracted dozens of reputable health care and life sciences companies, including Pfizer, Siemens, and Philips. But for companies at the beginning of their cloud efforts, migration can seem daunting.
Here are the most common questions health care companies ask when moving to AWS — and answers drawn from our experience at Logicworks managing tens of thousands of servers with PHI in the cloud.
Our organization wants AWS — now what? What is the most effective plan for migration?
The first step is to determine your organization’s priorities for AWS migration. Do you just want cheaper storage? Or do you want to use health care data in new ways? This will determine whether your migration will be a “simple” lift-and-shift (take a snapshot of your on-prem server, spin it up on AWS) or a more complex refactoring, which would allow you to take advantage of native AWS tools.
The second step is to understand where you are today. This is normally a painful process for health care organizations with multiple legacy systems and aging ERP systems; products like AWS Application Discovery Service and other third-party tools can get you part of the way there, but some hands-on discovery work by engineers is still required.
The complexity of the discovery phase means that for some health care organizations, it is more effective to delay migration of existing systems and start building greenfield environments on AWS, either for new applications or simply for storage of future data. The advantage of this strategy is twofold: first, it allows you to build a simple, relatively “standard” AWS environment unhampered by customizations, which will be easier to build and maintain; second, it allows IT to gain familiarity with AWS and prove the value of the model before investing in costly application refactoring efforts.
Does AWS offer a BAA? Does that mean we can avoid a new assessment?
Amazon was the first public cloud provider to offer a BAA, one of the many reasons it is a pioneer in the health care space. But health care organizations must understand AWS’ BAA and the limits of that BAA, both in terms of services covered by the BAA and potential technical risks.
At its most basic level, the Amazon BAA covers the security of the physical servers in AWS’ datacenters. This in itself is a tremendous cost savings for health care organizations. However, the BAA does not cover the use of those services by your organization, the configuration of those servers, the data hosted in those servers, etc. Many organizations use a third party managed service provider (like Logicworks) to fill the gap between AWS’ responsibilities and their own and extend limitations of liability.
However, whether or not you use an MSP, migrating to AWS means you must still conduct your own internal risk assessment and undergo your own audits. Migration outsources some, but not all, of your compliance responsibilities to Amazon or your MSP.
What application type or tier do health care organizations migrate to AWS first?
The lowest-hanging fruit for migration to AWS is usually either DR or cold storage of infrequently accessed records that would otherwise be sent to tape or off-site storage. Health care organizations typically must retain records for seven years or more; data can be collected by existing on-prem applications and pushed over a VPN to Amazon Glacier, AWS’s low-cost cold storage service, for as little as $0.007 per gigabyte per month. That alone is a tremendous cost savings for health care organizations.
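To make the archival pattern concrete, the sketch below builds an S3 lifecycle policy that transitions objects to Glacier after 30 days and expires them after a seven-year retention window. The bucket name, key prefix, and exact retention period are illustrative assumptions, not a prescription; applying the rule requires boto3 and valid AWS credentials.

```python
# Sketch: an S3 lifecycle rule that moves records to the Glacier
# storage class after 30 days and deletes them once a seven-year
# retention window has passed. Prefix and timings are placeholders.

RETENTION_DAYS = 7 * 365  # ~seven-year record retention

lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-patient-records",
            "Filter": {"Prefix": "records/"},   # placeholder key prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "GLACIER"}
            ],
            "Expiration": {"Days": RETENTION_DAYS},
        }
    ]
}

def apply_lifecycle(bucket_name: str) -> None:
    """Apply the rule to a bucket (requires boto3 and AWS credentials)."""
    import boto3  # assumed available in the target environment
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket_name,
        LifecycleConfiguration=lifecycle_config,
    )
```

Once a rule like this is in place, on-prem applications only need to push objects into the bucket; S3 handles the transition to cold storage automatically.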
The next step is usually migrating a new application or set of applications to AWS. For obvious reasons, a greenfield app is easier to build and manage, since it usually has fewer dependencies. After the success of a new application, the organization is usually willing to invest the effort to migrate legacy systems whose hardware is reaching end of life.
What are the disadvantages to a “lift and shift” migration approach?
The advantages of a lift-and-shift migration are that it is quick and cheap in the short term. In the long term, it is almost always more expensive than a “true” migration. Because you are just snapshotting your current environment and loading it on AWS, or using an ETL (extract, transform, load) tool to extract data and dump it on AWS, lift and shift often means that you are still maintaining all of your own tools or purchasing third-party licenses rather than taking advantage of native AWS tools.
For instance, if you lift and shift your SQL Server database to AWS, you still have to pay Microsoft. If you spend the time to migrate to Amazon Aurora, AWS’s fully managed relational database service, you will not have to handle upgrades, replication, or mirroring yourself, and you pay only for the storage you actually use, which is a huge savings for most companies. If you put the data in a data warehouse like Amazon Redshift, you have the potential to combine, reuse, or analyze that data for future “big data” applications. This means you are not just migrating to AWS; you are laying the foundation for future product innovations.
The most overlooked disadvantage of a lift and shift strategy is that you are usually not planning for the long-term, sustainable growth of your cloud infrastructure itself. Engineers build a single environment and customize it to their needs, which quickly becomes an issue when multiple departments and project owners build their own custom AWS environments, each of which uses different resources, different network policies, and different access rules. Every new AWS environment is built from scratch with a completely new set of rules. Finance teams and GRC teams usually panic when they look at your complex, ungoverned, wasteful AWS environment.
Even if you are only migrating a handful of applications to AWS, invest in developing a standard set of architecture templates that can be reused in multiple contexts. These templates, usually developed in AWS CloudFormation, mean that spinning up a new standard AWS environment is fast; each environment has “built-in” standards for network, access, naming, etc.; and the templates then form the central source of truth for how a system is architected.
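A minimal sketch of what such a template might contain, here generated in Python as CloudFormation JSON (CloudFormation accepts both JSON and YAML). The resource, CIDR block, and tag conventions are illustrative assumptions, not a recommended baseline:

```python
import json

# Sketch: generate a minimal, reusable CloudFormation template that
# bakes naming and network standards into every new environment.
# CIDR range and tag keys are placeholders.

def make_template(env_name: str, cidr: str = "10.0.0.0/16") -> dict:
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": f"Standard VPC baseline for {env_name}",
        "Resources": {
            "BaseVPC": {
                "Type": "AWS::EC2::VPC",
                "Properties": {
                    "CidrBlock": cidr,
                    "EnableDnsSupport": True,
                    "EnableDnsHostnames": True,
                    "Tags": [
                        {"Key": "Name", "Value": f"{env_name}-vpc"},
                        {"Key": "Environment", "Value": env_name},
                    ],
                },
            }
        },
    }

template = make_template("staging")
print(json.dumps(template, indent=2))
```

Generating templates programmatically like this keeps the naming and tagging conventions in one place, so every department’s environment starts from the same standards instead of being built from scratch.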
For organizations that lack the time or expertise to create such templates, find a cloud service provider with a dedicated DevOps team focused on creating custom infrastructure automation software. Such templates can save you hundreds of hours of manual work and reduce the risk of configuration drift.
At a time of upheaval and transition in the health care industry, health IT is feeling pressure not just to digitize records, but to use technology to bring real value to patients. AWS and its partner network help health care organizations focus on applications that deliver better care, not on managing infrastructure — and could be the key to unlocking real value in health IT.