What is AWS Outposts?
For years, AWS was clear on its messaging that customers should stop spending money on undifferentiated heavy lifting. This is AWS verbatim, as can be seen in the design principles for the Cost Optimization pillar of the AWS Well-Architected Framework (https://docs.aws.amazon.com/wellarchitected/latest/framework/cost-dp.html). As it says, racking, stacking, and powering servers fall into this category, with customers advised to explore managed services to focus on business objectives rather than IT infrastructure.
From that statement, it would be reasonable to conclude that AWS would hardly offer customers anything resembling the dreaded kind of equipment that needs power, racking, and stacking. AWS's early strides in bringing physical equipment to customers came in the form of the AWS Snow family: AWS Snowball Edge devices and their variants (compute, data transfer, and storage).
The Snowball Edge does sport the title of the first product able to run AWS compute technology on customer premises, delivering compute through specific Amazon Elastic Compute Cloud (EC2) instance types and AWS Lambda functions powered locally by AWS IoT Greengrass. Despite this fact, it was advertised as a migration device that enabled customers to move large local datasets to and from the cloud while supporting independent local workloads in remote locations.
In addition, Snowball Edge devices can be clustered together to grow or shrink local storage and compute capacity on demand. AWS Snowball Edge supports a subset of the Amazon Simple Storage Service (S3) APIs for data transfer. Because users and AWS Identity and Access Management (IAM) keys can be created locally, it can run in disconnected environments, and it offers Graphics Processing Unit (GPU) options.
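To make this concrete, here is a minimal sketch, using Python and boto3, of what working against the device's S3-compatible endpoint could look like. The endpoint address, port, bucket name, and credentials are placeholder assumptions, not real values; in practice, you would use the endpoint reported by your own device and the access keys generated locally on it.

```python
import boto3

# Minimal sketch of talking to the S3-compatible endpoint on a Snowball Edge.
# The IP, port, and credentials below are placeholders (assumptions);
# substitute the endpoint and locally generated keys from your device.
s3 = boto3.client(
    "s3",
    endpoint_url="https://192.0.2.10:8443",   # assumed device endpoint
    aws_access_key_id="LOCAL_ACCESS_KEY_ID",   # key created on the device
    aws_secret_access_key="LOCAL_SECRET_KEY",
    verify=False,                              # or the path to the device certificate
)

# Only a subset of the S3 API is available locally; basic object operations
# such as uploads and listings are enough to move data onto the device.
s3.put_object(Bucket="my-local-bucket", Key="dataset/part-0001.csv",
              Body=b"sample,data\n")
print(s3.list_objects_v2(Bucket="my-local-bucket").get("KeyCount"))
```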
Launched in 2015, the first generation was called AWS Snowball and did not have compute capabilities; those arrived in 2016, when the product was rebranded as Snowball Edge. Today, AWS Snowball is the name of the overall service. The specs are impressive, with 100 Gbps network options and the ability to cluster up to 400 TB of S3-compatible storage. SBE-C instances are no less impressive, featuring up to 52 vCPUs and 208 GB of memory.
AWS invested a great deal to make the cloud not only appealing but also accessible. It sought to remove the scary thought of having to change something drastically and radically, that awful sensation of having to rebuild the IT infrastructure on top of a completely different platform. AWS even gave various customers a soft landing and an easy path to AWS when it announced (https://aws.amazon.com/blogs/aws/in-the-works-vmware-cloud-on-aws/) joint work with VMware in 2016 to bring VMware's capabilities to the cloud, a partnership that debuted in 2017 (https://aws.amazon.com/blogs/aws/vmware-cloud-on-aws-now-available/).
With these capabilities and Edge appended to the service name, it seemed that the path forward was set with Snowball. So it came as a surprise when AWS Outposts was announced in November 2018 during Andy Jassy's keynote at re:Invent. On stage, it was shown as a conceptual model, but one could clearly see it had the shape and form of a server rack.
AWS Outposts debuted on video in 2019 (https://youtu.be/Q6OgRawyjIQ), introduced by Anthony Liguori, a VP and distinguished engineer at AWS. By then, it was clear that a server rack was in the making inside AWS and that it was targeting the traditional data center realm. However, this seemed to run against the very AWS philosophy of asking customers to stop spending money on traditional infrastructure, and anyone staring at an AWS Outposts rack could be intrigued.
At re:Invent 2019, Andy Jassy revealed the use case for Outposts during his keynote. He started by acknowledging that some workloads would have to remain on-premises: even companies that had been strong advocates for cloud adoption had at times struggled to move certain workloads, found them very challenging, and eventually stumbled along the way.
Outposts was characterized as a solution to run AWS infrastructure on-premises for a truly consistent hybrid experience. The feature set was enticing: the same hardware that AWS runs in its data centers, seamlessly connecting to all AWS services, with the same APIs, control plane, functionality, and tools used when operating in the Region. On top of that, it is fully managed by AWS. In the same keynote, he showcased a specific Outposts variant for VMware, a bold move for a cloud company advocating to stop investing in data centers.
That was not the only announcement targeting the edge space. At that same event, AWS Local Zones and AWS Wavelength were announced. While these offerings fall beyond the scope of this book, it's worth noting that they weave together into an array of capabilities that address the requirements and gaps in the edge space and give AWS a strong foothold in it. Suffice it to say, AWS Local Zones are built using slightly modified (multi-tenant) AWS Outposts racks.
Now, we have finally set the stage to introduce AWS Outposts. Let us begin with the product landing page (https://aws.amazon.com/outposts/). At the time of writing, it is dubbed the Outposts Family, owing to the introduction of two new form factors at re:Invent 2021. The 42U rack version, the first to be launched, is now called an AWS Outposts rack. The new 1U and 2U versions are called AWS Outposts servers.
Regardless of form factor, three outstanding statements hold across the family and strongly establish the value proposition of this offering:
- Fully managed infrastructure: Operated, monitored, patched, and serviced by AWS
- Run AWS Services on-premises: The same infrastructure as used in AWS data centers, built on top of the AWS Nitro System
- Truly consistent hybrid experience: The same APIs and tools used in the region, a single pane of management for a seamless experience
Let us cover each in detail.
One of the key aspects of positioning AWS Outposts in customer conversations revolves around explaining how AWS Outposts is different from ordering commodity hardware from traditional hardware vendors. That is exactly where these three statements come into play, highlighting differentiators that cannot be matched by competing offerings.
AWS Outposts is fully managed by AWS. While others may claim their products are also fully managed, AWS takes it to the ultimate level: it is an AWS product end to end. The hardware is AWS, purchase and delivery are managed and conducted by AWS, product requirements are strongly enforced by AWS, and site survey, installation, and servicing are conducted by AWS. No third parties are involved – the customer’s point of contact is AWS.
AWS Outposts enables customers to run a subset of AWS services on-premises and allows applications running on Outposts to seamlessly integrate with AWS products in the Region. The first sentence alone knocks out traditional hardware single-handedly; you can't run EC2 on commodity servers, for example. To strengthen the case, while applications running on traditional hardware can interact with AWS via API calls, AWS Outposts once again takes it to a whole new level, stretching an AWS Availability Zone in a given Region to the confines of an Outposts rack and allowing workloads to operate as if they lived in that Region.
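To illustrate this consistency, here is a minimal sketch, using Python and boto3, of how a workload could be placed on an Outpost with the very same EC2 APIs used in the Region: the subnet is created with the Outpost's ARN, and instances launched into it run on-premises. The Region, VPC ID, CIDR block, Availability Zone, Outpost ARN, AMI, and instance type are placeholder assumptions; the instance type must match capacity actually provisioned on the Outpost.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # the Outpost's parent Region (assumed)

# Create a subnet that lives on the Outpost by passing its ARN.
# All identifiers below are placeholders.
subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.0.128.0/24",
    AvailabilityZone="us-east-1a",  # the Availability Zone the Outpost is anchored to
    OutpostArn="arn:aws:outposts:us-east-1:111122223333:outpost/op-0123456789abcdef0",
)["Subnet"]

# Launch an instance exactly as you would in the Region; targeting the
# Outpost subnet is what makes it run on-premises.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="m5.xlarge",          # must be a type provisioned on the Outpost
    MinCount=1,
    MaxCount=1,
    SubnetId=subnet["SubnetId"],
)
```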
Customers are extremely sensitive to the consistency of their processes. The use of multiple tools, multiple management consoles, and various scripting languages is cumbersome and error-prone, and that is exactly what ends up happening when you craft a solution by assembling parts that come from multiple vendors.
You will need to use a myriad of tools, interfaces, and scripts to configure and make it work. Long and complex setup processes, multiple vendors involved in troubleshooting errors, and multiple teams conducting various stages of the process lead to inefficiency, inconsistency, security problems, and significant delays in being ready for production.
IT professionals normally try to avoid this pitfall by pursuing a solution provided by a single vendor, even at the risk of the infamous single-vendor lock-in. However, one hardware provider hardly ever designs and manufactures all the constituent technologies involved, such as the compute, storage, networking, power, cooling, and rack structures. More often than not, some components are built by a third-party OEM, if not sold under a third-party brand outright. In the end, these solutions are a collection of individual parts with only some degree of consistency.
Here is another significant differentiator of AWS Outposts: it is a thoroughbred AWS solution. AWS Outposts employs the same technology used in AWS data centers, whose hardware designs and solutions have undergone significant advancements over time and have been battle-tested in production for years. With this level of integration and control, AWS can explore and tweak the components for highly specialized tasks, as opposed to the more general-purpose approach of commodity hardware.
AWS developed a technology called the AWS Nitro System (https://aws.amazon.com/ec2/nitro/), which is a set of custom application-specific integrated circuits (ASICs) dedicated to handling very specialized functions. AWS Outposts uses the same technology, standing in line to receive any of the latest and greatest advancements AWS can bring into the hardware technology space. Being such a uniform and purpose-built solution, it benefits from a fully automated, zero-touch deployment for maximum frictionless operations.
Now, we are equipped to broadly understand the AWS Outposts offering: a stepping stone deployed outside the AWS cloud, with strong network connectivity requirements to an AWS Region, capable of running a subset of AWS services and capabilities, and conceived and designed by AWS with its own DNA.
AWS Outposts is not a hardware sell, it is not general-purpose infrastructure for deploying traditional software solutions, and it is not meant to run disconnected from an AWS Region. AWS Outposts is a cloud adoption decision because you are running your workloads not on cloud-like infrastructure but rather on a downscaled cloud infrastructure. This is evident in the due-diligence phase, where an AWS Outposts opportunity can be disqualified by the field teams if the customer workloads are capable of running in an AWS Region. AWS believes in the philosophy that if workloads can run in an AWS Region, they should run in an AWS Region.
Basically, AWS is asking what use cases and business requirements prevent certain workloads from operating in the cloud, a question that might seem to defy common sense. Does that mean AWS is trying to discourage customers from taking the Outposts route in favor of bringing them from the edge into the core Region?
Very much the reverse – AWS wants to make sure customers are making informed decisions and understand the use cases for Outposts. Fundamentally, AWS understands that customers are effectively setting foot in the cloud, with Outposts being the enabler that galvanizes cloud adoption and the catalyst for companies to upskill their teams, build a cloud operations model, and become trained in AWS technologies and services.
At this point, you should be able to identify the edge IT space, the gap between the cloud and the on-premises data center, and also understand the historical challenges associated with operating infrastructure spanning these significantly different domains.
As the initial solutions to address this problem were not good enough, AWS developed Outposts to be the answer to seamlessly bridging these two worlds. Now, it is time to frame AWS Outposts in this edge space to see how it handles the assignment.