Extending resilience to containers and serverless
Building resilient applications on AWS means extending your resilience strategy beyond instances to containers and serverless services. Containers offer a lightweight, portable way to package and deploy applications. By using containers, you can isolate application dependencies and ensure consistent, predictable behavior across environments. Containers also provide resource isolation, allowing you to run multiple applications on a single host while maintaining high availability and security.
Serverless computing, on the other hand, allows you to build and run applications without managing infrastructure. With serverless, you pay only for the resources your application consumes, eliminating the need to provision and maintain servers. This can significantly reduce the operational overhead and complexity associated with traditional infrastructure management. Furthermore, serverless services are typically designed to be highly scalable and fault-tolerant, making them an ideal choice for building resilient applications.
AWS offers a range of container and serverless services to support your resilience needs. Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Kubernetes Service (Amazon EKS) are managed container orchestration services that make it easy to deploy, manage, and scale containerized applications. AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers, and AWS Fargate is a serverless compute engine that lets you run containers without managing the underlying infrastructure.
By leveraging these services, you can build highly resilient applications that are scalable, fault-tolerant, and cost-effective.
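As a minimal sketch of the serverless model, the following AWS SAM template defines a Lambda function with nothing more than a handler, a runtime, and a memory setting; AWS provisions, scales, and patches the compute underneath. The function name and inline code here are hypothetical placeholders, not part of any real application:

```yaml
# Minimal AWS SAM template defining a single Lambda function.
# The resource name and inline code are illustrative placeholders.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler      # inline code is written to index.py
      Runtime: python3.12
      MemorySize: 128             # MB; the only capacity setting you manage
      Timeout: 10                 # seconds
      InlineCode: |
        def handler(event, context):
            # AWS runs this on demand; there are no servers to provision or patch
            return {"statusCode": 200, "body": "hello"}
```

A template like this could be deployed with the SAM CLI (sam build, then sam deploy), and you are billed only while the handler actually runs.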
Container environments are ephemeral by nature. You treat a container not as a pet but as cattle, meaning you replace the container resource when it isn't performing to your expectations. For example, if a Kubernetes Pod is running out of memory, you never go into the Pod to increase the allocated memory. Instead, you create a new Pod configuration and redeploy the resource, so that the existing Pod is completely replaced by a new Pod with the higher memory allocation. This means it is dangerous to keep state in Pod memory, because applications can lose that state every time a Pod is replaced.
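A sketch of this replace-don't-repair workflow might look like the following manifest, where the fix for memory pressure is a new configuration rather than an in-place edit of the running container. The names, image, and values are hypothetical:

```yaml
# Hypothetical Pod manifest; memory pressure is addressed by a new
# configuration revision, not by modifying the running container.
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: example.com/api:1.0   # placeholder image
      resources:
        requests:
          memory: "256Mi"
        limits:
          memory: "512Mi"          # raised from 256Mi in the previous revision
```

Because most Pod fields are immutable once created, applying the new limit means deleting and recreating the Pod; in practice, the same change is usually made in a Deployment's Pod template, which causes Kubernetes to roll out replacement Pods automatically.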
Whatever the container orchestration technology, you should always run more than one copy of the application at any given time, so that redundant resources are available to serve your customer requests. It is typical for customers to run multiple replicas of an application through a Deployment configuration in Amazon EKS. In this case, Kubernetes maintains the desired number of Pod copies and, with scheduling constraints such as topology spread constraints or Pod anti-affinity, places each Pod on a different underlying node for high availability and resiliency. Refer to the Kubernetes documentation (https://kubernetes.io/docs/concepts/workloads/) to learn about concepts such as Pods, Deployments, and more.
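A minimal sketch of such a Deployment, with placeholder names and image, might run three replicas and ask the scheduler to keep them on separate nodes:

```yaml
# Hypothetical Deployment running three replicas spread across nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                      # redundant copies serving traffic
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      topologySpreadConstraints:
        - maxSkew: 1               # keep the per-node Pod count balanced
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: web
      containers:
        - name: web
          image: example.com/web:1.0   # placeholder image
          ports:
            - containerPort: 80
```

With this configuration, losing a single node takes down at most one replica, and the Deployment controller immediately schedules a replacement Pod elsewhere.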
All the concepts and logic behind building resilient architectures on virtual machines or other environments apply to container environments as well: emphasizing stateless architectures, maintaining redundancy, automating deployment orchestration, closely monitoring performance and health metrics, reducing the blast radius through infrastructure isolation, and more.