AWS DevOps Simplified: Build a solid foundation in AWS to deliver enterprise-grade software solutions at scale

AWS DevOps Simplified

Accelerating Your DevOps Journey with AWS

Digital transformation is key to the success of any modern business that wants to deliver great products and delight its customers. It involves the integration of digital technologies and solutions across all areas of the organization. To be successful in this journey, the majority of organizations leverage software automation. This helps them stay ahead of their competition, innovate faster, and reduce the lead time to produce something valuable for the end user. However, just moving fast is not sufficient. Moving fast, and executing well, every single time is where the real magic happens.

Amazon Web Services (AWS) and DevOps are two such enablers that help organizations grow fast while staying in control. They are representative of high-performing teams and agile cultures. Both are often misunderstood, in different ways.

DevOps is a human-centered approach that aims to improve communication between people, with software and automation contributing to this goal. It removes the silos that block organizations from delivering software faster and with increased quality. Adopting DevOps requires a change in culture and mindset. The core idea is to remove barriers between development and operations – the two traditionally siloed groups. By communicating with each other frequently, these groups take complete ownership of their deliverables, often going beyond the scope of their job titles. In some organizations, there might not be any difference between development and operations; they are just seen as one product team that owns the complete life cycle of the software – they build it, they run it.

AWS, on the other hand, is a public cloud provider that helps users get rid of undifferentiated heavy lifting. It provides managed services that help you focus on what you do best while it takes care of everything else. However, it is important to understand that the benefit you reap from these services will largely depend on how you use them and the level of AWS expertise within your organization. Some customers end up increasing their costs while others are able to reduce them. Some might make their application architectures more resilient than before, and others may not. In addition, people mostly view the cloud as a replacement for their on-premises IT landscape, but its real value also shows in how seamlessly it integrates with your existing or future platforms. Sometimes, it just sits there, alongside your data centers, providing you with core differentiating capabilities.

To keep it short, DevOps is not just about automation and tools; DevOps is not just about the cloud. There’s a lot more to it. But if you are a software professional, using AWS anyway, and want to accelerate your DevOps adoption, this book is for you. We want you to be successful in accelerating your software delivery by using AWS. The intent is not to have answers to all the problems you will ever face, but rather to ensure a strong foundational understanding and strategy that you can apply to a variety of problems on AWS, now and in the future.

In this chapter, we will go through some practical solution implementations and DevOps methodologies and will discuss how AWS fits into the overall picture.

The main topics in this chapter are the following:

  • AWS and DevOps – a perfect match
  • Key AWS DevOps services

AWS and DevOps – a perfect match

Gartner’s Cloud Infrastructure and Platform Services (CIPS) Magic Quadrant report (https://www.gartner.com/doc/reprints?id=1-2AOZQAQL&ct=220728) positioned AWS as a Leader on both of its axes – Ability to Execute and Completeness of Vision. This speaks volumes about the reliability of the platform in hosting mission-critical workloads that can adapt to changing customer demands. DevOps methodologies complement this with time-tested ways of working that have a positive impact on overall IT service delivery.

However, let’s not sugarcoat this – efficiently operating your enterprise-grade software environments on AWS can be challenging. The cloud provider has been expanding its list of services since the beginning, so how you use and implement them, backed by a solid strategy for the future, will be key.

Even before we dive into anything regarding AWS or DevOps, let’s first go through some real-life examples, covering aspects that are relevant to any software professional. The idea here is to discuss a few approaches that have helped me over many years of maintaining or writing software. I hope these topics are useful for you as well.

Production-like environments

While working as a DevOps specialist a few years ago, I was tasked with helping the developers in my team ship features faster. Once I understood the challenges they faced with their huge monolithic LAMP stack (Linux, Apache, MySQL, and PHP) application, MySQL surfaced as a common pain point across the board. The continuously increasing size of the on-premises production databases (1 TB+) meant that the developers were frequently unaware of the challenges the application would face in the live environment when the production load kicked in. The application (a warehouse management system) was used across several countries in Europe, with each country having its own dedicated MySQL instance and read replicas. Every minute of downtime or system degradation directly impacted shipping, packaging, and order invoicing. At that time, the developers were using local MySQL instances to develop and test new features and would later ship them off to staging, followed by production rollout.

With the problem statement clear, a promising next step was to enable them to develop and test in production-like environments. This would allow them to see how their systems were reacting to evolving customer usage, and to have a better understanding of the production issues at an earlier stage.

With a strong understanding of Bash scripting (…or at least I thought so) and basic MySQL administration skills, I decided to build shell scripts that could prepare these testbed environments and refresh them with the most recent production data on a daily basis. A request for a matching number of new MySQL servers was raised with the on-premises infrastructure team. They provisioned them in a week and set up the operating system and the required libraries for MySQL, before handing my newly acquired liability over to me. Moving forward, all maintenance and upkeep of these servers were my responsibility. I later scheduled cronjobs on all these servers to run the scripts created previously. They would perform the following steps at a pre-configured time of day (a sketch follows the list):

  1. Copy the previous day’s production backups from the fileserver to the local filesystem.
  2. Remove all existing data in the local MySQL database.
  3. Dump the new backup data into the local MySQL database.
  4. Perform data anonymization and remove certain other confidential tables.
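
For illustration, here is a minimal sketch of one such refresh script, translated out of Bash into Python for readability. Every host, path, database, and table name in it is hypothetical:

```python
#!/usr/bin/env python3
"""Nightly testbed refresh -- a sketch of the four cron-driven steps above.
All hosts, paths, database, and table names are hypothetical."""
import subprocess
from pathlib import Path

BACKUP_SRC = "fileserver:/backups/warehouse/latest.sql.gz"  # previous day's dump
LOCAL_COPY = Path("/var/backups/latest.sql.gz")
DB = "warehouse"
CONFIDENTIAL_TABLES = ["payment_details", "customer_documents"]

def run(cmd) -> None:
    # check=True aborts the whole refresh as soon as any step fails
    subprocess.run(cmd, shell=isinstance(cmd, str), check=True)

# 1. Copy the previous day's production backup to the local filesystem.
run(["rsync", "-a", BACKUP_SRC, str(LOCAL_COPY)])

# 2. Remove all existing data in the local MySQL database.
run(f'mysql -e "DROP DATABASE IF EXISTS {DB}; CREATE DATABASE {DB};"')

# 3. Dump the new backup data into the local MySQL database.
run(f"gunzip -c {LOCAL_COPY} | mysql {DB}")

# 4. Anonymize personal data and drop confidential tables.
run(f"mysql {DB} -e \"UPDATE customers SET email = CONCAT(id, '@example.invalid');\"")
for table in CONFIDENTIAL_TABLES:
    run(f'mysql {DB} -e "DROP TABLE IF EXISTS {table};"')
```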

You can see how the different entities communicated in the following figure:

Figure 1.1: Bash scripts to manage database operations

The following day, the developers had recent production replicas at their disposal, anonymized for development and testing. This was a huge step forward, and the developers were excited: they were now developing and testing against environments that matched production-level scale and complexity. The excitement, however, was soon overshadowed by new issues. What happens if a particular cronjob execution does not terminate successfully? What if the database import takes forever? What if the fileservers are not responding? What if the local storage disk is full?

How about a parent orchestrator script that manages all these edge cases? Brilliant idea! I spent the next few days building an orchestrator layer that coordinated the execution of all these scripts. It was not too long before I had a new set of Bash scripting problems to solve: tracking child process executions, exception handling, graceful cleanups, handling kernel signals…the list goes on. Doing all this in Bash was a Herculean task. What started as a simple set of scripts now evolved into a framework that required a lot of investment in time and effort.
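
For a flavor of what that orchestration layer had to handle, here is a sketch of child-process supervision with graceful signal handling in Python, where each concern is a few lines; in Bash, every one of them must be hand-rolled. The script names are hypothetical:

```python
"""Sketch: supervising child scripts with graceful cleanup on kernel signals.
Script names are hypothetical."""
import signal
import subprocess
import sys

current = None  # the child process currently running

def cleanup(signum, frame):
    # Graceful cleanup: forward termination to the running child, then exit.
    if current is not None and current.poll() is None:
        current.terminate()
        current.wait(timeout=30)
    sys.exit(1)

signal.signal(signal.SIGTERM, cleanup)
signal.signal(signal.SIGINT, cleanup)

for script in ["copy_backup.sh", "reset_db.sh", "import_dump.sh", "anonymize.sh"]:
    current = subprocess.Popen(["bash", script])
    if current.wait() != 0:
        # Stop the chain as soon as one step fails.
        sys.exit(f"{script} failed with exit code {current.returncode}")
```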

Days and weeks passed, and the framework kept evolving. Bugs were identified. New feature requests from the developers were implemented. The framework now had a fancy new name: Bicycle – end-to-end life cycle management for Bash scripts. Finally, after two months, a YAML config-driven Bash framework came into being that sent error notifications to Slack, emailed execution reports as attachments, orchestrated and measured the entire flow of operations, and was generic enough to be adopted by other teams easily. This was not solely for managing database operations, but rather any collection of Bash scripts put together to accomplish a task, as you can see in Figure 1.2:

Figure 1.2: Bicycle framework – managing the life cycle of Bash scripts

The framework served the team well for two to three weeks and then the next wave of challenges became evident:

  • Data import time was increasing exponentially. To execute parallel MySQL threads, I needed more compute power – so, another request to the infrastructure team was needed.
  • High I/O operations required more performant disks. To upgrade these, the infrastructure team had to raise a new purchase order and install new SSDs.
  • Developers now wanted the capability to be able to maintain the latest X versions of the backups. Where should these be stored?

As you might have noticed already, the framework had matured considerably, but it was only as good as the availability, scalability, and reliability of the underlying infrastructure. With underpowered machines, disk capacity issues, and frequent network timeouts, there was only so much the framework could do.

What started as an initiative to increase development velocity soon transitioned into a technology-focused framework. Were there some technical learnings from this exercise? A lot. In fact, there were so many that I wrote a Best Practices for Bash Scripts blog afterward, which can be found at https://medium.com/p/17229889774d. Did it resolve all the problems the developers were facing? Probably not. Before embarking on these lengthy development cycles and going down the rabbit hole of solving technical problems, it would have been better to build a little, test a little, and always challenge myself with the question, Is this what my customers really need?

Knowing your customers (and their future needs)

It’s of paramount importance to know the end beneficiary of your work. You will always have a customer – internal, external, or both. If you are not clear about it, I would strongly recommend discussing this with your manager or colleagues to understand for whom the solution is being built. It’s essential to put yourself in your customer’s shoes and approach the problems and solutions from their perspective. Always ask yourself whether what you are doing will address your customers’ issues and delight them. It took me at least two or three months to bring the Bash framework up to the desired level of performance and utility. However, a Bash script orchestrator was not something my customers (developers) required. They needed a simple, scalable, and reliable mechanism to reproduce production-like databases. In fact, operating this framework was an additional overhead for them. It was not helping them with what they did best – writing code and delivering business outcomes.

Focusing on iterative development and failing fast

Another prerequisite for delivering impactful customer solutions is to establish an iterative working model: break work into manageable pieces, monitor the success metrics, and establish a feedback mechanism to validate progress. Applying this to the aforementioned situation would have meant that developers had complete visibility of the implementation from the beginning. Collaborating, we could have defined the success criteria (time to provision production database replicas) at the very start of the process.

This is very similar to the trunk-based development approach used by high-performing software teams. They frequently merge small segments of working code into the main branch of the repository, which greatly improves visibility and highlights problems more quickly.

Prioritizing business outcomes over technology

As an IT professional, it is very easy to have tunnel vision, whereby the entire focus is on technical implementations. Establishing feedback mechanisms to ensure that the business outcomes are met will avoid such situations and will help with effective team communications while accelerating delivery. This is what DevOps is all about.

Offering solutions as a service

It is important to take the cognitive load off your customers and offer them services that are operationally light, easy to consume, and require little to no intervention. This enables them to focus on their core job, without worrying about any add-on responsibilities.

Offering the developers in my team well-documented, easy-to-consume interfaces (or APIs) would have served them far better than onboarding them onto the custom-built Bash framework. What they really needed was an easy method to provision production-like databases. Exposing them to the underlying infrastructure scalability issues and Bash internals was an unnecessary cognitive load that ideally should have been avoided.

Similarly, had the on-premises infrastructure team focused on their customer (me), my job of requesting new infrastructure resources would have been easier, without having to go through all the logistics and endure a long wait until something tangible was ready for use.

This is an area AWS excels in. It reduces the cognitive load for the end user and enables them to deliver business outcomes, instead of focusing on the underlying technology. The customer consumes the services and has less to worry about when it comes to the availability, scalability, and reliability of the service, as well as the underlying infrastructure. It offers these services in the form of APIs with which the developers can interact, using the tools of their choice.

Mapping the solution components to the AWS services

One exercise that will often help you when working with AWS is designing your solution, identifying the key components, and then evaluating whether some of these undifferentiated tasks can be offloaded to AWS services. It’s good practice to compare these choices and conduct a cost-benefit analysis rather than adopting the services immediately. Let’s quickly dive into the main components of the database replica example discussed in the Production-like environments section and consider whether AWS services could have been an option to avoid reinventing the wheel:

Figure 1.3: Mapping Bash framework components to AWS services

From a timeline perspective, building the entire stack from scratch, as seen in Figure 1.3, took around three months, but you can provision similar services in your AWS account in less than three hours. That’s the level of impact AWS can have on your DevOps journey. In addition, it’s important to understand that you need not go all-in on AWS. If Amazon S3 (a data storage and retrieval service) was all that was needed, then retaining the other components on-premises and using AWS as an extension of the solution could also have been a viable approach to the problems at hand.
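
As an illustration of that extension approach, swapping the on-premises fileserver for Amazon S3 takes only a couple of calls with the AWS SDK for Python (boto3); the bucket and key names below are hypothetical:

```python
"""Sketch: using Amazon S3 as the backup store while everything else stays
on-premises. Bucket and key names are hypothetical."""
import boto3

s3 = boto3.client("s3")

# On the backup host: push the nightly production dump once...
s3.upload_file("/var/backups/latest.sql.gz",
               "example-warehouse-backups", "mysql/latest.sql.gz")

# ...and on each testbed: pull it on demand, instead of straining a fileserver.
s3.download_file("example-warehouse-backups", "mysql/latest.sql.gz",
                 "/var/backups/latest.sql.gz")
```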

To summarize, understand your needs, evaluate the benefits provided by AWS services, and adopt only what helps you in the long run.

Now, let’s discuss another instance in which I helped the same developers scale their continuous integration activities with GitLab Continuous Integration and Continuous Delivery, but this time, with AWS. If you have not been exposed to these terms before, continuous integration is a practice that automates the integration of code changes from multiple developers into a single project, and GitLab is a software development platform that helps with the adoption of DevOps practices.

Scaling with the cloud

The GitLab Continuous Integration and Continuous Delivery suite helps software teams collaborate better and frequently deploy small, manageable chunks of code into production environments. My company at that time was using a self-hosted, on-premises version of GitLab Continuous Integration and Continuous Delivery.

There are three main architectural components of GitLab Continuous Integration and Continuous Delivery that are especially relevant to this discussion:

  • Control plane: This is the layer that interacts with the end user, so APIs, web portals, and so on all fall into this category. This was owned and managed by the central infrastructure and operations team.
  • Runners: Runners are the compute environments GitLab uses to execute the pipeline stages and their respective processes. As soon as developers commit code to their repository, a pipeline triggers and executes its stages in sequence by leveraging these compute resources. Due to heterogeneous project requirements, each team owned and operated their own runners. Based on the technology stacks they worked with, they could decide which type of compute resources would best fit their needs. As a fallback mechanism, there was also a shared pool of GitLab runners that any team could use. However, as you can imagine, these were not very reliable in terms of availability under spiky workloads. For example, if your Java build needs two CPU cores and 1 GB of RAM immediately, releasing an urgent patch to production could become a challenge when the shared pool is busy. Therefore, it was generally recommended to begin with the shared runners but to switch to custom-built, self-managed runners when needed.
  • Pipelines: Lastly, if you have used Jenkins or AWS CodePipeline in the past, GitLab Pipelines are similar in terms of functionality. You can define different phases of your software delivery process in a YAML file, commit it alongside your code, and let GitLab manage your software delivery from there on.

At that time, I was supporting five to six software projects for the developers of my team. Having started with the shared pool of GitLab runners hosted on-premises, we were able to leverage the compute resources for our needs, for roughly 80-120 builds per day. However, with increasing adoption throughout the company, the resources on these runners would frequently become exhausted, leading to several pipeline processes waiting for execution. Additionally, occasional VM failures meant that all software delivery processes dependent on these shared resources across the company would come to a halt. This was certainly not a good situation to be in. The central ops team added more resources to this shared pool, but this was still a static server farm, whereby my team’s build jobs were dependent on how others used these resources.

Having learned from the issues relating to unscalable on-premises infrastructure during the database setup, as discussed previously, I decided to leverage AWS cloud capabilities this time. Discussions with the developers (customers) led to the definition of the following requirements, which were all fulfilled with AWS services out of the box:

  • Flexibility to scale infrastructure up/down
  • Usage monitoring for the runners in AWS
  • Less operational work

Across the entire solution design, the only effort required from my side was the code to register/de-register these runners with the control plane when the compute instances were started or stopped.

The final design (see Figure 1.4) leveraged auto-scaling groups in AWS, a mechanism to dynamically scale compute resources up or trim them down depending on usage patterns:

Figure 1.4: GitLab Continuous Integration and Continuous Delivery with runners hosted in AWS

As soon as the new servers were started, they registered themselves with the GitLab control plane.
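
A minimal boto3 sketch of this design follows; it assumes a pre-existing launch template whose AMI has gitlab-runner installed and whose user data performs that registration on boot. All names and subnet IDs are hypothetical:

```python
"""Sketch: an Auto Scaling group of GitLab runners. Assumes a launch template
whose AMI has gitlab-runner installed and whose user data registers the runner
with the control plane on boot. All names and IDs are hypothetical."""
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="gitlab-runners",
    LaunchTemplate={"LaunchTemplateName": "gitlab-runner", "Version": "$Latest"},
    MinSize=2,            # default pool kept warm for everyday builds
    MaxSize=10,           # headroom for spiky pipeline load
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0aaa,subnet-0bbb",  # hypothetical subnets
)
```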

Extending your on-premises IT landscape with AWS

AWS cloud adoption need not always translate into shutting down data centers and migrating applications through a lift-and-shift approach. The real value lies in starting small, measuring impact, and utilizing cloud offerings as a natural extension to your on-premises IT landscape.

As seen in the previous scenario, the GitLab control plane continued to remain in the on-premises data center, while the runners leveraged the elasticity of the cloud. This gave the developers immediate benefits in terms of compute selection, scalability, reliability, and elasticity. Amazon EC2 (Elastic Compute Cloud) offers scalable computing capacity for virtual servers, along with security, networking, and storage. Combining this with the EC2 Auto Scaling service, I configured capacity thresholds that allowed us to maintain a default set of runners and scale with demand, driven by real usage.

Infrastructure management in data centers usually lacks this level of flexibility unless there are interfaces or resource orchestrators made available to the end user for provisioning resources in an automated way.

Collecting metrics for understanding resource usage

Requirements relating to measuring usage and alerting on thresholds were further simplified with the use of Amazon CloudWatch. CloudWatch is a metrics repository in which different AWS services and external applications publish usage data. EC2, like other services, makes data points such as CPU, network, and disk consumption available, which helps to identify threshold breaches and automate scaling decisions.

Having access to these metrics out of the box is a considerable automation accelerator for two reasons: you need not invest any effort in capturing this data with third-party agents, and the close-to-real-time nature of the data helps with dynamic decision-making. Furthermore, AWS also offers native integrations around alarms and service triggers with CloudWatch, so extending these to the usual notification mechanisms, such as email, SMS, or an external API, is generally a low-effort implementation.
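
As an illustration, the alerting requirement reduces to a single put_metric_alarm call; the alarm name, group name, and SNS topic ARN below are hypothetical:

```python
"""Sketch: an alarm on average runner CPU that notifies an SNS topic (which
could fan out to email, SMS, or an external API). Names and ARNs are
hypothetical."""
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="gitlab-runners-cpu-high",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "gitlab-runners"}],
    Statistic="Average",
    Period=300,              # evaluate five-minute averages...
    EvaluationPeriods=2,     # ...breached twice in a row
    Threshold=75.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:eu-central-1:111122223333:ops-alerts"],
)
```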

Paying for what you use

AWS offers a pay-as-you-go pricing model. In contrast, on-premises resources come with a fixed-price cost model and require a lot of time and operational effort. Combining this pricing model with metrics from CloudWatch, it was possible to automatically scale down the EC2 compute resources during periods of low usage (after work hours and weekends). This further reduced AWS costs by ~20-30%.
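
Such a schedule is a pair of calls against the EC2 Auto Scaling API; the sketch below reuses the hypothetical gitlab-runners group from earlier:

```python
"""Sketch: scale the runner fleet to zero after work hours and back up each
weekday morning. Group name and times are hypothetical."""
import boto3

autoscaling = boto3.client("autoscaling")

# Scale in at 20:00 UTC, Monday to Friday.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="gitlab-runners",
    ScheduledActionName="after-hours-scale-in",
    Recurrence="0 20 * * 1-5",   # cron syntax, UTC
    MinSize=0, DesiredCapacity=0,
)

# Scale out again at 06:00 UTC, Monday to Friday.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="gitlab-runners",
    ScheduledActionName="morning-scale-out",
    Recurrence="0 6 * * 1-5",
    MinSize=2, DesiredCapacity=2,
)
```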

Generally speaking, on-premises infrastructure resources are mostly over-provisioned. This is done to maintain an additional buffer of resources, should ad hoc demand require it. AWS, on the other hand, offers the capability to right-size all your resources based on your exact needs at the moment. This is a big win for agile teams responding to changing customer demands and usage patterns.

Simplifying service delivery through cloud abstractions

Software technology these days is all about abstractions. This is a topic that we will explore in more depth in Chapter 2, Choosing the Right Cloud Service. All AWS services abstract the complexity from the end users around operational aspects. As a result, end users are empowered to focus on the differentiating features and business outcomes. Earlier, we discussed the need to take the cognitive load off the developers. AWS makes this a reality, and you can develop proofs of concept, demos, and production-grade applications in hours or days, which previously took months.

Leveraging the infrastructure elasticity of the cloud

AWS cloud benefits are not limited to procuring more resources when needed but are also about contracting when possible. Of course, this needs to align with the type of workload you plan to run in the cloud. Sometimes, there are known events that would require more resources to handle the increased load, such as festive sales and marketing initiatives. In other cases, when the usage spikes cannot be determined in advance, you can leverage AWS’ auto-scaling capabilities, as we did for GitLab runners.

So far, we have discussed two solution implementations and how adopting cloud services gives a big boost to reliability and scalability, leading to better customer outcomes. Next, let’s learn about some DevOps methodologies that help accelerate the software delivery process. We will later map these key areas to certain AWS services.

DevOps methodologies to accelerate software delivery

As we discussed at the beginning of the chapter, successful organizations use software automation to catapult their digital transformation journey.

As highlighted in the 2022 State of DevOps Report by DORA (https://cloud.google.com/devops/state-of-devops), DevOps methodologies positively influence your team culture and foster engineering best practices that help you ship software with increased velocity and better reliability. In software engineering, the following principles have been well established and are known to optimize the way teams work and collaborate.

Continuous integration

Continuous Integration (CI) is a software engineering best practice that advocates the frequent merging of code from all software developers in a team to one central repository. This increases the confidence in and visibility of new features being released to the customers. At the same time, automated tests make releasing code multiple times a day seamless and easy. Developers also get quick feedback regarding any bugs that might have been introduced into the system as a result of implementing features in isolation.

Continuous delivery

Continuous Delivery (CD) is the practice of producing code in short cycles that can be released to production at any time. Automatically deploying to a production-like environment is key here. Fast-moving software teams leverage CD to confidently roll out features or patches, on demand, with lightweight release processes.

Continuous deployment

Continuous deployment enables teams to automatically release code to production without any manual intervention. This is indicative of high DevOps maturity and rock-solid automation practices.

This requires deep integration into how the software stack functions. All ongoing operations and customer requests are automatically taken care of, and the release process is hardly noticeable to the end user.

In later chapters, we will go through some hands-on examples around CI/CD processes and use native AWS services to see things in action.

Infrastructure as Code (IaC)

Managing AWS infrastructure components with code, using SDKs, APIs, and so on, makes it very convenient to reliably manage environments at scale. Unlike static provisioning methods used on-premises, these practices enable the creation of complete infrastructure stacks with the use of programmable workflows.

This also reduces the ownership silos across the development and operations teams. The developers are free to use familiar programming language constructs and have end-to-end control of the foundational infrastructural elements.

Effective communication and collaboration

Collaboration between team members is crucial for faster software delivery. It is advisable to have small teams that share a common goal. Amazon uses the concept of the Two-Pizza Team rule, which suggests creating a workgroup that is no larger than one that can be fed by two pizzas, so roughly an 8-to-10-member team.

Furthermore, this enables the team to not just deliver software but own it end to end. Operations, deployment, support, and feature development are all owned by the members of this team.

Now that we have a good understanding of the key DevOps methodologies, let’s dive into several AWS services that make this a reality in the cloud.

Key AWS DevOps services

AWS offers managed services that cater to each of these principles. Depending on the organization’s operating model, you can deploy these services in your AWS accounts and give autonomy to all team members to leverage the unlimited potential of the cloud.

Feature roadmaps of all these AWS services are strongly driven by customer feedback. This increases the likelihood of enterprise-grade usage patterns being supported out of the box. Imagine use cases such as automatic notifications and deployment triggers as soon as code is committed to a repository, for example. Let’s have a deeper look into the variety of offerings that simplify your DevOps adoption in each of the key areas.

CI

Git workflows are instrumental to the success of any software team. The way they commit code, the comments they use, and how they collaborate across feature requests say a lot about their engineering practices. High-performing teams also ensure quick automated feedback for every single commit that ends up in the central repository. AWS offers three key services to support such requirements.

AWS CodeCommit

A simple explanation for this would be Git as a Service. Git is a distributed version control system that addresses the limitations of the previously used centralized model, such as SVN (Apache Subversion). AWS makes it easier for users to create, operate, and scale Git repositories for their software workloads. Traditionally, on-premises administrators used to provision and manage Git repositories on a self-hosted server. This had its challenges, but with AWS, you just focus on consuming the service for your collaboration needs and everything else is taken care of.

CodeCommit allows you to easily create branches, commit code, and create pull requests for review by your team members. As with all AWS offerings, security is the highest priority, and CodeCommit is no different. By default, all data is encrypted at rest, and secure transit mechanisms such as SSH and HTTPS are used for any access requirements. For the end user, nothing changes, as they still use the same tooling (the git CLI) to communicate with the service endpoints.

Like other services, it also publishes important metrics and events to CloudWatch, which can be used to build automation workflows. Let’s check out just some events that might be interesting for your team’s collaboration needs (a sketch of wiring one into automation follows the list):

  • Creation of pull requests
  • Tracking comments on pull requests
  • Pull request merge status changed
  • Restriction of access to certain branches only for a set of users
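
As an example, the following hedged sketch routes pull request state changes to a notification topic with an EventBridge rule; the rule name and topic ARN are hypothetical:

```python
"""Sketch: reacting to CodeCommit pull request events with an EventBridge rule
that notifies a (hypothetical) SNS topic."""
import json
import boto3

events = boto3.client("events")

# Match pull request state changes emitted by CodeCommit.
events.put_rule(
    Name="codecommit-pull-requests",
    EventPattern=json.dumps({
        "source": ["aws.codecommit"],
        "detail-type": ["CodeCommit Pull Request State Change"],
    }),
)

# Fan matched events out to a team notification topic.
events.put_targets(
    Rule="codecommit-pull-requests",
    Targets=[{"Id": "notify-team",
              "Arn": "arn:aws:sns:eu-central-1:111122223333:team-updates"}],
)
```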

AWS CodeBuild

Soon after the code is committed to a repository, automated processes are triggered. These might be creating artifacts, running tests, or building container images. CodeBuild is a service that provides a lightweight and scalable execution environment in which certain operations can be performed on the recently committed code. You can configure your build environments with basic configuration details, such as CPU/memory resources and the commands you would like to run.

If you have configured and managed build servers on your own, you can imagine the benefit such managed services bring to the table. You are only charged for the duration for which the builds run, and the service scales automatically to process multiple parallel executions.

Finally, it can also store build artifacts, such as JAR files, executables, or even obfuscated JavaScript files, in locations such as Amazon S3.
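
A hedged sketch of such a build project definition via boto3 follows; every name, URL, and ARN in it is hypothetical:

```python
"""Sketch: a minimal CodeBuild project -- compute size, build image, source,
and an S3 location for artifacts. All names, URLs, and ARNs are hypothetical."""
import boto3

codebuild = boto3.client("codebuild")

codebuild.create_project(
    name="warehouse-app-build",
    source={
        "type": "CODECOMMIT",
        "location": "https://git-codecommit.eu-central-1.amazonaws.com/v1/repos/warehouse-app",
    },
    environment={
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/standard:7.0",  # AWS-managed build image
        "computeType": "BUILD_GENERAL1_SMALL",  # CPU/memory size of the build
    },
    artifacts={"type": "S3", "location": "example-build-artifacts"},
    serviceRole="arn:aws:iam::111122223333:role/codebuild-service-role",
)
```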

AWS CodeArtifact

This is the artifact repository in which your compiled binaries, scripts, and executables can be stored for later consumption. It replaces the need for artifact repositories that teams generally manage on their own, although they sometimes opt for a remote-hosted offering. Out-of-the-box compatibility with PyPI, Maven, npm, and so on makes it easy to store your artifacts directly in AWS.

We have just scratched the surface by discussing these services that enable CI. There is more to them, which will be covered in the following chapters.

Next, let’s discuss delivery and deployment methodologies, which prepare or deploy builds for production usage.

CD and continuous deployment

Successful implementations of CI practices allow for the automatic preparation of code release activities. High-performing teams typically automate an integration test suite while practicing CD and continuous deployment. They deploy the code in production-like environments, measure performance, run load tests, and evaluate known edge cases before deploying in live environments.

The only difference between CD and continuous deployment is that the former does not automatically promote the artifact to production. Continuous delivery prepares a production-ready build, but the final deployment still requires human intervention, which also means failures can be caught before release, without needing automated rollbacks. With the increasing maturity of tooling and automation, teams at some stage start automatically rolling out code to production environments, which is continuous deployment. AWS offers two main services in these areas.

AWS CodeDeploy

As the name suggests, this is a code deployment service. It provides support for a variety of compute offerings, such as EC2, AWS container services, and even on-premises machines. Furthermore, several deployment strategies control the rollout process for you and back it up with health checks that add to the visibility and reliability of code rollout procedures.

Depending on the application architecture and rollout methodology, one of the following could be used (a sketch of triggering a deployment follows the list):

  • In-place deployments: Update code in all instances in the application group followed by a service restart. The scope of change could be controlled by going all in at once or doing a controlled release.
  • Blue-green deployments: An identical environment is set up and CodeDeploy deploys different versions in both, giving the end user the capability to switch the production traffic when possible and revert when issues are observed.
  • Canary deployments: This is a deployment strategy in which new code is released in phases. For example, every few minutes, X% of the servers get the code upgrade, and this continues until all servers are upgraded or a rollback is explicitly performed.
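
As a hedged illustration, here is how a deployment using one of the built-in deployment configurations could be triggered via boto3; the application, group, bucket, and key names are hypothetical:

```python
"""Sketch: triggering an in-place rollout with a built-in deployment
configuration. Application, group, bucket, and key names are hypothetical."""
import boto3

codedeploy = boto3.client("codedeploy")

codedeploy.create_deployment(
    applicationName="warehouse-app",
    deploymentGroupName="production",
    deploymentConfigName="CodeDeployDefault.HalfAtATime",  # upgrade 50% at a time
    revision={
        "revisionType": "S3",
        "s3Location": {"bucket": "example-build-artifacts",
                       "key": "warehouse-app.zip",
                       "bundleType": "zip"},
    },
)
```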

AWS CodePipeline

CodePipeline is an orchestrator that works with all the services discussed previously. It manages the overall software delivery process and is responsible for invoking certain services, in the defined order.

Using YAML and JSON templates, you can code an automated procedure that can be used to reliably release software every single time. The service shines in terms of native integrations with many other services. This abstracts lots of internal details and lets you focus on application-specific details.

IaC

With the ever-increasing complexity of software applications, infrastructure requirements have grown exponentially. Managing all these components manually is error-prone and subject to human limitations. Using standard tools, SDKs, and APIs, AWS makes it easy to manage your entire infrastructure as code. It takes minutes to spin up and tear down infrastructure across an entire AWS region.

AWS offers SDKs in different programming languages such as Python, Go, Ruby, JavaScript, C++, and many more. Using familiar programming syntax, you can develop and operate your entire software stack using code. In the later chapters, we will learn about the relevant AWS services, such as CloudFormation and Cloud Development Kit (CDK).

AWS CloudFormation

With JSON or YAML templates, users can define their entire infrastructure stacks and maintain them as code. CloudFormation automatically builds resource dependency graphs and provisions all resources in the desired order. It further supports multi-region and multi-account rollouts, which is helpful for enterprise-grade AWS landscapes.
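
A minimal sketch of that workflow, assuming nothing beyond boto3 and an inline template with a single (hypothetical) S3 bucket resource:

```python
"""Sketch: creating a CloudFormation stack from a minimal inline YAML template.
Stack and resource names are hypothetical."""
import boto3

TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  BackupBucket:
    Type: AWS::S3::Bucket
"""

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(StackName="backup-stack", TemplateBody=TEMPLATE)

# Block until CloudFormation has provisioned every resource in order.
cloudformation.get_waiter("stack_create_complete").wait(StackName="backup-stack")
```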

AWS CDK

This is an open source infrastructure management framework that works using the concept of constructs – ready-made abstractions for deploying integrated application components. Under the hood, it works with CloudFormation templates but abstracts these details from the end user. It offers native programming language features such as conditionals, composition, and inheritance, which enable the user to apply programming methodologies to infrastructure management. These reusable components can then be shared with other teams in the company. This not only accelerates overall DevOps adoption but also leads to standardized infrastructure solutions for a particular application pattern.
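
For comparison with the raw template above, here is a minimal CDK app in Python that provisions the same kind of bucket; the stack and construct names are hypothetical:

```python
"""Sketch: the same bucket as a CDK construct in Python. CDK synthesizes this
into a CloudFormation template under the hood. Names are hypothetical."""
from aws_cdk import App, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class BackupStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # One construct; CDK expands it into the underlying resources.
        s3.Bucket(self, "BackupBucket", versioned=True)

app = App()
BackupStack(app, "backup-stack")
app.synth()
```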

Summary

In this chapter, we discussed two real-life examples that helped you to compare different approaches to building solutions and to enable your teams. While this highlighted certain drawbacks in the traditional on-premises infrastructure model, you also learned about the practical benefits of using AWS services in the cloud.

Along the way, we also covered some guiding principles that will make your life as a software professional easier. You learned how to think of your end users as customers, align technology offerings to their business outcomes, reduce their cognitive load, and focus on iterative development practices.

With the foundations set, we then dived into important DevOps methodologies that enable software delivery at scale, reliably and securely. Toward the end, you learned about key AWS services that can boost your DevOps adoption and offer enterprise-grade reliability, availability, and security.

In Chapter 2, Choosing the Right Cloud Service, we will learn about the different service models offered by AWS and some strategies to decide what works best for your organization.

Further reading

You can gain a greater insight into AWS service offerings in the continuous integration, continuous delivery, and continuous deployment space by examining the official whitepaper: https://docs.aws.amazon.com/whitepapers/latest/practicing-continuous-integration-continuous-delivery/welcome.html

Key benefits

  • Increase your organization’s DevOps maturity level from both strategic and tactical standpoints
  • Get hands-on AWS experience with ready-to-deploy code examples covering enterprise scenarios
  • Advance your career with practical advice to ensure customer satisfaction and stakeholder buy-in
  • Purchase of the print or Kindle book includes a free PDF eBook

Description

DevOps and AWS are the two key enablers for the success of any modern software-run business. DevOps accelerates software delivery, while AWS offers a plethora of services, allowing developers to prioritize business outcomes without worrying about undifferentiated heavy lifting. This book focuses on the synergy between them, equipping you with strong foundations, hands-on examples, and a strategy to accelerate your DevOps journey on AWS. AWS DevOps Simplified is a practical guide that starts with an introduction to AWS DevOps offerings and aids you in choosing a cloud service that fits your company's operating model. Following this, it provides hands-on tutorials on the GitOps approach to software delivery, covering immutable infrastructure and pipelines, using tools such as Packer, CDK, and CodeBuild/CodeDeploy. Additionally, it provides you with a deep understanding of AWS container services and how to implement observability and DevSecOps best practices to build and operate your multi-account, multi-Region AWS environments. By the end of this book, you’ll be equipped with solutions and ready-to-deploy code samples that address common DevOps challenges faced by enterprises hosting workloads in the cloud.

Who is this book for?

This book is for software professionals who build or operate software on AWS. If you have basic knowledge of the AWS Console or CLI, this book will help you build or enhance your DevOps skills by developing a solid foundational understanding of AWS offerings. You’ll also find it useful if you’re looking to optimize your software delivery cycles and build reliable, cost-optimized, secure, and sustainable solutions on AWS.

What you will learn

  • Develop a strong and practical understanding of AWS DevOps services
  • Manage infrastructure on AWS using tools such as Packer and CDK
  • Implement observability to bring key system behaviors to the surface
  • Adopt the DevSecOps approach by integrating AWS and open source solutions
  • Gain proficiency in using AWS container services for scalable software management
  • Map your solution designs to the AWS Well-Architected Framework
  • Discover how to manage multi-account, multi-Region AWS environments
  • Learn how to organize your teams to boost collaboration

Product Details

Publication date: Sep 29, 2023
Length: 318 pages
Edition: 1st
Language: English
ISBN-13: 9781837634460

Table of Contents

18 Chapters
Part 1: Driving Transformation through AWS and DevOps
Chapter 1: Accelerating Your DevOps Journey with AWS
Chapter 2: Choosing the Right Cloud Service
Chapter 3: Leveraging Immutable Infrastructure in the Cloud
Part 2: Faster Software Delivery with Consistent and Reproducible Environments
Chapter 4: Managing Infrastructure as Code with AWS CloudFormation
Chapter 5: Rolling Out a CI/CD Pipeline
Chapter 6: Programmatic Approach to IaC with AWS CDK
Part 3: Security and Observability of Containerized Workloads
Chapter 7: Running Containers in AWS
Chapter 8: Enabling the Observability of Your Workloads
Chapter 9: Implementing DevSecOps with AWS
Part 4: Taking the Next Steps
Chapter 10: Setting Up Teams for Success
Chapter 11: Ensuring a Strong AWS Foundation for Multi-Account and Multi-Region Environments
Chapter 12: Adhering to AWS Well-Architected Principles
Index
Other Books You May Enjoy

Customer reviews

Rating distribution: 5.0 out of 5 stars (7 ratings)
5 star: 100% | 4 star: 0% | 3 star: 0% | 2 star: 0% | 1 star: 0%

William Francillette – Nov 25, 2023 – 5 stars
This book did not disappointed me and was so pleasant to read. I've learned a lot not only about AWS but also concepts and general guidance about DevOps and cloud. For every chapters, the author prepared a concrete example available in his GitHub repo so that you can see the magic in action. The book starts with the benefits of DevOps, the cloud in general and how it can boost your organization. Then the author take us through the templating features of AWS and particularly the Cloud Formation templates and the CDK. Those are powerful tools and convenient for deploying configuration and resources at scale. We then learn about CodeCommit, CodeBuild, CodeDeploy, CodePipeline and using Cloud9 IDE to deploy pipelines and infrastructure as code. The author also take us through ECS, ECR and EKS before looking at monitoring solution like CloudWatch. The last part of the book is more about cloud architecture and best practices using the AWS well architected framework. Thanks to Packt for such quality content!
Amazon Verified review

Padmini Subramanian – Dec 18, 2023 – 5 stars
Books are good resources even for tech, to enhance your learning progress along with Virtual Labs for Hands-On eg. AWS skill builder, Cloud Academy, Whizlabs, etc. This Book is one such Manual, helping me to progress my DevOps Learning Journey along with Skill builder, Udemy, Cloud Academy, ... Virtual Labs. Love this Book, afterall it's from an Evangelist and endorsement written on the Cover is From AWS Global Leadership - Director of DevSecOps.
Amazon Verified review

Lucky – Dec 02, 2023 – 5 stars
"AWS DevOps Simplified" by Akshay Kapoor is a comprehensive guide to implementing DevOps on AWS. The book covers a wide range of topics, from the basics of AWS DevOps to more advanced concepts such as infrastructure as code, containerization, and observability. Kapoor's writing is clear and concise, and he uses plenty of examples to illustrate his points. The book is also well-organized, making it easy to find the information you need. I particularly enjoyed the book's focus on hands-on learning. Kapoor provides readers with step-by-step instructions on how to use a variety of AWS DevOps tools, including Packer, CDK, and CodeBuild. "AWS DevOps Simplified" is an essential resource for any software professional who wants to learn how to implement DevOps on AWS. The book is also a valuable reference for anyone who is already familiar with DevOps and wants to learn more about how to use AWS to implement their DevOps practices. Here are some of the things I liked most about the book: The book is comprehensive and covers a wide range of topics. Kapoor's writing is clear and concise. The book uses plenty of examples to illustrate the concepts. The book is well-organized and easy to follow. The book is focused on hands-on learning. I highly recommend "AWS DevOps Simplified" to anyone who is interested in learning how to implement DevOps on AWS.
Amazon Verified review

Jan M. – Nov 14, 2023 – 5 stars
I really enjoyed reading the book. Lots of insights! A clear recommendation for everyone interested in DevOps implemented via AWS
Amazon Verified review

arunvel arunachalam – Oct 29, 2023 – 5 stars
This book provides an insightful and practical guide to Amazon Web Services (AWS). It offers a comprehensive overview of AWS services, architectures, and best practices, making it an invaluable resource for both beginners and experienced professionals in the field of cloud computing. The book's clear and concise explanations, along with real-world examples, facilitate a deep understanding of AWS, from basic concepts to advanced applications. Whether you're looking to leverage AWS for your business or simply aiming to enhance your cloud knowledge, "AWS DevOps Simplified" is a highly recommended resource that demystifies the cloud and equips you with the skills needed to navigate this ever-evolving technology landscape.
Amazon Verified review

FAQs

What is the delivery time and cost of the print book?

Shipping Details

USA:

Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K time would start printing from the next business day. So the estimated delivery times start from the next day as well. Orders received after 5 PM U.K time (in our internal systems) on a business day or anytime on the weekend will begin printing the second to next business day. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is a customs duty/charge?

Customs duties are charges levied on goods when they cross international borders. They are taxes imposed on imported goods. These duties are charged by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

Orders shipped to the countries listed under the EU27 will not bear customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea:

Customs duty or localized taxes may be applicable to shipments delivered to recipient countries outside the EU27. These duties must be paid by the customer and are not included in the shipping charges on the order.

How do I know my customs duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin and several other factors like the total invoice amount or dimensions like weight, and other such criteria applicable in your country.

For example:

  • If you live in Mexico and the declared value of your ordered items is over $50, you will have to pay an additional import tax of 19% ($9.50) to the courier service in order to receive your package.
  • Whereas if you live in Turkey and the declared value of your ordered items is over €22, you will have to pay an additional import tax of 18% (€3.96) to the courier service in order to receive your package.
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing the order. Simply contact customercare@packt.com with your order details or payment transaction id. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on the way to you then when you receive it, you can contact us at customercare@packt.com using the returns and refund process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except for the cases described in our Return Policy (i.e., Packt Publishing agrees to replace your printed book if it arrives damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work or is unacceptably late, please contact Customer Relations Team on customercare@packt.com with the order number and issue details as explained below:

  1. If you ordered (eBook, Video or Print Book) incorrectly or accidentally, please contact Customer Relations Team on customercare@packt.com within one hour of placing the order and we will replace/refund you the item cost.
  2. Sadly, if your eBook or Video file is faulty or a fault occurs during the eBook or Video being made available to you, i.e., during download, then you should contact Customer Relations Team within 14 days of purchase on customercare@packt.com, who will be able to resolve this issue for you.
  3. You will have a choice of replacement or refund of the problem items (damaged, defective, or incorrect).
  4. Once Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are only requesting a refund of one book from a multiple order, then we will refund you the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged or with a material defect, contact our Customer Relations Team on customercare@packt.com within 14 days of receipt of the book with appropriate evidence of damage, and we will work with you to secure a replacement copy, if necessary. Please note that each printed book you order from us is individually made by Packt's professional book-printing partner on a print-on-demand basis.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on the laws and regulations). A localized VAT fee is charged only to our European and UK customers on eBooks, Video and subscriptions that they buy. GST is charged to Indian customers for eBooks and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal