
AWS for Solutions Architects: The definitive guide to AWS Solutions Architecture for migrating to, building, scaling, and succeeding in the cloud, Second Edition

By Saurabh Shrivastava, Imtiaz Sayed, Alberto Artasanchez, Neelanjali Srivastav
Rating: 4.3 out of 5 (64 Ratings)
Paperback Apr 2023 692 pages 2nd Edition
eBook: €32.99
Paperback: €41.99
Subscription: Free trial, renews at €18.99 p/m

What do you get with a Packt Subscription?

Free for first 7 days. $19.99 p/m after that. Cancel any time!
  • Unlimited ad-free access to the largest independent learning library in tech. Access this title and thousands more!
  • 50+ new titles added per month, including many first-to-market concepts and exclusive early access to books as they are being written.
  • Innovative learning tools, including AI book assistants, code context explainers, and text-to-speech.
  • Thousands of reference materials covering every tech concept you need to stay up to date.

AWS for Solutions Architects

Join our book community on Discord

https://packt.link/AWS4SAs

The last decade has revolutionized the IT infrastructure industry: cloud computing was introduced, and now it is everywhere, from small start-ups to large enterprises. Nowadays, the cloud is the new normal. It all started when Amazon launched a cloud service called Amazon Web Services (AWS) in 2006 with just a couple of services.

Netflix migrated to AWS in 2008 and became a market disrupter. After that, there was no looking back, and many industry revolutions were led by cloud-born start-ups such as Airbnb in hospitality, Robinhood in finance, Lyft in transportation, and many more. The cloud rapidly gained market share, and now big names like Capital One, JP Morgan Chase, Nasdaq, the NFL, and General Electric are all accelerating their digital journeys with cloud adoption.

Even though the term cloud is pervasive today, not everyone understands what the cloud is, as it can mean different things to different people and is continuously evolving. In this chapter, you will learn what the cloud is and, more specifically, what AWS is. You will learn about the vast and ever-growing influence and adoption of the cloud in general and of AWS in particular. After that, you will be introduced to some elementary cloud and AWS terms to get your feet wet with the lingo while gaining an understanding of why cloud computing is so popular. In this chapter, we will cover the following topics:

  • What is cloud computing?
  • What is Amazon Web Services (AWS)?
  • The market share, influence, and adoption of AWS
  • Basic cloud and AWS terminology
  • Why is AWS so popular?

Let's get started, shall we?

What is cloud computing?

At a high level, cloud computing is the on-demand availability of IT resources such as servers, storage, databases, and so on over the web, without the hassle of managing physical infrastructure. The best way to understand the cloud is to take the electricity supply analogy. To get light in your house, you just flip a switch on, and electric bulbs light up your home. In this case, you only pay for your electricity use when you need it; when you switch off electric appliances, you do not pay anything. Now, imagine if you needed to power a couple of appliances, and for that, you had to set up an entire powerhouse. It would be costly, right? It would involve the costs of maintaining the turbine and generator and building the whole infrastructure. Utility companies make your job easier by supplying electricity in the quantity you need. They maintain the entire infrastructure to generate electricity and they can keep costs down by distributing electricity to millions of houses, which helps them benefit from mass utilization. Here, the utility companies represent cloud providers such as AWS, and the electricity represents the IT infrastructure available in the cloud.

While consuming cloud resources, you pay for IT infrastructure such as computing, storage, databases, networking, software, machine learning, and analytics in a pay-as-you-go model. Public clouds like AWS do the heavy lifting to maintain that IT infrastructure and provide you with on-demand access to it over the internet. Because you generally pay only for the time and services you use, and because cloud providers operate at massive scale, it is easy to scale services up and down. Where, traditionally, you would have to maintain your servers all by yourself on-premises to run your organization, you can now offload that to the public cloud and focus on your core business. For example, Capital One's core business is banking, and it does not run a large data center.

As much as we have tried to nail it down, this is still a pretty broad definition. For example, we specified that the cloud can offer software, but that's a pretty general term. Does the term software in our definition include the following?

  • Video conferencing
  • Virtual desktops
  • Email services
  • Contact center
  • Document management

These are just a few examples of what may or may not be included as available services in a cloud environment. When AWS started, it only offered a few core services, such as compute (Amazon EC2) and basic storage (Amazon S3). AWS has continually expanded its services to support virtually any cloud workload. As of 2022, it has more than 200 fully featured services for computing, storage, databases, networking, analytics, machine learning, artificial intelligence, the Internet of Things, mobile, security, hybrid, virtual and augmented reality, media, and application development and deployment. As a fun fact, as of 2021, Amazon Elastic Compute Cloud (EC2) alone offers over 475 types of compute instances.

For the individual examples given here, AWS offers the following:

  • Video conferencing – Amazon Chime
  • Virtual desktops – Amazon WorkSpaces
  • Email services – Amazon WorkMail
  • Contact center – Amazon Connect
  • Document management – Amazon WorkDocs

Not all cloud services are highly intertwined with their cloud ecosystems. Take these scenarios, for example:

  • Your firm may be using AWS services for many purposes, but it may be using WebEx, Microsoft Teams, Zoom, or Slack for its video conferencing needs instead of Amazon Chime. These services have little dependency on other underlying core infrastructure cloud services.
  • You may be using Amazon SageMaker for artificial intelligence and machine learning projects, but you may be using the TensorFlow package in SageMaker as your development kernel, even though Google maintains TensorFlow.
  • If you are using Amazon RDS and choose MySQL as your database engine, you should not have too much trouble porting your data and schemas over to another cloud provider that supports MySQL if you decide to switch over.

However, it will be a lot more difficult to switch to some other services. Here are some examples:

  • Amazon DynamoDB is a proprietary NoSQL database only offered by AWS. If you want to switch to another NoSQL database, porting it may not be a simple exercise.
  • Suppose you are using CloudFormation to define and create your infrastructure, as sketched after this list. In that case, it will be difficult, if not impossible, to use your CloudFormation templates to create infrastructure in other cloud providers' environments. If the portability of your infrastructure scripts is important to you and you are planning on switching cloud providers, then a tool such as Ansible, Chef, or Puppet may be a better alternative.
  • Suppose you have a streaming data requirement and use Amazon Kinesis Data Streams. You may have difficulty porting away from Amazon Kinesis, since the configuration and storage mechanisms are quite dissimilar, if you decide to use another streaming data service such as Kafka.
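
The following is a minimal sketch of the CloudFormation lock-in point above: defining infrastructure as a template and creating it with Python's boto3 SDK. The stack name, region, and the single S3 bucket resource are hypothetical placeholders; the template syntax itself is AWS-specific, which is exactly why it does not port to other providers.

```python
import boto3

# A tiny CloudFormation template: one S3 bucket, no extra properties.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  DataBucket:
    Type: AWS::S3::Bucket
"""

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Create the stack; CloudFormation provisions every resource in the template.
cfn.create_stack(StackName="demo-lock-in-stack", TemplateBody=TEMPLATE)
cfn.get_waiter("stack_create_complete").wait(StackName="demo-lock-in-stack")
```

An equivalent Azure or GCP deployment would need a completely different template format and tooling, which is the portability cost described above.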

As far as we have come in the last 15 years with cloud technologies, I think vendors realize that these are still the early innings, and locking customers in now, while they are still deciding who their vendor should be, will be a lot easier than trying to do so after they pick a competitor.

However, a cloud-agnostic strategy has its pros and cons. You may want to distribute your workload between cloud providers to get competitive pricing and keep your options open, as in the old days. But each cloud has different networking needs, and connecting distributed workloads across clouds so they can communicate with each other is a complex task. Also, each major cloud provider, such as AWS, Azure, and GCP, has a breadth of services, and building a workforce with all three skill sets is another challenge.

Finally, clouds like AWS provide economies of scale, which means the more you use, the more the price goes down; this benefit is diluted if you choose multi-cloud. Again, it doesn't mean you cannot choose a multi-cloud strategy, but you have to think about logical workload isolation. It would not be wise to run the application layer in one cloud and the database layer in another, but you can consider logical isolation, such as running the analytics workload and the application workload in separate clouds.

In this section, you learned about cloud computing at a very high level. Let's now learn about the difference between public and private clouds.

Private versus public clouds

A private cloud is a service dedicated to a single customer; it is like your on-premises data center, accessible to one large enterprise. A private cloud is a fancy name for a data center managed by a trusted third party. The concept gained momentum because, initially, enterprises were skeptical about the security of the multi-tenant public cloud. However, having your own infrastructure in this manner diminishes the value of the cloud, as you have to pay for resources even when you are not using them.

Let's use an analogy to understand the difference between private and public clouds further. The gig economy has great momentum. Everywhere you look, people are finding employment as contract workers. One of the reasons contract work is getting more popular is because it enables consumers to contract services that they may otherwise not be able to afford. Could you imagine how expensive it would be to have a private chauffeur? But with Uber or Lyft, you almost have a private chauffeur who can be at your beck and call within a few minutes of you summoning them.

A similar economy of scale happens with a public cloud. You can have access to infrastructure and services that would cost millions of dollars if you bought them on your own. Instead, you can access the same resources for a small fraction of the cost.

In general, private clouds are expensive to run and maintain in comparison to public clouds. For that reason, many of the resources and services offered by the major cloud providers are hosted in a shared tenancy model. In addition to that, you can run your workloads and applications on a public cloud securely: you can use security best practices and sleep well at night knowing that you use AWS’s state-of-the-art technologies to secure your sensitive data.

Additionally, most major cloud providers' clients use public cloud configurations. That said, there are a few exceptions even in this case. For example, the United States government intelligence agencies are a big AWS customer. As you can imagine, they have deep pockets and are not afraid to spend. In many cases with these government agencies, AWS will set up the AWS infrastructure and dedicate it to the government workload. For example, AWS launched a Top Secret Region, AWS Top Secret-West, accredited to operate workloads at the Top Secret U.S. security classification level. AWS also operates the AWS GovCloud (US) Regions:

  • GovCloud (US-West) Region - Launched in 2011
    Availability Zones: 3
  • GovCloud (US-East) Region - Launched in 2018
    Availability Zones: 3

AWS GovCloud (US) consists of isolated AWS Regions designed to allow U.S. government agencies and customers to move sensitive workloads to AWS. It addresses specific regulatory and compliance requirements, including Federal Risk and Authorization Management Program (FedRAMP) High, Department of Defense Security Requirements Guide (DoD SRG) Impact Levels 4 and 5, and Criminal Justice Information Services (CJIS) to name a few.

Public cloud providers such as AWS give you choices to adhere to the compliance needs required by government or industry regulations. For example, AWS offers Amazon EC2 Dedicated Instances, which are EC2 instances that ensure you will be the only user of a given physical server. Further, AWS offers AWS Outposts, where you can order server racks and host workloads on-premises using the AWS control plane.

Dedicated Instance and Outposts costs are significantly higher than on-demand EC2 instances. On-demand instances are multi-tenant, which means the physical server is not dedicated to you and may be shared with other AWS users. However, just because the physical server is multi-tenant doesn't mean that anyone else can access your server, as your virtual EC2 instances are accessible to you only. As we will discuss later in this chapter, because of virtualization and hypervisor technology, you will never notice the difference between EC2 instances hosted on a dedicated physical server and those on a multi-tenant server. One common use case for choosing Dedicated Instances is government regulations and compliance policies that require certain sensitive data not to reside on the same physical server as other cloud users' data.
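
As an illustration of the tenancy choice above, here is a minimal, hypothetical boto3 sketch that requests single-tenant hardware when launching an instance. The AMI ID and instance type are placeholders, not values from the book; everything else about the request is identical to launching an ordinary shared-tenancy instance.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one instance on hardware dedicated to this account.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",     # placeholder AMI ID
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "dedicated"},  # 'default' would mean shared, multi-tenant hardware
)
print(response["Instances"][0]["InstanceId"])
```

From the application's point of view, the resulting instance behaves exactly like a shared-tenancy one; only the placement (and the price) differs.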

Now that we have gained a better understanding of cloud computing in general, let's get more granular and learn about how AWS does cloud computing.

What is AWS (Amazon Web Services)?

Amazon Web Services (AWS) is the world's most broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Even though there are a few worthy competitors, it doesn't seem like anyone will push it off the podium for a while.

For example, it's difficult to catch up with AWS's pace of innovation. AWS services and features have grown exponentially every year: as shown in the following figure, AWS released over 80 significant new services and features in 2011, followed by nearly 160 in 2012; 280 in 2013; 516 in 2014; 722 in 2015; 1,017 in 2016; 1,430 in 2017; 1,957 in 2018; 2,345 in 2019; 2,757 in 2020; and 3,084 in 2021:

Figure 1.1 – AWS – number of features released per year

There is no doubt that the number of offerings will continue to grow at a similar rate for the foreseeable future. Gartner named AWS as a leader for the 11th consecutive year in the 2021 Gartner Magic Quadrant for Cloud Infrastructure & Platform Services. AWS is innovating fast, especially in new areas such as machine learning and artificial intelligence, the Internet of Things (IoT), serverless computing, blockchain, and even quantum computing.

The following are some of the key differentiators for AWS in a nutshell:

  • Oldest and most experienced cloud provider: AWS was the first major public cloud provider (started in 2006), and since then it has gained millions of customers across the globe.
  • Fast pace of innovation: AWS has 200+ fully featured services to support any cloud workload. It released 3,000+ features in 2021 to meet customer demand.
  • Continuous price reduction: AWS has reduced its prices across various services 111 times since its inception in 2006 to improve the Total Cost of Ownership (TCO).
  • Community of partners to help accelerate the cloud journey: AWS has a large Partner Network of 100,000+ partners across 150+ countries. These partners include large consulting partners and software vendors.
  • Security and compliance: AWS provides security standards and compliance certifications to fulfill your local government and industry compliance needs.
  • Global infrastructure: AWS has 84 Availability Zones within 26 geographic Regions, 17 Local Zones, 24 Wavelength Zones, and 310+ Points of Presence (300+ edge locations and 13 regional mid-tier caches) in 90+ cities across 47 countries.

It’s not always possible to move all workloads into the cloud, and for that purpose, AWS provides a broad set of hybrid capabilities in the areas of networking, data, access, management, and application services. For example, VMware Cloud on AWS allows customers to seamlessly run existing VMware workloads on AWS with the skills and toolsets they already have without additional hardware investment. If you want to run your workload on-premise, then AWS Outposts brings native AWS services, infrastructure, and operating models to virtually any data center, co-location space, or on-premises facility. You will learn more details about hybrid cloud services later in this book.

This is just a small sample of the many AWS services that you will see throughout this book. Let's delve a little deeper into how influential AWS currently is and how influential it has the potential to become.

The market share, influence, and adoption of AWS

For the first nine years of AWS's existence, Amazon did not break down its AWS sales; only in 2015 did Amazon start reporting AWS sales separately. As of April 2022, Microsoft does not fully break down its Azure revenue and profit in its quarterly reports. It discloses its Azure revenue growth rate without reporting the actual revenue number, instead burying Azure revenues in a bucket called Commercial Cloud, which also includes items such as Office 365 revenue. Google was cagey about breaking down its Google Cloud Platform (GCP) revenue for a long time. Google finally broke out its GCP revenue in February 2019, but it still combines its cloud and workplace (G Suite) tools in the same bucket.

AWS has a large market share with a $74B run rate in 2021 and 37% year-over-year growth, which is phenomenal for a business of its size. As of 2021, AWS is leading cloud IaaS with 39% of the market share as per TechRadar’s global cloud market report. AWS has done a great job of protecting its market share by adding more and more services, adding features to existing services, building higher-level functionality on top of the core services it already offers, and educating the masses on how to best use these services.

We are in an exciting period when it comes to cloud adoption. Until a few years ago, many C-suite executives were leery of adopting cloud technologies to run their mission-critical and core services. A common concern was that they felt having on-premises implementations was more secure than running their workloads on the cloud.

It has become apparent to most of them that running workloads on the cloud can be just as secure as running them on-premises. There is no perfectly secure environment, and it seems that almost every other day, we hear about sensitive information being left exposed on the internet by yet another company. But having an army of security experts on your side, as is the case with the major cloud providers, will often beat any security team that most companies can procure on their own.

The current state of the cloud market for most enterprises is a state of Fear Of Missing Out (FOMO). Chief executives are watching their competitors jumping on the cloud, and they are concerned that they will be left behind if they don't leap.

Additionally, we see an unprecedented level of disruption in many industries propelled by the power of the cloud. Let's take the example of Lyft and Uber. Both companies rely heavily on cloud services to power their infrastructure, and old-guard companies in the space, such as Hertz and Avis, that depend on older on-premises technology are getting left behind. Part of the problem is the convenience that Uber and Lyft offer by being able to summon a car on demand. But the inability to upgrade their systems to leverage cloud technologies undoubtedly played a role in their diminishing share of the car rental market.

Let's continue learning some of the basic cloud terminologies and AWS terminology.

Basic cloud and AWS terminology

There is a constant effort by technology companies to offer common standards for certain technologies while also providing exclusive and proprietary technology that no one else offers. An example of this can be seen in the database market. The Structured Query Language (SQL) and the ANSI SQL standard have been around for a long time. The American National Standards Institute (ANSI) adopted SQL as the SQL-86 standard in 1986. Since then, database vendors have continuously supported this standard while offering various extensions to make their products stand out and lock customers into their technology.

Cloud providers provide the same core functionality for a wide variety of customer needs, but they all feel compelled to name these services differently, no doubt in part to try to separate themselves from the rest of the pack. As an example, every major cloud provider offers compute services. In other words, it is simple to spin up a server with any provider, but they all refer to this compute service differently:

  • AWS uses Amazon Elastic Compute Cloud (EC2) instances.
  • Azure uses Azure Virtual Machines.
  • GCP uses Google Compute Engine.

The following tables give a non-comprehensive list of the different core services offered by AWS, Azure, and GCP and the names used by each of them. However, if you are confused by all the terms in the tables, don't fret. We will learn about many of these services throughout the book and when to use them.

Figure 1.2 – Cloud provider terminology and comparison (part 1)

These are some of the other services, including serverless technology services and database services:

Figure 1.3 – Cloud provider terminology and comparison (part 2)

These are additional services:

Figure 1.4 – Cloud provider terminology and comparison (part 3)

The next section will explain why cloud services are becoming popular and why AWS adoption is prevalent.

Elasticity

Elasticity may be one of the most important reasons for the cloud's popularity. Let's first understand what it is.

Do you remember the feeling of going to a toy store as a kid? There is no feeling like it in the world. Puzzles, action figures, games, and toy cars were all at your fingertips, ready for you to play with them. There was only one problem: you could not take the toys out of the store. Your mom or dad always told you that you could only buy one toy. You always had to decide which one you wanted, and invariably, after one or two weeks of playing with that toy, you got bored with it, and the toy ended up in a corner collecting dust, and you were left longing for the toy you didn't choose.

What if I told you about a special, almost magical, toy store where you could rent toys for as long as you wanted, and the second you got tired of a toy, you could return it, exchange it for another toy, and stop any rental charges for the first one? Would you be interested?

The difference between the first, traditional store and the second, magical store is what differentiates on-premises environments and cloud environments.

The first toy store is like setting up infrastructure on your own premises. Once you purchase a piece of hardware, you are committed to it and will have to use it until you decommission it or sell it at a fraction of what you paid for it.

The second toy store is analogous to a cloud environment. If you make a mistake and provision a resource that's too small or too big for your needs, you can transfer your data to a new instance, shut down the old instance, and, importantly, stop paying for that instance.

More formally defined, elasticity is the ability of a computing environment to adapt to changes in workload by automatically provisioning or shutting down computing resources to match the capacity needed by the current workload.

In AWS and the other main cloud providers, resources can be shut down without having to terminate them completely, and the billing for resources will stop if the resources are shut down.

One important characteristic of public cloud providers such as AWS is the ability to quickly and frictionlessly provision resources. These resources could be a single instance of a database or a thousand copies of the application and web servers used to handle your web traffic. These servers can be provisioned within minutes.

Contrast that with how performing the same operation may play out in a traditional on-premises environment. Let's use an example. Say you need to set up a cluster of servers to host your latest service. Your next actions would probably look something like this:

  1. You visit the data center and realize that the current capacity is insufficient to host this new service.
  2. You map out a new infrastructure architecture.
  3. You size the machines based on the expected load, adding a few extra terabytes of storage and gigabytes of memory to ensure that you don't overwhelm the service.
  4. You submit the architecture for approval to the procurement department and hardware vendors.
  5. You wait. Most likely for months.

Once you get the approvals, it is not uncommon to realize that the market opportunity for this service is now gone or has grown beyond the original estimate. Imagine what would happen if, after getting everything set up in the data center and after months of approvals, you told the business sponsor that you had made a mistake: you ordered a 64 GB RAM server instead of a 128 GB one, so you won't have enough capacity to handle the expected load. Getting the right server will take a few more months. Meanwhile, the market is moving fast, and your user workload has increased five times by the time you get the server. That is bad news for the business: because you cannot scale your server fast enough, the user experience will be compromised, and users will switch to other options.

This sort of problem is much less likely to happen in a cloud environment because instead of needing months to provision your servers, they can be provisioned in minutes. Correcting the size of the server may be as simple as shutting down the server for a few minutes, changing a drop-down box value, and restarting the server again. You can even go serverless and let the cloud handle the scaling for you while you focus on your business problems. You will learn more about serverless computing in Chapter 5, Harnessing the Power of Cloud Computing.
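
To make the resize-in-minutes point above concrete, here is a hedged boto3 sketch of that drop-down-box operation done programmatically: stop the instance, change its type, and start it again. The instance ID and the target instance type are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"   # placeholder instance ID

# 1. Stop the undersized instance and wait until it is fully stopped.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# 2. Change its size, for example from a 64 GiB-RAM class to a 128 GiB-RAM class.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "r5.4xlarge"},   # placeholder target type
)

# 3. Start it again; billing for the new size begins only now.
ec2.start_instances(InstanceIds=[instance_id])
```

The whole correction takes minutes instead of the months-long procurement cycle described in the on-premises scenario.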

Hopefully, the example above drives our point home about the power of the cloud. The cloud exponentially improves the time to market by accelerating the time it takes for resources to be provisioned. Being able to deliver quickly may not just mean getting there first. It may be the difference between getting there first and not getting there in time.

Another powerful characteristic of a cloud computing environment is the ability to quickly shut down resources and, significantly, not be charged for a resource while it is down. In our continuing on-premises example, suppose we shut down one of our servers. Do you think we could call the company that sold us the server and politely ask them to stop charging us because we shut it down? That would be a very odd conversation, and it would probably not go well, no matter how persistent we were. They would probably say, "You bought the server; you can do whatever you want with it, including using it as a paperweight." Once the server is purchased, it is a sunk cost for the duration of the server's useful life.

In contrast, whenever we shut down a server in a cloud environment, the cloud provider can quickly detect that and put the server back into the pool of available servers, so other cloud customers can use that newly freed capacity.

This distinction cannot be emphasized enough. The only time absolute on-premises costs may be lower than cloud costs is when workloads are extremely predictable and consistent. Computing costs in a cloud environment on a per-unit basis may be higher than on-premises prices, but the ability to shut resources down and stop getting charged for them often makes cloud architectures significantly cheaper in the long run. Let's look at exactly what this means by reviewing a few examples.

Web storefront - A famous use case for cloud services is running an online storefront. Website traffic in this scenario is highly variable depending on the day of the week, whether it's a holiday, the time of day, and other factors; almost every retail store in the USA experiences more than a 10x user workload during Thanksgiving week. The same goes for Boxing Day in the UK, Diwali in India, and Singles' Day in China, and almost every country has a shopping festival. This kind of scenario is ideally suited to a cloud deployment. In this case, we can set up auto-scaling that automatically scales compute resources up and down as needed, as shown in the sketch after this paragraph. Additionally, we can set up policies that allow database storage to grow as needed.
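
As one possible concrete form of that auto-scaling policy, the following hedged boto3 sketch attaches a target-tracking scaling policy to an existing Auto Scaling group so the web tier grows and shrinks around a CPU target. The group name, policy name, and the 50% target are hypothetical placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep the group's average CPU utilization near 50%; capacity is added or
# removed automatically as storefront traffic rises and falls.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-storefront-asg",   # placeholder group name
    PolicyName="keep-cpu-near-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```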

Big data workloads – As data volumes increase exponentially, Apache Spark and Hadoop continue to grow in popularity for analyzing gigabytes and terabytes of data. Many Spark clusters don't necessarily need to run continuously; they perform heavy batch computing for a period and then can sit idle until the next batch of input data comes in. A specific example would be a cluster that runs every night for 3 or 4 hours and only during the working week. In this case, you want decoupled compute and data storage so that you can shut down resources, perhaps managed on a schedule rather than by demand thresholds. Alternatively, we could set up triggers that automatically shut down resources once the batch jobs are completed. AWS provides that flexibility: you can store your data in Amazon Simple Storage Service (S3), spin up an Amazon EMR (Elastic MapReduce) cluster to run Spark jobs, and shut it down after storing the results back in the decoupled Amazon S3, as sketched below. You will learn more about these services in Chapter 9, AWS EMR and AWS Glue – Extracting, Transforming, and Loading Data.
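
A minimal, hypothetical boto3 sketch of that pattern follows: a transient EMR cluster that runs one Spark step against data in S3 and terminates itself when the step finishes, so you stop paying for compute the moment the nightly batch is done. The bucket name, script path, cluster sizing, and release label are placeholders.

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

emr.run_job_flow(
    Name="nightly-spark-batch",
    ReleaseLabel="emr-6.9.0",                      # placeholder EMR release
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"Name": "Primary", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "Core", "InstanceRole": "CORE",
             "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        # Terminate the cluster as soon as all steps have completed.
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    Steps=[{
        "Name": "spark-etl",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["spark-submit", "s3://example-bucket/jobs/etl.py"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
```

Input and output both live in S3, so no data is lost when the compute cluster disappears.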

Employee workspaces - In an on-premises setting, you provide a high-configuration desktop or laptop to your development team and pay for it 24 hours a day, including weekends, even though they use roughly one-fourth of that capacity during an eight-hour workday. AWS provides Amazon WorkSpaces, virtual desktops accessible from low-configuration laptops, and you can schedule them to stop during off-hours and weekends, saving almost 70% of the cost.

Another common use case in technology is file and object storage. Some storage services may grow organically and consistently. The traffic patterns can also be consistent. This may be one example where using an on-premises architecture may make sense economically. In this case, the usage pattern is consistent and predictable.

Elasticity is by no means the only reason that the cloud is growing in leaps and bounds. The ability to easily enable world-class security for even the simplest applications is another reason why the cloud is becoming pervasive.

Security

The perception of on-premises environments being more secure than cloud environments was a common reason companies big and small would not migrate to the cloud. More and more enterprises now realize that it is tough and expensive to replicate the security features provided by cloud providers such as AWS. Let's look at a few of the measures that AWS takes to ensure the security of its systems.

Physical security

AWS data centers are highly secured and continuously upgraded with the latest surveillance technology. Amazon has had decades to perfect its data centers' design, construction, and operation.

AWS has been providing cloud services for over 15 years, and it has an army of technologists, solution architects, and some of the brightest minds in the business. It is leveraging this experience and expertise to create state-of-the-art data centers. These centers are in nondescript facilities; you could drive by one and never know what it is. Even if you find out where one is, it will be extremely difficult to get in. Perimeter access is heavily guarded. Visitor access is strictly limited, and visitors must always be accompanied by an Amazon employee.

Every corner of the facility is monitored by video surveillance, motion detectors, intrusion detection systems, and other electronic equipment. Amazon employees with access to the building must authenticate themselves four times to step on the data center floor.

Only Amazon employees and contractors who have a legitimate business need can enter a data center; all other employees are restricted. Whenever an employee no longer has a business need to enter a data center, their access is immediately revoked, even if they have only moved to another Amazon department and stay with the company. Lastly, audits are routinely performed as part of the normal business process.

Encryption

AWS makes it extremely simple to encrypt data at rest and data in transit. It also offers a variety of options for encryption. For example, for encryption at rest, data can be encrypted on the server side, or it can be encrypted on the client side. Additionally, the encryption keys can be managed by AWS, or you can use keys that are managed by you using tamper-proof appliances like a Hardware Security Module (HSM). AWS provides you with a dedicated cloud HSM to secure your encryption key if you want one. You will learn more about AWS security in Chapter 7, Best Practices for Application Security, Identity, and Compliance.
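
To show how simple the server-side option can be, here is a hedged boto3 sketch that uploads an object to S3 encrypted at rest with a customer-managed KMS key. The bucket name, object key, and KMS key alias are hypothetical placeholders.

```python
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="example-sensitive-data-bucket",   # placeholder bucket
    Key="reports/q1.csv",                     # placeholder object key
    Body=b"account_id,balance\n42,1000\n",
    ServerSideEncryption="aws:kms",           # encrypt at rest on the server side
    SSEKMSKeyId="alias/example-data-key",     # placeholder customer-managed KMS key
)
```

Reads through get_object are decrypted transparently for any principal that has permission to use the key, so application code does not need to change.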

AWS supports compliance standards

AWS has robust controls to allow users to maintain security and data protection. Later, we'll discuss how AWS shares security responsibilities with its customers, and the same model applies to how AWS supports compliance. AWS provides many attributes and features that enable compliance with many standards established in different countries and organizations, which simplifies compliance audits. AWS enables the implementation of security best practices and many security standards, such as these:

  • STAR
  • SOC 1/SSAE 16/ISAE 3402 (formerly SAS 70)
  • SOC 2
  • SOC 3
  • FISMA, DIACAP, and FedRAMP
  • PCI DSS Level 1
  • DOD CSM Levels 1-5
  • ISO 9001 / ISO 27001 / ISO 27017 / ISO 27018
  • MTCS Level 3
  • FIPS 140-2
  • HITRUST

In addition, AWS enables the implementation of solutions that can meet many industry-specific standards, such as these:

  • Criminal Justice Information Services (CJIS)
  • Family Educational Rights and Privacy Act (FERPA)
  • Cloud Security Alliance (CSA)
  • Motion Picture Association of America (MPAA)
  • Health Insurance Portability and Accountability Act (HIPAA)

The above is not a full list of compliance standards; there are many more compliance standards met by AWS according to industries and local authorities across the world.

Another important factor that explains the meteoric rise of the cloud is how you can stand up highly available applications without paying up front for all the additional infrastructure needed to provide that availability. Architectures can be crafted to start additional resources when other resources fail. This ensures that we only bring in additional resources when necessary, keeping costs down. Let's analyze this important property of the cloud in more depth.

Availability

When we deploy infrastructure in an on-premises environment, we have two choices: we can purchase just enough hardware to service the current workload, or we can ensure that there is enough excess capacity to account for any failures. Providing that extra capacity and eliminating single points of failure is not as simple as it may seem. There are many places where single points of failure may exist and need to be eliminated:

  • Compute instances can go down, so we need a few on standby.
  • Databases can get corrupted.
  • Network connections can be broken.
  • Data centers can flood or be hit by earthquakes.

Using the cloud simplifies the "single point of failure" problem. We have already determined that provisioning software in an on-premises data center can be long and arduous. Spinning up new resources can take just a few minutes in a cloud environment. So, we can configure minimal environments knowing that additional resources are a click away.

AWS data centers are built in different regions across the world. All data centers are "always-on" and deliver services to customers. AWS does not have "cold" data centers. Their systems are extremely sophisticated and automatically route traffic to other resources if a failure occurs. Core services are always installed in an N+1 configuration. In the case of a complete data center failure, there should be the capacity to handle traffic using the remaining available data centers without disruption.

AWS enables customers to deploy instances and persist data in more than one geographic region and across various data centers within a region. Data centers are deployed in fully independent zones. Data centers are constructed with enough separation between them such that the likelihood of a natural disaster affecting two of them simultaneously is very low. Additionally, data centers are not built in flood zones.

Data centers have discrete Uninterruptible Power Supplies (UPSes) and onsite backup generators to increase resilience. They are also connected to multiple electric grids from multiple independent utility providers, and they are connected redundantly to multiple tier-1 transit providers. Doing all this minimizes single points of failure. You will learn more details about the AWS global infrastructure in Chapter 3, AWS Networking and Content Delivery.

Faster hardware cycles

When hardware is provisioned on-premises, it starts becoming obsolete from the instant that it is purchased. Hardware prices have been on an exponential downtrend since the first computer was invented, so the server you bought a few months ago may now be cheaper, or a new version of the server may be out that's faster and still costs the same. However, waiting until hardware improves or becomes cheaper is not an option. A decision needs to be made at some point to purchase it.

Using a cloud provider instead eliminates all these problems. For example, whenever AWS offers new and more powerful processor types, using them is as simple as stopping an instance, changing the processor type, and starting the instance again. In many cases, AWS keeps the price the same, or even lowers it, when better and faster processors and technology become available, especially with its own proprietary technology such as the Graviton chip.

The cloud optimizes costs by building virtualization at scale. Virtualization is running multiple virtual instances on top of a physical computer system using an abstract layer sitting on top of actual hardware. More commonly, virtualization refers to the practice of running multiple operating systems on a single computer at the same time. Applications running on virtual machines are unaware that they are not running on a dedicated machine and share resources with other applications on the same physical machine.

A hypervisor is a computing layer that enables multiple operating systems to execute on the same physical compute resource. The operating systems running on top of a hypervisor are Virtual Machines (VMs), components that can emulate a complete computing environment using only software, as if it were running on bare metal. Hypervisors, also known as Virtual Machine Monitors (VMMs), manage these VMs while they run side by side. A hypervisor creates a logical separation between VMs, providing each of them with a slice of the available compute, memory, and storage resources. This prevents VMs from clashing and interfering with each other: if one VM crashes and goes down, it will not take the other VMs down with it, and if there is an intrusion in one VM, it is fully isolated from the rest.

AWS uses its own proprietary Nitro hypervisor. The AWS Nitro System is the underlying platform for its current generation of EC2 instances, and it helps to improve performance while further reducing cost. Traditionally, hypervisors protect the physical hardware and BIOS, virtualize the CPU, storage, and networking, and provide a rich set of management capabilities. With the AWS Nitro System, those functions are broken apart and offloaded to dedicated hardware and software, reducing costs by delivering practically all of the resources of a server to your EC2 instances.

System administration staff

An on-premises implementation may require a full-time system administration staff and a process to ensure that the team remains fully staffed. Cloud providers can handle many of these tasks by using cloud services, allowing you to focus on core application maintenance and functionality and not have to worry about infrastructure upgrades, patches, and maintenance.

By offloading this task to the cloud provider, costs can come down because the administrative duties can be shared with other cloud customers instead of having a dedicated staff. You will learn more details about system administration in Chapter 8, Drive Efficiency with Cloud Automation, Monitoring, and Alerts.

This ends the first chapter of the book, which provided a foundation on the cloud and AWS. As you move forward with your learning journey, in subsequent chapters, you will dive deeper and deeper into AWS services, architecture, and best practices.

Summary

This chapter pieced together many of the technologies, best practices, and AWS services that we cover in this book. As fully featured as AWS has become, it will certainly continue to add more and more services to help enterprises, large and small, simplify their information technology infrastructure.

In this chapter, you learned about cloud computing and the key differences between the public and private cloud. This led into learning about the largest public cloud provider, AWS, and you learned about AWS's market share and adoption.

We also covered some of the reasons that the cloud in general, and AWS in particular, are so popular. As we learned, one of the main reasons for the cloud's popularity is the concept of elasticity, which we explored in detail. You learned about the growth of AWS services over the years, along with its key differentiators from other cloud providers. Further, you explored AWS terminology compared to other key players such as Azure and GCP. Finally, you learned about the benefits of AWS and the reasons behind its popularity.

AWS provides some of the industry's best architecture practices under its Well-Architected Framework. Let's learn more about it. In the next chapter, you will learn about the AWS Well-Architected Framework and Tool, and how you can build credibility by getting AWS certified.


Key benefits

  • Gain expertise in automating, networking, migrating, and adopting cloud technologies using AWS
  • Use streaming analytics, big data, AI/ML, IoT, quantum computing, and blockchain to transform your business
  • Upskill yourself as an AWS solutions architect and explore details of the new AWS certification

Description

Are you excited to harness the power of AWS and unlock endless possibilities for your business? Look no further than the second edition of AWS for Solutions Architects! Imagine crafting cloud solutions that are secure, scalable, and optimized – not just good, but industry-leading. This updated guide throws open the doors to the AWS Well-Architected Framework, design pillars, and cloud-native design patterns empowering you to craft secure, performant, and cost-effective cloud architectures. Tame the complexities of networking, conquering edge deployments and crafting seamless hybrid cloud connections. Uncover the secrets of big data and streaming with EMR, Glue, Kinesis, and MSK, extracting valuable insights from data at speeds you never thought possible. Future-proof your cloud with game-changing insights! New chapters unveil CloudOps, machine learning, IoT, and blockchain, empowering you to build transformative solutions. Plus, unlock the secrets of storage mastery, container excellence, and data lake patterns. From simple configurations to sophisticated architectures, this guide equips you with the knowledge to solve any cloud challenge and impress even the most demanding clients. This book is your one-stop shop for architecting industry-standard AWS solutions. Stop settling for average – dive in and build like a pro!

Who is this book for?

This book is for application and enterprise architects, developers, and operations engineers who want to become well versed in AWS architectural patterns, best practices, and advanced techniques to build scalable, secure, highly available, fault-tolerant, and cost-effective solutions in the cloud. Existing AWS users are bound to learn the most, but it will also help those curious about how leveraging AWS can benefit their organization. Prior knowledge of any programming language is not needed, and there's little to no code. Prior experience in software architecture design will prove helpful.

What you will learn

  • Optimize your Cloud Workload using the AWS Well-Architected Framework
  • Learn methods to migrate your workload using the AWS Cloud Adoption Framework
  • Apply cloud automation at various layers of application workload to increase efficiency
  • Build a landing zone in AWS and hybrid cloud setups with deep networking techniques
  • Select reference architectures for business scenarios, like data lakes, containers, and serverless apps
  • Apply emerging technologies in your architecture, including AI/ML, IoT and blockchain

Product Details

Publication date: Apr 28, 2023
Length: 692 pages
Edition: 2nd
Language: English
ISBN-13: 9781803238951



Packt Subscriptions

See our plans and pricing
€18.99 billed monthly
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Simple pricing, no contract

€189.99 billed annually
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Choose a DRM-free eBook or Video every month to keep
  • PLUS own as many other DRM-free eBooks or Videos as you like for just €5 each
  • Exclusive print discounts

€264.99 billed in 18 months
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Choose a DRM-free eBook or Video every month to keep
  • PLUS own as many other DRM-free eBooks or Videos as you like for just €5 each
  • Exclusive print discounts

Frequently bought together


  • The Ultimate Docker Container Book – €37.99
  • AWS for Solutions Architects – €41.99
  • 50 Algorithms Every Programmer Should Know – €37.99
Total: €117.97

Table of Contents

19 Chapters
AWS for Solutions Architects, Second Edition: Design your cloud infrastructure by implementing DevOps, containers, and Amazon Web Services
1 Understanding AWS Principles and Key Characteristics
2 Understanding AWS Well-Architected Framework and Getting Certified
3 Leveraging the Cloud for Digital Transformation
4 Networking in AWS
5 Storage in AWS – Choosing the Right Tool for the Job
6 Harnessing the Power of Cloud Computing
7 Selecting the Right Database Service
8 Best Practices for Application Security, Identity, and Compliance
9 Drive Efficiency with Cloud Operation Automation and DevOps in AWS
10 Big Data and Streaming Data Processing in AWS
11 Data Warehouse, Data Query, and Visualization in AWS
12 Machine Learning, IoT, and Blockchain in AWS
13 Containers in AWS
14 Microservice and Event-Driven Architectures
15 Domain-Driven Design
16 Data Lake Patterns – Integrating Your Data across the Enterprise
17 Availability, Reliability, and Scalability Patterns
18 AWS Hands-On Lab and Use Case

Customer reviews

Rating distribution: 4.3 out of 5 (64 Ratings)
5 star 70.3%
4 star 12.5%
3 star 3.1%
2 star 7.8%
1 star 6.3%

Rajendra S Chandrawat May 28, 2023
Rated 5 stars
The book "AWS for Solutions Architects: The definitive guide to AWS Solutions Architecture.." is indeed a complete, comprehensive guide for AWS Solutions Architecture. It is a one-stop shop that will set you in the right direction towards your AWS Solutions Architecture career goals. The prolific author, Saurabh, who I've known for about a good couple of decades now, has been excellent in his approach; his attention to detail is Spielberg level. I found the book to be very well structured. The topics are well organized in simple, separate units, wherein individual chapters are self-contained, yet pretty well correlated. Kudos to the whole team of writers, editors, and publishers: I did not find a single continuity mistake in a book this big. The authors maintain a simple layman tone while explaining the most profound topics and the numerous jargon of the AWS cloud world. I wish I had this book available to me when I did my AWS certifications. Being an AWS fan, I look forward to future editions of 'The definitive guide to AWS Solutions Architecture..'. Totally recommended. 10/10! Thanks, Rajendra Chandrawat (Raj), Sr. Enterprise Solutions Architect, USDA/FNS, CPSC
Amazon Verified review
Dietrich Jan 07, 2024
Full star icon Full star icon Full star icon Full star icon Full star icon 5
AWS for Solutions Architects has 627 pages and consists of 16 chapters. Chapter 1 is a general introduction to cloud computing. Anyone who already has experience with Azure or GCP will find lookup tables here that map the various services onto each other; otherwise it really is only for beginners.

Chapter 2 is about twice as long, at roughly 40 pages. It introduces the AWS Well-Architected Framework. The presentation is short and to the point; anyone who wants to go deeper (and every solutions architect should) will find several URLs pointing the way. At the end of the chapter, certification options are mentioned, which is certainly very interesting for those starting their careers.

The third chapter introduces terms such as IaaS, PaaS, and so on, and then moves on to migration scenarios. I picked up a few new buzzwords here; alongside the well-known "lift & shift" there is, for example, "drop & shop". The chapter closes with an excursion into high availability and chaos engineering.

Chapters 4, 5, and 6 describe the foundation of the AWS cloud services: AWS networking, storage, and compute. In my view, this is the base knowledge without which none of the more advanced services can be used properly, and the networking chapter is certainly the most important of the three. Various networking topics are presented concisely, supported by good diagrams, and of course Route 53 (DNS) is not left out. I particularly liked the examples for connecting corporate data centers. The storage and compute chapters I see more as reference material; they present various scenarios around IOPS, latency, and performance. It only becomes interesting again towards the end, with the presentation of the load balancer variants and serverless computing; the keywords here are AWS Lambda and Fargate. The chapter closes with AWS Outposts, the option of running AWS services on local server hardware, and with VMware on AWS - both excellent options for moving workloads into the cloud or back out again.

The seventh chapter deals with databases in AWS. The introduction is really well done: nine pages on OLTP, OLAP, ACID, BASE, and so on. (Anyone who cannot make sense of these abbreviations should read these pages very carefully; a must for developers, admins, and architects!) The second part of the chapter then works through Amazon's many databases one by one. The most surprising for me were "Amazon DevOps Guru for RDS" and "Amazon Quantum Ledger Database". The chapter ends with some recommendations for migrating self-managed databases to PaaS databases. I particularly liked the tabular overview (Figure 7.8), which briefly summarizes which Amazon databases are available for the various use cases.

Chapter 8 is the chapter for everyone who wants to introduce AWS at a larger scale in a company. The first pages on the shared responsibility model can safely be skipped, but the sections on AWS Organizations, service control policies, AWS Control Tower, and integration with Microsoft AD or AAD are very helpful. The rest of the chapter is unfortunately just a listing of various protection and security services. Since examples are missing, it is really hard to remember the different terms (for example, DoS protection = AWS Shield; a range of ML-based services for protecting data, detecting malicious access, and so on = Amazon Detective, Amazon Macie, etc.).

The ninth chapter comes with the somewhat unwieldy title "Driving Efficiency with CloudOps". It covers the six pillars that Amazon proposes for operating the cloud. These six principles are worth following for any cloud operation: each pillar points to the matching Amazon implementations, but the operating model itself is completely hyperscaler-independent and can just as well be applied to GCP or Azure. If you don't want to buy the book for this, simply search the internet for "Amazon 6 pillars".

Chapter 10 is about processing and analyzing large volumes of data, comparing the solutions AWS Glue and Amazon EMR.

Chapter 11 continues with data: "Data Warehouses, Data Queries and Visualization in AWS". The chapter starts with a brief introduction to the Redshift database, but the main part revolves around Amazon Athena, which lets you query all kinds of data sources: various file formats (sitting in S3 buckets, for example) plus the contents of various databases (SQL + NoSQL). Finally, Amazon QuickSight, which can be used for data visualization, is covered briefly.

Chapter 12 gives a short overview of ML and IoT. In my view, it is not possible to summarize ML and IoT in 50 pages. Anyone who needs a reference to know which Amazon products to work with will find it here; anyone without prior knowledge in these areas will realize that they first have to learn the basics.

Chapter 13 is about container infrastructures. The introduction is very well done and touches on everything (even docker-compose and docker-swarm are briefly described). The chapter itself examines in detail the differences between ECS, EKS, and Red Hat OpenShift on AWS (ROSA). I didn't find the last option particularly interesting, but if needed, this would be a good starting point. The chapter closes with a table comparing their respective strengths.

The chapter "Microservice Architecture in AWS" discusses microservices, event-driven architectures, and domain-driven design in fairly general terms. It has rather few concrete links to AWS - in other words, anyone who wants to approach these concepts should study it thoroughly, but anyone looking for AWS specifics will be disappointed. For microservices, the author has collected 17 examples; if you haven't developed a feel for the topic after that, you can't be helped.

The penultimate chapter is about data lake patterns. In my view, another successful chapter; I especially liked the two pages listing all the metadata that should be considered when building a data lake. I also found the detour into data security very illuminating. Of course, the "5 Vs of big data" are not missing: volume, velocity, variety, veracity, value. And naturally, towards the end of the chapter, the newer concepts such as lakehouse and data mesh are covered as well. The comparison of the three concepts, with explanations of the conditions under which one or the other fits better, is good.

The last chapter is effectively a training exercise for everything read before: the AWSsome Store is worked through from start to finish. I would have made some design decisions differently - but that is exactly what makes it so worth reading. You can question your own decisions and consider whether the decisions presented might in fact be better.

Given the sheer number of topics, the book naturally cannot cover the services in real detail - but in my view that isn't necessary anyway. I really liked the network diagrams in Chapter 4 and the six pillars from Chapter 9. The rest holds up as well: for every area there are various keywords or links that provide a good starting point. For everyone who knows other hyperscalers and is switching to AWS, or whose first contact with the cloud is AWS, this book should be required reading. I also really liked Chapters 9, 14, 15, and 16, since they treat their topics in general terms; here the author focuses more on the reader's foundational knowledge than on the concrete implementation in AWS (though he doesn't leave that out either). An absolute recommendation!
Amazon Verified review
Kapil May 16, 2023
Full star icon Full star icon Full star icon Full star icon Full star icon 5
Saurabh really stands out when it comes to articulating architecture requirements and solutions using AWS. Being an AWS SAA and a cloud auditor for more than 4 years now, I can certainly say that this will remain my go-to book for a long time! Thanks, Saurabh, for taking the time to recreate the second edition! It's indeed a great read already!
Amazon Verified review
Andres May 10, 2024
Full star icon Full star icon Full star icon Full star icon Full star icon 5
Detailed content on each AWS service, including tips from real-world experience. A guide that will help you deepen your knowledge and learn about the most important services at the heart of AWS. The development of the content is well structured and linked together very well. One improvement would be the book's cover: being a soft cover with 650 pages, it doesn't hold up to much, so it has to be handled with care.
Amazon Verified review
Khrys Oct 12, 2023
Full star icon Full star icon Full star icon Full star icon Full star icon 5
This book explains all that you need to know about AWS in depth.
Amazon Verified review

FAQs

What is included in a Packt subscription?

A subscription provides you with full access to view all Packt and licensed content online, including exclusive access to Early Access titles. Depending on the tier chosen, you can also earn credits and discounts to use towards owning content.

How can I cancel my subscription?

To cancel your subscription, simply go to the account page - found in the top right of the page or at https://subscription.packtpub.com/my-account/subscription - where you will see the 'cancel subscription' button in the grey box containing your subscription information.

What are credits?

Credits can be earned by reading 40 sections of any title within the payment cycle - a month starting from the day of the subscription payment. You also earn a credit every month if you subscribe to our annual or 18-month plans. Credits can be used to buy books DRM-free, the same way that you would pay for a book. Your credits can be found on the subscription homepage - subscription.packtpub.com - by clicking on the 'my library' dropdown and selecting 'credits'.

What happens if an Early Access Course is cancelled?

Projects are rarely cancelled, but sometimes it's unavoidable. If an Early Access course is cancelled or excessively delayed, you can exchange your purchase for another course. For further details, please contact us here.

Where can I send feedback about an Early Access title?

If you have any feedback about the product you're reading, or Early Access in general, then please fill out a contact form here and we'll make sure the feedback gets to the right team. 

Can I download the code files for Early Access titles?

We try to ensure that all books in Early Access have code available to use, download, and fork on GitHub. This helps us be more agile in the development of the book, and helps keep the often changing code base of new versions and new technologies as up to date as possible. Unfortunately, however, there will be rare cases when it is not possible for us to have downloadable code samples available until publication.

When we publish the book, the code files will also be available to download from the Packt website.

How accurate is the publication date?

The publication date is as accurate as we can be at any point in the project. Unfortunately, delays can happen. Often those delays are out of our control, such as changes to the technology code base or delays in the tech release. We do our best to give you an accurate estimate of the publication date at any given time, and as more chapters are delivered, the delivery date will become more accurate.

How will I know when new chapters are ready?

We'll let you know every time there has been an update to a course that you've bought in Early Access. You'll get an email to let you know there has been a new chapter, or a change to a previous chapter. The new chapters are automatically added to your account, so you can also check back there any time you're ready and download or read them online.

I am a Packt subscriber, do I get Early Access?

Yes, all Early Access content is fully available through your subscription. You will need a paid subscription or an active trial subscription in order to access all titles.

How is Early Access delivered?

Early Access is currently only available as a PDF or through our online reader. As we make changes or add new chapters, the files in your Packt account will be updated so you can download them again or view them online immediately.

How do I buy Early Access content?

Early Access is a way of us getting our content to you quicker, but the method of buying the Early Access course is still the same. Just find the course you want to buy, go through the check-out steps, and you’ll get a confirmation email from us with information and a link to the relevant Early Access courses.

What is Early Access?

Keeping up to date with the latest technology is difficult; new versions, new frameworks, new techniques. This feature gives you a head start on our content as it's being created. With Early Access you'll receive each chapter as it's written, and get regular updates throughout the product's development, as well as the final course as soon as it's ready. We created Early Access as a means of giving you the information you need, as soon as it's available. As we go through the process of developing a course, 99% of it can be ready but we can't publish until that last 1% falls into place. Early Access helps to unlock the potential of our content early, to help you start your learning when you need it most. You not only get access to every chapter as it's delivered, edited, and updated, but you'll also get the finalized, DRM-free product to download in any format you want when it's published. As a member of Packt, you'll also be eligible for our exclusive offers, including a free course every day, and discounts on new and popular titles.