AWS for Solutions Architects, Second Edition

What is cloud computing?

At a high level, cloud computing is the on-demand availability of IT resources such as servers, storage, and databases over the internet, without the hassle of managing physical infrastructure. A good way to understand the cloud is through the electricity supply analogy. To light your house, you just flip a switch and the bulbs come on; you pay only for the electricity you use, and when you switch your appliances off, you pay nothing. Now, imagine if, to power a couple of appliances, you had to set up an entire power plant. It would be costly, right? You would have to build the whole infrastructure and maintain the turbine and generator. Utility companies make your job easier by supplying electricity in exactly the quantity you need. They maintain the entire generation infrastructure, and because they distribute electricity to millions of homes, they benefit from mass utilization and can keep costs down. Here, the utility companies represent cloud providers such as AWS, and the electricity represents the IT infrastructure available in the cloud.

When consuming cloud resources, you pay for IT infrastructure such as compute, storage, databases, networking, software, machine learning, and analytics on a pay-as-you-go basis. Public clouds such as AWS do the heavy lifting of maintaining that infrastructure and give you on-demand access to it over the internet. Because you generally pay only for the time and services you use, and because cloud providers operate at massive scale, it is easy to scale services up and down. Where, traditionally, you would have had to maintain your own servers on premises to run your organization, you can now offload that work to the public cloud and focus on your core business. For example, Capital One's core business is banking, not running large data centers.

As much as we have tried to nail it down, this is still a pretty broad definition. For example, we specified that the cloud can offer software, but that's a pretty general term. Does the term software in our definition include the following?

  • Video conferencing
  • Virtual desktops
  • Email services
  • Contact center
  • Document management

These are just a few examples of what may or may not be available as services in a cloud environment. When AWS started, it offered only a few core services, such as compute (Amazon EC2) and basic storage (Amazon S3). It has continually expanded its services to support virtually any cloud workload, and as of 2022 it offers more than 200 fully featured services for computing, storage, databases, networking, analytics, machine learning, artificial intelligence, the Internet of Things, mobile, security, hybrid, virtual and augmented reality, media, and application development and deployment. As a fun fact, as of 2021, Amazon Elastic Compute Cloud (EC2) alone offers over 475 types of compute instances.
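To get a feel for that scale yourself, the short sketch below uses the AWS SDK for Python (boto3) to count the EC2 instance types visible in a single region. This example is ours, not the book's; it assumes boto3 is installed and AWS credentials are already configured, and the region name is an arbitrary choice.

```python
# Minimal sketch (assumption: boto3 installed and AWS credentials configured).
# Counts the EC2 instance types offered in one region; the exact number varies
# by region and grows over time.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# describe_instance_types is paginated, so walk every page and collect the names.
paginator = ec2.get_paginator("describe_instance_types")
instance_types = []
for page in paginator.paginate():
    instance_types.extend(t["InstanceType"] for t in page["InstanceTypes"])

print(f"{len(instance_types)} instance types available in this region")
```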

For the individual examples given here, AWS offers the following:

  • Video conferencing – Amazon Chime
  • Virtual desktops – Amazon WorkSpaces
  • Email services – Amazon WorkMail
  • Contact center – Amazon Connect
  • Document management – Amazon WorkDocs

Not all cloud services are highly intertwined with their cloud ecosystems. Take these scenarios, for example:

  • Your firm may be using AWS services for many purposes, but it may use WebEx, Microsoft Teams, Zoom, or Slack for its video conferencing needs instead of Amazon Chime. These services have little dependency on the other underlying core infrastructure cloud services.
  • You may be using Amazon SageMaker for your artificial intelligence and machine learning projects, but within SageMaker you may be using the TensorFlow package as your development kernel, even though TensorFlow is maintained by Google.
  • If you are using Amazon RDS with MySQL as your database engine, you should not have too much trouble porting your data and schemas over to another cloud provider that supports MySQL if you decide to switch (see the sketch after this list).
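To illustrate that last point, here is a minimal sketch, ours rather than the book's, using the PyMySQL driver: the application code that talks to MySQL on Amazon RDS is exactly the same code that would talk to MySQL hosted anywhere else, and only the endpoint and credentials (placeholders below) change.

```python
# Minimal sketch (assumptions: PyMySQL installed; the host, user, password, and
# database names below are placeholders). The same driver and SQL work against
# MySQL on Amazon RDS or any other MySQL-compatible host.
import pymysql

def fetch_mysql_version(host: str) -> str:
    """Connect to the given MySQL host and return its version string."""
    conn = pymysql.connect(
        host=host,
        user="admin",
        password="example-password",
        database="appdb",
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT VERSION()")
            return cur.fetchone()[0]
    finally:
        conn.close()

# Same code, different endpoint: an RDS endpoint today, another provider's
# MySQL host tomorrow.
print(fetch_mysql_version("mydb.abc123xyz.us-east-1.rds.amazonaws.com"))
```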

However, switching away from some other services will be a lot more difficult. Here are some examples:

  • Amazon DynamoDB is a proprietary NoSQL database offered only by AWS. If you want to switch to another NoSQL database, porting may not be a simple exercise (see the sketch after this list).
  • Suppose you are using CloudFormation to define and create your infrastructure. In that case, it will be difficult, if not impossible, to use your CloudFormation templates to create infrastructure in another cloud provider's environment. If the portability of your infrastructure scripts is important to you and you are planning on switching cloud providers, using Ansible, Chef, or Puppet may be a better alternative.
  • Suppose you have a streaming data requirement and use Amazon Kinesis Data Streams. If you later decide to move to another streaming service such as Apache Kafka, you may have difficulty porting out of Kinesis, since the configuration and storage mechanisms are quite dissimilar.
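As a concrete illustration of the first point, the sketch below, ours rather than the book's, writes and reads an item through DynamoDB's boto3 interface; the table name and attributes are invented for the example. Because this API and its key model are specific to DynamoDB, code written against it has to be rewritten, not just repointed, to target a different NoSQL database.

```python
# Minimal sketch (assumptions: boto3 installed, AWS credentials configured, and
# a hypothetical table "Orders" with partition key "order_id" already exists).
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("Orders")

# put_item and its attribute handling are DynamoDB-specific; note that boto3's
# DynamoDB serializer rejects Python floats, so the amount is stored in cents.
table.put_item(
    Item={
        "order_id": "1001",
        "customer": "jdoe",
        "total_cents": 4999,
    }
)

# Reads use DynamoDB's own key model rather than SQL.
response = table.get_item(Key={"order_id": "1001"})
print(response.get("Item"))
```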

As far as we have come in the last 15 years with cloud technologies, these are still the early innings, and vendors realize that locking customers in now, while they are still deciding who their vendor should be, will be a lot easier than trying to do so after they have picked a competitor.

However, a cloud-agnostic strategy has its pros and cons. You may want to distribute your workloads between cloud providers to keep pricing competitive and your options open, as in the old days. But each cloud has different networking requirements, and connecting workloads distributed across clouds so that they can communicate with each other is a complex task. Also, each major cloud provider, such as AWS, Azure, and GCP, has a broad portfolio of services, and building a workforce with all three skill sets is another challenge.

Finally, clouds like AWS provide economies of scale, which means the more you use, the lower the price goes, a benefit you may lose if you split your usage across multiple clouds. Again, this doesn't mean you cannot choose a multi-cloud strategy, but you have to think about logical workload isolation. It would not be wise to run the application layer in one cloud and the database layer in another, but you can apply logical isolation, such as running your analytics workload and your application workload in separate clouds.

In this section, you learned about cloud computing at a very high level. Let's now look at the difference between public and private clouds.

Private versus public clouds

A private cloud is a service dedicated to a single customer; it is like an on-premises data center accessible to one large enterprise, or, put another way, a fancy name for a data center managed by a trusted third party. The concept gained momentum because, initially, enterprises were skeptical about the security of the multi-tenant public cloud. However, having your own dedicated infrastructure in this manner diminishes the value of the cloud, as you have to pay for resources even when you are not using them.

Let's use an analogy to further understand the difference between private and public clouds. The gig economy has gained great momentum; everywhere you look, people are finding employment as contract workers. One of the reasons contract work is becoming more popular is that it lets consumers buy services they might otherwise not be able to afford. Could you imagine how expensive it would be to have a private chauffeur? But with Uber or Lyft, you have something close to a private chauffeur who can be at your beck and call within a few minutes of you summoning them.

A similar economy of scale happens with a public cloud. You can have access to infrastructure and services that would cost millions of dollars if you bought them on your own. Instead, you can access the same resources for a small fraction of the cost.

In general, private clouds are expensive to run and maintain in comparison to public clouds. For that reason, many of the resources and services offered by the major cloud providers are hosted in a shared tenancy model. Even so, you can run your workloads and applications on a public cloud securely: by following security best practices, you can sleep well at night knowing that AWS's state-of-the-art technologies are protecting your sensitive data.

Additionally, most of the major cloud providers' clients use public cloud configurations. That said, there are a few exceptions. For example, the United States government intelligence agencies are big AWS customers. As you can imagine, they have deep pockets and are not afraid to spend. In many cases, AWS will set up AWS infrastructure dedicated to these government workloads. For example, AWS launched a Top Secret Region, AWS Top Secret-West, accredited to operate workloads at the Top Secret U.S. security classification level. AWS also operates the AWS GovCloud (US) Regions:

  • AWS GovCloud (US-West) Region – launched in 2011, with 3 Availability Zones
  • AWS GovCloud (US-East) Region – launched in 2018, with 3 Availability Zones

AWS GovCloud (US) consists of isolated AWS Regions designed to allow U.S. government agencies and customers to move sensitive workloads to AWS. It addresses specific regulatory and compliance requirements, including Federal Risk and Authorization Management Program (FedRAMP) High, Department of Defense Security Requirements Guide (DoD SRG) Impact Levels 4 and 5, and Criminal Justice Information Services (CJIS) to name a few.

Public cloud providers such as AWS give you choices to meet the compliance requirements imposed by government or industry regulations. For example, AWS offers Amazon EC2 Dedicated Instances, which are EC2 instances that ensure you will be the only user of a given physical server. Further, AWS offers AWS Outposts, which lets you order server racks and host workloads on premises using the AWS control plane.

Dedicated Instances and Outposts cost significantly more than on-demand EC2 instances. On-demand instances are multi-tenant, which means the physical server is not dedicated to you and may be shared with other AWS users. However, just because the physical server is multi-tenant doesn't mean that anyone else can access your instances; your virtual EC2 instances are accessible to you only. As we will discuss later in this chapter, thanks to virtualization and hypervisor technology, you will never notice the difference between EC2 instances hosted on a dedicated physical server and those hosted on a multi-tenant one. One common reason for choosing Dedicated Instances is government regulations or compliance policies that require certain sensitive data not to reside on the same physical server as other cloud users' data.
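If compliance does require single-tenant hardware, tenancy is simply a launch-time parameter. The sketch below, ours rather than the book's, launches an EC2 instance with dedicated tenancy using boto3; the AMI ID, instance type, and region are placeholders, and it assumes AWS credentials are already configured.

```python
# Minimal sketch (assumptions: boto3 installed, AWS credentials configured;
# AMI ID, instance type, and region are placeholders).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    # Tenancy="dedicated" places the instance on hardware that is not shared
    # with other AWS accounts; the default value is the multi-tenant "default".
    Placement={"Tenancy": "dedicated"},
)

print(response["Instances"][0]["InstanceId"])
```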

Now that we have gained a better understanding of cloud computing in general, let's get more granular and learn about how AWS does cloud computing.

You have been reading a chapter from
AWS for Solutions Architects - Second Edition
Published in: Apr 2023
Publisher: Packt
ISBN-13: 9781803238951