Elasticity

Elasticity may be one of the most important reasons for the cloud's popularity. Let's first understand what it is.

Do you remember the feeling of going to a toy store as a kid? There is no feeling like it in the world. Puzzles, action figures, games, and toy cars were all at your fingertips, ready for you to play with. There was only one problem: you could not take all of them home. Your mom or dad always told you that you could buy only one toy. You had to decide which one you wanted, and invariably, after a week or two of playing with it, you got bored, the toy ended up in a corner collecting dust, and you were left longing for the toy you didn't choose.

What if I told you about a special, almost magical, toy store where you could rent toys for as long as you wanted, and the second you got tired of a toy, you could return it, exchange it for another, and stop any rental charges for the first one? Would you be interested?

The difference between the first, traditional store and the second, magical store is what differentiates on-premises environments and cloud environments.

The first toy store is like setting up infrastructure on your own premises. Once you purchase a piece of hardware, you are committed to it and will have to use it until you decommission it or sell it at a fraction of what you paid for it.

The second toy store is analogous to a cloud environment. If you make a mistake and provision a resource that's too small or too big for your needs, you can transfer your data to a new instance, shut down the old instance, and, importantly, stop paying for that instance.

More formally defined, elasticity is the ability of a computing environment to adapt to changes in workload by automatically provisioning or shutting down computing resources to match the capacity needed by the current workload.

In AWS, as with the other main cloud providers, resources can be shut down without having to be terminated completely, and billing stops while they are shut down.

One important characteristic of public cloud providers such as AWS is the ability to provision resources quickly and frictionlessly. These resources could be a single database instance or a thousand copies of the application and web servers needed to handle your web traffic, and they can be provisioned within minutes.
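
As a minimal sketch of how frictionless that provisioning is, the following uses the boto3 SDK to launch a fleet of web servers in a single API call. The AMI ID, subnet ID, and instance counts are placeholder values, not from the book:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Ask for ten identical web servers at once; they are typically
    # running within a few minutes of this call returning.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",      # placeholder AMI ID
        InstanceType="t3.micro",
        MinCount=10,
        MaxCount=10,
        SubnetId="subnet-0123456789abcdef0",  # placeholder subnet ID
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "web"}],
        }],
    )

    print([i["InstanceId"] for i in response["Instances"]])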

Contrast that with how performing the same operation may play out in a traditional on-premises environment. Let's use an example. Say you need to set up a cluster of servers to host your latest service. Your next actions would probably look something like this:

  1. You visit the data center and realize that the current capacity is insufficient to host this new service.
  2. You map out a new infrastructure architecture.
  3. You size the machines based on the expected load, adding a few extra terabytes of storage and gigabytes of memory to ensure that you don't overwhelm the service.
  4. You submit the architecture for approval to the procurement department and hardware vendors.
  5. You wait. Most likely for months.

It is not uncommon, once you finally get the approvals, to realize that the market opportunity for this service is now gone, or that demand has grown beyond what you planned for. Imagine what would happen if, after months of approvals and getting everything set up in the data center, you had to tell the business sponsor that you made a mistake: you ordered a 64 GB RAM server instead of a 128 GB one, so you won't have enough capacity to handle the expected load, and getting the right server will take a few more months. Meanwhile, the market is moving fast, and the user workload has grown five-fold by the time the new server arrives. That is bad news for the business: because you cannot scale your servers quickly enough, the user experience will be compromised, and users will switch to other options.

This sort of problem is much less likely to happen in a cloud environment, where servers can be provisioned in minutes instead of months. Correcting the size of a server may be as simple as shutting it down for a few minutes, changing a drop-down box value, and starting it again. You can even go serverless and let the cloud handle the scaling for you while you focus on your business problems. You will learn more about serverless computing in Chapter 5, Harnessing the Power of Cloud Computing.
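
As a rough illustration of that resize workflow using the boto3 SDK (the instance ID and target instance type below are placeholders):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    instance_id = "i-0123456789abcdef0"  # placeholder instance ID

    # Stop the undersized instance and wait until it is fully stopped.
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    # Change the instance type, e.g. to one with 128 GB of RAM.
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={"Value": "r5.4xlarge"},
    )

    # Start it again; compute billing resumes only from this point.
    ec2.start_instances(InstanceIds=[instance_id])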

Hopefully, the example above drives our point home about the power of the cloud. The cloud dramatically improves time to market by accelerating the provisioning of resources. Being able to deliver quickly may not just mean getting there first; it may be the difference between getting there first and not getting there in time.

Another powerful characteristic of a cloud computing environment is the ability to quickly shut down resources and, significantly, not be charged for them while they are down. In our continuing on-premises example, suppose we shut down one of our servers. Do you think we could call the company that sold us the server and politely ask them to stop charging us because we shut it down? That would be a very odd conversation, and it would probably not go well, however persistent we were. They would probably say, "You bought the server; you can do whatever you want with it, including using it as a paperweight." Once the server is purchased, it is a sunk cost for the duration of its useful life.

In contrast, whenever we shut down a server in a cloud environment, the cloud provider can quickly detect that and return the server to the pool of available capacity for other cloud customers to use.

This distinction cannot be emphasized enough. The only time absolute on-premises costs may be lower than cloud costs is when workloads are extremely predictable and consistent. Per-unit computing costs in the cloud may be higher than on-premises prices, but the ability to shut resources down and stop paying for them often makes cloud architectures significantly cheaper in the long run. Let's look at exactly what this means by reviewing a few examples.

Web storefront - A classic use case for cloud services is running an online storefront. Website traffic in this scenario is highly variable, depending on the day of the week, whether it's a holiday, the time of day, and other factors. Almost every retail store in the USA experiences more than a 10x increase in user workload during Thanksgiving week; the same goes for Boxing Day in the UK, Diwali in India, and Singles' Day in China, and almost every country has a shopping festival. This kind of scenario is ideally suited to a cloud deployment. In this case, we can set up auto-scaling that automatically scales compute resources up and down as needed, and define policies that allow database storage to grow as needed, as the sketch below illustrates.
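
A minimal sketch of such an auto-scaling policy using the boto3 SDK, assuming an Auto Scaling group already exists (the group name and target value are placeholders):

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Target-tracking policy: the group adds or removes instances to keep
    # average CPU utilization near 60%, so capacity follows demand.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="storefront-asg",  # placeholder group name
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            "TargetValue": 60.0,
        },
    )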

Big data workloads – As data volumes grow exponentially, Apache Spark and Hadoop continue to gain popularity for analyzing gigabytes and terabytes of data. Many Spark clusters don't need to run continuously: they perform heavy batch computing for a period and then sit idle until the next batch of input data arrives. A specific example would be a cluster that runs every night for 3 or 4 hours, and only during the working week. In this case, you want compute decoupled from data storage, so that resources can be shut down, managed on a schedule rather than by demand thresholds, or shut down by triggers that fire automatically once the batch jobs complete. AWS provides that flexibility: you can store your data in Amazon Simple Storage Service (S3), spin up an Amazon EMR (Elastic MapReduce) cluster to run the Spark jobs, and shut the cluster down after writing the results back to the decoupled S3 storage. You will learn more about these services in Chapter 9, AWS EMR and AWS Glue – Extracting, Transforming, and Loading Data.
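
As a rough sketch of such a transient cluster with the boto3 SDK (the bucket names, script path, release label, and sizing are placeholders; the cluster terminates itself when the step finishes):

    import boto3

    emr = boto3.client("emr", region_name="us-east-1")

    response = emr.run_job_flow(
        Name="nightly-spark-batch",
        ReleaseLabel="emr-6.10.0",                 # placeholder EMR release
        Applications=[{"Name": "Spark"}],
        LogUri="s3://my-bucket/emr-logs/",         # placeholder bucket
        Instances={
            "InstanceGroups": [
                {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
                {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 4},
            ],
            "KeepJobFlowAliveWhenNoSteps": False,  # terminate when work is done
        },
        Steps=[{
            "Name": "nightly-spark-job",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["spark-submit", "s3://my-bucket/jobs/nightly_job.py"],
            },
        }],
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )

    print(response["JobFlowId"])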

Employee workspace - In an on-premises setting, you provide each member of your development team with a high-specification desktop or laptop and pay for it 24 hours a day, including weekends, even though an eight-hour workday uses roughly a quarter of that capacity. AWS provides Amazon WorkSpaces, virtual desktops accessible from low-specification laptops, which you can schedule to stop during off-hours and weekends, saving almost 70% of the cost.
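
A small sketch of that scheduling idea using the boto3 WorkSpaces API (the WorkSpace ID and timeout are placeholders): switching a WorkSpace to AUTO_STOP mode stops it automatically after a period of inactivity.

    import boto3

    workspaces = boto3.client("workspaces", region_name="us-east-1")

    # Switch a WorkSpace to AUTO_STOP so it shuts down when idle,
    # instead of running (and billing) around the clock.
    workspaces.modify_workspace_properties(
        WorkspaceId="ws-0123456789a",  # placeholder WorkSpace ID
        WorkspaceProperties={
            "RunningMode": "AUTO_STOP",
            "RunningModeAutoStopTimeoutInMinutes": 60,  # stop after an idle hour
        },
    )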

Another common use case in technology is file and object storage. Some storage workloads grow organically and consistently, and their traffic patterns can be equally consistent. This may be one example where an on-premises architecture makes economic sense, because the usage pattern is consistent and predictable.

Elasticity is by no means the only reason that the cloud is growing in leaps and bounds. The ability to easily enable world-class security for even the simplest applications is another reason why the cloud is becoming pervasive.

Security

The perception that on-premises environments are more secure than cloud environments was a common reason companies big and small would not migrate to the cloud. More and more enterprises now realize that it is tough and expensive to replicate the security features provided by cloud providers such as AWS. Let's look at a few of the measures AWS takes to ensure the security of its systems.

Physical security

AWS data centers are highly secured and continuously upgraded with the latest surveillance technology. Amazon has had decades to perfect its data centers' design, construction, and operation.

AWS has been providing cloud services for over 15 years, and it has an army of technologists, solutions architects, and some of the brightest minds in the business. It leverages this experience and expertise to create state-of-the-art data centers, housed in nondescript facilities. You could drive by one and never know what it is. Even if you found out where one is, getting in would be extremely difficult: perimeter access is heavily guarded, visitor access is strictly limited, and visitors must always be accompanied by an Amazon employee.

Every corner of the facility is monitored by video surveillance, motion detectors, intrusion detection systems, and other electronic equipment. Amazon employees with access to the building must authenticate themselves four times to step on the data center floor.

Only Amazon employees and contractors with a legitimate business need can enter a data center; all other employees are restricted. Whenever an employee no longer has a business need to enter a data center, their access is immediately revoked, even if they merely move to another Amazon department and remain with the company. Lastly, audits are routinely performed as part of the normal business process.

Encryption

AWS makes it extremely simple to encrypt data at rest and data in transit, and it offers a variety of encryption options. For encryption at rest, for example, data can be encrypted on the server side or on the client side. Additionally, the encryption keys can be managed by AWS, or you can manage them yourself using tamper-proof appliances such as a Hardware Security Module (HSM); AWS provides a dedicated CloudHSM to secure your encryption keys if you want one. You will learn more about AWS security in Chapter 7, Best Practices for Application Security, Identity, and Compliance.
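
As a minimal sketch of server-side encryption at rest using the boto3 SDK (the bucket, object key, and KMS alias are placeholders): a single parameter asks S3 to encrypt the object with a KMS-managed key.

    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")

    # Upload an object encrypted at rest with a customer-managed KMS key.
    s3.put_object(
        Bucket="my-example-bucket",       # placeholder bucket name
        Key="reports/2023/q1.csv",        # placeholder object key
        Body=b"order_id,total\n1001,99.50\n",
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="alias/my-app-key",   # placeholder KMS key alias
    )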

AWS supports compliance standards

AWS has robust controls that allow users to maintain security and data protection. We'll discuss how AWS shares security responsibility with its customers, and the same shared model applies to compliance. AWS provides many attributes and features that enable compliance with standards established in different countries and by different organizations, which simplifies compliance audits. AWS enables the implementation of security best practices and many security standards, such as these:

  • STAR
  • SOC 1/SSAE 16/ISAE 3402 (formerly SAS 70)
  • SOC 2
  • SOC 3
  • FISMA, DIACAP, and FedRAMP
  • PCI DSS Level 1
  • DoD CSM Levels 1-5
  • ISO 9001 / ISO 27001 / ISO 27017 / ISO 27018
  • MTCS Level 3
  • FIPS 140-2
  • I TRUST

In addition, AWS enables the implementation of solutions that can meet many industry-specific standards, such as these:

  • Criminal Justice Information Services (CJIS)
  • Family Educational Rights and Privacy Act (FERPA)
  • Cloud Security Alliance (CSA)
  • Motion Picture Association of America (MPAA)
  • Health Insurance Portability and Accountability Act (HIPAA)

The above is not an exhaustive list; AWS meets many more compliance standards specific to particular industries and local authorities across the world.

Another important factor behind the meteoric rise of the cloud is the ability to stand up highly available applications without paying for the additional infrastructure those applications would otherwise require. Architectures can be crafted to start additional resources when other resources fail, so we only bring extra resources online when necessary, keeping costs down. Let's analyze this important property of the cloud more deeply.

Availability

When we deploy infrastructure in an on-premises environment, we have two choices: purchase just enough hardware to service the current workload, or ensure that there is enough excess capacity to account for any failures. Provisioning that extra capacity and eliminating single points of failure is not as simple as it may seem. There are many places where single points of failure may exist and need to be eliminated:

  • Compute instances can go down, so we need a few on standby.
  • Databases can get corrupted.
  • Network connections can be broken.
  • Data centers can flood or be hit by earthquakes.

Using the cloud simplifies the "single point of failure" problem. We have already seen that provisioning infrastructure in an on-premises data center can be a long and arduous process, whereas spinning up new resources in a cloud environment takes just a few minutes. So, we can configure minimal environments, knowing that additional resources are only a click away.

AWS data centers are built in different regions across the world. All data centers are "always on" and deliver services to customers; AWS does not have "cold" data centers. Its systems are extremely sophisticated and automatically route traffic to other resources if a failure occurs. Core services are always installed in an N+1 configuration, so in the case of a complete data center failure, there is enough capacity to handle traffic using the remaining data centers without disruption.

AWS enables customers to deploy instances and persist data in more than one geographic region, and across multiple data centers within a region. Those data centers are deployed in fully independent zones and are constructed with enough physical separation between them that the likelihood of a natural disaster affecting two of them simultaneously is very low. Additionally, data centers are not built in flood zones.

Data centers have discrete Uninterruptible Power Supplies (UPSes) and onsite backup generators to increase resilience, and they are connected to multiple electric grids from multiple independent utility providers. Data centers are also connected redundantly to multiple tier-1 transit providers. All of this minimizes single points of failure. You will learn more about the AWS global infrastructure in Chapter 3, AWS Networking and Content Delivery.
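
A brief sketch of how a customer leans on that infrastructure, using the boto3 SDK to spread an Auto Scaling group across several Availability Zones (the launch template name and subnet IDs are placeholders; each subnet is assumed to sit in a different zone):

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Create a group whose instances span three Availability Zones; if one
    # zone fails, the group replaces capacity in the remaining zones.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",            # placeholder group name
        LaunchTemplate={
            "LaunchTemplateName": "web-template",  # placeholder launch template
            "Version": "$Latest",
        },
        MinSize=3,
        MaxSize=9,
        DesiredCapacity=3,
        # Placeholder subnet IDs, one per Availability Zone.
        VPCZoneIdentifier="subnet-0aaa,subnet-0bbb,subnet-0ccc",
    )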

Faster hardware cycles

When hardware is provisioned on-premises, it starts becoming obsolete the instant it is purchased. Hardware prices have been on an exponential downtrend since the first computer was invented, so the server you bought a few months ago may now be cheaper, or a faster version may be available at the same price. However, waiting until hardware improves or becomes cheaper is not an option; a purchase decision must be made at some point.

Using a cloud provider instead eliminates all these problems. For example, whenever AWS offers new and more powerful processor types, using them is as simple as stopping an instance, changing the processor type, and starting the instance again. In many cases, AWS keeps the price the same, or even lowers it, when better and faster processors and technology become available, especially with its own proprietary technology such as the Graviton chip.

The cloud optimizes costs by building virtualization at scale. Virtualization means running multiple virtual instances on top of a physical computer system, using an abstraction layer that sits on top of the actual hardware. More commonly, virtualization refers to the practice of running multiple operating systems on a single computer at the same time. Applications running in virtual machines are unaware that they are not running on a dedicated machine and that they share resources with other applications on the same physical machine.

A hypervisor is a computing layer that enables multiple operating systems to execute on the same physical compute resource. The operating systems running on top of a hypervisor are Virtual Machines (VMs): components that emulate a complete computing environment using only software, as if they were running on bare metal. Hypervisors, also known as Virtual Machine Monitors (VMMs), manage these VMs as they run side by side. A hypervisor creates a logical separation between VMs, giving each a slice of the available compute, memory, and storage resources. This prevents VMs from clashing and interfering with one another: if one VM crashes, it does not take the others down with it, and if one VM suffers an intrusion, it remains fully isolated from the rest.

AWS uses its own proprietary Nitro hypervisor. The AWS Nitro System is the underlying platform for its next-generation EC2 instances, and it helps improve performance while further reducing cost. Traditionally, hypervisors protect the physical hardware and BIOS, virtualize the CPU, storage, and networking, and provide a rich set of management capabilities. The AWS Nitro System breaks those functions apart, offloading them to dedicated hardware and software, and reduces costs by delivering practically all of the resources of a server to EC2 instances.

System administration staff

An on-premises implementation may require a full-time system administration staff and a process to ensure that the team remains fully staffed. With cloud services, the cloud provider handles many of these tasks, allowing you to focus on core application maintenance and functionality rather than infrastructure upgrades, patches, and maintenance.

By offloading these tasks to the cloud provider, costs come down, because administrative duties are shared with other cloud customers instead of requiring a dedicated staff. You will learn more about system administration in Chapter 8, Drive Efficiency with Cloud Automation, Monitoring, and Alerts.

This ends the first chapter of the book, which provided a foundation on the cloud and AWS. As you move forward with your learning journey, in subsequent chapters, you will dive deeper and deeper into AWS services, architecture, and best practices.
