The history of computing

In this section, we will briefly review the history of computing, from the first computer to Amazon EC2, and understand what has happened over the past 70+ years and what led us to the cloud computing era.

The computer

The invention of the computer is one of the biggest milestones in human history. On December 10, 1945, the Electronic Numerical Integrator and Computer (ENIAC) was first put to work for practical purposes at the University of Pennsylvania. It weighed about 30 tons, occupied about 1,800 sq ft, and consumed about 150 kW of electricity.

In the more than 75 years since 1945, we have made huge progress in advancing the computer, from ENIAC to data center servers, desktops, laptops, and iPhones. Figure 1.1 shows the landmarks of computer evolution:

Figure 1.1 – Computer evolution landmarks

Let’s take some time to examine a computer, say, a desktop PC. If we remove the cover, we will find the following main hardware parts, as shown in Figure 1.2:

  • Central processing unit (CPU)
  • Random access memory (RAM)
  • Hard disk (HD)
  • Network interface card (NIC)
Figure 1.2 – Computer hardware components

These hardware parts work together with software to make the computer function: the operating system (such as Windows, Linux, or macOS) manages the hardware, and the application programs (such as Microsoft Office, web servers, and games) run on top of the operating system. In a nutshell, the hardware and software specifications determine how much computing power a machine can deliver for different business use cases.
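
To make these components concrete, here is a minimal sketch (assuming Python 3 with the third-party psutil package installed) that reports the CPU, RAM, hard disk, and NIC inventory of the machine it runs on:

```python
# A quick inventory of the four hardware parts listed above.
# Requires the third-party psutil package: pip install psutil
import platform

import psutil

print("CPU  :", psutil.cpu_count(logical=True), "logical cores")
print("RAM  :", round(psutil.virtual_memory().total / 1024**3, 1), "GB")
print("HD   :", round(psutil.disk_usage("/").total / 1024**3, 1), "GB on /")
print("NICs :", ", ".join(psutil.net_if_addrs().keys()))
print("OS   :", platform.system(), platform.release())
```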

The data center

A single computer, however, does not serve us well on its own. Computers need to be able to communicate with each other to enable network communications, resource sharing, and so on. Work at Stanford University in the 1980s led to the birth of Cisco Systems, Inc., a networking company that played a great part in connecting computers together and forming intranets and the internet. With many computers connected together, data centers emerged as central locations for computing resources: CPU, RAM, storage, and networking.

Data centers provide resources for businesses’ information technology needs: computing, storage, networking, and other services. However, the data center ownership model lacks flexibility and agility and entails huge investment and maintenance costs. Building a new data center often takes a long time and a large amount of money, and maintaining existing data centers, such as through tech refresh projects, is very costly. In certain circumstances, it is not even possible to acquire the computing resources needed to complete certain projects. For example, the Human Genome Project was estimated to consume up to 10,000 trillion CPU hours and 40 exabytes (1 exabyte = 10¹⁸ bytes) of disk storage, and it is impossible to acquire resources at this scale without leveraging cloud computing.

The virtual machine

The peace of physical computers was broken in 1998, when VMware was founded and the concept of the virtual machine (VM) was brought into the mainstream. A VM is a software-based computer composed of virtualized components of a physical computer: CPU, RAM, HD, network, operating system, and application programs.

VMware’s hypervisor virtualizes the hardware so that multiple VMs can run on bare-metal hardware, and these VMs can run various operating systems, such as Windows, Linux, and others. With virtualization, a VM is represented by a set of files. It can be exported as a binary image that can be deployed on any physical hardware at a different location, and a running VM can even be moved from one host to another live, through so-called vMotion. Virtualization technology thus caused a revolution in computer history and made cloud computing feasible.

The idea of cloud computing

The limitations of data centers and the advent of virtualization technology drove people to explore more flexible and inexpensive ways of using computing resources. The idea of cloud computing started from the concept of rental: use as needed and pay as you go. It is the on-demand, self-service provisioning of computing resources (hardware, software, and so on) that allows you to pay only for what you use. The key concept of cloud computing is disposable computing resources. In the traditional information technology and data center model, a computer (or any other computing resource) is treated as a pet. When a pet dies, people are very sad, and they need a replacement right away. If an investment bank’s trading server goes down at night, it is the end of the world: everyone is woken up to recover the server. In the cloud computing model, however, a computer is treated as cattle in a herd. For example, the website of an investment bank, zhebank.com, is supported by a herd of 88 servers, www001 to www088. When one server goes down, it is taken out of the serving line, shot, and replaced with a new one with the same configuration and functionality, automatically!
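
On AWS, this cattle pattern maps naturally onto an Auto Scaling group that keeps a fixed number of identically configured servers healthy. Here is a minimal sketch, assuming Python with boto3, configured AWS credentials, and an existing launch template; the group name, template name, and subnet IDs are illustrative placeholders, not values from this book:

```python
# A "cattle, not pets" web tier: an Auto Scaling group keeps 88 identically
# configured servers running and automatically replaces any that fail.
# Assumes boto3 is installed, AWS credentials are configured, and a launch
# template named "www-server" already exists (names/IDs are illustrative).
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="zhebank-www",                  # hypothetical name
    LaunchTemplate={
        "LaunchTemplateName": "www-server",              # hypothetical template
        "Version": "$Latest",
    },
    MinSize=88,
    MaxSize=88,
    DesiredCapacity=88,
    HealthCheckType="EC2",       # replace instances that fail health checks
    HealthCheckGracePeriod=300,
    VPCZoneIdentifier="subnet-0aaa,subnet-0bbb",         # hypothetical subnets
)
```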

With cloud computing, enterprises leverage the cloud service provider’s (CSP’s) practically unlimited computing resources, which are global, elastic and scalable, highly reliable and available, cost-effective, and secure. The main CSPs, such as Amazon, Microsoft, and Google, operate global data centers connected by backbone networks. Cloud computing’s pay-as-you-go model makes sense for cost savings, and its strong monitoring and logging features make it a highly secure hosting environment. Instead of building physical data centers with big investments over a long period, virtual, software-based data centers can be built within hours, immutably and repeatably, in the global cloud environment. The infrastructure is represented as code that can be managed with version control, which we call Infrastructure as Code (IaC). More details can be found at https://docs.aws.amazon.com/whitepapers/latest/introduction-devops-aws/infrastructure-as-code.html.
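
As a small taste of IaC, here is a minimal sketch (assuming Python with boto3 and configured AWS credentials; the stack name and AMI ID are placeholders) that describes one EC2 instance as a version-controllable template and asks AWS CloudFormation to create it:

```python
# Infrastructure as Code in miniature: the infrastructure (here, one EC2
# instance) is described in a template that can live in version control.
# Assumes boto3 and AWS credentials; stack name and AMI ID are placeholders.
import boto3

TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro
      ImageId: ami-0123456789abcdef0   # placeholder AMI ID
"""

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(StackName="demo-iac-stack", TemplateBody=TEMPLATE)
```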

Amazon EC2 was first introduced in 2006 as a web service that allowed customers to rent virtual computers for their computing tasks. Since then, it has become one of the most popular cloud computing platforms available, offering a wide range of services and features that make it an attractive option for enterprise customers. Amazon categorizes its VMs into different EC2 instance types based on hardware (CPU, RAM, HD, and network) and software (operating system and applications) configurations. For different business use cases, cloud consumers can choose EC2 instances from a variety of instance types, operating systems, network options, storage options, and more. In 2013, Amazon introduced the Reserved Instance feature, which gave customers the opportunity to purchase instances at discounted rates in exchange for committing to longer usage terms. In 2017, Amazon released EC2 Fleet, which allows customers to manage multiple instance types and sizes across multiple Availability Zones (AZs) with a single request.
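
To see what renting a virtual computer looks like in practice, here is a minimal sketch, assuming Python with boto3, configured AWS credentials, and a default VPC; the AMI ID and tag value are placeholders:

```python
# Renting a virtual computer: launch one EC2 instance of a chosen instance type.
# Assumes boto3 is installed and AWS credentials are configured; the AMI ID
# below is a placeholder and must be replaced with a real image in your region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t3.micro",           # small, general-purpose instance type
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "my-first-ec2"}],
    }],
)

print("Launched", response["Instances"][0]["InstanceId"])
```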

The computer evolution path

From ENIAC to EC2, the computer has evolved from a huge physical unit into a disposable resource that is flexible, on-demand, portable, and replaceable, and the data center has evolved from an expensive, protracted undertaking into a piece of code that can be executed globally, on demand, within hours.

In the next sections of this chapter, we will look at the Amazon Global Cloud Infrastructure and then provision our EC2 instances in the cloud.
