Fundamentals of cloud computing

When you start out in a new area of Information Technology (IT), you usually begin by studying the concepts and the architecture, and sooner or later you start experimenting to get familiar with the topic.

However, with cloud computing, it really helps to understand not only the concepts and the architecture, but also where they come from. This is not meant as a history lesson; rather, I want to show you that inventions and ideas from the past are still in use in modern cloud environments. This will give you a better understanding of what the cloud is and how to use it within your organization.

Virtualization

In the early 1970s, IBM was working on an early form of virtualization: each user had their own separate operating system, while still sharing the overall resources of the underlying system.

The main reasons for developing this system were to assign resources based on application needs and to add extra security and reliability: if one virtual machine crashes, the others keep running without any problem. Nowadays, this type of virtualization has evolved into container virtualization!

Fast forward to 2001, when another type of virtualization, called hardware virtualization, was introduced by companies such as VMware. In its product VMware Workstation, VMware added a layer on top of an existing operating system that provided a standard set of hardware, built in software instead of physical components, on which to run a virtual machine. This layer became known as a hypervisor. Later on, VMware built its own operating system specialized in running virtual machines: VMware ESX.

In 2008, Microsoft entered the hardware virtualization market with Hyper-V, an optional component of Windows Server 2008.

Hardware virtualization is all about separating software from hardware, breaking the traditional boundaries between the two. The hypervisor is responsible for mapping virtual resources onto physical resources.

This type of virtualization was the enabler for a revolution in data centers:

  • Because of the standard set of hardware, every virtual machine can run anywhere
  • Because virtual machines are isolated from each other, a crashing virtual machine doesn't affect the others
  • Because a virtual machine is just a set of files, you have new possibilities for backup, moving virtual machines, and so on
  • New options for high availability (HA), such as the migration of running virtual machines
  • New deployment options, for example, working with templates (see the sketch after this list)
  • New options for central management, orchestration, and automation, because it's all software
  • Isolation, reservation, and limiting of resources where needed, and sharing of resources where possible
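
To make the point that a virtual machine is just software defined by files and templates, here is a minimal sketch using the libvirt Python bindings to define and boot a virtual machine from an XML template. It assumes a Linux host with KVM/QEMU, a running libvirt daemon, and the libvirt-python package; the machine name, memory size, and disk path are illustrative placeholders, not values from this chapter.

    import libvirt

    # Minimal domain template: the entire machine is described in software.
    # Name, memory, vCPU count, and disk path are illustrative placeholders.
    DOMAIN_XML = """
    <domain type='kvm'>
      <name>demo-vm</name>
      <memory unit='MiB'>1024</memory>
      <vcpu>1</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/libvirt/images/demo-vm.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>
    """

    # Connect to the local hypervisor (requires suitable permissions).
    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(DOMAIN_XML)  # register the VM from the template
        dom.create()                      # boot the virtual machine
        print(f"Started {dom.name()} (id {dom.ID()})")
    finally:
        conn.close()

Because the whole definition is a piece of text, copying, templating, and backing up the machine become ordinary file operations.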

Software-Defined Datacenter

Of course, if you can transform hardware into software for compute, it's only a matter of time before someone realizes you can do the same for networking and storage.

For networking, it all started with the concept of virtual switches. Like every other form of hardware virtualization, a virtual switch is nothing more than a network switch built in software instead of hardware.
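
As a small illustration of a switch built in software, the following sketch creates a Linux bridge (a basic virtual switch) with the pyroute2 library. It assumes a Linux host, root privileges, and the pyroute2 package; the bridge name br0 and the eth1 port are arbitrary examples.

    from pyroute2 import IPRoute

    # A Linux bridge is a network switch implemented entirely in software.
    ipr = IPRoute()
    ipr.link('add', ifname='br0', kind='bridge')   # create the virtual switch
    idx = ipr.link_lookup(ifname='br0')[0]         # look up its interface index
    ipr.link('set', index=idx, state='up')         # bring the bridge up

    # Physical or virtual NICs can now be attached as ports, for example:
    # ipr.link('set', index=ipr.link_lookup(ifname='eth1')[0], master=idx)
    ipr.close()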

In 2004, development started on Software-Defined Networking (SDN), with the goal of decoupling the control plane from the data plane. In 2008, the first real switch implementation that achieved this goal, using the OpenFlow protocol, was created at Stanford University.

Using SDN, you have advantages similar to those of compute virtualization:

  • Central management, automation, and orchestration
  • More granular security through traffic isolation and the provision of firewall and security policies
  • Shaping and controlling data traffic
  • New options available for HA and scalability

In 2009, development of Software-Defined Storage (SDS) started at several companies, such as Scality and Cleversafe. Again, it's about abstraction: decoupling services (logical volumes and so on) from the physical storage elements.

Looking at the concepts of SDS, some vendors added a new feature on top of the existing advantages of virtualization. You can attach a policy to a virtual machine, defining the options you want: for instance, replication of data or a limit on the number of IOPS. This is transparent to the administrator; the hypervisor and the storage layer communicate to provide the functionality. Later on, this concept was also adopted by some SDN vendors.
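
To give an idea of what such a policy could look like, here is a deliberately hypothetical Python sketch. The StoragePolicy class, its fields, and the apply_policy helper are invented for illustration and do not correspond to any specific vendor's API.

    from dataclasses import dataclass

    @dataclass
    class StoragePolicy:
        """Hypothetical per-VM storage policy, as described in the text."""
        replicas: int          # number of data copies the storage layer keeps
        max_iops: int          # upper limit on I/O operations per second
        encrypted: bool = True

    def apply_policy(vm_name: str, policy: StoragePolicy) -> None:
        # In a real SDS product, the hypervisor would pass this policy on to
        # the storage layer; here we only show the shape of the request.
        print(f"{vm_name}: {policy.replicas} replicas, "
              f"IOPS capped at {policy.max_iops}, encrypted={policy.encrypted}")

    apply_policy("webserver01", StoragePolicy(replicas=3, max_iops=500))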

You can actually see that virtualization slowly changed to a more service-oriented way of thinking.

If you can virtualize every component of the physical data center, you have a Software-Defined Datacenter (SDDC). The virtualization of networking, storage, and compute made it possible to go beyond the limits of a single piece of hardware. By abstracting the software from the hardware, an SDDC makes it possible to go beyond the borders of the physical data center.

In an SDDC environment, everything is virtualized and often fully automated by software. It completely changes the traditional concept of the data center: it doesn't really matter where a service is hosted or how long it's available (24/7 or on demand), there are possibilities to monitor the service, and you can maybe even add options such as automatic reporting and billing, all of which make the end user happy.

SDDC is not the same as the cloud, not even a private cloud running in your data center, but you can argue that, for instance, Microsoft Azure is a full-scale implementation of SDDC. Azure is by definition software-defined.

SOA

In the same period that hardware virtualization became mainstream in the data center and the development of SDN and SDS started, something new arrived in the world of software development and implementation for web-based applications: Service-Oriented Architecture (SOA). Its key characteristics are:

  • Minimal services that can talk to each other, using a protocol such as SOAP (Simple Object Access Protocol); together, they deliver a complete web-based application (see the client sketch after this list).
  • The location of a service doesn't matter; a service must be aware of the presence of the other services, and that's about it.
  • A service is a sort of black box; the end user doesn't need to know what's inside the box.
  • Every service can be replaced by another.
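
As a small illustration of one service talking to another over SOAP, here is a minimal client sketch using the zeep Python library. The WSDL URL and the GetQuote operation are hypothetical placeholders rather than a real endpoint; the point is that the caller only needs the service's published contract, not its internals.

    from zeep import Client

    # The WSDL describes the service's contract; the caller needs nothing else.
    # This URL and the GetQuote operation are hypothetical placeholders.
    client = Client("https://example.com/stockservice?wsdl")
    result = client.service.GetQuote(symbol="MSFT")
    print(result)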

For the end user, it doesn't matter where the application lives or that it consists of several smaller services. In a way, it's similar to virtualization: what seems to be one physical resource, for instance, a storage LUN, can actually include several physical resources (storage devices) in multiple locations.

The power of virtualization combined with SOA gives you even more options for scalability, reliability, and availability.

There are many similarities between the SOA model and SDDC, but there is a difference: SOA is about interaction between different services; SDDC is more about the delivery of services to the end user.

The modern implementation of SOA is microservices, provided by cloud environments such as Azure, running standalone or in containers using platforms such as Docker.
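
To show how small a standalone microservice can be, here is a minimal sketch using only the Python standard library. The service name, response body, and port are arbitrary choices for illustration; a real microservice would typically use a web framework and run in a container.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class GreetingService(BaseHTTPRequestHandler):
        """A tiny self-contained service: callers see only the HTTP contract."""

        def do_GET(self):
            body = json.dumps({"service": "greeting", "message": "hello"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Any other service (or a load balancer) can now call this black box.
        HTTPServer(("0.0.0.0", 8080), GreetingService).serve_forever()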

Cloud services

Here's that magic word: cloud. It's not easy to pin down exactly what it means. One way to describe it is that you want to provide a service that:

  • Is always available, or available on-demand
  • Can be managed by self-service
  • Is able to scale up/down, and so is elastic
  • Offers rapid deployment
  • Can be fully automated and orchestrated

On top of that, you want monitoring and new types of billing options: most of the time, you only pay for what you use.

Cloud technology is about the delivery of a service via the internet, in order to give an organization access to resources such as software, storage, network, and other types of IT infrastructure and components.

The cloud can offer you many service types; here are the most important ones:

  • Infrastructure as a service (IaaS): A platform to host your virtual machines (see the sketch after this list)
  • Platform as a service (PaaS): A platform to develop, build, and run your applications, without the complexity of building and running your own infrastructure
  • Software as a service (SaaS): Using an application running in the cloud, such as Office 365
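
As a small taste of IaaS in practice, the following sketch lists the virtual machines in an Azure subscription with the Azure SDK for Python. It assumes the azure-identity and azure-mgmt-compute packages are installed, that a credential is available (for example, an Azure CLI login), and that the AZURE_SUBSCRIPTION_ID environment variable is set; these are assumptions of the example, not something this chapter prescribes.

    import os

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    # Pick up whatever credential the environment provides (CLI login,
    # managed identity, environment variables, and so on).
    credential = DefaultAzureCredential()
    subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

    compute = ComputeManagementClient(credential, subscription_id)

    # IaaS in a nutshell: the platform hosts the VMs; we just enumerate them.
    for vm in compute.virtual_machines.list_all():
        print(vm.name, vm.location)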

Cloud types

There are several cloud implementations possible:

  • Public cloud: All services run at a service provider. Microsoft Azure is an implementation of this type.
  • Private cloud: Running your own cloud in your own data center. Microsoft recently developed a special version of Azure for this purpose: Azure Stack.
  • Hybrid cloud: A combination of a public and private cloud. One example is combining the power of Azure and Azure Stack, but you can also think about new disaster recovery options, or moving services from your data center to the cloud and back when more resources are temporarily needed.

The choice between these implementations depends on several factors; to name a few:

  • Costs: Hosting your services in the cloud can be more expensive than hosting them locally due to resource usage. On the other hand, it can be cheaper; for example, you don't need to implement complex and costly availability options.
  • Legal restrictions: Sometimes you are not allowed to host data in a public cloud.
  • Internet connectivity: There are still countries where the necessary bandwidth or even the stability of the connection is a problem.
  • Complexity: Hybrid environments in particular can be difficult to manage; support for applications and user management can be challenging.