The next level of virtualization is containers: they provide a leaner solution than virtual machines within Hyper-V, as containers optimize resource usage by sharing as much of the underlying container platform and host OS as possible.
Azure Kubernetes Service (AKS) simplifies the deployment and operation of Kubernetes, enabling users to dynamically scale their application infrastructure with agility, and it simplifies cluster maintenance with automated upgrades and scaling. Azure Container Service (ACS) simplifies the management of Docker clusters for running containerized applications.
This tutorial will combine the above-defined concepts and describe how to design and implement containers, and how to choose the proper solution for orchestrating them. You will get an overview of how Azure can help you implement container-based services and move away from traditional virtualization, with its redundant OS resources that need to be managed, updated, backed up, and optimized.
To run containers in a cloud environment, no specific installations are required, as you only need the following:
With Azure, you have the option to deploy a container directly as an Azure Container Instance (ACI) or to use a managed Azure solution with Kubernetes as the orchestrator.
If you need to set up a container environment to be used by the developers in your Azure tenant, you will have to think about where to store your container images. In general, the way to do this is to provide a container registry. This registry could reside on a VM, but using a PaaS service always provides an easier and more flexible design.
This is where Azure Container Registry (ACR) comes in: it is a PaaS solution that provides high flexibility and even features such as geo-replication between regions.
This means that, when you create your container registry, you will need to fill in the following details:
The following table details the features and limits of the basic, standard, and premium service tiers:
| Resource | Basic | Standard | Premium |
| --- | --- | --- | --- |
| Storage | 10 GiB | 100 GiB | 500 GiB |
| Max image layer size | 20 GiB | 20 GiB | 50 GiB |
| ReadOps per minute | 1,000 | 3,000 | 10,000 |
| WriteOps per minute | 100 | 500 | 2,000 |
| Download bandwidth (MBps) | 30 | 60 | 100 |
| Upload bandwidth (MBps) | 10 | 20 | 50 |
| Webhooks | 2 | 10 | 100 |
| Geo-replication | N/A | N/A | Supported |
Switching between the different SKUs is supported and can be done using the portal, PowerShell, or CLI.
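If you prefer not to use the portal, the same can be done from the command line. The following is a minimal sketch using the Azure CLI; the resource group, registry name, and region are placeholders you would replace with your own values:

```
# Create a resource group and a Basic-tier container registry
az group create --name myResourceGroup --location westeurope
az acr create --resource-group myResourceGroup --name mycontainerregistry01 --sku Basic

# Switch the registry to the Standard tier later, when more capacity is needed
az acr update --name mycontainerregistry01 --sku Standard
```

Note that the registry name must be globally unique and contain only alphanumeric characters.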
By running your workloads in ACI, you don't have to set up a management infrastructure for your containers; you can simply focus on designing and building your applications.
Let's create a first simple container in Azure using the portal:
We will need to define the Azure container name, which, of course, needs to be unique in your environment. Then, we will need to define the source of the image, as well as the resource group and region it should be deployed to within Azure.
After we have finalized the preceding steps, we have an ACI up and running, which means you are able to provide container images, upload them to Azure, and run them.
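The same result can also be achieved with the Azure CLI. The following minimal sketch runs Microsoft's public aci-helloworld sample image; the container name, resource group, and DNS label are placeholder values:

```
# Run a sample image as an Azure Container Instance with a public endpoint
az container create \
  --resource-group myResourceGroup \
  --name myfirstcontainer \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --dns-name-label myfirstcontainer-demo \
  --ports 80

# Check the provisioning state and the public FQDN of the container
az container show \
  --resource-group myResourceGroup \
  --name myfirstcontainer \
  --query "{state: provisioningState, fqdn: ipAddress.fqdn}" \
  --output table
```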
In the public Azure Marketplace, you can find existing container images that can simply be deployed to your subscription. These are pre-packaged images that give you the option to get started with your first container in Azure. As cloud services favor reusability and standardization, this entry point is always a good place to look first.
First, we will need to decide on the corresponding image and choose whether to create a new ACR or use an existing one. Furthermore, the Azure region, the resource group, and the tag (for example, the version) need to be defined in the following dialog:
Finally, the first containerized image for a web app has been deployed to Azure.
One of the most interesting topics with regard to containers is that they provide the technology for scaling. For example, if we need more performance for a website running in containers, we can simply spin up an additional container and load-balance the traffic across them. The same applies when we need to scale down.
To provide this feature set, we need an orchestration tool. There are several well-known container orchestration tools available on the market, such as the following:
Kubernetes is the most widely used of these, and can therefore be deployed as a managed service in most public clouds, including Azure. It provides the following features:
Installing, maintaining, and administering a Kubernetes cluster manually can mean a huge investment of time for a company. These tasks are pure overhead that adds no direct business value, so it is best not to waste resources on them. For this reason, Azure today offers AKS, where the K emphasizes that it is a managed Kubernetes service.
With AKS, there is no charge for the Kubernetes masters; you only pay for the nodes that run your containers.
Before you start, you will have to fulfill the following prerequisites:
The dashboard looks something like the one shown in the following screenshot:
Afterward, the Kubernetes command-line client needs to be installed. It is called kubectl (kubectl.exe on Windows).
The preceding dashboard provides a way to monitor and administer your Azure Kubernetes environment from a GUI.
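Besides the GUI, the cluster can be managed entirely from the command line. Assuming the Azure CLI is installed, the following sketch shows how a cluster could be created and how kubectl is set up and connected to it; the resource group and cluster names are placeholders:

```
# Create an AKS cluster (skip this step if the cluster was created in the portal)
az aks create --resource-group myResourceGroup --name myAKSCluster \
  --node-count 3 --generate-ssh-keys

# Install kubectl and download the cluster credentials
az aks install-cli
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# Verify the connection to the cluster
kubectl get nodes
```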
You can now push your container images and deploy them to your AKS cluster, giving you a hugely scalable infrastructure with a minimum of administrative effort and implementation time.
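As an illustrative sketch of this workflow, the deployment below uses kubectl with the public aci-helloworld sample image; the deployment name is a placeholder, and any image reachable from the cluster would work just as well:

```
# Create a deployment from a container image and expose it via an Azure load balancer
kubectl create deployment hello-web --image=mcr.microsoft.com/azuredocs/aci-helloworld
kubectl expose deployment hello-web --type=LoadBalancer --port=80

# Scale out to three replicas when more performance is needed
kubectl scale deployment hello-web --replicas=3

# Watch for the public IP address assigned by Azure
kubectl get service hello-web --watch
```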
The Nodes tab provides the following information per node:
This gives a brief overview not only of the health status, but also of the number of containers and the load on each node.
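Similar per-node information can also be retrieved from the command line with kubectl; `<node-name>` below is a placeholder for one of the names returned by the first command:

```
# List the nodes with extended information
kubectl get nodes -o wide

# Show detailed status, capacity, and the pods running on a specific node
kubectl describe node <node-name>
```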
To get everything up-and-running, the following to-do list gives a brief overview of all the tasks needed to provide an app within AKS:
AKS has the following service quotas and limits:
| Resource | Default limit |
| --- | --- |
| Max nodes per cluster | 100 |
| Max pods per node (basic networking with Kubenet) | 110 |
| Max pods per node (advanced networking with Azure CNI) | 30 |
| Max clusters per subscription | 100 |
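Within these limits, the node count of an existing cluster can be adjusted at any time. A minimal sketch with the Azure CLI, again using placeholder names:

```
# Scale the cluster to five worker nodes
az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 5
```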
As you have seen, AKS in Azure provides great features with a minimum of administrative tasks.
In this tutorial, we learned the basics required to understand, deploy, and manage container services in a public cloud environment. The concept of containers is a great idea and surely the next step in virtualization that applications need to take. Setting up the environment manually is quite complex, but with the PaaS approach, the setup procedure is quite simple (thanks to automation) and lets you start using it right away.
To understand how to build robust cloud solutions on Azure, check out our book Implementing Azure Solutions - Second Edition