Why the WAF?
Microsoft Azure has excellent documentation that can help any beginner deploy their first workload in Azure. With the help of this well-planned documentation and these tutorials, deployment is not a tedious task. Now, the question is: are these workloads optimized and running in the best possible shape?
When it comes to optimizing, some considerations include the following:
- What is the cost of running this workload?
- What is the business continuity (BC) and disaster recovery (DR) strategy?
- Are the workloads secured from common internet attacks?
- Are there any performance issues during peak hours?
These are some common considerations related to optimization. Nonetheless, considerations vary from workload to workload. We need to understand the best practices and guidelines for each of our workloads, and if it’s a complex solution, then finding the best practices for each service can be a daunting task. This is where the Microsoft Azure Well-Architected Framework (WAF) comes into the picture.
Quoting Microsoft’s documentation: “The Azure Well-Architected Framework is a set of guiding tenets that can be used to improve the quality of a workload.”
While some organizations have already completed their cloud adoption journey, others are still in the early or transition stages. As the documentation states, this framework is a clear recipe for improving the quality of the mission-critical workloads we migrate to the cloud. Incorporating the best practices outlined by Microsoft will produce a high-standard, durable, and cost-effective cloud architecture.
Now that we know the outcome of leveraging the WAF, let’s look at its pillars. The framework comprises five interconnected pillars of architectural excellence, as follows:
- Cost optimization
- Operational excellence
- Performance efficiency
- Reliability
- Security
The assessment of a workload is aligned with these pillars, and the pillars are interconnected. Let’s take an example to understand what interconnected means here.
Consider the case of a web application running on a virtual machine (VM) scale set. We can improve performance by enabling autoscaling so that the number of instances is increased automatically whenever a performance bottleneck arises. At the same time, when we enable autoscaling, we are only using the extra compute power when we need it; this way, we only pay for the extra instances at the time of need, not 24x7.
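The scale-out/scale-in behavior described above can be sketched as a simple decision rule. This is an illustrative sketch only, not the actual Azure Monitor autoscale engine; the function name, CPU thresholds, and instance bounds are all hypothetical:

```python
def desired_instance_count(current: int, avg_cpu_percent: float,
                           min_count: int = 2, max_count: int = 10) -> int:
    """Return the instance count an autoscale rule might target.

    Scale out when average CPU is high (performance efficiency), scale
    back in when it is low (cost optimization), and stay within the
    configured bounds.
    """
    if avg_cpu_percent > 70:           # performance: add capacity under load
        return min(current + 1, max_count)
    if avg_cpu_percent < 30:           # cost: release idle capacity
        return max(current - 1, min_count)
    return current                     # steady state: pay only for what is used
```

In Azure itself, this kind of behavior is configured declaratively through autoscale rules attached to the scale set rather than written as custom code, but the logic illustrates how one mechanism serves two pillars at once.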
As you can see in this scenario, both performance efficiency and cost optimization are achieved by enabling autoscaling. Similarly, we can connect these pillars and improve the quality of the workload. Nonetheless, there will be trade-offs as well—for example, improving reliability often increases cost; we will discuss this later in this book.
Let’s take a closer look at these pillars in the next section.