Managing clusters without Operators
Kubernetes is a powerful container orchestration platform for microservices. It provides many different controllers, resources, and design patterns to cover almost any use case, and the platform is constantly evolving. Because of this, applications designed to be deployed on Kubernetes can be very complex.
When designing an application to use microservices, there are a number of concepts to be familiar with. In Kubernetes, these are mainly the native application programming interface (API) resource objects included in the core platform. Throughout this book, we will assume a foundational familiarity with the common Kubernetes resources and their functions.
These objects include Pods, ReplicaSets, Deployments, Services, Volumes, and more. The orchestration of any microservice-based cloud application on Kubernetes relies on integrating these different concepts into a coherent whole. It is this orchestration that creates the complexity many application developers struggle to manage.
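As a brief, illustrative sketch (the names and image here are placeholders, not taken from any particular application), a minimal Deployment manifest already ties three of these concepts together: the Deployment creates a ReplicaSet, which in turn keeps the desired number of Pods running:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend            # illustrative name
spec:
  replicas: 2               # the ReplicaSet created by this Deployment keeps two Pods running
  selector:
    matchLabels:
      app: frontend
  template:                 # the Pod template stamped out by the ReplicaSet
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: web
        image: example/frontend:1.0   # placeholder image
        ports:
        - containerPort: 8080
```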
Demonstrating on a sample application
Take, for example, a simple web application that accepts, processes, and stores user input (such as a message board or chat server). A good, containerized design for an application such as this would be to have one Pod presenting the frontend to the user and a second backend Pod that accepts the user's input and sends it to a database for storage.
Of course, you will then need a Pod running the database software and a PersistentVolume to be mounted by the database Pod. These three Pods will benefit from Services to communicate with each other, and they will also need to share some common environment variables, such as access credentials for the database and settings that tune application behavior.
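To make the wiring concrete, here is a minimal sketch of a Service (the names and port are illustrative, assuming a PostgreSQL-style database) that gives the backend Pod a stable address for reaching the database:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: database            # backend Pods can reach the database at this DNS name
spec:
  selector:
    app: database           # matches the labels on the database Pod
  ports:
  - port: 5432              # assuming a PostgreSQL-style database; adjust as needed
    targetPort: 5432
```

Similar Services would sit in front of the frontend and backend Pods, and the shared credentials would typically live in a Secret or ConfigMap that each Pod references.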
Here is a diagram of what a sample application of this sort could look like. There are three Pods (frontend, backend, and database), as well as a PersistentVolume:
This is just a small example, but it is already evident how even a simple application can quickly involve tedious coordination between several moving parts. In theory, these discrete components will all continue to function cohesively as long as no individual component fails. But what happens when a failure does occur somewhere in the application's distributed design? It is never wise to assume that an application, once in a valid state, will simply remain that way.
Reacting to changing cluster states
There are a number of reasons a cluster's state can change. Some may not even technically be considered failures, but they are still changes the running application must be made aware of. For example, if your database access credentials change, that update needs to be propagated to every Pod that interacts with the database. Or, a new application feature may require a careful rollout and updated settings for the running workloads. All of this requires manual effort (and, more importantly, time), along with a keen understanding of the application architecture.
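As a sketch of why such a change ripples outward (the names and values below are purely illustrative), consider database credentials stored in a Secret and injected into a consuming Pod as environment variables:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials      # illustrative name
type: Opaque
stringData:
  DB_USER: app              # placeholder credentials
  DB_PASSWORD: change-me
---
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - name: backend
    image: example/backend:1.0    # placeholder image
    envFrom:
    - secretRef:
        name: db-credentials      # values are read once, when the container starts
```

Because environment variables are only read at container startup, updating the Secret alone is not enough: every Pod that consumes it must also be restarted before the new credentials take effect, and that restart is a manual (or manually scripted) step.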
Time and effort are even more critical in the case of an unexpected failure. If one of the Pods that make up this application hits an exception, or the application's performance begins to degrade, someone must intervene. That means a human engineer must not only know the details of the deployment, but must also be on call to maintain uptime at any hour. These are exactly the kinds of problems that the Operator Framework is designed to address automatically.
There are additional components that can help administrators monitor the health and performance of their applications, such as metrics aggregation servers. However, these components are themselves additional applications that must be regularly monitored to make sure they are working, so adding them to a cluster can reintroduce the same issues of manual application management.