Resource configuration challenges
In the previous section, we covered how Kubernetes has two different configuration methods—imperative and declarative. One question to consider is this: What challenges do users need to be aware of when creating Kubernetes resources with imperative and declarative methodologies?
Let’s discuss some of the most common challenges.
The many types of Kubernetes resources
First of all, as described in the Deploying a Kubernetes application section, there are many different types of resources in Kubernetes. In order to be effective on Kubernetes, developers need to be able to determine which resources are required to deploy their applications, and they need to understand them at a deep enough level to configure them appropriately. This requires a lot of knowledge of and training on the platform. While understanding and creating resources may already sound like a large hurdle, this is actually just the beginning of many different operational challenges.
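To give a sense of the scale involved, the following is a minimal sketch of a single deployment resource (the application name, labels, and image are hypothetical). A typical application also needs a service, often a configmap, an ingress, and more, each with its own set of fields to learn:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical application name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app               # must match the pod template labels below
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25     # hypothetical image and tag
          ports:
            - containerPort: 80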
Keeping live and local states in sync
A method of configuring Kubernetes resources that we would encourage is to maintain their configuration in source control, where teams can edit and share it and the repository becomes the source of truth. The configuration defined in source control (referred to as the local state) is created in the cluster by applying it to the Kubernetes environment, at which point the resources become live, or enter what can be called a live state. This sounds simple enough, but what happens when developers need to make changes to their resources? The proper answer would be to modify the files in source control and apply the changes to synchronize the local state with the live state. However, this isn’t always what ends up happening. It is often simpler, in the short term, to modify the live resource in place with kubectl edit or kubectl patch and skip over modifying the local files entirely. This creates an inconsistency between the local and live states and makes scaling on Kubernetes difficult.
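For example, a quick imperative change such as the following (the deployment name and image tag are hypothetical) takes effect in the cluster immediately, while the YAML file in source control still describes the old configuration:

# Opens the live resource in an editor; the change never reaches source control
kubectl edit deployment my-app

# Or patches a single field in place using a strategic merge patch
kubectl patch deployment my-app \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"my-app","image":"nginx:1.26"}]}}}}'

Until the local file is updated and re-applied, a command such as kubectl diff -f deployment.yaml will report the drift, but nothing forces anyone to run it.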
Application life cycles are hard to manage
Life cycle management is a loaded term, but in this context, we’ll use it to refer to the concept of installing, upgrading, and rolling back applications. In the Kubernetes world, an installation means creating the API resources that deploy and configure an application. The initial installation would create what we refer to here as version 1 of an application.
An upgrade, then, can be thought of as a modification to one or more of those Kubernetes resources, and each batch of edits counts as a single upgrade. A developer could modify a single service resource, which would bump the version number to version 2. The developer could then modify a deployment, a configmap, and a service at the same time, bumping the version count to version 3.
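In practice, each of these versions is usually just another kubectl apply of the edited files, for example (the filenames are hypothetical):

# Version 2: a single service change
kubectl apply -f service.yaml

# Version 3: a deployment, a configmap, and a service changed together
kubectl apply -f deployment.yaml -f configmap.yaml -f service.yaml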
As newer versions of an application continue to be rolled out onto Kubernetes, it becomes more difficult to keep track of the changes that have occurred across the relevant API resources. In most cases, Kubernetes does not have an inherent way of keeping a history of changes. While this makes upgrades harder to track, it also makes restoring a prior version of an application much more difficult. Say, for example, that a developer previously made an incorrect edit to a particular resource. How would a team know which version to roll back to? The n-1 case is particularly easy to work out, as that is the most recent previous version. What happens, however, if the latest stable release was five versions ago? Teams often end up scrambling to resolve issues because they cannot quickly identify the latest stable configuration that worked previously.
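Kubernetes does keep a limited, per-resource history for a few workload types. A deployment, for example, records its recent rollouts and can be reverted with commands such as the following (the deployment name and revision number are hypothetical), but this history covers only that one resource, not the configmap or service that changed alongside it:

kubectl rollout history deployment my-app
kubectl rollout undo deployment my-app --to-revision=3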
Resource files are static
This is a challenge that primarily affects the declarative configuration style of applying YAML resources. Part of the difficulty in following a declarative approach is that Kubernetes resource files are not natively designed to be parameterized. Resource files are largely designed to be written out in full before being applied, and the contents remain the source of truth (SOT) until the file is modified. When dealing with Kubernetes, this can be a frustrating reality. Some API resources can be lengthy, containing many different customizable fields, and it can be quite cumbersome to write and configure YAML resources in full.
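For example, even a short service resource must be written out in full, with values such as the name, selector, and ports fixed in the file itself (the values below are hypothetical). Plain YAML offers no built-in way to substitute these per application or per environment, so deploying a similar application usually means copying the file and editing it by hand:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 80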
Static files lend themselves to becoming boilerplate, which is text or code that remains largely the same across different but similar contexts. This becomes an issue when developers manage multiple applications, each with its own deployment resources, services, and so on. In comparing the different applications’ resource files, you may find large amounts of nearly identical YAML configuration between them.
The following screenshot depicts an example of two resources with significant boilerplate configuration between them. The blue text denotes lines that are boilerplate, while the red text denotes lines that are unique:
Figure 1.2 – An example of two resources with boilerplate
Notice, in this example, that both files are almost exactly the same. When managing files that are as similar as this, boilerplate becomes a major headache for teams managing their applications in a declarative fashion.