We discussed a little bit about the differences between an imperative style, where you clearly specify the actions to take (such as start three more Pods), and a declarative one, where you specify your intent (such as there should be three Pods running for the deployment) and the actions need to be calculated (you might increase or decrease the number of Pods, or do nothing if three are already running). Both the imperative and declarative ways are implemented in the kubectl client.
Imperative – direct commands
Whenever we create, update, or delete a Kubernetes object, we can do it in an imperative style.
To create a namespace, run the following command:
kubectl create namespace test-imperative
Then, in order to see the created namespace, use the following command:
kubectl get namespace test-imperative
Create a deployment inside that namespace, like so:
kubectl create deployment nginx-imperative --image=nginx -n test-imperative
Then, you can use the following command to see the created deployment:
kubectl get deployment -n test-imperative nginx-imperative
To update any of the resources we created, we can use specific commands, such as kubectl label to modify the resource labels, kubectl scale to modify the number of Pods in a Deployment, ReplicaSet, or StatefulSet, or kubectl set for changes such as environment variables (kubectl set env), container images (kubectl set image), resources for a container (kubectl set resources), and a few more.
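As an illustration, here are a few imperative update commands run against the Deployment created earlier (the nginx:1.25 tag and the environment variable are example values, not from the original text):

```shell
# Scale the Deployment to three replicas
kubectl scale deployment nginx-imperative --replicas=3 -n test-imperative

# Change the container image to a pinned tag
kubectl set image deployment/nginx-imperative nginx=nginx:1.25 -n test-imperative

# Set an environment variable on the container
kubectl set env deployment/nginx-imperative NGINX_ENTRYPOINT_QUIET_LOGS=1 -n test-imperative
```

Each of these translates directly into an update call against the API server, with no comparison between desired and current state.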
If you want to add a label to the namespace, you can run the following command:
kubectl label namespace test-imperative namespace=imperative-apps
In the end, you can remove objects created previously with the following commands:
kubectl delete deployment -n test-imperative nginx-imperative
kubectl delete namespace test-imperative
Imperative commands are clear about what they do, and they make sense for small objects such as namespaces. But for more complex ones, such as Deployments, we can end up passing a lot of flags: the container image, image tag, pull policy, whether an image pull secret is needed (for private image registries), the same again for init containers, and many other options. Next, let’s see whether there are better ways to handle such a multitude of possible flags.
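One way to sidestep long flag lists, and a preview of the approach in the next section, is to let kubectl generate the configuration file for you; the --dry-run=client -o yaml combination prints the object instead of creating it:

```shell
# Generate a Deployment manifest without creating anything in the cluster
kubectl create deployment nginx-imperative --image=nginx \
  --dry-run=client -o yaml > deployment.yaml
```

You can then edit the generated file to add the fields that have no corresponding flags.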
Imperative – with config files
Imperative commands can also make use of configuration files, which make things easier because they significantly reduce the number of flags we would need to pass to an imperative command. We can use a file to say what we want to create.
This is what a namespace configuration file looks like, in the simplest version possible (without any labels or annotations). The following files can also be found at https://github.com/PacktPublishing/ArgoCD-in-Practice/tree/main/ch01/imperative-config. Copy the following content into a file called namespace.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: imperative-config-test
Then, run the following command:
kubectl create -f namespace.yaml
Copy the following content and save it in a file called deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: imperative-config-test
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
Then, run the following command:
kubectl create -f deployment.yaml
By running the preceding commands, we create one namespace and one Deployment, similar to what we did with the imperative direct commands. You can see this is easier than passing all the flags to kubectl create deployment. What’s more, not all the fields are available as flags, so using a configuration file can become mandatory in many cases.
We can also modify objects via the config file. Here is an example of how to add labels to a namespace. Update the namespace we used before with the following content (notice the extra two rows starting with labels). The updated namespace can also be seen in the namespace-with-labels.yaml file in the official repository at https://github.com/PacktPublishing/ArgoCD-in-Practice/tree/main/ch01/imperative-config:
apiVersion: v1
kind: Namespace
metadata:
  name: imperative-config-test
  labels:
    name: imperative-config-test
And then, we can run the following command:
kubectl replace -f namespace.yaml
Then, to check that the label was added, run the following command:
kubectl get namespace imperative-config-test -o yaml
This is a good improvement compared to passing all the flags to the commands, and it makes it possible to store those files in version control for future reference. Still, you need to specify your intention: if the resource is new, you use kubectl create, while if it already exists, you use kubectl replace. There are also some limitations: the kubectl replace command performs a full object update, so if someone modified something else in between (such as adding an annotation to the namespace), those changes will be lost.
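For example, here is a hypothetical sequence showing how kubectl replace can lose a change made in between (the owner annotation is just an illustrative example):

```shell
# Someone adds an annotation directly on the live object
kubectl annotate namespace imperative-config-test owner=team-a

# A full replace from the file sends the complete object as defined
# there, so the annotation added above is lost
kubectl replace -f namespace.yaml

# Inspect the live object: the owner annotation is no longer present
kubectl get namespace imperative-config-test -o yaml
```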
Declarative – with config files
We just saw how easy it is to use a config file to create something, so it would be great if we could modify the file and just call some update/sync command on it. We could modify the labels inside the file instead of using kubectl label, and we could do the same for other changes, such as scaling the Pods of a Deployment, setting container resources, container images, and so on. And there is such a command: you can pass any file to it, new or modified, and it will be able to make the right adjustments on the API server: kubectl apply.
Please create a new folder called declarative-files and place a namespace.yaml file in it with the following content (the files can also be found at https://github.com/PacktPublishing/ArgoCD-in-Practice/tree/main/ch01/declarative-files):
apiVersion: v1
kind: Namespace
metadata:
  name: declarative-files
Then, run the following command:
kubectl apply -f declarative-files/namespace.yaml
The console output should then look like this:
namespace/declarative-files created
Next, we can modify the namespace.yaml file and add a label to it directly in the file, like so:
apiVersion: v1
kind: Namespace
metadata:
  name: declarative-files
  labels:
    namespace: declarative-files
Then, run the following command again:
kubectl apply -f declarative-files/namespace.yaml
The console output should then look like this:
namespace/declarative-files configured
What happened in both of the preceding cases? Before making any change, our client (or our server; there is a note further on in this chapter explaining when client-side or server-side apply is used) compared the existing state from the cluster with the desired one from the file and calculated the actions needed to reach the desired state. In the first apply example, it realized that the namespace didn’t exist and needed to be created, while in the second one, it found that the namespace existed but didn’t have the label, so it added one.
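If you want to see that calculation before anything changes, kubectl diff shows what apply would do without modifying the cluster:

```shell
# Preview the changes apply would make; a non-zero exit code
# indicates that differences were found
kubectl diff -f declarative-files/namespace.yaml
```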
Next, let’s add the Deployment in its own file called deployment.yaml in the same declarative-files folder, as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: declarative-files
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
And we will run the following command that will create a Deployment in the namespace:
kubectl apply -f declarative-files/deployment.yaml
If you want, you can make changes to the deployment.yaml file (labels, container resources, images, environment variables, and so on) and then run the preceding kubectl apply command again, and the changes you made will be applied to the cluster.
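For instance, a modified deployment.yaml might look like this (the pinned tag, replica count, and resource values are illustrative additions, not part of the original file):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: declarative-files
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25    # pinned tag instead of the implicit latest
        resources:
          requests:
            cpu: 100m
            memory: 64Mi
```

Running kubectl apply on this file would update only the fields that differ from the live object.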
Declarative – with config folder
In this section, we will create a new folder called declarative-folder and two files inside it.
Here is the content of the namespace.yaml file (the code can also be found at https://github.com/PacktPublishing/ArgoCD-in-Practice/tree/main/ch01/declarative-folder):
apiVersion: v1
kind: Namespace
metadata:
  name: declarative-folder
Here is the content of the deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: declarative-folder
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
And then, we will run the following command:
kubectl apply -f declarative-folder
Most likely, you will see the following error, which is expected, so don’t worry:
namespace/declarative-folder created
Error from server (NotFound): error when creating "declarative-folder/deployment.yaml": namespaces "declarative-folder" not found
That is because the two resources are created at the same time, but the Deployment depends on the namespace: when the Deployment is created, its namespace needs to already exist. The message says the namespace was created, but the API calls were made at the same time, so on the server the namespace was not yet available when the Deployment started its creation flow. We can fix this by running the same command again:
kubectl apply -f declarative-folder
And in the console, we should see the following output:
deployment.apps/nginx created
namespace/declarative-folder unchanged
Because the namespace already existed, the Deployment could be created inside it this time, while no change was made to the namespace.
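If you prefer to avoid the error on the first run, one option is to list the files explicitly; kubectl applies them in the order given, so the namespace is created before the Deployment:

```shell
kubectl apply -f declarative-folder/namespace.yaml \
              -f declarative-folder/deployment.yaml
```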
The kubectl apply command took the whole content of the declarative-folder folder, calculated the changes for each resource found in those files, and then called the API server with them. So, we can apply entire folders, not just individual files, though it can get trickier when the resources depend on each other, and we can modify those files and call the apply command on the folder to get the changes applied. Now, if this is how we build applications in our clusters, then we had better save all those files in source control for future reference, so that it becomes easier to track and apply changes over time.
But what if we could apply a Git repository directly, not just folders and files? After all, a local Git repository is a folder, and in the end, that’s what a GitOps operator is: a kubectl apply command that knows how to work with Git repositories.
Note
The apply command was initially implemented completely on the client side. This means the logic for finding changes was running on the client, and then specific imperative APIs were called on the server. But more recently, the apply logic moved to the server side; all objects have an apply method (from a REST API perspective, it is a PATCH method with an application/apply-patch+yaml content-type header), and it is enabled by default starting with version 1.16 (more on the subject here: https://kubernetes.io/docs/reference/using-api/server-side-apply/).
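If you want to try server-side apply explicitly, kubectl exposes it behind a flag; the optional --field-manager flag names the owner recorded for the managed fields (my-tool is just an example name):

```shell
# Ask the server, not the client, to compute and apply the changes
kubectl apply --server-side --field-manager=my-tool \
  -f declarative-files/namespace.yaml
```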