Even though we managed to deploy a MongoDB replica set with three instances, the process was far from optimal. We had to execute manual steps. Since I don't believe that manual hocus-pocus interventions are the way to go, we'll try to improve the process by removing human interaction. We'll do that through a sidecar container that will do the work of creating the MongoDB replica set (not to be confused with a Kubernetes ReplicaSet).
Let's take a look at yet another iteration of the go-demo-3 application definition.
cat sts/go-demo-3.yml
The output, limited to relevant parts, is as follows.
...
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: db
  namespace: go-demo-3
spec:
  ...
  template:
    ...
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      ...
      - name: db-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "app=db"
        - name: KUBE_NAMESPACE
          value: go-demo-3
        - name: KUBERNETES_MONGO_SERVICE_NAME
          value: db
...
When compared with sts/go-demo-3-sts.yml, the only difference is the addition of a second container in the db StatefulSet. It is based on the cvallance/mongo-k8s-sidecar (https://hub.docker.com/r/cvallance/mongo-k8s-sidecar) Docker image. I won't bore you with the details, but only give you the gist of the project: it creates and maintains MongoDB replica sets.
The sidecar will monitor the Pods created through our StatefulSet, and it will reconfigure the db containers so that the MongoDB replica set is (almost) always up to date with the MongoDB instances.
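The MONGO_SIDECAR_POD_LABELS variable tells the sidecar which Pods to watch. As a rough illustration of that same label selector, we could list the matching Pods ourselves. The command below is only a manual approximation of what the sidecar does through the Kubernetes API; it is not part of the project's code.

# List the Pods the sidecar watches through the `app=db` label selector
kubectl -n go-demo-3 \
    get pods \
    -l app=db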
Let's create the resources defined in sts/go-demo-3.yml and check whether everything works as expected.
kubectl apply \
    -f sts/go-demo-3.yml \
    --record

# Wait for a few moments

kubectl -n go-demo-3 \
    logs db-0 \
    -c db-sidecar
We created the resources and outputted the logs of the db-sidecar container inside the db-0 Pod.
The output, limited to the last entry, is as follows.
... Error in workloop { [Error: [object Object]]
  message:
   { kind: 'Status',
     apiVersion: 'v1',
     metadata: {},
     status: 'Failure',
     message: 'pods is forbidden: User "system:serviceaccount:go-demo-3:default" cannot list pods in the namespace "go-demo-3"',
     reason: 'Forbidden',
     details: { kind: 'pods' },
     code: 403 },
  statusCode: 403 }
We can see that the db-sidecar container is not allowed to list the Pods in the go-demo-3 Namespace. If, in your case, that's not the output you're seeing, you might need to wait for a few moments and re-execute the logs command.
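We can confirm the missing permission without digging through the sidecar's logs. The kubectl auth can-i command lets us ask whether the ServiceAccount used by the Pod is allowed to list Pods in the Namespace.

# Check whether the `default` ServiceAccount may list Pods in `go-demo-3`
kubectl -n go-demo-3 auth can-i \
    list pods \
    --as system:serviceaccount:go-demo-3:default

The answer should be no, matching the 403 Forbidden response we saw in the logs.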
It is not surprising that the sidecar could not list the Pods. If it could, RBAC would be more or less useless. It would not matter that we restrict which resources users can create if any Pod could circumvent those restrictions. Just as we learned how to set up users with RBAC in The DevOps 2.3 Toolkit: Kubernetes, we need to do something similar with ServiceAccounts. We need to extend RBAC rules from human users to Pods. That will be the subject of the next chapter.
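To give you an idea of what such an extension might look like, the snippet below sketches a Role that allows listing Pods and a RoleBinding that ties it to the default ServiceAccount in the go-demo-3 Namespace. Treat it as an illustration only; the names and the exact rules are my assumptions, not the definition we'll end up using, and we'll explore the proper solution in the next chapter.

# Illustrative sketch only; the name `pods-reader` and the rules are assumptions
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pods-reader
  namespace: go-demo-3
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pods-reader
  namespace: go-demo-3
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pods-reader
subjects:
- kind: ServiceAccount
  name: default
  namespace: go-demo-3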
On Docker for Mac (or Windows), the db-sidecar can list the Pods even with RBAC enabled. Even though Docker for Mac or Windows supports RBAC, it allows any process running inside its containers to communicate with Kube API. So, even though the sidecar could list the Pods on Docker for Mac or Windows, it will not be able to do so in any other cluster with RBAC enabled.