Let’s improve the last manifest and add some probes to create the following nginx-probe.yaml manifest file:
…
    startupProbe:
      exec:
        command:
        - cat
        - /usr/share/nginx/html/index.html
      failureThreshold: 30
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 3
  restartPolicy: Always
The manifest file contains all three probes:
- The startup probe checks whether the /usr/share/nginx/html/index.html file exists. It keeps checking, up to 30 times at 10-second intervals, until a check succeeds. Once it detects the file, the startup probe stops probing further.
- The readiness probe sends an HTTP GET request to path / on port 80. It waits for 5 seconds initially and then probes the container every 5 seconds. Once it gets a 2xx – 3xx response, it reports the container as ready to accept requests.
- The liveness probe checks whether the container responds with HTTP 2xx – 3xx on path / and port 80. It waits for 5 seconds initially and probes the container every 3 seconds. If a check fails failureThreshold times in a row (this defaults to 3), the container is killed, and the kubelet takes appropriate action based on the pod’s restartPolicy field. A complete manifest sketch follows this list.
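For reference, here is a minimal sketch of what the complete nginx-probe.yaml could look like. The pod name, image, labels, and port are assumptions carried over from the earlier nginx examples, so adjust them to match your own manifest:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    startupProbe:
      exec:
        command:
        - cat
        - /usr/share/nginx/html/index.html
      failureThreshold: 30
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 3
  restartPolicy: Always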
Let’s apply the YAML file and watch the pod come to life using the following command:
$ kubectl delete pod nginx && kubectl apply -f nginx-probe.yaml && \
  kubectl get pod -w
NAME    READY   STATUS    RESTARTS   AGE
nginx   0/1     Running   0          4s
nginx   0/1     Running   0          11s
nginx   1/1     Running   0          12s
As we can see, the pod quickly moves from Running to Ready. It takes approximately 10 seconds for that to happen, as the readiness probe only starts reporting success about 10 seconds after the pod starts. From then on, the liveness probe keeps monitoring the health of the pod.
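If you want to double-check which probes the pod is configured with, and see any probe-related events such as failed checks, kubectl describe shows both; the exact layout of its output depends on your Kubernetes version:
$ kubectl describe pod nginx
The container section lists the configured Liveness, Readiness, and Startup settings, and the Events section at the bottom records probe failures and restarts.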
Now, let’s do something that will break the liveness check. Imagine someone getting a shell to the container and deleting some important files. How do you think the liveness probe will react? Let’s have a look.
Let’s delete the /usr/share/nginx/html/index.html file from the container and then check how the container behaves using the following command:
$ kubectl exec -it nginx -- rm -rf /usr/share/nginx/html/index.html && \
  kubectl get pod nginx -w
NAME    READY   STATUS    RESTARTS     AGE
nginx   1/1     Running   0            2m5s
nginx   0/1     Running   1 (2s ago)   2m17s
nginx   1/1     Running   1 (8s ago)   2m22s
So, while we watch the pod, the deletion is only detected after about 9 seconds. That’s the liveness probe at work: since failureThreshold defaults to 3, it has to fail three consecutive checks, 3 seconds apart (three times periodSeconds), before declaring the container unhealthy and killing it. No sooner is the container killed than the kubelet restarts it, as the pod’s restartPolicy field is set to Always. The startup and readiness probes then kick in again, and the pod soon becomes ready. In this way, your pods stay reliable even when part of your application misbehaves.
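If three failed checks in a row feels too aggressive (or too lenient) for your application, you can set the thresholds on the probe explicitly instead of relying on the defaults; the values below are illustrative, not recommendations:
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 3
      timeoutSeconds: 2
      failureThreshold: 5
Here, each check must respond within 2 seconds, and the container is only restarted after 5 consecutive failures.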
Tip
Using readiness and liveness probes helps provide a better user experience, as no requests are sent to pods that are not ready to process them. If your application stops responding appropriately, the liveness probe ensures the container is replaced. And if multiple pods are running to serve requests, your service becomes exceptionally resilient.
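If you want to try out the multiple-pods point, one common way to run several replicas of the same pod is a Deployment. The following is a minimal sketch, assuming the same nginx image and the readiness probe from nginx-probe.yaml; the name, labels, and replica count are illustrative:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
With several replicas behind a Service, a pod that fails its readiness check is simply taken out of rotation while the others keep serving traffic.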
As we discussed previously, a pod can contain one or more containers. Let’s look at some use cases where you might want multiple containers instead of one.