Kubernetes Services and Ingresses – Managing Advanced Kubernetes Resources

It’s story time! Let’s simplify Kubernetes Services.

Imagine you have a group of friends who love to order food from your restaurant. Instead of delivering each order to their houses separately, you set up a central delivery point in their neighborhood. This delivery point (or hub) is your “service.”

In Kubernetes, a Service is like that central hub. It’s a way for the different parts of your application (such as your website, database, or other components) to talk to each other, even if they’re in separate containers or machines. It gives them easy-to-remember addresses to find each other without getting lost.

The Service resource helps expose Kubernetes workloads to the internal or external world. As we know, pods are ephemeral resources, so they can come and go. Every pod is allocated a unique IP address and hostname, but when a pod is replaced, the new pod gets a different IP address and hostname. Consider a scenario where one of your pods wants to interact with another. Because of the pods’ transient nature, you cannot configure a stable endpoint: if you use a pod’s IP address or hostname as the endpoint and the pod is destroyed, you will no longer be able to connect to it. Therefore, exposing a pod on its own is not a great idea.

Kubernetes offers the Service resource to assign a static IP address to a group of pods. Apart from exposing the pods on a single static IP address, it also load-balances traffic between the pods in a round-robin fashion. This helps distribute traffic evenly across the pods and is the default method of exposing your workloads.
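To see what that looks like in practice, here is a minimal sketch of a Service manifest; it assumes an existing Deployment whose pods carry the label app: nginx, and the name nginx-service is purely illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service    # illustrative name; any valid DNS label works
spec:
  selector:
    app: nginx           # matches the labels on the pods to load-balance across
  ports:
    - port: 80           # port the Service listens on (via its static ClusterIP)
      targetPort: 80     # port the selected pods serve on
```

Because no spec.type is set, this Service defaults to the ClusterIP type, which we will come to shortly.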

Service resources are also allocated a static fully qualified domain name (FQDN), based on the Service name. Therefore, you can use the Service’s FQDN instead of its IP address within your cluster to make your endpoints fail-safe. For example, assuming the default cluster domain of cluster.local, the nginx-service Service in the default namespace resolves to nginx-service.default.svc.cluster.local.

Now, coming back to Service resources, there are multiple Service resource types: ClusterIP, NodePort, and LoadBalancer, each having its own respective use case:

Figure 6.4 – Kubernetes Services

Let’s understand each of these with the help of examples.
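Before we dive into the details, here is a rough sketch of how the three types differ in a manifest; the only structural change is the spec.type field, and the nodePort value shown is just an example within the default 30000-32767 range:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort         # or LoadBalancer; omit the field entirely for ClusterIP
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080    # only used by NodePort/LoadBalancer; auto-assigned if omitted
```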
