LoadBalancer Service resources – Managing Advanced Kubernetes Resources

LoadBalancer Service resources help expose your pods on a single load-balanced endpoint. These Service resources can only be used on cloud platforms and on platforms that provide Kubernetes controllers with access to spin up external network resources. Under the hood, a LoadBalancer Service creates a NodePort Service resource and then requests the cloud API to provision a load balancer in front of the node ports. That way, it provides a single endpoint to access your Service resource from the external world.

Spinning up a LoadBalancer Service resource is simple—just set the type to LoadBalancer.

Let’s expose the Flask application as a load balancer using the following manifest, flask-loadbalancer.yaml:

spec:
  type: LoadBalancer
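For reference, a complete manifest might look like the following sketch. The metadata name, selector label, and ports are assumptions based on the flask-app Service queried later in this section; adjust them to match your Deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: flask-app        # assumed name, matching the Service queried below
spec:
  type: LoadBalancer
  selector:
    app: flask-app       # assumed pod label from the earlier Deployment
  ports:
  - port: 5000           # port the load balancer listens on
    targetPort: 5000     # container port of the Flask app
```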

Now, let’s apply the manifest using the following command:

$ kubectl apply -f flask-loadbalancer.yaml

Let’s list the Service resource to see the changes using the following command:

$ kubectl get svc flask-app

NAME        TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)
flask-app   LoadBalancer   10.3.240.246   34.71.95.96   5000:32618

The Service resource type is now LoadBalancer. As you can see, it now contains an external IP along with the cluster IP.
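Note that cloud providers can take a minute or two to provision the load balancer, during which the EXTERNAL-IP column shows <pending>. Here is a small sketch for watching the Service and scripting against the assigned address, assuming the Service is named flask-app as above:

```shell
# Watch until the cloud provider assigns an external IP
# (the column shows <pending> until then)
kubectl get svc flask-app --watch

# Once assigned, capture the external IP for scripting
EXTERNAL_IP=$(kubectl get svc flask-app \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl "http://${EXTERNAL_IP}:5000"
```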

You can then curl the external IP on port 5000 using the following command:

$ curl 34.71.95.96:5000

Hi there! This page was last visited on 2023-06-26, 08:37:50.

And you get the same response as before. Your Service resource is now running externally.

Tip

LoadBalancer Service resources tend to be expensive, as every new resource spins up a network load balancer within your cloud provider. If you have HTTP-based workloads, use Ingress resources instead of LoadBalancer to save on resource costs and optimize traffic, as they spin up an application load balancer instead.

While Kubernetes Services form the basic building block of exposing your container applications internally and externally, Kubernetes also provides Ingress resources for additional fine-grained control over traffic. Let’s have a look at this in the next section.
