
How to K8s: Expose a Public IP via a Load Balancer Service in Kubernetes

Load balancing in Kubernetes

Running your workload as a Kubernetes Deployment is considered a best practice because it provides high availability: Kubernetes keeps the desired number of replicas running, so your service stays up even when individual pods fail. You can use Orka to set up your iOS/macOS CI without any knowledge of Kubernetes. That said, there are some more advanced scenarios in which you might want to use Kubernetes functionality in addition to Orka’s other features. Orka makes this possible with its Sandbox functionality.

Deployments provide declarative updates for ReplicaSets, another Kubernetes abstraction that is responsible for keeping a specified number of copies of your code healthy, each running in its own pod. If a given pod fails, the ReplicaSet tears that pod down and spins up another in its place. This gets interesting quickly, because each pod is assigned its own IP address upon creation: in effect, you have “X” number of pods running the app, each with a dynamically assigned IP address that a requesting server would need to target.
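As a sketch of what that looks like in practice, here is a minimal Deployment manifest; the name, labels, and image below are hypothetical placeholders, not values from this tutorial:

```yaml
# deployment.yaml -- illustrative only
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3                  # the ReplicaSet keeps three pods running
  selector:
    matchLabels:
      app: example-app         # must match the pod template labels below
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: nginx:1.25    # placeholder image
          ports:
            - containerPort: 80
```

Each of the three pods this creates gets its own cluster IP, which is exactly the addressing problem discussed next.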

How will the requesting server know which pod to try to communicate with? Will it need to keep track of the IP addresses of pods that the ReplicaSet creates and destroys on the fly? Thankfully, no. Instead, we can use a Kubernetes Service of type LoadBalancer to expose a single IP, and the traffic it receives will be distributed across the pods in the deployment.
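As a sketch, a matching Service of type LoadBalancer might look like this; the name and selector labels are hypothetical and would need to match your own pods’ labels:

```yaml
# service.yaml -- illustrative only
apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  type: LoadBalancer
  selector:
    app: example-app    # routes to any pod carrying this label
  ports:
    - port: 80          # port exposed on the external IP
      targetPort: 80    # port the containers listen on
```

Applying this with kubectl apply -f service.yaml asks the cluster to provision a load balancer and spread traffic arriving on port 80 across the selected pods, no matter how many there are or how their IPs change.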

Below, you can see the result of calling kubectl get services. In the screenshot, I explicitly included the --kubeconfig flag, although chances are you have already set yours via the KUBECONFIG environment variable. If so, you can disregard that tidbit. And, actually, this particular tutorial is an outgrowth of working with the Chef Development Kit (ChefDK), hence the Chef Server services listed below.

kubectl get services

[Image: Results of kubectl get services]


As you can see, we get lots of helpful information. The NAME, TYPE, and AGE columns are pretty self-explanatory, but we also get the CLUSTER-IP, the EXTERNAL-IP, and the PORT(S) column, which is formatted as port:nodePort (e.g., 80:30526). Here, 80 is the service port reachable on the external IP, and 30526 is the NodePort opened on each cluster node.

Because the Kubernetes CLUSTER-IP is only routable inside the cluster, we will never be able to reach our service on it from outside. Instead, we need to identify the EXTERNAL-IP that has been assigned to our service and pair it with the service port, which in the above case is 80. By convention, in Orka’s Kubernetes implementation we will always want the first EXTERNAL-IP listed for a given service.

So, to put it all together, given the above response from our kubectl get services call, if we want to access the chef-server1 service, we will need to hit 10.10.10.93:80. Notice also that the underlying protocol is TCP. That is the default, but a LoadBalancer Service can also be configured for UDP, and HTTP(S) traffic simply rides on top of the TCP listener.
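For illustration, the protocol is set per port in the Service spec. This fragment (with hypothetical names) exposes a UDP port instead of the TCP default:

```yaml
# udp-service.yaml -- illustrative only
apiVersion: v1
kind: Service
metadata:
  name: example-udp
spec:
  type: LoadBalancer
  selector:
    app: example-app
  ports:
    - name: dns-udp
      protocol: UDP      # protocol defaults to TCP when omitted
      port: 53
      targetPort: 53
```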

Takeaways

There are several ways to expose a public IP from a Kubernetes cluster, but one of the simplest and most powerful setups can be stood up from the kubectl CLI with just a few commands. Moreover, because the LoadBalancer Service exposes a single public IP and directs the traffic it receives to the various pods in the deployment, external services are not responsible for tracking each individual pod’s IP, which makes for much simpler code to maintain across the board.

Of course, if you’re not ready to dive head-first into managing a Kubernetes cluster, check out Orka! Our Mac build orchestration solution takes away the complexity of Kubernetes while retaining its most powerful features: containerization, scalability, and control.

