AWS Kubernetes: The #1 Rule You Need To Master Before Going To Production.

Akintola L. F. ADJIBAO
3 min read · Jun 30, 2022


This is the most important thing to consider before going to production on EKS.

The Actual Problem

After a week of commitment and hard work spent analyzing the app, dockerizing it, and building an entire Kubernetes cluster locally with Minikube, the time came to go to production.

Everything was fine and working pretty well on my local machine. But, as a war veteran of this kind of migration, I knew I would have to face a couple of issues before everything ran as smoothly in production.

I’m used to deploying clusters on GKE (Google Kubernetes Engine, on Google Cloud Platform). This time I had to set everything up on AWS, so I decided to go with EKS and self-managed worker nodes.

So I first created the cluster with a t2.micro NodeGroup for testing. After applying the deployments, some of the newly created pods failed to schedule, with this error message:

Error: 0/2 nodes are available: 2 Too many pods.

At first glance, it may seem that the problem is related to the nodes, not to the app. Let’s dive deep into the error message to really understand what’s going on.

  • 0/2 nodes are available: the cluster has two worker nodes, but none of them can currently accept a new pod.
  • 2 Too many pods: on both (2) nodes, scheduling failed because each node has already reached the maximum number of pods it can host. The quick check after this list lets you compare each node's pod count against its capacity.
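To see this limit on your own cluster, here is a minimal diagnostic sketch (my own illustration, not part of the original post) using the official kubernetes Python client. It assumes a working kubeconfig pointing at the EKS cluster and simply compares the number of scheduled pods on each node with the node's advertised pod capacity.

```python
from collections import Counter

from kubernetes import client, config

config.load_kube_config()  # assumes your kubeconfig already targets the EKS cluster
v1 = client.CoreV1Api()

# Count the pods currently scheduled on each node.
pods_per_node = Counter(
    pod.spec.node_name
    for pod in v1.list_pod_for_all_namespaces().items
    if pod.spec.node_name
)

# Compare against each node's advertised pod capacity.
for node in v1.list_node().items:
    name = node.metadata.name
    max_pods = int(node.status.allocatable["pods"])
    print(f"{name}: {pods_per_node.get(name, 0)}/{max_pods} pods")
```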

Now let’s tackle the right question: WHY THIS ERROR?

The Solution

AWS EKS on EC2 supports native Amazon VPC networking using the Amazon VPC Container Network Interface (CNI) plugin for Kubernetes. The AWS VPC CNI Plugin:

  • creates elastic network interfaces (ENI) and attaches them to your Amazon EC2 nodes.
  • assigns a private IPv4 or IPv6 address from your VPC to each pod and service.

This means that each node has a limited number of IP addresses, depending on the EC2 instance type you’re using. You can check the number of ENIs and IP addresses per instance type here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI.

And since each node (EC2 instance) has a limited number of IP addresses, it can only host a limited number of pods, and that limit depends on the instance type we choose for our NodeGroups.
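If you’d rather not read the limits off that table by hand, you can also fetch them programmatically. Below is a small sketch (again my own illustration, not from the article) using boto3’s describe_instance_types call; it assumes AWS credentials and a default region are already configured.

```python
import boto3

ec2 = boto3.client("ec2")

def eni_limits(instance_type: str) -> tuple[int, int]:
    """Return (max ENIs, IPv4 addresses per ENI) for an EC2 instance type."""
    resp = ec2.describe_instance_types(InstanceTypes=[instance_type])
    net = resp["InstanceTypes"][0]["NetworkInfo"]
    return net["MaximumNetworkInterfaces"], net["Ipv4AddressesPerInterface"]

print(eni_limits("t3.medium"))  # e.g. (3, 6) at the time of writing
```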

Through my research, it turned out that the number of pods a node can support also depends on the version of the Amazon VPC CNI add-on. Fortunately, the CNI add-on version is tied to the Kubernetes version you’re using.

Knowing that, let’s now figure out how to choose the correct EC2 type for your worker nodes.

This guide gives you the steps to calculate the maximum number of pods that your EC2 worker nodes can support (the sketch after the example list below reproduces the math).

For example:

Let’s assume you’re using Kubernetes 1.22. Then:

  • t2.micro and t3.micro worker nodes can host up to 4 pods,
  • t2.medium and t3.medium worker nodes can have up to 17 pods,
  • t2.large and t3.large worker nodes can host up to 35 pods,
  • and m5.large worker nodes can host up to 29 pods.
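These numbers all come from the ENI limits linked above, via the standard VPC CNI formula: max pods = ENIs x (IPv4 addresses per ENI - 1) + 2, where the -1 reserves each ENI's primary address and the +2 accounts for the host-networking aws-node and kube-proxy pods. Here is a minimal sketch that reproduces them; the ENI limits are hard-coded for illustration and may change, so always check the current values for your instance types.

```python
# Standard VPC CNI formula (without prefix delegation):
#   max_pods = max_enis * (ipv4_addresses_per_eni - 1) + 2
# ENI limits below are copied from the AWS table and may change over time.
ENI_LIMITS = {
    # instance type: (max ENIs, IPv4 addresses per ENI)
    "t3.micro":  (2, 2),
    "t3.medium": (3, 6),
    "t3.large":  (3, 12),
    "m5.large":  (3, 10),
}

def max_pods(instance_type: str) -> int:
    enis, ips_per_eni = ENI_LIMITS[instance_type]
    return enis * (ips_per_eni - 1) + 2

for itype in ENI_LIMITS:
    print(f"{itype}: {max_pods(itype)} pods")
# t3.micro: 4, t3.medium: 17, t3.large: 35, m5.large: 29
```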

But… there is one thing left:

If the number of pods you plan to run is exactly the maximum the worker node allows, chances are your deployment will still fail.

In fact, EKS runs system pods of its own on the worker nodes, for example the kube-proxy and aws-node DaemonSets and the CoreDNS replicas, so some of each node’s pod slots (and IP addresses) are already taken.
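To see how many slots are already spoken for on each node, here is another small sketch (my own illustration, assuming a working kubeconfig) that counts the kube-system pods scheduled on each node with the kubernetes Python client.

```python
from collections import Counter

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Count the kube-system pods already scheduled on each node:
# these are slots your own workloads cannot use.
system_pods = Counter(
    pod.spec.node_name
    for pod in v1.list_namespaced_pod("kube-system").items
    if pod.spec.node_name
)
for node_name, count in sorted(system_pods.items()):
    print(f"{node_name}: {count} kube-system pods")
```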

So leave yourself a margin of about 5 pods per node on any EKS cluster 😉 🤖.

Conclusion

The journey of shipping a production-ready application to Kubernetes is not as easy as it may seem.

When it comes to building a cluster on EKS with EC2 worker nodes, choosing the right instance type is one of the most important decisions to make, if not the most important one.

Thanks once again for reading!

If you need help with your Kubernetes setup on AWS EKS, you can reach out to me on Upwork.

Till next time, take care and keep improving 🪐🏆🥇.

Cheers!
