AWS Kubernetes: The #1 Rule You Need To Master Before Going To Production.
This is the most important thing to consider before going to production on EKS.

The Actual Problem
After a week of commitment and hard work, analyzing, dockerizing, and building an entire Kubernetes cluster with Minikube, the time came to go to production.
Everything was fine and working well on my local machine. But, as a war veteran, I knew I would have to face a couple of issues before everything ran as smoothly in production.
I’m used to deploying clusters on GKE (Google Kubernetes Engine, on Google Cloud Platform). This time I had to set everything up on AWS, and I decided to go with EKS with non-managed worker nodes.
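For reference, spinning up an EKS cluster with a couple of small, self-managed worker nodes via eksctl might look roughly like this (a sketch, not my exact command; the cluster name and region are placeholders):

```bash
# Hypothetical sketch: a two-node EKS cluster with t2.micro
# self-managed (non-managed) workers. Name and region are placeholders.
eksctl create cluster \
  --name demo-cluster \
  --region us-east-1 \
  --node-type t2.micro \
  --nodes 2 \
  --managed=false
```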
So I first created the cluster with a t2.micro node group for testing. After applying the deployments, some of the newly created pods got stuck in the Pending state with this error message:
```
Error: 0/2 nodes are available: 2 Too many pods.
```
At first glance, it may seem that the problem is related to the nodes, not to the app. Let’s dive deep into the error message to really understand what’s going on.
- 0/2 nodes are available: there are two worker nodes in the cluster, but none of them can currently accept the new pods.
- 2 Too many pods: the 2 counts the nodes that rejected the pods for this reason: both nodes have already reached the maximum number of pods they can run.
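You can see this state directly with kubectl (a quick sketch; the pod name below is a placeholder):

```bash
# Pods the scheduler cannot place stay in the Pending state.
kubectl get pods        # STATUS column shows "Pending"
# The reason appears in the pod's events as a FailedScheduling entry.
kubectl describe pod my-app-7d9f8-abcde
```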
Now let’s tackle the right question: WHY THIS ERROR?
The Solution
AWS EKS on EC2 supports native Amazon VPC networking using the Amazon VPC Container Network Interface (CNI) plugin for Kubernetes. The AWS VPC CNI Plugin:
- creates elastic network interfaces (ENIs) and attaches them to your Amazon EC2 nodes.
- assigns a private IPv4 or IPv6 address from your VPC to each pod and service.
This means that the number of pods a node can run is capped by the number of IP addresses its instance type supports. You can check the number of ENIs and IP addresses per ENI for each EC2 instance type here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI.
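Concretely, the default pod limit that EKS sets on a node follows this formula (per the AWS VPC CNI documentation): max pods = ENIs × (IPv4 addresses per ENI − 1) + 2. Each ENI’s primary IP is reserved for the node itself, and the +2 accounts for the host-networking system pods (aws-node and kube-proxy). A quick sanity check for t2.micro, which supports 2 ENIs with 2 IPv4 addresses each:

```bash
# Max-pods formula applied by EKS (per the AWS VPC CNI docs):
#   max_pods = ENIs * (IPv4 addresses per ENI - 1) + 2
# t2.micro supports 2 ENIs with 2 IPv4 addresses each.
ENIS=2
IPS_PER_ENI=2
echo $(( ENIS * (IPS_PER_ENI - 1) + 2 ))   # prints 4
```

So each t2.micro node can host only 4 pods, and system pods like aws-node and kube-proxy already occupy a slot on every node, which is why my two-node cluster ran out of room almost immediately.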