AWS Kubernetes: The #1 Rule You Need To Master Before Going To Production.
This is the most important thing to consider before going to production on EKS.
The Actual Problem
After a week of hard work analyzing, dockerizing, and building an entire Kubernetes setup with Minikube, the time came to go to production.
Everything was working nicely on my local machine. But, like any war veteran, I knew I would have to face a couple of issues before everything ran in production as usual.
I’m used to deploying clusters on GKE (Kubernetes on Google Cloud Platform). This time I had to set everything up on AWS, so I decided to go with EKS using self-managed (non-managed) worker nodes.
So I first created the cluster with a t2.micro NodeGroup for testing. After applying the deployments, some of the newly created pods got stuck in Pending because the scheduler couldn’t place them, reporting this error:
Error: 0/2 nodes are available: 2 Too many pods.
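This message surfaces as a FailedScheduling event on the pending pod. A quick way to see it is to list the pods stuck in Pending and inspect one of them (a sketch, assuming kubectl is configured against the cluster; the pod name is a hypothetical placeholder):

```shell
# list pods the scheduler could not place
kubectl get pods --field-selector=status.phase=Pending

# inspect the scheduler's events for one of them
# (replace my-app-7d4b9c-xyz with an actual Pending pod name)
kubectl describe pod my-app-7d4b9c-xyz
```

The `Events:` section at the bottom of the describe output contains the same "0/2 nodes are available: 2 Too many pods" line.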
At first glance, it may seem that the problem is related to the nodes, not to the app. Let’s dive deep into the error message to really understand what’s going on.
0/2 nodes are available
: it means that the cluster has two worker nodes, but neither of them can currently accept the pod.
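The "Too many pods" part hints at why: on EKS with the default AWS VPC CNI, each pod gets an IP address from the node's ENIs, so the number of pods a node can run is capped by its ENI capacity. A rough sketch of the calculation, using the published figures for t2.micro (2 ENIs, 2 IPv4 addresses per ENI):

```shell
# Pods-per-node cap with the AWS VPC CNI:
#   max_pods = ENIs * (IPv4 addresses per ENI - 1) + 2
# t2.micro figures from the AWS instance-type documentation:
enis=2
ips_per_eni=2
max_pods=$(( enis * (ips_per_eni - 1) + 2 ))
echo "$max_pods"   # 4 pods per t2.micro node
```

With system pods (kube-proxy, the CNI daemonset, CoreDNS) already consuming some of those four slots per node, a t2.micro node group fills up almost immediately.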