How to increase the number of pods limit per Kubernetes Node
When you set up a Kubernetes cluster, there are default limits that define the supported size of the cluster. For example, the following are the limits for Kubernetes v1.17, released in late 2019:
- Max Nodes : 5000
- Max Pods : 150,000
- Max Containers : 300,000
- Max Pods/Node : 110
You could refer to the documentation here https://kubernetes.io/docs/setup/best-practices/cluster-large/
Although the official document mentions 100 pods per node, in reality this limit is set to 110 pods per node. You can validate this by checking the status of a node in a real cluster:
```
$ kubectl get nodes
NAME      STATUS   ROLES    AGE     VERSION
kube-01   Ready    master   6d15h   v1.17.3
kube-02   Ready    <none>   6d15h   v1.17.3
kube-03   Ready    <none>   6d15h   v1.17.3

$ kubectl describe node kube-03 | grep -i capacity -A 13
Capacity:
  cpu:                2
  ephemeral-storage:  81120924Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             4046380Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  74761043435
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3943980Ki
  pods:               110
```
Here kube-03 is one of the nodes in my cluster; replace it with the name of an actual node in yours. You should clearly see that Capacity.pods is set to 110 for this node.
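If you want to script this check, a small awk filter can pull just the pods value out of the Capacity block. On a live node you would pipe `kubectl describe node kube-03` into the filter; the sketch below pipes in a captured sample of that output instead, so the filter itself can be tried anywhere:

```shell
# Pull the "pods" value out of the Capacity block of `kubectl describe node` output.
# On a live cluster you would run:
#   kubectl describe node kube-03 | awk '/^Capacity:/{c=1} c && $1=="pods:"{print $2; exit}'
# Here a captured sample of that output stands in for the live command.
sample='Capacity:
  cpu:                2
  pods:               110
Allocatable:
  cpu:                2
  pods:               110'

printf '%s\n' "$sample" | awk '/^Capacity:/{c=1} c && $1=="pods:"{print $2; exit}'
# prints 110
```

To see the value for every node at once, `kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.capacity.pods` reads the same field straight from each Node object.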
Who enforces this configuration?
The pod limit is set and enforced by the kubelet running on each node. To change it, and that is possible, you need to look at and update the kubelet configuration. The official kubelet document lists all the configuration options; the one you are interested in here is:
```
--max-pods int32
    Number of Pods that can run on this Kubelet. (default 110)
    (DEPRECATED: This parameter should be set via the config file specified by
    the Kubelet's --config flag. See
    https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/
    for more information.)
```
How to change the max pod capacity per node?
I am going to describe the manual process of updating this configuration. However, you could automate it if you use an automation/deployment tool to create your Kubernetes cluster.
Step 1: Log in to the node you would like to change and find out where the kubelet configuration lives:
```
root@kube-03:~# cat /etc/issue
Ubuntu 16.04.5 LTS \n \l

root@kube-03:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Fri 2020-02-14 15:22:28 UTC; 6 days ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 1077 (kubelet)
    Tasks: 20
   Memory: 107.8M
      CPU: 12h 25min 12.131s
   CGroup: /system.slice/kubelet.service
           └─1077 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-con
```
In this example, I have an Ubuntu system and kubelet is managed by systemd. Since it is a systemd service, it reads its unit files from /etc/systemd when the process is launched.
I found the kubelet launch configuration at the following path:
```
$ ls /etc/systemd/system/kubelet.service.d/
10-kubeadm.conf
```
Then I edited /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and added the --max-pods option to the ExecStart line as follows:
```
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --max-pods=243
```
Why 243? Well, it is an uncommon number for a default configuration, which makes it easy to validate whether my change took effect.
Now it is time to restart the kubelet service:
```
root@kube-03:~# systemctl restart kubelet
Warning: kubelet.service changed on disk. Run 'systemctl daemon-reload' to reload units.
root@kube-03:~# systemctl daemon-reload
root@kube-03:~# systemctl restart kubelet
```
You need to ensure that systemctl daemon-reload runs before restarting kubelet, so that systemd picks up the unit file changes made earlier.
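Before going back to kubectl, you can also confirm on the node that the flag actually reached the restarted kubelet process by inspecting its command line. The sketch below parses a captured command line so it can be tried anywhere; the live version, shown in a comment, reads the running process from /proc:

```shell
# Confirm the restarted kubelet was launched with --max-pods.
# On the node itself you would inspect the live process:
#   tr '\0' ' ' < /proc/$(pidof kubelet)/cmdline
# Here a captured command line stands in for that output.
cmdline='/usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --max-pods=243'

printf '%s\n' "$cmdline" | grep -o -- '--max-pods=[0-9]*'
# prints --max-pods=243
```

If the grep prints nothing, the flag never made it into the ExecStart line, which usually means daemon-reload was skipped.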
Now it is time to test. Going back to my kubectl console, I run the same command as earlier:
```
$ kubectl describe node kube-03 | grep -i capacity -A 13
Capacity:
  cpu:                2
  ephemeral-storage:  81120924Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             4046380Ki
  pods:               243
Allocatable:
  cpu:                2
  ephemeral-storage:  74761043435
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3943980Ki
  pods:               243
```
Voila! As you will have noticed, the max pod capacity of the node has been updated. You can follow this same process to update essentially any configuration that kubelet accepts.
Update: The more modern approach is to set this value in the kubelet config file and restart the service, instead of adding it as an option to the kubelet start command. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ to read more about config files.
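For reference, here is a sketch of what that looks like with the file this cluster already passes via --config (/var/lib/kubelet/config.yaml, as seen in the systemd unit earlier). maxPods is a standard field of the KubeletConfiguration object; only the relevant lines are shown, so merge this into your existing file rather than replacing it:

```yaml
# /var/lib/kubelet/config.yaml (the file kubelet is started with via --config)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 243
```

After editing the file, a plain `systemctl restart kubelet` is enough; since the systemd unit itself is unchanged, no daemon-reload is needed.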
If you liked this article and are interested in learning more such tips and tricks, do subscribe to this blog and to my YouTube channel. You can also enroll in my Kubernetes course on Udemy here.