Issue
This guide addresses issues with inotify watchers limits on Kubernetes nodes, a common problem when working with file-watching-intensive applications like Angular in a Kubernetes-managed development environment. The problem typically manifests as "ENOSPC: System limit for number of file watchers reached" errors in the pod logs.
In this scenario, the Kubernetes node itself, rather than an individual container, is likely hitting the file watchers limit, which is why increasing the limit inside a single container does not resolve the issue: fs.inotify.max_user_watches is a node-level kernel setting that applies to every container on that node.
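If you want to gauge how close a node actually is to the limit before changing anything, a rough check is to sum the watch entries that processes have registered under /proc. The following is only a sketch, to be run as root on the node itself, and the total is an approximation because the kernel enforces max_user_watches per user rather than globally:

```sh
#!/bin/sh
# Approximate count of inotify watches currently registered on this node.
# Each watch appears as an "inotify wd:..." line in /proc/<pid>/fdinfo/<fd>.
total=0
for f in /proc/[0-9]*/fdinfo/*; do
  # grep exits non-zero when a file has no inotify lines (or has vanished); skip those.
  n=$(grep -c '^inotify' "$f" 2>/dev/null) || continue
  total=$((total + n))
done
echo "watches in use (all users): $total"
echo "node limit: $(cat /proc/sys/fs/inotify/max_user_watches)"
```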
Solution Overview
To resolve this issue, we need to increase the inotify watchers limit on the Kubernetes node. There are two primary approaches:
- Directly increase the inotify watchers limit on each Kubernetes node.
- Apply a configuration to adjust this limit persistently across all nodes in the Kubernetes cluster.
Option 1: Increase the inotify Watchers Limit on Individual Nodes
In this approach, you will log into each node and manually increase the watchers limit.
Step-by-Step Instructions
- Log into the Kubernetes Node
  Access the node where the application is experiencing the issue, using SSH or your preferred method for node access. (If you have no direct node access, see the note after this list.)
- Check the Current Limit
  Confirm the current inotify watchers limit by running:
      cat /proc/sys/fs/inotify/max_user_watches
- Increase the Limit Temporarily
  To increase the limit for the current session (until reboot), run:
      sudo sysctl fs.inotify.max_user_watches=524288
  Adjust the value as needed; 524288 is typically sufficient.
- Persist the Change
  To make the change persistent across reboots, add it to the sysctl configuration and reload it:
      echo "fs.inotify.max_user_watches=524288" | sudo tee -a /etc/sysctl.conf
      sudo sysctl -p
- Restart Affected Pods
  After making this change, restart the pods experiencing the issue so they pick up the new watchers limit:
      kubectl rollout restart deployment <your-deployment-name>
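On managed clusters where direct SSH to nodes is not available, kubectl debug can start a temporary pod on a specific node and read the node-level value. This is a sketch that assumes your Kubernetes and kubectl versions support node debugging and that cluster policy allows it; <node-name> is a placeholder for the node you want to inspect:

```sh
# Start a throwaway debug pod on the node and print the node's current limit.
kubectl debug node/<node-name> -it --image=busybox -- \
  cat /proc/sys/fs/inotify/max_user_watches

# kubectl prints the generated debug pod name; delete that pod when you are done.
kubectl delete pod <node-debugger-pod-name>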
Option 2: Increase the inotify Watchers Limit Cluster-Wide (Persistent DaemonSet Approach)
This approach is suitable for applying the setting across multiple nodes in a larger Kubernetes cluster or in environments where nodes are created/destroyed dynamically.
Step-by-Step Instructions
- Create a DaemonSet
  A DaemonSet can automatically configure every node in the cluster to increase the max_user_watches value.
- Define the DaemonSet Configuration
  Create a YAML file, inotify-watcher-limit.yaml, with the following content:

      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        name: increase-inotify-limits
        namespace: kube-system
      spec:
        selector:
          matchLabels:
            name: increase-inotify-limits
        template:
          metadata:
            labels:
              name: increase-inotify-limits
          spec:
            hostPID: true
            containers:
            - name: sysctl
              image: busybox
              command:
              - /bin/sh
              - -c
              - "sysctl -w fs.inotify.max_user_watches=524288 && sleep infinity"
              securityContext:
                privileged: true

- Apply the DaemonSet
  Deploy the DaemonSet to your Kubernetes cluster:
      kubectl apply -f inotify-watcher-limit.yaml
  This runs a privileged container on each node that sets the max_user_watches value to 524288.
- Verify the Change
  After deploying, verify that the limit has been applied on each node:
      kubectl exec -it <pod-name> -- cat /proc/sys/fs/inotify/max_user_watches
  Replace <pod-name> with the name of any running pod on the node you want to check. (Additional verification commands are shown after this list.)
- Restart Affected Pods
  Restart the application pods so they pick up the change.
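Before restarting workloads, it can help to confirm that the DaemonSet is actually running on every node. The commands below use the name and label from the manifest above:

```sh
# Wait for the DaemonSet rollout to finish and check that DESIRED matches READY.
kubectl rollout status daemonset/increase-inotify-limits -n kube-system
kubectl get daemonset increase-inotify-limits -n kube-system

# List the DaemonSet pods along with the node each one runs on.
kubectl get pods -n kube-system -l name=increase-inotify-limits -o wide
```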
Troubleshooting Tips
- Verify Limits: If the error persists, confirm that the new max_user_watches limit is large enough and is not being overridden by another sysctl configuration (see the checks after this list).
- Application-Specific Adjustments: Some applications, like Webpack or the Angular CLI, may have additional settings that reduce the number of file watchers they need, such as ignoring node_modules or polling for changes (see the example after this list).
- Monitor Resource Usage: Increasing the file watchers limit allows more watches to be registered, and each watch consumes a small amount of kernel memory, so monitor nodes to ensure they have adequate resources.
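As a concrete sketch of the first two tips: on the node, the standard sysctl configuration locations can be searched for a conflicting fs.inotify setting, and in the workspace the Angular CLI can be told to poll for changes instead of registering inotify watchers (the 2000 ms interval is just an example value):

```sh
# On the node: look for other configuration files that might override the limit.
grep -r "fs.inotify" /etc/sysctl.conf /etc/sysctl.d/ /run/sysctl.d/ /usr/lib/sysctl.d/ 2>/dev/null

# In the workspace: have the Angular CLI poll for changes rather than watch files.
ng serve --poll=2000
```

Webpack has a comparable watchOptions setting (for example, ignoring node_modules) if the project uses Webpack directly.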
By following these steps, you should be able to resolve the "ENOSPC: System limit for number of file watchers reached" error and prevent it from disrupting development workflows on Coder.