Why does my app fail with a group ID error for NFS file storage permissions?
Classic infrastructure
After you create or add existing NFS storage to your cluster, your app's container deployment fails. You see group ID (GID) error messages.
When you create a container from an image that does not specify a user and user ID (UID), all instructions in the Dockerfile are run by the root user (UID: 0) inside the container by default.
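For example, an image can opt out of this root default by declaring an unprivileged user in its Dockerfile (the base image and UID below are illustrative, not taken from this article):

```dockerfile
# Illustrative only: without a USER instruction, the container runs as root (UID 0)
FROM alpine:3.19
RUN adduser -D -u 1001 appuser   # create an unprivileged user with UID 1001
USER 1001                        # the container process now runs as UID 1001, not root
```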
However, when you mount an NFS file share to your container, the user ID `0` inside the container is mapped to the user ID `nobody` on the NFS host system. As a result, the volume mount path is owned by the user ID `nobody` and not by `root`. This security feature is also known as root squash. Root squash protects the data within NFS by mounting the file share without granting the container's root user ID root permissions on the actual NFS host file system.
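On the NFS server side, root squash is controlled by export options. A typical `/etc/exports` entry might look like the following sketch (the export path and client range are placeholders):

```
# root_squash maps UID 0 to the anonymous UID (65534, "nobody" on many systems);
# no_root_squash would disable this mapping
/exports/myvol 10.0.0.0/24(rw,sync,root_squash,anonuid=65534,anongid=65534)
```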
Use a Kubernetes DaemonSet to enable root permissions on the storage mount path for NFSv4 file shares on all your worker nodes.
To allow root permission on the volume mount path, you must set up a ConfigMap on your worker nodes. The ConfigMap maps the user ID `nobody` from the NFS host system to the root user ID `0` in your container. This process is also referred to as no root squash. An effective way to update all your worker nodes is to use a DaemonSet, which runs a specified pod on every worker node in your cluster. In this case, the pod that the DaemonSet controls updates each of your worker nodes to enable root permission on the volume mount path.
The deployment is configured to allow the DaemonSet pod to run in privileged mode, which is necessary to access the host file system. Running a pod in privileged mode creates a security risk, so use this option with caution.
While the DaemonSet is running, new worker nodes that are added to the cluster are automatically updated.
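As a sketch of this pattern (not the exact IBM-provided `norootsquash` file, which you copy in the steps below), a privileged DaemonSet that can reach the host's NFS ID-mapping configuration might look like this; the names, image, and command are illustrative:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: norootsquash
spec:
  selector:
    matchLabels:
      name: norootsquash
  template:
    metadata:
      labels:
        name: norootsquash
    spec:
      containers:
      - name: norootsquash
        image: alpine:3.19          # illustrative image
        securityContext:
          privileged: true          # required to modify the host file system
        volumeMounts:
        - name: host-etc
          mountPath: /host/etc      # host /etc, where NFS ID mapping is configured
        command: ["/bin/sh", "-c"]
        args:
        - |
          # Illustrative only: adjust the host's NFS ID-mapping settings here,
          # then sleep so the DaemonSet pod keeps running on the node.
          echo "updating NFS idmap settings on this node"
          sleep infinity
      volumes:
      - name: host-etc
        hostPath:
          path: /etc
```

Because this is a DaemonSet, the same pod runs on every worker node, including nodes added later.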
Before you begin:
Steps:
1. Copy the `norootsquash` DaemonSet deployment YAML file.

2. Create the `norootsquash` DaemonSet deployment.

   ```sh
   oc apply -f norootsquash.yaml
   ```

3. Get the name of the pod that your storage volume is mounted to. This pod is not the same as the `norootsquash` pods.

   ```sh
   oc get pods
   ```

4. Log in to the pod.

   ```sh
   oc exec -it mypod -- /bin/bash
   ```

5. Verify that the mount path is owned by `root`.

   ```
   root@mypod:/# ls -al /mnt/myvol/
   total 8
   drwxr-xr-x 2 root root 4096 Feb 7 20:49 .
   drwxr-xr-x 1 root root 4096 Feb 20 18:19 ..
   ```

   The first row of this output shows that the mount path is now owned by `root` instead of `nobody` as before.

6. If the mount path is still owned by `nobody`, exit the pod and reboot your cluster's worker nodes. Wait for the nodes to reboot.

   ```sh
   ibmcloud oc worker reboot --cluster <my_cluster> --worker <my_worker1>,<my_worker2>
   ```

7. Repeat steps 4 and 5 to verify the permissions.