Why can't I add non-root user access to persistent storage?
Classic infrastructure
After you add non-root user access to persistent storage or deploy a Helm chart with a non-root user ID specified, the user can't write to the mounted storage.
Your app deployment or Helm chart configuration specifies the security context for the pod's fsGroup (group ID) and runAsUser (user ID). Generally, a pod's default security context sets runAsNonRoot so that the pod can't run as the root user. Because the fsGroup setting is not designed for shared storage such as NFS file storage, the fsGroup setting is not supported, and the runAsUser setting is automatically set to 2020. These default settings don't allow other non-root users to write to the mounted storage.
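For illustration only, a pod security context along the following lines reflects these defaults. The pod, container, and claim names are placeholders and are not taken from any particular chart; the point is that, without the supplemental group ID setup described below, the non-root user cannot write to the NFS mount.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod            # placeholder name
spec:
  securityContext:
    runAsNonRoot: true         # the pod is not allowed to run as root
    runAsUser: 2020            # default non-root user ID; fsGroup is not applied for NFS file storage
  containers:
  - name: app
    image: gcr.io/google-samples/node-hello:1.0
    volumeMounts:
    - name: data
      mountPath: /myvol
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-pvc   # placeholder claim name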
To allow a non-root user read and write access to a file storage device, you must allocate a supplemental group ID in a storage class, refer to this storage class in the PVC, and set the pod's security context with a runAsUser value that is automatically added to the supplemental group ID. When you grant the supplemental group ID read and write access to the file storage, any non-root user that belongs to that group, including the user that your pod runs as, is granted access to the file storage.
You can use one of the provided gid storage classes, or create your own storage class to define your own supplemental group ID.
Allocating a supplemental group ID for a non-root user of a file storage device is supported for single zone clusters only, and can't be used in multizone clusters.
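If you are not sure whether your cluster spans multiple zones, you can check the zone label on your worker nodes before you start. The label key used here is the standard Kubernetes topology label and is an assumption; your cluster version or provider might expose a different key.
# List worker nodes together with their zone label. The label key is an assumption;
# older clusters might use failure-domain.beta.kubernetes.io/zone instead.
kubectl get nodes -L topology.kubernetes.io/zone
If every node reports the same zone, the cluster is a single zone cluster and the supplemental group ID approach applies.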
- Select one of the provided gid storage classes to assign the default group ID 65531 to the non-root user that you want to be able to read and write to your file storage. If you want to assign a custom group ID, create a YAML file for a customized storage class. In your customized storage class YAML file, include the gidAllocate: "true" parameter and define the group ID in the gidFixed parameter.
Example storage classes for assigning the default group ID 65531:
ibmc-file-bronze-gid
ibmc-file-silver-gid
ibmc-file-gold-gid
Example customized storage class to specify a different group ID.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibmc-file-bronze-gid-custom
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: ibm.io/ibmc-file
parameters:
  type: "Endurance"
  iopsPerGB: "2"
  sizeRange: "[1-12000]Gi"
  mountOptions: nfsvers=4.1,hard
  billingType: "hourly"
  reclaimPolicy: "Delete"
  classVersion: "2"
  gidAllocate: "true"
  gidFixed: "65165"
To create the storage class in your cluster, run kubectl apply -f storageclass.yaml.
- Create a YAML file for your PVC that uses the storage class that you selected or created.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gid-pvc
  labels:
    billingType: "monthly"
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  storageClassName: ibmc-file-bronze-gid
- Create the PVC in your cluster.
kubectl apply -f pvc.yaml
- Wait a few minutes for the file storage to be provisioned and the PVC to change to a Bound status. Note that if you created the PVC in a multizone cluster, the PVC remains in a Pending state; see the optional checks after this procedure.
kubectl get pvc
Example output
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
gid-pvc   Bound    pvc-5e4acab4-9b6f-4278-b53c-22e1d3ffa123   20Gi       RWX            ibmc-file-bronze-gid   2m54s
- Create a YAML file for your deployment that mounts the PVC that you created. In the spec.template.spec.securityContext.runAsUser field, specify the non-root user ID that you want to use. This user ID is automatically added to the supplemental group ID that is defined in the storage class to gain read and write access to the file storage.
Example for creating a node-hello deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gid-deployment
  labels:
    app: gid
spec:
  selector:
    matchLabels:
      app: gid
  template:
    metadata:
      labels:
        app: gid
    spec:
      containers:
      - image: gcr.io/google-samples/node-hello:1.0
        name: gid-container
        volumeMounts:
        - name: gid-vol
          mountPath: /myvol
      securityContext:
        runAsUser: 2020
      volumes:
      - name: gid-vol
        persistentVolumeClaim:
          claimName: gid-pvc
- Create the deployment in your cluster.
kubectl apply -f deployment.yaml
- Verify that your pod is in a Running status.
kubectl get pods
Example output
NAME                              READY   STATUS    RESTARTS   AGE
gid-deployment-5dc86db4c4-5hbts   2/2     Running   0          69s
- Log in to your pod.
kubectl exec <pod_name> -it -- bash
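If the PVC stays in a Pending state or the pod does not reach the Running status, a few standard kubectl checks can help narrow down where provisioning stalled. This is an optional, illustrative set of commands that assumes the resource names used in the examples above.
# Show events for the PVC, for example scheduling or provisioning errors.
kubectl describe pvc gid-pvc

# Confirm that the storage class carries the gidAllocate and gidFixed parameters.
kubectl get storageclass ibmc-file-bronze-gid-custom -o yaml

# Watch the deployment roll out and inspect the pod if it does not start.
kubectl rollout status deployment/gid-deployment
kubectl describe pod -l app=gid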
Verifying the read and write permissions for the non-root user
- List the user ID and group IDs for the current user inside the pod. The setup is correct if your non-root user ID is listed as uid and the supplemental group ID that you defined in your storage class is listed under groups.
id
Example output
uid=2020 gid=0(root) groups=0(root), 65531
- List the permissions for your volume mount directory that you defined in your deployment. The setup is correct if the supplemental group ID that you defined in your storage class is listed with read and write permissions on your volume mount directory.
ls -al /<volume_mount_path>
Example output
drwxrwxr-x 2 nobody 65531 4096 Dec 11 07:40 .
drwxr-xr-x 1 root   root  4096 Dec 11 07:30 ..
- Create a file in your mount directory.
echo "Able to write to file storage with my non-root user." > /myvol/gidtest.txt
- List the permissions for the files in your volume mount directory.
ls -al /myvol/
Example output
drwxrwxr-x 2 nobody 65531        4096 Dec 11 07:40 .
drwxr-xr-x 1 root   root         4096 Dec 11 07:30 ..
-rw-r--r-- 1 2020   4294967294     42 Dec 11 07:40 gidtest.txt
In your CLI output, the non-root user ID is listed with read and write access to the file that you created.
- Exit your pod.
exit
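As an optional alternative, you can run the same checks without opening an interactive shell. This sketch assumes the mount path and pod name from the examples above.
# Run the verification commands directly against the pod.
kubectl exec <pod_name> -- id
kubectl exec <pod_name> -- ls -al /myvol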
If you need to change the ownership of the mount path from nobody, see App fails when a non-root user owns the NFS file storage mount path.