Adding File Storage for VPC to apps
IBM Cloud® Kubernetes Service provides pre-defined storage classes that you can use to provision File Storage for VPC. Each storage class specifies the type of File Storage for VPC that you provision, including available size, IOPS, file system, and the retention policy. You can also create your own storage classes depending on your use case.
The File Storage for VPC cluster add-on is available in Beta.
The following limitations apply to the add-on beta.
- If your cluster and VPC are in separate resource groups, then before you can provision file shares, you must create your own storage class and provide your VPC resource group ID under the resourceGroup section along with the kube-<clusterID> security group ID under the securityGroupIDs section. To retrieve the security group ID, run ibmcloud is sg kube-<cluster-id> | grep ID. For more information, see Creating your own storage class.
- New security group rules were introduced in cluster versions 4.11 and later. These rule changes mean that you must sync your security groups before you can use File Storage for VPC. For more information, see Adding File Storage for VPC to apps.
- New storage classes were added with version 2.0 of the add-on. You can no longer provision new file shares that use the older storage classes. Existing volumes that use the older storage classes continue to function; however, you cannot expand volumes that were created by using the older classes. For more information, see Migrating to a new storage class.
After you provision a specific type of storage by using a storage class, you can't change the type or retention policy for the storage device. However, you can change the size and the IOPS if you want to increase your storage capacity and performance. To change the type or retention policy for your storage, you must create a new storage instance and copy the data from the old storage instance to the new one.
Decide on a storage class. For more information, see the storage class reference.
Review the following notes and considerations for File Storage for VPC.
- By default, the File Storage for VPC cluster add-on provisions file shares in the kube-<clusterID> security group. This means that pods can access file shares across nodes and zones.
- If you need any of the following features, you must create your own storage class.
- Your app needs to run as non-root.
- Your cluster is in a different resource group from your VPC and subnet.
- You need to limit file share access to pods on a given node or in a given zone.
- You need to bring your own (BYOK) encryption using a KMS provider such as HPCS or Key Protect.
- You need to manually specify the subnet or IP address of the Virtual Network Interface (VNI).
New security group rules were introduced in versions 4.11 and later. These rule changes mean you must sync your security groups before you can use File Storage for VPC. If your cluster was initially created at version 4.11 or earlier, run the following commands to sync your security group settings.
- Get the ID of your cluster.
ibmcloud oc cluster ls
- Get the ID of the kube-<clusterID> security group.
ibmcloud is sg kube-<cluster-id> | grep ID
- Sync the kube-<clusterID> security group by using the ID that you retrieved in the previous step.
ibmcloud ks security-group sync -c <cluster ID> --security-group <ID>
Quick start for File Storage for VPC
Create a persistent volume claim (PVC) to dynamically provision File Storage for VPC for your cluster. Dynamic provisioning automatically creates the matching persistent volume (PV) and orders the file share in your account.
-
Review the pre-installed storage classes by running the following command. For more information, see the storage class reference.
oc get sc | grep vpc-file
-
Save the following YAML to a file. This example creates a claim that is named my-pvc by using the ibmc-vpc-file-min-iops storage class and a size of 20Gi.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc # Enter a name for your PVC.
spec:
  accessModes:
    - ReadWriteMany # The file share can be mounted on multiple nodes and pods.
  resources:
    requests:
      storage: 20Gi # Enter the size of the storage in gigabytes (Gi).
  storageClassName: ibmc-vpc-file-min-iops # Enter the name of the storage class that you want to use.
-
Create the PVC.
oc apply -f my-pvc.yaml
-
Verify that your PVC is created and bound to the PV.
oc describe pvc my-pvc
Example output
Name: my-pvc
Namespace: default
StorageClass: ""
Status: Bound
Volume: pvc-0d787071-3a67-11e7-aafc-eef80dd2dea2
Labels: <none>
Capacity: 20Gi
Access Modes: RWX
Events:
  FirstSeen   LastSeen   Count   From                                                       SubObjectPath   Type     Reason                  Message
  ---------   --------   -----   ----                                                       -------------   ----     ------                  -------
  1m          1m         1       {ibm.io/ibmc-vpc-file 31898035-3011-11e7-a6a4-7a08779efd33 }                Normal   ProvisioningSucceeded   Successfully provisioned volume pvc-0d787071-3a67-11e7-aafc-eef80dd2dea2
-
Save the following deployment configuration to a file called deployment.yaml and reference the PVC that you created in the previous step.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    app: my-deployment
spec:
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox:1.28
        command: [ "sh", "-c", "sleep 1h" ]
        volumeMounts:
        - name: my-vol
          mountPath: /data/demo # Mount path for the application.
      volumes:
      - name: my-vol
        persistentVolumeClaim:
          claimName: my-pvc # Your PVC name.
volumeMounts.mountPath
- In the container volume mounts section, enter the absolute path of the directory where the volume is mounted inside the container. Data that is written to the mount path is stored under the root directory in your physical File Storage for VPC instance. If you want to share a volume between different apps, you can specify volume sub paths for each of your apps, as shown in the sketch after this list.
volumeMounts.name
- In the container volume mounts section, enter the name of the volume to mount to your pod.
volume.name
- In the volumes section, enter the name of the volume to mount to your pod. Typically this name is the same as volumeMounts.name.
claimName
- In the volumes persistent volume claim section, enter the name of the PVC that binds the PV that you want to use.
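For example, the following sketch (with hypothetical sub-path and container names such as app1 and app2) shows how two containers could share the same file share by mounting different sub paths of one PVC. It assumes the my-pvc claim from the earlier step.
apiVersion: v1
kind: Pod
metadata:
  name: shared-subpath-demo # Hypothetical pod name for illustration.
spec:
  containers:
  - name: app1
    image: busybox:1.28
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - name: shared-vol
      mountPath: /data/app1
      subPath: app1 # Data is stored under the app1 directory on the file share.
  - name: app2
    image: busybox:1.28
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - name: shared-vol
      mountPath: /data/app2
      subPath: app2 # Data is stored under the app2 directory on the file share.
  volumes:
  - name: shared-vol
    persistentVolumeClaim:
      claimName: my-pvc # The PVC that you created earlier.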
-
Create the deployment.
oc apply -f deployment.yaml
-
Verify that the PV is successfully mounted.
oc describe deployment my-deployment
The mount point is in the Volume Mounts field and the volume is in the Volumes field.
Containers:
  Mounts:
    /data/demo from my-vol (rw)
Volumes:
  my-vol:
    Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName: my-pvc
    ReadOnly: false
-
Optional: After your pod is running, try expanding your storage volume.
Migrating to a new storage class
- New storage classes were added with version 2.0 of the add-on.
- You can no longer provision new file shares that use the older storage classes.
- Existing volumes that use the older storage classes continue to function; however, you cannot expand volumes that were created by using the older classes.
- If you need the volume expansion feature, complete the following steps to migrate your apps to a newer storage class.
- If you don't need the volume expansion feature, you do not need to migrate and your PVCs continue to function as normal.
- The following steps cover manual migration. If you use a backup service such as PX Backup or Velero, you can use those services to back up and restore your apps to a new storage class.
-
Find the PVC that you want to migrate and make a note of both the PVC name and the associated PV name.
oc get pvc
-
Scale down your app that is using the PVC.
oc scale deployment DEPLOYMENT --replicas 0
-
Edit the PV object that your app is using to change the reclaim policy to Retain and the storage class to ibmc-vpc-file-min-iops.
kubectl edit pv PV
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 20Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: <pvc-name>
    namespace: default
  ...
  persistentVolumeReclaimPolicy: Retain # Change Delete to Retain.
  storageClassName: ibmc-vpc-file-min-iops # Enter a new storage class.
  volumeMode: Filesystem
-
Delete the existing PVC object.
kubectl delete pvc PVC
-
Edit the PV again and remove the claimRef section.
kubectl edit pv PV
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 20Gi
  #claimRef:
  #  apiVersion: v1
  #  kind: PersistentVolumeClaim
  #  name: <pvc-name>
  #  namespace: default
  #  resourceVersion: "381270"
  #  uid: 4042f319-1233-4187-8549-8249a840a8dd
-
Create a PVC that has the same name and size as your previous PVC. Repeat this step, one PVC at a time, for all affected PVCs.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <pvc-name>
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: <size>Gi
  storageClassName: ibmc-vpc-file-min-iops
-
Scale up your app that was using the PVC.
oc scale deployment DEPLOYMENT --replicas X
-
To continue using volume expansion, see Setting up volume expansion.
Setting up volume expansion
To provision volumes that support expansion, you must use a storage class that has allowVolumeExpansion set to true.
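As a quick check (shown here against the ibmc-vpc-file-min-iops class as an example; substitute any class name), you can confirm whether a storage class allows expansion by inspecting its definition.
oc get sc ibmc-vpc-file-min-iops -o yaml | grep allowVolumeExpansion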
The File Storage for VPC cluster add-on supports expansion in both online and offline modes. However, expansion is only possible within the size and IOPS range of the File Storage for VPC profile.
New storage classes were introduced with version 2.0. Volume expansion does not work for shares that use the storage classes from earlier versions of the add-on.
Before you begin
-
To use volume expansion, make sure you update your add-on to at least version 2.0.
-
Make sure your app is using one of the latest storage classes. For migration steps, see Migrating to a new storage class.
-
If you don't have a running app, begin by deploying the quick start example PVC and Deployment.
Expanding a mounted volume
-
After your PVC is mounted by an app pod, you can expand your volume by editing your PVC and increasing the value of the spec.resources.requests.storage field.
kubectl edit pvc my-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
-
Save and close the PVC. Wait a few minutes for the volume to be expanded.
-
Verify that your volume is expanded.
kubectl get pvc
Example output
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             AGE
my-pvc   Bound    pvc-25b6912e-75bf-41ca-b6b2-567fa4f9d245   50Gi       RWX            ibmc-vpc-file-min-iops   3m31s
Attaching existing file storage to an app
Create a persistent volume claim (PVC) to statically provision File Storage for VPC for your cluster. Static provisioning allows cluster administrators to make existing storage devices available to a cluster.
-
Get your cluster ID.
ibmcloud ks cluster ls
-
Get the ID of your kube-<clusterID> security group.
ibmcloud is sg kube-<cluster-id> | grep ID
-
Create a file share. For more information, see Creating file shares and mount targets.
ibmcloud is share-create --name my-file-share --zone us-south-2 --profile dp2 --size 1000 --iops 1000
-
Create a share mount target and specify the kube-<clusterID> security group ID that you retrieved earlier in the --vni-sgs option.
ibmcloud is share-mount-target-create my-file-share --subnet my-subnet --name NAME --vni-name my-share-vni-1 --vni-sgs kube-<cluster-id> --resource-group-name Default --vpc ID
-
Before you can create a persistent volume (PV), retrieve the details about your file share.
ibmcloud is shares
-
Get the details of your share. Make a note of the mount targets.
ibmcloud is share SHARE-ID
Example command.
ibmcloud is share r134-bad98878-1f63-45d2-a3fd-60447094c2e6
Example output
ID                           r134-bad98878-1f63-45d2-a3fd-60447094c2e6
Name                         pvc-e7e005a9-e96b-41ad-9d6e-74650a9110a0
CRN                          crn:v1:staging:public:is:us-south-1:a/77f2bceddaeb577dcaddb4073fe82c1c::share:r134-bad98878-1f63-45d2-a3fd-60447094c2e6
Lifecycle state              stable
Access control mode          security_group
Zone                         us-south-1
Profile                      dp2
Size(GB)                     10
IOPS                         100
User Tags                    clusterid:cpjao3l20dl78jadqkd0,namespace:default,provisioner:vpc.file.csi.ibm.io,pv:pvc-e7e005a9-e96b-41ad-9d6e-74650a9110a0,pvc:pv-file,reclaimpolicy:delete,storageclass:custom-eni
Encryption                   provider_managed
Mount Targets                ID                                          Name
                             r134-aa2aabb8-f616-47be-886b-99220852b728   pvc-e7e005a9-e96b-41ad-9d6e-74650a9110a0
Resource group               ID                                 Name
                             300b9469ee8676f9a038ecdf408c1a9d   Default
Created                      2024-06-11T19:55:11+05:30
Replication role             none
Replication status           none
Replication status reasons   Status code   Status message
-
Get the nfsServerPath, also called the Mount path.
ibmcloud is share-mount-target SHARE-ID SHARE-TARGET-ID
Example command.
ibmcloud is share-mount-target r134-bad98878-1f63-45d2-a3fd-60447094c2e6 r134-aa2aabb8-f616-47be-886b-99220852b728
Example output
ID                          r134-aa2aabb8-f616-47be-886b-99220852b728
Name                        pvc-e7e005a9-e96b-41ad-9d6e-74650a9110a0
VPC                         ID                                          Name
                            r134-f05922d4-d8ab-4f64-9a3d-82664b303bc1   vpc-public
Access control mode         security_group
Resource type               share_mount_target
Virtual network interface   ID                                          Name
                            0716-6407fb4b-e962-49c4-8556-dc94f4574b4b   defective-chloride-huffy-gladly
Lifecycle state             stable
Mount path                  10.240.0.23:/89d8a454_f552_42bf_8374_4d31481edf4d
Transit Encryption          none
Created                     2024-06-11T19:55:12+05:30
-
Create a PV configuration file called static-file-share.yaml that references your file share.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-file-share
spec:
  mountOptions:
    - hard
    - nfsvers=4.1
    - sec=sys
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  csi:
    volumeAttributes:
      nfsServerPath: NFS-SERVER-PATH
      isEITEnabled: "true" # The default is false.
    driver: vpc.file.csi.ibm.io
    volumeHandle: FILE-SHARE-ID#SHARE-TARGET-ID
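For reference, with the example share from the previous outputs, the csi section would look similar to the following sketch: the nfsServerPath is the Mount path from the mount target output, and the volumeHandle combines the share ID and the mount target ID. The IDs shown are the example values from this topic, not real resources.
  csi:
    volumeAttributes:
      nfsServerPath: 10.240.0.23:/89d8a454_f552_42bf_8374_4d31481edf4d # Mount path from the share mount target output.
      isEITEnabled: "false" # Set to "true" only if you also set up encryption in transit.
    driver: vpc.file.csi.ibm.io
    volumeHandle: r134-bad98878-1f63-45d2-a3fd-60447094c2e6#r134-aa2aabb8-f616-47be-886b-99220852b728 # FILE-SHARE-ID#SHARE-TARGET-ID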
-
Create the PV.
oc apply -f static-file-share.yaml
-
Create a PVC configuration file called pvc-static.yaml.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-static
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: "" # Leave the storage class blank.
-
Create the PVC to bind your PV.
oc apply -f pvc-static.yaml
-
Create a deployment file named testpod.yaml to attach your file share to the desired application pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testpod
  labels:
    app: testpod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testpod
  template:
    metadata:
      labels:
        app: testpod
    spec:
      containers:
      - image: IMAGE # The name of the container image that you want to use.
        name: CONTAINER-NAME # The name of the container that you want to deploy to your cluster.
        volumeMounts:
        - mountPath: /myvol
          name: pvc-name
      volumes:
      - name: pvc-name
        persistentVolumeClaim:
          claimName: pvc-static # The name of the PVC that you created earlier.
spec.containers.volumeMounts.mountPath
- Enter the absolute path of the directory where the volume is mounted inside the container. Data that is written to the mount path is stored under the root directory in your physical File Storage for VPC instance. If you want to share a volume between different apps, you can specify volume sub paths for each of your apps.
volumeMounts.name
- Enter the name of the volume to mount to your pod.
volume.name
- Enter the name of the volume to mount to your pod. Typically this name is the same as volumeMounts.name.
volumes.persistentVolumeClaim.claimName
- Enter the name of the PVC that binds the PV that you want to use.
-
Create the deployment.
oc apply -f testpod.yaml
Creating your own storage class
Create your own customized storage class with the preferred settings for your File Storage for VPC instance. The following example uses the dp2 profile.
If your cluster and VPC are not in the same resource group, you must specify the VPC resource group ID in the resourceGroup section and the kube-<clusterID> security group ID in the securityGroupIDs section. You can find the ID of the kube-<clusterID> security group by running ibmcloud is sg kube-<cluster-id> | grep ID.
-
Create a storage class configuration file.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibmc-vpc-file-custom-sc
  labels:
    app.kubernetes.io/name: ibm-vpc-file-csi-driver
  annotations:
    version: v2.0
provisioner: vpc.file.csi.ibm.io
mountOptions:
  - hard
  - nfsvers=4.1
  - sec=sys
parameters:
  profile: "dp2"
  billingType: "hourly" # hourly or monthly
  encrypted: "false"
  encryptionKey: "" # If encrypted is true, then you must specify the CRK-CRN.
  resourceGroup: "" # Resource group ID. By default, the resource group of the cluster is used from the storage-secret-store secret.
  isENIEnabled: "true" # The VPC file share VNI feature is used by all PVCs that are created with this storage class.
  securityGroupIDs: "" # By default, the cluster security group (kube-<clusterID>) is used. You can provide your own comma-separated security group IDs.
  subnetID: "" # The subnet in which the VNI is created. Zone and region are mandatory for this. If not provided, the CSI driver uses the subnet ID available in the cluster's VPC zone.
  region: "" # By default, the VPC CSI driver selects a region from the cluster node's topology. You can override this default.
  zone: "" # By default, the VPC CSI driver selects a zone from the cluster node's topology. You can override this default.
  primaryIPID: "" # Existing ID of a reserved IP from the same subnet as the file share zone. Zone and region are mandatory for this. Subnet ID is not mandatory.
  primaryIPAddress: "" # IP address for the VNI to be created in the respective subnet of the zone. Zone, region, and subnet ID are mandatory for this.
  tags: "" # A list of tags "a, b, c" to apply when the file share is provisioned. By default, the CSI driver applies its own tags.
  uid: "0" # The initial user identifier for the file share. The default is root.
  gid: "0" # The initial group identifier for the file share. The default is root.
  classVersion: "1"
reclaimPolicy: "Delete"
allowVolumeExpansion: true
-
Create the customized storage class in your cluster.
oc apply -f custom-storageclass.yaml
-
Verify that your storage class is available in the cluster.
oc get sc
Example output
ibmc-vpc-file-custom-sc vpc.file.csi.ibm.io
Setting the default storage class
- Changing the default storage class is only available for add-on version 2.0 or later.
- You can set the default storage class to one of the pre-installed File Storage for VPC classes or your own custom storage class. If you are using a custom storage class, make sure the provisioner is set to vpc.file.csi.ibm.io.
- If multiple storage classes are set as the default in a cluster, any of the default storage classes might be used. As a best practice, and to ensure that the correct storage class is used, remove the default setting from any existing default storage classes in the cluster before setting a new default class.
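As a sketch of one way to do this (a standard Kubernetes approach, not specific to this add-on), you can clear the default designation on an existing class by setting its is-default-class annotation to false; EXISTING-DEFAULT-CLASS is a placeholder for your class name.
oc patch storageclass EXISTING-DEFAULT-CLASS -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}'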
-
Edit the addon-vpc-file-csi-driver-configmap configmap and specify the storage class name in the SET_DEFAULT_STORAGE_CLASS parameter.
oc edit cm addon-vpc-file-csi-driver-configmap -n kube-system
Example output
SET_DEFAULT_STORAGE_CLASS: "ibmc-vpc-file-eit"
-
Verify that the default is set correctly by describing the file-csi-driver-status configmap.
oc describe cm file-csi-driver-status -n kube-system
Example output.
events:
----
- event: EnableVPCFileCSIDriver
  description: 'VPC File CSI Driver enable successful, DriverVersion: v2.0.6'
  timestamp: "2024-09-20 12:01:02"
- event: Change default storage class request
  description: Successfully set 'ibmc-vpc-file-eit' as default storage class
  timestamp: "2024-09-20 12:01:36"
Deploying an app that runs as non-root
-
Create your own storage class and specify the group ID or user ID that you want to use for your app.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibmc-vpc-file-custom-sc
  labels:
    app.kubernetes.io/name: ibm-vpc-file-csi-driver
  annotations:
    version: v2.0
provisioner: vpc.file.csi.ibm.io
mountOptions:
  - hard
  - nfsvers=4.1
  - sec=sys
parameters:
  profile: "dp2"
  iops: "100"
  billingType: "hourly" # hourly or monthly
  encrypted: "false"
  uid: "3000" # The initial user identifier for the file share.
  gid: "1000" # The initial group identifier for the file share.
  classVersion: "1"
reclaimPolicy: "Delete"
allowVolumeExpansion: true
-
Save the following YAML to a file called my-pvc.yaml.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: ibmc-vpc-file-custom-sc
-
Create the PVC.
oc apply -f my-pvc.yaml
-
Create a pod that mounts the PVC.
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 3000
    runAsGroup: 1000
  volumes:
  - name: sec-ctx-vol
    persistentVolumeClaim:
      claimName: my-pvc # The PVC that you created earlier.
  containers:
  - name: sec-ctx-demo
    image: busybox:1.28
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false
-
Verify the pod is running.
oc get pods
Setting up KMS encryption for File Storage for VPC
Use a key management service (KMS) provider, such as IBM® Key Protect, to create a private root key that you use in your File Storage for VPC instance to encrypt data as it is written to storage. After you create the private root key, create your own storage class or a Kubernetes secret with your root key and then use this storage class or secret to provision your File Storage for VPC instance.
-
Create an instance of the KMS provider that you want to use.
-
Create a root key in your KMS instance.
- Key Protect root key.
- Hyper Protect Crypto Services root key. By default, the root key is created without an expiration date.
-
Set up service-to-service authorization. Authorize File Storage for VPC to access IBM® Key Protect. Make sure to give File Storage for VPC at least Reader access to your KMS instance.
-
Create a custom storage class configuration file called encrypted-class.yaml and specify your KMS details.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: custom-sc-encrypted
  labels:
    app.kubernetes.io/name: ibm-vpc-file-csi-driver
  annotations:
    version: v2.0
provisioner: vpc.file.csi.ibm.io
mountOptions:
  - hard
  - nfsvers=4.1
  - sec=sys
parameters:
  profile: "dp2"
  billingType: "hourly" # hourly or monthly
  encrypted: "true"
  encryptionKey: "" # Specify the root key CRN.
  resourceGroup: "" # Resource group ID. By default, the resource group of the cluster is used from the storage-secret-store secret.
  isENIEnabled: "true" # The VPC file share VNI feature is used by all PVCs that are created with this storage class.
  securityGroupIDs: "" # By default, the cluster security group (kube-<clusterID>) is used. You can provide your own comma-separated security group IDs.
  subnetID: "" # The subnet in which the VNI is created. Zone and region are mandatory for this. If not provided, the CSI driver uses the subnet ID available in the cluster's VPC zone.
  region: "" # By default, the VPC CSI driver selects a region from the cluster node's topology. You can override this default.
  zone: "" # By default, the VPC CSI driver selects a zone from the cluster node's topology. You can override this default.
  primaryIPID: "" # Existing ID of a reserved IP from the same subnet as the file share zone. Zone and region are mandatory for this. Subnet ID is not mandatory.
  primaryIPAddress: "" # IP address for the VNI to be created in the respective subnet of the zone. Zone, region, and subnet ID are mandatory for this.
  tags: "" # A list of tags "a, b, c" to apply when the file share is provisioned. By default, the CSI driver applies its own tags.
  uid: "0" # The initial user identifier for the file share. The default is root.
  gid: "0" # The initial group identifier for the file share. The default is root.
  classVersion: "1"
reclaimPolicy: "Delete"
allowVolumeExpansion: true
-
Create the storage class.
oc apply -f encrypted-class.yaml
-
Save the following YAML to a file called my-pvc.yaml.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: custom-sc-encrypted
-
Create the PVC.
oc apply -f my-pvc.yaml
-
Save the following deployment configuration to a file called deployment.yaml and reference the PVC that you created in the previous step.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    app: my-deployment
spec:
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox:1.28
        command: [ "sh", "-c", "sleep 1h" ]
        volumeMounts:
        - name: my-vol
          mountPath: /data/demo # Mount path for the application.
      volumes:
      - name: my-vol
        persistentVolumeClaim:
          claimName: my-pvc # Your PVC name.
volumeMounts.mountPath
- In the container volume mounts section, enter the absolute path of the directory where the volume is mounted inside the container. Data that is written to the mount path is stored under the root directory in your physical File Storage for VPC instance. If you want to share a volume between different apps, you can specify volume sub paths for each of your apps.
volumeMounts.name
- In the container volume mounts section, enter the name of the volume to mount to your pod.
volume.name
- In the volumes section, enter the name of the volume to mount to your pod. Typically this name is the same as volumeMounts.name.
-
Create the deployment.
oc apply -f deployment.yaml
Setting up encryption in-transit (EIT)
Review the following information about EIT.
- By default, file shares are encrypted at rest with IBM-managed encryption.
- If you choose to use encryption in-transit, you need to balance your requirements between performance and enhanced security. Encrypting data in-transit can have performance impacts due to the processing that is needed to encrypt and decrypt the data at the endpoints.
- EIT is not available for Secure by Default clusters and requires you to disable outbound traffic protection in cluster versions 4.15 and later.
- EIT is available for cluster versions 4.16 and later.
- For more information about encryption in-transit, see VPC Encryption in Transit.
Complete the following steps to set up encryption-in-transit (EIT) for file shares in your Red Hat OpenShift on IBM Cloud cluster. Enabling EIT installs the required packages on your worker nodes.
-
Make a note of the worker pools in your cluster where you want to enable EIT.
-
Edit the addon-vpc-file-csi-driver-configmap configmap.
oc edit cm addon-vpc-file-csi-driver-configmap -n kube-system
-
In the configmap, set ENABLE_EIT: "true" and add the worker pools where you want to enable EIT to the EIT_ENABLED_WORKER_POOLS parameter. For example: "wp1,wp2".
apiVersion: v1
data:
  EIT_ENABLED_WORKER_POOLS: "wp1,wp2" # Specify the worker pools where you want to enable EIT. If this field is blank, EIT is not enabled on any worker pools.
  ENABLE_EIT: "true"
  PACKAGE_DEPLOYER_VERSION: v1.0.0
kind: ConfigMap
metadata:
  annotations:
    version: v2.0.1
  creationTimestamp: "2024-06-18T09:45:48Z"
  labels:
    app.kubernetes.io/name: ibm-vpc-file-csi-driver
  name: addon-vpc-file-csi-driver-configmap
  namespace: kube-system
  ownerReferences:
  - apiVersion: csi.drivers.ibmcloud.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: VPCFileCSIDriver
    name: ibm-vpc-file-csi-driver
    uid: d3c8bbcd-24fa-4203-9352-4ab7aa72a055
  resourceVersion: "1251777"
  uid: 5c9d6679-4135-458b-800d-217b34d27c75
-
After enabling EIT, save and close the config map.
-
To verify that EIT is enabled, review the events of the file-csi-driver-status configmap.
oc describe cm file-csi-driver-status -n kube-system
Example output
apiVersion: v1
data:
  EIT_ENABLED_WORKER_NODES: |
    default:
    - 10.240.0.10
    - 10.240.0.8
  PACKAGE_DEPLOYER_VERSION: v1.0.0
  events: |
    - event: EnableVPCFileCSIDriver
      description: 'VPC File CSI Driver enable successful, DriverVersion: v2.0.3'
      timestamp: "2024-06-13 09:17:07"
    - event: EnableEITRequest
      description: 'Request received to enableEIT, workerPools: , check the file-csi-driver-status configmap for eit installation status on each node of each workerpool.'
      timestamp: "2024-06-13 09:17:31"
    - event: 'Enabling EIT on host: 10.240.0.10'
      description: 'Package installation successful on host: 10.240.0.10, workerpool: wp1'
      timestamp: "2024-06-13 09:17:48"
    - event: 'Enabling EIT on host: 10.240.0.8'
      description: 'Package installation successful on host: 10.240.0.8, workerpool: wp2'
      timestamp: "2024-06-13 09:17:48"
-
Select a pre-installed storage class that supports EIT or create your own storage class.
- Create a PVC by using the pre-installed ibmc-vpc-file-eit storage class.
- Or, create your own storage class and set the isEITEnabled parameter to true.
-
Create a PVC that references the storage class you selected, then deploy an app that uses your PVC.
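For example, a minimal PVC sketch that uses the pre-installed ibmc-vpc-file-eit class might look like the following; the PVC name and size are placeholders for illustration.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-eit-pvc # Hypothetical name for illustration.
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: ibmc-vpc-file-eit # Pre-installed storage class that supports EIT.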
Limiting file share access by worker pool, zone, or worker node
The default behavior of the File Storage for VPC cluster add-on is that pods on any node can access file shares. You can also apply more granular control over how pods access your file shares. For example, you might limit file share access to only pods on a specific node, in a specific zone, or in a specific worker pool. Review the following scenarios for how you can configure pod access to your file shares.
When a PVC is created, one file share mount target is created per PVC and one VNI IP address is reserved on the respective subnet in the zone. This means that the maximum number of PVCs for VPC file storage depends on the number of available IP addresses on that subnet.
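To gauge how many IP addresses remain, you can review the subnet details with the following commands (a quick check; the exact field names in the output, such as the available IPv4 address count, can vary by CLI version).
ibmcloud is subnets # List subnets and identify the subnet for your zone.
ibmcloud is subnet SUBNET-ID # Review the subnet details, including available IPv4 addresses.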
If you use the following VNI features to limit pod access to your file shares, your app might not be highly available.
Before you begin
To limit file share access by node, zone, or resource group, you must first create a custom VPC security group.
-
List your clusters and make a note of the cluster ID where you want to deploy file storage.
ibmcloud ks cluster ls
-
List your security groups and make a note of the ID of the kube-<clusterID> security group for your cluster. You need the security group ID later when adding security group rules.
ibmcloud is sgs
Example output
ID                                          Name             Rules   Targets   VPC      Resource group
r006-4aaec88f-4986-4b7c-a737-401f7fef1555   kube-clusterID   15      0         my-vpc   default
-
Create a custom security group in the same VPC as your cluster. You can use this security group to control access to your file shares by adding security group rules.
ibmcloud is security-group-create my-custom-security-group VPC-ID
-
Create your own storage class and enter the ID of the custom security group you created earlier. All PVCs created from this storage class are in your custom security group.
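As a sketch, the relevant part of such a storage class sets the securityGroupIDs parameter to your custom security group ID; the class name and security group ID below are hypothetical placeholders, and the remaining parameters follow the template in Creating your own storage class.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-custom-storage-class # Hypothetical name for illustration.
provisioner: vpc.file.csi.ibm.io
mountOptions:
  - hard
  - nfsvers=4.1
  - sec=sys
parameters:
  profile: "dp2"
  isENIEnabled: "true"
  securityGroupIDs: "r006-1234abcd-0000-0000-0000-example00000" # The ID of the custom security group that you created earlier.
  classVersion: "1"
reclaimPolicy: "Delete"
allowVolumeExpansion: true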
-
Create a PVC that uses your own storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-custom-vni
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: my-custom-storage-class # Enter the name of the storage class that you created earlier.
-
After the PV binds to the PVC, get the PV details and make a note of the nfsServerPath value to find the VNI IP address.
kubectl get pv pvc-XXXX -o yaml | grep nfsServerPath
Example output
nfsServerPath: XXX.XX.XX.XXX:/XX # VNI IP address
Limiting file share access to pods on one worker node
-
Make sure you have completed the prerequisites.
-
Add the following rule to the custom security group you created earlier.
ibmcloud is sg-rulec CUSTOM-SG inbound tcp --port-min 111 --port-max 2049 --remote 10.240.0.10 # Worker node IP
-
Add the following rule to the kube-<clusterID> security group.
ibmcloud is sg-rulec kube-<cluster-id> outbound tcp --port-min 111 --port-max 2049 --remote 10.240.0.10 # VNI IP
-
Create a deployment that uses your PVC. Only pods that are deployed on the worker node that matches the rule that you created are able to mount or use the PVC. Pods that are deployed on other nodes are stuck in the ContainerCreating state.
Limiting file share access to pods on worker nodes in a single zone
-
Make sure you have completed the prerequisites.
-
Add the following rule to the custom security group you created earlier.
ibmcloud is sg-rulec CUSTOM-SG inbound tcp --port-min 111 --port-max 2049 --remote 10.240.0.0/24 # zone subnet cidr range
-
Add the following rule to the kube-<clusterID> security group. Specify the IP address of the virtual network interface (VNI).
ibmcloud is sg-rulec kube-<cluster-id> outbound tcp --port-min 111 --port-max 2049 --remote 10.240.0.10 # VNI IP
-
Create a deployment that uses your PVC. Only pods that are deployed in the zone that is listed in the previous rule can mount the PVC. Pods that are deployed in other zones cannot access the PVC and are stuck in the ContainerCreating state.
Limiting file share access to pods on worker nodes in a single worker pool
-
Make sure you have completed the prerequisites.
-
Create inbound rules for each worker pool subnet range.
ibmcloud is sg-rulec CUSTOM-SG inbound tcp --port-min 111 --port-max 2049 --remote 10.240.0.0/24 # Zone 1 subnet CIDR range
ibmcloud is sg-rulec CUSTOM-SG inbound tcp --port-min 111 --port-max 2049 --remote 10.240.1.0/24 # Zone 2 subnet CIDR range
-
Add the following rule to the kube-<clusterID> security group. Specify the IP address of the virtual network interface (VNI) as the remote or source.
ibmcloud is sg-rulec kube-<cluster-id> outbound tcp --port-min 111 --port-max 2049 --remote 10.240.0.10 # VNI IP
-
Deploy an app that uses the PVC you created earlier. Only pods on the worker pool indicated in the earlier rules can mount the PVC. If you deploy your app as a daemonset, pods deploy successfully only on the worker nodes that you created security group rules for. Pods on worker nodes outside the specified worker pool fail with a MountVolume.SetUp failed for volume "pvc-184b8c92-33ea-4874-b2ac-17665e53c060" : rpc error: code = DeadlineExceeded desc = context deadline exceeded error.
Limiting file share access to pods on worker nodes in multiple worker pools
-
Make sure you have completed the prerequisites.
-
Add the following rules to your custom security group. Specify the worker pools and the subnet CIDR ranges as the remote or source.
ibmcloud is sg-rulec CUSTOM-SG inbound tcp --port-min 111 --port-max 2049 --remote 10.240.1.0/24 # Worker pool 1, zone 1 subnet CIDR range
ibmcloud is sg-rulec CUSTOM-SG inbound tcp --port-min 111 --port-max 2049 --remote 10.240.1.0/24 # Worker pool 1, zone 2 subnet CIDR range
ibmcloud is sg-rulec CUSTOM-SG inbound tcp --port-min 111 --port-max 2049 --remote 10.241.0.0/24 # Worker pool 2, zone 1 subnet CIDR range
ibmcloud is sg-rulec CUSTOM-SG inbound tcp --port-min 111 --port-max 2049 --remote 10.241.1.0/24 # Worker pool 2, zone 2 subnet CIDR range
ibmcloud is sg-rulec CUSTOM-SG outbound tcp --port-min 111 --port-max 2049 --remote 10.240.1.7 # VNI IP
-
Add the following rule to the kube-<clusterID> security group. Specify the IP address of the virtual network interface (VNI) as the remote or source.
ibmcloud is sg-rulec kube-<clusterID> outbound tcp --port-min 111 --port-max 2049 --remote 10.240.1.7 # VNI IP
-
Create a deployment that uses your PVC. Only pods that are deployed in the worker pools indicated in the rules can mount the PVC. Pods that are deployed on restricted worker nodes are stuck in the ContainerCreating state.