Adding File Storage for VPC to apps

IBM Cloud® Kubernetes Service provides pre-defined storage classes that you can use to provision File Storage for VPC. Each storage class specifies the type of File Storage for VPC that you provision, including available size, IOPS, file system, and the retention policy. You can also create your own storage classes depending on your use case.

The File Storage for VPC cluster add-on is available in Beta.

The following limitations apply to the add-on beta.

  • It is recommended that your cluster and VPC are in the same resource group. If your cluster and VPC are in separate resource groups, then before you can provision file shares, you must create your own storage class and provide your VPC resource group ID. For more information, see Creating your own storage class.
  • New security group rules were introduced in cluster versions 1.25 and later. These rule changes mean that you must sync your security groups before you can use File Storage for VPC. For more information, see Adding File Storage for VPC to apps.
  • New storage classes were added with version 2.0 of the add-on. You can no longer provision new file shares that use the older storage classes. Existing volumes that use the older storage classes continue to function; however, you cannot expand volumes that were created by using the older classes. For more information, see Migrating to a new storage class.

After you provision a specific type of storage by using a storage class, you can't change the type or retention policy of the storage device. However, you can change the size and the IOPS if you want to increase your storage capacity and performance. To change the type or retention policy, you must create a new storage instance and copy the data from the old storage instance to the new one.

Decide on a storage class. For more information, see the storage class reference.

Review the following notes and considerations for File Storage for VPC.

  • By default, the File Storage for VPC cluster add-on provisions file shares in the kube-<clusterID> security group, which means that pods can access file shares across nodes and zones.
  • If any of the following scenarios apply, you must create your own storage class.
    • Your app needs to run as non-root.
    • Your cluster is in a different resource group from your VPC and subnet.
    • You need to limit file share access to pods on a given node or in a given zone.
    • You need to bring your own key (BYOK) for encryption by using a KMS provider such as Hyper Protect Crypto Services (HPCS) or Key Protect.
    • You need to manually specify the subnet or IP address of the Virtual Network Interface (VNI).

New security group rules were introduced in versions 1.25 and later. These rule changes mean you must sync your security groups before you can use File Storage for VPC.

If your cluster was initially created at version 1.25 or earlier, run the following commands to sync your security group settings.

  1. Get the ID of the kube-<clusterID> security group.
    ibmcloud ks security-group ls --cluster CLUSTER
    
  2. Sync your security group settings.
    ibmcloud ks security-group sync --cluster CLUSTER --security-group SECURITY-GROUP-ID
    

Quick start for File Storage for VPC

Create a persistent volume claim (PVC) to dynamically provision File Storage for VPC for your cluster. Dynamic provisioning automatically creates the matching persistent volume (PV) and orders the file share in your account.

  1. Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster.
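
    For example, the following commands log in to IBM Cloud, target a resource group, and set the cluster context for the CLI. The RESOURCE-GROUP and CLUSTER values are placeholders for your own resource group and cluster name or ID.

    ibmcloud login
    ibmcloud target -g RESOURCE-GROUP
    ibmcloud ks cluster config --cluster CLUSTER
    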

  2. Review the pre-installed storage classes by running the following command. For more information, see the storage class reference.

    kubectl get sc | grep vpc-file
    
  3. Save the following YAML to a file called my-pvc.yaml. This example creates a claim that is named my-pvc by using the ibmc-vpc-file-min-iops storage class and requests 20Gi of storage.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-pvc # Enter a name for your PVC.
    spec:
      accessModes:
      - ReadWriteMany # The file share can be mounted on multiple nodes and pods.
      resources:
        requests:
          storage: 20Gi # Enter the size of the storage in gigabytes (Gi).
      storageClassName: ibmc-vpc-file-min-iops # Enter the name of the storage class that you want to use.
    
  4. Create the PVC.

    kubectl apply -f my-pvc.yaml
    
  5. Verify that your PVC is created and bound to the PV.

    kubectl describe pvc my-pvc
    

    Example output

    Name:        my-pvc
    Namespace:    default
    StorageClass:    ""
    Status:        Bound
    Volume:        pvc-0d787071-3a67-11e7-aafc-eef80dd2dea2
    Labels:        <none>
    Capacity:    20Gi
    Access Modes:    RWX
    Events:
        FirstSeen    LastSeen    Count    From                                SubObjectPath    Type        Reason            Message
        ---------    --------    -----    ----                                -------------    --------    ------            -------
        1m        1m        1    {ibm.io/ibmc-vpc-file 31898035-3011-11e7-a6a4-7a08779efd33 }            Normal        ProvisioningSucceeded    Successfully provisioned volume pvc-0d787071-3a67-11e7-aafc-eef80dd2dea2
    
    
  6. Save the following deployment configuration to a file called deployment.yaml and reference the PVC that you created in the previous step.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-deployment
      labels:
        app: my-deployment
    spec:
      selector:
        matchLabels:
          app: busybox
      template:
        metadata:
          labels:
            app: busybox
        spec:
          containers:
          - name: busybox
            image: busybox:1.28
            command: [ "sh", "-c", "sleep 1h" ]
            volumeMounts:
            - name: my-vol
              mountPath: /data/demo # Mount path for the application.
          volumes:
          - name: my-vol
            persistentVolumeClaim:
              claimName: my-pvc # Your PVC name.
    
    volumeMounts.mountPath
    In the container volume mounts section, enter the absolute path of the directory to where the volume is mounted inside the container. Data that is written to the mount path is stored under the root directory in your physical File Storage for VPC instance. If you want to share a volume between different apps, you can specify volume sub paths for each of your apps, as shown in the example after this list.
    volumeMounts.name
    In the container volume mounts section, enter the name of the volume to mount to your pod.
    volume.name
    In the volumes section, enter the name of the volume to mount to your pod. Typically this name is the same as volumeMounts.name.
    claimName
    In the volumes persistent volume claim section, enter the name of the PVC that binds the PV that you want to use.
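
    For example, to share the same file share between two apps without mixing their data, each app can mount the volume with a different subPath. The subPath value app1 in this sketch is a placeholder.

    volumeMounts:
    - name: my-vol
      mountPath: /data/demo
      subPath: app1 # Data for this app is stored under the app1 directory of the file share.
    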
  7. Create the deployment.

    kubectl apply -f deployment.yaml
    
  8. Verify that the PV is successfully mounted.

    kubectl describe deployment my-deployment
    

    The mount point is in the Volume Mounts field and the volume is in the Volumes field.

    Containers:
      Mounts:
        /data/demo from my-vol (rw)
    Volumes:
    my-vol:
      Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
      ClaimName:  my-pvc
      ReadOnly:   false
    
  9. Optional: After your pod is running, try expanding your storage volume.

Migrating to a new storage class

  • New storage classes were added with version 2.0 of the add-on.
  • You can no longer provision new file shares that use the older storage classes.
  • Existing volumes that use the older storage classes continue to function; however, you cannot expand volumes that were created by using the older classes.
  • If you need the volume expansion feature, complete the following steps to migrate your apps to a newer storage class.
  • If you don't need the volume expansion feature, you do not need to migrate and your PVCs continue to function as normal.
  • The following steps cover manual migration. If you use a backup service such as PX-Backup or Velero, you can use those services to back up and restore your apps to a new storage class.
  1. Find the PVC that you want to migrate and make a note of both the PVC name and the associated PV name.

    kubectl get pvc
    
  2. Scale down your app that is using the PVC.

    kubectl scale deployment DEPLOYMENT --replicas 0
    
  3. Edit the PV object that your app is using to change the reclaim policy to Retain and the storage class to ibmc-vpc-file-min-iops.

    kubectl edit pv PV
    
    spec:
      accessModes:
      - ReadWriteMany
      capacity:
        storage: 20Gi
      claimRef:
        apiVersion: v1
        kind: PersistentVolumeClaim
        name: <pvc-name>
        namespace: default
        ...
      persistentVolumeReclaimPolicy: Retain # Change Delete to Retain.
      storageClassName: ibmc-vpc-file-min-iops # Enter a new storage class
      volumeMode: Filesystem
    
  4. Delete the existing PVC object.

    kubectl delete pvc PVC
    
  5. Edit the PV again and remove the claimRef section.

    kubectl edit pv PV
    
    spec:
      accessModes:
      - ReadWriteMany
      capacity:
        storage: 20Gi
      #claimRef:
        #apiVersion: v1
        #kind: PersistentVolumeClaim
        #name: <pvc-name>
        #namespace: default
        #resourceVersion: "381270"
        #uid: 4042f319-1233-4187-8549-8249a840a8dd
    
  6. Create a PVC that has the same name and size as your previous PVC. Repeat these steps for each affected PVC that you want to migrate.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: <pvc-name>
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: <size>Gi
      storageClassName: ibmc-vpc-file-min-iops
    
  7. Scale up the app that was using the PVC.

    kubectl scale deployment DEPLOYMENT --replicas REPLICAS
    
  8. To continue using volume expansion, see Setting up volume expansion.

Setting up volume expansion

To provision volumes that support expansion, you must use a storage class that has allowVolumeExpansion set to true.

The File Storage for VPC cluster add-on supports expansion in both online and offline modes. However, expansion is possible only within the size and IOPS range of the File Storage for VPC profile.

New storage classes were introduced with version 2.0. Volume expansion does not work for shares that use the storage classes from earlier versions of the add-on.

Before you begin
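
Make sure that the storage class you plan to use has allowVolumeExpansion set to true. For example, the following command lists each storage class in your cluster along with whether it allows volume expansion.

    kubectl get sc -o custom-columns=NAME:.metadata.name,ALLOWEXPANSION:.allowVolumeExpansion
    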

Expanding a mounted volume

  1. After your PVC is mounted by an app pod, you can expand your volume. To expand the volume, edit your PVC and increase the value in the spec.resources.requests.storage field.

    kubectl edit pvc my-pvc
    
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 50Gi
    
  2. Save and close the PVC. Wait a few minutes for the volume to be expanded.

  3. Verify that your volume is expanded.

    kubectl get pvc
    

    Example output

    NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
    my-pvc   Bound    pvc-25b6912e-75bf-41ca-b6b2-567fa4f9d245   50Gi       RWX            ibmc-vpc-file-min-iops   3m31s
    

Attaching existing file storage to an app

Create a persistent volume claim (PVC) to statically provision File Storage for VPC for your cluster. Static provisioning allows cluster administrators to make existing storage devices available to a cluster.

  1. Create a file share and mount target. For more information, see Creating file shares and mount targets.

  2. Before you can create a persistent volume (PV), retrieve the details about your file share.

    ibmcloud is shares
    
  3. Get the details of your share. Make a note of the mount targets.

    ibmcloud is share SHARE-ID
    

    Example command.

    ibmcloud is share r134-bad98878-1f63-45d2-a3fd-60447094c2e6
    

    Example output

    ID                           r134-bad98878-1f63-45d2-a3fd-60447094c2e6   
    Name                         pvc-e7e005a9-e96b-41ad-9d6e-74650a9110a0   
    CRN                          crn:v1:staging:public:is:us-south-1:a/77f2bceddaeb577dcaddb4073fe82c1c::share:r134-bad98878-1f63-45d2-a3fd-60447094c2e6   
    Lifecycle state              stable   
    Access control mode          security_group   
    Zone                         us-south-1   
    Profile                      dp2   
    Size(GB)                     10   
    IOPS                         100   
    User Tags                    clusterid:cpjao3l20dl78jadqkd0,namespace:default,provisioner:vpc.file.csi.ibm.io,pv:pvc-e7e005a9-e96b-41ad-9d6e-74650a9110a0,pvc:pv-file,reclaimpolicy:delete,storageclass:custom-eni   
    Encryption                   provider_managed   
    Mount Targets                ID                                          Name      
                                r134-aa2aabb8-f616-47be-886b-99220852b728   pvc-e7e005a9-e96b-41ad-9d6e-74650a9110a0      
                                    
    Resource group               ID                                 Name      
                                300b9469ee8676f9a038ecdf408c1a9d   Default      
                                    
    Created                      2024-06-11T19:55:11+05:30   
    Replication role             none   
    Replication status           none   
    Replication status reasons   Status code   Status message      
    
  4. Get the nfsServerPath, also called the Mount Path.

    ibmcloud is share-mount-target SHARE-ID SHARE-TARGET-ID
    

    Example command.

    ibmcloud is share-mount-target  r134-bad98878-1f63-45d2-a3fd-60447094c2e6 r134-aa2aabb8-f616-47be-886b-99220852b728
    

    Example output

    ID                          r134-aa2aabb8-f616-47be-886b-99220852b728   
    Name                        pvc-e7e005a9-e96b-41ad-9d6e-74650a9110a0   
    VPC                         ID                                          Name      
                                r134-f05922d4-d8ab-4f64-9a3d-82664b303bc1   vpc-public      
                                  
    Access control mode         security_group   
    Resource type               share_mount_target   
    Virtual network interface   ID                                          Name      
                                0716-6407fb4b-e962-49c4-8556-dc94f4574b4b   defective-chloride-huffy-gladly      
                                  
    Lifecycle state             stable   
    Mount path                  10.240.0.23:/89d8a454_f552_42bf_8374_4d31481edf4d   
    Transit Encryption          none   
    Created                     2024-06-11T19:55:12+05:30
    
  5. Create a PV configuration file called static-file-share.yaml that references your file share.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: static-file-share
    spec:
      mountOptions:
      - hard
      - nfsvers=4.1
      - sec=sys
      accessModes:
      - ReadWriteMany
      capacity:
        storage: 10Gi
      csi:
        volumeAttributes:
          nfsServerPath: NFS-SERVER-PATH
        driver: vpc.file.csi.ibm.io
        volumeHandle: FILE-SHARE-ID#SHARE-TARGET-ID
    
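
    For example, with the share ID and mount target ID from the earlier example output, the csi section of the PV would look like the following.

      csi:
        volumeAttributes:
          nfsServerPath: 10.240.0.23:/89d8a454_f552_42bf_8374_4d31481edf4d # The mount path from the share mount target details.
        driver: vpc.file.csi.ibm.io
        volumeHandle: r134-bad98878-1f63-45d2-a3fd-60447094c2e6#r134-aa2aabb8-f616-47be-886b-99220852b728 # FILE-SHARE-ID#SHARE-TARGET-ID
    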
  6. Create the PV.

    kubectl apply -f static-file-share.yaml
    
  7. Create a PVC configuration file called pvc-static.yaml.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-static
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
      storageClassName: "" #Leave the storage class blank.
    
  8. Create the PVC to bind your PV.

    kubectl apply -f pvc-static.yaml
    
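
    You can confirm that the claim is bound to your static PV. The STATUS column in the output should show Bound.

    kubectl get pvc pvc-static
    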
  9. Create a deployment file called testpod.yaml to attach your file share to the desired application pod.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: testpod
      labels:
        app: testpod
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: testpod
      template:
        metadata:
          labels:
            app: testpod
        spec:
          containers:
          - image: IMAGE # The name of the container image that you want to use.
            name: CONTAINER-NAME # The name of the container that you want to deploy to your cluster.
            volumeMounts:
            - mountPath: /myvol  
              name: pvc-name 
          volumes:
          - name: pvc-name 
            persistentVolumeClaim:
              claimName: pvc-static # The name of the PVC that you created earlier
    
    spec.containers.volumeMounts.mountPath
    Enter the absolute path of the directory to where the volume is mounted inside the container. Data that is written to the mount path is stored under the root directory in your physical File Storage for VPC instance. If you want to share a volume between different apps, you can specify volume sub paths for each of your apps.
    volumeMounts.name
    Enter the name of the volume to mount to your pod.
    volume.name
    Enter the name of the volume to mount to your pod. Typically this name is the same as volumeMounts.name.
    volumes.persistentVolumeClaim.claimName
    Enter the name of the PVC that binds the PV that you want to use.
  10. Create the deployment.

    kubectl apply -f testpod.yaml
    

Creating your own storage class

Create your own customized storage class with the preferred settings for your File Storage for VPC instance. The following example uses the dp2 profile.

  1. Create a storage class configuration file called custom-storageclass.yaml.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ibmc-vpc-file-custom-sc
      labels:
        app.kubernetes.io/name: ibm-vpc-file-csi-driver
      annotations:
        version: v2.0
    provisioner: vpc.file.csi.ibm.io
    mountOptions:
      - hard
      - nfsvers=4.1
      - sec=sys
    parameters:
        profile: "dp2"
        billingType: "hourly" # hourly or monthly
        encrypted: "false"
        encryptionKey: "" # If encrypted is true, then a user must specify the CRK-CRN.
        resourceGroup: "" # Resource group ID. By default, the resource group of the cluster will be used from storage-secrete-store secret.
        isENIEnabled: "true" # VPC File Share VNI feature will be used by all PVCs created with this storage class.
        securityGroupIDs: "" # By default cluster security group i.e kube-<clusterID> will be used. User can provide their own comma separated SGs.
        subnetID: "" # User can provide subnetID in which the VNI will be created. Zone and region are mandatory for this. If not provided CSI driver will use the subnetID available in the cluster's VPC zone.
        region: "" # VPC CSI driver will select a region from cluster node's topology. The user can override this default.
        zone: "" # VPC CSI driver will select a region from cluster node's topology. The user can override this default.
        primaryIPID: "" # Existing ID of reserved IP from the same subnet as the file share zone. Zone and region are mandatory for this. SubnetID is not mandatory for this.
        primaryIPAddress: "" # IPAddress for VNI to be created in the respective subnet of the zone. Zone, region and subnetID are mandatory for this.
        tags: "" # User can add a list of tags "a, b, c" that will be used at the time of provisioning file share, by default CSI driver has its own tags.
        uid: "0" # The initial user identifier for the file share, by default its root.
        gid: "0" # The initial group identifier for the file share, by default its root.
        classVersion: "1"
    reclaimPolicy: "Delete"
    allowVolumeExpansion: true
    
  2. Create the customized storage class in your cluster.

    kubectl apply -f custom-storageclass.yaml
    
  3. Verify that your storage class is available in the cluster.

    kubectl get sc
    

    Example output

    ibmc-vpc-file-custom-sc                       vpc.file.csi.ibm.io
    

Deploying an app that runs as non-root

  1. Create your own storage class and specify the group ID or user ID that you want to use for your app.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ibmc-vpc-file-custom-sc
      labels:
        app.kubernetes.io/name: ibm-vpc-file-csi-driver
      annotations:
        version: v2.0
    provisioner: vpc.file.csi.ibm.io
    mountOptions:
      - hard
      - nfsvers=4.1
      - sec=sys
    parameters:
        profile: "dp2"
        iops: "100"
        billingType: "hourly" # hourly or monthly
        encrypted: "false"
        uid: "3000" # The initial user identifier for the file share.
        gid: "1000" # The initial group identifier for the file share.
        classVersion: "1"
    reclaimPolicy: "Delete"
    allowVolumeExpansion: true
    
  2. Save the following YAML to a file called my-pvc.yaml.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-pvc
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
      storageClassName: ibmc-vpc-file-custom-sc
    
  3. Create the PVC.

    kubectl apply -f my-pvc.yaml
    
  4. Create a pod that mounts the PVC.

    apiVersion: v1
    kind: Pod
    metadata:
      name: security-context-demo
    spec:
      securityContext:
        runAsUser: 3000
        runAsGroup: 1000
      volumes:
      - name: sec-ctx-vol
        persistentVolumeClaim:
          claimName: my-pvc # The PVC that you created earlier.
      containers:
      - name: sec-ctx-demo
        image: busybox:1.28
        command: [ "sh", "-c", "sleep 1h" ]
        volumeMounts:
        - name: sec-ctx-vol
          mountPath: /data/demo
        securityContext:
          allowPrivilegeEscalation: false
    
  5. Verify that the pod is running.

    kubectl get pods
    
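
    Optionally, check the user and group that the container runs as. The output should show uid=3000 and gid=1000, matching the values from the storage class and pod security context.

    kubectl exec security-context-demo -- id
    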

Setting up your KMS provider for encrypting File Storage for VPC

Use a key management service (KMS) provider, such as IBM® Key Protect, to create a private root key that you use in your File Storage for VPC instance to encrypt data as it is written to storage. After you create the private root key, create your own storage class or a Kubernetes secret with your root key and then use this storage class or secret to provision your File Storage for VPC instance.

  1. Create an instance of the KMS provider that you want to use.

  2. Create a root key in your KMS instance.

  3. Set up service-to-service authorization by authorizing File Storage for VPC to access IBM® Key Protect. Give File Storage for VPC at least Reader access to your KMS instance.
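
  4. Create your own storage class and reference the root key CRN in the encryptionKey parameter, then use that storage class to provision your file shares. The following is a minimal sketch based on the custom storage class example earlier; the class name and ROOT-KEY-CRN value are placeholders.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ibmc-vpc-file-encrypted-sc
    provisioner: vpc.file.csi.ibm.io
    mountOptions:
      - hard
      - nfsvers=4.1
      - sec=sys
    parameters:
        profile: "dp2"
        encrypted: "true"
        encryptionKey: "ROOT-KEY-CRN" # The CRN of the root key that you created in your KMS instance.
        classVersion: "1"
    reclaimPolicy: "Delete"
    allowVolumeExpansion: true
    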

Setting up encryption in-transit (EIT)

Review the following information about EIT.

  • By default, file shares are encrypted at rest with IBM-managed encryption.
  • If you choose to use encryption in-transit, you need to balance your requirements between performance and enhanced security. Encrypting data in-transit can have performance impacts due to the processing that is needed to encrypt and decrypt the data at the endpoints.
  • EIT is not available for Secure by Default clusters and requires you to disable outbound traffic protection in cluster versions 1.30 and later.
  • For more information about encryption in-transit, see VPC Encryption in Transit.

Complete the following steps to set up encryption-in-transit (EIT) for file shares in your IBM Cloud Kubernetes Service cluster. Enabling EIT installs the required packages on your worker nodes.

  1. Make a note of the worker pools in your cluster where you want to enable EIT.

  2. Edit the addon-vpc-file-csi-driver-configmap.

    kubectl edit cm addon-vpc-file-csi-driver-configmap -n kube-system
    
  3. In the configmap, set ENABLE_EIT to "true" and add the worker pools where you want to enable EIT to EIT_ENABLED_WORKER_POOLS. For example: "wp1,wp2".

    apiVersion: v1
    data:
      EIT_ENABLED_WORKER_POOLS: "wp1,wp2" # Specify the worker pools where you want to enable EIT. If this field is blank, EIT is not enabled on any worker pools.
      ENABLE_EIT: "true"
      PACKAGE_DEPLOYER_VERSION: v1.0.0
    kind: ConfigMap
    metadata:
      annotations:
        version: v2.0.1
      creationTimestamp: "2024-06-18T09:45:48Z"
      labels:
        app.kubernetes.io/name: ibm-vpc-file-csi-driver
      name: addon-vpc-file-csi-driver-configmap
      namespace: kube-system
      ownerReferences:
      - apiVersion: csi.drivers.ibmcloud.io/v1
        blockOwnerDeletion: true
        controller: true
        kind: VPCFileCSIDriver
        name: ibm-vpc-file-csi-driver
        uid: d3c8bbcd-24fa-4203-9352-4ab7aa72a055
      resourceVersion: "1251777"
      uid: 5c9d6679-4135-458b-800d-217b34d27c75
    
  4. After enabling EIT, save and close the config map.

  5. To verify that EIT is enabled, review the events in the file-csi-driver-status config map.

    kubectl describe cm file-csi-driver-status -n kube-system
    

    Example output

    apiVersion: v1
    data:
      EIT_ENABLED_WORKER_NODES: |
        default:
        - 10.240.0.10
        - 10.240.0.8
      PACKAGE_DEPLOYER_VERSION: v1.0.0
      events: |
        - event: EnableVPCFileCSIDriver
          description: 'VPC File CSI Driver enable successful, DriverVersion: v2.0.3'
          timestamp: "2024-06-13 09:17:07"
        - event: EnableEITRequest
          description: 'Request received to enableEIT, workerPools: , check the file-csi-driver-status
            configmap for eit installation status on each node of each workerpool.'
          timestamp: "2024-06-13 09:17:31"
        - event: 'Enabling EIT on host: 10.240.0.10'
          description: 'Package installation successful on host: 10.240.0.10, workerpool: wp1'
          timestamp: "2024-06-13 09:17:48"
        - event: 'Enabling EIT on host: 10.240.0.8'
          description: 'Package installation successful on host: 10.240.0.8, workerpool: wp2'
          timestamp: "2024-06-13 09:17:48"
    
  6. Select a pre-installed storage class that supports EIT or create your own storage class.

    • Create a PVC by using the pre-installed ibmc-vpc-file-eit storage class.
    • Create your own storage class and set the isEITenabled parameter to true, as shown in the following sketch.
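
    The following sketch shows a custom storage class with EIT enabled, based on the custom storage class example earlier. The class name is a placeholder.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ibmc-vpc-file-eit-custom-sc
    provisioner: vpc.file.csi.ibm.io
    mountOptions:
      - hard
      - nfsvers=4.1
      - sec=sys
    parameters:
        profile: "dp2"
        isEITenabled: "true" # Enable encryption in-transit for file shares that are created with this class.
        classVersion: "1"
    reclaimPolicy: "Delete"
    allowVolumeExpansion: true
    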
  7. Create a PVC that references the storage class you selected, then deploy an app that uses your PVC.

Limiting file share access by worker pool, zone, or worker node

The default behavior for the File Storage for VPC cluster add-on is that pods on any node can access file shares. However, you can apply more granular control over how pods access your file shares. For example, you might limit file share access to only pods on a specific node, in a specific zone, or in a specific worker pool. Review the following scenarios for how you can configure pod access to your file shares.

Each PVC creates one file share mount target, and one VNI IP address is reserved on the corresponding subnet in the zone. This means that the maximum number of PVCs for File Storage for VPC depends on the IP addresses that are available on that subnet.
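
For example, you can check how many IP addresses remain available in a subnet by reviewing the subnet details. The SUBNET-ID value is a placeholder for your own subnet ID.

    ibmcloud is subnet SUBNET-ID
    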

If you use the following VNI features to limit pod access to your file shares, your app might not be highly available.

Before you begin

To limit file share access by worker node, zone, or worker pool, you must first create a custom VPC security group.

  1. List your clusters and make a note of the cluster ID where you want to deploy file storage.

    ibmcloud ks cluster ls
    
  2. List your security groups and make a note of the ID of the kube-<clusterID> security group for your cluster. You need the security group ID later when you add security group rules.

    ibmcloud is security-groups
    

    Example output

    ID                                          Name                                             Rules   Targets   VPC       Resource group
    r006-4aaec88f-4986-4b7c-a737-401f7fef1555   kube-clusterID                       15      0         my-vpc   default
    
  3. Create a custom security group in the same VPC as your cluster. You can use this security group to control access to your file shares by adding security group rules.

    ibmcloud is security-group-create my-custom-security-group VPC-ID
    
  4. Create your own storage class and specify the ID of the custom security group that you created earlier in the securityGroupIDs parameter, as shown in the following sketch. All PVCs that are created from this storage class use your custom security group.
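
    The following sketch is based on the custom storage class example earlier. The class name and the SECURITY-GROUP-ID value are placeholders.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: my-custom-storage-class
    provisioner: vpc.file.csi.ibm.io
    mountOptions:
      - hard
      - nfsvers=4.1
      - sec=sys
    parameters:
        profile: "dp2"
        isENIEnabled: "true"
        securityGroupIDs: "SECURITY-GROUP-ID" # The ID of the custom security group that you created earlier.
        classVersion: "1"
    reclaimPolicy: "Delete"
    allowVolumeExpansion: true
    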

  5. Create a PVC that uses your own storage class.

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: pvc-custom-vni
      spec:
        accessModes:
        - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        storageClassName: "" # For example: my-custom-storage-class
    
  6. After the PV binds to the PVC, get the PV details and make a note of the nfsServerPath value to find the VNI IP address.

    kubectl get pv pvc-XXXX -o yaml | grep nfsServerPath
    

    Example output

    nfsServerPath: XXX.XX.XX.XXX:/XX # VNI IP address
    

Limiting file share access to pods on one worker node

  1. Make sure you have completed the prerequisites.

  2. Add the following rule to the custom security group you created earlier.

    ibmcloud is sg-rulec CUSTOM-SG inbound tcp --port-min 111 --port-max 2049 --remote 10.240.0.10 # Worker node IP address
    
  3. Add the following rule to the kube-clusterID security group.

    ibmcloud is sg-rulec kube-<cluster-id> outbound tcp --port-min 111 --port-max 2049 --remote 10.240.0.10 # VNI IP
    
  4. Create a deployment that uses your PVC. Only pods that are deployed on the worker node that matches the rule that you created can mount and use the PVC. Pods that are deployed on other nodes are stuck in the ContainerCreating state.

Limiting file share access to pods on worker nodes in a single zone

  1. Make sure you have completed the prerequisites.

  2. Add the following rule to the custom security group you created earlier.

    ibmcloud is sg-rulec CUSTOM-SG inbound tcp --port-min 111 --port-max 2049 --remote 10.240.0.0/24 # zone subnet cidr range
    
  3. Add the following rule to the kube-clusterID security group. Specify the IP address of the virtual network interface (VNI).

    ibmcloud is sg-rulec kube-<cluster-ID> outbound tcp --port-min 111 --port-max 2049 --remote 10.240.0.10 # VNI IP
    
  4. Create a deployment that uses your PVC. Only pods that are deployed in the zone that is listed in the previous rule can mount the PVC. Pods that are deployed in other zones cannot access the PVC and are stuck in the ContainerCreating state.

Limiting file share access to pods on worker nodes in a single worker pool

  1. Make sure you have completed the prerequisites.

  2. Create inbound rules for each worker pool subnet range.

    ibmcloud is sg-rulec CUSTOM-SG inbound tcp --port-min 111 --port-max 2049 --remote 10.240.0.0/24 # zone 1 subnet cidr range
    ibmcloud is sg-rulec CUSTOM-SG inbound tcp --port-min 111 --port-max 2049 --remote 10.240.1.0/24 # zone 2 subnet cidr range
    
  3. Add the following rule to the kube-clusterID security group. Specify the IP address of the virtual network interface (VNI) as the remote or source.

    ibmcloud is sg-rulec kube-<cluster-ID> outbound tcp --port-min 111 --port-max 2049 --remote 10.240.0.10 # VNI IP
    
  4. Deploy an app that uses the PVC that you created earlier. Only pods on the worker pool that is indicated in the earlier rules can mount the PVC. If you deploy your app as a daemonset, pods deploy successfully only on the worker nodes that you created security group rules for. Pods on worker nodes outside the specified worker pool fail with an error similar to MountVolume.SetUp failed for volume "pvc-184b8c92-33ea-4874-b2ac-17665e53c060" : rpc error: code = DeadlineExceeded desc = context deadline exceeded.

Limiting file share access to pods on worker nodes in multiple worker pools

  1. Make sure you have completed the prerequisites.

  2. Add the following rules to your custom security group. Specify the worker pools and the subnet CIDR ranges as the remote or source.

    ibmcloud is sg-rulec CUSTOM-SG inbound tcp --port-min 111 --port-max 2049 --remote 10.240.1.0/24 # worker pool 1, zone 1 subnet CIDR range
    ibmcloud is sg-rulec CUSTOM-SG inbound tcp --port-min 111 --port-max 2049 --remote 10.240.1.0/24 # worker pool 1, zone 2 subnet CIDR range
    ibmcloud is sg-rulec CUSTOM-SG inbound tcp --port-min 111 --port-max 2049 --remote 10.241.0.0/24 # worker pool 2, zone 1 subnet CIDR range
    ibmcloud is sg-rulec CUSTOM-SG inbound tcp --port-min 111 --port-max 2049 --remote 10.241.1.0/24 # worker pool 2, zone 2 subnet CIDR range
    ibmcloud is sg-rulec CUSTOM-SG outbound tcp --port-min 111 --port-max 2049 --remote 10.240.1.7 # VNI-IP
    
  3. Add the following rule to the kube-<clusterID> security group. Specify the IP address of the virtual network interface (VNI) as the remote or source.

    ibmcloud is sg-rulec kube-<clusterID> outbound tcp --port-min 111 --port-max 2049 --remote 10.240.1.7 # VNI-IP
    
  4. Create a deployment that uses your PVC. Only pods that are deployed in the worker pools indicated in the rules can mount the PVC. Pods on worker nodes outside those worker pools are stuck in the ContainerCreating state.