Managing IBM Cloud File Storage for VPC

When you set up persistent storage in your cluster, you have three main components: the Kubernetes persistent volume claim (PVC) that requests storage, the Kubernetes persistent volume (PV) that is mounted to a pod and described in the PVC, and the file share. Depending on how you created your storage, you might need to delete all three components separately.

The File Storage for VPC cluster add-on is available in Beta.

The following limitations apply to the add-on beta.

  • It is recommended that your cluster and VPC are part of the same resource group. If they are in separate resource groups, you must create your own storage class and provide your VPC resource group ID before you can provision file shares. For more information, see Creating your own storage class.
  • New security group rules were introduced in cluster versions 1.25 and later. These rule changes mean that you must sync your security groups before you can use File Storage for VPC. For more information, see Adding File Storage for VPC to apps.
  • New storage classes were added in version 2.0 of the add-on. You can no longer provision new file shares that use the older storage classes. Existing volumes that use the older storage classes continue to function, but you cannot expand them. For more information, see Migrating to a new storage class.
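If your cluster and VPC are in different resource groups, a custom storage class might look like the following sketch. The provisioner name and the parameter keys shown here (such as the resource group ID key) are illustrative assumptions, so verify them against the storage classes that ship with the add-on (`kubectl get sc -o yaml`) and the Creating your own storage class topic.

```yaml
# Hypothetical custom storage class for File Storage for VPC.
# Parameter keys and values are illustrative, not authoritative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-vpc-file-class              # placeholder name
provisioner: vpc.file.csi.ibm.io       # assumed CSI driver name
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  profile: dp2                         # example file share profile
  resourceGroup: "<vpc_resource_group_id>"   # assumed key for the VPC resource group ID
```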

Updating the File Storage for VPC cluster add-on

Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster.
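The login and context steps can be sketched as follows; the resource group and cluster names are placeholders.

```shell
# Log in to IBM Cloud (add --sso if your account uses single sign-on).
ibmcloud login

# Target the resource group that contains your cluster (placeholder name).
ibmcloud target -g my-resource-group

# Download the kubeconfig and set the kubectl context for your cluster.
ibmcloud ks cluster config --cluster my-cluster
```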

  1. Get your cluster ID.

    ibmcloud ks cluster ls
    
  2. Review the available add-on versions.

    ibmcloud ks cluster addon versions
    
  3. Disable the add-on.

    ibmcloud ks cluster addon disable vpc-file-csi-driver --cluster CLUSTER
    
  4. Enable the newer version of the add-on.

    ibmcloud ks cluster addon enable vpc-file-csi-driver --cluster CLUSTER --version VERSION
    
  5. Verify that the add-on is enabled by running the following commands.

    kubectl get deploy -n kube-system | grep file
    
    ibm-vpc-file-csi-controller   2/2     2            2           13m
    
    kubectl get ds -n kube-system | grep file
    
    ibm-vpc-file-csi-node    2         2         2       2            2           <none>          14m
    
    kubectl get pods -n kube-system  | grep file
    
    ibm-vpc-file-csi-controller-7899db784-kc29g   5/5     Running   0             14m
    ibm-vpc-file-csi-controller-7899db784-mp5jt   5/5     Running   0             14m
    ibm-vpc-file-csi-node-bfqdz                   4/4     Running   0             14m
    ibm-vpc-file-csi-node-n7jbx                   4/4     Running   0             14m
    
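Steps 3 and 4 can be combined into a small script; the cluster name and version are placeholders.

```shell
#!/bin/sh
# Disable the current add-on version, then enable the target version.
# CLUSTER and VERSION are placeholders -- substitute your own values.
CLUSTER=my-cluster
VERSION=2.0.3

ibmcloud ks cluster addon disable vpc-file-csi-driver --cluster "$CLUSTER"
ibmcloud ks cluster addon enable vpc-file-csi-driver --cluster "$CLUSTER" --version "$VERSION"

# Wait until the controller deployment reports ready replicas.
kubectl rollout status deployment/ibm-vpc-file-csi-controller -n kube-system --timeout=5m
```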

Updating encryption in-transit (EIT) packages

The PACKAGE_DEPLOYER_VERSION in the addon-vpc-file-csi-driver-configmap indicates the image version of the EIT packages.

When a new image is available, edit the add-on configmap and specify the new image version to update the packages on your worker nodes.

  1. Edit the addon-vpc-file-csi-driver-configmap configmap and specify the new image version.

    kubectl edit cm addon-vpc-file-csi-driver-configmap -n kube-system
    

    Example output

    PACKAGE_DEPLOYER_VERSION: v1.0.0
    
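If you prefer not to open an editor, the same change can be made non-interactively with a merge patch; the version string here is an example value.

```shell
# Set PACKAGE_DEPLOYER_VERSION in the add-on configmap without opening an editor.
# v1.1.0 is an example -- use the actual new image version.
kubectl patch configmap addon-vpc-file-csi-driver-configmap -n kube-system \
  --type merge -p '{"data":{"PACKAGE_DEPLOYER_VERSION":"v1.1.0"}}'
```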
  2. Follow the status of the update by reviewing the events in the file-csi-driver-status configmap.

    kubectl get cm file-csi-driver-status -n kube-system -o yaml
    
    events: |
      - event: EnableVPCFileCSIDriver
        description: 'VPC File CSI Driver enable successful, DriverVersion: v2.0.3'
        timestamp: "2024-06-13 09:17:07"
      - event: EnableEITRequest
        description: 'Request received to enableEIT, workerPools: , check the file-csi-driver-status
          configmap for eit installation status on each node of each workerpool.'
        timestamp: "2024-06-13 09:17:31"
      - event: 'Enabling EIT on host: 10.240.0.10'
        description: 'Package installation successful on host: 10.240.0.10, workerpool: default'
        timestamp: "2024-06-13 09:17:48"
      - event: 'Enabling EIT on host: 10.240.0.8'
        description: 'Package installation successful on host: 10.240.0.8, workerpool: default'
        timestamp: "2024-06-13 09:17:48"
      - event: 'Enabling EIT on host: 10.240.0.8'
        description: 'Package update successful on host: 10.240.0.8, workerpool: default'
        timestamp: "2024-06-13 09:20:21"
      - event: 'Enabling EIT on host: 10.240.0.10'
        description: 'Package update successful on host: 10.240.0.10, workerpool: default'
        timestamp: "2024-06-13 09:20:21"
    
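To print just the events stream from step 2 without the surrounding configmap YAML, a jsonpath query can be used:

```shell
# Print only the events field of the status configmap.
kubectl get cm file-csi-driver-status -n kube-system -o jsonpath='{.data.events}'
```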

Disabling the add-on

Disabling the vpc-file-csi-driver removes the encryption in-transit packages from your worker nodes.

  1. Run the following command to disable the add-on.

    ibmcloud ks cluster addon disable --addon vpc-file-csi-driver --cluster CLUSTER
    
  2. Verify that the pods are removed.

    kubectl get pods -n kube-system  | grep file
    

Understanding your storage removal options

Tagging was not supported in version 1.2. This impacts the removal of file shares when a cluster is deleted with the --force-delete-storage option. Make sure you clean up all PVCs that were created with version 1.2 of the add-on before deleting your cluster.

Removing persistent storage from your IBM Cloud account varies depending on how you provisioned the storage and what components you already removed.

Is my persistent storage deleted when I delete my cluster?
During cluster deletion, you have the option to remove your persistent storage. However, depending on how your storage was provisioned, the removal of your storage might not include all storage components. If you dynamically provisioned storage with a storage class that sets reclaimPolicy: Delete, your PVC, PV, and the storage instance are automatically deleted when you delete the cluster. For storage that was statically provisioned or storage that you provisioned with a storage class that sets reclaimPolicy: Retain, the PVC and the PV are removed when you delete the cluster, but your storage instance and your data remain. You are still charged for your storage instance. Also, if you deleted your cluster in an unhealthy state, the storage might still exist even if you chose to remove it.
How do I delete the storage when I want to keep my cluster?
When you dynamically provisioned the storage with a storage class that sets reclaimPolicy: Delete, you can remove the PVC to start the deletion process of your persistent storage. Your PVC, PV, and storage instance are automatically removed. For storage that was statically provisioned or storage that you provisioned with a storage class that sets reclaimPolicy: Retain, you must manually remove the PVC, PV, and the storage instance to avoid further charges.
How does the billing stop after I delete my storage?
Depending on what storage components you delete and when, the billing cycle might not stop immediately. If you delete the PVC and PV, but not the storage instance in your IBM Cloud account, that instance still exists and you are charged for it.

If you delete the PVC, PV, and the storage instance, the billing cycle stops depending on the billingType that you chose when you provisioned your storage and how you chose to delete the storage.

  • When you manually cancel the persistent storage instance from the IBM Cloud console or the CLI, billing stops as follows:

    • Hourly storage: Billing stops immediately. After your storage is canceled, you might still see your storage instance in the console for up to 72 hours.
    • Monthly storage: You can choose between immediate cancellation or cancellation on the anniversary date. In both cases, you are billed until the end of the current billing cycle, and billing stops for the next billing cycle. After your storage is canceled, you might still see your storage instance in the console or the CLI for up to 72 hours.
      • Immediate cancellation: Choose this option to immediately remove your storage. Neither you nor your users can use the storage anymore or recover the data.
      • Anniversary date: Choose this option to cancel your storage on the next anniversary date. Your storage instances remain active until the next anniversary date, and you can continue to use them until this date, such as to give your team time to make backups of your data.
  • When you dynamically provisioned the storage with a storage class that sets reclaimPolicy: Delete and you choose to remove the PVC, the PV and the storage instance are immediately removed. For hourly billed storage, billing stops immediately. For monthly billed storage, you are still charged for the remainder of the month. After your storage is removed and billing stops, you might still see your storage instance in the console or the CLI for up to 72 hours.

What do I need to be aware of before I delete persistent storage?
When you clean up persistent storage, you delete all the data that is stored in it. If you need a copy of the data, make a backup.
I deleted my storage instance. Why can I still see my instance?
After you remove persistent storage, it can take up to 72 hours for the removal to be fully processed and for the storage to disappear from your IBM Cloud console or CLI.

Cleaning up persistent storage

Remove the PVC, PV, and the storage instance from your IBM Cloud account to avoid further charges for your persistent storage.

To clean up persistent data:

  1. List the PVCs in your cluster and note the NAME of the PVC, the STORAGECLASS, and the name of the PV that is bound to the PVC and shown as VOLUME.

    kubectl get pvc
    

    Example output

    NAME     STATUS   VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
    claim1   Bound    pvc-06886b77-102b-11e8-968a-f6612bb731fb   20Gi       RWO           class          78d
    claim2   Bound    pvc-457a2b96-fafc-11e7-8ff9-b6c8f770356c   4Gi        RWX           class          105d
    claim3   Bound    pvc-1efef0ba-0c48-11e8-968a-f6612bb731fb   24Gi       RWX           class          83d
    
  2. Review the ReclaimPolicy and billingType for the storage class.

    kubectl describe storageclass <storageclass_name>
    

    If the reclaim policy says Delete, your PV and the physical storage are removed when you remove the PVC. If the reclaim policy says Retain, or if you provisioned your storage without a storage class, then your PV and physical storage are not removed when you remove the PVC. You must remove the PVC, PV, and the physical storage separately.

    If your storage is charged monthly, you still get charged for the entire month, even if you remove the storage before the end of the billing cycle.
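If you only need the reclaim policy from step 2, a jsonpath query returns it directly; the storage class name is a placeholder.

```shell
# Print only the reclaim policy of a storage class (name is a placeholder).
kubectl get storageclass my-storage-class -o jsonpath='{.reclaimPolicy}'
```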

  3. Remove any pods that mount the PVC. List the pods that mount the PVC. If no pod is returned in your CLI output, you don't have a pod that uses the PVC.

    kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.volumes[*]}{.persistentVolumeClaim.claimName}{" "}{end}{end}' | grep "<pvc_name>"
    

    Example output

    depl-12345-prz7b:    claim1
    
  4. Remove the pod that uses the PVC. If the pod is part of a deployment, remove the deployment.

    kubectl delete pod <pod_name>
    
  5. Verify that the pod is removed.

    kubectl get pods
    
  6. Remove the PVC.

    kubectl delete pvc <pvc_name>
    
  7. Review the status of your PV. Use the name of the PV that you retrieved earlier as VOLUME. When you remove the PVC, the PV that is bound to the PVC is released. Depending on how you provisioned your storage, your PV goes into a Deleting state if the PV is deleted automatically, or into a Released state, if you must manually delete the PV. Note: For PVs that are automatically deleted, the status might briefly say Released before it is deleted. Rerun the command after a few minutes to see whether the PV is removed.

    kubectl get pv <pv_name>
    
  8. If your PV is not deleted, manually remove the PV.

    kubectl delete pv <pv_name>
    
  9. Verify that the PV is removed.

    kubectl get pv
    
  10. List your shares.

    ibmcloud is shares
    
  11. Review the details of each file share to find the associated cluster ID.

    ibmcloud is share SHARE | grep CLUSTER-ID
    
  12. Delete the shares.

    ibmcloud is share-delete SHARE1 SHARE2 ...
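Steps 10 through 12 can be sketched as a loop that checks each share for your cluster ID before deleting it. The cluster ID is a placeholder, and the parsing assumes the share name is the second column of the default `ibmcloud is shares` table output, which might change between CLI versions.

```shell
#!/bin/sh
# Delete the file shares that are associated with a given cluster.
# CLUSTER_ID is a placeholder; the column position in the table
# output is an assumption about the CLI's default formatting.
CLUSTER_ID=my-cluster-id

for share in $(ibmcloud is shares | awk 'NR>1 {print $2}'); do
  # Delete only the shares whose details mention the cluster ID.
  if ibmcloud is share "$share" | grep -q "$CLUSTER_ID"; then
    ibmcloud is share-delete "$share"
  fi
done
```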