IBM Cloud Docs
Debugging persistent storage failures

Review the options to debug persistent storage and find the root causes for failures.

Checking whether the pod that mounts your storage instance is successfully deployed

  1. List the pods in your cluster. A pod is successfully deployed if the pod shows a status of Running.

    kubectl get pods
    
  2. Get the details of your pod and check whether errors are displayed in the Events section of your CLI output.

    kubectl describe pod <pod_name>
    
  3. Retrieve the logs for your app and check whether you can see any error messages.

    kubectl logs <pod_name>
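When a cluster runs many pods, it can help to narrow the listing down to pods that are not Running before you start describing them one by one. A minimal sketch that filters saved kubectl get pods output (the pod names and statuses here are made up):

```shell
# Filter a saved `kubectl get pods` listing down to pods that are not
# in the Running state; pod names and statuses are hypothetical examples.
pods='NAME        READY   STATUS             RESTARTS   AGE
web-app-1   1/1     Running            0          2d
web-app-2   0/1     CrashLoopBackOff   12         2d'

# Skip the header row, then print the name and status of any non-Running pod.
echo "$pods" | awk 'NR > 1 && $3 != "Running" {print $1, $3}'
# prints: web-app-2 CrashLoopBackOff
```

You can get similar filtering directly from the API server with kubectl get pods --field-selector=status.phase!=Running.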
    

Restarting your app pod

  1. If your pod is part of a deployment, delete the pod and let the deployment rebuild it. If your pod is not part of a deployment, delete the pod and reapply your pod configuration file.

    kubectl delete pod <pod_name>
    
    kubectl apply -f <app.yaml>
    
  2. If restarting your pod does not resolve the issue, reload your worker node.

  3. Verify that you are running the latest IBM Cloud CLI and IBM Cloud Kubernetes Service plug-in versions. The first command updates the CLI itself; the second lists the plug-in versions that are available in the repositories.

    ibmcloud update
    
    ibmcloud plugin repo-plugins
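To see at a glance which plug-in version is installed locally, you can pull it out of saved ibmcloud plugin list output and compare it against what the repository offers. A small sketch (the version numbers are hypothetical):

```shell
# Extract the installed version of the container-service plug-in from a
# saved `ibmcloud plugin list` listing; the versions shown are made up.
plugins='Plugin Name          Version     Status
container-service    1.0.571
container-registry   1.0.10'

echo "$plugins" | awk '$1 == "container-service" {print $2}'
# prints: 1.0.571
```

Comparing this value with the versions that ibmcloud plugin repo-plugins reports tells you whether an update is available.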
    

Verifying that the storage driver and plug-in pods show a status of Running

  1. List the pods in the kube-system namespace.
    kubectl get pods -n kube-system
    
  2. If the storage driver and plug-in pods don't show a Running status, get more details of the pod to find the root cause. Depending on the status of your pod, you might not be able to execute all the following commands.
    1. Get the names of the containers that run in the driver pod.
      kubectl get pod <pod_name> -n kube-system -o jsonpath="{.spec['containers','initContainers'][*].name}" | tr -s '[[:space:]]' '\n'
      
      Example output for Block Storage for VPC with three containers:
      csi-provisioner
      csi-attacher
      iks-vpc-block-driver
      
      Example output for Block Storage for Classic:
      ibmcloud-block-storage-driver-container
      
    2. Export the logs from the driver pod to a logs.txt file on your local machine. Include the driver container name.
      kubectl logs <pod_name> -n kube-system -c <container_name> > logs.txt
      
    3. Review the log file.
      cat logs.txt
      
  3. Analyze the Events section of the CLI output of the kubectl describe pod command and the latest logs to find the root cause for the error.
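When the describe output is long, it can help to filter the Events section down to warnings before reading the logs. A minimal sketch over saved output (the event lines here are made-up examples):

```shell
# Pull only the Warning events out of saved `kubectl describe pod` output;
# the event lines are hypothetical examples.
events='Events:
  Type     Reason     Age   From     Message
  ----     ------     ----  ----     -------
  Normal   Pulled     2m    kubelet  Successfully pulled image
  Warning  BackOff    1m    kubelet  Back-off restarting failed container'

echo "$events" | awk '$1 == "Warning"'
# prints only the Warning BackOff line
```

Against a live cluster, kubectl get events -n kube-system --field-selector type=Warning gives a similar warnings-only view.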

Checking whether your PVC is successfully provisioned

  1. Check the state of your PVC. A PVC is successfully provisioned if the PVC shows a status of Bound.

    kubectl get pvc
    
  2. If the state of the PVC shows Pending, describe the PVC to retrieve the error that explains why it remains pending.

    kubectl describe pvc <pvc_name>
    
  3. Review common errors that can occur during the PVC creation.

  4. Review common errors that can occur when you mount a PVC to your app.

  5. Verify that the kubectl CLI version that you run on your local machine matches the Kubernetes version that is installed in your cluster. If you use a kubectl CLI version that does not match at least the major.minor version of your cluster, you might experience unexpected results. For example, Kubernetes does not support kubectl client versions that are two or more minor versions apart from the server version (only n +/- 1 is supported).

    1. Show the kubectl CLI version that is installed in your cluster and your local machine.
      kubectl version
      
      Example output
      Client Version: version.Info{Major:"1", Minor:"31", GitVersion:"v1.31", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"darwin/amd64"}
      Server Version: version.Info{Major:"1", Minor:"31", GitVersion:"v1.31+IKS", GitCommit:"e15454c2216a73b59e9a059fd2def4e6712a7cf0", GitTreeState:"clean", BuildDate:"2019-04-01T10:08:07Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
      
      The CLI versions match if you can see the same version in GitVersion for the client and the server. You can ignore the +IKS part of the version for the server.
    2. If the kubectl CLI versions on your local machine and your cluster don't match, either update your cluster or install a different CLI version on your local machine.
  6. For Block Storage for VPC, verify that you have the latest version of the add-on.

  7. For classic block storage, object storage, and Portworx only: Make sure that you installed the latest Helm chart version for the plug-in.

    Block and object storage:

    1. Update your Helm chart repositories.
      helm repo update
      
    2. List the Helm charts in the repository. For classic block storage:
      helm search repo iks-charts | grep block-storage-plugin
      
      Example output
      iks-charts-stage/ibmcloud-block-storage-plugin    1.5.0                                                        A Helm chart for installing ibmcloud block storage plugin   
      iks-charts/ibmcloud-block-storage-plugin          1.5.0                                                        A Helm chart for installing ibmcloud block storage plugin   
      
      For object storage:
      helm search repo ibm-charts | grep object-storage-plugin
      
      Example output
      ibm-charts/ibm-object-storage-plugin             1.0.9            1.0.9                             A Helm chart for installing ibmcloud object storage plugin  
      
    3. List the installed Helm charts in your cluster and compare the version that you installed with the version that is available.
      helm list --all-namespaces
      
    4. If a more recent version is available, install this version. For instructions, see Updating the IBM Cloud Block Storage plug-in and Updating the IBM Cloud Object Storage plug-in.
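The PVC checks in steps 1 and 2 can be scripted so that only unbound claims are reported. A sketch over saved kubectl get pvc output (the claim names and storage classes are hypothetical):

```shell
# Report any claim whose status is not Bound in a saved `kubectl get pvc`
# listing; names and storage classes are made-up examples.
pvcs='NAME       STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS
app-data   Bound     pvc-abc   20Gi       RWO            ibmc-block-gold
app-logs   Pending'

# Skip the header row; anything not Bound needs a closer look.
echo "$pvcs" | awk 'NR > 1 && $2 != "Bound" {print $1, $2}'
# prints: app-logs Pending
```

Each claim that this reports is a candidate for kubectl describe pvc to find the underlying provisioning error.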

Portworx

  1. Find the latest Helm chart version that is available.

  2. List the installed Helm charts in your cluster and compare the version that you installed with the version that is available.

    helm list --all-namespaces
    
  3. If a more recent version is available, install this version. For instructions, see Updating Portworx in your cluster.
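To compare the installed chart version against the latest available one, you can extract the version from the chart column of helm list output. A sketch over an abbreviated, made-up listing (real helm list output has more columns, and the release name and version here are hypothetical):

```shell
# Abbreviated, made-up `helm list` output: name, namespace, status, chart.
releases='NAME      NAMESPACE    STATUS    CHART
portworx  kube-system  deployed  portworx-2.6.0'

# The chart column is <name>-<version>; keep the part after the last hyphen.
echo "$releases" | awk 'NR > 1 {n = split($NF, a, "-"); print a[n]}'
# prints: 2.6.0
```

If the version that this prints is older than the latest chart version in the repository, plan an update.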

OpenShift Data Foundation

Describe your ODF resources and review the command outputs for any error messages.

  1. List the name of your ODF cluster.
    kubectl get ocscluster
    
    Example output:
    NAME             AGE
    ocscluster-vpc   71d
    
  2. Describe the storage cluster and review the Events section of the output for any error messages.
    kubectl describe ocscluster <ocscluster-name>
    
  3. List the pods in the kube-system namespace and verify that they are Running.
    kubectl get pods -n kube-system
    
  4. Describe the ibm-ocs-operator-controller-manager pod and review the Events section in the output for any error messages.
    kubectl describe pod <ibm-ocs-operator-controller-manager-a1a1a1a> -n kube-system
    
  5. Review the logs of the ibm-ocs-operator-controller-manager.
    kubectl logs <ibm-ocs-operator-controller-manager-a1a1a1a> -n kube-system
    
  6. Describe NooBaa and review the Events section of the output for any error messages.
    kubectl describe noobaa -n openshift-storage
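The first two ODF steps can be chained: capture the cluster name, then describe it. A sketch that extracts the name from saved output shaped like the example listing in step 1:

```shell
# Extract the ODF cluster name from saved `kubectl get ocscluster` output;
# the cluster name matches the example listing above.
ocs='NAME             AGE
ocscluster-vpc   71d'

name=$(echo "$ocs" | awk 'NR == 2 {print $1}')
echo "$name"
# prints: ocscluster-vpc

# Against a live cluster, you could then run (not executed here):
#   kubectl describe ocscluster "$name"
```

Against a live cluster, kubectl get ocscluster -o jsonpath='{.items[0].metadata.name}' retrieves the same name without text parsing.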