Debugging IBM Cloud File Storage for Classic failures
Review the options to debug File Storage for Classic and find the root causes of any failures.
Checking whether the pod that mounts your storage instance is successfully deployed
Follow the steps to review any error messages related to pod deployment.
- List the pods in your cluster. A pod is successfully deployed if the pod shows a status of Running.
  kubectl get pods
- Get the details of your pod and review any error messages that are displayed in the Events section of your CLI output.
  kubectl describe pod <pod_name>
- Retrieve the logs for your pod and review any error messages. For a pod that crashed and restarted, see the sketch after this list.
  kubectl logs <pod_name>
- Review the File Storage for Classic troubleshooting documentation for steps to resolve common errors.
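If your pod crashed and was restarted, the current logs might not contain the original error. A minimal sketch that uses only standard kubectl flags:

# Show the logs of the previous, crashed container instance.
kubectl logs <pod_name> --previous
# If the pod runs more than one container, target a specific container.
kubectl logs <pod_name> -c <container_name> --previous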
Restarting your app pod
Some issues can be resolved by restarting and redeploying your pods. Follow the steps to redeploy a specific pod.
- If your pod is part of a deployment, delete the pod and let the deployment rebuild it, or restart the deployment as shown in the sketch after this list. If your pod is not part of a deployment, delete the pod and reapply your pod configuration file.
  - Delete the pod.
    kubectl delete pod <pod_name>
    Example output:
    pod "nginx" deleted
  - Reapply the configuration file to redeploy the pod.
    kubectl apply -f <app.yaml>
    Example output:
    pod/nginx created
- If restarting your pod does not resolve the issue, reload your worker nodes.
- Verify that you use the latest IBM Cloud CLI and IBM Cloud Kubernetes Service plug-in versions.
  ibmcloud update
  ibmcloud plugin repo-plugins
  ibmcloud plugin update
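For pods that are managed by a deployment, you can also restart the pods in place instead of deleting them one by one, and you can update a single plug-in rather than all of them. A minimal sketch: kubectl rollout restart is a standard kubectl command; container-service is assumed to be the repository name of the IBM Cloud Kubernetes Service plug-in, so verify it with ibmcloud plugin list.

# Restart all pods that belong to a deployment without deleting them manually.
kubectl rollout restart deployment <deployment_name>
# Update only the Kubernetes Service plug-in (plug-in name assumed).
ibmcloud plugin update container-service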
Verifying that the storage driver and plug-in pods show a status of Running
Follow the steps to check the status of your storage driver and plug-in pods and review any error messages.
- List the pods in the kube-system namespace.
  kubectl get pods -n kube-system
- If the storage driver and plug-in pods don't show a Running status, get more details of the pod to find the root cause. Depending on the status of your pod, the following commands might fail.
  - Get the names of the containers that run in the driver pod.
    kubectl describe pod <pod_name> -n kube-system
  - Export the logs from the driver pod to a logs.txt file on your local machine.
    kubectl logs <pod_name> -n kube-system > logs.txt
  - Review the log file.
    cat logs.txt
- Check the latest logs for any error messages. Review the File Storage for Classic troubleshooting documentation for steps to resolve common errors.
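To narrow the kube-system listing to the storage components, you can filter the output. A hedged sketch: in IBM Cloud Kubernetes Service clusters the file storage plug-in pods are typically named ibm-file-plugin-* and ibm-storage-watcher-*, but verify the names in your own cluster before relying on the grep pattern.

# Filter the kube-system pods for the file storage plug-in components (names assumed).
kubectl get pods -n kube-system | grep -E 'ibm-file-plugin|ibm-storage-watcher'
# Export the logs of one container from the driver pod; use -c when the pod runs multiple containers.
kubectl logs <pod_name> -n kube-system -c <container_name> > logs.txt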
Checking whether your PVC is successfully provisioned
Follow the steps to check the status of your PVC and review any error messages.
- Check the status of your PVC. A PVC is successfully provisioned if the PVC shows a status of Bound.
  kubectl get pvc
- If the PVC shows a status of Bound, the PVC is successfully provisioned.
  Example output:
  NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
  silver-pvc   Bound    pvc-4b881a6b-ada8-4a44-b568-fe909107d756   24Gi       RWX            ibmc-file-silver   7m29s
- If the status of the PVC shows Pending, describe the PVC and review the Events section of the output for any warnings or error messages. Note that PVCs that reference storage classes with the volume binding mode set to WaitForFirstConsumer remain Pending until an app pod that uses the PVC is deployed. To check the binding mode of a storage class, see the sketch after this list.
  kubectl describe pvc <pvc_name>
  Example output:
  Name:          local-pvc
  Namespace:     default
  StorageClass:  sat-local-file-gold
  Status:        Pending
  Volume:
  Labels:        <none>
  Annotations:   <none>
  Finalizers:    [kubernetes.io/pvc-protection]
  Capacity:
  Access Modes:
  VolumeMode:    Filesystem
  Mounted By:    <none>
  Events:
    Type     Reason              Age                 From                         Message
    ----     ------              ----                ----                         -------
    Warning  ProvisioningFailed  60s (x42 over 11m)  persistentvolume-controller  storageclass.storage.k8s.io "sat-local-file-gold" not found
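To tell whether a Pending PVC is waiting for a consumer or failed to provision, inspect the storage class that it references. A minimal sketch that uses only standard kubectl flags; the jsonpath expression reads the volumeBindingMode field of the storage class.

# Print the binding mode of the storage class that the PVC references.
kubectl get storageclass <storageclass_name> -o jsonpath='{.volumeBindingMode}'
# WaitForFirstConsumer: the PVC stays Pending until a pod that uses it is deployed.
# Immediate: a Pending PVC indicates a provisioning problem; review the Events section.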
Checking and updating the kubectl CLI version
If you use a kubectl CLI version that does not match at least the major.minor version of your cluster, you might experience unexpected results. For example, Kubernetes does not support kubectl client versions that are 2 or more versions apart from the server version (n +/- 2).
- Verify that the kubectl CLI version that you run on your local machine matches the Kubernetes version that is installed in your cluster. Show the kubectl CLI version that is installed in your cluster and your local machine.
  kubectl version
  Example output:
  Client Version: version.Info{Major:"1", Minor:"31", GitVersion:"v1.31", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"darwin/amd64"}
  Server Version: version.Info{Major:"1", Minor:"31", GitVersion:"v1.31+IKS", GitCommit:"e15454c2216a73b59e9a059fd2def4e6712a7cf0", GitTreeState:"clean", BuildDate:"2019-04-01T10:08:07Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
  The CLI versions match if you can see the same version in GitVersion for the client and the server. You can ignore the +IKS part of the version for the server.
- If the kubectl CLI versions on your local machine and your cluster don't match, either update your cluster or install a different CLI version on your local machine.
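A quicker way to compare the two versions, as a minimal sketch: kubectl version --client is a standard kubectl flag, and ibmcloud ks cluster get shows the cluster details, including the master Kubernetes version; the cluster name is a placeholder.

# Show only the kubectl client version on your local machine.
kubectl version --client
# Show the cluster details, including the master Kubernetes version, for comparison.
ibmcloud ks cluster get --cluster <cluster_name_or_id>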