Debugging Block Storage for VPC metrics

When you try to view Block Storage for VPC metrics in the monitoring dashboard, the metrics do not populate.

Metrics might fail to populate in the dashboard for one of the following reasons:

  • The PVC you want to monitor might not be mounted. Metrics are only populated for PVCs that are mounted to a pod.
  • There might be a console-related issue, which can be verified by manually viewing the storage metrics in the CLI.

Check that the PVC is mounted. If the issue persists, manually view your metrics in the CLI to determine if the cause is related to issues with the console.

  1. Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster.
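
    For example, if you use the IBM Cloud CLI with the Kubernetes Service plug-in, the commands might look similar to the following. The resource group and cluster names are placeholders.

    ibmcloud login
    ibmcloud target -g <resource_group>
    ibmcloud ks cluster config --cluster <cluster_name>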

  2. Describe the PVC. If the Used By row of the output is populated with the name of a pod, then the PVC is mounted.

    kubectl describe pvc <pvc_name>
    

    Example output

    Name:          my-pvc
    Namespace:     default
    StorageClass:  ibmc-vpc-block-5iops-tier
    Status:        Bound
    Volume:        pvc-a11a11a1-111a-111a-a1a1-aaa111aa1a1a 
    Labels:        <none>
    Annotations:   pv.kubernetes.io/bind-completed: yes
                   pv.kubernetes.io/bound-by-controller: yes
                   volume.beta.kubernetes.io/storage-provisioner: vpc.block.csi.ibm.io
    Finalizers:    [kubernetes.io/pvc-protection]
    Capacity:      10Gi
    Access Modes:  RWO
    VolumeMode:    Filesystem
    Used By:       my-pod-11a1a1a1a1-1a11a 
    Events:        <none>
    
  3. If the PVC is not mounted to a pod, review the steps for setting up Block Storage for VPC and mount the PVC to a pod. Then try to view the metrics again.
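
    For example, a minimal pod spec that mounts the PVC might look similar to the following. The pod name, container image, and mount path are placeholders; replace <pvc_name> with the name of your PVC.

    apiVersion: v1
    kind: Pod
    metadata:
      name: metrics-test-pod
    spec:
      containers:
      - name: app
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: <pvc_name>

    kubectl apply -f <file_name>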

  4. If the PVC is mounted, follow the steps for manually viewing the Block Storage for VPC metrics in the CLI, and then [open a support issue](/docs/containers?topic=containers-get-help). The manual verification steps let you view your metrics in the CLI, but they are not a solution for metrics that do not populate in the console. If you can view your metrics manually, this indicates a console issue for which you must open a support issue.

Manually viewing storage metrics in the CLI

If your storage metrics are not visible in the monitoring dashboard, you can manually view them in the CLI. Manually viewing your storage metrics is a temporary workaround, not a permanent monitoring solution. If, after you complete the following steps, you can view the metrics in the CLI but not in the dashboard, this indicates a console issue for which you must [open a support issue](/docs/containers?topic=containers-get-help).

After you complete the following steps, make sure to remove the resources you created while debugging.

  1. Create and deploy a custom clusterRole configuration. In this example, the clusterRole is named test-metrics-reader.

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: test-metrics-reader
    rules:
    - nonResourceURLs:
      - "/metrics"
      verbs:
      - get
    - apiGroups:
      - ""
      resources:
      - nodes/metrics
      verbs:
      - get
    
    kubectl apply -f <file_name>
    
  2. Create a service account. In this example, the service account is named test-sa.

    kubectl create sa test-sa
    
  3. Add a clusterRoleBinding to the clusterRole.

    kubectl create clusterrolebinding test-metrics-reader --clusterrole test-metrics-reader --serviceaccount=default:test-sa
    
  4. List your nodes and note the name and IP of the node for which you want to gather metrics.

    kubectl get nodes
    

    Example output

    NAME          STATUS    ROLES    AGE     VERSION              
    10.111.1.11   Ready     <none>   1d      v1.31+IKS            
    
  5. Create a YAML file to deploy a pod onto the node. Make sure to specify the service account that you created and the node IP address.

    apiVersion: v1
    kind: Pod
    metadata:
      name: testpod
    spec:
      nodeName: 10.111.1.11
      containers:
      - image: nginx
        name: nginx
      serviceAccountName: test-sa
    
    kubectl apply -f <file_name>
    
  6. Retrieve the service account token from within the pod.

    1. Log in to the pod.

      kubectl exec testpod -it -- bash
      
    2. Run the following command to get the token. Note that there is no output.

      token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
      
  7. While you are still logged in to the pod, run the command to view the storage metrics. Make sure to specify the node IP address.

    curl -k -H "Authorization: Bearer $token" https://<node_IP>:10250/metrics | grep kubelet_volume_stats
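
    Example output. The metric names come from the kubelet's standard volume statistics; the values shown here are illustrative only.

    kubelet_volume_stats_available_bytes{namespace="default",persistentvolumeclaim="my-pvc"} 1.0358398976e+10
    kubelet_volume_stats_capacity_bytes{namespace="default",persistentvolumeclaim="my-pvc"} 1.055814778e+10
    kubelet_volume_stats_used_bytes{namespace="default",persistentvolumeclaim="my-pvc"} 1.8210816e+07
    kubelet_volume_stats_inodes{namespace="default",persistentvolumeclaim="my-pvc"} 655360
    kubelet_volume_stats_inodes_used{namespace="default",persistentvolumeclaim="my-pvc"} 12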
    
  8. View the metrics in the terminal output. You might need to wait several minutes for the metrics to appear. If you are still unable to view the metrics, [open a support issue](/docs/containers?topic=containers-get-help).

  9. After you finish viewing the metrics and determine whether the issue is related to the dashboard or the metrics agent, delete the configurations and resources that you created in the previous steps.

Do not skip this step.

  1. Exit the pod.
    exit
    
  2. Delete the pod.
    kubectl delete pod testpod
    
  3. Delete the clusterRoleBinding.
    kubectl delete clusterrolebinding test-metrics-reader
    
  4. Delete the service account.
    kubectl delete sa test-sa
    
  5. Delete the cluster role.
    kubectl delete clusterrole test-metrics-reader