Debugging common CLI issues with clusters
Virtual Private Cloud Classic infrastructure
Review the following common reasons for CLI connection issues or command failures.
Firewall prevents running CLI commands
When you run ibmcloud, kubectl, oc, or calicoctl commands from the CLI, they fail.
You might have corporate network policies that prevent access from your local system to public endpoints via proxies or firewalls.
Allow TCP access for the CLI commands to work.
This task requires the Administrator IBM Cloud IAM platform access role for the cluster.
kubectl or oc commands don't work
When you run kubectl or oc commands against your cluster, your commands fail with an error message similar to the following.
No resources found.
Error from server (NotAcceptable): unknown (get nodes)
invalid object doesn't have additional properties
error: No Auth Provider found for name "oidc"
You have a different version of kubectl than your cluster version. Kubernetes does not support kubectl client versions that are 2 or more versions apart from the server version (n +/- 2). If you use a community Kubernetes cluster, you might also have the Red Hat OpenShift version of kubectl, which does not work with community Kubernetes clusters.
To check your client kubectl version against the cluster server version, run oc version --short.
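The skew rule above can be sketched as a small shell check. This is an illustrative helper, not an IBM Cloud command; the minor-version numbers passed to it are made-up examples rather than output from a real cluster.

```shell
# Minimal sketch of the version-skew check: flag a kubectl client whose
# minor version is more than 2 releases away from the server's minor version.
skew_ok() {
  client_minor=$1
  server_minor=$2
  diff=$(( client_minor - server_minor ))
  # ${diff#-} strips a leading minus sign, giving the absolute difference
  [ "${diff#-}" -le 2 ]
}

skew_ok 29 28 && echo "client version is within the supported skew"
skew_ok 29 25 || echo "client version is too far from the server version"
```

In practice you would plug in the minor versions reported by oc version --short for the client and server.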
Install the version of the CLI that matches the version of your cluster.
If you have multiple clusters at different versions or different container platforms such as Red Hat OpenShift, download each kubectl version binary file to a separate directory. Then, you can set up an alias in your local command-line interface (CLI) profile to point to the kubectl binary file directory that matches the kubectl version of the cluster that you want to work with, or you might be able to use a tool such as brew switch kubernetes-cli <major.minor>.
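The alias setup described above might look like the following in a shell profile such as ~/.bashrc. The version numbers and directory paths here are placeholders; adjust them to wherever you saved each downloaded binary.

```shell
# Hypothetical layout: one directory per downloaded kubectl release.
# Switch clusters by using the alias that matches the cluster version.
alias kubectl127="$HOME/kubectl-binaries/v1.27/kubectl"
alias kubectl129="$HOME/kubectl-binaries/v1.29/kubectl"
```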
Time out when trying to connect to a pod
You try to connect to a pod, such as logging in with oc exec or getting logs with oc logs. The pod is healthy, but you see an error message similar to the following.
Error from server: Get https://<10.xxx.xx.xxx>:<port>/<address>: dial tcp <10.xxx.xx.xxx>:<port>: connect: connection timed out
The VPN server is experiencing configuration issues that prevent accessing the pod from its internal address.
Before you begin: Access your Red Hat OpenShift cluster.
- Check whether cluster and worker node updates are available by viewing your cluster and worker node details in the console or with a cluster ls or worker ls command. If updates are available, update your cluster and worker nodes to the latest version.
- Restart the VPN pod by deleting it. Another VPN pod is scheduled. When its STATUS is Running, try to connect to the pod that you previously could not connect to.
oc delete pod -n kube-system -l app=vpn
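After deleting the pod, you can watch for the replacement to be scheduled and reach Running. This assumes you are logged in to the cluster:

```shell
# Watch the replacement VPN pod until its STATUS column shows Running
# (requires an active cluster login; press Ctrl+C to stop watching).
oc get pods -n kube-system -l app=vpn -w
```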
500 error when trying to log in to a Red Hat OpenShift cluster via oc login
When you try to log in to a Red Hat OpenShift cluster via oc login for the first time, you see an error message similar to the following.
$ oc login SERVER -u apikey -p <APIKEY>
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y
Error from server (InternalError): Internal error occurred: unexpected response: 500
Some recent changes to the IAM user role have not yet been synchronized to the Red Hat OpenShift cluster.
Synchronize the IAM user information to the Red Hat OpenShift cluster. After the initial user synchronization is performed, further RBAC synchronization should occur automatically.
Before you begin:
Access your Red Hat OpenShift cluster.
To synchronize the IAM information for the user, you have 2 options:
- Log in to your cluster from the Red Hat OpenShift clusters console.
- Set your command-line context for the cluster by running the ibmcloud oc cluster config --cluster CLUSTER command.
If you use an API key for a functional ID or another user, make sure to log in as the correct user.
After the impacted user completes the IAM synchronization, the cluster administrator can verify that the user exists in the cluster by listing users with the oc get users command.
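As a quick check, the administrator can filter the user list for the impacted user. The email address here is a placeholder for the user who completed the synchronization, and the command assumes an active cluster-admin login:

```shell
# Confirm that the synchronized user now appears in the cluster's user list.
# Replace user@email.com with the impacted user's email address.
oc get users | grep -i user@email.com
```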
Missing projects or oc and kubectl commands fail
Virtual Private Cloud Classic infrastructure
You don't see all the projects that you have access to. When you try to run oc or kubectl commands, you see an error similar to the following.
No resources found.
Error from server (Forbidden): <resource> is forbidden: User "IAM#user@email.com" can't list <resources> at the cluster scope: no RBAC policy matched
You need to download the admin configuration files for your cluster to run commands that require the cluster-admin cluster role. Run ibmcloud oc cluster config --cluster <cluster_name_or_ID> --admin and try again.
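Together, the two steps look like the following. The cluster name is a placeholder, and the follow-up check assumes the oc CLI is installed and the admin context was applied:

```shell
# Download the admin kubeconfig for the cluster, then confirm that a
# cluster-admin-only action is now permitted (expect "yes" on success).
ibmcloud oc cluster config --cluster <cluster_name_or_ID> --admin
oc auth can-i list namespaces --all-namespaces
```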