Why does the Ingress status show an ERRIODEG error?

Applies to: Virtual Private Cloud, Classic infrastructure, and Satellite clusters.
You can use the ibmcloud oc ingress status-report ignored-errors add command to add an error to the ignored-errors list. Ignored errors still appear in the output of the ibmcloud oc ingress status-report get command, but are ignored when calculating the overall Ingress status.
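For example, suppressing ERRIODEG from the overall status while you troubleshoot might look like the following. The cluster name is a placeholder, and the exact flag names may differ between CLI versions, so confirm them with the command's --help output:

```sh
# Hypothetical cluster name; substitute your own. Confirm the flag syntax
# with: ibmcloud oc ingress status-report ignored-errors add --help
ibmcloud oc ingress status-report ignored-errors add -c mycluster --code ERRIODEG

# The error still appears in the report, but no longer affects the overall status.
ibmcloud oc ingress status-report get -c mycluster
```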
When you check the status of your cluster's Ingress components by running the ibmcloud oc ingress status-report get command, you see an error similar to the following.
The Ingress Operator is in a degraded state (ERRIODEG).
The Ingress Operator checks the health of the Ingress Controllers and enters a degraded state when the checks fail.
Get the details of the ingress ClusterOperator and complete the steps based on the error message.

Check the status of the ingress ClusterOperator by running the following command. If you see False in the DEGRADED column, wait 10 to 15 minutes to see whether the Ingress status warning disappears. If it does not, proceed with the troubleshooting steps based on the message in the MESSAGE column.
oc get clusteroperator ingress
One or more status conditions indicate unavailable: DeploymentAvailable=False
- Ensure that your cluster has at least two workers. For more information, see Adding worker nodes to Classic clusters or Adding worker nodes to VPC clusters.
- Ensure that your cluster workers are healthy, otherwise Ingress Controller pods cannot be scheduled. For more information, see Worker node states.
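As a quick check of the second point, you can count schedulable workers from the oc get nodes output. The sketch below runs against a pasted sample with hypothetical node names and states; on a live cluster, pipe the real command output instead:

```sh
# Stand-in for `oc get nodes --no-headers` output; replace with the live
# command on a real cluster. Node names and states are hypothetical.
nodes='10.240.0.4   Ready      master,worker   45d   v1.27.6
10.240.0.5   Ready      master,worker   45d   v1.27.6
10.240.0.6   NotReady   master,worker   45d   v1.27.6'

# Count nodes whose STATUS column is exactly "Ready".
ready=$(printf '%s\n' "$nodes" | awk '$2 == "Ready"' | wc -l)
echo "Ready nodes: $ready"

# Ingress Controller pods need at least two healthy workers to schedule.
if [ "$ready" -ge 2 ]; then echo "enough workers"; else echo "add or repair workers"; fi
```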
One or more status conditions indicate unavailable: LoadBalancerReady=False
- VPC only: Ensure that you did not reach your LBaaS instance quota. For more information, see Quotas and service limits and the ibmcloud is load-balancers command.
- Ensure that your cluster masters are healthy. For more information, see Reviewing master health.
- Refresh your cluster masters by running the ibmcloud oc cluster master refresh command.
One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False
- Ensure that the correct LoadBalancer service address is registered for your Ingress subdomain.
  - Run the ibmcloud oc cluster get command to see your Ingress subdomain.
  - Run the ibmcloud oc nlb-dns get command to see the registered addresses.
  - Run the oc get services -n openshift-ingress command to get the actual load balancer addresses.
  - Compare the registered and actual addresses, and update the subdomain registration if they differ.
    - VPC: Run the ibmcloud oc nlb-dns replace command to replace the current address.
    - Classic: Remove the currently registered addresses by running the ibmcloud oc nlb-dns rm classic command, then add the new addresses with the ibmcloud oc nlb-dns add command.
    - Satellite: The actual addresses depend on your configuration. If you expose your worker nodes with an external load balancer, register the load balancer addresses. Otherwise, register the IP addresses assigned to the router-external-default service in the openshift-ingress namespace (use the oc get services -n openshift-ingress router-external-default -o yaml command to retrieve the addresses). Remove the currently registered addresses by running the ibmcloud oc nlb-dns rm classic command, then add the new addresses with the ibmcloud oc nlb-dns add command.
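One way to make the comparison step mechanical is to diff the two address lists. The addresses and file names below are placeholders; in practice, capture the real lists from the ibmcloud oc nlb-dns get and oc get services -n openshift-ingress commands above:

```sh
# Hypothetical address lists; in practice, capture them from
# `ibmcloud oc nlb-dns get` and `oc get services -n openshift-ingress`.
printf '192.0.2.10\n192.0.2.11\n' | sort > registered.txt
printf '192.0.2.11\n192.0.2.12\n' | sort > actual.txt

# comm -3 suppresses lines common to both files:
#   column 1 = registered but no longer in use (remove or replace these),
#   column 2 = in use but not registered (add these).
comm -3 registered.txt actual.txt
```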
- VPC only: Canary health check traffic originates from one of the worker nodes of your cluster. For clusters with a public service endpoint, the traffic is directed to the public floating IP address of the VPC Load Balancer instance, so a Public Gateway must be attached to all worker subnets. For clusters with only a private service endpoint, the traffic is directed to the VPC subnet IP address of the VPC Load Balancer, so a Public Gateway is not required. For clusters with a public service endpoint:
  - Run the ibmcloud is public-gateways command to see your public gateways.
  - Run the ibmcloud is subnets command to see your subnets.
  - For every subnet, run the ibmcloud is subnet <subnet-id> command to check whether it has a public gateway.
  - If a subnet does not have a public gateway attached, attach one. For more information, see Creating public gateways.
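The per-subnet check can be scripted against saved command output. The sketch below inspects one JSON file per subnet; the file contents are hypothetical stand-ins for what ibmcloud is subnet <subnet-id> --output JSON returns:

```sh
# Hypothetical saved output of `ibmcloud is subnet <id> --output JSON`;
# a subnet with a gateway attached includes a "public_gateway" object.
cat > subnet-1.json <<'EOF'
{"name": "worker-subnet-1", "ipv4_cidr_block": "10.240.0.0/24"}
EOF
cat > subnet-2.json <<'EOF'
{"name": "worker-subnet-2", "public_gateway": {"name": "my-gateway"}}
EOF

# Flag any subnet file that lacks a public gateway.
for f in subnet-*.json; do
  if grep -q '"public_gateway"' "$f"; then
    echo "$f: public gateway attached"
  else
    echo "$f: NO public gateway - attach one"
  fi
done
```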
- If your VPC Load Balancers are located on a subnet other than the worker nodes of your cluster, you must update the Security Group attached to the VPC Load Balancer subnet to allow incoming traffic from the worker subnets.
- For more information, see Creating a Red Hat OpenShift cluster in your Virtual Private Cloud, Configuring VPC subnets, and Creating and managing VPC security groups.
- Ensure that no firewall rules block the canary traffic.
  - VPC: Canary traffic originates from one of the worker nodes, flows through a VPC Public Gateway, and arrives at the public side of the VPC Load Balancer instance. Configure your VPC Security Groups to allow this communication. For more information, see Controlling traffic with VPC security groups.
  - Classic: Canary traffic originates from the public IP address of one of the worker nodes and arrives at the public IP address of your classic load balancers. Configure your network policies to allow this communication. For more information, see Controlling traffic with network policies on classic clusters.
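For the VPC case, allowing the worker subnets through the load balancer's security group might look like the following. The security group ID and CIDR are placeholders, and you should confirm the exact syntax with ibmcloud is security-group-rule-add --help:

```sh
# Placeholder security group ID and worker subnet CIDR; substitute your own.
# Allows inbound HTTPS health check traffic from the worker subnet.
ibmcloud is security-group-rule-add <security-group-id> inbound tcp \
  --port-min 443 --port-max 443 --remote 10.240.0.0/24
```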
Next steps
- Wait 30 minutes, then run the oc get clusteroperator ingress command and check the MESSAGE column again.
- If you see a different error message, repeat the troubleshooting steps.
- If the issue persists, open a support case. In the case details, be sure to include any relevant log files, error messages, or command outputs.