IBM Cloud Docs
Changing service endpoints or VLAN connections in Red Hat OpenShift 3.11

After you initially set up your network when you create a cluster, you can change the service endpoints that your cluster master is accessible through or change the VLAN connections for your worker nodes.

The content on this page is specific to classic clusters only. For information about VPC clusters, see VPC cluster networking.

The content on this page is specific to classic clusters that run Red Hat OpenShift 3.11 only. In clusters that run Red Hat OpenShift 3.11, you must enable the public cloud service endpoint during cluster creation, and you can't disable it later. You can enable the private cloud service endpoint later. In clusters that run version 4, you choose either the public cloud service endpoint only or both the public and private cloud service endpoints during cluster creation, and you can't change the cloud service endpoints later.

Setting up the private cloud service endpoint

Enable the private cloud service endpoint for your cluster.

The private cloud service endpoint makes your Kubernetes master privately accessible. Your worker nodes and your authorized cluster users can communicate with the Kubernetes master over the private network. To determine whether you can enable the private cloud service endpoint, see Worker-to-master and user-to-master communication. Note that you can't disable the private cloud service endpoint after you enable it.

  1. Enable VRF in your IBM Cloud infrastructure account. To check whether a VRF is already enabled, use the ibmcloud account show command.
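
    ibmcloud account show

    The output shows whether VRF is enabled for your account.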

  2. Enable your IBM Cloud account to use service endpoints.
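
    If service endpoints are not yet enabled for your account, an account administrator can typically enable them with the following command.

    ibmcloud account update --service-endpoint-enable true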

  3. Enable the private cloud service endpoint.

    ibmcloud oc cluster master private-service-endpoint enable --cluster <cluster_name_or_ID>
    
  4. Refresh the Kubernetes master API server to use the private cloud service endpoint. You can follow the prompt in the CLI, or manually run the following command. It might take several minutes for the master to refresh.

    ibmcloud oc cluster master refresh --cluster <cluster_name_or_ID>
    
  5. Create a ConfigMap to control the maximum number of worker nodes that can be unavailable at a time in your cluster. When you update your worker nodes, the ConfigMap helps prevent downtime for your apps because the apps are rescheduled in an orderly way onto available worker nodes.

  6. Update all the worker nodes in your cluster to pick up the private cloud service endpoint configuration.

    When you issue the update command, the worker nodes are reloaded to pick up the service endpoint configuration. If no worker update is available, you must reload the worker nodes manually. If you reload, be sure to cordon, drain, and manage the order of the reloads to control the maximum number of worker nodes that are unavailable at a time, as in the example after the update command.

    ibmcloud oc worker update --cluster <cluster_name_or_ID> --worker <worker1,worker2>
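
    For example, if you reload worker nodes manually, you can limit how many worker nodes are unavailable at a time by cordoning and draining one worker node (or a small batch), reloading it, and waiting for it to return to a Ready state before you continue with the next one. A minimal sketch with the OpenShift CLI, where <worker_node_name> and <worker_ID> are placeholders:

    oc adm cordon <worker_node_name>
    oc adm drain <worker_node_name> --ignore-daemonsets --delete-local-data
    ibmcloud oc worker reload --cluster <cluster_name_or_ID> --worker <worker_ID>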
    
  7. If the cluster is in an environment behind a firewall, make sure that your firewall allows traffic to the private cloud service endpoint so that your worker nodes and authorized cluster users can reach the cluster master.

Setting up the public cloud service endpoint

Enable the public cloud service endpoint for your cluster.

Your cluster must have a public cloud service endpoint on classic infrastructure. For a cluster with only a private cloud service endpoint, create a VPC cluster instead.

The public cloud service endpoint makes your Kubernetes master publicly accessible. Your worker nodes and your authorized cluster users can securely communicate with the Kubernetes master over the public network. For more information, see Worker-to-master and user-to-master communication.

Steps to enable the public cloud service endpoint

If the public cloud service endpoint is not enabled for your cluster, you can enable it by completing the following steps.

  1. Enable the public cloud service endpoint.
    ibmcloud oc cluster master public-service-endpoint enable --cluster <cluster_name_or_ID>
    
  2. Refresh the Kubernetes master API server to use the public cloud service endpoint. You can follow the prompt in the CLI, or manually run the following command. It might take several minutes for the master to refresh.
    ibmcloud oc cluster master refresh --cluster <cluster_name_or_ID>
    
  3. Create a ConfigMap to control the maximum number of worker nodes that can be unavailable at a time in your cluster. When you update your worker nodes, the ConfigMap helps prevent downtime for your apps because the apps are rescheduled in an orderly way onto available worker nodes.
  4. Update all the worker nodes in your cluster to pick up the public cloud service endpoint configuration.
    ibmcloud oc worker update --cluster <cluster_name_or_ID> --worker <worker1,worker2>
    
    When you issue the update command, the worker nodes are reloaded to pick up the service endpoint configuration. If no worker update is available, you must reload the worker nodes manually with the ibmcloud oc worker reload command. If you reload, be sure to cordon, drain, and manage the order of the reloads to control the maximum number of worker nodes that are unavailable at a time, as in the cordon and drain example in the previous section.

Changing your worker node VLAN connections

When you create a cluster, you choose the public and private VLANs to connect your worker nodes to. Your worker nodes are part of worker pools, which store networking metadata that includes the VLANs to use for provisioning future worker nodes in the pool. You might want to change your cluster's VLAN connectivity later, for example if the worker pool VLANs in a zone run out of capacity and you need to provision a new VLAN for your worker nodes to use.

Trying to change the service endpoint for master-worker communication instead? Check out the topics to set up the private service endpoint.

Access your Red Hat OpenShift cluster.

To change the VLANs that a worker pool uses to provision worker nodes, complete the following steps.

  1. List the names of the worker pools in your cluster.

    ibmcloud oc worker-pool ls --cluster <cluster_name_or_ID>
    
  2. Determine the zones for one of the worker pools. In the output, look for the Zones field.

    ibmcloud oc worker-pool get --cluster <cluster_name_or_ID> --worker-pool <pool_name>
    
  3. For each zone that you found in the previous step, get an available public VLAN and private VLAN that are compatible with each other.

    1. List the available VLANs in the zone. In the output, the Type column shows whether each VLAN is public or private.
      ibmcloud oc vlan ls --zone <zone>
      
    2. Check that the public and private VLANs in the zone are compatible. To be compatible, the routers must have the same pod ID. In the following example output, the router pod IDs match: 01a and 01a. If one pod ID were 01a and the other 02a, you could not set those public and private VLAN IDs for your worker pool.
      ID        Name   Number   Type      Router         Supports Virtual Workers
      229xxxx          1234     private   bcr01a.dal12   true
      229xxxx          5678     public    fcr01a.dal12   true
      
    3. If you need to order a new public or private VLAN for the zone, you can order one in the IBM Cloud console, or use the following command. Remember that the VLANs must be compatible, with matching router pod IDs as in the previous step. If you create a pair of new public and private VLANs, they must be compatible with each other.
      ibmcloud sl vlan create -t [public|private] -d <zone> -r <compatible_router>
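
      For example, to order a private VLAN behind the bcr01a.dal12 router from the example output in the previous step (values are illustrative):
      ibmcloud sl vlan create -t private -d dal12 -r bcr01a.dal12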
      
    4. Note the IDs of the compatible VLANs.
  4. Set up a worker pool with the new VLAN network metadata for each zone. You can create a new worker pool, or modify an existing worker pool.

    • Create a worker pool: See adding worker nodes by creating a new worker pool.

    • Modify an existing worker pool: Set the worker pool's network metadata to use the VLANs for each zone. Worker nodes that were already created in the pool continue to use the previous VLANs, but new worker nodes in the pool use the new VLAN metadata that you set.

      ibmcloud oc zone network-set --zone <zone> --cluster <cluster_name_or_ID> --worker-pool <pool_name> --private-vlan <private_vlan_id> --public-vlan <public_vlan_id>
      
  5. Add worker nodes to the worker pool by resizing the pool.

    ibmcloud oc worker-pool resize --cluster <cluster_name_or_ID> --worker-pool <pool_name> --size-per-zone <number_of_workers_per_zone>
    

    If you plan to remove the worker nodes that use the previous network metadata, double the number of workers per zone so that replacement worker nodes are created on the new VLANs. Later in these steps, you can cordon, drain, and remove the previous worker nodes.
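
    For example, to double a worker pool named default (an illustrative name) from 3 to 6 worker nodes per zone:

    ibmcloud oc worker-pool resize --cluster <cluster_name_or_ID> --worker-pool default --size-per-zone 6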

  6. Verify that the new worker nodes are created. In the output, look for worker nodes that have both Public IP and Private IP addresses.

    ibmcloud oc worker ls --cluster <cluster_name_or_ID> --worker-pool <pool_name>
    
  7. Optional: Remove the worker nodes with the previous network metadata from the worker pool.

    1. In the output of the previous step, note the IDs of the worker nodes that you want to remove from the worker pool.

    2. Remove the worker node.

      ibmcloud oc worker rm --cluster <cluster_name_or_ID> --worker <worker_name_or_ID>
      
    3. Verify that the worker node is removed.

      ibmcloud oc worker ls --cluster <cluster_name_or_ID> --worker-pool <pool_name>
      
    4. Rebalance the worker pool.

      ibmcloud oc worker-pool rebalance --cluster <cluster_name_or_ID> --worker-pool <pool_name>
      

      For Satellite clusters, do not use the ibmcloud oc worker-pool rebalance command if you have manually assigned worker nodes to your worker pool. Rebalancing a pool with manually assigned worker nodes might remove more than the expected number of worker nodes.

  8. Optional: You can repeat steps 2 - 7 for each worker pool in your cluster. After you complete these steps, all worker nodes in your cluster are set up with the new VLANs.

  9. Move networking services to the new VLANs. The networking services in your cluster are still bound to the old VLANs because their IP addresses come from subnets on those VLANs.

  10. Optional: If you no longer need the subnets on the old VLANs, you can remove them.