Isolating Classic NLBs to edge worker nodes
In the following steps, you add the dedicated=edge label to worker nodes on each public or private VLAN in your cluster. This label is used to deploy your network load balancers (NLBs) to those worker nodes only. Both public and private NLBs can be deployed to edge worker nodes.
If you plan to use an existing worker pool, the pool must span all zones in your cluster and have at least two worker nodes per zone. You can label the worker pool with dedicated=edge by using the ibmcloud ks worker-pool label set command.
Before you begin
- Ensure that you have the following IBM Cloud IAM roles:
  - Any platform access role for the cluster
  - Writer or Manager service access role for all namespaces
- Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster.
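  For example, the following setup commands assume a resource group named mygroup and a cluster named mycluster (placeholder names; substitute your own):
    ibmcloud login
    ibmcloud target -g mygroup
    ibmcloud ks cluster config --cluster mycluster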
1. Create a worker pool with the label dedicated=edge, or add the label to one of your existing worker pools. Filled-in examples of both commands follow this list.
   - To create a worker pool, use the worker-pool create classic command.
     ibmcloud oc worker-pool create classic --name POOL_NAME --cluster CLUSTER --flavor FLAVOR --size-per-zone WORKERS_PER_ZONE --hardware ISOLATION --label dedicated=edge
   - To label an existing worker pool, use the worker-pool label set command.
     ibmcloud oc worker-pool label set --cluster CLUSTER --worker-pool POOL --label dedicated=edge
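   As a filled-in sketch, assuming a pool named edge-pool, a cluster named mycluster, the b3c.4x16 flavor, and shared hardware (all placeholder values), the two commands might look like this:
     ibmcloud oc worker-pool create classic --name edge-pool --cluster mycluster --flavor b3c.4x16 --size-per-zone 2 --hardware shared --label dedicated=edge
     ibmcloud oc worker-pool label set --cluster mycluster --worker-pool edge-pool --label dedicated=edge
   The --size-per-zone 2 value satisfies the requirement of at least two worker nodes per zone.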
2. Verify that the worker pool and worker nodes have the dedicated=edge label. A quicker node-level check follows this list.
   - To check the worker pool, run the get command.
     ibmcloud ks worker-pool get --cluster <cluster_name_or_ID> --worker-pool <worker_pool_name_or_ID>
   - To check individual worker nodes, review the Labels field in the output of the following command.
     kubectl describe node <worker_node_private_IP>
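   As a quicker alternative for the node-level check, kubectl can print the label as a column for every node at once; worker nodes in the edge pool show edge in the DEDICATED column:
     kubectl get nodes -L dedicated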
3. Retrieve all existing NLBs in the cluster. In the output, note the namespace and name of each load balancer.
   kubectl get services --all-namespaces | grep LoadBalancer
   Example output
   kube-system   private-crc81nk5l10gfhdql4i3qg-nlb1   LoadBalancer   172.21.233.160   10.216.23.123   80:31345/TCP,443:32630/TCP   8d
   kube-system   public-crc81nk5l10gfhdql4i3qg-nlb1    LoadBalancer   172.21.190.18    169.46.17.2     80:31345/TCP,443:32630/TCP   8d
4. Using the output from the previous step, run the following command for each NLB. This command redeploys the NLB to an edge worker node. To re-apply every NLB at once, see the sketch after the example output.
   kubectl get service -n <namespace> <name> -o yaml | kubectl apply -f -
   Example output
   service "private-crc81nk5l10gfhdql4i3qg-nlb1" configured
   service "public-crc81nk5l10gfhdql4i3qg-nlb1" configured
5. To verify that networking workloads are restricted to edge nodes, confirm that the load balancers are scheduled onto the edge nodes and are not scheduled onto non-edge nodes. A sketch that checks every NLB IP address in one pass follows this list.
   NLB pods
   - Confirm that the NLB pods are deployed to edge nodes. Search for the external IP address of the load balancer service that is listed in the output of step 3, replacing the periods (.) with hyphens (-). In the following example, the public-crc81nk5l10gfhdql4i3qg-nlb1 NLB has an external IP address of 169.46.17.2.
     kubectl describe nodes -l dedicated=edge | grep "169-46-17-2"
     Example output
     ibm-system   ibm-cloud-provider-ip-169-46-17-2-76fcb4965d-wz6dg   5m (0%)   0 (0%)   10Mi (0%)   0 (0%)
     ibm-system   ibm-cloud-provider-ip-169-46-17-2-76fcb4965d-2z64r   5m (0%)   0 (0%)   10Mi (0%)   0 (0%)
   - Confirm that no NLB pods are deployed to non-edge nodes. Example for the public-crc81nk5l10gfhdql4i3qg-nlb1 NLB that has an external IP address of 169.46.17.2:
     kubectl describe nodes -l dedicated!=edge | grep "169-46-17-2"
     - If the NLB pods are correctly deployed to edge nodes, no NLB pods are returned. Your NLBs are successfully rescheduled onto only edge worker nodes.
     - If NLB pods are returned, continue to the next step.
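   To check all NLBs in one pass rather than one IP address at a time, a loop like the following can help. This is a sketch, not part of the official procedure; it assumes that each NLB exposes a single external IP address:
     # Flag any NLB IP address that still appears on a non-edge node.
     for ip in $(kubectl get services --all-namespaces -o jsonpath='{.items[*].status.loadBalancer.ingress[*].ip}'); do
       dashed_ip=$(echo "$ip" | tr '.' '-')
       if kubectl describe nodes -l dedicated!=edge | grep -q "$dashed_ip"; then
         echo "WARNING: an NLB pod for $ip is still on a non-edge node"
       fi
     done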
6. If NLB pods are still deployed to non-edge nodes, you can delete the pods so that they redeploy to edge nodes. Delete only one pod at a time, and verify that the pod is rescheduled onto an edge node before you delete other pods.
   1. Delete a pod. For example, if one of the public-crc81nk5l10gfhdql4i3qg-nlb1 NLB pods did not schedule to an edge node, you might use this deletion command:
      kubectl delete pod ibm-cloud-provider-ip-169-46-17-2-76fcb4965d-wz6dg -n ibm-system
   2. Verify that the pod is rescheduled onto an edge worker node. Rescheduling is automatic, but might take a few minutes. Example for the public-crc81nk5l10gfhdql4i3qg-nlb1 NLB that has an external IP address of 169.46.17.2:
      kubectl describe nodes -l dedicated=edge | grep "169-46-17-2"
      Example output
      ibm-system   ibm-cloud-provider-ip-169-46-17-2-76fcb4965d-wz6dg   5m (0%)   0 (0%)   10Mi (0%)   0 (0%)
      ibm-system   ibm-cloud-provider-ip-169-46-17-2-76fcb4965d-2z64r   5m (0%)   0 (0%)   10Mi (0%)   0 (0%)
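      You can also confirm the rescheduling from the pod side. With -o wide, kubectl shows a NODE column that you can compare against the private IP addresses of your edge nodes; the pod name prefix matches the NLB's dashed IP address:
        kubectl get pods -n ibm-system -o wide | grep ibm-cloud-provider-ip-169-46-17-2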
You labeled worker nodes in a worker pool with dedicated=edge and redeployed all the existing NLBs to the edge nodes. All subsequent NLBs that are added to the cluster are also deployed to an edge node in your edge worker pool. Next, prevent other workloads from running on edge worker nodes and block inbound traffic to NodePorts on worker nodes.
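As a starting point for the first of those follow-up tasks, edge nodes are commonly tainted so that only pods with a matching toleration, such as the NLB pods, are scheduled there. The following is a minimal sketch, assuming you want both the NoSchedule and NoExecute effects; verify the taints against the tolerations of any workloads that you do want on these nodes:
  kubectl taint node -l dedicated=edge dedicated=edge:NoSchedule dedicated=edge:NoExecute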