Exposing apps in Satellite clusters
Securely expose apps that run in your Satellite cluster to requests from the public network, from resources that are connected to your hosts' private network, or from resources in IBM Cloud.
You have several options for exposing apps in Satellite clusters:
- MetalLB: A `LoadBalancer` implementation suitable for on-premises Satellite clusters.
- Red Hat OpenShift routes: Quickly expose apps to requests from the public or a private network with a hostname. The Red Hat OpenShift Ingress controller provides DNS registration and optional certificates for your routes.
- Third-party load balancer and Red Hat OpenShift routes: Expose apps with a hostname, and add health checking for the host IP addresses that are registered in the Ingress controller's DNS records.
- NodePorts: Expose non-HTTP(S) apps, such as UDP or TCP apps, with a NodePort in the 30000 - 32767 range.
- Red Hat OpenShift routes and Satellite Link endpoints: Expose your app with a private route, and create a Link endpoint of type `location` for the route. Only a resource that is connected to the IBM Cloud private network can access your app.
Setting up MetalLB
MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols. For more information, see About MetalLB and the MetalLB Operator in the Red Hat OpenShift documentation.
To install and configure MetalLB, follow the instructions under Installing the MetalLB Operator in the Red Hat OpenShift documentation. Before you begin, make sure that you have a dedicated subnet (`IPAddressPool`) for the external IP addresses of the `LoadBalancer` services. Check that the IP addresses in the `IPAddressPool` are not reserved or used for other purposes; otherwise, the load balancing function might fail.
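The following sketch shows what the address pool configuration might look like after the operator is installed, using layer 2 advertisement as an example (use a `BGPAdvertisement` instead if you peer MetalLB with your routers). The namespace `metallb-system`, the pool name `satellite-pool`, the example range `192.0.2.40-192.0.2.50`, and the app name `my-app` are assumptions for illustration only; replace them with values from your own environment and dedicated subnet.

```sh
# A minimal sketch, assuming the MetalLB Operator is installed and a MetalLB
# instance is running in the metallb-system namespace. The pool name, address
# range, and app name are placeholders.
oc apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: satellite-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.0.2.40-192.0.2.50        # dedicated range, not reserved or used elsewhere
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: satellite-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - satellite-pool               # announce addresses from the pool above
EOF

# Expose an app with a LoadBalancer service; MetalLB assigns an external IP
# from the pool.
oc expose deployment my-app --type=LoadBalancer --name=my-app-lb --port=8080
oc get svc my-app-lb              # EXTERNAL-IP comes from the IPAddressPool
```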
Exposing apps with Red Hat OpenShift routes
Quickly expose the services in your cluster on the Red Hat OpenShift Ingress controller's external IP address by using a route.
A Red Hat OpenShift route exposes a service as a hostname in the format `<service_name>-<project>.<cluster_name>-<random_hash>-0000.upi.containers.appdomain.cloud`.
An Ingress controller is deployed to your cluster by default, which enables routes to be used by external clients. The Ingress controller uses the service selector to find the service and the endpoints that back the service. You can configure the service selector to direct traffic through one route to multiple services. You can also create either unsecured or secured routes by using the TLS certificate that is assigned by the Ingress controller for your hostname. Note that the Ingress controller supports only the HTTP and HTTPS protocols.
Before you begin with routes, review the following considerations.
- Host network connectivity
- If the hosts for your cluster have public network connectivity, your cluster is created with a public Ingress controller by default. You can use this Ingress controller to create public routes for your app. If the hosts for your cluster have private network connectivity only, your cluster is created with a private Ingress controller by default. You can use this Ingress controller to create private routes for your app that are accessible only from within your hosts' private network. To set up public routes in clusters that have private network connectivity only, first set up your own third-party load balancer that has public network connectivity in front of your private Ingress controller before completing the following steps.
- Health checks
- DNS registration management is provided by default for your cluster's Ingress controller. For example, if you remove a host that was assigned to your cluster from your location and replace it with a different host, IBM updates the host IP addresses in your Ingress controller's DNS record for you. Note that while DNS registration for routes is provided for you, no load balancer services are deployed in front of the Ingress controller in your cluster. To health check the IP addresses of the hosts that are registered in the Ingress controller's DNS records, you can set up your own third-party load balancer in front of your Ingress controller before completing the following steps.
To create routes for your apps:
1. Create a Kubernetes `ClusterIP` service for your app deployment. The service provides an internal IP address for the app that the Ingress controller can send traffic to.

   oc expose deploy <app_deployment_name> --name my-app-svc
2. Set up a domain for your app.
   - IBM-provided domain: If you don't need to use a custom domain, a route hostname is generated for you in the format `<service_name>-<project>.<cluster_name>-<random_hash>-0000.upi.containers.appdomain.cloud`. Continue to the next step.
   - Custom domain: Work with your DNS provider to create a custom domain. Note that if you previously set up a third-party load balancer in front of your Ingress controller, work with your DNS provider to create a custom domain for the load balancer instead.
     1. Get the IP addresses for the Ingress controller service in the EXTERNAL-IP column.

        oc get svc router-external-default -n openshift-ingress

     2. Create the custom domain with your DNS provider. If you want to use the same subdomain for multiple services in your cluster, you can register a wildcard subdomain, such as `*.example.com`.
     3. Map your custom domain to the Ingress controller's IP addresses by adding the IP addresses as A records.
3. Set up a route that is based on the type of TLS termination that your app requires. If you don't have a custom domain, don't include the `--hostname` option so that a route hostname is generated for you. If you registered a wildcard subdomain, specify a unique subdomain in each route that you create. For example, you might specify `--hostname svc1.example.com` in this route, and `--hostname svc2.example.com` in another route. For a combined example, see the sketch after these steps.
   - Simple:

     oc expose service <app_service_name> [--hostname <subdomain>]

   - Passthrough:

     oc create route passthrough --service <app_service_name> [--hostname <subdomain>]

     Need to handle HTTP/2 connections? After you create the route, run `oc edit route <app_service_name>` and change the route's `targetPort` value to `https`. You can test the route by running `curl -I --http2 https://<route> --insecure`.
   - Edge: If you use a custom domain, include the `--hostname`, `--cert`, and `--key` options, and optionally the `--ca-cert` option. For more information about the TLS certificate requirements, see the Red Hat OpenShift edge route documentation.

     oc create route edge --service <app_service_name> [--hostname <subdomain> --cert <tls.crt> --key <tls.key> --ca-cert <ca.crt>]

   - Re-encrypt: If you use a custom domain, include the `--hostname`, `--cert`, and `--key` options, and optionally the `--ca-cert` option. For more information about the TLS certificate requirements, see the Red Hat OpenShift re-encrypt route documentation.

     oc create route reencrypt --service <app_service_name> --dest-ca-cert <destca.crt> [--hostname <subdomain> --cert <tls.crt> --key <tls.key> --ca-cert <ca.crt>]
4. Verify that the route for your app service is created.

   oc get routes
5. Optional: Customize default routing rules with optional configurations. For example, you can use route-specific HAProxy annotations.
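If you use a custom domain with edge termination, the full flow looks something like the following sketch. The deployment name `my-app`, service name `my-app-svc`, route name `my-app-route`, hostname `svc1.example.com`, and certificate file names are assumptions for illustration only; substitute your own values, and adjust the `oc create route` command to the termination type that your app needs.

```sh
# A minimal sketch, assuming a deployment named my-app, a custom domain
# svc1.example.com, and TLS files tls.crt/tls.key (all placeholders).
oc expose deploy my-app --name my-app-svc          # ClusterIP service for the app

# Edge-terminated route; omit --hostname to get a generated
# *.appdomain.cloud hostname instead of a custom domain.
oc create route edge my-app-route \
  --service my-app-svc \
  --hostname svc1.example.com \
  --cert tls.crt --key tls.key

# Confirm the route, then test it after DNS for svc1.example.com points at
# the Ingress controller (or at your third-party load balancer).
oc get route my-app-route
curl -I https://svc1.example.com
```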
Setting up a third-party load balancer in front of the Red Hat OpenShift Ingress controller
To health check the IP addresses of the hosts that are registered in the Ingress controller's DNS records, you can set up your own third-party load balancer in front of the IP addresses for the hosts that are assigned as worker nodes to your cluster.
For example, if you remove a host that was assigned to your cluster from your location and replace it with a different host, IBM updates the host IP addresses in your Ingress controller's DNS record for you. But if you power off a host, such as through your cloud provider's infrastructure management, the host's IP address is not removed from your Ingress controller's DNS records, and a call might fail if the DNS record resolves to that host's IP address. By setting up a load balancer in front of your Ingress controller, you can ensure that the host IP addresses are regularly health checked, which helps provide high availability for production-level workloads.
After you create a load balancer in front of your Ingress controller, you can use the Ingress controller to create routes for your app. When a request is sent to the route for your app, the request is first received by your load balancer before being forwarded to your Ingress controller, which then forwards the request to your app.
1. List the details of the default Ingress controller for your cluster. In the EXTERNAL-IP column of the output, get the worker node IP addresses that are registered for your cluster's Ingress controller. In the PORT(S) column of the output, depending on whether you want to create a public or private load balancer, get the node port that the Ingress controller service currently exposes for public or private network traffic.

   oc get svc router-external-default -n openshift-ingress

   In the following example output, node port 30783 is exposed for public traffic (port 80).

   NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP                     PORT(S)                      AGE
   router-external-default   LoadBalancer   172.21.84.172   169.xx.xxx.xxx,169.xx.xxx.xxx   80:30783/TCP,443:30413/TCP   24h
2. Using these IP addresses and the node port, create a layer 4 load balancer that is connected to your hosts' private network. For example, you might deploy a load balancer from your hosts' cloud provider, or deploy an F5 load balancer to your on-premises network. To create public routes, the load balancer must have public network connectivity and must be able to forward TCP and UDP traffic to the port for public traffic that you found in the previous step. To create private routes, the load balancer must be able to forward TCP and UDP traffic to the port for private traffic that you found in the previous step. To quickly verify that each host answers on the node port from your load balancer's network, see the sketch after these steps.
3. Get the hostname for your cluster. This subdomain, in the format `<cluster_name>-<random_hash>-0000.upi.containers.appdomain.cloud`, is registered with your cluster's Ingress controller.

   ibmcloud oc nlb-dns ls --cluster <cluster_name_or_ID>
4. Add the public IP addresses of your load balancer to your cluster's subdomain. Repeat this command for all public IP addresses that you want to add.

   ibmcloud oc nlb-dns add --ip <public_IP> --cluster <cluster_name_or_ID> --nlb-host <hostname>
5. Remove the worker node IP addresses from your cluster's subdomain. Repeat this command for all IP addresses that you retrieved earlier.

   ibmcloud oc nlb-dns rm classic --ip <private_IP> --cluster <cluster_name_or_ID> --nlb-host <hostname>
6. Verify that the public IP addresses for your load balancer are now registered with your cluster subdomain.

   ibmcloud oc nlb-dns ls --cluster <cluster_name_or_ID>
7. Continue with the steps in Exposing apps with Red Hat OpenShift routes to create routes for your apps.
If you configure an external load balancer or VIP to register with the subdomain rather than using the default registration, that load balancer needs inbound access to the cluster hosts, and the cluster hosts need outbound access to the load balancer.
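Before you register the load balancer's public IP addresses, you can quickly confirm from the load balancer's network that each host answers on the node port. This is a minimal sketch; the host IP `10.240.0.4` and node port `30783` are example values that stand in for the EXTERNAL-IP and PORT(S) values from your own `oc get svc router-external-default` output.

```sh
# A minimal reachability sketch, run from a machine on your load balancer's
# network. Example values only; use your own host IPs and node port.
HOST_IP=10.240.0.4      # one of the worker host IPs behind the Ingress controller
NODE_PORT=30783         # public node port from the router-external-default output

# Any HTTP status code (even 503 for an unmatched host header) means the
# Ingress controller on this host is answering; 000 means the connection failed.
curl -s -o /dev/null -w "HTTP %{http_code} from ${HOST_IP}:${NODE_PORT}\n" \
  "http://${HOST_IP}:${NODE_PORT}/"
```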
Exposing apps with NodePorts
If you can't use the Red Hat OpenShift Ingress controller to expose an app, such as if you must expose a TCP or UDP app, you can create a NodePort for your app.
1. Create a NodePort service for your app. A NodePort in the range 30000 - 32767 and an internal cluster IP address are assigned to your app.

   oc expose deployment <deployment_name> --type=NodePort --name=<nodeport_svc_name>
2. Get the NodePort that was assigned to your app.

   oc describe svc <nodeport_svc_name>
3. Get the hostname for your cluster in the format `<cluster_name>-<random_hash>-0000.upi.containers.appdomain.cloud`.

   ibmcloud oc nlb-dns ls --cluster <cluster_name_or_ID>
4. Access your app by using your cluster's subdomain and the NodePort in the format `<cluster_name>-<random_hash>-0000.upi.containers.appdomain.cloud:<nodeport>`. Note that if your hosts have private network connectivity only, you must be connected to the hosts' private network, such as through VPN access. For an end-to-end example, see the sketch after these steps.
5. Optional: If you don't want to access the NodePort directly, or if you must expose your apps on a specific port such as 443, you can set up your own third-party, layer 4 load balancer that is connected to your hosts' private network and forwards traffic to the NodePort. For example, you might deploy a load balancer from your hosts' cloud provider, or deploy an F5 load balancer to your on-premises network. The load balancer must be able to forward TCP and UDP traffic for ports 30000 - 32767.
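Put together, the NodePort flow looks something like the following sketch. The deployment name `my-tcp-app` and service name `my-tcp-svc` are placeholder assumptions, and the cluster subdomain placeholder matches the format shown in the steps.

```sh
# A minimal sketch, assuming a deployment named my-tcp-app already exists
# (placeholder name).
oc expose deployment my-tcp-app --type=NodePort --name=my-tcp-svc

# Read the assigned node port (30000 - 32767) from the service spec.
NODE_PORT=$(oc get svc my-tcp-svc -o jsonpath='{.spec.ports[0].nodePort}')
echo "App is exposed on node port ${NODE_PORT}"

# Look up your cluster subdomain (Hostname column), then reach the app at
# <subdomain>:<node port>. Replace the placeholders with your own values.
ibmcloud oc nlb-dns ls --cluster <cluster_name_or_ID>
curl "http://<cluster_name>-<random_hash>-0000.upi.containers.appdomain.cloud:${NODE_PORT}"
```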
Exposing apps with routes and Link endpoints for traffic from IBM Cloud
If you want to access an app in your Satellite cluster from a resource in IBM Cloud over the private network, you can use your private Ingress controller to create a private route for your app. Then, you can create a Link endpoint of type `location` for the route, which is accessible only from within the IBM Cloud private network.
1. Follow the steps in Exposing apps with Red Hat OpenShift routes to create a private route for your app. This route is accessible only from within your hosts' private network.
2. Follow the steps in Creating `location` endpoints to connect to resources in a location to create a Satellite Link endpoint for your app's private route. To find the destination values that the endpoint needs, see the sketch after these steps.
3. Optional: To allow access to the endpoint from only specific resources in IBM Cloud, add those resources to your endpoint's source list.
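To gather the destination values for the Link endpoint, you can read the private route's hostname directly from the route resource. This is a minimal sketch; the route name `my-app-route` is a placeholder assumption.

```sh
# A minimal sketch, assuming a private route named my-app-route was created in
# the current project (placeholder name).
oc get route my-app-route -o jsonpath='{.spec.host}{"\n"}'

# Use the printed hostname as the endpoint's destination host, with port 443
# for an edge, re-encrypt, or passthrough route, or port 80 for an insecure
# route, when you create the `location` endpoint for Satellite Link.
```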