Deploying apps in Red Hat OpenShift clusters
With Red Hat® OpenShift® on IBM Cloud® clusters, you can deploy apps from a remote file or repository such as GitHub with a single command. Also, your clusters come with various built-in services that you can use to help operate your cluster.
Moving your apps to Red Hat OpenShift
To create an app in your Red Hat OpenShift on IBM Cloud cluster, you can use the Red Hat OpenShift console or CLI.
Seeing errors when you deploy your app? Red Hat OpenShift has different default settings than community Kubernetes, such as stricter security context constraints. Review the common scenarios where you might need to modify your apps so that you can deploy them on Red Hat OpenShift clusters.
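For example, to find out which security context constraint admitted a running pod, you can read the pod's `openshift.io/scc` annotation. A minimal sketch, assuming a pod of your own (`<pod_name>` is a placeholder):
# List the security context constraints that are available in the cluster.
oc get scc
# Show which SCC admitted a specific pod. The annotation is set at admission time.
oc get pod <pod_name> -o jsonpath='{.metadata.annotations.openshift\.io/scc}'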
Deploying apps through the console
You can create apps through various methods in the Red Hat OpenShift console by using the Developer perspective. For more information, see the Red Hat OpenShift documentation.
- From the Red Hat OpenShift clusters console, select your cluster.
- Click Red Hat OpenShift web console.
- From the perspective switcher, select Developer. The Red Hat OpenShift web console switches to the Developer perspective, and the menu now offers items such as +Add, Topology, and Builds.
- Click +Add.
- In the Add pane menu bar, select the project that you want to create your app in from the Project drop-down list.
- Click the method that you want to use to add your app, and follow the instructions. For example, click From Git.
Deploying apps through the CLI
To create an app in your Red Hat OpenShift on IBM Cloud cluster, use the `oc new-app` command. For example, you might refer to a public GitHub repo, a public GitLab repo with a URL that ends in `.git`, or another local or remote repo. For more information, try out the tutorial and review the Red Hat OpenShift documentation.
oc new-app --name <app_name> https://github.com/<path_to_app_repo> [--context-dir=<subdirectory>]
- What does the `new-app` command do? The `new-app` command creates a build configuration and app image from the source code, a deployment configuration to deploy the container to pods in your cluster, and a service to expose the app within the cluster. For more information about the build process and other sources besides Git, see the Red Hat OpenShift documentation.
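For example, a minimal end-to-end sketch. The app name `my-app` is a placeholder, the repo path comes from the command above, and `oc expose` assumes that your app listens on the port that the generated service targets:
# Build and deploy the app from source. new-app creates the build config, image, deployment, and service.
oc new-app --name my-app https://github.com/<path_to_app_repo>
# Follow the build that new-app started.
oc logs -f buildconfig/my-app
# Create a route so that the app is reachable from outside the cluster.
oc expose service/my-app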
Deploying apps to specific worker nodes by using labels
When you deploy an app, the app pods indiscriminately deploy to various worker nodes in your cluster. Sometimes, you might want to restrict the worker nodes that the app pods deploy to. For example, you might want app pods to deploy to only worker nodes in a certain worker pool because those worker nodes are on bare metal machines. To designate the worker nodes that app pods must deploy to, add an affinity rule to your app deployment.
Before you begin
- Access your Red Hat OpenShift cluster.
- Optional: Set a label for the worker pool that you want to run the app on.
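If you choose to set a custom label, a hedged sketch (the `app=gpu-app` label is a placeholder; verify the flags with `ibmcloud oc worker-pool label set --help`):
# Set a custom label on all worker nodes in a worker pool.
ibmcloud oc worker-pool label set --cluster <cluster_name_or_ID> --worker-pool <worker_pool_name_or_ID> --label app=gpu-app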
To deploy apps to specific worker nodes, follow these steps.
- Get the ID of the worker pool that you want to deploy app pods to.
ibmcloud oc worker-pool ls --cluster <cluster_name_or_ID>
- List the worker nodes that are in the worker pool, and note one of the Private IP addresses.
ibmcloud oc worker ls --cluster <cluster_name_or_ID> --worker-pool <worker_pool_name_or_ID>
- Describe the worker node. In the Labels output, note the worker pool ID label, `ibm-cloud.kubernetes.io/worker-pool-id`. The steps in this topic use the worker pool ID to deploy app pods only to worker nodes within that worker pool. To deploy app pods to specific worker nodes by using a different label, note that label instead. For example, to deploy app pods only to worker nodes on a specific private VLAN, use the `privateVLAN=` label.
oc describe node <worker_node_private_IP>
Example output
Name:               10.xxx.xx.xxx
Roles:              <none>
Labels:             arch=amd64
                    beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=b3c.4x16.encrypted
                    beta.kubernetes.io/os=linux
                    failure-domain.beta.kubernetes.io/region=us-south
                    failure-domain.beta.kubernetes.io/zone=dal10
                    ibm-cloud.kubernetes.io/encrypted-docker-data=true
                    ibm-cloud.kubernetes.io/ha-worker=true
                    ibm-cloud.kubernetes.io/iaas-provider=softlayer
                    ibm-cloud.kubernetes.io/machine-type=b3c.4x16.encrypted
                    ibm-cloud.kubernetes.io/sgx-enabled=false
                    ibm-cloud.kubernetes.io/worker-pool-id=00a11aa1a11aa11a1111a1111aaa11aa-11a11a
                    ibm-cloud.kubernetes.io/worker-version=1.31_1534
                    kubernetes.io/hostname=10.xxx.xx.xxx
                    privateVLAN=1234567
                    publicVLAN=7654321
Annotations:        node.alpha.kubernetes.io/ttl=0
...
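Alternatively, to see the worker pool ID label for every worker node at once, you can print the label as an extra column. A quick sketch that uses the standard `-L` flag of `oc get`:
# Show each node with its worker pool ID label as a column.
oc get nodes -L ibm-cloud.kubernetes.io/worker-pool-id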
- Add an affinity rule for the worker pool ID label to the app deployment.
Example YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: with-node-affinity
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: ibm-cloud.kubernetes.io/worker-pool-id
                operator: In
                values:
                - <worker_pool_ID>
...
In the affinity section of the example YAML, `ibm-cloud.kubernetes.io/worker-pool-id` is the `key` and `<worker_pool_ID>` is the `value`.
- Apply the updated deployment configuration file.
oc apply -f with-node-affinity.yaml
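Optionally, you can watch the rollout to confirm that the rescheduled pods start successfully, assuming the deployment name from the example YAML:
# Wait until the deployment finishes rolling out with the new affinity rule.
oc rollout status deployment/with-node-affinity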
- Verify that the app pods deployed to the correct worker nodes.
  - List the pods in your cluster.
    oc get pods -o wide
    Example output
    NAME                    READY   STATUS    RESTARTS   AGE   IP               NODE
    cf-py-d7b7d94db-vp8pq   1/1     Running   0          15d   172.30.xxx.xxx   10.176.48.78
  - In the output, identify a pod for your app, and note the NODE private IP address of the worker node that the pod is on. In the previous example output, the app pod `cf-py-d7b7d94db-vp8pq` is on a worker node with the IP address 10.176.48.78.
  - List the worker nodes in the worker pool that you designated in your app deployment.
    ibmcloud oc worker ls --cluster <cluster_name_or_ID> --worker-pool <worker_pool_name_or_ID>
    Example output
    ID                                                 Public IP        Private IP     Machine Type   State    Status   Zone    Version
    kube-dal10-crb20b637238bb471f8b4b8b881bbb4962-w7   169.xx.xxx.xxx   10.176.48.78   b3c.4x16       normal   Ready    dal10   1.8.6_1504
    kube-dal10-crb20b637238bb471f8b4b8b881bbb4962-w8   169.xx.xxx.xxx   10.176.48.83   b3c.4x16       normal   Ready    dal10   1.8.6_1504
    kube-dal12-crb20b637238bb471f8b4b8b881bbb4962-w9   169.xx.xxx.xxx   10.176.48.69   b3c.4x16       normal   Ready    dal12   1.8.6_1504
    If you created an app affinity rule based on another factor, get that value instead. For example, to verify that the app pod deployed to a worker node on a specific VLAN, view the VLAN that the worker node is on by running `ibmcloud oc worker get --cluster <cluster_name_or_ID> --worker <worker_ID>`.
  - In the output, verify that the worker node with the private IP address that you identified in the previous step is deployed in this worker pool.
Deploying an app on a GPU machine
If you have a GPU machine type, you can speed up the processing time for compute-intensive workloads such as AI, machine learning, and inferencing.
In the following steps, you learn how to deploy workloads that require the GPU. However, you can also deploy apps that don't need the GPU and instead process their workloads on the CPU.
You can also try mathematically intensive workloads such as the TensorFlow machine learning framework with this Kubernetes demo.
Before you begin
- Create a cluster or worker pool that uses a GPU flavor. Keep in mind that setting up a bare metal machine can take more than one business day to complete. For a list of available flavors, see the following links.
- Make sure that you are assigned a service access role that grants the appropriate Kubernetes RBAC role so that you can work with Kubernetes resources in the cluster.
Private-only cluster limitations: If your cluster doesn't have public network connectivity, you must allow public network connectivity or mirror images from external registries and image streams to `icr.io`. In the following example, the GPU app uses the OLM marketplace with image streams. This example does not work if your cluster can't access the NVIDIA registry (`nvcr.io`).
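If you choose to mirror the sample image instead, a hedged sketch with `oc image mirror`. It assumes that you run the command from a machine that can reach both registries, that you are logged in to IBM Cloud Container Registry, and that `us.icr.io/<my_namespace>` is a placeholder for your own registry region and namespace:
# Mirror the CUDA sample image from the NVIDIA registry to IBM Cloud Container Registry.
oc image mirror nvcr.io/nvidia/k8s/cuda-sample:devicequery-cuda11.7.1-ubuntu20.04 us.icr.io/<my_namespace>/cuda-sample:devicequery-cuda11.7.1-ubuntu20.04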
- Install the Node Feature Discovery and NVIDIA GPU operators for your cluster version.
You must use NVIDIA GPU operator version 1.3.1 or later. When you install the Node Feature Discovery operator, select the update channel that matches your Red Hat OpenShift cluster version. Don't install the operators through another method, such as a Helm chart.
If you experience issues when you install the Node Feature Discovery Operator or the NVIDIA GPU Operator, contact NVIDIA support for help or open an issue in the NVIDIA GPU Operator repo.
Deploying a workload
- Create a YAML file. In this example, a `Job` YAML manages batch-like workloads by making a short-lived pod that runs until the command completes and successfully terminates. For GPU workloads, you must specify the `resources: limits: nvidia.com/gpu` field in the job YAML.
apiVersion: batch/v1
kind: Job
metadata:
  name: nvidia-devicequery
  labels:
    name: nvidia-devicequery
spec:
  template:
    metadata:
      labels:
        name: nvidia-devicequery
    spec:
      containers:
      - name: nvidia-devicequery
        image: nvcr.io/nvidia/k8s/cuda-sample:devicequery-cuda11.7.1-ubuntu20.04
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            nvidia.com/gpu: 2
      restartPolicy: Never
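Before you choose the limit, you can check how many GPUs each worker node actually advertises as allocatable. A short sketch that uses standard `oc` output formatting; the `nvidia.com/gpu` resource appears only after the NVIDIA GPU operator is installed:
# List each node with its allocatable GPU count.
oc get nodes -o custom-columns='NAME:.metadata.name,GPUS:.status.allocatable.nvidia\.com/gpu'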
Understanding your YAML components
- Metadata and label names: Enter a name and a label for the job, and use the same name in both the file's metadata and the spec template metadata. For example, `nvidia-devicequery`.
- `containers.image`: Provide the image that the container is a running instance of. In this example, the value is set to use the NVIDIA CUDA device query image: `nvcr.io/nvidia/k8s/cuda-sample:devicequery-cuda11.7.1-ubuntu20.04`.
- `containers.imagePullPolicy`: To pull a new image only if the image is not currently on the worker node, specify `IfNotPresent`.
- `resources.limits`: For GPU machines, you must specify the resource limit. The Kubernetes device plug-in sets the default resource request to match the limit. You must specify the key as `nvidia.com/gpu`, and enter the whole number of GPUs that you request, such as `2`. Note that container pods don't share GPUs and GPUs can't be overcommitted. For example, if you have only 1 `mg1c.16x128` machine, then you have only 2 GPUs in that machine and can specify a maximum of `2`.
- Apply the YAML file. For example:
oc apply -f nvidia-devicequery.yaml
- Check the job pod by filtering your pods by the `nvidia-devicequery` label. Verify that the STATUS is Completed.
oc get pod -A -l 'name in (nvidia-devicequery)'
Example output
NAME                       READY   STATUS      RESTARTS   AGE
nvidia-devicequery-ppkd4   0/1     Completed   0          36s
- Describe the pod to see how the GPU device plug-in scheduled the pod.
  - In the `Limits` and `Requests` fields, see that the resource limit that you specified matches the request that the device plug-in automatically set.
  - In the events, verify that the pod is assigned to your GPU worker node.
  oc describe pod nvidia-devicequery-ppkd4
  Example output
  Name:           nvidia-devicequery-ppkd4
  Namespace:      default
  ...
  Limits:
    nvidia.com/gpu:  1
  Requests:
    nvidia.com/gpu:  1
  ...
  Events:
    Type    Reason     Age   From               Message
    ----    ------     ----  ----               -------
    Normal  Scheduled  1m    default-scheduler  Successfully assigned nvidia-devicequery-ppkd4 to 10.xxx.xx.xxx
  ...
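You can also confirm the node assignment at a glance with the wide output, using the pod name from the example:
# The NODE column shows the worker node that the scheduler assigned the pod to.
oc get pod nvidia-devicequery-ppkd4 -o wide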
- To verify that the job used the GPU to compute its workload, you can check the logs.
oc logs nvidia-devicequery-ppkd4
Example output
/cuda-samples/sample Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "Tesla P100-PCIE-16GB"
  CUDA Driver Version / Runtime Version          11.4 / 11.7
  CUDA Capability Major/Minor version number:    6.0
  Total amount of global memory:                 16281 MBytes (17071734784 bytes)
  (056) Multiprocessors, (064) CUDA Cores/MP:    3584 CUDA Cores
  GPU Max Clock rate:                            1329 MHz (1.33 GHz)
  Memory Clock rate:                             715 Mhz
  Memory Bus Width:                              4096-bit
  L2 Cache Size:                                 4194304 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        65536 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
  Max dimension size of a grid size (x,y,z):     (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 175 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.4, CUDA Runtime Version = 11.7, NumDevs = 1
Result = PASS
In this example, you can see that one GPU was used to execute the job because one GPU was scheduled on the worker node. If the limit is set to 2, 2 GPUs are shown.
Now that you deployed a test GPU workload, you might want to set up your cluster to run a tool that relies on GPU processing, such as IBM Maximo Visual Inspection.