Deploying OpenShift Data Foundation on VPC clusters
OpenShift Data Foundation is a highly available storage solution that you can use to manage persistent storage for your containerized workloads in Red Hat® OpenShift® on IBM Cloud® clusters.
Installing OpenShift Data Foundation from OperatorHub is not supported on IBM Cloud clusters. To install ODF, complete the following steps to deploy the cluster add-on.
- Minimum required permissions: Administrator platform access role and Manager service access role for the cluster in IBM Cloud Kubernetes Service.
Prerequisites
Review the following prerequisites.
- Install or update the CLI.
- Create a VPC cluster with at least 3 worker nodes. For high availability, create a cluster with at least one worker node per zone across three zones. Each worker node must have a minimum of 16 CPUs and 64 GB RAM. Make sure that each of your subnets has a public gateway attached.
  You can deploy OpenShift Data Foundation on 3 worker nodes with 16 CPUs and 32 GB RAM, but you must taint those worker nodes so that they run only ODF pods (see the example command after this list). You can't run any additional app workloads or system pods on your ODF nodes when you use this setup.
- Cluster versions 4.15 and later: Your cluster must have public internet access.
- Disable outbound traffic protection in your cluster.
ibmcloud oc vpc outbound-traffic-protection disable --cluster CLUSTER
- Edit the OperatorHub resource and change disableAllDefaultSources to false.
  oc edit operatorhub cluster -n openshift-marketplace
  disableAllDefaultSources: "false"
- Make sure that the pods in the openshift-marketplace project are running before continuing.
  oc get po -n openshift-marketplace
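If you use the minimum 16 CPU and 32 GB RAM configuration described earlier in this list, the following is a minimal sketch of how you might dedicate a worker pool to ODF by tainting it from the CLI. The worker pool name (odf-pool) is a placeholder, and the taint key shown is the one commonly tolerated by ODF storage components; confirm the taint and toleration behavior for your ODF version before you apply it.

# Taint every worker node in the pool so that only pods that tolerate the taint (such as ODF pods) are scheduled on it
ibmcloud oc worker-pool taint set --cluster CLUSTER --worker-pool odf-pool --taint node.ocs.openshift.io/storage=true:NoSchedule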
Optional: Setting up an IBM Cloud Object Storage service instance
Complete the following steps to create an IBM Cloud Object Storage instance which you can use as the default backing store in your ODF deployment. If you don't want to set up IBM Cloud Object Storage, you can skip this step and install the add-on.
If you want to set up IBM Cloud Object Storage as the default backing store in your storage cluster, create an instance of IBM Cloud Object Storage. Then, create a set of HMAC credentials and a Kubernetes secret that uses your Object Storage HMAC credentials. If you don't specify IBM Cloud Object Storage credentials during installation, then the default backing store in your storage cluster is created by using the PVs in your cluster. You can set up additional backing stores after deploying ODF, but you can't change the default backing store.
Access your Red Hat OpenShift cluster.
- Create an openshift-storage namespace in your cluster. The driver pods are deployed to this namespace. Copy the following YAML and save it as os-namespace.yaml on your local machine.
  apiVersion: v1
  kind: Namespace
  metadata:
    labels:
      openshift.io/cluster-monitoring: "true"
    name: openshift-storage
- Create the openshift-storage namespace by using the YAML file that you saved.
  oc create -f os-namespace.yaml
- Verify that the namespace is created.
oc get namespaces | grep storage
- Create an IBM Cloud Object Storage service instance.
ibmcloud resource service-instance-create noobaa-store cloud-object-storage standard global
- Create HMAC credentials. Make a note of your credentials.
ibmcloud resource service-key-create cos-cred-rw Writer --instance-name noobaa-store --parameters '{"HMAC": true}'
- Create the Kubernetes secret named ibm-cloud-cos-creds in the openshift-storage namespace that uses your Object Storage HMAC credentials. When you run the command, specify your Object Storage HMAC access key ID and secret access key. Note that your secret must be named ibm-cloud-cos-creds.
  oc -n 'openshift-storage' create secret generic 'ibm-cloud-cos-creds' --type=Opaque --from-literal=IBM_COS_ACCESS_KEY_ID=<access_key_id> --from-literal=IBM_COS_SECRET_ACCESS_KEY=<secret_access_key>
- Verify that your secret is created.
oc get secrets -A | grep cos
Optional: Setting up encryption by using Hyper Protect Crypto Services or Key Protect
If you want to set up encryption, create an instance of Hyper Protect Crypto Services or Key Protect. Then, create a root key, and a Kubernetes secret that uses your Hyper Protect Crypto Services or Key Protect credentials.
- Your API key for Hyper Protect Crypto Services or Key Protect must have the following minimum required permissions:
  - Reader
  - Reader Plus
- If you are using cluster wide encryption and storage class encryption, your API key must have the following required permissions:
  - Reader
  - Reader Plus
  - Writer
- Create a Hyper Protect Crypto Services or Key Protect service instance (see the example commands after this list).
- Create a root key.
- After creating your instance and root key, make a note of your Hyper Protect Crypto Services or Key Protect instance name, instance ID, root key ID, and public endpoint.
- Create a service ID, API key, and access policy that allows access to either Hyper Protect Crypto Services and Red Hat OpenShift on IBM Cloud, or Key Protect and Red Hat OpenShift on IBM Cloud. Make a note of the API key that you create.
- Private clusters: Create a virtual private endpoint gateway that allows access to your KMS instance. Make sure to bind at least 1 IP address from each subnet in your VPC to the VPE.
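The following commands are a minimal sketch of this setup for Key Protect, assuming the Key Protect CLI plug-in is installed. The instance name (odf-kms), key name (odf-root-key), service ID name (odf-kms-access), and region are placeholders, and the policy shown grants only the KMS roles listed above; grant the Red Hat OpenShift on IBM Cloud portion of the access policy as described in the previous step. If you use Hyper Protect Crypto Services, create the instance and root key in that service instead.

# Create a Key Protect instance and a root key (record the instance ID, root key ID, and public endpoint)
ibmcloud resource service-instance-create odf-kms kms tiered-pricing us-south
ibmcloud kp key create odf-root-key --instance-id <kms_instance_id>

# Create a service ID with an API key, then grant the service ID access to the Key Protect instance
ibmcloud iam service-id-create odf-kms-access
ibmcloud iam service-api-key-create odf-kms-api-key odf-kms-access
ibmcloud iam service-policy-create odf-kms-access --roles "Reader,Reader Plus" --service-name kms --service-instance <kms_instance_id>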
Access your Red Hat OpenShift cluster.
- List your namespaces to determine whether you have an openshift-storage namespace. If you don't have an openshift-storage namespace, create it.
  oc get namespaces | grep openshift-storage
- Create an openshift-storage namespace in your cluster. The driver pods are deployed to this namespace. Copy the following YAML and save it as os-namespace.yaml on your local machine.
  apiVersion: v1
  kind: Namespace
  metadata:
    labels:
      openshift.io/cluster-monitoring: "true"
    name: openshift-storage
- Create the openshift-storage namespace by using the YAML file that you saved.
  oc create -f os-namespace.yaml
- Verify that the namespace is created.
oc get namespaces | grep storage
- Encode both the ID of your root key and the API key of the service ID that you created to base64.
printf "ROOT-KEY-ID" | base64
printf "SERVICE-ID-API-KEY" | base64
- Create the Kubernetes secret in the openshift-storage namespace that uses your Hyper Protect Crypto Services credentials.
  - Save the following secret as a YAML file called ibm-hpcs-secret.yaml.
    apiVersion: v1
    data:
      IBM_KP_CUSTOMER_ROOT_KEY: AaAAAaZAAAAy11AAAyAAkaAaQtAAk0AAA2AzY5AjYaaa67aa # your base64 encoded root key ID
      IBM_KP_SERVICE_API_KEY: AAAaaajAAAAAncmAAaaaaAAAAdAAId1AtVjBJRU1aAAaAeTh1aEw=AaaaA # your base64 encoded API key
    kind: Secret
    metadata:
      name: ibm-hpcs-secret
      namespace: openshift-storage
    type: Opaque
- Create the secret in your cluster.
oc apply -f ibm-hpcs-secret.yaml
- Verify that your secret is created.
oc get secrets -A | grep ibm-hpcs-secret
Installing the OpenShift Data Foundation add-on from the console
To install ODF in your cluster, complete the following steps.
- Before you enable the add-on, review the change log for the latest version information.
- From the Red Hat OpenShift clusters console, select the cluster where you want to install the add-on.
- On the cluster Overview page, on the OpenShift Data Foundation card, click Install. The Install ODF panel opens.
- In the Install ODF panel, enter the configuration parameters that you want to use for your ODF deployment.
- Select either Essentials or Advanced as your billing plan. For more information about billing type, see Feature support by billing type.
- For VPC clusters, select Remote provisioning to dynamically provision volumes for ODF by using Block Storage for VPC.
- In the OSD storage class name field, enter the name of the Block Storage for VPC storage class that you want ODF to use to provision storage volumes. For multizone clusters, use a storage class with a VolumeBindingMode of WaitForFirstConsumer. See the Storage Class Reference for more information.
- In the OSD pod size field, enter the size of the volume that you want to provision. Enter at least 512Gi.
- In the Worker nodes field, enter the node names of the worker nodes where you want to deploy ODF. You must enter at least 3 worker node names. To find your node names, run the oc get nodes command in your cluster. Node names must be comma-separated with no spaces between names. For example: 10.240.0.24,10.240.0.26,10.240.0.25. Leave this field blank to deploy ODF on all worker nodes.
- In the Number of OSD disks required field, enter the number of OSD disks (app storage) to provision on each worker node.
-
If you want to encrypt the OSD volumes (cluster wide encryption) used by the ODF system pods, select Enable cluster encryption.
- If you want to enable encryption for the application volumes (app storage), select Enable volume encryption.
  - In the Instance name field, enter a unique name for your Hyper Protect Crypto Services or Key Protect instance.
  - In the Instance type field, enter the type of encryption instance.
  - In the Instance ID field, enter your Hyper Protect Crypto Services or Key Protect instance ID. For example: d11a1a43-aa0a-40a3-aaa9-5aaa63147aaa.
  - In the Secret name field, enter the name of the secret that you created using your Hyper Protect Crypto Services or Key Protect credentials. For example: ibm-hpcs-secret.
  - In the Base URL field, enter the public endpoint of your Hyper Protect Crypto Services or Key Protect instance. For example: https://api.eu-gb.hs-crypto.cloud.ibm.com:8389.
  - In the Token URL field, enter https://iam.cloud.ibm.com/identity/token.
- After you enter the parameters that you want to use, click Install.
- Wait a few minutes for the add-on deployment to complete. When the deployment is complete, the add-on status is Normal - Addon Ready.
- Verify your installation. Access your Red Hat OpenShift cluster.
- Run the following command to verify that the ODF pods are running.
  oc get pods -n openshift-storage
- Next steps
- Deploy an app that uses ODF.
Installing the add-on from the CLI
You can install the add-on by using the ibmcloud oc cluster addon enable command.
- Review the VPC parameter reference. When you enable the add-on, you can override the default values by specifying the --param "key=value" option for each parameter that you want to override.
- List the openshift-data-foundation add-on versions. Make a note of the default version and determine the version that you want to install.
  ibmcloud ks cluster addon versions
- Before you enable the add-on, review the change log for the latest version information. Note that the add-on supports n+1 cluster versions. For example, you can deploy version 4.10.0 of the add-on to an OCP 4.9 or 4.11 cluster. If you have a cluster version other than the default, you must specify the --version option when you enable the add-on.
option when you enable the add-on. -
Review the add-on options.
ibmcloud oc cluster addon options --addon openshift-data-foundation --version 4.12.0
Example add-on options for version 4.12.0:

Option | Default Value
---|---
clusterEncryption | false
hpcsTokenUrl | <Please provide the KMS token URL>
osdDevicePaths | <Please provide IDs of the disks to be used for OSD pods if using local disks or standard classic cluster>
ocsUpgrade | false
autoDiscoverDevices | false
hpcsServiceName | <Please provide the KMS Service instance name>
hpcsSecretName | <Please provide the KMS secret name>
osdSize | 250Gi
osdStorageClassName | ibmc-vpc-block-metro-10iops-tier
billingType | advanced
hpcsInstanceId | <Please provide the KMS Service instance ID>
hpcsBaseUrl | <Please provide the KMS Base (public) URL>
odfDeploy | true
numOfOsd | 1
workerNodes | all
hpcsEncryption | false
ignoreNoobaa | false
- Enable the openshift-data-foundation add-on. If you want to override any of the default parameters, specify the --param "key=value" option for each parameter that you want to override. If you don't want to create your storage cluster when you enable the add-on, you can enable the add-on first, then create your storage cluster later by creating a CRD.
  Example command to deploy add-on version 4.12.0 with the default storage cluster settings and encryption with Hyper Protect Crypto Services enabled.
  ibmcloud oc cluster addon enable openshift-data-foundation -c <cluster-name> --version 4.12.0 --param "odfDeploy=true" --param "hpcsTokenUrl=https://iam.cloud.ibm.com/identity/token" --param "hpcsEncryption=true" --param "hpcsBaseUrl=<hpcs-instance-public-endpoint>" --param "hpcsInstanceId=<hpcs-instance-id>" --param "hpcsServiceName=<hpcs-instance-name>" --param "hpcsSecretName=<hpcs-secret-name>"
  Example command for deploying the ODF add-on only.
  ibmcloud oc cluster addon enable openshift-data-foundation -c <cluster_name> --version <version> --param "odfDeploy=false"
  Example command for deploying ODF and creating a storage cluster with the default configuration parameters.
  ibmcloud oc cluster addon enable openshift-data-foundation -c <cluster_name> --version <version>
  Example command for deploying ODF and creating a storage cluster while overriding the osdSize parameter.
  ibmcloud oc cluster addon enable openshift-data-foundation -c <cluster_name> --version <version> --param "osdSize=500Gi"
- Verify that the add-on is in a Ready state.
  oc get storagecluster -n openshift-storage
  Example output:
  NAME                 AGE   PHASE   EXTERNAL   CREATED AT             VERSION
  ocs-storagecluster   53m   Ready              2023-03-10T12:20:52Z   4.11.0
- Verify that the ibm-ocs-operator-controller-manager-***** pod is running in the kube-system namespace.
  oc get pods -A | grep ibm-ocs-operator-controller-manager
- If you enabled the add-on with odfDeploy set to false, follow the steps to create an ODF custom resource.
Installing the add-on from Terraform
- Install the Terraform CLI and the IBM Cloud Provider plug-in.
- Make sure you have an IBM Cloud API key.
- Create a Terraform provider file. Save the file in your Terraform directory. For more information, see the Terraform IBM Cloud Provider documentation.
  Example Terraform provider file.
  terraform {
    required_providers {
      ibm = {
        source  = "IBM-Cloud/ibm"
        version = "1.53.0"
      }
    }
  }

  provider "ibm" {
    region           = "us-south"
    ibmcloud_api_key = "<api-key>"
  }
- Create a Terraform configuration file for the ODF add-on. Save the file in your Terraform directory (a minimal sketch of the add-on resource that consumes these values follows these steps).
  Example configuration file.
  ibmcloud_api_key = "" # Enter your API Key
  cluster = ""          # Enter the Cluster ID
  region = "us-south"   # Enter the region

  # For add-on deployment
  odfVersion = "4.12.0"

  # For CRD Creation and Management
  autoDiscoverDevices = "false"
  billingType = "advanced"
  clusterEncryption = "false"
  hpcsBaseUrl = null
  hpcsEncryption = "false"
  hpcsInstanceId = null
  hpcsSecretName = null
  hpcsServiceName = null
  hpcsTokenUrl = null
  ignoreNoobaa = "false"
  numOfOsd = "1"
  ocsUpgrade = "false"
  osdDevicePaths = null
  osdSize = "250Gi"
  osdStorageClassName = "ibmc-vpc-block-metro-10iops-tier"
  workerNodes = null
- In the CLI, navigate to your Terraform directory.
  cd <terraform_directory>
- Run the commands to initialize and plan your Terraform actions. Review the plan output to make sure that the correct actions are performed.
  terraform init
  terraform plan
- Apply the Terraform files to enable the add-on. Then, navigate to the IBM Cloud console to check that the add-on is deploying on your cluster.
  terraform apply
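The example configuration file sets variables, but the resource that enables the add-on is not shown in this topic. The following is a minimal sketch, assuming you enable the add-on with the ibm_container_addons resource from the IBM Cloud provider; the variable wiring is illustrative only, and the storage cluster settings (numOfOsd, osdSize, and so on) are consumed by whatever module or OcsCluster custom resource you use to create the storage cluster.

# main.tf (sketch): enable the openshift-data-foundation add-on on an existing cluster
variable "cluster" {}     # cluster ID from the configuration file
variable "odfVersion" {}  # add-on version, for example "4.12.0"

resource "ibm_container_addons" "odf" {
  cluster = var.cluster

  addons {
    name    = "openshift-data-foundation"
    version = var.odfVersion
  }
}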
Creating your ODF custom resource
To create an ODF storage cluster in your VPC cluster by using dynamic provisioning for your storage volumes, you can create a custom resource to specify storage device details.
If you want to use an IBM Cloud Object Storage service instance as your default backing store, make sure that you created the service instance, and created the Kubernetes secret in your cluster. When you create the ODF CRD in your cluster, ODF looks for a secret named ibm-cloud-cos-creds to set up the default backing store that uses your Object Storage HMAC credentials.
- Create a custom resource definition called OcsCluster. Save one of the following custom resource definition files on your local machine and edit it to include the name of the storage class that you created earlier as the monStorageClassName and osdStorageClassName parameters. For more information about the OcsCluster parameters, see the parameter reference.
  Example custom resource definition for installing ODF on all worker nodes on a 4.8 cluster.
  apiVersion: ocs.ibm.io/v1
  kind: OcsCluster
  metadata:
    name: ocscluster-vpc # Kubernetes resource names can't contain capital letters or special characters. Enter a name for your resource that uses only lowercase letters, numbers, `-` or `.`
  spec:
    osdStorageClassName: <osdStorageClassName> # Specify an ODF storage class with a waitForFirstConsumer volume binding mode
    osdSize: <osdSize> # The OSD size is the total storage capacity of your OCS storage cluster. Use at least 250Gi OSDs for production workloads.
    numOfOsd: 1
    billingType: advanced
    ocsUpgrade: false
  Example custom resource definition for installing ODF only on specified worker nodes on a 4.8 cluster.
  apiVersion: ocs.ibm.io/v1
  kind: OcsCluster
  metadata:
    name: ocscluster-vpc # Kubernetes resource names can't contain capital letters or special characters. Enter a name for your resource that uses only lowercase letters, numbers, `-` or `.`
  spec:
    osdStorageClassName: <osdStorageClassName> # Specify an ODF storage class with a waitForFirstConsumer volume binding mode
    osdSize: <osdSize> # The OSD size is the total storage capacity of your OCS storage cluster. Use at least 250Gi OSDs for production workloads.
    numOfOsd: 1
    billingType: advanced
    ocsUpgrade: false
    workerNodes: # Specify the private IP addresses of the worker nodes where you want to install OCS.
      - <workerNodes> # To get a list of worker nodes, run `oc get nodes`.
      - <workerNodes>
      - <workerNodes>
- Save the file and create the OcsCluster custom resource in your cluster.
  oc create -f <ocs-cluster-filename>.yaml
- Verify that your OcsCluster is running.
  oc describe ocscluster ocscluster-vpc
  Example output
  Name:         ocscluster-vpc
  Namespace:
  Labels:       <none>
  Annotations:  <none>
  API Version:  ocs.ibm.io/v1
  Kind:         OcsCluster
  Metadata:
    Creation Timestamp:  2021-03-23T20:56:51Z
    Finalizers:
      finalizer.ocs.ibm.io
    Generation:  1
    Managed Fields:
      API Version:  ocs.ibm.io/v1
      Fields Type:  FieldsV1
      fieldsV1:
        f:spec:
          .:
          f:billingType:
          f:monSize:
          f:monStorageClassName:
          f:numOfOsd:
          f:ocsUpgrade:
          f:osdSize:
          f:osdStorageClassName:
      Manager:      oc
      Operation:    Update
      Time:         2021-03-23T20:56:51Z
      API Version:  ocs.ibm.io/v1
      Fields Type:  FieldsV1
      fieldsV1:
        f:metadata:
          f:finalizers:
            .:
            v:"finalizer.ocs.ibm.io":
        f:status:
          .:
          f:storageClusterStatus:
      Manager:         manager
      Operation:       Update
      Time:            2021-04-09T23:12:02Z
    Resource Version:  11372332
    Self Link:         /apis/ocs.ibm.io/v1/ocsclusters/ocscluster-vpc
    UID:               aa11a1a1-111f-aace-afac-1fa1afe1111a
  Spec:
    Billing Type:            hourly
    Mon Size:                20Gi
    Mon Storage Class Name:  ibmc-vpc-block-10iops-tier
    Num Of Osd:              1
    Ocs Upgrade:             false
    Osd Size:                250Gi
    Osd Storage Class Name:  ibmc-vpc-block-10iops-tier
  Status:
    Storage Cluster Status:
  Events:  <none>
Scaling ODF
You can scale your ODF configuration by increasing the numOfOsd setting. When you increase the number of OSDs, ODF provisions that number of disks of the same osdSize capacity in GB in each of the worker nodes in your ODF cluster. However, because ODF replicates the data across your worker nodes, the total storage that is available to your applications is equal to the osdSize multiplied by the numOfOsd.
Number of worker nodes | Initial osdSize | numOfOsd | Storage capacity available to applications | Total storage of provisioned disks
---|---|---|---|---
3 | 250Gi | 1 | 250Gi | 750Gi
3 | 250Gi | 2 | 500Gi | 1500Gi
3 | 250Gi | 3 | 750Gi | 2250Gi
3 | 250Gi | 4 | 1000Gi | 3000Gi
Scaling by increasing the numOfOsd
Access your Red Hat OpenShift cluster.
- Get the name of your OcsCluster custom resource.
  oc get ocscluster
- Save your OcsCluster custom resource to your local machine as a YAML file called ocscluster.yaml.
  oc get ocscluster ocscluster-vpc -o yaml > ocscluster.yaml
- Increase the numOfOsd parameter and reapply the ocscluster CRD to your cluster (see the example after these steps).
  oc apply -f ocscluster.yaml
- Verify that the additional OSDs are created.
  oc get pv
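For reference, the following is a minimal sketch of an edited ocscluster.yaml with numOfOsd increased from 1 to 2. The storage class and OSD size shown are example values from this topic and must match your existing storage cluster.

apiVersion: ocs.ibm.io/v1
kind: OcsCluster
metadata:
  name: ocscluster-vpc
spec:
  osdStorageClassName: ibmc-vpc-block-metro-10iops-tier # must match your existing storage cluster
  osdSize: 250Gi # must match your existing storage cluster
  numOfOsd: 2 # increased from 1 to provision one additional OSD disk on each worker node
  billingType: advanced
  ocsUpgrade: false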
Expanding ODF by adding worker nodes to your VPC cluster
To increase the storage capacity in your storage cluster, add compatible worker nodes to your cluster.
- Expand the worker pool of the cluster that is used for OCS by adding worker nodes (see the example after these steps). Ensure that your worker nodes meet the requirements for ODF. If you deployed ODF on all the worker nodes in your cluster, the ODF drivers are installed on the new worker nodes when they are added to your cluster.
- If you deployed ODF on a subset of worker nodes in your cluster by specifying the <workerNodes> parameters in your OcsCluster custom resource, you can add the node names of the new worker nodes to your ODF deployment by editing the custom resource definition.
  oc edit ocscluster ocscluster-vpc
- Save the OcsCluster custom resource file to apply the changes to your cluster.
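The following is a minimal sketch of both steps. The worker pool name (default), the per-zone size, and the new node IP address (10.240.0.27) are placeholders; the other node names reuse the example addresses from earlier in this topic.

# Add a worker node in each zone of the existing worker pool
ibmcloud oc worker-pool resize --cluster CLUSTER --worker-pool default --size-per-zone 2

# In the editor opened by `oc edit ocscluster ocscluster-vpc`, append the new node names under spec.workerNodes
spec:
  workerNodes:
    - 10.240.0.24
    - 10.240.0.26
    - 10.240.0.25
    - 10.240.0.27 # newly added worker node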
Limitations
Review the following limitations for deploying ODF.
Kubernetes resource ID character limit: Kubernetes PVC names must be fewer than 63 characters. If you deploy ODF in a multizone VPC cluster and create your ODF storage cluster by using a metro retain storage class such as ibmc-vpc-block-metro-retain-10iops-tier, the corresponding ODF device set that is created by using this storage class fails. For more information, see ODF device set creation fails because of the Kubernetes character limitation.