Installing OpenShift Data Foundation on a private cluster
OpenShift Data Foundation is a highly available storage solution that you can use to manage persistent storage for your containerized workloads in Red Hat® OpenShift® on IBM Cloud® clusters.
This is an experimental feature that is available for evaluation and testing purposes and might change without notice.
In standard OpenShift Data Foundation configurations, the operators and drivers pull images from public container registries such as registry.redhat.io. However, in private-only, air-gapped clusters without access to the public internet, you must first mirror the ODF images to IBM Cloud Container Registry, then configure your OpenShift Data Foundation deployment to pull the images from that private registry.
Because this approach involves manually mirroring images from registry.redhat.io to IBM Cloud Container Registry, you are responsible for repeating the mirroring process whenever patch updates or security fixes are released for OpenShift Data Foundation.
Prerequisites
Before you install OpenShift Data Foundation in your cluster, meet the following prerequisite conditions.
- Create a Red Hat account if you do not already have one. For more information on creating a Red Hat account, see Create a Red Hat login.
- Create or have access to a private cluster for OpenShift Data Foundation. If you already have a private cluster, make sure that it meets the following requirements.
- Your cluster must be at version 4.11 or later.
- Your worker node operating system must be RHEL 8.
- 1 Virtual Private Cloud (VPC) with 3 subnets (1 per zone) and no public gateway attached.
- 1 Red Hat OpenShift on IBM Cloud cluster with at least 3 worker nodes spread evenly across 3 zones. The worker nodes must use a flavor of at least 16x64 (16 vCPUs and 64 GB of memory).
- An IBM Cloud Container Registry instance with at least one namespace in the same region as your cluster. If you don't have an instance of IBM Cloud Container Registry, see Getting started with Container Registry to create one.
- Optional: If you plan to use Hyper Protect Crypto Services or Key Protect for encryption, create a virtual private endpoint gateway that allows access to your KMS instance. Make sure to bind at least 1 IP address from each subnet in your VPC to the VPE.
Create an additional subnet in your VPC and attach a public gateway
In addition to the 3 required subnets, create another subnet and attach a public gateway to it.
From the Subnets for VPC console, create an additional subnet in your VPC. Note that this subnet must be separate from the subnets your worker nodes are in and must have a public gateway attached.
Create a bastion host
From the Virtual Servers for VPC console, create a virtual server in the subnet that you created in the previous step. This virtual server is used as a bastion host to connect to your private cluster. The operating system for your bastion host must be at least Ubuntu 20.04 or RHEL 8.
Reserve a floating IP and bind it to your bastion host
From the Floating IPs console, reserve a floating IP in the zone where the subnet that you created earlier is located and bind it to your bastion host.
Install the CLI tools
- From the Red Hat OpenShift downloads page, download the Red Hat OpenShift command-line interface (`oc`) and the Red Hat OpenShift Client (`oc`) mirror plug-in.
- Copy the `oc` and `oc-mirror` tar files to your bastion host.
    scp /path/to/download root@BASTION-HOST-IP:/root
- Log in to your bastion host. For more information, see Connecting to your instance.
    ssh -i <path-to-key-file> root@<bastion-host-ip-address>
- Unpack each of the tar files and move the binaries to `/usr/local/bin`.
    tar -C /usr/local/bin -xvzf oc.tar.gz
    tar -C /usr/local/bin -xvzf oc-mirror.tar.gz
- Install the IBM Cloud CLI tools.
    curl -fsSL https://clis.cloud.ibm.com/install/linux | sh
- Install the `container-service` and `container-registry` plug-ins.
    ibmcloud plugin install container-service
    ibmcloud plugin install container-registry
- Install Podman.
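Before you continue, you can confirm that each of the tools from this section is on your PATH. This is an optional sanity check; the exact version output depends on the releases that you downloaded.
    oc version --client
    oc-mirror --help
    ibmcloud plugin list
    podman --version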
Log in to your cluster and disable the default OperatorHub sources
In a restricted network environment, you must have administrator access to disable the default catalogs. You can then configure OperatorHub to use local catalog sources.
While logged in to your bastion host, complete the following steps.
- Disable the default remote OperatorHub sources.
    oc patch operatorhub cluster --type json -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
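To confirm that the default sources are disabled, you can list the catalog sources in the `openshift-marketplace` namespace; after this step, no default Red Hat catalog sources are expected to remain, only any custom sources that you create later.
    oc get catalogsource -n openshift-marketplace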
Log in to your container registries
- Log in to `registry.redhat.io`. If you don't have a Red Hat account, follow the steps to create one.
    podman login registry.redhat.io
- Log in to IBM Cloud Container Registry with the username `iamapikey`.
    podman login us.icr.io -u iamapikey -p IAM-API-KEY
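Passing the API key with `-p` leaves it in your shell history. As an alternative sketch, you can read the key from standard input; `apikey.txt` here is a hypothetical local file that contains only your IAM API key.
    podman login us.icr.io -u iamapikey --password-stdin < apikey.txt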
Create a namespace in IBM Cloud Container Registry
- Set the IBM Cloud Container Registry region in your CLI. This region must be the same region that your cluster is in.
    ibmcloud cr region-set us-south
- Create a namespace in IBM Cloud Container Registry. This namespace is used for the OpenShift Data Foundation images.
    ibmcloud cr namespace-add NAMESPACE
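To confirm that the namespace exists before you start mirroring, you can list the namespaces in your registry account.
    ibmcloud cr namespace-list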
Mirror the Operator index to IBM Cloud Container Registry
- Copy the following `ImageSetConfiguration` and save it as a file called `imageset.yaml`.
    apiVersion: mirror.openshift.io/v1alpha2
    kind: ImageSetConfiguration
    storageConfig:
      registry:
        imageURL: us.icr.io/NAMESPACE/redhat-operator-index
        skipTLS: false
    mirror:
      operators:
      - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
        packages:
        - name: local-storage-operator
        - name: ocs-operator
        - name: mcg-operator
        - name: odf-operator
        - name: odf-csi-addons-operator
- Mirror the OpenShift Data Foundation images from `registry.redhat.io` to your IBM Cloud Container Registry namespace. Before running `oc-mirror`, make sure to set the umask of your bastion host to `0022`.
    oc-mirror --config=imageset.yaml docker://us.icr.io/NAMESPACE --dest-skip-tls
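If you want to preview the mirroring operation before pushing images, `oc-mirror` supports a dry run; a sketch using the same configuration file:
    oc-mirror --config=imageset.yaml docker://us.icr.io/NAMESPACE --dest-skip-tls --dry-run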
Create a secret to pull images from IBM Cloud Container Registry
- Find and record your unique Red Hat registry pull secret. For more information about how to find your Red Hat registry pull secret, see Red Hat Container Registry Authentication.
- Rename the `pull-secret` file to `auth.json`.
- Encode your IAM API key in base64. On Linux, if the encoded output wraps across multiple lines, add the `-w0` option (`base64 -w0`) so that you get a single-line value.
    printf "iamapikey:IAM-API-KEY" | base64
- Add the following section to your `auth.json` file.
    {"auths": {"us.icr.io": {"auth": "BASE64-VALUE","email": "IBM-EMAIL"}}}
- Create the secret in the `openshift-marketplace` namespace.
    oc create secret generic odf-secret -n openshift-marketplace --from-file=.dockerconfigjson=auth.json --type=kubernetes.io/dockerconfigjson
Update the catalog source in your cluster
- After mirroring is complete, a results directory called `oc-mirror-workspace` is created on your bastion host.
- Change directories into the `oc-mirror-workspace` directory.
    cd oc-mirror-workspace
- Look for a `results-XXX` directory and `cd` into it.
    ls
    cd results-XXX
- Look for the `catalogSource-redhat-operator-index.yaml` file.
    ls
- Edit the catalog source. Change the name to `redhat-operators`, add the `odf-secret`, and add your registry details.
    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: redhat-operators # Make sure the name is redhat-operators
      namespace: openshift-marketplace
    spec:
      image: us.icr.io/NAMESPACE/redhat/redhat-operator-index:v4.10 # Add your registry
      sourceType: grpc
      displayName: Red Hat Operators
      publisher: Red Hat
      updateStrategy:
        registryPoll:
          interval: 10m0s
      secrets: # Add the odf-secret
      - "odf-secret"
- Create the catalog source in your cluster.
    oc create -f catalogSource-redhat-operator-index.yaml
- Verify that the pods and `packagemanifest` are created in your cluster.
    oc get pods,packagemanifest -n openshift-marketplace
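To narrow the output to the operators that you mirrored, you can filter the package manifests; this sketch assumes the package names from the earlier `imageset.yaml`.
    oc get packagemanifest -n openshift-marketplace | grep -E 'odf|ocs|mcg|local-storage'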
Update your image pull secret
- Extract the global pull secret to a file called `.dockerconfigjson`.
    oc extract secret/pull-secret -n openshift-config --to=.
  Example output
    .dockerconfigjson
- Print the contents of your `auth.json` file.
    cat auth.json
- Add the `icr.io` section from `auth.json` to your `.dockerconfigjson` file.
    {"auths": {"us.icr.io": {"auth": "BASE64-VALUE","email": "IBM-EMAIL"}}}
- Update the pull secret in the `openshift-config` namespace to use your `.dockerconfigjson`.
    oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson
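To double-check that the `us.icr.io` entry landed in the global pull secret, you can decode the secret in place. A minimal sketch:
    oc get secret pull-secret -n openshift-config -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d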
Replace each worker node to pick up configuration changes
- Get a list of the worker nodes in your cluster.
    ibmcloud oc worker ls -c CLUSTER
- Run the `ibmcloud oc worker replace` command to replace each worker node in your cluster. A loop that automates this is sketched after this list.
    ibmcloud oc worker replace -c CLUSTER --worker WORKER-NODE
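Replacing nodes by hand can be tedious in larger clusters. The following loop is a sketch only: it assumes the `-q` flag to suppress column headers and that you confirm each prompt and wait for the replacement node to reach `Ready` before the next iteration; adjust it to your own rollout policy.
    # Hypothetical helper loop; replace CLUSTER with your cluster name or ID.
    for WORKER in $(ibmcloud oc worker ls -c CLUSTER -q | awk '{print $1}'); do
      ibmcloud oc worker replace -c CLUSTER --worker "$WORKER"
      # Wait here until the new worker reports Ready before continuing.
    done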
Update the `registries.conf` file on each node
After you replace each worker node, start a debug pod on each node and update the `registries.conf` file.
- List your worker nodes.
    oc get nodes
- Start a debug pod on one of the nodes.
    oc debug node/NODE-NAME
- Allow host binaries.
    chroot /host
- Open the `registries.conf` file.
    vi /etc/containers/registries.conf
- Append the following image mappings to the `registries.conf` file.
    [[registry]]
      location = "registry.redhat.io/odf4"
      insecure = false
      blocked = false
      mirror-by-digest-only = false
      prefix = ""

      [[registry.mirror]]
        location = "us.icr.io/NAMESPACE/odf4"
        insecure = false

    [[registry]]
      location = "registry.redhat.io/openshift4"
      insecure = false
      blocked = false
      mirror-by-digest-only = false
      prefix = ""

      [[registry.mirror]]
        location = "us.icr.io/NAMESPACE/openshift4"
        insecure = false

    [[registry]]
      location = "registry.redhat.io/ocs4"
      insecure = false
      blocked = false
      mirror-by-digest-only = false
      prefix = ""

      [[registry.mirror]]
        location = "us.icr.io/NAMESPACE/ocs4"
        insecure = false

    [[registry]]
      location = "registry.redhat.io/rhceph"
      insecure = false
      blocked = false
      mirror-by-digest-only = false
      prefix = ""

      [[registry.mirror]]
        location = "us.icr.io/NAMESPACE/rhceph"
        insecure = false

    [[registry]]
      location = "registry.redhat.io/rhel8"
      insecure = false
      blocked = false
      mirror-by-digest-only = false
      prefix = ""

      [[registry.mirror]]
        location = "us.icr.io/NAMESPACE/rhel8"
        insecure = false
- For each of the registry mirrors that you added in the previous step (`openshift4`, `ocs4`, `rhceph`, `rhel8`), remove the duplicate entry in `registries.conf` that has an `armada-master` mirror location. Example `rhel8` registry entry to remove from `registries.conf`:
    [[registry]]
      location = "registry.redhat.io/rhel8/postgresql-12"
      insecure = false
      blocked = false
      mirror-by-digest-only = false
      prefix = ""

      [[registry.mirror]]
        location = "us.icr.io/armada-master/rhel8-postgresql-12"
        insecure = false
- Repeat the previous steps to update the `registries.conf` file on each worker node.
Reboot each worker node
- Reboot each worker node in your cluster, one at a time.
    ibmcloud oc worker reboot -c CLUSTER -w WORKER
- Wait for each node to reach the `Ready` status before you reboot the next node; one way to watch for this is sketched after this list.
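One way to track node status during the reboots is to watch the nodes from your bastion host. A minimal sketch:
    oc get nodes -w   # watch until the rebooted node reports Ready, then press Ctrl+C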
Install the OpenShift Data Foundation add-on from the console
To install ODF in your cluster, complete the following steps.
- Before you enable the add-on, review the change log for the latest version information. Note that the add-on supports `n+1` cluster versions. For example, you can deploy version `4.10.0` of the add-on to an OCP `4.9` or `4.11` cluster. If you have a cluster version other than the default, you must install the add-on from the CLI and specify the `--version` option; a CLI sketch follows this list.
- Review the parameter reference.
- From the Red Hat OpenShift clusters console, select the cluster where you want to install the add-on.
- On the cluster Overview page, find the OpenShift Data Foundation card and click Install. The Install ODF panel opens.
- In the Install ODF panel, enter the configuration parameters that you want to use for your ODF deployment.
- Select either Essentials or Advanced as your billing plan.
- If you want to automatically discover the available storage devices on your worker nodes and use them in ODF, select Local disk discovery.
- In the Worker nodes field, enter the node names of the worker nodes where you want to deploy ODF. You must enter at least 3 worker node names. To find your node names, run the `oc get nodes` command in your cluster. Node names must be comma-separated with no spaces between names. For example: `10.240.0.24,10.240.0.26,10.240.0.25`. Leave this field blank to deploy ODF on all worker nodes.
- In the Number of OSD disks required field, enter the number of OSD disks (app storage) to provision on each worker node.
- If you are re-enabling the add-on to upgrade the add-on version, select the Upgrade ODF option.
- If you want to encrypt the volumes used by the ODF system pods, select Enable cluster encryption.
- If you want to enable encryption on the OSD volumes (app storage), select Enable volume encryption.
- In the Instance name field, enter the name of your Hyper Protect Crypto Services instance. For example: `Hyper-Protect-Crypto-Services-eugb`.
- In the Instance ID field, enter your Hyper Protect Crypto Services instance ID. For example: `d11a1a43-aa0a-40a3-aaa9-5aaa63147aaa`.
- In the Secret name field, enter the name of the secret that you created by using your Hyper Protect Crypto Services credentials. For example: `ibm-hpcs-secret`.
- In the Base URL field, enter the public endpoint of your Hyper Protect Crypto Services instance. For example: `https://api.eu-gb.hs-crypto.cloud.ibm.com:8389`.
- In the Token URL field, enter `https://iam.cloud.ibm.com/identity/token`.
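If your cluster runs a version other than the default, the add-on must be enabled from the CLI with an explicit version, as noted at the start of this list. A sketch, assuming add-on version 4.10.0; check the change log for the versions and parameters that apply to your cluster.
    ibmcloud oc cluster addon enable openshift-data-foundation -c CLUSTER --version 4.10.0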
Verify OpenShift Data Foundation is running
- List the pods in the `openshift-storage` namespace and verify that they are running.
    oc get pods -n openshift-storage
- List the available storage classes.
    oc get sc
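To verify dynamic provisioning end to end, you can create a small test PVC. A minimal sketch; `ocs-storagecluster-ceph-rbd` is the usual ODF block storage class name, but confirm it against your own `oc get sc` output.
    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: odf-test-pvc
      namespace: default
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: ocs-storagecluster-ceph-rbd
    EOF
    oc get pvc odf-test-pvc -n default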