Installing the IBM Cloud Object Storage plug-in
Virtual Private Cloud Classic infrastructure
Install the IBM Cloud Object Storage plug-in to set up pre-defined storage classes for IBM Cloud Object Storage. You can use these storage classes to create a PVC to provision IBM Cloud Object Storage for your apps.
If you are migrating from RHEL 7 to RHEL 8, you must uninstall and then reinstall plug-in version 2.2.6 or later. If you are upgrading from a chart version earlier than 2.2.5, you must uninstall and reinstall the plug-in and then re-create your PVCs and pods, or the upgrade fails.
Installing the plug-in via Helm
- Prerequisites
- The IBM Cloud Object Storage plug-in requires at least 0.2 vCPU and 128 MB of memory.
Before you begin: Access your Red Hat OpenShift cluster.
Install the `ibmc` Helm plug-in and the `ibm-object-storage-plugin`:
-
Make sure that your worker nodes run the latest patch for your minor version so that your worker nodes have the latest security settings. The patch also renews the root password on the worker node.
If you did not apply updates or reload your worker nodes within the last 90 days, the root password on the worker node expires and the installation of the storage plug-in might fail.
-
List the current patch version of your worker nodes.
ibmcloud oc worker ls --cluster <cluster_name_or_ID>
Example output
OK
ID                                                  Public IP        Private IP      Machine Type         State    Status   Zone    Version
kube-dal10-crb1a23b456789ac1b20b2nc1e12b345ab-w26   169.xx.xxx.xxx   10.xxx.xx.xxx   b3c.4x16.encrypted   normal   Ready    dal10   1.31_1523*
If your worker nodes do not run the latest patch version, you see an asterisk (`*`) in the Version column of your CLI output.
-
Review the Red Hat OpenShift on IBM Cloud version information to find the latest changes.
-
Apply the latest patch version by reloading your worker node. Follow the instructions in the ibmcloud oc worker reload command to safely reschedule any running pods on your worker node before you reload your worker node. Note that during the reload, your worker node machine is updated with the latest image and data is deleted if not stored outside the worker node.
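The patch check above can be scripted. The following sketch parses saved `ibmcloud oc worker ls` output for the trailing asterisk in the Version column; the sample output line is illustrative, and in practice you would pipe in the live command output instead.

```shell
# Sample line from `ibmcloud oc worker ls --cluster <cluster_name_or_ID>`.
# In practice, capture the real command output into this variable.
worker_output='kube-dal10-crb1a23b456789ac1b20b2nc1e12b345ab-w26 169.xx.xxx.xxx 10.xxx.xx.xxx b3c.4x16.encrypted normal Ready dal10 1.31_1523*'

# A version ending in "*" means the worker has not applied the latest patch.
outdated=$(printf '%s\n' "$worker_output" | awk '$NF ~ /\*$/ {print $1}')

if [ -n "$outdated" ]; then
  echo "Workers needing a reload: $outdated"
else
  echo "All workers are on the latest patch."
fi
```

Workers that the script lists are candidates for `ibmcloud oc worker reload`.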
-
-
Review the change log and verify support for your cluster version and architecture.
-
Follow the instructions to install the version 3 Helm client on your local machine.
If you enabled VRF and service endpoints in your IBM Cloud account, you can use the private IBM Cloud Helm repository to keep your image pull traffic on the private network. If you can't enable VRF or service endpoints in your account, use the public Helm repository.
-
Add the IBM Cloud Helm repo to your cluster.
helm repo add ibm-helm https://raw.githubusercontent.com/IBM/charts/master/repo/ibm-helm
-
Update the Helm repos to retrieve the most recent version of all Helm charts in this repo.
helm repo update
-
If you installed the IBM Cloud Object Storage Helm plug-in earlier, remove the `ibmc` plug-in.
helm plugin uninstall ibmc
-
Download the Helm charts and unpack the charts in your current directory.
helm fetch --untar ibm-helm/ibm-object-storage-plugin
If the output shows `Error: failed to untar: a file or directory with the name ibm-object-storage-plugin already exists`, delete your `ibm-object-storage-plugin` directory and rerun the `helm fetch` command.
-
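The cleanup for the `failed to untar` error can be sketched as follows. This runs in a scratch directory and simulates the leftover chart directory, so it is safe to run anywhere; on your machine you would run the `rm -rf` against the real leftover directory and then rerun `helm fetch`.

```shell
# Work in a scratch directory so the sketch does not touch real files.
workdir=$(mktemp -d)
cd "$workdir"

# Simulate the leftover chart directory from an earlier fetch.
mkdir -p ibm-object-storage-plugin

# Remove the stale directory if it exists, then the fetch can be rerun:
#   helm fetch --untar ibm-helm/ibm-object-storage-plugin
if [ -d ibm-object-storage-plugin ]; then
  rm -rf ibm-object-storage-plugin
  echo "removed stale chart directory"
fi
```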
If you use OS X or a Linux distribution, install the IBM Cloud Object Storage Helm plug-in `ibmc`. The plug-in automatically retrieves your cluster location and sets the API endpoint for your IBM Cloud Object Storage buckets in your storage classes. If you use Windows as your operating system, continue with the next step.
-
Install the Helm plug-in.
helm plugin install ./ibm-object-storage-plugin/helm-ibmc
-
Verify that the `ibmc` plug-in is installed successfully.
helm ibmc --help
Example output
Helm version: v3.13.1+g3547a4b
Install or upgrade Helm charts in IBM K8S Service(IKS)
Usage:
  helm ibmc [command]
Available Commands:
  install  Install a Helm chart
  upgrade  Upgrade the release to a new version of the Helm chart
Available Flags:
  -h, --help    (Optional) This text.
  -u, --update  (Optional) Update this plugin to the latest version
Example Usage:
  Install: helm ibmc install ibm-object-storage-plugin ibm-helm/ibm-object-storage-plugin
  Upgrade: helm ibmc upgrade [RELEASE] ibm-helm/ibm-object-storage-plugin
Note:
  1. It is always recommended to install latest version of ibm-object-storage-plugin chart.
  2. It is always recommended to have 'kubectl' client up-to-date.
-
Optional: If the output shows the error `Error: fork/exec /home/iksadmin/.helm/plugins/helm-ibmc/ibmc.sh: permission denied`, run the following command.
chmod 755 /Users/<user_name>/Library/helm/plugins/helm-ibmc/ibmc.sh
Then retry the `helm ibmc --help` command.
helm ibmc --help
-
-
Optional: Limit the IBM Cloud Object Storage plug-in to access only the Kubernetes secrets that hold your IBM Cloud Object Storage service credentials. By default, the plug-in can access all Kubernetes secrets in your cluster.
-
Store your IBM Cloud Object Storage service credentials in a Kubernetes secret.
-
Change directories to the `ibm-object-storage-plugin` directory.
cd ibm-object-storage-plugin
-
From the `ibm-object-storage-plugin` directory, navigate to the `templates` directory and list the available files.
OS X and Linux
cd templates && ls
Windows
chdir templates && dir
-
Open the `provisioner-sa.yaml` file and look for the `ibmcloud-object-storage-secret-reader` `ClusterRole` definition.
-
Add the name of the secret that you created earlier to the list of secrets that the plug-in is authorized to access in the `resourceNames` section.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: ibmcloud-object-storage-secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["<secret_name1>","<secret_name2>"]
  verbs: ["get"]
-
Save your changes and navigate to your working directory.
-
Install the `ibm-object-storage-plugin` in your cluster. When you install the plug-in, pre-defined storage classes are added to your cluster. If you completed the previous step to limit the plug-in to the Kubernetes secrets that hold your IBM Cloud Object Storage service credentials and you are still targeting the `templates` directory, change directories to your working directory.
To set a limit on how much storage is available for the bucket, set the `--set quotaLimit=true` option.
VPC clusters only: To enable authorized IPs on VPC, set the `--set bucketAccessPolicy=true` option.
If you don't set the `--set quotaLimit=true` option during installation, you can't set quotas for your PVCs.
Example `helm ibmc install` commands for OS X and Linux for RHEL and Ubuntu worker nodes.
helm ibmc install ibm-object-storage-plugin ibm-helm/ibm-object-storage-plugin --set license=true [--set quotaLimit=true/false] [--set bucketAccessPolicy=false] [--set allowCrossNsSecret=true/false]
Example `helm install` command for OS X and Linux for Red Hat OpenShift CoreOS worker nodes.
helm install ibm-object-storage-plugin ./ibm-object-storage-plugin --set license=true --set kubeDriver=/etc/kubernetes --set dcname="${DC_NAME}" --set provider="${CLUSTER_PROVIDER}" --set workerOS="${WORKER_OS}" --set region="${REGION}" --set platform="${PLATFORM}" [--set quotaLimit=true/false] [--set bucketAccessPolicy=false] [--set allowCrossNsSecret=true/false]
Example `helm install` command for Windows.
helm install ibm-object-storage-plugin ./ibm-object-storage-plugin --set dcname="${DC_NAME}" --set provider="${CLUSTER_PROVIDER}" --set workerOS="${WORKER_OS}" --set region="${REGION}" --set platform="${PLATFORM}" --set license=true [--set bucketAccessPolicy=false]
`quotaLimit`
- A quota limit sets the maximum amount of storage (in bytes) available for a bucket. If you set this option to `true`, then when you create PVCs, the quota on buckets created by those PVCs is equal to the PVC size. The default value is `true`.
`allowCrossNsSecret`
- By default, the plug-in searches for the Kubernetes secret in namespaces other than the PVC namespace. If you set this option to `false`, the plug-in searches for the Kubernetes secret in only the PVC namespace. The default value is `true`.
`kubeDriver`
- RHEL worker nodes: set to `/usr/libexec/kubernetes`.
- CoreOS worker nodes: set to `/etc/kubernetes`.
`DC_NAME`
- The cluster data center. To retrieve the data center, run `oc get cm cluster-info -n kube-system -o jsonpath="{.data.cluster-config\.json}{'\n'}"`. Store the data center value in an environment variable by running `SET DC_NAME=<datacenter>`. Optional: Set the environment variable in Windows PowerShell by running `$env:DC_NAME="<datacenter>"`.
`CLUSTER_PROVIDER`
- The infrastructure provider. To retrieve this value, run `oc get nodes -o jsonpath="{.items[*].metadata.labels.ibm-cloud\.kubernetes\.io\/iaas-provider}{'\n'}"`. If the output contains `softlayer`, set `CLUSTER_PROVIDER` to `"IBMC"`. If the output contains `gc`, `ng`, or `g2`, set `CLUSTER_PROVIDER` to `"IBMC-VPC"`. Store the infrastructure provider in an environment variable. For example: `SET CLUSTER_PROVIDER="IBMC-VPC"`.
`WORKER_OS` and `PLATFORM`
- The operating system of the worker nodes. To retrieve these values, run `oc get nodes -o jsonpath="{.items[*].metadata.labels.ibm-cloud\.kubernetes\.io\/os}{'\n'}"`. Store the operating system of the worker nodes in an environment variable. For Red Hat OpenShift on IBM Cloud clusters, run `SET WORKER_OS="redhat"` and `SET PLATFORM="openshift"`.
`REGION`
- The region of the worker nodes. To retrieve this value, run `oc get nodes -o yaml | grep 'ibm-cloud\.kubernetes\.io/region'`. Store the region of the worker nodes in an environment variable by running `SET REGION="<region>"`.
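The variables above feed the install command. The following sketch assembles the CoreOS-style install command from placeholder values and prints it for review instead of running it; the placeholder values and the `--set region=` form are assumptions for illustration, so substitute the values you retrieved with the `oc get` commands and check your chart's supported values before running the echoed command.

```shell
# Illustrative placeholder values; on a real cluster, derive these with the
# `oc get` commands shown above.
DC_NAME="dal10"
CLUSTER_PROVIDER="IBMC-VPC"
WORKER_OS="redhat"
PLATFORM="openshift"
REGION="us-south"

# Assemble the install command (CoreOS worker node variant).
install_cmd="helm install ibm-object-storage-plugin ./ibm-object-storage-plugin \
--set license=true \
--set kubeDriver=/etc/kubernetes \
--set dcname=${DC_NAME} \
--set provider=${CLUSTER_PROVIDER} \
--set workerOS=${WORKER_OS} \
--set region=${REGION} \
--set platform=${PLATFORM}"

# Dry run: print the command so it can be reviewed before it is executed.
echo "$install_cmd"
```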
Updating the IBM Cloud Object Storage plug-in
You can upgrade the existing IBM Cloud Object Storage plug-in to the most recent version.
-
Get the name of your IBM Cloud Object Storage plug-in Helm release and the version of the plug-in in your cluster.
helm ls -A | grep object
Example output
NAME             NAMESPACE    REVISION  UPDATED                               STATUS    CHART                            APP VERSION
<release_name>   <namespace>  1         2020-02-13 16:05:58.599679 -0500 EST  deployed  ibm-object-storage-plugin-1.1.2  1.1.2
-
Update the IBM Cloud Helm repo to retrieve the most recent version of all Helm charts in this repo.
helm repo update
-
Update the IBM Cloud Object Storage `ibmc` Helm plug-in to the most recent version.
helm ibmc --update
-
Install the most recent version of the `ibm-object-storage-plugin` for your operating system.
Example `helm ibmc upgrade` command for OS X and Linux.
helm ibmc upgrade ibm-object-storage-plugin ibm-helm/ibm-object-storage-plugin --set license=true [--set bucketAccessPolicy=false]
Example `helm upgrade` command for Windows.
helm upgrade ibm-object-storage-plugin ./ibm-object-storage-plugin --set dcname="${DC_NAME}" --set provider="${CLUSTER_PROVIDER}" --set workerOS="${WORKER_OS}" --set region="${REGION}" --set platform="${PLATFORM}" --set license=true [--set bucketAccessPolicy=false]
`DC_NAME`
- The cluster data center. To retrieve the data center, run `oc get cm cluster-info -n kube-system -o jsonpath="{.data.cluster-config\.json}{'\n'}"`. Store the data center value in an environment variable by running `SET DC_NAME=<datacenter>`. Optional: Set the environment variable in Windows PowerShell by running `$env:DC_NAME="<datacenter>"`.
`CLUSTER_PROVIDER`
- The infrastructure provider. To retrieve this value, run `oc get nodes -o jsonpath="{.items[*].metadata.labels.ibm-cloud\.kubernetes\.io\/iaas-provider}{'\n'}"`. If the output contains `softlayer`, set `CLUSTER_PROVIDER` to `"IBMC"`. If the output contains `gc`, `ng`, or `g2`, set `CLUSTER_PROVIDER` to `"IBMC-VPC"`. Store the infrastructure provider in an environment variable. For example: `SET CLUSTER_PROVIDER="IBMC-VPC"`.
`WORKER_OS` and `PLATFORM`
- The operating system of the worker nodes. To retrieve these values, run `oc get nodes -o jsonpath="{.items[*].metadata.labels.ibm-cloud\.kubernetes\.io\/os}{'\n'}"`. Store the operating system of the worker nodes in an environment variable. For Red Hat OpenShift on IBM Cloud clusters, run `SET WORKER_OS="redhat"` and `SET PLATFORM="openshift"`.
`REGION`
- The region of the worker nodes. To retrieve this value, run `oc get nodes -o yaml | grep 'ibm-cloud\.kubernetes\.io/region'`. Store the region of the worker nodes in an environment variable by running `SET REGION="<region>"`.
-
Verify that the `ibmcloud-object-storage-plugin` deployment is successfully upgraded. The upgrade is successful when you see `deployment "ibmcloud-object-storage-plugin" successfully rolled out` in your CLI output.
oc rollout status deployment/ibmcloud-object-storage-plugin -n ibm-object-s3fs
-
Verify that the `ibmcloud-object-storage-driver` daemon set is successfully upgraded. The upgrade is successful when you see `daemon set "ibmcloud-object-storage-driver" successfully rolled out` in your CLI output.
oc rollout status ds/ibmcloud-object-storage-driver -n ibm-object-s3fs
-
Verify that the IBM Cloud Object Storage pods are in a `Running` state.
oc get pods -n <namespace> -o wide | grep object-storage
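The pod state check can be scripted. The sketch below parses saved `oc get pods` output for object storage pods that are not `Running`; the sample lines and pod names are illustrative, and on a real cluster you would pipe in the live command output instead.

```shell
# Sample lines from `oc get pods -n <namespace> -o wide | grep object-storage`.
pod_list=$(printf 'ibmcloud-object-storage-driver-9n8g8 1/1 Running 0 2m\nibmcloud-object-storage-plugin-7c774d484b-pcnnx 1/1 Running 0 2m\n')

# Column 3 is the pod status; collect any pod that is not Running.
not_running=$(printf '%s\n' "$pod_list" | awk '$3 != "Running" {print $1}')

if [ -z "$not_running" ]; then
  echo "all object storage pods are Running"
else
  echo "pods to inspect with oc describe: $not_running"
fi
```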
If you're having trouble updating the IBM Cloud Object Storage plug-in, see Object storage: Installing the Object storage `ibmc` Helm plug-in fails and Object storage: Installing the IBM Cloud Object Storage plug-in fails.
Removing the IBM Cloud Object Storage plug-in
If you don't want to provision and use IBM Cloud Object Storage in your cluster, you can uninstall the `ibm-object-storage-plugin` and the `ibmc` Helm plug-in.
Removing the `ibmc` Helm plug-in or the `ibm-object-storage-plugin` doesn't remove existing PVCs, PVs, or data. When you remove the `ibm-object-storage-plugin`, all the related driver pods and daemon sets are removed from your cluster, which means you can't provision new IBM Cloud Object Storage for your cluster unless you configure your app to use the IBM Cloud Object Storage API directly. There is no impact on existing PVCs and PVs.
Before you begin:
- Access your Red Hat OpenShift cluster.
- Make sure that you don't have any PVCs or PVs in your cluster that use IBM Cloud Object Storage. To list all pods that mount a specific PVC, run `oc get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.volumes[*]}{.persistentVolumeClaim.claimName}{" "}{end}{end}' | grep "<pvc_name>"`.
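The jsonpath query above prints one `pod:<tab>claim1 claim2 …` line per pod. The following sketch filters that output for a given PVC name; the PVC and pod names are illustrative, and in practice you would pipe in the real `oc get pods … -o=jsonpath=…` output instead of the sample.

```shell
pvc_name="cos-pvc"  # hypothetical PVC name for illustration

# Sample output in the jsonpath format produced by the query above:
# one "pod:<tab>claims" line per pod.
pods_and_claims=$(printf 'web-frontend:\tcos-pvc \nbatch-worker:\tlogs-pvc \napi-server:\t\n')

# Keep only the lines that mention the PVC, then take the pod name
# before the colon.
mounting_pods=$(printf '%s\n' "$pods_and_claims" | grep "$pvc_name" | cut -d: -f1)
echo "Pods mounting ${pvc_name}: ${mounting_pods}"
```

Any pods this prints must be deleted (or re-pointed at other storage) before you remove the plug-in.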
To remove the `ibmc` Helm plug-in and the `ibm-object-storage-plugin`:
-
Get the name of your `ibm-object-storage-plugin` Helm installation.
helm ls -A | grep ibm-object-storage-plugin
Example output
NAME                       NAMESPACE  REVISION  UPDATED                               STATUS    CHART                            APP VERSION
ibm-object-storage-plugin  default    2         2020-04-01 08:46:01.403477 -0400 EDT  deployed  ibm-object-storage-plugin-1.1.4  1.1.4
-
Uninstall the `ibm-object-storage-plugin`.
helm uninstall <release_name>
Example command for a release named `ibm-object-storage-plugin`.
helm uninstall ibm-object-storage-plugin
-
Verify that the `ibm-object-storage-plugin` pods are removed.
oc get pod -n <namespace> | grep object-storage
The removal of the pods is successful if no pods are displayed in your CLI output.
-
Verify that the storage classes are removed.
oc get sc | grep s3
The removal of the storage classes is successful if no storage classes are displayed in your CLI output.
-
If you use OS X or a Linux distribution, remove the `ibmc` Helm plug-in. If you use Windows, this step is not required.
-
Remove the `ibmc` Helm plug-in.
helm plugin uninstall ibmc
-
Verify that the `ibmc` plug-in is removed. The `ibmc` plug-in is removed successfully if it is not listed in your CLI output.
helm plugin list
Example output
NAME VERSION DESCRIPTION
-
Deciding on the object storage configuration
Red Hat OpenShift on IBM Cloud provides pre-defined storage classes that you can use to create buckets with a specific configuration.
-
List available storage classes in Red Hat OpenShift on IBM Cloud.
oc get sc | grep s3
Example output
ibmc-s3fs-cold-cross-region            ibm.io/ibmc-s3fs   8m
ibmc-s3fs-cold-regional                ibm.io/ibmc-s3fs   8m
ibmc-s3fs-smart-cross-region           ibm.io/ibmc-s3fs   8m
ibmc-s3fs-smart-perf-cross-region      ibm.io/ibmc-s3fs   8m
ibmc-s3fs-smart-perf-regional          ibm.io/ibmc-s3fs   8m
ibmc-s3fs-smart-regional               ibm.io/ibmc-s3fs   8m
ibmc-s3fs-standard-cross-region        ibm.io/ibmc-s3fs   8m
ibmc-s3fs-standard-perf-cross-region   ibm.io/ibmc-s3fs   8m
ibmc-s3fs-standard-perf-regional       ibm.io/ibmc-s3fs   8m
ibmc-s3fs-standard-regional            ibm.io/ibmc-s3fs   8m
ibmc-s3fs-vault-cross-region           ibm.io/ibmc-s3fs   8m
ibmc-s3fs-vault-regional               ibm.io/ibmc-s3fs   8m
-
Choose a storage class that fits your data access requirements. The storage class determines the storage capacity, read and write operations, and outbound bandwidth for a bucket. The option that is correct for you is based on how frequently data is read and written to your service instance.
- Standard: This option is used for hot data that is accessed frequently. Common use cases are web or mobile apps.
- Vault: This option is used for workloads or cool data that are accessed infrequently, such as once a month or less. Common use cases are archives, short-term data retention, digital asset preservation, tape replacement, and disaster recovery.
- Cold: This option is used for cold data that is rarely accessed (every 90 days or less), or inactive data. Common use cases are archives, long-term backups, historical data that you keep for compliance, or workloads and apps that are rarely accessed.
- Smart: This option is used for workloads and data that don't follow a specific usage pattern, or that are too large to determine or predict a usage pattern.
-
Decide on the level of resiliency for the data that is stored in your bucket. For more information, see Regions and endpoints.
- Cross-region: With this option, your data is stored across three regions within a geolocation for highest availability. If you have workloads that are distributed across regions, requests are routed to the nearest regional endpoint. The API endpoint for the geolocation is automatically set by the `ibmc` Helm plug-in that you installed earlier, based on the location of your cluster. For example, if your cluster is in `US South`, then your storage classes are configured to use the `US GEO` API endpoint for your buckets.
- Regional: With this option, your data is replicated across multiple zones within one region. If you have workloads that are located in the same region, you see lower latency and better performance than in a cross-regional setup. The regional endpoint is automatically set by the `ibmc` Helm plug-in that you installed earlier, based on the location of your cluster. For example, if your cluster is in `US South`, then your storage classes are configured to use `US South` as the regional endpoint for your buckets.
-
Review the detailed IBM Cloud Object Storage bucket configuration for a storage class.
oc describe storageclass <storageclass_name>
Example output
Name:                  ibmc-s3fs-standard-cross-region
IsDefaultClass:        No
Annotations:           <none>
Provisioner:           ibm.io/ibmc-s3fs
Parameters:            ibm.io/chunk-size-mb=16,ibm.io/curl-debug=false,ibm.io/debug-level=warn,ibm.io/iam-endpoint=https://iam.bluemix.net,ibm.io/kernel-cache=true,ibm.io/multireq-max=20,ibm.io/object-store-endpoint=https://s3-api.dal-us-geo.objectstorage.service.networklayer.com,ibm.io/object-store-storage-class=us-standard,ibm.io/parallel-count=2,ibm.io/s3fs-fuse-retry-count=5,ibm.io/stat-cache-size=100000,ibm.io/tls-cipher-suite=AESGCM
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
`ibm.io/chunk-size-mb`
- The size of a data chunk that is read from or written to IBM Cloud Object Storage, in megabytes. Storage classes with `perf` in their name use 52-megabyte chunks. Storage classes without `perf` in their name use 16-megabyte chunks. For example, if you want to read a file that is 1 GB, the plug-in reads this file in multiple 16 or 52-megabyte chunks.
`ibm.io/curl-debug`
- Enable the logging of requests that are sent to the IBM Cloud Object Storage service instance. If enabled, logs are sent to `syslog` and you can forward the logs to an external logging server. By default, all storage classes are set to `false` to disable this logging feature.
`ibm.io/debug-level`
- The logging level that is set by the IBM Cloud Object Storage plug-in. All storage classes are set up with the `WARN` logging level.
`ibm.io/iam-endpoint`
- The API endpoint for IBM Cloud Identity and Access Management (IAM).
`ibm.io/kernel-cache`
- Enable or disable the kernel buffer cache for the volume mount point. If enabled, data that is read from IBM Cloud Object Storage is stored in the kernel cache to ensure fast read access. If disabled, data is not cached and is always read from IBM Cloud Object Storage. Kernel cache is enabled for `standard` and `smart` storage classes, and disabled for `cold` and `vault` storage classes.
`ibm.io/multireq-max`
- The maximum number of parallel requests that can be sent to the IBM Cloud Object Storage service instance to list files in a single directory. All storage classes are set up with a maximum of 20 parallel requests.
`ibm.io/object-store-endpoint`
- The API endpoint to use to access the bucket in your IBM Cloud Object Storage service instance. The endpoint is automatically set based on the region of your cluster. If you want to access an existing bucket that is located in a different region than your cluster, you must create your own storage class and use the API endpoint for your bucket.
`ibm.io/object-store-storage-class`
- The name of the storage class.
`ibm.io/parallel-count`
- The maximum number of parallel requests that can be sent to the IBM Cloud Object Storage service instance for a single read or write operation. Storage classes with `perf` in their name are set up with a maximum of 20 parallel requests. Storage classes without `perf` are set up with two parallel requests by default.
`ibm.io/s3fs-fuse-retry-count`
- The maximum number of retries for a read or write operation before the operation is considered unsuccessful. All storage classes are set up with a maximum of five retries.
`ibm.io/stat-cache-size`
- The maximum number of records that are kept in the IBM Cloud Object Storage metadata cache. Each record can take up to 0.5 kilobytes. All storage classes set the maximum number of records to 100000 by default.
`ibm.io/tls-cipher-suite`
- The TLS cipher suite that is used when a connection to IBM Cloud Object Storage is established over the HTTPS endpoint. The value for the cipher suite must follow the OpenSSL format. If your worker nodes run an Ubuntu operating system, your storage classes use the `AESGCM` cipher suite by default. For worker nodes that run a Red Hat operating system, the `ecdhe_rsa_aes_128_gcm_sha_256` cipher suite is used by default.
For more information about each storage class, see the storage class reference. If you want to change any of the pre-set values, create your own customized storage class.
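A customized storage class can be sketched as follows. The class name and parameter values below are illustrative assumptions, not a tested configuration: copy the `Parameters` from the `oc describe storageclass` output for the class you are customizing, change only the values you need (here, the chunk size and parallel count), and then apply the file with `oc apply -f custom-sc.yaml`.

```shell
# Write a hypothetical customized storage class manifest to a file.
# All names and values are placeholders for illustration.
cat > custom-sc.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ibmc-s3fs-standard-regional-custom
provisioner: ibm.io/ibmc-s3fs
parameters:
  ibm.io/chunk-size-mb: "52"
  ibm.io/parallel-count: "20"
  ibm.io/multireq-max: "20"
  ibm.io/stat-cache-size: "100000"
  ibm.io/object-store-storage-class: standard
reclaimPolicy: Delete
EOF
echo "wrote custom-sc.yaml"
```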
-
Decide on a name for your bucket. The name of a bucket must be unique in IBM Cloud Object Storage. You can also let the IBM Cloud Object Storage plug-in automatically generate a name for your bucket. To organize data in a bucket, you can create subdirectories.
The storage class that you chose earlier determines the pricing for the entire bucket. You can't define different storage classes for subdirectories. If you want to store data with different access requirements, consider creating multiple buckets by using multiple PVCs.
-
Choose if you want to keep your data and the bucket after the cluster or the persistent volume claim (PVC) is deleted. When you delete the PVC, the PV is always deleted. You can choose if you want to also automatically delete the data and the bucket when you delete the PVC. Your IBM Cloud Object Storage service instance is independent from the retention policy that you select for your data and is never removed when you delete a PVC.
Now that you decided on the configuration that you want, you are ready to create a PVC to provision IBM Cloud Object Storage.
Verifying your installation
Review the pod details to verify that the plug-in installation succeeded.
-
Verify the installation succeeded by listing the driver pods.
oc get pod --all-namespaces -o wide | grep object
Example output
ibmcloud-object-storage-driver-9n8g8              1/1   Running   0   2m
ibmcloud-object-storage-plugin-7c774d484b-pcnnx   1/1   Running   0   2m
The installation is successful when you see one `ibmcloud-object-storage-plugin` pod and one or more `ibmcloud-object-storage-driver` pods. The number of `ibmcloud-object-storage-driver` pods equals the number of worker nodes in your cluster. All pods must be in a `Running` state for the plug-in to function properly. If the pods fail, run `oc describe pod -n ibm-object-s3fs <pod_name>` to find the root cause of the failure.
-
Verify that the storage classes are created successfully.
oc get sc | grep s3
Example output
ibmc-s3fs-cold-cross-region            ibm.io/ibmc-s3fs   8m
ibmc-s3fs-cold-regional                ibm.io/ibmc-s3fs   8m
ibmc-s3fs-smart-cross-region           ibm.io/ibmc-s3fs   8m
ibmc-s3fs-smart-perf-cross-region      ibm.io/ibmc-s3fs   8m
ibmc-s3fs-smart-perf-regional          ibm.io/ibmc-s3fs   8m
ibmc-s3fs-smart-regional               ibm.io/ibmc-s3fs   8m
ibmc-s3fs-standard-cross-region        ibm.io/ibmc-s3fs   8m
ibmc-s3fs-standard-perf-cross-region   ibm.io/ibmc-s3fs   8m
ibmc-s3fs-standard-perf-regional       ibm.io/ibmc-s3fs   8m
ibmc-s3fs-standard-regional            ibm.io/ibmc-s3fs   8m
ibmc-s3fs-vault-cross-region           ibm.io/ibmc-s3fs   8m
ibmc-s3fs-vault-regional               ibm.io/ibmc-s3fs   8m
If you want to set one of the IBM Cloud Object Storage storage classes as your default storage class, run `oc patch storageclass <storageclass> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'`. Replace `<storageclass>` with the name of the IBM Cloud Object Storage storage class.
-
Follow the instructions to add object storage to your apps.
If you're having trouble installing the IBM Cloud Object Storage plug-in, see Object storage: Installing the Object storage `ibmc` Helm plug-in fails and Object storage: Installing the IBM Cloud Object Storage plug-in fails.