Red Hat OpenShift on IBM Cloud storage overview
Review the following sections for an overview of the available storage options for your cluster.
When you're done with this page, try out the quiz.
Before you can decide what type of storage is the correct solution for your Red Hat® OpenShift® on IBM Cloud® clusters, you must understand the IBM Cloud infrastructure provider, your app requirements, the type of data that you want to store, and how often you want to access this data.
- Decide whether your data must be permanently stored.
- Persistent storage: Data stored on persistent storage persists even when the container, the worker node, or the cluster is removed. Use persistent storage for stateful apps, core business data, or data that must be retained due to legal requirements, such as a defined retention period. Persistent storage is also a good option for auditing.
- Non-persistent storage: Your data can be removed when the container, the worker node, or the cluster is removed. Non-persistent storage is typically used for logging information, such as system logs or container logs, development testing, or when you want to access data from the host's file system.
- If you must persist your data, analyze whether your app requires a specific type of storage. When you use an existing app, the app might be designed to store data in one of the following ways.
- In a file system: The data can be stored as a file in a directory. For example, you could store this file on your local hard disk. Some apps require data to be stored in a specific file system, such as `nfs` or `ext4`, to optimize the data store and achieve performance goals.
- In a database: The data must be stored in a database that follows a specific schema. Some apps come with a database interface that you can use to store your data. For example, WordPress is optimized to store data in a MySQL database. In these cases, the type of storage is selected for you.
- Determine the type of data that you want to store.
- Structured data: Data that you can store in a relational database where you have a table with columns and rows. Data in tables can be connected by using keys and is usually easy to access due to the pre-defined data model. Examples are phone numbers, account numbers, Social Security numbers, or postal codes.
- Semi-structured data: Data that does not fit into a relational database, but that comes with some organizational properties that make it easier to read and analyze. Examples are CSV, XML, or JSON files.
- Unstructured data: Data that does not follow an organizational pattern and that is so complex that you can't store it in a relational database with pre-defined data models. To access this data, you need advanced tools and software. Examples are e-mail messages, videos, photos, audio files, presentations, social media data, or web pages.
If you have structured and unstructured data, store each data type separately in a storage solution that is designed for that data type. Using an appropriate storage solution for your data type simplifies access to your data and gives you the benefits of performance, scalability, durability, and consistency.
- Analyze how you want to access your data. Storage solutions are usually designed and optimized to support read or write operations.
- Read-only: You don't want to write or change your data. Your data is read-only.
- Read and write: You want to read, write, and change your data. For data that is read and written, it is important to understand if the operations are read-heavy, write-heavy, or balanced.
- Determine how frequently your data is accessed.
- Hot data: Data that is accessed frequently. Common use cases are web or mobile apps.
- Cool or warm data: Data that is accessed infrequently, such as once a month or less. Common use cases are archives, short-term data retention, or disaster recovery.
- Cold data: Data that is rarely accessed, if at all. Common use cases are archives, long-term backups, historical data.
- Frozen data: Data that is not accessed and that you need to keep due to legal reasons.
If you can't predict the frequency, or the frequency does not follow a strict pattern, determine whether your workloads are read-heavy, write-heavy, or balanced. Then, look at the storage option that fits your workload and investigate which storage tier gives you the flexibility that you need. For example, IBM Cloud Object Storage provides a flex storage class that measures how frequently data is accessed in a month and takes this measurement into account to optimize your monthly billing.
- Investigate if your data must be shared across multiple app instances, zones, or regions.
- Access across pods: When you use Kubernetes persistent volumes to access your storage, you can determine the number of pods that can mount the volume at the same time. Some storage solutions can be accessed by one pod at a time only. With other storage solutions, you can share volume across multiple pods.
- Access across zones and regions: You might require your data to be accessible across zones or regions. Some storage solutions, such as file and block storage, are data center-specific and can't be shared across zones in a multizone cluster setup.
If you want to make your data accessible across zones or regions, make sure to consult your legal department to verify that your data can be stored in multiple zones or a different country.
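The sharing behavior above is expressed through the `accessModes` field of a Kubernetes persistent volume claim. A minimal sketch, assuming a storage class that supports shared access (the class name `ibmc-file-gold` is an example; check the classes available in your cluster):

```yaml
# Hypothetical claim for a volume that multiple pods can mount at once.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany          # RWX: many pods may read and write concurrently
  resources:
    requests:
      storage: 20Gi
  storageClassName: ibmc-file-gold   # example class name, not a recommendation
```

A `ReadWriteOnce` (RWO) claim restricts the volume to pods on a single worker node, while `ReadOnlyMany` (ROX) allows many pods to mount it read-only.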
- Understand other storage characteristics that impact your choice.
- Consistency: The guarantee that a read operation returns the latest version of a file. Storage solutions can provide strong consistency, when you are guaranteed to always receive the latest version of a file, or eventual consistency, when the read operation might not return the latest version. You often find eventual consistency in geographically distributed systems where a write operation first must be replicated across all instances.
- Performance: The time that it takes to complete a read or write operation.
- Durability: The guarantee that a write operation that is committed to your storage survives permanently and does not get corrupted or lost, even if gigabytes or terabytes of data are written to your storage at the same time.
- Resiliency: The ability to recover from an outage and continue operations, even if a hardware or software component fails. For example, your physical storage might experience a power outage or network outage, or be destroyed during a natural disaster.
- Availability: The ability to provide access to your data, even if a data center or a region is unavailable. Availability for your data is usually achieved by adding redundancy and setting up failover mechanisms.
- Scalability: The ability to extend capacity and customize performance based on your needs.
- Encryption: The encoding of data so that it can't be read by unauthorized users.
Non-persistent storage options
You can use non-persistent storage options if your data is not required to be persistently stored or if you want to unit-test your app components. The following table shows the available non-persistent data storage options in Red Hat OpenShift on IBM Cloud.
Characteristics | Inside the container | On the worker node's primary or secondary disk |
---|---|---|
Multizone capable | No | No |
Data types | All | All |
Capacity | Limited to the worker node's available secondary disk. To limit the amount of secondary storage that is consumed by your pod, use resource requests and limits for ephemeral storage. | Limited to the worker node's available space on the primary (`hostPath`) or secondary disk (`emptyDir`). To limit the amount of secondary storage that is consumed by your pod, use resource requests and limits for ephemeral storage. |
Data access pattern | Read and write operations of any frequency | Read and write operations of any frequency |
Access | Via the container's local file system | Via Kubernetes hostPath for access to worker node primary storage. Via Kubernetes emptyDir volume for access to worker node secondary storage. |
Performance | High | High with lower latency when you use SSD |
Resiliency | Low | Low |
Availability | Specific to the container | Specific to the worker node |
Scalability | Difficult to extend as limited to the worker node's secondary disk capacity | Difficult to extend as limited to the worker node's primary and secondary disk capacity |
Durability | Data is lost when the container crashes or is removed. | Data in `hostPath` or `emptyDir` volumes is lost when the worker node is deleted, reloaded, or updated, when the cluster is deleted, or when the IBM Cloud account reaches a suspended state. In addition, data in an `emptyDir` volume is removed when the assigned pod is permanently deleted from the worker node or is scheduled onto another worker node. |
Common use cases | Local image cache or container logs | Setting up a high-performance local cache, accessing files from the worker node file system, or running unit tests. |
Non-ideal use cases | Persistent data storage or sharing data between containers | Persistent data storage |
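The ephemeral storage limits that the table mentions are set per container. A minimal sketch of a pod that mounts the worker node's secondary disk through an `emptyDir` volume and caps its scratch space (the pod and image names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    resources:
      requests:
        ephemeral-storage: 2Gi   # scheduler places the pod only where this fits
      limits:
        ephemeral-storage: 4Gi   # the pod is evicted if it exceeds this amount
    volumeMounts:
    - name: scratch
      mountPath: /tmp/scratch
  volumes:
  - name: scratch
    emptyDir: {}                 # backed by the worker node's disk; removed with the pod
```

As the durability row notes, anything written to `/tmp/scratch` disappears when the pod is permanently deleted or rescheduled.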
Single zone clusters
If you have a single zone cluster, you can choose between the following options in Red Hat OpenShift on IBM Cloud that provide fast access to your data. For higher availability, use a storage option that is designed for geographically distributed data and, if your requirements allow it, create a multizone cluster.
The following tables show the options that you have in Red Hat OpenShift on IBM Cloud to permanently store your data in a single zone cluster.
Characteristics | Description |
---|---|
Deployment guide | Setting up File Storage for Classic. |
Ideal data types | All |
Supported provisioning type | Dynamic and static |
Data usage pattern | Random read-write operations, sequential read-write operations, or write-intensive workloads |
Access | Via file system on mounted volume |
Supported Kubernetes access modes | ReadWriteOnce (RWO), ReadOnlyMany (ROX), ReadWriteMany (RWX) |
Performance | Predictable due to assigned IOPS and size. IOPS are shared between the pods that access the volume. |
Consistency | Strong |
Durability | High |
Resiliency | Medium as specific to a data center. File storage server is clustered by IBM with redundant networking. |
Availability | Medium as specific to a data center. |
Scalability | Difficult to extend beyond the data center. You can't change an existing storage tier. |
Encryption | At rest |
Backup and recovery | Set up periodic snapshots, replicate snapshots, duplicate storage, back up data to IBM Cloud Object Storage, or copy data to and from pod and containers. |
Common use cases | Mass or single file storage or file sharing across a single zone cluster. |
Non-ideal use cases | Multizone clusters or geographically distributed data. |
Characteristics | Description |
---|---|
Deployment guide | Setting up Block Storage for Classic. |
Ideal data types | All |
Supported provisioning type | Dynamic and static |
Data usage pattern | Random read-write operations, sequential read-write operations, or write-intensive workloads |
Access | Via file system on mounted volume. |
Supported Kubernetes access modes | ReadWriteOnce (RWO) |
Performance | Predictable due to assigned IOPS and size. IOPS are not shared between pods. |
Consistency | Strong |
Durability | High |
Resiliency | Medium as specific to a data center. Block storage server is clustered by IBM with redundant networking. |
Availability | Medium as specific to a data center. |
Scalability | Difficult to extend beyond the data center. You can't change an existing storage tier. |
Encryption | At rest. |
Backup and recovery | Set up periodic snapshots, replicate snapshots, duplicate storage, back up data to IBM Cloud Object Storage, or copy data to and from pod and containers. |
Common use cases | Stateful sets, backing storage when you run your own database, or high-performance access for single pods. |
Non-ideal use cases | Multizone clusters, geographically distributed data, or sharing data across multiple app instances. |
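Because block storage volumes are `ReadWriteOnce`, stateful sets typically give each replica its own claim through `volumeClaimTemplates`. A sketch under assumed names (the storage class `ibmc-block-gold` and the image are examples only):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: registry.example.com/db:latest   # placeholder database image
        volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]       # one volume per replica
      storageClassName: ibmc-block-gold    # example class name
      resources:
        requests:
          storage: 50Gi
```

Each replica (`db-0`, `db-1`, `db-2`) keeps its own volume across rescheduling, which matches the single-pod, high-performance access pattern in the table.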
Characteristic | Description |
---|---|
Deployment guide | Setting up File Storage for VPC. |
Ideal data types | All |
Supported provisioning type | Dynamic and static |
Data usage pattern | Random read-write operations, sequential read-write operations, or write-intensive workloads |
Access | Via file system on mounted volume |
Supported Kubernetes access modes | ReadWriteOnce (RWO), ReadWriteMany (RWX) |
Performance | Predictable due to assigned IOPS and size. IOPS are not shared between pods. |
Consistency | Strong |
Durability | High |
Resiliency | Medium as specific to a data center. File storage server is clustered by IBM with redundant networking. |
Availability | Medium as specific to a data center. |
Scalability | Difficult to extend beyond the data center. You can't change an existing storage tier. |
Encryption | None |
Backup and recovery | Run `kubectl cp` to copy data to and from pods and containers. |
Common use cases | Mass or single file storage or file sharing across a single zone cluster. |
Non-ideal use cases | Multizone clusters, geographically distributed data, or sharing data across multiple app instances. |
Characteristics | Description |
---|---|
Deployment guide | Setting up Block Storage for VPC. |
Multizone-capable | No, as specific to a data center. Data can't be shared across zones, unless you implement your own data replication. |
Ideal data types | All |
Data usage pattern | Random read-write operations, sequential read-write operations, or write-intensive workloads |
Access | Via file system on mounted volume |
Supported Kubernetes access modes | ReadWriteOnce (RWO) |
Performance | Predictable due to assigned IOPS and size. IOPS are not shared between pods. |
Consistency | Strong |
Durability | High |
Resiliency | Medium as specific to a data center. Block storage server is clustered by IBM with redundant networking. |
Availability | Medium as specific to a data center. |
Scalability | Difficult to extend beyond the data center. You can't change an existing storage tier. |
Encryption | At rest. Bring your own key with IBM Key Protect. |
Backup and recovery | Set up periodic snapshots, replicate snapshots, duplicate storage, back up data to IBM Cloud Object Storage, or copy data to and from pod and containers. |
Common use cases | Stateful sets, backing storage when you run your own database, or high-performance access for single pods. |
Non-ideal use cases | Multizone clusters, geographically distributed data, or sharing data across multiple app instances. |
Multizone clusters
The following sections show the options that you have in Red Hat OpenShift on IBM Cloud to permanently store your data in a multizone cluster and make your data highly available. You can use these options in a single zone cluster, but you might not get the high availability benefits that your app requires.
Characteristic | Description |
---|---|
Deployment guide | Setting up IBM Cloud Object Storage. |
Supported infrastructure providers | Classic, VPC, Satellite |
Ideal data types | Semi-structured and unstructured data |
Data usage pattern | Read-intensive workloads. Few or no write operations. |
Access | Via file system on mounted volume (plug-in) or via REST API from your app |
Supported Kubernetes access modes | ReadWriteMany (RWX) |
Performance | High for read operations. Predictable due to assigned IOPS and size when you use non-SDS machines. |
Consistency | Eventual |
Durability | Very high as data slices are dispersed across a cluster of storage nodes. Every node stores only a part of the data. |
Resiliency | High as data slices are dispersed across three zones or regions. Medium, when set up in a single zone region only. |
Availability | High due to the distribution across zones or regions. |
Scalability | Scales automatically |
Encryption | In transit and at rest |
Backup and recovery | Data is automatically replicated across multiple nodes for high durability. For more information, see the SLA in the IBM Cloud Object Storage service terms. |
Common use cases | Geographically distributed data, static big data, static multimedia content, web apps, backups, archives, stateful sets. |
Non-ideal use cases | Write-intensive workloads, random write operations, incremental data updates, or transaction databases. |
Characteristics | Description |
---|---|
Deployment guide | Setting up Portworx. |
Supported infrastructure providers | Classic, VPC, Satellite |
Ideal data types | Any |
Data usage pattern | Random read-write operations, sequential read-write operations, or write-intensive workloads |
Access | Via file system on mounted volume |
Supported Kubernetes access modes | ReadWriteOnce (RWO), ReadOnlyMany (ROX), ReadWriteMany (RWX) |
Performance | Close to bare metal performance for sequential read and write operations when you use SDS machines. Create a storage layer based on the storage class performance (IOPS) that you need. |
Consistency | Strong |
Durability | Very high as three copies of your data are always maintained. |
Resiliency | High when set up with replication across three zones. Medium, when you store data in a single zone only. |
Availability | High when you replicate data across three worker nodes in different zones. |
Scalability | Increase volume capacity by resizing the volume. To increase overall storage layer capacity, add worker nodes or remote block storage. |
Encryption | Bring your own key to protect your data in transit and at rest with IBM Key Protect. |
Backup and recovery | Use local or cloud snapshots to save the current state of a volume. For more information, see Create and use local snapshots. |
Common use cases | Multizone clusters, stateful sets, backing storage when you run your own database, or high-performance access for single pods. |
Non-ideal use cases | Clusters with fewer than three worker nodes. |
Characteristic | Description |
---|---|
Deployment guide | Setting up OpenShift Data Foundation. |
Supported infrastructure providers | Classic, VPC, Satellite |
Ideal data types | Any |
Data usage pattern | Write-intensive workloads. Random read and write operation. Sequential read and write operations. |
Access | Via file system on mounted volume or via REST API from your app. |
Supported Kubernetes access modes | All |
Performance | Close to bare metal performance for sequential read and write operations when you use SDS machines. Create a storage layer based on the storage class performance (IOPs) that you need. |
Consistency | Strong |
Durability | Very high as three copies of your data are always maintained. |
Resiliency | High when set up with replication across three zones. Medium, when you store data in a single zone only. |
Availability | High when you replicate data across three worker nodes in different zones. |
Scalability | Increase volume capacity by resizing the volume. To increase overall storage layer capacity, you must add worker nodes or remote block storage. Both scenarios require monitoring of capacity by the user. |
Encryption | Bring your own key with IBM Key Protect or HPCS. In-transit and at rest. |
Backup and recovery | Use local or cloud snapshots to save the current state of a volume. |
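Where the installed storage driver supports the Kubernetes snapshot API, saving the current state of a volume can be expressed declaratively. A sketch, assuming a CSI driver with a snapshot class installed (both names here are placeholders):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-snap
spec:
  volumeSnapshotClassName: example-snapclass   # depends on the installed CSI driver
  source:
    persistentVolumeClaimName: data            # existing claim to snapshot
```

The snapshot can later be referenced as the `dataSource` of a new persistent volume claim to restore the volume.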
Characteristics | Description |
---|---|
Deployment guide | Connect a Cloud Databases deployment to an IBM Cloud Kubernetes Service application. |
Supported infrastructure providers | Classic, VPC, Satellite |
Ideal data types | Depends on the DBaaS |
Data usage pattern | Read-write-intensive workloads |
Access | Via REST API from your app. |
Supported Kubernetes access modes | N/A as accessed from the app directly. |
Performance | High if deployed to the same data center as your app. |
Consistency | Depends on the DBaaS |
Durability | High |
Resiliency | Depends on the DBaaS and your setup. |
Availability | High if you set up multiple instances. |
Scalability | Scales automatically |
Encryption | At rest |
Backup and recovery | Depends on the DBaaS |
Common use cases | Multizone clusters, relational and non-relational databases, or geographically distributed data. |
Non-ideal use cases | Apps that are designed to write to a file system. |
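Since a database-as-a-service is reached over its network endpoint rather than a mounted volume, apps typically consume the service credentials from a Kubernetes secret. A sketch, assuming a secret named `db-credentials` that already holds the connection string:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    env:
    - name: DATABASE_URL
      valueFrom:
        secretKeyRef:
          name: db-credentials      # assumed secret with the service credentials
          key: connection-string    # assumed key name
```

The app then connects with its database driver as it would outside the cluster; no persistent volume claim is involved.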
Next steps
Test your knowledge with a quiz.
To continue the planning process, document your environment architecture.