Logging for clusters
For cluster and app logs, Red Hat® OpenShift® on IBM Cloud® clusters include built-in tools to help you manage the health of a single cluster instance. You can also set up IBM Cloud tools for multi-cluster analysis and other use cases, such as the IBM Cloud Kubernetes Service cluster add-ons IBM Cloud Logs and IBM Cloud Monitoring.
Understanding options for logging
To help understand when to use the built-in Red Hat OpenShift tools or IBM Cloud integrations, review the following information.
- IBM Cloud Logs
  - Customizable user interface for live log tailing, real-time troubleshooting, issue alerts, and log archiving.
  - Quick integration with the cluster via a script.
  - Aggregated logs across clusters and cloud providers.
  - Historical access to logs, based on the plan you choose.
  - Highly available, scalable, and compliant with industry security standards.
  - Integrated with IBM Cloud IAM for user access management.
  - View cluster management events that are generated by the Red Hat OpenShift on IBM Cloud API. To access these logs, provision an instance of IBM Cloud Logs. For more information about the types of IBM Cloud Kubernetes Service events that you can track, see Activity Tracker events.
- Built-in Red Hat OpenShift logging tools
  - Built-in view of pod logs in the Red Hat OpenShift web console.
  - Built-in pod logs are not configured with persistent storage. You must integrate with a cloud database to back up the logging data and make it highly available, and you must manage the logs yourself.
  To set up an OpenShift Container Platform Elasticsearch, Fluentd, and Kibana (EFK) stack, see Installing the cluster logging operator. Keep in mind that your worker nodes must have at least 4 cores and 32 GB memory to run the cluster logging stack.
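For orientation, the pieces of the EFK stack come together in a single ClusterLogging custom resource. The following is a minimal sketch, assuming the Red Hat OpenShift Logging operator is installed in the openshift-logging namespace; the storage class shown is for classic clusters and the sizes are illustrative, so check the cluster logging operator steps later in this topic for provider-specific values.

```yaml
# Minimal ClusterLogging instance sketch (illustrative values).
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance            # the operator expects this exact name
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: SingleRedundancy
      storage:
        storageClassName: ibmc-block-gold   # classic clusters; see below for VPC
        size: 200G
  visualization:
    type: kibana
    kibana:
      replicas: 1
  collection:
    logs:
      type: fluentd
      fluentd: {}
```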
- Built-in Red Hat OpenShift audit logging tools
  - API audit logging to monitor user-initiated activities is currently not supported.
Migrating logging and monitoring agents to Cloud Logs
The observability CLI plug-in (ibmcloud ob) and the v2/observe API endpoints are deprecated, and support ends on 28 March 2025. There is no direct replacement, but you can now manage your logging and monitoring integrations from the console or through Helm charts. For the latest steps, see Managing the Logging agent for Red Hat OpenShift on IBM Cloud clusters and Working with the Red Hat OpenShift monitoring agent.
- What happens after 28 March 2025?
- You can no longer use the ob plug-in, Terraform, or the API to install observability agents on a cluster or to modify your existing configuration. Sysdig agents continue to send metrics to the specified IBM Cloud Monitoring instance. LogDNA agents can no longer send logs because IBM Cloud Log Analysis is replaced by IBM Cloud Logs.
- What needs to be done before 28 March 2025?
- If you are still using LogDNA, migrate to IBM Cloud Logs.
- If you used the observability (ob) plug-in to install LogDNA or Sysdig agents on your cluster, uninstall the agents and reinstall them by using the Container dashboard, Terraform, or manual steps.
Reviewing your observability agents
The observability plug-in installs Sysdig and LogDNA agents in the ibm-observe namespace.
Before 28 March 2025:
- If needed, install the ob plug-in.
- List your logging configs.
ibmcloud ob logging config list --cluster CLUSTER
- List your monitoring configs.
ibmcloud ob monitoring config list --cluster CLUSTER
If there is no logging or monitoring config, then any observability agents in the cluster were not installed with the IBM Cloud Kubernetes Service (IKS) observability (ob) plug-in.
After 28 March 2025:
- Review the configmaps in the ibm-observe namespace.
kubectl get cm -n ibm-observe
Example output:
NAME                                   DATA   AGE
e405f1fc-feba-4350-9337-e7e249af871c   6      25m
f59851a6-ede6-4719-afa0-eee7ce65eeb5   6      20m
- Observability agents that were installed by the observability plug-in use a configmap that is named with the GUID of the IBM Cloud Monitoring or IBM Cloud Log Analysis instance that logs or metrics are sent to. If your cluster has agents in a namespace other than ibm-observe, or the configmaps in ibm-observe are not named with the instance GUIDs, then these agents were not installed with the IKS observability (ob) plug-in.
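The GUID-naming convention can also be checked mechanically. The following is a minimal sketch that tests whether configmap names match the 8-4-4-4-12 hexadecimal GUID pattern; the first sample name comes from the example output above, and the second is a hypothetical custom agent config.

```shell
# Check whether configmap names look like instance GUIDs (the naming convention
# the ob plug-in uses). Replace the sample names with your own
# `kubectl get cm -n ibm-observe` results.
guid_regex='^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$'
for cm in e405f1fc-feba-4350-9337-e7e249af871c my-custom-agent-config; do
  if printf '%s\n' "$cm" | grep -Eq "$guid_regex"; then
    echo "$cm: named with an instance GUID (likely installed by the ob plug-in)"
  else
    echo "$cm: not named with an instance GUID (not installed by the ob plug-in)"
  fi
done
```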
Removing the observability plug-in agents
- Before 28 March 2025, you can still use the ob plug-in to delete your observability configs.
ibmcloud ob logging config delete --cluster <cluster> --instance <logging instance guid>
ibmcloud ob monitoring config delete --cluster <cluster> --instance <monitoring instance guid>
- After 28 March 2025, when support for the ob plug-in ends, you must delete each component individually.
- Clean up the daemonsets and configmaps.
kubectl delete daemonset logdna-agent -n ibm-observe
kubectl delete daemonset sysdig-agent -n ibm-observe
kubectl delete configmap <logdna-configmap> -n ibm-observe
kubectl delete configmap <sysdig-configmap> -n ibm-observe
- Optional: After no other resources are running in the namespace, delete the namespace.
kubectl delete namespace ibm-observe
After the plug-in agents are removed, reinstall the logging and monitoring agents in your cluster by using the Cluster dashboard, Terraform, or manual steps.
Using the cluster logging operator
To deploy the OpenShift Container Platform cluster logging operator and stack on your Red Hat OpenShift on IBM Cloud cluster, see the Red Hat OpenShift documentation. Additionally, you must update the cluster logging instance to use an IBM Cloud Block Storage storage class.
- Prepare your worker pool to run the operator.
  - Create a VPC or classic worker pool with a flavor of at least 4 cores and 32 GB memory and 3 worker nodes.
  - Label the worker pool.
  - Taint the worker pool so that other workloads can't run on the worker pool.
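The three preparation steps above can be sketched with the IBM Cloud CLI. The cluster name, pool name, and the mx2.4x32 flavor (4 vCPU, 32 GB) are placeholder assumptions, and the flag names should be verified against `ibmcloud oc worker-pool --help` for your CLI version; the commands are echoed here for review rather than run.

```shell
# Placeholder names; adjust for your account. mx2.4x32 is an assumed VPC
# flavor that meets the 4-core / 32 GB minimum; check `ibmcloud oc flavors`.
CLUSTER=mycluster
POOL=clo-efk-pool

# Echo the commands for review; drop the leading `echo` to run them for real.
echo ibmcloud oc worker-pool create vpc-gen2 --name "$POOL" --cluster "$CLUSTER" --flavor mx2.4x32 --size-per-zone 3
echo ibmcloud oc worker-pool label set --worker-pool "$POOL" --cluster "$CLUSTER" --label logging=clo-efk
echo ibmcloud oc worker-pool taint set --worker-pool "$POOL" --cluster "$CLUSTER" --taint app=clo-efk:NoExecute
```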
- From the Red Hat OpenShift web console Administrator perspective, click Operators > Installed Operators.
- Click Cluster Logging.
- In the Provided APIs section, on the Cluster Logging tile, click Create Instance.
- Modify the configuration YAML to change the storage class for the Elasticsearch log storage from gp2 to one of the following storage classes, depending on your cluster infrastructure provider.
  - Classic clusters: ibmc-block-gold
  - VPC clusters: ibmc-vpc-block-10iops-tier

  ...
  elasticsearch:
    nodeCount: 3
    redundancyPolicy: SingleRedundancy
    storage:
      storageClassName: ibmc-block-gold # or ibmc-vpc-block-10iops-tier for VPC clusters
      size: 200G
  ...
- Modify the configuration YAML to include the node selector and toleration for the worker pool label and taint that you previously created. For more information and examples, see the following Red Hat OpenShift documents. The examples use a label and toleration of logging: clo-efk.
  - Node selector. Add the node selector to the Elasticsearch (logStore), Kibana (visualization), and Fluentd (collection.logs) pods.

    spec:
      logStore:
        elasticsearch:
          nodeSelector:
            logging: clo-efk
      ...
      visualization:
        kibana:
          nodeSelector:
            logging: clo-efk
      ...
      collection:
        logs:
          fluentd:
            nodeSelector:
              logging: clo-efk

  - Toleration. Add the toleration to the Elasticsearch (logStore), Kibana (visualization), and Fluentd (collection.logs) pods.

    spec:
      logStore:
        elasticsearch:
          tolerations:
          - key: app
            value: clo-efk
            operator: "Equal"
            effect: "NoExecute"
      ...
      visualization:
        kibana:
          tolerations:
          - key: app
            value: clo-efk
            operator: "Equal"
            effect: "NoExecute"
      ...
      collection:
        logs:
          fluentd:
            tolerations:
            - key: app
              value: clo-efk
              operator: "Equal"
              effect: "NoExecute"
- Click Create.
- Verify that the operator, Elasticsearch, Fluentd, and Kibana pods are all Running.