1.29 version information and update actions
Review information about version 1.29 of IBM Cloud® Kubernetes Service. For more information about Kubernetes project version 1.29, see the Kubernetes change log.
IBM Cloud Kubernetes Service is a Certified Kubernetes product for version 1.29 under the CNCF Kubernetes Software Conformance Certification program. Kubernetes® is a registered trademark of The Linux Foundation in the United States and other countries, and is used pursuant to a license from The Linux Foundation.
Release timeline
The following table includes the expected release timeline for version 1.29 of IBM Cloud® Kubernetes Service. You can use this information for planning purposes, such as to estimate the general time that the version might become unsupported.
Dates that are marked with a dagger (†) are tentative and subject to change.
| Version | Supported? | IBM Cloud Kubernetes Service release date | Unsupported date |
|---------|------------|-------------------------------------------|------------------|
| 1.29 | Yes | 14 February 2024 | 23 April 2025 † |
Preparing to update
This information summarizes updates that are likely to have an impact on deployed apps when you update a cluster to version 1.29. For a complete list of changes, review the community Kubernetes change log and IBM version change log for version 1.29. You can also review the Kubernetes helpful warnings.
Update before master
The following table shows the actions that you must take before you update the Kubernetes master.
| Type | Description |
|------|-------------|
| Unsupported: `v1beta2` version of the FlowSchema and PriorityLevelConfiguration API | Migrate manifests and API clients to use the `flowcontrol.apiserver.k8s.io/v1beta3` API version, which is available since Kubernetes version 1.26. For more information, see Deprecated API Migration Guide - v1.29. |
| Unsupported: CronJob timezone specifications | When creating a CronJob resource, setting the `CRON_TZ` or `TZ` timezone specifications by using `.spec.schedule` is no longer allowed. Migrate your CronJob resources to use `.spec.timeZone` instead. See Unsupported TimeZone specification for details. |
| Updated extra user claim prefix | In the Kubernetes API server auditing records, extra user claim information is prefixed with `cloud.ibm.com/authn_` instead of `authn_`. If your apps parse this information, update them accordingly. |
| Tigera operator namespace migration | The Tigera operator component is added and manages the Calico installation. As a result, Calico resources run in the `calico-system` and `tigera-operator` namespaces instead of `kube-system`. These namespaces are configured to be privileged like the `kube-system` namespace. During an upgrade, the Tigera operator migrates Calico resources and customizations from the `kube-system` namespace to the `calico-system` namespace. You can continue normal cluster operations during the migration. Note that the migration might still be in progress after the cluster master upgrade is completed. If your apps or operations tooling rely on Calico running in the `kube-system` namespace, update them accordingly. Before updating your cluster master, review the steps in the Understanding the Tigera Operator namespace migration section. |
| Calico custom resource short names | For new clusters, the Calico custom resource short names `gnp` and `heps` are removed from the `globalnetworkpolicies.crd.projectcalico.org` and `hostendpoints.crd.projectcalico.org` custom resource definitions. Upgraded clusters retain the short names. Either way, if your `kubectl` commands rely on the short names, update them to use the standard names `globalnetworkpolicies` and `hostendpoints` instead. |
| Legacy service account token cleanup | The Kubernetes legacy service account token cleaner automatically labels, invalidates, and deletes unused legacy service account tokens. Tokens are labeled and invalidated when unused for one year, and then deleted if unused for another year. Run `kubectl get secrets -A -l kubernetes.io/legacy-token-last-used -L kubernetes.io/legacy-token-last-used` to determine when a legacy service account token was last used, and `kubectl get secrets -A -l kubernetes.io/legacy-token-invalid-since -L kubernetes.io/legacy-token-invalid-since` to determine whether any legacy service account tokens are invalid and candidates for future deletion. Tokens labeled as invalid can be reactivated by removing the `kubernetes.io/legacy-token-invalid-since` label. For more information about these labels, see `kubernetes.io/legacy-token-last-used` and `kubernetes.io/legacy-token-invalid-since`. |
| Removed: Node `Hostname` address type | Kubernetes no longer adds the `Hostname` address type to `.status.addresses`. If you rely on the previous behavior, migrate to the `InternalIP` address type instead. For more information, see the related Kubernetes community issue. |
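As an illustration of the CronJob change described in the table, a manifest that previously embedded `CRON_TZ` in `.spec.schedule` can be migrated to the dedicated `.spec.timeZone` field. The following is a minimal sketch; the job name, schedule, image, and timezone are hypothetical examples, not values from this documentation:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report          # hypothetical example name
spec:
  # Previously: schedule: "CRON_TZ=America/New_York 0 2 * * *"  (no longer allowed)
  schedule: "0 2 * * *"
  timeZone: "America/New_York"  # dedicated field for the timezone specification
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: busybox
            command: ["sh", "-c", "echo generating report"]
```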
Update after master
The following table shows the actions that you must take after you update the Kubernetes master.
| Type | Description |
|------|-------------|
| Tigera operator namespace migration | After the cluster master upgrade is completed, the Tigera operator namespace migration might still be in progress. For larger clusters, the migration might take several hours to complete. During the migration, Calico resources exist in both the `kube-system` and `calico-system` namespaces. When the migration is completed, Calico resources are fully removed from the `kube-system` namespace with the next cluster master operation. Wait for the migration to complete before upgrading worker nodes, because updating, reloading, replacing, or adding nodes during the migration makes the process take longer to complete. The migration must be completed before your cluster is allowed to upgrade to version 1.30. To verify that the migration is complete, run the following commands. If no data is returned, the migration is complete: `kubectl get nodes -l projectcalico.org/operator-node-migration --no-headers --ignore-not-found ; kubectl get deployment calico-typha -n kube-system -o name --ignore-not-found`. If your apps or operations tooling rely on Calico running in the `kube-system` namespace, update them accordingly. |
| Customizing Calico configuration | The way you customize your Calico configuration has changed. Your existing customizations are migrated for you. However, to customize your Calico configuration in the future, see Changing the Calico maximum transmission unit (MTU) and Disabling the port map plug-in. |
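The two verification commands from the table can be combined into a small polling loop that waits for the migration to finish. This is a sketch, not an official tool: the function name and the 60-second interval are arbitrary choices, and the loop assumes `kubectl` is configured for the target cluster.

```shell
#!/bin/sh
# Poll until the Tigera operator namespace migration is complete.
# Both kubectl commands return no output once the migration has finished.
wait_for_tigera_migration() {
  while kubectl get nodes -l projectcalico.org/operator-node-migration \
          --no-headers --ignore-not-found 2>/dev/null | grep -q . \
     || kubectl get deployment calico-typha -n kube-system -o name \
          --ignore-not-found 2>/dev/null | grep -q .
  do
    echo "Tigera migration still in progress; checking again in 60s..."
    sleep 60
  done
  echo "Tigera migration complete."
}

wait_for_tigera_migration
```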
Understanding the Tigera Operator namespace migration
In version 1.29, the Tigera Operator was introduced to manage Calico resources. All the Calico components are migrated from the `kube-system` namespace to the `calico-system` namespace. During the master upgrade process, a new deployment called Tigera Operator appears, which manages the migration process and the lifecycle of Calico components, such as `calico-node`, `calico-typha`, and `calico-kube-controllers`.
Before performing a master update, if your cluster has any tainted nodes, make sure that you have at least 6 untainted nodes so that the `calico-typha` pods can be migrated effectively. If 6 untainted nodes are not possible, you can alternatively complete the following steps.
1. Make sure that you have at least 4 untainted nodes.

2. Immediately before the master update to 1.29, run the following command to reduce the `kube-system/calico-typha` pod requirements to 1.

   ```sh
   kubectl scale deploy -n kube-system calico-typha --replicas 1
   ```

3. Before updating, check the status of the Calico components. The operator can start its job only when all the Calico components are healthy, up, and running. If the Calico components are healthy, the rollout status returned by the command for each component is `successfully rolled out`.

   ```sh
   kubectl rollout status -n kube-system deploy/calico-typha deploy/calico-kube-controllers ds/calico-node
   ```

4. Continue the master update.

5. After the master upgrade is finished, some Calico resources might remain in the `kube-system` namespace. These resources are no longer used by Calico and are no longer needed after the migration is complete. The next master operation removes them. Do not remove them yourself.

   ```
   - "kind": "ConfigMap", "name": "calico-config"
   - "kind": "Secret", "name": "calico-bgp-password"
   - "kind": "Service", "name": "calico-typha"
   - "kind": "Role", "name": "calico-node-secret-access"
   - "kind": "RoleBinding", "name": "calico-node-secret-access"
   - "kind": "PodDisruptionBudget", "name": "calico-kube-controllers"
   - "kind": "PodDisruptionBudget", "name": "calico-typha"
   - "kind": "ServiceAccount", "name": "calico-cni-plugin"
   - "kind": "ServiceAccount", "name": "calico-kube-controllers"
   - "kind": "ServiceAccount", "name": "calico-node"
   - "kind": "ClusterRole", "name": "calico-cni-plugin-migration"
   - "kind": "ClusterRole", "name": "calico-kube-controllers-migration"
   - "kind": "ClusterRoleBinding", "name": "calico-cni-plugin-migration"
   - "kind": "ClusterRoleBinding", "name": "calico-kube-controllers-migration"
   ```