Changes compared to v33.2.0
Components
- cluster-cloud-director from v2.4.1 to v2.4.2
- cluster from v4.4.0 to v4.4.1
cluster v4.4.0…v4.4.1
Changed
- Control Plane: Make etcd image tag configurable. (#841)
- `gsoci.azurecr.io`.
- Added `io.giantswarm.application.audience` and `io.giantswarm.application.managed` chart annotations for Backstage visibility.
- Fixed controller Vertical Pod Autoscaler (VPA) resource syntax.
- Removed PodSecurityPolicy.
- Removed `global.podSecurityStandards.enforced` helm value.
- Removed `resource.psp` helm value.
- Moved team label in `Chart.yaml` to OpenContainers format (`io.giantswarm.application.team`).
- Removed PodSecurityPolicy.
- Removed `global.podSecurityStandards.enforced` helm value.
- Removed `app` label, which is already added by the selector helper.

observability-policies

- Moved team label in `Chart.yaml` to OpenContainers format (`io.giantswarm.application.team`).
- `Chart.Version` used in labels. This is needed because Flux appends the digest to the version using the `+` character, which is not allowed in labels.
- Updated `kyverno` (app) to v0.23.0.
- Updated `kyverno-crds` (app) to v1.16.0.
- Updated `reports-server` (app) to v0.1.0.
- Updated `cloudnative-pg` (app) to v0.0.13.
- Updated `kubescape` (app) to v0.0.5.
- Updated `starboard-exporter` (app) to v1.0.2.
- Added `io.giantswarm.application.audience` and `io.giantswarm.application.managed` chart annotations for Backstage visibility.
- Added `fix-dns-nic-allocation.sh` Ignition script to attach DNS servers to correct network interfaces.
- `/run/metadata/coreos` (`ntpd` unit).

tl;dr: Please first upgrade your existing cluster to Giant Swarm Release v33.2.0 for VMware Cloud Director or newer before upgrading to this release! Otherwise, you risk service outage and severe issues.
Giant Swarm Release v34.0.0 for VMware Cloud Director comes with Kubernetes v1.34. This version contains etcd v3.6, which makes use of the so-called v3 store by default. Before, with etcd v3.5, the v2 store was used by default and synchronized to the already existing v3 store.
Various flaws could lead to an inconsistency between the old v2 store and the already present but unused v3 store in etcd v3.5 and earlier. Because of this, new etcd v3.6 members, which start using this v3 store, might suffer from these inconsistencies.
This can come into play when upgrading a cluster to this and future releases from any release older than Giant Swarm Release v33.2.0 for VMware Cloud Director. For this reason, we require you to first upgrade your cluster to Giant Swarm Release v33.2.0 for VMware Cloud Director or newer before upgrading to this or future releases.
This release introduces optional support for Kubernetes Structured Authentication Configuration for OIDC providers. We recommend testing this feature on a non-production cluster first.
```yaml
global:
  controlPlane:
    oidc:
      structuredAuthentication:
        enabled: true
        issuers:
          - issuerUrl: https://your-idp.example.com
            clientId: kubernetes
```

```yaml
global:
  controlPlane:
    oidc:
      structuredAuthentication:
        enabled: true
        issuers:
          - issuerUrl: https://your-idp.example.com
            clientId: kubernetes
            usernameClaim: email # Optional: use 'email' instead of 'sub'
            groupsClaim: roles # Optional: use 'roles' instead of 'groups'
            usernamePrefix: "oidc:" # Optional: prefix usernames
            groupsPrefix: "oidc:" # Optional: prefix groups
```
If you already use OIDC with the legacy configuration, add `structuredAuthentication.enabled: true` to migrate:
```yaml
global:
  controlPlane:
    oidc:
      issuerUrl: https://your-idp.example.com
      clientId: kubernetes
      structuredAuthentication:
        enabled: true
```
This will automatically convert your legacy configuration to the new structured format.
Additional configuration options are available for more complex setups, including:
- `audiences`, `audienceMatchPolicy`
- `discoveryUrl`
- `caPem`
- `claimValidationRules`, `userValidationRules`
- `claimMappings`

Refer to the Kubernetes Structured Authentication documentation for details.
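As an illustrative sketch of some of these options (the fields under `issuers` follow the upstream Kubernetes AuthenticationConfiguration API; the exact chart schema may differ, so verify against the cluster chart's values schema), a cluster could accept multiple audiences and require a verified e-mail claim:

```yaml
global:
  controlPlane:
    oidc:
      structuredAuthentication:
        enabled: true
        issuers:
          - issuerUrl: https://your-idp.example.com
            audiences:
              - kubernetes
              - kubectl-login            # accept tokens minted for either client
            audienceMatchPolicy: MatchAny
            claimValidationRules:
              - claim: email_verified    # reject tokens without a verified e-mail
                requiredValue: "true"
```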
- Added `fix-dns-nic-allocation.sh` Ignition script to attach DNS servers to correct network interfaces.
- Added `priority-classes` default app, enabled by default. This app provides standardised PriorityClass resources like `giantswarm-critical` and `giantswarm-high`, which should replace the previous inconsistent per-app priority classes.
- Added `"helm.sh/resource-policy": keep` annotation to the VCDCluster CR so that it doesn't get removed by Helm when uninstalling this chart. The CAPI controllers will take care of removing it, following the expected deletion order.
- `/run/metadata/coreos` (`ntpd` unit).
- Updated `cluster` to v5.1.2.
- Updated `cluster` to v5.1.1.
- Updated `cluster` to v5.1.0.
- Updated `cluster` to v5.0.0.
- Added `kube_servicemonitor_info` and `kube_podmonitor_info` for ServiceMonitor and PodMonitor resources.
- Added `kube_podlog_info` for the PodLog resource.
- Updated `kube-prometheus-stack-app` to 19.0.0.
- Renamed `edgedb` to `gel`.
- Updated `cloudnative-pg` (app) to v0.0.12.
- Updated `gel` (app) to v1.0.1.

This patch release fixes an issue with the installation of the Teleport Kube Agent app.
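As context for the `priority-classes` app mentioned above: workloads opt into one of the provided classes via the standard Kubernetes `priorityClassName` field. The class names come from the changelog entry; the Deployment itself is a hypothetical sketch:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app              # hypothetical workload, for illustration only
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      priorityClassName: giantswarm-high  # PriorityClass provided by the priority-classes app
      containers:
        - name: app
          image: nginx:1.27
```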
- Updated `coredns` image to 1.13.2.

This release updates Kubernetes to v1.33.6 and Flatcar to v4459.2.1, and includes various component upgrades.
- Updated `cluster` to v4.4.0.
- Updated `cluster` to v4.3.0.
- Updated `kubescape` (app) to version v0.0.4.
- Updated `kyverno` (app) to v0.21.1.
- Updated `kyverno-crds` (app) to v1.15.0.
- Updated `kyverno` (app) to v0.20.1.
- Updated `kyverno-crds` (app) to v1.14.0.
- Updated `kyverno-policies` (app) to v0.24.0.
- Updated `reports-server` (app) to v0.0.3.
- Added `ephemeral-storage` requests and limits to satisfy Kyverno policy `require-emptydir-requests-and-limits`.

This release updates Flatcar to v4230.2.4 and includes several app updates and improvements.
- `cainjector-service`.
- Updated `coredns` image to 1.13.1.
- Updated `coredns` image to 1.13.0.
- Updated `kyverno` (app) to v0.20.1.
- Updated `kyverno-crds` (app) to v1.14.0.
- Updated `kyverno-policies` (app) to v0.24.0.
- Updated `reports-server` (app) to v0.0.3.
- `kyverno` update (#536, #531, #538).
- Updated `kyverno-policy-operator` (app) to v0.1.6.
- Updated `kyverno` (app) to v0.20.0.
- Updated `kyverno-crds` (app) to v1.14.0.
- Updated `kyverno-policies` (app) to v0.24.0.
- Updated `kyverno-policy-operator` (app) to v0.1.5.
- Updated `trivy-operator` (app) to v0.12.1.
- Updated `trivy` (app) to v0.14.1.
- Updated `falco` (app) to v0.11.0.

WARNING: With Flatcar 4230.2.0, cgroups v1 backwards compatibility has been removed. This means that enabling legacy cgroups v1 is no longer supported and nodes still using them will fail to update.
- Updated `cluster` to v3.0.1.
- The `.internal.advancedConfiguration.cgroupsv1` and `.global.nodePools.().cgroupsv1` flags have been removed.
- Updated `cluster` to v2.6.2.
- Updated `cluster` to v2.6.1.
- `alloy` ingress rules for cainjector metrics ingestion.
- Updated `coredns` image to 1.12.3.
- Updated `kube-prometheus-stack-app` to 18.1.0.
- Fixed `cluster-api-monitoring-app` so that the `cluster_id` label points to the workload cluster name as expected in some alert definitions.
- Updated `kube-prometheus-stack` to 77.0.1.
- Updated `kube-prometheus-stack` to 76.4.0.

This release updates the cluster-cloud-director chart and the underlying cluster chart to address an issue around Helm values schema validation uncovered by newer Helm versions.
- Updated `cluster` to v2.5.1.