Changes and Releases
Updates on Giant Swarm workload cluster releases, apps, UI improvements and documentation changes.
Breaking change
How to migrate to v0.7.0
Please ensure you have installed [yq](https://mikefarah.gitbook.io/yq/) first.
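If you don't have `yq` yet, a minimal sketch of two common ways to install it (assuming Homebrew or a Go toolchain is available on your machine):

```bash
# Option 1: Homebrew (macOS or Linux)
brew install yq

# Option 2: Go toolchain
go install github.com/mikefarah/yq/v4@latest
```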
To migrate values to cluster-azure `v0.7.0`, we provide below a bash script which writes an `app.yaml` file that you need to apply.
This moves the existing user config values into `global`, and it also increases the `version` field of the `cluster-azure` app to `v0.7.0`. A before/after sketch follows.
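To illustrate what the migration does to the user config values, here is a minimal before/after sketch (the keys match the sections handled by the script below, but the values are illustrative):

```yaml
# Before: sections sit at the top level of the user config values
metadata:
  name: my-cluster
controlPlane:
  replicas: 3

# After: the same sections are nested under "global"
global:
  metadata:
    name: my-cluster
  controlPlane:
    replicas: 3
```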
* Log in to the management cluster and run the script (e.g. `./migrate.sh organization my-cluster`)
* Verify the `app.yaml` file and apply it to the management cluster (e.g. `kubectl apply -f app.yaml`)
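Putting the steps together, a minimal end-to-end sketch, assuming you save the script below as `migrate.sh` (the organization and cluster names are illustrative):

```bash
chmod +x migrate.sh             # make the saved script executable
./migrate.sh my-org my-cluster  # writes app.yaml
less app.yaml                   # review the generated manifest
kubectl apply -f app.yaml       # apply it to the management cluster
```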
```bash
#!/bin/bash
# Check if two arguments are provided
if [ "$#" -ne 2 ]; then
  echo "Incorrect number of arguments supplied. Please provide the organization name and the cluster name."
  exit 1
fi
# Use the first argument as the organization name and the second as the cluster name
org=$1
cluster=$2
# Fetch the ConfigMap YAML
kubectl get cm -n "org-${org}" "${cluster}-userconfig" -o yaml > "${cluster}_cm.yaml"
# Extract the ConfigMap values into a temporary file
yq eval '.data.values' "${cluster}_cm.yaml" > tmp_cm_values.yaml
##### OPTIONAL START
# Fetch AppCatalog YAML
kubectl get helmreleases.helm.toolkit.fluxcd.io -n flux-giantswarm appcatalog-cluster -o yaml > catalog.yaml
# Extract the AppCatalog values into a temporary file
yq eval '.spec.values.appCatalog.config.configMap.values' catalog.yaml >> tmp_cm_values.yaml
##### OPTIONAL END
# Modify the values in tmp_cm_values.yaml as needed
yq eval --inplace 'with(select(.metadata != null); .global.metadata = .metadata) |
  with(select(.connectivity != null); .global.connectivity = .connectivity) |
  with(select(.controlPlane != null); .global.controlPlane = .controlPlane) |
  with(select(.nodePools != null); .global.nodePools = .nodePools) |
  with(select(.managementCluster != null); .global.managementCluster = .managementCluster) |
  with(select(.providerSpecific != null); .global.providerSpecific = .providerSpecific) |
  with(select(.baseDomain != null); .global.connectivity.baseDomain = .baseDomain) |
  del(.metadata) |
  del(.connectivity) |
  del(.controlPlane) |
  del(.nodePools) |
  del(.managementCluster) |
  del(.baseDomain) |
  del(.provider) |
  del(.providerSpecific)' tmp_cm_values.yaml
# Merge the modified values back into the ConfigMap YAML
yq eval-all 'select(fileIndex==0).data.values = select(fileIndex==1) | select(fileIndex==0)' "${cluster}_cm.yaml" tmp_cm_values.yaml > app.yaml
```
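Before applying, it can be worth a quick spot check that the migration worked; a minimal sketch, assuming the `app.yaml` layout produced by the script above:

```bash
# Print the migrated user config values; every section should now sit under "global"
yq eval '.data.values' app.yaml
```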
Added
- Add dashboard “Worker node utilization”.
- Upgrading to the v0.9.16 version.
Changed
- Make the error message actionable in case `kubectl gs template cluster` fails because the user did not log in to, or point to, the management cluster
- Support internal API URLs in `kubectl gs login` ID token verification
- Print a warning in case `kubectl gs login` ID token verification fails, but don't fail the command
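As an illustration of the internal API support, `kubectl gs login` accepts the management cluster's API endpoint directly; the URL below is hypothetical:

```bash
# Hypothetical internal API endpoint of a management cluster
kubectl gs login https://internal-api.my-installation.example.com
```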
Changed
- No major change in v0.50.0, except that we are moving to a release-based upgrade cycle: the Kubernetes version, VM template and other defaults are set in the chart values. They shouldn't be overridden, as they are managed by Giant Swarm.
- Bump `kube-vip` to v0.8.0.
Removed
- Removed the Kubernetes version from the chart