Changed
- Chart: Change `restartPolicy` to `OnFailure` for the CRD job. (#298)
Updates on Giant Swarm workload cluster releases, apps, UI improvements and documentation changes.
- `mimir.ingester.resources.requests.cpu` property.
- `scaleTargetRef.metrics` property.
- `capz-app-collection`.
- `global.connectivity.localRegistryCache` Helm values and support for in-cluster, local registry cache mirrors in the containerd configuration. In such cases, the registry should be exposed via node ports, and containerd connects to that port at 127.0.0.1 via HTTP (only allowed for this single use case).
- containerd config file generation when multiple registries are set with authentication.
- `defaultPolicies.enabled=true` in cilium-app when `internal.ciliumNetworkPolicy.enabled=true`, after all clusters are migrated.
- `extraPolicies.remove=true` in cilium-app after all clusters are migrated.
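As a rough illustration of the local registry cache mirror described above (the node port 30100 and the upstream registry are assumptions for the sketch, not values from these release notes), a containerd `hosts.toml` mirror entry could look like:

```toml
# /etc/containerd/certs.d/docker.io/hosts.toml
# Hypothetical mirror entry: the upstream registry and the node
# port (30100) are placeholders, not values from the release notes.
server = "https://registry-1.docker.io"

# containerd tries the local cache first and falls back to the
# upstream server if it is unavailable. Plain HTTP is acceptable
# here only because the cache is reached via a node port on the
# local host (127.0.0.1), the single use case allowed.
[host."http://127.0.0.1:30100"]
  capabilities = ["pull", "resolve"]
```

The `certs.d` directory layout keys each `hosts.toml` file by the registry host it mirrors, so one file is needed per configured registry.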
- `set-static-routes` unit, used as a drop-in for systemd-networkd.
- `defaultPolicies.remove=true` in cilium-app.
- `global.metadata.preventDeletion` to add the deletion prevention label to cluster resources.
- `default-apps-azure` to `cluster-azure`.
- `--prevent-deletion` flag for the `cluster template` command for capa, capa-eks and capz clusters.

We are happy to announce our first Cluster API for AWS (CAPA) release, v25.
This is the first Giant Swarm supported CAPA release. It is available on CAPA management clusters and will be the first release that Vintage workload clusters are upgraded to.
Each existing customer using the Giant Swarm Vintage AWS product has been given a presentation about CAPA benefits. We have gathered the most crucial high-level advantages in the list below:
Besides the benefits listed above, we have also presented the changes that CAPA introduces. Here is a summary of the most important points:
- `org-` namespaces - This change simplifies RBAC and enables GitOps-managed clusters to be created with a pre-defined set of applications.
- GP2 volumes not supported - The majority of customers are already using gp3 volumes. As we refresh the infrastructure, the deprecated `kubernetes.io/aws-ebs` provisioner, which creates gp2 volumes, will also be removed.
- Teleport for Kubernetes API and direct node access - This strengthens security and simplifies compliance with regulations and network topology. The service is only available to Giant Swarm engineers, while customers can obtain audit logs for any operations and logins performed by Giant Swarm support.

The migration itself is a fully automated process run by Giant Swarm engineers using a migration-cli that handles all infrastructure as well as workload migrations.
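To illustrate the gp2-to-gp3 change, here is a sketch of a gp3 StorageClass backed by the `ebs.csi.aws.com` CSI provisioner, which replaces the deprecated in-tree `kubernetes.io/aws-ebs` one. The class name and parameter choices are illustrative assumptions, and the AWS EBS CSI driver must already be installed in the cluster:

```yaml
# Hypothetical StorageClass: name and parameters are illustrative,
# not taken from the release notes.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
provisioner: ebs.csi.aws.com   # CSI driver replacing kubernetes.io/aws-ebs
parameters:
  type: gp3                    # gp3 instead of the deprecated gp2 volumes
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```

Workloads whose PersistentVolumeClaims reference this class get gp3 volumes; existing gp2 volumes are unaffected until they are recreated or migrated.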
The experience during the migration itself should be the same as in a usual workload cluster upgrade, in which the nodes are rolled out.
Prior to running the tool, a new management cluster based on the CAPA solution has to be created in order to make full use of CAPI lifecycle management as well as its infrastructure. CAPA clusters bring a new structure for the workload cluster definition in terms of custom resources. Hence, for the duration of the workload cluster migration, any customer automation managing the cluster, such as GitOps, has to be disabled. After the migration, customers will have to adopt the new structure and adjust the aforementioned automations.
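For customers using Flux (an assumption for this sketch; other GitOps tooling has equivalent mechanisms, and the names below are hypothetical), pausing reconciliation for the duration of the migration could look like:

```yaml
# Hypothetical Flux Kustomization: names and paths are illustrative.
# Setting spec.suspend stops Flux from reconciling the cluster
# resources while the migration-cli rewrites them.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: workload-clusters
  namespace: flux-system
spec:
  suspend: true   # set back to false after adopting the new CR structure
  interval: 10m
  path: ./clusters
  sourceRef:
    kind: GitRepository
    name: fleet
```

The same effect can be achieved imperatively with `flux suspend kustomization workload-clusters`, followed by `flux resume` once the new custom resource structure has been adopted.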
As Giant Swarm manages the cloud infrastructure, no action is needed from customers for the migration itself. We have aimed to match the Vintage features as closely as possible, introducing improvements where needed.
One of the many improvements is the deprecation of the k8s-initiator application, which allowed customers to customize some parts of the Kubernetes environment to fit their needs.
However, this tool granted a lot of freedom, since it ran arbitrary Bash inside itself. We reviewed the use cases that customers have implemented, exposed the corresponding settings in CAPA, and prepared a migration plan both for those features and for any future customization.
The most important part for each customer is to prepare the {cluster_name}-migration-configuration YAML file, representing the k8s-initiator app features in use; the migration-cli then consumes it and populates the values into the cluster charts for future use.
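The exact schema of this file is defined by the migration-cli and is not spelled out here; purely as an illustration of the idea (every key below is hypothetical), such a file would collect the formerly scripted k8s-initiator customizations as declarative values:

```yaml
# {cluster_name}-migration-configuration (illustrative only:
# all keys below are hypothetical, not the real migration-cli schema).
cluster: example-cluster
kubernetes:
  apiServer:
    extraArgs:
      audit-log-maxage: "30"   # formerly applied via a Bash snippet
nodePools:
  default:
    customNodeLabels:
      - team=platform
```

The point of the format is that each customization becomes a reviewable value in the cluster charts rather than an opaque script.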
Your Account Engineer will provide you with a detailed checklist to go over prior to migration.