Advanced cluster autoscaler configuration
Your Giant Swarm installation comes with a default configuration for the cluster-autoscaler addon. You can override these defaults in a ConfigMap named cluster-autoscaler-user-values.
Where is the user values ConfigMap?
The following examples assume the cluster you are trying to configure has an ID of 123ab.
You will find the cluster-autoscaler-user-values ConfigMap on the Control Plane in the 123ab namespace:
$ kubectl -n 123ab get cm cluster-autoscaler-user-values --context=control-plane
NAME                             DATA   AGE
cluster-autoscaler-user-values   0      11m
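To change the configuration, you can edit this ConfigMap in place. A minimal sketch, assuming the same Control Plane context as above:
$ kubectl -n 123ab edit cm cluster-autoscaler-user-values --context=control-plane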
Warning:
Please do not edit any other cluster-autoscaler related ConfigMaps.
Only the user values ConfigMap is safe to edit.
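Before editing, you can list all cluster-autoscaler related ConfigMaps in the cluster namespace to make sure you are touching the right one. A sketch, assuming they all carry the app: cluster-autoscaler label used in the manifest below:
$ kubectl -n 123ab get cm -l app=cluster-autoscaler --context=control-plane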
On cluster creation the user values ConfigMap is empty (or might not exist yet), and the following defaults will be applied to the final cluster-autoscaler deployment. To customize any of the configuration options, add the respective line(s) to the data field of the user values ConfigMap.
How to set configuration options using the user values ConfigMap
On the Control Plane, create or edit a ConfigMap named cluster-autoscaler-user-values in the workload cluster namespace:
# On the Control Plane, in the 123ab namespace
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: cluster-autoscaler
  name: cluster-autoscaler-user-values
  namespace: 123ab
data:
  values: |
    configmap:
      scaleDownUtilizationThreshold: 0.30
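Once the manifest is ready, apply it against the Control Plane. A minimal sketch, assuming you saved it as user-values.yaml (the filename is illustrative):
$ kubectl apply -f user-values.yaml --context=control-plane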
Configuration reference
The following sections explain some of the configuration options and their defaults. For brevity, they show only the data field of the ConfigMap.
The most recent source of truth for these values is the values.yaml file of the cluster-autoscaler-app.
Scale down utilization threshold
The scaleDownUtilizationThreshold defines the ratio between requested resources and node capacity below which the cluster-autoscaler will trigger the scale down action. Our default value is 0.65 (65%): to be scaled down, a node has to have lower utilization (CPU and memory) than this threshold. For example, a node whose pods request 2 of its 8 CPU cores (25% utilization) is a scale-down candidate, provided its memory utilization is also below the threshold.
# 9.0.1 and greater
data:
  values: |
    configmap:
      scaleDownUtilizationThreshold: 0.65

# 9.0.0 and below
data:
  scaleDownUtilizationThreshold: 0.65
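To verify that an override has been picked up, you can inspect the arguments of the running cluster-autoscaler. A sketch, assuming the deployment is named cluster-autoscaler and runs in the kube-system namespace of the workload cluster (names can differ per installation):
$ kubectl -n kube-system get deploy cluster-autoscaler \
    -o jsonpath='{.spec.template.spec.containers[0].args}'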
Scan interval
Defines the interval at which the cluster state is reviewed for a decision to scale up or down. Our default value is 10 seconds.
data:
  values: |
    configmap:
      scanInterval: "100s"
Skip system pods
By default, the cluster-autoscaler will never delete nodes which run pods of the kube-system namespace (except daemonset pods). This behavior can be changed by setting the following property to "false".
data:
  values: |
    configmap:
      skipNodesWithSystemPods: "false"
Skip pods with local storage
By default, the cluster-autoscaler deletes nodes running pods that use local storage (hostPath or emptyDir). If you want to disable this behavior, set the following property to "true".
data:
  values: |
    configmap:
      skipNodesWithLocalStorage: "true"
Balance similar node groups
Added in release v17.0.0
By default, the cluster-autoscaler doesn't differentiate between similar node groups when scaling. If you want it to balance the number of nodes across similar node groups, set the following property to "true".
data:
  values: |
    configmap:
      balanceSimilarNodeGroups: "true"
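Several options can be combined in a single user values ConfigMap. A sketch combining the settings discussed above (the values are illustrative, not recommendations):
data:
  values: |
    configmap:
      scaleDownUtilizationThreshold: 0.50
      scanInterval: "30s"
      skipNodesWithSystemPods: "false"
      skipNodesWithLocalStorage: "true"
      balanceSimilarNodeGroups: "true"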