Last modified September 29, 2021
gsctl delete nodepool
gsctl and the REST API are being phased out. We don't have an end-of-life date yet; however, we recommend familiarizing yourself with our Management API and the kubectl gs plugin as a future-proof replacement.
The gsctl delete nodepool command deletes a node pool.
Deleting a node pool means that all worker nodes in the pool will be cordoned, drained, and then terminated.
If you are running production workloads on the node pool you want to delete,
make sure there is at least one other node pool with enough capacity to
schedule those workloads. Also check whether label selectors, taints, and
tolerations will allow scheduling on the other pools' worker nodes. The best
way to verify this is to manually cordon and drain the pool's worker nodes
and check the workloads' node assignments before issuing the
delete nodepool command.
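The verification described above can be sketched as follows. This is a minimal sketch, not part of gsctl itself; the node label used to select the pool's workers is an assumption, so substitute whatever label identifies the nodes of pool op1dl in your installation.

```shell
#!/bin/sh
# Pre-delete check: cordon and drain the pool's worker nodes, then verify
# that workloads are rescheduled onto nodes outside the pool.
# ASSUMPTION: the pool's nodes carry this label; adjust it for your setup.
POOL_LABEL="giantswarm.io/machine-deployment=op1dl"

if command -v kubectl >/dev/null 2>&1; then
  for node in $(kubectl get nodes -l "$POOL_LABEL" -o name); do
    kubectl cordon "$node"
    kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
  done
  # Confirm that pods landed on nodes outside the pool before deleting it.
  kubectl get pods --all-namespaces -o wide
else
  echo "kubectl not found; skipping live checks"
fi
```

If the drained workloads end up Pending instead of Running on another pool's nodes, fix the scheduling constraints (or uncordon the nodes to roll back) before deleting the pool.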
Note: Data stored outside of persistent volumes will be lost, and there is no way to undo this.
The command is called with a single argument: the cluster ID and the node pool ID, separated by a slash.
gsctl delete nodepool f01r4/op1dl
f01r4 is the cluster ID and
op1dl is the node pool ID.
You can also use the cluster’s name for identifying the cluster:
gsctl delete nodepool "Cluster name"/op1dl
A confirmation will be required before the node pool is finally deleted. To suppress this
confirmation, add the --force flag.