Last modified July 3, 2025
Persistent volumes
Dynamic provisioning and storage classes
If your cluster is running in the cloud on Amazon Web Services (AWS), it comes with a dynamic storage provisioner for Elastic Block Store (EBS). This enables you to store data beyond the lifetime of a pod.
Your Kubernetes cluster will have a default storage class gp3 deployed. It is automatically selected if you do not specify a storage class in your persistent volume claims.
As a cluster admin, you can create additional storage classes or edit the default class, for example to use different types of EBS volumes or to add encryption. For this, you need to create (or edit) StorageClass objects.
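As a sketch, a storage class for encrypted gp3 volumes could look like the following. The class name and parameter values are illustrative; see the AWS EBS CSI driver documentation for the full list of supported parameters.

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: encrypted-gp3        # illustrative name
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"          # encrypt the underlying EBS volume
allowVolumeExpansion: true
```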
If your cluster is running on Microsoft Azure, it comes with a dynamic storage provisioner for Azure Managed Disks. Your Kubernetes cluster will have a default storage class managed-premium deployed. It provisions a Premium LRS managed disk and thus can be used only on Kubernetes nodes that run on supported VM types (those with an s in their name).

If your cluster is running on VMware Cloud Director (VCD), it comes with a dynamic storage provisioner for Named Disks. This enables you to store data beyond the lifetime of a pod into virtual disks.
Your Kubernetes cluster will have a default storage class csi-vcd-sc-delete deployed, which is automatically selected if you do not specify the storage class in your persistent volume claims. On deletion of the volume, the data is deleted. To avoid deleting the data when the PersistentVolume object is deleted, you can use the storage class csi-vcd-sc-retain, which is configured with reclaimPolicy: Retain.
As a cluster admin, you can create additional storage classes, for example to use different types of VMware storage profiles or to add encryption. For this, you need to create (or edit) StorageClass objects.
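As a sketch, an additional storage class referencing a different VCD storage profile could look like the following. The provisioner name and parameter keys are assumptions based on the VMware Cloud Director CSI driver and should be verified against the driver version deployed in your cluster; the class and profile names are illustrative.

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast-retain                                      # illustrative name
provisioner: named-disk.csi.cloud-director.vmware.com    # verify against your driver version
reclaimPolicy: Retain
parameters:
  storageProfile: "fast"                                 # illustrative VCD storage profile name
  filesystem: "ext4"
```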
If your cluster is running on VMware vSphere, it comes with a dynamic storage provisioner for Cloud Native Storage (CNS) disks. This enables you to store data beyond the lifetime of a pod into virtual disks.
Your Kubernetes cluster will have a default storage class csi-vsphere-sc-delete deployed, which is automatically selected if you do not specify the storage class in your persistent volume claims. Another storage class, csi-vsphere-sc-retain, is also created with reclaimPolicy: Retain; you must specify it explicitly to use it.
As a cluster admin, you can create additional storage classes, for example to use different types of VMware storage profiles or to add encryption. For this, you need to create (or edit) StorageClass objects.
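As a sketch, a storage class bound to a specific vSphere storage policy could look like the following. The class name and policy name are illustrative; the storagepolicyname parameter is part of the vSphere Container Storage Plug-in, but check its documentation for the parameters supported by your version.

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsan-retain                                     # illustrative name
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Retain
parameters:
  storagepolicyname: "vSAN Default Storage Policy"      # illustrative policy name
```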
Creating persistent volumes
The usual and most straightforward way to create a persistent volume is to create a PersistentVolumeClaim object, which will automatically create a corresponding PersistentVolume (PV) for you.
A less common alternative is to first create a PersistentVolume object and then claim that PV with a PersistentVolumeClaim (PVC) that uses a selector.
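As a sketch of this second approach (all names, the label, and the capacity are illustrative), a pre-created PV carries a label that the PVC then selects:

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mypv
  labels:
    usage: mydata                 # label the PVC selects on
spec:
  capacity:
    storage: 6Gi
  accessModes:
    - ReadWriteOnce
  # provider-specific volume source (e.g. CSI) goes here
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim-static
spec:
  storageClassName: ""            # empty string disables dynamic provisioning for this claim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 6Gi
  selector:
    matchLabels:
      usage: mydata
```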
Under the hood, the dynamic storage provisioner takes care that a corresponding volume with the correct parameters is created: for example, an EBS volume if your cluster is in the AWS cloud and the corresponding storage class was chosen.
Using persistent volumes in a pod
Once you have a PersistentVolumeClaim, you can mount it as a volume in your pods.
Most storage classes, such as those provisioning AWS EBS volumes, only allow mounting the volume in a single pod (ReadWriteOnce access mode). Depending on the storage class, you can set a different access mode in the PVC object.
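For a storage class backed by shared storage (for example a file-based backend), the PVC would request ReadWriteMany instead. A minimal sketch; the claim name and storage class name below are hypothetical:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-claim
spec:
  accessModes:
    - ReadWriteMany                    # requires a backend that supports shared access
  resources:
    requests:
      storage: 10Gi
  storageClassName: my-file-class      # hypothetical file-backed storage class
```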
Note that Azure Managed Disk volumes can only be used by a single pod at a time. Thus, the access mode of your PVC can only be ReadWriteOnce. This limitation doesn't apply to Azure File Share volumes, which can be attached with the ReadWriteMany access mode.
CNS disks currently only support the ReadWriteOnce access mode with block storage-backed virtual disks, but vSAN File Services (NFS in the background) supports ReadWriteMany as an alternative.
Under the hood, the CNS disk stays detached from the virtual machines as long as it is not claimed by a pod. You can visualize it in the vSphere Client by browsing Cluster > Monitor > Cloud Native Storage > Container Volumes. As soon as a pod claims it, it gets attached to the virtual machine running the node that holds the pod.
Example
First, create a PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 6Gi
  # storageClassName: "xyz"
The storage class is commented out in the above example. Kubernetes will therefore use the default storage class. We recommend specifying the class explicitly.
Now you can create a pod that mounts the PVC:
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: myvol
  volumes:
    - name: myvol
      persistentVolumeClaim:
        claimName: myclaim
This deploys an NGINX pod which serves the contents of the volume (which at this point is very likely still empty).
Expanding the size of persistent volume claims
Persistent volume claims can be expanded by editing the claim and requesting a larger size. This triggers an update of the underlying persistent volume and the related provider object, such as an AWS EBS volume. Kubernetes always expands the existing volume rather than creating a new one.
If the volume to be expanded contains a file system, the resizing is only performed once the pod is restarted. Not all volume types can be resized; the allowVolumeExpansion field of the StorageClass typically indicates whether this feature is available. Expansion can also be time-consuming, so consider that the pod may not be available during the resizing process. Kubernetes will trigger an update of the PersistentVolume object and go through the events Resizing, FileSystemResizeRequired and FileSystemResizeSuccessful.
The vSphere Container Storage Plug-in supports volume expansion for block volumes that are created dynamically or statically. Persistent volume claims can be expanded in offline mode as well as in online mode (while the PVC is used by a pod and mounted on a node) by editing the claim and requesting a larger size.
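For example, to grow the claim named myclaim from the earlier example, edit the claim (e.g. via kubectl edit pvc myclaim) so that the storage request reads a larger value:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 12Gi   # increased from 6Gi; note that shrinking is not supported
```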
Deleting persistent volumes
If you delete a PersistentVolumeClaim resource, the respective PersistentVolume gets deleted as well.
The volume and its data will persist as long as the corresponding PersistentVolume resource exists. By default, deleting the resource will also delete the corresponding provider-specific volume, which means that all stored data is lost. This can be changed with the reclaim policy: the default storage class uses reclaimPolicy: Delete. If you have data that must not get lost even on accidental deletion of the Kubernetes objects, consider using a storage class with reclaimPolicy: Retain.
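If a volume was created with reclaimPolicy: Delete and you later decide its data must survive, you can also switch the policy on the already-provisioned PersistentVolume object itself. A sketch of the relevant excerpt of the PV spec:

```yaml
# Excerpt of an existing PersistentVolume object; setting this field to
# Retain keeps the provider-specific volume when the claim is deleted.
spec:
  persistentVolumeReclaimPolicy: Retain   # inherited from the storage class, usually Delete
```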
Note that deleting an application that is using persistent volumes might not automatically delete its storage, so in some cases you will have to manually delete the PersistentVolumeClaim resources to clean up.
Further reading
Provider-specific
Need help, got feedback?
We listen to your Slack support channel. You can also reach us at support@giantswarm.io. And of course, we welcome your pull requests!