In Survivor's guide to Kubernetes, kubectl and other scary things… - Part 1 we started to get more comfortable with the Kubernetes command-line utility called kubectl. In this post we are going to delve a little deeper into storage.
Table of contents so we don't get lost.
- Refresher, listing storage classes
- Determining which CSI driver is used
- Create a test volume
- Describing a running application (statefulset)
- Dumping the YAML for a deployed application
- Listing persistent volumes
- Listing details of a persistent volume
- Listing all persistent volume claims
- Listing all persistent volumes
- Listing all applications
Refresher, listing storage classes
First things first, it is important to understand what StorageClasses are defined on the cluster.
kubectl get sc
Output will look something like this:
Commvault recommends using a StorageClass that has a Container Storage Interface (CSI) driver.
Storage with a CSI driver can be dynamically provisioned, mounted/unmounted, expanded, snapshotted, and cloned via the Kubernetes orchestrator / kube-apiserver. Let's take a look at how.
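As a sketch of what dynamic provisioning looks like in practice, creating a PersistentVolumeClaim that references a CSI-backed StorageClass is enough to trigger the driver to provision a volume on demand (the manifest below is illustrative; the test-pvc name and 5Gi size are assumptions, and managed-csi is the Azure StorageClass used later in this post):

```yaml
# Hypothetical PVC - creating it asks the CSI driver behind the
# referenced StorageClass to dynamically provision a matching volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-csi
  resources:
    requests:
      storage: 5Gi
```

Applying this with kubectl apply -f and then watching kubectl get pvc shows the claim move to Bound once the volume is provisioned.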
Determining which CSI driver is used
First, it is important to know which storage classes are using CSI. We can see this in the PROVISIONER column (above).
Anything with 'csi' in the provisioner name is using a CSI driver.
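As an optional shortcut (not required, just convenient), kubectl's jsonpath output can print each StorageClass name next to its provisioner, which makes the 'csi' provisioners easy to spot:

```shell
# List StorageClass name and provisioner, one pair per line.
kubectl get sc -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.provisioner}{"\n"}{end}'
```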
You can see the full list of CSI drivers here: Drivers - Kubernetes CSI Developer Documentation
(be sure to bookmark this - it changes frequently)
Commvault will leverage CSI drivers that support snapshots for backup, if available. This is the preferred backup method for persistent volumes, as the snapshot and subsequent backup happen on a crash-consistent copy of the application volume, rather than reading data directly from an open volume (i.e. the production volume servicing the running containerized app).
You can check if a CSI driver supports snapshot by looking in the Capabilities column (see screenshot below)
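For drivers that do support snapshots, taking one is itself just another Kubernetes object. A minimal sketch, assuming the external-snapshotter CRDs are installed and a VolumeSnapshotClass exists for your driver (the names test-snapshot, csi-azuredisk-vsc, and my-pvc below are all hypothetical placeholders):

```yaml
# Hypothetical VolumeSnapshot - asks the CSI driver to snapshot
# the volume bound to the named PVC.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: test-snapshot
spec:
  volumeSnapshotClassName: csi-azuredisk-vsc  # assumed snapshot class name
  source:
    persistentVolumeClaimName: my-pvc         # PVC to snapshot
```

You can then check progress with kubectl get volumesnapshot, waiting for READYTOUSE to become true.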
We can use kubectl to get more information about our existing StorageClass
kubectl describe sc managed-csi
Output will look similar to this, depending on the StorageClass you are using:
We can see that the managed-csi StorageClass:
- is not the default StorageClass
- uses the disk.csi.azure.com CSI driver
- uses Azure Managed StandardSSD_LRS disks
- allows volume expansion
- has a Delete reclaim policy - the volume is deleted (from Kubernetes and from Azure) when its claim is deleted
- uses WaitForFirstConsumer volume binding - the volume will not be provisioned until a pod is scheduled that consumes it (saves cost)
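For reference, a StorageClass with the properties described above might be declared like this (a sketch; the exact parameters your cluster ships with may differ, and skuName is the Azure disk driver's parameter for the disk SKU):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi
provisioner: disk.csi.azure.com     # the Azure disk CSI driver
parameters:
  skuName: StandardSSD_LRS          # Azure managed disk SKU
reclaimPolicy: Delete               # delete the disk when the claim is deleted
allowVolumeExpansion: true          # permit resizing bound volumes
volumeBindingMode: WaitForFirstConsumer  # provision only when a pod needs it
```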
Create a test volume
Let's create a test volume on the Azure managed-csi StorageClass.
See GitHub - kubernetes-sigs/azuredisk-csi-driver: Azure Disk CSI Driver for details on operation of the Azure disk CSI driver.
Let's just do a quick test.
Let's create a StatefulSet that uses an Azure CSI-based disk:
kubectl create -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/example/statefulset.yaml
Let's check that our StatefulSet was created:
kubectl get statefulset
Output should show the following:
Let's get some more detail.
Describing a running application (statefulset)
Let's get detail on our newly created stateful set (see steps above)
kubectl describe statefulset statefulset-azuredisk
Output will show:
We can see our statefulset-azuredisk application has a persistent-storage volume claim, sitting on the managed-csi StorageClass, and is 10Gi in size.
We can also see it is mounted ReadWriteOnce - the volume can be mounted read/write by a single node. See Persistent Volumes | Kubernetes for more details.
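The section of the StatefulSet that produces this behaviour is its volumeClaimTemplates. A sketch matching the describe output above (the field names are standard; the exact values in the deployed example may differ slightly):

```yaml
# Each StatefulSet replica gets its own PVC stamped from this template.
volumeClaimTemplates:
  - metadata:
      name: persistent-storage
    spec:
      accessModes:
        - ReadWriteOnce            # mountable read/write by a single node
      storageClassName: managed-csi
      resources:
        requests:
          storage: 10Gi
```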
Dumping the YAML for a deployed application
The output above is useful, but sometimes you might want the original YAML that was used to deploy an application.
Now you could go back to the developer or the original Git repository, or you can use the -o yaml output flag. Let's re-list our StatefulSet:
kubectl get statefulset statefulset-azuredisk -o yaml
Output will now be in YAML format. Useful for collecting during troubleshooting.
Some lines of interest from the YAML output:
- apiVersion: apps/v1
- set -euo pipefail; while true; do echo $(date) >> /mnt/azuredisk/outfile; sleep 1; done
- mountPath: /mnt/azuredisk
- apiVersion: v1
Listing persistent volumes
So our persistent volume claim (PVC) allows us to mount a given persistent volume (PV). Let's take a look at our persistent volume claim, and then the underlying volume:
kubectl describe statefulset statefulset-azuredisk
and snipping just the lines we want:
Now, let's see what we can learn from our persistent-storage PVC
kubectl describe pvc persistent-storage
And the output....
We can see this PVC is located in the default namespace (more on that later), and:
- It is using the managed-csi StorageClass
- It is in Bound state
- It has a backing Volume called pvc-c75395e8-d282-47a6-a1bf-4170cbd0e54e
Let's see what we can learn about our volume.
Listing details of a persistent volume
We will describe our persistent volume
kubectl describe pv pvc-c75395e8-d282-47a6-a1bf-4170cbd0e54e
It shows us:
We can see it was provisioned (automatically) by disk.csi.azure.com (the CSI driver!)
We can see it is formatted as ext4 and resides in the following Azure subscription: /subscriptions/fd9f2da5-a8af-4f30-888c-8d94804a93ec
Let's take a look in the Azure console. We can see our disk is presented.
Let's look at some other ways to list the disks created
Listing all persistent volume claims
You can check the health of your persistent volume claims (PVCs) with kubectl get persistentvolumeclaims, or its shorter alias:
kubectl get persistentvolumeclaims
kubectl get pvc
Output will look something like this (it will be different on your system)
If you would like to list all PVCs across all namespaces, add the --all-namespaces flag (or its -A shorthand) to the command line:
kubectl get pvc --all-namespaces
The output will be slightly longer depending on how heavily used your cluster is:
Any PVCs not in a Bound state (i.e. showing Pending) warrant investigation.
Listing all persistent volumes
You may list all persistent volumes on the cluster as well.
kubectl get persistentvolumes
All PVs in the cluster will be shown (persistent volumes are cluster-scoped, not namespaced):
Any volumes not in Bound state warrant investigation.
Also, any volumes with a RECLAIM POLICY of Delete may warrant a review, as the volume will be deleted when its claim is deleted. From the Kubernetes documentation:
For volume plugins that support the Delete reclaim policy, deletion removes both the PersistentVolume object from Kubernetes, as well as the associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume. Volumes that were dynamically provisioned inherit the reclaim policy of their StorageClass, which defaults to Delete. The administrator should configure the StorageClass according to users' expectations; otherwise, the PV must be edited or patched after it is created. See Change the Reclaim Policy of a PersistentVolume.
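If you want a dynamically provisioned volume to survive deletion of its claim, you can patch the PV's reclaim policy in place. A sketch, using the example PV name from earlier in this post (substitute your own):

```shell
# Change the reclaim policy of an existing PV from Delete to Retain,
# so the backing disk is kept even if the PVC is removed.
kubectl patch pv pvc-c75395e8-d282-47a6-a1bf-4170cbd0e54e \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```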
Listing all applications
Applications within Kubernetes are typically one of three (3) api-resources: Deployments, DaemonSets, or StatefulSets.
You use the following commands to list these api-resources:
- kubectl get deployments / kubectl get deploy - to list deployments
- kubectl get daemonset / kubectl get ds - to list daemonsets
- kubectl get statefulset / kubectl get sts - to list statefulsets
You may optionally add the --all-namespaces flag to any of these commands.
Let's try it out
kubectl get pods
Try it out yourself...