Survivors guide to Kubernetes, kubectl and other scary things... - Part 2



OK, in Survivors guide to Kubernetes, kubectl and other scary things… - Part 1 we started to get more comfortable with the #Kubernetes command-line utility, kubectl. In this post we are going to delve a little deeper into storage.

 

Table of contents so we don't get lost.

  • Refresher, listing storage classes
  • Determining which CSI driver is used
  • Create a test volume
  • Describing a running application (statefulset)
  • Dumping the YAML for a deployed application
  • Listing persistent volumes
  • Listing details of a persistent volume
  • Listing all persistent volume claims
  • Listing all persistent volumes
  • Listing all applications

 

 

Refresher, listing storage classes

First things first, it is important to understand what StorageClasses are defined on the cluster.

 

kubectl get sc

 

Output will look something like this:
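(The original screenshot is not shown here. As a purely illustrative example, on an AKS cluster the output might resemble the following; the exact classes and ages depend on your cluster.)

```
NAME          PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
managed-csi   disk.csi.azure.com   Delete          WaitForFirstConsumer   true                   12d
```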

 

Commvault recommends using a StorageClass that has a Container Storage Interface (CSI) driver.

 

Storage with a CSI driver can be dynamically provisioned, mounted/unmounted, expanded, snapshotted, and cloned via the Kubernetes orchestrator / kube-apiserver. Let's take a look at how.

 

Determining which CSI driver is used

 

First, it is important to know which storage classes are using CSI. We can see this in the PROVISIONER column of the kubectl get sc output (above).

 

Anything with 'csi' in the provisioner name is using a CSI driver.

 

You can see the full list of CSI drivers here: Drivers - Kubernetes CSI Developer Documentation

(be sure to bookmark this - it changes frequently)

 

Commvault will leverage CSI drivers that support snapshot for backup, if available. This is the preferred backup method for persistent volumes, as the 'snapshot' and subsequent backup happen on a crash-consistent copy of the application volume, rather than reading data directly from an open volume (i.e. the production volume servicing the running containerized app).
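For context, a CSI snapshot is requested declaratively with a VolumeSnapshot object. Below is a minimal sketch, assuming a VolumeSnapshotClass named csi-azuredisk-vs-class exists on the cluster (the class and snapshot names are illustrative, the PVC name matches the example used later in this post):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: azuredisk-snapshot        # illustrative name
spec:
  volumeSnapshotClassName: csi-azuredisk-vs-class   # assumed to exist
  source:
    persistentVolumeClaimName: persistent-storage   # the PVC to snapshot
```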

 

You can check if a CSI driver supports snapshot by looking in the Capabilities column (see screenshot below)

 

 

We can use kubectl to get more information about our existing StorageClass

 

kubectl describe sc managed-csi

 

Output will look similar to this, depending on the StorageClass you are using:

 

We can see that the managed-csi StorageClass uses the disk.csi.azure.com provisioner - the Azure Disk CSI driver.
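For reference, a StorageClass like managed-csi is defined along these lines. This is only a sketch: the provisioner matches what we saw above, but the SKU, reclaim policy and binding mode shown here are common defaults and may differ on your cluster.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi
provisioner: disk.csi.azure.com      # the Azure Disk CSI driver
parameters:
  skuName: StandardSSD_LRS           # illustrative; actual SKU depends on the cluster
reclaimPolicy: Delete                # common default; see the reclaim policy notes later
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```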

 

Create a test volume

Let's create a test volume on the Azure managed-csi StorageClass.

 

See GitHub - kubernetes-sigs/azuredisk-csi-driver: Azure Disk CSI Driver  for details on operation of the Azure disk CSI driver.

 

Let's just do a quick test.

 

azuredisk-csi-driver/e2e_usage.md at master · kubernetes-sigs/azuredisk-csi-driver · GitHub 

 

Let's create a StatefulSet that uses an Azure CSI-based disk:

kubectl create -f https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/deploy/example/statefulset.yaml

 

Let's check that our StatefulSet was created:

 

kubectl get statefulset

 

Output should show the following:

 

Let's get some more detail.

 

Describing a running application (statefulset)

 

Let's get detail on our newly created stateful set (see steps above)

 

kubectl describe statefulset statefulset-azuredisk

 

Output will show:

We can see our statefulset-azuredisk application has a persistent-storage volume claim, sitting on the managed-csi StorageClass, and is 10Gi in size.

 

We can also see it is mounted ReadWriteOnce - the volume can be mounted read/write by a single node. See Persistent Volumes | Kubernetes  for more details.
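To see how that is expressed in YAML, here is a minimal standalone PVC sketch equivalent to what the StatefulSet's volumeClaimTemplates requests (the field values mirror the YAML dump shown later in this post):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persistent-storage
spec:
  accessModes:
  - ReadWriteOnce               # mountable read/write by a single node
  storageClassName: managed-csi # dynamically provisioned via the Azure Disk CSI driver
  resources:
    requests:
      storage: 10Gi
```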

 

Dumping the YAML for a deployed application

The output above is useful, but sometimes you might want the original YAML that was used to deploy an application.

 

Now, you could go back to the developer or the original Git repository, but you can also use the -o yaml output flag. Let's re-list our StatefulSet:

 

kubectl get statefulset statefulset-azuredisk -o yaml

 

Output will now be in YAML format. Useful for collecting during troubleshooting.

 

apiVersion: apps/v1
kind: StatefulSet
metadata:
  creationTimestamp: "2021-05-03T23:38:16Z"
  generation: 1
  labels:
    app: nginx
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:podManagementPolicy: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector:
          f:matchLabels:
            .: {}
            f:app: {}
        f:serviceName: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:app: {}
          f:spec:
            f:containers:
              k:{"name":"statefulset-azuredisk"}:
                .: {}
                f:command: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:resources: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
                f:volumeMounts:
                  .: {}
                  k:{"mountPath":"/mnt/azuredisk"}:
                    .: {}
                    f:mountPath: {}
                    f:name: {}
            f:dnsPolicy: {}
            f:nodeSelector:
              .: {}
              f:kubernetes.io/os: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
        f:updateStrategy:
          f:type: {}
        f:volumeClaimTemplates: {}
    manager: kubectl.exe
    operation: Update
    time: "2021-05-03T23:38:16Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:collisionCount: {}
        f:currentReplicas: {}
        f:currentRevision: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:updateRevision: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-05-03T23:38:45Z"
  name: statefulset-azuredisk
  namespace: default
  resourceVersion: "316093"
  selfLink: /apis/apps/v1/namespaces/default/statefulsets/statefulset-azuredisk
  uid: 822e12e2-5a3d-46af-b2db-dd43a4d2c413
spec:
  podManagementPolicy: Parallel
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  serviceName: statefulset-azuredisk
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - command:
        - /bin/bash
        - -c
        - set -euo pipefail; while true; do echo $(date) >> /mnt/azuredisk/outfile;
          sleep 1; done
        image: mcr.microsoft.com/oss/nginx/nginx:1.19.5
        imagePullPolicy: IfNotPresent
        name: statefulset-azuredisk
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /mnt/azuredisk
          name: persistent-storage
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      annotations:
        volume.beta.kubernetes.io/storage-class: managed-csi
      creationTimestamp: null
      name: persistent-storage
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      volumeMode: Filesystem
    status:
      phase: Pending
status:
  collisionCount: 0
  currentReplicas: 1
  currentRevision: statefulset-azuredisk-6d4cc99764
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updateRevision: statefulset-azuredisk-6d4cc99764
  updatedReplicas: 1

 

Listing persistent volumes

So our persistent volume claim (or PVC) allows us access to mount a given persistent volume (or PV). Let's take a look at our persistent volume claim, and then the underlying volume:

 

kubectl describe statefulset statefulset-azuredisk

 

and snipping just for the lines we want

 

Now, let's see what we can learn from our persistent-storage PVC

 

kubectl describe pvc persistent-storage

 

And the output....

We can see this PVC is located in the default namespace (more on that later), and:

  • It is using the managed-csi StorageClass
  • It is in Bound state
  • It has a backing Volume called pvc-c75395e8-d282-47a6-a1bf-4170cbd0e54e
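If you only want the backing volume name (for example, to feed into the next command), a jsonpath query saves some reading. A quick sketch, run against the same PVC:

```shell
# Print just the name of the PV backing the persistent-storage PVC
kubectl get pvc persistent-storage -o jsonpath='{.spec.volumeName}'
```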

 

Let's see what we can learn about our volume.

 

Listing details of a persistent volume

 

We will describe our persistent volume

 

kubectl describe pv pvc-c75395e8-d282-47a6-a1bf-4170cbd0e54e

 

It shows us:

 

We can see it was provisioned (automatically) by disk.csi.azure.com (the CSI driver!)

We can see it is formatted as ext4 and resides in the following Azure subscription: /subscriptions/fd9f2da5-a8af-4f30-888c-8d94804a93ec

 

Let's take a look in the Azure console. We can see our disk is presented.

 

 

Let's look at some other ways to list the disks created

 

Listing all persistent volume claims

You can check the health of your persistent volume claims (PVC) with kubectl get persistentvolumeclaims

 

kubectl get persistentvolumeclaims

 

or

 

kubectl get pvc

 

Output will look something like this (it will be different on your system)

 

If you would like to list all PVCs across all namespaces, add the --all-namespaces flag to the command-line

 

kubectl get pvc --all-namespaces

 

The output will be slightly longer, depending on how heavily used your cluster is:

 

Any PVCs not in Bound state (for example, showing Pending) warrant investigation.
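One quick (if crude) way to surface such claims, assuming a standard shell:

```shell
# List PVCs in any state other than Bound, across all namespaces
# (--no-headers drops the column header line so only real rows remain)
kubectl get pvc --all-namespaces --no-headers | grep -v ' Bound '
```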

 

Listing all persistent volumes

 

You may list all persistent volumes on the cluster as well.

 

kubectl get persistentvolumes

 

All PVs will be shown. Note that persistent volumes are cluster-scoped rather than namespaced, so no namespace flag is needed:

 

 

Any volumes not in Bound state warrant investigation.

Also, any volumes with a RECLAIM POLICY of Delete may warrant a review, as the volume will be deleted along with the application.

 

Delete

 

For volume plugins that support the Delete reclaim policy, deletion removes both the PersistentVolume object from Kubernetes, as well as the associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume. Volumes that were dynamically provisioned inherit the reclaim policy of their StorageClass, which defaults to Delete. The administrator should configure the StorageClass according to users' expectations; otherwise, the PV must be edited or patched after it is created. See Change the Reclaim Policy of a PersistentVolume.
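As the linked task page shows, an administrator can flip an existing PV from Delete to Retain with a patch. Here we reuse the volume name from the example above; substitute your own PV name.

```shell
# Change the reclaim policy of an existing PV to Retain,
# so the underlying Azure Disk survives deletion of the claim
kubectl patch pv pvc-c75395e8-d282-47a6-a1bf-4170cbd0e54e \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```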

 

Listing all applications

 

Applications within Kubernetes are typically one of three (3) api-resources

  • Deployment
  • DaemonSet
  • StatefulSet

 

You use the following commands to list these api-resources:

  • kubectl get deployments / kubectl get deploy - to list deployments
  • kubectl get daemonsets / kubectl get ds - to list daemonsets
  • kubectl get statefulsets / kubectl get sts - to list statefulsets

 

You may optionally add:

-n <namespace> - to target a specific namespace

--all-namespaces - to list across every namespace
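The three resource types above can also be combined into a single query. A quick sketch:

```shell
# List deployments, daemonsets and statefulsets in every namespace in one call
kubectl get deployments,daemonsets,statefulsets --all-namespaces
```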

 

Let's try it out

 

kubectl get pods

 

Output

 

Try it out yourself...

