Survivor's guide to Kubernetes, kubectl and other scary things...



Ok, so if you're new to the #Kubernetes space, you will notice one thing - everyone does their demo on the command-line!

 

Arrgghh - didn't we stop using command-line tools 10 years ago? Well, yes, but command-line and system-to-system automation are what now drive our major cloud providers and the development of containerized, modern applications.

 

So what does all that mean? It means it's time to get comfortable with the command-line again - but don't worry, Kubernetes and its command-line tool, kubectl, is literally one of the easiest CLI tools I have ever learnt.

 

Let's walk through the basics of what we normally check on a Metallic or Commvault Backup & Recovery protected system.

 


Kubectl

 

First things first - you are going to need kubectl. See the Install Tools page in the official Kubernetes documentation for installation steps on your platform.

 

 

Login details

Next, you're going to need some login details to place in your kubeconfig file, which is just a fancy name for the file that contains your credentials for the Kubernetes cluster you want to administer.
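By default, kubectl reads its config from ~/.kube/config, but you can point it at any file with the KUBECONFIG environment variable - handy if you juggle multiple clusters. A minimal sketch (the path below is hypothetical, use wherever you saved your file):

```shell
# kubectl reads ~/.kube/config unless KUBECONFIG points elsewhere.
# The path below is a made-up example location.
export KUBECONFIG="$HOME/clusters/kubeconeu.yaml"

# Confirm which file kubectl will use
echo "$KUBECONFIG"
```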

 

See Organizing Cluster Access Using kubeconfig Files for the details of managing access at scale - but for now, let's just take a look at an example file (below)

 

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: SNIP
    server: https://kubecon-democluster-f322d9f2d-2ffc8162.hcp.eastus.azmk8s.io:6443
  name: KubeConEU
contexts:
- context:
    cluster: KubeConEU
    user: clusterUser_KubeCon2021_KubeConEU
  name: KubeConEU
current-context: KubeConEU
kind: Config
preferences: {}
users:
- name: clusterUser_KubeCon2021_KubeConEU
  user:
    client-certificate-data: SNIP
    client-key-data: SNIP
    token: SNIP
 

 

The important bits are server: - the API server (kube-apiserver) endpoint you want to connect to - and the user and token entries.

 

Let's actually create a Commvault backup user as our next step.

 

Creating a backup user

Creating a Metallic backup user is the first thing you will do to start protecting your stateful K8s apps.

 

See Creating a Service Account for Kubernetes Authentication

 

Create a new serviceaccount

 

kubectl create serviceaccount cvbackup
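If you prefer the declarative style, the same service account can be created from a manifest - a minimal sketch, assuming the default namespace:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cvbackup
  namespace: default
```

Save it to a file (for example cvbackup-sa.yaml) and apply it with kubectl apply -f cvbackup-sa.yaml - the end result is the same as the imperative command above.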

 

You can list your new account with kubectl get sa

 

kubectl get sa

 

Output will look something like this (yours will likely differ)

 

NAME       SECRETS   AGE
cvbackup   1         21h
default    1         21h

 

You can view your new account's secret, which contains the token (i.e. the password), using kubectl describe

 

kubectl describe sa cvbackup

 

Again, your output will differ, but the important thing is the Tokens line

 

Name:                cvbackup
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   cvbackup-token-2l26v
Tokens:              cvbackup-token-2l26v
Events:              <none>

 

Let's now describe our secret with kubectl describe secret

 

kubectl describe secret cvbackup-token-2l26v

 

Output will show something like this.

 

Name:         cvbackup-token-2l26v
Namespace:    default
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: cvbackup
              kubernetes.io/service-account.uid: dce83758-8f4f-44a8-9f9a-75b8c34f28b6

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1761 bytes
namespace:  7 bytes
token:      SNIPPED_TO_KEEP_MY_CLUSTER_SAFE

The token provides authenticated access to your cluster - ensure you store it safely.

Now you have:

  • Username: cvbackup
  • Token: SNIPPED_TO_KEEP_MY_CLUSTER_SAFE
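One gotcha worth knowing: kubectl describe secret shows the token already decoded, but if you pull it straight from the secret object (for example with kubectl get secret cvbackup-token-2l26v -o jsonpath='{.data.token}'), it comes back base64-encoded and must be decoded before use. A quick sketch of the round-trip - the value below is made up, not a real token:

```shell
# Secret data is stored base64-encoded; decode it before pasting anywhere.
# "my-sa-token" stands in for the real (snipped) token value.
encoded=$(printf '%s' "my-sa-token" | base64)

# Decode it back - this is what you'd do to the jsonpath output above
printf '%s' "$encoded" | base64 --decode
```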

 

All you need now is your Server URL or kube-apiserver endpoint.

 

Let's extract that in the next step.

 

Obtaining your kube-apiserver address

 

Depending on whether you are using an on-prem, cloud-hosted, or cloud-BYO Kubernetes cluster - you will need to determine your kube-apiserver address.

 

kube-apiserver is the single entry point for kubectl, Metallic, and any other automation systems used within your business.

 

See kube-apiserver for more information

 

Let's use kubectl config view to view our current kube-apiserver details

 

kubectl config view

 

You will see something like this:

 

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubeconeu-kubecon2021-fd9f2d-2ffc8162.hcp.eastus.azmk8s.io:443
  name: KubeConEU
contexts:
- context:
    cluster: KubeConEU
    user: clusterUser_KubeCon2021_KubeConEU
  name: KubeConEU
current-context: KubeConEU
kind: Config
preferences: {}
users:
- name: clusterUser_KubeCon2021_KubeConEU
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: 7317d54fd19b8a050fb7136db412a537d02201d9f8a16c7b482a30bfee91263d4241925332e5de1d620f83d4182e64be22c9e3d3655d9ff0d652ead5db83c892

 

The server: line is the kube-apiserver.
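If you only want the server value on its own, kubectl can extract it directly with kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'. The sketch below does the same with awk over a saved copy of the output (a two-line sample stands in for the real thing):

```shell
# Pull the server: value out of `kubectl config view` output.
# The sample lines below stand in for real output saved to a file.
printf '%s\n' \
  '    server: https://kubeconeu.example.hcp.eastus.azmk8s.io:443' \
  '  name: KubeConEU' |
awk '$1 == "server:" { print $2 }'
```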

 

Determining which version of Kubernetes is being used

If you manage a large number of clusters, or deploy apps to many clusters, it can be hard to keep track of which system you are operating on.

 

kubectl get nodes will provide details of the active nodes in your environment, including the Kubernetes version each node is running.

 

kubectl get nodes

 

This is an example output from a Microsoft Azure AKS cluster (your output may differ)

 

NAME                                STATUS   ROLES   AGE   VERSION
aks-nodepool1-85424502-vmss000000   Ready    agent   21h   v1.19.9
aks-nodepool1-85424502-vmss000001   Ready    agent   21h   v1.19.9
aks-nodepool1-85424502-vmss000002   Ready    agent   21h   v1.19.9
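To boil that down to just the distinct Kubernetes versions in play, you can filter the VERSION column (kubectl can also emit it directly with -o jsonpath='{.items[*].status.nodeInfo.kubeletVersion}'). A sketch over sample output like the above:

```shell
# Print the unique kubelet versions from `kubectl get nodes` output.
# The sample lines below stand in for real output.
printf '%s\n' \
  'NAME   STATUS  ROLES  AGE  VERSION' \
  'node0  Ready   agent  21h  v1.19.9' \
  'node1  Ready   agent  21h  v1.19.9' |
awk 'NR > 1 { print $5 }' | sort -u
```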

 

Metallic supportability is listed in the following Requirements page

Commvault Backup & Recovery supportability is listed in the following Supported Kubernetes Distributions page

 

Determining which storage classes are available

It is key to understand what type of storage is used on a cluster, and whether that storage is supported by one of the production CSI drivers listed in the Kubernetes CSI Developer Documentation.

 

Let's use kubectl get storageclasses or kubectl get sc to list the available storage classes:

 

kubectl get sc

 

Output will look something like this:

 

NAME                    PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
azurefile               kubernetes.io/azure-file   Delete          Immediate              true                   22h
azurefile-csi           file.csi.azure.com         Delete          Immediate              true                   22h
azurefile-csi-premium   file.csi.azure.com         Delete          Immediate              true                   22h
azurefile-premium       kubernetes.io/azure-file   Delete          Immediate              true                   22h
default (default)       kubernetes.io/azure-disk   Delete          WaitForFirstConsumer   true                   22h
managed-csi             disk.csi.azure.com         Delete          WaitForFirstConsumer   true                   22h
managed-csi-premium     disk.csi.azure.com         Delete          WaitForFirstConsumer   true                   22h
managed-premium         kubernetes.io/azure-disk   Delete          WaitForFirstConsumer   true                   22h

 

Now what does this tell us?

 

Well, we have a mixture of CSI-based StorageClasses and a number of in-tree StorageClasses (the latter are currently being phased out in favour of CSI implementations).

 

Metallic / Commvault will protect data stored in both CSI-based (out-of-tree) StorageClasses and non-CSI (in-tree) StorageClasses.

The following storage classes are CSI-based; these are the recommended best practice for snapshot-based protection:

  • azurefile-csi
  • azurefile-csi-premium
  • managed-csi
  • managed-csi-premium

 

The following storage classes are non-CSI based; they predate the release of the CSI framework and are referred to as in-tree storage drivers.

  • azurefile
  • azurefile-premium
  • default (actually azure-disk)
  • managed-premium
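A rough rule of thumb for telling the two apart: CSI provisioner names contain a .csi. segment (e.g. disk.csi.azure.com), while in-tree provisioners use the legacy kubernetes.io/ prefix. A sketch that classifies the PROVISIONER column on that basis (sample rows inlined):

```shell
# Classify storage classes as CSI or in-tree from the provisioner name.
# The sample lines stand in for `kubectl get sc` NAME/PROVISIONER columns.
printf '%s\n' \
  'azurefile      kubernetes.io/azure-file' \
  'azurefile-csi  file.csi.azure.com' \
  'managed-csi    disk.csi.azure.com' |
awk '{ print ($2 ~ /\.csi\./ ? "CSI:" : "in-tree:"), $1 }'
```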

 

More information on azureDisk, and azureFile may be found in the Kubernetes core documentation.

 

We will continue the storage discovery in our next post...

