Added a Kubernetes cluster. The pods are discovered fine, however during the backup I see the following in vsbkp.log.
The job goes pending and after a while it fails.
4868 3940 01/27 13:37:18 JOB ID CKubsInfo::OpenVmdk() - Listing volume [qbc] failed with error [0xFFFFFFFF:{KubsApp::ListDir(1401)} + {KubsVol::ListDir(643)} + {KubsWS::ListDir(193)/ErrNo.-1.(Unknown error)-Exec failed with unhandled exception: set_fail_handler: 20: Invalid HTTP status.}]
4868 3940 01/27 13:37:18 JOBID CKubsFSVolume::Close() - Closing Disk: [qbc] of VM [test-clst`StatefulSet`qbc`b5bf0bb6-a0fc-4123-ac68-9de3c3800807]
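I suspect this comes from the exec call the access node makes into the pod to list the volume contents (the KubsWS::ListDir / "Invalid HTTP status" part looks like the websocket exec failing). As a sanity check I have been testing whether that path works at all outside of Commvault; a rough sketch only, with the namespace, pod name and mount path as placeholders for my environment:

kubectl exec -n <namespace> <pod-name> -- ls -l <mounted-volume-path>

If that fails in the same way, the problem sits between the kubeconfig/service account and the API server rather than in the backup job itself.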
The documentation is very shallow and there are not enough KBs around Kubernetes.
Ideas?
Curious as to the version you are trying to protect @dude ?
CV SP20.32 Kubernetes 1.16.8
@dude
If possible, can you share the vsbkp.log and the YAML of the application you are trying to protect? It will help us better understand the issue you are facing.
-Manoranjan
I have opened a ticket with commvault to discuss further.
@Dude,
if you found a solution to this problem, can you share it, as we are also facing the same issue?
That’s on me, @raj5725! Not sure how this one slipped through as ‘solved’!
Here is the last action for the case:
Here is a summary of the issues seen during troubleshooting on Tuesday.
It is also advised to update to the latest available hotfixes, as they include some fixes for Kubernetes backup.
On the call we added a new Kubernetes instance using the API server IP:port (previously the Rancher URL was used).
1. Tested one application - this failed due to a failed mount attempt on the volume. The error was seen on the Kubernetes side - please check with the storage team on this.
2. The failed backup attempt also left pods stuck in a creating or terminating state - the solution for this was to force delete the pod (kubectl delete pod “podname” --force --grace-period=0); a short sketch of the cleanup commands follows below.
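For anyone hitting the same stuck pods, this is roughly the cleanup used on the call (a sketch only - the namespace and pod names are placeholders, adjust for your cluster):

# list pods left in a creating/terminating state after the failed job
kubectl get pods -n <namespace> | grep -Ei 'creating|terminating'

# check why the volume mount failed (events are at the bottom of the output)
kubectl describe pod <pod-name> -n <namespace>

# force delete a stuck pod left behind by the backup
kubectl delete pod <pod-name> -n <namespace> --force --grace-period=0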
I did notice the case was closed; @dude responded afterwards, but since the case was closed there was no further response.
@dude, do you recall what the main fix for this issue was?
P.S. I unmarked this as answered.
I don't think there was a proper fix for it. There were a lot of unfamiliar errors around disk mounts and unmounts. We decided not to pursue CV with Kubernetes. The documentation had very little info about configuring and troubleshooting at the time, and the support ticket did not help nor provide the confidence/results we expected.
Sorry not much of a help in this area.
No apology needed, @dude!
@raj5725 , can you create a support incident and share the case number with me?
@Mike Struening
We are yet to install CV in production mode. Before going into production we were testing the CV + K8s integration in a test environment and we got these errors; we wanted to confirm before going live. Let me add the CV licenses, if possible, to log a support incident.
Sounds like a solid plan. Keep me posted!
Hey @raj5725 , following up to see if you had a chance to open a support case for this?
Let me know the case number!
Hi @raj5725 , gentle follow up on this one. Were you able to get an incident created? or did you resolve the issue? Please let me know how this is going.
Thanks!
Hi Mike,
Sorry for the delayed response.
Since this environment is a highly sensitive government site, I am not able to add the CV licenses to the test environment, and because of that I am not able to log a support incident. I am trying to find a workaround to achieve this and will keep you posted.
Regards
Understand completely. Please do keep us posted!
@raj5725 reach out to me at mfasulo@commvault.com
Hi Mike and MFasulo,
I was finally able to create a support incident (211115-320). Will wait for an answer from them.
Thanks, @raj5725, I’ll keep an eye on it.
Sharing the Solution for the second incident:
Finding Details:
The temp pod was not being created during backups for the pods with a PV.
Assisted in configuring the additional setting sK8sImageRegistryUrl so that the pod image is pulled from the local repository.
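For anyone else hitting this, it may also be worth confirming the cluster itself can pull the worker-pod image from that local registry before relying on the additional setting. A hedged sketch - the registry URL and image path below are placeholders, not the actual Commvault image name:

# try to start a throwaway pod from the local registry image
kubectl run cv-pull-test -n <namespace> --restart=Never --image=registry.example.local:5000/<worker-image>:<tag>

# ErrImagePull / ImagePullBackOff here means the registry URL or image path is wrong
kubectl get pod cv-pull-test -n <namespace>
kubectl describe pod cv-pull-test -n <namespace>

# clean up
kubectl delete pod cv-pull-test -n <namespace>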
Hi,
Thanks for updating the solution, and my apologies for the delayed response. I will summarize the problem and the solution that resolved it below.
CommVault environment: running 11.24 when the issue was seen, then upgraded to CV 11.25 - the problem remained.
Kubernetes environment: Charmed Distribution of Kubernetes (CDK - Canonical Ubuntu 18.04)