Solved

Kubernetes Backup Failing

  • 24 June 2021
  • 7 replies
  • 651 views

Userlevel 1
Badge +6

Hello,

Added a Kubernetes cluster. The pods are discovered fine; however, during the backup I see the following in vsbkp.log and the job goes into a waiting state.

Error mounting snap volumes. Possible reason includes there is no FC or iscsi connectivity found with esx server. Please check whether nIscsiEnable registry key is set if you are using Iscsi method for mounting.

There isn't much in the documentation for K8s.

 

Any ideas?


Best answer by Mike Struening RETIRED 16 August 2021, 22:40


7 replies

Userlevel 2
Badge +3

Do we have the vsbkp logs for this? Also, has this been escalated to customer support?

 

Tagging Amit

@amitkar 

Badge

Do we know which service pack the customer is on? Also, is there an escalation ticket for this issue?

If snapshot creation fails for any reason, this was the JPR (Job Pending Reason) we showed.

The JPR issue is fixed in P23: https://updatecenter.commvault.com/Form.aspx?BuildID=1100080&FormID=116288

Userlevel 7
Badge +23

@TNO , I’ll defer to @amitkar, who would know better, though I’ll add that I’ve seen this error in various incidents with varying solutions.

If you have an open support case, please share the incident number so I can track it.

Userlevel 1
Badge +3

@Mike Struening Thanks for tagging me. The error comes from us failing to mount/create a snapshot of the PVC. As @rohit mentioned, the error reporting has been enhanced in recent FRs to use more K8s-specific language.

 

@TNO do you have additional context from vsbkp.log? We’ll need a few more lines of context to see what the exact failure was. Alternatively, if you already have a support ticket, we can get more information through it.
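As a side note while gathering that context, the snippet below is a minimal, hypothetical Python helper that prints the lines surrounding the error in vsbkp.log. The log path and search string are assumptions for illustration, so point it at the actual file in your Log Files directory.

```python
# Hypothetical helper for pulling context lines out of vsbkp.log.
# The path and search string below are assumptions; adjust to your Log Files directory.
from collections import deque

def show_context(path, needle, before=10, after=10):
    """Print `before` lines above and `after` lines below every match of `needle`."""
    prev = deque(maxlen=before)   # rolling window of the most recent lines seen
    trailing = 0                  # lines after a match that still need printing
    with open(path, errors="replace") as fh:
        for line in fh:
            if needle in line:
                print("".join(prev), end="")
                print(line, end="")
                prev.clear()
                trailing = after
            elif trailing:
                print(line, end="")
                trailing -= 1
            else:
                prev.append(line)

show_context("vsbkp.log", "Error mounting snap volumes")
```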

Userlevel 1
Badge +6

@rohit We are running on version 11.22.22.

@amitkar Yes, we have an incident open for that: 210624-280. We haven’t received any input from support yet.

🙂

 

Userlevel 7
Badge +23

Thanks, I’ll keep an eye on it!

Userlevel 7
Badge +23

Sharing the CMR Resolution:

Kubernetes backup failing with "Error mounting snap volumes"

Solution:

- The Kubernetes backup was failing because the worker node did not have enough resources to create the temporary pods needed for the backup
- Once the customer reduced the load on that specific worker node, the job was able to run and complete without issues (see the sketch after this list for a quick way to check node headroom)

- A CMR has been created: 325875 - optimization logic to be created on the Commvault side for provisioning the resources needed
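For anyone who lands here with the same symptom, the sketch below is one hedged way to check whether a worker node still has CPU and memory headroom for the temporary backup pods. It assumes the official `kubernetes` Python client and a hypothetical node name, and the quantity parsing is simplified; `kubectl describe node <name>` and `kubectl top nodes` report the same kind of information.

```python
# Minimal sketch using the official `kubernetes` Python client (pip install kubernetes).
# The node name and the simplified quantity parsing are assumptions for illustration.
from kubernetes import client, config

def parse_cpu(qty):
    # Handles plain cores ("2") and millicores ("500m"); other suffixes are omitted here.
    return int(qty[:-1]) / 1000 if qty.endswith("m") else float(qty)

def parse_mem(qty):
    # Handles Ki/Mi/Gi suffixes and plain bytes; other suffixes are omitted here.
    for suffix, factor in (("Ki", 2**10), ("Mi", 2**20), ("Gi", 2**30)):
        if qty.endswith(suffix):
            return int(qty[:-2]) * factor
    return int(qty)

def node_headroom(node_name):
    config.load_kube_config()     # use load_incluster_config() when running inside a pod
    v1 = client.CoreV1Api()

    node = v1.read_node(node_name)
    alloc_cpu = parse_cpu(node.status.allocatable["cpu"])
    alloc_mem = parse_mem(node.status.allocatable["memory"])

    # Sum the CPU/memory requests of every pod already scheduled on this node.
    pods = v1.list_pod_for_all_namespaces(field_selector=f"spec.nodeName={node_name}")
    req_cpu = req_mem = 0
    for pod in pods.items:
        for c in pod.spec.containers:
            requests = (c.resources.requests or {}) if c.resources else {}
            req_cpu += parse_cpu(requests.get("cpu", "0"))
            req_mem += parse_mem(requests.get("memory", "0"))

    print(f"{node_name}: ~{alloc_cpu - req_cpu:.2f} CPU cores and "
          f"~{(alloc_mem - req_mem) / 2**20:.0f} MiB not yet requested")

node_headroom("worker-node-1")    # hypothetical node name
```

If the unrequested capacity is near zero, the temporary backup pods may not be schedulable until load on the node is reduced, which matches the resolution above.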
 
