Question

Kubernetes OpenShift backup for Rabbit - RabbitMQ

  • September 7, 2025
  • 3 replies
  • 38 views


Hello!

I hope you all are doing well!
I have been working with Commvault for a few years, but I have never dealt with Kubernetes backup before. So I really do have a lot of questions and challenges.

While I am still trying to get the environment stable and resilient, the Kubernetes team raised a specific application with me: RabbitMQ.
We currently have a backup scheduled once per day, but the k8s team told me that the RabbitMQ queues change constantly, with a lot of activity throughout the day.

So my concern is: how can I help the k8s team to properly cover this? Do you know any best practices for such cases? Something like a log backup maybe? I don’t think running a full RabbitMQ backup every 15–30 minutes is a good idea.
Do you have any tips?
 

3 replies

  • Vaulter
  • September 8, 2025

Hi Grzegorz_Z,

Commvault doesn’t have any specific RabbitMQ recommendations or best practices.
To cover an application appropriately, it’s best to first determine the Recovery Point Objective (RPO): if the PVC for RabbitMQ went offline and needed recovery, how far back is the business willing to go to recover data?

If that is only 15–30 minutes, then you can configure scheduled backups to run every 15–30 minutes. The recommendation, however, would be to configure these as incremental backups with a synthetic full every week or two, depending on your environment’s needs.

This should keep the application protected adequately.

If there are specific vendor requirements you could look at implementing them via CvTasks: https://documentation.commvault.com/11.40/software/application_consistent_backups_for_kubernetes_using_cvtasks.html
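As a rough sketch of what that could look like for RabbitMQ: a CvTask can run a quiesce/export step inside the pod before the PVC snapshot is taken. The `apiVersion`, `kind`, and field names below are assumptions based on the general shape of the CvTasks feature, so verify the exact schema against the linked documentation; the `rabbitmqctl export_definitions` command itself is a standard RabbitMQ CLI command.

```yaml
# Hypothetical CvTask sketch -- the exact CRD schema and field names
# must be checked against the Commvault CvTasks documentation above.
apiVersion: k8s.cv.io/v1
kind: CvTask
metadata:
  name: rabbitmq-pre-snapshot
  namespace: rabbitmq
spec:
  # Assumed hook name: runs inside the RabbitMQ pod before the snapshot.
  preBackupSnapshot:
    command: "/bin/sh"
    args:
      - "-c"
      # export_definitions captures exchanges, queues, bindings, users,
      # and policies (topology metadata only, not message payloads) onto
      # the PVC so it is included in the snapshot.
      - "rabbitmqctl export_definitions /var/lib/rabbitmq/definitions.json"
```

One caveat worth passing to the k8s team: `rabbitmqctl export_definitions` covers topology and policies, not the messages themselves; recovering message data still depends on durable queues and persistent messages living on the backed-up PVC.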

Cheers,


  • Author
  • Bit
  • September 11, 2025

Many thanks for your response!

 

Yeah, I do agree that we can configure a regular k8s backup for this namespace. But I am wondering how it will affect our DDB, storage space, and the k8s environment itself.
Won’t 30 minutes be too short an interval? I have run into a DDB problem before when making too many NFS backups per day. But here we use a Dell DDB instead of the CV DDB…

 


  • Vaulter
  • September 11, 2025

Space shouldn’t be affected: the PVC data/snapshot is read at the file level when mounted against the worker node, so any DDB should be able to deduplicate it well. k8s also shouldn’t be impacted as long as the resources are available for the worker to run.

I can’t speak for a Dell DDB, however. I’d suggest reaching out to Dell to confirm whether there might be any issues.

Cheers,