Solved

Granular data verification for virtual machine tasks

  • 1 March 2022
  • 8 replies
  • 166 views

Userlevel 4

Hello Commvault Community,

 

I hope you are doing well.

 

I come to you with a question about granular data verification for virtual machine backup jobs. 

 

The customer has a problem with data written to tapes; the matter is being worked through in detail with Commvault Support (Incident 211108-483).

 

Due to the current situation, the client will need to back up the entire environment.

 

Therefore, given the problems with writing data to tapes, I need to be sure which jobs may have problems with restore tasks (mostly virtual machines).

 

  1. When a job fails on a chunk, the error should reference a JobID.

    I am aware that Commvault has a VM-centric mechanism that splits jobs into Parent Jobs and Child Jobs, but the client unfortunately has a VMware pseudoclient on indexing version V1, not V2.

    In V1, the VM-centric mechanism is not available.

    Both in the VSA job history and directly in the virtual machine's history you see the same JobID; they are not distinguished as they are in V2.

    Ultimately, we would like to know which Child Job ended with a chunk error - this distinction is only possible with V2 indexing (if I understand correctly; please correct me if not). A rough sketch of what we would like to be able to do on V2 is included after the list below.

    Will the Workflow from another thread in the Commvault Community allow a 1:1 copy of the settings and a conversion from V1 to V2?

    https://cloud.commvault.com/webconsole/downloadcenter/dcReadme.do?packageId=20274&platformId=1&status=0&downloadType=Script

 

  2. Is there any other way to granularly identify which backups may have a problem with recovery (e.g. due to damaged chunks)?

    We thought about another solution - creating a separate Subclient for each virtual machine; then we would be sure that each machine is checked individually by the Data Verification operation. But it is an extremely time-consuming solution, and we would use it only as a last resort if all other options fail.
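For illustration, this is roughly what we hope to be able to do once the pseudoclient is on V2 indexing - an untested sketch using Commvault's cvpysdk Python SDK. The host name, credentials, client name and the exact filtering parameters are my assumptions and would need to be checked against the SDK documentation:

```python
# Untested sketch: list recent backup jobs for the VSA pseudoclient and flag
# the ones that did not complete cleanly. On V2 indexing every VM gets its
# own child job, so a failing job points at a specific machine.
# Host name, credentials and client name below are placeholders.
from cvpysdk.commcell import Commcell

commcell = Commcell('webconsole.example.com', 'admin', 'password')

# all_jobs() returns {job_id: summary}; the client_name / lookup_time
# filtering shown here is an assumption - check the SDK docs.
jobs = commcell.job_controller.all_jobs(client_name='vmware-pseudoclient',
                                        lookup_time=168)

for job_id in jobs:
    job = commcell.job_controller.get(job_id)
    if job.status.lower() != 'completed':
        print(f'Job {job_id} ended with status: {job.status}')
        # delay_reason / pending_reason usually carry the error text
        print(job.delay_reason or job.pending_reason or 'see job details')
```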

 

The matter is urgent for us, because it is not known what the coming days will bring ...

 

Thanks&Regards,

Kamil


Best answer by Kamil 14 March 2022, 12:07



8 replies

Userlevel 7
Badge +23

Hey @Kamil, I hope you’re doing well.

To clarify, is it only the one tape they have potential restore issues with, or multiple tapes?

Perhaps do a Media Refresh to consolidate the tapes? Any problematic chunk files would then fail and alert you ahead of time.

https://documentation.commvault.com/11.26/expert/10516_media_refresh.html

Userlevel 4

Hi @Mike Struening,

 

The problem is much broader; the support ticket describes it in detail. In short, the problem occurs when restoring virtual machines from tape where the data resides on two or more tapes.

 

When the tape in the drive is changed, there is a problem with reading the data. We are trying to locate the cause, whether software or hardware. The problem most likely occurs when data is written across several tapes.

---

But that is not my main question. In this thread I want to ask about granular verification of VM backup jobs, so that individual machines can be excluded if they report a problem. With a single large backup, it is hard to tell which virtual machine may have a restore problem when a chunk error message appears.

 

Regards,

Kamil

Userlevel 4

We have listed three points that we want to meet:

 

  1. We want to be able to verify at the virtual machine level so that we can determine which machine cannot be recovered. Currently, verification applies to all machines in the job, and in the event of an error it is not known which machine is causing the problem. (Indexing V1)

 

  2. Backup and recovery optimization:
    - all of a virtual machine's files written to one tape (or to consecutive tapes)

    - virtual machines written in their entirety, one after another, rather than parts of a given virtual machine's backup scattered across several tapes - currently some virtual machines (even smaller ones) are written to several tapes.

    For recovery, several tapes then have to be used instead of one.

    - better control over how data is distributed on tapes, for example the possibility of manually placing individual machines on tapes; if there is not enough space to store all the data, a new tape should be used, etc.
     
  3. Regarding our problem with tapes - if we know which VM was written incorrectly, it will be possible to rewrite it to another tape (here it seems to me that converting the indexing version from V1 to V2, and thus gaining visibility of Parent and Child Jobs, would solve the problem).

Userlevel 7
Badge +23

Thanks @Kamil, that’s much clearer!

Give me some time to see what I can find out for you.

Userlevel 4
Badge +8

Hello @Kamil 

 

The VM validation does a live mount operation, which validates that the backed-up VM is able to boot and is responsive, but keep in mind this is only supported from DISK backups.

 

https://documentation.commvault.com/11.24/essential/111292_application_validation_for_vmware_vms.html

Userlevel 7
Badge +23

I can add as well, regarding the VM-per-tape request: it won’t work.

The VSA job does not split VMs per stream, and since a tape is a single stream, with VSA it is not possible other than by creating separate subclients for each VM or using WFS to back up that client.

If they want to split VMs, they usually use a WFS agent per VM to get the desired result.
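For completeness, here is a very rough idea of what scripting the subclient-per-VM approach could look like with the cvpysdk Python SDK. I have not run this exact snippet - the add_virtual_server_subclient() call, its arguments and the content format are assumptions to verify against the SDK documentation for your Feature Release:

```python
# Untested sketch: create one VSA subclient per VM so each VM's backup data
# (and its Data Verification) is isolated. Names and the storage policy are
# placeholders; the subclient creation call and content format are assumed.
from cvpysdk.commcell import Commcell

commcell = Commcell('webconsole.example.com', 'admin', 'password')

client = commcell.clients.get('vmware-pseudoclient')
agent = client.agents.get('Virtual Server')
backupset = agent.backupsets.get('defaultBackupSet')

for vm_name in ['vm-app01', 'vm-db01', 'vm-web01']:        # placeholder VMs
    subclient_name = f'sc_{vm_name}'
    if not backupset.subclients.has_subclient(subclient_name):
        backupset.subclients.add_virtual_server_subclient(
            subclient_name,
            [{'type': 'VM', 'display_name': vm_name}],      # content format assumed
            storage_policy='Tape_SP',                       # placeholder policy
        )
```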

Let me know if that helps!

 

Userlevel 4

Hi,

 

At the moment, it seems that everything has been cleared up.

The client has been convinced to move to the V2 indexing version, so we can consider this thread solved.

Thanks for your help; if any additional questions arise, I will come back.

 

Regards,
Kamil

Userlevel 7
Badge +23

Sounds good, I’ll be here!