Hello, does anyone have any advice on this error during a VM restore test?
ERROR CODE [91:379]: Host [] was not found to restore VM []. Source: CVMA01, Process: cvd
One thing I found is that Check Readiness fails when run on the vCenter. It is pointing to a virtual (VSA) server that no longer exists; only the physical MA is present. I suspect this is the issue. Is there any way to change it to use the physical MA instead of this missing VM? I do not know why the VM was removed, or why the physical MA is not the server being used for jobs.
The interesting thing is that the backup jobs are completing successfully as if nothing is wrong. When I look at Browse and Restore, all of the servers’ VMDK files are marked as “Unavailable”, but the other files (log, vmx, vmxf, etc.) on each VM have date stamps.
Hi @tsmtmiller
Correct me if I did not get this right. You are performing a restore and want to assign a different media agent to execute the data move.
When you launch the restore, go to the Advanced options and select Data Path. The first option there is the media agent you want to use.
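If you want to double-check outside the GUI which media agents the CommServe actually knows about, and whether the stale proxy is still defined as a client, here is a rough sketch using Commvault’s cvpysdk. The hostname, credentials, and client names are placeholders, and the media_agents.all_media_agents / clients.has_client names are assumptions that may vary between SDK versions:

```python
# Rough sketch using Commvault's cvpysdk (pip install cvpysdk) to confirm,
# outside the GUI, which MediaAgents the CommServe knows about and whether
# the stale VSA proxy is still defined as a client.
# Hostname, credentials, and client names are placeholders; the
# media_agents.all_media_agents and clients.has_client names are assumptions
# and may differ between SDK versions.
from cvpysdk.commcell import Commcell

cc = Commcell('commserve.example.com', 'admin', 'password')
try:
    # MediaAgents registered on the CommCell (e.g. the physical MA, CVMA01).
    print('MediaAgents:', sorted(cc.media_agents.all_media_agents))

    # Is the decommissioned VSA proxy still defined as a client?
    print('CVVA01 still defined:', cc.clients.has_client('CVVA01'))
finally:
    cc.logout()
```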
Yes, I want to use another agent. I was looking for this option; I found it and tried it, but the restore still failed. The logs still mention the server that is no longer available.
@tsmtmiller, I searched our entire internal incident database and found only two incidents.
In both cases, the issue was using a remote console:
We managed to restore the VMs when initiating the restore by opening the console on the CommServe itself. This issue happens only when starting the restore from a remote console, and we suspect that it is caused by our AV software.
Can you ensure you are trying this directly on the CommServe (and perhaps update any remote console)?
Thanks for the idea. I was using a remote console. I got on the CS and tried, but got the same error. The logs still reference the missing server, CVVA01.
11596 c74 03/16 13:53:10 979625 vsAppMgr::updateMemberServersList() - Member servers [CVMA01, CVVA01] at level [5-INSTANCE_ENTITY]
I removed this server from Virtual Server Instance Properties > Access nodes. Now the server no longer appears in the logs, but the same error occurs.
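As an aside, a quick way to scan the VSA logs for these “Member servers” entries and confirm which access nodes the instance is still resolving after a change like this. Sketch only: the log directory and file names (vsbkp.log / vsrst.log) are assumptions, so point LOG_DIR at the Log Files folder of your own install:

```python
# Quick scan of the Commvault VSA logs for "Member servers" entries, to see
# which access nodes the instance is still resolving after a config change.
# Sketch only: the log directory and file names (vsbkp.log / vsrst.log) are
# assumptions -- point LOG_DIR at the Log Files folder of your own install.
import re
from pathlib import Path

LOG_DIR = Path(r'C:\Program Files\Commvault\ContentStore\Log Files')
PATTERN = re.compile(r'updateMemberServersList\(\).*Member servers\s*\[([^\]]+)\]')

for log_name in ('vsbkp.log', 'vsrst.log'):
    log_path = LOG_DIR / log_name
    if not log_path.exists():
        continue
    for line in log_path.read_text(errors='ignore').splitlines():
        match = PATTERN.search(line)
        if match:
            servers = [s.strip() for s in match.group(1).split(',')]
            print(f'{log_name}: {servers}')
```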
Check Readiness:
Communication failure between CommServe and Proxy Client CVVA01. Error returned is: Connection failed.
This was an MA that got removed for some reason. I found the files for the VM in vCenter. I will ask the VMware guys if they can put it back.
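Regarding the “Connection failed” readiness error above: a minimal port check from the CommServe toward both the proxy and the MediaAgent can quickly separate a stale-configuration problem from a plain network problem. Sketch only; 8400 is the usual Commvault (cvd) listening port, but confirm the port and host names used in your environment:

```python
# Minimal TCP reachability check from the CommServe toward a proxy or
# MediaAgent, mirroring what Check Readiness reports as "Connection failed".
# Sketch only: 8400 is the usual Commvault (cvd) port, but confirm the port
# and host names used in your environment.
import socket

def can_reach(host: str, port: int = 8400, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in ('CVVA01', 'CVMA01'):
    print(f'{host}:8400 reachable -> {can_reach(host)}')
```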
@tsmtmiller
Can you please also take a look at the “Proxy client” option in the restore screen above and make sure that you are not selecting the machine that is no longer available?
From the CommServe, after removing the old VSA from the Access nodes and with the MA selected, I still get the error. Either it’s the missing server or a network issue, I think.
Error Code: [91:379] Description: Host [] was not found to restore VM []. Source: CVMA01, Process: cvd
If you don’t want to struggle with this, I would open a support case to look a little deeper.
I would also try another route: instead of running the restore from the client side, look at the storage policy job history and try it from there as well.
One additional option is to try to run this restore from Command Center.
There are definitely avenues we can take here to investigate further. Try the storage policy side.
There is also a proxy / access node configuration on the VSA subclient where the VM was protected, so check there too and make sure it’s removed. The subclient setting overrides the instance-level configuration.
We got the old CVVA01 server back up and running, and it is showing OK in Commvault. Tried another restore and got the same Error Code: [91:379].
@tsmtmiller I’d probably open a support case so that support can look at the logs and determine what is actually happening here.
I’ll report back when I figure this out.
Please do! Share the case number so I can track it accordingly.
The issue has been solved. There were two separate problems.
Issue 1: Not able to connect to host []. Solved: there was no snap mount host configured in the advanced options under Access Node ESX Server for the backup subclient.
Issue 2: NetApp issue. One of the aggregates was still taken over by the other node.
I’m not sure how the snap mount host got removed, or if it was never configured to begin with. We have done restores before, so this doesn’t make sense to me; I may test this if time allows. I am also not sure about issue 2; we believe a NetApp update might have caused it.
You’re awesome, thanks @tsmtmiller ! I marked your reply as the best answer.