
The job is a VM restore; I'm sharing the cvd.log output:

 

9540  21e4  08/29 21:35:28 174330 stat- ID [DMBaseClose], Samples [59], Time [0.011945] Sec(s), Average [0.000202] Sec/Sample
9540  21e4  08/29 21:35:28 174330 stat- ID [DMBaseChunkClose], Samples [64], Time [0.009216] Sec(s), Average [0.000144] Sec/Sample
9540  21e4  08/29 21:35:28 174330 28-# [FSRESTHEAD ] SendCtrlBuf: Sending Ctrl Msg 0x85
9540  21e4  08/29 21:35:28 ###### CPipelayer::SendPipelineBuffer() - Tail has reported error [98][Services on the tail side of the SDT pipe are going down.]. Cannot continue.
9540  21e4  08/29 21:35:28 174330 28-# [FSRESTHEAD ] Error in flushing the current buffer.
9540  21e4  08/29 21:35:28 174330 28-# [FSRESTHEAD ] FinishRestore: Restore completed. StopCode [0x1]-[Destroy recvd from client], StopCodeClnt [1]
9540  21e4  08/29 21:35:28 174330 28-# [FSRESTHEAD ] Stats: AppTy 106, Total AF 59, Seeks 12/14483/14483, WrkItms 216602, MxRdr (Bsy-Tot: 1-4, Pnd 0), MsgsRcvd 735, QLen 0, LstMsg FSR_MSG_DESTROY
9540  21e4  08/29 21:35:28 174330 28-# [FSRESTHEAD ] Stats: [CtrlCh Idle Time] Exp Avg [0.08], Total Avg [0.08], Total Time [9.99], Total Count [121]
9540  21e4  08/29 21:35:28 174330 28-# [FSRESTHEAD ] Stats: [Seek Time] Exp Avg [0.00], Total Avg [0.00], Total Time [2.99], Total Count [14483]
9540  21e4  08/29 21:35:28 174330 28-# [FSRESTHEAD ] Stats: [AllocBuf Time] Exp Avg [0.00], Total Avg [0.00], Total Time [54.40], Total Count [374718]
9540  21e4  08/29 21:35:28 174330 28-# [FSRESTHEAD ] Stats: [AllocBuf Speed] Curr Avg [430.23] MB/Sec, Bytes [24542530128]; Total Avg [430.23] MB/Sec, Bytes [24542530128]
9540  21e4  08/29 21:35:28 174330 28-# [FSRESTHEAD ] Stats: [MediaRead Time] Exp Avg [0.00], Total Avg [0.00], Total Time [33.17], Total Count [374718]
9540  21e4  08/29 21:35:28 174330 28-# [FSRESTHEAD ] Stats: [MediaRead Speed] Curr Avg [261.07] MB/Sec, Bytes [9079855410]; Total Avg [261.07] MB/Sec, Bytes [9079855410]
9540  21e4  08/29 21:35:28 174330 28-# [FSRESTHEAD ] Stats: Total links [0], Blks Size Avg/Total [0/0] bytes; TagData [345672], TagDataSize [9040869698] bytes
9540  21e4  08/29 21:35:28 174330 28-# [FSRESTHEAD ] SendCtrlBuf: Sending Ctrl Msg 0x14C
9540  21e4  08/29 21:35:28 ###### SdtNetLink::recvMsgPacket() - Received SDT_LINK_FIN packet.
9540  21e4  08/29 21:35:28 174330 Closing all sockets. RCId [3]
9540  21e4  08/29 21:35:28 ###### SdtNetLink::recvMsgPacket() - The other side has closed the network connection gracefully
9540  21e4  08/29 21:35:28 174330 SdtBase::setLastErr: Setting last err [92][The other side has closed the network connection gracefully] RCId [3]
9540  21e4  08/29 21:35:28 ###### CPipelayer::SendPipelineBuffer() - Cannot peek for message. Error [92][The other side has closed the network connection gracefully]
9540  21e4  08/29 21:35:28 174330 28-# [FSRESTHEAD ] Error in flushing the current buffer.
9540  21e4  08/29 21:35:28 174330 28-# [FSRESTHEAD ] SendCtrlBuf: Could not allocate pipeline buf
9540  21e4  08/29 21:35:28 174330 28-# [FSRESTHEAD ] Removing all SEEK msgs from queue. QLen = 0
9540  21e4  08/29 21:35:28 174330 28-# [FSRESTHEAD ] Removed 0 SEEK msgs from queue. QLen = 0
9540  21e4  08/29 21:35:28 ###### [FSRESTHEAD ] SCtrlSess::RelRef: Deleting CtrlSess obj [000000B2933F0510] as ref. count is 0. Listener [0], JobId [174330], SockFd: (6408/-1)
9540  21e4  08/29 21:35:28 174330 28-# [FSRESTHEAD ] RelRef: Deleting restore head as ref. count is 0.
9540  21e4  08/29 21:35:28 174330 SdtBase::relRef: Going to delete SdtBase as ref count is down to 0. RCId [3]
9540  21e4  08/29 21:35:28 174330 SdtBase is being destroyed. RCId [3]
9540  21e4  08/29 21:35:28 174330 Name [Head], Start Delay [2672.76 s], Avg [7.13 ms], Procsng Time [37.38 s], Avg [99.70 us], Bytes [8.47 GB], Speed [232.09 MB/s], Samples [374956] RCId [3]
9540  21e4  08/29 21:35:28 174330 Name [Head CRC32], Start Delay [7.19 s], Avg [19.17 us], Procsng Time [19.46 s], Avg [51.89 us], Bytes [8.47 GB], Speed [445.99 MB/s], Samples [374956] RCId [3]
9540  21e4  08/29 21:35:28 174330 Name [Head Network], Start Delay [7652.27 s], Avg [20.41 ms], Procsng Time [93.83 s], Avg [250.25 us], Bytes [8.47 GB], Speed [92.47 MB/s], Samples [374956] RCId [3]
9540  16b4  08/29 21:35:37 174319 1-# [DM_BASE    ] LAStateInfo: Waiting for Datamover Cmd. LinksRead=2/0/2, FreeQ=256, TempQ=0, BusyQ=0, RCID=16/1, RID=-1, TID=5812, State=0/0, Line=684
9540  16fc  08/29 21:35:52 174319 11-# [FSRESTHEAD ] OnIncomingData: Recvd FSR_MSG_KEEPALIVE
9540  2610  08/29 21:36:08 ###### cccAuthServer() - Only built-in certificate is loaded. Client name requested by remote end: (commvault-drp)(). Physical node in the registry: (commvault-drp_new).
9540  24f0  08/29 21:36:08 ###### cccAuthServer() - Only built-in certificate is loaded. Client name requested by remote end: (commvault-drp)(). Physical node in the registry: (commvault-drp_new).
9540  2324  08/29 21:36:09 174319 16-# [FSRESTHEAD ] OnIncomingData: Recvd FSR_MSG_KEEPALIVE

Hi @JuanPa 

Can you manually assign commvault-drp_new as the MediaAgent when initiating the restore?

It seems a MediaAgent is being selected automatically.

If this works, disable the old MediaAgent commvault-drp in the Storage > MediaAgents section. I'm not sure, but it seems I forgot to disable the old MA configuration in the GUI.

You might also need to check the VSA proxy assignment for the vCenter instance/subclient configuration; if the old VSA proxy is still configured, please change it to commvault-drp_new.
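
As a side note, one way to confirm which MediaAgent a job actually used is to look for the SDT pipe name in the logs; in the SdtTailSrv lines later in this thread the pipe name appears to embed the client names and the job ID (SDTPipe_<client>_<client>_<jobid>_...). Below is only a rough sketch, not an official tool: the log directory, the log file names, and the job ID are assumptions you would adjust for your environment.

import re
from pathlib import Path

# Assumptions (adjust for your environment): default Commvault log directory and
# the job ID of the restore you are troubleshooting.
LOG_DIR = Path(r"C:\Program Files\Commvault\ContentStore\Log Files")  # hypothetical path
JOB_ID = "176487"

# The SDT pipe name logged on the tail side appears to embed the client names and
# the job ID, e.g. SDTPipe_<client>_<client>_<jobid>_..., so it hints at which
# MediaAgent pair actually served the data pipe for this job.
pipe_pattern = re.compile(r"SDTPipe_\S+_" + re.escape(JOB_ID) + r"_\S*")

for log_name in ("cvd.log", "vsrst.log"):  # log file names are an assumption
    log_path = LOG_DIR / log_name
    if not log_path.exists():
        continue
    for line in log_path.read_text(errors="ignore").splitlines():
        if JOB_ID not in line:
            continue
        match = pipe_pattern.search(line)
        if match:
            print(f"{log_name}: {match.group(0)}")
        elif "error" in line.lower():
            print(f"{log_name}: {line.strip()}")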


Hi Jos, 

I have made the assignments using commvault-drp_new and the problem still persists. The test was carried out by manually selecting the MA, and after some time it fails.

 

 

 


Hey @JuanPa - the cvd.log isn't telling the whole story here. We may need to look at the vsrst.log on the proxy being used for the restore to figure out what is going on.

 

If you open the job in the job controller, does it give a pending or failure reason at all?
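
For reference, here is a rough way to pull just that job's error lines out of vsrst.log on the proxy. This is only a sketch: the log directory below is the default Windows install path and the job ID is a placeholder, so adjust both for your environment.

from pathlib import Path

# Assumptions: default Commvault log directory on the VSA proxy and the job ID
# shown in the Job Controller. Adjust both for your environment.
LOG_DIR = Path(r"C:\Program Files\Commvault\ContentStore\Log Files")  # hypothetical path
JOB_ID = "176487"
KEYWORDS = ("error", "failed", "not valid", "pending")

# Print every line for this job that mentions one of the failure keywords.
for line in (LOG_DIR / "vsrst.log").read_text(errors="ignore").splitlines():
    if JOB_ID in line and any(keyword in line.lower() for keyword in KEYWORDS):
        print(line.strip())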


Hi Damian, 

 

Looking at the job, it shows the following:

The JobManager.log:

The vsrst.log:

 

5820  2560  08/30 18:55:46 176487 CVMDiskInfo::OpenHostDiskHandle() - Opened to [sni-app-pro-w01_DRP] [sni-app-pro-w01_1.vmdk] using Transport mode: [nbd]
5820  2560  08/30 18:55:46 176487 vsvRstObj::writeMetadata() - Extent Size [1048576] bytes [2048] sectors [512] bytes per sector
5820  2560  08/30 18:55:46 176487 vsvRstObj::writeData() - Memory allocated for data: [1048576] bytes
5820  2560  08/30 18:55:46 176487 CVArchive::ReadBuffer() - PL_PERF_COUNTERS (2, 43, 373279)
5820  2560  08/30 18:55:46 176487 CVArchive::ReadBuffer() - PL_FS_CLOSE_AFILE(2, 43, 373279)
5820  2560  08/30 18:55:46 176487 VSRstArchiveReader::ReadPipelineThread() - Received PL_FS_CLOSE_AFILE buffer.
5820  2560  08/30 18:55:46 176487 stat- ID [readmedia], Bytes [33561498], Time [1.542644] Sec(s), Average Speed [20.747974] MB/Sec
5820  2034  08/30 18:55:46 176487 VSRstArchiveReader::OpenArchiveFile() - Opening Archive File <2:43:373670> with CopyPrecedence:[3]
5820  2034  08/30 18:55:46 176487 Archive Manager Reported error: Archive file for the Storage Policy Copy is not valid
5820  2034  08/30 18:55:46 176487 Trying to attach to JM without initializing the object
5820  2034  08/30 18:55:46 176487 JM Client  Failed to connect to JM
5820  2034  08/30 18:55:46 176487 VSRstArchiveReader::OpenArchiveFile() - Failed to open Archive
5820  2034  08/30 18:55:46 176487 VSRstSyncWorkerIdxBrowseClientResp::HandleSeekToOffset() - VSRstWorkerIdxBrowseClientResp::HandleSeekToOffset returned error.
5820  2034  08/30 18:55:46 176487 IDXBROWSECL Callback failed while processing browse error
5820  2034  08/30 18:55:46 176487 VSRstWorker::PerformBrowse() - Failed to browse VM Data, -1
5820  2034  08/30 18:55:46 176487 stat- ID [ProcessSeekMsgs], Samples [1], Time [3.972513] Sec(s), Average [3.972513] Sec/Sample
5820  2034  08/30 18:55:46 176487 stat- ID [ReceiveSeekMsgs], Samples [2], Time [2.240605] Sec(s), Average [1.120303] Sec/Sample
5820  2560  08/30 18:55:46 176487 VSRstArchiveReader::ReadPipelineThread() - ReadPipeline Thread [0-0] exited
5820  2560  08/30 18:55:46 176487 stat- ID [readmedia], Bytes [33561498], Time [1.542644] Sec(s), Average Speed [20.747974] MB/Sec
5820  2034  08/30 18:55:46 176487 CVArchive::ClosePipeline() - Shutting down head session
5820  2034  08/30 18:55:46 176487 CCVAPipelayer::ClosePipeline() - About to shut down RESTORE pipeline
5820  2034  08/30 18:55:46 176487 CPipelayer::ShutdownPipeline() - stat- SDT [000000E3DA3C9EF0] [duration - 4 seconds]
5820  2034  08/30 18:55:46 176487 CPipelayer::ShutdownPipeline() - SDT [000000E3DA3C9EF0] Shutdown completed.
5820  2034  08/30 18:55:46 176487 VSRstArchive::UpdateJobProgress() - Sending job stream update message for stream [1]
5820  2034  08/30 18:55:46 176487 ~CVArchive() - Destroying CVArchive. This=000000E3DA3C7750
5820  2034  08/30 18:55:46 ###### SdtTailSrvPool::Rel: Resetting SrvPool as ref. count is 0.
5820  278   08/30 18:55:46 ###### SdtTailSrv::MainLoop: Quit flag found set. Will get out ASAP.
5820  278   08/30 18:55:46 176487 SdtBase::setLastErr: Setting last err [98][Services on the tail side of the SDT pipe are going down.] RCId [14]
5820  1e88  08/30 18:55:46 176487 SdtTailSrv::DropClientWorker: Removing Client [SDTPipe_commvault-drp_new_commvault-drp_new_176487_1661903742_5820_8244_000000E3DA3C9EF0], Id [2880] from the list. RCId [14]
5820  1e88  08/30 18:55:46 176487 SdtTail::onDisconnect: Sending error [98][Services on the tail side of the SDT pipe are going down.] to the head. RCId [14]
5820  1e88  08/30 18:55:46 176487 SdtTail::onDisconnect: Sent error code to the head RCId [14]
5820  1e88  08/30 18:55:46 176487 SdtTailSrv::DropClientWorker: Size of selected list [0] RCId [14]
5820  1e88  08/30 18:55:46 176487 SdtTailSrv::DropClientWorker: Draining away incoming data... RCId [14]
5820  1e88  08/30 18:55:46 176487 SdtTailSrv::DropClientWorker: Have drained away incoming data... Cnt [0] RCId [14]
5820  1e88  08/30 18:55:46 176487 SdtBase::relRef: Going to delete SdtBase as ref count is down to 0. RCId [14]
5820  1e88  08/30 18:55:46 176487 SdtTail::logStats:Final Clnt [commvault-drp_new], PId [5820], Cnt - BsyProcsg/Allocs [0/843]; Time - Procsg/NwRecv/Total [2/2/4] secs, Bytes recvd [10940418] RCId [14]
5820  1e88  08/30 18:55:46 176487 SdtBase is being destroyed. RCId [14]
5820  1e88  08/30 18:55:46 176487 Name [Tail Network], Start Delay [101.87 s], Avg [120.99 ms], Procsng Time [2.31 s], Avg [2.74 ms], Bytes [10.43 MB], Speed [4.52 MB/s], Samples [842] RCId [14]
5820  1e88  08/30 18:55:46 176487 Name [Tail Decryption], Start Delay [48.54 ms], Avg [57.64 us], Procsng Time [9.85 ms], Avg [11.70 us], Bytes [10.43 MB], Speed [1.03 GB/s], Samples [842] RCId [14]
5820  1e88  08/30 18:55:46 176487 Name [Tail Decompression], Start Delay [239.00 us], Avg [283.85 ns], Procsng Time [125.56 ms], Avg [149.12 us], Bytes [10.44 MB], Speed [83.12 MB/s], Samples [842] RCId [14]
5820  1e88  08/30 18:55:46 176487 Name [Tail End], Start Delay [0.00 s], Avg [0.00 s], Procsng Time [0.00 s], Avg [0.00 s], Bytes [0.00 B], Speed [0.00 B/s], Samples [0] RCId [14]
5820  1e88  08/30 18:55:46 176487 Performing graceful connection termination
5820  1e88  08/30 18:55:46 176487 Closing all sockets.
5820  2034  08/30 18:55:47 ###### SdtTailSrv::Rel: Destroying SDT server [0], since ref count has gone to 0.
5820  2034  08/30 18:55:47 176487 VSRstWorker::FlushWriteBehindCache() - Waiting for writes to complete
5820  2034  08/30 18:55:47 176487 VSRstWorker::StopWriteBehindThread() - Waiting for write behind thread to exit
4920  9a0   08/30 18:55:51 176490 VSRstWorker::UpdateRestoreProgress() - VM [SEV-APP-PRO-W02_DRP] Restored 86243 of 1467165 extents 5%
4920  21fc  08/30 18:55:51 176490 VSRstCoordinator::OnMsgReceived() - Message [VirtualServer_VMRestoreCtx] received from Agent [commvault-drp_new]
4920  21fc  08/30 18:55:51 176490 VSRstCoordinator::OnMsgVMRestoreCtx() - Received Update for VM [SEV-APP-PRO-W02][5010f794-6bd2-92c3-9e3b-a0883fe6bf55] from [commvault-drp_new]
4920  21fc  08/30 18:55:51 176490 VSRstCoordinator::UpdateReplicationDB() - Updating replication status for VM [SEV-APP-PRO-W02] to [3]
5820  1f08  08/30 18:55:52 176487 VSRstWorker::WriteBehindThread() - Write behind thread exiting
5820  2034  08/30 18:55:52 176487 CVMDiskInfo::CloseHostDiskHandle() - Closing disk handle [0x000000E3BF822B70] on list [0x000000E3D9E4C3A0] CVMDiskInfo object [0x000000E3BF04FCC0]
5820  2034  08/30 18:55:52 176487 CVMDiskInfo::OpenHostDiskHandle() - Opening disk [sni-app-pro-w01.vmdk] on VM [sni-app-pro-w01_DRP] path [[TrueNAS02] sni-app-pro-w01_DRP_5/sni-app-pro-w01_DRP.vmdk] flags [ ]
5820  2034  08/30 18:55:52 176487 CVMDiskInfo::OpenHostDiskHandle() - Opened to [sni-app-pro-w01_DRP] [sni-app-pro-w01.vmdk] using Transport mode: [nbd]
5820  2034  08/30 18:55:52 176487 CVMDiskInfo::CloseHostDiskHandle() - Closing disk handle [0x000000E3BF822B70] on list [0x000000E3D9E4C3A0] CVMDiskInfo object [0x000000E3BF04FCC0]
5820  2034  08/30 18:55:52 176487 CVMDiskInfo::CloseHostDiskHandle() - Closing disk handle [0x000000E3D9C9FA60] on list [0x000000E3D9E4C3A0] CVMDiskInfo object [0x000000E3BF04FCC0]
5820  2034  08/30 18:55:52 176487 CVMDiskInfo::OpenHostDiskHandle() - Opening disk [sni-app-pro-w01_1.vmdk] on VM [sni-app-pro-w01_DRP] path [[TrueNAS02] sni-app-pro-w01_DRP_5/sni-app-pro-w01_DRP_1.vmdk] flags [ ]
5820  2034  08/30 18:55:52 176487 CVMDiskInfo::OpenHostDiskHandle() - Opened to [sni-app-pro-w01_DRP] [sni-app-pro-w01_1.vmdk] using Transport mode: [nbd]
5820  2034  08/30 18:55:52 176487 CVMDiskInfo::CloseHostDiskHandle() - Closing disk handle [0x000000E3D9C9FA60] on list [0x000000E3D9E4C3A0] CVMDiskInfo object [0x000000E3BF04FCC0]
5820  2034  08/30 18:55:52 176487 VSRstWorker::UpdateRestoreProgress() - VM [sni-app-pro-w01_DRP] Restored 52405 of 79295 extents 66%
5820  2034  08/30 18:55:53 176487 stat- ID [writedisk], Bytes [33554432], Time [1.592427] Sec(s), Average Speed [20.095110] MB/Sec
5820  1e58  08/30 18:55:53 176487 VSRstCoordinator::OnMsgReceived() - Message [VirtualServer_VMRestoreCtx] received from Agent [commvault-drp_new]
5820  1e58  08/30 18:55:53 176487 VSRstCoordinator::OnMsgVMRestoreCtx() - Received Update for VM [sni-app-pro-w01][502df4d7-8d0e-683e-1fb6-cec7e0dc9b13] from [commvault-drp_new]
5820  2034  08/30 18:55:53 176487 VSRstWorker::RestoreVMData() - RestoreVMData failed. CloseArchiveRet: [1] Seek Error [1] Write Error [0] Read Error [0] Browse Error [0]
5820  2034  08/30 18:55:53 176487 VSRstSyncWorker::HandleProcessVMFailure() - ProcessVM failed for VM: [sni-app-pro-w01]-> [sni-app-pro-w01_DRP]
5820  1e58  08/30 18:55:53 176487 VSRstCoordinator::UpdateReplicationDB() - Updating replication status for VM [sni-app-pro-w01] to [3]
5820  2034  08/30 18:55:53 176487 VSRstSyncWorker::HandleProcessVMFailure() - Setting live sync as paused for VM: [sni-app-pro-w01]-> [sni-app-pro-w01_DRP]
5820  2034  08/30 18:55:53 176487 VSRstWorker::VMInfo_CloseRestoreVM() - CloseRestoreVM for [sni-app-pro-w01_DRP]
5820  2034  08/30 18:55:53 176487 CVMDiskInfo::OpenHostDiskHandle() - Opening disk [sni-app-pro-w01_1.vmdk] on VM [sni-app-pro-w01_DRP] path [[TrueNAS02] sni-app-pro-w01_DRP_5/sni-app-pro-w01_DRP_1.vmdk] flags [ ]
5820  1e58  08/30 18:55:53 176487 VSRstCoordinator::OnMsgReceived() - Message [JobManager_VMUpdateInfo] received from Agent [commvault-drp_new]
5820  1e58  08/30 18:55:53 176487 VSRstCoordinator::OnMsgVMUpdateInfo() - Received JM Update for VM [sni-app-pro-w01][502df4d7-8d0e-683e-1fb6-cec7e0dc9b13] from [commvault-drp_new]
5820  1e58  08/30 18:55:53 176487 VSRstCoordinator::OnMsgReceived() - Message [VirtualServer_VMRestoreCtx] received from Agent [commvault-drp_new]
5820  1e58  08/30 18:55:53 176487 VSRstCoordinator::OnMsgVMRestoreCtx() - Received Update for VM [sni-app-pro-w01][502df4d7-8d0e-683e-1fb6-cec7e0dc9b13] from [commvault-drp_new]
5820  1e58  08/30 18:55:53 176487 VSRstCoordinator::UpdateReplicationDB() - Updating replication status for VM [sni-app-pro-w01] to [4]
5820  2034  08/30 18:55:53 176487 CVMDiskInfo::OpenHostDiskHandle() - Opened to [sni-app-pro-w01_DRP] [sni-app-pro-w01_1.vmdk] using Transport mode: [nbd]
5820  2034  08/30 18:55:53 176487 CVMDiskInfo::OpenHostDiskHandle() - Opening disk [sni-app-pro-w01.vmdk] on VM [sni-app-pro-w01_DRP] path [[TrueNAS02] sni-app-pro-w01_DRP_5/sni-app-pro-w01_DRP.vmdk] flags [ ]
5820  2034  08/30 18:55:53 176487 CVMDiskInfo::OpenHostDiskHandle() - Opened to [sni-app-pro-w01_DRP] [sni-app-pro-w01.vmdk] using Transport mode: [nbd]
5820  2034  08/30 18:55:53 176487 CVMDiskInfo::Cleanup() - Cleaned up [0] VMs [3] remaining
5820  2034  08/30 18:55:54 176487 CVMWareInfo::ResetMaxSnapshots() - Resetting maximum snapshots on VM: [sni-app-pro-w01_DRP] [5009272f-abb6-a7b1-140b-314040deb900]
5820  2034  08/30 18:55:55 176487 CVMWareInfo::AllowActivitiesOnVM() - Allowed for: [sni-app-pro-w01_DRP] [5009272f-abb6-a7b1-140b-314040deb900]
5820  2034  08/30 18:55:55 176487 CVMWareInfo::CloseRestoreVM_Replication() - Creating replication snapshot of VM [sni-app-pro-w01_DRP] Guid [5009272f-abb6-a7b1-140b-314040deb900] for integrity purposes
5820  2034  08/30 18:55:56 176487 CVMWareInfo::CloseRestoreVM_Replication() - Successfully created snapshot of VM [sni-app-pro-w01_DRP] Guid [5009272f-abb6-a7b1-140b-314040deb900], snapshot [snapshot-84465]
5820  2034  08/30 18:55:56 176487 SeekDecisionVMWare::printStats() - Number of restored items skipped by SEEK optimization: [0] out of [33] (0%)
5820  1e58  08/30 18:55:56 176487 VSRstCoordinator::OnMsgReceived() - Message [VirtualServer_VMRestoreCtx] received from Agent [commvault-drp_new]
5820  2034  08/30 18:55:56 176487 SeekDecisionVMWare::DeleteCBTFiles() - Deleting CBT files folder [C:\Program Files\Commvault\ContentStore2\iDataAgent\JobResults\CV_JobResults\iDataAgent\VirtualServerAgent\Restore\176487\502df4d7-8d0e-683e-1fb6-cec7e0dc9b13\CBTFiles]
5820  1e58  08/30 18:55:56 176487 VSRstCoordinator::OnMsgVMRestoreCtx() - Received Update for VM [sni-app-pro-w01][502df4d7-8d0e-683e-1fb6-cec7e0dc9b13] from [commvault-drp_new]
5820  1e58  08/30 18:55:56 176487 VSRstCoordinator::UpdateReplicationDB() - Updating replication status for VM [sni-app-pro-w01] to [4]
5820  2034  08/30 18:55:56 176487 VSRstController::SendGetNextVM() - Requesting Next VM from coordinator
5820  1e58  08/30 18:55:56 176487 VSRstCoordinator::OnMsgReceived() - Message [VirtualServer_VMRestoreCtx] received from Agent [commvault-drp_new]
5820  1e58  08/30 18:55:56 176487 VSRstCoordinator::OnMsgVMRestoreCtx() - Received Update for VM [sni-app-pro-w01][502df4d7-8d0e-683e-1fb6-cec7e0dc9b13] from [commvault-drp_new]
5820  1e58  08/30 18:55:56 176487 VSRstCoordinator::UpdateReplicationDB() - Updating replication status for VM [sni-app-pro-w01] to [4]
5820  1e58  08/30 18:55:56 176487 VSRstCoordinator::OnMsgReceived() - Message [VirtualServer_VMRestoreResult] received from Agent [commvault-drp_new]
5820  1e58  08/30 18:55:56 176487 VSRstCoordinator::OnMsgVMRestoreResult() - Received Result for VM [502df4d7-8d0e-683e-1fb6-cec7e0dc9b13] from [commvault-drp_new]
5820  1e58  08/30 18:55:56 176487 VSRstCoordinator::OnMsgVMRestoreResult() - Marking VM [sni-app-pro-w01_DRP] as pending
5820  1e58  08/30 18:55:56 176487 VSRstCoordinator::OnIdle_Running() - All VMs have been processed, stopping agents
5820  1e58  08/30 18:55:56 176487 VSRstCoordinator::OnMsgReceived() - Message [VirtualServer_VMBackupGetNextVM] received from Agent [commvault-drp_new]
5820  2588  08/30 18:55:56 176487 VSRstController::OnMsgReceived() - Message [VirtualServer_VMAgentQuit] received from Server
5820  2588  08/30 18:55:56 176487 VSRstController::OnMsgVMAgentQuit() - Received Quit message from coordinator
5820  1e58  08/30 18:55:56 176487 VSRstCoordinator::OnMsgVMBackupGetNextVM() - GetNextVM Request from [commvault-drp_new]

5820  2034  08/30 18:56:06 176487 VSRstWorker::WorkerThread() - Worker [0] exiting
5820  2588  08/30 18:56:06 176487 VSRstController::StopAllWorkers() - All workers have stopped
5820  2588  08/30 18:56:07 176487 VSRstController::StopAllWorkers() - Finished stopping all workers
5820  2588  08/30 18:56:07 176487 VSRstController::OuputDatastoreMedaiSpeedCounters() - --- Datastore write speeds ---
5820  2588  08/30 18:56:07 176487 stat- ID [Datastore Write [TrueNAS02]], Bytes [33554432], Time [1.592413] Sec(s), Average Speed [20.095292] MB/Sec
5820  2588  08/30 18:56:07 176487 VSRstController::OnAgentStopped() - Controller is stopping
5820  1e58  08/30 18:56:07 176487 VSRstCoordinator::OnMsgReceived() - Message [VirtualServer_VMAgentShutdown] received from Agent [commvault-drp_new]
5820  1e58  08/30 18:56:07 176487 VSRstCoordinator::OnMsgReceived() - Message [CNSession_Disconnect] received from Agent [commvault-drp_new]
5820  1e58  08/30 18:56:07 176487 VSRstCoordinator::OnAgentStopped() - Agent [commvault-drp_new] has stopped
5820  1e58  08/30 18:56:07 176487 CVRemoteAgentServer::MonitorAgents() - There are no running agents
5820  1e58  08/30 18:56:07 176487 VSRstCoordinator::SummarizeVMList() - VM sni-app-pro-w01_DRP       --> PENDING                     
5820  1e58  08/30 18:56:07 176487 VSRstCoordinator::Run() - ========== JOB SUMMARY REPORT ==========
5820  1e58  08/30 18:56:07 176487 VSRstCoordinator::Run() - Total VMs to Restore          [1]
5820  1e58  08/30 18:56:07 176487 VSRstCoordinator::Run() -       VMs pending             [1]
5820  1e58  08/30 18:56:07 176487 VSRstCoordinator::Run() - =========================================
5820  1e58  08/30 18:56:07 176487 VSRstController::StopAllWorkers() - All workers have stopped
5820  1e58  08/30 18:56:07 176487 VSRstController::StopAllWorkers() - Finished stopping all workers
5820  1e58  08/30 18:56:07 176487 Sending FAILED complete message to JM, 176487
5820  150c  08/30 18:56:07 176487 CVJobCtrlLog::unregisterProcess(): successfully removed file [C:\Program Files\Commvault\ContentStore2\Base\JobControl\5820]

5820  1e10  08/30 18:56:08 176487 CVMDiskInfo::_VixDiskLibThread() - VixDiskLib thread exiting
commvaultdrp@vsphere.local]
5820  1e58  08/30 18:56:08 176487 stat- ID [Disk Open], Samples [2], Time [0.848106] Sec(s), Average [0.424053] Sec/Sample
5820  1e58  08/30 18:56:08 176487 stat- ID [Disk Close], Samples [2], Time [0.016326] Sec(s), Average [0.008163] Sec/Sample
5820  1e58  08/30 18:56:08 176487 CVMWareInfo::ShutdownVMInfo() - VMDiskInfo::Shutdown complete
5820  1e58  08/30 18:56:08 176487 CVMWareInfo::ShutdownVMInfo() - CVISDKCppBridge::Cleanup complete
5820  a54   08/30 18:56:08 176487 CVDTerminationHandler::_WaitForCVDAliveMutex() - CVD Alive Check Quit Event Signaled
5820  1e58  08/30 18:56:08 176487 = VSRst has exited
5820  1e58  08/30 18:56:08 176487 ================================================================


OK, so this looks like a Live Sync replication job, is that correct?

The error I see is

5820  2034  08/30 18:55:46 176487 VSRstArchiveReader::OpenArchiveFile() - Opening Archive File <2:43:373670> with CopyPrecedence:[3]
5820  2034  08/30 18:55:46 176487 Archive Manager Reported error: Archive file for the Storage Policy Copy is not valid

 

It seems like it's trying to replicate (restore) a VM from storage policy copy #3, but the backup job may not have been fully copied there yet?
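
To check that from the proxy side, one option is to pull the archive-file IDs and the copy precedence out of the failed OpenArchiveFile attempts in vsrst.log, and then verify on the CommServe whether those archive files have actually been copied to that storage policy copy (for example, whether the aux copy for that backup job has completed). This is only a rough sketch: it assumes vsrst.log is in the current directory and it only matches the message formats visible in the excerpt above.

import re

# Rough sketch: pair each "Opening Archive File <a:b:c>" line with a following
# "not valid" error so the affected archive-file IDs and the copy precedence can
# be checked against the secondary copy (e.g. whether the aux copy has completed).
open_re = re.compile(r"Opening Archive File <([\d:]+)> with CopyPrecedence:\W?(\d+)")
error_marker = "Archive file for the Storage Policy Copy is not valid"

last_open = None
with open("vsrst.log", errors="ignore") as fh:
    for line in fh:
        match = open_re.search(line)
        if match:
            last_open = match.groups()  # (archive file id triplet, copy precedence)
        elif error_marker in line and last_open:
            af_id, precedence = last_open
            print(f"Archive file <{af_id}> not valid on copy precedence {precedence}")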

