
Hello Community, 

We had a problem with one proxy server, so we decided to create a new one in Azure. Unfortunately, on the new proxy server the backup is not working for some of the VMs: after the snapshot is created, the backup does not transfer any data. For the other VMs the backup works fine without any issue.

The machines are in the same subscription. We checked almost everything and nothing helped. A case is open in Commvault MA, but support has not been very helpful; they claimed the issue is on Microsoft's side, but they could not identify where the issue is. The only thing that appears in the log file is:

“17 12:37:02 4409694 VSBkpCoordinator::OnIdle_Running() - Waiting for [1] VMs to be processed. [FRPARESB019]”

and

“Failed to get page ranges for the entire blob. Fetching page ranges in segments”

Incident ID is: 220727-453

What I think is not working: access to the snapshot resource in Azure via the API, but only for particular machines. If something were blocked, access would be unavailable for all machines, not only for half of them.
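If it helps anyone, a quick way to test that theory could be to export a SAS for one of the failing disk snapshots with "az snapshot grant-access" and then read its page ranges directly. Below is only a rough Python sketch with the azure-storage-blob package; the names are placeholders and this is just my guess at the kind of call the proxy makes when the log talks about page ranges:

from azure.storage.blob import BlobClient

# SAS URL obtained beforehand with:
#   az snapshot grant-access --resource-group <rg> --name <snapshot-name> \
#       --duration-in-seconds 3600 --query accessSas -o tsv
sas_url = "<accessSas URL of the snapshot>"

blob = BlobClient.from_blob_url(sas_url)
print("Snapshot blob size:", blob.get_blob_properties().size)

# Roughly what the "get page ranges" step in the log corresponds to:
# list the allocated page ranges of the snapshot's underlying page blob.
page_ranges, clear_ranges = blob.get_page_ranges()
print("Allocated ranges:", len(page_ranges))

If this call fails (or times out) only for the problematic machines, that would point at the snapshot access itself rather than the proxy.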

Regards, 

Michal  

 

Hi @Michal128 , thanks for sharing your issue!

I looked at the case notes, which mention disk speed as the cause, though as you stated, other backups are running fine. Do those SQL backups use this same Media Agent and the same slow disk? This may well be the correct explanation if the SQL backups use another MA altogether (or don't use this disk).

It might be worth requesting a ticket requeue to another engineer now, while we see if other community members have ideas.

|*19830482*|*Perf*|4276654| =======================================================================================
|*19830482*|*Perf*|4276654| Job-ID: 4276654            Pipe-ID: [19830482]            App-Type: [106]             Data-Type: [1]
|*19830482*|*Perf*|4276654| Stream Source:   <MA 002>
|*19830482*|*Perf*|4276654| Network medium:   SDT
|*19830482*|*Perf*|4276654| Head duration (Local):  [23,July,22 02:01:47  ~  23,July,22 10:16:49] 08:15:02 (29702)
|*19830482*|*Perf*|4276654| Tail duration (Local):  [23,July,22 02:01:47  ~  23,July,22 10:16:50] 08:15:03 (29703)
|*19830482*|*Perf*|4276654| ------------------------------------------------------------------------------------------------------------------------------------------
|*19830482*|*Perf*|4276654|     Perf-Counter                                                                       Time(seconds)              Size
|*19830482*|*Perf*|4276654| ------------------------------------------------------------------------------------------------------------------------------------------
|*19830482*|*Perf*|4276654| 
|*19830482*|*Perf*|4276654| Virtual Server Agent
|*19830482*|*Perf*|4276654|  |_VM [<vm name 053>]..................................................................         -
|*19830482*|*Perf*|4276654|    |_Disk [<disk name 053>]......................................         -
|*19830482*|*Perf*|4276654|      |_Disk Read.....................................................................         1                 135266864  [129.00 MB] [453.52 GBPH]
|*19830482*|*Perf*|4276654|      |_Buffer allocation.............................................................         -                            [Samples - 388] [Avg - 0.000000]
|*19830482*|*Perf*|4276654|      |_Pipeline write................................................................         -                 114445920  [109.14 MB]
|*19830482*|*Perf*|4276654|  |_VM [<vm name 058>]..................................................................         -
|*19830482*|*Perf*|4276654|    |_Disk [<disk name 058>]......................................         -
|*19830482*|*Perf*|4276654|      |_Disk Read.....................................................................       126                   1049136  [1.00 MB] [0.03 GBPH]
|*19830482*|*Perf*|4276654|      |_Buffer allocation.............................................................         -                             [Samples - 1957] [Avg - 0.000000]


Hello Mike, 

Thanks for your really fast response and for taking care of the issue. The SQL backup runs on the same Media Agent and the data-transfer problem does not occur there. The issue appears only during the VM backup.

The problem is now resolved. The solution was:

Regenerate a new Access Key for the storage account (the storage account used for the cloud library defined for the Media Agent) and replace the old Access Key with the new one.
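For reference, the key regeneration itself is nothing special; we did it in the portal, but it could also be done roughly like this with the azure-mgmt-storage SDK (just a sketch, the names are placeholders, and the cloud library credentials in Commvault still have to be updated with the new key by hand):

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

# Placeholders, not our real names
subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
account_name = "<cloud-library-storage-account>"

client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

# Regenerate key1; afterwards the cloud library in Commvault must be pointed
# at the new key value, otherwise the Media Agent keeps using the old one.
client.storage_accounts.regenerate_key(
    resource_group, account_name, {"key_name": "key1"}
)

for key in client.storage_accounts.list_keys(resource_group, account_name).keys:
    print(key.key_name)  # the new key values are returned here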

The solution is really strange to me, because that storage account is used only for keeping the deduplicated backup data.

I am curious why that type of issue appeared. 

Regards, 

Michal 


That is strange.  Could it be that the account was being throttled?  Azure definitely does that when activity is too high.  If you create multiple Applications, then they can swap around between them.
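If you want to rule throttling out, the storage account's Transactions metric can be split by ResponseType, and throttled calls show up there. Something roughly like this could work (a sketch with the azure-monitor-query package; the resource ID is a placeholder):

from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

# Placeholder resource ID of the cloud library storage account
resource_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Storage/storageAccounts/<account>"
)

client = MetricsQueryClient(DefaultAzureCredential())

# Transactions split by response type; throttling shows up as
# ServerBusyError / ClientThrottlingError responses.
result = client.query_resource(
    resource_id,
    metric_names=["Transactions"],
    timespan=timedelta(days=1),
    granularity=timedelta(hours=1),
    aggregations=["Total"],
    filter="ResponseType eq 'ServerBusyError' or ResponseType eq 'ClientThrottlingError'",
)

for metric in result.metrics:
    for series in metric.timeseries:
        for point in series.data:
            if point.total:
                print(point.timestamp, point.total)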


Hello Mike, 

Thanks for your update. I don't think the storage account was throttled, because for some Virtual Machines the backup (the data transfer) would not even start, and the job was started manually for only one Subclient, so the workload on the Media Agent and the storage was close to nothing. I will see what Commvault support shares as the root cause of this type of issue.

After that I will share the result of the investigation.

Regards, 

Michal 

