Solved

Azure Restore Performance


Userlevel 4
Badge +12

Hi all

 

I saw the other topic on this exact issue. It was mentioned that FR26 would improve restore performance and that, failing that, IntelliSnap could be leveraged.

Unfortunately we are still seeing the same performance.
The original poster said that Commvault support told them the max throughput is 60Gb/hr, and we're seeing close to this on our side.
We’re also restoring managed disks.

Is there any documentation that confirms the 60Gb/hr limit mentioned in that post, and is there any workaround to try to improve it? A 400GB VM takes about 10-12 hours to recover.

 

Regards,
Mauro

Best answer by Mauro 30 January 2023, 09:28

15 replies

Userlevel 7
Badge +19

Assuming this is a restore from a backup copy and not from an IntelliSnap copy, there are many angles and factors that might influence the performance of the recovery. One of them, for example, is a possible disk performance cap on the destination volume. If the storage is cloud storage, e.g. Azure Blob, then it might also be influenced by the presence of an HTTP proxy. Also make sure to bring in the endpoint so you bypass the external gateways.

Did you notice a degradation in restore performance, or have restores always been this slow? I would recommend opening a ticket.
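
On the HTTP proxy point: one quick thing to check on the MediaAgent/VSA proxy is whether an outbound proxy is being picked up from the environment at all. A minimal Python sketch of that check (it only covers proxies set via environment variables; a proxy can equally be configured at the OS or Commvault network layer, so treat this as a partial check):

import os

# Quick look: is an HTTP(S) proxy configured via environment variables
# on the MediaAgent / VSA proxy performing the restore?
for var in ("HTTP_PROXY", "HTTPS_PROXY", "http_proxy", "https_proxy", "NO_PROXY", "no_proxy"):
    value = os.environ.get(var)
    print(f"{var} = {value}" if value else f"{var} is not set")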

Userlevel 4
Badge +12

Hi Onno

Thanks for the response.
So performance hasn’t degraded over time. It’s always maxed out at this throughput. The same goes for backups.
File system and DB backups/restores are much faster.

I suspect the issue is on the Azure side (Blob storage specifically, from what I've read on another forum). I just wanted to get a full understanding of it and see whether there is a potential improvement.

I’ll log a ticket to get a bit more information and update this thread.

Thanks,
Mauro 

Userlevel 7
Badge +23

@Mauro, following up on this one. Did you get this resolved?

Can you share the ticket number with me?

Thanks!

Userlevel 4
Badge +12

Hi

Not yet.
I know it’s been escalated to tier 2 support now. The case number is 220630-417.

I’ll keep this thread updated as I go along.

Userlevel 7
Badge +16

As this is cloud storage, I'm just wondering whether the block size for the storage policy has been kept at the default of 128 KB or has been changed to 512 KB.

P.S. If it is 128 and not 512, you don't want to change it without considering the consequences, as this will create a new baseline. Please consult support before making such changes.
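
To illustrate why the block size matters for cloud storage (purely back-of-the-envelope arithmetic, not a Commvault measurement): the smaller the block, the more individual object writes are needed to move the same amount of data, and each write carries per-request overhead against Blob storage.

# Indicative only: writes needed to move 400 GB at 128 KB vs 512 KB blocks
# (ignores dedup, compression and metadata).
data_bytes = 400 * 1024**3

for block_kb in (128, 512):
    writes = data_bytes // (block_kb * 1024)
    print(f"{block_kb} KB blocks: ~{writes:,} writes for 400 GB")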

Userlevel 7
Badge +19

By default it will pick 512 KB. This has been the case for a very long time when configuring cloud storage, so I do not expect this to be the issue. Do make sure you run the latest maintenance release for FR26!

Userlevel 7
Badge +16

@Onno van den Berg good to know 👍

The question is, when was this config created, pre or post this adjustment of the default setting 🤪

Userlevel 7
Badge +19

It was changed to 512 KB in v11 SP11, so it's a while back 😉

Userlevel 7
Badge +16

@Onno van den Berg hahaha yes that is a while back 🤣 never mind 😋

Userlevel 7
Badge +23

Sharing the last action suggested in the case, though the incident is about to be archived:

- it seems writes to the staging storage account are causing the performance issues
- their Azure environment is configured to run entirely on private IPs via firewall rules; they are not using service endpoints to accomplish this
- waiting to hear back on the performance tests against the staging storage account

@Mauro can you confirm the status?

Userlevel 4
Badge +12

Hi Mike

We have to run an upload test with Azure (taking Commvault out of the picture) to see what throughput that achieves. This will be done from the MediaAgent that we run the backups from.
Unfortunately we don't have access to do this ourselves, and the customer has been quite busy, so as soon as they're able to assist I can share those findings as well as the next steps support recommends.
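
For anyone wanting to run a similar raw upload test, the idea is roughly the following: time an upload of test data from the MediaAgent straight to the staging storage account, e.g. with the azure-storage-blob Python SDK. The connection string, container name and data size below are placeholders (and the container is assumed to already exist); this is just a sketch of the kind of test support asked for, not their exact procedure.

# Rough upload throughput test from the MediaAgent to a storage account,
# bypassing Commvault entirely. Requires: pip install azure-storage-blob
import os
import time
from azure.storage.blob import BlobServiceClient

CONNECTION_STRING = "<staging-storage-account-connection-string>"  # placeholder
CONTAINER = "upload-test"                                          # placeholder, must already exist
SIZE_MB = 256                                                      # amount of random test data

service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
blob = service.get_blob_client(container=CONTAINER, blob="throughput-test.bin")

payload = os.urandom(SIZE_MB * 1024 * 1024)

start = time.perf_counter()
blob.upload_blob(payload, overwrite=True, max_concurrency=8)
elapsed = time.perf_counter() - start

rate_mb_s = SIZE_MB / elapsed
print(f"Uploaded {SIZE_MB} MB in {elapsed:.1f}s "
      f"({rate_mb_s:.1f} MB/s, ~{rate_mb_s * 3600 / 1024:.0f} GB/hr)")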

Regards,
Mauro

Userlevel 7
Badge +23

Appreciate the update, @Mauro !

We’ll be here once you do.

Userlevel 6
Badge +15

@Mauro Did you ever get this issue resolved?

Userlevel 4
Badge +12

Hi Orazan

I did, thanks. It was escalated to Development, and their feedback is as follows:

Configure the below additional settings on the proxy running the restores:

Name: bAzureWriteBehindEnabled
Category: VirtualServer
Type: Boolean
Value: true

Name: nAzureWriteBehindCacheSize
Category: VirtualServer
Type: Integer
Value: default is 32; if performance is still not satisfactory, tune it by setting the additional key value to 64 or 128
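
For context on what the first setting changes (this is just my understanding of the general write-behind technique, not Commvault's actual implementation): instead of the restore thread writing each block to the Azure disk and waiting, blocks are handed to an in-memory cache and a background worker drains that cache, so reading from the backup and writing to the destination overlap. A toy sketch of the idea:

# Conceptual sketch of write-behind caching; illustration only, not Commvault code.
import queue
import threading

CACHE_BLOCKS = 32  # analogous in spirit to nAzureWriteBehindCacheSize

def read_blocks(n):
    """Stand-in for reading restore data from the backup media."""
    for i in range(n):
        yield f"block-{i}".encode()

def write_block(block):
    """Stand-in for the (slow) write to the destination Azure disk."""
    pass

def writer(cache):
    while True:
        block = cache.get()
        if block is None:      # sentinel: no more data
            return
        write_block(block)

cache = queue.Queue(maxsize=CACHE_BLOCKS)
worker = threading.Thread(target=writer, args=(cache,), daemon=True)
worker.start()

for block in read_blocks(1000):
    cache.put(block)           # only blocks when the cache is full
cache.put(None)
worker.join()
print("restore stream drained through the write-behind cache")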

Userlevel 6
Badge +15

That is great news!  Cheers!

Reply