Solved

Max Number of Simultaneously Mounted HotAdd Disks per PVSCSI on VSA Proxies

  • October 14, 2021
  • 8 replies
  • 1596 views

  • Bit
  • 4 replies

Hi everyone,

We are currently deploying a Commvault environment to protect, via VSA, a large number of VMs (almost 6,000).

The VSA proxies are Windows Server 2019 SE VMs that reside on the same cluster as the VMs we want to protect, so the transport mode will be HotAdd.

We are currently in a test phase to gather results and determine how many VSA proxies we will finally deploy.

The environment is the following:

 

  • Commvault version: v11.24.7
  • vCenter (VCSA): 7.0 U2
  • ESXi version: 7.0 U2
  • VSA proxy OS: Windows Server 2019 SE
  • VDDK version used for backups: 7.0.1

As the VMware documentation says, on vSphere 7.0 U2 each PVSCSI controller on the VSA proxy should be able to mount 64 VMDKs from the backed-up VMs.

The issue is that we are monitoring during the backup, and each proxy is only able to mount 15 disks via HotAdd (stuck at the previous vSphere 6.5 limitation).

We have manually added more than 20 VMDK disks to the virtual machine, so we understand this is not a VMware issue. The limitation appears only when Commvault tries to map more than 15 disks through a single PVSCSI controller.
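
For reference, we are counting the mounted disks per controller during a backup window with a minimal pyVmomi sketch like the one below (the vCenter address, credentials, and the proxy name "vsa-proxy-01" are placeholders, not values from our environment):

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
proxy = next(vm for vm in view.view if vm.name == "vsa-proxy-01")  # placeholder name

# Map each SCSI controller key to its label, then count the disks attached to it.
controllers = {d.key: d.deviceInfo.label
               for d in proxy.config.hardware.device
               if isinstance(d, vim.vm.device.VirtualSCSIController)}
counts = {label: 0 for label in controllers.values()}
for dev in proxy.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk) and dev.controllerKey in controllers:
        counts[controllers[dev.controllerKey]] += 1

for label, n in sorted(counts.items()):
    print(f"{label}: {n} disks")

Disconnect(si)

In our runs, each controller tops out at 15 hot-added disks plus the proxy's own disks, which is the behaviour described above.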


As a workaround we have configured 4 x PVSCSI controllers on the proxy VMs, but we would like to understand whether it is possible to maximize the number of VMDKs mapped to each proxy.


Do you know of any configuration or limitation on the Commvault side that prevents mapping 64 disks per PVSCSI controller using HotAdd?


Thanks!!

Best answer by Gopinath (see the first reply below).

8 replies

  • Vaulter
  • 68 replies
  • Answer
  • October 14, 2021

Hi GFC,

It's a limitation on the VDDK side: VDDK supports only 15 HotAdd disks per controller, regardless of how many disks the SCSI controller itself supports.

https://kb.vmware.com/s/article/66870

Yes, the workaround of using 4 x PVSCSI controllers on the proxies, as you are already doing, is required.

Regards
Gopinath


  • Author
  • Bit
  • 4 replies
  • October 14, 2021

Thanks for your reply Gopinath!

You have been really helpful.
I have also checked the VMware release notes for VDDK 7.0.0 and 7.0.1, and there is no news on when 64 disks per PVSCSI will be supported.

Regards


  • Byte
  • 3 replies
  • May 17, 2022

This is a known VMware issue when using multiple SCSI adapters on a VSA proxy.

 

See the "SCSI Port Number Sequence" section of HotAdd Transport for VMware (commvault.com).

When a VSA proxy has 4 SCSI adapters, you must reorder the PCI slot assignments in the proxy's .vmx file so that controllers 0-3 get slots 160, 1184, 224, and 256:

scsi0.pciSlotNumber = "160" 

scsi1.pciSlotNumber = "1184" 

scsi2.pciSlotNumber = "224" 

scsi3.pciSlotNumber = "256"
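
If you prefer not to edit the .vmx file by hand, the same values can be pushed through the vSphere API as advanced settings. Here is a minimal pyVmomi sketch, assuming placeholder connection details and a hypothetical proxy VM name "vsa-proxy-01"; apply it while the proxy VM is powered off:

from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim
import ssl

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
proxy = next(vm for vm in view.view if vm.name == "vsa-proxy-01")  # placeholder name

# The slot values below are the ones quoted above.
slots = {"scsi0.pciSlotNumber": "160",
         "scsi1.pciSlotNumber": "1184",
         "scsi2.pciSlotNumber": "224",
         "scsi3.pciSlotNumber": "256"}
spec = vim.vm.ConfigSpec(extraConfig=[
    vim.option.OptionValue(key=k, value=v) for k, v in slots.items()])
WaitForTask(proxy.ReconfigVM_Task(spec=spec))  # reconfigure, then power the proxy back on

Disconnect(si)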

Hope this helps.

 


Nikos.Kyrm
  • Byte
  • 204 replies
  • October 27, 2022

Hello,

Can you please confirm whether this configuration also applies to AVS?

Currently I have ~40 VMs with 1 x VSA proxy, and I want to increase the number of simultaneously mounted HotAdd disks.

Maybe an option would be to increase the "No. of readers" setting under VM groups → Configuration tab?

Thank you in advance,
Nikos


  • Byte
  • 3 replies
  • November 14, 2022

Hi Nikos,

Yes, you can increase the number of readers. Just monitor the throughput: the more readers you use, the slower the per-stream throughput may be, and the longer the backup job may hold the VM snapshots. It's a balancing act.

Hope this helps.

 


  • Bit
  • 2 replies
  • July 26, 2023

Hello everyone, I am just passing by while looking for a specific answer. I noticed this post has something to do with my question, so I hope you have the patience to help me understand the Commvault world, which is new to me.

We need to deploy MA servers into our environment. We have an internal SOP stating how to do so, and it says we need to create the new VMs, their local drives, and 4 PVSCSI controllers in order to configure the DiskLib for these MA servers.

My question is why an MA server needs 4 PVSCSI controllers (as I understand it, the first controller is capable of managing the first round of disks, including the DiskLib). What about the other 3 PVSCSI controllers required on each MA: what is their main purpose, and in which cases are those 3 additional controllers needed?

Lastly, I just wanted to know whether my question relates to the main topic you are discussing in this post.

 

Thanks a lot for your time and patience.


  • Byte
  • 3 replies
  • July 26, 2023

Hi JBA_Col,

You are correct: an MA does not need additional PVSCSI adapters to function.

Additional adapters are for improving backup performance, and they only matter if your backup uses HotAdd transport (configured in the subclient).

Assuming your server has 3 drives (1 OS, 2 data), then during backup the server can mount up to 13 additional VMDKs on the main controller, and each additional PVSCSI adapter can mount an additional 16 VMDKs during backup.
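
If it helps, here is that arithmetic as a tiny Python sketch. The 16-per-controller ceiling is this reply's figure; the accepted answer above cites 15 per the VDDK, so treat the constant as an assumption to verify in your environment.

def max_hotadd_disks(extra_controllers, local_disks, per_controller=16):
    # Hot-add capacity left on the main controller after its own local disks.
    main = per_controller - local_disks              # e.g. 16 - 3 = 13
    # Each additional adapter contributes a full controller's worth of slots.
    return main + extra_controllers * per_controller

# 3 local drives (1 OS, 2 data) plus 3 extra PVSCSI adapters:
print(max_hotadd_disks(extra_controllers=3, local_disks=3))  # 13 + 3*16 = 61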

Hope this helps.


  • Bit
  • 2 replies
  • July 26, 2023
So much appreciated, Alan. Have a great day!



