
I’m in the process of replacing a couple of Windows-based Commvault A600 media agent appliances with virtualized Linux-based media agents. We downsized greatly and our backup needs are a lot smaller, plus the A600s are now EOL.

 

I’m trying to gather best practices for configuring the Linux MA, specifically regarding disk parameters (size, block size, alignment, etc.). The A600s were originally configured by Commvault’s automated out-of-box setup process; I’m doing this all manually for the new build.

 

My current MA houses about 10TB of data, which works out to an app size of about 45TB. My DDB currently consumes about 30GB on disk. Our needs will likely decrease over time rather than increase.

 

I plan to provide a single 12TB volume (64KB block size) for the library content and a single 100GB volume (4KB block size) for the dedupe database.

 

Is this OK? The A600 currently provides 150TB, and its setup created multiple 9TB volumes for the content library. I don’t know why, exactly. Is 9TB a magic or maximum number? Is the 12TB above too much? Should I go smaller?

 

Are my block sizes OK? Commvault is currently configured for 128KB block sizes on top of the OS-defined 64KB blocks on the existing library. I assume it’s OK to carry that forward?

 

Anyway, I’m looking for any guidance, especially in the realm of Linux, as I’m primarily a Windows guy.

 

 

Hi @mcdonamw 

Hope you’re well.

From a support perspective, we can refer you to documentation that should help:

DDB Building Block Guide: 

https://documentation.commvault.com/commvault/v11/article?p=12411.htm


MediaAgent Requirements:
https://documentation.commvault.com/commvault/v11/article?p=2822.htm


If you need help with an official ‘CV approved’ sizing and design solution (which will ensure the setup meets your requirements, company RPOs, etc.), this is best covered by our professional services team - you can coordinate this type of service through your Commvault account manager.

 

Please let me know if this helps!

 

Chris


Appreciate it. I've been through all those guides but they aren't really providing the answers I need.

 

With all due respect, I've already looked at the support route and got the same answer. Apparently Commvault is more interested in grabbing more money from a long-standing existing customer, who has probably spent $100-200k on their product over the past 3-4 years, than in helping.

 

As such, I'm hoping someone from the community may be able to assist. 


@mcdonamw 

Just to clarify, it’s not that we don’t want to help; it’s a matter of liability. Support is here to help with technical issues, but we are not certified to provide design-based solutions, which is why we suggest our customers engage their account managers / professional services team when a design question comes up that we cannot find any public information on.

May I please ask what the original support incident number was? Also, could you please clarify what specific questions are not answered by our documentation site? 


Referring to the article I shared above, as well as another I found by looking for A600, this is what I’ve found:


Please let me know if this helps!


Chris


Hi,

As a customer, I can somewhat understand the concerns about having to involve Commvault professionals, but on the other hand, this can be avoided by getting the training and certifications, as you would then be the Commvault professional yourself :wink:

The training/education services provide the elements required to design appropriate Commvault architectures.

Aside from that consideration, I can provide partial answers, but also some questions.

On a physical server, the size and number of volumes for the target mount paths are important. If you plan to use Linux VMs, then the storage would probably also be virtualized. In that case, my advice would be to make sure that each future mount path is stored on a different VMDK, and that in Linux a volume group is created per VMDK/mount path.
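
For example, here is a minimal sketch of that layout for one mount path (the device, VG, and mount point names are just placeholders to adapt, and you would repeat it per VMDK):

    # Assume the new VMDK shows up as /dev/sdb; repeat for each VMDK/mount path
    parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 100%   # 1MiB start keeps the partition aligned
    pvcreate /dev/sdb1                                        # LVM physical volume on the new partition
    vgcreate vg_mp01 /dev/sdb1                                # one volume group per VMDK
    lvcreate -l 100%FREE -n lv_mp01 vg_mp01                   # one logical volume spanning the VG
    mkfs.xfs /dev/vg_mp01/lv_mp01                             # XFS defaults to a 4K block size
    mkdir -p /mnt/mp01
    mount /dev/vg_mp01/lv_mp01 /mnt/mp01                      # future Commvault mount path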

But they would probably all be stored on the same target storage array, so what would be the use of multiplying storage paths if they all point to the same destination? Where would the bottleneck be? This depends entirely on your environment.

 

Backup best practice is to make sure that the data you’re trying to protect (and thus the storage holding it) is not on the same storage used to hold the backups (the VM you’ll be using could run on, and use, the same storage). Otherwise, in case of a storage failure, you lose both the source and the backups :dizzy_face:

 

Well, this is just an example of the kind of deep analysis that should be performed on your environment, with broad knowledge of it, to make sure that any potential solution design really matches your requirements, is reliable, fits your environment and has no gaps :wink:



@Chris Hollis , 

I apologize for my abrasiveness before.  It’s just frustrating.  I understand what you’re saying, but I’m not looking for someone to design my environment.  What I’m asking about should not have any liability concerns as I’m simply trying to discern best practices required for your software.  That doesn’t really change per customer environment, for the most part.  That’s why they are called requirements, right?

With that said, everything I find typically focuses on the DDB only. Your responses do as well. For the most part I expect 4KB block sizes to be appropriate for the DDB. It’s the large data that I’m concerned about.

Specifically, and this may fall outside of CV itself, my question is how it all plays together (your 128KB block sizes, on top of 64KB OS block sizes, and then whatever the backend volume block size and physical storage block size are, which I believe is ultimately 4KB in my current environment). I’m just wondering whether that’s actually best practice or not. Again, this was originally set up by a CV consultant, all on Windows, and to be honest, in the years since, I’ve identified a number of areas where that “specialist consultant” made mistakes in his original configuration.

Anyway, a new team is taking it over and I’m tasked with rebuilding it in Linux before I turn it over, but I’m not a Linux admin so I don’t really have the expertise to say for sure how it should be built on that OS. The team I am turning it over to has Linux experience, but they don’t have CV experience.

This is really why I’m hoping someone who’s already been down this route with CV on Linux can speak to their own experience, especially if CV cannot provide the specifics I require.

 


 

@Laurent , 

Appreciate the response. Unfortunately training is not an option. The reason is too long to go into here and is really beside the point.

I’m simply converting an EOL physical media agent, currently Windows-based, over to an existing, decent-sized VMware environment with shared storage.

The risk of the live VMs and the backup data residing on the same physical storage is known and accepted, as the SANs do their own replication to another offsite SAN.

My focus is simply on converting from Windows to Linux and on any gotchas and best practices I should be aware of in doing so. There is no time to become an expert on these matters, nor do I need to, as I’m just getting the environment to a place where I can turn it over to a new team to manage.

 


Hey @mcdonamw , bit late to the thread here, though I hope I can help.

I’ve reached out to a few people internally to see if we have any best practices guides, or docs.

As you’ve seen, most of them revolve around the DDB itself.

I have also reached out to our documentation team to see what is available (and to create something if needed).

 


@mcdonamw , I heard back from our docs team already!

They mentioned we have all of our sizing documentation here:

https://documentation.commvault.com/11.23/expert/1644_commcell_sizing.html

Dedupe MA:

https://documentation.commvault.com/11.23/expert/111985_hardware_specifications_for_deduplication_mode.html

Non Dedupe Media Agents:

https://documentation.commvault.com/11.23/expert/1656_hardware_specifications_for_non_deduplication_mode.html

Let me know if there is anything specific that these docs don’t cover and I’ll be happy to get an answer for you.

Thanks!



@Mike Struening Thanks, but those are items I’ve already looked at. These guides do not get as granular as I need. The DDB Building Block guide does partially answer one question:

  • For Windows, we recommend that the DDB be on a fast, dedicated disk formatted at 32KB and dedicated disk libraries formatted at 64KB. For Linux MediaAgents, we recommend using DDB disks formatted at a 4KB block size.

This leads to multiple questions:

  1. For the Linux config they ONLY mention the DDB disk block size (which differs greatly from Windows). Am I to assume that the dedicated disk library is to be the same as on Windows, at 64KB? I hate to assume.
  2. This doesn’t touch on the different layers that all have configurable block sizes. I would assume these block sizes are only specified for the OS layer? What about the VM hypervisor layer and the hardware/raw layer as well? Are these supposed to match? I realize this is more outside the Linux area, per se, but these are things I expect Commvault to be able to clearly articulate.
  3. None of this mentions partition/block alignment, which I assume to be important. Is this just a given that Commvault expects its customers to know? (I’ve sketched the check I have in mind right after this list.)
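
For what it’s worth, here is the kind of alignment/block-size check I’ve pieced together on my own for the Linux side (a sketch only; /dev/sdc, the partition number, and the 4K value are my assumptions, not confirmed CV guidance):

    # Check that partition 1 on the DDB disk is optimally aligned
    parted /dev/sdc align-check optimal 1

    # Format the DDB volume at an explicit 4KB block size; as I understand it,
    # XFS normally caps the data block size at the kernel page size (4K on x86),
    # so 64KB formatting like NTFS on Windows wouldn't be possible here anyway
    mkfs.xfs -b size=4096 /dev/sdc1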

Then, outside of block sizes, I’m wondering about the disk/volume layout itself. See previous posts. Are there ideal volume sizes for the dedicated disk library partitions? Should there be one partition or multiple? The A600 self-configured multiple 9TB volumes that act as a single media agent library.

When it comes to manually configuring this, am I supposed to emulate something similar (multiple small partitions), or will one large volume work for the library?

Are there any best practices to consider for running a media agent on Linux, period? I can’t seem to find any.

What about running within VMware?  There are several architecture guides for virtualization within cloud providers such as AWS and Azure, but I can’t find any such guides for an on-prem VMware environment.

I see references to Commvault virtual appliances as OVA templates to deploy within VMware, but there is no information as to what Commvault components are contained within. I would assume this is a full installation of a CommServe, as they seem to be meant for getting someone up and backing up systems relatively quickly. Are there any such OVAs for media agents, specifically on Linux?

On a side note, the OVA mentioned above doesn’t even list what OS is running within it. If I were to guess, it’s Windows, because the CommServe cannot run on any other OS, from what I understand. If something like that isn’t even documented, how am I expected to get the level of granular detail I’m looking for by searching these guides?

 

 


I reached out to our expert on this topic (part of the docs engineering team).

I’ll check in tomorrow to confirm you have your answers.


Most Linux OSes support up to a 4K block size for logical volumes; that’s the reason we specified 4K for the DDB disk. The same applies to the disk library. If a higher block size is possible, use 32K for the DDB and 64K for the disk library.
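
For example, to confirm the block size an existing volume was formatted with, you can run something like the following (device names and mount points below are just examples):

    # XFS: 'bsize=4096' in the data section is the filesystem block size
    xfs_info /mnt/ddb

    # ext4 equivalent: look for the 'Block size:' line
    tune2fs -l /dev/mapper/vg_ddb-lv_ddb | grep 'Block size'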

