My company is planning a tech refresh of our aging Data Domain to a newer model.

We also highlighted that we’re having backup slowness on some of our large Oracle databases and some NDMP backups. Our current configuration backs up to a VTL on the Data Domain, with no compression or deduplication enabled at the Commvault layer.

Dell’s sales team advised us to purchase an additional DD Boost license for the new Data Domain, because DD Boost can achieve very good dedupe and compression rates at the source before transferring to the Data Domain, saving network transfer time.

However, I’ve been checking Commvault’s KB and it looks like Commvault only works with BoostFS, not DD Boost directly. I haven’t checked with Dell about this yet.

Has anyone here implemented DD Boost in your environment for backing up databases/VMs and NDMP?

Hi JasonTF,

 

Thank you very much for your post.

 

I can confirm that DD Boost can be used with Commvault software.

 

The following documentation covers how to configure it and the recommended settings: https://documentation.commvault.com/commvault/v11/article?p=9404.htm

 

Have a great day!

 

Warm regards,

 

Chris

 


Hi! You can leverage DD Boost by installing the BoostFS plug-in on your MediaAgent.

As much as I like Data Domain, combining it with Commvault brings some limitations:

  • no true source-side dedupe (data is reduced by the MediaAgent)
  • no DFC support (DD Boost over Fibre Channel)
  • no Retention Lock support

Hey Jason! I have this exact setup in our environment! We currently use a Data Domain 6300 as our backup appliance and leverage BoostFS on our MediaAgent (Windows Server 2016).

While I like the Data Domain, there are some limitations. You have to disable deduplication in Commvault and let the hardware do everything, so it adds a bit of complication. And support from Dell/EMC on how to get it up and running isn’t the easiest; there don’t seem to be a lot of techs familiar with DDBoost/BoostFS. (They are one and the same: BoostFS is just the Linux/Windows plug-in that utilizes DD Boost.)

 

Let me know if you have questions!



Are you doing Natural Fulls instead of Non-DASH Fulls with that combination?



 

I am doing Natural Fulls. It works OK. However, if I were to redo my setup, I would use a non-deduplicating appliance, or use Dell’s Data Domain backup software as well and keep the hardware and software from the same vendor.


For those using DD Boost to send backups to Data Domain without Commvault dedup/compression: what kind of data reduction rates are you seeing? Long term we’ve been hovering around 5.5:1 on average. We made some changes a few months ago that got our weekly dedup rates up to 8.1x, so I’m hoping the numbers continue to improve once the LTR backups fall off. Prior to CV we used another data protection product that wrote to Data Domain and we were getting around 20x data reduction, so I’m a little shocked at the low numbers we’re seeing.

@JasonTF @Rodger Spruill 



Where are you looking for the data reduction rates?


Prior to CV we used another data protection product that wrote to Data Domain and we were getting around 20x data reduction, so I’m a little shocked at the low numbers we’re seeing.

@JasonTF @Rodger Spruill 

Was that product optimized for writing to DD, and did it claim DD support? Are you using BoostFS for your mount paths? Just curious!

There are settings you can change that alter the way Commvault writes data (chunk size, etc.), but obviously Commvault favors its own dedupe engine, with no need for proprietary storage; that has always been our gig. I have seen some folks disable dedupe on DD and run CV dedupe instead, but I’m not sure what impact that has on DD. I do know that if you write Commvault-deduped data to DD, restore performance is crippled on the DD side, so I certainly don’t recommend that.

@JasonTF No DDBoost support unfortunately; I think the want is there but the other party won’t play ball :wink:

 


Prior to CV we used another data protection product that wrote to Data Domain and we were getting around 20x data reduction, so I’m a little shocked at the low numbers we’re seeing.

Are you sure you were getting 20x reduction? Some backup products count thick-provisioned white space towards their total data reduction. That’s cheating in my view, and if you’re licensed by front-end capacity it implies they’re charging you money to back up white space.
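
Quick illustration with made-up numbers of how much white-space accounting can distort the headline ratio (the TB figures below are hypothetical, chosen only to make the arithmetic obvious):

```python
# Made-up numbers, just to show how counting thick-provisioned white space
# inflates a reduction ratio. Same physical bytes, very different "x" factor.
front_end_tb = 100   # thick-provisioned size the product claims to protect
real_data_tb = 30    # blocks that actually contain data
on_disk_tb   = 5     # physical capacity used after dedup/compression

print(f"claimed reduction (white space counted): {front_end_tb / on_disk_tb:.0f}x")  # 20x
print(f"honest reduction (real data only):       {real_data_tb / on_disk_tb:.0f}x")  # 6x
```

Same physical bytes on disk, wildly different marketing number.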

However, some data types, such as database native backup dumps, really will dedup better on Data Domain, because it uses variable-length patterns while Commvault uses fixed-length patterns. If that’s the case, you might get a better dedup rate by enabling Variable Content Alignment for the Commvault client (just be aware it hammers the CPU doing the dedup).
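
For anyone curious why the fixed vs. variable distinction matters so much, here is a toy Python sketch. It is not Commvault’s or Data Domain’s actual algorithm (the window and mask values are invented, and real systems use Rabin-style rolling hashes), but it shows the core effect: one inserted byte invalidates every downstream fixed-size chunk, while content-defined chunk boundaries re-align after the edit:

```python
import hashlib
import random

def fixed_chunks(data: bytes, size: int = 64):
    # Cut every `size` bytes regardless of content.
    return [data[i:i + size] for i in range(0, len(data), size)]

def variable_chunks(data: bytes, window: int = 8, mask: int = 0x1F):
    # Toy content-defined chunking: cut wherever a hash of the last
    # `window` bytes matches `mask`. Because the cut decision looks only
    # at nearby content, an inserted byte shifts boundaries near the edit
    # but they re-align afterwards.
    chunks, start = [], 0
    for i in range(window, len(data)):
        h = 0
        for b in data[i - window:i]:
            h = (h * 31 + b) & 0xFFFFFFFF
        if h & mask == mask:
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])
    return chunks

def dup_rate(old, new):
    # Fraction of new chunks already seen in the old backup.
    seen = {hashlib.sha256(c).digest() for c in old}
    return sum(hashlib.sha256(c).digest() in seen for c in new) / len(new)

random.seed(0)
base = bytes(random.getrandbits(8) for _ in range(8192))
shifted = b"X" + base   # one inserted byte shifts everything after it

print("fixed-length chunks matched:    %.0f%%"
      % (100 * dup_rate(fixed_chunks(base), fixed_chunks(shifted))))
print("content-defined chunks matched: %.0f%%"
      % (100 * dup_rate(variable_chunks(base), variable_chunks(shifted))))
```

The fixed-length pass matches essentially nothing after the shift, while the content-defined pass still matches nearly every chunk.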

I have seen some folks disable dedupe on DD and run CV dedupe instead, but I’m not sure what impact that has on DD.

You can’t disable dedup on Data Domain. You can disable compression, and the Data Domain console doesn’t make it clear exactly what you have and haven’t disabled, but deduplication is always on.



Where are you looking for the data reduction rates?

@JasonTF @Rodger Spruill We’re looking at the reduction rates on the Data Domain. Compression, dedup, and encryption are disabled on the Commvault side.



 

Prior to this we used NBU to an older-model DD; they had a driver to utilize DD Boost. Yes, we are now using BoostFS for Windows on our MA mount paths. Related Commvault documentation: https://documentation.commvault.com/11.24/expert/9404_disk_libraries_frequently_asked_questions.html#can-i-configure-data-domain-boost-dd-boost-on-disk-libraries-using-emc-data-domain

So we’re getting source-side dedup from the MediaAgent to the DD. I thought that might affect dedup on the target storage, but I think that would have been an issue for NBU as well.

 

Thanks for the chunk size suggestion.  I'll look into that more and maybe compare it to what NBU was doing.


Another advantage of DD Boost is that you can get the Oracle DBAs to do everything themselves. DD Boost for Oracle can hook directly into the DDs, and from there the DBAs can manage all their own backup and recovery, with the CV admins taken completely out of the loop.

Last time I did this, it was included in the DD Boost license.


Hi, just my little experience with Data Domains.

We’ve had two: a DD670, replaced by a DD4200, which was later expanded.

We mostly used them as VTLs, but also gave the NFS/CIFS storage a try.

It worked and achieved quite good performance with the backup tools we had before Commvault. Backups were almost always OK.

The bad point was the post-compression job, which initially ran weekly but which we had to re-run more and more often as time went by, until we expanded the capacity.

For single restores, it worked. 

For multiple, simultaneous restores, it was a very different story.

And when we attached them to our Commvault MAs, this behaviour became even more pronounced; clearly a bottleneck (again, used as a VTL with around 10 drives and 500 tapes). I also thought it too complex to add what I would call a ‘driver’ (DD Boost or whatever would be required to enhance performance) on top of the system drivers and the backup tool itself: when issues arise, you never know where they come from.

Then we switched to simple disk devices for aux copies, using deduplication and compression, and restores were as fast (or as slow :laughing:) as with the Data Domain, but so much cheaper! The storage used for this purpose was smaller and we stored more.

I like it when it’s simple; it often works better. :smile:

 


Just wanted to come back and share that we were able to fix our dedup issue on the Data Domain. Very early on we worked with Dell on a BoostFS stability issue which would cause the storage to dismount from the MediaAgents. The fix was to disable “marker detection”, which fixed the stability issue but broke dedup: without marker detection, the Commvault backups appeared to the Data Domain to have a high change rate. We now have marker detection set to auto and are getting the dedup we expected. There is even a setting for vendor-specific markers, and Commvault is one of the listed vendors, but I haven’t tried that yet.
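
For anyone wondering why markers break dedup so badly, here is a conceptual Python sketch. The marker format is hypothetical (not Commvault’s real stream layout), and a real appliance chunks the stream itself rather than hashing whole blocks, but the effect is the same: per-job metadata woven through otherwise identical data makes every segment look new until it is filtered out:

```python
import hashlib
import os

def with_markers(blocks, job_id):
    # Hypothetical backup stream: each data block is preceded by per-job
    # metadata. The payload is the same every night; the markers are not.
    return [f"JOB={job_id};SEQ={seq};".encode() + blk
            for seq, blk in enumerate(blocks)]

def dup_rate(old, new):
    # Fraction of new segments already stored from the previous run.
    seen = {hashlib.sha256(c).digest() for c in old}
    return sum(hashlib.sha256(c).digest() in seen for c in new) / len(new)

blocks = [os.urandom(4096) for _ in range(100)]   # identical data both nights
night1 = with_markers(blocks, job_id=1001)
night2 = with_markers(blocks, job_id=1002)

# "Marker detection": strip the metadata prefix before hashing the payload.
strip = lambda run: [c.split(b";", 2)[2] for c in run]

print("markers kept:    ", dup_rate(night1, night2))                  # 0.0
print("markers filtered:", dup_rate(strip(night1), strip(night2)))    # 1.0
```

With the markers left in, the appliance sees a 100% change rate on unchanged data, which matches the symptom described above.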


Thanks for sharing! It’s never too late to bring things to resolution, or at least mostly there!



Forgot to mention the last DD update fixed the stability issue, so we can enable marker detection without issue.



 

Hello Chris!

I see that link is down and I cannot find any similar documentation from the Commvault side.

From Dell I found this guide: shorturl.at/guR48


Wanted to add that when we tried to implement BoostFS in our Commvault environment, we also ran into an extra licensing issue that we didn’t anticipate. We had to switch to using pure CIFS.

We have since been ditching the Data Domains and wouldn’t recommend them for Commvault shops.


Hi, I just read the thread because I have a new customer who is using a Data Domain. It’s currently configured as a CIFS share without the DD Boost driver on the MA.

Can we configure the DD Boost driver after the initial configuration, or should I not?

What are the pros and cons?

There is already around 500 TB of data written to this library!


@Marco Lachance , hope all is well!

The consensus here is that it should be fine since you’re only changing the way it’s mounted, though we’re not 100% sure.

DD Boost allows a client to write directly to the DD, bypassing the hops through the MA.


DD seems like such a bad fit for Commvault storage.

Yes, you get back-end dedupe, but you lose all the magic of source-side deduplication. In most instances bandwidth is at a premium and storage is cheap in comparison.
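
Back-of-envelope math (all numbers below are hypothetical) on why that bandwidth point matters:

```python
# Hypothetical environment: 50 TB full, ~2% daily change, 10 Gbps backup network.
full_tb     = 50      # front-end size of a full backup
change_rate = 0.02    # fraction of blocks new/changed since the last run
link_gbps   = 10      # backup network speed

def hours_on_wire(tb):
    return tb * 8000 / link_gbps / 3600   # TB -> gigabits -> seconds -> hours

# Target-side dedupe only: every full crosses the network before reduction.
print(f"target-side dedupe: {hours_on_wire(full_tb):.1f} h per full")
# Source-side dedupe (e.g. a DASH full): only changed blocks cross the wire.
print(f"source-side dedupe: {hours_on_wire(full_tb * change_rate):.1f} h per full")
```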

 

Just my $0.02.


Do any of you intend to make BoostFS and GridStor work together?

 

From what I saw, when you create a library with BoostFS as a target you have to select a local mount point (pointing to the BoostFS drive you just mounted with the plug-in).

 

If that’s really the case, how do you make GridStor work? I mean, we can’t have two MediaAgents pointing to the same local mount point…

 

Regards


Another odd thing is this:

https://documentation.commvault.com/11.26/expert/9415_can_i_configure_data_domain_boost_dd_boost_on_disk_libraries_using_emc_data_domain.html

When using EMC Data Domain with DD Boost, we recommend a default deduplication block size of 512 KB for all data type backups. Also ensure that the block size is set to 512 KB on storage pools used for making copies using an Auxiliary Copy operation.

Deduplication block size can be modified using the Block level Deduplication factor option available in the Storage Policy Properties - Advanced tab.

 

I was thinking that with BoostFS you MUST disable all compression/dedup features on the Commvault side.


@Aglidic you can still use it, of course, but it will take away a lot of the efficiency results from the Data Domain box, because Commvault will be responsible for the biggest portion of the space savings.

Someone else already highlighted the benefits of enabling it on the Commvault side. I’m honestly always surprised that customers buy these expensive boxes.

