Data Domain DDBoost feature

  • 14 April 2021
  • 12 replies
  • 1089 views

Badge

My company is planning a tech refresh of our aging Data Domain to a newer model.

We also highlighted that we're having backup slowness issues on some of our large Oracle databases and some NDMP backups. Our current configuration backs up to a VTL on our Data Domain, and no compression or deduplication is enabled at the Commvault layer.

Dell's sales team advised us to purchase an additional DDBoost license for the new Data Domain. Their reasoning is that DDBoost can achieve a very good dedupe and compression rate at the source before the data is transferred to the Data Domain, saving the time it takes to send it over the network.
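Just to illustrate the mechanism Dell is describing: source-side dedup fingerprints each chunk of the backup and only ships chunks the target hasn't already stored, so repeated data costs a hash lookup instead of a network transfer. The Python below is a toy sketch of that idea only - it is not Dell's actual DD Boost protocol, and the chunk size and data are made up.

```python
import hashlib

# Toy illustration of source-side dedup, NOT the real DD Boost protocol.
# "target_index" stands in for the fingerprints the Data Domain already holds.

CHUNK_SIZE = 128 * 1024  # illustrative fixed chunk size

def backup(stream: bytes, target_index: set) -> int:
    """Send only chunks the target has not seen; return bytes actually sent."""
    bytes_sent = 0
    for offset in range(0, len(stream), CHUNK_SIZE):
        chunk = stream[offset:offset + CHUNK_SIZE]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint not in target_index:   # only unique data crosses the network
            target_index.add(fingerprint)
            bytes_sent += len(chunk)
        # duplicate chunks cost only the fingerprint exchange, not the payload
    return bytes_sent

# A highly repetitive ~10 MB stream: most chunks are duplicates, so little is sent.
index = set()
data = (b"oracle-block-" * 20000)[:CHUNK_SIZE] * 80
print(f"{len(data)} bytes of source data, {backup(data, index)} bytes sent over the wire")
```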

However, I've been checking around Commvault's KB and it looks like Commvault only works with BoostFS and not DD Boost directly. I haven't checked with Dell yet regarding this.

May I know if anyone has implemented DDBoost in their environment for backing up databases/VMs and NDMP?


12 replies

Userlevel 1
Badge +1

Hi JasonTF,

Thank you very much for your post.

I can confirm that DD Boost can be used with the Commvault software.

The following documentation will give you answers on how to configure it and the recommended settings: https://documentation.commvault.com/commvault/v11/article?p=9404.htm

Have a great day!

Warm regards,

Chris

Userlevel 3
Badge +8

Hi! You can leverage DD Boost by installing the BoostFS plug-in on your MediaAgent.

As much as I like Data Domain, combining it with Commvault brings some limitations:

  • no true source-side dedupe (data will be reduced by MA)
  • no DFC support (DD Boost over FC)
  • no Retention Lock support

Badge

Hey Jason! I have this exact setup in our environment here! We currently use a Data Domain 6300 for our backup appliance and leverage BoostFS on our MediaAgent (Windows Server 2016).

While I like the Data Domain, there are some limitations. You have to disable deduplication on Commvault and let the hardware do everything, so it adds a bit of complication. And support from Dell/EMC on how to get it up and running isn't the easiest. There don't seem to be a lot of techs that are familiar with DDBoost/BoostFS. (They are one and the same; BoostFS is just the Linux/Windows plug-in that utilizes DDBoost.)

 

Let me know if you have questions!

Userlevel 4
Badge +10


Are you doing Natural Fulls instead of Non-DASH Fulls with that combination?

Badge


I am doing Natural Fulls. It works OK. However, if I were to redo my setup, I would have used a non-deduplicating appliance, or used Dell's own Data Domain backup software as well, to keep the hardware and software from the same vendor.

Badge

For those using DDBoost to send backups to Data Domain without Commvault dedup/compression: what kind of data reduction rates are you seeing? Long term we've been hovering on average around 5.5:1. We made some changes a few months ago that got our weekly dedup rates up to 8.1x, so I'm hoping the numbers continue to improve once the LTR backups fall off. Prior to CV we used another data protection product that wrote to Data Domain and we were getting around 20x data reduction, so I'm a little shocked at the low numbers we're seeing.

@JasonTF @Rodger Spruill 
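For anyone comparing numbers: the rates quoted here are just logical data written divided by physical space consumed after dedup and compression. A quick sketch of that arithmetic in Python (the figures are invented for illustration, not our actual stats):

```python
# Data reduction factor = logical (pre-comp) bytes written / physical (post-comp) bytes stored.
# The sizes below are invented purely to show the arithmetic.

TB = 1024 ** 4

def reduction_factor(logical_bytes: float, physical_bytes: float) -> float:
    return logical_bytes / physical_bytes

# 550 TB of backups written that consume 100 TB on the appliance -> 5.5x
print(f"{reduction_factor(550 * TB, 100 * TB):.1f}x")

# The same 550 TB landing in 27.5 TB would be the ~20x the previous product reported
print(f"{reduction_factor(550 * TB, 27.5 * TB):.1f}x")
```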

Badge


Where are you looking for the data reduction rates?

Userlevel 7
Badge +15

Prior to CV we used another data protection product that wrote to Data Domain and we were getting around 20x data reduction, so I'm a little shocked at the low numbers we're seeing.

@JasonTF @Rodger Spruill 

Was that product optimized for writing to DD, and did it claim DD support? Are you using BoostFS for your mount paths? Just curious!

There are settings you can change that will alter the way Commvault writes data (chunk size, etc.), but obviously Commvault favors its own dedupe engine without the need for proprietary storage - that has always been our gig. I have seen some folks disable dedupe on DD and run CV dedupe instead - but I'm not sure what impact that has on DD. I do know that if you write Commvault-deduped data to DD, restore performance is crippled on the DD side, so I certainly don't recommend that.

@JasonTF No DDBoost support unfortunately - I think the want is there but the other party won't play ball :wink:

 

Badge +1

Prior to CV we used another data protection product that wrote to Data Domain and we were getting around 20x data reduction, so I'm a little shocked at the low numbers we're seeing.

Are you sure you were getting 20x reduction? Some backup products count thick-provisioned white space towards their total data reduction. That's cheating in my view, and if you're licensed by front-end capacity it implies they're charging you money to back up white space.

However, some data types, such as native database backup dumps, really will dedup better on Data Domain, because it uses variable-length patterns while Commvault uses fixed-length patterns. If that's the case, you might get a better dedup rate by enabling Variable Content Alignment for the Commvault client (just be aware it hammers the CPU doing the dedup).
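To show why the variable-length vs fixed-length distinction matters: with fixed-length chunks, inserting a few bytes near the start of a file shifts every later chunk boundary, so almost nothing lines up with the previous backup, while content-defined (variable-length) chunking picks boundaries from the data itself, so the boundaries realign after the insert and most chunks still match. The sketch below is a toy comparison only - it is not Data Domain's or Commvault's actual algorithm, and the window size, mask, and chunk limits are arbitrary.

```python
import hashlib

def fixed_chunks(data: bytes, size: int = 64) -> list:
    """Fixed-length chunking: a boundary every `size` bytes, regardless of content."""
    return [data[i:i + size] for i in range(0, len(data), size)]

# Gear-style rolling hash table (toy version, arbitrary parameters).
GEAR = [int.from_bytes(hashlib.sha256(bytes([b])).digest()[:8], "big") for b in range(256)]
MASK = 0x3F  # a boundary roughly every 64 bytes on average

def cdc_chunks(data: bytes, min_size: int = 16, max_size: int = 256) -> list:
    """Content-defined (variable-length) chunking: boundaries chosen by the data."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + GEAR[b]) & 0xFFFFFFFFFFFFFFFF
        length = i - start + 1
        if (length >= min_size and (h & MASK) == 0) or length >= max_size:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def matching_chunks(old: list, new: list) -> int:
    """How many chunks of the new backup already exist in the old one."""
    seen = {hashlib.sha256(c).hexdigest() for c in old}
    return sum(1 for c in new if hashlib.sha256(c).hexdigest() in seen)

# A fake 16 KB "database dump", then the same dump with 3 bytes inserted at the front.
original = b"".join(hashlib.sha256(str(i).encode()).digest() for i in range(512))
shifted = b"HDR" + original

print("fixed-length matches:   ", matching_chunks(fixed_chunks(original), fixed_chunks(shifted)))
print("variable-length matches:", matching_chunks(cdc_chunks(original), cdc_chunks(shifted)))
# Fixed-length typically finds almost no matches after the shift;
# content-defined chunking usually realigns and matches most chunks again.
```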

I have seen some folks disable dedupe on DD and run CV dedupe instead - but I’m not sure what impact that has on DD.

You can’t disable dedup on Data Domain. You can disable compression, and the Data Domain console doesn’t make it clear exactly what you have and haven’t disabled, but deduplication is always on.

Badge


@JasonTF @Rodger Spruill We're looking at the reduction rates on the Data Domain. Compression, dedup, and encryption are disabled on the Commvault side.

Badge


Prior to this we used NBU to an older model DD; they had a driver to utilize DDBoost. Yes, we are using BoostFS for Windows now on our MA mount paths (Commvault documentation: https://documentation.commvault.com/11.24/expert/9404_disk_libraries_frequently_asked_questions.html#can-i-configure-data-domain-boost-dd-boost-on-disk-libraries-using-emc-data-domain).

So we're getting source-side dedup from the MediaAgent to the DD. I thought that might affect dedup on the target storage, but I think that would have been an issue for NBU as well.

 

Thanks for the chunk size suggestion.  I'll look into that more and maybe compare it to what NBU was doing.

Badge +2

Another advantage with DDBoost is that you can get the Oracle DBAs to do everything themselves. DDBoost for Oracle can hook directly into the DDs, and from there the DBAs can manage all of their own backup and recovery, with the CV admins taken completely out of the loop.

Last time I did this, it was included in the DDBoost license.
