Question

For your DDB ssds: natively exposed to OS, or managed by your hardware PERC?

  • November 6, 2025
  • 4 replies

ZachHeise
Byte

Hello, I’m curious how others are doing this, or what Commvault recommends. For our Windows-based MediaAgents, the DDB disk is actually 4 SSDs plugged into a Dell PERC, specifically an H740P. In iDRAC, Dell’s hardware management, we configured those SSDs as a single RAID-5 virtual disk, so a single SCSI disk is what Windows actually sees. I was wondering whether Commvault has any opinion on setting up the DDB on SSDs behind a hardware-level abstraction like this, or whether Commvault believes there would be a noticeable change or improvement in performance if the SSDs were natively exposed to the OS and managed that way.

4 replies

Mohammed Ramadan

Hi Zach
nice cat BTW dude 😄

I tested something similar before, though not on the same hardware. The MediaAgent was on a physical host and we couldn’t provide local SSDs for the DDB, so we went with a setup like yours. At the beginning everything worked fine; I was monitoring the DDB Q&I (Query & Insert) times and they were under 2 ms, so performance was very good. After some time, however, the CST reported errors on backup jobs and some jobs were taking a long time. When I checked, I saw an event “DDB reaching Q&I time threshold”, and looking at the DDB I noticed the Q&I times had gone over 50 ms. After moving the DDB partition to a local disk, performance improved significantly.

So if your MediaAgent hardware doesn’t have local SSDs, your scenario will work, but with heavy writes you may run into issues. If you can use local SSDs, use them; Commvault recommends it too. And don’t worry about losing the DDB, there are many ways to restore it 😅.

Best Regards,
Mohammed Ramadan
Data Protection Engineer


ZachHeise
Byte
  • Author
  • November 10, 2025

Hi Mohammed, yes this old cat is my work-from-home buddy. He’s sitting on my lap right now as I commvault!

Anyway, perhaps some slight confusion here - we are definitely running SSDs for the DDB. Commvault makes it very clear that running the DDBs on HDDs, or on a network drive (gasp) would introduce huge latency.

Basically, to rephrase: the question is about how to ‘expose’ the SSDs to Commvault when you have the DDB split across multiple SSDs for speed and/or redundancy (i.e. RAID-5 or RAID-6). Right now, Windows and Commvault have no idea that my DDB is on SSDs, because all they can see is a generic volume provided by the Dell PERC card. The PERC exposes the unformatted, unpartitioned bare volume to Windows as a single drive instead of the 4x SSDs that Dell combined into it.

So I was wondering whether this is okay by Commvault, performance-wise, or whether we’re taking a penalty by not exposing the SSDs directly to Windows and using software RAID in Windows to manage the 4x SSDs together.

See what I mean now?


Mohammed Ramadan

Hi Zach,
I love cats, and it looks like we have a professional Commvault cat here.

It’s totally fine. Commvault just cares about performance (IOPS and latency), not how many drives Windows sees. If the RAID meets the performance targets, you are good to go.

The key point here, dude, is that Commvault doesn’t care about physical disk visibility; the DDB just has to be on a fast volume. Use a tool to check that it meets the minimum IOPS requirements: https://documentation.commvault.com/v11/software/planning_for_deduplication.html

If everything looks good, go ahead. I think below 2 ms (2000 µs) you are good to configure it. Keep monitoring the Q&I thresholds; you will also see major or critical event messages in the event viewer.
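If you just want a quick sanity check before pulling in a proper benchmarking tool, here is a rough Python sketch that times random 4 KiB synced writes against a test file on the target volume. The file size, op count, and target path are made-up example values, and OS/controller caching will skew the numbers, so treat this only as a smoke test, not a substitute for validating against Commvault’s published requirements.

```python
import os
import random
import statistics
import tempfile
import time

BLOCK = 4096                    # 4 KiB blocks, similar to small random DDB I/O
FILE_SIZE = 64 * 1024 * 1024    # 64 MiB test file (tiny; real tests use far more)
OPS = 500                       # number of timed write operations

def measure_random_write_latency(path):
    """Time random 4 KiB writes followed by fsync; return latencies in ms."""
    latencies = []
    with open(path, "r+b") as f:
        for _ in range(OPS):
            offset = random.randrange(0, FILE_SIZE - BLOCK)
            start = time.perf_counter()
            f.seek(offset)
            f.write(os.urandom(BLOCK))
            f.flush()
            os.fsync(f.fileno())  # push the write through the OS cache
            latencies.append((time.perf_counter() - start) * 1000.0)
    return latencies

def main(target_dir):
    """Create a throwaway test file in target_dir, measure, clean up."""
    fd, path = tempfile.mkstemp(dir=target_dir)
    try:
        with os.fdopen(fd, "wb") as f:
            f.truncate(FILE_SIZE)  # pre-allocate the test file
        lat = measure_random_write_latency(path)
        print(f"avg {statistics.mean(lat):.2f} ms, "
              f"p95 {sorted(lat)[int(0.95 * len(lat))]:.2f} ms")
        return statistics.mean(lat)
    finally:
        os.remove(path)

if __name__ == "__main__":
    # Point this at a directory on the DDB volume,
    # e.g. r"E:\ddb_test" (hypothetical path).
    main(tempfile.gettempdir())
```

If the averages here are already well above the 2 ms ballpark Mohammed mentions, a heavier-duty tool will almost certainly confirm the problem.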

Best Regards,
Mohammed Ramadan
Data Protection Engineer


  • Bit
  • November 10, 2025


Here’s how we’ve approached this in our environment:

 

We’ve done several DDB performance tests on Windows MediaAgents using both hardware RAID and direct SSD presentation, and the difference really depends on how the controller handles caching and latency.

 

In our case, we’re also using Dell PERC controllers (H740P on PowerEdge servers), and we noticed that when the controller cache is properly tuned (write-back mode with BBU), RAID-10 or RAID-5 can perform quite well. The small additional latency from the controller layer isn’t dramatic — especially if you’re using quality enterprise SSDs.

 

That said, when we tested direct-attached SSDs (no RAID, or RAID-0 per disk) with the OS managing them natively, we saw slightly better random I/O performance and lower write latency, which makes sense for DDB workloads that are very metadata-heavy.

 

From what Commvault recommends, both setups are supported — what really matters is staying under ~1 ms latency and ensuring sustained IOPS. If redundancy is covered elsewhere (for example with multiple DDB partitions or MediaAgents), I’d go for direct SSD exposure.

If the local DDB protection is important, then hardware RAID with a tuned cache remains a safe and efficient approach.

 

So, in short: it’s a trade-off between raw I/O performance and local resiliency, but both configurations can perform really well if properly tuned.
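To put rough numbers on that trade-off: small random writes on RAID-5 classically cost four back-end I/Os (read old data, read parity, write both back), versus two on RAID-10 and one on a bare or striped disk. A tuned write-back cache absorbs much of that penalty, which is why a well-configured PERC can still do fine. The drive count and per-drive IOPS in this sketch are made-up illustration figures, not measurements:

```python
# Textbook small-random-write penalties: back-end I/Os per front-end write.
# These ignore controller write-back caching, which hides much of the cost.
PENALTY = {"single/raid0": 1, "raid10": 2, "raid5": 4}

def effective_write_iops(drives: int, iops_per_drive: int, level: str) -> float:
    """Naive aggregate write IOPS: raw pool IOPS divided by the write penalty."""
    return drives * iops_per_drive / PENALTY[level]

# Hypothetical 4x enterprise SSDs at 50k write IOPS each:
for level in PENALTY:
    print(f"{level:>12}: {effective_write_iops(4, 50_000, level):,.0f} write IOPS")
```

The absolute numbers are invented, but the ratios (RAID-5 at a quarter of the raw pool, RAID-10 at half) show why the metadata-heavy DDB workload feels the RAID level more than a sequential backup stream does.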