Solved

Disk Performance Tool - average read/write throughput - DDB disk

  • 14 April 2022
  • 1 reply
  • 1551 views

The documentation says to use the following default parameters to verify that the average read throughput of the disk is approximately 600 GB per hour and the average write throughput is approximately 700 GB per hour for a disk volume used as a mount path of a disk library:

  • BLOCKSIZE of 65536, BLOCKCOUNT of 16384, THREADCOUNT of 6 (each thread uses 1 file of 1 GB), and FILECOUNT of 6 (a quick arithmetic check of these values follows below).
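
For reference, here is a quick arithmetic check (not part of the tool itself) of how these defaults relate to the 600 and 700 GB per hour targets: each thread's file is BLOCKSIZE × BLOCKCOUNT = 1 GiB, and throughput is simply total bytes moved divided by elapsed time. A minimal Python sketch, where the 33-second write pass is an assumed figure purely for illustration:

```python
# Arithmetic behind the documented Disk Performance Tool defaults.
BLOCKSIZE = 65536    # bytes per I/O
BLOCKCOUNT = 16384   # I/Os per file
THREADCOUNT = 6      # concurrent threads, one file each
FILECOUNT = 6

file_size = BLOCKSIZE * BLOCKCOUNT    # 1,073,741,824 bytes = 1 GiB per file
total_bytes = file_size * FILECOUNT   # roughly 6 GiB moved per read or write pass

def gb_per_hour(bytes_moved: int, seconds: float) -> float:
    """Convert one measured pass into GB/hour for comparison with the 600/700 targets."""
    return bytes_moved / 1e9 / (seconds / 3600)

print(f"file size per thread: {file_size / 2**30:.0f} GiB")
# Assumed example: a ~6 GiB write pass finishing in 33 seconds works out to about 700 GB/hour.
print(f"write pass in 33 s  : {gb_per_hour(total_bytes, 33):.0f} GB/hour")
```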

  1. What are reasonably good values for the average read and write throughput of a disk used as a DDB disk, rather than as a mount path of a disk library?
  2. What parameters (BLOCKSIZE, BLOCKCOUNT, THREADCOUNT, and FILECOUNT) are suggested for testing a disk reserved to act as the DDB disk?

Best answer by Orazan 14 April 2022, 20:21

1 reply

We do not measure the DDB disk in terms of read and write throughput. For the DDB disk, we test IOPS. Here are the instructions for testing the IOPS of the DDB disk with Iometer:

https://documentation.commvault.com/11.24/expert/8825_testing_iops_of_deduplication_database_disk_on_windows_01.html
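
As a rough illustration only (this is not the documented Iometer workflow from the link above), an IOPS test issues many small reads at random offsets and counts how many complete per second. The Python sketch below makes several assumptions: the test path, a 1 GiB test file, 4 KiB reads, a 30-second run, and a single thread; the OS page cache will also inflate the result compared to a proper unbuffered, multi-threaded tool such as Iometer.

```python
# Illustrative single-threaded random-read IOPS estimate (assumptions noted above).
import os
import random
import time

TEST_FILE = r"D:\DDB\iops_test.bin"  # hypothetical path on the DDB volume
FILE_SIZE = 1 * 1024**3              # 1 GiB test file
BLOCK_SIZE = 4096                    # 4 KiB per read, a common IOPS block size
DURATION = 30                        # seconds to sample

# Create the test file once if it is missing or too small.
if not os.path.exists(TEST_FILE) or os.path.getsize(TEST_FILE) < FILE_SIZE:
    with open(TEST_FILE, "wb") as f:
        f.truncate(FILE_SIZE)

ops = 0
deadline = time.monotonic() + DURATION
with open(TEST_FILE, "rb", buffering=0) as f:
    while time.monotonic() < deadline:
        # Seek to a random block-aligned offset and read one block.
        offset = random.randrange(FILE_SIZE // BLOCK_SIZE) * BLOCK_SIZE
        f.seek(offset)
        f.read(BLOCK_SIZE)
        ops += 1

print(f"approximate random-read IOPS: {ops / DURATION:.0f}")
```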

Please let me know if you have any questions.