Solved

When does drilling of holes start?

  • 14 November 2022
  • 3 replies
  • 577 views


I have a production CommCell where all mount paths support drilling of holes (sparse). When I open the mount path properties in Windows, I can see that "Size on disk" is much smaller than the "Size" of the folder. The whole partition is smaller than the "Size" of the folder, but of course larger than "Size on disk".
I installed a test environment where all mount paths also support drilling of holes (sparse). Scheduled and manual backups succeed and are stored on the mount paths. But when I open the mount path properties in Windows on the test MA, I can see that "Size" and "Size on disk" are the same, or "Size on disk" is even a bit larger. If I check a file with "fsutil sparse queryflag", I get the response: "This file is NOT set as sparse".
My question is: when do the backend file sizes start to decrease? When will the sparse flag be set on backup files stored on a mount path that supports sparse files?
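
(For reference, one way to check how many files under a mount path already carry the sparse attribute, instead of running "fsutil sparse queryflag" file by file, is sketched below. It is only a Windows-only Python sketch, and the mount path is a placeholder, not a path from this thread.)

import os
import stat

MOUNT_PATH = r"E:\MountPath01"  # placeholder - replace with your actual mount path

sparse_count = 0
regular_count = 0
for root, _dirs, files in os.walk(MOUNT_PATH):
    for name in files:
        attrs = os.stat(os.path.join(root, name)).st_file_attributes
        if attrs & stat.FILE_ATTRIBUTE_SPARSE_FILE:
            sparse_count += 1   # same flag that "fsutil sparse queryflag" reports
        else:
            regular_count += 1
print(f"sparse files: {sparse_count}, non-sparse files: {regular_count}")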


3 replies

Mike Struening
Vaulter

@VBalazs, the answer to all of that is a bit long, but I think it will help figure out what (if anything awry) is going on.

I’m copying the answer I gave on another thread which covers the main beats:

That’s a very good question, and the explanation is a bit long…..so bear with me :joy:

Assuming Deduplication is involved (which these days, is almost always the case), the Commserve itself is not really handling the actual pruning.

Here’s what happens at a high-ish level:

  1. Data Aging runs on the CS and all logical Job IDs that have met all retention rules are marked for aging
  2. The CS sends the Media Agent(s) a list of ALL of the individual Archive Files (MMDeletedAF table) from those aged jobs
  3. The Media Agent(s) gets this list, and connects to the Dedupe Database (which has 3 tables: Primary - The unique block by hash, Secondary - the number of job references per Primary record, and ZeroRef - The primary blocks no longer referenced/needed)
  4. The Media Agent applies the list of those Archive Files to the Secondary table on the DDB and decrements the reference count by 1 for each unique block referenced by the list (from the CS)
  5. Once complete, any Primary Record that has 0 references in the Secondary table is moved to the ZeroRef table
  6. The Media Agent(s) involved in pruning will physically remove/delete the files in the ZeroRef table and update the DDB accordingly (a rough sketch of this bookkeeping follows the list)
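
To make steps 4 to 6 a bit more concrete, here is a rough Python sketch of the reference-counting idea. It is only an illustration of the bookkeeping, not how the DDB is actually implemented, and the block names are made up to mirror the list above.

primary = {"blockA": "hash-A", "blockB": "hash-B"}   # unique blocks by hash
secondary = {"blockA": 3, "blockB": 1}               # references per primary record
zero_ref = set()                                     # blocks no longer referenced

def apply_deleted_archive_file(referenced_blocks):
    """Step 4: decrement one reference for every block the aged archive file used."""
    for block in referenced_blocks:
        secondary[block] -= 1
        if secondary[block] == 0:        # Step 5: zero references -> ZeroRef
            zero_ref.add(block)

apply_deleted_archive_file(["blockA", "blockB"])

# Step 6: the Media Agent physically deletes the ZeroRef blocks and updates the DDB
for block in zero_ref:
    primary.pop(block)
print("physically deleted:", zero_ref)   # -> {'blockB'}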

Now, that list is very high level and there are all sorts of transactions and TIME involved, but I think you'll start to see why the CS Data Aging job doesn't show space reclaimed. The truth is, it doesn't know.

The time between the logical Data Aging job finishing and the totality of the ZeroRef records being pruned can be hours…it can be a day in some cases. The DDB decrementing phase (step 4) is the first time and place you can even get an idea of what will prune; the move happens at step 5 and is confirmed at step 6. However, there are so many caveats to the whole process that trying to claim an actual amount freed at any point becomes tough.

In some reports we will display an amount to be pruned, though that is based on the App Size of those jobs and the average dedupe ratio. It's not accurate for a handful of jobs, but the bigger the sample size, the more correct it becomes.
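
As a rough illustration only (the exact report logic may differ), that estimate is essentially the application size of the aged jobs scaled down by the average dedupe ratio:

def estimated_pruned_gb(app_size_gb, avg_dedupe_ratio):
    # Rough estimate: backend space expected back = app size / average dedupe ratio
    return app_size_gb / avg_dedupe_ratio

# e.g. 10 TB of aged application data at an average 5:1 dedupe ratio
print(estimated_pruned_gb(10_000, 5))   # ~2000 GB, returned gradually over time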

 

 


With that said, you should see space returned overall over time, though you MAY need to run a compaction IF the mount path does not have sparse files enabled:

https://documentation.commvault.com/11.24/expert/127689_performing_space_reclamation_operation_on_deduplicated_data.html

Not that this is definitely your solution, but I wanted to share it for completeness on this thread.

You may need to open a support case to have someone go into the deduplication database and library files to see where the mismatch is coming from since the number of possible causes is rather high.

I would, though, first check the SIDBPrune.log and SIDBPhysicalDeletes.log files to see if there are any errors related to pruning (which would help us solve it here).
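
For example, a quick way to scan those two logs for error lines is sketched below; the log directory is assumed to be the usual default install location, so adjust it for your MA.

from pathlib import Path

# Assumed default log location - change this to match your installation
LOG_DIR = Path(r"C:\Program Files\Commvault\ContentStore\Log Files")

for log_name in ("SIDBPrune.log", "SIDBPhysicalDeletes.log"):
    log_file = LOG_DIR / log_name
    if not log_file.exists():
        continue
    for line in log_file.read_text(errors="ignore").splitlines():
        if "error" in line.lower() or "fail" in line.lower():
            print(f"{log_name}: {line}")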

Thanks!

 


Ledoesp
  • Vaulter
  • 217 replies
  • Answer
  • November 15, 2022

I think the short answer is time and retention. In your test environment you have only run a few jobs, and of course they are not aged until they meet the retention; that is why you do not see the same behaviour as in the production environment. Once some jobs are aged in the test environment, you will start getting the sparse files. I am also assuming that you are using Commvault deduplication in your test environment.


VBalazs
  • Author
  • Byte
  • 9 replies
  • November 15, 2022

Thanks guys, it has started. I can see some files set as sparse.



