Question

Front-end data report

  • 2 November 2022
  • 5 replies
  • 239 views

NG_Pawel

Hello,


I need your help ;-)

I was asked to prepare a tabular report showing the front-end data stored in our local library (NetApp).

I already explained the difference between logical and physical usage, but my management still wants to see a list of everything that sums up to 297 TB.

I tried the Chargeback report, but it's showing me data from June of last year.

Does anyone know how I can achieve this?


5 replies

Mike Struening

@NG_Pawel , thanks for the post, and welcome!

Assuming this data is deduplicated, you won't be able to get full details of exactly what's in there (as you already mentioned), but you can run a Jobs in Storage Policy report to see the data written.

https://documentation.commvault.com/2022e/expert/40300_storage_policy_report_overview.html

If this isn't what you need, let me know what they're looking to get out of your report and I'll point you in the right direction.

Thanks!

NG_Pawel

Hello @Mike Struening,

Thank you for the reply.

In short:

I need to present a table of the items on that aggregate and their sizes. The sum of the size column must come to 297 TB (as in the screenshot).


Mike Struening

@NG_Pawel, the Jobs in Storage Policy Copy report would be your best bet. You might need to export it to Excel and add it up there, but it should give you what you need for Data Written.

However, if the base jobs have aged off, you may not get a full picture.  Run the report and see what you end up with.

You should also be able to review the various Deduplication Engines writing to this library and see the data written there.
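
If you do export it, a quick script can do the adding for you. Here's a minimal sketch, assuming you save the report as CSV and the exported column is literally named "Data Written (GB)"; both the file name and the column header are assumptions, so adjust them to match your actual export:

```python
import csv

# Minimal sketch: total the "Data Written" column from an exported
# Jobs in Storage Policy report. The file name and column header are
# assumptions; adjust them to match your actual export.
CSV_PATH = "jobs_in_storage_policy.csv"
COLUMN = "Data Written (GB)"

total_gb = 0.0
with open(CSV_PATH, newline="") as f:
    for row in csv.DictReader(f):
        value = row.get(COLUMN, "").replace(",", "").strip()
        if value:
            total_gb += float(value)

print(f"Total data written: {total_gb:,.2f} GB ({total_gb / 1024:,.2f} TB)")
```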

NG_Pawel

Hello @Mike Struening,

The library in question is the target for aux copy jobs from many policies, and it also hosts some CIFS shares of its own.

I don't think I can filter the Storage Policy Copy report in enough detail to focus on only one copy job per policy.

Mike Struening

Hmmm, so they're looking for the space used per job, and expecting it to add up to the disk space used? I assumed they just wanted to see which 'collection' wrote what size of files to the disk.

You pretty much nailed it in the original post. If you're doing deduplication (and I assume you are), you'll almost never get that to add up. Job 1 writes the whole baseline, and jobs 2+ only write the changed blocks.

On top of that, once the base jobs age off, the reports will leave those sizes out, so the totals won't match.

I know you know this already, but hopefully this explanation will satisfy them. Otherwise, just check the store sizes of all copies, then add in the Aux Copies (if not deduplicated).
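
To make the gap concrete, here's a rough sketch with purely made-up numbers (nothing from your actual environment):

```python
# Purely illustrative numbers, not from the actual environment.
# Each job protects a 10 TB data set; job 1 writes the whole baseline,
# while jobs 2+ only write the ~2% of blocks that changed.
front_end_tb_per_job = 10.0    # application (front-end) size of each job
baseline_written_tb = 10.0     # job 1: full baseline written to disk
incremental_written_tb = 0.2   # jobs 2+: changed blocks only
num_jobs = 5

front_end_total = front_end_tb_per_job * num_jobs
data_written_total = baseline_written_tb + incremental_written_tb * (num_jobs - 1)
after_baseline_ages_off = data_written_total - baseline_written_tb

print(f"Front-end (logical) total:   {front_end_total} TB")         # 50.0 TB
print(f"Data written with dedup:     {data_written_total} TB")       # 10.8 TB
print(f"After the baseline ages off: {after_baseline_ages_off} TB")  # 0.8 TB
```

So the sum of any per-job report will sit well below a front-end figure like 297 TB, and it drops further as base jobs age off.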
