That’s very interesting. Size looks the same, prunable records, etc.
My initial thought is that you have more unique records on the Aux Copy because we don’t dedupe against concurrent streams, meaning if you are sending multiple streams at the same time, we won’t dedupe those streams against each other (at first).
Now, once they get written, subsequent streams will dedupe against the already written items and it eventually evens out from a space-used perspective; however, you’ll still have an increased number of unique blocks (until they fully age off).
An increase like the one you are seeing is entirely possible.
It’s also possible that these DDBs are partitioned and the Aux partition was down for a prolonged period, creating new primary records. In time, things should even out, though like the above, that will all depend on retention.
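To picture that stream behaviour, here’s a toy sketch (hypothetical Python, not Commvault internals): each pass dedupes only against what was committed before it started, so identical blocks arriving on two concurrent streams both get written, and only later passes dedupe against them.

```python
import hashlib

def write_pass(streams, store):
    """Write one pass of concurrent streams to a shared destination.

    Toy model only: each stream checks the snapshot taken when the pass
    began, so identical blocks arriving on different streams in the same
    pass are both physically written.
    """
    snapshot = set(store)                    # signatures on disk at pass start
    for stream in streams:
        for block in stream:
            sig = hashlib.sha256(block).hexdigest()
            if sig not in snapshot:          # no cross-stream dedupe within a pass
                store.append(sig)            # a duplicate write is possible

store = []                                   # every signature physically written
same_block = b"A" * 128 * 1024
write_pass([[same_block], [same_block]], store)
print(len(store), len(set(store)))           # 2 writes, 1 distinct signature
write_pass([[same_block], [same_block]], store)
print(len(store), len(set(store)))           # still 2: later passes dedupe against it
```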
Hi @Kamil
This one’s piqued my interest, as a difference in Unique Blocks SHOULD come with a discrepancy in Data Written (as we have more unique blocks).
This is usually what we’d see where Dash Copies are run with many streams: as Mike mentioned, we may be sending two identical signatures on different streams, so we end up writing both at the destination, creating the discrepancy.
Yours are essentially identical in every way EXCEPT unique blocks.
If you have some logs, there is a log line within SIDBEngine.log which will show us the Unique Block count. I’d like to match this up, as this will eliminate any GUI mismatch issue.
Should look something like:
10280 3490 06/04 19:47:23 ### 3-0-4-0 LogCtrs         6002 0]     Total] Primary 3155589359]- 131676182690390]-30]--0]-10]-10],
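If it helps, here’s a rough sketch to pull that counter out of the log (the field layout is an assumption based on the sample line above, so treat it as a starting point rather than a parser for the exact format):

```python
import re

# Assumption: the Primary record count is the first number after the word
# "Primary" on a LogCtrs line, as in the sample above.
PRIMARY_RE = re.compile(r"LogCtrs.*Primary\D*(\d+)")

def primary_count(line: str):
    """Return the Primary record count from a LogCtrs line, or None."""
    match = PRIMARY_RE.search(line)
    return int(match.group(1)) if match else None

with open("SIDBEngine.log", encoding="utf-8", errors="replace") as log:
    counts = [c for c in (primary_count(raw_line) for raw_line in log) if c is not None]

if counts:
    # The last counter line should reflect the current Unique Block count.
    print("Latest Primary record count:", counts[-1])
```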
With regard to the space question, if you refer to the disk space for the drive hosting the Dedupe Database itself, then adding a partition to another disk will eventually balance out the two partitions; however, the larger partition will only start to shrink once job references age out. It doesn't balance out immediately, so if your retention is 30 days, the two partitions will look ‘similar’ (but not identical) after about 60 days.
DDB Compaction will help shrink the DDB Partition, though the largest impact is to compact the secondary records. This will take the longest but recover the most space, definitely worth the investment if you can afford the downtime.
If talking about the target storage where your data is being written, then adding a partition will increase the footprint by approx 100 TB (based on the ~200 TB from the screenshots) until the 60-day mark, when we can start to reclaim the references from the original partition.
Garbage Collection will help with reclaiming space from the target storage. It does not consolidate or compact anything, but it will improve pruning efficiency, which should help performance.
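To put rough numbers on that (back-of-the-envelope only; the ~200 TB baseline and 30-day retention come from this thread, while the even split across partitions is just a simplifying assumption):

```python
# Back-of-the-envelope sketch, not a Commvault formula.
baseline_tb = 200          # approx. size at rest from the screenshots
new_partition_share = 0.5  # assume new data lands roughly evenly across partitions
retention_days = 30        # retention used in this example

extra_tb = baseline_tb * new_partition_share
print(f"Temporary extra footprint: ~{extra_tb:.0f} TB")
print(f"Old references start aging out after ~{retention_days} days")
print(f"Partitions should look similar after ~{2 * retention_days} days")
```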
Hopefully this makes sense!
Cheers,
Jase
Thank you @Mike Struening and @jgeorges for your detailed answers.
So what can we do to make the number of blocks comparable / the same? Currently, the number of blocks varies significantly between DDBs.
Bielsko CVMA1> 2 536 242 896
Kety CVMA1> 1 763 738 506
The difference in efficiency of about 30% is a bit much for such a stabilized environment.
I still have a question about the structure of the DDB. What are "Secondary Blocks" for? We have almost five times more blocks of this type than "Unique Blocks".
@Mike, regarding what you wrote about DDB Partitions: I think the client has one partition in both cases.
Regards,
Kamil
I’ll answer the second question first.
The Secondary Records are the number of references to each of the Primary Records:
- Primary Records - Actual unique blocks
- Secondary Records - How many Job References exist per Primary Record
- Zero Ref - Primary Records with 0 Secondary Records (these entries get sent to be deleted)
It makes perfect sense to have more Secondary Refs (you have to).
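A minimal sketch of how those three record types hang together (a toy model, not the actual DDB schema):

```python
from collections import defaultdict

class ToyDDB:
    """Toy model of the three record types; not the real DDB schema."""

    def __init__(self):
        self.primary = {}                    # signature -> block metadata
        self.secondary = defaultdict(set)    # signature -> job IDs referencing it

    def add_reference(self, signature, job_id, size):
        self.primary.setdefault(signature, {"size": size})
        self.secondary[signature].add(job_id)      # one secondary record per job reference

    def age_off_job(self, job_id):
        for sig in self.secondary:
            self.secondary[sig].discard(job_id)

    def zero_refs(self):
        """Primaries with no remaining secondary records (eligible for pruning)."""
        return [sig for sig in self.primary if not self.secondary[sig]]

ddb = ToyDDB()
ddb.add_reference("sig-a", job_id=1, size=128 * 1024)
ddb.add_reference("sig-a", job_id=2, size=128 * 1024)     # same block, second job
print(len(ddb.primary), sum(len(refs) for refs in ddb.secondary.values()))  # 1 primary, 2 secondary
ddb.age_off_job(1)
ddb.age_off_job(2)
print(ddb.zero_refs())                                    # ['sig-a'] -> sent to be deleted
```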
Now, regarding the Unique/Primary discrepancy: in time, they should even out, assuming it’s the combined-streams issue. The more records that get written, the more likely they are to be referenced, though there will always be a delta.
If you want to be 100% sure, I would suggest opening a support case and having someone deep dive into the records. If you do, share the case number here so I can track it!
@Kamil I had a thought last night but couldn't drag myself out of bed to respond here.
Can you share a screenshot of the block size set for each DDB?
https://documentation.commvault.com/11.24/expert/12471_modifying_properties_of_global_deduplication_policy.html
As Mike mentioned, the more unique blocks, the higher the Primary Record count. And with a lower block size (64 KB vs 128 KB), we’ll see many more unique blocks.
Mike’s explanation with regard to stream count would usually come with a duplicate unique chunk written, and often we see a discrepancy in size at rest (this is what actually affects deduplication savings: Physical Size vs Application Size), but your savings are very nearly identical.
So block size may explain the difference between unique counts.
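As a quick sanity check of the block-size point (rough arithmetic only, ignoring compression and assuming every block is unique):

```python
# Rough arithmetic: block size alone roughly halves or doubles the block count
# for the same front-end data (ignores compression and duplicate blocks).
front_end_bytes = 200 * 10**12        # ~200 TB, in line with the screenshots

for block_kb in (128, 64):
    blocks = front_end_bytes // (block_kb * 1024)
    print(f"{block_kb} KB blocks -> ~{blocks:,} blocks for the same data")
```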
Cheers,
Jase
Thank you for further information on this matter.
Below I am sending the screenshots you asked for; both are configured with a 128 KB block size. (screenblock1-2.png)
Thanks,
Kamil
That’s interesting for sure. At this point, I’d raise a support case, unless @jgeorges has any more input.
@Kamil @Mike Struening
That’s me exhausted of all ideas.
@Kamil, if you IM me your CCID, I can look to get a support case raised and have someone reach out to assist.
Cheers,
Jase
Hi @Kamil ! Can you confirm this incident was created for this thread?
211014-168
Thanks!
Hello @Mike Struening
No, the incident number you provided is for a different problem.
I have your recommendation to create an escalated case with CV support in mind; I am waiting for the client's answer on exactly which questions I should ask to clarify the analysis of the problem.
When I get the information and create the case with CV support, I will give you the incident number.
Thanks,
Kamil
Ok, I’ll await your update!
Hi @Kamil, hope all is well! Any word from the customer?
Thanks!
Hi @Mike Struening ,
Forgive me for not updating; I haven't done it yet. As soon as I have a free moment, I will deal with the escalation of this thread and let you know.
Regards,
Kamil
Thanks for the update. No need to apologize at all!
Hi @Kamil ! Following up to see if you had a chance to work on this issue.
Thanks!
Hi @Kamil , gentle follow up on this one.
Let me know if you were able to find a solution!
Hi @Mike Struening
I am angry with myself for neglecting this for so long, but I had many more important matters that pushed the topic to the sidelines.
I have created a support case: Incident 211207-324.
I will let you know when I find out something. Thanks for your understanding ;)
Regards.
Kamil
Don’t be mad, we’re all super busy these days!
I’ll track the case on my end.
Hi @Mike Struening
I got an answer that cleared my doubts.
I think this news will teach us all an interesting fact about Commvault deduplication.
Findings:
The reason for this is that on the primary copy, the signature is generated first for the 128 KB data block and the block is then compressed; on the secondary, the data is compressed first and then the signature is generated. So multiple unique data blocks from the primary copy end up as a single unique data block on the secondary, causing a lower number of signatures on the secondary. If you look at the size of the unique blocks on the primary, it is almost the same as what we see on the secondary.
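As a toy way to picture this (illustrative Python only, using zlib and SHA-256; the assumption that the secondary signs chunks of the compressed stream is a simplification, not the exact Commvault pipeline):

```python
import hashlib
import os
import zlib

BLOCK = 128 * 1024

def chunks(data, size=BLOCK):
    return [data[i:i + size] for i in range(0, len(data), size)]

def signatures(blocks):
    return {hashlib.sha256(b).hexdigest() for b in blocks}

# Database-style data: every block unique, but highly compressible (padding).
raw = b"".join(os.urandom(1024) + b"\x00" * 7168 for _ in range(4096))   # ~32 MB

# "Primary" path in this toy: sign each raw 128 KB block, compress afterwards.
primary_sigs = signatures(chunks(raw))

# "Secondary" path in this toy: compress first, then sign the compressed stream.
compressed = zlib.compress(raw)
secondary_sigs = signatures(chunks(compressed))

print("Primary unique signatures:  ", len(primary_sigs))    # ~256
print("Secondary unique signatures:", len(secondary_sigs))  # far fewer
print("Compressed size (MiB):      ", round(len(compressed) / 2**20, 1))
```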
I tried to find an environment with a similar situation, but unfortunately I don’t have access to one at the moment. Maybe you, Mike, or someone else from the Commvault Community does and can verify if it really is so? :)
Thanks for the help,
Regards
Kamil
Hi @Kamil
I had a look at the incident and will clarify for you to ensure there’s no confusion.
Firstly, since the early Version 10 days, all backups have been performed in this order: ‘Compression > Deduplication > Encryption’.
However, in more recent years we found that with database backups, as compression can cause very high rates of change in the dataset, we get better performance performing deduplication first (Deduplication > Compression > Encryption).
When we perform a Dash Copy, we assume again that we are doing Compression > Deduplication > Encryption, and so when reading from the Primary (regardless of IDA), we remove encryption and then perform signature generation.
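To make the ordering point concrete, a minimal sketch (illustrative Python, with zlib and SHA-256 standing in for Commvault's actual compression and signature scheme):

```python
import hashlib
import zlib

def signature(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

block = b"example database page " * 6000    # one stand-in source block

# Compression > Deduplication > Encryption:
# the dedupe signature is computed over the *compressed* block.
sig_compress_first = signature(zlib.compress(block))

# Deduplication > Compression > Encryption (database agents):
# the signature is computed over the *raw* block, before compression.
sig_dedupe_first = signature(block)

# Encryption comes after the signature in both orders, so it is omitted here.
# Same source block, different signatures depending on pipeline order,
# so copies using different orders cannot share dedupe records.
print(sig_compress_first == sig_dedupe_first)   # False
```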
With that, for this to explain your findings, you’d need to check that you’re doing a good amount of database backups. If so, then this certainly explains it, as the other key areas match up (size on disk and number of jobs).
If you still want to try and remove those discrepancies, you can look to make the two copies perform compression/deduplication in the same order:
https://kb.commvault.com/article/55258
Note, before you make these changes, you should understand that there will be NEW Signatures being generated resulting in more data being written down.
You need to ensure the destination copy has the space to allow this new data to be written down and for the DDB to grow.
New data being written will balance off and be reclaimed after the existing jobs meet retention.
In this case, was the primary or the secondary showing more unique blocks? I presume this is one-way DASH, not two active sites doing cross-DASH?
Just trying to make sure I know which side is likely to be bigger, as we have some customers with large SQL DBs using TDE. We turn off CV encryption as the data is already encrypted, so I guess it's just the difference between compress-then-dedupe and dedupe-then-compress.
Out of curiosity, is there a best-practice KB for how to handle SQL with TDE/compression, etc.?
Normally, if either is bigger, it’s the Aux Copy. This is because as we send simultaneous streams, we are not deduplicating those against each other; only the next set of streams is deduplicated against what has already been written.
Let me know if that clarifies, @Karl Langston !