Hi @Stefano Castelli , and welcome to the community!
My expectation here is that if the migration created new unique blocks, our dedupe ratio would be poor. Subsequent Fulls should be better, though much of that depends on how the data is stored, changed, etc.
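Just to put rough numbers on the kind of swing I mean (values are made up, and I'm assuming savings is reported as one minus size-on-disk over application size):

```python
# Illustrative only: how the dedupe savings figure collapses when a
# migration turns previously-deduped data into new unique blocks.
# All values here are made up.
app_size_tb = 100.0  # hypothetical front-end (application) size
written_tb = {"before migration": 8.0, "first Full after": 80.0}

for label, written in written_tb.items():
    savings = 1 - written / app_size_tb
    print(f"{label}: dedupe savings = {savings:.0%}")
# before migration: dedupe savings = 92%
# first Full after: dedupe savings = 20%
```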
How many Fulls have run post-migration? Do you have ratios for each of those?
Thanks!
Hello Mike and thanks a lot for the answer and for checking the thread.
Yep, that's what I was expecting, yet the ratio on the following Full backups is still quite poor, even though it is slowly improving.
What “scares” me is that the reports show about 20 TB of data written to disk for the Oracle RAC backups.
And yet, according to the storage report, those 20 TB account for 89% of the disk library.
Now, if the disk library is about 38 TB, the math is weird here.
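Spelled out, taking the storage report at face value (i.e. 89% of total capacity):

```python
# The numbers above, spelled out (all values as reported, rounded).
library_tb = 38.0      # total disk library capacity
written_tb = 20.0      # data written by the Oracle RAC backups
reported_share = 0.89  # the storage report's 89%

used_tb = library_tb * reported_share
print(f"89% of {library_tb:.0f} TB = {used_tb:.1f} TB used")      # ~33.8 TB
print(f"gap vs. reported writes: {used_tb - written_tb:.1f} TB")  # ~13.8 TB
```

So there's roughly 13-14 TB of occupied space that the job reports don't explain.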
Is there a way I can check whether “orphaned” data is clogging the library?
I ran the Retention Forecast Report and it is “clear” of unprunable jobs.
Any idea?
Thanks in advance.
Regards
Hmmm… you COULD have some stale blocks, though this is quite a rabbit hole.
- Do you have more than 1 DDB store on the library? Could have a corrupt store as well.
- Does the library support sparse files/drilling of holes? That could be an issue if we can’t free up space within the chunks (there’s a quick way to test this after the list).
- What is the actual library itself? Assuming not Cloud because you said disk library, but important to cover.
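On the second bullet: if the Media Agent is Linux, here’s a quick scratch test you can run on the library volume. It’s only a sketch (the file name is made up, and it assumes a local POSIX filesystem): it writes a file, punches a hole in it, and checks whether the allocated space actually drops.

```python
import ctypes
import ctypes.util
import os

# Sketch: test whether the library's filesystem supports punching holes
# (FALLOC_FL_PUNCH_HOLE), which is what freeing space inside existing
# chunk files relies on. Linux only; run it on the library volume.
FALLOC_FL_KEEP_SIZE = 0x01
FALLOC_FL_PUNCH_HOLE = 0x02

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
path = "hole_test.bin"  # hypothetical scratch file on the library mount

with open(path, "wb") as f:
    f.write(b"x" * (16 * 1024 * 1024))  # 16 MiB of real data
    f.flush()
    ret = libc.fallocate(f.fileno(),
                         FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                         ctypes.c_long(0),                # offset
                         ctypes.c_long(8 * 1024 * 1024))  # punch first 8 MiB
    if ret != 0:
        print("punch hole not supported:", os.strerror(ctypes.get_errno()))

st = os.stat(path)
print(f"logical size:   {st.st_size / 2**20:.0f} MiB")
print(f"allocated size: {st.st_blocks * 512 / 2**20:.0f} MiB")  # ~8 MiB if holes work
os.remove(path)
```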
Now, assuming you don’t have sparse file support, you can/should run a space reclamation job (which I believe is what you were asking about):
https://documentation.commvault.com/11.25/expert/127689_performing_space_reclamation_operation_on_deduplicated_data.html
This could be the answer to that woe, though the dedupe ratio on the Oracle backups is another matter. If that data is moving around somehow, and the blocks change? That would do it. For a really detailed investigation, I’d get a support case created (share the incident number with me so I can follow up). There are so many factors to consider.
Hello again and thanks a lot for the reply.
The library in use is a local array of disks in a physical Media Agent.
It contains just a single DDB. DDB Verification jobs run regularly without reporting issues.
I’ll check about the sparse files configuration, thanks a lot.
Actually, I already ran the space reclamation job, but it did not - ehm - reclaim that much compared to the total (about 550 GB using the highest setting).
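Put against the gap I computed earlier, that’s not much:

```python
# 0.55 TB reclaimed vs. the ~13.8 TB unexplained gap estimated above.
print(f"reclaimed: {0.55 / 13.8:.0%} of the gap")  # ~4%
```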
As you say, a support case would be the best thing now; I’ll ask the customer to open one while I keep investigating.
Thanks a lot
Yeah, definitely the best action now.
Once you have an incident created, let me know the case number so I can follow up and monitor.
Thanks, @Stefano Castelli !!
@Stefano Castelli , following up, did you ever get this addressed (or an incident created to track it down)?
Thanks!
@Stefano Castelli , following up before marking this solved. Were you able to get this answered/fixed on your end?
Hello,
We have the same situation here.
With Oracle cloud storage, the dedupe ratio is only 20% (instead of more than 90% before the migration to the cloud).
Did you find any solution?
Many thanks for sharing :)
@Frederico , what kind of data are you sending to the library? Also, Primary copy or secondary?
We can dig into it though we’ll need to get some information first.
Hello @MikeRoSoft, thanks for your answer!
We are sending primary copies to an OCI (Oracle Cloud Infrastructure) cloud library.
All the data is Oracle DB backups and archive logs.
Don’t hesitate if you need more information.
Thanks