Solved

Retention planning questions for 19 TB of VMs on a 35 TB storage array


Zitouni Seifeddine

Hello

 

I have a 35 TB storage array and I want to back up 19 TB of virtual machines.
I want to know what retention I can set to keep them.

 

Thanks

Best answer by Mike Struening (see his reply below)


14 replies

Damian Andre
Vaulter · 1207 replies · January 25, 2021

Hi @Zitouni Seifeddine and welcome to the community!

 

I just wanted to clarify your question: do you mean that you have 35 TB of free space on your backup storage, and you want to back up 19 TB of virtual machines and understand how much retention you can set before the backup storage is full?

 


Zitouni Seifeddine

Yes, exactly. I want to back up the virtual machines and set the retention before the backup storage is full.

And I want to know whether CM saves the new data first and then erases the old data, or erases the old data first and then saves the new.


Mike Struening
Vaulter

Good question.  What’s the rate of data change for these machines?  Assuming you are using Deduplication, you will need to factor in the baseline (which is 19 TB), then around 10% of that for file change (though that depends on the data change rate on the source VMs).
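
To make that arithmetic concrete, here is a rough sketch in plain Python (not a Commvault tool or formula). The 19 TB baseline and ~10% change rate come from the discussion above; the deduplication reduction on changed data is an assumed figure purely for illustration, since real savings depend on your data.

```python
# Rough back-of-the-envelope estimate of backend storage for a retention window.
# All figures are assumptions for illustration; use the DRFC report and the
# actual library usage to make the real decision.

def estimated_backend_tb(retention_days: int,
                         baseline_tb: float = 19.0,        # first round of Fulls
                         daily_change_rate: float = 0.10,  # ~10% file change per day
                         dedup_reduction: float = 0.5) -> float:  # assumed dedup savings on changes
    # Baseline full plus the deduplicated daily changes kept for the window.
    stored_change_per_day = baseline_tb * daily_change_rate * (1 - dedup_reduction)
    return baseline_tb + stored_change_per_day * retention_days

for days in (15, 30, 45):
    print(f"{days} days of retention -> roughly {estimated_backend_tb(days):.1f} TB on disk")
```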

What I generally suggest is setting a higher retention than you think you need, then see how storage usage trends.  Deduplication can save you a ton of space which is great, but the REAL benefit is giving you more points of data recovery.  Since any deduplicated file is just added to the reference count for that block, you can extend your points of recovery out much further than expected.

As you start to see the space filling up, run a Data Retention Forecast and Compliance Report and see what jobs are pruning next, and when.  You can always lower retention, then rerun the report to see what will age off based on those changes.  If you like the changes, then leave the setting changed.  If not, set it back.  Just be sure to disable any Data Aging schedules so the operation doesn’t run in between setting adjustments.

For the second question, I’m not sure exactly what you are asking by “whether CM saves the new data first and then erases the old, or erases the old first and then saves the new”.  If you can clarify that, I’ll get you an answer.

Thanks!


Zitouni Seifeddine

The rate of data change differs from one machine to another, but can I set one month of retention and 0 cycles so that the disk does not fill up?


Zitouni Seifeddine

Deduplication is active.

 


Mike Struening
Vaulter

I pretty much always caution against using 0 cycles because you can get caught with no data.  I’ll explain below:

The way retention is calculated (Basic retention, ignoring other factors) is that you count the number of DAYS you set, starting at the last Incremental of a given Full.  Once that many days have passed, you check the cycle count.  If the number of newer Fulls that have run (and Completed without errors, generally) is equal to or greater than your cycle count, the job will prune (leaving out other factors).

Now, with the above in mind, imagine you have 7 days and 0 cycles (as an example).  You run a Full on a VM, but there’s a problem with the actual server, or backups keep failing, etc., and you notice 8 days later.  You’ll also notice your backup is gone (and you may have no backups at all).  Why?  Because 7 days passed, and our requirement for newer Fulls is zero… so your only backup pruned.

Think of Cycles as a catch all, or a safety net.  No matter how much time has passed, you will always have X number of Fulls on that copy, where X equals your cycle count.
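
As a hedged illustration of that rule (plain Python, not Commvault’s actual pruning logic): a job only ages off once both its retention days have elapsed and at least the configured number of newer completed Fulls exist, so with 0 cycles the second condition is always satisfied.

```python
# Basic days + cycles check, ignoring extended retention and other factors.
from datetime import date

def is_prunable(last_incremental: date, newer_completed_fulls: int,
                retention_days: int, cycles: int, today: date) -> bool:
    days_elapsed = (today - last_incremental).days
    # Both conditions must hold: the days have passed AND enough newer Fulls exist.
    return days_elapsed > retention_days and newer_completed_fulls >= cycles

today = date(2021, 2, 15)
# 7 days / 0 cycles: the only Full prunes even though no newer Full ever ran.
print(is_prunable(date(2021, 2, 1), newer_completed_fulls=0,
                  retention_days=7, cycles=0, today=today))   # True
# 7 days / 1 cycle: the Full is kept until at least one newer Full completes.
print(is_prunable(date(2021, 2, 1), newer_completed_fulls=0,
                  retention_days=7, cycles=1, today=today))   # False
```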

Go for 30 days and 1 cycle if you want (or 30 days and 4 cycles assuming weekly fulls).  As long as your backups run regularly, the cycle count won’t matter, but IF there are problems, you’ll be safe.

As you approach the 30-36 day mark, start running daily Data Retention Forecast and Compliance reports to see what is pruning and when, and look at your free space.  You may find you can increase the retention another week or so.

Remember, you can ALWAYS lower retention.  Raising it doesn’t do much good if the jobs are aged off.


Forum|alt.badge.img+2

According to your explanation and the space on my storage array (36 TB), the ideal is 15 days and 1 cycle, knowing that I run the full backup of the VMs on the weekend; it is 19 TB and there are 25 TB left.

If you had this case, how would you configure it?

 


Mike Struening
Vaulter

Don’t forget that Dedupe will cut down on the subsequent Full size by a LARGE amount.  The first round of Fulls will be 19 TB, but the second Fulls might take up a minor fraction of that amount.  You very well may have enough room for more retention.

If you want, start with 15 days and 1 cycle.  As you get towards the 15-21 day mark, see how much space you have and check what the DRFC report shows as far as job prune dates.

If you have plenty of space, increase another 7 days and repeat.  You can always lower the retention if things look tight.
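
A small sketch of that review loop, purely for illustration: the free-space readings and the headroom threshold below are made-up numbers, not Commvault output; in practice you would look at the library free space and the DRFC report at each review.

```python
# Grow retention by a week at each review while there is comfortable headroom.
def next_retention(current_days: int, free_tb: float,
                   headroom_tb: float = 5.0) -> int:
    # Extend only if there is still room to spare; otherwise hold
    # (you can always lower retention later if things look tight).
    return current_days + 7 if free_tb > headroom_tb else current_days

retention = 15
for observed_free_tb in (12.0, 9.0, 4.0):   # hypothetical readings at each review
    retention = next_retention(retention, observed_free_tb)
    print(retention)   # 22, then 29, then stays at 29
```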


Zitouni Seifeddine

OK, thank you for your help.


Mike Struening
Vaulter

My pleasure!  Feel free to mark whatever response was most helpful as ‘Best Answer’.  I would also like to add a bit of context to your subject/title to help others find this useful thread more easily :-)


Zitouni Seifeddine

OK, it is noted, and with pleasure :)


Mike Struening
Vaulter

Likewise!!


Patrick McGrath
Vaulter

You could also try profiling the data with the Activate File Storage Optimization (FSO) product to see what kind of data exists across those locations, along with its aging.  We have a number of customers who are shaping their retention / ILM policies using a combination of FSO and Archiving, from small TB locations up past 30 PB.

Patrick


Zitouni Seifeddine

 

Hello

I want to back up a DMZ cluster with two nodes:

  • Hyper-V Cluster - HyperClusterDMZ - two-node cluster
    • DMZHV11
    • DHZHV12

but node DMZHV11 returns the error message "Virtual machine was not found". After a diagnosis with the support team, we found it is a problem with the VSS writer service (visible via vssadmin list writers).

 

Is there a solution?

 

