
Question on Scheduling Groups


bc1410

So another question.

Regarding scheduling backups: our environment is mostly Windows servers, with a few Linux servers and one MS SQL server. Is the following an accurate assumption:

If I create a new client group, for example “Windows Server Agents”, and associate all of our Windows servers to this group, this approach seems like a much cleaner way to create a backup policy than creating a separate backup schedule for each Windows server individually.

What is recommended?

Also, if I create this client group backup schedule for the Windows servers, I notice it doesn’t let me choose which drives I want to back up, or the System State option. I only see these options when I set up a backup schedule for an individual client, where there is a “Content” tab that lets me select all local drives, browse, and so on, plus a checkbox for “System State”. Does the client group backup schedule automatically back up all local drives, i.e. the C:\ drive, and a D:\ drive too if there happens to be one? And will it back up the System State?

As always, thank you very much!!

 

BC

24 replies

Mike Struening
Vaulter

@bc1410 , I’ll answer this from a higher level first, then dive deeper as I think this will frame things better.

Client Computer Groups themselves are only logical buckets.  

This allows you to do mass associations, schedules, etc.

When you go to schedule them, you can’t pick the content because they can potentially have various different subclient configurations.

You CAN set up subclient policies so all clients have the same subclient definitions (if that makes sense for your environment).

It does make sense to create groups based on iDA (agent type) for long-term scalability. As an example, if you want to push out updates for the Windows File System agent, it’s much easier to update the whole group at once than to pick each server out of a big list.

In that regard, setting up Smart Client Groups will likely help you in this venture.
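To make the “logical bucket” idea concrete, here is a minimal Python sketch of rule-based group membership, the mechanism behind Smart Client Groups (illustrative only, not Commvault’s actual API; the client records and the rule are hypothetical):

```python
# Hypothetical sketch of rule-based client grouping (not Commvault's API).
from dataclasses import dataclass

@dataclass
class Client:
    name: str
    os: str        # e.g. "Windows", "Linux"
    agents: tuple  # installed iDAs, e.g. ("File System", "SQL Server")

clients = [
    Client("web01", "Windows", ("File System",)),
    Client("db01",  "Windows", ("File System", "SQL Server")),
    Client("app01", "Linux",   ("File System",)),
]

# Rule: every Windows client with the File System iDA belongs to the group.
# A new Windows server matches the rule automatically; no manual association.
windows_fs_group = [c for c in clients
                    if c.os == "Windows" and "File System" in c.agents]

print([c.name for c in windows_fs_group])  # ['web01', 'db01']
```

Associations made against the group (schedules, updates, and so on) then apply to whatever the rule currently matches.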


bc1410
  • Author
  • March 16, 2021

Thanks Mike for your reply.

I might as well hit all the areas for questions. So, with regard to retention:

It seems like you can control it via the subclient schedule if I create the backup schedules this way, although there I only see it as a number of days to keep the data; or it can be controlled via the storage policy, which offers both Days and Cycles.

 

What if I want, for example, my Windows file system FULL backups to be retained for 6 months, and the same Windows file system DIFF backups to be retained for only 4 months, with both (full and DIFF) using the same storage policy? How are the two different retentions set on the storage policy? Or do I need to create separate storage policies (which doesn’t make sense)?

Maybe I’m reading too much into Commvault, but it’s pretty complex… I have used Veritas NetBackup and it’s nothing like this.

 

Thanks

BC

 


bc1410
  • Author
  • March 16, 2021

Or do I just create multiple subclients for the client, one for FULL and another for DIFF/INC, and adjust the retention on the subclient schedule?

 

Is it wrong to just have one storage policy (global dedup) pointing to our AWS S3 bucket?

 


Mike Struening
Vaulter

All good!  I’m happy to have you with us, and happy to help!!

You can set specific retention per job, but that’s not really scalable.  You should utilize the expanded option set within each Storage Policy Copy.

For your specific needs, it’s extremely easy to do:

  1. Set your Basic retention to 120 days (or more, your call) to cover the 4-month minimum retention.  The Cycle count dictates how many NEWER fulls must exist before an eligible full can age off, and is highly dependent on your full schedule.  Assuming weekly fulls, 15 cycles should be fine.
  2. For the Extended Retention, set ALL Fulls to 180 days.

This will result in all Fulls being held for 6 months, and any non-full for 4 months at a minimum.
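To make the days-and-cycles interaction concrete, here is a minimal Python sketch of the aging rule described above, assuming weekly fulls (the dates are invented, and real Commvault data aging has more conditions; this is illustrative only):

```python
# Simplified sketch of basic (days + cycles) retention plus extended
# retention for fulls. Not Commvault's actual data-aging implementation.
from datetime import date, timedelta

BASIC_DAYS, BASIC_CYCLES = 120, 15  # basic retention, applies to every job
EXTENDED_FULL_DAYS = 180            # extended retention: keep ALL fulls

today = date(2021, 9, 1)
fulls = [date(2021, 1, 3) + timedelta(weeks=i) for i in range(30)]  # weekly fulls

def can_age(run_date: date, is_full: bool) -> bool:
    age = (today - run_date).days
    if is_full:
        # A full needs the basic days, enough NEWER fulls (cycles), and
        # the extended retention window to have passed.
        newer = sum(1 for f in fulls if f > run_date)
        return age > BASIC_DAYS and newer >= BASIC_CYCLES and age > EXTENDED_FULL_DAYS
    # Diffs only need the basic days retention (cycle handling simplified).
    return age > BASIC_DAYS

print(can_age(date(2021, 1, 3), is_full=True))   # True: 241 days old, 29 newer fulls
print(can_age(date(2021, 4, 1), is_full=False))  # True: diff is 153 days old
print(can_age(date(2021, 6, 6), is_full=True))   # False: full is only 87 days old
```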

There are a few threads out there covering Data Aging, including the one below, which might prove valuable to you.

Let me know if you have any other questions!!

 


bc1410
  • Author
  • March 16, 2021

Understood about it not being scalable.

“You should utilize the expanded option set within each Storage Policy Copy.”  Sorry Mike, I’m a little confused.

So basically, are you saying I need to create another “Copy” (right-click on our storage policy) from the current global dedup storage policy to serve as our Incremental/DIFF copy? If that’s the case (or am I dead wrong?), then how do I associate a storage policy copy with a schedule (such as a FULL or DIFF)?

Mike Struening
Vaulter

A Storage Policy is not connected to a Schedule.  Subclients are associated to a Storage Policy (as well as a schedule):

  • The Schedule dictates when jobs run and what type (Full, Diff, Inc, etc.)
  • The Storage Policy dictates where the jobs are stored and how long they are kept

There are some more options that augment the above, but that’s essentially it.  The Storage Policy doesn’t know (or ‘care’) when the backups run.  They certainly affect each other, since retention for the Storage Policy relies on when each job runs, but you don’t need to create another copy.  You can utilize the Extended Retention settings and get both retentions (Full and Diff) in one copy.
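As a mental model of that separation, here is a tiny Python sketch (the class and field names are made up, not Commvault’s object model): the subclient is the join point, the schedule holds the when/what type, and the storage policy holds the where/how long.

```python
# Illustrative data model only; not Commvault's actual objects.
from dataclasses import dataclass

@dataclass
class Schedule:
    backup_type: str     # "Full", "Differential", "Incremental", ...
    when: str            # e.g. "Fri 21:00 weekly"

@dataclass
class StoragePolicy:
    library: str         # WHERE jobs are stored
    retention_days: int  # HOW LONG they are kept

@dataclass
class Subclient:
    name: str
    schedule: Schedule             # associated schedule
    storage_policy: StoragePolicy  # associated storage policy

sc = Subclient("web01/default",
               Schedule("Full", "Fri 21:00 weekly"),
               StoragePolicy("S3-CloudLibrary", 180))
# Schedule and StoragePolicy never reference each other; only the subclient
# ties them together, which is why the SP doesn't 'care' when backups run.
```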


bc1410
  • Author
  • March 16, 2021

Sorry Mike, I do understand that the storage policy is not connected to a schedule and that subclients are associated with a storage policy. I get that now.

 

So we have only one Global Dedup storage policy set up (that’s the only storage policy I have set up, besides of course the CommServeDR storage policy), and the Extended Retention rules pop up a warning message: selecting extended rules on a deduplicated copy may cause unnecessary DDB size growth, and it is recommended to use a selective copy for extended retention requirements.

 

But my whole thing is that even if we use the extended rules, it states they are for Full backups. So I’m not sure how retention or the extended rules come into play for a DIFF-type job.

 

 


Mike Struening
Vaulter

Appreciate the additional detail!

Generally, you do want to avoid extended retention for any Deduplicated copy, though in your case, you only have the one Storage Policy. 

You could go one of three ways:

  1. Just set the extended retention.  Diff jobs will get the Basic Retention and all Fulls will get the Extended Retention.
  2. Slight variation: just set 180 days as your Basic Retention.  Depending on the dedupe ratio, this might work out fine and not take up that much more space compared to option 1.
  3. Set up an Incremental Storage Policy.  This is just another Storage Policy that you associate with the main SP.  It complicates the setup, but results in all the Diffs going to a different Storage Policy altogether; you can then set a basic retention of 180 days on the main SP and 120 on the incremental SP.

If everything is going to S3, and you don’t have other subclients requiring other retention settings/libraries, then one SP is fine assuming you are using v5 Deduplication (which splits out agent dedupe for you).  Otherwise, each agent should be split out for best dedupe ratios.

Are you planning to set up any Aux Copies (say, to tape)?  Unless everything stays in the cloud, moving from one library to another through a cloud Media Agent, you’ll want to consider egress costs.

I know I just dropped a lot of options above, so the first thing might be to ask, “What are you looking to accomplish with your configuration?”  If you need AT LEAST x days of data, or have business requirements for X and Y retention, then it becomes easier to see which options give you what you require without overly complex setups or storage costs.


bc1410
  • Author
  • March 16, 2021

Yeah, you dropped a lot on me.. lol. But I love it; I’m all about trying to learn. I’m basically on my own in my organization, as nobody else really has any Commvault experience.

I just know the higher-ups wanted to make sure I’m taking advantage of Commvault (global) deduplication and that we have everything going to S3, so only one cloud storage library for right now. They want to retain FULLs for 6 months and DIFFs for 4 months. We have 2 SQL servers with an AG, a handful of Windows servers, and a handful of Linux servers (I think you remember this from the other thread).

 

I’d rather not take a chance on unnecessary DDB size growth by using the extended rules.

I guess I’m ready to complicate things and go with the incremental SP. Since my main storage policy (the only one I have right now besides the CommServeDR one) is a Global Deduplication SP, do I just right-click on Storage Policies, create a new Data Protection & Archiving SP, give it a name like “Incremental SP”, select the only storage pool I have (the Deduplication Pool) when asked, and click Finish? Then I just go back to the main Global Dedup policy and assign this one under the Incremental SP option? Does this create a whole new Global Dedup SP, though? Just FYI, when I right-click on Storage Policies, why don’t I see the option to create a new Global Dedup?

Mike Struening
Vaulter

Awesome, you are in the right place to learn!  We also have all sorts of training and certification (when you’re ready).

To set up the Incremental SP, you have it 100% right.  Create another SP called “Incremental xyz” for easy identification, and point its dedupe at the Global one you already have; the library, etc. would be the same.

Once it is created, go to the existing one and tell it to use your Inc SP for all Incrementals, and that’s it.  The SP is smart enough to send all non-Fulls to the other location.  All backups will use the same deduplication database, so you’ll get all those wonderful space savings.
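A minimal sketch of that routing behavior, using hypothetical policy names (the dictionaries stand in for real SP objects):

```python
# Illustrative only: how a main SP redirects non-full jobs once an
# Incremental Storage Policy has been assigned to it.
MAIN_SP = {"name": "GlobalDedupSP", "retention_days": 180}
INC_SP  = {"name": "IncrementalSP", "retention_days": 120}

def target_policy(job_type: str) -> dict:
    # Fulls stay on the main SP; everything else goes to the Incremental SP.
    return MAIN_SP if job_type == "Full" else INC_SP

for job_type in ("Full", "Differential", "Incremental"):
    print(job_type, "->", target_policy(job_type)["name"])
# Full -> GlobalDedupSP
# Differential -> IncrementalSP
# Incremental -> IncrementalSP
# Both policies share the same global DDB, so the dedupe savings are kept.
```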

Not sure why you aren’t able to create additional Global DDB Storage Policies….

To confirm, you’re right clicking on the Storage Policies header, not the existing SP?

https://documentation.commvault.com/commvault/v11_sp20/article?p=12464.htm

Procedure

  1. From the CommCell Browser, expand Policies.
  2. Right-click Storage Policies, and then click New Global Storage Policy.

    The Create Storage Policy Wizard opens. Follow the instructions in the storage policy creation wizard.

  3. Click Finish to create the storage policy.

    The new storage policy appears under the Storage Policies node.

Can you send me a screenshot?


bc1410
  • Author
  • March 17, 2021

Yeah, this is a snip from the CommCell when I right-click on Storage Policies. This is all I see. Not sure if I set something up incorrectly, etc.

 


Mike Struening
Vaulter

@bc1410 , I’ll investigate.  I’m thinking we may have limitations based on # of MAs or something similar.


bc1410
  • Author
  • March 17, 2021

We are still working on the evaluation license right now. Would that have something to do with it?


Mike Struening
Vaulter

While I am looking, can you confirm the CommServe version for me?  There are slight changes in DDBs that MIGHT be at play here.


Mike Struening
Vaulter
bc1410 wrote:

We are still working on the evaluation license right now. Would that have something to do with it?

Possibly, depending on what you are licensed for.

Let me know what your current version is, and take a look at your License Summary report details.  I would not expect you to have a limit on anything as evals are generally rather open (so you can try everything).

I believe your limitation here is based on resources (i.e., you have one Media Agent and it doesn’t want to overburden the sole MA) or due to some changes in dedupe scaling.


bc1410
  • Author
  • March 17, 2021

CommServe version 11.22.13.

Let me take a look at the License Summary.


bc1410
  • Author
  • March 17, 2021

We have only one Media Agent; we are using the one Windows 2019 server as both Media Agent and CommServe.

I don’t see anything out of the ordinary in the license report; we aren’t stepping out of bounds, so to speak, with what we have set up. For example, Server File System is unlimited with the evaluation and we have 12. Same with cloud storage: the evaluation offers 10 and we have 1.

  


Mike Struening
Vaulter

It’s the version :-)  The location to create these has moved; it is now under Storage Pools:

 

https://documentation.commvault.com/11.22/expert/12460_creating_global_deduplication_policy.html

Jumping-off point for creating a global DDB based on disk: https://documentation.commvault.com/11.22/expert/9095_configuring_disk_network_storage_pool.html

Jumping-off point for creating a global DDB based on cloud: https://documentation.commvault.com/11.22/expert/9110_configuring_cloud_network_storage_pool.html


bc1410
  • Author
  • March 17, 2021

Thanks for the great info, Mike.

So after I get going with my incremental storage policy and so on, do you think we should create another storage pool/policy (maybe non-deduplicated) using S3 cloud storage just for our MS SQL transaction log jobs that we want to run every hour or every 2 hours? Or should these go to the Global Dedup we already have in place?

 

In other words, with the S3 cloud library we have (not sure if this makes sense), can we use some of that S3 for non-deduplicated storage, like just regular disk storage, for the TRANS jobs?

 

I’m not sure if we can create multiple libraries off the same S3. Would we have to spin up another S3 bucket, correct? Or can you just create another storage pool with a different name, uncheck deduplication, and have it use the same existing S3?

 

Hopefully some of this makes sense…

Can’t thank you enough, Mike, for all the great help and patience.

BC

 


Mike Struening
Vaulter

It’s absolutely my pleasure!!  Normally I’d split each of these questions out into new threads to track, but we’ll just keep going :nerd:

The quick answer is that we don’t dedupe trans logs regardless of Storage Policy, so you don’t need to do anything.
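The usual motivation is that log data is mostly unique, so block-level dedupe finds almost nothing to share. Here is a small self-contained Python sketch of the effect (block size and data are invented for illustration; Commvault’s engine differs in the details):

```python
# Toy block-level dedupe: hash fixed-size blocks, store each unique block once.
import hashlib
import os

def dedupe_ratio(data: bytes, block_size: int = 128 * 1024) -> float:
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    unique = {hashlib.sha256(b).digest() for b in blocks}
    return len(blocks) / len(unique)

block = b"A" * (128 * 1024)
repetitive = block * 20                 # unchanged file data: 20 identical blocks
log_like = os.urandom(len(repetitive))  # log-like stream: every block unique

print(f"repetitive data: {dedupe_ratio(repetitive):.1f}x")  # 20.0x
print(f"log-like data:   {dedupe_ratio(log_like):.1f}x")    # 1.0x (no savings)
```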

Now, if you REALLY wanted to make multiple libraries within the same bucket, you COULD, but there might not really be any point.

Generally, you have one mount path per bucket, so no need to create extras.

Cloud libraries have a bunch of fine tuning options that you might want to review as well.


bc1410
  • Author
  • March 17, 2021

Hey Mike - so when I created the Inc/DIFF storage policy as we discussed, it popped up under Storage Policies (it also had the same icon as my main SP, the “Global Deduplication Storage Policy” icon; I had to find a cheat sheet for these Commvault icons, there are so many of them...). I then right-clicked on my main SP, placed a check mark in the box next to “Incremental Storage Policy”, used the drop-down box to select the newly created Inc/DIFF SP, and clicked OK to close the dialog box. The weird thing I noticed is that the icon for my main SP changed to that of an incremental storage policy. Weird; I thought it would be the other way around. See my screenshot attached.

Just for reference, my main SP (the first one I created, the global dedup one) is named C*-GlobalDedupSP, and the newly created Inc/Diff SP is called C*-Diff-SP.

This doesn’t seem correct?

Sorry for blacking out the names.

Thanks

BC

 


Mike Struening
Vaulter

Yup, that’s right!  I created one in my lab here.  Alpha is using Primary Server Plan as its Incremental SP and the icon changed to the ascending bar graph.

Remember that an Incremental SP is still a regular old SP.  It’s just that one of your SPs uses it as an Incremental SP; you can still associate subclients to it as their normal SP.  That’s why the icon only changes on the main SP: to show that ITS Incrementals go elsewhere.  Any SP can be an Incremental SP for another.

 


bc1410
  • Author
  • March 19, 2021

Hey Mike,

Sorry about the late response; I had to obtain a new access card to log in to our systems, as mine was about to expire. Long story short, I can now log in with my new access card.

Thank you very much for the assistance and great knowledge.

Thanks for taking the time to test the icon change in your lab. So the bottom line is that what I witnessed, the icon on the main global dedup SP changing to the bar graph and my newly created incremental SP taking on the global dedup icon once I checked the box to use an incremental SP on my main SP, is expected and OK.

Thanks again Mike..

BC

 


Mike Struening
Vaulter

That’s right, you’re all set on the Inc SP (and no need for any apologies; I’m here to help :sunglasses: )!

