Solved

CommServe and MediaAgent OS Upgrade

  • 15 October 2021
  • 16 replies
  • 2011 views

Userlevel 2
Badge +6

Hi,

is it a good idea to upgrade the operating system of a CommServe and a MediaAgent in place, or is it better to install a new machine and migrate the CS or MA?

 

What is the supported way to move to a new OS?

 

Kind Regards

Florian


Best answer by Mike Struening RETIRED 15 October 2021, 19:14


16 replies

Userlevel 7
Badge +15

Hi @flokaiser 

An in-place upgrade obviously carries an element of risk.

If you have the CommServe and MediaAgent on the same machine, the risk is doubled.

I would recommend separating the CommServe and MediaAgent onto different machines, at which point you can move the roles to machines with a newer OS.

Separating the CommServe from a CommServe-MediaAgent Computer

CommServe Hardware Refresh Overview

Thanks,

Stuart

Userlevel 2
Badge +6

Hi,

 

the CommServe and the MediaAgent are on different machines.

Userlevel 7
Badge +23

@flokaiser , appreciate the confirmation.  Check the link for the CS hardware refresh, as well as the MA refresh.  Same idea as @Stuart Painter mentioned; you are just building new servers, then migrating the resources over.

Badge

I was just considering this, as my CommServe and MediaAgents are on 2012 R2. The CommServe is a VM, so build-new-and-migrate is an easy option. The MediaAgents are physical. Is there a process or any precedent for doing an in-place upgrade of the OS to 2019? Is the only option a hardware replacement?

Userlevel 7
Badge +23

@Steve Cohen , the only safe, semi-guaranteed (in that nothing is really guaranteed) method is the migrate option. It’s a much cleaner and more consistent experience.

You can try to upgrade in place, though that leaves you in a pinch if something goes wrong (and then you’d need to do it the original way anyway).

 

Badge

Mike,

Thanks for the feedback. My CommServe is virtual, so I suppose I can snapshot and roll back if it goes bad. The MediaAgents are physical, so I will both migrate them to the new OS and move them to virtual. Agree about the cleaner process; I was just looking to save some time, but sometimes cutting corners costs more time in the long run :)
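For what it’s worth, the checkpoint/rollback step I have in mind is roughly the following - just a sketch assuming a Hyper-V host, with placeholder names (on VMware I’d do the same thing with PowerCLI snapshots), and with Commvault services stopped first so the databases are quiesced:

    # Placeholder names - adjust for the environment.
    $vm   = "CS01"
    $snap = "Pre-OS-upgrade"

    # Take a checkpoint of the CommServe VM before the in-place OS upgrade.
    Checkpoint-VM -Name $vm -SnapshotName $snap

    # If the upgrade goes bad, roll back to the checkpoint and power the VM back on.
    Restore-VMSnapshot -VMName $vm -Name $snap -Confirm:$false
    Start-VM -Name $vm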

Thanks again.

Userlevel 7
Badge +23

Anytime!!

Badge +1

@Mike Struening .

 

Maybe I’m hitting a glitch in the 11.24 LTS docs, but I started from this post, followed your answer to flokaiser via this link (MA refresh), and there’s something odd about the 4 steps at the bottom of the page.

Step 1 - no probs.

Step 2 - Shutting down the old MA. OK: stop all services and prepare the libraries and index cache for migration. But if you follow the link Shutting down the old MediaAgent, the page it takes you to talks about deleting the MediaAgent. As a former NetBackup guy, I think that’s a bit premature. That step should happen long after the DDB (if in use, and who doesn’t use one with disk), the libraries and the index cache have all been migrated. So, technically, this should be swapped with Step 3 from that page, or made part of the post-migration operations (Step 4) and edited to cover only the actual removal from the environment.

I thought this was an odd place to land from Step 2, and when I read the steps under the Step 3 link, Setting up the new MediaAgent, I was pretty much convinced. :sunglasses:

I’m currently drafting a migration from VM to physical MAs, because we need the better performance of dedicated servers with NVMe volumes for our DDBs, and in the process we’re moving from 2-node grids to 4-node grids. Of course, I’m racing against the clock to get this done before we run out of the free space needed to do the mount path migrations. The reason I’m drafting this is so that other eyes can review it, as part of our attempt to implement proper change management in our environment as we grow. Having come across that oddly out-of-place set of instructions, I’m glad I am.

Frankly, I’d love it if there were a documented way I could just unmount these volumes (they are iSCSI, presented directly from the array) from the VMs, remount them on the new hardware, and run the relevant update commands (qcommands or PowerShell) to say: hey, your DDBs, index cache and deduplicated disk library mount paths are all now on server X, in paths A, B, C, rather than moving one or two mount paths at a time and messing around with that slow shuffle step (I’m dealing with well over 350 TB of data here). I suspect that would be a heck of a lot faster than following the steps as outlined - down for a day at most, versus being down for multiple days.
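For context, the sort of pre-move inventory I have in mind is something like this - a rough sketch using only the standard Windows iSCSI and storage cmdlets (nothing Commvault-specific; the output paths are placeholders), run on the old MA before anything is disconnected:

    # Record which iSCSI targets are connected on the old MA.
    Get-IscsiSession |
        Select-Object TargetNodeAddress, IsConnected |
        Export-Csv C:\Temp\iscsi-sessions.csv -NoTypeInformation

    # Map each iSCSI-backed disk to its partitions and access paths
    # (drive letters / mount points), so the same layout can be recreated
    # once the volumes are reconnected on the new MA.
    Get-IscsiSession | Get-Disk | ForEach-Object {
        $disk = $_
        Get-Partition -DiskNumber $disk.Number | Where-Object AccessPaths | ForEach-Object {
            [pscustomobject]@{
                DiskNumber   = $disk.Number
                SerialNumber = $disk.SerialNumber
                AccessPaths  = ($_.AccessPaths -join '; ')
            }
        }
    } | Export-Csv C:\Temp\iscsi-disk-map.csv -NoTypeInformation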

 

David K

 

Userlevel 7
Badge +23

@DKerrivan , I think you caught a bad link!

I’ll share with our docs team and handle the MR.

For future reference, if any document looks wrong, click the little chat bubble in the top right to leave feedback. That becomes a work order for the docs team.

Badge +1

@Mike Struening I was really hoping you’d offer some guidance on shifting my iSCSI mounts wholesale from one machine to another ;-)  On an uncongested network I’m looking at over 4 days of time to shift things, and that’s assuming I’m sitting there monitoring each and every mount path as it moves. It would be so much faster to do it by just reallocating the mounts to the new MA via iSCSI sharing.

Ah well.. I continue to review my options.

Userlevel 7
Badge +23

@DKerrivan , I may be able to get some advice from some of our internal folks. Let me see what I can find!

Userlevel 5
Badge +11

@Mike Struening I was really hoping you’d offer some guidance on shifting my iSCSI mounts wholesale from one machine to another ;-)  On an uncongested network I’m looking at over 4 days of time to shift things, and that’s assuming I’m sitting there monitoring each and every mount path as it moves. It would be so much faster to do it by just reallocating the mounts to the new MA via iSCSI sharing.

Ah well.. I continue to review my options.

@DKerrivan you can definitely do a move of iSCSI mounts without having to copy data. Once the new MA is set up, disconnect the iSCSI mounts from the old MA and then configure them on the new MA. All the data should still be present. If you previously mapped to particular drive letters, note the unique base folder name for each path before disconnecting, so that you can map it to the same location on the new MA. Alternatively, if you have a very old library/MP that doesn’t have unique base folder names, simply create a text file in each one recording the drive letter it should be, so you know on the new MA where it came from.

If you’ve already disconnected without noting things down and can’t figure out which path used to be what, raise a support incident and we can still help you identify what each MP used to be. 
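For example, something like the following - a rough sketch using plain PowerShell only (no Commvault commands; the marker file name, output path and drive filter are placeholders to adjust) - would record the drive letter for each mount path base folder and drop a marker file inside each one before you disconnect:

    # Run on the old MA before disconnecting the iSCSI mounts.
    # For every fixed, lettered volume (excluding the system drive), note the
    # top-level base folders and drop a marker file recording the original
    # drive letter, so the same layout can be recreated on the new MA.
    $volumes = Get-Volume | Where-Object {
        $_.DriveLetter -and $_.DriveType -eq 'Fixed' -and $_.DriveLetter -ne 'C'
    }

    $report = foreach ($vol in $volumes) {
        foreach ($folder in Get-ChildItem -Path "$($vol.DriveLetter):\" -Directory) {
            # Marker file inside each base folder with the drive letter it lived on.
            Set-Content -Path (Join-Path $folder.FullName 'ORIGINAL_DRIVE_LETTER.txt') -Value $vol.DriveLetter
            [pscustomobject]@{
                OldDriveLetter = $vol.DriveLetter
                BaseFolder     = $folder.Name
            }
        }
    }

    $report | Export-Csv C:\Temp\mountpath-map.csv -NoTypeInformation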

 

Once the paths are presented to the new MA, simply share each mount path in the GUI with the new MA and its corresponding path. Once each MP has a new share path to the new MA, you can simply delete the original path on the old MA.

 

Commvault treats mount paths based on IDs, not on the literal path. The shared paths indicate that both the old and new MA are pointing to the same data and thus would have the same ID.

 

Hope that makes sense.

 

Thank you

Badge +1

Sigh. One of those days. I started to answer this at the start of the day, but then, stuff. @Jordan, thanks. I will be vetting those steps with our CV partner, as I have some other interesting challenges on this.

 

I do have my mount points uniquely named, so tracking changes isn’t going to be a problem.

 

MA1 and MA2 are a two-node grid, with two DDB partitions for my on-prem data and two DDB partitions for my offsite data in a non-AWS S3 location.

The new MAs are currently set up as a 4-node grid. There is zero data written to the new MAs, so I can break them down and reconfigure if need be. I was hoping to distribute the iSCSI mounts across all 4 nodes in a somewhat tidier manner, but I’m not sure I can take the 20+ mounts on MA1 and redistribute them across the 4 new nodes on the first pass (and the same for the 20+ mounts on MA2). However, if it’s a case of needing to move MA1’s mounts to node 1 and MA2’s mounts to node 2, and then running move mount path operations later to rebalance across nodes 3 and 4, that will be possible.

 

But, thanks again for that - that will make my change window a lot shorter either way I think!

Badge +1

Just thought I’d report back on the migration experience. I wore out my pointing device….

I have never had to do so many clicks to make something happen - this is where I actively say I prefer scripting things (yes, I’m that old school). I moved two 2-partition DDBs and shifted 50 iSCSI mounts from VMs to physical systems. Between Commvault, my storage array and Windows, each mount took well over 20 mouse clicks. I kid you not, the left button on my trackball gave up the ghost going through this process. Thankfully, I had a spare. That was the fast part of this migration and took about 2 hours.

My DDBs took well over 5 hours to migrate (about 1.7 TB of data in each DDB). A copy-paste of that data across the network would have taken under 20 minutes. I get that there’s some housekeeping that needs to be done since they are databases, but that was painful - I get the feeling there could be some optimizations made there.

Then there’s the section on changing the location of the Index Cache Directory - that doesn’t actually cover moving from one host to another, just moving the index from path A to path B on host 1... I ended up using Change Index Server, but that too took a crazy amount of time for data that a straight file copy could move in under half an hour.

I won’t get into the other elements of re-associating policies and networking, but that too could use some streamlining.

This is one area where Veritas NetBackup may have an advantage over Commvault - when I last officially used the product, they had a media agent decommissioning binary that did virtually all of the heavy lifting for you, and it used significantly fewer keystrokes than this did mouse clicks!

I am not looking forward to when we have to refresh these servers in ~5 years’ time.

Ah well, two more DDBs to move, then I need to figure out why the aux copies to offsite long-term storage aren’t running.

Userlevel 7
Badge +23

Appreciate the experience (and if scripting didn’t prove your old-school creds, the trackball did :nerd: )!

 

Badge +1

 

Appreciate the experience (and if scripting didn’t prove your old-school creds, the trackball did :nerd: )!

 

 @Mike Struening - too funny! 
