Commvault Q&A, release updates, and best practices
Amazon MALZ integration with Commvault
Good day. Does anyone have experience with protecting workloads in an AWS AMS/MALZ environment? Are all the features supported? I have heard that some features relating to agentless protection and recovery operations will not be possible due to the restrictions MALZ places on using snapshots (VSA / RDS), creating VMs/DBs, assigning permissions to resources via IAM, etc. When using the in-guest agent approach, this should not be an issue as I understand it.

The idea is to integrate this environment into an existing on-premises Commvault deployment. I do see that there is an option to create a “Customer Managed” OU within the MALZ environment, but this would still make use of some components within the Core OU managed by AWS.

Any information will be appreciated.

Regards,
Ignes
I am not able to browse and restore data from latest incremental backup.
There has been data loss in a customer environment. The customer wants me to check whether anything that was backed up could possibly be restored. There are 2 incremental backups from which data needs to be restored, but browsing those incrementals gets stuck at “loading data…” and never actually loads anything. Previous incremental backups browse successfully.
Storage Pool - best practices or no logic?
Hello,

Following the Commvault SE recommendations, we created a storage pool of 4 MediaAgents with DAS storage. Initially all MAs could read and write data to each mount path, and I noticed that the “LAN-free” logic does not work: each MA tries to access every mount path, even when a “closer” or faster path is available. Although our network is 10Gb, data transfer between MAs is very slow. I have since allowed only reads for any MA on any path, and it works better, but still not perfectly. Most importantly, aux copy to tape is very slow. Each policy allows each MediaAgent access to any tape drive, so my idea was: “OK, let’s stop access via IP, and only the MA that owns the DAS will read/write the data.” With that, backups are fast, but aux copy is failing, because MAs cannot access data on the other MAs.

So I am stuck, with no ideas other than abandoning the storage pool and going back to using each MediaAgent standalone. Any ideas how to:
- keep a storage pool, and
- force each MA to use its own DAS?
Disaster Recovery Backups have started failing
My administrative Job Summary Report has started to show:

ERROR CODE [34:53]: CommServeDR: Destination Directory [\\<server_216>\D$\DR_Dump_Prod] does not exist or is inaccessible
Source: <server_57>, Process: commserveDR

When I check <server_216>, I see there is plenty of disk space, and the D:\DR_Dump_Prod folder exists and contains several folders named SET_99999, the most recent of which is 2 days old. Does anyone know how to check what has gone wrong?

Ken
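When the folder looks fine locally but the error persists, the failing check is usually access from the CommServe over the UNC path, under the account the Commvault services run as. Below is a minimal sketch of that check; the function name and the write-probe approach are mine, not part of the CommserveDR process:

```python
import os
import tempfile

def check_dr_destination(path):
    """Return a list of problems that would block a DR dump to `path`."""
    problems = []
    if not os.path.isdir(path):
        problems.append("directory does not exist or is not reachable")
        return problems
    # The DR process must create new SET_* folders, so probe with a write.
    try:
        probe = tempfile.TemporaryDirectory(dir=path)
        probe.cleanup()
    except OSError as exc:
        problems.append(f"directory is not writable: {exc}")
    return problems
```

Running this on <server_57> against the \\<server_216>\D$\DR_Dump_Prod path, under the service account, distinguishes “share unreachable” from “reachable but not writable”; a permissions change on the administrative D$ share or on the service account is a common culprit.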
Replace Old Media Agent with New One with LUNs
Hello, everyone. I have a critical issue. I have a MediaAgent with LUNs attached to it, used as libraries with all mount paths configured. That MediaAgent has crashed. I have provisioned a new MediaAgent, and those LUNs are visible locally on it. I want the library to pick up the name of the new MediaAgent in place of the old one that crashed. This is very urgent for me; the company is one of our major clients.
Unable to open windows in Commcell Console after upgrade to FR11.28
Hi all,

I have upgraded the CommServe from 11.24 to 11.28. After the upgrade, windows no longer open in the CommCell Console, for example: Job Controller, Event Viewer, Alerts, Scheduler, CommServe Browser, and so on. Even the CommCell Console window itself does not close when I click the cross button.

To check and reproduce it on another computer, I upgraded the CommCell Console module on a MediaAgent server to 11.28, and the behavior is the same as on the CommServe server.

Thanks for your help,
Lubos
Can it be done : NDMP (Isilon) via secondary IP on MediaAgent using DIP
Scenario: three-way NDMP backup from an Isilon. The Isilon uses an IP pool of x.x.x.x (e.g. 192.168.1.1 - .4) for NDMP operations. The MediaAgent has a primary IP of y.y.y.y (e.g. 192.168.2.1). I have put a second IP on the MediaAgent that is in the same subnet as the Isilon, e.g. 192.168.1.250; this second IP is on a different NIC from the primary IP. When I run a backup of the Isilon, the job fails with a “network communication error”. Looking at the logs, the problem seems to be that the nasbackup command is still using the MediaAgent’s original primary IP and not the new secondary IP. I have tried setting up a DIP/backup network pair, but the job still fails and the logs show that the MediaAgent’s primary IP is still being used. Is it possible for the NDMP backup to use a specified IP address for the MediaAgent that is not the primary IP? Based on a bit of digging around in the forum, it seems that DIPs are not used for NDMP traffic management since the NDMP server (Isilon) is not really…
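Independent of how Commvault selects the interface, it is worth verifying that the secondary interface itself can reach the Isilon's NDMP listener (conventionally TCP 10000) when explicitly bound as the connection source. A small sketch; the IPs used in the usage note are the examples from the post:

```python
import socket

def reachable_from(source_ip, target_ip, target_port, timeout=5):
    """Check whether target_ip:target_port is reachable when the connection
    is forced out of the interface that owns `source_ip`."""
    try:
        conn = socket.create_connection(
            (target_ip, target_port),
            timeout=timeout,
            source_address=(source_ip, 0),  # 0 = any ephemeral local port
        )
        conn.close()
        return True
    except OSError:
        return False
```

For example, `reachable_from("192.168.1.250", "192.168.1.1", 10000)` run on the MediaAgent: if this returns False, the problem is routing or firewalling on the secondary NIC rather than Commvault's interface selection; if it returns True, the question really is how to make the NDMP data path use that source address.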
Hello, I have an issue with my DDB reconstruction. Not long ago I moved my DDB to another folder on the same server, e.g. from folder1 to folder2. Three days later, a colleague restarted the server and force-killed the SIDB process from Process Manager, and the server went into DDB recovery. For a week now, the file system recovery phase completes, but the adding-records phase fails. Today I found out the recovery process is reading from folder1, because after the DDB move the new path had not yet received a weekly DDB backup.

My question: per the Commvault documentation (https://documentation.commvault.com/11.24/expert/12582_moving_deduplication_database_to_another_location.html), the file system recovery is pointing to folder1 instead of folder2. What do you suggest? Can I move the DDB back to its previous folder1, and what could happen, given that it keeps attempting reconstruction and failing? I logged a ticket with support and someone tried to help, but it is still failing. What other way can I move this DDB folder ba…
PLANS - how do I stagger backups?
Hi!

I’m used to the old-style storage/schedule policies and client/subclient associations, but we’re told that Plans are the future, and we’ll have to move to Plans for sure. So I have deployed a few new MAs to protect some locations inside my company, where I have to provide VSA + file-level backups. I have local disk backup, then an aux copy to tape and an aux copy to cloud from the primary. The MA is physical Linux; since we have Windows VSA clients, I have also deployed a Windows VSA proxy. I then created storage pools for local disk, tape copies, and cloud copies.

To simplify things and test Plans, I created a Plan per location with standard details: 1-day RPO, 1 month of retention, my backup timeslots, and full timeslots (confusing, with synthetic fulls out of control, but we’ll discuss that later in this thread, I guess). I created a VSA VM group that points to my VMware location (i.e. it selects all the VMs in that location, including my VSA proxy VM). For some VMs, I was asked to also provide file-level backup of…
Azure Restore Performance
Hi all,

I saw the other topic on this exact issue. It was mentioned that FR26 would improve the performance of a restore operation, and that failing that, IntelliSnap could be leveraged. Unfortunately, we are still seeing the same performance. The original poster mentioned that Commvault support quoted a maximum throughput of 60 GB/hr, and we’re seeing close to this on our side. We’re also restoring managed disks.

Is there any documentation that confirms the 60 GB/hr limit mentioned in that post, and is there any workaround to try to improve it? A 400 GB VM takes about 10-12 hours to recover.

Regards,
Mauro
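As a quick sanity check on the numbers (all figures are from the post; the 60 GB/hr ceiling is what support reportedly quoted, not something confirmed in documentation):

```python
def restore_hours(size_gb, throughput_gb_per_hr):
    """Estimated restore duration in hours at a given effective rate."""
    return size_gb / throughput_gb_per_hr

# At the quoted 60 GB/hr ceiling, a 400 GB managed disk would need ~6.7 h.
best_case = restore_hours(400, 60)

# The observed 10-12 h implies an effective rate of only ~33-40 GB/hr,
# i.e. the restore is running well below even the quoted limit.
rate_slow_end = 400 / 12   # ~33 GB/hr
rate_fast_end = 400 / 10   # 40 GB/hr
```

So the gap worth investigating is not only the 60 GB/hr ceiling itself but also why the observed rate falls short of it.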
Standby CommServe install hangs
Good day, community,

For the second customer this month, when I install the standby CommServe, it hangs and spins. The first customer took three full reinstalls to finally get it through, but for the current client it never works and always hangs. In the install log file, it hangs on this line:

Adding environment variable [CV_Instance001] = [C:\Program Files\Commvault\ContentStore\Base]

When I look at the environment variables, the variable is there. I’m using version 11.28.44. Is this something common?

Thanks in advance.
Delete Mount Path from de-dupe library and decommission Media agent
Team,

We are using Windows servers as backup MediaAgents. I want to decommission one of the MediaAgents, “x”, which is part of 3 libraries and deduplicated storage policies. I have disabled the mount paths on all 3 libraries associated with MediaAgent “x”, and View Content shows that there is no data present on the mount path. When I try to delete the mount path associated with MediaAgent “x”, I get the following error:

Mount path is used by a Deduplication database. The data on this mount path used by the deduplication DB could be referenced by other backup jobs. The mount path can be deleted only when all associated storage policies/copies with deduplication enabled are deleted. See the Deduplication DBs tab on the property dialog of this mount path to view the list of DDBs and storage policies/copies.

If I unshare the mount paths associated with MediaAgent “x” from the other mount paths of the same library and remove MediaAgent “x” from the Data Paths tab in the dedupe storage policy, the restore job starts f…
Block Level Backup option missing from MS cluster client
Hello,

I have a situation where the option to enable block-level backup on a subclient is missing. It’s not grayed out; it simply isn’t there. The missing option is on an MS cluster virtual client (pseudo-client). The configuration meets the requirements: the nodes are on Windows 2012 R2 and the MediaAgent package is installed on both nodes. Here is the cluster subclient, and here is a random subclient without the MediaAgent package installed, for comparison. Per the documentation, BLB is supported on MS clusters:
https://documentation.commvault.com/commvault/v11/article?p=18527.htm
https://documentation.commvault.com/commvault/v11/article?p=3505.htm
Problem with copy media LTO4 (IBM Tape library) to LTO7 (HPE Tape Library)
Hello,

I have an IBM TS3200 tape library with LTO4 media. We now have a new HPE tape library with LTO7 media. How can we copy data from the LTO4 media (old tape library) to the LTO7 tape library? Which way is recommended, perhaps a Media Refresh?

Thank you!
Best regards,
Elizabeta
WebConsole/AdminConsole not loading/working
Hi,

We recently upgraded to FR20, and I can’t get the Command Center or Web Console to load; I always get the same error. Tomcat is running fine. I have restarted the services several times, and even rebooted the CommServe. When I check the CVWebService, I receive the message “Webservice is Running!”, so I’m not sure what’s happening. I found some Knowledge Base articles, but none solved the issue. Has anybody else had this issue in the past?

Thanks!
Jeremy
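One way to narrow down where the chain breaks is to probe each layer over HTTP from the CommServe itself and compare what the web service, the Web Console, and the Command Center actually return. A sketch; the URLs in the comments are placeholders, so substitute your real hostnames, ports, and application paths:

```python
from urllib import request, error

def probe(url, timeout=10):
    """Return (HTTP status, first bytes of body), or (None, error text)."""
    try:
        with request.urlopen(url, timeout=timeout) as resp:
            return resp.status, resp.read(200)
    except error.HTTPError as exc:
        # The server answered, just not with 2xx; that is still informative.
        return exc.code, b""
    except OSError as exc:
        # No answer at all: service down, wrong port, or firewalled.
        return None, str(exc).encode()

# Hypothetical examples:
# probe("http://commserve/webconsole/")      -> does Tomcat serve the app?
# probe("http://commserve/commandcenter/")   -> or just an error page?
```

If the web service responds but the console URLs return 404/500, the problem is likely in the Tomcat web applications rather than the service layer, and the Tomcat logs are the next place to look; if the console URLs don't answer at all, it's a port/binding issue instead.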
Implementing cloud combined storage tiers
We’re setting up a POC that uses a cloud MA to copy longer-term retention copies (1- and 7-year) from Azure cool blob storage to archive, and we would like to use combined-tier storage for the library where the long-term copies will be kept. This being our first time configuring combined tier, I tried to find documentation describing how to configure it, but so far I have not been able to. One question I’m hoping to answer: do we need to (or can we) pre-create the cool and archive storage accounts that will be used when configuring the new library, or is there some other way this gets done?
Delete Mount Path associated to DDB
Hello there,

I have a minor issue: I cannot delete an unused mount path, since it is used by a DDB. There are a few MPs under the disk library dedicated to this DDB. In the DDB properties I can only remove the whole disk library, which is not the point. The CommCell says that in order to delete this MP, I need to delete each storage policy copy that references this disk library, which is not an option either. The logs say something like:

EvMMConfigMgr::onMsgConfigStorageLibrary() - Error [470, Mount path is used by a Deduplication database.] occurred while deleting the mountPath[xx]
MLMMountPath::deleteMountPath() - :mlmmountpath.cpp:6170: Failed to delete mountpath [xx] due to error [470, Mount path is used by a Deduplication database.].
MLMMountPath::deleteMountPath() - :mlmmountpath.cpp:5593: Failed to delete MountPath from database for Id [xx] due to error Mount path is used by a Deduplication database.:470

Do you have any ideas or workarounds to delete a single MP in this situation?