Q&A. Technical expertise. Configuration tips. And more!
Recently active
Hello, I need to change a local path to a network path. 1) Can I modify the path directly, or should I verify anything before making the modification? 2) Should I enter the path like this: Z:\Share\CS_DR, or like this: \\server01\Share\CS_DR? Thanks!
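As a general sanity check (not Commvault-specific advice): mapped drive letters like Z: are per-user, so a service account often cannot resolve them, which is why the UNC form is usually the safer choice for anything a backup service touches. A minimal Python sketch to confirm both forms resolve from the machine that will use the path, using the paths quoted in the post:

```python
import os

# The two path forms from the post; adjust to your environment.
LOCAL_PATH = r"Z:\Share\CS_DR"
UNC_PATH = r"\\server01\Share\CS_DR"

# A mapped drive letter (Z:) is a per-user mapping, so services running
# as LocalSystem or another account usually cannot see it; the UNC form
# avoids that dependency entirely.
for path in (LOCAL_PATH, UNC_PATH):
    reachable = os.path.isdir(path)
    print(f"{path}: {'reachable' if reachable else 'NOT reachable'}")
```

Run this under the same account the backup service uses; if Z: reports NOT reachable there but the UNC path works, that by itself answers question 2.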
How do I keep tabs open when I click another item in the CommCell browser? For example, I want to keep the Client Computer Groups and Client Computers tabs open and next to one another.
When I look at the aux copy job status, it reflects Total Data to Process as 14.49 TB; however, when I right-click the storage policy and select Media Not Copied, the total amount of data is less than 300 GB. Does that mean the deduplicated data to aux copy is only 300 GB? Also, since the aux copy uses a DDB at the secondary location, would the data actually copied be less than 300 GB?
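For what it's worth, the two numbers in the post are at least consistent with heavy dedup savings. A back-of-envelope sketch in Python, under my assumption (not confirmed by the post) that "Total Data to Process" is the logical application size of the jobs to copy, while "Media Not Copied" reflects what actually has to land on the secondary media:

```python
# Back-of-envelope dedup arithmetic using the figures quoted in the post.
to_process_tb = 14.49          # logical size reported by the aux copy job
written_gb = 300.0             # upper bound shown by "Media Not Copied"

to_process_gb = to_process_tb * 1024
savings = 1 - written_gb / to_process_gb
print(f"Implied dedup/compression savings: {savings:.1%}")  # ~98.0%
```

A ~98% reduction is plausible for a mature DDB, but whether those two views measure the same thing is exactly the question for the forum.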
Aux copy job shows Running after the operational window. The aux copy job stops running at 7 AM due to a blackout window, then automatically resumes, and the job is killed by the system (reason: the job has exceeded the total running time). What I want to know is: does the job still pass traffic after the blackout window is in place?
Hello, my Technical Security Team wants to run a vulnerability scan of MongoDB on several of my Commvault servers. They need some form of credentialed access to these databases. Is this possible, or are the MongoDB instances purely for internal Commvault use? Regards, Fergus
Hello, my Commvault environment is v11 SP16. On a lot of my servers, Java is installed in C:\Program Files\Java. Does Commvault rely on this, or does it use the JRE within the ContentStore folder? Thanks in advance. Regards, Fergus
Hi guys, we have an issue with one of our Oracle VMs. The VM is not really big; the backup data is 1-2 GB in size (the size of the application), and the compression rate is ~70%. 96% of the time the backup is in the read phase, and the backup takes around 30 minutes. The disk is really busy inside the VM: when the backup starts we see 20-30k read IOPS and a 100% busy state, which I can't explain. We see this on full backups and on incremental backups of the default subclient. We have already limited the read throughput with the RATE setting inside CV to 250 MB/s (before that we saw 800-900 MB/s reads). However, this does not help with the high read IOPS at the start of the backup. We see these high IOPS for 30-60 seconds, and then they fall to 500-1000 for the rest of the backup. Is there anything we can do to limit the read IOPS in this case? What is the reason for such high read IOPS for this small database? What exactly happens at the start of the RMAN backup? On the backend storage I cannot see any bottleneck; we use all-flash storage.
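One way to sanity-check the figures in the post: throughput = IOPS × I/O size, so a burst of small reads can sit well under a MB/s cap. A quick Python sketch of that arithmetic (the 8 KB read size is my assumption, based on Oracle's default block size; everything else is from the post):

```python
# throughput = IOPS * io_size. With small I/Os, a high-IOPS burst can still
# stay under a MB/s rate cap, which would explain why RATE=250 MB/s does
# not tame the burst even though it capped the sequential read phase.
burst_iops = 25_000          # midpoint of the observed 20-30k burst
io_size_kb = 8               # assumed: Oracle's default block size

burst_mb_s = burst_iops * io_size_kb / 1024
print(f"{burst_iops} IOPS x {io_size_kb} KB = ~{burst_mb_s:.0f} MB/s")
# ~195 MB/s, i.e. below the 250 MB/s RATE cap
```

If the burst really is small-block reads, a throughput-based limit will never catch it, which may be worth raising explicitly when asking about IOPS-level controls.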
Hello friends, I have two questions; maybe it's more of one statement and one question. Is it still true that if I change the encryption on a storage policy copy, only new content gets the new encryption, and any older content keeps the original encryption? Am I right here? Currently we have the setting "Do not deduplicate against objects older than" 180 days enabled, and retention for the policy is, let's say, 730 days. If we were to change the encryption for that storage policy copy, would it take 360 days before we had no references to the "old" encryption? Or would there be jobs with the "old" encryption throughout all 730 days? Not sure I'm describing the question well. BR, Henke
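A hedged way to reason about the timeline (this is my reading of the dedup semantics, not confirmed Commvault behavior): new jobs can keep referencing pre-change blocks until those blocks pass the 180-day dedup horizon, and the last job that references them is itself retained for the full 730 days. A sketch of that arithmetic:

```python
# Timeline sketch under two assumptions (neither confirmed by the post):
#  1. "Do not deduplicate against objects older than 180 days" means a new
#     job only references existing blocks written within the last 180 days.
#  2. A block stays on media as long as any retained job references it.
dedup_horizon_days = 180
retention_days = 730

# The newest old-encryption block is written on day 0 (the encryption
# change). New jobs may still reference it until it ages past the horizon.
last_new_reference_day = dedup_horizon_days          # day 180
# That referencing job is then retained for the full retention period.
old_encryption_gone_by = last_new_reference_day + retention_days
print(f"Old-encryption blocks could persist until ~day {old_encryption_gone_by}")  # ~day 910
```

Under those assumptions the answer would be neither 360 nor 730 days but up to horizon + retention; whether Commvault actually behaves this way is the question for the thread.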
Hi all, I have an issue when adding OneDrive cloud storage. I am configuring via the CommCell Console. If I enter the Application ID, Tenant ID, and Shared Secret and then click the Detect button, I receive an error: "### EvMMConfigMgr::onMsgCloudOperation() - Failed to check cloud server status, error = [[Cloud] The requested URI does not represent any resource on the server. Message: Invalid hostname for this tenancy". Commvault support's answer is: "The cloud vendor should be able to help you with the right URL. This is outside Commvault, unfortunately." Does anybody have experience using Microsoft OneDrive as cloud storage? Thank you, Lubos
Has anyone who has recently upgraded to MR34 (11.21.34) and is using Application Aware VM backups had issues with the VSAAppAwareBackupWorkflow job failing at the "update app aware status for VM2" task? Error messages show "com.microsoft.sqlserver.jdbc.SQLServerException: Column name or number of supplied values does not match table definition. Source: , Process: Workflow". If you are also facing this issue, do you have any workaround or temporary fix?
Hi all, recently I went through some of the great discussions about Plans in the forum and tried the same in our environment, but in a few areas I'm struggling to figure out the solution and am looking for help.
1. I understand that the Synthetic Full backup schedule is automated and runs every 30 days, but where can I modify it to suit my environment?
2. How is data aging impacted if I haven't enabled the Full backup option?
3. How can I create OS-specific Plans, and how can I restrict which Windows or Linux clients can be added to a Plan, as with a schedule policy?
4. How can I enable the 1-Touch recovery option at the Plan level?
5. How can I schedule an aux copy? I can't find an option for the synchronous copy.
6. By default it keeps 1 cycle for all the backups; how can I make it 0? In our case we have daily incrementals and a monthly synthetic full, and there is no full backup for the VM/FS agents.
7. How do I split up the backup window for the primary copy and the aux copy?
8. How do I disable client association into the Base Plan and
We have four Exchange servers. Each DB has a passive and an active copy, with the copies on different servers. We installed the MediaAgent and a backup disk on each Exchange DB server. Usually I know which DBs are passive on each server, so I set up a subclient on each one so the backup runs from the passive disk directly over the SAN to the backup disk without using the network. However, sometimes the Exchange admins do patching late at night, the DBs fail over, and they don't fail them back until later. When this happens the DB backups are much slower. I want to avoid this by configuring each subclient to only back up the passive DBs that are on the same server as the MediaAgent/backup disk. My goal is to never run backups over the network, even when DBs are failed over to other servers. Is this possible?
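Not a Commvault answer, but one building block if this ends up needing a pre-backup script: Exchange itself can report which copies on the local server are passive. A minimal Python sketch (Get-MailboxDatabaseCopyStatus is a real Exchange cmdlet; the snap-in load and the glue around it are my assumptions about how you would invoke it from the Exchange server):

```python
import json
import subprocess

# Ask the local Exchange server which database copies it holds and their
# status, via the Exchange Management Shell snap-in.
ps = (
    "Add-PSSnapin Microsoft.Exchange.Management.PowerShell.SnapIn; "
    "Get-MailboxDatabaseCopyStatus -Server $env:COMPUTERNAME "
    "| Select-Object Name, @{n='Status';e={$_.Status.ToString()}} "
    "| ConvertTo-Json"
)
out = subprocess.run(
    ["powershell.exe", "-NoProfile", "-Command", ps],
    capture_output=True, text=True, check=True,
).stdout

copies = json.loads(out)
if isinstance(copies, dict):   # ConvertTo-Json unwraps a single item
    copies = [copies]

# Healthy passive copies report Status = "Healthy"; the active copy
# reports "Mounted". Only the former are the local-SAN backup targets.
passive = [c["Name"] for c in copies if c["Status"] == "Healthy"]
print("Passive copies on this server:", passive)
```

Whether Commvault can consume such a check to steer subclient content automatically is the part I'd put to the forum; the sketch only covers detecting the post-failover state.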