Commvault Cloud Topics
Q&A. Technical expertise. Configuration tips. And more!
Hello everyone, a customer contacted us and reported that the VSS files on the domain controller were bloating C:\System Volume Information. He suspects either Commvault or VMware. I looked at several things but couldn't come to a clear conclusion. Maybe someone here knows this behavior and can give me a tip for a solution if this is caused by Commvault. Kind regards, Thomas
I need to figure out how to set up a subclient to back up vVol disks in our VMware environment. When I google this, it brings up everything but vVols, and I am having difficulty finding it in the Commvault documentation. Our level is 11.28.83. Is this just the same as setting up any subclient? Or does it need a plugin to work? Any help you guys can give me on this is appreciated.
Hi Community, in addition to the normal CV backup, I have the challenge of backing up the virtual machine files (*.vmdk, *.vmx, ...) to a stand-alone NAS, in a format independent of Commvault. My idea was to restore existing backups via a dedicated proxy on which the NAS NFS shares are also mounted. Manually, via Command Center, this works fine, but I would like to schedule it periodically as a monthly backup/restore procedure. Does anyone already have experience with how this could work? I would be happy about any answer.
https://documentation.commvault.com/2022e/expert/2822_mediaagent_system_requirements.html The MediaAgent system requirements page is outdated: for 11.32 (released 2023), it still shows information marked "Updated Monday, July 11, 2022". We need modern OS support, such as Ubuntu 22.04, listed for file system backup as well: https://documentation.commvault.com/2022e/essential/142272_system_requirements_for_linux.html
Error Code: [18:183]
Description: Failed with Oracle DB/RMAN error [RMAN-03009: failure of backup command on ch1 channel at 12/04/2023 05:03:31 RMAN-10038: database session for channel ch1 terminated unexpectedly RMAN-03009: failure of backup command on ch2 channel at 12/04/2023 05:03:31 RMAN-10038: database session for channel ch2 terminated unexpectedly]
Source, Process: ClOraAgent
Hi Commvaulters, can someone advise me on the network ports that need to be open in order to perform an aux copy between two MAs? We installed a new MA at a remote site, and we want to open only the needed ports between it and the CS (for communication) and between it and the MAs located at the main site. Please note that we are running CV version 11.24. I know that the main ports are 8400 for communication and 8403 for data transfer. Are there any other ones that need to be opened? We want to minimize port openings in order to fully secure the remote MA. Regards.
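(Not a substitute for the official Commvault network port requirements, but once the firewall rules are in place, a quick reachability test from the remote MA can confirm that 8400/8403, plus any additional ports you identify, are actually open. A minimal stdlib-only sketch; the hostnames are placeholders, not from the post:)

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds, else False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholders: substitute your CommServe and main-site MediaAgent names.
for host in ("commserve.example.com", "ma-main.example.com"):
    for port in (8400, 8403):
        state = "open" if port_open(host, port) else "blocked/closed"
        print(f"{host}:{port} {state}")
```

Run this from the remote MA toward the CS and the main-site MAs (and in the reverse direction if two-way connectivity is configured).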
Hello Community, our client wants us to trigger Oracle Data Guard backups only from the secondary node. I couldn't find a setting under "Properties" to set it. Does anyone know how to do it? Or if it's even possible? The primary node is very loaded, so they wish to run all backups from the secondary. Thanks!! Zivile
Hi guys, one quick question. NetApp offers a feature to lock snapshots to prevent their deletion without the need for a SnapLock aggregate. The feature is called "Tamperproof Snapshots": https://docs.netapp.com/us-en/ontap/snaplock/snapshot-lock-concept.html?q=Tamperproof Is this feature supported by Commvault? Could Commvault handle/manage the retention/lock of the snapshots at the ONTAP level? This is for primary snap and SnapVault copies. If not, what would be the best solution to keep the snapshots safe from deletion → SnapLock? Regards, Julian
Hi teams, have a good day. I need some help. One of my clients has a backup issue related to indexing. I have also asked support for help, but the issue has not been resolved; they may still be experimenting. If I delete the data from the problem job and then take a new backup, will it be OK? This issue affects 20 days of jobs. Thanks, Amol
Maybe someone in here can help out and straighten out the Plans part for me. The setup below goes to one storage policy, with several client schedules attached. How would that pan out if done with a "Plan"? How many plans, backup destinations and so on are required? I am not fully aboard with plans, since I've been using the "old" way for the last 10 years. Thanks for any enlightenment. //Henke
Hello, does anyone in the community have experience with offsite backup to an S3 bucket in AWS? We created a bucket in AWS with the "Object Lock" option in compliance mode and a retention of one day. We did the same with a storage policy in Commvault: compliance lock and a retention of one day. We then sent a VSA backup to this storage policy, and everything worked immediately without any errors. However, after the retention expired, the data in the AWS bucket was not deleted. To find out whether something was stuck here, I manually deleted the data in the storage policy. That was yesterday. Today, the data that Commvault wrote is still available in the AWS bucket. There is no delete marker on it, and the bucket has not changed from its size of 7.7 GB or in the number of files and folders. Regards, Thomas
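(One detail worth checking here: S3 Object Lock requires bucket versioning, so a successful DELETE on a locked object only adds a delete marker while the protected versions remain until their retain-until date passes. Since the bucket shows no delete markers at all, it may be that no delete request ever reached S3. A small stdlib-only sketch for auditing this; it assumes a boto3-like S3 client is passed in, and the bucket name in the usage note is a placeholder:)

```python
def audit_object_lock(s3, bucket, max_keys=100):
    """List retained versions and delete markers in an Object Lock bucket.

    `s3` is expected to behave like a boto3 S3 client, e.g.
    `boto3.client("s3")`; only `list_object_versions` is used here.
    Delete markers present => deletes were issued but versions are retained.
    No delete markers at all => no delete request ever reached the bucket.
    """
    resp = s3.list_object_versions(Bucket=bucket, MaxKeys=max_keys)
    versions = [(v["Key"], v["VersionId"]) for v in resp.get("Versions", [])]
    markers = [m["Key"] for m in resp.get("DeleteMarkers", [])]
    return versions, markers

# Usage sketch (requires boto3 and AWS credentials; bucket name is a placeholder):
# import boto3
# versions, markers = audit_object_lock(boto3.client("s3"), "my-cv-bucket")
```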
Hello, I hope someone can help me understand OneDrive active vs. inactive data. Not long ago, we migrated our users over to Microsoft O365 and activated backup of Exchange Online, OneDrive, SharePoint and Teams. For OneDrive we had about 23 TB of data for about 22.5K users. Under Protect > Office 365 we could see the "Backup Size", and likewise a dashboard element showing the top 5 largest servers; OneDrive was one of our largest. But about two weeks ago the OneDrive storage dropped from almost 24 TB down to 2.52 TB. When I go to OneDrive in Command Center and look at "Details" under Backup Stats, I can see that the active size is 2.52 TB and the inactive size is 21.94 TB. From the Microsoft 365 admin console I can see no problems. So my question is regarding the active and inactive data. Might it be that the 2.52 TB is data that has been changed since the last backup, and the inactive portion is old, unchanged data? Or is it that the inactive data is actual data Commvault thinks has b…
I want to configure extended retention for monthly fulls. I would prefer to have the extended full backups written to separate tapes from the regular tape copies, so that a large number of tapes filled with mostly aged data are not tied up. I know I can accomplish this with a second tape copy for just the extended fulls, but that method writes the full with extended retention to both tape copies. Is there a way to send the extended retention copies to a different set of tapes using one secondary tape copy, thereby reducing the total number of tapes while performing only one copy to tape of the extended full backups?
Hello, my VSA proxy server has issues communicating with the media agent. cvping works, and so does Test-NetConnection. The media agent has two NICs, admin and backup. The admin network is not reachable from the VSA proxy. There are two DNS records for the media agent: mediaagent.domain.net and mediaagent.backup.domain.net. Not sure if it matters, but the media agent is registered in the CommCell with the admin hostname, mediaagent.domain.net. Anyway, we have created DIPs and backup networks, so the VSA proxy actually tries to connect to the correct IP address. But in cvfwd.log I get the following:

Switching fd=952 to non-blocking mode
Connecting fd=952 to xx.xx.xxx.12:8403
Completing connect() for fd=952
Successfully connected to xx.xx.xxx.12:8403
=> HTTP/1.1 POST xx.xx.xxx.12:8403
ERROR: cvfwd_iot_wait(): Socket READ failed. Got READ error on ON_DEMAND control tunnel from "vsaproxy" to "mediaagent" via (xx.xxx.x.134, xx.xx.xxx.12): The specified network name is no longer available.
Error in …
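(With two DNS records for one MediaAgent and the CommCell registration on the admin name, it can help to confirm from the proxy which address each name resolves to, and whether the resolved backup-network address answers on 8403. A hedged stdlib-only sketch; the two hostnames come from the post above, everything else is an assumption:)

```python
import socket

def resolve(name):
    """Return the set of IPv4 addresses a hostname resolves to, or None."""
    try:
        return {ai[4][0] for ai in socket.getaddrinfo(name, None, socket.AF_INET)}
    except socket.gaierror:
        return None

# The two DNS records of the MediaAgent, as described in the post.
for name in ("mediaagent.domain.net", "mediaagent.backup.domain.net"):
    ips = resolve(name)
    print(f"{name} -> {ips or 'resolution failed'}")
    for ip in ips or ():
        try:
            # 8403 is the Commvault data-transfer port mentioned elsewhere here.
            with socket.create_connection((ip, 8403), timeout=3):
                print(f"  {ip}:8403 reachable")
        except OSError as exc:
            print(f"  {ip}:8403 not reachable ({exc})")
```

If the admin-network IP shows up for the registered hostname but is unreachable from the proxy, that would be consistent with the tunnel setup succeeding on one path and reads failing on another.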
I often have a wizard block defined inside a process block in order to keep my workflows clean. The problem is that the workflow End activity doesn't actually kill the workflow; it just exits the process block. Is there a way to kill the entire workflow from within the process block?
Hi community, we have a lot of LTO9 tapes with only a few index jobs on them (the jobs are in the megabyte range), and I would like to re-use these LTO9 tapes (17.58 TB) for other tape-out copies. Nobody knows when these jobs will expire. Is there any way to copy the still-valid index jobs to another tape? Pick for refresh, tape-to-tape copy…? Thanks