We have a legacy setup in which a share is both archived and backed up using different pseudo-clients. One pseudo-client takes the IntelliSnap backup, and another pseudo-client runs an archive job on the same location. I am a bit confused by this setup. What is archiving actually doing? My understanding was that it would create a stub and move the data from primary storage to the MediaAgent, saving some space on primary storage, so the backups would be smaller.

Also, when we restore the backups, would the restore bring back the stub or the actual data? And what happens during a restore/recall of the archive: does the stub get replaced by the file, and would that mean that if another client runs a backup on the same share, it will capture all the data rather than the stub?

Setup:
NetApp share: NetApp 9.84 Cluster-Mode
Pseudo-client ZCluster1: snapshot backup runs (IntelliSnap); the backup copy goes to the MA (has DDB/IndexCache). Location it protects: “/svm-hsc-cifs/SharedDrives”
Another…
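For checking what state the files on the share are actually in, here is a minimal sketch, assuming (this is an assumption, not confirmed by the post) that archive stubs on the Windows share are left as files carrying the FILE_ATTRIBUTE_OFFLINE flag; the UNC path is a placeholder:

```python
# Sketch: distinguish archive stubs from recalled (resident) files on a
# Windows share. Assumes stubs carry the offline attribute; a recalled
# file loses it, so a subsequent backup would capture the full data.
import os
import stat

def looks_like_stub(path: str) -> bool:
    st = os.stat(path)
    # st_file_attributes is only populated on Windows.
    attrs = getattr(st, "st_file_attributes", 0)
    return bool(attrs & stat.FILE_ATTRIBUTE_OFFLINE)

share = r"\\svm-hsc-cifs\SharedDrives"  # hypothetical UNC path to the share
for root, _dirs, files in os.walk(share):
    for name in files:
        full = os.path.join(root, name)
        state = "stub" if looks_like_stub(full) else "resident"
        print(f"{state}\t{full}")
```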
Hello guys, I’m running an MS SQL backup via a VM in Nutanix, and even though I disabled IntelliSnap on the client, a snap backup starts to run every time a backup starts. I checked in the Command Center and IntelliSnap is disabled in the configuration. My concern is that the snap license is being consumed, and I don’t want it to be. Is there something I can do to permanently disable IntelliSnap in this case?
Hi all! I would like to discuss the following situation. We are unable to change retention under Storage Policy Properties → Storage Policy Copy of Primary. After changing the value and clicking OK, we see no real change applied. Is it possible that this setting is blocked somewhere, or overridden by another setting?
Hello, during regular DR tests we see problems with job results directory usage for larger databases (1 TB+):

Error Code: [30:375]
Description: Encountered error while writing to the file. Error code . Please make sure there is enough disk space on [C:\Program Files\Commvault\ContentStore\iDataAgent\JobResults\CV_JobResults\2\0\4455051\RstStage].
Source: Clientname, Process: SQLiDA

CV stages the whole database there, so we have to add an additional 3-5 TB disk for that on each larger machine at the DR site. We also have bigger SQL machines that do not consume that much space in the job results directory, even when the job results directory is smaller than the database being restored. Does anybody know why this differs between SQL systems?
Getting an error -> “Could not complete SSO. Check the configuration and retry.” while creating a customized package to install on a laptop.
Hello All, I am new to Commvault. I am trying to create a customised Commvault package to be installed on a laptop client. However, on the step “Select how you want to authenticate with the server” I selected single sign-on and got the error below:

Could not complete SSO. Check the configurations and retry.

I am able to log in to the CommCell Console with SSO using the same domain user, but while creating a package I get the error described above. Could you please help to get this issue resolved? Thanks in advance!
I have servers (VMs and servers with agents) that are being decommissioned. The request from my boss is to take a full backup and keep it after each server is deleted. The VMs are in a subclient and I am worried that their backups will be deleted if the VMs are removed from the subclient. Similarly, I am concerned the backups will be lost if a SQL agent is removed. I am looking for documentation or advice that can guide me through removing a server from the backup schedule while retaining its past backups. Then comes the question: “How would you restore a backup of a decommissioned server?” Any help for this newbie is appreciated.
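For the “remove from schedules but keep the data” part, here is a minimal sketch using cvpysdk (Commvault’s Python SDK); the host name, credentials and client name are placeholders, and the availability of retire() depends on your SDK/CommServe version, so treat this as an assumption to verify:

```python
# Sketch: retire a decommissioned client instead of deleting it.
# Retiring releases the license and stops new backups, while existing
# backups are expected to remain restorable until retention expires --
# confirm this behaviour on your own version before relying on it.
from cvpysdk.commcell import Commcell

commcell = Commcell('webconsole.example.com', 'admin', 'password')  # placeholders
client = commcell.clients.get('decommissioned-vm01')                # placeholder name
client.retire()
```

Restores of a retired client are then typically driven from another client or the CommCell Console, browsing the retired client’s backup history as the source.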
Hello guys, a customer asked me the following question: “I deployed an additional Command Center server (Web Console). The server is in the DMZ and I really need to hide all components other than https://x.x.x.x/webconsole, so that, for example, https://x.x.x.x/adminconsole is not available. This server will be an Edge Drive proxy server. I tried changing parameters in the Apache files, but after a restart all settings return to default, i.e. after typing https://x.x.x.x/ I am automatically redirected to https://x.x.x.x/adminconsole.” Is there any way to change the default https://x.x.x.x/ URL redirection from https://x.x.x.x/adminconsole to https://x.x.x.x/webconsole? Rgds, Kamil
I have configured SQL Live Sync, and there is a schedule configured to run after the backup job completes. Everything was going fine, but suddenly the sync operation converted to a full restore without any change on the source server, and no change occurred on the destination side either. I am asking about this behaviour because it does not seem logical for it to change to a full restore without any change! CommServe, SQL clients and MA version is 11.23.
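One common reason a log-based sync has to fall back to a full restore is a break in the log backup chain on the source (for example, an out-of-band native backup). Here is a minimal sketch that checks msdb backup history for such a break; the connection string and database name are placeholders:

```python
# Sketch: detect log-chain breaks in msdb backup history on the source
# SQL Server. Consecutive log backups should chain: each log backup's
# first_lsn must equal the previous log backup's last_lsn.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=source-sql.example.com;DATABASE=msdb;Trusted_Connection=yes;"  # placeholder
)
rows = conn.execute(
    """
    SELECT backup_start_date, type, first_lsn, last_lsn
    FROM dbo.backupset
    WHERE database_name = ?
    ORDER BY backup_start_date
    """,
    "MyDatabase",  # placeholder database name
).fetchall()

prev_log_last = None
for started, btype, first_lsn, last_lsn in rows:
    if btype != "L":          # only log backups participate in the chain
        continue
    if prev_log_last is not None and first_lsn != prev_log_last:
        print(f"Log chain break before backup started {started}")
    prev_log_last = last_lsn
```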
Hi, in an MSP Commvault environment, customers connect through a Network Proxy (portal) and the CS is protected behind firewalls and layers of security. A customer wants to log in with SSO, authorizing against their Azure AD. Is this possible, and if so, how can I accomplish it without compromising security? Regards, /Patrik
A customer is protecting Oracle running in a failover cluster between Solaris Local Zones. The customer is currently using TSM and wants to switch to Commvault. In TSM the Oracle databases are protected with the Oracle agent running in the Solaris Global Zone only; there is no agent in the clustered Local Zones. This means that no matter which Local Zone the Oracle DB is running in, backups and restores work without setting this up as a cluster client. The question is whether this is possible in a Commvault setup, or whether we are required to install the agent in each Local Zone and then configure the cluster client in the traditional way. I would appreciate input from anyone with experience of a similar setup on whether this is at all possible and, if so, any pros & cons. Regards, /Patrik
How do I deal with DFS backup? I have multiple servers replicating data between them. Right now the DFS shares are backed up using a \\unc\path from a Windows subclient, but many files are skipped. I’m surprised that there is no DFS client option to back up shares from multiple servers. If I deploy the agent everywhere, all servers consume my licenses and back up the same data multiple times.
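To pin down which files the UNC-path backup is skipping (for example, because replication had not converged), a minimal sketch that diffs the file lists of two replica servers; server and share names are placeholders:

```python
# Sketch: compare the relative file lists of two DFS replica servers.
# Files present on only one replica are candidates for the "skipped"
# files seen in the UNC-path backup.
import os

def listing(root: str) -> set[str]:
    found = set()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            found.add(os.path.relpath(os.path.join(dirpath, name), root))
    return found

a = listing(r"\\server1\dfs-share")  # placeholder replica 1
b = listing(r"\\server2\dfs-share")  # placeholder replica 2
print("Only on server1:", sorted(a - b)[:20])
print("Only on server2:", sorted(b - a)[:20])
```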
Thanks Bill for the detailed documentation. This helped me while setting up NFS for Teradata backup purposes. I see that performance for the jobs written over NFS is not the same as for other jobs using the same disk library. NFS jobs run with an average throughput of 100 GB/hr, while other agents’ jobs to the same disk library run at TB/hr. I ran a CV performance check on the disk cache and that is also much better (throughput in TB/hr). Anything to check further? I have logged a support case, but any advice is appreciated.
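To separate raw NFS mount throughput from job-level overhead, a crude sequential-write test against the mount point can help; the path below is a placeholder, and the 4 GiB size is arbitrary:

```python
# Sketch: time a large sequential write to the NFS mount to see whether
# the mount itself, rather than the backup job, caps throughput.
import os
import time

target = "/mnt/nfs_teradata/throughput_test.bin"  # hypothetical mount point
block = b"\0" * (4 * 1024 * 1024)                 # 4 MiB per write
total_mib = 4 * 1024                              # 4 GiB in total

start = time.monotonic()
with open(target, "wb") as f:
    for _ in range(total_mib // 4):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())                          # force data out to the server
elapsed = time.monotonic() - start
print(f"{(total_mib / 1024) / (elapsed / 3600):.1f} GiB/hr")
```

If this test also lands near 100 GB/hr, the bottleneck is below Commvault (mount options, rsize/wsize, network); if it runs at TB/hr, the job configuration is the place to look.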
This Custom QI Time Alert is not working for me: https://cloud.commvault.com/webconsole/softwarestore/store.do#!/137/681/12839 I changed MMCONFIG_AVG_QI_TIME_LIMIT_CRITICAL_EVENT_PERCENT to trigger the alert, but the alert returns:

Alert Rule Name: Not Applicable
SIDBStoreId: Not Applicable
SIDBStoreName: Not Applicable
StoragePolicy: Not Applicable
CopyName: Not Applicable
MediaAgent: Not Applicable
DDBPath: Not Applicable
QITimeInMilliseconds: Not Applicable
Condition Cleared: Not Applicable

<?xml version="1.0" encoding="UTF-8"?>
<App_SetCustomRuleRequest>
  <queryDetail doesQuerySupportOutputFilter="1" frequency="86400" isDisabled="0" isOverwriteAssociationAtAlertAllowed="1" isPrimaryKeyPresent="0" isQueryModifyEnabled="1" isSystemCreated="0" queryCriteriaName="Custom QI Time Alert" queryDescription="Custom QI Time A
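For re-submitting the rule after editing the threshold in the XML, a minimal sketch assuming the standard “qoperation execute -af <file>” command-line pattern (run qlogin first); the file name is a placeholder:

```python
# Sketch: push the edited App_SetCustomRuleRequest XML back to the
# CommServe from a machine with the Commvault CLI installed.
import subprocess

# Assumes an authenticated CLI session (qlogin) already exists.
subprocess.run(
    ["qoperation", "execute", "-af", "custom_qi_time_rule.xml"],  # placeholder file
    check=True,
)
```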
Hi guys, I need your help. Here’s the scenario: the customer needs to mount a disk to a client before the Scan phase; the backup then needs to run, and after the job completes the disk needs to be unmounted. They decided to set PreScan and PostBackup scripts with the following configuration. With full backups everything works fine. However, if an incremental backup runs and there were no changes since the last backup, the Backup phase does not run. As a result the PostBackup process does not run and the disk is left mounted on the client. I suggested the customer change the settings as follows, giving two sets of scripts:

PreScan process - mount the drive/disk
PostScan process - unmount the drive/disk
PreBackup process - mount the drive/disk again
PostBackup process - unmount the drive/disk

However, the customer is not very happy with the solution I provided. Do you know any different way to mount a disk to the client and unmount it after the job completes, even if the backup job ends at the Scan phase?
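One way to keep the four hooks manageable is a single helper script used by all of them, so mount/unmount logic lives in one place. A minimal sketch for a Windows client using the built-in mountvol command; the volume GUID and drive letter are placeholders:

```python
# Sketch: one script for PreScan/PreBackup ("mount") and
# PostScan/PostBackup ("unmount"), e.g. invoked as:
#   python mount_disk.py mount
#   python mount_disk.py unmount
import subprocess
import sys

VOLUME = "\\\\?\\Volume{00000000-0000-0000-0000-000000000000}\\"  # placeholder GUID
LETTER = "X:"                                                     # placeholder letter

def mount() -> None:
    # mountvol assigns the drive letter to the volume.
    subprocess.run(["mountvol", LETTER, VOLUME], check=True)

def unmount() -> None:
    # /p dismounts the volume and removes the drive letter.
    subprocess.run(["mountvol", LETTER, "/p"], check=True)

if __name__ == "__main__":
    mount() if sys.argv[1] == "mount" else unmount()
```

Making unmount safe to call when the disk is already dismounted (e.g. ignoring the error) keeps the double PostScan/PostBackup unmount harmless.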
For the Data Aging admin job, how can I see the amount of space freed after the job completes? I can see it at the SO level on the MediaAgent, but I would expect to see some information about the amount of space released when clicking into the job details, and nothing is shown there. Is there a way to get this kind of granular information?
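As an external workaround, a minimal sketch that samples the library path’s free space before and after the job; note that with deduplication, physical pruning usually lags the Data Aging job itself, so the delta only appears once pruning has run. The library path is a placeholder:

```python
# Sketch: approximate space released by Data Aging by sampling free
# space on the disk library's mount path before and after.
import shutil

LIBRARY_PATH = r"E:\DiskLibrary"  # hypothetical mount path of the library

def free_gib() -> float:
    return shutil.disk_usage(LIBRARY_PATH).free / 1024**3

before = free_gib()
input("Run the Data Aging job, wait for pruning to finish, then press Enter...")
print(f"Released roughly {free_gib() - before:.1f} GiB")
```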
Working on a hardware and OS refresh. My first thought was to attach the new machine to LiveSync (an active-passive configuration is already present), allow it to sync, and make the new machine the active one. Is that possible? Or is it better to follow the CommServe hardware migration procedure? I am not sure whether the OS (Windows 2019 vs 2012) and SQL (2012 vs 2016) version mismatch would allow me to do so. The first attempt was not successful, as the failover instance is not able to communicate with the other nodes, throwing “Failed to get current active node from config file for node” errors. I don’t want to waste time troubleshooting, as it may not be supported at all.
Hi, I am backing up an AIX (7.2) server with an Oracle database (19c); the database size is ±20 TB. The problem I am having is that the job results folder keeps filling up. Until now my colleagues have just been extending the volume to make the problem go away, but we are now sitting with a job results folder that is 50 GB in size. I arranged root access to the server in question and found that the problem is actually in the CV_CLDB folder, where there is a CacheTable.dat file that is currently around 35 GB and a CacheTable.idx that is currently around 14 GB. My understanding is that these should shrink when pruning takes place, which is set for 3 days, but it would appear this is not happening. I have tried changing the pruning frequency to check if it makes a difference, but nothing changed. I am not sure how to reduce the size of these files at this point, and I don’t want to keep throwing space at the problem. The Commvault version installed is 11.20.36.
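To confirm it really is CV_CLDB/CacheTable.* growing (and to watch it over time), a minimal sketch that reports the largest files under the job results tree; the path below is a placeholder for the client’s actual job results directory:

```python
# Sketch: list the ten largest files under the job results directory
# on the AIX client, to verify where the space is going.
import os

ROOT = "/opt/commvault/iDataAgent/jobResults"  # hypothetical path

sizes = []
for dirpath, _dirs, files in os.walk(ROOT):
    for name in files:
        full = os.path.join(dirpath, name)
        try:
            sizes.append((os.path.getsize(full), full))
        except OSError:
            pass  # file pruned while we were walking

for size, path in sorted(sizes, reverse=True)[:10]:
    print(f"{size / 1024**3:6.1f} GiB  {path}")
```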