Kubernetes Container Path
Hello Commvault Community,

My name is Kamil, and I need your help with Kubernetes. The client configured backups on a Kubernetes cluster. Unfortunately, when an attempt is made to back up components that have a PVC attached, an auxiliary container is created whose task is to copy data from the persistent volume. The Pod is created automatically by Commvault, along with the image's path. The client's configuration does not allow traffic to the official Docker repositories, so the Pods hang in the ImagePullBackOff state. They have already tested manually changing the image address of a broken Pod to the same image in their private Docker registry, which was successful. The client sent an example of the configuration they use.

For example, for Pod rabbitmq-0 with a PVC attached, the Pod data-xyz-cv-1816940 was created. In the Pod configuration:

    spec:
      containers:
      - command:
        - /bin/sh
        - -c
        - tail -f /dev/null
        image: debian:stretch-slim

I am asking for your help on how to point these auxiliary Pods at the client's private registry.
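As an illustration of the manual workaround described above (retargeting the auxiliary Pod's image at a private registry), here is a minimal sketch. The registry host `registry.example.local` and the helper names are assumptions for the example, not anything from Commvault:

```python
# Hypothetical sketch: rewrite the image reference in a pod spec so it
# points at a private registry mirror instead of Docker Hub. The host
# "registry.example.local" is an assumed placeholder.

def to_private_registry(image: str, registry: str = "registry.example.local") -> str:
    """Prefix a Docker Hub image reference with a private registry host."""
    # Docker Hub references carry no registry host component, so the
    # mirror's hostname can simply be prepended.
    return f"{registry}/{image}"

def patch_pod_spec(spec: dict, registry: str = "registry.example.local") -> dict:
    """Return a copy of the pod spec with all container images redirected."""
    return {**spec, "containers": [
        {**c, "image": to_private_registry(c["image"], registry)}
        for c in spec.get("containers", [])
    ]}

pod_spec = {
    "containers": [
        {"command": ["/bin/sh", "-c", "tail -f /dev/null"],
         "image": "debian:stretch-slim"}
    ]
}

print(patch_pod_spec(pod_spec)["containers"][0]["image"])
# registry.example.local/debian:stretch-slim
```

For this to work in practice, the image (here `debian:stretch-slim`) would also have to be mirrored into the private registry first, e.g. with a pull/tag/push sequence from a host that can reach Docker Hub.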
The snapshot generated from the file scan database is not consistent; the database on this volume should be rebuilt.
Hello all,

Wondering whether anyone has seen this specific error:

"The snapshot generated from the file scan database is not consistent; the database on this volume should be rebuilt."

This is clearly related to a specific volume (C:\). However, the job finished successfully on the first day, and I'm now seeing System State failing. Is the "database" the error refers to the client metadata DB?

Cheers,
Glen
Command Center Database backup daily full only
We want to create a Server Plan for databases that only runs a full backup once a day. We cannot find a way to achieve that: when we configure an RPO, it automatically runs an incremental (which we don't want), and under "Database options" we cannot find an option to completely disable transaction log backups. Am I overlooking something? How can we achieve daily fulls only through the Command Center?
Duplicate Edge Clients
Hi all, maybe you have an idea. We have the problem that all laptops and workstations are re-imaged with SCCM, so the clients get a new operating system. What happens in the CommCell is that we get duplicate clients: new clients are generated with the name xxx____1. When you check the backup history for both clients, you can see that new backups go to the new object. The problem is that the users can't access their old data. The strange part is that not all clients with the new name xxx____1 have the new backups; there are also some clients with a new name xxx____1 whose backup history is actually coupled with the xxx name. I found a registry key, dForceClientOverride, but it's still active: https://documentation.commvault.com/commvault/v11/article?p=107073.htm

We have opened a support case and will work on this issue.
Backup and restore of physical Unix Media Agents
Hello, I have a question about how to back up and restore physical Media Agents. For example, I currently have 4 physical Unix Media Agents with no deduplication (handled by StoreOnce). On the subclient I left the default to back up all content (maybe I need to exclude some folders?). For restore I have found this documentation:

Recovery - 1-Touch for Linux
https://documentation.commvault.com/commvault/v11/article?p=116195.htm

Is this the right way to recover a physical MA that crashed? Another question: before, when I clicked the Help button in the dialog boxes, I was redirected to the right page with all the explanations; now all those links are broken. Thanks!
15-minute backup frequency
We have a potential customer who uses a snapshot backup product to take backups every 15 minutes; it is a sector-based product with a filter driver installed on the server. With the Commvault file system agent we only do backups once a day. Is it possible to use the file agent to do a 15-minute backup, and are there any potential issues? Or should we be using snapshot backups (VSS, disk sectors) rather than the traditional Commvault file system backups, where it reads all the files?
Copy hotfixes is skipped because the media is not available for it.
Hello, I am trying to copy a maintenance release to the software cache, but I get this error message:

Copy hotfixes is skipped for [C:\Users\commvault\Downloads\Commvault_Maintenance_11_22_22_linux-x8664.tar] because the media is not available for it.

I copied the Windows package without any issue. Also, do you know where I can find all the Commvault error codes? Thanks!
Problem with data pruning after sealing a DDB
Hello guys, I need your help. The customer has sealed a DDB. However, they ignored the warning that there was not enough free space on the disk library. As a result, the disk library is now out of space, and the customer has disabled schedules so that no backups run. As a troubleshooting step we tried to delete some jobs assigned to the sealed DDB. There are over 20,000 jobs assigned to this DDB, and we deleted over 12,000 of them. However, Commvault is still showing the same number of jobs, and no data has been pruned from the disk library. Is there any way to force Commvault to free some disk space by removing data assigned to the sealed DDB?

Rgds,
Kamil
During the backup I have this issue:

    /opt/PostgreSQL/8.3/bin/pg_dump: unrecognized option `--lock-wait-timeout=60000'

Commvault 11.20.46, PostgreSQL 8.3. I know that the new version of Commvault no longer supports this version of the database. My question: is it possible to modify the pg_dump command in Commvault (to omit the unrecognized option)? Alternatively, how can I work around this problem?
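One generic workaround pattern for "unrecognized option" errors like the one above is a wrapper placed in front of the real pg_dump binary that silently drops the option the old 8.3 binary does not know. This is a hypothetical sketch, not a Commvault-sanctioned approach; the renamed path `pg_dump.real` is an assumption, and whether Commvault tolerates a wrapper here should be verified with support:

```python
#!/usr/bin/env python3
# Hypothetical wrapper sketch: filter out options PostgreSQL 8.3's
# pg_dump does not recognize, then hand the call to the real binary.
# The path below assumes the original binary was renamed to pg_dump.real.
import os
import sys

REAL_PG_DUMP = "/opt/PostgreSQL/8.3/bin/pg_dump.real"

def filter_args(argv):
    """Remove arguments that the old pg_dump does not recognize."""
    return [a for a in argv if not a.startswith("--lock-wait-timeout")]

# Guard on the binary's existence so the sketch is importable/testable
# on machines where the wrapper is not actually installed.
if __name__ == "__main__" and os.path.exists(REAL_PG_DUMP):
    args = filter_args(sys.argv[1:])
    # Replace this process with the real pg_dump, minus the bad option.
    os.execv(REAL_PG_DUMP, [REAL_PG_DUMP] + args)
```

The idea is that the caller (Commvault) still invokes `/opt/PostgreSQL/8.3/bin/pg_dump` with its usual arguments, and the wrapper strips `--lock-wait-timeout=60000` before the real dump runs.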
Problem with SyFull after v11.22.22
After upgrading from v11.22.17 to Maintenance Release v11.22.22, all our SyFull schedule policies (with the option "run incremental backup before synthetic full") stay in a waiting status after the pre-incremental job finishes. Suspending or killing the jobs is not possible; only cycling the CommServe services helps. After cycling the CS services, the SyFull jobs change to a running state and finish successfully. Is anybody else having this problem after installing MR v11.22.22?

Regards, Alex
Isilon NDMP Incremental - Selection rules?
Hello,

We have an Isilon cluster as primary storage for CIFS and NFS clients. We back it up to tape and disk libraries using 3-way NDMP. Incremental backups can be very large from time to time, so large that it seems like they might be backing up files that have not really changed. For example, the current backup has transferred 1.7 million files. It seems unlikely that 1.7 million files have changed in the 24 hours since the previous (daily) incremental.

So the questions I have are: how are files selected for incremental backup, and is there a way to modify/control it? Is there a way to get a report on what files were backed up in a given job, and what changed in each file that caused it to be backed up?

Thanks,
Ron
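One thing worth checking when incrementals look inflated: dump-style NDMP backups typically select files by change time (ctime), not just modification time (mtime), so a metadata-only operation such as a bulk chmod/chown or ACL change re-selects files whose data is unchanged. This is an illustrative sketch of that selection rule, not the Isilon implementation:

```python
# Illustrative sketch (an assumption about dump-style selection, not
# OneFS code): pick files whose ctime is newer than the last backup.
# A chmod/chown updates ctime, so an otherwise unchanged file is
# re-selected, which can inflate incremental sizes.
import os

def changed_since(root: str, last_backup_epoch: float):
    """Yield paths whose change time is newer than the last backup time."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_ctime > last_backup_epoch:
                yield path
```

Under this rule, a permissions sweep across 1.7 million files the night before would pull all of them into the next incremental even though no file data changed.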
Archiving with Symantec Endpoint Protection 14 Installed
Hi all,

I somewhat discussed this topic in the post below, which has not clearly answered/resolved my issue: https://community.commvault.com/technical-q-a-2/stubb-recall-not-working-183

When disabling SEP completely, the recalls work successfully. Configuring and adding all processes and folders to the AV exclusion list does not resolve the problem. The only method that allows stub recalls to work successfully is configuring the AV on-access scanner to not scan read operations in the folder where the stub is located. Can someone please clarify exactly which processes are involved in the recall of files? I can see cvd.exe, cvods.exe, and clmgrs.exe popping up from a Commvault point of view. Could there be any native Windows process involved that performs a read operation during the recall? Maybe the filter drivers need to be added to the exclusions as well? I have added several processes to the cvmhsm registry key as ExcludedP
Using disk restores gives ORA-27048
Hi!

For time-saving purposes I decided to restore Oracle RMAN backup sets to disk using this documentation link: https://documentation.commvault.com/commvault/v11/article?p=20540.htm

When I try to restore from it, I get ORA-27048:

    RMAN> restore controlfile from '/u02/app/oracle/backup/1165136/794747_PBS_r0u1qblp_1_1';

    Starting restore at 07-05-2021 13:44:34
    using channel ORA_DISK_1
    channel ORA_DISK_1: restoring control file
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of restore command at 05/07/2021 13:44:36
    ORA-19870: error while restoring backup piece /u02/app/oracle/backup/1165136/794747_PBS_r0u1qblp_1_1
    ORA-19505: failed to identify file "/u02/app/oracle/backup/1165136/794747_PBS_r0u1qblp_1_1"
    ORA-27048: skgfifi: file header information is invalid
    Additional information: 2

Backup was made from Oracle Linux x
"To be copied" jobs still pending for a while
Hello community!

On the storage policy, when I right-click the primary copy > View Jobs, I have 6 "To be copied" jobs:

- Job 18586: an AWS instance that I have removed from the subclient; I don't need this job
- Job 18592: an AWS instance that I have removed from the subclient; I don't need this job
- Job 18725
- Job 19195
- Job 19687
- Job 20147

What should I do to let these jobs be copied and remove the red alarm on the storage policy?

Thanks again for your valuable aid!
Why does the WMI service hang?
I also see the same issue in my environment frequently, where backups on multiple servers hang randomly. We then see commands like 'tasklist' timing out on the client, which confirms that WMI is not responding. We have to reboot every week or so to fix it. Does anyone know why WMI hangs? Unfortunately we don't have Microsoft support, so we have not been able to take up this issue with them.
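A hung WMI service can at least be detected before backups fail by running a WMI-dependent command (like the `tasklist` check mentioned above) under a timeout. A minimal monitoring sketch, assuming a 30-second limit is a reasonable threshold:

```python
# Hypothetical health-check sketch: run a WMI-dependent command (e.g.
# "tasklist" on Windows) with a timeout; a timeout suggests WMI is hung.
# The 30-second default is an arbitrary assumption.
import subprocess

def command_responds(cmd, timeout_s: float = 30.0) -> bool:
    """Return True if the command completes within the timeout."""
    try:
        subprocess.run(cmd, capture_output=True, timeout=timeout_s)
        return True
    except subprocess.TimeoutExpired:
        return False
```

For example, `command_responds(["tasklist"])` returning False on a Windows client would be a signal to restart the WMI service (or schedule the weekly reboot) before the backup window rather than after jobs hang.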
HANA database activity at tenant/DB level
Hi,

Another question: we have HANA instances with a couple of tenants (DBs) in them. So: HANA instance1 => TenantDB1 => TenantDB2. If we want to disable the HANA backups (backup activity), we can disable them on HANA instance1. So far so good. But that option doesn't exist at the tenant/DB level. Is there a way to have the same functionality at the HANA tenant/DB level?