Hi Team, in our infrastructure we are facing an issue with our Commvault backup solution. Our backup infra has 3 MediaAgents, but when running auxiliary copy jobs, only one MediaAgent is used for both source and destination. Sometimes all the auxiliary copy jobs hang and fail, and every time we need to reboot the MediaAgent before the auxiliary copy jobs work again. I am looking for any solutions or suggestions for this case.
Hello, we have configured O365 mailbox backups to one of our virtual machine's disks, which will be almost full in a few weeks. We don't want to shorten the retention period (it's 1 year, and it's our policy to keep it that long), so we would like to ask whether it's possible to somehow distribute/divide the backups across separate disks. Or would the easiest way be to attach a new disk and reconfigure the backup location (the backup destination in the plan settings)?
Hi Vaulters, hope everyone is doing well. We have a new Commvault platform that will be installed, and our storage team has asked us what LUN sizes are needed for the backed-up data, so that they can create the LUNs and map them to our MediaAgents. So it's a sizing question. Are there any recommendations from Commvault, or a utility that can be used, to help us decide which storage configuration best fits our needs? Our platform will back up multiple workloads: VMs, file systems, databases (SAP HANA), mailboxes (Exchange), Active Directory, etc. Any recommendation from you guys would be much appreciated. Regards.
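While waiting for an official sizer, a rough back-of-envelope estimate can be done from front-end size, change rate, retention, and an assumed deduplication ratio. This is only a sketch, not a Commvault sizing tool; every number below is a placeholder assumption to be replaced with your own figures:

```python
# Back-of-envelope backup storage sizing sketch.
# All inputs are hypothetical placeholders; replace with your own figures.

def estimate_backend_tb(front_end_tb, daily_change_rate, retention_days,
                        dedup_ratio, growth_factor=1.2):
    """Estimate back-end (LUN) capacity in TB.

    front_end_tb      -- total size of protected data
    daily_change_rate -- fraction of data changing per day (e.g. 0.05 = 5%)
    retention_days    -- how long backups are kept
    dedup_ratio       -- assumed dedup/compression ratio (e.g. 4.0 = 4:1)
    growth_factor     -- headroom for growth (20% by default)
    """
    full = front_end_tb                              # one full baseline
    incrementals = front_end_tb * daily_change_rate * retention_days
    raw = full + incrementals                        # logical data before dedup
    return raw / dedup_ratio * growth_factor

# Example: 100 TB front end, 5% daily change, 30-day retention, 4:1 dedup
print(round(estimate_backend_tb(100, 0.05, 30, 4.0), 1))
```

Dedup ratios vary a lot by workload (databases usually dedupe worse than VMs and file systems), so it is worth running the estimate per workload type rather than once for the whole platform.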
Hello. I will greatly appreciate your feedback/guidance here. I need to enable IntelliSnap for subclients with data on a NetApp source. It's VMware, and the volumes are supposedly "just volumes": not NFS, not CIFS, nor anything specific to a particular application. The customer "doesn't want NDMP in the environment". I see that in the NAS client there are NDMP, NFS, and CIFS options. How can I configure the NAS client to snap "generic" or "ad hoc" volumes (for lack of a better way to describe them) without using the NDMP, NFS, or CIFS agents under the CV NAS client? I am not seeing a way to do this. The storage policy is set up with a snap primary copy for the source and a vault/replica copy to a destination NetApp. The customer doesn't want NDMP used and doesn't want the streaming method used. Is this even possible? Thank you!
Hi, my client changed the port number of his SQL instance to a specific number. His reason was to hide the instance for security purposes. However, this caused Commvault to fail to back up that SQL database, with the error "Failed to validate the credentials for instance...". The client told me that everything that needs to point to that instance has to include the instance name in this form: SQLDB01,port-number\DB01. I wonder, what can I do in Commvault to back up this SQL database? Thank you.
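For context, in standard SQL Server connection strings a static port is normally given as `host,port`, and once a fixed port is used the instance name is no longer needed for routing (the `host\instance` form relies on the SQL Browser service instead). A minimal sketch of the two addressing forms, with made-up server names, which can help verify which string actually connects outside Commvault before troubleshooting the agent:

```python
# Build a SQL Server address string for a client tool (e.g. pyodbc's SERVER=).
# "host,port" targets a static port directly; "host\instance" asks the
# SQL Browser service to resolve the port. All names below are hypothetical.

def sql_server_address(host, port=None, instance=None):
    """Return the server string to use in a connection."""
    if port is not None:
        return f"{host},{port}"          # static port: no SQL Browser needed
    if instance is not None:
        return f"{host}\\{instance}"     # named instance via SQL Browser
    return host                          # default instance on port 1433

print(sql_server_address("SQLDB01", port=14330))        # SQLDB01,14330
print(sql_server_address("SQLDB01", instance="DB01"))   # SQLDB01\DB01
```

If the `host,port` form connects from a generic SQL client on the MediaAgent but Commvault still fails credential validation, the issue is likely how the instance is registered in Commvault rather than the credentials themselves.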
How do I generate the command "qlist backupfiles" for an MS SQL Server client? Like in this link, but for an MS SQL Server database: https://documentation.commvault.com/commvault/v11_sp20/article?p=45143.htm. In particular, how do I specify the <paths path=…> parameter? What should be indicated for a SQL Server database? Thanks. Gustavo
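As a sketch only: for database agents the browse path is usually the database name rather than a file-system path. The fragment below writes a candidate answer file; the exact XML wrapper and attribute names should be taken from the linked documentation page for your service pack, and every name here (client, database, backup set) is a hypothetical placeholder:

```shell
# Write a candidate answer file for "qlist backupfiles".
# For an MS SQL Server agent the path is typically the database name;
# verify the surrounding XML layout against the linked documentation.
# All names below are hypothetical placeholders.

cat > sqlfiles.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!-- wrapper element per the documented answer-file format -->
<paths path="\MyDatabase" />
EOF

grep -q 'MyDatabase' sqlfiles.xml && echo "answer file written"

# Then, after qlogin, something along the lines of:
#   qlist backupfiles -af sqlfiles.xml
```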
Hi, we are running standard streaming backups (configured in Command Center) for virtual machines. We have 3 backup copy destinations for virtual machines:
- primary site - disks
- primary site - tapes (monthly fulls) - extended retention
- secondary site - disks
We would like to configure and run Disaster Recovery and Replication jobs for some of our protected virtual machines, to replicate them from the primary site and create replicas at the secondary site. We would like to run this as periodic (daily) replication of the VM - hot site type. What is best practice in a scenario where we need both traditional backup and a replica at the secondary site: only DR and replication jobs with longer retention, or backup jobs and DR/replication jobs coexisting at the same time? Regards, Przemek
Hi all, reaching out with regard to a customer query that I am dealing with. It was verified earlier with Dev (via 230614-889) that support and certification for RHEL 8.8 with RWP (Ransomware Protection for a RHEL 8.8 MediaAgent) is expected to be completed by the end of August, based on the current timeline. However, upon reviewing https://documentation.commvault.com/2022e/expert/126625_system_requirements_for_ransomware_protection.html, it does not currently list RHEL 8.8 as a supported OS. Could you please confirm whether RHEL 8.8 is supported for RWP? PS: the current environment is on version 11.28. Looking forward to hearing from you.
Hello, I'm searching for a way to automate CommCell migration operations, in order to migrate LTO-5 tape metadata to a new CommCell so that we are able to restore from it. I didn't find any REST API or qcommand to do this. Is there a way to automate these actions, or are they manual operations only? Regards, Christophe
Does anyone know if it is possible to protect ONTAP S3 with Commvault? Only these solutions are supported in the official documentation:
- Cloudian S3-Compatible Object Storage
- Huawei OceanStor Pacific (formerly called "OceanStor 100D" and "FusionStorage")
- MinIO
- Red Hat Ceph Storage 3 S3 endpoints (SSL with hosted domain)
- StorageGRID object storage (S3 compatible)
- Pure Storage FlashBlade
https://documentation.commvault.com/2023e/expert/30015_amazon_s3_overview.html
Would it be possible to protect ONTAP S3 in some other way with Commvault? Is it supported in future releases?
Hello, we have deployed 2 new Windows 2019 machines and performed a disaster recovery of the Commvault database using the recovery assistant tool, and we also installed Commvault on another host to build a Live Sync solution like we had before. The IP and hostname of the CommServe host have changed, but the name of the CommServe client is the same. We installed the failover packages on both the primary and secondary machines as described in the documentation. The problem is that we now have 4 clients in the Failover Assistant. How do we get rid of the restored failover clients? If I delete them, will the new SQL clients also be deleted, since the IPs are the same?
Dear Community, at the beginning of each month I need a report of the sum of "Application Size" for the previous month, independent of "Application Type" and "SubClient", for the clients within a "Client Group". I have tried to find this information using the "Backup Job Summary" report, but I only get the sizes per job, not per client. Which report can I use to meet my requirements? Can someone help me? Thanks a lot!
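As a workaround while looking for the right built-in report, a job-level CSV export (such as one from the Backup Job Summary report) can be rolled up into per-client totals with a few lines. The column names here ("Client", "Application Size (GB)") are assumptions about the export layout and should be adjusted to match the actual file:

```python
# Roll up per-job "Application Size" figures into per-client totals.
# Column names are assumptions about the CSV export layout; adjust them.
import csv
from collections import defaultdict

def app_size_per_client(csv_path):
    """Sum the application size column per client across all jobs."""
    totals = defaultdict(float)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["Client"]] += float(row["Application Size (GB)"])
    return dict(totals)

# Example with a tiny fake export:
with open("jobs.csv", "w", newline="") as f:
    f.write("Client,Application Size (GB)\n"
            "srv01,120.5\nsrv01,80.0\nsrv02,40.25\n")
print(app_size_per_client("jobs.csv"))
# -> {'srv01': 200.5, 'srv02': 40.25}
```

The same roll-up can be filtered to one client group by first exporting only that group's jobs, or by joining against a client-to-group list before summing.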
Dear all, I hope you can help me better understand when the Synthetic Full option is available. In a VMware virtualized environment, all our VMs are grouped into a number of subclients, for which I can perform a Synthetic Full backup after right-clicking on the subclient name and selecting the Backup command from the popup menu. However, if the same sequence is run on one of the VMs from the Client Computers tree, the Synthetic Full option is not available. In both cases the subclient belongs to the Virtual Server agent type. What am I missing? Thank you in advance, Gaetano
Hello Commvault Community, I hope you're all doing well. I wanted to share a recent experience I've had with Commvault, and I'm seeking your insights and advice. I'm facing some challenges with the Tomcat services, and I've attached screenshots to illustrate the issue: the install logs and the Tomcat logs. I'm reaching out to the community to see if anyone has encountered a similar issue with Tomcat services during installation, or if you have any advice on how to troubleshoot this further. Thank you all for your support and insights. I look forward to your feedback and suggestions.
Hello, we were doing an NDMP restore of some CIFS files. The snapshot we need is present in Commvault, visible on the array, and also on the NetApp. We are restoring from the vault copy. NDMP credentials are configured on both the primary and secondary sites. The restore fails with a confusing error: "Operation failed for path [/de04xxx022fsa01/v_c_msql_BD_PROD_VV_1/.snapshot/SP_2_3566761_348xxx_1694037804]. Please check if volume [/de04xxx022fsa01/v_c_msql_BD_PROD_VV_1] and snapshot [SP_2_3566761_348877_xxxxx37804] exist." I can browse the snapshot easily when using recovery in Commvault; the files are visible and the snapshot is healthy. Different MediaAgents were tested. When restoring I checked the "restore from snapshot" box, but I also tried without checking it; the issue is the same. We are restoring to the same SVM/machine, but we also tried restoring onto our MediaAgent, and that failed too. I am wondering where the problem is. Has anybody experienced this behavior? Thanks for every hint. Laci
I'd like to ask a question about Commvault license SKUs. I want to know the difference between the CV-DR-VM and CV-BKRC-VM10 SKUs. Is CV-DR-VM not required when purchasing CV-BKRC-VM10? Do I need to purchase CV-DR-VM to use the VMware replication (Live Sync) capabilities, or can I also use that feature with CV-BKRC-VM10?
Hello, we have set up a WORM S3 library in Commvault, specifically for second copies. Commvault advises against performing DDB verification on cloud storage. I'm curious whether it's possible to start a recopy (in case of bad chunks) for this WORM configuration. If so, I'd like to clarify whether, once the recopy is finished, the job will keep the same job ID in the immutable S3 storage but reference the newly copied objects, removing the references to the previous objects associated with that job. Furthermore, since we're dealing with immutable storage, how do we ensure that the new (recopied) objects maintain a link or reference to the same job ID within the immutable S3 bucket, given that the metadata of that job is also locked and can't be modified or deleted? Thanks.
Hi everyone, my team has 6 full-time data protection specialists who administer our large Commvault (CV) environment (plus 12 storage admins - PowerMax, PowerStore, Unity, NetApp, etc.). I'm looking for a full-time senior to join the team, but it's hard to find people with senior-level experience. Our team switched over from NetWorker, NetBackup, and others to CV about 3+ years ago. We've done all the CV education training, but it's really not enough. I'm curious if anyone has recommendations for increasing our team's knowledge and skills, mainly around standard backup & recovery, IntelliSnap, database agents, etc. FYI: we back up Windows (2016+), HP-UX, AIX, Solaris, Linux, etc., plus Oracle, SQL, Iris, and other databases, and no cloud to speak of. We run about 570k jobs a month with 99.83% success, on 4,300 VMs and 500 servers in CV, using all disk-based backup targets. We manage 35 PB of storage, of which a portion is protected with CV. Your insights are appreciated.
Hello community, I hope somebody can help me work around a problem we will have for some time. We have an Azure-based MediaAgent which is very short on memory. It will take some weeks before we can expand it, but in the meanwhile Commvault processes restart and the DDB backup often fails. Is there any way to reduce the memory used by CV processes, even if it slows the backups down a little? I mean something like reducing the number of readers, or similar. Thanks a lot, Gaetano
I'm having a problem in Commvault when performing backups: the Auxiliary Copy always shows the error below. Has anyone had this problem and knows how I can solve it? Error Code: [13:138] Description: Error occurred while processing chunk in media [V_504103], at the time of error in library [Pool_Disk_VMs_Definitivo] and mount path [[xxx_bkp01] C:\Commvault_Pool_Disk_VMs_Definitivo], for storage policy [Policy_Pool_Disk_VMs_Definitivo] copy [Copia-Fita-Definitiva-Manual] MediaAgent [xxx_bkp01]: Backup Job. Encountered an I/O error while performing the operation.