Keep clients' Commvault software up to date
We manage the Commvault service for a customer. In the CommServe, the customer has more than 1,500 clients. Whenever there is a new client, the customer installs Commvault on it, and we configure the backup in the CS when the new client appears. However, the customer doesn't install the same version as the CS; they always use an older version. Is there a way to push the correct version to new clients automatically?

Today, hundreds of the customer's clients are on an older version, and we would like to bring them up to the same version as the CS. Is there a best practice for keeping clients up to date? We have discussed creating an automatic client group with a rule for clients with the status "Needs update", but we are concerned this could cause issues for the customer or the CS, such as delayed backups or an overloaded CS. How does the CS queue the update jobs if we have a group of 300 clients and push an update to the entire group?
SAP Hana Cross Restore
Hi all, I had to do a cross-DB restore. I started with the system DB and got:

SAP HANA Error [2022-05-05T12:32:19+02:00 P0022113 18093c6fe09 ERROR RECOVERY RECOVER DATA finished with error: recovery strategy could not be determined, Catalog backup not found using Backint(path=/usr/sap/Destination/SYS/global/hdb/backint/SYSTEMDB), Backint cannot find file '/usr/sap/source/SYS/global/hdb/backint/SYSTEMDB/log_backup_0_0_0_0', Backint terminated successfully without connect, Error reading backup from 'BACKINT' '/usr/sap/destination/SYS/global/hdb/backint/SYSTEMDB/log_backup_0_0_0_0', Not all data could be written: Expected 4096 but transferred 0, Backint cannot find file '/usr/sap/source/SYS/global/hdb/backint/SYSTEMDB/log_backup_0_0_0_0' ]. Source: AGI01V2T078, Process: ClHanaAgent

In the HANA agent log I found: Log_Files/ClHanaAgent.log:27604 6bd4 05/05 14:08:25 6500590 ClHanaAgent::ExecuteRestore() - hanaRecoverCmd=[/usr/sap/HHS/HDB4
A way to see or predict the size of a VM guest files restore before running the job?
When restoring guest files from VMs, I should be able to see how big the restore will be, right? I want to see the size of the data so I can predict how much space will be left on the destination VM after the restore. If I select a specific folder in the restore job and then click "List media and size", it doesn't show me the size of the folder I selected. Am I looking in the wrong place, or does this feature not exist? Thank you =)
Error Code: [24:64] Description: There was nothing to restore - subclient(s) not backed up
Hi, I'm getting the error [24:64] Description: There was nothing to restore - subclient(s) not backed up. When I restore the full virtual machine, the restore works normally, but when I restore guest files and folders, Commvault shows this error. I can also browse the files normally. Does anyone have any idea what might be going on? The restores are from Nutanix AHV.
Disk Performance Tool - average read/write throughput - DDB disk
The documentation recommends using the following default parameters to verify that the average read throughput of the disk is approximately 600 GB per hour and the average write throughput is approximately 700 GB per hour for a disk volume used as a mount path of a disk library: BLOCKSIZE of 65536, BLOCKCOUNT of 16384, THREADCOUNT of 6 (each thread uses one 1 GB file), and FILECOUNT of 6. What are reasonably good values for the average read and write throughput of a disk used as a DDB disk, rather than as a mount path of a disk library? And what parameters (blocksize, blockcount, threadcount and filecount) are suggested to test a disk reserved to act as a DDB disk?
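For context, the documented defaults are internally consistent: 65536-byte blocks times 16384 blocks is exactly 1 GiB per file, so six threads with one file each exercise 6 GiB of test data in total. A quick sketch of that arithmetic (parameter names taken from the question; the snippet only checks the numbers, it does not run the tool):

```python
# The documented Disk Performance Tool defaults, and what they imply
# about the amount of test data generated.
BLOCKSIZE = 65536      # bytes per I/O
BLOCKCOUNT = 16384     # blocks written per file
THREADCOUNT = 6        # one file per thread
FILECOUNT = 6

bytes_per_file = BLOCKSIZE * BLOCKCOUNT   # 65536 * 16384 bytes
gib_per_file = bytes_per_file / 2**30     # exactly 1 GiB, matching "1 file of 1 GB"
total_gib = gib_per_file * FILECOUNT      # 6 GiB of test data in total

print(gib_per_file, total_gib)
```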
worm workflow question with Azure storage container for long term backups
Hi, we are testing backups to Azure Blob with time-based immutability. We have two containers, one for short-term and one for long-term backups. For short-term backups we decided to use the WORM workflow, but our issue is with the long-term backups. Our weekly backups are retained for 7 years, and if we use the WORM workflow it seals the DDB every 365 days (the maximum the workflow allows). We are planning to set this up manually instead: enable immutability on the storage container, enable the WORM option at the storage policy level, and set the DDB seal interval to 7 years. Can someone please verify this setup? Thanks,
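To illustrate why the seal interval matters here: a sealed store can generally only be pruned once everything in it ages out, so stores sealed within the retention window stay on disk alongside the active one. The estimate below is a generic back-of-the-envelope calculation, not Commvault's documented pruning behaviour; the 365-day cap and 7-year retention come from the question.

```python
import math

def concurrent_stores(retention_days: int, seal_days: int) -> int:
    """Rough estimate of DDB stores coexisting on disk: stores sealed
    within the retention window, plus the currently active store."""
    return math.ceil(retention_days / seal_days) + 1

# Workflow maximum: seal every 365 days with 7-year retention.
print(concurrent_stores(7 * 365, 365))      # ~8 stores on disk at once

# Manual setup from the question: seal every 7 years.
print(concurrent_stores(7 * 365, 7 * 365))  # ~2 stores (1 sealed + 1 active)
```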
HPE StoreOnce: Catalyst over SAN. Grid Store possible ?
Hi there. Can a StoreOnce Catalyst device be presented to two MediaAgents and shared, to leverage load balancing and fault tolerance? On all the projects where I have encountered StoreOnce, it was either over LAN or attached to a single MediaAgent. Also, can we use a Linux MediaAgent with Catalyst over FC? The documentation says a MediaAgent on Linux is supported, but it is not very clear on how the device will be "seen" at the OS level and whether additional Linux drivers or software are required. Thanks, Abdel
Encryption Key management via built in Commvault
Hello all, we are working on encrypting all of our backup jobs via software encryption on the policies. While setting it up, I was curious how the "No Access" option works. Would we be given the option to store the decryption key somewhere else, or is it all stored in Commvault regardless? If it is stored in Commvault, how do we get to the key to save it for later decryption use? I know "Via Media Password" stores it on the library, and I now wonder whether it is possible to get to that decryption key as well. Thank you all for the help! (Sorry if I didn't make this clear; I will try to clarify if there is any confusion.)
Third Party KMS
Hi team, if we use a third-party key management server such as AWS KMS with Commvault, will there be any impact on backup and recovery throughput or performance? I am assuming that encryption-key retrieval is faster when the keys are in the CS database than when they have to be retrieved from a third-party KMS; let me know if my understanding is not right. Also, during a backup or recovery job, does encryption-key retrieval from the CS DB or the third-party KMS happen only once, or continuously for each and every block/chunk?
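The question alludes to the envelope-encryption pattern commonly used with external KMSs: a data-encryption key is fetched or unwrapped once per job and then reused for every chunk, so per-chunk work never pays the KMS round trip. The toy sketch below illustrates that pattern only; `fake_kms_unwrap`, the XOR "cipher", and all names are hypothetical stand-ins, not Commvault's actual implementation.

```python
import os
from functools import lru_cache

KMS_CALLS = 0  # counts simulated round trips to the external KMS

def fake_kms_unwrap(wrapped_key: bytes) -> bytes:
    """Stand-in for a network round trip to an external KMS."""
    global KMS_CALLS
    KMS_CALLS += 1
    return bytes(b ^ 0xFF for b in wrapped_key)  # toy "unwrap"

@lru_cache(maxsize=None)
def job_data_key(job_id: int, wrapped_key: bytes) -> bytes:
    # Cached per job: only the first chunk of a job pays the KMS round trip.
    return fake_kms_unwrap(wrapped_key)

def encrypt_chunk(job_id: int, wrapped_key: bytes, chunk: bytes) -> bytes:
    key = job_data_key(job_id, wrapped_key)
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(chunk))  # toy XOR cipher

wrapped = os.urandom(16)
for _ in range(1000):                     # 1000 chunks in a single job...
    encrypt_chunk(1, wrapped, b"chunk-data")
print(KMS_CALLS)                          # ...but only one KMS round trip
```

Under this pattern, the latency difference between a local key store and an external KMS would show up once per job rather than once per chunk; whether Commvault behaves this way is exactly what the question asks.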
Carbon Black and Deploying Commvault Client to Windows Clients
I am on Commvault version 11.24.29. We have Carbon Black in our environment, and it blocks the install of the Commvault client. The IT security team is asking me for the location of the "app" (which I assume is the Windows client package) that gets pushed to a new client. Where is the Windows client package located on the CommServe, and what is the path on the client where the package gets copied before the install attempt?
Exchange Search Index Restore
Hi team, I would like to clarify a few things about the Exchange index restore. In my environment we have a separate index folder for the Exchange index catalog/metadata. May I know the following:
1. Will the restored index be rebuilt from the last hour it was backed up?
2. What is the procedure to rebuild the index after a restore?
3. Will the restored data conflict with the latest backup?
I may need your guidance on this. Thanks.
1-Touch Linux - Host file
Hi, I'm trying to do a 1-Touch restore on Linux, but we do not have DNS in the network used for the restore. Is it possible to exit the YaST2 configuration and edit the hosts file before continuing? (I need to add some DNS entries to /etc/hosts.) I know there is a feature on Windows where F6 brings up the CMD, which solved the problem on that platform. Hope you can help.