Commvault Q&A, release updates, and best practices
We have some trouble with paths to a Synology NAS going offline. Currently we have the DNS name in the path, and I'd like to change that to refer to the IP address instead. I'm pretty sure I can do it without any issues, but better safe than sorry. Does anyone see any risks in changing \\DNSname to \\IPaddress for the path(s)? //Henke
Deduplication transaction log backup
Hi,

We have a little discussion going on about whether transaction log backups are non-deduplicated by default, even though the storage policy copy has deduplication enabled. Or do we need to configure a non-dedup storage policy for this? My understanding is that a log storage policy exists to allow different retention times and copies, and that the backup type decides whether the data gets deduplicated or not. Can someone clarify this for me?

Kind regards,
Danny
The SDT data transfer was terminated on a request from the Job Manager on 11.20.40
Running v11 SP20.40. Over the last 3 weeks we have been seeing a ton of occurrences of this error across multiple environments: [The SDT data transfer was terminated on a request from the Job Manager.] I work in an MSP environment with the CommServe in one datacenter and the MediaAgents spread out around the country. We have a minimum of 10GB on our datacenter links from MA to MA and MA to CS. I'm curious if anyone has seen this and has any resolution or troubleshooting steps. Thanks!
Commvault Feature Release 11.23 is out.
Commvault Feature Release 11.23 is out, and here are some of the capabilities within this release:

Newsletter: https://documentation.commvault.com/11.23/assets/new_features/newsletters/Newsletter_11.23_External.pdf
Feature Release 11.23 Documentation Page: https://documentation.commvault.com/11.23/expert/133535_feature_release_1123.html

Complete Backup And Recovery
- Email Alerts for Client Computers That Might Miss or Might Have Missed the SLA
- For Azure, Specify the Disk Type for the Restored Virtual Machine
- Automatically Retire Laptops That Are Offline for a Specific Length of Time

Complete: Manage New Workloads
- Granular Restores for MongoDB Collections and Databases

Complete: Protect Virtual Environments
- Replicate Google Cloud Platform Instances Across Regions
- Live Recovery of Virtual Machines for Hyper-V
- Convert Virtual Machines from Nutanix AHV to VMware

Journey To The Cloud
- Indexing Version 2 for Amazon Hypervisors
- Convert Virtual Machine Instances from Amazon to GCP
- Convert Virtual Machines
Anyone using Automation using the API?
Hello all,

First off, a disclaimer: I work as a TAM for Commvault. Over the last year or so I have started to get more questions from my customers about using APIs to help automate common tasks, or for far more complex integrations. I am wondering if you are doing the same yourself, and I am curious to hear how you are finding it and whether you have any questions or feedback that could be useful for others in the community to learn from.
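For anyone starting out with API automation: a common first step is authenticating against the CommServe web service. Below is a minimal, hedged sketch in Python using `requests`; the host/port in `API_BASE` is a placeholder, and you should verify the exact endpoint and response fields against the Commvault REST API documentation for your feature release. The documented pattern is to POST a Base64-encoded password to the Login operation and reuse the returned token as an `Authtoken` header on subsequent calls.

```python
import base64

# Hypothetical CommServe web-service base URL -- replace with your own.
API_BASE = "http://commserve.example.com:81/SearchSvc/CVWebService.svc"

def build_login_payload(username: str, password: str) -> dict:
    """The Login call expects the password Base64-encoded."""
    return {
        "username": username,
        "password": base64.b64encode(password.encode()).decode(),
    }

def login(session, username: str, password: str) -> str:
    """POST /Login and return the Authtoken for subsequent requests.

    `session` is a requests.Session (or compatible) object; verify the
    response field name ("token") against your release's API docs.
    """
    resp = session.post(
        f"{API_BASE}/Login",
        json=build_login_payload(username, password),
        headers={"Accept": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()["token"]
```

Commvault also publishes an official Python SDK (cvpysdk) that wraps this flow, which may be the easier route for more complex integrations.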
Intellisnap / CommVault snapshot copy mirror to offsite NetApp storage cluster
We converted from IntelliSnap to Commvault for support. We have been using the Storage Policy Snapshot Copy of type Mirror to selectively copy from Vault volumes to SnapMirror volumes at an offsite NetApp storage cluster. We were told that Mirror snapshot copies would no longer be supported in future service packs. We have had issues where the Mirror became stale or broken. My boss, a NetApp guru, ripped out the snapshot mirror storage policy and created SnapMirrors by hand, but he is retiring and we are trying to get back to a more automated and monitored Commvault-managed process. Is anyone familiar with this, and can you explain what Commvault convention is used in the current service packs to provide some sort of DR capability from a Vault copy to a SnapMirror located on an offsite NetApp cluster?
SAP HANA failed to restore out of place
Hi community,

I have the following problem when trying to restore an SAP HANA database out of place:

SAP HANA Error [hdbsql execution Failed.448: recovery could not be completed: Buffer does not provide complete page.; $EndPos$=0x00007f60c9932000; $ReadPos$=0x00007f60c9913000; $PageSize$=262144, Backint exited with exit code 1 instead of 0. console output: No additional Information was received SQLSTATE: HY000]

219989 35b55 03/27 18:25:10 41123561 ::restoreFile() - READ ERROR encountered by the Data Mover.
219989 35b55 03/27 18:25:10 41123561 ::restoreFile() -#ERROR /usr/sap/H4P/SYS/global/hdb/backint/DB_H4P/41088714_COMPLETE_DATA_BACKUP_databackup_2_1
219989 35b55 03/27 18:25:10 41123561 ::performRestore() - Failed to Restore the file=/usr/sap/H4P/SYS/global/hdb/backint/DB_H4P/41088714_COMPLETE_DATA_BACKUP_databackup_2_1 retCode=2 errno=0
219989 35b55 03/27 18:25:10 41123561 ::performRestore() -#ERROR /usr/sap/H4P/SYS/global/hdb/backint/DB_H4P/41088714_COMPLETE_DATA_BACKUP_databac
HyperScale X performance when it comes time to aux copy to tape?
Hi guys,

I have a customer who is currently using standard MediaAgents with Commvault: copy 1 to disk (PureFlash with NVMe drives), copy 2 to another PureFlash, and copy 3 to tape. The weekly fulls send around 350 TB per week to tape (LTO-7, 4 tape drives) at a sustained throughput of 700 GB/hour per drive, with the 4 drives running in parallel. We are looking to replace both MediaAgents with two HyperScale X clusters. The questions are: how do we need to configure the HyperScale X (reference architecture) to sustain the weekly tape creation of 350 TB per week at the same throughput, knowing that we are going to use nearline SAS drives in the HyperScale X cluster? Or can we use SSDs for the storage pool drives in a HyperScale X?
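As a quick sanity check on the required window, the figures in the post (350 TB/week, 4 LTO-7 drives at 700 GB/hour each) can be run through some back-of-the-envelope arithmetic; this says nothing about HyperScale X itself, only what aggregate read throughput the storage pool must sustain toward the tape drives:

```python
# Figures taken from the post; decimal units (1 TB = 1000 GB) assumed.
weekly_gb = 350 * 1000            # 350 TB of weekly fulls to tape
drives = 4                        # LTO-7 drives running in parallel
gb_per_hour_per_drive = 700       # sustained throughput per drive

aggregate_gb_per_hour = drives * gb_per_hour_per_drive  # 2800 GB/hour
hours_needed = weekly_gb / aggregate_gb_per_hour
days_needed = hours_needed / 24

print(hours_needed, round(days_needed, 1))  # 125.0 hours, about 5.2 days
```

So the disk tier (nearline SAS or SSD) has to feed roughly 2.8 TB/hour of aux-copy reads for about 5 days out of every 7, on top of the regular backup ingest, which is the number to size the reference architecture against.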
Index cache free space issue
During the last full backup I got the following message:

"Index cache [D:\IndexCache] free space on MediaAgent [host name] is  MB. It is considered critically low. Backups will start failing if the cache runs out of space. Take one of the following actions: 1. Increase the index cache total disk space. 2. Reduce the retention setting on the index cache."

Question: how can I reduce the index cache size?
After data clean-up and aging, the Disk Library shows the same Free Space?
Hello all,

I ran into a bit of an issue. Yesterday one of the disk libraries filled up and the backups went into a waiting status. After having a look at the utilization, it indeed turned out to be 99.7% full. The main culprit was SQL Server backups: there were some backup jobs with an extended retention, so I deleted those, plus some more of the old backup jobs, to make space. I also ran Data Aging and could clearly see data chunks being deleted in SIDBPhysicalDeletes.log, so I assume quite a bit of data was deleted. The Primary copy (blue) went from 52.95 TB down to 19.81 TB. However, when I check the Free Space on the library, I see very little gained, so I checked the Mount Paths Space Usage for that disk library: Data Written corresponds to the amount of space used by the Primary copy (19.8 TB). However, Size on Disk, which takes into account Data Written plus aged jobs that are still referenced by valid jobs, is still very high, almost unchanged. I am quite confused by this.
Block Level Backup option missing from MS cluster client
Hello,

I have a situation where the option to enable block-level backup on a subclient is missing. It's not grayed out; it simply isn't available. The missing option is on an MS cluster virtual client (pseudo-client). The config meets the requirements: the nodes are on Windows Server 2012 R2 and the MediaAgent is installed on both nodes. Here is the cluster subclient, and here is a random subclient without the MediaAgent package installed, for comparison. According to the documentation, block-level backup is supported for MS clusters:
https://documentation.commvault.com/commvault/v11/article?p=18527.htm
https://documentation.commvault.com/commvault/v11/article?p=3505.htm
Backing up the OS for my CommServe server? Grayed out
Commvault version 11.22.13.

We are still working on the evaluation license and hopefully will get the active license today; I'm not sure if that has anything to do with it. But I'm wondering why my CommServe server, which also appears under Clients, has "File System" grayed out. I think we want to get a backup of the C: drive and system state for our Commvault CommServe server, right? I looked under the properties for the client server and see that under Activity Control, Enable Backup is checked. I'm still figuring this product out, but getting better. I'm adding a screenshot.

Thanks,
BC
How do I view Failed Folders in the Admin console?
Running CommServe 11.22.13.

I just recently opened the floodgates on our new Commvault environment and ran a bunch of jobs last night. All the jobs appear successful, but when I looked at the Admin Console Backup Job Summary, under the summary section I see "Failed Folders" was 1,211. That seems like a lot with only 15 or so small clients (Windows, Linux, SQL Server). How do I determine which folders are failing, and from which server/client or job? I'm not having much luck figuring this out myself. I understand these folders could be folders that are in use and cannot be backed up, but I would like to know what they are.

Thanks in advance,
BC
Pre/Post Command options for backup jobs
I have a requirement to stop a specific service before a backup runs, so I created a simple bat file with "net stop servicename" and configured the bat file as the pre-backup command for the subclient. However, the backup job goes into a pending state and I get an error along the lines of "User selection still set to none." Is there more to configuring the bat file than just stopping the required service?
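For reference, the bat-file logic can also be sketched as a small Python pre-backup script. This is a hypothetical sketch, not a statement about Commvault's pre/post command requirements: the service name is made up, and you should verify against the documentation how your Commvault version interprets the command's exit code (non-zero is commonly treated as a failure of the pre-process).

```python
import subprocess

def stop_service(name: str, runner=subprocess.run) -> int:
    """Run `net stop <name>` and return its exit code.

    `runner` defaults to subprocess.run but is injectable, so the
    logic can be exercised without touching a real Windows service.
    """
    result = runner(["net", "stop", name], capture_output=True, text=True)
    return result.returncode

# Used as a pre-backup script, you would propagate the exit code, e.g.:
#   import sys
#   sys.exit(stop_service("MyHypotheticalService"))
# so the backup software can see whether the stop actually succeeded.
```

Whatever form the script takes, returning a meaningful exit code (rather than swallowing errors) is what lets the job react when the service fails to stop.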
Maximum size of a mount path on a disk library
Hello,

We are in the process of migrating to a new disk library. This disk library is a pair of NAS devices with 300 TB on each NAS. We can carve each NAS into multiple volumes with a maximum size of 150 TB per volume. When we first set up our disk library about 10 years ago, the maximum recommended size of a mount path was 4 TB. I know that is old guidance and I am sure it has increased over the years. We tried to find something in the documentation, and the closest we found was a reference to the maximum mount path being 25 TB, though it appears that limitation can be overridden with a registry setting. So, a few questions: Is there a maximum mount path size in a disk library? If there is, what is it? If there is, what happens if you hit the limit without adjusting the registry, and can it be overridden with a registry setting? Regardless of a maximum mount path size, from a performance and management perspective, is there a best practice for sizing the mount paths? We have thre
Exchange Online: access node member of domain (?)
Hi,

I want to back up Exchange Online using an on-premises Active Directory environment. Does anyone know why the access node needs to be a "member of the domain", given that the Exchange Online service account "must be created in Microsoft Azure AD only" (https://documentation.commvault.com/11.23/expert/28853_assigning_full_access_to_service_accounts_for_exchange_online_through_on_premises_active_directory.html)? Why can it not be any domain? Why can I not access the Exchange Online data with just the service account? We have a setup that makes an access node that is a "member of the domain" a bit difficult, as our CommCell serves more than one O365 environment.

Thanks!
Virtualize Me -Vmware question
Hi all,

I have a question about Virtualize Me for VMware. I have a physical server that I want to clone to our VMware environment. I do not want the cloned server to be connected to the network in any way, as it's a DC server. So my question is: while the Virtualize Me process is running, will the server at any point be connected to the network for the restore process to work, or is it all done "offline"?

Thanks,
Anders
Azure Immutable Blob Library - WORM applied without warning through workflow
Hi,

I have another problem with the Azure immutable blob library. I started the recommended workflow "Enable WORM on Cloud Storage". Because we do not have the "IAM VM Role Assignment" authentication method (we use an access key), the workflow did not complete, but it still set the "WORM" option on the cloud copy without a warning. Now there are some "Index Server backups" which will not expire; they cannot be deleted (WORM copy) and they block a whole baseline in the cloud. Maybe I have to open a support ticket for this? Because I have already had similar problems with these Index Server backups, maybe it would be better to keep these backups in another, non-WORM storage policy? (I guess restoring the data would be possible without these index backups too, with some performance penalty for rebuilding the indexes?) This is only a test storage policy with one client VM, but the customer plans to place about 150 TB in the cloud, so "locked" data in the cloud could be costly. FYI, see the content of all relevant SP c