Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 620 Topics
- 3,252 Replies
Hi, I’m working on an issue with DDB verification on a HyperScale X environment. I spoke to Commvault Support, who explained that scheduled DDB verifications were removed from the best practices for HyperScale X because they can impact space reclamation jobs as well as overall performance. However, our customer needs to produce reports to document and prove compliance for their ISO certifications. They were able to import a report on their Private Metrics Reporting server so they can generate reports based on admin jobs. They are noticing, though, that DDB verifications take a very long time and never actually complete, so all subsequent DDB verifications just queue up. The DDBs in this environment are very large. Does anybody have a suggestion for how we could solve this and get these DDB verification jobs to finish in a timely manner? Would scheduling them more often solve the issue? Looking forward to your suggestions. Jeremy
Hello all, I would like to use Amazon KMS for encryption. How do I achieve this? Do I need to register the Amazon KMS in our CommCell and use it in our policies? According to the documentation below, we are asked to add additional keys to enable encryption. How does that work, can anyone explain? https://documentation.commvault.com/11.24/expert/9263_enabling_server_side_encryption_with_amazon_s3_managed_keys_sse_s3.html What is the difference between the approach in that documentation and registering the Amazon KMS in the CommCell?
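At the S3 API level, the difference between the two options shows up in which server-side encryption mode the objects are written with. The minimal boto3 sketch below only illustrates that distinction (it is not the Commvault code path); the bucket name and KMS key alias are placeholders.

```python
# Minimal illustration (placeholder bucket/key names) of the two S3
# server-side encryption modes. Not Commvault internals, just what each
# option asks S3 to do with the object data.
import boto3

s3 = boto3.client("s3")

# SSE-S3 (what the linked documentation enables): S3 manages the keys itself.
s3.put_object(
    Bucket="my-backup-bucket",          # placeholder
    Key="chunk-0001",
    Body=b"example data",
    ServerSideEncryption="AES256",
)

# SSE-KMS: encryption uses a key you control in AWS KMS, so access to the
# objects can also be governed and audited through that KMS key's policy.
s3.put_object(
    Bucket="my-backup-bucket",          # placeholder
    Key="chunk-0002",
    Body=b"example data",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/my-backup-key",  # placeholder KMS key alias
)
```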
Hello all, I have an auxiliary copy with deduplication running to a Glacier library and I would like to run a DDB verification on it, just to get an estimated cost for future verifications. What I would like to know is how to use the Cloud Storage Archive Recall workflow for every job ID that is referenced in that DDB, since the workflow window only lets me choose a single backup job ID. Kind regards, Jmiamaral
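One possible approach, rather than clicking through the workflow window per job, is to drive the workflow from a script. This is a rough sketch only: it assumes the Commvault REST API's workflow execution endpoint (/wapi/{workflowName}) is available on your service pack, and the input parameter name "jobId", the URL, and the token are placeholders — check the workflow's actual inputs and the API documentation for your CommCell before relying on it.

```python
# Rough sketch: run the Cloud Storage Archive Recall workflow once per job ID
# via the Commvault REST API. URL, token and the "jobId" input name are
# placeholders/assumptions -- verify against your CommCell's API docs.
import requests

WEBCONSOLE = "http://webconsole.example.com/webconsole/api"  # placeholder URL
AUTH_TOKEN = "<QSDK token obtained from the /Login API>"     # placeholder token
JOB_IDS = [12345, 12346, 12347]                              # jobs referenced by the DDB

headers = {"Authtoken": AUTH_TOKEN, "Accept": "application/json"}

for job_id in JOB_IDS:
    resp = requests.post(
        f"{WEBCONSOLE}/wapi/Cloud Storage Archive Recall",
        headers=headers,
        params={"jobId": job_id},   # hypothetical input name
    )
    print(job_id, resp.status_code)
```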
Hi all, we had an internal discussion about which library setup is best for new customers. We often run Windows clusters with CSV volumes, Windows file clusters, or single servers with SAN-attached storage. In the past there have been a lot of problems with the ransomware protection on CSVs and file clusters. Do you have any more information on which approach is better to prevent redirected I/O in a cluster and to avoid errors during maintenance? Also, is there any way to check whether the ransomware protection is actually working on a CSV / Windows file cluster? The option is set, but is there a way to test that it is working?
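One informal way to sanity-check the protection (this is a hedged sketch, not an official Commvault test) is to try writing into a protected mount path from a process that is not a Media Agent service and confirm the attempt is denied. The mount path below is a placeholder.

```python
# Hedged sanity check: attempt to create and append to a file inside a
# protected mount path from an ordinary process. If the write protection is
# active, both attempts should fail with an access-denied style OSError.
from pathlib import Path

MOUNT_PATH = Path(r"C:\ClusterStorage\Volume1\CVLibrary\Folder_01")  # placeholder
probe = MOUNT_PATH / "write_protection_probe.txt"

def create_probe():
    probe.write_text("probe")

def append_probe():
    with probe.open("a") as fh:
        fh.write("more")

for action, func in [("create file", create_probe), ("append to file", append_probe)]:
    try:
        func()
        print(f"{action}: SUCCEEDED -> protection may NOT be active")
    except OSError as exc:
        print(f"{action}: blocked as expected ({exc})")
```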
Hi, one of our Media Agents is down. It runs a Windows Server OS and we are unable to bring the server back up, so the MA is currently offline. The server also holds over 10 TB of critical backed-up data. Our OS team has been unable to recover the server. Please suggest how we can recover from this situation.
CS version FR 24. I have an on-prem S3 solution which I have presented to Commvault as a library, with multiple buckets as mount paths. Within Commvault I have limited the size of these buckets to 100 TB (performance tuning for the library; we decided not to limit the buckets on the storage side). I can see the size on disk of the data in each bucket, so Commvault has enough information to calculate Capacity, Free Space, and Usable Free Space, but when I view my library and mount path stats I see nothing. Any suggestions?
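While chasing this, it can help to cross-check the storage-side usage against what Commvault reports. The sketch below simply totals object sizes per bucket with boto3; the endpoint, credentials and bucket names are placeholders for the on-prem S3.

```python
# Hedged helper: total object sizes per bucket so the storage-side usage can
# be compared with what Commvault shows for the mount paths. All connection
# details and bucket names are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://onprem-s3.example.com",  # placeholder
    aws_access_key_id="ACCESS_KEY",                # placeholder
    aws_secret_access_key="SECRET_KEY",            # placeholder
)

for bucket in ["cv-mountpath-01", "cv-mountpath-02"]:  # placeholder bucket names
    total = 0
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        total += sum(obj["Size"] for obj in page.get("Contents", []))
    print(f"{bucket}: {total / 1024**4:.2f} TiB used")
```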
Hello all, we are using Azure cool storage for our off-site copies and have been for the last several years. Lately we decided to use Azure combined storage and planned to move/copy data from Azure cool to archive storage. After a discussion with Commvault we implemented what was suggested, but the process is really slow and the case has now been escalated to Dev. To be honest, we are seeing a terrible delay from their side too. My question now is: instead of using an aux copy to copy the jobs from the cool blob library to the combined-tier library, what if we changed the tier of the cool blob storage itself from cool to the combined tier? If we did that, would the existing data convert to archive, or would that only affect new data written to that storage?
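For what it's worth, at the Azure level changing a storage account's default access tier does not re-tier blobs that already carry an explicit tier; existing blobs would have to be moved to Archive individually, for example with the Azure SDK as sketched below. Whether Commvault's combined Cool/Archive library would then track those re-tiered blobs correctly is a separate question for support; this snippet only illustrates the Azure side, and the connection string and container name are placeholders.

```python
# Illustration only (not a recommendation over the aux copy approach):
# re-tier existing Cool blobs to Archive one by one. Connection string and
# container name are placeholders.
from azure.storage.blob import ContainerClient, StandardBlobTier

container = ContainerClient.from_connection_string(
    conn_str="<storage account connection string>",  # placeholder
    container_name="cv-cold-copy",                    # placeholder
)

for blob in container.list_blobs():
    if blob.blob_tier == "Cool":  # only touch blobs still in the Cool tier
        container.get_blob_client(blob.name).set_standard_blob_tier(
            StandardBlobTier.Archive
        )
```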
Hi all, I have a question about the restore process, specifically VM guest file system restores when the backups sit on a combined Cool/Archive storage tier used as a long-term library. As I understand it, the index and metadata in this case are kept in cool storage, so we can browse the data without having to run the recall workflow. Having selected the folders to restore, will the complete VM/disk data be rehydrated from the archive tier, or just the selected data?
Hello all, is there a way to move saved data from one library to another? We have two full backups for which we have set retention until next year. The rest of the backups continue with the normal retention of 30 days and 4 cycles. However, I would like to move the backups that need to be retained until next year to our object storage. I have already prepared the connection to the object storage; I just need to know how to move the data. Kind regards, Thomas
Hi all, the Admin Console dashboard shows the recovery point of a client, and as I understand it the recovery point refers to data available from the primary copy. In the same way, how can I get the recovery point from the secondary copy? Is there any report or other way to get this information? Thanks in advance, Mani
Afternoon folks, I have an auxiliary copy that backs up to tape. I have deleted all of the existing jobs on the tape media in the hope of starting the backup chain from scratch. However, since deleting the backup jobs, if I go back to the storage policy, right-click the tape auxiliary copy and view “media not copied”, the list is blank. I was expecting to see all backup jobs for the backup period I selected. Is it possible to “restart the schedule”, so to speak, without deleting the auxiliary job? TIA
Hello, the customer bought new MS SQL servers and is migrating to the two new servers. Current scenario: 2 MS SQL servers (one production, one copy). New scenario: 2 MS SQL servers (new OS and database versions; one production, one copy). I need to carry over the backup job configuration with the same settings as the current backups: retention, backup jobs, DDB, and schedules. Does anyone have a procedure or best practices for this?
Hello everyone, all my backups are set to replicate from my primary site to my DR site and all copies have the same retention. Oddly, the Disk Library Growth report shows the media agent at my primary site holding 178 TB of data while my DR site only holds 132 TB, so I’m wondering where the 46 TB difference comes from. Question: is there a way to compare the contents of two media agents to see where the discrepancies are coming from? Thanks, Ken
If we have an existing DDB on a drive on a media agent and that drive gets encrypted with BitLocker, does that cause a problem? My thought is that it shouldn’t, since all reads and writes happen inside the server, although there might be a performance penalty. Or am I totally wrong? //Henke
Hello community! I am trying to find a way to send only weekly and monthly backups to secondary storage (Azure Blob Storage) from a primary storage policy that has only daily backups (7 days retention). From the aux copy wizard, the only option I can see is to create a selective copy with only full backup jobs, which in my case basically means copying the synthetic fulls once a week. But I can’t figure out how to also copy the monthly jobs! I’d appreciate your feedback. Best regards, Nikos
I have a production CommCell where all mount paths support drilling holes (sparse files). When I open a mount path’s properties in Windows, I can see that “Size on disk” is much smaller than the “Size” of the folder. The whole partition is smaller than the “Size” of the folder, but of course larger than “Size on disk”. I installed a test environment where all mount paths also support drilling holes (sparse). Scheduled and manual backups succeed and are stored on the mount paths, but when I open a mount path’s properties in Windows on the test MA, the “Size” and “Size on disk” are the same, or “Size on disk” is even a bit larger. If I check a file with “fsutil sparse queryflag”, I get the response “This file is NOT set as sparse”. My question is: when do the backend file sizes start to decrease? When does the sparse flag get set on backup files stored on a mount path that supports hole drilling?
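As far as I understand, the sparse flag and the drilled holes only appear after pruning/space reclamation has actually freed blocks inside a chunk, not at backup time, so a fresh test environment with nothing aged yet would look exactly as described. A quick way to check an individual chunk file from the MA is to compare its logical size with its allocated size; the sketch below is Windows-only and the chunk path is a placeholder.

```python
# Hedged check (Windows, run on the Media Agent): compare a chunk file's
# logical size with its allocated size on disk. The two only diverge once
# sparse holes have actually been drilled into the file.
import ctypes
import os

kernel32 = ctypes.windll.kernel32
kernel32.GetCompressedFileSizeW.restype = ctypes.c_ulong

def allocated_bytes(path: str) -> int:
    """Size actually allocated on disk (accounts for sparse/compressed regions)."""
    high = ctypes.c_ulong(0)
    low = kernel32.GetCompressedFileSizeW(path, ctypes.byref(high))
    return (high.value << 32) + low

# Placeholder path -- point this at a chunk file inside the mount path.
path = r"E:\MountPath01\CV_MAGNETIC\V_123\CHUNK_456\SFILE_CONTAINER_001"
logical = os.path.getsize(path)
on_disk = allocated_bytes(path)
print(f"logical: {logical:,} bytes, allocated on disk: {on_disk:,} bytes")
print("holes drilled" if on_disk < logical else "no holes drilled yet")
```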
Hello experts, I am working on a proposal for a large Korean manufacturing company that is migrating its IT infrastructure to AWS.
1. Backup environment
- Backup source: AWS EC2 VMs, Veritas Cluster File System, file data
- Backup storage: AWS S3 object storage
2. Unusual characteristics of previous backup tests
- Tested backup solutions: Veritas NetBackup, DellEMC NetWorker, Veeam
- When backing up this environment (AWS EC2 VMs and Veritas Cluster File System) to S3 storage, AWS EBS storage was used as the cache (staging?) area.
- In particular, the Veeam solution used more EBS storage as a cache (staging?) area than Veritas NetBackup and DellEMC NetWorker.
3. What needs to be confirmed
1) When performing a backup of the above environment with Commvault, is AWS EBS storage used as the cache (staging?) area? The conclusion the customer reached after discussion with AWS and Veritas is that all backup solutions will use AWS EBS storage as the cache (staging?) area.
2) If AWS EBS st…
Hi, I have a question regarding the implementation of a cloud library with Scality Ring. We can create two types of mount path: S3 Compatible Storage or Scality Ring. Which one is required? (I already have some cloud libraries created with the S3 Compatible Storage type instead of the Scality Ring type.) Is there a difference between them? Kind regards, Christophe
We were testing a small (200 MB) backup and restore to tape and back to disk. The backup takes under 8 minutes, but the restore takes 3 hours. It appears that after making contact with the index server, the restore waits almost three hours before mounting the first tape; the actual transfer from tape to disk takes only a few minutes. Any idea why it can take so long to mount the first tape?
Hi guys, what is better for streamed, agent-based file system backups: a regular full backup or a synthetic full? I get the point that a synthetic full is better in that it doesn’t use the client machine’s resources, but are there any disadvantages to it? Would a normal full be safer? If so, why?