Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
MA GRID setup | potential
Hi, do you have a best practice for utilizing media agents in a GRID configuration? For example, we have 4 MAs and, say, 3 subclients. One subclient always uses one MA (with more VSA proxies within the job for VMs) for VMware backups. So there is some sorting mechanism for the other two subclients to either pick up idle MAs to start their backups, or use the one already in use by subclient 1. But is there an option to utilize the full potential of a GRID solution, where more than one MA can be used for the backup of a single subclient? I hope it's clear what I am trying to achieve.
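For what it's worth, the "sorting mechanism" being asked about can be modelled as least-loaded selection. A minimal sketch (illustrative only, not Commvault's actual GridStor logic; all names are made up):

```python
# Sketch: least-loaded media agent selection for subclient jobs.
# Models the idea only; Commvault's real round-robin/GridStor logic differs.

def pick_media_agent(ma_load: dict) -> str:
    """Return the MA currently running the fewest jobs."""
    return min(ma_load, key=ma_load.get)

def schedule(subclients, mas):
    """Assign each subclient to the currently least-loaded MA."""
    load = {ma: 0 for ma in mas}
    assignment = {}
    for sc in subclients:
        ma = pick_media_agent(load)
        assignment[sc] = ma
        load[ma] += 1
    return assignment

print(schedule(["sub1", "sub2", "sub3"], ["MA1", "MA2", "MA3", "MA4"]))
```

With 4 MAs and 3 subclients, each job lands on its own MA; the real question in the post (several MAs serving one subclient's job) is a matter of stream distribution rather than MA selection.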
Tape library - 3 week sets
Afternoon all. I have a few questions, hopefully nothing too complicated!

I have recently configured a tape library in our Commvault environment, which appears to be working OK. I just had a few questions around configuring the tape library to suit our needs. The plan is to back up to tapes so they can be taken offsite daily and used in a DR scenario. We would like to keep 3 weeks' worth of data on the tapes and would like to have a tape for each day.

I've configured the auxiliary copy job and the related schedules to run at a suitable time, which I think so far is fine. Where I appear to be struggling is with configuring Commvault to back up to a new tape each day. We will have 21 tapes; as an example, tape 001 will be week 1 Monday, tape 002 will be week 1 Tuesday, etc., and tapes 007, 008 and 009 will be Friday, Saturday and Sunday respectively.

There will be two SPs backing up to tape. My question is: is it possible to configure CV so that once the backup job from SP2 is c…
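The 21-tape rotation described above is simple day-offset arithmetic; a small sketch (assuming the cycle starts on a week-1 Monday; the labels are illustrative):

```python
from datetime import date

def tape_number(day: date, cycle_start: date) -> str:
    """Map a calendar day to a tape label in a 21-tape, 3-week rotation.
    Tape 001 = week-1 Monday, ..., tape 021 = week-3 Sunday."""
    offset = (day - cycle_start).days % 21  # position within the 3-week cycle
    return f"{offset + 1:03d}"

start = date(2024, 1, 1)  # a Monday, start of week 1
print(tape_number(date(2024, 1, 1), start))   # -> 001 (week 1 Monday)
print(tape_number(date(2024, 1, 12), start))  # -> 012 (week 2 Friday)
print(tape_number(date(2024, 1, 22), start))  # -> 001 (cycle repeats)
```

In Commvault terms this kind of rotation is usually achieved with media-rotation/export schedules rather than hand-picked tapes, but the arithmetic above is the target behaviour.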
CV_MAGNETIC\V_1508600] [The parameter used for the current operation is not supported by the Operating System, OS Drivers or the underlying Hardware.]. For more help, please call your vendor's support hotline.
Hello Community, thanks for all the answers. I am having the same issue with CIFS shares using Dell vSAN storage. How do I increase the SMB credits to 256? Is it a Windows registry key on the media agent? Thanks.

Error occurred in Disk Media, Path CV_MAGNETIC\V_1508600] [The parameter used for the current operation is not supported by the Operating System, OS Drivers or the underlying Hardware.]. For more help, please call your vendor's support hotline.
Azure Cold and Archive
Hello all, we are using Azure cold storage for our offsite copies and have been for the last several years. Lately we decided to use Azure Combined storage and planned to move/copy data from Azure cold to Archive storage. After a discussion with Commvault, we implemented what was suggested, but the process is really slow and the case has now been escalated to Dev. Being honest, we are seeing a terrible delay from their side too.

My question now is: instead of using an aux copy to copy the jobs from the cold blob to the combined-tier library, what if we changed the tier of the cold blob storage itself from cool to the combined tier? If we did that, would the existing data convert to Archive, or would that only affect new data written to that storage?
Secondary Copy - Migration from Tapes to Azure Storage
Hi guys, our on-premises Commvault infrastructure has 2x Media Agents, 1x Tape Library, and a few proxies. The backups: primary on disk and secondary on tape. We are planning to migrate the secondary copies over to Azure Storage, and here's our plan:

1. Place a Media Agent in Azure
2. Place the DDB in Azure (on the Media Agent)
3. Switch the Auxiliary Copy job
4. Migrate old data from tapes to Azure

Please provide some guidance and best practice on this. Thanks in advance.
Cloud Library - Capacity Metrics not showing
CS version FR24. I have an on-prem S3 solution, which I have presented to Commvault as a library with multiple buckets as mount paths. Within Commvault I have limited the size of these buckets to 100 TB (performance tuning for the library; we decided not to limit the buckets on the storage side). Commvault can see the size on disk of the data in each bucket, so it has enough information to calculate Capacity, Free Space, and Usable Free Space, but when I view my library and mount path stats I see nothing. Any suggestions?
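The arithmetic those metrics would need is straightforward, which is what makes the blank stats surprising. A sketch of the expected derivation (the field names and the reserve-space term are assumptions for illustration, not the product's actual formula):

```python
def mount_path_stats(limit_tb: float, used_tb: float, reserve_tb: float = 0.0):
    """Derive capacity metrics from a configured bucket size limit and
    the observed size on disk. reserve_tb stands in for any reserved space."""
    capacity = limit_tb                      # the configured 100 TB cap
    free = capacity - used_tb                # cap minus size on disk
    usable_free = max(free - reserve_tb, 0.0)
    return {"capacity_tb": capacity, "free_tb": free, "usable_free_tb": usable_free}

print(mount_path_stats(limit_tb=100, used_tb=37.5, reserve_tb=2))
```

If the console shows nothing at all, the missing piece is more likely the free-space reporting from the S3 endpoint itself than this arithmetic.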
DELETE HPE CATALYST
Hi, I need your help. In the initial implementation, the customer added a machine running Debian 11.00 as a MediaAgent, but it is not supported, as it only allows configuring HPE Catalyst storage and not the libraries. The client therefore uninstalled this Debian MediaAgent, and now I see that I cannot remove the HPE Catalyst storage. For the moment I have added a MediaAgent with Ubuntu, and it manages to see both the disk storage and the libraries. The error is about WORM media data.
Size on disk value related to cloud libraries is incorrect
While trying to figure out how to gather BET for charging purposes, I noticed that the size on disk displayed in both Command Center and the CommCell Console for cloud libraries is incorrect. I have opened a ticket for it, referring in particular to S3 buckets, but I was wondering if other customers see the same, and whether it also occurs on libraries using Microsoft Azure Storage or other types/vendors. Please comment in case you identify the same. I noticed it while running FR26 and FR28 (2022e).
Setting up a Proxy Server to Access the Cloud Storage Library
Hi, we are configuring a Cloud Library as the export destination for Disaster Recovery (DR) backups, so whenever we take a DR backup, the metadata is exported to our Cloud Library. However, the CommServe has no direct access to the Cloud Library; it must connect to the cloud storage through a proxy server, as explained here:
https://documentation.commvault.com/v11/expert/9171_setting_up_proxy_server_to_access_cloud_storage_library.html

I am wondering which port we should use in step 8, because using a random port number doesn't work. Do you have any idea? Best regards
Pruning on isolated MA
Hi Commvaulters, I had a question regarding data aging/pruning on isolated MAs. We have isolated MAs (air-gapped: they are powered on only during aux copies, then, when the last aux copy finishes, the MAs are shut down), and the data is deduplicated. All jobs concerning these MAs are scheduled to align with the air-gap window (Aux Copy, DDB Backup, Data Aging, etc.).

My concern is that the air-gap window may not be sufficient to process the data pruning on the storage, since the MAs are shut down immediately after the aux copies. Can the air-gap window be an issue for the data aging/pruning process? If someone could give us some guidance on this, that would be great. Regards.
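One way to reason about the concern: if each air-gap window leaves less slack after the aux copies than the pruning work generated per day, a backlog accumulates. A toy model (the numbers and the hours-based accounting are assumptions for illustration, not measured Commvault behaviour):

```python
def pruning_backlog(days, daily_prunable_hours, window_slack_hours):
    """Model deferred pruning on an air-gapped MA: each day adds pruning
    work; only the slack left in the air-gap window can clear it."""
    backlog = 0.0
    for _ in range(days):
        backlog += daily_prunable_hours                    # new work queued
        backlog = max(backlog - window_slack_hours, 0.0)   # what the window clears
    return backlog

# 3 h of pruning/day but only 2 h of slack: backlog grows 1 h per day.
print(pruning_backlog(days=7, daily_prunable_hours=3, window_slack_hours=2))  # -> 7.0
```

The practical takeaway from the model: the window is only a problem if it is persistently smaller than the pruning workload; an occasional shortfall just carries over to the next power-on.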
What is the impact on the DDB after the WORM option is enabled at the Primary Copy level of storage policies?
Hi, for data security I was asked to research the WORM option at the Primary Copy level of our storage policies. A bit of background on our environment: we have short retention set on our Primary Copy, 35 days and 1 cycle. My understanding is that this WORM option in storage policies works within the Commvault software: no admin (or anyone else) can delete backup jobs after it is enabled, and we have to wait for jobs to age out and then be pruned by Commvault automatically. Then I was advised that if I enable WORM, the DDB for that storage policy will be sealed, and a new DDB will be created automatically and rebaselined. So I have some questions:

1. Is DDB sealing an automatic process? I enabled the WORM option more than a month ago on a storage policy for testing, but I cannot see a sealed DDB under 'Deduplication engines'.
2. Our disk libraries are quite big (from 300 TB to 800 TB). If we need to rebaseline every time, will that take a long time and impact performance? With only 35…
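On the retention point: with WORM enabled, the earliest deletion date is driven entirely by retention expiry. A trivial sketch of that date arithmetic (this deliberately ignores the "1 cycle" condition, which also has to be satisfied before a job can age):

```python
from datetime import date, timedelta

def earliest_prune_date(backup_day: date, retention_days: int = 35) -> date:
    """Earliest day a WORM-locked job can age out: no manual deletion is
    possible, so only retention expiry (here, the 35-day setting) applies."""
    return backup_day + timedelta(days=retention_days)

print(earliest_prune_date(date(2024, 3, 1)))  # -> 2024-04-05
```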
Is it possible to change an Azure Cloud Archive library into an Azure Cloud Combined Storage (Archive/Cool) library?
Hello Commvault Community! I have a question on behalf of one of our clients. We created a Cloud Library (Azure Archive) and copied around 40 TB of data into the cloud. It took us over 2 months to transfer this amount of data, and when it completed we realized that there is a problem with the Cloud Recall workflow.

When we try to "Browse and Restore" from the Azure Archive copy precedence, it tries to reach an index in this archive cloud: it runs an "Index Restore" job and cannot find the index data because it is archive storage, so it runs the Archive Recall workflow to recall the index data in order to list the backup data. This workflow fails after a few seconds, and we see an error in the Browse and Restore window: "The index cannot be accessed. Try again later. If the issue persists, contact support."

We decided that restoring an index from archive cloud isn't a good idea, because even if it worked it would take too much time (a few hours just to list backup content (index res…
Large S3 Bucket Backup
Hi Community, can we take a backup of an S3 bucket which is 80 TB in size using Commvault? Consider a 10-15% daily change rate. How does Commvault back up S3? Is it a streaming backup, reading objects one by one, which I expect would be very slow, or is some sort of IntelliSnap capability available for S3 backups? Regards, Mohit
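Back-of-the-envelope sizing for the question above (the throughput figure is a pure assumption; substitute your own measured rate):

```python
def s3_backup_estimate(full_tb, daily_change_pct, throughput_tb_per_hour):
    """Rough time estimates for an object-storage backup: one full pass,
    then daily incrementals covering only the changed objects."""
    full_hours = full_tb / throughput_tb_per_hour
    incr_tb = full_tb * daily_change_pct / 100
    incr_hours = incr_tb / throughput_tb_per_hour
    return full_hours, incr_tb, incr_hours

# 80 TB bucket, 12.5% daily change, assumed 2 TB/h aggregate throughput:
full_h, incr_tb, incr_h = s3_backup_estimate(80, 12.5, 2)
print(full_h, incr_tb, incr_h)  # -> 40.0 h full, 10.0 TB/day incremental, 5.0 h
```

Even a modest daily change rate on an 80 TB bucket means roughly 10 TB of incremental data per day, so stream count and aggregate throughput matter far more than the one-off full.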
Best practices to ensure maximum streams for Aux to MCSS
Hello Community, we are running a pilot, pushing around 50 TB of data over to MCSS. We have noticed that when the jobs are progressing, there are multiple streams of different sizes at the beginning of the job, and as the job progresses the number of streams goes down. Is there a way to ensure that the data being pushed to MCSS is distributed among multiple streams equally, so that we get consistent performance out of the job until the end?
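The behaviour described (streams draining at different times) is what happens when work is unevenly divided across streams. A greedy-balancing sketch of the desired behaviour (illustrative only, not how Commvault actually allocates streams):

```python
def balance_streams(chunk_sizes, n_streams):
    """Greedy balancing: assign each chunk to the currently lightest stream,
    so all streams carry a similar total and finish at about the same time.
    Returns the per-stream totals."""
    totals = [0] * n_streams
    for size in sorted(chunk_sizes, reverse=True):  # biggest chunks first
        i = totals.index(min(totals))               # lightest stream so far
        totals[i] += size
    return totals

print(balance_streams([9, 7, 6, 5, 4, 3, 2], 3))
```

With balanced totals like these, no stream finishes long before the others, which is the "consistent performance until the end" the post is asking for.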
Error Code: [19:861] Could not connect to the DeDuplication Database process
Hi, we've been having this error (Error Code: [19:861] Could not connect to the DeDuplication Database process for Store Id [xxx], Process: clBackupChild, JobManager or clBackup) on multiple clients recently (the agents seem random: Linux FS, Windows FS, Oracle RAC; v11.24.34). Our network team did not change any firewall rules. The only temporary workaround we have found for backups to complete is disabling client-side dedup on all clients having the error, but we would prefer not to have our MAs do the dedup. If anyone has had a similar problem, any suggestions and ideas would be appreciated! Regards
Description: Error occurred while processing chunk - error Code: [13:138]
Hi all! Could you advise me how to troubleshoot the following type of error:

Error Code: [13:138] Description: Error occurred while processing chunk [xxx] in media [xxx], at the time of error in library [disklib01] and mount path [[xxx] /srv/commvault/disklib01/xxx], for storage policy [XXX] copy [Xxx] MediaAgent [svma1]: Backup Job [xxx]. Unable to setup the copy pipeline. Please check connectivity between Source MA [svma1] and Destination MA [svma1].

At a glance, it seems that CV cannot process a chunk from the (index?)/disk library. However, the issue is connected with the storage policy copy that moves data from the disk library to the tape library (secondary copy). The main problem for us is that it is not possible to copy data to the tapes; hence, perhaps, "Unable to setup the copy pipeline". The media agent is one server/device that communicates with both the disk and tape libraries. Lastly, the files in the related directories don't seem to be corrupted. Any suggesti…
Disable deduplication on CommServe client
Hi guys, is there any way to disable dedup on one single client only? In our case, we have a storage pool with deduplication enabled; the storage pool is used to store aux copies received from our main site. Among all the aux copies sent to this storage pool is the one for the CommServe DR backup.

As far as we know, in a DR scenario we have to use the "Media Explorer" utility on the MA to retrieve the CommServe DR backups from the mount path, and according to the documentation the tool is not usable when the data is deduplicated.

From this came the need to disable dedup only for the CommServe client, or on the CommServe DR storage policy. If there is a way to disable it for the CommServe only, is that really sufficient, given that the storage pool itself uses deduplication? Or does the CommServe DR storage policy have to be assigned to a separate storage pool that does not use dedup? Any advice regarding this would be grea…
DDB V5 Conversion - how was it for you?
Hi Team, we are about to embark on the V4-to-V5 DDB conversion process, but I thought I would ask here and see how it went for those that have completed it. We have a few partitioned DDBs of a reasonable size, and I am trying to gauge how long our backup outage might be, as we have to guesstimate on behalf of our customer. I can see that the pre-upgrade report does give estimates, so I'm wondering how ball-park they might be. One more thing: is there a way of identifying which DDB version we have? Thanks
Catalog jobs from a cloud storage object
Hi guys, is there a way to catalog jobs from a bucket within a cloud storage library? The tool offers only Tape or Disk as a media type. How do we retrieve our DR backups from cloud storage, in case we lose everything, in order to perform a disaster recovery?

I found the link below, however it doesn't show how to retrieve the DR DB:
https://documentation.commvault.com/11.24/expert/43588_retrieving_disaster_recovery_dr_backups_from_cloud_storage_using_cloud_test_tool.html

I've also found a note in the documentation. Does this mean that if deduplication is enabled, there is no way to retrieve the DR DB? Thanks a lot. Best regards
Please confirm that the 11.28 version contains all of the fixes found in 11.26.
Below is the analysis from Commvault:

- StoreOnce aux copy failing due to "Unexpected copy parameters. SFileNum != || CcId != || Offset + != ."
- FR25 MR23 contains the fix for the issue you are seeing: https://documentation.commvault.com/11.25/assets/service_pack/updates/11_25_23.htm
- 3395 - AuxCopy jobs with HPE StoreOnce Catalyst libraries as source and destination may fail with error "Unexpected copy parameters. SFileNum != || CcId != || Offset + != ."
- Customer to install MR23 on the source MA.

So we decided to upgrade to 11.26, as it is the latest and has all the fixes of 11.25.32. But can we instead wait for 11.28 and upgrade to that? Will it have the same bug fixes as 11.26?
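The underlying question, whether a later feature release contains an earlier FR's maintenance-release fix, can be modelled as a version comparison. Note the built-in assumption: later FRs normally roll up earlier MR fixes, but this should always be confirmed against the release notes for the specific hotfix:

```python
def has_fix(release, fix):
    """True if `release` (FR, MR) should contain a fix first shipped in `fix`.
    Assumes a later feature release rolls up earlier maintenance-release
    fixes; verify against release notes, as this is not guaranteed per hotfix."""
    fr, mr = release
    fix_fr, fix_mr = fix
    return fr > fix_fr or (fr == fix_fr and mr >= fix_mr)

print(has_fix((26, 0), (25, 23)))   # FR26 baseline vs. fix in FR25 MR23 -> True
print(has_fix((25, 20), (25, 23)))  # FR25 MR20 predates the fix -> False
```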
Unable to change value of "sparse support" checkbox in Disk Library
Hello, we just added more storage to a media agent via a new mount path in an existing disk library. The new mount path is an SMB share from a Dell Isilon, and the admin of the Isilon confirmed they had enabled sparse support for the share, but he turned it on after I'd already added the mount path. As a user in the Commvault forums described, I should just be able to check/uncheck the "sparse support" checkbox, right? It is not working for me, though. What is the trick to changing the status of this checkbox? Are there other ways, via the Web Command Center or CLI, to do the same thing? I also don't see any means in the "properties" of this mount path to enable this.