Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 647 Topics
- 3,299 Replies
Secondary copy schedule
Hello, Can I change the Secondary Copy schedule from the web console? I haven't used the web console very much, but with deploying a new CommCell I thought I'd look into it. I do know how to handle it from the Java console. The documentation didn't give me much, as it is a mix of Java console and web console content (confusing). Best regards, Henrik
Retrieve information from CSDB - DDB Information
Similar to this article, I'd like to show simple queries to retrieve DDB information from the CSDB.

Important note: do not modify CSDB data or modules; use READ operations only. Also, to keep your activities safe, make sure every read from any table is an uncommitted read, using one of the following techniques:

use CommServ -- just for convenience
-- place the following at the top of any query
set transaction isolation level read uncommitted;
-- or place a with(nolock) hint in each table reference
select * from APP_Application with(nolock)

Most of the DDB information is stored in tables whose names start with Idx. The DDB configuration lives mainly in the following 3 tables; the first holds the DDB information, the latter two the partitions:

select * from IdxSIDBStore
select * from IdxSIDBSubStore
select * from IdxAccessPath

To combine these, including which MA is in use for each partition, something like the following:

select store.SIDBStoreName as 'DDB Name'
      ,apc.name as 'MediaAgent'
      ,ap.Path as 'Partition path'
from Id
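The query preview above is cut off after "from Id". A hedged reconstruction of what such a join might look like is sketched below; the join columns (SIDBStoreId, IdxAccessPathId, ClientId) and the APP_Client table are assumptions about the CSDB schema, not confirmed by the post, so verify them against your own CSDB before relying on the output:

```sql
-- Hypothetical sketch: DDB name, MediaAgent, and partition path per partition.
-- All join columns and the APP_Client table are assumptions; verify in your CSDB.
set transaction isolation level read uncommitted;

select store.SIDBStoreName as 'DDB Name'
      ,apc.name            as 'MediaAgent'
      ,ap.Path             as 'Partition path'
from IdxSIDBStore store with(nolock)
join IdxSIDBSubStore sub with(nolock)
  on sub.SIDBStoreId = store.SIDBStoreId
join IdxAccessPath ap with(nolock)
  on ap.IdxAccessPathId = sub.IdxAccessPathId
join APP_Client apc with(nolock)
  on apc.id = sub.ClientId;
```

As in the post's own examples, every table reference carries with(nolock) so the query never blocks or interferes with production CSDB activity.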
When does drilling of holes start?
I have a production CommCell where all mount paths support drilling of holes (sparse files). When I open a mount path's properties in Windows, I can see that "Size on disk" is much smaller than the "Size" of the folder. The whole partition is smaller than the "Size" of the folder but, of course, larger than "Size on disk". I installed a test environment where all mount paths also support drilling of holes (sparse). Scheduled and manual backups succeed and are stored on the mount paths. But when I open a mount path's properties in Windows on the test MA, the "Size" and the "Size on disk" are the same, or "Size on disk" is a bit larger. If I check a file with "fsutil sparse queryflag", I get the response: "This file is NOT set as sparse". My question is: when do backend file sizes start to decrease? When will the sparse flag be set on backup files stored on a sparse-supported mount path?
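The check being described (Windows "Size" vs "Size on disk", or fsutil sparse queryflag) can be sketched on a POSIX system too, for anyone comparing mount paths on Linux MAs. This is a generic illustration, not Commvault tooling; it assumes st_blocks is reported in 512-byte units, which is the POSIX convention:

```python
import os

def allocated_bytes(path):
    """Bytes actually allocated on disk (analogue of Windows 'Size on disk')."""
    st = os.stat(path)
    return st.st_blocks * 512  # st_blocks is in 512-byte units per POSIX

def looks_sparse(path):
    """A file is likely sparse when fewer bytes are allocated than its logical size."""
    return allocated_bytes(path) < os.stat(path).st_size

# Create a 10 MiB file with only the last byte written; the rest is a hole.
with open("hole_demo.bin", "wb") as f:
    f.seek(10 * 1024 * 1024 - 1)
    f.write(b"\0")

print(looks_sparse("hole_demo.bin"))
```

On filesystems that punch holes for unwritten ranges (ext4, XFS, NTFS with the sparse flag set), the allocated size stays far below the logical size, which is exactly the gap the poster sees on the production mount paths but not on the test MA.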
Is it possible that high Q&I times can cause the following error message: Description: Timeout while waiting for a reply from DeDuplication Database engine for Store Id [X]
Hello, I have Oracle backups scheduled through RMAN which occasionally fail with the following error code: "Error Code: [82:177] Description: Timeout while waiting for a reply from DeDuplication Database engine for Store Id [X]". Steps I have taken to troubleshoot the issue: verified the dedupe engine is active and running; created a one-way persistent tunnel from the client to the CommServe and MediaAgents, as suggested by another thread pertaining to the same issue, "ERROR CODE [82:172]: Could not connect to the DeDuplication Database process for Store Id [xx]. Source: xxxx, Process: backint_oracle | Community (commvault.com)". Our Q&I times across the board for our MediaAgents are quite high; for this particular engine the time is 6,416 microseconds (307%). I believe the DDB is running on regular disk, not SSD. Is it possible that this could be causing the above error? If not, I'm not sure what else could be the reason.
Huawei OceanStor 5500 and deduplication
When performing a backup with the NAS agent that has deduplication enabled in Commvault, is it advisable to also enable the source storage's native compression and deduplication functions at the same time, specifically on a Huawei OceanStor 5500 device? Please advise whether we can enable compression and deduplication on both the Commvault and Huawei OceanStor ends.
Backup job with no compression
Hi, we installed a new dedupe appliance as a new library. We are now configuring a new SP with no compression and no encryption (Commvault dedupe is on). We need to replicate the data using aux copy to the DR site with a plain disk library (JBOD). My question: if we back up with no compression, will the aux copy also transfer uncompressed data to the DR site? Because in that case we'll run out of capacity in the DR library. Thanks.
Error Code: [62:1419] Description: The required media is currently in a different library. Source: hq-vm-commserv, Process: MediaManager
Error Code: [62:1419] Description: The required media is currently in a different library. Source: hq-vm-commserv, Process: MediaManager. Please help, this is urgent: the tape is in the same library that the copy uses, so why is it saying it is in DR?
DDB Disk Library Migration
Our new SAN for the Commvault disk library is a Dell PowerScale H7000 (OneFS); the library in production is a NetApp 2750. We use a Windows 2019 MA and iSCSI LUNs on the NetApp. The PowerScale H7000 doesn't support iSCSI LUNs, and sealing the DDB or creating a new disk library > global DDB > new primary is not the preferred path. Is it possible to add a new mount path on the current disk library pointing at an SMB share on the Dell OneFS, and disable writes on all local iSCSI paths with the option "Prevent data block references for new backups" checked? We could do this to all local disks at once or gradually. Our retention on this SP is 90 days, so could I delete those paths after the jobs age out and continue using the path on the SMB share?
IBM Elastic Storage System as a backup target... Anyone?
Hi peeps, anyone out there with experience using an IBM ESS 5000 as a backup target for Commvault? We're about to set this up: multiple MA grids will connect to the IBM ESS using SMB on a 100Gb network. If anyone has some do's and don'ts or best practices, feel free to chip in :-) It would be very much appreciated. Thank you. Best regards, Kim
Is it possible to change Azure Cloud Archive library into Azure Cloud Combined Storage Archive/Cool Library
Hello Commvault Community! I have a question on behalf of one of our clients. We created a cloud library (Azure Archive) and copied around 40 TB of data into the cloud. It took us over 2 months to transfer this amount of data, and when it completed we realized there is a problem with the Cloud Recall workflow. When we try to "Browse and Restore" from the copy precedence (Azure Archive), it tries to reach an index in this archive cloud: it runs an "Index Restore" job and cannot find the index data because it is archive storage, so it runs the Archive Recall workflow to recall the index data in order to list the backup data. This workflow fails after a few seconds, and we see an error in the Browse and Restore window: "The index cannot be accessed. Try again later. If the issue persists, contact support.". We decided that restoring an index from the archive cloud isn't a good idea, because even if it worked it would take too much time (a few hours just to list backup content (index res
Front end data report
Hello, I need your help ;-) I was asked to prepare a tabular report showing front-end data stored in a local library (NetApp). I already explained the difference between logical and physical use, but my management still wants to see the list of everything that sums up to 297 TB. I tried the Chargeback report, but it's showing me data from June last year. Does anyone know how I can achieve this goal?
Copy backup data from tape library to Cloud Library (Cloudian)
We have a storage policy with aux copies that sent disk backups to a tape library. The library that contained all these tapes has been decommissioned; a new library was stood up and all the tapes were put into it. However, the aux copy that represents this data belonged to a different media server and physical library. We are trying to figure out how to take the data sent to the aux copy in the old storage policy and move it to a new Cloudian array that has been configured as a cloud library.
Synth Full on Secondary Copy Storage Policy
Hi, we back up a file server in China with incrementals; on Saturday there is a synthetic full. The aux copy of the synthetic full (1.8 TB) to Germany is really slow. Can I schedule the synthetic full on the secondary copy of the storage policy? That is: in China always incremental, aux copy to Germany, and then the synthetic full in Germany. We need a full for the tape backups. Regards, Peter Rupp
Configuration of two different NAS disk library using single / two DDB partition
Hello Team, we are planning to configure two different NAS disk libraries with three physical MediaAgent servers. Each MediaAgent server has two dedicated SSD drives configured as RAID 1. We want to know whether each disk library will use a separate DDB or whether both will use a single DDB; there is only one DDB disk in each MediaAgent server. Thanks and regards, Anand
Aux Copy Job missing ?
Hi, I have a customer with 2 copies: 1. primary, dedupe on disk, with 66 jobs; 2. secondary, dedupe on disk, with 133 jobs. He created a copy #3 to replace the secondary, but he chose #1 as the source; however, some jobs exist only in the secondary copy. Is there a way to pick up the missing jobs by changing the source of copy #3 to #2? If I change it and run an aux copy, the missing jobs are not picked up. Or do I have to delete copy #3 and start over?
Refresh arrays with intellisnap
Hi team, I need help designing a Commvault configuration for a storage infrastructure refresh. A customer has old NetApp storage with two arrays, with SnapVault and SnapMirror between them managed by Commvault with OCUM, and all clients using IntelliSnap. The customer will refresh the storage arrays by moving the SVMs to new arrays. I am not sure whether just moving the SVMs will be completely transparent to Commvault, or whether it will be necessary to configure the new arrays and SVMs and create the replicas from scratch again. Thank you.
WORM on VTL
Hello experts, recently customers have wanted to apply WORM functionality to VTL storage in order to respond to ransomware threats. I searched BOL and the Commvault community, but apart from the WORM media configuration page I could not find a detailed guide on how to configure and operate it: https://documentation.commvault.com/2022e/expert/10496_worm_media_configuration.html I would like detailed guidance on implementing WORM on a VTL or tape library. For example, once WORM media is fully used, it is moved automatically to the Retired Media pool: https://documentation.commvault.com/2022e/expert/10493_worm_media.html Is this media then reusable? If so, through what procedure can it be reused? Regards, Kim KK
MA hardware refresh and library mount path sharing
Hi, I have to perform an OS refresh of some physical MAs (2 grids of 4 MAs); we will upgrade from Windows Server 2012 to 2019. My concern is data availability, because our disk libraries are configured with local LUNs on each MA, shared with the others via the DataServer-IP option. The problem is that while one MA is unavailable for the upgrade, its mount paths are not readable by the other MAs. I have searched the documentation and cannot find any use cases for doing an MA refresh with shared mount paths. Please advise. Kind regards, Christophe
Azure CloudLib Data Written does not match Dedupe statistics
I have a cloud library in Azure configured with three Cool Blob containers (three mount paths). Commvault reports an application size of 30 TB and data on disk of 50 TB, while Azure reports only 12 TB used. We have verified that WORM is disabled on the volumes in Azure. It seems that Commvault is misreporting the statistics for some reason. Has anyone seen this before with Azure cloud libraries?