Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
Storage Pool - best practices or no logic?
Hello,

Following the Commvault SE recommendations, we created a storage pool of 4 MediaAgents with DAS storage. Initially every MA could read and write to every mount path, and I noticed that the "LAN-free" logic does not work: each MA tries to access every mount path even when a closer or faster path is available. Even though our network is 10 Gb, data transfer between the MAs is very slow.

I have since allowed each MA read-only access to the other MAs' paths, which works better, but still not well. The biggest problem is that the aux copy to tape is very slow. Each policy allows every MediaAgent access to every tape, so my idea was: "OK, let's stop access over IP and let only the MA that owns the DAS read/write its data." With that, backups are fast, but the aux copy fails, because the MAs cannot access data on the other MAs.

So I am stuck, with no ideas other than abandoning the storage pool and going back to using each MediaAgent standalone. Any ideas how to:
- keep a storage pool, and
- force each MA to use its own DAS?
Hello, please help, I have an issue with my DDB reconstruction. Recently I moved my DDB to another folder on the same server, e.g. from folder1 to folder2. Three days later my colleague restarted the server and force-killed the SIDB process in Process Manager, and the server went into DDB recovery. For a week now, the file-system recovery phase completes, but the "adding records" phase fails. Today I found that the recovery process is reading from folder1 — apparently because after the DDB move no weekly DDB backup had run against the new path. I followed this procedure: https://documentation.commvault.com/11.24/expert/12582_moving_deduplication_database_to_another_location.html

Since the file-system recovery is pointing at folder1 instead of folder2, what do you suggest? Can I move the DDB back to its previous folder1, and what could happen, given that it keeps reconstructing and failing? I logged a ticket with support and an engineer tried to help, but it is still failing. What other way can I move this DDB folder back?
Delete Mount Path from de-dupe library and decommission Media agent
Team,

We are using Windows servers as backup MediaAgents. I want to decommission MediaAgent "x", which is part of 3 libraries and of deduplicated storage policies. I have disabled the mount paths on all 3 libraries associated with MediaAgent "x", and View Content shows that there is no data on the mount path. When I try to delete the mount path associated with MediaAgent "x", I get the error below:

"Mount path is used by a Deduplication database. The data on this mount path used by the deduplication DB could be referenced by other backup jobs. The mount path can be deleted only when all associated storage policies/copies with deduplication enabled are deleted. See the Deduplication DBs tab on the property dialog of this mount path to view the list of DDBs and storage policies/copies."

If I unshare the mount paths associated with MediaAgent "x" from the other mount paths of the same library and remove MediaAgent "x" from the Data Paths tab in the dedupe storage policy, the restore jobs start f…
DDB increase in (CommServe Job Records to be Deleted)
I’m getting the following alert email roughly once an hour:

> Anomaly Notification
> The system detected an unusual drop in the pruning performance for the following databases in commcell <CommServer_Host>
> Deduplication Database — Reason
> HyperScale_Primary — increase in (CommServe Job Records to be Deleted)
> CV Cloud Storage — increase in (CommServe Job Records to be Deleted)
> Please click here for more details.

When I follow the “click here” link, I see:

> 1 CommServe Job Records to be Deleted

This has been going on for a couple of weeks. I don’t think an annoying email is a big enough deal to open a ticket for, but I’d still like to clean this up. Does anyone know what the problem is and how to fix it? I’m sorry to say the Commvault help pages for this are not very useful.

Ken
Recovery Point from AUX Copy
Hi All,

The Admin Console dashboard shows the recovery point of a client, and as I understand it, that recovery point refers to the data available on the primary copy. In the same way, how can I get the recovery point from a secondary copy? Is there any report or other way to get that information?

Thanks in advance,
Mani
Problem with copy media LTO4 (IBM Tape library) to LTO7 (HPE Tape Library)
Hello,

I have an IBM TS3200 tape library with LTO4 media. Now we have a new HPE tape library with LTO7 media. How can we copy the data from the LTO4 media (old tape library) to the LTO7 tape library? Which way is recommended — maybe Media Refresh?

Thank you!
Best regards,
Elizabeta
Aux copy job failed with errors [13:138] [40:91] [40:65]
I have the following problem.

Alert: Aux copy job Failed
Type: Job Management - Auxiliary Copy
Detected Criteria: Job Failed
Is escalated:
Detected Time: Wed Dec 28 23:42:51 2022
CommCell: CommServe
User: Administrator
Job ID: 63139
Status: Failed
Storage Policy Name: CommServeDR
Copy Name: Secondary
Start Time: Wed Dec 28 23:00:11 2022
Scheduled Time: Wed Dec 28 23:00:08 2022
End Time: Wed Dec 28 23:42:51 2022
Error Code: [13:138] [40:91] [40:65]
Failure Reason: Error occurred while processing chunk in media [V_845], at the time of error in library [RezervnaKopija] and mount path [[CommServe] \\192.168.99.51\RezervnaKopija], for storage policy [CommServeDR] copy [Secondary] MediaAgent [CommServe]: Backup Job. Cannot impersonate user. User credentials provided for disk mount path access may be incorrect. Failed to Copy or verify Chunk in media [CV_MAGNETIC], Storage Policy [CommServeDR], Copy [Primary], Host [DRI-COMMVAULT.dri.local], Path [\\192.168.99.51\RezervnaKopija\MX19RW_07.26.2022_08.59\CV_M…
Synchronize All DDBs grayed out
Hi guys, I hope everybody is fine!

I ran a health report on my CommServe, and in the “DDB Performance and Status” section the “Readiness” column shows: Needs resync. When I try to resynchronize the DDBs from Storage Resources → Deduplication Engines, the “Synchronize All DDBs” option is grayed out, yet the DDB status shows Active. Does that mean that as long as the DDB is online I can’t run the synchronization, or am I missing something?

Thanks!
Data aging, deleting old jobs
Hello,

We have some issues freeing up storage space. When I run the Data Retention Forecast report I don’t see anything wrong; most jobs are retained under basic days or last-full-of-the-week/month rules. On the Web Console, under Storage → Data Retention, I see 80 TB retained for over a year, but I cannot find those jobs under the storage policies.

Last week, under SP → Summary → Storage Policy / Copy Space recovery prediction, I saw a predicted 35 TB on 16-4 and 30 TB on 17-4, with more predictions for this week. After data aging I saw a lot of prunable records in the DDB, so I ran a DDB verification, but afterwards no space was actually freed on the storage. I also ran Space Reclamation at level 1 with “Clear orphan data” — again, nothing changed on the storage. I will upload the DataAging log, and I can upload the SIDBPrune log as well if needed.
Implementing cloud combined storage tiers
We’re setting up a POC that uses a cloud MediaAgent to copy longer-term retention copies (1- and 7-year) from Azure cool blob storage to the archive tier, and we would like to use combined-tier storage for the library where the long-term copies will be kept. This is our first time configuring combined tier, and so far I have not been able to find documentation describing how to configure it. One question I’m hoping to answer: do we need to (or can we) pre-create the cool and archive storage accounts that will be used when configuring the new library, or is this done some other way?
Setting up a Proxy Server to Access the Cloud Storage Library
Hi,

We are configuring a cloud library as the export destination for Disaster Recovery (DR) backups, so that whenever we take a DR backup, the metadata is exported to our cloud library. However, the CommServe has no direct access to the cloud library; it must connect to the cloud storage through a proxy server, as explained here: https://documentation.commvault.com/v11/expert/9171_setting_up_proxy_server_to_access_cloud_storage_library.html

I am wondering which port we should use in step 8, because a random port number doesn’t work. Do you have any idea?

Best regards
AWS S3 as Cloud Library security requirement
To connect from an on-premises MediaAgent to an S3 bucket as a cloud library, we can use the options below:
1. AWS Direct Connect
2. VPN Gateway
3. Internet

My question: if we use option 3 (internet) to connect to the S3 bucket, how can we protect/secure the bucket from outside attackers or unauthorized users accessing it over the internet?
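One common hardening approach when a bucket is reached over the public internet is a restrictive bucket policy: deny any request that does not use TLS, and deny requests that do not originate from your site's known egress addresses. The sketch below uses the standard `aws:SecureTransport` and `aws:SourceIp` condition keys; the bucket name `my-cv-library` and the CIDR `203.0.113.0/24` are placeholders for illustration, not values from the thread. (S3 Block Public Access and a least-privilege IAM user for the MediaAgent would normally be layered on top of this.)

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyNonTLSAccess",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-cv-library",
        "arn:aws:s3:::my-cv-library/*"
      ],
      "Condition": { "Bool": { "aws:SecureTransport": "false" } }
    },
    {
      "Sid": "DenyOutsideKnownEgressIPs",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-cv-library",
        "arn:aws:s3:::my-cv-library/*"
      ],
      "Condition": { "NotIpAddress": { "aws:SourceIp": ["203.0.113.0/24"] } }
    }
  ]
}
```

Because these are explicit Deny statements, they override any Allow, so even leaked credentials cannot reach the bucket from an unexpected network or over plain HTTP.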
Delete Mount Path associated to DDB
Hello there,

I have a minor issue: I cannot delete an unused mount path, since it is used by a DDB. There are a few mount paths under the disk library dedicated to this DDB. In the DDB properties I can only remove the whole disk library, which is not the point. The CommCell says that in order to delete this mount path, I need to delete each storage policy copy referencing this disk library — not an option either. The logs say something like:

EvMMConfigMgr::onMsgConfigStorageLibrary() - Error [470, Mount path is used by a Deduplication database.] occurred while deleting the mountPath[xx]
MLMMountPath::deleteMountPath() - :mlmmountpath.cpp:6170: Failed to delete mountpath [xx] due to error [470, Mount path is used by a Deduplication database.].
MLMMountPath::deleteMountPath() - :mlmmountpath.cpp:5593: Failed to delete MountPath from database for Id [xx] due to error Mount path is used by a Deduplication database.:470

Do you have any ideas or workarounds for deleting a single mount path in this situation?
four to six DDB partitions
The situation: I have a setup with three HyperScale nodes and a cloud library. The goal is to connect all three nodes to the cloud library and create two DDB partitions on each node. So far we have been able to connect two of the nodes to the library and create two DDB partitions on each of those two nodes, but when I try to add two more partitions for the third node, the “add partitions” option is grayed out. According to the documentation, “Configuring Additional Partitions for a Deduplication Database”, you should be able to create a six-partition DDB. The same thing happens with the two existing storage pool libraries on the HyperScale nodes. So my question is: am I doing something wrong, am I missing a limitation, or is something wrong in the configuration that is causing this?
Prevent reuse tape command line
Hello,

Is there a script that prevents a tape from being reused? The only script I found marks the tape full, but that doesn’t assure me that the tape won’t get reused once the data on it expires. I have around 12k tapes that I need to protect from reuse, and doing it via the GUI doesn’t seem very workable. Any ideas?

Kind regards,
Amaral
Ubuntu Linux Media Agent Ransomware
Hi All,

Currently ransomware protection can only be used on RHEL / CentOS: https://documentation.commvault.com/11.23/expert/126625_system_requirements_for_ransomware_protection.html

The main reason for this is the use of the SELinux modules. Are there plans to bring this feature to Ubuntu Linux, or are there other ways to achieve it?
Microsoft OneDrive Cloud Storage
Hi All,

I have an issue when adding OneDrive cloud storage. I am configuring it via the CommCell Console. When I enter the Application ID, Tenant ID, and Shared Secret and then click the Detect button, I receive this error:

EvMMConfigMgr::onMsgCloudOperation() - Failed to check cloud server status, error = [[Cloud] The requested URI does not represent any resource on the server. Message: Invalid hostname for this tenancy]

Commvault support’s answer was: “The cloud vendor should be able to help you with the right URL. This is outside Commvault, unfortunately.” Does anybody have experience using Microsoft OneDrive as cloud storage?

Thank you,
Lubos
Dedupe & gzip compression. Does the --rsyncable option help?
Has anybody used gzip with --rsyncable to increase dedupe efficiency, and does it actually help? People still like to do app/database dump-and-pickup backups, which is all fine until they compress the dumps and you convert the pickup backup from tape to deduplicated disk. Since dedupe doesn’t like compressed files as a source, there are rumors that the --rsyncable option will help here. That option makes gzip periodically reset its compression state at content-defined boundaries so the output stays rsync-friendly, while increasing the compressed size by only about 1% compared to a regular gzip file.
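The reason plain gzip output dedupes so badly can be shown in a few lines: with standard DEFLATE, a one-byte change early in the input perturbs the entire compressed stream after that point, so a block-level dedupe engine finds almost nothing in common between two compressed dumps that are 99.99% identical. The sketch below is illustrative only (Python's `zlib`, not gzip itself, and a toy fixed-block chunker standing in for a real dedupe engine):

```python
import random
import zlib

BLOCK = 4096  # toy dedupe block size

def chunks(data: bytes, size: int = BLOCK):
    """Split a byte string into fixed-size blocks, the way a simple
    block-level deduplication engine would."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def identical_blocks(a: bytes, b: bytes) -> int:
    """Count positionally identical blocks between two streams."""
    return sum(x == y for x, y in zip(chunks(a), chunks(b)))

# Two dump-like payloads differing in exactly one early byte.
rng = random.Random(42)
dump_v1 = bytes(rng.randrange(32, 127) for _ in range(1_000_000))
dump_v2 = bytes([dump_v1[0] ^ 1]) + dump_v1[1:]  # flip one bit of byte 0

# Uncompressed: every block except the first is still identical,
# so dedupe would store almost nothing new for the second dump.
raw_hits = identical_blocks(dump_v1, dump_v2)

# Compressed: the one-byte change disturbs the DEFLATE stream from
# that point onward, so matching blocks all but disappear.
gz_hits = identical_blocks(zlib.compress(dump_v1), zlib.compress(dump_v2))

print(f"identical raw blocks:        {raw_hits}")   # 244 of 245
print(f"identical compressed blocks: {gz_hits}")
```

--rsyncable limits the damage by restarting compression at boundaries derived from the content, so only the region around the change differs between the two compressed files; whether that pays off in practice depends on how well those boundaries line up with your dedupe block size.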
Disable deduplication on CommServe client
Hi guys,

Is there any way to disable dedupe on one single client only? In our case we have a storage pool with deduplication enabled, used to store aux copies received from our main site. Among the aux copies sent to that storage pool is the one for the CommServe DR backup. As far as we know, in a DR scenario we have to use the Media Explorer utility on the MA in order to retrieve the CommServe DR backups from the mount path, and according to the documentation the tool cannot be used when the data is deduplicated.

Hence the need to disable dedupe only for the CommServe client, or only for the CommServe DR storage policy. And if there is a way to disable it for the CommServe only, is that really sufficient, given that the storage pool itself uses deduplication? Or does the CommServe DR storage policy have to be assigned to a separate storage pool that does not use dedupe? Any advice regarding this would be great.
Additional Partitions for a Deduplication Database
Hello,

Scenario: 2 MAs in a CommCell. The customer has bought one more MA and wants to add its SSD disk space to the existing DDB. I was reading the manual on the Commvault site and have a question: the “Before You Begin” section mentions an authentication code. Where can I get this “Authenticate Code”? https://documentation.commvault.com/m/commvault/v11_sp5/article?p=features/deduplication/t_configuring_additional_partitions.htm
Add Library Fujitsu LT20 to commserv
@Mike Struening Hello guys,

What is the best procedure and sequence for adding a new (first) physical library to a CommCell? The customer needs to send historical data to a physical library (Fujitsu LT20) and generate new full backups, and needs to send 3 full backups before erasing the data.
How many DDB partitions are supported on single MA?
I’m a little confused about the limits on multiple partitions and/or DDBs. Limits are mentioned in several places:

1) Hardware Specifications for Deduplication Mode — https://documentation.commvault.com/11.24/expert/111985_hardware_specifications_for_deduplication_mode.html — which mentions “2 DDB Disks” per MA, and
2) Configuring Additional Partitions for a Deduplication Database — https://documentation.commvault.com/11.24/expert/12455_configuring_additional_partitions_for_deduplication_database.html — which mentions “30 DDB partitions” per MA.

Also, in a recent discussion with a PS member, I was told that a partition should be treated as a DDB in itself. All of this creates a lot of confusion:
a) is a “DDB Disk” the same as a DDB?
b) is a “DDB partition” the same as a DDB?

If you look at 1), under Scaling and Resiliency, there is this information about the back-end size of the data, for example: “Each 2 TiB DDB disk holds up to 250 TiB for disk and 500 TiB for cloud extra large MediaAgent.” That means…
Secondary Copy on Disk - Implications of WORM setting
Commvault 11.18 (soon to be 11.20). We are on the cusp of eliminating our secondary backups to tape. The benefit of a secondary copy on tape was the built-in air gap (and the ability to move it offsite for safekeeping). We plan to move to creating our secondary copies on disk in a different city. Commvault’s built-in ransomware protection is a no-brainer, but WHAT ABOUT WORM? What are the implications of WORM storage for space consumption? Is there any scenario in which a WORM-enabled deduplicated secondary copy that is a true copy of the deduplicated primary copy (and with the same retention) would be any LARGER than the primary copy? Presumably, if 1000 jobs share one block on the secondary storage, that block will not be removed until the last of those 1000 jobs ages out. Any info is appreciated. Thanks!
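One way a WORM-enabled deduplicated copy can end up larger than the primary follows from the pruning mechanics: locked blocks cannot be micro-pruned while any locked job references them, and if the DDB is sealed periodically, a sealed store can typically only be released once every job written into it has aged. The back-of-envelope sketch below is a simplified illustrative model under those assumptions — not vendor sizing guidance — with made-up example numbers:

```python
def copy_footprint_gb(daily_unique_gb: float, retention_days: int,
                      seal_interval_days: int):
    """Rough steady-state footprint of a deduplicated secondary copy.

    Simplified model (assumption, not vendor guidance):
    - without WORM, aged blocks are micro-pruned as jobs expire, so the
      copy holds about `retention_days` worth of unique data;
    - with WORM, the store is sealed every `seal_interval_days` and can
      only be released once ALL jobs in it have aged, so in the worst
      case it holds `retention_days + seal_interval_days` worth.
    """
    without_worm = daily_unique_gb * retention_days
    worst_case_worm = daily_unique_gb * (retention_days + seal_interval_days)
    return without_worm, worst_case_worm

# Example: 100 GB/day of unique data, 90-day retention, seal every 90 days.
base, worm = copy_footprint_gb(daily_unique_gb=100, retention_days=90,
                               seal_interval_days=90)
print(f"non-WORM copy:          ~{base:,.0f} GB")
print(f"WORM copy (worst case): ~{worm:,.0f} GB ({worm / base:.1f}x)")
```

In this toy model, sealing at the same cadence as the retention period gives a worst case of roughly 2x the non-WORM footprint, which is why the seal interval matters as much as the retention itself.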