Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 588 Topics
- 3,129 Replies
Hello, I have an IBM TS3200 tape library with LTO-4 media. Now we have a new HPE tape library with LTO-7 media. How can we copy the data from the LTO-4 media (old tape library) to the LTO-7 tape library? Which way is recommended? Maybe Media Refresh? Thank you! Best regards, Elizabeta
Size on disk is 55.74 TB, but data written is 24.77 TB. Hi folks, I've been trying to figure this out for a few hours and I still haven't found anything wrong in the Storage Policy, DiskLib, or Media Agent properties. The backup jobs are also fine. I counted 10,800 jobs manually just to be sure the size is correct: 24.77 TB of data is written. But how can the size on disk take up 55.74 TB? Has anyone had the same situation?
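A quick way to cross-check the discrepancy outside of Commvault is to total the actual file sizes under the mount path and compare that with the "Data Written" figure for the copy. A minimal sketch (Python; the mount path below is a hypothetical example, not anything Commvault-specific):

```python
import os

# Hypothetical disk-library mount path; adjust to the real location.
MOUNT_PATH = r"D:\DiskLibrary\MountPath01"

total_bytes = 0
file_count = 0
for root, _dirs, files in os.walk(MOUNT_PATH):
    for name in files:
        try:
            total_bytes += os.path.getsize(os.path.join(root, name))
            file_count += 1
        except OSError:
            # Files can disappear while pruning runs; skip them.
            pass

print(f"{file_count} files, {total_bytes / 1024**4:.2f} TiB on disk")
# Compare this figure with the copy's "Data Written". A large gap usually
# points at data pending pruning (aged blocks not yet reclaimed by the DDB),
# orphaned chunk folders, or other copies sharing the same mount path.
```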
Hi guys, I have a customer who is currently using standard MAs with Commvault: copy 1 to disk (PureFlash with NVMe drives), copy 2 to another PureFlash, and copy 3 to tape, around 350 TB per week sent to LTO-7 tapes across 4 tape drives (weekly full) at a sustained throughput of 700 GB/hour per drive (4 drives in parallel). We are looking to replace both MAs with two HyperScale X clusters. The questions are: how do we need to configure the HyperScale X (reference architecture) to sustain the weekly tape creation of 350 TB at the same throughput, knowing that we are going to use NL-SAS in the HyperScale X cluster? Or can we use SSD for the storage pool drives in a HyperScale X?
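To put numbers on the requirement, here is a quick back-of-the-envelope calculation (Python; the figures come straight from the post and are purely illustrative, not HyperScale X sizing guidance):

```python
weekly_full_tb = 350           # TB to send to tape each week
drives = 4                     # LTO-7 drives writing in parallel
per_drive_gb_per_hour = 700    # sustained throughput per drive today

aggregate_gb_per_hour = drives * per_drive_gb_per_hour          # 2,800 GB/h
hours_needed = (weekly_full_tb * 1000) / aggregate_gb_per_hour   # ~125 h

print(f"Aggregate tape feed: {aggregate_gb_per_hour} GB/h")
print(f"Time to cut {weekly_full_tb} TB: {hours_needed:.0f} h "
      f"(~{hours_needed / 24:.1f} days)")
# ~125 hours is already ~5.2 days of continuous writing, so whichever disk
# tier is chosen (NL-SAS or SSD) must sustain reads of at least ~2,800 GB/h
# on top of the ingest workload for the tape-copy window to fit in a week.
```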
Hi All, I came across Commvault documentation mentioning that deduplication won't make much of an impact when I keep my long-retention copy in the cloud as a tape replacement. Can anyone share more details, from your own experience or from Commvault documentation, about the pros/cons of keeping the long-term copy in the cloud with/without dedup? Thanks, Mani
Hello, I am hoping someone can point me in the right direction. I have a secondary copy to tape with an infinite retention period. The tapes will be stored offsite for safekeeping; currently the tapes are onsite. I noticed that during restores of older data I would need to insert the tape media containing the index in order to browse the content to restore. Is there a way to keep this index on local storage so the tape is not needed to browse the contents?
Team, we are using Windows servers as backup media agents. I want to decommission one of the media agents, "x", which is part of 3 libraries and dedupe storage policies. I have disabled the mount paths on all 3 libraries associated with media agent "x", and View Content shows that there is no data present on the mount path. When I try to delete the mount path associated with media agent "x", I get the error below: "Mount path is used by a Deduplication database. The data on this mount path used by the deduplication DB could be referenced by other backup jobs. The mount path can be deleted only when all associated storage policies/copies with deduplication enabled are deleted. See the Deduplication DBs tab on the property dialog of this mount path to view the list of DDBs and storage policies/copies." If I unshare the mount paths associated with media agent "x" from the other mount paths of the same library and remove media agent "x" from the Data Paths tab in the dedupe storage policy, the restore jobs start f…
Hello, we have multiple sites, and all of these sites have different WAN bandwidths. All are DASH copying to a single location, and they all have different working hours. We want to create multiple bandwidth-throttling rules. What would be the best way to approach this? Should we create the rules at the source Media Agent, throttling the send traffic? Thank you.
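One way to reason about per-site limits is to work backwards from each site's daily DASH volume and the hours available outside its working window. A small sketch (Python; the site figures are made up for illustration only):

```python
sites = {
    # name: (daily DASH-copy volume in GB, off-hours available per day,
    #        WAN link in Mbit/s) -- illustrative numbers, not real sites
    "siteA": (800, 12, 1000),
    "siteB": (300, 10, 200),
    "siteC": (150, 14, 100),
}

for name, (gb_per_day, offhours, link_mbps) in sites.items():
    # Throughput needed to finish the daily copy inside the off-hours window.
    needed_mbps = gb_per_day * 8 * 1000 / (offhours * 3600)
    print(f"{name}: need ~{needed_mbps:.0f} Mbit/s of a {link_mbps} Mbit/s link")

# If the needed rate is well below the link speed, a send-side throttle on the
# source Media Agent (capped during working hours, open after hours) is enough;
# if it is close to the link speed, the schedule window matters more than the
# throttle value.
```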
I have read a couple of articles on Commvault Online that say defragmentation of the magnetic libraries is a good idea. Diskeeper, now Dymax IO, was listed as a certified product for online volumes. I am wondering whether others defrag their libraries for performance purposes, and what products they use. I have read in older articles that the native Windows defragmentation tool can be used. It also states that it should be done outside of backup hours (makes sense). Any feedback or information would be appreciated. Thanks.
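Before committing to a third-party tool, it may be worth checking whether fragmentation is actually a problem, since the built-in Windows defrag.exe can run an analysis-only pass. A small wrapper (Python, calling the standard Windows tool from an elevated prompt; the drive letter is only an example, and the run should happen outside the backup window):

```python
import subprocess

# Example drive letter for a magnetic-library mount path.
DRIVE = "E:"

# "/A" performs analysis only (no defragmentation); "/V" prints verbose details.
result = subprocess.run(
    ["defrag", DRIVE, "/A", "/V"],
    capture_output=True, text=True
)
print(result.stdout or result.stderr)
# If the reported fragmentation is low, a full defrag pass is unlikely to be
# worth the extra I/O against the library; if it is high, schedule the real
# pass well outside backup hours.
```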
Hi Team, I have a 6-node HyperScale cluster on-prem for the primary copy. For the secondary copy I need to move the data to the cloud (archive storage). My question is: when creating the cloud storage pool, do I select the existing (on-prem) dedup path (/ws/ddb/P_1/Copy/_21/Files/31), or do I need to create a dedicated dedup path on the on-prem MA? If it is option 2, what is the recommended dedup partition value and the reasoning behind it? Please also share your best practices for hybrid data protection, if any. Thanks, Manikandan
Hi all, I need your help understanding the table architecture of the Deduplication Database v4 gen2: the table structure and how the tables function. There is no information in the documentation explaining the current DDB table structure. Please help with this information if possible.
Hi Folks, I have an interesting situation with an aux copy. I'm doing a DASH copy from one disk library to another disk library. Despite the successful completion of the Aux Copy, 4 jobs remain in "Partially Copied" status. I ran Data Verification on the Primary Copy for the 4 jobs and it completed successfully. I did a Re-Copy but the status stays the same; I tried Do Not Copy and then Pick for Copy, but it's still the same. "All Backups" is selected in the copy policy. What should I check? Best Regards.
Hello there, I have a minor issue: I cannot delete an unused mount path, since it's used by a DDB. There are a few MPs under the disk library dedicated to this DDB. In the DDB properties I can only remove the whole disk library, which is not the point. The CommCell says that in order to delete this MP, I need to delete each Storage Policy Copy that references this disk library. That's not an option either. The logs say something similar: EvMMConfigMgr::onMsgConfigStorageLibrary() - Error [470, Mount path is used by a Deduplication database.] occurred while deleting the mountPath[xx] ###### MLMMountPath::deleteMountPath() - :mlmmountpath.cpp:6170: Failed to delete mountpath [xx] due to error [470, Mount path is used by a Deduplication database.]. ###### MLMMountPath::deleteMountPath() - :mlmmountpath.cpp:5593: Failed to delete MountPath from database for Id [xx] due to error Mount path is used by a Deduplication database.:470 Do you have any ideas or workarounds to delete a single MP in this situation?
Hi all, I will try here. We have 2 MAs in Azure that act as proxies as well. When we try to back up a VM from Azure (to cloud storage), the job completes if MA 1 is configured as the proxy; when we configure MA 2 as the proxy, it fails with a "failed to fetch a valid sas token" error. Does anyone have a clue what causes this error? Both MAs have the same OS, disks, permissions, and version. There are no drops on the firewall, and the network settings are configured (client/CS).
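Since one MA works and the other does not, it may help to confirm outside Commvault that the failing VM's identity can obtain a storage token at all. A minimal sketch with the Azure SDK for Python (azure-identity / azure-storage-blob; the account URL is a placeholder, and note this only tests the VM's managed identity path, not the exact SAS request Commvault itself makes, which may go through the hypervisor's app registration instead):

```python
from azure.identity import ManagedIdentityCredential
from azure.storage.blob import BlobServiceClient

ACCOUNT_URL = "https://<storageaccount>.blob.core.windows.net"  # placeholder

# Run this on each MediaAgent VM. If the failing MA cannot get a token here,
# the problem is the VM identity / RBAC assignment rather than Commvault.
credential = ManagedIdentityCredential()
token = credential.get_token("https://storage.azure.com/.default")
print("Got token, expires at:", token.expires_on)

service = BlobServiceClient(account_url=ACCOUNT_URL, credential=credential)
for container in service.list_containers():
    print("visible container:", container.name)
    break
```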
This is a follow-up conversation to my initial post about FR22.3; it is more of a findings topic. I had three older 2k8r2 media agents (now replaced) that experienced widespread issues after going to FR22.3. NOTE: none of these issues are/were recorded with Commvault as actual issues. The decision to replace/migrate the OS was made at the 11th hour after working for weeks on these issues. The basic application appears to work just fine with 2k8r2 on FR22.3: readiness, services running, jobs can run, etc. The issue we were running into was consistent across all three, and since they were the only 2k8r2 media agents in our environment, I figured that was the cause; it seemed too coincidental not to be. Within 4 hours of the FR22.3 update, our jobs started experiencing all or some of the following errors: pipeline errors, media mount services device not ready, library full. Even when attempting to select new snap mount hosts for jobs I was getting connection refused messages in the GXTail event logs. The mos…
Hi there, I have successfully added cloud storage (S3 compatible). However, for the time being I am only able to set up the connection over the HTTP protocol. When I try to add a new cloud storage library using HTTPS, I get the error message "failed to do verification". To move forward, I would like to use the HTTPS protocol. I have a self-signed certificate from my NetApp S3-compatible cloud storage; is it possible to allow using it, since I don't have a CA-issued cert? Can Commvault be forced to accept a self-signed certificate? What I did try was the additional setting described as "Use this additional setting and set its value to 0 to skip the checking of the server's certificate claimed identity for the cloud libraries", but it didn't help. Is there a way to verify that this setting is actually being applied? Do you have any suggestions for this situation? Thanks for your ideas.
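Independently of Commvault, it is worth confirming that the endpoint and the self-signed certificate chain actually validate over HTTPS, since a SAN/hostname mismatch would fail no matter what the backup software does. A quick check with boto3 (the endpoint, keys, bucket name, and CA-bundle path are placeholders):

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://netapp-s3.example.local",   # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",                    # placeholder
    aws_secret_access_key="SECRET_KEY",                # placeholder
    verify="/path/to/self-signed-ca.pem",              # CA bundle to trust
)

# If this succeeds, the certificate chain in the bundle validates for that
# hostname; if it fails with an SSL error, the certificate (or its SAN) is
# the problem rather than Commvault.
print(s3.head_bucket(Bucket="my-backup-bucket"))
```

If the bundle validates here, importing the same certificate into the MediaAgent's OS trust store is the usual next direction to try, though whether that alone satisfies Commvault's verification is something support/documentation would have to confirm.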
We currently use IBM V5000 arrays as our Commvault backup target to land our deduped backups. We are starting to review other options to see what other fast, cost-effective options are out there. I prefer to use Fibre Channel connections, but I'm open to alternatives. Since Commvault is really the brains in our scenario, the storage array does not really need any features, just good speed. What vendor storage arrays do you use? Are you happy with them?
Hello Team, just a discussion point: we have a lot of media that has turned "Deprecated" and been moved to the Retire Pool, even though it has seen very little usage. When I look at the Media Properties, it shows the message below, and the Side information shows very little usage. The default Commvault media hardware-maintenance thresholds are shown below as well. My question is: is it advisable to mark this media as good and re-use it in the future? Can someone from MM clarify? @Christian Kubik
Aux copy error: may not be copied as we failed to get array controller Media Agent, make sure to set an array controller Media Agent for source Array, will be retried soon
In our CIFN environment, we would like to take SnapVaults from our primary (snap copy) under our storage policy. We are currently using Open Replication (not OCUM). Our initial (copy) snapshot works fine, but the aux copies (SnapVaults) do not. Here is the message under the Progress tab when the job initiates: Error: Data to Storage Policy [storage-xxx] Copy [snap_vault] may not be copied as we failed to get array controller Media Agent, make sure to set an array controller Media Agent for source Array, will be retried soon. Any ideas?
Hi all, we are reviewing our backup strategy and investigating a few scenarios for back-end storage. Right now we are considering object storage via the S3 protocol and file storage (JBOD) over the NFS protocol. The data that will be sent there is on-prem filesystems, databases, VMs, etc.; total capacity is over 10 PB. We have tested some object storage over the S3 protocol, but we faced issues with data reclamation (garbage collection for expired objects takes far too long, and waiting for capacity to be reclaimed takes a month or more). Can you share your experience with back-end storage, the challenges you faced, and how you solved the issues I mentioned? Also, what advantages do you see when comparing the S3 and NFS protocols for backups? All feedback is very much appreciated. Thanks!
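When reclamation is slow it helps to establish whether the bottleneck is the object store's delete rate or the backup software's pruning cycle. A small probe with boto3 (the endpoint, bucket, and prefix are placeholders; it writes and then batch-deletes a few thousand throwaway objects to measure raw delete throughput against the same storage):

```python
import time
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.example.local")  # placeholder
BUCKET, PREFIX = "reclaim-test", "probe/"                          # placeholders

# Create a few thousand tiny test objects.
keys = [f"{PREFIX}obj-{i}" for i in range(2000)]
for key in keys:
    s3.put_object(Bucket=BUCKET, Key=key, Body=b"x")

# Batch-delete them (DeleteObjects accepts up to 1000 keys per call) and time it.
start = time.time()
for i in range(0, len(keys), 1000):
    batch = [{"Key": k} for k in keys[i:i + 1000]]
    s3.delete_objects(Bucket=BUCKET, Delete={"Objects": batch})
elapsed = time.time() - start

print(f"Deleted {len(keys)} objects in {elapsed:.1f}s "
      f"({len(keys) / elapsed:.0f} deletes/s)")
# If raw deletes are fast, the delay is in the pruning/garbage-collection cycle
# on the software side; if they are slow, the object store itself is the limit.
```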
Hi there, I am trying to add new cloud storage (S3 compatible), but I am unable to do so. Moreover, I don't see any associated log files; I only see this error message: "Failed to verify the device from MediaAgent [xxxxxx] with the error [Failed to check cloud server status, error = [[Cloud] The server failed to do the verification. Error = 44037]]." My question is: which logs should I add to the logging settings, and how should I troubleshoot this in general? PS: As a workaround I tried this - https://documentation.commvault.com/commvault/v11/article?p=51230.htm - but it didn't help. Thanks for any ideas.
Commvault 11.18 (soon to be 11.20). We are on the cusp of eliminating our secondary backups to tape. The benefit of a secondary copy on tape was the built-in air gapping (and the ability to move it offsite for safekeeping). We plan to move to creating our secondary copies on disk in a different city. Commvault's built-in ransomware protection is a no-brainer, but what about WORM? What are the implications of WORM storage for space consumption? Is there any scenario in which a WORM-enabled deduplicated secondary copy that is a true copy of the deduplicated primary copy (and with the same retention) would be any larger than the primary copy? Presumably, if 1,000 jobs share one block on the secondary storage, that block will not be removed until the last of those 1,000 jobs ages out. Any info is appreciated. Thanks!
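One way the WORM copy can grow past the primary is if the DDB backing it has to be sealed periodically, since blocks cannot be shared across sealed stores and a sealed store cannot shrink until everything in it has aged out. A rough model of that effect (Python; the seal-interval behaviour and all figures here are my assumptions for illustration, not official sizing guidance):

```python
# Assumed behaviour: WORM + dedup implies periodic DDB sealing, so each new
# store starts a fresh baseline instead of referencing old blocks forever.
baseline_tb = 100          # unique (deduplicated) data set
retention_days = 90        # copy retention
seal_interval_days = 90    # assumed seal interval on the WORM copy

# A store opened at day 0 survives roughly seal_interval + retention days,
# while a new store (with its own baseline) opens every seal interval.
live_stores = 1 + retention_days / seal_interval_days          # ~2 here
worm_footprint_tb = baseline_tb * live_stores

print(f"non-WORM steady state: ~{baseline_tb} TB (one baseline, shared blocks)")
print(f"WORM steady state    : ~{worm_footprint_tb:.0f} TB "
      f"({live_stores:.1f} overlapping baselines)")
# Under this model the WORM-enabled deduplicated copy can indeed be larger than
# the primary copy with identical retention, because old sealed stores linger
# until every job written to them has aged out.
```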
Hello World, I've recently replaced my media server and noticed that my auxiliary copy jobs get this error whenever I try to run them: "User specified a data path which is not part of the data paths in the storage policy copy. Advice: Please specify a job data path which is part of the Storage Policy copy." I can see that the new media server has access to the library, but I'm not sure what else to check.
Hi Team, my DDB backup operations are failing with the following error message: "The snapshot of the Dedupe database from the previous attempt of this job is not available and a new one could not be created, as the job cannot continue under this condition, it will fail." I can't really find anything in the Commvault logs other than a few VSS-related errors. What could this mean? Regards, Winston
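Since the error is about the snapshot step and the logs show VSS errors, it may be worth checking the writer and shadow-storage state on the DDB MediaAgent directly. A small wrapper around the standard Windows tool (Python; vssadmin needs an elevated prompt, and this is a generic VSS health check, not a Commvault-specific diagnostic):

```python
import subprocess

# List VSS writers and shadow storage on the DDB MediaAgent. Writers stuck in a
# failed or "waiting for completion" state, or exhausted shadow storage, often
# explain why a new snapshot of the DDB volume cannot be created.
for args in (["vssadmin", "list", "writers"],
             ["vssadmin", "list", "shadowstorage"]):
    out = subprocess.run(args, capture_output=True, text=True)
    print(" ".join(args))
    print(out.stdout or out.stderr)
```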