Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 671 Topics
- 3,371 Replies
Recopying contents from a tape that was physically dropped and is no longer usable
Hello! This morning I was all thumbs and dropped a tape. The little pin inside came loose and the tape is barely holding together, so I have to consider it dead. Is there a way to flag the data on it to be recopied to another tape? Thanks!
Are there any special considerations for building a "writing tape only" media agent?
We have an old physical media agent doing aux copies AND writing tapes every month. When tape jobs start on this media agent, the throughput of the aux copies suffers greatly: the CPU goes to 100% and stays there until the tapes finish (which might take 10+ days). We wanted to see if we could easily offload the "tape writing" jobs, as we have a newer (but not very powerful) physical system we could repurpose. This would entail installing a new physical media agent in the rack, adding SCSI HBAs to it, and connecting them to the tape drives. Are there any "gotchas" when setting up a new media agent for tape jobs (special physical hardware considerations, software/config gotchas, or guides I should be looking at)? I haven't set up a media agent before (especially for tapes), and this setup was maintained by a person who is no longer working with us. Also: we're using LTO7 M8 tapes with LTO8 drives.
"Failed to read db" error when adding NFS mount path
Hi team, I am trying to add a network mount path to a Commvault storage library. I assign the MediaAgent, then choose Network, pick the credential, and input the path. When I click OK it takes a long time to load, then gives the error: "Failed to read db". This is Commvault version 11.24.94, recently upgraded from SP16. I tried looking at the logs but I can't seem to find the relevant ones. Anyone with an idea?
Managing Pending Actions in Vault Tracker
There seems to be some question surrounding Vault Tracker and how to manage pending actions. This is the correct process for managing pending Vault Tracker actions: https://documentation.commvault.com/11.26/essential/111089_managing_pending_vault_tracker_actions.html Since some organizations are retaining their tape footprint for archival and for data protection from ransomware, Vault Tracker is an excellent tool for tape management. Dwayne
Clarification on Deleting Jobs on Tape
Hello, I am updating some of our documentation on our best practices for securely deleting files from a file server's backups. The file server backups in question are stored only on a primary copy residing on tape, encrypted with AES-256 per the storage policy, and using the built-in key management server. Normally, if we need to delete a file, we follow the documentation and use the "delete data by browsing" option. For clarification: if I use the "delete data by browsing" option and delete a file that resides offsite on tape, there is no way to recover that file, correct? There is no "un-age" or catalog operation I could perform on the tape if I were to insert it back into my tape library? I assume that the CommCell destroys the indexed data/encryption keys associated with that file and cannot read that block of data on the tape? Recently I noticed an option in the CommCell browser where I can delete the contents of an entire tape: Storage Resources > Libraries > Tape Library > Media
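For anyone reasoning about the question above, the usual mental model is crypto-shredding: the ciphertext stays on the tape, but once the key material for that object is destroyed, nothing can decrypt it. A minimal sketch of that concept, assuming per-object keys purely for illustration (this is not Commvault's actual key-management implementation):

```python
# Conceptual crypto-shredding illustration -- hypothetical, NOT
# Commvault's actual key-management internals.
from cryptography.fernet import Fernet, InvalidToken

# Assumption for illustration: each backup object has its own key.
key_store = {}

def write_to_tape(object_id: str, data: bytes) -> bytes:
    key = Fernet.generate_key()
    key_store[object_id] = key
    return Fernet(key).encrypt(data)   # ciphertext lands on tape

def delete_data(object_id: str) -> None:
    # "Delete data by browsing" analogue: destroy the key,
    # leave the ciphertext on tape untouched.
    key_store.pop(object_id, None)

def restore(object_id: str, ciphertext: bytes) -> bytes:
    key = key_store.get(object_id)
    if key is None:
        raise InvalidToken("key destroyed -- block is unreadable")
    return Fernet(key).decrypt(ciphertext)

blob = write_to_tape("file01", b"payroll.xlsx contents")
delete_data("file01")
try:
    restore("file01", blob)
except InvalidToken as exc:
    print("restore failed:", exc)      # data is effectively gone
```

Under that model, re-inserting and re-cataloging the tape cannot bring the file back, because cataloging can only surface what the remaining keys can decrypt.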
Aux Copy - how to use all free tape drives unless another job needs a drive?
I have a library with 2x LTO drives that is used for some direct-to-tape jobs and for aux copies of some disk jobs. Ideally I'd like both LTO drives to be free to aux copy data, but if another job runs that needs a drive, the aux copy should throttle back down to using one drive. For example, right now I have 50 TB of data to aux copy that is going to a single drive when it could go to both drives (I've got them, so why not use them), except that if I set the aux copy to use both, it seems any new backup jobs to tape pause with "no resources available". Thanks 😀
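A toy model may make the requested policy concrete; everything below is hypothetical and says nothing about how Commvault's resource manager actually arbitrates drives:

```python
class DriveScheduler:
    """Toy model of the ask: aux copies use every idle drive but
    yield whenever a direct-to-tape backup needs one."""

    def __init__(self, total_drives: int):
        self.total = total_drives
        self.backup_demand = 0   # drives requested by backup jobs

    def backup_starts(self) -> None:
        self.backup_demand = min(self.backup_demand + 1, self.total)

    def backup_ends(self) -> None:
        self.backup_demand = max(self.backup_demand - 1, 0)

    def aux_copy_streams(self) -> int:
        # Aux copy gets whatever the backups don't need, instead of
        # backups pausing with "no resources available".
        return max(self.total - self.backup_demand, 0)

sched = DriveScheduler(total_drives=2)
print(sched.aux_copy_streams())   # 2 -> both drives aux copying
sched.backup_starts()
print(sched.aux_copy_streams())   # 1 -> aux copy throttled back
```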
Expire uncompleted "To Be Copied" - Selective Copy - First Full of the Year
I'd like to create a single beginning-of-year (Jan 2023) selective copy (to tape). Settings are Selective: Yearly Full, First Full of the Year. What I'd ideally like to achieve is to capture only those full backups that took place in the first 2 weeks of the new year, which is 'easy' to do. What happens, though, is: if some 'new data' or a new subclient is created and it gets its first full in, say, February 2023, that data will 'wait' until it can be written to tape, but no physical tape will be made available until Jan 2024 (e.g. 10 tapes are put in on 1 January 2023, 10 tapes are removed 31 January 2023, and no further tapes are inserted until 1 January 2024). What I'd like is: if there are aux copies waiting (for a specific storage policy and storage policy copy) and the aux has been waiting for more than (say) 60 days, change the job to 'Do Not Copy'. That is, to 'almost' have an expiry date on waiting aux-to-tape copies, or an option so that the 'First Full of the Year' has a validity period of
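The requested rule can be stated precisely in a few lines. The sketch below is purely illustrative: the two-week window and 60-day limit come from the post itself, and no such "expiry on waiting copies" setting is being claimed to exist:

```python
# Illustrative restatement of the requested selective-copy rule.
from datetime import date, timedelta

QUALIFY_START = date(2023, 1, 1)
QUALIFY_END = QUALIFY_START + timedelta(days=14)   # first 2 weeks
MAX_WAIT = timedelta(days=60)                      # poster's example

def disposition(job_finished: date, today: date) -> str:
    if not (QUALIFY_START <= job_finished < QUALIFY_END):
        return "not selected"                # full outside the window
    if today - job_finished > MAX_WAIT:
        return "mark Do Not Copy"            # waited too long for tape
    return "to be copied"

print(disposition(date(2023, 1, 5), date(2023, 2, 1)))    # to be copied
print(disposition(date(2023, 2, 10), date(2023, 4, 20)))  # not selected
print(disposition(date(2023, 1, 5), date(2023, 6, 1)))    # mark Do Not Copy
```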
Delete HPE Catalyst
Hi, I need your help. In the initial implementation, the customer added a machine running Debian 11.00 as a MediaAgent, and it is not supported, as it only allows configuring HPE Catalyst storage and not the libraries. The client therefore uninstalled this Debian MediaAgent, and now I see that I cannot remove the HPE Catalyst storage. For the moment I added a MediaAgent with Ubuntu, and it manages to see both the disk storage and the libraries. The error is about WORM media data.
Replication group to Dell PowerScale
Looking to start a replication group from a VM and default backup set to a mount path on a Dell PowerScale. Going forward, this volume will be SAN-hosted instead of mounted via a server. Upon looking at the config, I don't see an option. I'm aware replication groups are agent-to-agent. I thought about making a library with the location, but even that doesn't allow it.
Waiting for send queue to get emptied.
Hi all, one question about DDB verification jobs. The verification job for one of our DDBs takes quite a long time, usually a matter of days. When I check the ScalableDDBVerf.log file I see many messages like this: WARNING - Waiting for send queue to get emptied. Curr Size  Should I consider this a symptom of a problem? Thank you in advance. Gaetano
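For intuition, that message pattern usually reads as flow control rather than failure: a reader blocks when a bounded send queue is full because the downstream side is draining slowly. A toy producer/consumer sketch of that general pattern (not CV internals; the queue size and delay are made up):

```python
# Toy model of bounded-queue backpressure -- illustration only.
import queue
import threading
import time

send_queue = queue.Queue(maxsize=4)    # hypothetical bounded pipeline

def reader():
    for chunk in range(20):
        if send_queue.full():
            print("WARNING - Waiting for send queue to get emptied")
        send_queue.put(chunk)          # blocks until a slot frees up

def sender():
    while True:
        send_queue.get()
        time.sleep(0.05)               # slow consumer -> backlog
        send_queue.task_done()

threading.Thread(target=sender, daemon=True).start()
reader()
send_queue.join()                      # the reader is throttled, not broken
```

In a model like this, frequent waits just mean the consumer side (network, destination MediaAgent, storage reads) is the bottleneck.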
Storage Policy tiering, or moving backups older than 6 months to different storage
Hello community, we have a storage policy which keeps all backups for 30 days and monthly backups for 18 months, all on the same storage. To free up some space, we could create another copy pointing to the cloud to keep our monthly backups for 18 months; then we would have 30 days on prem and all monthly backups in the cloud. Now a new idea has come up: keep 30 days on prem as well as 6 months of monthly backups, and when the monthly backups are 6 months old, they should be copied to the cloud until they are 18 months old and removed on prem. Is there a way to delay the copy of the monthly backups for 6 months?
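The desired lifecycle can be written down as a simple age-based rule. The sketch below just restates the post's 6-month/18-month windows (30-day months are an approximation, and this says nothing about whether Commvault exposes such a copy delay):

```python
# Illustrative restatement of the requested tiering lifecycle.
from datetime import date, timedelta

TODAY = date(2024, 1, 15)              # hypothetical "now"

def placement(monthly_backup: date) -> str:
    age = TODAY - monthly_backup
    if age > timedelta(days=18 * 30):
        return "aged off everywhere"
    if age > timedelta(days=6 * 30):
        return "cloud copy only"       # pruned on prem after copy
    return "on-prem only (copy deferred)"

for months_old in (1, 7, 20):
    sample = TODAY - timedelta(days=30 * months_old)
    print(f"{months_old:>2} months old -> {placement(sample)}")
```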
Error when adding Oracle Cloud Infrastructure Object Storage on Command Center
May I know if anyone has encountered this error before when adding cloud storage in Command Center, and what the resolution is? I tried searching various sites but had no luck. Error below: "Operating System could not find the device file specified. The device may be unreachable from the MediaAgent. Please ensure that the file is present in the given path and is accessible."
WORM storage lock with Data Domain
Hi, we are running 11.30 and we want to start testing WORM storage capabilities on our Data Domain. We have configured the retention-lock feature on the Data Domain and activated the WORM storage lock in Commvault through the Command Center. Talking to the Dell specialist, he told us there is a setting that could affect Commvault: the "automatic-lock-delay" value. That's the time a file remains "open" while being written to the DD by the backup application (in this case, Commvault), until the application confirms the file closure and the DD locks the file with the retention set earlier. As we don't know how much time CV needs, we have set it to 120 min on the DD. Does anyone have experience with WORM on Data Domain with Commvault? Do you know how long Commvault keeps files open on the DD before they are closed?
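The timing question can be visualized as a simple state timeline. The sketch below uses the poster's 120-minute automatic-lock-delay and a made-up 90-day retention purely for illustration; it is a mental model, not Data Domain or Commvault behavior verbatim:

```python
# Timeline sketch of the automatic-lock-delay as described above.
from datetime import datetime, timedelta

LOCK_DELAY = timedelta(minutes=120)    # poster's DD setting
RETENTION = timedelta(days=90)         # hypothetical WORM retention

def file_state(written: datetime, now: datetime) -> str:
    if now < written + LOCK_DELAY:
        # The writer may still append to / close the file here; if
        # the delay is too short, late writes hit a locked file.
        return "open, still mutable"
    if now < written + LOCK_DELAY + RETENTION:
        return "retention-locked (immutable)"
    return "lock expired, prunable"

t0 = datetime(2024, 1, 1, 12, 0)
for offset in (timedelta(minutes=30), timedelta(hours=3), timedelta(days=91)):
    print(offset, "->", file_state(t0, t0 + offset))
```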
Detecting data inside of a mount point/disk library not associated with active backup data?
Hello there, CV community. We are in a situation where we suspect there may be some data in one of our disk libraries that is not associated with retained backup jobs. The job data stored on this disk library uses deduplication, so the folder data primarily contains deduplication chunks. We would like to validate that the content on the disk library is "current", i.e. associated with retained, deduplicated backup data. In this case, the storage is Azure object storage. Based on Damian's post here: Clean Orphan Data, what is it? | Community (commvault.com), our hope was that we could use the DDB space reclamation feature alongside the 'Clean orphaned data' option to do this automatically. To test this, I ran the operation in a test environment after storing some arbitrary files alongside legitimate backup data within the storage. This wasn't successful, and I expect I have either misunderstood his description of the functionality or perhaps Commvault is specifically looking for
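Conceptually, orphan detection is a set difference between what exists in storage and what the dedup/job metadata still references. The sketch below uses made-up, illustrative paths; it may also hint at why a foreign file dropped into the library could simply be ignored rather than treated as an orphan chunk:

```python
# Concept sketch of orphan detection -- illustration only; the real
# "Clean orphaned data" pass works on Commvault's own chunk layout.
referenced_chunks = {
    "V_123/CHUNK_001",      # still referenced by DDB/job metadata
    "V_123/CHUNK_002",
}

objects_in_storage = {
    "V_123/CHUNK_001",
    "V_123/CHUNK_002",
    "V_999/CHUNK_777",      # dangling chunk: candidate orphan
    "random_test_file.txt", # foreign file, not chunk-shaped at all
}

orphans = objects_in_storage - referenced_chunks
print(sorted(orphans))      # ['V_999/CHUNK_777', 'random_test_file.txt']
```

If the real cleanup only considers objects matching its own chunk naming, an arbitrary test file would fall outside the scan entirely, which would be consistent with the test described above not removing it.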