Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
Cleaning tape not discovered in CommCell
Hello, the customer has installed a tape library, and the CommCell does not detect any cleaning tape on the console. When I select Discover Cleaning Tape, I get the message "There are no new media to discover", even though my setting is configured to discover media automatically. I can see the cleaning tape in the library, but not in the CommCell console. Do you have a solution for me? Thank you very much!
Full and incremental aux copy backup sizes are the same
Dear team, we observed that the full and incremental aux copy backup sizes are almost the same. The aux copy schedule is:
- Weekly full: 03:00 AM on Saturday
- Daily incremental: 07:00 PM (so on Saturday it runs after the full)
Note: on other days the incremental aux copies complete normally; the problem is only with Saturday's incremental aux copy. (I think Saturday's incremental aux copy is picking up INCR + FULL, because we checked job ID XYZ and it is reflected on both tapes.)
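For context on why this happens: an auxiliary copy run generally picks up every job on the source copy that has not yet been copied, regardless of backup type, so a Saturday 7 PM run will also carry the 3 AM full. A minimal sketch of that selection logic (job IDs, dates, and times are hypothetical):

```python
from datetime import datetime

# Hypothetical jobs on the primary copy around the Saturday schedules.
jobs = [
    {"id": 101, "type": "INCR", "finished": datetime(2024, 3, 1, 18, 30)},  # Friday incremental
    {"id": 102, "type": "FULL", "finished": datetime(2024, 3, 2, 6, 45)},   # Saturday full (started 03:00)
    {"id": 103, "type": "INCR", "finished": datetime(2024, 3, 2, 18, 40)},  # Saturday incremental
]

prev_run = datetime(2024, 3, 1, 19, 0)  # Friday 19:00 aux copy run
this_run = datetime(2024, 3, 2, 19, 0)  # Saturday 19:00 aux copy run

# The aux copy takes everything not yet copied, regardless of job type,
# which is why the Saturday evening run also carries the full.
to_copy = [j for j in jobs if prev_run <= j["finished"] < this_run]
print([(j["id"], j["type"]) for j in to_copy])  # [(102, 'FULL'), (103, 'INCR')]
```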
WORM storage lock with Data Domain
Hi, we are running 11.30 and want to start testing WORM storage capabilities on our Data Domain. We have configured the retention-lock feature on the Data Domain and activated the WORM storage lock in Commvault through Command Center. Talking to the Dell specialist, he told us there is a Data Domain setting that can affect Commvault: the "automatic-lock-delay" value. That is the time a file remains "open" while being written to the DD by the backup application (in this case, Commvault), before the DD confirms the file closure and locks it with the configured retention. As we don't know how much time Commvault needs, we have set it to 120 min on the DD. Does any of you have experience with WORM on Data Domain with Commvault? Do you know how long Commvault keeps files open on the DD before they are closed?
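The time a chunk file stays open depends mostly on chunk size and per-stream write throughput, so a conservative automatic-lock-delay can be estimated rather than guessed. A back-of-envelope sketch; both input numbers are assumptions to replace with values observed in your environment:

```python
# Estimate how long a chunk file stays open on the Data Domain during a write.
chunk_size_gb = 8            # assumption: typical chunk size for your library
stream_mb_per_sec = 50       # assumption: observed per-stream write throughput

open_minutes = chunk_size_gb * 1024 / stream_mb_per_sec / 60
safety_margin = 2            # slow streams, contention, retries

print(f"Estimated open time per chunk: {open_minutes:.1f} min")
print(f"Suggested automatic-lock-delay: >= {open_minutes * safety_margin:.0f} min")
```

By that estimate, 120 min is very conservative; the risk of too short a delay is a file being locked while Commvault is still writing it.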
Storage policy tiering: moving backups older than 6 months to different storage
Hello community, we have a storage policy that keeps all backups for 30 days and monthly backups for 18 months, all on the same storage. To free up some space, we could create another copy pointing to the cloud to keep our monthly backups for 18 months; then we would have 30 days on-prem and all monthly backups in the cloud. Now a new idea has come up: keep 30 days on-prem as well as 6 months of monthly backups, and when the monthly backups are 6 months old, they should be copied to the cloud, kept there until they are 18 months old, and removed on-prem. Is there a way to delay the copy of the monthly backups for 6 months?
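To make the target lifecycle concrete, here is a sketch of where a given monthly full would live over time under the proposed scheme (the 30-day month is a deliberate simplification):

```python
from datetime import date, timedelta

MONTH = timedelta(days=30)  # rough month, for illustration only

def lifecycle(job_date: date) -> dict:
    """Where a monthly full lives over time under the proposed scheme."""
    return {
        "on_prem_copy":        (job_date, job_date + 6 * MONTH),  # 6-month on-prem retention
        "cloud_copy_starts":   job_date + 6 * MONTH,              # delayed copy to cloud
        "cloud_copy_ages_off": job_date + 18 * MONTH,             # 18 months total
    }

print(lifecycle(date(2024, 1, 31)))
```

For the delay itself, it may be worth checking whether your version's auxiliary copy properties offer a deferred-copy setting; treat that as a pointer to verify in the documentation rather than a confirmed feature.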
Detecting data inside a mount path/disk library not associated with active backup data?
Hello there, CV community. We are in a situation where we suspect there may be some data in one of our disk libraries that is not associated with retained backup jobs. The job data stored on this disk library uses deduplication, so the folders primarily contain deduplication chunks. We would like to validate that the content on the disk library is "current", i.e. associated with retained, deduplicated backup data. In this case, the storage is Azure object storage. Based on Damian's post here: Clean Orphan Data, what is it? | Community (commvault.com), our hope was that we could use the DDB space reclamation feature alongside the "Clean orphaned data" option to do this automatically. To test this, I ran the operation in a test environment after storing some arbitrary files alongside legitimate backup data within the storage. This wasn't successful, and I expect I have either misunderstood his description of the functionality or perhaps Commvault is specifically looking fo…
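As a rough independent cross-check, you could inventory the container and flag objects that don't match the usual Commvault chunk layout. The CV_MAGNETIC/V_<volume>/CHUNK_<chunk> pattern below is an assumption to verify against your own library before trusting the output, and legitimate metadata/index objects may be flagged too, so treat the result as hints, not a deletion list:

```python
import re
from azure.storage.blob import ContainerClient

# Placeholders: supply your own connection string and container name.
container = ContainerClient.from_connection_string(
    "<connection-string>", container_name="<library-container>"
)

# Assumption: chunk data lives under CV_MAGNETIC/V_<id>/CHUNK_<id> -- verify!
cv_chunk = re.compile(r"CV_MAGNETIC/V_\d+/CHUNK_\d+")

foreign = [b.name for b in container.list_blobs() if not cv_chunk.search(b.name)]

print(f"{len(foreign)} objects that do not look like Commvault chunk data")
for name in foreign[:20]:
    print(" ", name)
```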
Error when adding Oracle Cloud Infrastructure Object Storage on Command Center
May I know if anyone has encountered this error before when adding cloud storage in Command Center, and what the resolution is? I tried searching various sites but had no luck. Error below:
"Operating System could not find the device file specified. The device may be unreachable from the MediaAgent. Please ensure that the file is present in the given path and is accessible."
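That error usually points at the MediaAgent being unable to reach the endpoint at all, so one way to narrow it down is to test OCI's S3-compatible endpoint directly from the MediaAgent host, outside Commvault. A minimal sketch; every value in angle brackets is a placeholder, and the endpoint format should be checked against Oracle's documentation:

```python
import boto3

# OCI Object Storage exposes an S3-compatible endpoint of roughly this shape
# (verify the exact format for your tenancy and region in Oracle's docs):
endpoint = "https://<namespace>.compat.objectstorage.<region>.oraclecloud.com"

s3 = boto3.client(
    "s3",
    endpoint_url=endpoint,
    aws_access_key_id="<customer-secret-key-id>",
    aws_secret_access_key="<customer-secret-key>",
    region_name="<region>",
)

# If this call fails from the MediaAgent host, the problem is network
# reachability or credentials rather than anything Commvault-specific.
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```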
Benefits of enabling horizontal DDB
Hello, what are the benefits of enabling the horizontal DDB? BOL explains how to enable this feature, but says nothing about the real benefits, except that it splits the DDB into three sections: one for file systems, another for databases, and the last for VMs. Can I expect to see an improvement in backup performance, or an increase in deduplication efficiency that would further reduce on-disk consumption? Thanks,
Storage utilization per storage policy
Hello all, I'm trying to find a report that will output something I think is super basic: list my 5 storage policies and tell me how much space each one is using. I already know about the Client Storage Utilization by Storage Policy Copy report, but it shows the client backup sizes within each policy, and I don't see how to total it easily or customize it to show what I want. If that report can do it, can someone point me in the right direction to tweak it so I can pull the info I'm after? Or if someone knows of an easier way, that would be great. I've checked the built-in reports in the console, Command Center, and the Store. Thanks!
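Until someone points at a built-in report, one workaround is to export the Client Storage Utilization by Storage Policy Copy report to CSV and total it yourself. A sketch with pandas; the file and column names are illustrative and need to be matched to the actual export:

```python
import pandas as pd

# Assumption: CSV export of "Client Storage Utilization by Storage Policy Copy";
# adjust the column names below to whatever the export actually uses.
df = pd.read_csv("client_storage_utilization.csv")

totals = (
    df.groupby("Storage Policy")["Size on Media (GB)"]
      .sum()
      .sort_values(ascending=False)
)
print(totals)
```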
How to properly delete/decommission mount paths associated with old storage: DDBs still appear associated with the mount paths
We have added new storage to Commvault and set the old mount paths to "Disabled for Write" via the mount path "Allocation Policy" → "Disable mount path for new data", plus "Prevent data block references for new backups". All mount paths that are disabled for write show no data via the mount path → "View Contents" option. We have waited several months for all the data to age off. BUT… I see information in the forums/docs that data may still be on the storage, and there are references to "baseline data" in use by our DDBs. When I go to the mount path properties → Deduplication DBs tab, I see that all our disabled-for-write mount paths have DDBs listed. So it appears Commvault is still using the storage in some way. I saw a post that indicated: "The LOGICAL jobs no longer exist on that mount path, but the original blocks are still there, and are being referenced by newer jobs. For that reason, jobs listed in other mount paths are referencing blocks in this mount pat…
Space reclamation / orphan data cleaning against cloud storage library
Hi all, wondering if anyone has experience running DDB space reclamation with orphan data cleanup against a cloud library. We have a cloud library in the Azure cool tier which we suspect may contain data that was not pruned successfully, inflating our storage consumption in Azure. The deduplication database for this data lives on local storage. We'd love to run a space reclamation with orphan data cleanup against this cloud library, but we're concerned about the possible cost of storage transactions against the Azure cool tier. Has anyone performed this operation before and observed the related cloud storage costs? For reference, we have just under 100 million blobs and about 400 TB of storage in Azure. Many thanks for any input folks may have!
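A back-of-envelope transaction-cost estimate may help frame the decision before running anything. The rates below are placeholders, not current Azure prices; pull real per-10k operation rates for the cool tier in your region from the Azure pricing page:

```python
# Rough cost bound for enumerating ~100M blobs in an Azure cool-tier account.
blobs = 100_000_000
blobs_per_list_page = 5_000     # maximum page size of a List Blobs call
list_rate_per_10k = 0.065       # $ per 10k list operations (placeholder rate)
read_rate_per_10k = 0.013       # $ per 10k read operations (placeholder rate)

list_cost = (blobs / blobs_per_list_page) / 10_000 * list_rate_per_10k

# Worst case assumes one read per blob; if reclamation works mostly from the
# local DDB and only deletes orphans, actual reads should be far fewer.
read_cost_worst = blobs / 10_000 * read_rate_per_10k

print(f"List cost: ${list_cost:,.2f}")
print(f"Read cost (worst case): ${read_cost_worst:,.2f}")
```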
LTO-9 media calibration / characterization
Hi, and happy new year to all of you! I would like to know if some of you have already implemented LTO-9 drives / tape libraries, and I would love to get your feedback on using them with Commvault. My experience with LTO-9 media, using dual-drive tape libraries, is quite bad. The media calibration / optimization / characterization phase that every new LTO-9 tape has to go through is a pain on my side. On the first mount of a tape (let me reword it in my "old guy" words), it somehow has to be formatted before your favourite backup software can use it. Here is a link to Quantum's FAQ about this: https://www.quantum.com/globalassets/products/tape-storage-new/lto-9/lto-9-quantum-faq-092021.pdf Short calculation: 50 brand-new LTO-9 tapes may each require up to 2 hours of "calibration" before they can be used, so that equals 100 hours of "calibration" before you could use the full 50-tape pool. 😱 My first issue was that I had to adjust all the mount timeouts in that LT…
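For what it's worth, the 100-hour figure assumes tapes calibrate one at a time; assuming the library lets both drives calibrate concurrently, the wall-clock time halves. A quick sketch using the numbers from the post:

```python
# First-mount calibration time for a new LTO-9 pool (numbers from the post).
tapes = 50
hours_per_tape = 2.0   # Quantum's FAQ quotes "up to" roughly this figure
drives = 2             # dual-drive library

serial_hours = tapes * hours_per_tape
parallel_hours = serial_hours / drives  # if both drives calibrate concurrently

print(f"One drive: {serial_hours:.0f} h; {drives} drives: {parallel_hours:.0f} h")
```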
NetApp SM-BC support with IntelliSnap
Hi all, we are implementing a new NetApp infrastructure. It will be composed of two clusters with SM-BC (SnapMirror Business Continuity). We will use it to present LUNs to VMware. I can't find any information about compatibility with IntelliSnap. Any ideas? Thanks a lot
The media side mounted is write-protected
Hi, I'm not an expert user. After the automatic upgrade job to version 11.28.8, the backup jobs fail with the following error:
Error Code: [62:308] Description: The media side mounted is write-protected. Source: backupserver, Process: cvd
Status: Mount Error
Library Name: AutoDiskLib
Drive pool: DrivePool(backupserver)1
Drive: Folder_03.28.2019_11.26
Media label: CV_MAGNETIC
Failure Reason: The media side mounted is write-protected.
Under the library, all mount paths have Read/Write access, and the mount path properties under Allocation Policies are set to Maximum Allowed Writers. I tried manually upgrading to version 11.28.10, but nothing changed. This is a disk library, not tape. The library is about 89% full but still has about 7.5 TB free on the given mount path. How do I enable read/write?
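Since the Commvault-side settings already allow writes, it may be worth confirming that the volume itself accepts writes at the OS level on the MediaAgent. A small check; the mount path below is a placeholder:

```python
import shutil
import tempfile

mount_path = r"E:\DiskLib\Folder_03.28.2019_11.26"  # placeholder: your mount path

total, used, free = shutil.disk_usage(mount_path)
print(f"Free space: {free / 2**40:.2f} TiB")

# If creating a file raises PermissionError or a read-only-filesystem error,
# the block is at the OS/storage layer, not in Commvault's mount path settings.
try:
    with tempfile.NamedTemporaryFile(dir=mount_path) as f:
        f.write(b"write test")
    print("Filesystem accepts writes.")
except OSError as exc:
    print(f"Write failed: {exc}")
```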
New E2812/Media Agents recommendation
I'm slogging through a CV hardware upgrade. My company is "trying" to find money for training... but so far no joy, so I'm winging it. I have a new NetApp E2812, two new media agents, and a shiny Brocade FC switch to bind them all together. My immediate question: do I create two volumes on the E2812 and mount one on each MA? Or can I create just one volume and let both MAs read/write to it simultaneously? Typically, sharing a volume like that across multiple Windows servers leads to trouble, but can the CV media agents handle that sharing? I like the idea of a single volume letting both MAs access all of the available disk space; that alleviates the need to balance my backups across different volumes. Does anyone have any helpful pointers for me? Much obliged.
Extracting library occupancy data via CLI
Good afternoon, I am trying to create a report on the occupancy of our libraries. However, the report coming from Commvault contains a lot of information. Would it be possible to use the CLI to extract only the information I want? My idea is to create a script and run it every month without having to organize the data manually.
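One low-risk monthly pattern: schedule the existing report to export as CSV, then strip it down to the columns you care about with a small script. A sketch; the file and column names are illustrative and need to match your export:

```python
import csv

# Assumption: the scheduled library report lands as CSV once a month;
# adjust the column names to whatever your export actually contains.
wanted = ["Library", "Capacity (TB)", "Used (TB)", "Free (TB)"]

with open("library_report.csv", newline="") as src, \
     open("library_occupancy.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=wanted)
    writer.writeheader()
    for row in reader:
        writer.writerow({col: row[col] for col in wanted})
```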
Clarification on Deleting Jobs on Tape
Hello, I am updating some of our documentation on best practices for securely deleting files from file server backups. The file server backups in question are stored only on a primary copy residing on tape, encrypted via AES-256 per the storage policy, using the built-in key management server. Normally, if we need to delete a file, we follow the documentation and use the "delete data by browsing" option. For clarification: if I use "delete data by browsing" and delete a file that resides offsite on tape, there is no way to recover that file, correct? There is no "un-age" or catalog operation I could perform on the tape if I were to insert it back into my tape library? I assume the CommCell destroys the indexed data/encryption keys associated with that file and cannot read that block of data on the tape? Recently I noticed an option in the CommCell Browser where I can delete the contents of an entire tape: Storage Resources > Libraries > Tape Library > Medi…