Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
Moving the DDB for a cloud storage library to a different on-premises MA
I have several cloud libraries where the storage and DDB are controlled by an on-premises MA. I would like to switch several of them to a different on-premises MA, but I cannot seem to find anything in the docs on how to switch MAs for an existing cloud library.
Usage of mount points in a disk library
We have a partitioned DDB that uses a disk library with 12 mount points, with spill and fill configured. An Oracle DB is backed up with 4 streams/channels. The backup allocates 4 streams, but all of them go to one mount point via one MA. How can the streams be spread across multiple mount points so that 2 go via MA1 and 2 via MA2? A second Oracle DB backup takes the same partition as the job above and uses another mount point, again with all 4 streams going to the same mount point. Any ideas how to make Commvault distribute the streams evenly?
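The even spread the poster is after can be pictured as round-robin allocation of streams across MA/mount-path pairs. A toy sketch of that behaviour, not Commvault's actual allocator; the MA and mount-path names are made up:

```python
# Toy round-robin allocator illustrating the desired behaviour:
# spread N backup streams evenly across MA/mount-path pairs
# instead of stacking them all on one mount path.
from itertools import cycle

def assign_streams(num_streams, writers):
    """Map stream numbers 1..num_streams onto writers round-robin."""
    rr = cycle(writers)
    return {stream: next(rr) for stream in range(1, num_streams + 1)}

# Hypothetical writers: two mount paths on each of two media agents.
writers = [("MA1", "MountPath1"), ("MA2", "MountPath2"),
           ("MA1", "MountPath3"), ("MA2", "MountPath4")]

assignment = assign_streams(4, writers)
# With 4 streams, 2 land on MA1 and 2 on MA2.
```

In Commvault terms this is what spill-and-fill plus data-path round-robin is meant to achieve; the sketch only shows the distribution being asked for.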
Cloud library type for Scality RING
Hi, I have a question regarding the implementation of a cloud library with Scality RING. We can create two types of mount path: S3 Compatible Storage or Scality Ring. Which is required? (I have some cloud libraries already created as S3 Compatible Storage instead of the Scality Ring type.) Is there a difference between them? Kind regards, Christophe
Audit Reporting: Confirm data exists in all storage locations
Hey everyone, I’ve got a bit of a puzzler. We have several years of data on prem and in a 3rd party S3 bucket. We’re looking to reduce the footprint of the on-prem and 3rd party S3 storage somewhat, and are moving the data to AWS and Azure combined storage tier libraries. It’s long-term data that we need to keep per SLA, but we do not expect to recover from it unless a project is resurrected or a legal search request comes in, so we can lower some costs by storing it on the lower-cost AWS and Azure offerings. The test aux copies worked quite well - I can see that both my AWS and Azure libraries have the same number of jobs and the same total data. But if an auditor asks me to show that during this work, for client X, the data was on prem, at the 3rd party S3 site, and at AWS and Azure before I cleared it from on prem and the 3rd party S3, I have no idea how to get a report showing that there are 4 copies of the data. Alternately, an auditor could say show me for job XXX
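One low-tech way to back up such an audit claim, assuming you can export the job list of each storage policy copy (for example to CSV from the jobs-in-copy view), is to diff the job ID sets per copy. A sketch in plain Python; the copy names and job IDs are illustrative, not from any real CommCell:

```python
# Verify that every job ID appears in all storage locations.
# Input: a dict mapping copy name -> set of job IDs exported from that copy.

def missing_jobs(jobs_by_copy):
    """Return {copy_name: sorted job IDs absent from that copy}."""
    all_jobs = set().union(*jobs_by_copy.values())
    return {copy: sorted(all_jobs - jobs)
            for copy, jobs in jobs_by_copy.items()
            if all_jobs - jobs}

# Illustrative job lists for the four copies.
copies = {
    "on_prem":       {101, 102, 103},
    "thirdparty_s3": {101, 102, 103},
    "aws":           {101, 102, 103},
    "azure":         {101, 102},       # job 103 not yet aux-copied
}

gaps = missing_jobs(copies)
# -> {"azure": [103]}: safe to age the source copies only when this is empty.
```

An empty result for a given client's jobs is exactly the "4 copies exist" evidence an auditor would want, and the per-copy exports themselves serve as the supporting records.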
Temporarily disabled deduplication - how to re-enable it
Hi all, some days ago we temporarily disabled deduplication only on a storage policy copy (Storage Policy tab). Now we want to enable deduplication again, but when we uncheck the option (temporarily disable deduplication) we see the message: Deduplication cannot be enabled on dependent copy when disabled on Storage Pool. This is very strange. No changes have been made under the Storage Pool; moreover, there is no option there to uncheck temporarily disable deduplication (Storage Pool tab). Only one thing may be in play: the DDB is in Maintenance state because a verification is in progress. Storage Pool window - no option to re-enable the disabled deduplication.
Error Code 13:187 Some jobs skipped during an Aux Copy
Scenario: we’re creating an aux copy for all existing backup jobs from a particular storage policy. During the process we got the following error message: Error Code: 13:187. Description: Some backup jobs are skipped because they are aged on destination copy or marked do not copy. Source: commserve, Process JobManager. The aux copy was configured to take all existing jobs from the primary, but it only ran and moved 10% of the total data on the primary copy and skipped the rest. Any idea how to move the skipped jobs? Regards, Ramon.
Hi, we are in a CV deployment. Initially we built a single MA with a 4-partition DDB in the Azure cloud. As the data grew, we moved two partitions to a new media agent, so it now runs on two MAs with two DDB disks each. Now both MAs have reached their bottleneck and we are planning to scale up further, but management only allowed me to add one more MA. So one possibility is running the backups with the four-partition DDB split across three MAs as shown below: MA1 - one DDB disk, MA2 - one DDB disk, MA3 - two DDB disks. I am a bit worried about doing that, as I think it may cause some instability between the MAs, but I couldn’t find any relevant CV documents. Can you suggest whether the above design makes sense, and will it cause any issues in the future? Thanks, Mani
Shared Library: Exclude/Include Barcodes tapes from old library from new library
I have a shared tape library, used by the “old” and the “new” Commvault environment. The “new” one is HyperScale. I want the “new” environment to use only tapes with barcodes prefixed with ‘NP’, and the “old” one to use everything else, but not ‘NP’. Hopefully that makes sense! Question: how is this achievable? Cheers
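The rule being described is just a prefix test on the barcode. As a sketch of the intended split (the "old"/"new" labels are the poster's; in Commvault the actual enforcement would live in the library/scratch-pool media configuration, not in a script):

```python
# Sketch of the desired routing rule: tapes whose barcodes start with
# 'NP' belong to the new (HyperScale) environment, all others to the
# old environment.

def environment_for(barcode):
    """Return which environment a tape barcode should belong to."""
    return "new" if barcode.upper().startswith("NP") else "old"

# Hypothetical barcodes.
tapes = ["NP0001", "NP0002", "AB1234", "ZZ0042"]
routing = {t: environment_for(t) for t in tapes}
# NP0001/NP0002 -> "new"; AB1234/ZZ0042 -> "old"
```

Writing the rule down this explicitly is mainly useful as a spec to hand to whoever configures the two CommCells' media pools.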
Disk Volume Size Watermark
Came across this setting and was wondering when it should be used. What is a possible use case for tweaking it? Configure Disk Volume Size: disk volumes are created based on the volume size. When the size of a volume reaches the maximum, a new volume is created. The maximum size of a disk volume is 25 GB by default, and this value can be modified. On the ribbon in the CommCell Console, click the Storage tab, and then click Media Management. Click the Resource Manager Configuration tab. In the Disk volume physical size high watermark in GB box, enter the desired disk volume size, then click OK. https://documentation.commvault.com/11.24/expert/9319_disk_libraries_advanced.html#b9365_use_unbuffered_io
Storage policy jobs preference
Hi, in our case there are a lot of jobs running across multiple storage policies. Once a job reaches more than 90% completion, its throughput falls and it becomes very slow, while some jobs at less than 50% progress run faster. Is there any option to prioritize jobs that are almost finished (more than 95% progress)? I know it is possible to change the priority of a single job, but from my experience it didn’t increase the speed much. Maybe I should have used the highest priority; I don’t know exactly.
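What the poster wants, finishing nearly-complete jobs first, amounts to ordering the queue by progress. A toy sketch of that ordering rule (this is not Commvault's scheduler; the job names, percentages, and the 95% threshold are illustrative):

```python
# Toy priority ordering: jobs above a progress threshold are served
# first, highest progress first, so nearly-finished jobs complete quickly.

def prioritise(jobs, threshold=95):
    """jobs: list of (name, percent_complete). Near-done jobs jump the queue."""
    return sorted(jobs, key=lambda j: (j[1] < threshold, -j[1]))

queue = [("sp1-full", 40), ("sp2-incr", 97), ("sp3-full", 91), ("sp4-incr", 99)]
ordered = prioritise(queue)
# -> sp4-incr (99), sp2-incr (97), then the rest by progress.
```

In practice the slowdown near 100% is often the last few large or slow streams rather than scheduling, so reordering alone may not help; this only illustrates the policy being asked about.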
How is backup encryption handled ?
Hi guys, I’m struggling with encryption in a mixed environment. On the GlobalDedupePolicyCopy, I did not activate encryption. On the client advanced properties, I enabled encryption. On the subclient properties, I enabled encryption on Network & Media. Executed jobs are listed as encryption enabled. Does this mean that the backups have been encrypted? Are the backups deduplicated against unencrypted backups within the same storage policy (which might result in a mix of encrypted and unencrypted data for the same job)? Since encryption is defined in the GDP and I already have two DDB partitions per MediaAgent, do I have to deploy additional MediaAgents to host the dedicated encrypted backups, in case I want to enable that on the storage policy? Best regards, Klaus
DDB database reconstruction and deletion
Hi there, I want to share my experience with DDB reconstruction. My colleagues started a DDB reconstruction because the DDB was not in good condition. There were no current backups of the DDB (the last backup was 14 days old) and some other error messages may have been active. They decided to start a full reconstruction; perhaps there was no option to make a manual backup of the DDB, though the database had not been updated with new records anyway. The thing is that the reconstruction is very time consuming, and moreover, while the DDB is not running, backup jobs are not possible. The workaround we used was to temporarily disable deduplication. Another caveat is that we are running out of disk space. And that is what a disaster looks like. At the end I will put my hypothetical questions. Is the DDB needed for restoring data? I wouldn’t say so. And what would happen if the broken DDB was deleted and a completely new one was built?
Changing disk library for primary storage policy copy
Hi there, most likely we will need to temporarily change the disk library for a couple of storage policies because of a slow DDB reconstruction. My question is: how do I change the disk library for a primary storage policy copy? There is no drop-down menu to change the disk library in the Default Destination field. Does that mean I need to create a new secondary copy and then promote this newly created secondary copy to be the primary one? Is there any potential data loss? Do you have any hints or caveats for this task?
Auxiliary copy is stuck at 30%
Our weekly secondary aux copy has been stuck at 30% since this weekend (so it is blocking all the primary disk-to-disk incremental copies), with the below 2 error messages. Thinking it might be a port communication issue between the media server (S01190), where the tape library is attached, and the CommCell server (S02116), I did the below port checks between the 2 servers. Telnet from the media server (S01190) to the CommCell server (S02116): Port 8400 OK, Port 8401 OK, Port 8403 OK. Telnet from the CommCell server (S02116) to the media server (S01190): Port 8400 OK, Port 8401 Not OK, Port 8403 Not OK. Now, before I speak to our network/security administrators, who have recently installed SentinelOne AV on both of the above servers, I’m wondering if I’m heading in the right direction, and if I have done all the port checking? Thanks, Kelvin
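The telnet checks above can be scripted so they are easy to re-run after any firewall or AV change. A small sketch using only the Python standard library; the hostnames and the 8400/8401/8403 ports are taken from the post, and an unreachable or unresolvable host simply reports "Not OK":

```python
# Check TCP reachability of the Commvault ports between the two servers.
import socket

def check_port(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refusal, timeout, and DNS failure
        return False

if __name__ == "__main__":
    for host in ("S01190", "S02116"):      # media server / CommCell server
        for port in (8400, 8401, 8403):    # ports from the telnet tests above
            status = "OK" if check_port(host, port) else "Not OK"
            print(f"{host}:{port} {status}")
```

Run from each server in turn to reproduce both directions of the test; a one-sided "Not OK" after the AV rollout points at a host-based firewall rule rather than the network.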
Media stuck in drive
Hi, after a power failure the tape library is showing offline with the error “initializing device failed”. I restarted the Commvault server and the error went away; however, the drive shows a loaded media but in actuality it is empty. I can’t mount a tape since Commvault thinks there is still a tape inside the drive. How do I resolve this issue?
Magnetic Library Defragmentation
I have read a couple of articles on Commvault Online that say defragmentation of the magnetic libraries is a good idea. Diskeeper, now DymaxIO, was listed as a certified product for online volumes. I am wondering if others defrag their libraries for performance purposes, and what products they use. I have read in older articles that the native Windows defragmentation tool can be used; it also states that it should be done outside of backup hours (makes sense). Any feedback or information would be appreciated. Thanks
Remove Sealed DDB partition
Hello, I need to remove a sealed DDB partition in Commvault. This DDB isn’t in use; the last write was in 2015, just for a test. Does someone have a procedure to remove it? I have another partition in the same DDB that is still recording. I need to remove just this one DDB partition because its server will be turned off.
Import media from catalogic app
Hi, do I have an option to import media from a Catalogic app? I have a customer that migrated to Commvault from Catalogic, and he wants to know if he can import the Catalogic tapes into a Commvault library. He has tapes holding the last backups from Catalogic. I think he will have to maintain his old backup system, but I’m not 100% sure.
System Created DDB Verification schedule policy - use of streams
Hi there, is it safe to limit the number of streams used in parallel by the System Created DDB Verification schedule policy? I am asking because in our case this verification policy consumes 30 streams out of 30. However, it is set to use the maximum number of streams, so why is there a limit of 30? Secondly, I would like to discuss the Data Verification options. Which option do you prefer? Would it be enough to use only Verification of Deduplication Database instead of Verification of existing jobs on disk and deduplication database?
DR of scale up Media agent
Hello, I have a customer with a NetApp E-Series as their disk library. When MA1’s writes to a LUN fail, the LUN needs to be mounted on another MA (MA2) to restore the backups. What is the standard procedure in Commvault for opening the same mount path from a different media agent? From the CommCell GUI one can only create a new mount path. The goal is to be able to restore the data protected on this LUN from a different MA.
Reservation Status: No new readers can be allocated
Hello, in the log of an aux copy job (primary copy on disk storage to secondary copy on a tape library), there is this message: "Reservation Status: No new readers can be allocated, check for additional streams after  seconds, pending streams ". Can you explain what this means and possibly how to avoid it? Can this state cause slow backup performance?
Possible to reset media refresh tag on tapes?
Hi, is there a way to “reset” the ‘Picked for Refresh / Prevented’ status of media? Some media was marked as prevented, and the auto-refresh criteria were changed afterwards. How do I see which of these “Prevented” media should be auto-picked now? I can only flip the manually “prevented” media back to manually “picked”. Thanks.