Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 620 Topics
- 3,252 Replies
We suddenly encountered low throughput and high DDB Lookup (~99%) for all backup jobs. We removed an obsolete Media Server this week, and also deleted some Storage Policies and Aux copies that had no subclients associated with them. Has anyone encountered a similar situation? Is our deduplication database corrupted? Please help. Many thanks
Hi, we have a little discussion going on about whether transaction log backups are non-deduplicated by default even though the storage policy copy has deduplication enabled, or whether we need to configure a non-deduplicated storage policy for this. In my view, a log storage policy exists to allow different retention times and copies, and the backup type decides whether the data must be deduplicated or not. Can someone clarify this for me? Kind regards, Danny
Doing some preparation of our DR environment and wondering: if we intend to only do restores in the DR environment, is SSD still a requirement? My thought was that the performance is required for signature lookups (i.e. for writes), but for restores this would not be needed. Thanks!
Hi, one of my sites has been closed. I took the last backup tapes from that site and put them in a tape library at my current site. How can I restore (a VM, in this instance) from the old site's tapes while they are in the current site's library? Right now the tapes from the old site are marked with a no-entry sign.
Hello Community! I am trying to add an HP MSL G3 Series tape library to Commvault using the Expert Storage Configuration. I have selected the two MediaAgents (they are already zoned with the tape libraries) and followed the procedure. Now it asks whether the library has a barcode reader, and I don't know :) Can you help me please? Thanks!
We copy data to AWS S3 for long-term retention and let it age to Glacier after 30 days. For this data to be restorable, I first have to rehydrate it with a workflow, as Commvault only expects to work with the S3 storage tier. This works fine for occasional restores. We are now looking into a larger data removal as part of a cleanup project to reduce AWS Glacier storage costs. My questions: if I delete jobs from the CommCell that have aged to Glacier, does Commvault have to rehydrate this data back to the S3 tier before it can be purged from AWS? If so, is this done with the same Cloud Recall workflow used for restores?
Hi, we have a SAS-attached TS4300 tape library dedicated to an AIX 7.2 LPAR hosted on an IBM Power9 host, with the Atape driver installed on AIX. We can take a backup and restore it at the OS level, but Commvault tools like testinq and ScanScsiTool are not able to detect the device.
root> lsdev -Cc tape
rmt0 Available 00-00-00 IBM 3580 Ultrium Tape Drive (SAS)
smc0 Available 00-00-00 IBM 3573 Library Medium Changer (SAS)
root> lsdev -Cc adapter | grep sas
sissas0 Available 00-00 PCIe3 RAID SAS Adapter Quad-port 6Gb x8
testinq gives the following error:
root> /opt/commvault/Base> ./detectdevices -add tape
sas0 rmt0.1 5764854988047962203 0 pthru_atape tape
sas0 smc0 5764854988047962206 1 pthru_atape tape
root> /opt/commvault/MediaAgent64> ./testinq /dev/sas0 5764854988047962203 0
devsubtype = '7', 0x37
Error: ioctl SCIOLINQU failed with error 19 (No such device)
Info: version = 2, status_validity = 0x02, scsi_status = 0x00, adapter_status = 0x04, adap_set_flags = 0x00
Is there any specific driver needed at the OS level?
Hello, I have a problem with an auxiliary copy between two MediaAgents with HP StoreOnce. I have a 2*100MB WAN link between the two MAs. I am getting 13:138 errors, but the job is still running. See attachments for details. SP11.22, no firewalling; the firewall is off on both MAs. If someone could help me. Regards
Hi, do I have an option to import media from a Catalogic app? I have a customer that migrated to Commvault from Catalogic, and he wants to know if he can import the Catalogic tapes into a Commvault library. He has tapes with the last backups from Catalogic. I think he will have to maintain his old backup system, but I'm not 100% sure.
So the ticketing service is being kind of slow with this process, and I want to know if I can do this on my own. We have an alert for "No Index Backup in the last 30 days". I don't know why these clients are not storing the index information on the MediaAgents but instead backing the index data up to tape. How do I get a client that is on this list to back its index up to the MediaAgent, so I can clear it from the list?
Hello All, I am from a TSM background. When VTLs were introduced in TSM, scratch tapes were not automatically deleted in the VTL. Later, the RELABELSCRATCH parameter was introduced, which automatically relabels volumes when they are returned to scratch. I remember similar settings in Backup Exec and HP Data Protector as well. → I want to know whether any similar setting exists in Commvault. More details from the TSM perspective → Virtual Tape Libraries (VTLs) maintain volume space allocation after Tivoli Storage Manager has deleted a volume and returned it to a scratch state. The VTL has no knowledge that the volume was deleted, and it keeps the full size of the volume allocated. This can be extremely large depending on the devices being emulated. As multiple volumes return to scratch, the VTL can maintain their allocation size and run out of storage space. Relabel processing on the Tivoli Storage Manager server is started for libraries (V
Hi, after a power failure the tape library showed offline with the error "initializing device failed". I restarted the Commvault server and the error went away; however, the drive now shows a medium loaded when it is actually empty. I can't mount a tape, since Commvault thinks there is still a tape inside the drive. How do I resolve this issue?
An aux copy job shows as running after its operational window. The aux copy job stops running at 7am due to the blackout window, then automatically resumes, and the job is killed by the system (reason: the job has exceeded its total running time). The answer I am looking for: does it still pass traffic after the blackout window is in place?
If we have an existing DDB on a drive for a MediaAgent and that drive gets encrypted with BitLocker, does that cause a problem? My thought is that it shouldn't, since all reads/writes happen inside the server, though there might be a performance penalty. Or am I totally wrong? //Henke
Hi there, I want to share my experience with DDB reconstruction. My colleagues started a DDB reconstruction because the DDB was not in good condition. There were no current backups of the DDB (the last backup was 14 days old), and maybe some other error messages were active. They decided to start a full reconstruction; maybe there was no option to make a manual backup of the DDB, though the database had not been updated with new records anyway. The thing is that the reconstruction is very time-consuming, and moreover, while the DDB is down, backup jobs are not possible. The workaround we used was to temporarily disable deduplication. Another caveat is that we are running out of disk space. And that is what a disaster looks like. To finish, I will put my hypothetical questions: Is the DDB needed for restoring data? I wouldn't say so. And what would happen if the broken DDB was deleted and a completely new one was built?
Hello, I'm trying to do some math on the effects of changing our second copy to be stored on immutable volumes in Azure. I have a retention of 365 days and currently no sealing of the DDB, but the recommendation seems to be sealing every 180 days. According to the formula in another post, that would mean I should set 545 days as the immutable period on the storage volume. Our baseline is 18 TB. Since the baseline is 18 TB and there will be 3 seals of the DDB, does that mean I will add 3 times 18 TB to the volume, since no data pruning will happen for 545 days? //Henke
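A back-of-the-envelope sketch of the arithmetic in the post above, using its numbers. The "retention + seal period" formula and the assumption that each sealed DDB store holds roughly one full baseline that cannot prune until its lock expires are taken from the post, not from official sizing guidance:

```python
# Rough estimate of immutable-storage overhead when periodically sealing a DDB.
# Numbers and formula come from the forum post; treat this as an illustration.

retention_days = 365        # copy retention
seal_frequency_days = 180   # recommended DDB seal interval (per the post)
baseline_tb = 18.0          # current baseline size

# Immutable lock on the volume must cover retention plus one seal interval.
immutable_days = retention_days + seal_frequency_days
print(f"Immutable lock period: {immutable_days} days")   # 545 days

# How many times the DDB is sealed while older data is still locked,
# i.e. how many extra baselines could coexist on the volume.
seals_during_lock = immutable_days // seal_frequency_days
print(f"Seals during lock period: {seals_during_lock}")  # 3

# Worst case: the live baseline plus one re-baselined copy per seal,
# none of which can be pruned until the lock expires.
worst_case_tb = baseline_tb * (1 + seals_during_lock)
print(f"Worst-case capacity on the volume: {worst_case_tb} TB")  # 72.0 TB
```

So under these assumptions the answer to the post's question would be yes: roughly 3 x 18 TB of additional unprunable data on top of the 18 TB baseline, for about 72 TB worst case. Actual growth depends on change rate and how much of each sealed store is truly a fresh baseline.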
Came across this setting and was wondering when it should be used. What is a possible use case for tweaking it?
Configure Disk Volume Size: disk volumes are created based on the volume size; when a volume reaches the maximum size, a new volume is created. The maximum size of a disk volume is set to 25 GB by default, and this value can be modified:
- On the ribbon in the CommCell Console, click the Storage tab, and then click Media Management.
- Click the Resource Manager Configuration tab.
- In the Disk volume physical size high watermark in GB box, enter the disk volume size.
- Click OK.
https://documentation.commvault.com/11.24/expert/9319_disk_libraries_advanced.html#b9365_use_unbuffered_io
Hi, we are in a CV deployment. Initially we built a single MA with a 4-partition DDB in the Azure cloud. As data grew, we moved two partitions to a new MediaAgent, so it now runs on two MAs with two DDB disks each. Now both MAs have reached their bottleneck and we are planning to scale up further, but management only allowed me to add one more MA. So one possibility is running the backups with the four-partition DDB split across three MAs as shown below:
- MA1 - one DDB disk
- MA2 - one DDB disk
- MA3 - two DDB disks
I am a bit worried about doing that, as I think it may cause some instability between the MAs, but I couldn't find relevant CV documentation. Can you suggest whether the above design makes sense, and will it cause any issues in the future? Thanks, Mani
Hi all, some days ago we temporarily disabled deduplication only on a storage policy copy (Storage Policy tab). Now we want to enable deduplication again; however, when we uncheck the option (temporarily disable deduplication), we see the message "Deduplication cannot be enabled on dependent copy when disabled on Storage Pool". This is very strange: no changes have been made on the Storage Pool, and moreover, there is no option there to uncheck "temporarily disable deduplication" (Storage Pool tab). Maybe only one thing is in play: the DDB is in a Maintenance state because a verification is in progress. In the Storage Pool window there is no option to re-enable the disabled deduplication.