Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 675 Topics
- 3,383 Replies
Chunk commit process during backup job using DDB
Hi all, could someone explain the process of committing chunks to the library and signatures to the DDB? I read somewhere that the chunk keeps recording all data blocks even when they are repeated. I'm confused by that statement; my understanding may be wrong, but I need help understanding exactly when DDB signatures are committed to the DDB and when chunks get committed.
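As general background, signature-based deduplication usually works like this: hash each data block, look the hash up in the signature store, and only write the block when the signature is new. The sketch below is a generic illustration, not Commvault's actual DDB commit implementation:

```python
import hashlib

# Generic sketch of signature-based deduplication (illustrative only;
# Commvault's actual chunk/DDB commit logic is internal and differs).
signature_store = {}   # signature -> reference count (stand-in for a DDB)
chunk_store = {}       # signature -> stored block   (stand-in for the library)

def write_block(block: bytes) -> bool:
    """Return True if the block was physically written, False if deduplicated."""
    sig = hashlib.sha256(block).hexdigest()
    if sig in signature_store:
        signature_store[sig] += 1   # repeated block: only a reference is recorded
        return False
    signature_store[sig] = 1        # new signature committed to the store
    chunk_store[sig] = block        # new block committed to the chunk store
    return True

written = [write_block(b) for b in [b"alpha", b"beta", b"alpha"]]
print(written)  # [True, True, False]: the repeated block is not rewritten
```

In this model, a repeated block is always noticed (its signature is looked up every time), but only a reference is recorded; the block itself is stored once.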
Sudden low throughput & high DDB Lookup (~99%) for all backup jobs
We suddenly encountered low throughput and high DDB lookup time (~99%) for all backup jobs. We removed an obsolete MediaAgent this week. We also deleted some storage policies and aux copies that had no subclients associated with them. Has anyone encountered a similar situation? Is our deduplication database corrupted? Please help. Many thanks.
Deduplication requirement for Long term retention copy in Cloud
Hi all, I came across Commvault documentation mentioning that deduplication won't have much impact when I keep my long-term retention copy in the cloud as a tape replacement. Can anyone share your own experience, or Commvault documentation, regarding the pros and cons of keeping the long-term copy in the cloud with or without dedup? Thanks, Mani
Deduplication transaction log backup
Hi, we are having a little discussion about whether transaction log backups are non-deduplicated by default even though the storage policy copy has deduplication enabled, or whether we need to configure a non-dedup storage policy for this. In my view, a log storage policy exists for different retention times and copies, and the backup type decides whether the data must be deduplicated. Can someone clarify this for me? Kind regards, Danny
DDB storage for DR restore only
Doing some preparation of our DR environment and wondering: if we intend to only do restores in the DR environment, is SSD still a requirement for the DDB? My thought was that the performance is required for signature lookups (on writes), but it would not be required for restores. Thanks!
Restore from tape of a closed site
Hi, one of my sites has been closed. I took the last backup tapes from this site and put them in a tape library at the current site. How can I restore (a VM, in this instance) from the tapes of the old site while they are in the library of the current site? Currently, the tapes from the old site are marked with a no-entry sign.
AWS Glacier data removal
We copy data to AWS S3 for long-term retention and let it age to Glacier after 30 days. For this data to be restorable, I first have to rehydrate it with a workflow, as Commvault only expects to work with the S3 storage tier. This works fine for occasional restores. We are now looking into a larger data removal as part of a cleanup project to reduce AWS Glacier storage costs. My questions: if I delete jobs from the CommCell that have aged to Glacier, does Commvault have to rehydrate that data back to the S3 tier for it to be purged from AWS? If so, is this done with the same Cloud Recall workflow used for restores?
Issue with detecting SAS attached Tape Library
Hi, we have a SAS-attached TS4300 tape library dedicatedly assigned to an AIX 7.2 LPAR hosted on an IBM Power9 host, with the Atape driver installed on AIX. We can take a backup and restore it at the OS level, but Commvault tools like testinq and ScanScsiTool are not able to detect the device.

root> lsdev -Cc tape
rmt0 Available 00-00-00 IBM 3580 Ultrium Tape Drive (SAS)
smc0 Available 00-00-00 IBM 3573 Library Medium Changer (SAS)
root> lsdev -Cc adapter | grep sas
sissas0 Available 00-00 PCIe3 RAID SAS Adapter Quad-port 6Gb x8

testinq gives the following error:

root> /opt/commvault/Base> ./detectdevices -add tape
sas0 rmt0.1 5764854988047962203 0 pthru_atape tape
sas0 smc0 5764854988047962206 1 pthru_atape tape
root> /opt/commvault/MediaAgent64> ./testinq /dev/sas0 5764854988047962203 0
devsubtype = '7', 0x37
Error: ioctl SCIOLINQU failed with error 19 (No such device)
Info: version = 2, status_validity = 0x02, scsi_status = 0x00, adapter_status = 0x04, adap_set_flags = 0x00

Is there any specific driver on OS lev
Import media from Catalogic app
Hi, do I have an option to import media from a Catalogic app? I have a customer that migrated from Catalogic to Commvault and wants to know if he can import the Catalogic tapes into a Commvault library. He has tapes holding the last backups from Catalogic. I think he will have to maintain his old backup system, but I'm not 100% sure.
No Index Backup in last 30 days
The ticketing service is being kind of slow with this process, and I want to know if I can do this on my own. We have an alert for "No Index Backup in the last 30 days". I don't know why these clients are not storing the index information on the MediaAgents but are instead storing the index data to tape. How do I get a client that is on this list to back up its index to the MediaAgent, so I can clear the list?
Media stuck in drive
Hi, after a power failure the tape library was showing offline with the error "initializing device failed". I restarted the Commvault server and the error went away; however, the drive now shows a loaded media when it is actually empty. I can't mount a tape, since Commvault thinks there's still a tape inside the drive. How can I resolve this issue?
Aux copy job shows running after operational window
An aux copy job shows as running after the operational window. The aux copy job stops running at 7am due to a blackout window, then automatically resumes, and the job is killed by the system (reason: the job has exceeded its total running time). What I want to know is: does it pass traffic after the blackout window is in place?
Cloud storage https connection
Hi there, I have successfully added cloud storage (S3 compatible); however, for the time being I am only able to set up the connection over the HTTP protocol. When I try to add a new cloud storage library using HTTPS, I get the error message "failed to do verification". To move forward, I would like to use the HTTPS protocol. I have a self-signed certificate from my NetApp S3-compatible cloud storage. Is it possible to allow using it, given that I don't have a CA-issued certificate? Can Commvault be forced to accept a self-signed certificate? What I tried was the additional setting described as "set its value to 0 to skip the checking of the server's certificate claimed identity for the cloud libraries", but it didn't help. Is it possible to verify whether this setting is being applied? Do you have any suggestions for this situation? Thanks for your ideas.
DDB database reconstruction and deletion
Hi there, I want to share my experience with DDB reconstruction. My colleagues started a DDB reconstruction because the DDB was not in good condition. There were no current backups of the DDB (the last backup was 14 days old), and some other error messages may have been active. They decided to start a full reconstruction; perhaps there was no option to make a manual backup of the DDB, though the database had not been updated with new records anyway. The thing is that the reconstruction is very time-consuming, and moreover, while the DDB is down, backup jobs are not possible. The workaround we used was to temporarily disable deduplication. Another caveat is that we are running out of disk space. And that is what a disaster looks like. To finish, my hypothetical questions: is the DDB needed for restoring data? I would say not. And what would happen if the broken DDB was deleted and a completely new one was built?
Using immutable storage: data volume increase
Hello, I'm trying to do some math on the effects of changing our 2nd copy to be stored on immutable volumes in Azure. I have a retention of 365 days and currently no sealing of the DDB, but the recommendation seems to be a 180-day seal interval. According to the formula in another post, that would give me 545 days as the immutability period on the storage volume. Our baseline is 18 TB. Since the baseline is 18 TB and there will be 3 seals of the DDB, does that mean I will add 3 times 18 TB to the volume, given that no data pruning will happen for 545 days? //Henke
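The arithmetic in the question can be sketched as follows. Note the assumptions: the formula (immutability period = retention + DDB seal interval) is the one the poster cites from another post, and the worst case of one full 18 TB baseline per seal is the poster's own premise, not an official sizing rule.

```python
# Worst-case sizing sketch for an immutable cloud copy, using the
# numbers from the post above. Assumptions: immutability period =
# retention + seal interval (formula cited from another post), and
# each DDB seal adds one full baseline that cannot be pruned while
# the immutability lock is active.

retention_days = 365      # copy retention
seal_interval_days = 180  # recommended DDB seal frequency
baseline_tb = 18          # size of one full (sealed) baseline

# Data written under an immutable lock cannot be pruned until the
# lock expires, so the lock must cover retention plus one seal cycle.
immutable_days = retention_days + seal_interval_days
print(immutable_days)  # 545

# Number of DDB seals that occur within the immutability window:
seals_in_window = immutable_days // seal_interval_days
print(seals_in_window)  # 3

# Worst case, those baselines coexist on the volume before pruning:
extra_capacity_tb = seals_in_window * baseline_tb
print(extra_capacity_tb)  # 54
```

Under these assumptions, the answer to the question would be yes: roughly 3 × 18 TB of additional baseline data could coexist on the volume until pruning resumes.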
Disk Volume Size Watermark
Came across this setting and was wondering when it should be used. What is a possible use case for tweaking it? From the documentation:

Configure Disk Volume Size
Disk volumes are created based on the volume size. When the size of a volume reaches the maximum size, a new volume is created. The maximum size of a disk volume is set to 25 GB by default, and this value can be modified:
1. On the ribbon in the CommCell Console, click the Storage tab, and then click Media Management.
2. Click the Resource Manager Configuration tab.
3. In the Disk volume physical size high watermark in GB box, enter the disk volume size.
4. Click OK.
https://documentation.commvault.com/11.24/expert/9319_disk_libraries_advanced.html#b9365_use_unbuffered_io