Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 725 Topics
- 3,531 Replies
Good morning family, please, I am kind of new to Commvault. My question is: once an AuxCopy has been copied to tape, will the data on the media server (disk) backup still remain? And if it remains, can I manually delete that data? Most of the mount path LUNs have filled up while waiting for aging to take effect, and the rate at which our data grows is too fast because it is a big environment. I checked and our VM retention policy is set to 30 days, so I am wondering whether it is less risky to manually delete some data off the disk. Note: we have 5 HQ media servers with 80 TB each (10 TB per LUN) and 2 DR media servers as well.
We are using a tape environment for our backups. One of the issues I have with Commvault's storage policy is that it doesn't really distinguish how it handles full backups. Our retention is: daily backups (differentials) at 7 days / 1 cycle, weekly fulls at 28 days / 1 cycle, and monthly fulls at 1095 days+. With how the storage policy is currently set up it runs to the default: all the data is dumped to one tape, which usually ends up holding monthly, weekly, and daily data together. When I pull my monthlies I sometimes see that the weekly fulls and dailies are eating up space that could be used to fit more monthly fulls onto the tape. I was wondering what the best practice would be to separate the monthly fulls from the other backup retentions of the storage policy. Would it be something along the lines of creating an aux copy in the storage policy and just copying the last weekly full to that aux copy, to separate the monthly fulls from the other data? Or is…
Hi all, I have a client still running v10 (yes, I know... their call), and last year new additional storage was configured in their CommCell (2 sites). A new library and mount paths were created at each of the sites, and new Primary and DASH copies were created with data paths to the new storage in the respective sites. I'm finding that the storage for the Primary copy at their main site is filling up and there is nothing obvious in the forecast report. Is there a setting in v10 which corresponds to Enable Physical Pruning in v11? Thanks in advance. JamieK
Hello, I'm trying to do some math on the effects of changing our 2nd copy to be stored on immutable volumes in Azure. I have a retention of 365 days; currently there is no DDB seal interval set, but the recommendation seems to be 180 days. According to the formula in another post, that gives me 545 days as the immutability period on the storage volume. Our baseline is 18 TB. Since the baseline is 18 TB and there will be 3 seals of the DDB, does that mean I will add 3 times 18 TB to the volume, since no data pruning will happen for 545 days? //Henke
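A rough way to sanity-check those numbers (a minimal Python sketch; the 365/180/18 TB figures come from the post, and the assumption that every sealed DDB keeps its full baseline on the immutable volume until the lock expires is an assumption, not documented Commvault behaviour):

    # Hedged sketch: rough sizing for an immutable Azure copy with periodic DDB seals.
    # The figures and the "retention + seal interval" formula are taken from the post;
    # holding each sealed baseline until its lock expires is an assumption.
    copy_retention_days = 365    # storage policy copy retention
    seal_interval_days = 180     # recommended "create new DDB every N days"
    baseline_tb = 18             # one full deduplicated baseline

    # Immutability (lock) period from the formula in the other post:
    immutable_days = copy_retention_days + seal_interval_days      # 545 days

    # Roughly how many sealed baselines can be locked at once, plus the active DDB:
    sealed_baselines = immutable_days / seal_interval_days          # ~3.0
    worst_case_tb = (sealed_baselines + 1) * baseline_tb            # ~72.5 TB

    print(f"Immutability period : {immutable_days} days")
    print(f"Sealed baselines    : ~{sealed_baselines:.1f} (plus 1 active)")
    print(f"Worst-case footprint: ~{worst_case_tb:.1f} TB of baseline data")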
Hello. Scenario: 2 MAs in a CommCell. The customer has bought one more MA and wants to add its SSD disk space to the existing DDB. I was reading the manual (Commvault site) and have a question: the page says "before you start". Where can I get this "Authentication Code"? https://documentation.commvault.com/m/commvault/v11_sp5/article?p=features/deduplication/t_configuring_additional_partitions.htm
Hello, sorry, I can't correct the title (are/or :)). On a storage policy that has deduplication enabled I get this message, and I understand why. My question is: can I unselect the Extended Retention Rule 1 that is set for 90 days and increase the basic retention to, for example, 90 days? If yes, how many cycles is good to have for 90 days? Thanks
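As a rough sanity check on the days-versus-cycles question (a minimal sketch; the weekly-full schedule is an assumption, not something stated in the post):

    # Hedged sketch: how many cycles roughly line up with a 90-day basic retention,
    # assuming (hypothetically) one full backup, i.e. one new cycle, per week.
    import math

    retention_days = 90
    days_per_cycle = 7        # assumed: weekly fulls
    cycles = math.ceil(retention_days / days_per_cycle)
    print(f"{retention_days} days with weekly fulls -> about {cycles} cycles")   # ~13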
The Tape Storage Matrix is getting quite "interesting" over time. I am not sure why there are still references to outdated, unsupported OSes and architectures, but that's more of a comment than a question. I'm trying to verify whether a drive is on the matrix and am not coming up with anything. Under the filter section there's no Windows OS. Hmm. If I search for the T50e, Windows 2008 R2 is the newest "supported" OS for it? What? Is this to say that if I call in a ticket for a newer OS the autoloader might support, I'll be told "sorry, Charlie"? The latest code for that unit is BlueScale12.7.07.03-20180817F, and it's not in the matrix at all. Not sure how to take this. Has anyone had a similar situation? There is plenty of life left in these units and they are still fully supported by Spectra. Will LTO-8 work? Thanks
Hi there! Could you please explain why there is a discrepancy between Application Size and Total Data to Process in the Job Controller view? My assumption is that there are some leftovers from previous backups that still need to be backed up or copied. Or is there another reason?
Hi there! Is there any way to investigate a very poor reader time in an NDMP backup in Commvault? A quick look at part of the log suggests slow reader time is the culprit of the poor backup performance:

|*1292266*|*Perf*|696353| -----------------------------------------------------------------------------------------------------
|*1292266*|*Perf*|696353| Perf-Counter                                      Time(seconds)    Size
|*1292266*|*Perf*|696353| -----------------------------------------------------------------------------------------------------
|*1292266*|*Perf*|696353|
|*1292266*|*Perf*|696353| Replicator DashCopy
|*1292266*|*Perf*|696353| |_Buffer allocation............................    -    [Samples - 477421] [Avg - 0.000000]
|*1292266*|*Perf*|696353| |_Media Open...................................   20    [Samples - 5] [Avg - 4.000000]
|*1292266*|*Perf*|696353| |_Chunk Recv...................................
Hi folks, I have an interesting situation with an aux copy. I'm doing a DASH copy from one disk library to another disk library. Despite the successful completion of the Aux Copy, 4 jobs remain in "Partially Copied" status. I ran Data Verification on the Primary Copy for the 4 jobs and it completed successfully. I did a Re-Copy but the status stays the same; I did Do Not Copy and then Pick for Copy, but it's still the same. All Backups is selected in the copy policy. What should I check? Best regards.
Hi. Background: our backups are stored on a Data Domain, and the Data Domain replicates itself to a remote site. Additionally, we've got daily snapshots on the Data Domain itself to prevent backup data deletion (these snapshots can only be deleted using the sysadmin account). Because we use Data Domain we store our backups without compression or deduplication, but my question applies equally to the following situation: let's assume I've got a MediaAgent with a data disk attached to it (where the disk library lives). The building catches fire and I am only able to unplug the data drive; I lose everything except that data drive. If I install a new CommCell, can I somehow import the backups from that data drive?
Hi. A single-partition DDB outgrew its SSD. The simple solution was to add a spare SSD and partition the DDB, so it now has two active partitions. The result is that the second, "empty" partition has grown a little, but the original partition has remained almost full (due to some infinite-retention clients). Is there any way to "re-balance" the DDBs? I could perform a full reconstruction (from disk, not from a DDB backup), but that's going to take 2-3 days to complete. I wondered whether there was a "hidden feature" or a workflow that could dynamically balance the space used by the DDB equally across both SSDs. Many thanks
I have a question. We have an Azure storage account container named "backup" added as a Cloud Storage Library in Commvault; this storage account and container is provisioned in Commvault as the CL_Backup library. Can I create an additional container in the same storage account, name it "newbackups", and add it as a separate library under Storage Resources, called CL_SecondaryBackups for example? I'm trying to leverage cost savings in Azure by using the same storage account with multiple containers.
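On the Azure side, adding the container itself is straightforward; below is a minimal sketch using the azure-storage-blob SDK (the connection string is a placeholder, and the container name comes from the post). Whether the new container can then be registered as a separate cloud library is the Commvault-side question.

    # Hedged sketch: create a second container in the same Azure storage account.
    # The connection string is a placeholder; "newbackups" is the name used in the post.
    from azure.storage.blob import BlobServiceClient

    conn_str = "<storage-account-connection-string>"
    service = BlobServiceClient.from_connection_string(conn_str)

    # Containers in one storage account are independent namespaces, so a second
    # container can be targeted by a separate library definition.
    service.create_container("newbackups")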
Hello, we have created a test Azure Blob library which is intended for a deduplicated secondary copy. There is an immutability policy set on the container. According to the Commvault documentation we set the container retention to twice the value of the storage policy copy retention, and then set the DDB property "create new DDB every N days" to the value of the storage policy copy retention. During the backup cycles, sealed DDBs remain that don't reference any jobs (all expired). Then, at some point, they are automatically removed (and their baseline is then pruned from the cloud storage). These baselines in the cloud consume a lot of space (and cost); there are 3 to 4 baselines in the cloud during the backup cycles. Does anyone have experience with cloud library deduplication (with immutable blobs)? Is more than 3 times the space really necessary for the backup? Which process in Commvault decides when a sealed DDB will be removed? After the test we would like to give a realisti…
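For comparison, a rough estimate of how many baselines can coexist under the configuration described (a minimal sketch; the 180-day figure is only an illustrative example, and the assumption that a sealed baseline is held until its immutability window has fully expired is an assumption, not documented Commvault behaviour):

    # Hedged sketch: estimate concurrent baselines for an immutable cloud copy.
    # Assumes a sealed DDB's baseline can only be pruned once the container
    # immutability window has expired for all of its blobs -- an assumption.
    copy_retention_days = 180                        # example value (not given in the post)
    container_lock_days = 2 * copy_retention_days    # container retention = 2x copy retention
    seal_interval_days = copy_retention_days         # "create new DDB every N days" = copy retention

    sealed_locked = container_lock_days / seal_interval_days   # ~2 sealed baselines still locked
    total_baselines = sealed_locked + 1                         # plus the active DDB's baseline
    print(f"Roughly {total_baselines:.0f} baselines in the cloud at any time")   # ~3, close to the 3-4 observed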
Hello, I have an IBM TS3200 tape library with LTO-4 media. Now we have a new HPE tape library with LTO-7 media. How can we copy the data from the LTO-4 media (old tape library) to the LTO-7 tape library? Which way is recommended? Maybe Media Refresh? Thank you! Best regards, Elizabeta
Hi, we have a SAS-attached TS4300 tape library dedicated to an AIX 7.2 LPAR hosted on an IBM Power9 host, and the Atape driver is installed on AIX. We can take a backup and restore it at the OS level, but Commvault tools like testinq and ScanScsiTool are not able to detect the device.

root> lsdev -Cc tape
rmt0 Available 00-00-00 IBM 3580 Ultrium Tape Drive (SAS)
smc0 Available 00-00-00 IBM 3573 Library Medium Changer (SAS)

root> lsdev -Cc adapter | grep sas
sissas0 Available 00-00 PCIe3 RAID SAS Adapter Quad-port 6Gb x8

testinq gives the following error:

root> /opt/commvault/Base> ./detectdevices -add tape
sas0 rmt0.1 5764854988047962203 0 pthru_atape tape
sas0 smc0 5764854988047962206 1 pthru_atape tape

root> /opt/commvault/MediaAgent64> ./testinq /dev/sas0 5764854988047962203 0
devsubtype = '7', 0x37
Error: ioctl SCIOLINQU failed with error 19 (No such device)
Info: version = 2, status_validity = 0x02, scsi_status = 0x00, adapter_status = 0x04, adap_set_flags = 0x00

Is there any specific driver on OS lev…
We copy data to AWS S3 for long-term retention and let it age to Glacier after 30 days. For this data to be restorable, I first have to rehydrate it with a workflow, as Commvault only expects to work with the S3 storage tier. This works fine for occasional restores. We are now looking into a larger data removal as part of a cleanup project to reduce AWS Glacier storage costs. My questions: if I delete jobs from the CommCell that have aged to Glacier, does Commvault have to rehydrate that data back to the S3 tier before it can be purged from AWS? If so, is this done with the same Cloud Recall workflow used for restore jobs?
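For reference, this is roughly what a per-object "rehydration" from Glacier looks like at the S3 API level (a minimal boto3 sketch; the bucket and key are placeholders, and it does not answer whether Commvault needs to do this before it can prune aged jobs):

    # Hedged sketch: what a Glacier "rehydration" request looks like at the S3 API level.
    # Bucket name and object key are placeholders, not real Commvault paths.
    import boto3

    s3 = boto3.client("s3")
    s3.restore_object(
        Bucket="my-commvault-bucket",          # placeholder bucket name
        Key="CV_MAGNETIC/V_12345/CHUNK_1",     # placeholder object key
        RestoreRequest={
            "Days": 1,                         # keep the restored copy only briefly
            "GlacierJobParameters": {"Tier": "Standard"},
        },
    )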