Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 588 Topics
- 3,129 Replies
Hi. A single-partition DDB outgrew its SSD. The simple solution was to add a spare SSD and partition the DDB, so there are now two active partitions. This has resulted in the second, 'empty' DDB partition growing a little, but the original partition has remained almost full (due to some infinite-retention clients). Is there any way to 're-balance' the DDBs? I could perform a full reconstruction (from disk, not from a DDB backup), but that is going to take 2-3 days to complete. I wondered whether there was a 'hidden feature' or a workflow that could dynamically balance the space used by the DDB equally across both SSDs. Many thanks.
I have a question. We have an Azure storage account container "backup" added as a Cloud Storage Library in Commvault. This storage account and container are provisioned in Commvault as the CL_Backup library. Can I create an additional container in the same storage account, name it "newbackups", and add it as a separate library in Storage Resources called CL_SecondaryBackups, for example? I'm trying to leverage cost savings in Azure by using the same storage account with multiple containers.
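On the Azure side, adding a further container to the same storage account is a single call; a minimal sketch with the Azure CLI, where the account name is a placeholder and Azure AD login-based authorization is assumed:

    # Create a second container in the existing storage account (hypothetical account name)
    az storage container create --account-name mystorageacct --name newbackups --auth-mode login

Whether Commvault will happily treat each container as its own cloud library is exactly the question above, so the sketch only covers the Azure side of the setup.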
Hello, we have created a test Azure Blob library which should be used for a deduplicated secondary copy. There is an immutable policy set on the container. According to the Commvault documentation we set the container retention to twice the value of the storage policy copy retention, and then set the DDB property "create new DDB every n days" to the value of the storage policy copy retention. During the backup cycles there remain sealed DDBs which don't reference any jobs (all expired). Then, at some point, they are automatically removed (and then their baseline is removed from the cloud storage). These baselines in the cloud consume a lot of space (and cost). There are 3 to 4 baselines in the cloud during the backup cycles. Does anybody have experience with cloud library deduplication (with immutable blobs)? Is really more than 3 times the space necessary for the backup? Which process in Commvault decides when a sealed DDB will be removed? After the test we would like to give a realisti…
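For illustration, a minimal worked sketch of the retention arithmetic described above, assuming a hypothetical 30-day storage policy copy retention (the actual value is not stated in the post):

    # Hypothetical numbers; only the relationships come from the post above
    COPY_RETENTION_DAYS=30
    IMMUTABLE_RETENTION_DAYS=$((2 * COPY_RETENTION_DAYS))   # container immutability = twice the copy retention
    SEAL_DDB_EVERY_DAYS=$COPY_RETENTION_DAYS                # "create new DDB every n days" = copy retention
    echo "Immutability ${IMMUTABLE_RETENTION_DAYS}d, seal DDB every ${SEAL_DDB_EVERY_DAYS}d"

With a fresh baseline written after every seal, and older baselines held until their jobs expire and the immutability window lapses, several baselines coexisting at once (as observed above) is consistent with this arrangement.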
Hello, I have an IBM TS3200 tape library with LTO4 media. Now we have a new HPE tape library with LTO7 media. How can we copy data from the LTO4 media (old tape library) to the LTO7 tape library? Which way is recommended? Maybe Media Refresh? Thank you! Best regards, Elizabeta
Hi, we have a SAS-attached TS4300 tape library dedicated to an AIX 7.2 LPAR hosted on an IBM Power9 host, with the Atape driver installed on AIX. We can take a backup and restore it at the OS level, but Commvault tools like testinq and ScanScsiTool are not able to detect the device.
root> lsdev -Cc tape
rmt0 Available 00-00-00 IBM 3580 Ultrium Tape Drive (SAS)
smc0 Available 00-00-00 IBM 3573 Library Medium Changer (SAS)
root> lsdev -Cc adapter | grep sas
sissas0 Available 00-00 PCIe3 RAID SAS Adapter Quad-port 6Gb x8
testinq gives the following error:
root> /opt/commvault/Base> ./detectdevices -add tape
sas0 rmt0.1 5764854988047962203 0 pthru_atape tape
sas0 smc0 5764854988047962206 1 pthru_atape tape
root> /opt/commvault/MediaAgent64> ./testinq /dev/sas0 5764854988047962203 0
devsubtype = '7', 0x37
Error: ioctl SCIOLINQU failed with error 19 (No such device)
Info: version = 2, status_validity = 0x02, scsi_status = 0x00, adapter_status = 0x04, adap_set_flags = 0x00
Is there any specific driver on OS lev…
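For reference, a few OS-level checks that may help confirm the Atape driver and device state before re-running the Commvault detection tools; the device names rmt0 and smc0 come from the output above, while the Atape fileset name is an assumption and may differ on your system:

    # Confirm the Atape fileset is installed and note its version (assumed fileset name)
    lslpp -l Atape.driver
    # Show vital product data for the drive and the medium changer
    lscfg -vpl rmt0
    lscfg -vpl smc0
    # List the drive's current attributes (block size, reservation settings, etc.)
    lsattr -El rmt0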
We copy data to AWS S3 for long-term retention and let it age to Glacier after 30 days. In order for this data to be restorable, I first have to rehydrate it with a workflow, as Commvault only expects to work with the S3 storage tier. This works fine for the occasional restore. We are now looking into a larger data removal as part of a cleanup project to reduce AWS Glacier storage costs. My questions: if I delete jobs from the CommCell that have aged to Glacier, does Commvault have to rehydrate this data back to the S3 tier before it can be purged from AWS? If so, is this done with the same Cloud Recall workflow used for restores?
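For context, the "rehydration" mentioned above corresponds to an S3 restore request against the archived objects; a minimal sketch with the AWS CLI, where the bucket, key, tier and day count are placeholders, and which says nothing about whether Commvault itself issues such requests before pruning (that is the open question):

    # Ask S3 to stage an archived object back to the standard tier for 7 days (illustrative values)
    aws s3api restore-object \
        --bucket my-commvault-bucket \
        --key path/to/chunk/object \
        --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Bulk"}}'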
Doing some preparation of our DR environment and wondering: if we intend to only do restores in the DR environment, is SSD still a requirement? My thought was that the SSD performance is required for signature lookups (on writes), but for restores this would not be required. Thanks!
Hi guys, I finally found the exact article that describes a solution I wanted to implement and am seeking opinions on whether to do this or not. Basically I want to create multiple Selective Copies under a storage policy and associate them with different subclients/computers to meet a client's tiering model and different retentions. Because I also want to implement deduplication between the Primary and each selective copy (Weekly and Monthly), I'm weighing the option to create the Selective Copy using a Library instead of a Storage Pool, as in the article below:
Article: https://documentation.commvault.com/commvault/v11/article?p=119730.htm
In Step 17b, can I select the Partition Path as a normal Windows folder, e.g. D:\<randomFolder>? If I create additional selective copies under the same storage policy, can I use the same D:\<randomFolder> to deduplicate data between the Primary Copy and the additional selective copies, or do I have to create a D:\<randomFolder1> and so forth for each copy? I ask because I do n…
Hi, we are setting up a new Windows MA with a new DD-Engine and a new DiskLib. The DDB disk has been configured with the recommended 32k block size and the DiskLib volumes with a 64k block size. We have run a new DASH Copy in the SP to create a new dedup baseline from the previously backed-up data, and the plan is to make this DASH Copy the new Primary once all data is available. A standard re-baseline operation, which usually gets a better dedup ratio with an updated DD-Engine. The backup data is mostly Hyper-V VMs. In this case the Data Written in the old copy was 9TB, and the copy stopped at 18TB on the new MA as it ran out of resources (DiskLib full). This was unexpected. Now we are thinking about doing Move DDB and Move MountPath instead, but if there is a general deduplication degradation we are not sure this will work, and whether we will run into the same issues again. Does anyone have similar experience or knowledge about this issue and a recommendation on how to move forward? I have done both new baseline…
Hi guys, I have a storage policy SP_A that runs daily incrementals Monday-Saturday and a FULL on Sunday. I have a secondary copy in SP_A, a selective copy of the weekly full backup, scheduled to run on Monday at 23:00. I wonder if I can get this behaviour: the same auxiliary copy of the secondary copy, with the last full backup of the week, but running just after the primary copy finishes. Is it possible to get this?
Is it possible to create a storage policy to back up data to a tape library on a weekly basis with manual barcode/media labels?
I am looking for documentation on how to set up a storage policy that backs up data to tape on a scheduled basis, like tape #1 for week 1 of the month, tape #2 for week 2 of the month, etc., in a round-robin scenario from one backup job only. Thank you.
We are trying to move a mount path to a new media agent, but it has been failing ever since we stopped the job. The disk became full, resulting in the data migration job being stuck for weeks, so we stopped the job and freed up space. However, every time we try to restart the mount path move it fails. We did the following to troubleshoot:
- Ran disk defragmentation
- Verified the disk and mount path are online
- Validated the mount path
- Verified no disk maintenance task is running
- General sanity checks
The error that we get is:
Move Mount Path Job Failed, Reason: The mount path is under maintenance and hence cannot proceed with the move operation. Source: <MediaAgentName>, Process: LibraryOperation
Can anyone tell me how to fix this error? How do we get Commvault to see that the disk is not in maintenance mode?
Since noon on Saturday (May 15), my Disaster Recovery Backup admin jobs have been failing with this error:
Error Code: [34:85]
Description: CommServeDR: Error Performing Transfer: Error: [Failed to initialize with Commvault cloud service, The service may be down for maintenance.]
Source: inf-srv57, Process: commserveDR
Is anyone else having issues with the Commvault cloud service? Ken
Hi, I have a customer with 2 copies:
1. Primary, dedup on disk, with 66 jobs.
2. Secondary, dedup on disk, with 133 jobs.
He created a copy #3 to replace the secondary, but he chose #1 as the source, and some jobs exist only in the secondary copy. Is there a way to pick up the missing jobs by changing the source of copy #3 to copy #2? If I change it and run an Aux Copy, will the missing jobs be picked up? Or do I have to delete copy #3 and start over?
Hi, until recently ~1 TB of data was stored on all our LTO4 tapes. I recently changed two things:
- I created a Global Secondary Copy Policy
- I enabled software encryption for the (secondary) backups to tape (Re-encrypt, Blowfish, key length 128, No Access)
Now only ~750 GB of data is stored on the tapes before they are marked full, a decrease of 25%. Is one of these two changes a known, proven and expected cause for this decreased usage of the tapes? Thanks!
Hello, there still seem to be more problems :-( Next I launched an aux copy for two more tapes of the same storage policy. It reached 98% and went to Pending status, with no error at all. It took the same tape as in the previous process. I killed the process… I have deleted the contents of the new LTO7 tape because the aux copy process did not finish to 100%. Now I run an auxiliary copy with a backup period in which there are backups for four tapes, but the job completes with "no more data copied". Is it possible to run an aux copy from the same LTO4 tapes twice, to a new, different LTO7 tape?
Hello guys. I'm looking for some advice/tips on how best to configure additional selective copies in a storage policy and ensure they are deduplicated, to avoid rewriting the same blocks on cloud storage. The Primary Copy is deduped and goes to Library 1. I want Weekly and Monthly copies to go to Libraries 2 and 3 respectively, with each copy disabled. I noticed I can't use the Global Deduplication Policy being used by the Primary Copy on the additional copies. Does anyone have thoughts on how to tackle this? I'm not a fan of using Extended Retention on the Primary Copy and setting the Weekly and Monthly retention on one medium/point of failure.
Creating a Storage Policy Copy with Deduplication vs Creating a Deduplication Enabled Storage Policy Copy
Hi guys, this might seem stupid, but I'm a bit confused by these two documents on the Commvault website that talk about deduplicating policy copies. If I'm reading the articles below correctly, the difference between:
Creating a Storage Policy Copy with Deduplication: https://documentation.commvault.com/commvault/v11/article?p=12446.htm
and
Creating a Deduplication Enabled Policy Copy: https://documentation.commvault.com/commvault/v11/article?p=14132.htm
is that the former is created using a Storage Pool (a dedup engine already exists), whilst for the latter the deduplication location is not an existing dedup engine (storage pool) but just a local folder on the media agent? If that's correct and I want to use the latter to deduplicate additional independent copies, e.g. Weekly Fulls and Monthly Fulls on independent libraries, against the Primary Copy data, is there a downside to it? Need some assistance on this.