Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 672 Topics
- 3,378 Replies
Additional Partitions for a Deduplication Database
Hello. Scenario: two MediaAgents in a CommCell. The customer has bought one more MediaAgent and wants to add its SSD disk space to the existing DDB. I was reading the manual on the Commvault site and have a question: the "Before You Begin" section mentions an "Authenticate Code". Where can I get this code? https://documentation.commvault.com/m/commvault/v11_sp5/article?p=features/deduplication/t_configuring_additional_partitions.htm
Extended retention or not?
Hello. Sorry, I can't correct the title ("are"/"or") :) On a storage policy that has deduplication enabled I get this message, and I understand why. My question is: can I unselect Extended Retention Rule 1, which is set to 90 days, and instead increase the basic retention to, for example, 90 days? If yes, how many cycles should I keep for 90 days? Thanks
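As a rough, back-of-the-envelope illustration of the cycle-count question, assuming one full backup per week starts a new cycle (adjust the interval to your actual schedule):

```python
# Rough sketch only: estimate how many cycles cover a retention window,
# assuming one full backup per week starts a new cycle (hypothetical schedule).
import math

retention_days = 90        # desired basic retention
full_interval_days = 7     # hypothetical: weekly fulls

cycles = math.ceil(retention_days / full_interval_days)
print(f"{retention_days} days at one full every {full_interval_days} days -> at least {cycles} cycles")
# -> 13 cycles; many sites keep one extra cycle as a safety margin.
```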
LTO-8 drive compatibility etc..
The Tape Storage Matrix is getting quite "interesting" over time. I'm not sure why there are still references to outdated, unsupported OSes and architectures, but that's more of a comment than a question. I'm trying to verify whether a drive is on the matrix and not coming up with anything. Under the filter section there's no Windows OS. Hmm. If I search for the T50e, Windows 2008 R2 is the newest "supported" OS for it? What? Is this to say that if I open a ticket for a newer OS that this autoloader might support, they'll tell me "sorry, Charlie"? The latest firmware for that unit is BlueScale12.7.07.03-20180817F, and it's not in the matrix at all. Not sure what to make of this. Has anyone had a similar situation? There's plenty of life left in these units and they are still fully supported by Spectra. Will LTO-8 work? Thanks
Import backup from Disk Library
Hi. Background: our backups are stored on a Data Domain, which replicates itself to a remote site. Additionally, we have daily snapshots on the Data Domain itself to prevent backup data deletion (these snapshots can only be deleted using the sysadmin account). Because we use Data Domain, we store our backups without compression or deduplication. But my question applies equally to the following situation: let's assume I have a MediaAgent with a data disk attached (where the disk library lives). The building catches fire and I am only able to unplug the data drive; I lose everything except that drive. If I install a new CommCell, can I then import the backups from that data drive somehow?
Application Size vs Total Data to Process Discrepancy
Hi there! Could you please explain why there is a discrepancy between Application Size and Total Data to Process in the Job Controller view? My assumption is that there are some leftovers from previous backups that still need to be backed up or copied. Or is there another reason?
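A tiny sketch of the poster's own hypothesis (not Commvault's actual accounting), just to make the arithmetic concrete; the figures are hypothetical:

```python
# Hypothetical numbers illustrating the assumption above: items left over from
# earlier jobs get queued again, so "Total Data to Process" exceeds the new changes.
new_changes_gb = 120   # data changed since the last backup (hypothetical)
leftover_gb = 35       # carried over from previous incomplete jobs (hypothetical)

total_data_to_process_gb = new_changes_gb + leftover_gb
print(f"Total Data to Process: {total_data_to_process_gb} GB vs new changes: {new_changes_gb} GB")
```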
NDMP slow backup/reader time
Hi there! Is there any way to investigate very poor reader time for an NDMP backup in Commvault? A quick look at part of the log suggests slow reader time is the culprit for the poor backup performance:
|*1292266*|*Perf*|696353| -----------------------------------------------------------------------------------------------------
|*1292266*|*Perf*|696353| Perf-Counter                                        Time(seconds)      Size
|*1292266*|*Perf*|696353| -----------------------------------------------------------------------------------------------------
|*1292266*|*Perf*|696353|
|*1292266*|*Perf*|696353| Replicator DashCopy
|*1292266*|*Perf*|696353|  |_Buffer allocation............................ -        [Samples - 477421] [Avg - 0.000000]
|*1292266*|*Perf*|696353|  |_Media Open................................... 20       [Samples - 5] [Avg - 4.000000]
|*1292266*|*Perf*|696353|  |_Chunk Recv...................................
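When digging into a CVPerfMgr-style excerpt like the one above, it can help to rank the counters by their time column. A minimal sketch, with the regex based only on the line format shown here (so treat the layout as an assumption):

```python
import re

# Pull "|_Counter name....... <seconds>" entries out of an excerpt and rank them
# so the slowest stage stands out. The pattern assumes the layout shown above.
line_re = re.compile(r"\|_(?P<name>[A-Za-z ]+?)\.{2,}\s*(?P<seconds>\d+)\s")

def slowest_counters(log_text: str):
    hits = []
    for line in log_text.splitlines():
        m = line_re.search(line)
        if m:
            hits.append((m.group("name").strip(), int(m.group("seconds"))))
    return sorted(hits, key=lambda item: item[1], reverse=True)

excerpt = "|*1292266*|*Perf*|696353|  |_Media Open................................... 20   [Samples - 5] [Avg - 4.000000]"
for name, seconds in slowest_counters(excerpt):
    print(f"{name}: {seconds}s")   # -> Media Open: 20s
```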
Auxiliary Copy did not copy some jobs
Hi folks, I have an interesting situation with an aux copy. I'm doing a DASH copy from one disk library to another. Despite the successful completion of the Aux Copy, 4 jobs remain in "Partially Copied" status. I ran Data Verification on the Primary Copy for those 4 jobs and it completed successfully. I tried Re-Copy, but it stays the same; I also tried Do Not Copy followed by Pick for Copy, but it's still the same. "All Backups" is selected in the copy policy. What should I check? Best regards.
Balance a DDB
Hi. A single-partition DDB outgrew its SSD. The simple solution was to add a spare SSD and partition the DDB, so it now has two active partitions. This has resulted in the second, 'empty' partition growing a little, but the original partition has remained almost full (due to some infinite-retention clients). Is there any way to 're-balance' the DDB partitions? I could perform a full reconstruction (from disk, not from a DDB backup), but that's going to take 2-3 days to complete. I wondered whether there is a 'hidden feature' or a workflow that could dynamically balance the space used by the DDB equally across both SSDs. Many thanks
Add Azure Storage Account Container as an Independent Library
I have a question. We have an Azure storage account container, "backup", added as a cloud storage library in Commvault; this storage account and container are provisioned in Commvault as the CL_Backup library. Can I create an additional container in the same storage account, name it "newbackups", and add it as a separate library in Storage Resources called, for example, CL_SecondaryBackups? I'm trying to leverage cost savings in Azure by using the same storage account with multiple containers.
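For the container side of this, a minimal sketch with the Azure Python SDK (azure-storage-blob); the account URL, credential, and container name are placeholders, and the new container would then be added in Commvault as its own cloud library (e.g. the CL_SecondaryBackups example above):

```python
# Sketch only: create a second container in the same storage account.
# Placeholders: replace the account name and credential with your own values.
from azure.storage.blob import BlobServiceClient

account_url = "https://<storage-account-name>.blob.core.windows.net"
credential = "<storage-account-key-or-sas-token>"

service = BlobServiceClient(account_url=account_url, credential=credential)
service.create_container("newbackups")   # sits beside the existing "backup" container
print("Created container 'newbackups' in the same storage account")
```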
Azure blob space library with immutable policy
Hello, we have created a test Azure Blob library that is to be used for a deduplicated secondary copy. There is an immutability policy set on the container. Per the Commvault documentation, we set the container retention to twice the storage policy copy retention, and in the DDB properties set "create new DDB every N days" to the value of the storage policy copy retention. During the backup cycles, sealed DDBs remain that no longer reference any job (all expired). At some later point they are automatically removed (and then their baseline is removed from the cloud storage). These baselines in the cloud consume a great deal of space (and cost); there are 3 to 4 baselines in the cloud during the backup cycles. Does anybody have experience with cloud-library deduplication (with immutable blobs)? Is more than 3 times the space really necessary for the backups? Which process in Commvault decides when a sealed DDB is removed? After the test we would like to give a realisti
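A rough arithmetic sketch (not Commvault's documented formula) of why three or four baselines can coexist under the configuration described above; the copy retention value is hypothetical:

```python
# Back-of-the-envelope estimate, assuming the configuration described above:
# container lock = 2x copy retention, DDB sealed every copy-retention days.
copy_retention_days = 30                          # hypothetical copy retention
ddb_seal_interval_days = copy_retention_days      # "create new DDB every N days"
container_lock_days = 2 * copy_retention_days     # immutability period on the container

# Blobs written just before a store is sealed stay locked for container_lock_days,
# so a sealed baseline can linger roughly this long after the store was created:
baseline_lifetime_days = ddb_seal_interval_days + container_lock_days

concurrent_baselines = baseline_lifetime_days / ddb_seal_interval_days
print(f"~{concurrent_baselines:.0f} baselines at any one time")  # ~3; pruning delays can push it to 4
```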
Problem with copy media LTO4 (IBM Tape library) to LTO7 (HPE Tape Library)
Hello, I have an IBM TS3200 tape library with LTO-4 media. Now we have a new HPE tape library with LTO-7 media. How can we copy data from the LTO-4 media (old tape library) to the LTO-7 tape library? Which way is recommended? Maybe Media Refresh? Thank you! Best regards, Elizabeta
Issue with detecting SAS-attached Tape Library
Hi, we have a SAS-attached TS4300 tape library dedicated to an AIX 7.2 LPAR hosted on an IBM Power9 host, with the Atape driver installed on AIX. We can take a backup and restore it at the OS level, but Commvault tools like testinq and ScanScsiTool are not able to detect the device.
root> lsdev -Cc tape
rmt0 Available 00-00-00 IBM 3580 Ultrium Tape Drive (SAS)
smc0 Available 00-00-00 IBM 3573 Library Medium Changer (SAS)
root> lsdev -Cc adapter | grep sas
sissas0 Available 00-00 PCIe3 RAID SAS Adapter Quad-port 6Gb x8
testinq gives the following error:
root> /opt/commvault/Base> ./detectdevices -add tape
sas0 rmt0.1 5764854988047962203 0 pthru_atape tape
sas0 smc0 5764854988047962206 1 pthru_atape tape
root> /opt/commvault/MediaAgent64> ./testinq /dev/sas0 5764854988047962203 0
devsubtype = '7', 0x37
Error: ioctl SCIOLINQU failed with error 19 (No such device)
Info: version = 2, status_validity = 0x02, scsi_status = 0x00, adapter_status = 0x04, adap_set_flags = 0x00
Is there any specific driver on OS lev
AWS Glacier data removal
We copy data to AWS S3 for long-term retention and let it age to Glacier after 30 days. For this data to be restorable I first have to rehydrate it with a workflow, as Commvault only expects to work with the standard S3 storage tier. This works fine for occasional restores. We are now looking into a larger data removal as part of a cleanup project to reduce AWS Glacier storage costs. My questions: if I delete jobs from the CommCell that have aged to Glacier, does Commvault have to rehydrate this data back to the standard S3 tier before it can be purged from AWS? If so, is this done with the same Cloud Recall workflow used for restores?
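For context, a sketch of the raw S3 calls involved (not the Commvault workflow itself); bucket and key names are placeholders, and whether Commvault's pruning path issues a recall first is exactly the open question above:

```python
# Sketch only: restore_object rehydrates a Glacier-tier object so it can be read,
# while delete_object works on Glacier-tier objects directly at the S3 API level.
import boto3

s3 = boto3.client("s3")
bucket, key = "my-commvault-bucket", "CV_MAGNETIC/V_12345/CHUNK_67890"  # hypothetical names

# Rehydration, presumably what the Cloud Recall workflow drives under the hood
s3.restore_object(
    Bucket=bucket,
    Key=key,
    RestoreRequest={"Days": 5, "GlacierJobParameters": {"Tier": "Standard"}},
)

# Deletion itself does not require a prior restore at the API level
s3.delete_object(Bucket=bucket, Key=key)
```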
DDB storage for DR restore only
I'm doing some preparation of our DR environment and wondering: if we intend to only do restores in the DR environment, is SSD still a requirement for the DDB? My understanding is that the performance is needed for signature lookups (on writes), but for restores it would not be required. Thanks!
Creating a Selective Copy Using a Library (Enable Deduplication)
Hi guys, I finally found the exact article that describes a solution I want to implement and am seeking opinions on whether to do it or not. Basically, I want to create multiple selective copies under a storage policy and associate them with different subclients/computers to meet a client's tiering model with different retentions. Because I also want deduplication between the Primary and each selective copy (Weekly and Monthly), I'm weighing the option to create a selective copy using a library instead of a storage pool. In the article below (Article: https://documentation.commvault.com/commvault/v11/article?p=119730.htm), Step 17b: can I select the Partition Path as a normal Windows folder, e.g. D:\<randomFolder>? If I create additional selective copies under the same storage policy, can I use the same D:\<randomFolder> to deduplicate data between the Primary Copy and the additional selective copies, or do I have to create a D:\<randomFolder1> and so forth for each copy? I ask because I do n
Dedup Data Written expands with new DD-engine
Hi, we are setting up a new Windows MA with a new DD-Engine and a new disk library. The DDB disk has been configured with the recommended 32 KB block size and the disk library volumes with a 64 KB block size. We have run a new DASH copy in the storage policy to create a new dedup baseline from the previously backed-up data, and the plan is to make this DASH copy the new Primary once all data is available; a standard re-baseline operation, which usually gets a better dedup ratio with an updated DD-Engine. The backup data is mostly Hyper-V VMs. In this case the Data Written on the old copy was 9 TB, but the copy stopped at 18 TB on the new MA as it ran out of resources (disk library full). This was unexpected. Now we are thinking about doing Move DDB and Move Mount Path, but if there is a general deduplication degradation we are not sure this will work, or whether we will run into the same issues again. Does anyone have a similar experience or knowledge about this issue, and a recommendation on how to move forward? I have done both new baseline
run auxiliary copy when full backup is done
Hi guys, I have a storage policy SP_A that runs daily incrementals Monday to Saturday and a FULL on Sunday. I have a secondary copy in SP_A with a selective copy of the weekly full scheduled for Monday at 23:00. I wonder if I can get this behaviour: the same auxiliary copy of the secondary copy, with the last full backup of the week, but running as soon as the primary copy finishes. Is it possible to get this?