Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 725 Topics
- 3,531 Replies
Hello all, I'm working on getting government certification with Commvault 11.24. I have been asked to confirm the requirements for some entities in Commvault, specifically 1) the maximum character length and 2) the input constraints for each of the following fields:
- User: user name, password, email
- Server: server name, host name, installation location, BackupSet name, subclient name
- Plan: plan name, backup destination name, customer path, copy name, snap copy name
- Jobs: view name
- Disk: disk name, backup location, deduplication DB location
Regards, Kim KK
Hi, we are setting up a new Windows MA with a new DD-Engine and a new DiskLib. The DDB disk has been configured with the recommended 32k block size and the DiskLib volumes with 64k block size. We ran a new DASH copy in the SP to create a new dedupe baseline from the previously backed-up data, and the plan is to make this DASH copy the new Primary once all data is available - a standard re-baseline operation that usually gets a better dedupe ratio with an updated DD-Engine. The backup data is mostly Hyper-V VMs. In this case the Data Written in the old copy was 9 TB, but the copy stopped at 18 TB on the new MA as it ran out of resources (DiskLib full), which was unexpected. Now we are considering a Move DDB and Move Mount Path instead, but if there is a general deduplication degradation we are not sure this will work, or whether we will run into the same issues again. Does anyone have similar experience or knowledge about this issue and a recommendation on how to move forward? I have done both new baseline
I’ve used the workflow in question a few times and it works. However, when I looked at it today it seems it was changed at some point in time: the documentation no longer corresponds to what is shown in the GUI, and it seems it is no longer possible to target a specific cloud library. Is it just my installation where this has changed, or has anyone else seen this? Screenshots provided below. //Henke
Hello, our client asked me how restores from Azure cloud storage work when data is located on the cool tier versus the archive tier.

We tested the archive tier first. We copied the data to the cloud archive storage and performed a restore of a single database. We selected only one (of many) databases, which should be less than 10 GB, but the full backup job is 8.5 TB (size of application). The recall archive workflow has been running for 3 days and is only 30% complete - is it recalling all 8.5 TB of data from archive storage?

And how would it work with cool tier cloud storage, where the data is physically available (online)? If we copy a job of 8 TB and then restore only one database (~10 GB) from this huge backup, will it download all 8 TB of data, or only the chunks that actually contain the data of this small 10 GB database?

Thanks in advance for the help :) Have a nice day! Regards, Mateusz
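For anyone hitting the same archive-tier wait: blobs in the Azure archive tier cannot be read at all until they have been rehydrated to hot or cool, which is why a recall can take many hours or days regardless of how little you ultimately restore. Below is a minimal sketch with the azure-storage-blob Python SDK for checking a chunk blob's tier and rehydration status - the connection string, container and blob names are placeholders, and Commvault's recall workflow normally drives this for you; this only illustrates the mechanics:

from azure.storage.blob import BlobClient, StandardBlobTier, RehydratePriority

# placeholder connection details - not from the original post
conn_str = "<storage-account-connection-string>"
blob = BlobClient.from_connection_string(
    conn_str, container_name="<library-container>", blob_name="<path-to-chunk-blob>"
)

props = blob.get_blob_properties()
print("tier:", props.blob_tier)                    # e.g. Archive / Cool / Hot
print("rehydrate status:", props.archive_status)   # e.g. rehydrate-pending-to-cool

# if the blob is still archived and no rehydration is pending, start one
if str(props.blob_tier).lower() == "archive" and props.archive_status is None:
    blob.set_standard_blob_tier(StandardBlobTier.Cool,
                                rehydrate_priority=RehydratePriority.Standard)

Checking a few of the chunk blobs this way can at least confirm whether the recall is still waiting on rehydration or whether the data is already online.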
I have a storage policy with 7 days, 1 cycle retention and two copies: Primary and DR. It appears that none of the incremental backups are replicated to DR. Does CommVault somehow treat incremental backups differently from full backups when it comes to replication to the DR storage?

Second question: I just ran a full backup of a small production server, verified that it's only on the primary storage, did right-click on the storage policy > All Tasks > Run Aux Copy > selected Copy: 2-DASH-to-DR > OK, and got "No data needs to be copied". I don't understand how I can have so many backups, both incremental and full, that show as only being on the primary storage. Does anyone have any idea why my Aux Copy doesn't seem to be working?

Thanks in advance for any help. Ken
Hi, a single-partition DDB outgrew its SSD. The simple solution was to add a spare SSD and partition the DDB, so it now has two active partitions. This has resulted in the second, 'empty' partition growing a little, but the original partition has remained almost full (due to some infinite-retention clients). Is there any way to 're-balance' the DDB partitions? I could perform a full reconstruction (from disk, not from a DDB backup), but that's going to take 2-3 days to complete. I wondered whether there is a 'hidden feature' or a workflow that could dynamically balance the space used by the DDB equally across both SSDs. Many thanks
Hello, I'm planning to add a new partition to an existing DDB. I've gone through the documentation and have a doubt about the example below: will the additional 0.5 MB of data be added only to the magnetic disk that holds the DDB, or will it also be added to the disk library mount paths?

"After running the Backup1, you add Partition2 and run Backup2 of the same 1 MB of data. After the second backup, 4 signatures of 128 KB size will be added to Partition2 (even though the same signatures exist in the original store) and for the other 4 signatures only the reference will be added in the original store (as the signatures already exist). The magnetic disk will have 1.5 MB of data (1 MB from the first Backup + 500 KB from Partition2 from Backup2). On running data aging, if Backup1 is aged, then from the first partition the first 4 signatures will be aged and also 500 KB of data will be pruned from the magnetic disk."

https://documentation.commvault.com/2022e/expert/12455_configuring_additional_partitions_for_ded
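To make the numbers in that quoted example concrete, here is a small arithmetic sketch (my reading of the example, not an official statement): the extra 0.5 MB is block data written again to the disk library mount path, because those blocks hash-route to the new, empty partition; the DDB volumes themselves only grow by the much smaller signature records.

# arithmetic from the quoted documentation example
backup_mb = 1.0                                   # size of Backup1 / Backup2
block_kb = 128                                    # dedupe block / signature size
signatures = int(backup_mb * 1024 / block_kb)     # 8 signatures per backup

# Backup1, single partition: 8 signatures in Partition1, 1 MB on the mount path
library_mb = backup_mb

# Backup2 after adding Partition2: roughly half the signatures route to the
# new partition, so their blocks are written to the mount path a second time;
# the other half only add references in Partition1
resent = signatures // 2                          # 4 signatures
library_mb += resent * block_kb / 1024            # +0.5 MB -> 1.5 MB total
print(f"mount path after Backup2: {library_mb} MB")

# when Backup1 later ages, those first 4 now-unreferenced blocks (0.5 MB)
# are pruned from the mount path, bringing it back to 1.0 MB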
Hi, I'm working on an issue with DDB verification on a HyperScale X environment. I talked to someone at Commvault Support who explained that scheduled DDB verifications were removed from the best practices on HyperScale X because they can impact space reclamation jobs as well as performance. However, our customer needs to be able to produce reports to document and prove compliance for their ISO certifications. They were able to import a report on their Private Metrics Reporting server so they can generate reports based on admin jobs. They are noticing, though, that the DDB verifications take a very long time and never actually complete, so all subsequent DDB verifications just queue up. The DDBs in this environment are very large. Does anybody have a suggestion for how we could solve this and actually get these DDB verification jobs to finish in a timely manner? Would scheduling them more often solve the issue? Looking forward to your suggestions. Jeremy
Hello, can I change the Secondary Copy schedule from the web console? I haven't used the web console very much, but since I'm deploying a new CommCell I thought I'd look into it. I do know how to handle it from the Java console. The documentation didn't give me much, as it is a confusing mix of Java console and web console content. Best regards, Henrik
Hi, I have a question regarding magnetic disk library setup and how it will affect restores. The idea is:
- the library has, for example, 4 media agents assigned to it;
- each media agent has its own mount path provided from SAN;
- mount paths are then shared between media agents (exportfs / NFS);
- the mount path local to a media agent has read/write access, and the other media agents that see it as a shared mount path have read access;
- the transport type is Regular;
- mount paths use spill and fill for load balancing.
With this setup a question came up: if backup_1 was written via media_agent_1 (of the 4 media agents in that library) and this media agent is offline, can we still restore data from backup_1 with the other 3 remaining media agents? At this point it looks like a no, because the mount path to which backup_1 was written is offline, so how can the other media agents in the library be used for a restore? Can this work? If not - can you su
Similar to this article, I'd like to show some simple queries to retrieve DDB information from the CSDB.

Important note: do not modify CSDB data or modules - use READ operations only. To keep your activities safe, keep reads uncommitted on every table using one of the following techniques:

use CommServ  -- just for convenience

-- place the following at the top of any query
set transaction isolation level read uncommitted;

-- or place a with(nolock) hint on each table reference
select * from APP_Application with(nolock)

Most of the DDB information is stored in tables whose names start with Idx. The DDB configuration is stored mainly in the following 3 tables - the first holds the DDB (store) information, the latter two the partitions:

select * from IdxSIDBStore
select * from IdxSIDBSubStore
select * from IdxAccessPath

To combine these, including which MA is in use for each partition, something like the following:

select store.SIDBStoreName as 'DDB Name'
      ,apc.name as 'MediaAgent'
      ,ap.Path as 'Partition path'
from Id
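If you would rather pull the same information from a script, here is a minimal read-only sketch using Python and pyodbc; the driver, server/instance name and authentication in the connection string are placeholders for your own CommServe SQL instance, and it applies the same isolation-level precaution described above:

import pyodbc

# placeholder connection string - point it at your own CommServ database
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=yourcommserve\\COMMVAULT;DATABASE=CommServ;Trusted_Connection=yes;"
)
cur = conn.cursor()
cur.execute("set transaction isolation level read uncommitted;")

# list the DDB stores (read-only, same table as in the queries above)
for row in cur.execute("select SIDBStoreName from IdxSIDBStore with(nolock)"):
    print(row.SIDBStoreName)

conn.close()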
Good day all. This one is a bit complicated, so I will try to keep it as brief as I can. The customer has a faulty storage device with the backup data on it. They've received a loan VAST unit, which uses its own deduplication engine in addition to the Commvault dedupe in place. We would like to turn the Commvault deduplication off, as we're having errors on the DDB which may require us to seal it. My concerns are below, and I'm hoping to get some clarity on them. The faulty storage and the VAST storage are in the same data centre (DC1). We want to turn the DDB off in this DC and do a Move Mount Path(s) from the faulty storage to the VAST. At the same time, new backups are running to the VAST device on different mount paths. When this is all complete and the faulty storage is replaced (it won't have its own dedupe capabilities, so Commvault will handle that), we will move the VAST mount paths back to this storage. With the Commvault DDB off, does non-deduplicated data get deduplicated during a 'Move Mount Path
Hello to all, in the case of a local disk library with Commvault DDB enabled: if I create a replication/secondary copy job to an HPE StoreOnce as the target, will these jobs transfer only the changed data blocks, like a DASH copy, or will they physically copy the entire data for every job? The goal is to reduce the data/traffic for the replicated jobs. Thanks in advance for your help 😀 Nikos
Hi everyone. A customer runs several tape libraries in their CommCell domain. One of them is configured with LTO8 tape drives and LTO7 M8 tape media, which is rated at 9 TB per tape. The tape drives were configured simply by running detection, and the library shows the tapes as "OOOOOOL8" (but it's M8 media, 9 TB capacity). I tried several media type configurations - LTO7, LTO8, LTO7M8 - with the same result: it fails to mount the tape in a drive and shows the same error message (illegal request ...). Has anyone experienced this?
Hello, I have a customer who has a NetApp E-Series as their disk library. When MA1, which writes to a LUN, fails, the LUN needs to be mounted on another MediaAgent (MA2) to restore the backups. What is the standard procedure in Commvault for opening the same mount path from a different media agent? From the CommCell GUI one can only create a new mount path. The goal is to be able to restore the data protected to this LUN from a different MA.
Hello all, is there a possibility to move saved data from one library to another? We have two full backups on which we have set retention until next year. The rest of the backups continue with the normal retention of 30 days and 4 cycles. However, I would like to move the backups that need to be retained until next year to our object storage. I have already prepared the connection to the object storage; I just need to know how to move the data. Kind regards, Thomas
Our new SAN for the Commvault disk library is a Dell PowerScale H7000 (OneFS); the library in production is a NetApp 2750. We use Windows 2019 MAs and iSCSI LUNs on the NetApp. The PowerScale H7000 doesn't support iSCSI LUNs, and sealing the DDB or creating a new disk library > global DDB > new primary copy is not the preferred path. Is it possible to add a new mount path to the current disk library on an SMB share on the Dell OneFS, and disable writes on all local iSCSI paths with the "Prevent data block references for new backups" option checked? We could do this for all local disks at once or gradually. Our retention is 90 days on this SP, so could I delete those paths after the jobs age out and continue using the path on the SMB share?
Hello, I'm trying to do some math on the effects of changing our secondary copy to be stored on immutable volumes in Azure. I have a retention of 365 days and currently no sealing of the DDB, but the recommendation seems to be 180 days. According to the formula in another post, that would give 545 days of immutability on the storage volume. Our baseline is 18 TB. Since there is an 18 TB baseline and there will be 3 seals of the DDB, does that mean I will add 3 times 18 TB to the volume, since no data pruning will happen for 545 days? //Henke
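Here is a back-of-the-envelope sketch of that worst case, using only the numbers from the post and assuming (as the question does) that each DDB seal starts a fresh baseline and nothing can prune until the immutability window (retention + seal interval) has passed; treat it as a ceiling, not a forecast:

retention_days = 365
seal_interval_days = 180
baseline_tb = 18.0

immutable_days = retention_days + seal_interval_days      # 545 days
seals_in_window = immutable_days // seal_interval_days    # 3 seals
baselines_on_disk = seals_in_window + 1                   # sealed stores + active store

worst_case_tb = baselines_on_disk * baseline_tb           # 4 * 18 = 72 TB
print(f"{immutable_days} immutable days, up to {baselines_on_disk} baselines, ~{worst_case_tb:.0f} TB")

Actual consumption depends on the change rate and on how much of the old baseline is still referenced when each store seals, so the figure above is an upper bound for sizing rather than an expected usage.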
Hi team, I need help designing a Commvault configuration for a storage infrastructure refresh. We have a customer with an old NetApp setup: two arrays with SnapVault and SnapMirror between them, managed by Commvault with OCUM, and all clients using IntelliSnap. The customer will refresh the storage arrays by moving the SVMs to the new arrays. I am not really sure whether just moving the SVMs will be completely transparent to Commvault, or whether it will be necessary to configure the new arrays and SVMs and create the replicas from scratch again. Thank you.
Hi all, in the Master Drive Pool properties the Drive Allocation Policy is set to Use All Drives. In our case we have 4 drives fully accessible, yet only 2 drives are in use. Is it possible that this setting is overridden somehow and somewhere? Is there a way to figure out why some of the drives are idle? There are a lot of jobs running across multiple storage policies, so it would be good to utilise all drives.
Hello, we are seeing a very large random read load on our Hitachi G350 backup storage arrays with NL-SAS disks. These random reads are completely consuming our backup storage performance. We have two G350s on campus and a third at a remote site, and Commvault runs copy jobs between the three G350s. The DDB is on local NVMe in the MediaAgent, as is the index cache disk. We ran several analyses, and Live Optics showed a daily change rate of 334.9%, which is mainly due to the Windows File System policy, for which we see a 2485.1% daily change rate. Does anyone know how the random read load could be reduced, since our disk backup is otherwise unusable? What steps could we take to optimise the Commvault configuration? (Screenshot attached.) Thanks for your help!