Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 672 Topics
- 3,377 Replies
DDB Verification in Amazon Glacier
Hello all, I have an auxiliary copy with deduplication running to a Glacier library, and I would like to run a DDB verification on it, just to get an estimate of the cost of future verifications. What I would like to know is how to use the Cloud Storage Archive Recall workflow for every job ID referenced in that DDB, since the workflow window makes me choose a single backup job ID. Kind regards, Jmiamaral
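Before running the recall workflow, a rough cost estimate can be sketched with simple arithmetic. All figures below are placeholders (retrieval and request prices vary by region and retrieval tier, and the archived size and object count must come from your own environment):

```python
# Back-of-the-envelope Glacier recall cost estimate.
# All numbers are assumptions -- substitute your own sizes and current AWS pricing.
archived_tb = 5.0                 # assumed total data referenced by the DDB
objects = 50_000                  # assumed number of archived chunks/objects
price_per_gb_retrieval = 0.01     # assumed bulk-retrieval price, USD per GB
price_per_1k_requests = 0.025     # assumed bulk-retrieval request price, USD per 1,000 requests

data_gb = archived_tb * 1024
retrieval_cost = data_gb * price_per_gb_retrieval
request_cost = (objects / 1000) * price_per_1k_requests
total = retrieval_cost + request_cost
print(f"Estimated recall cost: ${total:,.2f}")  # Estimated recall cost: $52.45
```

This only covers the recall itself; data transferred out of AWS during verification would add egress charges on top.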
Backup of AWS and Veritas Cluster Environments
Hello Experts, I am working on a proposal for a large Korean manufacturing company that is migrating its IT infrastructure to AWS.

1. Backup environment
- Backup source: AWS EC2 VMs, Veritas Cluster File System, file data
- Backup storage: AWS S3 object storage

2. Unusual characteristics of previous backup tests
- Tested backup solutions: Veritas NetBackup, DellEMC NetWorker, Veeam
- When backing up the AWS EC2 VMs and the Veritas Cluster File System to S3 storage, AWS EBS storage was used as the cache (staging?) area.
- In particular, the Veeam solution used more EBS storage as a cache (staging?) area than Veritas NetBackup and DellEMC NetWorker.

3. What needs to be confirmed
1) When performing a backup of the above environment with Commvault, is AWS EBS storage used as the cache (staging?) area?
- The conclusion the customer reached after discussion with AWS and Veritas is that all backup solutions will use AWS EBS storage as the cache (staging?) area.
2) If AWS EBS st
Moving mount path hangs and is stuck at 96%
We initiated a Move Mount Path operation in our Commvault environment, but it seems to be stuck somehow. There are no other jobs running on that MediaAgent at the moment. We can also see that all the data appears to have been copied already: Estimated Total Data Size: 1.65 TB and Size of Data Copied: 1.65 TB are the same. It has been like this for a day and some hours.
Preferred restore location: recovering data from an Azure cloud library
Hi there! I have VMware VMs backed up on-premises, with an auxiliary copy to an Azure cloud library. When I try to recover a VM whose data should already have been transferred to Azure, I can see bandwidth increase on the firewall ports, so I believe the restore is using the local copy rather than the Azure one. I'd like to recover the data that is already in Azure (transferred there by the auxiliary copy). Could someone help me with the steps? Thanks!
VTL to VTL migration
I have an old InfiniGuard VTL that is replicated to a new InfiniGuard VTL, and I need to decommission the old one. Is there a way, with CommVault down, to uninstall the medium changer and drives, install the new medium changer, and have CommVault recognize the new VTL as the old one? The replicated VTL has all of the same barcodes.
Is it possible to restrict Commvault to using media from designated slots (not scanning/using all library slots)?
I am running a migration from another vendor to CV. For the time being, is it possible to restrict Commvault to using media from designated slots only (not scanning/using all library slots)? This is to avoid conflicts, as the tape library will hold both CV media and the other vendor's media.
Cloud library type for Scality RING
Hi, I have a question regarding the implementation of a cloud library with Scality RING. We can create two types of mount path: S3 Compatible Storage or Scality Ring. Which one is required? (I have some cloud libraries already created with the S3 Compatible Storage type instead of the Scality Ring type.) Is there a difference between them? Kind regards, Christophe
Encryption for S3 Cloud library
Hello all, I would like to use Amazon KMS for encryption. How do I achieve this? Do I need to register the Amazon KMS in our CommCell and use it in our policies? Per the documentation below, we were asked to add additional keys to enable encryption. How does that work? Can anyone explain? https://documentation.commvault.com/11.24/expert/9263_enabling_server_side_encryption_with_amazon_s3_managed_keys_sse_s3.html What is the difference between following the above documentation and registering Amazon KMS in the CommCell?
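For background on the S3 side of this, the difference between the two approaches maps to two server-side encryption modes in the S3 API: SSE-S3 (Amazon-managed AES-256 keys, what the linked documentation enables) versus SSE-KMS (a customer-managed KMS key that you control). A minimal sketch of the request parameters involved, assuming boto3's `put_object` would be the consumer (the bucket name and key ARN below are placeholders, and this illustrates the S3 API only, not Commvault's own key-management registration):

```python
def s3_put_params(bucket, key, kms_key_arn=None):
    """Build S3 put_object parameters for server-side encryption.

    With a KMS key ARN -> SSE-KMS (customer-managed key, "aws:kms").
    Without one       -> SSE-S3 (Amazon-managed AES-256 keys), which is
    what the linked documentation enables at the storage level.
    """
    params = {"Bucket": bucket, "Key": key}
    if kms_key_arn:
        params["ServerSideEncryption"] = "aws:kms"
        params["SSEKMSKeyId"] = kms_key_arn  # placeholder ARN in the example below
    else:
        params["ServerSideEncryption"] = "AES256"
    return params

# These dicts could be passed to boto3 as s3.put_object(**params):
print(s3_put_params("backup-bucket", "chunk/0001"))
print(s3_put_params("backup-bucket", "chunk/0001",
                    "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"))
```

With SSE-KMS you additionally control key rotation and access policy in KMS, which is usually the reason to register a KMS key rather than rely on SSE-S3.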
Hi, we are in the middle of a CV deployment. Initially we built a single MA with a 4-partition DDB in Azure. As the data grew, we moved two partitions to a new MediaAgent, so it now runs on two MAs with two DDB disks each. Now both MAs have reached their bottleneck and we are planning to scale further, but management only allowed me to add one more MA. So one possibility is running the backups with the four-partition DDB split across three MAs, as shown below:
MA1 - one DDB disk
MA2 - one DDB disk
MA3 - two DDB disks
I am a bit worried about doing that, as I think it may cause some instability between the MAs, but I couldn't find relevant CV documents. Can you suggest whether the above design makes sense, and will it cause any issues in the future? Thanks, Mani
DiskLibs: SMB or iSCSI attached volumes for the mount path?
Hi all, are there any advantages, other than the security benefit of a reduced attack surface, to using iSCSI-attached volumes from a NAS directly on the Windows MediaAgent rather than writing via SMB to a NAS share? What about performance, ransomware protection, etc.? Your thoughts are highly appreciated. Thanks in advance. Regards, Michael
DDB for Metallic storage
We are setting up an aux copy to MCSS cold storage for one of my customers and have used the on-premises DDB to send a DASH copy to cloud storage. I am confused here: suppose our on-premises site/server goes down. Will I be able to recover my data from the cloud after rebuilding the CV server, given that both DDBs were on local storage only? Or does it copy the DDB to MCSS as well?
Can't change storage policy
Because we are changing the backup storage infrastructure, I only want to change the storage policy. Our former strategy was to have a spool copy and two aux copies, one to NAS and one to LTO. This was done because we had a performance/backup-time issue, which we solved with SSDs directly attached to the backup server. Now I want to set a retention on the primary copy and delete the no-longer-needed aux copy to NAS, which points to the same storage (the local SSDs) and wastes the needed space there. If I try to change the retention on the primary copy of the storage policy, I get the error shown in the screenshot and the changes are discarded. I don't know where to find these settings (archiver retention rule and "OnePass"); I have never set an archiver retention rule, and we are not using archiving.
Is Ceph supported as the mount path of a dedup disk library?
My customer has configured a deduplicated disk library. The shared mount path is a directory in a filesystem backed by Ceph. One night he found a lot of jobs waiting: the library was open for jobs but did not write any data to disk. There was an unknown error at the Ceph layer, and a reboot of the nodes solved the issue. The point is that the library stayed online and did not go offline during the Ceph error. The customer reached the maximum job limit, and new, partly important, backup jobs could not start. Now we are analyzing what happened. My question: is Ceph supported as the target of a disk library? In BOL I found entries about Kubernetes, S3 connections, etc., but nothing about using it as a mount path in a filesystem. I know that Ceph supports block, file, and object access. Thanks in advance for your answers. Joerg
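When diagnosing this kind of hang, one quick sanity check is to confirm what filesystem actually backs the mount path and whether it still responds. A minimal sketch on Linux (the mount path below is a placeholder; `/proc/mounts` and `os.statvfs` are standard, not Commvault-specific):

```python
import os

# Placeholder: substitute the library's actual mount path.
mount_path = "/"

# os.statvfs forces a metadata round-trip to the filesystem; if the
# Ceph layer is wedged, this call may hang or fail even though the
# library still shows as "online".
st = os.statvfs(mount_path)
free_gb = st.f_bavail * st.f_frsize / 1024 ** 3
print(f"{mount_path}: {free_gb:.1f} GiB free")

# On Linux, /proc/mounts reveals the filesystem type behind the path
# (e.g. "ceph" for a CephFS mount).
try:
    with open("/proc/mounts") as f:
        for line in f:
            dev, mnt, fstype, *_ = line.split()
            if mnt == mount_path:
                print(f"{mount_path} is backed by: {fstype}")
except FileNotFoundError:
    pass  # /proc/mounts is Linux-specific
```

A periodic probe like this, alerting when the call stalls, can catch a "library online but not writing" condition before the job queue fills up.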
Weird System Created DDB Space Reclamation schedule policy
Hello, let me ask your opinion about the following situation. When I check the details of the System Created DDB Space Reclamation schedule policy, it looks "corrupted". As you can see in the attached image, the summary screen shows the type as "Data Verification", while the dialog shows the type as "Data Protection". Moreover, the Associations tab shows a list of clients instead of the DDB list. Is this normal? How can I get rid of it? Thank you in advance, Gaetano
Add a Fujitsu LT20 library to the CommServe
@Mike Struening Hello guys, when adding a new (first) physical library to a CommCell, what is the best procedure and sequence? The customer needs to send the historical data to a physical library (Fujitsu LT20) and generate new full backups, and needs to send 3 full backups before erasing the data.
HQ-VM-CommServ 32:162 Replacing the active media for job  from Mount Path [[dr_media_svr2] V:\DRMA02AR_LUN06] to [[dr_media_svr2] U:\DRMA02AR_LUN05].
HQ-VM-CommServ - 32:162 - Replacing the active media for job  from Mount Path [[dr_media_svr2] V:\DRMA02AR_LUN06] to [[dr_media_svr2] U:\DRMA02AR_LUN05]. What does this message mean? It is also making the job run very slowly, with a current throughput of only 0.3.
Cloud Workloads Backup Strategy
Hi Community, I want to know what strategy we can take for data protection of cloud workloads using Commvault. Do we need to deploy the CS in the cloud, or can we use an on-prem CS for backup of both cloud and on-prem workloads? If so, how? Please share a sample reference architecture diagram for backup of cloud workloads, if one exists. What type of backup library should be used for cloud workload backups?