Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
DDB - Cluster Windows - Doubts
Hello, I have a doubt about setting up DDB creation in a Windows cluster environment: 2 physical servers with the MediaAgent installed and virtual storage (StarWind). 1) Can the DDB be placed on the storage shared by the cluster nodes? 2) Can I use a single DDB for all nodes (MAs), or should I have a DDB for each MA?
Limiting the data ports used to send data between Media Agents
Hi. We have a complicated setup where we are using a topology group to send data between MediaAgents through a firewall and proxy. Once the data hits the firewall, all of it is forced into 2 tunnel ports. In addition to this control, we would like to reduce the number of source ports being used so that these can be monitored for backup flow. Currently, using the dynamic port range 49152-65535 does not allow us to do this. Is it as simple as forcing all data traffic into the tunnel (port 8403 by default), and if so, will this create a bottleneck? Thanks, Andy
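One non-authoritative angle on the source-port question: the 49152-65535 range is the Windows default dynamic (ephemeral) port range, and the OS-level range can be narrowed with `netsh`. Whether Commvault honours a narrowed range for its data connections should be confirmed with support; the helper below (with hypothetical start/count values) just builds the command for a reduced, monitorable window.

```python
# Sketch: build a netsh command that narrows the Windows dynamic (ephemeral)
# port range, so outbound data connections draw from a small, monitorable
# window of source ports. The 50000/500 values are hypothetical examples;
# confirm Commvault's behaviour with a narrowed range before relying on it.

def netsh_dynamicport_cmd(start: int, count: int, proto: str = "tcp") -> str:
    """Return the netsh command to set the IPv4 dynamic port range."""
    if not (1025 <= start and start + count - 1 <= 65535):
        raise ValueError("range must fit within 1025-65535")
    return f"netsh int ipv4 set dynamicport {proto} start={start} num={count}"

print(netsh_dynamicport_cmd(50000, 500))
# netsh int ipv4 set dynamicport tcp start=50000 num=500
```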
LTO7 M8 tape media instead of LTO8 cartridge
Hi everyone. A customer runs several tape libraries in their CommCell domain. One of them is configured with LTO8 tape drives and LTO7 M8 tape media, which is rated for 9 TB per tape. The tape drives were configured simply by running detection, and the tapes in the library showed up as "OOOOOOL8" (but it's M8 media, 9 TB capacity). I tried several media type configurations - LTO7, LTO8, LTO7 M8 - but got the same result: it failed to mount the tape in a drive and showed the same error message (illegal request ...). Has anyone experienced this?
Possible to increase the Aux Copy anomaly threshold?
I do full backups of production servers on Friday night and non-production servers on Saturday night. Every Monday my inbox has roughly 20 anomaly emails about aux copy jobs running longer than usual. This has been going on for months now. Is it possible to tweak the anomaly thresholds in Commvault so I get fewer emails? Ken
Dedup Data Written expands with new DD-engine
Hi, we are setting up a new Windows MA with a new DDB engine and a new disk library. The DDB disk has been configured with the recommended 32 KB block size and the disk library volumes with a 64 KB block size. We have run a new DASH copy in the storage policy to create a new dedupe baseline from the previously backed-up data, and the plan is to make this DASH copy the new primary once all data is available - a standard re-baseline operation, which usually gets a better dedupe ratio with an updated DDB engine. The backup data is mostly Hyper-V VMs. In this case, Data Written on the old copy was 9 TB, but the copy stopped at 18 TB on the new MA when it ran out of resources (disk library full). This was unexpected. Now we are considering a Move DDB and Mount Path operation instead, but if there is a general deduplication degradation we are not sure this will work, or whether we will run into the same issues again. Does anyone have similar experience or knowledge of this issue, and a recommendation on how to move forward? I have done both new baseline
run auxiliary copy when full backup is done
Hi guys, I have a storage policy SP_A that runs incremental backups daily Monday-Saturday and a full on Sunday. SP_A has a secondary copy, scheduled for Monday at 23:00, that makes a selective copy of the weekly full backup. I wonder if I can get this behaviour: the same auxiliary copy of the secondary copy, taking the last full backup of the week, but running just after the primary copy finishes. Is this possible?
question about setting up exchange db client to backup all passive DBs.
We have four Exchange servers. Each DB has a passive and an active copy, with the two on different servers. We installed the MediaAgent and a backup disk on the Exchange DB server. Usually I know which DBs are passive on each server, so I set up a subclient on each one so the backup will run from the passive disk directly over the SAN to the backup disk without using the network. However, sometimes the Exchange admins do patching late at night, the DBs fail over, and they don't fail them back until later. When this happens, the DB backups are much slower. I want to avoid this by configuring each subclient to only back up the passive DBs that are on the same server as the MediaAgent/backup disk. My goal is to never run backups over the network, even when DBs are failed over to other servers. Is this possible?
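The selection rule being asked for can be sketched as plain logic (the server and database names below are hypothetical, and the real mechanism would be Commvault's subclient content or auto-discovery rules, not this code): given which server holds each DB's active copy, keep only the DBs whose passive copy sits on the local MA.

```python
# Sketch: choose only the databases whose PASSIVE copy is local to this
# server. Assumes one active and one passive copy per DB (hypothetical
# topology); a DB qualifies when this server holds a copy of it but is
# NOT the server holding the active copy.

def passive_dbs_on(server: str, active_on: dict, copy_holders: dict) -> list:
    """Return DBs whose passive copy lives on `server`.

    active_on:    db -> server holding the active copy
    copy_holders: db -> set of servers holding any copy of the db
    """
    return sorted(
        db for db, holders in copy_holders.items()
        if server in holders and active_on[db] != server
    )

copies = {"DB1": {"EX1", "EX2"}, "DB2": {"EX1", "EX2"}, "DB3": {"EX3", "EX4"}}
active = {"DB1": "EX1", "DB2": "EX2", "DB3": "EX3"}
print(passive_dbs_on("EX1", active, copies))  # ['DB2']
```

The catch the post already identifies remains: after an unplanned failover, the set of local passive DBs changes, so the subclient content would need to be recomputed at backup time rather than fixed.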
Add Azure Storage Account Container as an Independent Library
I have a question. We have an Azure storage account container, "backup", added as a cloud storage library in Commvault; this storage account and container are provisioned in Commvault as the CL_Backup library. Can I create an additional container in the same storage account, name it "newbackups", and add it as a separate library in Storage Resources, called CL_SecondaryBackups for example? I'm trying to leverage cost savings in Azure by using the same storage account with multiple containers.
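Before wiring a second container in, it's worth checking the proposed name against Azure's blob container naming rules (3-63 characters, lowercase letters, digits, and hyphens only, starting and ending with a letter or digit, no consecutive hyphens). A small validator sketch - purely illustrative, not a Commvault or Azure API - using the name from the post:

```python
import re

# Sketch: validate a proposed Azure blob container name against Azure's
# published naming rules (3-63 chars, lowercase letters/digits/hyphens,
# must start and end with a letter or digit, no consecutive hyphens).
# This only checks the name; the container itself would be created in the
# Azure portal/CLI before being added as a new library in Commvault.

_CONTAINER_RE = re.compile(r"^[a-z0-9](?:[a-z0-9]|-(?=[a-z0-9])){2,62}$")

def valid_container_name(name: str) -> bool:
    return bool(_CONTAINER_RE.match(name))

print(valid_container_name("newbackups"))   # True
print(valid_container_name("NewBackups"))   # False (uppercase not allowed)
```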
Reservation Status: No new readers can be allocated
Hello, in the log of an aux copy job (primary copy on disk storage to secondary copy on a tape library), there is this message: "Reservation Status: No new readers can be allocated, check for additional streams after  seconds, pending streams ". Can you explain what this means and possibly how to avoid it? Can this state cause slow backup performance?
Windows File System Backup & Archive Misconfiguration / Misunderstanding
When Commvault introduced the defaultArchiveSet alongside the defaultBackupSet, I think I misunderstood how it was supposed to be used, and this resulted in me misconfiguring that subclient. We have a file server with a number of drives that I back up, but we also wanted to archive files within that backup location. So we set up separate subclients, one under defaultBackupSet and one under defaultArchiveSet. This has caused some of the same data to be backed up twice. What I am thinking is that we should have just configured archiving on the defaultBackupSet subclient and left the defaultArchiveSet alone?
Does convert to DDBv5 also enable Horizontal Scaling?
Hi all, sorry for yet another question - but I love the quick feedback I get on this platform! :-) I have a customer running DDBv4 who wants to leverage DDBv5 with horizontal scaling. I know that we have the ConvertDDBToV5 workflow, or alternatively we could convert the DDB manually. But would that also enable horizontal scaling automatically, or is there a separate procedure required to get horizontal scaling? Thanks in advance for your reply!
Best Way to Confirm Combined Storage Tiers Functioning
What’s the easiest way to confirm a Cool/Archive cloud library is working? I have migrated data from a Hot storage blob container: I provisioned an additional storage container (Hot tier) in Azure and provisioned it as Cool/Archive, as recommended by Commvault. I attempted a restore of a file from this Archive library, and the restore just completed as normal, without the use of a workflow, so I’m worried the data isn’t in the Archive tier as expected. Any ideas how to confirm the data is actually in Archive and the metadata is in Cool? And why was the restore just standard?
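One way to reason about this, hedged: a blob genuinely in the Archive tier is offline and would normally need rehydration before a read succeeds, so a standard restore completing instantly suggests the blocks were still Hot or Cool. The blob's current tier can be inspected from the Azure side (e.g. `az storage blob show ... --query properties.blobTier`); the helper below just encodes the decision, with tier strings following Azure's naming.

```python
# Sketch: decide whether a blob's reported access tier implies the data
# must be rehydrated before a normal read/restore can succeed.
# Tier names follow Azure ("Hot", "Cool", "Archive"); the actual tier would
# be read from Azure, e.g. via:
#   az storage blob show ... --query properties.blobTier

def needs_rehydration(blob_tier: str) -> bool:
    """Archive-tier blobs are offline and must be rehydrated before reads."""
    return blob_tier.strip().lower() == "archive"

for tier in ("Hot", "Cool", "Archive"):
    state = "rehydrate first" if needs_rehydration(tier) else "readable"
    print(f"{tier}: {state}")
```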
DR of scale up Media agent
Hello, I have a customer with a NetApp E-Series as their disk library. If MA1, which writes to a LUN, fails, the LUN needs to be mounted on another MediaAgent (MA2) to restore the backups. Is there a standard procedure in Commvault for opening the same mount path from a different MediaAgent? From the CommCell GUI one can only create a new mount path. The goal is to be able to restore the data protected on this LUN from a different MA.
Import backup from Disk Library
Hi. Background: our backups are stored on a Data Domain, and the Data Domain replicates itself to a remote site. Additionally, we've got daily snapshots on the Data Domain itself to prevent backup data deletion (these snapshots can only be deleted using the sysadmin account). Because we use Data Domain, we store our backups without compression or deduplication - but my question applies equally to the following situation: let's assume I've got a MediaAgent with a data disk (where the disk library lives) attached to it. The building catches fire and I am only able to unplug the data drive; I lose everything except the data drive. If I install a new CommCell, can I then import the backups from that data drive somehow?
Tape Library Options
Hello everyone, hoping to get some recommendations and possibly some help. We are currently shopping to replace our HP LTO6 SAS tape libraries and are looking at the Quantum i3 to support LTO8. First, let me explain our setup: 2 MediaAgents and 4 tape libraries (HP LTO6) - 2 of them directly connected via 6Gb SAS to one MediaAgent, and the other 2 drives directly connected via 6Gb SAS to the other MediaAgent. We are looking at the i3 but would want both MediaAgents to be able to utilize the same drives in the unit. As we currently have it set up, 2 drives belong to one MediaAgent and the other 2 to the other. So some quick questions: Any recommendations on other brands we should look at? Does anyone recommend the Quantum tape drives? What would this look like as a SAS-to-MediaAgent setup? Right now, obviously, one MediaAgent can't use the other's tape drives, and we don't want to be in that position in the new setup. If we have 4 drives we want all of our medi
what type of Cloud library for Scality ?
Hi, I have a question about the creation of a cloud library. We have a Scality RING as the backend and selected "S3 Compatible Storage" to create the library. In the documentation I found the following: "For another vendor that supports Amazon S3, such as Scality, you must select Amazon S3 from Type, and then, under Access Information, enter the credentials of that vendor." I have a doubt about selecting Amazon S3 as the cloud library type instead of S3 Compatible Storage, because when I tried to create a test library with the Amazon S3 type, the library creation request did not work. Please advise. Kind regards,
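A general point that may be behind the failure, offered as an assumption rather than a Commvault-specific answer: non-AWS S3-compatible backends such as Scality need an explicit endpoint, and many of them expect path-style request URLs rather than AWS's virtual-hosted style. The sketch below (hypothetical host and bucket names) shows the two addressing styles a library type may generate:

```python
# Sketch: the two S3 addressing styles an S3-compatible backend may require.
# Many non-AWS endpoints (Scality RING included, depending on configuration)
# expect PATH-style URLs; AWS defaults to virtual-hosted style. The host and
# bucket names below are hypothetical examples.

def s3_url(host: str, bucket: str, path_style: bool = True) -> str:
    """Build the base request URL for a bucket on an S3 endpoint."""
    if path_style:
        return f"https://{host}/{bucket}"       # e.g. what non-AWS rings expect
    return f"https://{bucket}.{host}"           # AWS virtual-hosted style

print(s3_url("s3.ring.example.com", "cvlib"))         # path-style
print(s3_url("s3.ring.example.com", "cvlib", False))  # virtual-hosted
```

If the "Amazon S3" type hard-codes AWS endpoints or virtual-hosted addressing, that would explain why the test library failed against a Scality endpoint; the "S3 Compatible Storage" type exists precisely to let you supply the vendor endpoint.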
Deduplication on Independent Copies for Azure Cloud Backups
Hello guys. I’m looking for some advice/tips on how best to configure additional selective copies in a storage policy and ensure they are deduplicated, to avoid rewriting the same blocks on cloud storage. The primary copy is deduplicated and goes to Library 1. I want the weekly and monthly copies to go to Library 2 and Library 3 respectively, with each copy disabled. I noticed I can’t use the Global Deduplication Policy used by the primary copy on the additional copies. Does anyone have thoughts on how to tackle this? I’m not a fan of using extended retention on the primary copy and keeping weekly and monthly retention on one medium/point of failure.
Tape usage - Size of Stored Data
Hi, until recently, ~1 TB of data was stored on each of our LTO4 tapes. I recently changed two things: I created a Global Secondary Copy Policy, and I enabled software encryption for the (secondary) backups to tape (Re-encrypt, Blowfish, key length 128, No Access). Now, only ~750 GB of data is stored on each tape before it is marked full - a decrease of 25%. Is either of these two changes a known, proven, and expected cause of this decreased tape usage? Thanks!
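A plausible explanation, hedged rather than definitive: software-encrypted data is effectively random, so the tape drive's hardware compression can no longer shrink it, and per-cartridge capacity falls back toward native. The arithmetic from the numbers in the post:

```python
# Sketch: the capacity drop from the post as simple arithmetic. Encrypted
# data is high-entropy, so the LTO drive's hardware compression gains
# nothing on it; the observed ~25% loss is consistent with losing the
# compression the drive was previously achieving on this data set.

before_gb = 1000   # ~1 TB stored per tape before the changes
after_gb = 750     # ~750 GB stored per tape afterwards

loss = 1 - after_gb / before_gb             # fraction of capacity lost
implied_compression = before_gb / after_gb  # ratio the drive had achieved

print(f"capacity loss: {loss:.0%}")                               # 25%
print(f"implied prior compression ratio: {implied_compression:.2f}:1")
```

If this is the cause, the fix is a trade-off decision (encrypt and accept native capacity, or compress in software before encrypting), not a misconfiguration.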
System Created DDB Space Reclamation schedule policy
Hello, after upgrading from V11 FR20 to V11 FR24, I noticed a new schedule policy named "System Created DDB Space Reclamation schedule policy", which was disabled by default. I basically know what the space reclamation functionality is about, and the policy has all our deduplication engines assigned. But when I ran this policy, it finished in less than a quarter of an hour, and according to the logs only one dedupe engine was processed. Another manual start just gives the error message below. Can anybody explain to me what this schedule policy is about and how it is supposed to work?
On Prem Aux copy to S3?
We are looking into adding an additional layer of offsite backup storage: Amazon S3. The current idea would be to add a third copy to an existing storage policy; this copy would point at the AWS S3 library. Can I aux copy on-premises backup data directly to S3 using physical on-premises MediaAgents? Any suggestions would be appreciated. Thank you.