Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
On Prem Aux copy to S3?
We are looking into adding an additional layer of offsite backup storage: Amazon S3. The current idea is to add a 3rd copy to an existing storage policy, with this copy pointing to the AWS S3 library. Can I aux copy on-premise backup data directly to S3 using the physical on-premise media agents? Any suggestions would be appreciated. Thank you
Cloud Libraries and AWS Combined Storage Tiers
Hey guys, I’m currently using S3 IA for my cloud libraries (dedupe used) and looking to reduce costs. The combined storage tiers look promising, in particular Intelligent Tiering/Glacier. Has anyone got any experience using this, and can you offer some insight into its suitability? Cheers, Steve
Determining why fewer tapes are being used than configured with "combine source streams"
Commvault is showing 75 TB of data to be written to tape. We have set the tape copy’s “combine source data streams” to 3 (so it will use 3 tapes) and multiplexing is set to 5. Additionally, we have the data path configuration set to use alternate data paths when “resources are busy”, and (in the policy) checked “enable stream randomization...” and “distribute data evenly among multiple streams...”. When the job started, it chose to write to only 2 tapes, and it also chose an alternate media agent (not sure why; the default media agent has 2 available tape drives and does not appear to be busy). Looking at the job, it only used 7 readers, but there is a single stream/entry sitting in “media not copied”. It does not seem to have determined that it needed 3 streams, yet it has a single stream waiting, and it is not using 10 readers (only 7), so the reader counts/multiplexing do not seem to be honored while that one stream waits in “media not copied”. Why didn’t it break up the streams?
Creating a Storage Policy Copy with Deduplication vs Creating a Deduplication Enabled Storage Policy Copy
Hi guys, this might seem stupid but I’m a bit confused by these two documents on the Commvault website that talk about deduplicating policy copies. If I’m reading the articles below correctly, the difference between Creating a Storage Policy Copy with Deduplication https://documentation.commvault.com/commvault/v11/article?p=12446.htm and Creating a Deduplication Enabled Policy Copy https://documentation.commvault.com/commvault/v11/article?p=14132.htm is that the former is created using a Storage Pool (a dedup engine already exists), whilst for the latter the deduplication location is not an existing dedup engine (storage pool) but just a local folder on the media agent. If the latter is correct and I want to use it to deduplicate additional independent copies, e.g. Weekly Fulls and Monthly Fulls on independent libraries, against the Primary Copy data, is there a downside to it? Need some assistance on this.
Data verification job for backed up data
Hi all, in our environment data verification is disabled. Recently the storage team (ExaGrid backup solution) informed us that some files are corrupted. How can we now work out which jobs’ files got corrupted? If we run a data verification job on all jobs, will we be able to find them? Alternatively, if the storage team provides the name of the corrupted chunk file, will we be able to identify the affected job from that file?
Pruning DDB performance issues
DDB lookup times have spiked; the SSDs are old SATA drives and seem to be very slow under heavy load. Is pruning only relevant to data that is deduplicated? If Oracle or SQL database transaction logs are written to a non-deduplicated storage policy, is pruning relevant to that data at all? How much is the DDB used during pruning? Is it a very DDB-intensive process?
Where to configure the “preferred setting” in the “Select mount path for MediaAgent according to the preferred setting” library property
I’d like to ask where to configure the “preferred setting” mentioned in the “Select mount path for MediaAgent according to the preferred setting” option of the library properties. Thank you in advance. Best regards
Synchronize All DDBs grayed out
Hi guys, I hope everybody is fine! I ran a health report on my CommServe, and in the “readiness” column of the “DDB Performance and Status” section it shows: Needs resync. When I try to resynchronize the DDBs from Storage Resources → Deduplication Engines, the “Synchronize All DDBs” option is grayed out. The DDB status shows active. Does this mean that as long as it is online I can’t run the synchronization, or have I missed something? Thanks!
DDB Backups: Is the media agent that has a DDB partition associated with it supposed to back that partition up (and not another media agent)?
We have several DDBs, all partitioned across several media agents. When the DDB backups run, I see most of the media agents backing up “themselves”, meaning the client and the media agent are the same for the DDB backup. For one of them, however, I cannot get the DDB backup to choose the primary/default media agent (in the copy → data paths settings); it always chooses the alternate data path for both DDB backup partitions. I have *not* yet enabled the “use preferred data path” setting (where it should only use the primary media agent and no alternates), because I feel it should choose the primary on its own and automatically choose the secondary media agent for the other partition if needed. Also, I want the DDB backups to be split over 2 media agents, because one media agent is very overpowered (lots of CPU/memory) relative to the other (older, with fewer CPUs). The media agent the DDB backups keep choosing is this underpowered media agent.
Magnetic Library Defragmentation
I have read a couple of articles on Commvault Online that say defragmentation of magnetic libraries is a good idea. Diskeeper, now DymaxIO, was listed as a certified product for online volumes. I am wondering if others defrag their libraries for performance purposes, and what products they use. I have read in older articles that the native Windows defragmentation tool can be used, and that it should be done outside of backup hours (makes sense). Any feedback or information would be appreciated. Thanks
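For reference, the built-in Windows defrag command line can be run or scheduled outside the backup window. A minimal sketch, assuming the mount paths live on drive E: and that 06:00 falls after backups finish (the drive letter, task name and schedule are assumptions, not from the original post):

```
:: Analyze fragmentation on the mount-path volume first
defrag E: /A /V

:: Run a full defragmentation with progress output during the backup blackout window
defrag E: /U /V

:: Optionally register a daily scheduled task so it always runs outside backup hours
schtasks /Create /TN "DefragDiskLibrary" /TR "defrag.exe E: /U" /SC DAILY /ST 06:00 /RU SYSTEM
```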
One or more active DDB partitions for the storage policy copy are not available for use.
Problem with DDB: One or more active DDB partitions for the storage policy copy are not available for use. Could anyone direct me to a fix for this problem? Log excerpt:
2736 664 02/18 09:26:07 ### RecvAnyMsg: Unexpected message received. Waiting [4F000019], I have , Group
2736 664 02/18 09:26:07 ### SendAndRecvMsg: RecvMsg returned failure. iRet [-1]
2736 664 02/18 09:26:07 ### PruneRecords: SendAndRecvMsg failed. iRet [-1]
2736 664 02/18 09:26:07 ### 5-3 PruneZRRecInt:2551 Failed to purge primary SIDB records. Error [The network module failed to send/receive data.]
2736 664 02/18 09:26:07 ### 5-3 PruneZRRec:2293 Finishing zero reference record pruning. Attempts , iRet
2736 664 02/18 09:26:07 ### 5-3 DedupPrnPhase3:5247 Unable to remove unreferenced primary records from SIDB. Error
2736 664 02/18 09:26:07 ### stat-ID [Avg GetDirContents], Samples , Time [0.095575] Sec(s), Average [0.001991] Sec/Sample
2736 664 02/18 09:26:07 ### stat-ID [Avg CanPruneVolume], Samples
MA hardware refresh and library mount paths move
Hi all, I’m looking for some steer with regards to moving a disk library and its mount paths between MediaAgents. I may be over-thinking this, but I’m just looking for clarification. My client has a MediaAgent which is to be decommissioned. The mount paths are volumes presented from the SAN which have now also been presented to the new MediaAgent (currently offline in Disk Management on the new MA, awaiting action). Is this just as simple as following the Migrate Shared Disk Libraries option under Disk Libraries - Advanced to move the mount path configuration to the new MediaAgent, or are there any gotchas to be aware of? Normally I’d just go through a mount path move process, but I can’t in this case. Thanks in advance.
Dedupe & gzip compression. Does the --rsyncable option help?
Has anybody used gzip with --rsyncable to increase dedupe efficiency, and does it actually help? People still like to do app/database dumps and pick them up with file backups; it’s all good until they compress the dumps and you convert the pickup backup from tape to disk dedupe. Since dedupe doesn’t like compressed files as a source, there are rumors that the --rsyncable option will help here. This option makes the output rsync-friendly (the compressor periodically resets its state, so a small change in the input only changes a small region of the output) and only increases the compressed size by about 1% compared to a regular gzip file.
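A quick way to gauge whether --rsyncable might help dedupe is to compare how much of a compressed dump actually changes after a small edit to the source. A minimal sketch, assuming a GNU gzip build that supports --rsyncable and a hypothetical dump file named dump.sql (both assumptions, not from the original post):

```
# Compress the same dump with and without --rsyncable
gzip -c dump.sql > dump.standard.gz
gzip -c --rsyncable dump.sql > dump.rsyncable.gz

# Make a small change to the source and compress again
printf 'INSERT INTO audit VALUES (1);\n' >> dump.sql
gzip -c dump.sql > dump2.standard.gz
gzip -c --rsyncable dump.sql > dump2.rsyncable.gz

# Count differing bytes between the two generations of each file; the --rsyncable
# pair should share far larger identical regions, which is roughly what
# block-level dedupe can take advantage of.
cmp -l dump.standard.gz dump2.standard.gz | wc -l
cmp -l dump.rsyncable.gz dump2.rsyncable.gz | wc -l
```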
Failed to mount media drive
Hello team, I am getting the error below on the majority of running jobs. I have checked the storage end and also the media agents (the LUNs are attached to the media agent) and everything looks good. What could be the cause of the error?
Failed to mount the disk media in library [ARCHIVE_DISKPROD] with mount path [B:\Archive_DiskLibrary\MP10] on MediaAgent [hq_media_svr3]. Operation could not be completed in timeout interval. Please check the following:
1. Library and drive is functioning correctly.
2. Library and Drive management services are running.
3. All other MediaAgent services are running.
4. The time out period on the Expert Storage Configuration Properties Window in the CommCell Console.
5. Cleaning media in Assigned Media Group.
Source: hq-vm-commserv, Process: MediaManager
Changing iRMC (HyperScale Appliance HS1300 & HS3300) Password using IPMITool
The following procedure allows you to safely update the Fujitsu appliance iRMC password without impacting operations such as RHEV-M hardware failure alerting. Important note: for the HyperScale appliance, Commvault leverages the IPMI protocol by design to monitor the physical hardware and reports back to Command Center if there is a fault. IPMI (Intelligent Platform Management Interface) is a set of computer interface specifications for an autonomous computer subsystem that provides management and monitoring capabilities independently of the host system’s CPU, firmware and operating system. The procedure applies to the following use cases: updating the iRMC password for security purposes, and resetting the iRMC password if it is forgotten or lost. IPMITool is installed at the Guest OS level (Red Hat OS). Updating the iRMC password: first, establish an SSH session onto the Guest OS (HyperScale Red Hat 7.#), then input the following command: # ipmitool user set password
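The post is cut off at the command itself. As a point of reference, a minimal sketch of how the ipmitool user password commands are typically invoked; the user ID of 2, the example password, the iRMC address and the admin username are assumptions, not values from the original procedure:

```
# List the users defined on the BMC/iRMC (channel 1 is typical) to find the target user ID
ipmitool user list 1

# Set a new password for the user ID found above (here assumed to be 2)
ipmitool user set password 2 'NewStrongPassword'

# Verify that the new credentials work, e.g. by querying chassis status over the LAN interface
ipmitool -I lanplus -H <irmc-ip-address> -U admin -P 'NewStrongPassword' chassis status
```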
How to manually run missing aux copies
Because of a bug, my cloud storage hit 100% full and not all year-end backups were replicated to the cloud. I’ve worked with CommVault support and now have 50% free space. Even though the Aux Copy job has run several times, there are still year-end backups that haven’t replicated to cloud storage. How do I get CommVault to copy the missing backups? Ken
Aux copy behaviour - scalable resource allocation
Hello all, I have a question about my aux copy behaviour. We use S3-type storage on site for our backups and aux copy these to a private cloud provider. I’ve noticed that certain clients’ jobs are consistently skipped in the aux copies. Initially I thought the issue was bandwidth, as the aux copy never completed. We have since improved the bandwidth and throughput is much better, however the aux copies are still not completing. If I go to the secondary copy and show jobs (unticking the time range and excluding “available”), I will have a number of jobs going back weeks, with none of the jobs from that client showing as partially available (which would imply it is part way through copying). Interestingly, today I started the aux copy with “Use scalable resource allocation” UNTICKED, and those old jobs were immediately picked up and started copying. Does anyone have any ideas why this would be? I’m curious what impact this will have on my environment. I just don’t get why most jobs were copying and it was
AuxCopy between 2 HPE StoreOnce Catalyst
We have 2 data centers, each with an HPE StoreOnce Catalyst appliance configured to a media agent in that data center:
DC1 - dc1mediaagent ↔ DC1 HPE Store
DC2 - dc2mediaagent ↔ DC2 HPE Store
Now we want to get an AuxCopy going from the DC1 HPE Store to the DC2 HPE Store, and another from the DC2 HPE Store to the DC1 HPE Store. We have tried configuring the libraries and sharing them to the other media agent, but the AuxCopy constantly fails with chunk errors. We had no issues using Synology arrays between DC1 and DC2. Any ideas on best practice with HPE StoreOnce appliances? Thanks, Larry
Storage Pool - best practices or no logic?
Hello, following the Commvault SE recommendations we created a storage pool of 4 media agents with DAS storage. Initially all MAs could read and write data to each mount path, and I noticed that the “LAN-free” logic does not work: each MA tries to access each mount path even when a “closer” or faster path is available. Despite our network being 10Gb, data transfer between MAs is slow, very slow. I have now allowed only reads for any MA on any path, and it works better, but still not perfectly. The most important issue is that aux copy to tape is very slow. Each policy allows each media agent access to any tape, so my idea was “OK, let’s stop access via IP and only the MA that owns the DAS will read/write data”. With that, backups are fast, but aux copy is failing, because MAs can’t access data on the other MAs. So I am stuck with no ideas, except to stop using the Storage Pool and go back to using each media agent standalone. Any ideas how to:
- keep a storage pool
- force each MA to use their DAS
Does convert to DDBv5 also enable Horizontal Scaling?
Hi all, sorry for yet another question, but I love the quick feedback I get on this platform! :-) I have a customer running DDBv4 who wants to leverage DDBv5 with horizontal scaling. I know that we have the ConvertDDBToV5 workflow, or alternatively we could convert the DDB manually. But would that also enable horizontal scaling automatically? Or is there a separate procedure required to get horizontal scaling? Thanks in advance for your reply!
Tape usage - Size of Stored Data
Hi, until recently ~1 TB of data was stored on each of our LTO4 tapes. I recently changed 2 things:
- I created a Global Secondary Copy Policy
- I enabled software encryption for the (secondary) backups to tape (Re-encrypt, Blowfish, key length 128, No Access)
Now only ~750 GB of data is stored on each tape before it is marked full, a decrease of 25%. Is one of these two changes a known, proven and expected cause for this decreased tape usage? Thanks!
Storage accelerator not working!!
I have a CS and MA in the Azure cloud, one Windows client machine in the same network, and one Azure Blob storage container configured as a cloud library. I have installed the Storage Accelerator on my Windows client, but I still can’t see backups moving directly from the client to the cloud (checked in streams, checked in events, and in the job details). Is there anything I need to do specifically to enable the Storage Accelerator?