Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 617 Topics
- 3,237 Replies
Hi, until recently ~1 TB of data was stored on each of our LTO4 tapes. I recently changed two things: I created a Global Secondary Copy Policy, and I enabled software encryption for the (secondary) backups to tape (Re-encrypt, BlowFish, key length 128, No Access). Now only ~750 GB of data is stored on a tape before it is marked full, a decrease of 25%. Is one of these two changes a known, proven and expected cause for this decreased usage of the tapes? Thanks!
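A minimal sketch of the arithmetic involved, assuming (not confirmed by the post) that the drop comes from software encryption producing incompressible data and so defeating the LTO drive's hardware compression:

```python
# Rough arithmetic on the figures quoted above (assumed interpretation, not
# from the post): LTO-4 native capacity is 800 GB per cartridge, and the
# drive's hardware compression only helps when the incoming data is
# compressible. Software-encrypted data looks random, so it no longer
# compresses and per-tape capacity falls toward the native figure.

native_capacity_gb = 800          # LTO-4 native (uncompressed) capacity
before_gb = 1000                  # ~1 TB written per tape before the change
after_gb = 750                    # ~750 GB written per tape after the change

implied_compression_before = before_gb / native_capacity_gb   # ~1.25:1
reduction = 1 - after_gb / before_gb                           # 0.25 -> 25%

print(f"Implied hardware compression before encryption: {implied_compression_before:.2f}:1")
print(f"Observed per-tape capacity reduction: {reduction:.0%}")
```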
Hello, there still seem to be more problems :-( Next time I launched an aux copy for two more tapes of the same storage policy. It reached 98% and went into Pending status, with no error at all. It took the same tape as in the previous process. I killed the process. … I have deleted the contents of the new LTO7 tape because the aux copy process did not finish to 100%. Now I run an auxiliary copy with a backup period that includes backups for four tapes, but the job completes with "no more data copied". Is it possible to run an aux copy from the same LTO4 tapes twice to a different new LTO7 tape?
Hello guys. I’m looking for some advice/tips on how best to configure additional selective copies in a storage policy and ensure they are deduplicated, to avoid rewriting the same blocks on cloud storage. The Primary Copy is deduped and goes to Library 1. I want the Weekly and Monthly copies to go to Library 2 and 3 respectively, with each copy disabled. I noticed I can’t use the Global Deduplication Policy that the Primary Copy uses on the additional copies. Does anyone have thoughts on how to tackle this? I’m not a fan of using Extended Retention on the Primary Copy and keeping the Weekly and Monthly retention on one piece of media / single point of failure.
Creating a Storage Policy Copy with Deduplication vs Creating a Deduplication Enabled Storage Policy Copy
Hi guys, this might seem stupid, but I’m a bit confused by these two documents on the Commvault website that talk about deduplicating policy copies. If I’m reading the articles correctly, the difference between "Creating a Storage Policy Copy with Deduplication" (https://documentation.commvault.com/commvault/v11/article?p=12446.htm) and "Creating a Deduplication Enabled Policy Copy" (https://documentation.commvault.com/commvault/v11/article?p=14132.htm) is that the former is created using a Storage Pool (an existing dedup engine) whilst for the latter the deduplication location is not an existing dedup engine (storage pool) but just a local folder on the MediaAgent? If that is correct and I want to use the latter to deduplicate additional independent copies, e.g. Weekly Fulls and Monthly Fulls on independent libraries, separately from the Primary Copy data, is there a downside to it? Need some assistance on this.
Hello, we would like to tier out the data which is stored on the disk library to a Huawei Object Storage. I created a secondary copy and configured an aux copy schedule. The problem is that the disk library's disk space is running low because the job is not as fast as I was hoping. The amount of data for the copy job can be up to 10 TB. Is there a solution to speed up the aux copy job? The MediaAgents have 2x 10 Gbit cards. Regards, Thomas
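As a rough sanity check on those numbers (throughput figures below are assumed for illustration, not measured), a single 10 Gbit link is unlikely to be the bottleneck for 10 TB; tuning would more likely target the number of readers/streams, source disk reads and the object storage ingest rate:

```python
# Back-of-the-envelope aux copy timing (assumed values for illustration).
# A single 10 Gbit/s link is ~1.25 GB/s in theory; real aux copy throughput
# is usually limited by source disk reads, object storage ingest rate and
# the number of data readers/streams, not by the NIC itself.

data_tb = 10
link_gbps = 10                          # one 10 Gbit NIC
theoretical_gb_per_s = link_gbps / 8    # ~1.25 GB/s

realistic_gb_per_hr = 2000              # assumed sustained aux copy rate (2 TB/h)

hours_at_line_rate = data_tb * 1000 / theoretical_gb_per_s / 3600
hours_realistic = data_tb * 1000 / realistic_gb_per_hr

print(f"At NIC line rate: {hours_at_line_rate:.1f} h")
print(f"At an assumed sustained 2 TB/h: {hours_realistic:.1f} h")
```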
Hello, I have an old TS3200 tape library with LTO4 tapes and a new HPE tape library with LTO7 tapes. I would like to copy data from the LTO4 tapes to the LTO7 tapes. I worked according to the documentation Media Refresh (commvault.com): I enabled Media Refresh on the storage policy copy, marked the media for refresh as Full and "Pick for Refresh", ran a Media Refresh job and chose the "Start new media" option. Now I get the error "There are not enough drives in the drive pool. This could be due to drive limit set on master pool if this failure is for magnetic library." I don't know where the problem is. Is there anything else to do for the Media Refresh operation? Best regards, Elizabeta
We suddenly encountered low throughput and high DDB lookup time (~99%) for all backup jobs. We removed an obsolete Media Server this week. We also deleted some storage policies and aux copies that had no subclients associated with them. I would like to ask if anyone has encountered a similar situation. Is our dedup database corrupted? Please help. Many thanks.
Hi all, we are looking into our backup strategy and investigating a few scenarios for backend storage. Right now we are considering object storage via the S3 protocol and file storage (JBOD) over NFS. The data which will be sent there is on-prem filesystems, databases, VMs, etc., with a total capacity of over 10 PB. We have tested some object storage over S3, but we faced issues with data reclamation (garbage collection for expired objects takes far too long, and reclaiming the capacity can take a month or more). Can you share your experience with back-end storage, what challenges you faced and how you solved the issues I mentioned, and what advantages you see in S3 vs NFS for backups? All feedback is very much appreciated. Thanks!
Aux copy job shows running after the operational window. The aux copy job stops running at 7am due to a blackout window, then automatically resumes, and the job is killed by the system (reason: the job has exceeded the total running time). What I am trying to find out is whether it still passes traffic once the blackout window is in place.
Hi all, I have an issue when adding OneDrive cloud storage. I am configuring via the CommCell Console. If I enter the Application ID, Tenant ID and Shared Secret and then click the Detect button, I receive the error "### EvMMConfigMgr::onMsgCloudOperation() - Failed to check cloud server status, error = [[Cloud] The requested URI does not represent any resource on the server. Message: Invalid hostname for this tenancy". Commvault support's answer is "The cloud vendor should be able to help you with the right URL. This is outside Commvault unfortunately." Does anybody have any experience using Microsoft OneDrive as cloud storage? Thank you, Lubos
We have four Exchange servers. Each DB has a passive and an active copy, on different servers. We installed the MediaAgent and a backup disk on the Exchange DB server. Usually I know which DBs are passive on each server, so I set up a subclient on each one so the backup runs from the passive copy directly over the SAN to the backup disk without using the network. However, sometimes the Exchange admins do patching late at night, the DBs fail over, and they don't fail them back until later. When this happens the DB backups are much slower. I want to avoid this by configuring each subclient to only back up the passive DBs that are on the same server as the MediaAgent/backup disk. My goal is to never run backups over the network, even when DBs are failed over to other servers. Is this possible?
So the ticketing service is being kind of slow with this process, and I want to know if I can do this on my own. We have an alert for "No Index Backup in the last 30 days". I don't know why these clients are not storing the index information on the MediaAgents but are instead writing the index data to tape. How do I get a client that is on this list to back its index up to the MediaAgent, so I can get it off the list?
Hello, I would like to request your help, as I'm quite new and have only basic knowledge of backup systems. After reading the Commvault documentation available online and trying to troubleshoot, the issues were narrowed down to three: sealed DDBs are not aging out; when I try to run a verification of existing jobs on disk and the deduplication database, it says the DDB is corrupted; and our disks are full, so all backups have stopped. When I run the Data Retention Forecast and Compliance report, it lists "BASIC CYCLE" as the reason why jobs are not aged out. I have this dedup policy set to age out jobs after 1 cycle, so I guess that if I run a full backup the previous one will be aged out... except I don't have any available disk space. Also, I was unable to find a DDB backup; it seems there never was one to begin with. Should I reconstruct a new one from the ground up? And how can I reduce the size of the sealed DDBs, as they are quite old?
I’m looking to migrate one of my MediaAgents to new server hardware and am looking for the best approach with minimal downtime. I was thinking I could set up the new hardware with the MediaAgent role/software and start moving mount paths to the new server. Once all mount paths have been moved to the new MA, I can then update my storage policies to point to the new MA. Is there anything else I need to look out for or would need to do? Any advice or knowledge on this would be great. Thanks
Hello team, just thought to discuss this with everyone: we have lots of media that have turned "Deprecated" and moved to the Retire pool despite very little usage. When I look at the information in Media Properties, it shows the message below, and the Side information also shows very little usage, compared against the default CommVault Media Hardware Maintenance thresholds shown below. My question is: is it advisable to mark those media as good and re-use them in the future? Can someone from MM clarify? @Christian Kubik
We have some trouble with paths to a Synology NAS going offline. Currently we have the DNS name in the path. I'd like to change that to refer to the IP address instead. I'm pretty sure I can do it without any issues, but better safe than sorry. Does anyone see any risks in changing the \\DNSname to \\IPaddress in the path(s)? //Henke
Hi, we have a little discussion going on about whether transaction log backups are non-deduplicated by default, even though the storage policy copy has deduplication enabled. Or do we need to configure a non-dedup storage policy for this? In my belief, a Log storage policy exists for different retention times and copies, and the backup type decides whether the data is deduplicated or not. Can someone clarify this for me? Kind regards, Danny
Running v11 SP20.40. We’re having a ton of occurrences of this error across multiple environments in the last 3 weeks: [The SDT data transfer was terminated on a request from the Job Manager.] I work in an MSP environment with the CS in one datacenter and the MAs spread out around the country. We have a minimum of 10 Gb on our datacenter links, MA-to-MA and MA-to-CS. I’m curious if anyone has seen this and has any resolution or troubleshooting steps. Thanks!
Hi guys, I have a customer who is currently using standard MAs with Commvault: copy 1 goes to disk (PureFlash with NVMe drives), copy 2 to another PureFlash, and copy 3 to tape, around 350 TB per week sent to LTO7 tapes across 4 tape drives (weekly fulls) at a sustained throughput of 700 GB/hour per drive (4 drives in parallel). We are looking to replace both MAs with two HyperScale X clusters. The questions are: how do we need to configure HyperScale X (reference architecture) to sustain the weekly tape creation of 350 TB per week at the same throughput, knowing that we are going to use nearline SAS in the HyperScale X cluster? Or can we use SSDs for the storage pool drives in HyperScale X?
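A quick back-of-the-envelope on the figures quoted above (my own arithmetic, not from the post), suggesting the HyperScale X backend would need to sustain roughly 2.8 TB/h of reads to keep the four drives streaming:

```python
# Sanity check of the tape window implied by the figures above
# (4 drives at a sustained 700 GB/h each, 350 TB per weekly cycle).
drives = 4
per_drive_gb_per_hr = 700
weekly_tb = 350

aggregate_tb_per_hr = drives * per_drive_gb_per_hr / 1000   # 2.8 TB/h
hours_needed = weekly_tb / aggregate_tb_per_hr               # ~125 h
days_needed = hours_needed / 24                              # ~5.2 days

print(f"Aggregate tape throughput: {aggregate_tb_per_hr:.1f} TB/h")
print(f"Continuous write time for {weekly_tb} TB: {hours_needed:.0f} h (~{days_needed:.1f} days)")
```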
Hello all, I ran into a bit of an issue. Yesterday one of the disk libraries filled up and the backups went into waiting status. After having a look at the utilization, it indeed turned out to be 99.7% full. The main culprit was SQL Server backups: there were some backup jobs with extended retention, so I deleted those, plus some more of the old backup jobs, to make space. I also ran Data Aging and could clearly see data chunks being deleted in SIDBPhysicalDeletes.log. So I assume quite a bit of data was deleted; the Primary copy went from 52.95 TB down to 19.81 TB. However, when I check the free space on the library, there is very little. So I checked the mount path space usage for that DL: Data Written corresponds to the amount of space used by the Primary copy, 19.8 TB. However, Size on Disk, which takes into account Data Written plus aged jobs that are still referenced by valid jobs, is still very high, almost unchanged. I am quite confused by this.
Hello, we are in the process of migrating to a new disk library. This disk library is a pair of NAS devices with 300 TB on each NAS. We can carve each NAS into multiple volumes with a maximum size of 150 TB per volume. When we first set up our disk library about 10 years ago, the maximum recommended size of a mount path was 4 TB. I know that is old guidance and I am sure it has increased over the years. We tried to find something in the documentation, and the closest we found was a reference to the maximum mount path being 25 TB, but it appears that limitation can be overridden with a registry setting. So a few questions: Is there a maximum mount path size in a disk library? If there is, what is it? If there is, what happens if you hit the limit without adjusting the registry, and can it be overridden with a registry setting? Regardless of a maximum mount path size, from a performance and management perspective, is there a best practice on sizing the mount paths? We have thre…
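Purely illustrative arithmetic on the capacities mentioned above (not guidance from the documentation), showing how many mount paths each candidate size would imply for the two NAS devices:

```python
# How many mount paths a pair of 300 TB NAS devices would yield at a few
# candidate mount path sizes (illustrative arithmetic only).
import math

total_tb = 2 * 300                      # two NAS devices, 300 TB each

for mount_path_tb in (4, 25, 150):      # old guidance, documented reference, max volume size
    paths = math.ceil(total_tb / mount_path_tb)
    print(f"{mount_path_tb:>3} TB mount paths -> {paths} mount paths total")
```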