Storage and Deduplication
Discuss any topic related to storage or deduplication best practices
- 769 Topics
- 3,647 Replies
Hi guys, I have a storage policy SP_A that runs daily incrementals Monday-Saturday and a FULL on Sunday. SP_A also has a secondary copy with a selective copy of the weekly full backup, scheduled for Monday at 23:00. I would like the same behaviour (an auxiliary copy of the secondary copy with the last full backup of the week), but running just after the primary copy finishes. Is it possible to get this?
Is it possible to create a storage policy that backs up data to a tape library on a weekly basis with manual barcode/media labels?
I am looking for documentation on setting up a storage policy that backs up data to tape on a scheduled basis, e.g. tape #1 for week 1 of the month, tape #2 for week 2 of the month, and so on, in a round-robin scenario from a single backup job. Thank you
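For what it's worth, the week-of-month arithmetic behind the rotation described above is simple. Here is a minimal Python sketch (the tape names are hypothetical, and Commvault itself would implement this via schedule policies rather than a script):

```python
from datetime import date

def week_of_month(d: date) -> int:
    """1-based week of month, counting from the 1st (days 1-7 = week 1)."""
    return (d.day - 1) // 7 + 1

def tape_for_date(d: date, tapes: list[str]) -> str:
    """Round-robin: week 1 -> tapes[0], week 2 -> tapes[1], and so on,
    wrapping around when the month has more weeks than tapes."""
    return tapes[(week_of_month(d) - 1) % len(tapes)]

tapes = ["TAPE01", "TAPE02", "TAPE03", "TAPE04"]   # hypothetical labels
print(tape_for_date(date(2024, 5, 3), tapes))      # day 3 is week 1
print(tape_for_date(date(2024, 5, 10), tapes))     # day 10 is week 2
```

A fifth week in a month wraps back to the first tape with this scheme; that edge case is worth deciding on explicitly before labeling media.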
We are trying to move a mount path to a new media agent, but it has been failing ever since we stopped the job. The disk became full, leaving the data migration job stuck for weeks, so we stopped the job and freed up space. However, every time we try to restart the mount path move, it fails. We did the following to troubleshoot:
- Ran disk defragmentation
- Verified the disk and mount path are online
- Validated the mount path
- Verified no disk maintenance task is running
- General sanity checks
The error that we get is: "Move Mount Path Job Failed, Reason: The mount path is under maintenance and hence cannot proceed with the move operation. Source: <MediaAgentName>, Process: LibraryOperation". Can anyone tell us how to fix this error? How do we get Commvault to see that the disk is not in maintenance mode?
Since noon on Saturday (May 15), my Disaster Recovery Backup admin jobs have been failing with this error:
Error Code: [34:85]
Description: CommServeDR: Error Performing Transfer: Error: [Failed to initialize with Commvault cloud service. The service may be down for maintenance.]
Source: inf-srv57, Process: commserveDR
Is anyone else having issues with the Commvault cloud service?
Ken
Hi, I have a customer with two copies:
1. Primary, dedup on disk, with 66 jobs.
2. Secondary, dedup on disk, with 133 jobs.
He created a copy #3 to replace the secondary, but he chose #1 as the source, and some jobs exist only in the secondary copy. Is there a way to pick up the missing jobs by changing the source of copy #3 to copy #2? If I change it and run an aux copy, will the missing jobs be picked up? Or do I have to delete copy #3 and start over?
Hi, until recently ~1 TB of data was stored on each of our LTO4 tapes. I recently changed two things:
- I created a Global Secondary Copy Policy
- I enabled software encryption for the (secondary) backups to tape (Re-encrypt, Blowfish, key length 128, No Access)
Now only ~750 GB of data is stored on each tape before it is marked full, a decrease of 25%. Is one of these two changes a known, proven and expected cause for this decreased usage of the tapes? Thanks!
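A quick aside for anyone hitting the same symptom: software encryption applied before the data reaches the drive produces high-entropy output that the drive's hardware compression can no longer shrink, which is a common explanation for tapes suddenly filling at roughly native capacity. This self-contained Python sketch illustrates the effect, with zlib standing in for the drive's compressor and random bytes standing in for encrypted data:

```python
import os
import zlib

# Repetitive data, standing in for a typical compressible backup stream.
plain = b"backup-block-" * 50_000          # 650,000 bytes
# Random bytes, standing in for the output of software encryption.
pseudo_encrypted = os.urandom(len(plain))

c_plain = len(zlib.compress(plain))
c_enc = len(zlib.compress(pseudo_encrypted))

print(f"repetitive data: {len(plain)} -> {c_plain} bytes (shrinks a lot)")
print(f"random data:     {len(pseudo_encrypted)} -> {c_enc} bytes (no gain)")
```

The same principle applies to an LTO drive: once the stream is encrypted upstream, the drive can only store it at native capacity, which would match a drop from compressed to near-native tape usage.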
Hello, there still seem to be more problems :-( Next, I launched an aux copy for two more tapes of the same storage policy. It reached 98% and went to Pending status, with no error at all. It took the same tape as in the previous process, so I killed the job. … I deleted the contents of the new LTO7 tape because the aux copy process did not finish to 100%. Now I run an auxiliary copy with a backup period in which there are backups for four tapes, but the job completes with "no more data copied". Is it possible to run an aux copy from the same LTO4 tapes twice, to a new, different LTO7 tape?
Hello guys. I’m looking for some advice/tips on how best to configure additional selective copies in a storage policy and ensure they are deduplicated, to avoid rewriting the same blocks on cloud storage. The primary copy is deduped and goes to Library 1. I want weekly and monthly copies to go to Library 2 and 3 respectively, with each copy disabled. I noticed I can’t use the Global Deduplication Policy used by the primary copy on the additional copies. Does anyone have thoughts on how to tackle this? I’m not a fan of using extended retention on the primary copy, as that would put the weekly and monthly retention on one medium/point of failure.
Creating a Storage Policy Copy with Deduplication vs Creating a Deduplication Enabled Storage Policy Copy
Hi guys, this might seem stupid, but I’m a bit confused by two documents on the Commvault website that talk about deduplicating policy copies. If I’m reading them correctly, the difference between "Creating a Storage Policy Copy with Deduplication" (https://documentation.commvault.com/commvault/v11/article?p=12446.htm) and "Creating a Deduplication Enabled Policy Copy" (https://documentation.commvault.com/commvault/v11/article?p=14132.htm) is that the former is created using a storage pool (an existing dedup engine), whilst for the latter the deduplication location is not an existing dedup engine (storage pool) but just a local folder on the media agent? If that is correct and I want to use the latter to deduplicate additional independent copies, e.g. weekly fulls and monthly fulls on independent libraries, against the primary copy data, is there a downside to it? Need some assistance on this.
Hello, I have an old TS3200 tape library with LTO4 tapes and a new HPE tape library with LTO7 tapes, and I would like to copy data from the LTO4 to the LTO7 tapes. I followed the Media Refresh documentation (commvault.com): I enabled Media Refresh on the storage policy copy, marked the media for refresh as Full and Pick for Refresh, ran a Media Refresh job, and chose the Start new media option. Now I have the error "There are not enough drives in the drive pool. This could be due to drive limit set on master pool if this failure is for magnetic library." I don’t know where the problem is. Is there anything else to do for the media refresh operation? Best regards, Elizabeta
We suddenly encountered low throughput and high DDB lookup times (~99%) for all backup jobs. We removed an obsolete media server this week, and also deleted some storage policies and aux copies with no subclients associated with them. Has anyone encountered a similar situation? Is our dedup database corrupted? Please help. Many thanks
Hi all, we are reviewing our backup strategy and investigating a few back-end storage scenarios. Right now we are considering object storage via the S3 protocol and file storage (JBOD) over NFS. The data to be sent there is on-prem filesystems, databases, VMs, etc., with a total capacity of over 10 PB. We have tested some object storage over S3, but faced issues with data reclamation (garbage collection for expired objects takes far too long, and waiting for capacity to be reclaimed takes a month or more). Can you share your experience with back-end storage, what challenges you faced, and how you solved the issues I mentioned? Also, what advantages do you see in S3 vs. NFS for backups? All feedback is very much appreciated. Thanks!
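On the reclamation point above, one factor worth checking with any S3-compatible store is whether deletes are issued one object at a time or batched: the S3 DeleteObjects call accepts up to 1000 keys per request, so batching cuts request counts by three orders of magnitude. A small, purely illustrative Python sketch of the batching arithmetic (the key names are hypothetical):

```python
def delete_batches(keys: list[str], batch_size: int = 1000) -> list[list[str]]:
    """Group object keys into batches sized for S3 DeleteObjects,
    which accepts at most 1000 keys per request."""
    return [keys[i:i + batch_size] for i in range(0, len(keys), batch_size)]

# 2500 expired chunks -> 3 DeleteObjects requests instead of 2500 DELETEs
keys = [f"chunk-{n:07d}" for n in range(2500)]
batches = delete_batches(keys)
print(len(batches))                # 3
print([len(b) for b in batches])   # [1000, 1000, 500]
```

Whether the backup software or the storage vendor's own garbage collector issues the deletes varies by product, so this is only a lens for sizing the problem, not a fix.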
An aux copy job shows as running after the operational window. The job stops running at 7 AM due to the blackout window, then automatically resumes, and is eventually killed by the system (reason: the job has exceeded the total running time). What I am looking to find out is: does the job pass traffic after the blackout window is in place?
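For anyone reasoning through the same question, the window check itself is straightforward. Here is a minimal Python sketch of blackout-window membership, including windows that wrap past midnight (the times are examples, not the poster's actual configuration):

```python
from datetime import time

def in_blackout(now: time, start: time, end: time) -> bool:
    """True if 'now' falls inside the blackout window [start, end);
    handles windows that wrap past midnight (e.g. 19:00-07:00)."""
    if start <= end:
        return start <= now < end
    return now >= start or now < end

# Example: traffic blacked out 07:00-19:00
print(in_blackout(time(8, 0), time(7, 0), time(19, 0)))   # True: no traffic
print(in_blackout(time(20, 0), time(7, 0), time(19, 0)))  # False: traffic allowed
```

A job shown as "running" during the window may simply be suspended by the scheduler rather than moving data; the job's data-transferred counters are the reliable indicator, not the state label.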
Hi all, I have an issue when adding OneDrive cloud storage, configuring via the CommCell Console. If I enter the Application ID, Tenant ID, and Shared Secret and then click the Detect button, I receive the error "### EvMMConfigMgr::onMsgCloudOperation() - Failed to check cloud server status, error = [[Cloud] The requested URI does not represent any resource on the server. Message: Invalid hostname for this tenancy". Commvault support's answer is "The cloud vendor should be able to help you with the right URL. This is outside Commvault unfortunately." Does anybody have any experience using Microsoft OneDrive as cloud storage? Thank you, Lubos
We have four Exchange servers. Each DB has a passive and an active copy, on different servers. We installed the media agent and a backup disk on the Exchange DB server. Usually I know which DBs are passive on each server, so I set up a subclient on each one so that the backup runs from the passive copy directly over the SAN to the backup disk without using the network. However, sometimes the Exchange admins do patching late at night, the DBs fail over, and they don't fail them back until later; when this happens the DB backups are much slower. I want to avoid this by configuring each subclient to only back up the passive DBs that are on the same server as the media agent/backup disk. My goal is to never run backups over the network, even when DBs are failed over to other servers. Is this possible?
The ticketing service is being kind of slow with this process, and I want to know if I can do this on my own. We have an alert for "No Index Backup in the last 30 days". I don't know why they are not storing the index information on the media agents but are instead storing the index data to tape. How do I get a client that is on this list to back up its index to the media agent, so I can clear this list?
Hello, I would like to request your help, as I'm quite new and have only basic knowledge of backup systems. After reading the Commvault documentation available online and trying to troubleshoot, I narrowed the issues down to three:
- Sealed DDBs are not aging out.
- When I try to run a verification of existing jobs on disk and the deduplication database, it says the DDB is corrupted.
- Our disks are full, so all backups have stopped.
When I run the data retention forecast and compliance report, it gives "BASIC CYCLE" as the reason jobs are not aged out. The dedup policy is set to age out jobs after one cycle, so I guess that if I run a full backup the previous one will be aged out... except I don't have any available disk space. Also, I was unable to find a DDB backup; it seems there never was one to begin with. Should I reconstruct a new one from the ground up? And how can I reduce the size of the sealed DDBs, as they are quite old?
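For readers puzzling over the "BASIC CYCLE" reason above: jobs are typically held until both the retention days and the required number of newer full cycles are satisfied, so a lone full can never age out regardless of its age. A simplified Python model of that rule (an illustration only, not Commvault's actual implementation):

```python
from datetime import date, timedelta

def job_aged(job_end: date, today: date, retention_days: int,
             newer_full_cycles: int, cycles_to_retain: int) -> bool:
    """A job is eligible for aging only when BOTH conditions hold:
    its retention days have elapsed AND enough newer full cycles exist."""
    days_met = (today - job_end) >= timedelta(days=retention_days)
    cycles_met = newer_full_cycles >= cycles_to_retain
    return days_met and cycles_met

# A month-old full with no newer full cycle is still held ("BASIC CYCLE"):
print(job_aged(date(2024, 4, 1), date(2024, 5, 1), 15, 0, 1))  # False
# Once one newer full cycle completes, it becomes eligible:
print(job_aged(date(2024, 4, 1), date(2024, 5, 1), 15, 1, 1))  # True
```

This is why a full-disk situation deadlocks: the new full that would satisfy the cycle condition cannot run until space is freed, and space cannot be freed until the cycle condition is satisfied.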
Hello team, just thought to discuss this with everyone: we have a lot of media marked "Deprecated" and moved to the Retired pool despite very little usage. When I look at the information in Media Properties, it shows the message below, and the side information shows very little usage. The default Commvault media hardware maintenance thresholds are shown below as well. My question is: is it advisable to mark this media as good and reuse it in future? Can someone from MM clarify? @Christian Kubik
We have some trouble with paths to a Synology NAS going offline. Currently we have the DNS name in the path; I'd like to change that to refer to the IP address instead. I'm pretty sure I can do it without any issues, but better safe than sorry. Does anyone see any risks in changing \\DNSname to \\IPaddress for the path(s)? //Henke
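If it helps anyone planning the same change, here is a small Python sketch for previewing what the rewritten UNC path would look like (the helper name and example path are hypothetical; in Commvault you would simply edit the mount path, and a static IP on the NAS is assumed):

```python
import socket

def unc_with_ip(unc_path: str) -> str:
    # Split \\host\share\dir into components, resolve the host to its
    # IPv4 address, and reassemble the path with the IP instead.
    host, *rest = unc_path.lstrip("\\").split("\\")
    ip = socket.gethostbyname(host)
    return "\\\\" + "\\".join([ip, *rest])

# localhost resolves to 127.0.0.1, so this prints \\127.0.0.1\backups\ma01
print(unc_with_ip(r"\\localhost\backups\ma01"))
```

The trade-off cuts both ways: an IP path sidesteps DNS outages, but it also silently breaks if the NAS address ever changes, so a DHCP reservation or static assignment is the usual companion to this change.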
Hi, we have a little discussion going on about whether transaction log backups are non-dedup by default, despite the storage policy copy having deduplication enabled. Or do we need to configure a non-dedup storage policy for this? My belief is that a log storage policy exists for different retention times and copies, and that the backup type decides whether the data is deduplicated or not. Can someone clarify this for me? Kind regards, Danny
Running v11 SP20.40. In the last 3 weeks we have had a ton of occurrences of this error across multiple environments: [The SDT data transfer was terminated on a request from the Job Manager.] I work in an MSP environment with the CS in one datacenter and the MAs spread out around the country. We have a minimum of 10 Gb on our datacenter links, MA-to-MA and MA-to-CS. I'm curious if anyone has seen this and has any resolution or troubleshooting steps. Thanks!