Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 731 Topics
- 3,543 Replies
The other day I noticed a Critical item in the health dashboard under the DDB backup section stating that one of the stores isn't protected. I noticed the DDB store itself got auto-created a day back, with the old one still active. I've seen this across various environments: multiple DDB stores with different IDs get created and all of them are actively used. I couldn't find any documentation that explains it, so it would be helpful if someone could throw some light on this. Thanks. Below is the one I was referring to, where DDB store 72 got auto-created; if you notice, for the FS and DB agent stores we have multiple IDs present...
Is there a command line or script to set just the Location property of a previously exported tape? The command below did not work; it only returned "Request successful.": ./qoperation media -o export -b 000298L6 -l RLS-0 -el "xxx - 05" -m xxx-ma-01 -v -tf /home/xxx/tokenfile.cvlt — Commvault 11.24.12
Hello, I have a matter that could require some help. We have a primary copy on a disk library, a secondary copy on another disk library, and a third copy on an MCSS cloud library. All copies have dedup enabled. Auxiliary copies run fine between the disk libraries, but when it comes to sending the data to the MCSS library it takes forever. We tried increasing the number of streams and using either of the disk libraries as the source for the aux copies, but we can't achieve suitable performance. What we see is that the process generates excessive read operations on the source library. The dedup block size is 128 KB on the disk libraries and 512 KB on MCSS. Commvault version 11.24. Any help would be appreciated. Regards, Jean-xavier
Hello, I would like to request your help as I'm quite new and have only basic knowledge of backup systems. After reading the Commvault documentation available online and trying to troubleshoot, the issues were narrowed down to three: sealed DDBs are not aging out; when I try to run a verification of existing jobs on disk and the deduplication database, it says the DDB is corrupted; and our disks are full, as all backups have stopped. When I run the Data Retention Forecast and Compliance report it gives "BASIC CYCLE" as the reason jobs are not aged out. I have this dedup policy set to age out jobs after 1 cycle, so I guess that if I do a full backup the previous one will be aged out... except I don't have any available disk space. Also, I was unable to find a DDB backup, as it seems there never was one to begin with. Should I reconstruct a new one from the ground up? How can I reduce the size of the sealed DDBs, as they are quite old?
This has been driving me mildly nuts for the past 3 days, following a power shutdown at our site. We had some challenges with our storage network that prevented iSCSI comms for a few hours after we powered the site back up following some scheduled electrical work. We solved that with the software equivalent of a power cycle on the switches in question (two unrelated port channels to our SAN were disabled/re-enabled). Great, iSCSI works again with all hosts. The MA in question, a VM, is borrowing space on a temporary basis from a newer array, as I am doing a lot of migration work, and is mounting the volumes in Windows via iSCSI (as opposed to using an iSCSI-mounted datastore in VMware, which is treated as an HDD by the OS). Keep in mind, this was all working swimmingly before this past weekend. For the past three days, CV is convinced that the 5 mount points in question are offline and do not have a controller. If you go to the OS and browse, you can drill down as deeply into the mounted volumes…
Because of a bug, my cloud storage hit 100% full and not all year-end backups were replicated to the cloud. I've worked with CommVault support and now have 50% free space. Even though the Aux Copy job has run several times, there are still year-end backups that haven't replicated to cloud storage. How do I get CommVault to copy the missing backups? Ken
Hello guys, I have a problem with the copy being made to another site. CommServ version: SP20 - 11.20.73. I saw the same problem in another topic, where updating to SP21 solved it: https://community.commvault.com/technical-q-a-2/auxilary-copy-not-copied-some-jobs-239 What do you think? Should I update to SP21?
How do I properly delete/decommission mount paths associated with old storage? DDBs still appear to be associated with the mount paths.
We have added new storage to Commvault and set the old mount paths to "disabled for write" via the mount path "Allocation Policy" → "Disable mount path for new data" + "Prevent data block references for new backups". All mount paths that are "disabled for write" show no data on them via the "mount path" → "View Contents" option. We have waited several months for all the data to age off. BUT… I see info on the forums/docs that data may still be on the storage, and there are references to "baseline data" in use by our DDBs. When I go to the mount path properties → Deduplication DBs tab, I see that all our "disabled for write" mount paths have DDBs listed in them. So it appears Commvault is still using the storage in some way. I saw a post that indicated "The LOGICAL jobs no longer exist on that mount path, but the original blocks are still there, and are being referenced by newer jobs. For that reason, jobs listed in other mount paths are referencing blocks in this mount path…
Hello all, we are using Azure cold storage for our off-site copies, and have been for the last several years. Lately we decided to use Azure combined storage and planned to move/copy data from Azure cold to archive storage. After a discussion with Commvault we implemented what was suggested, but the process seems to be really slow and the case has now been escalated to Dev. Being honest, we are seeing terrible delays from their side too. My question now is: instead of using an aux copy to copy the jobs from the cold blob to the combined-tier library, what if we changed the tier of the cold blob from cool to the combined tier? If we did that, would the existing data convert to archive, or would that only affect new data written to that storage?
I have an old InfiniGuard VTL that is replicated to a new InfiniGuard VTL and I need to decom the old one. Is there a way, with CommVault down, to uninstall the medium changer and drives and install the new medium changer and have CommVault recognize the new VTL as the old one? The replicated VTL has all of the same barcodes.
Hello, there still seem to be more problems :-( Next, I launched an aux copy for two more tapes of the same storage policy. It reached 98% and went into Pending status, with no error at all. It used the same tape as in the previous process. I killed the job. … I deleted the contents of the new LTO7 tape because the aux copy process did not finish to 100%. Now I run the auxiliary copy with a backup period in which there are backups for four tapes, but the job completes with "no more data to copy". Is it possible to run an aux copy from the same LTO4 tapes twice to a new, different LTO7 tape?
Hello all, I ran into a bit of an issue… Yesterday, one of the disk libraries filled up and the backups went into a waiting status. After having a look at the utilization, it indeed turned out to be 99.7% full. The main culprits were SQL Server backups: there were some backup jobs with extended retention, so I deleted those, plus some more of the old backup jobs, to make space. I also ran Data Aging and could clearly see data chunks being deleted in SIDBPhysicalDeletes.log, and after a while I got this: So, I assume quite a bit of data was deleted. The Primary copy (blue) went from 52.95 TB down to 19.81 TB. However, when I check the free space on the library I get very little: So I checked the mount path space usage for that DL: Data Written corresponds to the amount of space used by the Primary copy: 19.8 TB. However Size on Disk, which takes into account Data Written + aged jobs which are still referenced by valid jobs, is still very high, almost unchanged. I am quite confused by this.
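If it helps to put numbers on the post's own definition (Size on Disk ≈ Data Written + aged jobs still referenced by valid jobs), here is a minimal sketch. The 52 TB figure is an assumption, since the post only says Size on Disk was "almost unchanged":

```python
# Rough illustration using the post's own relationship:
#   Size on Disk ~= Data Written + aged-but-still-referenced blocks
data_written_tb = 19.8   # valid (unaged) data on the Primary copy, per the post
size_on_disk_tb = 52.0   # assumed value; the post only says "almost unchanged"

# Blocks that belong to aged jobs but are still referenced by newer jobs,
# so they cannot be physically pruned yet and keep occupying the library.
aged_but_referenced_tb = size_on_disk_tb - data_written_tb
print(f"Still referenced / awaiting physical pruning: ~{aged_but_referenced_tb:.1f} TB")
```

Under that reading, free space only comes back as those referenced baseline blocks stop being needed and physical pruning catches up.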
Hi everyone, how do you estimate MCSS storage requirements? MCSS (cool) would be our customers' cloud secondary copy in their existing Commvault environment, so there are values available for the estimate such as local block size (128 KB), deduplication factor, baseline size and the like. We're seeing around 1.2 to 1.3x the local disk demand in MCSS with customers who set it up recently. As MCSS secondary copies "must have a minimum retention of 30 days", reducing retention (which is often the customer's choice for solving storage issues) is apparently not an option here. While of course we cannot provide a 100% accurate estimate, we intend not to be too far off. Thanks.
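For a rough sizing approach along the lines described above, one simple sketch; this assumes the observed 1.2-1.3x ratio holds for new customers, and the function name and example baseline are hypothetical:

```python
def estimate_mcss_tb(local_baseline_tb: float, observed_ratio: float = 1.25) -> float:
    """Rough MCSS (cool) capacity estimate for a dedup secondary copy.

    local_baseline_tb: deduplicated baseline size of the local disk copy.
    observed_ratio:    empirical MCSS-to-local ratio from the post (1.2-1.3x),
                       largely driven by the larger cloud dedup block size.
    """
    return local_baseline_tb * observed_ratio

# Example: a 40 TB local baseline would land somewhere around 48-52 TB in MCSS.
print(estimate_mcss_tb(40.0, 1.2), "-", estimate_mcss_tb(40.0, 1.3), "TB")
```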
Interestingly enough, there are two conflicting theories on the SQL and Oracle transaction log deduplication and pruning question. One is from a CV support engineer: "We discussed high Q&I times and any way to reduce the load on the DDB. We both acknowledged that SQL transaction logs are not deduplicated; however, you were curious if the T-Log backups communicated in any way with the DDB, which may impose any, albeit minimal, load or processing on the DDB. I was able to confirm that SQL transaction log backups do not communicate with the deduplication process and therefore will not have any effect on Q&I times: https://documentation.commvault.com/11.24/expert/12434_deduplication_support.html" But another CV engineer says that if the storage policy used for transaction logs is deduplicated, CV is not smart enough and will try to deduplicate anyway, and it will also involve pruning at the end of the cycle. And when I look at the jobs at the deduplication engine level, I see my tran…
Commvault is showing 75 TB of data to be written to tape. We have set the tape copy's "combine source data streams" to 3 (so it will use 3 tapes) and multiplexing is set to 5. Additionally, we have the Data Path Configuration set to use alternate data paths when "resources are busy", and (in the policy) checked "enable stream randomization..." and "distribute data evenly among multiple streams...". We started the job; it chose to write to only 2 tapes, and also chose an alternate media agent (not sure why, the default media agent has 2 available tape drives and does not appear to be busy). Looking at the job, it only used 7 readers, BUT there is a single stream/entry in "media not copied"… it does not seem to have determined that it needed to use 3 streams, yet it has a single stream "waiting", and it is not running 10 readers (only 7), so the reader count/multiplexing seems not to be honored (as there is a single stream waiting in "media not copied"). Why didn't it break up the streams…
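Reading the numbers in the post, the reader count generally tracks active writer (tape) streams times the multiplexing factor; a small sketch of that arithmetic, where the variable names are mine and the 2-writers figure comes from the post's observation that only 2 tapes were written:

```python
combine_streams = 3   # "combine source data streams" on the tape copy
multiplexing = 5      # multiplexing factor
active_writers = 2    # only 2 tape drives were actually writing, per the post
waiting_writers = combine_streams - active_writers  # the stream stuck in "media not copied"

# Expected readers if every active writer multiplexes fully.
expected_readers = active_writers * multiplexing
print(expected_readers, "readers expected;", waiting_writers, "writer stream still waiting")
```

That arithmetic gives the 10 readers the poster expected for 2 writers; the observed 7 readers and the waiting third stream are the parts that remain unexplained.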
I'm having problems with DDB backup jobs at my DR site. I changed the schedule from every 6 hours to once per day, but the backup from yesterday is still running and I'm at the 22-hour point. I'd like to kill this job and let a fresh one start, but I understand that the DDB backup uses snapshots and I'm afraid that if I kill it there won't be a proper snapshot cleanup. Is it OK to kill a DDB backup job that's been running this long? Ken
Hello all, I have a question about my aux copy behaviour. We use S3-type storage on site for our backups and aux copy these to a private cloud provider. I've noticed that certain clients' jobs will consistently be skipped in the aux copies. Initially I thought the issue was bandwidth, as the aux copy never completed. We have since improved the bandwidth and throughput is much better now; however, the aux copies are still not completing. If I go to the secondary copy and show jobs (unticking the time range and excluding "Available"), I will have a number of jobs going back weeks, with none of the jobs from that client showing as partially available (which would imply it is part way through copying). Interestingly, today I started the aux copy with "Use scalable resource allocation" UNTICKED, and those old jobs were immediately picked up and started copying. Does anyone have any ideas why this would be? I'm curious what impact this will have on my environment. I just don't get why most jobs were copying and it was…
Hello, we would like to tier out the data which is stored on the disk library to a Huawei object storage. I created a secondary copy and configured an aux copy schedule. The problem is that the disk library's free space is running low because the job is not as fast as I was hoping. The amount of data for the copy job can be up to 10 TB. Is there a solution to speed up the aux copy job? The MediaAgents have 2x10 Gbit cards. Regards, Thomas
We are struggling to commission an HPE MSL6480, which has 6 drives. Each drive has 2 FC ports, connected such that port 1 connects to the Fabric A switch and port 2 to the Fabric B switch, and the HSX hosts are connected respectively. I've built and installed the lin_tape control path failover driver, and the OS sees a device for each path. Does multipath need to be configured, and if so, can someone provide the process, as Linux is not my strength? Has anyone else installed and configured an MSL6480 and used it with Commvault?
While trying to figure out how to gather BET for charging purposes, I noticed that the size on disk, as displayed both in Command Center and the CommCell Console, is incorrect for cloud libraries. I have opened a ticket for it, referring in particular to S3 buckets, but I was wondering if other customers see the same, and whether it also occurs on libraries using Microsoft Azure Storage or other types/vendors. Please comment if you have identified the same. I noticed it while running FR26 and FR28 (2022e).
Dear all, please, I need help resolving an issue with my Commvault server. This is the error I am getting: Error Code: [62:342] Description: Failed to mount media with barcode [A99801], side [A_780], into drive [IBM ULTRIUM-TD5_20], in library [STK L180 1] on MediaAgent [TMwas02]. SCSI Operation: Open Drive Device. Reason: Encountered an I/O error while performing the operation. Advice: Please use the following troubleshooting utilities: 1. TapeTool for tape device-related problems. 2. LibraryTool for library-related problems, provided in the MediaAgent's installation folder, to perform diagnostics on this device. Source: backup, Process: MediaManager. Prior to this issue, the backup schedules ran smoothly. I have noticed this error only comes up when I run backup jobs on the media agents (I have eleven) in the CommCell, with the exception of the CommServe's media agent. The status of the media library attached to these media agents shows "ready". Any help will be greatly appreciated. Thank you.