Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 675 Topics
- 3,383 Replies
Data Aging in AWS S3
Greetings, we have some Aux copies that go to our AWS S3 bucket. The storage policy they fall under has a 30-day on-prem and a 365-day cloud retention. The 30-day on-prem (primary) copy has data aging turned on and seems to be pruning jobs older than 30 days. However, when I looked at the properties of the Aux copy, I noticed that the data aging check box was not selected, and when I view all jobs for this Aux copy it unfortunately shows jobs going back years. So nothing is aging out or getting cleaned up. Our S3 bucket is getting very large and we need to clean up all of these old jobs to bring it down to a reasonable size. My question is how best to do this cleanup? Can I view the jobs under the Aux copy, select everything past our retention, and delete? Would that also delete the data out of the S3 bucket? I did select the data aging check box now and hit OK, then ran a data aging job from the CommCell root and just ran it against
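As a quick way to track whether the cleanup is actually shrinking the bucket, here is a minimal boto3 sketch that sums object sizes; the bucket name is a placeholder, not the poster's actual bucket, and this only measures size, it does not delete anything:

```python
# Minimal sketch: sum object sizes in an S3 bucket to track cleanup progress.
# "my-commvault-aux-bucket" is a placeholder bucket name.
import boto3

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

total_bytes = 0
object_count = 0
for page in paginator.paginate(Bucket="my-commvault-aux-bucket"):
    for obj in page.get("Contents", []):
        total_bytes += obj["Size"]
        object_count += 1

print(f"{object_count} objects, {total_bytes / 1024**4:.2f} TiB")
```

Running this before and after a data aging cycle gives a concrete number to compare against what the CommCell reports as pruned.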
Cloud Storage prediction
Hi all, we currently maintain the retention policy below for all data types. The primary copy is stored on-prem and the Aux copy moves to tape:
- Daily backup - 35 days (on-prem and cloud)
- Weekly full - 6 weeks (on-prem and cloud)
- Monthly full - 1 year (cloud)
- Annual full - 5 years (cloud)
There is now a proposal to change the retention for dailies and weeklies to 90 days and 12 weeks respectively, in the cloud alone. I want to predict the following:
- What is my daily data change rate?
- What is the expected immediate storage growth due to the proposed retention?
- How much will I need to pay additionally under the new policy?
Hopefully most of you have come across this situation; I'm looking for some input on how to calculate these values.
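A back-of-the-envelope sketch for the second and third questions, assuming you can pull the average size of a daily and weekly copy from your Commvault job history; the sizes and price below are placeholders, not figures from the post:

```python
# Rough estimate of extra cloud storage from extending retention.
# avg_daily_gb and avg_weekly_full_gb are placeholders - substitute real
# averages from your Commvault job history / chargeback reports.
avg_daily_gb = 500          # average size of one daily backup copy in the cloud
avg_weekly_full_gb = 4000   # average size of one weekly full in the cloud

extra_dailies = 90 - 35     # proposed 90 days vs current 35 days
extra_weeklies = 12 - 6     # proposed 12 weeks vs current 6 weeks

extra_gb = extra_dailies * avg_daily_gb + extra_weeklies * avg_weekly_full_gb
price_per_gb_month = 0.01   # placeholder rate - check your provider's price card

print(f"Extra retained data: ~{extra_gb / 1024:.1f} TB")
print(f"Extra monthly cost:  ~${extra_gb * price_per_gb_month:,.0f}")
```

With deduplication enabled, the real growth will be lower than this raw figure, since only blocks referenced solely by the extra retained jobs add new data to the library.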
Aux Copy Job missing?
Hi, I have a customer with 2 copies:
1. Primary, dedup on disk, with 66 jobs.
2. Secondary, dedup on disk, with 133 jobs.
He created a copy #3 to replace the secondary, but he chose #1 as the source, and some jobs exist only in the secondary copy. Is there a way to pick up the missing jobs by changing the source of copy #3 to copy #2? If I change it and run an Aux Copy, will the missing jobs be picked up? Or do I have to delete copy #3 and start over?
Media Agent Down
Hi, one of our Media Agents is down. It runs a Windows Server OS and we are unable to bring the server back up; the MA is currently offline. The server also has over 10 TB of critical backed-up data on it. Our OS team has failed to bring up the server. Please suggest how we can recover from this situation.
Catalog jobs from a cloud storage object
Hi guys, is there a way to catalog jobs from a bucket within a cloud storage library, like below? The tool offers only a Tape or a Disk as a media type. How do we retrieve our DR backups from cloud storage, in case we lose everything, in order to perform a disaster recovery? I found the link below, however it doesn't show how to retrieve the DR DB: https://documentation.commvault.com/11.24/expert/43588_retrieving_disaster_recovery_dr_backups_from_cloud_storage_using_cloud_test_tool.html I've also found the note below. Does this mean that if deduplication is enabled, there is no way to retrieve the DR DB? Thanks a lot. Best regards
Azure CloudLib Data Written does not match Dedupe statistics
I have a cloud library in Azure configured with three Cool blob containers (three mount paths). Commvault reports an Application Size of 30 TB and Data on Disk of 50 TB, but in Azure the containers report only 12 TB used. We have verified that WORM is disabled on the volumes in Azure. It seems that Commvault is messing up the statistics for some reason. Has anyone seen this before with Azure cloud libraries?
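One way to cross-check the Azure-side number is to sum the blob sizes directly with the Azure SDK and compare against Commvault's "Data on Disk". A minimal sketch, where the connection string and the three container names are placeholders for whatever the mount paths actually use:

```python
# Sum blob sizes per container to compare against Commvault's reported figures.
# Connection string and container names are placeholders.
from azure.storage.blob import ContainerClient

conn_str = "<storage-account-connection-string>"
containers = ["mountpath1", "mountpath2", "mountpath3"]

for name in containers:
    client = ContainerClient.from_connection_string(conn_str, container_name=name)
    total = sum(blob.size for blob in client.list_blobs())
    print(f"{name}: {total / 1024**4:.2f} TiB")
```

If the SDK totals agree with the 12 TB shown in the portal, the discrepancy is on the Commvault reporting side rather than in Azure.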
Adding new DDB partition
Hello, I'm planning to add a new partition to an existing DDB. I've gone through the documentation and have a question about the passage below: will the additional 0.5 MB of data be added only to the magnetic disk that holds the DDB, or will it also be added to the disk library mount paths? "After running the Backup1, you add Partition2 and run Backup2 of the same 1 MB of data. After the second backup, 4 signatures of 128 KB size will be added to Partition2 (even though the same signatures exists in the original store) and for the other 4 signatures only the reference will be added in the original store (as the signatures already exists). The magnetic disk will have 1.5 MB of data (1 MB from the first Backup + 500 KB from Partition2 from Backup2). On running data aging, if Backup1 is aged, then from the first partition the first 4 signatures will be aged and also 500 KB of data will be pruned from the magnetic disk." https://documentation.commvault.com/2022e/expert/12455_configuring_additional_partitions_for_ded
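A small sketch that just makes the arithmetic in the quoted documentation example explicit (1 MB of source data, 128 KB dedup block size), with no claims beyond what the quote itself says:

```python
# Walk through the quoted example: 1 MB backed up with a 128 KB dedup block size.
block_kb = 128
backup_mb = 1
signatures = backup_mb * 1024 // block_kb        # 8 unique signatures per backup

# Backup1: all 8 signatures land in Partition1, 1 MB of unique block data is
# written to the library ("magnetic disk" in the docs quote).
data_after_backup1_kb = signatures * block_kb    # 1024 KB

# Backup2 (same data, after adding Partition2): per the quote, ~4 signatures go
# to Partition2 and their blocks (4 * 128 KB = 512 KB) are written again.
part2_new_sigs = signatures // 2
data_after_backup2_kb = data_after_backup1_kb + part2_new_sigs * block_kb

print(f"After Backup1: {data_after_backup1_kb} KB on disk")   # 1024 KB = 1 MB
print(f"After Backup2: {data_after_backup2_kb} KB on disk")   # 1536 KB = 1.5 MB
```

If that reading of the quote is right, the extra ~0.5 MB of block data lands on the disk library mount path, while the DDB volumes themselves only grow by the new signature entries; worth confirming with support or the docs before planning capacity.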
Disk Library on LVM or gpt disk
Hi, I'm setting up a new Linux (Red Hat) MediaAgent right now and I have a "what would be better" question; maybe someone would like to share experiences :) On this new MediaAgent I plan to create a new disk library. The MediaAgent will have resources available via SAN from an array (several volumes of 8 TB each). Is it better to use LVM on these volumes (create VGs, create LVs, and finally create a filesystem, for example ext4), or better to make GPT (parted) partitions and create ext4 without creating VGs and LVs? I am very curious about your opinions on which would be the better solution. Greetings
Slow Tape to Disk
We were testing a small (200 MB) backup and restore to tape and back to disk. The backup takes < 8 minutes, however the restore takes 3 hours. It appears that after making contact with the index server, the restore waits almost three hours before mounting the first tape. The transfer from tape to disk actually takes a few minutes. Any idea why it can take so long to mount the first tape?
Copy of backup job config to new servers
Hello, the customer bought new MS SQL servers and is migrating to the 2 new servers.
Scenario today:
- 2 MS SQL servers: 1 production, 2 copy
New scenario:
- 2 MS SQL servers (new OS and database versions): 1 production, 2 copy
I need to copy the backup job configuration, keeping the same settings as the current backups: retention, backup jobs, DDB, schedules. Does anyone have a procedure or best practices for this?
MCSS library performance
Hello, I have an issue that may require some help. We have a primary copy on a disk library, a secondary on another disk library, and a third copy on an MCSS cloud library. All copies have dedup enabled. Auxiliary copies run fine between the disk libraries, but when it comes to sending the data to the MCSS library it takes forever. We tried increasing the number of streams and using either of the disk libraries as the source for the aux copies, but we can't achieve suitable performance. What we see is that the process generates excessive read operations on the source library. The dedup block size is 128 KB on the disk libraries and 512 KB on MCSS. Commvault version 11.24. Any help would be appreciated. Regards, Jean-xavier
LTO - Inventory to new Commserve
Hello community, I have many LTO tapes that contain backups from another CommCell (CommCell A). I want to add the jobs on these LTOs to the CommServe database of CommCell B, so that the CommServe knows about the jobs and can restore them in CommCell B. Also, software encryption is enabled on the LTO copy in CommCell A. Is this possible, and how? Thanks :)
DDB Update to V5 question - Stop backups -vs.- Temporarily Disable Dedupe
Upgrading my DDB’s to V5, trying to avoid fully stopping backups while I squeeze in the upgrade. Is it possible to check “Temporarily Disable Deduplication” under Dedupe Engines > [DDB name] > properties > Deduplication > advanced tab, and perform the upgrade? The DDB needs to come offline for compaction. Thanks in advance, Joel Bates
How to create a Metallic license library capacity dashboard widget
We have Commvault Complete licensing and also Metallic Cloud Storage Service (MCSS) licensing. However, the Command Center dashboard lacks a widget for Metallic. Can someone share how to add a widget that would display the % used of a library? This would effectively show MCSS usage as a % of the licensed amount. The existing Current Capacity widget only shows the % of licensed Complete usage, not the % of storage used; the MCSS library falls into the latter category, since MCSS is not a "Complete"-style accounting of backup potential but an actual % of backup storage used. (Screenshots: from the dashboard, and from Storage > Cloud.) So it would be nice to see the cloud library as a pie chart, similar to the Complete licensing in the Current Capacity widget. How do you make widgets and put them on the dashboard?
Azure Cold and Archive
Hello all, we are using Azure Cold storage for our off-site copies and have been doing so for the last several years. Lately we decided to use Azure combined storage and planned to move/copy data from Azure Cold to Archive storage. After a discussion with Commvault we implemented what was suggested, but the process seems to be really slow and the case has now been escalated to Dev. Being honest, we are seeing terrible delays from their side too. My question now is: instead of using an aux copy to copy the jobs from the cold blob to the combined-tier library, what if we changed the tier of the cold blob library from Cool to the combined tier? If we did that, would the existing data convert to Archive, or would it only affect new data written to that storage?
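For context on what a tier change means at the Azure level only (this says nothing about how Commvault's combined-tier library setting behaves, so confirm that with support): re-tiering existing blobs is an explicit per-blob operation; changing an account or library default affects only newly written blobs. A sketch with placeholder names:

```python
# Illustration only: moving existing blobs to Archive is a per-blob call in
# Azure; it does not happen automatically when a default/library setting changes.
# Connection string and container name are placeholders.
from azure.storage.blob import ContainerClient

client = ContainerClient.from_connection_string(
    "<storage-account-connection-string>", container_name="commvault-cool"
)

for blob in client.list_blobs():
    blob_client = client.get_blob_client(blob.name)
    blob_client.set_standard_blob_tier("Archive")  # re-tier this existing blob
```

Re-tiering outside of Commvault would also bypass its tracking of which blocks are archived, so it is shown here purely to clarify the Azure mechanics, not as a recommended action.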
Aux Copy between CommCells
Hello. Is it possible to have aux copies configured between different CommCell entities? It's the same company, but different business units in different buildings requiring offsite backup copies, and each wants to make use of the infrastructure already existing in the other's environment. This is probably possible by installing a second instance on the existing MA, but it would require separate disk library entities at that site. Thanks. Iggy
Aux copy properties showing Media Not copied "Stream No./Sequence" column containing "stream" 0/1?
I have some Aux copies that show this, for example: the Aux copy is configured to use 8 streams via "Combine source data streams = 8". Multiplexing is not enabled. When it runs (in Properties → Streams tab) it has 8 destination streams running ("Number of readers in use = 8"). BUT when I look at the Properties → "Media Not Copied" tab, it shows 9 streams in the "Stream No./Sequence" column. What is "0/1" → stream number 0? And if so, are there 9 streams to copy? I wanted to make sure nothing was misconfigured somewhere and that stream no. 0 wasn't some default to handle an oddity/overflow of data or something strange. I was under the impression the streams were to be combined into 8 (all data broken up into 8 chunks to be streamed/read/copied), yet the UI is telling me I have 8 streams "to copy" and another one named 0, though it actually only runs 8 streams/readers. For reference, here are the active streams of the same job showing 8 readers/streams.
3DFS Share for an NDMP IntelliSnap Backup Copy
I have some "NAS/network shares" with different backup types (network share CIFS/NFS or NDMP).The backup copies for shares using FileSystem Agent via "Network Share" can use 3DFS and those using NDMP once cannot use 3DFS. Unfortunately, I have a requirement/need to be able to use 3DFS. How can I share backups of NDMP NAS shares via 3DFS? Am I missing something, is there a setting or do I have to live with it?
Use an existing library with data while creating global deduplication policy
Hi all, we currently only have IntelliSnap backups, and the existing disk library is only being used for the SQL transaction log backups. We are planning to onboard a few streaming backups and configure global deduplication. Is it possible to select the existing library, which contains transaction log data, while creating the global deduplication policy, without impacting the data on it? I am trying to avoid having multiple disk libraries. Or do I need to configure an empty disk library for this purpose? Thanks.
We have some trouble with paths to a Synology NAS going offline. Currently we have the DNS name in the path; I'd like to change that to refer to the IP address instead. I'm pretty sure I can do it without any issues, but better safe than sorry. Does anyone see any risks in changing \\DNSname to \\IPaddress for the path(s)? //Henke
Hi all, we had an internal discussion about which library type is the best approach for new customers. We often run Windows clusters with CSV volumes, Windows file clusters, or single servers with SAN-attached storage. In the past there have been a lot of problems with the ransomware protection on CSV and file clusters. Do you have more information on which approach is better for preventing redirected I/O in the cluster and errors during maintenance? Also, is there any way to check whether the ransomware protection is working on a CSV / Windows file cluster? Sure, the option is set, but do we have an option to test if it's working?
Moving mount path hangs and stuck at 96%
We initiated a move mount path operation in our Commvault environment, but it seems to be stuck somehow. There are no other jobs running for this media agent at the moment, and all the data appears to have already been copied. We see:
Estimated Total Data Size: 1.65 TB
Size of Data Copied: 1.65 TB
These are the same, and it has been like this for 1 day and some hours.