Storage and Deduplication
Discuss any topic related to storage or deduplication best practices
- 778 Topics
- 3,676 Replies
Evening folks. I am in the process of enabling data verification across our storage policies; some of our servers are EC2 instances with Commvault configured to back up directly to S3. I assume that if I were to enable data verification in the above scenario, I would incur further charges, as Commvault would be reading data back from the S3 bucket? Also, I just want to check that my working is correct, or thereabouts. If we are charged for data verification in the above scenario and, for example, the data verification job needed to verify 150 GB worth of data, my math would be: 150 GB (size of data) / 32 MB (block size that CV stores data in) / 1,000 (S3 charges per 1,000 requests) * price per 1,000 READ requests? Many thanks
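The arithmetic in the post above can be sketched as a quick calculation. The 32 MB chunk size, the one-GET-per-chunk assumption, and the per-1,000-request price are all assumptions to be replaced with your actual storage settings and rate card:

```python
# Rough estimate of S3 GET-request charges for a data verification job.
# Assumptions (replace with your own figures): Commvault issues one GET
# per stored chunk, chunks are 32 MB, and S3 bills per 1,000 GET requests.

def verification_request_cost(data_gb, chunk_mb=32, price_per_1000_gets=0.0004):
    """Estimate S3 GET charges (USD) for verifying `data_gb` gigabytes."""
    chunks = (data_gb * 1024) / chunk_mb          # number of GET requests
    return (chunks / 1000) * price_per_1000_gets  # cost in USD

# 150 GB at 32 MB per chunk -> 4,800 GET requests -> well under a cent
# at the assumed $0.0004 per 1,000 GETs; data-transfer-out charges, if the
# MediaAgent is outside the bucket's region, would dominate instead.
print(round(verification_request_cost(150), 6))
```

Note the request charges themselves are tiny at this scale; if the verifying MediaAgent sits outside the bucket's region, per-GB egress would be the number to watch.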
Hey everyone, we were wondering how client-side deduplication and compression work with Azure "Storage Accounts". It doesn't seem to be using our MediaAgent, but which resource is it using then? Is there somehow an "invisible" virtual machine that runs a "Storage Account" in Azure and does the deduplication etc.? Best regards
Hi, I could not find anything that addressed this, so asking here. I read that the only way to "migrate data" between storage classes would be as documented. However, can this be done? I have a HyperScale as a primary copy. I have an existing AWS S3 Standard bucket as a DASH copy with deduplication. I want to create another DASH copy to an AWS S3-IA bucket with deduplication, then promote that copy to be the secondary copy and get rid of the existing bucket. Effectively, this seems like migrating the data just as well as going through the process described with the Cloud Tool. Am I wrong? Can this be done?
Hello Community, I am new to Commvault. I am trying to check the status of a failed DDB Reconstruction job. I checked the storage policy but I don't see the job that created the internal ticket. Type: Job Management - DeDup DB Reconstruction. Detected Criteria: Job Started. Thanks.
Greetings, We have some aux copies that go to our AWS S3 bucket. The storage policy they are under has 30-day on-prem and 365-day cloud retention. The 30-day on-prem (primary) copy has data aging turned on and seems to be pruning jobs past 30 days. I took a look at the properties of the aux copy, though, and noticed that the checkbox for data aging was not selected. When I view all jobs for this aux copy, it shows jobs from years ago, unfortunately. So that tells me that nothing is aging out or getting cleaned up. Our S3 bucket is getting very large and we need to clean up all of these old jobs to bring it down to a reasonable size. My question is how best to do this cleanup. Can I view the jobs under the aux copy and then just select all of them past our retention and delete? Would this also delete data out of the S3 bucket if I did this? I did select the data aging checkbox now and hit OK, then ran a data aging job from the CommCell root and just ran it against
I have an aux copy job running; it allocates 55 streams, and as time progresses those streams gradually drop as they complete. My aux copy has now been sitting on 7 jobs for a couple of hours. When I kill the job and restart it, I go back up to 61 streams. Am I improving things by doing this, and will my job finish faster? Why do the streams not increase by themselves during the job, or why does it take so long for them to do so? Can I improve this?
Hello! This morning I was all thumbs and dropped a tape. This caused the little pin inside to come loose and the tape was barely holding. I have to consider it dead now. Is there a way to flag the data inside to be recopied to another tape? Thanks!
Hi Team, We are about to embark on the V4 to V5 DDB conversion process, but I thought I would ask here and see how it went for those who have completed it. We have a few partitioned DDBs of a reasonable size, and I am trying to gauge how long our backup outage might be, as we have to guesstimate on behalf of our customer. I can see that the pre-upgrade report does give estimates, so I'm wondering how ballpark they might be. One more thing: is there a way of identifying what version of DDB we have? Thanks
Greetings, We are trying to redo the aux copy backups we currently have going to S3. We want to harden our aux/cloud backups by creating new S3 buckets and turning on WORM for them. I created a new test bucket and turned on S3 Object Lock for it. My question is around retention times. We currently have, and will continue, our yearly retention (gathering all the weekly full backups for 365 days), with data aging then deleting the full backups on hand after 365 days. If I want to keep this same retention with WORM turned on, do I need to set the S3 Object Lock retention to the same amount (or less/more than 365 days)? I am reading through documents and other posts about setting retention to 2 or 3 times the policy amount to prevent accidental deletion, and this is where I am confused. Right now we do have it deduped going to our aux copy/cloud, but I guess I question whether we should even dedupe our new backups to t
Hello community, I have a customer who is backing up to a disk library, then aux copying to an Azure cloud library. However, when the customer looks at their Azure costs for the last 5 days, they are spending over 100 dollars a day on iterative read operations. I'm trying to figure out what is reading so much from the cloud library. The total written size for a day is 200 GB, so why 41 million read requests? DDB verification is disabled for this library. Is there anything else I should look into? Thanks in advance.
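As a sanity check on the numbers in the post above, a back-of-the-envelope calculation helps frame the question. Azure bills blob read operations per 10,000; the unit price below is an illustrative placeholder, not a quoted rate, so substitute the figure from your own rate card (cool and archive tiers charge far more per read than hot):

```python
# Back-of-the-envelope check: what do 41 million read operations cost per day?
# Azure bills blob read operations per 10,000 operations; the rate below is
# an assumed placeholder -- substitute the price from your own rate card.

def daily_read_cost(read_ops, price_per_10k_reads=0.01):
    """Estimated daily read-operation charge in USD."""
    return (read_ops / 10_000) * price_per_10k_reads

# 41,000,000 reads at $0.01 per 10,000 operations comes to ~$41/day,
# which lines up with the "over 100 dollars a day" only at a higher tier rate.
print(round(daily_read_cost(41_000_000), 2))
```

Working backwards like this can tell you which tier's read rate is actually being billed, which narrows down what is issuing the reads.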
Hi, our documentation around combined storage tiers in the cloud could be a bit misleading around sizing for warm and archive tier storage requirements. See in red below. What's the ideal ratio to size? (Or do we need to just go 10% on the warm tier to be safe?) https://documentation.commvault.com/2022e/expert/139246_combined_storage_tier.html Backups: Commvault's combined storage tiers work by placing the Commvault metadata used in both deduplicated and non-deduplicated backups in the warmer or frequent access tiers. This allows you to perform a simple browse and restore of the archival data, without the delay of a cloud archive storage recall. As a result, more than 90% of the backup data gets stored in the archive tier, while only up to 10% of the data is stored in the warmer tiers. Deduplicated and non-deduplicated data is supported in Commvault combined storage tiers. For deduplicated data, the backup indexes and deduplication metadata are stored in the warmer tier. For non-deduplicated data, th
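The 90/10 split quoted from the documentation can be turned into a rough sizing sketch. Note the 10% warm-tier figure is the documented upper bound ("up to 10%"), not a measured ratio; actual metadata overhead depends on dedupe settings and data profile, so sizing to the full 10% is the conservative choice:

```python
# Rough combined-tier sizing from the documented "up to 10% warm" guidance.
# total_backend_tb is the expected back-end footprint after deduplication;
# warm_fraction=0.10 is the documented worst case, not a measured value.

def combined_tier_sizing(total_backend_tb, warm_fraction=0.10):
    """Split a back-end footprint into warm- and archive-tier estimates (TB)."""
    warm = total_backend_tb * warm_fraction
    archive = total_backend_tb - warm
    return {"warm_tb": warm, "archive_tb": archive}

# e.g. a 100 TB back-end sizes to roughly 10 TB warm and 90 TB archive
print(combined_tier_sizing(100))
```

Sizing the warm tier at the full 10% leaves headroom; in practice the warm share is often smaller, but undersizing it is what causes trouble at restore time.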
Hi all, I have the following question. If I needed more disk storage pools (in order to have more storage policy copies with different retention), I would need more deduplication engines. Is it possible to create multiple deduplication engines within one disk drive? Are there any limitations on the number of deduplication engines on one disk drive? Thanks in advance for your engagement!
Is this assumption correct? If you start CV encrypting data sent to dedupe storage, my guess would be that it creates a completely new set of dedupe data. Once encryption is turned on, the dedupe engine will see it as new data rather than as an encrypted version of the old. While the unencrypted and encrypted data from the same servers remain in the same dedupe storage, storage usage could be higher than usual.
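A tiny illustration of why the reasoning above holds, assuming signature-based deduplication: the engine fingerprints the bytes it is handed, and the ciphertext of a block never fingerprints the same as its plaintext. The hash and the stand-in "encryption" below are illustrative only, not Commvault's actual algorithms:

```python
# Why encryption restarts the dedupe baseline: the engine signatures the
# bytes it receives, and ciphertext never matches the plaintext signature.
# (SHA-256 and the XOR "cipher" here are illustrative stand-ins only.)
import hashlib

block = b"the same 128KB of source data " * 1000

sig_plain = hashlib.sha256(block).hexdigest()
# Stand-in for encryption: any transformation of the bytes changes the hash.
sig_encrypted = hashlib.sha256(bytes(b ^ 0x5A for b in block)).hexdigest()

print(sig_plain == sig_encrypted)  # False -> stored as a brand-new block
```

So until the pre-encryption jobs age off, the store holds two baselines of the same source data, which is why usage climbs after enabling encryption.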
Has anyone ever transferred their initial baseline backup between two sites using an available removable disk drive (seeding a deduplicated storage policy)? It is looking like we may have to use this method to get a good aux copy for some remote sites. How did it go? Did you have any issues? Thanks for your input.
I'm looking for some guidance or documentation on what can be done with ZRS storage. Say we have the following availability zones in the region. Zone 1: MediaAgents and hot blob, backing up local VMs, files, etc.; no aux copy. Zone 2: standby MediaAgent and CommServe, powered down; CommServe DR recovery done manually. Zone 1 fails, so we power up the MA and CS in Zone 2 and perform a manual DR backup set restore. Next step: getting the ZRS storage mounted as primary. Any links, docs, or steps would be much appreciated. Thanks, Regards, Robert
Hi Team, Is there any API or report with which we can monitor individual library mount paths? CV alerts are for the complete library, but I need to configure an alert specific to the mount paths (coming from different MediaAgents or servers) that are part of the same disk library. We receive the alert below when one or a few of the library's mount paths hit their reserved space: "Failure Reason: Insufficient disk space. Available mount paths are not enabled for write or have met reserved space limit. Enable/add more mount paths or add more disk space to existing mount paths. Please check mount paths on the library." We need to configure alerts at the mount path level so that we can disable mount paths in advance. Regards, Mohit
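Absent a built-in per-mount-path alert, one stopgap is a small script run on each MediaAgent that probes the paths directly. This is a generic disk-space check, not a Commvault API call; the mount paths and reserve threshold below are placeholders to substitute with your own:

```python
# Generic per-mount-path free-space check to run on each MediaAgent.
# This is a plain filesystem probe, not a Commvault API call -- the
# MOUNT_PATHS list is a placeholder for your library's real mount paths.
import shutil
import tempfile

def low_space_paths(paths, reserve_pct=10.0):
    """Return (path, free_pct) pairs for paths below the free-space reserve."""
    flagged = []
    for p in paths:
        usage = shutil.disk_usage(p)
        free_pct = usage.free / usage.total * 100
        if free_pct < reserve_pct:
            flagged.append((p, round(free_pct, 1)))
    return flagged

# Placeholder mount paths -- substitute the real ones, then feed the result
# into whatever alerting you already use (mail, SNMP trap, ...).
MOUNT_PATHS = ["/mnt/cvlib/mp1", "/mnt/cvlib/mp2"]

# Demo over the system temp dir (stands in for a mount path that exists):
for path, pct in low_space_paths([tempfile.gettempdir()], reserve_pct=5.0):
    print(f"ALERT: {path} down to {pct}% free")
```

Scheduled a little ahead of the library's reserved-space limit, this gives you time to disable a filling mount path before Commvault starts refusing writes to it.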
Hello, I have a matter that could require some help. We have a primary copy on a disk library, a secondary on another disk library, and a third copy on an MCSS cloud library. All copies have dedupe enabled. Auxiliary copies run fine between the disk libraries, but when it comes to sending the data to the MCSS library it takes forever. We tried increasing the number of streams and using either of the disk libraries as the source for the aux copies, but we can't achieve suitable performance. What we see is that the process generates excessive read operations on the source library. The dedupe block size is 128 KB on the disk libraries and 512 KB on MCSS. Commvault version 11.24. Any help would be appreciated. Regards, Jean-xavier
Hi, We keep getting the following alerts in the "Anomaly Report". It seems clear: the MediaAgent is unable to access the mount path over SMB due to a permission issue. Backups don't seem to be impacted, so it feels like Commvault is doing a retry, after which it works fine. But we would like to know where this error is coming from; it doesn't happen frequently. We checked the Event Viewer on the MediaAgent, and we do see an SMB error, but the time doesn't match: the error on Windows is 30 minutes before the error in Commvault. We checked the storage that this SMB share comes from, but there is also nothing special to see there. I wanted to check with the community to see where this might come from and how to troubleshoot it properly. Would this be an issue with the storage, or rather with the MediaAgent on Windows? Or maybe an issue in Commvault?
Hi, I'm working on a project where a large HyperScale environment needs to be migrated to Azure. We are looking at using either Azure storage or Metallic Recovery Reserve, and/or possibly a mix of the two. There is short-term retention of 30 days as well as LTR of 5 to 7 years, so we will use hot/cool tiers (possibly combined storage tiers if Metallic is not used). HS is set up using the default DDB block size of 128 KB. In an ideal world, one could just set up a secondary copy for the cloud storage (either Metallic or Azure), kick off the aux copy to cloud, let it run to get the data over into Azure, then eventually promote it to primary copy... however... As the cloud storage will then be used as a primary copy, ideally we want to configure it with a 512 KB DDB block size. MediaAgents will be set up in Azure, as they will eventually become the production MAs once things get cut over. Some key questions on the above: copying between storage policies with different DDB block sizes - how will this affect overall dedupl
Hello All, We are using Azure cold storage for our off-site copies, and have been for the last several years. Lately, we decided to use Azure combined storage and planned to move/copy data from Azure cold to archive storage. After a discussion with Commvault, we implemented what was suggested, but the process seems to be really slow and the case has now been escalated to Dev. Being honest, we are seeing a terrible delay from their side too. My question is now: instead of using an aux copy to copy the jobs from the cold blob to the combined-tier library, what if we changed the tier of the cold blob from cool to combined tier? If we did that, would the data convert to archive, or would that only affect new data written to that storage?
So, I'm working on a project with Commvault and an HPE Apollo 4200 with 24x 12 TB NL-SAS drives. What would be a best-practice approach when creating the array (and logical volume) in the server RAID controller? I read some people saying that the max volume should be 4 TB per mount point; does this still make sense these days? If yes, that means I need to create 66 volumes of 4 TB each within the RAID controller. Any benefits of doing this versus a single volume of 264 TB? Thanks
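The 66 x 4 TB figure in the post is consistent with a double-parity layout; a quick check of the arithmetic (RAID-6 is an assumption here, since the post does not state a RAID level):

```python
# Sanity-check the 66 x 4 TB figure: 24 x 12 TB drives with two parity
# drives leaves 22 data drives -> 264 TB usable, which divides evenly
# into 66 mount points of 4 TB each.
# (RAID-6 is an assumption; the original post doesn't state a RAID level.)

DRIVES, DRIVE_TB, PARITY = 24, 12, 2          # RAID-6 assumption
usable_tb = (DRIVES - PARITY) * DRIVE_TB      # usable capacity in TB
mount_points = usable_tb // 4                 # 4 TB per mount path
print(usable_tb, mount_points)                # 264 66
```

The usual argument for many smaller mount paths is parallelism across writers and simpler filesystem checks, traded against the management overhead of dozens of volumes; one giant volume puts all of that on a single filesystem.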
Hi! We've recently moved MediaAgents to new servers, and I'm now in the process of "cleaning up". For 3 of the 4 MediaAgents the report showed nothing, and I could delete the MediaAgent without any issue. However, for the last MediaAgent, it complains about "Proxy Server to Perform IntelliSnap Backup Operations" and lists a client name, a backup set name, and a subclient name. This was "correct", because there was a subclient with that name configured for IntelliSnap (but not in use, and with no backups). So I deleted the subclient... but even now, several days later, when I try to delete the MediaAgent, the report still complains about the same "issue". I could of course just ignore the "warning" and proceed with the deletion, but I'd much prefer for the report to show that there is no association before proceeding.