Storage and Deduplication
Discuss any topic related to storage or deduplication best practices
- 770 Topics
- 3,650 Replies
The signature does not match. Message: The required information to complete authentication was not provided or was incorrect.
Hello. I’m trying to configure an Oracle Cloud Infrastructure Object Storage library, but it’s showing this error. I have already entered all the required information: Service Host, Tenancy OCID, User OCID, Key Fingerprint, PEM Key Filename, and Bucket. What do I need to do to solve this problem?
Hi all, I have the following question. If I need more disk storage pools (in order to have more storage policy copies with different retention), I would need more deduplication engines. Is it possible to create multiple deduplication engines on one disk drive? Are there any limitations on the number of deduplication engines per disk drive? Thanks in advance for your engagement!
Hi, our documentation around combined storage tiers in the cloud could be a bit misleading around sizing for warm and archive tier storage requirements. See the quoted passage below. What is the ideal ratio to size for (or do we just need to go with 10% on the warm tier to be safe)? https://documentation.commvault.com/2022e/expert/139246_combined_storage_tier.html

Backups: Commvault’s combined storage tiers work by placing the Commvault metadata used in both deduplicated and non-deduplicated backups in the warmer or frequent-access tiers. This allows you to perform a simple Browse and Restore of the archival data without the delay of a cloud archive storage recall. As a result, more than 90% of the backup data is stored in the Archive tier, while only up to 10% of the data is stored in the warmer tiers. Deduplicated and non-deduplicated data are supported in Commvault combined storage tiers. For deduplicated data, the backup indexes and deduplication metadata are stored in the warmer tier. For non-deduplicated data, th…
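Taken at face value, the sizing arithmetic from that passage is straightforward. Here is a tiny sketch of the documented "up to 10% warm / 90%+ archive" split; the 200 TB figure is just an assumed example, not something from the thread:

```python
# Rough sizing sketch based on the documented "up to 10% warm / 90%+ archive" split.
# The total back-end size below is an assumed example figure.
def combined_tier_split(total_backend_tb: float, warm_fraction: float = 0.10):
    """Return (warm_tb, archive_tb) for a given total back-end size."""
    warm_tb = total_backend_tb * warm_fraction
    archive_tb = total_backend_tb - warm_tb
    return warm_tb, archive_tb

if __name__ == "__main__":
    warm, archive = combined_tier_split(200.0)  # e.g. a 200 TB back-end
    print(f"Warm/frequent-access tier: ~{warm:.0f} TB, Archive tier: ~{archive:.0f} TB")
```

Whether 10% is an upper bound to provision for or a typical observed value is exactly what the documentation leaves unclear.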
Hi team. I have a question about data verification on a secondary tape copy: how can I exclude a specific tape from the data verification process, or how should I address the problem below? My setup is simple: primary copy on a disk library and secondary copy on LTO. Both copies are verified. From time to time I see a tape that generates a lot of CRC errors during data verification. The verification process takes a long time, blocking the tape drive, and either completes or sometimes gets stuck. After that, dozens of jobs on the tape have a partial verification status, and the next scheduled verification run repeats with the same results. What surprises me is that even when the tape exceeds the read/write error threshold and has a condition status of “bad”, it is still subject to verification. Why can’t such a verification simply mark those jobs as failed and be done with it? I get a nasty tape like this quite often, and then a lot of problems follow. Manually checking and excluding dozens of jobs on a tape from verification is practically impossible…
I have a new Dell PowerEdge R750 sitting in a box with an SSD RAID BOSS card and 12x 18 TB NL-SAS drives on an H755 RAID controller. It’s going to run Windows Server 2019 or 2022 and will be a backup server holding both Commvault and Veeam data. The server will boot off the SSD BOSS, and the NL-SAS is purely backup storage. At the PERC level I expect I’ll go with RAID 6. At the Windows level I’m not clear whether there are I/O benefits to splitting it into lots of small (5-10 TB) volumes (which would all sit on the same physical RAID set) or whether to keep it simple with one or two large volumes and segment data using folders. The current disk library sits on a pair of older Windows PowerEdges with around 12x 4 TB mount paths, which need migrating to the new hardware. Unless there’s a real benefit, I’d sooner have a single large, flexible Windows volume with 12 subfolders, one per mount path, than mess around creating 12 separate volumes. I presume there is no way to consolidate down the number of mount paths…
Greetings,We are trying to redo our Aux copy backups we currently have going to S3 now. We want to harden our Aux/cloud backups by creating new S3 buckets and turning on WORM for them. I created a new test bucket and turning on the S3 Object Lock for the bucket. My question is around retention times. We currently have and will continue our retention of our yearly (gathering all the weekly full backups for 365 days) backups and then have data aging start deleting the full backups on-hand after 365 days. If I want to keep this same retention with WORM turned on, do I need to set the S3 object lock retention to the same amount (or less/more than 365 days)? I am reading through documents and other posts about setting retention 2 or 3 times the amount as the policy to prevent accidental deletion or preventing it from being deleted. This is where I am confused on. Right now we do have it deduped going to our Aux copy/cloud, but I guess I question if we should even dedupe our new backups to t
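For reference, here is a minimal sketch (assuming boto3; the bucket name and the 365-day COMPLIANCE choice are illustrative, not a recommendation) of how a bucket-level default Object Lock retention is applied, since that default is what interacts with Commvault’s own data aging:

```python
# Sketch: create a WORM-enabled S3 bucket and apply a default Object Lock retention.
# Bucket name and the 365-day/COMPLIANCE values are illustrative only; whether the
# lock should equal, exceed, or trail the Commvault copy retention is the open question.
import boto3

s3 = boto3.client("s3")
bucket = "my-commvault-aux-worm-test"  # hypothetical name

# Object Lock can only be enabled when the bucket is created.
s3.create_bucket(
    Bucket=bucket,
    ObjectLockEnabledForBucket=True,
    # Outside us-east-1 you also need, e.g.:
    # CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)

# Default retention applied to every new object version written to the bucket.
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
    },
)
```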
I have an aux copy job running. It allocates 55 streams, and as time progresses those streams gradually drop off as they complete. My aux copy has now been sitting on 7 jobs for a couple of hours. When I kill the job and restart it, I go back up to 61 streams. Am I improving things by doing this, and will my job finish faster? Why don’t the streams increase by themselves during the job, or why does it take so long for them to do so? Can I improve this?
How and where can I identify the traditional DDB version on the MediaAgent? Looking in SIDBEngine.log on the MediaAgent, we find this:
6292 126c 12/16 13:00:19 ### 1-0-1-0 LoadConfig 313 Use MemDB [false]
But there is no version number visible there. Is there a version number at all, and does it exist somewhere?
We have a tape library with two drives. Is it possible to have two secondary copies pointed at this same tape library and to start two auxiliary copies at the same time? My goal is to end up with two tapes containing the same data, so that one of them can be kept at another safe offsite location.
Hi all, are there any advantages, other than the security benefit of reducing the attack surface, to using iSCSI-attached volumes from a NAS presented directly to the Windows MediaAgent rather than writing via SMB to a NAS share? In terms of performance, ransomware protection, etc.? Your thoughts are highly appreciated. Thanks in advance. Regards, Michael
Hello, we have run out of space in one of our two libraries, and to expand it we have to replace the current hard disks with higher-capacity ones, as we have no option to add additional disks or modules. The affected library contains the DR copy, so we changed the path in the storage policy to copy it to another library, so that we can free up the affected library, delete it, and configure it again with the new disks. When we try to do this we get an error, because the DR copy is of the warm type and has a long retention; until that retention expires we cannot delete it, even though we have another copy in the other library. How can we proceed? Best regards, and thanks in advance.
Hey everyone, we were wondering how client-side deduplication and compression work with Azure “Storage Accounts”. It doesn’t seem to use our MediaAgent, so which resource is it using? Is there some kind of “invisible” virtual machine behind the Storage Account in Azure that does the deduplication, etc.? Best regards
Hello all, I’m trying to configure my OCI storage in Commvault to test the tool (I’m using a trial licence), but I’m dealing with some errors as shown below. What certificate is this? How can I install it, and where? I then used CloudTestTool and it shows this. Looking at the log file:
4228 2274 10/12 12:54:02 ### GetTenancy() failed. Error: Message: The required information to complete authentication was not provided or was incorrect.
4228 2274 10/12 12:54:02 ### Failed to get the namespace. Error: Message: The required information to complete authentication was not provided or was incorrect.
4228 2274 10/12 12:54:02 ### CheckServerStatus() - Error: Message: The required information to complete authentication was not provided or was incorrect.
4228 2274 10/12 12:54:02 ### EvMMConfigMgr::onMsgConfigStorageLibrary() - CVMACMediaConfig::AutoConfigLibrary returns error creating Library [Failed to verify the device from MediaAgent [dphwinbkpprd] with the error [Failed to check cloud serv…
Hello, a customer asked whether it would be possible to make all their primary copies (on disk libraries) WORM-protected and what the implications would be. Up until now our standard is n days / 1 cycle retention on primary copies and n days / 0 cycles retention on a secondary copy. We basically use the 1 cycle as a safety net: if for whatever reason a client’s backup does not run for a long time, there is always one backup available without setting manual retention on those jobs. Now we are having an internal discussion about how retention works with WORM, specifically whether the cycle retention is also relevant for manual deletion of the jobs/clients that hold those jobs. For example: a client is using a WORM storage policy with 14 days / 1 cycle retention. Data aging will not age out and delete the jobs automatically until both conditions are met. But is it possible to manually delete the jobs on day 15? I would say it is not possible, because they are still retained by the cycle criterion. If that is the case…
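For what it’s worth, here is a toy model of the “both days and cycles must be met” aging rule the example describes. This is plain Python for illustration only, not Commvault code, and the names and the exact comparison semantics are assumptions:

```python
# Toy model of the standard retention rule: a job is only eligible to age
# when BOTH the days criterion and the cycles criterion are satisfied.
# Names, structure, and comparison operators are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Copy:
    retention_days: int
    retention_cycles: int

def can_age(job_age_days: int, newer_full_cycles: int, copy: Copy) -> bool:
    """newer_full_cycles = number of complete backup cycles run after this job."""
    days_met = job_age_days > copy.retention_days
    cycles_met = newer_full_cycles >= copy.retention_cycles
    return days_met and cycles_met

worm_copy = Copy(retention_days=14, retention_cycles=1)
# Day 15, but no newer full cycle yet -> still retained by the cycle criterion.
print(can_age(job_age_days=15, newer_full_cycles=0, copy=worm_copy))  # False
print(can_age(job_age_days=15, newer_full_cycles=1, copy=worm_copy))  # True
```

Whether a WORM lock additionally blocks manual deletion even after both criteria are met is the part of the question this sketch does not answer.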
Failed to verify the device from MediaAgent - Failed to check cloud server status Error: The certificate file is not found. Error = 44336
Hello. I’m trying to configure an Oracle Cloud Infrastructure Object Storage library, but it’s showing this error. I have already entered all the required information: Service Host, Tenancy OCID, User OCID, Key Fingerprint, PEM Key Filename, and Bucket. I have also created a config file in the .oci folder. What do I need to do to solve this problem?
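One way to rule the credentials themselves in or out before touching the Commvault library dialog is to test the same tenancy/user/fingerprint/key against Object Storage directly. This is a minimal sketch assuming the oci Python SDK is installed and a standard ~/.oci/config profile; the path and profile name are illustrative:

```python
# Minimal OCI credential sanity check (assumes "pip install oci").
# The config path and profile name below are illustrative.
import oci

# Reads user OCID, tenancy OCID, fingerprint, key_file and region from the profile.
config = oci.config.from_file("~/.oci/config", "DEFAULT")
oci.config.validate_config(config)  # raises if a required field is missing or malformed

client = oci.object_storage.ObjectStorageClient(config)
# Fails with an authentication error if the key/fingerprint pair is wrong.
namespace = client.get_namespace().data
print("Authenticated OK, Object Storage namespace:", namespace)
```

If that call succeeds but the Commvault library configuration still fails, the mismatch is likely in how the same values were entered in the library dialog (for example, stray whitespace in the fingerprint or a PEM key path the MediaAgent cannot read).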
Hi, I came into work and noticed dozens of jobs in a waiting state because the mount path does not have enough free space. I am aware that we need to add more storage, and we are going to, but in the interim I tried lowering the reserve space from 6 TB to 2 TB so that the jobs can finish and I can see what can be cleared. It’s not letting me change it; it will only go down to 5960 GB. I currently have a ticket open with CV (221017-401). Is there a way to fix this?
Hi, I’m working on a project where a large HyperScale environment needs to be migrated to Azure. I’m looking at using either Azure storage or Metallic Recovery Reserve, or possibly a mix of the two. There is short-term retention of 30 days as well as LTR of 5 to 7 years, so we will use Hot/Cool tiers (possibly combined storage tiers if Metallic is not used). HyperScale is set up with the default DDB block size of 128 KB. In an ideal world, one could just set up a secondary copy for the cloud storage (either Metallic or Azure), kick off the aux copy to the cloud, let it run to get the data over into Azure, and eventually promote it to the primary copy. However, since the cloud storage will then be used as the primary copy, we ideally want to configure it with a 512 KB DDB block size. MediaAgents will be set up in Azure, as they will eventually become the production MAs once things are cut over. Some key questions on the above: when copying between storage policies with different DDB block sizes, how will this affect overall deduplication…
Hi, I need your help. In the initial implementation, the customer added a machine running Debian 11 as a MediaAgent, but it is not supported, as it only allows configuring HPE Catalyst storage and not the libraries. The customer therefore uninstalled the MediaAgent from the Debian machine, and now I see that I cannot remove the HPE Catalyst storage. For the moment I have added a MediaAgent running Ubuntu, and it can see both the disk storage and the libraries. The error is about WORM media data.
Oracle archive log backups are backed up to a storage policy. The primary copy’s retention is configured for 30 days with 0 cycles. The Data Retention Forecast and Compliance Report shows that the archive logs are required by the RMAN full backup, which has been aux copied from the primary copy (disk library) to a secondary copy (tape library). The secondary copy retention is 7 years. I would like the log backups on disk to follow the primary copy retention of 30 days and expire once 30 days have passed; however, they are not expiring, since they are required by the full backup on tape with the longer retention. I had a similar issue with SQL backups, where the log backups were not expiring because they were required by the SQL full backups that had been aux copied to tape with the long 7-year retention. The setting to correct that behaviour is under Storage → Media Management Configuration → Data Aging → disable <Honor SQL Chaining for Full jobs on Selective copy>. Does anyone know if…
Hi all, I have a DAG cluster with about 7 TB of data. The backup job completes in about 1 day, but restore jobs take longer than the backups (about 1.5 days). How can I reduce these times (both backup and restore, though restore matters more than backup)? I tried increasing the number of streams, but it isn’t helping.
I’m looking for some guidance or documentation on what can be done with ZRS storage. Say we have the following availability zones in the region. Zone 1: MediaAgents and Hot blob storage backing up local VMs, files, etc.; no aux copy. Zone 2: standby MediaAgent and CommServe, powered down; CommServe DR recovery done manually. Zone 1 fails, so we power up the MA and CS in Zone 2 and perform a manual DR backup set restore. Next step: getting the ZRS storage mounted as primary. Any links, docs, or steps would be much appreciated. Thanks, Regards, Robert