Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 557 Topics
- 2,976 Replies
While trying to figure out how to gather BET for charging purposes, I noticed that the size on disk displayed for cloud libraries, both in Command Center and in the CommCell console, is incorrect. I have opened a ticket for it, referring in particular to S3 buckets, but I was wondering if other customers see the same, and whether it also occurs on libraries using Microsoft Azure Storage or other types/vendors. Please comment in case you identify the same. I noticed it while running FR26 and FR28 (2022e).
We have a storage policy with aux copies that was sending disk backups to a tape library. The library that contained all these tapes has been decommissioned. A new library was stood up and all tapes were put into this new tape library. However, the aux copy that represents this data belonged to a different media server and physical library. We are trying to figure out how to take the data sent to the aux copy in the old storage policy and move it to a new Cloudian array that has been configured as a cloud library.
Failed to verify the device from MediaAgent - Failed to check cloud server status Error: The certificate file is not found. Error = 44336
Hello. I’m trying to configure an Oracle Cloud Infrastructure Object Storage library, but it’s showing this error. I have already entered all the information required to configure it: Service Host, Tenancy OCID, User OCID, Key’s Fingerprint, PEM Key Filename and Bucket. I also created a config file in the .oci folder. What do I have to do to solve this problem?
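For reference, the OCI config file the SDK/CLI reads normally lives at `~/.oci/config` and follows this INI layout. This is only a sketch with placeholder values (the OCIDs, fingerprint, key path and region below are not real); the entries must match the API key uploaded to your user in the OCI console:

```ini
[DEFAULT]
user=ocid1.user.oc1..<your_user_ocid>
fingerprint=aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99
key_file=/path/to/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<your_tenancy_ocid>
region=us-ashburn-1
```

A common cause of this class of error is a fingerprint that does not match the uploaded public key, a PEM file the MediaAgent service account cannot read, or a passphrase-protected key with no `pass_phrase` entry in the config.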
Greetings! I’ve been involved in backups for quite a while, but have mercifully been using drives, not tapes. I’m now having to consider tapes. We have multiple SLAs, including:
- A monthly full backup to tape, retention of 62 days, lasting 1 month only
- A monthly full backup to tape, retention of 365 days, so 12 tape backups
- A quarterly full backup to tape, retention of 365 days, so 4 tape backups
The last full backup of the month goes to tape. So I foresee a single tape (or a group of tapes with a mess of… ) consisting of 1-month plus 12-month retention times. Some of these tapes will have jobs that last a year as well as jobs that expired months earlier. What is everyone’s experience with such a thing? We have over 750 servers involved here. CV has but one tape drive currently and an operations group to rotate tapes. Thank you in advance for any experience you can lend me… Mike Rucker
I have an Auxiliary Disk-to-Disk copy and the throughput is very low; I see a lot of intermittent reading from the disk where the data lives.
From the CVJobReplicatorODS log, job number 177027:
346796 56e20 09/02 18:11:14 177027 Target copy is single instanced
346796 56e20 09/02 18:11:14 177027 Block level SI is set. Going to set minimum single instanceable size to block size
346796 56e20 09/02 18:11:14 177027 Min SI Data Size [128 KB], SI Block Size [128 KB]
346796 56e20 09/02 18:11:14 177027 EncryptionType:PASSTHRU for target copy:
346796 56e20 09/02 18:11:14 177027 EncryptionType:PASSTHRU(NOENCRYPTION) for target copy: as there are no encrypted src copy files.
346796 56e20 09/02 18:11:14 177027 N/w agents configured before/after firewall check = [2/2]. Firewalled = 1
346796 56e20 09/02 18:11:14 177027 CVArchive::StartPipeline() - StartPipeline SI configuration -[srcClientName - commvault-shf] Block Level [true], Block Size , File Level [false], Min Signature Size 
346796 56e20 09/02 18:11:14 177027 CPipelayer::InitiatePipeline Initiating SDT connection [000000D50C41C7E0] from 10.10.165.221:8400(commvault-shf) to
Hello, Team. I am getting the error below on the majority of running jobs. I have checked the storage end and also the media agents (the LUNs are attached to the MediaAgent) and everything looks good. What could be the cause of the error? Failed to mount the disk media in library [ARCHIVE_DISKPROD] with mount path [B:\Archive_DiskLibrary\MP10] on MediaAgent [hq_media_svr3]. Operation could not be completed in timeout interval. Please check the following: 1. Library and drive is functioning correctly. 2. Library and Drive management services are running. 3. All other MediaAgent services are running. 4. The time out period on the Expert Storage Configuration Properties Window in the CommCell Console. 5. Cleaning media in Assigned Media Group. Source: hq-vm-commserv, Process: MediaManager
Hi all! Could you advise me on how to troubleshoot the following type of error: Error Code: [13:138] Description: Error occurred while processing chunk [xxx] in media [xxx], at the time of error in library [disklib01] and mount path [[xxx] /srv/commvault/disklib01/xxx], for storage policy [XXX] copy [Xxx] MediaAgent [svma1]: Backup Job [xxx]. Unable to setup the copy pipeline. Please check connectivity between Source MA [svma1] and Destination MA [svma1]. At a glance, it seems that it is not possible for CV to process a chunk from the (index?)/disk library. However, the issue is connected with the storage policy copy that moves data from the disk library to the tape library (secondary copy). The main problem for us is that it is not possible to copy data to the tapes; hence it may say "Unable to setup the copy pipeline". The media agent is one server/device that communicates with both the disk and the tape library. Lastly, the files in the related directories don't seem to be corrupted. Any suggestions?
I am executing a database tape backup with RMAN. The backup fails with RMAN errors, and in the tape library the drives show a Reservation Stuck status while the drive state is "Drive Fully Accessible". What does Reservation Stuck indicate?
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of uncatalog command on ch1 channel at 08/29/2022 16:13:07
ORA-00204: error in reading (block 1, # blocks 1) of control file
ORA-00202: control file: '+DS242600413/backup.ctl.galaxy.1'
ORA-15078: ASM diskgroup was forcibly dismounted
RMAN> Recovery Manager complete.
ORACLE error from target database:
ORA-00204: error in reading (block 1, # blocks 1) of control file
ORA-00202: control file: '+DS242600413/backup.ctl.galaxy.1'
ORA-15078: ASM diskgroup was forcibly dismounted
Hi. For data security I was told to look into the WORM option at the Primary copy level of our storage policies. A bit of background on our environment: we have short data retention set on our Primary copy, 35 days and 1 cycle. My understanding is that this WORM option in storage policies works within the Commvault software: no admin (or anyone else) can delete backup jobs after it is enabled, and we have to wait for jobs to age out and then be pruned by Commvault automatically. Then I was advised that if I enable WORM, the DDB for that storage policy will be sealed, and a new DDB will be created automatically and rebaselined. So I have some questions:
- Is DDB sealing an automated process? I enabled the WORM option more than a month ago on a storage policy for testing, but I cannot see a sealed DDB under ‘Deduplication engines’.
- Our disk libraries are quite big (from 300 TB to 800 TB). If we need to rebaseline every time, will this take a long time and impact performance? With only 35
Good afternoon. I wanted to check with the community before opening a case with support, regarding the number of outstanding prunable blocks. On a library that had started growing by 1 TB per day, we ran 'run space reclamation' and recovered 30 TB of the 65 TB it occupied, even though this library has only 6 TB in use. It is strange: we are still seeing a large number of outstanding prunable blocks, but performing the 'run space reclamation' does not remove them. Does anyone know the reason? Thanks
I am in the process of moving all our data from tape to a disk library and need to estimate how long this will take. Has anyone come up with a reasonably accurate way of predicting how long aux copying the data for a given copy would take? I have 6 tape drives in a library. This has been used as a target for multiple Auxcopy operations for multiple Storage Policies. Due to tape contention, a multiplexed multi-stream Auxcopy for a Storage Policy copy could be written to 1 to 4 drives depending on drive availability. I am also interested to know how this works. When the operation began it copied the oldest data first, but as time went on, completed and partially copied data began to appear throughout the timeline. Is there a way to make the most recent data copy first? Does an Auxcopy operation begin by copying all jobs on a given mounted piece of media, or does it only copy some jobs and have to mount that tape again later?
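A rough back-of-the-envelope estimate can be made by dividing the data volume by the aggregate read throughput of the drives. This is only a sketch: the drive count and per-drive GB/hr figure below are hypothetical and should be replaced with values measured from your own completed aux copy jobs, and the result ignores tape mounts, contention, and multiplexed-read penalties, so treat it as a lower bound:

```python
def estimate_aux_copy_hours(total_tb: float, drives: int, gb_per_hr_per_drive: float) -> float:
    """Lower-bound duration estimate: data volume / aggregate drive throughput."""
    total_gb = total_tb * 1024
    return total_gb / (drives * gb_per_hr_per_drive)

# e.g. 40 TB to copy, 6 drives, 250 GB/hr per drive (all hypothetical figures)
hours = estimate_aux_copy_hours(40, 6, 250)
print(f"{hours:.1f} hours")  # ≈ 27.3 hours
```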
Hello. I created an Oracle database full backup subclient that runs every day at 2:00 AM and an Oracle archive log backup subclient that runs every hour. Both data and archive logs use the same storage policy. At the storage policy level I created an auxiliary copy with selective copy set to all fulls. But when I checked the jobs inside this auxiliary copy, I found only the Oracle data full backups; it does not contain the archive log backup jobs. The storage policy primary copy contains all data and archive log jobs.
Hello everyone. Our current storage is almost out of space, so we want to move all of the backup data to new storage. We are thinking of the following:
- Create a new disk library containing mount paths on the new storage.
- Move the data from the old mount paths to the new mount paths.
- Change the storage policy data path to the new mount path.
The reason we thought of the mount path move method is that the aux copy method would require stopping current backup jobs, while moving the mount path doesn’t. What is your advice on this approach?
Hi All, we had an internal discussion about which library type is the best choice for new customers. We often run Windows clusters with CSV volumes, Windows file clusters, or single servers with SAN-attached storage. In the past there have been a lot of ransomware problems with CSVs and file clusters. Do you have any information on which approach is better for preventing redirected I/O in a cluster, and for avoiding errors during maintenance? Also, is there any way to check whether the ransomware protection is working on a CSV / Windows file cluster? The option is set, but do we have a way to test that it is actually working?
Commvault 11.18 (soon to be 11.20). We are on the cusp of eliminating our secondary backups to tape. The benefit of a secondary copy on tape was the built-in air gap (and the ability to move it offsite for safekeeping). We plan to move to creating our secondary copies on disk in a different city. Commvault’s built-in ransomware protection is a no-brainer, but what about WORM? What are the implications of WORM storage for space consumption? Is there any scenario in which a WORM-enabled deduplicated secondary copy that is a true copy of the deduplicated primary copy (and with the same retention) would be any larger than the primary copy? Presumably, if 1,000 jobs share one block on the secondary storage, that block will not be removed until the last of those 1,000 jobs ages out. Any info is appreciated. Thanks!
Looking to start a replication group from a VM and default backup set to a mount path on a Dell PowerScale. Going forward this volume will be SAN-hosted instead of mounted via a server. Upon looking at the config, I don’t see an option. I’m aware replication groups are agent to agent. I thought about making a library with the location, but even that doesn’t allow it.
Since the other thread was marked as solved, I’ll start a new one. From what I understand, the Network Throttling setting on the media agent only affects backup jobs, not aux copy bandwidth. There is an option to limit aux copy bandwidth by setting the advanced setting "Throttle Network Bandwidth (MB/HR)" on the storage policy copy. Unfortunately that applies all day, as there is no way to set it per time interval. I have two aux copy jobs running for two different storage policy copies, each with the bandwidth limit set to 5000 MB/HR, which is roughly 11 Mbit/s. So running two of those shouldn't use more than 22 Mbit/s. Looking at the current throughput, they show 3.92 GB/hr and 2.97 GB/hr in the CommCell console; combined that is 6.89 GB/hr, roughly 15 Mbit/s. I understand that it's not 100% accurate, as the data is deduplicated. But looking at our monitoring I see two streams going to Azure, one using 30 Mb/s and the other 20 Mb/s. In fact the Aux copies tend to
CommCell v11. A backed-up server folder needs to be restored from multiple backups that are on multiple tapes over a one-year timespan (38 full backups). We need all the backups from the full year. Looking for a way to complete this short of doing individual recoveries. Suggestions? Thank you.
Afternoon all. I have a few questions; hopefully nothing too complicated! I have recently configured a tape library in our Commvault environment which appears to be working OK. I just had a few questions about configuring the tape library to suit our needs. The plan is to back up to tapes so they can be taken offsite daily and used in a DR scenario. We would like to keep 3 weeks' worth of data on the tapes and would like to have a tape for each day. I’ve configured the auxiliary copy job and the related schedules to run at a suitable time, which I think so far is fine; however, where I appear to be struggling is with configuring Commvault to back up to a new tape each day. We will have 21 tapes and, as an example, tape 001 will be week 1 Monday, tape 002 will be week 1 Tuesday, etc.; tapes 007, 008 and 009 will be Friday, Saturday and Sunday respectively. There will be two SPs backing up to tape. My question is: is it possible to configure CV so that once the backup job from SP2 is c
Hi Team, we are about to embark on the V4 to V5 DDB conversion process, but I thought I would ask here and see how it went for those who have completed it. We have a few partitioned DDBs of a reasonable size, and I am trying to gauge how long our backup outage might be, as we have to guesstimate on behalf of our customer. I can see that the pre-upgrade report does give estimates, so I’m wondering how ballpark they might be. One more thing: is there a way of identifying what version of DDB we have? Thanks