Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
Hi Team, we have a MediaAgent with four mount points, all coming from a backend SAN. One of the four mount points shows an error while writing data. I checked Disk Management and the disk was showing as read-only, so I rebooted the MA and made the disk read/write again. I created a test folder on the disk, which works fine, but when I run storage validation it gives the error below. Has anyone had this kind of issue?

7944 173c 09/09 10:20:44 685427 Scheduler Set pending cause [Failed to mount the disk media in library [LIB05] with mount path [C:\CommVaulttLibrary\503] on MediaAgent [cv5]. Operating System could not find the path specified. Please ensure that the path is correct and accessible.
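For anyone hitting the same read-only condition: a minimal sketch of clearing the flag with diskpart so the change persists across reboots. The disk number below is an assumption; confirm it with `list disk` first.

```shell
# Build a diskpart script and run it from an elevated prompt.
# "select disk 1" is an assumption -- verify the number via "list disk".
(
  echo select disk 1
  echo attributes disk clear readonly
) > clear_ro.txt
diskpart /s clear_ro.txt
```

If the read-only flag comes back after a reboot, the SAN side (LUN presentation or a persistent reservation) is a more likely culprit than the OS.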
When following the document on how to stop/start a HyperScale X appliance node (https://documentation.commvault.com/2022e/expert/133467_stopping_and_starting_hyperscale_x_appliance_node.html), I get to the step to unmount the CDS vdisk. After getting the proper vdisk name and running `umount /ws/hedvig/<vdiskname>`, I get the message "device is busy". What's the proper way to remediate this and continue? Thanks, G
We have a storage policy with aux copies that was sending disk backups to a tape library. The library that contained all these tapes has been decommissioned. A new library was stood up and all the tapes were moved into it. However, the aux copy that represents this data belongs to a different media server and physical library. We are trying to figure out how to take the data sent to the aux copy in the old storage policy and move it to a new Cloudian array that has been configured as a cloud library.
Failed to verify the device from MediaAgent - Failed to check cloud server status Error: The certificate file is not found. Error = 44336
Hello. I'm trying to configure an Oracle Cloud Infrastructure Object Storage library, but it's showing an error. I have already entered all the information required to configure it: Service Host, Tenancy OCID, User OCID, Key's Fingerprint, PEM Key Filename and Bucket. I also created a config file in the .oci folder. What do I have to do to solve this problem?
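For comparison, the config file the OCI SDK expects in `~/.oci/config` looks like the fragment below. All values are placeholders and the region is only an example; also note the account running the MediaAgent services must be able to read both this file and the PEM key it points to.

```
[DEFAULT]
user=ocid1.user.oc1..<unique_user_id>
fingerprint=<key_fingerprint>
key_file=/path/to/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<unique_tenancy_id>
region=us-ashburn-1
```

A mismatch between the fingerprint here and the public key actually uploaded to the OCI user is a common cause of authentication failures.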
I have an auxiliary disk-to-disk copy and the throughput is very low; I see a lot of intermittent reading from the disk where the data lives.
From the CVJobReplicatorODS log, job number 177027:

346796 56e20 09/02 18:11:14 177027 Target copy is single instanced
346796 56e20 09/02 18:11:14 177027 Block level SI is set. Going to set minimum single instanceable size to block size
346796 56e20 09/02 18:11:14 177027 Min SI Data Size [128 KB], SI Block Size [128 KB]
346796 56e20 09/02 18:11:14 177027 EncryptionType:PASSTHRU for target copy:
346796 56e20 09/02 18:11:14 177027 EncryptionType:PASSTHRU(NOENCRYPTION) for target copy: as there are no encrypted src copy files.
346796 56e20 09/02 18:11:14 177027 N/w agents configured before/after firewall check = [2/2]. Firewalled = 1
346796 56e20 09/02 18:11:14 177027 CVArchive::StartPipeline() - StartPipeline SI configuration -[srcClientName - commvault-shf] Block Level [true], Block Size , File Level [false], Min Signature Size
346796 56e20 09/02 18:11:14 177027 CPipelayer::InitiatePipeline Initiating SDT connection [000000D50C41C7E0] from 10.10.165.221:8400(commvault-shf) to
I'm having problems with DDBBackup jobs at my DR site. I changed the configuration from every 6 hours to once per day, but the backup from yesterday is still running and I'm at the 22-hour point. I'd like to kill this job and let a fresh one start, but I understand that the DDBBackup uses snapshots, and I'm afraid that if I kill it there won't be a proper snapshot cleanup. Is it OK to kill a DDBBackup job that's been running this long? Ken
Good afternoon. I wanted to check with the community before opening a case with support, regarding the number of outstanding prunable blocks. On a library that had started growing by 1 TB per day, we ran a space reclamation and recovered 30 TB of the 65 TB it occupied, even though this library has only 6 TB in use. It is strange. We are still seeing a large number of outstanding prunable blocks, but running space reclamation does not remove them. Does anyone know the reason? Thanks
I am in the process of moving all our data from tape to a disk library and need to estimate how long this will take. Has anyone come up with a reasonably accurate way of predicting how long aux copying the data for a given copy would take?

I have 6 tape drives in a library. This has been used as a target for multiple aux copy operations for multiple storage policies. Due to tape contention, a multiplexed multi-stream aux copy for a storage policy copy could be written to 1 to 4 drives depending on drive availability.

I am also interested to know how this works. When the operation began it copied the oldest data first, but as time went on, completed and partially copied data began to appear throughout the timeline. Is there a way to make the most recent data copy first? Does an aux copy operation copy all jobs on a given mounted piece of media, or does it copy only some jobs and have to mount that tape again later?
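As a rough first pass, the estimate is just data volume divided by aggregate effective throughput. The figures below are assumptions chosen to illustrate the arithmetic, not measured values.

```shell
# Back-of-the-envelope aux copy duration (all inputs are assumptions).
total_tb=50            # data left to copy, in TB
streams=4              # tape drives effectively available in parallel
mbps_per_stream=100    # observed effective MB/s per stream -- well below
                       # LTO native speed once contention and dedup
                       # rehydration are factored in

total_mb=$(( total_tb * 1024 * 1024 ))
hours=$(( total_mb / (streams * mbps_per_stream) / 3600 ))
echo "Estimated duration: ~${hours} hours"
```

In practice, measure the effective throughput of a running aux copy from the Job Controller and plug that in; contention makes the per-stream figure the dominant unknown.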
I am executing a database tape backup with RMAN. The backup fails with RMAN errors, and in the tape library the drives are observed with a "Reservation Stuck" status, while the drive status shows "Drive Fully Accessible". What indicates a Reservation Stuck?

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of uncatalog command on ch1 channel at 08/29/2022 16:13:07
ORA-00204: error in reading (block 1, # blocks 1) of control file
ORA-00202: control file: '+DS242600413/backup.ctl.galaxy.1'
ORA-15078: ASM diskgroup was forcibly dismounted
RMAN> Recovery Manager complete.
ORACLE error from target database:
ORA-00204: error in reading (block 1, # blocks 1) of control file
ORA-00202: control file: '+DS242600413/backup.ctl.galaxy.1'
ORA-15078: ASM diskgroup was forcibly dismounted
Hello Team, I am getting the error below on the majority of running jobs. I have checked the storage end and also the MediaAgents (the LUNs are attached to the MediaAgent) and everything looks good. What could be the cause of the error?

Failed to mount the disk media in library [ARCHIVE_DISKPROD] with mount path [B:\Archive_DiskLibrary\MP10] on MediaAgent [hq_media_svr3]. Operation could not be completed in timeout interval. Please check the following:
1. Library and drive is functioning correctly.
2. Library and Drive management services are running.
3. All other MediaAgent services are running.
4. The time out period on the Expert Storage Configuration Properties Window in the CommCell Console.
5. Cleaning media in Assigned Media Group.
Source: hq-vm-commserv, Process: MediaManager
Afternoon folks, I have an auxiliary copy that backs up to tape. I have deleted all of the existing jobs on the tape media in the hope of starting the backup chain from scratch. However, since deleting the backup jobs, if I go to the storage policy, right-click the tape auxiliary copy and view "media not copied", it is blank. I was expecting to see all backup jobs for the backup period I selected. Is it possible to "restart the schedule", so to speak, without deleting the auxiliary copy job? TIA
Hello, I created an Oracle database full backup subclient that runs every day at 2:00 AM and an Oracle archive log backup subclient that runs every hour. Both data and archive logs use the same storage policy. At the storage policy level I created an auxiliary copy with "selective copy: all fulls". But when I check the jobs inside this auxiliary copy, I find only the Oracle data full backups; it does not contain the archive log backup jobs. The storage policy primary copy contains all data and archive log jobs.
Hello everyone, our current storage is almost out of space, so we want to move all of the backup data to new storage. We are thinking of the following:
- Create a new disk library containing mount paths on the new storage.
- Move the data from the old mount paths to the new mount paths.
- Change the storage policy data path to the new mount path.
The reason we thought of using the mount path move method is that using the aux copy method would require stopping current backup jobs, while moving the mount path does not. What is your advice on this approach?
Hi all, we had an internal discussion about which library setup is the best choice for new customers. Often we run Windows clusters with CSV volumes, Windows file clusters, or single servers with SAN-attached storage. In the past there have been a lot of problems with ransomware on CSVs and file clusters. Do you have more information on which approach is better for preventing redirected I/O in a cluster, and for avoiding errors during maintenance? Also, is there any way to check whether the ransomware protection is working on a CSV / Windows file cluster? Sure, the option is set, but is there a way to test that it is actually working?
Greetings! I've been involved in backups for quite a while, but have mercifully been using drives, not tapes. I'm now having to consider tapes. We have multiple SLAs, including:
- A monthly full backup to tape, retention of 62 days, lasting 1 month only
- A monthly full backup to tape, retention of 365 days, so 12 tape backups
- A quarterly full backup to tape, retention of 365 days, so 4 tape backups
The last full backup of the month goes to tape. So I foresee a single tape (or a group of tapes with a mess of…) holding both 1-month and 12-month retention times. Some of these tapes will have jobs that last a year as well as jobs that expired months earlier. What is everyone's experience with such a thing? We have over 750 servers involved here. CV has but one tape drive currently and an operations group to rotate tapes. Thank you in advance for any experience you can lend me. Mike Rucker
Looking to set up a replication group from a VM and default backup set to a mount path on a Dell PowerScale. Going forward, this volume will be SAN-hosted instead of mounted via a server. Upon looking at the config, I don't see an option. I'm aware replication groups are agent-to-agent. I thought about making a library with that location, but even that doesn't allow it.
CommCell v11. A backed-up server folder needs to be restored from multiple backups which are on multiple tapes spanning one year (38 full backups). We need all backups from the full year. Looking for a way to complete this short of doing individual restores. Suggestions? Thank you.
I'm getting the following alert email roughly once an hour:
> Anomaly Notification
> The system detected an unusual drop in the pruning performance for the following databases in commcell <CommServer_Host>
> Deduplication Database / Reason
> HyperScale_Primary: increase in (CommServe Job Records to be Deleted)
> CV Cloud Storage: increase in (CommServe Job Records to be Deleted)
> Please click here for more details.
When I follow the "click here" link, I see:
> 1 CommServe Job Records to be Deleted
This has been going on for a couple of weeks. I don't think an annoying email is a big enough deal to open a ticket for, but I'd still like to clean this up. Does anyone know what the problem is and how to fix it? I'm sorry to say the Commvault help pages on this are not very useful. Ken
Hi, do you have a best practice for how to utilize MediaAgents in a GRID configuration? For example, we have 4 MAs and, say, 3 subclients. Subclient 1 always uses one MA (with additional VSA proxies within the job) for VMware backups. So there is some sorting mechanism for the other 2 subclients to either pick up an idling MA to start the backup, or use the one already used by subclient 1. But is there an option to utilize the full potential of a GRID solution, where more than one MA can serve the backup of a single subclient? I hope it's clear what I am trying to achieve.
Afternoon all. I have a few questions, hopefully nothing too complicated! I have recently configured a tape library in our Commvault environment, which appears to be working OK; I just had a few questions around configuring it to suit our needs. The plan is to back up to tapes so they can be taken offsite daily and used in a DR scenario. We would like to keep 3 weeks' worth of data on the tapes and would like a tape for each day. I've configured the auxiliary copy job and the related schedules to run at a suitable time, which I think is fine so far; where I appear to be struggling is in configuring Commvault to back up to a new tape each day. We will have 21 tapes; as an example, tape 001 will be week 1 Monday, tape 002 will be week 1 Tuesday, etc., while tapes 007, 008 and 009 will be Friday, Saturday and Sunday respectively. There will be 2 SPs backing up to tape. My question is: is it possible to configure CV so that once the backup job from SP2 is c
CV_MAGNETIC\V_1508600] [The parameter used for the current operation is not supported by the Operating System, OS Drivers or the underlying Hardware.]. For more help, please call your vendor's support hotline.
Hello community, thanks for all the answers. I am having the same issue with CIFS shares on vSAN Dell storage. How do I increase the SMB credits to 256? Via a Windows registry key on the MediaAgent? Thanks.

Error occurred in Disk Media, Path [CV_MAGNETIC\V_1508600] [The parameter used for the current operation is not supported by the Operating System, OS Drivers or the underlying Hardware.]. For more help, please call your vendor's support hotline.
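On the classic Windows SMB client stack, the concurrency cap was the `MaxCmds` value under LanmanWorkstation; whether your OS version's SMB2/3 credit handling still honors it is version-dependent, so treat the .reg fragment below as a sketch to confirm with support before applying. `dword:00000100` is hex for 256, and a restart of the Workstation service (or the MediaAgent itself) is needed for the change to take effect.

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters]
"MaxCmds"=dword:00000100
```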