Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 621 Topics
- 3,252 Replies
Good day, please advise.

Error Code: [7:314] The job has failed because the VSS snapshot could not be created.

I have checked vssadmin list writers: no errors found. State: Stable. Last error: No error. DDB Backup disk free space is 260 GB. In Process Manager, the VSS Provider Service is running.

Warm regards,
Glenn Ngobeni
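When a VSS error like this appears even though a manual check looks clean, it can help to script the writer check so it runs right before the backup window and catches writers that flip out of the Stable state. A minimal sketch (not a Commvault tool; it parses the standard `vssadmin list writers` output format):

```python
import re
import subprocess


def failed_writers(output: str):
    """Return (writer, state, last_error) tuples for writers that are not healthy."""
    problems = []
    # Each writer section begins with "Writer name:" in vssadmin output.
    for block in output.split("Writer name:")[1:]:
        name = block.splitlines()[0].strip().strip("'")
        state = re.search(r"State:\s*\[\d+\]\s*(.+)", block)
        error = re.search(r"Last error:\s*(.+)", block)
        state_txt = state.group(1).strip() if state else "Unknown"
        error_txt = error.group(1).strip() if error else "Unknown"
        if state_txt != "Stable" or error_txt != "No error":
            problems.append((name, state_txt, error_txt))
    return problems


# On the client itself you would feed it live output (Windows only):
# out = subprocess.run(["vssadmin", "list", "writers"],
#                      capture_output=True, text=True).stdout
# print(failed_writers(out))
```

Scheduling something like this a few minutes before the job start can tell you whether the writer was already unhealthy when Commvault requested the snapshot.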
Afternoon all,

I have a few questions - hopefully nothing too complicated! I have recently configured a tape library in our Commvault environment which appears to be working OK - I just had a few questions around configuring the tape library to suit our needs.

The plan is to back up to tapes so they can be taken offsite daily and used in a DR scenario. We would like to keep 3 weeks' worth of data on the tapes and would like to have a tape for each day.

I've configured the auxiliary copy job and the related schedules to run at a suitable time, which I think is fine so far; where I appear to be struggling is with configuring Commvault to back up to a new tape each day. We will have 21 tapes and, as an example, tape 001 will be week 1 Monday, tape 002 will be week 1 Tuesday, etc. - tape 007, 008 and 009 will be Friday, Saturday and Sunday respectively.

There will be 2 SPs backing up to tape - my question is, is it possible to configure CV so that once the backup job from SP2 is c
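For what it's worth, a 21-tape, 3-week rotation like the one described is easy to model as date arithmetic, which can help when labelling media or sanity-checking the schedule. A rough sketch, assuming the cycle starts on a week 1 Monday and using a simple Monday=001 numbering (adjust to match your own labelling scheme):

```python
from datetime import date


def tape_label(day: date, cycle_start: date) -> str:
    """Map a date to one of 21 tape labels in a repeating 3-week cycle.

    Week 1 Monday -> 001, Week 1 Tuesday -> 002, ..., Week 3 Sunday -> 021,
    then the cycle repeats. cycle_start must be the Monday that begins week 1.
    """
    position = (day - cycle_start).days % 21  # position within the 3-week cycle
    return f"{position + 1:03d}"
```

This is only a modelling aid; the actual per-day tape selection is done through Commvault's media management and scheduling, not by a script like this.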
CommCell v11. A backed-up server folder needs to be restored from multiple backups which are on multiple tapes over a one-year timespan (38 full backups). We need all backups from the full year. Looking for a way to complete this short of doing individual recoveries. Suggestions? Thank you.
Hi guys,

In order to configure hardware WORM alongside Commvault's WORM, we found this note - "If you have multiple storage policies/copies created using the cloud storage pool, make sure to set the same number of days as retention in all the copies." - under the link below:
https://documentation.commvault.com/11.24/expert/9251_configuring_worm_storage_mode_on_cloud_storage.html

Does that mean that all the secondary copies that have to be stored on the WORM storage must have the same retention? It got me a little confused. Since in our case we will create, from each storage policy, a new copy that will be saved to the WORM storage, do we have to assign the same retention to all the newly created copies?

Thanks in advance
Hi guys,

We are implementing an air-gapped DR site, so the DR site only receives an offline copy from the main one through aux copies. For the aux copies, we disabled their schedule in the "System Created Autocopy Schedule" and created a new, specific schedule which starts the aux copies to the DR site after the blackout window.

Since the DR MAs and library are only accessible during the air gap window, I was wondering whether rescheduling the remaining system-created jobs (DDB Backup, DDB Verification, DDB Space Reclamation, Data Aging) to the air gap period causes any issue when they run at the same time as the aux copies (our aux copies use deduplication). I know that running DDB Backup jobs may cause issues with aux copies, as per this page:
https://documentation.commvault.com/11.24/expert/12504_deduplication_database_backup.html

So my question is, is it possible to launch all the previously listed system jobs at the same time as aux copies without an
Hello,

Can I change the secondary copy schedule from the Web Console? I haven't used the Web Console very much, but with deploying a new CommCell I thought I'd look into it. I do know how to handle it from the Java Console. The documentation didn't give me much, as it is a confusing mix of Java Console and Web Console content.

Best regards,
Henrik
Greetings!

I've been involved in backups for quite a while, but have mercifully been using drives, not tapes. I'm now having to consider tapes. We have multiple SLAs, including:

- A monthly full backup to tape, retention of 62 days, lasts 1 month only
- A monthly full backup to tape, retention of 365 days - so 12 tape backups
- A quarterly full backup to tape, retention of 365 days - so 4 tape backups

The last full backup of the month goes to tape. So I foresee a single tape (or a group of tapes with a mess of…) holding both 1-month and 12-month retention times. Some of these tapes will have jobs that last a year as well as jobs that expired months earlier. What is everyone's experience with such a thing? We have over 750 servers involved here. CV has but one tape drive currently and an operations group to rotate tapes.

Thank you in advance for any experience you can lend me…

Mike Rucker
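One way to reason about that retention mix is to compute, per job on a tape, when it falls out of retention; jobs with 62-day and 365-day retention written on the same day diverge quickly, which is exactly how a tape ends up holding both live and long-expired jobs. A small illustrative sketch (the job names and dates are made up):

```python
from datetime import date, timedelta


def expiry(job_date: date, retention_days: int) -> date:
    """Date on which a job ages out of retention."""
    return job_date + timedelta(days=retention_days)


def live_jobs(jobs, on: date):
    """jobs: iterable of (name, backup_date, retention_days).

    Return the names of jobs still within retention on the given date.
    """
    return [name for name, d, r in jobs if expiry(d, r) >= on]
```

Running this over the jobs on a candidate tape shows when the *last* job expires, i.e. the earliest date the tape itself can return to the scratch pool.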
Hello everyone,

Our current storage is almost out of space, so we want to move all of the backup data to new storage. We are thinking of the following:

- Create a new disk library containing mount paths on the new storage.
- Move the data from the old mount paths to the new mount paths.
- Change the storage policy data path to the new mount path.

The reason we thought of the mount path move method is that the aux copy method would require stopping current backup jobs, while moving the mount path does not. What is your advice about this approach?
Below is the analysis from Commvault:

- StoreOnce aux copy failing due to "Unexpected copy parameters. SFileNum  !=  || CcId  !=  || Offset  +  != ."
- FR25 MR23 contains the fix for the issue you are seeing: https://documentation.commvault.com/11.25/assets/service_pack/updates/11_25_23.htm
- 3395 - AuxCopy jobs with HPE StoreOnce Catalyst libraries as source and destination may fail with error "Unexpected copy parameters. SFileNum  !=  || CcId  !=  || Offset  +  != ."
- Customer to install MR23 on the source MA.

So we decided to upgrade to 11.26, as it is the latest and has all the fixes as of 11.25.32. But can we wait for 11.28 and upgrade to 11.28 instead? Will it have the same bug fixes as 11.26?
Hello,

I created an Oracle database full backup subclient that runs every day at 2:00 AM, and an Oracle archive log backup subclient that runs every hour. Both data and archive logs use the same storage policy. At the storage policy level I created an auxiliary copy set to selectively copy all fulls. But when I checked the jobs inside this auxiliary copy, I found only the Oracle data full backups; it does not contain the archive log backup jobs. The storage policy primary copy contains all data and archive log jobs.
I need to check if there is any option to move data from one mount path to another mount path in the same library. I need this done to mitigate an over-commit issue at the back-end storage. I have 3-4 mount paths, one of which contains only a single job; I want to move that one job to any other mount path within the library and then delete the original mount path, so that the over-commit issue is solved.

Current version: V11 SP26.23
Back-end storage: NetApp
I am in the process of moving all our data from tape to a disk library and need to estimate how long this will take. Has anyone come up with a reasonably accurate way of predicting how long aux copying the data for a given copy would take?

I have 6 tape drives in a library. This has been used as a target for multiple aux copy operations for multiple storage policies. Due to tape contention, a multiplexed, multi-stream aux copy for a storage policy copy could be written to 1 to 4 drives depending on drive availability.

I am also interested to know how this works. When the operation began it copied the oldest data first, but as time went on, completed and partially copied data began to appear throughout the timeline. Is there a way to make the most recent data copy first? Does an aux copy operation begin by copying all jobs on a given mounted piece of media, or does it only copy some jobs and have to mount that tape again later?
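Absent a product-provided estimator, a back-of-envelope calculation from drive count and per-drive read throughput at least bounds the duration. The efficiency factor below is an assumption to discount tape mounts, contention and stream imbalance; it is not a Commvault figure:

```python
def estimate_hours(total_tb: float, drives: int, mb_per_sec_per_drive: float,
                   efficiency: float = 0.7) -> float:
    """Rough aux copy duration estimate in hours.

    efficiency (assumed, tune from observed jobs) discounts mount time,
    drive contention and uneven stream sizes.
    """
    total_mb = total_tb * 1024 * 1024
    effective_rate = drives * mb_per_sec_per_drive * efficiency  # MB/s overall
    return total_mb / effective_rate / 3600
```

A more reliable approach in practice is to time one completed aux copy, back-solve the effective rate, and reuse that figure for the remaining copies.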
Good afternoon,

I wanted to check with the community before opening a case with support, regarding the number of outstanding prunable blocks. On a library which has started to grow by 1 TB per day, we ran a space reclamation and recovered 30 TB of the 65 TB it had in size, even though this library has only 6 TB in use. It is strange. We are seeing a large number of outstanding prunable blocks, but running the space reclamation does not remove them. Does anyone know the reason?

Thanks
Hi, please assist.

A DDB reconstruction is in progress; the job is in the "add records" phase but gets stuck on "failed to start controller on media agent". Services are running on both the source media agent and the problematic media agent, and some other jobs are using the same media agent. The CommServe is on 11.25.14.

I thank you.
Regards,
Glenn
Hello Community,

We are doing a pilot, pushing around 50 TB of data over to MCSS. We have noticed that while the jobs are progressing there are multiple streams of different sizes at the beginning of the job, and as the job progresses the number of streams goes down. Is there a way to ensure that the data being pushed to MCSS is distributed among multiple streams equally, so that we get consistent performance out of the job till the end?
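Commvault's internal stream allocation isn't something we can script directly, but the underlying idea - keeping streams of roughly equal size so that none finishes early and drops out - is the classic greedy load-balancing problem. An illustrative sketch of that idea:

```python
def balance(item_sizes, streams: int):
    """Greedy balancing: assign each item to the currently smallest stream.

    item_sizes: sizes (e.g. GB) of the units of work to distribute.
    Returns one list of sizes per stream.
    """
    bins = [[] for _ in range(streams)]
    totals = [0] * streams
    # Placing the largest items first keeps the final totals close together.
    for size in sorted(item_sizes, reverse=True):
        i = totals.index(min(totals))
        bins[i].append(size)
        totals[i] += size
    return bins
```

When the work units themselves are very uneven (one huge subclient among small ones), no allocation can keep all streams busy to the end, which is often why stream counts taper off late in a job.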
Is it possible to change an Azure Cloud Archive library into an Azure Cloud Combined Storage Archive/Cool library?
Hello Commvault Community!

I have a question on behalf of one of our clients. We created a cloud library (Azure Archive) and copied around 40 TB of data into the cloud. It took us over 2 months to transfer this amount of data; when it completed, we then realized that there is a problem with the Cloud Recall workflow. When we try to "Browse and Restore" from the copy precedence (Azure Archive), it tries to reach an index from this archive cloud: it runs an "Index Restore" job and it can't find the index data because it is archive storage, so it runs an Archive Recall workflow to recall the index data in order to list the backup data. This workflow fails after a few seconds and we see an error in the Browse and Restore window: "The index cannot be accessed. Try again later. If the issue persists, contact support."

We decided that restoring an index from the archive cloud isn't a good idea, because even if it worked, it would take too much time (a few hours just to list backup content (index res
Hi Commvaulters,

Had a question regarding data aging/pruning on isolated MAs. We have isolated MAs (air-gapped: they are powered on only during aux copies, and when the last aux copy finishes the MAs are shut down), and the data is deduped. All jobs concerning the MAs are rescheduled to align with the air gap window (Aux Copy, DDB Backup, Data Aging, etc.).

My concern now is that the air gap window may not be sufficient to process the data pruning on the storage, since the MAs are shut down directly after the aux copies. Can the air gap window be an issue for the data aging/pruning process? If someone could give us some guidance on this, that would be great.

Regards.
Hi Commvaulters,

Can someone advise me on the network ports that need to be open in order to perform an aux copy between 2 MAs? We installed a new MA on a remote site, and we want to open only the needed ports between it and the CS (for communication) and between it and the MAs located on the main site.

Please note that we are running CV version 11.24. I know that the main ports are 8400 for communication and 8403 for data transfer. Are there any others that need to be opened? We want to minimize port openings in order to fully secure the remote MA.

Regards.
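Before raising further firewall requests, it can be handy to verify reachability of the ports you already know about (8400/8403) from the new MA towards the CS and the main-site MAs. A small TCP connect check (the hostnames in the comment are placeholders):

```python
import socket


def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Example usage from the remote MA (placeholder hostname):
# for port in (8400, 8403):
#     print(port, port_open("mediaagent01.example.com", port))
```

Note this only proves the firewall path is open; if the ports in use have been customized, or network topologies/one-way firewall rules are configured in Commvault, the effective port list can differ, so confirm against your own CommCell settings.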
Hi guys,

Our on-premises Commvault infrastructure has 2x media agents, 1x tape library, and a few proxies. The backups: primary on disks and secondary on tapes. We are planning to migrate the secondary copies over to Azure Storage, and here's our plan:

- Place a media agent in Azure
- Place the DDB in Azure (on the media agent)
- Switch the auxiliary copy job
- Migrate old data from tapes to Azure

Please provide some guidance and best practice on this.

Thanks in advance.
Hi,

I have a selective copy that backs up a week's data to a tape, and I export this tape offsite every Friday. This week I did not have any more scratch media, and if I check "media not copied" on the selective copy I see more data. Now I have a scratch tape, and if I check failed jobs for the storage policy there are several with status "not selected". How do I relaunch those jobs? If I search job history for the same job IDs, they show status completed, because it is only the selective copy that was not saved.
Hi all,

We would like to enable parallel copying, so that one aux copy job writes to multiple tapes at the same time, to speed up the copy but also the restore process. I have already increased the device streams on the copy to 2, but it continues to copy with only one drive. Are there any other settings to be made for this?

Thanks in advance