Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
The job has failed because the VSS snapshot could not be created.
Good day, please advise.

Error Code: [7:314] The job has failed because the VSS snapshot could not be created.

I have checked vssadmin list writers: no errors found (State: Stable, Last error: No error). The DDB backup disk has 260 GB free space, and Process Manager shows the VSS Provider service running.

Warm regards, Glenn Ngobeni
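Since vssadmin already reports the writers as stable, the next things worth checking are the VSS providers and shadow storage limits. Below is a minimal diagnostic sketch (not a Commvault tool) that wraps the same vssadmin commands from Python; it assumes an elevated prompt on the affected Windows client.

```python
# Minimal VSS health-check sketch: runs the same vssadmin commands
# discussed above. Run elevated on the affected Windows client.
import subprocess

def run(cmd):
    """Run a command and return its stdout as text."""
    return subprocess.run(cmd, capture_output=True, text=True, check=False).stdout

# Flag any writer that is not Stable or reports a last error.
writers = run(["vssadmin", "list", "writers"])
for block in writers.split("Writer name:")[1:]:
    if "Stable" not in block or "No error" not in block:
        print("Potential problem writer:", block.splitlines()[0].strip())

# Exhausted shadow storage is a common cause of snapshot creation failures.
print(run(["vssadmin", "list", "shadowstorage"]))
print(run(["vssadmin", "list", "providers"]))
```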
Tape library - 3 week sets
Afternoon all,

I have a few questions - hopefully nothing too complicated! I have recently configured a tape library in our Commvault environment which appears to be working OK; I just had a few questions around configuring the tape library to suit our needs. The plan is to back up to tapes so they can be taken offsite daily and used in a DR scenario. We would like to keep 3 weeks' worth of data on the tapes and would like to have a tape for each day. I've configured the auxiliary copy job and the related schedules to run at a suitable time, which I think is fine so far; where I appear to be struggling is with configuring Commvault to back up to a new tape each day. We will have 21 tapes; as an example, tape 001 will be week 1 Monday, tape 002 will be week 1 Tuesday, and so on - tapes 005, 006 and 007 will be Friday, Saturday and Sunday respectively. There will be two SPs backing up to tape - my question is, is it possible to configure CV so that once the backup job from SP2 is c…
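For what it's worth, the slot arithmetic of a rotation like this is simple enough to sanity-check in a few lines. This is only a sketch of the 21-tape cycle described above, not Commvault configuration; the cycle start date is an assumed example Monday.

```python
# Sketch of the 21-tape daily rotation: one tape per day, repeating every
# 3 weeks. The cycle start date is an example and must be a Monday.
from datetime import date

CYCLE_START = date(2023, 1, 2)  # example Monday beginning week 1
TAPES = 21                      # 3 weeks x 7 days

def tape_for(day: date) -> int:
    """Return the 1-based tape number (001-021) used on a given date."""
    return (day - CYCLE_START).days % TAPES + 1

print(f"{tape_for(date(2023, 1, 2)):03d}")   # 001 -> week 1 Monday
print(f"{tape_for(date(2023, 1, 8)):03d}")   # 007 -> week 1 Sunday
print(f"{tape_for(date(2023, 1, 23)):03d}")  # 001 -> cycle repeats after 21 days
```

In Commvault itself, forcing a fresh tape each day is normally handled by media management options (e.g. the copy's start-new-media settings) rather than a script; the code above is only the calendar math.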
My customer is currently at 11.24.60 - should I upgrade the environment to 2022E before starting to deploy the HSX cluster?
Hi, a quick one on the ISO 2.3 for reference architecture deployment, dvd_10072022_113351.iso. I don't know if I'm in the right place for this question, but: which Feature Release is this ISO based on? FR24? My customer is currently at 11.24.60 - should I upgrade the environment to 2022E before starting to deploy the HSX cluster? I've seen a lot of new features for monitoring and securing nodes! Thank you,
Commcell restore a folder from a year of backups
Commcell v11. A backed-up server folder needs to be restored from multiple backups which are on multiple tapes spanning one year (38 fulls). We need every backup from the full year and are looking for a way to complete this short of doing individual recoveries. Suggestions? Thank you.
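One scripted route, rather than 38 manual browse-and-restore operations, is to loop over the fulls with Commvault's open-source Python SDK (cvpysdk) and run a point-in-time restore per job into its own destination folder. The sketch below is an assumption-laden outline: every hostname, path, and credential is a placeholder, and the exact restore_out_of_place keyword arguments may vary by SDK version.

```python
# Hedged sketch: restore the same folder as of each full backup's end time
# into a separate directory, one restore job per full. All names below are
# placeholders, not values from the original post.
from cvpysdk.commcell import Commcell

cc = Commcell("commserve.example.com", "admin", "password")
subclient = (cc.clients.get("fileserver01")
               .agents.get("File System")
               .backupsets.get("defaultBackupSet")
               .subclients.get("default"))

# End times of the 38 fulls, e.g. exported from a job history report.
full_job_end_times = ["2022-01-31 22:00:00", "2022-02-28 22:00:00"]  # ...

for i, end_time in enumerate(full_job_end_times, start=1):
    subclient.restore_out_of_place(
        client="restorehost01",
        destination_path=f"E:/restores/full_{i:02d}",
        paths=["D:/data/target_folder"],
        to_time=end_time,  # browse the backup content as of this full
    )
```

The tapes would still be mounted job by job, so pre-staging them in the library avoids operator recalls mid-loop.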
Moving long-term backups to another library
Hello all, is there a possibility to move saved data from one library to another? We have two full backups on which we have set retention until next year. The rest of the backups continue with the normal retention of 30 days and 4 cycles. However, I would like to move the backups that need to be retained until next year to our object storage. I have already prepared the connection to the object storage; I just need to know how to move the data. Kind regards, Thomas
Secondary copy schedule
Hello, can I change the secondary copy schedule from the web console? I haven't used the web console very much, but with deploying a new CommCell I thought I'd look into it. I do know how to handle it from the Java console. The documentation didn't give me much, as it is a confusing mix of Java console and web console content. Best regards, Henrik
Tapes with jobs that have multiple retentions - what happens?
Greetings! I've been involved in backups for quite a while, but have mercifully been using drives, not tapes. I'm now having to consider tapes. We have multiple SLAs, including:

- A monthly full backup to tape, retention of 62 days, kept 1 month only
- A monthly full backup to tape, retention of 365 days - so 12 tape backups
- A quarterly full backup to tape, retention of 365 days - so 4 tape backups

The last full backup of the month goes to tape. So I foresee a single tape (or a group of tapes with a mess of…) carrying a mix of 1-month and 12-month retention times. Some of these tapes will have jobs that last a year as well as jobs that expired months earlier. What is everyone's experience with such a thing? We have over 750 servers involved here. CV has but one tape drive currently and an operations group to rotate tapes. Thank you in advance for any experience you can lend me. Mike Rucker
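The behaviour underneath the question: a tape only returns to the scratch pool once every job written to it has aged, so a single 365-day job pins the whole tape even if its neighbours expired months earlier. A toy illustration with made-up dates:

```python
# Toy model of mixed retention on one tape: the tape is reusable only when
# the LAST job on it ages, i.e. the max of all job expiry dates.
from datetime import date, timedelta

jobs_on_tape = [
    (date(2023, 1, 31), 62),    # monthly full, 62-day retention
    (date(2023, 1, 31), 365),   # monthly full, 365-day retention
    (date(2023, 3, 31), 365),   # quarterly full, 365-day retention
]

recyclable_on = max(run + timedelta(days=keep) for run, keep in jobs_on_tape)
print(f"Tape returns to scratch on {recyclable_on}")  # pinned by the 365-day jobs
```

This is why mixed-retention tiers are commonly split into separate storage policy copies, so each retention class lands on its own media.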
Move Backup Data to New Storage
Hello everyone,

Our current storage is almost out of space, so we want to move all of the backup data to new storage. We are thinking of the following:

- Create a new disk library containing mount paths on the new storage.
- Move the data from the old mount paths to the new mount paths.
- Change the storage policy data path to the new mount path.

The reason we thought of the move-mount-path method is that the aux copy method would require stopping current backup jobs, while moving the mount path doesn't. What is your advice about this approach?
Please confirm that the 11.28 version contains all of the fixes found in 11.26.
Below is the analysis from Commvault:

- StoreOnce aux copy failing due to "Unexpected copy parameters. SFileNum != || CcId != || Offset + != ."
- FR25 MR23 contains the fix for the issue you are seeing: https://documentation.commvault.com/11.25/assets/service_pack/updates/11_25_23.htm
- 3395 - AuxCopy jobs from HPE StoreOnce Catalyst libraries as source and destination may fail with error "Unexpected copy parameters. SFileNum != || CcId != || Offset + != ."
- Customer to install MR23 on the source MA.

So we decided to upgrade to 11.26, as it is the latest and has all the fixes from 11.25.32. But could we instead wait for 11.28 and upgrade to that? Will it have the same bug fixes as 11.26?
Oracle Archive Log Backup
Hello, I created an Oracle database full backup subclient that runs every day at 2:00 AM and an Oracle archive log backup subclient that runs every hour. Both data and archive logs use the same storage policy. At the storage policy level I created an auxiliary copy set to selective copy, all fulls. But when I check the jobs inside this auxiliary copy I find only the Oracle data full backups; it does not contain the archive log backup jobs. The storage policy's primary copy contains all data and archive log jobs.
Mount path to Mount path Data Migration
I need to check if there is any option to move data from one mount path to another mount path in the same library. I need this done to mitigate an overcommit issue on the back-end storage. I have 3-4 mount paths, one of which holds only a single job; I want to move that one job to any other mount path within the library and then delete the original, so the overcommit issue is resolved. Current version: V11 SP26.23. Back-end storage: NetApp.
Estimating How Long Tape to Disk Auxcopy will take
I am in the process of moving all our data from tape to a disk library and need to estimate how long this will take. Has anyone come up with a reasonably accurate way of predicting how long aux copying the data for a given copy would take? I have 6 tape drives in a library. It has been used as a target for multiple aux copy operations for multiple storage policies. Due to tape contention, a multiplexed multi-stream aux copy for a storage policy copy could be written to 1 to 4 drives depending on drive availability. I am also interested to know how this works: when the operation began it copied the oldest data first, but as time went on, completed and partially copied data began to appear throughout the timeline. Is there a way to make the most recent data copy first? Does an aux copy operation begin by copying all jobs on a given mounted piece of media, or does it copy only some jobs and have to mount that tape again later?
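As a starting point before anything fancier, a back-of-the-envelope estimate is remaining data divided by effective aggregate read rate, where the per-drive rate and the average number of usable drives should be measured from your own job history rather than the values assumed below.

```python
# Back-of-the-envelope aux copy duration estimate. All three inputs are
# assumptions to replace with numbers from your own environment.
data_to_copy_tb = 120     # data remaining on the copy
drive_rate_mb_s = 150     # sustained read rate per tape drive
avg_active_drives = 3     # of 6, allowing for contention and availability

effective_mb_s = drive_rate_mb_s * avg_active_drives
hours = data_to_copy_tb * 1024 * 1024 / effective_mb_s / 3600
print(f"~{hours:.0f} hours at {effective_mb_s} MB/s aggregate")  # ~78 h here
```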
Data Transferred over network
I see this all the time and I never understood it. We make backup copies from disk to tape, both attached to the same media agent. During these aux copies there is a "Data Transferred over Network" number, which I think should be 0, but there is usually a number there. For example, this aux copy job, still running, shows:

Total Data Processed: 3.23 TB
Data Transferred Over Network: 107.95 GB
Total Data to Process: 4.7 TB
Number of Pending Prunable Records
Good afternoon, I wanted to check with the community before opening a case with support regarding the number of pending prunable records. On a library that had started to grow by 1 TB per day, we ran a space reclamation operation and recovered 30 TB of the 65 TB it occupied, even though this library has only 6 TB in use. It is strange. We are still seeing a large number of pending prunable records, but running space reclamation does not remove them. Does anyone know the reason? Thanks
Proper disk configuration for new CV server
Hi, we have recently acquired a new server and storage as part of our hardware refresh for the Commvault server. The new server has the following:

- 2x 480 GB SATA SSD configured as RAID 1 (OS installed)
- 2x 1.6 TB PCIe SSD - still deciding whether to use host-based mirroring or leave them as standalone disks; intended use is for the SQL database, DDB, and index

The old server has the following disk configuration (Commvault V11 SP16 HPK17):

- OS - 558 GB - 173 GB used
- SQL - 278 GB - 866 MB used
- DDB - 418 GB - 25.6 GB used

Any recommendations for the new server's disk configuration? If I use host-based mirroring, will it impact the server's performance? Which Commvault version should I use, 2022E or 11.26? Thank you in advance.
I have an auxiliary disk-to-disk copy and the throughput is very low; I see a lot of intermittent reading from the disk where the data lives.
From the CVJobReplicatorODS log; the job number is 177027:

346796 56e20 09/02 18:11:14 177027 Target copy is single instanced
346796 56e20 09/02 18:11:14 177027 Block level SI is set. Going to set minimum single instanceable size to block size
346796 56e20 09/02 18:11:14 177027 Min SI Data Size [128 KB], SI Block Size [128 KB]
346796 56e20 09/02 18:11:14 177027 EncryptionType:PASSTHRU for target copy:
346796 56e20 09/02 18:11:14 177027 EncryptionType:PASSTHRU(NOENCRYPTION) for target copy: as there are no encrypted src copy files.
346796 56e20 09/02 18:11:14 177027 N/w agents configured before/after firewall check = [2/2]. Firewalled = 1
346796 56e20 09/02 18:11:14 177027 CVArchive::StartPipeline() - StartPipeline SI configuration -[srcClientName - commvault-shf] Block Level [true], Block Size , File Level [false], Min Signature Size
346796 56e20 09/02 18:11:14 177027 CPipelayer::InitiatePipeline Initiating SDT connection [000000D50C41C7E0] from 10.10.165.221:8400(commvault-shf) to…
DDB Network interface
I am creating a partitioned DDB with two media agents. Which interface of each media agent should I add? I have a dedicated NIC available. Do I need to add the IP address of the media agent? What happens if I leave it at the default? I have implemented this in the past, but I cannot remember this part.
Best practices to ensure maximum streams for Aux to MCSS
Hello Community, we are doing a pilot pushing around 50 TB of data over to MCSS. We have noticed that when the jobs are progressing there are multiple streams of different sizes at the beginning of the job, and as the job progresses the number of streams goes down. Is there a way to ensure that the data being pushed to MCSS is distributed equally among multiple streams, so that we get consistent performance out of the job until the end?
Is it possible to change an Azure Cloud Archive library into an Azure Cloud Combined Storage Archive/Cool library?
Hello Commvault Community! I have a question on behalf of one of our clients. We created a cloud library (Azure Archive) and copied around 40 TB of data into the cloud. It took us over 2 months to transfer this amount of data, and when it completed we realized that there is a problem with the Cloud Recall workflow. When we try to "Browse and Restore" from the Azure Archive copy precedence, it tries to reach an index on this archive cloud: it runs an "Index Restore" job, and because it can't find the index data on archive storage, it runs the Archive Recall workflow to recall the index data in order to list the backup data. This workflow fails after a few seconds, and we see an error in the Browse and Restore window: "The index cannot be accessed. Try again later. If the issue persists, contact support." We decided that restoring an index from archive cloud isn't a good idea, because even if it worked, it would take too much time (a few hours just to list backup content (index res…
Pruning on isolated MA
Hi Commvaulters, I had a question regarding data aging/pruning on isolated MAs. We have isolated MAs (air-gapped: they are powered on only during aux copies, and when the last aux copy finishes, the MAs are shut down), and the data is deduplicated. All jobs concerning these MAs are scheduled to align with the air gap window (aux copy, DDB backup, data aging, etc.). My concern is that the air gap window may not be sufficient to process data pruning on the storage, since the MAs are shut down directly after the aux copies. Can the air gap window be an issue for the data aging/pruning process? If someone could give us some guidance on this, that would be great. Regards.
Secondary Copy - Migration from Tapes to Azure Storage
Hi guys, our on-premises Commvault infrastructure has 2x media agents, 1x tape library, and a few proxies. The backups: primary copy on disk and secondary copy on tape. We are planning to migrate the secondary copies over to Azure Storage, and here's our plan:

- Place a media agent in Azure
- Place the DDB in Azure (on the media agent)
- Switch the auxiliary copy job
- Migrate old data from tapes to Azure

Please provide some guidance and best practices on this. Thanks in advance.
Using multiple drives for an aux copy job
Hi all, we would like to enable parallel copying, so that one aux copy job writes to multiple tapes at the same time to speed up the copy, and also the restore process. I have already increased the device streams on the copy to 2, but it continues to copy with only one drive. Are there any other settings needed for this? Thanks in advance