Storage and Deduplication
Discuss any topic related to storage or deduplication best practices
- 776 Topics
- 3,673 Replies
Wanting tapes that are not full to be written to until they are full, and the "full/appendable" designation
I have two hopefully related questions about tape writing/usage; please pardon my ignorance. :) I am seeing tapes that are never written to again and are not full at all, with this message: "Marked full/appendable due to media group switching to a different data path". My assumption is that the tape was being written and then something happened to the job, but shouldn't it not be marked as anything, and just stay appendable until it's full? We have (in the tape library properties) "mark media appendable" and "use appendable media" set to 90 days. These tapes are not older than 90 days. Is there a way to unmark them, or to stop them being marked "full/appendable" (whatever that means; why full and also appendable?)? In the docs for the "Default Scratch" media group, I see the settings below. I want any tape that is not full yet to be picked before any completely empty tape. Which setting should I choose? The "reused" and "recycled" terms below don't make sense to me. Cu…
Good day, please advise. Error Code: [7:314] The job has failed because the VSS snapshot could not be created. I have checked vssadmin list writers: no errors found (State: Stable, Last error: No error). DDB backup disk free space is 260 GB. Process Manager: the VSS Provider service is running. Warm regards, Glenn Ngobeni
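For a VSS failure like this, a first-pass triage can be scripted. A minimal sketch follows, using only standard Windows commands (nothing here is Commvault-specific, and no thresholds are implied beyond what VSS itself needs):

```powershell
# First-pass VSS triage for error 7:314 (snapshot could not be created).
# Standard Windows commands only; nothing here is Commvault-specific.

vssadmin list writers        # every writer should be "Stable" with "No error"
vssadmin list providers      # confirm the expected VSS provider is registered
vssadmin list shadowstorage  # diff-area limits; snapshots fail when this is exhausted

# Free space per local volume -- VSS needs headroom for its diff areas,
# even when the DDB backup disk itself has 260 GB free.
Get-CimInstance Win32_LogicalDisk -Filter "DriveType=3" |
    Select-Object DeviceID,
        @{ n = 'FreeGB'; e = { [math]::Round($_.FreeSpace / 1GB, 1) } },
        @{ n = 'SizeGB'; e = { [math]::Round($_.Size / 1GB, 1) } }
```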
Hi guys, in order to configure hardware WORM alongside Commvault's WORM, we found this note: "If you have multiple storage policies/copies created using the cloud storage pool, make sure to set the same number of days as retention in all the copies." at the link below: https://documentation.commvault.com/11.24/expert/9251_configuring_worm_storage_mode_on_cloud_storage.html Does that mean that all the secondary copies stored on the WORM storage have to have the same retention? It got me a little confused. Since, in our case, we will create from each storage policy a new copy that will be saved to the WORM storage, do we have to assign the same retention to all the newly created copies? Thanks in advance
Hi, we are deconfiguring a tape library in one CommCell and want to make a clean setup in a different CommCell and reuse the tapes. We do not want to migrate MediaAgents or save data. We did configure the tape library in the new CommCell but ran into issues, because the tapes indicate that they belong to a different CommCell. What would be the correct way to make the tapes reusable, or have them show up as new tapes in the new CommCell? Regards, Patrik
Hi guys, we are implementing an air-gapped DR site, so the DR site only receives an offline copy from the main one through aux copies. For the aux copies, we only disabled their schedule in the "System Created Autocopy Schedule" and created a new, specific schedule that starts the aux copies to the DR site after the blackout window. Since the DR MAs and library are only accessible during the air gap window, I was wondering whether rescheduling the remaining system-created jobs (DDB Backup, DDB Verification, DDB Space Reclamation, Data Aging) into the air gap period causes any issue when they run at the same time as the aux copies (our aux copies use deduplication). I know that running DDB Backup jobs may cause issues for aux copies, as in the screenshot below: https://documentation.commvault.com/11.24/expert/12504_deduplication_database_backup.html So my question is: is it possible to launch all the previously listed system jobs at the same time as the aux copies without an…
Hello, can I change the secondary copy schedule from the web console? I haven't used the web console very much, but with deploying a new CommCell I thought I'd look into it. I do know how to handle it from the Java console. The documentation didn't give me much, as it is a mix of Java console and web console content (confusing). Best regards, Henrik
Below is the analysis from Commvault: the StoreOnce aux copy is failing due to "Unexpected copy parameters. SFileNum  !=  || CcId  !=  || Offset  +  != ." FR25 MR23 contains the fix for the issue you are seeing: https://documentation.commvault.com/11.25/assets/service_pack/updates/11_25_23.htm (hotfix 3395: "AuxCopy jobs from HPE StoreOnce Catalyst libraries as source and destination may fail with error 'Unexpected copy parameters. SFileNum  !=  || CcId  !=  || Offset  +  != .'"). The customer is to install MR23 on the source MA. So we decided to upgrade to 11.26, as it is the latest and has all the fixes that 11.25.32 has. But can we wait and upgrade to 11.28 instead? Will it have the same bug fixes as 11.26?
Hi all, over the years we have been creating multiple partitions and using each partition as a mount path in the disk library. This method gives the exact size of the disk library. However, if you create mount paths using different folders within the same partition, the Commvault disk library size is multiplied by the number of folders created. For example, if the E: partition is 10 TB and we create five mount path folders (Folder1 through Folder5), we expect the total size to be shown as 10 TB, but Commvault calculates it as 5 × 10 and shows the disk library with a wrong size of 50 TB. Any ideas on how to fix this? Regards, Jithendra Krishnakumar
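A quick way to see why the size is multiplied: each mount-path folder reports the capacity of the volume it sits on, so summing capacity per mount path counts the same volume once per folder. A minimal sketch (folder names taken from the example above):

```powershell
# Each folder on E: reports the full capacity of the E: volume, so a
# per-mount-path sum counts the same 10 TB five times (5 x 10 TB = 50 TB).
$mountPaths = 'E:\Folder1', 'E:\Folder2', 'E:\Folder3', 'E:\Folder4', 'E:\Folder5'

$naiveSum = 0
foreach ($mp in $mountPaths) {
    $drive = Get-PSDrive -Name $mp.Substring(0, 1)
    $capacityTB = ($drive.Used + $drive.Free) / 1TB
    $naiveSum += $capacityTB
    '{0}: volume capacity {1:N1} TB' -f $mp, $capacityTB
}
'Per-mount-path sum: {0:N0} TB (the volume itself is only 10 TB)' -f $naiveSum
```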
I need to check whether there is an option to move data from one mount path to another mount path in the same library. I need this to mitigate an overcommit issue on the back-end storage. I have 3-4 mount paths, one of which contains only a single job; I want to move that one job to any other mount path within the library and then delete the original, so the overcommit issue gets solved. Current version: V11 SP26.23. Back-end storage: NetApp
Hi, please assist. A DDB reconstruction is in progress; the job is in the "add records" phase but gets stuck on "failed to start controller on media agent". Services are running on both the source MediaAgent and the problematic MediaAgent, and some other jobs are using the same MediaAgent. CommServe is 11.25.14. I thank you. Regards, Glenn
Hello community, we are doing a pilot by pushing around 50 TB of data over to MCSS. We have noticed that there are multiple streams of different sizes at the beginning of the job, and as the job progresses the number of streams goes down. Is there a way to ensure that the data being pushed to MCSS is distributed equally among multiple streams, so that we get consistent performance out of the job until the end?
Is it possible to change an Azure Cloud Archive library into an Azure Cloud combined storage Archive/Cool library?
Hello Commvault community! I have a question on behalf of one of our clients. We created a cloud library (Azure Archive) and copied around 40 TB of data into the cloud. It took us over two months to transfer this amount of data, and when it completed we realized that there is a problem with the Cloud Recall workflow. When we try to "Browse and Restore" from the copy precedence (Azure Archive), it tries to reach an index from this archive cloud: it runs an "Index Restore" job and cannot find the index data, because it is archive storage, so it runs an Archive Recall workflow to recall the index data in order to list the backup data. This workflow fails after a few seconds, and we find an error in the Browse and Restore window: "The index cannot be accessed. Try again later. If the issue persists, contact support." We decided that restoring an index from archive cloud isn't a good idea, because even if it worked it would take too much time (a few hours just to list backup content (index res…
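For context, a recall from the Archive tier amounts underneath to Azure blob rehydration, which is asynchronous and can take hours at Standard priority; that latency applies to index data just as much as to backup data. A hedged sketch of the raw Azure-side operation (account, container, and blob names are placeholders, and this is not the Commvault recall workflow itself):

```powershell
# Rehydrate one archived blob back to an online tier (placeholder names).
# Rehydration is asynchronous; Standard priority can take many hours, High
# priority is faster -- either way, a browse has to wait for it.
az storage blob set-tier `
    --account-name mystorageacct `
    --container-name commvault-lib `
    --name 'CV_MAGNETIC/V_12345/CHUNK_67890' `
    --tier Hot `
    --rehydrate-priority High

# Check progress; blobTier flips once the blob is back online.
az storage blob show `
    --account-name mystorageacct `
    --container-name commvault-lib `
    --name 'CV_MAGNETIC/V_12345/CHUNK_67890' `
    --query 'properties.blobTier'
```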
Hi Commvaulters, I had a question regarding data aging/pruning on isolated MAs. We have isolated MAs (air-gapped: they are powered on only during aux copies, and when the last aux copy finishes the MAs are shut down), and the data is deduplicated. All jobs concerning the MAs are rescheduled to align with the air gap window (Aux Copy, DDB Backup, Data Aging, etc.). My concern is that the air gap window may not be sufficient to process the data pruning on the storage, since the MAs are shut down immediately after the aux copies. Can the air gap window be an issue for the data aging/pruning process? If someone could give us some guidance on this, that would be great. Regards.
Hi, I have a few questions about deduplication and would really appreciate some help (see the sketch after this list):
- How do you back up servers with deduplication enabled? Any special tips?
- How do you recover files from volumes with deduplication enabled?
- Is backing up such servers with an installed agent recommended or not, and how does it affect backup speed?
- Why is Commvault not able to recover data from disks after deduplication? Does something else need to be configured?
Thanks for the help!
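If this is about volumes using Windows Server Data Deduplication (where optimized files are stored as reparse points into a chunk store), a useful first step is simply confirming which volumes have it enabled before backing them up; restores to a machine without the dedup feature installed are where surprises usually come from. A minimal sketch, assuming the Windows Data Deduplication feature and its PowerShell module are installed:

```powershell
# Check which volumes have Windows Server Data Deduplication enabled.
# Requires the Data Deduplication feature (the cmdlets ship with it).
Import-Module Deduplication

# Volumes with dedup enabled, and how much space it is saving.
Get-DedupVolume

# Per-volume optimization status (optimized files are reparse points,
# which is why a plain file-level restore elsewhere can misbehave).
Get-DedupStatus
```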
Hi, I have a selective copy that writes a week's worth of data to a tape, and I export this tape offsite every Friday. This week I ran out of scratch tapes, and if I check "media not copied" on the selective copy I see more data waiting. Now that I have scratch tapes again, when I check failed jobs for the storage policy, several show the status "not selected". How do I relaunch those jobs? If I search job history for the same job IDs, they show as completed, because only the selective copy was not saved.
We have been using Azure storage accounts as offsite libraries for years. We take full advantage of the Cool storage tier, but given our usage history I would like to start setting the Commvault Azure libraries to the Archive storage tier. What is the best way to get the blobs assigned to the Archive tier? I assume that only newly written data will be Archive and the rest will stay Cool until it is pruned. Retention varies from 1 year to upwards of 10 years. Is there a native workflow available, or should I think about creating a new library and aux copying the data into the new configuration?
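If existing blobs do end up being re-tiered outside of Commvault, the raw Azure-side operation looks like the sketch below (placeholder names throughout). This is hedged deliberately: re-tiering blobs under an active library should really be driven from Commvault's own storage-class settings where possible, and anything pushed to Archive must be recalled before a restore can read it.

```powershell
# Move existing Cool-tier blobs in a container to Archive (placeholder names).
# Caution: archived chunks are offline until rehydrated, so restores from
# them require a recall first. az lists blobs in pages (up to 5000 per page);
# continuation-marker handling is omitted in this sketch.
az storage blob list `
    --account-name mystorageacct `
    --container-name commvault-lib `
    --query "[?properties.blobTier=='Cool'].name" -o tsv |
ForEach-Object {
    az storage blob set-tier `
        --account-name mystorageacct `
        --container-name commvault-lib `
        --name $_ `
        --tier Archive
}
```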
Hi all, we would like to enable parallel copying, so that one aux copy job writes to multiple tapes at the same time to speed up the copy and also the restore process. I have already increased the device streams on the copy to 2, but it continues to copy with only one drive. Are there any other settings to be made for this? Thanks in advance