Storage and Deduplication
Discuss any topic related to storage or deduplication best practices
- 776 Topics
- 3,674 Replies
We are running Commvault 11.20. Currently, backup jobs use an Azure Blob cloud disk library with the default container settings (on Azure, the container access tier is Cool). We would like to move this storage to another tenant with a different storage account and a Cool/Archive-tier container. We are looking for the best approach to migrate the storage, ideally done from Commvault rather than from Azure.
I have the following problem. The alert reads:

Alert: Aux copy job Failed
Type: Job Management - Auxiliary Copy
Detected Criteria: Job Failed
Is escalated:
Detected Time: Wed Dec 28 23:42:51 2022
CommCell: CommServe
User: Administrator
Job ID: 63139
Status: Failed
Storage Policy Name: CommServeDR
Copy Name: Secondary
Start Time: Wed Dec 28 23:00:11 2022
Scheduled Time: Wed Dec 28 23:00:08 2022
End Time: Wed Dec 28 23:42:51 2022
Error Code: [13:138] [40:91] [40:65]
Failure Reason: Error occurred while processing chunk in media [V_845], at the time of error in library [RezervnaKopija] and mount path [[CommServe] \\192.168.99.51\RezervnaKopija], for storage policy [CommServeDR] copy [Secondary] MediaAgent [CommServe]: Backup Job. Cannot impersonate user. User credentials provided for disk mount path access may be incorrect. Failed to Copy or verify Chunk in media [CV_MAGNETIC], Storage Policy [CommServeDR], Copy [Primary], Host [DRI-COMMVAULT.dri.local], Path [\\192.168.99.51\RezervnaKopija\MX19RW_07.26.2022_08.59\CV_M…
Hi all, I have a question about the restore process, specifically VM guest file system restores when the backups reside on a combined Cool/Archive storage tier used as the endpoint of a long-term library. As I understand it, the index and metadata in this case stay in Cool storage, so we can browse the data without having to run the recall workflow. Once we have selected the folders to restore, will the complete VM disk data be rehydrated from the Archive tier, or just the selected data?
Hello, today we would like to validate a working solution for immutable storage in Azure. I am wondering how to configure that type of access when the access node is a local server and the resource does not appear in Azure. Regards, Michal
Error Code: [13:138]
Description: Error occurred while processing chunk in media [V_4661435], at the time of error in library [DiskLib_ca-VMAPool-1] and mount path [[ca-vma1] \\xxxxip\cvlt_maglib_01], for storage policy [Plan-ca-vma-VM-90Local-365Cloud] copy [2-DASH-privateStore] MediaAgent [ca-vma1]: Backup Job. Cannot impersonate user. User credentials provided for disk mount path access may be incorrect.
Source: ca-vma1, Process: CVJobReplicatorODS
Failed to Copy or verify Chunk in media [CV_MAGNETIC], Storage Policy [Plan-ca-vma-VM-90Local-365Cloud], Copy [Primary], Host [ca-vma1.green.xxx], Path [\\xxxxip\cvlt_maglib_01\CWKROQ_03.16.2023_08.40\CV_MAGNETIC\V_4661435], File Number , Backup Jobs [8531530]. Cannot impersonate user. User credentials provided for disk mount path access may be incorrect.
Source: ca-vma1, Process: CVJobReplicatorODS
Hello, a strange error occurred today: "HP Ultrium 7-SCSI_2 - A SCSI command to the drive is stuck on the active drive controlling MediaAgent." I restarted the tape library, but the problem is not resolved. I assume I need to restart the MediaAgent, but I don't know exactly how. Can anyone help?
Hi team, I am trying to add a network mount path to a Commvault storage library. I assign the MediaAgent, choose Network, pick the credential, and input the path. When I click OK it takes a long time to load and then gives the error: "Failed to read db". This is Commvault version 11.24.94, recently upgraded from SP16. I tried looking at the logs but I can't seem to find the relevant ones. Anyone with an idea?
Hi team, we have an MA with four mount paths, all backed by SAN storage. One of the four shows an error while writing data. I checked Disk Management; the disk was read-only, so I rebooted the MA and set the disk back to read/write. I created a test folder on the disk, which works fine, but when I run storage validation it gives the error below. Has anyone had this kind of issue?

7944 173c 09/09 10:20:44 685427 Scheduler Set pending cause [Failed to mount the disk media in library [LIB05] with mount path [C:\CommVaulttLibrary\503] on MediaAgent [cv5]. Operating System could not find the path specified. Please ensure that the path is correct and accessible.
Hi, we have the following environment:
- CommServe 11.28 on W2K22
- 3 physical MediaAgents (MAs) on W2K22
- Each physical MA has its own DDB partition disk (D:\) and its own disk library volume (G:\); no disks are shared between the physical MAs
- The DDB has 3 partitions, one on each physical MA
- One disk library with 3 mount paths (one per MA), but each MA's disk library volume is not shared

Backups are going well, but we have issues with the DDB Verification operation. The error messages are the following:

"Error Code: [62:2687] Description: Export / Mount failed for mount path [G:/dl_ssd_aalst/CV_MAGNETIC], please check if the mount path is accessible on Data Server [appwbck003]. Check the logs for detailed error. Source: appwbck002, Process: ScalableDDBVerf Library [dl_ssd_aalst], MediaAgent [appwbck002], Drive Pool , MountPath[\dl_ssd_aalst]: Mount failed for mount path, please check if the mount path is accessible on Data Server. Check the Media Agent logs for detailed err…
Hi team, we created a private cloud library using the Dell EMC ECS S3 protocol. However, today I noticed that the DDB engines of the related libraries are unchecked in the DDB Verification schedule created automatically by the system. Is this the default behavior? Is DDB Verification not recommended for cloud libraries? Is DDB Verification recommended even if our private cloud data is still in our own data center? Best regards.
Hi guys, which is better for agent-based streaming backups of file systems: a full backup or a synthetic full? I get that a synthetic full is better in that we are not using the client machine's resources, but are there any disadvantages to it? Would a normal full be safer? If so, why?
I will soon have to implement Commvault on a site consisting of the following elements:
- 2 MediaAgents
- A NetApp storage array presenting block storage via iSCSI

The idea is that the MAs work on the same storage, so that when one goes down the other continues to provide availability for both backup and restore operations. After a lot of reading, I think the best fit for my case would be GridStor: https://documentation.commvault.com/2022e/expert/10842_gridstor_alternate_data_paths.html

Given my knowledge and experience, the configuration looks somewhat complicated and I don't know if it fits my needs. The procedure that is most difficult for me, because I do not understand it very well, is this one: https://documentation.commvault.com/2022e/expert/9788_san_attached_libraries_configuration.html

I thought that presenting the LUN to the MA, formatting it, and sharing the data path with the other MA would be enough, but I see that it is n…
Hello, in the Deduplication Building Block Guide (commvault.com), it is mentioned that: "The DDBs created for Windows Media Agent should be formatted at 32 KB block size to reduce the impact of NTFS fragmentation over a time period." Does the 32 KB format refer to the block size of the disk itself (the NTFS allocation unit size), or to the "Block Level Deduplication Factor" parameter of the Storage Policy as shown below? Thanks
We had an FM-200 discharge in our data center. The DDBs became corrupt and restores failed. We sealed and started a new DDB to get backups running. Now all jobs have aged out of the sealed DDB, but I still have 387 TB of data on disk and I need the space back ASAP. SIDBPhysicalDeletes is not showing continuous activity, and data aging has run multiple times.
I’d like to create a single beginning-of-year (Jan 2023) selective copy to tape. The settings are Selective: Yearly Full, First Full of the Year. What I’d ideally like to achieve is to capture only those full backups that took place in the first two weeks of the new year, which is "easy" to do. What happens, though, is:
- If some new data or a new subclient is created and gets its first full in, say, February 2023, that data will wait until it can be written to tape.
- But no physical tape will be made available until Jan 2024 (e.g. 10 tapes are inserted on 1 January 2023, 10 tapes are removed on 31 January 2023, and no further tapes are inserted until 1 January 2024).

What I’d like to do is: if there are aux copies waiting (specific to a storage policy and storage policy copy), and the aux has been waiting for more than, say, 60 days, change the job to "Do Not Copy". That is, to "almost" have an expiry date on copies waiting for aux to tape, or an option so that "First Full of the Year" has a validity period of
I am attempting to replicate an existing Storage Policy with some differences in the MediaAgents associated with the copies. There is an existing setting called Archiver Data Retention, set to 63 days for certain copies, but looking at the copy's Retention properties I am unable to find that setting. As a result, in my replicated Storage Policy the default Archiver Data Retention value is Infinite. Does anyone know how to configure Archiver Data Retention?
Hi all, we have configured an Object Lock enabled S3 bucket and set up a library, storage pool, and policies using that bucket. Everything shows online and can be accessed from the Commvault console and via CLI on the MediaAgents. We also ran the cloud test tool, and that works fine too. But when we start a backup or aux copy we see the errors below on the jobs:

2204 37c4 02/28 17:04:16 33401117 [cvd] Curl Error: 23, PUT https://pss-commvault-use1-db-45d.s3.us-east-1.amazonaws.com/9T4IG2_02.01.2023_22.26/CV_MAGNETIC/V_16092550/CHUNK_172585619.FOLDER/1
2204 37c4 02/28 17:04:16 33401117 [cvd] CVRFAMZS3::SendRequest() - Error: Access Denied.
2204 2948 02/28 17:05:29 33401117 [cvd] WriteFile() - Access Denied. for file 9T4IG2_02.01.2023_22.26/CV_MAGNETIC/V_16091931/CHUNK_172585925
2204 37c4 02/28 17:05:29 33401117 [cvd] Curl Error: 23, PUT https://pss-commvault-use1-db-45d.s3.us-east-1.amazonaws.com/9T4IG2_02.01.2023_22.26/CV_MAGNETIC/V_16091931/CHUNK_172585925.FOLDER/1
2204 37c4 02/28 17:…
We’re running CV 11.24.25 with a two-node physical grid using CIFS mount paths from a Nexsan Unity. It takes secondary copies from the MAs that perform the backups (no direct backups other than the DDB backup), with a DDB partition on each MA. We decided to replace this with a four-node virtual grid using S3 (NetApp) storage. The four-node grid was set up with a global dedupe policy based on a 512 KB dedupe block size, with a partition on each node; the two-node grid uses the standard 128 KB dedupe block size.

We had ~600 TB of back-end storage (~3.3 PB front-end) and have ~1.75 PB of front-end data left to process after about two months of copying. There were 105 storage policies (multi-tenant environment) with retentions ranging from 30 days to 12 years (DB, file, VM, O365 apps), anything longer than 30 days being extended retention (normally 30 days/1 cycle and then monthly/yearly with extended retention). We do not seem able to maintain any reasonably high copy rates. Having looked at other conversations here, we’ve trie…
Evening folks, I am in the process of enabling data verification across our storage policies. Some of our servers are EC2 instances with Commvault configured to back up directly to S3. I assume that if I enable data verification in this scenario, I will incur further charges, since Commvault will be reading data back from the S3 bucket? Also, I just want to check that my working is correct, or thereabouts. If we are charged for data verification in this scenario, and the verification job needed to verify 150 GB of data, my math would be: 150 GB (size of data) / 32 MB (block size that CV stores data in) / 1,000 (S3 charges per 1,000 requests) × price per 1,000 READ requests. Many thanks
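The back-of-the-envelope estimate in the post above can be sketched in a few lines. Note that the 32 MB block size is the poster's assumption about how Commvault stores data in S3, and the GET price used here is an illustrative S3 Standard figure, not a quoted rate; substitute your actual values:

```python
# Rough S3 read-request cost estimate for a data verification job.
# Both the block size and the per-request price are assumptions for
# illustration only; check your storage settings and current AWS pricing.
data_gb = 150                  # amount of data the verification job must read
block_mb = 32                  # assumed size of each object read per request
price_per_1000_gets = 0.0004   # example S3 Standard GET price in USD per 1,000 requests

requests = data_gb * 1024 / block_mb             # number of read requests
cost_usd = requests / 1000 * price_per_1000_gets

print(f"{requests:.0f} requests -> ${cost_usd:.4f}")  # 4800 requests -> $0.0019
```

Request charges are only one component: depending on the storage class and network path, per-GB retrieval or data transfer fees may also apply, and those usually dominate the request cost.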