Storage and Deduplication
AUX copy jobs failing
Error Code: [13:138]
Description: Error occurred while processing chunk  in media [V_4661435], at the time of error in library [DiskLib_ca-VMAPool-1] and mount path [[ca-vma1] \\xxxxip\cvlt_maglib_01], for storage policy [Plan-ca-vma-VM-90Local-365Cloud] copy [2-DASH-privateStore] MediaAgent [ca-vma1]: Backup Job .
Cannot impersonate user. User credentials provided for disk mount path access may be incorrect.
Source: ca-vma1, Process: CVJobReplicatorODS
Failed to Copy or verify Chunk  in media [CV_MAGNETIC], Storage Policy [Plan-ca-vma-VM-90Local-365Cloud], Copy [Primary], Host [ca-vma1.green.xxx], Path [\\xxxxip\cvlt_maglib_01\CWKROQ_03.16.2023_08.40\CV_MAGNETIC\V_4661435], File Number , Backup Jobs [ 8531530].
Cannot impersonate user. User credentials provided for disk mount path access may be incorrect.
Cannot impersonate user. User credentials provided for disk mount path access may be incorrect.
Source: ca-vma1, Process: CVJobReplicatorODS
Problem with Aux copy job failing with errors [13:138] [40:91] [40:65]
I have the following problem. Alert details:

Alert: Aux copy job Failed
Type: Job Management - Auxiliary Copy
Detected Criteria: Job Failed
Is escalated:
Detected Time: Wed Dec 28 23:42:51 2022
CommCell: CommServe
User: Administrator
Job ID: 63139
Status: Failed
Storage Policy Name: CommServeDR
Copy Name: Secondary
Start Time: Wed Dec 28 23:00:11 2022
Scheduled Time: Wed Dec 28 23:00:08 2022
End Time: Wed Dec 28 23:42:51 2022
Error Code: [13:138] [40:91] [40:65]
Failure Reason: Error occurred while processing chunk  in media [V_845], at the time of error in library [RezervnaKopija] and mount path [[CommServe] \\192.168.99.51\RezervnaKopija], for storage policy [CommServeDR] copy [Secondary] MediaAgent [CommServe]: Backup Job . Cannot impersonate user. User credentials provided for disk mount path access may be incorrect. Failed to Copy or verify Chunk  in media [CV_MAGNETIC], Storage Policy [CommServeDR], Copy [Primary], Host [DRI-COMMVAULT.dri.local], Path [\\192.168.99.51\RezervnaKopija\MX19RW_07.26.2022_08.59\CV_M
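Error [13:138] "Cannot impersonate user" points at the saved network credentials for the UNC mount path rather than at the aux copy itself. A quick hedged check you can run on the MediaAgent, assuming a Windows MA; the share is the one from the error message, and `DOMAIN\svc_backup` is a hypothetical placeholder for the account saved in Commvault:

```shell
:: Windows cmd, run on the MediaAgent.
:: DOMAIN\svc_backup is a placeholder -- substitute the account Commvault uses.

:: Drop any cached session to the share first:
net use \\192.168.99.51\RezervnaKopija /delete

:: Try to map it with the exact credentials saved in Commvault
:: (the * prompts for the password so you can re-type it fresh):
net use \\192.168.99.51\RezervnaKopija /user:DOMAIN\svc_backup *

:: If the mapping succeeds, confirm the account can actually write:
echo test > \\192.168.99.51\RezervnaKopija\cv_write_test.txt
del \\192.168.99.51\RezervnaKopija\cv_write_test.txt
```

If the second `net use` fails with the same access error, re-enter the credential on the mount path in the library properties; an expired password or changed service account on the share side is the usual culprit.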
Problem with media agent
Hello. A strange error occurred today: "HP Ultrium 7-SCSI_2 - A SCSI command to the drive is stuck on the active drive controlling MediaAgent." I restarted the tape library, but the problem is not resolved. I assume I need to restart the MediaAgent, but I don't know exactly how. Can anyone help?
failed to read db error when adding nfs mountpath
Hi team, I am trying to add a network mount path to a Commvault storage library. I assign the MediaAgent, then choose Network, then pick the credential and input the path. When I click OK, it takes a long time to load and then gives the error: "Failed to read db". This is Commvault version 11.24.94, recently upgraded from SP16. I tried looking at the logs but I can't seem to find the relevant ones. Anyone with an idea?
Usage of HPE StoreOnce as a disk library
Hi, I need some info about the usage of HPE StoreOnce as a disk library, along the lines of: https://documentation.commvault.com/11.24/expert/102869_add_hpe_catalyst_storage.html Currently we're using one HPE StoreOnce Catalyst store as a disk library. This gives us a single point of failure, and we need to try to remediate it, so I would like to know what the options are. Could we have a disk library with Catalyst stores from multiple HPE StoreOnce boxes? For example, a grid consisting of four MAs with storage from four HPE StoreOnce boxes, so that in case of an issue or maintenance we can take down one HPE StoreOnce box at a time and backups continue to run.
Aux Copy - how to use all free tape drives unless another job needs a drive?
I have a library with 2x LTO drives that is used for some direct-to-tape jobs and for aux copies of some disk jobs. Ideally, I'd like both LTO drives to be free to aux copy data, but if another job runs that needs a drive, the aux copy should throttle back down to using one drive. For example, right now I have 50 TB of data to aux copy that is going to a single drive when it could go to both drives (I've got them, so why not use them). But if I set the aux copy to use both, it seems that any new backup jobs to tape pause with "no resources available". Thanks 😀
Immutable storage for Commvault in Azure with on-premises Media Agent
Hello, today we would like to validate a working solution for immutable storage in Azure. I am really wondering how to configure that type of access when the access node is an on-premises server and not a resource in Azure. Regards, Michal
Mount path is showing offline
Hi Team, we have a MA with four mount paths, all presented from a backend SAN. One of the four shows an error while writing data. I checked Disk Management and the disk was showing as read-only, so I rebooted the MA and set the disk back to read/write. I created a test folder on the disk and that works fine, but when I run storage validation it gives the error below. Has anyone had this kind of issue?

7944 173c 09/09 10:20:44 685427 Scheduler Set pending cause [Failed to mount the disk media in library [LIB05] with mount path [C:\CommVaulttLibrary\503] on MediaAgent [cv5]. Operating System could not find the path specified. Please ensure that the path is correct and accessible.
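A disk that flipped to read-only after a SAN hiccup can come back with the attribute still set at the disk level even when the volume looks writable, and the "could not find the path" message also makes it worth confirming the mount path itself still resolves. A hedged diskpart sketch, assuming a Windows MA; disk number 2 is a placeholder you'd replace after running `list disk`:

```shell
:: Windows diskpart session on the MediaAgent. "2" is a placeholder disk
:: number -- identify the right disk with "list disk" first.
diskpart
  list disk
  select disk 2
  attributes disk            :: look for "Read-only  : Yes"
  attributes disk clear readonly
  exit

:: Also confirm the exact path from the error still resolves after the reboot:
dir C:\CommVaulttLibrary\503
```

If the folder listing fails, the drive letter or mount point may have changed on reboot, which would explain why the OS reports the path as missing even though the disk itself is healthy.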
DDB Verification Operation failed
Hi, we have the following environment:

- CommServe 11.28 on W2K22
- 3 physical MediaAgents (MA) on W2K22
- Each physical MA has its own DDB partition disk (D:\) and its own disk library volume (G:\); no disks are shared between the physical MAs
- The DDB has 3 partitions, one on each physical MA
- One disk library with 3 mount paths (one per MA), but each MA's disk library volume is not shared

Backups are going well, but we have issues with the "DDB Verification" operation. The error messages are the following:

"Error Code: [62:2687] Description: Export / Mount failed for mount path [G:/dl_ssd_aalst/CV_MAGNETIC], please check if the mount path is accessible on Data Server [appwbck003]. Check the logs for detailed error. Source: appwbck002, Process: ScalableDDBVerf Library [dl_ssd_aalst], MediaAgent [appwbck002], Drive Pool , MountPath[\dl_ssd_aalst]: Mount failed for mount path, please check if the mount path is accessible on Data Server. Check the Media Agent logs for detailed err
DDB Verification for Private Cloud Library
Hi Team, we created a private cloud library using the Dell EMC ECS S3 protocol. However, today I noticed that the DDB engines of the related libraries are unchecked in the DDB Verification schedule created automatically by the system. Is this the default behavior? Is DDB Verification not recommended for cloud libraries? Is DDB Verification recommended even if our private cloud data is still in our own data center? Best Regards.
Full backup or synthetic for streaming on agent-based backups?
Hi guys, which is better for streaming agent-based file system backups: a full backup or a synthetic full? I get the point that a synthetic full is better in the sense that we are not using the client machine's resources, but are there any disadvantages to it? Would a normal full be safer? If so, why?
Cloud library migration from Azure one tenant to another
We are running Commvault 11.20. Currently, backup jobs use an Azure Blob cloud disk library with the default container setting (on Azure, the container type is Cool). We would like to move this storage to another tenant with a different storage account and a Cool/Archive type container. I'm looking for the best approach to migrate the storage, ideally done from Commvault and not from Azure.
Doubt about MA architecture
I will soon have to implement Commvault on a site consisting of the following elements:

- 2 MediaAgents
- A NetApp storage array presenting block storage via iSCSI

The idea is that the MAs work on the same storage, and when one goes down the other continues to provide availability for both backup and restore operations. Reading a lot, I think the best fit for my case would be GridStor: https://documentation.commvault.com/2022e/expert/10842_gridstor_alternate_data_paths.html From my knowledge and experience, it reads as a somewhat complicated configuration, and I don't know if it fits the needs I have. This is the procedure that is most difficult for me, because I do not understand it very well: https://documentation.commvault.com/2022e/expert/9788_san_attached_libraries_configuration.html I thought that presenting the LUN to the MA, formatting it, and sharing the data path with the other MA would be enough, but I see that it is n
LTO9 - Media calibration / Characterization
Hi, and happy new year to all of you! I would like to know if some of you have already implemented LTO9 drives / tape libraries, and I would love to get your feedback about using them with Commvault. My experience with LTO9 media, using dual-drive tape libraries, is quite bad. The media calibration / optimization / characterization phase that any new LTO9 medium has to go through is a pain on my side. On the first mount of a medium (let me reword it in my 'old guy' words), it has to be somehow formatted before it can be used by your favourite backup software. Below is a link to Quantum's FAQ about this: https://www.quantum.com/globalassets/products/tape-storage-new/lto-9/lto-9-quantum-faq-092021.pdf Short calculation: 50 brand-new LTO9 tapes may require up to 2 hours each of 'calibration' before they can be used, which equals 100 hours of 'calibration' before you could use the full 50-tape pool.. 😱 My first issue was that I had to adjust all the mount timeouts in that LT
Deduplication block level factor
Hello, in the documentation Deduplication Building Block Guide (commvault.com), it is mentioned that: "The DDBs created for Windows Media Agent should be formatted at 32 KB block size to reduce the impact of NTFS fragmentation over a time period." Is the 32 KB format referring to the block size of the disk itself (NTFS allocation unit size), or to the "Block Level Deduplication Factor" parameter of the Storage Policy as shown below? Thanks
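As I read the building block guide, the 32 KB refers to the NTFS allocation unit (cluster) size of the volume that holds the DDB, not to the storage policy's "Block Level Deduplication Factor" (the deduplication block size, which defaults to 128 KB); the two are independent settings. A sketch of formatting and verifying a dedicated DDB volume on Windows, where `D:` is a placeholder drive letter and the format wipes the volume:

```shell
:: Windows cmd, elevated. D: is a placeholder for the dedicated DDB volume;
:: formatting destroys its contents, so do this only on a fresh disk.
format D: /FS:NTFS /A:32K /Q

:: Verify afterwards -- look for "Bytes Per Cluster : 32768" in the output:
fsutil fsinfo ntfsinfo D:
```

The dedup block factor, by contrast, is set on the storage policy / pool and affects how data is chunked for deduplication, not how the DDB volume is laid out on disk.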
DDB Sealed after corruption and data is not deleting from disk
We had an FM200 release in our data center. DDBs became corrupt and restores failed. We sealed and started a new DDB to get backups running. Now all jobs have aged out of the sealed DDB, but I still have 387 TB of data on disk, and I need the space back ASAP. SIDBPhysicalDeletes is not showing continuous activity, and data aging has run multiple times.
Expire uncompleted "To Be Copied" - Selective Copy - First Full of the Year
I'd like to create a single beginning-of-year (Jan 2023) selective copy to tape. Settings are Selective: Yearly Full, First Full of the Year. What I'd ideally like to achieve is to capture only those full backups that took place in the first 2 weeks of the new year, which is 'easy' to do. What happens, though, is:

- If some 'new data' or a new subclient is created and it gets its first full in, say, February 2023, that data will 'wait' until it can be written to tape.
- But no physical tape will be made available until Jan 2024 (e.g. 10 tapes are put in on 1 January 2023, 10 tapes are removed 31 Jan 2023, and no further tapes will be inserted until 1 Jan 2024).

What I'd like to do is: if there are aux copies waiting (specific to a Storage Policy and Storage Policy Copy) and the aux has been waiting for more than (say) 60 days, change the job to 'Do Not Copy'. That is, 'almost' have an expiry date on waiting copies for aux to tape, or an option that the 'First Full of the Year' has a validity period of
Settings for Archiver data Retention
I am attempting to replicate an existing Storage Policy with some differences in the MediaAgents associated with the copies. There is an existing setting called "Archiver data Retention" set to 63 days for certain copies, but looking at the copy properties for Retention, I am unable to find that setting. As a result, in my replicated Storage Policy, the default Archiver data Retention value is Infinite. Does anyone know how to set Archiver data Retention?
Has anyone noticed their LACP bonds are not balanced across the interfaces on HyperScale?
We've noticed that, while using LACP mode 4 on our Dell R740xd2 HyperScales, the interfaces carry an unbalanced amount of traffic when you look at ifconfig. I have p1p2 bonded with p5p2 for the storage network, and p1p1 with p5p1 on the data traffic network. Notice my RX and TX packets are very unbalanced: p1p1 TX is at 410 GiB while its partner p5p1 is at 11 TiB, for example. Does anyone see the same on their LACP config, or has anyone solved this issue? We see the same behavior on a Dell 48-port switch and on a Cisco 9K using Cisco ACI, and on both HyperScale 1.5 and HyperScale X deployments.

p1p1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether bc:97:e1:2c:9b:00  txqueuelen 1000  (Ethernet)
        RX packets 11213468144  bytes 14318195329557 (13.0 TiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1279302179  bytes 440269032106 (410.0 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
p1p2: flags=6211<UP,BROADCAST,RUN
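Skew like this is what the default bonding hash produces. With mode 4 (802.3ad), the host picks the TX slave via `xmit_hash_policy`, and the default `layer2` policy hashes only the MAC pair, so every flow to a single peer (e.g. one storage target) lands on one slave; RX balance is decided independently by the switch's own port-channel hash, which is typically also MAC-based by default. A small sketch of the layer2 math, plus where the policy lives on Linux (`bond0` and the MAC bytes below are placeholders):

```shell
# layer2 xmit_hash_policy picks the slave roughly as
#   (src_mac_last_byte XOR dst_mac_last_byte) mod n_slaves
# -- a constant for any single MAC pair, so one peer pins one slave forever.
src=0x00   # placeholder: last byte of the bond's MAC
dst=0x0b   # placeholder: last byte of the peer's MAC
echo "layer2 slave for this peer: $(( (src ^ dst) % 2 ))"

# layer3+4 mixes IPs and ports into the hash, spreading flows across slaves:
#   cat /proc/net/bonding/bond0                               # current policy
#   echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy
```

Note that changing the host-side policy only affects TX; for RX you'd look at the switch's port-channel load-balance setting (and on ACI, the PC/VPC policy), since each side hashes its own outgoing direction.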
ObjectLock s3 Bucket Backup and Aux copy issue
Hi all, we have configured an Object Lock enabled S3 bucket and set up a library, storage pool, and policies using that bucket. Everything shows online and can be accessed from the Commvault console and via the CLI on the MediaAgents. We also ran the cloud test tool, and that works fine too. But when we start a backup or aux copy, we see the errors below on the jobs:

2204 37c4 02/28 17:04:16 33401117 [cvd] Curl Error: 23, PUT https://pss-commvault-use1-db-45d.s3.us-east-1.amazonaws.com/9T4IG2_02.01.2023_22.26/CV_MAGNETIC/V_16092550/CHUNK_172585619.FOLDER/1
2204 37c4 02/28 17:04:16 33401117 [cvd] CVRFAMZS3::SendRequest() - Error: Access Denied.
2204 2948 02/28 17:05:29 33401117 [cvd] WriteFile() - Access Denied. for file 9T4IG2_02.01.2023_22.26/CV_MAGNETIC/V_16091931/CHUNK_172585925
2204 37c4 02/28 17:05:29 33401117 [cvd] Curl Error: 23, PUT https://pss-commvault-use1-db-45d.s3.us-east-1.amazonaws.com/9T4IG2_02.01.2023_22.26/CV_MAGNETIC/V_16091931/CHUNK_172585925.FOLDER/1
2204 37c4 02/28 17:
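Access Denied on a PUT to an Object Lock enabled bucket is often a permissions gap rather than a connectivity problem: the writing principal typically needs the Object Lock related S3 actions in addition to plain `s3:PutObject`. A hedged aws CLI check, run with the same access key/secret Commvault uses; the bucket name is taken from the job log above, and `cv_ol_probe` is a throwaway placeholder key:

```shell
# Run from a MediaAgent using the same credentials Commvault uses.
# "cv_ol_probe" is a throwaway key; the bucket comes from the job log.
echo probe > /tmp/cv_ol_probe
aws s3api put-object \
    --bucket pss-commvault-use1-db-45d \
    --key cv_ol_probe \
    --body /tmp/cv_ol_probe

# If the plain PUT works but Commvault still fails, check that the policy
# also grants the Object Lock actions (e.g. s3:GetBucketObjectLockConfiguration,
# s3:PutObjectRetention) by exercising them:
aws s3api get-object-lock-configuration --bucket pss-commvault-use1-db-45d
aws s3api get-object-retention --bucket pss-commvault-use1-db-45d --key cv_ol_probe
```

If the plain `put-object` itself is denied while the cloud test tool passes, comparing the IAM policy against the bucket's Object Lock configuration (and any bucket policy with retention conditions) is usually the next step.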
How to properly delete/decommission mount paths associated with old storage: DDBs appear to still be associated with the mount paths
We have added new storage to Commvault and set the old mount paths to "Disabled for Write" via the mount path "Allocation Policy" → "Disable mount path for new data" + "Prevent data block references for new backups". All mount paths that are "disabled for write" show no data via the "mount path" → "View Contents" option. We have waited several months for all the data to age off. BUT... I see info in the forums/docs that data may still be on the storage, with references to "baseline data" in use by our DDBs. When I go to the mount path properties → Deduplication DBs tab, I see that all our "disabled for write" mount paths have DDBs listed in them. So it appears Commvault is still using the storage in some way. I saw a post that indicated: "The LOGICAL jobs no longer exist on that mount path, but the original blocks are still there, and are being referenced by newer jobs. For that reason, jobs listed in other mount paths are referencing blocks in this mount pat