Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 667 Topics
- 3,370 Replies
Problem with media agent
Hello, a strange error occurred today: HP Ultrium 7-SCSI_2 - A SCSI command to the drive is stuck on the active drive controlling MediaAgent. I restarted the tape library but the problem is not resolved. I assume I need to restart the MediaAgent but don't know exactly how. Can anyone help?
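A minimal sketch of restarting the Commvault services on a Windows MediaAgent from Python, in case it helps. One common route is the Commvault Process Manager on the MediaAgent itself (or `commvault restart` on a Linux MA); the display-name filter below is an assumption, so confirm the real service names in services.msc before using anything like this.

```python
import subprocess

# Display-name filter is an assumption -- confirm the actual Commvault service
# names on the MediaAgent (services.msc) before relying on this.
SERVICE_FILTER = "*Commvault*"

def run_ps(command: str) -> None:
    """Run a PowerShell command from an elevated prompt, raising on failure."""
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

# Stop all matching services, then start them again.
run_ps(f"Get-Service -DisplayName '{SERVICE_FILTER}' | Stop-Service -Force")
run_ps(f"Get-Service -DisplayName '{SERVICE_FILTER}' | Start-Service")
```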
"Failed to read db" error when adding NFS mount path
Hi team, I am trying to add a network mount path to a Commvault storage library. I assign the MediaAgent, then choose network, pick the credential and input the path. When I click OK it takes a long time to load and then gives the error: "Failed to read db". This is Commvault version 11.24.94, recently upgraded from SP16. I tried looking at the logs but I can't seem to find the relevant ones. Anyone with an idea?
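A quick sketch for tracking down the relevant log entries on the MediaAgent or CommServe, assuming the default Windows log directory (adjust the path to your install); it simply greps every .log file for the error text so you can see which component reported it.

```python
from pathlib import Path

# Default install path is an assumption -- adjust to your environment.
LOG_DIR = Path(r"C:\Program Files\Commvault\ContentStore\Log Files")
NEEDLE = "failed to read db"

for log_file in sorted(LOG_DIR.glob("*.log")):
    try:
        text = log_file.read_text(errors="ignore")
    except OSError:
        continue  # file may be locked or unreadable; skip it
    for line in text.splitlines():
        if NEEDLE in line.lower():
            print(f"{log_file.name}: {line.strip()}")
```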
Usage of HPE StoreOnce as a disk library
Hi, I need some info about the usage of HPE StoreOnce as a disk library, something like this: https://documentation.commvault.com/11.24/expert/102869_add_hpe_catalyst_storage.html Currently we're using one HPE StoreOnce Catalyst store for a disk library. This gives us a single point of failure and we need to remediate that, so I would like to know what options there are. For example, could we have a disk library with Catalyst stores from multiple HPE StoreOnce boxes? Something like a grid consisting of 4 MAs with storage from 4 HPE StoreOnce appliances, so that if one HPE StoreOnce box has an issue or needs maintenance we can take it down and backups will continue to run.
Aux Copy - how to use all free tape drives unless another job needs a drive?
I have a library with 2x LTO drives that is used for some direct-to-tape jobs and for aux copies of some disk jobs. Ideally what I'd like is for both LTO drives to be free to aux copy data, but if another job runs that needs a drive, for the aux copy to throttle back down to using one drive. For example, right now I have 50TB of data to aux copy that is going to a single drive when it could go to both drives (I've got them, so why not use them), except that if I set the aux copy to do that, any new backup jobs to tape pause with "no resources available". Thanks 😀
Mount path is showing offline
Hi Team, we have a MA which has four mount paths, all coming from the backend SAN. One of the four mount paths is showing an error while writing data. I checked Disk Management and the disk was showing as read-only, so I rebooted the MA and made the disk read/write again. I created a test folder on the disk, which worked fine, but when I run storage validation it gives the error below. Has anyone had this kind of issue? 7944 173c 09/09 10:20:44 685427 Scheduler Set pending cause [Failed to mount the disk media in library [LIB05] with mount path [C:\CommVaulttLibrary\503] on MediaAgent [cv5]. Operating System could not find the path specified. Please ensure that the path is correct and accessible.
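A minimal OS-level write test worth running from the MediaAgent itself against the exact path in the error (the path below is copied from the log and is only an example); if this fails, the problem sits at the OS/SAN layer rather than inside Commvault.

```python
import os
import tempfile

# Path copied from the error message -- substitute your actual mount path.
MOUNT_PATH = r"C:\CommVaulttLibrary\503"

def check_mount_path(path: str) -> None:
    """Confirm the path exists and that a file can be created, written and removed."""
    if not os.path.isdir(path):
        raise SystemExit(f"Path does not exist or is not a directory: {path}")
    with tempfile.NamedTemporaryFile(dir=path) as handle:
        handle.write(b"mount path write test")
        handle.flush()
    print(f"OK: {path} is present and writable")

check_mount_path(MOUNT_PATH)
```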
DDB Verification Operation failed
Hi, we have the following environment: CommServe 11.28 on W2K22; 3 physical MediaAgents (MAs) on W2K22. Each physical MA has its own DDB partition disk (D:\) and its own disk library volume (G:\) → there is no sharing of disks between the physical MAs. The DDB has 3 partitions, one on each physical MA. There is one disk library with 3 mount paths (one per MA), but each MA's disk library volume is not shared. Backups are going well but we have issues with the "DDB Verification" operation. The error messages are the following: "Error Code: [62:2687] Description: Export / Mount failed for mount path [G:/dl_ssd_aalst/CV_MAGNETIC], please check if the mount path is accessible on Data Server [appwbck003]. Check the logs for detailed error. Source: appwbck002, Process: ScalableDDBVerf Library [dl_ssd_aalst], MediaAgent [appwbck002], Drive Pool , MountPath[\dl_ssd_aalst]: Mount failed for mount path, please check if the mount path is accessible on Data Server. Check the Media Agent logs for detailed err
DDB Verification for Private Cloud Library
Hi Team, we created a private cloud library using the Dell EMC ECS S3 protocol. However, today I noticed that the DDB engines of the related libraries are unchecked in the DDB Verification schedule created automatically by the system. Is this the default behavior? Is DDB Verification not recommended for cloud libraries? Is DDB Verification recommended even if our private cloud data is still in our own data center? Best Regards.
Full backup or synthetic for streaming on agent-based backups?
Hi guys, what is better: a full backup or a synthetic full for streaming on agent-based backups for file systems? I get the point of the synthetic full being better in that we are not using the client machine's resources, but are there any disadvantages to it? Would a normal full be safer? If so, why?
Doubt about MA architecture
I will soon have to implement Commvault on a site consisting of the following elements: 2 MediaAgents and a NetApp storage array presenting block storage via iSCSI. The idea is that the MAs work on the same storage, and when one goes down the other continues to provide availability to the infrastructure for both backup and restore operations. Reading a lot, I think the best fit for my case would be to implement GridStor: https://documentation.commvault.com/2022e/expert/10842_gridstor_alternate_data_paths.html Given my knowledge and experience, it reads as a somewhat complicated configuration and I don't know if it fits the needs I have. This is the procedure that is the most difficult for me, because I do not understand very well why: https://documentation.commvault.com/2022e/expert/9788_san_attached_libraries_configuration.html I thought that presenting the LUN to the MA, formatting it and sharing the data path with the other MA would be enough, but I see that it is n
Deduplication block level factor
Hello, in the documentation Deduplication Building Block Guide (commvault.com), it is mentioned that: "The DDBs created for Windows Media Agent should be formatted at 32 KB block size to reduce the impact of NTFS fragmentation over a time period." Is the format at 32 KB referring to the block size of the disk itself (NTFS block size), or to the "Block Level Deduplication Factor" parameter of the Storage Policy as shown below? Thanks
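The quoted sentence reads as being about how the DDB volume itself is formatted (the NTFS allocation unit size), not about the storage policy's deduplication block factor, which is a separate Commvault setting (128 KB by default). A minimal sketch for checking what a volume was actually formatted with, assuming the DDB partition lives on D::

```python
import re
import subprocess

# Assumes the DDB partition is on D: -- change as needed (needs an elevated prompt).
DDB_VOLUME = "D:"

# `fsutil fsinfo ntfsinfo` reports "Bytes Per Cluster", i.e. the NTFS
# allocation unit size the volume was formatted with (32 KB = 32768 bytes).
output = subprocess.run(
    ["fsutil", "fsinfo", "ntfsinfo", DDB_VOLUME],
    capture_output=True, text=True, check=True,
).stdout

match = re.search(r"Bytes Per Cluster\s*:\s*([\d,]+)", output)
if match:
    cluster = int(match.group(1).replace(",", ""))
    verdict = "matches" if cluster == 32768 else "does not match"
    print(f"{DDB_VOLUME} cluster size: {cluster} bytes ({verdict} the 32 KB guidance)")
```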
DDB Sealed after corruption and data is not deleting from disk
We had an FM200 release in our data center. DDBs became corrupt and restores failed. We sealed and started a new DDB to get backups running. Now all jobs have aged out of the sealed DDB, but I still have 387TB of data on disk and I need the space back ASAP. SIDBPhysicalDeletes is not showing continuous activity, and data aging has run multiple times.
Expire uncompleted "To Be Copied" - Selective Copy - First Full of the Year
I'd like to create a single beginning-of-year (Jan 2023) selective copy (to tape). Settings are Selective: Yearly Full, First Full of the Year. What I'd ideally like to achieve is to capture only those full backups that took place in the first 2 weeks of the new year, which is 'easy' to do. What happens, though, is: if some 'new data' or a new subclient is created and it obtains its first full in, say, February 2023, that data will 'wait' until it can be written to tape, but no physical tape will be made available until Jan 2024 (e.g. 10 tapes are put in on 1 January 2023, 10 tapes are removed on 31 January 2023, and no further tapes will be inserted until 1 January 2024). What I'd like to do is: if there are aux copies waiting (specific to a storage policy and storage policy copy) and the aux has been waiting for more than (say) 60 days, change the job to 'Do Not Copy'. That is, to 'almost' have an expiry date on waiting copies for aux to tape, or an option that the 'First Full of the Year' has a validity period of
Settings for Archiver data Retention
I am attempting to replicate an existing Storage Policy with some differences in the Media Agents used for the copies. There is an existing setting called Archiver data Retention set to 63 days for certain copies, but looking at the Copy properties for Retention, I am unable to find that setting. As a result, in my replicated Storage Policy, the default Archiver data Retention value is Infinite. Do you know how to set the Archiver data Retention?
Has anyone noticed their LACP bonds are not balanced across the interfaces for HyperScale?
We've noticed that while using LACP mode 4 on our Dell R740xd2 HyperScale nodes, the interfaces show an unbalanced amount of traffic in ifconfig. I have p1p2 bonded with p5p2 for the storage network, and p1p1 with p5p1 on the data traffic network. Notice my RX and TX packets are very unbalanced: p1p1 TX is at 410 GiB while its partner p5p1 is at 11 TiB, for example. Does anyone see the same on their LACP config, or has anyone solved this? We see the same behavior on a Dell 48-port switch and on a Cisco 9K using Cisco ACI, and on both HyperScale 1.5 and HyperScale X deployments. p1p1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500 ether bc:97:e1:2c:9b:00 txqueuelen 1000 (Ethernet) RX packets 11213468144 bytes 14318195329557 (13.0 TiB) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 1279302179 bytes 440269032106 (410.0 GiB) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 p1p2: flags=6211<UP,BROADCAST,RUN
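With 802.3ad bonding, each flow hashes to a single slave, so imbalance usually comes down to the bond's transmit hash policy rather than a fault: the default layer2 policy hashes on MAC pairs, which can pin most traffic to one interface, while layer3+4 typically spreads many TCP streams more evenly. A small sketch for checking what your nodes are actually using (the bond and slave names below are assumptions; match them to your interface naming):

```python
from pathlib import Path

# Bond and slave names are assumptions -- match them to your HyperScale node.
BOND = "bond0"
SLAVES = ["p1p1", "p5p1"]

# The kernel reports the mode and hash policy in /proc/net/bonding/<bond>.
for line in Path(f"/proc/net/bonding/{BOND}").read_text().splitlines():
    if line.startswith(("Bonding Mode", "Transmit Hash Policy")):
        print(line)

# Compare the per-interface byte counters the kernel keeps for each slave.
for slave in SLAVES:
    stats = Path(f"/sys/class/net/{slave}/statistics")
    rx = int((stats / "rx_bytes").read_text())
    tx = int((stats / "tx_bytes").read_text())
    print(f"{slave}: rx={rx / 2**40:.2f} TiB tx={tx / 2**40:.2f} TiB")
```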
ObjectLock s3 Bucket Backup and Aux copy issue
Hi All, we have configured an Object Lock (OL) enabled S3 bucket, and configured a library, storage pool and policies using that bucket. Everything shows online and can be accessed from the Commvault console and also using the CLI on the MediaAgents; we also ran the cloud test tool and that works fine too. But when we start a backup or aux copy we see the errors below on the jobs: 2204 37c4 02/28 17:04:16 33401117 [cvd] Curl Error: 23, PUT https://pss-commvault-use1-db-45d.s3.us-east-1.amazonaws.com/9T4IG2_02.01.2023_22.26/CV_MAGNETIC/V_16092550/CHUNK_172585619.FOLDER/1 2204 37c4 02/28 17:04:16 33401117 [cvd] CVRFAMZS3::SendRequest() - Error: Access Denied. 2204 2948 02/28 17:05:29 33401117 [cvd] WriteFile() - Access Denied. for file 9T4IG2_02.01.2023_22.26/CV_MAGNETIC/V_16091931/CHUNK_172585925 2204 37c4 02/28 17:05:29 33401117 [cvd] Curl Error: 23, PUT https://pss-commvault-use1-db-45d.s3.us-east-1.amazonaws.com/9T4IG2_02.01.2023_22.26/CV_MAGNETIC/V_16091931/CHUNK_172585925.FOLDER/1 2204 37c4 02/28 17:
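Since the cloud test tool passes but the chunk PUTs fail with Access Denied, it is worth confirming outside Commvault that the same credentials can both read the bucket's Object Lock configuration and write objects to it (Object Lock buckets enforce versioning, and writes that set retention headers additionally need permissions such as s3:PutObjectRetention). A minimal boto3 sketch, with the bucket name taken from the job log and the test key purely illustrative:

```python
import boto3
from botocore.exceptions import ClientError

# Bucket and region come from the job log; the key is purely illustrative.
BUCKET = "pss-commvault-use1-db-45d"
KEY = "cv-access-test/test-object"

s3 = boto3.client("s3", region_name="us-east-1")

try:
    lock = s3.get_object_lock_configuration(Bucket=BUCKET)
    print("Object Lock config:", lock.get("ObjectLockConfiguration"))
except ClientError as err:
    print("Could not read Object Lock config:", err.response["Error"]["Code"])

try:
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=b"commvault access test")
    print("PUT succeeded -- these credentials can write to the bucket")
    s3.delete_object(Bucket=BUCKET, Key=KEY)  # leaves a delete marker on a versioned bucket
except ClientError as err:
    print("PUT failed:", err.response["Error"]["Code"])
```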
AUX Copy optimization from Disklib to S3 library
We're running CV 11.24.25 with a two-node grid (physical) with CIFS mount paths from a Nexsan Unity; it takes secondary copies from the MAs that perform the backups (no direct backups other than DDB), with a partition on each MA. We decided to replace this with a four-node (virtual) grid with S3 (NetApp) storage. The four-node grid was set up with a global dedupe policy based on a 512KB dedupe block size, with a partition on each node; the two-node grid uses the standard 128KB dedupe block size. We had ~600TB of back-end storage (~3.3PB front-end) and have ~1.75PB of front-end left to process after about two months of copying. There were 105 storage policies (multi-tenant environment) with retentions ranging from 30 days to 12 years (DB, file, VM, O365 apps), with anything higher than 30 days being extended retentions (normally 30 days/1 cycle and then monthly/yearly with extended retention). We do not seem able to maintain any reasonably high copy rates. Having looked at other conversations here we've trie
Data verification job with S3 storage
Evening folks, I am in the process of enabling data verification across our storage policies - some of our servers are EC2 instances with Commvault configured to back up directly to S3. I assume that if I were to enable data verification in this scenario I would incur further charges, as Commvault would be reading data back from the S3 bucket? Also, I just want to check that my workings are correct, or thereabouts. If we are charged for data verification in this scenario - if, for example, the data verification job needed to verify 150GB worth of data - my math would be 150GB (size of data) / 32MB (block size that CV stores data in) / 1000 (S3 charges per 1000 requests) * price per 1000 READ requests? Many thanks
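For what it's worth, here is that arithmetic written out. Two assumptions worth confirming for your own library: that cloud chunks are stored as roughly 32 MB objects, and that verification issues about one GET per object; and if the reading MediaAgent is an EC2 instance in the same region, there is normally no data transfer charge, so request pricing dominates. The rate below is just an example, not a quoted AWS price.

```python
# Worked example of the request-cost estimate from the post.
DATA_TO_VERIFY_GB = 150
OBJECT_SIZE_MB = 32                # assumed average S3 object size written by CV
PRICE_PER_1000_GETS_USD = 0.0004   # placeholder rate -- check your region's pricing

objects = DATA_TO_VERIFY_GB * 1024 / OBJECT_SIZE_MB   # ~4,800 objects to read
cost = objects / 1000 * PRICE_PER_1000_GETS_USD

print(f"~{objects:,.0f} GET requests, estimated request cost ~${cost:.4f}")
# With the example rate this comes to roughly $0.002 -- the request charges are
# small compared with any retrieval/transfer charges other storage classes add.
```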
How does deduplication on Azure Storage Accounts work?
Hey everyone, we were wondering how client-side deduplication and compression work with Azure Storage Accounts. It doesn't seem to be using our MediaAgent, but which resource is it using then? Is there some kind of "invisible" virtual machine that runs the Storage Account in Azure and does the deduplication etc.? Best regards
AWS S3 - Dash copy between buckets and promote copy?
Hi, I could not find anything that addressed this, so I'm asking here. I read that the only way to "migrate data" between storage classes would be as documented. However, can this be done instead? I have a HyperScale as a primary copy and an existing AWS S3 Standard bucket as a DASH copy with deduplication. I want to create another DASH copy to an AWS S3-IA bucket with deduplication, then promote that copy to be the secondary copy and get rid of the existing bucket. Effectively, this seems like migrating the data just as well as going through the process described with the Cloud Tool. Am I wrong? Can this be done?
Verify DDB Reconstruction Jobs
Hello Community, I am new to Commvault. I am trying to check the status of a failed DDB Reconstruction job. I checked the Storage Policy but I don't see the job that created the internal ticket. Type: Job Management - DeDup DB Reconstruction; Detected Criteria: Job Started; Detected… Thanks.