Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 725 Topics
- 3,531 Replies
Hi Team, we created a private cloud library using the Dell EMC ECS S3 protocol. However, today I noticed that the DDB engines of the related libraries are unchecked in the DDB Verification schedule created automatically by the system. Is this the default behavior? Is DDB Verification not recommended for cloud libraries? Is DDB Verification recommended even if our private cloud data is still in our own data center? Best Regards.
Hi guys, which is better: a full backup or a synthetic full for streaming, agent-based file system backups? I get the point that a synthetic full is better in that we are not using the client machine's resources, but are there any disadvantages to it? Would a normal full be safer? If so, why?
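One way to picture the trade-off: a synthetic full is assembled from the existing full and subsequent incrementals on the backup side, so the client is not re-read, but it can only be as good as the backup data it is built from. The Python below is a purely conceptual sketch of that assembly, not Commvault's implementation.

```python
# Conceptual sketch only: a synthetic full merges the last full with the later
# incrementals, so no data is read from the client. Its correctness depends on
# the existing backup data, which is why periodic verification (or an
# occasional real full) is often suggested.

last_full = {"a.txt": "v1", "b.txt": "v1", "c.txt": "v1"}
incrementals = [
    {"b.txt": "v2"},                  # b changed
    {"d.txt": "v1", "c.txt": None},   # d added, c deleted (None marks deletion)
]

synthetic_full = dict(last_full)
for inc in incrementals:
    for name, version in inc.items():
        if version is None:
            synthetic_full.pop(name, None)
        else:
            synthetic_full[name] = version

print(synthetic_full)  # {'a.txt': 'v1', 'b.txt': 'v2', 'd.txt': 'v1'}
```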
I will soon have to implement Commvault on a site consisting of the following elements:
- 2 MediaAgents
- A NetApp storage array presenting block storage via iSCSI

The idea is that both MAs work against the same storage, and that when one goes down the other continues to provide availability to the infrastructure for both backup and restore operations. From reading around, I think the best fit for my case would be GridStor: https://documentation.commvault.com/2022e/expert/10842_gridstor_alternate_data_paths.html

From my knowledge and experience, it reads as a somewhat complicated configuration, and I don't know if it fits the needs I have. The procedure that is hardest for me, because I don't understand it very well, is this one: https://documentation.commvault.com/2022e/expert/9788_san_attached_libraries_configuration.html

I thought that presenting the LUN to the MA, formatting it, and sharing the data path with the other MA would be enough, but I see that it is n…
Hello, in the documentation Deduplication Building Block Guide (commvault.com), it is mentioned that: "The DDBs created for Windows Media Agent should be formatted at 32 KB block size to reduce the impact of NTFS fragmentation over a time period." Is the 32 KB format referring to the block size of the disk itself (the NTFS block size), or to the "Block Level Deduplication Factor" parameter of the Storage Policy as shown below? Thanks
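For what it's worth, the wording in that guide ("formatted", "NTFS fragmentation") points at the NTFS allocation unit size chosen when the DDB volume is formatted, rather than the storage policy's deduplication block factor. If you want to check what an existing DDB volume was formatted with, here is a minimal Python sketch (Windows only; the drive letter is a placeholder) that reads the allocation unit size via the Win32 GetDiskFreeSpaceW API.

```python
# Hypothetical sketch: read the NTFS allocation unit (cluster) size of a Windows
# volume to confirm whether the DDB volume was formatted with 32 KB clusters.
# The drive letter "D:\\" is an assumption; point it at the DDB volume.
import ctypes

def cluster_size(root_path: str = "D:\\") -> int:
    sectors_per_cluster = ctypes.c_ulong(0)
    bytes_per_sector = ctypes.c_ulong(0)
    free_clusters = ctypes.c_ulong(0)
    total_clusters = ctypes.c_ulong(0)
    ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
        ctypes.c_wchar_p(root_path),
        ctypes.byref(sectors_per_cluster),
        ctypes.byref(bytes_per_sector),
        ctypes.byref(free_clusters),
        ctypes.byref(total_clusters),
    )
    if not ok:
        raise ctypes.WinError()
    return sectors_per_cluster.value * bytes_per_sector.value

if __name__ == "__main__":
    size = cluster_size("D:\\")
    print(f"Allocation unit size: {size} bytes ({size // 1024} KB)")
```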
We had an FM200 release in our data center. The DDBs became corrupt and the restore failed. We sealed and started a new DDB to get backups running. Now all jobs have aged out of the sealed DDB, but I still have 387 TB of data on disk and I need the space back ASAP. The SIDBPhysicalDeletes log is not showing continuous activity, and data aging has run multiple times.
I'd like to create a single beginning-of-year (Jan 2023) selective copy (to tape). The settings are Selective: Yearly Full, First Full of the Year. What I'd ideally like to achieve is to capture only those full backups that took place in the first 2 weeks of the new year, which is 'easy' to do. What happens, though, is: if some 'new data' or a new subclient is created and it obtains a first full in, say, February 2023, that data will 'wait' until it can be written to tape, but no physical tape will be made available until Jan 2024 (e.g. 10 tapes are put in on 1 January 2023, 10 tapes are removed on 31 January 2023, and no further tapes will be inserted until 1 January 2024). What I'd like to do is: if there are aux copies waiting (specific to a storage policy and storage policy copy) and the aux has been waiting for more than (say) 60 days, change the job to 'DO NOT COPY'. That is, to 'almost' have an expiry date on waiting copies for aux to tape, or an option so that the 'First Full of the Year' has a validity period of…
I am attempting to replicate an existing storage policy with some differences in the MediaAgents associated with the copies. There is an existing setting called Archiver Data Retention, set to 63 days for certain copies, but looking at the copy properties for retention, I am unable to find that setting. As a result, in my replicated storage policy the default Archiver Data Retention value is Infinite. Do you know how to configure Archiver Data Retention?
We've noticed, while using LACP mode 4 on our Dell R740xd2 HyperScale nodes, that the interfaces carry unbalanced amounts of traffic if you run ifconfig. I have p1p2 bonded with p5p2 for the storage network, and p1p1 with p5p1 on the data traffic network. Notice that my RX and TX packet counters are very unbalanced: p1p1 TX is at 410 GiB while its partner p5p1 is at 11 TiB, for example. Does anyone see the same on their LACP config, or has anyone solved this issue? We see the same behavior on a Dell 48-port switch and on a Cisco 9K using Cisco ACI, and on both HyperScale 1.5 and HyperScale X deployments.

p1p1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether bc:97:e1:2c:9b:00  txqueuelen 1000  (Ethernet)
        RX packets 11213468144  bytes 14318195329557 (13.0 TiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1279302179  bytes 440269032106 (410.0 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
p1p2: flags=6211<UP,BROADCAST,RUN…
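This pattern is often down to the bond's transmit hash policy rather than the switch: with a layer2 policy, every frame between one pair of MAC addresses hashes to the same member link, so a node talking mostly to one peer will load a single interface. Checking /proc/net/bonding/bond0 for the configured xmit_hash_policy is one way to confirm. The Python below is a simplified illustration of the idea, not the actual kernel hash.

```python
# Simplified illustration (not the real Linux bonding code): how an 802.3ad
# (LACP / mode 4) bond picks a member link. With xmit_hash_policy=layer2 the
# hash uses only MAC addresses, so all traffic between one host pair lands on
# one member; layer3+4 also mixes in IPs and L4 ports, spreading flows.

def layer2_slave(src_mac: str, dst_mac: str, n_slaves: int) -> int:
    # Simplified: XOR of the last MAC octets, modulo member count.
    s = int(src_mac.split(":")[-1], 16)
    d = int(dst_mac.split(":")[-1], 16)
    return (s ^ d) % n_slaves

def layer34_slave(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                  n_slaves: int) -> int:
    # Simplified: fold IPs and ports into the hash so each flow can differ.
    return (hash((src_ip, dst_ip)) ^ src_port ^ dst_port) % n_slaves

if __name__ == "__main__":
    # One node talking to one peer: layer2 always picks the same member,
    # which matches the lopsided counters above.
    print(layer2_slave("bc:97:e1:2c:9b:00", "bc:97:e1:2c:9b:10", 2))
    # The same hosts but different TCP streams can land on different members
    # under layer3+4.
    for port in (50010, 50011, 50012, 50013):
        print(layer34_slave("10.0.0.10", "10.0.0.20", port, 445, 2))
```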
Hi All, we have configured an Object Lock (OL) enabled S3 bucket and configured a library, storage pool, and policies using that S3 bucket. Everything shows online and can be accessed from the Commvault console and also via the CLI on the MediaAgents. We also tested with the cloud test tool and that works fine too. But when we start a backup or aux copy, we see the errors below in the jobs:

2204 37c4 02/28 17:04:16 33401117 [cvd] Curl Error: 23, PUT https://pss-commvault-use1-db-45d.s3.us-east-1.amazonaws.com/9T4IG2_02.01.2023_22.26/CV_MAGNETIC/V_16092550/CHUNK_172585619.FOLDER/1
2204 37c4 02/28 17:04:16 33401117 [cvd] CVRFAMZS3::SendRequest() - Error: Access Denied.
2204 2948 02/28 17:05:29 33401117 [cvd] WriteFile() - Access Denied. for file 9T4IG2_02.01.2023_22.26/CV_MAGNETIC/V_16091931/CHUNK_172585925
2204 37c4 02/28 17:05:29 33401117 [cvd] Curl Error: 23, PUT https://pss-commvault-use1-db-45d.s3.us-east-1.amazonaws.com/9T4IG2_02.01.2023_22.26/CV_MAGNETIC/V_16091931/CHUNK_172585925.FOLDER/1
2204 37c4 02/28 17:…
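Since the cloud test tool passes but chunk writes fail with Access Denied, it can help to reproduce the PUT outside Commvault with the same IAM principal the MediaAgent uses; that separates an IAM/bucket-policy/Object Lock/KMS problem from a Commvault configuration problem. A rough boto3 sketch, with the bucket name taken from the log above and everything else a placeholder:

```python
# Hypothetical sketch: test PutObject with boto3 using the MediaAgent's
# credentials. Region, key name, and credentials are assumptions.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "pss-commvault-use1-db-45d"                 # from the job log
key = "connectivity-test/commvault-put-check.txt"    # hypothetical test key

try:
    s3.put_object(Bucket=bucket, Key=key, Body=b"commvault put test")
    print("PutObject succeeded; the deny is likely Commvault-side or key-specific.")
except ClientError as err:
    # An AccessDenied here points at IAM policy, bucket policy, Object Lock
    # settings, or (if SSE-KMS is used) the KMS key policy.
    print("PutObject failed:", err.response["Error"]["Code"])

# The bucket's Object Lock configuration is also worth confirming:
try:
    cfg = s3.get_object_lock_configuration(Bucket=bucket)
    print(cfg.get("ObjectLockConfiguration"))
except ClientError as err:
    print("GetObjectLockConfiguration failed:", err.response["Error"]["Code"])
```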
We're running CV 11.24.25 with a two-node grid (physical) with CIFS mount paths from a Nexsan Unity, which takes secondary copies from the MAs that perform backups (no direct backups other than the DDB), with a partition on each MA. We decided to replace this with a four-node (virtual) grid with S3 (NetApp) storage. The four-node grid was set up with a global dedupe policy based on a 512 KB dedupe block size with a partition on each node; the two-node grid uses the standard 128 KB dedupe block size. We had ~600 TB of back-end storage (~3.3 PB of front-end) and have ~1.75 PB of front-end left to process after about two months of copying. There were 105 storage policies (multi-tenant environment) with retentions ranging from 30 days to 12 years (DB, file, VM, O365 apps), with anything higher than 30 days being extended retentions (normally 30 days/1 cycle and then monthly/yearly with extended retention). We do not seem able to maintain any reasonably high copy rates. Having looked at other conversations here, we've trie…
Evening folks. I am in the process of enabling data verification across our storage policies; some of our servers are EC2 instances with Commvault configured to back up directly to S3. I assume that if I were to enable data verification in the above scenario, I would incur further charges, as Commvault would be reading data from the S3 bucket? Also, I just want to check that my working is correct, or thereabouts. If we are charged for using data verification in the above scenario and, for example, the data verification job needed to verify 150 GB worth of data, my math would be: 150 GB (size of data) / 32 MB (block size that CV stores data in) / 1,000 (S3 charges per 1,000 requests) * price per 1,000 READ requests? Many thanks
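The arithmetic looks about right, with the caveats that the per-read size and the GET price are assumptions and that verification reads are billed as GET requests (S3-to-EC2 transfer within the same region is generally free). Written out as a quick sketch:

```python
# Worked version of the estimate above. The 32 MB read size and the GET price
# are assumptions for illustration; substitute your region's current S3 pricing
# and the actual read size observed in your environment.
data_gb = 150
read_size_mb = 32                 # assumed size of each S3 read
price_per_1000_gets = 0.0004      # example S3 Standard GET price (USD); check your region

requests = (data_gb * 1024) / read_size_mb        # 153,600 MB / 32 MB = 4,800 reads
cost = requests / 1000 * price_per_1000_gets      # 4.8 * $0.0004 ≈ $0.002

print(f"~{requests:.0f} GET requests, ~${cost:.4f} in request charges")
# Request charges are usually tiny; data transfer and retrieval class are the
# larger cost considerations for non-Standard storage classes.
```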
Hey everyone, we were wondering how client-side deduplication and compression work against Azure "Storage Accounts". It doesn't seem to be using our MediaAgent, but which resource is it using then? Is there somehow an "invisible" virtual machine in Azure that runs the "Storage Account" and does the deduplication, etc.? Best regards
Hi, I could not find anywhere that addressed this, so I'm asking here. I read that the only way to "migrate data" between storage classes would be as documented. However, can this be done? I have a HyperScale as a primary copy and an existing AWS S3 Standard bucket as a DASH copy with deduplication. I want to create another DASH copy to an AWS S3-IA bucket with deduplication, promote that copy to be the secondary copy, and get rid of the existing bucket. Effectively, this seems to migrate the data just as well as going through the process described with the Cloud Tool. Am I wrong? Can this be done?
Hello Community, I am new to Commvault. I am trying to check the status of a failed DDB Reconstruction job. I checked the storage policy, but I don't see the job that created the internal ticket.
Type: Job Management - DeDup DB Reconstruction
Detected Criteria: Job Started
Detected…
Thanks.
Greetings. We have some aux copies that go to our AWS S3 bucket. The storage policy they sit under has a 30-day on-prem retention and a 365-day cloud retention. The 30-day on-prem (primary) copy has data aging turned on and seems to be pruning and getting rid of jobs past 30 days. I took a look at the properties of the aux copy, though, and noticed that the check box for data aging was not selected. When I view all jobs for this aux copy, it unfortunately shows jobs from years ago. So that tells me that nothing is aging out or getting cleaned up. Our S3 bucket is getting very large and we need to clean up all of these old jobs to bring it down to a reasonable size. My question is how best to do this clean-up. Can I view the jobs under the aux copy, select all of them past our retention, and delete them? Would this also delete data out of the S3 bucket if I did? I have now selected the data aging check box and hit OK, then ran a data aging job from the CommCell root and just ran it against…
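One way to confirm whether pruning is actually reclaiming space once aging is enabled on the aux copy is to watch the bucket size over time; with deduplication, physical pruning lags job aging, so expect a gradual drop rather than an immediate one. A rough boto3 sketch (the bucket name is a placeholder; for very large buckets the S3 CloudWatch BucketSizeBytes metric is cheaper than listing every object):

```python
# Hypothetical sketch: total up the size of the cloud library bucket so you can
# track whether space is reclaimed after data aging runs.
import boto3

def bucket_size_bytes(bucket: str) -> tuple[int, int]:
    s3 = boto3.client("s3")
    total, count = 0, 0
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            total += obj["Size"]
            count += 1
    return total, count

if __name__ == "__main__":
    size, objects = bucket_size_bytes("my-commvault-aux-copy-bucket")  # placeholder name
    print(f"{objects} objects, {size / 1024**4:.2f} TiB")
```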
I have an aux copy job running; it allocates 55 streams, and as time progresses those streams gradually drop as they complete. My aux copy has now been sitting on 7 jobs for a couple of hours. When I kill the job and restart it, it goes back up to 61 streams. Am I improving things by doing this? Will my job finish faster? Why do the streams not increase by themselves during the job, or why does it take so long for them to do so? Can I improve this?
Hello! This morning I was all thumbs and dropped a tape. This caused the little pin inside to come loose and the tape was barely holding. I have to consider it dead now. Is there a way to flag the data inside to be recopied to another tape? Thanks!
Hi Team, we are about to embark on the V4 to V5 DDB conversion process, but I thought I would ask here and see how it went for those who have completed it. We have a few partitioned DDBs of a reasonable size, and I am trying to gauge how long our backup outage might be, as we have to give an estimate on behalf of our customer. I can see that the pre-upgrade report does give estimates, so I'm wondering how close to reality those ball-park figures tend to be. One more thing: is there a way of identifying what version of DDB we have? Thanks