Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 620 Topics
- 3,252 Replies
What’s the easiest way to confirm a Cool/Archive cloud library is working? I have migrated data from a Hot storage blob container. I provisioned an additional storage container in Azure as Hot and configured the library as Cool/Archive, as recommended by Commvault. I attempted a restore of a file from this Archive library and the restore just completed as normal, without the use of a workflow, so I’m worried the data isn’t in the Archive tier as expected. Any ideas how to confirm the data is actually in Archive and the metadata is in Cool? And why was the restore just standard?
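Not Commvault-specific, but one way to verify independently is to query the access tier Azure reports for a few blobs in the library container. A minimal sketch using the azure-storage-blob Python package (the connection string and container name below are placeholders, not values from your setup):

```python
from itertools import islice
from azure.storage.blob import ContainerClient

# Placeholder connection details -- replace with your own storage account values.
conn_str = "<your-storage-account-connection-string>"
container = ContainerClient.from_connection_string(conn_str, "<your-container-name>")

# Print the access tier Azure reports for the first few blobs in the container.
# If the data really moved, you should see "Archive" (and "Cool" for metadata blobs).
for blob in islice(container.list_blobs(), 10):
    print(blob.name, blob.blob_tier)
```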
Hello everyone, how are you today? I have a problem I am trying to resolve; I don't know if anyone can be of help. This customer has a tape library that had 3 drives but 5 drive slots. The 3 existing drives are running fine; they just added 2 more drives to be configured. I have been trying to detect and configure the new drives; I can see them, but they show as undetected and unconfigured. It says the SCSI adapter has been removed, please go to the property page to select the right SCSI adapter. Can you help, please? Eventually I detected and configured one of the drives, but I can’t detect the last one. During the detect and configure scan, the 5th drive shows up, but it does not show in the data path and library.
We think that there is no real benefit for Commvault to deduplicate transaction log data. Maybe some compression benefit, but today we deduplicate practically everything. I started another topic (Difference between Incremental Storage Policy and Log Storage Policy) to ask about the differences between a Log Storage Policy and an Incremental Storage Policy. The idea is to have a specific storage policy for database agents' transaction logs with deduplication disabled. By examining some backup jobs, we noticed that the Savings Percentage comes entirely from compression. Due to its nature, it seems that there is no gain when deduplicating variable data like transaction logs. By disabling deduplication for that data, in theory, we can keep the DDB smaller and reduce Q&I times. Am I right?
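A rough way to sanity-check that from job statistics (all numbers below are made up for illustration): if the size on disk is barely smaller than the size after compression, then the whole savings percentage is explained by compression and dedup is contributing almost nothing.

```python
# Hypothetical statistics for a transaction-log backup job (illustrative only).
application_size_gb = 500.0   # size reported by the agent
after_compression_gb = 210.0  # data size after compression, before dedup
on_disk_gb = 208.0            # final size written to the library after dedup

compression_savings = 1 - after_compression_gb / application_size_gb
dedup_savings = 1 - on_disk_gb / after_compression_gb
total_savings = 1 - on_disk_gb / application_size_gb

print(f"Compression savings: {compression_savings:.1%}")  # ~58%
print(f"Dedup savings:       {dedup_savings:.1%}")        # ~1% -> dedup adds almost nothing
print(f"Total savings:       {total_savings:.1%}")
```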
Hi, I have a question about the creation of a cloud library. We have a Scality RING as the backend, and we selected S3 Compatible Storage to create the library. In the documentation I found the following information: "For another vendor that supports Amazon S3 such as Scality, you must select Amazon S3 from Type, and then, under Access Information, enter the credentials of that vendor." I have a doubt about selecting Amazon S3 as the cloud library type instead of S3 Compatible Storage, because when I tried to create a test cloud library with the Amazon S3 type, the library creation request did not work. Please advise. Kind regards,
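If it helps narrow things down, you could first confirm the Scality endpoint and credentials work outside of Commvault. A minimal sketch with boto3 (the endpoint URL, keys, and bucket name are placeholders for your RING values):

```python
import boto3

# Placeholder Scality RING endpoint and credentials -- replace with your own.
s3 = boto3.client(
    "s3",
    endpoint_url="https://scality.example.local",
    aws_access_key_id="<access-key>",
    aws_secret_access_key="<secret-key>",
)

# If this lists the target bucket's contents, the endpoint and credentials are fine,
# and the problem is more likely in how the library type is selected in Commvault.
response = s3.list_objects_v2(Bucket="<bucket-name>", MaxKeys=5)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])
```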
Hello, we have an environment that was originally set up with Windows media agents, but as it has grown we have added some Linux media agents as data movers. Commvault will let me share the mount paths with these Linux media agents, but it will not let me run a “move mount path” from a connected CIFS UNC path to a locally-mounted NFS volume on a Linux media agent, so it concerned me that it would still let me share them with a Linux media agent when mounted as a local path. That being said, I was wondering if anyone else is successfully accessing mount paths over SMB/CIFS using Linux media agents? (SMB path mounted to local directories on the Linux media agents and accessed/shared as local paths on the Linux servers, i.e. \\sancifsservername\mysharename via network path on the Windows servers, and mounted to a local directory as a local device on the Linux servers.) Thanks all! Pat
Hi, we have some issues with our use of our cloud backend (Scality RING). When running backups or data verification, the requests sent by the media agent create usage peaks on the Juniper switch (QFX 5100), which causes packet drops and, as a result, Commvault chunk read errors; we have multiple bad chunk alerts. The network team has asked us to set a network bandwidth limitation to reduce the peaks on the Juniper switch. How is it possible to limit the bandwidth between the media agent and the cloud libraries? Kind regards,
If we have an existing DDB on a drive for a media agent and that drive gets encrypted with BitLocker, does that cause a problem? My thought is that it doesn’t, since all reads/writes are happening inside the server. There might be a performance penalty, though. Or am I totally wrong? //Henke
Where do I configure the “preferred” setting mentioned in the “Select mount path for MediaAgent according to the preferred setting” option of the library properties? Thank you in advance. Best regards
Hi! I’ve noticed a strange behaviour of a daily DDB incremental verification job, normally set to 30 readers, but getting much higher than this for a few days now (see below). How can this be possible? I thought that the limit was... a limit. So, what has changed? Well, I’m not sure it is related, but since Sept 9 I upgraded from v11FR21HP20 to v11FR21HP53, and no other change. The GridStor of 4 hosting this DDB also hosts another DDB, whose verification job runs without exceeding (or at least without printing more than) the maximum 30 readers limit. That’s a bit annoying because the first job goes pending for a while with: Error Code: [19:1105] Description: No Resources Available. Please check the Streams tab for details. Of course no resources are available, as it’s trying to allocate more than 30. A few details about this job and blocks to process: Should I log a case, or have you seen this before?
This is just a question about how we could have a better way to rebuild an MA. We have a Windows-based local MA, and the data is aux copied to another media agent at a secondary site (a different city). The RAID on the local MA was returning errors, so we replaced one of the faulty disks, which caused the whole RAID configuration to fail. We had to rebuild the whole virtual disk again. Unfortunately, since there were only 3-6TB disks, everything (OS/DDB/index cache/libraries) was on the same virtual disk and we lost everything. Since we had a copy of the backup data at the secondary location, the quickest way was to rebuild the MA. The RAID controller was not showing any error, and it was not feasible to send another MA to the site due to lockdown. So we rebuilt the virtual disk again, installed/upgraded the OS to Windows 2019, kept the same hostname/IP configuration, and installed the Commvault agent, which brought the MA online on the CommServe; created the index cache and DDB; added the new mount path in the
We want to use space on an SSD for a new DDB (horizontal scaling) and the old DDB is not being used. It only has jobs with long-term retention, as it was used for file archiving (OnePass). The OnePass jobs are not active anymore; the data only needs to be restorable. If Commvault doesn’t need the DDB for restores, how can I get rid of the DDB, since we don’t have any backups running? Sealing could be an option. However, BOL mentions: “The sealed DDB is automatically deleted only after all the jobs (with their data block signatures stored on the DDB) are aged or deleted and the corresponding volumes are deleted from the disk.” So the DDB is required for purposes other than the backups. https://documentation.commvault.com/commvault/v11/article?p=12591.htm For now we will be moving the partition to a different drive. Any ideas?
Hi all. I have a question regarding restoring data which resides on an offsite private cloud that the clients have no access to. The setup is:
- On-prem client network with the servers in backup.
  → Firewall in between with a port open for backup traffic.
- On-prem backup network with MediaAgents and storage.
  → Firewall in between with a port open for aux copy data.
- MediaAgents with offsite private cloud storage containing aux copies of the client data.
The clients do not have access to the offsite MediaAgents/copy, so my question is: how do I restore data from the offsite cloud when the on-prem clients and the offsite MediaAgents cannot talk to each other directly? Is it possible to route restore traffic via the on-prem MediaAgents acting as a proxy somehow? Would a network gateway via outgoing routes be the solution? Any suggestion would be appreciated. Thanks for helping. -Anders
Hi there, IHAC (I have a customer) that will use ExaGrid as backup storage with Commvault. ExaGrid states that they can add to Commvault deduplication to obtain a higher dedup ratio (up to 20:1 for long-term retention data). I couldn’t find any information on ExaGrid in BoL, and my understanding was that we do not use CV deduplication when using deduplication storage as the primary target. Has anyone implemented CV with ExaGrid? If so, any specifics, caveats, or best practices? Thanks, Abdel
Hello all, I am from a TSM background, and when VTL support was introduced in TSM, the scratch tapes were not automatically deleted in the VTL. Later, the RELABELSCRATCH parameter was introduced, which allows volumes to be relabelled automatically when they are returned to scratch. I remember there are similar specific settings to configure in Backup Exec and HP Data Protector too. → I want to know whether any similar setting exists in Commvault? More details from the TSM perspective → Virtual Tape Libraries (VTLs) maintain volume space allocation after Tivoli Storage Manager has deleted a volume and returned it to a scratch state. The VTL has no knowledge that the volume was deleted and it keeps the full size of the volume allocated. This can be extremely large depending on the devices being emulated. As a result of multiple volumes returning to scratch, the VTL can maintain their allocation size and run out of storage space. Relabel processing on the Tivoli Storage Manager server is started for libraries (V
Hi all, we have the following issue. Some LTO8 tapes were incorrectly labelled with LTO6 barcodes. The tapes have since been relabelled with the right barcodes. However, Commvault cannot recognise the newly relabelled tapes: Commvault says the barcodes have already been used, and the tapes were moved to the retired media group. We tried to perform Discover, Full Scan (inventory) and Update Barcode for the given tapes without success. Is there any workaround for this issue, or is the only option to somehow fix the tapes within the tape library?
Hello community, we are trying to migrate SAN storage to an S3 cloud library. Per suggestions, we followed these steps:
1. Configured a new global dedupe storage policy using the new S3 bucket and MA.
2. Configured new secondary copies in the existing storage policies, pointing to the new S3 dedupe storage.
3. Ran aux copy.
We have a huge amount of data and contacted Commvault support to determine when the aux copy will complete. Currently the aux copy has been running for more than 4 months. Support mentioned the points below:
- Your current configuration is allowing the selection and prioritization of new backups over older data.
- You are also configured to copy all data to the cloud, and they mentioned we are not using dedupe for the aux copy.
How can we make sure we have an optimal aux copy configuration? Please share your inputs. Thanks in advance, Spartan9
Hello, the customer bought new MS SQL servers and is migrating to the 2 new servers.
Scenario today: 2 MS SQL servers (1 production, 2 copy).
New scenario: 2 MS SQL servers with a new version of the OS and database (1 production, 2 copy).
I need to copy the configuration of the backup jobs, with the same configuration as the current backups: retention, backup jobs, DDB, schedules. Does anyone have a procedure or best practices for this?
Hello everyone, all my backups are set to replicate from my primary site to my DR site, and all copies have the same retention. Weirdly, the Disk Library Growth report shows the media agent at my primary site has 178TB worth of data, but my DR site only has 132TB, so I’m wondering where the 46TB difference comes from. Question: Is there a way to compare the contents of two media agents to see where the discrepancies are coming from? Thanks, Ken
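Not a full answer, but one low-tech way to narrow it down is to export the job list for each copy (for example to CSV) and compare them by job ID outside of Commvault. A rough sketch, assuming two hypothetical CSV exports with JobId and SizeGB columns (file and column names are placeholders):

```python
import csv

def load_jobs(path):
    """Read a hypothetical CSV export with 'JobId' and 'SizeGB' columns."""
    with open(path, newline="") as f:
        return {row["JobId"]: float(row["SizeGB"]) for row in csv.DictReader(f)}

primary = load_jobs("primary_copy_jobs.csv")
dr = load_jobs("dr_copy_jobs.csv")

# Jobs present on the primary copy but absent from the DR copy.
missing = {job_id: size for job_id, size in primary.items() if job_id not in dr}
print(f"Jobs missing on DR copy: {len(missing)}, totalling {sum(missing.values()):.1f} GB")
```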
Hi, I have an old sealed DDB with no more jobs associated with it; it only shows some number of unique blocks left (for a size of 1.18TB), secondary blocks is 0, and application size is already at 0. How can I then do what you proposed: “then remove ALL of the blocks for that store in one big macro prune”? Because I’d like to get rid of that sealed DDB completely.