Storage and Deduplication
Discuss any topic related to storage or deduplication best practices
- 776 Topics
- 3,676 Replies
This has been driving me mildly nuts for the past three days, following a power shutdown at our site.

We had some challenges with our storage network that prevented iSCSI communication for a few hours after we powered the site back up following scheduled electrical work. We solved that with the software equivalent of a power cycle on the switches in question (two unrelated port channels to our SAN were disabled and re-enabled). Great: iSCSI works again with all hosts.

The MediaAgent (MA) in question, a VM, is borrowing space on a temporary basis from a newer array while I do a lot of migration work, and it mounts the volumes in Windows via iSCSI, as opposed to using an iSCSI-mounted datastore in VMware that the OS would treat as a local disk. Keep in mind, this was all working swimmingly before this past weekend.

For the past three days, CV has been convinced that the five mount paths in question are offline and have no controller. If you go to the OS and browse, you can drill down as deeply into the mounted volumes as you like.
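Since the OS and CommVault disagree here, it can help to rule out an OS-level problem with a scripted read/write probe against each mount point. A minimal sketch, assuming hypothetical drive letters F: through J: for the five iSCSI-mounted volumes (substitute the paths CV reports as offline):

```python
import os
import tempfile

# Hypothetical drive letters for the five iSCSI-mounted volumes.
MOUNT_POINTS = ["F:\\", "G:\\", "H:\\", "I:\\", "J:\\"]

for mp in MOUNT_POINTS:
    try:
        # Listing the root proves the volume is readable by the OS.
        entries = os.listdir(mp)
        # Creating (and auto-deleting) a temp file proves it is writable.
        with tempfile.NamedTemporaryFile(dir=mp, delete=True):
            pass
        print(f"{mp} OK ({len(entries)} entries, read/write verified)")
    except OSError as exc:
        print(f"{mp} FAILED -> {exc}")
```

If all five pass, the problem is more likely in how the MA's device paths are recorded on the Commvault side than in the OS itself.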
Hi, we back up a file server in China with incrementals; on Saturday there is a synthetic full. The aux copy of the synthetic full (1.8 TB) to Germany is really slow. Can I schedule the synthetic full on the secondary storage policy copy instead? In China always incrementals, aux copy to Germany, and then run the synthetic full in Germany. We need a full for the tape backups. Regards, Peter Rupp
I've looked at the list of destinations for a cloud library and didn't find Azure Gov Cloud. We currently use Azure but will move to Azure Gov Cloud. Can I create a cloud library in Azure Gov by just choosing Azure and changing the URL? Or isn't Azure Gov supported as a destination? BR, Henke
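For context, the practical difference is the storage endpoint suffix: Azure Government blob accounts live under usgovcloudapi.net instead of windows.net. Before pointing a library at the account, you can confirm the endpoint and credentials work with a sketch like this (assuming the azure-storage-blob SDK; account name and key are placeholders):

```python
from azure.storage.blob import BlobServiceClient

# Azure Government uses a different endpoint suffix than public Azure:
#   public: https://<account>.blob.core.windows.net
#   gov:    https://<account>.blob.core.usgovcloudapi.net
ACCOUNT = "mygovstorageaccount"        # placeholder account name
ACCOUNT_KEY = "<storage-account-key>"  # placeholder credential

client = BlobServiceClient(
    account_url=f"https://{ACCOUNT}.blob.core.usgovcloudapi.net",
    credential=ACCOUNT_KEY,
)

# Listing containers is a cheap way to prove the endpoint and
# credentials are valid before configuring the cloud library.
for container in client.list_containers():
    print(container.name)
```

Whether the library dialog accepts a Gov endpoint depends on the Commvault version, so check the supported cloud storage targets for your service pack.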
Hello, I would like the weekly tape backup jobs to have an exception for the first three days of the month. For that I created an exception (excluding the 1st, 2nd and 3rd day of the month) on the auxiliary-copy-to-tape task in the schedule policy. However, I have pending jobs that were not copied on October 2 in the weekly storage policies, which I don't want. I opened a case (incident 211005-210), and support told me this was expected because it is implemented at the level of the storage policies. How can I prevent the weekly backup copies from running on the 1st, 2nd and 3rd of each month? Is it possible to configure this? Thanks a lot.
Dear colleagues, we have been using CommVault Simpana for many years, and we are currently looking for a way to back up our data to the cloud with Simpana, i.e. the backup target would be AWS, Azure or any other cloud provider. I found in the documentation that backup to cloud is possible for CommServe DR, but what about production data? Note: we are looking for options for backup TO the cloud (where the cloud is the backup destination), NOT backup OF the cloud (where the cloud is the backup source)! I would appreciate any information on this topic. Thanks!
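Independent of the Simpana side, it is worth verifying that the intended cloud target is writable with the credentials you plan to give the cloud library. A small boto3 sketch against an assumed S3 bucket (bucket name and region are placeholders):

```python
import boto3

BUCKET = "my-backup-target-bucket"  # placeholder bucket name

s3 = boto3.client("s3", region_name="eu-central-1")  # placeholder region

# A put/get/delete round trip proves the credentials and bucket
# policy allow everything a backup destination needs.
s3.put_object(Bucket=BUCKET, Key="commvault-probe.txt", Body=b"probe")
obj = s3.get_object(Bucket=BUCKET, Key="commvault-probe.txt")
assert obj["Body"].read() == b"probe"
s3.delete_object(Bucket=BUCKET, Key="commvault-probe.txt")
print("bucket is readable and writable")
```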
Hi, I am performing a CommCell Migration, and in the source CommCell there is a tape library that we need to migrate to the target CommCell. My question is whether the historical data will be available for restores (i.e. whether the tapes will be migrated/relabeled as well), and whether using the option "Allow imported media to be reused on this Commcell" means that appends will work once the tape library is migrated, or whether it is read-only and the tapes will only be reused (moved to scratch) once all data on them has expired. Regards, /Patrik
Hi everyone, how do you estimate MCSS storage requirements? MCSS (Cool) would be our customers' cloud secondary copy in their existing Commvault environment, so values such as local block size (128 KB), deduplication factor and baseline size are available for the estimate. With customers who set it up recently, we're seeing around 1.2 to 1.3× the MCSS demand compared to local disk. As MCSS secondary copies "must have a minimum retention of 30 days", reducing retention (which is often the customer's preferred fix for storage issues) is apparently not an option here. Of course we cannot provide a 100% accurate estimate, but we intend not to be too far off. Thanks.
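One rough approach is to take the local back-end size (baseline divided by the dedup factor, or the measured size on disk), add the retained incrementals, and scale by the observed 1.2-1.3× MCSS overhead. A sketch with made-up numbers; every input is an assumption to be replaced with the customer's real values:

```python
# Rough MCSS (Cool) sizing sketch; all inputs are placeholders.
baseline_tb = 100.0       # front-end baseline size
dedup_factor = 4.0        # observed local deduplication factor
mcss_overhead = 1.3       # observed 1.2-1.3x vs. local disk
daily_change_rate = 0.02  # assumed 2% daily change
retention_days = 30       # MCSS minimum retention

local_backend_tb = baseline_tb / dedup_factor
increments_tb = baseline_tb * daily_change_rate * retention_days / dedup_factor
mcss_estimate_tb = (local_backend_tb + increments_tb) * mcss_overhead

print(f"local back-end : {local_backend_tb:.1f} TB")
print(f"30d increments : {increments_tb:.1f} TB")
print(f"MCSS estimate  : {mcss_estimate_tb:.1f} TB")
```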
Hello. Is it possible to have aux copies configured between different CommCell entities? Same company, but different business units in different buildings, each requiring offsite backup copies and trying to make use of the infrastructure the other already has in its environment. This is probably possible by installing a second instance on the existing MA, but that will require separate disk library entities at that site. Thanks. Iggy
What's the easiest way to confirm a Cool/Archive cloud library is working? I have migrated data from a Hot storage blob container. I provisioned an additional storage container as Hot in Azure and configured it as Cool/Archive, as recommended by Commvault. I attempted a restore of a file from this Archive library, and the restore completed as normal without the use of a workflow, so I'm worried the data isn't in the Archive tier as expected. Any ideas how to confirm the data is actually in Archive and the metadata is in Cool? And why was the restore just standard?
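One way to check outside Commvault is to query the blob properties directly: each blob's access tier (and whether an archived blob is being rehydrated) is visible through the Azure SDK. A sketch, assuming the azure-storage-blob package; account URL, container name and credential are placeholders:

```python
from azure.storage.blob import ContainerClient

container = ContainerClient(
    account_url="https://mystorageaccount.blob.core.windows.net",
    container_name="commvault-archive",  # placeholder container
    credential="<storage-account-key>",  # placeholder credential
)

# blob_tier shows Hot/Cool/Archive per blob; archive_status shows
# whether an archived blob is currently being rehydrated.
for blob in container.list_blobs():
    print(blob.name, blob.blob_tier, blob.archive_status)
```

If the data blobs report Hot or Cool rather than Archive, that would explain why the restore ran without a recall step.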
Hello everyone, how are you today? I have a problem I am trying to resolve and don't know if anyone can help. A customer has a tape library with 5 drive slots, 3 of them holding existing drives that run fine. They just added 2 more drives to be configured. I have been trying to detect and configure the new drives, but they show as undetected/unconfigured, with a message that the SCSI adapter has been removed and to go to the property page to select the right SCSI adapter. Can you help, please? I eventually detected and configured one of the drives, but can't detect the last one. During the detect-and-configure scan, the 5th drive shows up, but it does not appear in the data path or the library.
We think there is no real benefit in having Commvault deduplicate transaction log data. Maybe some compression benefit, but today we deduplicate practically everything. I started another topic (Difference between Incremental Storage Policy and Log Storage Policy) to ask about the differences between a Log Storage Policy and an Incremental Storage Policy. The idea is to have a specific storage policy for database agents' transaction logs with deduplication disabled. Examining some backup jobs, we noticed that the savings percentage comes entirely from compression. Due to its nature, there seems to be no gain in deduplicating highly variable data like transaction logs. By disabling deduplication for that data, in theory, we can keep the DDB smaller and reduce Q&I times. Am I right?
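A quick sanity check is to compute both ratios from the job statistics: if application size divided by data written matches the compression ratio alone, dedup contributed nothing. A sketch with illustrative numbers (replace with real values from the job details):

```python
# Illustrative job statistics for a transaction log backup.
app_size_gb = 200.0          # application size
after_compression_gb = 80.0  # size after compression
data_written_gb = 79.5       # size actually written after dedup

compression_savings = 1 - after_compression_gb / app_size_gb
dedup_savings = 1 - data_written_gb / after_compression_gb
total_savings = 1 - data_written_gb / app_size_gb

print(f"compression savings: {compression_savings:.1%}")
print(f"dedup savings:       {dedup_savings:.1%}")  # near 0% for logs
print(f"total savings:       {total_savings:.1%}")
```

If dedup savings stay near zero across jobs, routing the logs to a non-deduplicated copy spares the DDB those signature lookups.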
Hi, I have a question about the creation of a cloud library. We have a Scality RING as the backend, and we selected "S3 Compatible Storage" to create the library. In the documentation I found the following: "For another vendor that supports Amazon S3, such as Scality, you must select Amazon S3 from Type, and then, under Access Information, enter the credentials of that vendor." I have doubts about selecting the Amazon S3 type for the cloud library instead of S3 Compatible Storage, because when I tried to create a test library with the Amazon S3 type, the library creation request didn't work. Please advise. Kind regards,
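Outside of Commvault, you can confirm that the RING endpoint accepts standard S3 API calls with your credentials, which narrows down whether the failure is in the backend or in the library type chosen. A boto3 sketch; the endpoint URL and keys are placeholders:

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.scality.example.local",  # placeholder endpoint
    aws_access_key_id="<access-key>",                 # placeholder key
    aws_secret_access_key="<secret-key>",             # placeholder secret
)

# If listing buckets succeeds, the endpoint speaks the S3 dialect
# and the credentials are valid; a failure here points at the
# backend rather than the Commvault library type.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```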
Hello, we have an environment that was originally set up with Windows MediaAgents, but as it has grown we have added some Linux MediaAgents as data movers. CommVault will let me share the mount paths with these Linux MediaAgents, but it will not let me run a "move mount path" from a connected CIFS UNC path to a locally mounted NFS volume on a Linux MediaAgent, so it concerned me that it would let me share them with a Linux MediaAgent mounted as a local path. That being said, I was wondering whether anyone else is successfully accessing mount paths over SMB/CIFS with Linux MediaAgents? (The SMB path mounted to local directories on the Linux MediaAgents and accessed/shared as local paths, i.e. \\sancifsservername\mysharename via network path on the Windows servers, and mounted to a local directory as a local device on the Linux servers.) Thanks all! Pat
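For what it's worth, you can at least confirm how a given path is actually mounted on the Linux MediaAgent before sharing it, since a CIFS share presented as a local directory still reports its real filesystem type. A small sketch; the mount point is a placeholder:

```python
# Report how a path is mounted on a Linux MediaAgent by reading
# /proc/mounts; a CIFS mount shows fstype "cifs", NFS shows "nfs"/"nfs4".
MOUNT_POINT = "/mnt/mysharename"  # placeholder local mount point

with open("/proc/mounts") as fh:
    for line in fh:
        device, mountpoint, fstype, *_ = line.split()
        if mountpoint == MOUNT_POINT:
            print(f"{mountpoint} is a {fstype} mount from {device}")
```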
Hi, we have some issues with our use of our cloud backend (Scality RING). When running backups or data verification, the requests sent by the MediaAgent create usage peaks on the Juniper switch (QFX 5100), which causes packet drops and, in turn, Commvault errors reading chunks; we have multiple bad-chunk alerts. The network team has asked us to set a network bandwidth limit to reduce the peaks on the Juniper switch. How is it possible to limit the bandwidth between the MediaAgent and the cloud libraries? Kind regards,
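Commvault ships its own network throttling controls, so check those first for your version. Purely to illustrate the mechanism a sender-side bandwidth cap uses, here is a minimal token-bucket sketch (an illustration, not Commvault code):

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: callers send at most rate_bytes
    per second, which smooths out traffic peaks. Assumes each chunk
    is smaller than one second of budget."""

    def __init__(self, rate_bytes: float):
        self.rate = rate_bytes
        self.tokens = rate_bytes
        self.last = time.monotonic()

    def consume(self, nbytes: int) -> None:
        while True:
            now = time.monotonic()
            self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            # Wait until enough tokens accumulate for this chunk.
            time.sleep((nbytes - self.tokens) / self.rate)

# Example: cap simulated 1 MiB chunk sends at 50 MiB/s.
bucket = TokenBucket(rate_bytes=50 * 1024 * 1024)
for _ in range(5):
    bucket.consume(1024 * 1024)  # would precede each chunk upload
```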
If we have an existing DDB on a drive attached to a MediaAgent and that drive gets encrypted with BitLocker, does that cause a problem? My thought is that it doesn't, since all reads/writes happen inside the server, although there might be a performance penalty. Or am I totally wrong? //Henke
I'd like to ask where to configure the "preferred setting" mentioned in the "Select mount path for MediaAgent according to the preferred setting" option of the library properties? Thank you in advance. Best regards