Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 588 Topics
- 3,129 Replies
Hello,

We have bought a new tape library and want to create auxiliary copy jobs that run on a weekly basis for 4 storage policies, copying only the full backup jobs (the client schedule is one full per day and an incremental every hour). The auxiliary copy jobs must all run during the weekend and occupy no more than 2 tapes (our tapes have a large capacity of 16 TB each), because we only have 8 tapes.

The question is: how can we make the copy jobs write to and fill up one tape, and only move on to another tape once the first one is full? For example, after the first copy job finishes, the second one should use the same tape as the first (if there is still space available, of course), and only when that tape is full should it write to another one, and so on.

Best regards,
Stefan
Hi all,

One question about DDB verification jobs. The verification job for one of our DDBs takes quite a long time, usually a matter of days. When I check the ScalableDDBVerf.log file I see many messages like this:

WARNING - Waiting for send queue to get emptied. Curr Size …

Should I consider this a symptom of a problem?

Thank you in advance,
Gaetano
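One way to gauge how much of the job's runtime is actually spent stalled on that warning is to tally its occurrences per hour from the log. A minimal sketch, assuming Python is available on the MediaAgent and that the log sits in the usual Commvault Log Files directory; the path and the timestamp regex are assumptions to adjust to your installation:

```python
# Rough sketch: count "Waiting for send queue" warnings per hour in ScalableDDBVerf.log.
# Assumes each matching line carries an MM/DD HH:MM:SS timestamp somewhere before the
# message text (adjust the regex to your actual log layout).
import re
from collections import Counter

LOG_PATH = r"C:\Program Files\Commvault\ContentStore\Log Files\ScalableDDBVerf.log"  # adjust

stamp = re.compile(r"(\d{2}/\d{2}) (\d{2}):\d{2}:\d{2}")
hourly = Counter()

with open(LOG_PATH, errors="ignore") as log:
    for line in log:
        if "Waiting for send queue to get emptied" in line:
            match = stamp.search(line)
            if match:
                hourly[f"{match.group(1)} {match.group(2)}:00"] += 1

for hour, count in sorted(hourly.items()):
    print(f"{hour}  {count} waits")
```

A count that stays high hour after hour would at least tell you the job is spending most of its time waiting to send data rather than reading the store.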
Hello,

Let me ask your opinion about the following situation. When I check the details of the System Created DDB Space Reclamation schedule policy, it looks “corrupted”. As you can see in the attached image, the summary screen shows the type as “Data Verification”, while the dialog shows the type as “Data Protection”. Moreover, the Associations tab shows a list of clients instead of the DDB list.

Is this normal? How can I get rid of it?

Thank you in advance,
Gaetano
Hi All,

I was documenting the WORM activation on our cloud storage, using different threads here and the documentation, and came across several questions which I hope will get answered through this topic.

1 - The link that follows states: “Note: Once applied, the WORM functionality is irreversible”. Does that mean that once we activate WORM on the storage through the workflow, we cannot change the retention? As a first test of WORM we wanted to set the retention of one storage policy copy on the storage pool to just 1 day. Does that mean we cannot later change the retention in the workflow to something else, say 15 days?

2 - From the same link: since our storage pool uses deduplication, the retention set on the storage will be twice the retention on the storage pool. Our copies on the storage pool will be set to 15 days; does that mean the data will remain on the storage for 30 days without being deleted, af…
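For question 2, the arithmetic implied by the cited documentation would look like the toy calculation below. The doubling rule is only what that page states for deduplicated pools, so treat it as an assumption to confirm rather than a definitive behaviour:

```python
# Toy calculation of the storage-level WORM lock window for a deduplicated pool,
# assuming the cited rule: lock period = 2 x storage policy copy retention.
copy_retention_days = 15                      # retention set on the storage policy copy
worm_lock_days = 2 * copy_retention_days      # lock period applied on the cloud storage

print(f"Copy retention         : {copy_retention_days} days")
print(f"Storage-level WORM lock: {worm_lock_days} days")   # data stays locked ~30 days
```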
My company is planning a tech refresh of our aging Data Domain to a newer model. We have also highlighted that we are seeing backup slowness on some of our large Oracle databases and some NDMP backups. Our current configuration backs up to a VTL located on our Data Domain, with no compression or deduplication enabled at the Commvault layer.

Dell’s sales team advised us to purchase an additional DD Boost license for the new Data Domain, because DD Boost can achieve very good deduplication and compression ratios at the source before transferring data to the Data Domain, saving the time it takes to transfer over the network.

However, from checking Commvault’s KB it looks like Commvault only works with BoostFS and not DD Boost. I haven’t checked with Dell on this yet. Has anyone implemented DD Boost in your environment for backing up databases/VMs and NDMP?
The auditors want to see whether my backups are encrypted, and I’m not sure where to go in the Commvault GUI to show that. I don’t see anything about encryption in the properties of my storage libraries or my storage policies. Where do I show whether or not my backups are encrypted?

Ken
Hello,

I am trying to fully understand restore point retention to achieve my goals. I also have Incident 221117-488 currently open about this.

Here’s what I am trying to achieve: a low-priority backup plan that runs on a daily schedule and consistently retains as close to 3 restore points as possible.

Here are the settings I currently have for my base and derived plans.

Base plan: WIN_SYS_STD_BASE_LOW
- SLA: 1 week, inherited from CommCell
- Backup destinations:
  - Primary - 3 days retention period
  - Secondary - 3 days retention period
- Database options: Log backup RPO - 4 hour(s)
- Run full backup every: 1 week
- Storage pool: Override not allowed
- RPO: Override required
- Folders to backup: Override optional

Derived plan (defines scheduling only): WIN_SYS_STD_BASE_LOW_10PM
- Defined in Java GUI: Run synthetic full every 3 days
- Backup frequency: Run incremental every day at 10:00 PM
- Backup destinations (inheriting from base plan):
  - Primary - 3 days retention period
  - Secondary - 3 days retention period
- Database options: Log backup RPO …
Hi All,

Has anybody worked on bringing the Commvault maglib status and tape media usage status into a Grafana dashboard? Is there any way to showcase the usage trend and capacity reporting based on maglib utilization and publish it in Grafana? If we can pull real-time data from Commvault, we can pretty much show this metric in Grafana. Any leads?
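One possible approach, sketched purely as an illustration: a small exporter that publishes library capacity figures as Prometheus gauges for Grafana to graph. The fetch_library_stats() function below is a hypothetical placeholder with made-up sample values; how you actually pull the numbers out of Commvault (REST API, exported CommCell report, etc.) depends on your version, and only the Prometheus side of the sketch is concrete:

```python
# Sketch: expose Commvault library capacity as Prometheus gauges for Grafana.
# fetch_library_stats() is a placeholder; replace it with a real query against
# your Commvault environment (values below are invented sample data).
import time
from prometheus_client import Gauge, start_http_server

capacity_gb = Gauge("commvault_library_capacity_gb", "Total library capacity", ["library"])
used_gb = Gauge("commvault_library_used_gb", "Used library capacity", ["library"])

def fetch_library_stats():
    """Placeholder: return {library_name: (total_gb, used_gb)} from Commvault."""
    return {"MagLib01": (100000.0, 63500.0)}   # hypothetical sample values

if __name__ == "__main__":
    start_http_server(9877)                    # Prometheus scrapes this port
    while True:
        for name, (total, used) in fetch_library_stats().items():
            capacity_gb.labels(library=name).set(total)
            used_gb.labels(library=name).set(used)
        time.sleep(300)                        # refresh every 5 minutes
```

With Prometheus scraping this endpoint, the usage trend and percentage-full panels can then be built entirely in Grafana.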
Hello, a customer asked whether it would be possible to make all their primary copies (for disk libraries) WORM-protected, and what the implications would be.

Up until now our standard has been n days/1 cycle retention on primary copies and n days/0 cycles retention on a secondary copy. We basically use the 1 cycle as a safety net: if, for whatever reason, the backup of a client does not run for a long time, there is always one backup available without setting manual retention on those jobs.

Now we are having an internal discussion about how retention works with WORM, specifically whether the cycle retention is also relevant for manual deletion of the jobs/clients that hold those jobs. For example: a client is using a WORM storage policy with 14 days/1 cycle retention. Data aging will not age out and delete the jobs automatically until both conditions are met. But is it possible to manually delete the jobs on day 15? I would say it is not possible, because the data is still retained by the cycles. If that is the case…
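To make the “both conditions” rule concrete, here is a toy model of days-and-cycles aging (not Commvault code, just the logic as commonly understood). Whether WORM additionally blocks a manual delete of a job that still sits inside this window is exactly the open question above:

```python
# Toy model: a job becomes eligible for aging only when BOTH the days condition
# and the cycles condition are satisfied (logical AND).
def eligible_for_aging(job_age_days: int, newer_full_cycles: int,
                       retention_days: int = 14, retention_cycles: int = 1) -> bool:
    """True only when both retention conditions are met."""
    return job_age_days > retention_days and newer_full_cycles >= retention_cycles

# Day 15, but no newer complete cycle exists yet for this client:
print(eligible_for_aging(job_age_days=15, newer_full_cycles=0))   # False
# Day 15 and at least one newer complete cycle exists:
print(eligible_for_aging(job_age_days=15, newer_full_cycles=1))   # True
```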
Hi,

I'm setting up a new Linux (Red Hat) MediaAgent right now and I have a “what would be better” question. Maybe someone would like to share their experiences :)

On this new MediaAgent I plan to create a new disk library. The MediaAgent will have resources available via SAN from an array (several volumes of 8 TB each). Is it better to use LVM on these volumes (create VGs, create LVs and finally create the filesystem, for example ext4), or to create GPT (parted) partitions and create ext4 directly, without creating VGs and LVs?

I am very curious about your opinions on which would be the better solution.

Greetings
Hi,

We have recently acquired a new server and storage as part of our hardware refresh for the Commvault server. The new server has the following:
- 2x 480 GB SATA SSD configured as RAID 1 (OS installed)
- 2x 1.6 TB PCIe SSD - still deciding whether to use host-based mirroring or leave them as standalone disks; intended use is for the SQL database, DDB, and index

The old server has the following disk configuration:
- OS - 558 GB - 173 GB used
- SQL - 278 GB - 866 MB used
- DDB - 418 GB - 25.6 GB used
- Commvault V11 SP16 HPK17

Any recommendations for the new server’s disk configuration? If I use host-based mirroring, will it have an impact on the server’s performance? Which Commvault version should I use, 2022E or 11.26?

Thank you in advance.
Hi,

I have to perform an OS refresh on some physical MediaAgents (2 grids of 4 MAs). We will upgrade from Windows Server 2012 to 2019. My concern is data availability, because our disk libraries are configured with local LUNs on each MA, shared with the others via the DataServer-IP option. The problem is that while one MA is unavailable for the upgrade, its mount paths are not readable by the other MAs. I have searched the documentation and cannot find any use case covering an MA refresh with shared mount paths.

Please advise.

Kind regards,
Christophe
Hello,

We have run out of space in one of our two libraries, and to expand it we have to replace the current hard disks with higher-capacity ones, as we have no option to add additional disks or modules. The affected library contains the DR copy, so we have changed the path in the storage policy to copy to the other library, intending to free up the affected library, delete it and configure it again with the new disks.

When we try to do this we get an error, because the DR copy is of the warm type and has a long retention; until that expires we cannot delete it, even though we have another copy in the other library.

How can we proceed?

Best regards, and thanks in advance.
Hello Commvault!

I'm in the middle of a project with Commvault 11.28 and HPE StoreOnce Catalyst. I followed the documentation (https://documentation.commvault.com/fujitsu/v11/expert/99426_creating_hpe_store_for_hpe_storeonce_catalyst_library.html) to create the Store and also to create the library in the CommCell Console.

Now, in order to configure the storage policies and add the VMware content, I assume this has to be done from the CommCell Console, right? Because I can't even see the HPE Catalyst library from Command Center (from there it would be much easier)!
I’m new to Commvault and still trying to sort out the CommCell Console versus Command Center, so I appreciate your patience. After 4 years with Veeam and two decades with Data Protector, Commvault is proving to be quite a different animal.

My first concern is why it takes googling some arcane code (ActivateHPECatalyst) to enter in the CommCell Console properties just to make the StoreOnce option visible for library creation in the UI. What is the rationale for hiding the StoreOnce option in the first place?

Now that I’ve added a Catalyst-backed disk library under Storage Resources > Libraries in the CommCell Console, I go back over to Command Center, look at Storage > Disk, and I do not see my new disk library. How then am I supposed to add it to anything as a backup destination?
Our new SAN for the Commvault disk library is a Dell PowerScale H7000 running OneFS; the library currently in production is a NetApp 2750. We use Windows 2019 MediaAgents and iSCSI LUNs on the NetApp. The PowerScale H7000 doesn't support iSCSI LUNs, and sealing the DDB or going the “create new disk library > global DDB > new primary copy” route is not our preferred path.

Is it possible to add a new mount path on the current disk library pointing to an SMB share on the Dell OneFS, and disable writes on all the local iSCSI paths with the option “Prevent data block references for new backup” checked? We could do this for all local paths at once or gradually. Our retention is 90 days on this SP. Could I then delete those local paths after the jobs age out and continue using the path on the SMB share?
Hello, I need your help ;-)

I was asked to prepare a tabular report showing the front-end data stored in a local library (NetApp). I have already explained the difference between logical and physical usage, but my management still wants to see a list of everything that sums up to 297 TB. I tried the Chargeback report, but it's showing me data from June of last year. Does anyone know how I can achieve this goal?
Hi all,

Are there any advantages, other than the security benefit of reducing the attack surface, to using iSCSI-attached volumes from a NAS directly on the Windows MediaAgent rather than writing via SMB to a NAS share? In terms of performance, ransomware protection, etc.?

Your thoughts are highly appreciated, thanks in advance.

Regards,
Michael
I have a production CommCell where all mount paths support drilling of holes (sparse files). When I open a mount path's properties in Windows, I can see that “Size on disk” is much smaller than the “Size” of the folder. The whole partition is smaller than the “Size” of the folder but, of course, larger than “Size on disk”.

I installed a test environment where all mount paths also support drilling of holes (sparse). Scheduled and manual backups succeed and are stored on the mount paths. But when I open a mount path's properties in Windows on the test MA, the “Size” and “Size on disk” are the same, or “Size on disk” is a bit larger. If I check a file with “fsutil sparse queryflag”, I get the response: “This file is NOT set as sparse”.

My question is: when do the backend file sizes start to decrease? When will the sparse flag be set on backup files stored on a mount path that supports drilling of holes?
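One way to watch for holes being drilled on the test MA, independent of Explorer's properties dialog, is to walk a mount path and compare each file's logical size with its allocated size on disk. A Windows-only sketch; the mount path below is an assumed example, so point it at your own CV_MAGNETIC folder:

```python
# Sketch: compare logical size vs. size on disk for every file under a mount path.
# If the allocated total drops below the logical total, holes are being drilled.
import ctypes
import os
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.GetCompressedFileSizeW.argtypes = [wintypes.LPCWSTR, ctypes.POINTER(wintypes.DWORD)]
kernel32.GetCompressedFileSizeW.restype = wintypes.DWORD

MOUNT_PATH = r"E:\MountPath1\CV_MAGNETIC"   # hypothetical path -- adjust to your mount path

def size_on_disk(path: str) -> int:
    """Allocated size of a file on disk (accounts for sparse/compressed regions)."""
    high = wintypes.DWORD(0)
    low = kernel32.GetCompressedFileSizeW(path, ctypes.byref(high))
    if low == 0xFFFFFFFF and ctypes.get_last_error():
        raise ctypes.WinError(ctypes.get_last_error())
    return (high.value << 32) | low

logical = allocated = 0
for root, _, files in os.walk(MOUNT_PATH):
    for name in files:
        full = os.path.join(root, name)
        logical += os.path.getsize(full)
        allocated += size_on_disk(full)

print(f"Logical size : {logical / 2**30:.1f} GiB")
print(f"Size on disk : {allocated / 2**30:.1f} GiB")
print("Holes drilled" if allocated < logical else "No sparse savings yet")
```

If the two totals match, nothing has been punched out yet; one likely explanation for a fresh test environment is simply that no data has aged and been pruned, since hole drilling presumably only happens when pruning removes blocks from existing chunk files.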
We seem to be running into multiple problems in our new HSX environment. The metadata disk d2 silently filled to 100% on one of three nodes, and the data disks are all at 90-95%, but the GUI shows only 550 of 720 TB as used. Not a single alert for this; everything is green in the GUI. And then there is disk d22 / sdv on one node that failed a few weeks ago and was replaced together with support. In the GUI it is shown as mounted, but in reality it is not:

sdu 65:64 0 16.4T 0 disk /hedvig/d21
sdv 65:80 0 16.4T 0 disk
sdw 65:96 0 16.4T 0 disk /hedvig/d23

I followed “Replacing Disks in an HyperScale X Reference Architecture Node” (commvault.com) but the disk is not mounted:

Nov 9 11:22:38 sdes1701-dp systemd: Dependency failed for /hedvig/d22.
Nov 9 11:22:38 sdes1701-dp systemd: Job hedvig-d22.mount/start failed with result 'dependency'.
Nov 9 11:22:38 sdes1701-dp systemd: Job dev-disk-by\x2duuid-dfcc3e6c\x2d8152\x2d42b2\x2db0a1\x2d6742d4748d3c.d…