Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 617 Topics
- 3,237 Replies
Hello Commvault! I'm in the middle of a project with Commvault 11.28 and HPE StoreOnce Catalyst. I followed the documentation (https://documentation.commvault.com/fujitsu/v11/expert/99426_creating_hpe_store_for_hpe_storeonce_catalyst_library.html) to create the store and also to create the library in the CommCell Console. Now, in order to configure the storage policies and add the VMware workloads, I assume this is going to be done from the CommCell Console, right? Because I can't even see the HPE Catalyst library from Command Center (from there it would be much easier)!
I'm new to CV and still trying to sort out the CommCell Console versus Command Center, so I appreciate your patience. After 4 years with Veeam and two decades with Data Protector, Commvault is proving to be quite a different animal. My first concern is why it takes googling some arcane code (ActivateHPECatalyst) to enter into the CommCell Console properties just to make the StoreOnce option visible for library creation in the UI. What is the rationale for hiding the StoreOnce option in the first place? Now that I've added a Catalyst-backed disk library under Storage Resources > Libraries via the CommCell Console, I go back over to Command Center, look at Storage > Disk, and I do not see my new disk library. How then am I supposed to add it to anything as a backup destination?
Our new storage for the Commvault disk library is a Dell PowerScale H7000 (OneFS); the library currently in production is on a NetApp 2750. We use a Windows 2019 MediaAgent with iSCSI LUNs on the NetApp. The PowerScale H7000 doesn't support iSCSI LUNs, and sealing the DDB or creating a new disk library > global DDB > new primary copy is not our preferred path. Is it possible to add a new mount path on the current disk library pointing to an SMB share on the Dell OneFS, and to disable writes on all the local iSCSI paths with the option "Prevent data block references for new backups" checked? We could do this for all local paths at once or gradually. Our retention is 90 days on this storage policy. Could I then delete the old paths after the jobs age out and continue using the path on the SMB share?
Hello, I need your help ;-) I was asked to prepare a tabular report showing the front-end data stored in our local library (NetApp). I already explained the difference between logical and physical usage, but my management still wants to see a list of everything that sums up to the 297 TB. I tried the Chargeback report, but it shows data from June last year. Does anyone know how I can achieve this?
Hi all, are there any advantages, other than the reduced attack surface, to using iSCSI-attached volumes from a NAS presented directly to the Windows MediaAgent rather than writing via SMB to a share on the NAS? For example with regard to performance, ransomware protection, etc.? Your thoughts are highly appreciated. Thanks in advance. Regards, Michael
I have a production CommCell where all mount paths support drilling of holes (sparse files). When I open a mount path's properties in Windows, I can see that "Size on disk" is much smaller than the "Size" of the folder. The whole partition is smaller than the "Size" of the folder, but of course larger than "Size on disk". I installed a test environment where all mount paths also support drilling of holes (sparse files). Scheduled and manual backups succeed and are stored on the mount paths. But when I open a mount path's properties in Windows on the test MA, I can see that "Size" and "Size on disk" are the same, or "Size on disk" is even a bit larger. If I check a file with "fsutil sparse queryflag", I get the response: "This file is NOT set as sparse". My question is: when do the backend file sizes start to decrease? When does the sparse flag get set on backup files stored on a mount path that supports sparse files?
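A quick way to confirm whether a given chunk file has actually been made sparse is to query it directly on the MediaAgent; a hedged sketch (Windows command line, and the mount path and chunk names below are placeholders, not taken from the post):

:: Check the sparse attribute on one container file under the mount path
fsutil sparse queryflag "E:\MountPath1\CV_MAGNETIC\V_12345\CHUNK_67890\SFILE_CONTAINER_001"

:: For a file that is already sparse, list which byte ranges are still allocated
:: (anything outside these ranges has been hole-drilled and freed on disk)
fsutil sparse queryrange "E:\MountPath1\CV_MAGNETIC\V_12345\CHUNK_67890\SFILE_CONTAINER_001"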
We seem to run into multiple problems in our new HSX environment. The metadata disk d2 silently filled to 100% on one of three nodes, the data disks are all at 90-95%, but the GUI shows only 550 of 720 TB used. Not a single alert for this; everything is green in the GUI. And then there is disk d22 / sdv on one node that failed a few weeks ago and was replaced together with support. In the GUI it is shown as mounted, but in reality it is not:
sdu 65:64 0 16.4T 0 disk /hedvig/d21
sdv 65:80 0 16.4T 0 disk
sdw 65:96 0 16.4T 0 disk /hedvig/d23
I followed Replacing Disks in an HyperScale X Reference Architecture Node (commvault.com), but the disk is not mounted:
Nov 9 11:22:38 sdes1701-dp systemd: Dependency failed for /hedvig/d22.
Nov 9 11:22:38 sdes1701-dp systemd: Job hedvig-d22.mount/start failed with result 'dependency'.
Nov 9 11:22:38 sdes1701-dp systemd: Job dev-disk-by\x2duuid-dfcc3e6c\x2d8152\x2d42b2\x2db0a1\x2d6742d4748d3c.d
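The "Dependency failed" message usually points at the mount unit waiting on a device-by-UUID that no longer exists after the disk swap. A hedged sketch for checking whether the UUID the mount depends on matches the replacement disk (device and unit names are taken from the post; the commands are generic Linux tools):

# Show the filesystem and UUID currently present on the replacement disk
blkid /dev/sdv
lsblk -f /dev/sdv

# Compare against what the d22 mount is actually waiting for
grep d22 /etc/fstab
systemctl status hedvig-d22.mount

If the UUID referenced in fstab still belongs to the failed disk, that mismatch would explain the failed dependency; correcting it and re-running the documented replacement steps is best done together with support on HSX.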
Hi all! My company uses six MAs to create and store backups. One MA with separate storage handles long-term retention off-site, and another one creates local backups at a branch office site. On the main site there are four MAs in two-node grids: MA1 & MA2 form one grid and MA3 & MA4 another. They share their libraries and DDBs. From the branch office, local backups are copied to the main site, and main-site backups are copied to the long-term site as DR backups. The MAs are physical on the main site and virtual on the others, and disk storage is used on all sites. Currently, we are planning to replace our disk storage and the physical MAs on the main site. It is also a good chance to upgrade the OS on the MAs from Windows 2012 R2 to Windows 2019. During the process, the library content should be moved from the old disk storage to the new one, and the DDBs from the old MAs to the new ones. One MA stores 40-60 TB of backup data, and of course I would like to do this with minimum downtime. I have found descriptions about library mov
Hi all, I have a question regarding the restore process, specifically for VM guest file system restores done from a combined Cool-Archive storage tier used as the destination for a long-term library. As I understand it, the index and metadata in this case will be in the Cool tier, and hence we would be able to browse the data without having to run the recall workflow. Having selected the folders to restore, will the complete VM data/disk data be rehydrated from the Archive tier, or just the selected data?
Hi all. I have a DDB on a Linux MediaAgent running in Azure. The "CommServe Job Records to be Deleted" count is very high and is reported as Critical in the Command Center Health Report. I have confirmed that the storage policies associated with this DDB are enabled for data aging. Physical pruning is also enabled on the DDB. When running data aging for this specific DDB/copy, there is no entry in the MediaManagerPrune log on the CommServe for this DDB ID; all the other DDBs are listed. There is also no SIDBPhysicalDeletes log file on the MA. I have checked the jobs and no jobs are retained past the retention period. Any idea what would cause the records to remain on this DDB? Let me know if you require any additional information that could assist. Thank you. Ignes
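For anyone tracing the same symptom, a hedged sketch of where to look (the log locations below are default install paths and the store ID is a placeholder, so adjust both to your environment):

:: On the CommServe (Windows), check whether this store ever appears in a prune cycle
:: (replace 1234 with the store ID of the affected DDB)
findstr /i "1234" "C:\Program Files\Commvault\ContentStore\Log Files\MediaManagerPrune.log"

# On the Linux MediaAgent, check whether physical deletes are being issued at all
ls -l /var/log/commvault/Log_Files/SIDBPhysicalDeletes*.log
grep -i prune /var/log/commvault/Log_Files/SIDBEngine.log | tail -n 20

If the store never shows up on the CommServe side, the problem is usually that pruning is never being requested for that copy, rather than a pruning failure on the MediaAgent itself.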
Hello, I have an issue related to a DDB: as shown below, the Q&I time is very high. The MediaAgent serves Oracle and SAP databases only, with daily full backups, around 23 Oracle RAC and 18 SAP clients. The library is on flash storage. The DDB disks were SSD and have been moved to Pure Storage (NVMe disks) due to insufficient space on the local disks. Any idea how to get this under control?
Hello, during a DDB reconstruction process, how does it reconstruct the missing data? In our case, it first performed a restore of the last DDB backup, which was from early that morning. Does it then access the library storage directly to reconstruct the missing data, or does it access the CommServe? I'm asking because I've been told conflicting information by different Commvault technicians. I just want to make sure that I understand this process clearly. Thank you. Bill
Hi team. I have a question about data verification on a secondary tape copy: how do I exclude a specific tape from the data verification process? Or how should I address the problem below? My setup is simple - primary copy on a disk library and secondary copy to LTO. Both copies are verified. From time to time I see a tape that generates a lot of CRC errors during data verification. The verification process takes a long time, blocking the tape drive, and either completes or sometimes gets stuck. After that, dozens of jobs on the tape have the "partial verification" status, and the next scheduled verification repeats with the same results. What surprises me is that even if the tape exceeds the threshold of read/write errors and has the condition status "bad", it is still subject to verification. Why shouldn't such a verification simply mark those jobs as failed and be done with it? I get such a nasty tape quite often, and then a lot of problems follow. Manually checking and excluding dozens of jobs on a tape from verification is practically impossible.
Hi guys, I would like to know whether there are recommendations on the block size for a cloud library. We have cloud storage in our data center and we would like to use it for backup. On the storage we have the ability to choose the block size. Do we need to specify the block size or keep the default (32 KB)? Note: for disk libraries we are used to formatting our local drives with a 64 KB allocation unit, but we didn't find anything equivalent for cloud libraries. Thanks in advance. Best regards
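For reference, the 64 KB formatting mentioned for disk libraries refers to the NTFS allocation unit size chosen when the volume is created; a hedged sketch (the drive letter is a placeholder, and the command erases the volume, so only run it on a new, empty disk):

:: Format a disk-library volume with a 64 KB allocation unit size (Windows)
format E: /FS:NTFS /A:64K /Q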
I have a MediaAgent installed on SPARC with the SunOS 5.11 operating system, and it does not have the DDB MediaAgent role assigned, so I cannot create a local deduplication database partition. Is there a way to assign the DDB MediaAgent role to this MediaAgent?
Someone on my team wanted to try adding a new tape library to the environment using Command Center. I had never done this in Command Center before, so I watched the user go through the steps. We found that it created the library and the storage pools, but we could not find any way to create barcode patterns or another scratch pool from Command Center. We are an MSP and this is an essential step in being able to use the library. We then tried to create the barcode patterns and other scratch pools from the CommCell Console. We created the entities, but then found we could not associate them with the pool/plan. It appears that the option to change the scratch pool is greyed out in this case. Is it expected that you cannot edit the scratch pool when the library/storage policy/plan is created from Command Center? Or are we missing something here? Furthermore, I was expecting a much more user-friendly approach to adding tape to the environment. For example, the user had to s
While trying to figure out how to gather BET for charging purposes, I noticed that the size on disk displayed in both Command Center and the CommCell Console for cloud libraries is incorrect. I have opened a ticket for it, referring specifically to S3 buckets, but I was wondering whether other customers see the same, and whether it also occurs on libraries using Microsoft Azure Storage or other types/vendors. Please comment in case you see the same. I noticed it while running FR26 and FR28 (2022e).
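One way to cross-check the figure shown in the GUI against what is actually stored is to sum the object sizes straight from the provider; a hedged sketch for an S3 bucket (the bucket name and profile are placeholders):

# Sum all object sizes in the bucket and compare with the library's reported size on disk
aws s3 ls s3://my-commvault-library-bucket --recursive --summarize --human-readable --profile backup-audit
# The last lines of output print "Total Objects" and "Total Size"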
Hi there, I am trying to add new cloud storage (S3 compatible), however I am unable to do so. Moreover, I don't see any associated log files. I only see this error message: "Failed to verify the device from MediaAgent [xxxxxx] with the error [Failed to check cloud server status, error = [[Cloud] The server failed to do the verification. Error = 44037]]." My question is: which logs should I add to the logging settings? And how do I troubleshoot this kind of thing in general? PS: As a workaround I used this - https://documentation.commvault.com/commvault/v11/article?p=51230.htm - but it didn't help. Thanks for any ideas.
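One hedged way to narrow this down is to verify the S3-compatible endpoint, credentials and bucket from the MediaAgent host independently of Commvault, for example with the AWS CLI (the endpoint URL and bucket name below are placeholders):

# List the bucket through the same endpoint the cloud library is configured for
aws s3 ls s3://my-backup-bucket --endpoint-url https://s3.mycloud.example.com

A TLS/certificate or credential failure here points at the storage or network side rather than at the Commvault library configuration.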
Hi, I came into work and noticed dozens of jobs in a waiting state because the mount path does not have enough free space. I am aware that we need to add more storage, and we are going to, but in the interim I tried lowering the reserve space from 6 TB to 2 TB so that the jobs can finish and so I can see what can be cleared. It's not letting me change it; it will only go down to 5960 GB. I currently have a ticket open with CV (221017-401). Is there a way to fix this?
I am creating a partitioned DDB with two MediaAgents. Which interface of the MediaAgent should I add? I have a dedicated NIC available. Do I need to add the IP address of the MediaAgent? What happens if I leave it at the default? I have implemented this in the past, but I cannot remember this part.
Hello all, I'm trying to configure my OCI in Commvault to test this tool (I'm using a trial licence), but I'm dealing with some errors as shown below. What certificate is this? How can I install it? And where? I then used the CloudTestTool and it shows this. Look at the log file:
4228 2274 10/12 12:54:02 ### GetTenancy() failed. Error: Message: The required information to complete authentication was not provided or was incorrect.
4228 2274 10/12 12:54:02 ### Failed to get the namespace. Error: Message: The required information to complete authentication was not provided or was incorrect.
4228 2274 10/12 12:54:02 ### CheckServerStatus() - Error: Message: The required information to complete authentication was not provided or was incorrect.
4228 2274 10/12 12:54:02 ### EvMMConfigMgr::onMsgConfigStorageLibrary() - CVMACMediaConfig::AutoConfigLibrary returns error creating Library [Failed to verify the device from MediaAgent [dphwinbkpprd] with the error [Failed to check cloud serv
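The authentication errors suggest the tenancy OCID, user OCID, key fingerprint, or API key being supplied is not accepted by OCI. A hedged way to validate the same credentials outside Commvault is the OCI CLI, assuming it is installed and configured with the same key material:

# Resolve the Object Storage namespace - the same lookup that fails above
oci os ns get

If this also fails with an authentication error, the problem lies with the tenancy/user OCIDs, fingerprint, or PEM key rather than with the Commvault library configuration.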