Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
Has anyone noticed their LACP bonds are not balanced across interfaces on HyperScale?
We've noticed that, while using LACP mode 4 on our Dell R740xd2 HyperScale nodes, the interfaces carry unbalanced amounts of traffic when checked with ifconfig. I have p1p2 bonded with p5p2 for the storage network, and p1p1 with p5p1 on the data traffic network. Notice my RX and TX packet counts are very unbalanced: p1p1 TX is at 410 GiB while its partner p5p1 is at 11 TiB, for example. Does anyone see the same on their LACP config, or has anyone solved this issue? We see the same behavior on a Dell 48-port switch and on a Cisco 9K using Cisco ACI, and on both HyperScale 1.5 and HyperScale X deployments.
p1p1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether bc:97:e1:2c:9b:00  txqueuelen 1000  (Ethernet)
        RX packets 11213468144  bytes 14318195329557 (13.0 TiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1279302179  bytes 440269032106 (410.0 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
p1p2: flags=6211<UP,BROADCAST,RUN
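Worth noting that LACP never splits a single flow across members; the bond's transmit hash pins each flow to one slave, so a few heavy flows between the same pair of hosts can skew the per-interface counters badly. A minimal sketch of the arithmetic behind the default layer2 hash policy (the MAC octets below are hypothetical, chosen only to illustrate it):

```shell
# Mode 4 (802.3ad) picks one slave per flow via xmit_hash_policy; with the
# default "layer2" policy the slave index is essentially:
#     (src MAC last octet XOR dst MAC last octet) mod slave_count
# so all frames between the same two hosts ride the same physical port.
src=0x00    # last octet of the local MAC (hypothetical)
dst=0x0b    # last octet of the peer MAC (hypothetical)
slaves=2
echo $(( (src ^ dst) % slaves ))   # same answer for every packet of this flow
# Check or change the policy on the node, e.g.:
#   cat /sys/class/net/bond0/bonding/xmit_hash_policy
#   echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy
```

A layer3+4 policy hashes on IP and port as well, which usually spreads storage traffic more evenly, though both switch and host sides need compatible settings.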
DDB threshold reached max limit
Hi Team, the DDB partition Q&I time is very high after changing the DDB partition location. I recently moved the DDB partition from one drive to another because the original drive was full. Per the storage team, we are using LUN-assigned NLSAS_SSD drives on our MediaAgent. Now all the backups are running very slowly. I want to reduce the Q&I time, or else run the DDB on the single partition (T:\ DDB), because the backups ran fine using that DDB (at the time of the move it was using another partition). Please advise. I have raised a case with the vendor; they are suggesting we run the ConvertDDBToV5 workflow job, and I am waiting for the steps and DT.
Clarification on Deleting Jobs on Tape
Hello, I am updating some of our documentation on best practices for securely deleting files from a file server's backups. The file server backups in question are stored only on a primary copy residing on tape, encrypted via AES-256 per the storage policy, using the built-in key management server. Normally, if we need to delete a file, we follow the documentation and use the "delete data by browsing" option. For clarification: if I use "delete data by browsing" and delete a file that resides offsite on tape, there is no way to recover that file, correct? There is no "un-age" or catalog operation I could perform on the tape if I were to insert it back into my tape library? I assume the CommCell destroys the indexed data/encryption keys associated with that file and can no longer read that block of data on the tape? Recently I noticed an option in the CommCell browser where I can delete the contents of an entire tape: Storage Resources > Libraries > Tape Library > Medi
Benefits of enabling Horizontal DDB
Hello, what are the benefits of enabling the horizontal DDB? BOL explains how to enable this feature, but says nothing about the real benefits, except that it splits the DDB into three sections: one for file system, another for databases, and the last for VMs. Can I expect to see an improvement in backup performance, or an increase in deduplication efficiency that would further reduce on-disk consumption? Thanks,
Good afternoon, I am trying to create a report on the occupancy of our libraries. However, the report coming from Commvault includes a lot of information. Would it be possible to use the CLI to extract only the information I want? My idea is to create a script and run it every month without having to organize the data manually.
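One workable pattern is to export the report as CSV and trim it with a small script on a schedule. A minimal sketch, assuming a CSV export exists; the file name and column layout below are hypothetical, not a real Commvault report format, so adjust the field numbers to match yours:

```shell
# Hypothetical CSV export of a library occupancy report.
cat > library_report.csv <<'EOF'
Library,MountPath,CapacityGB,UsedGB
DiskLib01,/mnt/lib01,10240,7168
DiskLib02,/mnt/lib02,20480,4096
EOF
# Keep only the columns of interest (library name and used space):
awk -F, 'NR > 1 { print $1 "," $4 }' library_report.csv
# Schedule monthly via cron, e.g.:  0 6 1 * * /opt/scripts/library_report.sh
```

This keeps the data-shaping logic in one place, so when the report layout changes only the awk field numbers need updating.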
How does deduplication on Azure Storage Accounts work?
Hey everyone, we were wondering how client-side deduplication and compression work with Azure Storage Accounts. It doesn't seem to use our MediaAgent, so which resource is it using? Is there some kind of "invisible" virtual machine in Azure that fronts the Storage Account and performs the deduplication? Best regards
Error when adding Oracle Cloud Infrastructure Object Storage on Command Center
Has anyone encountered this error before when adding cloud storage in Command Center? What is the resolution? I tried searching various sites but had no luck. The error is: "Operating System could not find the device file specified. The device may be unreachable from the MediaAgent. Please ensure that the file is present in the given path and is accessible."
Aux Copy - how to use all free tape drives unless another job needs a drive?
I have a library with 2x LTO drives that is used both for some direct-to-tape jobs and for aux copies of some disk jobs. Ideally, I'd like both LTO drives to be available for aux copy data, but if another job runs that needs a drive, the aux copy should throttle back down to a single drive. For example, right now I have 50 TB of data to aux copy going to a single drive when it could go to both (I've got them, so why not use them). But if I configure the aux copy to use both drives, any new backup jobs to tape pause with "no resources available". Thanks 😀
Object Lock S3 bucket backup and aux copy issue
Hi all, we have configured an Object Lock enabled S3 bucket and set up a library, storage pool, and policies using that bucket. Everything shows online and can be accessed from the Commvault console as well as via CLI on the MediaAgents. We also ran the cloud test tool and that works fine too. But when we start a backup or aux copy, we see the errors below on the jobs:
2204 37c4 02/28 17:04:16 33401117 [cvd] Curl Error: 23, PUT https://pss-commvault-use1-db-45d.s3.us-east-1.amazonaws.com/9T4IG2_02.01.2023_22.26/CV_MAGNETIC/V_16092550/CHUNK_172585619.FOLDER/1
2204 37c4 02/28 17:04:16 33401117 [cvd] CVRFAMZS3::SendRequest() - Error: Access Denied.
2204 2948 02/28 17:05:29 33401117 [cvd] WriteFile() - Access Denied. for file 9T4IG2_02.01.2023_22.26/CV_MAGNETIC/V_16091931/CHUNK_172585925
2204 37c4 02/28 17:05:29 33401117 [cvd] Curl Error: 23, PUT https://pss-commvault-use1-db-45d.s3.us-east-1.amazonaws.com/9T4IG2_02.01.2023_22.26/CV_MAGNETIC/V_16091931/CHUNK_172585925.FOLDER/1
2204 37c4 02/28 17:
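Access Denied on PUT against an Object Lock bucket often traces back to the IAM policy: lock-enabled buckets need the retention-related actions in addition to plain object writes. A hedged sketch of a minimal statement to compare against the policy attached to the credentials Commvault uses; the actions are standard S3 IAM actions, but treating this exact set as sufficient for your setup is an assumption to verify:

```shell
# Print a minimal IAM statement sketch (bucket ARN taken from the logged
# endpoint; whether all these actions are required here is an assumption).
cat <<'EOF'
{
  "Effect": "Allow",
  "Action": [
    "s3:ListBucket",
    "s3:GetObject",
    "s3:PutObject",
    "s3:DeleteObject",
    "s3:PutObjectRetention",
    "s3:GetBucketObjectLockConfiguration"
  ],
  "Resource": [
    "arn:aws:s3:::pss-commvault-use1-db-45d",
    "arn:aws:s3:::pss-commvault-use1-db-45d/*"
  ]
}
EOF
```

Running `aws s3api get-object-lock-configuration --bucket <bucket>` with the same credentials is a quick way to confirm both reachability and the configured lock mode.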
AUX copy jobs failing
Error Code: [13:138]
Description: Error occurred while processing chunk in media [V_4661435], at the time of error in library [DiskLib_ca-VMAPool-1] and mount path [[ca-vma1] \\xxxxip\cvlt_maglib_01], for storage policy [Plan-ca-vma-VM-90Local-365Cloud] copy [2-DASH-privateStore] MediaAgent [ca-vma1]: Backup Job . Cannot impersonate user. User credentials provided for disk mount path access may be incorrect.
Source: ca-vma1, Process: CVJobReplicatorODS
Failed to Copy or verify Chunk in media [CV_MAGNETIC], Storage Policy [Plan-ca-vma-VM-90Local-365Cloud], Copy [Primary], Host [ca-vma1.green.xxx], Path [\\xxxxip\cvlt_maglib_01\CWKROQ_03.16.2023_08.40\CV_MAGNETIC\V_4661435], File Number , Backup Jobs [ 8531530]. Cannot impersonate user. User credentials provided for disk mount path access may be incorrect.
Source: ca-vma1, Process: CVJobReplicatorODS
Disk Library mount path is offline due to nfs local_lock option set in mount options after upgrading to 11.20 or higher
Sharing this information proactively.
Issue: After upgrading to 11.20 or higher, NFS mount paths show offline in the CommCell GUI with the error "The mount path is marked offline due to nfs local_lock option set in mount options".
CVMA.log on the MediaAgent will show:
102415 1901f 01/13 19:06:53 ### WORKER [96/0/0] :CVMAMagneticWorker.cpp:6992: Marking mount path [<mount path>] mounted on dir [/commvault_fas-syd] offline due to mount options [rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=220.127.116.11,mountvers=3,mountport=635,mountproto=tcp,local_lock=all,addr=<IP Address>]
Cause: Checking the NFS mount options by running mount -v will reveal the path is not set to "local_lock=none". In earlier releases, it was advised to set local_lock=none as per https://documentation.commvault.com/commvault/v11/article?p=12567.htm. However, 11.20 enforces the check. This was done due to issues where
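A quick way to confirm what the MediaAgent sees is to pull the local_lock setting out of the active mount options. A small sketch; the options string below is abbreviated from the logged mount line, and the server/export names in the remount hint are placeholders:

```shell
# Extract the local_lock setting from a mount options string
# (abbreviated from the CVMA.log entry above).
opts="rw,relatime,vers=3,rsize=65536,wsize=65536,hard,nolock,proto=tcp,local_lock=all"
echo "$opts" | tr ',' '\n' | grep '^local_lock='
# On a live system, inspect the real options with:  mount -v | grep <mountpoint>
# and fix by remounting (and updating /etc/fstab) with local_lock=none, e.g.:
#   mount -o remount,local_lock=none <server>:<export> <mountpoint>
```

If the grep prints local_lock=all (or anything other than local_lock=none), the 11.20+ check will mark the mount path offline until it is remounted.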
Changing iRMC (HyperScale Appliance HS1300 & HS3300) Password using IPMITool
The following procedure allows you to safely update the Fujitsu Appliance iRMC password without impacting operations such as RHEV-M hardware failure alerting.
Important note: for the HyperScale Appliance, Commvault leverages the IPMI protocol by design to monitor the physical hardware, and reports back to Command Center if there is a fault. IPMI (Intelligent Platform Management Interface) is a set of computer interface specifications for an autonomous computer subsystem that provides management and monitoring capabilities independently of the host system's CPU, firmware, and operating system.
The procedure applies to the following use cases:
- Updating the iRMC password for security purposes
- Resetting the iRMC password if it is forgotten or lost
IPMITool is installed at the guest OS level (Red Hat OS).
Updating the iRMC password: first, establish an SSH session to the guest OS (HyperScale Red Hat 7.#). Then input the following command:
# ipmitool user set password
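A hedged sketch of the usual IPMITool sequence: compose the full invocation first so it can be reviewed before running it on the node. The user slot (2) and password below are placeholders; `ipmitool user list 1` shows which slot actually holds the iRMC admin user on channel 1.

```shell
# Compose (not yet run) the password-change command; slot and password are
# placeholders -- confirm the slot first with:  ipmitool user list 1
USER_ID=2
NEW_PASS='NewStrongPassword'
CMD="ipmitool user set password $USER_ID $NEW_PASS"
echo "$CMD"
# When it looks right, execute it on the HyperScale node over the SSH session.
# Omitting the password argument makes ipmitool prompt for it interactively,
# which keeps it out of the shell history.
```

The interactive-prompt form is generally preferable on shared hosts, since a password passed on the command line is visible in the process list and history.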
Retrieve information from CSDB - DDB Information
Similar to this article, I'd like to show simple queries to retrieve DDB information from the CSDB.
Important note: do not modify CSDB data or modules; use READ operations only. To keep your activities safe, use uncommitted reads on every table via one of the following techniques:
use CommServ -- just for convenience
-- place the following at the top of any query
set transaction isolation level read uncommitted;
-- or place a with(nolock) hint on each table reference
select * from APP_Application with(nolock)
Most of the DDB information is stored in tables whose names start with Idx. DDB configuration is stored mainly in the following three tables; the first holds DDB information, the latter two hold partitions:
select * from IdxSIDBStore
select * from IdxSIDBSubStore
select * from IdxAccessPath
To combine these, including which MediaAgent is in use for each partition:
select store.SIDBStoreName as 'DDB Name'
      ,apc.name as 'MediaAgent'
      ,ap.Path as 'Partition path'
from Id