Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 667 Topics
- 3,370 Replies
Good afternoon, I am trying to create a report on the occupancy of our libraries. However, the report coming from Commvault contains a lot of information. Would it be possible to use the CLI to extract only the information I want? My idea is to create a script and run it every month without having to organize the data manually.
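As a sketch of the scripting idea: if the occupancy report can be exported to CSV, a short Python script run on a monthly schedule can keep only the columns of interest. The column names and sample rows below are placeholders for illustration, not actual Commvault report headers:

```python
import csv
import io

# Hypothetical column names -- adjust these to match the headers
# of your exported occupancy report.
WANTED = ["Library", "Total Capacity", "Free Space"]

def extract_columns(report_csv: str, wanted=WANTED):
    """Keep only the wanted columns from a CSV export of the report."""
    rows = csv.DictReader(io.StringIO(report_csv))
    return [{k: row[k] for k in wanted if k in row} for row in rows]

# Fabricated sample data for illustration only.
SAMPLE = (
    "Library,Total Capacity,Free Space,Mount Path\n"
    "DiskLib01,24TB,15.34TB,/mnt/lib01\n"
    "DiskLib02,24TB,3.62TB,/mnt/lib02\n"
)

if __name__ == "__main__":
    for row in extract_columns(SAMPLE):
        print(row)
```

Pointing the script at a real export and scheduling it with cron or Task Scheduler would cover the monthly run.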
AuxCopy free space is less than primary
We have two Synology 24TB storage libraries, one for the primary copy and the other, at another site, for the AuxCopy. The primary has Free Space 15.34TB, Size on Disk 8.59TB, Total Application Size 30.5TB. The AuxCopy has Free Space 3.62TB, Size on Disk 20.33TB, Total Application Size 30.33TB. Shouldn't the AuxCopy be an exact replica of the primary? Why would it be larger? Thanks for any suggestions. Larry
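One hedged explanation for the gap: each copy deduplicates against its own DDB, so "replica" applies to the logical application data, not the on-disk footprint. The figures quoted above already imply very different reduction ratios; a quick sketch of the arithmetic:

```python
def dedup_ratio(app_size_tb: float, size_on_disk_tb: float) -> float:
    """Application data written divided by what it occupies on disk."""
    return app_size_tb / size_on_disk_tb

# Figures from the post above.
primary = dedup_ratio(30.5, 8.59)    # roughly 3.55x reduction on the primary
aux = dedup_ratio(30.33, 20.33)      # roughly 1.49x reduction on the aux copy

print(f"primary: {primary:.2f}x, aux: {aux:.2f}x")
```

A much lower ratio on the secondary (different dedup engine, block size, or a younger dedup baseline) would account for similar application sizes occupying very different amounts of disk.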
Has anyone noticed their LACP bonds are not balanced across the interfaces for HyperScale?
We’ve noticed that, while using LACP mode 4 on our Dell R740xd2 HyperScale appliances, the interfaces carry an unbalanced amount of traffic if you run ifconfig. I have p1p2 bonded with p5p2 for the storage network, and p1p1 with p5p1 on the data traffic network. Notice my RX and TX packets are very unbalanced: p1p1 TX packets are at 410 GiB while its partner p5p1 is at 11 TiB, for example. Does anyone see the same on their LACP config, or has anyone solved this issue? We see the same behavior on a Dell 48-port switch and on a Cisco 9k using Cisco ACI, and on both HyperScale 1.5 and HyperScale X deployments.

p1p1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
  ether bc:97:e1:2c:9b:00 txqueuelen 1000 (Ethernet)
  RX packets 11213468144 bytes 14318195329557 (13.0 TiB)
  RX errors 0 dropped 0 overruns 0 frame 0
  TX packets 1279302179 bytes 440269032106 (410.0 GiB)
  TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
p1p2: flags=6211<UP,BROADCAST,RUN
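For what it's worth, mode 4 (802.3ad) balances per flow, not per packet: the bond hashes each flow onto one slave via xmit_hash_policy, so a few long-lived streams can pin a single interface. A minimal Python sketch of why layer2 hashing between the same two hosts always picks the same slave while layer3+4 can spread parallel connections (simplified stand-ins for the kernel's hash, not its actual algorithm):

```python
def layer2_slave(src_mac: str, dst_mac: str, n_slaves: int = 2) -> int:
    # xmit_hash_policy=layer2 uses only the MAC addresses, so every flow
    # between the same two hosts always lands on the same slave.
    return hash((src_mac, dst_mac)) % n_slaves

def layer3_4_slave(src_ip, src_port, dst_ip, dst_port, n_slaves=2):
    # xmit_hash_policy=layer3+4 also mixes in IPs and TCP/UDP ports, so
    # parallel connections between the same host pair can spread out.
    return hash((src_ip, src_port, dst_ip, dst_port)) % n_slaves

# Ten parallel streams between the same two hosts:
l2 = {layer2_slave("bc:97:e1:2c:9b:00", "aa:bb:cc:dd:ee:ff") for _ in range(10)}
l34 = {layer3_4_slave("10.0.0.1", 40000 + i, "10.0.0.2", 8400) for i in range(10)}
print("layer2 uses slaves:", l2)      # always a single slave
print("layer3+4 uses slaves:", l34)   # usually spread across both slaves
```

Checking the bond's `xmit_hash_policy` in /proc/net/bonding (and the equivalent hash setting on the switch side) would show whether the imbalance is just per-flow hashing with a small number of large flows.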
Backing up Exchange Databases from HD Library to tape
We had to back up some Exchange DAGs with short-term retention to a 3PAR disk library. The mount path is on one of our media agents, where I can see what's stored on the 3PAR. My boss wants me to back up the month of September that we have on disk to tape, so we can clear it from the 3PAR for another month of storage. Would the best way be to create a subclient on the media agent the drive is mounted to, target the data chunks for the part of the month he wants backed up, and assign it a storage policy that backs it up to tape? Or would that not capture the data we want from the 3PAR?
Any awareness of aux copy to AWS in China - how to?
We have a media agent located in China, and the business need is to aux copy the primary data to the AWS cloud. How can this be achieved? I am looking for information mainly on traffic management, because there is no straightforward way to migrate data from the first copy to the AWS cloud. Note: I know the technical steps to set up an aux copy in the normal scenario.
AWS S3 - Dash copy between buckets and promote copy?
Hi, I could not find anywhere that addressed this, so I am asking here. I read that the only way to “migrate data” between storage classes would be as documented. However, can this be done? I have a HyperScale as a primary copy and an existing AWS S3 Standard bucket as a DASH copy with deduplication. I want to create another DASH copy to an AWS S3-IA bucket with deduplication, promote that copy to be the secondary copy, and get rid of the existing bucket. Effectively, this seems to migrate the data just as well as going through the process described with the Cloud Tool. Am I wrong? Can this be done?
Copy first-week data to SSD disks and 2nd- to 4th-week data to NL-SAS disks
Hi, we have data to back up with a retention period of 4 weeks. The challenge is the following: the data within the first week of the retention period must be kept on SSD disks, and the data from the 2nd to the 4th week of retention must go to NL-SAS disks. So, the goal is to not have the first week's data on the NL-SAS disks, to reduce the space used. Is there a way to reach this goal? Thanks, Regards,
Import media from catalogic app
Hi, do I have an option to import media from the Catalogic app? I have a customer that migrated to Commvault from Catalogic, and he wants to know if he can import the Catalogic tapes into a Commvault library. He has tapes with the last backup from Catalogic. I think he will have to maintain his old backup system, but I'm not 100% sure.
How to read an Azure ZRS blob if the primary zone has failed
I’m looking for some guidance or documentation on what can be done with ZRS storage. Say we have the following availability zones in the region. Zone 1: media agents and hot blob, backing up local VMs, files, etc.; no aux copy. Zone 2: standby media agent and CommServe, powered down; CommServe DR recovery done manually. Zone 1 fails, so we power up the MA and CS at Zone 2 and perform a manual DR backup-set restore. Next step: getting the ZRS storage mounted as primary. Any links, docs, or steps would be much appreciated. Thanks, Regards, Robert
Object Lock S3 bucket backup and aux copy issue
Hi all, we have configured an Object Lock enabled S3 bucket, and configured a library, storage pool, and policies using that S3 bucket. Everything shows online and can be accessed from the Commvault console and also via CLI on the MediaAgents. We also tested the cloud test tool, and that works fine too. But when we start a backup or aux copy, we see the errors below on the jobs:

2204 37c4 02/28 17:04:16 33401117 [cvd] Curl Error: 23, PUT https://pss-commvault-use1-db-45d.s3.us-east-1.amazonaws.com/9T4IG2_02.01.2023_22.26/CV_MAGNETIC/V_16092550/CHUNK_172585619.FOLDER/1
2204 37c4 02/28 17:04:16 33401117 [cvd] CVRFAMZS3::SendRequest() - Error: Access Denied.
2204 2948 02/28 17:05:29 33401117 [cvd] WriteFile() - Access Denied. for file 9T4IG2_02.01.2023_22.26/CV_MAGNETIC/V_16091931/CHUNK_172585925
2204 37c4 02/28 17:05:29 33401117 [cvd] Curl Error: 23, PUT https://pss-commvault-use1-db-45d.s3.us-east-1.amazonaws.com/9T4IG2_02.01.2023_22.26/CV_MAGNETIC/V_16091931/CHUNK_172585925.FOLDER/1
2204 37c4 02/28 17:
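Independent of the root cause, it can help to pull the denied object keys out of the log so they can be checked against the bucket policy and the IAM policy attached to the access credentials. A small sketch over the log lines quoted above:

```python
import re

# cvd log lines pasted from the post above.
LOG = """\
2204 37c4 02/28 17:04:16 33401117 [cvd] Curl Error: 23, PUT https://pss-commvault-use1-db-45d.s3.us-east-1.amazonaws.com/9T4IG2_02.01.2023_22.26/CV_MAGNETIC/V_16092550/CHUNK_172585619.FOLDER/1
2204 37c4 02/28 17:04:16 33401117 [cvd] CVRFAMZS3::SendRequest() - Error: Access Denied.
2204 37c4 02/28 17:05:29 33401117 [cvd] Curl Error: 23, PUT https://pss-commvault-use1-db-45d.s3.us-east-1.amazonaws.com/9T4IG2_02.01.2023_22.26/CV_MAGNETIC/V_16091931/CHUNK_172585925.FOLDER/1
"""

def denied_puts(log: str):
    """Return the object keys of PUT requests appearing in Curl Error lines."""
    # Skip the host portion (everything up to the first '/'), keep the key.
    return [m.group(1) for m in re.finditer(r"PUT https://[^/]+/(\S+)", log)]

print(denied_puts(LOG))
```

Seeing which keys are denied (here, chunk folders under CV_MAGNETIC) narrows the check to the write permissions on that prefix, rather than the bucket listing permissions that the connectivity tests exercise.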
Benefits of enabling Horizontal DDB
Hello, what are the benefits of enabling the Horizontal DDB? BOL explains how to enable this feature, but says nothing about the real benefits, except that it splits the DDB into three sections: one for file system, another for databases, and the last one for VMs. Can I expect to see an improvement in backup performance, or an increase in deduplication efficiency that would further reduce on-disk consumption? Thanks,
Storage utilization per storage policy
Hello all, I’m trying to find a report that will output something I think is super basic: list my 5 storage policies and tell me how much space each one is using. I already know about the Client Storage Utilization by Storage Policy Copy report, but it shows the client backup sizes within each policy, and I don’t see how to total them easily or customize it to see what I want. If that report can do it, can someone point me in the right direction to tweak it so I can get the info I’m trying to pull? Or if someone knows of an easier way, that would be great. I’ve checked the built-in reports in the console, Command Center, and the Store. Thanks!
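If the per-client report can be exported to CSV, totaling it per policy is only a few lines of Python. The column headers and sample rows below are assumptions for illustration; rename them to match the actual export:

```python
import csv
import io
from collections import defaultdict

def totals_by_policy(report_csv: str) -> dict:
    """Sum the per-client sizes within each storage policy.

    Assumes 'Storage Policy' and 'Size on Disk (GB)' columns exist in the
    export -- placeholder names, adjust to your actual report headers.
    """
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(report_csv)):
        totals[row["Storage Policy"]] += float(row["Size on Disk (GB)"])
    return dict(totals)

# Fabricated sample rows for illustration only.
SAMPLE = (
    "Client,Storage Policy,Size on Disk (GB)\n"
    "fs01,SP_Gold,120.5\n"
    "sql01,SP_Gold,80.0\n"
    "vm01,SP_Silver,300.25\n"
)

if __name__ == "__main__":
    print(totals_by_policy(SAMPLE))
```

This collapses the per-client detail the report already provides into one total per policy, which is the missing aggregation described above.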
Multiple deduplication engines within one disk drive
Hi all, I have the following question. If I needed to have more disk storage pools (in order to have more storage policy copies with different retentions), I would need to have more deduplication engines. Is it possible to create multiple deduplication engines within one disk drive? Are there any limitations on the number of deduplication engines on one disk drive? Thanks in advance for your engagement!
My customer is currently at 11.24.60, should I upgrade the environment to 2022E prior to start deploying the HSX Cluster ?
Hi, a quick question on the ISO 2.3 for reference architecture deployment, dvd_10072022_113351.iso. I don't know if I’m in the right place for this question, but: which FR is this ISO based on? FR24? My customer is currently at 11.24.60; should I upgrade the environment to 2022E prior to starting to deploy the HSX cluster? I’ve seen a lot of new features for monitoring and securing nodes. Thank you,
Identifying the traditional DDB version in the logs on the MA
How and where can I identify the traditional DDB version on the media agent? Looking in the SIDBEngine.log on the media agent, we find this:

6292 126c 12/16 13:00:19 ### 1-0-1-0 LoadConfig 313 Use MemDB [false]

But you can't see a version number there. Is there a version number? Does it exist?