Storage and Deduplication
Discuss any topic related to storage or deduplication best practices
- 776 Topics
- 3,674 Replies
Got a storage policy with an active selective copy that is supposed to take the data written to an HDD library and copy it to tape. It missed its aux copy for whatever reason. When I try to manually run an aux copy from the storage policy, it tells me there is no data that needs to be copied, even though there are clearly no job records showing the aux copy ever ran. How do I force an aux copy without this error popping up?
Hi all, in the Job Controller there is a waiting backup job due to the following error message: "Drive in which Media is mounted is not ready to use. Advice: Please retry your operation later. If the drive has a stuck volume, reset the drive to recover the media." As proposed, I tried to "reset drive", but it didn't help. The tape seems to be stuck in the drive. Do you have any suggestions on what to try next?
Hello, I am enabling encryption for my backup data per a new requirement, by turning it on in the storage policy copy encryption settings. After subsequent backup jobs completed, I verified from the storage policy copy report that the backup data is encrypted. However, I do not see any indication that the DDB backups are encrypted. Do they need to be encrypted? This is a requirement from our auditors; they will see the same thing in the report that I did and may ask me why the DDB backups are excluded. Thanks.
We have an offsite facility we send our aux copies to for DR purposes, with PBs of data for Commvault to move. We do not have a firewall, and port 8400 can reach the offsite location fine. Some of us in house think that using a network topology to create a persistent connection, and then opening 8 routes, will speed up the process compared with letting Commvault handle the traffic automatically. Does anyone have insight into which approach is better for us, or, more technically, how the network routes work and what a recommended setup would look like?
To connect to an S3 bucket as a cloud library from an on-premises Media Agent, we can use the following options:
1. AWS Direct Connect
2. VPN Gateway
3. Internet
My question is: if we use option 3 (the internet) to connect to the S3 bucket, how can we protect/secure the bucket from outside attackers or unauthorized users accessing it over the internet?
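If the internet path is used, the usual approach is to lock the bucket down on the AWS side: block all public access, require HTTPS, and restrict requests to the Media Agent's public IP and the IAM identity Commvault authenticates with. Below is a minimal boto3 sketch; the bucket name and IP address are hypothetical placeholders, not values from this environment.

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "cv-cloud-library"          # hypothetical bucket name
media_agent_ip = "203.0.113.10/32"   # hypothetical Media Agent public IP

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # refuse any request that is not encrypted in transit
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
        {   # refuse requests that do not come from the Media Agent's IP
            # (add any admin IPs here too, or console access from other
            # addresses will also be blocked)
            "Sid": "DenyUnknownSourceIp",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"NotIpAddress": {"aws:SourceIp": media_agent_ip}},
        },
    ],
}

# block all public access on the bucket, then attach the policy
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```

Pairing this with a dedicated IAM user that only has access to this bucket narrows the exposure further, and options 1 and 2 avoid exposing the bucket to the internet at all.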
Hello all, I am trying to find specific guidelines for block and chunk size settings as they relate to cloud storage. The information I have found is generally about disk and tape media. I have been reviewing an environment that uses chunk settings of "Application setting" for the primary copy and a mixture of "Application copy" and 4096 for secondary copies. Block size has been set to 1024 in some cases and to "use media type setting" in others. What are the best practices for these settings, and are any of these user-set values overridden?
Hi all, do the IOPS numbers in the second table below correspond to the test conditions specified here?
Excerpt from: https://documentation.commvault.com/11.24/expert/8852_testing_iops_for_disk_library_mount_path_with_iometer.html
Access Specification Settings:
- Percent Read: 50
- Percent Write: 50
- Percent Random Distribution: 50
- Percent Sequential Distribution: 50
- Transfer Request Size: 64K
The minimum IOPS required for each mount path of the disk library, by MediaAgent size:
- Extra Large: 1000 IOPS
- Large: 1000 IOPS
- Medium: 800 IOPS
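As a rough sanity check, here is what those IOPS minimums translate to in throughput if the 64K transfer request size from the access specification is assumed to apply (a back-of-the-envelope calculation only):

```python
# convert the documented IOPS minimums into approximate throughput
# at the 64 KB transfer request size used in the Iometer test
transfer_kb = 64
for size, iops in {"Extra Large": 1000, "Large": 1000, "Medium": 800}.items():
    mb_per_s = iops * transfer_kb / 1024
    print(f"{size}: {iops} IOPS x {transfer_kb} KB ~ {mb_per_s:.1f} MB/s per mount path")
```

That works out to roughly 62.5 MB/s per mount path for Extra Large and Large, and 50 MB/s for Medium, under the mixed 50/50 read/write, 50/50 random/sequential profile.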
Hi Commvault people, I have a tidying-up exercise to do, which will ultimately involve the following:
1. Decommission two MAs currently working as a partitioned dedupe pair (let's call them MA3 and MA4).
2. Migrate workloads to the remaining MAs, which have their own partitioned GDSP configs (let's call them MA1 and MA2).
3. Note that the storage policies on MA1/MA2 versus MA3/MA4 have very different retention.
I know that, generally speaking, you can easily reassociate to another SP and away you go, but that is only useful if the retention matches. The issue is that my MA3 and MA4 storage policies (whose MAs I want to retire) are set to infinite retention, while the storage policies on the remaining MAs (MA1/MA2) are not. So if I migrate the workloads away from MA3 and MA4, is it possible to somehow keep the storage policies that were previously associated with MA3 and MA4 intact? I would prefer to keep them intact to avoid yet more legacy and historical SPs clogging up the environment.
We are low on space, so we are going through old jobs and anything still being held past our retention period. We have plenty of infinite and long-term retained items mixed with our regular data in our primary DDB. My claim is that the infinite-retention and long-term jobs could be holding reference blocks, and that this is why the data size on disk looks reasonable while the library is full; for example, 800 TB size on disk but the 1.5 PB library is full. As we go through jobs, some are skipped as not "big fish" because the data written may show, say, only 85 GB written for a server with a 1.5 TB app size. We skip these because, I'm told, only that 85 GB would come back as free space, so we look for jobs with 1 TB or more written instead. I'm thinking that even as we size our future library, we should plan for a pool of space that will always sit there holding these reference blocks as effectively "unusable" space. Hopefully this rant makes sense?
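The reference-block effect can be illustrated with a toy model: each deduplicated job references a set of blocks, and ageing a job only frees the blocks that no other retained job still points at. This is purely a conceptual sketch, not Commvault's actual accounting:

```python
# toy model: each job is the set of deduplicated block hashes it references
jobs = {
    "infinite_retention_job": {"b1", "b2", "b3", "b4"},
    "monthly_job":            {"b2", "b3", "b5"},
    "daily_job":              {"b3", "b6"},
}

def reclaimable(job_name):
    """Blocks freed if this job is aged: those no other job still references."""
    others = set().union(*(blocks for name, blocks in jobs.items() if name != job_name))
    return jobs[job_name] - others

for name in jobs:
    print(name, "would free", sorted(reclaimable(name)))

# blocks shared with longer-retention jobs are never freed when a
# short-retention job ages off, so the library stays fuller than the
# per-job "data written" figures suggest
```

In other words, the long-retention jobs keep pinning shared blocks on disk, which matches the gap you are seeing between size on disk and library consumption.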
Hi team, my primary copy is not using deduplication. If I enable WORM at the storage policy copy level, will there be any impact on backup library storage consumption? My understanding is that the 2x to 3x additional capacity is only required when WORM is enabled on deduplicated backup copies. Please let me know if that is right.
I have a media server containing several storage policies. I now need to move one of the policies and its data to a new media server. How do I find how much physical storage space the storage policy uses on the disks of the current media server? And will it use the same amount of space on the new server after the move (creating a new primary copy)?
I need to restore from tapes I backed up to about a year ago. When I put the year-old tapes into my library, they show as empty/spare media. I run a full scan and it completes instantly without updating anything. Is there a way to inventory the tapes, like in Backup Exec? I am new to Commvault.
Hello all, we have configured an aux copy with an AWS cloud storage library as the destination, and we are seeing aux copy slowness only for the China location; the other locations (US and UK) are working fine. Is anyone facing the same issue in China? Is Commvault aware of any issues with routing Commvault traffic through the local internet in China?
Owing to issues with our network and cloud, we are having to use tape media as secondary (offline) storage. I would like to have three media pools: Daily, Monthly and Annual. Can someone remind me how to assign scratch media to the new media groups? I am trying to separate backup sets based on retention. Help appreciated.
We have a Media Agent located in China, and the business need is to aux copy the primary data to AWS cloud. How can this be achieved? I am looking for information mainly on traffic management, because there is no straightforward way to move data from the first copy to AWS cloud from there. Note: I know the technical steps to set up an aux copy in a normal scenario.
When performing a backup with the NAS agent with deduplication enabled in Commvault, is it advisable to also enable the source storage array's native compression and deduplication, specifically on a Huawei OceanStor 5500? Please advise whether we can enable compression and deduplication on both the Commvault and Huawei OceanStor ends at the same time.
I am doing this, sort of. I have an S3-IA bucket and send my aux copies to it. I am just not tiering to Glacier or anything else; everything stays in S3-IA. My aux copy retention is 91 days, but I am getting blasted with AWS charges for early deletes. Does anyone know how I can find out what is deleting early? Commvault support verified I have proper retention set. Thanks, Stephanie
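One thing worth knowing: S3 Standard-IA has a 30-day minimum storage duration, so any object deleted or transitioned before it is 30 days old incurs a pro-rated early-delete charge. With a deduplicated copy, some objects (pruned chunks, rewritten metadata) can be removed before 30 days regardless of the 91-day job retention, and a lifecycle rule on the bucket could also be expiring or transitioning objects early. A quick way to rule out lifecycle rules with boto3 (the bucket name is a placeholder):

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "cv-aux-copy-bucket"  # hypothetical bucket name

try:
    rules = s3.get_bucket_lifecycle_configuration(Bucket=bucket)["Rules"]
    for rule in rules:
        # expirations or transitions listed here could remove or move objects
        # before the 30-day Standard-IA minimum and trigger early-delete fees
        print(rule.get("ID"), rule.get("Status"),
              rule.get("Expiration"), rule.get("Transitions"))
except ClientError as err:
    if err.response["Error"]["Code"] == "NoSuchLifecycleConfiguration":
        print("No lifecycle rules - early deletes are coming from object deletions")
    else:
        raise
```

If there are no lifecycle rules, AWS Cost Explorer or S3 server access logs can show which objects were deleted young, which narrows down whether it is chunk pruning or something else.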
We were testing a small (200 MB) backup and restore to tape and back to disk. The backup takes under 8 minutes, but the restore takes 3 hours. It appears that after making contact with the index server, the restore waits almost three hours before mounting the first tape; the actual transfer from tape to disk takes only a few minutes. Any idea why it can take so long to mount the first tape?
Commvault is showing 75 TB of data to be written to tape. We have set the tape copy's "combine source data streams" to 3 (so it will use 3 tapes) and multiplexing is set to 5. Additionally, the data path configuration is set to use alternate data paths when resources are busy, and in the policy we checked "enable stream randomization" and "distribute data evenly among multiple streams". We started the job; it chose to write to only 2 tapes and also chose an alternate media agent (not sure why, as the default media agent has 2 available tape drives and does not appear to be busy). Looking at the job, it used only 7 readers, but there is a single stream/entry sitting in "media not copied". It does not seem to have determined it needed 3 streams, yet it has a single stream waiting, and it is not running 10 readers (only 7), so the reader count/multiplexing does not seem to be honored. Why didn't it break up the streams as configured?
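For reference, here is the arithmetic as I understand it, assuming the usual behaviour where the multiplexing factor applies per mounted device stream; this is a sketch of the expectation, not an explanation of why this particular job behaved as it did:

```python
# rough expectation for an aux copy to tape (a sketch, not Commvault internals)
multiplexing       = 5   # multiplexing factor on the tape copy
configured_streams = 3   # "combine source data streams" setting
tapes_in_use       = 2   # what the job actually mounted
observed_readers   = 7   # what the job actually allocated

print("max readers if all 3 device streams ran:", configured_streams * multiplexing)  # 15
print("max readers with 2 tapes mounted:       ", tapes_in_use * multiplexing)        # 10
print("readers actually allocated:             ", observed_readers)
# readers can also be capped by how many source streams are ready to copy,
# so fewer than device_streams * multiplexing is not unusual; the stream
# stuck in "media not copied" suggests the third device stream never got a drive
```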
DDB Backups: Is the media agent that has a DDB partition associated with it supposed to back that partition up (and not another media agent)?
We have several DDBs, all partitioned across several media agents. When the DDB backups run, most of the media agents back up "themselves", meaning the client and the media agent are the same for the DDB backup job. But for one of them, I cannot get the DDB backup copy to choose the primary/default media agent (in the copy → data paths settings) for any of the DDB backups; it always chooses the alternate data path for both DDB backup partitions. I have not yet selected the "use preferred data path" setting (which should make it use only the primary media agent and no alternates), because I feel it should choose the primary on its own and automatically pick the secondary media agent for the other partition if needed. Also, I want the DDB backups to be split over 2 media agents, because one media agent is very overpowered (lots of CPU/memory) relative to the other (older, with fewer CPUs). The media agent the DDB backups keep choosing is this underpowered media agent.
Hey all - we are using encryption on our S3 library in AWS. Is it a best practice to also enable software encryption on the primary copy? For the encryption of the backup data going to S3, is there somewhere I can find which specific encryption type is in use? The setting is in my storage policy primary copy properties; this is just an example of where I found it.
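On the AWS side, you can read back which server-side encryption the bucket applies to objects at rest (SSE-S3 vs SSE-KMS, and which key), which is separate from any software encryption Commvault applies on the copy. A small boto3 sketch, with a placeholder bucket name:

```python
import boto3

s3 = boto3.client("s3")
bucket = "cv-s3-library"  # hypothetical bucket name

enc = s3.get_bucket_encryption(Bucket=bucket)
for rule in enc["ServerSideEncryptionConfiguration"]["Rules"]:
    default = rule["ApplyServerSideEncryptionByDefault"]
    # SSEAlgorithm is "AES256" for SSE-S3 or "aws:kms" for SSE-KMS
    print(default["SSEAlgorithm"], default.get("KMSMasterKeyID", "no KMS key"))
```

Bucket-level encryption protects the objects at rest in AWS; software encryption on the copy additionally protects the data before it leaves the Media Agent, so the two address different threats.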
Previously, with TSM, we received site "A" node pool backup data on tape cartridges and could easily restore it at site "B" by creating a node pool matching site "A", generating a table of contents, and restoring from it. Now we are using Commvault and want to restore tape backup data from site "A" at site "B". Could you tell me how we can do this with Commvault?