Storage and Deduplication
Discuss any topic related to storage or deduplication best practices
Hi Team,

I am testing the Isilon SmartLock feature for storage-level immutability with Commvault DASH aux copies. My DDB is hosted on a local (non-WORM) drive on a Windows MediaAgent, and the library is created from a SmartLock directory on the Isilon. My question is: how does Commvault write to WORM-enabled storage? With the current SmartLock settings there is an autocommit period of 5 minutes, meaning files that sit in a SmartLock directory for 5 minutes without being modified are automatically committed to a WORM state. Can Commvault handle this? I expect there will be chunks that go unmodified for 5 minutes but need to be updated later during the aux copy process. Will the aux copy fail, or create new chunks/folders? And what about the DDB: is keeping the DDB on non-WORM storage and the data on WORM storage the recommended approach?

Regards,
Mohit
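To illustrate the timing concern in the post above, here is a minimal sketch (the mount path is a hypothetical placeholder) that lists files in a SmartLock directory which have gone unmodified past the autocommit window, and which SmartLock would therefore already have committed to a WORM state if Commvault later tried to update them in place:

```python
import os
import time

# Hypothetical SmartLock mount path; adjust to match your library.
SMARTLOCK_PATH = r"\\isilon\smartlock\CV_MAGNETIC"
AUTOCOMMIT_SECONDS = 5 * 60  # the 5-minute autocommit window from the post

now = time.time()
for root, _dirs, files in os.walk(SMARTLOCK_PATH):
    for name in files:
        path = os.path.join(root, name)
        age = now - os.path.getmtime(path)
        if age > AUTOCOMMIT_SECONDS:
            # Unmodified past the autocommit window: SmartLock will have
            # committed this file to WORM, so a later in-place update
            # (e.g. appending to an open chunk) would fail.
            print(f"WORM-committed (age {age:.0f}s): {path}")
```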
Hi Community,

We are planning to convert our production environment's DDBs from v4 to v5, but in our test environment all of our MediaAgents are already on DDBv5. Since DDBv5 was introduced in version 11.14, I am searching for a media kit package older than SP14 in order to test and validate the conversion. Is there anywhere I can download a media kit older than SP14, given that it is not available in the Commvault Store? Thanks in advance.
Hi Team,

We have a very large, infinite-retention storage policy associated with storage pool "Pool1". It has grown to the point that we will soon be creating another storage pool and storage policy; let's call these Pool2. All clients from Pool1 will be migrated to Pool2, so Pool1 will stop receiving any fresh data once Pool2 starts receiving it all.

The question I have is around the massive leftover DDBs from Pool1. They are 2 x 1.8 TB and are hosted on the two MediaAgents associated with Pool1. Since Pool1 will stop receiving data, I am keen to decommission the Pool1 MediaAgents, noting that the secondary-copy cloud-based backup data can be accessed from a number of MediaAgents, so it does not necessarily have to be the Pool1 MediaAgents; it can be any MediaAgent, provided it is mapped to the relevant cloud library mount points.

So the questions I have are:

1 - What do we do with these large, legacy DDBs? I understand we need to keep them for Commvault sync
Hello,

I need a little clarification about long-running auxiliary copies. On a daily basis we run a primary copy schedule followed by three auxiliary copies. Quite often the auxiliaries take a very long time to complete, enough to overlap with the primary copy schedule of the following day. This causes the jobs from day 2 to "enter" the still-running auxiliary copies from day 1. Is there any way to avoid this? Is it possible to set a boundary on the latest job to be included in an auxiliary copy? To my understanding, the Copy Policy -> Backup Period -> End Time setting cannot be used, as it takes a fixed date rather than a moving one.

Sorry if this sounds a bit dumb, and thanks for your support.
Gaetano
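For what it's worth, the moving boundary described above is just "start of the current day", recomputed each time the copy runs; a minimal sketch of that cutoff calculation follows (how such a cutoff gets applied to job selection inside Commvault is exactly what the post is asking about):

```python
from datetime import datetime, time, timedelta

# Moving cutoff: midnight at the start of today. An aux copy started on
# day 2 would then only consider jobs that finished during day 1 or earlier.
cutoff = datetime.combine(datetime.now().date(), time.min)

job_end = datetime.now() - timedelta(hours=30)  # example: a job from yesterday
print(f"cutoff: {cutoff:%Y-%m-%d %H:%M}, include job: {job_end < cutoff}")
```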
Hello Commvault Community!

I would like to ask if there is any option to do a "tape mirror", by which I mean copying the data 1:1 from one tape to another while keeping the data on both tapes. I am aware that there is a "tape-to-tape copy" option, but if I understand correctly it deletes the data from the source tape and copies it to a new tape taken from spare media. The reason the customer wants two copies of the data on two different tapes is their company's security rules: they must keep one tape in a safe and the other in the active tape library. I suggested making another storage policy copy to tape with the disk copy as the source, but then we can't be sure the tapes will be 1:1. Is there any chance we can make a tape mirror?

Move Contents of Media from One Tape to Another: https://documentation.commvault.com/11.24/expert/10538_move_contents_of_media_from_one_tape_to_another.html

Thanks & Regards,
Kamil
Hi Team,

We have a MediaAgent with four mount points, all provisioned from backend SAN. One of the four mount points shows an error while writing data. I checked Disk Management and the disk was showing as read-only, so I rebooted the MediaAgent and set the disk back to read/write. I created a test folder on the disk, which worked fine, but when I run storage validation it gives the error below. Has anyone had this kind of issue?

7944 173c 09/09 10:20:44 685427 Scheduler Set pending cause [Failed to mount the disk media in library [LIB05] with mount path [C:\CommVaulttLibrary\503] on MediaAgent [cv5]. Operating System could not find the path specified. Please ensure that the path is correct and accessible.
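As a quick OS-level check, a scratch-file test against the mount path from the error message can confirm whether the path is reachable and writable outside Commvault; a minimal sketch, meant to be run on the MediaAgent itself (ideally under the same account the Commvault services run as):

```python
import os
import tempfile

# Mount path taken from the error message above.
MOUNT_PATH = r"C:\CommVaulttLibrary\503"

try:
    # Create, write, and remove a scratch file to prove the path is
    # accessible and writable at the OS level.
    fd, scratch = tempfile.mkstemp(dir=MOUNT_PATH)
    with os.fdopen(fd, "w") as f:
        f.write("write test")
    os.remove(scratch)
    print("Mount path is reachable and writable.")
except OSError as exc:
    print(f"OS-level failure, matching the job error: {exc}")
```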
Hi all,

In the Job Controller there is a waiting backup job with the following error message:

"Drive in which Media is mounted is not ready to use. Advice: Please retry your operation later. If the drive has a stuck volume, reset the drive to recover the media."

As suggested, I tried to reset the drive, but it didn't help; the tape seems to be stuck in the drive. Do you have any suggestions on what to try next?
Hello,

My demo CommCell was upgraded to FR26 recently and the MediaAgent has been getting AWS authentication issues ever since.

Version: 11.26.3
Storage: NetApp ONTAP S3

CVMA.log
7040 104c 12/27 15:39:06 ##### [DEVICE_IO] Message: AWS authentication requires a valid Date or x-amz-date header
7040 2d84 12/27 15:39:06 ##### [DEVICE_IO] GetFileSize() - Error: Error = 44106

CloudFileAccess.log
7040 22f4 12/27 15:43:09 ### [CVMountd] OpenFile() - <bucket>/3672BF_06.01.2021_10.33/CV_MAGNETIC/V_3888/CHUNK_152953/SFILE_CONTAINER.idx, mode = READWRITE, error = Error = 44106
7040 28f0 12/27 15:43:09 ### [CVMountd] OpenFile() - <bucket>/3672BF_06.01.2021_10.33/CV_MAGNETIC/V_3044/CHUNK_121116/SFILE_CONTAINER.idx, mode = READWRITE, error = Error = 44106

CloudActivity.log
7040 25b8 12/27 15:42:59 ### [CVMountd] Message: AWS authentication requires a valid Date or x-amz-date header
7040 2ab0 12/27 15:42:59 ### [CVMountd] GetFileSize() - Error: Error = 44106

What changed with FR26? I already tried wit
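The "valid Date or x-amz-date header" message suggests the endpoint is rejecting the authentication headers it receives, so one way to isolate the problem is to hit the bucket outside Commvault with each S3 signature style and see which one the ONTAP S3 endpoint accepts. A minimal sketch using boto3; the endpoint URL, bucket, and credentials are placeholders:

```python
import boto3
from botocore.client import Config

ENDPOINT = "https://ontap-s3.example.com"  # placeholder ONTAP S3 endpoint
BUCKET = "<bucket>"                        # bucket name from the logs

# Try both SigV2 ("s3") and SigV4 ("s3v4"); a version the endpoint
# rejects should reproduce an authentication error like the one logged.
for sig in ("s3", "s3v4"):
    s3 = boto3.client(
        "s3",
        endpoint_url=ENDPOINT,
        aws_access_key_id="<access-key>",
        aws_secret_access_key="<secret-key>",
        config=Config(signature_version=sig),
    )
    try:
        s3.list_objects_v2(Bucket=BUCKET, MaxKeys=1)
        print(f"signature_version={sig}: OK")
    except Exception as exc:
        print(f"signature_version={sig}: {exc}")
```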
Since the other thread was marked as solved, I'll start a new one.

From what I understand, network throttling on the MediaAgent only affects backup jobs, not the bandwidth of aux copy jobs. There is an option to limit aux copy bandwidth by setting the advanced option "Throttle Network Bandwidth (MB/HR)" on the storage policy copy. Unfortunately that applies all day, as there is no way to set it per time interval.

I have two aux copy jobs running for two different storage policy copies, each with the bandwidth limit set to 5000 MB/HR, which is roughly 11 Mbit/s. So running two of them shouldn't use more than 22 Mbit/s. Looking at the current throughput in the CommCell Console, they show 3.92 GB/hr and 2.97 GB/hr, which combined gives 6.89 GB/hr, roughly 15 Mbit/s. I understand this is not 100% accurate, since the data is deduplicated. But looking at our monitoring, I see two streams going to Azure, one using 30 Mbit/s and the other 20 Mbit/s. In fact the aux copies tend to
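The unit conversions quoted in the post check out; here is the arithmetic worked through (taking 1 GB as 1000 MB, which matches the figures given):

```python
def mb_per_hr_to_mbit_per_s(mb_per_hr: float) -> float:
    # MB/hr -> Mbit/s: multiply by 8 bits per byte, divide by 3600 seconds.
    return mb_per_hr * 8 / 3600

print(mb_per_hr_to_mbit_per_s(5000))      # ~11.1 Mbit/s per throttled job
print(mb_per_hr_to_mbit_per_s(2 * 5000))  # ~22.2 Mbit/s expected ceiling
print(mb_per_hr_to_mbit_per_s(6890))      # ~15.3 Mbit/s observed (6.89 GB/hr)
```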
Hello,

I would like the weekly tape backup jobs to have an exception for the first three days of the month. For that I created an exception (excluding days 1, 2, and 3 of the month) on the auxiliary-copy-to-tape task in the schedule policy. However, I now have pending jobs that were not copied on October 2 in the weekly storage policies, which I don't want. I opened a case (incident 211005-210) and support told me this was logical, because the behavior is implemented at the level of the storage policies. How can I prevent the weekly backup copies from running on the 1st, 2nd, and 3rd of each month? Is it possible to configure this?

Thanks a lot.
Since noon on Saturday (May 15), my disaster recovery backup admin jobs have been failing with this error:

Error Code: [34:85]
Description: CommServeDR: Error Performing Transfer: Error : [Failed to initialize with Commvault cloud service, The service may be down for maintenance.].
Source: inf-srv57, Process: commserveDR

Is anyone else having issues with the Commvault cloud service?
Ken
Getting the following when Commvault attempts to reconstruct the DDB:

User: Administrator
Job ID: 353464
Status: Failed
Storage Policy Name: BTR_Global_Dedupe
Copy Name: BTR_Global_Dedupe_Primary
Start Time: Sun Nov 28 17:33:46 2021
End Time: Tue Nov 30 03:45:22 2021
Error Code: [62:2035]
Failure Reason: One or more partitions of the active DDB for the storage policy copy is not available to use.

Is there any way around this error so I can get backups going again? It has been retried a couple of times over multiple days.
Hi there!

Is there any way to investigate a very poor read time for an NDMP backup in Commvault? A quick look at part of the log suggests slow read time is the culprit behind the poor backup performance:

|*1292266*|*Perf*|696353| -----------------------------------------------------------------------------------------------------
|*1292266*|*Perf*|696353| Perf-Counter                                       Time(seconds)    Size
|*1292266*|*Perf*|696353| -----------------------------------------------------------------------------------------------------
|*1292266*|*Perf*|696353|
|*1292266*|*Perf*|696353| Replicator DashCopy
|*1292266*|*Perf*|696353|  |_Buffer allocation............................    -     [Samples - 477421] [Avg - 0.000000]
|*1292266*|*Perf*|696353|  |_Media Open...................................    20    [Samples - 5] [Avg - 4.000000]
|*1292266*|*Perf*|696353|  |_Chunk Recv...................................
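If it helps, counter blocks like the one above can be parsed and sorted to find the largest time sinks instead of eyeballing them; a minimal sketch, where the counter-line format is assumed from the excerpt and the log file name is a placeholder:

```python
import re

# Matches counter lines like:
# |*1292266*|*Perf*|696353|  |_Media Open....... 20 [Samples - 5] [Avg - 4.000000]
PATTERN = re.compile(r"\|_(?P<name>[^.]+)\.+\s+(?P<secs>\d+)\s+\[Samples - (?P<n>\d+)\]")

counters = []
with open("CVPerfMgr.log") as log:  # placeholder log file name
    for line in log:
        m = PATTERN.search(line)
        if m:
            counters.append((int(m.group("secs")), m.group("name").strip(), int(m.group("n"))))

# Largest time sinks first: reader-side counters at the top of this list
# point at where the NDMP stream is spending its time.
for secs, name, samples in sorted(counters, reverse=True)[:10]:
    print(f"{secs:>8}s  {name}  (samples={samples})")
```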
I've used the subject workflow a few times and it works. However, when I looked at it today it seems it was changed at some point: the documentation no longer corresponds to what is shown in the GUI, and it seems it is no longer possible to target a specific cloud library. Is it just my installation where this has changed, or has anyone else seen this? Screenshots provided below.

//Henke
I have a CommServe and MediaAgent in Azure, one Windows client machine on the same network, and an Azure Blob storage account configured as a cloud library. I have installed the Storage Accelerator on my Windows client, but I still can't see backups moving directly from the client to the cloud (checked in streams, in events, and in the job details). Is there anything I need to do specifically to enable the Storage Accelerator?
We are trying to move a mount path to a new MediaAgent, but it has been failing ever since we stopped the job. The disk became full, which left the data migration job stuck for weeks, so we stopped the job and freed up space. However, every time we try to restart the mount path move, it fails.

We did the following to troubleshoot:
- Ran disk defragmentation
- Verified the disk and mount path are online
- Validated the mount path
- Verified no disk maintenance task is running
- General sanity checks

The error that we get is:

Move Mount Path Job Failed, Reason : The mount path is under maintenance and hence cannot proceed with the move operation. Source: <MediaAgentName>, Process: LibraryOperation

Can anyone tell us how to fix this error? How do we get Commvault to see that the disk is not in maintenance mode?
A customer of ours intends to move their Azure workload to Metallic in the near future. Because we don't manage the Azure side of things for the customer, I assumed the Azure Blob storage was a combined tier, based on what was configured in Commvault (screenshot below). The cloud libraries are configured in Commvault to use a combined storage tier, where metadata is written to the Hot tier and chunk data to the Archive tier. Based on that, I suggested that a rehydration of data from the Archive tier to the Hot tier is needed before an aux copy to Metallic can run, or that a ticket be lodged with Microsoft to have the entire contents of the blob container converted to the Hot tier so the aux copy process can read it. The CV cloud library explorer shows chunks in the Archive tier.

To learn more about the Azure Blob storage configuration, the customer turned to Microsoft. Microsoft confirmed that no data needed to be rehydrated in order to migrate to Metallic, because they could only see hot-tier Blob storage configured on that
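One way to settle the disagreement is to list the blobs' access tiers directly with the Azure SDK rather than relying on either console's view; a minimal sketch using the azure-storage-blob package, with the connection string and container name as placeholders:

```python
from collections import Counter
from azure.storage.blob import BlobServiceClient

# Placeholders: the cloud library's storage account and container.
CONN_STR = "<storage-account-connection-string>"
CONTAINER = "<cloud-library-container>"

service = BlobServiceClient.from_connection_string(CONN_STR)
container = service.get_container_client(CONTAINER)

# Tally blobs by access tier; chunk data sitting in Archive will show up
# here even if a portal view only reflects the account's default tier.
tiers = Counter(blob.blob_tier for blob in container.list_blobs())
for tier, count in tiers.items():
    print(f"{tier}: {count} blobs")
```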
Hi all, I'll try here. We have two MediaAgents in Azure that act as proxies as well. When we try to back up a VM from Azure (to cloud storage), the job completes if we configure MA1 as the proxy, but when we configure MA2 as the proxy it fails with a "failed to fetch a valid sas token" error. Does anyone have a clue what causes this error? Both MAs have the same OS, disks, permissions, and version. There are no drops on the firewall, and the network settings are configured (client/CS).
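If the proxies authenticate to Azure with a managed identity (an assumption; the post doesn't say which authentication method is configured for the hypervisor), one quick differential test is to request a token from the Azure Instance Metadata Service on each MA and compare the results:

```python
import json
import urllib.request

# Azure Instance Metadata Service endpoint (fixed, link-local address).
URL = ("http://169.254.169.254/metadata/identity/oauth2/token"
       "?api-version=2018-02-01&resource=https://management.azure.com/")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        token = json.load(resp)
        # A valid response on MA1 but an error here on MA2 would point at
        # the VM's identity/role assignment rather than at Commvault.
        print("Got token, expires_on:", token.get("expires_on"))
except Exception as exc:
    print("IMDS token request failed:", exc)
```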
Hello everyone, how are you today?

I have a problem I am trying to resolve and don't know if anyone can help. A customer has a tape library with three drives but five drive slots. The three existing drives are running fine; they just added two more drives to be configured. I have been trying to detect and configure the new drives; I can see them, but they show as undetected and unconfigured, with the message "SCSI adapter has been removed. Please go to the property page to select the right SCSI." Can you help, please? Eventually I detected and configured one of the drives, but I can't detect the last one. During the detect-and-configure scan the fifth drive shows up, but it does not appear in the data path and library.
Hi Commvault Community,

I would like to see if anyone else is facing the same issue. We have a few customers that still use tapes to move weekly/monthly backups to a safe location. With the change to "forever incremental", they now face a big problem collecting all of an agent's backups onto the exported tape: there is no process to generate a synthetic full or anything similar on the tape copy. One customer is still holding on to the "old" Exchange OnePass classic agent so he doesn't lose the synthetic full option. Another customer manually creates a new storage policy copy every month to get all the backups onto the same tape.

We raised this problem a few months ago; we even had a talk with development at GO 2019. Back then we were told a solution was on its way and that we could expect it in SP20. Now we are at SP26 and there is still no option for these customers.
Hi Team,

We are in the process of evaluating our recent aux copy performance to our DR site, as we are scoping future capacity and bandwidth requirements. It would be very useful if there were a report showing the rate of throughput and the amount of data copied in a summary format, similar to what you see when running job reports aligned by storage policy (as opposed to client). Surprisingly, I can't find a summary of aux copy results in any report. Although the aux copy entry in the report shows the data copied and the overall throughput, it is only per aux copy job. You can see below an example where we have a few days' worth of aux copies. What I really need is a report with a summary line at the top showing all data copied and the overall throughput figure. At the moment I have to export to CSV and reorganize the fields so I can add things up manually.

There is one other report which might also be of use, which is the Jobs in Storage Policy Copies
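Until a built-in summary exists, the manual adding-up step can at least be scripted over the exported report; a minimal sketch using pandas, where the file name and the column names ("Data Copied (GB)", "Duration (hours)") are assumptions about the export format and should be adjusted to match the actual CSV:

```python
import pandas as pd

# Assumed export of the per-job aux copy report; adjust names to match.
df = pd.read_csv("aux_copy_jobs.csv")

total_copied_gb = df["Data Copied (GB)"].sum()
total_hours = df["Duration (hours)"].sum()

# The missing summary line: total data copied and overall throughput.
print(f"Total copied: {total_copied_gb:.1f} GB")
print(f"Overall throughput: {total_copied_gb / total_hours:.2f} GB/hr")
```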