Storage and Deduplication
Discuss any topic related to storage or deduplication with fellow community members
- 621 Topics
- 3,258 Replies
We have a MediaAgent MA1 (physical server) located at Site A with a disk library that still has jobs under retention for the next year. However, Site A is now decommissioned. We would like to rebuild MA1 and use storage at another site, Site B, but would like to retain the jobs which are still under retention. Is doing a backup job of the CV_MAGNETIC files to another disk library a good option? What would be the best approach?
Hi all, in the Job Controller there is a waiting backup job due to the following error message: "Drive in which Media is mounted is not ready to use. Advice: Please retry your operation later. If the drive has a stuck volume, reset the drive to recover the media." As proposed, I tried to "reset drive", but it didn't help. The tape seems to be stuck in the drive. Do you have any suggestions what to try next to work it out?
Hello, my demo CommCell was upgraded to FR26 recently and the MA has been getting AWS authentication issues ever since.
Version: 11.26.3
Storage: NetApp ONTAP S3

CVMA.log
7040 104c 12/27 15:39:06 ##### [DEVICE_IO] Message: AWS authentication requires a valid Date or x-amz-date header
7040 2d84 12/27 15:39:06 ##### [DEVICE_IO] GetFileSize() - Error: Error = 44106

CloudFileAccess.log
7040 22f4 12/27 15:43:09 ### [CVMountd] OpenFile() - <bucket>/3672BF_06.01.2021_10.33/CV_MAGNETIC/V_3888/CHUNK_152953/SFILE_CONTAINER.idx, mode = READWRITE, error = Error = 44106
7040 28f0 12/27 15:43:09 ### [CVMountd] OpenFile() - <bucket>/3672BF_06.01.2021_10.33/CV_MAGNETIC/V_3044/CHUNK_121116/SFILE_CONTAINER.idx, mode = READWRITE, error = Error = 44106

CloudActivity.log
7040 25b8 12/27 15:42:59 ### [CVMountd] Message: AWS authentication requires a valid Date or x-amz-date header
7040 2ab0 12/27 15:42:59 ### [CVMountd] GetFileSize() - Error: Error = 44106

What changed with FR26? I already tried with…
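For reference, that message is the S3 protocol's standard complaint about a missing or unusable Date/x-amz-date header. One cheap thing to rule out is clock skew between the MA and the S3 endpoint; here is a minimal sketch (the endpoint URL is a placeholder, not a real address):

```python
# Compare the MediaAgent clock against the S3 endpoint's Date response header.
import urllib.request
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

resp = urllib.request.urlopen("https://ontap-s3.example.com")  # placeholder endpoint
server_time = parsedate_to_datetime(resp.headers["Date"])      # HTTP Date header (GMT)
skew = datetime.now(timezone.utc) - server_time
print(f"Clock skew vs endpoint: {skew.total_seconds():.0f} s")
# AWS-style signed requests are generally rejected beyond ~15 minutes of skew.
```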
Since the other thread was marked as solved, I'll start a new one. From what I understand, the Network Throttling on the MediaAgent only affects backup jobs, not the bandwidth of Aux Copy jobs. There is an option to limit Aux Copy bandwidth usage by setting the advanced setting "Throttle Network Bandwidth (MB/HR)" on the storage policy copy. Unfortunately that affects the usage all day, as there is no possibility to set it per time interval. I have two Aux Copy jobs running for two different storage policy copies, with the bandwidth limitation set to 5000 MB/HR, which is roughly 11 Mbit/s. So running two of those shouldn't use more than 22 Mbit/s. Looking at the current throughput, they are 3.92 GB/hr and 2.97 GB/hr in the CommCell console. Those combined give 6.89 GB/hr, roughly 15 Mbit/s. I understand that it's not 100% accurate as the data is deduplicated. But looking at our monitoring I see two streams going to Azure, one using 30 Mbit/s and the other at 20 Mbit/s. In fact the Aux Copies tend to…
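The unit conversions in the post do check out. A quick sketch using the figures from the post (assuming the console reports GB/hr in decimal units, i.e. 1 GB = 8000 Mbit):

```python
# Sanity-check the throttle math from the post above.

def mb_per_hr_to_mbit_per_s(mb_per_hr: float) -> float:
    """MB/HR -> Mbit/s (1 MB = 8 Mbit, 1 hour = 3600 s)."""
    return mb_per_hr * 8 / 3600

throttle = 5000  # MB/HR per storage policy copy, from the post
print(f"One copy:   {mb_per_hr_to_mbit_per_s(throttle):.1f} Mbit/s")       # ~11.1
print(f"Two copies: {mb_per_hr_to_mbit_per_s(2 * throttle):.1f} Mbit/s")   # ~22.2

# Observed console throughput, GB/hr -> Mbit/s
observed = 3.92 + 2.97  # GB/hr, from the post
print(f"Observed:   {observed * 8000 / 3600:.1f} Mbit/s")                  # ~15.3
```

So the observed console figure (~15 Mbit/s) is under the combined 22 Mbit/s cap; the discrepancy is between the console numbers and the 30 + 20 Mbit/s seen on the wire.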
Is this assumption correct? If you start CV encrypting data sent to dedupe storage, my guess would be that it creates a completely new set of dedupe data. Once encryption is turned on, the dedupe engine will see it as new data rather than an encrypted version of the old. While the unencrypted and encrypted data from the same servers remain in the same dedupe store, storage usage could be higher than usual.
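That matches how content-based dedupe works in general: duplicates are detected by a signature (hash) of each block, and any cipher changes every byte, so post-encryption signatures never match pre-encryption ones. A toy illustration of the principle (this is not Commvault's actual signature scheme or cipher):

```python
# Why enabling encryption re-baselines deduplication: signatures are
# computed over block content, and encryption changes the content.
import hashlib

def signature(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()[:16]

def toy_encrypt(block: bytes, key: int) -> bytes:
    # stand-in for a real cipher; any real cipher also changes every byte
    return bytes(b ^ key for b in block)

block = b"the same 128 KB block backed up twice " * 100

print(signature(block) == signature(block))                      # True: dedupes
print(signature(block) == signature(toy_encrypt(block, 0x5A)))   # False: stored anew
```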
Hello, I would like the weekly tape backup jobs to have an exception for the first 3 days of the month. For that I created an exception (excluding the 1st, 2nd, and 3rd day of the month) on the auxiliary-copy-to-tape task in the schedule policy. However, I have pending jobs not copied on October 02 in the weekly storage policies, which I don't want. I opened a case (incident 211005-210); support told me that this was logical because it is implemented at the level of the storage policies. How could I prevent the weekly backup copies from running on the 1st, 2nd, and 3rd of each month? Is it possible to configure this? Thanks a lot.
Since noon on Saturday (May 15), my Disaster Recovery Backup admin jobs have been failing with this error:
Error Code: [34:85]
Description: CommServeDR: Error Performing Transfer: Error: [Failed to initialize with Commvault cloud service. The service may be down for maintenance.]
Source: inf-srv57, Process: commserveDR
Is anyone else having issues with the CommVault cloud service? Ken
Getting the following when CommVault attempts to reconstruct the DDB:
User: Administrator
Job ID: 353464
Status: Failed
Storage Policy Name: BTR_Global_Dedupe
Copy Name: BTR_Global_Dedupe_Primary
Start Time: Sun Nov 28 17:33:46 2021
End Time: Tue Nov 30 03:45:22 2021
Error Code: [62:2035]
Failure Reason: One or more partitions of the active DDB for the storage policy copy is not available to use.
Is there any way around this error so I can get backups going again? It has tried a couple of times for multiple days.
Hi there! Is there any way to investigate very poor reader time in an NDMP backup in Commvault? A quick look at a part of the log suggests slow reader time to be the culprit of the poor backup performance:

|*1292266*|*Perf*|696353| -----------------------------------------------------------------------------------------------------
|*1292266*|*Perf*|696353| Perf-Counter                                    Time(seconds)              Size
|*1292266*|*Perf*|696353| -----------------------------------------------------------------------------------------------------
|*1292266*|*Perf*|696353|
|*1292266*|*Perf*|696353| Replicator DashCopy
|*1292266*|*Perf*|696353|  |_Buffer allocation............................ -    [Samples - 477421] [Avg - 0.000000]
|*1292266*|*Perf*|696353|  |_Media Open................................... 20   [Samples - 5] [Avg - 4.000000]
|*1292266*|*Perf*|696353|  |_Chunk Recv...................................
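If it helps, here is a small sketch that pulls the per-counter totals out of such Perf lines so you can sort them by time spent. The line format is assumed from the excerpt above, not from any official spec:

```python
# Parse Perf-counter lines like the excerpt above and sort by total seconds.
import re

LINE = re.compile(
    r"\|_(?P<name>[^.]+)\.*\s+(?P<secs>[\d-]+)\s+"
    r"\[Samples - (?P<samples>\d+)\] \[Avg - (?P<avg>[\d.]+)\]"
)

def parse_perf(log_text: str):
    counters = []
    for m in LINE.finditer(log_text):
        secs = 0 if m["secs"] == "-" else int(m["secs"])  # '-' means negligible
        counters.append((m["name"].strip(), secs, int(m["samples"]), float(m["avg"])))
    return sorted(counters, key=lambda c: c[1], reverse=True)

sample = """
|_Buffer allocation............................ - [Samples - 477421] [Avg - 0.000000]
|_Media Open................................... 20 [Samples - 5] [Avg - 4.000000]
"""
for name, secs, samples, avg in parse_perf(sample):
    print(f"{name:<20} {secs:>6}s  samples={samples}  avg={avg}")
```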
I've used the subject workflow a few times and it's working. However, when I looked at it today, it seems it was changed at some point in time. The documentation doesn't correspond to what's shown in the GUI. It seems that it's no longer possible to target a specific cloud library. Is it just on my installation that this has changed, or has anyone else seen this? Screenshots provided below. //Henke
I have the CS and MA in Azure cloud, and one Windows client machine in the same network. I have one Azure Blob storage configured as a cloud library. I have installed the Storage Accelerator on my Windows client, but I still can't see backups moving from the client directly to the cloud (checked in streams, checked in events, and job details). Is there anything I need to do specifically to enable the Storage Accelerator?
We are trying to move a mount path to a new MediaAgent, but it has been failing ever since we stopped the job. The disk became full, resulting in the data migration job being stuck for weeks, so we stopped the job and freed up space. However, every time we try to restart the mount path move, it fails. We did the following to troubleshoot:
- Run disk defragmentation
- Verify disk and mount path are online
- Validate mount path
- Verify no disk maintenance task is running
- General sanity checks
The error that we get is:
Move Mount Path Job Failed, Reason: The mount path is under maintenance and hence cannot proceed with the move operation. Source: <MediaAgentName>, Process: LibraryOperation
Can anyone tell us how to fix this error? How do we get Commvault to see that the disk is not in maintenance mode?
Good morning to all. On a monthly basis I release auxiliary copies to tape, ending with a total of 11 tapes. I wanted to do a restore of about 675 MB and it asked me for 9 tapes to do it. Is this because the data has been distributed across 9 tapes? Shouldn't it be on one tape only, since the capacity of an L7 tape is large? Could it be due to the "Use Scalable Resource Allocation" option? Thank you very much for your help. Best regards, Johana 😀
A customer of ours intends to move their Azure workload to Metallic in the near future. Because we don't manage the Azure side of things for the customer, I assumed that the Azure Blob storage was a combined tier based on what was configured in Commvault (screenshot below). The cloud libraries are configured in Commvault to use a combined tier of storage, where we write metadata to the Hot tier and chunk data to the Archive tier. Based on that, I suggested that a rehydration of data from the Archive tier to the Hot tier is needed to perform an Aux Copy to Metallic, or that a ticket be lodged with Microsoft to have the entire contents of the blob container converted to the Hot tier so that it can be read by the Aux Copy process. The CV cloud library explorer shows chunks in the Archive tier. To learn more about the Azure Blob storage configuration, the customer turned to Microsoft. Microsoft confirmed that no data needed to be rehydrated in order to migrate to Metallic, because they could only see hot-tier blob storage configured on that…
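One way to reconcile the two views is to count the per-blob access tiers directly. A hedged sketch using the azure-storage-blob package (the connection string, container name, and prefix below are placeholders):

```python
# Count how many chunk blobs actually sit in each access tier, to check
# whether any Archive-tier data would need rehydration before an Aux Copy.
from collections import Counter
from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    conn_str="<connection-string>",           # placeholder
    container_name="<cv-library-container>",  # placeholder
)

tiers = Counter()
for blob in container.list_blobs(name_starts_with="CV_MAGNETIC/"):  # assumed prefix
    tiers[blob.blob_tier] += 1  # e.g. 'Hot', 'Cool', 'Archive'

print(tiers)  # any Archive count > 0 means rehydration is needed first
```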
Hi all! My company uses six MAs to create and store backups: one MA with separate storage for long-term retention offsite, another one to create local backups at a branch office site, and four MAs in two two-node grids on the main site. MA1 & MA2 form one grid and MA3 & MA4 another; they share their libraries and DDBs. From the branch office, local backups are copied to the main site, and main-site backups are copied to the long-term site as DR backups. The MAs are physical on the main site and virtual on the others, and disk storage is used on all sites. Currently we are planning to replace our disk storage and the physical MAs on the main site. And of course, it is a good chance to upgrade the OS on the MAs from Win2012R2 to Win2019. During the process, the library content should be moved from the old disk storage to the new one, and the DDBs from the old MAs to the new ones. One MA stores 40-60 TB of backup data, and of course I would like to do it with minimum downtime. I have found descriptions about library mov…
Hello everyone, how are you today? I have a problem I am trying to resolve; I don't know if anyone can be of help. This customer has a tape library with 3 drives but 5 slots. They had 3 existing drives which are running fine, and they just added 2 more drives to be configured. I have been trying to detect and configure the new drives; I can see them, but they show as undetected and unconfigured. It says the SCSI adapter has been removed, "Please go to property page to select the right SCSI". Can you help, please? Eventually I detected and configured one of the drives, but I can't detect the last one. During the detect-and-configure scan, the 5th drive shows up, but it does not show in the data path and library.
Hi Commvault Community, I would like to see if anyone else is facing the same issue. We have a few customers that are still using tapes to move weekly/monthly backups to a safe location. With the change to "forever incremental", they are now facing a big issue collecting all the backups of an agent onto the exported tape: there is no process to generate a Synthetic Full or anything similar. One customer is still holding on to the "old" Exchange OnePass classic agent so he doesn't lose the Synthetic Full option. Another customer is manually creating a new Storage Policy Copy every month to get all the backups on the same tape. We shared this problem a few months ago; we even had a talk with dev at GO 2019. Back then we were told a solution was on its way and we could expect it for SP20. Now we are at SP26 and there is still no option for these customers.
Hi Commvaulters, hope everyone is doing well. We have a new cloud library being set up by our storage team, and 2 MediaAgents that will be able to use the new library. We want to set up some high availability between the 2 MAs accessing the cloud library; after some research in the Commvault documentation, I came across GridStor (Alternate Data Paths). I wonder if it's possible to share the same bucket between the two MAs (like an NFS share on Linux), so that if one of the MAs fails, the jobs fail over to the second one. In the documentation, I've seen that you have to configure MA1 to mount the volume, which allows it to access the volume as a local disk, and then share the volume to MA2 so it can access it using UNC paths. In this specific scenario, doesn't that mean that if my MA1 fails, MA2 also loses its access to the volume, since it's shared by MA1? All this is a bit confusing, since it's the first time we are trying to implement MA HA using…
Hi Team, we are in the process of evaluating our recent Aux Copy performance to our DR site, as we are scoping future capacity and bandwidth requirements. It would be very useful if there was a suitable report showing the rate of throughput and the amount of data copied in a summary format, similar to what you see when running Job reports aligned by storage policy (as opposed to client). Surprisingly, I can't see a summary of the Aux Copy results in any reports. Although the Aux Copy entry in the report shows the data copied and the overall throughput, it is only per Aux Copy job. You can see below in this example, where we have a few days' worth of Aux Copies. What I really need is a report with a summary line at the top showing all data copied and the overall throughput figure. At the moment, I need to export to CSV and then reorganize the fields so I can add things up manually. There is one other report which might also be of use, which is the Jobs in Storage Policy Copies…
Hi guys, I would like to know whether there are recommendations on the block size for a cloud library. We have cloud storage in our data center, and we would like to use it for backup. On the storage, we have the ability to choose the block size. Do we need to specify the block size or keep the default (32 KB)? Note: for disk libraries, we are used to formatting our local drives with a 64 KB block size; however, we didn't find anything for cloud libraries. Thanks in advance. Best regards
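One consideration, as a back-of-the-envelope sketch rather than official guidance: backup targets store large chunk files, and per-file wasted ("slack") space averages half an allocation unit, so the 32 KB vs 64 KB choice barely affects capacity either way. The average chunk-file size below is an illustrative assumption:

```python
# Average slack per file is ~half a cluster; show it as a percentage of
# an assumed average chunk-file size on a backup target.

def slack_pct(avg_file_size_bytes: float, cluster_bytes: int) -> float:
    return (cluster_bytes / 2) / avg_file_size_bytes * 100

avg_chunk = 2 * 1024**3  # assume ~2 GB average chunk file (illustrative)
for cluster_kb in (32, 64):
    print(f"{cluster_kb} KB cluster: ~{slack_pct(avg_chunk, cluster_kb * 1024):.5f}% slack")
```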