Storage and Deduplication
Discuss any topic related to storage or deduplication best practices
- 776 Topics
- 3,673 Replies
Hi guys, what is better for agent-based file system backups: a full backup or a synthetic full? I get the point of the synthetic full being better in that we are not using the client machine's resources, but are there any disadvantages to it? Would the normal full be safer? If so, why?
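For intuition, here is a toy Python sketch (not Commvault internals; all names here are hypothetical) of what a synthetic full does conceptually: it merges the previous full with the later incrementals on the backup infrastructure side, so the client is never read again.

```python
# Toy illustration only: a synthetic full is assembled from existing backup
# data, which is why the client machine's disks and CPU are not touched.

def synthesize_full(previous_full: dict, incrementals: list) -> dict:
    """Merge file->version maps; the newest version of each file wins."""
    synthetic = dict(previous_full)   # start from the last full
    for inc in incrementals:          # apply incrementals in time order
        synthetic.update(inc)         # newer versions overwrite older ones
    return synthetic

full = {"a.txt": "v1", "b.txt": "v1"}
incs = [{"a.txt": "v2"}, {"c.txt": "v1"}]
print(synthesize_full(full, incs))
# {'a.txt': 'v2', 'b.txt': 'v1', 'c.txt': 'v1'}
```

The trade-off this illustrates: the read load moves from the client to the MediaAgent and its storage, so the quality of the synthetic full depends entirely on the integrity of the existing backup data.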
We have a media agent MA1 (physical server) located at Site A, with a disk library that still holds jobs under retention for the next year. However, Site A is now decommissioned. We would like to rebuild MA1 and use storage at another site, Site B, while retaining the jobs that are still under retention. Is backing up the CV_magnetic files to another disk library a good option? What would be the best approach?
Hello, I have some older data on one mount path and want to move it to a different host where new storage has already been configured and is up and running. Is there a way to merge the data from the old mount path into the new one? I want to move data from Host1 ("D:\MP1") to Host2 ("D:\New MP") to keep everything in a single place. Is a simple move enough?
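For the raw file movement itself, a minimal sketch (assuming admin-share access between the hosts; the paths are the ones from the question). Note this is only the bulk copy: it is not Commvault's Move Mount Path operation, which would still be needed so the CommCell metadata points at the new location.

```python
# Hypothetical sketch: plain file-level mirror of the old mount path onto the
# new host. Paths assume default administrative shares are reachable.
import shutil

SRC = r"\\Host1\D$\MP1"            # old mount path
DST = r"\\Host2\D$\New MP\MP1"     # subfolder under the new mount path

# dirs_exist_ok lets the copy be re-run to pick up stragglers (Python 3.8+)
shutil.copytree(SRC, DST, dirs_exist_ok=True)
```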
We are running Commvault 11.20. Backup jobs currently use an Azure Blob cloud disk library with the default container settings (the container type in Azure is Cool). We would like to move this storage to another tenant with a different storage account and a Cool/Archive container type. I'm looking for the best approach to migrate the storage, ideally done from Commvault rather than from Azure.
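If the raw blob movement ends up being done on the Azure side after all, a hedged sketch with the azure-storage-blob SDK is below. Connection strings, container names, and the SAS token are placeholders; a SAS on the source URL is typically required for cross-tenant server-side copies.

```python
# Hedged sketch: server-side copy of every blob from the old tenant's
# container to the new tenant's container. All credentials are placeholders.
from azure.storage.blob import BlobServiceClient

src = BlobServiceClient.from_connection_string("<old-tenant-conn-string>")
dst = BlobServiceClient.from_connection_string("<new-tenant-conn-string>")
src_container = src.get_container_client("cv-library")   # hypothetical name
dst_container = dst.get_container_client("cv-library")

for blob in src_container.list_blobs():
    # server-side copy: data moves inside Azure, not through this machine
    source_url = src_container.get_blob_client(blob.name).url + "?<sas-token>"
    dst_container.get_blob_client(blob.name).start_copy_from_url(source_url)
```

Even then, Commvault would still need to be repointed at the new account, so doing the whole migration through Commvault (for example via a new library and an auxiliary copy) keeps the metadata consistent.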
Hello, let me ask your opinion about the following situation. When I check the details of the system-created DDB Space Reclamation schedule policy, it looks "corrupted". As you can see in the attached image, the summary screen shows the type as "Data Verification" while the dialog shows the type as "Data Protection". Moreover, the Associations tab shows a list of clients instead of the DDB list. Is this normal? How can I get rid of this? Thank you in advance, Gaetano
Good morning to all. On a monthly basis I run auxiliary copies to tape, ending with a total of 11 tapes. I wanted to restore about 675 MB and it asked me for 9 tapes to do it. Is this because the data has been distributed across 9 tapes? Shouldn't it fit on one tape only, since an LTO-7 tape has a large capacity? Could it be due to the "Use Scalable Resource Allocation" option? Thank you very much for your help. Best regards, Johana 😀
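One plausible explanation, sketched as a toy model with entirely made-up numbers: multi-streamed auxiliary copies interleave many jobs' streams across the available drives, so the chunks belonging to one small restore can scatter over many tapes even though any single tape could hold the whole thing.

```python
# Toy model, hypothetical numbers: 9 chunks of ~75 MB each (675 MB total)
# land on whichever tape their stream happened to be writing to.
import random

random.seed(1)
TAPES = 11                                            # tapes written this month
chunks = [random.randrange(TAPES) for _ in range(9)]  # tape index per chunk
print(f"{len(set(chunks))} distinct tapes needed:", sorted(set(chunks)))
```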
Hi all, we are implementing a new NetApp infrastructure. It will be composed of two clusters with SM-BC (SnapMirror Business Continuity), and we will use it to present LUNs to VMware. I can't find any information about compatibility with IntelliSnap. Any ideas? Thanks a lot
Hi team, we are looking to configure media agents in an active-active or active-standby setup with a SAN-attached disk library. The media agents are Windows. We would also like to know whether Commvault offers active-active or active-standby options.
Hi all! My company uses six MAs to create and store backups: one MA with separate storage for long-term retention off-site, another for local backups at a branch office site, and four MAs at the main site arranged in two two-node grids (MA1 & MA2 form one grid, MA3 & MA4 the other), sharing their libraries and DDBs. Local backups from the branch office are copied to the main site, and main-site backups are copied to the long-term site as DR backups. The MAs are physical at the main site and virtual elsewhere, and disk storage is used at all sites.

We are currently planning to replace our disk storage and the physical MAs at the main site, and of course it is a good chance to upgrade the OS on the MAs from Win2012R2 to Win2019. During the process, library content has to be moved from the old disk storage to the new one, and the DDBs from the old MAs to the new ones. One MA stores 40-60 TB of backup data, and of course I would like to do this with minimum downtime. I have found descriptions about library mov…
Hi Commvaulters, hope everyone is doing well. We have a new cloud library being set up by our storage team, and two media agents that will be able to use it. We want to set up high availability between the two MAs accessing the cloud library; after some research in the Commvault documentation I came across GridStor (alternate data paths).

I wonder if it's possible to share the same bucket between the two MAs (like an NFS share on Linux), so that if one of the MAs fails, jobs fail over to the second one. In the documentation I've seen that you configure MA1 to mount the volume, which lets it access the volume as a local disk, and then share the volume to MA2, which accesses it using UNC paths. In that specific scenario, doesn't it mean that if MA1 fails, MA2 also loses its access to the volume, since it's shared by MA1? All this is a bit confusing, since it's the first time we are trying to implement MA HA using…
Hello, we are seeing a very large random read load on our Hitachi G350 backup storage with NL-SAS disks. These random reads completely consume our backup storage performance. We have two G350s on campus and a third at a remote site; Commvault runs copy jobs between the three. The DDB is on local NVMe in the media agent, as is the index cache disk.

We ran several analyses, and Live Optics showed a daily change rate of 334.9%, mainly due to the Windows File System policy, for which we see a 2485.1% daily change rate. Does anyone know how the random read load could be reduced? Our disk backup is otherwise unusable. What steps could we take to optimize the Commvault configuration? [Screenshot attached] Thanks for your help!
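To put those figures in perspective, a back-of-envelope calculation (the protected front-end size below is a made-up assumption; the change rates are from the post):

```python
# Sanity-check arithmetic: a daily change rate far above 100% means the same
# data is rewritten or re-read several times per day, which on deduplicated
# disk tends to surface as random reads during copy and verification jobs.
protected_tb = 10.0      # HYPOTHETICAL front-end size of the FS policy
change_rate = 24.851     # 2485.1% daily change rate, expressed as a factor
daily_churn_tb = protected_tb * change_rate
print(f"~{daily_churn_tb:.0f} TB of churn per day against "
      f"{protected_tb} TB protected")
```

Numbers like that usually point at the subclient content (for example, churning log or temp directories inside the Windows File System policy) rather than at the storage itself.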
Hi guys, I would like to know whether there are recommendations for the block size of a cloud library. We have cloud storage in our data center and would like to use it for backup. On the storage we have the ability to choose the block size. Do we need to specify a block size, or keep the default (32 KB)? Note: for disk libraries we are used to formatting our local drives with 64 KB clusters, but we didn't find anything for cloud libraries. Thanks in advance. Best regards
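As rough intuition for why the underlying block size matters, simple arithmetic (128 KB is Commvault's default deduplication block size; the backend sizes are just examples, not a recommendation):

```python
# How many backend I/Os a single deduplication block generates at different
# storage block sizes. Fewer, larger I/Os generally favor backup workloads.
DEDUP_BLOCK_KB = 128
for backend_kb in (32, 64, 128):
    print(f"{backend_kb:>3} KB backend block -> "
          f"{DEDUP_BLOCK_KB // backend_kb} I/Os per dedup block")
```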
Do you see these errors for your jobs? When we updated from 11.20.32 to 11.20.60, we started getting cache database errors on various backup types: the FileSystem iDataAgent, NDMP backups, and others. The exact error we get is:

Error Code: [40:110]
Description: Client-side deduplication enabled job found Cache-Database and Deduplication-Database are out of sync.

I have a ticket open with support, but I am wondering whether the issue is unique to us or is happening to other customers as well. Thank you,
I have read a couple of articles on CommVault Online that say defragmentation of magnetic libraries is a good idea. Diskeeper, now DymaxIO, was listed as a certified product for online volumes. I am wondering whether others defrag their libraries for performance purposes, and what products they use. I have read in older articles that the native Windows defragmentation tool can be used, and that it should be done outside of backup hours (makes sense). Any feedback or information would be appreciated. Thanks
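For what it's worth, a minimal sketch of driving the native Windows defrag tool (the one the post mentions) from a script, so it can be scheduled outside the backup window. The drive letter is a placeholder for a library mount path volume.

```python
# Run the built-in Windows defragmenter against a library volume.
# /O performs the proper optimization for the media type, /U prints progress.
import subprocess

subprocess.run(["defrag", "D:", "/O", "/U"], check=True)
```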
Size on disk is 55.74 TB, but data written is 24.77 TB. Hi folks, I've been trying to figure this out for a few hours and I still haven't found anything wrong in the storage policy, disk library, or media agent properties. The backup jobs are also fine. I counted 10,800 jobs manually, just to be sure the size is correct: 24.77 TB of data is written. But how can it be that the size on disk is 55.74 TB? Has anyone had the same situation?
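One way to frame the investigation, as a toy accounting exercise (the two totals are from the post; the candidate explanations are general possibilities, not a diagnosis): "size on disk" counts everything sitting in the mount paths, while "data written" on a copy only counts what that copy's jobs wrote.

```python
# Toy accounting: quantify the gap that has to be explained by something
# other than the counted jobs of this one copy.
size_on_disk_tb = 55.74
data_written_tb = 24.77
gap_tb = size_on_disk_tb - data_written_tb
print(f"{gap_tb:.2f} TB to account for: other copies on the same library, "
      f"aged-but-not-yet-pruned dedup data, or orphaned volume folders")
```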
Hi there! I have VMware VMs backed up on-premises, with an auxiliary copy to an Azure cloud library. When I try to recover a VM whose data I assumed had already been transferred to Azure, I can see the bandwidth on the firewall ports increase, so I think this scenario is pulling the data from Azure. I'd like to recover data that is already in the Azure cloud library (transferred there by the auxiliary copy). Could someone help me with the steps? Thanks!
Hi, we just started using object storage to tier out data after 7 days. We created one bucket and added it to Commvault as a backup target. Then we changed the config and created another bucket, but forgot to delete the first one, and now Commvault is using both buckets to store data. How can I migrate the data already stored in bucket 2 (data path 1) into bucket 1 (data path 2)? After migrating the data I would like to delete the second data path and then remove the bucket on the object storage. Regards, Thomas
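If a raw object-level copy is acceptable, a hedged boto3 sketch is below (the endpoint and credentials are placeholders for the on-prem object storage; this only moves the objects, and Commvault's extra data path would still have to be retired through its own supported operations so the library metadata stays consistent).

```python
# Hedged sketch: server-side copy of every object from bucket2 into bucket1
# on an S3-compatible object store. Endpoint URL is a placeholder.
import boto3

s3 = boto3.client("s3", endpoint_url="https://<objectstore>")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket="bucket2"):
    for obj in page.get("Contents", []):
        # copy_object is server-side; objects over 5 GB need multipart copy
        s3.copy_object(
            Bucket="bucket1",
            Key=obj["Key"],
            CopySource={"Bucket": "bucket2", "Key": obj["Key"]},
        )
```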
Hello, the customer bought new MS SQL servers and is migrating to the two new servers.

Current scenario: 2 MS SQL servers (server 1: production, server 2: copy).
New scenario: 2 MS SQL servers with a new version of the OS and database (server 1: production, server 2: copy).

I need to replicate the backup job configuration with the same settings as the current backups: retention, backup jobs, DDB, schedules. Does anyone have a procedure or best practices for this?
Hello community, we are trying to migrate SAN storage to an S3 cloud library. Per suggestions, we followed these steps:

1. Configured a new global dedupe storage policy using the new S3 bucket and MA.
2. Configured new secondary copies in the existing storage policies, pointing to the new S3 dedupe storage.
3. Ran the aux copy.

We have a huge amount of data and contacted Commvault support to find out when the aux copy will complete; it has currently been running for more than 4 months. Support mentioned the following points:

- Our current configuration allows the selection and prioritization of new backups over older data.
- We are also configured to copy all data to the cloud, and we are not using dedupe for the aux copy.

How can we make sure we have an optimal aux copy configuration? Please share your inputs. Thanks in advance, Spartan9
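A back-of-envelope ETA check can help validate what support says about timelines; all numbers below are hypothetical and would come from the copy's pending-data view and the Job Controller's observed throughput.

```python
# Rough completion estimate for a long-running aux copy.
remaining_tb = 400.0             # HYPOTHETICAL data still to be copied
throughput_gb_per_hr = 500.0     # HYPOTHETICAL observed aux-copy throughput
hours = remaining_tb * 1024 / throughput_gb_per_hr
print(f"~{hours:.0f} hours, about {hours / 24:.0f} days, at current rate")
```

If the computed ETA is wildly longer than acceptable, that points at configuration (stream counts, dedupe on the destination copy, prioritization of new backups over old) rather than at simply waiting longer.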