Hello, the client needs to migrate an MS SQL Server database to a new server using Continuous Data Replicator. Has anyone done this procedure before and can help me? I installed SQL Server on 2 servers and added them to the CommCell. Current Commvault version: 11 SP13+. Do I need to upgrade Commvault before migrating? Current SQL version: 2012. The client is also asking me which version is best to migrate to: 2019 Standard or Enterprise?
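(Not Commvault-specific, but before cutting over it can help to confirm exactly what is running on each side. A minimal sketch using pyodbc; the driver string and the server names `oldsrv`/`newsrv` are placeholders of mine, not anything from the post:)

```python
# Minimal sketch: verify source/target SQL Server version and edition before a
# migration. Connection strings are placeholders -- substitute your own.
import pyodbc

servers = {
    "source": "DRIVER={ODBC Driver 17 for SQL Server};SERVER=oldsrv;Trusted_Connection=yes",
    "target": "DRIVER={ODBC Driver 17 for SQL Server};SERVER=newsrv;Trusted_Connection=yes",
}

for name, conn_str in servers.items():
    with pyodbc.connect(conn_str) as conn:
        row = conn.cursor().execute(
            "SELECT SERVERPROPERTY('ProductVersion'), SERVERPROPERTY('Edition')"
        ).fetchone()
        print(f"{name}: version={row[0]}, edition={row[1]}")
```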
Hi, I saw that Commvault introduced the CommServe on Linux feature in Feature Release 25, so I wanted to test it. I created the Linux package as described here: https://documentation.commvault.com/11.25/expert/2696_downloading_software_for_unix_linux_and_macintosh_computers_using_download_manager.html. I installed a fresh RHEL 8.3 and am trying to set it up as described here: https://documentation.commvault.com/11.25/expert/132361_installing_commserve_server_in_linux_environment.html. On my screen I don't see the "Create a new CommCell Group" option (screenshot attached). Could you help me with this issue? Thank you.
Hi, I've been plagued with this problem for a while; support has not been able to crack it yet either. I have 4 MediaAgents. Two use CBT with the crash-consistent backup option for IntelliSnap and work fine. The other two also use CBT but with the application-consistent (quiesced) backup option for IntelliSnap, and these MediaAgents will sometimes freeze up the Virtual Disk Service, so the backup copy fails for the remainder of the backup copies. When this happens I cannot open Process Manager on the MAs and end up having to reboot them to get things back on track. Not a lot of info, but chucking this one out there to see if anyone has had a similar experience.
Successful backups not protecting files: "The process cannot access the file because it is being used by another process"
The file system backups are showing as successful (no warnings), yet they fail to protect some of the files. I am wondering why the job is not completing with "partial success" and why it shows up as a VSS issue. Is that because there are no application-specific VSS writers for the application, and hence quiescing is not working its magic? [C:\Program Files (x86)\BigFix Enterprise\BES Client\__BESData\SiteData.db] The process cannot access the file because it is being used by another process.
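(Since the error is attributed to VSS, one quick check is whether all writers on the client report a stable state. A rough sketch of mine: it shells out to the standard Windows `vssadmin list writers` command and does its own parsing; it must be run from an elevated prompt on the client, and a stable writer list would point the finger back at the locked file itself rather than VSS:)

```python
# Rough sketch: run "vssadmin list writers" (built-in Windows command, needs an
# elevated prompt) and flag any writer that is not in a stable state.
import subprocess

out = subprocess.run(
    ["vssadmin", "list", "writers"], capture_output=True, text=True, check=True
).stdout

writer, problems = None, []
for line in out.splitlines():
    line = line.strip()
    if line.startswith("Writer name:"):
        writer = line.split(":", 1)[1].strip()
    elif line.startswith("State:") and "Stable" not in line:
        problems.append((writer, line))

print(problems or "all writers stable")
```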
Hi all. I am having issues with restore speed when restoring a VMware server. When a restore is done with "vCenter Client" set to the vCenter, restore speeds are slow. When it is set directly to the ESXi host, bypassing the vCenter, we see roughly 4x the restore speed. Can anyone explain this behavior? I thought the vCenter was only used for control data, not data movement. Regards, Anders
Hello, this might not be enough info, but I'm trying to restore a VM and keep getting this error:

Error Code: [91:32]
Description: Unable to register virtual machine [SERVER]. Possible reason includes datastore was not mounted correctly.
Source: CVMA01, Process: CVD

Later, only these errors occur:

Error Code: [32:392]
Description: Volume Operation in Progress: [One or more volumes are being processed by other operation. VolumeId ]
Source: CVMA01, Process: CVD

I have the job log files but can't really make sense of them; they're pretty much nothing but errors and failures.
Our weekly secondary AuxCopy has been stuck at 30% since this weekend (so it is blocking all the primary disk-to-disk incremental copies), with the two error messages below (screenshots not shown). Thinking it might be a port communication issue between the Media Server (S01190), where the tape library is attached, and the CommCell Server (S02116), I ran the following port checks between the two servers:

Telnet from the Media Server (S01190) to the CommCell Server (S02116):
- Port 8400: OK
- Port 8401: OK
- Port 8403: OK

Telnet from the CommCell Server (S02116) to the Media Server (S01190):
- Port 8400: OK
- Port 8401: Not OK
- Port 8403: Not OK

Now, before I speak to our Network/Security administrators, who recently installed SentinelOne AV on both of the above servers, I'm wondering whether I'm heading in the right direction and whether I have done all the port checks. Thanks, Kelvin
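(For what it's worth, the same checks can be scripted so they're easy to re-run after any SentinelOne exclusions are put in place. A small sketch using only the Python standard library; the hostnames and ports are the ones from the post, and it should be run once from each server:)

```python
# Scripted equivalent of the telnet checks: attempt a TCP connect to each
# Commvault port on the remote server and report the outcome.
import socket

def check(host: str, port: int, timeout: float = 5.0) -> str:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "OK"
    except OSError as exc:
        return f"Not OK ({exc})"

for host in ("S01190", "S02116"):
    for port in (8400, 8401, 8403):
        print(f"{host}:{port} -> {check(host, port)}")
```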
I started looking at MFA in Command Center and am baffled, as it is flawed. If my domain account has been compromised, I would expect the second factor to be the second line of defence. But no: you can request a new PIN, which gets sent to the e-mail address of your compromised domain account. I then looked to see if I could amend my account by adding an external e-mail address, but LDAP pulls this from the domain and it cannot be edited. By editing the e-mail script we can omit the PIN, but I don't think this has been thought through by Commvault. Considering that backups are supposed to be the last line of defence against a cyber attack, the two-factor setup serves only to delay an attacker by the time it takes SMTP to deliver a new PIN.
Newbie alert! Hello everyone, I am new to Commvault. I took over an environment that has more than 200 storage policies. These policies were created as a result of plans created using Command Center. We have more than 2,000 VMs, 200+ file system clients, and databases. I was wondering how I should approach the cleanup. Is it a good idea to use plans for larger environments, given that a single plan automatically creates multiple configurations? Thank you
Working with DB2 on AIX for our SAP AFS application. Approx. 4 TB database split over 4 logical volumes in an IBM Virtual I/O environment on IBM Power. This backup takes 20 to 26 hours. Below are the statistics from db2diag after the backup. I'm not sure how to read this, but I think the WaitQ is quite high. Hope anyone can shed some light on this.

```
2021-03-13-01.02.01.804357+060 E411673A4491       LEVEL: Info
PID     : 4718886        TID : 61753              PROC : db2sysc 0
INSTANCE: db2prd         NODE : 000               DB   : PRD
APPHDL  : 0-12866        APPID: *LOCAL.db2prd.210311220047
AUTHID  : DB2PRD         HOSTNAME: lpdd001
EDUID   : 61753          EDUNAME: db2agent (PRD) 0
FUNCTION: DB2 UDB, database utilities, sqluxLogDataStats, probe:395
MESSAGE : Performance statistics
DATA #1 : String, 3994 bytes
Parallelism       = 40
Number of buffers = 40
Buffer size       = 2494464 (609 4kB pages)

BM#     Total       I/O      MsgQ     WaitQ   Buffers    MBytes
---  --------  --------  --------  --------  --------  --------
000  93654.87  47851.71  43854.53    455.97   1032152   2450793
001  93638.40   5314.21  28579.66  58494.24    309031     73353
```
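(For what it's worth, a quick back-of-the-envelope on the two BM rows quoted above, with the numbers copied straight from the log, makes the imbalance visible:)

```python
# Quick arithmetic on the two buffer-manipulator rows from db2diag: what share
# of each BM's wall time went to WaitQ/MsgQ, and its effective throughput.
rows = {
    # BM: (total_s, io_s, msgq_s, waitq_s, mbytes)
    "000": (93654.87, 47851.71, 43854.53, 455.97, 2450793),
    "001": (93638.40, 5314.21, 28579.66, 58494.24, 73353),
}

for bm, (total, io, msgq, waitq, mbytes) in rows.items():
    print(
        f"BM {bm}: WaitQ {waitq / total:6.1%} of total, "
        f"MsgQ {msgq / total:6.1%}, "
        f"~{mbytes / 1024 / (total / 3600):.0f} GB/h"
    )
```

On those two rows, BM 001 spends roughly 62% of its wall time in WaitQ while moving only ~72 GB, whereas BM 000 moves ~2.4 TB at under 1% WaitQ. That looks less like raw device speed and more like the data being unevenly distributed across the 40 parallel streams, so some streams sit idle waiting for work.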
Hi @Jordan @Mike Struening, I have a question on this topic. I tried to delete an MP following your advice, but it also sent a mail to the admin for authorization, and I got the error below. ERROR CODE [19:857]: waiting on user input [Delete Mount Path [[cvbackup] H:\P_QNAP (MQNWX2_02.08.2021_13.16) from Library - DiskLibQnap] requested by [UMO\mjosko.domadm]]. View Contents returns an empty list, yet there seems to be data on the disk, as Size on Disk indicates several to several hundred GB (similarly the size of the folder on disk). Despite the empty View Contents list, the data on the disk was only deleted after some time, and as far as I can see, something is still left. What does the data erasure mechanism depend on?
On CV 11.20.9 we backed up an Oracle DB from SPARC Solaris to a MediaAgent on RedHat 8.4. If we set "Optimized for concurrent LAN backups" on the MA, the speed is 1,800 GB per hour; if we unset it, the speed is 5,500 GB per hour. All other settings are identical. What changes in the MA configuration when "Optimized for concurrent LAN backups" is set?
Hello, I have an old TS3200 tape library with LTO4 tapes and a new HPE tape library with LTO7 tapes. I would like to copy data from the LTO4 to the LTO7 tapes. I worked according to the Media Refresh documentation (commvault.com). I enabled Media Refresh on the storage policy copy, marked the media for refresh as Full and Pick for Refresh, ran a Media Refresh job, and chose the Start new media tab. Now I get the error "There are not enough drives in the drive pool. This could be due to drive limit set on master pool if this failure is for magnetic library." I don't know where the problem is. Is there anything else to do for the Media Refresh operation? Best regards, Elizabeta
Hi gurus, one of my customers got a restore request for a folder that was accidentally deleted from a VM 3-4 days ago. (We have a weekly full and daily incremental backup of the VM using IntelliSnap.) Surprisingly, when he browses for the folder with the Latest or Date Range option, it does not appear in the browse results, but when he browses from backup history (incremental or last full job), he does see it there. Ideally, he should also see it when browsing with the Latest or Date Range option. Am I right? Also, I don't see any option like "show deleted files", which we have in the Endpoint solutions. Thanks, Anuj
Hello, I need to create a backup job with all full backups going to tape in a new physical library that the customer has purchased. Today all backups are on disk (a File Library). After the new jobs to tape have been successfully executed (3 backup jobs to tape will be done), I need to erase the old full backup on disk and run a new full, because the new one will be smaller and will free up more disk space. I'm waiting, guys, for the best practices to do this. Another question: should I create the storage policy for the library with permanent retention and include all tapes in the same SP, or create separate ones, for example for Database, Exchange, etc.? @Mike Struening
Some facts:
- Version 11.24.7
- SharePoint 2016 (on-prem)
- SP DB backup speeds: 15-25 GB/hr (so not superfast, but at least it finishes in a reasonable time)
- SP Documents backup speeds: 1-1.5 GB/hr (so basically super slow)

Some background information: I am a storage guy who was recently put in charge of the backup solution due to people leaving. While I've worked with other backup solutions before, that was 10+ years ago, so my backup skills are a bit rusty compared to my storage skills; bear with me if I need things spoon-fed :-) Also, the Documents part of SharePoint has been split into about 10 different subclients, and this performance issue affects all of them. For other backups run by Commvault I see very different speeds: for instance, full backups of some MSSQL servers reach 500-3,000 GB/hr, so I know the backend of the backup solution can handle far higher speeds than SharePoint is giving us. Now, if I try to drill down a bit on one…
Hello, we have created a test Azure Blob library which is to be used for a deduplicated secondary copy. There is an immutability policy set on the container. Per the Commvault documentation, we set the container retention to twice the storage policy copy retention, and set "create new DDB every N days" in the DDB properties to the copy retention value. Across backup cycles, sealed DDBs remain that no longer reference any job (all expired). Then at some point they are automatically removed (and then their baseline from the cloud storage). These baselines consume a great deal of cloud space (and cost); there are 3 to 4 baselines in the cloud during the backup cycles. Does anybody have experience with cloud library deduplication (with immutable blobs)? Is more than 3 times the space really necessary for the backup? Which process in Commvault decides when a sealed DDB is removed? After the test we would like to give a realisti…
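(I can't speak to which Commvault process prunes the sealed DDBs, but the 3-4 coexisting baselines are at least consistent with the arithmetic of the recommended settings. A toy model, entirely my own assumptions: copy retention R = 30 days, a new DDB sealed every R days, container immutability 2R, and a baseline only erasable once the immutability on its newest block lapses:)

```python
# Toy model: count how many dedupe baselines coexist on the cloud library over
# time, given copy retention R, DDB seal interval R, and immutability 2R.
R = 30                      # storage policy copy retention, days (assumed)
seal_every = R              # "create new DDB every N days"
immutable_for = 2 * R       # container immutability window

for day in range(0, 181, 15):
    live = 0
    for k in range(0, day // seal_every + 1):
        start = k * seal_every                     # cycle k starts writing
        last_write = min(start + seal_every, day)  # DDB sealed after one cycle
        if start <= day < last_write + immutable_for:
            live += 1                              # baseline still locked/in use
    print(f"day {day:3}: {live} baseline(s) on the cloud library")
```

Under these assumptions the steady state is 3 baselines (the active one plus two sealed ones still inside their immutability window), so seeing 3-4 baselines' worth of capacity on the container is roughly what the settings imply rather than a leak.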
I am working on some performance issues with my backups. Looking at the media/proxy agents during backups, I am seeing what I think is high latency on the loopback adapter. In the screenshot below you can see in Windows Resource Monitor that the latency for vsbkp, cvd, and other processes is around 30 ms. The loopback adapter is internal to the server, never touching an Ethernet switch or wire, so I would think it should be much lower, right? I am wondering what kind of latency other folks are seeing on their media agents and/or proxy agents during backups. Thank you ahead of time!
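(One way to sanity-check the raw loopback number independently of Resource Monitor is a quick TCP echo round-trip test; on a healthy box the median should be well under a millisecond, so a genuine 30 ms on loopback would be remarkable. A self-contained sketch of mine, standard library only, Python 3.8+:)

```python
# Loopback round-trip probe: start a local TCP echo server on 127.0.0.1 and
# time small request/response pairs against it.
import socket, statistics, threading, time

def echo_server(srv: socket.socket) -> None:
    conn, _ = srv.accept()
    with conn:
        while data := conn.recv(4096):
            conn.sendall(data)

srv = socket.create_server(("127.0.0.1", 0))   # OS picks a free port
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

with socket.create_connection(srv.getsockname()) as cli:
    samples = []
    for _ in range(1000):
        t0 = time.perf_counter()
        cli.sendall(b"x")
        cli.recv(4096)
        samples.append((time.perf_counter() - t0) * 1000)  # ms

print(f"median {statistics.median(samples):.3f} ms, max {max(samples):.3f} ms")
```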
So, another question regarding scheduling backups. Our environment is mostly Windows servers, with a few Linux servers and 1 MSSQL server. Is the following an accurate assumption? If I create a new client group, e.g. "Windows Server Agents", and associate all of our Windows servers with it, this seems a much cleaner way to create a backup policy than creating a separate backup schedule for each Windows server individually. What is recommended? Also, if I create this client group backup schedule for Windows servers, I notice it doesn't let me choose which drives I want to back up, nor the System State option. I only see those options when I set up a backup schedule for an individual client, where there is a Content tab that lets me select all local drives or browse, plus a checkbox for System State. Does the client group backup schedule automatically back up all local drives, i.e. the C:\ drive, and if there happens to be a D:\, will it back that up as well?