I have some Aux copies configured like this, for example: the Aux copy is set to use 8 streams via “Combine source data streams = 8”. Multiplexing is not enabled. When it runs (Properties → Streams tab) it shows 8 “Destination Streams” running (“Number of readers in use = 8”). BUT when I look at Properties → “Media not Copied” tab, it shows 9 streams in the “Stream No./Sequence” column. What is “0/1” — Stream Number 0? And if so, are there actually 9 streams to copy? I wanted to make sure nothing was misconfigured somewhere and that Stream No. 0 wasn’t some default to handle an oddity/overflow of data or something strange. I was under the impression the streams were to be combined into 8 (all data broken up into 8 chunks to be streamed/read/copied), yet the UI tells me I have 8 streams “to copy” plus another one named 0, though the job actually only runs 8 streams/readers. For reference, here are the active streams of the same job showing 8 readers/streams.
I have deployed the FileSystem package to approximately 80 laptops remotely from the console and did not select the “Configure as laptop” option. Now they appear as servers, and I want them to appear as laptops. How can I change the package configuration to laptop? When I try to add the package again, the “Configure as laptop” option does not appear.
Hello community! I’m running into an issue with an AuxCopy job for one of my Plans in Commvault. The job is a selective secondary copy, which should copy weekly Friday fulls to a QNAP repository. The destination library is a QNAP library, which was added using a UNC path (\\qnap.dns.name\commvault\) and a dedicated user with full access permissions to this share. I have also configured an MCSS cloud library and created the same selective secondary copy with the same configuration, and for cloud storage everything works fine. The QNAP firmware is the newest available (we updated it a few days ago, believing this would solve the problem). We also disconnected every other share from the QNAP (it is now dedicated storage for Commvault, for keeping additional copies of backups). The job ends with the copy as Partially Copied; when I manually run another AuxCopy job it completes without any issues and sets the copy status to “Available” — but running an additional aux copy job manually is not a solution.
Hi,

File system backup jobs for a MediaAgent fail with:

Error Code: [19:1109]
Description: Please check the log files for this job for more details.

My troubleshooting steps that didn’t work:
- catalog migration to a different path (once from the MA properties, and once using the change index server configuration workflow)
- trying optimized scan
- upgrading the CommCell to the latest maintenance release, 11.24.48

The ArchiveIndex logs:
16120 4530 05/27 20:06:24 680118 Begin Archival of Remote Loose Files (if available) and Action Logs, Argv [-j 680118 -a 2:457 -t 1 -jt 680118:11:12:0:32808 -ab 0 -parent 1 -c 363 -maxcolnum 3 -numcol 2 -numfolder 58 -incimage -incimage -idx2 -TJ -idx2 -new_scan -slt -hloff -r 1653668189 -c 0 -maxcolnum 3 -numcol 2 -IgnoreEarlyStubBackup -c 0 -maxcolnum 3 -numcol 2 -IgnoreEarlyStubBackup -c 0 -maxcolnum 3 -numcol 2 -IgnoreEarlyStubBackup -c 0 -maxcolnum 3 -numcol 2 -IgnoreEarlyStubBackup -c 0 -maxcolnum 3 -numcol 2 -IgnoreEarlyStubBackup -c 0 -maxcolnum 3 -numcol
Hi! I’m used to old-style storage/schedule policies and client/subclient associations. But we’re told that Plans are the future, and we’ll have to move to Plans for sure. So I’ve deployed a few new MAs to protect some locations inside my company, where I have to protect VSA plus file-level backups. I have local disk backup, then auxcopy to tape and auxcopy to cloud from the primary. The MA is physical Linux; since we have Windows VSA clients, I have also deployed a Windows VSA proxy. Then I created storage pools for the local disk, tape, and cloud copies. To simplify things and test Plans, I created a Plan per location with standard details: 1-day RPO, 1 month of retention, my backup time slots, and full time slots (confusing, with synthetic fulls out of control, but we’ll discuss that later in this thread I guess). I created a VSA VM group that points to my VMware location (i.e., it selects all the VMs in that location, including my VSA proxy VM). For some VMs, I was also asked to provide file-level backup of
Hi Commvaulters, can someone advise me on the network ports that need to be open in order to perform an aux copy between two MAs? We installed a new MA at a remote site, and we want to open only the needed ports between it and the CS (for communication) and between it and the MAs located at the main site. Please note that we are running CV version 11.24. I know that the main ports are 8400 for communication and 8403 for data transfer. Are there any others that need to be opened? We want to minimize port openings in order to fully secure the remote MA. Regards.
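Once the firewall rules are in place, it helps to verify from the remote MA that the ports the question mentions (8400/8403) are actually reachable. This is a generic TCP connect test, not a Commvault tool; the host name below is a placeholder you would replace with your CommServe or MediaAgent:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and DNS failures alike.
        return False

if __name__ == "__main__":
    # "cs.example.com" is a placeholder; substitute your CommServe / MA hosts.
    for host, port in [("cs.example.com", 8400), ("cs.example.com", 8403)]:
        state = "reachable" if port_open(host, port) else "closed/filtered"
        print(f"{host}:{port} {state}")
```

Note this only proves a TCP handshake succeeds in one direction; run it from both sides if your firewall rules are asymmetric.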
Hi guys, we recently upgraded our CommCell environment. For one of the clients the upgrade completed successfully and the new version shows up in the CommCell console, but on the client side, when checking the Commvault services status, we found that the old version is still shown compared to the other clients. We tried restarting services, but the issue remained. Any suggestion to clear this issue? Is there some sort of cache on the client to be cleared, or anything else? Regards.
All my MediaAgents have the same settings in MediaAgent → Properties → General tab except one, which has “Validation on Network” unchecked (the others have it checked). I found the docs for this setting here: MediaAgent - Online Help (commvault.com), and it’s one of those where the definition basically rewords the setting (not a lot of detail). Is there any reason anyone would *not* want to check “Validation on Network”? I’m not sure if it was an oversight or done deliberately for this one MediaAgent. I’m fairly sure it was an oversight or forgotten about, but I wanted to find out whether “Validation on Network” is set by default (I’m on SP16), or whether people turn it off because it’s CPU-intensive or mostly unnecessary unless you need protection from bad/choppy network connections. I would guess that if “Validation on Network” is unchecked, something else would catch invalid data before it’s written to disk?
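For intuition about what an in-transit validation option like this generally does (this is a generic illustration, not Commvault’s actual implementation): the sender attaches a checksum to each chunk, and the receiver recomputes it before accepting the data, so corruption introduced on the wire is caught before anything is written to disk. A minimal sketch using CRC32:

```python
import zlib

def make_frame(payload: bytes) -> bytes:
    """Sender side: append a 4-byte CRC32 of the payload to the chunk."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_frame(frame: bytes) -> bytes:
    """Receiver side: verify the CRC before accepting the payload."""
    payload, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    if zlib.crc32(payload) != crc:
        raise ValueError("CRC mismatch: data corrupted in transit")
    return payload
```

The per-chunk checksum computation is the CPU cost such a setting trades for corruption detection; with it disabled, only later checks (such as storage-side verification, if configured) would catch bad data.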
Hey folks, hope everyone is having an amazing day! I was hoping you could help us out by letting us know what your preferred backup window is, as well as your patch window/schedule. We’re working on something internally, and some quick validation from the community would be really helpful. I created a quick 6-question survey, and all responses are completely anonymous. https://forms.office.com/r/NkFPDf1M0k
Hi, I have an issue after upgrading the CommServe and MediaAgents. The customer environment has a CommServe and 2 MediaAgents; I upgraded the CommServe to SP24 and both MediaAgents to SP24. The CommServe completed the upgrade fine, and one MediaAgent completed fine. The second MediaAgent displays that it can’t communicate with the CommServe and that the CommServe name needs to be changed. In the CommCell the second MediaAgent shows offline, and Check Readiness reports a communication problem. So I checked network ping, telnet, and the hosts file. The CS can ping the second MediaAgent by <ip> and <hostname>, and the MediaAgent can ping the CS by <ip> and <hostname>. CS = OK, MA1 = OK, MA2 = needs the CommServe name changed. Has anyone met this issue before? Please advise me. Thank you.
Good morning to all. On a monthly basis I run auxiliary copies to tape, ending with a total of 11 tapes. I wanted to restore about 675 MB, and it asked me for 9 tapes to do it. Is this because the data has been distributed across 9 tapes? Shouldn’t it be on one tape only, since an L7 (LTO-7) tape has a large capacity? Could it be due to the “Use Scalable Resource Allocation” option? Thank you very much for your help. Best regards, Johana 😀
Good morning all. I wanted to confirm that initiating a restore using Command Center won’t give you the option to change the data path (MediaAgent and library)? We’re allowing business units to manage their own operations, and this is one requirement that’s come up; it looks like we will have to get them to use the Java console instead? Thanks. Mauro
Hi, I’ve been running some tests with a customer lately, and we came across a very troubling situation during an out-of-place restore of an Oracle database. Specifically, the out-of-place restore tries to delete the source database’s redo logs. If the source database is running, the delete fails; but if the source database is closed, it succeeds, damaging the source database. You can find an alert.log snippet below (source DB shut down):

Deleted Oracle managed file +DATA/BMSTST1/ONLINELOG/group_3.1951.1090834209
Completed: alter database rename file '+DATA/BMSTST1/ONLINELOG/group_3.1951.1090834209' to '+DATA/BMSTST2/ONLINELOGS/redo_1.log'
alter database rename file '+DATA/BMSTST1/ONLINELOG/group_2.1950.1090834207' to '+DATA/BMSTST2/ONLINELOGS/redo_2.log'
Deleted Oracle managed file +DATA/BMSTST1/ONLINELOG/group_2.1950.1090834207
Completed: alter database rename file '+DATA/BMSTST1/ONLINELOG/group_2.1950.1090834207' to '+DATA/BMSTST2/ONLINELOGS/redo_2.log'
alter database rename file '+DATA/BMSTST1/ONLINELOG/group_1.1949.1090834207'
Hi, I’ve been trying to run Duplicate Oracle Database using a tape storage policy copy, but it fails during the archive log restore with “Error while restoring backup piece”. Since this is a selective copy, only weekly selective fulls are present on the media. I’ve been looking for answers in the RMAN log, but what I see there doesn’t make sense. First of all, the last SOF (selective online full) backup present on the tape copy is job 47104. Looking at the RMAN run block, I can see the SET UNTIL clause is set to the correct time, a couple of minutes before job 47104 ended. It does restore all the required datafiles, but during the archive log restore it tries to restore from backup pieces of newer jobs which, since this is a selective copy, are not present on the media. It then fails over to the previous job until it tries to restore from the job that is actually on the media (47104), but this time it fails permanently. The SCN at which the restore fails is lower than the Next SCN number in the job properties, so it
Hi everyone, greetings. We are planning to migrate the CommServe from a Windows Server 2012 OS to 2019. The CommServe service pack is 11 SP9+, and we are planning to upgrade to 11.24 with the latest maintenance release. The question is: since the CommServe is running on the older 2012 OS, can we upgrade the CommServe to the latest version on the same OS before migrating, or do we need to migrate the database to the new hardware first and then upgrade the service pack? Thanks and regards.
The REST API has the ability to list credentials and to get details for a credential, but it cannot retrieve the actual credentials? This seems incomplete/pointless. Am I missing something? I am trying to use PowerShell to make some changes outside of Commvault, but the Credential Manager API doesn’t seem to be able to return the actual credentials. https://api.commvault.com/#7dc77320-bdce-4591-95c4-ccd3325e2094
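For reference, the listing side of this works roughly as sketched below (shown in Python rather than PowerShell for brevity). The base URL and the `/CommCell/Credentials` path are assumptions to be checked against the API reference linked above for your service pack; only the `Authtoken` header pattern is standard across the Commvault REST API. As the question observes, such responses contain credential metadata only, which is consistent with credential stores that treat secrets as write-only through the API:

```python
import json
import urllib.request

# Placeholder; substitute your CommServe web service URL.
BASE = "https://webconsole.example.com/webconsole/api"

def build_request(path: str, token: str) -> urllib.request.Request:
    """Build an authenticated GET request for the Commvault REST API.
    The Authtoken header carries the token returned by the /Login call."""
    return urllib.request.Request(
        BASE + path,
        headers={"Authtoken": token, "Accept": "application/json"},
    )

def list_credentials(token: str) -> dict:
    """Fetch credential records visible to the token's user.
    NOTE: '/CommCell/Credentials' is an assumed endpoint path, used here
    only to illustrate the call shape; verify the exact route in the docs."""
    req = build_request("/CommCell/Credentials", token)
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

If you need the secret itself elsewhere (e.g. in PowerShell automation), the usual approach is to store it in your own vault rather than trying to read it back out of Commvault.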
Hello! Is it securely possible for a CV environment for servers, which is not in Active Directory and uses one set of MediaAgents, to share those MediaAgents with another CommServe which IS in AD, in order to facilitate backing up workstations? How hard would this be? Things I’m wondering about: We could just create a new library on the MediaAgent dedicated to workstations. There are actually two MAs, one at each of our locations, but let’s keep this simple for now. How susceptible do we think the server libraries are to hacking attempts via CV? Is the only communication between CommServes, workstations, and MediaAgents TCP 8000-8006? The CommServes for server backups are segregated from “everybody else” apart from a few physical servers and ESX/VMs. If we are hacked on the workstation side (let’s assume someone gets into an admin account and runs rampant), can they harm the server backups in another library on the same MediaAgent? Thanks for any input, and please let me know if I’
Hello, can I know the process or algorithm the VSA coordinator proxy node uses to load-balance across the VSA proxy servers in the list? Any link or explanation would be helpful. I have seen cases where 20-25 VSA proxy nodes are used in very large environments, so I would like to know how the proxy nodes are spread across the VMs during the backup process. Thanks.
Hi fellow Commvaulters, we launched a restore of an OpenStack instance, and the job failed with the following error: The job overwrote the instance, which means it is now deleted, and we are not able to restore it. Any suggestion as to what could be the root cause of the error? Kind regards.