Why can I not find my end-of-year backup?
After the end-of-year schedule ran, we lost all of that day's backup data. This happened to SQL DB jobs; the jobs selected to run an auxiliary copy were also cleared, and the history did not show any job having run that day. Why? I expected to see the history of the SQL backup jobs that ran on 2021/12/30-31, but to my surprise all job history was wiped out.
Commvault Expert Certification
Hi guys, I have completed the Commvault Engineer certification. Since I have 7+ years of experience with Commvault, I passed the exam on the knowledge I have gathered over that time. Now I would like to prepare for the Commvault Expert exam, but I couldn't find any self-paced training or reference document for it. Can anyone suggest a way to prepare for the Commvault Expert exam, and share your personal experience if you have already attempted it? Thanks, Mani
DNS restore active directory
Hello, I want to know: if I remove one DNS zone, how can I restore it via AD? Should I check DC=DomainDnsZones or DC=ForestDnsZones, and will the full DNS zone then be restored? For example, I want to remove RestoremePlease (see attachment); how can I restore it via an AD restore?
Oracle jobs start over after a failed backup since we upgraded to Commvault version 11.25.14
If the job is at 80%, Commvault starts the job over again. If I look in the log, the backup is finished on the Oracle side, but Commvault doesn't receive the message that the job is done. This started after we upgraded to the new version 11.25.14. Anybody with the same issue?
Decreased Throughput with Qumulo
Recently we added Qumulo to our environment and are backing it up. What I have observed is that since then the throughput on all jobs has significantly decreased (not sure if it's related, but who knows). We are working with Support, but I wanted to see if anyone else may have run into this issue with Qumulo. Job ID 752435 (incremental backup) took over 11 hours to back up 154.41 GB. Job ID 752486 (full backup) took 7 hours 13 minutes to back up 12.81 TB. The above jobs do not make sense to me considering the difference in size and the amount of time taken to back up. Any help will be greatly appreciated. Thank you, Community!
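For comparison, the effective throughput of the two jobs can be worked out from the figures in the post (12.81 TB taken as 12,810 GB):

```python
def gb_per_hour(size_gb, hours):
    """Effective throughput in GB/h."""
    return size_gb / hours

# Job 752435: incremental, 154.41 GB in a little over 11 hours
incr = gb_per_hour(154.41, 11.0)
# Job 752486: full, 12.81 TB in 7 h 13 min
full = gb_per_hour(12.81 * 1000, 7 + 13 / 60)

print(round(incr, 1), round(full, 1))  # 14.0 1775.1
```

So the incremental moved roughly 14 GB/h against roughly 1,775 GB/h for the full, a gap of over a hundredfold, which does suggest the incremental's time is going somewhere other than moving data (scan or per-file overhead, for instance) rather than raw storage throughput.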
DDB and Index Cache restore failing
Hi, good day. There was a power outage, and I lost the SAN LUNs I had provisioned for the DDB, the index cache, and even the storage repository. They were initially mapped/zoned to the media agent server. Unfortunately, the engineer on site provisioned new storage for the DDBs and index cache, changed the drive letters, and reassigned the partitions. I have brought back the LUNs and the backup files within are still intact. However, the location specified in the media library is offline, and I can't restore anything from it. Please, I need help, because we need to restore a file server.
Custom Dashboard configuration issue
Hi all, I'm creating custom dashboards in Metrics Server for various business units. Creating them is not a problem and all the info is there. I'm struggling when I want the view to be specific to, for example, client groups or storage policies. When I edit the report that the tile uses and save the view, the info is correct. I then deploy that report to the dashboard, but once refreshed the tile brings back the information for the entire landscape. I assume I'm just doing something wrong when creating the custom reports? Regards, Mauro
NTP on Hyperscale
Hi, I have two questions related to the same issue. We have a HyperScale setup with 3 nodes. When we look at the mount path we have 3 shares, of which 2 are online and the 3rd offline. Validating this share returns the following message:

Failed to check cloud server status, error = [[Cloud] The request time is too skewed. Message: The difference between the request time and the server's time is too large. Resource: /*******?delimiter=%2F&prefix=*******%2F Extra Details: RequestId 1643106149072049 ]

When I ran timedatectl, we found that one node was a little over 15 minutes behind and the other 13 minutes behind. I think 15 minutes is reason enough to take the share offline for that node. My questions are:

- Can/may I set the time with the command "date --set '<correct time>'"? Does this come with a risk? Do I need to stop the Commvault services first, for example?
- Are HyperScale nodes not configured with an NTP server by default, the same as the CommServe for example?

All my Linux knowledge is from Google.
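For reference, S3-style object storage endpoints typically reject any request whose timestamp differs from the server clock by more than about 15 minutes, which matches the behaviour described (the node 15+ minutes behind fails, the one 13 minutes behind still scrapes by). A minimal sketch of that skew check, assuming the common 15-minute signing limit rather than anything Commvault-specific:

```python
from datetime import timedelta

# Common S3-style request-signing limit: clocks more than ~15 minutes
# apart trigger "request time is too skewed" errors.
MAX_SKEW = timedelta(minutes=15)

def request_would_be_rejected(clock_offset):
    """True if a node whose clock is off by `clock_offset` would trip
    the skew check."""
    return abs(clock_offset) > MAX_SKEW

node_a = timedelta(minutes=15, seconds=30)  # the node taken offline
node_b = timedelta(minutes=13)              # still (barely) accepted

print(request_would_be_rejected(node_a))  # True
print(request_would_be_rejected(node_b))  # False
```

On the node itself, timedatectl (and chronyc tracking, if chrony is installed) is the usual way to confirm whether NTP sync is active; letting an NTP daemon correct the clock is generally safer than stepping it by hand with date --set while services are running.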
Commserver timezone picking wrong time
Hello, I noticed a small issue in one of my schedules. My CommServe is in the Helsinki time zone, UTC+02:00. I created a replication schedule to run every day at 12:00 AM, but I noticed the job was running at 10:00 AM local time. The screenshots below were captured at 11:00 AM local time; they show the next job in 23 hours (i.e. 10:00 AM local time). When I change the time zone explicitly to UTC+02:00, it picks the correct time. So it looks like the CommServe is identifying itself in a different time zone. I validated the OS time on the CommServe and it is correct (UTC+02:00). Any ideas? I am concerned that this may mess up backup times.
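When a schedule's wall-clock time is interpreted in the wrong zone, the run shifts by exactly the offset between the real zone and the assumed one. A small illustration of the mechanism with Python's zoneinfo (Europe/Helsinki taken from the post; UTC stands in as a hypothetical wrong zone, not necessarily the one the CommServe picked):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# The schedule's intended wall-clock time in the CommServe's real zone.
helsinki = ZoneInfo("Europe/Helsinki")          # UTC+02:00 in winter
intended = datetime(2022, 1, 26, 0, 0, tzinfo=helsinki)

# The same wall-clock time as interpreted by a server that believes
# it is in UTC.
misread = datetime(2022, 1, 26, 0, 0, tzinfo=ZoneInfo("UTC"))

shift_hours = (misread - intended).total_seconds() / 3600
print(shift_hours)  # 2.0 -> the job lands two hours late in local terms
```

The size of the shift you observe (10 hours) would point at whichever zone the CommServe actually believes it is in, so comparing the OS zone with the time zone configured on the CommServe client entry itself is worth a look.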
SharePoint SQL Server Service Account: Not Ready
I have a problem with my SharePoint setup: one SQL server hosting the SharePoint DBs (SQL backup agent installed), and one SharePoint server where the SharePoint backup client is installed. Now I get this error:

19956 1 01/25 08:15:22 ### CVSPPermissionCheck+<>c__DisplayClass21_0 <CheckSharePointSQLServerAccount>b__0 - SQL Server: SPSQL instance: SP2017 User Account: NT Service\SQLAgent$SP2017 does not have full permissions to job results and log files folders
19956 1 01/25 08:15:22 ### CVSPPermissionCheck CheckFolderAccess - Exception System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated. at System.Security.Principal.NTAccount.Translate(IdentityReferenceCollection sourceAccounts, Type targetType, Boolean forceSuccess) at System.Security.Principal.NTAccount.Translate(Type targetType) at CVSPCompatibilityCheck.CVSPPermissionCheck.CheckFolderAccess(String username, String path)
19956 1 01/25 08:15:22 ### CVSPPermissionCheck CheckFolderAccess - Except
PostgreSQL index files restore
Hi all, I am running a PostgreSQL restore and the DBA has mentioned that the index files weren't restored in this particular restore process. I'm doing a table-level restore to another DB instance (out of place). He mentioned that a restore completed in December did have the index files restored. I'm a bit at a loss, as there is no option to allow this from what I can see. I also followed the identical restore process to the one done in December, so I'm a little stumped. Not sure if there is a simple solution to this? I am happy to log a support ticket but thought I'd ask here first. Thanks in advance. Mauro
Live Sync Continuous Replication
I'm very curious whether other companies have adopted Continuous Replication within their VMware environments. We are a rather large RP4VM shop and there are many… challenges. I've watched CR mature over time and we're at the point where we're ready to start testing at scale. We have roughly 7,000 VMs which need to be replicated (no point-in-time recovery, just the latest recovery point) and I'm curious how others have configured their environment to support 24x7 replication (absolutely no downtime).
Restore from tape in a foreign Commcell
Hello, I'm sure it's not possible to restore from tape in a foreign CommCell without a DR backup applied to the new CommCell, but I'm not able to find this information in the Commvault documentation. I'm sure I have seen it before. Could you help me find the link? Thanks in advance and best regards, Gilles
1-Touch for SUSE Linux on IBM Power?
Hi, the documentation isn't clear about whether this is supported or not. I have a customer with OS = SLES 15, platform = ppc64le, and we are trying 1-Touch recovery of those servers. I have used the DVD from the Store (1-Touch Linux Live Boot Disc, DVD4_R11B80_SP24.iso); it boots on Intel but not on a ppc64le platform! We are also receiving this warning since we enabled 1-Touch recovery at the subclient level:

Error Code: [6:966] Description: System state backup failed : [Failed to populate ReaR conf file. Check sr.log/sr_post_backup.log for details.] Source: sappcal101, Process: sr
NDMP backup of NetApp SVM: required ports
Hello, I am using NDMP to back up a NetApp SVM. I have opened port 10000 from the access node to the SVM, and I can now browse and list the content. When I launch the backup, I get the error message below. Is this the only port required, or are there dynamic ports to open as well? Thanks!

Error Code: [39:424] Description: Client [nas_client_cifs] was unable to connect to the tape server [awpw99a00a] on port . Please check network connectivity. Source: mediaagent01, Process: NasBackup
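Since the control session on port 10000 works (you can browse) but the backup fails, the likely culprit is the separate data connection: NDMP typically negotiates its data/mover connections on dynamically assigned ports, so a firewall that only allows 10000 can break the backup while browsing still succeeds. A minimal TCP reachability check you can adapt from the NAS side; the local listener here just makes the sketch self-contained, in practice you would point it at the media agent's host and port:

```python
import socket

def port_open(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway local listener standing in for the tape server;
# a real check might be port_open("awpw99a00a", 10000) and the data ports.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
demo_port = listener.getsockname()[1]

reachable = port_open("127.0.0.1", demo_port)
listener.close()
print(reachable)  # True
```

If the dynamic range turns out to be the issue, the usual fix is either opening the negotiated range on the firewall or restricting the NDMP data-port range to a fixed window that the firewall permits.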
Unable to configure VSA clients in Azure
Hello, after installing a media agent in Azure, the next step is configuring a VSA backup client for one of the subscriptions where the media agent is located. I checked the roles which were added to the subscription for the media agent: Infrastructure Administrator and Networking Infrastructure Administrator. The media agent has Managed Identity enabled, which is required for the VSA proxy server to access the subscription. When my colleague tried to configure the VSA client, he received the error: "Unable to connect to Virtual Machine host [ID number for subscription] as user . [Failed to get access token. Connection failed.]" Please let me know where I should look for more details about this issue, or what steps to take to verify the media agent's configuration.
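Since the error mentions a failed access-token fetch, one thing worth verifying on the proxy VM itself is whether the managed identity can obtain an ARM token at all. A sketch using Azure's standard instance-metadata (IMDS) endpoint; this only works when run on the Azure VM, and fetch_token is left uncalled here because it needs that environment:

```python
import json
import urllib.parse
import urllib.request

# 169.254.169.254 is Azure's instance-metadata (IMDS) endpoint,
# reachable only from inside an Azure VM.
IMDS = "http://169.254.169.254/metadata/identity/oauth2/token"

def imds_token_url(resource="https://management.azure.com/"):
    """Build the IMDS managed-identity token request URL."""
    query = urllib.parse.urlencode(
        {"api-version": "2018-02-01", "resource": resource}
    )
    return f"{IMDS}?{query}"

def fetch_token():
    """Run this on the media agent / VSA proxy VM; a JSON response with
    an access_token means the managed identity itself is working."""
    req = urllib.request.Request(imds_token_url(), headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.loads(resp.read())["access_token"]

print(imds_token_url())
```

If fetch_token succeeds on the VM but Commvault still cannot connect, the problem is more likely the role assignments on the subscription than the identity itself.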
New install, Adminconsole HTTP error 404
Hello, I just did a new install of SP24 as a test environment. The install was successful, but the admin console does not work. Commvault\ContentStore\Apache\work\Catalina\localhost\adminconsole is empty; comparing to my working prod environment, there should be a lot of files there. I tried the install twice and did a repair of the installation as well, but nothing helped. Can I just copy the Catalina dir from my working environment?
Can I point a specific backup job or storage policy to a specific mount path?
Management has provided me with a separate partition for my database backup jobs, and I already have a storage policy dedicated to our DB backups. Is there a way to force a backup job or a storage policy to only save to a specified mount path? I am thinking I can move a mount path to the DB partition and then direct my DB storage policy or DB jobs to copy only to the mount path on that partition.
SQL sysadmin password changes often, causing backup failures
Hello, everyone. I am currently in an environment where CyberArk is used to change all service-account passwords at different intervals. The sysadmin (sa) account used for SQL database backups is in CyberArk and its password changes often. The backup administrator does not know when the password changes because it is automated, which often leads to backup failures due to credential-validation errors. Is there a workflow to update the passwords as they change? Or a way Commvault can be on-boarded onto CyberArk so the passwords update as they change? Or another way the issue can be resolved? Note: security is paramount to them; they are not looking to change how their passwords are generated. Please help.
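I don't know of a turnkey CyberArk integration, but the general shape of a rotation hook (fetch the current secret from the vault, push it into the backup tool's stored credential) can be sketched. Everything below is hypothetical scaffolding: fetch_secret and update_credential stand in for your vault's retrieval API and for whatever mechanism updates the stored credential (Commvault's REST API or a workflow, for instance), neither of which is spelled out here:

```python
def sync_password(fetch_secret, update_credential, account="sa"):
    """Pull the current secret from the vault and push it into the
    backup tool's stored credential. Both callables are injected so
    the real vault/REST calls can be plugged in later."""
    secret = fetch_secret(account)
    update_credential(account, secret)
    return secret

# Stub demo: a dict standing in for the vault, another for the
# credential store on the backup side.
vault = {"sa": "s3cr3t-rotated"}
stored = {}

sync_password(vault.get, lambda acct, pw: stored.update({acct: pw}))
print(stored)  # {'sa': 's3cr3t-rotated'}
```

If your vault can trigger a script after each rotation, running a hook like this at that moment keeps the stored credential from ever lagging the real password; whether your CyberArk deployment exposes such a hook depends on how it is configured.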