DDB Backup Failure
Hi Team, I'm frequently seeing the error below for many DDB backups, which then fail.

Error Code: [6:43]
Backup failed: unable to open or read a file. Configured to fail on any error. Please check the following:
1. The account that the File System agent uses has sufficient privileges to back up files.
2. The data being backed up can be accessed and is not on a path that is unmounted or is inaccessible.
3. A third-party product is not locking the files.
4. There is no corruption of the data on the disk.
5. If some failures are expected, please change the current setting so that the backups can complete.

Please let me know how to resolve this error. Regards, Harshavardhan
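A minimal triage sketch covering checks 1–4 above, assuming a Linux MediaAgent; the DDB path is a placeholder, so substitute the path named in the failing job's logs:

    #!/bin/bash
    # Triage sketch for error 6:43. DDB_PATH is a hypothetical example --
    # use the path reported in the failing job's logs on the MediaAgent.
    DDB_PATH=/opt/commvault/ddb

    # Check 2: is the path on a mounted, reachable filesystem?
    df -h "$DDB_PATH" && ls -ld "$DDB_PATH"

    # Check 1: can the backup account read it? (Commvault on Linux usually
    # runs as root; substitute the real service account if different.)
    test -r "$DDB_PATH" && echo readable || echo NOT readable

    # Check 3: is another process (AV scanner etc.) holding the files open?
    fuser -v "$DDB_PATH" 2>&1 | head

    # Check 4: any recent disk-level errors in the kernel log?
    dmesg | grep -iE 'i/o error|ext4|xfs' | tail -20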
Issue enabling ransomware protection on a new mount for a Linux MA
Hello Community, before making the change in the prod environment, I have mounted a new iSCSI LUN (no multipath) on a MA (RHEL 8.7) in a lab environment and configured it for a Commvault library. I also completed the first backup on this mount. I'm attempting to enable Commvault ransomware protection for this new mount. During this process, I received a message stating that the operation would be disruptive and required an update of the fstab conf file for both local and network file systems. After confirming the operation by entering 'y', a policy was created/added in the cvstorage module.

Issue: even though the process was carried out, fstab was not updated for either mount. It was expected that the process would unmount the mount and update fstab for it, as indicated in the following output:

2023-xx-xx 22:18:05,387 - __main__ - INFO - unmounting 'XXX_mount_name'
2023-xx-xx 22:18:05,412 - __main__ - INFO - updating fstab with security 'XXX_mount_name'

However, the process doesn't actually run the umount or update fstab.
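A minimal verification sketch for this situation, assuming the cvstorage policy is SELinux-based (an assumption to confirm for your version) and using the placeholder mount name from the log output:

    #!/bin/bash
    # Placeholder mount point from the log output; substitute your library mount.
    MNT=/mnt/XXX_mount_name

    # Did the script touch fstab at all? Look for a security/context option.
    grep "$MNT" /etc/fstab

    # Is the filesystem currently mounted, and with which options?
    mount | grep "$MNT"

    # If the cvstorage policy is SELinux-based, the mount should carry its label.
    sestatus
    ls -Zd "$MNT"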
Need to change encryption algorithm from Blowfish to AES with a 256-bit key on 600 clients via CLI command
Dear Team, good day! I need to change the encryption algorithm from Blowfish to AES with a 256-bit key on 600 clients. Is there a command line that I can execute to run on each storage policy? Changing it through the GUI is time-consuming. Thanks
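A hedged sketch of the usual CLI pattern for bulk changes: log in with qlogin, then loop qoperation execute over an XML request per entity. The XML element names below are illustrative placeholders only, not a confirmed schema; verify them against the qoperation documentation for encryption settings before running anything:

    #!/bin/bash
    # Sketch only: qlogin/qoperation/qlogout are real QCommands, but the
    # XML body is a HYPOTHETICAL placeholder -- check the element names
    # against the qoperation execute documentation first.
    qlogin -cs commserve01 -u admin   # example CommServe and user

    while read -r CLIENT; do
      cat > /tmp/enc.xml <<EOF
    <!-- HYPOTHETICAL payload: verify element names against the docs -->
    <App_SetClientPropertiesRequest>
      <association><entity><clientName>${CLIENT}</clientName></entity></association>
      <clientProperties>
        <encryptionSettings cipherType="AES" keyLength="256"/>
      </clientProperties>
    </App_SetClientPropertiesRequest>
    EOF
      qoperation execute -af /tmp/enc.xml
    done < clients.txt    # clients.txt: one client name per line, 600 entries

    qlogout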
Tape Backup with Backup Plans - custom scratch pools
Hello, is it possible to customize the default scratch pool for a storage copy to tape created in a Server Plan (second copy to tape)? Or will it always be the Default Scratch Pool, meaning I need to go to the CommCell Console and change it there? Regards, Pedro
SUSE 11 SP4 operating system crash when backing up an Oracle 11g database using the BLB option
crashdump log:

    <6>[5258663.115395] EXT3-fs (dm-22): mounted filesystem with ordered data mode
    <6>[5431207.839871] 31828: cvbf_attach_block_device(): Attaching to 253:20
    <3>[5431207.839877] 31828: ERROR: cvbf_attach_block_device(): Device 253:20 is already attached
    <6>[5431241.625666] kjournald starting. Commit interval 15 seconds
    <6>[5431241.626067] EXT3-fs (dm-22): mounted filesystem with ordered data mode
    <3>[5435880.559354] INFO: task CvMountd:11008 blocked for more than 180 seconds.
    <3>[5435880.559362] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    <6>[5435880.559370] CvMountd D ffff88084d8a3c24 0 11008 1 0x00000000
    <4>[5435880.559381] ffff880180d1fd38 0000000000000086 ffff880180d1e010 0000000000013400
    <4>[5435880.559394] 0000000000013400 0000000000013400 0000000000013400 ffff880180d1ffd8
    <4>[5435880.559405] ffff880180d1ffd8 0000000000013400 ffff880266776240 ffff88084fc40480
Oracle restore using script file
Hi Team, the DB team is trying to automate in-place restores for a RAC server and wants to know where they can find the below parameter value to allocate channels from their end. Attached is the script for reference, which I saved from Commvault. It looks like 1756 & 1757 are node server values. Any idea how Commvault obtains this value?

    <racDataStreamAllcation>1756 4</racDataStreamAllcation>
    <racDataStreamAllcation>1757 0</racDataStreamAllcation>
Associate Admin Center Credential Manager to SQL plan
Hello team, we have a group in the CommCell to hold SQL credentials: client computer groups → sql → group properties → advanced settings tab → override higher levels settings → impersonate user → select the credential that we created in CommCell Credential Manager. By doing this, any SQL client in this group inherits this particular service account used for SQL-level backup/restore, rather than setting credentials on each SQL client. After upgrading to the latest release, 2022E, we now have the Credential Manager option in Command Center, and the existing credentials have been auto-created by the upgrade for SQL credential manager support. I'm trying to associate the credential in Command Center to a SQL plan, and I'm unable to find the option (override higher levels settings → impersonate user) in the plan. Is there a way to associate the credential to any SQL plan in Command Center?
Activate / Analytics engine not running
Over the past year I have been elbows deep in system governance, and this brought me to loving Activate. This AM I woke up to check reporting and noticed that ALL of my projects were reporting "No data found", followed by a Command Center alert: "Failed to get Schema. Make sure Analytics Engine is reachable". Me being... well... me, I knew right where to look. My index server and content analyzer were up and active, so naturally I rebooted them, just in case an update snuck in there. After the reboot the issue still exists. Anyone else have this issue? I have opened a ticket with Maintenance Advantage on the lowest priority, but I would like to see if we, "The Community", can resolve this issue together.
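A quick reachability sketch for this symptom, assuming the Index Server's Solr instance listens on the default data-cube port 20000 (verify the port for your install; the hostname is an example):

    #!/bin/bash
    # Example host; substitute your Index Server / Content Analyzer hosts.
    IDX=indexserver01.example.com

    # Is the Solr service behind the Analytics engine answering at all?
    curl -s "http://${IDX}:20000/solr/admin/cores?action=STATUS" | head -40

    # Is anything listening on the port after the reboot?
    nc -zv "$IDX" 20000

    # On the Index Server itself, confirm the Commvault services are up:
    #   commvault status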
Restore from Command Center download function
Hi, often when servers are fully loaded and I restore a guest file from a VM backup using the DOWNLOAD option in the web console, I get a "session disconnect" error during the restore process. In this situation the web browser does not download the restored file, but from the console I can see the restore process complete. I suppose this is the Command Center session timeout, but is the processed file stored in a temporary folder on the CommServe? Thanks
Script to gracefully shut down a HyperScale X cluster
Hello, does anyone know a way to automate the shutdown procedure of a HyperScale X cluster, maybe with a script? From the documentation there are some commands to perform, and we would need something that can be started without human input and that can be scheduled in crontab: https://documentation.commvault.com/2022e/expert/132928_stopping_and_starting_hyperscale_x_reference_architecture_cluster.html Another useful thing would be a check command after the reboot, to understand if everything was stopped "as expected" and that we are in a "clean" situation. Thanks, Lucio
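A skeleton of the kind of cron-driven wrapper being asked for, assuming the per-node stop sequence from the linked page; the storage-pool steps are deliberately left as placeholders to be filled in from that documentation, and the node names are examples:

    #!/bin/bash
    # Skeleton only -- fill in the storage-pool stop steps from the
    # HyperScale X stop/start documentation page before scheduling this.
    set -euo pipefail
    NODES="node1 node2 node3"   # example node names

    for N in $NODES; do
      # 'commvault stop' / 'commvault status' are the standard Linux
      # service-control commands; the rest of the sequence is doc-specific.
      ssh "root@$N" "commvault stop"
    done

    # <placeholder: documented commands to stop the storage pool cleanly>

    # Post-reboot sanity check: confirm services report running on every node.
    for N in $NODES; do
      ssh "root@$N" "commvault status" || echo "WARN: $N not clean"
    done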
Commvault AWS API Limited Permissions
Hi all, I had a question regarding the current IAM policies being provided by Commvault for AWS, which are all documented here. My question is in relation to this policy. More pointedly, I'm interested to understand if anyone has taken the time to configure these policies in a "least privilege access" approach, utilizing something like condition-based tags. I understand that Commvault provides this policy, but as you can see, the bottom half of that policy is still far more wide open than what would meet a "least privilege access" approach. For instance, I'd like to understand what SSM is being used for and how we could approach restricting these specific permissions to only the resources we need to give it access to:

    "ssm:CancelCommand"
    "ssm:SendCommand"
    "ssm:ListCommands"
    "ssm:ListDocuments"
    "ssm:DescribeDocument"
    "ssm:DescribeInstanceInformation"

Open to thoughts and suggestions! Thanks
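A sketch of the tag-conditioned approach, assuming the access-node instances carry a hypothetical tag such as cv-managed=true (the tag name is an invention for illustration). The ssm:resourceTag condition key is AWS's documented way to fence SendCommand to specific instances, while the List*/Describe* actions do not support resource-level scoping and stay broad; which SSM documents Commvault actually invokes would need confirming before narrowing to AWS-RunShellScript:

    #!/bin/bash
    # Writes a narrowed SSM statement. 'cv-managed' is a hypothetical tag;
    # the List*/Describe* actions don't support resource-level scoping, so
    # they stay on "*" -- that residual breadth is part of the question.
    cat > ssm-least-priv.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "SendToTaggedInstancesOnly",
          "Effect": "Allow",
          "Action": ["ssm:SendCommand", "ssm:CancelCommand"],
          "Resource": "arn:aws:ec2:*:*:instance/*",
          "Condition": {"StringEquals": {"ssm:resourceTag/cv-managed": "true"}}
        },
        {
          "Sid": "RunShellScriptDocumentOnly",
          "Effect": "Allow",
          "Action": "ssm:SendCommand",
          "Resource": "arn:aws:ssm:*::document/AWS-RunShellScript"
        },
        {
          "Sid": "ReadOnlyInventory",
          "Effect": "Allow",
          "Action": ["ssm:ListCommands", "ssm:ListDocuments",
                     "ssm:DescribeDocument", "ssm:DescribeInstanceInformation"],
          "Resource": "*"
        }
      ]
    }
    EOF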
OneDrive files for users who left years ago still available for restore
My OneDrive backups are configured to use my Tier 3 Prod ("other" production) storage policy, which has a 14-day retention for all backups and a 1-year retention for monthly backups. Today I noticed that a coworker named Simon, who moved on to greener pastures more than a year ago, still appears in the list of clients when I do a Browse and Restore search on OneDrive users within the CommVault Java GUI. The Backup Time reported for Simon's files falls within the weekly synthetic full backup that runs Saturday nights. I thought there might have been a failure to clean up Azure, but when I check admin.microsoft.com, Simon does not appear as an active, guest, or deleted user. I checked with my O365 administrator, and Simon's files appear to have been properly deleted. I don't understand why his files are still appearing within the CommVault backups. Shouldn't his OneDrive files have expired by now? Ken
Setting job precedence/stagger times
Good day all. I guess this question is two-fold, and I've not really been able to find a way this can be done using a Plan. We're running SQL Plans that back up the DBs in the traditional Full/Diff/TLOG format. We have one SQL instance that mustn't have its TLOGs backed up. My assumption is that it would need to go into its own Plan with the TLOG slider switched off? The second part, for this same instance, is that the backups of the various DBs shouldn't happen at the same time due to their size. Is it possible to configure an instance so that a DB will not start backing up until the prior one is complete, cascading in this manner? I understand you could create multiple subclients within each instance and assign different schedules. However, this would potentially mean that if a DB finishes sooner than the next one is scheduled to start, there's an unused portion of the backup window. Conversely, if one schedule overruns into the next, then we have two jobs running, which we're trying to avoid.
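A hedged sketch of a scripted alternative: drive the subclients serially from one script so each backup is submitted as soon as the prior one finishes. qlogin/qoperation backup are real QCommands, but the client/subclient names below are examples, and qoperation backup returns after submitting the job, so completion polling is left as a placeholder per the QCommand docs:

    #!/bin/bash
    # Example names throughout -- substitute your client and subclients.
    qlogin -cs commserve01 -u admin

    for SC in db_sub1 db_sub2 db_sub3; do
      # Submit the next subclient's backup; qoperation backup prints a job
      # ID and returns immediately.
      qoperation backup -c sqlclient01 -a Q_MSSQL -s "$SC" -t Q_FULL
      # <placeholder: poll the returned job ID until it completes, per the
      #  QCommand documentation, before looping to the next subclient>
    done

    qlogout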
Activate - Exchange System Governance
We started using Activate late last year (amazing, for those who have not deployed this yet), and while the file system projects are useful and abundant, I want to add our Exchange servers. Does anyone know if you can use just the DB of the Exchange server? Or does each mailbox have to be collected within Commvault for the crawl/scan to work?
Archived SQL with no Indexes and Constraints
Hi guys, I am currently running a Database Archiving project for a company, and there's a little issue which has delayed progress. After we archived some production tables, the INDEXES and CONSTRAINTS for those particular tables are not present on the storage. We tried to explain that Backup and Archiving are different: the INDEXES and CONSTRAINTS are present in a backup but absent in ARCHIVING. They insisted that they would need the CONSTRAINTS and INDEXES on the STAGING server and the archiving storage for reporting purposes. Is there a way to achieve this?
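One possible workaround, not Commvault-specific: inventory and script out the indexes and constraints yourself before the archiving run, then replay them on the staging server. A minimal inventory sketch with sqlcmd, using hypothetical server and database names; expand the output into full CREATE statements as needed:

    #!/bin/bash
    # Hypothetical connection details -- replace with your own values.
    SQLSERVER=prodsql01
    DB=SalesDB

    # List the indexes and constraints on the tables slated for archiving,
    # so they can be re-created on the staging server after the archive run.
    sqlcmd -S "$SQLSERVER" -d "$DB" -Q "
      SELECT t.name AS table_name, i.name AS index_name, i.type_desc,
             i.is_primary_key, i.is_unique_constraint
      FROM sys.indexes i
      JOIN sys.tables t ON t.object_id = i.object_id
      WHERE i.name IS NOT NULL;

      SELECT t.name AS table_name, c.name AS constraint_name, c.type_desc
      FROM sys.objects c
      JOIN sys.tables t ON t.object_id = c.parent_object_id
      WHERE c.type IN ('C','F','D','UQ','PK');"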