Hi all,

We have recently set up a new Linux server with a 6+ TB DB2 database. The normal CommVault backup works well, much faster than on our old system. We do have a problem with the database logs, though: when the system is busy it can produce a new log every three seconds. Our old system had an open pipe to the backup system and could archive the files fast enough that this was never an issue, but CommVault takes around 15 seconds per file, so under load we run out of log space. We currently have a temporary log area (2 TB) which we need to give back.

The way I understand it, CommVault opens its stream to DB2, archives the log, then closes the stream and checks that everything has been properly archived. The problem is the overhead involved: the actual archiving of the file is fast enough, it's just that everything around it is too slow.

Does anybody have suggestions on how we can accelerate the process?

Thanks in advance!
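P.S. For reference, as far as I can tell our log archiving is set up along these lines; the database name and the path to the CommVault vendor library below are only placeholders for our actual values:

    db2 get db cfg for MYDB | grep -i logarch
    db2 update db cfg for MYDB using LOGARCHMETH1 "VENDOR:/opt/commvault/Base/libDb2Sbt.so"

So every completed log file is handed to the vendor library one at a time, which is where the per-file overhead seems to come from.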
Hello,

We are trying to set up our Postgres DB to use block-level backups. Our system is PostgreSQL 10.13 running on Linux RHEL 7. The backups run without any errors, and I can see all the full and incremental (log) backups that have been taken over the last several days. After preparing the server (getting rid of the /main file and all log files) I started a restore to current from the web interface.

After about 15 seconds the progress indicator goes from green to yellow (shows 10% completed) and I get the error message:

Received failed message for job [258820], phase [Restore]
Event code 19:122

After that the only thing I can do is kill the job and go back to my older regular backup, which CommVault can restore without a problem.

Thanks for any help!

Douglas
Hello,

I'm new to CommVault; we are just getting it set up for production. We currently run our backups on TSM but are preparing to switch to CommVault. Is there a way to send completed log files directly to CommVault via a pipe or some other method? That is how we do it with TSM: when a log file has been filled completely, it gets piped straight into TSM, with no waiting at all. The suggested method of placing the files into a separate archive directory with the Postgres archive command doesn't appeal to us that much; it would be nicer for the completed files to go directly into a log backup instead of parking them while waiting for CommVault to pull them.

Any suggestion would be much appreciated! We are running on RedHat 7 with Postgres 10 and 12.

Thanks!
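P.S. To be clear, the "parking" setup we were pointed at is essentially the standard archive_command staging approach, roughly like this (the staging directory is just a placeholder):

    # postgresql.conf -- completed WAL segments are copied into a staging
    # directory that the log backup later sweeps up
    archive_mode = on
    archive_command = 'test ! -f /pgarchive/%f && cp %p /pgarchive/%f'

What we would prefer is something equivalent to our TSM pipe, where the archive command hands the finished segment straight to the backup software instead of to a directory.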