Solved

Postgres - pipe Log Files directly to CommVault

  • 25 January 2021
  • 6 replies
  • 466 views

Badge +2

Hello,

I'm new to CommVault; we are just starting to get it set up for production. We currently run our backups on TSM but are preparing to switch to CommVault.

Is there a way to send completed log files directly to CommVault via a pipe or some other method? That is how we do it with TSM: when a log file has been completely filled, it gets piped directly into TSM, with no waiting at all. The suggested method of placing the files into a separate archive with the Postgres archive command doesn't appeal to us much. It would be nice to have the completed files go directly to a log backup instead of parking them while you wait for CommVault to pull them.

Any suggestion would be much appreciated!

We are running on RedHat 7 with Postgres 10 and 12.

Thanks!


Best answer by Edd Rimmer 25 January 2021, 15:06


6 replies

Userlevel 4
Badge +7

Hello,

Welcome to Commvault! I’m sorry to hear we don’t match up with how TSM performs the log backups exactly, and it’s not what you are used to…

Currently there is no method to have archive_command initiate a job and pipe the archived logs; however, I can advise the following:

Short term: Increase the frequency of your log backups based on the average WAL creation time - for example, run them every 5 minutes instead of every hour if required.

Long term: Feature Release 11.23, once released, will bring support for an Automatic Schedule for log backups. This will let you specify a threshold for the number of logs present or the disk space used which, when met, will automatically trigger a backup. You could, for example, set it to trigger a backup as soon as 1 WAL file exists.

Final thought: I have previously worked with a customer who created a script, called by archive_command, which would spawn a log backup in Commvault using the Commvault qoperation command IF one was not already running. This might be something to consider?

Badge +2

Hi Edd,

Thanks for your answer.

Short Term: We’ve set the log backup to continuous, which checks every five minutes whether a log backup is running. If not, it starts one, provided the last one is older than one minute. That is OK for now, but we are planning to migrate a few multi-TB databases into Postgres, and that may not be often enough.

Long Term: The Automatic Schedule sounds good, but when I looked at the documentation I didn’t see that Postgres is supported with that tool.

Regarding your final thought with a script, that sounds promising; I would like to look into that further. Right now, though, we can’t run any scripts from CommVault because they all belong to root, and we need to get our rights set up for that.

Userlevel 7
Badge +23

I’m curious if TSM understands that these are log files for postgres and gives you the capabilities for point-in-time recoveries, or if it's treating them like any other regular file and you have to manage everything at restore time?


Userlevel 4
Badge +7

Hey,

Yep, Automatic Schedule is not supported yet - it will be supported in FR23, which is not out yet :grinning: The link I referenced was just to explain automatic schedules for log backups in general.

FR23 will be out mid March 2021.

With regards to the script - it may be the best option to tide you over until FR23 is available and you upgrade to it. There are a few different ways you could do this, and you’ll need to decide whether to run the qoperation in asynchronous or synchronous mode, which will largely determine how simple or advanced your script needs to be! Some pseudocode for a script would be:

Script executed (from archive_command):

Check if a file pid.file exists
    If pid.file exists, read the PID from it and check whether that process is running
        If the process is running, exit the script
        If the process is not running, overwrite pid.file with the current PID
    If pid.file does not exist, create pid.file with the current PID
Run the CV PostgreSQL incremental (log) backup synchronously (wait for it to complete)
Remove pid.file
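The pseudocode above could be sketched as a small POSIX shell wrapper. This is only an illustration: the actual Commvault backup invocation is stubbed out as a variable (the real qoperation flags - client, agent, subclient - would come from your own setup), and the pid-file path is an assumption.

```shell
#!/bin/sh
# Sketch of an archive_command wrapper that triggers a Commvault log backup
# only if one is not already running. Paths and the backup command are
# placeholders, not documented Commvault values.
PIDFILE=${PIDFILE:-/tmp/cv_pg_logbackup.pid}
# In reality this would be something like a 'qoperation backup ...' call
# with your client/agent/subclient flags; stubbed here for illustration.
BACKUP_CMD=${BACKUP_CMD:-"echo triggering log backup"}

run_log_backup() {
    if [ -f "$PIDFILE" ]; then
        oldpid=$(cat "$PIDFILE")
        # If that PID is still alive, another backup is in flight - skip.
        if kill -0 "$oldpid" 2>/dev/null; then
            return 0
        fi
    fi
    echo $$ > "$PIDFILE"
    $BACKUP_CMD            # synchronous: blocks until the job finishes
    rc=$?
    rm -f "$PIDFILE"
    return $rc
}

run_log_backup
```

Returning the backup command's exit status matters here, because (as noted below) PostgreSQL treats any non-zero exit from archive_command as a failed archive and retries.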

The script would be executed after the ‘cp’ in the archive_command we specifically say to use - we search the archive_command for the directory you specify on the PostgreSQL instance, which is how we check that the logs are actually being backed up properly (and archived before being backed up). You’ll also want to make sure you understand how archive_command works in PostgreSQL: if whatever you put in archive_command does not return an exit code of 0, PostgreSQL will treat the archiving of that WAL file as failed and will keep retrying.
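For reference, a minimal sketch of what that looks like on the PostgreSQL side in postgresql.conf - the archive directory and script path here are placeholders, not documented Commvault values; %p and %f are standard PostgreSQL substitutions for the WAL segment's path and file name:

```ini
# %p = path to the WAL segment, %f = its file name
archive_mode = on
archive_command = 'cp %p /backup/wal_archive/%f && /usr/local/bin/cv_log_backup.sh'
```

Because archive_command is considered failed unless it exits 0, the `&&` means the trigger script only runs after the copy succeeds, and a non-zero exit from either step makes PostgreSQL retry the whole command.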

I hope this helps somewhat :wink:

Userlevel 7
Badge +23

Also to add, it is possible to entirely configure the script from Commvault using workflows without having to log into the target system.

Userlevel 4
Badge +7


In this specific circumstance (PostgreSQL WAL archive log backups, aka ‘Incremental’ backups in CV), the backup needs to be triggered from the client to mimic the behavior of TSM as closely as possible. The script could trigger a workflow built to achieve the same thing, but whichever route is chosen, PostgreSQL will run archive_command whenever a WAL file is archived and expect a return code of 0, retrying until it gets one - at that point the WAL file is considered archived/protected and is then recycled or removed.

Reply