Question

DB2 LUW Log Archiving is too slow

  • 23 February 2021
  • 4 replies
  • 59 views

Badge +2

Hi all,

we have recently set up a new Linux server with a 6+ TB DB. The normal CV backup works well, much faster than on our old system. We do, however, have a problem with the database logs. When the system is in use it can produce a new log every three seconds. Our old system had an open pipe to the backup system and was able to archive the files fast enough that we didn’t have a problem. Commvault takes around 15 seconds per file, so when the system is under load we run into trouble with our log space. For now we have a temporary log area (2 TB) which we need to give back.
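
To put rough numbers on it:

    produced under load:  1 log /  3 s  =  20 logs per minute
    archived by CV:       1 log / 15 s  =   4 logs per minute
    net backlog:                           +16 logs per minute

So the 2 TB temporary area only buys us time; the backlog never drains while the load continues.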

The way I understand it, CV opens its stream to DB2, archives the log, then closes the stream and checks that everything has been properly archived. The problem here is the overhead involved: the actual archiving of the file is fast enough, it’s just that everything around it is too slow.
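
For reference, this per-file overhead can be measured by triggering a single archive by hand and timing it (MYDB below stands in for our database name):

    time db2 archive log for database MYDB

The timestamps DB2 writes to db2diag.log around the archive request then show how much of the ~15 seconds goes to setting up and tearing down the stream versus actually moving the file.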

Does anybody have suggestions for how we can accelerate the process?

Thanks in advance!


4 replies

Userlevel 2
Badge +4

Hello!

I think what you’re after here is a persistent log backup job, much like SAP HANA persistent log backups, where the job is constantly running and the only delay is bringing up the pipeline to the media agent and sending the archive logs across - or, even better, having that pipeline constantly open (which I’m not aware of any agent currently doing).

Whilst I investigate other options and possible future enhancements, I just wanted to check that you’re aware of the thresholds for DB2 log backups, in case they help in the meantime. Are the jobs currently scheduled?

See here: https://documentation.commvault.com/commvault/v11_sp20/article?p=14879_1.htm

Badge

Hi,

It looks like you currently have the log backup threshold set to 1, which instructs the Commvault DB2 vendor agent module to register a job for each DB2 archive log request. You can set the log backup threshold to, say, 20 (approximately 3 × 20 = 60 seconds of transactions, given that you produce one log every 3 seconds). This would average out the overall overhead per log file. The logs would temporarily be staged in the locally mounted Commvault DB2 archive log directory that you specified during agent install.
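
For reference, the DB2 side of the configuration stays the same whatever the threshold; the database simply points LOGARCHMETH1 at the Commvault vendor library, and the threshold itself lives in the Commvault configuration (see the documentation link above). A quick sanity check from the DB2 host (MYDB and the paths below are placeholders; the library path depends on your install location):

    # confirm the archive method points at the Commvault vendor library
    db2 get db cfg for MYDB | grep -i logarchmeth

    # typical value on Linux, e.g.:
    # LOGARCHMETH1 = VENDOR:/opt/commvault/Base64/libDb2Sbt.so

    # optional safety net: a local spill directory DB2 falls back to if archiving fails
    db2 update db cfg for MYDB using FAILARCHPATH /db2/failarchive

    # confirm archiving is keeping up
    db2 list history archive log all for MYDB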

 

For information on future enhancements, please contact support.

Badge +2

Hi,

Yes, that is correct. That was the suggestion from our vendor as well. We don’t want to use that because we would like the logs to be archived directly. With that setting at 20, DB2 gets a message that the logs have been archived while they are in fact still on the local server.

What we would prefer is a solution that sends DB2 the archived message only once the logs have actually been saved, or a way to reduce the overhead between CV and DB2. The best would be to get the logs written directly to CV as soon as they are completed.

 

Is there a specific planned future enhancement that you are referring to, or do you just mean that generally?

Thanks!

Badge

Hi,

Staging 20 logs (around 60 seconds’ worth in your case) with the CV archive log directory on an NFS volume or a mirrored disk would reduce the overhead and still provide secure archive log storage. Note that this directory is different from the DB2 online log directory, so the DB2 server is free from managing a log once it is marked as archived. The log is in effect archived immediately and placed in the staging location, and then backed up to media once 20 logs have accumulated in the staging directory.
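
If you want to verify the staging behaviour, counting files in the CV archive log directory over time is enough to see the fill-and-drain cycle (the path below is a placeholder for whatever directory you chose at agent install):

    # staging directory should fill to ~20 logs, then drain after each log backup job
    watch -n 5 'ls /opt/commvault/db2archlogs | wc -l'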

 

We do have a planned future enhancement on the roadmap that would further reduce the per-archive-log job and stream allocation tasks involved in the direct movement of archive logs to backup media.

 

Thank You!
