Solved

DB2 LUW Log Archiving is too slow

  • 23 February 2021
  • 15 replies
  • 1214 views

Badge +2

Hi all,

we have recently set up a new Linux server with a 6+ TB database. The normal CV backup works well, much faster than on our old system. We do have a problem with database logs, though. When the system is being used it can produce a new log every three seconds. Our old system had an open pipe to the backup system and was able to archive the files fast enough that we didn't have a problem. Commvault takes around 15 seconds per file, so when the system is under load we have a problem with our log space. We now have a temporary log area (2 TB) which we need to give back.

The way I understand it, CV opens up its stream to DB2, archives the log, then closes the stream and checks that everything has been properly archived. The problem here is the overhead involved. The actual archiving of the file is fast enough; it's everything around it that is too slow.

Does anybody have suggestions for how we can accelerate the process?

Thanks in advance!


Best answer by BackupDev 24 February 2021, 18:24


15 replies

Userlevel 4
Badge +7

Hello!

I think what you're potentially after here is a persistent log backup job, much like SAP HANA persistent log backups, where the job is constantly running and the only delay is actually bringing up the pipeline to the media agent and sending the archive logs across - or, even better, having this pipeline constantly open (which I'm not aware of any agent currently doing).

Whilst I investigate other options and possible future enhancements, I just wanted to check that you're aware of the thresholds for DB2 log backups, in case this helps in the meantime. Are the jobs currently scheduled?

See here: https://documentation.commvault.com/commvault/v11_sp20/article?p=14879_1.htm

Userlevel 1
Badge +2

Hi,

It looks like you currently have the log backup threshold set to 1, which instructs the Commvault DB2 vendor agent module to register a job for each DB2 archive log request. You can set the log backup threshold to, say, 20 (approximately 3 × 20 = 60 seconds of transactions, taking into consideration that you produce one log every 3 seconds). This would amortize the overall overhead across log files. The logs would temporarily be staged in the locally mounted Commvault DB2 archive log directory that you specified during agent install.
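To make that trade-off concrete, here is a rough back-of-the-envelope sketch (purely illustrative); the 15 s setup cost and 3 s log interval come from the original post, and the threshold of 20 is the suggested value, not a measured one:

```sh
# Rough sketch of how a higher log backup threshold amortizes per-job
# overhead. All numbers come from this thread, not from measurement.
SETUP_S=15          # observed per-job overhead (original post)
LOG_INTERVAL_S=3    # one new archive log every ~3 s under load
THRESHOLD=20        # suggested logs per backup job

# threshold=1 pays the full setup cost per log;
# threshold=N pays it once per batch of N logs.
awk -v s="$SETUP_S" -v n="$THRESHOLD" \
    'BEGIN { printf "avg overhead per log: %.2f s (vs %d s at threshold=1)\n", s/n, s }'
echo "a batch of $THRESHOLD logs accumulates in $(( THRESHOLD * LOG_INTERVAL_S )) s"
```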

 

For information on the future enhancement, please contact support.

Badge +2

Hi,

Yes, that is correct. That was the suggestion from our vendor as well. We don't want to use that because we would like the logs to be archived directly. With that setting at 20, DB2 gets a message that the logs have been archived while they are actually still on the local server.

What we would prefer is a solution that only sends DB2 the "archived" message once the logs have actually been saved, or a way to reduce the overhead between CV and DB2. The best would be to get the logs written directly to CV as soon as they are completed.

 

Is there a specific planned future enhancement that you are referring to, or do you just mean that generally?

Thanks!

Userlevel 1
Badge +2

Hi,

Staging 20 logs (around 60 seconds' worth in your case) with the CV archive log directory on an NFS volume or a mirrored disk would reduce the overhead and provide secure archive log storage. Note that this is different from the DB2 online log directory, so the DB2 server is free from managing a log once it is marked as archived. The log is in effect archived immediately and placed in the staging location, then backed up to media once there are 20 logs in the staging directory.

 

We do have a planned enhancement on the roadmap that would further reduce the per-job and stream-allocation overhead involved in moving archive logs directly to backup media.

 

Thank You!

Userlevel 1
Badge +5

Hello @BackupDev 

We are also having this issue with frequent DB2 archiving, and the customer is comparing with TSM, where they did not have this problem.

 

Can you please confirm in which release this enhancement is expected?

Userlevel 7
Badge +23

Hi @shailu89 !  I’ll tag in @BackupDev to see if they know the status (and if we can share it yet).

Userlevel 1
Badge +2

Hi @shailu89 , thanks for reaching out.

Are you using a log backup threshold of 1?
The enhancement mentioned above specifically optimizes backup job initialization and is planned on the future roadmap.

 

It would be good to have a support case logged so that we can review what the bottleneck might be in your environment, and what the specific comparison points are.

Badge +2

Hi @DSchat ,

 

Maybe it's unnecessary, but I had a similar problem. We installed a new CV environment as a replacement for TSM, and they had been backing up directly to TSM via LOGARCHMETH1 specified on the database itself. We replaced it with the vendor library for Commvault, along with the vendor option and TRACKMOD. This works pretty well for us, so DB2 talks directly with CV.

Here is the doc (it's old but still working):

https://documentation.commvault.com/fujitsu/v10/article?p=products/db2/dba/config_basic_dba.htm
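A minimal sketch of the DB2-side commands this describes might look like the following. The database name, library path, and LOGARCHOPT1 values are placeholders/assumptions; check the linked doc and your actual Commvault install path (e.g. under /opt/commvault) before using anything like this:

```sh
# Hypothetical example only -- paths and option strings vary by install.
# Point DB2's log archiving at the Commvault vendor library instead of TSM:
db2 update db cfg for MYDB using LOGARCHMETH1 "VENDOR:/opt/commvault/Base64/libDb2Sbt.so"

# Vendor options (client/instance names here are placeholders -- see the doc above):
db2 update db cfg for MYDB using LOGARCHOPT1 "'CvClientName=mydb2host,CvInstanceName=Instance001'"

# Enable modification tracking as mentioned; DB2 then requires a full
# backup before incremental backups are allowed:
db2 update db cfg for MYDB using TRACKMOD ON
db2 backup db MYDB
```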

 

Cheers,

Thomas

Userlevel 1
Badge +5

@BackupDev yes, we are using a log threshold of 1. The environment is a heavily loaded DB2 hotel which generates archives every couple of seconds. This puts unnecessary load on the CommServe DB. Something like the SAP HANA pipe would be good to have implemented.

 

I know that as a workaround we can put the archives on different disks and change the threshold to a higher value, but this all adds additional cost for the customer (working in a cloud model), while the expectation is that Commvault should be able to do this efficiently (in a few seconds); currently the time taken is in minutes (due to all the overhead).

Userlevel 1
Badge +2

@shailu89 having a local staging directory of decent size on existing disks should do. For example, if you plan to have a backup threshold of 20 logs of 4 GB each, then having 40 logs' worth of free space (40 × 4 = 160 GB) should work.
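As a quick sanity check, that rule of thumb (roughly twice the threshold's worth of free space) in shell form, using the numbers above:

```sh
# Staging directory sizing sketch using the numbers from this post.
THRESHOLD=20     # logs per backup job
LOG_SIZE_GB=4    # size of one archive log (LOGFILSIZ)
echo "recommended free space: $(( 2 * THRESHOLD * LOG_SIZE_GB )) GB"   # -> 160 GB
```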

 

Also, the enhancement mentioned above for the threshold=1 case is actively being worked on and is on the roadmap.

 

Thanks,

Userlevel 1
Badge +5

Hi @BackupDev ,

Any updates on the enhancement mentioned earlier?

 

Badge

Same issue here:

Small SAP system, log file size of 64 MB × 90 logs; 10 logs get written per minute, the threshold is set to 1 (because that's what we're used to from TSM), and each log takes approx. 20 s until it reaches CV, which over time creates a huge queue.

We also don't like the idea of actively relying on the CV staging directory.

So, @BackupDev: any news on the enhancement/roadmap?

Userlevel 1
Badge +2

The enhancement will be available in a very near-term future release.

Meanwhile, you can follow the sizing formula for log staging and keep 5 to 15 minutes' worth of logs in the CV staging area.
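Assuming the formula is simply log size × archive rate × minutes of buffer, with the numbers from the post above (64 MB logs, roughly 10 per minute) that works out to:

```sh
# Sketch of the 5-15 minute staging window; the formula itself is an
# assumption (size = log size x logs per minute x minutes of buffer).
LOG_SIZE_MB=64
LOGS_PER_MIN=10
for MIN in 5 15; do
  echo "$MIN min buffer: $(( LOG_SIZE_MB * LOGS_PER_MIN * MIN )) MB"
done   # -> 3200 MB and 9600 MB
```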

Badge

Hi @BackupDev 

Is the enhancement already available? Can you provide a link to the documentation, i.e. how can I leverage it?
If not, can you give an outlook on its availability?

 

Regards

Michael

Userlevel 1
Badge +2

Hi @MichaelSch ,

 

The feature is actively being pursued with development but is not released yet; please contact your account team for further updates.

 

Thanks,

 
