Hello!
I think what you’re after here is a persistent log backup job, much like SAP HANA persistent log backups, where the job is constantly running and the only delay is bringing up the pipeline to the media agent and sending the archive logs across. Better still would be keeping that pipeline permanently open, though I’m not aware of any agent that currently does this.
Whilst I investigate other options and possible future enhancements, I just wanted to check that you’re aware of the thresholds for DB2 log backups, in case this helps in the meantime. Are the jobs currently scheduled?
See here: https://documentation.commvault.com/commvault/v11_sp20/article?p=14879_1.htm
Hi,
It looks like you currently have the log backup threshold set to 1, which instructs the Commvault DB2 vendor agent module to register a job for each DB2 archive log request. You could set the log backup threshold to, say, 20 (approximately 20 × 3 = 60 seconds of transactions, given that you generate one log every 3 seconds). This averages out the per-log overhead. The logs would temporarily be staged in the locally mounted Commvault DB2 archive log directory that you specified during agent install.
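In case it helps while checking this, the current archive handoff can be confirmed from the DB2 side with standard CLI commands (MYDB is a placeholder database name):

    db2 get db cfg for MYDB | grep -i logarchmeth1    # shows the archive method (VENDOR:<library> when CV handles logs)
    db2 list history archive log all for MYDB         # recent archive log requests and their timing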
For information on future enhancement, please contact support.
Hi,
Yes, that is correct. That was the suggestion from our vendor as well. We don’t want to use that, because we would like the logs to be archived directly. With that setting at 20, DB2 gets a message that the logs have been archived while they are still on the local server.
What we would prefer is a solution that sends DB2 the archived message only once the logs have actually been saved, or a way to reduce the overhead between CV and DB2. The best would be to have the logs written directly to CV as soon as they are completed.
Is there some planned future enhancement that you are referring to or do you just mean that generally?
Thanks!
Hi,
Having the log staging threshold at 20 logs (around 60 seconds’ worth in your case), with the CV archive log directory on an NFS volume or mirrored disk, would reduce the overhead and provide secure archive log storage. Note that this directory is different from the DB2 online log directory, so the DB2 server is free of managing a log once it is marked as archived. The log is, in effect, archived immediately into the staging location, and then backed up to media once 20 logs have accumulated in the staging directory.
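If you want to watch that staging behaviour, counting files in the CV archive log directory shows it filling to the threshold and then draining after each job (the path below is only an example; substitute the directory you specified at install):

    watch -n 5 'ls -1 /opt/commvault/Db2ArchiveLogs | wc -l'    # staged log count, refreshed every 5 seconds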
We do have a planned enhancement on the roadmap that would further reduce the per-archive-log job and stream allocation overhead involved in moving archive logs directly to backup media.
Thank You!
Hello @BackupDev
We are also having this issue of frequent DB2 archiving, and the customer is comparing it with TSM, where they did not have this problem.
Can you please confirm in which release this enhancement is expected?
Hi @shailu89 ! I’ll tag in @BackupDev to see if they know the status (and if we can share it yet).
Hi @shailu89 , thanks for reaching out.
Are you using a log backup threshold of 1?
The enhancement mentioned above specifically optimizes backup job initialization and is planned on the future roadmap.
It would be good to have a support case logged so that we can review what the bottleneck or issue in your environment might be, and what the specific comparison points are.
Hi @DSchat ,
Maybe it’s unnecessary, but I had a similar problem: we installed a new CV as a replacement for TSM, and they had been backing up directly to TSM via LOGARCHMETH1 specified on the database itself. We replaced it with the vendor ID for Commvault, along with the option and TRACKMOD. This works pretty well for us, so DB2 talks directly with CV.
Here is the doc (it’s old but still works):
https://documentation.commvault.com/fujitsu/v10/article?p=products/db2/dba/config_basic_dba.htm
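For reference, the shape of the change Thomas describes is roughly the following (standard DB2 CLI; MYDB, the library path, and the option string are placeholders, so take the exact values from the doc above and your Commvault install):

    db2 update db cfg for MYDB using LOGARCHMETH1 "VENDOR:/opt/commvault/Base/libDb2Sbt.so"    # example path only
    db2 update db cfg for MYDB using LOGARCHOPT1 "<options per the Commvault doc>"             # placeholder options
    db2 update db cfg for MYDB using TRACKMOD ON                                               # track modified pages for incrementals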
Cheers,
Thomas
@BackupDev yes, we are using a log threshold of 1. The environment has a heavily loaded DB2 hotel which generates archives every couple of seconds. This unnecessarily puts load on the CS DB. Something like the SAP HANA pipe would be good to implement.
I know that as a workaround we can put the archives on different disks and change the threshold to a higher value, but this all adds cost for the customer (working in a cloud model), while the expectation is that Commvault should be able to do this efficiently (in a few seconds); currently the time taken is in minutes, due to all the overhead.
@shailu89 having the local staging directory of decent size on existing disks should do. For example, if you plan to have a backup threshold of 20 logs of 4 GB each, then having 40 logs’ worth of free space (40 × 4 = 160 GB) should work.
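As a hedged sketch of that sizing rule of thumb (plain shell arithmetic, values from this example):

    THRESHOLD=20      # logs per backup job
    LOG_SIZE_GB=4     # size of one archive log
    echo "staging free space: $((2 * THRESHOLD * LOG_SIZE_GB)) GB"    # -> 160 GB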
Also, the enhancement mentioned above for the threshold=1 case is actively being worked on and is on the roadmap.
Thanks,
Hi @BackupDev ,
Any updates on the enhancement mentioned in the earlier update?
Same issue here:
small SAP system, log file size of 64 MB × 90 logs; 10 logs get written per minute, and the threshold is set to 1 (because we’re used to that from TSM). Each log takes approximately 20 seconds to reach CV, so over time this builds a huge queue.
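A quick back-of-envelope with those numbers (nothing Commvault-specific, just the rates from this post):

    ARRIVAL=10              # logs written per minute
    SERVICE=$((60 / 20))    # logs backed up per minute at ~20 s each -> 3
    echo "backlog grows by ~$((ARRIVAL - SERVICE)) logs/min"    # -> ~7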
We also don’t like the idea of actively relying on the CV staging directory.
So: @BackupDev any news on the enhancement/roadmap?
The enhancement will be available in a very near-term future release.
Meanwhile, you can follow the formula for log staging to keep 5 to 15 minutes’ worth of logs in the CV staging area.
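As an illustration of that formula, with the log rate from the SAP example above (the values are assumptions; adjust to your own rate):

    LOGS_PER_MIN=10    # observed log generation rate
    WINDOW_MIN=5       # target 5-15 minutes of staged logs
    echo "suggested log backup threshold: $((LOGS_PER_MIN * WINDOW_MIN)) logs"    # -> 50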
Hi @BackupDev
Is the enhancement already available? Can you provide a link to the documentation, i.e. how can I leverage it?
If not, can you give an outlook on its availability?
Regards
Michael
Hi @MichaelSch ,
The feature is actively being pursued with development and has not been released yet; please contact your account team for further updates.
Thanks,