
Friends,


On our systems we use Oracle SEHA (Standard Edition High Availability) databases.

That means we have two servers, and each database runs on one server as a single instance. If that server goes down, the database fails over to the other server. It feels like classic cluster behavior, but without cluster software at the OS level.


This means that Commvault sees the database as a standard Oracle database running on node1. When a failover happens, Commvault autodetects the database on node2 as a new database, and the jobs on node1 start failing.

Then, when the database fails back to node1, the jobs for this database on node2 keep failing in turn.

So the backups for one database are spread across two clients, even though Commvault shows the same DBID for both.
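(A minimal way to see that it really is the same database on both nodes is to compare the DBID on each side. This is an illustrative sketch, assuming sqlplus and SYSDBA access on the node that currently hosts the instance; run it again on the other node after a failover.)

```shell
# The DBID reported by v$database does not change across failovers,
# which is why Commvault shows the same DBID on both clients.
# v$instance.host_name shows which server the instance is on right now.
sqlplus -s / as sysdba <<'EOF'
SELECT d.dbid, d.name, i.host_name
FROM   v$database d, v$instance i;
EOF
```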


As we have 30 days of retention, we can't simply clean up the leftovers on node2, because a recovery may need data from both clients. We also can't disable the backup for node2, because we don't know when the next failover will happen and we want to stay protected. So we are stuck with many failed jobs.


Does anyone have a best-practice for this situation?


Thanks in advance!

Maurice

Hello @DUO-CSR 


Thanks for the details and questions.

With other cluster environments there is generally a cluster namespace that floats between the two servers, so it is just one object in CV.

With that missing, Commvault has no way to know that this is the same database on both servers.


I have asked our Oracle devs this question about best practice and will get back to you with what I find.


Kind regards

Albert Williams



Hello @DUO-CSR 


After having a chat with the internal team, the recommendation is to configure this as a Data Guard setup, which should mitigate those issues.

https://documentation.commvault.com/2023e/expert/configuring_oracle_for_data_guard.html


Kind regards

Albert Williams


Thank you @Albert Williams!!
We will try this!

With kind regards, Maurice

