Solved

AWS Aurora PostgreSQL v13 max backup streams


GGMGL (Badge +3):

Hi,

I have an AWS Aurora PostgreSQL v13.7 instance with many databases, and I can only run backups with at most 3 streams (i.e., 3 DBs at a time), irrespective of the stream settings on the subclient, Storage Policy, etc. Is this a Commvault limit or an AWS/PostgreSQL limit? Is it possible to increase it?

Thanks


Best answer by Sunil 24 July 2023, 05:05


6 replies

Sunil (Userlevel 5, Badge +13):

Hi @GGMGL 

Where do you see the max 3 streams limit?

 

Thanks,

Sunil-

GGMGL (Badge +3):

Hi @Sunil,

Thanks for replying. In the Streams tab of the running job I can see 3 active streams. This is on Commvault 11.28, doing dump-based backups.

I have increased the CPU count of the proxy (RHEL 7.9) from 4 to 8 vCPUs and now I can see max 5 active streams.

Then I started a backup using the same proxy against a different RDS instance, and that also shows 5 active streams (so 2 backup jobs running at the same time on the same proxy/Storage Policy, with 10 active streams in total).

Thanks

GGMGL (Badge +3):

Almost forgot: with 8 DBs in the subclient and 8 readers/streams I get 5 active streams; with 4 DBs in the subclient and 8 readers/streams I get 3 active streams.

Sunil (Userlevel 5, Badge +13):

Hi @GGMGL 

Got it. The number of parallel streams depends on the number of DBs in the subclient, not on the number of cores.

The maximum number of parallel streams is (number of databases in the subclient + 1) / 2, rounded up. That's why you're seeing 5 and 3 active streams respectively.
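For reference, here's a quick sketch of that formula and how it matches the observed stream counts (my own illustration of the arithmetic, not Commvault code):

```python
import math

def max_parallel_streams(num_databases: int) -> int:
    """Approximate the observed cap: (N + 1) / 2, rounded up."""
    return math.ceil((num_databases + 1) / 2)

print(max_parallel_streams(8))  # 5 -> matches the 8-DB subclient above
print(max_parallel_streams(4))  # 3 -> matches the 4-DB subclient above
```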

 

Thanks,

Sunil

GGMGL (Badge +3):

Thanks @Sunil for explaining the logic. Is it tunable? If I have 4 similar-sized DBs in an Aurora instance, it backs up 3 DBs first and starts the remaining one only when those complete, effectively doubling the backup window even though the proxy/DB instance could sustain backing up all 4 DBs fine.

Sunil (Userlevel 5, Badge +13):

Hi @GGMGL 

Unfortunately, there is no override that I know of. One way to work around it is to split the DBs into two subclients (2+2) and schedule the backups at the same time.
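The arithmetic behind that workaround, using the (N + 1) / 2 rounded-up formula described above (my own worked example, not Commvault code):

```python
import math

def max_parallel_streams(num_databases: int) -> int:
    # (N + 1) / 2, rounded up -- the per-subclient cap described above
    return math.ceil((num_databases + 1) / 2)

# One 4-DB subclient: only 3 of the 4 DBs run concurrently.
print(max_parallel_streams(4))        # 3

# Two 2-DB subclients scheduled together: 2 + 2 = all 4 DBs at once.
print(2 * max_parallel_streams(2))    # 4
```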

We have this logic to avoid starvation when reading from different disks, but it's of little utility for the cloud use case. Let me go back and look into the feasibility of addressing this.

 

Thanks,

Sunil
