
Hi Everyone,

I receive this message from time to time: “Cannot allocate Pipeline Buffer”. It looks like some kind of resource shortage for the job. I moved the backup schedule to a time when the system is less busy, but I still receive the same message. The job is restarted and then passes successfully. Does anyone have a suggestion for how to fix this?

Thanks in advance!

Hi @NDN,

 

Are you able to share the log with this error so we can check for a possible cause? These issues are usually network related.

Sometimes implementing a one-way Network Route Configuration can resolve this, as it creates and keeps alive a tunnel connection. See the following documentation for steps: https://documentation.commvault.com/commvault/v11_sp20/article?p=7208.htm

 

Best Regards,

Michael


Thanks Michael for the prompt answer.

Please see the attached log file.


@NDN , what type of backup is this?  Assuming Windows FS, the log extract shows the file scan process, but not the backup process.

Can you see what cvd.log and clbackup.log show for one of these Job IDs?
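If it helps to narrow things down, here is a minimal sketch (in Python; the helper names are my own and the log directory path is an assumption based on a default Windows install) for pulling failure lines for a specific Job ID out of those logs:

```python
from pathlib import Path

# Assumed default install path; adjust to your environment.
LOG_DIR = Path(r"C:\Program Files\Commvault\ContentStore\Log Files")

# Strings that typically flag a failure in these log extracts.
FAILURE_MARKERS = ("ERROR", "Failed to allocate", "Unable to allocate",
                   "Connection timed out")

def find_job_errors(log_text: str, job_id: str) -> list[str]:
    """Return log lines for the given Job ID that contain a failure marker."""
    return [line.strip()
            for line in log_text.splitlines()
            if job_id in line and any(m in line for m in FAILURE_MARKERS)]

def scan_logs(job_id: str, names=("cvd.log", "clBackup.log")) -> dict[str, list[str]]:
    """Scan the named log files under LOG_DIR for failures tied to job_id."""
    results = {}
    for name in names:
        path = LOG_DIR / name
        if path.exists():
            results[name] = find_job_errors(path.read_text(errors="replace"),
                                            job_id)
    return results
```

This only string-matches; it is no substitute for reading the logs in context, but it makes it quick to see whether a Job ID hit one of the usual failure messages.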


Hi Mike,

Yes, it’s Windows FS. I can’t find clbackup.log, but cvd.log is attached.


Found it.


Thanks!  I am seeing some disconnects here:

5164  40d8  11/10 08:53:55 ### ERROR: CvFwClient::connect(): Connect to 172.16.134.211:52195 failed: Connection timed out
5164  408c  11/10 08:54:16 ### ERROR: CvFwClient::connect(): Connect to 172.16.134.211:52195 failed: Connection timed out

What machine is that?  Is that the Media Agent?

Based on this, I would investigate the network connection, and/or follow @MichaelCapon’s advice of setting up a one-way persistent firewall connection:

https://documentation.commvault.com/commvault/v11_sp20/article?p=7208.htm
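To rule out basic reachability, a quick TCP probe from the client can help. This is a minimal sketch; the IP and port come from the cvd.log extract above, so substitute your own Media Agent endpoint:

```python
import socket

def check_port(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Endpoint taken from the log extract above; replace with your MA host/port.
    print("reachable" if check_port("172.16.134.211", 52195) else "unreachable")
```

A successful probe only proves the port is open at that moment; intermittent timeouts like the ones in the log would need the probe repeated over time, or a persistent route as described in the documentation link.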


I’ve checked this already. No result.


To confirm, @NDN , you set up a one way firewall and it didn’t help?

If that’s the case, I would upload the job logs as soon as you see this and call in a case (if you’ve seen it recently enough, call a case in now :nerd: ).

Want to ensure you are able to share sufficient job logs, etc.


Hi @NDN , following up to see if you have run into this issue since (and if so, if you opened a case).


Marking this as answered, though if you have time to update the thread, please do!


Sorry, but I have to say this: can we please use this as an example and ask development to improve the error handling, so that the end user is told that something is wrong with the network connection instead of receiving an error that refers to the pipeline buffer?

Enhancing this would save customers from opening tickets and improve UX, because the user would be pointed in the right direction, which should contribute to a swift self-service solution.


@Onno van den Berg , I agree with you 100%.  I’ve been reviewing the top error codes for the past year to do this exact thing.

Error codes should be informative and helpful.  Tell me what the problem is, and how to address it.  If I need further help, I can read a KB, or come seek help in the community.

We’ve made a ton of progress, though I’ll look into this one a bit deeper since it’s not so much a JPR as it is a log message and we may not have reviewed it yet.


Well, I'm all-in with you on this, and IMHO there should be a big push to make these kinds of improvements across the product. It looks like a small improvement, but it often has a big impact on user experience. Customers create support tickets that cost support and development a lot of effort to locate the root cause, and in the end all that hassle was pretty much for nothing because the customer was given an incorrect and/or unreadable error message.


Definitely.  I’ve been running through the top 100 ensuring we have KB articles attached to each, and as a result, creating CMRs to reword some messages, and even split some up.

We have made, and are still making, great progress, though as you can imagine the changes take time and will be released in the latest MRs, etc.

My motto is ‘Messages should make sense because they are clearly written, not because you work here’.

If you see anything that looks poorly worded, feel free to let me know.  Primary focus is on the more popular codes, so there’s always a chance I’m already working on it :nerd:


I am facing the same issue. While troubleshooting I have not been able to fix it; can anyone help me understand what the eventual fix for this issue was? I changed the firewall settings to one-way but am still getting the pipeline buffer error. When I run a client check readiness, it shows the client is ready with all Media Agents. The SCAN phase completes, then the backup phase fails; below are the errors I have seen in the clBackup logs.

SdtBase::generateSignature() - Setting Error [])
1620  bd8   07/05 11:25:09 4179551 SdtBase::generateSignature() -
1620  7c0   07/05 11:25:09 4179551 SdtBase::allocateBuffer(): error_set found RCId [20049326]
1620  7c0   07/05 11:25:09 4179551 [PIPELAYER  ] Failed to allocate SDT buffer. Probably, the socket was closed. SDT Error [Failed to decrypt data: wrong decryption key.]
1620  7c0   07/05 11:25:09 4179551 CFileBackup::SendHeader(5564) - Unable to allocate buffer
1620  7c0   07/05 11:25:09 4179551 CBackupBase::DoBackup(3497) - SendHeader indicates FAIL_BACKUP
1620  7c0   07/05 11:25:09 4179551 CBackupBase::DoBackup(2608) - --- 0:02.074800
1620  7c0   07/05 11:25:09 4179551 FsBackupTw::Run(510) - --- 0:02.074800 ObjectId=1, CollectFileName=C:\Program Files\Commvault\ContentStore\iDataAgent\JobResults\CV_JobResults\iDataAgent\FileSystemAgent\2\11079\NumColTot1.cvf
1620  c18   07/05 11:25:09 4179551 CCVAPipelayer::ClosePipeline() - About to destroy Data Mover
1620  c18   07/05 11:25:09 4179551 [PIPELAYER  ] Data Pipe Is Down
1620  c18   07/05 11:25:09 4179551 CCVAPipelayer::SendCommandToDSBackup() - Failed to allocate a command buffer
1620  c18   07/05 11:25:09 4179551 CCVAPipelayer::ClosePipeline() - Failure sending datamover destroy
1620  c18   07/05 11:25:09 4179551 CPipelayer::ShutdownPipeline() - stat- SDT [0000000007DF5CF0] [duration - 14 seconds]
1620  c18   07/05 11:25:09 4179551 CPipelayer::ShutdownPipeline() - pipeline has already been shutdown
1620  c18   07/05 11:25:09 4179551 CBackupBase::Close(4503) - Backup: Errors in closing the pipeline.
1620  c18   07/05 11:25:09 4179551 CBackupBase::Close(4505) - Back from closePipeline
1620  c18   07/05 11:25:09 4179551 ~CVArchive() - Destroying CVArchive. This=00000000024DC5C0
1620  c18   07/05 11:25:09 4179551 CPipelayer::ShutdownPipeline() - pipeline has already been shutdown



If you have a network route in place, check cvfwd.log on the client side for entries related to the Media Agent…
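For example, a quick way to pull the Media-Agent-related lines out of cvfwd.log (a sketch; the log path is an assumption based on a default Windows install, and `MA_NAME` is a placeholder for your Media Agent's host name or IP):

```python
from pathlib import Path

# Assumed default install path; adjust for your environment.
CVFWD_LOG = Path(r"C:\Program Files\Commvault\ContentStore\Log Files\cvfwd.log")

def lines_mentioning(text: str, peer: str) -> list[str]:
    """Return log lines that mention the given peer (case-insensitive)."""
    return [ln.strip() for ln in text.splitlines() if peer.lower() in ln.lower()]

if __name__ == "__main__":
    MA_NAME = "172.16.134.211"  # placeholder; use your MA host name or IP
    if CVFWD_LOG.exists():
        for ln in lines_mentioning(CVFWD_LOG.read_text(errors="replace"), MA_NAME):
            print(ln)
```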


No error related to communication in cvfwd.logs with Media Agent



If so, then the Media Agent side has to be checked, as well as the classic AV exclusions on both ends.

Additionally, what versions are the client and the Media Agent(s) running?



Hi Jacek Piechucki, AV exclusions are in place. We have 10 more clients from the same location, on the same subnet, using the same Media Agents, with no issues; only two out of the 12 clients have the problem, and no one can understand why. cvfwd.log is clean, the cvnet logs show no issue, and there are no errors in cvd.log; clBackup.log shows the errors below, which do not point to where the issue actually is.

7164  1814  07/22 14:12:55 4300572 CCVAPipelayer::SendCommandToDSBackup() - Failed to allocate a command buffer
7164  1814  07/22 14:12:55 4300572 CCVAPipelayer::ClosePipeline() - Failure sending datamover destroy
7164  1814  07/22 14:12:55 4300572 CPipelayer::ShutdownPipeline() - stat- SDT [0000000001EF4470] [duration - 21 seconds]
7164  1814  07/22 14:12:55 4300572 CPipelayer::ShutdownPipeline() - pipeline has already been shutdown
7164  1814  07/22 14:12:55 4300572 CBackupBase::Close(4503) - Backup: Errors in closing the pipeline.
7164  1814  07/22 14:12:55 4300572 CBackupBase::Close(4505) - Back from closePipeline
7164  1814  07/22 14:12:55 4300572 ~CVArchive() - Destroying CVArchive. This=0000000001EE3FB0
7164  1814  07/22 14:12:55 4300572 CPipelayer::ShutdownPipeline() - pipeline has already been shutdown
7164  1814  07/22 14:12:55 ####### SdtTailSrvPool::Rel: Resetting SrvPool as ref. count is 0.
7164  1814  07/22 14:12:56 4300572 CBackupBase::Close(4352) - --- 0:00.343200
7164  1814  07/22 14:12:56 4300572 FsBackupCtlr::Run(3861) - m_FsBackupTWRef Close failed
7164  1814  07/22 14:12:56 4300572 FsBackupCtlr::Run(4222) - Error occurred during the backup of assigned collect files, status=-1
7164  1814  07/22 14:12:56 4300572 FsBackupCtlr::Run(3446) - --- 0:02.917205
7164  1814  07/22 14:12:56 4300572 FsBackupCtlr::Run(3292) - Run indicated failure
7164  1814  07/22 14:12:56 4300572 FsBackupCtlr::Run(2935) - --- 0:12.199221
7164  1814  07/22 14:12:56 4300572 FsBackupCtlr::Close(5716) - Closing [2] Threads
7164  1814  07/22 14:12:56 4300572 FsBackupCtlr::MarkCFAsProcessedInSubStoresReturned(10540) - mapSubStoreByCollect - size [0]
7164  1814  07/22 14:12:56 4300572 FsBackupCtlr::MarkCFAsProcessedInSubStoresReturned(10540) - mapSubStoreByCollect - size [0]

