Solved

MongoDB Backup Issue


ETO
  • Byte
  • 34 replies

Hello,

I am using a MongoDB cluster with 3 nodes. The backup fails with the error “Volume List is empty”.

Do you know about this issue? The job also reports:

Failed to get array info. Please enter array info from Array Management. : [cvso_unsnapOneDevice:/dev/mapper/datavg-lv_data is not a Native Snap Capable Device]

 

 

225783 371f7 03/22 15:37:16 11686416 MongoDbIDA::MongoDbBackupCoordinator::populateNodeAndTaskList(540) - Number of Secondary Shards accumulated=[1]
225783 371f7 03/22 15:37:16 11686416 MongoDbIDA::MongoDbBackupCoordinator::GetSecondaryDataPath(354) - Secondary Shard Server => Shard=[MongoDbTestCluster] data=[/data01/mongodb/data] host=[gbmnct03.fw.garanti.com.tr] port=[29892]
225783 371f7 03/22 15:37:16 11686416 MongoDbIDA::MongoDbBackupCoordinator::GetSecondaryDataPath(360) - secHost name = [gbmnct03.fw.garanti.com.tr]
225783 371f7 03/22 15:37:16 11686416 MongoDbIDA::MongoDbBackupCoordinator::GetSecondaryDataPath(385) - secClientId  = [3155]
225783 371f7 03/22 15:37:16 11686416 MongoDbIDA::MongoDbBackupCoordinator::populateNodeAndTaskList(600) - addNode for secClientId  = [3155]
225783 371f7 03/22 15:37:16 11686416 MongoDbIDA::MongoDbBackupCoordinator::populateNodeAndTaskList(626) - added Task List  Tasks=[1] SecondaryShards=[1]
225783 371f7 03/22 15:37:16 11686416 DistributedIDA::CMaster::UpdateJMMisc(492) - Updating client list [3187,3155] to JM Misc table
225783 371f7 03/22 15:37:16 11686416 DistributedIDA::CMaster::getReservations(946) - Stream reservation is not required.
225783 3822a 03/22 15:37:16 11686416 DistributedIDA::CMaster::streamRefresh(3383) - Started
225783 371f7 03/22 15:37:16 11686416 DistributedIDA::CMaster::registerNodes(999) - common agents arguments:-j 11686416 -pkg MongoDB -a 2:10904 -d PGARAPPCVM01.fw.garanti.com.tr*GAR-PND-TST-ALL-MA01*8400 -t 1 -r 1647287461 -i 2 -snap -jt 11686416:5:2:0:37594 -pcj 11550590 -pcr 0 -pcb 1 -pct 1647287512 -pcs 0 -numstreams 0 -cn gbmnct02 -vm Instance001 -controller -phase backup
225783 38229 03/22 15:37:16 11686416 DistributedIDA::CCoordinatorReports::Run(144) - Reports are printed every 300 seconds
225783 371f7 03/22 15:37:16 11686416 DistributedIDA::CMaster::registerNodes(1002) - common agents arguments:-j 11686416 -pkg MongoDB -a 2:10904 -d PGARAPPCVM01.fw.garanti.com.tr*GAR-PND-TST-ALL-MA01*8400 -t 1 -r 1647287461 -i 2 -snap -jt 11686416:5:2:0:37594 -pcj 11550590 -pcr 0 -pcb 1 -pct 1647287512 -pcs 0 -numstreams 0 -vm Instance001 -controller -phase backup
225783 371f7 03/22 15:37:16 11686416 DistributedIDA::CMaster::registerNodes(1005) - common agents arguments:-j 11686416 -pkg MongoDB -a 2:10904 -d PGARAPPCVM01.fw.garanti.com.tr*GAR-PND-TST-ALL-MA01*8400 -t 1 -r 1647287461 -i 2 -snap -jt 11686416:5:2:0:37594 -pcj 11550590 -pcr 0 -pcb 1 -pct 1647287512 -pcs 0 -numstreams 0 -controller -phase backup
225783 371f7 03/22 15:37:16 11686416 DistributedIDA::CMaster::registerNodes(1057) - for gbmnct03:  -numstreams 0 -t 1 -d PGARAPPCVM01.fw.garanti.com.tr*GAR-PND-TST-ALL-MA01*8400
225783 371f7 03/22 15:37:16 11686416 AddAgent() - Added Agent [gbmnct03]
225783 371f7 03/22 15:37:16 11686416 StartRemoteAgent() - Starting Remote Agent [gbmnct03]
225783 371f7 03/22 15:37:16 11686416 StartRemoteAgent() - Launching agent executable [CVDistributor.exe] on [gbmnct03.fw.garanti.com.tr*gbmnct03*8400*8402] with args [-j 11686416 -pkg MongoDB -a 2:10904 -d PGARAPPCVM01.fw.garanti.com.tr*GAR-PND-TST-ALL-MA01*8400 -t 1 -r 1647287461 -i 2 -snap -jt 11686416:5:2:0:37594 -pcj 11550590 -pcr 0 -pcb 1 -pct 1647287512 -pcs 0 -controller -phase backup  -numstreams 0 -t 1 -d PGARAPPCVM01.fw.garanti.com.tr*GAR-PND-TST-ALL-MA01*8400 ]
225783 371f7 03/22 15:37:17 11686416 StartRemoteAgent() - Started Agent [gbmnct03]
225783 371f7 03/22 15:37:17 11686416 DistributedIDA::CMaster::OnAgentStarted(1973) - Got AgentStarted for gbmnct03
225783 371f7 03/22 15:37:17 11686416 DistributedIDA::CMaster::TaskRequest(3161) - task request for stream ID 0 [RcID:0]
225783 371f7 03/22 15:37:17 11686416 DistributedIDA::CMaster::TaskRequest(3200) - Adding task request into task queue (0)
225783 371f7 03/22 15:37:17 11686416 MongoDbIDA::MongoDbBackupCoordinator::getTask(684) - Number of listed Tasks=[1]
225783 371f7 03/22 15:37:17 11686416 MongoDbIDA::MongoDbBackupCoordinator::getTask(746) - got 1 tasks to be executed
225783 371f7 03/22 15:37:17 11686416 MongoDbIDA::MongoDbBackupCoordinator::getTask(762) - got task 1 reserved for node gbmnct03
225783 371f7 03/22 15:37:17 11686416 MongoDbIDA::MongoDbBackupCoordinator::getTask(794) - Sending MongoDBConfig message to the node Controller.
225783 371f7 03/22 15:37:17 11686416 MongoDbIDA::MongoDbBackupCoordinator::getTask(798) - dbUser=[admin]
225783 371f7 03/22 15:37:17 11686416 DistributedIDA::CMaster::DispatchTasks(1544) - will update task with m_iReferenceId 1
225783 371f7 03/22 15:37:17 11686416 DistributedIDA::CMaster::DispatchTasks(1554) - task 1 reserved for stream 1001
225783 371f7 03/22 15:37:17 11686416 DistributedIDA::CMaster::DispatchTasks(1559) - task refID:1 assigned to stream 1001 on node gbmnct03
225783 371f7 03/22 15:37:17 11686416 DistributedIDA::CMaster::DispatchTasks(1576) - will update task with m_iReferenceId 1
225783 371f7 03/22 15:37:17 11686416 DistributedIDA::CMaster::DispatchTasks(1584) - task 1 started on stream 1001
225783 371f7 03/22 15:37:19 11686416 MongoDbIDA::MongoDbBackupCoordinator::OnTaskChange(814) - Number of listed Tasks=[1]
225783 371f7 03/22 15:37:19 11686416 MongoDbIDA::MongoDbBackupCoordinator::OnTaskChange(821) - Sending Failed/Stop message to Node=[gbmnct03]
225783 371f7 03/22 15:37:19 11686416 DistributedIDA::CMaster::TaskComplete(3251) - task 1 status reported : 6
225783 371f7 03/22 15:37:19 11686416 DistributedIDA::CMaster::takeAnAction(588) - marking job status based on IDA value
225783 38229 03/22 15:37:19 11686416 DistributedIDA::CCoordinatorReportsPrivate::printReport(367) -
REPORT:--- Progress Report ----------------------------------------------------------------------------------------
REPORT:                            |Status  |Rst| Objects | Success | Failed  | Skipped | Data GB |  GB/h
REPORT:  Node:gbmnct03             |Running |  0|         |         |         |         |         |
REPORT:Stream:1001                 |Running |  0|         |         |         |         |         |
REPORT: Stream:1001|Running:0 Failed:1 Complete:0
REPORT:-----------------------------------------------------------------------------------------------------------
REPORT:                            |TOTAL   |  0|         |         |         |         |         |
225783 38229 03/22 15:37:19 11686416 DistributedIDA::CCoordinatorReports::Run(187) - report thread finished
225783 3822a 03/22 15:37:19 11686416 DistributedIDA::CMaster::streamRefresh(3463) - done
225783 371f7 03/22 15:37:19 11686416 MongoDbIDA::MongoDbBackupCoordinator::CleanupSnapshots(1433) - Cleanup snapshots created from failed attempts ...
225783 371f7 03/22 15:37:19 11686416 CVSnapClientAPIInternal::getVolumeSnaps() - Request for getVolumeSnaps - JId [11686416] CCId [2].
225783 371f7 03/22 15:37:19 11686416 CVMMSnapAPI::getVolumeSnaps() - Completed the getVolumeSnaps operation.
225783 371f7 03/22 15:37:19 11686416 CVSnapClientAPIInternal::getVolumeSnaps() - Request for getVolumeSnaps Succeeded. Status [0].
225783 371f7 03/22 15:37:19 11686416 CVSnapClientAPIInternal::initialize() - Request for CVSnapClientAPIInternal::deleteVolumeSnaps - JId [11686416] CCId [2].
225783 371f7 03/22 15:37:19 11686416 CVSnapClientAPIInternal::checkVolumeList() - Volume List is empty [0]. Err [60114:Volume List is empty].
225783 371f7 03/22 15:37:19 11686416 CVSnapClientAPIInternal::setJPR() - Setting JPR Job Id [11686416] Err [60114] EvErr [1040189160] ErrStr [Volume List is empty]
225783 371f7 03/22 15:37:19 11686416 CVSnapClientAPIInternal::setJPR() - JPR Set - Error [60114, Volume List is empty] Custom Error [Volume List is empty]
225783 371f7 03/22 15:37:19 11686416 CVSnapClientAPIInternal::deleteVolumeSnaps() - Request for deleteVolumeSnaps Failed. Status [-1].
225783 371f7 03/22 15:37:19 11686416 MongoDbIDA::MongoDbBackupCoordinator::CleanupSnapshots(1496) - Unable to delete snapshots created from failed attempts  [Volume List is empty]
225783 371f7 03/22 15:37:19 11686416 DistributedIDA::CMaster::endPhaseTasks(3719) - 0x80070306:{MongoDbIDA::MongoDbBackupCoordinator::OnComplete(938)/W32.774.(One or more errors occurred while processing the request. (ERROR_ERRORS_ENCOUNTERED.774))-Failed to complete the snap backup}
225783 371f7 03/22 15:37:19 11686416 DistributedIDA::CMaster::endPhaseTasks(3729) - m_jsStatus:2 m_jsStatusFor:1 m_jsPendingCause:4
225783 371f7 03/22 15:37:19 11686416 Sending FAILED complete message to JM, 11686416
225783 371f7 03/22 15:37:19 11686416 Deinitialize() - Disconnecting agent [gbmnct03]
225783 371f7 03/22 15:37:19 11686416 main(169) - ---------------------
225783 371f7 03/22 15:37:19 11686416 main(170) - ENDING DistributedIDA
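From the log above, the “Volume List is empty” message appears to come from the post-failure snapshot cleanup finding nothing to delete; the underlying failure looks like the earlier check that flags /dev/mapper/datavg-lv_data as not being a native snap capable device. So the first thing worth confirming is that the MongoDB data path on the secondary node really resolves to an LVM (device-mapper) volume. A minimal sketch of that check, my own illustration rather than anything from the Commvault tooling (the data path is taken from the log):

import os

DATA_PATH = "/data01/mongodb/data"   # secondary shard data path from the log

def mount_source(path):
    # Return (mount point, backing device) for the longest matching mount.
    best_mnt, best_src = "", None
    with open("/proc/mounts") as f:
        for line in f:
            src, mnt = line.split()[:2]
            if path.startswith(mnt) and len(mnt) > len(best_mnt):
                best_mnt, best_src = mnt, src
    return best_mnt, best_src

mnt, src = mount_source(os.path.realpath(DATA_PATH))
print(f"{DATA_PATH} is mounted from {src} at {mnt}")

# LVM volumes are device-mapper devices; /dev/mapper links resolve to /dev/dm-N.
real = os.path.realpath(src) if src else ""
if real.startswith("/dev/dm-"):
    with open(f"/sys/class/block/{os.path.basename(real)}/dm/name") as f:
        print(f"Device-mapper/LVM volume detected: {f.read().strip()}")
else:
    print("Backing device is not device-mapper; an LVM snap engine would not recognize it.")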
 


4 replies

Mike Struening
Vaulter

Hi @ETO , thanks for the post (and welcome)!

I checked our internal database and didn’t see other incidents with this message.

I went to the Associate Manager of the team that supports MongoDB and he immediately suggested opening a support case as this is a complex issue best handled in that venue.

Can you share the case number once created so I can track it accordingly?

Thanks!


ETO
  • Author
  • Byte
  • 34 replies
  • March 24, 2022

Hello @Mike Struening 

I have opened a support case.

Case number 220322-652

 

Thanks


Mike Struening
Vaulter

Thanks! I’ll keep an eye on it, though by all means, if you get a solution before I update this thread, feel free to share it!


Mike Struening
Vaulter

Adding the case solution. If this occurs again, please let us know!

Solution:

We suspected an issue with the LVM mount points and were escalating internally, but the customer confirmed the issue resolved on its own.
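
For anyone hitting the same error: since the root cause pointed at the LVM mount points, one quick sanity check before re-running the snap backup is to list every device-mapper mount and confirm the MongoDB data volume (/dev/mapper/datavg-lv_data in the log above) is still mapped where the agent expects it. A minimal sketch, my own illustration rather than anything from the case notes:

import os

# Print every mount whose backing device is a device-mapper (LVM) volume.
with open("/proc/mounts") as f:
    for line in f:
        src, mnt = line.split()[:2]
        if os.path.realpath(src).startswith("/dev/dm-"):
            print(f"{src:45} mounted at {mnt}")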

