Solved

backup direct to tape

  • September 19, 2022
  • 9 replies
  • 944 views


Hi team,

Environment:

  • Physical CommServe: DL160 / Windows 2019 / Version 11.28.19 / 10 Gb NIC
  • One LTO-8 drive in an MSL2024 library / SAS
  • vSphere 7 cluster / 10 Gb NIC

I am trying to back up a monster VM (32 TB) directly to tape.

I think the throughput is quite low (see below).

I would like to increase the throughput. How can I do this? Any help appreciated.

Thanks.

 

Best answer by Jordan

The PerfMgr log indicates a slow read from the VMDK. Can you post some stats from vsbkp.log?

 

Also, since this is tape, you would want to increase the block size and chunk size in the Data Path Properties of the storage policy copy, along with some multiplexing, to try to maximise the write speed we can achieve to the tape drive.


9 replies

Mike Struening
Vaulter

Hi @virgil , thanks for the thread!

Looking at your screenshot, 77.34% of the load is the write (i.e. the library itself).

LTO-8 has a compressed transfer rate of 750 MB/sec (360 MB/sec native). That is of course the ideal case, but a good measure to start from.

The native rate alone is roughly 0.36 GB/sec, or about 1,296 GB an hour, far higher than you are seeing.
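The conversion is straightforward arithmetic (a quick sketch in decimal units; the rates are the LTO-8 drive ratings quoted above):

```python
def mb_s_to_gb_hr(rate_mb_s):
    # MB/sec -> GB/hour, using decimal units (1 GB = 1000 MB)
    return rate_mb_s * 3600 / 1000

print(mb_s_to_gb_hr(360))  # LTO-8 native: 1296.0 GB/hr
print(mb_s_to_gb_hr(750))  # LTO-8 compressed: 2700.0 GB/hr
```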

Our best place to start is the cvperfmgr.log on the Media Agent.  Can you paste the entry for this job?

It’s most likely an issue with the tape/drive, etc. but we can gather some solid information to begin in this log file.


  • Author
  • Byte
  • 45 replies
  • September 19, 2022

Hi Mike,

thanks for your answer.

The output from the log is below:

|*113*|*Perf*|179| Job-ID: 179            [Pipe-ID: 113]            [App-Type: 106]            [Data-Type: 1]
|*113*|*Perf*|179| Stream Source:   fif-hp-p003
|*113*|*Perf*|179| Network medium:   SDT
|*113*|*Perf*|179| Head duration (Local):  [16,September,22 21:00:42  ~  16,September,22 22:20:35] 01:19:53 (4793)
|*113*|*Perf*|179| Tail duration (Local):  [16,September,22 21:00:42  ~  16,September,22 22:20:35] 01:19:53 (4793)
|*113*|*Perf*|179| --------------------------------------------------------------------------------------------------------------------------------
|*113*|*Perf*|179|     Perf-Counter                                                             Time(seconds)              Size
|*113*|*Perf*|179| --------------------------------------------------------------------------------------------------------------------------------
|*113*|*Perf*|179|
|*113*|*Perf*|179| Virtual Server Agent
|*113*|*Perf*|179|  |_VM[c0766011med].........................................................         -                          
|*113*|*Perf*|179|    |_Config Files[VM Files]................................................         -                          
|*113*|*Perf*|179|      |_Read................................................................         -                   6358925  [6.06 MB]
|*113*|*Perf*|179|      |_Open File...........................................................         -                            [Samples - 12] [Avg - 0.000000]
|*113*|*Perf*|179|      |_Buffer allocation...................................................         -                            [Samples - 158] [Avg - 0.000000]
|*113*|*Perf*|179|      |_Pipeline write......................................................         -                   6382036  [6.09 MB]
|*113*|*Perf*|179|    |_Disk[c0766011med_3.vmdk]..............................................         -                          
|*113*|*Perf*|179|      |_Disk Read...........................................................      4534              196268261727  [182.79 GB] [145.13 GBPH]
|*113*|*Perf*|179|      |_Buffer allocation...................................................         -                            [Samples - 748705] [Avg - 0.000000]
|*113*|*Perf*|179|      |_Pipeline write......................................................        42              194866490980  [181.48 GB] [15555.73 GBPH]
|*113*|*Perf*|179|
|*113*|*Perf*|179| Reader Pipeline Modules[Client]
|*113*|*Perf*|179|  |_CVA Wait to received data from reader...................................      4743                          
|*113*|*Perf*|179|  |_CVA Buffer allocation...................................................        19                          
|*113*|*Perf*|179|  |_SDT: Receive Data.......................................................      4530              195302725665  [181.89 GB]  [Samples - 3118185] [Avg - 0.001453] [144.55 GBPH]
|*113*|*Perf*|179|  |_SDT-Head: CRC32 update..................................................       141              195302668129  [181.89 GB]  [Samples - 3118184] [Avg - 0.000045] [4643.99 GBPH]
|*113*|*Perf*|179|  |_SDT-Head: Network transfer..............................................        64              195302668129  [181.89 GB]  [Samples - 3118184] [Avg - 0.000021] [10231.30 GBPH]
|*113*|*Perf*|179|  |_SDT:Stats...............................................................         -                          
|*113*|*Perf*|179|    |_[Compression : LZO]...................................................         -                          
|*113*|*Perf*|179|    |_[Buf size : 65536]....................................................         -                          
|*113*|*Perf*|179|    |_[Buf count : 90]......................................................         -                          
|*113*|*Perf*|179|    |_[SDT threads : 20]....................................................         -                          
|*113*|*Perf*|179|    |_[Processor count : 20]................................................         -                          
|*113*|*Perf*|179|    |_[Thread per Connection : 10]..........................................         -                          
|*113*|*Perf*|179|
|*113*|*Perf*|179| Writer Pipeline Modules[MediaAgent]
|*113*|*Perf*|179|  |_[Stream target: fif-hp-p003]............................................         -                          
|*113*|*Perf*|179|  |_SDT-Tail: Wait to receive data from source..............................      4775              195302725665  [181.89 GB]  [Samples - 3118185] [Avg - 0.001531] [137.13 GBPH]
|*113*|*Perf*|179|  |_SDT-Tail: Writer Tasks..................................................       808              195302668129  [181.89 GB]  [Samples - 3118184] [Avg - 0.000259] [810.40 GBPH]
|*113*|*Perf*|179|    |_DSBackup: Update Restart Info.........................................         6                          
|*113*|*Perf*|179|    |_DSBackup: Update Index................................................        32                          
|*113*|*Perf*|179|    |_DSBackup: Update Restart Info Index...................................         1                          
|*113*|*Perf*|179|    |_DSBackup: Media Write.................................................       705              195177535652  [181.77 GB] [928.20 GBPH]
|*113*|*Perf*|179|
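(Aside: the GBPH rates in this log can be reproduced from the raw byte and time counters. A quick check, assuming the log reports sizes in binary GB, using the Disk Read and Media Write counters above:)

```python
def gbph(num_bytes, seconds):
    # bytes over seconds -> GB per hour, binary GB (1 GB = 2**30 bytes)
    return num_bytes / 2**30 / seconds * 3600

print(round(gbph(196268261727, 4534), 2))  # Disk Read   -> ~145.13 GBPH
print(round(gbph(195177535652, 705), 2))   # Media Write -> ~928.2 GBPH
```

The disk read (145 GBPH over 4534 s) is far slower than the media write (928 GBPH over only 705 s), which is consistent with the read side being the bottleneck.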


Graham Swift
Vaulter
  • 79 replies
  • September 19, 2022

How many streams are you running in the job? I am assuming the large VM has multiple disks?

Do you have combine streams or multiplexing enabled?

Check to see if you have the latest drivers for the tape drive installed as well.

 


Jordan
  • Vaulter
  • 135 replies
  • Answer
  • September 20, 2022

The PerfMgr log indicates a slow read from the VMDK. Can you post some stats from vsbkp.log?

 

Also, since this is tape, you would want to increase the block size and chunk size in the Data Path Properties of the storage policy copy, along with some multiplexing, to try to maximise the write speed we can achieve to the tape drive.


Mike Struening
Vaulter

Thanks for posting @virgil !

The brilliant minds of @Graham Swift and @Jordan have provided great feedback, as always!


  • Author
  • Byte
  • 45 replies
  • September 20, 2022

Hi team,

@Graham Swift: 3 streams for this job. Multiplexing is enabled. The latest driver for the tape drive is installed.

@Jordan: Chunk size and block size are at their defaults in the Data Path Properties.

What values should I use?

 

Some stats from vsbkp.log:

17932 5974  09/20 18:13:17 179 vsJobMgr::updateVMBkpJobStatus() - Sending VM status for [1] virtual machines
17932 5b44  09/20 18:13:24 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SAS-MSA01-PACS] [24,747,642,454,016] bytes free [41.25%]  Update Interval [300] seconds  [1] VMs
17932 5b44  09/20 18:13:24 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,527,672,320] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 5b44  09/20 18:14:14 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SAS-MSA01-PACS] [24,747,642,454,016] bytes free [41.25%]  Update Interval [300] seconds  [1] VMs
17932 5b44  09/20 18:14:15 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,527,672,320] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 5974  09/20 18:14:47 179 stat- ID [readdisk], Bytes [18530674147681], Time [334414.761606] Sec(s), Average Speed [52.845239] MB/Sec
17932 5974  09/20 18:14:47 179 stat- ID [Datastore Read [VMFS-SAS-MSA01-PACS]], Bytes [18530674147681], Time [334427.060304] Sec(s), Average Speed [52.843296] MB/Sec
17932 4184  09/20 18:14:47 179 stat- ID [allocPLBuffer], Samples [70688910], Time [80.399191] Sec(s), Average [0.000001] Sec/Sample
17932 4184  09/20 18:14:47 179 stat- ID [writePLBuffer], Bytes [18540174863015], Time [5937.205690] Sec(s), Average Speed [2978.048852] MB/Sec
17932 5974  09/20 18:15:17 179 vsJobMgr::updateVMBkpJobStatus() - Sending VM status for [1] virtual machines
17932 5b44  09/20 18:15:25 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SAS-MSA01-PACS] [24,747,642,454,016] bytes free [41.25%]  Update Interval [300] seconds  [1] VMs
17932 5b44  09/20 18:15:25 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,527,672,320] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 2908  09/20 18:15:48 179 TPool [SdtHeadThPool]. Tot [342756], Pend [0], Comp [342756], Max Par [12], Avg Work Time [72.89 us], Avg Wait Time [5.41 us]
17932 5b44  09/20 18:16:15 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,527,672,320] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 5974  09/20 18:17:17 179 vsJobMgr::updateVMBkpJobStatus() - Sending VM status for [1] virtual machines
17932 5b44  09/20 18:17:25 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SAS-MSA01-PACS] [24,747,642,454,016] bytes free [41.25%]  Update Interval [300] seconds  [1] VMs
17932 5b44  09/20 18:17:25 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,527,672,320] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 14dc  09/20 18:17:59 179 VSBkpCoordinator::OnIdle_Running() - Waiting for [1] VMs to be processed.  [c0766011med]
17932 14dc  09/20 18:17:59 179 VSBkpCoordinator::DumpStreamInfo() - StreamId = 2, RCID = 113, ReservationId = 101, MediaAgent = [fif-hp-p003], Agent = []  [Released]
17932 14dc  09/20 18:17:59 179 VSBkpCoordinator::DumpStreamInfo() - StreamId = 2, RCID = 115, ReservationId = 101, MediaAgent = [fif-hp-p003], Agent = [fif-hp-p003]  [Assigned]
17932 14dc  09/20 18:17:59 179 VSBkpCoordinator::DumpStreamInfo() - StreamId = 2, RCID = 116, ReservationId = 101, MediaAgent = [fif-hp-p003], Agent = []  [Released]
17932 5b44  09/20 18:18:16 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,527,672,320] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 5b44  09/20 18:19:16 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SAS-MSA01-PACS] [24,746,884,333,568] bytes free [41.24%]  Update Interval [300] seconds  [1] VMs
17932 5974  09/20 18:19:17 179 vsJobMgr::updateVMBkpJobStatus() - Sending VM status for [1] virtual machines
17932 5b44  09/20 18:19:26 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SAS-MSA01-PACS] [24,746,884,333,568] bytes free [41.24%]  Update Interval [300] seconds  [1] VMs
17932 5b44  09/20 18:19:26 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,527,672,320] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 5b44  09/20 18:20:17 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,543,400,960] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 2908  09/20 18:20:48 179 TPool [SdtHeadThPool]. Tot [281736], Pend [0], Comp [281736], Max Par [11], Avg Work Time [72.38 us], Avg Wait Time [5.41 us]
17932 5974  09/20 18:21:17 179 vsJobMgr::updateVMBkpJobStatus() - Sending VM status for [1] virtual machines
17932 5b44  09/20 18:21:27 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SAS-MSA01-PACS] [24,746,884,333,568] bytes free [41.24%]  Update Interval [300] seconds  [1] VMs
17932 5b44  09/20 18:21:27 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,543,400,960] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 5b44  09/20 18:22:17 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,543,400,960] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 14dc  09/20 18:23:09 179 VSBkpCoordinator::OnIdle_Running() - Waiting for [1] VMs to be processed.  [c0766011med]
17932 14dc  09/20 18:23:09 179 VSBkpCoordinator::DumpStreamInfo() - StreamId = 2, RCID = 113, ReservationId = 101, MediaAgent = [fif-hp-p003], Agent = []  [Released]
17932 14dc  09/20 18:23:09 179 VSBkpCoordinator::DumpStreamInfo() - StreamId = 2, RCID = 115, ReservationId = 101, MediaAgent = [fif-hp-p003], Agent = [fif-hp-p003]  [Assigned]
17932 14dc  09/20 18:23:09 179 VSBkpCoordinator::DumpStreamInfo() - StreamId = 2, RCID = 116, ReservationId = 101, MediaAgent = [fif-hp-p003], Agent = []  [Released]
17932 5974  09/20 18:23:17 179 vsJobMgr::updateVMBkpJobStatus() - Sending VM status for [1] virtual machines
17932 5b44  09/20 18:23:27 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SAS-MSA01-PACS] [24,746,884,333,568] bytes free [41.24%]  Update Interval [300] seconds  [1] VMs
17932 5b44  09/20 18:23:27 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,543,400,960] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 5b44  09/20 18:24:18 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SAS-MSA01-PACS] [24,746,884,333,568] bytes free [41.24%]  Update Interval [300] seconds  [1] VMs
17932 5b44  09/20 18:24:18 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,543,400,960] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
17932 5974  09/20 18:24:47 179 stat- ID [readdisk], Bytes [18567528448353], Time [335013.232996] Sec(s), Average Speed [52.855748] MB/Sec
17932 5974  09/20 18:24:47 179 stat- ID [Datastore Read [VMFS-SAS-MSA01-PACS]], Bytes [18567528448353], Time [335025.558982] Sec(s), Average Speed [52.853803] MB/Sec
17932 4184  09/20 18:24:47 179 stat- ID [allocPLBuffer], Samples [70829498], Time [80.567833] Sec(s), Average [0.000001] Sec/Sample
17932 4184  09/20 18:24:47 179 stat- ID [writePLBuffer], Bytes [18577019362441], Time [5948.398713] Sec(s), Average Speed [2978.352174] MB/Sec
17932 5974  09/20 18:25:17 179 vsJobMgr::updateVMBkpJobStatus() - Sending VM status for [1] virtual machines
17932 5b44  09/20 18:25:28 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SAS-MSA01-PACS] [24,746,884,333,568] bytes free [41.24%]  Update Interval [300] seconds  [1] VMs
17932 5b44  09/20 18:25:28 179 VSBkpDatastoreMonitor::ReportDatastore() - Datastore [VMFS-SSD-MSA01] [406,543,400,960] bytes free [16.95%]  Update Interval [120] seconds  [1] VMs
 

Thanks.


Jordan
  • Vaulter
  • 135 replies
  • September 20, 2022

@virgil - the read speed we are getting from your ESX datastore is ~50 MB/s. You're not going to get faster backups if the read from the source is the bottleneck. If this VM is 32 TB, the backup will take about 187 hours to complete.

 

50 MB/s converts to about 180 GB/hr, which is what your job is showing.
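The arithmetic behind both figures (a sketch; the ~187-hour estimate assumes binary units, i.e. 1 TB taken as 1024 × 1024 MB):

```python
def backup_hours(size_tb, rate_mb_s):
    # total MB (binary: TB -> MB via 1024*1024) divided by rate, in hours
    return size_tb * 1024 * 1024 / rate_mb_s / 3600

def gb_per_hour(rate_mb_s):
    # decimal conversion: MB/sec -> GB/hour
    return rate_mb_s * 3600 / 1000

print(round(backup_hours(32, 50)))  # ~186 hours for 32 TB at 50 MB/s
print(gb_per_hour(50))              # 180.0 GB/hr
```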


Jos Meijer
Commvault Certified Expert
  • 638 replies
  • September 20, 2022

I agree with @Jordan: this is the actual bottleneck.

How many VSAs do you have assigned to this hypervisor backup?

How much CPU and memory is assigned to the VSA?

Is it physical or VM-based?

What transport mode are you using?

 


  • Author
  • Byte
  • 45 replies
  • September 21, 2022

@Jordan: Yes, the average speed is about 52.86 MB/sec...

@Jos Meijer: 1 VSA / 64 GB. It's a physical CommServe. We use NBD as the transport mode.

Thanks

