Slow performance when reading/writing over the network


Manni01


Hi there,

 

I'm wondering if there's a way to optimize ImgBurn's performance when reading AND writing from/to the network.

 

If I only read from a NAS share to a local disc, or write from a local disc to a NAS share, performance is as expected (around 100MB/s on my gigabit CAT6 network).

 

But if I read from a NAS share and write to another, the performance drops to around 30MB/s. Given the limitation of the network, I would expect it to drop to around 45-50MB/s, possibly a bit less. I use a QNAP TS859, a QNAP TS809, and a Synology DS2411.

 

Clients are Win7 x64 Ultimate desktops or laptops.

 

Any idea why the performance is not so good over the network? I tried with different clients and different NAS units; ImgBurn seems to hit a ceiling around 30-35MB/s in this situation.

 

Thanks!


Network shares/devices often like the OS's buffering to be enabled.

 

You do that on the 'I/O' tab in the settings.

 

That said, reading and writing to the same physical device is always going to be slower than reading and writing to different ones.

 

Thanks for your quick reply, and apologies for my late reaction... I forgot to click "follow this topic"...

 

I tried enabling OS buffering, but it doesn't improve anything.

 

As I said, I am reading from one NAS and writing to another, so they are different physical devices. Each NAS is able to do around 100MB/s read/write, so it saturates the gigabit/CAT6 connection.

 

Except when ImgBurn is doing the reading/writing to/from two separate NAS units.

 

It goes like this:

 

LOCAL DRIVE --> IMGBURN --> NAS1 or NAS2 = 100MB/s

NAS1 or NAS2 --> IMGBURN --> LOCAL DRIVE = 100MB/s

NAS1 --> IMGBURN --> NAS2 = 30MB/s

 

I tried selecting Elby I/O instead of MS, and it doesn't make any difference either.

 

Any other idea?

 

Have you done first-hand tests using network shares (NAS, not MS client) as source/dest?


Ah sorry, I missed the bit about you having multiple NAS devices. I don't even have one, so no, I haven't done any first-hand tests at all!

 

What does the 'Buffer' look like when you're reading from one and writing to another?

 

If it's full, it means it can't write the data out quickly enough (so the source is faster than the dest). If it's empty, it can't read the data in quickly enough (so the source is slower than the dest).
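To illustrate that reasoning, here's a rough, self-contained C sketch (not ImgBurn's actual code - the buffer size and fill level are made-up numbers) of how the transfer buffer's fill level points at the bottleneck:

#include <stdio.h>

int main(void)
{
    /* Hypothetical numbers, purely for illustration. */
    const double buffer_capacity = 40.0 * 1024 * 1024; /* a 40 MiB transfer buffer */
    const double bytes_in_buffer =  1.0 * 1024 * 1024; /* observed fill level */

    double fill = bytes_in_buffer / buffer_capacity;

    if (fill > 0.90)
        /* Reads refill the buffer faster than writes drain it. */
        puts("Buffer stays full: the destination (write side) is the bottleneck.");
    else if (fill < 0.10)
        /* Writes drain the buffer faster than reads refill it. */
        puts("Buffer stays empty: the source (read side) is the bottleneck.");
    else
        puts("Buffer hovers in between: source and destination are roughly matched.");

    return 0;
}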

 

Using ElbyCDIO won't do anything, as it has nothing to do with the API functions used for reading and writing files.


Buffer looks normal (full), except at the very beginning when it fills up progressively, and at the very end when it empties progressively.

 

It looks to me like it's related to the way ImgBurn handles the I/O when both source and dest are network shares.

 

Again, when a share is only the source or only the destination, it goes at the full expected speed, which proves that it's not related to either the source or the destination per se.

 

If I reverse source and dest, I get the same (low) performance, around 30MB/s.


It doesn't know (or care) if things are shares, local drives, two different physical drives, etc. It just has a couple of paths that it passes to the 'CreateFile' API, and that's it.

 

It's then doing ReadFile and WriteFile on the handles obtained by CreateFile as quickly as it's able to.

 

Whilst the reading part has an option of enabling the OS's buffering, the writing part doesn't.
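As a rough illustration of that pattern, here's a minimal Win32 copy loop in C. It is not ImgBurn's actual code: the UNC paths, chunk size and flag choices are made up, and error handling is kept to a minimum. The point is just that the source handle can be opened with or without FILE_FLAG_NO_BUFFERING (the 'OS buffering' option), while the destination simply receives plain WriteFile calls:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical UNC paths, for illustration only. Omitting
       FILE_FLAG_NO_BUFFERING here is what "enabling the OS's buffering"
       corresponds to; with the flag, reads bypass the cache and need
       sector-aligned buffers. */
    HANDLE hSrc = CreateFileA("\\\\NAS1\\share\\movie.iso", GENERIC_READ,
                              FILE_SHARE_READ, NULL, OPEN_EXISTING,
                              FILE_FLAG_NO_BUFFERING | FILE_FLAG_SEQUENTIAL_SCAN,
                              NULL);

    /* Destination opened normally, i.e. writes go through the OS cache. */
    HANDLE hDst = CreateFileA("\\\\NAS2\\share\\movie.iso", GENERIC_WRITE,
                              0, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);

    if (hSrc == INVALID_HANDLE_VALUE || hDst == INVALID_HANDLE_VALUE)
    {
        fprintf(stderr, "CreateFile failed (error %lu)\n", GetLastError());
        return 1;
    }

    /* 1 MiB chunks; VirtualAlloc returns page-aligned memory, which
       satisfies the alignment FILE_FLAG_NO_BUFFERING requires. */
    const DWORD chunk = 1 << 20;
    BYTE *buf = (BYTE *)VirtualAlloc(NULL, chunk, MEM_COMMIT | MEM_RESERVE,
                                     PAGE_READWRITE);
    if (buf == NULL)
        return 1;

    DWORD bytesRead = 0, bytesWritten = 0;
    while (ReadFile(hSrc, buf, chunk, &bytesRead, NULL) && bytesRead > 0)
    {
        if (!WriteFile(hDst, buf, bytesRead, &bytesWritten, NULL))
            break;
    }

    VirtualFree(buf, 0, MEM_RELEASE);
    CloseHandle(hSrc);
    CloseHandle(hDst);
    return 0;
}

Everything network-related happens below these calls, inside the OS, which matches the point above: the program just sees two paths and two file handles.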


Any way to enable buffered I/O for the writing? That might improve things.

 

Could there be some collisions due to the sustained high-bandwidth reading/writing going through the Ethernet connection simultaneously in opposite directions?

 

Or maybe a timing issue?

 

I'm only trying to understand why the performance is so much lower than it should be; I realise that ImgBurn doesn't directly handle the low-level traffic between source and dest.


Sorry, that was a mistake on my part... Build mode always writes out using buffering.

 

It's Read mode that writes out without buffering.

 

What happens if you run ImgBurn twice? Have 1 reading from NAS and writing to local. Have the other reading local and writing to NAS.

 

Does the throughput then drop on both, or do they still manage 100MB/s?


I was preparing to do the requested test, but I discovered something which might help us understand what's happening.

 

I didn't have any Blu-ray folder on my local drive, so I mounted an ISO from one of the NAS units using Virtual Clone Drive, and started copying the content to my local drive (an SSD delivering 200MB/s+ with large files in R/W, to rule out any HDD-induced bottleneck in the experiment). This is on a quad-core with gigabit Ethernet.

I was surprised to see that the speed, even when copying the main .m2ts (so a large file), was only 32MB/s (roughly the performance of ImgBurn when reading from one NAS and writing to another).

Once the copy was over, I tried to copy the ISO itself from the same NAS to the same local drive, and I got my usual 100-110MB/s.

So I had a look at the settings in VCD, and buffered I/O was enabled. I disabled it, tried to copy the BDMV from the mounted ISO, and the speed shot up to 50-60 MB/s (about double).

 

So it looks like 1) buffered I/O seems to have a negative influence on the copy speed, at least for VCD, and 2) VCD seems to suffer from the same drop in performance as ImgBurn when reading from a NAS, compared to a direct file copy. If I write to the NAS, I get the expected performance: around 100MB/s with a direct file copy, and around 95MB/s with both VCD and ImgBurn (so an acceptable overhead), whether buffered I/O is enabled in VCD or not.

 

So the limitation seems to be in reading from a network share with some tools, but not when copying files directly.

 

Does that help?

 

If buffered I/O is disabled for reading in ImgBurn, it shouldn't impact performance negatively (if it behaves like VCD), as the worse performance comes when buffering is enabled.


I've done the test, and here are the results with ImgBurn:

 

HDD -> NAS1 = 90MB/s

NAS2 -> HDD = 70MB/s

When run at the same time, NAS1 holds its speed and NAS2 drops to 20MB/s, which is consistent with a maximum combined throughput of 110MB/s on the gigabit network.

 

NAS1 -> HDD = 70MB/s

HDD -> NAS2 = 95MB/s

When run at the same time, NAS2 holds its speed and NAS1 drops to 15MB/s, which is also consistent with a maximum combined throughput of 110MB/s on the gigabit network.

 

So I would expect an optimal NAS1 to NAS2 transfer to be around 50-55MB/s, at least 50% faster than it is at the moment (around 30-35MB/s).


A couple more things:

 

1) I was wrong: the buffer is empty during the NAS->NAS operation, so it's read-limited.

2) If I start two instances of ImgBurn copying different files from the same NAS1 to the same NAS2, I get around 22MB/s for each instance, which totals more than the 30MB/s I get when only one instance is running. This shows that we should be able to reach at least 45MB/s in a single operation if it were optimised.


Sorry, but I really have no idea how to do things any differently, and I have no way of testing anything anyway. I wouldn't hold your breath for any improvement - but I am adding a few options that may or may not help :(

 

No problem, I understand. Just trying to provide as much info as I can.

 

If you let me know what the options are in the new build, I'll be happy to test for you.

 

Appreciate your excellent support as always :thumbup:

