Message183609

Author: neologix
Recipients: eric.araujo, giampaolo.rodola, neologix, pitrou, rosslagerwall
Date: 2013-03-06 19:49:04
Message-id: <CAH_1eM1wwpqKJtQVTqNtrDwh2TZrPcuSW2E+GPwF4XaSpZtg+A@mail.gmail.com>
In-reply-to: <1362593397.98.0.766187186143.issue13564@psf.upfronthosting.co.za>

Content:
> Specifying a big blocksize doesn't mean the transfer will be faster.
> send/sendfile won't send more than a certain amount of bytes anyways.
The transfer won't be faster mainly because it's really I/O bound.
But it will use less CPU, simply because you're making fewer syscalls.
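For illustration, here is a minimal sketch of the kind of send loop we're
talking about (the helper name and default blocksize are mine, not from
any patch):

import socket

def send_loop(sock: socket.socket, f, blocksize=65536):
    # Illustrative only: a bigger blocksize means fewer read()/send()
    # syscalls for the same amount of data, hence less CPU; the transfer
    # rate itself is still bound by the network.
    total = 0
    while True:
        chunk = f.read(blocksize)
        if not chunk:
            break
        sock.sendall(chunk)  # may itself issue several send() syscalls
        total += len(chunk)
    return total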
> If I'm not mistaken I recall from previous benchmarks that after a certain point (131072 or something) increasing the blocksize results in equal or even worse performances.
I can well believe this for a send loop, maybe because you're
exceeding the socket buffer size, or because your working set no
longer fits into the caches, etc.
But for sendfile(), I don't see how calling it repeatedly could be
anything but slower than calling it once with the overall size: that's
how netperf and vsftpd use it, and probably others do too.
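Concretely, something like this (a sketch, not the actual patch; the
helper name is mine): one fstat() to get the size, then a single
sendfile() in the common case, retrying only on a short transfer:

import os

def sendfile_once(sock, fd):
    # Sketch: pass the whole remaining size to sendfile() instead of
    # looping over a fixed blocksize. The kernel may still send less
    # than asked, so short transfers are retried, but in the common
    # case this is one syscall for the whole file.
    size = os.fstat(fd).st_size
    offset = 0
    while offset < size:
        sent = os.sendfile(sock.fileno(), fd, offset, size - offset)
        if sent == 0:
            break  # peer closed the connection
        offset += sent
    return offset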
> Another thing I don't like is that by doing so you implicitly assume that the file is "fstat-eable". I don't know if there are cases where it's not, but the less assumptions we do the better.
Well, the file must be mmap-able, so I doubt fstat() is the biggest concern...