Message217141
| Author | tiwilliam |
| Recipients | ezio.melotti, nadeem.vawda, serhiy.storchaka, skip.montanaro, tiwilliam |
| Date | 2014-04-24 23:53:21 |
| SpamBayes Score | -1.0 |
| Marked as misclassified | Yes |
| Message-id | <1398383602.58.0.527558081815.issue20962@psf.upfronthosting.co.za> |
| In-reply-to | |
| Content |
I played around with different file and chunk sizes using the attached benchmark script.
After several test runs, I think 1024 * 16 gives the biggest win without losing too many μs on small seeks. You can find my benchmark output here: https://gist.github.com/tiwilliam/11273483
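For context, here is an illustrative sketch (not a copy of the gzip.py code) of what that chunk size controls: a forward seek in a GzipFile has to decompress and discard everything up to the target offset, and it does so in fixed-size read() calls, so raising the chunk from 1024 to 1024 * 16 means far fewer calls on large seeks.

```python
import io

CHUNK = 1024 * 16  # proposed value; the existing loop reads 1024 bytes at a time

def forward_seek(fileobj, offset, current=0):
    # Advance a decompressed stream by reading and discarding data in
    # CHUNK-sized pieces, mirroring the pattern gzip uses for forward seeks.
    count = offset - current
    for _ in range(count // CHUNK):
        fileobj.read(CHUNK)
    fileobj.read(count % CHUNK)
    return offset

# Toy usage against an in-memory stream standing in for decompressed data.
stream = io.BytesIO(b"x" * (1024 * 64))
forward_seek(stream, 40000)
print(stream.tell())  # 40000
```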
My test data was generated with the following commands:
dd if=/dev/random of=10K bs=1024 count=10
dd if=/dev/random of=1M bs=1024 count=1000
dd if=/dev/random of=5M bs=1024 count=5000
dd if=/dev/random of=100M bs=1024 count=100000
dd if=/dev/random of=1000M bs=1024 count=1000000
gzip 10K 1M 5M 100M 1000M
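As a rough illustration of the measurement (a minimal sketch, not the attached script), a benchmark over these files can simply time forward seeks at a few offsets; the file names and offsets below are placeholders matching the dd-generated data above.

```python
import gzip
import time

FILES = ["10K.gz", "1M.gz", "5M.gz", "100M.gz", "1000M.gz"]
OFFSETS = [512, 1024 * 8, 1024 * 256, 1024 * 1024 * 16]  # small and large seeks

def time_seek(path, offset):
    # Time a single forward seek from the start of the decompressed stream.
    with gzip.open(path, "rb") as f:
        start = time.perf_counter()
        f.seek(offset)
        return time.perf_counter() - start

for path in FILES:
    for offset in OFFSETS:
        try:
            elapsed = time_seek(path, offset)
        except OSError:
            continue  # skip files that are missing
        print("%-8s seek(%d): %.6f s" % (path, offset, elapsed))
```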