Message217141
I played around with different file and chunk sizes using the attached benchmark script.
After several test runs, I think 1024 * 16 (16 KiB) would be the biggest win without losing too many μs on small seeks. You can find my benchmark output here: https://gist.github.com/tiwilliam/11273483
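The attached script isn't reproduced here, but the read side of the measurement can be sketched as below: compress some random data in memory, then time a full read with different chunk sizes. The function names and the sizes tried are illustrative assumptions, not the script's actual contents.

```python
import gzip
import io
import os
import time

def make_gzip_blob(size):
    """Compress `size` bytes of random (incompressible) data in memory."""
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb") as f:
        f.write(os.urandom(size))
    return buf.getvalue()

def time_full_read(blob, chunk_size):
    """Read a gzip blob to exhaustion in `chunk_size` chunks; return seconds."""
    start = time.perf_counter()
    with gzip.GzipFile(fileobj=io.BytesIO(blob)) as f:
        while f.read(chunk_size):
            pass
    return time.perf_counter() - start

if __name__ == "__main__":
    blob = make_gzip_blob(1024 * 1024)  # 1 MiB test blob; real runs used up to 1000M
    for chunk in (1024, 1024 * 4, 1024 * 16, 1024 * 64):
        print(f"chunk={chunk:6d}: {time_full_read(blob, chunk):.4f}s")
```

Larger chunks amortize the per-call decompression overhead, which is why throughput keeps improving up to a point; the trade-off is what happens on small seeks, measured separately.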
My test data was generated with the following commands:
dd if=/dev/random of=10K bs=1024 count=10
dd if=/dev/random of=1M bs=1024 count=1000
dd if=/dev/random of=5M bs=1024 count=5000
dd if=/dev/random of=100M bs=1024 count=100000
dd if=/dev/random of=1000M bs=1024 count=1000000
gzip 10K 1M 5M 100M 1000M
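The seek side of the trade-off can be sketched in the same spirit. `GzipFile.seek()` has to decompress forward to the target position (restarting from the beginning for a backward seek), so a larger read chunk can overshoot and cost extra μs on short seeks. This is a minimal sketch with illustrative names and sizes, not the attached script:

```python
import gzip
import io
import os
import time

data = os.urandom(1024 * 1024)  # 1 MiB of incompressible data
blob = gzip.compress(data)

def time_small_seek(offset, length=16):
    """Time seeking to `offset` in a fresh GzipFile and reading a few bytes."""
    f = gzip.GzipFile(fileobj=io.BytesIO(blob))
    start = time.perf_counter()
    f.seek(offset)
    chunk = f.read(length)
    elapsed = time.perf_counter() - start
    assert chunk == data[offset:offset + length]  # sanity-check correctness
    return elapsed

if __name__ == "__main__":
    for off in (100, 10_000, 500_000):
        print(f"seek to {off:7d}: {time_small_seek(off) * 1e6:.1f} us")
```

A fresh `GzipFile` is opened for each measurement so every seek starts from position 0, which keeps the runs comparable.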
Date                | User      | Action | Args
2014-04-24 23:53:22 | tiwilliam | set    | recipients: + tiwilliam, skip.montanaro, nadeem.vawda, ezio.melotti, serhiy.storchaka
2014-04-24 23:53:22 | tiwilliam | set    | messageid: <1398383602.58.0.527558081815.issue20962@psf.upfronthosting.co.za>
2014-04-24 23:53:22 | tiwilliam | link   | issue20962 messages
2014-04-24 23:53:22 | tiwilliam | create |