
Author desbma
Recipients desbma, methane
Date 2019-02-25.18:01:04
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1551117664.72.0.122434540092.issue36103@roundup.psfhosted.org>
In-reply-to
Content
If you benchmark by reading from a file and then writing to /dev/null several times, without clearing caches, you are measuring *only* the syscall overhead (see the sketch after this list):
* input data is read from the Linux page cache, not from the file on your SSD itself
* no data is actually written, since the output is /dev/null
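
A minimal sketch (mine, not from the original message) of a copy benchmark that avoids the page-cache pitfall described above: it drops the Linux page cache between runs (Linux-only, requires root) and writes to a real output file instead of /dev/null. File names are placeholders.

import os
import shutil
import time

SRC = "input.bin"    # hypothetical test file
DST = "output.bin"   # real output file on the SSD, not /dev/null

def drop_caches():
    # Flush dirty pages to disk, then ask the kernel to drop clean
    # page-cache entries so the next read really hits the SSD.
    os.sync()
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3\n")

for _ in range(5):
    drop_caches()
    start = time.perf_counter()
    shutil.copyfile(SRC, DST)
    print(f"{time.perf_counter() - start:.3f}s")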

Your current command line also measures open/close timings; without those, I think the speed should increase roughly linearly as the buffer size doubles. But of course this is misleading, because it's a synthetic benchmark.
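
To illustrate, here is a rough sketch (my own, not from the message) that keeps open/close outside the timed region, so only the read/write loop is measured; with the input already in the page cache and /dev/null as the output, doubling the buffer size halves the number of syscalls, which is where the roughly linear scaling would come from. Paths and sizes are placeholders.

import time

def timed_copy_loop(src_path, dst_path, bufsize):
    # open()/close() happen outside the timed region; only the
    # per-buffer read/write syscalls are measured.
    with open(src_path, "rb") as fsrc, open(dst_path, "wb") as fdst:
        start = time.perf_counter()
        while True:
            buf = fsrc.read(bufsize)
            if not buf:
                break
            fdst.write(buf)
        return time.perf_counter() - start

for bufsize in (64 * 1024, 128 * 1024, 256 * 1024):
    print(bufsize, timed_copy_loop("input.bin", "/dev/null", bufsize))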

Also, if you clear caches between tests and write the output file to the SSD itself, sendfile will be used, and the copy should be even faster.
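
For reference, a minimal sketch (assuming Linux) of the sendfile() path referred to above, using os.sendfile() directly to copy between two regular files; the data never passes through a user-space buffer, which is why it tends to beat a read/write loop.

import os

def sendfile_copy(src_path, dst_path):
    with open(src_path, "rb") as fsrc, open(dst_path, "wb") as fdst:
        size = os.fstat(fsrc.fileno()).st_size
        offset = 0
        while offset < size:
            # os.sendfile() copies inside the kernel, directly between
            # the two file descriptors.
            sent = os.sendfile(fdst.fileno(), fsrc.fileno(), offset, size - offset)
            if sent == 0:
                break
            offset += sent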

So again, I'm not sure this means much compared to real-world usage.
History
Date                 User    Action           Args
2019-02-25 18:01:04  desbma  set recipients   + desbma, methane
2019-02-25 18:01:04  desbma  set messageid    <1551117664.72.0.122434540092.issue36103@roundup.psfhosted.org>
2019-02-25 18:01:04  desbma  link             issue36103 messages
2019-02-25 18:01:04  desbma  create