This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Author fried
Recipients asvetlov, fried, lukasz.langa
Date 2016-06-03.04:56:32
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <>
While extracting large tar archives, I noticed that tarfile was slower than it should be. It seems that on Linux, for large files (e.g. 10 MB), truncate() is not always a free operation even when it should be a no-op, for example: the file is already 10 MB, and we seek to the end and truncate.
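A minimal sketch of the idea behind the patch: skip the truncate() call entirely when the file is already the target length. The helper name `truncate_if_needed` is hypothetical; the actual patch changes tarfile's extraction internals rather than adding a standalone function.

```python
import os

def truncate_if_needed(path, size):
    """Truncate `path` to `size` bytes only when its length differs.

    Skipping the redundant truncate() avoids a syscall that, on Linux,
    is not always free for large files even when the length would be
    unchanged.  (Hypothetical helper illustrating the optimization.)
    """
    if os.path.getsize(path) != size:
        with open(path, "r+b") as f:
            f.truncate(size)
```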

I created a script to test the validity of this patch. It generates two random tar archives, each containing 1024 files of 10 MB. The file contents are randomized so that disk caching should not interfere.
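The benchmark described above can be sketched roughly as follows (much smaller member counts and sizes here for illustration; the function names and parameters are assumptions, not the author's actual script):

```python
import io
import os
import tarfile
import time

def make_random_tar(path, n_files=4, size=1024 * 1024):
    # Fill each member with os.urandom so page-cache effects between
    # runs are minimized, mirroring the randomized-content approach
    # described in the message.
    with tarfile.open(path, "w") as tf:
        for i in range(n_files):
            data = os.urandom(size)
            info = tarfile.TarInfo(name=f"file{i}.bin")
            info.size = len(data)
            tf.addfile(info, io.BytesIO(data))

def time_extract(tar_path, dest):
    # Time a full extraction with the stock TarFile implementation.
    t0 = time.monotonic()
    with tarfile.open(tar_path) as tf:
        tf.extractall(dest)
    return time.monotonic() - t0
```

Running `time_extract` once per implementation over separately generated archives gives timing deltas comparable to those reported below.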

So, extracting those large tar files, the following was observed:
Time Delta for TarFile:     148.23699307441711 s
Time Delta for FastTarFile: 107.71058106422424 s
Time Diff: 40.52641201019287 s (a 27.3% reduction)
Date User Action Args
2016-06-03 04:56:33friedsetrecipients: + fried, asvetlov, lukasz.langa
2016-06-03 04:56:33friedsetmessageid: <>
2016-06-03 04:56:33friedlinkissue27194 messages
2016-06-03 04:56:32friedcreate