
Author vstinner
Recipients benjamin.peterson, pitrou, serhiy.storchaka, vstinner
Date 2017-09-20.14:04:16
Message-id <1505916257.02.0.708284699634.issue31530@psf.upfronthosting.co.za>
In-reply-to
Content
Serhiy: "What if just deny reentrant reads? Set a flag while read into a buffer, check it before reading in other thread, and raise RuntimeError."
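
Side note: a minimal pure-Python sketch of that flag idea might look like the following (GuardedReader is hypothetical, invented here for illustration; CPython would implement the check in C, and the test-and-set of the flag would have to happen under the buffered object's internal lock to be race-free):

import io

class GuardedReader:
    """Hypothetical wrapper: deny reentrant/concurrent reads with a flag."""

    def __init__(self, buffered):
        self._buffered = buffered
        self._reading = False  # set for the duration of each read

    def readline(self):
        # Note: this check-then-set is not atomic; a real implementation
        # would do it under the buffered object's internal lock.
        if self._reading:
            raise RuntimeError("reentrant or concurrent read on the same file object")
        self._reading = True
        try:
            return self._buffered.readline()
        finally:
            self._reading = False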

io.BufferedReader and io.BufferedWriter already raise a RuntimeError on a reentrant call, but only when the reentrant call happens in the same thread. For example, that eases debugging of signal handlers which trigger such a reentrant call.
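
That same-thread guard can be demonstrated without signals: if the raw stream re-enters its own BufferedWriter, CPython refuses the call with "RuntimeError: reentrant call inside ..." (ReentrantRaw below is invented for the demonstration):

import io

class ReentrantRaw(io.RawIOBase):
    """Hypothetical raw stream whose write() re-enters its BufferedWriter."""
    reentered = False

    def writable(self):
        return True

    def write(self, b):
        if not self.reentered:
            self.reentered = True
            buffered.write(b"again")  # same-thread reentrant call
        return len(b)

buffered = io.BufferedWriter(ReentrantRaw())
buffered.write(b"hello")
try:
    buffered.flush()  # the raw write() re-enters the buffered object
except RuntimeError as exc:
    print(exc)  # reentrant call inside <_io.BufferedWriter>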

I'm not sure about 0001-stop-crashes-when-iterating-over-a-file-on-multiple-.patch, since it doesn't fix the consistency problem: two parallel readline() calls can return the same line, instead of being mutually exclusive and each returning a different line.
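
For what it's worth, that consistency property can be stress-tested with a sketch like the one below (not part of the patch; invented for illustration). With full mutual exclusion it should always report zero duplicates; the concern above is that the patched code doesn't guarantee that:

import io
import threading

raw = io.BytesIO(b"".join(b"line %d\n" % i for i in range(100000)))
reader = io.BufferedReader(raw)
seen = []

def worker():
    while True:
        line = reader.readline()
        if not line:
            break
        seen.append(line)  # list.append is atomic under the GIL

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("duplicate lines returned:", len(seen) - len(set(seen)))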

I'm not sure about adding a new lock. "Lock" sounds like "deadlock". I dislike the risk of introducing deadlocks this late in the Python 2.7 development cycle.

I like the idea of a simple exception on concurrent operations, but I'm not sure how much code it would break :-/ Crazy idea: would it make sense to raise an exception by default, but add an opt-in option to ignore it? I write "crazy" because it has become clear that the code is not thread-safe, so parallel operations on the same file object are likely to corrupt data in various funny ways.
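
To make the opt-in concrete, the hypothetical GuardedReader sketch from above could grow such a switch (allow_concurrent is invented here; nothing like it exists in CPython):

import io

class OptInGuardedReader:
    """Hypothetical: deny concurrent reads by default, allow opting out."""

    def __init__(self, buffered, allow_concurrent=False):
        self._buffered = buffered
        self._allow_concurrent = allow_concurrent  # imagined opt-in switch
        self._busy = False

    def readline(self):
        if self._busy and not self._allow_concurrent:
            raise RuntimeError("concurrent readline() on the same file object")
        self._busy = True
        try:
            return self._buffered.readline()
        finally:
            self._busy = False

# By default a second thread entering readline() while one is in progress
# gets a RuntimeError; with allow_concurrent=True the old, unsafe behaviour
# (and the risk of corrupted data) is deliberately accepted.
reader = OptInGuardedReader(io.BufferedReader(io.BytesIO(b"a\nb\n")))
print(reader.readline())  # b'a\n'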