Author: ronaldoussoren
Recipients: ned.deily, ronaldoussoren, serhiy.storchaka, wessen
Date: 2020-10-21.20:12:55
Message-id: <1603311175.71.0.970823614804.issue42103@roundup.psfhosted.org>
Content
One Apple implementation of binary plist parsing is here: https://opensource.apple.com/source/CF/CF-550/CFBinaryPList.c.

That seems to work from a buffer (or mmap) of the entire file, which makes consistency checks somewhat easier, and I don't think it has a hardcoded limit on the number of items in the plist (but it's getting late and I might have missed that).

An easy fix for this issue is to limit the number of values read by _read_ints, but that immediately raises the question of what that limit should be. It is clear that 1 billion items in the array is too many (in this case 1 billion object refs for dictionary keys).

I don't know what a sane limit would be. The largest plist file on my system is 10MB. Limiting *n* to 20M would easily be enough to parse that file, and it doesn't consume too much memory (at most about 160MB for the read buffer).
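
To illustrate the shape of such a guard (a rough sketch only; the 20M cap and the helper name are placeholders of mine, not a proposed patch):

from plistlib import InvalidFileException

# Hypothetical cap on how many integers may be read in one call; 20M is just
# the ballpark figure from the paragraph above, not a carefully tuned value.
MAX_INT_COUNT = 20_000_000

def read_ints_guarded(fp, n, size):
    # Refuse implausibly large counts before allocating anything.
    if n > MAX_INT_COUNT:
        raise InvalidFileException()
    data = fp.read(size * n)
    # A truncated file yields fewer bytes than the header promised; treat
    # that as a malformed plist instead of decoding garbage.
    if len(data) != size * n:
        raise InvalidFileException()
    return tuple(int.from_bytes(data[i:i + size], 'big')
                 for i in range(0, size * n, size))

With a check like that, a crafted file claiming 1 billion 8-byte refs fails immediately with InvalidFileException instead of attempting an ~8GB read.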

Another thing the current code doesn't do is check that the "ref" argument to _read_object is in range. That's less problematic because the code will raise an exception regardless (an IndexError, instead of the nicer InvalidFileException).
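
For completeness, a sketch of the kind of range check I mean; the helper and the object_offsets name are illustrative stand-ins for whatever the parser actually keeps as its offset table:

from plistlib import InvalidFileException

def checked_ref(ref, object_offsets):
    # Validate an object reference against the offset table up front, so a
    # corrupt or malicious file raises InvalidFileException rather than an
    # IndexError from somewhere deep inside _read_object.
    if not 0 <= ref < len(object_offsets):
        raise InvalidFileException()
    return object_offsets[ref]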
History

Date                 User            Action  Args
2020-10-21 20:12:55  ronaldoussoren  set     recipients: + ronaldoussoren, ned.deily, serhiy.storchaka, wessen
2020-10-21 20:12:55  ronaldoussoren  set     messageid: <1603311175.71.0.970823614804.issue42103@roundup.psfhosted.org>
2020-10-21 20:12:55  ronaldoussoren  link    issue42103 messages
2020-10-21 20:12:55  ronaldoussoren  create