
Author: nascheme
Recipients: David Bieber, benjamin.peterson, brett.cannon, docs@python, nascheme, ncoghlan, r.david.murray, rhettinger, serhiy.storchaka, terry.reedy
Date: 2017-11-09.21:45:10
Message-id: <1510263910.48.0.213398074469.issue31778@psf.upfronthosting.co.za>
Content
Just a comment on what I guess is the intended use of literal_eval(): taking a potentially untrusted string and turning it into a Python object.  Exposing the whole of the Python parser to potential attackers would make me very worried.  Parsing code for all of Python's syntax is necessarily complicated, and complicated code can easily contain bugs.  Generating an AST and then walking over it to check that it is safe is also scary; the "attack surface" is too large.  This is similar in spirit to the Shellshock bug.  If you can trust the supplier of the string, then fine, but I would guess that literal_eval() is going to get used on untrusted input.
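
To make the concern concrete, here is a small illustration (a sketch of my own, with a made-up hostile input): literal_eval() only rejects non-literal input *after* the full parser has already run over the attacker's string.

import ast

# literal_eval() parses with the full Python parser and only then
# checks that the resulting tree contains nothing but literals, so
# the whole parser is exposed to the input string.
print(ast.literal_eval("{'key': [1, 2.0, True], 'nested': (3, None)}"))

# A hypothetical hostile input: parentheses disappear during parsing,
# before literal_eval() ever inspects the tree, so arbitrarily deep
# nesting reaches the parser's recursion machinery.  At much greater
# depths, strings like this have crashed or exhausted the parser
# instead of being cleanly rejected.
hostile = "(" * 50 + "1" + ")" * 50
print(ast.literal_eval(hostile))  # prints 1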

It would be really nice to have something like ast.literal_eval() that could be used on untrusted strings.  I would implement it by writing a restricted parser and keeping it extremely simple, then validate it with heavy code review and extensive testing (e.g. fuzzing).  A rough sketch of what I mean follows.
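
This is purely illustrative, not a proposed API: a tiny hand-written recursive-descent parser that accepts only integers, True/False/None, and lists of those, and rejects everything else.  The grammar would obviously need to grow to cover the other literal types, but the point is that every construct it accepts is spelled out explicitly.

def parse_literal(text):
    pos = 0

    def skip_ws():
        nonlocal pos
        while pos < len(text) and text[pos].isspace():
            pos += 1

    def parse_value():
        nonlocal pos
        skip_ws()
        if pos >= len(text):
            raise ValueError("unexpected end of input")
        ch = text[pos]
        if ch == "[":
            # List: '[' value (',' value)* ']' or '[]'
            pos += 1
            items = []
            skip_ws()
            if pos < len(text) and text[pos] == "]":
                pos += 1
                return items
            while True:
                items.append(parse_value())
                skip_ws()
                if pos < len(text) and text[pos] == ",":
                    pos += 1
                elif pos < len(text) and text[pos] == "]":
                    pos += 1
                    return items
                else:
                    raise ValueError("expected ',' or ']' at %d" % pos)
        if ch == "-" or ch.isdigit():
            # Integer: optional '-' followed by digits
            start = pos
            if ch == "-":
                pos += 1
            while pos < len(text) and text[pos].isdigit():
                pos += 1
            return int(text[start:pos])
        # Named constants
        for word, value in (("True", True), ("False", False), ("None", None)):
            if text.startswith(word, pos):
                pos += len(word)
                return value
        raise ValueError("unexpected character %r at %d" % (ch, pos))

    result = parse_value()
    skip_ws()
    if pos != len(text):
        raise ValueError("trailing data at %d" % pos)
    return result

print(parse_literal("[1, -2, [True, None]]"))  # [1, -2, [True, None]]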