Author rbcollins
Recipients barry, cvrebert, exarkun, ezio.melotti, ncoghlan, pitrou, rbcollins
Date 2009-12-13.04:44:56
SpamBayes Score 6.447228e-06
Marked as misclassified No
Message-id <1260679498.42.0.999819512442.issue6108@psf.upfronthosting.co.za>
In-reply-to
Content
"2) 0 args, e = MyException(), with overridden __str__:
   py2.5  : str(e) -> 'ascii' or error; unicode(e) -> u'ascii' or error;
   py2.6  : str(e) -> 'ascii' or error; unicode(e) -> u''
   desired: str(e) -> 'ascii' or error; unicode(e) -> u'ascii' or error;
Note: py2.5 behaviour is better: if __str__ returns an ascii string
(including ''), unicode(e) should return the same string decoded, if
__str__ returns a non-ascii string, both should raise an error.
"

I'm not sure how you justify raising an unnecessary error when trying to
stringify an exception as being 'better'.

__str__ should not decode its arguments if they are already strings:
they may be valid data for the user even if they are not decodable (and
note that an implicit decode would try decode('ascii'), which is totally
useless).
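
To illustrate the point (a hypothetical sketch in modern Python 3 syntax, since py2's implicit coercion is gone): byte data that is perfectly valid for the user fails an ascii decode as soon as any byte is above 0x7f, so an implicit decode('ascii') buys nothing.

```python
# Hypothetical payload: valid UTF-8 bytes, but not ASCII.
payload = b"caf\xc3\xa9"

try:
    payload.decode("ascii")
    decoded_ok = True
except UnicodeDecodeError:
    decoded_ok = False

print(decoded_ok)  # False: the implicit ascii decode just raises
```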

__str__ and __unicode__ are /different/ things; claiming they have to
behave the same is equivalent to claiming either that we don't need
unicode, or that we don't need binary data.

Surely there is space for both things, which does imply that
unicode(str(e)) != unicode(e).
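
A sketch of that separation in Python 3 terms (a hypothetical class, using __bytes__ as the rough analogue of py2's __str__ next to __unicode__): one object can legitimately have a text view and a byte view that are not derivable from each other.

```python
class Payload:
    """Hypothetical object with distinct text and binary representations."""

    def __init__(self, data: bytes) -> None:
        self.data = data

    def __bytes__(self) -> bytes:
        # Binary view: the raw, possibly non-decodable data.
        return self.data

    def __str__(self) -> str:
        # Text view: a safe description, not a decode of the data.
        return f"<Payload of {len(self.data)} bytes>"


p = Payload(b"\xff\xfe")
print(str(p))    # <Payload of 2 bytes>
print(bytes(p))  # b'\xff\xfe'
```

Neither view is the "real" one; forcing str(p) and bytes(p) to agree would mean giving one of them up.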

Why _should_ that be the same anyway?
History
Date User Action Args
2009-12-13 04:44:58rbcollinssetrecipients: + rbcollins, barry, exarkun, ncoghlan, pitrou, ezio.melotti, cvrebert
2009-12-13 04:44:58rbcollinssetmessageid: <1260679498.42.0.999819512442.issue6108@psf.upfronthosting.co.za>
2009-12-13 04:44:57rbcollinslinkissue6108 messages
2009-12-13 04:44:56rbcollinscreate