
Author: gvanrossum
Date: 2002-12-11.16:07:21
Content
In Python 2.2, int() of a unicode string containing non-digit
characters raises ValueError, like all other attempts to convert an
invalid string or unicode object to int. But in Python 2.3, it appears
that int() of a unicode string is implemented differently and can now
raise UnicodeEncodeError:

>>> int(u"\u1234")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
UnicodeEncodeError: 'decimal' codec can't encode character '\u1234' in position 0: invalid decimal Unicode string
>>> 

I think it's important that int() of a string or unicode argument
raises only ValueError to indicate invalid input -- otherwise one ends
up writing bare excepts around string-to-int conversions (as it is too
much trouble to keep track of which Python versions can raise which
exceptions).
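
For code that must behave the same way on 2.2 and 2.3, a small wrapper
can normalize the failure mode without a bare except. This is a
minimal sketch (the name safe_int is hypothetical, not part of any
stdlib API); note that in CPython, UnicodeError -- and therefore the
new UnicodeEncodeError -- is itself a subclass of ValueError, so a
single except clause already covers both versions:

def safe_int(s):
    # Convert s to an int; report any bad input as ValueError.
    # UnicodeEncodeError subclasses UnicodeError, which subclasses
    # ValueError, so the clause below also catches the 2.3 behaviour;
    # UnicodeError is listed explicitly only for clarity.
    try:
        return int(s)
    except (ValueError, UnicodeError):
        raise ValueError("invalid literal for int(): %r" % (s,))

>>> safe_int(u"42")
42
>>> safe_int(u"\u1234")
Traceback (most recent call last):
  ...
ValueError: invalid literal for int(): u'\u1234'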
History
Date                 User   Action  Args
2007-08-23 14:09:22  admin  link    issue652104 messages
2007-08-23 14:09:22  admin  create