
Author gwideman
Recipients benjamin.peterson, docs@python, eric.araujo, ezio.melotti, gwideman, lemburg, pitrou, tshepang, vstinner
Date 2014-03-19.23:50:41
Message-id <1395273041.74.0.181908712413.issue20906@psf.upfronthosting.co.za>
Content
Antoine:

Thanks for your comments -- this is slippery stuff.

> It's better, but how about simply "In this article"?

I was hoping to inform the reader that the hex representations are found in many articles, not just this one in particular.

> [ showing the glyph ]

Agreed -- it would be good to show the glyphs mentioned, but in a way that isn't confusing if the user's web browser doesn't render them correctly.

> For all intents and purposes, iso-8859-1 and friends *are* encodings 
> (and this is how Python actually names them).

I am still mulling this over. iso-8859-1 is most literally an "encoding" in the old sense of the word (a character <--> byte representation), and is not, per se, a Unicode-related concept.
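
For instance, at a Python 3 prompt (just my own illustration of that "old sense", not anything from the article):

>>> # latin-1 / iso-8859-1 is a plain 1:1 character <--> byte table:
>>> # every byte value 0..255 decodes to exactly one character, and back.
>>> data = bytes(range(256))
>>> data.decode('iso-8859-1').encode('iso-8859-1') == data
True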

I think part of the ambiguity problem is that there are two subtly but importantly different ideas at work here:

1. Python string (capable of representing any Unicode text) --> some full-fidelity, industry-recognized Unicode byte stream, like UTF-8 or UTF-32. I think this is legitimately described as an "encoding" of the Unicode string.

versus:

2. Python string --> some other code system, such as ASCII, cp1250, etc. The destination code system doesn't necessarily have anything to do with Unicode, and whole ranges of Unicode's characters either result in an exception or get translated as escape sequences. I.e., this is more usefully seen as a translation operation than as "merely" encoding.

In 1, the encoding process results in data that stays within concepts defined by Unicode. In 2, encoding produces data described by some code system outside of Unicode. A rough sketch below illustrates the contrast.
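
(Python 3; the example string is an arbitrary one of mine, not anything from the article:)

>>> s = 'Ω'                        # a Python string: a sequence of code points
>>> # Case 1: a Unicode-defined byte stream keeps full fidelity
>>> s.encode('utf-8')
b'\xce\xa9'
>>> s.encode('utf-32-le')
b'\xa9\x03\x00\x00'
>>> # Case 2: translating into a non-Unicode code system can fail outright...
>>> s.encode('ascii')
Traceback (most recent call last):
  ...
UnicodeEncodeError: 'ascii' codec can't encode character '\u03a9' in position 0: ordinal not in range(128)
>>> # ...or fall back to escape sequences, depending on the error handler
>>> s.encode('ascii', errors='backslashreplace')
b'\\u03a9'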

At the moment I think Python muddles these two ideas together, and I'm not sure how to clarify this. 

> So it should say "16-bit code points" instead, right?

I don't think Unicode code points should ever be described as having a particular number of bits. I think this is a core concept: Unicode separates the character <--> code point mapping from the code point <--> bits/bytes mapping.

At most, one might want to distinguish different ranges of Unicode code points. Even if there is a need to distinguish code points <= 65535, I don't think these should be described as "16-bit", as that muddies the distinction between Unicode's two mappings.
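
To illustrate why I'd keep the two mappings separate (again a rough Python 3 sketch of my own, not proposed doc text):

>>> # character <--> code point: just an integer, no byte width implied
>>> hex(ord('€')), hex(ord('\N{GRINNING FACE}'))
('0x20ac', '0x1f600')
>>> # code point <--> bytes: a width only appears once an encoding is chosen
>>> len('€'.encode('utf-16-le')), len('\N{GRINNING FACE}'.encode('utf-16-le'))
(2, 4)
>>> len('€'.encode('utf-32-le')), len('\N{GRINNING FACE}'.encode('utf-32-le'))
(4, 4)

U+1F600 is above 65535, so even in UTF-16 it doesn't occupy 16 bits; that is exactly the distinction that gets lost by calling the code points themselves "16-bit".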