
Author liturgist
Date 2005-08-05.17:53:44
Content

I would very much like to produce the doc table from code. 
However, I have a few questions.

It seems that encodings.aliases.aliases is a mapping of aliases
for all encodings, not just those supported on every machine; for
example, mbcs is unavailable on UNIX, and embedded systems might
exclude some large character sets to save space.  Is this
correct?  If so, will it remain that way?
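
For what it's worth, a quick way to see what the alias table
actually holds is just to dump it (a rough sketch only; output
will differ between builds):

import encodings.aliases

# aliases maps each alias to a canonical codec name; it says
# nothing about whether that codec is importable on this machine.
for alias, codec_name in sorted(encodings.aliases.aliases.items()):
    print("%-20s -> %s" % (alias, codec_name))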

To find out if an encoding is supported on the current
machine, the code should handle the exception generated when
codecs.lookup() fails.  Right?
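
Something along these lines is what I have in mind (just a
sketch; supported_encodings is a name I made up):

import codecs
import encodings.aliases

def supported_encodings():
    # Probe each canonical codec name; codecs.lookup() raises
    # LookupError when the codec is not available in this build.
    names = set(encodings.aliases.aliases.values())
    available = []
    for name in sorted(names):
        try:
            codecs.lookup(name)
        except LookupError:
            continue
        available.append(name)
    return available

print("%d codecs usable on this machine" % len(supported_encodings()))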

To generate the table, I need to produce the "Languages"
field.  This information does not seem to be available from
the Python runtime.  I would much rather see this
information, including a localized version of the string,
come from the Python runtime than hardcode it into the
script.  Is that a possibility?  Would it be a better
approach?

The non-language-oriented encodings such as base_64 and
rot_13 do not seem to have anything that distinguishes them
from the human-language encodings.  How can these be separated
out without hardcoding?
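
(Purely speculative sketch: if the CodecInfo returned by
codecs.lookup() carried a flag saying whether a codec is a real
text encoding, the doc script could branch on it.  Much later
Python versions do ship a private _is_text_encoding attribute,
but nothing like it exists in the version discussed here, so
this only illustrates the kind of runtime support being asked
for.)

import codecs

for name in ("utf_8", "base64_codec", "rot_13"):
    try:
        info = codecs.lookup(name)
    except LookupError:
        print("%s: not available on this machine" % name)
        continue
    # _is_text_encoding is False for transforms such as base64/rot13
    # in interpreters that define it; older ones lack the attribute.
    print("%s: text encoding? %s"
          % (name, getattr(info, "_is_text_encoding", "unknown")))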

Likewise, the non-language encodings have an "Operand type"
field which would need to be generated.  My feeling is,
again, that this should come from the Python runtime and not
be hardcoded into the doc generation script.  Any suggestions?
History
Date User Action Args
2007-08-23 14:33:30  admin  link    issue1249749 messages
2007-08-23 14:33:30  admin  create