HZ codec has no test #56266
All CJK codecs have tests except the Chinese HZ codec; I don't know why. To add a test, I need to add data to Lib/test/cjkencodings_test.py, and the format of this file is not documented. It is not too difficult to understand the format by reading the code of the tests, but it is hard to maintain these tests (add more tests or change a test). I need tests to be able to patch the codec to fix bpo-12016. My plan is to:
convert_cjkencodings.py is a script to replace Lib/test/cjkencodings_test.py with a Lib/test/cjkencodings/ directory; cjkencodings.patch fixes Lib/test/test_multibytecodec_support.py to use the directory.
New files should be marked as binary in Mercurial: add "Lib/test/cjkencodings/* = BIN" in .hgeol.
Looking at cjkencodings.py, the format is pretty clear. The file consists of one statement that creates one dict mapping encoding names to a pair of byte strings. The bytes literals are written entirely with hex escapes, with a maximum of 16 per chunk (line). From the usage one deduces that the first is encoded with the named encoding and the second with UTF-8. (For anyone wondering, a separate UTF-8 string is needed for each encoding because each encoding is limited to a different subset of unicode chars.) So I am not completely convinced that pulling the file apart is a complete win. Another entry could be added (the file is formatted with that possibility in mind), but it would certainly be much easier if the original formatting program were available. I do have a couple of questions.
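The layout Terry describes can be reproduced mechanically. Here is a small sketch (hex_literal is a hypothetical helper, not part of the stdlib or of the original, lost formatting program) that renders bytes the way cjkencodings_test.py stores them: bytes literals made entirely of hex escapes, at most 16 per line:

```python
def hex_literal(data: bytes, per_line: int = 16) -> str:
    """Render bytes as b"..." literals written entirely with hex escapes,
    at most `per_line` escapes per line, as in cjkencodings_test.py."""
    chunks = []
    for i in range(0, len(data), per_line):
        chunk = data[i:i + per_line]
        chunks.append('b"' + ''.join('\\x%02x' % byte for byte in chunk) + '"')
    return '\n'.join(chunks)

# One dict entry would then pair the encoded bytes with their UTF-8 form,
# e.g. for two CJK characters (sample text chosen for illustration):
sample = '\u4e2d\u6587'
entry = (hex_literal(sample.encode('gb2312')),
         hex_literal(sample.encode('utf-8')))
```

Adjacent lines rely on Python's implicit concatenation of string literals, which is why the original file could split each value over many lines.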
Terry J. Reedy wrote:
Victor, could you please contact Hye-Shik Chang <perky@FreeBSD.org>? Wouldn't it be better to just use example strings from the RFC?
With classic plain text files, you don't need tools to convert a test file. Example:

$ iconv -f utf-8 Lib/test/cjkencodings/gb18030-utf8.txt -t gb18030 -o Lib/test/cjkencodings/gb18030-2.txt
$ md5sum Lib/test/cjkencodings/gb18030-2.txt Lib/test/cjkencodings/gb18030.txt
f8469bf751a9239a1038217e69d82532  Lib/test/cjkencodings/gb18030-2.txt
f8469bf751a9239a1038217e69d82532  Lib/test/cjkencodings/gb18030.txt

(Cool, iconv gives the same result :-))
Each encoding uses a different text; I don't know why, and it makes the data difficult to maintain. Anyway, we can use multiple test cases for each encoding. We can also use a codec implementation other than Python's to cross-check the data: the iconv command line tool, for example.
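The same cross-check can be done from Python itself, without iconv; a minimal sketch (cross_check is an illustrative helper and the sample text is not the actual test data):

```python
import hashlib

def cross_check(text: str, encoding: str) -> str:
    """Encode `text`, verify the bytes decode back to the same string,
    and return the MD5 of the encoded bytes (what md5sum would print)."""
    encoded = text.encode(encoding)
    assert encoded.decode(encoding) == text  # round trip must be lossless
    return hashlib.md5(encoded).hexdigest()

# Example: a short Chinese sample through gb18030.
digest = cross_check('\u4eba\u4eba\u751f\u800c\u81ea\u7531', 'gb18030')
```

Comparing such a digest against the one iconv produces gives the same kind of independent confirmation as the shell session above.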
On Wednesday, May 11, 2011 at 17:27 +0000, Marc-Andre Lemburg wrote:
Good idea, done.
Nice, this RFC contains some useful examples.
Lib/test/cjkencodings_test.py was created when the CJK codecs were introduced in Python: changeset 31386 by Hye-Shik Chang <hyeshik@gmail.com>: "Add CJK codecs support as discussed on python-dev. (SF bpo-873597) Several style fixes are suggested by Martin v. Loewis and"
Reading http://tools.ietf.org/html/rfc1843 suggests that the reason there is no HZ pair in cjkencodings.py is that it is not a cjkencoding. Instead it is a formatter or meta-encoding for intermixing ASCII codes and GB2312(-80) codes. (I assume the '-80' suffix means the 1980 version.)

In a bytes environment, I believe a strict HZ decoder would simply separate the input bytes into alternating ASCII and GB bytes by splitting on the shift chars, changing '~~' to '~', and deleting '~\n' (2 chars). So it would need a special-case test. Python shifts between ASCII and GB2312 decoders to produce a unicode stream. Because of the deletion of line-continuation markers, the codec is not 1-to-1. A test sentence should contain both that and an encoded ~.

>>> hz=b'''\
This ASCII sentence has a tilde: ~~.
The next sentence is in GB.~{<:Ky2;S{#,~}~
~{NpJ)l6HK!#~}Bye.'''
>>> hz
b'This ASCII sentence has a tilde: ~~.\nThe next sentence is in GB.~{<:Ky2;S{#,~}~\n~{NpJ)l6HK!#~}Bye.'
>>> HZ = hz.decode('HZ')
>>> HZ
'This ASCII sentence has a tilde: ~.\nThe next sentence is in GB.己所不欲,勿施於人。Bye.'
# second '\n' deleted
>>> HZ.encode('HZ')
b'This ASCII sentence has a tilde: ~.\nThe next sentence is in GB.~{<:Ky2;S{#,NpJ)l6HK!#~}Bye.'
# no '~}~\n~{' in the middle of GB codes.
I believe hz and u8=HZ.encode() should work as a test pair for the hz parser itself:
>>> u8 = HZ.encode()
>>> u8
b'This ASCII sentence has a tilde: ~.\nThe next sentence is in GB.\xe5\xb7\xb1\xe6\x89\x80\xe4\xb8\x8d\xe6\xac\xb2\xef\xbc\x8c\xe5\x8b\xbf\xe6\x96\xbd\xe6\x96\xbc\xe4\xba\xba\xe3\x80\x82Bye.'
>>> u8.decode() == hz.decode('HZ')
True

However, I have no idea what the hz codec is doing with the shifted byte pairs between '~{' and '~}'. All the gb codecs decode b'<:Ky2;S{#,NpJ)l6HK!#' to '<:Ky2;S{#,NpJ)l6HK!#' (i.e., ASCII chars to the same unicode chars), and they encode '己所不欲,勿施於人。' to bytes with the high bit set.

I figured it out. The 1995 RFC says: "A GB (GB1 and GB2) code is a two byte code, where the first byte is in the range $21-$77 (hexadecimal), and the second byte is in the range $21-$7E." This was in the days of 7-bit bytes, at least for safe transmission. Now that we use 8-bit bytes nearly everywhere, the gb specs have probably been updated since 1980. This makes hz rather obsolete, since high-bit-clear ASCII codes and high-bit-set gb codes can be mixed without the hz wrapping. In any case, Python's gb codecs act this way. So the hz codec is setting and clearing the high bit when passing bytes to and from the gb codec (assuming it does not use a modified version internally).
>>> hhz = [c - 128 for c in '己所不欲,勿施於人。'.encode('GB2312')]
>>> bytes(hhz)
b'<:Ky2;S{#,NpJ)l6HK!#'

Perhaps there should be a separate test like the above to be sure that hz really uses GB2312-80, as specified.
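The checks walked through above can be collected into one self-contained snippet; the assertions simply restate the REPL results, assuming the hz codec behaves as shown in the transcript:

```python
# HZ test data: an escaped '~~', a '~\n' line continuation, and GB runs.
hz = (b'This ASCII sentence has a tilde: ~~.\n'
      b'The next sentence is in GB.~{<:Ky2;S{#,~}~\n'
      b'~{NpJ)l6HK!#~}Bye.')
text = hz.decode('hz')

# '~~' decodes to '~' and the '~\n' continuation is deleted.
assert text == ('This ASCII sentence has a tilde: ~.\n'
                'The next sentence is in GB.'
                '\u5df1\u6240\u4e0d\u6b32\uff0c\u52ff\u65bd\u65bc\u4eba\u3002'
                'Bye.')

# Re-encoding round-trips, though not byte-for-byte: the encoder emits
# one GB run, with no '~}~\n~{' in the middle.
assert text.encode('hz').decode('hz') == text

# The GB bytes inside '~{...~}' are GB2312-80 bytes with the high bit cleared.
gb = '\u5df1\u6240\u4e0d\u6b32\uff0c\u52ff\u65bd\u65bc\u4eba\u3002'.encode('gb2312')
assert bytes(b - 128 for b in gb) == b'<:Ky2;S{#,NpJ)l6HK!#'
```

The last assertion is exactly the separate GB2312-80 check suggested above.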
Hello, everyone! The rationale for encoding the test strings into Python source code was that I wanted them to be treated as text files, trackable in CVS or Subversion, and to keep the Python source tree free of any non-ASCII characters. Now I don't feel the need for the "text file" status, so STINNER's suggestion works for me.

Actually, all "stateful" encodings supported by cjkcodecs lack adequate test code. (There are seven more ISO-2022 stateful encodings in addition to hz in Python.) cjkencoding_tests.py is used for random chunk coding tests, and most stateful encodings are not compatible with random chunk coding. For those reasons, I didn't include test strings for them there. But they apparently still need appropriate simple string coding and stream coding tests.

STINNER Victor wrote:
Almost every encoding in cjkcodecs has a different set of characters. They support different languages (Chinese, Japanese, Korean), different scripts (Hanja, Kanji, Traditional and Simplified Chinese), different standards (johab and KS X 1001 in Korean), and different versions/variants (JIS X 0201 and JIS X 0213 in Japanese). Actually, one of them, gb18030, is a "superset" of Unicode so far.

Terry J. Reedy wrote:
You're right. By the way, my previous e-mail address <perky@FreeBSD.org> isn't reachable anymore; please send to <hyeshik@gmail.com> when you need to reach me.
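Hye-Shik's point above about random chunk coding can be seen directly: splitting an ISO 2022 byte stream at an arbitrary point breaks stateless decoding, while an incremental decoder carries the shift state across chunks. A minimal sketch (the split point is arbitrary and chosen to fall inside a multibyte character):

```python
import codecs

# Two Japanese characters: escape sequence + two 2-byte JIS codes + reset.
data = '\u65e5\u672c'.encode('iso2022_jp')
part1, part2 = data[:4], data[4:]

# Decoding the second chunk on its own loses the shift state: without the
# leading escape sequence its bytes decode as plain ASCII, i.e. garbage.
assert part2.decode('iso2022_jp') != '\u672c'

# An incremental decoder keeps state between calls, so any chunking works.
dec = codecs.getincrementaldecoder('iso2022_jp')()
assert dec.decode(part1) + dec.decode(part2, final=True) == '\u65e5\u672c'
```

This is why the random-chunk machinery in cjkencoding_tests.py cannot simply be reused for the stateful codecs.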
Mercurial supports binary files; I plan to mark the CJK test data as binary using .hgeol.
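For reference, the .hgeol entry mentioned earlier would be a glob pattern in the file's patterns section; a sketch, assuming the format of Mercurial's eol extension:

```ini
[patterns]
Lib/test/cjkencodings/* = BIN
```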
New changeset 16503022c4b8 by Victor Stinner in branch '3.1':
New changeset 370db8da308f by Victor Stinner in branch '3.2':
New changeset e7daf2acc3a7 by Victor Stinner in branch 'default':

New changeset 1bd697cdd210 by Victor Stinner in branch '2.7':
Oh, I specified the wrong issue number in my last 3 commits: the test_linecache failure is related to this issue.

New changeset 9a4d4506680a by Victor Stinner in branch '3.1':
New changeset 43cbfacae463 by Victor Stinner in branch '3.2':
New changeset 06473da99270 by Victor Stinner in branch 'default':

New changeset 83f4c270b27d by Victor Stinner in branch '2.7':
The ISO 2022 encodings don't have tests either: test_multibytecodec doesn't test these encodings directly; it is a "Unit test for multibytecodec itself". We may also add tests specific to the ISO 2022 encodings.

While trying to write tests for the HZ encoding, I found a bug in the CJK multibyte encodings => bpo-12100, "Incremental encoders of CJK codecs reset the codec at each call to encode()".
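The bpo-12100 bug is easiest to state as the property it violates: encoding a string in one call and in several calls through the same incremental encoder must produce the same bytes. A sketch of that property (on a fixed Python the assertion holds; on an affected version the encoder resets and re-emits escape sequences at each call):

```python
import codecs

# Encode the whole string in one call.
one_shot = codecs.getincrementalencoder('iso2022_jp')()
whole = one_shot.encode('\u65e5\u672c', final=True)

# Encode the same string character by character through one encoder.
chunked = codecs.getincrementalencoder('iso2022_jp')()
parts = chunked.encode('\u65e5') + chunked.encode('\u672c', final=True)

# A correct incremental encoder does not reset its shift state between
# calls, so the chunked output matches the one-shot output byte for byte.
assert parts == whole
```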
Haypo, since you've created a new directory, there are makefile (and PC build file, I think) updates that will need to be made. (This should be documented in the dev guide if it isn't already.)

I think that issue bpo-12100 should be resolved (wontfix or fixed) before this one.
Can you review attached cjkencodings_dir.patch?
Do you mean that the cjkencodings directory should be documented (in setup.rst? subdirectories are not listed), or the process of adding a new directory?

I presume and hope David meant the process, as I would have no idea how to add a directory. And David did not seem completely sure.
New changeset 10b23f1c8cb6 by Victor Stinner in branch '3.1':
New changeset 3368d4a04e52 by Victor Stinner in branch '3.2':
New changeset 06c44a518d0b by Victor Stinner in branch 'default':

New changeset 3c724c3eaed7 by Victor Stinner in branch '2.7':

Looks good to me. And I meant documenting the process for adding a directory.
iso2022_tests.patch: add some tests for ISO2022 encodings:
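The patch itself is attached to the tracker and not reproduced here; a minimal round-trip check of the kind such tests perform might look like this (the encodings and sample strings are chosen for illustration, each string must be representable in its charset):

```python
# Per-encoding samples: Japanese for iso2022_jp, Korean for iso2022_kr.
samples = {
    'iso2022_jp': '\u65e5\u672c\u8a9e',
    'iso2022_kr': '\ud55c\uad6d\uc5b4',
}
for encoding, text in samples.items():
    encoded = text.encode(encoding)
    # Simple string coding round trip, the baseline the stateful
    # codecs were missing.
    assert encoded.decode(encoding) == text
```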
New changeset a024183e046f by Victor Stinner in branch '3.1':
New changeset 4289cc96835e by Victor Stinner in branch '3.2':
New changeset b2b0cae86f56 by Victor Stinner in branch 'default':

New changeset 8ba0192a0eb1 by Victor Stinner in branch '2.7':

New changeset 6c6923a406df by Victor Stinner in branch '2.7':
New changeset 2a313ceaf17c by Victor Stinner in branch '3.2':
New changeset 1a9ccb5bef27 by Victor Stinner in branch 'default':
We now have tests for some ISO 2022 codecs and the HZ codec; it's much better!