Cannot use both read and readline method in same ZipExtFile object #51859

Closed

lucifer mannequin opened this issue Dec 31, 2009 · 28 comments

Labels
extension-modules (C modules in the Modules dir), type-bug (An unexpected behavior, bug, or error)

Comments

lucifer mannequin commented Dec 31, 2009

BPO 7610
Nosy @pitrou, @ezio-melotti, @bitdancer
Files
  • zipfile_7610_py27.diff
  • zipfile_7610_py27.diff: Patch for Python 2.7 using the io module.
  • zipfile_7610_py27.diff: Updated patch for Python 2.7 using the io module.
  • zipfile_7610_py27_v4.diff: Updated patch for Python 2.7 using the io module.
  • zipfile_7610_py27_v5.diff: Patch (take 5) for Python 2.7.
  • zipfile_7610_py27_v6.diff: Patch v6 for Python 2.7.
  • Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.

    GitHub fields:

    assignee = 'https://github.com/pitrou'
    closed_at = <Date 2010-01-28.20:43:33.206>
    created_at = <Date 2009-12-31.08:03:34.996>
    labels = ['extension-modules', 'type-bug']
    title = 'Cannot use both read and readline method in same ZipExtFile object'
    updated_at = <Date 2010-01-28.20:43:33.205>
    user = 'https://bugs.python.org/lucifer'

    bugs.python.org fields:

    activity = <Date 2010-01-28.20:43:33.205>
    actor = 'ezio.melotti'
    assignee = 'pitrou'
    closed = True
    closed_date = <Date 2010-01-28.20:43:33.206>
    closer = 'ezio.melotti'
    components = ['Extension Modules']
    creation = <Date 2009-12-31.08:03:34.996>
    creator = 'lucifer'
    dependencies = []
    files = ['15723', '15739', '15822', '15882', '15941', '15949']
    hgrepos = []
    issue_num = 7610
    keywords = ['patch']
    message_count = 28.0
    messages = ['97079', '97158', '97165', '97230', '97235', '97236', '97251', '97258', '97548', '97659', '97783', '97830', '97838', '97848', '97974', '97977', '98033', '98040', '98046', '98449', '98455', '98459', '98460', '98461', '98462', '98471', '98473', '98478']
    nosy_count = 5.0
    nosy_names = ['pitrou', 'ezio.melotti', 'nirai', 'r.david.murray', 'lucifer']
    pr_nums = []
    priority = 'normal'
    resolution = 'fixed'
    stage = 'resolved'
    status = 'closed'
    superseder = None
    type = 'behavior'
    url = 'https://bugs.python.org/issue7610'
    versions = ['Python 2.7', 'Python 3.2']

    lucifer mannequin added the extension-modules and type-bug labels Dec 31, 2009

    lucifer mannequin commented Dec 31, 2009

    When a file in the archive is opened through the ZipFile.open method,
    invoking the read method after the readline method on the resulting
    ZipExtFile object returns incorrect data.

    I was trying to get a ZipExtFile and pass it to pickle.load(f), and an
    exception was thrown.

    The reason is that readline keeps a line buffer, while read only uses
    the read buffer. After readline is invoked, the data still sitting in
    the line buffer is missing when read is invoked.
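
    A minimal sketch of the reported usage pattern (the in-memory archive,
    member name, and contents are illustrative, not from the original
    report; on affected versions the bytes returned by read() after
    readline() were wrong):

    ```python
    import io
    import zipfile

    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("data.txt", b"first line\nrest of the data\n")

    with zipfile.ZipFile(buf) as zf:
        f = zf.open("data.txt")
        first = f.readline()   # fills ZipExtFile's internal line buffer
        rest = f.read()        # the bug: bytes left in the line buffer were skipped here
        f.close()

    print(first)               # b'first line\n'
    print(rest)                # should be b'rest of the data\n'
    ```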

    nirai mannequin commented Jan 3, 2010

    I uploaded a possible patch for Python 2.7.

    The patch converts ZipExtFile into a subclass of io.BufferedIOBase and drops most of the original implementation.

    However, the patch breaks the current newline behavior of ZipExtFile. I figured this was reasonable because the zipfile newline behavior:
    a) was introduced in Python 2.6
    b) was incompatible with the behavior of io module files.
    c) was not documented.

    Reading zip content as text with newline functionality may be done with io.TextIOWrapper.
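
    A sketch of that approach on a current Python (the in-memory archive and
    member name are made up for illustration):

    ```python
    import io
    import zipfile

    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("log.txt", "line one\r\nline two\r\n")

    with zipfile.ZipFile(buf) as zf:
        with zf.open("log.txt") as raw:
            # Wrap the binary ZipExtFile to get text decoding and
            # universal-newline translation.
            text = io.TextIOWrapper(raw, encoding="utf-8", newline=None)
            print(text.readlines())   # ['line one\n', 'line two\n']
    ```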

    A bonus of subclassing io.BufferedIOBase is significantly better read performance.

    pitrou commented Jan 3, 2010

    I don't think we can remove the "U" option from 2.6/2.7; it was certainly introduced for a reason and isn't inconsistent with the "U" option to the built-in open().

    On 3.x, behaviour is indeed inconsistent with the standard IO library, so maybe we can either wrap the object in a TextIOWrapper, or remove support for "U".

    nirai mannequin commented Jan 4, 2010

    Right, I was reading the 3.1 docs by mistake.

    I updated the patch. This time universal newlines are supported.

    On my dataset (75MB 650K lines log file) the readline() speedup is x40 for 'r' mode and x8 for 'rU' mode, and you can get an extra bump by using the io module wrappers.

    I added tests for interleaved use of readline() and read() (which add ~2 seconds to the running time).

    pitrou commented Jan 4, 2010

    I updated the patch. This time universal newlines are supported.

    Thank you. Are you sure the "Shortcut common case" in readline() is
    useful? BufferedIOBase.readline() in itself should be rather fast.

    pitrou commented Jan 4, 2010

    Also, I'm not sure what happens in readline() in universal mode when the
    chunk ends with a '\r' and there's a '\n' in the following chunk (see
    the "ugly check" that your patch removes). Is there a test for that?

    nirai mannequin commented Jan 5, 2010

    Thank you. Are you sure the "Shortcut common case" in readline()
    is useful? BufferedIOBase.readline() in itself should be rather fast.

    On my dataset the shortcut speeds up readline() 400% on top of the default C implementation.

    I can take a look at why the C implementation is slow (although it is documented as "slowish").

    Also, I'm not sure what happens in readline() in universal mode when
    the chunk ends with a '\r' and there's a '\n' in the following chunk
    (see the "ugly check" that your patch removes). Is there a test for that?

    The regular expression pattern returns either a line chunk or a newline (sequence), but not both. To read a line there is therefore a minimum of two peek() calls: one for the line content and one for the newline. Since peek is called with a value of 2, the newline sequence \r\n should be retrieved as is. There is no test for that.

    pitrou commented Jan 5, 2010

    Since the peek is called with a value of 2, the newline sequence \r\n
    should be retrieved as is.

    No, it doesn't follow. The \r can still appear at the end of a readahead, in which case your algorithm will not eliminate the following \n.

    That is, if the sequence of readaheads is ['a\r', '\nb\n'], readlines() will return ['a\n', '\n', 'b\n'] while it should return ['a\n', 'b\n'].

    It should be possible to construct a statistically valid test case for this, for example by creating an archived file containing 'a\r\n'*10000 and reading back from it.
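
    A sketch of such a test (it uses the current binary API rather than the
    'rU' mode under discussion, and simply checks that no bytes are lost or
    duplicated across read-ahead boundaries; the member name and line count
    are illustrative):

    ```python
    import io
    import unittest
    import zipfile

    class CRLFAcrossReadaheadsTest(unittest.TestCase):
        def test_many_crlf_lines(self):
            data = b"a\r\n" * 10000
            buf = io.BytesIO()
            # ZIP_DEFLATED requires zlib.
            with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
                zf.writestr("member", data)
            with zipfile.ZipFile(buf) as zf:
                with zf.open("member") as f:
                    lines = f.readlines()
            # No byte may be dropped and every line must end with a newline.
            self.assertEqual(b"".join(lines), data)
            self.assertTrue(all(line.endswith(b"\n") for line in lines))

    if __name__ == "__main__":
        unittest.main()
    ```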

    nirai mannequin commented Jan 10, 2010

    If the sequence of readaheads is ['a\r', '\nb\n'], the first use of the pattern will consume 'a', then the peek(2) will trigger a read() and the next use of the pattern will consume '\r\n'.

    I updated the patch, expanded the inline comments a little to clear this up, and added the suggested test.

    I also ventured into a rewrite of the read() function since it seems to contain various latent bugs and problems, probably the result of ages of slow refactoring. For example, the decompressor flush() depends on the self.eof flag, which is never set, and the decrypter is fed by the raw buffer, which is in turn set by the decompressor, thus theoretically feeding decrypted data back to the decrypter.

    I also made it completely buffered so now small reads are significantly faster.

    If this patch is good, I'll prepare a patch for Python 3.x and 2.6.

    pitrou commented Jan 12, 2010

    Some comments:

    • you should probably write n = sys.maxsize instead of n = 1 << 31 - 1
    • ZipExtFile.read() should support n=None as a synonym to n=-1 (read everything)
    • bytes as a variable name isn't very good since it's the built-in name for bytestrings in py3k
    • in ZipExtFile.read(), it seems you have removed the adjustment for encrypted files (see adjust read size for encrypted files since the first 12 bytes [etc.])
    • is there a situation where the decompressor might return less bytes than expected? (after all compression doesn't /always/ compress, in unfavourable chunks of data it might actually expand things a bit)

    nirai mannequin commented Jan 14, 2010

    I uploaded an update for Python 2.7.

    • you should probably write n = sys.maxsize instead of n = 1 << 31 - 1

    sys.maxsize is a 64-bit number on my system, but the maximum value accepted by zlib's decompress() seems to be INT_MAX as defined in pyport.h, which equals the number I used.

    • ZipExtFile.read() should support n=None as a synonym to n=-1
      (read everything)

    Added

    • bytes as a variable name isn't very good since it's the built-in
      name for bytestrings in py3k

    Changed (old habits die hard).

    • in ZipExtFile.read(), it seems you have removed the adjustment for
      encrypted files (see adjust read size for encrypted files since the first 12 bytes [etc.])

    Yes, was moved to the constructor.

    • is there a situation where the decompressor might return less bytes
      than expected? (after all compression doesn't /always/ compress, in
      unfavourable chunks of data it might actually expand things a bit)

    The documentation of io.BufferedIOBase.read() reads "multiple raw reads may be issued to satisfy the byte count". I understood this language to mean that satisfying the read size is optional. Isn't it?

    pitrou commented Jan 15, 2010

    The documentation of io.BufferedIOBase.read() reads "multiple raw
    reads may be issued to satisfy the byte count". I understood this
    language to mean satisfying read size is optional. Isn't it?

    It's the reverse actually. It means that BufferedIOBase.read itself
    may (or perhaps should) issue multiple raw reads in order to satisfy the
    byte count.

    nirai mannequin commented Jan 15, 2010

    It may be a good idea to clear this up in the documentation.

    http://en.wiktionary.org/wiki/may#Verb
    "(modal auxiliary verb, defective) To have permission to. Used in granting permission and in questions to make polite requests."

    bitdancer commented

    I do not find the existing phrasing in the IO docs ambiguous, but since it is obviously possible to misinterpret it, it would be good to clarify it. Can you suggest an alternate phrasing that would be clearer?

    nirai mannequin commented Jan 17, 2010

    I do not find the existing phrasing in the IO docs ambiguous, but since
    it is obviously possible to misinterpret it it would be good to clarify
    it. Can you suggest an alternate phrasing that would be clearer?

    Replace 'may' with 'will' or 'shall' everywhere the context indicates a mandatory requirement.

    Since this possibly affects the entire Python documentation, does it make sense to discuss this on python-dev?

    pitrou commented Jan 17, 2010

    Replace 'may' with 'will' or 'shall' everywhere the context indicates
    a mandatory requirement.

    Since this possibly affects the entire Python documentation, does it
    make sense to discuss this on python-dev?

    Either that, or open a separate tracker issue. I'm not a documentation
    specialist and would prefer to gather other opinions.

    nirai mannequin commented Jan 18, 2010

    Uploaded an updated patch with a read() that calls the underlying stream enough times to satisfy the required read size.
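
    Roughly, the idea is a loop of the following shape (a sketch of the
    general pattern, not the committed patch; the helper name is made up):

    ```python
    def _read_up_to(stream, n):
        """Collect up to n bytes, issuing as many raw reads as needed."""
        chunks = []
        remaining = n
        while remaining > 0:
            chunk = stream.read(remaining)
            if not chunk:              # underlying stream exhausted
                break
            chunks.append(chunk)
            remaining -= len(chunk)
        return b"".join(chunks)
    ```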

    pitrou commented Jan 18, 2010

    The patch looks rather good. Is self.MAX_N still necessary in read()? I guess it's rare to read more than 2GB at once, though...

    nirai mannequin commented Jan 19, 2010

    Right, I removed MAX_N from read(); it remains in read1().
    If this is good, for which versions of Python is the patch desired?

    pitrou self-assigned this Jan 27, 2010

    pitrou commented Jan 27, 2010

    The patch has been committed in r77798 (trunk) and r77800 (py3k). Thank you!
    I won't commit it to 2.6 and 3.1 because it's too involved to qualify as a bug fix, though.

    pitrou closed this as completed Jan 27, 2010
    ezio-melotti commented

    Nir, could you also provide a test for the part that "handles unconsumed data" (line 601 in zipfile.py)?

    In r77809 (and r77810) I made a change to avoid using zlib when it's not necessary (zlib is not always available), and I nearly committed a typo that I luckily noticed in the diff. The tests, however, didn't notice anything because that part is untested.

    ezio-melotti reopened this Jan 28, 2010

    nirai mannequin commented Jan 28, 2010

    The related scenario is a system without zlib. How do you suggest simulating this in a test?

    ezio-melotti commented

    The test should just check that the part that handles unconsumed data works when zlib is available. AFAIU, if zlib is not available, this part (i.e. the content of the if) can be skipped, so it doesn't need to be tested.

    (When zlib is not available 'zlib' is set to None, see the try/except at the beginning of zipfile.py)

    nirai mannequin commented Jan 28, 2010

    Unconsumed data is compressed data. If the part which handles unconsumed data does not work when zlib is available, then the existing tests would fail. In any case the unconsumed buffer is an implementation detail of zipfile.

    I see a point in adding a test to make sure zipfile behaves as expected when zlib is not available, but how?

    Also, on which systems is zlib missing? I don't see this mentioned in the zlib docs.

    ezio-melotti commented

    If the part which handles unconsumed data does not work when zlib is available, then the existing tests would fail.

    If the existing tests end up testing that part of the code too, then it's probably fine. I tried to add a print inside the 'if' (at line 605) but it didn't print anything. However, some tests are skipped here, so maybe that part of the code is exercised by some of these skipped tests.

    I see a point in adding a test to make sure zipfile behaves as expected when zlib is not available, but how?

    Several tests are already skipped in test_zipfile.py when zlib is not available (all the ones with the skipUnless decorator)
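
    The pattern looks roughly like this (a sketch, not the exact
    test_zipfile.py code):

    ```python
    import unittest

    try:
        import zlib
    except ImportError:
        zlib = None

    @unittest.skipUnless(zlib, "requires zlib")
    class DeflatedTests(unittest.TestCase):
        # Placeholder body; the real tests exercise ZIP_DEFLATED archives.
        def test_needs_zlib(self):
            self.assertIsNotNone(zlib)
    ```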

    Also, on which systems is zlib missing? I don't see this mentioned in the zlib docs.

    Even though it's usually available and already installed on most systems, on some -- like mine (Linux) -- it is not installed.

    If you can confirm that that part of the code is already tested in some of the tests that are skipped here, I will close the issue.

    nirai mannequin commented Jan 28, 2010

    I actually meant: how would you simulate zlib's absence on a system where it is present?

    ezio-melotti commented

    The easiest way is to set zlib to None or not import it at all.
    Are you suggesting that test_zipfile should always be run both with and without zlib, to check that everything (except the things that require zlib, of course) works in both cases?
    My concern was only about the part that handles unconsumed data, because I didn't see any test in the patch that specifically checks it. If there are already tests that check it, then it's fine, even if it's not possible to test that part without zlib.

    ezio-melotti commented

    Apparently that part of the code is already exercised by other tests that use deflated mode, so I'll close this again. Thanks for the info.

    ezio-melotti transferred this issue from another repository Apr 10, 2022