Title: LZMA library sometimes fails to decompress a file
Type: behavior Stage: patch review
Components: Library (Lib) Versions: Python 3.9, Python 3.8, Python 3.7
Status: open Resolution:
Dependencies: Superseder:
Assigned To: Nosy List: Esa.Peuha, Jeffrey.Kintscher, Ma Lin, akira, josh.r, kenorb, maubp, nadeem.vawda, peremen, serhiy.storchaka, vnummela
Priority: normal Keywords: patch

Created on 2014-06-25 18:28 by vnummela, last changed 2019-06-18 09:34 by Ma Lin.

Files
vnummela, 2014-06-25 18:28: Example lzma-compressed files, a good one and a bad one
vnummela, 2014-07-01 18:17: 15 more example files that fail lzma decompression
akira, 2014-11-21 08:15
02h_ticks.bi5 — kenorb, 2015-09-28 18:40
peremen, 2017-12-24 17:51: 2 more failing files
fix-bug.diff — Ma Lin, 2019-06-05 05:06
Ma Lin, 2019-06-18 09:33
Pull Requests
URL Status Linked Edit
PR 14048 open Ma Lin, 2019-06-13 09:42
Messages (15)
msg221566 - (view) Author: Ville Nummela (vnummela) Date: 2014-06-25 18:28
Python lzma library sometimes fails to decompress a file, even though the file does not appear to be corrupt. 

Originally discovered with OS X 10.9 / Python 2.7.7 / backports.lzma.
Now also reproduced on OS X / Python 3.4 / lzma; please see the linked discussion for more details.

Two example files are provided, a good one and a bad one. Both are compressed using the older lzma algorithm (not xz). An attempt to decompress the 'bad' file raises "EOFError: Compressed file ended before the end-of-stream marker was reached."

The 'bad' file appears to be ok, because
- a direct call to XZ Utils processes the files without complaints
- the decompressed files' contents appear to be ok.

The example files contain tick data and have been downloaded from the Dukascopy bank's historical data feed service. The service is well known for its high data quality and is utilised by multiple analysis software platforms. Thus I think it is unlikely that a file integrity issue on their end would have gone unnoticed.

The error occurs relatively rarely; only around 1 - 5 times per 1000 downloaded files.
msg221583 - (view) Author: Josh Rosenberg (josh.r) * (Python triager) Date: 2014-06-25 23:49
Just to be clear, when you say "1 - 5 times per 1000 downloaded files", have you confirmed that redownloading the same file a second time produces the same error? Just making sure we've ruled out corruption during transfer over the network; small errors might make it past one decompressor with minimal effect in the midst of a huge data file, while a more stringent error checking decompressor would reject them.
msg221597 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2014-06-26 08:00
>>> import lzma
>>> f ='22h_ticks_bad.bi5')
>>> len(
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/serhiy/py/cpython/Lib/", line 310, in read
    return self._read_all()
  File "/home/serhiy/py/cpython/Lib/", line 251, in _read_all
    while self._fill_buffer():
  File "/home/serhiy/py/cpython/Lib/", line 225, in _fill_buffer
    raise EOFError("Compressed file ended before the "
EOFError: Compressed file ended before the end-of-stream marker was reached

This is similar to issue1159051. We need a way to say "read as much as possible without error and raise EOFError only on next read".
msg221599 - (view) Author: Ville Nummela (vnummela) Date: 2014-06-26 08:20
My stats so far:

As of writing this, I have attempted to decompress about 5000 downloaded files (two years of tick data). 25 'bad' files were found within this lot.

I re-downloaded all of them, plus about 500 other files, as the smallest lot the server supplies is 24 hours of files at a time.

I compared all these 528 file pairs using hashlib.md5 and got identical hashes for all of them.

I guess what I should do next is to go through the decompressed data and look for suspicious anomalies, but unfortunately I don't have the tools in place to do that quite yet.
msg221784 - (view) Author: Esa Peuha (Esa.Peuha) Date: 2014-06-28 13:05
This code

import _lzma
with open('22h_ticks_bad.bi5', 'rb') as f:
    infile =
for i in range(8191, 8195):
    decompressor = _lzma.LZMADecompressor()
    first_out = decompressor.decompress(infile[:i])
    first_len = len(first_out)
    last_out = decompressor.decompress(infile[i:])
    last_len = len(last_out)
    print(i, first_len, first_len + last_len, decompressor.eof)

prints this

8191 36243 45480 True
8192 36251 45473 False
8193 36253 45475 False
8194 36260 45480 True

It seems to me that this is a subtle bug in liblzma: if the input stream to the incremental decompressor is broken at the wrong place, the internal state of the decompressor is corrupted. For this particular file, it happens when the break occurs after reading 8192 or 8193 bytes, and happens to use a buffer of 8192 bytes. There is nothing wrong with the compressed file, since decompresses it correctly if the buffer size is set to almost any other value.
msg222052 - (view) Author: Ville Nummela (vnummela) Date: 2014-07-01 18:17
Uploading a few more 'bad' lzma files for testing.
msg231466 - (view) Author: Akira Li (akira) * Date: 2014-11-21 07:13
@Esa: changing the buffer size helps with some "bad" files, but the lzma module still fails on others.

I've uploaded a script that demonstrates it.
msg231467 - (view) Author: Akira Li (akira) * Date: 2014-11-21 08:15
If lzma._BUFFER_SIZE is less than 2048, then all example files are decompressed successfully (at least the lzma module produces the same results as the xz utility).
msg251784 - (view) Author: (kenorb) Date: 2015-09-28 18:40
The same happens with the attached file. It fails with Python 3.5 (with small buffers like 128, 255, 1023, etc.), but it seems to work in Python 3.4 with lzma._BUFFER_SIZE = 1023. So it looks like something regressed.
msg309005 - (view) Author: Shinjo Park (peremen) Date: 2017-12-24 17:51
Hi, I think I encountered this bug with Ubuntu 17.10 / Python 3.6.3. The same error was triggered by Python's LZMA library, while the xz command line tool can extract the problematic file. I'm not sure whether the bug exists in 3.7/3.8. I am attaching the problematic archives; they should contain UTF-16LE encoded text.
msg344530 - (view) Author: Jeffrey Kintscher (Jeffrey.Kintscher) * Date: 2019-06-04 07:07
I adapted the example in msg221784:

import _lzma

with open('22h_ticks_bad.bi5', 'rb') as f:
    infile =

for i in range(1, 9000):
    decompressor = _lzma.LZMADecompressor()
    first_out = decompressor.decompress(infile[:i])
    first_len = len(first_out)
    last_out = decompressor.decompress(infile[i:])
    last_len = len(last_out)
    if not decompressor.eof:
        print(i, first_len, first_len + last_len, decompressor.eof)

which outputs this using both 3.7.3 and 3.8.0a3+ on macOS 10.14.4:

648 2682 45479 False
1834 7442 45479 False
2766 11667 45473 False
2767 11668 45474 False
3591 15428 45473 False
5051 21743 45473 False
5052 21745 45475 False
5589 24387 45475 False
5590 24388 45476 False
6560 28823 45476 False
6561 28824 45477 False
7327 32325 45474 False
8192 36251 45473 False
8193 36253 45475 False
8368 37283 45475 False
8369 37285 45477 False

So, yes, still an active bug.
msg344668 - (view) Author: Ma Lin (Ma Lin) * Date: 2019-06-05 05:06
fix-bug.diff fixes this bug, I will submit a PR after thoroughly understanding the problem.
msg345491 - (view) Author: Ma Lin (Ma Lin) * Date: 2019-06-13 10:03
I wrote a review guide in PR 14048.
msg345971 - (view) Author: Ma Lin (Ma Lin) * Date: 2019-06-18 09:33
I investigated this problem.

Here are the toggle conditions:

- The format is FORMAT_ALONE, the legacy .lzma container format.
- The file's header records an "Uncompressed Size".
- The file doesn't have an "End of Payload Marker" or "End of Stream Marker".

Otherwise, liblzma's internal state never holds back bytes that still need to be output, and the bug is not triggered.

The good news is:

- The lzma module's default compression format is FORMAT_XZ, not FORMAT_ALONE.
- FORMAT_ALONE files generated by the lzma module (via the underlying xz library) always have an "End of Payload Marker".
- FORMAT_ALONE is probably falling out of use anyway.

The attached file tests the `` function [1] with different max_length values (from -1 to 1000, excluding 0), to verify that the needs_input mechanism works properly.
Usage: modify the `DIR` variable to point to the folder containing the bad files.
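The kind of max_length sweep described can be sketched like this (a simplified stand-in for the attached script, not the script itself; the function name is made up):

```python
import lzma

def decompress_with_limit(data, max_length, chunk_size=8192):
    """Drive LZMADecompressor honouring needs_input and max_length."""
    d = lzma.LZMADecompressor()
    out, pos = [], 0
    while not d.eof:
        if d.needs_input:
            chunk, pos = data[pos:pos + chunk_size], pos + chunk_size
            if not chunk:
                break  # truncated input: stop instead of looping forever
        else:
            chunk = b''  # unconsumed input is buffered; feed nothing new
        out.append(d.decompress(chunk, max_length=max_length))
    return b''.join(out)
```

For a well-formed stream the result should be identical for every max_length value; a discrepancy would point at the buffering bug being discussed.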

msg345972 - (view) Author: Ma Lin (Ma Lin) * Date: 2019-06-18 09:34
toggle conditions -> trigger conditions
Date User Action Args
2019-06-18 09:34:31  Ma Lin  set  messages: + msg345972
2019-06-18 09:33:12  Ma Lin  set  files: +
                                  messages: + msg345971
2019-06-13 10:03:43  Ma Lin  set  messages: + msg345491
                                  versions: + Python 3.8, Python 3.9, - Python 2.7, Python 3.4, Python 3.5, Python 3.6
2019-06-13 09:42:35  Ma Lin  set  stage: patch review
                                  pull_requests: + pull_request13910
2019-06-05 05:06:15  Ma Lin  set  files: + fix-bug.diff
                                  keywords: + patch
                                  messages: + msg344668
2019-06-04 07:07:01  Jeffrey.Kintscher  set  messages: + msg344530
                                  versions: + Python 3.7
2019-06-03 05:12:08  Ma Lin  set  nosy: + Ma Lin
2019-06-01 07:55:22  Jeffrey.Kintscher  set  nosy: + Jeffrey.Kintscher
2017-12-24 17:51:00  peremen  set  files: +
                                  versions: + Python 3.6
                                  nosy: + peremen
                                  messages: + msg309005
2015-09-28 18:40:52  kenorb  set  files: + 02h_ticks.bi5
                                  nosy: + kenorb
                                  messages: + msg251784
2014-11-21 08:15:30  akira  set  files: -
2014-11-21 08:15:19  akira  set  files: +
                                  messages: + msg231467
2014-11-21 07:26:34  akira  set  files: -
2014-11-21 07:26:29  akira  set  files: +
2014-11-21 07:13:51  akira  set  files: +
                                  nosy: + akira
                                  messages: + msg231466
2014-11-19 10:37:14  maubp  set  nosy: + maubp
2014-07-01 18:17:55  vnummela  set  files: +
                                  messages: + msg222052
2014-06-28 13:05:57  Esa.Peuha  set  nosy: + Esa.Peuha
                                  messages: + msg221784
2014-06-26 08:20:46  vnummela  set  messages: + msg221599
2014-06-26 08:00:23  serhiy.storchaka  set  nosy: + serhiy.storchaka
                                  messages: + msg221597
                                  versions: + Python 3.5
2014-06-25 23:49:30  josh.r  set  messages: + msg221583
2014-06-25 18:28:55  vnummela  create