
Implement PEP 3154 (pickle protocol 4) #62010

Closed
avassalotti opened this issue Apr 21, 2013 · 63 comments
Assignees
Labels
release-blocker, stdlib (Python modules in the Lib dir), type-feature (A feature request or enhancement)

Comments

@avassalotti
Member

BPO 17810
Nosy @tim-one, @loewis, @rhettinger, @ncoghlan, @pitrou, @larryhastings, @avassalotti, @asvetlov, @serhiy-storchaka
Dependencies
  • bpo-15397: Unbinding of methods
  • bpo-17893: Refactor reduce protocol implementation
Files
  • 9f1be171da08.diff
  • framing.patch
  • prefetch.patch
  • framing2.patch
  • framing3.patch
  • methods.patch
  • 8434af450da0.diff
  • framing.diff
  • pickle_frame_headers.patch

Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = 'https://github.com/avassalotti'
    closed_at = <Date 2013-11-24.04:39:41.547>
    created_at = <Date 2013-04-21.06:48:58.149>
    labels = ['type-feature', 'library', 'release-blocker']
    title = 'Implement PEP 3154 (pickle protocol 4)'
    updated_at = <Date 2013-11-25.21:56:35.101>
    user = 'https://github.com/avassalotti'

    bugs.python.org fields:

    activity = <Date 2013-11-25.21:56:35.101>
    actor = 'serhiy.storchaka'
    assignee = 'alexandre.vassalotti'
    closed = True
    closed_date = <Date 2013-11-24.04:39:41.547>
    closer = 'alexandre.vassalotti'
    components = ['Library (Lib)']
    creation = <Date 2013-04-21.06:48:58.149>
    creator = 'alexandre.vassalotti'
    dependencies = ['15397', '17893']
    files = ['29966', '30068', '30072', '30094', '30118', '30213', '32640', '32709', '32840']
    hgrepos = ['184']
    issue_num = 17810
    keywords = ['patch']
    message_count = 63.0
    messages = ['187496', '187500', '187510', '187512', '187516', '187830', '187833', '187834', '187874', '187876', '187877', '187878', '187891', '187896', '187918', '188087', '188089', '188090', '188096', '188098', '188102', '188109', '188227', '188280', '188281', '188282', '188312', '188315', '188320', '188327', '188330', '188331', '188338', '188351', '188363', '188889', '188971', '188989', '189017', '190554', '190583', '195499', '195522', '195583', '202939', '203339', '203420', '203435', '204066', '204067', '204088', '204089', '204093', '204097', '204175', '204176', '204389', '204390', '204391', '204397', '204399', '204401', '204426']
    nosy_count = 13.0
    nosy_names = ['tim.peters', 'loewis', 'rhettinger', 'ncoghlan', 'pitrou', 'larry', 'alexandre.vassalotti', 'Arfrever', 'asvetlov', 'neologix', 'python-dev', 'serhiy.storchaka', 'mstefanro']
    pr_nums = []
    priority = 'release blocker'
    resolution = 'fixed'
    stage = 'resolved'
    status = 'closed'
    superseder = None
    type = 'enhancement'
    url = 'https://bugs.python.org/issue17810'
    versions = ['Python 3.4']

    @avassalotti
    Member Author

    I have restarted the work on PEP-3154. Stefan Mihaila had begun an implementation as part of the Google Summer of Code 2012. Unfortunately, he hit multiple roadblocks which prevented him from finishing his work by the end of the summer. He had previously shown interest in completing his implementation; however, he was constrained by time and never resumed his work.

    So I am taking over the implementation of the PEP. I have decided to go forward with brand-new code, using Stefan's work only as a guide. At the moment, I have completed about half of the PEP---missing only support for calling __new__ with keyword arguments and the use of qualified names for referencing objects.

    Design-wise, there are still a few things that we should discuss. For example, I think Stefan's idea, which is not specified in the PEP, to eliminate PUT opcodes is interesting. His proposal was to emit an implicit PUT opcode after each object pickled and make the Pickler and Unpickler classes agree on the scheme. A drawback of this implicit scheme is that we cannot be selective about which objects we save in the memo during unpickling. That means, for example, we won't be able to make pickletools.optimize work with protocol 4 to reduce the memory footprint of the unpickling process. This scheme also alters the meaning of all previously defined opcodes because of the implicit PUTs, which is sort of okay because we are changing protocol. Alternatively, we could use an explicit scheme by defining new "fat" opcodes, for the built-in types we care about, which include memoization. This scheme would be a bit more flexible; however, it would also be slightly more involved implementation-wise. In any case, I will run benchmarks to see if either scheme is worthwhile.

    @avassalotti avassalotti self-assigned this Apr 21, 2013
    @avassalotti avassalotti added the stdlib (Python modules in the Lib dir) and type-feature (A feature request or enhancement) labels Apr 21, 2013
    @pitrou
    Member

    pitrou commented Apr 21, 2013

    Thank you for reviving this :)
    A couple of questions:

    • why ADDITEM in addition to ADDITEMS? I don't think single-element sets are an important use case (as opposed to, say, single-element tuples)
    • what is the purpose of STACK_GLOBAL? I would say memoization of common names but you pass memoize=False

    For example, I think Stefan's idea, which is not specified in the
    PEP, to eliminate PUT opcodes is interesting. His proposal was to
    emit an implicit PUT opcode after each object pickled and make the
    Pickler and Unpickler classes agree on the scheme.

    Are the savings worth it?
    I've tried pickletools.optimize() on two objects:

    • a typical data dict (http.client.responses). The pickle length decreases from 1155 to 1063 (8% shrink); unpickling is faster by 4%.

    • a Logger object (logging.getLogger("foobar")). The pickle length decreases from 427 to 389 (9% shrink); unpickling is faster by 2%.
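    For reference, this kind of comparison can be reproduced along the following lines (a hedged sketch: the two objects are the ones named above, pickletools.optimize() strips the PUT opcodes whose memo entries are never fetched, and exact byte counts will vary across Python versions):

        import pickle
        import pickletools
        import http.client
        import logging

        for obj in (http.client.responses, logging.getLogger("foobar")):
            raw = pickle.dumps(obj)
            slim = pickletools.optimize(raw)   # drop PUTs with no matching GET
            print(type(obj).__name__, len(raw), "->", len(slim))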

    @serhiy-storchaka
    Member

    Link to the previous attempt: bpo-15642.

    @serhiy-storchaka
    Member

    Memoization consumes memory during pickling. For now, every memoized object requires memory for:

    a dict entry;
    an id() integer object;
    a 2-element tuple;
    a pickle memo index (an integer object).

    It's about 80 bytes on a 32-bit platform (and twice that on 64-bit). For data which contains a lot of floats it can be cumbersome.
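    As a rough sketch of where those bytes go (modeled on the bookkeeping of the pure-Python Pickler.memoize in Lib/pickle.py, which maps id(obj) to a (memo_index, obj) pair; illustrative only):

        class TinyMemo:
            """Minimal model of the pickler-side memo being discussed."""

            def __init__(self):
                self.memo = {}              # one dict entry per memoized object

            def memoize(self, obj):
                index = len(self.memo)      # the integer index emitted with PUT
                # id(obj) key plus a 2-tuple that keeps obj alive until the end
                # of the dump; this is the ~80 bytes of overhead per object.
                self.memo[id(obj)] = (index, obj)
                return index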

    @pitrou
    Member

    pitrou commented Apr 21, 2013

    Memoization consumes memory during pickling. For now, every memoized
    object requires memory for:

    a dict entry;
    an id() integer object;
    a 2-element tuple;
    a pickle memo index (an integer object).

    It's about 80 bytes on a 32-bit platform (and twice that on 64-bit).

    As far as I understand, Alexandre doesn't propose to suppress
    memoization, only to make it implicit. Therefore the memory overhead
    would be the same (but the pickle would have less opcodes).

    For data which contains a lot of floats it can be cumbersome.

    Apparently, floats don't get memoized:

    >>> pickletools.dis(pickle.dumps([1.0, 2.0]))
        0: \x80 PROTO      3
        2: ]    EMPTY_LIST
        3: q    BINPUT     0
        5: (    MARK
        6: G        BINFLOAT   1.0
       15: G        BINFLOAT   2.0
       24: e        APPENDS    (MARK at 5)
       25: .    STOP

    @rhettinger
    Contributor

    I would like to see Proto4 include an option for compression (zlib,bz2) or somesuch and become self-decompressing upon unpickling. The primary use cases for pickling involve writing to disk or transmitting across a wire -- both use cases benefit from compression (with reduced read/write times).

    @neologix
    Mannequin

    neologix mannequin commented Apr 26, 2013

    I would like to see Proto4 include an option for compression
    (zlib,bz2) or somesuch and become self-decompressing upon unpickling.

    I don't see what this would bring over explicit compression:

    • depending on the use case, you may want to use different compression algorithms, e.g. for disk you may want higher compression ratio like bzip2/lzma, but for wire you'd prefer something fast like snappy
    • supporting multiple compression algorithms and levels would complicate the API
    • this would probably complicate the code, since you'd have to support optional compression, and have a way to indicate which format is used
    • that's really mixing two entirely different concepts (serialization vs compression)
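    As a concrete illustration of the explicit approach argued for above, compression can simply be layered on top of pickle by the application, which is free to pick the codec that suits it (a minimal sketch using bz2; zlib, lzma, or anything else would slot in the same way):

        import bz2
        import pickle

        data = {"numbers": list(range(1000))}
        blob = bz2.compress(pickle.dumps(data))        # e.g. favour ratio for disk
        restored = pickle.loads(bz2.decompress(blob))
        assert restored == data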

    @pitrou
    Member

    pitrou commented Apr 26, 2013

    I don't see what this would bring over explicit compression:

    • depending on the use case, you may want to use different compression algorithms, e.g. for disk you may want higher compression ratio like bzip2/lzma, but for wire you'd prefer something fast like snappy
    • supporting multiple compression algorithms and levels would complicate the API
    • this would probably complicate the code, since you'd have to support optional compression, and have a way to indicate which format is used
    • that's really mixing two entirely different concepts (serialization vs compression)

    I agree with Charles-François.
    A feature that may be actually nice to have in the pickle protocol would
    be some framing, to help with streaming unpickling (right now unpickling
    a stream can read almost one byte at a time, IIRC).
    However, that would also make the protocol and the pickler significantly
    more complex.

    @pitrou
    Member

    pitrou commented Apr 26, 2013

    A proof of concept hack to enable framing on pickle showed a massive performance increase on streaming unpickling (up to 5x faster with a C file object such as io.BytesIO, up to 150x faster with a pure Python file object such as _pyio.BytesIO). There is a slight slowdown on non-streaming operation, but that could probably be optimized.
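    The comparison is between a C file object and a pure Python one; a hedged sketch of how one might time streaming unpickling against both (this is only a reproduction harness, not the proof-of-concept patch itself, and the speedups quoted above apply to the patched pickler):

        import io
        import _pyio      # pure Python implementation of the io module
        import pickle
        import timeit

        payload = pickle.dumps(list(range(10000)))

        for cls in (io.BytesIO, _pyio.BytesIO):
            # The Unpickler pulls bytes from the file object; framing batches
            # those reads, which matters most for the pure Python object.
            t = timeit.timeit(lambda: pickle.load(cls(payload)), number=200)
            print(cls.__module__, round(t, 3))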

    @pitrou
    Member

    pitrou commented Apr 26, 2013

    (note: I've updated PEP-3154 with framing and STACK_GLOBAL)

    @serhiy-storchaka
    Member

    A feature that may be actually nice to have in the pickle protocol would
    be some framing, to help with streaming unpickling (right now unpickling
    a stream can read almost one byte at a time, IIRC).
    However, that would also make the protocol and the pickler significantly
    more complex.

    What if we just use io.BufferedReader?

        if not isinstance(file, io.BufferedReader):
            file = io.BufferedReader(file)

    (at start of _Unpickler.__init__)

    @pitrou
    Member

    pitrou commented Apr 26, 2013

    What if we just use io.BufferedReader?

    if not isinstance(file, io.BufferedReader):
        file = io.BufferedReader(file)
    

    (at start of _Unpickler.__init__)

    Two problems:

    1. semantically, it is wrong; the BufferedReader will read bytes beyond
      the pickle end, so the underlying stream will be desynchronized

    2. performance-wise, it doesn't solve the issue either: read() method
      calls are costly, even on an optimized C object

    @serhiy-storchaka
    Member

    I was thinking about framing before looking at your last changes to PEP-3154, and I have two alternative proposals.

    1. Pack pickled items in blocks of some predefined (or specified at the start with the BLOCKSIZE opcode) size. Only some large data (long strings, large integers) can cross the boundary between blocks. In all other cases the block should be padded with the NOP opcode.

    2. Similar to your proposal, but frames should be declared with a special PREFETCH opcode (with a 2- or 4-byte argument). Large data is pickled outside frames (this prevents double copying). The opcode and size of a large data object can (should?) be included in the previous frame.

    @pitrou
    Member

    pitrou commented Apr 27, 2013

    1. Pack pickled items in blocks of some predefined (or specified at the
      start with the BLOCKSIZE opcode) size. Only some large data (long
      strings, large integers) can cross the boundary between blocks. In all
      other cases the block should be padded with the NOP opcode.

    Padding makes it both less efficient and more annoying to handle, IMO.
    My framing proof-of-concept ends up quite simple in terms of code
    complexity. For example, the C version only adds 125 lines of code in 3
    additional functions.

    2. Similar to your proposal, but frames should be declared with a
      special PREFETCH opcode (with a 2- or 4-byte argument). Large data
      is pickled outside frames (this prevents double copying).

    No double copying is necessary (not in the C version, that is). That
    said, this is an interesting idea.

    @serhiy-storchaka
    Member

    Padding makes it both less efficient and more annoying to handle, IMO.

    Agreed. But there is another application for NOPs. The UTF-8 decoder (and some other decoders) works faster (up to 4x) when the input is aligned. By adding several NOPs before BINUNICODE so that the start of the encoded data is 4- or 8-byte aligned relative to the start of the frame, we could significantly speed up unpickling of long ASCII strings. I propose to add a new NOP opcode and use it to align some alignment-sensitive data.
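    If such a NOP opcode existed (it is only a proposal here), the amount of padding to emit would be simple modular arithmetic on the offset within the frame, e.g.:

        def padding_for(offset_in_frame, alignment=8):
            # Number of hypothetical NOP bytes so that the following data
            # argument starts on an `alignment`-byte boundary within the frame.
            return (-offset_in_frame) % alignment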

    My framing proof-of-concept ends up quite simple in terms of code
    complexity. For example, the C version only adds 125 lines of code in 3
    additional functions.

    I just looked at the code and saw that the unpickler already has a ready-made infrastructure for prefetching. Now your words do not appear so incredible. ;) It should work.

    No double copying is necessary (not in the C version, that is).

    Agree, there is no double copying (except for large bytes objects).

    @pitrou
    Member

    pitrou commented Apr 29, 2013

    Here is a framing patch on top of Alexandre's work.

    There is one thing that framing breaks: pickletools.optimize(). I think it would be non-trivial to fix it. Perhaps the PREFETCH opcode is a better idea for this.

    Alexandre, I don't understand why you removed STACK_GLOBAL. GLOBAL is a PITA that we should not use in protocol 4 anymore, so we need either STACK_GLOBAL or some kind of BINGLOBAL.

    @serhiy-storchaka
    Member

    What is wrong with GLOBAL?

    @pitrou
    Member

    pitrou commented Apr 29, 2013

    What is wrong with GLOBAL?

    It uses the lame "text mode" that scans for newlines, and is generally
    annoying to optimize. This is like C strings vs. Pascal strings.
    http://www.python.org/dev/peps/pep-3154/#binary-encoding-for-all-opcodes

    @serhiy-storchaka
    Member

    With framing it isn't annoying.

    @avassalotti
    Member Author

    Antoine, I removed STACK_GLOBAL when I found performance issues with the implementation. The changeset that added it had some unrelated changes that made it harder to debug than necessary. I am planning to re-add it once I have worked out the kinks.

    @pitrou
    Member

    pitrou commented Apr 29, 2013

    With framing it isn't annoying.

    Slightly less, but you still have to wrap readline() calls in the
    unpickler.

    I have started experimenting with PREFETCH, but making the opcode
    optional is a bit annoying in the C pickler, which means it's simpler to
    always emit it, which means it's not very different from framing in the
    end :-)

    @pitrou
    Member

    pitrou commented Apr 29, 2013

    And here is an implementation of PREFETCH over Alexandre's work.
    As you can see the code complexity compared to framing is mostly a wash, but I think fixing pickletools.optimize() will be easier with PREFETCH (still needs confirmation, of course :-)).

    @pitrou
    Member

    pitrou commented May 1, 2013

    Here is an updated framing patch which supports pickletools.optimize().

    @avassalotti
    Member Author

    The latest framing patch looks pretty nice overall. One concern is that we need to make sure the C implementation calls _Pickler_OpcodeBoundary often enough to keep the frames around the target size. For example, batch_save_list and batch_save_dict can currently create a frame much larger than expected. Interestingly enough, I found that pickle, with the patch applied, crashes when handling such frames:

    13:44:43 pep-3154 $ ./python -c "import pickle, io; pickle.dump(list(range(10**5)), io.BytesIO(), 4)"
    Debug memory block at address p=0x1e96b10: API 'o'
    52 bytes originally requested
    The 7 pad bytes at p-7 are FORBIDDENBYTE, as expected.
    The 8 pad bytes at tail=0x1e96b44 are not all FORBIDDENBYTE (0xfb):
    at tail+0: 0x00 *** OUCH
    at tail+1: 0x00 *** OUCH
    at tail+2: 0x00 *** OUCH
    at tail+3: 0x00 *** OUCH
    at tail+4: 0x4d *** OUCH
    at tail+5: 0x75 *** OUCH
    at tail+6: 0x5b *** OUCH
    at tail+7: 0xfb
    The block was made by call #237465 to debug malloc/realloc.
    Data at p: 00 00 00 00 00 00 00 00 ... ff ff ff ff 00 00 00 00
    Fatal Python error: bad trailing pad byte

    Current thread 0x00007f5dea491700:
    File "<string>", line 1 in <module>
    Aborted (core dumped)

    Also, I think we should try to make pickletools.dis display the frame boundaries to help with debugging. This could be implemented by adding an option to pickletools.genops which could be helpful for testing the framing implementation as well.
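    For what it's worth, the opcode stream can already be walked with pickletools.genops(), which yields (opcode, arg, pos) triples; a hedged sketch of the kind of frame-boundary report being asked for, assuming the FRAME opcode as it ended up in the final protocol 4:

        import pickle
        import pickletools

        blob = pickle.dumps(list(range(1000)), protocol=4)
        for opcode, arg, pos in pickletools.genops(blob):
            if opcode.name == "FRAME":
                print("frame of", arg, "bytes starts at offset", pos)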

    @pitrou
    Member

    pitrou commented May 2, 2013

    One concern is that we need to make sure the C implementation calls
    _Pickler_OpcodeBoundary often enough to keep the frames around the
    target size. For example, batch_save_list and batch_save_dict can
    currently create a frame much larger than expected.

    I don't understand how that can happen. batch_list() and batch_dict()
    both call save() for each item, and save() calls
    _Pickler_OpcodeBoundary() at the end. Have I missed something?

    Interestingly enough, I found that pickle, with the patch applied, crashes
    when handling such frames:

    Interesting, I'll take a look when I have some time.

    Also, I think we should try to make pickletools.dis display the frame
    boundaries to help with debugging. This could be implemented by adding
    an option to pickletools.genops which could be helpful for testing the
    framing implementation as well.

    Agreed.

    @avassalotti
    Member Author

    I don't understand how that can happen. batch_list() and batch_dict()
    both call save() for each item, and save() calls
    _Pickler_OpcodeBoundary() at the end. Have I missed something?

    Ah, you're right. I was thinking in terms of my fast dispatch patch in issue bpo-17787. Sorry for the confusion!

    @avassalotti
    Member Author

    I am currently fleshing out an improved implementation for the reduce protocol version 4. One thing I am curious about is whether we should keep the special cases we currently have there for dict and list subclasses.

    I recall Raymond expressed disagreement in #msg83098 about this behavior. I agree that having __setitem__ called before __init__ makes it harder for dict and list subclasses to support pickling. To take advantage of the special case, subclasses need to do their required initialization in the __new__ method.

    On the other hand, it does decrease the memory requirements for unpickling such subclasses---i.e., we can build the object in place instead of building an intermediary list or dict. Reading PEP-307 confirms that this was indeed the original intention.

    One possible solution, other than removing the special case completely, is to make sure we initialize the object (using the BUILD opcode) before we call __setitem__ or append on it. This would be a simple change that would solve the initialization issue. However, I would still feel uneasy about the default object.__reduce__ behavior depending on the object's subtype.

    I think it could be worthwhile to investigate a generic API for pickling collections in place. For example, such an API would be helpful for pickling set subclasses in place.

    __items__() or __getitems__()
        Return an iterator of the items in the collection. Would be equivalent
        to iter(dict.items()) on dicts and iter(list) on lists.

    __additems__(items)
        Add a batch of items to the collection. By default, it would be
        defined as:

            for item in items:
                self.__additem__(item)

        However, subclasses would be free to provide a more efficient
        implementation of the method. Would be equivalent to dict.update on
        dicts and list.extend on lists.

    __additem__(item)
        Add a single item to the collection. Would be equivalent to
        dict[item[0]] = item[1] on dicts and list.append on lists.

    The collections module's ABCs could then provide default implementations of this API, which would give its users efficient in-place pickling automatically.
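    To make the shape of the proposal concrete, here is a hedged sketch of how a dict subclass might implement these hooks (the method names are the hypothetical ones proposed above; they were never added to the pickle protocol):

        class ItemsDict(dict):
            """Toy dict subclass illustrating the proposed in-place hooks."""

            def __getitems__(self):
                # Proposed: iterate over the items to replay on unpickling.
                return iter(self.items())

            def __additem__(self, item):
                # Proposed: add a single (key, value) pair in place.
                key, value = item
                self[key] = value

            def __additems__(self, items):
                # Proposed default: batch form, overridable for efficiency
                # (equivalent to dict.update here).
                for item in items:
                    self.__additem__(item)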

    @avassalotti
    Member Author

    Thanks Stefan for the patch. It's very much appreciated. I will try to review it soon.

    Of the features you proposed, the two I would like to take another look at are implicit memoization and the BAIL_OUT opcode. For the implicit memoization feature, we will need to have some performance results in hand to justify the major changes it needs. If you can work out a quick patch, I can run it through the benchmark suite for pickle and measure the impact. Hopefully, we will see a good improvement, though we can't be sure until we measure.

    And as for the BAIL_OUT opcode, it would be interesting to revisit its use now that we support binary framing. It could be helpful to add it to prevent the Unpickler from hanging if the other end forgot to close the stream. I am still not totally convinced; however, if you make a good case for it, I would support seeing it included.

    @avassalotti
    Member Author

    Stefan, I took a quick look at your patch. There are a couple of things that stand out.

    First, I think the implementation of BINGLOBAL and BINGLOBAL_BIG should be moved to another patch. Adding a binary version opcode for GLOBAL is a separate feature and it should be reviewed independently. Personally, I prefer the STACK_GLOBAL opcode I proposed as it is much simpler to implement, but I am biased.

    Next, the patch's formatting should be fixed to conform to PEP-7 and PEP-8. Make sure the formatting is consistent with the surrounding code. In particular, comments should be full sentences that explain why we need the code. Avoid adding comments that merely say what the code does, unless the code is complex.

    In addition, please replace the uses of PyUnicode_InternFromString with _Py_IDENTIFIER as needed. The latter allows the static strings to be garbage collected when the module is deleted, which is friendlier to embedded interpreters. It also leads to cleaner code.

    Finally, the class method check hack looks like a bug to me. There are multiple solutions here. For example, we could fix class methods to be cached so they always have the same ID once they are created. Or, we could remove the 'is' check completely if it is unnecessary.

    @pitrou
    Member

    pitrou commented May 12, 2013

    Stefan, I took a quick look at your patch. There are a couple of things
    that stand out.

    It would be nice if you could reconcile each other's work. Especially so
    I don't re-implement framing on top of something else :-)

    Adding a binary version opcode for GLOBAL is a separate feature and it
    should be reviewed independently.

    Well, it's part of the PEP.

    Personally, I prefer the STACK_GLOBAL opcode I proposed as it is much
    simpler to implement, but I am biased.

    I agree it sounds simpler. I hadn't thought about it when first writing
    the PEP.

    @avassalotti
    Member Author

    Stefan, could you address my review comments soon? The improved support for globals is the only big piece missing from the implementation of the PEP, which I would like to get done and submitted by the end of the month.

    @mstefanro
    Mannequin

    mstefanro mannequin commented Jun 4, 2013

    On 6/3/2013 9:33 PM, Alexandre Vassalotti wrote:

    Stefan, could you address my review comments soon? The improved support for globals is the only big piece missing from the implementation of the PEP, which I would like to get done and submitted by the end of the month.

    Yes, I apologize for the delay again. Today is my last exam this semester,
    so I'll do my best to get it done as soon as possible (hopefully this weekend).

    @pitrou
    Member

    pitrou commented Aug 17, 2013

    Alexandre, Stefan, is any of you working on this?
    If not, could you please explain what the status of the patch is, whose work is the most advanced (Alexandre's or Stefan's), and what the plan should be to move this forward?

    Thanks!

    @ncoghlan
    Contributor

    Potentially relevant to this: we hope to have PEP-451 done for 3.4, which adds a __spec__ attribute to module objects, and will also tweak runpy to ensure -m registers __main__ under its real name as well.

    If pickle uses __spec__.name in preference to __name__ when __spec__ is defined, then objects defined in __main__ modules run via -m should start being pickled correctly.

    @avassalotti
    Member Author

    I am still working on it. I implemented support for nested globals last week (http://hg.python.org/features/pep-3154-alexandre/rev/c8991b32a47e). At this point, the only big piece remaining is support for method descriptors. There are other minor things left, but we can worry about those later.

    Nick, thanks for the pointer! I didn't know about PEP-451. I will look into how we can use it in pickle.

    @avassalotti
    Member Author

    Hi folks,

    I consider my implementation of PEP-3154 mostly feature-complete at this point. I still have a few things left to do. For example, I need to update the documentation for the new protocol. However, these can mostly be done during the review process. Plus, I definitely prefer getting feedback sooner. :-)

    Please review at:

    http://bugs.python.org/review/17810/

    Thanks!

    @avassalotti
    Member Author

    I have been looking again at Stefan's earlier proposal of making memoization implicit in the new pickle protocol. While I liked the smaller pickles it produced, I didn't like the invasiveness of the implementation, which requires a change for almost every opcode processed by the Unpickler. This led me to what I think is a reasonable compromise between what we have right now and Stefan's proposal: we can make the argument of the PUT opcodes implicit, without making the whole opcode implicit.

    I've implemented this by introducing a new opcode, MEMOIZE, which stores the top of the pickle stack using the current size of the memo as the index. Using the memo size as the index saves us some extra bookkeeping variables and handles nicely the situations where Pickler.memo.clear() or Unpickler.memo.clear() are used.
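    In other words (a minimal sketch of the semantics, not the actual code in Lib/pickle.py or Modules/_pickle.c):

        MEMOIZE = b'\x94'       # opcode byte used in the final pickle module

        def save_memoize(pickler, obj):
            # Pickler side: emit MEMOIZE with no index argument and record obj
            # under the next free memo slot.
            pickler.write(MEMOIZE)
            pickler.memo[id(obj)] = (len(pickler.memo), obj)

        def load_memoize(unpickler):
            # Unpickler side: the index is implicit -- the current memo size.
            memo = unpickler.memo
            memo[len(memo)] = unpickler.stack[-1]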

    Size-wise, this brings some good improvements for pickles containing a lot of dicts and lists.

    # Before
    $ ./python.exe -c "import pickle; print(len(pickle.dumps([[] for _ in range(1000)], 4)))"
    5251

    # After, with the new MEMOIZE opcode
    $ ./python.exe -c "import pickle; print(len(pickle.dumps([[] for _ in range(1000)], 4)))"
    2015

    Time-wise, the change is mostly neutral. It makes pickling dicts and lists slightly faster because it simplifies the code for memo_put() in _pickle.

    Report on Darwin Kernel Version 12.5.0: Sun Sep 29 13:33:47 PDT 2013; root:xnu-2050.48.12~1/RELEASE_X86_64 x86_64 i386
    Total CPU cores: 4

    ### pickle4_dict ###
    Min: 0.714912 -> 0.667203: 1.07x faster
    Avg: 0.741616 -> 0.685567: 1.08x faster
    Significant (t=16.25)
    Stddev: 0.02033 -> 0.01346: 1.5102x smaller
    Timeline: http://goo.gl/iHqCfB

    ### pickle4_list ###
    Min: 0.414151 -> 0.398913: 1.04x faster
    Avg: 0.432094 -> 0.409058: 1.06x faster
    Significant (t=11.83)
    Stddev: 0.01049 -> 0.00893: 1.1749x smaller
    Timeline: http://goo.gl/wfQzgL

    Anyhow, I have committed this improvement to my pep-3154 branch (http://hg.python.org/features/pep-3154-alexandre/rev/8a2861aaef82) for now, though I will happily revert it if people oppose the change.

    @serhiy-storchaka
    Member

    I propose to include the frame size in the previous frame. This would halve the number of file reads.

    @loewis
    Mannequin

    loewis mannequin commented Nov 19, 2013

    Attached is a patch that takes a different approach to framing, putting it into an optional framing layer by means of a buffered reader/writer.

    The framing structure is the same as in PEP-3154; a separate PYFRAMES magic is prepended to guard against protocol inconsistencies and to allow for automatic detection of framing.

    @python-dev
    Mannequin

    python-dev mannequin commented Nov 23, 2013

    New changeset 992ef855b3ed by Antoine Pitrou in branch 'default':
    Issue bpo-17810: Implement PEP-3154, pickle protocol 4.
    http://hg.python.org/cpython/rev/992ef855b3ed

    @pitrou
    Member

    pitrou commented Nov 23, 2013

    I've now committed Alexandre's latest work (including the FRAME and MEMOIZE opcodes).

    @python-dev
    Mannequin

    python-dev mannequin commented Nov 23, 2013

    New changeset d719975f4d25 by Christian Heimes in branch 'default':
    Issue bpo-17810: Add NULL check to save_frozenset
    http://hg.python.org/cpython/rev/d719975f4d25

    @python-dev
    Mannequin

    python-dev mannequin commented Nov 23, 2013

    New changeset c54becd69805 by Christian Heimes in branch 'default':
    Issue bpo-17810: return -1 on error
    http://hg.python.org/cpython/rev/c54becd69805

    @python-dev
    Mannequin

    python-dev mannequin commented Nov 23, 2013

    New changeset a02adfb3260a by Christian Heimes in branch 'default':
    Issue bpo-17810: Add two missing error checks to save_global
    http://hg.python.org/cpython/rev/a02adfb3260a

    @python-dev
    Mannequin

    python-dev mannequin commented Nov 23, 2013

    New changeset 3e16c8c34e69 by Christian Heimes in branch 'default':
    Issue bpo-17810: Fixed NULL check in _PyObject_GetItemsIter()
    http://hg.python.org/cpython/rev/3e16c8c34e69

    @avassalotti
    Member Author

    I've finalized the framing implementation in de9bda43d552.

    There will be more improvements to come until 3.4 final. However, feature-wise we are done. Thank you everyone for the help!

    @tim-one
    Member

    tim-one commented Nov 24, 2013

    [Alexandre Vassalotti]

    I've finalized the framing implementation in de9bda43d552.

    There will be more improvements to come until 3.4 final. However, feature-wise
    we are done. Thank you everyone for the help!

    Woo hoo! Thank YOU for the hard work - I know how much fun this is ;-)

    @serhiy-storchaka
    Member

    Here is a patch which restores optimization for frame headers. Unfortunately it breaks test_optional_frames.

    @larryhastings
    Contributor

    Isn't it a little late to be changing the pickle protocol, now that we've hit feature freeze? If you want to check something like this in, you're going to have to make a good case for it.

    @serhiy-storchaka
    Member

    This doesn't change the pickle protocol. This is just an implementation detail.

    @avassalotti
    Member Author

    Optimizing the output of the pickler class should be fine during the feature freeze as long as the semantics of the current opcodes stay unchanged.

    @pitrou
    Member

    pitrou commented Nov 25, 2013

    Well, Larry may expand, but I think we don't commit performance optimizations during the feature freeze either.
    ("feature" is taken in the same sense as in "no new features in the bugfix branches")

    @larryhastings
    Contributor

    I'll make you a deal. As long as the protocol remains 100% backwards and forwards compatible (3.4.0b1 can read anything written by trunk, and trunk can read anything written by 3.4.0b1), you can make optimizations until beta 2. After that you have to stop... or get permission again.

    @serhiy-storchaka
    Member

    I have opened a separate issue, bpo-19780, for this.

    @ezio-melotti ezio-melotti transferred this issue from another repository Apr 10, 2022