
Author mrabarnett
Recipients ezio.melotti, mrabarnett, serhiy.storchaka, yannvgn
Date 2019-07-31.17:23:32
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1564593812.49.0.157396144592.issue37723@roundup.psfhosted.org>
In-reply-to
Content
I've just had a look at _uniq, and the code surprises me.

The obvious way to detect duplicates is with a set, but that requires the items to be hashable. Are they?

Well, the first line of the function uses 'set', so they are.

Why, then, isn't it using a set to detect duplicates?

How about this:

def _uniq(items):
    newitems = []
    seen = set()
    for item in items:
        if item not in seen:
            newitems.append(item)
            seen.add(item)
    return newitems
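
A quick sanity check of the sketch above (just the function as written here, not the stdlib code) shows it keeps the first occurrence of each item, which I assume is the point of _uniq:

    >>> _uniq([3, 1, 3, 2, 1])
    [3, 1, 2]
    >>> _uniq("abracadabra")
    ['a', 'b', 'r', 'c', 'd']

(For what it's worth, on 3.7+ list(dict.fromkeys(items)) gives the same order-preserving result in one line, if that reads better.)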