Non-uniformity in randrange for large arguments. #53271

Closed
mdickinson opened this issue Jun 18, 2010 · 25 comments
Labels
stdlib (Python modules in the Lib dir) · type-bug (An unexpected behavior, bug, or error)

Comments

@mdickinson
Member

BPO 9025
Nosy @rhettinger, @terryjreedy, @mdickinson, @abalkin, @orsenthil, @pitrou, @vstinner
Files
  • issue9025.patch
  • issue9025_v2.patch
  • _smallrandbelow.diff
  • randint.py
    Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = 'https://github.com/rhettinger'
    closed_at = <Date 2010-09-07.04:47:26.860>
    created_at = <Date 2010-06-18.10:20:49.663>
    labels = ['type-bug', 'library']
    title = 'Non-uniformity in randrange for large arguments.'
    updated_at = <Date 2010-09-07.04:47:26.846>
    user = 'https://github.com/mdickinson'

    bugs.python.org fields:

    activity = <Date 2010-09-07.04:47:26.846>
    actor = 'rhettinger'
    assignee = 'rhettinger'
    closed = True
    closed_date = <Date 2010-09-07.04:47:26.860>
    closer = 'rhettinger'
    components = ['Library (Lib)']
    creation = <Date 2010-06-18.10:20:49.663>
    creator = 'mark.dickinson'
    dependencies = []
    files = ['17706', '17707', '17755', '17757']
    hgrepos = []
    issue_num = 9025
    keywords = ['patch']
    message_count = 25.0
    messages = ['108095', '108097', '108101', '108108', '108111', '108118', '108120', '108123', '108138', '108310', '108438', '108441', '108453', '108460', '108463', '108466', '108478', '108481', '108487', '108495', '108498', '108499', '108503', '108504', '115738']
    nosy_count = 7.0
    nosy_names = ['rhettinger', 'terry.reedy', 'mark.dickinson', 'belopolsky', 'orsenthil', 'pitrou', 'vstinner']
    pr_nums = []
    priority = 'high'
    resolution = 'fixed'
    stage = None
    status = 'closed'
    superseder = None
    type = 'behavior'
    url = 'https://bugs.python.org/issue9025'
    versions = ['Python 3.2']

    @mdickinson
    Member Author

    Not a serious bug, but worth noting:

    The result of randrange(n) is not even close to uniform for large n. Witness the obvious skew in the following (this takes a minute or two to run, so you might want to reduce the range argument):

    Python 3.2a0 (py3k:81980, Jun 14 2010, 11:23:36)
    [GCC 4.2.1 (SUSE Linux)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from random import randrange
    >>> from collections import Counter
    >>> Counter(randrange(6755399441055744) % 3 for _ in range(100000000))
    Counter({1: 37508130, 0: 33323818, 2: 29168052})

    (The actual probabilities here are, as you might guess from the above numbers: {0: 1/3, 1: 3/8, 2: 7/24}.)

    The cause: for n < 2**53, randrange(n) is effectively computed as int(random() * n). For small n, there's a tiny bias involved, but this is still an effective method. However, as n increases towards 2**53, the bias increases significantly. (For n >= 2**53, the random module uses a different strategy that *does* produce uniformly distributed results.)
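
    A minimal sketch of the mechanism (not from the thread): random() returns k/2**53 for a uniform 53-bit integer k, so feeding such integers through int(x * n) reproduces the skew directly and shows that it comes from the rounding of the float multiplication:

    from random import getrandbits
    from collections import Counter

    n = 3 * 2**51   # == 6755399441055744 == 0.75 * 2**53
    # getrandbits(53) / 2**53 is exactly what random() produces; the bias
    # enters when the product with n is rounded back to a double
    print(Counter(int((getrandbits(53) / 2**53) * n) % 3
                  for _ in range(10**6)))
    # the residues 0, 1, 2 appear with probabilities close to 1/3, 3/8, 7/24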

    A solution would be to lower the cutoff point where randrange() switches from using int(random() * n) to using the _randbelow method.

    @mdickinson added the stdlib and type-bug labels Jun 18, 2010
    @mdickinson
    Member Author

    Note: the number 6755399441055744 is special: it's 0.75 * 2**53, and was deliberately chosen so that the non-uniformity is easily exhibited by looking at residues modulo 3. For other numbers of this size, the non-uniformity is just as bad, but demonstrating the non-uniformity clearly would have taken a little more work.

    @mdickinson
    Member Author

    Here's an example patch that removes any bias from randrange(n) (except for bias resulting from the imperfectness of the core MT generator). I added a small private method to Modules/_randommodule.c to aid the computation.

    This only fixes one instance of int(random() * n) in the Lib/random.py source; the other instances should be modified accordingly.

    With this patch, randrange is a touch faster than before (20-30% speedup) for small arguments.

    Is this worth pursuing?

    @mdickinson
    Member Author

    The nonuniformity of randrange has a knock-on effect in other random module functions. For example, take a sample of 100 elements from range(6004799503160661), and take the smallest element from that sample. Then the exact distribution of that smallest element is somewhat complicated, but you'd expect it to be even with probability very close to 50%. But it turns out that it's roughly twice as likely to be even as to be odd.

    >>> from random import sample
    >>> from collections import Counter
    >>> population = range(6004799503160661)
    >>> Counter(min(sample(population, 100)) % 2 for _ in range(100000))
    Counter({0: 66810, 1: 33190})

    @mdickinson
    Member Author

    Here's a more careful Python-only patch that fixes the bias in randrange and randint (but not in shuffle, choice or sample). It should work well both for Mersenne Twister and for subclasses of Random that use a poorer PRNG with badly-behaved low-order bits.

    @rhettinger self-assigned this Jun 18, 2010
    @rhettinger
    Contributor

    Will take a look at this in the next few days. Am tempted to just provide either a recipe or a new method, so that sequences generated by earlier Pythons are still reproducible.

    @abalkin
    Member

    abalkin commented Jun 18, 2010

    I would prefer to see the correct algorithm in the stdlib and a recipe for how to reproduce old sequences for the users who care.

    @rhettinger
    Contributor

    FWIW, we spent ten years maintaining the ability to reproduce sequences. It has become an implicit promise.
    I'll take a look at the patch in the next few days.

    @mdickinson
    Member Author

    Hmm. I hadn't considered the reproducibility problem.

    Does the module aim for reproducibility across all platforms *and* all versions of Python? Or just one of those?

    For small n, I think the patched version of randrange(n) produces the same sequence as before with very high probability, but not with certainty. Since that sounds like a recipe for hard-to-find bugs, it might be better to deliberately perturb the outputs somehow so that the sequence is obviously different from before, rather than subtly different.

    @terryjreedy
    Member

    'Random', without qualification, is commonly taken to mean 'with uniform distribution'. Otherwise it has no specific meaning and could well be a synonym for 'arbitrary' or 'haphazard'.

    The behavior reported is buggy and in my opinion should be fixed if possible. I have done simulation research in the past and do not consider such bugs minor. If I had results that depended on these functions, I might want to rerun with the fixed versions to make sure the end results were not affected. I would certainly want the fixed behavior for any future work.

    I do not see any promise of reproducibility of sequences from version to version. I do not really see the point as one can rerun with the old Python version or copy the older random.py.

    The old versions could be kept with an 'old_' prefix and documented in a separate subsection that starts with "Do not use these buggy old versions of x and y in new code. They are only present for those who want to reproduce old sequences." But I wonder how many people would use them.

    @rhettinger
    Contributor

    FWIW, here are two approaches to getting an equi-distributed version of int(n*random()) where 0 < n <= 2**53. The first mirrors the approach currently in the code. The second approach makes fewer calls to random().

    from random import random

    def rnd1(n):
        assert 0 < n <= 2**53
        N = 1 << (n-1).bit_length()   # smallest power of two >= n
        r = int(N * random())         # uniform over [0, N): exact, since N is a power of two
        while r >= n:                 # rejection step
            r = int(N * random())
        return r

    def rnd2(n, N=1<<53):
        assert 0 < n <= N
        NN = N - (N % n) + 0.0  # largest multiple of n <= N
        r = N * random()        # a float with an integral value
        while r >= NN:          # reject the partial block at the top
            r = N * random()
        return int(r) % n
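
    A quick sanity check (not part of the original comment), re-running the earlier experiment through rnd2:

    from collections import Counter

    # with the bias removed, the three residues should appear in roughly
    # equal proportions (contrast with the skewed Counter output above)
    print(Counter(rnd2(6755399441055744) % 3 for _ in range(10**6)))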

    @mdickinson
    Member Author

    Either of these looks good to me.

    If the last line of the second is changed from "return int(r) % n" to "return int(r) // (N // n)" then it'll use the high-order bits of random() instead of the low-order bits. This doesn't matter for MT, but might matter for subclasses of Random using a different underlying generator.

    @pitrou
    Member

    pitrou commented Jun 23, 2010

    This wouldn't be the first time reproducibility is dropped, since reading from the docs:

    “As an example of subclassing, the random module provides the WichmannHill class that implements an alternative generator in pure Python. The class provides a backward compatible way to reproduce results from earlier versions of Python, which used the Wichmann-Hill algorithm as the core generator.”

    Also:

    FWIW, we spent ten years maintaining the ability to reproduce sequences. It has become an implicit promise.

    IMO it should either be documented explicitly, or be taken less dearly. There's not much value in an "implicit promise" that's only known by a select few.

    (besides, as Terry said, I think most people are more concerned by the quality of the random distribution than by the reproducibility of sequences)

    @orsenthil
    Member

    I guess Antoine wanted to point out this:

    "Changed in version 2.3: MersenneTwister replaced Wichmann-Hill as the default generator."

    But as the paragraph points out, Python did provide the non-default WichmannHill class for generating repeatable sequences with older Pythons.

    My brief reading on this topic does suggest that 'repeatability' is an important requirement for any PRNG.

    @mdickinson
    Member Author

    BTW, the Wichmann-Hill code is gone in py3k, so that doc paragraph needs removing or updating.

    @rhettinger
    Contributor

    Thanks guys, I've got it from here.

    Some considerations for the PRNG are:

    • equidistribution (for quality)
    • repeatability from the same seed (even in multithreaded environments)
    • quality and simplicity of API (for usability)
    • speed (it matters whether a Monte Carlo simulation takes 5 minutes or 30 minutes).

    I'm looking at several ideas:

    • updating the new randrange() to use the rnd2() algorithm shown above (for equidistribution).
    • possibly providing a C version of rnd2() and using it in randrange() for speed and for thread-safety.
    • possibly updating shuffle() and choice() to use rnd2().
    • moving the existing randrange() to randrange_quick() -- uses int(n * random()) for speed and for reproducibility of previously created sequences. Alternatively, adding a recipe to the docs for recreating old sequences and not cluttering the code with backwards-compatibility cruft.

    Am raising the priority to normal because I think some effort needs to be made to address equidistribution in 3.2. Will take some time to come up with a good patch that balances quality, simplicity, speed, thread-safety, and reproducibility.

    May also consult with Tim Peters, who has previously voiced concerns about stripping bits off of multiple calls to random(), because the MT proofs make no guarantees about quality in those cases. I don't think this is an issue in practice, but in theory, when we start tossing out some of the calls to random(), we're also throwing away the guarantees of a long period, 623-dimensional equidistribution, and uniformity.

    @pitrou
    Member

    pitrou commented Jun 23, 2010

    Some considerations for the PRNG are:

    • equidistribution (for quality)
    • repeatability from the same seed (even in multithreaded environments)

    I believe a reasonable (com)promise would be to guarantee repeatability across a given set of bugfix releases (for example, across all 2.6.x releases). We shouldn't necessarily commit to repeatability across feature releases, especially if it conflicts with desirable opportunities for improvement.

    @mdickinson
    Member Author

    • possibly providing a C version of rnd2()

    If recoding in C is acceptable, I think there may be better ( = simpler and faster) ways than doing a direct translation of rnd2.

    For example, for small k, the following algorithm for randrange(k) suffices:

    • take a single 32-bit deviate (generated using genrand_int32)
    • multiply by k (a 32-by-32 to 64-bit widening multiply) and return the high 32 bits of the result, provided that the bottom half of the product is <= 2**32 - k (almost always true, for small k).
    • consume extra random words as necessary in the case that the bottom half of the product is > 2**32 - k.

    I can provide code (with that 3rd step fully expanded) if you're interested in this approach.

    This is likely to be significantly faster than a direct translation of rnd2, since in the common case it requires only: one 32-bit deviate from MT, one integer multiplication, one subtraction, and one comparison. By comparison, rnd2 uses (at least) two 32-bit deviates and massages them into a float, before doing arithmetic with that float.

    Though it's possible (even probable) that any speed gain would be insignificant in comparison to the rest of the Python machinery involved in a single randrange call.
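
    In Python terms, the algorithm sketched above amounts to something like the following (an illustration using Python integers in place of the C widening multiply, not the attached patch):

    import random

    def smallrandbelow(k):
        """Uniform int in [0, k) for 0 < k <= 2**32."""
        assert 0 < k <= 1 << 32
        prod = random.getrandbits(32) * k   # the 32-by-32 widening multiply
        shift = 32
        # the candidate result is prod >> shift; it is final unless the
        # bottom half is within k of wrapping, in which case extra random
        # words are consumed until the outcome is decided exactly
        while (prod & ((1 << shift) - 1)) > (1 << shift) - k:
            prod = (prod << 32) + random.getrandbits(32) * k
            shift += 32
        return prod >> shift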

    @mdickinson
    Member Author

    Just to illustrate, here's a patch that adds a method Random._smallrandbelow, based on the algorithm I described above.

    @rhettinger
    Contributor

    Antoine, there does need to be repeatability; there's no question about that. The open question for me is how to offer that repeatability in the cleanest manner.

    People use random.seed() for reproducible tests. They need it to have their studies become independently validatable, etc. Some people are pickling the state of the RNG and need to restart where they left off, etc.

    Mark, thanks for the alternative formulation. I'll take a look when I get a chance.

    I've got it from here.

    @vstinner
    Member

    randint.py: another algorithm to generate a random integer in a range. It uses only operations on unsigned integers (no evil floating-point numbers). It calls tick() multiple times to generate enough entropy, and it produces a uniform distribution (provided the input generator is uniform).

    tick() is simply the output of the Mersenne Twister generator. The algorithm can be optimized (especially the part computing ndigits and scale).
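
    The attached file isn't reproduced here, but an integer-only algorithm of the kind described might look like this (a guess at its shape, reusing the tick/ndigits/scale names from the comment above):

    import random

    def _randint(n, tick=lambda: random.getrandbits(32)):
        """Uniform int in [0, n) built from 32-bit ticks; integers only."""
        ndigits = (n.bit_length() + 31) // 32   # 32-bit words needed to cover n
        scale = 1 << (32 * ndigits)
        limit = scale - scale % n               # largest multiple of n <= scale
        while True:
            r = 0
            for _ in range(ndigits):            # assemble a uniform ndigits-word integer
                r = (r << 32) | tick()
            if r < limit:                       # reject the partial block at the top
                return r % n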

    @pitrou
    Member

    pitrou commented Jun 24, 2010

    Antoine, there does need to be repeatability; there's no question about that.

    Well, that doesn't address my proposal of making it repeatable across bugfix releases only. There doesn't seem to be a strong use case for perpetual repeatability.

    If some people really need perpetual repeatability, I don't understand how they can rely on Python anyway, since we don't make any such promise explicitly (or do they somehow manage to read your mind?). So, realistically, they should already be using their custom-written deterministic generators.

    @vstinner
    Member

    Distribution with my algorithm:

    ...
    from collections import Counter
    print Counter(_randint(6755399441055744) % 3 for _ in xrange(100000000))

    => Counter({0L: 33342985, 2L: 33335781, 1L: 33321234})

    Distribution: {0: 0.33342985000000003, 2: 0.33335780999999998, 1: 0.33321234}

    @mdickinson
    Member Author

    A couple of points:

    (1) In addition to documenting the extent of the repeatability, it would be good to have tests to prevent changes that inadvertently change the sequence of randrange values.

    (2) For large arguments, cross-platform reproducibility is already a bit fragile. For example, the _randbelow function depends on the system _log function---see the line

        k = int(1.00001 + _log(n-1, 2.0))

    Now that we have the bit_length method available, it might be better to use that.
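
    For example (a sketch of the suggestion, not the eventual commit):

    import random

    def _randbelow(n):
        """Uniform int in [0, n) via rejection, with an exact bit count."""
        k = n.bit_length()            # k bits always cover [0, n); no libm involved
        r = random.getrandbits(k)
        while r >= n:                 # accepted with probability n / 2**k >= 1/2
            r = random.getrandbits(k)
        return r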

    @rhettinger
    Contributor

    Put in a fix with r84576. May come back to it to see if it can or should be optimized with C. For now, this gets the job done.

    @ezio-melotti transferred this issue from another repository Apr 10, 2022