
Author rhettinger
Recipients Steven.Barker, ideasman42, mark.dickinson, paul.moore, rhettinger, serhiy.storchaka, steve.dower, tim.golden, zach.ware
Date 2016-05-22.01:21:08
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1463880069.28.0.920000986238.issue27072@psf.upfronthosting.co.za>
In-reply-to
Content
Since the downstream calls to PyMem_Malloc and _PyLong_FromByteArray both accept size_t for their sizing, there isn't a problem there.

That said, I think the current limitation nicely protects us from harm.  If you were to run getrandbits(2**60), it would take a long time, eat all your memory, trigger swapping until your hard drive was full, and you wouldn't be able to break out of the tight loop with a keyboard interrupt.

Even with the current limit, the resultant int object is ridiculously big in a way that is awkward to manipulate after it is created (don't bother trying to print it, jsonify it, or do any interesting math with it).

Also, if a person wants a lot of bits, it is effortless to make repeated calls to getrandbits() using the current API (see the sketch below).  Doing so would likely improve their code and be a better design (consuming bits as they are generated rather than creating them all at once and extracting them later).
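
A minimal sketch of what "repeated calls" could look like, assuming a caller really does want one giant integer; getrandbits_chunked is a hypothetical helper, not part of the random module's API:

    import random

    def getrandbits_chunked(nbits, chunk=1024):
        """Assemble an nbits-wide random integer from repeated
        getrandbits() calls, chunk bits at a time (hypothetical helper)."""
        result = 0
        filled = 0
        while filled < nbits:
            take = min(chunk, nbits - filled)
            result |= random.getrandbits(take) << filled
            filled += take
        return result

    # Example: a 1,000,000-bit random integer built 1024 bits at a time.
    big = getrandbits_chunked(10**6)
    assert big.bit_length() <= 10**6

In most cases the loop body would consume each chunk directly instead of accumulating it into a single huge int.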

In short, just because we can do it doesn't mean we should.