Message 266036
Since the downstream calls to PyMem_Malloc and _PyLong_FromByteArray both accept size_t for their sizing, there isn't a problem there.
That said, I think the current limitation nicely protects us from harm. If you were to run getrandbits(2**60), it would take a long time, eat all your memory, trigger swaps until your hard drive was full, and you wouldn't be able to break out of the tight loop with a keyboard interrupt.
Even with the current limit, the resultant int object is ridiculously big in a way that is awkward to manipulate after it is created (don't bother trying to print it, jsonify it, or do any interesting math with it).
Also, if a person wants a lot of bits, it is effortless to make repeated calls to getrandbits() using the current API. Doing so would likely improve their code and be a better design (consuming bits as they are generated rather than creating them all at once and extracting them later).
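The repeated-call approach could be sketched like this; the helper name and chunk size below are illustrative, not part of the stdlib API:

```python
import random

def getrandbits_big(k, chunk=256):
    """Assemble a k-bit random integer from repeated getrandbits() calls.

    A hypothetical helper showing the repeated-call pattern; in practice
    it is usually better to consume each chunk as it is generated rather
    than accumulate one huge int.
    """
    result = 0
    filled = 0
    while filled < k:
        n = min(chunk, k - filled)         # size of the next chunk of bits
        result |= random.getrandbits(n) << filled
        filled += n
    return result

x = getrandbits_big(1000)
print(x.bit_length() <= 1000)  # True
```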
In short, just because we can do it doesn't mean we should.
Date | User | Action | Args
2016-05-22 01:21:09 | rhettinger | set | recipients: + rhettinger, paul.moore, mark.dickinson, tim.golden, ideasman42, zach.ware, serhiy.storchaka, steve.dower, Steven.Barker
2016-05-22 01:21:09 | rhettinger | set | messageid: <1463880069.28.0.920000986238.issue27072@psf.upfronthosting.co.za>
2016-05-22 01:21:09 | rhettinger | link | issue27072 messages
2016-05-22 01:21:08 | rhettinger | create |