stack overflow evaluating eval("()" * 30000) #50015
Comments
Originally reported by Juanjo Conti at PyAr: evaluating the expression eval("()" * 30000) causes a stack overflow. Python 3.0.1, 2.6, 2.5, and the current 2.x trunk all fail on Windows; the original 2.4 isn't affected; it raises "TypeError: 'tuple' object is not …". Alberto Bertogli said: inside eval, symtable_visit_expr() (Python/…) …
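The report can be reproduced with a snippet along these lines; exact behaviour varies by version, as the thread notes (a crash on unpatched interpreters, a clean exception on patched ones):

```python
# "()" * 30000 parses as ()()()... -- a call chain nested 30000 deep in
# the AST, which is what overflows the C stack in unpatched interpreters.
source = "()" * 30000

try:
    eval(source)
except (RecursionError, MemoryError) as exc:
    # Patched CPython (3.3+) raises a clean error instead of crashing;
    # the exact exception type may vary across versions.
    print("caught:", type(exc).__name__)
except TypeError as exc:
    # The thread reports that 2.4 raised TypeError here instead
    print("caught:", type(exc).__name__)
```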
This is a pathological case. I suppose we have to add a recursion …
Another case from bpo-7985: …
On 3.1.2, WinXP, I immediately get …
Terry, try a larger constant. I can reproduce it on all versions from 2.6 to 3.3 with eval("()" * 300000).
In 3.3.3a4 and b1, with the original 30000, I no longer get TypeError, but the box "python(w).exe has stopped working". So either Win7, 64-bit, on a machine with much more memory makes a difference, or something in the code was reverted. Is this really a security issue? (If so, up the priority?)
I don't think that eval is used in a security context.
This is a fix for this issue. The solution was to add two fields (recursion_depth and recursion_limit) … The test case added also covers other similar cases (a.b.b.b.b.b…). There is no depth check when visiting statements because I cannot … The patch uses the current depth and the current recursion limit, but …
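The shape of the fix can be illustrated in pure Python. The real patch is C code in the compiler/symbol-table pass; this sketch only mirrors the idea (a depth counter checked against a limit, decremented on every exit path), and the class and message names are invented for illustration:

```python
import ast

class DepthCheckingVisitor(ast.NodeVisitor):
    """Illustrative analogue of the C patch: track recursion depth while
    walking the tree and fail cleanly instead of overflowing the C stack."""

    def __init__(self, recursion_limit):
        self.recursion_depth = 0          # mirrors the fields added by the patch
        self.recursion_limit = recursion_limit

    def visit(self, node):
        self.recursion_depth += 1
        if self.recursion_depth > self.recursion_limit:
            raise RecursionError("expression too deeply nested to compile")
        try:
            self.generic_visit(node)
        finally:
            self.recursion_depth -= 1     # always decrement, even on error

# One of the similar cases mentioned above: chained attribute access.
tree = ast.parse("a.b.b.b.b.b", mode="eval")
DepthCheckingVisitor(recursion_limit=100).visit(tree)   # fits within the limit
try:
    DepthCheckingVisitor(recursion_limit=3).visit(tree)  # too deep
except RecursionError as exc:
    print(exc)
```

The `finally` block is the part the later review comments turn out to care about: an early `return` inside a macro must not skip the decrement.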
I've re-reviewed Andrea's patch (I was looking over Andrea's shoulder at the EuroPython sprint when he wrote it). It looks good and applies cleanly.
Just out of curiosity: how do the magic numbers 100000 and 2000 in test_compiler_recursion_limit relate to recursion_depth and recursion_limit? Thanks!
Indeed, I don't like the introduction of COMPILER_STACK_FRAME_SCALE.
The patch is incomplete: the VISIT macro contains a "return 0;", and in this case st->recursion_depth is not decremented.
I missed all the macrology present :-( ... The following is a patch that takes it into account (it also defines a VISIT_QUIT macro to make the exit points more visible). The handling has also been extended to visit_stmt because the macros are shared. Of course, all this makes sense assuming that cleanup in case of error is indeed desired. BTW: shouldn't all those statement macros be of the "do{...}while(0)" form instead of just being wrapped in "{}"? I see potential problems with if/else.
On Mon, Aug 20, 2012 at 12:27 AM, Antoine Pitrou <report@bugs.python.org> wrote:
I submitted a new version with the scale lowered to 3. Using a lower value (e.g. 1), however, makes "test_extended_args" fail. If it's OK to make that test throw instead, then the whole scaling …
We want to minimise the risk of breaking working code. Making it easy to adjust this recursion limit separately from the main recursion limit by using a scaling factor is a good way to do that. It shouldn't increase the maintenance burden in any significant way, since the ratio of the stack depth increase in the compiler vs the main interpreter loop should be relatively constant across platforms. Autogenerated code could easily hit the 1000 term limit - if anything, I'd be inclined to set it *higher* than 4 rather than lower, as breaking previously working code in a maintenance release is a bad thing, regardless of our opinion of the sanity of that code.
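The relationship between the interpreter's recursion limit and the compiler's scaled limit can be sketched as follows. This is only an illustration of the arithmetic under discussion: the scale value of 3 is the one mentioned in the thread, and the function name is invented:

```python
import sys

# Value discussed in the thread; the authoritative definition is in the
# C source, not here.
COMPILER_STACK_FRAME_SCALE = 3

def compiler_recursion_limit():
    # The compiler multiplies the interpreter limit by a scale factor,
    # the rationale being that visiting one AST level consumes less C
    # stack than executing one Python-level frame does.
    return sys.getrecursionlimit() * COMPILER_STACK_FRAME_SCALE

print(compiler_recursion_limit())  # 3000 with the default limit of 1000
```

Under this scheme, raising or lowering sys.setrecursionlimit() moves the compiler's effective depth limit proportionally, which is the "adjust separately" property argued for above.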
We can simply apply the 1000 limit in Python 3.4 and mark the bug as … I don't think adding a scaling factor just to cope with hypothetical …
New changeset ab02cd145f56 by Nick Coghlan in branch '3.3':
New changeset bd1db93d76e1 by Nick Coghlan in branch 'default':
You can take the scaling factor out if you really want, but it adds no real maintenance overhead, and better reflects the real stack usage.
However, agreed on the won't fix for 3.2 and 2.7, although I'd consider it at least for 2.7 if someone went through and worked out a patch that applies cleanly. For 3.2, this really isn't the kind of thing we'd want to do in the final regular maintenance release.
Can you also add a related snippet in …
Note: if you do take the scaling factor out, don't forget to track down the reasons behind the original commit that added the test that broke *without* the scaling factor. For me, "the test suite fails without it" is reason enough to say it's needed - someone decided at some point to ensure that level of nesting worked, so if we're going to go back on that, we need to know the original reason why the test was added. I think it's easier just to keep that code working, since we have a solution that *doesn't* break it and really isn't that complicated.
New changeset cf2515d0328b by Nick Coghlan in branch '3.3':
New changeset 3712028a0c34 by Nick Coghlan in branch 'default':
The sanity check in the recursion limit finding script is definitely a good idea, so I added that (as the commits show). For the record, running that script on the 3.3 branch with my 4 GB RAM Fedora 17 ASUS Zenbook finds a maximum recursion limit around 16800, at which point test_add is the first one to die.
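For readers unfamiliar with the script being discussed: the real recursion-limit finder ships with CPython and probes many operations, but its core idea can be sketched in a few lines. This cut-down version is illustrative only, not the actual script:

```python
import sys

def can_recurse_to(depth):
    """Return True if a simple Python recursion can reach `depth`
    without raising RecursionError under the current limit."""
    def f(n):
        if n > 0:
            f(n - 1)
    try:
        f(depth)
        return True
    except RecursionError:
        return False

# Probe a couple of depths relative to the current interpreter limit.
limit = sys.getrecursionlimit()
print(can_recurse_to(limit // 2))   # comfortably under the limit -> True
print(can_recurse_to(limit * 2))    # well over the limit -> False
```

The real script additionally raises the limit step by step until the process actually blows the C stack, which is how the "maximum around 16800" figure above was obtained.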
I don't think there is any need for a scaling factor. There is no way an expression tree >1000 deep could possibly have any sane behaviour.
Didn't you make a mistake in the recursion factor there? Or is it really …
Antoine: the scaling is deliberately higher in the recursion limit finder because we just want to ensure it hits the recursion limit (or blows the stack, if the scaling is wrong). In the tests, I cut it finer because I wanted to ensure we were straddling the allowed/disallowed boundary fairly closely, in order to properly test the code that accounts for the *existing* recursion depth when initialising the compiler's internal state.

Mark: same answer I gave Antoine earlier. Without the scaling factor, a test fails. If you want to advocate for removing or changing that test instead, track down why it was added and provide a rationale for why it's no longer applicable. However, before you invest too much time in that, note that the trees generated by binary operators of the same precedence in CPython are *not* balanced (IIRC, the RHS is handled as a leaf expression), thus they also suffer from this recursion problem (if you look at the patch I committed, I added an extra test beyond those Andrea provided: a multiplication expression with a ridiculously large number of terms). I agree that path is the only one generated code is likely to hit, though, which is why I added a specific test for it.
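The point about unbalanced trees is easy to check with the ast module: a chain of same-precedence operators parses to a left-deep tree whose depth grows linearly with the number of terms, so compiling a huge product recurses just as deeply as the nested-call case. A sketch (the helper name is invented):

```python
import ast

def left_depth(node):
    """Count how deep the chain of nested BinOp nodes goes on the left."""
    depth = 0
    while isinstance(node, ast.BinOp):
        node = node.left      # for "a * b * c", the right operand is a leaf
        depth += 1
    return depth

# 100 multiplied terms -> 99 BinOp nodes, nested entirely down the left side.
tree = ast.parse("1" + " * 1" * 99, mode="eval")
print(left_depth(tree.body))  # 99
```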