This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: a bunch of infinite C recursions
Type: crash Stage:
Components: Interpreter Core Versions: Python 2.6
process
Status: closed Resolution: fixed
Dependencies: Superseder:
Assigned To: brett.cannon Nosy List: arigo, brett.cannon, josiahcarlson, loewis, mwh, nnorwitz, terry.reedy
Priority: critical Keywords: patch

Created on 2005-05-15 23:43 by arigo, last changed 2022-04-11 14:56 by admin. This issue is now closed.

Files
File name Uploaded Description Edit
test17.py arigo, 2006-06-25 09:33 Segfaults a debug build of 2.5.
obj_call_rec.diff brett.cannon, 2006-06-26 19:31 Patch for python object call infinite recursion
infinite_rec.diff brett.cannon, 2006-07-01 00:46 Fix infinite recursion in PyObject_Call() and PyErr_NormalizeException()
Messages (27)
msg25346 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2005-05-15 23:43
There is a general way to cause unchecked infinite recursion at the C level, and I have no clue at the moment how it could be reasonably fixed.  The idea is to define special __xxx__ methods in such a way that no Python code is actually called before they invoke more special methods (e.g. themselves).

>>> class A: pass
>>> A.__mul__=new.instancemethod(operator.mul,None,A)
>>> A()*2
Segmentation fault
msg25347 - (view) Author: Terry J. Reedy (terry.reedy) * (Python committer) Date: 2005-05-19 02:02
Logged In: YES 
user_id=593130

On Windows, this caused the interactive window to just 
disappear, so I suspect something similar occurred.

The new module is a known-dangerous, use-at-your-own-risk, 
implementation-specific module whose use, like bytecode hacking, 
is outside the language proper.  Both bypass normal object 
creation syntax and its checks, and both can create invalid 
objects.  A hold-your-hand implementation would not give such 
access to internals.

Lib Ref 3.28 says "This module provides a low-level interface to 
the interpreter, so care must be exercised when using this 
module. It is possible to supply non-sensical arguments which 
crash the interpreter when the object is used."  Should more or 
different be said?  

If not, I suspect this should be closed as 'won't fix', as in 'won't 
remove the inherently dangerous new module'.


msg25348 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2005-05-19 15:05
This is not about the new module.  The same example can be written as:

  import operator, types
  class A: pass
  A.__mul__ = types.MethodType(operator.mul, None, A)

If this still looks essentially like an indirect way of using the new module, here is another example:

  class A(str): __get__ = getattr
  a = A('a')
  A.a = a
  a.a

Or, as I just found out, new-style classes are again vulnerable to the older example based __call__, which was fixed for old-style classes:

  class A(object): pass
  A.__call__ = A()
  A()()

I'm not denying that these examples look convoluted :-)
My point here is that we can basically build a lot of examples based only on core (if not necessarily widely understood) language features.  It appears to go against the basic hope that CPython cannot be crashed as long as you don't use features explicitly marked as dangerous.
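As a side note not in the original report: on modern CPython 3, the `__call__` cycle from the examples above is stopped by the C-level recursion guard this issue led to, so a sketch like the following raises RecursionError instead of segfaulting:

```python
# Safe modern illustration of the __call__ cycle above: calling an
# A() instance looks up type's __call__, which is itself an A
# instance, so the call recurses forever at the C level.  Current
# CPython counts these calls and raises RecursionError.
class A:
    pass

A.__call__ = A()  # an instance of A serves as A's __call__

def provoke():
    try:
        A()()
    except RecursionError:
        return "RecursionError raised"
    return "no error"

print(provoke())
```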
msg25349 - (view) Author: Martin v. Löwis (loewis) * (Python committer) Date: 2005-05-20 21:22
Wouldn't adding Py_EnterRecursiveCall into many places solve
the problem?
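A Python-level analogue may help picture what Py_EnterRecursiveCall does at the C level: a counter bumped on entry, checked against a limit, and restored on exit.  A minimal sketch (the decorator and its limit are illustrative, not CPython's actual mechanism):

```python
import functools

def recursion_guard(limit=50):
    """Decorator mimicking Py_EnterRecursiveCall/Py_LeaveRecursiveCall:
    raise instead of overflowing the stack when func re-enters itself
    more than `limit` levels deep."""
    def decorate(func):
        depth = 0
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            nonlocal depth
            if depth >= limit:          # the Py_EnterRecursiveCall check
                raise RecursionError("maximum recursion depth exceeded")
            depth += 1
            try:
                return func(*args, **kwargs)
            finally:
                depth -= 1              # the Py_LeaveRecursiveCall side
        return wrapper
    return decorate
```

The difficulty discussed below is not the mechanism itself but deciding which C entry points need to be wrapped.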
msg25350 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2005-05-20 21:46
Yes, but I'm concerned that we would need to add it in a great many places, and probably forget some even then.  E.g. I just thought about:

    lst = [apply]
    lst.append(lst)
    apply(*lst)

It seems a bit hopeless, honestly...
msg25351 - (view) Author: Josiah Carlson (josiahcarlson) * (Python triager) Date: 2005-05-23 07:41
I personally think that the CPython runtime should make a
best effort not to crash when running code that makes sense.
But when CPython is running on input that is nonsensical
(in each of the examples that Armin provides, no return
value could make sense), I think it is sufficient as long
as the behavior is stated clearly.

Certainly it would be nice if CPython did not crash in such
cases, but I don't know whether the performance penalty and code
maintenance outweigh the cases where users write bad code. 
Perhaps a compile-time option, enabled by default depending on
whether we want a safer or a faster CPython.  Of course
maintenance is still a chore, and it is one additional set
of calls that C extension writers may need to be aware of
(if their code can be recursively called, and they want to
participate in the infinite recursion detection).
msg25352 - (view) Author: Martin v. Löwis (loewis) * (Python committer) Date: 2005-05-23 13:06
It has been a long-time policy that you should not be able
to crash the Python interpreter, even with malicious code.  I
think this is a good policy, because it always provides people
with a traceback, which is much easier to analyse
than a core dump.
msg25353 - (view) Author: Michael Hudson (mwh) (Python committer) Date: 2005-05-23 13:16
I agree with Armin that this could easily be a never-ending story.  Perhaps 
it would suffice to sprinkle Py_EnterRecursiveCall around as we find holes.

It might have to, because I can't really think of a better way of doing this.  
The only other approach I know is that of SBCL (a Common Lisp 
implementation): it mprotects a page at the end of the stack and installs a 
SIGSEGV handler (and uses sigaltstack) that knows how to abort the 
current lisp operation.  Somehow, I don't think we want to go down this 
line.

Anybody have any other ideas?
msg25354 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2005-05-23 13:52
When I thought about the same problem for PyPy, I imagined
that it would be easy to use the call graph computed by
the type inferencer ("annotator").  We would find an
algorithm that figures out the minimal number of places
that need a Py_EnterRecursiveCall so that every cycle goes
through at least one of them.  For CPython it might be
possible to go down the same path if someone can find a C
code analyzer smart enough to provide the required
information -- a call graph including indirect calls through
function pointers.  Not sure it's sane, though.
msg25355 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2005-05-29 12:23
Adding a Py_EnterRecursiveCall() in PyObject_Call() seems to fix all the examples so far, with the exception of the "__get__=getattr" one, where we get a strange result instead of a RuntimeError (I suspect careless exception eating is taking place).

The main loop in ceval.c doesn't call PyObject_Call() very often: it usually dispatches directly itself for performance, which is exactly what we want here, as recursion from ceval.c is already protected by a Py_EnterRecursiveCall().  So this change has a minor impact on performance.  Pystone, for example, issues only three PyObject_Call() invocations per loop, to call classes.  This has an almost-unmeasurable impact ( < 0.4%).

Of course I'll think a bit more and search for examples that don't go through PyObject_Call()  :-)
msg25356 - (view) Author: Terry J. Reedy (terry.reedy) * (Python committer) Date: 2005-09-01 20:39
Bug submission [ 1267884 ] crash recursive __getattr__
appears to be another example of this problem, so I closed it as 
a duplicate.  If that turns out to be wrong, it should be reopened.
msg25357 - (view) Author: Brett Cannon (brett.cannon) * (Python committer) Date: 2006-06-23 19:44
Do you have any objection to using the
Py_EnterRecursiveCall() in PyObject_Call(), Armin, to at
least deal with the crashers it fixes?
msg25358 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2006-06-23 20:05
I'd have answered "good idea, go ahead", if it were not for
what I found out a few days ago, which is that:

* you yourself already checked a Py_EnterRecursiveCall() into
PyObject_Call() -- that was in r46806 (I guess you forgot)

* I got a case of Python hanging on me in an infinite busy
loop, which turned out to be caused by this (!)

So I reverted r46806 in r47601, added a test (see log for an
explanation), and moved the Py_EnterRecursiveCall()
elsewhere, where it still catches the originally intended
case, but where it will probably not catch the cases of the
present tracker any more.  Not sure what to do about it.  I'd
suggest being extra careful here; better some extremely
obscure and ad-hoc ways to provoke a segfault than
busy-loop hangs in previously working programs...
msg25359 - (view) Author: Brett Cannon (brett.cannon) * (Python committer) Date: 2006-06-23 20:53
I thought the check was in slot_tp_call and not
PyObject_Call.  So yeah, I basically forgot.  =)

The problem with allowing the segfault to stay is that it
destroys security in terms of protecting the interpreter,
which I am trying to deal with.  So leaving random ways to
crash the interpreter is currently a no-no for me.  I will
see if I can come up with another way to fix this issue.
msg25360 - (view) Author: Brett Cannon (brett.cannon) * (Python committer) Date: 2006-06-23 20:57
The rev. that Armin checked in was actually r47061.
msg25361 - (view) Author: Brett Cannon (brett.cannon) * (Python committer) Date: 2006-06-23 21:54
I just had an idea, Armin.  What if, at the recursive call
site in PyErr_NormalizeException(), we called
Py_LeaveRecursiveCall() before and Py_EnterRecursiveCall()
after?  That would keep the recursion limit the same when
the normalization was done, but still allow the check in
PyObject_Call()::

Py_LeaveRecursiveCall();
PyErr_NormalizeException(exc, val, tb);
Py_EnterRecursiveCall("");

Since it is an internal call I think it would be safe to
"play" with the recursion depth value like this.  What do
you think?
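The trick can be pictured at Python level with an explicit depth counter (all names below are hypothetical stand-ins; the real code manipulates tstate->recursion_depth in C):

```python
class RecursionBudget:
    """Toy stand-in for the interpreter's recursion counter."""
    def __init__(self, limit):
        self.limit = limit
        self.depth = 0

    def enter(self):                      # cf. Py_EnterRecursiveCall
        if self.depth >= self.limit:
            raise RecursionError("maximum recursion depth exceeded")
        self.depth += 1

    def leave(self):                      # cf. Py_LeaveRecursiveCall
        self.depth -= 1

budget = RecursionBudget(limit=100)

def normalize_exception(exc):
    # Called while the caller holds one recursion level; give it back
    # temporarily so normalization itself cannot trip the guard, then
    # re-take it so the caller's accounting stays balanced.
    budget.leave()
    try:
        return exc          # stand-in for PyErr_NormalizeException()
    finally:
        budget.enter()
```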
msg25362 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2006-06-25 09:33
Yes, it seems reasonable.  You should probably manipulate
tstate->recursion_depth directly instead of via the
macros, as e.g. ceval.c does.

This would fix most crashes here.  It would make the attached
test17.py segfault, though.  This test17 already segfaults a
debug build of Python, but not a release build because the
recursive PyErr_NormalizeException() is turned into a tail
call by gcc.  (What, a convoluted mind, mine?)
msg25363 - (view) Author: Brett Cannon (brett.cannon) * (Python committer) Date: 2006-06-26 19:31
Yes, Armin, that is a rather evil piece of code.  =)  But I
have a possible simple solution.  =)

If you add a check, after the PyExceptionClass_Check() in the
'if' statement, that tstate->recursion_depth is greater than
0, you short-circuit the infinite recursion and still get
sensible output.

I have attached a patch with my proposed changes for
PyObject_Call() and PyErr_NormalizeException() and the
removal of the recursion check from slot_tp_call().  The only
issue I can see with this is if the recursion_depth check
turns out to be too small a number.  But I really doubt
that will be an issue, since one shouldn't have a depth
of tracebacks greater than the current recursion depth.

Let me know what you think of the patch.
msg25364 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2006-06-30 07:14
Looks quite obscure, a hack to avoid the bad effects of the
previous hack of temporarily decreasing the recursion depth
(which no other piece of code does).  I'd be much more
confident that I'm looking at correct code if we used a more
explicit approach.  What about a prebuilt RuntimeError
instance with the message "maximum recursion depth
exceeded", similar to the prebuilt MemoryError instance?

Then at least PyObject_Call() could raise this instance
directly, with PyErr_SetObject().  We'd get an
already-normalized exception in this way, and remove any
need for tstate recursion_depth mangling.
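The suggestion can be sketched at Python level (names hypothetical; the actual object added to CPython was the C-level PyExc_RecursionErrorInst): one shared, prebuilt RuntimeError instance raised directly by the guard, so signalling the error never needs to allocate or normalize anything.

```python
# One prebuilt, already-"normalized" exception instance, mirroring the
# prebuilt MemoryError instance mentioned above.
RECURSION_ERROR_INST = RuntimeError("maximum recursion depth exceeded")

def checked_call(func, depth, limit=1000):
    """Toy stand-in for PyObject_Call's guard: raise the shared
    instance instead of building a fresh exception at the limit."""
    if depth > limit:
        raise RECURSION_ERROR_INST
    return func()
```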
msg25365 - (view) Author: Brett Cannon (brett.cannon) * (Python committer) Date: 2006-06-30 17:06
Seems reasonable to me.  I will try to cook up a patch after
I am done trying to fix the xml_parsers.py crasher.
msg25366 - (view) Author: Brett Cannon (brett.cannon) * (Python committer) Date: 2006-07-01 00:46
OK, new patch, with the check in PyObject_Call() (and thus the
slot_tp_call() recursion check removed), the addition of
PyExc_RecursionErrorInst, which is an instance of
RuntimeError with a reasonable message set, and
PyErr_NormalizeException() checking the recursion depth
directly and using PyExc_RecursionErrorInst when the limit
is hit.

Hopefully this does it.  =)
msg25367 - (view) Author: Neal Norwitz (nnorwitz) * (Python committer) Date: 2006-11-06 00:41
What's the status of this patch?  It seems like it's ready
to check in.  Since this change isn't backwards compatible, should
we do anything to fix these in 2.5?  For example, bug
1590036.  We could add recursion checks one by one, but I'm
tempted to ignore 2.5.  It seems too little of a gain to fix
2.5 and 2.6 in incompatible ways for such unlikely bugs as
these.
msg25368 - (view) Author: Brett Cannon (brett.cannon) * (Python committer) Date: 2006-11-06 23:27
I was waiting on Armin to do a final review, but if you are
willing to sign off on it I can check it in.

And I agree that it isn't worth backporting to 2.5 because
of the hassle.
msg25369 - (view) Author: Neal Norwitz (nnorwitz) * (Python committer) Date: 2006-11-06 23:58
Armin mentioned in the other bug that if this patch fixes
that too (class ... __getattr__ = getattr), it
should be applied.  It seems to be generally in the right
direction.  I did review the patch and couldn't find any
issues.  I'm sure Armin will speak up if he disagrees.  So
go ahead and check in when you are ready.  Don't forget to
move all the crasher tests into the real test suites and
include tests for getattr/hasattr from the other bug report.
Thanks!
msg25370 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2006-11-07 04:33
Yes, the patch fixes the other bug too.  It looks ready to be checked in (the examples should be added as tests).

All the crashers/infinite_rec_*.py now pass, but infinite_rec_2 produces no output instead of a RuntimeError.  This is probably an 
unrelated problem swallowing the exception silently...
msg55336 - (view) Author: Armin Rigo (arigo) * (Python committer) Date: 2007-08-27 13:43
Assigning to Brett instead of me (according to the tracker history).
msg55729 - (view) Author: Brett Cannon (brett.cannon) * (Python committer) Date: 2007-09-07 04:18
Rev. 58032 applied the patch and added a test to test_descr.
History
Date User Action Args
2022-04-11 14:56:11adminsetgithub: 41988
2007-09-07 04:18:59brett.cannonsetstatus: open -> closed
resolution: fixed
messages: + msg55729
2007-09-07 03:38:05brett.cannonlinkissue1267884 superseder
2007-08-28 21:55:15brett.cannonsetpriority: high -> critical
2007-08-28 21:54:50brett.cannonsetpriority: normal -> high
keywords: + patch
type: crash
severity: normal -> major
versions: + Python 2.6
2007-08-27 13:43:20arigosetassignee: arigo -> brett.cannon
messages: + msg55336
2005-05-15 23:43:47arigocreate