co_stacksize estimate can be highly off #68528
Comments
The computation of co_stacksize can be highly off:

```python
def g():
    try: pass
    except ImportError as e: pass
    try: pass
    except ImportError as e: pass
    try: pass
    except ImportError as e: pass
    ...
```

i.e. any function that is big enough to contain 6 try: blocks in sequence will have its stack size overestimated by about 70.
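One way to observe the overestimate directly is to generate such functions and read `co_stacksize` off the resulting code object. A minimal sketch (the helper `make_func` is hypothetical, introduced here only for illustration; the reported values depend on the interpreter version):

```python
# Sketch: build a function with n sequential try/except blocks and
# report its co_stacksize. On interpreters affected by this issue
# the value grows with n even though the real stack depth stays small.
def make_func(n_blocks):
    body = "\n".join(
        "    try: pass\n    except ImportError as e: pass"
        for _ in range(n_blocks)
    )
    src = "def g():\n" + body + "\n"
    namespace = {}
    exec(src, namespace)
    return namespace["g"]

for n in (1, 3, 6):
    g = make_func(n)
    print(n, g.__code__.co_stacksize)
```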
I'm against backporting performance improvements which don't fix a severe regression.
This isn't so easy. It seems the simplest way to solve this issue is implementing bpo-17611.
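For context, the idea behind that rework can be sketched as a walk over basic blocks that propagates the stack depth reaching each block along control-flow edges and takes a global maximum, rather than summing per-block worst cases. A toy model of that idea (hypothetical data structures, not CPython's actual compiler code):

```python
# Toy max-stack-depth computation over a control-flow graph.
# Each block is a list of per-instruction stack effects; edges map a
# block index to its successor block indices.
def max_stack_depth(blocks, edges, entry=0):
    best = {}            # deepest stack depth seen on entry to each block
    maxdepth = 0
    stack = [(entry, 0)]
    while stack:
        block, depth = stack.pop()
        if best.get(block, -1) >= depth:
            continue     # already visited at this depth or deeper
        best[block] = depth
        d = depth
        for effect in blocks[block]:
            d += effect
            maxdepth = max(maxdepth, d)
        for succ in edges.get(block, ()):
            stack.append((succ, d))
    return maxdepth

# Two sequential try-like regions: each pushes 1 then pops 1.
# Tracking depth along paths yields 1, not the sum of both regions.
blocks = [[1, -1], [1, -1]]
edges = {0: [1]}
print(max_stack_depth(blocks, edges))  # 1
```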
The WIP pull request PR 2827 seems to help. The following code prints 86 on Python 3.6 and 25 with PR 2827 applied:

```python
def g():
    try: pass
    except ImportError as e: pass
    try: pass
    except ImportError as e: pass
    try: pass
    except ImportError as e: pass
    try: pass
    except ImportError as e: pass
    try: pass
    except ImportError as e: pass
    try: pass
    except ImportError as e: pass

print(g.__code__.co_stacksize)
```
With PR 5076 the result of the above example is 10.
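To see where such estimates come from, `dis.stack_effect` reports each opcode's net effect on the value stack; the compiler's estimate is essentially a worst-case accumulation of these along paths. A sketch that dumps the effects for a single try/except unit (output varies by interpreter version):

```python
import dis

# One try/except unit like those in the example above.
def g():
    try: pass
    except ImportError as e: pass

# Print each instruction's name and its net stack effect; summing
# worst cases over these is roughly how the old estimate was built.
for instr in dis.get_instructions(g):
    if instr.arg is not None:
        effect = dis.stack_effect(instr.opcode, instr.arg)
    else:
        effect = dis.stack_effect(instr.opcode)
    print(instr.opname, effect)

print("co_stacksize:", g.__code__.co_stacksize)
```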
The tests were originally based on Antoine's tests added for PR 2827.
Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.