This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: Loop limited to 1000
Type: behavior Stage: resolved
Components: Versions: Python 3.6
process
Status: closed Resolution: not a bug
Dependencies: Superseder:
Assigned To: Nosy List: Blkph0x, steven.daprano
Priority: normal Keywords:

Created on 2018-07-04 07:02 by Blkph0x, last changed 2022-04-11 14:59 by admin. This issue is now closed.

Messages (2)
msg321013 - Author: Benjamin Gear (Blkph0x) Date: 2018-07-04 07:01
I think this is a bug, or maybe I'm seeing it wrong. I have a while loop which collects JSON data from a website and puts it into a list, then does the same thing into a second list, then calls my comparing function and outputs the changes. At first, with only a while loop and no call to the function, it would run for days with no problem. Now that I have changed it to collect all the data, I'm getting limited to 1000 recursions. I then attempted to raise the 1000 limit, which works until a segfault.

https://github.com/blkph0x/pyinator
msg321014 - Author: Steven D'Aprano (steven.daprano) * (Python committer) Date: 2018-07-04 07:34
Sorry, this is for reporting bugs in the Python interpreter and standard library, not your own code. If you have a bug in your pyinator project, you should report it to yourself, not us :-)

If you think my analysis of the problem below is wrong, and that you have truly found a bug in the interpreter, please read this:

http://www.sscce.org/

and give us the smallest example of the bug you can; don't just link to your library and expect us to find the problem ourselves.

My analysis, based on a *brief* look at your project, is that every time you call GetPriceCheck, you increase the recursion limit by an extra 1000, and then keep making more and more recursive calls until you run out of memory and segfault. That's pretty much unavoidable.
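For illustration, here is a minimal sketch of that pattern (the name follows GetPriceCheck from the message above, but the body is an assumption, not code copied from the project):

    import sys

    def get_price_check():
        # Raising the limit on every call defeats Python's safety net.
        sys.setrecursionlimit(sys.getrecursionlimit() + 1000)
        # ... fetch and compare data here ...
        get_price_check()  # unbounded recursion: eventually overflows the C stack

    # Don't actually run this: it will exhaust the stack and segfault.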

To prevent that, you can:

- add more memory; 

- make fewer recursive calls;

- fix your code to use a better technique for scraping websites.

(It is rude to keep hitting a website over and over and over again, without any limit. Websites have limited bandwidth, which they pay for, and every time you hit the website, that makes it harder for somebody else. At the very least, you should back off exponentially, waiting longer between each attempt: 1 second, 2 seconds, 4 seconds, 8 seconds, 16 seconds...)
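As a sketch of that backoff idea, using only the standard library (the function name and retry count are illustrative):

    import time
    import urllib.request

    def fetch_with_backoff(url, max_tries=6):
        # Wait 1, 2, 4, 8, ... seconds between failed attempts.
        delay = 1.0
        for attempt in range(max_tries):
            try:
                with urllib.request.urlopen(url) as response:
                    return response.read()
            except OSError:
                time.sleep(delay)
                delay *= 2
        raise RuntimeError("giving up on %s after %d tries" % (url, max_tries))

An iterative loop like this also makes no recursive calls at all, so the recursion limit never comes into play.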

The recursion limit is designed to prevent segfaults by giving you a nice Python-level exception instead of a hard, OS-level segmentation fault. But if you set the recursion limit too high, you bypass that protection and you are responsible for not crashing the stack.
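For example, with the default limit in place you get a catchable exception rather than a crash:

    import sys

    def recurse(n=0):
        return recurse(n + 1)

    try:
        recurse()
    except RecursionError:
        # A clean Python-level exception, not a segfault.
        print("hit the limit at a depth of about", sys.getrecursionlimit())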
History
Date                   User              Action    Args
2022-04-11 14:59:02    admin             set       github: 78220
2018-07-04 07:34:39    steven.daprano    set       status: open -> closed
                                                   nosy: + steven.daprano
                                                   messages: + msg321014
                                                   resolution: not a bug
                                                   stage: resolved
2018-07-04 07:02:00    Blkph0x           create