Author HynekPetrak
Recipients HynekPetrak
Date 2021-04-06.07:43:37
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <>
Hi, I wrote a web crawler that uses ThreadPoolExecutor to spawn multiple worker threads, retrieves the content of web pages via http.client and saves it to a file.
After a couple of thousand requests have been processed, the crawler starts to consume memory rapidly, eventually exhausting all available memory.
tracemalloc shows the memory is not collected from:
/usr/lib/python3.9/http/ size=47.6 MiB, count=6078, average=8221 B
  File "/usr/lib/python3.9/http/", line 468
    s = self.fp.read()
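For reference, a minimal tracemalloc setup like the following produces that kind of per-line report; the list comprehension is just a stand-in for the crawler workload:

```python
import tracemalloc

# keep up to 25 frames of traceback per allocation
tracemalloc.start(25)

# stand-in workload; in the crawler this is the ThreadPoolExecutor loop
data = [b"x" * 8192 for _ in range(1000)]

snapshot = tracemalloc.take_snapshot()
# group live allocations by source line, largest first
for stat in snapshot.statistics("lineno")[:5]:
    print(stat)
```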

I have tested with requests and urllib3 as well; since they use http.client underneath, the result is always the same.

The relevant part of my code:
def get_html3(session, url, timeout=10):
    o = urlparse(url)
    if o.scheme == 'http':
        cn = http.client.HTTPConnection(o.netloc, timeout=timeout)
    else:
        cn = http.client.HTTPSConnection(o.netloc, context=ctx, timeout=timeout)
    cn.request('GET', o.path, headers=headers)
    r = cn.getresponse()
    log.debug(f'[*] [{url}] Status: {r.status} {r.reason}')
    if r.status not in [400, 403, 404]:
        ret = r.read().decode('utf-8')
    else:
        ret = ""
    del r
    del cn
    return ret
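For completeness, here is a self-contained variant that closes the connection explicitly instead of relying on del; headers and ctx are defined inline as placeholders for the values that come from elsewhere in my module, and the log call is dropped:

```python
import http.client
import ssl
from urllib.parse import urlparse

headers = {'User-Agent': 'crawler'}  # placeholder for the real request headers
ctx = ssl.create_default_context()   # placeholder for the real SSL context

def get_html3_closing(session, url, timeout=10):
    o = urlparse(url)
    if o.scheme == 'http':
        cn = http.client.HTTPConnection(o.netloc, timeout=timeout)
    else:
        cn = http.client.HTTPSConnection(o.netloc, context=ctx, timeout=timeout)
    try:
        cn.request('GET', o.path or '/', headers=headers)
        r = cn.getresponse()
        if r.status not in (400, 403, 404):
            ret = r.read().decode('utf-8')
        else:
            ret = ""
    finally:
        cn.close()  # closes the socket and any buffered response data
    return ret
```

Whether closing explicitly avoids the growth is exactly what I would like to verify; del only removes the local names and leaves collection to the garbage collector.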
Date User Action Args
2021-04-06 07:43:38HynekPetraksetrecipients: + HynekPetrak
2021-04-06 07:43:38HynekPetraksetmessageid: <>
2021-04-06 07:43:38HynekPetraklinkissue43741 messages
2021-04-06 07:43:37HynekPetrakcreate