
Author DanilZ
Recipients DanilZ, aeros, asvetlov, bquinlan, lukasz.langa, maggyero, methane, ned.deily, pitrou, serhiy.storchaka
Date 2020-10-01.07:38:07
Message-id <3B89D7EB-E61B-4AC2-B890-90B44C926B5A@me.com>
In-reply-to <1601511101.89.0.0203326077586.issue37294@roundup.psfhosted.org>
Content
I think you have correctly identified the problem in the last part of your message: "as it could possibly indicate an issue with running out of memory when the dataframe is converted to pickle format (which often increases the total size) within the process associated with the job"

The function pd.read_csv runs without any problems inside the process; the error appears only when I try to extract the result from the finished process via:
    for f in concurrent.futures.as_completed(results):
        data = f.result()

or

    data = results.result()

It simply fails to pass the large object back from the results.
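
For context, here is a minimal sketch of the pattern I am describing (the file path and worker function name are placeholders, not my actual code):

    import concurrent.futures
    import pandas as pd

    PATH = "data.csv"  # placeholder for the ~4 GB file

    def load(path):
        # Runs inside the worker process and completes without error.
        return pd.read_csv(path)

    if __name__ == "__main__":
        with concurrent.futures.ProcessPoolExecutor() as executor:
            results = [executor.submit(load, PATH)]
            for f in concurrent.futures.as_completed(results):
                data = f.result()  # this is where the failure shows up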

I am sure that everything inside the worker process works correctly, for two reasons:
1. If I change the function inside the process to just save the file (that has been read into memory) to disk, it completes without error (see the sketch below).
2. If I reduce the file size, it gets extracted from results.result() without error.
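
Roughly, the disk-saving variant of the worker looks like this (the output path and format are placeholders):

    def load_and_save(path, out_path):
        # Same read as before, but the DataFrame never crosses the process
        # boundary: it is written to disk inside the worker and only the
        # small path string is returned.
        df = pd.read_csv(path)
        df.to_pickle(out_path)   # or df.to_csv(out_path)
        return out_path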

So I guess my question narrows down to:
1. Can I increase the memory allocated to a process?
2. Or at least understand what the limit is?

Regards,
Danil

> On 1 Oct 2020, at 03:11, Kyle Stanley <report@bugs.python.org> wrote:
> 
> 
> Kyle Stanley <aeros167@gmail.com> added the comment:
> 
> DanilZ, could you take a look at the superseding issue (https://bugs.python.org/issue37297) and see if your exception raised within the job is the same?  
> 
> If it's not, I would suggest opening a separate issue (and linking to it in a comment here), as I don't think it's necessarily related to this one. "state=finished raised error" doesn't indicate the specific exception that occurred. A good format for the name would be something along the lines of:
> 
> "ProcessPoolExecutor.submit() <specific exception name here> while reading large object (4GB)"
> 
> It'd also be helpful in the separate issue to paste the full exception stack trace, and to specify the OS and the multiprocessing start method used (spawn, fork, or forkserver). This is necessary to know for replicating the issue on our end.
> 
> In the meantime, a workaround I would suggest trying would be to use the *chunksize* parameter (or *Iterator*) in pandas.read_csv(), and split the read across several jobs (at least 4+, more if you have additional cores) instead of doing it within a single one. It'd also be generally helpful to see if that alleviates the problem, as it could possibly indicate an issue with running out of memory when the dataframe is converted to pickle format (which often increases the total size) within the process associated with the job.
> 
> ----------
> nosy: +aeros
> 
> _______________________________________
> Python tracker <report@bugs.python.org>
> <https://bugs.python.org/issue37294>
> _______________________________________
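
For reference, here is roughly how I understand the split-read suggestion above; the path, row count, and job count are placeholders, and it assumes the total number of data rows is known up front:

    import concurrent.futures
    import pandas as pd

    PATH = "data.csv"        # placeholder path
    N_ROWS = 4_000_000       # placeholder: total number of data rows in the file
    N_JOBS = 4               # at least 4, more with additional cores
    CHUNK = N_ROWS // N_JOBS + 1

    def read_slice(start):
        # Row 0 is the header; skip the data rows before `start` and read one chunk,
        # so each job returns a smaller DataFrame.
        return pd.read_csv(PATH, skiprows=range(1, start + 1), nrows=CHUNK)

    if __name__ == "__main__":
        with concurrent.futures.ProcessPoolExecutor() as executor:
            futures = [executor.submit(read_slice, start)
                       for start in range(0, N_ROWS, CHUNK)]
            parts = [f.result() for f in futures]
        data = pd.concat(parts, ignore_index=True)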