
Author skrah
Recipients benjamin.peterson, neologix, njs, pitrou, rhettinger, skrah, tim.peters, trent, vstinner, wscullin, xdegaye
Date 2017-11-03.09:53:26
> I'm not sure that the cost of the memory allocator itself defeats the gain of aligned memory on algorithms. I expect data processing to be much more expensive than the memory allocation, no?

I guess this issue isn't easy to focus due to the vast variety of use cases, so the following is only about numpy/ndtypes:

What you write is true, but I'm simply getting cold feet w.r.t. locking
myself into memset(). calloc() uses mmap() for large allocations, so I
think one can happily allocate a large number of huge arrays on Linux
without any cost, as long as they're not accessed.

At least that's what my tests indicate.

Couple that with the fact that one has to use aligned_free() anyway,
and that posix_memalign() isn't that great, and the use case seems
less solid **for scientific computing**.

So I'd rather waste a couple of bytes per allocation and deal with
some Valgrind macros to get proper bounds checking.

Note that CPython still knows about every allocation from ndtypes,
because ndt_callocfunc will be set to PyMem_Calloc() [1] and the
custom ndt_aligned_calloc() goes through ndt_callocfunc.

[1] If #31912 is solved.