foo.c:
#include <Python.h>
static PyMethodDef mth[] = { {NULL, NULL, 0, NULL} };
static struct PyModuleDef mod = { PyModuleDef_HEAD_INIT, "foo", NULL, -1, mth };
PyMODINIT_FUNC PyInit_foo(void) { return PyModule_Create(&mod); }
bar.c:
#include <Python.h>
static PyMethodDef mth[] = { {NULL, NULL, 0, NULL} };
static struct PyModuleDef mod = { PyModuleDef_HEAD_INIT, "bar", NULL, -1, mth };
PyMODINIT_FUNC PyInit_bar(void) { return PyModule_Create(&mod); }
setup.py:
from distutils.core import setup, Extension
setup(name='PackageName',
      ext_modules=[Extension('foo', sources=['foo.c']),
                   Extension('bar', sources=['bar.c'])])
In an NFS mount:
host1$ python setup.py build
host1$ rm *.so; cp build/lib.*/foo*.so .; cp build/lib.*/bar*.so .
host1$ python -c 'import foo; input(); import bar'
While python is waiting for input, on another host in the same directory:
host2$ rm *.so; cp build/lib.*/bar*.so .; cp build/lib.*/foo*.so .
Back on host1:
<Enter>
ImportError: dynamic module does not define init function (PyInit_bar)
Attaching a debugger to Python after the ImportError and calling dlerror() shows the problem:
(gdb) print (char *)dlerror()
$1 = 0xe495210 "/<...>/foo.cpython-34dm.so: undefined symbol: PyInit_bar"
This happens because dynload_shlib.c[1] caches dlopen handles keyed on (device, inode) number, but NFS will reuse an inode number even while a process on a client host still has the old file open (a sketch of the caching scheme follows the stat output below). Running lsof against the Python process, before:
python 16475 ecatmur mem REG 0,36 14000 55321147 /<...>/foo.cpython-34dm.so (nfs:/export/user)
and after:
python 16475 ecatmur mem REG 0,36 55321147 /<...>/foo.cpython-34dm.so (nfs:/export/user) (path inode=55321161)
Indeed, bar.cpython-34dm.so now has the inode number that Python originally opened foo.cpython-34dm.so under:
host1$ stat -c '%n %i' *.so
bar.cpython-34dm.so 55321147
foo.cpython-34dm.so 55321161
Obviously, this can only happen on a filesystem like NFS where inode numbers can be reused even while a process still has a file open (or mapped).
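For reference, the mechanism involved is roughly the following: dynload_shlib.c remembers each dlopen() handle under the file's (device, inode) pair and hands back the cached handle whenever those match. This is only a minimal sketch of that idea, not CPython's actual code; the function name, struct layout, table size and RTLD flags here are illustrative assumptions.

#include <dlfcn.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <stddef.h>

struct handle_entry {
    dev_t dev;
    ino_t ino;
    void *handle;
};

static struct handle_entry handles[128];
static int nhandles = 0;

/* Sketch: return a cached dlopen() handle if the file's (dev, ino) has
 * been seen before, otherwise dlopen() it and remember the handle. */
static void *get_cached_or_dlopen(const char *pathname)
{
    struct stat statb;
    int have_stat = (stat(pathname, &statb) == 0);

    if (have_stat) {
        for (int i = 0; i < nhandles; i++) {
            /* On NFS this can match even though the path now names a
             * different file, e.g. bar.*.so reusing foo.*.so's inode. */
            if (statb.st_dev == handles[i].dev &&
                statb.st_ino == handles[i].ino)
                return handles[i].handle;
        }
    }

    void *handle = dlopen(pathname, RTLD_NOW);
    if (handle != NULL && have_stat && nhandles < 128) {
        handles[nhandles].dev = statb.st_dev;
        handles[nhandles].ino = statb.st_ino;
        handles[nhandles].handle = handle;
        nhandles++;
    }
    return handle;
}

In the scenario above, the stat() of bar.cpython-34dm.so matches the entry recorded when foo.cpython-34dm.so was loaded, so Python never dlopen()s bar at all and then fails to find PyInit_bar in foo's handle, which is exactly what dlerror() reports.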
We encountered this problem in a fairly pathological situation: multiple processes running in two virtualenvs, with different copies of a zipped egg (of the same version!), were contending over the ~/.python-eggs directory that pkg_resources[2] creates to cache .so files extracted from eggs. We are working around it by setting PYTHON_EGG_CACHE to a virtualenv-specific location, which also fixes the contention. (We should probably work out why the eggs differ, but fixing that is bound up in our build/deployment system.)
I'm not sure how best to fix, or even detect, this issue; perhaps comparing the mtime of the .so would work (sketched below)?
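If that route were taken, the check might look something like the helper below, which compares the on-disk file's identity (including mtime) with what was recorded at dlopen() time. This is only an illustration of the idea floated above, not a proposed patch; the struct and function names are made up.

#include <stdbool.h>
#include <sys/types.h>
#include <sys/stat.h>

struct so_identity {
    dev_t dev;
    ino_t ino;
    time_t mtime;
};

/* Hypothetical detection helper: treat a changed mtime (even with the
 * same dev/ino) as "probably a different file" so the caller can fall
 * back to a fresh dlopen() instead of reusing the cached handle. */
static bool same_shared_object(const char *pathname,
                               const struct so_identity *cached)
{
    struct stat statb;
    if (stat(pathname, &statb) != 0)
        return false;               /* cannot verify; safer not to reuse */
    return statb.st_dev == cached->dev &&
           statb.st_ino == cached->ino &&
           statb.st_mtime == cached->mtime;
}

A caveat: with second-granularity mtimes a rapidly replaced file could still go undetected, so comparing st_mtim (nanosecond resolution) or the file size as well might be safer.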
If it is decided not to fix the issue, it would still be useful if _PyImport_GetDynLoadFunc could report the actual dlerror() message; that would have saved us quite some time debugging this. I'll work on a patch to do that (rough sketch below).
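The kind of change I have in mind is roughly the following. It is a sketch only: dlsym(), dlerror() and PyErr_Format() are real, but the wrapper name, the message wording and the exact integration point in Python/dynload_shlib.c are assumptions that the real patch may not match.

#include <Python.h>
#include <dlfcn.h>

typedef void (*dl_funcptr)(void);

/* Sketch: look up the module init function and, if it is missing, attach
 * dlerror()'s explanation to the ImportError instead of failing silently. */
static dl_funcptr
find_init_func(void *handle, const char *funcname)
{
    dl_funcptr p = (dl_funcptr) dlsym(handle, funcname);
    if (p == NULL) {
        const char *error = dlerror();
        PyErr_Format(PyExc_ImportError,
                     "dynamic module does not define init function (%s): %s",
                     funcname, error ? error : "unknown dlerror()");
    }
    return p;
}

With something like this in place, the traceback above would have shown the "undefined symbol: PyInit_bar ... foo.cpython-34dm.so" detail directly, without needing gdb.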
1. http://hg.python.org/cpython/file/tip/Python/dynload_shlib.c
2. https://bitbucket.org/pypa/setuptools/src/ac127a3f46be3037c79f2c4076c7ab221cde21b2/pkg_resources.py?at=default#cl-1040