Using pyfailmalloc (*) (see issue #18408), I found bugs in the PyStructSequence API.
The following macros in Objects/structseq.c do not check for exceptions:
#define VISIBLE_SIZE(op) Py_SIZE(op)
#define VISIBLE_SIZE_TP(tp) PyLong_AsLong( \
                   PyDict_GetItemString((tp)->tp_dict, visible_length_key))

#define REAL_SIZE_TP(tp) PyLong_AsLong( \
                   PyDict_GetItemString((tp)->tp_dict, real_length_key))
#define REAL_SIZE(op) REAL_SIZE_TP(Py_TYPE(op))

#define UNNAMED_FIELDS_TP(tp) PyLong_AsLong( \
                   PyDict_GetItemString((tp)->tp_dict, unnamed_fields_key))
#define UNNAMED_FIELDS(op) UNNAMED_FIELDS_TP(Py_TYPE(op))
Exceptions in PyDict_GetItemString() and PyLong_AsLong() are "unlikely" (the requested key always exists, except in a newly developed module, which is not the case here), but they become very likely with pyfailmalloc: PyDict_GetItemString() allocates a temporary string, and that memory allocation can fail.
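For illustration, here is roughly what REAL_SIZE(op) expands to and how the failure propagates (a sketch, not the structseq.c code; the function name is made up):

static long
real_size_sketch(PyObject *op, const char *real_length_key)
{
    /* PyDict_GetItemString() has to allocate a temporary str object for the
       key; under pyfailmalloc that allocation can fail, and NULL is returned */
    PyObject *value = PyDict_GetItemString(Py_TYPE(op)->tp_dict,
                                           real_length_key);

    /* PyLong_AsLong(NULL) sets an exception and returns -1, so the macro
       silently yields -1 as the "number of fields" unless the caller checks
       PyErr_Occurred() */
    return PyLong_AsLong(value);
}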
In my opinion, the PyStructSequence structure should store the number of visible, real and unnamed fields. The problem is that this would require redesigning the structure: no data can be added between the tuple header and the tuple items. PyStructSequence is currently defined as:
typedef PyTupleObject PyStructSequence;
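To make the constraint concrete, storing the counts in the instance would have to look something like this hypothetical layout, which is exactly what the tuple layout forbids:

/* Hypothetical layout (NOT a working proposal): the extra fields would sit
   between the variable-size header and ob_item[], so the object would no
   longer match the PyTupleObject layout that the typedef above relies on */
typedef struct {
    PyObject_VAR_HEAD
    Py_ssize_t n_visible_fields;
    Py_ssize_t n_real_fields;
    Py_ssize_t n_unnamed_fields;
    PyObject *ob_item[1];
} PyStructSequence_WithCounts;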
Another option is to detect exceptions and handle them correctly wherever these macros are used. But how should exceptions be handled in structseq_dealloc()?
static void
structseq_dealloc(PyStructSequence *obj)
{
    Py_ssize_t i, size;

    size = REAL_SIZE(obj);
    for (i = 0; i < size; ++i) {
        Py_XDECREF(obj->ob_item[i]);
    }
    PyObject_GC_Del(obj);
}
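One possible shape for an exception-aware deallocator would be to save and restore the error indicator and fall back to the visible size when the lookup fails (a sketch only, with a made-up name; the fallback may leak the hidden fields):

static void
structseq_dealloc_checked(PyStructSequence *obj)
{
    Py_ssize_t i, size;
    PyObject *exc_type, *exc_value, *exc_tb;

    /* a destructor can run while an exception is being raised: preserve it */
    PyErr_Fetch(&exc_type, &exc_value, &exc_tb);

    size = REAL_SIZE(obj);
    if (size < 0 && PyErr_Occurred()) {
        /* the dict lookup itself failed (e.g. the temporary key string could
           not be allocated): give up on the hidden fields */
        PyErr_Clear();
        size = Py_SIZE(obj);
    }
    for (i = 0; i < size; ++i) {
        Py_XDECREF(obj->ob_item[i]);
    }
    PyObject_GC_Del(obj);

    PyErr_Restore(exc_type, exc_value, exc_tb);
}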
By the way, structseq_dealloc() might restore the original size ("Py_SIZE(obj) = REAL_SIZE(obj);") before calling the tuple destructor (even though tupledealloc() only keeps real tuples in its free list).
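Written out, that aside would amount to something like the following, assuming the deallocator were changed to delegate to the tuple destructor (today it calls PyObject_GC_Del() directly):

    /* hypothetical: make the hidden fields visible again, then let the
       tuple destructor release the items and the memory */
    Py_SIZE(obj) = REAL_SIZE(obj);
    PyTuple_Type.tp_dealloc((PyObject *)obj);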
(*) https://pypi.python.org/pypi/pyfailmalloc