Correct __sizeof__ support for StringIO #59695
Comments
Here is a patch that implements a correct __sizeof__ for the C implementation of io.StringIO. I haven't tested the patch on a narrow build.
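The effect the patch aims for can be observed from pure Python; a minimal sketch (behavior before/after the patch is an assumption, not a claim about any particular release):

```python
import io
import sys

buf = io.StringIO()
empty = sys.getsizeof(buf)       # sys.getsizeof() delegates to __sizeof__()
buf.write("x" * 1_000_000)
full = sys.getsizeof(buf)
# With a correct __sizeof__, `full` should grow roughly with the buffered
# data; with a naive __sizeof__ both calls return the same small constant.
```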
For some value of "correct", since the internal accumulator could hold
There's no narrow build anymore on 3.3.
This is not a concern of __sizeof__, because these lists and strings are
I mean 2.7 and 3.2. I have written a test for such a case, but can't check if it
It is, since they are private and not known by the user.
The question really is what memory blocks can "leak out" of the object; those don't really belong to the container. They shouldn't be accounted for, since somebody else may get hold of them and pass them to sys.getsizeof. For the PyAccu, AFAICT, objects cannot leak out of it (except for gc.getobjects in debug mode). So I think the memory allocated by the accu really belongs to the StringIO, and needs to be accounted there. The same goes for readnl, writenl, and weakreflist.

On a related note, I wonder why the tp_clear and tp_traverse functions for stringio are so short. Shouldn't it also visit "decoder"?
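The "leak out" criterion is exactly how the existing containers behave: sys.getsizeof of a list reports only the list object itself, not the elements, because anyone else can hold those elements and size them independently. A small illustration:

```python
import sys

strings = ["a" * 1000, "b" * 1000]
shallow = sys.getsizeof(strings)   # the list object only (header + pointer array)
# The elements "leak out": any holder of a reference can size them itself,
# so a deep figure must be computed explicitly by the caller.
deep = shallow + sum(sys.getsizeof(s) for s in strings)
```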
Not only in debug mode.

>>> import io, gc
>>> s = io.StringIO()
>>> s.write('12345')
5
>>> s.write('qwerty')
6
>>> for o in gc.get_objects():
...     if "'123" in repr(o) and len(repr(o)) < 1000:
...         print(type(o), repr(o))
...
<class 'list'> ['12345', 'qwerty']
<class 'list'> ['o', 'gc', 'get_objects', 'repr', 'o', "'123", 1000, 'len', 'repr', 'o', 'print', 'type', 'o', 'repr', 'o']
<class 'tuple'> ("'123", 1000, None)

Someone can sum sys.getsizeof() over all GC-managed objects and thereby count internal objects twice. I think the standard library should provide a method for recursive calculation of memory use (in some sense, perhaps using different strategies), but that should wait until 3.4. __sizeof__ should count non-object memory that is not visible to the GC. As I understand it.
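A recursive calculation of the kind mentioned above can be sketched in a few lines (hypothetical helper, not a stdlib API); the seen-set is what prevents shared objects from being counted twice:

```python
import sys
from gc import get_referents

def total_size(obj):
    """Sum sys.getsizeof() over the object graph reachable from obj,
    visiting each object exactly once so shared internals are counted once."""
    seen = set()
    stack = [obj]
    size = 0
    while stack:
        o = stack.pop()
        if id(o) in seen:
            continue
        seen.add(id(o))
        size += sys.getsizeof(o)
        stack.extend(get_referents(o))   # follow tp_traverse edges
    return size
```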
I see. I meant sys.getobjects, which is in debug mode only. But that still leaves readnl and writenl.
Actually, this is (and should be) a separate project: http://guppy-pe.sourceforge.net/
I think you misunderstand slightly; the GC relevance is only a side issue. Non-PyObject blocks clearly need to be accounted for. PyObject blocks normally don't need to be accounted for, as they are typically
There are also irregular ways to get objects:
For memory accounting, it would really be best if the latter two categories
Do I understand correctly that each Python subobject must be one of the following:
PyAccu can contain not only exact Unicode objects, but also e.g. an instance of a Unicode subtype, which can hold a reference to the StringIO object, creating a cycle. Does this mean that PyAccu should be visited in tp_traverse and not be accounted for in __sizeof__? Or should we tighten the control and ensure that PyAccu contains only exact Unicode objects?
IMO, the latter.
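The cycle concern raised above can be reproduced in pure Python; a minimal sketch (generic container instead of StringIO's accumulator, and hypothetical names) showing how a str subtype can participate in a reference cycle:

```python
import gc

class S(str):
    """A str subclass; unlike exact str it can carry arbitrary attributes."""

container = []
s = S("payload")
s.back = container      # the "string" refers back to its container...
container.append(s)     # ...and the container refers to the string: a cycle
del s
# The subclass instance is GC-tracked, so the cycle is visible to gc:
found = any(isinstance(o, S) for o in gc.get_objects())
```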
Patch updated. Now private PyObjects (readnl, accu) are counted. Note that all three patches are rather different.
It would be good if someone looked at the patches. I'm not sure that they are good enough. Perhaps they are too complicated and we shouldn't worry about duplicates?
Well, I do think that it's not very important to make __sizeof__ strictly exact according to some specification. I'm also not sure we want to complicate maintenance when the calculation becomes tedious or difficult. To me __sizeof__ is mostly an educational tool which gives a hint at the memory impact of an object, but cannot really convey precise information except for trivial atomic types.
I disagree that sizeof cannot work well for variable-sized types. It works very well for strings, lists, tuples, dicts, and other "regular" containers. I agree that it is not important that it is absolutely correct (in some sense) for every object, but it shouldn't lose "big" chunks of data. A bug where it misses four bytes is much less important than a bug where it misses N bytes (for an object-specific value N that can grow indefinitely).

As for the specific patch, I don't think any action should be taken before the 3.3 release. I would personally prefer if the computations were done in Py_ssize_t, not PyObject* (i.e. the result of the recursive call should be unwrapped).
It can't work well if we count internal Python objects that can be shared, because the "works well" concept is not well defined, and because the implementation of some precisely defined calculation algorithm can be completely unmaintainable, which is not well either. If we wrote the same 1 MB string to the StringIO twice, should __sizeof__() return about 1) 2 MB, 2) 1 MB, or 3) the size of an empty stream, given that there are external references to this shared string? The patch implements the second strategy; it can be simplified to the first one or complicated even more to the third one. Even more complication would be needed to take into account the sharing of the EOL strings ('\r' and '\n' are always shared, '\r\n' may be).
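The three strategies can be made concrete with a small sketch (the size estimates in the comments describe the strategies under discussion, not the behavior of any particular Python version):

```python
import io
import sys

big = "x" * 1_000_000
buf = io.StringIO()
buf.write(big)
buf.write(big)
# Candidate answers for sys.getsizeof(buf):
#  1) ~2 MB - count the shared string once per occurrence
#  2) ~1 MB - count each distinct internal object once (the patch's strategy)
#  3) ~0    - skip the string entirely while `big` is referenced externally
size = sys.getsizeof(buf)
```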
Me too. But on platforms with 64-bit pointers and 32-bit sizes we can allocate more than PY_SIZE_MAX bytes in total (hey, I remember the DOS memory models with 16-bit size_t and 32-bit pointers). We get an overflow even faster if we allow repeated counting of shared objects. What to do on overflow? Return PY_SIZE_MAX or ignore the possibility of errors?
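The "return PY_SIZE_MAX" option amounts to saturating addition. Python integers don't overflow, so this is only a model of the C-level question (hypothetical helper name):

```python
import sys

PY_SIZE_MAX = sys.maxsize   # stand-in for the C constant

def sat_add(a, b):
    """Saturating addition: clamp to PY_SIZE_MAX instead of overflowing,
    modeling the 'return PY_SIZE_MAX on overflow' strategy."""
    res = a + b
    return res if res <= PY_SIZE_MAX else PY_SIZE_MAX
```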
On 17.09.2012 14:26, Serhiy Storchaka wrote:
It can never overflow. We cannot allocate more memory than SIZE_MAX;
Patches updated. Now the computations are done in size_t. Note that the patches for the different versions differ.
Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.