Message374335
I suggest taking this to Python ideas. While there is a legitimate concern about large objects in a Variable View, the idea impacts long-standing core APIs. Accordingly, it needs to be thought through, become better specified, and be evaluated against alternatives. If the language impact is pervasive, it would likely need a PEP as well.
Some questions immediately come to mind:
* Would the existing standard and third party libraries need to recode every __repr__ or __str__ implementation for every container that has ever been written? Would that include C code as well?
* Is there something this limit parameter could do that couldn't already be achieved with __format__()?
* Should limits be a responsibility of individual classes or is it a debugger responsibility? On the one hand, it is hard to see how a debugger could implement this without blind truncation; on the other hand, I don't think other languages make a similar inversion of responsibility.
* How would the parameter be accessed via the !r and !s codes in f-strings?
* How easy or hard would this be to implement for typical classes, lists for example?
* What is meant by "max number of symbols we want to evaluate"? Would the repr for ['x'*1_000_000] count as one symbol or as one million?
* For tree-like structures (JSON for example), does a symbol limit make sense? Wouldn't you want a depth limit instead?
* Would some variant of "..." be added to indicate that limits were applied and to prevent someone from accidentally running eval() on the output?
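To make two of the questions above concrete: the standard library's reprlib module already does "blind" truncation of container reprs, and a class can opt in to a size limit today through __format__(). The sketch below illustrates both; the Bag class and its numeric format spec are hypothetical, not an existing API.

```python
import reprlib

# reprlib truncates long strings and large containers with an
# ellipsis, without any cooperation from the object itself.
big = ['x' * 1_000_000]
print(reprlib.repr(big))  # short output containing '...'

class Bag:
    """Hypothetical container whose format spec is an item limit."""
    def __init__(self, items):
        self.items = list(items)

    def __format__(self, spec):
        # Interpret the spec as an optional item limit, e.g. f"{bag:3}".
        limit = int(spec) if spec else len(self.items)
        shown = ', '.join(repr(x) for x in self.items[:limit])
        suffix = ', ...' if limit < len(self.items) else ''
        return f'Bag([{shown}{suffix}])'

bag = Bag(range(10))
print(f'{bag:3}')  # Bag([0, 1, 2, ...])
```

Note that the reprlib approach cannot know that the trailing "..." makes the output unsafe for eval(), while the __format__ approach leaves each class in control of where and how the ellipsis appears.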
Date                | User       | Action | Args
2020-07-26 17:50:51 | rhettinger | set    | recipients: + rhettinger, remi.lapeyre, Bernat Gabor
2020-07-26 17:50:51 | rhettinger | set    | messageid: <1595785851.7.0.676631393809.issue41383@roundup.psfhosted.org>
2020-07-26 17:50:51 | rhettinger | link   | issue41383 messages
2020-07-26 17:50:51 | rhettinger | create |