
Author giampaolo.rodola
Recipients JelleZijlstra, eric.smith, giampaolo.rodola, gvanrossum, lazka, llllllllll, methane, ncoghlan, pitrou, rhettinger, serhiy.storchaka, vstinner, xiang.zhang
Date 2017-07-19.09:20:15
Message-id <1500456015.93.0.584316901399.issue28638@psf.upfronthosting.co.za>
Content
> While "40x faster" is more 10x faster than "4x faster", C 
> implementation can boost only CPython and makes maintenance more harder.

As a counter-argument against "let's not do it because it'll be harder to maintain", I'd like to point out that the namedtuple API is already somewhat over-engineered (see "verbose", "rename", "module" and "_source"), and as such it seems likely to remain pretty much the same in the future. So why not treat namedtuple like any other basic data structure, speed up its internal implementation, and simply rely on the existing unit tests to make sure there are no regressions? The same barrier does not seem to apply to tuples, lists and sets.
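For context, here's a minimal sketch of the API surface in question, assuming Python 3.6 (where "verbose" and "_source" still exist); the names used are just an example:

    from collections import namedtuple

    # As of Python 3.6, namedtuple() accepts these extra keyword arguments:
    #   verbose=True prints the generated class source code,
    #   rename=True replaces invalid field names with positional ones,
    #   module=... sets __module__ on the generated class.
    Point = namedtuple('Point', ['x', 'y'], rename=False, module=__name__)

    # The generated class also exposes the pure-Python source it was built from.
    print(Point._source[:60], '...')

    p = Point(1, 2)
    print(p.x, p.y)   # plain attribute access -- the hot path discussed below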

> Of course, 1.9x faster attribute access (http://bugs.python.org/issue28638#msg298499) is attractive.

It is indeed, and it makes a huge difference in situations like busy loops. In asyncio's case, for instance, 1.9x faster attribute access effectively means being able to serve twice the number of reqs/sec in hot paths such as this one:
https://github.com/python/cpython/blob/3e2ad8ec61a322370a6fbdfb2209cf74546f5e08/Lib/asyncio/selector_events.py#L523
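For anyone who wants to reproduce that kind of measurement locally, here's a minimal timeit sketch of namedtuple attribute access (the field names and iteration count are arbitrary, and it measures the current pure-Python implementation only, not the proposed C one):

    import timeit

    setup = """
    from collections import namedtuple
    Point = namedtuple('Point', ['x', 'y'])
    p = Point(1, 2)
    """

    # Attribute access is the operation the 1.9x figure refers to; index
    # access is shown alongside for comparison.  Absolute numbers vary by machine.
    attr = timeit.timeit('p.x', setup=setup, number=10_000_000)
    index = timeit.timeit('p[0]', setup=setup, number=10_000_000)
    print(f'attribute access: {attr:.3f}s   index access: {index:.3f}s')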