Message145407
>> Correct. I copied the algorithm from _io.FileIO, under the assumption
>> that there was a reason for not using a simpler O(n log n) doubling
>> strategy. Do you know of any reason for this? Or is it safe to ignore it?
>
> I don't know, but I'd say it's safe to ignore it.
To elaborate: ISTM that it's actually a bug in FileIO. I can imagine
where it's coming from (i.e. somebody feeling that overhead shouldn't
grow unbounded), but I think that's ill-advised - *if* somebody really
produces multi-gigabyte data (and some people eventually will), they
still deserve good performance, and they will be able to afford the
memory overhead (else they couldn't store the actual output, either).
> Generally we use a less-than-doubling strategy, to conserve memory (see
> e.g. bytearray.c).
Most definitely. In case it isn't clear (but it probably is here):
any constant factor > 1.0 will provide amortized linear complexity.
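To illustrate that point, here is a minimal sketch (not the actual bytearray.c or FileIO code): it simulates growing a buffer geometrically and counts the bytes copied across all reallocations. The function name copy_cost and the 8192-byte initial size are just illustrative assumptions. Any factor strictly greater than 1.0 keeps the total copy work proportional to the final size, with a constant of roughly 1/(factor - 1).

def copy_cost(total, factor, initial=8192):
    """Simulate geometric buffer growth and return total bytes copied."""
    capacity = initial
    copied = 0
    while capacity < total:
        copied += capacity              # old contents are recopied on each realloc
        capacity = int(capacity * factor) + 1
    return copied

if __name__ == "__main__":
    n = 1 << 30                         # pretend we produce 1 GiB of output
    for factor in (2.0, 1.5, 1.125):
        ratio = copy_cost(n, factor) / n
        print("factor %-5s: copied about %.2f x n bytes in total" % (factor, ratio))

Running this shows doubling copies about 1x the final size, while a conservative factor like 1.125 copies about 8x, i.e. more copying but less wasted memory; both are amortized O(n).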