Issue 18606
Created on 2013-07-31 08:20 by steven.daprano, last changed 2022-04-11 14:57 by admin. This issue is now closed.
Files
---
File name | Uploaded | Description
---|---|---
statistics.py | steven.daprano, 2013-07-31 08:20 | Proposed statistics module
statistics.patch | steven.daprano, 2013-08-14 01:13 |
statistics2.diff | terry.reedy, 2013-08-14 04:05 |
statistics.patch | steven.daprano, 2013-08-18 12:19 |
test_statistics.patch | steven.daprano, 2013-08-19 03:55 | Statistics test suite
statistics.patch | steven.daprano, 2013-08-19 03:59 | Statistics module
statistics_newsum.patch | steven.daprano, 2013-08-26 15:07 | includes exact implementation of sum
statistics_combined.patch | gvanrossum, 2013-09-08 17:50 | combined patch for statistics.py and test_statistics.py, includes "newsum" patch
statistics_combined_withdocs.patch | georg.brandl, 2013-10-13 10:08 |
statistics.patch | steven.daprano, 2013-10-18 15:05 | combined patch for alpha 4
statistics.patch | steven.daprano, 2013-10-18 17:20 | combined patch for alpha 4, this time including statistics.rst
Messages (64)
---
msg193988 - (view) | Author: Steven D'Aprano (steven.daprano) * | Date: 2013-07-31 08:20 | |
I proposed adding a statistics module to the standard library some time ago, and received some encouragement: http://mail.python.org/pipermail/python-ideas/2011-September/011524.html

Real life intervened, plus a bad case of over-engineering, but over the last few weeks I have culled my earlier (private) attempt down to manageable size. I would like to propose the attached module for the standard library.

I also have a set of unit tests for this module. At the moment it covers about 30-40% of the functions in the module, but I should be able to supply unit tests for the remaining functions over the next few days.
|||
msg193994 - (view) | Author: Antoine Pitrou (pitrou) * | Date: 2013-07-31 10:23 | |
I suppose you should write a PEP for the module inclusion proposal (and for a summary of the API). |
|||
msg193997 - (view) | Author: Ronald Oussoren (ronaldoussoren) * | Date: 2013-07-31 13:11 | |
At first glance statistics.sum does the same as math.fsum (and statistics.add_partial seems to be a utility for implementing sum). I agree that a PEP would be useful.
|||
msg194213 - (view) | Author: Gregory P. Smith (gregory.p.smith) * | Date: 2013-08-02 22:09 | |
Note: http://docs.scipy.org/doc/scipy/reference/stats.html#statistical-functions is a very popular module for statistics in Python. One of the more frequent reasons I see people pull in the entire beast of a code base (scipy and numpy) is for the Student's t-test functions. I don't think we can include code from scipy for license reasons, so any implementor should NOT be looking at the scipy code, just at the docs for API inspiration.
|||
msg194231 - (view) | Author: Alexander Belopolsky (belopolsky) * | Date: 2013-08-03 03:02 | |
Is there a reason why there is no "review" link? Could it be because the file is uploaded as-is rather than as a patch?

In any case, I have a question about this code in sum:

    # Convert running total to a float. See comment below for
    # why we do it this way.
    total = type(total).__float__(total)

The "comment below" says:

    # Don't call float() directly, as that converts strings and we
    # don't want that. Also, like all dunder methods, we should call
    # __float__ on the class, not the instance.
    x = type(x).__float__(x)

but this reason does not apply to total, which cannot be a string unless you add instances of a really weird class, in which case all bets are off and the dunder method won't help much.
|||
msg194233 - (view) | Author: Alexander Belopolsky (belopolsky) * | Date: 2013-08-03 03:21 | |
The implementation of the median and mode families of functions as classes is clever, but I am not sure it is a good idea to return something other than an instance of the class from __new__(). I would prefer to see a more traditional implementation along the lines of:

    class _mode:
        def __call__(self, data, ..):
            ..
        def collate(self, data, ..):
            ..

    mode = _mode()
|||
msg194239 - (view) | Author: Steven D'Aprano (steven.daprano) * | Date: 2013-08-03 07:29 | |
On 03/08/13 13:02, Alexander Belopolsky wrote:

> Is there a reason why there is no "review" link? Could it be because the file is uploaded as is rather than as a patch?

I cannot answer that question, sorry.

> In any case, I have a question about this code in sum:
>
>     # Convert running total to a float. See comment below for
>     # why we do it this way.
>     total = type(total).__float__(total)
>
> The "comment below" says:
>
>     # Don't call float() directly, as that converts strings and we
>     # don't want that. Also, like all dunder methods, we should call
>     # __float__ on the class, not the instance.
>     x = type(x).__float__(x)
>
> but this reason does not apply to total that cannot be a string unless you add instances of a really weird class in which case all bets are off and the dunder method won't help much.

My reasoning was that total may be a string if the start parameter is a string, but of course I explicitly check the type of start. So I think you are right.
|||
msg194241 - (view) | Author: Vajrasky Kok (vajrasky) * | Date: 2013-08-03 08:30 | |
"Is there a reason why there is no 'review' link? Could it be because the file is uploaded as is rather than as a patch?" I think I can answer this question. The answer is yes. You can have "review" only if you use diff not raw file. The original poster, Steven D'Aprano, uploaded the raw file instead of diff. To upload the new file as a diff, (assuming he is using mercurial) he can do something like this: hg add Lib/statistics.py hg diff Lib/statistics.py > /tmp/statistics_diff.patch Then he can upload the statistics_diff.patch. Of course, this is just my hypothetical guess. |
|||
msg194293 - (view) | Author: Alexander Belopolsky (belopolsky) * | Date: 2013-08-03 19:31 | |
Here is the use-case that was presented to support adding additional operations on timedelta objects:

"""
I'm conducting a series of observation experiments where I measure the duration of an event. I then want to do various statistical analysis such as computing the mean, median, etc. Originally, I tried using standard functions such as lmean from the stats.py package. However, these sorts of functions divide by a float at the end, causing them to fail on timedelta objects. Thus, I have to either write my own special functions, or convert the timedelta objects to integers first (then convert them back afterwards).
"""

(Daniel Stutzbach, in msg26267 on issue1289118.)

The proposed statistics module does not support this use case:

    >>> mean([timedelta(1)])
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/sasha/Work/cpython-ro/Lib/statistics.py", line 387, in mean
        total = sum(data)
      File "/Users/sasha/Work/cpython-ro/Lib/statistics.py", line 223, in sum
        total += x
    TypeError: unsupported operand type(s) for +=: 'int' and 'datetime.timedelta'

    >>> sum([timedelta(1)], timedelta(0))
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/sasha/Work/cpython-ro/Lib/statistics.py", line 210, in sum
        raise TypeError('sum only accepts numbers')
    TypeError: sum only accepts numbers
|||
msg194321 - (view) | Author: Steven D'Aprano (steven.daprano) * | Date: 2013-08-04 01:40 | |
On 04/08/13 05:31, Alexander Belopolsky wrote:

> Here is the use-case that was presented to support adding additional operations on timedelta objects:
>
> """
> I'm conducting a series of observation experiments where I measure the
> duration of an event. I then want to do various statistical analysis such
> as computing the mean, median, etc. Originally, I tried using standard
> functions such as lmean from the stats.py package. However, these sorts of
> functions divide by a float at the end, causing them to fail on timedelta
> objects. Thus, I have to either write my own special functions, or convert
> the timedelta objects to integers first (then convert them back afterwards).
> """ (Daniel Stutzbach, in msg26267 on issue1289118.)
>
> The proposed statistics module does not support this use case:
> [...]
> TypeError: sum only accepts numbers

That's a nice use-case, but I'm not sure how to solve it, or whether it needs to be. I'm not going to add support for timedelta objects as a special case. Once we start special-casing types, where will it end?

At first I thought that registering timedelta as a numeric type would help, but that is a slightly dubious thing to do since timedelta doesn't support all numeric operations:

    py> datetime.timedelta(1, 1, 1) + 2
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: unsupported operand type(s) for +: 'datetime.timedelta' and 'int'

(What would that mean, anyway? Add two days, two seconds, or two milliseconds?) Perhaps timedelta objects should be enhanced to be (Integral?) numbers.

In the meantime, there's a simple way to do this:

    py> from datetime import timedelta as td
    py> data = [td(2), td(1), td(3), td(4)]
    py> m = statistics.mean([x.total_seconds() for x in data])
    py> m
    216000.0
    py> td(seconds=m)
    datetime.timedelta(2, 43200)

And for standard deviation:

    py> s = statistics.stdev([x.total_seconds() for x in data])
    py> td(seconds=s)
    datetime.timedelta(1, 25141, 920371)

median works without any wrapper:

    py> statistics.median(data)
    datetime.timedelta(2, 43200)

I'm now leaning towards "will not fix" for supporting timedelta objects. If they become proper numbers, then they should just work, and if they don't, supporting them just requires a tiny bit of extra code. However, I will add documentation and tests for them.
|||
msg194324 - (view) | Author: Alexander Belopolsky (belopolsky) * | Date: 2013-08-04 02:07 | |
> Once we start special-casing types, where will it end?

At the point where all stdlib types are special-cased. :-)

> In the meantime, there's a simple way to do this:
>
>     py> from datetime import timedelta as td
>     py> data = [td(2), td(1), td(3), td(4)]
>     py> m = statistics.mean([x.total_seconds() for x in data])
>     py> td(seconds=m)
>     datetime.timedelta(2, 43200)

Simple, but as simple ways go in this area, not correct. Here is the right way:

    py> td.resolution * statistics.mean(d//td.resolution for d in data)
    datetime.timedelta(2, 43200)

I wish I had a solution to make sum() work properly on timedeltas without special-casing. I thought that start could default to type(data[0])(0), but that would bring in strings. Maybe statistics.mean() should support non-numbers that support addition and division by a number? Will it be too confusing if mean() supports types that sum() does not?
|||
msg194418 - (view) | Author: Daniel Stutzbach (stutzbach) | Date: 2013-08-04 20:14 | |
As the person originally trying to take the mean of timedelta objects, I'm personally fine with the workaround of:

    py> m = statistics.mean([x.total_seconds() for x in data])
    py> td(seconds=m)
    datetime.timedelta(2, 43200)

At the time I was trying to take the mean of timedelta objects, even the total_seconds() method did not exist in the version of Python I was using.

On the flip side, wouldn't sum() work on timedelta objects if you simply removed the "isinstance(start, numbers.Number)" check?
|||
msg194448 - (view) | Author: Ronald Oussoren (ronaldoussoren) * | Date: 2013-08-05 05:26 | |
As noted before, statistics.sum seems to have the same functionality as math.fsum. statistics.add_partial, which does the majority of the work, also references the same cookbook recipe as the math.fsum documentation. IMHO statistics.sum should be removed.
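As an aside, a two-line illustration (not taken from any of the attached patches) of the accuracy problem both functions are trying to solve:

```python
import math

nums = [0.1] * 10
print(sum(nums))        # 0.9999999999999999 -- built-in sum accumulates rounding error
print(math.fsum(nums))  # 1.0 -- fsum returns the correctly rounded result
```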
|||
msg194488 - (view) | Author: Steven D'Aprano (steven.daprano) * | Date: 2013-08-05 15:33 | |
On 03/08/13 13:22, Alexander Belopolsky wrote:

> The implementation of median and mode families of functions as classes is clever,

So long as it is not too clever.

> but I am not sure it is a good idea to return something other than an instance of the class from __new__().

Returning foreign instances is supported behaviour for __new__. (If the object returned from __new__ is not an instance, __init__ is not called.) I believe the current implementation is reasonable and prefer to keep it.

If I use the traditional implementation, there will only be one instance, with no state, only methods. That's a rather poor excuse for an instance, and a class already is a singleton object with methods and (in this case) no state, so creating an instance as well adds nothing.

I will change the implementation if the consensus among senior devs is against it, but would prefer not to.
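For readers unfamiliar with this corner of the object model, a minimal sketch (hypothetical class, not taken from the module) of the behaviour being relied on here:

```python
class Median:
    """Hypothetical example: __new__ returns a foreign instance."""
    def __new__(cls, data):
        # Return something that is not an instance of Median...
        return sorted(data)[len(data) // 2]
    def __init__(self, data):
        print("never runs")  # ...so __init__ is not called

result = Median([3, 1, 2])
print(result, type(result))        # 2 <class 'int'>
print(isinstance(result, Median))  # False -- Alexander's objection
```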
|||
msg194494 - (view) | Author: Mark Dickinson (mark.dickinson) * | Date: 2013-08-05 17:08 | |
I too find the use of a class that'll never be instantiated peculiar. As you say, there's no state to be stored. So why not simply have separate functions `median`, `median_low`, `median_high`, `median_grouped`, etc.? |
|||
msg194499 - (view) | Author: Steven D'Aprano (steven.daprano) * | Date: 2013-08-05 18:14 | |
On 06/08/13 03:08, Mark Dickinson wrote:

> I too find the use of a class that'll never be instantiated peculiar.

I'll accept "unusual", but not "peculiar". It's an obvious extension to classes being first-class objects. We use classes as objects very frequently, and we call methods on classes directly (e.g. int.fromhex). This is just a trivial variation where I am using a class-as-object as a function.

But if this is really going to be a sticking point, I can avoid using a class. I'll make median a plain function. Will that be acceptable?

> As you say, there's no state to be stored. So why not simply have separate functions `median`, `median_low`, `median_high`, `median_grouped`, etc.?

Why have a pseudo-namespace median_* when we could have a real namespace median.*? I discussed my reasons for this here: http://mail.python.org/pipermail/python-ideas/2013-August/022612.html
|||
msg194502 - (view) | Author: Alexander Belopolsky (belopolsky) * | Date: 2013-08-05 18:38 | |
On Mon, Aug 5, 2013 at 2:14 PM, Steven D'Aprano wrote:

>> As you say, there's no state to be stored. So why not simply have separate functions `median`, `median_low`, `median_high`, `median_grouped`, etc.?
>
> Why have a pseudo-namespace median_* when we could have a real namespace median.*?

I am with Steven on this one. Note that these functions are expected to be used interactively, and with standard US keyboards "." is much easier to type than "_".

My only objection is to having a class xyz such that isinstance(xyz(..), xyz) is false. While this works with CPython, it may present problems for other implementations.
|||
msg194503 - (view) | Author: Mark Dickinson (mark.dickinson) * | Date: 2013-08-05 18:57 | |
> My only objection is to having a class xyz such that isinstance(xyz(..), xyz) is false.

Yep. Use a set of functions (median, median_low); use an instance of a class as Alexander describes; use a single median function that takes an optional "method" parameter; or create a statistics.median subpackage and put the various median functions in that. Any of those options are fairly standard, unsurprising, and could reasonably be defended.

But having `median` be a class whose `__new__` returns a float really *is* nonstandard and peculiar. There's just no need for such perversity in what should be a straightforward and uncomplicated module. Special cases aren't special enough to break the rules, and all that.
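For comparison, a rough sketch of the "single function with a method parameter" option mentioned above (illustrative only; the names and behaviour are assumptions, not the API the module ended up with):

```python
def median(data, method="average"):
    """Hypothetical API sketch: one function, selectable behaviour."""
    data = sorted(data)
    n = len(data)
    if n == 0:
        raise ValueError("no median for empty data")
    if method == "low":
        return data[(n - 1) // 2]
    if method == "high":
        return data[n // 2]
    # "average": mean of the two middle values when n is even
    if n % 2 == 1:
        return data[n // 2]
    return (data[n // 2 - 1] + data[n // 2]) / 2

print(median([1, 3, 5, 7], method="low"))  # 3
print(median([1, 3, 5, 7]))                # 4.0
```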
|||
msg194715 - (view) | Author: Stephen J. Turnbull (sjt) * | Date: 2013-08-09 02:14 | |
A few small comments and nits.

1. I'm with the author on the question of a sum function in this module. The arguments that builtin sum isn't accurate enough, and neither is math.fsum for cases where all data is of infinite precision, are enough for me.

2. A general percentile function should be high on the list of next additions.

A substantive question:

3. Can't add_partial be used in the one-pass algorithms?

Several typos and suggested style tweaks:

4. I would find the summary more readable if grouped by function: add_partial, sum, StatisticsError; mean, median, mode; pstdev, pvariance, stdev, variance. Maybe I'd like it better if the utilities came last. IMO YMMV, of course.

5. In the big comment in add_partial, "the inner loop" is mentioned. Indeed this is the inner loop in statistics.sum, but there's only one loop in add_partial.

6. In the Limitations section of sum's docstring it says "these limitations may change". Is "these limitations may be relaxed" what is meant? I would hope so, but the current phrasing makes me nervous.

7. In sum, there are two comments referring to the construct "type(total).__float__(total)", with the first being a forward reference to the second. I would find a single comment above the "isinstance(total, float)" test more readable. E.g.,

    """
    First, accumulate a non-float sum. Until we find a float, we keep adding.
    If we find a float, we exit this loop, convert the partial sum to float,
    and continue with the float code below. Non-floats are converted to float
    with 'type(x).__float__(x)'. Don't call float() directly, as that converts
    strings and we don't want that. Also, like all dunder methods, we should
    call __float__ on the class, not the instance.
    """

8. The docstrings for mean and variance say they are unbiased. This depends on the strong assumption of a representative (typically i.i.d.) sample. I think this should be mentioned.

9. Several docstrings say "this function should be used when ...". In fact the choice of which function to use is somewhat delicate. My personal preference would be to use "may" rather than "should."

10. In several of the mode functions, the value is a sorted sequence. The sort key should be specified, because it could be the data value or the score.
|||
msg194725 - (view) | Author: Oscar Benjamin (oscarbenjamin) * | Date: 2013-08-09 11:49 | |
One small point: I think that the argument `m` to variance, pvariance, stdev and pstdev should be renamed to `mu` for pvariance/pstdev and `xbar` for variance/stdev. The doc-strings should carefully distinguish that `mu` is the true/population mean and `xbar` is the estimated/sample mean and refer to this difference between the function variants. |
|||
msg194781 - (view) | Author: Steven D'Aprano (steven.daprano) * | Date: 2013-08-10 02:07 | |
On 31/07/13 20:23, Antoine Pitrou added the comment:

> I suppose you should write a PEP for the module inclusion proposal

Done. http://www.python.org/dev/peps/pep-0450/

I hope to have an updated reference implementation, plus unit tests, up later today or tomorrow.
|||
msg194932 - (view) | Author: Mark Dickinson (mark.dickinson) * | Date: 2013-08-12 09:21 | |
About the implementation of sum: it's worth noting that the algorithm you're using for floats depends on correct rounding of addition and subtraction, and that that's not guaranteed. See the existing test (testFsum) in test_math for more information, and note that that test is skipped on machines that don't do correct rounding.

This isn't an uncommon problem: last time I looked, most 32-bit Linux systems had problems with double rounding, thanks to evaluating first to 64-bit precision using the x87 FPU, and then rounding to 53-bit precision as usual. (Python builds on 64-bit Linux tend to use the SSE2 instructions in preference to the x87, so don't suffer from this problem.)

Steven: any thoughts about how to deal with this? Options are (1) just ignore the problem and hope no-one runs into it, (2) document it / warn about it, (3) try to fix it. Fixing it would be reasonably easy for a C implementation (with access to the FPU control word, in the same way that our float<->string conversion already does), but not so easy in Python without switching algorithm altogether.
|||
msg194936 - (view) | Author: Mark Dickinson (mark.dickinson) * | Date: 2013-08-12 10:32 | |
From the code:

    # Also, like all dunder methods, we should call
    # __float__ on the class, not the instance.

Why? I've never encountered this recommendation before. x.__float__() would be clearer, IMO.
|||
msg194937 - (view) | Author: Mark Dickinson (mark.dickinson) * | Date: 2013-08-12 10:53 | |
> Why? I've never encountered this recommendation before. x.__float__() would be clearer, IMO.

Hmm; it would be better if I engaged my brain before commenting. I guess the point is that type(x).__float__(x) better matches the behaviour of the builtin float:

    >>> class A:
    ...     def __float__(self): return 42.0
    ...
    >>> a = A()
    >>> a.__float__ = lambda: 1729.0
    >>>
    >>> float(a)
    42.0
    >>> a.__float__()
    1729.0
    >>> type(a).__float__(a)
    42.0

When you get around to tests, it would be nice to have a test for this behaviour, just so that someone who comes along and wonders why the code is written this way gets an immediate test failure when they try to incorrectly "simplify" it.
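A sketch of what such a test might look like (hypothetical test case, not part of the attached test suite):

```python
import unittest

class TestFloatConversion(unittest.TestCase):
    def test_float_called_on_class_not_instance(self):
        # An instance attribute named __float__ must not be able to
        # shadow the class's __float__ method.
        class A:
            def __float__(self):
                return 42.0
        a = A()
        a.__float__ = lambda: 1729.0
        self.assertEqual(type(a).__float__(a), 42.0)  # matches float(a)
        self.assertEqual(a.__float__(), 1729.0)       # the naive spelling differs

if __name__ == "__main__":
    unittest.main()
```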
|||
msg194938 - (view) | Author: Mark Dickinson (mark.dickinson) * | Date: 2013-08-12 10:55 | |
(We don't seem to care too much about the distinction in general, though: there are a good few places in the std. lib. where obj.__index__() is used instead of the more correct type(obj).__index__(obj).) |
|||
msg194984 - (view) | Author: Steven D'Aprano (steven.daprano) * | Date: 2013-08-12 18:40 | |
On 09/08/13 21:49, Oscar Benjamin wrote:

> I think that the argument `m` to variance, pvariance, stdev and pstdev should be renamed to `mu` for pvariance/pstdev and `xbar` for variance/stdev. The doc-strings should carefully distinguish that `mu` is the true/population mean and `xbar` is the estimated/sample mean and refer to this difference between the function variants.

Good thinking, and I agree.
|||
msg194992 - (view) | Author: Steven D'Aprano (steven.daprano) * | Date: 2013-08-12 19:20 | |
On 12/08/13 19:21, Mark Dickinson wrote:

> About the implementation of sum: it's worth noting that the algorithm you're using for floats depends on correct rounding of addition and subtraction, and that that's not guaranteed.
[...]
> Steven: any thoughts about how to deal with this? Options are (1) just ignore the problem and hope no-one runs into it, (2) document it / warn about it, (3) try to fix it. Fixing it would be reasonably easy for a C implementation (with access to the FPU control word, in the same way that our float<->string conversion already does), but not so easy in Python without switching algorithm altogether.

Document it and hope :-)

add_partial is no longer documented as a public function, so I'm open to switching algorithms in the future.
|||
msg194993 - (view) | Author: Mark Dickinson (mark.dickinson) * | Date: 2013-08-12 19:28 | |
Okay, that works. I agree that not documenting add_partial is probably a good plan. |
|||
msg195004 - (view) | Author: Oscar Benjamin (oscarbenjamin) * | Date: 2013-08-12 19:59 | |
On 12 August 2013 20:20, Steven D'Aprano wrote:
> On 12/08/13 19:21, Mark Dickinson wrote:
>> About the implementation of sum:
> add_partial is no longer documented as a public function, so I'm open to switching algorithms in the future.

Along similar lines, it might be good to remove the doc-test for using decimal.ROUND_DOWN. I can't see any good reason for anyone to want that behaviour when e.g. computing the mean(), whereas I can see reasons for wanting to reduce rounding error for decimal in statistics.sum. It might be a good idea not to tie yourself to the guarantee implied by that test. I tried an alternative implementation of sum() that can also reduce rounding error with decimals but it failed that test (by making the result more accurate).

Here's the sum() I wrote:

    def sum(data, start=0):
        if not isinstance(start, numbers.Number):
            raise TypeError('sum only accepts numbers')

        inexact_types = (float, complex, decimal.Decimal)
        def isexact(num):
            return not isinstance(num, inexact_types)

        if isexact(start):
            exact_total, inexact_total = start, 0
        else:
            exact_total, inexact_total = 0, start

        carrybits = 0

        for x in data:
            if isexact(x):
                exact_total = exact_total + x
            else:
                new_inexact_total = inexact_total + (x + carrybits)
                carrybits = -(((new_inexact_total - inexact_total) - x) - carrybits)
                inexact_total = new_inexact_total

        return (exact_total + inexact_total) + carrybits

It is more accurate for e.g. the following:

    nums = [decimal.Decimal(10 ** n) for n in range(50)]
    nums += [-n for n in reversed(nums)]
    assert sum(nums) == 0

However there will also be other situations where it is less accurate, such as

    print(sum([-1e30, +1e60, 1, 3, -1e60, 1e30]))

so it may not be suitable as-is.
|||
msg195115 - (view) | Author: Steven D'Aprano (steven.daprano) * | Date: 2013-08-14 01:13 | |
Attached is a patch containing the statistics reference implementation, after considering feedback given here and on python-ideas, and tests. |
|||
msg195122 - (view) | Author: Terry J. Reedy (terry.reedy) * | Date: 2013-08-14 04:05 | |
Revised patch with tests modified to pass, as described in the pydev post.

1. "test." added to the test_statistics_approx import
2. delete test_main and change the ending of both to

    if __name__ == '__main__':
        unittest.main()
|||
msg195148 - (view) | Author: Mark Dickinson (mark.dickinson) * | Date: 2013-08-14 13:02 | |
Steven: were you planning to start a discussion thread on python-dev for PEP 450? I see that there's some activity on python-list and on python-ideas, but I think most core devs would expect the main discussions to happen on the python-dev mailing list. (And I suspect that many core devs don't pay attention to python-list very much.) |
|||
msg195449 - (view) | Author: Steven D'Aprano (steven.daprano) * | Date: 2013-08-17 06:48 | |
To anyone waiting for me to respond to rietveld reviews, I'm trying, I really am, but I keep getting a django traceback. This seems to have been reported before, three months ago: http://psf.upfronthosting.co.za/roundup/meta/issue517 |
|||
msg195555 - (view) | Author: Steven D'Aprano (steven.daprano) * | Date: 2013-08-18 12:19 | |
Since I can't respond to the reviews, here's a revised patch. Summary of major changes:

- median.* functions are now median_*
- mode now only returns a single value
- better integrate tests with the Python regression suite
- clean up tests as per Ezio's suggestions
- remove unnecessary metadata and change licence
|||
msg195556 - (view) | Author: Steven D'Aprano (steven.daprano) * | Date: 2013-08-18 12:27 | |
On 15/08/13 22:58, ezio.melotti@gmail.com wrote:

> http://bugs.python.org/review/18606/diff/8927/Lib/statistics.py#newcode277
> Lib/statistics.py:277: assert isinstance(x, float) and isinstance(partials, list)
> Is this a good idea?

I think so. add_partials is internal/private, and so I don't have to worry about the caller providing wrong arguments, say a non-float. But I want some testing to detect coding errors. Using assert for this sort of internal pre-condition is exactly what assert is designed for.

> http://bugs.python.org/review/18606/diff/8927/Lib/test/test_statistics.py#newcode144
> Lib/test/test_statistics.py:144: assert data != sorted(data)
> Why not assertNotEqual?

I use bare asserts for testing code logic, even if the code is test code. So if I use self.assertSpam(...) then I'm performing a unit test of the module being tested. If I use a bare assert, I'm asserting something about the test logic itself.

> http://bugs.python.org/review/18606/diff/8927/Lib/test/test_statistics_approx.py
> File Lib/test/test_statistics_approx.py (right):
>
> http://bugs.python.org/review/18606/diff/8927/Lib/test/test_statistics_approx.py#newcode1
> Lib/test/test_statistics_approx.py:1: """Numeric approximated equal comparisons and unit testing.
> Do I understand correctly that this is just an helper module used in test_statistics and that it doesn't actually test anything from the statistics module?

Correct.

> http://bugs.python.org/review/18606/diff/8927/Lib/test/test_statistics_approx.py#newcode137
> Lib/test/test_statistics_approx.py:137: # and avoid using TestCase.almost_equal, because it sucks
> Could you elaborate on this?

Ah, I misspelled "TestCase.AlmostEqual".

- Using round() to test for equal-to-some-tolerance is IMO quite an idiosyncratic way of doing approx-equality tests. I've never seen anyone do it that way before. It surprises me.
- It's easy to think that ``places`` means significant figures, not decimal places.
- There's now a delta argument that is the same as my absolute error tolerance ``tol``, but no relative error argument.
- You can't set a per-instance error tolerance.

> http://bugs.python.org/review/18606/diff/8927/Lib/test/test_statistics_approx.py#newcode241
> Lib/test/test_statistics_approx.py:241: assert len(args1) == len(args2)
> Why not assertEqual?

As above, I use bare asserts to test the test logic, and assertSpam methods to perform the test. In this case, I'm confirming that I haven't created dodgy test data.

> http://bugs.python.org/review/18606/diff/8927/Lib/test/test_statistics_approx.py#newcode255
> Lib/test/test_statistics_approx.py:255: self.assertTrue(approx_equal(b, a, tol=0, rel=rel))
> Why not assertApproxEqual?

Because I'm testing the approx_equal function. I can't use assertApproxEqual to test its own internals.
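For readers following the tolerance discussion, a minimal sketch of the idea behind such a helper (the real approx_equal in the test patch may differ in detail):

```python
def approx_equal(x, y, tol=1e-12, rel=1e-7):
    """True if x and y agree to within an absolute tolerance `tol`
    or a relative tolerance `rel`, whichever is more forgiving."""
    if x == y:                      # fast path; also handles infinities
        return True
    diff = abs(x - y)
    return diff <= tol or diff <= rel * max(abs(x), abs(y))

print(approx_equal(1.0, 1.0 + 1e-13))          # True (absolute tolerance)
print(approx_equal(1e9, 1e9 + 1.0, rel=1e-7))  # True (relative tolerance)
```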
|||
msg195575 - (view) | Author: Antoine Pitrou (pitrou) * | Date: 2013-08-18 18:38 | |
A couple of comments about the test suite:

- I would like to see PEP 8 test names, i.e. test_foo_and_bar rather than testFooAndBar
- I don't think we need two separate test modules; it makes things more confusing
|||
msg195597 - (view) | Author: Steven D'Aprano (steven.daprano) * | Date: 2013-08-19 03:55 | |
Merged two test suites into one, and PEP-ified the test names testSpam -> test_spam. |
|||
msg195598 - (view) | Author: Steven D'Aprano (steven.daprano) * | Date: 2013-08-19 03:59 | |
Patch file for the stats module alone, without the tests. |
|||
msg195630 - (view) | Author: Oscar Benjamin (oscarbenjamin) * | Date: 2013-08-19 13:15 | |
I've just checked over the new patch and it all looks good to me, apart from one quibble.

It is documented that statistics.sum() will respect rounding errors due to decimal context (returning the same result that sum() would). I would prefer it if statistics.sum would use compensated summation with Decimals, since in my view they are a floating point number representation and are subject to arithmetic rounding error in the same way as floats. I expect that the implementation of sum() will change, but it would be good to at least avoid documenting this IMO undesirable behaviour.

So with the current implementation I can do:

    >>> from decimal import Decimal as D, localcontext, Context, ROUND_DOWN
    >>> data = [D("0.1375"), D("0.2108"), D("0.3061"), D("0.0419")]
    >>> print(statistics.variance(data))
    0.01252909583333333333333333333
    >>> with localcontext() as ctx:
    ...     ctx.prec = 2
    ...     ctx.rounding = ROUND_DOWN
    ...     print(statistics.variance(data))
    ...
    0.010

The final result is not accurate to 2 d.p. rounded down. This is because the decimal context has affected all intermediate computations, not just the final result. Why would anyone prefer this behaviour over an implementation that could compensate for rounding errors and return a more accurate result?

If statistics.sum and statistics.add_partial are modified in such a way that they use the same compensated algorithm for Decimals as they would for floats, then you can have the following:

    >>> statistics.sum([D('-1e50'), D('1'), D('1e50')])
    Decimal('1')

whereas it currently does:

    >>> statistics.sum([D('-1e50'), D('1'), D('1e50')])
    Decimal('0E+23')
    >>> statistics.sum([D('-1e50'), D('1'), D('1e50')]) == 0
    True

It still doesn't fix the variance calculation, but I'm not sure exactly how to do better than the current implementation for that. Either way, I don't think the current behaviour should be a documented guarantee. The meaning of "honouring the context" implies using a specific sum algorithm, since an alternative algorithm would give a different result, and I don't think you should constrain yourself in that way.
|||
msg195646 - (view) | Author: Steven D'Aprano (steven.daprano) * | Date: 2013-08-19 16:35 | |
On 19/08/13 23:15, Oscar Benjamin wrote:

> So with the current implementation I can do:
>
>     >>> from decimal import Decimal as D, localcontext, Context, ROUND_DOWN
>     >>> data = [D("0.1375"), D("0.2108"), D("0.3061"), D("0.0419")]
>     >>> print(statistics.variance(data))
>     0.01252909583333333333333333333
>     >>> with localcontext() as ctx:
>     ...     ctx.prec = 2
>     ...     ctx.rounding = ROUND_DOWN
>     ...     print(statistics.variance(data))
>     ...
>     0.010
>
> The final result is not accurate to 2 d.p. rounded down. This is because the decimal context has affected all intermediate computations not just the final result.

Yes. But that's the whole point of setting the context to always round down. If summation didn't always round down, it would be a bug.

If you set the precision to a higher value, you can avoid the need for compensated summation. I'm not prepared to pick and choose which contexts I'll honour. If I honour those with a high precision, I'll honour those with a low precision too. I'm not going to check the context, and if it is "too low" (according to whom?) set it higher.

> Why would anyone prefer this behaviour over an implementation that could compensate for rounding errors and return a more accurate result?

Because that's what the Decimal standard requires (as I understand it), and besides you might be trying to match calculations on some machine with a lower precision, or different rounding modes. Say, a pocket calculator, or a Cray, or something. Or demonstrating why rounding matters. Perhaps it will cause less confusion if I add an example to show a use for higher precision as well.

> If statistics.sum and statistics.add_partial are modified in such a way that they use the same compensated algorithm for Decimals as they would for floats then you can have the following:
>
>     >>> statistics.sum([D('-1e50'), D('1'), D('1e50')])
>     Decimal('1')

statistics.sum can already do that:

    py> with localcontext() as ctx:
    ...     ctx.prec = 50
    ...     x = statistics.sum([D('-1e50'), D('1'), D('1e50')])
    ...
    py> x
    Decimal('1')

I think the current behaviour is the right thing to do, but I appreciate the points you raise. I'd love to hear from someone who understands the Decimal module better than I do and can confirm that the current behaviour is in the spirit of the Decimal module.
|||
msg195670 - (view) | Author: Oscar Benjamin (oscarbenjamin) * | Date: 2013-08-19 21:25 | |
On 19 August 2013 17:35, Steven D'Aprano wrote:
> On 19/08/13 23:15, Oscar Benjamin wrote:
>> The final result is not accurate to 2 d.p. rounded down. This is because the decimal context has affected all intermediate computations not just the final result.
>
> Yes. But that's the whole point of setting the context to always round down. If summation didn't always round down, it would be a bug.

If individual binary summation (d1 + d2) didn't round down, then that would be a bug.

> If you set the precision to a higher value, you can avoid the need for compensated summation. I'm not prepared to pick and choose which contexts I'll honour. If I honour those with a high precision, I'll honour those with a low precision too. I'm not going to check the context, and if it is "too low" (according to whom?) set it higher.

I often write functions like this:

    def compute_stuff(x):
        with localcontext() as ctx:
            ctx.prec += 2
            y = ...  # Compute in higher precision
        return +y    # __pos__ reverts to the default precision

The final result is rounded according to the default context, but the intermediate computation is performed in such a way that the final result is (hopefully) correct within its context. I'm not proposing that you do that, just that you don't commit to respecting inaccurate results.

>> Why would anyone prefer this behaviour over an implementation that could compensate for rounding errors and return a more accurate result?
>
> Because that's what the Decimal standard requires (as I understand it), and besides you might be trying to match calculations on some machine with a lower precision, or different rounding modes. Say, a pocket calculator, or a Cray, or something. Or demonstrating why rounding matters.

No, that's not what the Decimal standard requires. Okay, I haven't fully read it, but I am familiar with these standards and I've read a good bit of IEEE-754. The standard places constraints on low-level arithmetic operations that you, as an implementer of high-level algorithms, can use to ensure that your code is accurate.

Following your reasoning above, I should say that math.fsum and your statistics.sum are both in violation of IEEE-754, since fsum([a, b, c, d, e]) is not equivalent to ((((a+b)+c)+d)+e) under the current rounding scheme. They are not in violation of the standard: both functions use the guarantees of the standard to guarantee their own accuracy. Both go to some lengths to avoid producing output with the rounding errors that sum() would produce.

> I think the current behaviour is the right thing to do, but I appreciate the points you raise. I'd love to hear from someone who understands the Decimal module better than I do and can confirm that the current behaviour is in the spirit of the Decimal module.

I use the Decimal module for multi-precision real arithmetic. That may not be the typical use-case, but to me Decimal is a floating point type just like float. Precisely the same reasoning that leads to fsum applies to Decimal just as it does to float. (BTW, I've posted on Raymond Hettinger's recipe a modification that might make it work for Decimal, but no reply yet.)
|||
msg195686 - (view) | Author: Mark Dickinson (mark.dickinson) * | Date: 2013-08-20 12:43 | |
I agree with Oscar about sum for decimal.Decimal. The *ideal* sum for Decimal instances would return the correctly rounded result (i.e., the exact result, rounded to the current context just once using the current rounding mode). It seems wrong to give a guarantee of behaviour that's in conflict with this ideal.

IEEE 754 recommends a 'sum' operation (in section 9.4, amongst other reduction operations), but doesn't go so far as to either require or recommend that the result be correctly rounded. Instead, it says "Implementations may associate in any order or evaluate in any wider format.", and then later on, "Numerical results [...] may differ among implementations due to the precision of intermediates and the order of evaluation." It certainly *doesn't* specify that the results should be as though the context precision and rounding mode were used for every individual addition.
|||
msg195855 - (view) | Author: Steven D'Aprano (steven.daprano) * | Date: 2013-08-22 02:43 | |
On 20/08/13 22:43, Mark Dickinson wrote:

> I agree with Oscar about sum for decimal.Decimal. The *ideal* sum for Decimal instances would return the correctly rounded result (i.e., the exact result, rounded to the current context just once using the current rounding mode). It seems wrong to give a guarantee of behaviour that's in conflict with this ideal.

Okay, I know when I'm beaten :-)

Documentation will no longer make reference to "honouring the context", as currently stated, and the specific example shown will be dropped. Patch to follow. Changing the actual implementation of sum will follow later. If Oscar is willing, I'd like to discuss some of his ideas off-list, but that may take some time.

What else is needed before I can ask for a decision on the PEP?
|||
msg195880 - (view) | Author: Oscar Benjamin (oscarbenjamin) * | Date: 2013-08-22 11:57 | |
On 22 August 2013 03:43, Steven D'Aprano wrote:
> If Oscar is willing, I'd like to discuss some of his ideas off-list, but that may take some time.

I am willing, and it will take time. I've started reading the paper that Raymond Hettinger references for the algorithm used in his accurate float sum recipe. I'm not sure why yet, but the algorithm is apparently provably exact only for binary radix floats, so it isn't appropriate for decimals. It does seem to give *very* accurate results for decimals, though, so I suspect the issue is just about cases that are on the cusp of the rounding mode.

In any case, the paper cites a previous work that gives an algorithm that apparently works for floating point types with arbitrary radix and exact rounding; it would be good for that to live somewhere in Python, but I haven't had a chance to look at the paper yet.
|||
msg196210 - (view) | Author: Steven D'Aprano (steven.daprano) * | Date: 2013-08-26 15:07 | |
I have changed the algorithm for statistics.sum to use long integer summation of numerator/denominator pairs. This removes the concerns Mark raised about the float addition requiring correct rounding. Unless I've missed something, this now means that statistics.sum is now exact, including for floats and Decimals. The cost is that stats.sum(ints) is a little slower, sum of Decimals is a lot slower (ouch!) but sum of floats is faster and of Fractions a lot faster. (Changes are relative to my original implementation.) In my testing, algorithmic complexity is O(N) on the number of items, at least up to 10 million items. |
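A minimal sketch of the idea, assuming fractions.Fraction as the exact accumulator (the actual patch tracks numerator/denominator pairs directly, so the details differ):

```python
from fractions import Fraction

def exact_sum(data, start=0):
    # Add every value exactly as a rational number; rounding can then
    # happen at most once, when the caller converts the result back.
    total = Fraction(start)
    for x in data:
        total += Fraction(x)  # exact: no rounding error accumulates
    return total

print(float(exact_sum([0.1] * 10)))  # 1.0 -- exact total, rounded once at the end
```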
|||
msg196340 - (view) | Author: janzert (janzert) * | Date: 2013-08-28 00:43 | |
Seems that the discussion is now down to implementation issues and the PEP is at the point of needing to ask python-dev for a PEP dictator? |
|||
msg196341 - (view) | Author: Oscar Benjamin (oscarbenjamin) * | Date: 2013-08-28 00:56 | |
On Aug 28, 2013 1:43 AM, "janzert" wrote:
> Seems that the discussion is now down to implementation issues and the PEP is at the point of needing to ask python-dev for a PEP dictator?

I would say so. AFAICT Steven has addressed all of the issues that have been raised. I've read through the module in full and I'm happy with the API/specification exactly as it now is (including the sum function since the last patch).
|||
msg197298 - (view) | Author: Guido van Rossum (gvanrossum) * | Date: 2013-09-08 17:50 | |
Here's a combined patch. Hopefully it will code review properly. |
|||
msg197300 - (view) | Author: Guido van Rossum (gvanrossum) * | Date: 2013-09-08 17:54 | |
Nice docstrings, but those aren't automatically included in the Doc tree. |
|||
msg199679 - (view) | Author: Alyssa Coghlan (ncoghlan) * | Date: 2013-10-13 09:21 | |
Are the ReST docs the only missing piece here? It would be nice to have this included in alpha 4 next weekend (although the real deadline is beta 1 on November 24). |
|||
msg199685 - (view) | Author: Georg Brandl (georg.brandl) * | Date: 2013-10-13 10:08 | |
In the attached patch I took the docstrings, put them in statistics.rst, and reformatted and marked them up according to our guidelines. This should at least be good enough to make this committable. I also modified statistics.py very slightly: I removed trailing spaces and added "Function/class" in the third table in the module docstring.
|||
msg199686 - (view) | Author: Steven D'Aprano (steven.daprano) * | Date: 2013-10-13 10:26 | |
On Sun, Oct 13, 2013 at 09:21:13AM +0000, Nick Coghlan wrote:
> Are the ReST docs the only missing piece here?

As far as I know, the only blocker is that the ReST docs are missing. Also, Guido would like to see the docstrings be a little smaller (or perhaps even a lot smaller), and that will happen at the same time.

The implementation of statistics.sum needs to be a bit faster, but that's also coming. I presume that won't be a blocker.
|||
msg199831 - (view) | Author: Steven D'Aprano (steven.daprano) * | Date: 2013-10-14 01:19 | |
Oscar Benjamin has just made a proposal to me off-list that has *almost* convinced me to make statistics.sum a private implementation detail, at least for the 3.4 release. I won't go into detail about Oscar's proposal, but it has caused me to rethink all the arguments for making sum public.

Given that the PEP concludes that sum ought to be public, is it appropriate to defer that part of it until 3.5 without updating the PEP? I'd like to shift sum -> _sum for 3.4; then, if Oscar's ideas don't pan out, make sum public in 3.5. (None of this will affect the public interface for mean, variance, etc.)
|||
msg199832 - (view) | Author: Tim Peters (tim.peters) * | Date: 2013-10-14 01:24 | |
Do what's best for the future of the module. A PEP is more of a starting point than a constraint, especially for implementation details. And making a private thing public later is one ginormous whale of a lot easier than trying to remove a public thing later. "Practicality beats purity" once again ;-) |
|||
msg199843 - (view) | Author: Raymond Hettinger (rhettinger) * | Date: 2013-10-14 05:54 | |
I think this should get checked in so that people can start interacting with it. The docstrings and whatnot can get tweaked later. |
|||
msg199852 - (view) | Author: Alyssa Coghlan (ncoghlan) * | Date: 2013-10-14 08:46 | |
+0 for starting with _sum as private and +1 for getting this initial version checked in for alpha 4. |
|||
msg200270 - (view) | Author: Steven D'Aprano (steven.daprano) * | Date: 2013-10-18 15:05 | |
Here is the updated version, which I hope is not too late for alpha 4. Main changes:

* sum is now private
* docstrings have been simplified and shrunk somewhat
* I have a draft .rst file; however, I'm having trouble getting Sphinx working on my system and I have no idea whether the reST is working
|||
msg200281 - (view) | Author: Georg Brandl (georg.brandl) * | Date: 2013-10-18 16:17 | |
The rst file is missing from your patch. I already posted a patch with statistics.rst five days ago. I have no idea why you ignored it. |
|||
msg200285 - (view) | Author: Steven D'Aprano (steven.daprano) * | Date: 2013-10-18 17:20 | |
Georg Brandl wrote:

> The rst file is missing from your patch.

Oops! Sorry about that. Fixed now.

> I already posted a patch with statistics.rst five days ago. I have no idea why you ignored it.

I'm sorry if I stepped on your toes, but I didn't ignore your patch. If I've failed to follow the right procedure, it is due to inexperience, not malice. You yourself suggested it was only a temporary version just good enough to get the module committed, and in any case, it was already out of date since I've made sum private.
|||
msg200471 - (view) | Author: Larry Hastings (larry) * | Date: 2013-10-19 18:32 | |
Mr. D'Aprano emailed me about getting this in for alpha 4. Since nobody else stepped up, I volunteered to check it in for him. There were some minor ReST errors in statistics.rst but I fixed 'em. |
|||
msg200472 - (view) | Author: Guido van Rossum (gvanrossum) * | Date: 2013-10-19 18:35 | |
Thanks Larry!
|||
msg200477 - (view) | Author: Roundup Robot (python-dev) | Date: 2013-10-19 18:50 | |
New changeset 685e044bed5e by Larry Hastings in branch 'default':
Issue #18606: Add the new "statistics" module (PEP 450). Contributed
http://hg.python.org/cpython/rev/685e044bed5e
|||
msg200478 - (view) | Author: Larry Hastings (larry) * | Date: 2013-10-19 18:52 | |
Checked in. Thanks, Mr. D'Aprano! |
|||
msg200495 - (view) | Author: Georg Brandl (georg.brandl) * | Date: 2013-10-19 20:58 | |
> I'm sorry if I stepped on your toes, but I didn't ignore your patch. If I've failed to follow the right procedure, it is due to inexperience, not malice. You yourself suggested it was only a temporary version just good enough to get the module committed, and in any case, it was already out of date since I've made sum private. No problem, just hate to see work being done twice. :) |
History
---
Date | User | Action | Args
---|---|---|---
2022-04-11 14:57:48 | admin | set | github: 62806
2013-10-19 20:58:47 | georg.brandl | set | messages: + msg200495
2013-10-19 18:52:09 | larry | set | status: open -> closed; resolution: fixed; messages: + msg200478; stage: resolved
2013-10-19 18:50:26 | python-dev | set | nosy: + python-dev; messages: + msg200477
2013-10-19 18:35:18 | gvanrossum | set | messages: + msg200472
2013-10-19 18:32:48 | larry | set | nosy: + larry; messages: + msg200471
2013-10-18 17:20:28 | steven.daprano | set | files: + statistics.patch; messages: + msg200285
2013-10-18 16:17:11 | georg.brandl | set | messages: + msg200281
2013-10-18 15:06:05 | steven.daprano | set | files: + statistics.patch; messages: + msg200270
2013-10-14 08:46:13 | ncoghlan | set | messages: + msg199852
2013-10-14 05:54:08 | rhettinger | set | nosy: + rhettinger; messages: + msg199843
2013-10-14 01:24:25 | tim.peters | set | nosy: + tim.peters; messages: + msg199832
2013-10-14 01:19:06 | steven.daprano | set | messages: + msg199831
2013-10-13 10:26:14 | steven.daprano | set | messages: + msg199686
2013-10-13 10:09:02 | georg.brandl | set | files: + statistics_combined_withdocs.patch; nosy: + georg.brandl; messages: + msg199685
2013-10-13 09:21:13 | ncoghlan | set | nosy: + ncoghlan; messages: + msg199679
2013-09-09 09:50:40 | skrah | set | nosy: + skrah
2013-09-08 17:54:48 | gvanrossum | set | messages: + msg197300
2013-09-08 17:50:21 | gvanrossum | set | files: + statistics_combined.patch; nosy: + gvanrossum; messages: + msg197298
2013-08-28 00:56:27 | oscarbenjamin | set | messages: + msg196341
2013-08-28 00:43:49 | janzert | set | nosy: + janzert; messages: + msg196340
2013-08-26 15:08:01 | steven.daprano | set | files: + statistics_newsum.patch; messages: + msg196210
2013-08-22 11:57:50 | oscarbenjamin | set | messages: + msg195880
2013-08-22 02:43:11 | steven.daprano | set | messages: + msg195855
2013-08-20 12:43:28 | mark.dickinson | set | messages: + msg195686
2013-08-19 21:25:37 | oscarbenjamin | set | messages: + msg195670
2013-08-19 16:35:54 | steven.daprano | set | messages: + msg195646
2013-08-19 13:15:55 | oscarbenjamin | set | messages: + msg195630
2013-08-19 03:59:06 | steven.daprano | set | files: + statistics.patch; messages: + msg195598
2013-08-19 03:55:21 | steven.daprano | set | files: + test_statistics.patch; messages: + msg195597
2013-08-18 18:38:59 | pitrou | set | messages: + msg195575
2013-08-18 12:27:49 | steven.daprano | set | messages: + msg195556
2013-08-18 12:19:27 | steven.daprano | set | files: + statistics.patch; messages: + msg195555
2013-08-17 06:48:40 | steven.daprano | set | messages: + msg195449
2013-08-14 14:08:23 | ethan.furman | set | nosy: + ethan.furman
2013-08-14 13:02:24 | mark.dickinson | set | messages: + msg195148
2013-08-14 04:05:34 | terry.reedy | set | files: + statistics2.diff; nosy: + terry.reedy; messages: + msg195122
2013-08-14 01:13:24 | steven.daprano | set | files: + statistics.patch; keywords: + patch; messages: + msg195115
2013-08-12 19:59:50 | oscarbenjamin | set | messages: + msg195004
2013-08-12 19:28:20 | mark.dickinson | set | messages: + msg194993
2013-08-12 19:20:18 | steven.daprano | set | messages: + msg194992
2013-08-12 18:40:22 | steven.daprano | set | messages: + msg194984
2013-08-12 10:55:03 | mark.dickinson | set | messages: + msg194938
2013-08-12 10:53:17 | mark.dickinson | set | messages: + msg194937
2013-08-12 10:32:04 | mark.dickinson | set | messages: + msg194936
2013-08-12 09:21:06 | mark.dickinson | set | messages: + msg194932
2013-08-10 02:07:25 | steven.daprano | set | messages: + msg194781
2013-08-09 11:49:56 | oscarbenjamin | set | messages: + msg194725
2013-08-09 02:14:45 | sjt | set | nosy: + sjt; messages: + msg194715
2013-08-06 13:53:43 | oscarbenjamin | set | nosy: + oscarbenjamin
2013-08-05 18:57:19 | mark.dickinson | set | messages: + msg194503
2013-08-05 18:38:31 | belopolsky | set | messages: + msg194502
2013-08-05 18:14:23 | steven.daprano | set | messages: + msg194499
2013-08-05 17:08:22 | mark.dickinson | set | messages: + msg194494
2013-08-05 16:46:15 | mark.dickinson | set | nosy: + mark.dickinson
2013-08-05 15:33:47 | steven.daprano | set | messages: + msg194488
2013-08-05 05:26:07 | ronaldoussoren | set | messages: + msg194448
2013-08-04 20:14:30 | stutzbach | set | nosy: + stutzbach; messages: + msg194418
2013-08-04 02:07:58 | belopolsky | set | messages: + msg194324
2013-08-04 01:52:56 | christian.heimes | set | nosy: + christian.heimes
2013-08-04 01:40:36 | steven.daprano | set | messages: + msg194321
2013-08-03 19:31:26 | belopolsky | set | nosy: + agthorr; messages: + msg194293
2013-08-03 19:05:24 | tshepang | set | nosy: + tshepang
2013-08-03 08:30:53 | vajrasky | set | nosy: + vajrasky; messages: + msg194241
2013-08-03 07:29:57 | steven.daprano | set | messages: + msg194239
2013-08-03 03:22:00 | belopolsky | set | messages: + msg194233
2013-08-03 03:02:19 | belopolsky | set | nosy: + belopolsky; messages: + msg194231
2013-08-02 22:09:33 | gregory.p.smith | set | nosy: + gregory.p.smith; messages: + msg194213
2013-07-31 13:11:35 | ronaldoussoren | set | nosy: + ronaldoussoren; messages: + msg193997
2013-07-31 10:23:45 | pitrou | set | nosy: + pitrou; messages: + msg193994
2013-07-31 08:20:20 | steven.daprano | create |