msg313933
Author: Luc (dcasmr)
Date: 2018-03-16 05:58
When a list or dataframe series contains NaN(s), the median, median_low and median_high from the Python 3.6.4 statistics library still compute, but the results are wrong.
Either they should return a NaN, just like when we try to compute a mean, or the user should be pointed to drop the NaNs before computing those statistics.
Example:
import numpy as np
import statistics as stats
data = [75, 90, 85, 92, 95, 80, np.nan]
Median = stats.median(data)
Median_low = stats.median_low(data)
Median_high = stats.median_high(data)
The results from above return ALL 90 which are incorrect.
Correct answers should be:
Median = 87.5
Median_low = 85
Median_high = 90
Thanks,
Luc

msg313938
Author: Maheshwar Kumar (maheshwark97)
Date: 2018-03-16 08:21
Will just removing all np.nan values do the job? Btw, the values will then be:
median = 87.5
median_low = 85
median_high = 90
I can correct it and send a pull request.

msg313940
Author: Mark Dickinson (mark.dickinson) *
Date: 2018-03-16 08:43
> Will just removing all np.nan values do the job?
Unfortunately, I don't think it's that simple. You want consistency across the various library calls, so if the various `median` functions are changed to treat NaNs as missing data, then the other functions should be, too.

msg313949
Author: Maheshwar Kumar (maheshwark97)
Date: 2018-03-16 13:59
Well, if I don't treat np.nan as missing data and consider all the other values, then the answer being 90 is correct, right?

msg313950
Author: Mark Dickinson (mark.dickinson) *
Date: 2018-03-16 14:32
> then the answer being 90 is correct, right?
How do you deduce that? Why 90 rather than 85 (or 87.5, or some other value)?
For what it's worth, NumPy gives a result of NaN for the median of an array that contains NaNs:
>>> np.median([1.0, 2.0, 3.0, np.nan])
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/function_base.py:4033: RuntimeWarning: Invalid value encountered in median
  r = func(a, **kwargs)
nan

msg313953
Author: Steven D'Aprano (steven.daprano) *
Date: 2018-03-16 16:15
On Fri, Mar 16, 2018 at 02:32:36PM +0000, Mark Dickinson wrote:
> For what it's worth, NumPy gives a result of NaN for the median of an array that contains NaNs:
By default, R gives the median of a list containing either NaN or NA
("not available", intended as a missing value) as NA:
> median(c(1, 2, 3, 4, NA))
[1] NA
> median(c(1, 2, 3, 4, NaN))
[1] NA
but you can ignore them too:
> median(c(1, 2, 3, 4, NA), na.rm=TRUE)
[1] 2.5
> median(c(1, 2, 3, 4, NaN), na.rm=TRUE)
[1] 2.5
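[The same na.rm-style filtering can be written by hand in Python today; this is an illustrative sketch, not an existing statistics API:]

```python
import statistics

data = [1, 2, 3, 4, float("nan")]
# NaN is the only float that is not equal to itself, so x == x filters
# NaNs out, analogous to R's na.rm=TRUE
cleaned = [x for x in data if x == x]
print(statistics.median(cleaned))  # 2.5, matching R's na.rm=TRUE result
```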

msg313956
Author: Luc (dcasmr)
Date: 2018-03-16 16:45
Just to make sure we are focused on the issue: the reported bug is with the statistics library (not with numpy). It happens when there is at least one missing value in the data, and involves the computation of the median, median_low and median_high using the statistics library.
The test was performed on Python 3.6.4.
When there are no missing values (NaNs) in the data, computing the median, median_high and median_low with the statistics library works fine.
So, yes, removing the NaNs (or imputing values for them) before computing the median(s) resolves the issue.
Also, just like statistics.mean(data) returns a nan when data has missing values, the median, median_high and median_low should behave the same way.
import numpy as np
import statistics as stats
data = [75, 90, 85, 92, 95, 80, np.nan]
Median = stats.median(data)
Median_high = stats.median_high(data)
Median_low = stats.median_low(data)
print("The incorrect median is", Median)
The incorrect median is 90
print("The incorrect median high is", Median_high)
The incorrect median high is 90
print("The incorrect median low is", Median_low)
The incorrect median low is 90
## Mean returns nan
Mean = stats.mean(data)
print("The mean is", Mean)
The mean is nan
Now, when we drop the missing values, we have:
data2 = [75, 90,85, 92, 95, 80]
stats.median(data2)
87.5
stats.median_high(data2)
90
stats.median_low(data2)
85

msg313965
Author: Maheshwar Kumar (maheshwark97)
Date: 2018-03-16 18:25
So from the above, am I to conclude that removing np.nan is the best path to take? Also, should the same step be included in median_grouped as well?

msg313972
Author: Luc (dcasmr)
Date: 2018-03-16 21:14
If we are trying to fix this, the behavior should match that of computing the mean or harmonic mean with the statistics library when there are missing values in the data. At least that way it is consistent with how the statistics library already works when computing with NaNs in the data. Then again, it should be mentioned somewhere in the docs.
import statistics as stats
import numpy as np
import pandas as pd
data = [75, 90, 85, 92, 95, 80, np.nan]
stats.mean(data)
nan
stats.harmonic_mean(data)
nan
stats.stdev(data)
nan
As you can see, when there is a missing value, computing the mean, harmonic mean and sample standard deviation with the statistics library returns a nan.
However, the median, median_high and median_low are computed incorrectly when missing values are present in the data.
It is better to return a nan and let the user drop (or otherwise resolve) any missing values before computing.
## Another example using a pandas series
df = pd.DataFrame(data, columns=['data'])
df.head()
   data
0  75.0
1  90.0
2  85.0
3  92.0
4  95.0
5  80.0
6   NaN
### Use the statistics library to compute the median of the series
stats.median(df['data'])
90
## Pandas returns the correct median by dropping the missing values
## Now use pandas to compute the median of the series with missing values
df['data'].median()
87.5
I did not test median_grouped in the statistics library, but will let you know afterwards if it's affected as well.

msg327281
Author: Steven D'Aprano (steven.daprano) *
Date: 2018-10-07 14:35
I want to revisit this for 3.8.
I agree that the current implementation-dependent behaviour when there are NANs in the data is troublesome. But I don't think that there is a single right answer.
I also agree with Mark that if we change median, we ought to change the other functions so that people can get consistent behaviour. It wouldn't be good for median to ignore NANs and mean to process them.
I'm inclined to add a parameter to the statistics functions to deal with NANs, allowing the caller to select from:
- implementation-dependent, i.e. what happens now (for speed, and backwards compatibility, this would be the default);
- raise an exception;
- return a NAN;
- skip any NANs (treat them as missing values to be ignored).
I think that raise/return/ignore will cover most use cases for NANs, and the default will be suitable for the "easy cases" where there are no NANs, without paying any performance penalty if you already know your data has no NANs.
Thoughts?
I'm especially looking for ideas on what to call the first option.
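[A rough sketch of what such a parameter could look like, for median only. The name `median_nan` and the option strings are purely hypothetical, not an existing or agreed API:]

```python
import math
import statistics

def median_nan(data, nan_policy="fast"):
    """Hypothetical sketch of the proposal (all names are placeholders).

    'fast'   - what happens now: no NaN check, implementation-dependent result
    'raise'  - raise an exception if a NaN is present
    'nan'    - return a NaN if one is present
    'ignore' - treat NaNs as missing values and skip them

    Assumes data is a sequence, not a one-shot iterator.
    """
    if nan_policy != "fast":
        if any(isinstance(x, float) and math.isnan(x) for x in data):
            if nan_policy == "raise":
                raise ValueError("NaN in data")
            if nan_policy == "nan":
                return float("nan")
            data = [x for x in data if not (isinstance(x, float) and math.isnan(x))]
    return statistics.median(data)
```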

msg333135
Author: David Mertz (DavidMertz) *
Date: 2019-01-07 03:57
I believe that the current behavior of `statistics.median` (and `median_low`/`median_high`) is simply broken. It relies on the particular behavior of Python sorting, which only uses `.__lt__()` between objects, and hence does not require a total order.
I can think of absolutely no way to characterize these as reasonable results:
Python 3.7.1 | packaged by conda-forge | (default, Nov 13 2018, 09:50:42)
>>> statistics.median([9, 9, 9, nan, 1, 2, 3, 4, 5])
1
>>> statistics.median([9, 9, 9, nan, 1, 2, 3, 4])
nan
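[These results fall straight out of sorting with a NaN present. Every comparison against a NaN is false, so on CPython the sort can treat the whole list as already ordered and leave the NaN where it sits. A sketch of the effect (CPython-specific, not guaranteed by the language):]

```python
nan = float("nan")
print(nan < 1, 1 < nan, nan == nan)  # False False False: no total order

data = [9, 9, 9, nan, 1, 2, 3, 4, 5]
# On CPython the descending check a[i] < a[i-1] is never true here
# (all comparisons with nan are False), so the sort sees one long
# "ascending" run and returns the list unchanged, nan still at index 3.
print(sorted(data))
```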

msg333151
Author: Jonathan Fine (jfine2358) *
Date: 2019-01-07 14:06
Based on a quick review of the Python docs, the bug report, PEP 450 and this thread, I suggest:
1. More carefully draw attention to the NaN behaviour in the documentation for existing Python versions.
2. Consider revising statistics.py so that it raises an exception when passed NaN data.
This implies dividing this issue into two parts: legacy and future.
For more information, see:
https://mail.python.org/pipermail/python-ideas/2019-January/054872.html

msg399968
Author: Irit Katriel (iritkatriel) *
Date: 2021-08-20 13:16
Reproduced in 3.11:
>>> import numpy as np
>>> import statistics as stats
>>> data = [75, 90, 85, 92, 95, 80, np.nan]
>>> stats.median(data)
90
>>> stats.median_low(data)
90
>>> stats.median_high(data)
90

msg399995
Author: Raymond Hettinger (rhettinger) *
Date: 2021-08-20 21:37
[Steven]
> Thoughts?
1) Document that results are undefined if a NaN is present in the data.
2) Add a function to strip NaNs from the data:

def remove_nans(iterable):
    "Remove float('NaN') and other objects not equal to themselves."
    return [x for x in iterable if x == x]
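[For illustration, the suggested helper, under Raymond's proposed name (not an existing statistics API), composes directly with the current functions:]

```python
import statistics

def remove_nans(iterable):
    "Remove float('NaN') and other objects not equal to themselves."
    return [x for x in iterable if x == x]

data = [75, 90, 85, 92, 95, 80, float("nan")]
cleaned = remove_nans(data)
print(statistics.median(cleaned))       # 87.5
print(statistics.median_low(cleaned))   # 85
print(statistics.median_high(cleaned))  # 90
```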

msg400456
Author: Steven D'Aprano (steven.daprano) *
Date: 2021-08-28 02:16
See thread on Python-Ideas:
https://mail.python.org/archives/list/python-ideas@python.org/thread/EDRF2NR4UOYMSKE64KDI2SWUMKPAJ3YM/
