classification
Title: Computing median, median_high and median_low in the statistics library
Type: behavior Stage:
Components: Library (Lib) Versions: Python 3.8
process
Status: open Resolution:
Dependencies: Superseder:
Assigned To: steven.daprano Nosy List: David Mertz, dcasmr, eric.smith, jfine2358, maheshwark97, mark.dickinson, steven.daprano
Priority: normal Keywords:

Created on 2018-03-16 05:58 by dcasmr, last changed 2019-01-07 14:06 by jfine2358.

Messages (12)
msg313933 - (view) Author: Luc (dcasmr) Date: 2018-03-16 05:58
When a list or DataFrame series contains NaN(s), the median, median_low and median_high in the Python 3.6.4 statistics library still compute a result; however, the results are wrong.
Either they should return a NaN, just as computing a mean does, or the docs should point the user to drop the NaNs before computing those statistics.
Example:
import numpy as np
import statistics as stats

data = [75, 90, 85, 92, 95, 80, np.nan]
Median  = stats.median(data)
Median_low = stats.median_low(data)
Median_high = stats.median_high(data)
The calls above ALL return 90, which is incorrect.

Correct answers should be:
Median = 87.5
Median_low = 85
Median_high = 90
Thanks,
Luc
msg313938 - (view) Author: Maheshwar Kumar (maheshwark97) Date: 2018-03-16 08:21
Will just removing all np.nan values do the job? Btw, the values will then be:
median = 87.5
median_low = 85
median_high = 90
I can correct it and send a pull request.
msg313940 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2018-03-16 08:43
> Will just removing all np.nan values do the job?

Unfortunately, I don't think it's that simple. You want consistency across the various library calls, so if the various `median` functions are changed to treat NaNs as missing data, then the other functions should be, too.
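[Editorial note: for illustration, such a consistent NaN-as-missing treatment could be factored into one shared helper. The name `drop_nans` and the float-only NaN check are assumptions for this sketch, not part of statistics.py.]

```python
import math
import statistics

def drop_nans(data):
    """Hypothetical helper: treat float NaNs as missing data, so every
    statistics function could filter them out the same way."""
    return [x for x in data if not (isinstance(x, float) and math.isnan(x))]

data = [75, 90, 85, 92, 95, 80, float("nan")]
cleaned = drop_nans(data)
print(statistics.median(cleaned))       # 87.5
print(statistics.median_low(cleaned))   # 85
print(statistics.median_high(cleaned))  # 90
```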
msg313949 - (view) Author: Maheshwar Kumar (maheshwark97) Date: 2018-03-16 13:59
Well, if I don't consider np.nan as missing data and consider all the other values, then the answer being 90 is correct, right?
msg313950 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2018-03-16 14:32
> then the answer being 90 is correct, right?

How do you deduce that? Why 90 rather than 85 (or 87.5, or some other value)?

For what it's worth, NumPy gives a result of NaN for the median of an array that contains NaNs:

>>> np.median([1.0, 2.0, 3.0, np.nan])
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/numpy/lib/function_base.py:4033: RuntimeWarning: Invalid value encountered in median
  r = func(a, **kwargs)
nan
msg313953 - (view) Author: Steven D'Aprano (steven.daprano) * (Python committer) Date: 2018-03-16 16:15
On Fri, Mar 16, 2018 at 02:32:36PM +0000, Mark Dickinson wrote:
> For what it's worth, NumPy gives a result of NaN for the median of an array that contains NaNs:

By default, R gives the median of a list containing either NaN or NA 
("not available", intended as a missing value) as NA:

> median(c(1, 2, 3, 4, NA))
[1] NA
> median(c(1, 2, 3, 4, NaN))
[1] NA

but you can ignore them too:

> median(c(1, 2, 3, 4, NA), na.rm=TRUE)
[1] 2.5
> median(c(1, 2, 3, 4, NaN), na.rm=TRUE)
[1] 2.5
msg313956 - (view) Author: Luc (dcasmr) Date: 2018-03-16 16:45
Just to make sure we are focused on the issue: the reported bug is in the statistics library (not in numpy). It occurs when there is at least one missing value in the data and affects the computation of median, median_low and median_high using the statistics library.
The test was performed on Python 3.6.4.

When there are no missing values (NaNs) in the data, computing the median, median_high and median_low with the statistics library works fine.
So, yes, removing the NaNs (or imputing them) before computing the median(s) resolves the issue.
Also, just as statistics.mean(data) returns a nan when the data has missing values, the median, median_high and median_low should behave the same way.

import numpy as np
import statistics as stats

data = [75, 90, 85, 92, 95, 80, np.nan]

Median = stats.median(data)
Median_high = stats.median_high(data)
Median_low = stats.median_low(data)
print("The incorrect Median is", Median)
The incorrect Median is 90
print("The incorrect median high is", Median_high)
The incorrect median high is 90
print("The incorrect median low is", Median_low)
The incorrect median low is 90

## Mean returns nan
Mean = stats.mean(data)
print("The mean is", Mean)
The mean is nan

Now, when we drop the missing values, we have:
data2 = [75, 90, 85, 92, 95, 80]
stats.median(data2)
87.5
stats.median_high(data2)
90
stats.median_low(data2)
85
msg313965 - (view) Author: Maheshwar Kumar (maheshwark97) Date: 2018-03-16 18:25
So from the above, am I to conclude that removing np.nan is the best path to take? Also, should the same step be included in median_grouped as well?
msg313972 - (view) Author: Luc (dcasmr) Date: 2018-03-16 21:14
If we are trying to fix this, the behavior should match computing the mean or harmonic mean with the statistics library when there are missing values in the data. At least that way it is consistent with how the statistics library already works when computing with NaNs in the data. Then again, it should be mentioned somewhere in the docs.

import statistics as stats
import numpy as np
import pandas as pd
data = [75, 90, 85, 92, 95, 80, np.nan]
stats.mean(data)
nan
stats.harmonic_mean(data)
nan
stats.stdev(data)
nan
As you can see, when there is a missing value, computing the mean, harmonic mean and sample standard deviation with the statistics library returns a nan.
However, median, median_high and median_low compute their statistics incorrectly when missing values are present in the data.
It is better to return a nan and let the user drop (or resolve) any missing values before computing.
## Another example using a pandas Series
df = pd.DataFrame(data, columns=['data'])
df
    data
0   75.0
1   90.0
2   85.0
3   92.0
4   95.0
5   80.0
6    NaN

### Use the statistics library to compute the median of the Series
stats.median(df['data'])
90
 
## Pandas returns the correct median by dropping the missing values
## Now use pandas to compute the median of the serie with missing value
df['data'].median()
87.5

I did not test median_grouped in the statistics library, but will let you know afterwards if it's affected as well.
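[Editorial note: pandas exposes this choice explicitly through the `skipna` parameter of `Series.median` (True by default), much like R's `na.rm`. A small sketch:]

```python
import numpy as np
import pandas as pd

s = pd.Series([75, 90, 85, 92, 95, 80, np.nan])
print(s.median())              # 87.5 -- skipna=True is the default: NaN dropped
print(s.median(skipna=False))  # nan  -- propagate the NaN instead
```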
msg327281 - (view) Author: Steven D'Aprano (steven.daprano) * (Python committer) Date: 2018-10-07 14:35
I want to revisit this for 3.8.

I agree that the current implementation-dependent behaviour when there are NANs in the data is troublesome. But I don't think that there is a single right answer.

I also agree with Mark that if we change median, we ought to change the other functions so that people can get consistent behaviour. It wouldn't be good for median to ignore NANs and mean to process them.

I'm inclined to add a parameter to the statistics functions to deal with NANs, allowing the caller to select from:

- implementation-dependent, i.e. what happens now;
  (for speed, and backwards compatibility, this would be the default)

- raise an exception;

- return a NAN;

- skip any NANs (treat them as missing values to be ignored).

I think that raise/return/ignore will cover most use-cases for NANs, and the default will be suitable for the "easy cases" where there are no NANs, without paying any performance penalty if you already know your data has no NANs.

Thoughts?

I'm especially looking for ideas on what to call the first option.
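[Editorial note: a rough sketch of how such a parameter could behave for `median`. The function and option names here are placeholders for illustration, not a proposed API.]

```python
import math
import statistics

def median_with_policy(data, nan_policy="fast"):
    """Hypothetical sketch of the proposed parameter:
    - "fast":   current implementation-dependent behaviour (default)
    - "raise":  raise ValueError if the data contains a NaN
    - "nan":    return NaN if the data contains a NaN
    - "ignore": drop NaNs, treating them as missing values
    """
    if nan_policy != "fast":
        if any(isinstance(x, float) and math.isnan(x) for x in data):
            if nan_policy == "raise":
                raise ValueError("NaN in data")
            if nan_policy == "nan":
                return math.nan
            if nan_policy == "ignore":
                data = [x for x in data
                        if not (isinstance(x, float) and math.isnan(x))]
    return statistics.median(data)

data = [75, 90, 85, 92, 95, 80, math.nan]
print(median_with_policy(data, nan_policy="ignore"))  # 87.5
```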
msg333135 - (view) Author: David Mertz (David Mertz) Date: 2019-01-07 03:57
I believe that the current behavior of `statistics.median[|_low|_high]` is simply broken.  It relies on the particular behavior of Python sorting, which only utilizes `.__lt__()` between objects, and hence does not require a total order.

I can think of absolutely no way to characterize these as reasonable results:

Python 3.7.1 | packaged by conda-forge | (default, Nov 13 2018, 09:50:42)
>>> statistics.median([9, 9, 9, nan, 1, 2, 3, 4, 5])
1
>>> statistics.median([9, 9, 9, nan, 1, 2, 3, 4])
nan
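[Editorial note: the order-dependence follows from NaN's comparison semantics. Every ordering comparison involving NaN is False, so `sorted` (which only uses `__lt__`) never moves a NaN relative to its neighbours, and the result depends on the input order. A minimal illustration:]

```python
import math

nan = math.nan
# NaN violates total ordering: every comparison with it is False.
print(nan < 1, 1 < nan, nan == nan)   # False False False

# Because sort only asks "__lt__" questions and always hears "no",
# the NaN stays roughly where it was, splitting the list around it.
print(sorted([9, 9, 9, nan, 1, 2, 3, 4, 5]))
```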
msg333151 - (view) Author: Jonathan Fine (jfine2358) * Date: 2019-01-07 14:06
Based on a quick review of the Python docs, the bug report, PEP 450
and this thread, I suggest:

1. More carefully draw attention to the NaN feature, in the
documentation for existing Python versions.
2. Consider revising statistics.py so that it raises an exception,
when passed NaN data.

This implies dividing this issue into two parts: legacy and future.

For more information, see:
https://mail.python.org/pipermail/python-ideas/2019-January/054872.html
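[Editorial note: suggestion 2 could be prototyped with a small validator along these lines. The helper name is hypothetical and not part of statistics.py.]

```python
import math

def check_no_nans(data):
    """Hypothetical validator: raise ValueError when the data
    contains a NaN, as suggested for a future statistics.py."""
    for i, x in enumerate(data):
        if isinstance(x, float) and math.isnan(x):
            raise ValueError(f"NaN in data at index {i}")
    return data

check_no_nans([75, 90, 85])            # passes through unchanged
# check_no_nans([75.0, math.nan])      # would raise ValueError
```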
History
Date                 User            Action  Args
2019-01-07 14:06:18  jfine2358       set     nosy: + jfine2358
                                             messages: + msg333151
2019-01-07 03:57:41  David Mertz     set     nosy: + David Mertz
                                             messages: + msg333135
2018-10-07 14:35:30  steven.daprano  set     assignee: steven.daprano
                                             messages: + msg327281
                                             versions: + Python 3.8, - Python 3.6
2018-03-20 09:23:12  eric.smith      set     nosy: + eric.smith
2018-03-16 21:14:39  dcasmr          set     messages: + msg313972
2018-03-16 18:25:23  maheshwark97    set     messages: + msg313965
2018-03-16 16:45:41  dcasmr          set     messages: + msg313956
2018-03-16 16:15:15  steven.daprano  set     messages: + msg313953
2018-03-16 14:32:36  mark.dickinson  set     messages: + msg313950
2018-03-16 13:59:58  maheshwark97    set     messages: + msg313949
2018-03-16 08:43:08  mark.dickinson  set     nosy: + mark.dickinson, steven.daprano
                                             messages: + msg313940
2018-03-16 08:21:28  maheshwark97    set     nosy: + maheshwark97
                                             messages: + msg313938
2018-03-16 05:58:02  dcasmr          create