
Author rhettinger
Recipients rhettinger, steven.daprano
Date 2021-11-11.06:54:54
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1636613694.67.0.344679147943.issue45766@roundup.psfhosted.org>
In-reply-to
Content
Sure, I’m happy to wait.

My thoughts:

* The first link you provided does give the same slope across packages.  Where they differ is in how they choose to report statistics for assessing goodness of fit or for informing hypothesis testing. Neither of those applies to us.

* The compared stats packages offer this functionality because some models don’t benefit from a non-zero constant.

* The second link is of low quality and reads like a hastily typed, stream-of-consciousness rant that roughly translates to “As a blanket statement applicable to all RTO (regression through the origin), I don’t believe the underlying process is linear and I don’t believe that a person could have a priori knowledge of a directly proportional relationship.”  This is bunk — a cold caller makes sales in direct proportion to the number of calls made, and zero calls means zero sales.

* The last point is a distractor.  Dealing with error analysis or input error models is beyond the scope of the package. Doing something I could easily do with my HP-12C is within scope.

* We’re offering users something simple. If you need to fit data to a directly proportional model, set a flag.

* If we don’t offer the option, users have to do too much work to bridge from what we have to what they need:

   (covariance(x, y) + mean(x)*mean(y)) / (variance(x) + mean(x)**2)
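To make the two points above concrete, here is a minimal, self-contained sketch (hypothetical names, not the statistics module’s actual code) of a regression function with such a flag, together with a check that the bridge formula reproduces the through-the-origin slope when covariance and variance are taken as population moments (divisor n; the sample versions with divisor n − 1 would first need rescaling by (n − 1)/n):

```python
from statistics import fmean
from typing import NamedTuple

class LinearRegression(NamedTuple):
    slope: float
    intercept: float

def linear_regression(x, y, /, *, proportional=False):
    """Ordinary least squares; proportional=True forces the line through
    the origin, giving slope = sum(xy) / sum(x*x) and intercept 0.0."""
    if proportional:
        slope = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
        return LinearRegression(slope, 0.0)
    mx, my = fmean(x), fmean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return LinearRegression(slope, my - slope * mx)

def bridge_slope(x, y):
    """The bridge formula above, using population moments (divide by n)."""
    n = len(x)
    mx, my = fmean(x), fmean(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    var = sum((xi - mx) ** 2 for xi in x) / n
    return (cov + mx * my) / (var + mx * mx)

x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]
fit = linear_regression(x, y, proportional=True)
print(fit.slope, fit.intercept)   # slope ≈ 1.99, intercept 0.0
print(bridge_slope(x, y))         # same slope, via the bridge formula
```

For what it’s worth, a flag along these lines did eventually ship as `statistics.linear_regression(x, y, proportional=True)` in Python 3.11.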