user_id=31435
Alas, there's no good solution to this. Python's definition of
mod doesn't make sense for floats. Consider, e.g.,
(-1e-100) % 1e100
math.fmod() applied to this returns (the exactly correct)
-1e-100, but Python's float.__mod__ returns the absurd (on
more than one count) 1e100. math.fmod() can always compute
an exactly correct result, but float.__mod__ cannot. As the
example shows, float.__mod__(x, y) can't even guarantee
abs(x % y) < abs(y) for non-zero y (but math.fmod() can).
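A quick sketch of the example above, runnable in any recent CPython:

```python
import math

x, y = -1e-100, 1e100

# math.fmod keeps the sign of the first argument and is exactly
# correct here: the true remainder is x itself.
print(math.fmod(x, y))   # -1e-100

# float.__mod__ must return a result with the sign of y, so it
# computes x - y*floor(x/y) = -1e-100 + 1e100, which rounds to
# exactly 1e100 in binary floating point -- so abs(x % y) < abs(y)
# fails even though y is non-zero.
print(x % y)             # 1e100
print(abs(x % y) < abs(y))   # False
```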
OTOH, math.fmod's "sign of the first argument" rule is silly
compared to Python's "sign of the second argument" rule for
integers.
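The two sign rules side by side, using small integers where both operations are exact:

```python
import math

# Python's rule: the result takes the sign of the second operand.
print(-7 % 3)    # 2
print(7 % -3)    # -2

# math.fmod's rule (inherited from C's fmod): the result takes the
# sign of the first operand.
print(math.fmod(-7.0, 3.0))   # -1.0
print(math.fmod(7.0, -3.0))   # 1.0
```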
It's a mistake to believe that one definition of mod "should"
work across numeric types. Since Decimal is in fact a floating
type, I'd prefer to stick to the IBM spec's definition, which
(unlike Python's) makes sense for floats.
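For what it's worth, the decimal module as shipped does follow the IBM spec here: Decimal's % gives the remainder the sign of the first operand, matching math.fmod rather than int.__mod__:

```python
from decimal import Decimal

# IBM spec's remainder: sign of the first operand.
print(Decimal(-7) % Decimal(3))   # -1

# Python's int rule for the same operands: sign of the second operand.
print(-7 % 3)                     # 2
```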
That's not a good solution; but neither would be extending
the IBM spec to introduce a broken-for-floats meaning just
because Python's float.__mod__ is broken.