Message132219
As Daniel says, from_float expects a float object, not a Decimal instance.
What did you want to achieve in the following line:
self.from_float(value * decimal.Decimal(1.0))/decimal.Decimal(1.0)
?
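For reference, here is a rough sketch of the distinction (the name 'value' is a made-up stand-in, and the exact exception raised for a non-float argument differs between decimal versions):

>>> import decimal
>>> value = decimal.Decimal('2.3')
>>> # decimal.Decimal.from_float(value)  # fails: from_float wants a float, not a Decimal
>>> decimal.Decimal.from_float(2.3)      # exact conversion of a binary float
Decimal('2.29999999999999982236431605997495353221893310546875')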
By the way, in all current versions of Python, from_float is redundant: you can create a Decimal directly from a float:
Python 2.7.1+ (2.7:d52b1faa7b11+, Mar 25 2011, 21:48:24)
[GCC 4.2.1 (Apple Inc. build 5664)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from decimal import Decimal
[64140 refs]
>>> Decimal(2.3)
Decimal('2.29999999999999982236431605997495353221893310546875')
[64149 refs]
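As that output shows, constructing from a float captures the exact binary value of the float. If the goal is the literal decimal value 2.3 instead, constructing from a string is the usual approach (a quick illustration):

>>> Decimal('2.3')
Decimal('2.3')
>>> Decimal(2.3) == Decimal('2.3')
False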