Possible Duplicate:
Python rounding error with float numbers
python maths is wrong
I can't get Python to correctly do the subtraction 1 - 0.8 and assign it. It keeps coming up with the incorrect answer, 0.19999999999999996.
I explored a bit:
sq = {}
sub = {}
for i in range(1000):
    sq[str(i/1000.) + '**2'] = (i/1000.)**2
    sub['1-' + str(i/1000.)] = 1.0 - i/1000.
and discovered that this error happens for a seemingly random subset of the floats between 0 and 1 (taken to three decimal places). A similar error occurs when you square those floats, but for a different subset.
I'm hoping for an explanation of this and a way to make Python do the arithmetic right. round(x, 3) is the work-around I'm using for now, but it's not elegant.
Thanks!
This is a session in my Python 2.7.3 shell:
*** Python 2.7.3 (default, Apr 10 2012, 23:24:47) [MSC v.1500 64 bit (AMD64)] on win32. ***
*** Remote Python engine is active ***
>>> 1-0.8
0.19999999999999996
>>> print 1-0.8
0.2
>>> a = 1-0.8
>>> a
0.19999999999999996
>>> print a
0.2
>>> a = 0.2
>>> print a
0.2
>>> a
0.2
>>>
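The difference between `a` and `print a` in the session above comes down to how the value is converted to a string: the interactive prompt echoes values using repr(), which in Python 2.7 shows up to 17 significant digits, while print uses str(), which rounds to 12. A small sketch (written so it also runs under Python 3, where %g formatting reproduces the 12-digit rounding):

```python
# The prompt shows repr(a); `print a` shows str(a).
# In Python 2.7, str() rounds floats to 12 significant digits,
# which hides the tiny binary error; repr() does not.
a = 1 - 0.8
print(repr(a))      # the full-precision view
print('%.12g' % a)  # the 12-significant-digit view, like Python 2's str()
```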
Here's the code I put into a couple online interpreters:
def doit():
d = {'a':1-0.8}
return d
print doit()
and the output:
{'a': 0.19999999999999996}
Comments:
- check yoda.arachsys.com/csharp/floatingpoint.html – avasal, Jan 2, 2013
- effbot.org/pyfaq/… – Paul Hankin, Jan 2, 2013
3 Answers
Use Decimal; it's designed for exactly this:
>>> from decimal import Decimal, getcontext
>>> Decimal(1) - Decimal(0.8)
Decimal('0.1999999999999999555910790150')
>>> getcontext().prec = 3
>>> Decimal(1) - Decimal(0.8)
Decimal('0.200')
>>> float(Decimal(1) - Decimal(0.8))
0.2
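Note that Decimal(0.8) converts a float that already carries the binary error, which is why the first result above isn't exactly 0.2. Constructing the Decimal from a string avoids inheriting that error entirely; a minimal sketch:

```python
from decimal import Decimal

# Decimal('0.8') is exact; Decimal(0.8) faithfully captures the float's error.
exact = Decimal('1') - Decimal('0.8')
print(exact)         # exactly 0.2
print(float(exact))  # converts back to the float 0.2
print(Decimal(0.8))  # the true value the float literal 0.8 stores
```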
3 Comments
- Decimal(1) - Decimal('0.8') -> Decimal('0.2')
- float(Decimal(0.061157) - Decimal(0.060782)) still generates 0.00037500000000000033

Floating-point numbers don't work the way you're expecting them to.
For starters, read the floating point guide. Long story short: computers represent floating-point numbers in binary, and most decimal fractions cannot be stored exactly in binary (try writing 0.8 in base 2 on paper to see why). For practical purposes, 0.19999999999999996 is "close enough" to 0.2. If you want to print it as 0.2, you can do something like:
print "%0.1f" % floating_point_value
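Besides %-formatting, str.format and round() give the same display-time rounding; a quick sketch:

```python
# Three ways to present the same slightly-off value as 0.2:
x = 1 - 0.8
print('%0.1f' % x)         # %-formatting, 1 decimal place
print('{:.3f}'.format(x))  # str.format, 3 decimal places
print(round(x, 3))         # round() returns a float, not a string
```
Note that round() returns a float, so the result is still subject to binary representation; it only happens that 0.2 is the nearest representable value here.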
So what you're seeing isn't an error. It's expected behavior.
Python stores floats in a fixed number of binary bits, and some values simply can't be represented exactly, no matter how many bits of precision you have. That's the problem here. It's like trying to write 1/3 in decimal with a limited number of decimal places: no finite expansion is ever exact.
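You can inspect the exact binary fraction a float actually stores with the standard-library fractions module (a sketch; the exact ratio printed depends on your platform's double format):

```python
from fractions import Fraction

# The float literal 0.8 is really the nearest representable binary fraction,
# not the mathematical value 4/5:
print(Fraction(0.8))
print(Fraction(0.8) == Fraction(4, 5))  # the stored value differs from 4/5
```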