Message155312
| Author | zbysz |
| Recipients | docs@python, loewis, mark.dickinson, zbysz |
| Date | 2012-03-10.14:17:01 |
| SpamBayes Score | 6.541418e-09 |
| Marked as misclassified | No |
| Message-id | <4F5B6257.9050003@in.waw.pl> |
| In-reply-to | <4F5B609F.9050407@in.waw.pl> |
| Content |
[part mangled by the tracker]
"> 1.1999999999999999555910790149937383830547332763671875
">
"> which is accurate to around 16 decimal digits.)
It is easy to count that exactly 17 digits are accurate.
I have to admit that I'm completely lost here --- why would a vastly
inaccurate number (with more than half of its digits wrong) ever be stored?
If "1.2" is converted to a float (a C double in the current implementation),
it has 15.96 decimal digits of precision.
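
For reference, both figures are easy to check interactively; this is only an
illustrative session, not text proposed for the docs:

>>> from decimal import Decimal
>>> import math
>>> Decimal(1.2)   # exact value of the IEEE 754 double nearest to 1.2
Decimal('1.1999999999999999555910790149937383830547332763671875')
>>> 53 * math.log10(2)   # decimal precision carried by a 53-bit significand
15.954589770191003
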
" > Similarly, the result of any
" > floating-point operation must often be rounded to fit into the
" > internal format, resulting in another tiny error.
"Similarly, the result of a floating-point operation must be rounded to
fit into the fixed precision, often resulting in another tiny error." ? |
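
The rounding described here can be illustrated the same way (again only a
sketch of a possible session, not proposed doc text):

>>> from decimal import Decimal
>>> 0.1 + 0.2
0.30000000000000004
>>> Decimal(0.1 + 0.2)   # result of the operation, rounded back to a double
Decimal('0.3000000000000000444089209850062616169452667236328125')
>>> Decimal(0.3)         # the double nearest to 0.3, for comparison
Decimal('0.299999999999999988897769753748434595763683319091796875')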