Message249308
Author: tim.peters
Recipients: aconrad, belopolsky, larry, mark.dickinson, r.david.murray, tbarbugli, tim.peters, trcarden, vivanov, vstinner
Date: 2015-08-29 01:40:10
Message-id: <1440812411.05.0.533427732189.issue23517@psf.upfronthosting.co.za>
In-reply-to:
Content:
> >>> x = float.fromhex('0x1.38f312b1b36bdp-1')
> >>> x
> 0.6112295
> >>> round(x, 6)
> 0.611229
> >>> timedelta(0, x).microseconds
> 611230
>
> but I no longer remember whether we concluded that
> timedelta got it wrong or round or both or neither. :-)
Here you go:
>>> import decimal
>>> decimal.Decimal(x)
Decimal('0.61122949999999998116351207499974407255649566650390625')
That's the exact value you're actually using. What's "correct" depends on what's intended.
round(x, 6) actually rounds to
>>> decimal.Decimal(round(x, 6))
Decimal('0.6112290000000000222968310481519438326358795166015625')
and that's fine. timedelta's result does not match what using infinite precision would deliver, but I couldn't care much less ;-)
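For comparison, here is a small sketch (mine, not part of the original message) that redoes the scaling with the decimal module: rounding the exact microsecond count half-to-even gives 611229, while timedelta lands on 611230, which is the mismatch described above.

>>> import decimal
>>> from datetime import timedelta
>>> decimal.getcontext().prec = 60          # enough digits to keep the product exact
>>> x = float.fromhex('0x1.38f312b1b36bdp-1')
>>> exact_us = decimal.Decimal(x) * 1_000_000
>>> exact_us                                # exact value of x, in microseconds
Decimal('611229.49999999998116351207499974407255649566650390625')
>>> exact_us.to_integral_value()            # round-half-even on the exact value
Decimal('611229')
>>> timedelta(0, x).microseconds            # as in the session quoted above
611230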
The real lesson to take from all this, when you design your own killer language, is that using a binary floating point type for timestamps comes with many costs and surprises.
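As a hedged illustration of that lesson (my example, not from the message): an ordinary Unix timestamp with fractional seconds already fails to be exactly representable as a binary double.

>>> import decimal
>>> ts = 1440812410.05                      # hypothetical timestamp chosen for illustration
>>> decimal.Decimal(ts)                     # the ".05" is not exactly representable
Decimal('1440812410.0499999523162841796875')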