Message 246306

Author: vstinner
Recipients: belopolsky, ethan.furman, larry, mark.dickinson, r.david.murray, tbarbugli, trcarden, vstinner
Date: 2015-07-05 10:42:37
SpamBayes Score: -1.0
Marked as misclassified: Yes
Message-id: <CAMpsgwZiLJkdEkxzxp+ph5H4YTqXdngx9UvMM+N6eorKHi76Ag@mail.gmail.com>
In-reply-to: <1435889228.66.0.609645465346.issue23517@psf.upfronthosting.co.za>
Content:
On Friday, July 3, 2015, Alexander Belopolsky <report@bugs.python.org>
wrote:
>
> > UNIX doesn't like timestamps in the future
>
> I don't think this is a serious consideration. The problematic scenario
> would be obtaining a high-resolution timestamp (from, say, time.time()),
> converting it to datetime and passing it back to the OS as a possibly
> 0.5 μs higher value. Given that the timestamp -> datetime -> timestamp
> roundtrip by itself takes over 1 μs, it is very unlikely that the rounded
> value is still in the future by the time it hits the OS.
>
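(A minimal sketch of measuring that roundtrip cost: the ~1 μs figure above
is Alexander's estimate, actual numbers depend on the machine and Python
version, and the helper name "roundtrip" is only for illustration.)

    import time
    import timeit
    from datetime import datetime, timezone

    def roundtrip():
        # timestamp -> datetime -> timestamp
        ts = time.time()
        dt = datetime.fromtimestamp(ts, tz=timezone.utc)
        return dt.timestamp()

    n = 100000
    elapsed = timeit.timeit(roundtrip, number=n)
    print("roundtrip: %.2f us per call" % (elapsed / n * 1e6))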
In many cases the resolution is 1 second: for example, a filesystem with a
resolution of 1 second, or an API that only supports a resolution of
1 second. With a resolution of 1 second, rounding produces a timestamp in
the future about 50% of the time.
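(A minimal sketch of why, assuming round-half-up to a whole second and
uniformly distributed fractional parts; the base timestamp below is
arbitrary.)

    import random

    random.seed(0)
    # 10000 timestamps with random sub-second parts
    samples = [1436090557 + random.random() for _ in range(10000)]

    # Fractional parts above 0.5 round to the *next* second, i.e. a
    # timestamp in the future; int() (flooring) never goes into the future.
    future_round = sum(1 for ts in samples if round(ts) > ts)
    future_floor = sum(1 for ts in samples if int(ts) > ts)

    print("rounded into the future: %.1f%%"
          % (100.0 * future_round / len(samples)))   # ~50%
    print("floored into the future: %.1f%%"
          % (100.0 * future_floor / len(samples)))   # 0%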
Sorry, I don't remember all the details of timestamp rounding and all the
issues that I saw.