
The documentation on date/time data types states that timestamp with time zone takes 8 bytes, while time with time zone takes 12 bytes. They both have the same resolution (1 microsecond), and on the face of it timestamp with time zone stores more information.

Can anyone explain this behavior?

I'm not planning on using time with time zone, for reasons explained on the same page.
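For reference, the documented sizes can be checked directly with pg_column_size; a minimal sketch, assuming a reasonably recent PostgreSQL:

    SELECT pg_column_size('13:44:00.000001+02'::time with time zone)                 AS timetz_bytes,       -- 12
           pg_column_size('2013-05-21 13:44:00.000001+02'::timestamp with time zone) AS timestamptz_bytes;  -- 8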

asked May 21, 2013 at 6:48

1 Answer


time with time zone stores the time of day in microseconds (8 bytes) plus the time zone offset (4 bytes). timestamp with time zone stores just the microseconds, as an instant in UTC, and converts to the session's time zone at display time. Because of the conceptual weirdness of the time with time zone type, the time zone has to be stored explicitly. You don't actually need 8 bytes to store the number of microseconds in a day, but 4 bytes wouldn't be enough. If you really wanted to, you could probably devise a more compact storage format for time with time zone, but in practice nobody cares.
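A quick sketch of the display-time conversion (exact output formatting may vary with version and settings): the timestamptz value is rendered in the session's time zone, while the timetz value keeps the offset it was stored with.

    SET timezone = 'UTC';
    SELECT '2013-05-21 13:44:00+02'::timestamp with time zone;  -- rendered as 2013-05-21 11:44:00+00
    SELECT '13:44:00+02'::time with time zone;                  -- keeps its stored offset: 13:44:00+02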

answered May 21, 2013 at 13:44
