The documentation on date/time data types states that timestamp with time zone takes 8 bytes, while time with time zone takes 12 bytes. Both have the same resolution (1 microsecond), and on the face of it timestamp with time zone is storing more information.
Can anyone explain this behavior?
I'm not planning on using time with time zone, for reasons explained on the same page.
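For reference, the documented sizes can be checked directly with pg_column_size; a minimal sketch, assuming a PostgreSQL session:

    -- pg_column_size reports the size of a value's on-disk representation
    SELECT pg_column_size('12:34:56.789012+05:30'::time with time zone)                 AS timetz_bytes,       -- 12
           pg_column_size('2024-01-01 12:34:56.789012+05:30'::timestamp with time zone) AS timestamptz_bytes;  -- 8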
1 Answer
time with time zone stores the microseconds (8 bytes) plus the time zone offset (4 bytes). timestamp with time zone stores just the microseconds, as an instant in UTC, and converts to the session time zone at display time. Because of the conceptual weirdness of the time with time zone type, the time zone needs to be stored explicitly. You don't actually need 8 bytes to store the number of microseconds in a day (that only takes 37 bits), but 4 bytes wouldn't be enough. If you really wanted to, you could probably devise a more compact storage format for time with time zone, but in practice nobody cares.
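A short illustration of that difference (a sketch, again assuming a PostgreSQL session): the offset of a time with time zone value survives round-tripping because it is stored alongside the value, while a timestamp with time zone is reduced to a UTC instant and only rendered in the session's time zone.

    SET TimeZone = 'UTC';

    -- timetz keeps the offset it was entered with: the zone is part of the stored value
    SELECT '12:00:00+05'::time with time zone;                    -- 12:00:00+05

    -- timestamptz stores only the instant; the session time zone is applied at display time
    SELECT '2024-01-01 12:00:00+05'::timestamp with time zone;    -- 2024-01-01 07:00:00+00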