
Consider the following:

declare @dt datetime, @dt2 datetime2, @d date
set @dt = '2013-01-01'
set @dt2 = '2013-01-01'
set @d = '2013-01-01'
select convert(varbinary, @dt) as dt,
 convert(varbinary, @dt2) as dt2,
 convert(varbinary, @d) as d

Output:

dt                 dt2                  d
------------------ -------------------- --------
0x0000A13900000000 0x07000000000094360B 0x94360B

Now, I already understand from the documentation that datetime has a smaller range and starts from 1753-01-01, while datetime2 and date use 0001-01-01 as their start date.

What I don't understand, though, is that datetime appears to be stored big-endian while datetime2 and date appear to be little-endian (their date bytes have to be reversed before they read as an integer, as shown below). If that's the case, how can they even be properly sortable?
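In fact, here is a quick sketch (illustrative only; I picked dates whose day counts since 0001-01-01 are 1 and 256, so the byte swap is obvious) showing that sorting a date column and sorting its raw bytes give different orders:

declare @t table (d date)
insert @t values ('0001-01-02'), ('0001-09-14')  -- day counts 1 and 256

-- value order: 0001-01-02 first
select d, convert(varbinary(3), d) as bytes from @t order by d

-- raw byte order: 0001-09-14 first, because 0x000100 < 0x010000
select d, convert(varbinary(3), d) as bytes from @t order by convert(varbinary(3), d)

So whatever the engine sorts by, it can't simply be a left-to-right comparison of these bytes.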

Suppose I want to know how many days (as an integer) a date value represents. You would think you could do this:

declare @d date
set @d = '0001-01-31'
select cast(convert(varbinary, @d) as int)

But because of the byte order you get 1,966,080 days: the three stored bytes are 0x1E 0x00 0x00 (30, little-endian), and the cast reads them back big-endian as 0x1E0000 = 30 × 65536.

To get the correct result of 30 days, you have to reverse it:

select cast(convert(varbinary,reverse(convert(varbinary, @d))) as int)

Or, of course you can do this:

select datediff(d,'0001-01-01', @d)

But that means internally somewhere it is reversing the bytes anyway.

So why did they switch endianness?

I only care because I'm working on a custom UDT in SQLCLR and the binary order of the bytes does seem to matter there, but these built-in types seem much more flexible. Does SQL Server have something internal where each type gets to provide its own sorting algorithm? And if so, is there a way I can tap into that for my custom UDT?

See also a related (but different) question on Stack Overflow.

asked Jul 26, 2013 at 23:00
  • Have you tried implementing IComparable? You shouldn't need to dig into the internal representation of the data types, ever. Commented Jul 26, 2013 at 23:05
  • According to this (scroll down to "Implementing a UDT with a User-Defined Format"), you can implement IComparable, but it is only used client-side. SQL Server ignores it and goes off the byte order. Commented Jul 26, 2013 at 23:08
  • Oh. Well that's annoying. Commented Jul 26, 2013 at 23:11
  • @PaulWhite - That is indeed useful. At least it's confirmation of what I am experiencing. Thanks! Commented Jul 28, 2013 at 17:41
  • @PaulWhite - The part he doesn't address in that article is how to remove the leading byte for the null. Why should an int need to be stored in 5 bytes? Commented Jul 28, 2013 at 17:46

1 Answer

SQL Server does not rely on the binary order for its "own" data types. For CLR data types you could use the IComparable interface, but as @MattJohnson mentioned, SQL Server ignores it:

http://connect.microsoft.com/SQLServer/feedback/details/252230/sqlclr-provide-the-ability-to-use-icomparable-or-a-similar-mechanism-for-udts


Microsoft does not publish the details of how the different data types are stored and worked with. However, Books Online explicitly states that you cannot rely on a specific binary format for a specific data type and that the format might change at any time. So it is a good idea to store an INT as just that and not as VARBINARY, because you might not be able to read your data anymore after the next service pack.
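To make that concrete, here is a purely illustrative example of the anti-pattern: round-tripping an INT through its raw byte image works on current builds, but it bakes the current, undocumented layout into your stored data.

declare @i int, @raw varbinary(4)
set @i = 123456
set @raw = convert(varbinary(4), @i)  -- 0x0001E240 on current builds

select @raw as stored_bytes,
 convert(int, @raw) as read_back      -- 123456, but only while the layout stays the same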

As for the sorting: most of the SQL Server core is written in C++. I assume a mechanism similar to IComparable is used internally. But again, there is no publicly accessible documentation about this available. Even if it were, you probably would not be able to exploit it because of the inherent differences between .NET and C++.

answered Jul 27, 2013 at 7:31
  • The other issue number mentioned there was also informative. But do you have any details regarding how the internal types do it? Commented Jul 27, 2013 at 17:24
  • @MattJohnson, see my update above. I am afraid it is not what you were looking for... Commented Jul 28, 2013 at 10:54
  • So, you're recommending using a SQL int as the backing field of my CLR UDT? Do you have an example of how to do that? The CREATE TYPE statement will take a base_type, or an external assembly - but not both. Commented Jul 28, 2013 at 17:44
  • No, that was just an example. What I am saying is that you need to find a way to serialize your UDT so that it can be binary-sorted, as there is currently no way to implement an IComparable interface (or similar) and have SQL Server use it; a sketch of one order-preserving encoding follows below. Commented Jul 28, 2013 at 22:07
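To illustrate the idea (this is only a sketch of the general technique, not anything SQL Server documents): for a fixed-width signed integer you can get an order-preserving byte image by flipping the sign bit and writing the bytes most-significant first. A plain byte comparison over such an image matches the value order, which is exactly what a UDT's serialized form needs.

declare @vals table (v int)
insert @vals values (-500), (-1), (0), (1), (500)

-- flipping the sign bit maps the int range onto 0x00000000..0xFFFFFFFF in value order;
-- convert(varbinary(4), int) already writes the bytes most-significant first
select v, convert(varbinary(4), v ^ cast(0x80000000 as int)) as sortable_bytes
from @vals
order by convert(varbinary(4), v ^ cast(0x80000000 as int))  -- same order as ORDER BY v

In a SQLCLR UDT, the equivalent would be to emit bytes like these from its binary serialization (the user-defined format mentioned in the comments above), so that SQL Server's byte-order comparison agrees with the type's logical order.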
