Consider the following:
declare @dt datetime, @dt2 datetime2, @d date
set @dt = '2013-01-01'
set @dt2 = '2013-01-01'
set @d = '2013-01-01'
select convert(varbinary, @dt) as dt,
convert(varbinary, @dt2) as dt2,
convert(varbinary, @d) as d
Output:
dt                 dt2                  d
------------------ -------------------- --------
0x0000A13900000000 0x07000000000094360B 0x94360B
Now, I already understand from the documentation that datetime has a smaller range and starts from 1753-01-01, while datetime2 and date use 0001-01-01 as their start date.
What I don't understand, though, is that datetime appears to be big-endian here while datetime2 and date are little-endian (their date bytes have to be reversed before they read as a number). If that's the case, how can they even be properly sortable?
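That reading can be checked directly. The decoding below is inferred from the observed values, since Microsoft does not document the binary format: datetime's first four bytes, read big-endian, are days since 1900-01-01, while date's three bytes must be reversed to give days since 0001-01-01 (datetime2 appears to be a scale byte, five time bytes, then the same little-endian date bytes).
-- dt: first 4 bytes, read big-endian, are days since 1900-01-01
select cast(0x0000A139 as int) as dt_days, -- 41273
datediff(d, '1900-01-01', cast('2013-01-01' as datetime)) as dt_check -- 41273
-- d: the bytes 0x94360B reversed give days since 0001-01-01
select cast(0x0B3694 as int) as d_days, -- 734868
datediff(d, cast('0001-01-01' as date), cast('2013-01-01' as date)) as d_check -- 734868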
Consider if I want to know how many integer days are represented by a date
type. You would think you could do this:
declare @d date
set @d = '0001-01-31'
select cast(convert(varbinary, @d) as int)
But due to the endianness, you get 1966080 days!
To get the correct result of 30 days, you have to reverse it:
select cast(convert(varbinary,reverse(convert(varbinary, @d))) as int)
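What the failing cast was doing: casting a varbinary shorter than four bytes to int left-pads it with zeros, so the little-endian bytes 0x1E0000 (the value 30 stored least-significant byte first) are read as 0x001E0000 = 1966080:
select cast(0x1E0000 as int) as misread, -- 1966080 = 0x001E0000
cast(0x00001E as int) as reversed -- 30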
Or, of course you can do this:
select datediff(d,'0001-01-01', @d)
But that means internally somewhere it is reversing the bytes anyway.
So why did they switch endianness?
I only care because I'm working on a custom UDT in SQLCLR, where the binary order of the bytes does seem to matter, yet these built-in types appear to be much more flexible. Does SQL Server have something internal where each type gets to provide its own sorting algorithm? And if so, is there a way I can tap into that for my custom UDT?
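A quick experiment at least shows that ORDER BY is not comparing these raw bytes: day 255 and day 256 swap their relative order under a byte-wise comparison, yet they still sort correctly (convert(varbinary(3), d) is used here only to expose the bytes):
declare @t table (d date)
insert @t values ('0001-09-14'), ('0001-09-13')
select d, convert(varbinary(3), d) as bytes,
datediff(d, cast('0001-01-01' as date), d) as days
from @t order by d
-- d           bytes      days
-- 0001-09-13  0xFF0000   255
-- 0001-09-14  0x000100   256
-- byte-wise, 0x000100 < 0xFF0000, so a raw binary sort would list
-- 0001-09-14 first; ORDER BY clearly is not doing that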
See also a related (but different) question on StackOverflow.
1 Answer
SQL Server does not rely on the binary order for its "own" data types. For CLR data types you could use the IComparable interface, but as @MattJohnson mentioned, SQL Server ignores it:
Microsoft does not publish the details of how the different data types are stored and worked with. However, Books Online explicitly states that you cannot rely on a specific binary format for a specific data type and that the format might change at any time. So it is a good idea to store an INT as just that and not as VARBINARY, because you might not be able to read your data anymore after the next service pack.
As for the sorting: most of the SQL Server core is written in C++. I assume something similar to IComparable is used internally, but again, there is no publicly accessible documentation about this. Even if there were, you probably would not be able to exploit it because of the inherent differences between .NET and C++.
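For the UDT question specifically: a byte-ordered UDT (IsByteOrdered = true on the SqlUserDefinedType attribute) is compared exactly like varbinary, byte by byte from the left. So the serialization itself has to put the most significant byte first, with the sign bit flipped on any signed field. A sketch of the idea in plain T-SQL, not tied to any particular UDT:
-- big-endian integer bytes already compare like the integers themselves:
select case when convert(varbinary(4), 1) < convert(varbinary(4), 256)
then 'ordered' else 'misordered' end as big_endian -- ordered
-- signed values also need the sign bit flipped, otherwise negatives
-- (0xFF...) would sort after positives:
declare @neg int = -1, @pos int = 1
select case when convert(varbinary(4), @neg ^ cast(0x80000000 as int))
< convert(varbinary(4), @pos ^ cast(0x80000000 as int))
then 'ordered' else 'misordered' end as sign_flipped -- ordered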
- The other issue mentioned there was also informative. But do you have any details regarding how the internal types do it? – Matt Johnson-Pint, Jul 27, 2013 at 17:24
- @MattJohnson, see my update above. I am afraid it is not what you were looking for... – Sebastian Meine, Jul 28, 2013 at 10:54
- So, you're recommending using a SQL int as the backing field of my CLR UDT? Do you have an example of how to do that? The CREATE TYPE statement will take a base_type, or an external assembly, but not both. – Matt Johnson-Pint, Jul 28, 2013 at 17:44
- No, that was just an example. What I am saying is that you need to find a way to serialize your UDT so that it can be binary-sorted, as there is currently no way to implement an IComparable interface (or similar) and have SQL Server use it. – Sebastian Meine, Jul 28, 2013 at 22:07
- Have you tried implementing IComparable? You shouldn't need to dig into the internal representation of the data types, ever.
- The UDT does implement IComparable, but it is only used client-side. SQL Server ignores it and goes off the byte order.