Showing posts with label SQL Server 2016. Show all posts
Monday, January 15, 2018
When uncompressing data, make sure to use the correct function
SQL Server has offered data compression for a while now: you could use either PAGE compression or ROW compression, but you had no ability to specify which columns. That changed in SQL Server 2016.
SQL Server 2016 added the ability to compress data yourself by using the COMPRESS function. The COMPRESS function compresses the input expression using the GZIP algorithm. The result of the compression is a byte array of type varbinary(max).
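To get a feel for the savings, you can compare DATALENGTH of a value before and after compression. This is just a quick sketch; the repetitive string below compresses extremely well, real-world data will vary.

```sql
-- compare the size of a value before and after COMPRESS
DECLARE @s varchar(8000) = REPLICATE('aaaaaaaaaaaaa', 500);

SELECT DATALENGTH(@s)            AS OriginalBytes,
       DATALENGTH(COMPRESS(@s)) AS CompressedBytes;
```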
Let's take a quick look at how this function works
We will create a table, insert one row and then update the SomeOtherColumn value by using the COMPRESS function on the SomeColumn column
CREATE TABLE test (Somecolumn varchar(200), SomeOtherColumn varbinary(max))
GO
INSERT test VALUES ('aaaaaaaaaaaaa', null)
UPDATE test SET SomeOtherColumn = COMPRESS(Somecolumn)
GO

SELECT Somecolumn, SomeOtherColumn FROM test
Here is what the output looks like
What if I told you now to create a new table by uncompressing the data via a SELECT INTO query, followed by dropping the old table?
No problem, if there is a COMPRESS function then surely there must be an UNCOMPRESS function... right? Let's find out......
SELECT Somecolumn,
       CAST(UNCOMPRESS(SomeOtherColumn) AS varchar(max)) AS SomeOtherColumn
INTO SomeNewTable
FROM test

DROP TABLE test

SELECT * FROM SomeNewTable
Wrong! While the UNCOMPRESS function exists, it is not the correct function.
Hopefully you ran a test before just dropping a table after doing select into
The correct function is DECOMPRESS
Here is what it all looks like
SELECT Somecolumn, SomeOtherColumn,
       CAST(DECOMPRESS(SomeOtherColumn) AS varchar(max)) AS CastedDecompressed,
       DECOMPRESS(SomeOtherColumn) AS Decompressed
FROM test

SELECT Somecolumn, SomeOtherColumn,
       CAST(UNCOMPRESS(SomeOtherColumn) AS varchar(max)) AS CastedUncompressed,
       UNCOMPRESS(SomeOtherColumn) AS Uncompressed
FROM test

This is the output
Tuesday, January 17, 2017
Using bigint with FORMATMESSAGE
SQL Server 2016 added the FORMATMESSAGE function. According to Books On Line, FORMATMESSAGE constructs a message from an existing message in sys.messages or from a provided string. The functionality of FORMATMESSAGE resembles that of the RAISERROR statement. However, RAISERROR prints the message immediately, while FORMATMESSAGE returns the formatted message for further processing.
So let's take a look at this new function, run the following
SELECT FORMATMESSAGE('Signed int %i, %i', 50, -50)
SELECT FORMATMESSAGE('Unsigned int %u, %u', 50, -50);
Here is the output if you run that
--------------------------------------------
Signed int 50, -50
Unsigned int 50, 4294967246
Here are the type specifications that you can use
Type specification Represents
d or i Signed integer
o Unsigned octal
s String
u Unsigned integer
x or X Unsigned hexadecimal
We used i to denote a signed integer, and we used u to denote an unsigned integer
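As a quick sketch of the other specifications, here are %x, %X and %o in action (the literal values are just examples, following the usual printf-style semantics):

```sql
-- hexadecimal (lower and upper case) and octal type specifications
SELECT FORMATMESSAGE('Hex %x, Hex %X, Octal %o', 255, 255, 8);
-- returns: Hex ff, Hex FF, Octal 10
```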
Let's look at another example, this time we are using a variable. The variable will be an integer and we are using i as the type specification
DECLARE @Val int = 1
SELECT FORMATMESSAGE('The value you supplied %i is incorrect!', @Val);
Here is the output
---------------------------------------
The value you supplied 1 is incorrect!
That worked without a problem. Now let's use a variable of the bigint data type, with the same type specification as before
DECLARE @Val bigint = 1
SELECT FORMATMESSAGE('The value you supplied %i is incorrect!', @Val);
Here is the output
---------------------------------------------------------------------------
Error: 50000, Severity: -1, State: 1. (Params:).
The error is printed in terse mode because there was error during formatting.
Tracing, ETW, notifications etc are skipped.
As you can see that did not work, so what can we do?
One thing we can do is convert the value to a varchar and then use s as the type specification
DECLARE @Val bigint = 1
SELECT FORMATMESSAGE('The value you supplied %s is incorrect!', CONVERT(varchar(100), @Val));
You will again get this as output
---------------------------------------
The value you supplied 1 is incorrect!
So converting to varchar worked, but what if we want to use a bigint data type without converting to a varchar?
Another way is to use I64d as the type specification
DECLARE @Val bigint = 1
SELECT FORMATMESSAGE('The value you supplied %I64d is incorrect!', @Val);
You will get this
---------------------------------------
The value you supplied 1 is incorrect!
So there you have it: if you want to use bigint with FORMATMESSAGE, use I64d as the type specification, or convert to varchar and use s as the type specification
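As the Books On Line quote above mentions, FORMATMESSAGE can also pull the format string from an existing message in sys.messages instead of an inline string. A minimal sketch, assuming message number 50001 is free on your instance:

```sql
-- add a custom message, then format it by message number
EXEC sp_addmessage @msgnum = 50001, @severity = 16,
     @msgtext = N'The value you supplied %s is incorrect!';

SELECT FORMATMESSAGE(50001, '42');

EXEC sp_dropmessage @msgnum = 50001;
```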
Monday, January 16, 2017
Creating the Wide World Importers sample database v1.0
Before SQL Server 2005, we had the Northwind and pubs sample databases; these actually shipped with the product. If you did a SQL Server 2000 install, you would see these two databases in addition to the system databases.
With SQL Server 2005 this changed: there were no more sample databases included. Instead, new databases were made available for download; for a while now we have had AdventureWorks available. Today I wanted to see if there was an AdventureWorks 2016 database available. I couldn't find it. Instead I found the Wide World Importers sample database v1.0.
These sample databases now live on GitHub
Here is the link https://github.com/Microsoft/sql-server-samples
As a normal person you will probably navigate to the databases folder https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases and in there you will probably go to https://github.com/Microsoft/sql-server-samples/tree/master/samples/databases/wide-world-importers/wwi-database-scripts
Then you will see the following scripts
Now if your name is Denis or you like to waste time (or both), you think..cool I will download this repo and run these files. That is not unreasonable to think...... but
To save you some time, here is the easiest way to put this database on your SQL Server instance
Instead of running those scripts, go to this page https://github.com/Microsoft/sql-server-samples/releases/tag/wide-world-importers-v1.0 scroll down to downloads, download the 121 MB file named WideWorldImporters-Full.bak
While that is downloading, create a folder on your C drive named SQL, and in that folder create 2 folders, one named Data and one named Log; it should look like the image below
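If you want to double-check the logical file names inside the backup (they have to match the MOVE clauses in the restore command), you can list them first. A quick sketch, assuming the backup sits in the root of the C drive:

```sql
-- list the logical and physical file names contained in the backup
RESTORE FILELISTONLY
FROM DISK = N'C:\WideWorldImporters-Full.bak';
```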
Now assuming the backup file is on the C drive, here is what the restore command should look like
USE [master]
RESTORE DATABASE [WideWorldImporters]
FROM DISK = N'C:\WideWorldImporters-Full.bak'
WITH FILE = 1,
     MOVE N'WWI_Primary' TO N'C:\SQL\DATA\WideWorldImporters.mdf',
     MOVE N'WWI_UserData' TO N'C:\SQL\DATA\WideWorldImporters_UserData.ndf',
     MOVE N'WWI_Log' TO N'C:\SQL\Log\WideWorldImporters.ldf',
     MOVE N'WWI_InMemory_Data_1' TO N'C:\SQL\DATA\WideWorldImporters_InMemory_Data_1',
     NOUNLOAD, REPLACE, STATS = 5
GO
On SQL Server vNext 1.1, the output looks like this from the command above
5 percent processed.
10 percent processed.
15 percent processed.
20 percent processed.
25 percent processed.
30 percent processed.
Processed 1464 pages for database 'WideWorldImporters', file 'WWI_Primary' on file 1.
Processed 53096 pages for database 'WideWorldImporters', file 'WWI_UserData' on file 1.
Processed 33 pages for database 'WideWorldImporters', file 'WWI_Log' on file 1.
Processed 3862 pages for database 'WideWorldImporters', file 'WWI_InMemory_Data_1' on file 1.
Converting database 'WideWorldImporters' from version 852 to the current version 860.
Database 'WideWorldImporters' running the upgrade step from version 852 to version 853.
Database 'WideWorldImporters' running the upgrade step from version 853 to version 854.
Database 'WideWorldImporters' running the upgrade step from version 854 to version 855.
Database 'WideWorldImporters' running the upgrade step from version 855 to version 856.
Database 'WideWorldImporters' running the upgrade step from version 856 to version 857.
Database 'WideWorldImporters' running the upgrade step from version 857 to version 858.
Database 'WideWorldImporters' running the upgrade step from version 858 to version 859.
Database 'WideWorldImporters' running the upgrade step from version 859 to version 860.
100 percent processed.
RESTORE DATABASE successfully processed 58455 pages in 6.105 seconds (74.803 MB/sec).
To make sure everything is cool after the restore is done, run the following
USE WideWorldImporters
GO

SELECT [Website].[CalculateCustomerPrice](1, number, '20160101')
FROM master..spt_values
WHERE type = 'p'

SELECT [Website].[CalculateCustomerPrice](customerid, 1, '20160101')
FROM Sales.customers
And now we can even check that the function is visible in the new sys.dm_exec_function_stats DMV
SELECT TOP 10
       d.object_id,
       d.database_id,
       OBJECT_NAME(object_id, database_id) AS 'function name',
       d.cached_time,
       d.last_execution_time,
       d.total_elapsed_time,
       d.total_elapsed_time / d.execution_count AS [avg_elapsed_time],
       d.last_elapsed_time,
       d.execution_count
FROM sys.dm_exec_function_stats AS d
ORDER BY [total_worker_time] DESC;
And here is the output
As you can see, this function was executed a bunch of times
So there you have it, this is how you create the DB.... I also now understand why Brent Ozar uses the Stack Overflow DB for his posts; at least it is easy to find. If you prefer that over this example, head over here: https://www.brentozar.com/archive/2015/10/how-to-download-the-stack-overflow-database-via-bittorrent/
Thursday, December 29, 2016
The 15 most popular posts in 2016
Another year is almost over. As the last post of this year I decided to share with you the 15 most popular posts in 2016. I just used Google Analytics to grab this info, I excluded the home page from the top 15. Four of these posts were written in 2006 and one was written in 2005
Here they are in order of popularity
1 Some cool SQL Server announcements SQL Graph, Adaptive Query Plan, CTP1 of SQL vNext, SQL Injection detection
This is my recap of the chalkboard session with the SQL Server team at the SQL Server PASS summit in Seattle.
2 Five Ways To Return Values From Stored Procedures
A very old post that shows you five ways to return values from a stored proc
3 SQL Server 2016 SP1 released, SQL Server vNext available for download
Posted this while the connect event was going on
4 Installing SQL Server on Linux, connecting and running queries from Linux
A quick walk through on how to install SQL Server on Ubuntu, I also included a video in the post
5 Are your foreign keys indexed? If not, you might have problems
A post showing you that indexing foreign keys might be beneficial if you do deletes
6 Convert Millisecond To "hh:mm:ss" Format
A very old post showing you how to convert from milliseconds to "hh:mm:ss" format
7 What's new in SQL Server 2016: CREATE OR ALTER
A post showing how the new CREATE OR ALTER syntax works
8 Why you need additional privileges to truncate tables compared to delete statements
The reason why you need these privileges is shown
9 What's new in SQL Server 2016: Truncate partitions
This post shows how truncate partitions works
10 What we learned from the reddit SQL Server AMA thread on /r/sqlserver
A recap of the AMA SQL Server thread on reddit
11 Playing around with the new stuff in SQL Server vNext and SQL Server 2016 SP1
Just a post showing some of the new stuff released in SP 1 of SQL Server 2016 and vNext
12 ISO-11179 Naming Conventions
A very old post linking to the ISO-11179 Naming Conventions document
13 Query Analyzer Trick
A very old post explaining how you can drag the columns into the query window from object explorer. I think by now everyone knows this one right?
14 What was the first computer you bought, what were some of your favorite games?
Mine was the Commodore 128, what about you?
15 Chaos Isolation Level In SQL Server
This is linked from dba.stackexchange.com, it is kind of interesting because it was a silly post I made
That is all... see you in 2017...
Out of focus Christmas tree
Saturday, November 19, 2016
What we learned from the reddit SQL Server AMA thread on /r/sqlserver
The SQL Server AMA thread on reddit has finished. I have put the 8 most interesting answers here and then give my take on them; the questions are linked straight to the question on reddit. Each answer is in italics and my take on it is below in regular font
What was the reasoning for having SQL Server for Linux run atop Drawbridge, rather than directly interfacing with Linux? It seams like it's not a true Linux port if NT Kernel emulation is required
I guess you can always debate whether it is better to have an abstraction layer or do a full port. I am a practical person, and like to judge based on results. I myself am not on the Linux team, but am really impressed with what the team has managed to deliver thus far, both in terms of feature set and in terms of performance. A full port would have taken much longer and would have led to a code base that is much harder to maintain.
I never heard of Drawbridge either. From the Microsoft Drawbridge research page: Drawbridge combines two core technologies: First, a picoprocess, which is a process-based isolation container with a minimal kernel API surface. Second, a library OS, which is a version of Windows enlightened to run efficiently within a picoprocess.
I really like the JSON support you guys added. Is JSON in SQL Server still evolving, or are you guys happy with where it is at? I would love to see a JSON data type, like XML support. Thanks!
It makes sense that they decided to use nvarchar since you can then use it with in-memory OLTP and Columnstore. We currently store json in some of our tables but we never search on it or manipulate it in the DB
Haha I knew this would be asked, great answer also. We all remember the Ballmer Linux is a cancer quote, how things have changed
I am so sorry, but I just had to add this image :-)
That said, we are working on a new set of multi-platform SQL client tools that DBAs and database developers can use on Linux/macOS/Windows. The new 'mssql' extension for Visual Studio Code is our first attempt in this area and we are using it to prove out the multi-platform SQL tools stack "backend" we've created.
We don't have specific dates yet but stay tuned for more details in the coming months. Meanwhile, you can try the mssql extension for VS Code on Linux/macOS/Windows from here: https://aka.ms/mssql-marketplace
We plan to have a cross-platform tool that will be Linux native. We don't have an exact date set but it's in the works for 2017. In the meantime, you can connect to SQL Server on Linux through the Visual Studio Code mssql extension as well as SSMS.
So it looks like there will be a cross-platform tool, maybe something like Visual Studio Code, make it web based. As long as it is not based on Java and has that nasty purple shiny look and feel. But in reality, all the people I worked with that used *nix always used windows and then would use PuTTY or something similar to SSH into the box and just do everything from the command line.
Yes, it will! Cost = free.
Is there any plan to make SQL Server work on Windows Server Nano?
This is something that we are looking into. It would require creating a new appx-style package for deployment. That in itself would be a good thing actually that could make it easier to install SQL Server (like the package based install for SQL Server on Linux) but it would be a very large development effort. Spending dev effort on that would take away from other things we would like to do. It would also mean that we end up having two different methods of patching SQL Server (.msp and appx) which could further complicate servicing. On the other hand, WS Nano is really well suited for SQL Server due to it's higher availability (less patching), smaller attack surface area, fewer services running, etc. Those are the trade offs we are thinking through, but no concrete plans one way or another right now. Let us know what you think we should do!
This is nice, also it seems that when you license SQL Server in the future, the OS doesn't matter, if you get a 4 core license, you can install SQL Server on Linux or Windows, your choice
Installing SQL Server on Linux is really nice, here is what it looks like on redhat
sudo su
curl https://packages.microsoft.com/config/rhel/7/mssql-server.repo > /etc/yum.repos.d/mssql-server.repo
sudo yum install -y mssql-server
sudo /opt/mssql/bin/sqlservr-setup #follow the prompts
systemctl status mssql-server #verify service is running
It also makes sense to run it on a server that is bare metal, no need for IE or printer service running.
You can find R Services samples in the following links:
https://gallery.cortanaintelligence.com/Collection/ML-Templates-with-SQL-Server-R-Services-1
https://msdn.microsoft.com/en-US/library/mt591993.aspx
Additionally, we have a new website focused around easy to use "Getting Started" tutorials. There is one R sample there:
https://www.microsoft.com/en-us/sql-server/developer-get-started/r
let me add a couple more R-related articles :)
https://github.com/Microsoft/sql-server-samples/tree/master/samples/features/r-services/Implementing%20Predictive%20Analytics
and a simple blogpost I wrote some time ago just in case it helps with the 101 https://medium.com/@davidsb/datascience-for-developers-build-your-first-predictive-model-with-r-a798f684752f
Here are a few that I like:
- Improvements in columnstore indexes and batch processing (link)
- Automatic soft NUMA for large CPU system deployments (link)
- Hardware acceleration for encryption features like TDE by leveraging Intel's AES-NI instruction set (link)
We evaluate hardware advancements periodically for SQL Server. The new Microsoft ML library that is available with SQL Server vNext uses GPU for the neural net algorithms so you may see such capabilities in other areas of the product.
This is definitely an area we are looking at. But we don't have any concrete at the moment.
I remember when postgres did this, the performance improvement was impressive, you can read about that here PG-STROM
That is all, if you want to see all the questions and answers go here: SQL Server AMA thread on reddit
Thursday, November 17, 2016
What's new in SQL Server 2016: CREATE OR ALTER
SQL Server 2016 Service Pack 1 added CREATE OR REPLACE functionality; however, in SQL Server it is called CREATE OR ALTER. Finally it is here: this has been asked for since the 90s, back when it was on the SQL Server wishlist (who remembers that, before they had Connect? :-) )
CREATE OR ALTER can be used with the following four object types
STORED PROCEDURES
FUNCTIONS
TRIGGERS
VIEWS
Let's take a look how this all works
If you want to run this code, you need to be on SQL Server 2016 Service pack 1 or higher or on vNext, the @@VERSION function on SP1 and vNext returns the following
Microsoft SQL Server 2016 (SP1) (KB3182545) - 13.0.4001.0 (X64) Oct 28 2016 18:17:30
Microsoft SQL Server vNext (CTP1) - 14.0.1.246 (X64) Nov 1 2016 23:24:39
Before they added CREATE OR ALTER, you had a couple of options to create rerunnable scripts. The first way was dropping the proc if it existed and having the CREATE PROC in the same script; it looked like this
--old way of dropping a proc then having a create script
IF OBJECT_ID('procTest') IS NOT NULL
    DROP PROCEDURE procTest
GO

CREATE PROCEDURE procTest
AS
BEGIN
    PRINT (1)
END;
GO
Another way would be to create a dummy proc if the proc didn't exist already and then just have the ALTER PROC part in your script
--another way is to have a dummy proc created,
--that way your alter part is always the same
IF OBJECT_ID('dbo.procTest') IS NULL
    EXEC ('CREATE PROCEDURE dbo.procTest AS RETURN 0;')
GO

ALTER PROCEDURE procTest
AS
BEGIN
    PRINT (1)
END;
GO
In SQL Server 2016 with Service Pack 1, this becomes so much easier. Here is what it looks like
-- the easier way in sql server 2016 SP1 and up
CREATE OR ALTER PROCEDURE procTest
AS
BEGIN
    PRINT (1)
END;
GO
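One nice side effect compared to drop-and-create: because the object is altered rather than recreated, existing permissions on it survive. A quick sketch to illustrate, assuming a database user named SomeUser exists (hypothetical name):

```sql
-- permissions granted on the proc survive a CREATE OR ALTER...
GRANT EXECUTE ON dbo.procTest TO SomeUser;

CREATE OR ALTER PROCEDURE dbo.procTest
AS
BEGIN
    PRINT (2)
END;
GO
-- ...whereas DROP PROCEDURE followed by CREATE PROCEDURE would lose the grant
```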
Here is how that looks with a function, pretty much the same as a procedure
-- works the same with functions
CREATE OR ALTER FUNCTION fnTest()
RETURNS INT
AS
BEGIN
    RETURN(1)
END;
GO
The view looks like this
-- also works with views
CREATE OR ALTER VIEW vwTest
AS
SELECT 1 AS col;
GO
With triggers it is pretty much the same, here is the code that you can run, I first created a table otherwise I would not be able to create a trigger
-- first we need a table for the trigger
IF NOT EXISTS(SELECT * FROM sys.tables WHERE name = 'BooFar')
    CREATE TABLE BooFar(id int)
GO

-- works with triggers
CREATE OR ALTER TRIGGER trTest ON BooFar
AFTER INSERT, UPDATE
AS
    RAISERROR ('Hello from trigger', 1, 10);
GO

-- just a test to make sure the trigger works
INSERT BooFar VALUES(1)

-- you should see this in the message tab
/*
Hello from trigger
Msg 50000, Level 1, State 10

(1 row(s) affected)
*/
The CREATE OR ALTER syntax does NOT work with DDL triggers. If you execute this the first time, it will run fine.
CREATE OR ALTER TRIGGER safety
ON DATABASE
FOR DROP_TABLE, ALTER_TABLE
AS
    PRINT 'You must disable Trigger "safety" to drop or alter tables!'
    ROLLBACK;
Now run it again, you will get this error
Msg 2714, Level 16, State 2, Procedure safety, Line 1 [Batch Start Line 77]
There is already an object named 'safety' in the database.
I submitted a connect item for this: https://connect.microsoft.com/SQLServer/feedback/details/3111866
I want them to either disable CREATE OR ALTER for DDL triggers or they should fix it so it doesn't error on a second run
To wrap this up, let's drop all the objects we just created; we will do that by using the DROP IF EXISTS syntax
-- drop everything by using
-- DROP object IF EXISTS
-- the DDL trigger has to go first, otherwise it rolls back the DROP TABLE
DROP TRIGGER IF EXISTS safety ON DATABASE
DROP TABLE IF EXISTS BooFar
DROP PROCEDURE IF EXISTS procTest
DROP FUNCTION IF EXISTS fnTest
DROP VIEW IF EXISTS vwTest
You can find the github repo for the series here whatisnewinsqlserver2016, all the code for this post can be found here CREATE Or ALTER
Wednesday, November 16, 2016
Playing around with the new stuff in SQL Server vNext and SQL Server 2016 SP1
Earlier today Microsoft released the first CTP of the next version of SQL Server, I have a post about that here SQL Server 2016 SP1 released, SQL Server vNext available for download
I downloaded and installed SQL Server on a VM (But only after the VM decided to install Windows 10 anniversary update first). I played around with some of the new things, let's take a look
Version
If you select @@VERSION, you get the following back
Microsoft SQL Server vNext (CTP1) - 14.0.1.246 (X64) Nov 1 2016 23:24:39 Copyright (c) Microsoft Corporation Enterprise Evaluation Edition (64-bit) on Windows 10 Pro 6.3 <X64> (Build 14393: ) (Hypervisor)
The compatibility for this version of SQL Server is 140
STRING_AGG
This is a brand new string function in SQL Server. If you have used MySQL, then STRING_AGG is similar to the GROUP_CONCAT function; however, you can't use DISTINCT like you can in MySQL. The function STRING_AGG concatenates the values of string expressions and places separator values between them. The separator is not added at the end of the string. This function is available beginning with SQL Server vNext (it is not in SQL Server 2016, even with SP1)
Here is a sample query
SELECT STRING_AGG (name, ',') AS csv FROM master..spt_values WHERE type = 'A';
Here is the output
rpc,pub,sub,dist,dpub,rpc out,data access,collation compatible,system,use remote collation,lazy schema validation,remote proc transaction promotion
Nothing special of course, but at least you don't have to write your own version and think about removing the last comma
A better example would be if you wanted to get all the columns for a table next to the table name
You would think the query would be like this
SELECT t.name, STRING_AGG(c.name, ',') AS csv
FROM sys.tables t
JOIN sys.columns c ON t.object_id = c.object_id
ORDER BY 1
However that gives you the following error
Msg 8120, Level 16, State 1, Line 41
Column 'sys.tables.name' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
You need to add a GROUP BY to the query
SELECT t.name, STRING_AGG(c.name, ',') AS csv
FROM sys.tables t
JOIN sys.columns c ON t.object_id = c.object_id
GROUP BY t.name
ORDER BY 1
Here is what you get back
That looks much better, I can see myself using this a lot in the future
In memory OLTP/Columnstore
According to the documentation sp_spaceused is now supported for in-memory tables.
I tried it out and with SQL Server 2016 I get back 0 rows and 0 KB, with vNext I get back rows but NULL for everything else
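For reference, the call is just the regular procedure pointed at a memory-optimized table (ShoppingCart is the table used in the output below):

```sql
-- sp_spaceused against a memory-optimized table
EXEC sp_spaceused @objname = N'ShoppingCart';
```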
name          rows   reserved  data  index_size  unused
ShoppingCart  0      0 KB      0 KB  0 KB        0 KB    -- 2016
ShoppingCart  53248  NULL      NULL  NULL        NULL    -- vNext
Not sure what is going on there.
You can now add a columnstore index to a table that has a varchar(max) data type in vNext
CREATE TABLE t_bulkload (
    accountkey int NOT NULL,
    accountdescription varchar(max),
    accounttype char(500),
    AccountCodeAlternatekey int)
GO

CREATE CLUSTERED COLUMNSTORE INDEX t_bulkload_cci ON t_bulkload
Running that on SQL Server 2016 gives the following error
Msg 35343, Level 16, State 1, Line 39
The statement failed. Column 'accountdescription' has a data type that cannot participate in a columnstore index. Omit column 'accountdescription'.
Running that same code on SQL Server vNext 2016 gives the following warning
Warning: Using Lob types (NVARCHAR(MAX), VARCHAR(MAX), and VARBINARY(MAX)) with Clustered Columnstore Index is in public preview. Do not use it on production data without backup during public preview period.
Here are some more new things in regard to in memory tables
- sp_spaceused is now supported for in-memory tables.
- sp_rename is now supported for native modules.
- CASE statements are now supported for native modules.
- The limitation of 8 indexes on in-memory tables has been removed.
- TOP (N) WITH TIES is now supported in native modules.
- ALTER TABLE against in-memory tables is now substantially faster in some cases.
- Transaction redo for in-memory tables is now done in parallel. This substantially reduces the time to do failovers or in some cases restarts.
- In-memory checkpoint files can now be stored on Azure Storage. This provides equivalent capabilities to MDF compared to LDF files, which already have this capability.
sys.dm_os_host_info
This is a new OS related system DMV; it returns one row that displays operating system version information.

SELECT * FROM sys.dm_os_host_info;
Here is the output
host_platform | host_distribution | host_release | host_service_pack_level | host_sku | os_language_version
---|---|---|---|---|---
Windows | Windows 10 Pro | 6.3 | | 48 | 1033
Here is a sample result set on Linux:
host_platform | host_distribution | host_release | host_service_pack_level | host_sku | os_language_version
---|---|---|---|---|---
Linux | Ubuntu | 16.04 | NULL | NULL | 1033
Here is what it looks like in SSMS
That is all for now..... you can find all the new stuff on MSDN https://msdn.microsoft.com/en-US/library/mt788653(SQL.130).aspx
SQL Server 2016 SP1 released, SQL Server vNext available for download
Today Microsoft announced the CTP of the next version of SQL Server, you can download it here https://www.microsoft.com/evalcenter/evaluate-sql-server-vnext-ctp
As you can see I am already downloading this version
What's New in SQL Server vNext https://msdn.microsoft.com/en-US/library/mt788653(SQL.130).aspx
Install SQL Server on Linux https://docs.microsoft.com/en-us/sql/linux/sql-server-linux-setup
SQL Server on Linux Documentation https://docs.microsoft.com/en-us/sql/linux/
SQL Server Service Pack 1
Also announced was Service Pack 1 of SQL Server 2016, you can download that here https://go.microsoft.com/fwlink/?linkid=835368
There are so many cool things in this service pack. For one, all the editions now support all the programmability features like indexed views, columnstore indexes, partitioning etc etc, see image below
You are still bound by the memory and CPU limits but at least your code doesn't have to change, this is good news for ISVs.
A couple of more tidbits....
- CREATE OR ALTER (Yes, we heard you !!!) – New CREATE OR ALTER support makes it easier to modify and deploy objects like Stored Procedures, Triggers, User–Defined Functions, and Views. This was one of the highly requested features by developers and SQL Community.
- New DMF sys.dm_exec_query_statistics_xml – Use this DMF to obtain actual query execution showplan XML (with actual number of rows) for a query which is still being executed in a given session (session id as input parameter). The showplan with a snapshot of current execution statistics is returned when profiling infrastructure (legacy or lightweight) is on.
- New DMF for incremental statistics – New DMF sys.dm_db_incremental_stats_properties to expose information per–partition for incremental stats.
- Better correlation between diagnostics XE and DMVs – Query_hash and query_plan_hash are used for identifying a query uniquely. DMV defines them as varbinary(8), while XEvent defines them as UINT64. Since SQL server does not have “unsigned bigint”, casting does not always work. This improvement introduces new XEvent action/filter columns equivalent to query_hash and query_plan_hash except they are defined as INT64 which can help correlating queries between XE and DMVs.
There is much more, see all the news here: https://blogs.msdn.microsoft.com/sqlreleaseservices/sql-server-2016-service-pack-1-sp1-released/
I also played around with some of the new stuff after installing vNext, see here:
Playing around with the new stuff in SQL Server vNext and SQL Server 2016 SP1
Saturday, November 12, 2016
What's new in SQL Server 2016: Truncate partitions
In my The best thing about the PASS summit for me is... post I said that I would do the following
I will also do a series about new stuff in SQL Server 2016, but that will be in between all the other posts. Whenever someone asked during a session how many people were already on SQL Server 2016, very few hands went up; this is the reason I want to blog about this as well. This is the first post in that series
For this series of posts, I plan to have a repo on github with a yet to be determined name. I will have all the SQL code for the series organized there
You can find the github repo for the series here whatisnewinsqlserver2016, all the code for this post can be found here TruncatePartitions
New in SQL Server 2016 is the ability to truncate a single partition or more than one partition; this is very nice because we all know that truncate is much faster than a delete. Let's see how this all works
First create this simple table
CREATE TABLE SalesPartitioned(
    YearCol SMALLINT NOT NULL,
    OrderID INT NOT NULL,
    SomeData UNIQUEIDENTIFIER DEFAULT newsequentialid())
GO
Now it is time to insert some data, we are going to create data for 6 years, we will use those years then to partition the data on. This query will insert 6 times 2048 rows, 2048 rows per year
INSERT SalesPartitioned (YearCol, OrderID)
SELECT 2013, number FROM master..spt_values WHERE type = 'P'
UNION ALL
SELECT 2014, number + 2048 FROM master..spt_values WHERE type = 'P'
UNION ALL
SELECT 2015, number + 4096 FROM master..spt_values WHERE type = 'P'
UNION ALL
SELECT 2016, number + 6144 FROM master..spt_values WHERE type = 'P'
UNION ALL
SELECT 2017, number + 8192 FROM master..spt_values WHERE type = 'P'
UNION ALL
SELECT 2018, number + 10240 FROM master..spt_values WHERE type = 'P'
Now let's create the partition function
CREATE PARTITION FUNCTION pfFiscalYear(SMALLINT)
AS RANGE LEFT FOR VALUES(2013, 2014, 2015, 2016, 2017)
GO
Create the partition scheme
CREATE PARTITION SCHEME psFiscalYear
AS PARTITION pfFiscalYear ALL TO ([PRIMARY])
GO
Add a primary key to the table, add it to the partition scheme we created above
ALTER TABLE dbo.SalesPartitioned
ADD CONSTRAINT PK_Sales PRIMARY KEY CLUSTERED (YearCol, OrderID)
ON psFiscalYear(YearCol)
GO
Now let's see what we have as far as counts for each partition
SELECT partition_number,rows FROM sys.partitions WHERE object_id = OBJECT_ID('SalesPartitioned')
Here are the results
partition_number  rows
1                 2048
2                 2048
3                 2048
4                 2048
5                 2048
6                 2048
As you can see we have 6 partitions, each partition has 2048 rows
We can verify this by running a count ourselves
SELECT YearCol,
       $PARTITION.pfFiscalYear(YearCol) AS Partition,
       COUNT(*) AS [COUNT]
FROM SalesPartitioned
GROUP BY $PARTITION.pfFiscalYear(YearCol), YearCol
ORDER BY Partition;
GO
YearCol  Partition  COUNT
2013     1          2048
2014     2          2048
2015     3          2048
2016     4          2048
2017     5          2048
2018     6          2048
Now it is time to truncate a partition.
In order to truncate a partition, you use the following syntax
TRUNCATE TABLE ........
WITH ( PARTITIONS ( { <partition_number_expression> | <range> } [ , ...n ] ) )
So if we just want to truncate partition 2, we would execute the following
TRUNCATE TABLE SalesPartitioned WITH (PARTITIONS (2));
GO
Checking those same count queries from before shows that partition 2 has no rows after we executed the truncate command
SELECT partition_number,rows FROM sys.partitions WHERE object_id = OBJECT_ID('SalesPartitioned')
partition_number  rows
1                 2048
2                 0
3                 2048
4                 2048
5                 2048
6                 2048
SELECT YearCol,
       $PARTITION.pfFiscalYear(YearCol) AS Partition,
       COUNT(*) AS [COUNT]
FROM SalesPartitioned
GROUP BY $PARTITION.pfFiscalYear(YearCol), YearCol
ORDER BY Partition;
GO
YearCol  Partition  COUNT
2013     1          2048
2015     3          2048
2016     4          2048
2017     5          2048
2018     6          2048
There are two ways you can truncate a bunch of partitions; for example, if you want to remove partitions 4, 5 and 6, you can use WITH (PARTITIONS (4, 5, 6)) or you can use a range like shown in the code below
TRUNCATE TABLE SalesPartitioned WITH (PARTITIONS (4 TO 6));
GO
After we execute that and we check the counts again, we see that we are now only left with partitions 1 and 3
SELECT partition_number,rows FROM sys.partitions WHERE object_id = OBJECT_ID('SalesPartitioned')
Here are the results from that query
partition_number  rows
1                 2048
2                 0
3                 2048
4                 0
5                 0
6                 0
SELECT YearCol,
       $PARTITION.pfFiscalYear(YearCol) AS Partition,
       COUNT(*) AS [COUNT]
FROM SalesPartitioned
GROUP BY $PARTITION.pfFiscalYear(YearCol), YearCol
ORDER BY Partition;
GO
Here are the results
YearCol  Partition  COUNT
2013     1          2048
2015     3          2048
What happens if you specify an invalid partition in your truncate statement?
TRUNCATE TABLE SalesPartitioned WITH (PARTITIONS (12));
You get the following message
Msg 7722, Level 16, State 2, Line 1
Invalid partition number 12 specified for table 'SalesPartitioned', partition number can range from 1 to 6.
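One more variation worth knowing: the list form and the range form can be mixed in a single statement. A quick sketch (it assumes the partitions still hold data):

```sql
-- numbers and ranges can be combined in the PARTITIONS clause
TRUNCATE TABLE SalesPartitioned WITH (PARTITIONS (1, 3 TO 5));
GO
```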
There you have it, this is how truncating partitions works
You can find the github repo for the series here whatisnewinsqlserver2016, all the code for this post can be found here TruncatePartitions