Saturday, May 13, 2017
All the Build 2017 data related sessions in 1 spot, SQL Server, CosmosDB, Azure SQL DB and more
Are you bored this weekend? No problem, I have got you covered: I have put together a page with all the data-related sessions from the Build 2017 conference. I have embedded the videos and also added links to each session's page on the Channel 9 site, as well as links to the presentations and source code where available. I have arranged the sessions in 4 categories:
SQL Server
Azure SQL DB
CosmosDB
The rest of the sessions
This way the sessions are not all intermingled
SQL Server
Securing your app data with Microsoft SQL Server 2017
by Tommy Mullaney
Join us for an overview of strategy and best practices to build secure applications using SQL Server. SQL Server offers a versatile toolset of security capabilities, which enable you to protect sensitive data and meet stringent compliance requirements. You’ll leave with an understanding of how these capabilities fit together to protect your application as a whole, including the new security enhancements available in the latest version of SQL Server.
Link to Build site: https://channel9.msdn.com/Events/Build/2017/P4018
Evolving SQL workloads from software to SaaS: Data for devs immersion
by Tara Shankar Jana
In this modern era, applications must recover from user errors efficiently, must be optimized for transactional performance, provide real-time insights into app transactions, must be secure and compliant, and support seamless integration to cloud, hybrid scenarios and other data types/systems. These applications must also learn and adapt to deliver more powerful experiences using cloud-based data services, applications, and intelligence. In this session we use a scenario-based hands-on lab program (immersion), to offer you a cohesive and consistent experience to build modern on-premises/cloud based applications with intelligence. Build hack-free, high performance, scalable applications using Microsoft SQL Server, Microsoft ML and Azure data services.
Link to Build site: https://channel9.msdn.com/Events/Build/2017/P4013
Advanced SQL Server on Linux (Docker) via Entity Framework (EF)
by Tara Shankar Jana
It’s SQL Server as you already know it today, but it’s now native to Linux. It works with your data, your favorite tools, application frameworks (Entity Framework) and programming languages. In this session, we pull the code for an application running in the cloud to a Linux-based machine. We test and enhance the application for performance (using EF, in-memory) and security against SQL vNext (Linux). Then we re-deploy the changes using DevOps to Azure with just a flip of a switch, and show how the application works across platforms, on-premises, and in the cloud. Best part: all of it will be done in ten minutes.
Link to Build site: https://channel9.msdn.com/Events/Build/2017/P4001
SQL Unplugged
by Scott Klein, Rohan Kumar
In this Channel 9 Live session we'll discuss all things SQL
Link to Build site: https://channel9.msdn.com/Events/Build/2017/C9L20
Serving AI with data: How to use R, Python, and machine learning with Microsoft SQL Server 2017
by Umachandar Jayachandran, Nellie Gustafsson
In this session, learn how to use R and Python to perform statistical and machine learning tasks in SQL Server 2017. Also, learn how to operationalize your R or Python scripts using the SQL Server integration.
Link to Build site: https://channel9.msdn.com/Events/Build/2017/B8076
download: Slides | View Slides Online
Microsoft SQL Server 2017 in the modern datacenter: how to run on Linux, Docker, OpenShift, and Kubernetes
by Travis Wright, Tobias Ternstrom
With the on-premises data center evolving into a private cloud, the fast pace of innovation in public clouds, and the strong interest in container technology it begs the question: How do you run a database in this new environment? In this session we examine what it means that SQL Server is evolving to support both Windows and Linux, including running inside a container on both platforms. Should you run SQL Server inside a Docker container? How does it play with orchestration technologies like OpenShift and Kubernetes? The brief answer is that we believe SQL Server should be ubiquitous and run in our customers' environment of choice. Come to the session to hear the long answer with a bunch of demos!
Link to Build site: https://channel9.msdn.com/Events/Build/2017/B8080
How to do Predictive Modelling using R and SQL Server ML Services
by Umachandar Jayachandran, Nellie Gustafsson
Learn how to use R scripts from T-SQL to perform training and scoring and leverage parallelism and streaming capabilities to get better performance.
Link to Build site: https://channel9.msdn.com/Events/Build/2017/T6070
Built-in machine learning in Microsoft SQL Server 2017 with Python
by Sumit Kumar
Machine learning services in SQL Server 2017 now provide Python support for in-database machine learning. In this session we show the basics of how to run Python code in SQL Server. We then discuss how any app that can talk to SQL Server can get intelligence from machine learning models running in SQL Server. We showcase an app that uses a Python-based deep learning model built and deployed in SQL Server. The model leverages an open source deep learning framework running with SQL Server and utilizes GPU for boosting model training performance.
Link to Build site: https://channel9.msdn.com/Events/Build/2017/T6067
Modernize your database development lifecycle with SQL Server Data Tools in Visual Studio
by Tara Raj
Learn how SQL Server Data Tools (SSDT) turns Visual Studio into a powerful environment for database development. Easily build, debug, maintain, and refactor databases inside Visual Studio with a declarative model that spans all the phases of database development and easily enables continuous integration and deployment for your databases. Work offline with a database project, or work directly with a connected database instance in Azure SQL Database, Azure SQL Data Warehouse, and SQL Server running on Windows, Linux, or Docker, on-premises or in any cloud.
Link to Build site: https://channel9.msdn.com/Events/Build/2017/P4009
Azure SQL DB
Get to the cloud faster with Azure SQLDB Managed Instance and Database Migration Service
by Lindsey Allen, Harini Gupta
A new, expanded Azure SQL Database offers a fully managed server instance with greater compatibility with SQL Server application features, and the ability to move hundreds of databases at once using the Database Migration Service. It provides security isolation with Azure Virtual Network, along with built-in HADR, built-in intelligent performance tuning, and intelligent security services that are already available in Azure SQL Database. To reduce the friction of transitioning your relational database estate to public cloud, expanded Azure SQL Database provides SQL Server application compatibility including commonly used cross-database references using three-part names, CLR, SQL Agent, Transactional Replication, Change Data Capture, Service Broker, and a lot more. We showcase a five-step seamless migration experience from your on-premises SQL Server instances to expanded Azure SQL Database using the newly announced Azure Database Migration Service.
Link to Build site: https://channel9.msdn.com/Events/Build/2017/P4008
Migrating Oracle database to the Azure SQL DB with Database Migration Service
by Shamik Ghosh, Alexander Ivanov
In this virtual session, you learn how to assess, remediate and migrate the schema, data, and server artifacts from on-premises Oracle instances to Azure SQL DB. We walk you through an end-to-end experience using Azure data migration tools and services to help solve your database migration needs.
Link to Build site: https://channel9.msdn.com/Events/Build/2017/P4182
download: Slides | View Slides Online
How to build global-scale applications with Microsoft Azure SQL Database
by Rohan Kumar, Bob Strudwick
Join us in this session to learn how to build a global-scale IoT application by walking through a sample application and real-world customer case study. Gain insight on building an IoT application on a fully managed Azure database as a service, with built-in intelligence for performance, cost and security, dynamic scaling and hassle-free HADR, and infrastructure maintenance. We show you how to use Azure SQL Database enterprise grade in-memory engines to handle high volume and high concurrent transactions; while running real-time analytics queries on the same database. New features are released in Azure SQL Database in fast cadence; we show you the new Graph processing in the context of an IoT scenario, Adaptive Query Processing, predictive analytics, and the first in market built-in homomorphic data encryption feature for securing data at rest and in motion. It will be a fun learning hour.
Link to Build site: https://channel9.msdn.com/Events/Build/2017/B8018
Design Patterns for SaaS applications on Azure SQL Database
by Julie Strauss, Bill Gibson
Experience the power of building multi-tenant SaaS applications on Azure SQL Database, Microsoft’s fully managed database as a service platform: Using a sample SaaS application, we walk through a series of SaaS-focused design and management patterns that have been distilled from work with a multitude of customers. Patterns span from multi-tenant provisioning, schema management, performance monitoring and management to operational analytics. The code for the sample application, plus management scripts, ARM templates and tutorials, will be available for download in an easy-to-explore “SaaS-in-a-Box” package, enabling you to jump-start your own SaaS application.
Link to Build site: https://channel9.msdn.com/Events/Build/2017/T6025
CosmosDB
CosmosDB
by Seth Juarez, Rimma Nehme
In this session we meet with Rimma Nehme and Seth Juarez to discuss some of the latest Build 2017 announcements around CosmosDB
Link to Build site: https://channel9.msdn.com/Events/Build/2017/C9L08
Azure Cosmos DB: Build planet scale mobile apps in minutes
by Kirill Gavrylyuk
This session has no description, but the video is below
Link to Build site: https://channel9.msdn.com/Events/Build/2017/P4012
Azure Cosmos DB: API for MongoDB
by Andrew Hoh
Azure Cosmos DB (formerly known as Azure DocumentDB) natively supports multiple APIs; one of which is the API for MongoDB. Use existing code, applications, drivers, and tools to work with Azure Cosmos DB. Benefit from the fully managed and scalable Azure database, while continuing to use familiar skills and tools for MongoDB. Come and watch this video to learn about the feature and how to migrate to this cosmic-scale database, Azure Cosmos DB.
Link to Build site: https://channel9.msdn.com/Events/Build/2017/P4011
Azure Cosmos DB: NoSQL capabilities everyone should know about
by Aravind Ramachandran
Microsoft Azure provides a fully managed NoSQL database service built for fast and predictable performance, high availability, elastic scaling, global distribution, and ease of development. As a schema-free NoSQL database, the service provides rich and familiar SQL query capabilities with consistent low latencies on JSON data - ensuring that 99% of your reads are served under 10 milliseconds and 99% of your writes are served under 15 milliseconds. These unique benefits make it a great fit for web, mobile, gaming, IoT, AI, and many other applications that need seamless scale and global replication. Come and learn about the NoSQL capabilities in Azure Cosmos DB that every developer should know about.
Link to Build site: https://channel9.msdn.com/Events/Build/2017/T6058
A lap around Azure HDInsight and Cosmos DB Open Source Analytics + NoSQL
by Andrew Liu, Raghav Mohan
Recently, we released the Spark Connector for our distributed NoSQL service – Azure Cosmos DB (formerly known as Azure DocumentDB). By connecting Apache Spark running on top of Azure HDInsight to Azure Cosmos DB, you can accelerate your ability to solve fast-moving data science and machine learning problems. The Spark to Azure Cosmos DB connector efficiently exploits the native Cosmos DB managed indexes and enables updateable columns when performing analytics, push-down predicate filtering against fast-changing globally-distributed data, ranging from IoT, data science, and analytics scenarios. Come learn how you can perform blazing fast planet-scale data processing with Azure Cosmos DB and HDInsight.
Link to Build site: https://channel9.msdn.com/Events/Build/2017/P4010
Data lakes, U-SQL, Azure DBMS and the rest
How to get started with the new Azure DBMS for PostgreSQL.
by Eric Spear, Irakliy Khaburzaniya, Sunil Kamath
Join this session to learn about Microsoft’s latest announcement of managed PostgreSQL database on Azure. In this breakout, we will learn from two early adopting customers, how they've leveraged this latest database service to innovate faster. We will learn from their experience of using the managed PostgreSQL service, including migrating to the service, and discuss next steps in their application journey. We will walk through some of the key service features and discuss how you as a developer can migrate your existing applications or develop new applications that use this managed PostgreSQL in Azure. If you’re a developer with applications that use PostgreSQL today, whether on-premises or cloud, and want to learn about how this new managed service can help, this session is for you!
Link to Build site: https://channel9.msdn.com/Events/Build/2017/B8046
download: Slides | View Slides Online
How to get started with the new Azure DBMS for MySQL.
by Gebi Liang, Jason M. Anderson, Matt Williams
Join this session to learn about Microsoft’s managed MySQL offering in Azure. We’ll walk through Microsoft’s strategy for supporting Open-Source database systems in Azure and what it means to you as a developer as you look to develop or deploy applications that use MySQL in Azure. An overview of the architecture of the service along with how Azure Database for MySQL is integrated with other Azure Services such as Web Apps will also be discussed and demo’d. If you’re a developer with applications using MySQL today, whether on-prem or already in Azure today, this session is for you!
Link to Build site: https://channel9.msdn.com/Events/Build/2017/B8045
download: Slides | View Slides Online | Source Code
How to serve AI with data: The future of the Microsoft Data Platform
by Joseph Sirosh, Alex Miller
The cloud is truly becoming the “brain” for our connected planet. You’re not just running algorithms in the cloud, rather you’re connecting that with data from sensors from around the world. By bringing data into the cloud, you can integrate all of the information, apply machine learning and AI on top of that, and deliver apps that are continuously learning and evolving in the cloud. Consequently, the devices and applications connected to cloud are learning and becoming increasingly intelligent. Please join us on a journey through the core new patterns that are emerging as we bring advanced intelligence, the cloud, IoT, and big data together.
Link to Build site: https://channel9.msdn.com/Events/Build/2017/B8081
How to run AI at Petabyte Scale with cognitive functions in the Azure Data Lake
by Wee Hyong Tok
In this session, learn how you can use Azure Data Lake (ADL) for doing Big Cognition on raw customer support data. Learn how you can use ADL to perform key phrase extraction, sentiment analysis, and how you can use R/Python scripts for support volume forecasting. In addition, learn how you can use federated queries in ADL with Azure SQL Database. Discover how you can pull all these insights into an Azure SQL Data Warehouse, and using Azure Analysis Services to enable interactive analysis of the processed data. Join us for this exciting session as we show how you can develop intelligent applications by using the insights derived from processing massive Petabyte-scale datasets.
Link to Build site: https://channel9.msdn.com/Events/Build/2017/B8065
download: Slides | View Slides Online | Source Code
Image processing at scale using U-SQL in Azure Data Lake
by Saveen Reddy
Making use of cognitive capabilities such as Image OCR or Sentiment Analysis of text is straightforward with small datasets of a few terabytes. But, at the scale of hundreds of terabytes or even a petabyte, you need a different approach that can massively scale out *AND* be simple to build. Azure Data Lake offers a straightforward way of programming using .NET code against these massive Petabyte-scale datasets without the need to become a deep expert in distributed computing, big data technologies, or machine learning.
Link to Build site: https://channel9.msdn.com/Events/Build/2017/T6040
Lift and shift any runtime into U-SQL for large scale processing
by Michael Rys
One of U-SQL’s major strengths is to scale out user code over your large data lake assets. This presentation first shows how we have used the U-SQL extensibility model to give you access to Python and R and then shows how to run Scala from within U-SQL.
Link to Build site: https://channel9.msdn.com/Events/Build/2017/P4004
There you have it, a bunch of data-related sessions for your viewing pleasure
Wednesday, May 10, 2017
Not sure that I like the message from Online Resumable Index Rebuild in SQL Server 2017
I was messing around with Online Resumable Index Rebuild, which is new in CTP 2 of SQL Server 2017. I don't know that I like the output from a resumable index rebuild
Let's take a look at what I did
First I created this table
CREATE TABLE dbo.TestIndexRebuild(
    name nvarchar(35) NULL,
    number int NOT NULL,
    type nchar(3) NOT NULL,
    low int NULL,
    high int NULL,
    status int NULL,
    somebigchar char(2000)
) ON [PRIMARY]

CREATE CLUSTERED INDEX CI_TestIndexRebuild ON TestIndexRebuild(name, number)
I made the table wide by adding a 2000 character column, I then added a clustered index to the table
I then pumped in a bunch of data
INSERT INTO TestIndexRebuild
SELECT *, REPLICATE('A', 2000)
FROM master..spt_values
GO 500
I then executed the following
ALTER INDEX CI_TestIndexRebuild on TestIndexRebuild REBUILD WITH (ONLINE=ON, RESUMABLE=ON, MAX_DURATION=1)
Here is the output I got after 1 minute
Msg 3643, Level 16, State 1, Line 19
The operation elapsed time exceeded the maximum time specified for this operation. The execution has been stopped.
Msg 596, Level 21, State 1, Line 18
Cannot continue the execution because the session is in the kill state.
Msg 0, Level 20, State 0, Line 18
A severe error occurred on the current command. The results, if any, should be discarded.
I don't really like that; a simple message saying the index rebuild has been paused because the elapsed time exceeded the maximum time, displayed in black instead of red, would have been fine with me. I don't need to see that the session is in a kill state or that a severe error occurred
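If you would rather not rely on those messages at all, CTP 2 also exposes a sys.index_resumable_operations catalog view; a query along these lines should show the state and progress of a paused or running rebuild (a sketch based on the documented columns, which I have not exhaustively verified):

```sql
-- Check the state and progress of any resumable index operations
SELECT name, state_desc, percent_complete, last_pause_time
FROM sys.index_resumable_operations;
```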
I then executed the same command again
ALTER INDEX CI_TestIndexRebuild on TestIndexRebuild REBUILD WITH (ONLINE=ON, RESUMABLE=ON, MAX_DURATION=1)
Here is the output
Warning: An existing resumable operation with the same options was identified for the same index on 'TestIndexRebuild'. The existing operation will be resumed instead.
While that was running in a second window, I executed the following
ALTER INDEX CI_TestIndexRebuild on TestIndexRebuild ABORT
All you get as output is
Command(s) completed successfully.
However, in the window where you executed the resumable index rebuild you get this
Msg 1219, Level 16, State 1, Line 1
Your session has been disconnected because of a high priority DDL operation.
Msg 596, Level 21, State 1, Line 0
Cannot continue the execution because the session is in the kill state.
Msg 0, Level 20, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
If you execute the ABORT again while the index is not being rebuilt, you get
Msg 10638, Level 16, State 2, Line 1
ALTER INDEX 'ABORT' failed. There is no pending resumable index operation for the index 'CI_TestIndexRebuild' on 'TestIndexRebuild'.
That makes sense
With a PAUSE, the same thing happens as with ABORT when you pause the index rebuild
ALTER INDEX CI_TestIndexRebuild on TestIndexRebuild PAUSE
All you get as output is
Command(s) completed successfully
But you get those other messages in the original window
You can also execute a PAUSE followed by an ABORT, you will only get one set of messages, no error is displayed in the window where you executed the code below
ALTER INDEX CI_TestIndexRebuild on TestIndexRebuild PAUSE
GO
ALTER INDEX CI_TestIndexRebuild on TestIndexRebuild ABORT
I think the kill state and severe error occurred is a little over the top.
What is your opinion?
Now having said all that, I do like resumable index rebuilds; it pretty much mimics the defragment/reorganize functionality. It continues from where it was when it was last running
There are some more things you can specify, for example, here is a sample command
ALTER INDEX test_idx ON test_table RESUME
WITH (MAXDOP = 2,
      MAX_DURATION = 240 MINUTES,
      WAIT_AT_LOW_PRIORITY (MAX_DURATION = 10, ABORT_AFTER_WAIT = BLOCKERS));
Read more about this in section J of ALTER INDEX
Tuesday, May 09, 2017
SQL Server 2017: SQL Graph
With the CTP2 build of SQL Server 2017, you now have the ability to use SQL Graph
I decided to play around with it a little; I based the code here on the sample I found in Books Online. To follow along, first create this database. You have to be on SQL Server 2017 CTP2 or higher in order for the code to work
CREATE DATABASE GraphDBTest;
GO
USE GraphDBTest;
GO
In SQL Graph, you have node tables and edge tables; here is the description for both from Books Online
A node table represents an entity in a graph schema. Every time a node table is created, along with the user defined columns, an implicit $node_id column is created, which uniquely identifies a given node in the database. The values in $node_id are automatically generated and are a combination of object_id of that node table and an internally generated bigint value
It is recommended that users create a unique constraint or index on the $node_id column at the time of creation of node table, but if one is not created, a default unique, non-clustered index is automatically created.
An edge table represents a relationship in a graph. Edges are always directed and connect two nodes. An edge table enables users to model many-to-many relationships in the graph. An edge table may or may not have any user defined attributes in it. Every time an edge table is created, along with the user defined attributes, three implicit columns are created in the edge table:
Column name | Description
---|---
$edge_id | Uniquely identifies a given edge in the database. It is a generated column and the value is a combination of the object_id of the edge table and an internally generated bigint value. However, when the $edge_id column is selected, a computed value in the form of a JSON string is displayed. $edge_id is a pseudo-column that maps to an internal name with a hex string in it. When you select $edge_id from the table, the column name will appear as $edge_id_\<hex_string>. Using pseudo-column names in queries is the recommended way of querying the internal $edge_id column; using the internal name with the hex string should be avoided.
$from_id | Stores the $node_id of the node from where the edge originates.
$to_id | Stores the $node_id of the node at which the edge terminates.
Similar to the $node_id column, it is recommended that users create a unique index or constraint on the $edge_id column at the time of creation of the edge table, but if one is not created, a default unique, non-clustered index is automatically created on this column. It is also recommended, for OLTP scenarios, that users create an index on ($from_id, $to_id) columns, for faster lookups in the direction of the edge.
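Following those recommendations, you can reference the pseudo-columns directly when creating the indexes; a sketch against the friendOf edge table created later in this post (the index names are made up, and I have not benchmarked these):

```sql
-- Unique index on the implicit edge id, as Books Online recommends
CREATE UNIQUE INDEX IX_friendOf_edge_id ON friendOf ($edge_id);

-- Index on ($from_id, $to_id) for faster lookups in the direction of the edge
CREATE INDEX IX_friendOf_from_to ON friendOf ($from_id, $to_id);
```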
Here is what a node table looks like, the only difference is the AS NODE at the end
CREATE TABLE Person (
    ID INT PRIMARY KEY,
    name VARCHAR(100)
) AS NODE;
Inserting into a node table is the same as inserting into a regular table
INSERT INTO Person VALUES (1,'Denis');
INSERT INTO Person VALUES (2,'Shidhar');
INSERT INTO Person VALUES (3,'Geoff');
INSERT INTO Person VALUES (4,'Kevin');
INSERT INTO Person VALUES (5,'Liz');
INSERT INTO Person VALUES (6,'Melanie');
Let's see what the output looks like
SELECT * FROM Person
$node_id_A0703BD81C174451BEC7688009010A5B ID name
{"type":"node","schema":"dbo","table":"Person","id":0} 1 Denis
{"type":"node","schema":"dbo","table":"Person","id":1} 2 Shidhar
{"type":"node","schema":"dbo","table":"Person","id":2} 3 Geoff
{"type":"node","schema":"dbo","table":"Person","id":3} 4 Kevin
{"type":"node","schema":"dbo","table":"Person","id":4} 5 Liz
{"type":"node","schema":"dbo","table":"Person","id":5} 6 Melanie
As you can see, there is JSON in the $node_id column
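Since the value is a JSON string, you can pick it apart with the JSON functions that shipped in SQL Server 2016; for example, something like this should pull out the internally generated id (my own sketch, not part of the Books Online sample):

```sql
-- Extract the internally generated id from the $node_id JSON
SELECT JSON_VALUE($node_id, '$.id') AS internal_id, name
FROM Person;
```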
The sys.tables view has 2 new columns, is_edge and is_node. If we query the sys.tables view now for the Person table we just created, you will see that is_node = 1
SELECT t.is_edge,t.is_node,* FROM sys.tables t WHERE name = 'Person'
Here is the result
is_edge is_node name object_id schema_id type_desc create_date
0 1 Person 901578250 1 USER_TABLE 2017-05-09 08:55:30.513
Now we will add another table, this table will have some data about DB products
CREATE TABLE DBProduct (
    ID INT NOT NULL,
    name VARCHAR(100)
) AS NODE;

INSERT INTO DBProduct VALUES (1,'SQL Server');
INSERT INTO DBProduct VALUES (2,'Cassandra');
INSERT INTO DBProduct VALUES (3,'Oracle');
INSERT INTO DBProduct VALUES (4,'FoundationDB');
INSERT INTO DBProduct VALUES (5,'MongoDB');
INSERT INTO DBProduct VALUES (6,'Vertica');
INSERT INTO DBProduct VALUES (7,'Volt');
And now it is time for our edge tables, we will have 2 of them, a likes table and a friendOf table
CREATE TABLE likes (rating INT) AS EDGE;
CREATE TABLE friendOf AS EDGE;
To insert into the likes table, we have to say which person likes what product. The person will go into the $from_id column, and the DBProduct will go into the $to_id column. You use $node_id to grab the ids needed to populate this table
The node_id for Person with an id of 1 will go into the $from_id column, the node_id for DBProduct with an id of 1 will go into the $to_id column. That looks like this
((SELECT $node_id FROM Person WHERE id = 1),
 (SELECT $node_id FROM DBProduct WHERE id = 1), 5);

Here are the insert statements to populate this table
INSERT INTO likes VALUES ((SELECT $node_id FROM Person WHERE id = 1), (SELECT $node_id FROM DBProduct WHERE id = 1), 5);
INSERT INTO likes VALUES ((SELECT $node_id FROM Person WHERE id = 1), (SELECT $node_id FROM DBProduct WHERE id = 2), 15);
INSERT INTO likes VALUES ((SELECT $node_id FROM Person WHERE id = 2), (SELECT $node_id FROM DBProduct WHERE id = 1), 6);
INSERT INTO likes VALUES ((SELECT $node_id FROM Person WHERE id = 3), (SELECT $node_id FROM DBProduct WHERE id = 3), 7);
INSERT INTO likes VALUES ((SELECT $node_id FROM Person WHERE id = 4), (SELECT $node_id FROM DBProduct WHERE id = 4), 8);
INSERT INTO likes VALUES ((SELECT $node_id FROM Person WHERE id = 5), (SELECT $node_id FROM DBProduct WHERE id = 5), 9);
Let's look at what it looks like in the likes edge table
SELECT * FROM likes
Here is what it looks like; as you can see, it is a bunch of JSON
Now we will also insert some data into the friendOf table; it is the same as with the likes table: look up the node ids and insert the data
INSERT INTO friendOf VALUES ((SELECT $node_id FROM Person WHERE ID = 1), (SELECT $node_id FROM Person WHERE ID = 2));
INSERT INTO friendOf VALUES ((SELECT $node_id FROM Person WHERE ID = 1), (SELECT $node_id FROM Person WHERE ID = 5));
INSERT INTO friendOf VALUES ((SELECT $node_id FROM Person WHERE ID = 2), (SELECT $node_id FROM Person WHERE ID = 3));
INSERT INTO friendOf VALUES ((SELECT $node_id FROM Person WHERE ID = 3), (SELECT $node_id FROM Person WHERE ID = 1));
INSERT INTO friendOf VALUES ((SELECT $node_id FROM Person WHERE ID = 4), (SELECT $node_id FROM Person WHERE ID = 2));
INSERT INTO friendOf VALUES ((SELECT $node_id FROM Person WHERE ID = 5), (SELECT $node_id FROM Person WHERE ID = 4));
Now it is time to run some queries
This query will return all the products that Denis likes
-- Find DBProducts that Denis likes
SELECT DBProduct.name
FROM Person, likes, DBProduct
WHERE MATCH(Person-(likes)->DBProduct)
AND Person.name = 'Denis';
name
-------------
SQL Server
Cassandra
Not surprisingly, SQL Server is in that list
What about my friends? What DB products do they like?
-- Find DBProducts that Denis' friends like
SELECT DBProduct.name AS DBName, person2.name
FROM Person person1, Person person2, likes, friendOf, DBProduct
WHERE MATCH(person1-(friendOf)->person2-(likes)->DBProduct)
AND person1.name = 'Denis';
DBName name
--------------- ------------
SQL Server Shidhar
MongoDB Liz
Okay, SQL Server and MongoDB, I can live with that
What about Liz, what does her friend Kevin like?
SELECT DBProduct.name AS DBName, person2.name
FROM Person person1, Person person2, likes, friendOf, DBProduct
WHERE MATCH(person1-(friendOf)->person2-(likes)->DBProduct)
AND person1.name = 'Liz';
DBName name
-------------
FoundationDB Kevin
FoundationDB? Didn't Apple buy that and then shut down the downloads?
So what product is there that both friends like?
To do that we specify that person1 is a friendOf person2, and that both person1 and person2 like the same product. Since you cannot use the same table name twice in the FROM clause, you need to use an alias; in this case I have used l2 as an alias for the likes table
SELECT DBProduct.name AS DBName, person1.name, person2.name
FROM Person person1, Person person2, likes, friendOf, DBProduct, likes AS l2
WHERE MATCH(person1-(friendOf)->person2-(likes)->DBProduct
AND person1-(l2)->DBProduct)
DBName name name
-----------
SQL Server Denis Shidhar
What a surprise, can you believe it, it is SQL Server!!
Let's take a look at some metadata. This time we are adding some columns from the sys.columns view; I have added graph_type_desc, is_edge and is_node
SELECT t.name, c.name, c.graph_type_desc, t.is_edge, t.is_node
FROM sys.columns c
JOIN sys.tables t ON c.object_id = t.object_id
WHERE c.graph_type IS NOT NULL
ORDER BY t.name, c.name
Here is the output from that query
name name graph_type_desc is_edge is_node
DBProduct $node_id_4C8D1144EF7745FD8BACDB7178353A65 GRAPH_ID_COMPUTED 0 1
DBProduct graph_id_8FA26FC01AE94CF9921364ECEFB71621 GRAPH_ID 0 1
friendOf $edge_id_FBBA05CB4E0F49E7919EC346DBB21F92 GRAPH_ID_COMPUTED 1 0
friendOf $from_id_143FE9B1B3E942EB99FF8396B292CD19 GRAPH_FROM_ID_COMPUTED 1 0
friendOf $to_id_DFF0DCFA07FE45E3A31E1925F5E6F886 GRAPH_TO_ID_COMPUTED 1 0
friendOf from_id_71FB1C6C6D82428889276A07A9AF32E4 GRAPH_FROM_ID 1 0
friendOf from_obj_id_4310B8C357D641EC990E91329687BDAA GRAPH_FROM_OBJ_ID 1 0
friendOf graph_id_3722B217CA404686BE44855E883AA6FA GRAPH_ID 1 0
friendOf to_id_D01976015F664F91A0A29819D79EE6A5 GRAPH_TO_ID 1 0
friendOf to_obj_id_8C26534B00DF47FBA87FBED6D14C3201 GRAPH_TO_OBJ_ID 1 0
likes $edge_id_CA70E11B83A44BD99DAD3B14EFCC2E08 GRAPH_ID_COMPUTED 1 0
likes $from_id_FF1F089BA1224A00A1C94F68415C6B08 GRAPH_FROM_ID_COMPUTED 1 0
likes $to_id_7728587E08684D94961B4F2234702B87 GRAPH_TO_ID_COMPUTED 1 0
likes from_id_ACD968F7815C4A3A9B5B6C5506235EDE GRAPH_FROM_ID 1 0
likes from_obj_id_185C6B5EB64F4C0A8E3E1CF65092F706 GRAPH_FROM_OBJ_ID 1 0
likes graph_id_16575BE2C0764232B5B440524166DF15 GRAPH_ID 1 0
likes to_id_D164826C35374D3E98EDBC3FA77457B3 GRAPH_TO_ID 1 0
likes to_obj_id_373354D4D7014EC89771C48F961561BB GRAPH_TO_OBJ_ID 1 0
Person $node_id_A0703BD81C174451BEC7688009010A5B GRAPH_ID_COMPUTED 0 1
Person graph_id_40FFD8418C1343639EA62CA66B6CE414 GRAPH_ID 0 1
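The $from_id and $to_id pseudo-columns you see in this metadata are also how edge tables get populated. A minimal sketch, reusing the Person, DBProduct, and likes tables from this post ('Denis' and 'SQL Server' are just the values seen in the earlier query results):

```sql
-- Insert an edge by looking up the $node_id of each endpoint
INSERT INTO likes ($from_id, $to_id)
VALUES (
    (SELECT $node_id FROM Person WHERE name = 'Denis'),
    (SELECT $node_id FROM DBProduct WHERE name = 'SQL Server')
);
```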
You might have noticed that SQL Graph queries use old style joins
So instead of this
SELECT DBProduct.name AS DBName, person1.name, person2.name
FROM Person person1, Person person2, likes, friendOf, DBProduct, likes AS l2
I can do this right?
SELECT DBProduct.name AS DBName, person1.name, person2.name
FROM Person person1
CROSS JOIN Person person2
CROSS JOIN likes
CROSS JOIN friendOf
CROSS JOIN DBProduct
CROSS JOIN likes AS l2
Yes that works, both queries return 54432 rows
Will it work with the WHERE clause?
SELECT DBProduct.name AS DBName, person1.name, person2.name
FROM Person person1
CROSS JOIN Person person2
CROSS JOIN likes
CROSS JOIN friendOf
CROSS JOIN DBProduct
CROSS JOIN likes AS l2
WHERE MATCH(person1-(friendOf)->person2-(likes)->DBProduct
      AND person1-(l2)->DBProduct)
Msg 13920, Level 16, State 1, Line 164
Identifier 'person1' in a MATCH clause is used with a JOIN clause or APPLY operator. JOIN and APPLY are not supported with MATCH clauses.
Msg 13920, Level 16, State 1, Line 164
Identifier 'friendOf' in a MATCH clause is used with a JOIN clause or APPLY operator. JOIN and APPLY are not supported with MATCH clauses.
Msg 13920, Level 16, State 1, Line 164
Identifier 'person2' in a MATCH clause is used with a JOIN clause or APPLY operator. JOIN and APPLY are not supported with MATCH clauses.
Msg 13920, Level 16, State 1, Line 164
Identifier 'person2' in a MATCH clause is used with a JOIN clause or APPLY operator. JOIN and APPLY are not supported with MATCH clauses.
Msg 13920, Level 16, State 1, Line 164
Identifier 'likes' in a MATCH clause is used with a JOIN clause or APPLY operator. JOIN and APPLY are not supported with MATCH clauses.
Msg 13920, Level 16, State 1, Line 164
Identifier 'DBProduct' in a MATCH clause is used with a JOIN clause or APPLY operator. JOIN and APPLY are not supported with MATCH clauses.
Msg 13920, Level 16, State 1, Line 165
Identifier 'person1' in a MATCH clause is used with a JOIN clause or APPLY operator. JOIN and APPLY are not supported with MATCH clauses.
Msg 13920, Level 16, State 1, Line 165
Identifier 'l2' in a MATCH clause is used with a JOIN clause or APPLY operator. JOIN and APPLY are not supported with MATCH clauses.
Msg 13920, Level 16, State 1, Line 165
Identifier 'DBProduct' in a MATCH clause is used with a JOIN clause or APPLY operator. JOIN and APPLY are not supported with MATCH clauses.
Boooo ... that is a big fat nope
Also keep the following in mind
There are certain limitations on node and edge tables in this release:
- Local or global temporary tables cannot be node or edge tables.
- Table types and table variables cannot be declared as a node or edge table.
- Node and edge tables cannot be created as system-versioned temporal tables.
- Node and edge tables cannot be memory optimized tables.
- Users cannot update the $from_id and $to_id columns of an edge using the UPDATE statement. To update the nodes that an edge connects, you have to insert a new edge pointing to the new nodes and delete the previous one.
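Since $from_id and $to_id cannot be updated, "moving" an edge is a delete plus an insert. A hedged sketch of that last limitation, assuming the Person, DBProduct, and likes tables used throughout this post:

```sql
-- Repoint Denis' "likes" edge from MongoDB to SQL Server:
-- delete the old edge, then insert a new one
DELETE FROM likes
WHERE $from_id = (SELECT $node_id FROM Person WHERE name = 'Denis')
  AND $to_id   = (SELECT $node_id FROM DBProduct WHERE name = 'MongoDB');

INSERT INTO likes ($from_id, $to_id)
VALUES (
    (SELECT $node_id FROM Person WHERE name = 'Denis'),
    (SELECT $node_id FROM DBProduct WHERE name = 'SQL Server')
);
```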
This was just a first look at SQL Graph. I will have to play around with it a little more to get familiar with it and see if there are any use cases for me
In the meantime, visit Books On Line and start playing around with SQL Graph: https://docs.microsoft.com/en-us/sql/relational-databases/graphs/sql-graph-overview
Friday, May 05, 2017
SQL Server 2017: SELECT INTO File Group
With the CTP2 build of SQL Server 2017, you now have the ability to do a SELECT INTO operation into a specific filegroup.
The syntax looks like this
SELECT * INTO TableName ON FileGroup FROM SomeQuery
What is the use of this you might ask? Well maybe you have some ad-hoc queries where you save some data but you don't want it sitting on your expensive SAN. Or maybe you populate staging tables on the fly and you want it to end up on a specific SSD because you need the speed of the SSD disk for these operations.
Of course all this can be done by creating the table first, but this saves you at least some time... especially when all you have is a query and the data types are perhaps unknown
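For comparison, the create-the-table-first approach looks something like this. This is just a sketch: the table and column names are made up, and the column list maps a couple of columns from master..spt_values:

```sql
-- The pre-2017 way: create the table on the target filegroup, then load it
CREATE TABLE dbo.StagingData
(
    SomeNumber INT,
    SomeType   CHAR(3)
) ON ScratchFG;

INSERT INTO dbo.StagingData (SomeNumber, SomeType)
SELECT number, type
FROM master..spt_values;
```

With SELECT INTO ... ON you skip the CREATE TABLE step and let the data types be inferred from the query.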
Let's see how this all works, first create a database with just 1 data file and 1 log file
CREATE DATABASE [TestSelectInto]
 CONTAINMENT = NONE
 ON PRIMARY
( NAME = N'TestSelectInto', FILENAME = N'C:\DATA\TestSelectInto.mdf', SIZE = 6MB, FILEGROWTH = 6MB )
 LOG ON
( NAME = N'TestSelectInto_log', FILENAME = N'C:\DATA\TestSelectInto_log.ldf', SIZE = 6MB, FILEGROWTH = 6MB )
GO
Now, add a new filegroup and name it ScratchFG
ALTER DATABASE [TestSelectInto] ADD FILEGROUP ScratchFG
GO

ALTER DATABASE [TestSelectInto]
ADD FILE ( NAME = 'ScratchData', FILENAME = N'C:\DATA\ScratchData.mdf', SIZE = 6MB, FILEGROWTH = 6MB )
TO FILEGROUP ScratchFG;
Now that we added the filegroup, let's verify that the database has 3 files
USE TestSelectInto
GO

SELECT file_id, type_desc, name, physical_name
FROM sys.database_files;
file_id  type_desc  name                physical_name
-------  ---------  ------------------  ------------------------------
1        ROWS       TestSelectInto      C:\DATA\TestSelectInto.mdf
2        LOG        TestSelectInto_log  C:\DATA\TestSelectInto_log.ldf
3        ROWS       ScratchData         C:\DATA\ScratchData.mdf
As you can see, we have 3 files: two data files (one in the PRIMARY filegroup, one in ScratchFG) and the log file
Now it is time to do our select into filegroup command, we will also do a select into without the filegroup specified
SELECT * INTO TestScratchFG ON ScratchFG
FROM master..spt_values

SELECT * INTO TestDefault
FROM master..spt_values
How can we be sure where the table ended up? You can use the query below
SELECT OBJECT_SCHEMA_NAME(t.object_id) AS schema_name,
       t.name AS table_name,
       i.index_id,
       i.name AS index_name,
       fg.name AS filegroup_name
FROM sys.tables t
INNER JOIN sys.indexes i ON t.object_id = i.object_id
INNER JOIN sys.filegroups fg ON i.data_space_id = fg.data_space_id;
Here is what we see, the TestScratchFG table got created on the ScratchFG filegroup, the TestDefault table got created on the primary filegroup
schema_name  table_name     index_id  index_name  filegroup_name
-----------  -------------  --------  ----------  --------------
dbo          TestScratchFG  0         NULL        ScratchFG
dbo          TestDefault    0         NULL        PRIMARY
What about temporary tables, can you use this syntax? Let's try.....
SELECT * INTO #TestScratchFG ON ScratchFG FROM master..spt_values
Msg 1921, Level 16, State 1, Line 60
Invalid filegroup 'ScratchFG' specified.
So as you can see, that fails: a temporary table is created in tempdb, so the filegroup you specify has to exist there, and you cannot use a filegroup from the database you are in... what you can do however is specify primary
SELECT * INTO #TestScratchFG ON [primary] FROM master..spt_values
Since you cannot add filegroups to tempdb, primary is all you can specify... but then again, why would you? There is no point. So if you are using temporary tables, this syntax is useless.
That is all for this post. Hopefully you have seen how easy this is to use, and it might be of use to you in the future