We are noticing the SQL below is causing very high CPU, and it has been executed 10,000 times. Is there a way we can tune this T-SQL?
SELECT ISNULL(max(trans_seq),0) +1
FROM inv_inventory_journal
WHERE organization_id = @P0
AND wkstn_id = '600'
The execution plan suggests creating the following index.
/*
Missing Index Details from ExecutionPlan1.sqlplan
The Query Processor estimates that implementing the following index could improve the query cost by 98.8039%.
*/
/*
USE [LKY_Xcenter]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[inv_inventory_journal] ([organization_id],[wkstn_id])
INCLUDE ([trans_seq])
GO
*/
Table Structure

[table definition was provided as a screenshot]
- Have you considered using an IDENTITY column? You'll find it has a lot less contention than this max()+1 solution. Also, this query is in a serializable transaction, right? On its own, you can get multiple sessions retrieving the same value under the default isolation level. – Aaron Bertrand, Jan 18, 2016 at 1:32
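The concurrency concern in the comment above can be sketched with the classic atomic MAX()+1 pattern. This is a sketch only: the insert column list is assumed, since the full table definition was not posted.

```sql
-- Sketch only: the column list is assumed (the table definition was not
-- posted). Taking UPDLOCK and HOLDLOCK on the scanned range while reading
-- MAX() and inserting in a single statement prevents two sessions from
-- computing the same next trans_seq.
BEGIN TRANSACTION;

INSERT INTO dbo.inv_inventory_journal (organization_id, wkstn_id, trans_seq)
SELECT @P0,
       '600',
       ISNULL(MAX(trans_seq), 0) + 1
FROM dbo.inv_inventory_journal WITH (UPDLOCK, HOLDLOCK)
WHERE organization_id = @P0
  AND wkstn_id = '600';

COMMIT TRANSACTION;
```

As the comment notes, an IDENTITY column (or a SEQUENCE object) avoids this contention entirely and is usually the better fix.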
- The suggestion seems correct. The parallelism might have occurred because the missing index forced SQL Server to prepare this plan. Please create the NCI as suggested, run the query again, and look at the actual execution plan. Also, please add the table structure and information about any index DDL. – Shanky, Jan 18, 2016 at 4:18
- @Shanky so, you are suggesting to create the index? – VeerM, Jan 18, 2016 at 16:47
- What @AaronBertrand said. Also, please post your table definitions as create statements instead of screenshots. – Tom V, Jan 18, 2016 at 22:57
2 Answers
Based on your schema it looks like trans_seq can't be null, so I would remove the ISNULL from the SELECT; checking the value returned from the MAX function adds a small amount of CPU overhead.
SELECT max(trans_seq) +1
FROM inv_inventory_journal
WHERE organization_id = @P0
AND wkstn_id = '600'
Also, if your query always filters wkstn_id on the constant '600', you could create a filtered index on that column.
USE [LKY_Xcenter]
GO
CREATE NONCLUSTERED INDEX [IX_inv_inventory_journal_org_wkstn_filtered]
ON [dbo].[inv_inventory_journal] ([organization_id],[wkstn_id])
INCLUDE ([trans_seq])
WHERE [wkstn_id] = '600';
GO
The nice thing about filtered indexes is that, from an IO standpoint, they only contain the rows that satisfy the WHERE clause defined in the index, so you can get better performance from them.
One thing of note with filtered indexes is that, in order for the query optimizer to use them, the value in the WHERE clause must be a constant in the query. Here is a reference on using filtered indexes: MSDN
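To illustrate the constant-value requirement (the parameterized variant here is hypothetical, for contrast only):

```sql
-- The literal '600' matches the index's filter predicate, so the
-- optimizer can choose the filtered index:
SELECT ISNULL(MAX(trans_seq), 0) + 1
FROM dbo.inv_inventory_journal
WHERE organization_id = @P0
  AND wkstn_id = '600';

-- With a parameter in place of the constant, the optimizer cannot prove
-- the predicate falls within the filter, and the filtered index is
-- typically ignored:
SELECT ISNULL(MAX(trans_seq), 0) + 1
FROM dbo.inv_inventory_journal
WHERE organization_id = @P0
  AND wkstn_id = @wkstn;
```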
From what I have experienced using indexes on big tables, it seems the SQL engine will only use an index optimally on a set of columns. For example, your primary key may consist of a set of columns, but the index on that set will not be used by the optimizer unless the whole set of columns appears in the query. That is why your original query did not use the index.

So, yes: to optimize queries against a big table, an index must be created for each specific set of columns used in the query. SQL Server execution plan analysis helps a lot in identifying which columns should be indexed. This basic index design guidance also applies to other RDBMS engines.

Another way to optimize a query that uses a computation or formula is to alter the table to include computed columns; a computed column can be added to an index.
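As a sketch of the computed-column approach (qty and unit_price are hypothetical columns, used only for illustration since they are not in the original post):

```sql
-- Hypothetical: qty and unit_price are assumed columns. Persisting the
-- computed column stores the result on disk, and the index lets queries
-- seek on it instead of evaluating the formula per row.
ALTER TABLE dbo.inv_inventory_journal
    ADD line_total AS (qty * unit_price) PERSISTED;

CREATE NONCLUSTERED INDEX IX_inv_inventory_journal_line_total
    ON dbo.inv_inventory_journal (line_total);
```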