I'm looking to create some basic scripts aimed at running on production systems.
I want them to run with as little impact as possible: they should affect performance as little as possible, and they should be safe to run.
So far I've done this by:
- Keeping them small
- Keeping them simple
- Using WITH (NOLOCK)
- Using OPTION (MAXDOP 1)
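As a minimal sketch of the pattern described above (the table and columns here are hypothetical, purely for illustration):

```sql
-- Hypothetical example of a small, low-impact read.
-- WITH (NOLOCK) skips taking shared locks (at the risk of dirty reads),
-- and OPTION (MAXDOP 1) keeps the query on a single scheduler.
SELECT TOP (100)
    OrderID,                          -- hypothetical column
    OrderDate                         -- hypothetical column
FROM dbo.Orders WITH (NOLOCK)         -- hypothetical table
WHERE OrderDate >= DATEADD(DAY, -1, GETDATE())
ORDER BY OrderDate DESC
OPTION (MAXDOP 1);
```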
Is there anything else I could add to limit their impact?
An example of a script I'm looking into is:
-- Derive the current tick count so ring buffer timestamps can be
-- converted to wall-clock time. (The original snippet referenced
-- @ts_now without declaring it; this is the usual derivation.)
DECLARE @ts_now bigint = (SELECT cpu_ticks / (cpu_ticks / ms_ticks)
                          FROM sys.dm_os_sys_info WITH (NOLOCK));

SELECT
    CAST(SQLProcessUtilization AS nvarchar(10)) AS [SQL Server Process CPU Utilization],
    CAST(SystemIdle AS nvarchar(10)) AS [System Idle Process],
    100 - SystemIdle - SQLProcessUtilization AS [Other Process CPU Utilization],
    DATEADD(ms, -1 * (@ts_now - [timestamp]), GETDATE()) AS [Event Time]
FROM (
    SELECT
        record.value('(./Record/@id)[1]', 'int') AS record_id,
        record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS [SystemIdle],
        record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS [SQLProcessUtilization],
        [timestamp]
    FROM (
        SELECT [timestamp], CONVERT(xml, record) AS [record]
        FROM sys.dm_os_ring_buffers WITH (NOLOCK)
        WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
          AND record LIKE N'%<SystemHealth>%'
    ) AS x
) AS y
ORDER BY [Event Time] ASC
OPTION (MAXDOP 1);
Most of these scripts will be used to allow our first-line help desk to gather information in a consistent way when requested.
- Some more information could come if you share a sample script. Are these monitoring scripts? – Shanky, Aug 28, 2015 at 9:36
- @Shanky I added an example. The purpose behind it is to allow our help desk to give consistent answers to questions coming from other departments. The question often arises to check CPU usage or memory usage on our instances. And as our current monitoring system does not have these statistics readily available for our help desk, I was tasked with creating some scripts to allow them to give consistent replies before we set up a way for them to access our MDW. – Reaces, Aug 28, 2015 at 9:40
- Seems to me your current monitoring system is inadequate if it doesn't have such basic metrics readily available. – Colin 't Hart, Aug 28, 2015 at 10:15
- @Colin'tHart I agree. I'm very new at my current job and it's the first thing I want to change: consistent and comprehensive statistics gathering. But you don't change everything overnight, and I first want to get them using a template to respond to queries about the statistics; some of the responses I've seen through our mailbox have been lackluster. – Reaces, Aug 28, 2015 at 10:33
- Wouldn't perfmon or resource monitor or procmon be better ways to capture CPU metrics than the system_health session? – Aaron Bertrand, Aug 28, 2015 at 12:20
1 Answer
It rather depends on what the scripts are intended to do...
If you can cope with the occasional dirty read or non-repeatable read (or the scripts are truly simple enough that these are unlikely or impossible), also set your transaction handling to the most lenient mode possible with SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED. This is actually equivalent to WITH (NOLOCK), but covers the entire connection going forward instead of just the one table. There is also SET TRANSACTION ISOLATION LEVEL SNAPSHOT, which generally stops your query from blocking other concurrent transactions without the risk of dirty or non-repeatable reads, but it does have some potential I/O impact on tempdb, where the row version store lives.
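A sketch of how the session-level setting might be used for this kind of diagnostic work (the reset at the end is optional but tidy if connections are reused):

```sql
-- Loosen locking behaviour for everything on this connection,
-- not just one table, for the duration of the session.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- Any diagnostic reads here now behave as if each table had
-- WITH (NOLOCK) applied; e.g. a quick look at active sessions:
SELECT COUNT(*) AS active_sessions
FROM sys.dm_exec_sessions;

-- Restore the default before handing the connection back.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
```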
If your queries are making any updates to the database, then note that WITH (NOLOCK) support is deprecated and is going to be removed from UPDATE and DELETE statements at some point (see https://technet.microsoft.com/en-us/library/ms187373.aspx).
- Currently the scripts are meant to gather information on the servers and the databases on them. – Reaces, Aug 28, 2015 at 9:43