When comparing the execution time of two different queries, it's important to clear the cache to make sure that the execution of the first query does not alter the performance of the second.
Searching on Google, I found these commands:
DBCC FREESYSTEMCACHE
DBCC FREESESSIONCACHE
DBCC FREEPROCCACHE
Indeed, after running these commands my queries take a more realistic time to complete than they did after several executions. However, I'm not sure this is the recommended technique.
What's the best practice?
4 Answers
Personally, for a common query the 2nd and subsequent executions matter more.
Are you testing disk IO or query performance?
Assuming your query runs often and is critical, then you want to measure that under real life conditions. And you don't want to clear prod server caches each time...
If you want, you can:
DBCC DROPCLEANBUFFERS
clears clean (unmodified) pages from the buffer pool. Precede that with a CHECKPOINT to flush any dirty pages to disk first.
DBCC FLUSHPROCINDB
clears execution plans for that database.
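Putting those together, a minimal sketch for a test (not production) server; note that DBCC FLUSHPROCINDB takes a database ID, so DB_ID() supplies it here:

CHECKPOINT;                  -- write dirty pages to disk so they become clean
DBCC DROPCLEANBUFFERS;       -- drop the now-clean pages from the buffer pool
DECLARE @db INT = DB_ID();   -- ID of the current database
DBCC FLUSHPROCINDB(@db);     -- clear cached plans for this database only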
Also see (on DBA.SE)
-
Got an error when running DBCC FLUSHPROCINDB: "An incorrect number of parameters was given to the DBCC statement." – Xin, Mar 22, 2017 at 1:41
-
Finally found it: DECLARE @myDb AS INT = DB_ID(); DBCC FLUSHPROCINDB(@myDb); GO (from here: stackoverflow.com/questions/7962789/…) – Hans Vonn, Jun 25, 2019 at 20:31
Late answer, but it may be of use to other readers.
DBCC DROPCLEANBUFFERS
is an often-used command for query testing and for gauging query execution speed. When run, it leaves behind only the dirty pages, which are actually a small portion of the data; it removes all the clean pages for the entire server.
Be advised that this command should not be run in a production environment. Running it leaves a mostly empty buffer cache, so any query executed after DBCC DROPCLEANBUFFERS will use physical reads to bring the data back into the cache, which is very likely to be a lot slower than reading it from memory.
Again, treat this command similarly to DBCC FREEPROCCACHE: it should not be run on any production server unless you absolutely know what you are doing.
This can be a useful development tool because you can run a query in a performance testing environment over and over without any changes in speed/efficiency due to caching of data in memory.
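For example, a minimal sketch of such a test run, assuming a performance-testing environment and a hypothetical table dbo.SomeLargeTable standing in for the query under test:

CHECKPOINT;                      -- flush dirty pages so every page is clean
DBCC DROPCLEANBUFFERS;           -- empty the buffer cache (test servers only)
SET STATISTICS IO ON;            -- report logical vs. physical reads
SET STATISTICS TIME ON;          -- report CPU and elapsed time
SELECT COUNT(*) FROM dbo.SomeLargeTable;   -- hypothetical query under test
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

Run it twice in a row: the first execution should show physical reads (cold cache), the second mostly logical reads (warm cache).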
Learn more at: http://www.sqlshack.com/insight-into-the-sql-server-buffer-cache/
I was always told to use:
dbcc dropcleanbuffers;
From MSDN:
Use DBCC DROPCLEANBUFFERS to test queries with a cold buffer cache without shutting down and restarting the server.
To drop clean buffers from the buffer pool, first use CHECKPOINT to produce a cold buffer cache. This forces all dirty pages for the current database to be written to disk and cleans the buffers. After you do this, you can issue DBCC DROPCLEANBUFFERS command to remove all buffers from the buffer pool.
Edit: the OP's question is about caches and the use of commands such as DBCC FREEPROCCACHE, which clears stored execution plans. Apparently I didn't have my glasses on, because my answer addressed buffers, which is to say recently read data stored in memory for quick retrieval. Hopefully this is still of use to someone.
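As an aside, if clearing the entire plan cache is too blunt an instrument, DBCC FREEPROCCACHE also accepts a single plan handle. A hedged sketch, where the LIKE filter is just a hypothetical way to locate the plan you care about:

DECLARE @plan VARBINARY(64);
SELECT TOP (1) @plan = cp.plan_handle
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%SomeLargeTable%';      -- hypothetical search text
IF @plan IS NOT NULL
    DBCC FREEPROCCACHE(@plan);              -- evict just that one plan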
The other answers are correct about reasons not to run DBCC DROPCLEANBUFFERS. However, there are also a couple of reasons to do so:
1: Consistency
If you want to compare two different queries or procedures which are attempting to do the same thing in different ways, they're likely to hit the same pages. If you naively run query #1 then query #2, the second may be much faster simply because those pages were cached by the first query. If you clear the cache before each execution, they start on an even footing.
If you do want to test hot-cache performance, be sure to run the queries several times, alternating, and discard the first couple of runs. Average the results.
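For instance, a minimal alternating-run sketch, where dbo.QueryV1 and dbo.QueryV2 are hypothetical stand-ins for the two candidates:

DECLARE @t DATETIME2, @i INT = 1;
WHILE @i <= 6   -- several alternating pairs; discard the first couple
BEGIN
    SET @t = SYSDATETIME();
    EXEC dbo.QueryV1;
    PRINT CONCAT('V1 run ', @i, ': ', DATEDIFF(MILLISECOND, @t, SYSDATETIME()), ' ms');
    SET @t = SYSDATETIME();
    EXEC dbo.QueryV2;
    PRINT CONCAT('V2 run ', @i, ': ', DATEDIFF(MILLISECOND, @t, SYSDATETIME()), ' ms');
    SET @i += 1;
END;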
2: Worst-case performance
Say you have a query which takes one second against a hot cache but one minute against a cold cache. An optimization which makes the in-memory query 20% slower but the IO-bound query 20% faster could be a big win: no one will care about the extra 200 ms of run time under normal circumstances, but if something forces a query to run against disk, taking 48 seconds instead of 60 is material. It might be enough to save a sale.
This is less of a concern on modern systems with tens of gigabytes of memory, and relatively fast SAN and SSD storage, but it still matters. If some analyst runs a massive table scan query against your OLTP database which wipes out half of your buffer cache, storage-efficient queries will get you back up to speed sooner.
-
Thanks for merging together the different pieces of advice into a cohesive answer. – Joel Christophel, Apr 29, 2020 at 22:10
-
@MartinSmith: Good point! I will edit to clarify, though this answer is arguably irrelevant to the OP's specific question, now that you've brought my attention to exactly what they've asked. – Jon of All Trades, Apr 30, 2020 at 14:48