Context: I have a SQL Server instance running on one of our VMs with 64 GB of memory. There is currently a max limit of 60 GB and a min limit of 0 GB. SQL Server is the only application running on the server that consumes significant memory.
I recently launched a feature that makes repeated expensive queries against SQL Server, and anytime a lot of users are using the feature, memory usage increases as expected. I haven't come anywhere near the limit, but I'm curious what happens as we approach it.
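For context, this is roughly the kind of check I've been using to watch usage against the configured cap (just a sketch against the documented DMVs, not a full monitoring setup):

```sql
-- Compare SQL Server's current memory use to the configured cap (monitoring sketch).
SELECT
    c.value_in_use AS max_server_memory_mb,            -- configured cap (60 GB here)
    pm.physical_memory_in_use_kb / 1024 AS in_use_mb,  -- what the process currently holds
    pm.memory_utilization_percentage
FROM sys.configurations AS c
CROSS JOIN sys.dm_os_process_memory AS pm
WHERE c.name = 'max server memory (MB)';
```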
Per the MS docs, SQL Server manages memory dynamically as it needs it, but I haven't been able to find anything about the logic by which SQL Server dynamically frees memory.
Questions:
- Does it begin freeing memory before hitting the specified limit? What (else) triggers memory to be freed?
- How does SQL Server determine which memory to free? Does it free memory in a FIFO manner?
Am I completely missing any points?
1 Answer
> I haven't been able to find anything about the logic by which SQL Server dynamically frees memory.
That's mostly because internal memory management is an implementation detail, which can (and does) change between major versions and even cumulative updates (CUs), not to mention the number of trace flags that can further change behavior.
There are two knobs, which you've already found: min server memory and max server memory. These are the basic controls administrators can use to affect the overall behavior in the operating system environment (OSE). Outside of these, there are trace flags, most of which aren't publicly documented, that can change behaviors further.
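For completeness, those two knobs are just sp_configure settings; something like the following shows and sets them (the 61440 MB value simply matches the 60 GB cap in your description, not a recommendation):

```sql
-- Inspect (and, if needed, change) the two memory knobs. Values are in MB.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'min server memory (MB)';   -- 0 in the setup described above
EXEC sp_configure 'max server memory (MB)';   -- 61440 = 60 GB in the setup described above

-- Changing the cap looks like this:
EXEC sp_configure 'max server memory (MB)', 61440;
RECONFIGURE;
```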
The next item is only sort-of administrator controlled: the amount of memory in, and the other applications running on, the OSE. SQL Server does listen to Windows, which can raise a low memory condition for the OSE to tell applications that the computer is low on memory; applications can react to it if they subscribe to the event. SQL Server takes action by setting different flags internally, which then cause the various memory clerks to run an external memory pressure routine - not all clerks implement one. For the clerks that do, the routine will kick in; whether or not it helps is anyone's guess. Having your antivirus or whatever other software kick in and scan everything on the system while eating up 20 GB of memory might not be good if you've already set max server memory very high.
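If you're curious whether Windows is signalling memory pressure right now, the documented DMVs expose it; something like this is enough for a quick check:

```sql
-- Is the OS currently signalling a low-memory condition to SQL Server?
SELECT
    total_physical_memory_kb / 1024 AS total_physical_mb,
    available_physical_memory_kb / 1024 AS available_physical_mb,
    system_memory_state_desc,          -- e.g. 'Available physical memory is high'
    system_low_memory_signal_state     -- 1 when the OS low-memory event is set
FROM sys.dm_os_sys_memory;
```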
Finally, there is internal memory pressure, which comes from contention between different memory consumers inside SQL Server. Some of them implement an internal memory pressure routine and some do not; for example, the lock manager will not give up memory.
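If you want to see which consumers (memory clerks) are holding memory at any point, including the lock manager mentioned above, a query along these lines works on 2012 and later (older versions split the page counters differently):

```sql
-- Top memory clerks by pages allocated (SQL Server 2012+; older versions split this
-- into single_pages_kb and multi_pages_kb instead of pages_kb).
SELECT TOP (10)
    type,                              -- e.g. MEMORYCLERK_SQLBUFFERPOOL, OBJECTSTORE_LOCK_MANAGER
    name,
    SUM(pages_kb) / 1024 AS pages_mb
FROM sys.dm_os_memory_clerks
GROUP BY type, name
ORDER BY pages_mb DESC;
```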
The last application-level controllable item (sort-of, again) is workload. If queries aren't tuned and take tons of concurrent locks, the lock manager will grow in size and never give that memory back. Application tuning (along with query tuning) can and will get you far, since large memory grants are allowed (depending on version and CU) to push usage above max server memory, which can set off a bad chain of events.
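Two things worth watching on the workload side, sketched below: how big the lock manager's clerk has grown, and which queries are asking for large memory grants (exact behavior varies by version and CU):

```sql
-- How much memory the lock manager is holding (it won't give this back).
SELECT SUM(pages_kb) / 1024 AS lock_manager_mb
FROM sys.dm_os_memory_clerks
WHERE type = 'OBJECTSTORE_LOCK_MANAGER';

-- Queries currently holding or waiting on memory grants.
SELECT session_id,
       requested_memory_kb / 1024 AS requested_mb,
       granted_memory_kb / 1024 AS granted_mb,
       wait_time_ms
FROM sys.dm_exec_query_memory_grants
ORDER BY requested_memory_kb DESC;
```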
If you want to learn more about SQL Server internal and external memory management, I'd recommend the SQL Server Internals book. It's from 2012 but the core parts of it are still applicable.