We run numerous two-node SQL Server 2008 R2 failover clusters on Windows Server 2008 R2, on a VMware ESXi virtual platform. Each cluster hosts 3-4 SQL Server instances with varying memory requirements.
At present we set max server memory on each instance so that all 3 or 4 instances (with the greater of 10% or 4 GB of overhead for the OS) can run on a single node at any one time without impacting performance. I believe this is the widely accepted method for memory allocation on a cluster if you're not using Aaron Bertrand's handy script for dynamic memory allocation, but it inevitably leads to wasted resources.
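For illustration, the caps are applied per instance with sp_configure (the values below are just my example numbers for one instance; 'max server memory' takes MB):

-- Connect to each instance in turn and cap it. Example value only, sized
-- so all instances plus OS overhead fit together on one node.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 6144;  -- 6 GB cap for this instance
RECONFIGURE;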
I recently got around to reading an Idera whitepaper, "Top 5 SQL Server Cluster Setup Mistakes", written by Kendra Little, whom I massively respect, which implies that this approach may not actually be optimal. The whitepaper references a Microsoft article: https://msdn.microsoft.com/en-us/library/ms178067(v=sql.105).aspx. The part specific to this scenario:
Use min server memory to control memory usage. Establish minimum settings for each instance, so that the sum of these minimums is 1-2 GB less than the total physical memory on your machine. Again, you may establish these minimums proportionately to the expected load of that instance. This approach has the advantage that if not all instances are running at the same time, the ones that are running can use the remaining free memory. This approach is also useful when there is another memory-intensive process on the computer, since it would insure that SQL Server would at least get a reasonable amount of memory. The drawback is that when a new instance (or any other process) starts, it may take some time for the running instances to release memory, especially if they must write modified pages back to their databases to do so. You may also need to increase the size of your paging file significantly.
So, my question really is: does anyone have any experience of this? I feel it might be better to optimize memory usage in my environment by setting min server memory instead: per-instance minimums that safely allow for all instances plus the OS on one node, which would then give SQL Server access to the otherwise unallocated RAM on the VM.
An example of what I mean:
Current Config: 2 x 32 GB RAM VMs
VM1:
SQL Instance A - 0 Min Mem / 6 GB Max Mem
SQL Instance B - 0 Min Mem / 8 GB Max Mem
VM2:
SQL Instance C - 0 Min Mem / 6 GB Max Mem
SQL Instance D - 0 Min Mem / 8 GB Max Mem
Total Cluster Usage: 28 GB Max SQL Memory
Suggested Config:
VM1:
SQL Instance A - 6 GB Min Mem / Default Max Mem
SQL Instance B - 8 GB Min Mem / Default Max Mem
VM2:
SQL Instance C - 6 GB Min Mem / Default Max Mem
SQL Instance D - 8 GB Min Mem / Default Max Mem
Total Cluster Usage: 28 GB Min SQL Memory
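If I went the suggested route, the change itself would be made per instance, something like the sketch below (example values again; max server memory is left at its default of 2147483647 MB):

-- Connect to each instance in turn and set a floor instead of a cap.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 6144;  -- e.g. Instance A: 6 GB floor
RECONFIGURE;
-- max server memory stays at its default, so the instance can also use
-- otherwise unallocated RAM on whichever node it is currently running.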
Any thoughts and comments would be gratefully received.
"This approach has the advantage that if not all instances are running at the same time..."

I would suggest you leave min server memory at its default unless you are really encountering issues, because once acquired, the buffer pool will not release memory below the level specified in min server memory. See my answer here. You should monitor sys.dm_os_ring_buffers and the system_health session for memory pressure.
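For example, a quick check of the resource monitor ring buffer looks like this (a sketch; the notification names and record XML shape are what these entries typically contain, so verify on your build):

-- Recent resource monitor notifications. RESOURCE_MEMPHYSICAL_LOW entries
-- indicate the instance is under external physical memory pressure.
SELECT dorb.timestamp,
       CAST(dorb.record AS xml).value('(//Notification)[1]', 'varchar(60)') AS notification
FROM sys.dm_os_ring_buffers AS dorb
WHERE dorb.ring_buffer_type = 'RING_BUFFER_RESOURCE_MONITOR';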