I am managing a server cluster for a SaaS system. We currently have 6 SQL servers running MSSQL 2008 R2. Each server is a quad-core virtual machine with 16GB of RAM provisioned on a server rack. We populate each server with the maximum of 50 instances allowed by SQL Server. Each instance is accessed sporadically by at most 10 clients simultaneously, and each database will generally only be around 100MB to 500MB in size.
Each instance was installed without memory limits set, and what we have found is that the first instances (in starting order) tend to use too much memory, leaving later instances with barely 200MB to work with and the server OS with less than 1% of physical memory available. This seems to cause excessive disk swapping and latency issues.
What is the recommended way to split memory allocation in this case? Is there a formula to determine approximately how much memory one instance needs as a minimum, given the number of clients and database size? Could I set a maximum of 300MB per instance and be done with it?
Edit (Dec 13, 2018): Final solution
I have finally figured out a configuration that seems very stable and is completely symmetric across all instances, and since it took some digging and doing, I thought I should share the solution I came to. This is probably not for everyone, but it seems to be the best solution to me.
The key is in this API: https://learn.microsoft.com/en-us/windows/desktop/api/memoryapi/nf-memoryapi-setprocessworkingsetsizeex
This allows you to set a hard limit on the minimum and maximum working set size on a per-process basis. This seems the cleanest approach to me, as it lets me tell SQL Server to use as much memory as it wants WITHIN this working set size, and it seems to optimize adequately by itself instead of running at capacity and trying to hog more memory all the time. Each instance will swap more overall, but since usage is sporadic for each instance it seems to work just fine, and if it ever becomes a problem I can provision a few swap partitions off an SSD in the server rack as swap space for each server. I set the working set size to 300MB for each instance through a PowerShell script when each instance launches, the minimal OS always has 1GB to work with, and everyone seems happy enough.
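A trimmed sketch of that script follows. It is illustrative rather than my exact production version: it assumes it runs elevated, that every instance process is named sqlservr, and the 64MB hard minimum is just a placeholder floor.

```powershell
# Sketch: apply a hard 300MB working set cap to every running SQL Server
# instance on this host. Must run elevated to open other processes' handles.
$QUOTA_LIMITS_HARDWS_MIN_ENABLE = 0x1   # enforce the minimum as a hard limit
$QUOTA_LIMITS_HARDWS_MAX_ENABLE = 0x4   # enforce the maximum as a hard limit

$kernel32 = Add-Type -Name 'Kernel32' -Namespace 'Win32' -PassThru -MemberDefinition @'
[DllImport("kernel32.dll", SetLastError = true)]
public static extern bool SetProcessWorkingSetSizeEx(
    IntPtr hProcess,
    UIntPtr dwMinimumWorkingSetSize,
    UIntPtr dwMaximumWorkingSetSize,
    uint Flags);
'@

foreach ($proc in Get-Process -Name 'sqlservr' -ErrorAction SilentlyContinue) {
    $ok = $kernel32::SetProcessWorkingSetSizeEx(
        $proc.Handle,
        [UIntPtr][uint64]64MB,    # hard minimum (placeholder floor)
        [UIntPtr][uint64]300MB,   # hard maximum per instance
        ($QUOTA_LIMITS_HARDWS_MIN_ENABLE -bor $QUOTA_LIMITS_HARDWS_MAX_ENABLE))
    if (-not $ok) {
        $err = [System.Runtime.InteropServices.Marshal]::GetLastWin32Error()
        Write-Warning "Working set cap failed for PID $($proc.Id) (Win32 error $err)"
    }
}
```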
- It is MS SQL 2008 R2 SP3 Enterprise. – Drunken Code Monkey, Jun 29, 2016 at 17:39
- @ypercubeTM @DrunkenCodeMonkey Did you ever consider rethinking your licensing-cost-to-hardware-cost ratio with this setup? – Tom V, Jun 29, 2016 at 17:48
- @TomV Not really my problem, I just work here... :) – Drunken Code Monkey, Jun 29, 2016 at 17:52
- Your Dec 13, 2018 Final Solution edit should probably be posted as an answer. Self-answering is encouraged, as it benefits the community and keeps everything nice and tidy. You can even mark your own answer as accepted (I would personally wait a couple of days before accepting); it will then show up as accepted and help others find possible solutions. Posting answers inside questions isn't encouraged. – John K. N., Dec 14, 2018 at 9:10
2 Answers
Each server is a quad core virtual machine with 16GB of RAM provisioned on a server rack.
We populate each server with the maximum of 50 instances allowed by SQL Server. Each instance will be accessed sporadically by a maximum of 10 clients simultaneously, and each database will in general only be around 100MB to 500MB in size.
IMHO, your total RAM is too low. Please read my answer (with relevant links): SQL Server Maximum and Minimum memory configuration. The recommendations change when you have multiple instances of SQL Server running on a given host.
Capping SQL Server max memory on a multi-instance server is a balancing act, and max memory applies only to the buffer pool. If SQL Server needs more memory outside the buffer pool, it is going to use it.
You can even use Lock Pages in Memory (though I would still opt for more memory before enabling LPM in your case).
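If you do go the capping route, here is a rough sketch of pushing a uniform max memory setting to every instance; the INST01..INST50 naming and the SqlServer module are assumptions on my part, so substitute your own instance list and tooling:

```powershell
# Sketch: set a uniform 'max server memory' cap on every named instance.
# Instance names and the SqlServer module are assumptions; adjust as needed.
Import-Module SqlServer

$tsql = @'
EXEC sys.sp_configure N'show advanced options', 1; RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)', 300; RECONFIGURE;
'@

foreach ($n in 1..50) {
    $instance = '.\INST{0:D2}' -f $n   # e.g. .\INST01 .. .\INST50
    Invoke-Sqlcmd -ServerInstance $instance -Query $tsql
}
```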
As a starting point:

- Baseline your instances. This will help you gauge what is good / acceptable for your workload (a minimal collection sketch follows this list).
- Use the OptimizeInstanceMemory script from Aaron's blog to help you get started. The blog post covers how to rebalance max memory dynamically when a failover happens.
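For the baselining step, something like the following would do; counter-set names follow the MSSQL$&lt;InstanceName&gt; pattern, and the output path is arbitrary:

```powershell
# Sketch: sample the buffer pool counters of every named instance on this
# host for ~5 minutes and save them for later comparison.
$paths = Get-Counter -ListSet 'MSSQL$*:Memory Manager' |
    ForEach-Object { $_.Paths } |
    Where-Object { $_ -match 'Total Server Memory|Target Server Memory' }

Get-Counter -Counter $paths -SampleInterval 15 -MaxSamples 20 |
    Export-Counter -Path 'C:\perflogs\sql-memory-baseline.blg' -Force
```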
As a side note, you should monitor CPU, memory, and disk utilization per client, and bill them based on that usage. Alternatively, you can move to Azure :-)
- RAM is scalable; we can reprovision the VMs anytime (the rack has 256GB), but the issues we have been having are due to the OS getting starved, not the instances lacking memory. I will keep the suggestion to increase RAM in mind if we still run into performance issues after the OS memory problem is solved. – Drunken Code Monkey, Jun 30, 2016 at 15:11
Set a maximum of 300MB per instance and be done with it. Seriously, you can monitor usage with something like this to determine which instances may be candidates for a little more or less:
SELECT DB_NAME(database_id) AS [Database Name],
       COUNT(*) * 8 / 1024.0 AS [Cached Size (MB)]
FROM sys.dm_os_buffer_descriptors
WHERE database_id > 4          -- exclude system databases
  AND database_id <> 32767     -- exclude the ResourceDB
GROUP BY DB_NAME(database_id)
ORDER BY [Cached Size (MB)] DESC OPTION (RECOMPILE);
From here: Memory utilization per database - SQL Server
Good article here: http://strictlysql.blogspot.com/2013/05/how-to-size-sql-server-memory-why-is-it.html
Allocating memory to the OS (from the article):
- 1 GB of memory reserved for the operating system
- 1 GB for every 4 GB of RAM between 4 and 16 GB
- 1 GB for every 8 GB of RAM above 16 GB
Split the remainder (12 GB on this 16 GB server, after reserving 1 GB base plus 3 GB for the 4-16 GB range) evenly across the 50 instances: approximately 245MB each.
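As a quick sanity check of that arithmetic (Get-OsReservedGB is a made-up helper name for illustration, not an established tool):

```powershell
# Sketch: apply the article's OS reservation rule to a given server size.
function Get-OsReservedGB([int]$TotalGB) {
    $reserved = 1                                    # base 1 GB for the OS
    # 1 GB per 4 GB slice in the 4-16 GB range
    $reserved += [Math]::Ceiling([Math]::Max(0, [Math]::Min($TotalGB, 16) - 4) / 4.0)
    # 1 GB per 8 GB slice above 16 GB
    if ($TotalGB -gt 16) { $reserved += [Math]::Ceiling(($TotalGB - 16) / 8.0) }
    $reserved
}

$total = 16
$os    = Get-OsReservedGB $total                     # 1 + 3 = 4 GB for Windows
$perInstanceMB = [Math]::Floor(($total - $os) * 1024 / 50)   # round down, conservative
"Reserve $os GB for the OS; cap each of the 50 instances near $perInstanceMB MB"
```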
One configuration you may also consider is turning on "optimize for ad hoc workloads". This essentially tells SQL Server not to cache the full query plan for a query until it has been run at least twice; only a small plan stub is stored on first execution. This keeps ad-hoc or single-use queries from taking up this limited memory.
Also, you can minimize the impact of the transaction log on memory by setting each database to the 'Simple' recovery model. You can only do this if restoring from the last backup is acceptable in the event of a failure. You can read about other limitations here: https://msdn.microsoft.com/en-us/library/ms189275.aspx.
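A sketch combining both tweaks (the instance and database names below are placeholders, and the SqlServer module is an assumption):

```powershell
# Sketch: enable 'optimize for ad hoc workloads' instance-wide and switch one
# client database to the SIMPLE recovery model. Names are placeholders.
Import-Module SqlServer

$tsql = @'
EXEC sys.sp_configure N'show advanced options', 1; RECONFIGURE;
EXEC sys.sp_configure N'optimize for ad hoc workloads', 1; RECONFIGURE;
ALTER DATABASE [ClientDb] SET RECOVERY SIMPLE;
'@

Invoke-Sqlcmd -ServerInstance '.\INST01' -Query $tsql
```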
I think that's fair until you see a reason to change it, particularly if these are individual clients who are in all other respects equal.
- Comments are not for extended discussion; this conversation has been moved to chat. – Aug 29, 2017 at 10:20