I have the following configuration:
- a host machine that runs three Docker containers:
  - MongoDB
  - Redis
  - a program using the previous two containers to store data
Both Redis and MongoDB are used to store huge amounts of data. I know Redis needs to keep all its data in RAM and I am fine with this. Unfortunately, what happens is that mongo starts taking up a lot of RAM and as soon as the host RAM is full (we're talking about 32GB here), either mongo or Redis crashes.
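For context, the containers are started roughly like this (a minimal sketch; the image tags, container names, network name, and the app image are placeholders, not the actual setup):

```
# Hypothetical layout: three containers on one 32GB host, sharing a user-defined network
docker network create backend
docker run -d --name mongodb --network backend mongo:3.4
docker run -d --name redis   --network backend redis:3.2
docker run -d --name app     --network backend my-app:latest   # the program that writes to both
```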
I have read the following previous questions about this:
1. Limit MongoDB RAM Usage: apparently most RAM is used up by the WiredTiger cache
2. MongoDB limit memory: here apparently the problem was log data
3. Limit the RAM memory usage in MongoDB: here they suggest limiting mongo's memory so that it uses a smaller amount of memory for its cache/logs/data
4. MongoDB using too much memory: here they say it's the WiredTiger caching system, which tends to use as much RAM as possible to provide faster access. They also state
   > it's completely okay to limit the WiredTiger cache size, since it handles I/O operations pretty efficiently
5. Is there any option to limit mongodb memory usage?: caching again; they also add
   > MongoDB uses the LRU (Least Recently Used) cache algorithm to determine which "pages" to release, you will find some more information in these two questions
6. MongoDB index/RAM relationship: quote:
   > MongoDB keeps what it can of the indexes in RAM. They'll be swapped out on an LRU basis. You'll often see documentation that suggests you should keep your "working set" in memory: if the portions of index you're actually accessing fit in memory, you'll be fine.
7. how to release the caching which is used by MongoDB?: same answer as in 5.
Now what I appear to understand from all these answers is that:
- For faster access it would be better for mongo to fit all indices in RAM. However, in my case, I am fine with indices partially residing on disk, as I have quite a fast SSD.
- RAM is mostly used for caching by mongo.
Considering this, I was expecting mongo to try to use as much RAM as possible, but also to be able to function with little RAM, fetching most things from disk. However, I limited the mongo Docker container's memory (to 8GB, for instance) using --memory and --memory-swap, but instead of fetching data from disk, mongo just crashed as soon as it ran out of memory.
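For reference, the limit was applied roughly like this (a minimal sketch; the container name and image tag are illustrative):

```
# Cap the mongo container at 8GB of RAM with no extra swap (--memory-swap equal to --memory)
docker run -d --name mongodb \
  --memory=8g --memory-swap=8g \
  mongo:3.4
```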
How can I force mongo to use only the available memory and to fetch from disk everything that does not fit into memory?
2 Answers
As per the MongoDB documentation, changed in version 3.4: values can range from 256MB to 10TB and can be a float. In addition, the default value has also changed.
Starting in 3.4, the WiredTiger internal cache, by default, will use the larger of either:
- 50% of RAM minus 1 GB, or
- 256 MB.
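To put numbers on this: on the 32 GB host from the question, the default works out to 0.5 × 32 GB − 1 GB = 15 GB of WiredTiger cache. And since processes inside a container typically see the host's total RAM rather than the container limit (see the comments below), a mongod capped at 8 GB by Docker will, by default, still try to size its cache at around 15 GB.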
With WiredTiger, MongoDB utilizes both the WiredTiger internal cache and the filesystem cache.

Via the filesystem cache, MongoDB automatically uses all free memory that is not used by the WiredTiger cache or by other processes.
The storage.wiredTiger.engineConfig.cacheSizeGB setting limits the size of the WiredTiger internal cache. The operating system will use the available free memory for the filesystem cache, which allows the compressed MongoDB data files to stay in memory. In addition, the operating system will use any free RAM to buffer filesystem blocks and the filesystem cache.

To accommodate the additional consumers of RAM, you may have to decrease the WiredTiger internal cache size.
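As a rough illustration of how to combine the two limits (assumptions: the official mongo image, which passes extra arguments through to mongod, and an 8 GB container cap as in the question), the cache can be set explicitly with --wiredTigerCacheSizeGB, the command-line equivalent of storage.wiredTiger.engineConfig.cacheSizeGB:

```
# 8GB Docker memory limit, WiredTiger cache capped at ~3GB (50% of 8GB minus 1GB)
docker run -d --name mongodb \
  --memory=8g --memory-swap=8g \
  mongo:3.4 --wiredTigerCacheSizeGB 3
```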
For further reference, see the WiredTiger Storage Engine and Configuration File Options documentation.
Actually, if you look closely, it's not mongod that dies because it runs out of memory; it's the kernel's OOM (out-of-memory) killer that kills mongod, because it has the biggest memory usage.

Yes, you can try to solve the problem with the MongoDB configuration parameter cacheSizeGB, but in a container environment it is better to use cgroups to limit the resources each of your three containers gets.
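If you want to confirm that it is the kernel's OOM killer at work, its activity is usually visible in the kernel log; a quick check on the host might look like this:

```
# Look for OOM-killer activity around the time of the crash (human-readable timestamps)
dmesg -T | grep -i -E "out of memory|oom|killed process"
```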
Do you have any log lines in dmesg correlating with the unexpected shutdown? The most likely possibility with Docker is that processes in the container detect the overall RAM available rather than the container limit.

If you run mongod in a container (lxc, cgroups, Docker, etc.) that does not have access to all of the RAM available in a system, you must set storage.wiredTiger.engineConfig.cacheSizeGB to a value less than the amount of RAM available in the container. The exact amount depends on the other processes running in the container, but typically shouldn't be more than the default value of 50% of RAM minus 1 GB.
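One way to verify what cache ceiling mongod actually picked inside the container is to ask serverStatus for it (the container name here is an assumption, and on newer images the shell binary is mongosh rather than mongo):

```
# Print the configured WiredTiger cache maximum, in bytes
docker exec -it mongodb mongo --quiet --eval \
  'print(db.serverStatus().wiredTiger.cache["maximum bytes configured"])'
```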