I'm currently experiencing a rapid decrease in insert performance.
I have a rather simple data model:
{
    "Id": "ObjectId",
    "SomeProp": "xy",
    "SomeTags": [
        {
            "key": "x",
            "value": "y"
        }
    ]
}
Indices exist on the default MongoDB "_id", on "SomeProp" (ascending), and on the compound ("SomeTags.key", "SomeTags.value") (ascending). The two custom indices were created with background: true (I also tried background: false but got worse results).
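For reference, the two custom indexes above translate to pymongo's (field, direction) key-tuple convention roughly as follows; this is a sketch using plain tuples so it runs without a server, with the actual create_index calls shown only as comments:

```python
# Sketch of the index definitions in pymongo's (field, direction) convention.
# ASCENDING is 1 in pymongo; field names mirror the question's data model.
ASCENDING = 1  # pymongo.ASCENDING

id_index = [("_id", ASCENDING)]              # created automatically by MongoDB
someprop_index = [("SomeProp", ASCENDING)]
sometags_index = [("SomeTags.key", ASCENDING),
                  ("SomeTags.value", ASCENDING)]  # compound multikey index

# With a live connection the two custom indexes would be created like:
#   db.coll.create_index(someprop_index, background=True)
#   db.coll.create_index(sometags_index, background=True)

for spec in (someprop_index, sometags_index):
    print(spec)
```

Note that because SomeTags is an array, the compound index is a multikey index: every inserted document adds one index entry per array element, which multiplies the index-maintenance cost of each insert.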
I use the new unordered bulk API to perform the inserts; this seemed slightly faster than a plain batch insert.
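The insert loop can be sketched as client-side batching feeding unordered bulk writes. The batch size of 1000 here is an assumption (MongoDB splits bulk operations into groups of at most 1000 ops internally anyway); the actual pymongo call is shown only as a comment so the sketch runs standalone:

```python
def batches(docs, batch_size=1000):
    """Yield successive slices of `docs` sized for one unordered bulk insert.

    Batching on the client keeps memory bounded; unordered bulk writes let
    the server apply the operations without preserving submission order.
    """
    for i in range(0, len(docs), batch_size):
        yield docs[i:i + batch_size]

docs = [{"SomeProp": "xy", "SomeTags": [{"key": "x", "value": str(n)}]}
        for n in range(2500)]

# With pymongo each batch would be sent as an unordered bulk insert, e.g.:
#   coll.insert_many(batch, ordered=False)
sizes = [len(b) for b in batches(docs)]
print(sizes)  # -> [1000, 1000, 500]
```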
At the beginning (e.g. the first 100M rows) it inserts steadily at ~20-30k/sec, but then the speed suddenly drops below 1k/sec.
Is there anything I can do to keep the speed constant?
Below you can find the stats from MMS.
[MongoDB Stats]
[Collection Stats]
I'm using MongoDB 2.6.1 on a dev machine with Windows 8.1 (i7, 32 GB RAM, dedicated 2 TB SATA drive). (coalmee, May 21, 2014 at 11:33)
1 Answer
The MMS monitoring graphs show that your disk basically can't keep up with the volume of data you are trying to persist. The flush average is hitting over 100 seconds; since MongoDB flushes data to disk every 60 seconds by default, successive flushes queue up and then impact everything else. Basically you need more I/O to sustain the write workload you are performing, likely an SSD for that kind of sustained writing.
You can try throttling things by interspersing different write concerns to get a more consistent throughput without completely overwhelming the system (check out j: true, and a w greater than 1 if you are running a replica set).
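One way to intersperse write concerns, shown here as a minimal sketch: use a cheap acknowledged write for most batches, and block on a journaled write every Nth batch so the client cannot run further ahead than the disk can flush. The journal_every parameter and the dict representation are assumptions for illustration; with pymongo these would map to WriteConcern(w=1) and WriteConcern(w=1, j=True):

```python
def write_concern_for(batch_index, journal_every=10):
    """Pick a write concern per batch.

    Most batches use a plain acknowledged write (w=1). Every
    `journal_every`-th batch waits for the journal (j=True), which acts
    as a natural throttle on the insert loop.
    """
    if batch_index % journal_every == journal_every - 1:
        return {"w": 1, "j": True}   # block until the write is journaled
    return {"w": 1}                  # plain acknowledged write

concerns = [write_concern_for(i) for i in range(20)]
journaled = sum(1 for c in concerns if c.get("j"))
print(journaled)  # -> 2 (batches 9 and 19)
```

Tuning journal_every trades throughput for back-pressure: a smaller value throttles harder and smooths the curve, a larger value lets bursts run longer before the disk is consulted.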