Recently we had high CPU/memory and I/O usage on our MongoDB server. While checking the logs, all I found were some inserts during this period, and I noticed that most of the insert log entries have bytesRead in the storage section. So I suspect these reads caused the I/O, and caching the read data caused the high memory usage. After the insert spike, the I/O and CPU went down, but memory stayed the same until a restart resolved it. Is this disk read normal for an insert operation? We are using MongoDB v4.0 with the WiredTiger storage engine on a CentOS 7 VM.
2024-02-14T23:39:44.533+0800 I COMMAND [conn939845] insert db.user_log ninserted:1 keysInserted:11 numYields:0 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } storage:{ data: { bytesRead: 34390, timeReadingMicros: 140837 } } 141ms
2024-02-14T23:40:16.785+0800 I COMMAND [conn939845] insert db.user_log ninserted:1 keysInserted:11 numYields:0 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } storage:{ data: { bytesRead: 24150, timeReadingMicros: 506594 } } 507ms
1 Answer
Yes, in this case, that would be normal.
MongoDB uses WiredTiger for storage, which stores all of the data and indexes in B-tree structures. Updating a B-tree requires reading the root page of the tree, then internal pages depending on the tree depth, and finally the leaf page where the data will be stored.
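If you want to see this on your own collection, the per-index WiredTiger statistics include a tree-depth counter. A minimal mongo shell sketch, assuming your collection is db.user_log; the stat field names come from WiredTiger's output and may differ slightly between versions, so verify them on your build:

    // Inspect WiredTiger btree stats for each index on the collection.
    // "maximum tree depth" hints at how many page levels an insert traverses.
    var stats = db.user_log.stats({ indexDetails: true });
    for (var name in stats.indexDetails) {
        print(name + ": max tree depth = " +
              stats.indexDetails[name].btree["maximum tree depth"]);
    }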
keysInserted:11
indicates that inserting the document also required inserting 11 index keys. If those were 11 different indexes, that would mean reading root, internal, and leaf pages for each of those as well.
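To check how those 11 keys map onto indexes, you can list the indexes on the collection. A quick shell sketch, assuming db.user_log is the collection from your logs; note that a multikey index over an array field can account for several of the keys on its own:

    // Count the indexes and print their key patterns.
    // 11 keysInserted could mean 11 single-key indexes, or fewer indexes
    // where some are multikey (one key inserted per array element).
    print("index count: " + db.user_log.getIndexes().length);
    db.user_log.getIndexes().forEach(function (ix) { printjson(ix.key); });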
If any of those pages were already in the cache, that reduces the total amount that needs to be read from disk, so you may see significantly varying numbers for similar inserts.
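That caching behaviour also explains the memory staying high after the spike: WiredTiger keeps the pages it has read in its cache until eviction pressure pushes them out, so usage stays near the cache ceiling rather than dropping along with CPU and I/O. A shell sketch to compare current usage against the configured maximum (field names as reported by serverStatus; treat them as an assumption to verify on your version):

    // Compare current WiredTiger cache usage to its configured maximum.
    var c = db.serverStatus().wiredTiger.cache;
    print("bytes currently in the cache: " + c["bytes currently in the cache"]);
    print("maximum bytes configured:     " + c["maximum bytes configured"]);
    print("bytes read into cache:        " + c["bytes read into cache"]);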
Thanks for the comment, yes this makes sense for both the memory and disk usage spikes. – goodfella, Feb 26, 2024 at 4:21