My Postgres instance has restarted multiple times with out-of-memory errors, and I cannot figure out which process is consuming all the memory.
2020-09-03 04:54:14 PDT LOG:  could not fork new process for connection: Cannot allocate memory
TopMemoryContext: 799496 total in 13 blocks; 63080 free (17 chunks); 736416 used
  TopTransactionContext: 8192 total in 1 blocks; 7960 free (0 chunks); 232 used
Here are the current settings. The Postgres version is 10.5.12.
effective_cache_size | 16GB
maintenance_work_mem | 1GB
max_wal_size | 4GB
max_worker_processes | 8
temp_buffers | 8GB
work_mem | 256MB
shared_buffers | 32GB
max_connections | 3000
vm.overcommit_memory = 2
vm.overcommit_ratio = 80
-sh-4.2$ free -g
              total        used        free      shared  buff/cache   available
Mem:             62           9           0          36          52          16
Swap:              0           0           0
Can you please suggest whether any configuration changes are required?
1 Answer
If each of these 3000 connections used work_mem only once (the limit applies per node of the query plan, so a single database session can use it several times), that alone could consume almost 1 TB of RAM: 3000 × 256 MB ≈ 750 GB. I am not surprised that you are going OOM.
max_connections = 3000 is an insane setting. Do you have 3000 cores? Can your storage subsystem handle 3000 concurrent I/O requests?
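To see how many of those connections are actually in use at the same time, you can query the standard pg_stat_activity view, for example:

SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state;

If the number of genuinely active sessions is far below 3000, you do not need anywhere near that many server connections.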
I recommend that you use pgBouncer or another connection pool to reduce the number of database sessions.
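As a rough illustration (not a drop-in configuration), a minimal pgbouncer.ini could look like the sketch below; the database name, host, port and pool sizes are placeholders you would have to adapt:

[databases]
; placeholder connection string; adjust host/port/dbname for your setup
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling usually gives the best connection reuse
pool_mode = transaction
; clients connect to pgBouncer on port 6432 instead of Postgres directly
max_client_conn = 3000
; actual server connections opened per database/user pair
default_pool_size = 50

With something like this, the application can keep its 3000 client connections while Postgres only sees a few dozen sessions, and you can lower max_connections drastically.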
By the way, vm.overcommit_ratio should be 100 if you have no swap; otherwise you cannot allocate more than 80% of the available RAM (but perhaps that's on purpose).
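For background: with vm.overcommit_memory = 2 the kernel caps committed memory at roughly swap + RAM × vm.overcommit_ratio / 100, so with 62 GB of RAM, no swap and a ratio of 80 the limit is only about 50 GB. If you want the full RAM to be usable, a change along these lines would do it (the file path is just an example):

# /etc/sysctl.d/99-overcommit.conf
vm.overcommit_ratio = 100

# apply without a reboot
sysctl -p /etc/sysctl.d/99-overcommit.conf

You can check the current limit and usage in /proc/meminfo (CommitLimit and Committed_AS).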