
I have set shared_buffers to 256MB.

Using the pg_buffercache extension, I see that all of the buffers are in use:

SELECT pg_size_pretty(COUNT(*)*8192) as used FROM pg_buffercache;
 used
----------------
 256 MB

Now the problem is that when I use docker stats to view the database container memory it shows:

NAME CPU % MEM USAGE / LIMIT MEM % 
db 0.00% 31.07MiB / 1GiB 3.03%

Where is the shared_buffers memory stored? Shouldn't it be in RAM and displayed in docker stats?

asked Aug 4, 2021 at 22:10

1 Answer


This has nothing to do with work_mem, of course.

pg_buffercache has one row for every buffer slot, whether or not it has ever been used for anything. Use count(relfilenode), not count(*), to count just the slots that are actually in use.
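For example, a minimal sketch of the corrected query (assuming the pg_buffercache extension is installed and the default 8 kB block size):

```sql
-- count(relfilenode) skips rows where relfilenode IS NULL,
-- i.e. buffer slots that have never held a page
SELECT pg_size_pretty(count(relfilenode) * 8192) AS used
FROM pg_buffercache;
```

On a freshly started server this will report far less than shared_buffers, matching the low figure shown by docker stats.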

I don't think there is any way with pg_buffercache to get the high-water mark; the closest might be something like:

select max(bufferid) from pg_buffercache where relfilenode is not null;

But that will give too low an answer (for your purposes) if, for example, the highest-numbered bufferids belonged to a table that got dropped or truncated.

answered Aug 5, 2021 at 1:09
  • Note that if I do a group by relfilenode: SELECT relfilenode::regclass::text, COUNT(*)*8192 FROM pse.pg_buffercache GROUP BY relfilenode::regclass::text I get usage by table, and if I sum them I get the same issue: higher than the reported RAM usage. Commented Aug 5, 2021 at 8:07
  • GROUP BY doesn't remove NULLs, it just generates a group for them. Commented Aug 5, 2021 at 15:00
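Following the comment above, a sketch of the grouped query with the NULL group excluded (the pse schema prefix from the comment is omitted here; adjust for your installation):

```sql
-- Per-relation buffer usage; the WHERE clause drops the group of
-- never-used slots that GROUP BY would otherwise keep as a NULL row
SELECT relfilenode::regclass::text AS relation,
       pg_size_pretty(count(*) * 8192) AS used
FROM pg_buffercache
WHERE relfilenode IS NOT NULL
GROUP BY relfilenode::regclass::text
ORDER BY count(*) DESC;
```

Summing these per-relation figures should now agree with count(relfilenode) over the whole view.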
