
Is there some way to select the page size of a Postgres database?

I know that the page size is usually 8K. I also read this email thread from 2003. I don't think pg_controldata will work, as I don't have access to the filesystem that the database is hosted on.

asked Jun 5, 2019 at 23:03

1 Answer

If you don't have access to pg_controldata ...

There is a simple way:

test=# SELECT current_setting('block_size');
 current_setting
-----------------
 8192
(1 row)
The manual:

The following "parameters" are read-only, and are determined when PostgreSQL is compiled or when it is installed. [...]

block_size (integer)

Reports the size of a disk block. It is determined by the value of BLCKSZ when building the server. The default value is 8192 bytes. The meaning of some configuration variables (such as shared_buffers) is influenced by block_size. See Section 19.4 for information.
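
Equivalently, the same read-only parameter can be queried with `SHOW`, or cast to an integer when you need it in a calculation (a minimal sketch of both forms):

```sql
-- Both report the compile-time BLCKSZ value.
SHOW block_size;

-- Cast to integer, e.g. for size arithmetic:
SELECT current_setting('block_size')::int AS block_size;
```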

To verify

Create a dummy table with a single small row, so that exactly one data page is allocated. Then check the size of the "main" relation fork with pg_relation_size():

test=# CREATE TEMP TABLE foo AS SELECT 1 AS id;
SELECT 1
test=# SELECT pg_size_pretty(pg_relation_size('pg_temp.foo'));
 pg_size_pretty
----------------
 8192 bytes
(1 row)

So the page size is 8 kB, which is hardly surprising, as you mentioned. The manual:

Every table and index is stored as an array of pages of a fixed size (usually 8 kB, although a different page size can be selected when compiling the server).
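
The other compile-time size parameters can be listed from the pg_settings view as well (parameter names as documented in the PostgreSQL manual; all of them are read-only):

```sql
-- Compile-time size parameters, fixed when the server was built.
SELECT name, setting, unit
FROM   pg_settings
WHERE  name IN ('block_size', 'wal_block_size',
                'segment_size', 'wal_segment_size');
```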

answered Jun 6, 2019 at 0:02
  • +1 - but just as a matter of interest, why would one compile the server with a different page size? Commented Jun 6, 2019 at 5:47
  • Never done this myself, but I imagine if you have large rows (that cannot be "toasted") it might save a bit of overhead. Not sure it would be worth the trouble. Commented Jun 6, 2019 at 11:12
  • @Vérace A btree index entry is limited to about 1/3 of the block size. That could be one motivation to increase it. You might also hope to get better performance, but I've never seen that pan out myself. Commented Jun 6, 2019 at 15:00
  • @ErwinBrandstetter - Thanks for that Erwin. You might be interested in a bonus that I'm offering Postgresql related - it should be right up your alley! Commented Jun 6, 2019 at 15:03
  • 1
    One plausible reason is if you want to match underlying storage block size (and not vice versa, for one reason or another). Like RAID stripe size, or ZFS recordsize. Also compression ratio might increase, at least for indexes, if data is sorted in larger chunks before compression. Commented Nov 5, 2019 at 17:08
