Is there any way to determine a rough estimate of the size (on disk) of an index before creating it? The size of the table and of each column is known. I am particularly interested in GIN indexes. Also, any information on how the sizes of different index types relate to each other is appreciated. Is there a general rule of thumb, like a GIN index is always bigger than a B-tree index? Or is it too dependent on the data size and distribution?
To clarify: I am not looking for a tool. I am happy to do it by hand.
1 Answer
There is no general way to answer this (other than trying it on your test server and seeing). GIN supports many different operator classes, such as tsvector full-text search or trigrams, and they have different characteristics. In newer versions, GIN indexes use compression, which can be pretty impressive when the same key value shows up over and over again. But that level of compression depends on the ordering of the rows.
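If you do want to try it and see, a minimal sketch of that measurement, assuming a scratch table named my_table with a text column doc and using pg_trgm as one example GIN operator class (all of these names are illustrative, not from any particular setup):

    -- Rough sketch: build the candidate index on a test copy and measure it.
    -- pg_trgm is used here only as an example of a GIN operator class.
    CREATE EXTENSION IF NOT EXISTS pg_trgm;

    CREATE INDEX my_table_doc_trgm ON my_table USING gin (doc gin_trgm_ops);

    -- On-disk size of the index that was actually built.
    SELECT pg_size_pretty(pg_relation_size('my_table_doc_trgm'));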
For example, if I index a single text column with a lot of duplicate values (~50 million rows, with ~1.5 million distinct values, using the btree_gin extension for the GIN), I get 2010 MB for the B-tree index and 435 MB for the GIN index. So no, the GIN is not always bigger. But in general (i.e. other than with btree_gin) you don't index the same types of data with GIN as you do with B-tree, so a direct comparison of sizes does not make much sense.
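For reference, a sketch of that kind of side-by-side comparison, assuming a table t with a heavily duplicated text column val (the table and column names are placeholders, not the actual test table):

    -- Assumes a table t with a text column val containing many duplicates.
    CREATE EXTENSION IF NOT EXISTS btree_gin;

    CREATE INDEX t_val_btree ON t USING btree (val);
    CREATE INDEX t_val_gin   ON t USING gin (val);

    -- Compare the on-disk sizes of the two indexes.
    SELECT pg_size_pretty(pg_relation_size('t_val_btree')) AS btree_size,
           pg_size_pretty(pg_relation_size('t_val_gin'))   AS gin_size;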