PostgreSQL v8.4 and v9.2 (yes, I know they're archaic and out of support, but there's nothing I can do about it).
Some of the tables in the database which I manage (but don't design) have btree indexes on CHARACTER VARYING(256) fields. Even where the field values aren't blank, the data in them is never more than 12 characters long.
Yes, the physical table storage for these columns is highly compressed, but what about the btree indexes? Would the indexes be more efficient if the columns were altered down to VARCHAR(12)?
Thanks
1 Answer
"Would the indexes be more efficient if the columns were altered down to VARCHAR(12)?"
No, and that is easy to test:
create table foo(val1 varchar(256), val2 varchar(12));
create index i_foo1 on foo(val1);
create index i_foo2 on foo(val2);
insert into foo(val1,val2) select 'ABCDEFGHIJ','ABCDEFGHIJ' from generate_series(1,10000);
vacuum full;
select relname, pg_size_pretty(pg_relation_size(c.oid))
from pg_class c
join pg_namespace n on n.oid = c.relnamespace
where nspname like 'fiddle%';
relname | pg_size_pretty
:------ | :-------------
foo     | 512 kB
i_foo1  | 392 kB
i_foo2  | 392 kB
dbfiddle here
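If you want to run the same check against your own schema, a query along these lines works (your_table, your_column, and your_index are placeholders for your actual object names):

-- Placeholders: substitute your real table, column, and index names.
select max(length(your_column))                                 as longest_value,
       pg_size_pretty(pg_relation_size('your_table'::regclass)) as table_size,
       pg_size_pretty(pg_relation_size('your_index'::regclass)) as index_size
from your_table;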
"Yes, the physical table storage for these columns is highly compressed..."
It isn't compressed (unless the string is long); it's just that for varchar (unlike char) the length specifier is merely a constraint that determines the maximum length.
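As a minimal sketch (reusing the foo table from the test above): an over-long value is rejected by the constraint, but values that fit are stored identically regardless of the declared maximum, as pg_column_size shows.

-- 13 characters: fails with "value too long for type character varying(12)"
insert into foo(val2) values ('ABCDEFGHIJKLM');
-- Both columns hold the same 10-character string, so the stored sizes are identical;
-- the declared maximum (256 vs 12) makes no difference.
select pg_column_size(val1) as bytes_val1, pg_column_size(val2) as bytes_val2
from foo limit 1;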
I knew that's what happened on tables, but haven't seen anything regarding indexes. Thanks. – RonJohn, Mar 15, 2018 at 9:58