I'm building a search interface for a website that currently searches through approximately 2 million pieces of real estate with approximately 60 million attributes (address, city, state, ZIP, beds, baths, assessed value, etc.).
The tables are laid out so that there is a LEAD table (2 million rows) with one entry for each piece of real estate and a corresponding LEAD_DETAIL table (60 million rows) that holds name/value pairs for the attributes.
The LEAD table has the following structure:
LEAD_ID BIGINT PRIMARY KEY
LEAD_TYPE_NAME VARCHAR(50)
CREATED_DATE_TIME TIMESTAMP
MODIFIED_DATE_TIME TIMESTAMP
RECEIVED_EMAIL_ID BIGINT
NEW_LEAD SMALLINT
FDOLD_ID INTEGER
And the LEAD_DETAIL table has the following structure:
LEAD_DETAIL_ID BIGINT PRIMARY KEY
LEAD_ID BIGINT
NAME VARCHAR(100)
VALUE_STRING VARCHAR(1000)
VALUE_NUMERIC NUMERIC
VALUE_DATE DATE
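For reference, the equivalent DDL looks roughly like this (a sketch reconstructed from the column lists above; the real schema's constraints and defaults may differ):

-- Sketch of the schema described above; column names and types match the
-- lists, but the real database's constraints and defaults may differ.
CREATE TABLE lead (
    lead_id            BIGINT PRIMARY KEY,
    lead_type_name     VARCHAR(50),
    created_date_time  TIMESTAMP,
    modified_date_time TIMESTAMP,
    received_email_id  BIGINT,
    new_lead           SMALLINT,
    fdold_id           INTEGER
);

CREATE TABLE lead_detail (
    lead_detail_id BIGINT PRIMARY KEY,
    lead_id        BIGINT REFERENCES lead (lead_id),  -- foreign key assumed
    name           VARCHAR(100),
    value_string   VARCHAR(1000),
    value_numeric  NUMERIC,
    value_date     DATE
);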
The LEAD table has the following indexes (I've tried lots of indexing strategies to try and get this to work):
"lead_pkey" PRIMARY KEY, btree (lead_id)
"lead_created_date_time_idx" btree (created_date_time)
"lead_created_date_time_idx1" btree (created_date_time DESC)
"lead_created_date_time_lead_id_idx" btree (created_date_time DESC, lead_id)
"lead_fdold_id_idx" btree (fdold_id)
"lead_lead_id_created_date_time_idx" btree (lead_id, created_date_time DESC)
"lead_lead_id_lead_type_name_created_date_time_idx" btree (lead_id, lead_type_name, created_date_time DESC)
"lead_modified_date_time_idx" btree (modified_date_time)
And the LEAD_DETAIL table has the following indexes:
"lead_detail_pkey" PRIMARY KEY, btree (lead_detail_id)
"lead_detail_lead_id_name_key" UNIQUE CONSTRAINT, btree (lead_id, name)
"lead_detail_name_value_date_idx" btree (name, value_date)
"lead_detail_name_value_numeric_idx" btree (name, value_numeric)
"lead_detail_name_value_string_idx" btree (name, value_string)
"lead_detail_upper_value_string_idx" btree (name, upper(value_string::text))
Here is the query that I'm generating:
SELECT lead.lead_id
FROM lead, lead_detail
WHERE lead_detail.name = 'State'
AND UPPER(lead_detail.value_string) = 'FL'
AND lead.lead_id = lead_detail.lead_id
ORDER BY lead.created_date_time DESC
LIMIT 5000
Now, the distribution of the data is such that currently every lead is in the state of FL, so this is not exactly a highly "selective" query. More selective queries, e.g. for a small county, are quite fast.
Here is the explain plan on explain.depesz.com
My question is: given that we are not preventing users from issuing non-selective queries, how can we quickly return the top-N results when the result set is large? I was hoping to use an index on the CREATED_DATE_TIME column in conjunction with an ORDER BY and a LIMIT to produce the results quickly.
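For example, I'd hoped a semi-join rewrite along these lines might let the planner walk the created_date_time index backwards and stop after 5000 matches (just a sketch; I haven't confirmed that the planner actually chooses that plan):

-- Sketch: scan lead in created_date_time DESC order, probe lead_detail
-- per row via the (lead_id, name) unique index, and stop at 5000 matches.
SELECT l.lead_id
FROM lead l
WHERE EXISTS (
    SELECT 1
    FROM lead_detail d
    WHERE d.lead_id = l.lead_id
      AND d.name = 'State'
      AND upper(d.value_string) = 'FL'
)
ORDER BY l.created_date_time DESC
LIMIT 5000;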
I've been working on this for hours and could really use some help. Thanks in advance!
1 Answer
Looks like (I'm by no means a Postgres expert) you have about two million rows in the LEAD_DETAIL table that satisfy the state condition... those two million rows are retrieved, hash joined to leads, and the result sorted to return the top 5000 rows. Could you move the state to the leads table, so that it becomes a column there? Create an index on (upper(state), created_date_time) and try again (rewriting your query, of course, to use the new state column in the leads table).
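Something along these lines, as a sketch (the column name state is just an example, and on tables this size the backfill would need to run during a quiet period):

-- Sketch only: promote the 'State' attribute to a real column on lead.
ALTER TABLE lead ADD COLUMN state VARCHAR(1000);

UPDATE lead l
SET    state = d.value_string
FROM   lead_detail d
WHERE  d.lead_id = l.lead_id
  AND  d.name = 'State';

CREATE INDEX lead_upper_state_created_idx
    ON lead (upper(state), created_date_time DESC);

-- The search query would then become:
SELECT lead_id
FROM   lead
WHERE  upper(state) = 'FL'
ORDER  BY created_date_time DESC
LIMIT  5000;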
And by the way, most of your indexes on the leads table are useless. Why have both lead_created_date_time_idx and lead_created_date_time_idx1? They are the same; an ascending index can also be used for DESC sorts. Furthermore, why include lead_id in indexes (except the one supporting the primary key constraint)?
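Concretely, that cleanup would be something like this (a sketch; check that nothing else relies on these indexes before dropping anything):

-- Duplicate of lead_created_date_time_idx: a plain ascending btree
-- can be scanned backwards to satisfy DESC ordering.
DROP INDEX lead_created_date_time_idx1;

-- Indexes led by lead_id add little beyond the primary key:
DROP INDEX lead_lead_id_created_date_time_idx;
DROP INDEX lead_lead_id_lead_type_name_created_date_time_idx;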
- I see what you mean about moving the state column to the LEAD table, but then the same logic would have to be followed for all of the other non-selective columns (e.g. beds, baths). I'd like to try and find a solution that avoids changing the table structures, if possible, because there is a lot of other existing software that uses these tables. Is there a way to speed up hash joins, maybe? – Dan, May 3, 2015 at 21:22
- It's very hard to expect any kind of performance when the database needs to process several million rows to give you a result... the best that can be done about hash joins and sorts is to ensure the database can do the entire operation in memory, without writing temporary data to disk. If you need performance, you must cut down the number of rows drastically in the early stages of query execution, and using a selective index is the only way I know of to achieve that. – zgguy, May 3, 2015 at 21:40
- I tend to agree with everything that you're saying here. I suspect that either the table layout needs to change or we need to do a certain amount of pre-computation and store that for answering queries, or something along those lines. Another option is simply disallowing non-selective queries ("please add more criteria"). But I was hoping for some low-hanging fruit. Maybe some advice on tuning PostgreSQL settings, etc. Maybe it's possible to do massive hash joins very quickly, etc. – Dan, May 3, 2015 at 21:53
- @Dan D. did you solve your problem? – zgguy, May 5, 2015 at 6:12
- I decided to use a materialized view as an alternative way to get at this data faster (as we can tolerate some staleness in the results; see the sketch at the end). Thanks for checking in! – Dan, May 8, 2015 at 1:02
- An index on (lead_id, name, upper(value_string::text))? And do you really need 5000 results?
- lead_detail - have you run an (auto)analyze on that table recently? And which version of PostgreSQL are you using?
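On the in-memory point raised in the comments above, a minimal sketch of what that tuning might look like, assuming the server has RAM to spare (the 256MB figure is only an example; the right value depends on hardware and concurrency):

-- Sketch: raise work_mem for this session so the hash join and sort are
-- less likely to spill to disk, then re-check the plan.
SET work_mem = '256MB';

EXPLAIN (ANALYZE, BUFFERS)
SELECT lead.lead_id
FROM lead, lead_detail
WHERE lead_detail.name = 'State'
  AND UPPER(lead_detail.value_string) = 'FL'
  AND lead.lead_id = lead_detail.lead_id
ORDER BY lead.created_date_time DESC
LIMIT 5000;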
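And for the materialized-view approach Dan mentions in the comment thread above, a minimal sketch assuming only the 'State' attribute is pre-joined (the view name, index name, and refresh cadence are placeholders):

-- Sketch: pre-join the 'State' attribute with created_date_time so
-- non-selective searches hit a plain indexed column.
CREATE MATERIALIZED VIEW lead_search AS
SELECT l.lead_id,
       l.created_date_time,
       upper(d.value_string) AS state
FROM   lead l
JOIN   lead_detail d
       ON  d.lead_id = l.lead_id
       AND d.name = 'State';

CREATE INDEX lead_search_state_created_idx
    ON lead_search (state, created_date_time DESC);

-- Refresh as often as the tolerated staleness allows:
REFRESH MATERIALIZED VIEW lead_search;

-- The search then becomes:
SELECT lead_id
FROM   lead_search
WHERE  state = 'FL'
ORDER  BY created_date_time DESC
LIMIT  5000;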