I have a table station_logs in a PostgreSQL 9.6 database:
    Column     |            Type             | Modifiers
---------------+-----------------------------+-----------
 id            | bigint                      | bigserial
 station_id    | integer                     | not null
 submitted_at  | timestamp without time zone |
 level_sensor  | double precision            |
Indexes:
    "station_logs_pkey" PRIMARY KEY, btree (id)
    "uniq_sid_sat" UNIQUE CONSTRAINT, btree (station_id, submitted_at)
I'm trying to get the last level_sensor value based on submitted_at, for each station_id. There are around 400 unique station_id values, and around 20k rows per day per station_id.
Before creating index:
EXPLAIN ANALYZE
SELECT DISTINCT ON(station_id) station_id, submitted_at, level_sensor
FROM station_logs ORDER BY station_id, submitted_at DESC;
Unique  (cost=4347852.14..4450301.72 rows=89 width=20) (actual time=22202.080..27619.167 rows=98 loops=1)
  ->  Sort  (cost=4347852.14..4399076.93 rows=20489916 width=20) (actual time=22202.077..26540.827 rows=20489812 loops=1)
        Sort Key: station_id, submitted_at DESC
        Sort Method: external merge  Disk: 681040kB
        ->  Seq Scan on station_logs  (cost=0.00..598895.16 rows=20489916 width=20) (actual time=0.023..3443.587 rows=20489812 loops=$
Planning time: 0.072 ms
Execution time: 27690.644 ms
Creating index:
CREATE INDEX station_id__submitted_at ON station_logs(station_id, submitted_at DESC);
After creating index, for the same query:
Unique  (cost=0.56..2156367.51 rows=89 width=20) (actual time=0.184..16263.413 rows=98 loops=1)
  ->  Index Scan using station_id__submitted_at on station_logs  (cost=0.56..2105142.98 rows=20489812 width=20) (actual time=0.181..1$
Planning time: 0.206 ms
Execution time: 16263.490 ms
Is there a way to make this query faster, say around 1 second? 16 seconds is still too much.
- How many distinct station ids are there, i.e. how many rows does the query return? And what version of Postgres? – ypercubeᵀᴹ, Jun 24, 2017 at 10:14
- Postgres 9.6, around 400 unique station_id, and around 20k records per day per station_id. – Kokizzu, Jun 24, 2017 at 10:18
- This query returns a "last level_sensor value based on submitted_at, for each station_id". DISTINCT ON involves a random choice except in cases where you don't need it. – philipxy, Jun 29, 2017 at 21:12
2 Answers
For only 400 stations, this query will be massively faster:
SELECT s.station_id, l.submitted_at, l.level_sensor
FROM station s
CROSS JOIN LATERAL (
SELECT submitted_at, level_sensor
FROM station_logs
WHERE station_id = s.station_id
ORDER BY submitted_at DESC NULLS LAST
LIMIT 1
) l;
dbfiddle here (comparing plans for this query, Abelisto's alternative and your original)
Resulting EXPLAIN ANALYZE as provided by the OP:
Nested Loop (cost=0.56..356.65 rows=102 width=20) (actual time=0.034..0.979 rows=98 loops=1)
-> Seq Scan on stations s (cost=0.00..3.02 rows=102 width=4) (actual time=0.009..0.016 rows=102 loops=1)
-> Limit (cost=0.56..3.45 rows=1 width=16) (actual time=0.009..0.009 rows=1 loops=102)
-> Index Scan using station_id__submitted_at on station_logs (cost=0.56..664062.38 rows=230223 width=16) (actual time=0.009$
Index Cond: (station_id = s.id)
Planning time: 0.542 ms
Execution time: 1.013 ms  -- !!
The only index you need is the one you created: station_id__submitted_at. The UNIQUE constraint uniq_sid_sat basically does the job as well. Maintaining both seems like a waste of disk space and write performance.
I added NULLS LAST to ORDER BY in the query because submitted_at isn't defined NOT NULL. Ideally, if applicable, add a NOT NULL constraint to the column submitted_at, drop the additional index, and remove NULLS LAST from the query.
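(A minimal sketch of that cleanup, assuming submitted_at contains no NULL values yet:)

ALTER TABLE station_logs ALTER COLUMN submitted_at SET NOT NULL;
DROP INDEX station_id__submitted_at;  -- the index backing uniq_sid_sat now serves the query (backward scan)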
If submitted_at can be NULL, create this UNIQUE index to replace both your current index and unique constraint:
CREATE UNIQUE INDEX station_logs_uni ON station_logs(station_id, submitted_at DESC NULLS LAST);
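(Then drop the now redundant objects - a sketch; create the new index first, as above, so the table is never left without the unique guarantee:)

ALTER TABLE station_logs DROP CONSTRAINT uniq_sid_sat;  -- also drops its backing index
DROP INDEX station_id__submitted_at;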
Consider:
- Does creating a unique constraint on a Postgres column remove the need to index it?
- How does PostgreSQL enforce the UNIQUE constraint / what type of index does it use?
This is assuming a separate table station with one row per relevant station_id (typically the PK) - which you should have either way. If you don't have it, create it. Again, very fast with this rCTE technique:
CREATE TABLE station AS
WITH RECURSIVE cte AS (
   (  -- parentheses required: this leg has its own ORDER BY / LIMIT
   SELECT station_id
   FROM   station_logs
   ORDER  BY station_id
   LIMIT  1
   )
   UNION ALL
   SELECT l.station_id
   FROM   cte c
   , LATERAL (  -- one cheap index probe per station
   SELECT station_id
   FROM   station_logs
   WHERE  station_id > c.station_id  -- next higher station_id
   ORDER  BY station_id
   LIMIT  1
   ) l
   )
TABLE cte;
I use that in the fiddle as well. You could use a similar query to solve your task directly, without a station table - if you can't be convinced to create it.
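(A sketch of that direct variant - the same loose index scan emulation, returning the latest row per station_id straight from station_logs:)

WITH RECURSIVE cte AS (
   (
   SELECT station_id, submitted_at, level_sensor
   FROM   station_logs
   ORDER  BY station_id, submitted_at DESC NULLS LAST
   LIMIT  1
   )
   UNION ALL
   SELECT l.*
   FROM   cte c
   , LATERAL (
   SELECT station_id, submitted_at, level_sensor
   FROM   station_logs
   WHERE  station_id > c.station_id  -- jump to the next station ...
   ORDER  BY station_id, submitted_at DESC NULLS LAST
   LIMIT  1                          -- ... and take its latest row
   ) l
   )
TABLE cte;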
Detailed instructions, explanation and alternatives:
Optimize index
Your query should be very fast now. Only if you still need to optimize read performance ...
It might make sense to add level_sensor as a last column to the index to allow index-only scans, like joanolo commented.
Con: It makes the index bigger - which adds a little cost to all queries using it.
Pro: If you actually get index-only scans out of it, the query at hand does not have to visit heap pages at all, which makes it about twice as fast. But that may be an insubstantial gain now that the query is already very fast.
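(If you do try it, the widened index would look like this - a sketch with a hypothetical name; Postgres 9.6 has no INCLUDE clause yet, so level_sensor has to be a regular key column:)

CREATE INDEX station_id__submitted_at__level_sensor
ON station_logs (station_id, submitted_at DESC NULLS LAST, level_sensor);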
However, I don't expect that to work for your case. You mentioned:

... around 20k rows per day per station_id.

Typically, that would indicate unceasing write load (1 per station_id every 5 seconds). And you are interested in the latest row. Index-only scans only work for heap pages that are visible to all transactions (the bit in the visibility map is set). You would have to run extremely aggressive VACUUM settings for the table to keep up with the write load, and it would still not work most of the time. If my assumptions are correct, index-only scans are out; don't add level_sensor to the index.
OTOH, if my assumptions hold and your table is growing very big, a BRIN index might help.
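(A sketch with a hypothetical name; a BRIN index only pays off if the physical row order correlates well with submitted_at, which is typical for append-only logs:)

CREATE INDEX station_logs_submitted_at_brin
ON station_logs USING brin (submitted_at);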
Or, even more specialized and more efficient: A partial index for only the latest additions to cut off the bulk of irrelevant rows:
CREATE INDEX station_id__submitted_at_recent_idx ON station_logs(station_id, submitted_at DESC NULLS LAST)
WHERE submitted_at > '2017-06-24 00:00';
Choose a timestamp for which you know that younger rows must exist.
You have to add a matching WHERE condition to all queries, like:
...
WHERE station_id = s.station_id
AND submitted_at > '2017-06-24 00:00'
...
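(For illustration, the complete LATERAL query with the matching predicate - a sketch reusing the cutoff timestamp from the partial index above:)

SELECT s.station_id, l.submitted_at, l.level_sensor
FROM   station s
CROSS  JOIN LATERAL (
   SELECT submitted_at, level_sensor
   FROM   station_logs
   WHERE  station_id = s.station_id
   AND    submitted_at > '2017-06-24 00:00'  -- must match the index predicate
   ORDER  BY submitted_at DESC NULLS LAST
   LIMIT  1
   ) l;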
You have to adapt index and query from time to time.
Related answers with more details:
- Anytime I know that I want a nested loop (often), using LATERAL is a performance boost for a number of situations. – Paul Draper, Feb 29, 2020 at 0:08
Try the classic way:
create index idx_station_logs__station_id on station_logs(station_id);
create index idx_station_logs__submitted_at on station_logs(submitted_at);
analyse station_logs;
with t as (
select station_id, max(submitted_at) submitted_at
from station_logs
group by station_id)
select *
from t join station_logs l on (
l.station_id = t.station_id and l.submitted_at = t.submitted_at);
EXPLAIN ANALYZE as provided by the OP:
Nested Loop (cost=701344.63..702110.58 rows=4 width=155) (actual time=6253.062..6253.544 rows=98 loops=1)
CTE t
-> HashAggregate (cost=701343.18..701344.07 rows=89 width=12) (actual time=6253.042..6253.069 rows=98 loops=1)
Group Key: station_logs.station_id
-> Seq Scan on station_logs (cost=0.00..598894.12 rows=20489812 width=12) (actual time=0.034..1841.848 rows=20489812 loop$
-> CTE Scan on t (cost=0.00..1.78 rows=89 width=12) (actual time=6253.047..6253.085 rows=98 loops=1)
-> Index Scan using station_id__submitted_at on station_logs l (cost=0.56..8.58 rows=1 width=143) (actual time=0.004..0.004 rows=$
Index Cond: ((station_id = t.station_id) AND (submitted_at = t.submitted_at))
Planning time: 0.542 ms
Execution time: 6253.701 ms