`0002_how_to_troubleshoot_and_speedup_postgres_restarts.md` (2 additions, 2 deletions)
@@ -38,7 +38,7 @@ The second reason – a lot of dirty buffers in the buffer pool – is less triv
- checkpoint tuning was performed in favor of fewer overheads of bulk random writes and fewer full-page writes (usually meaning that `max_wal_size` and `checkpoint_timeout` are increased)
- the latest checkpoint happened quite long ago (can be seen in PG logs in `log_checkpoint = on`, which is recommended to have in most cases).
-The amount of dirty buffers is quite easy to observe, using extension pg_buffercache (standard contrib modele) and this query (may take significant time; see [the docs](https://postgresql.org/docs/current/pgbuffercache.html)):
+The amount of dirty buffers is quite easy to observe, using extension pg_buffercache (standard contrib module) and this query (may take significant time; see [the docs](https://postgresql.org/docs/current/pgbuffercache.html)):
```sql
select count(*), pg_size_pretty(count(*) * 8 * 1024)
from pg_buffercache
```
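The hunk's context ends mid-query. As a hedged sketch (based on the standard `pg_buffercache` view with 8 KiB pages, not on this repo's file), the dirty-buffer breakdown the article is driving at likely looks like:

```sql
-- sketch: dirty vs clean buffer counts and their total size;
-- requires: create extension pg_buffercache;
select isdirty,
       count(*),
       pg_size_pretty(count(*) * 8 * 1024)
from pg_buffercache
group by 1;
```

The same `group by isdirty` form appears in the simulation steps of `0003_how_to_troubleshoot_long_startup.md` below.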
@@ -82,4 +82,4 @@ Interestingly, slow/failing `archive_command` can cause longer downtime during s
That's it. Note that we didn't cover various timeouts (e.g., pg_ctl's option `--timeout` and wait behavior `-w`, `-W`, see [Postgres docs](https://postgresql.org/docs/current/app-pg-ctl.html)) here and just discussed what can cause delays in shutdown/restart attempts.
-Hope it was helpful - as usual, [subscribe](https://twitter.com/samokhvalov/), like, share, and comment! 💙
+Hope it was helpful - as usual, [subscribe](https://twitter.com/samokhvalov/), like, share, and comment! 💙
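Related to the restart-speed discussion in this file: a common mitigation when many dirty buffers would slow the shutdown checkpoint (a sketch, not part of this diff) is to flush them explicitly right before the planned restart:

```sql
-- run as superuser shortly before a planned restart: flushes dirty buffers now,
-- leaving the implicit shutdown checkpoint very little work to do
checkpoint;
```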
`0003_how_to_troubleshoot_long_startup.md` (1 addition, 1 deletion)
@@ -159,7 +159,7 @@ Bonus: how to simulate long startup / REDO time:
1. Increase the distance between checkpoints by raising `max_wal_size` and `checkpoint_timeout` (say, `'100GB'` and `'60min'`)
2. Create a large table `t1` (say, 10-100M rows): `create table t1 as select i, random() from generate_series(1, 100000000) i;`
3. Execute a long transaction deleting data from `t1` (not necessary to finish it): `begin; delete from t1;`
-4. Observe the amount of dirity buffers with extension `pg_buffercache`:
+4. Observe the amount of dirty buffers with extension `pg_buffercache`:
- create extension `pg_buffercache`;
- `select isdirty, count(*), pg_size_pretty(count(*) * 8 * 1024) from pg_buffercache group by 1 \watch`
5. When the total size of dirty buffers reaches a few GiB, intentionally crash your server, sending `kill -9 <pid>` using PID of any Postgres backend process.
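Step 1 of the simulation above can be sketched with `ALTER SYSTEM` (assumes superuser; both parameters take effect on a config reload):

```sql
-- widen the checkpoint distance so dirty buffers can accumulate (simulation only;
-- revert these after the experiment with ALTER SYSTEM RESET ...)
alter system set max_wal_size = '100GB';
alter system set checkpoint_timeout = '60min';
select pg_reload_conf();
```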
`0004_tuple_sparsenes.md` (1 addition, 1 deletion)
@@ -136,7 +136,7 @@ Thus, the Postgres executor must handle 88 KiB to return 317 bytes – this is f
- Index maintenance: bloat control as well + regular reindexing, because index health declines over time even if autovacuum is well-tuned (btree health degradation rates improved in PG14, but those optimizations do not eliminate the need to reindex on a regular basis in heavily loaded systems).
- Partitioning: one of the benefits of partitioning is improved data locality.
-**Option 2.** Use index-only scans instead of index scans. This can be achieved by using mutli-column indexes or covering indexes, to include all the columns needed for our query. For our example:
+**Option 2.** Use index-only scans instead of index scans. This can be achieved by using multi-column indexes or covering indexes, to include all the columns needed for our query. For our example:
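To illustrate the Option 2 being edited here, a hypothetical covering index (table and column names are invented for the sketch, not taken from the article):

```sql
-- include the payload column so a query reading only (id, status)
-- can be served by an index-only scan instead of an index scan
create index on t1 (id) include (status);
-- keep the visibility map fresh, otherwise heap fetches creep back in
vacuum analyze t1;
```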
`0010_flamegraphs_for_postgres.md` (2 additions, 2 deletions)
@@ -180,11 +180,11 @@ where i between 1000 and 2000;
(5 rows)
```
-In this case, the planning time is really low, sub-millisecond – but I encountered with cases, when planning happened to be extremely slow, many seconds or even dozens of seconds. And it turned out (thanks to flamegraphs!) that analysing the Merge Join paths was the reason, so with "set enable_mergejoin = off" the planning time dropped to very low, sane values. But this is another story.
+In this case, the planning time is really low, sub-millisecond – but I encountered with cases, when planning happened to be extremely slow, many seconds or even dozens of seconds. And it turned out (thanks to flamegraphs!) that analyzing the Merge Join paths was the reason, so with "set enable_mergejoin = off" the planning time dropped to very low, sane values. But this is another story.
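The diagnostic described in that paragraph can be sketched as follows (session-level only; `t1`/`t2` are hypothetical tables used for illustration):

```sql
-- session-level: merge-join paths are skipped during planning
set enable_mergejoin = off;
-- SUMMARY ON adds the "Planning Time" line to plain EXPLAIN output,
-- so the before/after planning cost can be compared without executing the query
explain (summary on) select * from t1 join t2 using (id);
```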
## Some good materials
- Brendan Gregg's books: "Systems Performance" and "BPF Performance Tools"
-- Brendant Gregg's talks – for example, ["eBPF: Fueling New Flame Graphs & more • Brendan Gregg"](https://youtube.com/watch?v=HKQR7wVapgk) (video, 67 min)
+- Brendan Gregg's talks – for example, ["eBPF: Fueling New Flame Graphs & more • Brendan Gregg"](https://youtube.com/watch?v=HKQR7wVapgk) (video, 67 min)
- [Profiling with perf](https://wiki.postgresql.org/wiki/Profiling_with_perf) (Postgres wiki)
`0012_from_pgss_to_explain__how_to_find_query_examples.md` (1 addition, 1 deletion)
@@ -99,7 +99,7 @@ TBD:
- tricks for versions <16
## Summary
-- In PG14+, use `compute_query_id` to have quer`y_id values both in Postgres logs and `pg_stat_activity`
+- In PG14+, use `compute_query_id` to have query_id values both in Postgres logs and `pg_stat_activity`
- Increase `track_activity_query_size` (requires restart) to be able to track larger queries in `pg_stat_activity`
- Organize workflow to combine records from `pg_stat_statements` and query examples from logs and `pg_stat_activity`, so when it comes to query optimization, you have good examples ready to be used with `EXPLAIN (ANALYZE, BUFFERS)`.
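A sketch of the first summary item (PG14+; assumes superuser, and that `log_line_prefix` contains `%Q` so the query ID also shows up in the logs):

```sql
alter system set compute_query_id = on;  -- 'auto' also works when pg_stat_statements is loaded
select pg_reload_conf();
-- query_id is then visible per backend:
select pid, query_id, left(query, 60)
from pg_stat_activity
where state = 'active';
```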