
I am inserting data from the tmp_details table into the details table using an INSERT query in chunks, and I am also modifying the data while inserting it. The query is taking a very long time to execute. My read IOPS is hitting the 2500s, and my write IOPS is near 100. My I/O Utilization is 100%, and my CPU Utilization is less than 10%. My RAM utilization is less than 40%. I am using ctid for chunked inserts of 2,000,000 rows. What can I do to improve the performance of my queries and reduce the I/O utilization? I want to increase CPU utilization to more than 80%.

Server Specifications:

  • PostgreSQL version: 15.6
  • RAM: 32 GB
  • Cores: 16
  • Disk Space: SSD 500 GB
  • OS: Linux Ubuntu 22.04

PostgreSQL Configuration:

max_connections = 200
shared_buffers = 8GB
effective_cache_size = 24GB
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 5242kB
huge_pages = try
min_wal_size = 1GB
max_wal_size = 4GB
max_worker_processes = 16
max_parallel_workers_per_gather = 4
max_parallel_workers = 16
max_parallel_maintenance_workers = 4

Table Details:

Table Name                   | Row Count  | Size
source_cbsupi_tmp_details    | 60 million | 30 GB
source_switchupi_tmp_details | 60 million | 30 GB
source_npciupi_tmp_details   | 60 million | 30 GB

There are indexes on the columns uniquekey, key_priority_radcs, key_priority_rtdps, is_processed, and key_priority_ratrs. I had to use the DISTINCT ON clause because the JOIN was producing duplicate rows. I tried inserting in chunks of 1,000,000 rows using ctid, but it still takes a long time, most likely because it has to scan the whole of tables C and D on each iteration (the chunking only restricts table A to a ctid range, as sketched below). So instead I inserted all 60 million rows in one statement and committed at the end. My aim is to run similar insert queries for tables C and D in parallel from the backend app server, but that will be pointless if my I/O utilization is already at 100%.
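
For illustration, each chunk only differed in the ctid block range applied to table A; the boundaries below are made up, not the exact values I used:

-- Chunk pattern: restrict the driving table to a range of heap blocks via ctid.
-- PostgreSQL 14+ can satisfy such a predicate with a TID Range Scan.
SELECT count(*)
FROM   cbsupi.source_cbsupi_tmp_details A
WHERE  A.is_processed IS NULL
  AND  A.ctid >= '(0,0)'::tid
  AND  A.ctid <  '(250000,0)'::tid;   -- the next chunk starts at block 250000

The full INSERT below simply gained these two ctid predicates on A for each chunk.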

Insert Query:

EXPLAIN
INSERT INTO cbsupi.source_cbsupi_details (
    codglacct, refusrno, key_priority_radcs, recon_created_date, dattxnposting,
    status, uniquekey, coddrcr, cbsacqiss, codacctno, amttxnlcy, acnotrim,
    priority_no, rrn, recon_updated_date, recon_date_1_to_2, recon_date_1_to_3,
    reconciliation_date_time
)
SELECT DISTINCT ON (A.uniquekey)
    A.codglacct,
    A.refusrno,
    A.key_priority_radcs,
    A.recon_created_date,
    A.dattxnposting,
    A.status,
    A.uniquekey,
    A.coddrcr,
    A.cbsacqiss,
    A.codacctno,
    A.amttxnlcy,
    A.acnotrim,
    A.priority_no,
    A.rrn,
    '2025-01-07 19:50:41' AS recon_updated_date,
    CASE
        WHEN C.key_priority_rtdps IS NOT NULL THEN '2025-01-07 19:50:41'
        ELSE NULL
    END::TIMESTAMP AS recon_date_1_to_2,
    CASE
        WHEN D.key_priority_ratrs IS NOT NULL THEN '2025-01-07 19:50:41'
        ELSE NULL
    END::TIMESTAMP AS recon_date_1_to_3,
    CASE
        WHEN C.key_priority_rtdps IS NOT NULL
         AND D.key_priority_ratrs IS NOT NULL THEN '2025-01-07 19:50:41'
        ELSE NULL
    END::TIMESTAMP AS reconciliation_date_time
FROM cbsupi.source_cbsupi_tmp_details A
LEFT JOIN switchupi.source_switchupi_tmp_details C
    ON A.key_priority_radcs = C.key_priority_rtdps
LEFT JOIN npciupi.source_npciupi_tmp_details D
    ON A.key_priority_radcs = D.key_priority_ratrs
WHERE A.is_processed IS NULL
ON CONFLICT (uniquekey) DO UPDATE
SET recon_updated_date       = EXCLUDED.recon_updated_date,
    recon_date_1_to_3        = EXCLUDED.recon_date_1_to_3,
    key_priority_radcs       = EXCLUDED.key_priority_radcs,
    status                   = EXCLUDED.status,
    reconciliation_date_time = EXCLUDED.reconciliation_date_time,
    codacctno                = EXCLUDED.codacctno,
    amttxnlcy                = EXCLUDED.amttxnlcy,
    recon_date_1_to_2        = EXCLUDED.recon_date_1_to_2,
    rrn                      = EXCLUDED.rrn,
    codglacct                = EXCLUDED.codglacct,
    refusrno                 = EXCLUDED.refusrno,
    dattxnposting            = EXCLUDED.dattxnposting,
    coddrcr                  = EXCLUDED.coddrcr,
    cbsacqiss                = EXCLUDED.cbsacqiss,
    acnotrim                 = EXCLUDED.acnotrim,
    priority_no              = EXCLUDED.priority_no;

Explain Results

"QUERY PLAN"
Insert on source_cbsupi_details (cost=72270111.44..73213761.44 rows=0 width=0)
 Conflict Resolution: UPDATE
 Conflict Arbiter Indexes: source_cbsupi_details_pkey
" -> Subquery Scan on ""*SELECT*"" (cost=72270111.44..73213761.44 rows=62910000 width=811)"
 -> Unique (cost=72270111.44..72584661.44 rows=62910000 width=823)
 -> Sort (cost=72270111.44..72427386.44 rows=62910000 width=823)
 Sort Key: a.uniquekey
 -> Hash Left Join (cost=10739152.00..50771187.50 rows=62910000 width=823)
 Hash Cond: (a.key_priority_radcs = d.key_priority_ratrs)
 -> Hash Left Join (cost=5337191.00..25537830.00 rows=62910000 width=800)
 Hash Cond: (a.key_priority_radcs = c.key_priority_rtdps)
 -> Seq Scan on source_cbsupi_tmp_details a (cost=0.00..2092124.00 rows=62910000 width=767)
 Filter: (is_processed IS NULL)
 -> Hash (cost=4118441.00..4118441.00 rows=60000000 width=33)
 -> Seq Scan on source_switchupi_tmp_details c (cost=0.00..4118441.00 rows=60000000 width=33)
 -> Hash (cost=4124101.00..4124101.00 rows=62910000 width=33)
 -> Seq Scan on source_npciupi_tmp_details d (cost=0.00..4124101.00 rows=62910000 width=33)
JIT:
 Functions: 24
" Options: Inlining true, Optimization true, Expressions true, Deforming true"

Questions:

  1. How can I improve the performance of the query and reduce the I/O utilization?
  2. Is there a way to run these similar insert queries in parallel from the app without hitting the I/O utilization limit?
  3. Will inserting data in chunks benefit me, or is it better to insert all the data at once? From what I observed, inserting in chunks takes more time than inserting everything in a single statement.

EDIT:

I am attaching the query plan for the select statement. Apparently, the INSERT query takes more than an hour, while the SELECT alone takes only 265 seconds. I think my issue lies in the single commit at the end, possibly due to excessive log generation. Would it help if I kept auto-commit on? Is there a way to insert in chunks without looping through the whole table?

asked Jan 8 at 11:22
  • Hi, and welcome to dba.se! Could you provide us with your table structures as DDL - with all PRIMARY KEYs, INDEXes and other CONSTRAINTs. Also, could you please trim down the query to provide essential fields + 1/2 non-essential ones per table - it's a bit daunting looking at that mass of names. Also, please set `track_io_timing = on` in `postgresql.conf` and then run `EXPLAIN (ANALYZE, VERBOSE, COSTS, TIMING) <your query>` - this will give a fuller picture of your problem. Commented Jan 8 at 13:03
  • Please answer the same question I asked you when you made the same post over on stackoverflow. Commented Jan 8 at 17:09
  • @jjanes I have attached a query plan for the select statement. Commented Jan 8 at 17:32

1 Answer

For the stand-alone SELECT, it took over 4 minutes, and that was using 9 processes. The INSERT will not use parallelization (even for the SELECT part of it), so we might naively expect it to take over 36 minutes just to run the SELECT part of that query. And this doesn't include running triggers or updating indexes, which of course an INSERT needs to do, so yes, we could easily expect the INSERT to take well over an hour.

Unfortunately, even if you let the INSERT run to completion with EXPLAIN (ANALYZE, BUFFERS) and with track_io_timing set to on, PostgreSQL would still not give you any meaningful information about how much time was spent maintaining indexes, either individually or collectively. But it would at least tell you how long it took to run triggers, and how many rows were resolved by ON CONFLICT.
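
Something along these lines (EXPLAIN ANALYZE really executes the INSERT, so roll it back while you are only measuring; setting track_io_timing per session needs superuser, otherwise set it in postgresql.conf as suggested in the comments):

BEGIN;
SET LOCAL track_io_timing = on;              -- superuser only; otherwise use postgresql.conf
EXPLAIN (ANALYZE, BUFFERS, VERBOSE, TIMING)
<your INSERT ... ON CONFLICT statement>;
ROLLBACK;                                    -- discard the inserted rows while testing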

The best way to investigate the index issue would be to create a dummy table with the same columns, but with no triggers or indexes, and see how long it takes to insert into that dummy table. Of course, in this case no rows would result in ON CONFLICT actions, so this would not reflect that part of the situation. Then you could add the indexes back one by one and see whether one of them in particular dominates the run time.
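
A minimal sketch of that experiment, assuming a throw-away table name like source_cbsupi_details_bare:

-- LIKE without INCLUDING INDEXES copies only the column definitions (plus
-- NOT NULL constraints and, here, the defaults): no indexes, no triggers.
CREATE TABLE cbsupi.source_cbsupi_details_bare
    (LIKE cbsupi.source_cbsupi_details INCLUDING DEFAULTS);

-- Re-run the INSERT ... SELECT from the question against ..._bare, dropping
-- the ON CONFLICT clause (there is no unique index to arbitrate on), compare
-- run times, then re-create the original indexes one at a time and repeat.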

Is there a way to run these similar insert queries in parallel from the app without hitting the I/O utilization limit?

You could add an ORDER BY to the SELECT so that rows are returned in an order more conducive to index lookups and maintenance. But since you have 5 separate indexes, it is unlikely there is one order that would satisfy all of them. Of course, if it did work, you wouldn't even need to run the inserts in parallel from the app to get the benefit.
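
Sketched with the same placeholder convention as above (picking key_priority_radcs is purely illustrative; any one of the five indexed columns could be the sort target):

INSERT INTO cbsupi.source_cbsupi_details (<column list>)
SELECT *
FROM   ( <your DISTINCT ON (A.uniquekey) SELECT with its joins> ) AS s
ORDER  BY s.key_priority_radcs    -- feed rows to the target in one index's order
ON CONFLICT (uniquekey) DO UPDATE
SET    <update list>;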

My read IOPS is hitting the 2500s, and my write IOPS is near 100. My I/O Utilization is 100%,

Are these two independent determinations, or are you just assuming that since 2500 IOPS is hitting your promised rate then it must be 100% utilized, without an independent assessment of that fact? Note that index maintenance will generally generate random I/O, while your SELECT plan would primarily generate sequential I/O. How efficiently sequential I/O gets bundled into IOPS depends on details of your kernel/FS/I/O system which I do not know.
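
If you want a data point from inside the server itself (this also needs track_io_timing), the per-database counters give a rough read/write split of I/O wait time:

-- blk_read_time / blk_write_time are in milliseconds and only accumulate
-- while track_io_timing is on.
SELECT blks_read,
       blks_hit,
       blk_read_time  AS read_ms,
       blk_write_time AS write_ms
FROM   pg_stat_database
WHERE  datname = current_database();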

I think my issue lies in the single commit at the end, possibly due to excessive log generation.

That is unlikely, both on mechanistic/theoretical grounds, and because if it were true you would expect your IOPS to be dominated by writes, not reads.

answered Jan 8 at 21:21
