
I want to set up some triggers on a DB2 box (LUW, 9.7). The DBAs are concerned that if the trigger fires too often, it will adversely affect general performance.

What's a reasonable cheap approach to sizing up performance impact? By "cheap", I mean I don't want to set up complicated monitoring/profiling tools; my impression is that this database was set up with a minimum of logging tools and so forth, and I don't want to change that. I'm thinking more in terms of things like watching for page faults and also things that can be seen with db2top.

The environment:

It's an enterprise-level analytics database. Data is fed into it via replication (GoldenGate, now owned by Oracle). The triggers we're creating track updates: upon update, a small record containing the primary key is stored in another table. It's not a great way to do things; we're trying to ship data somewhere else, and right now we're highly constrained and can't use a technically advanced and thorough solution like replication. An alternative is adding a column to record updates, but we've been told not to do that (though that might change).

One reason I'm leery of adding triggers willy-nilly is that I'm concerned I might break the replication process. My current idea is to add a trigger that performs a simple hash on the primary key (a BIGINT) and records updates only for a fraction of the rows, so I can get an idea of how often updates occur. That's why I'm asking about simple, reasonable metrics to see how triggers affect, e.g., inserts into a particular table.
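The sampling idea above might be sketched roughly as follows. All table and column names here are hypothetical, and MOD on the BIGINT key stands in for the "simple hash" (here recording roughly 1% of updated rows); the exact trigger options should be checked against your DB2 9.7 setup:

```sql
-- Hypothetical tracking table: just the key and a timestamp.
CREATE TABLE update_log (
    src_id    BIGINT    NOT NULL,
    logged_at TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP
);

-- AFTER UPDATE trigger on the (hypothetical) source table that
-- records only rows whose key hashes into the sampled bucket.
CREATE TRIGGER trg_sample_updates
    AFTER UPDATE ON src_table
    REFERENCING NEW AS n
    FOR EACH ROW
    WHEN (MOD(n.id, 100) = 0)
    INSERT INTO update_log (src_id) VALUES (n.id);
```

Widening or narrowing the `MOD(n.id, 100) = 0` predicate changes the sampling fraction without touching anything else.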

asked Sep 24, 2015 at 13:56
  • Could you elaborate on the kind of triggers? What kind of environment are we talking about? Commented Sep 26, 2015 at 8:18
  • @data_henrik: see edit above. Commented Sep 27, 2015 at 10:47
  • So you basically want to use triggers to fill a "log table"? Standard approach. Commented Jun 2, 2016 at 12:14

1 Answer


There's no separate "trigger performance impact"; trigger code is essentially compiled into the execution plan of the statement that causes the trigger to fire, so measuring average DML statement response time before and after introduction of triggers should give you a general idea of the impact.

Of course, triggered actions will certainly affect other workloads, not only the triggering actions. I'm sure your DBAs have records of baseline database performance that they can compare to what happens after the triggers are added (presumably on a test system).

As to monitoring tools, DB2 is fully instrumented to provide detailed views of its performance from various aspects, using event monitors and monitor views/stored procedures, so you don't need to set up anything in addition to what's always there.
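For example, one of those always-available monitor table functions, `MON_GET_PKG_CACHE_STMT` (introduced in DB2 9.7), can report per-statement execution counts and times straight from the package cache. A sketch of the kind of query you could compare before and after adding the triggers (column semantics are worth double-checking in the DB2 documentation):

```sql
-- Average execution time per dynamic statement, from the package
-- cache; 'D' selects dynamic SQL, -2 means all database members.
SELECT SUBSTR(stmt_text, 1, 60)                   AS stmt,
       num_executions,
       stmt_exec_time / NULLIF(num_executions, 0) AS avg_exec_ms
FROM TABLE(MON_GET_PKG_CACHE_STMT('D', NULL, NULL, -2)) AS t
WHERE num_executions > 0
ORDER BY avg_exec_ms DESC
FETCH FIRST 20 ROWS ONLY;
```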

answered Sep 24, 2015 at 17:26
  • (1) Is there a way to measure insert times without setting up an experiment? (The tables get inserted/updated by a replication process.) (2) What monitoring tools would one most typically use for looking at e.g. insert performance? Commented Sep 25, 2015 at 12:45
  • Not sure I understand (1). As for (2), if you're strictly interested in insert performance, a wall clock or timer should be sufficient to measure average execution time. However, I have a feeling that you may want to measure overall impact of the added triggers on the entire database workload. For that I would start by calling MONREPORT.DBSUMMARY() before and after adding the triggers (while running the test workloads, obviously). Commented Sep 25, 2015 at 14:17
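The `MONREPORT.DBSUMMARY` call mentioned in the comment above takes the monitoring interval in seconds as its argument, e.g.:

```sql
-- Summarize database activity (throughput, wait times, etc.)
-- over a 60-second monitoring interval.
CALL MONREPORT.DBSUMMARY(60);
```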
