Let's say I have a set of SELECT statements that query important field values in a for-loop. The goal is to make sure that the rows are not updated by any other transaction, so that this set of SELECTs doesn't return data that is out of date.
In theory, it seems that setting the transaction isolation level to REPEATABLE READ should solve the problem. In that case, we can begin the transaction before the first SELECT and then reuse the same transaction throughout the loop to make sure that updates are blocked until this transaction is committed.
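Roughly, what I have in mind is something like this (just a sketch; the table and column names are placeholders):

BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- the application-side for-loop runs one SELECT per id
SELECT name FROM some_table WHERE id = 1;
SELECT name FROM some_table WHERE id = 2;
-- ...one query per id...
COMMIT;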
Is there anything I am missing? There are probably other ways to make sure that stale rows are not selected.
UPDATE: a bit more detail
I have a series of queries like select name from some_table where id = $id_param, where $id_param is set in a for-loop.
I am worried, however, that for some row this name field might be changed by a concurrent operation, or that the row might even get deleted. This would leave the final object in a corrupted state.
Based on the comment below, it seems that pessimistic locking could be the way to go, i.e. using SELECT ... FOR UPDATE, but I am not sure.
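If I understand it correctly, the pessimistic variant would look roughly like this (untested sketch):

BEGIN;
-- each locked row stays locked until COMMIT,
-- so a concurrent UPDATE or DELETE on it will block
SELECT name FROM some_table WHERE id = $id_param FOR UPDATE;
-- ...repeated for every id in the loop...
COMMIT;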
1 Answer
Whatever it is, you're doing it the wrong way.
Look for a way to do all the updates with a single UPDATE command; you might need to use a temporary table and update from it.
UPDATE table_name SET { column_name = { expression | DEFAULT } } [, ...] FROM other_table WHERE condition
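For example, something along these lines (a sketch; tmp_values and the values are made up for illustration):

-- stage the new values in a temporary table
CREATE TEMP TABLE tmp_values (id int PRIMARY KEY, name text);
INSERT INTO tmp_values (id, name) VALUES (1, 'alice'), (2, 'bob');

-- apply everything in a single UPDATE
UPDATE some_table AS t
SET name = v.name
FROM tmp_values AS v
WHERE t.id = v.id;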
You can use SELECT ... FOR NO KEY UPDATE for that. But that means that you have to keep a transaction open, which should never happen. What are you trying to achieve?
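A sketch of what that would look like (placeholder names as above):

BEGIN;
-- weaker lock than FOR UPDATE: still blocks concurrent UPDATE/DELETE
-- of the row, but not SELECT ... FOR KEY SHARE (e.g. foreign-key checks)
SELECT name FROM some_table WHERE id = $id_param FOR NO KEY UPDATE;
COMMIT;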