Postgres Repeatable Read vs Selecting rows that are changed by another ongoing transaction

1 vote
1 answer
1190 views
Let's say I have a set of SELECT statements that query important field values in a for-loop. The goal is to make sure the rows are not updated by any other transaction, so that this set of selects doesn't return out-of-date data. In theory, setting the transaction isolation level to REPEATABLE READ should solve the problem: we begin the transaction before the first select and reuse the same transaction throughout the loop, so that updates are blocked until this transaction commits. Is there anything I am missing? Perhaps there are other ways to ensure that stale rows are not selected.

UPDATE (a bit more detail): I have a series of queries like `select name from some_table where id = $id_param`, where `$id_param` is set in a for-loop. I am worried, however, that the `name` field might be changed by another concurrent operation for some row, or that the row might even be deleted. This would leave the final object in a corrupted state. Based on the comment below, it seems that pessimistic locking could be the way to go, i.e. using `... FOR UPDATE`, but I am not sure.
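For context, here is a minimal sketch of the two approaches discussed above. The table and column names (`some_table`, `name`, `id`) come from the question; the literal `id` values and session structure are illustrative:

```sql
-- Approach 1: snapshot isolation. Every SELECT in this transaction sees
-- the same consistent snapshot, but concurrent writers are NOT blocked;
-- their changes are simply invisible to this transaction until it ends.
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT name FROM some_table WHERE id = 1;
SELECT name FROM some_table WHERE id = 2;
COMMIT;

-- Approach 2: pessimistic locking. FOR UPDATE takes a row-level lock on
-- each selected row, so a concurrent UPDATE or DELETE of those rows
-- blocks until this transaction commits or rolls back.
BEGIN;
SELECT name FROM some_table WHERE id = 1 FOR UPDATE;
SELECT name FROM some_table WHERE id = 2 FOR UPDATE;
COMMIT;
```

Note the difference: the first approach guarantees a consistent *read* without blocking writers, while the second actually prevents other transactions from modifying the selected rows for the duration of the transaction.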
Asked by Don Draper (209 rep)
Sep 15, 2022, 04:35 PM
Last activity: Apr 18, 2025, 04:05 PM