
Postgres: limiting serialization errors on serializable transactions

3 votes
1 answer
48 views
Recently I learned that the serializable isolation level in Postgres is based on optimistic locking, so in theory transactions can run fully concurrently as long as they don't interfere with each other (e.g., by doing read-then-write operations on the same rows). In practice, however, the algorithm for detecting such interference may produce false positives. As described in the docs, row-level predicate locks can be promoted to page-level locks when that is cheaper in terms of resource usage, and this increases the chance of a serialization error. For example, if I update two different rows concurrently, and those rows happen to be stored on the same page on which both transactions hold a page-level predicate lock, the transaction that commits later gets a serialization error.

I tried to address this by increasing max_pred_locks_per_transaction, max_pred_locks_per_relation and max_pred_locks_per_page, but no matter how large those values are, I still get the error. For instance, let's look at an example that simulates 1k concurrent, independent money-transfer operations. With the following config:
enable_seqscan = off
max_locks_per_transaction = 4192
max_pred_locks_per_transaction = 4192
max_pred_locks_per_relation = 4192
max_pred_locks_per_page = 4192
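As a side note, the same values can also be set with ALTER SYSTEM instead of editing postgresql.conf by hand (a sketch, assuming superuser access; the two *_per_transaction settings only take effect after a server restart, while the per-relation and per-page ones apply on reload):

alter system set max_locks_per_transaction = 4192;      -- needs a restart
alter system set max_pred_locks_per_transaction = 4192; -- needs a restart
alter system set max_pred_locks_per_relation = 4192;    -- applies on reload
alter system set max_pred_locks_per_page = 4192;        -- applies on reload
select pg_reload_conf();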
Given the following table:
create table if not exists accounts (
    id bigserial primary key,
    balance numeric(9, 2) not null
);
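The sessions below assume the table is pre-populated with 1000 rows; the exact values don't matter, e.g. (the 100.00 starting balance is an arbitrary choice):

insert into accounts (balance)
select 100.00 from generate_series(1, 1000); -- accounts with id = 1 .. 1000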
When I execute the following queries:
session #1> begin isolation level serializable;
session #2> begin isolation level serializable;
session #1> select id, balance from accounts where id in (select n from generate_series(1,1000,4) n); -- accts with id = 1, 5, 9, ..., 997
session #1> select id, balance from accounts where id in (select n+1 from generate_series(1,1000,4) n); -- accts with id = 2, 6, 10, ..., 998
session #2> select id, balance from accounts where id in (select n from generate_series(3,1000,4) n); -- accts with id = 3, 7, 11, ..., 999
session #2> select id, balance from accounts where id in (select n+1 from generate_series(3,1000,4) n); -- accts with id = 4, 8, 12, ..., 1000
session #3> select locktype, count(*) from pg_locks group by locktype;
session #1> update accounts set balance = 50 where id in (select n from generate_series(1,1000,4) n);
session #1> update accounts set balance = 50 where id in (select n+1 from generate_series(1,1000,4) n);
session #2> update accounts set balance = 50 where id in (select n from generate_series(3,1000,4) n);
session #2> update accounts set balance = 50 where id in (select n+1 from generate_series(3,1000,4) n);
session #1> commit;
session #2> commit;
The commit of the transaction in session #2 gets rejected:
ERROR:  could not serialize access due to read/write dependencies among transactions
DETAIL:  Reason code: Canceled on identification as a pivot, during commit attempt.
HINT:  The transaction might succeed if retried.
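The hint holds in this case: since session #1 has already committed, simply rolling back session #2 and re-running the whole transaction should succeed (a sketch):

session #2> rollback;
session #2> begin isolation level serializable;
session #2> -- re-run the same selects and updates as above
session #2> commit; -- no concurrent transaction left, so this succeeds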
Meanwhile, the select on pg_locks (executed mid-way, in session #3) returned the following results:
developer=# select locktype, count(*) from pg_locks group by locktype;
   locktype    | count 
---------------+-------
 page          |    14
 virtualxid    |     3
 tuple         |  1750
 transactionid |     1
 relation      |     7
(5 rows)
There are 14 page predicate locks, even though only 1750 tuple predicate locks were acquired, meaning there was still room to allocate more tuple-level locks. I understand that in certain cases a tuple lock gets promoted to a page lock, and that as a database user I must be prepared to retry such transactions. Nonetheless, retries increase response time, so I'm wondering whether the DB can be set up so that, for instance, with 1k concurrent updates it would stick to tuple-level predicate locks rather than falling back to page-level ones. Do some other configuration parameters need to be adjusted to achieve that? Thanks in advance!
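P.S. Since pg_locks mixes the SSI predicate locks with the ordinary heavyweight locks taken by the updates, the counts above can be narrowed down to predicate locks only (they always appear with mode SIReadLock):

select locktype, count(*)
from pg_locks
where mode = 'SIReadLock' -- SSI predicate locks only
group by locktype;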
Asked by Dawid Kałuża (33 rep)
Jun 16, 2025, 03:11 PM
Last activity: Jun 17, 2025, 06:22 AM