Optimized count for large table using triggers, views, or external cache
0 votes · 1 answer · 171 views
I have a public API method that queries a Postgres (14) database and returns a paginated list of rows belonging to a user, along with a total count and page index. The count is very costly to compute (according to `pg_stat_statements`) and I want to optimize it.
Would creating a trigger that fires on insert/delete on the table and adjusts a per-user count be a conventional way of solving this? Or should I consider a view, or a simple external cache such as Redis?
Of note: the table has very high write and read rates.
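For reference, the trigger-based counter idea could be sketched roughly as below. The table name `items`, the column `user_id`, and the counter table `user_item_counts` are all assumptions for illustration, not names from the question:

```sql
-- Hypothetical counter table, one row per user.
CREATE TABLE user_item_counts (
    user_id    bigint PRIMARY KEY,
    item_count bigint NOT NULL DEFAULT 0
);

-- Trigger function: bump the counter up/down on insert/delete.
CREATE OR REPLACE FUNCTION adjust_user_item_count() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO user_item_counts (user_id, item_count)
        VALUES (NEW.user_id, 1)
        ON CONFLICT (user_id)
        DO UPDATE SET item_count = user_item_counts.item_count + 1;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE user_item_counts
        SET item_count = item_count - 1
        WHERE user_id = OLD.user_id;
    END IF;
    RETURN NULL;  -- result is ignored for AFTER row triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_count_trg
AFTER INSERT OR DELETE ON items
FOR EACH ROW EXECUTE FUNCTION adjust_user_item_count();
```

One caveat with this shape: every insert/delete for a given user updates that user's single counter row, so with a very high write rate concurrent transactions can contend on that row lock.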
Asked by JackMahoney
(101 rep)
Jan 7, 2024, 09:01 AM
Last activity: Jan 7, 2024, 01:38 PM