Read/write-intensive queries - should I split into a read-write and a read-only db - and then replicate data?
## Background and problem - Providers read/write and block, end-users read
I run a system (up-to-date availability of resources) that constantly takes in a fair amount of data points from many different providers (think sources of data like scraped sites, consumed APIs, etc.), resulting in a lot of reads, inserts, and deletes. At any given time, ~2-3 providers are reading from and writing to the table. There are about 2 million rows in the table.
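To make the write pattern concrete, here is a minimal sketch of the kind of ingest I mean (the `availability` table and its columns are purely illustrative, not my actual schema): each provider periodically replaces its stale rows in a single transaction.

```sql
-- Hypothetical ingest pattern (table and column names are illustrative).
-- Each provider periodically replaces its stale rows in one transaction,
-- which is where the heavy delete/insert churn comes from.
BEGIN;
DELETE FROM availability WHERE provider_id = 42;
INSERT INTO availability (provider_id, resource_id, available_from, available_to)
VALUES (42, 1001, '2023-01-03 18:00', '2023-01-03 20:00');
COMMIT;
```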
### Providers are OK
The providers' reading/writing works just fine; it does not matter if that process is sometimes a bit slow. No performance issues of any concern here.
### Users not so much, sadness ensues
At the same time, users are querying the same database/table, and sometimes this seems to mean that blocking results in very long query times. A normal query time is 100 ms, but every so often a 20-30 second query happens - which is not great.
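Something like the following could confirm that blocking really is the culprit while a slow query is in flight. This is a minimal sketch assuming PostgreSQL, since I have not named the engine above; other engines expose equivalents, e.g. SQL Server's `sys.dm_exec_requests`.

```sql
-- Who is blocked, and by whom, right now (PostgreSQL 9.6+).
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       state,
       left(query, 80)       AS query_snippet
FROM   pg_stat_activity
WHERE  cardinality(pg_blocking_pids(pid)) > 0;
```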
## Potential solution
Other than throwing more virtual metal at the problem, I have considered a different design:
### Two databases, replicated
- One database for all the providers to mess with (read, write, delete)
- Another database in which this table is read-only, so (hopefully) nothing will block user queries
- Replication from one to the other every 5-10 minutes (which I hope is intelligent enough not to cause blocking); a sketch of what I mean follows this list
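Roughly what I picture, sketched with PostgreSQL logical replication (again an assumption, as is the `availability` table name). Note that logical replication streams changes continuously rather than on a 5-10 minute schedule, but the effect on the read-only side should be similar:

```sql
-- On the provider-facing (writer) database:
CREATE PUBLICATION availability_pub FOR TABLE availability;

-- On the user-facing (read-only) database:
CREATE SUBSCRIPTION availability_sub
    CONNECTION 'host=writer-db dbname=resources user=replicator'
    PUBLICATION availability_pub;
```

If I understand correctly, in PostgreSQL the subscription's apply worker writes under ordinary MVCC, so plain SELECTs on the replica would not be blocked by incoming changes; whether batching every 5-10 minutes buys anything over continuous streaming is part of my question.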
How does that sound? Would I be back at square one because the replication causes the same blocking issues?
Asked by Kjensen (189 rep), Jan 3, 2023, 05:13 PM