What is the most efficient design pattern for a SQL database containing billions of rows per user in a single table?
2 votes · 2 answers · 849 views
I work on a relatively large system where we have started to run into performance problems scaling for multiple users.
The system is a .NET application, so queries are written through an ORM (Entity Framework), and the database is an Azure SQL database.
I'm a developer, not a DBA. Typically, when we hit performance limits we optimise our queries as best we can; if we are still throttling the database, I scale up to a higher tier to increase our DTUs and the problem is solved.
We're now at a point where it would be cheaper to give individual users their own database than to scale any further.
I won't go into the details of what we do, but essentially we have a constant stream of data being sent from our users, which on **average** writes about 100,000 rows of data per user, per day, to the same table. Our users need quick access to this data, which typically involves loading one month to a year of data at a time.
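For illustration, here is a simplified sketch of the shape of that table (the object and column names are hypothetical, not our real schema):

```sql
-- Hypothetical, simplified version of the shared hot table:
-- one row per incoming data point, ~100,000 rows per user per day.
CREATE TABLE dbo.UserReadings
(
    ReadingId  bigint IDENTITY(1,1) NOT NULL,
    UserId     int                  NOT NULL,
    RecordedAt datetime2(3)         NOT NULL,
    Value      float                NOT NULL
);

-- Clustered on (UserId, RecordedAt) so a "one user, one date range"
-- query is a contiguous range scan instead of a scan over everyone.
CREATE UNIQUE CLUSTERED INDEX CIX_UserReadings
    ON dbo.UserReadings (UserId, RecordedAt, ReadingId);
```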
My question is: in this scenario, what options do I have to maintain performance?
As far as I can tell, my only options are:
1 - Give each user their own table within the database (if that's even possible), so a query only has to deal with that one user's rows rather than the whole multi-billion-row table (at ~100,000 rows per day, that's roughly 36.5 million rows per user per year); see the sketch after this list.
2 - Give each user their own database (which should help with the performance hit from concurrent queries, but would be a nightmare to manage).
3 - Keep throwing more money at Azure until it becomes technically impossible to scale any further?
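For option 1, this is roughly what I have in mind (a sketch only; the `UserReadings_<UserId>` naming and the dynamic SQL are my assumption of how it would have to work):

```sql
-- Hypothetical sketch of option 1: a dedicated table per user, so any
-- query only ever touches that one user's rows.
DECLARE @UserId int = 42;
DECLARE @Sql nvarchar(max) = N'
CREATE TABLE dbo.UserReadings_' + CAST(@UserId AS nvarchar(10)) + N'
(
    ReadingId  bigint IDENTITY(1,1) NOT NULL,
    RecordedAt datetime2(3)         NOT NULL,
    Value      float                NOT NULL,
    CONSTRAINT PK_UserReadings_' + CAST(@UserId AS nvarchar(10)) + N'
        PRIMARY KEY CLUSTERED (RecordedAt, ReadingId)
);';
EXEC sys.sp_executesql @Sql;
```

The obvious catch I can already see is that Entity Framework maps entities to fixed table names, so reads and writes against per-user tables would have to drop down to raw SQL.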
Thanks.
Asked by Verno
(23 rep)
Feb 9, 2022, 12:10 PM
Last activity: Feb 10, 2022, 04:42 PM