Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
231
views
Reference a schema from public table column
I have a multi-tenant PostgreSQL database with each schema belonging to one of our customers (tenants). Each tenant has a table called `Messages`, so `CustomerX.Messages` contains all of the messages that are sent to `CustomerX`.
All of the messages come in through a generic endpoint and are processed a bit to determine which customer should receive the message. So, at a high level, the message pipeline looks like this:
RAW MESSAGE (public.Messages)
|
|
[PROCESSING]
|
|
DECRYPTED MESSAGE (some_tenant.Messages)
I would like to put the "pre-processed" messages into a public table containing the exact, unmodified input to the system. After going through a round of processing, I'd like to drop the final message into the customer's `CustomerX.Messages` table.
# My Question
What is the best way for me to model this? I need to ensure the following (see the sketch after this list):
1. There is exactly one message in "some" schema for each message in the public table.
2. There is exactly one message in the public schema for each message in a customer schema.
3. Given some message (in either the public or customer schema), I can find the other one.
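A minimal sketch of one way to model this, assuming hypothetical table and column names: give the public table a surrogate key, and give each tenant table a `UNIQUE` foreign key back to it. That guarantees at most one tenant row per public row and makes the lookup in either direction a single indexed join; "exactly one" in the forward direction (requirement 1) still needs enforcement in the pipeline or a trigger, since a foreign key cannot require that a child row exists.

```sql
-- Hypothetical names throughout; gen_random_uuid() is built in from
-- PostgreSQL 13 (earlier versions need the pgcrypto extension).
CREATE TABLE public.messages (
    id          uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    tenant      text NOT NULL,        -- which schema holds the processed copy
    raw_body    bytea NOT NULL,       -- exact, unmodified input
    received_at timestamptz NOT NULL DEFAULT now()
);

-- Repeated in every tenant schema (customerx shown as an example):
CREATE TABLE customerx.messages (
    id        uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    source_id uuid NOT NULL UNIQUE REFERENCES public.messages (id),
    body      text NOT NULL           -- decrypted/processed message
);
```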
wheresmycookie
(121 rep)
Jan 29, 2019, 05:06 PM
• Last activity: May 28, 2025, 03:06 AM
1
votes
1
answers
237
views
MongoDB as a log storage. Choosing shard key
I'm designing a log storage system based on MongoDB.
I want to shard a log collection to increase ingestion and capacity (distribute writes to several machines) while allow fast search.
I should be able to increase ingestion by adding more nodes to the cluster.
My collection has the following fields:
**Subsystem** - string, name of the application. E.g: "SystemA", "SystemB". ~ 100 unique values.
**Tenant** - string, the name of the deployment. It's used to separate logs from different application deployments / environments. E.g: "South TEST", "North DEV", "South PROD", "North PROD". ~ 20 unique values.
**Date** - timestamp.
**User** - string.
**SessionId** - guid, logically groups several related log records.
**Data** - BLOB, contains zipped data. Average size = 2Kb, maximum = 8Mb.
**Context** - array of key/value pairs. Both key and value are strings. It's used to store additional metadata associated with an event.
The search could be performed by any combination of fields Subsystem, Date, User and Context.
Tenant almost always will be specified.
The question is: **what shard key and sharding strategy would be best in this case?**
My suggestions:
The simplest case is to shard by Tenant, but that will cause highly uneven data distribution, because PROD environments generate much more log volume than DEV.
"Tenant + Subsystem" seems better, but there are still subsystems that generate far more logs than others.
Also, Subsystem is not mandatory: a user can omit it during a search, in which case the query will be broadcast.
"SessionId" would give even data distribution, but search requests would be broadcast to all nodes.
Philipp Bocharov
(21 rep)
Jul 3, 2018, 12:13 PM
• Last activity: May 27, 2025, 12:11 PM
0
votes
1
answers
267
views
Multi-tenant reduce number of MongoDB connections
Our multi-tenant app has about 4,000 databases (one database per tenant), and we are having performance problems because of too many connections. Creating MongoDB connections for each database is a heavy operation.
We tried the following approaches but still could not reduce the number of connections.
1. Sharding (applied only to the data sets)
2. Increasing the connection pool size (already applied)
How can we reduce the number of MongoDB connections? Is there any way to split the databases into groups of 1,000?
Orgil
(101 rep)
Oct 30, 2020, 10:54 AM
• Last activity: May 17, 2025, 08:00 PM
-1
votes
1
answers
811
views
Unable to create pluggable database on 19c using response file
I am trying to create a pluggable database using a response file, but it is giving me the error below, indicating that a mandatory parameter is missing. Although I have checked multiple times, I don't see any missing mandatory parameter. Below is the response file I am using.
dbca -silent -createPluggableDatabase -responseFile /f01/app/oracle/dbca_pdb_creation.rsp
[INS-04008] Invalid combination of arguments passed from the command line. One or more mandatory dependent arguments are not passed for the argument: -create pluggable database
I am able to create the database when passing the required parameters directly on the command line, as below:
dbca -silent -createPluggableDatabase -sourceDB testcdb -pdbName testpdb2 -createPDBFrom default -pdbAdminUserName PDBADMIN -pdbAdminPassword abcd1234
**Response file**
##############################################################################
## ##
## DBCA response file ##
## ------------------ ##
## Copyright(c) Oracle Corporation 1998,2019. All rights reserved. ##
## ##
## Specify values for the variables listed below to customize ##
## your installation. ##
## ##
## Each variable is associated with a comment. The comment ##
## can help to populate the variables with the appropriate ##
## values. ##
## ##
## IMPORTANT NOTE: This file contains plain text passwords and ##
## should be secured to have read permission only by oracle user ##
## or db administrator who owns this installation. ##
##############################################################################
#-------------------------------------------------------------------------------
# Do not change the following system-generated value.
#-------------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# GENERAL section is required for all types of database creations.
#-----------------------------------------------------------------------------
[GENERAL]
#-----------------------------------------------------------------------------
# Name : RESPONSEFILE_VERSION
# Datatype : String
# Description : Version of the database to create
# Valid values : "12.1.0"
# Default value : None
# Mandatory : Yes
#-----------------------------------------------------------------------------
responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v19.0.0
#-----------------------------------------------------------------------------
# Name : OPERATION_TYPE
# Datatype : String
# Description : Type of operation
# Valid values : "createDatabase" \ "createTemplateFromDB" \ "createCloneTemplate" \ "deleteDatabase" \ "configureDatabase" \ "addInstance" (RAC-only) \ "deleteInstance" (RAC-only) \ "createPluggableDatabase" \ "unplugDatabase" \ "deletePluggableDatabase" \ "configurePluggableDatabase"
# Default value : None
# Mandatory : Yes
#-----------------------------------------------------------------------------
OPERATION_TYPE = "createPluggableDatabase"
#-----------------------*** End of GENERAL section ***------------------------
#----------------------------------------------------------------------------------
# CREATEPLUGGABLEDATABASE section is used when OPERATION_TYPE is defined as "createPluggableDatabase".
#----------------------------------------------------------------------------------
[CREATEPLUGGABLEDATABASE]
#----------------------------------------------------------------------------------
# Name : SOURCEDB
# Datatype : String
# Description : The source database is the SID
# This database must be local and on the same ORACLE_HOME.
# Default value : none
# Mandatory : YES
#-----------------------------------------------------------------------------
SOURCEDB = "testcdb"
#----------------------------------------------------------------------------------
# Name : PDBNAME
# Datatype : String
# Description : The name of new pluggable database
# This pdb name must not be same as sourcedb name.
# Default value : none
# Mandatory : YES
#-----------------------------------------------------------------------------
PDBNAME = "testpdb2"
#----------------------------------------------------------------------------------
# Name : CREATEASCLONE
# Datatype : Boolean
# Description : specify true or false for PDB to be create as Clone.
# : When "true" is passed a new PDB GUID is generated for the plugged in PDB
# Default value : true
# Mandatory : NO
#-----------------------------------------------------------------------------
CREATEASCLONE = "TRUE"
#----------------------------------------------------------------------------------
# Name : CREATEPDBFROM
# Datatype : String
# Description : specify the source of pdb to be plugged
# Valid values : DEFAULT | FILEARCHIVE | RMANBACKUP | USINGXML
# Default value : DEFAULT
# Mandatory : NO
#-----------------------------------------------------------------------------
CREATEPDBFROM = "DEFAULT"
#----------------------------------------------------------------------------------
# Name : PDBADMINUSERNAME
# Datatype : String
# Description : PDB Administrator user name
# Default value : None
# Mandatory : Mandatory only when creating new DEFAULT PDB
#-----------------------------------------------------------------------------
PDBADMINUSERNAME = "PDBADMIN"
#----------------------------------------------------------------------------------
# Name : PDBADMINPASSWORD
# Datatype : String
# Description : PDB Administrator user password
# Default value : None
# Mandatory : Mandatory only when creating new DEFAULT PDB
#-----------------------------------------------------------------------------
PDBADMINPASSWORD = "abcd1234"
#-----------------------*** End of createPluggableDatabase section ***------------------------
Please help.
mohd atif
(107 rep)
May 29, 2021, 07:24 PM
• Last activity: May 3, 2025, 02:04 AM
1
votes
1
answers
397
views
Is it good practice to have relationship between the tenant table and all other tables in a multi tenant application?
I am developing a multi tenant SaaS that uses Azure SQL for data storage. I make use of sp_set_session_context to make sure that any query only touches records relevant to the current tenant.
I occasionally get requests to delete a tenant from the database. That means deleting the record in the Tenants table and all records in other tables with the same Tenant_Id. Today I do this with one DELETE statement per table.
Is it a good idea to create a relationship between Tenant and all other tables? This way I could use cascade delete to delete a tenant by simply removing a single record from the tenants table.
I am concerned about the more difficult recovery in the case of a mistakenly deleted tenant. I also want to be confident that there will be no serious performance issues with such a schema.
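For reference, the cascading relationship described would look roughly like this (table and column names are hypothetical):

```sql
-- Every tenant-scoped table carries a foreign key to dbo.Tenants
-- declared with ON DELETE CASCADE.
CREATE TABLE dbo.Tenants (
    Tenant_Id int NOT NULL PRIMARY KEY
);

CREATE TABLE dbo.Orders (
    Order_Id  int IDENTITY PRIMARY KEY,
    Tenant_Id int NOT NULL,
    CONSTRAINT FK_Orders_Tenants
        FOREIGN KEY (Tenant_Id) REFERENCES dbo.Tenants (Tenant_Id)
        ON DELETE CASCADE
);

-- Removing the tenant row then removes its child rows in one statement:
DELETE FROM dbo.Tenants WHERE Tenant_Id = @TenantId;  -- @TenantId supplied by the caller
```

One caveat worth knowing: SQL Server rejects a table that would be reachable through more than one cascade path (error 1785), which can bite when tenant-scoped tables also reference each other.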
Gabriel Smoljar
(119 rep)
Aug 17, 2018, 04:04 PM
• Last activity: Apr 15, 2025, 05:02 PM
2
votes
1
answers
551
views
Aggregate data with multi-schema tenant architecture
We have an application that uses a tenant-schema architecture: one database with one schema per tenant, where each tenant schema is a replica with the same tables, etc.
Sometimes we don't know the tenant or have to make global stats, in both case we need the aggregate of a table from each tenant.
The solution is a stored procedure that creates a view; the view is generated by looping over the tenant names and combining a query on each with a union.
The view's definition ends up being something like: `SELECT * FROM tenant1.table UNION ALL SELECT * FROM tenant2.table ...`
The stored procedure is run every night, in case a new tenant was added, to update the view's definition.
The first query on the view is always slow (more than 1 minute), but then the cache is warm and it takes 3 seconds.
We have optimized the WHERE clauses of the query by adding indexes, and we use UNION ALL instead of UNION since duplicates are impossible.
I was wondering if there are better ways, performance-wise, to do multi-tenant data aggregation (a sketch of what we generate follows the layout below).
Database => tenant1_schema => actors, table2, ...
tenant2_schema => actors, table2, ...
...
public_schema => aggregateview
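For concreteness, a sketch of regenerating such a view from the catalog (the schema and table names here are assumptions based on the layout above), so new tenants are picked up without hand-editing the definition:

```sql
-- Rebuild public.aggregateview as a UNION ALL over every tenant schema.
DO $$
DECLARE
    union_sql text;
BEGIN
    SELECT string_agg(format('SELECT * FROM %I.actors', nspname),
                      ' UNION ALL ')
      INTO union_sql
      FROM pg_namespace
     WHERE nspname LIKE 'tenant%';

    EXECUTE format(
        'CREATE OR REPLACE VIEW public.aggregateview AS %s',
        union_sql);
END
$$;
```

If the cold first query is the main pain point, building the same union as a nightly-refreshed `MATERIALIZED VIEW` (at the cost of staleness) may fit the stats use case better than a plain view.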
adaba
(121 rep)
Apr 29, 2022, 10:45 AM
• Last activity: Apr 15, 2025, 08:02 AM
2
votes
2
answers
1088
views
Modelling users and groups - but with multiple types of groups
I'm modelling a user management system in PostgreSQL that is fairly typical in that it consists of users, groups and permissions (and the relationships between them).
However, it seems I'm dealing with different kinds of groups: some that represent an organization (e.g. a company) and some that represent a role or differing permission levels within an organization (e.g. admin). The organizations represent tenants, or customers, in the business domain.
I currently see two possibilities: (a) create a separate table for organizations, or (b) add a boolean column to the group table called "is_organization" and use it to distinguish the two types.
Is there a best practice for modelling this type of system? Thank you.
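A rough sketch of option (a), with hypothetical names: keeping organizations in their own table keeps tenant-level concerns off the permission groups, at the cost of one more join.

```sql
CREATE TABLE app_user (
    user_id serial PRIMARY KEY,
    email   text NOT NULL UNIQUE
);

CREATE TABLE organization (      -- the tenant/customer
    org_id serial PRIMARY KEY,
    name   text NOT NULL
);

CREATE TABLE app_group (         -- role/permission group within an org
    group_id serial PRIMARY KEY,
    org_id   int NOT NULL REFERENCES organization,
    name     text NOT NULL,
    UNIQUE (org_id, name)
);

CREATE TABLE user_group (
    user_id  int NOT NULL REFERENCES app_user,
    group_id int NOT NULL REFERENCES app_group,
    PRIMARY KEY (user_id, group_id)
);
```

Option (b)'s `is_organization` flag avoids the extra table but invites rows that are "both" or "neither", which a separate table rules out by construction.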
Rob
(121 rep)
Apr 3, 2017, 12:46 AM
• Last activity: Apr 13, 2025, 02:02 AM
0
votes
2
answers
140
views
How to Hide SQL databases from specific users
I am working on implementing multi-tenancy on a SQL Server instance. We are taking the approach of one database per client.
I want to hide the databases of one client from the users of other clients. I know a way: make a user the owner of that database and then deny VIEW ANY DATABASE. But this works for only one user per database. If anyone knows a more general solution, please let me know (a sketch of the ownership approach is below).
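The ownership approach described, sketched with hypothetical names:

```sql
USE master;
GO
-- Hide all databases from this login by default...
DENY VIEW ANY DATABASE TO [ClientA_Login];
GO
-- ...then surface exactly one by making the login its owner:
ALTER AUTHORIZATION ON DATABASE::[ClientA_DB] TO [ClientA_Login];
```

Ownership is exactly the limitation hit here: a database has a single owner, so this does not extend to several users per client. Contained database users (`CREATE USER ... WITH PASSWORD` in a partially contained database) are one commonly suggested alternative, since they authenticate directly at the database level rather than as server logins.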
user3510083
(29 rep)
Feb 14, 2025, 02:24 PM
• Last activity: Feb 15, 2025, 03:11 PM
0
votes
1
answers
802
views
ODA X8-2M crossrestore with TDE wallet fails with: DCS-10001:Internal error encountered: failed to open the tde password based wallet
We are trying to cross-restore a TDE-encrypted Oracle Database 19.12 on a new ODA X8-2M.
The TDE wallet has been successfully backed up (`odacli create-backup -in CDBET015 -c TDEWallet`) and transferred to the new ODA's filesystem (not ASM).
[oracle@oda-host tdewallet]$ ls -ltr
total 72
-rwxrwxrwx 1 oracle oinstall 341 Feb 25 13:23 logfile.log
-rwxrwxrwx 1 oracle oinstall 5835 Feb 25 13:23 ewallet_202202241425400934_CDB.p12
The cross-restore with SBT tape was successful and RMAN completed the restore and the recovery. However, `odacli register-database` had not yet been run.
We tried to restore the wallet first, because it is not part of the RMAN cross-restore.
odacli restore-tdewallet -in CDBET015 -tl /u01/NFS_TDE/backup/CDB/tdewallet/ewallet.p12
The restore of the TDE wallet fails because the new ODA does not know the new database.
The ODA needs to register the newly restored database first, but that fails because it does not have the wallet.
odacli register-database -c OLTP -s odb1 -sn CDB -t SI -tp
Enter SYS, SYSTEM and PDB Admin user password:
Retype SYS, SYSTEM and PDB Admin user password:
Enter TDE wallet password:
Retype TDE wallet password:
Job details:
odacli describe-job -i "7e60dc6f-0c69-4bb8-8205-bee6f6276b19"
Job details
----------------------------------------------------------------
ID: 7e60dc6f-0c69-4bb8-8205-bee6f6276b19
Description: Database service registration with db service name: CDBET015
Status: Failure
Created: February 25, 2022 4:04:21 PM CET
Message: DCS-10001:Internal error encountered: failed to open the tde password based wallet for database : CDB.ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN identified by ******** container=all
*
ERROR at line 1:
ORA-28367: wallet does not exist
Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
database Service registration February 25, 2022 4:04:22 PM CET February 25, 2022 4:05:54 PM CET Failure
database Service registration February 25, 2022 4:04:22 PM CET February 25, 2022 4:05:54 PM CET Failure
TDE parameter validate at destination February 25, 2022 4:04:22 PM CET February 25, 2022 4:04:22 PM CET Success
Enable OMF parameters February 25, 2022 4:04:23 PM CET February 25, 2022 4:04:23 PM CET Success
Setting db character set February 25, 2022 4:04:23 PM CET February 25, 2022 4:04:24 PM CET Success
Move Spfile to right location February 25, 2022 4:04:24 PM CET February 25, 2022 4:04:33 PM CET Success
Enable DbSizing Template February 25, 2022 4:04:33 PM CET February 25, 2022 4:05:32 PM CET Success
Copy Pwfile to Shared Storage February 25, 2022 4:05:32 PM CET February 25, 2022 4:05:39 PM CET Success
Add Startup Trigger to Open all PDBS February 25, 2022 4:05:39 PM CET February 25, 2022 4:05:40 PM CET Success
Running DataPatch February 25, 2022 4:05:40 PM CET February 25, 2022 4:05:53 PM CET Success
configuring TDE February 25, 2022 4:05:53 PM CET February 25, 2022 4:05:54 PM CET Failure
Opening wallet February 25, 2022 4:05:53 PM CET February 25, 2022 4:05:54 PM CET Failure
How to make the wallet accessible to the database/ODA?
r0tt
(1078 rep)
Feb 26, 2022, 10:50 AM
• Last activity: Feb 12, 2025, 04:07 AM
0
votes
1
answers
426
views
Postgres Search Path between connections
I'm a noob to Postgres, and am setting up a multi-tenant database using one schema per tenant.
To target each tenant I need to set search_path accordingly.
I'm worried about accidentally executing something intended for one tenant on another tenant, if the wrong search path is picked up somehow.
I will be executing functions from .NET code (via Npgsql). The functions will reside in the public schema. Each function will have the search path set inside it, like so (sketched here with a placeholder name):
CREATE FUNCTION public.tenant_op(tenant text) RETURNS void AS $$
BEGIN
    PERFORM set_config('search_path', tenant, true);
    ...
END;
$$ LANGUAGE plpgsql;
Is there any danger, via concurrency or connection pooling, that the search paths for two calls might be shared, for instance if the `true` parameter was forgotten or set to false?
I really want to be sure that there's no danger of data from one tenant crossing over in to another.
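For what it's worth, the transaction-local behaviour of `set_config(..., true)` can be verified directly; a quick sketch:

```sql
-- With is_local = true the setting lasts only until the end of the
-- current transaction, so it cannot leak into the next use of a
-- pooled connection:
BEGIN;
SELECT set_config('search_path', 'some_tenant', true);
SHOW search_path;   -- some_tenant
COMMIT;
SHOW search_path;   -- back to the session default
```

With `false` (or a plain `SET search_path`), the setting persists for the session, and a pooler that reuses sessions could hand it to the next caller unless it resets state. Npgsql's pooler, for instance, resets session state (`DISCARD ALL`) when a connection is returned, unless that behaviour is disabled.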
Thanks in advance.
Paul Guz
(13 rep)
Nov 21, 2019, 10:04 AM
• Last activity: Dec 27, 2024, 06:04 PM
0
votes
1
answers
116
views
Postgres Sharding
As a hypothetical example, say I wanted to shard a PostgreSQL database for a multi-tenant setup. Each of the tenants has an ID that is stored as a foreign key in a column on every record of every table. Is it possible to then shard the database by the tenants' ID for each record and allow querying by the shard?
Say there are four physical servers, all with one shard, and that there are four tenants:
+ Server 1 - Shard 1 - Tenant 1's data
+ Server 2 - Shard 2 - Tenant 2's data
+ Server 3 - Shard 3 - Tenant 3's data
+ Server 4 - Shard 4 - Tenant 4's data
Would it then be possible to allow an application such as a web framework, knowing what tenant it is serving, to then connect to and query (including reading and writing) the respective server with the respective shard?
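One concrete way to get this behaviour (an assumption here, since the question doesn't name a tool) is the Citus extension, which distributes tables across worker nodes by a column and routes any query that filters on it to the single node holding that tenant:

```sql
-- On the coordinator node, given an existing orders(tenant_id, ...) table:
CREATE EXTENSION citus;
SELECT create_distributed_table('orders', 'tenant_id');

-- Filtering on the distribution column routes the query (reads and
-- writes) to just the shard/node that owns tenant 3:
SELECT * FROM orders WHERE tenant_id = 3;
```

The alternative in the question's spirit, where the application simply holds one connection string per tenant and treats each server as an independent PostgreSQL database, also works, at the cost of doing the routing and any cross-tenant queries yourself.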
lucifer34
(21 rep)
Feb 29, 2024, 09:33 PM
• Last activity: Feb 29, 2024, 10:19 PM
1
votes
0
answers
60
views
On demand snapshot and CDC from Postgresql multitenant DB
I have a PostgreSQL multi-tenant DB where each table contains tenant-level metadata, e.g.:
Table "connector"
Column | Type | Collation | Nullable | Default
----------------+-----------------------------+-----------+----------+---------------------------
id | uuid | | not null | public.uuid_generate_v4()
tenant_id | character varying(255) | | not null |
last_sync_date | timestamp without time zone | | |
type | character varying(255) | | not null |
created_date | timestamp without time zone | | |
modified_date | timestamp without time zone | | |
created_by | character varying(255) | | |
modified_by | character varying(255) | | |
version | bigint | | |
is_active | boolean | | not null | true
I am using AWS RDS and we have one DB per region. We need to support data migration scenarios wherein data for one tenant needs to be moved from one region DB to another. Basically the requirements are :
- Trigger data migration on tenant (customer's request) on demand
- This would include re-creating an initial snapshot of the tenant data and all incremental updates via CDC until the actual migration is completed.
- Have an easy data consistency validation mechanism.
- Migration completion is considered complete with a switch of all traffic for one tenant from source region to destination region.
I am looking at Debezium's CDC with incremental snapshots via signalling tables, but I am not sure whether it supports the following (a sketch of the signalling mechanism follows this list):
- Does it support starting/stopping replication on demand? I would start replication once, on the trigger, and stop it when the migration completes.
- I don't want to trigger a full snapshot every time with incremental snapshots. Can I do it for only certain tenants?
- Do I need to create a new replication slot for each tenant that I want to migrate? Creating too many replication slots can cause WAL to accumulate if consumers fail.
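For the second point, Debezium's incremental snapshots are triggered by inserting a row into the configured signalling table, roughly like this (table and column names follow the Debezium documentation, but the exact `data` fields vary by version; older releases take a single `additional-condition` string rather than the array shown):

```sql
-- Ask the connector for an incremental snapshot of one table,
-- limited to a single tenant's rows:
INSERT INTO debezium_signal (id, type, data)
VALUES (
  'adhoc-1',
  'execute-snapshot',
  '{"data-collections": ["public.connector"],
    "type": "incremental",
    "additional-conditions": [
      {"data-collection": "public.connector",
       "filter": "tenant_id = ''tenant-42''"}
    ]}'
);
```

So per-tenant snapshots are possible without re-snapshotting everything; an incremental snapshot runs only when a signal row like this is written.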
I am considering CDC mechanism so as to avoid doing bulk data exports in chunks during migration as that can be slow and error prone.
Would really appreciate some suggestions here as I am new to RDBMS CDC in general.
I have tried to capture whatever I felt was relevant to my scenario. Please let me know in case any specific area which is unclear and needs more explanation.
deGee
(111 rep)
Dec 16, 2023, 08:17 AM
0
votes
1
answers
77
views
Postgresql - Create role to connect to database and revoke all other permissions
I have an application configured with the multi-tenancy strategy of "single database, multiple schemas".
Now I want to create a role which has only these specific permissions:
1. connect to the database
2. switch role (SET ROLE)
Please help/guide me to create the role per the above requirements.
No other objects in the database (such as other schemas, or the tables in those schemas) should be visible to this role.
That way I can use this role to log in and, based on the tenant, switch to another role which has access to one particular schema only.
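A minimal sketch, with hypothetical role and database names. `NOINHERIT` means the login role gains a tenant's privileges only after an explicit `SET ROLE`:

```sql
CREATE ROLE app_login LOGIN NOINHERIT PASSWORD 'change-me';

-- Connect permission only; strip whatever PUBLIC inherits by default:
REVOKE ALL ON DATABASE mydb FROM PUBLIC;
GRANT CONNECT ON DATABASE mydb TO app_login;

-- On PostgreSQL versions before 15, PUBLIC can also create in the
-- public schema, so lock that down explicitly:
REVOKE ALL ON SCHEMA public FROM PUBLIC;

-- Membership in a per-tenant role is what allows SET ROLE:
GRANT tenant_a_role TO app_login;
-- at runtime, after identifying the tenant:
-- SET ROLE tenant_a_role;
```

One visibility caveat: any role can still list schema and table *names* in the catalogs (`pg_class`, `pg_namespace`); the revokes above prevent reading the data, not seeing that the objects exist.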
Subhajit
(101 rep)
Nov 10, 2023, 04:11 AM
• Last activity: Nov 10, 2023, 07:27 AM
0
votes
0
answers
183
views
Set role for database connection in java (spring boot with multitenancy)
For every HTTP request there is a tenant identifier, based on which the schema is switched (datasource switching).
connection.setSchema(tenantIdentifier);
Now, to secure data access, I am introducing a role per schema, using the code below in my getConnection() method:
connection.createStatement().execute("SET ROLE " + role);
Is there an alternative or recommended way to set the role?
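One commonly used variant (a sketch, not the only recommended way) is to scope the role to a transaction, so a pooled connection cannot leak it, and to reset explicitly otherwise:

```sql
BEGIN;
SET LOCAL ROLE tenant_a_role;   -- reverts automatically at COMMIT/ROLLBACK
-- ... tenant-scoped statements ...
COMMIT;

-- If session-level SET ROLE is kept instead, reset before the
-- connection goes back to the pool:
RESET ROLE;
```

Whichever form is used, avoid building the statement by concatenating an unvalidated tenant value: either validate `role` against a known list or quote it as an identifier.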
Subhajit
(101 rep)
Nov 7, 2023, 03:37 PM
1
votes
2
answers
183
views
Scaling from Multiple Database to Single Database Architecture in SQL Server
My application is centered around self-contained "workspaces". For many really good reasons (everything from management to security), we have always had a one-database-per-workspace architecture. Each database has identical schema, stored procedures, triggers, etc. There is a "database of databases" that coordinates all of this. It works great.
The problem: scalability. It was recently proposed that a customer might want to have 100,000 workspaces. Obviously this is a non-starter for one SQL instance. Plus, each workspace might be rather small, but there'd also be a very wide size distribution - the biggest workspace could be 100x the size of the _median_. The top 1% of workspaces could easily constitute 90+% of the rows across all workspaces.
I'm looking for options for rearchitecting things to support this scenario, and here are some things I've considered and the issues I see with each.
- Keep the multi-database architecture but spread across multiple SQL instances. The problem is cost (both administrative and infrastructure). If we stick to a limit of 1,000 DBs on each instance, that's still 100 instances, spread across who knows how many actual VMs. But since so many of the workspaces will be small (much smaller than our current average), the revenue won't nearly scale accordingly. So I think this is probably out of the question and I'm focusing now on single-database architectures.
- Every workspace shares the same tables, indexed by workspace ID. So every table would need a new workspace ID column and every query needs to add the workspace condition in the WHERE clause (or more likely every real table is wrapped in an inline table-valued function that takes the WorkspaceID; anyway...) The primary key of every table would also have to be redefined to include the workspace ID since not every PK now is globally unique. Programming-wise this is all fine, but even with proper indexing and perfect query design (and no, not all our queries are perfect - the dreaded row scan still happens on occasion) is there any conceivable way this could perform as well - for everyone - as separate databases? More specifically can we guarantee that small projects won't suffer from the presence of big projects which could be taking up 100x more rows than the small ones? And what specific steps would need to be taken, whether it be the type of index to use or how to write queries to guarantee that the optimizer always narrows things down by workspace ID before it does literally anything else?
- Partitioning - from what I've read, this doesn't help with query performance, and it appears MS recommends limiting tables or indexes to 1000 partitions so this also won't help.
- Create the same set of tables but with a new schema for each workspace. I thought of this because there are no limits to the number of tables a database can have other than the overall 2G object limit. But I haven't explored this idea much. I'm wondering if there would be performance concerns with 100,000 schemas and millions of tables, views, stored procs, etc.
With all that, here is the specific question -
What specific features of SQL Server, and/or general strategies, including but not limited to things I've considered, would be most useful for maintaining a large number of self-contained data sets with identical schemas in a single giant database? To reiterate, maintaining performance as close as possible to a multi-database architecture is of top priority.
And needless to say, if any part of my assessment above seems incorrect or misguided I'd be glad to be corrected. Many thanks.
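As one data point for the shared-table option above, the shape being described would look roughly like this (hypothetical names):

```sql
-- WorkspaceId leads the clustered primary key, so every seek narrows
-- by workspace before anything else:
CREATE TABLE dbo.Items (
    WorkspaceId int           NOT NULL,
    ItemId      bigint        NOT NULL,
    Payload     nvarchar(max) NULL,
    CONSTRAINT PK_Items PRIMARY KEY CLUSTERED (WorkspaceId, ItemId)
);
GO
-- The inline table-valued wrapper mentioned above; being inline, the
-- optimizer expands it, so the WorkspaceId predicate becomes a seek:
CREATE FUNCTION dbo.ItemsForWorkspace (@WorkspaceId int)
RETURNS TABLE
AS RETURN
    SELECT WorkspaceId, ItemId, Payload
    FROM dbo.Items
    WHERE WorkspaceId = @WorkspaceId;
```

With the workspace ID leading every clustered and nonclustered index, a small workspace's seeks touch only its own pages regardless of how large its neighbours are; the residual risks are plan-quality ones (e.g. parameter sniffing across wildly different workspace sizes) rather than raw I/O.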
Peter Moore
(113 rep)
Aug 17, 2023, 05:30 PM
• Last activity: Aug 20, 2023, 06:23 PM
0
votes
0
answers
31
views
How to fast alter table when a system isolates tenants' data by database?
I am developing a multi-tenancy system. My senior staff suggest isolating tenants' data by database, not by schema or by a discriminator column.
My question is, if I have 1000 customers, therefore I have 1000 databases also.
So now, because of a new feature, I need to alter tables in all 1000 databases. Is that possible? Or is there an easy way to alter tables across the 1000 databases?
My PostgreSQL server has 32 cores and 256 GB of RAM; how many databases can one PostgreSQL instance handle?
wureka
(159 rep)
Jul 7, 2023, 08:15 AM
0
votes
0
answers
40
views
Scaling MySQL Server for Handling High Volume Requests in a Multitenant Application
I am currently working on developing a multi-tenant application where I have implemented a backend that identifies the tenant from the header and connects to the appropriate database.
However, I encountered an issue when attempting to create multiple databases and send requests to the MySQL server: I consistently received a "too many connections" error.
I am seeking guidance on how to configure my MySQL server to handle a large number of requests, as my backend receives a significant influx of requests simultaneously. The high volume of database creation and simultaneous requests ultimately leads to a crash of my SQL server.
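A quick first check on the server side (standard MySQL variables, not a tuning recommendation in itself):

```sql
-- How close the server runs to its connection cap:
SHOW VARIABLES LIKE 'max_connections';
SHOW STATUS LIKE 'Max_used_connections';

-- Raise the cap on the running server (persist it in my.cnf as well):
SET GLOBAL max_connections = 500;
```

Raising the cap mostly treats the symptom: with one connection pool per tenant database, connection totals grow linearly with tenants, so the usual fix is to route everything through a shared pool or a proxy layer such as ProxySQL.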
maher s
Jun 26, 2023, 06:13 AM
• Last activity: Jun 27, 2023, 10:13 PM
6
votes
5
answers
3467
views
SQL Server with multiple databases (one per client) - what is the best security practice in terms of logins/users/permissions?
We have multiple SQL Servers with each holding dozens of databases - one per client (a client in this case means a customer organization). These databases are accessed via an application, but the application is currently using a single Windows login. Therefore, this creates a security risk, i.e., it's theoretically possible to access "the other" client's database if some application vulnerabilities exist.
What is the best way to handle this scenario?
Should we create a separate login for each client and have the application connect using separate login credentials? This will reduce the security risk but create significant management overhead (which might be worth it).
A follow-up question would be: should we use Windows AD security or SQL Server authentication in this case?
I appreciate any suggestions!
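If the per-client route is taken, the SQL Server authentication flavour would look roughly like this (hypothetical names):

```sql
-- One login per client, mapped only into that client's database:
CREATE LOGIN ClientA_Login WITH PASSWORD = 'UseAStrongGeneratedPassword!1';
GO
USE ClientA_DB;
GO
CREATE USER ClientA_User FOR LOGIN ClientA_Login;
ALTER ROLE db_datareader ADD MEMBER ClientA_User;
ALTER ROLE db_datawriter ADD MEMBER ClientA_User;
```

With no user mapping in the other databases (and VIEW ANY DATABASE denied, if desired), a compromised connection string for one client cannot reach another client's database. The management overhead is real but scriptable: the CREATE LOGIN/USER pair can be folded into whatever provisioning step creates each client database.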
SQL_Guy
(685 rep)
May 2, 2023, 11:48 PM
• Last activity: May 11, 2023, 01:28 PM
0
votes
1
answers
234
views
How do I provide access to certain rows that are common for all users when implementing PostgreSQL Row Level Security?
We have a multi-tenant database and this is a table in which we have implemented row-level security based on tenant_id:
| tenant_id | product_id |
| --------- | ---------- |
| 0         | A          |
| 0         | B          |
| 1         | 1A         |
| 1         | 1B         |
| 2         | 2A         |
| 2         | 2B         |
Our requirement is that the products associated with tenant_id = 0 should be available to all tenants.
Products associated with tenant_id != 0 should only be available to the relevant tenants.
Is there a way to implement this using security definers?
Or do we need to have separate tables?
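A policy with an OR branch covers this without separate tables or SECURITY DEFINER functions, assuming the current tenant travels in a session setting (`app.tenant_id` and the `products` table name are hypothetical here):

```sql
ALTER TABLE products ENABLE ROW LEVEL SECURITY;

-- Readable: your own rows plus the shared tenant-0 rows.
CREATE POLICY tenant_read ON products FOR SELECT
    USING (tenant_id = 0
           OR tenant_id = current_setting('app.tenant_id')::int);

-- Writable: only your own rows, so nobody can modify the shared products.
CREATE POLICY tenant_write ON products FOR ALL
    USING (tenant_id = current_setting('app.tenant_id')::int)
    WITH CHECK (tenant_id = current_setting('app.tenant_id')::int);
```

A SECURITY DEFINER function would only be needed if the shared rows had to be reachable through code paths that bypass the caller's own policies.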
user5706
(3 rep)
Jul 28, 2022, 04:36 AM
• Last activity: Jul 28, 2022, 09:29 AM
3
votes
3
answers
4587
views
Multiple instance in one server
Could you please help explain why one might need multiple instances of SQL Server on the same server? When is it a good idea to use separate instances? Why wouldn't one use two databases in one instance instead? Is it ever a good idea to have multiple separate instances on the same server?
Rada Tann
(31 rep)
Jun 20, 2019, 05:08 AM
• Last activity: Jun 29, 2022, 05:32 PM
Showing page 1 of 20 total questions