
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
2 answers
10816 views
MongoDB: unable to create users other than admin; error: command createUser requires authentication
This is the script I am using to create user accounts in MongoDB:

```
mongo <<EOF
use admin;
db.createUser({user: "ram", pwd: "ram123!", roles: ['root']});
db.createUser({user: "sam", pwd: "sam123!", roles: [{db: "config", role: 'readWrite'}]});
EOF
```

This works for creating the first user, but not the second. This is the error that is returned:
```
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("f648b868-7863-4d5c-9912-e3e87b24f4e8") }
MongoDB server version: 4.4.6
switched to db admin
Successfully added user: { "user" : "ram", "roles" : [ "root" ] }
uncaught exception: Error: couldn't add user: command createUser requires authentication :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DB.prototype.createUser@src/mongo/shell/db.js:1386:11
@(shell):1:1
bye
```
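With access control enabled, MongoDB's localhost exception typically closes as soon as the first user exists, so the second `createUser` in the same unauthenticated session fails. A hedged sketch of the usual fix, reconnecting authenticated as the freshly created root user before adding further accounts:

```shell
mongo <<EOF
use admin;
db.createUser({user: "ram", pwd: "ram123!", roles: ['root']});
EOF

# re-connect authenticated as the root user just created,
# then create the remaining accounts in that session
mongo -u ram -p 'ram123!' --authenticationDatabase admin <<EOF
use admin;
db.createUser({user: "sam", pwd: "sam123!", roles: [{db: "config", role: 'readWrite'}]});
EOF
```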
Acmakala Eesha (13 rep)
May 31, 2021, 04:27 AM • Last activity: Aug 1, 2025, 07:07 AM
0 votes
1 answer
1775 views
SQL Agent Job error
It may be a really simple question, but at the moment I'm stuck in a kind of mental loop and need your help to break it. SQL Server 2016 is installed using a domain account. I didn't create an explicit login; there is only the standard generic NT SERVICE\MSSQL$XXX login, and all SQL Agent jobs like backup, index rebuild and DBCC CHECKDB run with no problems. But if I create a SQL Agent job running a simple select on the user database, like

```
select * from [AdventureWorks2016].[HumanResources].[EmployeePayHistory] where 1 = 2
```

I get an error:

> Executed as user: [Active Directory account of the SQL service]. Failed to initialize sqlcmd library with error number -2147467259. [SQLSTATE 42000] (Error 22050). The step failed.

I have to create a login for the SQL service domain account to fix the error. Any ideas what happens here? Is this a security feature of SQL Server?

UPD: after a drill-down I've found an answer about the sqlcmd error: it is caused by msdb.dbo.sp_send_dbmail with the @query parameter when the query in @query fails. But I still do not have an answer to my main question: why can't I run a query over the user database using the SQL Server service account?

UPD2: as I realized my description is not that understandable, sorry for that. One more try. SQL Server is installed to run under Active Directory account A. That means, if I open services.msc, I see that account as the log-on account of the service. This configuration was implemented via the configuration file, not as an after-setup change. SQL Server logins don't contain account A, only the generic NT SERVICE\MSSQL$XXX and NT SERVICE\SQLAgent$XXX, which are members of the sysadmin role as usual. All SQL maintenance jobs (index maintenance, DBCC and backups) are configured using Ola's scripts and run without any problems, also on user databases.
If I try to create a SQL Agent job to query a user database, even with a simple query like

```
select * from [AdventureWorks2016].[HumanResources].[EmployeePayHistory] where 1 = 2
```

the query can't run. A SQL Server trace shows the error: "The server principal A is not able to access the database "AdventureWorks2016" under the current security context". If I create a login A and add it to the sysadmin role, the job runs with no problems, but that is expected: as a sysadmin it should have no problem querying a database. The point I do not understand: all other SQL jobs run using the generic account in the background, so there is a kind of impersonation in SQL: account A is using a generic login. As additional information: if I create a login for A, I can't delete it because SQL Server says the user is logged in, so SQL Server uses it in the background to connect and to impersonate a generic login. Why is that not possible when I query a user database via a SQL Agent job?

DrillDown 2nd:

```
USE [msdb]
GO
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object: JobCategory [[Uncategorized (Local)]] Script Date: 19.01.2022 11:21:44 ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
  EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
  IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END
DECLARE @jobId BINARY(16)
EXEC @ReturnCode = msdb.dbo.sp_add_job @job_name=N'test_query_userdb_2Steps',
    @enabled=1, @notify_level_eventlog=0, @notify_level_email=0,
    @notify_level_netsend=0, @notify_level_page=0, @delete_level=0,
    @description=N'No description available.',
    @category_name=N'[Uncategorized (Local)]', @owner_login_name=N'sa',
    @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object: Step [query separate] Script Date: 19.01.2022 11:21:44 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'query separate', @step_id=1,
    @cmdexec_success_code=0, @on_success_action=3, @on_success_step_id=0,
    @on_fail_action=2, @on_fail_step_id=0, @retry_attempts=0, @retry_interval=0,
    @os_run_priority=0, @subsystem=N'TSQL',
    @command=N'select * from dbo.Users where 1=2',
    @database_name=N'StackOverflow', @flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object: Step [query] Script Date: 19.01.2022 11:21:44 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'query param', @step_id=2,
    @cmdexec_success_code=0, @on_success_action=1, @on_success_step_id=0,
    @on_fail_action=2, @on_fail_step_id=0, @retry_attempts=0, @retry_interval=0,
    @os_run_priority=0, @subsystem=N'TSQL',
    @command=N'EXEC msdb.dbo.sp_send_dbmail @profile_name=''my profile'', @recipients=''my email'', @subject=''ALERT: XX'', @query =''select * from dbo.users where 1 = 2''',
    @database_name=N'StackOverflow', @flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO
```

The same query runs as the first step of the job, but fails the job when it runs as the @query parameter of sp_send_dbmail. There is no permission-context change between the steps.
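For reference, a hedged sketch of the workaround the question describes (creating the explicit login for the service account); the account and database names below are placeholders, not the poster's real ones:

```sql
-- create an explicit login for the Windows service account "A" and map it
-- into the user database that sp_send_dbmail's @query runs against
USE [master];
CREATE LOGIN [MYDOMAIN\sql_service_a] FROM WINDOWS;

USE [StackOverflow];
CREATE USER [MYDOMAIN\sql_service_a] FOR LOGIN [MYDOMAIN\sql_service_a];
ALTER ROLE db_datareader ADD MEMBER [MYDOMAIN\sql_service_a];
```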
Mainecoon21 (175 rep)
Jan 13, 2022, 10:35 AM • Last activity: Jul 28, 2025, 07:07 PM
0 votes
1 answer
157 views
How to secure SQL Server in an enterprise environment?
So I've been tasked with learning how to secure our SQL Servers. Here's the scenario: individual users have read access to a table in SQL Server. This table has approx. 33 million rows, and growing (telemetry data).

Some *genius* user discovered Power BI and installed it (Desktop), along with the on-prem data gateway, on their workstation. Then they did the same thing for 40 of their friends. THEN they created a .pbix report to query this table. THEN they published the report to their personal workspace. THEN they emailed the report to their 40 friends with instructions on how to set up the gateway on each workstation, and how to enable the data sync schedule for twice a day.

So now our 33 million records are exiting our network (40 users * twice a day = 80 times per day). Besides being a strain on the SQL Server, we just don't want that data being published to a service we don't have any control over by users who aren't authorized to be doing that. So the questions are:

- How can we prevent this scenario?
- What's the proper way to set up a database so a user can access the data, but not be able to publish the data off-site?
- Are there firewall rules we can set up to block the gateway service? My understanding was it just needs port 443 outbound, which we obviously can't block.
- Any recommended training courses that cover this kind of stuff?

Thanks!
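One partial, server-side measure sometimes used here is a logon trigger keyed on the client's application name; the gateway's query engine typically identifies itself with "Mashup" in its app name, but APP_NAME() is client-supplied and spoofable, so treat this sketch as a speed bump rather than a security boundary (verify the actual app name in sys.dm_exec_sessions first):

```sql
-- reject non-sysadmin sessions whose client identifies as the
-- Power BI / Mashup engine (APP_NAME is spoofable; defense in depth only)
CREATE TRIGGER block_powerbi_clients
ON ALL SERVER FOR LOGON
AS
BEGIN
    IF APP_NAME() LIKE '%Mashup%'
       AND IS_SRVROLEMEMBER('sysadmin') = 0
        ROLLBACK;  -- deny the connection
END;
```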
Sam Axe (183 rep)
Jun 10, 2019, 08:17 PM • Last activity: Jul 23, 2025, 06:08 AM
1 vote
1 answer
149 views
Why is to_char used when doing a divide by 0 in Oracle based SQLi?
For conditional error-based SQLi, instead of writing 1/0 directly as in MySQL, we need to write to_char(1/0) for Oracle. What is the reason behind this? Example:

Oracle:

```
SELECT CASE WHEN (YOUR-CONDITION-HERE) THEN TO_CHAR(1/0) ELSE NULL END FROM dual
```

Microsoft:

```
SELECT CASE WHEN (YOUR-CONDITION-HERE) THEN 1/0 ELSE NULL END
```

Reference:
Vbr (11 rep)
Jan 23, 2023, 10:43 AM • Last activity: Jul 20, 2025, 11:04 AM
0 votes
1 answer
24 views
Locking down MariaDB for local development environment from external hosts?
I recently came across this little gem of a SQL query for MariaDB:

```
SHOW GLOBAL VARIABLES LIKE 'log_error';
```

I get that there are tons of bots on the Internet, and so I expected what I saw on my live server. However my local server on my home connection + VPN has a ton of bot requests too! I want to be explicitly clear: I do not remote in to my home system whatsoever. I want to completely lock MariaDB down from external access and whitelist localhost, ::1 and maybe one network IP when I test from my Mac on rare occasion. I know MariaDB users are tied to a host/IP, but that is for internal commands. I know I can change the port number, and that should be easy enough, but that feels like something to do in combination. I simply want to whitelist three IP addresses to ever be able to access MariaDB. I've also disabled public access to MariaDB in the Windows Firewall.

**How do I lock down MariaDB so it is not accessible to external hosts?**

A small sample of the nonsense I'm seeing in the log:

> 2025-05-07 18:29:28 6475 [Warning] Hostname 'rnd.group-ib.com' does not resolve to '80.82.70.133'.
> 2025-05-07 18:29:28 6475 [Note] Hostname 'rnd.group-ib.com' has the following IP addresses:
> 2025-05-07 18:29:28 6475 [Note] - 5.9.185.30
>
> 2025-05-08 6:36:50 7001 [Warning] IP address '34.77.36.4' has been resolved to the host name '4.36.77.34.bc.googleusercontent.com', which resembles IPv4-address itself.
>
> 2025-05-08 19:51:43 7176 [Warning] Aborted connection 7176 to db: 'unconnected' user: 'unauthenticated' host: '91.223.169.88' (This connection closed normally without authentication)
>
> 2025-05-08 20:46:33 7283 [Warning] Aborted connection 7283 to db: 'unconnected' user: 'unauthenticated' host: '103.203.57.18' (This connection closed normally without authentication)
>
> 2025-05-10 12:14:01 7840 [Warning] Host name 'hn.kd.ny.adsl' could not be resolved: No such host is known.
>
> 2025-05-10 13:02:33 7842 [Warning] IP address '198.235.24.242' could not be resolved: No such host is known.

----------

It's been five hours and eight minutes since I changed the port number, and the nearly minute-by-minute ..."events" have completely stopped. That, of course, does not negate the opportunity to learn how else I can strengthen the security of MariaDB on Windows.

----------

dbdemon's answer did not include the exact syntax, so I'll clarify what works and how I verified it here:

```
# Block all IP addresses except the same computer:
bind-address = 127.0.0.1,::1
```

I restarted MariaDB and it worked. But I did not know whether it would actually block anything, so I did the inverse: I specified IP addresses I'm not using, to intentionally get blocked and thus verify that this will work:

```
# Intentionally test the format by blocking local access:
bind-address = 3.4.5.6,7.5.3.2
```

...which does work, which verifies that the first bind-address line *does* work.
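Pulling the measures discussed above into one place, a minimal server-config sketch; note the comma-separated bind-address list is a MariaDB 10.11+ feature (on older versions, bind to a single address and lean on the firewall), and the port number is an arbitrary example:

```ini
[mysqld]
# listen only on loopback (MariaDB 10.11+ accepts a comma-separated list)
bind-address = 127.0.0.1,::1

# optional: move off the default 3306 port as well
port = 13306

# optional: skip DNS lookups so accounts are matched by IP only,
# which also silences the hostname-resolution warnings in the log
skip-name-resolve
```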
John (769 rep)
Jul 18, 2025, 03:32 AM • Last activity: Jul 19, 2025, 10:41 PM
2 votes
1 answer
62 views
AG DMK/SMK Issue SQL server 2022 CU20
I’m currently working on a proof of concept (POC) involving column-level encryption and key management (SMK/DMK) in a SQL Server 2019 Always On Availability Group (AG) environment (CU20). I’ve tested multiple scenarios and am trying to determine the best approach for ensuring encryption consistency across replicas. In my first test, I created the Database Master Key (DMK) before adding the database to the AG. Everything worked fine — the encrypted data was accessible from the readable secondary and even after failover. In another test, I added the DMK after the database was already in the AG. In this case, encrypted data was not accessible on the readable secondary or after failover. To fix this, I removed the database from the AG and re-added it with the password — this allowed encryption to work across all replicas. In a more complex scenario, I created a DMK without explicitly encrypting it with the local Service Master Key (SMK) on the primary. After a failover, the readable secondary (which became the new primary) couldn’t access encrypted data. To resolve this, I ran OPEN MASTER KEY DECRYPTION BY PASSWORD followed by ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY on the new primary. This fixed the issue on the new primary, but now the original secondary (i.e., the old primary) couldn’t access encrypted data. I noticed that after restarting the servers, only the latest SMK used for DMK encryption allows automatic decryption, and the previous SMK is no longer valid unless re-added. Given this behavior, I have a few questions: * Is it expected that only one SMK can be used for automatic decryption of a DMK at any time? * Should DBAs routinely re-encrypt the DMK with the local SMK after every failover to ensure continuity? * Would it be better to restore the same SMK across all replicas to maintain consistency — and is that considered safe in a production environment? 
I’m aware that SMK restore using FORCE can affect linked servers, credentials, and SSIS, so I’m hesitant to use it in prod without clarity. I’d really appreciate input on the best practices or any Microsoft-recommended approach for managing encryption chains in AG setups with minimal risk.
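For reference, a sketch of the re-encryption step described above, run on whichever replica has just become primary (the password is a placeholder for the real DMK password):

```sql
-- re-open the DMK with its password, then re-add protection by the local
-- Service Master Key so this instance can auto-open it after restarts
OPEN MASTER KEY DECRYPTION BY PASSWORD = '<dmk_password>';
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;
CLOSE MASTER KEY;
```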
Manthan Prabhu (21 rep)
Jul 15, 2025, 04:19 AM • Last activity: Jul 16, 2025, 09:38 AM
0 votes
1 answer
148 views
How to setup Mysql master-slave replication with the slave meant for testing where divergence is OK
Problem: I have a staging DB server on which functionality is tested before pushing it to the production environment. Now we want to start doing VA/PT (Vulnerability Analysis and Penetration Testing) on our application, but since that can be destructive on the staging DB server, we want to make a separate testing environment with a VAPT web+DB server.

Requirements:

1. The data from the staging DB server must be replicated onto the VAPT DB server automatically, so that specific new use cases, features, etc. can be tested for vulnerabilities.
2. Due to VAPT activities (testing data, SQL injection, possibly DROP TABLE exploits, etc.) the VAPT DB server itself will also have its own data changes, i.e. divergence from the Staging DB (master).

So if I use simple master-slave replication as below, I am assured of #1:

    Staging DB (Master) -> VAPT DB (Slave)

But if I do #2, the slave will eventually diverge, which is fine for the testing environment; but will it interrupt or mess with the master-slave replication as per #1? An obvious example where divergence will cause errors is a VA/PT activity that causes DROP TABLE users, so that the Staging DB (master) users table trying to INSERT/UPDATE data will cause replication errors. Some UPDATEs/DELETEs might cause errors too. In particular, if I use ROW-based replication, divergence will happen quickly, causing frequent errors. If I use STATEMENT-based replication, since ids will not match, it is possible that some data will break, because ids are essential to link data in related tables even though we do not use foreign keys.

Alternatively, instead of replication, I could **manually dump the Staging DB into the VAPT DB daily**, which would be cumbersome to automate. Or I could make copy DBs and set up various partial copy operations, but that would complicate matters too much, given that I am not a developer and that my developers often make and revert changes of various sizes randomly.
*EDIT: The data directory varies between 20-25 GB on Staging.*

Surely someone has come across this problem in their work, and there might be a set of best practices for this situation, i.e. maintaining a match between staging and testing environments in near real time while allowing the testing side freedom to play with the data. I tried googling for a while, but the right phrasing escapes me: all I get is how-tos for master-slave replication and for handling unwanted drift/divergence, nothing much about desired/accepted drift and divergence, or partial replication. Thanks in advance.
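One low-tech alternative matching the "daily dump" idea above is a nightly re-clone, which sidesteps replication-breakage entirely by throwing the day's pen-testing divergence away. A cron-able sketch (hostnames, schema name and credentials handling are hypothetical; assumes 20-25 GB is acceptable to transfer nightly):

```shell
#!/bin/sh
# nightly: rebuild the VAPT database from staging; credentials are expected
# in ~/.my.cnf so no passwords appear on the command line
mysqldump --single-transaction --routines --triggers \
    -h staging-db.internal appdb \
  | mysql -h vapt-db.internal appdb
```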
site80443 (119 rep)
Apr 30, 2021, 03:13 PM • Last activity: Jul 14, 2025, 12:04 PM
1 vote
1 answer
183 views
Is it possible to login to an Oracle schema using a certificate?
The question is based on having to deal with lots of automated processing (testing, mostly). As it stands, all the schemas used have the same well known password, which is stored in lots of configuration files, which is bad from a security perspective. Using encrypted passwords does not really cut it. So the question is: Does Oracle DB allow the use of client certificates for authentication?
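Oracle does support externally authenticated users tied to TLS client certificates, which would take the shared password out of the configuration files entirely. A hedged sketch, assuming a TCPS listener and client wallets are already configured (the user name and distinguished name below are hypothetical):

```sql
-- map a schema to the DN in the client's certificate; clients connecting
-- over TCPS with that certificate need no password
CREATE USER test_runner IDENTIFIED EXTERNALLY AS 'CN=test_runner,OU=QA,O=Example';
GRANT CREATE SESSION TO test_runner;
```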
stmoebius (197 rep)
Jan 11, 2019, 02:05 PM • Last activity: Jul 8, 2025, 11:07 AM
0 votes
1 answer
184 views
MySQL Identifying Connection Attempt
I'm seeing the following errors repeatedly in the error log file for our production MySQL server: > [Warning] Access denied for user 'root'@'remote.appserver.com' (using > password: NO) So, I know where the connection attempt is coming from, but I don't know how to track down the application trying to connect. Is there any way to determine the application, program, client initiating the request?
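The MySQL error log won't name the client program, so the trail usually continues on the application host itself. A sketch of the usual OS-level tools, assuming a Linux app server and the default port 3306:

```shell
# on remote.appserver.com: which local process holds sockets to the DB?
sudo ss -tnp | grep ':3306'

# or watch new connection attempts to the DB as they happen
sudo tcpdump -i any -nn 'tcp[tcpflags] & tcp-syn != 0 and dst port 3306'
```

The `-p` flag on `ss` shows the owning process name and PID, which normally identifies the misconfigured application directly.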
John (11 rep)
May 6, 2014, 07:59 PM • Last activity: Jul 1, 2025, 11:08 AM
1 vote
1 answer
192 views
What type of queries are considered as SQL injection?
I was testing the efficacy of a paid **database security solution** which has the ability to detect and block **SQLi attacks**. For testing purposes I tried the following query against a **PostgreSQL** database through this tool, which acts as a proxy:

```
select * from test where id=1 or 1=1;
```

This query the tool was able to detect and block. But then I tried another one:

```
select * from test where id=1 or true;
```

This time the query got through, and I received the full table data as the result. Isn't this a classical example of SQLi? What types of queries are actually considered SQLi, which I can test against the tool?
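Whatever the proxy's signature list covers, the durable defense is parameterization, which makes any tautology payload inert. A self-contained sketch (sqlite3 is used only so it runs anywhere; the principle is identical for PostgreSQL drivers):

```python
# Demo: why tautology payloads slip past pattern-matching filters,
# and why parameterized queries stop them regardless of the pattern.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id INTEGER, secret TEXT)")
conn.executemany("INSERT INTO test VALUES (?, ?)", [(1, "a"), (2, "b")])

payload = "1 or true"  # the second payload from the question

# vulnerable pattern: the payload is spliced into the SQL text and
# rewrites the WHERE clause into a tautology, so every row leaks
leaked = conn.execute("SELECT * FROM test WHERE id=" + payload).fetchall()

# safe pattern: the whole payload is bound as a single literal value;
# it cannot change the query's structure, so it matches no integer id
safe = conn.execute("SELECT * FROM test WHERE id=?", (payload,)).fetchall()

print(len(leaked), len(safe))
```

Running this prints a nonzero leaked count for the concatenated query and zero rows for the parameterized one.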
goodfella (595 rep)
Dec 1, 2023, 07:16 AM • Last activity: Jun 27, 2025, 06:06 AM
0 votes
1 answer
210 views
Is there a way of securing ProxySQL's own SQLite3 database?
As I understand, **ProxySQL** has support for *SSL encryption* of inbound and outbound traffic (frontend/backend) and supports *password hashing*. However, back in 2017 one had direct database access: https://dba.stackexchange.com/questions/164705/how-to-easily-bring-80-mysql-users-into-proxysql/212991

According to *Icarus*'s answer, it would be rather easy for someone who broke into the hosting server to gain access to all endpoints listed in this database by making a SELECT call directly against it. Is there any way of encrypting the SQLite3 DB? Or perhaps of encrypting the data stored in the database (similar to how SSL is used for in-transit data)? Basically I'm looking for *data-at-rest* securing strategies. Perhaps this is already implemented in recent versions? I could not find any documentation supporting it, but it did seem that may be the case: https://github.com/sysown/proxysql/blob/v2.0.13/lib/ProxySQL_Admin.cpp

If I understood correctly, this implements SHA1 hashing: https://stackoverflow.com/questions/3179021/sha1-hashing-in-sqlite-how If so, is this restricted to password hashing? Or is it part of the SSL encryption implementation? Or is there actually a way of enabling encrypted "DISK" reads/writes? Is this by any chance implemented by default on v2+ builds?

Ultimately, I plan on encrypting the volume this is installed on, and having a second proxy layer so that I don't expose endpoint information; this would be an additional layer of protection. Thanks, I appreciate any insights you may have!
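For context: stock SQLite, which ProxySQL embeds, has no built-in at-rest encryption (SEE and SQLCipher are separate builds), so until volume encryption is in place the usual mitigation is strict file ownership on the admin database. A sketch, assuming the default on-disk path of a standard ProxySQL install:

```shell
# restrict the admin database file to the proxysql service account only
chown proxysql:proxysql /var/lib/proxysql/proxysql.db
chmod 600 /var/lib/proxysql/proxysql.db
```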
Eduardo A del Corral Lira (11 rep)
Jun 22, 2020, 02:13 PM • Last activity: Jun 26, 2025, 07:06 AM
1 vote
1 answer
192 views
Cassandra security - private IPs with some VPN solution or public IPs and open ports?
There are many different solutions for securing a Cassandra cluster. One of them is to use public IPs and open ports 7000/9042 on each server. Another is to keep everything in a private network with private IPs and connect the nodes with some kind of VPN solution. I have OPNsense with WireGuard for this, but I'm wondering whether it causes any problems in production. What are you using to prevent attacks on public ports? Thank you so much.
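For what it's worth, the two approaches are not mutually exclusive: even behind WireGuard, enabling internode and client TLS in cassandra.yaml limits the blast radius if a port does end up exposed. A sketch (keystore/truststore paths and passwords are placeholders):

```yaml
# cassandra.yaml fragment: encrypt node-to-node (port 7000) and
# client (port 9042) traffic; paths and passwords are examples only
server_encryption_options:
  internode_encryption: all
  keystore: /etc/cassandra/conf/keystore.jks
  keystore_password: changeit
  truststore: /etc/cassandra/conf/truststore.jks
  truststore_password: changeit

client_encryption_options:
  enabled: true
  keystore: /etc/cassandra/conf/keystore.jks
  keystore_password: changeit
```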
mulesky (33 rep)
May 6, 2020, 07:43 PM • Last activity: Jun 26, 2025, 06:06 AM
0 votes
1 answer
189 views
What does the presence of rsAccessDenied mean?
I'm troubleshooting an SSRS security issue where one user is being denied access while another user with apparently identical settings is working as expected. I'm seeing "The permissions granted to user 'mydomain\myAccount' are insufficient for performing this operation" in the browser, but curiously NOT the (rsAccessDenied) code at the end. Is this indicative of something that can help me troubleshoot the issue?
Jeff Sacksteder (1317 rep)
Aug 21, 2015, 07:00 PM • Last activity: Jun 22, 2025, 01:06 AM
0 votes
3 answers
201 views
How safe are PostgreSQL triggers against attacks?
In my PostgreSQL database I set up some triggers to prevent DELETE and UPDATE operations (the triggers make sure that these operations lead to nothing). However, I'm wondering how safe these triggers are. Is it possible to circumvent these triggers and delete data anyway, for instance by an attacker who gains access to the database?
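For context, a sketch of the belt-and-braces version of this pattern on a hypothetical `audit_log` table: the REVOKE stops ordinary roles outright, the trigger catches sessions that still hold the privilege, and neither stops a superuser or the table owner, who can simply `ALTER TABLE ... DISABLE TRIGGER` (the `EXECUTE FUNCTION` syntax is PostgreSQL 11+; older versions use `EXECUTE PROCEDURE`):

```sql
-- ordinary roles lose the rights entirely
REVOKE UPDATE, DELETE, TRUNCATE ON audit_log FROM PUBLIC;

-- the trigger turns any remaining UPDATE/DELETE into a silent no-op
CREATE FUNCTION suppress_change() RETURNS trigger AS $$
BEGIN
  RETURN NULL;  -- discard the row operation
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER block_changes
  BEFORE UPDATE OR DELETE ON audit_log
  FOR EACH ROW EXECUTE FUNCTION suppress_change();
```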
Niklay (1 rep)
May 28, 2019, 01:21 PM • Last activity: Jun 21, 2025, 12:19 AM
1 vote
1 answer
220 views
Securely manage and dynamically create multiple databases
# Background

I have inherited a system that has several hundred databases. The primary "Clients" database represents all clients, with several tables like:

    WebUsers = username | password | server_id
    Servers  = server_id | ip_address

The remaining "ClientInfo" databases are intended for one client each. Tables like:

    Customers = id | first_name | last_name

I am not a DBA. I'm a web developer who inherited a roughly built system, so I apologize ahead of time if this question seems silly.

# The process

The client visits a login server: login.example.com. This server has access to the "Clients" database. They log in to their account, which takes them to the correct web application server, sass1.example.com, that has credentials found in the "Servers" database and grants them a secure session. I think this is a pretty basic SaaS web application setup. Think Salesforce, Get Satisfaction, that kind of thing.

# My Problem

All databases are managed under a single database login. So the login server technically has access to all the client databases, and each application server has access to all databases, the only limiter being a single string in the session that identifies the user.

From a security standpoint, it is my understanding that these databases should each have their own credentials, so that if one is breached via code injection the remaining databases stay secure. For example, my code logs into the client DB using environment variables, then selects the database name that is stored in the secure session. A code injection could easily submit a query that changes the active database.

# What is the secure way to manage this system?

My rough plan was to have the login server point to the "Clients" database through a unique DB user that has access to only that database. Then I would store the DB user/password for each client's database in a "ClientList" table. Then on login, the client-specific database credentials would be pulled from an encrypted value in the "server" table.

So the session would check if it's in a valid login state, and then, instead of a string pointing to a database name, it has an id that points to the "ClientList" table holding encrypted login credentials for just their database. But this doesn't seem much different from managing all DBs under the same login, since the "Clients" database still has all the passwords. A code injection would have access to the decrypt algorithms and could just log in with different credentials.

What is the right (secure) way to manage this system?
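To make the per-tenant credential idea concrete, a MySQL sketch (account, schema, network and password values are all hypothetical): each application account can reach only its own client database, so an injected query that tries to switch schemas fails with an access error instead of reading another tenant's data:

```sql
-- one account per client database; no grants on clients_db or any
-- other client's schema
CREATE USER 'client42_app'@'10.0.%' IDENTIFIED BY '<generated-password>';
GRANT SELECT, INSERT, UPDATE, DELETE
    ON client42_db.* TO 'client42_app'@'10.0.%';
```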
danielson317 (343 rep)
Jul 1, 2016, 07:01 PM • Last activity: Jun 14, 2025, 08:07 AM
0 votes
1 answer
607 views
Specifying MySQL user hosts with IPv6 addresses
Inspired after making the move to IPv6 on my MySQL server, and intrigued by the lack of documentation regarding 'user'@'[IPv6]' nomenclature, I proceeded to discover how to limit my users based on their IPv6 address. My allocation is a /56, so I wanted to limit my DB users to this prefix length. My test host was 2001:DB8:111:6601::a01:120.

The following didn't work at all:

    2001:DB8:111:6600::
    2001:DB8:111:6600::%
    2001:DB8:111:6601::
    2001:DB8:111:66%::
    2001:DB8:0111:6601::%
    2001:DB8:111:6600::/56
    2001:DB8:111:6601::a01:120/56
    2001:DB8:111:6601:0:0:a01:120/56
    2001:DB8:111:6601::a01:120/128
    2001:DB8:111:6601::/64
    2001:DB8:111:6601::%/128
    2001:DB8:111:6601::%/1
    2001:DB8:111:6601::%/64
    2001:DB8:111:6601::a01:0120

The following worked:

    2001:DB8:111:6601::%        (too limiting as it's a /64, not /56)
    2001:DB8:111:6601::a01:120  (too limiting as it's a host)
    2001:DB8:111:66%::%         (but may match 066x in the 7th and 8th bytes)
    2001:DB8:111:66__::%        (this appears to be the best fit)

Some notes:
- don't use leading zeros in fields
- don't bother with /prefixlen
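One caveat worth adding, as an assumption to verify against your server's manual: recent MySQL 8.0 releases reportedly accept CIDR netmask notation in account host values for IPv6, which would express the /56 directly instead of approximating it with `_`/`%` patterns. A hedged sketch:

```sql
-- assumes a MySQL 8.0 release with IPv6 netmask support in account hosts;
-- on older servers (or MariaDB) fall back to the patterns found above
CREATE USER 'app'@'2001:db8:111:6600::/56' IDENTIFIED BY '<password>';
```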
Scott (1 rep)
Feb 26, 2020, 07:46 AM • Last activity: Jun 9, 2025, 06:07 PM
2 votes
1 answer
1397 views
powershell problem when running on a remote server - access is denied
As a follow-up to this question: Run a PowerShell script on a different server - from inside a SQL Server job

There you can see that for me to run a PowerShell script from inside a SQL Server job, I need to create a proxy. Both the PowerShell script and the proxy script are in that link. What the PowerShell script does is delete old backup files on a remote server.

The question that I still have is: for the whole thing to work on Windows Server 2012, I could find no other way than granting Administrator on the remote box to the proxy account, as you can see in the picture below.

[screenshot]

Another alternative was to create a PowerShell script that does both backups and deletes, and create a SQL Server job with the following code:

[screenshot]

Is this a PowerShell limitation when running on a remote server? Are there other ways to work around it, without having to grant admin to the proxy account?

PowerShell Basics: Managing Remote Servers & Sessions
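It likely isn't a PowerShell limitation so much as a permissions one: deleting files on a remote share generally needs only NTFS Modify (which includes Delete) for the proxy account on that folder, not local Administrator on the remote box. A sketch of the delete step under that assumption (share path and retention window are hypothetical):

```powershell
# remove .bak files older than 7 days from the remote backup share;
# the proxy account needs Modify on \\remoteserver\Backups only
Get-ChildItem -Path '\\remoteserver\Backups' -Filter '*.bak' -Recurse |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-7) } |
    Remove-Item
```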
Marcello Miorelli (17274 rep)
Sep 16, 2015, 10:54 AM • Last activity: Jun 5, 2025, 04:05 PM
1 vote
1 answer
59 views
Created a 'datareader' user linked to Windows AD to embed in SSRS reports; and user has authentication issues
I am attempting to add a Windows AD User account in order to embed it in SSRS reports for org wide reports. Using SSMS (with \\Administrator access): - Server Level: - Created the account on SQL Server > Security - Logins: General (Windows Authentication) - Default Database: \ - Server Roles: Public - User Mapping: \ (selected) - Status: - Connect: Grant - Login: Enabled - Database Level: - Membership: db_datareader
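For comparison while troubleshooting, the T-SQL equivalent of the GUI steps listed above (login and database names are placeholders); if any of these pieces is missing, the account will authenticate but then fail authorization in the report's data source:

```sql
-- server level: the Windows login
USE [master];
CREATE LOGIN [MYDOMAIN\ssrs_reader] FROM WINDOWS;
GRANT CONNECT SQL TO [MYDOMAIN\ssrs_reader];

-- database level: map the login and grant read access
USE [ReportSourceDb];
CREATE USER [MYDOMAIN\ssrs_reader] FOR LOGIN [MYDOMAIN\ssrs_reader];
ALTER ROLE db_datareader ADD MEMBER [MYDOMAIN\ssrs_reader];
```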
S M (11 rep)
Jun 3, 2025, 01:44 PM • Last activity: Jun 4, 2025, 09:57 AM
5 votes
2 answers
1491 views
what are the reasons why show advanced options is a security threat when left enabled?
I generally only temporarily change the value of Show Advanced Options. I have servers where it would be convenient to leave this setting on. Are there real threats that could be avoided by denying this option on the server? Talking about SQL Server 2019 here, although some servers are still SQL 2016.
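For reference, the temporary-toggle pattern alluded to above; the common argument is that 'show advanced options' itself only makes settings visible, and the real risk lives in what it is typically used to enable (e.g. xp_cmdshell):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- inspect or change an advanced option while it is visible
EXEC sp_configure 'xp_cmdshell';

EXEC sp_configure 'show advanced options', 0;
RECONFIGURE;
```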
Marcello Miorelli (17274 rep)
May 27, 2025, 05:30 PM • Last activity: May 28, 2025, 10:31 AM
0 votes
2 answers
258 views
Split database for security and privacy reasons
Currently I have a small business where I have a main product (written in PHP and MySQL), several users, several companies, etc. The software uses around 20 tables in its operation. The problem is that I want to expand the business and create a different program, but I want to unify the data. The new program (different function) will use about 30 tables, but in reality there is a "core" of tables that are the same, let's say 10. Here I have two options:

* Create one larger database with 20 + 30 tables (minus the repeated ones) = 40 tables.
* Or create different, connected databases:

      DB_core = 10 tables
      DB_1 = 10 tables
      DB_2 = 20 tables

Why use different databases? I want to develop this second piece of software with a different team, but I don't want to give them access to some sensitive or valuable information: tables of people, companies, etc. I will keep this data in the "core" database, and only on the production server. From what I understand, users can be created with per-table permissions, but I feel that it is much easier to keep things in separate databases. Are there any significant performance implications? Is the more extensive programming worth it?

PS: Sometimes I need the programmer team to manage the production server.
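If the separate-database route is motivated mainly by access control, note that MySQL can scope one team's account to one schema with a single grant, which may be simpler than splitting servers (account, host and table names below are hypothetical):

```sql
-- the second team's account can touch DB_2 only; DB_core stays invisible
CREATE USER 'team2_dev'@'%' IDENTIFIED BY '<password>';
GRANT ALL PRIVILEGES ON DB_2.* TO 'team2_dev'@'%';

-- optionally, read-only access to a specific non-sensitive core table
GRANT SELECT ON DB_core.shared_lookup TO 'team2_dev'@'%';
```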
Blaztix (101 rep)
Mar 25, 2020, 02:40 AM • Last activity: May 27, 2025, 10:08 AM
Showing page 1 of 20 total questions