Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
1235
views
Accessing Secured Storage account from Synapse Notebook
We have a firewalled storage account with a few files that we need to access from a Synapse notebook. We do that with the code below:
abfss_path = f'abfss://container@storageaccount.dfs.core.windows.net/data.csv'
df = spark.read.load(abfss_path, format='csv',header=True)
display(df.limit(10))
Both the Synapse workspace's managed identity and my user (object ID) have the 'Storage Blob Data Contributor' role on the storage account.
The following have been enabled:
- Allow Azure services on the trusted services list to access this storage account.
- Specify resource instances that will have access to your storage account based on their system-assigned managed identity.
Running the above code works from a pipeline with a Synapse notebook task but **fails when run from the studio through the notebook cell**.
Error : ***Operation failed: "This request is not authorized to perform this operation.", 403***
I have tried everything but can't get it to work.
Via Pipeline
Via Studio Notebook Cell
I can see that AAD (pass-through) takes precedence when running from the studio, so I tried running the session as 'Managed Identity'. This doesn't change anything and I keep getting the same error.
How do I get this working?
Note: I am not using a managed VNet in my Synapse workspace.
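For reference, a minimal sketch of pinning the session's storage authentication explicitly through the generic hadoop-azure (ABFS) OAuth settings. A service principal is assumed here; the client ID, secret, and tenant values are placeholders, and this only addresses the identity side, not the storage firewall rules:
# Sketch only: authenticate the Spark session to ADLS Gen2 with a service principal.
# All <...> values are placeholders; abfss_path is the same variable as above.
account = "storageaccount.dfs.core.windows.net"

spark.conf.set(f"fs.azure.account.auth.type.{account}", "OAuth")
spark.conf.set(f"fs.azure.account.oauth.provider.type.{account}",
               "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set(f"fs.azure.account.oauth2.client.id.{account}", "<app-client-id>")
spark.conf.set(f"fs.azure.account.oauth2.client.secret.{account}", "<app-client-secret>")
spark.conf.set(f"fs.azure.account.oauth2.client.endpoint.{account}",
               "https://login.microsoftonline.com/<tenant-id>/oauth2/token")

df = spark.read.load(abfss_path, format="csv", header=True)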


Ramakant Dadhichi
(2338 rep)
Oct 20, 2022, 03:52 PM
• Last activity: Aug 9, 2025, 11:07 AM
2
votes
2
answers
1019
views
MySQL Multi-Master Active-Active Multi-Write
I am setting up multi-master replication (all masters can write to their own database). Initially I started with two servers. After establishing the connection and syncing the data on both servers, a few questions are bothering me:
1. If the connection between the two servers is lost and, during that time, both update the same row or create a row with the same primary key, there will be a collision between them and the sync will break.
2. Can we implement the same thing for a configuration with more than two masters?
3. Are there any monitoring and conflict-resolution tools I can use?
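On point 1, a common way to at least avoid duplicate auto-increment keys while the link is down is to give each master its own increment slot. A minimal sketch, assuming two masters (these settings would normally go in my.cnf so they survive restarts):
-- On master 1: generated auto_increment values use slot 1
SET GLOBAL auto_increment_increment = 2;  -- total number of writing masters
SET GLOBAL auto_increment_offset    = 1;  -- this master's slot

-- On master 2: generated auto_increment values use slot 2
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset    = 2;
This only prevents primary-key collisions on inserts; conflicting UPDATEs of the same row during a split still need to be detected and resolved separately.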
Arjun Kaushal
(21 rep)
Aug 7, 2016, 05:04 PM
• Last activity: Aug 9, 2025, 10:08 AM
2
votes
2
answers
443
views
SQL trigger that outputs incremental numbers from zero whenever the month changes
I have the following trigger that outputs a number in a format such as 170200005, where 17 is the year and 02 is the month (February, becoming 03 for March, and so on), and the last digits are incremental.
BEGIN
SET @var1 = LPAD((SELECT MONTH(CURRENT_DATE())), 2, '0');
SET @var2 = (SELECT YEAR(CURRENT_DATE));
SET @var2 = (SELECT MID(@var2, 3, 2));
SET @var1 = CONCAT(@var2, @var1);
INSERT INTO table1_seq VALUES (NULL);
SET NEW.id = CONCAT(@var1, LPAD(LAST_INSERT_ID(), 5, '0'));
END
I would like code that outputs the same format as described above but restarts from 00001 whenever the month changes.
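One common pattern for a counter that restarts each month is to key the sequence table by year-month instead of inserting into a bare AUTO_INCREMENT table. A minimal sketch, assuming the target table is called table1; the helper table and trigger names are made up here:
CREATE TABLE table1_month_seq (
  yymm CHAR(4) NOT NULL PRIMARY KEY,
  seq  INT UNSIGNED NOT NULL
);

DELIMITER //
CREATE TRIGGER table1_bi BEFORE INSERT ON table1
FOR EACH ROW
BEGIN
  DECLARE v_yymm CHAR(4);
  SET v_yymm = DATE_FORMAT(CURRENT_DATE, '%y%m');
  -- one counter row per month; LAST_INSERT_ID(expr) makes the new value readable below
  INSERT INTO table1_month_seq (yymm, seq)
  VALUES (v_yymm, LAST_INSERT_ID(1))
  ON DUPLICATE KEY UPDATE seq = LAST_INSERT_ID(seq + 1);
  SET NEW.id = CONCAT(v_yymm, LPAD(LAST_INSERT_ID(), 5, '0'));
END//
DELIMITER ;
The first insert of a new month creates a fresh counter row starting at 1, so the suffix naturally resets to 00001.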
amos chabwinja
(21 rep)
Mar 3, 2017, 07:06 AM
• Last activity: Aug 9, 2025, 09:03 AM
0
votes
1
answers
665
views
auto_explain does not log
It looks like I'm doing something wrong. I turned auto_explain on by adding the following values to postgresql.conf:
shared_preload_libraries = 'pg_stat_statements,pg_wait_sampling,pg_stat_kcache,auto_explain'
auto_explain.log_min_duration = '3s'
In my monitoring I can see long-running queries that exceeded the 3 s limit and finished successfully.
'values' here shows how long each query lasted:
But when I look into my log 'postgresql-Mon.log' (and the following days) there is nothing about those queries, only a few lines about the modules during the database startup process.
Am I missing something?
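A quick sanity check, as a sketch (assumes superuser access): confirm the library was really picked up after the restart, then force a cheap query over the threshold in the current session; its plan should then appear in the regular PostgreSQL log file.
-- Did the restart actually pick up the settings?
SHOW shared_preload_libraries;
SHOW auto_explain.log_min_duration;

-- Session-level test: lower the threshold to 0 so any statement gets logged
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;
SET auto_explain.log_analyze = on;
SELECT pg_sleep(1);   -- its plan should now show up in postgresql-<Day>.log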

Mikhail Aksenov
(430 rep)
Dec 10, 2019, 11:16 AM
• Last activity: Aug 9, 2025, 08:06 AM
0
votes
1
answers
797
views
Query DB2 without using Openquery
Is there a way to query a DB2 linked server without using OPENQUERY? I.e., is it possible to query DB2 through a linked server by doing something like the following?
select *
from [db2_linked_server].[database].[tablename]
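For reference, a sketch of the two usual OPENQUERY-free options; the catalog and schema parts are placeholders and depend on how the DB2 provider maps its catalog, and EXEC ... AT requires the linked server's "RPC Out" option to be enabled:
-- Four-part naming (linked server, catalog, schema, object):
SELECT *
FROM [db2_linked_server].[database].[schema].[tablename];

-- Pass-through without OPENQUERY; the statement runs on DB2 as written:
EXEC ('SELECT * FROM schema.tablename') AT [db2_linked_server];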
DForck42
(3068 rep)
Apr 17, 2017, 02:14 PM
• Last activity: Aug 9, 2025, 07:04 AM
0
votes
1
answers
786
views
pooler error: query_wait_timeout in pgbouncer logs
I am getting
pooler error: query_wait_timeout
for different databases in the pgbouncer log, 6 to 7 times a day. When the error occurs for the masterdata database, it gives me a timeout in the application.
The configuration is below :
auth_type: "md5"
default_pool_size: 10
max_client_conn: 5000
max_db_connections: 400
min_pool_size: 0
pool_mode: "transaction"
reserve_pool_size: 20
reserve_pool_timeout: 1
query_timeout: 0
ignore_startup_parameters: "search_path, extra_float_digits"
server_reset_query: "DISCARD ALL"
server_reset_query_always: 1
As the query timeout is disabled (query_timeout: 0) and the pool_mode is set to transaction, why am I still having this error? Do I need to set some min_pool_size for my masterdata database to eliminate the above error?
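For context, query_wait_timeout is the pgbouncer setting that bounds how long a client may wait in the queue for a free server connection (it defaults to 120 seconds and is independent of query_timeout). A sketch of the knobs usually involved, in the same key/value style as the configuration above; the numbers are only examples:
# how long a client may wait for a free server connection before the pooler error
query_wait_timeout: 300
# give busy databases a larger pool than the current default of 10
default_pool_size: 20
The error itself usually means every server connection in that database's pool was busy for the whole wait period, so freeing up or enlarging the pool tends to be a more direct fix than raising the timeout.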
kumar ashish
(1 rep)
Jan 8, 2024, 07:00 AM
• Last activity: Aug 9, 2025, 06:05 AM
1
votes
1
answers
872
views
Unable to upgrade to postgresql 13
I'm getting the following error:
> postgres@:~/> cat loadable_libraries.txt
> could not load library "$libdir/plpython2": ERROR: could not access file
> "$libdir/plpython2": No such file or directory
> In database: db_name
When trying to run pg_upgrade from PostgreSQL-11 to PostgreSQL-13.
/usr/pgsql-13/bin/pg_upgrade -b /usr/pgsql-11/bin -B /usr/pgsql-13/bin -c -d /localpart0/db/postgres/11/db_name/data -D/localpart0/db/postgres/13/db_name/data -j 20 -r -v --clone
I don't have any plpython* extensions, and I already tried changing the pg_pltemplate entries for plpythonu and plpython2u to plpython3 and re-running pg_upgrade, but it fails either way.
Installed Packages:
> postgresql11-plpython.x86_64 11.7-1PGDG.rhel7 @postgres11.prd
> postgresql13-plpython3.x86_64 13.1-1PGDG.rhel7 @postgres13.prd
Any guidance would be greatly appreciated.
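A sketch of how to confirm whether anything in db_name still depends on the Python 2 languages (run in that database). Dropping the languages is only safe if the function list comes back empty; if they were installed as extensions, DROP EXTENSION would be needed instead:
-- Languages still registered in this database
SELECT lanname FROM pg_language;

-- Any functions still written in the Python 2 languages?
SELECT n.nspname, p.proname, l.lanname
FROM pg_proc p
JOIN pg_language  l ON l.oid = p.prolang
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE l.lanname IN ('plpythonu', 'plpython2u');

-- If nothing uses them, removing them lets pg_upgrade's library check pass
DROP LANGUAGE IF EXISTS plpythonu  CASCADE;
DROP LANGUAGE IF EXISTS plpython2u CASCADE;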
efrain berdecia
(31 rep)
Dec 21, 2020, 02:27 PM
• Last activity: Aug 9, 2025, 04:11 AM
0
votes
0
answers
21
views
Official example of `CREATE JSON DUALITY VIEW` gives syntax error
According to [the release note of MySQL 9.4](https://dev.mysql.com/doc/relnotes/mysql/9.4/en/news-9-4-0.html) , *JSON duality views* are now supported.
[*27.7.1 JSON Duality View Syntax*](https://dev.mysql.com/doc/refman/9.4/en/json-duality-views-syntax.html) shows this example:
CREATE TABLE customers (
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
first_name VARCHAR(25),
last_name VARCHAR(25),
street VARCHAR(50),
city VARCHAR(25),
state CHAR(2),
postcode CHAR(5),
phone CHAR(10),
email VARCHAR(50)
);
CREATE JSON DUALITY VIEW jdv_customers
AS
SELECT JSON_DUALITY_OBJECT( {
'_id' : c.id,
'name' : { 'fname' : c.first_name,
'lname' : c.last_name },
'email' : c.email,
'area' : { 'city' : c.city,
'state' : c.state }
}
)
FROM customers c;
Let me test that:
> SELECT VERSION();
+-----------+
| VERSION() |
+-----------+
| 9.4.0 |
+-----------+
> CREATE TABLE customers (
-> id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
-> first_name VARCHAR(25),
-> last_name VARCHAR(25),
-> street VARCHAR(50),
-> city VARCHAR(25),
-> state CHAR(2),
-> postcode CHAR(5),
-> phone CHAR(10),
-> email VARCHAR(50)
-> );
Query OK, 0 rows affected
> CREATE JSON DUALITY VIEW jdv_customers
-> AS
-> SELECT JSON_DUALITY_OBJECT( {
-> '_id' : c.id,
-> 'name' : { 'fname' : c.first_name,
-> 'lname' : c.last_name },
-> 'email' : c.email,
-> 'area' : { 'city' : c.city,
-> 'state' : c.state }
-> }
-> )
-> FROM customers c;
(1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '{\n '_id' : c.id,\n 'name' : { 'fname' : c.first_' at line 3")
Why?
Is this just a documentation bug or am I missing something?
---
Edit:
Reading [the documentation of JSON_DUALITY_OBJECT()](https://dev.mysql.com/doc/refman/9.4/en/json-creation-functions.html#function_json-duality-object), it seems:
- The { } wrapper of JSON_DUALITY_OBJECT( { ... } ) found in the example is incorrect. We must omit it.
- The nested object { ... } found in the example is incorrect. Instead we have to specify a subquery like this:
  'name' : (
    SELECT JSON_DUALITY_OBJECT(
      /* ... */
    )
    FROM customers AS c2
    WHERE c2.id = c.id
  ),
- The documentation also says
  > If the table is projected multiple times, the set of columns projected must be consistent across all instances of the table projection.
  meaning that the top-level query and all of the subqueries must repeat the same set of columns.
Following these rules gives:
CREATE JSON DUALITY VIEW jdv_customers AS
SELECT JSON_DUALITY_OBJECT(
'_id' : c.id,
'first_name' : c.first_name,
'last_name' : c.last_name,
'email' : c.email,
'city' : c.city,
'state' : c.state,
'name' : (
SELECT JSON_DUALITY_OBJECT(
'id' : c2.id,
'first_name' : c2.first_name,
'last_name' : c2.last_name,
'email' : c2.email,
'city' : c2.city,
'state' : c2.state
)
FROM customers AS c2
WHERE c2.id = c.id
),
'area' : (
SELECT JSON_DUALITY_OBJECT(
'id' : c3.id,
'first_name' : c3.first_name,
'last_name' : c3.last_name,
'email' : c3.email,
'city' : c3.city,
'state' : c3.state
)
FROM customers AS c3
WHERE c3.id = c.id
)
)
FROM customers AS c;
This actually works (no syntax error) but is far more verbose than the official example.
I even doubt whether this is really an intended use case of JSON duality views. Probably the intended use case is not to project a single flat row into a nested JSON object, but to directly map an inherently nested structure (e.g. one produced by a JOIN) to a JSON object without restructuring.
ynn
(185 rep)
Aug 9, 2025, 03:00 AM
• Last activity: Aug 9, 2025, 03:53 AM
1
votes
1
answers
2188
views
Connectivity issues in replication process
I've been trying to add a new node to a replica set without success due to these errors:
> 2019-08-12T06:48:15.031+0200 I ASIO [NetworkInterfaceASIO-RS-0]
> Ending connection to host mongo03:27017 due to bad connection status;
> 1 connections to that host remain open 2019-08-12T06:48:15.031+0200 I
> REPL [replication-224] Restarting oplog query due to error:
> NetworkInterfaceExceededTimeLimit: error in fetcher batch callback:
> Operation timed out. Last fetched optime (with hash): { ts:
> Timestamp(1565585283, 1317), t: 9 }. Restarts
> remaining: 10 2019-08-12T06:48:15.032+0200 I REPL
> [replication-224] Scheduled new oplog query Fetcher source:
> mongo03:27017 database: local query: { find: "oplog.rs", filter: { ts:
> { $gte: Timestamp(1565585283, 1317) } }, tailable: true, oplogReplay:
> true, awaitData: true, maxTimeMS: 2000, batchSize: 13981010, term: 9,
> readConcern: { afterOpTime: { ts: Timestamp(1565585283, 1317), t: 9 }
> } } query metadata: { $replData: 1, $oplogQueryData: 1,
> $readPreference: { mode: "secondaryPreferred" } } active: 1
> findNetworkTimeout: 7000ms getMoreNetworkTimeout: 10000ms shutting
> down?: 0 first: 1 firstCommandScheduler: RemoteCommandRetryScheduler
> request: RemoteCommand 2262327 -- target:mongo03:27017 db:local cmd:{
> find: "oplog.rs", filter: { ts: { $gte: Timestamp(1565585283, 1317) }
> }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS:
> 2000, batchSize: 13981010, term: 9, readConcern: { afterOpTime: { ts:
> Timestamp(1565585283, 1317), t: 9 } } } active: 1
> callbackHandle.valid: 1 callbackHandle.cancelled: 0 attempt: 1
> retryPolicy: RetryPolicyImpl maxAttempts: 1 maxTimeMillis: -1ms
> 2019-08-12T06:48:17.637+0200 I ASIO [NetworkInterfaceASIO-RS-0]
> Connecting to mongo03:27017 2019-08-12T06:48:17.645+0200 I ASIO
> [NetworkInterfaceASIO-RS-0] Successfully connected to mongo03:27017,
> took 8ms (2 connections now open to mongo03:27017)
> 2019-08-12T06:49:36.961+0200 I - [repl writer worker 13]
> DB.CollectionName collection clone progress: 221390504/643329790
> 34% (documents copied) 2019-08-12T06:51:52.859+0200 I ASIO
> [NetworkInterfaceASIO-RS-0] Ending connection to host mongo03:27017
> due to bad connection status; 1 connections to that host remain open
> 2019-08-12T06:51:52.859+0200 I REPL [replication-229] Restarting
> oplog query due to error: NetworkInterfaceExceededTimeLimit: error in
> fetcher batch callback: Operation timed out. Last fetched optime (with
> hash): { ts: Timestamp(1565585502, 204), t: 9 }.
> Restarts remaining: 10 2019-08-12T06:51:52.859+0200 I ASIO
> [NetworkInterfaceASIO-RS-0] Connecting to mongo03:27017
> 2019-08-12T06:51:52.859+0200 I REPL [replication-229] Scheduled
> new oplog query Fetcher source: mongo03:27017 database: local query: {
> find: "oplog.rs", filter: { ts: { $gte: Timestamp(1565585502, 204) }
> }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS:
> 2000, batchSize: 13981010, term: 9, readConcern: { afterOpTime: { ts:
> Timestamp(1565585502, 204), t: 9 } } } query metadata: { $replData: 1,
> $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" } }
> active: 1 findNetworkTimeout: 7000ms getMoreNetworkTimeout: 10000ms
> shutting down?: 0 first: 1 firstCommandScheduler:
> RemoteCommandRetryScheduler request: RemoteCommand 2272962 --
> target:mongo03:27017 db:local cmd:{ find: "oplog.rs", filter: { ts: {
> $gte: Timestamp(1565585502, 204) } }, tailable: true, oplogReplay:
> true, awaitData: true, maxTimeMS: 2000, batchSize: 13981010, term: 9,
> readConcern: { afterOpTime: { ts: Timestamp(1565585502, 204), t: 9 } }
> } active: 1 callbackHandle.valid: 1 callbackHandle.cancelled: 0
> attempt: 1 retryPolicy: RetryPolicyImpl maxAttempts: 1 maxTimeMillis:
> -1ms
I'm a little bit lost regarding how to debug it.
It's a 250 GB database, and the synchronization is being done over a 100 Mb private network. There are iptables rules on all the servers, but traffic from the private network is allowed:
-A INPUT -i eth1 -p tcp -m tcp --dport 27017 -m state --state NEW,ESTABLISHED -j ACCEPT
At some point the replication process fails:
> 2019-08-12T02:22:37.582+0200 I REPL [replication-143] Restarting
> oplog query due to error: ExceededTimeLimit: error in fetcher batch
> callback: Executor error during find command :: caused by :: errmsg:
> "oper ation exceeded time limit". Last fetched optime (with hash): {
> ts: Timestamp(1565569328, 710), t: 9 }. Restarts
> remaining: 1 2019-08-12T02:22:37.582+0200 I REPL [replication-143]
> Scheduled new oplog query Fetcher source: mongo03:27017 database:
> local query: { find: "oplog.rs", filter: { ts: { $gte:
> Timestamp(1565569328, 710) } }, tailable: true, oplogReplay: true,
> awaitData: true, maxTimeMS: 2000, batchSize: 13981010, term: 9,
> readConcern: { afterOpTime: { ts: Timestamp(1565569328, 710), t: 9 } }
> } query metadata: { $replData: 1, $oplogQ ueryData: 1,
> $readPreference: { mode: "secondaryPreferred" } } active: 1
> findNetworkTimeout: 7000ms getMoreNetworkTimeout: 10000ms shutting
> down?: 0 first: 1 firstCommandScheduler: RemoteCommandRetryScheduler
> re quest: RemoteCommand 1381614 -- target:mongo03:27017 db:local cmd:{
> find: "oplog.rs", filter: { ts: { $gte: Timestamp(1565569328, 710) }
> }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS:
> 2000, ba tchSize: 13981010, term: 9, readConcern: { afterOpTime: { ts:
> Timestamp(1565569328, 710), t: 9 } } } active: 1 callbackHandle.valid:
> 1 callbackHandle.cancelled: 0 attempt: 1 retryPolicy: RetryPolicyImpl
> maxAttem pts: 1 maxTimeMillis: -1ms 2019-08-12T02:22:39.612+0200 I
> REPL [replication-142] Error returned from oplog query (no more
> query restarts left): ExceededTimeLimit: error in fetcher batch
> callback: Executor error during find command :: caused by :: errmsg:
> "operation exceeded time limit" 2019-08-12T02:22:39.615+0200 I REPL
> [replication-142] Finished fetching oplog during initial sync:
> ExceededTimeLimit: error in fetcher batch callback: Executor error
> during find command :: caused by :: errms g: "operation exceeded time
> limit". Last fetched optime and hash: { ts: Timestamp(1565569328,
> 710), t: 9 } 2019-08-12T02:22:43.034+0200 I REPL
> [replication-142] CollectionCloner ns:DB.CollectionName finished
> cloning with status: IllegalOperation: AsyncResultsMerger killed
> 2019-08-12T02:22:44.551+0200 W REPL [replication-142] collection
> clone for 'DB.CollectionName' failed due to IllegalOperation: While
> cloning collection 'DB.CollectionName' there was an error
> 'AsyncResultsMerger killed' 2019-08-12T02:22:44.551+0200 I REPL
> [replication-142] CollectionCloner::start called, on ns:DB.CollectionName2
> 2019-08-12T02:22:44.551+0200 W REPL [replication-142] database
> 'dbName' (9 of 9) clone failed due to ShutdownInProgress: collection
> cloner completed 2019-08-12T02:22:44.551+0200 I REPL
> [replication-142] Finished cloning data: ShutdownInProgress:
> collection cloner completed. Beginning oplog replay.
> 2019-08-12T02:22:44.551+0200 I REPL [replication-142] Initial sync
> attempt finishing up.
And it has to start over again.
I don't know whether I could configure a bigger timeout to avoid these disconnections.
Thanks in advance.
brb
(41 rep)
Aug 12, 2019, 05:12 AM
• Last activity: Aug 9, 2025, 03:03 AM
0
votes
1
answers
144
views
How would I add two mariadb servers to an existing pair and ensure replication
I have a pair of MariaDB servers running on CentOS 7, version 5.5.52. Currently Server 1 handles reads and writes and replicates to Server 2. I want to add two MariaDB servers in AWS. I want one of these servers, Server 3, to read and write, and I want it to replicate to Servers 1, 2 and 4; that is, I want replication to occur across all MariaDB servers. I also want Server 1 to replicate to the other three.
Link to schematic: https://www.flickr.com/photos/80050932@N06/51535391218/in/dateposted-public/
My question is: is this possible, and how would I enable it?
autossh is the tunnel that handles replication. Here is a line in /usr/lib/systemd/system/autossh.service that may offer insight:
User=root
ExecStart=/usr/bin/autossh -M 0 -N -q -o "ServerAliveInterval=60" -o "ServerAliveCountMax=3" -p 22 -l replica Server1 -L 3305:127.0.0.1:3306 -i /root/.keys/replica
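On MariaDB 5.5 (which has no multi-source replication), "everything replicates to everything" is usually built as a ring with log_slave_updates, so each server relays what it receives. A rough sketch of the per-server pieces with placeholder values; the CHANGE MASTER example assumes the same autossh tunnel layout as above:
-- my.cnf on each server (restart required on 5.5):
--   server_id                = 1..4   (unique per server)
--   log_bin                  = mysql-bin
--   log_slave_updates        = 1      (relay changes around the ring)
--   auto_increment_increment = 4      (one slot per writer)
--   auto_increment_offset    = 1..4   (unique per server)

-- Then point each server at the previous one in the ring, e.g. on Server 3:
CHANGE MASTER TO
  MASTER_HOST     = '127.0.0.1',        -- local end of the autossh tunnel
  MASTER_PORT     = 3305,
  MASTER_USER     = 'replica',
  MASTER_PASSWORD = '<password>',
  MASTER_LOG_FILE = 'mysql-bin.000001', -- placeholders from SHOW MASTER STATUS on Server 2
  MASTER_LOG_POS  = 4;
START SLAVE;
Be aware that a ring has the same conflict caveats as any multi-writer setup: a single broken link stops changes from propagating all the way around.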
hokie1999
(1 rep)
Sep 30, 2021, 01:04 PM
• Last activity: Aug 9, 2025, 02:07 AM
0
votes
1
answers
602
views
How to select the maximum row value in an optimal way
I have a large table:
+----+-----------+------------+--------+--------+--------+
| Id | lClientId | datAccess | lProp1 | lProp2 | lProp3 |
+----+-----------+------------+--------+--------+--------+
| 0 | 5 | 10/10/2020 | | | 9 |
| 1 | 5 | 10/11/2020 | 2 | | |
| 2 | 5 | 10/12/2020 | 2 | | |
| 3 | 5 | 10/12/2020 | 3 | 10 | |
| 4 | 6 | 10/13/2020 | 3 | 10 | 8 |
| 5 | 6 | 10/29/2020 | | 11 | |
| 6 | 6 | 10/15/2020 | 9 | 13 | |
| 7 | 6 | 10/16/2020 | | | 16 |
+----+-----------+------------+--------+--------+--------+
I need to extract each prop (the lProp1-lProp3 fields) for each client, associated with the last date of access (datAccess can be repeated; in that case I need the last row by Id). I.e., the desired result is:
+-----------+------------+--------+--------+--------+
| lClientId | datAccess | lProp1 | lProp2 | lProp3 |
+-----------+------------+--------+--------+--------+
| 5 | 10/12/2020 | 3 | 10 | 9 |
| 6 | 10/29/2020 | 9 | 11 | 16 |
+-----------+------------+--------+--------+--------+
I tried to create several subqueries with SELECT, each selecting the maximum of one column with the corresponding maximum date, but performance drops. Is there a way to select the desired values with a single query? It is very expensive to go through all the rows in the table every time it searches for the maximum date; a single pass should be enough to get the desired values.
I use ADODB and VBA, so maybe it's better to go through all the rows and extract the desired fields manually? Using Recordset.MoveNext I can check all the fields in a single pass. I just thought the engine would be faster than accessing each field through Recordset/Variant/etc.
The database is MS ACCESS.
Thanks in advance.
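In Access SQL (no window functions), the usual single-statement shape is one TOP 1 correlated subquery per column, ordered by date and then Id so ties resolve to the latest row. A sketch, assuming a hypothetical table name tblAccess and an index on (lClientId, datAccess, Id) to keep the subqueries cheap:
SELECT c.lClientId,
       (SELECT TOP 1 t.datAccess FROM tblAccess AS t
        WHERE t.lClientId = c.lClientId
        ORDER BY t.datAccess DESC, t.Id DESC) AS datAccess,
       (SELECT TOP 1 t.lProp1 FROM tblAccess AS t
        WHERE t.lClientId = c.lClientId AND t.lProp1 IS NOT NULL
        ORDER BY t.datAccess DESC, t.Id DESC) AS lProp1,
       (SELECT TOP 1 t.lProp2 FROM tblAccess AS t
        WHERE t.lClientId = c.lClientId AND t.lProp2 IS NOT NULL
        ORDER BY t.datAccess DESC, t.Id DESC) AS lProp2,
       (SELECT TOP 1 t.lProp3 FROM tblAccess AS t
        WHERE t.lClientId = c.lClientId AND t.lProp3 IS NOT NULL
        ORDER BY t.datAccess DESC, t.Id DESC) AS lProp3
FROM (SELECT DISTINCT lClientId FROM tblAccess) AS c;
If the derived table in the FROM clause is rejected by an older Access version, a saved query returning the distinct client IDs works the same way.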
John
(11 rep)
Dec 12, 2020, 09:06 PM
• Last activity: Aug 9, 2025, 01:05 AM
1
votes
2
answers
496
views
Advice: Should I use multiple physical databases or one physical database and several logical ones?
We are creating a web application that will be used by companies as their business application every day, each company will have many employees and a lot of data.
Is it wise for each company to have its own database or is there a better solution for all companies to be in one database?
Right now we use one database; each company has its own ID and each employee has a company ID. But the database is heavily loaded with data, searching is slow, and privacy management is difficult.
I would like fast data searches in the database, easy implementation, and no excessive maintenance costs.
bane
(11 rep)
Jan 22, 2023, 02:30 PM
• Last activity: Aug 9, 2025, 12:03 AM
0
votes
1
answers
146
views
Transactional replication Snapshot agent not starting
I am trying to consider all the options before setting up transactional replication from my 2008 R2 publisher through a 2012 distribution server to a 2012 subscriber server. All three servers are in FULL recovery mode; log backups are taken every 15 minutes and a full backup is taken daily on all three databases used in replication. We want to use the subscriber as a reporting server; the publisher is our main OLTP database.
I have created domain accounts that are given appropriate permissions on the shared snapshot folder on the distributor. I also gave the UNC path of this folder when setting up the distributor, and then granted db_owner rights for the three SQL logins created from these domain accounts on the distribution database on the distributor, and on all the databases on the publisher that will be replicated.
The snapshots were getting replicated to the shared folder on the distributor very nicely when I tested the setup on the UAT servers. I had to install an updated version of SSMS (11.xx) on the UAT publisher for the replication to work.
But then I decided not to update SSMS on my production server, as someone suggested that since the servers are very old they should not be changed in any way, so I tried to use the SSMS on my distributor server and connect back to the publisher from there. But then everything stopped working.
The distribution was created successfully on the distributor server. The publisher was also created successfully (using the distributor's SSMS). But the snapshot agent never started.
I then tried my previous setup (the UAT setup, where I used the updated SSMS on the UAT publisher server); that also did not work. The snapshot agent does not start; it eventually fails after 10 minutes with the following error:
It says step 2 failed: the password of the repl_snapshot and repl_logreader accounts has expired, and after 10 failed attempts the SQL Agent job fails.
Please help :(
I tested all of this on test servers and it was working very well; now suddenly only the snapshot agent is failing. I have changed the shared folder location (by renaming the folder and re-granting the rights to all the domain users) and tried to set up the distribution server and publisher server many, many times, but nothing is working. I have entered the passwords correctly at every step, and also granted the domain accounts' SQL logins db_owner permissions on the required databases (the distribution database on the distribution server, and the other replication databases on the publisher server).
The snapshot agent is failing even with the previous setup (using the updated SSMS on the UAT publisher server), and it is failing when I use the distributor's SSMS to log in and connect to the publisher server. The error is the same in both cases.
Adding more screenshots from my SQL Agent jobs when the snapshot agent is started.
And I get this error after the snapshot agent finishes execution:
Unable to start execution of step 2 (reason: Error authenticating proxy DOMAIN\repl_snapshot, system error: The password for this account has expired.). NOTE: The step was retried the requested number of times (10) without succeeding. The step fail
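The "password for this account has expired" part points at the Windows account behind the SQL Agent proxy rather than at replication itself. A sketch of the follow-up once DOMAIN\repl_snapshot has been reset (or set to not expire) in Active Directory; the credential name below is assumed to match the proxy name, which may not be the case in your setup:
-- Which credential does each Agent proxy use?
SELECT p.name AS proxy_name, c.name AS credential_name, c.credential_identity
FROM msdb.dbo.sysproxies AS p
JOIN sys.credentials AS c ON c.credential_id = p.credential_id;

-- Refresh the stored password for the proxy's credential after the AD reset
ALTER CREDENTIAL [repl_snapshot]
    WITH IDENTITY = 'DOMAIN\repl_snapshot', SECRET = '<new password>';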


user25935915
(13 rep)
Sep 7, 2024, 10:17 AM
• Last activity: Aug 8, 2025, 11:04 PM
0
votes
1
answers
615
views
Query to update the JSON that does not contain a specific key
I want to query a table that has JSON data. The data of the column **detail** in the table **details** are as follows.
id | 1
detail | {
myCompany: [{ email: 'company@company.com', inProfit: true }],
myData: [
{ email: 'example@example.com', personalUse: true },
{ email: 'test@test.com', personalUse: false },
{ email: 'test@example.com' },
],
};
The **JSON** value of the column **detail** is as follows
{
"myCompany":[
{
"email":"company@company.com",
"inProfit":true
}
],
"myData":[
{
"email":"example@example.com",
"personalUse":true
},
{
"email":"test@test.com",
"personalUse":false
},
{
"email":"test@example.com",
}
]
}
I want a query that updates myData. If an element of myData doesn't contain personalUse, then update that element with personalUse: true. In the given example, the { email: 'test@example.com' } element should be updated to { email: 'test@example.com', personalUse: true }.
You can use the queries below to try it locally.
CREATE TABLE details (id bigserial primary key, detail json not null);
INSERT INTO details (detail)
VALUES
('{"myCompany":[{"email":"company@company.com", "inProfit":true } ],
"myData":[{"email":"example@example.com", "personalUse":true },
{"email":"test@test.com", "personalUse":false },
{"email":"test@example.com"} ] }');
Please, someone, help me out.
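A minimal sketch of one way to do this in PostgreSQL, casting the json column to jsonb so the array helpers and the || merge operator are available; the WHERE clause guards rows that have no (or an empty) myData array:
UPDATE details AS d
SET detail = jsonb_set(
        d.detail::jsonb,
        '{myData}',
        (SELECT jsonb_agg(
                  CASE WHEN elem ? 'personalUse'
                       THEN elem
                       ELSE elem || '{"personalUse": true}'::jsonb
                  END)
         FROM jsonb_array_elements(d.detail::jsonb -> 'myData') AS elem)
    )::json
WHERE d.detail::jsonb ? 'myData'
  AND jsonb_array_length(d.detail::jsonb -> 'myData') > 0;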
Rich
(1 rep)
Apr 23, 2020, 03:27 PM
• Last activity: Aug 8, 2025, 10:08 PM
0
votes
0
answers
45
views
Why does enabling Snapshot Isolation use up CPU?
I'm currently enabling Snapshot Isolation with ALTER DATABASE [MyDB] SET ALLOW_SNAPSHOT_ISOLATION ON;. It was taking longer than I expected, so I ran sp_WhoIsActive. I was surprised to discover that it had used 81,519 units of CPU in 83 seconds. Why does this happen?
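If it happens again, it may help to see what the ALTER is actually doing while it runs; a small sketch against the request DMVs, run from another session:
SELECT r.session_id, r.command, r.status, r.wait_type,
       r.wait_time, r.blocking_session_id, r.cpu_time, r.total_elapsed_time
FROM sys.dm_exec_requests AS r
WHERE r.command LIKE 'ALTER DATABASE%';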
J. Mini
(1249 rep)
Aug 8, 2025, 09:14 PM
2
votes
2
answers
12176
views
MySQL hangs with nothing in error log
I'm having an issue where my MySQL server (part of a LAMP stack running WordPress) hangs every few hours (giving WordPress a "database connection error").
The error log shows nothing other than launching after a restart, and shutting down ("normal shutdown") when I need to restart it. The only thing of note is that it says it is "purging the queue" during a restart, and then has ~300 lines like this:
151026 19:04:52 [Warning] /usr/sbin/mysqld: Forcing close of thread 31577 user: 'root'
The only other weird behaviour I've noticed is that running mysqladmin status shows "Questions" increasing rapidly, presumably until it hangs. I've looked online and can't find much discussion of what normal range or behaviour "Questions" should exhibit. E.g., this is after 10 minutes of uptime:
Uptime: 521 Threads: 3 Questions: 16667 Slow queries: 0 Opens: 107 Flush tables: 1 Open tables: 100 Queries per second avg: 31.990
I have query_cache_size set to 50M, otherwise my.cnf is default on Ubuntu 14.04. This is the performance analysis from the mysqltuner perl script:
-------- Performance Metrics -------------------------------------------------
[--] Up for: 15m 52s (29K q [31.475 qps], 1K conn, TX: 73M, RX: 4M)
[--] Reads / Writes: 88% / 12%
[--] Binary logging is disabled
[--] Total buffers: 226.0M global + 2.7M per thread (151 max threads)
[OK] Maximum reached memory usage: 290.5M (1.81% of installed RAM)
[OK] Maximum possible memory usage: 631.8M (3.94% of installed RAM)
[OK] Slow queries: 0% (0/29K)
[OK] Highest usage of available connections: 15% (24/151)
[OK] Aborted connections: 0.00% (0/1515)
[OK] Query cache efficiency: 71.8% (17K cached / 24K selects)
[OK] Query cache prunes per day: 0
[OK] Sorts requiring temporary tables: 0% (10 temp sorts / 1K sorts)
[!!] Temporary tables created on disk: 59% (1K on disk / 1K total)
[OK] Thread cache hit rate: 97% (39 created / 1K connections)
[OK] Table cache hit rate: 93% (103 open / 110 opened)
[OK] Open file limit used: 4% (51/1K)
[OK] Table locks acquired immediately: 100% (8K immediate / 8K locks)
-------- MyISAM Metrics -----------------------------------------------------
[!!] Key buffer used: 18.2% (3M used / 16M cache)
[OK] Key buffer size / total MyISAM indexes: 16.0M/2.0M
[OK] Read Key buffer hit rate: 100.0% (6 cached / 0 reads)
[OK] Write Key buffer hit rate: 100.0% (8 cached / 0 writes)
-------- InnoDB Metrics -----------------------------------------------------
[--] InnoDB is enabled.
[!!] InnoDB buffer pool / data size: 128.0M/292.9M
[OK] InnoDB buffer pool instances: 1
[!!] InnoDB Used buffer: 75.48% (6183 used/ 8192 total)
[OK] InnoDB Read buffer efficiency: 99.70% (1960793 hits/ 1966769 total)
[!!] InnoDB Write buffer efficiency: 0.00% (0 hits/ 1 total)
[OK] InnoDB log waits: 0.00% (0 waits / 894 writes)
The hanging has started happening more frequently since an increase in logged-in users (people accessing the database) and changes to my Apache config, which solved the Apache crashes. My Apache prefork MPM is set as follows:
StartServers 20
MinSpareServers 10
MaxSpareServers 20
ServerLimit 1000
MaxRequestWorkers 500
MaxConnectionsPerChild 10000
The server is a DigitalOcean droplet with 8 CPU cores and 16GB RAM.
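Not a diagnosis, but the snapshot below is the kind of information usually captured at the moment of the hang (before restarting) to tell a lock pile-up apart from connection exhaustion:
SHOW FULL PROCESSLIST;                         -- what the ~300 forced-closed threads were doing
SHOW ENGINE INNODB STATUS\G                    -- lock waits / long-running transactions
SHOW GLOBAL STATUS LIKE 'Threads_%';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';
SHOW GLOBAL VARIABLES LIKE 'max_connections';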
quackson
(21 rep)
Oct 27, 2015, 10:20 PM
• Last activity: Aug 8, 2025, 09:03 PM
2
votes
2
answers
26154
views
How can I find port running Oracle Enterprise Manager
I need to connect to the Oracle Enterprise Manager of a remote machine, but I don't know the port OEM is running on. I do have terminal access to the machine, though.
Is there a command to see the port number OEM is running on? The remote machine is CentOS, and I need to open OEM in my local browser (I am running Windows 10).
I would really appreciate all the help. I am new to Oracle.
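Two things that may help, depending on which flavour of OEM is installed: for 11g Database Control, running emctl status dbconsole as the oracle OS user prints the console URL including its port (and $ORACLE_HOME/install/portlist.ini records the ports chosen at install time); for 12c+ EM Express, the port is registered in XDB and can be read with SQL. A sketch of the latter:
-- 12c+ EM Express, run as a DBA; 0 means that port is not configured
SELECT dbms_xdb_config.gethttpsport() AS em_express_https_port,
       dbms_xdb_config.gethttpport()  AS em_express_http_port
FROM dual;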
Nick Div
(141 rep)
Mar 26, 2017, 04:48 PM
• Last activity: Aug 8, 2025, 08:02 PM
0
votes
2
answers
708
views
Why can we FLASHBACK a dropped table when DROP is a DDL command and hence autocommitted?
DROP and TRUNCATE are both DDL commands and both are autocommitted. Then why can we recover a dropped table using FLASHBACK (unless it was purged), but cannot recover truncated data (leaving backups aside)?
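The short version is that DROP (without PURGE) merely renames the table into the recycle bin, while TRUNCATE deallocates the data extents immediately, so there is nothing left for flashback to restore. A small illustration, assuming the recycle bin is enabled:
CREATE TABLE t_demo (c1 NUMBER);
DROP TABLE t_demo;                       -- goes to the recycle bin, data blocks untouched
SELECT object_name, original_name, type FROM recyclebin;
FLASHBACK TABLE t_demo TO BEFORE DROP;   -- works: effectively just renames it back

-- TRUNCATE, by contrast, releases the extents right away; there is no
-- recycle-bin copy to flash back to.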
Paresh
(81 rep)
Aug 18, 2015, 05:19 AM
• Last activity: Aug 8, 2025, 07:02 PM
0
votes
2
answers
4772
views
Oracle user created information
Is it possible to determine which user created another user's account in an Oracle database?
I have two types of users in database:
- Users having system and object privileges
- Users having only system privileges
I can find the owner of all users having system and object privileges.
select distinct d.username,d.account_status,t.privilege,t.owner
from dba_users d,dba_tab_privs t where d.username=t.grantee
and d.account_status='OPEN';
But this query doesn't fetch the users having only system privileges. How can I find the owner of all those users having only system privileges?
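For completeness: the data dictionary does not record who created a user, so the grantor/owner approach can only ever approximate it. If auditing of CREATE USER was enabled before the accounts were created, the audit trail has the real answer; a sketch, assuming traditional auditing writing to DBA_AUDIT_TRAIL:
SELECT username AS created_by,
       obj_name AS created_user,
       timestamp
FROM   dba_audit_trail
WHERE  action_name = 'CREATE USER'
ORDER  BY timestamp;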
Rahman Gurbaz
(23 rep)
Jan 10, 2018, 12:28 PM
• Last activity: Aug 8, 2025, 06:02 PM
0
votes
1
answers
138
views
Recovery_model for backupset table null sqlserver
Going through https://www.sqlshack.com/understanding-database-recovery-models/
I see that there are three types of recovery models:
- SIMPLE
- FULL
- BULK_Logged
But in the table msdb.dbo.backupset I can see that the recovery_model column can be NULL. In what case would it take a NULL value? How can I create a backup that falls into this case?
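One way to investigate is simply to look at which rows carry the NULL and what kind of backup wrote them (native vs. snapshot/third-party, which server wrote them, and so on); a sketch against msdb:
SELECT database_name, type, is_snapshot, recovery_model,
       backup_start_date, user_name
FROM msdb.dbo.backupset
WHERE recovery_model IS NULL
ORDER BY backup_start_date DESC;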
Sonali Gupta
(99 rep)
Apr 29, 2021, 06:21 AM
• Last activity: Aug 8, 2025, 05:02 PM