Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
11
views
postgres - log archive_command execution
We want to have an entry in the Postgres logs upon execution of the archive_command. Someone came up with this archive_command, which should achieve this:
test ! -f /opt/db/backup/postgres/archives/%f && echo $(date "+%F %T.%3N %Z") [$$] ARCH: archiving WAL: %f && cp %p /opt/db/backup/postgres/archives/%f
The goal is to have a line telling us which WAL file was archived at what time.
It throws an error, though:
2025-08-07 18:06:44.184 CEST FATAL: invalid value for parameter "archive_command": "test ! -f /opt/db/backup/postgres/archives/%f && echo $(date "+%F %T.%3N %Z")
[$$] ARCH: archiving WAL: %f && cp %p /opt/db/backup/postgres/archives/%f"
2025-08-07 18:06:44.184 CEST DETAIL: String contains unexpected placeholder "%F".
Any idea how to adjust the archive command in order to get this going correctly?
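Based on the documented %% escape (PostgreSQL replaces %p and %f itself and turns %% into a literal %), I am guessing the date format simply needs its percent signs doubled. A sketch of what that might look like, untested:
ALTER SYSTEM SET archive_command =
    'test ! -f /opt/db/backup/postgres/archives/%f && echo $(date "+%%F %%T.%%3N %%Z") [$$] ARCH: archiving WAL: %f && cp %p /opt/db/backup/postgres/archives/%f';
SELECT pg_reload_conf();  -- archive_command is reloadable, no restart needed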
vrms
(271 rep)
Aug 7, 2025, 04:04 PM
• Last activity: Aug 9, 2025, 02:07 PM
4
votes
2
answers
898
views
Numeric collation sorts by digits instead of value- postgres
I have a table of users with ids of the following form: user123@domain.com.
When searching and sorting the users, I need user1@domain.com to come before user14@domain.com, but since 4 is “smaller” than @, it sorts the other way around.
After looking around I came across the following collation:
CREATE COLLATION numeric (provider = icu, locale = 'en-u-kn-true');
This fixed the problem but caused another one: users with numbers starting with 2 appeared after users with numbers starting with 1. For example, user19@domain.com came before user2@domain.com.
From the docs, kn-true should look at the number’s value, but it looks like it only compares the digits.
I'm using Postgres 11.6. I have two servers: one on Windows (local) and a remote one on Linux.
Is anyone familiar with this issue?
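A sketch of what I plan to try next, assuming the Windows build ships an ICU old enough to ignore BCP 47 keywords such as kn-true (the old-style keyword syntax and the explicit COLLATE clause are guesses; users/user_id are placeholder names):
CREATE COLLATION numeric_alt (provider = icu, locale = 'en@colNumeric=yes');

SELECT user_id
FROM users
ORDER BY user_id COLLATE numeric_alt;  -- apply the collation explicitly, in case the column default is not it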
Amit Keinan
(41 rep)
Dec 28, 2022, 07:17 PM
• Last activity: Aug 9, 2025, 01:56 PM
3
votes
1
answers
2529
views
Finding all the index corruptions in the entire Postgres DB
There is a DB with an unknown number of corrupted unique indexes.
Currently I discover them one by one: I try REINDEX DATABASE CONCURRENTLY, handle the specific duplication violation (manipulating data, mainly deleting the duplicates using scripts), reindex the specific table or index, and continue to the next index (again only right after I discover it with another REINDEX DATABASE CONCURRENTLY).
Not to mention that each time I also get indexes with the '_ccnew' suffix, which AFAIK are indexes that a previous concurrent reindex tried to build but could not finish, usually because they violate a uniqueness check. These failed attempts sit there and have to be dropped manually.
Concurrent reindex is used in order to avoid downtime.
I want to reduce those "roundtrips" of searching for the next violation and wonder whether there is a more efficient way to get an overall picture of all the index corruptions or unique violations a Postgres DB has.
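One idea I am considering is the amcheck extension (PostgreSQL 10+; the heapallindexed argument needs 11+): loop over all unique btree indexes once and report every one that fails the check, instead of discovering them one at a time. A sketch (heapallindexed = true catches missing or dangling tuples, though not necessarily every uniqueness violation):
CREATE EXTENSION IF NOT EXISTS amcheck;

DO $$
DECLARE
    idx regclass;
BEGIN
    FOR idx IN
        SELECT i.indexrelid::regclass
        FROM pg_index i
        JOIN pg_class c ON c.oid = i.indexrelid
        JOIN pg_am   am ON am.oid = c.relam
        WHERE am.amname = 'btree'
          AND i.indisunique
          AND i.indisvalid          -- skip the leftover _ccnew invalid indexes
    LOOP
        BEGIN
            PERFORM bt_index_check(idx, true);   -- heapallindexed = true
        EXCEPTION WHEN OTHERS THEN
            RAISE NOTICE 'possible corruption in %: %', idx, SQLERRM;
        END;
    END LOOP;
END;
$$;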
KaliTheGreat
(189 rep)
Feb 21, 2024, 08:00 AM
• Last activity: Aug 9, 2025, 01:53 PM
1
votes
2
answers
44
views
Dynamically Adjusting max server memory Between Two Instances
I’ve got one server with two SQL instances. The server has 128 GB RAM.
- Instance 1 has peak load from 08:00–10:00
- Instance 2 has peak load from 14:00–17:00
Currently:
- Instance 1 has max server memory = 70 GB
- Instance 2 has max server memory = 50 GB
- The rest of the day, both are nearly idle.
Would it make sense to dynamically shift memory, giving more to one instance during its peak while reducing it on the other?
Any downsides to doing that with a scheduled script?
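If the answer is that it can work, I imagine the script itself is little more than sp_configure calls run by an Agent job shortly before each peak; a sketch with example values (not recommendations):
-- On the instance about to enter its peak window:
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 92160;   -- ~90 GB
RECONFIGURE;

-- On the other instance, lower it first so the total stays under 128 GB:
EXEC sys.sp_configure 'max server memory (MB)', 30720;   -- ~30 GB
RECONFIGURE;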
Jonasstadi
(11 rep)
Aug 7, 2025, 03:04 PM
• Last activity: Aug 9, 2025, 01:41 PM
0
votes
1
answers
152
views
Postgres C API: How can we copy a Datum?
As far as I understand, SPI_getbinval returns a pointer into the passed row. That is, the following would be unsafe:
dat = SPI_getbinval(SPI_tuptable->vals[0], SPI_tuptable->tupdesc, 1, &isnull);
SPI_finish();
if (isnull)
    PG_RETURN_NULL();
// Unsafe since SPI_finish() deallocates SPI_tuptable
return dat;
The SPI interface provides the utility SPI_copytuple. Is the following safe? Is there any problem with using SPI_tuptable->tupdesc? Moreover, is there a more efficient way to do this directly with the returned Datum instead of copying the entire HeapTuple?
// We copy the whole tuple into the outer context. Better way to copy an individual Datum?
dat = SPI_getbinval(SPI_copytuple(SPI_tuptable->vals[0]), SPI_tuptable->tupdesc, 1, &isnull);
SPI_finish();
if (isnull)
PG_RETURN_NULL();
return dat;
user2959071
(63 rep)
Nov 15, 2021, 01:43 AM
• Last activity: Aug 9, 2025, 12:08 PM
0
votes
1
answers
1235
views
Accessing Secured Storage account from Synapse Notebook
We have a firewalled storage account with a few files that we need to access via a Synapse notebook. We do that with the code below:
abfss_path = f'abfss://container@storageaccount.dfs.core.windows.net/data.csv'
df = spark.read.load(abfss_path, format='csv',header=True)
display(df.limit(10))
The managed identity and my user (object id) for the Synapse workspace have the 'Storage Blob Data Contributor' role on the storage account.
The below have been enabled:
- Allow Azure services on the trusted services list to access this storage account.
- Specify resource instances that will have access to your storage account based on their system-assigned managed identity.
Running the above code works from the pipeline with a synapse notebook task but **fails when run from the studio through the notebook cell**.
Error : ***Operation failed: "This request is not authorized to perform this operation.", 403***
I have tried everything but can't get it to work.
Via Pipeline
Via Studio Notebook Cell
I can see AAD (pass-through) takes precedence when running from the studio, so I tried running the session as 'Managed Identity'. This doesn't change anything and I keep getting the same error.
How to get this working?
Note : I am not using the managed vnet in my synapse workspace.


Ramakant Dadhichi
(2338 rep)
Oct 20, 2022, 03:52 PM
• Last activity: Aug 9, 2025, 11:07 AM
2
votes
2
answers
1019
views
MySql Multi Master Active Active Mutli Write
I am establishing multi-master replication (all masters can write to their database). Initially I started with two servers, established the connection, and synced the data on both. There are some questions bothering me:
1. If the connection between the two servers is lost and during that time both update the same row, or create a row with the same primary key, there will be a collision between them and the sync will break (see the sketch after this list).
2. Can we implement the same thing for a configuration with more than two masters?
3. Are there any monitoring and conflict-resolution tools I can use?
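For point 1, the usual suggestion I have found is interleaved AUTO_INCREMENT ranges so the two writers never generate the same key while disconnected (a sketch; it only helps with auto-generated keys, not with conflicting updates to the same existing row):
-- my.cnf / SET GLOBAL on master 1
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset    = 1;

-- my.cnf / SET GLOBAL on master 2
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset    = 2;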
Arjun Kaushal
(21 rep)
Aug 7, 2016, 05:04 PM
• Last activity: Aug 9, 2025, 10:08 AM
2
votes
2
answers
443
views
SQL trigger that outputs incremental numbers from zero whenever the month changes
I have the following trigger that outputs a number in a format such as 170200005, where 17 is the year, 02 is the month (February; it changes as the month changes, e.g. 03 for March), and the last digits are incremental.
BEGIN
SET @var1 = LPAD((SELECT MONTH(CURRENT_DATE())), 2, '0');
SET @var2 = (SELECT YEAR(CURRENT_DATE));
SET @var2 = (SELECT MID(@var2, 3, 2));
SET @var1 = CONCAT(@var2, @var1);
INSERT INTO table1_seq VALUES (NULL);
SET NEW.id = CONCAT(@var1, LPAD(LAST_INSERT_ID(), 5, '0'));
END
I would like code that outputs the same format as described above but starts again from 00001 whenever the month changes.
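A sketch of one way I think this could work, using a small per-month counter table instead of table1_seq (month_seq and its columns are made-up names; untested):
CREATE TABLE month_seq (
  ym  CHAR(4) PRIMARY KEY,   -- e.g. '1702'
  seq INT NOT NULL
);

-- inside the BEFORE INSERT trigger:
SET @ym = DATE_FORMAT(CURRENT_DATE(), '%y%m');
INSERT INTO month_seq (ym, seq) VALUES (@ym, 1)
  ON DUPLICATE KEY UPDATE seq = seq + 1;          -- counter restarts at 1 for each new month
SELECT seq INTO @n FROM month_seq WHERE ym = @ym;
SET NEW.id = CONCAT(@ym, LPAD(@n, 5, '0'));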
amos chabwinja
(21 rep)
Mar 3, 2017, 07:06 AM
• Last activity: Aug 9, 2025, 09:03 AM
0
votes
1
answers
665
views
auto_explain does not log
It looks like I'm doing it wrong. I turned auto_explain on by adding the following values to postgresql.conf:
shared_preload_libraries = 'pg_stat_statements,pg_wait_sampling,pg_stat_kcache,auto_explain'
auto_explain.log_min_duration = '3s'
In my monitoring I can see long-running queries that exceeded the 3s limit and finished successfully ('values' here shows how long each query lasted). But when I look into my log 'postgresql-Mon.log' (and the following days), there is nothing about those queries, only a few lines about modules during the database startup process.
Am I missing something?
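One thing I could try to rule out the module itself is loading it in a single session and forcing it to log everything there; if this writes a plan to the log, the module works and the problem is presumably the server-wide settings not being applied (e.g. no restart after changing shared_preload_libraries):
LOAD 'auto_explain';                       -- session-level, superuser
SET auto_explain.log_min_duration = 0;     -- log every statement's plan in this session
SET auto_explain.log_analyze = on;
SELECT pg_sleep(1);                        -- should show up in the server log with its plan
SHOW auto_explain.log_min_duration;        -- confirm what the server actually has configured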

Mikhail Aksenov
(430 rep)
Dec 10, 2019, 11:16 AM
• Last activity: Aug 9, 2025, 08:06 AM
0
votes
1
answers
797
views
Query DB2 without using Openquery
Is there a way to query a DB2 Linked Server without using Openquery? IE, is it possible to query DB2 through a linked server by doing something like the following?
select *
from [db2_linked_server].[database].[tablename]
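From what I understand, a linked-server reference normally needs all four parts (server, catalog, schema, object), so I am wondering whether something like the first statement below would work, provided the DB2 provider supports four-part names; the EXEC ... AT form is a pass-through alternative that also avoids OPENQUERY (the schema name and RPC Out being enabled on the linked server are assumptions):
SELECT *
FROM [db2_linked_server].[database].[SCHEMA_NAME].[tablename];

EXEC ('SELECT * FROM SCHEMA_NAME.tablename') AT [db2_linked_server];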
DForck42
(3068 rep)
Apr 17, 2017, 02:14 PM
• Last activity: Aug 9, 2025, 07:04 AM
0
votes
1
answers
789
views
pooler error: query_wait_timeout in pgbouncer logs
I am getting pooler error: query_wait_timeout for different databases in the pgbouncer log 6 to 7 times a day. When the error occurs for the masterdata database, it gives me a timeout in the application.
The configuration is below:
: "md5"
default_pool_size: 10
max_client_conn: 5000
max_db_connections: 400
min_pool_size: 0
pool_mode: "transaction"
reserve_pool_size: 20
reserve_pool_timeout: 1
query_timeout: 0
ignore_startup_parameters: "search_path, extra_float_digits"
server_reset_query: "DISCARD ALL"
server_reset_query_always: 1
As the query timeout is disabled (query_timeout: 0) and the pool_mode is set to transaction, why am I still having this error? Do I need to set some min_pool_size for my masterdata database to eliminate it?
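While waiting for an answer I plan to watch the pgbouncer admin console when the error fires (connected to the special pgbouncer database); the interesting columns are cl_waiting and maxwait for the masterdata pool:
SHOW POOLS;     -- cl_waiting > 0 with a growing maxwait means clients are queueing for a free server connection
SHOW STATS;     -- per-database average query/transaction times
SHOW CONFIG;    -- confirms the effective query_wait_timeout (120 s by default)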
kumar ashish
(1 rep)
Jan 8, 2024, 07:00 AM
• Last activity: Aug 9, 2025, 06:05 AM
1
votes
1
answers
872
views
Unable to upgrade to postgresql 13
I'm getting the following error:
> postgres@:~/>cat loadable_libraries.txt could not load library
>"$libdir/plpython2": ERROR: could not access file
>"$libdir/plpython2": No such file or directory In database: db_name
When trying to run pg_upgrade from PostgreSQL-11 to PostgreSQL-13.
/usr/pgsql-13/bin/pg_upgrade -b /usr/pgsql-11/bin -B /usr/pgsql-13/bin -c -d /localpart0/db/postgres/11/db_name/data -D/localpart0/db/postgres/13/db_name/data -j 20 -r -v --clone
I don't have any plpython* extensions and have already tried changing the pg_pltemplate entries for plpythonu and plpython2u to plpython3 and running pg_upgrade, but it fails either way.
Installed Packages:
> postgresql11-plpython.x86_64 11.7-1PGDG.rhel7 @postgres11.prd
> postgresql13-plpython3.x86_64 13.1-1PGDG.rhel7 @postgres13.prd
Any guidance would be greatly appreciated.
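For reference, this is the check I am running in each database to see what still references the plpython2 library; if the languages show up but no functions use them, I assume dropping them before pg_upgrade should be enough (not yet verified):
SELECT p.proname, l.lanname
FROM pg_proc p
JOIN pg_language l ON l.oid = p.prolang
WHERE l.lanname IN ('plpythonu', 'plpython2u');

SELECT lanname FROM pg_language;           -- languages still registered in this database
-- DROP EXTENSION plpythonu;  -- or DROP LANGUAGE plpythonu;  -- only if nothing above depends on it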
efrain berdecia
(31 rep)
Dec 21, 2020, 02:27 PM
• Last activity: Aug 9, 2025, 04:11 AM
0
votes
0
answers
22
views
Official example of `CREATE JSON DUALITY VIEW` gives syntax error
According to [the release note of MySQL 9.4](https://dev.mysql.com/doc/relnotes/mysql/9.4/en/news-9-4-0.html) , *JSON duality views* are now supported.
[*27.7.1 JSON Duality View Syntax*](https://dev.mysql.com/doc/refman/9.4/en/json-duality-views-syntax.html) shows this example:
CREATE TABLE customers (
id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
first_name VARCHAR(25),
last_name VARCHAR(25),
street VARCHAR(50),
city VARCHAR(25),
state CHAR(2),
postcode CHAR(5),
phone CHAR(10),
email VARCHAR(50)
);
CREATE JSON DUALITY VIEW jdv_customers
AS
SELECT JSON_DUALITY_OBJECT( {
'_id' : c.id,
'name' : { 'fname' : c.first_name,
'lname' : c.last_name },
'email' : c.email,
'area' : { 'city' : c.city,
'state' : c.state }
}
)
FROM customers c;
Let me test that:
> SELECT VERSION();
+-----------+
| VERSION() |
+-----------+
| 9.4.0 |
+-----------+
> CREATE TABLE customers (
-> id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
-> first_name VARCHAR(25),
-> last_name VARCHAR(25),
-> street VARCHAR(50),
-> city VARCHAR(25),
-> state CHAR(2),
-> postcode CHAR(5),
-> phone CHAR(10),
-> email VARCHAR(50)
-> );
Query OK, 0 rows affected
> CREATE JSON DUALITY VIEW jdv_customers
-> AS
-> SELECT JSON_DUALITY_OBJECT( {
-> '_id' : c.id,
-> 'name' : { 'fname' : c.first_name,
-> 'lname' : c.last_name },
-> 'email' : c.email,
-> 'area' : { 'city' : c.city,
-> 'state' : c.state }
-> }
-> )
-> FROM customers c;
(1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '{\n '_id' : c.id,\n 'name' : { 'fname' : c.first_' at line 3")
Why?
Is this just a documentation bug or am I missing something?
---
Edit:
Reading [the documentation of JSON_DUALITY_OBJECT()](https://dev.mysql.com/doc/refman/9.4/en/json-creation-functions.html#function_json-duality-object) , it seems:
- The { } wrapper in JSON_DUALITY_OBJECT( { ... } ) found in the example is incorrect. We must omit it.
- The nested object { ... } found in the example is also incorrect. Instead we have to specify a subquery like this:
'name' : (
SELECT JSON_DUALITY_OBJECT(
/* ... */
)
FROM customers AS c2
WHERE c2.id = c.id
),
- And the documentation also says
> If the table is projected multiple times, the set of columns projected must be consistent across all instances of the table projection.
meaning that the top-level query and all of the subqueries must repeat the same set of columns.
Following these rules gives:
CREATE JSON DUALITY VIEW jdv_customers AS
SELECT JSON_DUALITY_OBJECT(
'_id' : c.id,
'first_name' : c.first_name,
'last_name' : c.last_name,
'email' : c.email,
'city' : c.city,
'state' : c.state,
'name' : (
SELECT JSON_DUALITY_OBJECT(
'id' : c2.id,
'first_name' : c2.first_name,
'last_name' : c2.last_name,
'email' : c2.email,
'city' : c2.city,
'state' : c2.state
)
FROM customers AS c2
WHERE c2.id = c.id
),
'area' : (
SELECT JSON_DUALITY_OBJECT(
'id' : c3.id,
'first_name' : c3.first_name,
'last_name' : c3.last_name,
'email' : c3.email,
'city' : c3.city,
'state' : c3.state
)
FROM customers AS c3
WHERE c3.id = c.id
)
)
FROM customers AS c;
This actually works (no syntax error) but is far more verbose than the official example.
I even doubt whether this is really an intended use case of JSON duality views. Probably the intended use case is not to project a single flat row into a nested JSON object, but to directly map an inherently nested structure (e.g. one given by a JOIN) to a JSON object without restructuring.
ynn
(185 rep)
Aug 9, 2025, 03:00 AM
• Last activity: Aug 9, 2025, 03:53 AM
1
votes
1
answers
2188
views
Connectivity issues in replication process
I've been trying to add a new node to a replica set without success due to these errors: > 2019-08-12T06:48:15.031+0200 I ASIO [NetworkInterfaceASIO-RS-0] > Ending connection to host mongo03:27017 due to bad connection status; > 1 connections to that host remain open 2019-08-12T06:48:15.031+0200 I >...
I've been trying to add a new node to a replica set without success due to these errors:
> 2019-08-12T06:48:15.031+0200 I ASIO [NetworkInterfaceASIO-RS-0]
> Ending connection to host mongo03:27017 due to bad connection status;
> 1 connections to that host remain open 2019-08-12T06:48:15.031+0200 I
> REPL [replication-224] Restarting oplog query due to error:
> NetworkInterfaceExceededTimeLimit: error in fetcher batch callback:
> Operation timed out. Last fetched optime (with hash): { ts:
> Timestamp(1565585283, 1317), t: 9 }. Restarts
> remaining: 10 2019-08-12T06:48:15.032+0200 I REPL
> [replication-224] Scheduled new oplog query Fetcher source:
> mongo03:27017 database: local query: { find: "oplog.rs", filter: { ts:
> { $gte: Timestamp(1565585283, 1317) } }, tailable: true, oplogReplay:
> true, awaitData: true, maxTimeMS: 2000, batchSize: 13981010, term: 9,
> readConcern: { afterOpTime: { ts: Timestamp(1565585283, 1317), t: 9 }
> } } query metadata: { $replData: 1, $oplogQueryData: 1,
> $readPreference: { mode: "secondaryPreferred" } } active: 1
> findNetworkTimeout: 7000ms getMoreNetworkTimeout: 10000ms shutting
> down?: 0 first: 1 firstCommandScheduler: RemoteCommandRetryScheduler
> request: RemoteCommand 2262327 -- target:mongo03:27017 db:local cmd:{
> find: "oplog.rs", filter: { ts: { $gte: Timestamp(1565585283, 1317) }
> }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS:
> 2000, batchSize: 13981010, term: 9, readConcern: { afterOpTime: { ts:
> Timestamp(1565585283, 1317), t: 9 } } } active: 1
> callbackHandle.valid: 1 callbackHandle.cancelled: 0 attempt: 1
> retryPolicy: RetryPolicyImpl maxAttempts: 1 maxTimeMillis: -1ms
> 2019-08-12T06:48:17.637+0200 I ASIO [NetworkInterfaceASIO-RS-0]
> Connecting to mongo03:27017 2019-08-12T06:48:17.645+0200 I ASIO
> [NetworkInterfaceASIO-RS-0] Successfully connected to mongo03:27017,
> took 8ms (2 connections now open to mongo03:27017)
> 2019-08-12T06:49:36.961+0200 I - [repl writer worker 13]
> DB.CollectionName collection clone progress: 221390504/643329790
> 34% (documents copied) 2019-08-12T06:51:52.859+0200 I ASIO
> [NetworkInterfaceASIO-RS-0] Ending connection to host mongo03:27017
> due to bad connection status; 1 connections to that host remain open
> 2019-08-12T06:51:52.859+0200 I REPL [replication-229] Restarting
> oplog query due to error: NetworkInterfaceExceededTimeLimit: error in
> fetcher batch callback: Operation timed out. Last fetched optime (with
> hash): { ts: Timestamp(1565585502, 204), t: 9 }.
> Restarts remaining: 10 2019-08-12T06:51:52.859+0200 I ASIO
> [NetworkInterfaceASIO-RS-0] Connecting to mongo03:27017
> 2019-08-12T06:51:52.859+0200 I REPL [replication-229] Scheduled
> new oplog query Fetcher source: mongo03:27017 database: local query: {
> find: "oplog.rs", filter: { ts: { $gte: Timestamp(1565585502, 204) }
> }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS:
> 2000, batchSize: 13981010, term: 9, readConcern: { afterOpTime: { ts:
> Timestamp(1565585502, 204), t: 9 } } } query metadata: { $replData: 1,
> $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" } }
> active: 1 findNetworkTimeout: 7000ms getMoreNetworkTimeout: 10000ms
> shutting down?: 0 first: 1 firstCommandScheduler:
> RemoteCommandRetryScheduler request: RemoteCommand 2272962 --
> target:mongo03:27017 db:local cmd:{ find: "oplog.rs", filter: { ts: {
> $gte: Timestamp(1565585502, 204) } }, tailable: true, oplogReplay:
> true, awaitData: true, maxTimeMS: 2000, batchSize: 13981010, term: 9,
> readConcern: { afterOpTime: { ts: Timestamp(1565585502, 204), t: 9 } }
> } active: 1 callbackHandle.valid: 1 callbackHandle.cancelled: 0
> attempt: 1 retryPolicy: RetryPolicyImpl maxAttempts: 1 maxTimeMillis:
> -1ms
I'm a little bit lost regarding how to debug it.
It's a 250 GB database, and the synchronization is being done through a 100 Mb private network. There is an iptables firewall on all servers, but traffic from the private network is allowed:
-A INPUT -i eth1 -p tcp -m tcp --dport 27017 -m state --state NEW,ESTABLISHED -j ACCEPT
At some point the replication process fails:
> 2019-08-12T02:22:37.582+0200 I REPL [replication-143] Restarting
> oplog query due to error: ExceededTimeLimit: error in fetcher batch
> callback: Executor error during find command :: caused by :: errmsg:
> "oper ation exceeded time limit". Last fetched optime (with hash): {
> ts: Timestamp(1565569328, 710), t: 9 }. Restarts
> remaining: 1 2019-08-12T02:22:37.582+0200 I REPL [replication-143]
> Scheduled new oplog query Fetcher source: mongo03:27017 database:
> local query: { find: "oplog.rs", filter: { ts: { $gte:
> Timestamp(1565569328, 710) } }, tailable: true, oplogReplay: true,
> awaitData: true, maxTimeMS: 2000, batchSize: 13981010, term: 9,
> readConcern: { afterOpTime: { ts: Timestamp(1565569328, 710), t: 9 } }
> } query metadata: { $replData: 1, $oplogQ ueryData: 1,
> $readPreference: { mode: "secondaryPreferred" } } active: 1
> findNetworkTimeout: 7000ms getMoreNetworkTimeout: 10000ms shutting
> down?: 0 first: 1 firstCommandScheduler: RemoteCommandRetryScheduler
> re quest: RemoteCommand 1381614 -- target:mongo03:27017 db:local cmd:{
> find: "oplog.rs", filter: { ts: { $gte: Timestamp(1565569328, 710) }
> }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS:
> 2000, ba tchSize: 13981010, term: 9, readConcern: { afterOpTime: { ts:
> Timestamp(1565569328, 710), t: 9 } } } active: 1 callbackHandle.valid:
> 1 callbackHandle.cancelled: 0 attempt: 1 retryPolicy: RetryPolicyImpl
> maxAttem pts: 1 maxTimeMillis: -1ms 2019-08-12T02:22:39.612+0200 I
> REPL [replication-142] Error returned from oplog query (no more
> query restarts left): ExceededTimeLimit: error in fetcher batch
> callback: Executor error during find command :: caused by :: errmsg:
> "operation exceeded time limit" 2019-08-12T02:22:39.615+0200 I REPL
> [replication-142] Finished fetching oplog during initial sync:
> ExceededTimeLimit: error in fetcher batch callback: Executor error
> during find command :: caused by :: errms g: "operation exceeded time
> limit". Last fetched optime and hash: { ts: Timestamp(1565569328,
> 710), t: 9 } 2019-08-12T02:22:43.034+0200 I REPL
> [replication-142] CollectionCloner ns:DB.CollectionName finished
> cloning with status: IllegalOperation: AsyncResultsMerger killed
> 2019-08-12T02:22:44.551+0200 W REPL [replication-142] collection
> clone for 'DB.CollectionName' failed due to IllegalOperation: While
> cloning collection 'DB.CollectionName' there was an error
> 'AsyncResultsMerger killed' 2019-08-12T02:22:44.551+0200 I REPL
> [replication-142] CollectionCloner::start called, on ns:DB.CollectionName2
> 2019-08-12T02:22:44.551+0200 W REPL [replication-142] database
> 'dbName' (9 of 9) clone failed due to ShutdownInProgress: collection
> cloner completed 2019-08-12T02:22:44.551+0200 I REPL
> [replication-142] Finished cloning data: ShutdownInProgress:
> collection cloner completed. Beginning oplog replay.
> 2019-08-12T02:22:44.551+0200 I REPL [replication-142] Initial sync
> attempt finishing up.
And it has to start over again.
I don't know if I could configure a bigger timeout to avoid these disconnections.
Thanks in advance.
brb
(41 rep)
Aug 12, 2019, 05:12 AM
• Last activity: Aug 9, 2025, 03:03 AM
0
votes
1
answers
145
views
How would I add two mariadb servers to an existing pair and ensure replication
I have a pair of MariaDB servers running on CentOS 7, version 5.5.52. Currently Server 1 reads and writes and replicates to Server 2. I want to add two MariaDB servers in AWS. I want one of these servers, Server 3, to read and write, and to replicate to Servers 1, 2 and 4; that is, I want replication to occur across all MariaDB servers. I also want Server 1 to replicate to the other three.
Link to schematic: https://www.flickr.com/photos/80050932@N06/51535391218/in/dateposted-public/
My question is: is this possible, and are there any thoughts on how to enable it?
autossh is the tunnel that handles replication. Here is a line in /usr/lib/systemd/system/autossh.service that may offer insight:
User=root
ExecStart=/usr/bin/autossh -M 0 -N -q -o "ServerAliveInterval=60" -o "ServerAliveCountMax=3" -p 22 -l replica Server1 -L 3305:127.0.0.1:3306 -i /root/.keys/replica
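From what I have read so far, having Servers 3 and 4 receive from more than one master needs multi-source replication, which only arrived in MariaDB 10.0, so the 5.5 boxes would presumably have to be upgraded first. A rough sketch of a named replication connection on one of the new servers (host, port and credentials are placeholders matching the autossh tunnel above):
CHANGE MASTER 'from_server1' TO
    MASTER_HOST     = '127.0.0.1',   -- local end of the autossh tunnel
    MASTER_PORT     = 3305,
    MASTER_USER     = 'replica',
    MASTER_PASSWORD = '********',
    MASTER_USE_GTID = slave_pos;
START SLAVE 'from_server1';
SHOW ALL SLAVES STATUS\G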
hokie1999
(1 rep)
Sep 30, 2021, 01:04 PM
• Last activity: Aug 9, 2025, 02:07 AM
0
votes
1
answers
602
views
How to select maximum row value using optimal way
I have a large table:
+----+-----------+------------+--------+--------+--------+
| Id | lClientId | datAccess | lProp1 | lProp2 | lProp3 |
+----+-----------+------------+--------+--------+--------+
| 0 | 5 | 10/10/2020 | | | 9 |
| 1 | 5 | 10/11/2020 | 2 | | |
| 2 | 5 | 10/12/2020 | 2 | | |
| 3 | 5 | 10/12/2020 | 3 | 10 | |
| 4 | 6 | 10/13/2020 | 3 | 10 | 8 |
| 5 | 6 | 10/29/2020 | | 11 | |
| 6 | 6 | 10/15/2020 | 9 | 13 | |
| 7 | 6 | 10/16/2020 | | | 16 |
+----+-----------+------------+--------+--------+--------+
I need to extract each prop for each client (the lProp1–lProp3 fields) associated with the last date of access (datAccess can be repeated; in that case I need the last one by Id). I.e. the desired result is:
+-----------+------------+--------+--------+--------+
| lClientId | datAccess | lProp1 | lProp2 | lProp3 |
+-----------+------------+--------+--------+--------+
| 5 | 10/12/2020 | 3 | 10 | 9 |
| 6 | 10/29/2020 | 9 | 11 | 16 |
+-----------+------------+--------+--------+--------+
I tried to create several subqueries, each selecting the maximum of one column with the corresponding maximum date, but performance falls off badly. Is there a way to select the desired values with a single query? It is very expensive to go through all the rows of the table every time it searches for the maximum date; a single pass should be enough to get the desired values.
I use ADODB and VBA, so maybe it's better to go through all the rows and extract the desired fields manually? Using Recordset.MoveNext I can check all the fields in a single pass. I just thought the engine would be faster than accessing each field via Recordset/Variant/etc.
The database is MS ACCESS.
Thanks in advance.
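For reference, this is roughly the multi-subquery version I tried, written as a single statement (tblAccess is a placeholder for the table name); it returns the desired rows but needs indexes on lClientId, datAccess and Id to avoid rescanning the whole table for every client:
SELECT t.lClientId,
       MAX(t.datAccess) AS datAccess,
       (SELECT TOP 1 x.lProp1 FROM tblAccess AS x
         WHERE x.lClientId = t.lClientId AND x.lProp1 IS NOT NULL
         ORDER BY x.datAccess DESC, x.Id DESC) AS lProp1,
       (SELECT TOP 1 x.lProp2 FROM tblAccess AS x
         WHERE x.lClientId = t.lClientId AND x.lProp2 IS NOT NULL
         ORDER BY x.datAccess DESC, x.Id DESC) AS lProp2,
       (SELECT TOP 1 x.lProp3 FROM tblAccess AS x
         WHERE x.lClientId = t.lClientId AND x.lProp3 IS NOT NULL
         ORDER BY x.datAccess DESC, x.Id DESC) AS lProp3
FROM tblAccess AS t
GROUP BY t.lClientId;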
John
(11 rep)
Dec 12, 2020, 09:06 PM
• Last activity: Aug 9, 2025, 01:05 AM
1
votes
2
answers
496
views
Advice: Should i use multiple physical databases or one physical database and several logical ones
We are creating a web application that will be used by companies as their business application every day; each company will have many employees and a lot of data.
Is it wise for each company to have its own database or is there a better solution for all companies to be in one database?
Right now we use one database: each company has its own id and each employee has a company id. But the database is overloaded with data, searches are slow, and privacy management is difficult.
I would like quick data searches in the database, easy implementation, and no excessive maintenance costs.
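For context, the kind of thing I expect the single-database answer to involve: a composite index that leads with the company id for every tenant-facing query, and (on PostgreSQL at least) row-level security for the privacy part. A rough sketch with made-up table and column names:
CREATE INDEX ix_orders_company_created ON orders (company_id, created_at);

ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
CREATE POLICY company_isolation ON orders
    USING (company_id = current_setting('app.current_company')::int);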
bane
(11 rep)
Jan 22, 2023, 02:30 PM
• Last activity: Aug 9, 2025, 12:03 AM
0
votes
1
answers
146
views
Transactional replication Snapshot agent not starting
I am trying to consider all the options before setting up transactional replication from my 2008 R2 publisher through a 2012 distribution server to a 2012 subscriber server. All three servers are in FULL recovery mode. A log backup is taken every 15 minutes and a full backup is taken daily on all three server databases used in replication. We want to use the subscriber as a reporting server; the publisher is our main OLTP database.
I have created domain accounts which are given the appropriate permissions on the shared snapshot folder on the distributor. I also gave the UNC path of this folder when setting up the distributor, and then granted db_owner rights to the three SQL logins created from these domain accounts on the distribution database on the distributor server, and on all the other databases on the publisher (the databases that will be replicated).
The snapshots were getting replicated to the shared folder on the distributor very nicely when I tested the setup on the UAT servers. I had to install an updated version of SSMS (11.XX) on the UAT publisher for the replication to work.
But then I decided not to update SSMS on my production server, as someone suggested that since the servers are very old, nothing should be changed on them, so I tried to use the SSMS of my distributor server and connect back to the publisher from there. But then everything stopped working.
The distribution was created successfully on the distributor server. The publisher was also created successfully (using the distributor's SSMS). But the snapshot agent never started.
I then tried my previous setup (the UAT setup, where I used the updated SSMS on the UAT publisher server); that also did not work. The snapshot agent does not start; it eventually fails after 10 minutes with the following error:
It says Step 2 failed: the password of the accounts repl_snapshot and repl_logreader has expired, and after 10 failed attempts the SQL Agent job fails.
Please help :(
I tested all of this on the test servers and it was working very well. Now suddenly only the snapshot agent is failing. I have changed the shared folder locations (by renaming the folder and re-granting the rights to all the domain users) and have tried to set up the distribution server and publisher server many times, but nothing is working. I have entered the passwords correctly at every step, and I have granted the domain-account SQL logins db_owner permissions on the required databases (the distribution database on the distribution server, and the other replication databases on the publisher server).
The snapshot agent is failing even with the previous setup (using the updated SSMS on the UAT publisher server), and it is failing when I use the distributor's SSMS to log in and connect to the publisher server. The error is the same in both cases.
Adding more screenshots from my SQL Agent jobs when the snapshot agent is started.
And I get this error after the snapshot agent finishes execution:
Unable to start execution of step 2 (reason: Error authenticating proxy DOMAIN\repl_snapshot, system error: The password for this account has expired.). NOTE: The step was retried the requested number of times (10) without succeeding. The step fail
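If the expired-password message is to be believed, the fix is presumably outside replication itself: reset DOMAIN\repl_snapshot in AD (and ideally exempt it from password expiry), then refresh the secret stored for the SQL Agent proxy, roughly like this (the credential name is my assumption):
ALTER CREDENTIAL [repl_snapshot]
    WITH IDENTITY = N'DOMAIN\repl_snapshot', SECRET = N'<new password>';
-- repeat for the credential behind repl_logreader, then re-run the Snapshot Agent job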


user25935915
(13 rep)
Sep 7, 2024, 10:17 AM
• Last activity: Aug 8, 2025, 11:04 PM
0
votes
1
answers
616
views
Query to update the JSON that does not contain a specific key
I want to query a table that has JSON data. The data of the column **detail** in the table **details** are as follows.
id | 1
detail | {
myCompany: [{ email: 'company@company.com', inProfit: true }],
myData: [
{ email: 'example@example.com', personalUse: true },
{ email: 'test@test.com', personalUse: false },
{ email: 'test@example.com' },
],
};
The **JSON** value of the column **detail** is as follows
{
"myCompany":[
{
"email":"company@company.com",
"inProfit":true
}
],
"myData":[
{
"email":"example@example.com",
"personalUse":true
},
{
"email":"test@test.com",
"personalUse":false
},
{
"email":"test@example.com",
}
]
}
I want a query that updates myData. If an element of the myData array doesn't contain personalUse, then update that element with personalUse: true. In the given example, { email: 'test@example.com' } should be updated to { email: 'test@example.com', personalUse: true }.
You can use the queries below to try it locally.
CREATE TABLE details (id bigserial primary key, detail json not null);
INSERT INTO details (detail)
VALUES
('{"myCompany":[{"email":"company@company.com", "inProfit":true } ],
"myData":[{"email":"example@example.com", "personalUse":true },
{"email":"test@test.com", "personalUse":false },
{"email":"test@example.com"} ] }');
Please, someone, help me out.
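For illustration, a sketch of one possible shape (it casts through jsonb, rebuilds the myData array, and adds personalUse: true only where the key is missing; assumes PostgreSQL 9.5+ and that myData is always present):
UPDATE details d
SET detail = jsonb_set(
        d.detail::jsonb,
        '{myData}',
        (SELECT jsonb_agg(
                    CASE WHEN elem ? 'personalUse'
                         THEN elem
                         ELSE elem || '{"personalUse": true}'::jsonb
                    END)
           FROM jsonb_array_elements(d.detail::jsonb -> 'myData') AS elem)
    )::json;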
Rich
(1 rep)
Apr 23, 2020, 03:27 PM
• Last activity: Aug 8, 2025, 10:08 PM
0
votes
0
answers
48
views
Why does enabling Snapshot Isolation use up CPU?
I'm currently enabling Snapshot Isolation with ALTER DATABASE [MyDB] SET ALLOW_SNAPSHOT_ISOLATION ON;. It was taking longer than I expected, so I ran sp_WhoIsActive. I was surprised to discover that it has used 81,519 units of CPU in 83 seconds. Why does this happen?
J. Mini
(1248 rep)
Aug 8, 2025, 09:14 PM
Showing page 8 of 20 total questions