Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
471
views
Clarification on steps creating Database Replication Definition
I am being asked to create database replication definitions for a couple of databases (around 20 databases). I just wanted to clarify whether the steps below are the correct ones to carry out.
Let's say 'customer' is one of my databases.
1. Set ddl in tran to true
sp_dboption customer,"ddl in tran", true
2. Mark the primary database using sp_reptostandby
sp_reptostandby customer,'all'
3. Set the RepAgent parameter send warm standby xacts to true
sp_config_rep_agent customer,'send warm standby xacts', 'true'
4. Create db repdef/subs
create database replication definition db_repdef_customer
with primary at DS.customer
replicate DDL
replicate system procedures
go
create subscription db_sub_customer
for database replication definition db_repdef_customer
with primary at DS.customer
with replicate at DS2.customer1
without materialization
go
(Note: DS.customer and DS2.customer1 are ASE-ASE replication)
After I followed the above steps to create the db repdef/subs, I hit a lot of permission issues on my replication ID doing INSERT/UPDATE/DELETE operations on those tables I had not yet set up table replication for. On further checking these tables in my 'customer' database (e.g. by doing insert/update/delete operations manually on tables without a table repdef), I realised that data replication is working for all the tables under the 'customer' database whether or not I set up table replication. Is this normal? Have I missed any steps? Please help.
Tittus
(33 rep)
Mar 13, 2018, 03:00 AM
• Last activity: Aug 10, 2025, 05:00 AM
0
votes
1
answers
977
views
Slow performance of query with $and & $or operators
My query with $and, $or operators is performing poorly. Even though I've built a compound index on all three fields that are specified in my query criteria, the query does a complete collection scan. I have tried both a simple find and an aggregation, and the winning plan for both is COLLSCAN.
### Simple find
Query:
db.test.find({ $or: [ {id1: "900004"}, {relatedid: "900004"}], $and: [ { isValid: "Y" } ] } )
Execution stats:
{
"queryPlanner" : {
"plannerVersion" : 1,
"namespace" : "XXX",
"indexFilterSet" : false,
"parsedQuery" : {
"$and" : [
{
"$or" : [
{
"Id1" : {
"$eq" : "900004"
}
},
{
"relatedID" : {
"$eq" : "900004"
}
}
]
},
{
"isValid" : {
"$eq" : "Y"
}
}
]
},
"winningPlan" : {
"stage" : "COLLSCAN",
"filter" : {
"$and" : [
{
"$or" : [
{
"Id1" : {
"$eq" : "900004"
}
},
{
"relatedID" : {
"$eq" : "900004"
}
}
]
},
{
"isValid" : {
"$eq" : "Y"
}
}
]
},
"direction" : "forward"
},
"rejectedPlans" : []
},
"executionStats" : {
"executionSuccess" : true,
"nReturned" : 1,
"executionTimeMillis" : 61,
"totalKeysExamined" : 0,
"totalDocsExamined" : 100006,
"executionStages" : {
"stage" : "COLLSCAN",
"filter" : {
"$and" : [
{
"$or" : [
{
"Id1" : {
"$eq" : "900004"
}
},
{
"relatedID" : {
"$eq" : "900004"
}
}
]
},
{
"isValid" : {
"$eq" : "Y"
}
}
]
},
"nReturned" : 1,
"executionTimeMillisEstimate" : 60,
"works" : 100008,
"advanced" : 1,
"needTime" : 100006,
"needYield" : 0,
"saveState" : 781,
"restoreState" : 781,
"isEOF" : 1,
"invalidates" : 0,
"direction" : "forward",
"docsExamined" : 100006
}
},
"serverInfo" : {
"host" : "XXXX",
"port" : ,
"version" : "4.0.9",
"gitVersion" : "XXX"
},
"ok" : 1.0
}
### Aggregation
Query:
db.test.aggregate( [{ "$match":{ "$and": [ {"isValid": "Y"}, { "$or": [ {"Id1": "900004"}, {"relatedID": "900004"} ]}] }} ] )
Execution Stats:
{
"stages" : [
{
"$cursor" : {
"query" : {
"$and" : [
{
"isValid" : "Y"
},
{
"$or" : [
{
"Id1" : "900004"
},
{
"relatedID" : "900004"
}
]
}
]
},
"queryPlanner" : {
"plannerVersion" : 1,
"namespace" : "XXXXX",
"indexFilterSet" : false,
"parsedQuery" : {
"$and" : [
{
"$or" : [
{
"Id1" : {
"$eq" : "900004"
}
},
{
"relatedID" : {
"$eq" : "900004"
}
}
]
},
{
"isValid" : {
"$eq" : "Y"
}
}
]
},
"winningPlan" : {
"stage" : "COLLSCAN",
"filter" : {
"$and" : [
{
"$or" : [
{
"Id" : {
"$eq" : "900004"
}
},
{
"relatedID" : {
"$eq" : "900004"
}
}
]
},
{
"isValid" : {
"$eq" : "Y"
}
}
]
},
"direction" : "forward"
},
"rejectedPlans" : []
}
}
}
],
"ok" : 1.0
}
How can I get a fast query for my results?
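A minimal sketch of one possible indexing approach, assuming the collection is test and that the field-name casing in the indexes matches the documents (the plans above show Id1/relatedID while the find uses id1/relatedid, which is worth double-checking): MongoDB only avoids a collection scan for an $or when every branch of the $or can use an index of its own, so a single compound index over all three fields does not help here.
// Hypothetical indexes: one per $or branch, each also covering the isValid filter.
db.test.createIndex({ Id1: 1, isValid: 1 })
db.test.createIndex({ relatedID: 1, isValid: 1 })
// Re-check the plan afterwards; each branch should show an IXSCAN instead of COLLSCAN.
db.test.find({ isValid: "Y", $or: [ { Id1: "900004" }, { relatedID: "900004" } ] }).explain("executionStats")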
Anban
(45 rep)
Jun 6, 2019, 01:01 PM
• Last activity: Aug 10, 2025, 04:04 AM
0
votes
1
answers
143
views
How to create a table with only indexes and no actual data?
**Question:** Is it possible to create a table in PostgreSQL for the sole purpose of creating specialized indexes (used for advanced search use-cases), without storing duplicate data in such tables?
**Context:** I have a table with a JSONB column that can contain dynamic (and nested) keys. I've managed to create a function, called jsonb_flatten, which can convert {a: {b: "val"}} to {"a.b": "val"}. However, I need trigram indexes on all possible values, which is not supported by the regular JSONB indexes, i.e. GIST/GIN. One solution could be to duplicate this data into a table with the fields key | value | original_row_id and create a trigram index on the value column, but that would end up unnecessarily duplicating the data. Thus, my question.
Saurabh Nanda
(333 rep)
Jul 13, 2024, 04:29 PM
• Last activity: Aug 10, 2025, 03:06 AM
0
votes
1
answers
136
views
column values do not return properly when column name called out specifically in select statement
I have a database with this table (version 5.0.95):
> describe shift;
+-----------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| person_id | int(11) | NO | MUL | NULL | |
| utc_date | datetime | NO | MUL | NULL | |
| mins | int(11) | NO | | NULL | |
| active_utc_date | datetime | YES | | NULL | |
| active_mins | int(11) | YES | | NULL | |
| absence_id | int(11) | YES | | NULL | |
| updated_by | int(11) | YES | | NULL | |
| updated_name | varchar(100) | YES | | NULL | |
| updated_date | datetime | YES | | NULL | |
+-----------------+--------------+------+-----+---------+----------------+
Sample of what it looks like is here:
| 9502849 | 327037 | 2017-06-12 11:00:00 | 540 | 2017-06-12 11:00:00 | 510 | -1 | NULL | NULL | 2015-01-01 00:00:00 |
| 9502850 | 327037 | 2017-06-13 11:00:00 | 540 | 2017-06-13 11:00:00 | 510 | -1 | NULL | NULL | 2015-01-01 00:00:00 |
| 9502851 | 327037 | 2017-06-14 11:00:00 | 540 | 2017-06-14 11:00:00 | 510 | -1 | NULL | NULL | 2015-01-01 00:00:00 |
| 9502852 | 327037 | 2017-06-15 11:00:00 | 540 | 2017-06-15 11:00:00 | 510 | -1 | NULL | NULL | 2015-01-01 00:00:00 |
| 9502853 | 327037 | 2017-06-16 11:00:00 | 540 | 2017-06-16 11:00:00 | 510 | -1 | NULL | NULL | 2015-01-01 00:00:00 |
+---------+-----------+---------------------+------+---------------------+-------------+------------+------------+---------------+---------------------+
940 rows in set (0.09 sec)
I wanted to see entries with utc_date as just today:
> select utc_date from shift where (person_id = 327037 and date(utc_date) = curdate());
| 2016-04-21 |
| 2016-04-21 |
| 2016-04-21 |
| 2016-04-21 |
| 2016-04-21 |
| 2016-04-21 |
| 2016-04-21 |
| 2016-04-21 |
+------------+
940 rows in set (0.08 sec)
That wasn't great, because utc_date got changed to be curdate()...
And now, anytime I try to look at utc_date, it always shows that same date:
> select utc_date from shift where (person_id = 327037 and utc_date like '%2016%');
| 2016-04-21 |
| 2016-04-21 |
| 2016-04-21 |
| 2016-04-21 |
| 2016-04-21 |
| 2016-04-21 |
| 2016-04-21 |
+------------+
940 rows in set, 1 warning (0.08 sec)
But if I view the table with a *, I still see it as it should be:
> select * from shift where person_id = 327037;
| 9502849 | 327037 | 2017-06-12 11:00:00 | 540 | 2017-06-12 11:00:00 | 510 | -1 | NULL | NULL | 2015-01-01 00:00:00 |
| 9502850 | 327037 | 2017-06-13 11:00:00 | 540 | 2017-06-13 11:00:00 | 510 | -1 | NULL | NULL | 2015-01-01 00:00:00 |
| 9502851 | 327037 | 2017-06-14 11:00:00 | 540 | 2017-06-14 11:00:00 | 510 | -1 | NULL | NULL | 2015-01-01 00:00:00 |
| 9502852 | 327037 | 2017-06-15 11:00:00 | 540 | 2017-06-15 11:00:00 | 510 | -1 | NULL | NULL | 2015-01-01 00:00:00 |
| 9502853 | 327037 | 2017-06-16 11:00:00 | 540 | 2017-06-16 11:00:00 | 510 | -1 | NULL | NULL | 2015-01-01 00:00:00 |
+---------+-----------+---------------------+------+---------------------+-------------+------------+------------+---------------+---------------------+
940 rows in set (0.10 sec)
So I know that it didn't change in the table - but for whatever reason, when I try to display the field by name, it seems to remember whatever caused it to display as today (presumably it took this as an assignment: date(utc_date) = curdate()).
How do I undo that assignment?
MaQleod
(371 rep)
Apr 21, 2016, 08:19 PM
• Last activity: Aug 10, 2025, 02:10 AM
0
votes
2
answers
578
views
Migration from MySQL to Oracle generates empty tables
I'm using SQL Developer to migrate a database from MySQL 5.7 to Oracle 18c XE. The problem is that the tables are generated but empty, even though I'm specifying that the data should move online. I also looked at the scripts, and there's no data either.
What I see in the log is DataMove.DISABLE_CONSTRAINTS_FAILED. How do I fix this?
These are the migration options:

ps0604
(51 rep)
Mar 10, 2019, 06:27 PM
• Last activity: Aug 10, 2025, 01:02 AM
0
votes
1
answers
136
views
Why doesn't pgAdmin enter the admin panel?
I installed postgresql-12.4 on my Windows PC. When I try to enter the admin panel (pgAdmin) I get this message:
could not connect to server: Connection refused (0x0000274D/10061) Is the server running on host "localhost" (::1) and accepting TCP/IP connections on port 5433? could not connect to server: Connection refused (0x0000274D/10061) Is the server running on host "localhost" (127.0.0.1) and accepting TCP/IP connections on port 5433?
I have 2 users on my PC: Admin and another user with admin rights. Under the admin profile pgAdmin works fine, under the other user profile pgAdmin throws the error message. In Services the server is running.
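A minimal check, assuming a connection is still possible from the working admin profile: the error mentions port 5433, while a default PostgreSQL installation listens on 5432, so it is worth confirming which port the server actually uses and comparing it with the port saved in the failing pgAdmin connection.
-- Run via pgAdmin's query tool or psql under the profile that works:
SHOW port;
SHOW listen_addresses;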
Артём Власов
Sep 1, 2020, 06:32 AM
• Last activity: Aug 10, 2025, 12:04 AM
2
votes
1
answers
5146
views
Status shows InnoDB: 128 rollback segment(s) are active waiting for purge to start
The MySQL process was killed due to memory issues. When the start command was issued, the server started and is working, but a status command shows the following:
InnoDB: 128 rollback segment(s) are active.
InnoDB: Waiting for purge to start
InnoDB: Percona XtraDB (http://www.percona.com) 5.6.3...514901
Plugin 'FEEDBACK' is disabled.
Server socket created on IP: '::'.
/usr/sbin/mysqld: ready for connections.
Is this something that needs to be addressed?
Dean Sanchez
(21 rep)
Oct 2, 2018, 01:00 PM
• Last activity: Aug 9, 2025, 11:07 PM
2
votes
1
answers
1063
views
Should I connect to my replica through PgBouncer?
We're using a setup of one PostgreSQL master server and several replicas to be able to withstand master failure. Connections to the master server are made through PgBouncer.
Currently the idea is that replicas would serve reads when the master is down.
Is it a good idea to connect to replicas via PgBouncer during master failover/switchover? Should it be the same PgBouncer instance or a separate one (for fault tolerance)?
We are building a web service and a set of background jobs; database transactions are typically short-lived. If there's an article explaining this, I would appreciate it if you shared a link.
chester89
(127 rep)
Sep 15, 2018, 06:30 AM
• Last activity: Aug 9, 2025, 10:04 PM
0
votes
1
answers
139
views
How to combine values in one column separated by commas
My code in PostgreSQL looks like this:
select
ouput.Serial ID as "Serial ID",
case when ouput.parameters->>'10' = 'true' then 'hello' else '' end as "Hello",
case when ouput.parameters->>'11' = 'true' then 'World' else '' end as "World"
from ouput
left join company_maintenance_entities abc on abc.id = ouput.maintenance_entity_id
left join ouput_statuses on issue_statuses.id = ouput.status_id
where
ouput.parameters->>'10' = 'true' or ouput.parameters->>'11' = 'true'
order by ouput.Serial ID
On output I get
| Serial ID | Hello | World |
| -------- | -------------- | -------------- |
| 1| hello|World |
| 2| | World |
| 3| hello| |
| 4| | World |
| 5| hello| World |
| 6| hello| World |
How can I make the query combine the Hello and World columns into one column, separated by commas, so it turns out like this?
| Serial ID | hello world |
| -------- | -------------- |
| 1| hello, world|
| 2| world|
| 3| hello|
| 4| world|
| 5| hello, world|
| 6| hello, world|
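A minimal sketch of one way to do this, keeping the rest of the query unchanged (the serial-id column is written here as ouput.serial_id as a placeholder for however it is actually referenced): concat_ws() skips NULL arguments, so dropping the ELSE '' from the CASE expressions yields NULLs for missing values and the separator is only added between the values that remain.
select
    ouput.serial_id as "Serial ID",
    concat_ws(', ',
        case when ouput.parameters->>'10' = 'true' then 'hello' end,  -- NULL when not 'true', so it is skipped
        case when ouput.parameters->>'11' = 'true' then 'world' end
    ) as "hello world"
from ouput
where ouput.parameters->>'10' = 'true' or ouput.parameters->>'11' = 'true'
order by ouput.serial_id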
H1Z1
(1 rep)
Apr 1, 2024, 03:54 PM
• Last activity: Aug 9, 2025, 09:01 PM
1
votes
1
answers
1478
views
How to read the MySQL Audit.log file in a SQL table?
I tried to read the audit.log file of MySQL Enterprise into a table or some other friendlier format that is easy to access and understand, but the XML format in which it is written has an irregular structure:
2019-02-17T12:53:23 UTC
1_2019-02-17T12:53:23
Audit
1
1
./mysqld --basedir=/MYSQL_HOME/mysql
x86_64-linux-glibc2.12
5.6.38-enterprise-commercial-advanced-log
2019-02-19T02:22:23 UTC
2_2019-02-17T12:53:23
Connect
460432
0
0
nbryan
172.17.3.150
connect
nbryan
2019-02-19T02:22:23 UTC
3_2019-02-17T12:53:23
Query
460432
0
0
flopez[nbryan] @ [172.17.3.150]
172.17.3.150
set_option
/*!40101 set @@session.wait_timeout=28800 */
As you can see, the same tags are not always generated, so I have not been able to format it into a table from SQL Server (there is a lot of information on how to read XML formats using SQL). I wanted to know if there is another way, maybe from MySQL itself, although it is very difficult to find anything about this compared to other solutions.
Any help or idea would be very helpful.
Thank you.
Fran.J
(121 rep)
Mar 7, 2019, 02:15 PM
• Last activity: Aug 9, 2025, 07:08 PM
3
votes
3
answers
5937
views
Should dates in dimensional tables use dimDate?
Assuming my dimDate has a surrogate key: should all the date columns in the *dimensional tables* (not the fact tables) store the surrogate key of the date dimension, or just a plain date?
For example, in dimensional table dimCustomer, there may be birthday, join date, graduation date, .... Etc.
u23432534
(1565 rep)
Jun 5, 2017, 07:54 PM
• Last activity: Aug 9, 2025, 06:05 PM
1
votes
2
answers
56
views
Tuning UPSERT in PG14 w/ 130M records table
I'm tuning the server in order to speed up several SQL queries that are generated by a product and can't be modified or tuned themselves.
I'm stuck on a query that creates a CTE and then an INSERT / SELECT statement with an ON CONFLICT clause; this is the execution plan:
Insert on data_location.md_associe_interlo_coord_red t (cost=0.57..592368.31 rows=0 width=0) (actual rows=0 loops=1)
Conflict Resolution: UPDATE
Conflict Arbiter Indexes: pkmd_associe_interlo_coord_red
Conflict Filter: (t.b_batchid Index Scan using i2mi_associe_interlo_coord_red on data_location.mi_associe_interlo_coord_red s (cost=0.57..592368.31 rows=1343311 width=1187) (actual rows=1341390 loops=1)
Output: s.b_pubid, s.b_sourceid, s.b_classname, s.b_batchid, (COALESCE(s.b_credate, '2025-08-01 08:43:28.853'::timestamp without time zone))::timestamp(6) without time zone, (COALESCE(s.b_upddate, '2025-08-01 08:43:28.853'::timestamp without time zone))::timestamp(6) without time zone, COALESCE(s.b_creator, 'semadmin'::character varying), COALESCE(s.b_updator, 'semadmin'::character varying), s.b_matchgrp, s.b_xgrp, s.b_confirmationstatus, NULL::numeric, NULL::character varying, s.b_hassuggmerge, s.b_suggmergeid, s.b_suggmergeconfscore, NULL::character varying, s.b_suggmergemasterscount, s.associe_interlo_coord_red, NULL::numeric, s.b_confirmedsdpk, s.num_compte_externe, s.siret, s.siren, s.nir, NULL::date, s.type_interlocuteur, NULL::integer, s.f_redevabilite, s.fp_redevabilite, s.fs_redevabilite, s.f_interlocuteur, s.fp_interlocuteur, s.fs_interlocuteur, s.f_coordonnee, s.fp_coordonnee, s.fs_coordonnee, s.id_golden_redevabilite, NULL::integer, s.dt_maj, s.code_organisme_source
Index Cond: (s.b_batchid = '52615'::numeric)
Filter: ((s.b_classname)::text = 'AssocieInterloCoordRede'::text)
Buffers: shared hit=9 read=239313 written=57
Settings: effective_cache_size = '43GB', effective_io_concurrency = '50', max_parallel_workers = '4', random_page_cost = '2', search_path = '"$user", public, extensions', work_mem = '1GB'
Table mi_associe_interlo_coord_red has +131M records (+140GB on disk) and md_associe_interlo_coord_red has +129M records (+81GB on disk).
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
---------------------------+--------------------------------+-----------------+-----------+------------+----------+-------------+-----------------------+-------------
b_pubid | character varying(30) | | not null | | extended | | |
b_sourceid | character varying(128) | | not null | | extended | | |
b_classname | character varying(128) | | not null | | extended | | |
b_batchid | numeric(38,0) | | not null | | main | | |
b_credate | timestamp(6) without time zone | | | | plain | | |
b_upddate | timestamp(6) without time zone | | | | plain | | |
b_creator | character varying(128) | | | | extended | | |
b_updator | character varying(128) | | | | extended | | |
b_pendingactions | character varying(30) | | | | extended | | |
b_matchgrp | numeric(38,0) | | | | main | | |
b_oldmatchgrp | numeric(38,0) | | | | main | | |
b_xgrp | uuid | | | | plain | | |
b_confirmationstatus | character varying(30) | | | | extended | | |
b_confscore | numeric(38,0) | | | | main | | |
b_confscoretype | character varying(30) | | | | extended | | |
b_hassuggmerge | boolean | | | | plain | | |
b_suggmergeid | numeric(38,0) | | | | main | | |
b_suggmergeconfscore | numeric(38,0) | | | | main | | |
b_suggmergeconfscoretype | character varying(30) | | | | extended | | |
b_suggmergemasterscount | numeric(38,0) | | | | main | | |
associe_interlo_coord_red | numeric(38,0) | | | | main | | |
b_oldsdpk | numeric(38,0) | | | | main | | |
b_confirmedsdpk | numeric(38,0) | | | | main | | |
num_compte_externe | character varying(18) | | | | extended | | |
siret | character varying(14) | | | | extended | | |
siren | character varying(9) | | | | extended | | |
nir | character varying(13) | | | | extended | | |
dtmaj | date | | | | plain | | |
type_interlocuteur | character varying(128) | | | | extended | | |
f_redevabilite | numeric(38,0) | | | | main | | |
fp_redevabilite | character varying(30) | | | | extended | | |
fs_redevabilite | character varying(128) | | | | extended | | |
f_interlocuteur | numeric(38,0) | | | | main | | |
fp_interlocuteur | character varying(30) | | | | extended | | |
fs_interlocuteur | character varying(128) | | | | extended | | |
f_coordonnee | numeric(38,0) | | | | main | | |
fp_coordonnee | character varying(30) | | | | extended | | |
fs_coordonnee | character varying(128) | | | | extended | | |
id_golden_redevabilite | character varying(128) | | | | extended | | |
dt_maj | date | | | | plain | | |
code_organisme_source | character varying(3) | | | | extended | | |
Indexes:
"pkmi_associe_interlo_coord_red" PRIMARY KEY, btree (b_pubid, b_sourceid)
"fkmi_associe_interlocuteur_re2" btree (fp_interlocuteur, fs_interlocuteur)
"fkmi_associe_interlocuteur_re3" btree (fp_coordonnee, fs_coordonnee)
"fkmi_associe_interlocuteur_red" btree (fp_redevabilite, fs_redevabilite)
"i1mi_associe_interlo_coord_red" btree (associe_interlo_coord_red)
"i2mi_associe_interlo_coord_red" btree (b_batchid)
"i3mi_associe_interlo_coord_red" btree (b_matchgrp)
"i4mi_associe_interlo_coord_red" btree (b_suggmergeid)
"i5mi_associe_interlo_coord_red" btree (b_xgrp)
"usr_idx_mi_associe_interlocuteur_red_b_pendingactions" btree (b_pendingactions)
"usr_mi_associe_interlo_coord_red_b_batchid_idx" btree (b_batchid, b_classname, b_pubid, b_sourceid, fp_coordonnee, fs_coordonnee, fp_interlocuteur, fs_interlocuteur, fp_redevabilite, fs_redevabilite)
"usr_mi_associe_interlo_coord_red_fp_fs_type" btree (fp_interlocuteur, fs_interlocuteur, type_interlocuteur)
"usr_mi_associe_interlo_coord_red_id_golden_redevabilite_idx" btree (id_golden_redevabilite, type_interlocuteur)
Access method: heap
Options: autovacuum_enabled=true, fillfactor=70
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
---------------------------+--------------------------------+-----------------+-----------+------------+----------+-------------+-----------------------+-------------
b_pubid | character varying(30) | | not null | | extended | | |
b_sourceid | character varying(128) | | not null | | extended | | |
b_classname | character varying(128) | | not null | | extended | | |
b_batchid | numeric(38,0) | | not null | | main | | |
b_credate | timestamp(6) without time zone | | | | plain | | |
b_upddate | timestamp(6) without time zone | | | | plain | | |
b_creator | character varying(128) | | | | extended | | |
b_updator | character varying(128) | | | | extended | | |
b_matchgrp | numeric(38,0) | | | | main | | |
b_xgrp | uuid | | | | plain | | |
b_confirmationstatus | character varying(30) | | | | extended | | |
b_confscore | numeric(38,0) | | | | main | | |
b_confscoretype | character varying(30) | | | | extended | | |
b_hassuggmerge | boolean | | | | plain | | |
b_suggmergeid | numeric(38,0) | | | | main | | |
b_suggmergeconfscore | numeric(38,0) | | | | main | | |
b_suggmergeconfscoretype | character varying(30) | | | | extended | | |
b_suggmergemasterscount | numeric(38,0) | | | | main | | |
associe_interlo_coord_red | numeric(38,0) | | not null | | main | | |
b_oldsdpk | numeric(38,0) | | | | main | | |
b_confirmedsdpk | numeric(38,0) | | | | main | | |
num_compte_externe | character varying(18) | | | | extended | | |
siret | character varying(14) | | | | extended | | |
siren | character varying(9) | | | | extended | | |
nir | character varying(13) | | | | extended | | |
dtmaj | date | | | | plain | | |
type_interlocuteur | character varying(128) | | | | extended | | |
f_redevabilite | numeric(38,0) | | | | main | | |
fp_redevabilite | character varying(30) | | | | extended | | |
fs_redevabilite | character varying(128) | | | | extended | | |
f_interlocuteur | numeric(38,0) | | | | main | | |
fp_interlocuteur | character varying(30) | | | | extended | | |
fs_interlocuteur | character varying(128) | | | | extended | | |
f_coordonnee | numeric(38,0) | | | | main | | |
fp_coordonnee | character varying(30) | | | | extended | | |
fs_coordonnee | character varying(128) | | | | extended | | |
id_golden_redevabilite | character varying(128) | | | | extended | | |
dt_maj | date | | | | plain | | |
code_organisme_source | character varying(3) | | | | extended | | |
Indexes:
"pkmd_associe_interlo_coord_red" PRIMARY KEY, btree (b_pubid, b_sourceid)
"fkmd_associe_interlocuteur_re2" btree (fp_interlocuteur, fs_interlocuteur)
"fkmd_associe_interlocuteur_re3" btree (fp_coordonnee, fs_coordonnee)
"fkmd_associe_interlocuteur_red" btree (fp_redevabilite, fs_redevabilite)
"i1md_associe_interlo_coord_red" btree (associe_interlo_coord_red)
"i2md_associe_interlo_coord_red" btree (b_suggmergeid)
"usr_idx_md_associe_interlocuteur_red_b_pendingactions" btree (b_batchid)
Access method: heap
The server has 64GB RAM and 16 vCores; concurrency is low, typically 3-4 postgres processes running. The workload is mostly huge batches and very little OLTP.
Adding memory is not an option because of company policies.
Creating a covering index is not an option because of the limit of 27 columns in the INCLUDE clause.
I tried to recreate the PK including the b_batchid column (to avoid heap reads while handling the conflict, I thought) but it didn't help.
How does it handle the conflict? Why is the cost of the upsert 0?
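For reference, a minimal, hypothetical upsert of the same shape (table and column names invented): the "Conflict Arbiter Indexes" line in the plan corresponds to the ON CONFLICT target (here the primary key), and the "Conflict Filter" line corresponds to a WHERE clause on the DO UPDATE branch. The Insert node itself reports actual rows=0 simply because an INSERT without a RETURNING clause emits no rows; the work is reflected in its child Index Scan.
INSERT INTO md_target AS t (b_pubid, b_sourceid, b_batchid, some_col)
SELECT s.b_pubid, s.b_sourceid, s.b_batchid, s.some_col
FROM   mi_source s
WHERE  s.b_batchid = 52615
ON CONFLICT (b_pubid, b_sourceid) DO UPDATE
SET    b_batchid = excluded.b_batchid,
       some_col  = excluded.some_col
WHERE  t.b_batchid < excluded.b_batchid;   -- this predicate is what shows up as "Conflict Filter"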
Ivan Rododendro
(31 rep)
Aug 7, 2025, 12:13 PM
• Last activity: Aug 9, 2025, 05:49 PM
1
votes
1
answers
712
views
mongo2.6 local file size on secondary is much larger than primary
I set up a new standalone mongo2.6 and copied one folder (database) from another machine to its data path (because I enabled directoryperdb).
Then I set up another machine to make them a replica set.
After rs.initiate and the data sync finished, I found the local file sizes on the primary and secondary are:
primary: 232G
secondary: 316G
I found a similar question here:
https://stackoverflow.com/questions/14843229/mongodb-replica-set-disk-size-difference-in-primary-and-secondary-nodes
But according to the answer there, the primary should use more disk space than the secondary.
I need to migrate about 25TB of data from the old mongo 2.6 cluster.
If every database's local files take that much more disk space on the secondary, then I need to take this into account as a spec consideration for the newly bought machines. :(
CSJ
(155 rep)
Feb 6, 2017, 12:49 AM
• Last activity: Aug 9, 2025, 05:08 PM
0
votes
1
answers
642
views
MySQL upgrade via mysqldump
We want to upgrade from MySQL 5.6 to MySQL 8.0, but the upgrade manual seems too cumbersome and risky to follow for an in-place upgrade, so we have decided to do it via a MySQL Workbench export (mysqldump) and import the exported database tables into a fresh new MySQL 8.0 installation.
Should this be successful, or should we do something else in between for it to work?
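A minimal sketch of the dump-and-reload path, assuming the application schemas are dumped individually rather than with --all-databases (so the old 5.6 mysql system schema is not carried into 8.0); schema names are placeholders, and users/grants would need to be recreated with CREATE USER/GRANT on the new server because the 5.6 grant tables are not compatible with 8.0.
# On the MySQL 5.6 server:
mysqldump --single-transaction --routines --triggers --events \
          --databases app_db1 app_db2 > dump_56.sql
# On the fresh MySQL 8.0 server:
mysql < dump_56.sql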
Matias Haeussler
(111 rep)
Jan 18, 2021, 05:00 PM
• Last activity: Aug 9, 2025, 04:09 PM
0
votes
1
answers
24
views
Linked Server failure on clustered SQL Server
I have two clustered Microsoft SQL Servers (SQLA & SQLB) installed, and confirmed that both of the servers have an ODBC connector for a local PostgreSQL server.
From that ODBC connection, I have a linked server created for use in some stored procedures that fails at least once a fortnight with this error message:
> Cannot initialize the data source object of OLE DB provider "MSDASQL" for linked server "POSTGRESP23"
When troubleshooting the issue, the ODBC connector on both SQLA and SQLB tests successfully from the System DSN menu on the server; the error originates from the linked server.
Linked Server settings:
EXEC master.dbo.sp_addlinkedserver @server = N'POSTGRESP23', @srvproduct=N'', @provider=N'MSDASQL', @datasrc=N'PSQLPROD'
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname=N'POSTGRESP23',@useself=N'False',@locallogin=NULL,@rmtuser=N'postgres',@rmtpassword='########'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'collation compatible', @optvalue=N'false'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'data access', @optvalue=N'true'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'dist', @optvalue=N'false'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'pub', @optvalue=N'false'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'rpc', @optvalue=N'false'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'rpc out', @optvalue=N'false'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'sub', @optvalue=N'false'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'connect timeout', @optvalue=N'0'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'collation name', @optvalue=null
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'lazy schema validation', @optvalue=N'false'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'query timeout', @optvalue=N'0'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'use remote collation', @optvalue=N'true'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'remote proc transaction promotion', @optvalue=N'true'
Currently, to fix this for convenience and lower downtime I am just deleting the linked server and remaking it, pointing it to the same ODBC object. However, this is not a sustainable process.
Can anyone suggest where to look when troubleshooting? I'm at a loss.
**Additional Information**
psqlODBC_X64 is installed on both machines already, from https://odbc.postgresql.org/
System DSN settings:


NathanM
Jul 30, 2025, 10:48 PM
• Last activity: Aug 9, 2025, 03:27 PM
0
votes
1
answers
713
views
MS SQL Creating Tables and Adding Values by a Trigger
Table_A is an existing table which has 3 columns Id (PK), Name, Stock.
I would like to create a trigger to create a table.
For example, Table_A has an item, Id (PK) = 3, Name = Apple, Stock = 10.
By using triggers I would like to create a new table called Table_B, and when I write the following code "INSERT INTO Table_B VALUES(3,5)" a new table will be created and it will have 2 columns, ProductId and Quantity. At the same time the value of Stock in Table_A will be decreased by 5. Every time I insert something into Table_B I would like it to get added to Table_B and the stock in Table_A to be decreased by the quantity of the item inserted, for the related ProductId. Thanks!
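A minimal sketch of one way to wire this up, assuming Table_B is created once up front (a trigger does not need to create it) and that only the stock adjustment happens automatically on each insert:
CREATE TABLE Table_B (
    ProductId INT NOT NULL REFERENCES Table_A (Id),
    Quantity  INT NOT NULL
);
GO
CREATE TRIGGER trg_TableB_AfterInsert
ON Table_B
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Decrease stock in Table_A by the quantity just inserted, per product.
    UPDATE a
    SET    a.Stock = a.Stock - i.Quantity
    FROM   Table_A AS a
    JOIN   inserted AS i ON i.ProductId = a.Id;
END
GO
-- Example: INSERT INTO Table_B VALUES (3, 5) now reduces the stock of product 3 from 10 to 5.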
Nikola
(1 rep)
Apr 29, 2021, 01:05 AM
• Last activity: Aug 9, 2025, 02:10 PM
0
votes
1
answers
14
views
postgres - log archive_command execution
We want to have an entry in the postgres logs upon execution of the archive_command.
Someone came up with this archive_command, which should achieve this:
test ! -f /opt/db/backup/postgres/archives/%f && echo $(date "+%F %T.%3N %Z") [$$] ARCH: archiving WAL: %f && cp %p /opt/db/backup/postgres/archives/%f
The goal is to have a line telling us which WAL file was archived at what time.
It throws an error, though:
2025-08-07 18:06:44.184 CEST FATAL: invalid value for parameter "archive_command": "test ! -f /opt/db/backup/postgres/archives/%f && echo $(date "+%F %T.%3N %Z")
[$$] ARCH: archiving WAL: %f && cp %p /opt/db/backup/postgres/archives/%f"
2025-08-07 18:06:44.184 CEST DETAIL: String contains unexpected placeholder "%F".
Any idea how to adjust the archive command in order to get this going correctly?
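A minimal sketch of one adjustment, based on the documented placeholder rules for archive_command (%p and %f are placeholders, and a literal % must be written as %%), keeping the paths from the question; whether the echo output actually lands in the server log depends on how logging is configured (e.g. the logging collector capturing the command's stdout/stderr):
archive_command = 'test ! -f /opt/db/backup/postgres/archives/%f && echo "$(date "+%%F %%T.%%3N %%Z") [$$] ARCH: archiving WAL: %f" && cp %p /opt/db/backup/postgres/archives/%f'
An alternative that avoids the quoting headaches is to move the whole thing into a small shell script that takes %p and %f as arguments and does its own logging.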
vrms
(271 rep)
Aug 7, 2025, 04:04 PM
• Last activity: Aug 9, 2025, 02:07 PM
4
votes
2
answers
901
views
Numeric collation sorts by digits instead of value- postgres
I have a table of users with ids in the following form: user123@domain.com.
When searching and sorting the users, I need user1@domian.com to be before user14@domain.com, but since 4 is “smaller” than @ it sorts the other way around.
After looking around I came across the following collation:
CREATE COLLATION numeric (provider = icu, locale = 'en-u-kn-true');
This fixed the problem, but caused another one: users with numbers that started with 2 appeared after users with numbers that started with 1.
For example, user19@domain.com came before user2@domain.com.
From the docs, kn-true is supposed to sort by the number's value, but it looks like it only compares the digits.
I'm using Postgres 11.6. I have two servers: one on Windows (local) and a remote one on Linux.
Is anyone familiar with this issue?
Amit Keinan
(41 rep)
Dec 28, 2022, 07:17 PM
• Last activity: Aug 9, 2025, 01:56 PM
3
votes
1
answers
2541
views
Finding all the index corruptions in the entire Postgres DB
There is a DB with an unknown number of corrupted unique indexes.
Currently I discover them one by one: I try REINDEX DATABASE CONCURRENTLY, handle the specific duplication violation (manipulating data, mainly deleting the duplicates using scripts), reindex the specific table or index, and continue to the next index (again, only right after I discover it via REINDEX DATABASE CONCURRENTLY).
Not to mention that each time I get indexes with the '_ccnew' suffix, which AFAIK are indexes that a previous REINDEX CONCURRENTLY tried to create but couldn't, usually because they violate a uniqueness check.
The failed attempts to reindex concurrently sit there and have to be dropped manually.
**Concurrent** reindex is used in order to prevent a shutdown.
I want to reduce those "roundtrips" of searching for the next violation, and I wonder if there is a more efficient way to get an overall picture of all the index corruptions or unique violations that a Postgres DB has.
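A minimal sketch of one way to get that overall picture first, assuming the amcheck extension (and, on PostgreSQL 14+, the pg_amcheck command-line tool) is available; it reports B-tree inconsistencies without rebuilding anything, so damaged indexes can be listed before any REINDEX (it flags structural corruption, though it is not a dedicated duplicate-checker):
-- Check a single index from SQL; raises an error if that index is corrupt:
CREATE EXTENSION IF NOT EXISTS amcheck;
SELECT bt_index_check(index => 'my_unique_idx'::regclass, heapallindexed => true);
# Or scan the whole database from the shell; it keeps going and reports every broken index:
pg_amcheck --install-missing --heapallindexed --progress mydb
my_unique_idx and mydb are placeholders. Leftover invalid '_ccnew' indexes can be listed via pg_index.indisvalid = false and dropped separately.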
KaliTheGreat
(189 rep)
Feb 21, 2024, 08:00 AM
• Last activity: Aug 9, 2025, 01:53 PM
1
votes
2
answers
47
views
Dynamically Adjusting max server memory Between Two Instances
I’ve got one server with two SQL instances. The server has 128 GB RAM.
- Instance 1 has peak load from 08:00–10:00
- Instance 2 has peak load from 14:00–17:00
Currently:
- Instance 1 has max server memory = 70 GB
- Instance 2 has max server memory = 50 GB
- The rest of the day, both are nearly idle.
Would it make sense to dynamically shift memory, giving more to one instance during its peak while reducing it on the other?
Any downsides to doing that with a scheduled script?
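A minimal sketch of what such a scheduled script could look like, assuming SQL Server Agent jobs on each instance and placeholder memory values; max server memory can be changed online without a restart, but lowering it forces the instance to release buffer pool pages, so leave headroom for the OS and avoid swinging the values more often than the workload requires.
-- Agent job step on Instance 1, scheduled shortly before 08:00 (a mirrored step lowers it again after 10:00):
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)', 90112;  -- ~88 GB during Instance 1's peak (placeholder value)
RECONFIGURE;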
Jonasstadi
(11 rep)
Aug 7, 2025, 03:04 PM
• Last activity: Aug 9, 2025, 01:41 PM
Showing page 11 of 20 total questions