
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
1 answer
385 views
Allow user to drop only tables / views created by that user in MySQL
Is it possible to allow a user to *only* drop tables and views that he/she created in MySQL? How is this achieved?
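A minimal sketch (editor's addition, not from the question): MySQL has no per-table "owner" concept, so the usual workaround is to confine each user to a schema of their own and grant CREATE/DROP only there; all names below are hypothetical.
-- Each user gets a dedicated schema; DROP then only applies to objects in it.
CREATE DATABASE alice_db;
CREATE USER 'alice'@'%' IDENTIFIED BY 'choose-a-strong-password';
GRANT CREATE, CREATE VIEW, DROP, SELECT, INSERT, UPDATE, DELETE
    ON alice_db.* TO 'alice'@'%';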
Giovanni Colitti (131 rep)
Jan 20, 2021, 04:54 PM • Last activity: Aug 10, 2025, 02:08 PM
0 votes
1 answer
56 views
(SOLVED) Source and Replica in a MariaDB 'pair' do not start exchanging data
On Linux Slackware PCs, I have a 'Source' database on one PC and a 'Replica' on the other. On the Source end (11.4.7-MariaDB) I've set server ID=1. On the Replica end (10.11.6-MariaDB), I set ID=11 and set the Source IP. I've started them, and they seem to like each other, as SHOW REPLICA STATUS reports everything as OK: no errors, comms OK, etc... BUT, when I create a new table on the 'Source' machine, nothing happens on the 'Replica' - no new table is created. And after I manually create the table on the Replica and insert a record on the Source, no data transfer happens either. No pointers are incremented, seemingly no data is transferred. And no errors seem to appear in the logs. Here are the status reports for Source and Replica. I despair; I consulted AI, and after several sessions the AI was in the same state as I was... repeatedly asking me to insert new records at the Source and check at the other end, maybe hoping something would repair itself, and suggesting extra complications such as adding SSH, GTID and other optional features. What can I do to get the show on the road? I've reinstalled MariaDB at the Replica. I'm somewhat reluctant to change the DB at the Source, as it's in use. I've checked permissions on files and directories... The last session ended at 04:30 AM this morning. **source** status:
show master status\G
*************************** 1. row ***************************
            File: mysql-bin.000020
        Position: 328
    Binlog_Do_DB: homeorptemp, farma1, test_replication
Binlog_Ignore_DB: 
1 row in set (0.000 sec)
**replica** status:
show slave status\G
*************************** 1. row ***************************
                Slave_IO_State: Waiting for master to send event
                   Master_Host: 192.168.1.102
                   Master_User: my_repl
                   Master_Port: 3306
                 Connect_Retry: 60
               Master_Log_File: mysql-bin.000020
           Read_Master_Log_Pos: 328
                Relay_Log_File: mysql-relay-bin.000007
                 Relay_Log_Pos: 627
         Relay_Master_Log_File: mysql-bin.000020
              Slave_IO_Running: Yes
             Slave_SQL_Running: Yes
               Replicate_Do_DB: 
           Replicate_Ignore_DB: 
            Replicate_Do_Table: 
        Replicate_Ignore_Table: 
       Replicate_Wild_Do_Table: 
   Replicate_Wild_Ignore_Table: 
                    Last_Errno: 0
                    Last_Error: 
                  Skip_Counter: 0
           Exec_Master_Log_Pos: 328
               Relay_Log_Space: 1235
               Until_Condition: None
                Until_Log_File: 
                 Until_Log_Pos: 0
            Master_SSL_Allowed: No
            Master_SSL_CA_File: 
            Master_SSL_CA_Path: 
               Master_SSL_Cert: 
             Master_SSL_Cipher: 
                Master_SSL_Key: 
         Seconds_Behind_Master: 0
 Master_SSL_Verify_Server_Cert: No
                 Last_IO_Errno: 0
                 Last_IO_Error: 
                Last_SQL_Errno: 0
                Last_SQL_Error: 
   Replicate_Ignore_Server_Ids: 
              Master_Server_Id: 1
                Master_SSL_Crl: 
            Master_SSL_Crlpath: 
                    Using_Gtid: No
                   Gtid_IO_Pos: 
       Replicate_Do_Domain_Ids: 
   Replicate_Ignore_Domain_Ids: 
                 Parallel_Mode: optimistic
                     SQL_Delay: 0
           SQL_Remaining_Delay: NULL
       Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
              Slave_DDL_Groups: 0
Slave_Non_Transactional_Groups: 0
    Slave_Transactional_Groups: 0
          Replicate_Rewrite_DB: 
1 row in set (0.001 sec)
**EDIT** my.cnf at the replica:
[client]
[mysqld]
port                    = 3306
socket                  = /var/run/mysql/mysql.sock
skip-external-locking
bind-address            = 0.0.0.0
key_buffer_size         = 256M
max_allowed_packet      = 32M
table_open_cache        = 1024
sort_buffer_size        = 2M
net_buffer_length       = 8K
read_buffer_size        = 512K
read_rnd_buffer_size    = 1M
skip-networking         = OFF
server-id               = 11
log-bin                 = mysql-bin
binlog_format           = mixed
innodb_buffer_pool_size = 512M
innodb_log_file_size    = 256M
read_only               = OFF
log_warnings		= 1
log_slave_updates 	= ON

[mysqldump]
quick
max_allowed_packet     = 32M

[mysql]
no-auto-rehash

[myisamchk]

**replica** processlist:
show processlist;
+----+-------------+-----------+------+-----------+-------+--------------------------------------------------------+------------------+----------+
| Id | User        | Host      | db   | Command   | Time  | State                                                  | Info             | Progress |
+----+-------------+-----------+------+-----------+-------+--------------------------------------------------------+------------------+----------+
|  5 | system user |           | NULL | Slave_IO  | 17725 | Waiting for master to send event                       | NULL             |    0.000 |
|  6 | system user |           | NULL | Slave_SQL | 17724 | Slave has read all relay log; waiting for more updates | NULL             |    0.000 |
|  8 | root        | localhost | NULL | Query     |     0 | starting                                               | show processlist |    0.000 |
+----+-------------+-----------+------+-----------+-------+--------------------------------------------------------+------------------+----------+
3 rows in set (0.000 sec)
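Editor's note (a sketch, not from the original post): the source's SHOW MASTER STATUS above has Binlog_Do_DB set, and with statement/mixed binary-log format that filter is applied against the session's *default* database. A quick test like the following, run on the source with one of the filtered databases selected, shows whether events make it into the binary log at all:
-- Run on the source; test_replication is one of the Binlog_Do_DB entries above.
USE test_replication;
CREATE TABLE t1 (id INT PRIMARY KEY);
INSERT INTO t1 VALUES (1);
SHOW MASTER STATUS;                                   -- Position should move past 328
SHOW BINLOG EVENTS IN 'mysql-bin.000020' LIMIT 20;    -- the new events should be listed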
jcoppens (121 rep)
Aug 5, 2025, 03:32 PM • Last activity: Aug 10, 2025, 01:25 PM
2 votes
1 answer
503 views
Handling unique IDs on client side
Reading around on the topic hasn't got me closer to understanding best practices. I'm also relatively new and self-taught in programming, so I might have some questions about how to actually execute the solution. The app I'm building requires a lot of DOM elements to be generated based on database rows. These should be updatable by the user, and at the minute I'm using the auto-incrementing primary key as an HTML attribute id. Obviously I don't want users to modify this, or the associated JavaScript object, and then affect other users' data when it is saved or updated. I have considered the following options:
1. Not using auto increment but a uuid or guid, but I've read these are not all that advisable, especially for db performance.
2. Concealing the client-side id by hashing it when served on the server side. This also doesn't sound particularly performant.
3. Having a 'lookup table' that is referenced and produces another, non-incremental id. Doesn't seem like a bad idea, but I'm a bit unsure how best to go about generating this and how it should look.
4. Including an extra column in the tables that would be updated, containing for example the user's name, which would be combined with the primary key. This seems simple enough, but the idea of adding this extra column to every table that might be updated sounds redundant, not normalized, etc.
5. Keeping the primary keys visible but producing a session table for the user that tracks which IDs were sent out and therefore which can be updated. New rows can be generated so long as rows belonging to other users aren't updated.
Or maybe I'm barking up the wrong tree here :) Thanks!
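Editor's sketch of the common middle ground behind option 1 (hypothetical names, MySQL-flavoured SQL): keep the auto-increment key internal and expose a separate random identifier to the browser, scoping every update to the owner.
-- id stays the internal surrogate key; public_id is the only identifier
-- ever placed in the DOM or accepted back from the client.
CREATE TABLE items (
    id        BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    public_id CHAR(36) NOT NULL,              -- filled with UUID() on insert
    owner_id  BIGINT UNSIGNED NOT NULL,
    name      VARCHAR(100) NOT NULL,
    UNIQUE KEY items_public_id (public_id)
);
INSERT INTO items (public_id, owner_id, name) VALUES (UUID(), 42, 'example');
-- Updates are scoped to the owner as well, so a tampered id cannot touch
-- another user's row.
UPDATE items SET name = 'renamed'
WHERE public_id = 'value-from-client' AND owner_id = 42;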
Mikey B (21 rep)
Sep 1, 2018, 12:39 PM • Last activity: Aug 10, 2025, 01:06 PM
0 votes
0 answers
37 views
Slow DELETE due to FK trigger on partitioned child table
I'm working with PostgreSQL 15 and experimenting with table partitioning to improve performance.

**Original setup:**
I have two tables: `tasks` (parent) with ~65M rows and `records` (child) with ~200M rows.
There is a foreign key from records.task_id → tasks.id.
When executing a `DELETE` statement on `tasks`, it takes approximately 12 seconds to complete.
explain analyze:
my_db=*> EXPLAIN (ANALYZE, BUFFERS, TIMING, VERBOSE) DELETE FROM tasks WHERE project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e';
                                                                      QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------
 Delete on public.tasks  (cost=5628.30..1402047.95 rows=0 width=0) (actual time=1323.519..1323.520 rows=0 loops=1)
   Buffers: shared hit=1160307
   ->  Bitmap Heap Scan on public.tasks  (cost=5628.30..1402047.95 rows=502675 width=6) (actual time=45.328..373.039 rows=513093 loops=1)
         Output: ctid
         Recheck Cond: (tasks.project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e'::uuid)
         Heap Blocks: exact=133678
         Buffers: shared hit=134121
         ->  Bitmap Index Scan on tasks__project_id  (cost=0.00..5502.63 rows=502675 width=0) (actual time=24.057..24.058 rows=513093 loops=1)
               Index Cond: (tasks.project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e'::uuid)
               Buffers: shared hit=443
 Query Identifier: -8361055271394889220
 Planning Time: 0.069 ms
 Trigger RI_ConstraintTrigger_a_991596545 for constraint records_task_id_dfa43a78_fk_tasks: time=11377.093 calls=513093
 Execution Time: 12742.066 ms
(14 rows)
I decided to partition the records table based on project_id (list) by doing the following:
1. Create new table tasks_new and copy the content of tasks
2. Create a new partitioned records table PARTITION BY LIST (project_id)
3. Change the PRIMARY KEY to be based on (project_id, id)
4. Add foreign key constraint to tasks_new based on (project_id, task_id)
5. Generate 800 tables and populate the data from records

This works as expected; however, when running and comparing the DELETE statement on tasks_new, I see it is slower than the original (non-partitioned) setup. explain analyze:
my_db=*> EXPLAIN (ANALYZE, BUFFERS, TIMING, VERBOSE) DELETE FROM tasks_new WHERE project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e';
                                                                    QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------
 Delete on public.tasks_new  (cost=15925.46..1400465.79 rows=0 width=0) (actual time=1352.341..1352.342 rows=0 loops=1)
   Buffers: shared hit=1162960
   ->  Bitmap Heap Scan on public.tasks_new  (cost=15925.46..1400465.79 rows=497147 width=6) (actual time=62.350..390.696 rows=513093 loops=1)
         Output: ctid
         Recheck Cond: (tasks_new.project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e'::uuid)
         Heap Blocks: exact=133678
         Buffers: shared hit=136774
         ->  Bitmap Index Scan on tasks_new_pkey  (cost=0.00..15801.17 rows=497147 width=0) (actual time=41.590..41.590 rows=513093 loops=1)
               Index Cond: (tasks_new.project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e'::uuid)
               Buffers: shared hit=3096
 Query Identifier: -3093644494149541820
 Planning Time: 0.066 ms
 Trigger RI_ConstraintTrigger_a_991528806 for constraint records_parti_task_id_dfa43a78_fk_tasks: time=28558.369 calls=513093
 Execution Time: 29965.025 ms
(14 rows)
I saw that most of the time is spent on the foreign key trigger (to see it, I intentionally set constraints all immediate;) and it is SLOWER than the test without the partitions.

RI_ConstraintTrigger_a_991596545 fires RI_FKey_noaction_del in AFTER DELETE on table tasks_new. However, I don't know how RI_FKey_noaction_del is implemented, and I suspect it doesn't work well with partitioned tables, or perhaps it scans all the partitions together (not efficient). Please help me understand better how it works and how I can improve it.

p.s., edit: I forgot to mention that I also added an index (project_id, record_id) to the tasks_new table (partitioned)
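Editor's note (an illustration, not a verified fix): for an ON DELETE NO ACTION foreign key, the RI trigger effectively runs a per-row lookup on the referencing side, roughly SELECT 1 FROM records_parti WHERE project_id = ? AND task_id = ? FOR KEY SHARE, once per deleted tasks_new row. For that lookup to prune to a single partition and use an index scan, the partitioned child table needs an index on its referencing columns; the names below are assumptions based on the constraint name in the plan.
-- On the partitioned child; in PostgreSQL 11+ this cascades to every partition.
CREATE INDEX IF NOT EXISTS records_parti_project_task_idx
    ON records_parti (project_id, task_id);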
Cowabunga (145 rep)
Aug 7, 2025, 10:36 AM • Last activity: Aug 10, 2025, 12:27 PM
0 votes
2 answers
12651 views
Getting current number of connections in mysql
I want to know if there is any parameter to show the current number of mysql connections (or peak value in the past 10 minutes or something else). Right now, I only see max and max_used connections.
mysql> show global status like '%connection%';
+-----------------------------------+---------------------+
| Variable_name                     | Value               |
+-----------------------------------+---------------------+
| Connection_errors_accept          | 0                   |
| Connection_errors_internal        | 0                   |
| Connection_errors_max_connections | 434                 |
| Connection_errors_peer_address    | 0                   |
| Connection_errors_select          | 0                   |
| Connection_errors_tcpwrap         | 0                   |
| Connections                       | 2380515             |
| Max_used_connections              | 152                 |
| Max_used_connections_time         | 2020-07-09 19:28:43 |
+-----------------------------------+---------------------+
9 rows in set (0.00 sec)
mysql> show global variables like '%connection%';
+--------------------------+-------------------+
| Variable_name            | Value             |
+--------------------------+-------------------+
| character_set_connection | latin1            |
| collation_connection     | latin1_swedish_ci |
| max_connections          | 300               |
| max_user_connections     | 0                 |
+--------------------------+-------------------+
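Editor's note (standard MySQL status variables, not part of the original post): the current connection count is exposed separately from the '%connection%' matches shown above.
-- Currently open connections, and how many of them are actively executing:
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL STATUS LIKE 'Threads_running';
-- Per-connection detail:
SHOW PROCESSLIST;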
mahmood (113 rep)
Jul 12, 2020, 07:46 AM • Last activity: Aug 10, 2025, 12:04 PM
1 vote
1 answer
548 views
Write latency for adding an index on read replica
This is a question that I've not been able to find any good documentation for on the internet. I think it's a common industry standard that for tables with heavy writes, you add an index on the read replica and not on the master. This way, you're ensuring the writes on master don't slow down, while also ensuring the reads on read replica are faster. But doesn't the read replica also have writes, in the form of binlog replication? If I'm not wrong, the replication is simply a form of copy-pasting the commands from the binlog as-is. But the index on the read replica would still have to be updated, right? Since the binlog file won't have any "index updates" commands (because the master doesn't even have the index), I'm assuming that the "index update" on the read replica happens when an INSERT or an UPDATE commands comes from the binlog. So, is it still not a "WRITE" operation that happens? Will it still not slow down the write latency? I understand that not all DBs have this capability of indexing a read replica separately, so this question is restricted to only those DBs that do, such as MySQL.
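Editor's sketch (hypothetical table and column names): in MySQL, a replica-only index is created by running the DDL directly on the replica; SET sql_log_bin = 0 merely keeps the statement out of the replica's own binary log, which matters only if further replicas are chained below it.
-- Run on the replica, not on the primary.
SET sql_log_bin = 0;
ALTER TABLE orders ADD INDEX idx_orders_created_at (created_at);
SET sql_log_bin = 1;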
Sidharth Samant (203 rep)
Jun 1, 2023, 07:46 AM • Last activity: Aug 10, 2025, 11:06 AM
0 votes
1 answer
155 views
The operation could not be performed because OLE DB provider "MSOLEDBSQL" for linked server "ServerB" was unable to begin a distributed transaction
A Tale As Old As Time... - I have a Linked Server set up to a 3rd-party database which I have no ownership of or access to otherwise. Let's call it ServerB. I have a stored procedure (SomeStoredProcedure) that selects from Linked Server ServerB. If I explicitly set the isolation level to SERIALIZABLE and then try to insert the results of SomeStoredProcedure into a local temp table, I get the following error:
> OLE DB provider "MSOLEDBSQL" for linked server "ServerB" returned message "The parameter is incorrect.".
> Msg 7399, Level 16, State 1, Line 1
> The OLE DB provider "MSOLEDBSQL" for linked server "ServerB" reported an error. One or more arguments were reported invalid by the provider.
> Msg 7391, Level 16, State 2, Line 1
> The operation could not be performed because OLE DB provider "MSOLEDBSQL" for linked server "ServerB" was unable to begin a distributed transaction.
If I just execute the procedure directly (without inserting the results into a local temp table) it works. If I don't use the SERIALIZABLE isolation level, it also works. (Other explicit isolation levels work as well.) I have tried disabling "Enable Promotion of Distributed Transactions for RPC" as mentioned in other answers (Linked Server options screenshot omitted), but no dice, same error (screenshot omitted). I understand that the query wants to promote to a distributed transaction in the above scenario since a Linked Server is involved (I assume enforcing SERIALIZABLE isolation is more involved across a remote server). But is it possible to prevent it from promoting to a distributed transaction under these circumstances? The same issue is reproducible using sp_executesql to select from the Linked Server as well. Repro code for example:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

DROP TABLE IF EXISTS #LocalTempTable;
CREATE TABLE #LocalTempTable (ID INT);

INSERT INTO #LocalTempTable (ID)
EXEC sp_executesql N'SELECT ID FROM ServerB.DatabaseName.SchemaName.SomeTable;';
*Reminder: I don't own this 3rd party server, and can't change any settings on it such as actually enabling the MSDTC.
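Editor's note: the GUI option mentioned above ("Enable Promotion of Distributed Transactions for RPC") corresponds to the linked-server option below; it is shown only for completeness, since the question reports that toggling it did not help.
EXEC master.dbo.sp_serveroption
    @server   = N'ServerB',
    @optname  = N'remote proc transaction promotion',
    @optvalue = N'false';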
J.D. (40913 rep)
Jun 27, 2025, 06:06 PM • Last activity: Aug 10, 2025, 10:03 AM
1 vote
2 answers
1709 views
Relationship from Orders to Shop and Supplier, or to general User?
I have a platform where
- a User can have a Shop, which can place Orders
- a User can be a Supplier, which can see Orders from Shops coming in.
I have created the following diagram depicting the structure I have modeled so far (ER diagram image omitted). Is that a *good* schema? Or would it make more sense to *couple* the relationships to the User straight away? I'm guessing that would cause more overhead in my backend? (Laravel - PHP) Thank you!
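Editor's sketch of the structure described, with assumed names (orders reference the shop and supplier tables rather than users directly):
CREATE TABLE users     (id INT PRIMARY KEY);
CREATE TABLE shops     (id INT PRIMARY KEY, user_id INT REFERENCES users(id));
CREATE TABLE suppliers (id INT PRIMARY KEY, user_id INT REFERENCES users(id));
CREATE TABLE orders (
    id          INT PRIMARY KEY,
    shop_id     INT REFERENCES shops(id),
    supplier_id INT REFERENCES suppliers(id)
);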
Miguel Stevens (169 rep)
May 9, 2017, 02:32 PM • Last activity: Aug 10, 2025, 09:08 AM
1 vote
1 answer
427 views
In MySQL replication, why aren't queries run on the slave when there was no USE statement on master?
We have replication running from our master server to a slave. binlog_format is the default; mixed. We have noticed something weird and are not sure whether it's a bug or intended behaviour. Certain statements are being ignored.
## Ignored
DELETE FROM schema_name.table_name WHERE id = 123;
## Works
USE schema_name;
DELETE FROM schema_name.table_name WHERE id = 123;
## Also works!
USE different_schema_name;
DELETE FROM schema_name.table_name WHERE id = 123;
When I delete something without previously selecting a database on master, the query is ignored on the slave. But when I issue a USE schema_name on the master before the statement, *even if it is a different schema*, then it does run on the slave. Regardless of the USE and whether they run on the slave, they always appear in the binary logs on master. We noticed this as we have a script that tidies up old shopping carts that loops over a list of databases and issues DELETE FROM schema_name.carts WHERE created_at ..., but replication was breaking when an INSERT INTO carts SET cart_number = 'xxx' call took place on the slave; the cart number (unique column) no longer existed on master (after a successful check to validate such) but on the slave it still existed, so it threw the error and broke replication. **Is this intended behaviour? What's the best method to ensure this kind of behaviour doesn't happen?** For reference and possibly meaningful, master is MariaDB (in a drbd cluster) and slave is Percona.
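Editor's note (a sketch, assuming the replica filters with replicate-do-db and runs MySQL/Percona 5.7 or later): the wild-table filters match on the tables a statement actually touches rather than on the session's default database, which avoids the USE-dependent behaviour described above.
-- On the replica; the SQL thread must be stopped to change filters.
STOP SLAVE SQL_THREAD;
CHANGE REPLICATION FILTER REPLICATE_WILD_DO_TABLE = ('schema_name.%');
START SLAVE SQL_THREAD;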
Leonard Challis (142 rep)
Dec 7, 2018, 01:38 PM • Last activity: Aug 10, 2025, 07:08 AM
0 votes
1 answer
748 views
Joining without a unique identifier
**Payments**

| PaymentRef | Product | PaymentAmount | MerchantRef |
| ---------- | ------- | ------------- | ----------- |
| P01 | ABC | 100 | MR01 |
| P02 | ABC | 200 | MR02 |
| P03 | XYZ | 200 | MR03 |

**Refunds**

| RefundRef | Product | PaymentAmount | MerchantRef |
| --------- | ------- | ------------- | ----------- |
| R01 | ABC | -10 | MR04 |
| R02 | ABC | -100 | MR05 |

These tables used to be joined on MerchantRef, resulting in a list of payments and refunds. This worked because only single refunds against a product were allowed. Multiple refunds are now allowed, meaning MerchantRef must change for each refund (a payment provider requirement), therefore breaking the join. If I change the join to use Product, I end up with duplicated rows. Is there a method or some trickery that will allow me to join on Product at all? While Product can exist multiple times, it always refers to the same thing. I don't think there's a way to proceed, as we have no way of knowing which refund refers to which payment. The only approach I can think of is to SUM() PaymentAmount for all matching Product records, do the same for refunds, and work with the totals. SQL isn't my forte at all, but am I right in thinking my only solution here is having a reference between the tables that never changes? I'm trying to keep changes to a minimum; I never would have started with this design, but here we are.
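Editor's sketch of the aggregate-then-compare idea raised at the end of the question (table and column names taken from the sample data): net figures per Product are still derivable even though individual refunds can no longer be tied to individual payments.
SELECT p.Product,
       p.total_paid,
       COALESCE(r.total_refunded, 0) AS total_refunded,
       p.total_paid + COALESCE(r.total_refunded, 0) AS net_amount
FROM (SELECT Product, SUM(PaymentAmount) AS total_paid
        FROM Payments GROUP BY Product) AS p
LEFT JOIN (SELECT Product, SUM(PaymentAmount) AS total_refunded
             FROM Refunds GROUP BY Product) AS r
       ON r.Product = p.Product;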
supermanpineaplle (1 rep)
Feb 5, 2021, 02:59 PM • Last activity: Aug 10, 2025, 06:01 AM
0 votes
1 answer
471 views
Clarification on steps creating Database Replication Definition
I am being asked to create database replication definitions for a number of databases (around 20). I just wanted to clarify whether the steps below are the correct ones to carry out. Let's say 'customer' is one of my databases.
1. Set ddl in tran to true: sp_dboption customer,"ddl in tran", true
2. Mark the primary database using sp_reptostandby: sp_reptostandby customer,'all'
3. Set the RepAgent parameter send warm standby xacts to true: sp_config_rep_agent customer,'send warm standby xacts', 'true'
4. Create the database repdef/subscription:
create database replication definition db_repdef_customer with primary at DS.customer replicate DDL replicate system procedures go
create subscription db_sub_customer for database replication definition db_repdef_customer with primary at DS.customer with replicate at DS2.customer1 without materialization go
(Note: DS.customer and DS2.customer1 are ASE-to-ASE replication.) After following the above steps to create the db repdef/subscription, I hit a lot of permission errors from my replication ID doing INSERT/UPDATE/DELETE operations on the tables for which I had not set up table replication yet. Checking these tables in my 'customer' database further (e.g. doing insert/update/delete operations manually on tables without a table repdef), I realised that data replication is working for all the tables under the 'customer' database whether or not I set up table replication. Is this normal? Have I missed any steps? Please help.
Tittus (33 rep)
Mar 13, 2018, 03:00 AM • Last activity: Aug 10, 2025, 05:00 AM
0 votes
1 answer
977 views
Slow performance of query with $and & $or operators
My query with $and, $or operators is performing poorly. Even though I've built a compound index on all three fields that are specified in my query criteria, the query does a complete collection scan. I have tried both a simple find and an aggregation, and the winning plan for both is COLLSCAN.
### Simple find
Query:
db.test.find({ $or: [ {id1: "900004"}, {relatedid: "900004"}], $and: [ { isValid: "Y" } ] } )
Execution stats:
{
    "queryPlanner" : {
        "plannerVersion" : 1,
        "namespace" : "XXX",
        "indexFilterSet" : false,
        "parsedQuery" : {
            "$and" : [ 
                {
                    "$or" : [ 
                        {
                            "Id1" : {
                                "$eq" : "900004"
                            }
                        }, 
                        {
                            "relatedID" : {
                                "$eq" : "900004"
                            }
                        }
                    ]
                }, 
                {
                    "isValid" : {
                        "$eq" : "Y"
                    }
                }
            ]
        },
        "winningPlan" : {
            "stage" : "COLLSCAN",
            "filter" : {
                "$and" : [ 
                    {
                        "$or" : [ 
                            {
                                "Id1" : {
                                    "$eq" : "900004"
                                }
                            }, 
                            {
                                "relatedID" : {
                                    "$eq" : "900004"
                                }
                            }
                        ]
                    }, 
                    {
                        "isValid" : {
                            "$eq" : "Y"
                        }
                    }
                ]
            },
            "direction" : "forward"
        },
        "rejectedPlans" : []
    },
    "executionStats" : {
        "executionSuccess" : true,
        "nReturned" : 1,
        "executionTimeMillis" : 61,
        "totalKeysExamined" : 0,
        "totalDocsExamined" : 100006,
        "executionStages" : {
            "stage" : "COLLSCAN",
            "filter" : {
                "$and" : [ 
                    {
                        "$or" : [ 
                            {
                                "Id1" : {
                                    "$eq" : "900004"
                                }
                            }, 
                            {
                                "relatedID" : {
                                    "$eq" : "900004"
                                }
                            }
                        ]
                    }, 
                    {
                        "isValid" : {
                            "$eq" : "Y"
                        }
                    }
                ]
            },
            "nReturned" : 1,
            "executionTimeMillisEstimate" : 60,
            "works" : 100008,
            "advanced" : 1,
            "needTime" : 100006,
            "needYield" : 0,
            "saveState" : 781,
            "restoreState" : 781,
            "isEOF" : 1,
            "invalidates" : 0,
            "direction" : "forward",
            "docsExamined" : 100006
        }
    },
    "serverInfo" : {
        "host" : "XXXX",
        "port" : ,
        "version" : "4.0.9",
        "gitVersion" : "XXX"
    },
    "ok" : 1.0
}
### Aggregation
Query:
db.test.aggregate( [{ "$match":{ "$and": [ {"isValid": "Y"}, { "$or": [ {"Id1": "900004"}, {"relatedID": "900004"} ]}] }} ] )
Execution Stats:
{
    "stages" : [ 
        {
            "$cursor" : {
                "query" : {
                    "$and" : [ 
                        {
                            "isValid" : "Y"
                        }, 
                        {
                            "$or" : [ 
                                {
                                    "Id1" : "900004"
                                }, 
                                {
                                    "relatedID" : "900004"
                                }
                            ]
                        }
                    ]
                },
                "queryPlanner" : {
                    "plannerVersion" : 1,
                    "namespace" : "XXXXX",
                    "indexFilterSet" : false,
                    "parsedQuery" : {
                        "$and" : [ 
                            {
                                "$or" : [ 
                                    {
                                        "Id1" : {
                                            "$eq" : "900004"
                                        }
                                    }, 
                                    {
                                        "relatedID" : {
                                            "$eq" : "900004"
                                        }
                                    }
                                ]
                            }, 
                            {
                                "isValid" : {
                                    "$eq" : "Y"
                                }
                            }
                        ]
                    },
                    "winningPlan" : {
                        "stage" : "COLLSCAN",
                        "filter" : {
                            "$and" : [ 
                                {
                                    "$or" : [ 
                                        {
                                            "Id" : {
                                                "$eq" : "900004"
                                            }
                                        }, 
                                        {
                                            "relatedID" : {
                                                "$eq" : "900004"
                                            }
                                        }
                                    ]
                                }, 
                                {
                                    "isValid" : {
                                        "$eq" : "Y"
                                    }
                                }
                            ]
                        },
                        "direction" : "forward"
                    },
                    "rejectedPlans" : []
                }
            }
        }
    ],
    "ok" : 1.0
}
How can I get a fast query for my results?
Anban (45 rep)
Jun 6, 2019, 01:01 PM • Last activity: Aug 10, 2025, 04:04 AM
0 votes
1 answer
143 views
How to create a table with only indexes and no actual data?
**Question:** Is it possible to create a table in PostgreSQL for the sole purpose of creating specialized indexes (used for advanced search use-cases), without storing duplicate data in such tables? **Context:** I have a table with a JSONB column that can contain dynamic (and nested) keys. I've managed to create a function called jsonb_flatten, which can convert {a: {b: "val"}} to {"a.b": "val"}. However, I need trigram indexes on all possible values, which is not supported by the regular JSONB index types, i.e. GiST/GIN. One solution could be to duplicate this data into a table with the columns key | value | original_row_id and create a trigram index on the value column, but that would end up unnecessarily duplicating the data. Thus, my question.
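Editor's sketch of the side-table idea from the question's last sentence (names assumed; requires the pg_trgm extension): PostgreSQL has no data-less, index-only tables, so the flattened key/value pairs have to live somewhere the trigram index can point to.
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE TABLE doc_search (
    original_row_id bigint NOT NULL,
    key             text   NOT NULL,
    value           text   NOT NULL
);
CREATE INDEX doc_search_value_trgm
    ON doc_search USING gin (value gin_trgm_ops);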
Saurabh Nanda (333 rep)
Jul 13, 2024, 04:29 PM • Last activity: Aug 10, 2025, 03:06 AM
0 votes
1 answer
136 views
column values do not return properly when column name called out specifically in select statement
I have a database with this table (version 5.0.95):
> describe shift;
+-----------------+--------------+------+-----+---------+----------------+
| Field           | Type         | Null | Key | Default | Extra          |
+-----------------+--------------+------+-----+---------+----------------+
| id              | int(11)      | NO   | PRI | NULL    | auto_increment |
| person_id       | int(11)      | NO   | MUL | NULL    |                |
| utc_date        | datetime     | NO   | MUL | NULL    |                |
| mins            | int(11)      | NO   |     | NULL    |                |
| active_utc_date | datetime     | YES  |     | NULL    |                |
| active_mins     | int(11)      | YES  |     | NULL    |                |
| absence_id      | int(11)      | YES  |     | NULL    |                |
| updated_by      | int(11)      | YES  |     | NULL    |                |
| updated_name    | varchar(100) | YES  |     | NULL    |                |
| updated_date    | datetime     | YES  |     | NULL    |                |
+-----------------+--------------+------+-----+---------+----------------+
Sample of what it looks like is here:
| 9502849 | 327037 | 2017-06-12 11:00:00 | 540 | 2017-06-12 11:00:00 | 510 | -1 | NULL | NULL | 2015-01-01 00:00:00 |
| 9502850 | 327037 | 2017-06-13 11:00:00 | 540 | 2017-06-13 11:00:00 | 510 | -1 | NULL | NULL | 2015-01-01 00:00:00 |
| 9502851 | 327037 | 2017-06-14 11:00:00 | 540 | 2017-06-14 11:00:00 | 510 | -1 | NULL | NULL | 2015-01-01 00:00:00 |
| 9502852 | 327037 | 2017-06-15 11:00:00 | 540 | 2017-06-15 11:00:00 | 510 | -1 | NULL | NULL | 2015-01-01 00:00:00 |
| 9502853 | 327037 | 2017-06-16 11:00:00 | 540 | 2017-06-16 11:00:00 | 510 | -1 | NULL | NULL | 2015-01-01 00:00:00 |
+---------+-----------+---------------------+------+---------------------+-------------+------------+------------+---------------+---------------------+
940 rows in set (0.09 sec)
I wanted to see entries with utc_date as just today:
> select utc_date from shift where (person_id = 327037 and date(utc_date) = curdate());
| 2016-04-21 |
| 2016-04-21 |
| 2016-04-21 |
| 2016-04-21 |
| 2016-04-21 |
| 2016-04-21 |
| 2016-04-21 |
| 2016-04-21 |
+------------+
940 rows in set (0.08 sec)
That wasn't great, because utc_date got changed to be curdate()... And now, anytime I try to look at utc_date, it always shows that same date:
> select utc_date from shift where (person_id = 327037 and utc_date like '%2016%');
| 2016-04-21 |
| 2016-04-21 |
| 2016-04-21 |
| 2016-04-21 |
| 2016-04-21 |
| 2016-04-21 |
| 2016-04-21 |
+------------+
940 rows in set, 1 warning (0.08 sec)
But if I view the table with a *, I still see it as it should be:
> select * from shift where person_id = 327037;
| 9502849 | 327037 | 2017-06-12 11:00:00 | 540 | 2017-06-12 11:00:00 | 510 | -1 | NULL | NULL | 2015-01-01 00:00:00 |
| 9502850 | 327037 | 2017-06-13 11:00:00 | 540 | 2017-06-13 11:00:00 | 510 | -1 | NULL | NULL | 2015-01-01 00:00:00 |
| 9502851 | 327037 | 2017-06-14 11:00:00 | 540 | 2017-06-14 11:00:00 | 510 | -1 | NULL | NULL | 2015-01-01 00:00:00 |
| 9502852 | 327037 | 2017-06-15 11:00:00 | 540 | 2017-06-15 11:00:00 | 510 | -1 | NULL | NULL | 2015-01-01 00:00:00 |
| 9502853 | 327037 | 2017-06-16 11:00:00 | 540 | 2017-06-16 11:00:00 | 510 | -1 | NULL | NULL | 2015-01-01 00:00:00 |
+---------+-----------+---------------------+------+---------------------+-------------+------------+------------+---------------+---------------------+
940 rows in set (0.10 sec)
So I know that it didn't change in the table - but for whatever reason, when I try to display the field by name, it seems to remember whatever caused it to display as today (presumably it took this as an assignment: date(utc_date) = curdate()). How do I undo that assignment?
MaQleod (371 rep)
Apr 21, 2016, 08:19 PM • Last activity: Aug 10, 2025, 02:10 AM
0 votes
2 answers
578 views
Migration from MySQL to Oracle generates empty tables
I'm using SQL Developer to migrate a database from MySQL 5.7 to Oracle 18c XE. The problem is that the tables are generated but empty, even though I'm specifying that the data should move online. I also looked at the scripts, and there's no data either. What I see in the log is DataMove.DISABLE_CONSTRAINTS_FAILED. How to fix this? These are the migration options: (screenshot of the migration options omitted)
ps0604 (51 rep)
Mar 10, 2019, 06:27 PM • Last activity: Aug 10, 2025, 01:02 AM
0 votes
1 answer
136 views
Why pgadmin dont enter in admin panel?
I installed postgresql-12.4 on my Windows PC. When I try to open the admin panel (pgAdmin) I get the message:
could not connect to server: Connection refused (0x0000274D/10061) Is the server running on host "localhost" (::1) and accepting TCP/IP connections on port 5433?
could not connect to server: Connection refused (0x0000274D/10061) Is the server running on host "localhost" (127.0.0.1) and accepting TCP/IP connections on port 5433?
I have 2 users on my PC: Admin and another user with admin rights. Under the admin profile pgAdmin works fine; under the other user's profile pgAdmin throws the error message. In Services, the server is running.
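Editor's note (a quick check, assuming a session that does connect is available, e.g. psql or pgAdmin under the admin profile): confirm which port and addresses the server is actually listening on, then point the failing pgAdmin connection at the same values.
SHOW port;
SHOW listen_addresses;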
Артём Власов
Sep 1, 2020, 06:32 AM • Last activity: Aug 10, 2025, 12:04 AM
2 votes
1 answer
5146 views
Status shows InnoDB: 128 rollback segment(s) are active waiting for purge to start
MySQL process was killed due to memory issues. When the start command was issued, the server started and is working, but a status command shows the following:
InnoDB: 128 rollback segment(s) are active.
InnoDB: Waiting for purge to start
InnoDB: Percona XtraDB (http://www.percona.com) 5.6.3...514901
Plugin 'FEEDBACK' is disabled.
Server socket created on IP: '::'.
/usr/sbin/mysqld: ready for connections.
Is this something that needs to be addressed?
Dean Sanchez (21 rep)
Oct 2, 2018, 01:00 PM • Last activity: Aug 9, 2025, 11:07 PM
2 votes
1 answer
1063 views
Should I connect to my replica through PgBouncer?
We're using a setup of one PostgreSQL master server and several replicas to be able to withstand master failure. Connection to the master server is made through PgBouncer.
Currently the idea is that replicas would serve reads when the master is down.
Is it a good idea to connect to replicas via PgBouncer during master failover/switchover? Should it be the same PgBouncer instance or a separate one (for fault tolerance)?
We build a web service and a set of background jobs; database transactions are typically short-lived. If there's an article explaining this, I would appreciate it if you share a link.
chester89 (127 rep)
Sep 15, 2018, 06:30 AM • Last activity: Aug 9, 2025, 10:04 PM
0 votes
1 answer
139 views
How to combine values in one column separated by commas
My code in PostgreSQL looks like this:
select
    ouput."Serial ID" as "Serial ID",
    case when ouput.parameters->>'10' = 'true' then 'hello' else '' end as "Hello",
    case when ouput.parameters->>'11' = 'true' then 'World' else '' end as "World"
from ouput
left join company_maintenance_entities abc on abc.id = ouput.maintenance_entity_id
left join ouput_statuses on ouput_statuses.id = ouput.status_id
where
    ouput.parameters->>'10' = 'true' or ouput.parameters->>'11' = 'true'
order by ouput."Serial ID"
On output I get:

| Serial ID | Hello | World |
| --------- | ----- | ----- |
| 1 | hello | World |
| 2 |       | World |
| 3 | hello |       |
| 4 |       | World |
| 5 | hello | World |
| 6 | hello | World |

How can I change the query so that the Hello and World columns are joined into one column, separated by commas, like this?

| Serial ID | hello world |
| --------- | ----------- |
| 1 | hello, world |
| 2 | world |
| 3 | hello |
| 4 | world |
| 5 | hello, world |
| 6 | hello, world |
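Editor's sketch of one way to collapse the two CASE columns (same query shape as above; concat_ws skips NULLs, so missing parts drop out cleanly):
select
    ouput."Serial ID" as "Serial ID",
    concat_ws(', ',
        case when ouput.parameters->>'10' = 'true' then 'hello' end,
        case when ouput.parameters->>'11' = 'true' then 'world' end
    ) as "hello world"
from ouput
left join company_maintenance_entities abc on abc.id = ouput.maintenance_entity_id
left join ouput_statuses on ouput_statuses.id = ouput.status_id
where ouput.parameters->>'10' = 'true' or ouput.parameters->>'11' = 'true'
order by ouput."Serial ID";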
H1Z1 (1 rep)
Apr 1, 2024, 03:54 PM • Last activity: Aug 9, 2025, 09:01 PM
1 vote
1 answer
1478 views
How to read the MySQL Audit.log file in a SQL table?
I tried to read the Audit.log file of MySQL Enterprise into a table or some other friendlier format that is easy to access and understand, but the XML format it is written in has an irregular structure:
2019-02-17T12:53:23 UTC 1_2019-02-17T12:53:23 Audit 1 1 ./mysqld --basedir=/MYSQL_HOME/mysql x86_64-linux-glibc2.12 5.6.38-enterprise-commercial-advanced-log
2019-02-19T02:22:23 UTC 2_2019-02-17T12:53:23 Connect 460432 0 0 nbryan 172.17.3.150 connect nbryan
2019-02-19T02:22:23 UTC 3_2019-02-17T12:53:23 Query 460432 0 0 flopez[nbryan] @ [172.17.3.150] 172.17.3.150 set_option /*!40101 set @@session.wait_timeout=28800 */
As you can see, the same tags are not always generated, so I have not been able to format it into a table from SQL Server (there is a lot of information on how to read XML using SQL). I wanted to know if there is another way, maybe from MySQL itself, although it is much harder to find material about this than about other solutions. Any help or idea would be very helpful. Thank you.
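Editor's sketch (an assumption, not a tested recipe): if the audit plugin is writing the new-style XML format with <AUDIT_RECORD/> elements, LOAD XML can stage the attributes it recognises into a table; the column names below must match the record attributes, the path is hypothetical, and secure_file_priv plus the FILE privilege apply as for any server-side load.
CREATE TABLE audit_staging (
    `TIMESTAMP` VARCHAR(30),
    `RECORD_ID` VARCHAR(60),
    `NAME`      VARCHAR(60),
    `USER`      VARCHAR(128),
    `HOST`      VARCHAR(128),
    `SQLTEXT`   TEXT
);
LOAD XML INFILE '/MYSQL_HOME/mysql/data/audit.log'
    INTO TABLE audit_staging
    ROWS IDENTIFIED BY '<AUDIT_RECORD>';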
Fran.J (121 rep)
Mar 7, 2019, 02:15 PM • Last activity: Aug 9, 2025, 07:08 PM
Showing page 10 of 20 total questions