
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
0 answers
9 views
Kafka Debezium connectors lose Aurora DB connection, taking exactly 14 minutes to recover
I am running a Strimzi Kafka setup in Kubernetes in production. I have 5 Connect pods, 1 Strimzi operator, and ~9 connectors distributed across these 5 pods. Some pods are getting the error below, and only the connectors hosted on those pods go down; they recover on their own after exactly 14 minutes.

INFO: Keepalive: Trying to restore lost connection to aurora-db-prod.cluster-randomstring.us-east-1.rds.amazonaws.com:3306

However, other pods that do not log this message continue to receive messages just fine. Usually this happens on 3 pods and ends up impacting almost 6-7 connectors, and after exactly 14 minutes everything recovers without losing any data, resuming from the last offset. One of these connectors is crucial and cannot be down for more than 3 minutes, as it is customer impacting. ChatGPT recommended adding the settings below for faster recovery, but they didn't help either:

# ---- Fast Recovery Timeouts ----
database.connectionTimeout.ms: 10000      # Fail connection attempts fast (default: 30000)
database.connect.backoff.max.ms: 30000    # Cap retry gap to 30s (default: 120000)
# ---- Connector-Level Retries ----
connect.max.retries: 30                   # 20 restart attempts (default: 3)
connect.backoff.initial.delay.ms: 1000    # Small delay before restart
connect.backoff.max.delay.ms: 8000        # Cap restart backoff to 8s (default: 60000)
retriable.restart.connector.wait.ms: 5000

Below is the full config of one of my connectors.

## NOTE: Except for the last few configs, all other configs are the same for all connectors. Only the last 8 configs are connector specific.

KafkaConnectors:
  dbc-mysql-tables-connector:
    enabled: true
    annotations: {}
    labels:
      strimzi.io/cluster: debezium-connect-cluster
    spec:
      class: io.debezium.connector.mysql.MySqlConnector
      tasksMax: 1
      autoRestart:
        enabled: true
        maxRestarts: 10
      config:
        database.server.name: mysql_prod_tables
        snapshot.mode: schema_only
        snapshot.locking.mode: none
        topic.creation.enable: true
        topic.creation.default.replication.factor: 3
        topic.creation.default.partitions: 1
        topic.creation.default.compression.type: snappy
        database.history.kafka.topic: schema-changes.prod.mysql
        database.include.list: prod
        snapshot.new.tables: parallel
        tombstones.on.delete: "false"
        topic.naming.strategy: io.debezium.schema.DefaultTopicNamingStrategy
        topic.prefix: main.mysql
        key.converter.schemas.enable: "false"
        value.converter.schemas.enable: "false"
        key.converter: org.apache.kafka.connect.json.JsonConverter
        value.converter: org.apache.kafka.connect.json.JsonConverter
        schema.history.internal.kafka.topic: schema-history.prod.mysql
        include.schema.changes: true
        message.key.columns: "prod.*:id"
        decimal.handling.mode: string
        producer.override.compression.type: zstd
        producer.override.batch.size: 800000
        producer.override.linger.ms: 5
        producer.override.max.request.size: 50000000
        database.history.kafka.recovery.poll.interval.ms: 60000
        schema.history.internal.kafka.recovery.poll.interval.ms: 30000
        errors.tolerance: all
        heartbeat.interval.ms: 30000        # 30 seconds, for example
        heartbeat.topics.prefix: debezium-heartbeat
        retry.backoff.ms: 800
        errors.retry.timeout: 120000
        errors.retry.delay.max.ms: 5000
        errors.log.enable: true
        errors.log.include.messages: true
        # ---- Fast Recovery Timeouts ----
        database.connectionTimeout.ms: 10000    # Fail connection attempts fast (default: 30000)
        database.connect.backoff.max.ms: 30000  # Cap retry gap to 30s (default: 120000)
        # secrets:
        database.host:
        database.port: 3306
        database.user:
        database.apssword:
        # ---- Connector-Level Retries ----
        connect.max.retries: 30                 # 20 restart attempts (default: 3)
        connect.backoff.initial.delay.ms: 1000  # Small delay before restart
        connect.backoff.max.delay.ms: 8000      # Cap restart backoff to 8s (default: 60000)
        retriable.restart.connector.wait.ms: 5000
        # below values are different for each connector, above are same for all connectors.
        database.server.name: mysql_prod_tables
        snapshot.mode: schema_only
        database.include.list: prod
        message.key.columns: "prod.*:id"
        database.server.id: 5434535
        table.exclude.list:
        table.include.list: ""
        errors.deadletterqueue.topic.name: dlq.prod.mysql.tables

## Also NOTE: I am using Debezium connector 2.7.4-Final, which is almost 4-5 versions old. Could this be a bug in the older version that was resolved later? From what I checked online, I couldn't find anything that confirms my suspicion. Please help me, as this is impacting our SLAs with customers almost every other day, and I am still a rookie with Kafka and Strimzi.
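For reference, the retry settings above are Kafka Connect restart settings; the properties that control the MySQL binlog connection itself (the code path that emits the Keepalive log line) are the connector's connect.* options. A minimal sketch with illustrative values only; the exact names and defaults should be verified against the Debezium 2.7 MySQL connector documentation:

# Debezium MySQL connector: binlog-client connection settings (values are illustrative assumptions)
connect.timeout.ms: 10000   # how long a (re)connect attempt to MySQL may take before timing out
connect.keep.alive: true    # keepalive thread that produces the "Trying to restore lost connection" message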
SafiJunaid (101 rep)
Aug 10, 2025, 01:00 PM • Last activity: Aug 10, 2025, 09:48 PM
0 votes
1 answers
858 views
How do we get the history of SQL statements run as sysdba?
We have been running a list of commands in the CLI:

sqlplus / as sysdba

We did not have "history on" set, and we logged out of the session. How do we get the list of SQL statements we previously ran as sysdba? We tried the following:

SELECT h.sample_time, u.username, h.program, h.module, s.sql_text
FROM DBA_HIST_ACTIVE_SESS_HISTORY h, DBA_USERS u, DBA_HIST_SQLTEXT s
WHERE sample_time >= SYSDATE - 1
  AND h.user_id = u.user_id
  AND h.sql_id = s.sql_id
ORDER BY h.sample_time;

We also tried:

select * from v$sql
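A slightly narrower variant of the v$sql attempt, for reference (a sketch only: V$SQL is a cache of recent statements, so older statements may already have aged out, and SYS also parses a lot of internal recursive SQL):

SELECT s.sql_text,
       s.parsing_schema_name,
       s.last_active_time
FROM   v$sql s
WHERE  s.parsing_schema_name = 'SYS'
ORDER  BY s.last_active_time DESC;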
Kristi Nanza_ (3 rep)
Sep 30, 2022, 08:42 PM • Last activity: Aug 10, 2025, 09:04 PM
0 votes
2 answers
139 views
Migrate availability group listener to a different availability group
Hopefully a simple question to answer! Can you point an existing availability group listener to a different availability group? Our current production environments use Enterprise licenses, and the desire is to reduce these to Standard. There is a piece of work to create new basic availability groups for the databases in question. The hope was that, at the point of switchover, the existing listeners could simply be pointed to the new availability groups where appropriate.
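A listener is owned by a single availability group, so "repointing" it is normally a drop-and-recreate at switchover time: remove the listener from the old AG, then add one with the same DNS name, IP, and port to the new AG. A hedged T-SQL sketch with hypothetical names and addresses:

-- on the old (Enterprise) availability group
ALTER AVAILABILITY GROUP [AG_Old] REMOVE LISTENER 'SalesListener';

-- on the new basic availability group, reusing the same DNS name, IP and port
ALTER AVAILABILITY GROUP [AG_New]
    ADD LISTENER 'SalesListener'
    (WITH IP ((N'10.0.0.50', N'255.255.255.0')), PORT = 1433);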
AM28337 (1 rep)
Jan 28, 2025, 12:05 PM • Last activity: Aug 10, 2025, 08:02 PM
1 votes
1 answers
908 views
How can I design a database structure for ranking cards in a list on a Kanban board?
Thank you for helping me. I have a problem designing the database structure for a kanban board application: how do I design it so that cards can be ranked within a list (for sorting and drag-and-drop)? Currently this is my design:

1. cards table: id (PK), list_id, name, title, ??? (how to design for ranking cards in a list, for sorting and drag-and-drop within the list)
2. card_lists table: id (PK), board_id, name, ??? (how to design for ranking card lists on a board, for sorting and drag-and-drop within the board)
3. boards table: id, name

Imagine I am implementing the card-movement function on the kanban board (moving a card within a list, or from one list to another). I'm having a problem using a weight as the metric that determines the order of cards in a list: the more we move a card, the more decimal digits the weight needs (dividing and dividing and dividing...), which means a FLOAT or DOUBLE can no longer store the value exactly. Is there a way to overcome this problem, or a better way to calculate the weight to store in the database? Thank you so much for your help.
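One commonly used alternative to fractional weights, shown here only as a sketch (names are illustrative, not the exact schema above): store an integer position with gaps, so most moves rewrite a single row, and renumber a list only when two neighbours run out of free integers between them.

-- cards keep an integer position spaced in steps of 1024 within each list
CREATE TABLE cards (
    id       INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    list_id  INT UNSIGNED NOT NULL,
    title    VARCHAR(255) NOT NULL,
    position INT UNSIGNED NOT NULL,
    KEY idx_list_position (list_id, position)
);

-- moving a card between two neighbours: give it the midpoint of their positions
-- (a one-row UPDATE); when no free integer remains between the neighbours,
-- renumber the whole list (MySQL 8+ window function; list id 42 is a placeholder):
UPDATE cards c
JOIN (SELECT id, 1024 * ROW_NUMBER() OVER (ORDER BY position) AS new_pos
      FROM cards WHERE list_id = 42) r ON r.id = c.id
SET c.position = r.new_pos;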
DFX Nguyễn (111 rep)
Dec 30, 2021, 11:14 AM • Last activity: Aug 10, 2025, 07:03 PM
0 votes
1 answers
1278 views
Merge sub-queries into a single query
I am trying to merge these nested **MySQL** queries into one. First I run this query:

select names, city, category from names_data group by names, city, category;

which produces ~28000 rows from the master names_data table. Then, for each row in that result, I run this query inside a Python loop (the name and city values are substituted in by the loop):

select date, name, city, old_mul, new_mul
from name_changes
where name = '<name>' and city = '<city>'
order by date desc
limit 2;

I just want the last 2 records, date-wise, for each name and city combination from the name_changes table. I have tried a few JOIN and GROUP operations but was not able to achieve this. Right now I have 1 + 28000*1 queries. **I want to reduce the number of queries.** Help is really appreciated :) P.S. the tables are different.
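For reference, the "last 2 rows per (name, city)" requirement can be expressed in a single statement with window functions, assuming MySQL 8+ (on the 5.x versions current when this was asked, a correlated LIMIT 2 subquery or session-variable trick would be needed instead). Column names are taken from the question:

SELECT date, name, city, old_mul, new_mul
FROM (
    SELECT nc.date, nc.name, nc.city, nc.old_mul, nc.new_mul,
           ROW_NUMBER() OVER (PARTITION BY nc.name, nc.city
                              ORDER BY nc.date DESC) AS rn
    FROM name_changes AS nc
    JOIN (SELECT DISTINCT names, city FROM names_data) AS nd
      ON nd.names = nc.name AND nd.city = nc.city
) AS ranked
WHERE rn <= 2
ORDER BY name, city, date DESC;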
Beginner (1 rep)
Jul 7, 2016, 07:30 AM • Last activity: Aug 10, 2025, 06:00 PM
1 votes
1 answers
753 views
Compression in MongoDB 3.2 does not work
I installed MongoDB 3.2 and configured snappy compression in the MongoDB configuration file:

engine: wiredTiger
wiredTiger:
  engineConfig:
    cacheSizeGB: 99
    journalCompressor: none
  collectionConfig:
    blockCompressor: snappy
  indexConfig:
    prefixCompression: true

However, I do not think compression actually works, as the average object size in the collection is about the same (I filled a collection created from scratch with the same objects right after the configuration change and service restart):

MMAPv1 no-compression avgObjSize: 5kb
WiredTiger snappy avgObjSize: 4kb
WiredTiger zlib avgObjSize: 4kb

When I run stats on the collection, here is what I see for "snappy":

"compression" : {
    "compressed pages read" : NumberInt(0),
    "compressed pages written" : NumberInt(5),
    "page written failed to compress" : NumberInt(0),
    "page written was too small to compress" : NumberInt(4),
    "raw compression call failed, additional data available" : NumberInt(0),
    "raw compression call failed, no additional data available" : NumberInt(0),
    "raw compression call succeeded" : NumberInt(0)
},

and for zlib:

{
    "compressed pages read" : NumberInt(0),
    "compressed pages written" : NumberInt(1),
    "page written failed to compress" : NumberInt(0),
    "page written was too small to compress" : NumberInt(3),
    "raw compression call failed, additional data available" : NumberInt(0),
    "raw compression call failed, no additional data available" : NumberInt(1),
    "raw compression call succeeded" : NumberInt(2)
}

I am not sure how to read this information, or how to detect what is wrong with compression. I should also say I had only a few (about 10) objects in each scenario. Is that enough to accurately measure the average object compression ratio?

**UPDATE:** With 1188 documents, MongoDB 3.x showed around 50% better compression than MongoDB 2.6 (MMAPv1); however, I see no difference between snappy and zlib: dataSize and avgObjSize are the same for some reason. When I request statistics, it shows WiredTiger:creationString:...block_compressor=zlib... for the first database and block_compressor=snappy for the second one, so the databases and collections do seem to have been created with the right compression algorithms.
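One detail worth keeping in mind when reading these numbers: with WiredTiger, avgObjSize and dataSize report the uncompressed BSON size, while block compression shows up in storageSize (the on-disk size). A quick shell sketch (the collection name is a placeholder):

// compare logical (uncompressed) size with on-disk size, scaled to KB
var s = db.mycollection.stats(1024);
print("dataSize (KB):    " + s.size);         // uncompressed BSON
print("storageSize (KB): " + s.storageSize);  // after snappy/zlib block compression
print("ratio: " + (s.size / s.storageSize).toFixed(2));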
YMC (111 rep)
Aug 28, 2016, 06:30 PM • Last activity: Aug 10, 2025, 05:00 PM
1 votes
1 answers
1334 views
Able to connect with connection manager but SSIS package does not authenticate
I have an SSIS package that connects a MSSQL 2008 server to MSSQL 2000, pulls data off the 2000 server, and transfers it to the 2008 server. The 2000 server was recently virtualized, and since then the SSIS package will not authenticate. I can see both servers with OSQL -L / sqlcmd -L. All services, agents and browsers are running. The SQL user that runs the SSIS package can connect via the Test Connection button in Package Explorer > Connection Manager, but when I attempt to execute the package, the error output indicates that the login failed because the password does not match: Error 18456, Severity 14, State 8. I am sure the password is correct, as I just created the new SQL user to test. Am I missing something? Could the SSIS package be authenticating somewhere else? Should the connection string include the password, or does that get pulled from elsewhere in the SSIS package? I am not a DB admin/SQL guy, so please excuse my green-ness if I am overlooking something simple here, which I hope I am. Thanks for any input.
mcv110 (75 rep)
Apr 21, 2021, 09:27 PM • Last activity: Aug 10, 2025, 04:04 PM
0 votes
1 answers
136 views
optimization with subquery not working as expected
I have a MySQL InnoDB database. One table, called affymetrixProbeset, contains more than 300 million rows. Querying this table with INNER JOINs to other tables, with an ORDER BY and offset/limit, takes almost 4 minutes. I read that a subquery could be optimized better, so I tried to move from my initial INNER JOIN query to an approach using a subquery. The query is:
SELECT affymetrixProbeset.* 
FROM affymetrixProbeset 
WHERE affymetrixProbeset.bgeeAffymetrixChipId IN(
    SELECT affymetrixChip.bgeeAffymetrixChipId 
    FROM affymetrixChip INNER JOIN cond ON affymetrixChip.conditionId = cond.conditionId
    WHERE cond.speciesId = 9606)
order by affymetrixProbeset.affymetrixProbesetId, affymetrixProbeset.bgeeGeneId
limit 10;
The columns used in the ORDER BY correspond to the primary key of the affymetrixProbeset table. The EXPLAIN for this query is:
+----+-------------+--------------------+------------+--------+------------------------------+-------------+---------+--------------------------------------------------+-------+----------+----------------------------------------------+
| id | select_type | table              | partitions | type   | possible_keys                | key         | key_len | ref                                              | rows  | filtered | Extra                                        |
+----+-------------+--------------------+------------+--------+------------------------------+-------------+---------+--------------------------------------------------+-------+----------+----------------------------------------------+
|  1 | SIMPLE      | affymetrixChip     | NULL       | index  | PRIMARY,conditionId          | conditionId | 3       | NULL                                             | 12990 |   100.00 | Using index; Using temporary; Using filesort |
|  1 | SIMPLE      | cond               | NULL       | eq_ref | PRIMARY,speciesId            | PRIMARY     | 3       | bgee_v15_dev.affymetrixChip.conditionId          |     1 |    10.99 | Using where                                  |
|  1 | SIMPLE      | affymetrixProbeset | NULL       | ref    | PRIMARY,bgeeAffymetrixChipId | PRIMARY     | 3       | bgee_v15_dev.affymetrixChip.bgeeAffymetrixChipId |   175 |   100.00 | NULL                                         |
+----+-------------+--------------------+------------+--------+------------------------------+-------------+---------+--------------------------------------------------+-------+----------+----------------------------------------------+
The EXPLAIN is exactly the same as for my initial query, and the query takes the same time (~4 minutes). It looks like the ORDER BY is not done on the affymetrixProbeset table but directly on the affymetrixChip table that is part of the subquery. I was expecting MySQL to run the subquery first and then order using the affymetrixProbeset table. Could you explain to me why it is not done as I expected? I also checked the performance of splitting this query in two. The first one is:
SELECT affymetrixChip.bgeeAffymetrixChipId 
FROM affymetrixChip INNER JOIN cond ON affymetrixChip.conditionId = cond.conditionId
WHERE cond.speciesId = 9606;
I then passed the bgeeAffymetrixChipIds to the query
SELECT affymetrixProbeset.* 
FROM affymetrixProbeset 
WHERE affymetrixProbeset.bgeeAffymetrixChipId IN(.....)
order by affymetrixProbeset.affymetrixProbesetId, affymetrixProbeset.bgeeGeneId
limit 10;
I wrote ..... in the query as I had more than 5000 IDs. The query ran almost instantaneously, as it orders using the primary key. Could you please explain why the subquery did not perform as I expected? Is there another option to optimize the query other than running 2 queries? UPDATE: After adding some indexes proposed by @Rick, the EXPLAIN now looks like:
+----+-------------+--------------------+------------+--------+-----------------------------------------------------+-------------+---------+------------------------------------------------+-------+----------+----------------------------------------------+
| id | select_type | table              | partitions | type   | possible_keys                                       | key         | key_len | ref                                            | rows  | filtered | Extra                                        |
+----+-------------+--------------------+------------+--------+-----------------------------------------------------+-------------+---------+------------------------------------------------+-------+----------+----------------------------------------------+
|  1 | SIMPLE      | affymetrixChip     | NULL       | index  | PRIMARY,conditionId                                 | conditionId | 6       | NULL                                           | 12561 |   100.00 | Using index; Using temporary; Using filesort |
|  1 | SIMPLE      | cond               | NULL       | eq_ref | PRIMARY,speciesId                                   | PRIMARY     | 3       | bgee_v15_0.affymetrixChip.conditionId          |     1 |    10.94 | Using where                                  |
|  1 | SIMPLE      | affymetrixProbeset | NULL       | ref    | PRIMARY,bgeeAffymetrixChipId_2,bgeeAffymetrixChipId | PRIMARY     | 3       | bgee_v15_0.affymetrixChip.bgeeAffymetrixChipId |   174 |   100.00 | NULL                                         |
+----+-------------+--------------------+------------+--------+-----------------------------------------------------+-------------+---------+------------------------------------------------+-------+----------+----------------------------------------------+
The query still takes 3 minutes to run. The PK/FK/index definitions from SHOW CREATE TABLE for these tables are:
# table cond
  PRIMARY KEY (conditionId),
  UNIQUE KEY speciesId (speciesId,conditionId),
  UNIQUE KEY anatEntityId (anatEntityId,cellTypeId,stageId,speciesId,sex,sexInferred,strain),
  KEY exprMappedConditionId (exprMappedConditionId),
  KEY cellTypeId (cellTypeId),
  KEY stageId (stageId),
  CONSTRAINT cond_ibfk_1 FOREIGN KEY (exprMappedConditionId) REFERENCES cond (conditionId) ON DELETE CASCADE,
  CONSTRAINT cond_ibfk_2 FOREIGN KEY (anatEntityId) REFERENCES anatEntity (anatEntityId) ON DELETE CASCADE,
  CONSTRAINT cond_ibfk_3 FOREIGN KEY (cellTypeId) REFERENCES anatEntity (anatEntityId) ON DELETE CASCADE,
  CONSTRAINT cond_ibfk_4 FOREIGN KEY (stageId) REFERENCES stage (stageId) ON DELETE CASCADE,
  CONSTRAINT cond_ibfk_5 FOREIGN KEY (speciesId) REFERENCES species (speciesId) ON DELETE CASCADE

# table affymetrixChip
  PRIMARY KEY (bgeeAffymetrixChipId),
  UNIQUE KEY affymetrixChipId (affymetrixChipId,microarrayExperimentId),
  UNIQUE KEY conditionId (conditionId,bgeeAffymetrixChipId),
  KEY microarrayExperimentId (microarrayExperimentId),
  KEY chipTypeId (chipTypeId),
  CONSTRAINT affymetrixChip_ibfk_1 FOREIGN KEY (microarrayExperimentId) REFERENCES microarrayExperiment (microarrayExperimentId) ON DELETE CASCADE,
  CONSTRAINT affymetrixChip_ibfk_2 FOREIGN KEY (chipTypeId) REFERENCES chipType (chipTypeId) ON DELETE SET NULL,
  CONSTRAINT affymetrixChip_ibfk_3 FOREIGN KEY (conditionId) REFERENCES cond (conditionId) ON DELETE CASCADE

#table affymetrixProbeset
  PRIMARY KEY (bgeeAffymetrixChipId,affymetrixProbesetId),
  UNIQUE KEY bgeeAffymetrixChipId_2 (bgeeAffymetrixChipId,affymetrixProbesetId,bgeeGeneId),
  KEY bgeeAffymetrixChipId (bgeeAffymetrixChipId,expressionId,bgeeGeneId,normalizedSignalIntensity),
  KEY expressionId (expressionId),
  KEY bgeeGeneId (bgeeGeneId,bgeeAffymetrixChipId,affymetrixProbesetId),
  CONSTRAINT affymetrixProbeset_ibfk_1 FOREIGN KEY (bgeeAffymetrixChipId) REFERENCES affymetrixChip (bgeeAffymetrixChipId) ON DELETE CASCADE,
  CONSTRAINT affymetrixProbeset_ibfk_2 FOREIGN KEY (bgeeGeneId) REFERENCES gene (bgeeGeneId) ON DELETE CASCADE,
  CONSTRAINT affymetrixProbeset_ibfk_3 FOREIGN KEY (expressionId) REFERENCES expression (expressionId) ON DELETE SET NULL
UPDATE 2: Sizes of the tables, as reported by SHOW TABLE STATUS:
| Name               | Engine | Version | Row_format | Rows      | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time         | Update_time | Check_time | Collation       | Checksum | Create_options | Comment |
| affymetrixChip     | InnoDB | 10      | Dynamic    | 13020     | 122            | 1589248     | 0               | 1097728      | 2097152   | 104523         | 2022-12-02 13:23:15 | NULL        | NULL       | utf8_general_ci | NULL     |                |         |
| affymetrixProbeset | InnoDB | 10      | Dynamic    | 312237020 | 93             | 29169287168 | 0               | 41072721920  | 6291456   | NULL           | 2022-10-31 09:34:15 | NULL        | NULL       | utf8_general_ci | NULL     |                |         |
| cond               | InnoDB | 10      | Dynamic    | 44578     | 106            | 4734976     | 0               | 6815744      | 2097152   | 44663          | 2022-12-02 13:19:09 | NULL        | NULL       | utf8_general_ci | NULL     |                |         |
The query below returns a count of 4889:
SELECT COUNT(*) FROM cond WHERE speciesId = 9606;
The query below returns a count of 5452:
SELECT COUNT(bgeeAffymetrixChipId) FROM affymetrixChip INNER JOIN cond ON cond.conditionId = affymetrixChip.conditionId WHERE cond.speciesId = 9606;
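For what it's worth, the manual two-step approach that turned out to be fast can be kept inside MySQL with an explicit temporary table; a sketch only, not a guaranteed win, since the optimizer may still choose a different plan for other species:

CREATE TEMPORARY TABLE tmp_chips (PRIMARY KEY (bgeeAffymetrixChipId))
SELECT ac.bgeeAffymetrixChipId
FROM affymetrixChip ac
INNER JOIN cond c ON c.conditionId = ac.conditionId
WHERE c.speciesId = 9606;

SELECT p.*
FROM affymetrixProbeset p
INNER JOIN tmp_chips t ON t.bgeeAffymetrixChipId = p.bgeeAffymetrixChipId
ORDER BY p.affymetrixProbesetId, p.bgeeGeneId
LIMIT 10;

DROP TEMPORARY TABLE tmp_chips;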
jwollbrett (1 rep)
Nov 25, 2022, 05:18 PM • Last activity: Aug 10, 2025, 03:05 PM
1 votes
1 answers
385 views
Allow user to drop only tables / views created by that user in MySQL
Is it possible to allow a user to *only* drop tables and views that he/she created in MySQL? How is this achieved?
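MySQL privileges are granted on databases, tables, or columns rather than on "objects created by this user", so there is no built-in owner concept to rely on for tables. A common workaround is to give each user a dedicated schema and scope CREATE/DROP there; a hedged sketch with hypothetical names:

CREATE DATABASE alice_db;
CREATE USER 'alice'@'%' IDENTIFIED BY 'change_me';
-- alice can create and drop tables/views only inside her own schema
GRANT CREATE, DROP, CREATE VIEW, SELECT, INSERT, UPDATE, DELETE
    ON alice_db.* TO 'alice'@'%';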
Giovanni Colitti (131 rep)
Jan 20, 2021, 04:54 PM • Last activity: Aug 10, 2025, 02:08 PM
0 votes
1 answers
56 views
(SOLVED) Source and Replica in a MariaDB 'pair' do not start exchanging data
On Linux Slackware PCs, I have a 'Source' database on one PC and a 'Replica' on the other. On the Source end (11.4.7-MariaDB) I've set server ID=1. On the Replica end (10.11.6-MariaDB) I've set ID=11 and configured the Source IP. I've started them, and they seem to like each other: the show replica status output reports everything as OK, no errors, comms OK, etc... BUT when I create a new table on the Source machine, nothing happens on the Replica: no new table is created. And after I manually create the table on the Replica and insert a record on the Source, no data transfer happens either. No pointers are incremented, and seemingly no data is transferred. No errors appear in the logs. Here are the status reports for Source and Replica. I despair: I consulted AI, and after several sessions the AI was in the same state as I was, repeatedly asking me to insert new records at the Source and check at the other end, maybe hoping something would repair itself, and suggesting extra complications such as adding SSH, GTID and other optional features. What can I do to get the show on the road? I've reinstalled MariaDB at the Replica. I'm somewhat reluctant to change the DB at the Source, as it's in use. I've checked permissions on files and directories... The last session ended at 04:30 AM this morning.
show master status\G
*************************** 1. row ***************************
            File: mysql-bin.000020
        Position: 328
    Binlog_Do_DB: homeorptemp, farma1, test_replication
Binlog_Ignore_DB: 
1 row in set (0.000 sec)
**replica** status:
show slave status\G
*************************** 1. row ***************************
                Slave_IO_State: Waiting for master to send event
                   Master_Host: 192.168.1.102
                   Master_User: my_repl
                   Master_Port: 3306
                 Connect_Retry: 60
               Master_Log_File: mysql-bin.000020
           Read_Master_Log_Pos: 328
                Relay_Log_File: mysql-relay-bin.000007
                 Relay_Log_Pos: 627
         Relay_Master_Log_File: mysql-bin.000020
              Slave_IO_Running: Yes
             Slave_SQL_Running: Yes
               Replicate_Do_DB: 
           Replicate_Ignore_DB: 
            Replicate_Do_Table: 
        Replicate_Ignore_Table: 
       Replicate_Wild_Do_Table: 
   Replicate_Wild_Ignore_Table: 
                    Last_Errno: 0
                    Last_Error: 
                  Skip_Counter: 0
           Exec_Master_Log_Pos: 328
               Relay_Log_Space: 1235
               Until_Condition: None
                Until_Log_File: 
                 Until_Log_Pos: 0
            Master_SSL_Allowed: No
            Master_SSL_CA_File: 
            Master_SSL_CA_Path: 
               Master_SSL_Cert: 
             Master_SSL_Cipher: 
                Master_SSL_Key: 
         Seconds_Behind_Master: 0
 Master_SSL_Verify_Server_Cert: No
                 Last_IO_Errno: 0
                 Last_IO_Error: 
                Last_SQL_Errno: 0
                Last_SQL_Error: 
   Replicate_Ignore_Server_Ids: 
              Master_Server_Id: 1
                Master_SSL_Crl: 
            Master_SSL_Crlpath: 
                    Using_Gtid: No
                   Gtid_IO_Pos: 
       Replicate_Do_Domain_Ids: 
   Replicate_Ignore_Domain_Ids: 
                 Parallel_Mode: optimistic
                     SQL_Delay: 0
           SQL_Remaining_Delay: NULL
       Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
              Slave_DDL_Groups: 0
Slave_Non_Transactional_Groups: 0
    Slave_Transactional_Groups: 0
          Replicate_Rewrite_DB: 
1 row in set (0.001 sec)
**EDIT** my.cnf at the replica:
[client]
[mysqld]
port                    = 3306
socket                  = /var/run/mysql/mysql.sock
skip-external-locking
bind-address            = 0.0.0.0
key_buffer_size         = 256M
max_allowed_packet      = 32M
table_open_cache        = 1024
sort_buffer_size        = 2M
net_buffer_length       = 8K
read_buffer_size        = 512K
read_rnd_buffer_size    = 1M
skip-networking         = OFF
server-id               = 11
log-bin                 = mysql-bin
binlog_format           = mixed
innodb_buffer_pool_size = 512M
innodb_log_file_size    = 256M
read_only               = OFF
log_warnings		= 1
log_slave_updates 	= ON

[mysqldump]
quick
max_allowed_packet     = 32M

[mysql]
no-auto-rehash

[myisamchk]
show processlist;
+----+-------------+-----------+------+-----------+-------+--------------------------------------------------------+------------------+----------+
| Id | User        | Host      | db   | Command   | Time  | State                                                  | Info             | Progress |
+----+-------------+-----------+------+-----------+-------+--------------------------------------------------------+------------------+----------+
|  5 | system user |           | NULL | Slave_IO  | 17725 | Waiting for master to send event                       | NULL             |    0.000 |
|  6 | system user |           | NULL | Slave_SQL | 17724 | Slave has read all relay log; waiting for more updates | NULL             |    0.000 |
|  8 | root        | localhost | NULL | Query     |     0 | starting                                               | show processlist |    0.000 |
+----+-------------+-----------+------+-----------+-------+--------------------------------------------------------+------------------+----------+
3 rows in set (0.000 sec)
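One thing worth checking on the Source side (a hedged diagnostic sketch, not a diagnosis): with binlog_format = mixed, a binlog-do-db list such as the Binlog_Do_DB shown above is matched against the *default* database for statement-based events, so statements issued without a matching USE may never be written to the binary log at all. Inspecting the binlog directly shows whether the test statements made it in:

-- on the Source, after running a test CREATE TABLE / INSERT:
SHOW MASTER STATUS;
SHOW BINLOG EVENTS IN 'mysql-bin.000020' FROM 328 LIMIT 20;
-- if the test statements are missing here, they were filtered out before
-- replication (e.g. by binlog-do-db), rather than lost on the Replica.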
jcoppens (121 rep)
Aug 5, 2025, 03:32 PM • Last activity: Aug 10, 2025, 01:25 PM
2 votes
1 answers
503 views
Handling unique IDs on client side
Reading around on the topic hasn't got me closer to understanding best practices. I'm also relatively new to and self-taught in programming, so I might have some questions about how to actually execute the solution. The app I'm building requires a lot of DOM elements to be generated based on database rows. These should be updatable by the user, and at the minute I'm using the auto-incrementing primary key as an HTML attribute id. Obviously I don't want users to modify this (or the associated JavaScript object) and thereby affect other users' data when it is saved or updated. I have considered the following options:

1. Not using auto-increment but a UUID or GUID, but I've read these are not all that advisable, especially for DB performance.
2. Concealing the client-side ID by hashing it on the server side. This also doesn't sound particularly performant.
3. Having a 'lookup table' that is referenced and produces another, non-incremental ID. Doesn't seem like a bad idea, but I'm a bit unsure how best to go about generating this and what it should look like.
4. Including an extra column in the tables that would be updated, containing for example the user's name, to be combined with the primary key. This seems simple enough, but adding this extra column to every table that might be updated sounds redundant and not normalized.
5. Keeping the primary keys visible but producing a session table for the user that tracks which IDs were sent out and therefore which can be updated. New rows can be generated so long as rows belonging to other users aren't updated.

Or maybe I'm barking up the wrong tree here :) Thanks!
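For illustration, option 3 is often implemented not as a separate lookup table but as a second, non-guessable key column on the same row, keeping the auto-increment key internal; a minimal MySQL sketch (names are hypothetical), with the update always scoped to the authenticated owner as well:

CREATE TABLE items (
    id        BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,  -- internal key, never sent to the browser
    public_id CHAR(36) NOT NULL,                           -- random UUID string used in the DOM
    user_id   BIGINT UNSIGNED NOT NULL,                    -- owner
    payload   TEXT,
    UNIQUE KEY uq_items_public_id (public_id)
);

-- application-side update: the two placeholders are bound to the
-- client-supplied public id and the server-side authenticated user id
UPDATE items
SET    payload = 'new value'
WHERE  public_id = ?
  AND  user_id = ?;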
Mikey B (21 rep)
Sep 1, 2018, 12:39 PM • Last activity: Aug 10, 2025, 01:06 PM
0 votes
0 answers
37 views
Slow DELETE due to FK trigger on partitioned child table
I'm working with PostgreSQL 15 and experimenting with table partitioning to improve performance.

**Original setup:** I have two tables: tasks (parent) with ~65M rows and records (child) with ~200M rows. There is a foreign key from records.task_id → tasks.id. When executing a DELETE statement on tasks, it takes approximately 12 seconds to complete.

explain analyze:
my_db=*> EXPLAIN (ANALYZE, BUFFERS, TIMING, VERBOSE) DELETE FROM tasks WHERE project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e';
                                                                      QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------
 Delete on public.tasks  (cost=5628.30..1402047.95 rows=0 width=0) (actual time=1323.519..1323.520 rows=0 loops=1)
   Buffers: shared hit=1160307
   ->  Bitmap Heap Scan on public.tasks  (cost=5628.30..1402047.95 rows=502675 width=6) (actual time=45.328..373.039 rows=513093 loops=1)
         Output: ctid
         Recheck Cond: (tasks.project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e'::uuid)
         Heap Blocks: exact=133678
         Buffers: shared hit=134121
         ->  Bitmap Index Scan on tasks__project_id  (cost=0.00..5502.63 rows=502675 width=0) (actual time=24.057..24.058 rows=513093 loops=1)
               Index Cond: (tasks.project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e'::uuid)
               Buffers: shared hit=443
 Query Identifier: -8361055271394889220
 Planning Time: 0.069 ms
 Trigger RI_ConstraintTrigger_a_991596545 for constraint records_task_id_dfa43a78_fk_tasks: time=11377.093 calls=513093
 Execution Time: 12742.066 ms
(14 rows)
I decided to partition the records table based on project_id (list) by doing the following:
1. Create a new table tasks_new and copy the content of tasks
2. Create a new partitioned records table, PARTITION BY LIST (project_id)
3. Change the PRIMARY KEY to be based on (project_id, id)
4. Add a foreign key constraint to tasks_new based on (project_id, task_id)
5. Generate the 800 partition tables and populate the data from records

This works as expected; however, when running and comparing the DELETE statement on tasks_new, I see it is slower than in the original (non-partitioned) setup. explain analyze:
my_db=*> EXPLAIN (ANALYZE, BUFFERS, TIMING, VERBOSE) DELETE FROM tasks_new WHERE project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e';
                                                                    QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------
 Delete on public.tasks_new  (cost=15925.46..1400465.79 rows=0 width=0) (actual time=1352.341..1352.342 rows=0 loops=1)
   Buffers: shared hit=1162960
   ->  Bitmap Heap Scan on public.tasks_new  (cost=15925.46..1400465.79 rows=497147 width=6) (actual time=62.350..390.696 rows=513093 loops=1)
         Output: ctid
         Recheck Cond: (tasks_new.project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e'::uuid)
         Heap Blocks: exact=133678
         Buffers: shared hit=136774
         ->  Bitmap Index Scan on tasks_new_pkey  (cost=0.00..15801.17 rows=497147 width=0) (actual time=41.590..41.590 rows=513093 loops=1)
               Index Cond: (tasks_new.project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e'::uuid)
               Buffers: shared hit=3096
 Query Identifier: -3093644494149541820
 Planning Time: 0.066 ms
 Trigger RI_ConstraintTrigger_a_991528806 for constraint records_parti_task_id_dfa43a78_fk_tasks: time=28558.369 calls=513093
 Execution Time: 29965.025 ms
(14 rows)
I saw that most of the time is spent in the foreign key trigger (to see it, I intentionally ran set constraints all immediate;), and it is SLOWER than the test without partitions.

RI_ConstraintTrigger_a_991596545 fires RI_FKey_noaction_del in AFTER DELETE on table tasks_new. However, I don't know how RI_FKey_noaction_del is implemented, and I suspect it doesn't work well with partitioned tables, or perhaps scans all partitions together (which would be inefficient). Please help me understand how it works and how I can improve it.

P.S. (edit): I forgot to mention that I also added an index on (project_id, record_id) to the tasks_new table (partitioned).
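For reference, per deleted tasks_new row the NO ACTION trigger has to run an existence check against the referencing table, roughly of the shape sketched below (the real check in ri_triggers.c is a parameterized plan, so this is only an approximation). Looking at its plan by hand shows whether it prunes to a single partition and hits an index on the FK columns; the task id value is a placeholder:

-- approximate shape of the per-row FK check fired by the delete on tasks_new
EXPLAIN
SELECT 1
FROM records r                       -- the partitioned child table
WHERE r.project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e'
  AND r.task_id = 12345              -- placeholder id of a deleted task
FOR KEY SHARE OF r;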
Cowabunga (145 rep)
Aug 7, 2025, 10:36 AM • Last activity: Aug 10, 2025, 12:27 PM
0 votes
2 answers
12651 views
Getting current number of connections in MySQL
I want to know if there is any parameter to show the current number of MySQL connections (or the peak value in the past 10 minutes or something else). Right now, I only see max and max_used connections.

mysql> show global status like '%connection%';
+-----------------------------------+---------------------+
| Variable_name                     | Value               |
+-----------------------------------+---------------------+
| Connection_errors_accept          | 0                   |
| Connection_errors_internal        | 0                   |
| Connection_errors_max_connections | 434                 |
| Connection_errors_peer_address    | 0                   |
| Connection_errors_select          | 0                   |
| Connection_errors_tcpwrap         | 0                   |
| Connections                       | 2380515             |
| Max_used_connections              | 152                 |
| Max_used_connections_time         | 2020-07-09 19:28:43 |
+-----------------------------------+---------------------+
9 rows in set (0.00 sec)

mysql> show global variables like '%connection%';
+--------------------------+-------------------+
| Variable_name            | Value             |
+--------------------------+-------------------+
| character_set_connection | latin1            |
| collation_connection     | latin1_swedish_ci |
| max_connections          | 300               |
| max_user_connections     | 0                 |
+--------------------------+-------------------+
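For reference, the counters that describe the current moment are the Threads_% status variables; Connections is a cumulative count of connection attempts since startup. A quick sketch:

-- currently open connections, and how many are actively running a statement
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL STATUS LIKE 'Threads_running';
-- per-connection detail
SHOW PROCESSLIST;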
mahmood (113 rep)
Jul 12, 2020, 07:46 AM • Last activity: Aug 10, 2025, 12:04 PM
1 votes
1 answers
548 views
Write latency for adding an index on read replica
This is a question that I've not been able to find any good documentation for on the internet. I think it's a common industry practice that, for tables with heavy writes, you add an index on the read replica and not on the master. This way you ensure the writes on the master don't slow down, while also making reads on the read replica faster. But doesn't the read replica also have writes, in the form of binlog replication? If I'm not wrong, the replication is simply a form of copy-pasting the commands from the binlog as-is. But the index on the read replica would still have to be updated, right? Since the binlog file won't have any "index update" commands (because the master doesn't even have the index), I'm assuming that the index update on the read replica happens when an INSERT or an UPDATE command arrives from the binlog. So is it still not a WRITE operation that happens? Will it still not slow down the write latency? I understand that not all DBs have the capability of indexing a read replica separately, so this question is restricted to only those DBs that do, such as MySQL.
Sidharth Samant (203 rep)
Jun 1, 2023, 07:46 AM • Last activity: Aug 10, 2025, 11:06 AM
0 votes
1 answers
155 views
The operation could not be performed because OLE DB provider "MSOLEDBSQL" for linked server "ServerB" was unable to begin a distributed transaction
A Tale As Old As Time... I have a Linked Server set up to a 3rd-party database which I have no ownership of or access to otherwise. Let's call it ServerB. I have a stored procedure (SomeStoredProcedure) that selects from Linked Server ServerB. If I explicitly set the isolation level to SERIALIZABLE and then try to insert the results of SomeStoredProcedure into a local temp table, I get the following error:

> OLE DB provider "MSOLEDBSQL" for linked server "ServerB" returned message "The parameter is incorrect.".
>
> Msg 7399, Level 16, State 1, Line 1
>
> The OLE DB provider "MSOLEDBSQL" for linked server "ServerB" reported an error. One or more arguments were reported invalid by the provider.
>
> Msg 7391, Level 16, State 2, Line 1
>
> The operation could not be performed because OLE DB provider "MSOLEDBSQL" for linked server "ServerB" was unable to begin a distributed transaction.

If I just execute the procedure directly (without inserting the results into a local temp table), it works. If I don't use the SERIALIZABLE isolation level, it also works. (Other explicit isolation levels work as well.) I have tried disabling Enable Promotion of Distributed Transactions for RPC, as mentioned in other answers (screenshot of the linked server options omitted). But no dice, same error (screenshot omitted). I understand that the query wants to promote to a distributed transaction in the above scenario since a Linked Server is involved (I assume enforcing SERIALIZABLE isolation is more involved across a remote server). But is it possible to prevent it from promoting to a distributed transaction under these circumstances? The same issue is reproducible when using sp_executesql to select from the Linked Server. Repro code, for example:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

DROP TABLE IF EXISTS #LocalTempTable;
CREATE TABLE #LocalTempTable (ID INT);

INSERT INTO #LocalTempTable (ID)
EXEC sp_executesql N'SELECT ID FROM ServerB.DatabaseName.SchemaName.SomeTable;';
*Reminder: I don't own this 3rd party server, and can't change any settings on it such as actually enabling the MSDTC.
J.D. (40913 rep)
Jun 27, 2025, 06:06 PM • Last activity: Aug 10, 2025, 10:03 AM
1 votes
2 answers
1708 views
Relationship from Orders to Shop and Supplier, or to general User?
I have a platform where

- a User can have a Shop, which can place Orders
- a User can be a Supplier, which can see Orders from Shops coming in.

I have created the following diagram depicting the structure I have modeled so far (diagram not reproduced here). Is that a *good* schema? Or would it make more sense to *couple* the relationships to the User straight away? I'm guessing that would cause more overhead in my backend (Laravel - PHP)? Thank you!
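For illustration, a minimal DDL sketch of the structure as described, with hypothetical table and column names (in Laravel these would normally be migrations): orders reference the Shop and Supplier roles, which in turn each belong to a User.

CREATE TABLE users     (id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY);
CREATE TABLE shops     (id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
                        user_id BIGINT UNSIGNED NOT NULL,
                        FOREIGN KEY (user_id) REFERENCES users (id));
CREATE TABLE suppliers (id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
                        user_id BIGINT UNSIGNED NOT NULL,
                        FOREIGN KEY (user_id) REFERENCES users (id));
CREATE TABLE orders    (id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
                        shop_id BIGINT UNSIGNED NOT NULL,
                        supplier_id BIGINT UNSIGNED NOT NULL,
                        FOREIGN KEY (shop_id) REFERENCES shops (id),
                        FOREIGN KEY (supplier_id) REFERENCES suppliers (id));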
Miguel Stevens (169 rep)
May 9, 2017, 02:32 PM • Last activity: Aug 10, 2025, 09:08 AM
1 votes
1 answers
427 views
In MySQL replication, why aren't queries run on the slave when there was no USE statement on master?
We have replication running from our master server to a slave. binlog_format is the default, mixed. We have noticed something weird and are not sure whether it's a bug or intended behaviour. Certain statements are being ignored.

## Ignored
DELETE FROM schema_name.table_name WHERE id = 123;

## Works
USE schema_name;
DELETE FROM schema_name.table_name WHERE id = 123;

## Also works!
USE different_schema_name;
DELETE FROM schema_name.table_name WHERE id = 123;

When I delete something without previously selecting a database on the master, the query is ignored on the slave. But when I issue a USE schema_name on the master before the statement, *even if it is a different schema*, then it does run on the slave. Regardless of the USE and whether they run on the slave, the statements always appear in the binary logs on the master. We noticed this because we have a script that tidies up old shopping carts by looping over a list of databases and issuing DELETE FROM schema_name.carts WHERE created_at ..., but replication was breaking when an INSERT INTO carts SET cart_number = 'xxx' call took place on the slave: the cart number (a unique column) no longer existed on the master (after a successful check to validate as much), but on the slave it still existed, so it threw an error and broke replication. **Is this intended behaviour? What's the best method to ensure this kind of behaviour doesn't happen?** For reference, and possibly meaningful: the master is MariaDB (in a DRBD cluster) and the slave is Percona.
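For context (hedged, since the replica's filter configuration isn't shown in the question): with statement-based events, replicate-do-db is matched against the statement's *default* database (whatever USE selected), which produces exactly this behaviour, while the table-level wildcard filters match the qualified table name instead. A sketch of the my.cnf difference on the slave:

# matched against the default database (the last USE) for statement-based events
replicate-do-db = schema_name

# matched against the schema-qualified table name, regardless of USE
replicate-wild-do-table = schema_name.%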
Leonard Challis (142 rep)
Dec 7, 2018, 01:38 PM • Last activity: Aug 10, 2025, 07:08 AM
0 votes
1 answers
748 views
Joining without a unique identifier
**Payments**

| PaymentRef | Product | PaymentAmount | MerchantRef |
| ---------- | ------- | ------------- | ----------- |
| P01        | ABC     | 100           | MR01        |
| P02        | ABC     | 200           | MR02        |
| P03        | XYZ     | 200           | MR03        |

**Refunds**

| RefundRef | Product | PaymentAmount | MerchantRef |
| --------- | ------- | ------------- | ----------- |
| R01       | ABC     | -10           | MR04        |
| R02       | ABC     | -100          | MR05        |

These tables used to be joined on MerchantRef, resulting in a list of payments and refunds. This worked because only a single refund against a product was allowed. Multiple refunds are now allowed, meaning MerchantRef must change for each refund (a payment-provider requirement), which breaks the join. If I change the join to use Product, I end up with duplicated rows. Is there a method or some trickery that will allow me to join on Product at all? While Product can exist multiple times, it always refers to the same thing. I don't think there's a way to proceed, as we have no way of knowing which refund refers to which payment. The only approach I can think of is to SUM() PaymentAmount for all matching Product records, do the same for refunds, and work with totals. SQL isn't my forte at all, but am I right in thinking my only solution here is having a reference between the tables that never changes? I'm trying to keep changes to a minimum; I never would have started with this design, but here we are.
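For illustration, the totals-per-product approach described above can be written as two aggregations joined on Product; a sketch only, and it deliberately reports net figures, since individual refunds cannot be tied back to individual payments:

SELECT p.Product,
       p.TotalPaid,
       COALESCE(r.TotalRefunded, 0) AS TotalRefunded,
       p.TotalPaid + COALESCE(r.TotalRefunded, 0) AS NetAmount  -- refunds are stored as negative amounts
FROM  (SELECT Product, SUM(PaymentAmount) AS TotalPaid
       FROM Payments GROUP BY Product) AS p
LEFT JOIN
      (SELECT Product, SUM(PaymentAmount) AS TotalRefunded
       FROM Refunds GROUP BY Product) AS r
  ON r.Product = p.Product;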
supermanpineaplle (1 rep)
Feb 5, 2021, 02:59 PM • Last activity: Aug 10, 2025, 06:01 AM
0 votes
1 answers
471 views
Clarification on steps creating Database Replication Definition
I am being asked to create database replication definitions for a couple of databases (around 20 databases). I just wanted to clarify is the below-mentioned steps are the correct steps to be carried out? Let's says 'customer' is one of my database. 1. Set ddl in tran to true `sp_dboption customer,"d...
I am being asked to create database replication definitions for a couple of databases (around 20). I just want to confirm whether the steps below are the correct ones to carry out. Let's say 'customer' is one of my databases.

1. Set ddl in tran to true: sp_dboption customer,"ddl in tran", true
2. Mark the primary database using sp_reptostandby: sp_reptostandby customer,'all'
3. Set the RepAgent parameter send warm standby xacts to true: sp_config_rep_agent customer,'send warm standby xacts', 'true'
4. Create the database repdef/subscription:

create database replication definition db_repdef_customer
with primary at DS.customer
replicate DDL
replicate system procedures
go

create subscription db_sub_customer
for database replication definition db_repdef_customer
with primary at DS.customer
with replicate at DS2.customer1
without materialization
go

(Note: DS.customer and DS2.customer1 are ASE-to-ASE replication.)

After following the above steps to create the database repdef/subscription, I hit a lot of permission issues: my replication ID cannot perform INSERT/UPDATE/DELETE operations on the tables for which I have not yet set up table replication. Checking further on these tables in my 'customer' database (for example, doing insert/update/delete operations manually on tables without a table repdef), I realised that data replication is working for all the tables under the 'customer' database whether or not I set up table replication. Is this normal? Are there any steps I have missed? Please help.
Tittus (33 rep)
Mar 13, 2018, 03:00 AM • Last activity: Aug 10, 2025, 05:00 AM
0 votes
1 answers
977 views
Slow performance of query with $and & $or operators
My query with `$and`, `$or` operators is performing poorly. Even though I've built a compound index on all three fields that are specified in my query criteria, the query does a complete collection scan. I have tried both simple find and aggregation and the Winning plan for both is COLLSCAN. ### Sim...
My query with $and and $or operators is performing poorly. Even though I've built a compound index on all three fields specified in my query criteria, the query does a complete collection scan. I have tried both a simple find and an aggregation, and the winning plan for both is COLLSCAN.

### Simple find

Query:
db.test.find({ $or: [ {id1: "900004"}, {relatedid: "900004"}], $and: [ { isValid: "Y" } ] } )
Execution stats:
{
    "queryPlanner" : {
        "plannerVersion" : 1,
        "namespace" : "XXX",
        "indexFilterSet" : false,
        "parsedQuery" : {
            "$and" : [ 
                {
                    "$or" : [ 
                        {
                            "Id1" : {
                                "$eq" : "900004"
                            }
                        }, 
                        {
                            "relatedID" : {
                                "$eq" : "900004"
                            }
                        }
                    ]
                }, 
                {
                    "isValid" : {
                        "$eq" : "Y"
                    }
                }
            ]
        },
        "winningPlan" : {
            "stage" : "COLLSCAN",
            "filter" : {
                "$and" : [ 
                    {
                        "$or" : [ 
                            {
                                "Id1" : {
                                    "$eq" : "900004"
                                }
                            }, 
                            {
                                "relatedID" : {
                                    "$eq" : "900004"
                                }
                            }
                        ]
                    }, 
                    {
                        "isValid" : {
                            "$eq" : "Y"
                        }
                    }
                ]
            },
            "direction" : "forward"
        },
        "rejectedPlans" : []
    },
    "executionStats" : {
        "executionSuccess" : true,
        "nReturned" : 1,
        "executionTimeMillis" : 61,
        "totalKeysExamined" : 0,
        "totalDocsExamined" : 100006,
        "executionStages" : {
            "stage" : "COLLSCAN",
            "filter" : {
                "$and" : [ 
                    {
                        "$or" : [ 
                            {
                                "Id1" : {
                                    "$eq" : "900004"
                                }
                            }, 
                            {
                                "relatedID" : {
                                    "$eq" : "900004"
                                }
                            }
                        ]
                    }, 
                    {
                        "isValid" : {
                            "$eq" : "Y"
                        }
                    }
                ]
            },
            "nReturned" : 1,
            "executionTimeMillisEstimate" : 60,
            "works" : 100008,
            "advanced" : 1,
            "needTime" : 100006,
            "needYield" : 0,
            "saveState" : 781,
            "restoreState" : 781,
            "isEOF" : 1,
            "invalidates" : 0,
            "direction" : "forward",
            "docsExamined" : 100006
        }
    },
    "serverInfo" : {
        "host" : "XXXX",
        "port" : ,
        "version" : "4.0.9",
        "gitVersion" : "XXX"
    },
    "ok" : 1.0
}
### Aggregation

Query:
db.test.aggregate( [{ "$match":{ "$and": [ {"isValid": "Y"}, { "$or": [ {"Id1": "900004"}, {"relatedID": "900004"} ]}] }} ] )
Execution Stats:
{
    "stages" : [ 
        {
            "$cursor" : {
                "query" : {
                    "$and" : [ 
                        {
                            "isValid" : "Y"
                        }, 
                        {
                            "$or" : [ 
                                {
                                    "Id1" : "900004"
                                }, 
                                {
                                    "relatedID" : "900004"
                                }
                            ]
                        }
                    ]
                },
                "queryPlanner" : {
                    "plannerVersion" : 1,
                    "namespace" : "XXXXX",
                    "indexFilterSet" : false,
                    "parsedQuery" : {
                        "$and" : [ 
                            {
                                "$or" : [ 
                                    {
                                        "Id1" : {
                                            "$eq" : "900004"
                                        }
                                    }, 
                                    {
                                        "relatedID" : {
                                            "$eq" : "900004"
                                        }
                                    }
                                ]
                            }, 
                            {
                                "isValid" : {
                                    "$eq" : "Y"
                                }
                            }
                        ]
                    },
                    "winningPlan" : {
                        "stage" : "COLLSCAN",
                        "filter" : {
                            "$and" : [ 
                                {
                                    "$or" : [ 
                                        {
                                            "Id" : {
                                                "$eq" : "900004"
                                            }
                                        }, 
                                        {
                                            "relatedID" : {
                                                "$eq" : "900004"
                                            }
                                        }
                                    ]
                                }, 
                                {
                                    "isValid" : {
                                        "$eq" : "Y"
                                    }
                                }
                            ]
                        },
                        "direction" : "forward"
                    },
                    "rejectedPlans" : []
                }
            }
        }
    ],
    "ok" : 1.0
}
How can I get a fast query for my results?
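For reference, a single compound index generally cannot satisfy a top-level $or: MongoDB plans each $or branch separately, so each branch needs an index it can use on its own. A hedged sketch using the field names from the question (note the excerpts mix id1/Id1 and relatedid/relatedID, so the casing must match the real documents):

db.test.createIndex({ isValid: 1, Id1: 1 });
db.test.createIndex({ isValid: 1, relatedID: 1 });

// with one index per $or branch, this query shape can avoid the COLLSCAN
db.test.find({
    isValid: "Y",
    $or: [ { Id1: "900004" }, { relatedID: "900004" } ]
}).explain("executionStats");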
Anban (45 rep)
Jun 6, 2019, 01:01 PM • Last activity: Aug 10, 2025, 04:04 AM
Showing page 9 of 20 total questions