Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
528
views
Insert into table select - Replication lag - Percona Server 5.6
I have two MySQL instances (Percona server 5.6.31) in Master-Slave replication setup.
I have set the below configuration:
1. ROW based replication is set.
2. Transaction Isolation is set to read-committed.
Today, there was an insert running on my master. It was of the form
INSERT INTO table1 SELECT * FROM table2
Table 2 has 200 million rows.
Although only 5,000 rows were inserted, the operation lasted for 30 minutes, and I observed replication lag while it ran.
I have LOAD DATA INFILE disabled due to security concerns, so I can't use that approach either.
I came across an article from Percona which says this is resolved in versions above 5.1 when ROW-based replication is used.
1. How can I keep my slave in sync with the master under such conditions?
2. Why does the slave lag here?
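For what it's worth, the workaround I'm considering is copying in primary-key chunks so each small transaction replicates on its own (a sketch, assuming table2 has an auto-increment id column; all names are illustrative):

```sql
DELIMITER //
CREATE PROCEDURE copy_in_batches()
BEGIN
    DECLARE last_id BIGINT DEFAULT 0;
    DECLARE max_id  BIGINT;
    SELECT COALESCE(MAX(id), 0) INTO max_id FROM table2;
    WHILE last_id < max_id DO
        -- Each batch commits separately, so the slave never has to
        -- replay one giant multi-minute event group.
        INSERT INTO table1
        SELECT * FROM table2
        WHERE id > last_id AND id <= last_id + 1000;
        SET last_id = last_id + 1000;
    END WHILE;
END //
DELIMITER ;
```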
tesla747
(1910 rep)
Dec 28, 2016, 04:08 PM
• Last activity: Aug 6, 2025, 12:02 AM
1
votes
1
answers
18472
views
Log-in as service account in SQL Server Management Studio
A stored procedure that runs a CTE in production is executed under a service account. The procedure returns an empty result, and when I tried to check it in SSMS it turned out my domain account has no execute permission in production.
I tried logging into SQL Server Management Studio with the service account but ran into some problems. The first was resolved by following this link - https://dba.stackexchange.com/questions/173785/the-certificate-chain-was-issued-by-an-authority-that-is-not-trusted
However, once past that, the login fails as if no user name was provided.
The question is: can I log in with a service account in SQL Server Management Studio?
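The way I'm attempting it is the usual runas approach, since SSMS has no "run as a different Windows user" box in the login dialog (a sketch; the domain, account name, and SSMS path are placeholders):

```bat
:: Launch SSMS under the service account's Windows credentials
runas /user:MYDOMAIN\svc_account "C:\Program Files (x86)\Microsoft SQL Server Management Studio 18\Common7\IDE\Ssms.exe"

:: If the account belongs to another (untrusted) domain, /netonly
:: presents the credentials to remote servers only:
runas /netonly /user:MYDOMAIN\svc_account "C:\Program Files (x86)\Microsoft SQL Server Management Studio 18\Common7\IDE\Ssms.exe"
```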

Patrick
(111 rep)
Oct 21, 2018, 07:27 AM
• Last activity: Aug 5, 2025, 11:07 PM
1
votes
1
answers
1398
views
Redshift - is it possible to run query that select from external table - only if table exist
What I am trying to do is something like this:
select
case when (/* here should come a sql statement that checks if the table exists */)
then
(select count(*) from my_external_table)
else
0
end
I know that theoretically I could use the SVV_ALL views, but sometimes that makes the query slow.
As an example of what I am looking for, this Postgres query would be great for me, but Redshift does not support the to_regclass function:
select
case when exists (select * from (select to_regclass('my_table') as rg) t where rg is not null)
then
(select count(*) from my_table)
else
0
end
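For external tables specifically, one thing I've been considering is checking the dedicated catalog view instead of SVV_ALL (a sketch; the schema and table names are mine), though I'm not sure it avoids the slowness:

```sql
SELECT COUNT(*) > 0 AS table_exists
FROM svv_external_tables
WHERE schemaname = 'my_external_schema'
  AND tablename  = 'my_external_table';
```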
Eyal leshem
(145 rep)
Apr 27, 2022, 08:57 AM
• Last activity: Aug 5, 2025, 10:04 PM
0
votes
1
answers
139
views
How do I add 2 missing nodes to MariaDB Galera Cluster using IST and not SST
I inherited an issue where a MariaDB Galera Cluster has only one out of 3 nodes working.
Node1: up and running in production and is accepting reads and writes
Node2: fresh install; it just needs to sync with Node1
Node3: fresh install; it just needs to sync with Node1
If I run service mysql start on Node2/Node3, the donor's wsrep_local_state_comment changes from Synced to Donor/Desynced and an SST is triggered, but that means a lot of downtime.
I have a 2TB of databases and it's going to take about 5-7 hours for a newly added node to sync, and I can't afford the downtime (as an idea, during the day I have 1GB/minute of binlogs and at night about 4GB/hour of binlogs).
I take a db backup every night with Percona XtraBackup and I know how to restore it on a fresh server.
I have the Galera cache (gcache.size) set at 32GB.
Is there a way of restoring the other 2 nodes without triggering an SST?
I was thinking of restoring the backup of Node1 to Node2; maybe that would trigger an IST.
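Concretely, the direction I have in mind is the restore-then-IST procedure (a sketch, assuming the nightly XtraBackup is taken with --galera-info; the uuid/seqno values are illustrative):

```bash
# On Node2, after restoring Node1's backup into the datadir:
# xtrabackup_galera_info records the cluster state the backup matches.
cat /var/lib/mysql/xtrabackup_galera_info
# e.g. d38587ce-246c-11e5-bcce-6bbd0831cc0f:1352215

# Recreate grastate.dat with that uuid/seqno so the node can request
# an IST (only possible while the seqno is still in Node1's gcache).
cat > /var/lib/mysql/grastate.dat <<'EOF'
# GALERA saved state
version: 2.1
uuid:    d38587ce-246c-11e5-bcce-6bbd0831cc0f
seqno:   1352215
safe_to_bootstrap: 0
EOF
chown -R mysql:mysql /var/lib/mysql
service mysql start   # should attempt IST instead of SST
```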
netrangermike
(1 rep)
Feb 21, 2023, 12:16 AM
• Last activity: Aug 5, 2025, 09:06 PM
1
votes
1
answers
1768
views
Multiple rows vs Single row with JSON
What is the optimal way to do this?
- On the TV show page I need the progress of every episode in that show
- On the homepage I need the continue_watching list
- On the episode page I need that episode's progress
- Some TV shows have more than 100 episodes
Ex:
| USER_ID | SHOW_ID | LAST_SEEN_ID | LAST_SEEN_TIME |
| --- | --- | --- | --- |
| 10 | 5 | 12 | --- |

| USER_ID | EPISODE_ID | PROGRESS |
| --- | --- | --- |
| 10 | 10 | 15 |
| 10 | 11 | 20 |
| 10 | 12 | 95 |
OR
| USER_ID | SHOW_ID | PROGRESS | LAST_SEEN_ID | LAST_SEEN_TIME |
| --- | --- | --- | --- | --- |
| 10 | 5 | {"10":15, "11":20, "12": 95} | 12 | --- |
In PostgreSQL I can get an individual episode's progress with:
SELECT progress->'10' as progress...
I think the first method is best, but it creates many more rows. Could that make the DB slower?
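For reference, this is roughly how I'd implement the first option in PostgreSQL (a sketch; the table and column names are mine):

```sql
-- One row per (user, episode); the upsert keeps writes cheap even for
-- shows with 100+ episodes, and the PK makes the lookups indexed.
CREATE TABLE episode_progress (
    user_id    bigint NOT NULL,
    episode_id bigint NOT NULL,
    progress   int    NOT NULL,
    PRIMARY KEY (user_id, episode_id)
);

INSERT INTO episode_progress (user_id, episode_id, progress)
VALUES (10, 12, 95)
ON CONFLICT (user_id, episode_id)
DO UPDATE SET progress = EXCLUDED.progress;
```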
Angel Vega
(11 rep)
Apr 1, 2020, 11:10 PM
• Last activity: Aug 5, 2025, 08:04 PM
0
votes
1
answers
1371
views
mysql (mariadb) won't start after power outage (MacOS)
/var/log/system.log says (over and over)
Oct 13 19:34:01 Data-Server com.apple.xpc.launchd[1] (com.mariadb.server[2128]): Service exited with abnormal code: 1
Oct 13 19:34:01 Data-Server com.apple.xpc.launchd[1] (com.mariadb.server): Service only ran for 0 seconds. Pushing respawn out by 10 seconds.
/usr/local/var/mysql/Data-Server.local.err says (once recently, repeated a number of times well before the crash)
2020-10-13 2:44:25 20019181 [Warning] Aborted connection 20019181 to db: 'EcoReality' user: 'root' host: '10.1.2.2' (Got timeout reading communication packets)
The first thing I did was unload the launchctl entry, to keep it from constantly restarting:
# launchctl unload /Library/LaunchDaemons/com.mariadb.server.plist
Then I tried invoking mysqld manually:
# sudo /usr/local/bin/mysqld -u mysql
2020-10-13 20:46:09 0 [Note] /usr/local/bin/mysqld (mysqld 10.4.6-MariaDB) starting as process 2364 ...
2020-10-13 20:46:09 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-10-13 20:46:09 0 [Note] InnoDB: Uses event mutexes
2020-10-13 20:46:09 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2020-10-13 20:46:09 0 [Note] InnoDB: Number of pools: 1
2020-10-13 20:46:09 0 [Note] InnoDB: Using SSE2 crc32 instructions
2020-10-13 20:46:09 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2020-10-13 20:46:09 0 [Note] InnoDB: Completed initialization of buffer pool
2020-10-13 20:46:09 0 [ERROR] InnoDB: Invalid log block checksum. block: 81635496 checkpoint no: 2609153 expected: 296846624 found: 3735928559
2020-10-13 20:46:09 0 [ERROR] InnoDB: Missing MLOG_CHECKPOINT at 41797373564 between the checkpoint 41797373564 and the end 41797373440.
2020-10-13 20:46:09 0 [ERROR] InnoDB: Plugin initialization aborted with error Generic error
2020-10-13 20:46:09 0 [Note] InnoDB: Starting shutdown...
2020-10-13 20:46:09 0 [ERROR] Plugin 'InnoDB' init function returned error.
2020-10-13 20:46:09 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2020-10-13 20:46:09 0 [Note] Plugin 'FEEDBACK' is disabled.
2020-10-13 20:46:09 0 [Note] CONNECT: Version 1.06.0009 January 27, 2019
2020-10-13 20:46:09 0 [ERROR] Unknown/unsupported storage engine: InnoDB
2020-10-13 20:46:09 0 [ERROR] Aborting
So now I'm a bit stumped by the lack of diagnostic messages. Is there any way to coax more info out of mysqld when it goes down? Or should I just start incrementing innodb_force_recovery until something interesting happens?
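If I do go the innodb_force_recovery route, this is the step-at-a-time setup I have in mind (a sketch; the config path varies by install):

```ini
# my.cnf (e.g. /usr/local/etc/my.cnf on a Homebrew-style install)
[mysqld]
# Start at 1 and increase only if startup still fails; the docs warn
# that values of 4 and above can permanently lose data. Once the
# server starts, dump everything immediately:
#   mysqldump -u root -p --all-databases > all.sql
innodb_force_recovery = 1
```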
Jan Steinman
(191 rep)
Oct 14, 2020, 04:01 AM
• Last activity: Aug 5, 2025, 07:01 PM
0
votes
0
answers
10
views
AWS Aurora MySQL table archive running slow for one table
I'm working on archiving a bunch of tables in an environment where archiving was never done, with some data going back 10 years. I've written a script to perform the work, which loops through the primary key (an auto-increment bigint) *n* rows at a time, calling a procedure to archive the data to a separate table and then deleting that same data from the main table. I'm doing it in small batches to prevent any long-term locking of the main tables. It also sleeps between loop iterations. Batch size and sleep time are configurable via a config file. On my test system, for this table, I'm using a batch size of 1000 and a sleep time of 0. Instance class is r7g.4xl.
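Roughly, one iteration of the driver loop looks like this (an illustrative sketch only; the real script reads the batch size and sleep time from the config file):

```sql
-- One batch: archive ids in [pkmin, pkmin + 1000), then delete them.
CALL archive_inbox_item_proc(@pkmin, @pkmax, @pkmin + 1000);
DELETE FROM inbox.inbox_item WHERE id >= @pkmin AND id < @pkmin + 1000;
SET @pkmin = @pkmin + 1000;   -- then sleep and repeat until @pkmax
```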
Most tables archive at several thousand rows per second, which is acceptable. But I have one table whose archiving is going very slowly; averaging under 550 rows/sec. There is no other activity in the database (there are other archives running against other DBs in the cluster at the same time, but killing them didn't improve the performance of this one). Here's the table schema (the schema for the archive table is identical):
CREATE TABLE `inbox_item` (
  `id` bigint NOT NULL AUTO_INCREMENT,
  `user_id` bigint NOT NULL,
  `template_id` bigint NOT NULL,
  `url` varchar(4000) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
  `created_at` datetime NOT NULL,
  `hash` varchar(128) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
  `parameters` varchar(4000) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `hash_uidx` (`hash`),
  KEY `template_id_idx` (`template_id`),
  KEY `user_id_created_at_idx` (`user_id`, `created_at`)
) ENGINE=InnoDB AUTO_INCREMENT=442872663 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
Note that while there are two large varchars, the total actual data width is under 300 bytes. Here's the procedure that's being called:
CREATE DEFINER=`root`@`%` PROCEDURE `archive_inbox_item_proc`(IN pkmin bigint, IN pkmax bigint, IN querymax bigint)
begin
    declare exit handler for sqlexception
    begin
        get diagnostics condition 1
            @err = MYSQL_ERRNO, @msg = MESSAGE_TEXT;
        select -1;
        select concat('Error ', cast(@err as char), ': ', @msg) 'Error';
        rollback;
    end;
    start transaction;
    insert ignore into `inbox`.`inbox_item_archive`
        select arctable.* from `inbox`.`inbox_item` as arctable
        where created_at = pkmin and arctable.`id` = pkmin
          and arctable.`id` < querymax and arctable.`id` <= pkmax;
    select row_count();
    commit;
end
pkmin is always the actual minimum pkey value. There are no foreign keys or triggers referencing the table. Here's the table status:
Name: inbox_item
Engine: InnoDB
Version: 10
Row_format: Dynamic
Rows: 299879061
Avg_row_length: 243
Data_length: 72988737536
Max_data_length: 0
Index_length: 126937300992
Data_free: 45770342400
Auto_increment: 442872663
Create_time: 2025-03-28 06:15:36
Update_time: 2025-08-05 18:04:55
Check_time: NULL
Collation: utf8mb4_unicode_ci
Checksum: NULL
Create_options:
Comment:
Any ideas on what's causing this to run so slowly relative to other tables in other databases?
Swechsler
(153 rep)
Aug 5, 2025, 06:05 PM
0
votes
1
answers
1705
views
MySQL cursor always exits out of loop
The cursor query and the select value query return rows if I run them directly in mysql, but inside the cursor the loop always exits immediately.
Anything wrong here?
I've added "BEFORE LOOP", "EXIT" and "IN LOOP" markers so it prints where it is, but it always prints BEFORE LOOP and then ends with EXIT.
DELIMITER //
CREATE PROCEDURE getTotal()
BEGIN
DECLARE HOSTID INTEGER;
DECLARE cITEMID INT;
declare finished bool default false;
DECLARE Total INT;
declare cur1 cursor for SELECT itemid FROM items WHERE hostid = 10579;
declare continue handler for not found set finished = true;
open cur1;
loop_1: loop
fetch cur1 into cITEMID;
SELECT "BEFORE LOOP";
if finished then
SELECT "EXIT";
leave loop_1;
end if;
SELECT "IN LOOP";
-- Test query
SELECT value from history_uint WHERE itemid = cITEMID ORDER BY itemid DESC LIMIT 1;
-- Final select query will look like this.
-- SET @Total := @Total + (SELECT value from history_uint WHERE itemid = cITEMID ORDER BY itemid DESC LIMIT 1);
-- SELECT @Total;
end loop;
close cur1;
END //
DELIMITER ;
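For comparison, here's a minimal loop shape with deliberately prefixed variable names (a sketch). I suspect the DECLAREd HOSTID variable is shadowing the items.hostid column: inside a stored program, MySQL resolves an identifier that matches a local variable to the variable, so WHERE hostid = 10579 becomes WHERE NULL = 10579, the cursor returns no rows, and the first FETCH fires the NOT FOUND handler:

```sql
DELIMITER //
CREATE PROCEDURE getTotalSketch()
BEGIN
    DECLARE v_itemid INT;
    DECLARE v_done BOOL DEFAULT FALSE;
    -- v_ prefixes ensure no identifier collides with a column name
    DECLARE cur1 CURSOR FOR SELECT itemid FROM items WHERE hostid = 10579;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET v_done = TRUE;

    OPEN cur1;
    loop_1: LOOP
        FETCH cur1 INTO v_itemid;
        IF v_done THEN
            LEAVE loop_1;
        END IF;
        SELECT v_itemid;   -- placeholder for the per-row work
    END LOOP;
    CLOSE cur1;
END //
DELIMITER ;
```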
Queries:
SELECT itemid FROM items WHERE hostid = 10579;
| itemid |
| --- |
| 12345 |
| 12346 |
| 12347 |
SELECT value from history_uint WHERE itemid = 12345 ORDER BY itemid DESC LIMIT 1;
| value |
| --- |
| 1 |
SELECT * from history_uint;
| itemid | value | clock (unixtimestamp) |
| --- | --- | --- |
| 12345 | 13 | 4364564654654 |
| 12346 | 1 | 4364564654657 |
| 12347 | 16 | 4364564654654 |
| 12345 | 13 | 4364564654756 |
| 12346 | 2 | 4364564654753 |
| 12347 | 15 | 4364564654756 |
Note: The clock column value is just made up.
R0bert2
(121 rep)
Apr 1, 2020, 07:06 PM
• Last activity: Aug 5, 2025, 06:00 PM
8
votes
1
answers
814
views
Cursoring over sys.databases skips databases
I know this question has been asked several times and I know the solution, but I am trying to understand the underlying cause of the problem:
I have the following code to perform database backups.
DECLARE @Filename VARCHAR(256)
DECLARE @FileDate VARCHAR(15)
DECLARE @Path VARCHAR(50)
DECLARE @Name VARCHAR(50)
-- specify database backup directory
SET @Path = '\MyPath'
-- specify filename date
SELECT @FileDate = CONVERT(VARCHAR(20), GETDATE(), 112) + '_' + REPLACE(CONVERT(VARCHAR(20), GETDATE(), 108),':','')
DECLARE db_cursor CURSOR FOR
SELECT [name]
FROM master.sys.databases
WHERE [name] NOT IN ('master', 'msdb', 'model', 'tempdb')
AND [state_desc] = 'ONLINE'
OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @Name
WHILE @@FETCH_STATUS = 0
BEGIN
SET @Filename = @Path + @Name + '_Full_Backup_' + @FileDate + '.bak'
BACKUP DATABASE @Name
TO DISK = @Filename
WITH CHECKSUM,
COMPRESSION
FETCH NEXT FROM db_cursor INTO @Name
END
CLOSE db_cursor
DEALLOCATE db_cursor
Sometimes, only some databases are backed up, suggesting that the cursor is not iterating over all of the rows returned or that the query itself is not returning the names of all the databases it should be.
I am trying to understand *why* this happens. I know that the fix is to use a STATIC cursor, which suggests that the results of the underlying query
SELECT [name]
FROM master.sys.databases
WHERE [name] NOT IN ('master', 'msdb', 'model', 'tempdb')
AND [state_desc] = 'ONLINE'
are changing, but I can't see what would change (no database names change, and the error log doesn't suggest that any database state has changed).
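For reference, the fix I mentioned; STATIC materializes the result set into tempdb when the cursor is opened, so concurrent changes can't affect the iteration:

```sql
DECLARE db_cursor CURSOR LOCAL STATIC FOR
SELECT [name]
FROM master.sys.databases
WHERE [name] NOT IN ('master', 'msdb', 'model', 'tempdb')
AND [state_desc] = 'ONLINE'
```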
SE1986
(2172 rep)
Mar 9, 2020, 05:06 PM
• Last activity: Aug 5, 2025, 05:12 PM
0
votes
1
answers
2038
views
PostgreSQL get json key name after comparing values of keys
I have a table with a json column named "demographic". I want to compare keys (predefined by me) by their values, and the result should be the name of the winning key, not its value; I will later save it into another column's rows.
The json column whose values I want to compare looks like this:
{
"he":{
"he_vakiy":29384,
"he_naiset":14803,
"he_miehet":14581,
"he_kika":1485,
"he_0_2":850,
"he_3_6"
}
}
I want to compare e.g. "he_naiset" and "he_miehet", and the one with the highest value should yield the name of the key.
I've tried something like this:
select greatest(demographics->'he'->>'he_miehet',demographics->'he'->>'he_naiset') as greatest
from demographics;
This compares the values and returns the highest one, but I don't know how to get the name of the key.
I also tried something like this:
with cte as
(
select *
, genre.key as genre
, row_number() over (partition by id order by value desc) as ord
from base.maakunta
cross
join lateral json_each_text(base.maakunta.demographic->'he') genre
)
select genre, value,ord
from cte
where ord = 1
;
But this one compares all the keys inside the JSON.
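A variant of that query restricted to just the two keys I care about would look like this (a sketch against my table; DISTINCT ON keeps only the top row per id):

```sql
SELECT DISTINCT ON (id)
       id,
       genre.key             AS winner,
       genre.value::numeric  AS val
FROM base.maakunta
CROSS JOIN LATERAL json_each_text(demographic -> 'he') AS genre
WHERE genre.key IN ('he_naiset', 'he_miehet')
ORDER BY id, genre.value::numeric DESC;
```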
eko
(101 rep)
Mar 20, 2022, 02:48 PM
• Last activity: Aug 5, 2025, 05:03 PM
0
votes
0
answers
12
views
SQL Server Estimates don't use AVG_RANGE_ROWS for Uniqueidentifer Parameter
I'm trying to debug a very weird query row estimation.
The query is very simple. I have a table OrderItems that contains, for each order (column OrderId), the items of the order.
SELECT count(*)
FROM orders.OrderItem
WHERE OrderId = '5a7e53c4-fc70-f011-8dca-000d3a3aa5e1'
According to the statistics from IX_OrderItem_FK_OrderId (a normal unfiltered foreign-key index: CREATE INDEX IX_OrderItem_FK_OrderId ON orders.OrderItem(OrderId)), the density is 1.2620972E-06 with 7423048 rows, so about ~9.3 items per order (if we ignore the items with OrderId = NULL; including them there are even fewer).
The statistics were created with FULLSCAN and are only slightly out of date (around ~0.2% new rows since the last recompute).
| Name | Updated | Rows | Rows Sampled | Steps | Density | Average key length | String Index | Filter Expression | Unfiltered Rows | Persisted Sample Percent |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| IX_OrderItem_FK_OrderId | Aug 3 2025 4:36PM | 7423048 | 7423048 | 198 | 0.1649756 |26.443027 | "NO " | NULL | 7423048 | 100 |
| All density | Average Length | Columns |
| --- | --- | --- |
| 1.2620972E-06 | 10.443027 | OrderId |
| 1.3471555E-07 | 26.443027 | OrderId, Id |
The query plan, however, estimates that the query returns 205.496 rows, while in reality there are 0 results, because the orderId doesn't exist.
Detailed Query Plan:
https://www.brentozar.com/pastetheplan/?id=hVKYNLmXSU
It probably uses the histogram to come up with the estimate. The value should fall into the following bucket with RANGE_HI_KEY = 'a39932d8-aa2c-f011-8b3d-000d3a440098', but then the estimate should be 6.87 according to AVG_RANGE_ROWS. It somehow looks like it uses the EQ_ROWS of the previous bucket (though 205 might also just be coincidence).
| RANGE_HI_KEY | RANGE_ROWS | EQ_ROWS | DISTINCT_RANGE_ROWS | AVG_RANGE_ROWS |
| --- | --- | --- | --- | --- |
| 9d2e2bea-aa6e-f011-8dca-000d3a3aa5e1 | 12889 | 205 | 2412 | 5.343698 |
| a39932d8-aa2c-f011-8b3d-000d3a440098 | 21923 | 107 | 3191 | 6.8702602 |
OPTION(RECOMPILE) does not help.
Can somebody explain how SQL Server (in particular Azure SQL) is coming up with that number?
- Does it really think that the parameter is close enough to the bucket start, and just takes the EQ_ROWS value even though the AVG_RANGE_ROWS is a lot smaller?
- Does it not understand the parameter because it's defined as VARCHAR? If I replace it with DECLARE @OrderId UNIQUEIDENTIFIER = '5a7e...'; WHERE OrderId = @OrderId, the estimate drops to 6. But if that's the reason, where does the estimate of 205 come from?
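One more experiment that might isolate the conversion question (a sketch): cast the literal explicitly so no VARCHAR is involved at all:

```sql
SELECT count(*)
FROM orders.OrderItem
WHERE OrderId = CONVERT(uniqueidentifier, '5a7e53c4-fc70-f011-8dca-000d3a3aa5e1');
```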
Jakube
(101 rep)
Aug 5, 2025, 04:53 PM
0
votes
0
answers
19
views
Source and Replica in a MariaDB 'pair' do not start exchanging data
On Linux Slackware PCs, I have a 'Source' database on one PC, and a 'Replica' on the other.
On the Source end (11.4.7-MariaDB) I've programmed ID=1.
On the Replica end (10.11.6-MariaDB), I set ID=11 and set the Source IP.
I've started them, and they seem to like each other: SHOW REPLICA STATUS reports everything as OK, no errors, comms OK, etc.
BUT when I create a new table on the Source, nothing happens on the Replica - no new table is created.
And after I manually create the table on the Replica and insert a record on the Source, no data transfer happens either. No pointers are incremented; seemingly no data is transferred. And no errors appear in the logs. Here are the status reports for Source and Replica.
In despair I consulted AI, and after several sessions the AI was in the same state as I was... repeatedly asking me to insert new records at the Source and check at the other end, perhaps hoping something would repair itself, and suggesting extra complications such as adding SSH, GTIDs and other optional features.
What can I do to get the show on the road? I've reinstalled MariaDB on the Replica. I'm somewhat reluctant to change the DB on the Source, as it's in use. I've checked permissions on files and directories... The last session ended at 04:30 this morning.
**source** status:
show master status\G
*************************** 1. row ***************************
File: mysql-bin.000020
Position: 328
Binlog_Do_DB: homeorptemp, farma1, test_replication
Binlog_Ignore_DB:
1 row in set (0.000 sec)
**replica** status:
show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.102
Master_User: my_repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000020
Read_Master_Log_Pos: 328
Relay_Log_File: mysql-relay-bin.000007
Relay_Log_Pos: 627
Relay_Master_Log_File: mysql-bin.000020
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 328
Relay_Log_Space: 1235
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
Master_SSL_Crl:
Master_SSL_Crlpath:
Using_Gtid: No
Gtid_IO_Pos:
Replicate_Do_Domain_Ids:
Replicate_Ignore_Domain_Ids:
Parallel_Mode: optimistic
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
Slave_DDL_Groups: 0
Slave_Non_Transactional_Groups: 0
Slave_Transactional_Groups: 0
Replicate_Rewrite_DB:
1 row in set (0.001 sec)
**EDIT**
my.cnf at the replica:
[client]
[mysqld]
port = 3306
socket = /var/run/mysql/mysql.sock
skip-external-locking
bind-address = 0.0.0.0
key_buffer_size = 256M
max_allowed_packet = 32M
table_open_cache = 1024
sort_buffer_size = 2M
net_buffer_length = 8K
read_buffer_size = 512K
read_rnd_buffer_size = 1M
skip-networking = OFF
server-id = 11
log-bin = mysql-bin
binlog_format = mixed
innodb_buffer_pool_size = 512M
innodb_log_file_size = 256M
read_only = OFF
log_warnings = 1
log_slave_updates = ON
[mysqldump]
quick
max_allowed_packet = 32M
[mysql]
no-auto-rehash
[myisamchk]
show processlist;
+----+-------------+-----------+------+-----------+-------+--------------------------------------------------------+------------------+----------+
| Id | User | Host | db | Command | Time | State | Info | Progress |
+----+-------------+-----------+------+-----------+-------+--------------------------------------------------------+------------------+----------+
| 5 | system user | | NULL | Slave_IO | 17725 | Waiting for master to send event | NULL | 0.000 |
| 6 | system user | | NULL | Slave_SQL | 17724 | Slave has read all relay log; waiting for more updates | NULL | 0.000 |
| 8 | root | localhost | NULL | Query | 0 | starting | show processlist | 0.000 |
+----+-------------+-----------+------+-----------+-------+--------------------------------------------------------+------------------+----------+
3 rows in set (0.000 sec)
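One thing I still want to rule out, since Binlog_Do_DB is set on the source: with statement or mixed binlog_format, that filter matches the session's *default* database, not the database of the table being written. A quick test (sketch):

```sql
-- On the source: make a filtered DB the default before writing.
USE test_replication;
CREATE TABLE t1 (id INT PRIMARY KEY);
INSERT INTO t1 VALUES (1);
SHOW MASTER STATUS;  -- Position should advance if the write was binlogged

-- By contrast, a write issued from another default database, e.g.
--   USE mysql; INSERT INTO test_replication.t1 VALUES (2);
-- is NOT binlogged under binlog-do-db filtering in statement mode.
```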
jcoppens
(101 rep)
Aug 5, 2025, 03:32 PM
• Last activity: Aug 5, 2025, 04:18 PM
2
votes
1
answers
1860
views
MariaDB Inner join is slow
Here is my SQL (with EXPLAIN):
EXPLAIN SELECT * FROM GBI gbi INNER JOIN VF_KRED kred ON gbi.vendor = kred.vendor;
Giving below the outcome:
[screenshot: EXPLAIN output]
Now, the select SQL takes ~10 seconds to produce the outcome.
Below are the indexes on the tables:
- VF_KRED table:
[screenshots: index definitions]
Akshay Lokur
(121 rep)
Jun 25, 2019, 04:06 PM
• Last activity: Aug 5, 2025, 04:05 PM
1
votes
0
answers
8
views
Usage of **point** data type
I'm running a MySQL (8.x) database on Windows IIS on AWS EC2.
The application is for the aviation industry, so I have used the **point** data type for location fields, assuming that I would store latitude/longitude in it.
I don't anticipate doing geometry calculations; all they need to know is the distance between two points, and they will probably trust some official source for that.
I am unaware of how this data type actually works. What advantage is there to retaining **point**?
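For the distance part, MySQL 8 does provide ST_Distance_Sphere, which treats a POINT as (longitude, latitude) and returns meters; a quick sketch with illustrative coordinates:

```sql
SELECT ST_Distance_Sphere(
         POINT(-0.4543, 51.4700),    -- London Heathrow (lon, lat)
         POINT(-73.7781, 40.6413)    -- New York JFK (lon, lat)
       ) AS distance_m;
```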
Rohit Gupta
(2126 rep)
Aug 5, 2025, 03:20 PM
-1
votes
0
answers
11
views
Adding a 3rd replica to an AlwaysOn cluster
Customer wants to save money. They have 2 separate on-prem SQL Server AlwaysOn clusters, we already upgraded one of them (two nodes) to SQL Server 2022.
Now for the other 2-node cluster... What if we do not build a new cluster for it, but instead add a 3rd node to the existing cluster to better utilize the resources? Unfortunately, we are not allowed to simply put these databases on the first cluster under a separate AlwaysOn AG. So as a compromise, we would run these databases on a 3rd node, to give a bit more separation, and use the other two nodes as secondaries. This way the customer only needs to pay for one more node, not two.
What do you think about this idea? Would it impact and slow down the databases in the 1st AG due to the added AlwaysOn redo queue?
Balazs Kiss
(21 rep)
Aug 5, 2025, 03:10 PM
0
votes
2
answers
140
views
Asking Suggestions regarding mysql optimization
I'm running a dedicated server with around 20-25 sites, almost all of them WordPress installations, over a cPanel setup.
For a while it's mostly MySQL eating most of the CPU and driving high load:
mysql 0 61.69(cpu) 6.92(ram) /usr/sbin/mysqld
**Server config is**
Uptime 70 days
Operating System CentOS Linux 7 (Core) x64
File Handles 14560 of 6511967
Processes 342
CPU Model AMD Ryzen 5 3600 6-Core Processor
Ram 64GB
I am trying to improve this and came across MySQLTuner; here is what it has to say after running MySQL with the performance schema on for 2+ days.
This isn't exactly my forte, so the my.cnf is just a blend of what I tried with an older MySQLTuner suggestion, but I've heard the tool's output still requires a human touch.
Would appreciate some help in optimizing the settings.
>> MySQLTuner 1.7.19 - Major Hayden
>> Bug reports, feature requests, and downloads at http://mysqltuner.com/
>> Run with '--help' for additional options and output filtering
[--] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 10.3.27-MariaDB-log
[OK] Operating on 64-bit architecture
-------- Log file Recommendations ------------------------------------------------------------------
[OK] Log file /var/lib/mysql/server.name.here.err exists
[--] Log file: /var/lib/mysql/server.name.here.err(4M)
[OK] Log file /var/lib/mysql/server.name.here.err is readable.
[OK] Log file /var/lib/mysql/server.name.here.err is not empty
[OK] Log file /var/lib/mysql/server.name.here.err is smaller than 32 Mb
[!!] /var/lib/mysql/server.name.here.err contains 31430 warning(s).
[!!] /var/lib/mysql/server.name.here.err contains 23132 error(s).
[--] 60 start(s) detected in /var/lib/mysql/server.name.here.err
[--] 1) 2020-12-07 7:35:16 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 2) 2020-12-01 14:35:35 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 3) 2020-11-30 16:10:14 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 4) 2020-11-30 16:07:53 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 5) 2020-11-01 1:57:12 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 6) 2020-10-10 19:28:45 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 7) 2020-10-10 19:28:32 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 8) 2020-09-30 3:36:14 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 9) 2020-09-28 17:58:16 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 10) 2020-09-25 18:38:33 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 51 shutdown(s) detected in /var/lib/mysql/server.name.here.err
[--] 1) 2020-12-07 7:35:07 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 2) 2020-12-01 14:35:27 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 3) 2020-11-30 16:09:53 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 4) 2020-11-30 16:07:33 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 5) 2020-11-01 1:57:09 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 6) 2020-10-10 19:28:39 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 7) 2020-10-10 19:28:26 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 8) 2020-09-30 3:34:34 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 9) 2020-09-28 17:56:38 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 10) 2020-09-25 18:36:55 0 [Note] /usr/sbin/mysqld: Shutdown complete
-------- Storage Engine Statistics -----------------------------------------------------------------
[--] Status: +Aria +CSV +InnoDB +MEMORY +MRG_MyISAM +MyISAM +PERFORMANCE_SCHEMA +SEQUENCE
[--] Data in MyISAM tables: 2.1G (Tables: 1387)
[--] Data in InnoDB tables: 3.2G (Tables: 2207)
[--] Data in MEMORY tables: 586.4K (Tables: 3)
[OK] Total fragmented tables: 0
-------- Analysis Performance Metrics --------------------------------------------------------------
[--] innodb_stats_on_metadata: OFF
[OK] No stat updates during querying INFORMATION_SCHEMA.
-------- Security Recommendations ------------------------------------------------------------------
[OK] There are no anonymous accounts for any database users
[OK] All database users have passwords assigned
[!!] There is no basic password file list!
-------- CVE Security Recommendations --------------------------------------------------------------
[--] Skipped due to --cvefile option undefined
-------- Performance Metrics -----------------------------------------------------------------------
[--] Up for: 2d 5h 57m 24s (167M q [862.613 qps], 2M conn, TX: 21024G, RX: 379G)
[--] Reads / Writes: 98% / 2%
[--] Binary logging is disabled
[--] Physical Memory : 62.8G
[--] Max MySQL memory : 43.5G
[--] Other process memory: 0B
[--] Total buffers: 5.0G global + 260.7M per thread (151 max threads)
[--] P_S Max memory usage: 104M
[--] Galera GCache Max memory usage: 0B
[OK] Maximum reached memory usage: 26.5G (42.17% of installed RAM)
[OK] Maximum possible memory usage: 43.5G (69.35% of installed RAM)
[OK] Overall possible memory usage with other process is compatible with memory available
[OK] Slow queries: 0% (324K/167M)
[OK] Highest usage of available connections: 55% (84/151)
[OK] Aborted connections: 0.00% (55/2878495)
[!!] name resolution is active : a reverse name resolution is made for each new connection and can reduce performance
[!!] Query cache may be disabled by default due to mutex contention.
[OK] Query cache efficiency: 40.4% (102M cached / 254M selects)
[!!] Query cache prunes per day: 3479297
[OK] Sorts requiring temporary tables: 0% (11K temp sorts / 6M sorts)
[!!] Joins performed without indexes: 12813
[!!] Temporary tables created on disk: 66% (2M on disk / 3M total)
[OK] Thread cache hit rate: 98% (40K created / 2M connections)
[OK] Table cache hit rate: 95% (4K open / 4K opened)
[OK] table_definition_cache(2097152) is upper than number of tables(3862)
[OK] Open file limit used: 7% (2K/40K)
[OK] Table locks acquired immediately: 99% (13M immediate / 13M locks)
-------- Performance schema ------------------------------------------------------------------------
[--] Memory used by P_S: 104.0M
[--] Sys schema is installed.
-------- ThreadPool Metrics ------------------------------------------------------------------------
[--] ThreadPool stat is enabled.
[--] Thread Pool Size: 12 thread(s).
[--] Using default value is good enough for your version (10.3.27-MariaDB-log)
-------- MyISAM Metrics ----------------------------------------------------------------------------
[!!] Key buffer used: 24.9% (182M used / 734M cache)
[OK] Key buffer size / total MyISAM indexes: 700.0M/460.2M
[OK] Read Key buffer hit rate: 100.0% (1B cached / 41K reads)
[!!] Write Key buffer hit rate: 69.5% (804K cached / 558K writes)
-------- InnoDB Metrics ----------------------------------------------------------------------------
[--] InnoDB is enabled.
[--] InnoDB Thread Concurrency: 0
[OK] InnoDB File per table is activated
[OK] InnoDB buffer pool / data size: 4.0G/3.2G
[!!] Ratio InnoDB log file size / InnoDB Buffer pool size (0.78125 %): 16.0M * 2/4.0G should be equal to 25%
[!!] InnoDB buffer pool instances: 8
[--] Number of InnoDB Buffer Pool Chunk : 32 for 8 Buffer Pool Instance(s)
[OK] Innodb_buffer_pool_size aligned with Innodb_buffer_pool_chunk_size & Innodb_buffer_pool_instances
[OK] InnoDB Read buffer efficiency: 100.00% (102924116296 hits/ 102924220151 total)
[!!] InnoDB Write Log efficiency: 73.75% (4679039 hits/ 6344450 total)
[OK] InnoDB log waits: 0.00% (0 waits / 1665411 writes)
-------- AriaDB Metrics ----------------------------------------------------------------------------
[--] AriaDB is enabled.
[OK] Aria pagecache size / total Aria indexes: 128.0M/1B
[OK] Aria pagecache hit rate: 99.1% (277M cached / 2M reads)
-------- TokuDB Metrics ----------------------------------------------------------------------------
[--] TokuDB is disabled.
-------- XtraDB Metrics ----------------------------------------------------------------------------
[--] XtraDB is disabled.
-------- Galera Metrics ----------------------------------------------------------------------------
[--] Galera is disabled.
-------- Replication Metrics -----------------------------------------------------------------------
[--] Galera Synchronous replication: NO
[--] No replication slave(s) for this server.
[--] Binlog format: MIXED
[--] XA support enabled: ON
[--] Semi synchronous replication Master: OFF
[--] Semi synchronous replication Slave: OFF
[--] This is a standalone server
-------- Recommendations ---------------------------------------------------------------------------
General recommendations:
Control warning line(s) into /var/lib/mysql/server.name.here.err file
Control error line(s) into /var/lib/mysql/server.name.here.err file
Configure your accounts with ip or subnets only, then update your configuration with skip-name-resolve=1
Increasing the query_cache size over 128M may reduce performance
We will suggest raising the 'join_buffer_size' until JOINs not using indexes are found.
See https://dev.mysql.com/doc/internals/en/join-buffer-size.html
(specially the conclusions at the bottom of the page).
When making adjustments, make tmp_table_size/max_heap_table_size equal
Reduce your SELECT DISTINCT queries which have no LIMIT clause
Before changing innodb_log_file_size and/or innodb_log_files_in_group read this: https://bit.ly/2TcGgtU
Variables to adjust:
query_cache_size (=0)
query_cache_type (=0)
query_cache_size (> 128M) [see warning above]
join_buffer_size (> 2.0M, or always use indexes with JOINs)
tmp_table_size (> 32M)
max_heap_table_size (> 32M)
innodb_log_file_size should be (=512M) if possible, so InnoDB total log files size equals to 25% of buffer pool size.
innodb_buffer_pool_instances(=4)
**And here is what my.cnf currently has**
**Note:** This has now been edited based on the suggestions by Wilson and Rick. I will save, restart MySQL, and get back to you with updates.
[client-server]
#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
[mysqld]
# Logging and performance measurement here
log-error=/var/lib/mysql/ryzen.dogestream.com.err
# for enhanced slow query log
log_slow_verbosity=query_plan,explain
performance-schema=1
max_allowed_packet=268435456
max_heap_table_size=32M
tmp_table_size=32M
open_files_limit=40000
# Buffer sizes
join_buffer_size=2M
key_buffer_size=700M
sort_buffer_size=2M
# InnoDB stuff goes here
innodb_file_per_table=1
innodb_buffer_pool_size=15G
innodb_log_file_size=16M
local-infile=0
# from 1024 to conserve 90% of CPU cycles used for function
innodb_lru_scan_depth=100
# should always match table_open_cache
innodb_open_files=9000
# Query stuff goes here
# from 128M to conserve RAM for more useful purposes
query_cache_size=0
# from 1 for OFF to avoid 3 million+ query cache prunes per day
query_cache_type=0
# from 2M to conserve RAM
query_cache_limit=0
slow_query_log=1
slow_query_log_file=mysql-slow.log
long_query_time=0.1
# It says cache it is here
table_definition_cache=-1
# from 3000 to reduce tables opened_tables count
table_open_cache=9000
# from 16 to accomodate your 84 concurrent users
thread_cache_size=256
Sawada Tsunayoshi
(1 rep)
Dec 9, 2020, 12:52 PM
• Last activity: Aug 5, 2025, 03:06 PM
0
votes
2
answers
139
views
Unable to Add New Database in CloudPanel VPS - Access denied for user 'root'@'localhost'
**Problem**
I'm facing an issue with my VPS running CloudPanel where I cannot add new databases due to MySQL root access being denied.
**Environment**
- CloudPanel on VPS
- MySQL Server
- Hosting Provider: Hostinger
**Error Message**
Access denied for user 'root'@'localhost'
**Steps I've Tried**
**1. Attempted direct MySQL login:**
mysql -u root -p
Result: Access denied error, tried multiple password combinations including:
- Root user password
- Hostinger password
- Default passwords like (12345, admin, root, etc..)
- No password
**2. Attempted MySQL safe mode:**
sudo systemctl stop mysql
sudo mysqld_safe --skip-grant-tables &
Result: Error message:
MySQL cannot start in safe mode because the required directory for the UNIX socket file does not exist.
**3. Attempted to kill running processes:**
sudo kill -9
Result: No process found with specified PID
**4. Created missing directory and set permissions:**
sudo mkdir -p /var/run/mysqld
sudo chown -R mysql:mysql /var/run/mysqld
**5. Restarted MySQL in safe mode and verified process:**
sudo mysqld_safe --skip-grant-tables &
ps aux | grep mysql
Result: Process found running
**6. Attempted passwordless root login:**
mysql -u root
Result: Still encountering access denied error
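(For completeness, this is the reset I'd run if a mysql -u root session ever opens under --skip-grant-tables; in that mode the grant tables have to be reloaded before account-management statements work. The password is a placeholder.)

```sql
-- Inside the skip-grant-tables session:
FLUSH PRIVILEGES;   -- reload grant tables so ALTER USER is allowed
ALTER USER 'root'@'localhost' IDENTIFIED BY 'NewStrongPasswordHere';
FLUSH PRIVILEGES;
```

I've also seen it mentioned that CloudPanel keeps the MySQL root credentials in /root/.my.cnf, which I plan to check.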
**Additional Context**
- Hostinger support was unable to resolve the issue
- CloudPanel tutorials show database creation through UI without command-line intervention
- Suspect issue might be related to CloudPanel's default MySQL user restrictions
*I have reached out to Hostinger support, but they were unable to provide a solution beyond what I have already attempted. Additionally, all the YouTube tutorials on setting up and creating a database in CloudPanel show a straightforward process with no access issues, as everything can be done directly from the UI without requiring any command-line input.
I suspect it might be related to CloudPanel’s default MySQL user restrictions or authentication settings, but I’m not sure how to proceed. Has anyone encountered a similar issue or knows how to regain root access?*
Joe
(1 rep)
Feb 2, 2025, 10:31 AM
• Last activity: Aug 5, 2025, 02:10 PM
0
votes
1
answers
66
views
Cannot import a database dump on Postgres 13.14+ while it loads fine in Postgres 13.13
I'm experiencing a problem with loading a PostgreSQL backup file (SQL format).
The SQL file has a function that is defined after another function where it's used. PostgreSQL 13.13 can handle such a backup file, while PostgreSQL 13.14 fails to load it:
ERROR: function public.label_id_constant() does not exist
LINE 1: SELECT public.uuid_increment($1, public.label_id_constant()...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
QUERY: SELECT public.uuid_increment($1, public.label_id_constant())
CONTEXT: SQL function "label_id" during inlining
I've double-checked whether there is a SET check_function_bodies = false; in the dump file. I also looked for a way to disable the inlining during the dump load, but still no success.
I've distilled the dump file into a minimal reproducible example and attached it as a script to this ticket.
If anybody has experienced anything similar, please help.
#!/bin/env bash
DUMP_FILE=$(mktemp)
trap "rm -f $DUMP_FILE" EXIT
cat - > "$DUMP_FILE" &2
docker run -d \
--name postgres-13.13 \
-e POSTGRES_HOST_AUTH_METHOD=trust \
-p 5432:5432 \
postgres:13.13
echo "Waiting for postgres to start" >&2
while ! docker exec postgres-13.13 pg_isready -h localhost -U postgres -q; do
sleep 1
done
cat "$DUMP_FILE" | psql -h localhost -U postgres -v ON_ERROR_STOP=1 --port 5432 -e -1 && echo "******** Success ********" || echo "******** Failure ********"
docker stop postgres-13.13
docker rm postgres-13.13
echo "Testing with postgres 13.14" >&2
docker run -d \
--name postgres-13.14 \
-e POSTGRES_HOST_AUTH_METHOD=trust \
-p 5432:5432 \
postgres:13.14
echo "Waiting for postgres to start" >&2
while ! docker exec postgres-13.14 pg_isready -h localhost -U postgres -q; do
sleep 1
done
cat "$DUMP_FILE" | psql -h localhost -U postgres -v ON_ERROR_STOP=1 --port 5432 -e -1 && echo "******** Success ********" || echo "******** Failure ********"
docker stop postgres-13.14
docker rm postgres-13.14
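For readers, the shape of the failing part of the dump is roughly this (reconstructed from the error message; the function bodies are placeholders):

```sql
CREATE FUNCTION public.uuid_increment(uuid, bigint) RETURNS uuid
    LANGUAGE sql IMMUTABLE AS $$ SELECT $1 $$;          -- placeholder body

-- Created (successfully, thanks to check_function_bodies = false)
-- before its dependency exists:
CREATE FUNCTION public.label_id(uuid) RETURNS uuid
    LANGUAGE sql IMMUTABLE
    AS $$ SELECT public.uuid_increment($1, public.label_id_constant()) $$;

CREATE FUNCTION public.label_id_constant() RETURNS bigint
    LANGUAGE sql IMMUTABLE AS $$ SELECT 1 $$;           -- placeholder body

-- 13.14 errors out when something earlier in the restore *calls*
-- label_id() and the planner tries to inline it at that point.
```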
--------
UPD:
What I've already tried:
Setting SET jit = off;
doesn't fix the problem.
UPD2:
1. I tried exporting our database using pg_dump, instead of the CloudSQL export API. It gave me the same error.
2. I tried to export the database, load it to 13.13, then export it from 13.13 and load it to 13.14, but the error was the same again.
---
UPD: I successfully migrated the DB with the following script:
https://paste.ubuntu.com/p/kgGGQzNcgp/
After migrating to PostgreSQL 17.5, the issue persists: if I dump the DB with pg_dump, I cannot load it, with the same error.
Renat
(101 rep)
Jan 21, 2025, 08:12 AM
• Last activity: Aug 5, 2025, 01:46 PM
1
votes
2
answers
144
views
How to concatenate multiple JSON arrays into a single array in MariaDB?
I'm working with a MariaDB database where I have a table with two columns: id and values, where values contains JSON arrays. Here is a simplified example of the data:
| id | values |
|----|-------------|
| 1 | "[1, 2, 3]" |
| 1 | "[5]" |
| 2 | "[4]" |
I want to group the rows by id and concatenate the JSON arrays in the values column into a single array, like this:
| id | values |
|----|---------------|
| 1 | "[1, 2, 3, 5]"|
| 2 | "[4]" |
I tried using JSON_ARRAYAGG, but it creates an array of arrays, and I can't figure out how to merge the arrays into one. How can I achieve this in MariaDB?
SELECT id, JSON_ARRAYAGG(`values`)
FROM REC
GROUP BY id
Bonus question: How to output only unique values?
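One workaround I've been sketching, since JSON_ARRAYAGG nests the arrays: strip each array's brackets and re-wrap the concatenation (assumes every values cell is a well-formed JSON array):

```sql
SELECT id,
       CONCAT('[',
              GROUP_CONCAT(
                  NULLIF(TRIM(SUBSTRING(`values`, 2, CHAR_LENGTH(`values`) - 2)), '')
                  SEPARATOR ', '),
              ']') AS merged
FROM REC
GROUP BY id;
-- "[1, 2, 3]" and "[5]" -> "[1, 2, 3, 5]"; empty arrays drop out via NULLIF
```

For the bonus question, GROUP_CONCAT(DISTINCT ...) here would only dedupe identical whole arrays; deduping individual elements needs the arrays split into rows first (e.g. with JSON_TABLE on MariaDB 10.6+).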
Emax
(111 rep)
Dec 16, 2024, 01:48 PM
• Last activity: Aug 5, 2025, 01:04 PM
21
votes
4
answers
5275
views
SQL Server cardinality hint
Is there a way to 'inject' a cardinality estimate into the SQL Server optimizer (any version)? I.e., something similar to Oracle's cardinality hint.
My motivation is driven by the article How Good Are Query Optimizers, Really? [1], where they test the influence of the cardinality estimator on the selection of a bad plan. It would therefore be sufficient if I could force SQL Server to 'estimate' the cardinalities precisely for complex queries.
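To illustrate the kind of control I mean: the closest community trick I'm aware of in T-SQL injects a row-count guess through a variable TOP plus OPTIMIZE FOR (a sketch; the table and predicate are made up):

```sql
-- The optimizer plans for @rows = 5000 (the OPTIMIZE FOR value), while
-- at run time @rows is large enough to let every row through.
DECLARE @rows bigint = 9223372036854775807;

SELECT *
FROM (
    SELECT TOP (@rows) o.*
    FROM dbo.Orders AS o
    WHERE o.Status = 3
) AS guessed
OPTION (OPTIMIZE FOR (@rows = 5000));
```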
---
[1] Leis, Viktor, et al. "How good are query optimizers, really?" Proceedings of the VLDB Endowment 9.3 (2015): 204-215.
Radim Bača
(233 rep)
Mar 31, 2017, 07:17 AM
• Last activity: Aug 5, 2025, 12:16 PM